Storage Guide 1.2
One key aspect of virtualization management is handling Virtual Machine images. There are a number of different configurations possible, depending on the user's needs. For example, a user may want all her images placed on a separate repository with only HTTP access, or images may be shared through NFS between all the hosts. OpenNebula aims to be flexible enough to support as many different image storage configurations as possible.
The image storage model upon which OpenNebula organizes the images uses the following concepts:

Virtual Machine Directory (<VM_DIR>/<VID>): the <VM_DIR> path is defined in the oned.conf file for the cluster. Deployment files for the hypervisor to boot the machine, checkpoints, and images being used or saved, all of them specific to that VM, are placed in this directory. Note that <VM_DIR> should be shared for most hypervisors to be able to perform live migrations.

Any given VM image to be used goes through the following steps:

clone: images carry a qualifier (clone) that can mark them as to be cloned or not.
save: if the save qualifier is activated, the image will be saved for later use under $ONE_LOCATION/var/<VID>/images.
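To illustrate, a VM template's disk definition can set these qualifiers directly. The following fragment is a sketch; the source path and target device are hypothetical:

```
DISK = [
    source = "/srv/one-images/ubuntu.img",
    target = "sda",
    clone  = "yes",
    save   = "no" ]
```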
Note: If OpenNebula was installed in system-wide mode, this directory becomes /var/lib/one/images. The rest of this guide refers to the $ONE_LOCATION paths (corresponding to self-contained mode) and omits the equivalent system-wide locations. More information on installation modes can be found in the Installation guide.
The storage model assumed by OpenNebula does not require any special software to be installed. The following are two cluster configuration examples supported by OpenNebula out-of-the-box. They represent the choice of either sharing <VM_DIR> among all the cluster nodes and the cluster front-end via NFS, or not sharing any folder and making the machines accessible using SSH. Please note that the Transfer Manager was built using a modular architecture, where each action is associated with a small script that can be easily tuned to fit your cluster configuration. A third choice (sharing the image repository but not the <VM_DIR>) is explained in the Customizing & Extending section.
This arrangement of the storage model assumes that <VM_DIR> is shared between all the cluster nodes and the OpenNebula server. In this case, the semantics of the clone and save actions described above are:

Cloning: if an image is clonable, it will be copied from the image repository to <VM_DIR>/<VID>/images, from where it will be used by the VM. If not, a symbolic link to the original image will be created in <VM_DIR>/<VID>/images, so effectively the VM is going to use the original image.

Saving: this only has an effect if the image is clonable; if it is not, saving comes for free, since the VM works directly on the original image. Therefore, if the image is both clonable and savable, it will be moved from <VM_DIR>/<VID>/images to $ONE_LOCATION/var/<VID>/images.
Please note that by default <VM_DIR>
is set to $ONE_LOCATION/var
.
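The two cloning cases above can be sketched in a few lines of shell. This is only an illustration of the semantics; the repository path, VID, and clone flag are hypothetical stand-ins, not the actual TM implementation:

```shell
#!/bin/sh
# Sketch of the shared-scenario clone semantics described above.
# The repository and <VM_DIR> paths are temporary stand-ins.
REPO=$(mktemp -d)          # stands in for the image repository
VM_DIR=$(mktemp -d)        # stands in for <VM_DIR> (by default $ONE_LOCATION/var)
VID=42
echo "image data" > "$REPO/ubuntu.img"

CLONE=no                   # the clone qualifier from the VM template
mkdir -p "$VM_DIR/$VID/images"
if [ "$CLONE" = "yes" ]; then
    # clonable: the VM works on a private copy of the image
    cp "$REPO/ubuntu.img" "$VM_DIR/$VID/images/disk.0"
else
    # not clonable: only a symbolic link, so the VM uses the original image
    ln -s "$REPO/ubuntu.img" "$VM_DIR/$VID/images/disk.0"
fi
```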
In this scenario, <VM_DIR> is not shared between the cluster front-end and the nodes. Note that <VM_DIR> can still be shared between the cluster nodes to perform live migrations. The semantics of clone and save are:

Cloning: this attribute is ignored in this configuration, since images will always be cloned from the image repository to <VM_DIR>/<VID>/images.

Saving: if enabled, the image will be transferred back from <VM_DIR>/<VID>/images to $ONE_LOCATION/var/<VID>/images/. If not enabled, the image will simply be erased. It is therefore the user's responsibility to reuse the image from $ONE_LOCATION/var/<VID>/images/ in subsequent uses of the VM in order to keep any configuration done or data stored in it.
The Transfer Manager is configured in the $ONE_LOCATION/etc/oned.conf file; see the OpenNebula Configuration guide. Being flexible, the TM is always the same program, and different configurations are achieved by changing the configuration file. This file regulates the assignment between actions, like CLONE or LN, and scripts, effectively changing the semantics of the actions understood by the TM.
TM_MAD = [ name = "tm_nfs", executable = "one_tm", arguments = "<tm-configuration-file>", default = "<default-tm-configuration-file>" ]
The current OpenNebula release contains two sets of scripts for the two scenarios described above: Shared - NFS ($ONE_LOCATION/etc/tm_nfs/tm_nfs.conf) and the Non Shared SSH TM ($ONE_LOCATION/etc/tm_ssh/tm_ssh.conf). Each TM has its own directory inside $ONE_LOCATION/etc.
Let's look at a sample line from the Shared - NFS configuration file:
... CLONE = nfs/tm_clone.sh ...
Basically, the TM here is being told that whenever it receives a clone action it should call the tm_clone.sh
script with the received parameters. For more information on modifying and extending these scripts see Customizing and Extending.
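For context, the configuration file maps every TM action to a script in the same way. The excerpt below is an assumption about what a typical tm_nfs.conf looks like; only tm_clone.sh is confirmed above, and the other script names follow the same naming pattern but may differ in your release:

```
CLONE   = nfs/tm_clone.sh
LN      = nfs/tm_ln.sh
MKSWAP  = nfs/tm_mkswap.sh
MKIMAGE = nfs/tm_mkimage.sh
DELETE  = nfs/tm_delete.sh
MV      = nfs/tm_mv.sh
```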
Note: Remember that if OpenNebula is installed in root, the configuration files are placed in /etc/one
.
To configure OpenNebula to be able to handle images with this arrangement of the storage model, add the following to $ONE_LOCATION/etc/oned.conf, so the TM knows what set of scripts it needs to use:
TM_MAD = [ name = "tm_nfs", executable = "one_tm", arguments = "tm_nfs/tm_nfs.conf", default = "tm_nfs/tm_nfs.conf" ]
To configure OpenNebula to be able to handle images with the non-shared arrangement of the storage model, add the following to $ONE_LOCATION/etc/oned.conf, so the TM knows what set of scripts it needs to use:
TM_MAD = [ name = "tm_ssh", executable = "one_tm", arguments = "tm_ssh/tm_ssh.conf", default = "tm_ssh/tm_ssh.conf" ]
By default, debug information is sent back to OpenNebula and thus written to $ONE_LOCATION/var/oned.log. In case the TM MAD crashes or misbehaves, it may be useful to activate the TM MAD specific log file, as it will receive information about exceptions produced at run time. This feature can be enabled via the TM specific rc file, placed in the TM directory in $ONE_LOCATION/etc, using the following line:
ONE_MAD_DEBUG=1
Finally, you can find useful debugging information in the $ONE_LOCATION/var/<VID>/vm.log file. In this file, each action performed by the TM and its corresponding result is logged.
Note: When OpenNebula is installed in root, log files can be found in /var/log/one/oned.log and /var/log/one/<VID>.log.
The Transfer Manager (TM) architecture is highly modular. There are high-level actions that OpenNebula asks the TM to perform, which the TM in turn translates into different tasks depending on the specific storage model configuration. The current release of OpenNebula comes with two different sets of action scripts that make up two different TMs for different storage models (see Configuration Interface). There is information available on how to build custom sets of action scripts to tackle custom configurations of the storage model.
The TM is all about moving images around and performing file system operations on (possibly) remote hosts, so all the actions receive as arguments at least one ORIGIN or one DESTINATION, or both. These have the form:
<host>:<path>
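Splitting such an argument is straightforward in shell. This is a minimal illustrative sketch using POSIX parameter expansion; the host and path values are made up:

```shell
#!/bin/sh
# Minimal sketch: splitting a <host>:<path> argument into its parts.
ARG="node01:/srv/one/var/42/images/disk.0"

HOST=${ARG%%:*}     # everything before the first ':'
DIR_PATH=${ARG#*:}  # everything after the first ':'

echo "$HOST"        # node01
echo "$DIR_PATH"    # /srv/one/var/42/images/disk.0
```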
The different actions that the TM understands are:

CLONE: clones the image from ORIGIN to DESTINATION. This can mean different things depending on the storage model configuration (see Sample Configurations).
LN: creates a symbolic link in DESTINATION that points to ORIGIN.
MKSWAP: generates a swap image in DESTINATION. The size is given in ORIGIN, in MB.
MKIMAGE: creates a disk image in DESTINATION and populates it with the files inside the ORIGIN directory.
DELETE: deletes the ORIGIN file or directory.
MV: moves ORIGIN to DESTINATION.

Action scripts must conform to some general rules to work. In particular, error messages must be written between the following markers:
ERROR MESSAGE --8<------
error message here
ERROR MESSAGE ------>8--
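An action script could emit such a message with a small helper like the following sketch. The function name error_out is illustrative, not part of OpenNebula:

```shell
#!/bin/sh
# Illustrative helper: report an error in the marker format shown above.
# Messages go to standard error so they reach oned rather than the output.
error_out() {
    echo "ERROR MESSAGE --8<------" 1>&2
    echo "$1" 1>&2
    echo "ERROR MESSAGE ------>8--" 1>&2
}

error_out "disk.0 not found"
```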
There is a helper shell script with some functions defined to perform common tasks. It is located in $ONE_LOCATION/lib/mads/tm_common.sh and can be loaded by adding this line at the start of your script:
. $ONE_LOCATION/lib/mads/tm_common.sh
Here is a description of those functions:

log: writes a message to the VM log file.
log "Creating directory $DST_DIR"

error_message: writes an error message in the format expected by oned.
error_message "File '$FILE' not found"

arg_host: extracts the host part of an ORIGIN/DESTINATION argument.
SRC_HOST=`arg_host $SRC`

arg_path: extracts the path part of an ORIGIN/DESTINATION argument.
SRC_PATH=`arg_path $SRC`

exec_and_log: executes a command and logs its result, reporting an error if the command fails.
exec_and_log "chmod g+w $DST_PATH"

timeout_exec_and_log: like exec_and_log, but takes as its first parameter the maximum number of seconds the command can run.
timeout_exec_and_log 15 "cp $SRC_PATH $DST_PATH"
Note: This script is placed in /usr/lib/one/mads
if you installed OpenNebula in root.
As an example, here is a description of how to modify the NFS scripts for a different configuration. In this configuration we will have the images directory shared, but not <VM_DIR>.
This script is responsible for creating a link in <VM_DIR> that points to the original image when the image has clone set to "off". As we are dealing with a non-shared <VM_DIR>, it has to be modified so that it creates the remote directory and also makes the link on the destination host.
#!/bin/bash

SRC=$1
DST=$2

. $ONE_LOCATION/lib/mads/tm_common.sh

SRC_PATH=`arg_path $SRC`
DST_PATH=`arg_path $DST`
DST_HOST=`arg_host $DST`

DST_DIR=`dirname $DST_PATH`

log "Creating directory $DST_DIR"
exec_and_log "ssh $DST_HOST mkdir -p $DST_DIR"

log "Link $SRC_PATH"
exec_and_log "ssh $DST_HOST ln -s $SRC_PATH $DST_PATH"
We added the mkdir command and changed the link creation to be executed on the remote machine.
Here the modifications are similar to those for LN: the commands are changed so that they are executed on the destination host.
#!/bin/bash

SRC=$1
DST=$2

. $ONE_LOCATION/lib/mads/tm_common.sh

SRC_PATH=`arg_path $SRC`
DST_PATH=`arg_path $DST`
DST_HOST=`arg_host $DST`

log "$1 $2"
log "DST: $DST_PATH"

DST_DIR=`dirname $DST_PATH`

log "Creating directory $DST_DIR"
exec_and_log "ssh $DST_HOST mkdir -p $DST_DIR"
exec_and_log "ssh $DST_HOST chmod a+w $DST_DIR"

case $SRC in
http://*)
    log "Downloading $SRC"
    exec_and_log "ssh $DST_HOST wget -O $DST_PATH $SRC"
    ;;

*)
    log "Cloning $SRC_PATH"
    exec_and_log "ssh $DST_HOST cp $SRC_PATH $DST_PATH"
    ;;
esac

exec_and_log "ssh $DST_HOST chmod a+w $DST_PATH"
Note the ssh command prepended to each of the commands. The rest of the commands follow similar modifications, executing them using ssh on the remote machine.
When you specify a disk whose path contains ":", the path is not treated in any special way and is passed to the TM action script directly. This lets us pass things like <hostname>:<path>, so you can specify an image located on a remote machine that is accessible using ssh, or customize the clone script so that it accepts different kinds of URLs. This modification should be done in the tm_clone.sh script, adding a new option in the case statement. For example, if we want to add FTP support using wget, we can add this code:
case $SRC in
ftp://*)
    log "Downloading $SRC"
    exec_and_log "wget -O $DST_PATH $SRC"
    ;;
Note that we are using the -O option of wget, which tells it where to download the file. This is needed as the file should be placed in a specific directory and also needs to be correctly named.