Configuration Guide 1.2
This section describes the basic configuration of the main subsystems of a typical cluster architecture: users, storage and networking. It also provides some hints and pointers to make the supported virtualizers work with OpenNebula.
OpenNebula needs a cluster-wide user account that will run the OpenNebula daemons in the cluster front-end and interact with the virtualizer to create and control the VMs in the cluster nodes. Typically the <oneadmin> user will access the cluster nodes using SSH, so the sshd daemon on the nodes needs to be properly configured.
The following steps summarize the configuration needed:

Create a common user account, <oneadmin>, in the cluster front-end and the cluster nodes. This can be done using NIS. This account should have the following environment variables set:
Variable | Value |
---|---|
ONE_LOCATION | If OpenNebula was installed in self-contained mode, this variable must be set to <destination_folder>. Otherwise, in system wide mode, this variable must be unset. More info on installation modes can be found here. |
ONE_XMLRPC | http://localhost:2633/RPC2 |
PATH | $ONE_LOCATION/bin:$PATH if self-contained. Otherwise this is not needed. |
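For instance, these variables can be set in the shell start-up file of <oneadmin>. A minimal sketch, assuming a self-contained installation in /srv/cloud/one (the path is illustrative):

# Fragment of <oneadmin>'s ~/.bashrc; if OpenNebula was installed
# system wide, leave ONE_LOCATION unset instead
export ONE_LOCATION=/srv/cloud/one
export ONE_XMLRPC=http://localhost:2633/RPC2
export PATH=$ONE_LOCATION/bin:$PATH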
Configure the hosts to trust each other in the SSH scope. This means that all the hosts have to trust the public key of the common user (the one administrator). The key pair needs to be generated in the cluster front-end and the public key then copied to all the cluster hosts, so they trust the cluster front-end without prompting for a password.

You can also use ssh-agent to keep the private key encrypted in the cluster front-end. A good tutorial on how to do this easily can be found here.
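For example, with OpenSSH this can be set up as follows (run as <oneadmin> on the cluster front-end; the node name is illustrative):

$> ssh-keygen -t rsa       # generate the key pair; use a passphrase with ssh-agent, or leave it empty
$> ssh-copy-id node01      # copy the public key; repeat for every cluster node
$> ssh node01 hostname     # should work without prompting for a password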
OpenNebula does not require any specific storage arrangement to work. We'll just assume that there is a valid image repository (virtually any storage medium that holds the images of the VMs) accessible by the cluster front-end, and that basic file system operations, like cp or ln, can be initiated from the front-end using the SSH protocol (see above).
The image management module of OpenNebula has been designed to be highly flexible and easy to integrate with VM image management tools, so it can take advantage of your current cluster set-up (SANs, shared file systems…) and tools. For more details on configuring and fine-tuning the storage for your cluster, please check the Storage Guide.
There are no special requirements for networking, apart from those derived from the previous configuration steps. However, to make effective use of your VM deployments you'll probably need to make one or more physical networks accessible to them. Typically this is achieved with ethernet bridging; there are several guides that explain how to deal with this configuration, for example the networking howto in the Xen documentation.
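As a quick illustration, a bridge can be created by hand with the Linux bridge-utils (run as root on a cluster node; the interface and bridge names are illustrative, and the Xen network scripts can set this up for you):

$> brctl addbr br0        # create the bridge
$> brctl addif br0 eth0   # attach the physical interface to it
$> ifconfig br0 up        # bring the bridge up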
For more details on using virtual networks, please check the Networking Guide.
The virtualization technology installed in your cluster nodes has to be configured so that the <oneadmin> user can start, control and monitor VMs. This usually means the execution of commands with root privileges, or making <oneadmin> part of a given group. Please take a look at the virtualization guide that fits your site:
Additionally, OpenNebula can deploy some of your VMs in a remote cloud; please take a look at the Amazon EC2 Configuration Guide for more information.
Running OpenNebula comprises the execution of two components: the OpenNebula daemon (with its associated drivers) and a scheduler. In this section you'll learn how to configure and start these services.
The OpenNebula daemon oned manages the cluster nodes, virtual networks and virtual machines. oned uses specialized, separate processes to interface with the underlying technologies (e.g. virtualizers, node information systems or storage facilities).
The configuration file for the daemon is called oned.conf and it is placed inside the $ONE_LOCATION/etc directory.
Note: If OpenNebula was installed in system wide mode this directory becomes /etc/one/. The rest of this guide refers to the $ONE_LOCATION paths (corresponding to self-contained mode) and omits the equivalent system wide locations. More information on installation modes can be found here.
In this file you'll configure the following sections:
* HOST_MONITORING_INTERVAL: time in seconds between host monitoring cycles.
* VM_POLLING_INTERVAL: time in seconds between virtual machine monitoring cycles.
* VM_DIR: remote path to store the VM images; it should be shared between all the cluster nodes to perform live migrations. This path will be used for all the cluster nodes.
* MAC_PREFIX: default MAC prefix to generate virtual network MAC addresses.
* NETWORK_SIZE: default size for virtual networks.
* PORT: port where oned will listen for XML-RPC calls.
* DEBUG_LEVEL: sets the level of verbosity of the $ONE_LOCATION/var/oned.log log file. Possible values are:
DEBUG_LEVEL | Meaning |
---|---|
0 | ERROR |
1 | WARNING |
2 | INFO |
3 | DEBUG |
Example of this section:
#-------------------------------------------------------------------------------
# Daemon configuration attributes
#-------------------------------------------------------------------------------

HOST_MONITORING_INTERVAL = 10
VM_POLLING_INTERVAL      = 10
VM_DIR                   = /local/images
MAC_PREFIX               = "00:01"
NETWORK_SIZE             = 254
PORT                     = 2633
DEBUG_LEVEL              = 3
The information drivers are used to gather information from the cluster nodes, and they depend on the virtualizer you are using. You can define more than one information driver, but make sure each has a different name. To define one, the following needs to be set:
* name: name to identify this driver.
* executable: path of the driver executable; it can be an absolute path or relative to $ONE_LOCATION/lib/mads (or /usr/lib/one/mads/ in a system wide installation).
* arguments: for the driver executable; it can be an absolute path or relative to $ONE_LOCATION/etc (or /etc/one/ in a system wide installation).
* default: file with default values for the driver; it can be an absolute path or relative to $ONE_LOCATION/etc (or /etc/one/ in a system wide installation).

For more information on configuring the information and monitoring system, and hints to extend it, please check the information driver configuration guide.
Sample configuration:
#-------------------------------------------------------------------------------
# Information Driver Configuration
#-------------------------------------------------------------------------------

IM_MAD = [
    name       = "im_xen",
    executable = "bin/one_im_ssh",
    arguments  = "im_xen/im_xen.conf",
    default    = "im_xen/im_xen.conf" ]
The transfer drivers are used to transfer, clone, remove and create VM images. You will be using one transfer driver or another depending on the storage layout of your cluster. You can define more than one transfer driver (e.g. if you have different configurations for several cluster nodes), but make sure each has a different name. To define one, the following needs to be set:
* name: name to identify this driver.
* executable: path of the driver executable; it can be an absolute path or relative to $ONE_LOCATION/lib/mads (or /usr/lib/one/mads/ in a system wide installation).
* arguments: for the driver executable; it can be an absolute path or relative to $ONE_LOCATION/etc (or /etc/one/ in a system wide installation).
* default: file with default values for the driver; it can be an absolute path or relative to $ONE_LOCATION/etc (or /etc/one/ in a system wide installation).

For more information on configuring different storage alternatives, please check the storage configuration guide.
Sample configuration:
#-------------------------------------------------------------------------------
# Transfer Driver Configuration
#-------------------------------------------------------------------------------

TM_MAD = [
    name       = "tm_ssh",
    executable = "one_tm",
    arguments  = "tm_ssh/tm_ssh.conf",
    default    = "tm_ssh/tm_ssh.conf" ]
The virtualization drivers are used to create, control and monitor VMs on the cluster nodes. You can define more than one virtualization driver (e.g. if you have different virtualizers in several cluster nodes), but make sure each has a different name. To define one, the following needs to be set:
* name: name to identify this driver.
* executable: path of the driver executable; it can be an absolute path or relative to $ONE_LOCATION/lib/mads (or /usr/lib/one/mads/ in a system wide installation).
* default: file with default values for the driver; it can be an absolute path or relative to $ONE_LOCATION/etc (or /etc/one/ in a system wide installation).
* type: the virtualization technology driven by this driver (xen in the sample below).

For more information on configuring and setting up the virtualizer, please check the guide that suits you:
Sample configuration:
#-------------------------------------------------------------------------------
# Virtualization Driver Configuration
#-------------------------------------------------------------------------------

VM_MAD = [
    name       = "vmm_xen",
    executable = "one_vmm_xen",
    default    = "vmm_xen/vmm_xen.conf",
    type       = "xen" ]
Drivers are separate processes that communicate with the OpenNebula core using an internal ASCII protocol. Before loading a driver, two run-commands (RC) files are sourced to optionally obtain environment variables and perform tasks described as shell scripts.
These two RC files are:
* $ONE_LOCATION/etc/mad/defaultrc: global environment and tasks for all the drivers. Variables are defined in the fashion ATTRIBUTE=VALUE and, upon read, exported to the environment. These attributes are set for all the drivers, and can be superseded by the same attribute present in a driver's own specific RC file.
* The specific RC file of each driver, placed with the rest of the driver configuration under $ONE_LOCATION/etc (or /etc/one/ in a system wide installation).

Common attributes suitable to be set for all the drivers in defaultrc are:
# Debug for MADs.
# If set, MADs will generate cores and logs in $ONE_LOCATION/var.
# Possible values are [0=ERROR, 1=DEBUG]
ONE_MAD_DEBUG=

# Nice Priority to run the drivers
PRIORITY=19
The only out-of-the-box default value set in this file is PRIORITY which, as seen above, is set to 19.
The Scheduler module is in charge of the assignment between pending Virtual Machines and known Hosts. OpenNebula's architecture defines this module as a separate process that can be started independently of oned. The OpenNebula scheduling framework is designed in a generic way, so it is highly modifiable. OpenNebula comes with a match-making scheduler (mm_sched) that implements the Rank Scheduling Policy.
The goal of this policy is to prioritize the resources more suitable for the VM. First, the hosts that do not meet the VM requirements (see the ''REQUIREMENTS'' attribute) or do not have enough resources (available CPU and memory) to run the VM are filtered out. The ''RANK'' expression is then evaluated on this list to sort it. The resources with a higher rank are used first to allocate VMs.
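For instance, a VM template could combine both attributes as in the sketch below: the requirement keeps only hosts with more than 1 GB of free memory (assuming FREEMEMORY is reported in KB), and the rank prefers the hosts with the most free CPU. The exact attribute names available depend on what your information driver reports.

REQUIREMENTS = "FREEMEMORY > 1048576"
RANK = "FREECPU"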
You can start oned without the scheduling process to operate it in a VM management mode. Starting or migrating VMs in this case is performed explicitly using the onevm command.
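For instance, to place a pending VM on a specific host by hand (the IDs are illustrative; see the User Guide for the complete onevm reference):

$> onevm deploy 0 1    # deploy VM 0 on host 1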
The Haizea lease manager can also be used as a scheduling module in OpenNebula. Haizea allows OpenNebula to support advance reservation of resources and queuing of best effort requests (more generally, it allows you to lease your resources as VMs, with a variety of lease terms). The Haizea documentation includes a guide on how to use OpenNebula and Haizea to manage VMs on a cluster.
Once the environment is correctly set up, we have to let OpenNebula know which resources it can use. In other words, we have to set up the OpenNebula cluster.
But first things first: we need to start the OpenNebula daemon and the scheduler. You can start both by issuing the following command as the <oneadmin> user:
$> one start
Now we should have two processes running:

* oned: the core process; it attends the CLI requests, manages the pools and all the components.
* mm_sched: the scheduler process, in charge of the VM-host matching.
If these processes are running, you should see content in their log files (log files are placed in /var/log/one/ if you installed OpenNebula system wide):

* $ONE_LOCATION/var/oned.log
* $ONE_LOCATION/var/sched.log
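A quick way to check, assuming a self-contained installation:

$> ps aux | grep -E "oned|mm_sched"   # both processes should show up
$> tail $ONE_LOCATION/var/oned.log    # should contain recent start-up messages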
Once we have made sure that both processes are running, let's set up the cluster for OpenNebula. The first thing to do is add hosts, by means of the onehost command (see the User Guide for more information). Add all the cluster nodes you want to be used by OpenNebula; you have to specify the information, virtualization and storage driver to be used with each host, like this:
$> onehost create host1.mydomain.org im_xen vmm_xen tm_ssh
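Once added, you can verify the hosts and watch them being monitored with the same command family:

$> onehost list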
There are different log files corresponding to different OpenNebula components:

* ONE Daemon: $ONE_LOCATION/var/oned.log. All problems related with DB access, component communication, command line invocations and so on will be stated here, along with all the information that is not specific to the Scheduler or the VM life-cycle. Its verbosity is regulated by DEBUG_LEVEL in $ONE_LOCATION/etc/oned.conf.
* Scheduler: scheduling information is collected in $ONE_LOCATION/var/sched.log.
* Virtual Machines: information specific to a given VM is stored in $ONE_LOCATION/var/<VID>/ (or /var/lib/one/<VID> in a system wide installation), where <VID> is the ID of the VM. You can find the following information in it:
  * The log file of the VM (these files are in /var/log/one if OpenNebula was installed system wide).
  * deployment.<EXECUTION>, where <EXECUTION> is the sequence number in the execution history of the VM (deployment.0 for the first host, deployment.1 for the second and so on).
  * transfer.<EXECUTION>, where <EXECUTION> is the sequence number in the execution history of the VM (transfer.0 for the first host, transfer.1 for the second and so on).
  * An images/ sub-directory, where the images are in the form disk.<id>.
  * The checkpoint file, if the VM has been checkpointed.
* Drivers: if ONE_MAD_DEBUG is set in the corresponding RC file (see above), each driver logs to $ONE_LOCATION/var/name-of-the-driver-executable.log.