This Installation & Configuration Guide shows how to install and configure OpenNebula.
The ONE server machine needs to have the following software installed:
These packages are only needed if you want to rebuild the template parsers:
Most of this software is already packaged in Linux distributions. Here are the packages needed in Debian Lenny.
Provisioning hosts need to have the following installed:
The execution hosts also need some Xen features configured, in the Xen daemon configuration file (/etc/xen/xend-config.sxp):

(xend-relocation-server yes)
(xend-relocation-port 8002)
(xend-relocation-hosts-allow 'host*')
It is necessary that the ONE server and all the hosts:

- Share the same user for administration purposes (<oneadmin>). This can be done using NIS.
- Trust each other in the ssh scope. This means that all the hosts have to trust the public key of the common user (the one administrator). Public keys need to be generated in the ONE server and then copied to all the cluster hosts, so they trust the ONE server without prompting for a password.
Also, we recommend the use of ssh-agent to keep the private key encrypted in the ONE server. A good tutorial on how to do this easily can be found here.
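The key setup described above could be sketched as follows, run as the administration user on the ONE server (the hostname host1 is a placeholder):

```shell
# Generate a key pair on the ONE server (use a passphrase if you
# plan to keep it encrypted with ssh-agent).
ssh-keygen -t rsa

# Copy the public key to each cluster host so it trusts the ONE server.
ssh-copy-id oneadmin@host1

# Keep the decrypted private key available through ssh-agent.
eval `ssh-agent`
ssh-add
```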
It is also necessary to let the <oneadmin> user execute xen commands with root privileges, and to let the root user read and write xen image files. This can be done by creating a group containing <oneadmin> and root, and setting the appropriate group ownership and permissions on the xen image files. You also need to add these two lines to the sudoers file of the executing nodes, so the <oneadmin> user can execute xen commands as root (change the paths to suit your installation):

%xen ALL=(ALL) NOPASSWD: /usr/sbin/xm *
%xen ALL=(ALL) NOPASSWD: /usr/sbin/xentop *
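As a sketch of the group setup, assuming the group is called xen (matching the sudoers lines) and that the image files live under /srv/xen/images (a hypothetical path), run as root on each execution host:

```shell
# Create a group shared by the ONE administrator and root
# (group name and image path are assumptions for illustration).
groupadd xen
usermod -a -G xen oneadmin
usermod -a -G xen root

# Give the group read/write access to the xen image files.
chgrp -R xen /srv/xen/images
chmod -R g+rw /srv/xen/images
```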
The ONE server and the cluster hosts have to share directories using, for example, NFS
. The filesystem layout for the cluster has to conform to the definitions below:
- $ONE_LOCATION: Path to the ONE installation.
- $ONE_LOCATION/var: Directory containing log files and directories for the different VMs. This directory needs to be shared: exported by the ONE server and mounted by all the cluster hosts. In the ONE server it corresponds to $ONE_LOCATION/var. If this directory is mounted in the remote hosts at a point other than $ONE_LOCATION/var, you need to set the VM_RDIR variable in the ONE configuration file.
- $ONE_LOCATION/var/<VID>: Home directory for the VM with identifier <VID>. The system will create the following files in this location:
The ONE software needs to be installed on a machine that exports (at least) the $ONE_LOCATION folder using NFS. This is necessary for the checkpointing feature to work. There is a known issue regarding sqlite and NFS; please see the Release Notes for more information.
Follow these simple steps to install the ONE server:
scons OPTION=VALUE

The optional OPTION=VALUE arguments are used to set non-default paths for:
OPTION | VALUE |
---|---|
sqlite | path-to-sqlite-install |
xmlrpc | path-to-xmlrpc-install |
parsers | yes if you want to rebuild flex/bison files |
./install.sh <destination_folder>
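Putting the two steps together, a build with non-default sqlite and xmlrpc-c locations might look like the following (the library paths and the /srv/one destination are assumptions for illustration):

```shell
# Build, pointing scons at custom library locations (example paths).
scons sqlite=/usr/local/sqlite xmlrpc=/usr/local/xmlrpc-c

# Install into the chosen destination folder.
./install.sh /srv/one
```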
The configuration file is called oned.conf and is placed in the etc directory under $ONE_LOCATION, the directory where OpenNebula is installed.
In this file the following aspects of oned can be defined:

- HOST_MONITORING_INTERVAL: Time in seconds between host monitorizations.
- IM_MAD: The Information Manager driver: its name, its executable (a path relative to $ONE_LOCATION), its arguments and its owner (<oneadmin>).
- VM_POLLING_INTERVAL: Time in seconds between virtual machine monitorizations.
- VM_MAD: The Virtual Machine Manager driver: its name, its executable (a path relative to $ONE_LOCATION) and its owner (<oneadmin>).
- PORT: Port where oned will listen for xmlrpc calls.
- VM_RDIR: Mount point where $ONE_LOCATION/var is mounted in the cluster hosts, if this location differs from $ONE_LOCATION/var. In other words, if $ONE_LOCATION/var in the ONE server corresponds to /local/nebula/var, then the cluster hosts can also mount it as /local/nebula/var, in which case there is no need to set this variable. But if the mounting point in the cluster hosts differs from $ONE_LOCATION/var (say, it is /opt/nebula for example), then this VM_RDIR variable should point to that mounting point (i.e. /opt/nebula).
- The debug level of the log messages:

0 | ERROR |
---|---|
1 | WARNING |
2 | INFO |
3 | DEBUG |
An example of a complete oned.conf
is shown below.
# Time in seconds between host monitorization
HOST_MONITORING_INTERVAL=10

# Information manager configuration
IM_MAD=[name="one_im",executable="bin/one_im_ssh",arguments="etc/one_im_ssh.conf",owner="oneadmin"]

# Time in seconds between virtual machine monitorization
VM_POLLING_INTERVAL=10

# Virtual Machine Manager configuration
VM_MAD=[name="one_vmm",executable="bin/one_vmm_xen",owner="oneadmin"]

# Port where oned will listen for xmlrpc calls
PORT=2633
Drivers are separate processes that communicate with the ONE core using an internal ASCII protocol. Before loading a driver, two files are sourced to optionally set environment variables and perform tasks, written as shell scripts.
These two files are:
- $ONE_LOCATION/etc/mad/defaultrc: Global environment and tasks for all the drivers.
- A driver-specific file, such as $ONE_LOCATION/etc/mad/vmm_xenrc or $ONE_LOCATION/etc/mad/im_sshrc (described in the following sections): environment and tasks for that particular driver.

One possible use of this feature is to pass a nice priority to all the drivers, or to tailor the priority per driver.
The VMM component uses the xentop command remotely, so it is important that it knows where to find this command. The default value for XENTOP_PATH is /usr/sbin/xentop. If this does not correspond to the xentop path in your cluster hosts, you will need to edit the $ONE_LOCATION/bin/one_vmm_xen script and change its third line:
$> head -5 $ONE_LOCATION/bin/one_vmm_xen
#!/usr/bin/env ruby

XENTOP_PATH="/usr/sbin/xentop"
ONE_LOCATION=ENV["ONE_LOCATION"]
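The change can also be scripted. The following sketch rewrites the XENTOP_PATH line with sed; the target path /usr/local/sbin/xentop is just an example, and the edit is demonstrated on a throwaway file (in a real installation, point sed at $ONE_LOCATION/bin/one_vmm_xen instead):

```shell
# Create a demo file containing the default XENTOP_PATH line.
printf 'XENTOP_PATH="/usr/sbin/xentop"\n' > /tmp/one_vmm_xen_demo

# Rewrite the line in place with the new (example) path.
sed -i 's|^XENTOP_PATH=.*|XENTOP_PATH="/usr/local/sbin/xentop"|' /tmp/one_vmm_xen_demo

cat /tmp/one_vmm_xen_demo
```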
Before this driver is loaded, the $ONE_LOCATION/etc/mad/vmm_xenrc file is sourced.
The Information Manager can be configured to extract information from the cluster hosts beyond what is extracted by default. It works based on im_probes, that is, scripts or programs that monitor a certain aspect of the remote cluster host and output their findings in a NAME=VALUE fashion.
To let the IM know about the probes, we use the one_im_ssh.conf file passed as an argument to the IM_MAD, as seen in the ONE configuration. Let's look at the default one_im_ssh.conf to clarify this:
# Remote directory where scripts are written
REMOTE_DIR=/tmp/ne_im_scripts

cpuarchitecture=lib/im_probes/architecture.sh
nodename=lib/im_probes/name.sh
cpu=lib/im_probes/cpu.sh
xen=lib/im_probes/xen.rb
The REMOTE_DIR variable tells ONE where to place the probes in the remote host. ONE will then transfer the probes and run them using ssh.
The rest of the file specifies the probes, giving each a name and the path of the corresponding script or program to be run in the remote host. Let's see a simple one to understand how they work:
$> cat $ONE_LOCATION/lib/im_probes/name.sh
#!/bin/sh

echo NAME=`uname -n`
This uses the uname command to get the hostname of the remote cluster host, and then outputs the information as
NAME=host1.mydomain.org
You can add and remove probes as you wish.
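As a sketch of what a new probe might look like, the script below reports the free space of the root filesystem in the NAME=VALUE format the IM expects. The probe name and the FREEDISK attribute are assumptions, not part of the default probe set:

```shell
#!/bin/sh
# Hypothetical extra probe: free space on the root filesystem, in MB,
# printed as a NAME=VALUE pair.
echo FREEDISK=`df -m / | tail -1 | awk '{print $4}'`
```

It would then be registered in one_im_ssh.conf with a line such as freedisk=lib/im_probes/freedisk.sh, and the FREEDISK attribute would become usable in scheduling expressions.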
The xen.rb script also uses the xentop command, like the one_vmm_xen driver in the previous section, so if the cluster hosts have their xentop command in a location different from the default one (/usr/sbin/xentop) you will need to change the fifth line of $ONE_LOCATION/lib/im_probes/xen.rb:
$> head -5 xen.rb
#!/usr/bin/env ruby

require "pp"

XENTOP_PATH="/usr/sbin/xentop"
Before this driver is loaded, the $ONE_LOCATION/etc/mad/im_sshrc file is sourced.
In order to use OpenNebula, you need to set the following environment variables:
ONE_LOCATION | pointing to <destination_folder> |
---|---|
ONE_XMLRPC | http://localhost:2633/RPC2 |
PATH | $ONE_LOCATION/bin:$PATH |
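With a bash-like shell, and assuming <destination_folder> was /srv/one (the path is an assumption for illustration), the variables could be set as follows:

```shell
# Environment for the OpenNebula CLI; adjust ONE_LOCATION to your
# actual <destination_folder>.
export ONE_LOCATION=/srv/one
export ONE_XMLRPC=http://localhost:2633/RPC2
export PATH=$ONE_LOCATION/bin:$PATH
```

Adding these lines to the <oneadmin> user's shell profile keeps them set across sessions.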
The Scheduler module is in charge of the assignment between pending Virtual Machines and known Hosts. The ONE architecture defines this module as a separate process that can be started independently of oned
. The ONE scheduling framework is designed in a generic way, so it is highly modifiable. The ONE TP comes with a match making
scheduler (mm_sched) that implements the Rank Scheduling Policy.
You can start oned
without the scheduling process to operate it in a VM management mode. Start or migration of VMs in this case is explicitly performed using the onevm
command.
The goal of this policy is to prioritize those resources more suitable for the VM. First, those hosts that do not meet the VM requirements (see the ''REQUIREMENTS'' attribute) or do not have enough resources (available CPU and memory) to run the VM are filtered out. The ''RANK'' expression is evaluated over this list to sort it. Those resources with a higher rank are used first to allocate VMs.
Rank and requirement expressions are built using any of the attributes provided by the IM (e.g. ARCH, FREECPU…). Check the IM configuration section to extend the ONE information model.
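As a sketch of how these attributes could look in a VM template (the exact values are illustrative; ARCH and FREECPU are IM-provided attributes mentioned above):

```
REQUIREMENTS = "ARCH = \"i686\""
RANK = "FREECPU"
```

With these settings, hosts whose architecture is not i686 are filtered out, and among the remaining ones the host with the most free CPU is chosen first.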
Note that there is a difference between
Once the environment is correctly set up, we have to let ONE know about which resources it can use. In other words, we have to set up the cluster.
But first things first: we need to start the ONE daemon and the scheduler. You can start both by issuing the following command as the <oneadmin> user:
$> one start
Now we should have two processes running:

- oned: Core process; attends the CLI requests, manages the pools and all the components.
- mm_sched: Scheduler process, in charge of the VM-HOST matching.

If those processes are running, you should see content in their log files:
$ONE_LOCATION/var/oned.log
$ONE_LOCATION/var/sched.log
Once we have made sure that both processes are running, let's set up the cluster in ONE. The first thing is adding hosts to ONE. This can be done by means of the onehost command (see the User Guide for more information). So let's add one host:
$> onehost add host1.mydomain.org one_im one_vmm
We are giving ONE hints about what it needs in order to run VMs in our cluster hosts. More details can be found in the Command Line Interface section.
Once the ONE software is installed, the following directory tree should be found under $ONE_LOCATION:
To verify the installation, we recommend following the steps in the QuickStart guide, from this step onwards. Before tackling it, please make sure that your environment is correctly set.
There are different log files corresponding to different ONE components:

- $ONE_LOCATION/var/oned.log: All problems related to DB access, component communication, command line invocations and so on are stated here. This file also stores all the information that is not specific to the Scheduler or the VM life-cycle.
- $ONE_LOCATION/var/<VID>/vm.log: Information related to a given VM is dumped into this file.