Configuration Guide 1.4
OpenNebula comprises the execution of three types of processes:

- The OpenNebula daemon (oned), to orchestrate the operation of all the modules and control the VM's life-cycle.
- The drivers, to access specific cluster systems (e.g. storage or hypervisors).
- The scheduler, to take VM placement decisions.

In this section you'll learn how to configure and start these services.
The configuration file for the daemon is called oned.conf and it is placed inside the $ONE_LOCATION/etc directory (or in /etc/one if OpenNebula was installed system wide).
A detailed description of all the configuration options for the OpenNebula daemon can be found in the oned.conf reference document.
The oned.conf file consists of sections for the general configuration attributes and for the driver (IM, VM and TM) definitions.
The following example will configure OpenNebula to work with KVM and a shared FS:
<code>
# Attributes
HOST_MONITORING_INTERVAL = 60
VM_POLLING_INTERVAL      = 60
VM_DIR       = /srv/cloud/one/var  # Path in the cluster nodes to store VM images
NETWORK_SIZE = 254                 # default
MAC_PREFIX   = "00:03"

# Drivers
IM_MAD = [ name="im_kvm",  executable="one_im_ssh",  arguments="im_kvm/im_kvm.conf" ]
VM_MAD = [ name="vmm_kvm", executable="one_vmm_kvm", default="vmm_kvm/vmm_kvm.conf", type="kvm" ]
TM_MAD = [ name="tm_nfs",  executable="one_tm",      arguments="tm_nfs/tm_nfs.conf" ]
</code>
VM_DIR is set to the path where the front-end's $ONE_LOCATION/var directory is mounted in the cluster nodes.
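For instance, with the shared file system above, every cluster node could mount the front-end's var directory on that same path over NFS. A minimal sketch of the corresponding /etc/fstab entry on a node (the front-end host name is illustrative):

<code>
# /etc/fstab on a cluster node: mounts the front-end's $ONE_LOCATION/var
# on the path configured in VM_DIR
frontend01:/srv/cloud/one/var  /srv/cloud/one/var  nfs  defaults  0  0
</code>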
The Scheduler module is in charge of the assignment between pending Virtual Machines and cluster nodes. OpenNebula's architecture defines this module as a separate process that can be started independently of oned. OpenNebula comes with a match making scheduler (mm_sched) that implements the Rank Scheduling Policy. The goal of this policy is to prioritize the resources that are most suitable for the VM. You can configure several resource and load aware policies by specifying RANK expressions in the Virtual Machine definition files, as in the example below. Check the scheduling guide to learn how to configure these policies.
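For instance, a VM definition file can restrict the candidate hosts and rank them by free CPU, so the least loaded node is selected first. A minimal sketch using monitoring variables that appear in the onehost show output later in this guide (the CPUSPEED threshold is illustrative):

<code>
# Fragment of a VM definition file
REQUIREMENTS = "CPUSPEED > 1500"   # only consider fast enough hosts (illustrative threshold)
RANK         = "FREECPU"           # prefer the hosts with the most free CPU
</code>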
You can use OpenNebula without the scheduling process to operate it in a VM management mode. In this case, starting or migrating VMs is performed explicitly with the onevm command.
The Haizea lease manager can also be used as a scheduling module in OpenNebula. Haizea allows OpenNebula to support advance reservation of resources and queuing of best effort requests.
Drivers are separate processes that communicate with the OpenNebula core using an internal ASCII protocol. Before loading a driver, two run commands (RC) files are sourced to optionally obtain environmental variables.
These two RC files are:

- $ONE_LOCATION/etc/mad/defaultrc. Global environment and tasks for all the drivers. Variables are defined using sh syntax and, upon read, exported to the driver's environment:

<code>
# Debug for MADs [0=ERROR, 1=DEBUG]
# If set, MADs will generate cores and logs in $ONE_LOCATION/var.
ONE_MAD_DEBUG=

# Nice Priority to run the drivers
PRIORITY=19
</code>

- A specific RC file for each driver, which may redefine the defaultrc variables. Please see each driver's configuration guide for specific options.
When you execute OpenNebula for the first time it will create an administration account. Be sure to put its user name and password in a single line, as user:password, in the $ONE_AUTH file.
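A minimal sketch of preparing that file before the first start (user name and password are illustrative):

<xterm>
$ mkdir -p ~/.one
$ echo "oneadmin:onepass" > ~/.one/one_auth
$ export ONE_AUTH=~/.one/one_auth
</xterm>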
The OpenNebula daemon and the scheduler can be easily started with the $ONE_LOCATION/bin/one script. Just execute as the <oneadmin> user:
<xterm>
$ one start
</xterm>
If you do not want to start the scheduler, just use oned; check oned -h for options.
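Conversely, the same script shuts both processes down when you need to stop OpenNebula:

<xterm>
$ one stop
</xterm>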
Now we should have two processes running:

- oned: the core process; it attends CLI requests and manages the pools and all the components.
- mm_sched: the scheduler process, in charge of the VM to cluster node matching.
If those processes are running, you should see content in their log files (log files are placed in /var/log/one/ if you installed OpenNebula system wide):

- $ONE_LOCATION/var/oned.log
- $ONE_LOCATION/var/sched.log
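For instance, to follow the daemon's activity in a self-contained installation:

<xterm>
$ tail -f $ONE_LOCATION/var/oned.log
</xterm>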
There are two account types in the OpenNebula system:

- The oneadmin account, created the first time OpenNebula is started. oneadmin has enough privileges to perform any operation on any object (virtual machine, network, host or user).
- Regular user accounts, created by <oneadmin>; they can only manage their own objects (virtual machines and networks).

Virtual Networks created by oneadmin are public and can be used by every other user.
OpenNebula users should have the following environment variables set:
| Variable | Description |
|---|---|
| ONE_AUTH | Needs to point to a file containing just a single line stating "username:password". If ONE_AUTH is not defined, $HOME/.one/one_auth will be used instead. If no auth file is present, OpenNebula cannot work properly, as this is needed by the core, the CLI, and the cloud components as well. |
| ONE_LOCATION | If OpenNebula was installed in self-contained mode, this variable must be set to <destination_folder>. Otherwise, in system wide mode, this variable must be unset. More info on installation modes can be found here. |
| ONE_XMLRPC | URL where oned listens for XML-RPC calls, e.g. http://localhost:2633/RPC2. |
| PATH | $ONE_LOCATION/bin:$PATH if self-contained. Otherwise this is not needed. |
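A minimal sketch of a user's shell profile for a self-contained installation (the installation path is illustrative):

<code>
# ~/.bashrc fragment for a self-contained OpenNebula installation
export ONE_LOCATION=/srv/cloud/one
export ONE_AUTH=$HOME/.one/one_auth
export ONE_XMLRPC=http://localhost:2633/RPC2
export PATH=$ONE_LOCATION/bin:$PATH
</code>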
User accounts within the OpenNebula system are managed by <oneadmin> with the oneuser utility. Users can be easily added to the system like this:
<xterm>
$ oneuser create helen mypass
</xterm>
In this case user helen should include the following content in the $ONE_AUTH file:
<xterm>
$ export ONE_AUTH="/home/helen/.one/one_auth"
$ cat $ONE_AUTH
helen:mypass
</xterm>
Users can be deleted by simply:

<xterm>
$ oneuser delete john
</xterm>
To list the users in the system just issue the command:

<xterm>
$ oneuser list
  UID NAME     PASSWORD                                 ENABLE
    0 oneadmin c24783ba96a35464632a624d9f829136edc0175e True
    1 paul     e727d1464ae12436e899a726da5b2f11d8381b26 True
    2 helen    34a91f713808846ade4a71577dc7963631ebae14 True
</xterm>
Detailed information on the oneuser utility can be found in the Command Line Reference.
Finally, to set up the cluster, the nodes have to be added to the system as OpenNebula hosts. You need the following information for each of them:

- The hostname of the cluster node.
- The Information Driver used to monitor it, e.g. im_kvm.
- The Transfer (storage) Driver used to move VM images to and from it, e.g. tm_nfs.
- The Virtualization Driver used to manage VMs on it, e.g. vmm_kvm.

Before adding a host, check that you can ssh to it without being prompted for a password, as in the check below.
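A quick way to verify this from the front-end as <oneadmin> (host name is illustrative); the command should print the remote host name without asking for a password:

<xterm>
$ ssh host01 hostname
host01
</xterm>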
Hosts can be added to the system anytime with the onehost utility. You can add the cluster nodes to be used by OpenNebula, like this:
<xterm>
$ onehost create host01 im_kvm vmm_kvm tm_nfs
$ onehost create host02 im_kvm vmm_kvm tm_nfs
</xterm>
The status of the cluster can be checked with the list command:
<xterm>
$ onehost list
  HID NAME       RVM   TCPU   FCPU   ACPU     TMEM     FMEM STAT
    0 host01       2    100     90     90   523264   205824   on
    1 host02       7    100     99     99   523264   301056   on
    2 host03       0    100     99     99   523264   264192  off
</xterm>
And specific information about a host with show:
<xterm>
$ onehost show host01
HOST 0 INFORMATION
ID : 0
NAME : host01
STATE : MONITORED
IM_MAD : im_kvm
VM_MAD : vmm_kvm
TM_MAD : tm_nfs
HOST SHARES
MAX MEM              : 523264
USED MEM (REAL)      : 317440
USED MEM (ALLOCATED) : 131072
MAX CPU              : 100
USED CPU (REAL)      : 10
USED CPU (ALLOCATED) : 20
RUNNING VMS          : 2

MONITORING INFORMATION
ARCH=i686
CPUSPEED=1995
FREECPU=90
FREEMEMORY=205824
HOSTNAME=host01
HYPERVISOR=xen
MODELNAME=Intel(R) Xeon(R) CPU L5335 @ 2.00GHz
NETRX=0
NETTX=0
TOTALCPU=100
TOTALMEMORY=523264
USEDCPU=10
USEDMEMORY=317440
</xterm>
If you do not want to use a given host, you can temporarily disable it:
<xterm>
$ onehost disable host01
</xterm>
A disabled host will be listed with STAT off by onehost list. You can also remove a host permanently with:
<xterm>
$ onehost delete host01
</xterm>
Detailed information on the onehost utility can be found in the Command Line Reference.
There are different log files corresponding to different OpenNebula components:
- ONE Daemon: the core component dumps all its logging information into $ONE_LOCATION/var/oned.log. Its verbosity is regulated by DEBUG_LEVEL in $ONE_LOCATION/etc/oned.conf (see the snippet after this list).
- Virtual Machines: all files related to a VM are stored in $ONE_LOCATION/var/<VID>/ (or /var/lib/one/<VID> in a system wide installation), where <VID> is the ID of the VM. You can find the following information in it:
  - The VM's own log file. Note that these files are placed in /var/log/one if OpenNebula was installed system wide.
  - Deployment description files, named deployment.<EXECUTION>, where <EXECUTION> is the sequence number in the execution history of the VM (deployment.0 for the first host, deployment.1 for the second and so on).
  - Transfer description files, named transfer.<EXECUTION>.<OPERATION>, where <EXECUTION> is the sequence number in the execution history of the VM and <OPERATION> is the stage where the script was used, e.g. transfer.0.prolog, transfer.0.epilog, or transfer.1.cleanup.
  - Saved images, placed in the images/ sub-directory in the form disk.<id>.
  - Checkpointing information, stored in a file called checkpoint.
- Drivers: if ONE_MAD_DEBUG is set in their RC files, drivers dump error information to $ONE_LOCATION/var/name-of-the-driver-executable.log; regular log information of the drivers goes to oned.log.
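For instance, to get the most verbose daemon logs, raise DEBUG_LEVEL in oned.conf and restart oned (0=ERROR, 1=WARNING, 2=INFO, 3=DEBUG):

<code>
# Fragment of $ONE_LOCATION/etc/oned.conf
DEBUG_LEVEL = 3
</code>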