Installation & Configuration Guide for ONE TP2

This guide shows how to install and configure OpenNebula (ONE) Technology Preview 2.

Requirements

Frontend

The ONE server machine needs the following software installed:

  • ruby >= 1.8.5
  • sqlite3 >= 3.5.2
  • sqlite3-dev >= 3.5.6-3
  • sqlite3-ruby
  • libxmlrpc-c >= 1.06
  • scons >= 0.97
  • g++ >= 4

These packages are only needed if you want to rebuild the template parsers:

  • flex >= 2.5
  • bison >= 2.3

Most of this software is already packaged in Linux distributions. Here are the packages needed on Debian Lenny:

  • ruby: ruby
  • sqlite3: libsqlite3-0, sqlite3
  • sqlite3-dev : libsqlite3-dev
  • sqlite3-ruby: libsqlite3-ruby
  • libxmlrpc-c: libxmlrpc-c3-dev, libxmlrpc-c3
  • scons: scons

Hosts

The provisioning hosts need the following software installed:

  • ruby >= 1.8.5
  • sudo >= 1.6.9
  • xen >= 3.1

The execution hosts also need some Xen features configured:

  • xend-relocation-server activated and using the same port on all the nodes:
(xend-relocation-server yes)
(xend-relocation-port 8002)
  • Access granted to all the nodes involved in relocation. This example is for hosts named host01, host02 and so on (see the combined example below):
(xend-relocation-hosts-allow 'host*')
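
Putting these together, the relevant excerpt of the Xen daemon configuration would look like this (the file is typically /etc/xen/xend-config.sxp, although the exact path may vary by distribution):

(xend-relocation-server yes)
(xend-relocation-port 8002)
(xend-relocation-hosts-allow 'host*')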

Users Configuration

It is necessary that the ONE Server and all the hosts:

  • have a common user (the ONE admin user, <oneadmin>). This can be done using NIS.
  • trust each other in the ssh scope. This means that all the hosts have to trust the public key of the common user (the ONE administrator). Public keys need to be generated in the ONE server and then copied to all the cluster hosts so that they trust the ONE server without prompting for a password.

    Also, we recommend the use of ssh-agent to keep the private key encrypted in the ONE server. A good tutorial on how to do this can be found here.

  • let the <oneadmin> user execute Xen commands with root privileges, and also let the root user read and write Xen image files. This can be done by creating a group containing <oneadmin> and root and setting the appropriate group ownership and permissions on the Xen image files. You also need to add these two lines to the sudoers file of the executing nodes so that the <oneadmin> user can execute Xen commands as root (change the paths to suit your installation; a setup sketch follows this list):
%xen    ALL=(ALL) NOPASSWD: /usr/sbin/xm *
%xen    ALL=(ALL) NOPASSWD: /usr/sbin/xentop *
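
A minimal sketch of the key and group setup described above, assuming the common user is oneadmin and the hosts are named host01, host02 and so on (user, group, host names and image paths are illustrative):

$> # On the ONE server, as oneadmin: generate a key pair and distribute the public key
$> ssh-keygen -t rsa
$> ssh-copy-id oneadmin@host01
$> ssh-copy-id oneadmin@host02

$> # On each execution host, as root: create the xen group used in the sudoers lines
$> groupadd xen
$> usermod -a -G xen oneadmin
$> chgrp xen /path/to/xen/images/*
$> chmod g+rw /path/to/xen/images/*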

Filesystem Layout

The ONE server and the cluster hosts have to share directories using, for example, NFS. The filesystem layout for the cluster has to conform to the definitions below:

  • $ONE_LOCATION : Path to the ONE installation.
  • $ONE_LOCATION/var : Directory containing log files and directories for the different VMs. This directory needs to be shared: exported by the ONE server and mounted by all the cluster hosts. In the ONE server, it corresponds to $ONE_LOCATION/var. If this directory is mounted on the remote hosts at a point other than $ONE_LOCATION/var, you will need to set the VM_RDIR variable in the ONE configuration file.
  • $ONE_LOCATION/var/<VID> : Home directory for the VM with Identifier=<VID>. The system will create the following files in this location:
    • Log files : Log file of the VM. All the information specific to the VM will be dumped to a file in this directory called vm.log.
    • Deployment description files : Stored in deployment.$EXECUTION, where $EXECUTION is the sequence number in the execution history of the VM (deployment.0 for the first host, deployment.1 for the second and so on).
    • Restore files : checkpointing information is also stored in this directory to restore the VM in case of failure. The state information is stored in a file called checkpoint.
  • Also, the VM images need to be accessible from all the cluster hosts, and the local root account of all the cluster hosts must have read-write permissions on them. When you specify the location of these images in the VM template, take into account that the paths must refer to where this directory is mounted on the remote hosts. A sharing sketch is shown below.
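
For example, the ONE server could export $ONE_LOCATION/var over NFS and each cluster host could mount it at the same point. A sketch, assuming the illustrative path /srv/cloud/one for $ONE_LOCATION and a server named oneserver:

# /etc/exports on the ONE server
/srv/cloud/one/var  host*(rw,sync,no_root_squash)

$> # On each cluster host
$> mount oneserver:/srv/cloud/one/var /srv/cloud/one/var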

Installation

The ONE software needs to be installed on a machine that exports (at least) the $ONE_LOCATION folder using NFS. This is necessary for the checkpointing feature to work. There is a known issue regarding sqlite and NFS; please see the Release Notes for more information.

Follow these simple steps to install the ONE server:

  • Download and untar the one-tp.tar.gz
  • Change to the created folder and run scons to compile OpenNebula
scons OPTION=VALUE

The OPTION=VALUE argument expressions are optional and are used to set non-default paths:

OPTION     VALUE
sqlite     path-to-sqlite-install
xmlrpc     path-to-xmlrpc-install
parsers    yes, if you want to rebuild the flex/bison files
  • Run the install script
./install.sh <destination_folder>
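
The whole sequence would look like this (the name of the unpacked folder and the destination folder /srv/cloud/one are illustrative):

$> tar xzf one-tp.tar.gz
$> cd one-tp
$> scons
$> ./install.sh /srv/cloud/one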

ONE Configuration

The configuration file is called oned.conf and is placed inside the etc directory of $ONE_LOCATION, which in turn is the directory where OpenNebula is installed.

The following aspects of oned can be defined in this file:

  • HOST_MONITORING_INTERVAL : Time in seconds between host monitoring cycles
  • VM_POLLING_INTERVAL : Time in seconds between virtual machine monitoring cycles
  • PORT : Port where oned will listen for xmlrpc calls
  • IM_MAD : Information Manager configuration. You can define more than one Information Manager. For each one, the following needs to be set:
    • name: name for this Information Manager.
    • executable: path of the Information Manager executable; it can be an absolute path or a path relative to $ONE_LOCATION
    • arguments: path where the Information Manager configuration resides; this can also be a relative path
    • owner: user that will be used for monitoring (we recommend this to be <oneadmin>)
  • VM_MAD : Virtual Machine Manager configuration.
    • name : name of the Virtual Machine Manager.
    • executable: path of the Virtual Machine Manager executable; it can be an absolute path or a path relative to $ONE_LOCATION
    • owner: user that will be used for VM management (we recommend this to be <oneadmin>)
    • default: file with default values for the driver (for example, to set the default kernel).
  • VM_RDIR : Where $ONE_LOCATION/var is mounted on the cluster hosts, if this location differs from $ONE_LOCATION/var. In other words, if $ONE_LOCATION/var on the ONE server corresponds to /local/nebula/var, the cluster hosts can mount it also as /local/nebula/var, in which case there is no need to set this variable. But if the mount point on the cluster hosts differs from $ONE_LOCATION/var (say it is /opt/nebula, for example), then VM_RDIR should point to that mount point (i.e. /opt/nebula); see the example after this list.
  • DEBUG_LEVEL : Sets the verbosity level of the $ONE_LOCATION/var/oned.log log file. Possible values are:
0 ERROR
1 WARNING
2 INFO
3 DEBUG
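
Continuing the VM_RDIR example above, the corresponding oned.conf entries would look like this (the /opt/nebula mount point is illustrative):

# $ONE_LOCATION/var is mounted as /opt/nebula on the cluster hosts
VM_RDIR=/opt/nebula

# Log everything up to DEBUG level
DEBUG_LEVEL=3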

An example of a complete oned.conf is shown below.

# Time in seconds between host monitoring cycles
HOST_MONITORING_INTERVAL=10

# Information manager configuration. 
IM_MAD=[name="one_im",executable="bin/one_im_ssh",arguments="etc/one_im_ssh.conf",owner="oneadmin"]

# Time in seconds between virtual machine monitoring cycles
VM_POLLING_INTERVAL=10

# Virtual Machine Manager configuration.
VM_MAD=[name="one_vmm",executable="bin/one_vmm_xen",owner="oneadmin"]

# Port where oned will listen for xmlrpc calls.
PORT=2633

Drivers Configuration

Drivers are separate processes that communicate with the ONE core using an internal ASCII protocol. Before a driver is loaded, two files are sourced to optionally set environment variables and perform tasks described as shell script.

These two files are:

  • $ONE_LOCATION/etc/mad/defaultrc : global environment and tasks for all the drivers.
  • a driver-specific rc file, also under $ONE_LOCATION/etc/mad (for example vmm_xenrc or im_sshrc, described in the following sections) : environment and tasks for that particular driver.

One possible use of this feature is to pass the nice priority to all the drivers, or to tailor the priority per driver; a sketch follows.
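
A minimal sketch of a defaultrc, assuming the driver launcher honors a PRIORITY variable as in later OpenNebula releases (the variable name is an assumption for this Technology Preview):

# $ONE_LOCATION/etc/mad/defaultrc
# Nice priority applied to all drivers (assumed variable name)
PRIORITY=19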

VMM Configuration

The VMM component uses the xentop command remotely, so it is important that it knows where to find this command. The default value for XENTOP_PATH is /usr/sbin/xentop. If this does not correspond to the xentop path on the cluster hosts, you will need to edit the $ONE_LOCATION/bin/one_vmm_xen script and change the third line:

$> head -5 $ONE_LOCATION/bin/one_vmm_xen
#!/usr/bin/env ruby

XENTOP_PATH="/usr/sbin/xentop"

ONE_LOCATION=ENV["ONE_LOCATION"]

Before this driver is loaded, the $ONE_LOCATION/etc/mad/vmm_xenrc is sourced.

IM Configuration

The Information Manager can be configured to extract different information from the cluster hosts than that extracted by default. It works based on im_probes, that is, scripts or programs that monitor certain aspects of the remote cluster host and output their findings in a NAME=VALUE fashion.

To let the IM know about the probes, we use the one_im_ssh.conf file passed as an argument to the IM_MAD, as seen in the ONE configuration. Let's look at the default one_im_ssh.conf to clarify this:

# Remote directory where scripts are written
REMOTE_DIR=/tmp/ne_im_scripts

cpuarchitecture=lib/im_probes/architecture.sh
nodename=lib/im_probes/name.sh
cpu=lib/im_probes/cpu.sh
xen=lib/im_probes/xen.rb

The REMOTE_DIR variable tells ONE where to place the probes on the remote host. ONE will then transfer the probes and run them using ssh.

The rest of the file specifies the probes, giving each a name and the path of the corresponding script or program to be run on the remote host. Let's look at a simple one to understand how they work:

$> cat $ONE_LOCATION/lib/im_probes/name.sh
#!/bin/sh

echo NAME=`uname -n`

This uses the uname command to get the hostname of the remote cluster host, and then outputs the information as

NAME=host1.mydomain.org

You can add and remove probes as you wish.
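
As an illustration, a new probe reporting the free memory of a host could look like the following (the probe name, FREEMEMORY attribute and file location are hypothetical; any NAME=VALUE output works):

$> cat $ONE_LOCATION/lib/im_probes/memory.sh
#!/bin/sh

# Report the free memory in kilobytes (hypothetical attribute name)
echo FREEMEMORY=`free -k | awk '/^Mem:/ {print $4}'`

It would then be registered in one_im_ssh.conf with a line such as:

freememory=lib/im_probes/memory.sh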

The xen.rb script also uses the xentop command, like the one_vmm_xen driver in the previous section, so if the cluster hosts have their xentop command in a location different from the default one (/usr/sbin/xentop), you will need to change the fifth line of $ONE_LOCATION/lib/im_probes/xen.rb:

$> head -5 xen.rb 
#!/usr/bin/env ruby

require "pp"

XENTOP_PATH="/usr/sbin/xentop"

Before this driver is loaded, the $ONE_LOCATION/etc/mad/im_sshrc is sourced.

Environment Configuration

In order to use OpenNebula, you need to set the following environment variables (an example follows the table):

ONE_LOCATION    pointing to <destination_folder>
ONE_XMLRPC      http://localhost:2633/RPC2
PATH            $ONE_LOCATION/bin:$PATH
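
For example, in bash (using the illustrative destination folder /srv/cloud/one from the installation section):

$> export ONE_LOCATION=/srv/cloud/one
$> export ONE_XMLRPC=http://localhost:2633/RPC2
$> export PATH=$ONE_LOCATION/bin:$PATH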

Scheduler Module

The Scheduler module is in charge of the assignment between pending Virtual Machines and known Hosts. The ONE architecture defines this module as a separate process that can be started independently of oned. The ONE scheduling framework is designed in a generic way, so it is highly modifiable. The ONE TP comes with a match making scheduler (mm_sched) that implements the Rank Scheduling Policy.

:!: You can start oned without the scheduling process to operate it in VM management mode. Starting or migrating VMs in this case is performed explicitly using the onevm command.

Rank Scheduling Policy

The goal of this policy is to prioritize those resources more suitable for the VM. First, those hosts that do not meet the VM requirements (see the ''REQUIREMENTS'' attribute) or do not have enough resources (available CPU and memory) to run the VM are filtered out. The ''RANK'' expression is then evaluated over this list to sort it. Those resources with a higher rank are used first to allocate VMs.

:!: Rank and requirement expressions are built using any of the attributes provided by the IM (e.g. ARCH, FREECPU…). Check the IM Configuration section to extend the ONE information model.

:!: Note that there is a difference between:

  • Free CPU : The free physical CPU, as reported by the monitoring process executed by the Information Manager.
  • Available CPU : More of a logical concept, it is the total physical CPU minus the CPU reserved for the running VMs; the reserved amount can be significantly lower than the CPU actually consumed by the VMs.
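
For example, a VM template could use these attributes to filter and rank hosts as follows (a sketch; it assumes the IM reports the ARCH and FREECPU attributes mentioned above, and the exact expression syntax should be checked against the User Guide):

REQUIREMENTS = "ARCH = \"i686\" & FREECPU > 50"
RANK = "FREECPU"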

Setting up the cluster

Once the environment is correctly set up, we have to let ONE know which resources it can use. In other words, we have to set up the cluster.

But first things first: we need to start the ONE daemon and the scheduler. You can start both by issuing the following command as the <oneadmin> user:

$> one start

Now we should have two processes running:

  • oned : Core process, attends the CLI requests, manages the pools and all the components
  • mm_sched : Scheduler process, in charge of the VM-HOST matching

If those processes are running, you should see content in their log files:

  • $ONE_LOCATION/var/oned.log
  • $ONE_LOCATION/var/sched.log

Once we have made sure that both processes are running, let's set up the cluster in ONE. The first thing is to add hosts to ONE. This can be done by means of the onehost command (see the User Guide for more information). So let's add one host:

$> onehost add host1.mydomain.org one_im one_vmm

We are giving ONE hints about what it needs in order to run VMs on our cluster hosts. More details can be found in the Command Line Interface guide.
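
To check that the host was registered, you can list the known hosts (this assumes the onehost subcommands behave as in later OpenNebula releases):

$> onehost list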

Verifying ONE Installation

Once the ONE software is installed, the following tree should be found under $ONE_LOCATION (only the directories referenced throughout this guide are shown):

$ONE_LOCATION/bin
$ONE_LOCATION/etc
$ONE_LOCATION/etc/mad
$ONE_LOCATION/lib/im_probes
$ONE_LOCATION/var

To verify the installation, we recommend following the steps in the QuickStart guide, from this step onwards. Before tackling it, please make sure that your environment is correctly set.

Logging

There are different log files corresponding to different ONE components:

  • ONE Daemon: The core component of ONE dumps all its logging information to $ONE_LOCATION/var/oned.log. All problems related to DB access, component communication, command line invocations and so on will be logged here, along with all the information that is not specific to the Scheduler or the VM life-cycle.
  • Scheduler: All the scheduler information is collected in the $ONE_LOCATION/var/sched.log file. This includes error messages as well as information about the scheduling process.
  • Virtual Machines: Each VM controlled by ONE has its own folder, named after its ID: $ONE_LOCATION/var/<VID>. Information related to the VM is dumped into the vm.log file in that folder.