Enabling OpenVZ support in OpenNebula 2.2.1

:!: This is the old Community Wiki page. Please go to the new Community Wiki.

:!: The information below is incomplete and outdated! Please check the new Community Wiki page instead.

:!: This page is under construction and still incomplete!

:!: The instructions below were written and tested against an OpenNebula 2.2.1 installation on CentOS 5; the cluster nodes as well as the VMs were running CentOS 5.
For the time being, OpenVZ support in OpenNebula has been developed and tested for the following use case:
1) the VM image is copied to each cluster node over ssh;
2) only the OpenVZ venet network device is used inside VMs, i.e. veth has not been tested yet and thus no network bridge needs to be set up on the cluster nodes.

:!: OpenVZ support in OpenNebula hasn't been tested yet on RHEL6-based OpenVZ kernels, which use a new memory management model called VSwap that supersedes User Beancounters.

:!: The acronyms VE (virtual environment), VPS (virtual private server), CT (container) and VM (virtual machine) are used as synonyms in this document and in the scripts.

inlinetoc

Legend

CN = cluster node
FN = front-end node
IR = Image repository
ONE = OpenNebula environment.

Front-end setup

Software installation

Creating oneadmin user and his home dir

<xterm> [root@FN]$ groupadd -g 1000 cloud

[root@FN]$ mkdir -p /srv/cloud/

[root@FN]$ useradd --uid 1000 -g cloud -d /srv/cloud/one -m oneadmin

[root@FN]$ id oneadmin

uid=1000(oneadmin) gid=1000(cloud) groups=1000(cloud) </xterm>

Creating a directory for images

<xterm> [root@FN]$ mkdir /srv/cloud/images

[root@FN]$ chown oneadmin:cloud /srv/cloud/images

[root@FN]$ chmod g+w /srv/cloud/images/ </xterm>

Installing required packages

The steps below are based on the procedure described in the "CentOS 5 / RHEL 5" section of the "Platform Notes 2.2" doc. <xterm> [root@FN]$ wget http://centos.karan.org/kbsingh-CentOS-Extras.repo -P /etc/yum.repos.d/ </xterm> One can disable the kbsingh-CentOS-* repos by default and enable them explicitly only when needed, as shown below.
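For example, something like the following could be used (a sketch; it assumes the downloaded repo file ships with enabled=1 entries and uses the kbs-CentOS-Testing repo id referenced later in this doc):

<xterm> [root@FN]$ sed -i 's/^enabled=1/enabled=0/' /etc/yum.repos.d/kbsingh-CentOS-Extras.repo
[root@FN]$ yum --enablerepo="kbs-CentOS-Testing" list xmlrpc-c </xterm>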

On an x86_64 machine one can force yum to install only the 64-bit versions of rpms by adding an 'exclude=*.i386 *.i586 *.i686' line to /etc/yum.conf, for example as below.
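A quick way to append that line (a sketch; it first checks that no exclude line is already present):

<xterm> [root@FN]$ grep -q '^exclude=' /etc/yum.conf || echo 'exclude=*.i386 *.i586 *.i686' >> /etc/yum.conf </xterm>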

Install the packages listed below if they are not yet installed: <xterm> [root@FN]$ yum install bison emacs rpm-build gcc autoconf readline-devel ncurses-devel gdbm-devel tcl-devel tk-devel libX11-devel openssl-devel db4-devel byacc emacs-common gcc-c++ libxml2-devel libxslt-devel expat-devel </xterm>

Nokogiri gem requires ruby >= 1.8.7 and rake gem requires rubygems >= 1.3.2. The required versions of these packages can be installed e.g. from Southbridge repo: <xterm> [root@FN]$ cat /etc/yum.repos.d/southbridge-stable.repo
[southbridge-stable]
name=Southbridge stable packages repository
gpgcheck=1
gpgkey=http://rpms.southbridge.ru/RPM-GPG-KEY-southbridge
enabled=0
baseurl=http://rpms.southbridge.ru/stable/$basearch/
[root@FN]$ yum --enablerepo="southbridge-stable" install ruby-enterprise-1.8.7-3 ruby-enterprise-rubygems-1.3.2-2 </xterm>

After that nokogiri, rake and xmlparser gems should be installed: <xterm> [root@FN]$ gem install nokogiri rake xmlparser --no-ri --no-rdoc </xterm>
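To double-check that the expected interpreter and gems are in place, a quick sanity check could look like this (not a required step):

<xterm> [root@FN]$ ruby -v
[root@FN]$ gem -v
[root@FN]$ gem list | grep -E 'nokogiri|rake|xmlparser' </xterm>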

Install scons: <xterm> [root@FN]$ wget http://prdownloads.sourceforge.net/scons/scons-2.1.0-1.noarch.rpm

[root@FN]$ rpm -ivh scons-2.1.0-1.noarch.rpm </xterm>

Install xmlrpc-c and xmlrpc-c-devel packages: <xterm> [root@FN]$ yum --enablerepo="kbs-CentOS-Testing" install xmlrpc-c-1.06.18 xmlrpc-c-devel-1.06.18 </xterm>

:!: Note the exact version of the xmlrpc-c* rpms (1.06.18). The scons build fails with newer ones (e.g. 1.16.24).

Rebuild the sqlite srpm under an unprivileged user (e.g. oneadmin) and install the compiled packages: <xterm> [root@FN]$ su - oneadmin
[oneadmin@FN]$ mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
[oneadmin@FN]$ echo '%_topdir %(echo $HOME)/rpmbuild' > ~/.rpmmacros
[oneadmin@FN]$ wget http://download.fedora.redhat.com/pub/fedora/linux/releases/13/Fedora/source/SRPMS/sqlite-3.6.22-1.fc13.src.rpm -P ~/rpmbuild/SRPMS/
[oneadmin@FN]$ rpm -ivh --nomd5 rpmbuild/SRPMS/sqlite-3.6.22-1.fc13.src.rpm
[oneadmin@FN]$ rpmbuild -ba --define 'dist .el5' ~/rpmbuild/SPECS/sqlite.spec
[oneadmin@FN]$ exit
[root@FN]$ yum localinstall --nogpgcheck ~oneadmin/rpmbuild/RPMS/x86_64/{sqlite-3.6.22*,sqlite-devel-3.6.22*,lemon*} </xterm>

Installing OpenNebula

Download the OpenNebula source tarball from http://opennebula.org/software:software to build OpenNebula from the stable release. <xterm> [oneadmin@FN]$ tar -xzf opennebula-2.2.1.tar.gz </xterm>

The next step is to build OpenNebula: <xterm> [oneadmin@FN]$ cd opennebula-2.2.1

[oneadmin@FN]$ scons -j2 </xterm> Install OpenNebula into some dir: <xterm> [oneadmin@FN]$ mkdir ~/one-2.2.1

[oneadmin@FN]$ ./install.sh -d ~oneadmin/one-2.2.1 </xterm> FIXME Download the openvz4opennebula tarball from <…>. It contains the OpenVZ-related scripts which have to be placed in $ONE_LOCATION/var/remotes/im/ and $ONE_LOCATION/var/remotes/vmm/ (the copies in the $ONE_LOCATION/lib/remotes/{im,vmm} dirs are for backup purposes).

:!: Note that the $ONE_LOCATION/etc/tm_ssh/tm_ssh.conf file will be overwritten by the file from the tarball. Thus one can make a copy of the original tm_ssh.conf if needed: <xterm> [oneadmin@FN]$ mv $ONE_LOCATION/etc/tm_ssh/tm_ssh.conf{,.orig} </xterm>

For a self-contained OpenNebula installation one can run the following command: <xterm> [oneadmin@FN]$ tar -xjf openvz4opennebula-1.0.0.tar.bz2 -C $ONE_LOCATION/ </xterm>

For a system-wide installation the commands are as below: <xterm> [oneadmin@FN]$ tar -C /usr/lib/one/remotes/ -xvjf openvz4opennebula-1.0.0.tar.bz2 lib/remotes/ --strip-components=2

[oneadmin@FN]$ tar -C /etc/one/tm_ssh/ -xvjf openvz4opennebula-1.0.0.tar.bz2 etc/tm_ssh/tm_ssh.conf --strip-components=2

[oneadmin@FN]$ tar -C /etc/one/vmm_ssh/ -xvjf openvz4opennebula-1.0.0.tar.bz2 etc/vmm_ssh/vmm_ssh_ovz.conf --strip-components=2 </xterm>

Changes to be done in one_vmm_ssh.rb file

To perform the resume action on VMs ($SCRIPTS_REMOTE_DIR/vmm/ovz/restore is invoked) the deployed VM ID has to be passed, whereas only a dump file is passed by OpenNebula. Thus the #{deploy_id} variable needs to be added as a second argument on line 104 of the file $ONE_LOCATION/lib/mads/one_vmm_ssh.rb, i.e.:

<xterm> $ diff one_vmm_ssh.rb one_vmm_ssh.rb.orig
104c104
<         remotes_action("#{@remote_path}/restore #{file} #{deploy_id}",
---
>         remotes_action("#{@remote_path}/restore #{file}",
</xterm> There is a ticket to track that issue.

Configuration

Environment

Set a proper environment in ~oneadmin/.one-env file:

export ONE_LOCATION=/srv/cloud/one/one-2.2.1
export ONE_AUTH=$HOME/.one-auth
export ONE_XMLRPC=http://localhost:2633/RPC2
export PATH=$ONE_LOCATION/bin:/sbin:/usr/sbin:/usr/kerberos/sbin:/usr/kerberos/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin
export LD_LIBRARY_PATH=$ONE_LOCATION/lib

<xterm> [oneadmin@FN]$ echo "source $HOME/.one-env" >> .bash_profile </xterm>
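To make sure the environment is actually picked up, a quick check could be (just a sanity check, not a required step):

<xterm> [oneadmin@FN]$ source ~/.one-env
[oneadmin@FN]$ env | grep ONE_ </xterm>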

local authorization

Put the login and password for the oneadmin user into ~/.one-auth in the format below:

<oneadmin_username>:<oneadmin_passwd>

:!: <oneadmin_passwd> is not the password used for access via ssh.
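A minimal sketch of creating the file (the password below is only a placeholder; tightening the permissions is optional but a good habit):

<xterm> [oneadmin@FN]$ echo "oneadmin:some_one_password" > ~/.one-auth
[oneadmin@FN]$ chmod 600 ~/.one-auth </xterm>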

ssh authorization

oneadmin user must be able to perform passwordless login from FN on FN (localhost) and Cluster nodes (CNs). Generate ssh keys (don't enter any passphrase, i.e. just press “Enter”): <xterm> [oneadmin@FN]$ ssh-keygen -t rsa </xterm>

Add ~oneadmin/.ssh/id_rsa.pub to ~oneadmin/.ssh/authorized_keys file: <xterm> [oneadmin@FN]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys </xterm>

Check that oneadmin can log in on localhost without being asked for a password: <xterm> [oneadmin@FN]$ ssh localhost </xterm>

If the command above asks for a password, then check the firewall settings and the /etc/{hosts.allow,hosts.deny,hosts} files. If access control is done using the /etc/hosts.{allow,deny} files, then make sure localhost is present in /etc/hosts.allow as below: <xterm> [root@FN]$ cat /etc/hosts.allow | grep sshd
sshd:localhost </xterm> The configuration for passwordless ssh access from FN to CNs and vice versa is described below.

oned.conf

As already mentioned, for the time being OpenVZ support in OpenNebula has been developed and tested for the following use case:
1) the VM image is copied to each cluster node over ssh;
2) only the OpenVZ venet network device is used inside VMs, i.e. veth has not been tested yet and thus no network bridge needs to be set up on the cluster nodes.

To enable OpenVZ support in OpenNebula the following lines need to be added into $ONE_LOCATION/etc/oned.conf file:

#-------------------------------------------------------------------------------
#  OpenVZ Information Driver Manager Configuration
#-------------------------------------------------------------------------------
IM_MAD = [
name         = "im_ovz",
executable   = "one_im_ssh",
arguments    = "ovz" ]
#-------------------------------------------------------------------------------
#-------------------------------------------------------------------------------
#  OpenVZ Virtualization Driver Manager Configuration
#-------------------------------------------------------------------------------
VM_MAD = [
name       = "vmm_ovz",
executable = "one_vmm_ssh",
arguments  = "ovz",
default    = "vmm_ssh/vmm_ssh_ovz.conf",
type       = "xml" ]
#-------------------------------------------------------------------------------

Enable the ssh transfer manager in oned.conf too.
The rest of the parameters have to be tuned according to your cluster configuration.
The vzfirewall tool can be used to easily configure open ports and hosts for incoming connections in an OpenVZ environment. It can be run via the OpenNebula hooks mechanism (for details on using vzfirewall as an OpenNebula hook see below).

oned.conf example:

HOST_MONITORING_INTERVAL = 60
VM_POLLING_INTERVAL      = 60
VM_DIR=/vz/one/vm
SCRIPTS_REMOTE_DIR=/vz/one/scripts
PORT=2633
DB = [ backend = "sqlite" ]
VNC_BASE_PORT = 5900
DEBUG_LEVEL=3
NETWORK_SIZE = 254
MAC_PREFIX   = "02:00"
DEFAULT_IMAGE_TYPE    = "OS"
DEFAULT_DEVICE_PREFIX = "sd"
IMAGE_REPOSITORY_PATH = /srv/cloud/images
#-----------------------------------------------------
#  OpenVZ Information Driver Manager Configuration
#-----------------------------------------------------
IM_MAD = [
  name       = "im_ovz",
  executable = "one_im_ssh",
  arguments  = "ovz" ]
#------------------------------------------------------
#------------------------------------------------------
#  OpenVZ Virtualization Driver Manager Configuration
#------------------------------------------------------
VM_MAD = [
  name       = "vmm_ovz",
  executable = "one_vmm_ssh",
  arguments  = "ovz",
  default    = "vmm_ssh/vmm_ssh_ovz.conf",
  type       = "xml" ]

#-------------------------------------------------------
# SSH Transfer Manager Driver Configuration
#-------------------------------------------------------
TM_MAD = [
    name       = "tm_ssh",
    executable = "one_tm",
    arguments  = "tm_ssh/tm_ssh.conf" ]
#-------------------------------------------------------

#*******************************************************
# Hook Manager Configuration
#*******************************************************

HM_MAD = [
    executable = "one_hm" ]

#---------------------- Image Hook ---------------------
# This hook is used to handle image saving and overwriting when virtual
# machines reach the DONE state after being shut down.

VM_HOOK = [
    name      = "image",
    on        = "DONE",
    command   = "image.rb",
    arguments = "$VMID" ]

#-------------------------------------------------------

#-------------------- vzfirewall Hook ------------------
# This hook is used to apply firewall rules specified
# in the VM's config when the VM reaches the RUNNING
# state after being booted

VM_HOOK = [
    name      = "vzfirewall",
    on        = "RUNNING",
    command   = "/vz/one/scripts/hooks/vzfirewall.sh",
    arguments = "",
    remote    = "yes" ]

#--------------------------------------------------------

The default values for OpenVZ VM attributes can be set in the $ONE_LOCATION/etc/vmm_ssh/vmm_ssh_ovz.conf file.

Running oned and scheduler

In order to start oned and the scheduler do <xterm> [oneadmin@FN]$ one start </xterm> (use 'one stop' to stop oned and the scheduler).

Check $ONE_LOCATION/var/oned.log for errors.
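To confirm that the daemon came up and answers requests, one can look at the end of the log and run any CLI command against it (a quick sanity check; the host list will simply be empty until CNs are added below):

<xterm> [oneadmin@FN]$ tail -n 50 $ONE_LOCATION/var/oned.log
[oneadmin@FN]$ onehost list </xterm>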

Configuring cluster node (CN)

OpenVZ installation

Follow the OpenVZ quick installation guide to enable OpenVZ on your CNs.

Mandatory software packages

On all CNs the following rpms need to be installed: file, perl and perl-XML-LibXML, for example as shown below.
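A one-liner sketch, assuming yum is used on the CNs:

<xterm> [root@CN]$ yum install file perl perl-XML-LibXML </xterm>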

Oneadmin user

<xterm> [root@CN]$ groupadd --gid 1000 cloud

[root@CN]$ useradd --uid 1000 -g cloud -d /vz/one oneadmin

[root@CN]$ su - oneadmin </xterm> assuming /vz/one as oneadmin's home dir.

Image Repository and Virtual Machine directory

Among the storage subsystem configurations supported by OpenNebula, only the non-shared (SSH-based) storage deployment model is implemented in the OpenVZ scripts for the moment.

Configuring passwordless access over ssh

The OpenVZ hypervisor runs under the root user on the CNs, and all commands on OpenVZ VMs are performed by the privileged user. The OpenNebula daemons run under the oneadmin user, and the same user runs all scripts on the remote nodes. Thus oneadmin has to have superuser privileges on the CNs. Moreover, to perform VM live migration oneadmin has to have permission to read all objects inside the VM file system and to keep all their attributes (owner, timestamps, permissions, etc.) the same on the destination node. That can be done only with superuser privileges (without being prompted for a password).
One possible way to implement the described behavior is to use key pairs (for passwordless ssh access) and add appropriate entries for oneadmin in the /etc/sudoers file.
Add oneadmin's public key to the /root/.ssh/authorized_keys file on all CNs:
<xterm> [oneadmin@FN]$ ssh-copy-id root@<CN> </xterm> Check that the oneadmin user is able to perform passwordless login from FN to CN both as oneadmin and as root: <xterm> [oneadmin@FN]$ ssh <CN>

[oneadmin@FN]$ ssh root@<CN> </xterm>

Sudo

Add to the /etc/sudoers file the following line:

%cloud  ALL=(ALL)           NOPASSWD: ALL

and comment out the "Defaults        requiretty" line:

#Defaults        requiretty

If the vzfirewall tool is going to be used, then the line below needs to be added to the /etc/sudoers file:

Defaults:%cloud secure_path="/bin:/sbin:/usr/bin:/usr/sbin"

This is because vzfirewall uses the iptables-restore command located in the /sbin dir, which is not in $PATH for sudo users by default.
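A quick way to check that the command is now found under sudo (a sketch; it is expected to print /sbin/iptables-restore):

<xterm> [oneadmin@FN]$ ssh <CN> "sudo which iptables-restore"
/sbin/iptables-restore </xterm>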

Check if oneadmin user can perform actions on cluster nodes: <xterm> [oneadmin@FN]$ ssh <CN> “sudo vzlist -a” </xterm>

Performing basic operations in ONE with OpenVZ hypervisor enabled

Creating OpenVZ cluster

<xterm> [oneadmin@FN]$ onecluster create ovz_x64

[oneadmin@FN]$ onehost create <OVZ_enabled_cluster_node_hostname> im_ovz vmm_ovz tm_ssh </xterm>

:!: When adding CNs, either specify their FQDN (fully qualified domain name) or make sure the domain is specified in /etc/resolv.conf as 'search <your CNs domain>' if the host in the command above is specified without a domain.

<xterm> [oneadmin@FN]$ onecluster addhost <host_id> <cluster_id> </xterm> Check that the CNs are monitored properly: <xterm> [oneadmin@FN]$ onehost list </xterm>

Image repository

OpenVZ OS template

An OS image name for an OpenVZ VM has to be the same as the OpenVZ template filename without the extension (e.g. centos-5-x86 for the centos-5-x86.tar.gz OpenVZ OS template). During deployment, the OS image registered in the ONE image repository under a hash name (e.g. /srv/cloud/images/<hash>) is copied into the $TEMPLATE dir on the remote node ($TEMPLATE is defined in /etc/vz/vz.conf and points to the dir on the OpenVZ-enabled cluster node whose "cache" subdirectory holds the VM OS templates) and renamed as specified in the image NAME attribute, with the extension appended (e.g. "tar.gz"). In other words, the value of the NAME attribute in the ONE image description file has to be the same as the filename (without extension) of the OpenVZ template specified in the PATH attribute of the ONE image description file.

For example: <xterm> [oneadmin@FN]$ cat centos-5.x86.one.img
NAME        = "centos-5-x86"
PATH        = "/srv/cloud/one/one-2.2.1/centos-5-x86.tar.gz"
PUBLIC      = YES
DESCRIPTION = "CentOS 5 x86 OpenVZ template" </xterm> Register OpenVZ OS template in ONE image database: <xterm> [oneadmin@FN]$ oneimage register centos-5.x86.one.img

[oneadmin@FN]$ oneimage list
  ID     USER                     NAME TYPE               REGTIME PUB PER STAT  #VMS
   0 oneadmin             centos-5-x86   OS    Jul 15, 2011 08:22 Yes  No  rdy     0

[oneadmin@FN]$ oneimage show 0
IMAGE INFORMATION
ID             : 0
NAME           : centos-5-x86
TYPE           : OS
REGISTER TIME  : 07/15 12:22:28
PUBLIC         : Yes
PERSISTENT     : No
SOURCE         : /srv/cloud/images/70f38bbaf574eef06b8e3ca4e8ebee3eb1f1786d
STATE          : rdy
RUNNING_VMS    : 0

IMAGE TEMPLATE
DESCRIPTION=CentOS 5 x86 OpenVZ template
DEV_PREFIX=sd
NAME=centos-5-x86
PATH=/srv/cloud/one/one-2.2.1/centos-5-x86.tar.gz </xterm>

Persistent images

Please note that, since an OpenVZ-based VM filesystem is just a directory on the host server (see that doc), i.e. the VM OS image from the ONE image repository is not used directly but its content is extracted into a certain dir on the host server filesystem, there is not much sense in registering an OpenVZ OS image in the ONE image repository as persistent.
In order to keep changes made while the VM is running, use the 'onevm saveas' command (see next).

‘onevm saveas’ command

To save a VM disk in the Image Repository (IR) using the 'onevm saveas' command, the image_name argument has to start with the name of the distribution the VM being saved is based on. This is because the name of the image is used during VM deployment as the OSTEMPLATE parameter (see the vzctl and ctid.conf man pages for more details about the OSTEMPLATE config option).
The list of distributions supported by the installed version of the vzctl tool can be found in the /etc/vz/dists/ dir on an OpenVZ-enabled CN. Since in OpenNebula 2.2.1 an IMAGE name has to be unique among all images registered in the IR, the <image_name> argument of 'onevm saveas' has to be different from the image names already registered in the Image Repository.
For example, if an image named centos-5-x86 is already registered in the IR, then in order to register another image based on the same OS one can specify centos-5-x86-vm21, centos-5-x86-2, centos-5-1, etc., i.e. the command can be something like <xterm> $ onevm saveas 36 0 centos-5-vm36 </xterm> The main idea is to encode in the image name which Linux distribution's post-creation scripts have to be run by the vzctl tool; those scripts are defined by the config files in /etc/vz/dists/ (the filename of that config has to match the beginning of the image name).
It is also possible to add extra attributes to images registered in the IR (e.g. a DESCRIPTION attribute), i.e. as soon as a VM image has been saved in the OpenNebula Image Repository, a command 'oneimage addattr DESCRIPTION <some image description>' can be run in order to provide more info about the saved VM image than is already given in the image name.

Virtual network

The current implementation of OpenVZ support in OpenNebula is able to manage OpenVZ VMs with venet network devices only. That type of network device doesn't use a bridge on the cluster node. Since the BRIDGE parameter in an OpenNebula virtual network template is mandatory, it has to be present, but its value is not taken into account by the OpenVZ scripts.

For example: <xterm> [oneadmin@FN]$ cat public.net
NAME = "Public"
TYPE = FIXED
BRIDGE = eth0
BROADCAST = X.Y.Z.255
NETMASK = 255.255.255.0
NETWORK = X.Y.Z.0
GATEWAY = X.Y.Z.1
PEERDNS = yes
DNS = <dns_ip>

LEASES = [IP=<ip_address_1>]
LEASES = [IP=<ip_address_2>]
LEASES = [IP=<ip_address_3>]
LEASES = [IP=<ip_address_4>] </xterm>
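Once the template file is written it can be registered in OpenNebula (a minimal sketch using the standard onevnet CLI):

<xterm> [oneadmin@FN]$ onevnet create public.net
[oneadmin@FN]$ onevnet list </xterm>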

OpenNebula VM description file for OpenVZ hypervisor

To create an OpenVZ VM in ONE, a VM definition file has to be written according to the OpenNebula docs (e.g. that one). But there are some issues that need to be taken into account in the case of an OpenVZ VM.

:!: Remember that default configuration attributes for VMs can be specified in the $ONE_LOCATION/etc/vmm_ssh/vmm_ssh_ovz.conf file on the FN.

Memory

The current implementation of OpenVZ support in OpenNebula was developed for RHEL5-based OpenVZ kernels, whose resource management model is based on so-called User Beancounters. Memory resources for a particular VM in that model are specified via several parameters (e.g. KMEMSIZE, LOCKEDPAGES, PRIVVMPAGES and others). Thus the MEMORY parameter of an OpenNebula VM definition file needs to be written as in the example below:

MEMORY  = [ KMEMSIZE="14372700:14790164",
            LOCKEDPAGES="2048:2048",
            PRIVVMPAGES="65536:69632",
            SHMPAGES="21504:21504",
            PHYSPAGES="0:unlimited",
            VMGUARPAGES="33792:unlimited",
            OOMGUARPAGES="26112:unlimited" ]

CPU

There are several parameters in the OpenVZ container config file that control CPU usage: CPUUNITS, CPULIMIT, CPUS, CPUMASK. All of them can be specified in the OpenNebula VM description file following the raw OpenVZ syntax, like below:

CPUUNITS="2000"
CPULIMIT="25"
CPUS="2"

Disks

OS disk

According to “Virtual Machine Definition File 2.2” doc “there are two ways to attach a disk to a VM: using an image from OpenNebula Image Repository, or declaring a disk type that can be created from a source disk file in your system”.

Only a single disk attribute, apart from the swap one, can be defined for an OpenVZ-based VM (because of OpenVZ specifics).

OS disk from an image in Image Repository

When a VM disk is specified as an image from the Image Repository, then only the IMAGE or IMAGE_ID attributes have an effect, whereas others like BUS, TARGET and DRIVER are ignored by the OpenVZ deployment script.

An example of the OS template image definition:

DISK = [ IMAGE  = "centos-5-x86" ]

OS disk from local file

It is possible to define a DISK from an OpenVZ OS template file without having to register it first in the Image Repository.

In that case the SOURCE sub-attribute of the DISK attribute has to point to a file with a valid OpenVZ OS template name.

For example:

DISK = [ SOURCE  = "/srv/cloud/one/one-2.2.1/centos-5-x86_64.tar.gz" ]

Such DISK sub-attributes for OS disk as TYPE, BUS, FORMAT, TARGET, READONLY and DRIVER are ignored.

The DISKSPACE and DISKINODES OpenVZ parameters can be defined either as sub-attributes of the DISK attribute, like

DISK = [ SOURCE  = "/srv/cloud/one/centos-5-x86.tar.bz2",
         DISKSPACE="10485760:11530240",
         DISKINODES="200000:220000" ]

or as separate attributes following the raw OpenVZ syntax:

DISK = [ SOURCE  = "/srv/cloud/one/one-2.2.1/centos-5-x86_64.tar.gz" ]
DISKSPACE="10485760:11530240"
DISKINODES="200000:220000"

Swap disk

The SWAPPAGES OpenVZ VM parameter can be defined as a swap disk:

DISK = [ TYPE = swap,
         SIZE = 1024 ]

An OpenNebula attribute specified in this way will be converted into the SWAPPAGES OpenVZ parameter as SWAPPAGES="0:1048576".

Network

As already mentioned above, the current implementation of OpenVZ support in OpenNebula is able to manage OpenVZ VMs with venet network devices only. That type of network device doesn't use a bridge on the cluster node, hence the BRIDGE parameter in the OpenNebula VM description file is ignored. The TARGET, SCRIPT and MODEL attributes listed in the Network section of the "Virtual Machine Definition File 2.2" doc are not taken into account either.

vzfirewall

The vzfirewall tool can be used to easily configure open ports and hosts for incoming connections in an OpenVZ environment. It can be run via the OpenNebula hooks mechanism. In order to make it work, the following steps have to be done.

1) Download vzfirewall and put it on all OpenVZ-enabled CNs (e.g. in the /usr/sbin/ dir where all OpenVZ commands are located by default): <xterm> [root@CN]$ wget http://github.com/DmitryKoterov/vzfirewall/raw/master/vzfirewall -P /usr/sbin/ </xterm>

2) Enable executable permission: <xterm> [root@CN]$ chmod +x /usr/sbin/vzfirewall </xterm>

3) Patch the vzfirewall script to make it return a proper exit code in case no changes have been done: <xterm> $ diff vzfirewall.orig vzfirewall.new
192c192,193
<                           die "Nothing is changed.\n";
---
>                           print "Nothing is changed.\n";
>                           exit;
</xterm>

Basically, the change above just makes the script terminate normally with a 0 exit code when nothing was changed in the iptables rules for the VMs deployed on a particular CN.
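To confirm the patched script now exits cleanly when there is nothing to do, one can run it by hand on a CN (a sketch, assuming the firewall rules are already up to date so that the "Nothing is changed." branch is taken):

<xterm> [root@CN]$ /usr/sbin/vzfirewall -a; echo $?
Nothing is changed.
0 </xterm>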

4) Configure vzfirewall hook in oned.conf:

#----------------------------- vzfirewall Hook ---------------------------------
# This hook is used to apply firewall rules specified in the VM's config when
# the VM reaches the RUNNING state after being booted

VM_HOOK = [
name      = "vzfirewall",
on        = "RUNNING",
command   = "/vz/one/scripts/hooks/vzfirewall.sh",
arguments = "",
remote    = "yes" ]
#-------------------------------------------------------------------------------

assuming that $SCRIPTS_REMOTE_DIR is defined in oned.conf as /vz/one/scripts. Please note that the value of the $SCRIPTS_REMOTE_DIR variable can't be used as part of the path in the hook 'command' parameter (like command = "$SCRIPTS_REMOTE_DIR/hooks/vzfirewall.sh"), since $SCRIPTS_REMOTE_DIR is unknown to the hook manager and thus has an empty value.

5) Create a dir on FN for remote hooks: <xterm> [oneadmin@FN]$ mkdir $ONE_LOCATION/var/remotes/hooks </xterm>

and put inside it a vzfirewall.sh script with the following content: <xterm> [oneadmin@FN]$ cat $ONE_LOCATION/var/remotes/hooks/vzfirewall.sh

#!/bin/bash

sudo /usr/sbin/vzfirewall -a </xterm> Make vzfirewall.sh script executable: <xterm> [oneadmin@FN]$ chmod +x $ONE_LOCATION/var/remotes/hooks/vzfirewall.sh </xterm>

6) Restart oned: <xterm> [oneadmin@FN]$ one stop

[oneadmin@FN]$ one start </xterm>

Contextualization

Contextualization can be done as described in the OpenNebula doc "Contextualizing Virtual Machines 2.2".

For example:

CONTEXT = [
   hostname  = "$NAME.example.org",
   nameserver        = "$NETWORK[DNS, NAME=\"Public\" ]",
   firewall  = "
                [*]
                111.111.111.0/24

                [22]
                222.222.0.0/16
               ",
   files = "/srv/cloud/one/vps145/init.sh /srv/cloud/one/vps145/id_rsa.pub" ]

All files listed in the FILES attribute of the CONTEXT section of the VM template will be copied into the /mnt dir in the VM by default. That dir can be changed via the $context_dir variable in the $ONE_LOCATION/var/remotes/vmm/ovz/deploy script.

Take the value of that variable into account if there is a need to do something with the copied files.
For example, if some operations need to be performed on VM boot, then write them into an init.sh script and list it in the FILES attribute of the CONTEXT section of the OpenNebula VM description file. The commands it contains will then be added to the VM's /etc/rc.d/rc.local file and thus will be executed on VM boot. An example init.sh script is below:

# Location of root's authorized_keys file inside the VM
AUTH_KEYS=/root/.ssh/authorized_keys

# Directory the CONTEXT files are copied into (see $context_dir in the deploy script)
CONTEXT_DIR=/mnt

# Create /root/.ssh if it does not exist yet
if [ ! -d `dirname $AUTH_KEYS` ]
  then
    mkdir `dirname $AUTH_KEYS`
fi

# Append the public key delivered via the CONTEXT FILES attribute
cat $CONTEXT_DIR/id_rsa.pub >> $AUTH_KEYS

Example of full VM definition file

NAME = vps145
ONBOOT="yes"
MEMORY  = [ KMEMSIZE="14372700:14790164",
            LOCKEDPAGES="2048:2048",
            PRIVVMPAGES="65536:69632",
            SHMPAGES="21504:21504",
            PHYSPAGES="0:unlimited",
            VMGUARPAGES="33792:unlimited",
            OOMGUARPAGES="26112:unlimited" ]

CPUUNITS="2000"
CPULIMIT="25"
CPUS="2"

DISK = [ SOURCE  = "/srv/cloud/one/centos-5-x86.tar.bz2",
         SAVE    = "yes" ]

DISKSPACE="10485760:11530240"
DISKINODES="200000:220000"

NIC = [ NETWORK = "Public", IP="111.111.111.111" ]

DISK = [ TYPE = swap,
         SIZE = 1024,
         READONLY = "no" ]

CONTEXT = [
  HOSTNAME   = "$NAME.example.org",
  NAMESERVER = "$NETWORK[DNS, NAME=\"Public\" ]",
  FIREWALL   = "
                [*]
                111.111.111.0/24

                [22]
                222.222.222.0/16
               ",
  FILES = "/srv/cloud/one/vps145/init.sh /srv/cloud/one/vps145/id_rsa.pub" ]

VM deployment

The OpenVZ VE_PRIVATE and VE_ROOT dirs are set to $VM_DIR/<VMID>/images/private and $VM_DIR/<VMID>/images/root respectively, which are not the default locations for the OpenVZ hypervisor (the default paths are /vz/private/ and /vz/root/ respectively).
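One way to see where a deployed container actually lives is to look at its config on the CN (a sketch; <CTID> stands for the container ID of the deployed VM and the config location assumes a standard OpenVZ setup):

<xterm> [root@CN]$ grep -E '^VE_(PRIVATE|ROOT)' /etc/vz/conf/<CTID>.conf </xterm>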

VM shutdown and cancel actions

There is no way to destroy an OpenVZ VM without stopping it first. Thus the cancel OpenVZ VMM script behaves almost in the same way as the shutdown one: the VM is stopped first and then it is destroyed. The only difference is that during shutdown the VM filesystem is saved (the CT private area is tar'ed and stored as disk.0) before the VM is destroyed. The side effect of such shutdown script behavior is that the CT filesystem is always archived regardless of whether the SAVE attribute is enabled or disabled.