Quick Start Guide 1.2
This QuickStart guide shows how to prepare a simple cluster, consisting of one front-end and two cluster nodes, for an OpenNebula installation, and how to configure OpenNebula to manage VMs in that cluster. As seen in the following picture, the front-end machine is where OpenNebula is installed. OpenNebula supports the XEN and KVM hypervisors (and it can interface with Amazon's EC2) on the hosts to manage the VM lifecycle across them. This QuickStart guide focuses on the XEN hypervisor.
We are going to use a cluster formed by three computers: the front-end (makito) and two cluster nodes (aquila01 and aquila02).
Take a look at the software requisites to install OpenNebula. We assume in this guide that XEN and NIS are correctly configured. Additionally, you will need the OpenNebula tarball, which can be downloaded from the Software section.
We are going to set up a common user and group for the three machines; one of the easiest ways to do this is to use NIS. Assuming makito is the NIS server, log on to it and type:
makito$ groupadd xen
makito$ useradd -G xen oneadmin
makito$ cd /var/yp
makito$ make
We need to know the group id of the newly created xen group. It should now appear among the GIDs to which <oneadmin> belongs:
makito$ id oneadmin
From the output of the previous command, get the GID of the xen group.
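The GID field can also be extracted with a one-liner instead of reading it off by hand; a sketch that parses a standard group-database entry (the sample line and the GID 1002 are illustrative):

```shell
# Field 3 of a group entry is the GID; on the NIS server you would feed
# this `getent group xen` instead of the illustrative sample line below.
group_line="xen:x:1002:oneadmin"
xen_gid=$(echo "$group_line" | cut -d: -f3)
echo "$xen_gid"
```

With `getent group xen | cut -d: -f3` you get the value directly, without copying it by hand.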
Now we have to create a local group (let's call it rootxen) in aquila01 and aquila02 that includes their local root user and shares its GID with the xen group, so the local root shares the xen group privileges.
aquila01$ echo "rootxen:x:<xen_gid>:root" >> /etc/group
Replace <xen_gid> in the previous command with the corresponding number. Repeat this for aquila02.
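Building the line in a variable first avoids quoting mistakes; a sketch (xen_gid=1002 is an illustrative value, use the GID you found with `id oneadmin`):

```shell
# Build the rootxen entry from the GID found earlier; the actual append
# to /etc/group must be run as root on each node (shown commented out).
xen_gid=1002                      # illustrative value
line="rootxen:x:${xen_gid}:root"
echo "$line"
# echo "$line" >> /etc/group      # run as root on aquila01 and aquila02
```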
The <oneadmin> account has to be trusted in the nodes from the OpenNebula server, so it can log into them in a passwordless fashion. Logged in as <oneadmin> in makito:
makito$ ssh-keygen
Press enter when prompted for a password (or, alternatively, set a password and use ssh-agent). This will create a pair of public/private keys, the public one being stored in id_rsa.pub.
Now we need to tell all the nodes that they have to trust this key, so we copy the created id_rsa.pub into each node and add it as a trusted public key. Repeat the following for all the nodes in the cluster:
makito$ scp id_rsa.pub aquila01:
and then, logged into the cluster node:
aquila01$ cd ~/.ssh
aquila01$ cat ~/id_rsa.pub >> authorized_keys
You can now try to ssh with the <oneadmin> account from makito to one of the cluster nodes; you should gain a login session without having to type a password.
In this QuickStart guide we are assuming that the cluster front-end is also the image repository. Let's create the image folder:
makito$ mkdir /opt/nebula/images
Both the images and the folders have to be readable from the <oneadmin> account. OpenNebula will stage the images from the cluster front-end to the chosen node for execution using secure copy (scp).
For other, possibly more complex image repository configurations, see the Storage Guide.
In this guide we are assuming that the cluster nodes are configured to allow bridged networking for the virtual machines. This means having a software bridge in each node that is linked to its physical network interface. We will assume that this bridge is called eth0.
See the Network Guide for more information.
As <oneadmin> in makito, download the OpenNebula tarball and untar it in the home folder. Change to the newly created folder and type:
makito$ scons
If there are any problems in the compilation, maybe this helps. We are going to install OpenNebula in self-contained mode. Once the compilation finishes successfully, let's install it to the target folder:
makito$ ./install.sh -d /opt/nebula/ONE
Now let's set the environment:
makito$ export ONE_LOCATION=/opt/nebula/ONE/
makito$ export ONE_XMLRPC=http://localhost:2633/RPC2
makito$ export PATH=$ONE_LOCATION/bin:$PATH
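To avoid retyping these exports on every login, they can be appended to <oneadmin>'s shell profile; a sketch (using ~/.bashrc is an assumption, adjust the file name for your shell):

```shell
# Append the OpenNebula environment to the shell profile so that new
# sessions pick it up automatically (values are the ones from this guide).
cat >> ~/.bashrc <<'EOF'
export ONE_LOCATION=/opt/nebula/ONE/
export ONE_XMLRPC=http://localhost:2633/RPC2
export PATH=$ONE_LOCATION/bin:$PATH
EOF
```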
Now it's time to start the OpenNebula daemon and the scheduler. So don't get nervous and type the following in makito:
makito$ $ONE_LOCATION/bin/one start
If you get a “oned and scheduler started” message, your OpenNebula installation is up and running.
OpenNebula needs to know how to access and use the cluster nodes, so let's set up the cluster in OpenNebula. The first thing is adding hosts to OpenNebula. This can be done by means of the onehost command (see the Command Line Interface for more information). So let's add both aquila01 and aquila02:
makito$ onehost create aquila01 im_xen vmm_xen tm_ssh
makito$ onehost create aquila02 im_xen vmm_xen tm_ssh
We are telling OpenNebula what it needs in order to run VMs on both of those hosts. im_xen, vmm_xen and tm_ssh reference, respectively, the information driver (for monitoring the physical cluster nodes), the virtualization driver (to interface with the cluster nodes' hypervisors) and the transfer driver (to move images to and from the cluster nodes). These drivers are configured in $ONE_LOCATION/etc/oned.conf. The driver names used above are the defaults configured out-of-the-box when installing OpenNebula, so the OpenNebula configuration file doesn't need to be fiddled with in this QuickStart guide. They correspond to an information driver that can extract host information using the XEN utilities, a virtualization driver that understands how to deploy virtual machines using the XEN hypervisor, and a transfer driver that is able to transfer images using secure copy (scp).
Let's do a sample session to make sure everything is working. The first thing to do is check that the cluster hosts were added smoothly. Issue the following command as <oneadmin> and check the output:
makito$ onehost list
 HID NAME       RVM  TCPU  FCPU  ACPU    TMEM    FMEM STAT
   0 aquila01     0   800   800   800 8194468 7867604   on
   1 aquila02     0   800   797   800 8387584 1438720   on
Now we need to prepare a virtual network with just one lease (that is, just one IP/MAC pair to be assigned to a virtual machine). This is going to be a fixed network (leases have to be defined explicitly) with one valid public IP address, so the virtual machine gains access to the internet. This is the network template we are going to use; note that it references the eth0 bridge of the cluster nodes:
NAME   = "Public VLAN"
TYPE   = FIXED
BRIDGE = eth0
LEASES = [IP=130.10.0.1,MAC=50:20:20:20:20:20]
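If you prefer to create the file from the shell, the template can be written with a heredoc (the file name is the one this guide uses):

```shell
# Write the virtual network template shown above to ~/myFirstNW.net.
cat > ~/myFirstNW.net <<'EOF'
NAME   = "Public VLAN"
TYPE   = FIXED
BRIDGE = eth0
LEASES = [IP=130.10.0.1,MAC=50:20:20:20:20:20]
EOF
```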
If we have this template saved in a file called myFirstNW.net, then we can create the network using the onevnet command as follows:

makito$ onevnet create myFirstNW.net
We can verify the correct creation of the network using the onevnet command again: first list all the available virtual networks with onevnet list, and then show the details of the recently created virtual network with onevnet show:
makito$ onevnet list
 NID NAME            TYPE SIZE BRIDGE
   0 Public VLAN             1 eth0
makito$ onevnet show "Public VLAN"
NID          : 0
UID          : 0
Network Name : Public VLAN
Type         : Fixed
Size         : 1
Bridge       : eth0
....: Template :....
BRIDGE=eth0
LEASES=IP=130.10.0.1,MAC=50:20:20:20:20:20
....: Leases :....
IP = 130.10.0.1  MAC = 50:20:20:20:20:20  USED = 0  VID = -1
Once we have checked the nodes and created the virtual network, we can submit a VM to OpenNebula by using onevm. We are going to build a VM template to submit, using the image we placed in the /opt/nebula/images directory. The following will do:
NAME   = vm-example
CPU    = 0.5
MEMORY = 128
OS     = [
  kernel = "/boot/vmlinuz-2.6.18-4-xen-amd64",
  initrd = "/boot/initrd.img-2.6.18-4-xen-amd64",
  root   = "sda1" ]
DISK   = [
  source   = "/opt/nebula/images/disk.img",
  target   = "sda1",
  readonly = "no" ]
DISK   = [ type = "swap", size = 1024, target = "sdb" ]
NIC    = [ NETWORK = "Public VLAN" ]
Save it in your home and name it myfirstVM.template.
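The save step can also be scripted with a heredoc (file name and paths are the ones used in this guide):

```shell
# Save the VM template shown above as ~/myfirstVM.template
# (kernel/initrd paths are the example values from this guide).
cat > ~/myfirstVM.template <<'EOF'
NAME   = vm-example
CPU    = 0.5
MEMORY = 128
OS     = [
  kernel = "/boot/vmlinuz-2.6.18-4-xen-amd64",
  initrd = "/boot/initrd.img-2.6.18-4-xen-amd64",
  root   = "sda1" ]
DISK   = [
  source   = "/opt/nebula/images/disk.img",
  target   = "sda1",
  readonly = "no" ]
DISK   = [ type = "swap", size = 1024, target = "sdb" ]
NIC    = [ NETWORK = "Public VLAN" ]
EOF
```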
You can add more parameters; check this for a complete list. From this template, it is worth noting that it is not necessary to have a swap image in the image repository (the OpenNebula server in this guide): OpenNebula will create it for the virtual machine prior to booting it.
Once we have tailored the requirements to our needs (especially the CPU and MEMORY fields), ensuring that the VM fits into at least one of the hosts, let's submit the VM (assuming you are currently in your home folder):
makito$ onevm submit myfirstVM.template
This should come back with an ID that we can use to identify the VM for monitoring and controlling it, again through the onevm command:
makito$ onevm list
The output should look like:
  ID NAME  STAT CPU   MEM HOSTNAME       TIME
   0 one-0 runn   0 65536 aquila01    00 0:00:02
The STAT field tells the state of the virtual machine. If it shows runn, your virtual machine is up and running. Depending on how you set up your image, you may know its IP address. If that is the case, you can try to log into the VM now. Keep that connection alive in another terminal so we can check the migration later (since this is a regular, non-live migration, the VM will be briefly suspended during the transfer).
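If you want to check the state from a script, the STAT column can be extracted with awk; a sketch run against sample text shaped like the listing above (the sample output is illustrative; in practice you would pipe `onevm list` into awk):

```shell
# Extract the STAT column for the VM with id 0 from listing-shaped text.
sample="  ID NAME STAT CPU MEM HOSTNAME TIME
   0 one-0 runn 0 65536 aquila01 00 0:00:02"
state=$(echo "$sample" | awk '$1 == "0" {print $3}')
echo "$state"
```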
To perform a migration (we cannot use live migration, since one of its requirements is an image repository shared between all the cluster nodes), we use the onevm command yet again. Let's move the VM (with VID=0) to aquila02 (HID=1):
makito$ onevm migrate 0 1
This will move the VM from aquila01 to aquila02. After a few minutes (needed to transfer the images and the checkpoint from aquila01 to aquila02), your onevm list should show something like the following if all went smoothly:
  ID NAME  STAT CPU   MEM HOSTNAME       TIME
   0 one-0 runn   0 65536 aquila02    00 0:00:06
The last test to verify the correctness of this migration is to try an ssh connection to the VM (it should have the same IP address). If that works, you have succeeded in completing this simple usage scenario.