Quick Start Guide for ONE TP2

This Quick Start guide shows how to prepare a simple cluster, consisting of two physical hosts with a shared file system, for an ONE installation, and how to configure ONE to manage VMs in that cluster. The front-end machine is where ONE is installed; ONE uses the Xen hypervisor on the hosts to manage the VM lifecycle across them.

Requisites

Hardware

We are going to use a cluster formed by three computers:

  • ONE Server : This is going to be the front end. For clarity's sake, this is also going to be the NIS and NFS server. Let's call it makito and assume it has an IP of 192.168.3.1.
  • Nodes : The two other computers are going to be the execution hosts for the virtual machines. They will be called aquila01 and aquila02 throughout this guide, and they will have IPs of 192.168.3.2 and 192.168.3.3, respectively.

Software

The software requisites to install ONE in this cluster can be found here. In this guide we assume that Xen, NFS and NIS are correctly configured. Additionally you will need the ONE tarball (one-tp.tar.gz), which can be downloaded from the Software section.

System Configuration

NIS Configuration

We are going to set up a common user and group for the three machines; one of the easiest ways to do this is with NIS. Assuming makito is the NIS server, log on to it and type:

makito$ groupadd xen
makito$ useradd -G xen oneadmin
makito$ cd /var/yp
makito$ make
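
If you want <oneadmin> to be able to log in with a password (an assumption; you may prefer key-only access), set one now and rebuild the NIS maps again:

makito$ passwd oneadmin
makito$ cd /var/yp && make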

We need to know the group ID of the newly created xen group; it should now be among the GIDs to which <oneadmin> belongs:

makito$ id oneadmin
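
The output will look something like this (the numeric IDs below are made up for illustration; yours will differ):

uid=1001(oneadmin) gid=1001(oneadmin) groups=1001(oneadmin),1002(xen)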

From the output of the previous command, get the GID of the xen group.

Now we have to create a local group (let's call it rootxen) on both aquila01 and aquila02 that includes the local root user and shares its GID with the xen group, so that root on each node gets the xen group's privileges.

aquila01$ echo "rootxen:x:<xen_gid>:root" >> /etc/group

Replace <xen_gid> in the previous command with the corresponding number. Repeat this for aquila02.

NFS Configuration

The ONE server, here also the NFS server, is going to export two folders:

  • The /home folder : Sharing this folder is useful for the SSH setup, and it also provides a /home folder for <oneadmin>. This step is optional.
  • The /opt/nebula folder : Here we are going to place the ONE installation and the VM images.

First, log into the ONE server and add the following lines to /etc/exports:

/home           192.168.3.0/255.255.255.0(rw,async,no_subtree_check)
/opt/nebula     192.168.3.0/255.255.255.0(rw,async,no_subtree_check)
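
After saving /etc/exports, tell the NFS server to re-read it. On most Linux distributions this can be done (as root) with:

makito$ exportfs -ra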

Log into aquila01 and add the following to /etc/fstab:

makito:/home /home nfs soft,intr,rsize=32768,wsize=32768,rw 0 0
makito:/opt/nebula /opt/nebula nfs soft,intr,rsize=32768,wsize=32768,rw 0 0

Repeat the above for aquila02.
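
To mount the shares immediately without rebooting, run the following (as root) on each node; mount will pick up the options from the /etc/fstab entries just added:

aquila01$ mount /home
aquila01$ mount /opt/nebula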

SSH Configuration

The <oneadmin> account has to be trusted by the nodes, so that it can log into them from the ONE server without typing a password. Let's do the trick. Logged in as <oneadmin> on makito:

makito$ ssh-keygen

Press Enter when prompted for a passphrase. As we now have shared home directories, the following is enough to complete the SSH configuration for all the nodes:

makito$ cd ~/.ssh
makito$ cat id_rsa.pub >> authorized_keys
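
If sshd on the nodes runs with StrictModes enabled (the default on most distributions), make sure the file is not group- or world-writable:

makito$ chmod 600 authorized_keys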

You can now try to SSH with the <oneadmin> account from makito to one of the nodes; you should get a login session without having to type a password.
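
For example, the following should drop you straight into a shell on aquila01 with no password prompt:

makito$ ssh aquila01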

Placing the VM Images

The folder that will hold the images of the Virtual Machines has to be shared and there are special requirements with regard to permissions. Let's create the image folder:

makito$ mkdir /opt/nebula/images

Both the images and the folder that holds them have to:

  • Be owned by group xen
  • Have Read-Write permissions for group xen

Let's assume we have an image called disk.img; it needs to be placed in that folder with permissions like those of the following file:

makito$ ls -lrta /opt/nebula/images/disk.img
-rw-rw-r-- 1 oneadmin xen 4294967296 2008-03-26 15:56 /opt/nebula/images/disk.img
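
If your image does not already have this ownership and these permissions, the standard chown/chmod commands will set them (this assumes disk.img has already been copied into the folder):

makito$ chown oneadmin:xen /opt/nebula/images/disk.img
makito$ chmod 664 /opt/nebula/images/disk.img
makito$ chgrp xen /opt/nebula/images
makito$ chmod 775 /opt/nebula/images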

ONE Installation

As <oneadmin> on makito, download one-tp.tar.gz and untar it in the home folder.
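
For example (the name of the unpacked directory is an assumption here; check what the tarball actually creates):

makito$ tar xzf one-tp.tar.gz
makito$ cd one-tp

Once in that folder, start the build by typing: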

makito$ scons 

If there are any problems during the compilation, maybe this helps. Once the compilation finishes successfully, let's install it into the target folder:

makito$ ./install.sh /opt/nebula/ONE

Now let's set the environment:

makito$ export ONE_LOCATION=/opt/nebula/ONE/
makito$ export ONE_XMLRPC=http://localhost:2633/RPC2
makito$ export PATH=$ONE_LOCATION/bin:$PATH
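
These variables only last for the current shell session; to make them persistent, you may want to append the same three lines to <oneadmin>'s ~/.bashrc (or your shell's equivalent startup file):

makito$ cat >> ~/.bashrc <<'EOF'
export ONE_LOCATION=/opt/nebula/ONE/
export ONE_XMLRPC=http://localhost:2633/RPC2
export PATH=$ONE_LOCATION/bin:$PATH
EOF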

Now it is time to start the ONE daemon and the scheduler. So don't get nervous and type in makito:

makito$ $ONE_LOCATION/bin/one start

If you get an “oned and scheduler started” message, your ONE installation is up and running.
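
If the daemon does not start, the logs are the first place to look. In a self-contained installation like this one they should live under $ONE_LOCATION/var (an assumption based on the install prefix used above):

makito$ tail $ONE_LOCATION/var/oned.log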

Setting up the cluster in ONE

Let's set up the cluster in ONE. The first thing is to add the hosts to ONE. This can be done by means of the onehost command (see the Command Line Interface for more information). So let's add both aquila01 and aquila02:

makito$ onehost add aquila01 one_im one_vmm

makito$ onehost add aquila02 one_im one_vmm

We are telling ONE which drivers it needs in order to run VMs on both hosts: one_im is the information manager (monitoring) driver and one_vmm is the virtual machine manager driver.

Using ONE

Let's do a sample session to make sure everything is working. First, check that the cluster hosts were added smoothly. Issue the following command as <oneadmin> and check the output:

makito$ onehost list
 HID NAME                           RVM   TCPU   FCPU   ACPU    TMEM    FMEM
   0 aquila01                         0      0    100    100       0       0
   1 aquila02                         0      0    100    100       0       0

In this listing, RVM is the number of running VMs on each host; TCPU, FCPU and ACPU are the total, free and available CPU; and TMEM and FMEM are the total and free memory. Once we have checked the nodes, we can submit a VM to ONE using the onevm command. We are going to build a VM template for the image we placed in the /opt/nebula/images directory. The following will do:

DISK=[image="/opt/nebula/images/disk.img",dev="sda1",mode=w]
KERNEL=/boot/vmlinuz-2.6.18-4-xen-amd64
RAMDISK=/boot/initrd.img-2.6.18-4-xen-amd64
MEMORY=64
CPU=0.6

Save it in your home and name it myfirstVM.template.

You can add more parameters; check this for a complete list. Also, you can add more DISK entries if you need, say, a swap partition.
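
For instance, an extra DISK line for a hypothetical swap image (swap.img is an assumed name, not prepared earlier in this guide) would follow the same pattern as the first one:

DISK=[image="/opt/nebula/images/swap.img",dev="sda2",mode=w]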

Once we have tailored the requirements to our needs (especially the CPU and MEMORY fields), ensuring that the VM fits into at least one of the two hosts, let's submit the VM (assuming you are currently in your home folder):

makito$ onevm submit myfirstVM.template

This should come back with an ID that we can use to identify the VM for monitoring and control, again through the onevm command:

makito$ onevm list

The output should look like:

  ID     NAME STAT CPU     MEM        HOSTNAME                TIME
   0    one-0 runn   0   65536        aquila01          00 0:00:02

The STAT field tells the state of the virtual machine. If it shows the runn state, your virtual machine is up and running. Depending on how you set up your image, you may know its IP address; if that is the case, you can try to log into the VM now. Keep that connection alive in another terminal so we can check the live migration, which ought to occur with no apparent downtime.

To perform a live migration we use the onevm command yet again. Assuming the VM was scheduled to aquila01 (with HID=0), let's move it to aquila02 (HID=1):

makito$ onevm livemigrate 0 1

This will move the VM from aquila01 to aquila02. Then, if all went smoothly, your onevm list should show something like the following:

  ID     NAME STAT CPU     MEM        HOSTNAME                TIME
   0    one-0 runn   0   65536        aquila02          00 0:00:06

The last test to verify that the live migration worked is to make sure the SSH connection to the VM is still open. If it is, you have succeeded in completing this simple usage scenario.