Planning the Installation 2.2
OpenNebula assumes that your physical infrastructure adopts a classical cluster-like architecture with a front-end, and a set of cluster nodes where Virtual Machines will be executed. There is at least one physical network joining all the cluster nodes with the front-end.
The basic components of an OpenNebula system are:
  * Front-end, the machine that runs the OpenNebula services.
  * Cluster nodes, hypervisor-enabled hosts that provide the resources needed by the Virtual Machines.
  * Image repository, the storage that holds the base images of the VMs.
  * Physical network, joining the cluster nodes with the front-end.
This section details the software that you need to install in the front-end to run and build OpenNebula. The front-end will access the image repository that should be big enough to store the VM images for your private cloud. Usually, these are master images that are cloned (copied) when you start the VM. So you need to plan your storage requirements depending on the number of VMs you'll run in your virtual infrastructure (see the section below for more information). The base installation of OpenNebula only takes 10MB.
OpenNebula can be installed in two modes:
  * system-wide: binaries, log files and configuration files are placed in standard UNIX locations under the root file system.
  * self-contained: the whole OpenNebula installation is placed in a single directory (e.g. /srv/cloud/one).
In either case, you do not need the root account to run the OpenNebula services.
This machine will act as the OpenNebula server and therefore needs to have the following software installed:
Additionally, to build OpenNebula from source you need:
Then install the following packages:
  * expat libraries with their development files, and install xmlparser using gem:<xterm># gem install xmlparser --no-ri --no-rdoc</xterm>
  * nokogiri library:<xterm># gem install nokogiri</xterm>
The nodes will run the VMs and do not have any additional storage requirements; see the storage section for more details.
These are the requirements for the cluster nodes that will run the VMs:
This section guides you through the steps needed to prepare your cluster to run a private cloud.
The OpenNebula var directory should be accessible in the cluster nodes through a Shared FS of any kind. In this way you can take full advantage of the hypervisor capabilities (i.e. live migration) and of the OpenNebula Storage module (i.e. no need to always clone images).
OpenNebula can work without a Shared FS, but then you will always have to clone the images and you will only be able to do cold migrations. However, this non-shared configuration does not impose any significant storage requirements. If you want to use this configuration, check the Storage Customization guide and skip this section.
The cluster front-end will export the image repository and the OpenNebula installation directory to the cluster nodes. The size of the image repository depends on the number (and size) of the images you want to store. Also, when you start a VM you will usually be cloning (copying) its image, so make sure there is enough space to store all the running VM images.
Create the following hierarchy in the front-end root file system:
  * /srv/cloud/one, will hold the OpenNebula installation and the clones for the running VMs
  * /srv/cloud/images, will hold the master images and the repository
<xterm>$ tree /srv
/srv/
`-- cloud
    |-- one
    `-- images
</xterm>
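If you are preparing the front-end by hand, this hierarchy can be created with a couple of mkdir commands; a minimal sketch (ownership is adjusted later, once the oneadmin account described below exists):
<xterm># mkdir -p /srv/cloud/one /srv/cloud/images</xterm>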
Plan enough space under /srv/cloud/one for the clones of the running VMs; you will also want to store 10-15 master images, so ~200GB for /srv/cloud/images. A 1TB /srv/cloud will be enough for this example setup.
Export /srv/cloud to all the cluster nodes. For example, if you have all your physical nodes in a local network with address 192.168.0.0/24, you will need to add a line like this to your /etc/exports file:
<xterm>$ cat /etc/exports
/srv/cloud 192.168.0.0/255.255.255.0(rw)
</xterm>
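After editing /etc/exports, make the NFS server re-read the export table. A common way to do this (the exact procedure may vary with your distribution and NFS server) is:
<xterm># exportfs -ra</xterm>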
In each cluster node create /srv/cloud and mount this directory from the front-end.
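For example, assuming the front-end is reachable from the nodes under the hypothetical hostname front-end, a node could mount the export like this (or through an equivalent /etc/fstab entry):
<xterm># mkdir -p /srv/cloud
# mount -t nfs front-end:/srv/cloud /srv/cloud</xterm>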
The Virtual Infrastructure is administered by the oneadmin account. This account will be used to run the OpenNebula services and to do regular administration and maintenance tasks.
Follow these steps:
1. Create the cloud group where the OpenNebula administrator user will be:<xterm># groupadd cloud</xterm>
2. Create the OpenNebula administrative account (oneadmin); we will use the OpenNebula directory as the home directory for this user:<xterm># useradd -d /srv/cloud/one -g cloud -m oneadmin</xterm>
3. Check the user and group ID assigned to oneadmin in the front-end:<xterm>$ id oneadmin
uid=1001(oneadmin) gid=1001(cloud) groups=1001(cloud)
</xterm> In this case the user ID is 1001 and the group ID is also 1001.
4. Create the same group and user (with the same IDs) in each cluster node:<xterm># groupadd --gid 1001 cloud
# useradd --uid 1001 -g cloud -d /srv/cloud/one oneadmin
</xterm>
The oneadmin unix user should be used to run OpenNebula commands in the front-end. That means that you eventually need to log in as that user or to use the “sudo -u oneadmin” method. You can assign or change the password of an existing oneadmin account by running “passwd oneadmin” as root.
Alternatively, you can use a directory service to provide the cloud group and oneadmin account in the nodes, for example NIS.
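However the accounts are provided, the image repository created earlier has to be writable by the administrative account. A minimal sketch, run as root on the front-end and assuming the /srv/cloud hierarchy from above:
<xterm># chown -R oneadmin:cloud /srv/cloud/images</xterm>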
There are no special requirements for networking, apart from those derived from the previous configuration steps. However, to make effective use of your VM deployments you'll probably need to make one or more physical networks accessible to them.
This is achieved with ethernet bridging; there are several guides that explain how to deal with this configuration, see for example the networking howto in the Xen documentation.
For example, a typical cluster node with two physical networks, one for public IP addresses (attached to the eth0 NIC) and the other for private virtual LANs (NIC eth1), should have two bridges:
<xterm>
$ brctl show
bridge name bridge id STP enabled interfaces
vbr0 8000.001e682f02ac no eth0
vbr1 8000.001e682f02ad no eth1
</xterm>
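As an illustration, such bridges can be created by hand with the bridge-utils tools (in practice you would normally make them persistent through your distribution's network configuration; the interface names come from the example above):
<xterm># brctl addbr vbr0
# brctl addif vbr0 eth0
# brctl addbr vbr1
# brctl addif vbr1 eth1
</xterm>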
For more details on using virtual networks please check the Virtual Network Usage Guide and the Networking Customization Guide.
You need to create ssh keys for the oneadmin user and configure the machines so it can connect to them using ssh without the need for a password.
1. Generate oneadmin ssh keys:<xterm>$ ssh-keygen</xterm> When prompted for a passphrase press enter so the private key is not encrypted.
2. Copy the public key to ~/.ssh/authorized_keys to let the oneadmin user log in without the need to type a password. Do that also for the front-end:<xterm>$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys</xterm>
3. Tell the ssh client not to ask before adding hosts to the known_hosts file. This goes into ~/.ssh/config:<xterm>$ cat ~/.ssh/config
Host *
    StrictHostKeyChecking no
</xterm>
4. Check that the sshd daemon is running in the cluster nodes. oneadmin must be able to log into the cluster nodes without being prompted for a password. Also remove any Banner option from the sshd_config file in the cluster nodes.
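A quick way to verify the setup is to run a trivial command on each node as oneadmin; node01 here is a hypothetical node hostname:
<xterm>$ ssh node01 hostname</xterm>
If this prints the node's hostname without asking for a password and without any extra banner output, the ssh configuration is correct.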
The virtualization technology installed in your cluster nodes has to be configured so that the oneadmin user can start, control and monitor VMs. This usually means the execution of commands with root privileges, or making oneadmin part of a given group. Please take a look at the virtualization guide that fits your site:
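Purely as an illustration of the second option (the authoritative steps are in the hypervisor guides referenced above), on a KVM/libvirt node this often amounts to adding oneadmin to the libvirt management group, whose name varies by distribution (libvirtd, libvirt or kvm):
<xterm># usermod -a -G libvirtd oneadmin</xterm>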
If you have followed the previous steps, your cluster should be ready to install and configure OpenNebula. You may want to print the following checklists to verify your plan before proceeding with the installation and configuration steps.
Software Requirements
ACTION | DONE/COMMENTS |
---|---|
Installation type: self-contained, system-wide | self-contained |
Installation directory | /srv/cloud/one |
OpenNebula software downloaded to /srv/cloud/one/SRC | |
sqlite, g++, scons, ruby and software requirements installed | |

User Accounts
ACTION | DONE/COMMENTS |
---|---|
oneadmin account and cloud group ready in the nodes and front-end | |

Storage Checklist
ACTION | DONE/COMMENTS |
---|---|
/srv/cloud structure created in the front-end | |
/srv/cloud exported and accessible from the cluster nodes | |
mount point of /srv/cloud in the nodes if different | VMDIR=<mount_point>/var/ |

Cluster Nodes Checklist
ACTION | DONE/COMMENTS |
---|---|
hostnames of cluster nodes | |
ruby, sshd installed in the nodes | |
oneadmin can ssh the nodes passwordless | |