Planning the Installation (OpenNebula 4.4)
To get the most out of an OpenNebula cloud, we recommend that you create a plan with the features, performance, scalability, and high availability characteristics you want in your deployment. This guide provides information to help you plan an OpenNebula installation, so you can easily architect your deployment and understand the technologies involved in the management of virtualized resources and how they relate to each other.
OpenNebula assumes that your physical infrastructure adopts a classical cluster-like architecture with a front-end and a set of hosts where Virtual Machines (VMs) will be executed. There is at least one physical network joining all the hosts with the front-end.
The basic components of an OpenNebula system are:
- Front-end, which executes the OpenNebula services.
- Hypervisor-enabled hosts, which provide the resources needed by the VMs.
- Datastores, which hold the base images of the VMs.
- Physical networks, used to support basic services such as the interconnection of the storage servers and OpenNebula control operations, and the VLANs for the VMs.
OpenNebula presents a highly modular architecture that offers broad support for commodity and enterprise-grade hypervisor, monitoring, storage, networking and user management services. This guide briefly describes the choices you can make for the management of the different subsystems. If your specific services are not supported, we recommend checking the drivers available in the Add-on Catalog. We also provide information and support on how to develop new drivers.
The machine that holds the OpenNebula installation is called the front-end. This machine needs network connectivity to each host, and possibly access to the storage Datastores (either by direct mount or network). The base installation of OpenNebula takes less than 50MB.
OpenNebula services include:
- the management daemon (oned) and scheduling daemon (mm_sched)
- the web interface server (sunstone-server)
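As a quick orientation, the sketch below shows how these services are typically started on a front-end of this release once the packages are installed. Treat it as illustrative: the oneadmin account is the one created by the OpenNebula packages, and service management details can vary with your distribution and installation method.

    # Run as the oneadmin user created by the OpenNebula packages
    $ su - oneadmin

    # Start the management daemon (oned) and the scheduler (mm_sched)
    $ one start

    # Start the Sunstone web interface (listens on port 9869 by default)
    $ sunstone-server start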
There are several certified platforms to act as front-end for each version of OpenNebula. Refer to the platform notes and choose the one that better fits your needs.
OpenNebula uses SQLite as its default database backend. If you are planning a production or medium- to large-scale deployment, you should consider using MySQL instead.
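The database backend is selected in /etc/one/oned.conf. The excerpt below is a minimal sketch of switching from the default SQLite backend to MySQL; the server, user, password and database name are placeholder values that you should adapt to your MySQL setup, and oned must be restarted for the change to take effect.

    # /etc/one/oned.conf -- database backend selection
    # Default backend:
    #DB = [ backend = "sqlite" ]

    # MySQL backend (placeholder credentials, adjust to your environment)
    DB = [ backend = "mysql",
           server  = "localhost",
           port    = 0,
           user    = "oneadmin",
           passwd  = "oneadmin",
           db_name = "opennebula" ]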
If you are interested in setting up a highly available cluster for OpenNebula, check the OpenNebula High Availability Guide.
The maximum number of servers (virtualization hosts) that can be managed by a single OpenNebula instance (zone) strongly depends on the performance and scalability of the underlying platform infrastructure, mainly the storage subsystem. We do not recommend more than 500 servers within each zone, although there are users with 1,000 servers per zone. The OpenNebula Zones (oZones) component allows for the centralized management of multiple OpenNebula instances (zones), which may in turn manage different administrative domains. You may also find the guide on how to tune OpenNebula for large deployments useful.
The monitoring subsystem gathers information about the hosts and the virtual machines, such as host status, basic performance indicators, VM status, and capacity consumption. This information is collected by executing a set of static probes provided by OpenNebula. The output of these probes is sent to OpenNebula in two different ways:
- UDP-push model: each host periodically sends monitoring data via UDP to the front-end, which collects and processes it in a dedicated module. This model is highly scalable, and its limit (in terms of VMs monitored per second) is bounded by the performance of the server running oned and of the database server. Please read the UDP-push guide for more information.
- SSH pull model: OpenNebula periodically queries each host and executes the probes via ssh. This mode is limited by the number of active connections that can be made concurrently, as hosts are queried sequentially. Please read the KVM and Xen SSH-pull guide or the ESX-pull guide for more information.
Please check the Monitoring Guide for more details.
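The monitoring driver used for a host is selected among the IM_MAD sections defined in /etc/one/oned.conf. The excerpt below sketches the kind of entries found there for KVM (a collectd listener for UDP-push and the probe driver executed over SSH); the exact driver names and arguments depend on the release, so check the oned.conf shipped with your version rather than copying these values verbatim.

    # /etc/one/oned.conf -- information (monitoring) drivers, illustrative excerpt

    # UDP listener on the front-end that receives pushed monitoring data
    IM_MAD = [
          name       = "collectd",
          executable = "collectd",
          arguments  = "-p 4124 -f 5 -t 50 -i 20" ]

    # Probe driver for KVM hosts
    IM_MAD = [
          name       = "kvm",
          executable = "one_im_ssh",
          arguments  = "-r 3 -t 15 kvm" ]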
The hosts are the physical machines that will run the VMs. There are several certified platforms to act as nodes for each version of OpenNebula. Refer to the platform notes and choose the one that better fits your needs. The Virtualization Subsystem is the component in charge of talking with the hypervisor installed in the hosts and taking the actions needed for each step in the VM life cycle.
OpenNebula natively supports three hypervisors: Xen, KVM, and VMware.
Please check the Virtualization Guide for more details of the supported virtualization technologies.
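Hosts are registered in OpenNebula with the onehost command, indicating which virtualization, monitoring and networking drivers should be used for them. Below is a minimal sketch for a KVM node; the host name kvm-node01 is a placeholder, and the dummy network driver is just one of the possible choices described later in this guide.

    # Register a KVM host using the KVM virtualization and monitoring drivers
    # and the dummy network driver (host name is a placeholder)
    $ onehost create kvm-node01 --im kvm --vm kvm --net dummy

    # Check that the host reaches the MONITORED state
    $ onehost list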
If you are interested in failover protection against hardware and operating system outages within your virtualized IT environment, check the Virtual Machines High Availability Guide.
OpenNebula uses Datastores to handle the VM disk Images. A Datastore is any storage medium used to store disk images for VMs; previous versions of OpenNebula referred to this concept as the Image Repository. Typically, a Datastore will be backed by SAN/NAS servers. In general, each Datastore has to be accessible through the front-end using any suitable technology: NAS, SAN, or direct attached storage.
When a VM is deployed, its Images are transferred from the Datastore to the hosts. Depending on the actual storage technology used, this can mean an actual transfer, a symbolic link, or setting up an LVM volume.
OpenNebula is shipped with three different datastore classes:
- System, to hold images for running VMs. Depending on the storage technology used, these temporary images can be complete copies of the original image, qcow deltas, or simple filesystem links.
- Images, to store the disk image repository. Disk images are moved to, or cloned from, the System datastore when VMs are deployed or shut down, or when disks are attached or snapshotted.
- Files, a special datastore used to store plain files rather than disk images. These plain files can be used as kernels, ramdisks, or context files.
Image datastores can be of different types, depending on the underlying storage technology:
- File-system, to store disk images in file form, typically in a directory mounted from a SAN/NAS server.
- vmfs, a datastore specialized in the VMFS format, to be used with VMware hypervisors.
- LVM, to store disk images in LVM logical volumes instead of plain files.
- Ceph, to store disk images using Ceph block devices.
Please check the Storage Guide for more details.
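New datastores are registered from a template that selects the datastore driver (DS_MAD) and the transfer driver (TM_MAD). Below is a minimal sketch for a shared filesystem Image datastore; the datastore name is a placeholder, and the right driver combination depends on the storage technology you chose above.

    $ cat ds.conf
    NAME   = production_images    # placeholder name
    DS_MAD = fs                   # filesystem datastore driver
    TM_MAD = shared               # images are shared (e.g. over NFS), not copied via SSH

    $ onedatastore create ds.conf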
OpenNebula provides an easily adaptable and customizable network subsystem in order to better integrate with the specific network requirements of existing datacenters. At least two different physical networks are needed:
- A service network, used by the OpenNebula front-end daemons to access the hosts in order to manage and monitor the hypervisors and to move image files.
- An instance network, needed to offer network connectivity to the VMs across the different hosts.
The OpenNebula administrator may associate one of the following drivers with each Host: dummy (default, no network isolation), fw (firewall rules via iptables, no isolation), 802.1Q (VLAN tagging), ebtables (isolation through ebtables rules), ovswitch (Open vSwitch), and vmware (VMware networking infrastructure).
Please check the Networking Guide to find out more about the networking technologies supported by OpenNebula.
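Virtual networks themselves are created from templates; the bridge they refer to must exist and be named consistently on every host. Below is a minimal sketch of a ranged network bound to a Linux bridge, with placeholder names and addresses; the exact attributes accepted vary between OpenNebula versions.

    $ cat private.net
    NAME            = "private"        # placeholder network name
    TYPE            = RANGED
    BRIDGE          = br0              # bridge that must exist on every host
    NETWORK_ADDRESS = 192.168.100.0
    NETWORK_SIZE    = 250

    $ onevnet create private.net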
You can choose from the following authentication models to access OpenNebula: built-in user/password, SSH, X.509, and LDAP authentication.
Please check the External Auth guide to find out more about the authentication technologies supported by OpenNebula.
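The authentication drivers enabled for users are listed in the AUTH_MAD section of /etc/one/oned.conf, and each user account is created with one of them. The sketch below is illustrative; the user names, password and certificate DN are placeholders, and the set of drivers enabled may differ in your installation.

    # /etc/one/oned.conf -- authentication drivers enabled for users (excerpt)
    AUTH_MAD = [
        executable = "one_auth_mad",
        authn = "ssh,x509,ldap,server_cipher,server_x509" ]

    # Create a user with the default built-in user/password driver
    $ oneuser create someuser somepassword

    # Create a user that will authenticate with an X.509 certificate DN
    $ oneuser create x509user "/C=ES/O=ONE/CN=someuser" --driver x509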
Once you have an OpenNebula cloud up and running, you can install the following advanced components:
- Multi-tier Applications and Auto-scaling: OneFlow allows users and administrators to define, execute and manage services composed of interconnected VMs, with deployment dependencies and automatic scaling.
- Cloud Bursting: combine the local resources of your private cloud with resources from remote cloud providers to meet peaks in demand.
- Public Cloud: cloud interfaces can be attached to your private cloud to expose it as a public cloud to external users.
- Application Insight: OneGate allows VM guests to push monitoring information to OpenNebula, which can be used to detect problems in applications and to trigger OneFlow auto-scaling rules.