Information and Monitoring Guide 1.4
The Information Manager (IM) is in charge of monitoring the cluster nodes. It comes with various sensors, each one responsible for a different aspect of the host to be monitored (CPU, memory, hostname…). There are also sensors prepared to gather information from different hypervisors (currently KVM and XEN).
Depending on the sensors that make up the IM driver there are different requirements, mainly the availability of the corresponding hypervisor if the KVM or XEN sensor is used at all. Also, as for all OpenNebula configurations, passwordless SSH access to the hosts must be possible.
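A typical way to set this up, assuming the <oneadmin> account drives the hosts (the account and host names below are just examples), is to generate an SSH key pair and copy the public key to every cluster node:
<xterm>
$> ssh-keygen -t rsa
$> ssh-copy-id oneadmin@host1.mydomain.org
$> ssh oneadmin@host1.mydomain.org uname -n
</xterm>
The last command should print the remote hostname without asking for a password.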
This section details the files used by the Information Drivers to monitor the cluster nodes. These files are placed in the following directories:
$ONE_LOCATION/lib/mads/
: The driver executable files
$ONE_LOCATION/lib/im_probes/
: The probes sent to the cluster nodes to gather each monitoring metric
$ONE_LOCATION/etc/
: The driver configuration files, usually in subdirectories of the form im_<virtualizer>.
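For example, on a self-contained installation the available probes can be listed as follows (the exact set of files may vary between versions):
<xterm>
$> ls $ONE_LOCATION/lib/im_probes/
architecture.sh  cpu.sh  kvm.rb  name.sh  xen.rb
</xterm>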
Note: If OpenNebula was installed in system-wide mode, these directories become /usr/lib/one/mads, /usr/lib/one/im_probes and /etc/one, respectively. The rest of this guide refers to the $ONE_LOCATION paths (corresponding to self-contained mode) and omits the equivalent system-wide locations. More information on installation modes can be found here.
These files are used by the IM regardless of the hypervisor present on the machine to be monitored:
<xterm>
$> cat $ONE_LOCATION/lib/im_probes/name.sh
#!/bin/sh

echo NAME=`uname -n`
</xterm>
This uses the uname command to get the hostname of the remote cluster host, and then outputs the information as:
NAME=host1.mydomain.org
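Other probes follow the same pattern: a small script that prints one or more ATTRIBUTE=value lines. As an illustrative sketch (the attribute name below is an assumption; check the architecture.sh shipped with your installation), the CPU architecture probe could look like:
<xterm>
$> cat $ONE_LOCATION/lib/im_probes/architecture.sh
#!/bin/sh

# report the machine hardware name (e.g. x86_64) of the cluster node
echo CPUARCHITECTURE=`uname -m`
</xterm>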
The following are pre-made configuration files to get the IM working with different hypervisors. These files are not fixed: they can be modified, and the IM can even be set to work with different files (this is set in the OpenNebula configuration file; more details in the next section).
There is always one mandatory attribute set by all the Information Drivers: HYPERVISOR, set to the kind of hypervisor (XEN, KVM, VMWare, EC2, etc.) present on the host this particular Information Driver is monitoring.
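For example, the monitoring information of a host watched by the Xen driver will include a line like the following (written by the hypervisor-specific probe):
HYPERVISOR=xen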
For Xen, the default configuration file $ONE_LOCATION/etc/im_xen/im_xen.conf contains:

cpuarchitecture=architecture.sh
nodename=name.sh
cpu=cpu.sh
xen=xen.rb
For KVM, $ONE_LOCATION/etc/im_kvm/im_kvm.conf contains:

cpuarchitecture=architecture.sh
nodename=name.sh
cpu=cpu.sh
kvm=kvm.rb
The OpenNebula daemon loads its drivers whenever it starts. Inside $ONE_LOCATION/etc/oned.conf there are definitions for the drivers. The following lines will configure OpenNebula to use the Xen probes:
IM_MAD = [ name = "im_xen", executable = "bin/one_im_ssh", arguments = "im_xen/im_xen.conf", default = "im_xen/im_xen.conf" ]
Equivalently for KVM, you'd put the following in oned.conf:
IM_MAD = [ name = "im_kvm", executable = "one_im_ssh", arguments = "im_kvm/im_kvm.conf", default = "im_kvm/im_kvm.conf" ]
And finally for EC2:
IM_MAD = [ name = "im_ec2", executable = "one_im_ec2", arguments = "im_ec2/im_ec2.conf", default = "im_ec2/im_ec2.conf" ]
Please remember that you can add your custom probes for later use by other OpenNebula modules like the scheduler.
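As a hypothetical sketch, a custom probe that reports the number of active login sessions would just print another ATTRIBUTE=value line (both the script and the attribute name are made up for illustration):
<xterm>
$> cat $ONE_LOCATION/lib/im_probes/users.sh
#!/bin/sh

# hypothetical probe: number of active login sessions on the node
echo LOGGEDUSERS=`who | wc -l`
</xterm>
It would then be registered by adding a line such as loggedusers=users.sh to the driver configuration file described above.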
In order to test the driver, add a host to OpenNebula using onehost, specifying the defined IM driver:
<xterm> # onehost create ursa06 im_xen vmm_xen tm_ssh </xterm>
Now give it time to monitor the host (this time is determined by the value of HOST_MONITORING_INTERVAL in $ONE_LOCATION/etc/oned.conf). After one interval, check the output of onehost list; it should look like the following:
<xterm>
 HID NAME       RVM  TCPU  FCPU  ACPU    TMEM    FMEM STAT
   0 ursa06       1   800   797   700 8387584 4584448   on
</xterm>
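The full set of attributes gathered by the probes, including HYPERVISOR, can be inspected for a particular host (0 is the host ID shown in the listing above):
<xterm>
$> onehost show 0
</xterm>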