Information Driver 1.2

The Information Manager (IM) is in charge of monitoring the cluster nodes. It comes with various sensors, each one responsible for a different aspect of the host to be monitored (CPU, memory, hostname…). There are also sensors prepared to gather information from different hypervisors (currently KVM and XEN).

Requirements

The requirements depend on the sensors that make up the IM driver: if the KVM or XEN sensor is used, the corresponding hypervisor must be available on the hosts to be monitored. Also, as for all OpenNebula configurations, passwordless SSH access to the hosts must be possible.
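
For example, passwordless SSH for the user that runs OpenNebula can typically be set up as sketched below (the oneadmin user name and the host name are just placeholders):

$> ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa      # generate a key pair without a passphrase
$> ssh-copy-id oneadmin@host1.mydomain.org       # install the public key on the cluster node
$> ssh oneadmin@host1.mydomain.org hostname      # verify that no password is requested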

Driver Files

This section details the files used by the Information Drivers to monitor the cluster nodes. These files are placed in the following directories:

  • $ONE_LOCATION/lib/mads/: The driver executable files
  • $ONE_LOCATION/lib/im_probes/: The probes run on the cluster nodes to gather each monitoring metric
  • $ONE_LOCATION/etc/: The driver configuration files, usually in subdirectories of the form im_<virtualizer>.

Note: If OpenNebula was installed in system-wide mode, these directories become /usr/lib/one/mads, /usr/lib/one/im_probes and /etc/one, respectively. The rest of this guide refers to the $ONE_LOCATION paths (corresponding to self-contained mode) and omits the equivalent system-wide locations. More information on installation modes can be found in the installation guide.

Common files

These files are used by the IM regardless of the hypervisor present on the machine to be monitored:

  • $ONE_LOCATION/lib/mads/one_im_ssh : shell script wrapper to the driver itself. It sets the environment and performs other bootstrap tasks.
  • $ONE_LOCATION/lib/mads/one_im_ssh.rb : The actual Information driver.
  • $ONE_LOCATION/lib/im_probes/* : The sensors' home: small scripts or binaries that extract information from the remote hosts. Let's look at a simple one to understand how they work:
$> cat $ONE_LOCATION/lib/im_probes/name.sh
#!/bin/sh
 
echo NAME=`uname -n`

This uses the uname command to get the hostname of the remote cluster host, and then outputs the information as:

NAME=host1.mydomain.org
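
Any probe you write should follow the same KEY=VALUE output convention. As a further illustration, a hypothetical probe reporting free memory could look like the sketch below (the FREEMEMORY key name and the use of /proc/meminfo are assumptions for this example, not one of the stock probes):

#!/bin/sh
 
# Hypothetical probe: free memory of the host in kilobytes
echo FREEMEMORY=`grep MemFree /proc/meminfo | awk '{print $2}'`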

Hypervisor specific

The following are pre-made configuration files to get the IM working with different hypervisors. These files are not fixed: they can be modified, and the IM can even be set to work with different files (this is set in the OpenNebula configuration file; more details in the next section).

XEN Hypervisor

  • $ONE_LOCATION/etc/im_xen/im_xenrc : environment setup and bootstrap instructions
  • $ONE_LOCATION/etc/im_xen/im_xen.conf : This file defines REMOTE_DIR, the path where the sensors will be uploaded on the remote physical host to perform the monitoring. It also defines which sensors will be used (you can add and remove probes as you wish; see the sketch after this list). For XEN the defaults are:
cpuarchitecture=architecture.sh
nodename=name.sh
cpu=cpu.sh
xen=xen.rb
  • $ONE_LOCATION/lib/im_probes/xen.rb : XEN-specific sensor.
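
Putting it together, an im_xen.conf might look like the following sketch; the REMOTE_DIR value shown here is only an illustrative assumption, so check the file shipped with your installation for the actual default:

# Path on the remote physical host where the probes are copied (example value only)
REMOTE_DIR=/tmp/one-im
 
# Probes to run, in the form <metric>=<probe script>
cpuarchitecture=architecture.sh
nodename=name.sh
cpu=cpu.sh
xen=xen.rb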

KVM Hypervisor

  • $ONE_LOCATION/etc/im_kvm/im_kvmrc : environment setup and bootstrap instructions
  • $ONE_LOCATION/etc/im_kvm/im_kvm.conf : This file defines REMOTE_DIR, the path where the sensors will be uploaded on the remote physical host to perform the monitoring. It also defines which sensors will be used (you can add and remove probes as you wish). For KVM the defaults are:
cpuarchitecture=architecture.sh
nodename=name.sh
cpu=cpu.sh
kvm=kvm.rb
  • $ONE_LOCATION/lib/im_probes/kvm.rb : KVM-specific sensor.

OpenNebula Configuration

The OpenNebula daemon loads its drivers whenever it starts. Inside $ONE_LOCATION/etc/oned.conf there are definitions for the drivers. The following lines will configure OpenNebula to use the XEN probes:

IM_MAD = [
    name       = "im_xen",
    executable = "bin/one_im_ssh",
    arguments  = "im_xen/im_xen.conf",
    default    = "im_xen/im_xen.conf" ]

Equivalently for KVM, you'd put the following in oned.conf:

IM_MAD = [ 
    name       = "im_kvm",
    executable = "one_im_ssh",
    arguments  = "im_kvm/im_kvm.conf",
    default    = "im_kvm/im_kvm.conf" ]

And finally for EC2:

IM_MAD = [
    name       = "im_ec2",
    executable = "one_im_ec2",
    arguments  = "im_ec2/im_ec2.conf",
    default    = "im_ec2/im_ec2.conf" ]

Please remember that you can add your own custom probes, whose metrics can later be used by other OpenNebula modules such as the scheduler.
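
For instance, a custom probe reporting the number of logged-in users could look like the sketch below; the probe file name and the LOGGEDUSERS key are purely illustrative:

$> cat $ONE_LOCATION/lib/im_probes/users.sh
#!/bin/sh
 
# Hypothetical custom probe: number of users currently logged in
echo LOGGEDUSERS=`who | wc -l`

To enable it, add a line such as users=users.sh to the im_<virtualizer>.conf file in use.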

Testing

In order to test the driver, add a host to OpenNebula using onehost, specifying the defined IM driver:

# onehost create ursa06 im_xen vmm_xen tm_ssh

Now give it time to monitor the host (this time is determined by the value of HOST_MONITORING_INTERVAL in $ONE_LOCATION/etc/oned.conf). After one interval, check the output of onehost list; it should look like the following:

 HID NAME                      RVM   TCPU   FCPU   ACPU    TMEM    FMEM STAT
   0 ursa06                      1    800    797    700 8387584 4584448   on
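
If the host takes a long time to show up as monitored, you can check (or temporarily lower) the monitoring interval in $ONE_LOCATION/etc/oned.conf; the 600 seconds below is just an example value:

HOST_MONITORING_INTERVAL = 600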