Scheduler 3.0

The Scheduler module is in charge of assigning pending Virtual Machines to known Hosts. OpenNebula's architecture defines this module as a separate process that can be started independently of oned. The OpenNebula scheduling framework is designed in a generic way, so it is highly modifiable and can be easily replaced by third-party developments.

The Match-making Scheduler (mm_sched)

OpenNebula comes with a match-making scheduler (mm_sched) that implements the Rank Scheduling Policy. The goal of this policy is to prioritize the resources that are more suitable for the VM.

Configuring the Scheduler

The behavior of the scheduler can be tuned to adapt it to your infrastructure with the following configuration parameters:

OPTION               DESCRIPTION                                                               DEFAULT
-p <port>            port to connect to oned                                                   2633
-t <timer>           seconds between two scheduling actions                                    30
-m <machines limit>  max number of VMs managed in each scheduling action                       300
-d <dispatch limit>  max number of VMs dispatched in each scheduling action                    30
-h <host dispatch>   max number of VMs dispatched to a given host in each scheduling action    1

These parameters can be set up in the /usr/bin/one script; just locate the line where the scheduler is started and update the values: <xterm> …

      # Start the scheduler
      # The following command line arguments are supported by mm_sched:
      #  [-p port]           to connect to oned - default: 2633
      #  [-t timer]          seconds between two scheduling actions - default: 30
      #  [-m machines limit] max number of VMs managed in each scheduling action
      #                      - default: 300
      #  [-d dispatch limit] max number of VMs dispatched in each scheduling action
      #                      - default: 30
      #  [-h host dispatch]  max number of VMs dispatched to a given host in each
      #                      scheduling action - default: 1

      $ONE_SCHEDULER -p $PORT -t 30 -m 300 -d 30 -h 1&

… </xterm>

The optimal values of the scheduler parameters depend on the hypervisor, the storage subsystem and the number of physical hosts. The values can be derived by finding out the maximum number of VMs that can be started in your setup without getting hypervisor-related errors.
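For example, if your tests show that each host can reliably boot two VMs per scheduling action and you manage around 100 hosts, you could raise the dispatch limits accordingly; the figures below are only illustrative, not recommended values: <xterm>

      $ONE_SCHEDULER -p $PORT -t 30 -m 300 -d 200 -h 2 &

</xterm>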

The Match-making Algorithm

The match-making algorithm works as follows:

  • First, those hosts that do not meet the VM requirements (see the ''REQUIREMENTS'' attribute) or do not have enough resources (available CPU and memory) to run the VM are filtered out.
  • The ''RANK'' expression is evaluated upon this list using the information gathered by the monitor drivers. Any variable reported by the monitor driver can be included in the rank expression.
  • Those resources with a higher rank are used first to allocate VMs (a minimal sketch of these steps follows this list).
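The snippet below is an illustrative Python sketch of these steps, not the actual mm_sched implementation; the Host fields and the requirements and rank callables are hypothetical stand-ins for the ''REQUIREMENTS''/''RANK'' expressions and the data gathered by the monitor drivers.

      from dataclasses import dataclass
      from typing import Callable, List

      @dataclass
      class Host:
          free_cpu: float     # available CPU reported by the monitor driver
          free_memory: int    # available memory (MB) reported by the monitor driver
          running_vms: int    # used, for instance, by the packing/striping policies

      def match(hosts: List[Host], cpu: float, memory: int,
                requirements: Callable[[Host], bool],
                rank: Callable[[Host], float]) -> List[Host]:
          # 1. Filter out hosts that fail REQUIREMENTS or lack free CPU/memory
          candidates = [h for h in hosts
                        if requirements(h) and h.free_cpu >= cpu and h.free_memory >= memory]
          # 2-3. Evaluate RANK on the remaining hosts; higher ranks are used first
          return sorted(candidates, key=rank, reverse=True)

      # Example: the striping policy (RANK = "- RUNNING_VMS")
      # placement = match(hosts, cpu=1.0, memory=512,
      #                   requirements=lambda h: True,
      #                   rank=lambda h: -h.running_vms)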

Placement Policies

You can implement several placement heuristics by carefully choosing the RANK expression. Note that each VM has its own RANK and so its own policy. This is especially relevant when configuring a Cloud Interface, as you can apply different policies to different instance types.
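For example, a VM template can carry its own placement policy, combining a ''REQUIREMENTS'' filter with a ''RANK'' expression over the monitored variables; the attribute values below are only illustrative:

      CPU          = 1
      MEMORY       = 512
      REQUIREMENTS = "FREECPU > 50"
      RANK         = "FREECPU"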

Packing Policy

  • Target: Minimize the number of cluster nodes in use
  • Heuristic: Pack the VMs in the cluster nodes to reduce VM fragmentation
  • Implementation: Use those nodes with more VMs running first
RANK = RUNNING_VMS

Striping Policy

  • Target: Maximize the resources available to VMs in a node
  • Heuristic: Spread the VMs in the cluster nodes
  • Implementation: Use those nodes with fewer VMs running first
RANK = "- RUNNING_VMS"

Load-aware Policy

  • Target: Maximize the resources available to VMs in a node
  • Heuristic: Use those nodes with less load
  • Implementation: Use those nodes with more FREECPU first
RANK = FREECPU
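
RANK expressions can also combine several of the monitored variables to mix policies. For instance, a load-aware variant that additionally penalizes hosts already running many VMs could be expressed as follows; the weight is arbitrary and this assumes both variables are reported by your monitor drivers and that arithmetic expressions are accepted, as in the striping policy above.
RANK = "FREECPU - RUNNING_VMS * 10"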