Networking Guide 1.2
The physical hosts that form the fabric of our virtual infrastructure must meet some constraints in order to deliver virtual networks effectively to our virtual machines. Therefore, from the networking point of view, we can define our physical cluster as a set of hosts with one or more network interfaces, each of them connected to a different physical network.
The examples used throughout this section are based on the figure depicted above. The picture shows two physical hosts with two network interfaces each; thus there are two different physical networks. One physical network connects the two hosts using a switch, and the other gives the hosts access to the public internet. This is one possible configuration for the physical cluster, and it is the one we recommend, since it can be used to make both private and public virtual networks for our virtual machines. Moving up to the virtualization layer, we can distinguish three different virtual networks. One is mapped on top of the public internet network, and we can see a couple of virtual machines taking advantage of it; these two VMs will therefore have access to the internet. The other two are mapped on top of the private physical network: the Red and Blue virtual networks (VNs). Virtual machines connected to the same private VN will be able to communicate with each other; otherwise they will be isolated and unable to communicate.
In order to allow multiple VNs to share a physical network interface in a transparent way, we recommend the use of Ethernet bridging. In this setup, a software bridge is associated with each physical network interface card, so an arbitrary number of VMs can be connected to the same physical NIC.
Details on how to configure the physical host in this fashion can be found in the Xen documentation.
OpenNebula allows for the creation of Virtual Networks by mapping them on top of the physical ones. All Virtual Networks share a default value for the MAC prefix, set in the oned.conf file.
There are two types of Virtual Networks in OpenNebula: Fixed and Ranged. Basically, a VN defines, regardless of how the definition is performed, a set of pairs formed by one IP and one MAC address to be delivered to a VM when it enters the OpenNebula core. We call each of these pairs a lease.
Conceptually, Fixed VNs are the simplest available. They consist of a set of IP addresses and associated MACs, defined in a text file. If an IP is defined with no associated MAC, OpenNebula will generate one using the following rule:
MAC_ADDRESS = MAC_PREFIX:IP_IN_HEXADECIMAL
So, for example, from IP 10.0.0.1 and MAC_PREFIX 00:16, OpenNebula will generate the MAC 00:16:0a:00:00:01. Defining only a MAC address with no associated IP is not allowed.
We need four pieces of information to define a fixed VN: its name, its type (FIXED), the bridge the VN maps to, and the set of leases.
Let's see an example of a Fixed Virtual Network template:
NAME   = "Public"
TYPE   = FIXED
BRIDGE = eth1
LEASES = [IP=130.10.0.1,MAC=50:20:20:20:20:20]
LEASES = [IP=130.10.0.2,MAC=50:20:20:20:20:21]
LEASES = [IP=130.10.0.3]
LEASES = [IP=130.10.0.4]
This type of VN is defined by a base network address and a size (that is, the total number of hosts that fit in the network). The size can be given as a number or as a network class (class B, 65,534 hosts, or class C, 254 hosts). For example, we could define a Virtual Network with a size of 3 hosts and a base network address of 10.0.0.0, thus having room for three VMs with IPs 10.0.0.1, 10.0.0.2, and 10.0.0.3.
We need five pieces of information to define a ranged VN: its name, its type (RANGED), the bridge, the network size, and the network address.
The following is an example of a Ranged Virtual Network template:
NAME            = "Red Private LAN"
TYPE            = RANGED
BRIDGE          = eth0
NETWORK_SIZE    = C
NETWORK_ADDRESS = 10.0.1.0
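With NETWORK_SIZE = C, the template above yields 254 leases, 10.0.1.1 through 10.0.1.254. A shell sketch of that enumeration (illustrative only; this is not how OpenNebula computes leases internally):

```shell
#!/bin/bash
# Enumerate the 254 usable host addresses of a class C ranged VN.
leases() {
  local base=${1%.*}              # strip the host octet: 10.0.1.0 -> 10.0.1
  for host in $(seq 1 254); do
    echo "${base}.${host}"
  done
}

leases 10.0.1.0 | head -3   # prints 10.0.1.1, 10.0.1.2, 10.0.1.3
```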
The default value for the network size can be found in oned.conf.
Once a template for a VN has been defined, the onevnet command can be used to create it:
makito $ onevnet -v create private_red.net
NID: 0
Also, onevnet can be used to query OpenNebula about available VNs:
makito $ onevnet list
  NID NAME            TYPE SIZE BRIDGE
    0 Red Private LAN    0  256   eth0
To set an example that will help explain lease allocation, let's assume that we are going to deploy a VM much like the red VM in Host B as seen in the picture, that is, a VM with two network interfaces (NICs): one connected to a private VN and the other connected to the public internet. Assuming we have created two VNs like the ones described in the section above, we can build the VM template with NIC definitions that reference these VNs. For example, the following will grant the VM access to the internet:
NIC=[NETWORK="Public"]
Whenever the VM is submitted to OpenNebula, the Virtual Network Manager will be queried to check whether a lease for the VN named Public is available. If not, the submission will fail. If one is available, it will be marked as used, and its IP and MAC address pair will be used to configure a network interface in the VM, granting it access to the internet.
Additionally, more than one NIC can be set for a VM. If we want to give access to the Red VN, we would add the following line to the VM template:
NIC=[NETWORK="Red Private LAN"]
This will set up another network interface in the VM, with an IP and MAC address, connected to the appropriate bridge so as to grant the VM access to the Red VN.
Alternatively, a user may want her VM to connect to the Red VN using a specific IP address, if available, like:
NIC=[NETWORK="Red Private LAN", IP=10.0.1.3]
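Putting the pieces together, a VM template combining both NICs might look like the following (the NAME, CPU, and MEMORY values are illustrative; only the NIC lines come from the examples above):

NAME   = "red_vm"
CPU    = 1
MEMORY = 256
NIC    = [NETWORK="Public"]
NIC    = [NETWORK="Red Private LAN", IP=10.0.1.3]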
Having reached this point, it is worth noting that network management can still be done manually, without relying on the capabilities of the Virtual Network Manager. Users can thus define NICs without the help of the NETWORK attribute:
NIC=[IP="10.0.0.1", MAC="00:16:0a:00:00:01"]
Upon submission, OpenNebula will try to obtain a lease for the VM. If successful, the onevm show command should return information about the machine, including network information. Assuming we have submitted a VM that uses the Red Private LAN network, it will look something like:
NIC : BRIDGE=eth0,IP=10.0.1.1,MAC=00:16:0a:00:01:01,NETWORK=Red Private LAN,VNID=0
Now we can query the Virtual Network Manager using onevnet show to find out about granted leases and other VN information:
makito $ onevnet show 0
NID          : 0
UID          : 0
Network Name : Red Private LAN
Type         : Ranged
Size         : 256
Bridge       : eth0
....: Template :....
BRIDGE=eth0
NAME=Red Private LAN
NETWORK_ADDRESS=10.0.1.0
NETWORK_SIZE=C
TYPE=RANGED
....: Leases :....
IP = 10.0.1.1  MAC = 00:16:0a:00:01:01  USED = 1  VID = 8
Hypervisors can attach a specific MAC address to a virtual network interface, but virtual machines need to obtain an IP address. There are a variety of ways to achieve this, including DHCP configuration.
With OpenNebula you can also derive the IP address from the MAC address using the MAC_PREFIX:IP rule. To achieve this, a shell script is shipped with the OpenNebula source code; it can be found in share/scripts/vmcontext.sh. To make it work, users should place it in the VM's /etc/init.d folder.
Having done so, whenever the VM boots it will execute this script, which in turn scans the available network interfaces, extracts their MAC addresses, performs the MAC-to-IP conversion, and constructs an /etc/network/interfaces file that ensures the correct IP assignment for each interface.
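The core of that conversion can be sketched in shell as follows (a minimal illustration assuming a 2-byte MAC prefix; the mac2ip name is ours, and the real vmcontext.sh does more than this):

```shell
#!/bin/bash
# Invert the MAC generation rule: drop the 2-byte prefix and
# convert each remaining hex byte back to a decimal IP octet.
mac2ip() {
  local mac=$1 ip=""
  for hex in $(echo "${mac#*:*:}" | tr ':' ' '); do
    ip="${ip}$((16#${hex}))."
  done
  echo "${ip%.}"
}

mac2ip 00:16:0a:00:00:01   # prints 10.0.0.1
```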