Networking Guide 1.4
The physical hosts that form the fabric of our virtual infrastructure must meet some constraints in order to deliver virtual networks effectively to the virtual machines. From the networking point of view, we can therefore define our physical cluster as a set of hosts with one or more network interfaces, each of them connected to a different physical network.
In the proposed architecture, several virtual networks share the same physical network. This makes the configuration very flexible, as you do not need to set up new physical networks or configure VLANs (IEEE 802.1Q) in the switch each time you need a new virtual network.
However, sharing the same physical network also makes the virtual networks potentially vulnerable to a compromised VM. Here we explain a method of isolating networks at the Ethernet level, so that the traffic generated in one virtual network cannot be seen in the others.
Network isolation allows you to run any network service inside your VMs. For example, you can place a VM with a DHCP server in each virtual network to manage the IP addresses of the VMs in that network.
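For instance, a virtual network suitable for this scheme could be defined with a template like the following and created with onevnet create (the name, bridge and address here are placeholder values):

NAME            = "Blue LAN"
TYPE            = RANGED
BRIDGE          = br0
NETWORK_ADDRESS = 192.168.0.0
NETWORK_SIZE    = C

OpenNebula derives the MAC address of each lease from its IP (the configured MAC prefix followed by the four IP bytes), so all the VMs in a class C network like this one share the first five bytes of their MAC addresses; this is the property the filtering below relies on.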
The virtual networks are isolated by filtering the Ethernet traffic on the device the VM is attached to, so a VM can only exchange packets with machines in the same network (those whose MAC addresses belong to the virtual network, as defined with onevnet). This is achieved using ebtables, which lets you set filtering rules in the Linux bridge tables, combined with the OpenNebula Hooks.
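As an illustration, suppose a VM with MAC address 02:00:c0:a8:00:05 (a made-up value) is attached through the tap device vnet0. The hook described below would install two rules equivalent to:

sudo ebtables -A FORWARD -s ! 02:00:c0:a8:00:00/ff:ff:ff:ff:ff:00 -o vnet0 -j DROP
sudo ebtables -A FORWARD -s ! 02:00:c0:a8:00:05 -i vnet0 -j DROP

The first rule drops any frame forwarded to the VM whose source MAC is outside its virtual network (the last byte of the address is masked out); the second drops any frame coming from the VM with a spoofed source MAC.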
All the network configuration will be done on the cluster nodes. These are the additional requirements:

- ebtables package installed
- sudoers configured so oneadmin can execute ebtables
Add this line to sudoers on each cluster node:
oneadmin ALL=(ALL) NOPASSWD: /sbin/ebtables *
If the ebtables binary is installed in a different path, change the sudoers line accordingly.
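You can verify the setup by running the following as oneadmin on a cluster node; it should print the bridge tables without asking for a password:

sudo /sbin/ebtables -L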
You have to configure OpenNebula so the script is called each time a new VM is created, and also when a VM is shut down (to delete its ebtables rules). This is the configuration needed in oned.conf:
VM_HOOK = [
    name      = "ebtables-start",
    on        = "running",
    command   = "/srv/cloud/one/share/hooks/ebtables-kvm", # or ebtables-xen
    arguments = "one-$VMID",
    remote    = "yes" ]

VM_HOOK = [
    name      = "ebtables-flush",
    on        = "done",
    command   = "/srv/cloud/one/share/hooks/ebtables-flush",
    arguments = "",
    remote    = "yes" ]
The first script takes one parameter: the VM name. The second script removes all the ebtables rules that refer to a nonexistent VM, so it needs no argument. These hooks are executed when the VM enters the running state (on = "running") and when it is shut down or stopped (on = "done"). Note also that they are executed on the cluster nodes (remote = "yes"). Check the oned configuration reference for more information.
Here is the script that configures ebtables for KVM machines. A version for Xen is located in $ONE_LOCATION/share/hooks.
#!/usr/bin/env ruby

require 'pp'
require 'rexml/document'

VM_NAME = ARGV[0]

# Uncomment to act only on the listed bridges.
#FILTERED_BRIDGES = ['beth0']

# Append a rule to the bridge tables.
def activate(rule)
    system "sudo ebtables -A #{rule}"
end

# Parse `brctl show` into a hash: bridge name => attached interfaces.
def get_bridges
    bridges = Hash.new
    brctl_exit = `brctl show`
    cur_bridge = ""
    brctl_exit.split("\n")[1..-1].each do |l|
        l = l.split
        if l.length > 1
            cur_bridge = l[0]
            bridges[cur_bridge] = Array.new
            bridges[cur_bridge] << l[3]
        else
            bridges[cur_bridge] << l[0]
        end
    end
    bridges
end

# Interfaces on the filtered bridges, or on all bridges if no filter is set.
def get_interfaces
    bridges = get_bridges
    if defined? FILTERED_BRIDGES
        FILTERED_BRIDGES.collect {|k,v| bridges[k]}.flatten
    else
        bridges.values.flatten
    end
end

# Get the VM's network configuration from libvirt.
nets = `virsh -c qemu:///system dumpxml #{VM_NAME}`
doc  = REXML::Document.new(nets).root

interfaces = get_interfaces()

doc.elements.each('/domain/devices/interface') {|net|
    tap = net.elements['target'].attributes['dev']
    if interfaces.include? tap
        iface_mac = net.elements['mac'].attributes['address']

        # The network MAC prefix: the VM's MAC with the last byte zeroed.
        mac     = iface_mac.split(':')
        mac[-1] = '00'
        net_mac = mac.join(':')

        # Drop frames delivered to the VM from outside its virtual network,
        # and frames sent by the VM with a spoofed source MAC.
        in_rule  = "FORWARD -s ! #{net_mac}/ff:ff:ff:ff:ff:00 -o #{tap} -j DROP"
        out_rule = "FORWARD -s ! #{iface_mac} -i #{tap} -j DROP"

        activate(in_rule)
        activate(out_rule)
    end
}
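The ebtables-flush hook is not reproduced here. Conceptually, it lists the rules in the FORWARD chain, extracts the tap device each rule refers to, and deletes the rules whose device no longer exists on the host. A minimal sketch of that logic, assuming the same sudo setup (this is an illustration, not the shipped script):

#!/usr/bin/env ruby

# Network devices currently present on the host.
devices = Dir.entries('/sys/class/net')

# Walk the FORWARD chain and delete rules whose tap device is gone,
# i.e. rules left behind by a VM that no longer exists.
`sudo ebtables -L FORWARD`.split("\n").each do |rule|
    next unless rule =~ /-[io] (\S+)/
    tap = $1
    system "sudo ebtables -D FORWARD #{rule}" unless devices.include?(tap)
end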