01 Aug 2021 - tsp
Last update 06 Aug 2021
First of all it’s about the usage of a hypervisor called Xen
which is in my opinion one of the best virtualization systems out there. This
is a really powerful type 1 hypervisor that does not run on top of a specific operating
system and that’s used by many large cloud companies (as of 2021 for example Amazon
Web Services, Rackspace, etc. - and it’s also the hypervisor sold as part of Citrix’s
solutions). To provide access to basic services and to manage the hypervisor it runs
a privileged guest called Domain 0 or dom0 for short. Usually one runs some kind
of Linux or, in my case, FreeBSD in this privileged domain
which provides at least basic configuration and virtual network interfaces, often storage
backends and so on - there are of course ways to segment that even further. Like
any hypervisor Xen is able to run many virtual guest machines - either in hardware
virtualized machine mode (HVM), which is what one usually imagines when talking about virtual
machines, or in paravirtualized mode (PV). The main difference is that guests in
HVM mode usually don’t know they’re running in a virtualization environment
whereas PV guests actively cooperate with the hypervisor, which allows for much
better performing guests - this is usually
interesting for library operating systems, exokernels or unikernels, which are also
a currently trending topic in serverless environments. HVM on the other hand
allows traditional virtualization of unmodified operating systems.
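Just as a rough illustration of how the two modes are selected (this assumes a reasonably recent Xen xl toolstack; older toolstacks used the builder directive instead, and the kernel path is only a placeholder), the mode is simply chosen in the guest’s configuration file:
# sketch of an xl guest configuration selecting HVM mode
type = "hvm"
# ... or, for a paravirtualized guest that brings its own PV-aware kernel:
type = "pv"
kernel = "/path/to/pv/kernel"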
Large cloud providers often run Xen in a 1:1 mode with only one virtual machine per physical machine - for different reasons than most home users run virtual machines. It allows easier management, hides details about the hardware and, since one is able to perform live migration of running VMs, one can move machines through the datacenter without interruption and without the potential hardware security problems one would have in a 1:N scenario where a single host runs many different VMs. This is something that I won’t touch in this blog post though.
In case one runs multiple VMs on the same machine they usually require access to the network. In simple setups it might be sufficient to just configure a single bridge (which can be imagined to work like a simple switch), give all VMs a single virtual network adapter with access to the same bridge - and use that bridge also for Domain 0 to access the network (i.e. a single NIC). This is the first configuration that I’m going to describe.
Depending on the network setup one might also go for a separate management interface for the Domain 0 as well as virtual LAN support, which is what I’m going to describe in the second step. With this approach one can attach VMs to different virtual networks depending on their usage - and of course also allow them access to different VLANs on one’s network when one uses managed switches that support VLAN tagging, which in my opinion is a requirement for anything but a purely hobbyist network.
In case one requires even more sophisticated networking features such as OpenFlow support on the machine itself one might use OpenVSwitch instead of native bridges, which also works pretty well - the ideas are the same as for native bridges but configuration is a little more involved, though one can then use a centralized OpenFlow controller (like OpenDaylight, or Floodlight for smaller deployments) in conjunction with VM management software.
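Just as a rough sketch of what such an OpenVSwitch setup could look like (this assumes the Open vSwitch package is installed and its daemons are running; the controller address is only a placeholder):
# create an OVS bridge and attach the physical uplink
ovs-vsctl add-br ovsbr0
ovs-vsctl add-port ovsbr0 ixgbe0
# optionally point the bridge at a central OpenFlow controller
ovs-vsctl set-controller ovsbr0 tcp:192.0.2.10:6653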
The most simple setup uses a single bridge for everything. On the Domain 0
one can simply configure the bridge - in this example it will be called
bridge0 - attach the network card, which is
ixgbe0 in the following listings, and
statically configure an IP address on the bridge that the host will use:
ifconfig ixgbe0 up
ifconfig bridge0 create
ifconfig bridge0 inet 10.0.0.1/24 addm ixgbe0 up
ifconfig bridge0 inet6 accept_rtadv
This will already bring up the network of domain 0 and allow remote access - those
settings can be persisted in
/etc/rc.conf so they will be applied on each boot:
cloned_interfaces="bridge0" ifconfig_ixgbe0="up" ifconfig_bridge="inet 22.214.171.124/24 addm ixgbe0 up" ifconfig_bridge0_ipv6="inet6 accept_rtadv"
It might also be interesting to set the bridge’s MAC address to some deterministic
value, either by setting the sysctl net.link.bridge.inherit_mac in
/etc/sysctl.conf, which makes a bridge inherit the MAC address of its first member
interface, or by setting the
ether address explicitly during interface configuration.
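Both variants could look roughly like this (the fixed MAC address below is just a placeholder from the locally administered range):
# /etc/sysctl.conf: let bridges inherit the MAC of their first member interface
net.link.bridge.inherit_mac=1
# or set a fixed address explicitly on the bridge
ifconfig bridge0 ether 02:00:00:00:00:01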
Configuration of guests in the Xen configuration file is pretty easy:
vif = [ 'mac=00:16:3e:01:01:01,bridge=bridge0' ]
This will create a virtual interface with the given MAC address (chosen from the
00:16:3e range
assigned to XenSource, Inc.) that will be automatically attached to bridge0.
Now it gets more interesting - what if one wants to give VMs access to different
VLANs depending on their designation and privileges? And if one wants to use
a different interface for management and for VMs? The route I’m taking to solve
this problem in this blog post is to create different bridges for different
VLANs and use VLAN tagging on the top of rack switch. This even allows one to
set up IP forwarding between different bridges on the host, which can be filtered
by the standard
ipfw packet filter. For simple home and office use switches
such as the TL-SG3216 are more than sufficient as top
of rack switches and provide all required features. One can also use multiple
interfaces in link aggregation groups (
lagg interfaces) in case one wants
to support higher bandwidth, or use designated hardware interfaces for system
management, which is usually a good idea.
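A minimal sketch of such a lagg setup (assuming LACP and two example ports ixgbe0 and ixgbe1; the VLAN devices shown below would then simply be stacked on lagg0 instead of ixgbe0):
ifconfig lagg0 create
ifconfig lagg0 laggproto lacp laggport ixgbe0 laggport ixgbe1 up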
This is pretty simple. Again
ixgbe0 will be the hardware interface available. I’m
also assuming that two virtual LANs with IDs
1 and 2 are going to be
used for virtual machines as well as a third VLAN
3 for management. The
domain 0 will again be assigned an IP address for management.
The most simple solution is to create two bridge interfaces,
create the associated VLAN devices and assign them to the bridges. This can be
imagined like a virtual network cable being attached to a virtual switch per
VLAN:
ifconfig bridge0 create
ifconfig bridge1 create
ifconfig vlan0 create
ifconfig vlan1 create
ifconfig vlan2 create
ifconfig ixgbe0 up
ifconfig vlan1 vlan 1 vlandev ixgbe0 up
ifconfig vlan2 vlan 2 vlandev ixgbe0 up
ifconfig bridge0 addm vlan1 up
ifconfig bridge1 addm vlan2 up
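To persist this configuration the same /etc/rc.conf pattern as above can be used - a sketch, assuming the interface names from the listing:
cloned_interfaces="bridge0 bridge1 vlan1 vlan2"
ifconfig_ixgbe0="up"
ifconfig_vlan1="vlan 1 vlandev ixgbe0 up"
ifconfig_vlan2="vlan 2 vlandev ixgbe0 up"
ifconfig_bridge0="addm vlan1 up"
ifconfig_bridge1="addm vlan2 up"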
To use one of the bridges as management interface just add an IP address to the given bridge as before:
ifconfig bridge0 inet 10.0.0.1/24 inet6 accept_rtadv
In case one uses a separate management interface just configure that second interface as usual and do not assign IP addresses to the bridges.
ifconfig em0 inet 10.0.0.1/24 inet6 accept_rtadv
Or - in case one uses a separate management VLAN - assign the IP to the specific VLAN device that has not been attached to any bridge.
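For the management VLAN 3 from the example above this could for instance use the vlan0 device created in the listing (the address is again only an example):
ifconfig vlan0 vlan 3 vlandev ixgbe0 up
ifconfig vlan0 inet 10.0.0.1/24
ifconfig vlan0 inet6 accept_rtadv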
In the virtual machine’s virtual interface configuration just select the specific bridges. To add two network adapters on both VLANs for example:
vif = [ 'mac=00:16:3e:01:01:01,bridge=bridge0', 'mac=00:16:3e:01:02:01,bridge=bridge1' ]
Of course the top of rack switch also has to be configured to treat the port the
host is attached to as a
TRUNK port - it should of course also reject all non 802.1q tagged
frames just in case there are untagged frames due to some misconfiguration.
There is one thing that I think is worth noting even if it should be obvious:
Of course any configured routing behavior of the host system still
applies - so if you have configured IP interfaces on different VLANs the
host system will happily route packets between your different IP subnets as
usual and as expected.
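A sketch of how that can be controlled (the rc.conf knobs are the standard FreeBSD ones; the rule number and subnets are only examples):
# /etc/rc.conf: enable IPv4 forwarding and the ipfw firewall
gateway_enable="YES"
firewall_enable="YES"
# example ipfw rule that blocks traffic from one VM subnet into another
ipfw add 1000 deny ip from 10.0.2.0/24 to 10.0.1.0/24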