Research on multi-hypervisor network virtualization using Open Virtual Network

Summary

The cleanest technical path is not to bolt a foreign br-int onto a host that already has its networking lifecycle owned by XAPI. The safer pattern is to let XAPI create and own a dedicated internal network bridge, then point ovn-controller at that existing XAPI bridge by setting external_ids:ovn-bridge=<xapi bridge name>. In other words: reuse an XAPI-created bridge as the OVN integration bridge, instead of trying to move XAPI-owned VIFs onto a separately created br-int. That matches OVN’s model that VM VIFs should be plugged into the integration bridge, and it matches XCP-ng’s model that network objects, bridges, and VIF lifecycle are normally controlled through XAPI and xcp-networkd.

The reason this matters is that XCP-ng’s own documentation says almost all network configuration is handled through XAPI, with only a few explicit exceptions, notably the Xen Orchestra SDN features. OVN, on the other hand, expects a chassis-local integration bridge, VIFs attached to that bridge, and external_ids:iface-id on those interfaces so ovn-controller can bind them to logical switch ports. That is why a naive “ovs-vsctl add-port br-int vifX.Y” approach clashes with the toolstack: both systems think they own the same port lifecycle.

There is also a supportability caveat. XCP-ng’s public “extra installable” package list for 8.3 contains openvswitch-ipsec, but not ovn or ovn-controller; and XCP-ng explicitly warns that installing packages outside its supported set, or from extra repositories, is at your own risk and can complicate updates and upgrades. That does not make an OVN host-agent prototype impossible, but it does mean this should be treated as an experimental integration unless you have a controlled packaging path.

Sources of conflict

XCP-ng’s current architecture is straightforward: Xen Orchestra and xe talk to XAPI, XAPI dispatches network work to xcp-networkd, and xcp-networkd applies the required state into Open vSwitch. The XCP-ng architecture documentation is explicit that “almost all configuration is handled through XAPI,” and that Xen Orchestra’s SDN controller is one of the rare cases that interacts directly with ovsdb-server or ovs-vswitchd.

OVN’s local model is equally explicit. Each chassis must have an integration bridge dedicated to OVN. On hypervisors, the VIFs that participate in logical networks are supposed to be plugged into that integration bridge, and OVN relies on the normal OVS integration convention where the interface carries external_ids:iface-id. When a VM powers on, the hypervisor integration code adds the VIF to the integration bridge and stores the VIF identifier in external_ids:iface-id; ovn-controller then notices that interface and binds the corresponding logical port to the local chassis.
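Schematically, that binding convention looks like the following on any OVN hypervisor (the bridge and port names are illustrative, and the logical-port-name placeholder stands for whatever identifier the CMS chose; on XCP-ng the equivalent plumbing is performed by XAPI/xcp-networkd rather than by hand):

```shell
# What generic OVN hypervisor integration does when a VM powers on
# (illustrative names; do not run this by hand on an XAPI-managed host):
ovs-vsctl add-port br-int vif5.0 -- \
  set Interface vif5.0 external_ids:iface-id="<logical-port-name>"

# ovn-controller notices the new interface and binds the logical switch
# port whose name matches iface-id to this chassis.
```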

That leads to the central design tension. XAPI naturally creates one OVS bridge per XAPI Network object, while OVN naturally wants one dedicated integration bridge per chassis. If you add a separate br-int and then manually re-home Xen VIFs onto it, you are stepping outside the model where XAPI is the system of record for the VM-to-network attachment. If instead you make one XAPI-created internal bridge serve as the integration bridge, the two models line up far better: XAPI still owns VM/VIF/network lifecycle, and OVN owns logical forwarding and overlays.

A useful precedent comes from the old XAPI tunnel design itself. The XAPI tunnelling design creates the network object and tunnel access bridge first through XAPI, then leaves the controller responsible for establishing the actual GRE links. If the controller is absent, the network simply degrades into an internal network. That separation—XAPI owns objects, controller owns dynamic connectivity—is exactly the separation that makes OVN coexistence plausible here.

Configuration keys

The required OVN host keys live in the Open_vSwitch table. OVN’s own documentation lists these as the required chassis-side keys: external_ids:system-id, external_ids:ovn-remote, external_ids:ovn-encap-type, and external_ids:ovn-encap-ip. The ovn-controller man page also documents external_ids:ovn-bridge, which selects the integration bridge name and defaults to br-int, and external_ids:ovn-bridge-mappings, which maps provider or physical network names to local OVS bridges. external_ids:ovn-bridge-datapath-type is optional. For hypervisor overlays, Geneve is the normal encapsulation choice; OVN’s docs note that hypervisors support geneve and stt, while gateways may also use vxlan.

The XenServer/XCP-ng bridge identity keys live in the Bridge table and should be preserved on any XAPI-owned bridge. Open vSwitch’s schema documentation for XenServer states that external_ids:bridge-id uniquely identifies the bridge and, on XenServer, commonly matches external_ids:xs-network-uuids. It also documents external_ids:xs-network-uuids itself as the semicolon-delimited set of Xen network UUIDs associated with that bridge. In practice, those are the bridge identity markers that XAPI and the XenServer-style integration expect to exist.

The VIF identity keys live in the Interface table and are the ones OVN most benefits from inheriting instead of reinventing. The XenServer schema and the integration guide document external_ids:attached-mac, external_ids:iface-id, external_ids:iface-status, external_ids:xs-vif-uuid, external_ids:xs-network-uuid, external_ids:vm-id, and external_ids:xs-vm-uuid. The same documentation notes that on XenServer the default iface-id is commonly the Xen VIF UUID, and vm-id commonly matches the Xen VM UUID. That is the key insight that makes the reuse-of-XAPI-bridge design attractive: if XAPI plugs the VIF into the bridge that OVN treats as its integration bridge, the identifiers OVN wants are already the identifiers Xen’s OVS integration knows how to populate.
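These identity keys can be inspected directly on a host; a quick sketch, where the bridge name xapi0 and port name vif1.0 are placeholders for whatever ovs-vsctl show reports on your system:

```shell
# Bridge identity keys (XenServer/XCP-ng convention):
ovs-vsctl br-get-external-id xapi0 bridge-id
ovs-vsctl br-get-external-id xapi0 xs-network-uuids

# VIF identity keys on a plugged interface:
ovs-vsctl --columns=name,external_ids list Interface vif1.0
```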

A concrete coexistence configuration therefore looks like this:

ovs-vsctl set Open_vSwitch . \
  external_ids:system-id="$HOST_UUID" \
  external_ids:ovn-remote="ssl:10.0.0.10:6642,ssl:10.0.0.11:6642,ssl:10.0.0.12:6642" \
  external_ids:ovn-encap-type="geneve" \
  external_ids:ovn-encap-ip="$TUNNEL_IP" \
  external_ids:ovn-bridge="$XAPI_OVN_BRIDGE" \
  external_ids:ovn-bridge-mappings="physnet-mgmt:$UPLINK_BRIDGE"

The important part is that external_ids:ovn-bridge points at the XAPI-created internal bridge you dedicate to OVN, while external_ids:ovn-bridge-mappings points at one or more other bridges that XAPI already uses for provider or uplink connectivity. OVN’s architecture documentation is explicit that underlay physical ports should not be attached to the integration bridge, and that patching from the integration bridge to another bridge is the right way to implement localnet or gateway connectivity.

Community evidence

The strongest public template I found is what Vates already built for Xen Orchestra’s SDN features. The official Xen Orchestra SDN documentation and the XCP-ng networking docs describe a plugin that creates pool-wide and cross-pool private networks, initially via direct OpenFlow and OVSDB interaction, then later via a newer XAPI-side plugin approach. The docs are explicit that the SDN controller is the exception to the usual XAPI-only networking model, and that the newer direction is to move the OVS logic into XAPI plugins instead of having Xen Orchestra talk directly to host OVS components forever.

The old SDN devblogs are also informative. In the 2019 SDN controller devblog, the authors explain that they built their own lightweight controller instead of using broader SDN projects; they also call out that their design uses a passive SSL connection on OVSDB port 6640, allowing multiple controllers to share pool control as long as they trust the same CA. In the 2020 VIF traffic-control devblog, they document that the plugin “becomes the SDN controller … thanks to XAPI” and then uses OVSDB to become the bridge manager and OpenFlow to install traffic rules. That is not OVN, but it is a very relevant proof that the XCP-ng/XO ecosystem can layer an SDN control plane over XAPI-owned OVS state when the responsibilities are kept separate.

The XCP-ng forum threads show the same pattern in the field. A 2025 thread explains that when an SDN network creation fails, the XAPI pool network can already exist even though the OVSDB-controlled tunnels do not; the suggested validation is ovs-vsctl show, where a healthy result is a normal XAPI bridge such as xapi6 plus a controller entry and a GRE/VXLAN port. That is a strong practical hint that XAPI-created bridges are the right substrate and the controller’s job is to add extra ports or flows—not to replace the bridge lifecycle.

Other forum replies give useful operational constraints. One forum answer enumerates the SDN network other_config fields, including xo:sdn-controller:pif-device, and states that the selected PIF must be physical, VLAN-backed, or a bond master and must have an IP configuration. Another issue report shows what happens when the chosen transport PIF is ambiguous or lacks IP configuration: XAPI returns TRANSPORT_PIF_NOT_CONFIGURED, exactly as the old XAPI tunnel design describes. Those are not OVN-specific, but they tell you what kind of host-side interface selection logic survives XAPI scrutiny.

The historical XenServer/Open vSwitch mailing-list material also supports preserving Xen’s external IDs instead of fighting them. The OVS XenServer integration work documents that iface-id should mirror the Xen VIF UUID and that vm-id should mirror the Xen VM UUID, and one XenServer-specific OVS fix explicitly says that getting bridge-id wrong broke controller integration. There is also historical evidence that XenServer-specific tooling reset OVS manager_options, which is another reason not to base a coexistence design on ad hoc bridge-manager state that XAPI does not know about.

Recommended design

The recommended design is:

Create one XAPI-managed internal network whose bridge is dedicated to OVN, and configure that bridge as ovn-controller’s integration bridge. Then attach every VM VIF that should be OVN-managed to that XAPI network through normal XAPI/XO/xe operations. Use ovn-bridge-mappings only for provider uplinks or external connectivity to other XAPI-managed bridges. This gives OVN exactly what it wants—a dedicated integration bridge carrying VM VIFs—without taking bridge or VIF ownership away from XAPI.

This is better than introducing a separate classic br-int for three reasons. First, OVN explicitly allows the integration bridge to have a name other than br-int; br-int is only the default. Second, XenServer/XCP-ng already knows how to create and maintain internal bridges that have no physical uplink, which fits OVN’s own rule that underlay physical ports should not be attached directly to the integration bridge. Third, Xen’s existing OVS integration already populates the VIF-facing external IDs that OVN uses to correlate local interfaces with logical switch ports.

There is one important architectural consequence: the dedicated XAPI network used as OVN’s bridge should be treated as an OVN access network, not as a normal standalone L2 domain. The real tenant segmentation, routing, ACLs, and distributed forwarding happen in OVN’s northbound and southbound databases. The XAPI network exists mainly so that XAPI has a supported place to plug VIFs and so that ovn-controller can see them.

For connectivity to existing underlay or external networks, use OVN localnet ports and external_ids:ovn-bridge-mappings. OVN documents that a localnet port is the way to model “direct connectivity to an existing network,” and that ovn-controller implements it as a pair of patch ports between the integration bridge and another bridge. That is exactly the right place to map into an existing XAPI uplink bridge such as the management bridge, a VLAN bridge, or a bond-backed bridge.

Implementation guide

First, create a dedicated internal XAPI network for OVN. XenServer’s networking guide states that xe network-create name-label=<name> creates a network, and if it is not connected to a PIF it is internal. After creating it, query the network’s bridge field; that bridge name is what you will hand to OVN. In a pool, create it once on the coordinator in the normal XAPI way so the network object is consistent pool-wide.

OVN_NET_UUID=$(xe network-create name-label=ovn-int)
XAPI_OVN_BRIDGE=$(xe network-list uuid="$OVN_NET_UUID" params=bridge --minimal)
echo "$OVN_NET_UUID $XAPI_OVN_BRIDGE"

Next, set the OVN chassis keys in the local Open_vSwitch record on each host. The minimum set is system-id, ovn-remote, ovn-encap-type, and ovn-encap-ip. For coexistence, also set ovn-bridge=$XAPI_OVN_BRIDGE; that is the key that tells ovn-controller to use an existing bridge other than the default br-int. If the host must reach provider or external networks, also set ovn-bridge-mappings to one or more existing XAPI uplink bridges.

# XAPI UUID of this host; filter by hostname so that in a pool you get the
# local host's UUID, not an arbitrary pool member's.
HOST_UUID=$(xe host-list hostname="$(hostname)" --minimal)
ovs-vsctl set Open_vSwitch . \
  external_ids:system-id="$HOST_UUID" \
  external_ids:ovn-remote="ssl:10.0.0.10:6642,ssl:10.0.0.11:6642,ssl:10.0.0.12:6642" \
  external_ids:ovn-encap-type="geneve" \
  external_ids:ovn-encap-ip="$TUNNEL_IP" \
  external_ids:ovn-bridge="$XAPI_OVN_BRIDGE" \
  external_ids:ovn-bridge-mappings="physnet-mgmt:$UPLINK_BRIDGE"

Then start ovn-controller and verify that it adopts the XAPI bridge rather than auto-creating a new br-int. OVN’s architecture documentation says that if the integration bridge does not exist, ovn-controller will create one automatically; the whole point here is to make sure the bridge already exists and is the XAPI-owned internal bridge you chose. At that point, any VIF that XAPI plugs into that dedicated network should appear on the integration bridge in the ordinary Xen/OVS way, with the relevant external IDs populated.
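A quick post-start verification, assuming a systemd-style ovn-controller unit (the service name depends on your packaging path), is:

```shell
# Start the chassis controller (service name is packaging-dependent).
systemctl start ovn-controller

# The dedicated XAPI bridge should now carry OVN's flows. A stray
# auto-created br-int would indicate that ovn-bridge was not picked up.
ovs-vsctl br-exists br-int && echo "unexpected br-int present"
ovs-ofctl dump-flows "$XAPI_OVN_BRIDGE" | head
```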

Now attach VM networking through XAPI, not through ovs-vsctl. XAPI’s API model is that a VIF is the attachment between a VM and a Network object, and plug/unplug are the lifecycle operations for a running VM. In practice, create a new VIF or reattach an existing one so that its network is the dedicated ovn-int network. If you do this through XAPI/XO/xe, XAPI remains the source of truth and live migration, unplug, and host restart semantics stay aligned with the rest of the toolstack.

A workable script sequence is:

# 1. Create a VM VIF on the dedicated OVN XAPI network and plug it
#    (device index 1 is illustrative; mac=random lets XAPI generate one).
VIF_UUID=$(xe vif-create vm-uuid="$VM_UUID" network-uuid="$OVN_NET_UUID" device=1 mac=random)
xe vif-plug uuid="$VIF_UUID"
MAC=$(xe vif-param-get uuid="$VIF_UUID" param-name=MAC)

# 2. Create the matching OVN logical switch port, named after the VIF UUID.
ovn-nbctl --may-exist ls-add tenant-a
ovn-nbctl --may-exist lsp-add tenant-a "$VIF_UUID"
ovn-nbctl lsp-set-addresses "$VIF_UUID" "$MAC 192.0.2.10"

The crucial detail is that the OVN logical switch port name should match the value OVN will see in external_ids:iface-id. On XenServer-style OVS integration, the default iface-id is commonly the Xen VIF UUID, so using the VIF UUID as the logical port name is the least-friction mapping. Once the VM is plugged and ovn-controller sees the interface on the integration bridge, OVN should bind the corresponding Port_Binding row to the local chassis.
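One small wrinkle when scripting that match: ovs-vsctl get prints string values wrapped in literal double quotes, so strip them before comparing against the OVN logical port name. A minimal sketch, where unquote is a hypothetical helper and the UUID is illustrative:

```shell
# ovs-vsctl get Interface vifX.Y external_ids:iface-id prints the value with
# literal surrounding quotes; remove them before comparing with the lsp name.
unquote() { local v=$1; v=${v#\"}; v=${v%\"}; printf '%s\n' "$v"; }

iface_id='"0aa2b1c3-d4e5-f607-1829-3a4b5c6d7e8f"'  # as printed by ovs-vsctl get
lsp_name=$(unquote "$iface_id")
echo "$lsp_name"
```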

For north-south or provider connectivity, add a localnet port in OVN and map it to an existing XAPI-owned uplink bridge. OVN’s NB and controller docs explicitly describe localnet as the mechanism for direct connectivity to an existing network, and describe the implementation as a pair of patch ports between the integration bridge and the other local bridge. That lets you keep physical/VLAN/bond topology fully under XAPI while still giving OVN a way out to the underlay.

ovn-nbctl --may-exist lsp-add tenant-a provnet
ovn-nbctl lsp-set-type provnet localnet
ovn-nbctl lsp-set-options provnet network_name=physnet-mgmt
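Once a VM on that logical switch is bound to the chassis, ovn-controller should realize the localnet port as a pair of patch ports; one way to check (the output shape is approximate and chassis-dependent):

```shell
# Expect a patch-port pair linking the integration bridge and the mapped
# uplink bridge once the localnet port is in use on this chassis.
ovs-vsctl --columns=name,type,options find Interface type=patch
```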

Operationally, I would also keep the transport choices conservative. Use a real host IP for ovn-encap-ip, prefer geneve, and keep management traffic, provider uplinks, and the OVN access bridge separate. That lines up both with OVN’s own bridge layout guidance and with the constraints the Xen Orchestra SDN work has already surfaced around transport PIF selection and IP-configured uplinks.

Experimental fallback

If you absolutely must keep a classic separate br-int instead of reusing an XAPI bridge as ovn-bridge, the only plausible pattern I found is an event-driven reconciliation layer: let XAPI create and plug the VIF onto its normal XAPI bridge, then immediately move the resulting OVS port to br-int in a host-local script or daemon, preserving the Xen external IDs and making sure external_ids:iface-id remains the VIF UUID. The logic is plausible: OVN only needs the VIF on the integration bridge with the right iface-id, and the XCP-ng SDN examples already show that extra GRE/VXLAN ports can be added to XAPI bridges without breaking XAPI. But I did not find public evidence that permanently re-homing Xen VIF ports away from their XAPI-selected bridge is a supported configuration, so this should be treated as a high-risk lab technique rather than the recommended design.

If you prototype that fallback anyway, the reconciliation loop should be idempotent and keyed only on Xen metadata that XAPI already writes: xs-vif-uuid, xs-network-uuid, xs-vm-uuid, attached-mac, iface-id, and iface-status. It must run after every VIF plug, VM power-on, migration, and potentially after host-side network reconciliation. That is exactly the sort of long-term complexity the newer Xen Orchestra work is trying to eliminate by moving SDN logic closer to XAPI rather than relying on persistent ad hoc host-side OVS manipulations.
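For completeness, the core of such a reconciler would be a single atomic ovs-vsctl transaction per VIF. This is a high-risk sketch only; the port name vif12.0 is illustrative, and note that deleting a port also drops its Interface row, so iface-id must be captured first and re-set in the same transaction:

```shell
# Move one freshly plugged VIF from its XAPI bridge to br-int, preserving
# the identifier OVN needs. ovs-vsctl applies the chained -- commands as
# one OVSDB transaction.
PORT=vif12.0
SRC_BR=$(ovs-vsctl port-to-br "$PORT")
IFACE_ID=$(ovs-vsctl get Interface "$PORT" external_ids:iface-id)  # quoted value
ovs-vsctl -- del-port "$SRC_BR" "$PORT" \
          -- add-port br-int "$PORT" \
          -- set Interface "$PORT" external_ids:iface-id="$IFACE_ID"
```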

In short: if the goal is reliable coexistence, use an XAPI-created internal bridge as OVN’s integration bridge. If the goal is to preserve the name br-int at all costs, be prepared to own a custom reconciler for the life of the deployment.
