Libvirt takes care of networking for VMs in both NAT and bridge mode. To enable proper networking, libvirt manages a set of iptables rules. However, if the host firewall is not aware of the changes libvirt makes to iptables, and vice versa, conflicts are to be expected.

Consider the following situation:

A VM is NAT-ed and provides a web service. To make the service accessible, a port-forwarding iptables rule is necessary to forward packets from the hypervisor to the VM. libvirt automatically creates the necessary forwarding rules. However, those forwarding rules might not be sufficient, because the host firewall may have configured a DROP-all policy on the respective iptables chains. To allow traffic forwarding, additional iptables rules are required.
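
For illustration, a minimal sketch of such additional rules, assuming libvirt's default NAT bridge virbr0 and a VM at 192.168.122.10 serving HTTP on port 80 (the VM address and ports are placeholders, not taken from this setup):

# Hypothetical DNAT rule: redirect port 8080 on the hypervisor to the VM
$ iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination 192.168.122.10:80
# Allow the forwarded traffic despite a DROP-all policy on the FORWARD chain
$ iptables -I FORWARD -o virbr0 -d 192.168.122.10 -p tcp --dport 80 -j ACCEPT
$ iptables -I FORWARD -i virbr0 -s 192.168.122.10 -m state --state ESTABLISHED,RELATED -j ACCEPT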

It is possible to create a post-hook script for libvirt, which creates or removes those rules whenever a VM is started or stopped (a sketch follows below). More information on this approach can be found here, here, here and here.
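
A minimal sketch of such a hook, assuming a guest named webserver with the placeholder address 192.168.122.10 (libvirt calls /etc/libvirt/hooks/qemu with the guest name and the operation as its first two arguments):

#!/bin/bash
# /etc/libvirt/hooks/qemu (must be executable) - hypothetical example
GUEST="$1"      # name of the guest
OPERATION="$2"  # e.g. start, stopped, reconnect

if [ "$GUEST" = "webserver" ]; then
  if [ "$OPERATION" = "start" ] || [ "$OPERATION" = "reconnect" ]; then
    iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination 192.168.122.10:80
    iptables -I FORWARD -o virbr0 -d 192.168.122.10 -p tcp --dport 80 -j ACCEPT
  elif [ "$OPERATION" = "stopped" ]; then
    iptables -t nat -D PREROUTING -p tcp --dport 8080 -j DNAT --to-destination 192.168.122.10:80
    iptables -D FORWARD -o virbr0 -d 192.168.122.10 -p tcp --dport 80 -j ACCEPT
  fi
fi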

But this only solves half of the problem. If the hypervisor firewall is restarted, iptables is flushed and reconfigured, and all iptables rules configured by libvirt are lost. This conflict can only be solved properly if libvirt and the host firewall are aware of each other. firewalld is supposed to be able to talk to libvirt and vice versa.
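
Whether this integration is available can be checked on the hypervisor; on newer libvirt/firewalld combinations, libvirt even registers its own firewalld zone (the exact output depends on the installed versions):

$ firewall-cmd --state
running
# On newer combinations, libvirt registers a dedicated zone:
$ firewall-cmd --get-zones | grep -wo libvirt
libvirt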

If firewalld is not an option, there is, for instance, the following workaround, as described by libvirt:

“Finally, in terms of problems we have in deployment. The biggest problem is that if the admin does service iptables restart all our work gets blown away. We’ve experimented with using lokkit to record our custom rules in a persistent config file, but that caused different problem. Admins who were not using lokkit for their config found that all their own rules got blown away. So we threw away our lokkit code. Instead we document that if you run service iptables restart, you need to send SIGHUP to libvirt to make it recreate its rules.” - https://libvirt.org/firewall.html.

Accordingly, the hypervisor firewall needs to be post-hooked: after every reload of the hypervisor firewall, libvirt must automatically receive a SIGHUP. However, this approach causes all VMs to lose connectivity from the moment the hypervisor firewall flushes iptables until libvirt has finished recreating its rules.

On a side note: As seen in the libvirt systemd service file, SIGHUP is used to reload the service:

$ cat /etc/systemd/system/libvirt-bin.service | grep HUP
ExecReload=/bin/kill -HUP $MAINPID

Therefore, it makes more sense to create a hypervisor firewall post-hook that executes systemctl reload libvirtd.service rather than sending SIGHUP directly (e.g. kill -SIGHUP $(pidof libvirtd)). This way, proper logs of the service activity are written, allowing better auditability.
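
A minimal sketch of such a post-hook, assuming the host firewall is managed by a systemd unit (netfilter-persistent.service is used here as a placeholder; substitute the actual firewall unit), implemented as a systemd drop-in:

$ cat /etc/systemd/system/netfilter-persistent.service.d/libvirt-reload.conf
[Service]
# Recreate the libvirt iptables rules after every (re)start of the firewall
ExecStartPost=/bin/systemctl reload libvirtd.service
$ systemctl daemon-reload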

Losing connectivity might not be an option in a production environment. In that case, it might be better to disable the host firewall and operate an external firewall instead.

A third approach might be to disable libvirt's network management and configure networking manually (see the sketch below). Read more here and here.
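
For instance, libvirt's default NAT network can be stopped and kept from starting automatically, leaving all bridge and NAT configuration to the administrator:

$ virsh net-autostart default --disable
$ virsh net-destroy default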

There is a fourth approach, which is the one anticipated in this setup (see diagram below). Again, the problem is that port forwarding is required to access services of VMs (not all VMs can be bridged and have public IP addresses), and when the hypervisor firewall flushes iptables, the port-forwarding configuration is lost. To solve this problem, a different network setup, as documented below, has been chosen (second half of the diagram). The hypervisor has only one physical network adapter, on which two public IP addresses are configured. Traffic to the hypervisor is separated from traffic destined for the VMs by bridging the WAN network adapter of the firewall VM directly to the physical network adapter. With this approach, port forwarding to VMs can be done by the firewall VM instead of the hypervisor, which is a separate networking stack❗ Therefore, iptables on the hypervisor can be flushed and re-installed at any time, without affecting traffic that goes directly to the firewall VM, thanks to its separate public IP address ✔️.
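
One way to implement such a direct bridge is libvirt's direct (macvtap) interface type in bridge mode; a classic Linux bridge on the physical adapter works as well. A sketch of the relevant interface definition, with firewall-vm as placeholder guest name and eth0 as placeholder for the physical adapter:

$ virsh dumpxml firewall-vm | grep -A2 "interface type='direct'"
<interface type='direct'>
  <source dev='eth0' mode='bridge'/>
  <model type='virtio'/>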

Be aware of a potential vulnerability: libvirt creates forwarding rules assuming that incoming connections to the hypervisor IP address need to be forwarded to VMs. That is not intended in this approach; incoming connections must go through the firewall VM only. Therefore, incoming traffic to the hypervisor IP address must be explicitly blocked, except for some administrative traffic (e.g. SSH, VPN, etc.). Besides blocking incoming traffic on the hypervisor IP address, IP forwarding might be disabled as well.
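
A hedged sketch of such hardening, with 203.0.113.10 as placeholder for the hypervisor's public IP address:

# Allow administrative SSH and established connections, drop everything else
$ iptables -A INPUT -d 203.0.113.10 -p tcp --dport 22 -j ACCEPT
$ iptables -A INPUT -d 203.0.113.10 -m state --state ESTABLISHED,RELATED -j ACCEPT
$ iptables -A INPUT -d 203.0.113.10 -j DROP
# Optionally disable IP forwarding (note: this also breaks NAT-ed services, see below)
$ sysctl -w net.ipv4.ip_forward=0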

Of course, there is the exception of services hosted on the hypervisor itself and accessed via NAT. Those services are still affected by the conflict between the host firewall and libvirt.

The following diagram first shows an implementation that suffers from the hypervisor firewall & libvirt conflict. The second implementation documents the approach chosen to solve it: