KVM is a powerful tool for creating virtual machines, but one issue that seems to haunt people is: how the hell do you connect to the VM from another point on your network? Furthermore, with development advancing so rapidly, Cockpit is becoming an amazing tool for home server management. Sure, there are plenty of cases where direct port mapping to localhost is perfectly fine, and the “containerized” VM serves its purpose well where no other access is required. But what if, as in my case, you need a development environment and you don’t want to buy an entirely new machine?
My initial setup wasn’t successful: the VM was isolated behind the virtual bridge, and because the gateway/router couldn’t explicitly define routes to it, it was impossible for other computers to see the bridge or the VM. Below is a simple network diagram that depicts my issue.
The behavior of this configuration was secure, but not what I needed. There were a few things I observed:
- PC to virbr0 ping was not successful
- PC to VM ping was not successful
- Server to virbr0 ping was successful
- Server to VM ping was not successful (I believe this was because of the Windows 10 firewall)
- VM to PC ping was successful
- VM to Gateway ping was successful
- VM to virbr0 ping was successful
- VM to Google ping was successful
I also noticed that when running nmap on the server, ports 5900 and 5901 were only open to localhost. When I ran nmap against the ethernet device’s IP, those ports were closed. I assumed that without proper routing configured, the ports would only ever be accessible from the host machine. (Again, this is a great security feature that could be useful for certain containerized virtual machines.)
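You can reproduce this observation from the host itself. A minimal sketch, assuming VNC is on the standard 5900/5901 ports; the checks are guarded so they only run where the tools exist, and the commented scan target is a placeholder for your host’s ethernet IP:

```shell
# Show which local addresses the VNC ports are bound to.
# A 127.0.0.1:5900 binding means the display is loopback-only.
if command -v ss >/dev/null 2>&1; then
  ss -tln | awk '$4 ~ /:590[01]$/ {print $4}'
fi
# From another machine, the same scan against the host's LAN address
# (e.g. nmap -p 5900-5901 <host-ethernet-ip>) reports the ports closed.
echo "check complete"
```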
First, I decided to ditch the idea of using VNC and went with RDP instead. The responsiveness is far better, and it seemed like a cleaner approach for a development/coding environment. This option requires that you enable remote connections in the Windows 10 settings.
The network bridge that KVM and Cockpit deploy is very secure and uses solid firewall rules to protect your equipment. The one method I found to bypass these limitations was a direct attachment to a secondary NIC. I know this probably isn’t everyone’s first choice, but after about a week of wrestling with iptables, ebtables, and various other firewall tools, this was the easiest configuration, and it allowed the VM to tap directly into the host network, making it accessible from anywhere.
There are a few Windows 10 firewall features that will also need to be disabled so that your host machine and other devices on the network will be able to see your VM. (A guide on turning off the Windows Defender firewall will help here.)
Once Cockpit, libvirt, virt-install, bridge-utils, and ebtables are installed properly, you should be able to proceed with the setup.
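For reference, the prerequisites can be pulled in with your distribution’s package manager. A sketch assuming a Fedora/CentOS-style host; the package and service names on Debian/Ubuntu differ slightly:

```shell
# Packages named in this guide (assumed Fedora/CentOS package names):
PKGS="cockpit libvirt virt-install bridge-utils ebtables"

# Run as root on the host:
#   dnf install -y $PKGS
#   systemctl enable --now cockpit.socket libvirtd

echo "$PKGS"
```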
I had two network devices configured on my host: the first is the primary ethernet card, and the second is the one I configured for the direct attachment. Once your VM is installed and running, you can disable the default bridge network.
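Disabling the default bridge can also be done from the command line. A hedged sketch, assuming libvirt’s stock network name `default` (the network behind virbr0); the commands need root and are guarded so they only run where virsh is available:

```shell
# Stop libvirt's default NAT network and keep it from returning on boot.
if command -v virsh >/dev/null 2>&1; then
  virsh net-destroy default               # tear down virbr0 now
  virsh net-autostart default --disable   # don't recreate it at boot
  virsh net-list --all                    # "default" should show inactive
fi
echo "done"
```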
Navigate to the virtual machine’s Network Interfaces settings tab. I deleted the initial network device and created a new one, with the interface type set to Direct Attachment, the source set to the secondary ethernet card, and the model set to e1000e. The changes take effect after you restart your virtual machine.
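Under the hood, this Cockpit change lands in the domain’s libvirt XML. A sketch of the resulting interface definition, assuming a secondary NIC named `enp3s0` (the device name is an example; check yours with `ip link`):

```xml
<interface type='direct'>
  <!-- mode='bridge' is the common macvtap choice for LAN visibility -->
  <source dev='enp3s0' mode='bridge'/>
  <model type='e1000e'/>
</interface>
```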
To confirm everything is working properly, open cmd in your Windows VM and check the IP configuration to make sure the gateway is your router on the host network and not a bridge IP address.