Caddy 2 Web Server Configuration

I run a multitude of services from my network, so the ability to configure a simple, secure web server is an absolute necessity. Until recently, I continually bounced between Lighttpd and Nginx. Nginx seemed more secure, but configuring it could be as involved as setting up an Apache server. Lighttpd had issues with unusual header handling and its documentation was lacking. I also ran into a lot of problems pairing Lighttpd with Cockpit, and the community wasn't much help when it came to user support.

I recently stumbled upon Caddy as a possibility (which worked out well). I was a bit skeptical about this one as it seemed to be one of the ‘new kids on the block’ in terms of web servers.

After installing, I realized that the documentation was fairly straightforward and the community was very willing to help out. One of the first things I needed to do was get the service file so that I could enable and launch Caddy at boot. Like any other service file, this one was pretty clean and easy:



ExecStart=/usr/bin/caddy run --environ --config /etc/caddy/Caddyfile
ExecReload=/usr/bin/caddy reload --config /etc/caddy/Caddyfile


Caddy provides the full service file on their GitHub.
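With the unit file in place, enabling Caddy at boot is one systemd command (assuming the unit is named caddy.service, as in the upstream file):

```
sudo systemctl enable --now caddy

# Optional: validate the Caddyfile before reloading
caddy validate --config /etc/caddy/Caddyfile
sudo systemctl reload caddy
```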

Then it was off to the configuration file. After seeing the simplicity of the setup, I knew this was going to be perfect. As you can see from the service file, the configuration is stored in /etc/caddy/ in a file called Caddyfile.

WordPress Configuration:

Configuring a WordPress blog is really easy. Declare the root folder and make sure you add the right address for PHP FastCGI (also worth mentioning: the address uses // after unix instead of :, which tripped me up the first time). Make sure to use the proper location for php-fpm.sock; the path below is correct for Arch Linux. The last two directives are required for proper handling: when I didn't add encode gzip, the WordPress login page was missing its theme and the add post/page editor was blank. The site address below is a placeholder for your own domain.

example.com {
    root * /var/http/site
    php_fastcgi unix//var/run/php-fpm/php-fpm.sock

    encode gzip
    file_server {
        index index.php
        hide .htaccess
    }
}

Standard/Static HTML Website Configuration:

Like WordPress, configuring a standard HTML website with Caddy is a very simple process. The only difference is that you don't need a FastCGI configuration. Again, the site address is a placeholder.

example.com {
    root * /var/http/site

    encode gzip
    file_server {
        index index.html
    }
}

Reverse Proxy:

This was another very easy module to set up. Simply add the subdomain name and declare the reverse proxy target. I am not sure if there is a difference between using localhost or the loopback address (127.0.0.1). From my experience with firewall configuration, localhost seemed to be the better option. The subdomain below is a placeholder; port 8123 is the Home Assistant default.

ha.example.com {
    reverse_proxy localhost:8123
}

One of the best features of Caddy is its automatic HTTPS and the security of the entire server. I had no issues setting up reverse proxies for Cockpit and Home Assistant; out of the box, they just worked.

The best part is that you can run all of these together in one configuration file to host multiple sites and services. Ultimately, I think Caddy is now the best solution for simple web servers, especially for at-home setups.
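As a sketch, a single Caddyfile serving all three setups at once might look like this (the domains are placeholders, not my actual sites):

```
blog.example.com {
    root * /var/http/blog
    php_fastcgi unix//var/run/php-fpm/php-fpm.sock
    encode gzip
    file_server
}

site.example.com {
    root * /var/http/site
    encode gzip
    file_server
}

ha.example.com {
    reverse_proxy localhost:8123
}
```

Caddy handles certificates for each site block automatically, so adding a service is just a matter of appending another block.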

Access Cockpit Created Windows 10 VM (kvm/qemu) from Host Network

KVM is a powerful tool for creating virtual machines, but one issue that seems to haunt people is: how do you connect to the VM from another point on your network? Furthermore, with Cockpit developing so rapidly, it is becoming an amazing tool for home server management. Sure, there are plenty of cases where direct port mapping to localhost is perfectly fine, and the "containerized" VM serves its purpose well when no other access is required. But what if, as in my case, you need a development environment and don't want to build an entirely new machine?

My initial setup wasn’t successful as the VM was isolated behind the virtual bridge and because the gateway/router couldn’t explicitly define routes, it was impossible for other computers to see the bridge or the VM. Below is a simple network diagram that depicts my issue.

[Network diagram: the VM isolated behind virbr0. Didn't work.]

The behavior of this configuration was secure, but not what I needed. There were a few things I observed:

  • PC to virbr0 ping was not successful
  • PC to VM ping was not successful
  • Server to virbr0 ping was successful
  • Server to VM ping was not successful (I believe this was because of the Windows 10 firewall)
  • VM to PC ping was successful
  • VM to Gateway ping was successful
  • VM to virbr0 ping was successful
  • VM to Google ping was successful

I also noticed that when running nmap on the server, ports 5900 and 5901 were only open on localhost. When I ran nmap against the ethernet device's IP, these ports were closed. I assumed that without proper routing configured, the ports would only ever be accessible from the host machine. (Again, this is a great security feature that could be useful for certain containerized virtual machines.)
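The checks looked something like this (the LAN address is a placeholder for the server's ethernet IP):

```
nmap -p 5900,5901 127.0.0.1      # VNC ports reported open
nmap -p 5900,5901 192.168.1.10   # same ports reported closed
```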


First, I decided to ditch the idea of using VNC and went with RDP instead. The responsiveness is way better, and it seemed like a cleaner approach for a development/coding environment. This option requires that you enable remote connections in the Windows 10 settings.

The network bridge that KVM and Cockpit deploy is very secure and uses good firewall defaults to protect your equipment. The one method I found to bypass these limitations was a direct attachment to a secondary NIC. I know this probably isn't everyone's first choice, but after about a week of fighting iptables, ebtables, and various other firewall tools, this was the easiest configuration; it lets the VM tap directly into the host network, making it accessible from anywhere.

There are a few Windows 10 firewall features that will also need to be disabled so that your host machine and other devices on the network can see your VM. (Microsoft's documentation covers turning off the Windows Defender firewall.)

Once Cockpit, libvirt, virt-install, bridge-utils, and ebtables are installed, you should be able to proceed with the setup.

I had two devices configured on my network; the first is the primary ethernet card. The secondary device is the one I configured for the direct attachment. Once your VM is installed and running, you can disable the default bridge network.

Navigate to the virtual machine's Network Interface settings tab. I deleted the initial network device and created a new one. The new device's interface type is Direct Attachment, the source is the secondary ethernet card, and the model is e1000e. After you restart your virtual machine, the changes will take effect.

[Screenshot: VM Overview]
[Screenshot: VM Network Interface]
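For reference, the same interface expressed as libvirt domain XML looks roughly like this (the device name enp3s0 is a placeholder for your secondary NIC):

```
<!-- Direct (macvtap) attachment to the secondary NIC -->
<interface type='direct'>
  <source dev='enp3s0' mode='bridge'/>
  <model type='e1000e'/>
</interface>
```

This is what Cockpit generates behind the scenes when you pick Direct Attachment, so you can also apply it with virsh edit if you prefer the command line.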

To confirm everything is working properly, open cmd in your Windows VM and run ipconfig to make sure that the default gateway is your router on the host network, not a bridge IP address.

Static IP Using dhcpcd

Static IP addresses are very useful on networks where a lot of devices are managed or where you need to track specific devices. In my case, I have a multi-purpose server where I manage specific port-forwarding rules, and the easiest way to keep those rules reliable is to assign the server a static IP.

In Linux, there are several network management tools available. In my opinion, the most intuitive tool for local IP address management on a server is dhcpcd: it is very simple to use and gets the job done.

I appended the following snippet to the bottom of /etc/dhcpcd.conf which allowed for static assignment.

interface eno2
static ip_address=
static routers=
static domain_name_servers=
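Filled in with placeholder addresses (the 192.168.1.x values below are examples, not my actual network), the snippet would look like this:

```
interface eno2
static ip_address=192.168.1.50/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1 8.8.8.8
```

After saving, restart the daemon (systemctl restart dhcpcd) for the assignment to take effect.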