Caddy 2 Web Server Configuration

I run a multitude of services from my network, so the ability to configure a simple and secure web server is an absolute necessity. Until recently, I continually bounced between Lighttpd and Nginx. Nginx seemed more secure, but configuring it could feel like setting up an Apache server. Lighttpd had issues with odd header handling, and its documentation was lacking. I also ran into a lot of problems pairing Cockpit with Lighttpd, and community support was thin.

I recently stumbled upon Caddy as a possibility (which worked out well). I was a bit skeptical about this one, as it seemed to be one of the ‘new kids on the block’ among web servers.

After installing, I realized that the documentation was fairly straightforward and the community was very willing to help out. One of the first things I needed to do was get a service file so that I could enable and launch Caddy at boot. Like any other service file, this one is pretty clean and easy:

/etc/systemd/system/caddy.service

[Unit]
Description=Caddy
Documentation=https://caddyserver.com/docs/
After=network.target network-online.target
Requires=network-online.target

[Service]
User=http
Group=http
ExecStart=/usr/bin/caddy run --environ --config /etc/caddy/Caddyfile
ExecReload=/usr/bin/caddy reload --config /etc/caddy/Caddyfile
TimeoutStopSec=5s
LimitNOFILE=1048576
LimitNPROC=512
PrivateTmp=true
ProtectSystem=full
AmbientCapabilities=CAP_NET_BIND_SERVICE

[Install]
WantedBy=multi-user.target

Caddy provides this service script on their GitHub.
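With the unit file saved, enabling and launching at boot is the standard systemd routine:

# systemctl daemon-reload
# systemctl enable --now caddy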

Then it was off to the configuration file. After seeing the simplicity of the setup, I knew this was going to be perfect. As you can see from the service file, the configuration is stored in /etc/caddy/ in a file called Caddyfile.

WordPress Configuration:

Configuring a WordPress blog is really easy. Declare the root folder and make sure you point php_fastcgi at the right PHP FastCGI socket (also, it might be worth mentioning that the address uses // instead of : after unix; this tripped me up the first time). Make sure to use the proper location for php-fpm.sock; the path below is correct for Arch Linux. The last two options are required for proper handling. I noticed that when I didn’t add encode gzip, the WordPress login page was missing its theme and the add post/page screen was blank.

thebytes.net {
    root * /var/http/site
    php_fastcgi unix//var/run/php-fpm/php-fpm.sock

    encode gzip
    file_server {
        index index.php
        hide .htaccess
    }
}

Standard/Static HTML Website Configuration:

Like WordPress, configuring a standard HTML website with Caddy is a very simple process. The only difference is that you don’t need a FastCGI configuration.

thebytes.net {
    root * /var/http/site

    encode gzip
    file_server {
        index index.html
    }
}

Reverse Proxy:

This was another very easy module to set up. Simply add the subdomain name and declare the reverse proxy location. I am not sure whether there is a difference between using localhost and the loopback address (127.0.0.1), but from my experience with firewall configuration, localhost seemed to be the better option.

place.thebytes.net {
    reverse_proxy localhost:8123
}

One of the best features of Caddy is its automatic HTTPS and the security of the entire server. I had no issues setting up reverse proxies for Cockpit and Home Assistant; out of the box, they just worked.

Best of all, you can run all of these together in one configuration file to host multiple sites and services. Ultimately, I think Caddy is now the best solution for simple web servers, especially for at-home setups.
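And since a single Caddyfile can hold multiple site blocks, combining the examples above is as simple as stacking them:

thebytes.net {
    root * /var/http/site
    php_fastcgi unix//var/run/php-fpm/php-fpm.sock
    encode gzip
    file_server {
        index index.php
        hide .htaccess
    }
}

place.thebytes.net {
    reverse_proxy localhost:8123
}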

CentOS 7 on Raspberry Pi 4

There are a lot of guides and walkthroughs on how to install CentOS on a Raspberry Pi 4, but once the OS boots, it’s pretty useless until it is set up and configured properly for the Pi. The CentOS 7 image is fairly barebones and doesn’t provide much out of the box.

Getting the image is a pretty straightforward process; download the image here. One key thing to pay attention to is the version you are downloading. Make sure to grab the minimal armv7hl version with “minimal-4” in the name, as it is the only image with the proper configuration for the Pi 4.

Flash the image using Etcher or Raspberry Pi Imager. Make sure you flash to the right drive (the SD card and not your hard drive).

Once you have the image flashed, boot up your Raspberry Pi. You can log into the Pi with the following credentials:

username: root
password: centos

(I recommend changing these as soon as possible… but for this guide, I will be using root the entire time.)

Configure the Network

This is a fairly easy process too. Use the preinstalled network management tool ‘nmtui’ to configure your wired or wireless network. Since you have a Raspberry Pi 4 (at least, I assume you do since you’re reading this), the wireless configuration process is very easy. Don’t skip this step, because you will need internet access to get everything else working. While here, you can also assign a hostname if you desire.
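Both steps happen right from the console (the hostname below is just a placeholder):

# nmtui
# hostnamectl set-hostname mypi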

Expand the rootfs

One of the weird nuances of the CentOS images for the Raspberry Pi is that the file system is fairly small (you can check the size with ‘df -l’). In my case, the image was around 2G, which is obviously not very large. There is a bundled command that resizes the partition to the full size of the SD card you installed on. Run the following:

# rootfs-expand

and wait a few seconds. The screen will clear, then you will see the output of two commands: growpart and resize2fs. Once completed, the partition and file system should match the full size of the SD card. (Again, you can check with ‘df -l’.)

Disable the annoying kernel messages in console

This is more of a personal preference than anything else. It gets really annoying to see kernel messages flood the screen when trying to accomplish anything in a console-only build. If I want to see these messages, I can re-enable them later or check dmesg. Echo the following line to disable the kernel messages. A reboot is required (the easiest way) for the changes to take effect.

# echo "kernel.printk = 3 4 1 3" >> /etc/sysctl.conf
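If you would rather skip the reboot, reloading the file with sysctl should apply the setting immediately:

# sysctl -p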

No more annoying messages. (If this is your thing, then by all means, keep the messages and… good luck.)

Update the system (and enable the EPEL)

Now that the internet is working, the file system is properly sized, and you aren’t receiving annoying messages, we can update the system and install some packages. First, update your system.

# yum update

If you want to install an editor like ‘nano’, now would be the time. If not, ‘vi’ is available by default.

Next, you will probably want the Extra Packages for Enterprise Linux (EPEL) repository installed and available for your CentOS Raspberry Pi build. You will need to add the repo before you can install it. (Note that this is an unofficial repo, as it is NOT officially supported by CentOS yet, so some things might be broken.)

Add this to the repo folder:

# vi /etc/yum.repos.d/epel.repo

[epel]
name=Epel Repo
baseurl=https://armv7.dev.centos.org/repodir/epel-pass-1/
enabled=1
gpgcheck=0

Once you have the repo added, you can update the system. I highly recommend updating with the ‘Continuous Release’ repo as well; it helps keep things up to date on the Pi, especially if you want packages like gpio-rpi and snapd.

# yum --enablerepo=cr update
# yum install epel-release

(If you prefer dnf, then this would be the best time to install it.)

Install and Enable Snapd

Now that everything is updated and you are running with the extra repos, it’s time to install snapd. If you don’t know what snapd is, it’s the daemon that runs Snapcraft’s snaps on your system. More about snaps here.

# yum install snapd
# systemctl enable snapd
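One hedge worth mentioning: snapd is socket-activated, so if snap commands hang after a reboot, Snapcraft’s documentation suggests enabling the socket unit as well:

# systemctl enable --now snapd.socket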

Install Extra Packages

For the final piece, I needed to install gpio-rpi, and snap was the best option. Snap requires some extra flags before it will install “less-than-secure” packages; both edge and devmode are required here.

# snap install --edge --devmode gpio-rpi

Access a Cockpit-Created Windows 10 VM (KVM/QEMU) from the Host Network

KVM is a powerful tool for creating virtual machines, but one of the issues that seems to haunt people is: how the hell do you connect to the VM from another point on your network? Furthermore, with its development advancing so rapidly, Cockpit is becoming an amazing tool for home server management. Sure, there are plenty of setups where forwarding ports to localhost is perfectly fine, and a “containerized” VM serves its purpose well when no other access is required. But what if, as in my case, you need a development environment and you don’t want to build an entirely new machine?

My initial setup wasn’t successful: the VM was isolated behind the virtual bridge, and because the gateway/router couldn’t define explicit routes to it, it was impossible for other computers to see the bridge or the VM. Below is a simple network diagram that depicts my issue.

Didn’t work….

The behavior of this configuration was secure, but not what I needed. There were a few things I observed:

  • PC to virbr0 ping was not successful
  • PC to VM ping was not successful
  • Server to virbr0 ping was successful
  • Server to VM ping was not successful (I believe this was because of the Windows 10 firewall)
  • VM to PC ping was successful
  • VM to Gateway ping was successful
  • VM to virbr0 ping was successful
  • VM to Google ping was successful

I also noticed that when running nmap against localhost on the server, ports 5900 and 5901 were open; when I ran nmap against the ethernet device’s IP, these ports were closed. I assumed that without proper routing configured, the ports would only ever be accessible from the host machine. (Again, this is a great security feature that could be useful for certain containerized virtual machines.)

Solution

First, I decided to ditch the idea of using VNC and went with RDP instead. The responsiveness is way better, and it seemed like a cleaner approach for a development/coding environment. This option requires that you enable remote connections in the Windows 10 settings.

The network bridge setup that KVM and Cockpit deploy is very secure and uses solid firewall rules to protect your equipment. The one method I found to bypass these limitations was a direct attachment to a secondary NIC. I know this probably isn’t everyone’s first choice, but after about a week of wrestling with iptables, ebtables, and various other firewall tools, this was the easiest configuration, and it let the VM tap directly into the host network, making it accessible from anywhere.

There are a few Windows 10 firewall features that will also need to be disabled so that your host machine and other devices on the network can see your VM. (This guide will help you turn off the Windows Defender firewall.)

Once Cockpit, libvirt, virt-install, bridge-utils, and ebtables are installed properly, you should be able to proceed with the setup.

I had two network devices configured on my server: the first is the primary ethernet card, and the second is the one I configured for the direct attachment. Once your VM is installed and running, you can disable the default bridge network.

Navigate to the virtual machine’s Network Interfaces settings tab. I deleted the initial network device and created a new one. The new device’s interface type is Direct Attachment, the source is the secondary ethernet card, and the model is e1000e. After you restart your virtual machine, the changes will take effect.
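For reference, the direct attachment corresponds to a libvirt interface definition roughly like the following (viewable with virsh edit; the device name eno2 is just an assumption for the secondary NIC):

<interface type='direct'>
  <source dev='eno2' mode='bridge'/>
  <model type='e1000e'/>
</interface>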

VM Overview
VM Network Interface

To confirm everything is working properly, just open cmd in your Windows VM and check the IP address to make sure that the gateway is your router on the host network and not a bridge IP address.
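A quick sanity check (192.168.122.x is libvirt’s default NAT range, so seeing your router’s address instead is the goal):

C:\> ipconfig /all

The “Default Gateway” entry should show your router (for example, 192.168.1.1), not a virtual bridge address.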

Static IP Using dhcpcd

Static IP addresses are very useful on networks where a lot of devices are managed and/or you need to track specific devices for any reason. In my specific case, I have a multi-purpose server where I manage specific rules related to port management. The easiest application for port management is to assign a static IP.

In Linux, there are various network management tools available. In my opinion, the most intuitive tool for managing a server’s local IP address is dhcpcd, as it is very simple to use and gets the job done.

I appended the following snippet to the bottom of /etc/dhcpcd.conf, which allowed for static assignment.

interface eno2
static ip_address=192.168.1.90/24
static routers=192.168.1.1
static domain_name_servers=8.8.8.8 8.8.4.4
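After saving, a restart of the daemon applies the new address (assuming dhcpcd is the service managing the interface):

# systemctl restart dhcpcd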

Home Assistant with Podman

# podman run -dt -v /srv/hassio:/config:Z -v /etc/localtime:/etc/localtime:ro --name=home-assistant --net=host docker.io/homeassistant/home-assistant:latest

For most pods, I would recommend publishing specific ports instead of using the host network. However, since Home Assistant is exclusive to port 8123, this is one of those exceptions where it shouldn’t matter.
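For comparison, a port-published variant would look something like this sketch (though Home Assistant’s discovery features generally work best with host networking, which is why I stuck with it):

# podman run -dt -v /srv/hassio:/config:Z -v /etc/localtime:/etc/localtime:ro --name=home-assistant -p 8123:8123 docker.io/homeassistant/home-assistant:latest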

Additionally, I have all of my configurations stored in /srv/hassio; if you want your configurations stored somewhere else, replace that path with your desired location.

Envoi Project Progress

There has been a substantial amount of progress made on the Envoi Project.  The setup page is now completely functional and requires users to complete the setup prior to using the platform.  Other things included are:

  • Password encryption using Argon2i (requires PHP 7.2)
  • Proper password verification for sessions
  • Upgrade to Bootstrap 5 (alpha)
  • User settings manipulation
  • Dynamic HTML class for page creation (work in progress, but functional)
  • Admin panel functionality
  • CRUD model for posts (Create, Read, Update, and Delete)
  • All posts are sorted and posted on the main page

While there is a long road ahead, the baseline is coming together very nicely.  My goal is to clean up the backend, get the admin panel completed, and begin the API integration very soon.

Screenshots: Add Content (the add posts page), Setup (the setup page), and View Posts (the view posts page).

Envoi Project

I have started working on something that I think is interesting.  There aren’t enough flat-file blogging platforms around that support full integration with social media.  I have decided to create my own version of this with the Envoi Project.  This project is currently hosted on GitHub, but will soon have its own website.

The Envoi Project is planned to be a flat-file blog that simultaneously shares your content across all of your social media.  I am also attempting to keep the requirements and languages simple, so I have opted to stick with PHP, JavaScript, and Bootstrap’s CSS.  The only requirement so far is an Apache web server.  As for the content, Envoi will categorize your posts into 6 different types (text, photo, video, link, quote, file).  I felt that these are the core reasons that people blog, so creating types will help narrow down the social media sharing experience and focus on getting the right content onto the right platform.

The idea is to focus on centralized content sharing.  No matter what type of team or group you are trying to share with, you will be able to reach everyone with a single post.

Another huge topic I want to focus on is security.  Since you will be syncing your social media with Envoi, securely storing your credentials is very important.

The project is in its infancy at the moment, but I am hoping to gain some traction soon and get a working project released before the end of the year.

If you are interested, follow the project on GitHub.

Envoi Screenshot

Using ‘certbot’ for SSL Encryption

Let’s Encrypt offers a great service: free SSL certificates for your self-hosted websites. Using these certificates is fairly easy, and when you add cron jobs into the mix, you don’t have to worry about manually renewing your certs.

If you don’t have ‘certbot’ installed yet, install it through your distribution’s package manager of choice. Then run the following commands (pay attention to the parts that require your actual information).

$ sudo systemctl stop [INSERT WEBSERVICE HERE]
$ sudo certbot certonly --standalone --email [EMAIL-ADDRESS] -d thebytes.net,www.thebytes.net,[ALL OTHER SUBDOMAINS]

If all goes according to plan, all of your certificates will be generated under the first domain name in the command. You will get a congratulations message, and you can check that they exist by looking in /etc/letsencrypt/live/ for a folder named after the site.
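For example, a successful run for this site leaves the keys and certs here:

$ sudo ls /etc/letsencrypt/live/thebytes.net/
README  cert.pem  chain.pem  fullchain.pem  privkey.pem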

The last thing you need to do is create a small script that your cron jobs can execute to automatically update the certificates. I created a file in the monthly cron folder for this exact requirement.

$ sudo nano /etc/cron.monthly/update-certbot
$ sudo chmod +x /etc/cron.monthly/update-certbot

#!/bin/bash
certbot renew --force-renew

Save the file with the above lines, and restart your web service. Your certbot SSL certificates will now renew monthly.
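One caveat: since the certificates were issued with the standalone authenticator, renewal needs port 80 free. certbot’s --pre-hook and --post-hook flags can handle the stop/start automatically, so a variant of the script (with the web service name as a placeholder) might look like:

#!/bin/bash
certbot renew --force-renew --pre-hook "systemctl stop [WEBSERVICE]" --post-hook "systemctl start [WEBSERVICE]"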

Minecraft PE Server Without Microsoft Account Verification

Yeah, a Minecraft server… probably one of the most annoying games that most children are completely addicted to. The issue I have with this game is the requirement for an Xbox Live account in order to play on a server. Not a traditional online server, but a local one. Like most parents, I’d like to have a little control over what my kids do on the internet, and playing a game with strangers at their age isn’t something I’m ready for. And since they both want to play in the same world even when the other isn’t around, self-hosting a local game from one of their tablets doesn’t solve the problem.

So, I set out on a mission to find the best way to solve this. The solution seemed simple: host a server and bypass the Microsoft requirement so I don’t have to give them Xbox accounts. This wasn’t as much of a problem as I had anticipated, given the options provided by the Bedrock server edition.

Simply switching ‘online-mode’ to false in server.properties allowed local server connections. The benefit is that no outsiders can join, because I control the port forwarding.
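The relevant line in server.properties is just:

online-mode=false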

My choice of server was the Bedrock edition because of its simplicity and ease of setup, coupled with the basic configuration. Another great benefit was the ability to take one of the save files from their tablets and upload it as a server world (so they didn’t have to “start over”).

Installing on Arch Linux was pretty intuitive. I downloaded the server files from Mojang’s website, unzipped them to my /srv folder, and launched the server (from within ‘screen’).
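Roughly, that boils down to the following (the /srv/minecraft path is just where I happened to put it; the LD_LIBRARY_PATH invocation comes from Mojang’s bundled instructions):

# unzip bedrock-server-*.zip -d /srv/minecraft
# cd /srv/minecraft
# screen -S minecraft
# LD_LIBRARY_PATH=. ./bedrock_server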

In the default folder is the properties file, which was easy to configure as well. I changed all of the game settings for the server to match one of the worlds they created (seed, name, difficulty, etc.) and FTPed the files to the ‘worlds’ folder. The last thing I did was disable online mode. This allowed them both to join without having an Xbox account.