
State of my Homelab in August 2025

It's been a while since I last wrote about the stuff I self-host. Once upon a time I was simple-minded enough to believe that I would get some hardware, install some software that seemed useful to me, and then leave it be, happily using my self-hosted stuff for years to come. Turns out this was never to be, because self-hosting is an ever-changing landscape of discovery, trial and error, breakage, frustration, successes and temporary stability. So instead of presenting the definitive setup I run, I'll make a snapshot at this point in time. No guarantees how long this will last!

As detailed on my Uses page, I own an HP small form factor PC that I use as a server; its setup is described here. It serves as a NAS holding my documents and media files, namely pictures, music, audiobooks, movies and series. Furthermore, it runs several applications, most of which serve the files the NAS holds. These are:

For Home Assistant I own a bunch of ZigBee devices that are coordinated by my server, which has a ZigBee USB dongle plugged in. All of these services started out as things I found interesting or that seemed like they could be useful, but after a while I ended up depending on most of them. They make life easier or more enjoyable for me, because I have less manual work to do, or because I like having control over my data and don't like subscribing to all kinds of SaaS things or using proprietary software.

All of these services run in Docker containers (complaints can go to /dev/null, thank you). Since this server runs in my home, I don't really want to expose any of this to the public internet. I do, however, wish to access it when I'm not at home. For this reason I use Tailscale as a WireGuard-based mesh-VPN solution. I have DNS entries to access my various services on subdomains of this domain, but these point to Tailscale IP addresses, which only someone in my Tailnet can reach. I use caddy as a reverse proxy, which, thankfully, automatically gets TLS certs from Let's Encrypt using the DNS-01 challenge. Unfortunately, this means that I have to build caddy from source myself, because support for my DNS provider's API isn't included in the standard build, but that's fine. All of this is described in more detail here.
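For illustration, a site block in such a Caddyfile looks roughly like this. The subdomain, upstream port, and DNS provider module are placeholders, not my actual values:

# hypothetical service behind Tailscale; the DNS record points at a Tailscale IP
service.bfloeser.de {
  tls {
    # needs a caddy binary built with the matching plugin, e.g.
    # xcaddy build --with github.com/caddy-dns/<provider>
    dns <provider> {env.DNS_API_TOKEN}
  }
  reverse_proxy localhost:8080
}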

Another thing I have is a Raspberry Pi 3. This is hooked up to a monitor and runs MagicMirror. I use this to display a slideshow of images that sit on my NAS (which is mounted via NFS) as well as the time, weather, my calendar and some sensor data from Home Assistant. This too began as a fun side project but has evolved into critical infrastructure.
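The mount itself is a single fstab line on the Pi; the NAS hostname and export path here are made up:

# /etc/fstab on the Pi (hypothetical hostname and export path)
nas:/export/pictures  /mnt/pictures  nfs  ro,nofail,_netdev  0  0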

The most recent addition to my setup is a Hetzner VPS (the cheapest one, at about €4.50). I was prompted to get one by this post by a Fedifriend detailing how to set up Pi-hole for all personal devices, using Tailscale DNS. I'm not going to repeat all the steps involved because I simply followed the same setup. The gist is this (steps 1 through 3 are sketched as commands below the list):

  1. Install Tailscale
  2. Install Pi-hole on the VPS
  3. Instruct Pi-hole to only listen on the Tailscale network interface (usually called tailscale0)
  4. Enter the Tailscale IP of the VPS as the global nameserver in the DNS settings of the Tailscale admin console
  5. Enable "Override DNS servers" in the DNS settings of the Tailscale admin console
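On the VPS, the first three steps look roughly like this. The install scripts are the official ones; the Pi-hole v6 config keys are my assumption, and steps 4 and 5 happen in the Tailscale admin console:

# steps 1 and 2: official install scripts
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up
curl -sSL https://install.pi-hole.net | bash

# step 3: bind Pi-hole's DNS to the Tailscale interface only
# (assuming Pi-hole v6; in /etc/pihole/pihole.toml set
#   [dns]
#     interface = "tailscale0"
#     listeningMode = "BIND"
# or use the equivalent options in the web UI under Settings -> DNS)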

Additionally, it's a good idea to set up a firewall that blocks all incoming traffic except for what arrives on the Tailscale network interface on port 53 (unless you changed the standard Pi-hole settings), on top of all the usual steps to secure a public-facing server.
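A minimal sketch with ufw, assuming SSH is the only other thing you need to reach:

# deny everything incoming by default
sudo ufw default deny incoming
sudo ufw default allow outgoing
# keep SSH reachable (or restrict this to the Tailscale interface as well)
sudo ufw allow 22/tcp
# DNS, but only on the Tailscale interface
sudo ufw allow in on tailscale0 to any port 53
sudo ufw enable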

Inspired by yet another blog post from the same person, I also installed unbound and instructed Pi-hole to use localhost (i.e. unbound) as the upstream DNS resolver. That way, I don't have to rely on third-party recursive DNS resolvers to do this for me.
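The unbound side is the well-known config from the Pi-hole documentation; the relevant part is that unbound listens locally on its own port (5335 by convention), which you then enter as Pi-hole's sole upstream. An excerpt:

# /etc/unbound/unbound.conf.d/pi-hole.conf (excerpt)
server:
    interface: 127.0.0.1
    port: 5335
    do-udp: yes
    do-tcp: yes
    harden-glue: yes
    harden-dnssec-stripped: yes
    prefetch: yes
# then set 127.0.0.1#5335 as the only upstream DNS server in Pi-hole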

Something that I also really like about this setup is that, since I'm using the Tailscale network interface to access the Pi-hole, all my clients are logged with different IP addresses, so I can clearly distinguish them. This had been a problem for me in the past. What's more, I added a few entries to dns.hosts in the Pi-hole settings, so the Tailscale IP addresses are now automatically associated with their respective hostnames. Really sweet.
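In Pi-hole v6 terms, those entries look something like this; the IPs and names here are made up (Tailscale hands out addresses from the 100.64.0.0/10 range):

# in /etc/pihole/pihole.toml (or via the web UI settings)
[dns]
  hosts = [
    "100.100.100.1 homeserver",
    "100.100.100.2 vps",
    "100.100.100.3 raspberrypi"
  ]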

In principle, this could have been it, but I wanted to do a couple more things, just to try them out. I decided to move this very blog from being hosted by Vercel to self-hosting it on the Hetzner VPS. The webserver, i.e. caddy, was already there; I only had to upload the files, instruct caddy to serve them, and change the DNS records to point my domain there. That's it. I then took a while to fiddle with HTTP response headers for common security and caching. Since the website is totally static, most of this is pretty pointless, but I wanted to do it anyway, because this is new to me and I thought it'd be an interesting exercise. This is what I finally landed on:

bfloeser.de {
  root * /var/www/blog
  file_server
  encode gzip

  @static {
    file
    path *.ico *.css *.js *.svg *.gif *.jpg *.jpeg *.png *.woff
  }
  header @static {
    Cache-Control "max-age=31536000"
  }

  header {
    # lock down powerful browser features; interest-cohort=() also opts out of FLoC
    Permissions-Policy "accelerometer=(), camera=(), geolocation=(), gyroscope=(), magnetometer=(), microphone=(), payment=(), usb=(), interest-cohort=()"

    # enable HSTS
    Strict-Transport-Security "max-age=31536000"

    # disable clients from sniffing the media type
    X-Content-Type-Options nosniff

    # clickjacking protection
    X-Frame-Options DENY
    Referrer-Policy strict-origin-when-cross-origin
    Content-Security-Policy "default-src 'self'; script-src 'self' blob: https://umami.bfloeser.de 'unsafe-inline' https://cdn.jsdelivr.net; font-src 'self' https://cdn.jsdelivr.net; style-src 'self' 'unsafe-inline' https://cdn.jsdelivr.net; connect-src 'self' https://umami.bfloeser.de"

    # Advertise HTTP/3
    alt-svc "h3=':443'; ma=86400"
  }
}

If you read this carefully, you'll have spotted a reference to Umami. Just for fun, I wanted to get some analytics for my blog. It's not necessary; it's just idle curiosity to see how many people read which blog post, where they're from, and what devices they did this with. I have no way to tell who these people are; I just like to stare at graphs and numbers and click around. Apart from that, I do absolutely nothing with the data, and I might kill the whole thing in the future. We'll see.

So, I self-host Umami, because I want to have control over the data flow. This presented me with a challenge. The tracking script is loaded client-side when someone opens a page on my blog: a GET request is sent to my Umami instance to fetch the script, it runs, and then a POST request containing some data is sent back to the instance. For this to work, my Umami instance needs to be publicly accessible. However, that means the web interface, including a login screen, is also publicly accessible, and this is a bit icky, I think. I thought about several approaches, opened an issue with Umami, and finally landed on a solution. IP allowlisting isn't an option because when accessing the instance via the standard network interface I don't send my Tailscale IP address but my regular public one, which is not stable. Plus, I regularly use a VPN, so that doesn't really work for me. Umami has the option to disable the login entirely but, on the one hand, this didn't work when I tried it, and on the other hand, it would mean that I could no longer easily access the dashboard and the data. I could try using the API (which is also public) or, as suggested by the maintainer, I could log in once, disable login, and just keep using the auth token thus created. That seemed very weird to me because every once in a while I clear my browser's cookies, which would log me out of Umami. Also, it implies that the token has an unlimited lifetime, which seems like... not a terribly good idea to me.

Long story short, I decided to use caddy to simply block all requests to my Umami instance except GETting the tracking script and POSTing the resulting data back. That means the public API is also off limits, which I'm much happier with. To actually access the data, I simply use SSH port forwarding from my machine and then access the service from localhost (thanks @irgndsondepp@gts.da-miez.de for the reminder that this exists!).
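In Caddyfile terms, the allowlist looks roughly like this, assuming Umami's default paths (/script.js for the tracker, /api/send for the collected data) and an instance listening on port 3000:

umami.bfloeser.de {
  @script {
    method GET
    path /script.js
  }
  @collect {
    method POST
    path /api/send
  }
  handle @script {
    reverse_proxy localhost:3000
  }
  handle @collect {
    reverse_proxy localhost:3000
  }
  # everything else, including the dashboard and the API, gets rejected
  handle {
    respond 403
  }
}

For the dashboard, something like ssh -L 3000:localhost:3000 me@vps (user and host hypothetical) then makes it available at http://localhost:3000 without exposing anything else.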

Finally, I wanted some observability and alerting to make sure things keep working, or at least to get a notification when they don't. I know there's a plethora of options, but I wanted something stupid and simple. So I went with Uptime Kuma combined with ntfy, which handle monitoring and alerting, respectively. I self-host both services and set everything up so that I get a notification in the ntfy Android app when something goes down. I know there's an elephant in the room: if the VPS goes down, so does my monitoring. But for the time being, that's how it is. I might set up another set of monitoring tools on my home server to watch for outages of the VPS, so they can watch each other. As I said above, self-hosting is always evolving.
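Both run in Docker as well. A minimal compose sketch, assuming the projects' default images and ports, with everything bound to localhost so caddy can front it (the volume paths are my own choice):

# docker-compose.yml (sketch)
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    volumes:
      - ./uptime-kuma:/app/data
    ports:
      - "127.0.0.1:3001:3001"
    restart: unless-stopped
  ntfy:
    image: binwiederhier/ntfy
    command: serve
    volumes:
      - ./ntfy:/var/cache/ntfy
    ports:
      - "127.0.0.1:8080:80"
    restart: unless-stopped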

That's it (for now)! Really happy with this so far and it'll be interesting to see where the journey takes me in the future. I'll be sure to keep you updated.

