Next Round of Homelab Updates

Monitoring Cleanup

Has it been a week already? Time never stands still, does it? Anyhoo, I ended my last post being unsure about how exactly to do monitoring. Who watches the Watchmen?

I decided to go ahead and implement something I've thought about previously: I installed uptime-kuma and ntfy on both my VPS and my server so they can keep tabs on each other. An Observability Mexican Standoff, if you will. It works pretty well so far. The only weakness is that none of this monitoring will do anything if Tailscale networking is down or has issues, because all of the routing between the services goes through it. To cover this, I added a monitor that constantly pings 100.100.100.100, an address that is only routed through the Tailscale net if it is present and functional. If it isn't, the ping will fail. Good enough for now.
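In uptime-kuma this is just a regular ping monitor, but the same idea as a standalone check could be sketched as a cron entry like the hypothetical one below (the ntfy topic URL is a placeholder, not my actual setup):

```shell
# Every 5 minutes: ping Tailscale's internal address once with a 5s timeout;
# if it fails, push an alert to an ntfy topic (placeholder URL).
*/5 * * * * ping -c 1 -W 5 100.100.100.100 >/dev/null || curl -s -d "Tailscale routing appears to be down" https://ntfy.example.com/homelab-alerts
```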

Getting Rid of Dependencies

So far I've been using caddy as my web server and reverse proxy, and I'm very happy with it because it takes a lot of the headache out of managing TLS certs. Its configuration is also much more readable than nginx's, but that may be my personal preference. The fly in the ointment is that installing caddy requires a third-party repo on Ubuntu. What's more, since I need to integrate it with the Porkbun API (my domain registrar), caddy has to be built from source with the appropriate module. For this purpose, I installed xcaddy, the build tool for exactly that. This in turn depends on Go, which pulls in a whole slew of dependencies, solely for the purpose of building caddy.

I figured there should be a better way to do this, and I landed on simply using another Docker image for caddy. Sure enough, there are a whole bunch of these, already prebuilt with the modules I need. I found this repo that builds a whole host of images with all kinds of modules. It also adds the dockerproxy module, which makes reverse proxying a nicer and less manual experience. Instead of adding all the directives to the Caddyfile, you add appropriate labels to the Compose file; caddy picks up on them and does the rest, without needing to be restarted. Pretty neat.
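As a sketch of what the label approach looks like, assuming caddy-docker-proxy's label conventions (service name and domain here are made-up placeholders):

```yaml
# Hypothetical compose service that announces itself to caddy via labels
# instead of a hand-written Caddyfile entry.
services:
  whoami:
    image: traefik/whoami
    labels:
      caddy: whoami.example.com                # site address caddy should serve
      caddy.reverse_proxy: "{{upstreams 80}}"  # proxy to this container on port 80
```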

So I pulled the image and set it up with network_mode: "host" so it has access to the host network and can properly do its job. Consequently, I could rip out the caddy and xcaddy packages, along with their repos and all their dependencies. I subscribed to an RSS feed of the releases of the GitHub repo that builds this image, so I get notified when a new version is available, along with release notes to check whether I need to intervene in some way. Nice!
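The caddy container itself could be declared roughly like this; the image name, credentials, and volume layout are assumptions based on typical setups, not my exact config:

```yaml
# Sketch: caddy on the host network so it can bind ports 80/443 directly
# and reach the other services.
services:
  caddy:
    image: ghcr.io/example/caddy-custom:latest   # placeholder for the prebuilt image
    network_mode: "host"
    restart: unless-stopped
    environment:
      PORKBUN_API_KEY: "..."                     # DNS module credentials (placeholders)
      PORKBUN_API_SECRET_KEY: "..."
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data                         # cert storage must persist across restarts
      - /var/run/docker.sock:/var/run/docker.sock  # lets the dockerproxy module watch labels
volumes:
  caddy_data:
```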

Getting Rid of the Reverse Proxy Completely?

I recently stumbled upon this article from the Tailscale people detailing how to use Tailscale with Docker. The general idea is that you add a sidecar container running Tailscale next to each of your existing service containers; the sidecar provides the networking needed to route the Tailscale IPs. As detailed in the article, this can be combined with tailscale serve, which takes care of reverse proxying and obtaining a TLS cert, so your service is accessible at its Tailnet URL via HTTPS. The only thing left to do to arrive at the same user experience I have now would be to add CNAME records pointing from the current URLs to the ones using my Tailnet domain, but that is optional.
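Following the pattern from the article, a sidecar setup might look roughly like this sketch (the auth key, service names, and serve config path are placeholders):

```yaml
# Sketch: a Tailscale sidecar providing networking for one app container.
services:
  ts-myapp:
    image: tailscale/tailscale:latest
    hostname: myapp                        # becomes the node name on the Tailnet
    environment:
      TS_AUTHKEY: tskey-auth-...           # placeholder auth key
      TS_STATE_DIR: /var/lib/tailscale
      TS_SERVE_CONFIG: /config/serve.json  # tailscale serve config for HTTPS
    volumes:
      - ts-myapp-state:/var/lib/tailscale  # node state must persist
      - ./serve.json:/config/serve.json
    cap_add:
      - net_admin
  myapp:
    image: nginx:alpine                    # stand-in for the actual service
    network_mode: service:ts-myapp         # share the sidecar's network stack
volumes:
  ts-myapp-state:
```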

I quite like this approach, to be honest. It would give me more granular control over my services, because each of them would become its own node on my Tailnet, so I could define precisely who can access what with ACLs. It would also completely remove the need for a dedicated web server for reverse proxying. Sure, I'd still need one to serve my blog, but for everything else I could just get rid of it! What's more, I could compartmentalize my services and better isolate them from each other and from the host machines they run on. If my Tailnet were compromised, an attacker could access my services but not the hosts they run on, unless they managed to break out of a container. To achieve this, I could handle administrative access to my servers via regular WireGuard instead.
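The per-service control could then be as fine-grained as this hypothetical policy snippet (the user and tag are made up; Tailscale's policy file format allows comments):

```json
{
  "acls": [
    // only I can reach this service's HTTPS port; other Tailnet users get nothing
    {"action": "accept", "src": ["me@example.com"], "dst": ["tag:uptime-kuma:443"]}
  ]
}
```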

The downside, of course, is that this would create more dependencies on the Tailscale ecosystem, so it would hurt even more to migrate to something else, should I want or need to. Headscale, the open-source community reimplementation of the service, supports tailscale serve, but automatic provisioning of TLS certs doesn't work at the time of this writing. It's being worked on, however (tracking issue here).

So I could go ahead, do this, and accept that I'm more dependent on Tailscale for now. I could run Headscale myself and accept that my browser throws a tantrum every time I try to use my own stuff until automatic TLS certs are implemented. Or I could just wait, because my current setup is working, innit? Yeah, right...

Outlook

Lots of cool and exciting stuff! I discovered that Forgejo is pretty easy to host; maybe I should give that a go? Not sure why I'd need it, but who knows? Also, manually maintaining servers gets out of hand now that I have a VPS and am trying to get a second one as well. Maybe I should look into automating more of it again. I have a bunch of Ansible code lying around from the last time I tried this; I just didn't stick with it, but maybe need will drive adoption. Who knows? Stay tuned!

Technology, Linux, Self-hosting
