Recently, I’ve found myself walking several friends through what is essentially the same basic setup (sketched right after this list):
- Install Ubuntu server
- Install Docker
- Configure Tailscale
- Configure Dockge
- Set up automatic updates on Ubuntu/Apt and Dockge/Docker
- Self-host a few web apps, some publicly available, some on the Tailnet.
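For anyone who wants the shape of it up front, the install steps boil down to a few commands on a fresh Ubuntu box. This is a rough sketch based on each project’s documented install scripts (read scripts before running them; the Dockge URL is from its README at the time of writing):

```bash
# Docker, via the official convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Tailscale, via its official install script, then join the tailnet
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up

# Dockge, per its README: a compose stack managing stacks in /opt/stacks
sudo mkdir -p /opt/stacks /opt/dockge
cd /opt/dockge
sudo curl https://dockge.kuma.pet/compose.yaml --output compose.yaml
sudo docker compose up -d
```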
After realizing that this setup is generally pretty good for relative newcomers to self-hosting, and pretty stable (in the sense that it runs for a long time and stays up to date without much human interference), I decided to write a few blog posts about how it works so that other people can set it up for themselves.
As of right now, there’s:
- An introduction (with Ubuntu basics)
- Tailscale setup
- Optional Docker Explainer
- Dockge setup with Watchtower for automatic updates (see the sketch after this list)
- MicroBin as a first self-hosted webapp
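For the Watchtower piece mentioned above, the core of it is a single container watching the Docker socket. This is Watchtower’s documented quickstart; the Dockge-specific wiring is left to the post itself:

```bash
# Watchtower's documented quickstart: it watches the Docker socket and
# pulls/recreates containers when their images publish new versions
docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower
```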
Coming soon:
- Immich
- Backups with Syncthing
- Jellyfin
- Elementary monitoring with Homepage
- Cloudflare Tunnels
Constructive feedback is always appreciated.
EDIT: Forgot to mention that I am planning a backups article
> Set up automatic updates
> Immich
You like to live dangerously, right?
Yeah, a little xD but FWIW this article series is based on what I personally run (and have set up for several friends), and it’s been doing pretty well for at least a year.
But I have backups, which can be used to recover from breaking updates.
This is very cool, but also very dangerous. Many projects release versions that need some sort of manual intervention to be updated, and automatically updating to new versions on docker can lead to data loss in those situations.
Here’s a recent example from Immich:
https://github.com/immich-app/immich/releases/tag/v1.133.0
It is my humble opinion that teaching newbies to do automatic updates will cause them to lose data and break things, which will probably sour them from ever self hosting again.
Automatic OS updates are fine, and docker update notifications are fine, but automatic docker updates are just too dangerous.
That’s reasonable. However, my personal bias is towards security, and I feel like if I don’t push people towards automated updates, they’ll leave vulnerable, un-updated containers exposed to the web. I think a better approach is to push for backups with versioning. I forgot to add that I’m planning a “backups with Syncthing” article as well; I’ll take this into consideration, add it to the article, and use it as a way to demonstrate recovery in the event of such an issue.
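For what it’s worth, Watchtower also has a documented middle ground: run it with --label-enable so auto-updates are opt-in per container, and leave stateful apps like Immich on manual updates. A minimal sketch:

```bash
# Opt-in mode: Watchtower only touches containers that carry the label
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower --label-enable

# ...a low-risk container opts in (nginx here is just a stand-in)
docker run -d -p 8081:80 \
  --label com.centurylinklabs.watchtower.enable=true \
  nginx

# ...while anything stateful (Immich, databases) simply omits the label
# and gets updated by hand after a glance at the release notes
```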
It’ll still cause downtime, and they’ll probably have a hard time restoring from backup the first few times it happens, if for no other reason than stress, especially when it updates at the wrong moment or on the wrong day.
> they will leave vulnerable, un-updated containers exposed to the web
That’s the point. Services shouldn’t be exposed to the web unless the person really knows what they’re doing, has taken the precautions, and applies updates soon after release.
Exposing them to the VPN and to the LAN should be plenty for most. There’s still a risk, but a much lower one.
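For example, putting an app on the tailnet only is roughly a one-liner with tailscale serve. The flag syntax has shifted between Tailscale versions, so treat this as a sketch and check tailscale serve --help:

```bash
# Serve a local web app (assumed here to be on port 8080) over HTTPS to
# the tailnet only; nothing is exposed to the public internet
sudo tailscale serve --bg 8080
```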
> “backups with Syncthing”
Consider warning the reader that it won’t be obvious if backups have stopped, or if a sync folder on the backup PC is left in an inconsistent state because of it, since errors are only shown in the web interface or via third-party tools.
Yeah, I agree with the warnings. One of the things I’m trying to make sure I get across accurately (it will be discussed later in the series) is how to do monitoring. Making sure backups are functioning properly would need to be part of that.
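e.g., even before a proper monitoring setup, a cron-able spot check against Syncthing’s documented REST API can catch a stalled backup. The API key and the folder ID “default” below are placeholders:

```bash
# Syncthing's REST API lives on the GUI port (8384 by default); the
# API key is shown under Actions > Settings in the web GUI
API_KEY="replace-with-your-api-key"   # placeholder

# Any recent errors? (an empty list means none)
curl -s -H "X-API-Key: $API_KEY" http://localhost:8384/rest/system/error

# How complete is a given folder on this device?
curl -s -H "X-API-Key: $API_KEY" \
  "http://localhost:8384/rest/db/completion?folder=default"
```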
You say this as though security is naturally a consideration for most docker images.
I use Diun for update notifications. I wish there were something that could send me a notification and then, if I gave it the okay, apply the update. Maybe with release notes for the latest version, so I could quickly judge whether I need to do anything besides update.
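For anyone curious, Diun’s documented quickstart is roughly the following; the schedule here is an assumption, and the DIUN_NOTIF_* settings for wherever you want notifications sent are omitted:

```bash
# Diun watches the Docker socket and notifies when an image used by a
# running container gets a new version (it never applies updates itself)
docker run -d --name diun \
  -e "TZ=Etc/UTC" \
  -e "DIUN_WATCH_SCHEDULE=0 */6 * * *" \
  -e "DIUN_PROVIDERS_DOCKER=true" \
  -v "/var/run/docker.sock:/var/run/docker.sock" \
  crazymax/diun:latest
```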
Naturally, the same day that I publish this, I discover that Watchtower is semi-abandoned, so I’m gonna have to look into alternatives to that…
Just call me Mr. BuzzKill. LOL I learned that there is a fork at https://watchtower.devcdn.net/. Deployed it yesterday, and for the first round of updates, everything went as it should. No runs, no drips, no errors. Time will tell.
Sweet! Thank you! I’ll test it out and update the blog posts to reflect that
In case it’s of help: a common problem I find with guides in general is that they assume I don’t already use Apache (or some other service) and describe everything as though I’m starting with a clean system. As a newbie, it’s hard to know what damage the instructions will do to existing services, or how to adapt them.
Since Docker came along it’s gotten easier, and I’ve learned enough about ports etc. to be able to avoid collisions. But it would be great if guides and tutorials in general covered that situation.
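For illustration, the collision-avoidance pattern is mostly just checking what’s already listening and remapping the host side of the port (nginx below is a stand-in for whatever app a guide covers):

```bash
# See what's already listening before picking a host port
sudo ss -tlnp

# Apache owns 80/443? Map the container's port 80 to a free host port
docker run -d -p 8081:80 nginx
# ...then reach the app at http://<host>:8081 instead of port 80
```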
Hmmmm, that’s a good point. I’ll try to work that in :P ’cause Tailscale can cause issues if you’re already running WireGuard or something.
This is great, thanks!
Did I miss the part where we set up the server?
It’s covered in the introduction: what’s expected of the reader, and the server setup. Towards the end of the intro I go over the unattended-upgrades setup.
So yes, there’s nothing about installing Ubuntu itself; the series assumes you already have it set up.
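For reference, the Apt side of that boils down to a couple of commands (the intro walks through the config in more detail):

```bash
# Install and enable Ubuntu's unattended-upgrades
sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades

# Confirm the service is active
systemctl status unattended-upgrades
```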
Something really fun I found out recently, when my friend lost all access to his system except for a single WebDAV share by accidentally turning off all his remote admin access:
If you write “b” to /proc/sysrq-trigger, it will immediately reboot the system (like holding down the reset button, so inherently a bit dangerous).
He was running Nephele with / mounted as the share, so luckily he just uploaded that file with a single “b” in it, and all his remote admin stuff came back up after the reboot.
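For reference, the same trick from a local root shell; each letter written to the trigger file is a SysRq command, and “b” on its own is exactly the reset-button behavior described above:

```bash
# s = sync dirty data to disk, u = remount filesystems read-only,
# b = reboot immediately (no shutdown sequence, no final sync)
echo s | sudo tee /proc/sysrq-trigger
echo u | sudo tee /proc/sysrq-trigger
echo b | sudo tee /proc/sysrq-trigger
```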
That’s horrible and funny at the same time.
I will assume they fixed that vuln later.
That’s not a vulnerability. That’s intended and desired behavior. It was really useful in this case too.
I should mention that the WebDAV share is password protected, so only he has access to do that.
OK, a backdoor then. Can they overwrite any file with it?
It’s their machine. It’s a front door.
This is appreciated. As a hobbyist, I feel like my setup is held together by pins.
EDIT: I rely on Nextcloud, BTW.
Hell yeah, dude.
Thanks 😊👌🏻
I’ve been following your posts to set up some self hosting and it’s been going well so far, thank you!
I have noticed that a lot of the images in the posts don’t seem to be loading (422 errors). It hasn’t been too much of a deal-breaker so far, but in the section about running Tailscale in a container and editing the ACLs file, there are a couple of images showing the before and after of that file that aren’t loading for me. Hopefully this is something that isn’t too difficult for you to fix.
Thanks again for these posts, I’ve learned a lot so far and even got jellyfin working on my phone outside my local network using tailscale.
I am very happy to hear that. Sadly I don’t know what to do about the images not loading. I am just using the free tier of pico.sh, so I imagine that corners have been cut for pathological consumers like me. If you ever need a specific image or something doesn’t make sense, feel free to DM!
Try Pangolin instead of Cloudflare, though it requires a VPS (e.g. Oracle free tier, or pay €1/month to IONOS)
I’m hesitant to ask because I’m running Pangolin also, but why are there downvotes here? Did I miss something about Pangolin?
It could also be because I mentioned Oracle :P
I guess I hope so!