Do you guys expose the docker socket to any of your containers or is that a strict no-no? What are your thoughts behind it if you don’t? How do you justify this decision from a security standpoint if you do?
I am still fairly new to Docker, but I like the idea of something like Watchtower. Even though I am not a fan of auto-updates and probably wouldn't use that feature, I still find it interesting to get a notification when some container needs an update. However, it needs access to the Docker socket to do its work, and I have read in a lot of places that this is a bad idea, since it can result in root access to your host filesystem from within a container.
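For reference, this is roughly the kind of setup I mean. It's just a sketch, not something I'm running; as far as I understand, WATCHTOWER_MONITOR_ONLY is the switch for "notify but don't update", but double-check the docs:

```yaml
services:
  watchtower:
    image: containrrr/watchtower
    environment:
      WATCHTOWER_MONITOR_ONLY: "true"   # check and notify only, no auto-updates (as I understand it)
    volumes:
      # this mount is the part my whole question is about:
      - /var/run/docker.sock:/var/run/docker.sock
```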
There are probably other containers as well, especially in this whole monitoring and maintenance category, that need that privilege, so I wanted to ask how other people handle this situation.
Cheers!
I just follow the software release pages with RSS.
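For GitHub-hosted projects you don't even need anything special: every repository has an Atom feed for its releases that any RSS reader can subscribe to (Watchtower's repo here is just an example):

```
https://github.com/<owner>/<repo>/releases.atom
https://github.com/containrrr/watchtower/releases.atom
```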
That is actually a pretty good idea. I wanted to try out FreshRSS anyway, so this might be one more reason to do that. Thanks!
Sorry, this doesn't really answer your question, but I've had issues when I used to auto-update containers, so I stopped doing that. Some things have breaking changes; others just had problems in that release that stopped me accessing stuff when not at home. I update every so often when I have ten minutes to do updates: check release notes, deal with any issues if they arise, or roll back to the previous version. I spin up What's Up Docker to see what's changed, then when I'm finished I stop the container so it doesn't keep polling Docker Hub and eating into my free allowance.
In short, it could be an option to spin it up, let it run, then stop the container so there's less risk of it being used for an attack.
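Something like this is the workflow I mean. The image name is from memory (the project has been renamed over time), so verify it against the What's Up Docker docs first:

```yaml
services:
  wud:
    image: getwud/wud          # assumed current What's Up Docker image, check before use
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
# "docker compose up -d wud" when you want a report,
# "docker compose stop wud" afterwards so it stops polling Docker Hub.
```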
That is the exact reason why I wouldn't use the auto-update feature. I just thought about setting it up to check for updates and give me some sort of notification. I feel like a reminder every now and then helps me keep everything up to date and avoid a "never change a running system" mentality.
Your idea of setting it up and only letting it run occasionally is definitely one to consider. It would at least avoid manually checking the releases of each container, similar to the RSS suggestion from /u/InnerScientist.
To be honest, you would get notifications far more often than you need as a reminder. If you're like me, you'll just end up ignoring them anyway! There are a lot of small updates to a lot of software, most often not for security reasons but simply because people are developing their projects. I update every week if I can, but it can stretch to a couple of weeks, at which point I start to feel "guilty", so when it builds up I know I have to do it.
Fair point. It is probably best to keep it simple. I can always set up a reminder in my calendar twice a month if I really have to.
Mounting the docker socket into Watchtower is fine from a security perspective, but automatic updates can definitely cause problems. I used to use Renovate, and it would open a pull request to update the version.
There are lots of articles out there that say the opposite. Not about Watchtower per se, but giving a container access to the socket is generally considered to be a bad idea from a security point of view.
Giving a container access to the docker socket allows container escapes, but if you're doing it on purpose with a service designed for that job, there is no problem. Either you trust Watchtower to manage the other containers on your system or you don't. Whether it manages them through a socket mounted into a container or as a host process with direct socket access makes no difference to security.
I don't know if anybody seriously uses Watchtower, but I wouldn't be surprised. I know that companies use tools like Argo CD, which has a larger attack surface and a similar level of system access via its Kubernetes service account.
I think I get where you're coming from. In the specific case of Watchtower it is not a security flaw; it just uses the socket to do what it is supposed to do. You either trust it and live with the risks that come with that, or you don't and find another solution. I used Watchtower as the example because it was the first one I came across that needs this access. There are probably a lot of other containers out there that use this, so I wanted to hear people's opinions on the topic and their approach.
If you mean updating the images themselves, I just use Kubernetes and rolling updates. Works like a charm.
As for monitoring, Kubernetes also handles that well. Liveness probes are pretty much standard, plus Prometheus for more in-depth monitoring.
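To give a rough idea (names and image are made up for the example), a liveness probe on a Deployment looks like this; rolling updates are the default update strategy for Deployments, so you get those for free:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: nginx:stable          # stand-in image for the example
          ports:
            - containerPort: 80
          livenessProbe:               # kubelet restarts the container if this fails
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 10
            periodSeconds: 30
```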
If you don’t mind the extra overhead it would probably address these issues for you.
I have heard the name Kubernetes and know that it is also some kind of container thing, but I never really went deeper than that. It was more a general question about how people handle the whole business of exposing the docker socket to a container. Since I came across it with Watchtower and considered installing that, I used it as an example. I always thought that Kubernetes, Docker Swarm and things like that were something for the future, when I have more experience with Docker and containers in general, but thank you for the idea.
I've seen three cases where the docker socket gets exposed to the container (perhaps there are more, but I haven't seen them):
- Watchtower, which does auto updates and/or notifies people
- Nextcloud AIO, which uses a management container that controls the docker socket to deploy the rest of the stuff Nextcloud wants.
- Traefik, which reads the docker socket to automatically reverse proxy services (a rough compose sketch of this case follows below).
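For the Traefik case, a minimal sketch looks something like this (hostname made up; the important part is the read-only socket mount plus labels on the target container):

```yaml
services:
  traefik:
    image: traefik:v3.0
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro   # read-only socket access
  whoami:
    image: traefik/whoami                              # example backend service
    labels:
      - traefik.enable=true
      - traefik.http.routers.whoami.rule=Host(`whoami.example.lan`)
      - traefik.http.routers.whoami.entrypoints=web
```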
Nextcloud does the AIO because Nextcloud is a complex service, and it grows to be very complex if you want more features or performance. The AIO handles deploying all the tertiary services for you, but something like this is how you would do it yourself: https://github.com/pimylifeup/compose/blob/main/nextcloud/signed/compose.yaml . Also, that example docker compose does not include other services like Collabora Office, which is the Google Docs/Sheets/Slides alternative, a web-based office suite.
Compare this to the Kubernetes deployment, which, yes, may look intimidating at first. But many of the complexities that the Docker deployment of Nextcloud has are automated away. Enabling Collabora Office is just collabora.enabled: true in its configuration. Tertiary services like Redis or the database are included in the Kubernetes package as well. Instead of you configuring the containers yourself, it lets you configure the database parameters via YAML, and other nice things.
For case 3, Kubernetes has a feature called an "Ingress", which is essentially a standardized configuration for a reverse proxy that you can either run separately, or one is provided as part of the packages. For example, the Nextcloud Kubernetes package I linked above has a way to handle ingresses in its config.
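To illustrate, the values file for the community Nextcloud Helm chart looks roughly like this; treat it as a sketch, since the exact keys depend on the chart version:

```yaml
nextcloud:
  host: cloud.example.com      # made-up hostname
ingress:
  enabled: true                # the chart wires up the reverse proxy route (case 3)
redis:
  enabled: true                # bundled Redis instead of a hand-rolled container
postgresql:
  enabled: true                # bundled database, configured via values
collabora:
  enabled: true                # the one-liner mentioned above
```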
Kubernetes handles these things pretty well, and it's part of why I switched. I do auto-upgrade, but only within the supported stable release of each service, which is safe for auto upgrades and won't break anything. This lets me get automatic security updates for a period of time before having to do a manual and potentially breaking upgrade.
TLDR: You are asking questions that Kubernetes has answers to.
Thanks for the write-up and sorry for the late reply. I guess I wouldn't get very far without exposing the docker socket. Nextcloud was actually one of the services on my list that I wanted to try out, but I haven't looked at the compose file yet. It makes sense why it is needed by the AIO image. Interestingly, it uses a Docker socket proxy, presumably to also mitigate some of the security risks that come from exposing the socket, just like another comment in this thread already mentioned.
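For my own notes, the socket-proxy pattern looks roughly like this. I'm assuming the commonly used tecnativa/docker-socket-proxy image and its environment toggles, so double-check the exact variable names:

```yaml
services:
  socket-proxy:
    image: tecnativa/docker-socket-proxy
    environment:
      CONTAINERS: 1                     # allow read access to the containers API
      POST: 0                           # deny write operations through the proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  some-consumer:
    image: example/needs-docker-api     # hypothetical container that needs the API
    environment:
      DOCKER_HOST: tcp://socket-proxy:2375   # talk to the proxy, not the raw socket
```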
However, since I don't know much about Kubernetes, I can't really tell if it improves things or if the privileges are just shifted, e.g. from the container having socket access to the Kubernetes orchestration thingy having that access. But it does look interesting, and maybe it is not a bad idea to look into it even early on in my self-hosting and container adventure.
Even though I said otherwise in another comment, I think I have also seen socket access in an Nginx Proxy Manager example. I don't really know the advantages other than being able to use container names for your proxy hosts instead of IP and port. I have also seen it in a monitoring setup, where I think Prometheus has access to the socket to track different Docker/container statistics.
I think I have also seen socket access in an Nginx Proxy Manager example. I don't really know the advantages other than being able to use container names for your proxy hosts instead of IP and port
I don’t think you need socket access for this? This is what I did: https://stackoverflow.com/questions/31149501/how-to-reach-docker-containers-by-name-instead-of-ip-address#35691865
Yeah, you are right, a custom bridge network can do DNS resolution with container names. I just saw a video from Lawrence Systems where he exposed the socket, and somewhere else I saw that container names were used for the proxy hosts in NPM. Since the default bridge doesn't do DNS resolution, I assumed that is why some people expose the socket.
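For reference, this is the socket-free approach: a user-defined network gives you DNS resolution by service name, so NPM can reach the backend as http://whoami:80 without any socket mount (service names here are just examples):

```yaml
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    ports:
      - "80:80"
      - "443:443"
      - "81:81"            # NPM admin UI
    networks:
      - proxy
  whoami:
    image: traefik/whoami  # example backend, reachable from NPM as "whoami"
    networks:
      - proxy
networks:
  proxy: {}                # user-defined bridge, so containers resolve each other by name
```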
I just checked again and apparently he created the compose file with ChatGPT, which added the socket: https://forums.lawrencesystems.com/t/nginx-proxy-manager-docker/24147/6 I always considered him to be one of the more trustworthy and security-conscious people out there, but this makes me question his authority. At least he corrected the mistake, so everyone who actually uses his compose file now doesn't expose the socket.
- Is the container exposed to the internet?
- If yes, do not.
- If no, I think it will be OK, so long as it's actually not exposed to the internet: ideally behind NAT with no port forwards and all IPv6 traffic turned off, or some other deny-all inbound firewall outside the system itself that sits between the internet and the machine the container runs on.
In the worst-case scenario you've given someone a file share on your root partition, but if it's not exposed to the internet, the chance of that happening is extremely remote.
No, none of my containers are exposed to the internet and I don't intend to change that. I leave that to people with more experience. I have, however, set up the WireGuard VPN feature of my router to access my home network from outside, which I need occasionally. But as far as I have read, that is considered one of the safest options IF you have to make it available. No outside access is of course always preferred.


