Curious to know what the experiences are for those who are sticking to bare metal. Would like to better understand what keeps such admins from migrating to containers, Docker, Podman, Virtual Machines, etc. What keeps you on bare metal in 2025?

  • atzanteol@sh.itjust.works (+80/-5) · 21 days ago

    Containers run on “bare metal” in exactly the same way other processes on your system do. You can even see them in your process list, FFS. They’re just running in different cgroups that limit access to resources.

    Yes, I’ll die on this hill.
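
    A minimal sketch of that in Python (the PID here is hypothetical; on a real box you’d grab it with something like docker inspect -f '{{.State.Pid}}' <container>): the “containerized” process shows up under /proc like anything else, just parked in its own cgroup and namespaces.

        import os

        pid = 12345  # hypothetical host PID of a process running "inside" a container

        # Same /proc entry as any other host process
        with open(f"/proc/{pid}/comm") as f:
            print("command:", f.read().strip())

        # The cgroup the container runtime parked it in
        with open(f"/proc/{pid}/cgroup") as f:
            print("cgroup:", f.read().strip())

        # Its namespaces differ from PID 1's, but it's still in the host process table
        for ns in ("pid", "net", "mnt"):
            print(ns, os.readlink(f"/proc/{pid}/ns/{ns}"))  # needs root for other users' processes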

    • sylver_dragon@lemmy.world (+25) · 21 days ago

      But, but, docker, kubernetes, hyper-scale convergence and other buzzwords from the 2010s! These fancy words can’t just mean resource and namespace isolation!

      In all seriousness, the isolation provided by containers is significant enough that administering containers is different from running everything in the same OS. That’s different in a good way, though; I don’t miss the bad old days of everything on a single server in the same space. Anyone else remember the joys of Windows Small Business Server? Let’s run Active Directory, Exchange and MSSQL on the same box. No way that will lead to prob… oh shit, the RAM is on fire.

      • sugar_in_your_tea@sh.itjust.works (+7) · 21 days ago

        kubernetes

        Kubernetes isn’t just resource isolation; it encourages splitting services across the hardware in a cluster. So you’ll get more latency than with VMs, but you get to scale the hardware much more easily.

        Those terms do mean something, but they’re a lot simpler than execs claim they are.
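
        For illustration, a rough sketch with the official kubernetes Python client (assumes a working kubeconfig; the deployment name and namespace are made up): scaling out is one API call, and the scheduler spreads the extra pods across whatever nodes have room.

            from kubernetes import client, config

            config.load_kube_config()  # or load_incluster_config() when running inside the cluster
            apps = client.AppsV1Api()

            # Bump the replica count; the scheduler places the new pods across the cluster's nodes
            apps.patch_namespaced_deployment_scale(
                name="my-service",      # hypothetical deployment
                namespace="default",
                body={"spec": {"replicas": 5}},
            )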

      • AtariDump@lemmy.world (+5) · edited · 20 days ago

        …oh shit, the RAM is on fire.

        The RAM. The RAM. The 🐏 is on fire. We don’t need no water let the mothefuxker burn.

        Burn mothercucker, burn.

        (Thanks phone for the spelling mistakes that I’m leaving).

      • atzanteol@sh.itjust.works (+1) · 21 days ago

        Oh for sure - containers are fantastic. Even if you’re just using them as glorified chroot jails they provide a ton of benefit.

  • nucleative@lemmy.world (+23/-1) · 21 days ago

    I’ve been self-hosting since the '90s. I used to have an NT 3.51 server in my house. I had a dial-in BBS that worked because of an extensive collection of .bat files that would echo AT commands to my COM ports to reset the modems between calls. I remember when we had to compile the Slackware kernel from source to get peripherals to work.

    But in this last year I took the time to seriously learn docker/podman, and now I’m never going back to running stuff directly on the host OS.

    I love it because I can deploy instantly… Oftentimes in a single command line. Docker compose allows for quickly nuking and rebuilding, oftentimes saving your entire config to one or two files.

    And if you need to slap in a traefik, or a postgres, or some other service into your group of containers, now it can be done in seconds completely abstracted from any kind of local dependencies. Even more useful, if you need to move them from one VPS to another, or upgrade/downgrade core hardware, it’s now a process that takes minutes. Absolutely beautiful.
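
    As a rough illustration of that “slap in a postgres” step, here’s a sketch with the Docker SDK for Python (pip install docker); a compose file expresses the same thing declaratively, and the image tag, port and password below are just placeholders.

        import docker

        client = docker.from_env()

        # Runs Postgres without installing any Postgres packages on the host
        client.containers.run(
            "postgres:16",
            name="demo-postgres",
            environment={"POSTGRES_PASSWORD": "change-me"},
            ports={"5432/tcp": 5432},
            volumes={"pgdata": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
            detach=True,
        )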

    • roofuskit@lemmy.world (+6/-1) · 21 days ago

      Hey, you made my post for me, though I’ve been using Docker for a few years now. Never. Looking. Back.

  • fubarx@lemmy.world (+18/-2) · 21 days ago

    Have done it both ways. Will never go back to bare metal. Dependency hell forced multiple clean reinstalls, right down to the bootloader.

    The only constant is change.

  • enumerator4829@sh.itjust.works (+16) · 21 days ago

    My NAS will stay on bare metal forever. Any complication there is something I really don’t want. Passthrough of drives/PCIe devices works fine for most things, but I won’t use it for ZFS.

    As for services, I really hate using Docker images with a burning passion. I’m not trusting anyone else to make sure the container images are secure - I want the security updates directly from my distribution’s repositories, I want them fully automated, and I want that inside any containers too. Having NixOS build and launch containers with systemd-nspawn solves some of it. The actual Docker daemon isn’t getting anywhere near my systems, but I do have one or two OCI images running. I’ll probably migrate to small per-service VMs once I get new hardware up and running.

    Additionally, I never found a source of container images I feel like I can trust long term. When I grab a package from Debian or RHEL, I know that package will keep working without any major changes to functionality or config until I upgrade to the next major. A container? How long will it get updates? How frequently? Will the config format or environment variables or mount points change? Will a threat actor assume control of the image? (Oh look, all the distros actually enforce GPG signatures in their repos!)

    So, what keeps me on bare metal? Keeping my ZFS pools safe. And then just keeping away from the OCI ecosystem in general, the grass is far greener inside the normal package repositories.

    • towerful@programming.dev (+4) · 21 days ago

      A NAS as bare metal makes sense.
      It can then correctly interact with the raw disks.

      You could pass an entire HBA card through to a VM, but I feel like it should be horses for courses.
      Let a storage device be a storage device, and let a hypervisor be a hypervisor.

    • zod000@lemmy.dbzer0.com (+1) · 21 days ago

      I feel like this too. I do not feel comfortable using docker containers that I didn’t make myself. And for many people, that defeats the purpose.

  • layzerjeyt@lemmy.dbzer0.com (+14/-1) · 21 days ago

    Every time I have tried it, it just introduces a layer of complexity I can’t tolerate. I have struggled to learn everything required to run a simple Debian server. I don’t care what anyone says, Docker is not simpler or easier. Maybe it is when everything runs perfectly, but it never does, so you have to consider the eventual difficulty of troubleshooting. And that would be made all the more cumbersome if I do not yet understand the fundamentals of a Linux system.

    However I do keep a list of packages I want to use that are docker-only. So if one day I feel up to it I’ll be ready to go.

      • layzerjeyt@lemmy.dbzer0.com (+5) · 21 days ago

        I don’t know. Both? Probably? I tried a couple of things here and there. It was plain that bringing in Docker would add a layer of obfuscation to my system that I am not equipped to deal with. So I rinsed it from my mind.

        If you think it’s likely that I followed some “how to get started with docker” tutorial that had completely wrong information in it, that just demonstrates the point I am making.

  • zod000@lemmy.dbzer0.com (+13/-1) · edited · 21 days ago

    Why would I want to add overhead and complexity to my system when I don’t need to? I can totally see legitimate use cases for Docker, and for work purposes I use VMs constantly. I just don’t see a benefit to doing so at home.

    • boonhet@sopuli.xyz (+2) · 20 days ago

      The main benefit of Docker for home use is Docker Compose, IMO. It makes it so easy to reuse your configuration.

  • Lucy :3@feddit.org (+12/-3) · 21 days ago

    That I’ve yet to see a containerization engine that actually makes things easier, especially once a service does fail or needs any amount of customization. I have two main services in Docker, Piped and WebODM, both because I don’t have the time (read: am too lazy) to write a PKGBUILD. Yet Docker steals more time than maintaining a PKGBUILD would, with random crashes (undebuggable, as the docker command just hangs when I try to start one specific container) and containers that don’t start properly after being updated/restarted by Watchtower.

    Debugging any problem with Piped is a chore, as logging in Docker is the most random thing imaginable. With systemd, it’s in journalctl, or in /var/log if explicitly specified or obviously useful (eg. in multi-host nginx setups). With Docker, it could be a logfile on the host, a logfile in the container, or stdout. Or nothing, because why log at all when everything “just works”? (Yes, that’s a problem created by container maintainers, but one you can’t escape when using Docker. Or rather, in the time you have, you could more easily do a proper(!) bare-metal install.)

    Also, if you want to use unix sockets to more closely manage permissions, and to stop roleplaying as a DHCP and DNS server for ports (by remembering which ports are used by which of the 25 or so services), you’ll either need to customize the container, or just use/write a PKGBUILD or similar for bare-metal stuff.
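
    (For what it’s worth, the only log location Docker itself standardizes is the container’s stdout/stderr. A sketch with the Docker SDK for Python, using piped as an example container name; anything the app writes to files inside the container won’t show up here, which is exactly the hunt described above.)

        import docker

        client = docker.from_env()
        c = client.containers.get("piped")                # example container name
        print(c.logs(tail=100).decode(errors="replace"))  # same output as `docker logs --tail 100 piped`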

    Also, I need to host a Python 2.7 / Django 2.x (or so) webapp (yes, I’m rewriting it), which I do in a Debian 13 VM with Debian 9 and Debian 9 LTS repos, as that most closely resembles the original environment. It’s also the largest security risk in my setup while being a public website, so into QEMU it goes.

    And, as I mentioned, everything else is either officially packaged by Arch, in the AUR, or I put it into the AUR myself.

    • towerful@programming.dev (+3/-1) · 21 days ago

      especially once a service does fail or needs any amount of customization.

      A failed service gets killed and restarted. It should then work correctly.
      If it fails to recover after being killed, then it’s not a service that’s fully ready for containerisation.
      So, either build your recovery process to account for this… or fix it so it can recover.
      It’s often why databases are run separately from the service. Databases can recover from this, and the services are stateless - doesn’t matter how many you run or restart.

      As for customisation, if it isn’t exposed via env vars then it can’t be altered.
      If you need something beyond the env vars, then you use that container as a starting point and make your customisation part of your container build process via a Dockerfile (or equivalent).
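
      A quick sketch of both paths with the Docker SDK for Python (the image names and the env var are made up, not taken from any real image’s docs):

          import docker

          client = docker.from_env()

          # 1) Customisation the image exposes: set it via env vars at run time
          client.containers.run("upstream/app:latest",
                                environment={"APP_LOG_LEVEL": "debug"},
                                detach=True)

          # 2) Customisation it doesn't expose: build your own image on top
          #    (./custom contains a Dockerfile starting with `FROM upstream/app:latest`)
          image, _logs = client.images.build(path="./custom", tag="me/app:custom")
          client.containers.run(image.id, detach=True)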

      It’s a bit like saying “chisels are great. But as soon as you need to cut a fillet steak, you need to sharpen a side of the chisel instead of the tip of the chisel”.
      It’s using a chisel incorrectly.

        • Lucy :3@feddit.org (+5) · 21 days ago

        Exactly. Therefore, Docker is not useful for those purposes for me, as using Arch packages (or similar) is the easier way to fulfill my needs.

    • Boomer Humor Doomergod@lemmy.world (+2/-1) · 21 days ago

      You can customize and debug pretty easily, I’ve found. You can create your own Dockerfile based on one you’re using and add customizations there, and exec will get you into the container.

  • missfrizzle@discuss.tchncs.de (+9/-1) · 21 days ago

    pff, you call using an operating system bare metal? I run my apps as unikernels on a grid of Elbrus chips I bought off a dockworker in Kamchatka.

    and even that’s overkill. I prefer synthesizing my web apps into VHDL and running them directly on FPGAs.

    until my ASIC shuttle arrives from Taipei, naturally, then I bond them directly onto Ethernet sockets.

    /uj not really but that’d be sick as hell.

  • ZiemekZ@lemmy.world (+8/-1) · 20 days ago

    I consider them unnecessary layers of abstraction. Why do I need to fiddle with Docker Compose to install Immich, Vaultwarden etc.? Wouldn’t it be simpler if I could just run sudo apt install immich vaultwarden, just like I can do sudo apt install qbittorrent-nox today? I don’t think there’s anything that prohibits them from running on the same bare metal; actually, I think they’d both run as well as they do in Docker (if not better, given the lack of overhead)!

    • boonhet@sopuli.xyz (+2/-5) · 20 days ago

      Both your examples actually include their own bloat to accomplish the same thing that Docker would. They both bundle the libraries they depend on as part of the build.

      • communism@lemmy.ml (+2) · 20 days ago

        Idk about Immich but Vaultwarden is just a Cargo project no? Cargo statically links crates by default but I think can be configured to do dynamic linking too. The Rust ecosystem seems to favour static linking in general just by convention.

        • boonhet@sopuli.xyz (+2) · 20 days ago

          Yes, that was my point: you (generally) link statically in Rust because that resolves dependency issues between the different applications you need to run. The cost is a slightly bigger, bloatier binary, but generally it’s a very good tradeoff because a slightly bigger binary isn’t an inconvenience these days.

          Docker achieves the same for everything, including dynamically linked projects that default to using shared libraries (which can turn into dependency nightmares), other binaries that are being called, etc. It doesn’t virtualize an entire OS unless you’re using it on macOS or Windows, so the performance overhead is not as big as people seem to think (the disk space overhead, though… can get slightly bigger). It’s also great for dev environments, because you can have different devs using whatever the fuck they prefer as their main OS and Docker will make everyone’s environment the same.

          I generally wouldn’t put a Rust/Cargo project in docker by default since it’s pretty rare to run into external dependency issues with those, but might still do it for the tooling (docker compose, mainly).

  • sylver_dragon@lemmy.world (+7) · 21 days ago

    I started self-hosting in the days well before containers (early 2000s). Having been through that hell, I’m very happy to have containers.
    I like to tinker with new things, and with bare-metal installs this has a way of adding cruft to servers and slowly causing the system to get into an unstable state. That’s my own fault, but I’m a simple person who likes simple solutions. There are also the classic issues with dependency hell and just flat-out incompatible software. While these issues have gotten much better over the years, isolating applications avoids the problem completely. It also makes OS and hardware upgrades less likely to break stuff.

    These days, I run everything in containers. My wife and I play games like Valheim together, and I have a Dockerfile template I use to build self-hosted servers in a container. The Dockerfile usually just requires a few tweaks for the AppId, exposed ports and mount points for save data. That, paired with a docker-compose.yaml (also built off a template), means I usually have a container up and running in fairly short order. The update process could probably be better (I currently just rebuild the image), but it gets the job done.
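
    A rough sketch of that rebuild-and-recreate update loop with the Docker SDK for Python; the build path, names, ports and mount point are placeholders for whatever the template actually uses.

        import docker
        from docker.errors import NotFound

        client = docker.from_env()

        # Rebuild the image from the templated Dockerfile (AppId etc. live in there)
        image, _ = client.images.build(path="./valheim-server", tag="home/valheim:latest")

        # Recreate the container with the same ports and save-data mount
        try:
            client.containers.get("valheim").remove(force=True)
        except NotFound:
            pass
        client.containers.run(
            "home/valheim:latest",
            name="valheim",
            ports={"2456/udp": 2456, "2457/udp": 2457},
            volumes={"/srv/valheim/saves": {"bind": "/config", "mode": "rw"}},
            detach=True,
        )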

  • neidu3@sh.itjust.works (+7) · 21 days ago

    I started hosting stuff before containers were common, so I got used to doing it the old-fashioned way and making sure everything played nice with each other.

    Beyond that, it’s mostly that I’m not very used to containers.

  • billwashere@lemmy.world (+6) · edited · 19 days ago

    Ok, I’m arguing for containers/VMs, and granted, I do this for a living… I’m a systems architect, so I build VMs and containers pretty much all the time at work… but having just one sorta beefy box at home that I can run lots of different things on is the way to go. Plus I like to tinker with things, so when I screw something up I can get back to a known state so much easier.

    Just having all these things sandboxed makes it SO much easier.

  • Strider@lemmy.world (+6) · 21 days ago

    Erm. I’d just say there’s no benefit in adding layers just for the sake of it.

    It’s just different needs. Say I have a machine that I run a dedicated database on; I’d install it just like that, because, as said, there’s no advantage in making it more complicated.

  • kutsyk_alexander@lemmy.world (+5) · edited · 21 days ago

    I use a Raspberry Pi 4 with a 16 GB SD card. I simply don’t have enough memory and CPU power for 15 separate database containers, one for every service that I want to use.

  • SmokeyDope@lemmy.world (+4) · edited · 20 days ago

    I’m a hobbyist who just learned how to self-host my website over the summer. I didn’t know anything, so I went with what I knew, which is a fresh install of Linux and installing from the apt package manager. As I’m getting more serious, I’m starting to take another look at Docker. Unfortunately my OS package manager only has old, outdated versions of Docker, so I may need to reinstall with something like an Ubuntu/Debian LTS server, something with more cutting-edge software in the repos. I don’t care much for building from scratch and navigating dependency roulette.

          • TeddE@lemmy.world (+2/-1) · 20 days ago

            They can, but - if their current setup meets their needs - why? There ain’t nothing wrong with having a few simple spare laptops, each an isolated environment for a few simple home server tasks.

            Don’t get me wrong - I too advocate for docker, particularly on new builds, or as a relatively turnkey solution to get started for novice friends, but the best setup is the one that works, and they sound like they got theirs where they want it.

            • BrianTheFirst@lemmy.world (+1) · 18 days ago

              …because that isn’t what they said. They said that they are getting more serious and now looking at Docker, but the outdated version in the Mint repo is preventing them from exploring that any further. So I offered a method that I know works without any of the “dependency roulette” that they were concerned about, while also giving a disclaimer that it isn’t exactly noob-friendly. 🤷‍♂️

              • TeddE@lemmy.world (+2) · 18 days ago

                Fair point. I think my eyes glossed over the part where they said they were taking a second look at Docker (but caught the rest about rebuilding the OS in general). My sincere apologies 😓😅