I am working on setting up a home server, but I want it to be reproducible in case I need to make large changes, switch out hardware, or restore from a failure. What do you use to handle this?

  • corsicanguppy@lemmy.ca · 4 hours ago

    Packer builds the Terraform-/OpenTofu-launchable templates for the hypervisor, and Chef (eventually mgmtConfig) manages them from there until they die.

    All of that is launched from git. Fire and forget. Updates are cronned.

    There are no containers. No time to fuck about. If systemd weren’t such an absolute embarrassment, I wouldn’t worry about updates even as much as I do, which isn’t much aside from the aforementioned cancer.

    • xyx@sh.itjust.works · 19 hours ago

      Out of curiosity: are you running NixOps with nix-secrets, or how did you handle orchestration and credentials?

      • adf@lemmy.world · 15 hours ago

        I use flakes, and all hosts are configured from a single flake, where each host has its own configuration. I keep some custom modules and even custom packages in the same flake, and I also use Home Manager. I manage 4 hosts in total: home server, laptop, gaming PC, and a cloud server. All of them were provisioned using nixos-anywhere + disko, except the first one, which was installed manually. For secrets I use sops-nix; the encrypted secrets are stored in the same flake/repo.
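
        A minimal sketch of the `.sops.yaml` that drives that kind of layout — the age recipients and the path regex below are placeholders, not anyone’s real keys:

        ```yaml
        # .sops.yaml at the flake root; sops-nix decrypts with the matching host key
        keys:
          - &admin age1examplepublickeyadminplaceholder       # placeholder recipient
          - &homeserver age1examplepublickeyhostplaceholder   # placeholder recipient
        creation_rules:
          - path_regex: secrets/.*\.yaml$
            key_groups:
              - age:
                  - *admin
                  - *homeserver
        ```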

  • thirdBreakfast@lemmy.world · 18 hours ago

    Proxmox on the metal, then every service as a Docker container inside an LXC or VM. Proxmox does nice snapshots (to my NAS), making it a breeze to move them from machine to machine or to blow away the Proxmox install and reimport them. All the Docker Compose files are in git, and the things I apply to every LXC/VM (my monitoring endpoint, apt cache setup, etc.) are applied with Ansible playbooks, also in git. All the LXCs are cloned from a golden image that has my keys, Tailscale setup, etc.
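
    As a sketch of the “applied to every LXC/VM” part, a playbook along these lines points each guest at a LAN apt cache — the cache hostname is a placeholder, and 3142 is apt-cacher-ng’s default port:

    ```yaml
    # apt-proxy.yml — run against every LXC/VM in the inventory
    - hosts: all
      become: true
      tasks:
        - name: Point apt at the LAN package cache (placeholder hostname)
          ansible.builtin.copy:
            dest: /etc/apt/apt.conf.d/01proxy
            content: 'Acquire::http::Proxy "http://apt-cache.lan:3142";'
            mode: "0644"
    ```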

    • eli@lemmy.world · 13 hours ago

      This is pretty much my setup as well. Proxmox on bare metal, then everything I run is in Ubuntu LXC containers, each with Docker installed inside, running whatever Docker stack.

      I just installed Portainer and got the standalone agents running on each LXC container, which has helped massively with managing each Docker setup.
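
      For reference, the agent on each container host is just one small Compose service — this mirrors Portainer’s documented agent deployment, with 9001 as its default port:

      ```yaml
      # portainer-agent.yml — one per Docker host; the Portainer server connects on :9001
      services:
        portainer_agent:
          image: portainer/agent:latest
          restart: always
          ports:
            - "9001:9001"
          volumes:
            - /var/run/docker.sock:/var/run/docker.sock
            - /var/lib/docker/volumes:/var/lib/docker/volumes
      ```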

      Of course you can use whatever base image you want for the LXC container; I just prefer Ubuntu for my homelab.

      I do need to set up a golden image to make stand-ups easier… one thing at a time, though!

  • atzanteol@sh.itjust.works · 21 hours ago

    Terraform and Ansible. Script your service configuration and keep it in source control. Containerize services where possible to make them system-agnostic.
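
    A rough sketch of that pattern — assuming the community.docker collection is installed, and with the host, service name, and image all placeholders:

    ```yaml
    # deploy-service.yml — a scripted, system-agnostic container deploy
    - hosts: homeserver
      become: true
      tasks:
        - name: Run the service as a container (name/image are placeholders)
          community.docker.docker_container:
            name: whoami
            image: traefik/whoami:latest
            published_ports:
              - "8080:80"
            restart_policy: always
    ```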

  • 🇵🇸antifa_ceo@lemmy.ml · 1 day ago

    I’ve got a bunch of Docker Compose files and the envs documented, so it’s easy to spin things up again or roll back changes. It works well enough as long as I’m good about keeping everything up to date and not making changes without noting them down for myself later.
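
    One way that can look — versions pinned through the documented .env file, so a rollback is just editing a variable (the service, image, and variable names here are made up):

    ```yaml
    # docker-compose.yml — image versions come from the documented .env beside it
    services:
      app:
        image: "nginx:${NGINX_VERSION:-1.27}"   # rollback = change the var, `docker compose up -d`
        env_file: .env
        ports:
          - "8080:80"
        restart: unless-stopped
    ```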

  • relaymoth@sh.itjust.works · 1 day ago

    I went with the nuclear option and am using Talos with Flux to manage my homelab.

    My source of truth is the git repo with all my cluster and application configs. With this setup, I can tear everything down and within 30 min have a working cluster with everything installed automatically.
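
    Concretely, the “git repo as source of truth” piece boils down to a pair of Flux objects like these — the repo URL and path are placeholders:

    ```yaml
    # Flux watches the repo…
    apiVersion: source.toolkit.fluxcd.io/v1
    kind: GitRepository
    metadata:
      name: homelab
      namespace: flux-system
    spec:
      interval: 1m
      url: https://github.com/example/homelab   # placeholder
      ref:
        branch: main
    ---
    # …and reconciles everything under ./apps from it
    apiVersion: kustomize.toolkit.fluxcd.io/v1
    kind: Kustomization
    metadata:
      name: apps
      namespace: flux-system
    spec:
      interval: 10m
      sourceRef:
        kind: GitRepository
        name: homelab
      path: ./apps
      prune: true
    ```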

  • i_stole_ur_taco@lemmy.ca · 1 day ago

    I’m just using Unraid for the server, after many iterations (PhotonOS, VMware, bare-metal Windows Server, …). After that many OSes, partial and complete hardware replacements, and general problems, I gave up trying to manage the base server too much. Backups are generally good enough if hardware fails or I break something.

    The other side of this is that I’ve moved to having very, very little config on the server itself. Virtually everything of value is in a Docker container, with a single (admittedly way too large) Docker Compose file that describes all the services.
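
    In practice that one file is a long list of stanzas along these lines — the images are examples, and /mnt/user/appdata is just the usual Unraid spot for container state:

    ```yaml
    # One (oversized) docker-compose.yml describes every service; all state lives
    # under /mnt/user/appdata, so the host itself carries almost no configuration.
    services:
      jellyfin:
        image: jellyfin/jellyfin:latest
        volumes:
          - /mnt/user/appdata/jellyfin:/config
        restart: unless-stopped
      vaultwarden:
        image: vaultwarden/server:latest
        volumes:
          - /mnt/user/appdata/vaultwarden:/data
        restart: unless-stopped
    ```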

    I think this is the ideal setup for how I use a home server. Your mileage may vary, but I’ve learned the hard way that it’s really hard to maintain a server over the very long term without also marrying yourself to a specific hardware and OS configuration.

  • Seefoo@lemmy.world · 23 hours ago

    I use git and commit configs/setup/scripts/etc. to it. That way I at least have a road map for how to get everything back. Testing this can be difficult, but it depends on what you care about.

    • Testing my kopia backups of important data? That I test manually every once in a while.
    • Testing whether my ZFS setup script is 100% identical to my setup? That’s not as important; as long as I have a general idea, I can figure out the gaps and improve the script for the next time around. Obviously, you can spend a lot more time ensuring scripts and whatnot stay consistent, but it depends on what you care about!

    For a lot of my service config, git has always worked well for me, and I can go back to older configs if needed. You can get super specific here and save image versions in git, then have something update them for you (e.g. WUD).