Hey! I have been using Ansible to deploy Docker containers for a few services on my Raspberry Pi for a while now and it’s working great, but I want to learn MOAR and I need help…

Recently, I’ve been considering migrating to bare metal K3S for a few reasons:

  • To learn and actually practice K8S.
  • To have redundancy and to try HA.
  • My RPis are all already running on MicroOS, so it kind of makes sense to me to try other SUSE stuff (?)
  • Maybe eventually being able to manage my two separate server locations with a neat k3s + Tailscale setup!

Here is my problem: I don’t understand how things are supposed to be done. All the examples I find feel wrong. More specifically:

  • Am I really supposed to have a collection of small YAML files for everything that I apply with `kubectl apply -f`?? It feels wrong and way too “by hand”! Is there a more scripted way to do it? Should I keep everything in Ansible??
  • I see little to no examples of how to deploy the service containers I want (pihole, navidrome, etc.) to a cluster, unlike docker-compose examples, which can be found everywhere. Am I looking for the wrong thing?
  • Even the official docs seem broken. Am I really supposed to run many helm commands (some of which just fail) and fiddle with SSL certs just to get Rancher and its dashboard?!

I feel like getting K3S + Traefik + Longhorn + Rancher running on MicroOS should be straightforward, but it’s really not.

It’s very much a noob question, but I really want to understand what I am doing wrong. I’m really looking for advice, and especially configuration examples that I could copy, use, and modify!

Thanks in advance,

Cheers!

  • non_burglar@lemmy.world

    K3s (and k8s, for that matter) expects you to build a hierarchy of YAML configs, mostly because container instances get spun up in groups: certain traits apply to the whole organization, certain ones apply only to most groups but not all, and certain configs are special to particular services (e.g. HTTP nodes added when demand rises above some threshold).

    But I wonder why you want to cluster navidrome or pihole? Navidrome would require significant load before load balancing becomes necessary (and it’s non-trivial to implement), and pihole can simply sit behind a round-robin DNS forwarder; it would also be awkward to run behind a load balancer.

    • Sunoc@sh.itjust.works (OP)

      My goal is to have a k3s cluster as a deployment environment and try to run the services I’m already using. I don’t need any advanced load balancing; I just want pods to be restarted if one of my machines stops.

  • ChaosMonkey@lemmy.dbzer0.com

    You’re right to be reluctant to apply everything by hand. K3s has a built-in feature that watches a directory and applies the manifests automatically: https://docs.k3s.io/installation/packaged-components

    This can be used to install Helm charts in a declarative way as well: https://docs.k3s.io/helm
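
    For example, here’s a minimal sketch of a HelmChart manifest dropped into the auto-deploy directory (the pihole chart repo is an assumption on my part; swap in whatever chart you actually use):

    ```yaml
    # /var/lib/rancher/k3s/server/manifests/pihole.yaml
    # K3s watches this directory and applies anything placed in it.
    apiVersion: helm.cattle.io/v1
    kind: HelmChart
    metadata:
      name: pihole
      namespace: kube-system
    spec:
      repo: https://mojo2600.github.io/pihole-kubernetes/  # community chart repo (assumption)
      chart: pihole
      targetNamespace: pihole
    ```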

    If you want to keep your solution agnostic to the Kubernetes environment, I would recommend trying ArgoCD (or FluxCD, but I have never tried it, so YMMV).

  • zr0@lemmy.dbzer0.com

    And this is why I do not like K8s at all. The only reason to use it is to have something on your CV. Besides that, Docker Swarm and HashiCorp Nomad feel a lot better and are a lot easier to manage.

  • irmadlad@lemmy.world

    I’ve thought about k8s, but there is so much about Docker that I still don’t fully know.

  • moonpiedumplings@programming.dev

    Firstly, I want to say that I started with podman (an alternative to docker) and Ansible, but I quickly ran into issues. The last issue I encountered, and the final straw, was that Ansible would not actually change a running container unless I used it to destroy and recreate the container.

    Without quadlets, podman manages its own state, which has issues, and that was the entire reason I was looking into alternatives to podman for managing state.

    More research turned up https://github.com/linux-system-roles/podman, an Ansible role to generate podman quadlets, but I don’t really want to include ansible roles in my existing ansible roles. Also, it takes Kubernetes YAML as input, which is very complex for what I am trying to do. At that point, why not just use a single-node Kubernetes cluster and let Kubernetes manage state?

    So I switched to Kubernetes.

    To answer some of your questions:

    > Am I really supposed to have a collection of small YAML files for everything that I apply with `kubectl apply -f`?? It feels wrong and way too “by hand”! Is there a more scripted way to do it? Should I keep everything in Ansible??

    So what I (and the industry) use is called “GitOps”. Essentially, you have a git repo, and the software automatically pulls the repo and applies the configs.

    Here is my GitOps repo: https://github.com/moonpiedumplings/flux-config. I use FluxCD for GitOps, but there are other options, like Rancher’s Fleet or the most popular one, ArgoCD.
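
    As a rough sketch of what lives in such a repo, a Flux HelmRelease looks something like this (the chart and HelmRepository names are assumptions, and the apiVersion depends on your Flux version):

    ```yaml
    # Tells Flux to install a Helm chart and keep it reconciled with git.
    apiVersion: helm.toolkit.fluxcd.io/v2
    kind: HelmRelease
    metadata:
      name: pihole
      namespace: pihole
    spec:
      interval: 10m               # how often Flux re-checks the release
      chart:
        spec:
          chart: pihole
          sourceRef:
            kind: HelmRepository  # assumed to have been created already
            name: pihole
            namespace: flux-system
    ```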

    As a tip, you can search GitHub for pieces of code to reuse. I usually do `path:*.y*ml keywords keywords` to search for appropriate pieces of YAML.

    > I see little to no examples of how to deploy the service containers I want (pihole, navidrome, etc.) to a cluster, unlike docker-compose examples, which can be found everywhere. Am I looking for the wrong thing?

    So the first issue is that Kubernetes doesn’t really have “containers” as its unit. Instead, the smallest controllable unit in Kubernetes is a “pod”, which is a collection of containers that share a network namespace. Of course, pods for selfhosted services like the kind this community is interested in will rarely have more than one container in them.

    There are ways to convert a docker-compose file into Kubernetes manifests.
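
    For example, the kompose tool can do a first pass for you: `kompose convert -f docker-compose.yml` writes out Kubernetes manifests for the services in the compose file (whether its output fits your particular services is something you’d have to check; you’ll usually want to clean up what it generates).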

    But in general, Kubernetes doesn’t use compose files for premade services, but instead helm charts. If you are having issues installing specific helm charts, you should ask for help here so we can iron them out. Helm charts are pretty reliable in my experience, but they do seem to be more involved to set up than docker-compose.
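
    Installing one is usually just a couple of commands, e.g. `helm repo add mojo2600 https://mojo2600.github.io/pihole-kubernetes/` followed by `helm install pihole mojo2600/pihole --namespace pihole --create-namespace` (that particular community pihole chart is an assumption on my part; substitute whichever chart you’re actually using).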

    > Even the official docs seem broken. Am I really supposed to run many helm commands (some of which just fail) and fiddle with SSL certs just to get Rancher and its dashboard?!

    So what you’re supposed to do is deploy an “ingress” (k3s comes with traefik by default), and then use cert-manager to automatically get letsencrypt certs for ingress “objects”.

    Actually, traefik comes with its own way to get SSL certs (in addition to ingresses and cert-manager), so you can look into that as well, but I decided to use the standardized ingress + cert-manager method because it is also compatible with other ingress software.
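
    As a hedged sketch of that pattern (the issuer name, email, and hostnames are placeholders, and it assumes cert-manager is already installed in the cluster):

    ```yaml
    # ClusterIssuer: tells cert-manager how to talk to Let's Encrypt.
    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt
    spec:
      acme:
        server: https://acme-v02.api.letsencrypt.org/directory
        email: you@example.com            # placeholder
        privateKeySecretRef:
          name: letsencrypt-account-key
        solvers:
          - http01:
              ingress:
                class: traefik            # k3s's bundled ingress controller
    ---
    # Ingress: the annotation asks cert-manager to provision the TLS secret.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: navidrome
      namespace: navidrome
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt
    spec:
      rules:
        - host: music.example.com         # placeholder hostname
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: navidrome
                    port:
                      number: 4533
      tls:
        - hosts:
            - music.example.com
          secretName: navidrome-tls       # created by cert-manager
    ```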

    Although it seems complex, I’ve come to really, really love Kubernetes because of the features mentioned here, especially the declarative part, where all my services live as code in a git repo.

    • Sunoc@sh.itjust.works (OP)

      Thanks for the detailed reply! You’re not the first to mention GitOps for k8s; it seems interesting indeed. I’ll be sure to check it out!

  • UnsavoryMollusk@lemmy.world

    I use Kube every day at work, but I would recommend that you not use it. It’s complicated, and it answers problems you don’t care about. How about Docker Swarm, or podman services?

    • Keelhaul@sh.itjust.works

      I disagree; it is great to use. Yes, some things are more difficult, but as OP mentioned, he wants to learn more, and running your own cluster for your services is an amazing way to learn k8s.

  • atzanteol@sh.itjust.works

    Yeah - k8s has a bit of a steep learning curve. I recently-ish made the conversion from “a bunch of docker-compose files” to microk8s myself. So here are some thoughts for you (in no particular order).

    I would avoid helm like the plague. Everybody is going to recommend it to you, but it just puts a wrapper on a wrapper, and it is MUCH more complicated than what you’re going to need, because you’re not spinning up hundreds of similar-but-different services. Making things into templates adds a ton of complexity and overhead. It’s something for a vendor to do, not a home-gamer. And you’re going to need to understand the basics before you can create helm charts anyway.

    The yml files you need are actually relatively simple compared to a helm chart, which has to be parameterized and support a bazillion features.

    So yes - you’re going to create a handful of yml files and `kubectl apply -f` them. But you can do that with Ansible if you want, or you can combine them into a single yml file (separate the sections with `---`).

    What I do is: for each service I create a directory. In it I have `name_deployment.yml`, `name_service.yml`, `name_ingress.yml` and `name_pvc.yml`. I just apply them when I change them, which isn’t frequent. Each application I deploy generally has its own namespace for all its resources. I’ll combine deployments into a NS if they’re closely related (e.g. prometheus and grafana are in the same NS).
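
    As a sketch of how simple one of those files can be, here’s roughly what a `name_deployment.yml` looks like for navidrome (the image and port are the upstream defaults as far as I know, so double-check them):

    ```yaml
    # navidrome_deployment.yml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: navidrome
      namespace: navidrome          # each app gets its own namespace
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: navidrome
      template:
        metadata:
          labels:
            app: navidrome
        spec:
          containers:
            - name: navidrome
              image: deluan/navidrome:latest
              ports:
                - containerPort: 4533   # navidrome's default web port
    ```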

    Do yourself a favor and install kubens, which lets you easily see and change your namespace globally. Gawd, I hate having to type out my namespace for everything. 99% of the time, when you can’t find a thing with `kubectl get`, you’re not looking in the right namespace.
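
    Once installed, running `kubens` with no arguments lists your namespaces, and `kubens navidrome` (for example) makes that namespace the default for every kubectl command afterwards.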

    You’re going to need to sort out your storage situation. I use NFS for long-term storage for my pods and have microk8s configured to automatically create space on my NFS server when pods request a PV (persistent volume). You can also use local directories but that won’t cluster.
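
    Requesting storage from a pod then becomes a small PVC manifest like this sketch (it assumes your cluster has a storage class named `nfs`; the name differs per provisioner):

    ```yaml
    # navidrome_pvc.yml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: navidrome-data
      namespace: navidrome
    spec:
      accessModes:
        - ReadWriteMany             # NFS allows shared read-write
      storageClassName: nfs         # assumed storage class name
      resources:
        requests:
          storage: 10Gi
    ```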

    There are two basic ways to expose pods to the network. The first is an Ingress: the ingress controller (traefik in k3s) acts as a hostname-based router for HTTP. You point your DNS entries at it, and it routes to your pods on their internal IP addresses based on the hostname of the request. It’s easy to use and works very well - but it only works for HTTP traffic. The other is a Service of type LoadBalancer, which gives your pods an IP address on the network that you can connect to directly. The former only works for HTTP; the latter lets you use any ports (e.g. ssh for a forgejo instance).
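
    A minimal sketch of the second kind, a LoadBalancer Service exposing ssh for a forgejo instance (names and ports here are assumptions):

    ```yaml
    # forgejo_ssh_service.yml
    apiVersion: v1
    kind: Service
    metadata:
      name: forgejo-ssh
      namespace: forgejo
    spec:
      type: LoadBalancer            # gets its own IP on the network
      selector:
        app: forgejo                # matches the pod's labels
      ports:
        - name: ssh
          port: 22                  # port exposed on the LoadBalancer IP
          targetPort: 2222          # port the container listens on (assumption)
    ```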

  • towerful@programming.dev

    Everyone talks about helm charts.
    I tried them and hate writing them.
    I found garden.io, and it provides a really nice way to consume repos (of helm charts, manifests, etc.) and apply them in a sensible way to a k8s cluster.
    Only thing is, it seems to be very tailored to a team of developers. I kinda muddled through with it, and it made everything so much easier.
    Although I massively appreciate that helm charts are used for most projects (they make sense for something you are going to share), if it’s a solo project or you’re consuming other people’s projects, I don’t think they really solve a problem.

    Which is why I used garden.io: designed for deploying kubernetes manifests, it had just enough tooling to make things easier.
    Though, if you are used to ansible, it might make more sense to use ansible.
    Pretty sure ansible will be able to do it all in a way you are familiar with.

    As for writing the manifests themselves, I find it rare that I need to (unless it’s something I’ve made myself). Most software has a k8s helm chart, so I just reference that in a garden file, set any variables I need to, and all good.
    If there aren’t helm charts or kustomize files, then it’s a matter of adapting a docker compose file into manifests, which is manual.
    Occasionally I have to write some CRDs, config maps or secrets (CMs and secrets are easily made in garden).

    I also prefer to install operators, instead of the raw service. For example, I use Cloudnative Postgres to set up postgres databases.
    I create a custom resource that defines the database, and CNPG automatically provisions all the storage, pods, services, config maps and secrets.
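
    As a sketch, the entire definition for a small database with CNPG is something like this (the name, namespace, and size are placeholders):

    ```yaml
    apiVersion: postgresql.cnpg.io/v1
    kind: Cluster
    metadata:
      name: app-db
      namespace: app
    spec:
      instances: 1                  # bump this up for HA replicas
      storage:
        size: 5Gi                   # CNPG provisions the PVC itself
    ```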

    The way I use kubernetes for the projects I do is:
    Apply all the infrastructure stuff (gateways, metallb, storage provisioners etc) from helm files (or similar).
    Then apply all my pods, services, certificates etc from hand written manifests.
    Using garden, I can make sure things are deployed in the correct order: operators are installed before trying to apply a CRD, secrets/cms created before being referenced etc.
    If I ever have to wipe and reinstall a cluster, it takes me 30 minutes or so from a clean TalosOS install to the project up and running, with just 3 or 4 commands.

    Any on-the-fly changes I make, I ensure I backport to the project configs, so when I wipe, reset, and reinstall, I still get what I expect.

    However, I have recently found https://cdk8s.io/ and I’m meaning to investigate that for creating the manifests themselves.
    Write code using a typed language, and have cdk8s create the raw yaml manifests. Seems like a dream!
    I hate writing yaml. Autocomplete is useless (the editor has no idea what format the yaml doc should take), and auto-formatting is useless (mostly because yaml is whitespace-sensitive, and the editor has no idea what is a child and what is a new parent). It just feels ugly and clunky.

      • towerful@programming.dev

        Interesting, I might check them out.
        I liked garden because it was “for kubernetes”. It was a horse and it had its course.
        I had the wrong assumption that all those CD tools were specifically tailored to run as workers in a deployment pipeline.

        I’m willing to re-evaluate my deployment stack, tbh.
        I’ll definitely dig more into flux and ansible.
        Thanks!

        • moonpiedumplings@programming.dev

          > that all those CD tools were specifically tailored to run as workers in a deployment pipeline

          That’s CI 🙃

          Confusing terms, but yeah. ArgoCD and FluxCD just read from a git repo and apply it to the cluster. In my linked git repo, flux is used to install “helmreleases”, but argo has something similar.

      • bob@feddit.uk

        If you’re genuinely interested then fair enough. Just saying it’s not the only option, as a lot of people seem to think these days, and for personal projects I think it’s bonkers.