• puck@lemmy.world
    4 days ago

    Hi there, I’m thinking about getting into self-hosting. I already have a Jellyfin server set up at home but nothing beyond that really. If you have a few minutes, how can self-hosting help in the context of OP’s post? Do you mean hosting LLMs with Ollama?

    • BreadstickNinja@lemmy.world
      4 days ago

      Yes, Ollama or a range of other backends (Ooba, Kobold, etc.) can run LLMs locally. Huggingface has a huge number of models suited to different tasks like coding, storywriting, general purpose, and so on. If you run both the backend and frontend locally, then no one monetizes your data.
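
      For example, once Ollama is running locally, you can talk to it over its HTTP API and nothing ever leaves your machine. Here’s a rough sketch in Python, assuming Ollama is listening on its default port 11434 and you’ve already pulled a model (the model name and prompt are just placeholders):

          import json
          import urllib.request

          # Assumes a local Ollama instance on the default port (11434)
          # and a model that has already been pulled, e.g. "llama3".
          payload = {
              "model": "llama3",   # placeholder; use whatever model you pulled
              "prompt": "Explain what self-hosting means in one sentence.",
              "stream": False,     # return one JSON object instead of a token stream
          }

          req = urllib.request.Request(
              "http://localhost:11434/api/generate",
              data=json.dumps(payload).encode("utf-8"),
              headers={"Content-Type": "application/json"},
          )

          with urllib.request.urlopen(req) as resp:
              answer = json.loads(resp.read())

          # The model's reply, generated entirely on your own hardware.
          print(answer["response"])

      The same idea applies with Ooba or Kobold; only the endpoint and request format change. Either way, prompts and responses stay on your own network.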

      The part I’d argue the previous poster is glossing over a bit is performance. Unless you have an enterprise-grade GPU cluster sitting in your basement, you’re going to make compromises on speed and/or quality relative to the giant models that run on commercial services.

      • puck@lemmy.world
        4 days ago

        Thanks for the info. Yeah, I was wondering what kind of hardware you’d need to host LLMs locally with decent performance, and your post clarifies that. I doubt many people have the kind of hardware required.