• DandomRude@lemmy.world · 24 days ago

    Although Grok’s manipulation is blatantly obvious, I don’t believe most people will come to realize that those who control LLMs will naturally use that power to pursue their own interests.

    They will continue to use ChatGPT and the like uncritically, taking everything at face value because it’s so nice and easy, overlooking or ignoring that their opinions, even their reality, are being shaped by a few influential people.

    Other companies are more subtle about it, but from OpenAI to Microsoft, Google, and Anthropic, all cloud models are designed to steer people’s opinions. They are not objective, yet the majority of users do not question them as they should, and that is what makes them so dangerous.

    • khepri@lemmy.world · 24 days ago

      It’s why I trust my random unauditable Chinese matrix soup over my random unauditable American matrix soup, frankly.

        • brucethemoose@lemmy.world · 24 days ago

          Most people aren’t really running DeepSeek locally. What ollama advertises (and basically lies about) are the now-obsolete Qwen 2.5 distillations.

          …I mean, some are, but it’s exclusively lunatics with EPYC homelab servers, heh. And they are not using ollama.

          • DandomRude@lemmy.world · 24 days ago

            Thanks for clarifying.

            I once tried a distilled community version from Hugging Face, which worked quite well even on modest hardware. But that was a while ago; I haven’t had much time to look into this stuff lately, though I’ve been meaning to check it out again at some point.

            • brucethemoose@lemmy.world · 24 days ago

              You can run GLM Air on pretty much any gaming desktop with 48 GB+ of RAM. Check out ubergarm’s ik_llama.cpp quants on Hugging Face; that’s state of the art right now.
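A rough sanity check on the 48 GB figure. This is a sketch, not from the thread: the ~106B total-parameter count for GLM-4.5 Air and the ~3 bits-per-weight quantization level are my assumptions.

```python
def quant_size_gb(total_params_billions: float, bits_per_weight: float) -> float:
    """Approximate in-RAM size of a quantized model's weights.

    Each parameter is stored at bits_per_weight bits; 8 bits per byte,
    so a billion parameters at 8 bits/weight is roughly 1 GB.
    """
    return total_params_billions * bits_per_weight / 8


# Assumed figures: GLM-4.5 Air has ~106B total parameters (MoE),
# and a low-bit ik_llama.cpp quant might land around 3 bits/weight.
weights_gb = quant_size_gb(106, 3.0)
print(f"~{weights_gb:.0f} GB for weights alone")
# That leaves some headroom in 48 GB for the KV cache and the OS.
```

The exact number shifts with the quant type chosen, but it shows why ~48 GB of system RAM is the ballpark for a model of that size at low bit widths.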

            • brucethemoose@lemmy.world · 24 days ago

              Also, I’m a quant cooker myself. Say the word, and I can upload an IK quant specifically tailored to whatever your hardware and aims are.

        • khepri@lemmy.world · 24 days ago

          Naw, I mean more that the kind of people who would uncritically take everything a chatbot says at face value are probably better off in ChatGPT’s little curated garden anyway. People like that are going to get grifted by whatever comes along first no matter what, and a lot of those grifts are far more dangerous to the rest of us than a bot that won’t talk great replacement with you.

          • DandomRude@lemmy.world · 24 days ago

            Ahh, thank you. I had misunderstood that, since DeepSeek is (more or less) an open-source LLM from China that can also be run and fine-tuned locally on your own hardware.

        • khepri@lemmy.world · 24 days ago

          There you go. Any of these things is just another data point. You need many data points to decide whether the information you’re getting is valuable and valid.