I’m being serious. I think that if instead of Trump there were just a prompt “engineer,” the country would actually run better. Even if you trained it to be far right.

And this is not praise of LLMs…

  • BertramDitore@lemm.ee · 16 days ago

    It isn’t just you and me. Not even the people who designed them fully understand why they give the responses they give. It’s a well-known problem. Our understanding is definitely improving over time, but we still don’t fully know how they do it.

    Here’s the latest exploration of this topic I could find.

    LLMs continue to be one of the least understood mass-market technologies ever

    Tracing even a single response takes hours, and there’s still a lot of figuring out left to do.

    • Slippery_Snake874@sopuli.xyz · 16 days ago

      Hmm. That is interesting, and I admit the company that made it does seem to still be researching its own model, but some parts of the article seem a bit dramatic (not sure if there’s a better word).

      Like when it says the model doesn’t “admit” to how it solved the math problem when asked. Of course it doesn’t; it’s made for humans to interact with, so it’s not going to tell a human how a computer does math. It makes more sense for it to explain the “human” method.

      Interesting stuff though, thanks for the article!