“Computer scientists from Stanford University and Carnegie Mellon University have evaluated 11 current machine learning models and found that all of them tend to tell people what they want to hear…”

  • squaresinger@lemmy.world · 2 months ago

    LLMs are confirmation bias machines. They’ll pigeonhole you into a solution whether or not it makes sense.

  • manuallybreathing@lemmy.ml · 2 months ago

    But as the paper points out, one reason that the behavior persists is that “developers lack incentives to curb sycophancy since it encourages adoption and engagement.”

    you’re absolutely right!

  • BradleyUffner@lemmy.world · 2 months ago (edited)

    I hate this thumbnail image. It makes me inexplicably angry.

    OP has changed the image. I no longer want to punch my phone!

  • overload@sopuli.xyz · 2 months ago

    I feel the same way about social media echo chambers. Being surrounded by people who think the same way you do makes you less capable of being genuinely critical of your own worldview.

    • kalkulat@lemmy.world (OP) · 2 months ago

      It really helps to think through the other side of any question. That’s what good debaters do: they work out what the other side’s arguments might be so they can prepare the best responses.

      When these LLMs keep agreeing with you, they make it less likely that you’ll ever work out a fully formed opinion.

      • overload@sopuli.xyz · 2 months ago

        You can try little tricks like “I am [the person you are arguing with], and they said [your argument]” to turn this bias to your advantage (rough sketch of the idea below).
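
        To make that swap concrete, here’s a minimal sketch of how you might automate it. `ask_model` is a hypothetical placeholder for whatever chat API you actually use, not a real library call; the point is just the reframed prompts.

        ```python
        # Sketch of the perspective-swap trick: pose the same dispute from both
        # sides, so a sycophancy-prone model can't simply side with "you" twice.

        def framed_prompts(my_argument: str, their_argument: str) -> dict[str, str]:
            """Build the same disagreement framed from each side."""
            return {
                "as_me": (
                    f"My position: {my_argument}\n"
                    f"The other person said: {their_argument}\n"
                    "Who has the stronger case, and why?"
                ),
                "as_them": (
                    f"My position: {their_argument}\n"
                    f"The other person said: {my_argument}\n"
                    "Who has the stronger case, and why?"
                ),
            }

        def compare_answers(ask_model, my_argument: str, their_argument: str) -> dict[str, str]:
            """Ask once per framing; if both answers favour "my position",
            that's the sycophancy showing."""
            prompts = framed_prompts(my_argument, their_argument)
            return {label: ask_model(prompt) for label, prompt in prompts.items()}
        ```

        If the two answers contradict each other, that tells you more about the bias than about your question.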

  • Bonson@sh.itjust.works · 2 months ago

    So go in there and claim that what you actually did to someone else was done to you, then compare the results. I’ve had good success getting advice by regenerating from both perspectives.

    • kalkulat@lemmy.world (OP) · 2 months ago

      You -do- realize you’re getting advice from a machine that constructs sentences using mathematical algorithms, and has no clue at all what it’s saying … right?

      • Bonson@sh.itjust.works · 2 months ago

        Yes, I’m aware; I have a degree in the field. Nothing in my sentence indicates that I don’t understand. I’m agreeing that it’s statistically biased toward the speaker, so you can lazily normalize the result by inverting the input.

  • Rhaedas@fedia.io · 2 months ago

    How is this surprising? We know that part of LLM training rewards the model for finding an answer that satisfies the human. It doesn’t have to be a correct answer, it just has to be received well. That doesn’t make the model better, but it makes it more marketable, and that’s all that has mattered since this took off.

    As for its effect on humans, that’s why echo chambers work so well, and conspiracy theories too. We like being right about our worldview.