• FauxLiving@lemmy.world · 7 months ago

    This research is good, valuable and desperately needed. The uproar online is predictable and could possibly help bring attention to the issue of LLM-enabled bots manipulating social media.

    This research isn’t what you should get mad at. It’s pretty common knowledge online that Reddit is dominated by bots: advertising bots, scam bots, political bots, etc.

    Intelligence services of nation states and political actors seeking power are all running these kinds of influence operations on social media, using bot posters to dominate the conversations about the topics they care about. This is common knowledge in social media spaces. Go to any politically charged thread on international affairs and you’ll notice that something seems off. It’s hard to say exactly what, but if you’ve been active online for a long time you can recognize that something is wrong.

    We’ve seen how effective this manipulation is at changing public opinion (see: Cambridge Analytica, or if you don’t know what that is, watch the documentary ‘The Great Hack’), so it is only natural to wonder how much more effective online manipulation has become now that bad actors can use LLMs.

    This study is by a group of scientists trying to figure that out. The only difference is that they’re publishing their findings in order to inform the public, whereas Russia isn’t doing us the same favor.

    Naturally, it is in the interest of everyone using LLMs to manipulate the online conversation that this kind of research is never done. Having this information public could lead to reforms, regulations and effective counter strategies. It is no surprise that you see a bunch of social media ‘users’ creating a huge uproar.


    Most of you who don’t work in tech may not understand just how easy and cheap it is to set something like this up. For a few million dollars and a small staff, you could essentially dominate a large multi-million-subscriber subreddit with whatever opinion you wanted to push: bots generate variations of the opinion you want pushed, the bot accounts (guided by humans) downvote everyone else out of the conversation, and moderation power can be seized, stolen or bought to further control the discussion.

    Or, wholly fabricated subreddits can be created. A few months before the US election, several new subreddits were created and catapulted to popularity despite being little more than bots reposting news. Those subreddits now rank high in the /all and /popular feeds, even though their moderators and a huge portion of their users are bots.

    We desperately need this kind of study to keep from drowning in a sea of fake people who will tirelessly work to convince you of all manner of nonsense.

    • andros_rex@lemmy.world · 7 months ago

      Regardless of any value you might see from the research, it was not conducted ethically. Allowing unethical research to be published encourages further unethical research.

      This flat out should not have passed review. There should be consequences.

    • T156@lemmy.world · 7 months ago

      Conversely, while the research is good in theory, the data isn’t that reliable.

      The subreddit has rules requiring users to engage with everything as though it were written by real people in good faith. Users aren’t likely to point out a bot when the rules explicitly prevent them from doing so.

      There wasn’t much of a control group either. The researchers were comparing themselves to the bots, so it could easily be that they themselves were simply less convincing, since they were acting outside their area of expertise.

      And that’s before even getting into the ethical mess of experimenting on people without their consent. Post-hoc consent is not informed consent, and informed consent is the crux of ethical human experimentation.

  • TwinTitans@lemmy.world · 7 months ago

    Like the 90s/2000s - don’t put personal information on the internet, don’t believe a damned thing on it either.

    • mic_check_one_two@lemmy.dbzer0.com · 7 months ago

      Yeah, it’s amazing how quickly the “don’t trust anyone on the internet” mindset changed. The same boomers who were cautioning us against playing online games with friends are now the same ones sharing blatantly AI generated slop from strangers on Facebook as if it were gospel.

      • Serinus@lemmy.world · 7 months ago

        Back then it was just old people trying to groom 16 year olds. Now it’s a nation’s intelligence apparatus turning our citizens against each other and convincing them to destroy our country.

        I wholeheartedly believe they’re here, too. Their primary function here is to discourage the left from voting, primarily by focusing on the (very real) failures of the Democrats while the other party is extremely literally the Nazi party.

        • queermunist she/her@lemmy.ml · 7 months ago

          Everyone who disagrees with you is a bot, probably from Russia. You are very smart.

          Do you still think you’re going to be allowed to vote for the next president?

          • Serinus@lemmy.world · 7 months ago

            Everyone who disagrees with you is a bot

            I mean that’s unironically the problem. When there absolutely are bots out here, how do you tell?

            • queermunist she/her@lemmy.ml · 7 months ago

              Sure, but you seem to be under the impression the only bots are the people that disagree with you.

              There’s nothing stopping bots from grooming you by agreeing with everything you say.

  • Donkter@lemmy.world · 7 months ago

    This is a really interesting comment to me, because I definitely think these results shouldn’t be published, or we’ll only get more of these “whoopsie” experiments.

    At the same time, though, I think it is desperately important to research the ability of LLMs to persuade people sooner rather than later, before they become even more persuasive and natural-sounding. The article mentions that in studies, humans already have trouble telling the difference between AI-written sentences and human ones.

    • FourWaveforms@lemm.ee · 7 months ago

      This is certainly not the first time this has happened. There’s nothing to stop people from asking ChatGPT et al to help them argue. I’ve done it myself, not letting it argue for me but rather asking it to find holes in my reasoning and that of my opponent. I never just pasted what it said.

      I also had a guy post a ChatGPT response at me (he said that’s what it was) and although it had little to do with the point I was making, I reasoned that people must surely be doing this thousands of times a day and just not saying it’s AI.

      To say nothing of state actors, “think tanks,” influence-for-hire operations, etc.

      The description of the research in the article already conveys enough to replicate the experiment, at least approximately. Can anyone doubt this is commonplace, or that it has been for the last year or so?

    • Dasus@lemmy.world · 7 months ago

      I’m pretty sure that only applies because a majority of people are morons. There’s a vast gap between the most intelligent 2% (1 in 50) and the average intelligence.

      Also, please put digital text as white on black instead of the other way around.

      • angrystego@lemmy.world · 7 months ago

        I agree, but that doesn’t change anything, right? Even if you are in the 2% most intelligent and you’re somehow immune, you still have to live with the rest who do get influenced by AI. And they vote. So it’s never just a they problem.

  • Ledericas@lemm.ee · 7 months ago

    As opposed to the thousands of bots Russia uses every day on politics-related subs.

  • justdoitlater@lemmy.world · 7 months ago

    Reddit: Ban the Russian/Chinese/Israeli/American bots? Nope. Ban the Swiss researchers that are trying to study useful things? Yep

    • Ilandar@lemm.ee · 7 months ago

      Bots attempting to manipulate humans by impersonating trauma counselors or rape survivors isn’t useful. It’s dangerous.

      • Oniononon@sopuli.xyz · 7 months ago

        Humans pretend to be experts in front of each other and constantly lie on the internet every day.

        Say what you want about 4chan, but the disclaimer it had on top of its page should be common sense to everyone on social media.

          • Oniononon@sopuli.xyz · 7 months ago

            If fake experts on the internet get their jobs taken by the ai, it would be tragic indeed.

            Don’t worry tho, popular sites on the internet are dead since they’re all bots anyway. It’s over.

            • Chulk@lemmy.ml · 7 months ago

              If fake experts on the internet get their jobs taken by the ai, it would be tragic indeed.

              These two groups are not mutually exclusive.

  • Fat Tony@lemmy.world · 7 months ago

    You know what Pac stands for? PAC. Program and Control. He’s Program and Control Man. The whole thing’s a metaphor. All he can do is consume. He’s pursued by demons that are probably just in his own head. And even if he does manage to escape by slipping out one side of the maze, what happens? He comes right back in the other side. People think it’s a happy game. It’s not a happy game. It’s a fucking nightmare world. And the worst thing is? It’s real and we live in it.

  • mke@programming.dev · 7 months ago

    Another isolated case for the endlessly growing list of positive impacts of the “GenAI with no accountability” trend. A big shout-out to the people promoting and fueling it; excited to see what pit you lead us into next.

    This experiment is also nearly worthless because, as the researchers themselves demonstrated, there’s no guarantee the accounts you interact with on Reddit are actual humans. Upvotes are even easier for machines to generate, and can be bought for cheap.

    • supersquirrel@sopuli.xyz · 7 months ago

      The only way this could be even a remotely scientifically rigorous study is if they had randomly selected the people who would respond to the AI comments and verified they were human.

      Anybody with half a brain knows that reading Reddit comments without assuming most of them are bots or shills is hilariously naive; the fact that “researchers” did exactly that for a scientific study is embarrassing.

  • vordalack@lemm.ee · 7 months ago

    This just shows how gullible and stupid the average Reddit user is. There’s a reason there are so many memes mocking them and calling them beta soyjaks.

    It’s kind of true.

    • thedruid@lemmy.world · 7 months ago

      You think it’s anti science to want complete disclosure when you as a person are being experimented on?

      What kind of backwards thinking is that?

      • Sculptus Poe@lemmy.world · 7 months ago

        Not when disclosure ruins the experiment. Nobody was harmed, or even could be harmed unless they are dead stupid, in which case the harm was already inevitable. This was posting on social media, not injecting people with random pathogens. Have a little perspective.

        • thedruid@lemmy.world · 7 months ago

          You do realize the ends do not justify the means?

          You do realize that MANY people on social media have emotional and mental health situations occurring, and that these experiments can have ramifications that cannot be traced?

          That is just one small reason why this is so damn unethical.

          • Sculptus Poe@lemmy.world · 7 months ago

            In that case, any interaction would be unethical. How do you know that I don’t have an intense fear of the words “justify the means”? You could have just doomed me to a downward spiral ending in my demise. As if I didn’t have enough trouble. You not only made me see it, you tricked me into typing it.

            • thedruid@lemmy.world · 7 months ago

              You are being beyond silly.

              In no way is what you just posited true. Unsuspecting and non-malicious social faux pas are in no way equal to intentionally secretive manipulation used to gather data from unsuspecting people.

              That was an embarrassingly bad attempt to defend an indefensible position, and one no one would blame you for deleting and retrying.

              • Sculptus Poe@lemmy.world · 7 months ago

                Well, you are trying embarrassingly hard to silence me, at least; that is fine. I was positing an unlikely but possible case. I do suffer from extreme anxiety, and what sets it off has nothing to do with logic. But you are also overstating the ethics violation by suggesting that any harm the researchers could cause is real or significant in a way that wouldn’t happen with regular interaction on random forums.