• Sixty@sh.itjust.works · 2 months ago

    Worthless research.

    That subreddit bans you for accusing others of speaking in bad faith or for using ChatGPT.

    Even if a user called it out, they’d be censored.

    Edit: you know what, it’s unlikely they didn’t read the sidebar. So, worse than worthless. Bad faith disinfo.

    • yesman@lemmy.world · 2 months ago

      accusing others of speaking in bad faith

      You’re not allowed to talk about bad faith in a debate forum? I don’t understand. How could that do anything besides shield the sealions, JAQoffs, and grifters?

      And please don’t tell me it’s about “civility”. Bad faith is the civil accusation when the alternative is your debate partner is a fool.

      • Sixty@sh.itjust.works · 2 months ago

        I won’t tell you about civility, because

        How could that do anything besides shield the sealions, JAQoffs, and grifters?

        Not shield, but amplify.

        That’s the point of the subreddit. I’m not defending them if that’s at all how I came across.

        ChatGPT debate threads are plaguing /r/debateanatheist too. Mods are silent on the users asking to ban this disgusting behavior.

        I didn’t think it’d be a problem so quickly, but the chuds and theists latched onto ChatGPT instantly for use in debate forums.

        • taladar@sh.itjust.works · 2 months ago

          To be fair, for a Gish-gallop style of bad-faith argument, LLMs are probably a good match for the way religious people like to use them. If all you want is a high number of arguments, an LLM makes them easy to produce. Not to mention that most of their arguments have been repeated countless times anyway, so the training data probably contains them in large numbers. It’s not as if they ever cared whether their arguments were any good anyway.

          • Sixty@sh.itjust.works · 2 months ago

            I agree, and I recognized that. I’m more emotionally upset about it, tbh. The debates aren’t for the debaters; they’re there to hopefully disillusion and remove indoctrinated fears from those on the fence who are willing to read them. That’s oft repeated there when people ask, “what’s the point, it’s been the same stupid debate for centuries?” Well, religions unfortunately persist, and haven’t lost any ground globally. Gained, actually. Not our fault they have no new ideas.

  • jabathekek@sopuli.xyz · 2 months ago

    To me it was kind of obvious. There were a bunch of accounts that would comment these weird sentences and all of them had variants of JohnSmith1234 as their username. Part of the reason I left tbh.

    • 9point6@lemmy.world · 2 months ago

      I was gonna say, anyone with half a brain who has poked their head into Reddit over the past year or two will have seen a shitload of obvious bots in the comments.

  • rooster_butt@lemm.ee · 2 months ago

    CMV: this was good research, akin to white-hat hacking, where the point is to find and expose security exploits. What this research did is point out how easy it is to manipulate people in a “debate” forum that doesn’t allow users to point out bad behavior. If researchers are doing this and publishing it, it’s also being done by nefarious actors who will not disclose it.

  • GreenKnight23@lemmy.world · 2 months ago

    I haven’t seen this question asked.

    how can the results be trusted that they were actually interacting with real humans?

    what’s the percentage of bot-to-bot contamination?

    this study looks more like a hacky farce meant only to bring attention to how we’re manipulated, and less like actual science.

    any professional that puts their name on this steaming pile should be ashamed of themselves.

  • TootSweet@lemmy.world · 2 months ago

    Reddit: “Nobody gets to secretly experiment on Reddit users with AI-generated comments but us!”

    • Zenoctate@lemmy.world · 2 months ago

      They literally have some AI thing called “Answers,” which is Reddit’s own shitty practice of pushing AI.

  • Dr. Bob@lemmy.ca · 2 months ago

    With all the bots on the site why complain about these ones?

    Edit: auto$#&"$correct

  • BossDj@lemm.ee · 2 months ago

    What they should do is convince a smaller subsection of Reddit users to break off to a new site, maybe entice them with promises of a FOSS platform. Maybe a handful of real people, and all the rest LLM bots. They’ll never know.

  • Neuromorph@lemm.ee · 2 months ago

    Good, I spent at least the last 3 years on Reddit making asinine comments, phrases, and punctuation to throw off any AI bots.

  • Melvin_Ferd@lemmy.world · 2 months ago

    Just from my own understanding of life: there are these political think tanks, staffed by your old professor’s professor’s professor. These guys make big bucks to sit around and do this stuff, then figure out attack points. I really think they had this research 20 years ago; I figure that’s what those guys do all day. Eventually the results end up at the firm that handles Steven Crowder, Ben Shapiro, that guy living in the Philippines.

  • MichaelMuse@programming.dev · 2 months ago

    Wow, this is pretty concerning. As someone who spends a lot of time on Reddit, I find it really unsettling that researchers would experiment on users without their knowledge. It’s like walking into a coffee shop for a casual chat and unknowingly becoming part of a psychology experiment!