• mavu@discuss.tchncs.de · +63/-1 · 7 days ago

    A BBC journalist ran the image through an AI chatbot which identified key spots that may have been manipulated.

    WTF?

    Doesn’t the fucking BBC have at least 1 or 2 experts for spotting fakes? RAN THROUGH AN AI CHATBOT?? SERIOUSLY??

    • bilgamesch@feddit.org · +4 · 6 days ago

      People need to understand that, with the proliferation of AI, the only way to build credibility is not to use it for trust but to go the exact opposite way: grab your shoes and go places. Take notes. Take pictures.

      As AI permeates the digital space, a process that is unlikely to be reversed, everything that’s human will need to become, figuratively speaking, analogue again.

    • myplacedk@lemmy.world · +3 · 6 days ago

      I haven’t read it, but it could be to demonstrate how easy it was to identify it as a fake, even without the resources of the BBC.

    • wieson@feddit.org · +3 · 6 days ago

      Probably because it was between midnight and 2 a.m. Still, as an author I wouldn’t have mentioned it.

    • helpImTrappedOnline@lemmy.world · +1 · 6 days ago

      An “expert” could be anyone who convinces someone else to pay them. The “expert” is probably the one that ran it through the chatbot.

    • Disagree. Without Section 230 (or the equivalent laws of their respective jurisdictions), your Fediverse instance would be forced to moderate even harder for fear of legal action. I mean, who even decides what “AI deception” is? Your average lemmy.world mod, an unpaid volunteer?

      It’s a threat to free speech.

      • 9488fcea02a9@sh.itjust.works · +13/-3 · 7 days ago

        Also, it would be trivial for big tech to flood every fediverse instance with deceptive content and get us all shut down

      • Lumisal@lemmy.world · +4/-3 · 6 days ago

        Just make the law apply only to services above some threshold: X million users, or a minimum percentage of the population. You could even have regulation tiers tied to the number of active users, so that those over the billion mark, like Facebook, are regulated the strictest.

        That would leave smaller networks, forums, and businesses alone while finally bringing some badly needed regulation to the large corporations messing with things.

          • Lumisal@lemmy.world · +2/-1 · 6 days ago

            Proton isn’t social media.

            If you can’t understand why big = bad in terms of the dissemination of misinformation, then clearly we’re already at an impasse on any further discussion of possible numbers, statistics, and other variables in determining potential regulations.

          • Dozzi92@lemmy.world · +2/-1 · 6 days ago

            Yeah, I work for your biggest social media competitor; why would I not just go post slop all over your platform with the intent of getting you fined?

        • GamingChairModel@lemmy.world · +1 · 6 days ago

          I don’t think it’d be that simple.

          Any given website URL could go viral at any moment. In the old days, that might have looked like a DDoS that brings down the site (a.k.a. the Slashdot effect or the hug of death), but these days many small sites are hosted on infrastructure that is protected against unexpectedly high traffic.

          So if someone hosts deceptive content on their server and it can be viewed by billions, there would be a disconnect between a website’s reach and its accountability (to paraphrase Spider-Man’s Uncle Ben).

          • Lumisal@lemmy.world · +1 · 6 days ago

            I agree it’s not that simple, but it’s just a proposed beginning of a solution. We could refine it further, then hand the refined idea as a charter to a lawyer to draft up as a proper proposal, which could then be presented to the relevant governmental body to consider.

            But few people like to put in that work. Even politicians don’t, and that’s why corporations get so much of what they want: they put in the work, and they pay people to do it for them.

            That said, view count isn’t the same as membership. This solution wouldn’t be perfect.

            But it would be better than nothing at all, especially now that the advent of AI is turning the firehose of lies into a tsunami of lies. Currently one side only grows stronger in its capacity for causing havoc and mischief, while the other quite literally does nothing, and sometimes even advocates for doing nothing. You could say it’s a reflection of the paradox of tolerance we’re seeing today.

    • ImmersiveMatthew@sh.itjust.works · +10/-1 · 7 days ago

      I think it’s just the people who need to be held accountable. While I am no fan of Meta, it is not their responsibility to hold people legally accountable for what they choose to post. What we really need is zero-knowledge-proof tech to verify that a person is real without having to share their personal information, but that breaks Meta’s and other companies’ free business models, so here we are.
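      The zero-knowledge idea above can be sketched with a toy Schnorr identification protocol, one classic building block for proving you hold a secret (here standing in for an identity credential) without revealing it. This is an illustrative sketch only, not any real proof-of-personhood scheme, and the parameters are deliberately tiny:

```python
import secrets

# Toy group parameters: p = 2q + 1 with p, q prime; g = 4 generates the
# order-q subgroup of quadratic residues mod p. Real deployments use
# ~256-bit elliptic-curve groups, not numbers this small.
q = 1019
p = 2 * q + 1  # 2039, also prime
g = 4

# The "identity": a secret x only the prover knows, and a public key y.
x = secrets.randbelow(q - 1) + 1   # prover's secret
y = pow(g, x, p)                   # public commitment to the secret

def prove_and_verify() -> bool:
    # Prover commits to a fresh random nonce r.
    r = secrets.randbelow(q - 1) + 1
    t = pow(g, r, p)
    # Verifier issues a random challenge c.
    c = secrets.randbelow(q)
    # Prover responds with s; s reveals nothing about x because r is random.
    s = (r + c * x) % q
    # Verifier checks g^s == t * y^c (mod p) without ever learning x.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

assert prove_and_verify()
```

      The check works because g^s = g^(r + c·x) = g^r · (g^x)^c = t · y^c mod p; a real identity system would wrap a credential issued by some authority in proofs like this rather than a bare key pair.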

    • Rhoeri@lemmy.world · +7/-5 · 7 days ago

      Sites AND the people that post them. The age of consequence-less action needs to end.

  • sircac@lemmy.world · +11/-1 · 6 days ago

    It feels like a privilege-escalation exploit: at a certain point the chain of authority jumped from a random picture, posted who knows where or when, to a link in the chain that should be reliable enough to be blindly trusted on this subject.

    • Dozzi92@lemmy.world · +9 · 6 days ago

      I dunno, someone just throws this up on social media, and you’re the person in the position to say, “hey, halt the trains.” Don’t you do just that out of an abundance of caution?

      • GreenKnight23@lemmy.world · +8 · 6 days ago

        lives are worth more than the dysfunction caused by the delay in services.

        the only thing this did was weaken the resolve of leadership for when a real disaster happens.

        the next time information like this comes forward, be it real or fake, it will meet a delayed reaction, which will ultimately cost lives.

  • entwine@programming.dev · +5 · 6 days ago

    A BBC journalist ran the image through an AI chatbot which identified key spots that may have been manipulated.

    This is terrifying. Does the BBC not have anyone on the team who understands why this does not, and never will, work?