China has released a set of guidelines on labeling internet content that is generated or composed by artificial intelligence (AI) technology, which are set to take effect on Sept. 1.

  • some_dude@lemm.ee · 1 month ago

    This is a smart and ethical way to incorporate AI into everyday use, though I hope the watermarks are not easily removed.

    • umami_wasabi@lemmy.ml · 1 month ago

      Think a layer deeper about how this can be misused to control narratives.

      You read some wild allegation with no AI marks (they’re required to be visible), so it must have been written by a person, right? But what if someone, even the government, jumps out and claims the author used an illegal, unlabeled AI to generate the text? The question suddenly shifts from verifying whether the alleged events happened to whether the allegation itself is real. Public sentiment gets overwhelmed by “Is this fake news?” instead of “Is the allegation true?” Compound that with trusted entities making the claim, and discrediting anything becomes easier.

      Here’s a real example. Before Covid spread globally, there was a Chinese whistleblower, a hospital worker who got infected. He posted a video online about how bad it was, and the government quickly took it down. What if that happened today with this regulation in full force? The government could claim the video was AI generated, that the whistleblower doesn’t exist, and that none of it is real. Three days later, they arrest a guy and claim he spread fake news using AI. They already have a very efficient way to control narratives, and this piece of garbage just gives them an express lane.

      You think this is only a China thing? No, every entity, including other governments, is watching, especially the self-proclaimed friend of Putin and Xi and lover of absolute free speech. Don’t assume it’s too far away to reach you.

    • Cocodapuf@lemmy.world · 1 month ago

      I’m going to develop a new AI designed to remove watermarks from AI generated content. I’m still looking for investors if you’re interested! You could get in on the ground floor!

      • BreadstickNinja@lemmy.world · 1 month ago

        I’ve got a system that removes the watermark and adds two or three bonus fingers, free of charge! Silicon Valley VC is gonna be all over this.

  • 2xsaiko@discuss.tchncs.de · 1 month ago

    Will be interesting to see how they actually plan on controlling this. It seems unenforceable to me as long as people can generate images locally.

    • umami_wasabi@lemmy.ml · 1 month ago

      That’s what they want. When people generate locally, the government can discredit anything as AI generated. The point isn’t enforceability, but whether this can be a tool to control the narrative.

      Edit: it doesn’t matter whether people actually generate content locally, only that they plausibly could. As long as it is plausible, the argument stands and the loop closes.

  • perestroika@lemm.ee · 1 month ago

    As an exception to most regulations that we hear about from China, this approach actually seems well considered - something that might benefit people and work.

    Similar regulations should be considered by other countries. Labeling generated content at the source, hopefully without the metadata becoming too extensive (this is where China might go overboard), would help avoid at least two things:

    • casual deception
    • training AI with material generated by another AI, leading to degradation of ability to generate realistic content
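The “labeling at the source” idea above can be sketched as a small provenance record attached to each generated file. This is a minimal illustration with made-up field names, not any real standard (actual systems such as C2PA content credentials are far more elaborate and cryptographically signed):

```python
# Minimal sketch of source-side AI labeling. The label schema here is
# invented for illustration; it is not C2PA or any official format.
import hashlib
import json

def make_label(content: bytes, generator: str) -> str:
    """Build a small machine-readable provenance label for generated content."""
    return json.dumps({
        "ai_generated": True,                           # explicit disclosure flag
        "generator": generator,                         # which model produced it
        "sha256": hashlib.sha256(content).hexdigest(),  # ties the label to the bytes
    })

label = make_label(b"example image bytes", "example-model-v1")
print(json.loads(label)["ai_generated"])  # -> True
```

A label like this could travel as a metadata chunk or a sidecar file; the hash lets a verifier check that the label still matches the content it claims to describe.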
  • JackbyDev@programming.dev · 1 month ago

    Stable Diffusion has an option to include an invisible watermark; I saw it in the settings when I was running it locally. It adds a pattern that is easy for machines to detect but impossible to see, the idea being that you could check an image for it before putting it into training sets. Since I never needed to lie about what I generated, I left it on.
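The general idea behind such machine-detectable-but-invisible marks can be sketched with a least-significant-bit watermark. This is a deliberately simplified stand-in, not Stable Diffusion’s actual scheme (which reportedly embeds the signal in frequency coefficients); the function names are illustrative:

```python
# Toy least-significant-bit (LSB) watermark: it changes each pixel value by
# at most 1, so the mark is invisible to the eye but trivial for code to
# read back. Illustration only, not a production watermarking scheme.

def embed_watermark(pixels, message_bits):
    """Hide one bit in the least significant bit of each pixel value."""
    out = list(pixels)
    for i, bit in enumerate(message_bits):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to the bit
    return out

def extract_watermark(pixels, n_bits):
    """Read the hidden bits back out of the pixel LSBs."""
    return [p & 1 for p in pixels[:n_bits]]

marked = embed_watermark([200, 33, 150, 98], [1, 0, 1, 1])
print(marked)                        # -> [201, 32, 151, 99]
print(extract_watermark(marked, 4))  # -> [1, 0, 1, 1]
```

An LSB mark like this is easy to strip (re-encoding or resizing destroys it), which is exactly why real invisible watermarkers hide the signal in more robust transform-domain features.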

    • blurryface@lemm.ee · 1 month ago

      They plan to ban hating on the supreme leader.

      China is long ahead on that front, so maybe there is hope.

  • puppinstuff@lemmy.ca · 1 month ago

    Having some AIs that do this and some that don’t will only muddy the waters of what’s believable. We’ll get gullible people seeing something ridiculous and thinking, “Well, there’s no watermark, so it MUST be true.”

    • Initiateofthevoid@lemmy.dbzer0.com · 1 month ago

      Sorry, but the problem right now is much simpler. Gullibility doesn’t require a logical premise; “It sounds right, so it MUST be true” is where the thought process ends.

  • Jin@lemmy.world · 1 month ago

    China, oh you. I remember something about going green and blah blah, yet they keep building coal plants.

    The Chinese government has been caught using AI for propaganda and claiming it was real, so I don’t see this rule applying to the Chinese government itself.

  • Magister@lemmy.world · 1 month ago

    Me: “Hey <AI name>, remove the small text at the bottom right of this picture.”

    AI: “Done, here is the picture with the text removed.”

  • Lexam@lemmy.world · 1 month ago

    This is a bad idea. It creates stigma and bias against innocent artificial beings. This is the equivalent of forcing a human to wear a collar.™ Watermark.