• Halcyon@discuss.tchncs.de · 13 days ago

    Another fear campaign that ultimately aims only at marketing.

    The AI bubble will burst, and it won’t end well for the US economy.

    • FaceDeer@fedia.io · 14 days ago

      Yup. We’re in a situation where everyone is thinking “if we don’t, then they will.” Bans are counterproductive. Instead we should be throwing our effort into “if we’re going to do it then we need to do it right.”

      • stealth_cookies@lemmy.ca · 14 days ago

        This is actually an interesting point I hadn’t thought about or seen people considering with regard to the high investment costs of AI LLMs. Who blinks first when it comes to stopping investment in these systems if they don’t prove to be commercially viable (or viable quickly enough)? What happens to the West if China holds out longer and is successful?

  • fruitycoder@sh.itjust.works · 14 days ago

    Honestly, just ban mass investment, mass power consumption, use of information acquired through mass surveillance, military usage, etc.

    Like, those are all regulated industries. Idc if someone works on it at home, or even in a small DC. AGI that can be democratized isn’t the threat; the threat is those determined to make a super weapon for world domination. Those plans need to fucking stop regardless of whether it’s AGI or not.

  • Perspectivist@feddit.uk · 14 days ago

    I genuinely don’t understand the people who are dismissing those sounding the alarm about AGI. That’s like mocking the people who warned against developing nuclear weapons when they were still just a theoretical concept. What are you even saying? “Go ahead with the Manhattan Project - I don’t care, because I in my infinite wisdom know you won’t succeed anyway”?

    Speculating about whether we can actually build such a system, or how long it might take, completely misses the point. The argument isn’t about feasibility - it’s that we shouldn’t even be trying. It’s too fucking dangerous. You can’t put that rabbit back in the hat.

      • ErmahgherdDavid@lemmy.dbzer0.com · 14 days ago

      Here’s how I see it: we live in an attention economy where every initiative with a slew of celebrities attached to it is competing for eyeballs and buy-in. It adds to information fatigue and analysis paralysis. In a very real sense, if we are debating AGI, we are not debating the other stuff. There are only so many hours in a day.

      If you take the position that AGI is basically not possible, or at least many decades away (I have a background in NLP/AI/LLMs and I take this view - not that it’s relevant in the broader context of my comment), then it makes sense to tell people to focus on solving more pressing issues, e.g. nascent fascism, climate collapse, late-stage capitalism, etc.

  • IninewCrow@lemmy.ca · 14 days ago

    At this point, our human civilization is like cavemen 10,000 years ago being handed machine guns and hand grenades.

    What do you think we are going to do with all this new power?

  • SaraTonin@lemmy.world · 13 days ago

    Okay, firstly, if we’re going to get superintelligent AIs, it’s not going to happen from better LLMs. Secondly, we seem to have already reached the limits of LLMs, so even if that were the route there, it doesn’t seem possible. Thirdly, this is an odd problem to list: “human economic obsolescence”.

    What does that actually mean? It’s difficult to read it any way other than as saying that money will become obsolete. Which… good? But I suppose not if you’re already a billionaire. Because how else would people know that you won capitalism?

  • muusemuuse@sh.itjust.works · 12 days ago

    It doesn’t matter. It’s too late. The goal is to build AI up enough that the poor can starve and die off in the coming recession while the rich just rely on AI to replace the humans they don’t want to pay.

    We are doomed for the crimes of not being rich and not killing off the rich.

    • Alaknár@sopuli.xyz · 13 days ago

      We’re probably some two or three decades away from even early prototypes being conceivable, mate.