• Rhaedas@fedia.io
    8 months ago

    I know some roll their eyes at any mention of AI safety, saying that what we have isn’t AGI and won’t become it. That’s true, but it doesn’t rule out something in the future. And between this and China’s lax approach of trying everything to be first, if we do get to that point, we’ll find out the hard way who was right.

    The laughable part is that the safeguards put up by Biden’s admin were vague and lacked substance anyway. But that doesn’t matter now.

    • taladar@sh.itjust.works
      8 months ago

      Keep worrying about entirely hypothetical scenarios of an AGI fucking over humanity; it will keep you busy while humanity fucks itself over ten times in the meantime.

      • Rhaedas@fedia.io
        8 months ago

        You’re correct: it’s more likely that humans will use a lesser version (e.g. an LLM) to screw things up, assuming it’s doing what it claims when it isn’t. That’s why I say AI safety applies to all of this, not just a hypothetical AGI. But again, it doesn’t seem to matter; we’re just going to go full throttle and get what we get.