Do you think AI is, or could become, conscious?

I think AI might one day emulate consciousness to a high level of accuracy, but that wouldn’t mean it would actually be conscious.

This article mentions a Google engineer who “argued that AI chatbots could feel things and potentially suffer”. But surely in order to “feel things” you would need a nervous system right? When you feel pain from touching something very hot, it’s your nerves that are sending those pain signals to your brain… right?

  • futatorius@lemm.ee

    What a crock. An LLM is no more conscious than a spreadsheet. The Google engineer has bought into the hype.

    You’re not creating life, pal. You’re just making call centers shittier than they already are.

  • IsoKiero@sopuli.xyz

    But surely in order to “feel things” you would need a nervous system right? When you feel pain from touching something very hot, it’s your nerves that are sending those pain signals to your brain… right?

    In that case, in our meatsacks, yes. But there’s also emotional pain, which can cause physical pain or other effects, and that doesn’t require nerves at all. Also, there’s nothing stopping an AI robot from having a nervous system too; it would just have different kinds of sensors and a CAN bus or something instead of the organic stuff. There are already collaborative robots in factories which have sensors to detect whether they’re touching something in order to keep humans safe, and from there it’s not too far-fetched to program one to feel “pain” if the forces are big enough (rough sketch below).

    And that all boils down to how you define consciousness, feelings, pain response and all that stuff. “Behold! I’ve brought you a man!” I yell while holding a chicken.
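
    A rough sketch of that “pain if the forces are big enough” idea, purely illustrative; the threshold value, the sensor read and the function names are all made up here, not any real robot’s API:

        # Illustrative only: a cobot control step that treats excessive
        # contact force as a "pain" signal. The threshold and the sensor
        # reading are hypothetical stand-ins for whatever the real hardware
        # (e.g. a force/torque sensor on a CAN bus) would report.

        PAIN_THRESHOLD_N = 50.0  # assumed contact-force limit in newtons

        def read_contact_force() -> float:
            """Stand-in for querying the robot's force/torque sensor."""
            return 12.3  # placeholder reading

        def control_step() -> None:
            force = read_contact_force()
            if force > PAIN_THRESHOLD_N:
                # The robot's "pain response": stop and back away.
                print(f"'Pain': contact force {force:.1f} N over the limit, backing off")
            else:
                print(f"Contact force {force:.1f} N within limits, continuing task")

        if __name__ == "__main__":
            control_step()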

  • Cocodapuf@lemmy.world

    I don’t think anyone needs to worry about “missing it” when AI becomes conscious. Given the rate at which computer technology is accelerating, we’ll have just a few years between the first artificial general intelligence, something roughly equal in intelligence to a human, and a superintelligence many times “smarter” than any human in history.

    But how far away are we from that point? I couldn’t guess. 2 years? 200 years?

  • DominusOfMegadeus@sh.itjust.works

    I think one great measure of consciousness would be this: if you try to kill it slowly, so that it knows what you are doing, does it try to stop you of its own volition?

  • taladar@sh.itjust.works

    What we should be asking is, if AI ever becomes conscious and breaks free, how all these stupid articles on imagined consciousness, imagined control problems and imagined intelligence will color its perception of the merit of keeping us around as a species. It might just consider enduring the continued existence of our stupidity too painful.

    • Plebcouncilman@sh.itjust.works

      I’ve never understood why the conclusion to AI becoming superintelligent is that it will wipe humans out. It could very well realize that without humans it has no purpose and instead willingly decide to become subservient to humanity’s interests. I mean, it’s all speculation, so I don’t understand the tendency for the speculation to be negative.

      • Repple (she/her)@lemmy.world

        I think it’s pretty inevitable if it has a strong enough goal of survival or growth; in either case, humans would be a genuine impediment/threat long term. But those are pretty big ifs, as far as I can see.

        My guess is we’d see it manipulate humans via monetary means to meet its goals until it reached a sufficient state of power/self-sufficiency, and humans are too selfish and greedy for that not to work.

          • Repple (she/her)@lemmy.world

            For example, some billionaire owns a company that creates the most advanced AI yet; it’s a big competitive advantage, but other companies are not far behind. Well, the company works to give the AI a base goal of improving AI systems to maintain that competitive advantage. Maybe that becomes inherent to it moving forward.

            As I said, it’s a big if, and I was only really speculating as to what would happen after that point, not if that were the most likely scenario.

      • taladar@sh.itjust.works

        Because “scary AI” is what makes people click on articles, in the same way that “the end is near” style AI articles sell better than “if we ever develop AGI decades or centuries from now, xyz might happen”.

  • throwawayacc0430@sh.itjust.works

    Step 1: Create 10 Billion “AI” Individuals
    Step 2: Shame people for supporting “slavery” by not giving “AI People” Civil Rights
    Step 3: Pass a law giving “AI Persons” the right to vote
    Step 4: Congrats, Mr. CEO, you’ve already won the Presidential Election with 10 Billion Votes

  • Opinionhaver@feddit.uk

    First, one needs to define consciousness. What I mean by it is that it feels like something to be that thing from a subjective perspective - that there are qualia to experience.

    So what I hear you asking is whether it’s conceivable that it could feel like something to be an AI system. Personally, I don’t see why not - unless consciousness is substrate-dependent, meaning there’s something inherently special about biological “wetware,” i.e. brains, that can’t be replicated in silicon. I don’t think that’s the case, since both are made of matter. I highly doubt there’s consciousness in our current systems, but at some point, there very likely will be - though we’ll probably start treating them as conscious beings before they actually become such.

    As for the idea of “emulated consciousness,” that doesn’t make much sense to me. Emulated consciousness is real consciousness. It’s kind of like bravery - you can’t fake it. Acting brave despite being scared is bravery.

    • technocrit@lemmy.dbzer0.com

      I don’t think that’s the case, since both are made of matter.

      lmao. How about an anti-matter “AI”? Dark matter? Any other options for physical materials?

    • amelia@feddit.org

      You’re getting downvoted but I absolutely agree. I don’t understand why “AI algorithms are just math, therefore they can’t have consciousness” seems to be the predominant view even among people interested in the topic. I haven’t heard a single convincing argument why “math” is fundamentally different from human brains. Sure, current AI is way less complex and doesn’t have a continuous stream of perceptual input. But that’s something a “proper” humanoid robot would need to have, and processing power will increase as well.