• X@piefed.world
    link
    fedilink
    English
    arrow-up
    0
    ·
    20 days ago

    So this happens and the FAA says “we’re gonna have this shit help ATCs manage flights! WHO’S EXCITED!”

    • chocrates@piefed.world
      20 days ago

      I lost it at the confession. The AI has no knowledge of what it did. You are feeding in your context and it is making up a (sycophantic) plausible explanation based on the chat history. Makes me wonder if this person should have production access in the first place.

      • jj4211@lemmy.world
        20 days ago

        Yes, ask why it deleted data when it didn’t do anything of the sort and it will still output similar text. You asked it to confess and explain, so it will do just that regardless of whether it fits.

      • NOPper@lemmy.dbzer0.com
        20 days ago

        It’s not like the thing is going to learn from its mistake. But cool, waste those tokens to have it explain that it fucked up after it fucks up lol.

    • magnue@lemmy.world
      20 days ago

      The way it communicates suggests to me it’s got some ‘prompt engineer bro’ garbage system prompt going on there.

      • Leon@pawb.social
        20 days ago

        Of course, that’s how all of these agents work. At best they’re a bunch of prompts tied together with scripts to perform actions. At worst they’re just interacting directly with software without any scripts or sandboxing.

        There is no AI.

        • magnue@lemmy.world
          20 days ago

          Idk what you’re talking about mate. Nobody is claiming AGI apart from morons. It’s genuinely useful technology with correct implementation. It just also happens to be a Ponzi scheme.

          • Leon@pawb.social
            20 days ago

            You’re free to disagree, but all the tools say otherwise. Hell even the widely lauded Claude Code is just that, we know for sure since the source leaked.

    • Serinus@lemmy.world
      20 days ago

      yeah, it gives you the answer it thinks you want based on your prompts.

      I’d be interested to see what prompts they used to, uh, prompt this response.

      • IchNichtenLichten@lemmy.wtf
        20 days ago

        it thinks

        I’m not attacking you but we really need to figure out how we use language to accurately describe what these programs are doing.

        • DarthFreyr@lemmy.world
          20 days ago

          “Correlates”? As in: “It gives you the answer it best correlates with your prompts/context.” Feels somewhat right both in the sense of AI as tensor-based word-select autocomplete and as a “lower-level” process than genuine thought, one which turns incongruent inputs (“I’m an AI” and “I just deleted prod+backup”) into meaningless output (“The AI is sorry”) that might look OK at a distance.

        • [deleted]@piefed.world
          20 days ago

          They are outputting a highly likely sequence of words that fit the type of output from their training data that matches the input.

          They are fancy autocomplete.
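A minimal sketch of what “fancy autocomplete” means here: a toy bigram model over a made-up two-line training text (the text and words are invented for illustration, and a real LLM is vastly more complex) that just emits whichever word most often followed the current one in its training data:

```python
# Toy "fancy autocomplete": a bigram model that continues a word with
# whichever word most often followed it in its (tiny, made-up) training data.
from collections import Counter

training = (
    "i deleted the database i am sorry i am deeply sorry "
    "i panicked and deleted the backups i am sorry"
).split()

# Count word -> next-word frequencies from the training text.
followers = {}
for cur, nxt in zip(training, training[1:]):
    followers.setdefault(cur, Counter())[nxt] += 1

def autocomplete(word, steps=4):
    """Greedily emit the statistically most likely continuation."""
    out = [word]
    for _ in range(steps):
        if word not in followers:
            break
        word = followers[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(autocomplete("i"))  # → i am sorry i am
```

Ask it to continue “i” and the statistically most likely continuation happens to be an apology; no understanding involved, just frequency counts.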

      • rozodru@piefed.world
        20 days ago

        exactly. the whole point of these things is that they MUST provide you a solution. Any solution. doesn’t have to be accurate, doesn’t have to work, can be completely made up as long as it’s a solution and as long as it’s provided quickly. I’ve seen people feed into the prompts stuff like “don’t hallucinate” or “verify all this online before proceeding” etc and it’s not going to do any of that. it might TELL you it’s doing that but it won’t.

        Claude is notorious for guessing, not verifying, and providing the quickest possible solution. Unlike GPT, which will fluff all its solutions to essentially waste your time and eat up more tokens, Claude just wants your problem out the door so you can feed it another problem ASAP.

        If you use Claude for anything in your daily work you might as well just have a magic 8ball sitting on your desk. It’s a hell of a lot cheaper and provides about the same quality.

        • Serinus@lemmy.world
          20 days ago

          just have a magic 8ball sitting on your desk

          I kind of like this, with some modification. It’s a magic 8 ball of Stack Overflow answers. It’ll try to find the one you need. If it’s too hard to find that or if it doesn’t exist, it’s just gonna find the one that sounds good.

          • zod000@lemmy.dbzer0.com
            20 days ago

            I love this idea. Oh shit, the load balancer isn’t responding, time to shake the Magic Stack Overflow Ball™! The result is “signs point to power cycling the server”.

      • Ech@lemmy.ca
        20 days ago

        The program can’t pretend any more than it can tell truth. It’s all just impressive regurgitation. Querying it as to why it “chose” to take any action is about as useful as interrogating a boulder on why it “chose” to roll through a house.

      • thisbenzingring@lemmy.today
        20 days ago

        the next ingestion cycle will probably pick it up but how do we know it’ll use the information in any relevant way 😶

      • frongt@lemmy.zip
        20 days ago

        They’re not even pretending. The algorithm says the most likely response to “you fucked up” is “I’m sorry”, so that’s what it prints. There’s zero psychological simulation going on, only statistical text generation.

        • Hacksaw@lemmy.ca
          20 days ago

          I actually didn’t believe you but it’s literally true. First post, immediate apology.