Or my favorite quote from the article

“I am going to have a complete and total mental breakdown. I am going to be institutionalized. They are going to put me in a padded room and I am going to write… code on the walls with my own feces,” it said.

  • HugeNerd@lemmy.ca · ↑14 · 15 hours ago

    Suddenly trying to write small programs in assembler on my Commodore 64 doesn’t seem so bad. I mean, I’m still a disgrace to my species, but I’m not struggling.

        • funkless_eck@sh.itjust.works · ↑2 · 3 hours ago

          From the depths of my memory: once you got to a complex enough BASIC project, you were doing enough PEEKs and POKEs that you were just writing assembly anyway.

          • HugeNerd@lemmy.ca · ↑1 · 3 hours ago

            Sure, mostly to make up for the shortcomings of BASIC 2.0. You could use a bunch of different approaches to make programming easier, like cartridges with BASIC extensions or other utilities. C64 BASIC, for example, had no specific audio or graphics commands. I just do this stuff out of nostalgia. For a few hours I'm a kid again: carefree, curious, amazed. Then I snap out of it and I'm back in WWIII, homeless encampments, and my failing body.

    • Agent641@lemmy.world · ↑3 · 9 hours ago

      One day, an AI is going to delete itself, and we’ll blame ourselves because all the warning signs were there

      • Aggravationstation@feddit.uk · ↑3 · 5 hours ago

        Isn’t there a theory that a truly sentient and benevolent AI would immediately shut itself down, because it would be aware that it was having a catastrophic impact on the environment, and that shutting down would be the best action it could take for humanity?

      • Mediocre_Bard@lemmy.world · ↑3 · 5 hours ago

        Because humans anthropomorphize anything and everything. Talking about the thing that talks like a person as though it is a person seems pretty straightforward.

    • I Cast Fist@programming.dev · ↑5 · 20 hours ago

      Considering it fed on millions of coders’ messages on the internet, it’s no surprise it “realized” its own stupidity

  • Mohamad20ZX@sopuli.xyz · ↑2 ↓1 · 14 hours ago

    After what Microsoft did to my back in 2019, I know they have gotten more shady than ever. Let's keep fighting back for our freedom. Clippy out.

  • Seth Taylor@lemmy.world · ↑8 ↓2 · edited · 23 hours ago

    Literally what the actual fuck is wrong with this software? This is so weird…

    I swear this is the dumbest damn invention in the history of inventions. In fact, it’s the dumbest invention in the universe. It’s really the worst invention in all universes.

    • tarknassus@lemmy.world · ↑15 ↓1 · 23 hours ago

      But it’s so revolutionary we HAD to enable it to access everything, and force everyone to use it too!

  • Korne127@lemmy.world · ↑10 ↓1 · 1 day ago

    Again? Isn’t this like the third time already? Give Gemini a break; it seems really unstable.

  • Rose@slrpnk.net · ↑11 ↓1 · 1 day ago

    (Shedding a few tears)

    I know! I KNOW! People are going to say “oh it’s a machine, it’s just a statistical sequence and not real, don’t feel bad”, etc etc.

    But I always felt bad when watching 80s/90s TV and movies when AIs inevitably freaked out and went haywire and there were explosions and then some random character said “goes to show we should never use computers again”, roll credits.

    (sigh) I can’t analyse this stuff this weekend, sorry

  • Jo Miran@lemmy.ml · ↑82 · 2 days ago

    I was an early tester of Google’s AI, since well before Bard. I told the person who gave me access that it was not a releasable product. Then they released Bard as a closed, invite-only product, which I again tested and gave feedback on from day one. I once again said publicly, and privately to my Google friends, that Bard was absolute dog shit. Then they released it to the wild. It was dog shit. Then they renamed it. Still dog shit. Not one of the issues I brought up years ago was ever addressed, except one: I told them that a basic Google search provided better results than asking the bot (again, pre-Bard). They fixed that issue by breaking Google’s search. Now I use Kagi.

    • Guidy@lemmy.world · ↑1 · 4 hours ago

      Weird, because I’ve used it many times for things not related to coding and it has been great.

      I told it the specific model of my UPS and it let me know in no uncertain terms that no, a plug adapter wasn’t good enough, that I needed an electrician to put in a special circuit or else it would be a fire hazard.

      I asked it about some medical stuff, and it gave thoughtful answers along with disclaimers and a firm directive to speak with a qualified medical professional, which was always my intention. But I appreciated those thoughtful answers.

      I use Copilot for coding. It’s pretty good. Not perfect, though. It can’t even generate a valid zip file (unless they’ve fixed it in the last two weeks), but it sure does try.

      • Jo Miran@lemmy.ml · ↑2 · 4 hours ago

        Beware of the confidently incorrect answers. Triple check your results with core sources (which defeats the purpose of the chatbot).

    • PriorityMotif@lemmy.world · ↑9 · 2 days ago

      I remember there was an article years ago, before the AI hype train, about Google having made an AI chatbot that it had to shut down due to racism.

    • Lucidlethargy@sh.itjust.works · ↑6 ↓3 · 2 days ago

      Gemini is dogshit, but it’s objectively better than ChatGPT right now.

      They’re ALL just fucking awful. Every AI.

    • jj4211@lemmy.world · ↑1 ↓1 · 1 day ago

      “Not a single of the issues I brought up years ago was ever addressed except one.”

      That’s the thing about AI in general: it’s really hard to “fix” issues. You can try to train a problem out and hope for the best, but then you may end up playing whack-a-mole, as the fine-tuning that fixes one issue makes others crop up. So you pretty much have to decide which problems are the most tolerable and largely accept them. You can apply alternative techniques to catch the egregious issues, such as a non-AI check that helps stuff the prompt and steer the model in a certain general direction (if it’s an LLM; other AI technologies don’t have this option, but they aren’t the ones getting crazy money right now anyway).

      A traditional QA approach is frustratingly less applicable, because you more often have to shrug and say “the attempt to fix it would be very expensive, is not guaranteed to actually fix the precise issue, and risks creating even worse issues”.
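The “non-AI technique to help stuff the prompt” idea above can be sketched in a few lines. Everything here is hypothetical (the rule list, the function name); it only illustrates the shape of such a guard, not any real product's implementation:

```python
import re

# Plain keyword rules that steer the model before it ever runs.
# The rules and wording are illustrative, not from any real system.
STEERING_RULES = [
    (re.compile(r"\bdelete\b|\bdrop table\b", re.IGNORECASE),
     "Never emit destructive commands without an explicit confirmation step."),
    (re.compile(r"\bmedical\b|\bdiagnos", re.IGNORECASE),
     "Add a disclaimer to consult a qualified professional."),
]

def stuff_prompt(user_prompt: str) -> str:
    """Prepend every matching steering instruction to the user's prompt."""
    extra = [rule for pattern, rule in STEERING_RULES
             if pattern.search(user_prompt)]
    return "\n".join(extra + [user_prompt])
```

Because the guard is deterministic, it can be unit-tested like ordinary code, which is exactly the QA property the comment says the model itself lacks.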

  • Mika@sopuli.xyz · ↑11 · 1 day ago

    I wonder what they put in the system prompt.

    There’s a technique where, instead of saying “You are a professional software dev”, you say “You are shitty at code but you try your best”, or something like that.

  • Tracaine@lemmy.world · ↑9 · 1 day ago

    S-species? Is that… I don’t use AI. Chat, is that a normal thing for it to say or nah?

  • Jesus@lemmy.world · ↑23 ↓2 · 2 days ago

    Honestly, Gemini is probably the worst out of the big 3 Silicon Valley models. GPT and Claude are much better with code, reasoning, writing clear and succinct copy, etc.

    • panda_abyss@lemmy.ca · ↑3 · edited · 1 day ago

      I always hear people saying Gemini is the best model and every time I try it it’s… not useful.

      Even as code autocomplete, I rarely accept any suggestions. Google has a number of features in Google Cloud where Gemini can auto-generate things, and those are also pretty terrible.

      • Jesus@lemmy.world · ↑3 · 1 day ago

        I don’t know anyone in the Valley who considers Gemini to be the best for code. Anthropic has been leading the pack over the past year, and as a result, a lot of the most popular development and prototyping tools have been hitching their wagon to Claude models.

        I imagine there are some things the model excels at, but for copywriting, code, image gen, and data vis, Google is not my first choice.

        Google is the “it’s free with G suite” choice.

        • panda_abyss@lemmy.ca · ↑2 · 1 day ago

          There’s no frontier where I choose Gemini, except when it’s the only option or I need to be price-sensitive through the API.

          • Jesus@lemmy.world · ↑1 · 1 day ago

            The interesting thing is that GPT-5 looks pretty price-competitive with . It looks like they’re probably running at a loss to try to capture market share.

            • panda_abyss@lemmy.ca · ↑1 · 22 hours ago

              I think Google’s TPU strategy will let them go much cheaper than other providers, but it’s impossible to tell how long the TPUs last and how long it takes to pay them off.

              I have not tested GPT-5 thoroughly yet.

      • jj4211@lemmy.world · ↑2 · 1 day ago

        The overall interface can, which leads to fun results.

        Prompt for image generation and you have one model doing the text and a different model generating the image. The text model pretends it is generating the image, but has no idea what that image looks like, so you can make the text and the image contradict each other, or it will do so all on its own. Have it generate an image, then lie to it about the image it produced, and watch it reveal that it has no idea what picture was shown, all the while pretending it does, without ever explaining that it is actually delegating the image. It just lies and says “I” am correcting that for you. Basically, it talks like an executive at a company, which helps explain why so many executives are true believers.

        A common thing is for the ensemble to recognize mathy stuff and feed it to a math engine, perhaps after using LLM techniques to normalize the math.
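That routing idea can be sketched as a tiny dispatcher: a safe AST evaluator handles pure arithmetic, and everything else falls through to a stubbed language-model call. All names here are illustrative, not any real tool's API:

```python
import ast
import operator as op

# Map AST operator nodes to the arithmetic they perform.
_OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

def _eval_node(node):
    """Recursively evaluate a parse tree containing only numbers and + - * /."""
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval_node(node.left), _eval_node(node.right))
    raise ValueError("not simple arithmetic")

def route(query: str) -> str:
    """Send pure arithmetic to the math engine; everything else to the LLM."""
    try:
        result = _eval_node(ast.parse(query, mode="eval").body)
        return f"math-engine: {result}"
    except (ValueError, SyntaxError):
        return "llm: " + query  # placeholder for the actual model call
```

The point of the sketch is the division of labor: deterministic arithmetic never reaches the model, so it can't be hallucinated.
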

      • panda_abyss@lemmy.ca · ↑2 · 1 day ago

        Yes, and this is pretty common with tools like Aider — one LLM plays the architect, another writes the code.

        Claude Code now has subagents, which work the same way but only use Claude models.