Or my favorite quote from the article:

“I am going to have a complete and total mental breakdown. I am going to be institutionalized. They are going to put me in a padded room and I am going to write… code on the walls with my own feces,” it said.

  • Jo Miran@lemmy.ml · 86 points · 4 days ago

    I was an early tester of Google’s AI, since well before Bard. I told the person who gave me access that it was not a releasable product. Then they released Bard as a closed, invite-only product, which I again tested and gave feedback on from day one. Once again I said publicly, and privately to my Google friends, that Bard was absolute dog shit. Then they released it to the wild. It was dog shit. Then they renamed it. Still dog shit. Not a single one of the issues I brought up years ago was ever addressed, except one: I told them that a basic Google search gave better results than asking the bot (again, pre-Bard). They fixed that issue by breaking Google’s search. Now I use Kagi.

    • Guidy@lemmy.world · 1 point · 2 days ago

      Weird, because I’ve used it many times for things not related to coding and it has been great.

      I told it the specific model of my UPS and it let me know in no uncertain terms that no, a plug adapter wasn’t good enough, that I needed an electrician to put in a special circuit or else it would be a fire hazard.

      I asked it about some medical stuff, and it gave thoughtful answers along with disclaimers and a firm directive to speak with a qualified medical professional, which was always my intention. But I appreciated those thoughtful answers.

      I use Copilot for coding. It’s pretty good, though not perfect. It can’t even generate a valid zip file (unless they’ve fixed it in the last two weeks), but it sure does try.

      • Jo Miran@lemmy.ml · 2 points · 2 days ago

        Beware of the confidently incorrect answers. Triple-check your results against primary sources (which defeats the purpose of the chatbot).

    • PriorityMotif@lemmy.world · 9 points · 4 days ago

      I remember an article from years ago, before the AI hype train, saying Google had made an AI chatbot but had to shut it down due to racism.

    • Lucidlethargy@sh.itjust.works · 6 up / 3 down · 4 days ago

      Gemini is dogshit, but it’s objectively better than ChatGPT right now.

      They’re ALL just fucking awful. Every AI.

    • jj4211@lemmy.world · 2 up / 1 down · 4 days ago

      Not a single one of the issues I brought up years ago was ever addressed, except one.

      That’s the thing about AI in general: it’s really hard to “fix” issues. You can try to train a problem out and hope for the best, but then you may end up playing whack-a-mole, since fine-tuning to fix one issue can make others crop up. So you pretty much have to decide which problems are the most tolerable and largely accept them. You can apply alternative techniques to catch egregious issues, e.g. a non-AI technique that helps stuff the prompt and steers the model in a certain general direction (if it’s an LLM; other AI technologies don’t have this option, but they aren’t the ones getting crazy money right now anyway).

      A traditional QA approach is frustratingly less applicable, because you more often have to shrug and say “the attempt to fix it would be very expensive, is not guaranteed to actually fix the precise issue, and risks creating even worse issues.”
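
      The “non-AI prompt stuffing” idea above can be sketched roughly like this: a plain rule-based check runs *before* the model call and injects a steering instruction into the prompt, instead of trying to retrain the model. All names here (`steer_prompt`, `RULES`) are illustrative, not any real library’s API.

      ```python
      import re

      # Each rule: (trigger keywords, instruction to stuff into the prompt).
      # Purely illustrative examples, not a real product's rule set.
      RULES = [
          ({"wiring", "circuit", "electrical"},
           "Recommend consulting a licensed electrician for safety questions."),
          ({"diagnosis", "symptom", "medication"},
           "Advise the user to speak with a qualified medical professional."),
      ]

      def steer_prompt(user_input: str) -> str:
          """Prepend steering guidance when trigger words appear in the input."""
          words = set(re.findall(r"[a-z]+", user_input.lower()))
          extra = [rule for keys, rule in RULES if keys & words]
          if not extra:
              return user_input  # nothing triggered; pass through unchanged
          guidance = "\n".join("[system guidance] " + r for r in extra)
          return guidance + "\n" + user_input

      print(steer_prompt("Can I use a plug adapter for this circuit?"))
      ```

      The point is exactly what the comment describes: this layer is deterministic and cheap to patch when a new failure mode shows up, whereas fine-tuning the model itself risks the whack-a-mole effect.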