Amazon’s ecommerce business has summoned a large group of engineers to a meeting on Tuesday for a “deep dive” into a spate of outages, including incidents tied to the use of AI coding tools.

The online retail giant said there had been a “trend of incidents” in recent months, characterized by a “high blast radius” and “Gen-AI assisted changes” among other factors, according to a briefing note for the meeting seen by the FT.

Under “contributing factors”, the note listed “novel GenAI usage for which best practices and safeguards are not yet fully established.”

    • mrgoosmoos@lemmy.ca

      I am not a developer, but:

      I told the owner of the company recently that, and I quote, “I will fucking kill myself if my job becomes reviewing AI output”

    • mojofrododojo@lemmy.world

      It’s pretty fucking stark, right? These are the devs who stayed after management mandated they USE the shit in the first place, and now they want the same devs to become responsible for what the shit does to their codebases.

    • TheTechnician27@lemmy.world

      Exactly. If you’re too stupid or lazy to adequately vet what your LLM puts out yourself, it shouldn’t be somebody else’s job to wade through the sewage you’re producing. You either shouldn’t be using one or, if you can’t do your job without it, you shouldn’t have that job.

      —Someone who doesn’t use genAI but has spent way too much time digging through LLM slop

      • inclementimmigrant@lemmy.world

        I mean honestly, yeah. I’m not going to waste my time with some junior developer who can’t explain how the code works and how it interacts with whatever framework I’m working on. I ain’t got time for that nonsense, especially when I deal with safety-critical sections of code.

        Honestly if my work ever decided to allow unfettered AI code generation into my code base, I would immediately look for a new job at that point.

      • Lost_My_Mind@lemmy.world

        You know what my favorite pizza topping is? Bleach.

        Domino’s REFUSES to put bleach on my pizza, so I gotta do it myself. I found out about it from AI. Now my pizza tastes great! The downside is having to go to the hospital for a stomach pump every time.

    • Hegar@fedia.io

      It’s going to get senior devs fired, surely?

      They either refuse to sign off when the boss wants them to and get fired, or they sign off and get fired when the AI code they signed off on causes issues.

      • frank@sopuli.xyz

        Or quit/find new jobs. I suspect that’s by design by Business Idiots.

        *Get rid of the most expensive engineers and the cheaper ones can just use AI to make up the difference in output. And we can make the lower engineers the fall guys when convenient and replace them at our leisure.*

        The disdain bosses have for average people is astonishing.

      • Windex007@lemmy.world

        Bingo.

        Maybe not outright fired, but it absolutely opens them up to career limits, based on what you described.

        All of Amazon’s code already undergoes code review. Accepting a PR is already, in spirit, a sign-off.

        This is explicitly a threat: an attempt to find someone to hold accountable, because you can’t hold AI accountable. What are they gonna do, fire the AI? Sign here to be the fall guy. Fuck off.

  • laranis@lemmy.zip

    How in the glorious fuck was this not a thing from the start? In a system this big and this critical, all code should be reviewed by cognizant individuals. Anyone who thought an LLM would be perfect and not need code reviews has their head so far up their ass they can see through their pee hole.

    • titanicx@lemmy.zip

      If you do this, you signal that the AI isn’t ready for production, which limits your sales group’s ability to market it. Which is, in reality, the actual case: AI sucks and should never be trusted.

  • Bytemeister@lemmy.world

    AI is an assistant, not a replacement. It amazes me that Amazon, Microsoft, Google, and all these “tech leader” companies are going to make the same tech fuckup multiple times.

  • WraithGear@lemmy.world

    Or, hear me out: they could build it themselves so they don’t have to chase hallucinations. As a matter of fact, let’s cut the AI out of the project and leave it to summarizing emails.

    • laranis@lemmy.zip

      This, 1000x. You think that senior dev got to that level hoping one day all they’d have to do is evaluate randomly generated code? No! They want to create, build, design, integrate, share. Cut out the useless middle step and get back to the work these professionals have dedicated their careers to.

  • merc@sh.itjust.works

    What is AI good at? Creating thousands of lines of code that look plausibly correct in seconds.

    What are humans bad at? Reviewing changes containing thousands of lines of plausibly correct code.

    This is a great way to force senior devs to take the blame for things. But if they actually want to avoid outages, rather than just assign blame for them, they’ll need small, efficient changes that the submitter understands and can explain clearly. Wouldn’t it be simpler just to say “no AI”?
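
    Something like this, wired into CI, is the kind of guardrail I mean. A rough sketch only: the 400-line cap, the `main` base branch, and the script itself are placeholder assumptions, not anything Amazon actually runs:

    ```python
    #!/usr/bin/env python3
    # Hypothetical pre-merge gate: reject diffs too large for a human reviewer.
    # The 400-line cap and the "main" base branch are assumptions for the sketch.
    import subprocess
    import sys

    MAX_CHANGED_LINES = 400  # rough ceiling for a change one person can actually review

    def changed_lines(base: str = "main") -> int:
        # `git diff --numstat` prints "added<TAB>deleted<TAB>path" for each file.
        out = subprocess.run(
            ["git", "diff", "--numstat", base],
            capture_output=True, text=True, check=True,
        ).stdout
        total = 0
        for row in out.splitlines():
            added, deleted, _path = row.split("\t", 2)
            if added != "-":  # binary files report "-" for both counts
                total += int(added) + int(deleted)
        return total

    if __name__ == "__main__":
        n = changed_lines()
        if n > MAX_CHANGED_LINES:
            print(f"Refusing {n} changed lines (cap {MAX_CHANGED_LINES}): split this up.")
            sys.exit(1)
        print(f"OK: {n} changed lines.")
    ```

    A cap like that forces the submitter to split work into pieces a reviewer can actually hold in their head.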

    • Earthman_Jim@lemmy.zip

      AI’s greatest feature, in the eyes of the Epstein class, is the ability to shift responsibility. People will do all kinds of fucked-up shit if they can shift the blame to someone else, and AI is the perfect bag holder.

      Just ask the school full of little girls in Iran that was likely a target picked by AI using out-of-date information saying it was a barracks. Why bother confirming the target with current intel from the ground when no one’s going to take the blame anyway?

    • Joeffect@lemmy.world

      If you ask a writer what AI is good for, they’ll say it’s good for art, but never use it for writing, because it’s terrible at it.

      If you ask an artist what AI is good for, they’ll say it’s good for writing, but never use it for art, because it’s terrible at it.

        • Overzeetop@sopuli.xyz

          The output looks good to people who are poorly versed in the area AI is being asked to perform in, but it is often inefficient, or it fails in ways that an expert in the field would never miss.

          —Ignore this part, I’m just rambling from here on.

          Depending on the context, you’ll almost certainly get something that looks correct at first glance, especially if you’re not an expert. If you’re an expert, you wouldn’t need to ask for such a task, and if you did to save time, you’d probably end up adjusting, correcting, or fixing several things to produce production-ready output.

          I use it regularly for code, because the last language I had any training in proper syntax for was Fortran 77, and eventually the simple tasks I ask it to code for me work. I’ve asked it to do some Excel calculations (I’m not an Excel expert; I do a lot of mathematical manipulation in custom sheets), and some of them work, but most are either wildly convoluted or rely on obscure calls/functions rather than simply using standard logic and mathematical operations, which are easy to edit and change.

          I’ve also asked it to do some graphical illustration (because I’m not a graphic artist), and it has produced nice-looking illustrations with zero basis in reality. E.g., “draw me an outline of Scotland in the style you’d see on a tourist map and label, with a star, these four cities”: it produced what I would expect an average American to estimate the outline of Scotland looks like, and it was equally accurate with the locations of the four cities (i.e., utterly incorrect).

      • merc@sh.itjust.works

        In my experience, LLMs suck at making smart, small changes. To know how to do that they need to “understand” the entire codebase, and that’s expensive.
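
        A toy illustration (file and function names entirely made up): a one-line “smart, small change” is only safe if you know every caller, and a model shown a single file doesn’t:

        ```python
        # pricing.py -- the only file the model sees. An LLM "cleanup" quietly
        # changes pct from a 0-100 percentage to a 0-1 fraction:
        def discount(price: float, pct: float) -> float:
            return price * (1 - pct)  # was: price * (1 - pct / 100)

        # checkout.py -- a caller the model never saw, still passing 0-100:
        total = discount(80.0, 15)    # now -1120.0; it used to be 68.0
        print(total)
        ```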

  • TrackinDaKraken@lemmy.world

    Couldn’t they, I don’t know, just go back to people writing the code, and stop using AI to do something it clearly can’t handle? Just an idea.

    I guess they’ve invested (thrown) so much money at this thing that they’re determined to make it work. Also, I know they’ve gone insanely deep into debt, and if it doesn’t work they’re going to lose an eye-watering amount of money; perhaps the bubble bursting will be the catalyst that brings down the entire world economy.

    Oh, so yeah, they do have great incentive to make this work, but I don’t see it happening. As usual, they fuck up and the rest of us pay the bill. None of the billionaires will suffer any more than loss of face over this. Even if they’ve broken laws, all they ever get is a small fine and a slap on the back, “Better luck, next time, ol’ boy!”

  • pedroapero@lemmy.ml

    Yes, so now when there’s a success, it gets attributed to AI. When there’s an outage, that’s the fault of humans not reviewing correctly. These senior engineers will get fucked in all scenarios.

    • IratePirate@feddit.org

      Precisely. From Cory Doctorow’s latest, very insightful essay on AI, where he talks about the promise of AI replacing 9 out of 10 radiologists:

      “if the AI misses a tumor, this will be the human radiologist’s fault, because they are the ‘human in the loop.’ It’s their signature on the diagnosis.

      This is a reverse centaur, and it’s a specific kind of reverse-centaur: it’s what Dan Davies calls an ‘accountability sink.’ The radiologist’s job isn’t really to oversee the AI’s work, it’s to take the blame for the AI’s mistakes.”

      • kimara@sopuli.xyz

        I don’t think it’s fair to compare LLM code generation to machine vision in this way. These are very different “AI”s. I’m not necessarily disagreeing with Doctorow, but this is an important distinction.

        • BlameTheAntifa@lemmy.world

          How the machines work doesn’t matter. The situation is using a machine to replace human expertise while ensuring a human still takes responsibility for things that human didn’t actually do. It is not the owning class who is at risk from their machines’ mistakes; it is the owning class’s wage slaves who are.

          • kimara@sopuli.xyz

            My understanding is that tumor-detecting machine vision is generally thought useful in addition to the radiologist’s expertise. It basically outputs “yes”, “maybe”, or “no”, which respects that expertise more than generating approximately-right code that the coder now has to validate.

            This is why I wouldn’t equate these tools. LLM code generation is marketed to do much more than machine vision for tumor detection.

            • AnarchistArtificer@slrpnk.net

              Cory Doctorow actually goes more in depth on the radiologist example in a post from last year:

              'If my Kaiser hospital bought some AI radiology tools and told its radiologists: “Hey folks, here’s the deal. Today, you’re processing about 100 x-rays per day. From now on, we’re going to get an instantaneous second opinion from the AI, and if the AI thinks you’ve missed a tumor, we want you to go back and have another look, even if that means you’re only processing 98 x-rays per day. That’s fine, we just care about finding all those tumors.”

              If that’s what they said, I’d be delighted. But no one is investing hundreds of billions in AI companies because they think AI will make radiology more expensive, not even if that also makes radiology more accurate. The market’s bet on AI is that an AI salesman will visit the CEO of Kaiser and make this pitch: "Look, you fire 9/10s of your radiologists, saving $20m/year, you give us $10m/year, and you net $10m/year, and the remaining radiologists’ job will be to oversee the diagnoses the AI makes at superhuman speed, and somehow remain vigilant as they do so, despite the fact that the AI is usually right, except when it’s catastrophically wrong.

              “And if the AI misses a tumor, this will be the human radiologist’s fault, because they are the ‘human in the loop.’ It’s their signature on the diagnosis.”

              This is a reverse centaur, and it’s a specific kind of reverse-centaur: it’s what Dan Davies calls an “accountability sink.” The radiologist’s job isn’t really to oversee the AI’s work, it’s to take the blame for the AI’s mistakes.’

              In short, we definitely could (and indeed should) be using tools like tumor-detecting machine vision to help humans build a better world for humans. But we’ve seen, time and time again, across countless fields, that it never works out that way.

              That’s because this isn’t a problem with the technology of AI, but with the fucked-up sociotechnical and economic systems that govern how this tech is used: who gets to use it, who it gets used on, whose consent is required for those uses, and, most significant of all, who gets to profit?

              Not us, that’s for sure!

        • Frenchgeek@lemmy.ml

          The kind of AI doesn’t matter in this situation. Hell, it could be a magic talking rock™ and it would change nothing about mismanagement using a person to avoid blaming their shiny, expensive new toy.

  • HugeNerd@lemmy.ca

    I seriously don’t understand how something as static as Amazon, a fucking webpage serving up pictures and ads, generating orders, needs to constantly write software in these quantities.

  • Simulation6@sopuli.xyz

    I always treated a code review like a dissertation defense: why did you choose to implement the requirement this way? Answers like “I found a post on Stack Overflow” or “the AI told me to” would only move the question back one step: why did you choose to accept that answer?
    I was a very unpopular reviewer.