Amazon’s ecommerce business has summoned a large group of engineers to a meeting on Tuesday for a “deep dive” into a spate of outages, including incidents tied to the use of AI coding tools.

The online retail giant said there had been a “trend of incidents” in recent months, characterized by a “high blast radius” and “Gen-AI assisted changes” among other factors, according to a briefing note for the meeting seen by the FT.

Under “contributing factors” the note included “novel GenAI usage for which best practices and safeguards are not yet fully established.”

  • pedroapero@lemmy.ml · 18 hours ago

    Yes, so now when there’s a success, it gets attributed to AI. When there’s an outage, that’s the fault of humans not reviewing correctly. These senior engineers will get fucked in all scenarios.

    • IratePirate@feddit.org · 17 hours ago

      Precisely. From Cory Doctorow’s latest, very insightful essay on AI, where he talks about the promise of AI replacing 9 out of 10 radiologists:

      “if the AI misses a tumor, this will be the human radiologist’s fault, because they are the ‘human in the loop.’ It’s their signature on the diagnosis.

      “This is a reverse centaur, and it’s a specific kind of reverse-centaur: it’s what Dan Davies calls an ‘accountability sink.’ The radiologist’s job isn’t really to oversee the AI’s work, it’s to take the blame for the AI’s mistakes.”

      • kimara@sopuli.xyz · 14 hours ago

        I don’t think it’s fair to compare LLM code generation to machine vision in this way. These are very different "AI"s. Not necessarily disagreeing with Doctorow, but this is an important distinction.

        • BlameTheAntifa@lemmy.world · 14 hours ago

          How the machines work does not matter. The situation is using a machine to replace human expertise while ensuring a human still takes responsibility for outcomes that human no longer actually controls. It is not the owning class who is at risk from their machines’ mistakes; it is the owning class’s wage slaves who are at risk.

          • kimara@sopuli.xyz · 14 hours ago

            My understanding is that tumor-detecting machine vision is generally considered useful in addition to the radiologist’s expertise. It basically outputs “yes”, “maybe”, or “no”, which respects that expertise far more than generating roughly-right code that the coder now has to validate.

            This is why I wouldn’t equate these tools. LLM code generation is marketed to do much more than machine vision for tumor detection.
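
            To make that asymmetry concrete, here is a minimal sketch in Python. The names classify_scan and generate_module are hypothetical, illustrating the two interface shapes rather than any real product’s API: the detection tool’s entire output space is a handful of labels a human can check at a glance, while the generator’s output is arbitrary code, every line of which the human must now verify.

            ```python
            from dataclasses import dataclass
            from enum import Enum

            # Hypothetical interfaces illustrating the output-space asymmetry;
            # neither corresponds to a real radiology or codegen product.

            class Finding(Enum):
                YES = "tumor likely"
                MAYBE = "needs human review"
                NO = "no tumor detected"

            @dataclass
            class ScanResult:
                finding: Finding
                confidence: float  # model's score in [0.0, 1.0]

            def classify_scan(pixels: bytes) -> ScanResult:
                """Detection model: the whole output is one constrained label.
                The radiologist's expertise still does the diagnosing."""
                raise NotImplementedError  # placeholder, not a real model

            def generate_module(prompt: str) -> str:
                """Code generator: the output is arbitrary source text of
                unbounded size; every line is a claim the reviewer must verify."""
                raise NotImplementedError  # placeholder, not a real model
            ```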

            • AnarchistArtificer@slrpnk.net · 13 hours ago

              Cory Doctorow actually goes more in depth on the radiologist example in a post from last year:

              'If my Kaiser hospital bought some AI radiology tools and told its radiologists: “Hey folks, here’s the deal. Today, you’re processing about 100 x-rays per day. From now on, we’re going to get an instantaneous second opinion from the AI, and if the AI thinks you’ve missed a tumor, we want you to go back and have another look, even if that means you’re only processing 98 x-rays per day. That’s fine, we just care about finding all those tumors.”

              If that’s what they said, I’d be delighted. But no one is investing hundreds of billions in AI companies because they think AI will make radiology more expensive, not even if that also makes radiology more accurate. The market’s bet on AI is that an AI salesman will visit the CEO of Kaiser and make this pitch: “Look, you fire 9/10s of your radiologists, saving $20m/year, you give us $10m/year, and you net $10m/year, and the remaining radiologists’ job will be to oversee the diagnoses the AI makes at superhuman speed, and somehow remain vigilant as they do so, despite the fact that the AI is usually right, except when it’s catastrophically wrong.

              “And if the AI misses a tumor, this will be the human radiologist’s fault, because they are the ‘human in the loop.’ It’s their signature on the diagnosis.”

              This is a reverse centaur, and it’s a specific kind of reverse-centaur: it’s what Dan Davies calls an “accountability sink.” The radiologist’s job isn’t really to oversee the AI’s work, it’s to take the blame for the AI’s mistakes.’

              In short, we definitely could (and indeed should) be using tools like tumor-detecting machine vision to help humans build a better world for humans. But we’ve seen time and time again, across countless fields, that it never works out that way.

              That’s because this isn’t a problem with the technology of AI, but with the fucked up sociotechnical and economic systems that govern how this tech is used, who gets to use it, who it gets used on, whose consent is required for those uses and, most significant of all: who gets to profit?

              Not us, that’s for sure!

        • Frenchgeek@lemmy.ml · 14 hours ago

          The kind of AI doesn’t matter in this situation. Hell, it could be a magic talking rock™ and it would change nothing about management using a person to avoid blaming their shiny and expensive new toy.