A robot trained on videos of surgeries performed a lengthy phase of a gallbladder removal without human help. The robot operated for the first time on a lifelike patient, and during the operation, responded to and learned from voice commands from the team—like a novice surgeon working with a mentor.

The robot performed unflappably across trials, with the expertise of a skilled human surgeon, even during unexpected scenarios typical of real-life medical emergencies.

  • DrunkenPirate@feddit.org · +89/-3 · 6 days ago

    And then you’re lying on the table. Unfortunately, your case is a little different from the standard surgery. Good luck.

    • Buffalox@lemmy.world · +51/-10 · 6 days ago

      At some point in a not-very-distant future, you will probably be better off with the robot/AI, as it will have wider knowledge of how to handle fringe cases than a human surgeon.
      We are not there yet, but maybe in 10 years, or maybe 20?

      • Balder@lemmy.world · +3 · 5 days ago

        Or the most common cases can be automated while the more nuanced surgeries will take the actual doctors.

      • its_prolly_fine@sh.itjust.works · +2 · 6 days ago

        The main issue with any computer is that it can’t adapt to new situations. We can infer and work through new problems; the more variables, the more “new” problems. The problem with biology is that there aren’t really any hard-set rules; there are almost always exceptions. The amount of functional memory and computing power required would be ridiculous for a computer. Driving mostly works because there are straightforward rules.

      • DrunkenPirate@feddit.org · +3/-2 · 6 days ago

        I doubt it. It would simply be enough if the AI could recognize when it reaches its limits and hand over to a human. But even that is hard for humans, as Dunning and Kruger discovered.

    • otacon239@lemmy.world · +28/-4 · 6 days ago

      “realistic surgery”

      “lifelike patient”

      I wonder how doctors could compare this simulation to a real surgery. I’m willing to bet it’s “realistic and lifelike” in the way a 4D movie is.

      Biological creatures don’t follow perfect patterns; all sorts of unexpected things happen. I was just reading an article about someone whose organs are all mirrored from the average person’s.

      Nothing about humans is “standard”.

      • alleycat@feddit.org · +9 · 6 days ago

        I wonder how doctors could compare this simulation to a real surgery. I’m willing to bet it’s “realistic and lifelike” in the way a 4D movie is.

        I think “lifelike” in this context means a dead human. The robot was originally trained on pigs.

        • CrazyLikeGollum@lemmy.world · +1 · 5 days ago

          The article mentions that previously they used pig cadavers with dyes and specially marked tissues to guide the robot. While it doesn’t specify exactly what the “lifelike patient” is, to me the article reads like they’re still using a pig cadaver just without those aids.

      • Zexks@lemmy.world · +6/-2 · 6 days ago

        Right, I’m sure a bunch of armchair docs on Lemmy are totally more knowledgeable, and have more understanding of all this and the needed procedures, than actual licensed doctors.

        • skulblaka@sh.itjust.works · +1/-2 · 6 days ago

          More than the doctors? No, absolutely not.

          More than the bean counters who want to replace these doctors with unsupervised robots? I’m a lot more confident on that one.

    • Echo Dot@feddit.uk · +1 · 4 days ago

      I assume my insides are pretty much like everyone else’s. I feel like if there was that much of a complication it would have been pretty obvious before the procedure started.

      “Hey, this guy has two heads. I’m sure the AI will work it out.”

      • DrunkenPirate@feddit.org · +2 · edited · 6 days ago

        That’s a different thing indeed. In your case the AI 🤖 goes wild, will strip dance and tell poor jokes (while flirting with the ventilation machine)

    • GreenKnight23@lemmy.world · +15/-3 · 6 days ago

      Know what? Let’s just skip the middleman and have the CEO undergo the same operation, you know, like the taser company that tasers its employees.

      Can’t have trust in a product unless you use the product.

      • cactusupyourbutt@lemmy.world · +5 · 6 days ago

        I understand what you are saying is intended as “if they trust their product, they should use it themselves,” and I agree with that.

        However, I don’t think undergoing an operation that a person doesn’t need is ethical.

      • Echo Dot@feddit.uk · +1/-1 · 4 days ago

        Hey boss ready for your unnecessary heart transplant just to please some random guy on the internet?

        Yeah so let’s get this done I’ve got a meeting in 2 hours.

      • Echo Dot@feddit.uk · +1 · edited · 4 days ago

        Then it saw Innerspace and invented nanobots. So you win some, you lose some.

    • Smoogs@lemmy.world · +2 · 5 days ago

      You underestimate the demands surgery places on a surgeon’s body. A robot is much less prone to tiredness and mistakes, and it can keep going even when a human surgeon would be physically incapable of continuing life-saving surgery.

  • ChicoSuave@lemmy.world · +14 · 6 days ago

    Not fair. A robot can watch videos and perform surgery but when I do it I’m called a “monster” and “quack”.

    But seriously, this robot surgeon still needs a surgeon to chaperone so what’s being gained or saved? It’s just surgery with extra steps. This has the same execution as RoboTaxis (which also have a human onboard for emergencies) and those things are rightly being called a nightmare. What separates this from that?

    • Doomsider@lemmy.world · +1 · 4 days ago

      AI and robotics are coming for the highest paid jobs first. The attack on education is much more sinister than you think. We are approaching an era where many thinking and high cost labor fields will be eliminated. This attack on education is because the plan is to replace it all with AI.

      It is pretty sickening really to think of a world where your AI teacher supplied by Zombie Twitter will teach history lessons to young pupils about whether or not the Holocaust is real. I am not making this shit up.

      This is no longer about wars against nations. This has become the war for the human mind and billionaires just found the cheat code.

  • Grandwolf319@sh.itjust.works · +13/-5 · 6 days ago

    So are we fully abandoning reason based robots?

    Is the future gonna just be things that guess but just keep getting better at guessing?

    I’m disappointed in the future.

  • Lovable Sidekick@lemmy.world · +6/-1 · edited · 6 days ago

    “OMG it was supposed to take out my LEFT kidney! I’m gonna die!!!”

    “Oops, the surgeon in the training video took out a RIGHT kidney. Uhh… sorry.”

  • finitebanjo@lemmy.world · +7/-6 · edited · 4 days ago

    See, the part that I don’t like is that this is a learning algorithm trained on videos of surgeries.

    That’s such a fucking stupid idea. That’s literally so much worse than using surgeons driving robot arms as your primary source of data, making fine-tuned adjustments based on visual data in addition to other electromagnetic readings.

    • Echo Dot@feddit.uk · +8/-2 · 4 days ago

      Yeah, but the training set of videos is probably vastly larger, and the thing about AI is that if the training set is too small it doesn’t really work at all. Once you get above a certain dataset size, it starts to become competent.

      After all, I assume the people doing this research have already considered that. I doubt they’re reading your comment right now and slapping their foreheads, going, “Damn, this random guy on the internet is right; he’s so much more intelligent than us scientists.”

      • finitebanjo@lemmy.world · +1/-1 · 4 days ago

        There’s no evidence they will ever reach quality output with infinite data, either. In that case, quality matters.

        • Echo Dot@feddit.uk · +1/-1 · edited · 4 days ago

          No, we don’t know. We are not AI researchers, after all. Nonetheless, I’m more inclined to defer to experts than to you. No offence (I mean, there is some offence, because this is a stupid conversation), but you have no qualifications.

          • finitebanjo@lemmy.world · +1 · edited · 4 days ago

            It’s less of an unknown and more of a “it has never demonstrated any such capability.”

            Btw, both OpenAI and DeepMind wrote papers predicting that their then-current models would never approach human error rates even with infinite training, and that work correctly predicted the performance of GPT-4.

    • Zacryon@feddit.org · +4/-1 · 4 days ago

      That’s such a fucking stupid idea.

      Care to elaborate why?

      From my point of view I don’t see a problem with that. Or let’s say: the potential risks highly depend on the specific setup.

      • JustARaccoon@lemmy.world · +1 · 4 days ago

        Unless the videos have proper depth maps and identifiers for objects and actions, they’re not going to be as effective as, say, robot-arm surgery data or VR-captured movement and tracking. You’re basically adding a layer to the learning: first process the video into something usable, then learn from that. Not very efficient, and highly dependent on cameras and angles.

      • finitebanjo@lemmy.world · +2/-2 · 4 days ago

        Imagine if the Tesla autopilot without lidar that crashed into things and drove on the sidewalk was actually a scalpel navigating your spleen.

        • Echo Dot@feddit.uk · +2/-1 · 4 days ago

          Absolutely stupid example, because that kind of assumes medical professionals have the same standards as Elon Musk.

          • finitebanjo@lemmy.world · +2 · 4 days ago

            Elon Musk literally owns a medical-equipment company that puts chips in people’s brains; nothing is sacred unless we protect it.

            • Echo Dot@feddit.uk · +1/-1 · 3 days ago

              Into volunteers. It’s not standard practice to randomly put a chip in someone’s head.

      • Showroom7561@lemmy.ca · +3/-3 · edited · 4 days ago

        Being trained on videos means it has no ability to adapt, improvise, or use knowledge during the surgery.

        Edit: However, in the context of this particular robot, it does seem that additional input was given and other training was added in order for it to expand beyond what it was taught through the videos. As the study noted, the surgeries were performed with 100% accuracy. So in this case, I personally don’t have any problems.

        • finitebanjo@lemmy.world · +1/-2 · edited · 4 days ago

          I actually don’t think that’s the problem. The problem is that the AI only factors in visible, surface-level information.

          AI doesn’t have object permanence; once something is out of sight, it does not exist.

          • Showroom7561@lemmy.ca · +3/-1 · 4 days ago

            If you read how they programmed this robot, it seems that it can anticipate things like that. Also keep in mind that this is only designed to do one type of surgery.

            I’m cautiously optimistic.

            I’d still expect human supervision, though.

  • flop_leash_973@lemmy.world · +4 · 6 days ago

    Naturally as this kind of thing moves into use on actual people it will be used on the wealthiest and most connected among us in equal measure to us lowly plebs right…right?

    • brown567@sh.itjust.works · +11 · 6 days ago

      Are you kidding!? It’ll be rolled out to poor people first! (gotta iron out the last of the bugs somehow)

  • Opinionhaver@feddit.uk · +3 · 6 days ago

    That’s ridiculous. Everyone knows that for a robot to perform an operation like this safely, it needs human-written code and a LiDAR.

  • BrianTheeBiscuiteer@lemmy.world · +5/-2 · edited · 6 days ago

    My son’s surgeon told me about the evolution of one particular cardiac procedure. Most of the “good” doctors laid many stitches in a tight fashion, while the “lazy” doctors laid fewer, looser stitches. It turned out that the patients of the “lazy” doctors had a better recovery rate, so now that’s the standard procedure.

    Sometimes divergent behaviors can actually lead to better outcomes. A “lazy” AI surgeon probably wouldn’t exist; engineers would stamp out that behavior before it ever got to the OR.

    • Tattorack@lemmy.world · +5/-1 · 6 days ago

      That’s just one case of professional laziness in an entire ocean of medical horror stories caused by the same.

      • BrianTheeBiscuiteer@lemmy.world · +3 · 6 days ago

        Eliminating room for error (not to say AI is flawless, but that is the goal in most cases) is a good way to never learn anything new. I don’t completely dislike this idea, but I’m sure it will be driven toward cutting costs, not saving lives.

      • snooggums@lemmy.world · +2 · edited · 6 days ago

        Or, more likely, they weren’t actually being lazy; they knew they needed to leave room for swelling and healing. The surgeons who did tight stitches thought theirs were better because they looked better immediately after the surgery.

        Surgeons are actually pretty well known for being arrogant, and claiming anyone who doesn’t do their neat and tight stitching is lazy is completely on brand for people like that.

    • jwmgregory@lemmy.dbzer0.com · +1/-1 · 6 days ago

      i mean, you could just as easily say professors and university would stamp those habits out of human doctors, but, as we can see… they don’t.

      just because an intelligence was engineered doesn’t mean it’s incapable of divergent behaviors, nor does it mean the ones it displays are of intrinsically lesser quality than those a human in the same scenario might exhibit. i don’t understand this POV you have because it’s the direct opposite of what most people complain about with machine learning tools… first they’re too non-deterministic to such a degree as to be useless, but now they’re so deterministic as to be entirely incapable of diverging their habits?

      digressing over how i just kind of disagree with your overall premise (that’s okay that’s allowed on the internet and we can continue not hating each other!), i just kind of find this “contradiction,” if you can even call it that, pretty funny to see pop up out in the wild.

      thanks for sharing the anecdote about the cardiac procedure, that’s quite interesting. if it isn’t too personal to ask, would you happen to know the specific procedure implicated here?

      • BrianTheeBiscuiteer@lemmy.world · +1 · 6 days ago

        Not specifically, but I think the guidance is applicable to most incisions of the heart. I think the fact that it’s a muscular and constantly moving organ makes it different from something like an epidermal stitch.

        And my post isn’t to say “all mistakes are good,” but that invariability can lead to stagnation. AI doesn’t do things the same way every single time, but it also doesn’t aim to “experiment” as a way to grow, or to self-reflect on its own efficacy (which could lead to model collapse). That’s almost at the level of sentience.