As AI enters the operating room, reports arise of botched surgeries and misidentified body parts
Medical device makers have been rushing to add AI to their products. While proponents say the new technology will revolutionize medicine, regulators are receiving a rising number of claims of patient injuries.
In 2021, Acclarent, a unit of healthcare giant Johnson & Johnson, announced "a leap forward": It had added artificial intelligence to a medical device used to treat chronic sinusitis, an inflammation of the sinuses. The company said the software for its TruDi Navigation System would now use a machine-learning algorithm to assist ear, nose and throat specialists in surgeries.
The device had already been on the market for about three years. Until then, the US Food and Drug Administration had received unconfirmed reports of seven instances in which the device malfunctioned and another report of a patient injury. Since AI was added to the device, the FDA has received unconfirmed reports of at least 100 malfunctions and adverse events.
At least 10 people were injured between late 2021 and 11 November 2025, according to the reports. Most allegedly involved errors in which the TruDi Navigation System misinformed surgeons about the location of their instruments while they were using them inside patients' heads during operations.
Cerebrospinal fluid reportedly leaked from one patient's nose. In another reported case, a surgeon mistakenly punctured the base of a patient's skull. In two other cases, patients each allegedly suffered strokes after a major artery was accidentally injured.
FDA device reports may be incomplete and aren't intended to determine causes of medical mishaps, so it's not clear what role AI may have played in these events. The two stroke victims each filed a lawsuit in Texas alleging that the TruDi system's AI contributed to their injuries. "The product was arguably safer before integrating changes in the software to incorporate artificial intelligence than after the software modifications were implemented," one of the suits alleges.
Reuters could not independently verify the lawsuits' allegations.
Asked about the FDA reports on the TruDi device, Johnson & Johnson referred questions to Integra LifeSciences, which in 2024 purchased Acclarent and the TruDi Navigation System. Integra LifeSciences said the reports "do nothing more than indicate that a TruDi system was in use in a surgery where an adverse event took place." It added that "there is no credible evidence to show any causal connection between the TruDi Navigation System, AI technology, and any alleged injuries."
Insight into the incidents comes as AI is beginning to transform the world of health care. Proponents predict the new technology will help find cures for rare diseases, discover new drugs, enhance surgeons' skill and empower patients. But a Reuters review of safety and legal records, as well as interviews with doctors, nurses, scientists and regulators, documents some of the hazards of AI in medicine as device makers, tech giants and software developers race to roll it out.
At least 1,357 medical devices using AI are now authorized by the FDA – double the number it had allowed through 2022. The TruDi system isn't the only one to draw scrutiny: The FDA has received reports involving dozens of other AI-enhanced devices, including a heart monitor said to have overlooked abnormal heartbeats and an ultrasound device that allegedly misidentified fetal body parts.
Researchers from Johns Hopkins, Georgetown and Yale universities recently found that 60 FDA-authorized medical devices using AI were linked to 182 product recalls, according to a research letter published in the JAMA Health Forum in August. Their review showed that 43% of the recalls occurred less than a year after the devices were greenlighted. That's about twice the recall rate of all devices authorized under similar FDA rules, the review noted.
The AI boom poses a problem for the FDA, five current and former agency scientists told Reuters: The agency is struggling to keep pace with the flood of AI-enhanced medical devices seeking approval after losing key staff. A spokesperson for the US Department of Health and Human Services, which includes the FDA, said it's looking to boost its capacity in this area.
Another form of artificial intelligence, generative-AI chatbots, is also making its way into medicine. Many physicians are now using AI to save time, such as in transcribing patient notes. But doctors also say many patients use chatbots to self-diagnose or challenge professional advice, posing new challenges and risks.
Artificial intelligence became a business and social sensation after the launch of ChatGPT about three years ago. ChatGPT and other popular chatbots, such as Google's Gemini and Anthropic's Claude, use so-called generative AI to create content. They are built on top of large language models, or LLMs, which are trained on huge troves of text and other data to understand and generate human language. These AI tools are now being introduced into medical areas such as consumer healthcare apps.
AI encompasses more than LLMs, however, and the technology made its way into medicine long before AI bots appeared. The field dates back more than 70 years: A key moment came in 1950, when British mathematician Alan Turing asked in a paper, "Can machines think?"
The FDA authorized its first AI-enhanced medical devices in 1995 – two systems that used pattern-matching software to screen for cervical cancer. The type of AI used in medical devices today is typically machine learning, including a subset known as deep learning; such systems are trained on data to perform specific tasks. The technology is used in radiology, for example, to enhance and analyze medical images. It can help diagnose cancers by identifying tumors that doctors may overlook.
Such systems are also used in surgical devices. In June 2022, a surgeon inserted a small balloon into Erin Ralph's sinus cavity at a hospital in Fort Worth, Texas. According to a lawsuit filed by Ralph, Dr. Marc Dean was employing the TruDi Navigation System, which uses AI, to confirm the position of his instruments inside her head.
The procedure, known as a sinuplasty, is a minimally invasive technique to treat chronic sinusitis. A balloon is inflated to enlarge the sinus cavity opening, to allow better drainage and relieve inflammation.
But the TruDi system "misled and misdirected" Dean, according to the lawsuit Ralph filed in Dallas County District Court against Acclarent and other defendants. A carotid artery – which supplies blood to the brain, face and neck – allegedly was injured, leading to a blood clot. According to a court filing, Ralph's lawyer told a judge that Dean's own records showed he "had no idea he was anywhere near the carotid artery." Reuters wasn't able to review the records, which are subject to a judicial protective order.
After Ralph left the hospital, it became apparent that she had suffered a stroke. The mother of four returned and spent five days in intensive care, according to a GoFundMe fundraising drive that was organized to support her recovery. A section of her skull was removed "to allow her brain room to swell," the GoFundMe appeal stated.
"I am still working in therapy," Ralph said in an interview more than a year later in a blog about stroke victims. "It is hard to walk without a brace and to get my left arm back working, again."
In May 2023, Dean was using TruDi in another sinuplasty operation when patient Donna Fernihough's carotid artery allegedly "blew." Blood "was spraying all over" – even landing on an Acclarent representative who was observing the surgery, according to a lawsuit Fernihough filed in US District Court in Fort Worth against Acclarent and several manufacturers. One of Fernihough's carotid arteries was damaged. She suffered a stroke the day of the surgery, according to her suit.
Acclarent "knew or should have known that the purported artificial intelligence caused or exacerbated the tendency of the integrated navigation system product to be inconsistent, inaccurate, and unreliable," the suit alleges.
Acclarent has denied the allegations in both suits, which are ongoing, and says in court filings that it did not design or manufacture the TruDi system but only distributed it. Acclarent's owner, Integra LifeSciences, told Reuters there's no evidence of a link between the AI technology and any alleged injuries.
Dean began consulting for Acclarent in 2014 and received more than $550,000 in consultant's fees from the company through 2024, according to Open Payments, a federal database that tracks financial ties between companies and physicians. At least $135,000 of those fees related to the TruDi system.
An attorney for Dean said the doctor couldn't comment due to patient privacy and ongoing litigation. Integra said Dean is no longer a TruDi consultant and that payments made to him after it acquired Acclarent were for meals.
In 2021, Acclarent's president at the time, Jeff Hopkins, was pushing to put AI in TruDi "as a marketing tool" to claim that the device "had new and novel technology," Fernihough's suit alleges.
The TruDi software uses machine learning to identify specific segments of a patient's anatomy and calculate "the shortest, valid path between two points specified by the physician," according to an Acclarent post on LinkedIn. The technology is designed to simplify surgical planning and provide real-time feedback during procedures such as sinus operations.
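Acclarent hasn't detailed how that path calculation works. As a rough illustration only, finding "the shortest, valid path between two points" in scan data is a classic graph-search problem; the sketch below applies Dijkstra's algorithm to a small 3D voxel grid, treating voxels marked invalid (say, anatomy a tool can't pass through) as blocked. The function, the grid and its labels are hypothetical stand-ins, not Acclarent's software.

```python
import heapq

def shortest_valid_path(grid, start, goal):
    """Dijkstra's algorithm over a 3D voxel grid.

    grid[z][y][x] is True where passage is allowed (a hypothetical
    stand-in for anatomy labeled traversable). Returns the list of
    voxels from start to goal, or None if no valid path exists.
    """
    dz, dy, dx = len(grid), len(grid[0]), len(grid[0][0])
    # Six-connected neighborhood: one step along each axis.
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
             (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    dist = {start: 0.0}
    prev = {}
    frontier = [(0.0, start)]
    while frontier:
        d, node = heapq.heappop(frontier)
        if node == goal:
            path = [node]                 # walk predecessors back to start
            while node in prev:
                node = prev[node]
                path.append(node)
            return path[::-1]
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry
        z, y, x = node
        for sz, sy, sx in steps:
            nz, ny, nx = z + sz, y + sy, x + sx
            if not (0 <= nz < dz and 0 <= ny < dy and 0 <= nx < dx):
                continue                  # off the grid
            if not grid[nz][ny][nx]:
                continue                  # blocked voxel
            nd = d + 1.0                  # uniform step cost
            if nd < dist.get((nz, ny, nx), float("inf")):
                dist[(nz, ny, nx)] = nd
                prev[(nz, ny, nx)] = node
                heapq.heappush(frontier, (nd, (nz, ny, nx)))
    return None
```

A production navigation system would rely on a far richer cost model and anatomy segmentation; the point is only that "shortest valid path" describes a well-understood search technique, separate from the machine-learning step that labels the anatomy.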
Acclarent officials had approached Dean about the plan to add AI, the Fernihough suit states. The surgeon warned Hopkins and Acclarent "that there were issues that needed to be resolved," the complaint adds. Despite that warning, the suit claims, Acclarent "lowered its safety standards to rush the new technology to market," and set "as a goal only 80% accuracy for some of this new technology before integrating it into the TruDi Navigation System."
Reuters couldn't establish whether Dean issued the warning. Reporters were unable to review material submitted in support of Fernihough's claims, which is subject to a judicial protective order.
Hopkins, the former Acclarent president, did not respond to a request for comment.
'Wrong body parts'
The FDA cautions that reports of adverse events and device malfunctions are limited: They often lack detail, are redacted to protect trade secrets, and can't be used alone to place blame. The agency also sometimes receives multiple reports for a single incident.
Reuters found that at least 1,401 of the reports filed to the FDA between 2021 and October 2025 concern medical devices that are on an FDA list of 1,357 products that use AI. The agency says the list isn't comprehensive. Of those reports, at least 115 mention problems with software, algorithms or programming.
One FDA report in June 2025 alleged that AI software used for prenatal ultrasounds was misidentifying fetal body parts. Called Sonio Detect, it uses machine learning techniques to help analyze fetal images.
"Sonio detect software ai algorithm is faulty and wrongly labels fetal structures and associates them with the wrong body parts," stated the report, which does not say that any patient was harmed. Sonio Detect is owned by Samsung Medison, a unit of Samsung Electronics. Samsung Medison said the FDA report about Sonio Detect "does not indicate any safety issue, nor has the FDA requested any action from Sonio."
The HHS spokesperson didn't respond to questions about Sonio Detect.
At least 16 reports claimed that AI-assisted heart monitors made by medical-device giant Medtronic failed to recognize abnormal rhythms or pauses. None of the reports mentioned injuries. Medtronic told the FDA that some of the incidents were caused by "user confusion."
The AI algorithms in Medtronic's LINQ series of implantable cardiac monitors are described as "deep learning artificial intelligence." They have greatly reduced false alerts while retaining true alerts of heart events, according to the company's website. But the company also says on its site and in product literature that its AI technology, AccuRhythm AI, can misclassify actual abnormal heart rhythms or pauses.
Medtronic told Reuters that it reviewed all 16 episodes and concluded its device missed only one abnormal heart-rhythm event. "None of these reports resulted in patient harm," it said. Medtronic said some of the incidents were related to problems with data display, not the AI technology. It declined to explain fully what went wrong in each incident.
The HHS spokesperson said the agency doesn't discuss possible or ongoing compliance matters.
FDA cutbacks under Trump
In interviews, five current and former FDA scientists who reviewed AI-powered medical devices told Reuters that federal regulators are now less equipped to handle the flood of new ones.
About four years ago, the FDA expanded its roster of scientists who specialize in AI, particularly for reviewing medical imaging and radiology devices that use the technology. Many recruits were stationed in the Division of Imaging, Diagnostics and Software Reliability (DIDSR). The unit became the agency's key resource for assessing the safety of AI in medicine, one current and two former FDA employees told Reuters. It grew to about 40 people early last year.
"Some senior regulators have no idea how these technologies work," one ex-employee said. "We sat closely with senior regulators and explained to them why we think this technology is safe or not safe to use in the market."
It wasn't easy to lure top talent to government service. Recruiting computer scientists often required persuading them to turn down higher pay in the private sector.
In their work, scientists tried to "break" the devices' AI models, a former employee said. They would test a device's algorithms in a variety of clinical situations and check whether the AI's performance deteriorated over time. They also sought to minimize "hallucinations," in which AI models sometimes generate false information, FDA officials wrote in a paper published in October.
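The agency hasn't published its reviewers' test harnesses. As a generic illustration of one such check, the sketch below measures a model's accuracy on evaluation batches grouped by time period and flags any period that falls more than a set tolerance below the earliest baseline; every name and number is a hypothetical placeholder, not the FDA's actual procedure.

```python
from statistics import mean

def flag_performance_drift(batches, tolerance=0.05):
    """Flag time periods where model accuracy has slipped.

    batches: chronological list of (period_label, [(prediction, truth), ...])
    entries -- hypothetical evaluation data. Returns the baseline accuracy
    and the periods falling more than `tolerance` below it.
    """
    def accuracy(pairs):
        return mean(1.0 if p == t else 0.0 for p, t in pairs)

    baseline = accuracy(batches[0][1])
    flagged = []
    for label, pairs in batches[1:]:
        acc = accuracy(pairs)
        if baseline - acc > tolerance:
            flagged.append((label, acc))
    return baseline, flagged

# Hypothetical example: accuracy slipping in a later quarter.
data = [
    ("2023-Q1", [(1, 1)] * 95 + [(0, 1)] * 5),   # 95% correct
    ("2023-Q2", [(1, 1)] * 93 + [(0, 1)] * 7),   # 93% -- within tolerance
    ("2023-Q3", [(1, 1)] * 85 + [(0, 1)] * 15),  # 85% -- flagged
]
baseline, drift = flag_performance_drift(data)
print(f"baseline={baseline:.2f}, flagged={drift}")
```

Real-world checks would also stratify results by patient population and equipment type, since a model can hold steady overall while quietly failing for a subgroup.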
But early last year, the Trump administration began to dismantle the AI team as part of Elon Musk's cost-cutting campaign, the Department of Government Efficiency, or DOGE. About 15 of the 40 AI scientists in the DIDSR unit were laid off or opted to go, the FDA insiders said. Another unit that crafted policy on devices using AI, the Digital Health Center of Excellence, lost about a third of its staff of around 30.
Andrew Nixon, the HHS spokesperson, said the FDA is applying the same rigorous standards to medical devices aided by machine learning and other AI as it would to any product.
"Patient safety is the FDA's highest priority and is at the forefront of our work to protect and promote the public health," Nixon said. "The FDA sees tremendous promise in the digital health space," including devices enabled with AI and machine learning, "to help diagnose and treat a range of conditions." He said the FDA continues to recruit and develop talent with expertise in digital health, artificial intelligence and other emerging technologies.
Since the cuts, the workload has nearly doubled for some device reviewers, said two ex-employees. "If you don't have the resources, things are more likely to be missed," said a former device reviewer who left last year.
The FDA requires clinical trials for new drugs, but medical devices face different screening. Under FDA rules, most AI-enabled devices coming to market aren't required to be tested on patients. Instead, makers can clear the agency's review by citing previously authorized devices that had no AI-related capabilities, says Dr. Alexander Everhart, an instructor at Washington University's medical school in St. Louis and an expert on medical device regulation.
Positioning new devices as updates on existing ones is a long-established practice, but Everhart says AI brings new uncertainty to the status quo.
"I think the FDA's traditional approach to regulating medical devices is not up to the task of ensuring AI-enabled technologies are safe and effective," Everhart told Reuters. "We're relying on manufacturers to do a good job at putting products out. I don't know what's in place at the FDA represents meaningful guardrails."
