New AI Technology Is Poised to Become First Intraoperative GPS for Surgeons

In This Article

  • Massachusetts General Hospital researchers have used artificial intelligence (AI) to analyze surgical videos, predict upcoming procedural steps and identify deviations from the norm
  • The AI is being refined to provide surgeons with real-time intraoperative feedback and access to a collective surgical consciousness
  • Researchers are engaged with surgical societies, government bodies and nongovernmental organizations with the goal to educate the public, health care providers and the medical industry

Researchers at Massachusetts General Hospital have developed artificial intelligence (AI) capable of accurately identifying and predicting the steps of laparoscopic surgery and endoscopic procedures. Encouraged by the results, the team is driving advancements to allow development of this kind of AI technology in real time during operations.

"AI capability will provide surgeons access to a tool much like an intraoperative global positioning system (GPS), used for intraoperative navigation," says Bariatric and Gastroesophageal Surgeon Ozanan Meireles, MD, director of Mass General's Surgical Artificial Intelligence and Innovation Laboratory (SAIIL). AI will help validate the normal progression of routine surgery, alert surgical teams when deviations occur and provide access to a collective surgical consciousness.

Says Dr. Meireles: "Imagine you're playing chess. You have to play so many games and take so many different approaches to become proficient, but you can only learn one game at a time. Now, imagine if all the chess players in the world could share their collective knowledge with you through AI—the learning would be exponentially higher and faster. This is the concept of a collective consciousness. By using AI, surgeons could be sharing their individual experiences to generate a similar collective consciousness for surgery."

Mitigating the Risk of Adverse Events

Even the most experienced surgeon may never encounter the full range of rare and adverse events over the course of a career. Facing such an event for the first time, in the moment, poses a real challenge.

"Currently, the way we share knowledge with colleagues is through surgical meetings, scientific journals and morbidity/mortality conferences," says Dr. Meireles. "This helps surgeons enhance their cognitive skills based on others' experiences. However, we still lack meaningful repositories of visual experiences—even more so in real time during the operation."

He adds: "Furthermore, when an unexpected intraoperative adverse event occurs, a surgeon may not have the opportunity to consult with a colleague in real time."

Faced with such an event, a surgeon may make a real-time decision that could inadvertently result in injury to organs such as the bowel or a blood vessel. Adverse events have the potential to increase mortality, morbidity and length of stay, impacting patient quality of life and increasing procedure-related hospital costs.

Dr. Meireles says, "This brings us the opportunity to develop a system that would be able to compile all the information from different sources, different surgeons, different times, different experiences; to make inferences, predictions and offer solutions in real time during operations."

And that's what Dr. Meireles and his colleagues at SAIIL are working on.

"AI offers a powerful opportunity, allowing surgeons to contribute to, and benefit from, the knowledge of their colleagues worldwide by enhancing skill acquisition and cognitive performance," he says. "Access to this collective surgical consciousness can aid surgeons in mitigating intraoperative adverse events, improving patient care and reducing health care costs."

Applying AI to Surgical Videos (Investigating Temporal and Spatial Components)

Cameras and monitors are intrinsic to minimally invasive surgery. They give surgeons, assistants and all other viewers the same image in the same field of view for the entire course of the operation, and each operation generates large amounts of video data.

"The question becomes," says Dr. Meireles, "How can we leverage AI through computer vision to use this rich data to train machines?"

Collaborating with the Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory (MIT CSAIL), led by CSAIL Director and MIT professor Daniela Rus, PhD, Dr. Meireles and the SAIIL team—Daniel Hashimoto, MD, associate director of research, Guy Rosman, PhD, associate director of engineering, Thomas Ward, MD, surgical AI and innovation fellow, and Yutong Ban, PhD, postdoctoral research fellow—set out to push machine learning beyond the traditional use of computer vision in medicine, which interprets only independent, single still images.

They trained the algorithm to recognize and track sequences of images over time, spanning procedures that may last hours or more.

The SAIIL team has developed algorithms and labeling protocols for several laparoscopic and endoscopic procedures, including sleeve gastrectomy, laparoscopic cholecystectomy and per-oral endoscopic myotomy. They segmented intraoperative video into predetermined surgical steps specific to each operation; for a sleeve gastrectomy, for example, these run from port placement through stomach stapling to inspection of the sleeve staple line. SAIIL's machine learning techniques can assess the current state of the operation and, based on probabilities, anticipate the next series of events.
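The segmentation described above—per-frame step labels collapsed into contiguous surgical phases—can be sketched as follows. This is a hypothetical illustration, not SAIIL's implementation; the phase names loosely follow the sleeve gastrectomy example in the article.

```python
from dataclasses import dataclass

# Hypothetical phase vocabulary for a sleeve gastrectomy, loosely
# following the article's example (port placement ... staple-line
# inspection). Real protocols define more granular steps.
PHASES = [
    "port_placement",
    "dissection",
    "stomach_stapling",
    "staple_line_inspection",
]

@dataclass
class FrameAnnotation:
    """One video frame labeled with the surgical step it belongs to."""
    timestamp_s: float  # seconds from the start of the operation
    phase: str          # one of PHASES

def segment_video(annotations):
    """Collapse per-frame labels into contiguous (phase, start_s, end_s)
    segments, giving the step-by-step timeline of the operation."""
    segments = []
    for ann in annotations:
        if segments and segments[-1][0] == ann.phase:
            # Same phase as the previous frame: extend the segment.
            segments[-1] = (ann.phase, segments[-1][1], ann.timestamp_s)
        else:
            segments.append((ann.phase, ann.timestamp_s, ann.timestamp_s))
    return segments
```

In practice the per-frame labels would come from a temporal model over the video stream rather than manual annotation; the timeline structure is the same either way.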

"After 'viewing' several laparoscopic sleeve gastrectomy procedures, the algorithm was able to recognize the steps of the operation and extract quantitative surgical data from the videos with 85.6% accuracy," says Dr. Meireles. "The AI's interpretation closely mirrored the annotations of a fellowship-trained, board-certified surgeon."
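An accuracy figure like the 85.6% quoted above is typically computed frame by frame against expert annotations. A minimal sketch of that comparison, under the assumption of one predicted label per annotated frame:

```python
def frame_accuracy(predicted, annotated):
    """Fraction of frames where the model's phase label matches the
    surgeon's annotation. Both inputs are equal-length label sequences."""
    if len(predicted) != len(annotated):
        raise ValueError("sequences must cover the same frames")
    correct = sum(p == a for p, a in zip(predicted, annotated))
    return correct / len(predicted)
```

Published work in this area also reports per-phase metrics, since short phases can be swamped by long ones in a single frame-level number.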

SAIIL researchers demonstrated that, as an operation proceeds, the log probability of each surgical step can be plotted over time. This representation corresponds to the type of procedure and the individual surgeon performing it, creating a unique surgical fingerprint for each procedure. For normal, uneventful procedures performed by the same surgeon, fingerprints tend to look highly similar, heralding an uneventful course of operation.

Conversely, when an operation deviated from its expected path, the cumulative log probability at each frame allowed real-time recognition of the deviation and produced an altered surgical fingerprint that visually summarized potential areas of unexpected operative events.
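The fingerprint-and-deviation idea can be sketched numerically: accumulate the log probability the model assigns to each frame, then compare the resulting curve against a baseline from routine cases. This is an illustrative simplification (function names and the threshold are assumptions, not SAIIL's method):

```python
import math

def fingerprint(per_frame_probs):
    """Cumulative log probability over the course of an operation.
    per_frame_probs: the probability the model assigns to each observed
    frame, values in (0, 1]. Returns the running-sum curve."""
    total, curve = 0.0, []
    for p in per_frame_probs:
        total += math.log(p)
        curve.append(total)
    return curve

def deviation_frames(curve, baseline, tolerance=5.0):
    """Indices where this operation's fingerprint falls far below a
    baseline fingerprint from routine cases by the same surgeon."""
    return [
        i for i, (c, b) in enumerate(zip(curve, baseline))
        if b - c > tolerance
    ]
```

When the model's predictions become uncertain (low per-frame probabilities), the curve drops sharply relative to the baseline, which is the visual signature of a deviation.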

For example, when the AI encountered unknowns such as unexpected intra-abdominal adhesions during a routine case, its predictions became erratic, effectively distinguishing normal from abnormal surgeries. This capability has near-term applications in areas such as resident training, where the AI can enable a more efficient and focused review of surgical technique.

Real-time Interventions and Proactive Surgical Support

Working from this proof of concept, SAIIL's researchers are augmenting the machine learning repository with vast amounts of surgical data. This infusion of data will support an expanded set of surgical fingerprints based on patient-specific criteria and risk for adverse events.

"As we move forward, we want to train the AI to do interventions in the moment," says Dr. Meireles.

SAIIL is developing a technology-enabled operating room that will support these real-time interventions. The operating room will include a user interface that will provide ongoing feedback on surgery progress. If potential adverse events are predicted, the AI can be trained to trigger a series of steps. AI support might recommend mitigation steps or initiate a telemedicine consultation with a colleague who has more experience.
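The escalation logic described above—routine progress feedback by default, mitigation suggestions and a telemedicine consult when trouble is predicted—amounts to a decision rule over the model's risk estimate. A hypothetical sketch (the threshold and action names are assumptions for illustration):

```python
def interface_action(adverse_event_prob, threshold=0.8):
    """Decide what the operating-room interface should do for the
    current frame, given the model's predicted probability of an
    upcoming adverse event."""
    if adverse_event_prob < threshold:
        # Routine case: keep showing ongoing feedback on surgery progress.
        return "display_progress"
    # Predicted trouble: recommend mitigation steps and offer to
    # initiate a telemedicine consultation with a more experienced colleague.
    return "recommend_mitigation_and_consult"
```

In a real system the threshold would be tuned against the cost of false alarms, since frequent spurious alerts in the OR would quickly be ignored.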

The AI could also aid with logistics, such as time management protocols, informing the main OR desk about changes in the course of the operation to enhance resource utilization. AI could also anticipate the need for extra physical resources, such as instruments, blood products, etc.

Keeping Pace With Innovation

AI's promise is accompanied by significant challenges regarding its deployment. Current legal and health care policies can present barriers to meaningful implementation of AI. Realizing AI's potential requires measured decisions around:

  • Privacy and legal issues surrounding the collection of surgical videos
  • Ethics and public perception surrounding patient care delivery value-add and reimbursement policies

Dr. Meireles chairs the Society of American Gastrointestinal and Endoscopic Surgeons (SAGES) Artificial Intelligence Task Force committee. Along with his colleagues, he has been presenting and executing initiatives to standardize surgical video annotation, to educate practicing surgeons and other physicians through scientific meetings and publications, and to put forward standards for data collection, sharing and publication. These initiatives are designed to enhance surgeons' experience and improve patient care.

"For all of this to become reality, we need to have an enormous amount of data and an array of brilliant people working together to validate applications, rules and policies," says Dr. Meireles. "If we succeed, surgeons will have a valuable tool to enhance their cognitive powers and to improve patient care and outcomes."
