Imagine being able to diagnose a brain tumor in minutes instead of hours—artificial intelligence could make that a reality, according to a new study.
The study, published Wednesday in the journal Nature by researchers from the Oncode Institute, the Center for Molecular Medicine, the Princess Máxima Center for Pediatric Oncology, and the Department of Neurosurgery at Amsterdam University Medical Centers, introduces an AI-driven approach aimed at changing how surgeons diagnose and remove tumors.
Sturgeon is a neural network model that uses nanopore sequencing data to rapidly diagnose central nervous system (CNS) tumors. A neural network is a type of AI designed to loosely mimic the way the human brain processes information, using layered algorithms to recognize patterns in data. The researchers said the name follows a tradition of naming nanopore software after fish, and that it also sounds like "surgeon."
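For readers curious what such a classifier looks like in code, here is a minimal, purely illustrative sketch in Python: a small feed-forward network that maps sparse methylation calls from a sequencing run to a tumor-class prediction. The site count, class count, and architecture are hypothetical assumptions chosen for illustration, not the authors' actual Sturgeon implementation.

```python
# Illustrative sketch only: a tiny feed-forward classifier over binary
# methylation-like features, loosely mirroring the kind of model described
# in the study. Feature counts, class counts, and layer sizes are
# hypothetical and do not reflect the real Sturgeon model.
import torch
import torch.nn as nn

NUM_SITES = 10_000     # hypothetical number of methylation sites used as input features
NUM_CLASSES = 40       # hypothetical number of CNS tumor types/subtypes

class TumorClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_SITES, 256),
            nn.ReLU(),
            nn.Linear(256, NUM_CLASSES),
        )

    def forward(self, x):
        # x: batch of methylation calls (1 = methylated, 0 = unmethylated or
        # not yet observed early in a sequencing run)
        return self.net(x)

model = TumorClassifier()

# Simulate a sparse profile: only a few hundred sites observed so far.
sample = torch.zeros(1, NUM_SITES)
sample[0, torch.randint(0, NUM_SITES, (300,))] = 1.0

probs = torch.softmax(model(sample), dim=-1)
print("Predicted class:", probs.argmax(dim=-1).item(),
      "confidence:", round(probs.max().item(), 3))
```

In practice, an untrained network like this would output near-random guesses; the point of the sketch is simply to show how sparse sequencing-derived features can be fed to a classifier that returns a tumor-class prediction with a confidence score.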
A central nervous system tumor is an abnormal growth of cells within the brain or spinal cord. According to the American Society of Clinical Oncology (ASCO), a CNS tumor can be benign (noncancerous) or malignant (cancerous), and malignant tumors can grow quickly and spread to other parts of the body. Either way, such a tumor requires medical attention; as the researchers note, CNS tumors "represent one of the most lethal cancer types, particularly among children."
The researchers said Sturgeon delivered an accurate diagnosis within 40 minutes of starting sequencing in 45 of the 50 samples tested, or 90% of the time, regardless of the individual patient's medical data.
“We conclude that machine-learned diagnosis based on low-cost intraoperative sequencing can assist neurosurgical decision-making, potentially preventing neurological comorbidity and avoiding additional surgeries,” the team said.
Dr. Gabriel Zada, a neurosurgery specialist at Keck Medicine of USC, called the Sturgeon study highly innovative and said he sees the use of generative AI only growing in the medical field.
“The problem [the Sturgeon researchers] are trying to solve is that surgeons don't have real-time or rapid knowledge of diagnoses of tumors during cases,” Zada told Decrypt in an interview. “Not just tumors, but subtypes of tumors which we increasingly are focusing attention and priority on—molecular markers of tumors. The classification of tumors is becoming driven by these genetic and molecular markers.”
Keck Medicine of USC is a comprehensive academic medical center located in Los Angeles, CA, that is affiliated with the University of Southern California. It offers patient care, research, and education through its network of hospitals, clinics, and physicians.
As Zada explained, examining tumors is time-consuming, taking up to an hour in some cases, and may not result in proper identification of the tumor—and, more importantly, its genetic subtype, which has implications for how the surgery or treatment is performed.
“Should the tumor be fully removed or just biopsied? Or are there any other techniques we can use intraoperatively to treat that while we're there?” Zada said. “It's becoming increasingly important that we know the tumor type and subtype during surgery.”
Rapid advances in artificial intelligence have led to a wide range of applications, including in healthcare and the diagnosis of illness. While experts caution against using chatbots like ChatGPT or Google Bard to self-diagnose, Zada said Keck and other medical and academic institutions are turning more and more to AI, with tools used across the board before, during, and after surgery.
The Sturgeon team is just one of the groups attempting to leverage artificial intelligence for cancer treatment and diagnosis. In August, UK-based biotech startup Etcembly said it used generative AI to design an immunotherapy that targets cancer. That same month, Google teamed up with medical technology company iCAD to develop an AI model that can detect breast cancer using Google's deep learning technology.
Earlier this year, researchers at the University of Texas and the University of Massachusetts used large language models to extract and apply knowledge from medical research texts, leading to the creation of CancerGPT, which aims to predict how drug combinations affect tissues in cancer patients.
As AI models improve, Zada said, they will have significant implications for how scans are interpreted, as well as for automating that analysis, including predicting tumor types and subtypes from imaging. He added that AI can at times give doctors a glimpse of the potential outcome of a treatment.
“One thing we're doing, especially here at USC in conjunction with Caltech, is using AI and computer vision analysis to analyze surgical video in near real-time,” he said. “So as we train these algorithms, we can extract very useful information from the almost live surgical video once our algorithms have been trained appropriately.”
While tools like Sturgeon won't replace pathologists anytime soon, Zada said, they will be useful complements to what surgeons and pathologists are already doing. He emphasized that improvements to AI models will lessen the risk of AI "hallucinations," or instances in which models generate false information.
“This is kind of a proof of principle or feasibility study,” Zada said. “They're showing early results that there are certainly concerns about. But one could imagine that as their methodology improves, their training set increases in numbers, and tumor types and subtypes are added into the algorithm, the diagnostic accuracy will only improve.”
While he is optimistic about using artificial intelligence in the medical field, Zada cautioned that AI is not a silver bullet that removes the risk associated with these complex surgeries.
“The complexity of the surgery doesn't change; the risks don't necessarily change,” he said. “But one could imagine that if you get some information that augments the surgical approach, it could one day conceivably make something safer if you have additional information that would change what you do.”