New AI Model Draws Treasure Maps to Diagnose Disease

Image:

Beckman Institute researchers led by Mark Anastasio (right) and Sourya Sengupta have developed an artificial intelligence model that can accurately identify tumors and diseases in medical images. The tool draws a map to explain each diagnosis, helping doctors follow its reasoning, check its accuracy, and explain results to patients.


Credit: Jenna Kurtzweil, Beckman Institute Communications Office

Medical diagnostic specialist, physician’s assistant, and cartographer are all legitimate titles for the artificial intelligence model developed by researchers at the Beckman Institute for Advanced Science and Technology.

Their new model is trained to accurately identify tumors and diseases in medical images and to explain each diagnosis with a visual map. The tool's unique transparency allows physicians to easily follow its reasoning, double-check its accuracy, and explain results to patients.

“The idea is to help catch cancers and diseases at an early stage, like an X on a map, and to understand how the decision was made. Our model streamlines that process and makes it easier for both doctors and patients,” said Sourya Sengupta, the study’s lead author and a graduate research assistant at the Beckman Institute.

The research appeared in IEEE Transactions on Medical Imaging.

Cats, dogs, onions, and ogres

First conceptualized in the 1950s, artificial intelligence (the concept that computers can learn to adapt, analyze, and problem-solve in the same way humans do) has become a household idea thanks to ChatGPT and its extended family of easy-to-use tools.

Machine learning (ML) is one of the many methods researchers use to create artificial intelligence systems. ML is to AI what driver’s education is to a 15-year-old: a controlled, supervised environment in which to practice making decisions, adjusting to new surroundings, and rerouting after a mistake or wrong turn.

Deep learning (machine learning’s smarter, more worldly cousin) can digest large amounts of information to make more nuanced decisions. Deep learning models derive their decisive power from deep neural networks, the closest computer simulation to the human brain.

These networks, like humans, onions, and ogres, have layers, which makes them difficult to navigate. The thicker a network’s layers, and the more nonlinear it is, the better it can perform complex, human-like tasks.

Consider a neural network trained to distinguish between pictures of cats and pictures of dogs. The model learns by looking at images in each category and filing away their features (size, color, anatomy, etc.) for future reference. Eventually, the model learns to watch out for whiskers and to call out “Doberman” at the first sign of a floppy tongue.

But deep neural networks aren’t foolproof. They can behave much like an overeager toddler, says Sengupta, who studies biomedical imaging in the Department of Electrical and Computer Engineering at the University of Illinois Urbana-Champaign.

“They may get it right sometimes or most of the time, but not always for the right reasons,” he says. “I think we all know a kid who once saw a brown four-legged dog and thought all brown four-legged animals were dogs.”

Sengupta’s frustration? If you ask a toddler how they made their decision, they’ll probably tell you.

“But you can’t ask a deep neural network how it arrived at its answer,” he said.

The black box problem

Despite being sophisticated, skilled, and fast, deep neural networks have a hard time mastering a skill drilled into high school calculus students: showing their work. This is known as the black box problem of artificial intelligence, and it has puzzled scientists for years.

On the surface, coaxing a confession out of a reluctant network that mistook a Pomeranian for a cat seems remarkably low-stakes. But the more life-changing the image in question, the greater the gravity of the black box becomes. For example: an X-ray from a mammogram, which can show early signs of breast cancer.

The process of decoding medical images varies depending on the region of the world.

“Many developing countries have a shortage of doctors and long queues of patients. AI can help in such scenarios,” Sengupta said.

Sengupta said automated medical image screening can be deployed as an adjunct tool when time and human resources are in high demand, but it can never replace a doctor’s skills and expertise. Instead, AI models can proactively scan medical images and flag images containing anything unusual (such as tumors or early signs of disease called biomarkers) for review by a doctor. This method saves time and can also improve the performance of those tasked with reading the scans.

While these models work well, their bedside manner leaves something to be desired when, for example, a patient asks why an AI system flagged an image as containing (or not containing) a tumor.

Historically, researchers have answered questions like this with a host of tools designed to decipher the black box from the outside in. Unfortunately, researchers who use them often find themselves in much the same predicament as an eavesdropper outside a locked door, pressing an empty glass to the wood and straining to hear the conversation inside.

“It’s much easier if you can open the door, walk into the room, and hear the conversation directly,” Sengupta said.

To further complicate matters, many variations of these interpretive tools exist. This means that any given black box can be interpreted in “plausible but different” ways, Sengupta said.

“So the question is, which interpretation do you believe?” he said. “Your choices can be influenced by subjective biases, and that’s where the main problem with traditional methods lies.”

Sengupta’s solution? An entirely new type of AI model that interprets itself every time, explaining each decision rather than blandly reporting a “tumor or non-tumor” verdict.

In other words, since the door is gone, there is no need for a water glass.

Model mapping

Just as a yogi learning a new posture must practice it repeatedly, an AI model trained to differentiate between cats and dogs must study countless images of both quadrupeds.

An AI model acting as a physician’s assistant is trained on thousands of medical images, some with abnormalities and some without. When faced with an image it has never seen before, it performs a quick analysis and outputs a number between 0 and 1. If the number is less than 0.5, the image is not considered to contain a tumor; a number greater than 0.5 warrants further investigation.
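As a rough illustration of that thresholding step, the minimal Python sketch below (hypothetical code, not the published model) turns a classifier’s raw output into a score between 0 and 1 and flags anything above 0.5 for a physician’s review.

    import math

    def sigmoid(raw_score: float) -> float:
        # Squash an unbounded network output into the (0, 1) range.
        return 1.0 / (1.0 + math.exp(-raw_score))

    def screen_image(raw_score: float, threshold: float = 0.5) -> str:
        # Hypothetical screening rule matching the description above:
        # scores above the threshold are flagged for a doctor to review.
        score = sigmoid(raw_score)
        return "flag for physician review" if score > threshold else "no abnormality predicted"

    print(screen_image(1.7))   # score of about 0.85 -> flag for physician review
    print(screen_image(-2.0))  # score of about 0.12 -> no abnormality predicted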

Sengupta’s new AI model mimics this setup with a twist: the model produces the value along with a visual map that explains its decision.

This map (researchers call it an equivalency map, or E-map for short) is essentially a transformed version of the original X-ray, mammogram, or other medical image. Like a paint-by-numbers canvas, each region of the E-map is assigned a number. The higher the value, the more medically interesting the region is for predicting the presence of an abnormality. The model sums these values to produce the final number, which then informs the diagnosis.

“For example, if the sum of the values equals 1, and three values are represented on the map (0.5, 0.3, and 0.2), a clinician can see exactly which regions on the map contributed most to that conclusion and investigate those more fully,” Sengupta said.
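To make the arithmetic in that example concrete, the short sketch below (again hypothetical, not the published E-map algorithm) sums per-region contributions into a single decision value and ranks the regions by how much each one contributed.

    from typing import Dict

    def summarize_emap(region_values: Dict[str, float], threshold: float = 0.5) -> None:
        # Sum the per-region values into the final decision value, as described above.
        total = sum(region_values.values())
        decision = "abnormality predicted" if total > threshold else "no abnormality predicted"
        print(f"decision value = {total:.2f} -> {decision}")
        # Rank the regions so a clinician can inspect the most influential areas first.
        for region, value in sorted(region_values.items(), key=lambda kv: kv[1], reverse=True):
            print(f"  {region}: {value:.2f} ({value / total:.0%} of the total)")

    # Hypothetical region values matching the quote: 0.5 + 0.3 + 0.2 = 1.0.
    summarize_emap({"region A": 0.5, "region B": 0.3, "region C": 0.2})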

This way, doctors can double-check how well deep neural networks are working (much like a teacher checking a student’s work on a math problem). They can also answer patient questions about the process.

“The result is a more transparent and trusted system between doctors and patients,” Sengupta said.

X marks the spot

The researchers trained the model on three different disease diagnosis tasks, spanning a total of more than 20,000 images.

First, the model reviewed simulated mammograms and learned to flag early signs of tumors. Next, it practiced analyzing optical coherence tomography images of the retina, identifying deposits called drusen that can be early signs of macular degeneration. Third, the model studied chest X-rays and learned to detect cardiomegaly, an enlargement of the heart that can lead to disease.

Once the map-making model was trained, the researchers compared its performance to that of existing black-box AI systems (systems without a self-interpretation capability). The new model performed comparably in all three categories, with accuracy rates of 77.8% for mammograms, 99.1% for retinal OCT images, and 83% for chest X-rays, against 77.8%, 99.1%, and 83.33% for the existing models.

These high accuracy rates are a product of the deep neural network, whose nonlinear layers mimic the nuance of human neurons.

To create such a complex system, researchers peeled back the proverbial onion and took inspiration from simpler, more easily interpreted linear neural networks.

“The question was: how can we leverage the concepts behind linear models to make nonlinear deep neural networks also interpretable in this way?” said the study’s lead researcher, Mark Anastasio, who is a Beckman Institute researcher, the Donald Biggar Willett Professor, and head of the Illinois Department of Bioengineering. “This research is a classic example of how basic ideas can lead to new solutions for cutting-edge AI models.”

Researchers hope that future models will be able to detect and diagnose abnormalities throughout the body, and even differentiate between them.

“We are excited that our tool will directly benefit society, not only by improving disease diagnosis but also by increasing trust and transparency between doctors and patients,” Anastasio said.


Editor’s note:

The paper associated with this work is titled “A test statistics estimation-based approach to establishing self-interpretable CNN-based binary classifiers” and can be accessed online at https://doi.org/10.1109/TMI.2023.3348699.

Mark Anastasio is also a professor in the Illinois Departments of Electrical and Computer Engineering, Computer Science, and Biomedical and Translational Sciences, and a member of the Coordinated Science Laboratory. Contact him at [email protected].

Media Contact: Jenna Kurtzweil, [email protected].