Rethinking Personalized Medicine: The Limits of AI in Clinical Trials

Summary: New research reveals limitations in the current use of mathematical models in personalized medicine, particularly in the treatment of schizophrenia. Although these models can predict patient outcomes within a particular clinical trial, they fail when applied to different trials, raising questions about the reliability of AI-driven algorithms across settings.

This study shows that algorithms need to demonstrate effectiveness in multiple settings before they can be truly trusted. The findings underscore the large gap between the promise of personalized medicine and its current implementation, especially given the variability between clinical trials and real-world clinical practice.

Key facts:

  1. The mathematical models currently used for personalized medicine are valid within a specific clinical trial, but cannot be generalized across different clinical trials.
  2. This research raises concerns about the application of AI and machine learning to personalized medicine, particularly in diseases like schizophrenia, where treatment responses vary widely from person to person.
  3. This study suggests that more comprehensive data sharing and the incorporation of additional environmental variables could improve the reliability and accuracy of AI algorithms in healthcare.

Source: Yale University

The quest for personalized medicine, a medical approach in which healthcare professionals use a patient’s unique genetic profile to tailor individual treatment, has emerged as an important goal for the medical field. But a new study led by Yale University shows that the mathematical models currently used to predict treatment outcomes have limited effectiveness.

In an analysis of multiple clinical trials of schizophrenia treatments, researchers found that mathematical algorithms could predict outcomes for patients within the specific trial in which they were developed, but not for patients participating in different trials.

The findings were published Jan. 11 in the journal Science.

“This study truly challenges the status quo in algorithm development and raises the bar for the future,” said Adam Chekroud, adjunct assistant professor of psychiatry at Yale School of Medicine and corresponding author of the paper. “At this point, I think we need to see an algorithm work in at least two different settings before we get really excited.”

“I’m still optimistic. But as a medical researcher, there are some important things to figure out,” he added.

Chekroud is also president and co-founder of Spring Health, a private company that provides mental health services.

Schizophrenia is a complex brain disease that affects about 1% of the U.S. population, making it a prime example of the need for more personalized treatments, researchers say. As many as 50% of patients diagnosed with schizophrenia do not respond to the first antipsychotic medication prescribed, but it is currently impossible to predict which patients will respond to treatment and which will not.

Researchers hope that new technologies using machine learning and artificial intelligence can produce algorithms that better predict which treatments will be effective for which patients, helping to improve outcomes and lower treatment costs.

However, because clinical trials are costly to conduct, most algorithms are developed and tested using only a single trial. Researchers had nonetheless hoped that these algorithms would still work when applied to patients with similar profiles who received similar treatments.

In the new study, Chekroud and colleagues at Yale University wanted to test whether this hope holds up. To that end, they aggregated data from five clinical trials of schizophrenia treatments made available through the Yale University Open Data Access (YODA) project, which advocates and supports the responsible sharing of clinical research data.

In most cases, the algorithms effectively predicted patient outcomes within the clinical trial in which they were developed. However, they could not effectively predict outcomes for schizophrenia patients treated in other clinical trials.

“The algorithms worked for the most part from the beginning,” Chekroud says. “But when we tested them in patients from other trials, their predictive value was no better than chance.”
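The failure mode Chekroud describes, a model that fits one trial's idiosyncrasies and then falls to chance in another, can be illustrated with a deliberately simplified simulation. Everything below is hypothetical (the data generator, the one-variable threshold rule, the reversed effect in the second trial) and is not the study's actual data or method:

```python
import random

random.seed(7)

def simulate_trial(n, effect):
    # One baseline symptom score per simulated patient; `effect` controls
    # how that score relates to treatment response in this trial's context.
    data = []
    for _ in range(n):
        score = random.gauss(0, 1)
        responds = 1 if (effect * score + random.gauss(0, 0.5)) > 0 else 0
        data.append((score, responds))
    return data

def fit_rule(trial):
    # Fit the simplest possible "model": predict response from the sign of
    # the score, choosing whichever direction fits this trial best.
    correct_if_positive = sum(r if s > 0 else 1 - r for s, r in trial)
    if correct_if_positive >= len(trial) / 2:
        return lambda s: int(s > 0)
    return lambda s: int(s <= 0)

def accuracy(rule, trial):
    return sum(rule(s) == r for s, r in trial) / len(trial)

trial_a = simulate_trial(500, effect=+1.0)  # score predicts response here
trial_b = simulate_trial(500, effect=-1.0)  # same variable, relationship reversed

rule = fit_rule(trial_a)
print(f"within trial A: {accuracy(rule, trial_a):.2f}")  # well above chance
print(f"on trial B:     {accuracy(rule, trial_b):.2f}")  # at or below chance
```

Because the score-response relationship differs between the two simulated contexts, a rule that looks strong in trial A carries no useful signal in trial B, mirroring the context dependence the study reports.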

The problem, Chekroud said, is that most of the mathematical algorithms used by medical researchers are designed for much larger data sets. Because clinical trials are expensive and time-consuming to conduct, a typical study enrolls fewer than 1,000 patients.

Applying powerful AI tools to such small datasets can result in “overfitting,” he said: the model learns response patterns that are unique to the initial training data but that disappear when additional new data are included.
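Overfitting on a small sample can be demonstrated directly. In this hypothetical sketch (not the study's model), a memorizing classifier scores perfectly on a small training "trial" made of pure noise, then drops to chance on independent data:

```python
import random

random.seed(0)

def make_trial(n_patients, n_features):
    # Pure noise: random features and random binary outcomes, so there is
    # no true signal for any model to find.
    X = [[random.gauss(0, 1) for _ in range(n_features)] for _ in range(n_patients)]
    y = [random.choice([0, 1]) for _ in range(n_patients)]
    return X, y

def predict(train_X, train_y, x):
    # 1-nearest-neighbour: effectively memorises the training set.
    dists = [sum((a - b) ** 2 for a, b in zip(row, x)) for row in train_X]
    return train_y[dists.index(min(dists))]

def accuracy(train, test):
    train_X, train_y = train
    test_X, test_y = test
    hits = sum(predict(train_X, train_y, x) == t for x, t in zip(test_X, test_y))
    return hits / len(test_y)

small_trial = make_trial(60, 20)   # few patients, many variables
other_trial = make_trial(200, 20)  # an independent "trial"

print(f"within-trial accuracy: {accuracy(small_trial, small_trial):.2f}")  # 1.00
print(f"out-of-trial accuracy: {accuracy(small_trial, other_trial):.2f}")  # near 0.50
```

The within-trial score is perfect only because the model memorized its own data; with no real signal present, the out-of-trial score collapses to chance.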

“The reality is that you need to think about algorithm development the same way you would develop a new drug,” he says. “To really believe in an algorithm, you need to see it work across multiple different times and situations.”

In the future, incorporating additional environmental variables might improve the algorithms’ success in analyzing clinical trial data, the researchers added. For example, does the patient abuse drugs, or do they receive personal support from family and friends? These are the kinds of factors that can influence the outcome of treatment.

To increase their likelihood of success, most clinical trials use precise criteria, such as guidelines for which patients to include (or exclude), careful measurement of outcomes, and limits on the number of treating physicians. In the real world, however, patients are much more diverse, and the quality and consistency of treatment vary widely, the researchers say.

“In theory, clinical trials should be the setting where algorithms work most easily. If algorithms cannot generalize from one clinical trial to another, using them in clinical practice will be even more difficult,” said co-author John Krystal, the Robert L. McNeil Jr. Professor of Translational Research in Psychiatry, Neuroscience, and Psychology at Yale School of Medicine. Krystal is also chair of the psychiatry department at Yale.

Chekroud suggested that greater data sharing among researchers, along with the storage of additional data by large healthcare providers, could help improve the reliability and accuracy of AI-driven algorithms.

“Although this study deals with clinical trials in schizophrenia, it raises difficult questions about personalized medicine more broadly, including its application to cardiovascular disease and cancer,” said study co-author Philip Corlett, associate professor of psychiatry at Yale.

The study’s other Yale authors are Hieronimus Loho; Ralitza Gueorguieva, senior research fellow at Yale School of Public Health; and Harlan M. Krumholz, the Harold H. Hines Jr. Professor of Medicine (Cardiology) at Yale.

About this AI/personalized medicine research news

Author: Beth Connolly
Source: Yale University
Contact: Beth Connolly – Yale University
Image: The image is credited to Neuroscience News

Original research: Closed access.
“Illusory generalizability of clinical prediction models” by Adam Chekroud et al. Science


Abstract

Illusory generalizability of clinical prediction models

Statistical models are widely expected to improve healthcare decision-making. Because of the cost and scarcity of medical outcomes data, this expectation is typically based on researchers observing model success in one or two datasets or clinical situations.

We put this optimism under scrutiny by investigating how well machine learning models performed in several independent clinical trials of antipsychotic drugs for schizophrenia.

The model predicted patient outcomes with high accuracy within the study in which it was developed, but yielded results comparable to chance when applied outside the sample. Pooling data across trials to predict the outcome of excluded trials did not improve prediction.

These results suggest that models predicting treatment outcome in schizophrenia are highly context-dependent and may have limited generalizability.
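The pooled, leave-one-trial-out evaluation the abstract describes can be sketched as follows. The data and the "model" (a trivial majority-class predictor) are purely illustrative, not the study's:

```python
import random

random.seed(3)

def make_trial(n, response_rate):
    # Binary outcomes only; each simulated trial has its own response rate.
    return [1 if random.random() < response_rate else 0 for _ in range(n)]

trials = {
    "trial_A": make_trial(300, 0.7),
    "trial_B": make_trial(300, 0.3),
    "trial_C": make_trial(300, 0.5),
}

results = {}
for held_out, test_y in trials.items():
    # Pool every trial except the held-out one, "fit" the trivial model
    # (predict the pooled majority class), then score on the held-out trial.
    pooled = [y for name, ys in trials.items() if name != held_out for y in ys]
    majority = int(sum(pooled) >= len(pooled) / 2)
    results[held_out] = sum(majority == y for y in test_y) / len(test_y)

for name, acc in results.items():
    print(f"held out {name}: accuracy {acc:.2f}")
```

When the trials differ systematically (here, in their response rates), pooling the remaining trials does not rescue prediction on the held-out one, echoing the abstract's finding that pooling data across trials did not improve out-of-trial prediction.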