Global, collaborative effort leads to development of a world-first roadmap to guide meaningful application of machine learning in patient care
The potential of machine learning (ML) in health care is immense and growing, but few institutions have progressed to its use in patient care due to a wide range of factors, including lack of preparedness and cultural considerations. In a new perspective, machine learning experts from across the globe – including scientists, researchers, clinicians and ethicists – have developed a set of critical steps in a first-ever roadmap for how health-care systems can adopt machine learning into patient care. The paper was published on August 19 in Nature Medicine.
“The majority of ML solutions are currently being developed in silos, away from the real-world clinical problems and settings that these ML models will actually impact,” says senior author Dr. Anna Goldenberg, Senior Scientist in Genetics & Genome Biology and co-chair of Artificial Intelligence in Medicine for Kids (AIM4Kids) at The Hospital for Sick Children (SickKids); Associate Research Director, Health, at the Vector Institute; and Associate Professor in the Department of Computer Science at the University of Toronto. “Our guidelines provide a framework within which many issues stemming from the complexity of adopting ML in health care in particular can be avoided.”
"Just prior to last year's Machine Learning for HealthCare (MLHC) 2018 meeting, an interdisciplinary group of researchers got together to discuss pitfalls, challenges, and roadblocks to deploying machine learning in health care. This culminated in a written piece outlining responsible ML in health care. Our paper serves as a roadmap for researchers who are interested in getting involved in this area, and what (and who) it takes to go from problem formulation to implementation," says Dr. Jenna Wiens, Assistant Professor of Computer Science and Engineering at the University of Michigan in Ann Arbor, and incoming co-director of the Precision Health Initiative at the University of Michigan.Rapid progress in ML is enabling opportunities for better clinical decision-making and the diagnosis or prognosis of disease based on data algorithms. However, the development of ML tools has largely been lacking input from clinicians and patients. The experts suggest that an interdisciplinary team approach – from development to action – is critical to moving ML models from research environments into patient care.
Goldenberg says that an interdisciplinary team-based approach to the development of ML strategies can help increase the chances of successful implementation in clinical settings. These groups, which include clinicians and researchers, decision-makers (including government), and end-users (including patients and families), bring to the table multiple, invaluable perspectives that contribute to this success.
One example is in cases of sepsis, a serious condition that occurs as a complication of an infection.
“An ML model that aims to predict sepsis in children would likely have been developed using adult data. This model would likely not be applicable to a paediatric population, in which sepsis symptoms may present differently than in adults,” says Goldenberg, who is also the Varma Family Chair of Biomedical Informatics and Artificial Intelligence at SickKids; and Fellow, Child & Brain Development at CIFAR. While there have been advances in using artificial intelligence in medicine, the guidelines offer a responsible framework to support better decision-making, guide future ML development, and pave the path for predictive and individualized patient care.
“Machine learning makes it possible to leverage large amounts of data from past health-care encounters to learn what is most effective for each patient – it’s a powerful way to individualize care at scale. But true progress will require the right teams and focusing on getting the scientific and implementation details right,” says Dr. Suchi Saria, Director of Machine Learning and Health Lab; and John C. Malone Assistant Professor in Engineering, Medicine and Public Health at Johns Hopkins University.
As co-chair of AIM4Kids, Goldenberg works with an interdisciplinary team that aims to provide an integrated platform for clinicians and researchers to use artificial intelligence and data science methods to improve health outcomes and delivery of care to children. She notes that the new guidelines will inform her team’s efforts, and hopes other centres will find them valuable.
“By working together and using these key considerations as a guide for our efforts, we can reduce potential errors that may arise from bias and misinterpretation of ML models. This framework can help to ensure that clinical decisions are both data-driven and human-driven,” says Goldenberg.
The guidelines suggest a seven-step roadmap:
- Choosing the right problems. Machine learning models must provide value to clinical teams, rather than solving problems for which data is simply available. A team approach allows for the creation of clinically relevant problems that ML models can help solve.
- Developing useful solutions. Teams must scrutinize the data being used to ensure it is suited to solving the clinical problem at hand.
- Considering ethical implications. Machine learning developers need to work with multiple stakeholders to address how to counter bias that is inherent in health data collection before it is incorporated into an ML model.
- Rigorously evaluating the model. Machine learning models should be evaluated on a new set of patients using clinically relevant metrics, such as positive predictive value and sensitivity, mimicking as accurately as possible the conditions under which the model will be used in the real world.
- Thoughtfully reporting results. Machine learning models should be presented to clinicians with the context of how the model was developed, including a discussion of the pros and cons of using the results, to limit the possibility of misinterpreted conclusions.
- Deploying responsibly. Before machine learning tools are brought into the real world, they should be tested under conditions that simulate deployment as closely as possible, to ensure they will perform as expected.
- Making it to market. To transition to real-life clinical settings, ML tools must be validated with government-required regulatory steps in mind, specific to the country in which they are used.
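The evaluation step above names positive predictive value and sensitivity as clinically relevant metrics. As a minimal illustration of what those metrics measure, the sketch below computes both for a hypothetical binary classifier (e.g. flagging suspected sepsis) on a held-out set of patients; the labels and predictions are invented for the example and do not come from the paper.

```python
# Minimal sketch: computing positive predictive value (PPV) and
# sensitivity for a binary ML model evaluated on a held-out set of
# patients. All labels and predictions below are hypothetical.

def ppv_and_sensitivity(y_true, y_pred):
    """Return (PPV, sensitivity) for binary labels (1 = condition present)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    # PPV: of the patients the model flags, how many truly have the condition
    ppv = tp / (tp + fp) if (tp + fp) else 0.0
    # Sensitivity: of the true cases, how many the model catches
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    return ppv, sensitivity

# Hypothetical held-out labels and model predictions
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
ppv, sens = ppv_and_sensitivity(y_true, y_pred)
print(f"PPV = {ppv:.2f}, sensitivity = {sens:.2f}")  # → PPV = 0.75, sensitivity = 0.75
```

In a real evaluation these metrics would be reported on patients never seen during model development, under the workflow conditions (alert thresholds, data availability at prediction time) the deployed model will actually face.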