Global expert consensus defines first framework for building trustworthy AI in health care

More than 100 international experts in the application of artificial intelligence (AI) in health care have published the first consensus guidelines outlining the criteria an AI tool must meet to be considered trustworthy when implemented in health care settings.
The guidelines, published in The BMJ, are the first globally acknowledged framework for developing and deploying health care AI applications and for gauging whether the information they generate can be trusted.
What this means
Called the FUTURE-AI framework, the consensus guidelines are organized around six guiding principles:
- Fairness
- Universality
- Traceability
- Usability
- Robustness
- Explainability
The experts reviewed and agreed upon a set of 30 best practices that fall within these six categories. The practices address technical, clinical, socio-ethical and legal aspects of trustworthy AI, and the recommendations cover the entire lifecycle of health care AI: design, development and validation, regulation, deployment and monitoring.
The authors encourage researchers and developers to take these recommendations into account in the proof-of-concept phase for AI-driven applications to facilitate future translation to clinical practice.
Why it matters
“Patients, clinicians, health organizations and authorities need to know that information and analysis generated by AI can be trusted, or these tools will never make the leap from theoretical to real world application in a clinical setting,” says Marius George Linguraru, DPhil, MA, MSc, Connor Family Professor for Research and Innovation in the Sheikh Zayed Institute for Surgical Innovation at Children’s National Hospital and co-author of the guidelines. “Bringing so many international and multi-disciplinary perspectives together to outline the characteristics of a trustworthy medical AI application is part of what makes this work unique. It is my hope that finding such broad consensus will shed light on the greater good AI can bring to clinics and help us avoid problems before they ever impact patients.”
The FUTURE-AI consortium was founded in 2021 by Karim Lekadir, PhD, ICREA Research Professor at the University of Barcelona, and now comprises 117 interdisciplinary experts from 50 countries representing all continents, including AI scientists, clinical researchers, biomedical ethicists and social scientists. Over a two-year period, the consortium established these guiding principles and best practices for trustworthy and deployable AI through an iterative process comprising an in-depth literature review, a modified Delphi survey and online consensus meetings. Dr. Linguraru contributed a unique perspective on AI for pediatric care and rare diseases.
What’s next
The authors note that, “progressive development and adoption of medical AI tools will lead to new requirements, challenges and opportunities. For some of the recommendations, no clear standard on how these should be addressed yet exists.”
To address this uncertainty, they propose FUTURE-AI as a dynamic, living framework, supported by a dedicated website where the global community can participate in the FUTURE-AI network. Visitors can provide feedback based on their own experiences and perspectives, and the input gathered will allow the consortium to refine the FUTURE-AI guidelines and learn from other voices.
Read the full manuscript outlining all 30 best practices: FUTURE-AI: international consensus guideline for trustworthy and deployable artificial intelligence in healthcare