The FUTURE-AI guideline is an international consensus framework aimed at ensuring the trustworthy implementation of artificial intelligence (AI) in healthcare.
Established in 2021 by the FUTURE-AI Consortium, a group of 117 experts from 50 countries, the framework is grounded in six core principles: fairness, universality, traceability, usability, robustness, and explainability.
To facilitate the operationalisation of trustworthy AI, the framework outlines 30 best practices that encompass technical, clinical, socioethical, and legal aspects.
These recommendations span the entire lifecycle of healthcare AI, guiding processes from design and development to regulation and monitoring.
This comprehensive approach aims to enhance the effectiveness and reliability of AI tools in the healthcare sector.
We’ve summarised the methods and reporting used to develop the guideline, highlighting its major points, discussions, and lessons.
Major Points
- The increasing use of AI in healthcare for tasks like diagnosis and prognosis faces challenges in real-world clinical adoption due to limited trust and ethical concerns. These concerns include potential errors, biases, lack of transparency, and data privacy issues.
- To address this, the FUTURE-AI Consortium, comprising 117 experts from 50 countries, developed the guideline over 24 months through a modified Delphi approach involving extensive feedback and iterative discussion.
- The FUTURE-AI framework is built upon six guiding principles: fairness, universality, traceability, usability, robustness, and explainability. These principles are intended to streamline and structure best practices for trustworthy AI in medicine.
- The guideline provides a set of 30 detailed best practices that address technical, clinical, socioethical, and legal dimensions across the entire lifecycle of healthcare AI, from design and development to validation, regulation, deployment, and monitoring.
- The development of these recommendations involved multiple rounds of surveys, feedback, and consensus meetings with a diverse group of experts, including AI scientists, clinicians, ethicists, legal experts, and patient advocates.
Discussions
- The paper highlights that clear, widely accepted guidelines for responsible and trustworthy AI are needed to increase AI adoption in healthcare.
- It distinguishes itself from previous efforts that focused on reporting AI studies or general AI ethics by providing best practices for the actual development and deployment of healthcare AI across its entire lifecycle.
- The framework draws inspiration from the FAIR principles (findable, accessible, interoperable, reusable) for scientific data management.
- Continuous risk assessment and mitigation are emphasised as fundamental throughout the AI lifecycle to address biases, data variations, and evolving challenges (see the monitoring sketch after this list).
- The guideline recognises the different levels of compliance required for AI tools in the research phase versus those intended for clinical deployment.
- The paper discusses the operationalisation of the FUTURE-AI framework, providing a chronological step-by-step guide across the design, development, validation, and deployment phases, with practical examples for each of the 30 recommendations.
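To make the continuous risk-assessment point above concrete, here is a minimal subgroup-performance audit, the kind of recurring bias check the guideline calls for. This is our illustration, not code from the paper: the column names, the synthetic data, and the 0.05 AUC disparity threshold are all assumptions.

```python
# Illustrative subgroup performance audit (our sketch, not from the paper).
# Assumptions: a binary classifier emitting probability scores, one
# demographic column per patient, and an arbitrary 0.05 AUC disparity flag.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score


def subgroup_auc_report(df: pd.DataFrame, group_col: str,
                        label_col: str = "label", score_col: str = "score",
                        max_gap: float = 0.05) -> pd.DataFrame:
    """Compute AUC per subgroup and flag groups trailing the best by > max_gap."""
    rows = []
    for group, sub in df.groupby(group_col):
        if sub[label_col].nunique() < 2:
            continue  # AUC is undefined unless both classes are present
        rows.append({"group": group, "n": len(sub),
                     "auc": roc_auc_score(sub[label_col], sub[score_col])})
    report = pd.DataFrame(rows)
    report["flagged"] = report["auc"] < report["auc"].max() - max_gap
    return report


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = pd.DataFrame({"sex": rng.choice(["F", "M"], size=1000),
                         "label": rng.integers(0, 2, size=1000)})
    # Synthetic scores made deliberately noisier for one group.
    noise = np.where(demo["sex"] == "F", 0.6, 0.3)
    demo["score"] = demo["label"] + rng.normal(0.0, noise)
    print(subgroup_auc_report(demo, group_col="sex"))
```

In practice, a report like this would be re-run on each batch of post-deployment data so that emerging performance gaps surface early rather than at the next formal review.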
Conclusions
- The FUTURE-AI guideline offers a comprehensive and self-contained set of recommendations for developing trustworthy and deployable AI in healthcare.
- By organising the recommendations under six guiding principles, the framework clearly characterises the pathways towards responsible and trustworthy AI.
- The guideline is intended to benefit a wide range of healthcare stakeholders.
Lessons To Be Learnt
- Developing trustworthy AI in healthcare requires a multidisciplinary and international collaborative effort to achieve broad consensus.
- A structured framework with clear guiding principles and actionable recommendations is essential for operationalising trustworthy AI.
- Addressing ethical, legal, social, clinical, and technical dimensions is crucial throughout the AI lifecycle.
- Continuous engagement with stakeholders and ongoing risk management are vital for the successful development and deployment of healthcare AI.
- Transparency, accountability, and the ability to explain AI decisions are paramount for building trust among patients, clinicians, and regulatory bodies.
- Local validation, usability testing, and evaluation of clinical utility are critical steps for ensuring successful deployment in real-world healthcare settings (a minimal local-validation sketch follows this list).
- The FUTURE-AI framework is designed to be a dynamic and evolving guideline, requiring ongoing feedback and adaptation to technological advancements and stakeholder needs.
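To illustrate the local-validation lesson above: before deploying a model trained elsewhere, a site can check the frozen model's predictions on local data against pre-registered performance targets. The sketch below is ours, under assumed targets (0.80 AUC, 0.85 sensitivity); the thresholds and metric choices are illustrative, not values from the guideline.

```python
# Illustrative local-validation gate (our sketch; thresholds are assumptions).
import numpy as np
from sklearn.metrics import recall_score, roc_auc_score


def local_validation(y_true, y_score, threshold: float = 0.5,
                     min_auc: float = 0.80,
                     min_sensitivity: float = 0.85) -> dict:
    """Check a frozen model's scores on local data against preset targets."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    results = {"auc": roc_auc_score(y_true, y_score),
               "sensitivity": recall_score(y_true, y_pred)}
    results["deployable"] = (results["auc"] >= min_auc
                             and results["sensitivity"] >= min_sensitivity)
    return results


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    y_true = rng.integers(0, 2, size=500)  # stand-in for local outcomes
    y_score = np.clip(0.7 * y_true + rng.normal(0.3, 0.25, size=500), 0.0, 1.0)
    print(local_validation(y_true, y_score))
```

A gate like this is only one piece of local validation; the guideline also stresses usability testing and clinical-utility evaluation, which need human studies rather than a metrics check.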
Artificial intelligence is transforming entire industries: education, manufacturing, software, agriculture, you name it. In healthcare, however, adoption has been slower, despite extensive research and expert predictions of AI's benefits and potential.
Healthcare AI faces challenges that span clinical, technical, socioethical, and legal landscapes, particularly because healthcare decisions are high-stakes and deeply personal.
Despite these challenges, several AI solutions have already been integrated into clinical workflows and shown to be helpful, signalling real potential.
Guidelines like FUTURE-AI provide safety nets for using AI in healthcare. As we study and understand AI better, we can adapt and evolve our practices, methods, systems, and processes to accommodate AI safely in the healthcare system.
Read the full paper here: