Coalition for Health AI publishes stakeholder guide, proposing six-phase AI lifecycle


Those who felt healthcare could use a detailed framework for the responsible use of AI just got their wish. The Coalition for Health AI, or "CHAI," has created an in-depth guide to that end, and the nonprofit is inviting interested parties to help refine the document, the draft Assurance Standards Guide, before it is completed.

In announcing the draft’s release on June 26, CHAI emphasized that the framework represents a consensus view based on the expertise and knowledge of stakeholders from across the US healthcare system. Contributors included patient advocates, technology developers, physicians and data scientists.

The authors focused their approach less on conceptual brainstorming than on real-world problems and practices, hoping that the draft framework will be adopted and put into practice by people involved in the design, development, implementation and use of AI in healthcare.

The purpose of the framework – which includes accompanying checklists of tasks for stakeholders – is to “provide actionable guidance on ethics and quality assurance.”

In a 16-page summary of the draft framework (the full document runs 185 pages), the authors present a brief description of the AI lifecycle. They suggest that it consists of six successive, but sometimes overlapping, phases:

1. Define the problem and plan for it.

Identify the problem, understand stakeholder needs, evaluate feasibility and decide whether to build, buy or collaborate.

“In this phase, the goal is to understand the specific problem that an AI system addresses,” the authors write. “This includes conducting surveys, interviews and research to find root causes. Teams will then decide whether to build a solution in-house, purchase it or collaborate with another organization.”

2. Design the AI system.

Capture technical requirements, design system workflow, and plan implementation strategy.

“During design, the focus is on specifying what a system should do and how it fits into a healthcare workflow. This includes defining the requirements, designing the system and planning its implementation and monitoring to ensure it meets the needs of providers and users.”

3. Develop the AI solution.

Develop and validate the AI model, prepare data and plan operational implementation.

“This phase is about building an AI solution. The team will collect and prepare data, train AI models and develop an interface for users. The goal is to create a functional AI system that can be tested and evaluated for accuracy and effectiveness.”

4. Evaluate.

Perform local validation, establish a risk management plan, train end users, and ensure compliance.

“The assessment phase is where AI systems are tested to decide whether they are ready for a pilot launch. This includes validating the system, training users and ensuring it meets healthcare standards and regulations. The goal is to confirm that the system is working correctly and is safe to use.”

5. Pilot.

Implement a small-scale pilot, monitor real-world impact and update risk management.

“In this phase, the AI system is tested in practice on a small scale. The aim is to evaluate its performance, user acceptance and overall impact. Based on the results, the team will decide whether to proceed with a larger-scale deployment.”

6. Implement and monitor.

Deploy the AI ​​solution widely, perform continuous monitoring and maintain quality assurance.

“The final phase involves deploying AI systems on a larger scale and monitoring their performance. This ensures that systems remain effective and can be adjusted as necessary, maintaining high quality and reliability in healthcare.”
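For readers who track governance work programmatically, the six phases and their stakeholder checklists lend themselves to a simple data structure. The sketch below is purely illustrative and assumes nothing about CHAI's own tooling; the phase names come from the guide, but `LifecycleTracker`, `ChecklistItem`, and all task strings are hypothetical:

```python
from dataclasses import dataclass, field
from enum import Enum


class Phase(Enum):
    """The six lifecycle phases named in the draft guide."""
    DEFINE = "Define the problem and plan for it"
    DESIGN = "Design the AI system"
    DEVELOP = "Develop the AI solution"
    EVALUATE = "Evaluate"
    PILOT = "Pilot"
    IMPLEMENT = "Implement and monitor"


@dataclass
class ChecklistItem:
    """One task on a phase checklist, assigned to a stakeholder role."""
    task: str
    stakeholder: str
    done: bool = False


@dataclass
class LifecycleTracker:
    """Tracks per-phase checklist completion. Because the guide notes
    that phases can overlap, no phase is gated on an earlier one."""
    checklists: dict = field(default_factory=lambda: {p: [] for p in Phase})

    def add_task(self, phase: Phase, task: str, stakeholder: str) -> None:
        self.checklists[phase].append(ChecklistItem(task, stakeholder))

    def complete(self, phase: Phase, task: str) -> None:
        for item in self.checklists[phase]:
            if item.task == task:
                item.done = True

    def phase_progress(self, phase: Phase) -> float:
        """Fraction of a phase's checklist items marked done."""
        items = self.checklists[phase]
        if not items:
            return 0.0
        return sum(i.done for i in items) / len(items)
```

For example, a team in the first phase might record "conduct stakeholder interviews" and "decide build vs. buy vs. collaborate" as checklist items, then call `phase_progress(Phase.DEFINE)` to see how far along that phase is.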

