Overcoming AI Misconceptions: The Role of Governance in Healthcare
August 19, 2024
Imagine a crisp Monday morning outside a bustling city hospital. The sun is shining, and the weather is beautiful. As you walk in to see your doctor, you witness a protest. Nurses have gathered, their voices echoing with determination and concern as they chant, “Care Beyond Code” and “Patients Need Heart, Not Algorithms.” Their worries are clear: they fear a future where decisions about patient care are dictated not by compassion and experience but by algorithms focused on optimizing efficiency and reducing costs.
These kinds of protests highlight broader fears and serve as a reminder of the misconceptions and ethical and safety concerns surrounding the rapid advancement of technology in medicine, particularly regarding artificial intelligence.1, 2, 3 What is needed is sound AI governance that includes policies and procedures to address and dispel concerns, along with a structure and strategy to continuously understand and adapt to changing risks and concerns as AI becomes more mainstream in the healthcare industry. In this article, we will explore three common misconceptions about AI in healthcare that spark heightened concern and discuss how establishing an effective AI governance framework is foundational to addressing these misconceptions:
Common misconception 1: AI will replace nursing and ancillary healthcare jobs.
Common misconception 2: AI will make diagnosis and treatment decisions instead of physicians.
Common misconception 3: AI in healthcare does not prioritize patient safety.
Establishing AI Governance
The integration of AI in healthcare is fraught with very real challenges that necessitate a robust AI governance framework. Such a framework includes a Steering Committee, which provides executive oversight, high-level direction and alignment on what AI is and is not throughout the organization, and an Internal Review Committee, which provides ongoing tactical and operational support to AI efforts. While the exact AI governance structure depends on a number of factors (risk appetite, organization size, culture, etc.), two elements are paramount:4
Executive oversight by a high-level board/Steering Committee
- Purpose: Provide direction, mandates, and resourcing to responsible AI efforts in a timely manner
- Composition: Leaders ensure that the Central AI Board/Steering Committee includes C-suite executives, senior legal and compliance officers, and technology leaders. Some organizations also incorporate or regularly consult external advisors or experts to provide independent insights on complex AI issues.
Internal review by an operational committee
- Purpose: Conduct arm’s-length internal reviews of AI systems at various stages, including feasibility and resource allocation, technical scope, and Responsible AI considerations
- Composition: Leaders ensure that Internal Review Committees (“IRCs”) are cross-functional (involving multiple departments) with representation from business lines, technology, and compliance. These committees are tasked with the practical implementation and coordination of all Responsible AI efforts and workstreams.
Sound AI governance, with active participation from leaders across the organization, is essential to oversee the development, deployment and monitoring of AI technologies, as well as to manage their impact on the organization and its culture. The governance body should comprise a diverse group of stakeholders, including clinicians, data scientists, ethicists, legal, compliance, risk, marketing/public relations and patient representatives.5
The goal in assembling this comprehensive group of leaders is to ensure a holistic approach. The Internal Review Committee focuses on the crucial technical and perception-related considerations: utilizing AI data effectively, ensuring its quality, prioritizing consents, promoting sustainable data practices, rigorously reviewing data procedures, managing the cultural impacts and equity of data, and applying an ethical framework that aligns with the data life cycle.6 While the Steering Committee has oversight of these technical elements, it is also tasked with the human-centric considerations of accountability and transparency, as well as understanding and addressing workforce and public concerns. Workforce and patient representatives should be responsible not only for communicating concerns from their communities to the committee, but also for dispelling misconceptions both inside and outside the organization.
Assembling these committees with the right stakeholders first requires buy-in from the CEO. You will need to define how the need for AI governance (and presumably the expansion of AI capabilities) aligns with your organization’s goals or strategic initiatives. Then identify a comprehensive list of key subject matter experts drawn from all aspects of the organization; it should include AI evangelists as well as skeptics to make clear how AI can be integrated successfully and transparently into the organization’s DNA. Articulate the importance of the governance committee from each stakeholder’s perspective, whether it be risk, ethics, safety, medical advancement and research or patient experience, and underscore the importance of being at the forefront of the industry’s transformation. Leverage opportunities to recruit workforce representatives through “town hall” forums and patient representatives through patient advocacy groups and providers, and create clearly defined roles, responsibilities and objectives. Opinions and perceptions are important to hear, as listening will create opportunities for greater adoption. The overall cadence of meetings, voting structure, escalation processes, feedback loops and individual assignment of roles should be customized to each organization.
Culture, leadership, and acceptance will play an important role in establishing a sound AI governance process focused on transparency and acceptance of AI in the various departments within your organization. While misconceptions, media influence and outside industry experiences will inundate your organization, having the right AI governance structure and processes in place will be essential to dispelling the following common AI misconceptions.
Common misconception 1: “AI will replace nursing and ancillary healthcare jobs.”
AI governance corrective: Establish transparency and accountability
Transparency in AI decision-making processes is a crucial component of sound AI governance. When providers, patients, and sometimes the general public are not well informed about the usage and reliability of AI platforms, confusion and fear can easily take over. In our opening nursing protest example, nurses were concerned that using AI could compromise quality, patient safety and nursing care. Imagine if the decision-making processes behind deploying AI tools were not transparent to the nurses and direct care teams: to them, the tools might seem untested or poorly tested. Suppose the nurses were never involved in piloting the tools with patients or providing feedback, and had no representatives to voice their concerns or relay information back to nursing forums. They would understandably fear that the tools were being deployed as substitutes for nurses rather than as aids to enhance their work. Instead, AI should be viewed as a supplemental and directional tool that extends the expertise of healthcare professionals, enabling them to make better-informed decisions rather than replacing their critical judgment. When applied properly, AI offers the opportunity to enhance workforce capabilities by handling repetitive or monotonous tasks, allowing healthcare professionals to work at the top of their license on more complex cases.
Effective AI governance must ensure transparency that reaches not only clinical teams but also patients and caregivers. Healthcare providers and patients need to understand both the intended purpose of AI tools and how these AI systems arrive at their recommendations. This transparency builds trust and alleviates fear, and it ensures accountability, particularly if AI decisions lead to adverse outcomes. To foster an environment of openness and trust, sound AI governance should establish clear and transparent policies, procedures and guidelines for documenting and disclosing AI decision-making processes. The ideal state is a culture in which concerns can be addressed early, and often collaboratively, before they develop into heightened fears.
Common misconception 2: “AI will make diagnosis and treatment decisions instead of physicians.”
AI governance corrective: Address and account for ethical dilemmas
Another critical role of sound AI governance is addressing ethical concerns, particularly the risk of replacing human judgment with AI, which can lead to potential machine errors and algorithmic biases in diagnosis and treatment decisions. AI systems are only as reliable as the data they are trained on, which typically comes from sources such as enterprise data warehouses (“EDWs”), electronic medical records (“EMRs”) and other repositories of patient information. If this data is unclean, inaccurate, or skewed due to the population being treated, or if the data used to train the models is inherently biased towards a specific organization, the AI's recommendations will also be biased. This bias can potentially lead to disparities in both diagnosis and care.7
Let’s look at a medical imaging example: If a radiology department wants to deploy an AI tool, they must engage the governance committee to (1) assess whether the AI tool meets their specific clinical needs, (2) evaluate the potential AI tool using case data from their own practice or hospital system and compare to standard-of-care performance metrics, and (3) ensure that all parties remain aware of the risks of excessive reliance on technology while maintaining their clinical expertise.8 Sound AI governance must implement rigorous testing protocols to identify and mitigate biases, ensuring that AI systems provide fair and equitable care to all patients. However, there is wide agreement across the industry that even after actively taking these steps to prevent bias and maintain safety and equity while using AI in medicine, ultimately the onus of equitable patient care must remain on physicians and not on technology, no matter how sophisticated.9 In the case of medical imaging, for example, AI can act as a “second pair of eyes,” much like a clinician asking a colleague for a second opinion.10
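Step (2) above, evaluating a candidate AI tool on local case data against standard-of-care performance metrics, can be sketched in a few lines. This is a purely hypothetical illustration: the labels, predictions, function name and benchmark thresholds are all made-up assumptions for demonstration, not a real validation protocol or clinical data.

```python
# Hypothetical sketch of local validation: compare an AI imaging tool's
# sensitivity/specificity on a hospital's own case data against
# standard-of-care benchmarks agreed on by the governance committee.
# All data and thresholds below are illustrative assumptions.

def sensitivity_specificity(labels, predictions):
    """Compute sensitivity and specificity from binary labels and predictions."""
    tp = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 1)
    tn = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 0)
    fn = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 0)
    fp = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 1)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return sensitivity, specificity

# Illustrative local case set: 1 = finding present, 0 = finding absent.
labels   = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]
ai_preds = [1, 1, 0, 0, 0, 0, 1, 0, 1, 0]  # AI tool's reads on the same cases

# Assumed standard-of-care benchmarks (hypothetical thresholds).
BENCHMARK_SENSITIVITY = 0.80
BENCHMARK_SPECIFICITY = 0.90

sens, spec = sensitivity_specificity(labels, ai_preds)
meets = sens >= BENCHMARK_SENSITIVITY and spec >= BENCHMARK_SPECIFICITY
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, meets benchmark: {meets}")
```

In practice, a review of this kind would use the department's own curated case data, confidence intervals and clinically agreed thresholds; the point of the sketch is simply that the comparison in step (2) is a concrete, measurable check the committee can require before deployment.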
Common misconception 3: “AI in healthcare does not prioritize patient safety.”
AI governance corrective: Remain current with regulatory compliance and standards
Healthcare AI must comply with regulatory standards to ensure safety and efficacy. Sound AI governance should prioritize staying aware of evolving regulations and ensuring that AI systems meet or exceed these standards. In October 2023, the World Health Organization published a document defining critical regulatory considerations for artificial intelligence in healthcare. Among these are the pertinent and challenging privacy and data protection regulations outlined in Europe’s General Data Protection Regulation (“GDPR”) and the United States’ Health Insurance Portability and Accountability Act (“HIPAA”).11, 12 Healthcare organizations have a responsibility to ensure these compliance standards are met. By ensuring compliance, the Steering Committee defends patient rights and maintains the integrity of the healthcare system.
Sound AI governance will draw on and collaborate with compliance and risk leaders as well as others who actively monitor regulatory changes. Some organizations outside of healthcare have leaned into AI to help them track the ever-changing regulatory landscape they face. In the future, AI could be used similarly in healthcare, and it could even be leveraged to manage contracts and contract changes for coding or procurement purposes.
Conclusion
Integrating AI into healthcare is a process filled with incredible opportunities and significant challenges. While AI offers immense potential benefits, realizing these benefits requires a balanced and disciplined approach that acknowledges current limitations, addresses ethical concerns and fosters the organization's innovative spirit.
AI in healthcare should be regarded as a powerful tool that complements the expertise of healthcare professionals and improves patient care. However, its deployment must be guided by ethical principles, transparency and rigorous oversight. Sound AI governance plays a crucial role in this process, ensuring that AI technologies are developed and used responsibly, ultimately leading to better health outcomes for all. A recent survey conducted by the Center for Connected Medicine indicates that only 16% of health systems currently have a systemwide governance policy specifically intended to address AI usage and data access, and most are in the very early stages of evaluating and implementing potential AI capabilities — emphasizing that the healthcare industry is only just beginning to uncover both the compelling possibilities and the complicated risks of AI applications in healthcare.13
As healthcare leaders and innovators continue to integrate AI into healthcare, the industry must stay vigilant, ethical and patient-centered to ensure that the AI revolution benefits everyone, both now and in the future. With AI's transformative power, the future of healthcare looks brighter than ever, promising enhanced patient care and groundbreaking advancements that will revolutionize the way we approach health and wellness.
Footnotes:
1: Giles Bruce, “Why nurses are protesting AI,” Becker’s Health IT, 24 April 2024.
2: Suzanne King, “Artificial intelligence already plays a part in Kansas City healthcare, without much regulation,” The Beacon Kansas City, 22 February 2024.
3: Jared Kaltwasser, “The Doctor Will See You Now, But AI May Be Listening In,” Managed Healthcare Executive, Vol. 24, no. 7, 29 July 2024.
4: FTI Consulting analysis.
5: Nehal Hassan, Robert Slight, Graham Morgan, David Bates, Suzy Gallier, Elizabeth Sapey & Sarah Slight, “Road map for clinicians to develop and evaluate AI predictive models to inform clinical decision-making,” BMJ Health & Care Informatics, Vol. 30, no. 1 (August 2023).
6: Ciro Mennella, Umberto Maniscalco, Giuseppe De Pietro & Massimo Esposito, “Ethical and regulatory challenges of AI technologies in healthcare: A narrative review,” Heliyon, Vol. 10, no. 4 (29 February 2024).
7: Jayson Marwaha, Adam Landman, Gabriel Brat, Todd Dunn & William Gordon, “Deploying digital health tools within large, complex health systems: key considerations for adoption and implementation,” NPJ Digital Medicine, Vol. 5, no. 1 (January 2022).
8: Our understanding of these steps and the role of the physician is informed by the following: Daniel Rubin, “Artificial Intelligence in Imaging: The Radiologist’s Role,” Journal of the American College of Radiology, Vol. 16, no. 9 (September 2019).
9: Shouroug Alowais, Sahar Alghamdi, Nada Alsuhebany, Tariq Alqahtani, Abdulrahman Alshaya, Sumaya Almohareb, Atheer Aldairem, Mohammed Alrashed, Khalid Saleh, Hisham Badreldin, Majed Al Yami, Shmeylan Al Harbi, Abdulkareem Albekairy, “Revolutionizing healthcare: the role of artificial intelligence in clinical practice,” BMC Medical Education, Vol. 23, no. 1 (22 September 2023).
10: Ibid.
11: “WHO outlines considerations for regulation of artificial intelligence for health,” World Health Organization (19 October 2023).
12: Gabriel Perna, “How healthcare AI is regulated by the FDA, HHS, State Laws,” Modern Healthcare (26 March 2024).
13: For the Center for Connected Medicine Survey, see: “How health systems are navigating the complexities of AI,” Center for Connected Medicine (2024).