TLDR: The American Medical Association (AMA) has released new guidance and a toolkit, ‘Governance for Augmented Intelligence,’ to help health systems and hospitals establish robust policies for the adoption and governance of artificial intelligence. The initiative comes as AI tools rapidly expand in healthcare: nearly 70% of physicians reported using AI in 2024, up from 38% in 2023. The guidance lays out eight foundational elements for responsible AI integration, aiming to ensure patient safety, mitigate bias, and support organizational readiness amid rapid technological change.
The American Medical Association (AMA) has introduced a critical new framework and toolkit designed to guide health systems and hospitals in the responsible adoption and governance of artificial intelligence (AI) technologies. Titled ‘Governance for Augmented Intelligence,’ this comprehensive guidance, developed in collaboration with Manatt Health, addresses the urgent need for structured policies as AI tools rapidly integrate into clinical practice.
According to Dr. Margaret Lozovatsky, the AMA’s chief medical information officer and vice president of digital health innovations, the pace of technological advancement in AI far outstrips the rate at which healthcare organizations can implement these tools. “Technology is moving very, very quickly. It’s moving much faster than we’re able to actually implement these tools, so setting up an appropriate governance structure now is more important than it’s ever been because we’ve never seen such quick rates of adoption,” Dr. Lozovatsky stated. This sentiment underscores the AMA’s proactive stance in ensuring that AI’s transformative potential is harnessed safely and ethically.
The necessity for such guidance is further highlighted by recent data indicating a significant surge in AI adoption among physicians. AMA survey data reveals that nearly 70% of physicians utilized AI tools in 2024, a substantial increase from 38% in 2023. Concurrently, physician enthusiasm for AI has grown, with 35% expressing more excitement than concern in 2024, up from 30% the previous year.
The AMA’s guidance outlines eight foundational elements crucial for responsible AI adoption:
1. Establishing Executive Accountability and Structure: Defining clear leadership roles and organizational frameworks for AI oversight.
2. Forming a Working Group: Creating a dedicated team to detail priorities, processes, and policies related to AI.
3. Assessing Current Policies: Reviewing existing organizational policies for their relevance and applicability to AI adoption.
4. Developing New AI-Specific Policies: Crafting new guidelines tailored to the unique challenges and opportunities presented by AI.
5. Defining Project Intake, Vendor Evaluation, and Assessment Processes: Establishing rigorous procedures for selecting, evaluating, and integrating AI solutions from external vendors.
6. Updating Standard Planning and Implementation Processes: Integrating AI considerations into existing operational planning and deployment strategies.
7. Establishing an Oversight and Monitoring Process: Implementing mechanisms for continuous monitoring and evaluation of AI tool performance and impact.
8. Supporting AI Organizational Readiness: Fostering a culture and infrastructure that prepares staff and systems for effective AI integration.
The toolkit also provides resources, including a downloadable model AI governance policy that can be customized to fit an organization’s specific needs. The AMA emphasizes that clinical experts should lead the evaluation of AI applications to ensure their quality and clinical validity. Transparency is also paramount, with healthcare organizations urged to clearly communicate how AI impacts medical decisions and patient care at the point of interaction.
Experts stress the importance of comprehensive governance. As one leader noted, “You need to have the governance in place to make sure that you understand all of the tools that are being used, how the tools are being used, the intended outcome of usage, and how you mitigate bias.” This includes defining terms such as generative AI and machine learning, articulating potential risks, and establishing clear rules for permitted and prohibited use cases.

The AMA further advises health systems to review and align related policies—covering anti-discrimination, codes of ethics, contracting, data security, patient safety reporting, and training—with new AI regulations to ensure a cohesive and ethical approach to this rapidly evolving technology. Nearly half of surveyed physicians identified enhanced oversight as the top regulatory need for building trust in AI tools, reinforcing the AMA’s call for robust governance that prioritizes patient safety, health equity, and the quadruple aim of healthcare.