American Heart Association Issues New Framework for Responsible AI Implementation in Healthcare

The American Heart Association has released new guidance addressing the critical gaps in AI evaluation and monitoring in healthcare, providing a practical framework to ensure AI tools deliver clinical benefits while protecting patients from potential harms.

November 10, 2025

The American Heart Association has released a new science advisory urging health systems to adopt clear and simple rules for using artificial intelligence in patient care. This guidance comes as hundreds of health care AI tools have been cleared by the U.S. Food and Drug Administration, yet only a fraction have been rigorously evaluated for clinical impact, fairness or bias.

Published in the Association's flagship journal, Circulation, the advisory introduces a pragmatic, risk-based framework for evaluating and monitoring AI tools in cardiovascular and stroke care. The framework builds on prior published AI frameworks to identify critical gaps in current practices and includes key principles to help health systems build effective AI governance for selecting, validating, implementing, and overseeing AI tools.

The four guiding principles proposed for health systems deploying clinical AI are strategic alignment, ethical evaluation, usefulness and effectiveness, and financial performance. These principles aim to ensure AI tools deliver measurable clinical benefit while safeguarding individuals from known and unknown harms. According to Sneha S. Jain, M.D., M.B.A., volunteer vice chair for the American Heart Association AI Science Advisory writing group, AI is transforming health care faster than traditional evaluation frameworks can keep pace.

The advisory highlights significant concerns about current AI deployment practices. A recent survey found that only 61% of hospitals using predictive AI tools validated them on local data prior to deployment, and fewer than half tested for bias. This variability is most pronounced among smaller, rural and non-academic institutions, raising concerns about patient safety and consistent care delivery across diverse populations.

The American Heart Association's extensive network of nearly 3,000 hospitals participating in the Get With The Guidelines® quality improvement programs, including more than 500 rural and critical access facilities, positions the organization as a trusted leader in advancing responsible AI governance. The Association has committed over $12 million in research funding in 2025 to test novel health care AI delivery strategies for safety and efficacy.

The science advisory writing group emphasizes that monitoring of AI tools cannot end after deployment. Performance of AI tools may drift as clinical practice changes or patient populations differ. Health systems should integrate AI governance into existing quality assurance programs and define clear thresholds for retraining or retiring tools if performance declines. Lee H. Schwamm, M.D., FAHA, volunteer member of the American Heart Association committee on AI and Technology Innovation, stated that responsible AI use is not optional but essential for ensuring tools improve patient outcomes and support equitable, high-quality care.