
Healthcare Professionals Face Peer Skepticism When Utilizing AI for Clinical Decisions, Johns Hopkins Study Reveals

TLDR: A recent study by Johns Hopkins researchers found that doctors who use artificial intelligence for primary clinical decision-making are viewed negatively by their peers, who perceive them as less competent. While using AI for verification partially mitigates this negative perception, not using AI at all garners the most favorable peer views. This “competence penalty” highlights a significant social barrier to AI adoption in healthcare, despite clinicians generally acknowledging AI’s benefits for accuracy. A separate study also indicated that patients perceive doctors using AI as less competent, trustworthy, and empathic.

A groundbreaking study conducted by researchers at Johns Hopkins University has unveiled a significant social hurdle in the adoption of artificial intelligence within clinical practice: doctors who leverage AI for clinical decision-making are often viewed negatively by their colleagues. The findings, published in the medical journal npj Digital Medicine, suggest that while AI holds immense promise for healthcare advancement, its visible use can lead to a “competence penalty” among peers.

The study involved a randomized experiment with 276 practicing clinicians from a major health system, including attending physicians, residents, fellows, and advanced practice providers. Participants were presented with vignettes depicting a physician in one of three scenarios: using no generative AI (control group), using AI as a primary decision-making tool, or using AI to verify clinical assessments. The results were stark: physicians who relied on generative AI as a primary decision-making tool received significantly lower ratings for clinical skills, competence, and overall healthcare experience compared to those who did not use AI at all. On a 1-to-7 scale, physicians using AI for primary decision-making scored a mean of 3.79 for clinical skill, whereas those not using AI scored 5.93 (P < 0.001). Using AI as a verification tool improved perceptions somewhat, with a mean rating of 4.99, but still lagged behind non-AI users.

Tinglong Dai, Bernard T. Ferrari Professor of Business at the Johns Hopkins Carey Business School and co-corresponding author of the study, emphasized the potential impact of this stigma, stating, “What surprised us is that doctors who use it in making medical decisions can be perceived by their peers as less capable. That kind of stigma, not the technology itself, may be an obstacle to better care.” Haiyang Yang, first author of the study and academic program director at Carey, added, “In the age of AI, human psychology remains the ultimate variable. The way people perceive AI use can matter just as much as, or even more than, the performance of the technology itself.”

Ironically, despite the negative peer perceptions associated with AI use, the study also revealed that clinicians generally acknowledge AI as a beneficial tool for enhancing the precision of clinical assessments. They rated generative AI as useful for ensuring accuracy (mean 4.30 overall) and even more so when customized for their institution (mean 4.96). However, this recognition of AI’s utility does not currently translate into positive peer evaluations when it comes to a physician’s perceived competence.

Further complicating the landscape, a separate study published in JAMA Network Open explored patient perceptions. The researchers randomized 1,300 adults to view fictitious advertisements for family doctors, some of which mentioned the use of AI for administrative, diagnostic, or therapeutic purposes. For every AI use case, patients rated the doctors as significantly less competent, trustworthy, and empathic than those whose ads made no mention of AI. On competence, for instance, the control group scored 3.85 on a 5-point scale, compared with 3.71 for administrative AI, 3.66 for diagnostic AI, and 3.58 for therapeutic AI.

Risa Wolf, co-corresponding author and associate professor of pediatric endocrinology at the Johns Hopkins School of Medicine, highlighted the need for thoughtful implementation. “As AI tools become more commonly used in healthcare and in medicine, I think this really just demonstrates that there are going to be challenges, some barriers to adoption and increasing use,” she noted. She stressed the importance of understanding specific AI tools and their benefits, and of ensuring they are used equitably. The researchers suggest that the findings align with broader literature indicating that reliance on external input can be perceived as a weakness rather than a strength.

These studies underscore that while healthcare leaders are increasingly adopting generative AI (roughly 85% of healthcare leaders surveyed by McKinsey as of late 2024 were either using or exploring the technology), social and psychological barriers among both peers and patients remain significant obstacles to its widespread and effective integration into clinical workflows. Overcoming this “stigma” is crucial if AI is to truly complement clinical judgment and improve patient care.

Karthik Mehta
Karthik Mehta is a data journalist known for his data-rich, insightful coverage of AI news and developments. Armed with a degree in Data Science from IIT Bombay and years of newsroom experience, Karthik merges storytelling with metrics to surface deeper narratives in AI-related events. His writing cuts through hype, revealing the real-world impact of Generative AI on industries, policy, and society. You can reach him at: [email protected]
