
Patients Turn to AI for Lab Result Interpretation, Raising Hopes and Concerns

TLDR: A growing number of patients are utilizing AI assistants like Claude, ChatGPT, and Gemini to understand their medical lab results, driven by immediate access to digital health records. While AI offers clarity and empowers patients to engage more effectively with their healthcare providers, experts caution about the risks of inaccurate information, privacy breaches, and the phenomenon of ‘hallucinations’ in AI responses. Healthcare institutions are also exploring AI to assist physicians in drafting patient interpretations.

The landscape of patient engagement with their health data is rapidly evolving, with a notable surge in individuals turning to artificial intelligence (AI) assistants to interpret complex medical lab results. This trend is exemplified by cases like Judith Miller, a 76-year-old Milwaukee resident, who used Anthropic’s Claude to understand her elevated carbon dioxide and low anion gap readings after receiving her results online. Miller reported that Claude provided a ‘clear understanding of the abnormalities,’ alleviating anxiety while she awaited her doctor’s official review.

This shift is largely facilitated by federal laws mandating immediate release of electronic health information, granting patients unprecedented access to their medical records through online portals such as MyChart. A 2023 study highlighted that a significant 96% of surveyed patients desire immediate access to their records, even before a provider’s review. Consequently, many are leveraging large language models (LLMs) like OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini to make sense of their health data.

While AI offers potential benefits, such as empowering patients with information and fostering more informed discussions with their healthcare teams, it also introduces considerable risks. Physicians and patient advocates warn that AI chatbots can generate incorrect answers and that sensitive medical information might not remain private. Adam Rodman, an internist at Beth Israel Deaconess Medical Center and chair of a generative AI steering group at Harvard Medical School, noted that ‘LLMs are theoretically very powerful and they can give great advice, but they can also give truly terrible advice depending on how they’re prompted.’ Justin Honce, a neuroradiologist at UCHealth in Colorado, added that it is ‘very difficult for patients who are not medically trained to know whether AI chatbots make mistakes.’

Public sentiment reflects this caution; a 2024 KFF poll indicated that 56% of adults who interact with AI are not confident in the accuracy of information provided by chatbots. Furthermore, privacy is a significant concern, as data shared with AI models goes directly to tech companies and may not comply with federal privacy laws like HIPAA. Sam Altman, CEO of OpenAI, has publicly warned against inputting personal information into ChatGPT. The risk of ‘hallucinations’—AI responses that appear sensible but are factually inaccurate—also poses a challenge, as demonstrated by an instance where OpenAI’s Whisper transcription tool introduced an imaginary medical treatment into a transcript.

Despite these challenges, the use of AI in healthcare is expanding. A recent University of Michigan poll found that approximately 1 in 7 adults over 50 use AI for health information, while the KFF poll put the figure at 1 in 4 among adults under 30. Liz Salmi, communications and patient initiatives director at OpenNotes, an academic lab advocating for healthcare transparency, conducted a proof-of-concept study this year. Her research showed that ChatGPT, Claude, and Gemini performed well in interpreting clinical notes, with accuracy improving when patients framed questions carefully, for example, by asking the AI to adopt a clinician persona or posing one question at a time.


This evolving landscape necessitates a new form of ‘digital health literacy,’ emphasizing careful questioning, cross-referencing AI responses, consulting healthcare professionals, and safeguarding personal privacy online. Beyond patient use, healthcare providers are also integrating AI; Stanford Health Care, for instance, has launched an AI assistant to help physicians draft interpretations of clinical tests and lab results for patients. While AI is proving to be a ‘very important tool’ for patients like Miller to organize questions and conduct research, the ongoing development requires a balanced approach of embracing innovation while mitigating inherent risks.

Karthik Mehta
Karthik Mehta is a data journalist known for his data-rich, insightful coverage of AI news and developments. Armed with a degree in Data Science from IIT Bombay and years of newsroom experience, Karthik merges storytelling with metrics to surface deeper narratives in AI-related events. His writing cuts through hype, revealing the real-world impact of Generative AI on industries, policy, and society. You can reach him at: [email protected]
