TLDR: The Hastings Center has released “A Patient’s Journey with Medical AI: The Case of Mrs. Jones,” an interactive tool designed to help policymakers, patients, and healthcare providers understand and address the ethical challenges posed by artificial intelligence in healthcare. The tool explores five scenarios where AI impacts patient care, from chatbot interactions to insurance denials, prompting users to consider issues of transparency, informed consent, accuracy, privacy, accountability, and fairness.
The Hastings Center, a leading bioethics research institution, has launched an interactive tool titled “A Patient’s Journey with Medical AI: The Case of Mrs. Jones.” The resource aims to illuminate the complex ethical landscape of artificial intelligence in healthcare for a diverse audience, including policymakers, patients, and healthcare providers. Released on September 25, 2025, the tool walks through five distinct scenarios where AI intersects with patient care, prompting critical reflection on the associated ethical, legal, policy, and practical concerns.
The “Patient’s Journey” tool presents the story of an imaginary patient, Mrs. Jones, through a series of encounters with medical AI. Each scenario is meticulously crafted to highlight specific bioethical issues:
1. Patient Contacts Healthcare System with a Concern: This initial interaction involves Mrs. Jones communicating with an AI-powered chatbot, believing it to be a human clinician. The scenario raises crucial questions about transparency, informed consent, accuracy of AI assessments across diverse populations, and the need for regulatory oversight regarding AI-based triage systems. It prompts users to consider who is responsible if a chatbot’s recommendation leads to harm.
2. Ambient Recording of Clinical Appointment: During a doctor’s visit, an AI-enabled clinical documentation system records and transcribes the conversation. This scenario explores issues of data privacy and ownership, the accuracy and contextual understanding of AI in summarizing complex medical discussions, and the extent of human oversight required to prevent overreliance on AI-generated notes and diagnoses.
3. Reading the Results of a Diagnostic: Mrs. Jones undergoes an examination using an AI-powered diagnostic device, without being fully informed about its AI capabilities or the diversity of the dataset it was trained on. This section emphasizes the importance of transparency regarding AI’s influence on diagnosis and treatment plans, the limitations of FDA approval for AI devices, and the necessity of rigorous testing on diverse patient populations to ensure accuracy and fairness.
4. AI Makes Recommendations Regarding Next Steps: An AI system analyzes Mrs. Jones’ biopsy results and suggests treatment options. The doctor, influenced by these recommendations, leans towards a particular approach without fully disclosing the AI’s role or alternative paths to the patient. This scenario addresses clinical decision-making, medical uncertainty in AI recommendations, and patient autonomy, particularly the right to opt out of AI-influenced decisions.
5. Patient’s Insurance Denies Next Steps: The final scenario depicts Mrs. Jones’ insurance claim being denied by an AI algorithm that relies on a risk-based, cost-effectiveness prediction model. This highlights ethical issues in algorithmic insurance decisions: transparency and accountability in appeal processes, and the reliability of AI models in delivering fair results for all patient populations, especially given potential biases against certain demographics.
The Hastings Center emphasizes that the tool is not intended to provide legal or ethical advice but rather to serve as a framework for thoughtful consideration and decision-making. It underscores the importance of clear disclosures to patients about AI involvement, robust regulatory oversight to standardize AI use based on ethical principles (fairness, transparency, patient privacy, and explainability), and human oversight in AI-driven insurance decisions to reduce bias and protect underserved populations. The tool was reviewed by a team of policy experts for accuracy and utility, reflecting a range of political perspectives.


