AI Is Listening. Are Patients Being Heard?
- Sophie Rinzler
- Jan 6
- 5 min read
Art by: Stefanie Chen
From the moment a clinician turns toward a keyboard, something essential is lost: the full gaze, the clarifying nod, the pause that signals undivided attention. As electronic health record (EHR) demands expand, human medical scribes are increasingly replaced by artificial intelligence (AI). Companies like Abridge and Nuance promise to restore clinicians’ focus on patients, reducing burnout and reclaiming time for care. Yet behind these assurances lie serious risks: technical, ethical, and distributive. The question is not only how these systems alter workflow, but also how they reshape the quality and equity of patient care.
AI-powered “ambient scribes” use automatic speech recognition (ASR) and large language models (LLMs) to capture clinician-patient conversations, filter out irrelevant details, and produce draft notes for review [1]. Advocates highlight three main benefits, each with direct patient implications:
Time savings and workload relief: In one ambulatory pilot, documentation time per visit decreased nearly 20%, after-hours EHR work fell 30%, and same-day closure rates rose by more than 9% [1]. At Corewell Health, clinicians reported a 61% reduction in cognitive load and an 85% increase in satisfaction [2]. These gains can result in shorter waits for completed documentation and a more manageable workload for healthcare providers.
Greater clinician presence: Physicians often report that reducing live typing enables more eye contact, active listening, and conversational flow. Shah et al. (2025) found that 68% of physicians noted improved patient engagement [3]. Even subtle shifts, like full engagement and uninterrupted conversation, leave patients describing visits as more personal and relational, and less rushed and transactional.
Improved documentation quality: AI notes sometimes capture details a hurried clinician might miss, reducing errors in treatment plans or follow-up care [4]. Evaluations also found that clinicians reported lower documentation burden and reduced burnout risk (by 7% in one 2025 study) [5], factors that indirectly support safer and more consistent care.
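To make the capture-filter-draft loop behind these tools concrete, here is a minimal sketch in Python. Everything in it is illustrative: a crude keyword screen stands in for the relevance filtering a real LLM performs, and the function names (`filter_turns`, `draft_note`) are hypothetical, not any vendor's API.

```python
# Toy illustration of the ambient-scribe flow: transcript capture ->
# filtering of irrelevant talk -> draft note held for clinician review.
# In production, ASR models and LLMs replace these simple string rules.

SMALL_TALK = ("weather", "weekend", "traffic")  # stand-in for an LLM relevance filter

def filter_turns(transcript):
    """Keep only clinically relevant turns (here: a crude keyword screen)."""
    return [
        (speaker, text)
        for speaker, text in transcript
        if not any(word in text.lower() for word in SMALL_TALK)
    ]

def draft_note(transcript):
    """Assemble a draft note that a clinician must still review and sign."""
    relevant = filter_turns(transcript)
    subjective = [text for speaker, text in relevant if speaker == "patient"]
    assessment = [text for speaker, text in relevant if speaker == "clinician"]
    return {
        "subjective": " ".join(subjective),
        "assessment": " ".join(assessment),
        "status": "DRAFT - pending clinician review",  # human sign-off stays mandatory
    }

visit = [
    ("clinician", "How was the weekend?"),
    ("patient", "I have had a dull headache for three days."),
    ("clinician", "Sounds tension-type; start with hydration and rest."),
]
note = draft_note(visit)
print(note["subjective"])   # the small-talk turn is filtered out
print(note["status"])
```

Note that the output is explicitly a draft: the workflow assumes a clinician reviews and edits before anything enters the record, which is exactly where the risks below arise if that review becomes cursory.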
These benefits, however, come with risks that, if left unaddressed, could undermine trust, safety, and equity:
Inaccuracy and omission: LLMs sometimes “hallucinate,” producing confident yet incorrect statements. In medical documentation, even a single error regarding a medication or diagnosis can lead to serious consequences. Patients depend on records to be precise, yet up to 83% of physicians in one study reported drafts that were verbose or required editing [3]. Another study flagged patient safety concerns in real-world use, highlighting the need for strict oversight [6].
Bias and inequity: Another serious risk lies in uneven representation. Training data often underrepresent marginalized groups, meaning AI may mischaracterize the voices of patients who use non-standard dialects, have limited English proficiency, or present with atypical symptoms [7] [8]. These distortions can flatten or misframe patients’ lived experiences in their own medical records. Moreover, adoption will likely begin in well-funded systems, leaving rural and under-resourced clinics without access and deepening disparities in who benefits from streamlined care [7].
Privacy, consent, and transparency: Ambient listening requires continuous audio capture of patient-clinician conversations. Without explicit consent, strict access controls, and strong deidentification, patients may feel their confidentiality is compromised. Some may even prefer manual documentation simply because it feels safer [9]. True trust in AI scribes requires patients to understand what is being recorded, who controls it, and whether opting out is possible.
Erosion of narrative control: Medical notes are not just clerical — they are the story of a patient’s health. If clinicians rely too heavily on AI drafts, patient narratives risk being compressed into standardized templates that prioritize efficiency over individuality [10]. The result may be records that document disease but neglect the human behind it.
Whether AI scribes ultimately enhance or undermine care depends on how they are designed and deployed. Developers and health systems must prioritize human-centered design, incorporating the voices of both clinicians and patients. Continuous auditing should not only measure accuracy but also evaluate whether outcomes differ by race, language, or socioeconomic status. Only then can AI scribes avoid perpetuating the very inequities healthcare seeks to reduce.
Access is another critical concern. Without deliberate investment, affluent hospitals may enjoy the benefits of more attentive clinicians while community clinics fall further behind. This uneven rollout risks exacerbating longstanding gaps in care. Equitable infrastructure, workforce training, and thoughtful integration are essential if patients everywhere are to benefit.
Transparency is equally crucial. Patients deserve clear communication about when and how conversations are recorded, how data is stored, and who bears responsibility when errors occur. Trust is built on informed consent and clear accountability, not vague assurances.
Finally, clinicians must remain active participants in the medical narrative. While AI can streamline note-taking, the deeper responsibility of interpretation, nuance, and care remains human. Patients depend on clinicians not just for treatment, but for an accurate account of their health. AI scribes should amplify human attention, not minimize or replace it.
If implemented thoughtfully, AI scribes could reduce distraction and enable clinicians to engage more fully with patients. But if adopted without safeguards, they risk embedding bias, undermining privacy, and diminishing patient voices. For patients, the stakes are high: these systems will either strengthen the human connection in medicine or accelerate its erosion. An equitable healthcare system depends on balancing innovation with fairness, transparency, and, above all, the preservation of patient stories.
References:
Duggan, M. J., Gervase, J., Schoenbaum, A., Hanson, W., Howell, J. T., 3rd, Sheinberg, M., & Johnson, K. B. (2025). Clinician Experiences With Ambient Scribe Technology to Assist With Documentation Burden and Efficiency. JAMA network open, 8(2), e2460637. https://doi.org/10.1001/jamanetworkopen.2024.60637
Corewell Health Case Study. (n.d.). Abridge. https://www.abridge.com/case-study/corewell-health
Shah, S. J., Crowell, T., Jeong, Y., Devon-Sand, A., Smith, M., Yang, B., Ma, S. P., Liang, A. S., Delahaie, C., Hsia, C., Shanafelt, T., Pfeffer, M. A., Sharp, C., Lin, S., & Garcia, P. (2025). Physician Perspectives on Ambient AI Scribes. JAMA network open, 8(3), e251904. https://doi.org/10.1001/jamanetworkopen.2025.1904
Lee, C., Britto, S., & Diwan, K. (2024). Evaluating the Impact of Artificial Intelligence (AI) on Clinical Documentation Efficiency and Accuracy Across Clinical Settings: A Scoping Review. Cureus, 16(11), e73994. https://doi.org/10.7759/cureus.73994
Stults, C. D., Deng, S., Martinez, M. C., Wilcox, J., Szwerinski, N., Chen, K. H., Driscoll, S., Washburn, J., & Jones, V. G. (2025). Evaluation of an Ambient Artificial Intelligence Documentation Platform for Clinicians. JAMA network open, 8(5), e258614. https://doi.org/10.1001/jamanetworkopen.2025.8614
Biro, J., Handley, J., Cobb, N., Kottamasu, V., Collins, J., Krevat, S., & Ratwani, R. (2025). Accuracy and Safety of AI-Enabled Scribe Technology: Instrument Validation Study. Journal of Medical Internet Research, 27, e64993. https://doi.org/10.2196/64993
Cross, J. L., Choma, M. A., & Onofrey, J. A. (2024). Bias in medical AI: Implications for clinical decision-making. PLOS digital health, 3(11), e0000651. https://doi.org/10.1371/journal.pdig.0000651
Chen, X., Wang, T., Zhou, J., et al. (2025). Evaluating and mitigating bias in AI-based medical text generation. Nature Computational Science, 5, 388–396. https://doi.org/10.1038/s43588-025-00789-7
Shuaib, A. (2024). Transforming Healthcare with AI: Promises, Pitfalls, and Pathways Forward. International Journal of General Medicine, 17, 1765–1771. https://doi.org/10.2147/IJGM.S449598
Leung, T. I., Coristine, A. J., & Benis, A. (2025). AI Scribes in Health Care: Balancing Transformative Potential With Responsible Integration. JMIR medical informatics, 13, e80898. https://doi.org/10.2196/80898