
AI in Emergency Medicine Triage: Ethics and Humanism

Authored by: Shree Manivel

Art by: Priscilla Liu


In emergency departments nationwide, artificial intelligence (AI) is quietly reshaping patient care. The emergency department (ED) is a uniquely high-pressure environment where clinicians must decide, often within minutes, who is seen first. AI-based triage systems are emerging as tools to assist in those decisions. These systems can analyze vital signs, laboratory results, and electrocardiograms (ECGs) to identify patients who require immediate care. 


For example, a deep-learning model trained on more than 46,000 ECGs distinguished patients at high versus low risk of in-hospital cardiac arrest within 24 hours, achieving area under the receiver operating characteristic curve (AUROC) values between 0.913 and 0.948, including on external validation data. In simple terms, these values indicate excellent accuracy in distinguishing high-risk from low-risk patients. When integrated into clinical workflows, such models could allow for earlier intervention rather than prolonged reliance on standard monitoring procedures [1].
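To make the AUROC figures above concrete: the metric is the probability that a randomly chosen high-risk (positive) patient receives a higher model score than a randomly chosen low-risk (negative) patient, with ties counting half. The following is a minimal illustrative sketch in Python using made-up labels and scores, not the study's model or data:

```python
def auroc(labels, scores):
    """AUROC as a pairwise ranking probability: the chance that a randomly
    chosen positive case (label 1) is scored above a randomly chosen
    negative case (label 0), with ties counted as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical toy data: 1 = cardiac arrest within 24 h, 0 = none.
labels = [0, 0, 1, 0, 1, 1]
scores = [0.10, 0.40, 0.35, 0.80, 0.70, 0.90]  # model-assigned risk scores
print(round(auroc(labels, scores), 3))  # prints 0.667
```

A value of 1.0 would mean the model ranks every true emergency above every non-emergency; 0.5 is no better than chance, which is why values above 0.9, as in the cited study, are considered excellent discrimination.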


Similarly, in a multi-site study of emergency departments across the United States, implementing AI-based triage clinical decision support (CDS) was associated with faster movement to the initial care area, shorter ED stays, and improved identification of high-acuity patients who required critical care [2]. Hence, these studies indicate that AI-based triage tools function as decision-support systems that augment, rather than replace, clinicians’ judgment – potentially freeing physicians to focus more on direct patient interaction. 


The outcomes are promising. Yet the central challenge is not whether AI can perform clinical tasks, but whether it can be integrated in ways that preserve the ethical and humanistic foundations of care. The ethical implications of AI-based triage can be framed through the four core principles of medical ethics: autonomy, beneficence, non-maleficence, and justice [3]. 


These four principles are the basis of high-quality and ethical patient care [4]. Autonomy requires that patients and physicians retain control over decisions rather than deferring entirely to algorithms [3]. Communication in the ED is central to respecting autonomy even under time pressure. Beneficence requires a demonstrable benefit, such as faster recognition of an emergent condition. Non-maleficence requires physicians to avoid practices that would cause harm to a patient; it is commonly referred to as the principle of “do no harm.” Finally, the principle of justice requires health systems and clinicians to distribute care fairly and to ensure that AI systems do not reinforce existing disparities [4]. 

Studies indicate that AI models can reflect patterns found in their training data. If those data mirror real-world healthcare inequities, the resulting algorithm may unintentionally prioritize some patient profiles over others [5]. These challenges highlight the importance of continuous evaluation – not only to ensure accuracy when making predictions, but also to verify that decisions are made fairly [6]. To help ensure that AI-driven triage decisions support equitable outcomes for all patients, health systems should conduct regular audits, share performance results openly, and train algorithms on data that reflect the full diversity of the patient population.
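One concrete form such an audit can take is comparing the tool's sensitivity (true-positive rate) across demographic groups: of the patients who truly required critical care, what fraction did the tool flag in each group? The following is a minimal sketch of that check, using entirely hypothetical records of the form (group, true need, tool flag), not a production audit pipeline:

```python
from collections import defaultdict

def tpr_by_group(records):
    """Per-group true-positive rate of a triage flag.

    Each record is (group, label, flag), where label == 1 means the
    patient truly required critical care and flag == 1 means the tool
    flagged them. A large gap in rates between groups is a signal of
    potential underdiagnosis bias worth investigating.
    """
    flagged = defaultdict(int)   # true emergencies the tool caught, per group
    needed = defaultdict(int)    # true emergencies overall, per group
    for group, label, flag in records:
        if label == 1:
            needed[group] += 1
            flagged[group] += flag
    return {g: flagged[g] / needed[g] for g in needed}

# Hypothetical audit sample with two groups.
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 0), ("B", 1, 0), ("B", 0, 1),
]
rates = tpr_by_group(records)
# Here group A's emergencies are caught about twice as often as group B's,
# the kind of disparity a regular audit is designed to surface.
```

Real audits would use many more records, multiple metrics (false-negative rate, calibration), and statistical uncertainty estimates; the point of the sketch is only that the fairness check itself is simple and routinely computable.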


Beyond fairness, AI raises deeper questions about humanism in emergency medicine settings. Some ethicists argue that automating routine documentation and monitoring could provide clinicians with a “gift of time”: by easing logistical burdens, clinicians would have more time to listen to, respond to, and comfort their patients meaningfully [7]. However, these time savings are not guaranteed; they can easily be absorbed into higher throughput demands in institutional settings. Protecting space for compassion, therefore, depends on an institution’s priorities, and not on technology alone. 


Transparency also builds trust. When clinicians understand how AI reaches its conclusions, they can integrate it into shared decision-making. Lorenzini and colleagues (2023) propose expanding the traditional physician-patient model into a triad of patient, physician, and AI, where each party contributes to informed, collaborative choices without diminishing human autonomy [8]. 


Clinicians themselves often highlight the irreplaceable aspects of care that no algorithm can emulate. For instance, interviews with emergency medicine providers emphasize the importance of empathy, non-verbal communication, and situational awareness in recognizing distress [9]. These qualities, among others, remain essential in patient care even as AI takes on more analytical tasks. In other words, AI may assist with data interpretation, but only human clinicians can perceive the subtleties of fear, trust, and reassurance that so often define compassionate care.


As these considerations, which are informed by the principles of ethics and humanism, suggest, the future of AI in emergency medicine will depend less on the sophistication of algorithms and more on how thoughtfully they are implemented. AI-based triage stands at a crossroads. With rigorous validation mechanisms, ongoing quality monitoring, and clinical oversight, these systems have the potential to enhance safety, efficiency, and fairness – detecting emergent conditions earlier and supporting timely allocation of crucial resources. Yet, if implemented without sufficient thought, they risk narrowing medicine to metrics and nebulously defined “efficiency,” rather than prioritizing people and their needs. Ultimately, the promise of AI in emergency medicine triage lies in using innovation to bring medicine closer to its purpose – not only by improving clinical outcomes, but also by restoring humanity in every encounter. After all, medicine has always been about people: the patients, the clinicians, and the teams united by a shared commitment to care for others. Technology, and especially AI, should serve as a tool to strengthen human connection in the moments that matter most. 

 

References:

  1. Kwon, J.-M., Kim, K.-H., Jeon, K.-H., Lee, S. Y., Park, J., & Oh, B.-H. (2020). Artificial intelligence algorithm for predicting cardiac arrest using electrocardiography. Scandinavian Journal of Trauma, Resuscitation and Emergency Medicine. https://doi.org/10.1186/s13049-020-00791-0 

  2. Taylor, R. A., Chmura, C., Hinson, J., Steinhart, B., Sangal, R., Venkatesh, A. K., Xu, H., Cohen, I., Faustino, I. V., & Levin, S. (2025). Impact of artificial intelligence–based triage decision support on emergency department care. NEJM AI. https://doi.org/10.1056/AIoa2400296  

  3. Aacharya, R. P., Gastmans, C., & Denier, Y. (2011). Emergency department triage: An ethical analysis. BMC Emergency Medicine. https://doi.org/10.1186/1471-227X-11-16 

  4. Gillon, R. (2015). Defending the four principles approach as a good basis for good medical practice and therefore for good medical ethics. Journal of Medical Ethics. https://doi.org/10.1136/medethics-2014-102282 

  5. Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science. https://doi.org/10.1126/science.aax2342 

  6. Seyyed-Kalantari, L., Zhang, H., McDermott, M. B. A., Chen, I. Y., & Ghassemi, M. (2021). Underdiagnosis bias of artificial intelligence algorithms applied to chest radiographs in under-served patient populations. Nature Medicine. https://doi.org/10.1038/s41591-021-01595-0 

  7. Sauerbrei, A., Kerasidou, A., Lucivero, F., & Hallowell, N. (2023). The impact of artificial intelligence on the person-centred, doctor-patient relationship: Some problems and solutions. BMC Medical Informatics and Decision Making. https://doi.org/10.1186/s12911-023-02162-y 

  8. Lorenzini, G., Arbelaez Ossa, L., Shaw, D. M., & Elger, B. S. (2023). Artificial intelligence and the doctor–patient relationship: Expanding the paradigm of shared decision making. Bioethics. https://doi.org/10.1111/bioe.13158 

  9. Townsend, B. A., Plant, K. L., Hodge, V. J., Ashaolu, O., & Calinescu, R. (2023). Medical practitioner perspectives on AI in emergency triage. Frontiers in Digital Health. https://doi.org/10.3389/fdgth.2023.1297073




©2023 by The Healthcare Review at Cornell University
