Healthcare Beyond Algorithms: The Human Side of Healthcare
Authored by: Natalia Collins
Art by: Miriam Alex
Diane was 42 when she was diagnosed with acute myelomonocytic leukemia. Treatment would have involved several rounds of chemotherapy and a bone marrow transplant, with a likelihood of survival of 25%. Diane’s diagnosis followed her previous battles with vaginal cancer, depression, and alcoholism, each of which was treated by the same doctor who later diagnosed her leukemia. Diane chose not to undergo chemotherapy, wishing to preserve her dignity and enjoy her remaining time with her family. Having been through the trials of Diane’s previous diagnoses and recognizing her ability to make an informed decision based on those experiences, her doctor respected her choice to forgo treatment and supported her in securing a painless end-of-life experience. [1]
This case demonstrates the unique patient-physician relationship and the importance of considering a patient’s life experiences and mental state in diagnosis and treatment. Recent advancements in AI, particularly its integration into patient communication, threaten the essential human-to-human interaction in medical care. Given the immaturity of AI’s decision-making processes and its inability to treat the patient holistically or empathize with the patient’s experience, AI poses a serious risk to patient care and to the unique human interactions of the medical system.
The most recent addition of AI to healthcare involves an artificial intelligence system (AIS) that is fed patient information and produces an output recommending a course of treatment. Clinicians are expected to incorporate this information into their diagnoses. However, the processes behind the AIS’s output are opaque: the system draws on immense quantities of previous patients’ information, and the methods by which it weighs an individual patient’s factors to determine a treatment plan remain uncertain. [2] A clinician cannot account for a decision made by the AIS if little is known about its algorithm.
Recent research at Cornell University has compared the cognitive maturity of AI to that of humans by analyzing framing effects, specifically how losses and gains are represented and whether the “bottom line” of a decision is grasped. The researchers found that AI aligns more closely with the cognitive decision-making processes of an adolescent, unable to explain the rationale behind its decisions or to recognize its own bias. [3] For healthcare, this means a diagnosis provided by AI lacks not only the self-reflection necessary in medical diagnosis, but also the capacity physicians have to learn and adapt over the course of their careers as they interact with more patients. [3]
In addition to these inherent flaws in critical decision-making, AI does not have a comprehensive understanding of a patient’s history and lacks the empathetic response that accompanies a physician’s diagnosis. In Diane’s case, her doctor’s insight into her background and preferences enabled a compassionate, informed care decision. Depending on a patient’s physical history or current mental condition, the appropriate treatment may vary, and a different patient with the same surface-level diagnosis might be advised to begin chemotherapy. [4] Diane’s course of treatment also involved a consideration of her mental state, given her previous health battles, and her doctor was able to honor her desire to maintain her independence and dignity toward the end of her life.
There are also questions of accountability. In cases where a treatment is flawed or leads to serious health consequences, are the developers behind the AIS accountable, or is it the physician who accepted the system’s conclusions? When a mistake is made, how is the AI adapted so that it does not repeat the error? [5] A physician who witnesses firsthand the effects of an error on their patient, and who feels the emotions that come with such a mistake, is able to learn and improve. AI lacks this emotional component of patient care and cannot improve in the way a physician’s experience allows.
A physician’s duty is unique in that it requires a holistic view of the patient, an application of their previous experiences in medical care, and an empathetic approach only possible through human interaction. AI must be integrated into medical settings carefully, in a way that respects existing physician-patient interactions and preserves the human side of healthcare.

References:
1. Quill, T. E. (1991). Death and Dignity. New England Journal of Medicine, Vol. 324, 691–694. https://doi.org/10.1056/NEJM199103073241010.
2. Smith, H. (2020). Clinical AI: Opacity, Accountability, Responsibility and Liability. AI & Society, Vol. 36, 535–545. https://doi.org/10.1007/s00146-020-01019-6.
3. Edelson, S. M., Roue, J. E., Singh, A., Reyna, V. F. (2023). How Decision Making Develops: Adolescents, Irrational Adults, and Should AI be Trusted With the Car Keys? Policy Insights from the Behavioral and Brain Sciences, Vol. 11, 11–18. https://doi.org/10.1177/23727322231220423.
4. Nagy, M., Sisk, B. (2020). How Will Artificial Intelligence Affect Patient-Clinician Relationships? AMA Journal of Ethics, Vol. 22, 395–400. https://doi.org/10.1001/amajethics.2020.395.
5. Choudhury, A., Asan, O. (2022). Impact of Accountability, Training, and Human Factors on the Use of Artificial Intelligence in Healthcare: Exploring the Perceptions of Healthcare Practitioners in the US. Human Factors in Healthcare, Vol. 2, 100021. https://doi.org/10.1016/j.hfh.2022.100021.