
The Dangers of Implementing Artificial Intelligence in Healthcare

Authored by: Kenneth Li

Art by: Laura Lee


Artificial intelligence (AI) has developed rapidly in the past decade, and discussions of investment and implementation now dominate many professional fields, especially healthcare. The international AI healthcare market is projected to grow swiftly at a 47.6% compound annual growth rate, from 11.2 billion USD in 2023 to 427.5 billion USD in 2032, with investment spearheaded by leaders including Google, IBM, Johnson & Johnson, Amazon, and Microsoft [1]. AI in healthcare is increasingly used for many purposes, such as diagnosis, data analysis, and patient care [2].
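As a quick arithmetic check on the projection above: a compound annual growth rate implied by two market-size endpoints follows CAGR = (future / present)^(1/years) − 1. The sketch below is illustrative only; it computes the rate implied by the cited 2023 and 2032 values, which comes out near 50% (the source report cites 47.6%, likely computed from unrounded endpoints).

```python
def implied_cagr(present: float, future: float, years: int) -> float:
    """Return the constant annual growth rate taking `present` to `future` over `years` years."""
    return (future / present) ** (1 / years) - 1

# Figures cited in the text: 11.2B USD (2023) -> 427.5B USD (2032), i.e. 9 years.
rate = implied_cagr(11.2, 427.5, 2032 - 2023)
print(f"Implied CAGR: {rate:.1%}")  # ~50%; the report's 47.6% likely reflects unrounded endpoints
```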


However, both physicians and patients have expressed concerns about implementing AI in healthcare, including threats to patient confidentiality, AI's tendency toward bias against underrepresented groups, AI error, and ambiguous accountability when AI use in healthcare leads to harm [3]. In a survey of 112 physicians, 78.3% believed AI can make healthcare more efficient, but 81.74% reported that they would trust a doctor's judgement over that of AI, 69.6% believed AI could not assist with abnormal medical circumstances, and 67.9% reported that AI would be deficient in empathizing with patients and their wellbeing [4].


Like physicians, patients are skeptical of implementing AI technologies in healthcare. Patients will be most affected by the incorporation of AI into healthcare because most AI technologies are aimed at improving the patient experience. While some patients are optimistic about AI's ability to improve their healthcare experience, many prefer that AI be limited to administrative duties that do not involve diagnosis or treatment. Witkowski et al. (2024) reported that while 84.2% of surveyed patients were comfortable with AI performing clerical work such as appointment scheduling, only 33.7% were comfortable with AI's involvement in providing prescribed treatments [5].


As physicians and patients navigate AI applications implemented by healthcare organizations, there is ample support for exercising caution. Using AI is not without drawbacks, as handling patient data carries an elevated risk of breaching confidentiality. Machine learning depends heavily on existing data, and many leaders in AI, including Google, IBM, and Microsoft, are exploiting user information sharing agreements to access patient data [6]. Additionally, AI is capable of making mistakes and providing false or misleading information. These errors can be attributed to a variety of factors, including a flawed chain of reasoning or deficiencies in the data the AI was trained on. This points to another issue with implementing AI in healthcare: AI can be biased when its training data is not sufficiently broad, which disproportionately affects women, LGBTQIA+ individuals, and ethnic and racial minorities, groups that are often underrepresented in datasets [7, 8]. For example, Marotta (2022) notes that medications, especially anesthetics and cardiovascular treatments, are disproportionately tested on men compared to women, leaving less data on how they affect women [7].


Biased data increases the possibility of erroneous medical decisions regarding underrepresented groups and, correspondingly, the risk of harm to patients. If the use of AI in healthcare leads to harm, the question of who is accountable for damages further complicates matters. The black-box problem, unique to AI, refers to AI's inability to provide clear context and support for its findings. Because of it, physicians using AI, companies developing AI for healthcare, and patients agreeing to the judgement of AI are all implicated in legal liability, and navigating accountability becomes difficult [8]. There is also a concern that AI will completely replace physicians, as AI automation threatens job loss in many professional fields, with speculation that AI could gradually take over the diagnosis, data analysis, and patient care typically performed by human physicians [3].


While implementing AI in healthcare entails many dangers that physicians and patients rightfully have reservations about, there are potential solutions that could reduce the risks and help both groups take advantage of AI. Improving AI customization can help physicians and patients control the extent to which AI is involved in their relationship. Enhancing AI's empathy and its ability to account for bias in its training data can also improve the patient experience, since understanding a patient's dilemma is important for diagnosing what a patient needs. Additionally, educating people about AI is important for raising awareness of how AI can be both helpful and dangerous [9]. The current limits of AI capability and access to data, as well as AI controls backed by medical regulation and health insurance, also serve as checks restricting the spread of AI implementation, at least for now. Furthermore, the unique human touch that physicians possess could be difficult for AI to replicate; instead, AI might complement it by taking care of time-consuming tasks and improving quality of life for both physicians and patients [10].


Overall, AI investment and implementation in healthcare are expanding rapidly, with the market value increasing and uses in diagnosis, data analysis, and patient care being tested. However, AI in healthcare poses dangers, including patient confidentiality breaches, bias, error, and unclear liability when AI implementation causes harm. While there are potential solutions for mitigating these risks so that physicians and patients can better take advantage of AI, caution should still be employed.



References

  1. Faiyazuddin, M., Rahman, S. J. Q., Anand, G., Siddiqui, R. K., Mehta, R., Khatib, M. N., Gaidhane, S., Zahiruddin, Q. S., Hussain, A., & Sah, R. (2025, Jan 5). The Impact of Artificial Intelligence on Healthcare: A Comprehensive Review of Advancements in Diagnostics, Treatment, and Operational Efficiency. Health Science Reports, 8(1), e70312. https://doi.org/10.1002/hsr2.70312

  2. Dubuc, A., & Cousty, S. (2025, June 24). Healthy AI: Balancing opportunity and danger. Journal of Oral Medicine and Oral Surgery. https://www.jomos.org/articles/mbcb/full_html/2025/03/mbcb250074/mbcb250074.html 

  3. Rashidi, P., Kilic, A., Kline, A., Liu, T., McCarthy, P. M., Johnston, D. R., & Sade, R. M. (2025, April 23). Artificial intelligence and machine learning in cardiothoracic surgery: Future prospects and ethical issues. The Journal of Thoracic and Cardiovascular Surgery, S0022-5223(25)00329-0. Advance online publication. https://doi.org/10.1016/j.jtcvs.2025.04.029

  4. Reffien, M. A. M., Selamat, E. M., Sobri, H. N. M., Hanan, M. F. M., Abas, M. I., Ishak, M. F. M., Azit, N. A., Abidin, N. D. I. Z., Hassim, N. H. N., Ahmad, N., Rusli, S. A. S. S., Nor, S. F. S., & Ismail, A. (2021, April 24). Physicians' attitude towards artificial intelligence in medicine, their expectations and concerns: An online mobile survey. Malaysian Journal of Public Health Medicine. https://mjphm.org/index.php/mjphm/article/view/742

  5. Witkowski, K., Dougherty, R. B., & Neely, S. R. (2024, June 22). Public perceptions of artificial intelligence in healthcare: Ethical concerns and opportunities for patient-centered care. BMC Medical Ethics. https://link.springer.com/article/10.1186/s12910-024-01066-4

  6. Murdoch, B. (2021, September 15). Privacy and artificial intelligence: Challenges for protecting health information in a new era. BMC Medical Ethics. https://link.springer.com/article/10.1186/s12910-021-00687-3

  7. Marotta, A. (2022, June 20). When AI is wrong: Addressing liability challenges in women's healthcare. Journal of Computer Information Systems. https://www.tandfonline.com/doi/full/10.1080/08874417.2022.2089773

  8. Aung, Y. Y. M., Wong, D. C. S., & Ting, D. S. W. (2021, August 17). The promise of artificial intelligence: A review of the opportunities and challenges of artificial intelligence in healthcare. British Medical Bulletin, 139(1). https://academic.oup.com/bmb/article/139/1/4/6353269

  9. Chew, H. S. J., & Achananuparp, P. (2022). Perceptions and Needs of Artificial Intelligence in Health Care to Increase Adoption: Scoping Review. Journal of Medical Internet Research, 24(1), e32939. https://doi.org/10.2196/32939

  10. Davenport, T., & Kalakota, R. (2019). The potential for artificial intelligence in healthcare. Future Healthcare Journal, 6(2), 94–98. https://doi.org/10.7861/futurehosp.6-2-94



©2023 by The Healthcare Review at Cornell University
