
The Irreplaceability of Experience and Reflection

Author: Lauren Wilkes


A New York Times op-ed published in January 2026, written by a physician, offers much to consider regarding the use of AI in medicine. The author posits that physicians routinely consult other doctors and may or may not follow their advice, inherently accepting a small degree of risk as to whether the chosen course of action will benefit or harm the patient; why, then, should we not extend the same leeway to AI? I think this argument frames the debate, both its positive and negative sides, extremely well, so for the purposes of this discussion I will address several of the claims it makes and implies.


AI can be extremely beneficial to the medical field in areas such as equity and efficiency. However, I would argue there is a risk in treating a consultation with AI as equivalent to consulting a fellow physician. If AI were useful solely for consultations, we would not need it; as the op-ed itself admits, we have physicians for that. So, beyond efficient data collection, summaries, and reports, what need do we have for AI in consultations if not improved accuracy in decision making? It therefore seems reasonable to hold AI to a higher standard than we hold human doctors, a position the op-ed's author explicitly rejects. Further, AI does not share humans' reflective capabilities except to the extent it is trained to approximate them. Given this shortcoming, why should we not hold it to a higher standard and expectation when it produces presumably more accurate assessments that doctors may rely on?


AI has a lot of potential to be positively integrated into clinical practice. It can reduce economic barriers, create comprehensive summaries of patient interviews and data, and offer potentially very accurate suggestions on treatment or on the assessment of results. Even so, its output should be assessed harshly and critically: it should not be consulted with the same ease as taking a fellow physician's suggestion, but rather heavily analyzed before one chooses to align with its decision. AI can draw very accurate conclusions from the data it is given and what it is taught. Human doctors, however, although inherently flawed, can make and revise decisions based on experience... human experience. In my opinion, there will never be a replacement for life and professional experience, which teaches lessons and provides guidance that can never be quantitatively input or trained into a model.



©2023 by The Healthcare Review at Cornell University

