
How California is Battling AI Discrimination in Healthcare

Over the past several decades, the use of AI technologies has risen sharply across many leading industries, and for good reason. AI has been applied to a wide variety of tasks, from business management and image recognition to clinical practice, where it now provides diagnoses and treatment recommendations for patients [1]. It is no wonder that big-name companies like Microsoft, Nvidia, and Intuitive Surgical Inc. have each invested millions, if not billions, of dollars into improving the accessibility, usability, and quality of AI systems in their fields [2]. These efforts have largely paid off, producing results that clinicians and administrators are likely to rely on more and more. However, these successes should not obscure the fact that AI, particularly in healthcare, still has a long way to go.


One of the primary challenges facing AI technology in healthcare is cultivating high-quality data sets for training models [3]. Put simply, AI systems rely on large sets of data to identify patterns and make recommendations. The data used to build a commercial algorithmic tool, however, may not fully represent the patient population the tool is used on, or the tool may be trained to predict outcomes that do not match the appropriate healthcare objective [3]. Moreover, the tools themselves are not fully transparent to healthcare consumers or, in some cases, to healthcare providers, especially in under-resourced settings. Without meaningful review and guidelines for use, this becomes especially dangerous because the unintended consequences fall predominantly on the most vulnerable, and often minority, patient groups.
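To make that concern concrete, one basic check a provider or regulator can run is a subgroup audit: compare how often a tool misses genuinely high-risk patients in each demographic group. The sketch below is a minimal, hypothetical Python illustration using synthetic records; the group labels, field names, and the false-negative-rate metric are assumptions chosen for illustration, not a description of any real vendor's tool or any specific hospital's process.

```python
# Hypothetical illustration: auditing a clinical risk tool for subgroup bias.
# All data below is synthetic; field names and groups are placeholders.
from collections import defaultdict

def false_negative_rate(records):
    """Share of truly high-risk patients the tool failed to flag."""
    missed = sum(1 for r in records if r["high_risk"] and not r["flagged"])
    total = sum(1 for r in records if r["high_risk"])
    return missed / total if total else 0.0

def audit_by_group(records, group_key="group"):
    """Compare false negative rates across demographic subgroups."""
    by_group = defaultdict(list)
    for r in records:
        by_group[r[group_key]].append(r)
    return {g: false_negative_rate(rs) for g, rs in by_group.items()}

# Synthetic example: a tool built on data that under-represents group B
# misses more of that group's genuinely high-risk patients.
records = [
    {"group": "A", "high_risk": True,  "flagged": True},
    {"group": "A", "high_risk": True,  "flagged": True},
    {"group": "A", "high_risk": False, "flagged": False},
    {"group": "B", "high_risk": True,  "flagged": False},
    {"group": "B", "high_risk": True,  "flagged": True},
    {"group": "B", "high_risk": False, "flagged": False},
]

print(audit_by_group(records))  # e.g. {'A': 0.0, 'B': 0.5}
```

A gap like the one in this toy output is exactly the kind of disparity that stays invisible unless someone has the resources, and the mandate, to look for it.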


This problematic behavior is seen most clearly in the COVID-19 pandemic. In one evaluation by Sutter Health, researchers linked bias in pulse oximeters, devices used to measure the oxygen saturation of blood [4], to impacts on COVID-19 clinical care. The pulse oximeters used to triage patients by severity of illness were found to work less reliably in patients with darker skin, leading to delays of more than 4.5 hours in recognizing that patients needed urgent treatment [5]. And this is not an isolated incident. Dr. Vania de la Fuente Nunez of the WHO's Healthy Ageing unit found that certain practices during the pandemic prioritized younger individuals over older ones in determining who had access to limited oxygen or beds in otherwise overcrowded ICUs [6].
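The mechanism is easy to see with a small worked example. The sketch below is purely illustrative: the 94% triage threshold and the few-point reading bias are assumptions for demonstration, not figures taken from the cited study, but they show how a device that reads high for some patients can silently push them out of the urgent-care queue.

```python
# Hypothetical illustration of how a measurement bias propagates into triage.
# The threshold and bias values are illustrative assumptions only.
URGENT_SPO2_THRESHOLD = 94  # flag for urgent care below this reading (%)

def triage(reading_spo2):
    return "urgent" if reading_spo2 < URGENT_SPO2_THRESHOLD else "routine"

true_spo2 = 92           # patient genuinely below the urgent threshold
device_bias = 3          # device reads a few points high on darker skin

print(triage(true_spo2))                # 'urgent'  -- correct decision
print(triage(true_spo2 + device_bias))  # 'routine' -- hypoxemia missed
```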


In an attempt to investigate these disparities further, California Attorney General Rob Bonta sent letters to the CEOs of hospitals across the state requesting information on how they (and other healthcare providers) are attempting to address these problems in their use of AI tools, soliciting responses by an October 15th deadline [7]. In the letter, AG Bonta requests a list of available or purchased AI tools and methodologies that “contribute to the performance of [selected] functions, the purposes for which these tools are currently used, how these tools inform decisions, policies, procedures, training, or protocols that apply to use of these tools, and the name or contact information of the person(s) responsible” for ensuring the equitable application of these tools.


Although it appears to be favorable policy, the new effort faces practical limitations that must be weighed in any overall assessment. Most pressing, the request for information could put under-resourced facilities in a difficult position: they may not have the capacity to audit their software for potential bias, yet they still face possible violations of state non-discrimination laws and related federal law. Even so, this framework can largely be considered a good first step toward addressing deep-rooted problems in modern-day technologies. Combined with increased investment in community engagement, it has the potential to increase access not only to digital health tools, but to wider health systems in general.


References

  1. Davenport, T., & Kalakota, R. (2019, June). The potential for artificial intelligence in Healthcare. Future healthcare journal. Retrieved October 27, 2022, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6616181/

  2. Artificial intelligence stocks: The 10 best AI companies. (2022, September 9). U.S. News & World Report. Retrieved October 28, 2022, from https://money.usnews.com/investing/stock-market-news/slideshows/artificial-intelligence-stocks-the-10-best-ai-companies

  3. Balasubramanian, S. (2022, September 27). AI in healthcare still has a long journey ahead. Forbes. Retrieved October 27, 2022, from https://www.forbes.com/sites/saibala/2022/09/25/ai-in-healthcare-still-has-a-long-journey-ahead/?sh=6b8328a74706

  4. Pulse oximetry. (2019, August 14). Johns Hopkins Medicine. Retrieved October 27, 2022, from https://www.hopkinsmedicine.org/health/treatment-tests-and-therapies/pulse-oximetry

  5. Fawzy A, Wu TD, Wang K, et al. Racial and Ethnic Discrepancy in Pulse Oximetry and Delayed Identification of Treatment Eligibility Among Patients With COVID-19. JAMA Intern Med. 2022;182(7):730–738. doi:10.1001/jamainternmed.2022.1906

  6. World Health Organization. (n.d.). Demographic change and healthy ageing: Combatting ageism. Retrieved October 27, 2022, from https://www.who.int/teams/social-determinants-of-health/demographic-change-and-healthy-ageing/combatting-ageism

  7. Attorney General Bonta launches inquiry into racial and ethnic bias in healthcare algorithms. (2022, August 31). State of California, Department of Justice, Office of the Attorney General. Retrieved October 27, 2022, from https://oag.ca.gov/news/press-releases/attorney-general-bonta-launches-inquiry-racial-and-ethnic-bias-healthcare

