Responsible AI in healthcare: Addressing biases and equitable outcomes

Hitesh
thehealthco

With the rapid growth of healthcare AI, the role algorithms play in delivering fair and equitable patient care is often overlooked. I recently attended the Conference on Applied AI (CAAI): Responsible AI in Healthcare, hosted by the University of Chicago Booth School of Business. The conference brought together healthcare leaders from many facets of the business to discuss and find effective ways to mitigate algorithmic bias in healthcare. It takes a diverse group of stakeholders to recognize AI bias and make an impact on ensuring equitable outcomes.

If you’re reading this, you’re likely already familiar with AI bias, which is a positive step forward. If you’ve seen movies like The Social Dilemma or Coded Bias, then you’re off to a good start. If you’ve read articles and papers like Dr. Ziad Obermeyer’s work on racial bias in healthcare algorithms, even better. What these resources explain is that algorithms play a major role in recommending what movies we watch, what social posts we see and what healthcare services are recommended, among other everyday digital interactions. These algorithms often encode biases related to race, gender, socioeconomic status, sexual orientation, demographics and more. There has been a significant uptick in interest in AI bias: for example, the number of data science papers on arXiv mentioning racial bias doubled between 2019 and 2021.

We’ve seen interest from researchers and media, but what can we actually do about it in healthcare? How do we put these principles into action?

Before we get into putting these principles into action, let’s address what happens if we don’t.

The impact of bias in healthcare

Let’s take, for example, a patient who has been dealing with various health issues for quite some time. Their health system has a special program designed to intervene early for people at high risk of cardiovascular disease. The program has shown great results for the people enrolled. However, the patient hasn’t heard about this. Somehow they weren’t included in the list for outreach, even though other sick patients were notified and enrolled. Eventually, they visit the emergency room, and their heart condition has progressed much further than it otherwise would have.

That’s the experience of being an underserved minority, invisible to whatever approach a health system is using. It doesn’t even have to be AI. One common approach to cardiovascular outreach is to include only men aged 45 and older and women aged 55 and older. If you were excluded because you’re a woman who didn’t make the age cutoff, the result is just the same.
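To make that concrete, here is a minimal sketch of such a rule-based outreach filter, with hypothetical field names rather than any health system’s actual criteria:

    def eligible_for_outreach(sex: str, age: int) -> bool:
        # The common rule of thumb: men 45 and older, women 55 and older.
        if sex == "male":
            return age >= 45
        if sex == "female":
            return age >= 55
        return False  # anyone not matching these labels is silently dropped

    # Two hypothetical patients with a similar risk profile.
    print(eligible_for_outreach("male", 47))    # True  -> contacted
    print(eligible_for_outreach("female", 50))  # False -> never contacted

No model and no training data involved: a plain business rule produces the same invisible exclusion.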

How are we addressing it?

Chris Bevolo’s Joe Public 2030 is a 10-year look into healthcare’s future, informed by leaders at Mayo Clinic, Geisinger, Johns Hopkins Medicine and many more. It doesn’t look promising for addressing healthcare disparities. For about 40% of quality measures, Black and Native American patients received worse care than white patients. Uninsured people received worse care on 62% of quality measures, and access to insurance was much lower among Hispanic and Black people.

“We’re still dealing with some of the same issues we’ve dealt with since the 80s, and we can’t figure them out,” stated Adam Brase, executive director of strategic intelligence at Mayo Clinic. “In the last 10 years, these have only grown as issues, which is increasingly worrisome.”

Why data hasn’t solved the problem of bias in AI

No progress since the 80s? But things have changed so much since then. We’re collecting huge amounts of data. And data never lies, right? Not quite. Remember that data isn’t just numbers on a spreadsheet. It’s a record of how real people tried to address their pain or improve their care.

As we wrangle and torture the spreadsheets, the data does what we ask it to. The problem is what we’re asking it to do. We may ask the data to help drive volume, grow services or minimize costs. But unless we explicitly ask it to address disparities in care, it won’t.
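As a sketch of that point, loosely inspired by the Obermeyer study cited above and using made-up numbers: the same ranking code produces very different outreach lists depending on which label we asked the model to predict.

    # Hypothetical scores. Historical cost often understates need for
    # underserved patients, who had less access to care to begin with.
    patients = [
        {"id": "A", "predicted_cost": 9000, "predicted_need": 0.4},
        {"id": "B", "predicted_cost": 3000, "predicted_need": 0.9},  # sick, low spend
    ]

    def top_for_outreach(patients, score_field):
        # Rank by whatever target we asked the data to optimize.
        return max(patients, key=lambda p: p[score_field])["id"]

    print(top_for_outreach(patients, "predicted_cost"))  # A
    print(top_for_outreach(patients, "predicted_need"))  # B

The data did exactly what it was asked in both cases; only the question changed.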

Attending the conference changed how I look at bias in AI. Here’s how.

It’s not enough to address bias in algorithms and AI alone. To address healthcare disparities, we have to commit at the very top. The conference brought together technologists, strategists, legal experts and others, because this isn’t only a technology problem. So this is a call to fight bias in healthcare, and to lean heavily on algorithms to help. What does that look like?

A call to fight bias with the help of algorithms

Let’s start by talking about when AI fails and when AI succeeds at organizations overall. MIT and Boston Consulting Group surveyed 2,500 executives who’d worked on AI projects. Overall, 70% of these executives said that their projects had failed. What was the biggest difference between the 70% that failed and the 30% that succeeded?

It’s whether the AI project was supporting an organizational goal. To help clarify that further, here are some project ideas and whether they pass or fail.

Purchase the most powerful natural language processing solution.

Fail. Natural language processing can be extremely powerful, but this goal lacks context on how it will help the business.

Grow our primary care volume by intelligently allocating at-risk patients.

Pass. There’s a goal that requires technology, and that goal is tied to an overall business objective.

We understand the importance of defining a project’s business objectives, but what were both of these goals missing? Any mention of addressing bias, disparity and social inequity. As healthcare leaders, our overall goals are where we need to start.

Remember that successful projects start with organizational goals and seek AI solutions to support them. This gives you a place to start as a healthcare leader. The KPIs you define for your departments can include specific goals around increasing access for the underserved. “Grow volume by x%,” for example, could very well include, “Increase volume from underrepresented minority groups by y%.”

How do you arrive at good metrics to target? It starts with asking the tough questions about your patient population. What’s the breakdown by race and gender versus your surrounding communities? This is a great way to put a number and a size to the healthcare gap that needs to be addressed.
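One way to start is sketched below, with made-up shares standing in for your patient demographics and hypothetical census figures for the surrounding community:

    # Hypothetical population shares; in practice, pull the patient mix
    # from your EHR and the community mix from census data.
    patient_mix   = {"group_a": 0.70, "group_b": 0.12, "group_c": 0.18}
    community_mix = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}

    for group, community_share in community_mix.items():
        gap = community_share - patient_mix[group]
        if gap > 0:
            print(f"{group}: underrepresented by {gap:.0%} of patient volume")
    # group_b: underrepresented by 13% of patient volume

That gap is a number you can hang a KPI on, such as “Increase volume from group_b by y%.”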

This top-down focus should drive actions such as holding vendors and algorithm experts accountable for helping hit these targets. What we still need to address, though, is who all of this is for. Your patients, your community, your consumers: they are the ones who stand to lose the most.

Innovating at the speed of trust

At the conference, Barack Obama’s former chief technology officer, Aneesh Chopra, addressed this directly: “Innovation can happen only at the speed of trust.” That’s a big statement. Most of us in healthcare are already asking for race and ethnicity information. Many of us are now asking for sexual orientation and gender identity information.

Without these data points, addressing bias is extremely difficult. Unfortunately, many people in underserved groups don’t trust healthcare enough to provide that information. I’ll be honest: for most of my life, that included me. I had no idea why I was being asked for that information, what would be done with it, or even whether it might be used to discriminate against me. So I declined to answer. I wasn’t alone in this. Look at the share of people who’ve disclosed their race and ethnicity to a hospital; commonly, one in four don’t.

I spoke with behavioral scientist Becca Nissan from ideas42, and it turns out there’s not much scientific literature on how to address this. So, this is my personal plea: partner with your patients. If you’ve experienced prejudice, it’s hard to see any upside in handing over the very details people have used to discriminate against you.

A partnership is a relationship built on trust. This entails a few steps:

  • Be worth partnering with. There must be a genuine commitment to fight bias and personalize healthcare, or asking for the data is useless.
  • Tell us what you’ll do. Consumers are tired of the gotchas and spam resulting from sharing their data. Level with them. Be transparent about how you use data. If it’s to personalize the experience or better address healthcare concerns, own that. We’re tired of being surprised by algorithms.
  • Follow through. Trust isn’t really earned until the follow-through happens. Don’t let us down.