AI + Health: How to Prioritize Humans

Sonoo Thadaney Israni says AI developers in the healthcare space should use a justice, equity, diversity, and inclusion lens during design and development of their products.

Image: A nurse interacts with a patient in a hospital bed.

After 25 years in high tech and 13 more engaging with a diverse array of scholars at Stanford, Sonoo Thadaney Israni sees an alarming risk of techno-chauvinism at the intersection of healthcare and AI. “We have a hammer in our hand, and everything looks like a nail,” she says.

At Stanford Medicine’s AI + Health conference scheduled for December 8-9, 2021, Thadaney Israni, who is the executive director of Presence and the Program in Bedside Medicine/Stanford 25, will lead four separate panel discussions. Here, she introduces some of the themes the panels will touch upon.

Two of your session titles for the upcoming conference use the phrase “Human Prioritized Healthcare AI.” What does that mean to you?

In healthcare, it is a human who needs care. That person is surrounded by friends and family who help support them in that care. A human clinician conducts the differential diagnosis, prepares a treatment plan, and performs the intervention. And that clinician is surrounded by another set of humans who support them in the work they do. So if all we do is take a problem and ask what it takes to be efficient, as narrowly focused AIs often do, and we prioritize that over the humans involved, we end up solving very narrow problems with a very narrow lens. 

Prioritizing humans also requires us to struggle with the implications of taking our racist, sexist, homophobic, ageist, and “othering” past and automating inequality into the future, which is what an AI system using historical data to predict the future will do. My fear is that by moving ahead with AI systems that do things faster, better, and cheaper, we will further exacerbate healthcare inequities. There will be concierge care for those of us with privilege and kiosk care for those who don’t have it; worse still, that kiosk care will prioritize efficiency and cost and will likely be based on data that doesn’t represent the populations it targets.

If we take the time to engage with and understand the messiness of humans and the needed struggle for justice, we will ultimately provide more equitable healthcare. After the 2020 summer of race-reckoning, there’s been widespread commitment to JEDI – justice, equity, diversity, and inclusion. To ensure we walk the walk and not just talk the talk, we must ensure impact beyond intent.

Are there instances where AI is already prioritizing humans as well as justice, equity, diversity, and inclusion?

The applications I have seen don’t prioritize humans or justice and equity from the get-go. I have only seen them do so in retrospect. For example, an algorithm that used patients’ past healthcare costs to predict future health needs turned out to have racist outcomes, as was pointed out in the Obermeyer study published in Science. The problem was corrected, but that was after the fact.
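
To make that label-choice problem concrete, here is a minimal, hypothetical sketch in Python. It is not the Obermeyer study’s data or method; the group labels, the access penalty, and the 10 percent enrollment budget are all illustrative assumptions. It simply shows how ranking patients by predicted cost, rather than by underlying need, under-selects a group that historically received less care.

```python
import numpy as np

# Hypothetical illustration of "label choice" bias: using past cost as a proxy
# for health need under-prioritizes patients who are equally sick but
# historically accrued lower costs because of reduced access to care.
rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, size=n)       # 0 = historically well-served, 1 = under-served (assumed)
true_need = rng.gamma(2.0, 1.0, size=n)  # unobserved health need

# Assumption: equal need, but the under-served group accrues 40% lower costs.
access = np.where(group == 1, 0.6, 1.0)
observed_cost = true_need * access + rng.normal(0, 0.1, size=n)

budget = int(0.10 * n)  # suppose a care program can enroll the top 10%
by_cost = np.argsort(-observed_cost)[:budget]  # what a cost-trained model ranks highest
by_need = np.argsort(-true_need)[:budget]      # what ranking by actual need would pick

print(f"Under-served share when ranked by cost: {group[by_cost].mean():.2f}")
print(f"Under-served share when ranked by need: {group[by_need].mean():.2f}")
```

In this toy simulation, ranking by true need enrolls a group that is roughly half under-served, while ranking by cost drops that share noticeably – the kind of pattern the study surfaced only after deployment.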

The reality is that given the nature of AI and machine learning, and given the prioritization of market share and profit in our world, if we rely on whistleblowers and postmortems to correct AI injustices, we’ll end up with even more widespread injustice, continued inequity, and chaos.

Presence, the Stanford organization you co-lead, “champions the human experience in medicine.” Do you think it’s possible for AI to do the same?

It is definitely possible. It’s just a question of priorities and a moral rudder. Eric Topol, who is the keynote speaker for the AI + Health conference, wrote a book called Deep Medicine, in which he beautifully articulates the key role a trusted doctor or clinician plays in bolstering a patient’s sense that the pain they are enduring will pass. In his view, AI can help restore the sense of human caring that we all seek when we are sick by freeing doctors and other healthcare professionals from the minutiae they have to handle in the clinical environment.

And when it comes to justice and equity, there might be ways that AI can help as well by providing smart alerts to give us pause, because we’re all human. And when I say we’re all human, I mean we’re all biased. We’re all a function of our own experiences and of our own knowledge. It doesn’t make us bad people to have biases. It makes us human to have biases. The question is whether we have checks in the system to make us pause to consider whether our biases are going to cause unintended harm.

For instance, one of my colleagues, Samantha Wang, who will be on two of the panels at the AI + Health conference, has been working with me on the Five-Minute Moment for Racial Justice. It uses short narratives as part of a medical education curriculum to point out historical injustices and how they have contributed to current standards in medicine, and it outlines a path forward.

I'll give you a real-life example that we use in that curriculum. We talk about Bob Marley, the musician. The story is that he had a lesion on his toe and went to two medical teams, who said it came from playing football without shoes and socks. Sometime later, he received a biopsy and was diagnosed with acral lentiginous melanoma (ALM), which is rare but is the most common type of melanoma in people with dark skin. He died of the illness at age 36. The medical textbooks of his time, and even those of the present day, have very little representation of dermatologic conditions on dark skin. But there now exist websites and resources that present photographic evidence of what various dermatologic issues look like on darker skin. So, if doctors know they are going to see a darker-skinned patient with a dermatologic issue, perhaps an alert could suggest they pause and visit a few of those resources to get a better understanding of what the condition might look like – or even request a consult from others who have more experience and wisdom in that space.

That’s the idea – to have smart alerts that nudge doctors to consider whether their training and knowledge might be limited in a way that could exacerbate existing inequities, or that nudge them to check potential personal biases. The nudge would cause a clinician to pause and take five minutes to consider reframing and reorientation, additional knowledge, non-judgmental language, and more.
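
As a rough illustration, a nudge like that could start as a simple rule in a clinical decision-support layer. The sketch below is a hypothetical Python example; the field names (chief_complaint, skin_tone), the Fitzpatrick-style categories, and the wording of the prompt are assumptions for illustration, not an existing EHR alerting API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Encounter:
    chief_complaint: str
    skin_tone: str  # e.g., self-reported Fitzpatrick-style category "I"-"VI" (assumed field)

# Complaints that would trigger the dermatology nudge (illustrative list).
DERM_COMPLAINTS = {"rash", "lesion", "skin discoloration"}

def five_minute_nudge(encounter: Encounter) -> Optional[str]:
    """Return a non-judgmental prompt asking the clinician to pause, or None."""
    if (encounter.chief_complaint.lower() in DERM_COMPLAINTS
            and encounter.skin_tone in {"V", "VI"}):
        return ("Pause: dermatology references under-represent darker skin. "
                "Consider reviewing an image resource for darker skin tones or "
                "requesting a dermatology consult before finalizing the assessment.")
    return None

# Example: a darker-skinned patient presenting with a lesion triggers the nudge.
print(five_minute_nudge(Encounter(chief_complaint="Lesion", skin_tone="VI")))
```

In practice, the prompt would link to curated image resources and the five-minute reframing exercise rather than a hard-coded string; the point is the pause, not the rule itself.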

Are doctors being trained or should they be trained to somehow understand when an AI system is or is not reliable, safe, and effective?

I think that’s a challenge. There’s this concept of a moral crumple zone, whereby moral responsibility is attributed to the wrong person or entity. In the context of AI and healthcare, it looks like this: A tech company comes up with an AI application; decision makers at a hospital or clinic opt to deploy it; and then clinicians at the frontline – doctors, nurses, physician assistants – use the system. But from the clinician end-user’s perspective, it’s a black box; they don’t really know what the algorithm does – they’re not trained in AI and machine learning. So if the system makes an error, there is a question of who is responsible and who is liable. How can the frontline clinician be responsible for the error when they don’t know how the algorithm was framed, what data it was trained on, or what it actually does?

So rather than deploy an AI system across an entire healthcare system, some of us have proposed the idea of an AI consult. Just as the general medicine team brings in specialists in different fields, there could be a cadre of professionals who are trained in both medicine and in data science/AI, and they would be brought in for a consultation as appropriate – assuming the AI system also lives up to being human-centered and ensuring justice and equity.

How can we ensure that AI doesn’t exacerbate inequities in healthcare? 

Using AI to do things faster, better, and cheaper and to gain market share is only going to further exacerbate the inequities we already have. To make sure that doesn’t happen, we’re going to need governance and regulation, as well as a rethinking of the way technology products are developed. Just as there are checks and balances for finance, for legal, and for risk management, the healthcare AI product development cycle should require explicitly evaluating every product for justice, equity, diversity, and inclusion before it can advance to market. And the people doing that evaluation should include not only experts but a wide range of patients and caregivers who represent the diversity of the populations being served. Their task would be to consider whether the product or service will solve problems or create problems for justice, equity, diversity, and inclusion. And just as a product won’t make it out the door to the next step if it’s not financially or legally viable, it shouldn’t make it out the door without a justice, equity, diversity, and inclusion approval. That’s the kind of moral rudder and self-governance the tech industry must aspire to and deliver.

And for this shift to happen – for governance and regulation to catch up, and for the tech industry to be accountable to a moral rudder – I’m advocating for a reflective pause where we stop and ask who’s at the tables of power and decision making, what decisions are being made, how those decisions are being framed, and how we should proceed. The alternative is to wait for a PR fiasco, a whistleblower, or a government hammer that comes down and breaks things up.

Are we saying that we as humans are not capable of having that tiny amount of foresight to pause and do this right rather than constantly waiting for hindsight and postmortems? That would be a very bleak world. So I do think it’s possible. As Margaret Mead put it, “Never doubt that a small group of thoughtful, committed individuals can change the world. In fact, it’s the only thing that ever has.”
