Ignoring race in health care AI solves nothing. There is a better option
Talk about a contrarian stance. A stats-and-analysis team from RAND Corp. says predicting a patient’s race and ethnicity can result in better health care algorithms.
It can also help clinicians minimize bias in their practices.
In so many other aspects of business, the order of the day is to ignore race, so the RAND commentary is attention-getting.
The article was written by four statisticians: Cheryl Damberg, Marc Elliott, Irineo Cabreros and Denis Agniel; and behavioral scientist Steven Martino. All have deeper bios than that. Damberg is director of RAND’s Center of Excellence on Health System Performance. Elliott holds the company’s distinguished chair in statistics.
They looked at a very specific facet of health care – patients who do not identify themselves racially or ethnically when registering for care.
Algorithms widely used in health care can produce racially biased outcomes, Elliott points out in a summary of the work, and that leads some to call for getting rid of the algorithms. He contends that doing so would throw away the good that AI continues to deliver to health care.
That would be a little like looking at the aftermath of a failed bridge and deciding the only way people can be safe is by pulling down all bridges.
Algorithms today identify cancerous skin lesions correctly more often than clinical assessments do, according to RAND. And yet it is just as true that popular heart health algorithms underestimate the risk Black patients face with heart failure.
Yes, the decision making can be returned solely to humans, but that does not eliminate bias, conscious or otherwise.
Another alternative would be to eliminate all consideration of race and ethnicity in algorithm development, creating a wishful mindset sometimes described as fairness through unawareness.
At the same time, poor performance of facial analysis algorithms for women and people with darker skin observed by computer vision researcher Joy Buolamwini has been cited repeatedly as evidence of bias in face biometrics.
There is another option, according to the RAND team: use health disparity tools, of which RAND makes a few. It is possible to measure these algorithmic inequities even if patients have not declared their full identity.
“Imputing missing or unreliable race and ethnicity data facilitates identification of algorithmic bias and makes clear what corrective measures are needed” to cut bias in treatment decisions. The team holds out the Bayesian Improved Surname Geocoding (BISG) imputation method as an example.
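The core idea behind BISG is a straightforward application of Bayes' rule: a prior over race and ethnicity derived from a patient's surname is updated with the demographic composition of where they live. The sketch below illustrates only that combination step, using entirely made-up toy probabilities; the real method draws its tables from U.S. Census surname lists and block-group data, and the function and variable names here are illustrative, not from RAND's implementation.

```python
def bisg_posterior(p_race_given_surname, p_geo_given_race):
    """Combine a surname-based prior with a geography-based likelihood.

    Implements P(race | surname, geo) ∝ P(race | surname) * P(geo | race),
    then normalizes so the posterior probabilities sum to 1.
    """
    unnormalized = {
        race: p_race_given_surname[race] * p_geo_given_race[race]
        for race in p_race_given_surname
    }
    total = sum(unnormalized.values())
    return {race: value / total for race, value in unnormalized.items()}


# Toy, invented numbers for illustration only -- real BISG uses
# Census surname frequency tables and neighborhood composition.
surname_prior = {"white": 0.05, "black": 0.05, "hispanic": 0.85, "asian": 0.05}
geo_likelihood = {"white": 0.60, "black": 0.20, "hispanic": 0.15, "asian": 0.05}

posterior = bisg_posterior(surname_prior, geo_likelihood)
```

Note how the two weak signals reinforce or temper each other: a surname strongly associated with one group still yields an uncertain posterior when the neighborhood composition points elsewhere, which is exactly the probabilistic (rather than deterministic) labeling that makes imputed data suitable for measuring disparities in aggregate.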