Trusted AI in Health Care

The director of the Centers for Disease Control and Prevention, Dr. Rochelle Walensky, recently declared racism in America a “serious public health threat,” pointing to the disproportionate impact of COVID-19 on racial and ethnic minorities. It’s true that the pandemic has not only exposed the fault lines of inequality in this country, but also widened them at great cost. In response, the CDC plans to make investments to establish a “durable infrastructure” to “provide the foundation and resources to address disparities related to COVID-19 and other health conditions.”

While the director’s statement did not specify what those investments would include, the use of artificial intelligence should be among the top priorities.

The pandemic has coincided with unprecedented interest in AI: The number of scientific publications mentioning “AI” went up almost 35 percent in 2020. Among them were over 400 preprint or published papers on applying AI to COVID. Independent evaluators have found that none of the AI COVID models proposed in those papers are of clinical use, in large part due to the potential for underlying bias, which these papers did not address or attempt to mitigate.

However, we need to be clear: It does not have to be this way. AI is a powerful tool, and when used properly, one that can be wielded against bias. In fact, AI can become a foundational element of that “durable infrastructure” that helps the CDC and our society tackle racism as a public health crisis in America.

Bias in AI is detectable and measurable, and it can be mitigated through mathematical processes. Compare that to the cumulative decision-making of hundreds of individual people, each with potentially unconscious biases manifesting in the choices they make. While unconscious bias is tough to combat at a systemic level, AI bias is a technical challenge: minimizing the role of factors that should not matter to a decision, and measuring what kinds of predicted outcomes are observed across different groups of people. When an AI solution is deployed, its impact is immediate and system-wide, achieving a consistency at scale that is the definition of durable change.
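To make that measurement concrete, here is a minimal sketch of one such check: comparing a model's predicted outcomes across groups and computing a simple parity ratio. The data, column names and "flagging" model are hypothetical, and a real audit would use several complementary fairness metrics rather than any single number.

```python
import pandas as pd

def group_positive_rates(df: pd.DataFrame, group_col: str, pred_col: str) -> pd.Series:
    """Share of positive model predictions within each group."""
    return df.groupby(group_col)[pred_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest to the highest group rate; 1.0 means parity."""
    return float(rates.min() / rates.max())

# Hypothetical output of a triage model: which patients it flags for follow-up care
predictions = pd.DataFrame({
    "race_ethnicity":       ["A", "A", "A", "B", "B", "B", "C", "C"],
    "flagged_for_followup": [  1,   0,   1,   1,   1,   0,   0,   1],
})

rates = group_positive_rates(predictions, "race_ethnicity", "flagged_for_followup")
print(rates)                                    # per-group rate of positive predictions
print(round(disparate_impact_ratio(rates), 2))  # how far the model is from parity
```

A ratio well below 1.0 would flag the model for closer review and mitigation before it is ever deployed, which is exactly the kind of systematic check that individual human judgment does not get.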

The DataRobot Applied Ethics Team independently conducted an analysis of the uneven disease burden of COVID-19 in the United States and came to the same conclusions as the CDC. Infection, hospitalization and mortality rates paint a picture of the vulnerabilities of racial and ethnic minorities. Experts at the CDC trace these impacts back to what are called the social determinants of health: access to health care, occupation and job conditions, housing instability and transportation challenges. When I used AI to explain the county-level disparities in COVID-19 outcomes, the models surfaced the same socioeconomic variables as key predictors: occupation, household size and uninsured rates.
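For readers curious what that kind of county-level analysis can look like in practice, below is a rough sketch of one common approach: fit a model to county outcomes and rank candidate predictors by permutation importance. Everything here is illustrative, including the synthetic data, the column names and the choice of model; it is a stand-in for the idea, not the DataRobot team's actual pipeline.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a county-level table; a real analysis would join
# census socioeconomic indicators with reported COVID-19 outcomes per county.
rng = np.random.default_rng(0)
n = 500
counties = pd.DataFrame({
    "frontline_occupation_share": rng.uniform(0.1, 0.5, n),
    "avg_household_size":         rng.uniform(2.0, 4.5, n),
    "uninsured_rate":             rng.uniform(0.02, 0.25, n),
    "public_transit_share":       rng.uniform(0.0, 0.4, n),
})
counties["cases_per_100k"] = (
    8000 * counties["uninsured_rate"]
    + 1500 * counties["avg_household_size"]
    + 4000 * counties["frontline_occupation_share"]
    + rng.normal(0, 300, n)
)

features = list(counties.columns[:-1])
X_train, X_test, y_train, y_test = train_test_split(
    counties[features], counties["cases_per_100k"], random_state=0
)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does accuracy drop when each feature is
# shuffled? Larger drops mean the model leans more heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```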

Applied appropriately, AI can help expose and explain existing inequities in health care. To go one step further, it can be used to help heal them. Consider the role of human racial bias in medical decision-making. Studies as recent as 2016 have found that Black patients’ pain is perceived as less severe than that of white patients; in one such study, 40 percent of medical trainees believed that Black patients’ skin “is thicker than white people’s.” This phenomenon is exacerbated for Black women in America, who are four to five times more likely than white women to die of pregnancy-related complications, deaths that are largely preventable.

What if the physician in that scenario had a finely tuned AI recommendation in hand, one trained directly on curated patient data that had been carefully assessed for bias, with that bias mitigated? AI built into decision-support systems in medicine can act as a guardrail against the unconscious forces that may sway a physician’s recommendation. Sometimes all it takes is a second look.

AI has enormous potential in health care: opportunities to discover key biomarkers that enable precision medicine, to augment diagnostic readings of patient scans with machine vision, and to improve our logistical systems of care and triage by proactively reaching out to the patients most in need.

With equal urgency, AI can be applied to diagnose, and serve as part of the treatment plan for, disparities in health in America. We will need AI in health care to be held to a higher standard: datasets assessed for representativeness and bias, models interpreted and validated against clinical understanding, and systems integrated into practice with an emphasis on accessibility and equity. The CDC is right to recognize racism as a serious public health threat, and it would be a mistake to overlook our most potent recent technological innovation in the fight to overcome it.

 

Dr. Haniyeh Mahmoudian is the global AI ethicist for DataRobot and a leader of the company’s Applied AI Ethics Team.
