Artificial intelligence’s limitations in coronavirus response

Artificial intelligence is being used to understand and address coronavirus, but the results will only be as unbiased as the information fed into the algorithms. Credit: Daniele Marzocchi/Flickr, licensed under CC BY-NC 2.0

As the coronavirus pandemic endures, it has laid bare how race, gender and socio-economic status shape who contracts COVID-19 and who dies from it. Artificial intelligence (AI) is playing a key role in the response, but it could also be exacerbating inequalities within our health systems, a critical concern that is dragging the technology's limitations back into the spotlight.

The response to the crisis has in many ways been mediated by data—an explosion of information being used by AI algorithms to better understand and address COVID-19, including tracking the virus’ spread and developing therapeutic interventions.

AI, like its human makers, is not immune to bias. The technology, generally designed to digest large volumes of data and draw inferences to support decision-making, reflects the prejudices of the people who develop it and of the information they feed it. For example, when Amazon developed an AI tool to rank job candidates by learning from its past hires, the system absorbed the gender bias in that hiring history and downgraded résumés from women.

“We were seeing AI being used extensively before COVID-19, and during COVID-19 you’re seeing an increase in the use of some types of tools,” noted Meredith Whittaker, a distinguished research scientist at New York University in the US and co-founder of the AI Now Institute, which carries out research examining the social implications of AI.

Monitoring tools that keep an eye on white-collar employees working from home and educational tools that claim to detect whether students are cheating in exams are growing increasingly common. But Whittaker says that most of this technology