Idea in Brief
The Problem
Bias will find its way into AI and machine-learning models no matter how strong your technology is or how diverse your organization may be.
The Reason
There are many sources of biased AI, all of which can easily fly under the radar of data scientists and other technologists.
The Solution
An AI ethics committee can identify and mitigate the ethical risks of AI products that are developed in-house or procured from third-party vendors.
In 2019 a study published in the journal Science found that artificial intelligence from Optum, which many health systems were using to spot high-risk patients who should receive follow-up care, was prompting medical professionals to pay more attention to white patients than to Black patients. Only 18% of the patients the AI identified were Black, while 82% were white. After reviewing data on the patients who were actually the sickest, the researchers calculated that those figures should have been roughly 46% Black and 53% white. The impact was far-reaching: The researchers estimated that the AI had been applied to at least 100 million patients.