It’s easier to program bias out of a machine than out of a mind.
A Simple Tactic That Could Help Reduce Bias in AI
It’s well established that AI-driven systems inherit the biases of their human creators — we unwittingly “bake” biases into systems by training them on biased data or on “rules” created by experts with implicit biases. But it doesn’t have to be this way. The good news is that more strategic use of AI systems — through “blind taste tests” — gives us a fresh chance to identify and remove decision biases from the underlying algorithms, even if we can’t remove them completely from our own habits of mind. That is, we can withhold from the algorithm the information suspected of biasing the outcome, ensuring its predictions are blind to that variable. Breaking the cycle of bias in this way has the potential to promote greater equality across contexts — from business to science to the arts — on dimensions including gender, race, and socioeconomic status.
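The “blind taste test” described above can be sketched in a few lines: before any model sees the data, strip out the attribute suspected of biasing the outcome. This is a minimal illustration only — the field names (`gender`, `years_experience`, `test_score`) and the `blind` helper are hypothetical, not taken from any specific system discussed in the article.

```python
def blind(records, sensitive_keys):
    """Return copies of the records with the suspect attributes removed,
    so any downstream model never sees them."""
    return [
        {k: v for k, v in record.items() if k not in sensitive_keys}
        for record in records
    ]

# Hypothetical applicant data; "gender" is the variable we suspect
# could bias the outcome, and "name" could reveal it indirectly.
applicants = [
    {"name": "A", "gender": "F", "years_experience": 5, "test_score": 88},
    {"name": "B", "gender": "M", "years_experience": 3, "test_score": 91},
]

blinded = blind(applicants, {"gender", "name"})
print(blinded)
```

Whatever scoring algorithm is trained on `blinded` now makes its predictions without access to the withheld variable — the algorithmic equivalent of auditioning musicians behind a screen.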