What Do We Do About the Biases in AI?

Human biases are well-documented, from implicit association tests that reveal biases we may not even be aware of, to field experiments that demonstrate how much these biases can affect outcomes. Over the past few years, society has started to wrestle with just how much these human biases can make their way into artificial intelligence systems, with harmful results. At a time when many companies are looking to deploy AI systems across their operations, being acutely aware of those risks and working to reduce them is an urgent priority.
What can CEOs and their top management teams do to lead the way on bias and fairness? We see at least six essential steps.

First, business leaders will need to stay up to date on this fast-moving field of research.

Second, when your business or organization deploys AI, establish responsible processes that can mitigate bias. Consider using a portfolio of technical tools, as well as operational practices such as internal “red teams” or third-party audits.

Third, engage in fact-based conversations about potential human biases. This could take the form of running algorithms alongside human decision makers, comparing results, and using “explainability techniques” that help pinpoint what led a model to its decision, in order to understand why the two may differ (see the sketch below).

Fourth, consider how humans and machines can work together to mitigate bias, including with “human-in-the-loop” processes.

Fifth, invest more in bias research, make more data available to researchers (while respecting privacy), and take a multidisciplinary approach to keep advancing the field.

Finally, invest more in diversifying the AI field itself. A more diverse AI community would be better equipped to anticipate, review, and spot bias, and to engage with the communities it affects.
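To make the third step concrete, here is a minimal sketch of what a fact-based comparison could look like, assuming a shared log of historical cases scored by both a model and human reviewers. It computes approval rates by demographic group for each decision maker and a simple demographic-parity gap, so any differences can be discussed with data rather than anecdotes. The column names, the tiny dataset, and the approve/deny framing are hypothetical, purely for illustration.

```python
import pandas as pd

# Hypothetical decision log: each row is one case, with a group attribute,
# the human reviewer's decision, and the model's decision (1 = approved).
decisions = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "A"],
    "human": [1, 0, 1, 0, 0, 1, 0, 1],
    "model": [1, 1, 1, 0, 1, 0, 0, 1],
})

def approval_rates(df: pd.DataFrame, decision_col: str) -> pd.Series:
    """Approval rate per group for one decision maker."""
    return df.groupby("group")[decision_col].mean()

def parity_gap(rates: pd.Series) -> float:
    """Demographic-parity gap: largest difference in approval rates across groups."""
    return float(rates.max() - rates.min())

human_rates = approval_rates(decisions, "human")
model_rates = approval_rates(decisions, "model")

# Side-by-side comparison of approval rates by group.
print(pd.DataFrame({"human": human_rates, "model": model_rates}))
print(f"human parity gap: {parity_gap(human_rates):.2f}")
print(f"model parity gap: {parity_gap(model_rates):.2f}")
```

A team could run this kind of side-by-side comparison on a sample of past cases; a larger gap on the model's side than the humans' (or the reverse) is a starting point for the conversation, not a verdict.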