Why Business Leaders Need to Understand Their Algorithms

One of the biggest sources of anxiety about AI is not that it will turn against us, but that we simply cannot understand how it works. The solution to rogue systems that discriminate against women in credit applications, make racist recommendations in criminal sentencing, or reduce the number of black patients identified as needing extra medical care might seem to be “explainable AI.” But sometimes, knowing “why” an algorithm made a decision matters less than being able to ask “what” it was being optimized for in the first place.
Knowing “why” is important in many industries, particularly those with fiduciary obligations, like consumer finance, or in healthcare and education, where vulnerable lives are involved. Leaders will increasingly be challenged by shareholders, customers, and regulators on what they optimize for. There will be lawsuits that require you to reveal the human decisions behind the design of your AI systems, what ethical and social concerns you took into account, the origins and methods by which you procured your training data, and how well you monitored the results of those systems for traces of bias or discrimination.

Document your decisions carefully, and make sure you understand, or at the very least trust, the algorithmic processes at the heart of your business. Simply arguing that your AI platform was a black box no one understood is unlikely to be a successful legal defense in the 21st century. It will be about as convincing as “the algorithm made me do it.”