People May Be More Trusting of AI When They Can’t See How It Works

If they can’t “see into” the system, they’re more apt to approach it with blind faith.

Summary. New research looked at the extent to which the employees of a fashion retailer followed the stocking recommendations of two algorithms: one whose workings were easy to understand and one that was indecipherable. Surprisingly, they accepted the guidance of the uninterpretable algorithm more often.

Georgetown University’s Timothy DeStefano and colleagues (Harvard’s Michael Menietti and Luca Vendraminelli and MIT’s Katherine Kellogg) analyzed the stocking decisions for 425 products of a U.S. luxury fashion retailer across 186 stores. Half the decisions were made after employees received recommendations from an easily understood algorithm, and the other half after recommendations from one that couldn’t be deciphered. A comparison of the decisions showed that employees followed the guidance of the uninterpretable algorithm more often. The conclusion: People may be more trusting of AI when they can’t see how it works.
A version of this article appeared in the September–October 2023 issue of Harvard Business Review.