When people talk about content moderation, they typically invoke debates about free speech and censorship. The people who actually do the moderating, however, tend to be an afterthought. Their work is largely invisible — hidden away in call center–like offices — as they filter the deluge of user-generated content on social media platforms. Even though users don’t see them, these employees are the internet’s frontline workers, facing the worst of human nature one disturbing picture or video at a time. Without them, social media companies — and their ad-driven business models — likely couldn’t exist as they do now. Yet on top of the constant exposure to disturbing content, these jobs tend to be poorly paid, contingent, and full of pressure to perform quickly and accurately. But do they have to be so bad?
Content Moderation Is Terrible by Design
Social media companies couldn’t exist in their current form without content moderation. But while these jobs are essential, they’re often low-paid, emotionally taxing, and extremely stressful: they require exposure to horrific violence, disturbing sexual content, and generally the worst of what we see (or don’t see) online. Do they have to be? Sarah T. Roberts, faculty director of the Center for Critical Internet Inquiry and associate professor of gender studies, information studies, and labor studies at UCLA, traces the evolution of this work, from early patchwork approaches relying on in-house moderators and contractors to today’s prevailing model, in which generalist contractors work in call center–like offices. There are steps companies could take to improve these jobs, including better tools for moderators, higher pay, and more psychological support. For now, though, improvement is more likely to come from worker organizing and collective demands for better conditions than from the firms that employ the moderators or the companies that need the moderation.