Episode Four - AI Moderation
Zoe Kleinman explores the battle to control online content and meets the moderators trying to manage the worst excesses of the internet.
Over 500 hours of video are posted on YouTube every minute. Over 4 million photos are uploaded to Instagram every hour. There are around 500 million posts to X (formerly Twitter) every single day. These numbers are growing by the second.
How do you even begin to monitor and police such a relentless avalanche of information? In this new series, Zoe Kleinman journeys into the world of the online content moderators.
Big social media platforms rely on automation for much of the work, but they also need an army of human moderators to screen out the content that is harmful. Many moderators spend their days looking at graphic imagery, including footage of killings, war zones, torture and self-harm. We hear many stories about what happens when this content falls through the net, but we don't hear much about the people trying to contain it. This is their story.
What happens to the people doing this job? And will the work these human beings face every day soon be taken over entirely by Artificial Intelligence?
In episode four, Zoe hears how AI is trained to moderate content and how its role is evolving.
How do AI moderators tell the difference between offensive content and legitimate posting? Does AI moderation need moderating itself? And can AI really keep up with the amount of new and extreme material uploaded online every day? Zoe speaks with former moderators and tech insiders to hear how AI is coping with the challenges of monitoring the web.
Presenter: Zoe Kleinman
Producer: Tom Woolfenden
Assistant Producer: Reuben Huxtable
Executive Producer: Rosamund Jones
Sound Designer: Dan King
Series Editor: Kirsten Lass
A Loftus Media production for BBC Radio 4
Broadcast
- Today 13:45, BBC Radio 4