Taint
Why are tech companies building a multi-billion dollar industry on discredited science? Aleks Krotoski explores how our biases and outdated ways of thinking end up in our AIs.
There are many ways in which the taint of prejudice, outdated ways of thinking and plain old human error can enter our artificial intelligence systems. The weakest link is always where the sticky handprints of humans are most visible.
Training these AIs takes computer vision, to identify images precisely, and machine learning algorithms. But it also takes a person labelling images over and over and over again, so that when the AI perceives an image, it learns what it is.
In this episode, Aleks Krotoski takes a look at affect recognition and explores how it became part of a multi-billion dollar AI industry. It all comes back to a system called FACS, the Facial Action Coding System, devised by the psychologist Dr Paul Ekman.
FACS is a framework that categorises facial expressions, and it was widely adopted by artificial intelligence labs in the nineties for use in computer vision. But the science behind FACS has been disputed within the scientific community for two centuries.
From a Parisian asylum, via the tropical rainforests of Papua New Guinea, Aleks Krotoski traces the history of this controversial science and tells the story of how it ended up in our AIs.
Producer: Caitlin Smith
Broadcast
- Mon 8 Nov 2021, 16:30, BBC Radio 4
Podcast
- The Digital Human: Aleks Krotoski explores the digital world