
Web firms pledge to tackle online hate speech

Image caption: Successive terror attacks in Europe underlined the need to tackle online extremism and hate speech, said the Commission (image source: Reuters)

Microsoft, YouTube, Twitter and Facebook have pledged to remove hate speech within 24 hours, in support of a code of conduct drafted by the EU.

The freshly drafted code aims to limit the viral spread of online abuse on social media.

It requires the firms to act quickly when told about hate speech and to do more to help combat illegal and xenophobic content.

The firms must also help "educate" users about acceptable behaviour.

The need for better ways to combat online hate speech had become more urgent in the wake of terror attacks in Belgium, said Vera Jourova, European Commissioner for Justice.

"Social media is unfortunately one of the tools that terrorist groups use to radicalise young people and racists use to spread violence and hatred," .

Hate speech and xenophobia also had a "chilling effect" on groups that sought to champion tolerance and non-discrimination, she said.

'Counter narratives'

The agreement of the four web firms was an "important step forward" in making sure the net stayed a place where free expression was possible, Ms Jourova said.

A core part of the code is the requirement to remove hateful content within 24 hours of being properly notified about it.

The tech giants have also agreed to work more closely with groups that monitor and flag violent and hateful content. They will also develop and promote "counter narratives" to challenge those who post hate speech or illegal content.

Karen White, Twitter's head of public policy for Europe, said "hateful conduct" had no place on its network and added there was a "clear distinction between freedom of expression and conduct that incites violence and hate".

The code also requires the four firms to overhaul their notification systems to ensure people can quickly report inflammatory content when they find it.

The Commission will hold regular meetings with technology firms to monitor what effect the code of conduct is having. A preliminary assessment of its effectiveness will be drawn up for the Commission's high level group on combating racism and xenophobia by the end of 2016.

The code of conduct for net firms is one of several initiatives to tackle online abuse. Other work involves research to help ISPs assess information posted online and to produce tools that can counter intolerance.