How will the Online Safety Bill work? Eight questions answered.

The Online Safety Bill now before Parliament was first proposed nearly six years ago in response to concerns over the harmful material available online and the lack of regulation around it.

In this episode of Radio 4's The Briefing Room, David Aaronovitch speaks to four experts to find out if the bill will succeed in placing more responsibility on tech firms to monitor their content and protect both adults and children.

What does the Online Safety Bill do?

Under the direction of the Department for Digital, Culture, Media and Sport (DCMS), the Online Safety Bill will regulate platforms that allow users to share content (e.g. Facebook), search engines (e.g. Google) and providers of pornographic content that is not user-generated. All of these providers would have duties relating to illegal content and to content harmful to children.

All of this would be supervised by Ofcom, the Office of Communications, which would, if needed, have the power to fine a company £18m or 10% of its annual revenue, whichever is higher, and to disrupt its business with measures such as blocking access to its sites or telling advertisers and credit card companies not to engage with the service.
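As a rough illustration of how that penalty ceiling works, here is a minimal sketch in Python. The bill ties the percentage to a company's qualifying revenue, and the revenue figures below are invented for illustration only:

```python
def max_fine(annual_revenue_gbp: float) -> float:
    """Upper bound on an Ofcom fine under the bill:
    the greater of £18m or 10% of annual revenue."""
    return max(18_000_000, 0.10 * annual_revenue_gbp)

# A company with £1bn in revenue could face up to £100m,
# while a smaller firm on £50m still faces the £18m floor.
print(max_fine(1_000_000_000))  # 100000000.0
print(max_fine(50_000_000))     # 18000000
```

The fixed £18m figure matters mainly for smaller services, where 10% of revenue would otherwise be a trivial sum.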

How will illegal content be defined and dealt with?

Illegal content includes material relating to terrorism, child sexual exploitation and abuse, and offences such as threats to kill, assisting suicide, stalking and harassment and, for example, selling knives. Platform operators will be required to carry out a risk assessment identifying these offences and then take steps to mitigate them.

In 2017, Lorna Woods, Professor of Internet Law at the University of Essex, wrote a blog post about regulating social media that ultimately brought about the Online Safety Bill. Her concept of harm was a "health and safety", risk-based approach where “you look at the thing that might be risky rather than the consequences and then work back.” Ultimately, however, the government’s approach has been to “identify categories of content that could be harmful”.

To protect adult users from harmful content, there will be a "triple shield": platforms must remove illegal content, take down material that breaches their own terms of service, and provide "user empowerment tools" (e.g. prompts such as "Did you really mean to post that?") allowing users more control over what they see and which accounts they engage with.

Meanwhile, the approach to content harmful to children will follow the risk-assessment-and-mitigation process outlined above for illegal content.

What does the Molly Russell case tell us about the bill?

Molly Russell. Photo © Ian Russell

Shortly after the first green paper on online safety came out in 2017, teenager Molly Russell was found dead at her home in Harrow. She died from an act of self-harm while suffering from depression and the negative effects of online content relating to suicide, depression and anxiety.

Molly’s death crystallised the need for regulation and is now a test for its reach. The question the case highlights is whether the bill should focus on isolated items of content – which may not individually be a problem – or on the way items are disseminated in a feed, which can make the content progressively more extreme. The current wording around children’s safety duties excludes the element of dissemination. “Some people are quite rightly worried about that,” says Lorna Woods.

Are there any other weaknesses of the bill?

Victoria Nash, senior policy fellow at the Oxford Internet Institute, believes that the bill, at 250 pages, has ultimately become inflated. One of the problems arising from that, she says, is a shift of focus towards specific rather than broader risks, exemplified by the Molly Russell case outlined above.

Meanwhile, considering the sheer range of information platforms must process, Gina Neff, executive director of the Minderoo Centre for Technology and Democracy at the University of Cambridge, laments the ditching of the "health and safety" approach to online harms, because it suggests platforms won’t be required to adopt a “systematic approach” to protecting their users.

At the other end of the scale, the legislation risks encouraging tech companies to self-censor heavily in order to avoid fines.

Is Ofcom up to the task?

The responsibility for enforcing the Online Safety Bill rests with Ofcom. Gina recognises that there will be a lot of work involved to “build capacity”, but she is “optimistic”.

“The bill currently calls for Ofcom to lead a report on independent access to social media data so that we can assess the health of the online networks outside of taking the platform companies’ word on what’s going on online,” Gina explains, “and what this bill gives a framework for is independent oversight of what’s happening when that combination of algorithm, data and user combine.”

Will age-verification be a key factor in protection from online harm?

Age verification is an obvious way for platforms to ensure content is suitable for the user. However, politicians have indicated that the bill will not require providers to impose this restriction. The implication is that companies will offer content appropriate for all age groups. Anyone wanting to go beyond that, however, may have to do more than merely self-declare their age. Meta has already announced that it will use AI information-gathering to determine age-appropriate content. Meanwhile, in EU countries, YouTube requires users wanting to view material suitable for over-18s to be logged into an account and to have provided verified ID.

Could tech providers leave the UK?

For very different reasons, tech companies WhatsApp and Wikipedia have indicated that they may stop offering their services in the UK.

WhatsApp and other encrypted messaging services are extremely unlikely to want to monitor content or open up messages, because doing so runs contrary to their core offering. Meanwhile, Wikipedia’s non-profit status will make it very difficult for it to withstand heavy fines.

Gina thinks that some companies protest too much and will – largely – stay put, but she’s more concerned that the concern over encryption “takes us off the focus of what is the main thrust of this bill – safer online environments for ourselves and our children.”

What does AI mean for the bill?

The pace of technological change is dizzying. Developments in AI have been particularly dramatic. With the Online Safety Bill starting its journey nearly six years ago, could it need an update within years of being passed?

Lorna says that this is up in the air. “I think it depends how the bill is actually implemented and the approach, because on one level, AI is just a functionality. And so, at a very simple level, you could say risk assess, risk mitigate. What that means in practice is, obviously, a whole different ball game.”

Listen to The Briefing Room: The Online Safety Bill