Can internet companies monitor terrorists?

By Rory Cellan-Jones, Technology correspondent

Big American tech firms, and in particular Facebook, are under pressure to become more active in the battle against terrorism. But what are their current arrangements?

Facebook is saying little apart from the fact that "we do not allow terrorist content on the site and take steps to prevent people from using our service for these purposes".

But privately the social networking giant believes it does more than most in keeping extremist material off the site and collaborating - where the law allows - with law enforcement agencies.

The company has links on every page allowing users to report anything that breaks its rules, from pornography to extremist material. There are four centres around the world, including one in Dublin covering Europe, where staff monitor the site and handle reports that its rules are being broken.

We are told that several "accounts" used by Michael Adebowale had been deleted by Facebook after being flagged as linked to terrorism. That raises some questions - users are not allowed to have multiple accounts, so if there was a pattern of creating different profiles, using them to distribute terrorist content and then moving on, perhaps that should have been a red flag.

If we are talking about pages or groups in which he was just one member, that may have been less cause for instant concern. And what is not clear is whether these deletions happened as part of an automated process or whether staff examined the content before making a decision.
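
If the deletions were automated, the check involved need not have been elaborate. Here is a deliberately simplified sketch of how a repeat pattern of the kind described above might be flagged - every name, threshold and identity signal in it is invented for illustration, and none of it describes Facebook's actual systems:

```python
# Hypothetical sketch only: flag an identity whose accounts are repeatedly
# removed for extremist content. Names and thresholds are invented; this
# does not describe Facebook's real systems.
from collections import defaultdict

REMOVAL_THRESHOLD = 2  # invented cut-off for escalating to a human reviewer

# Maps some real-world identity signal (say, a contact email) to the
# accounts removed for terrorist content that were tied to it.
removals = defaultdict(list)

def record_removal(identity: str, account_id: str) -> None:
    """Note that an account tied to this identity has been removed."""
    removals[identity].append(account_id)

def should_escalate(identity: str) -> bool:
    """A create-post-get-removed pattern could be treated as a red flag."""
    return len(removals[identity]) >= REMOVAL_THRESHOLD

record_removal("someone@example.com", "account_1")
record_removal("someone@example.com", "account_2")
print(should_escalate("someone@example.com"))  # True - repeated removals
```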

But it is the message in which Adebowale talked in graphic terms about plans to kill a soldier that is at the heart of the controversy. This was not spotted by Facebook. The question is whether the company, having already identified him as someone sharing extremist content, should have continued to monitor him and then alerted the authorities.

I understand that when it does spot a clear and present threat to commit a serious crime - say, a man talking about murdering his wife - the company does contact the authorities.

But with 1.3 billion users around the world, and a sizeable number likely to be in conflict with their governments, the volume of cases which could be labelled as potential terrorist threats is probably quite high.

Systems that scanned every message for keywords relating to terrorism would certainly be technically feasible. But alerting the authorities every time something questionable popped up would be a huge step for Facebook or any other internet company to take.
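
Indeed, a crude version of such scanning takes only a few lines of code. This sketch uses an invented watchlist and is nothing like a production system, but it shows both how simple the matching is and how readily it throws up false alarms:

```python
# Minimal sketch of keyword scanning against an invented watchlist.
# Real systems would be far more sophisticated; this only illustrates the idea.
import re

WATCHLIST = {"attack", "bomb", "kill a soldier"}  # invented example phrases

# Build one case-insensitive pattern matching any watchlisted phrase.
pattern = re.compile("|".join(re.escape(k) for k in WATCHLIST), re.IGNORECASE)

def flag_message(text: str) -> bool:
    """Return True if the message contains any watchlisted phrase."""
    return bool(pattern.search(text))

print(flag_message("The film was a bomb at the box office"))  # True - a false alarm
print(flag_message("Meet for lunch tomorrow?"))               # False
```

With 1.3 billion users, even a tiny false-alarm rate would mean a flood of reports - which is precisely the volume problem described above.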

They will point out that they are global companies, and if they have that kind of relationship with the British authorities, then governments everywhere from Russia to Egypt to Uzbekistan will want a similar deal. And those governments' definitions of what constitutes terrorism may well differ from those in the UK or US.

It is also the case that plenty of extremist chatter has already moved from the likes of Facebook and Twitter to more obscure networks, which the intelligence agencies will find even harder to monitor. That process is likely to accelerate if terrorists decide the big social networks are not secure from surveillance.

Facebook and the other tech giants are facing conflicting pressures. Since Edward Snowden's revelations about the extent of surveillance, they have been trying to reassure users that they do not simply hand over material to the authorities without due legal process. But now they may find that new legislation forces them to be more active participants in the battle against terrorism and a little less concerned about their users' privacy.