British Prime Minister Theresa May emerged from 10 Downing Street and stepped sombrely to a podium. The previous night, 3 June, three terrorists had driven a van into pedestrians on London Bridge and attacked people in nearby Borough Market with 12-inch knives. Eight victims were dead and 48 injured. After expressing condolences to their families, May identified two forces behind the carnage.
They were Islamic extremist views — and American social media.
“We cannot allow this ideology the safe space it needs to breed, yet that is precisely what the Internet and the big companies that provide Internet services provide,” she said.
While other European political leaders, including France’s Emmanuel Macron and Germany’s Angela Merkel, have also criticised social media for not keeping extremists off their platforms, May has gone further. Her government has made bashing Facebook, Twitter and YouTube a regular rhetorical trope following a string of terrorist attacks in the past 12 months.
May’s rhetoric coincides with a plunge in tech companies’ reputations on both sides of the Atlantic, fuelled by revelations about Russia’s use of social media to interfere in the US presidential election, antitrust concerns and anger at the companies’ use of complex offshore structures to avoid tax.
But turning Big Tech into Public Enemy Number One risks alienating the very firms whose support British police and intelligence agencies need to thwart terrorist threats, said the government’s independent terrorism watchdog, trial lawyer Max Hill.
“I struggle to see how it would help if our parliament were to criminalise tech company bosses who ‘don’t do enough’,” he told a conference in July. Technology companies, he said, “need to be brought firmly onside, they do not need to be forced offside”.
Alan West, a member of the House of Lords who previously served as an adviser to then-Prime Minister Gordon Brown, is among the politicians and security experts who say May’s government is beating up on social media companies because its own counter-terrorism strategy hasn’t worked.
The UK’s signature programme to fight radicalisation, called Prevent, aims to identify individuals with extremist views and connect them with interventions to keep them from committing terrorist acts. Yet suicide bomber Salman Abedi, who killed 22 people at a pop concert in Manchester in May, was repeatedly flagged to authorities, yet wasn’t known to Prevent, according to police. Of 7 631 referrals made to Prevent in 2015 and 2016, action was taken in only one in 20 cases, according to a Home Office report released in early November.
May’s government and her Conservative Party have pressured big tech — despite the absence of evidence that those who carried out the spate of UK attacks were radicalised on one of the large social networking platforms. (The attacker who targeted Westminster Bridge in March, Khalid Masood, used the encrypted messaging app WhatsApp, which is owned by Facebook, as well as Telegram, a smaller messaging service. The London Bridge and Borough Market attackers are suspected of having used Telegram.)
“It is not morally acceptable for tech giants to wash their hands of all liability when it comes to discussions on safety and security,” Vicky Ford, a Conservative lawmaker and co-chair of the cross-party parliamentary technology forum, said in an interview.
May has proposed a two-hour timeframe for removing terrorist content, with substantial fines should the companies not comply. If adopted into law, it would be the most stringent such rule in the world.
Sinead McSweeney, Twitter’s vice president of public policy and communications in Europe, the Middle East and Africa, said: “Terrorism is a societal problem and therefore requires a societal response.” She added: “For our part, we’ve dramatically reduced the presence of terrorist groups on Twitter.”
Twitter said in September it has increased its use of artificial intelligence to screen for terrorist content. Worldwide, it now suspends 95% of accounts that post extremist content within hours of the offending posts, and finds and removes 99% of such posts without having to rely on complaints from users. It blocks three-quarters of suspect accounts before they post a single tweet, it said in a report.
Lena Pietsch, a Facebook spokeswoman, said the company removes terrorists’ accounts and posts that support terrorism whenever it becomes aware of them, and informs authorities if needed. Facebook announced in May plans to hire thousands of additional contractors to screen for content that violates its service terms.
A spokesman for Google’s YouTube said violent extremism is a complex problem and the company is committed to being part of the solution. YouTube’s algorithms, as of November, helped spot about 83% of the terrorist-related content it removes. And in three-quarters of removals, these automated systems spot the problematic content before anyone sees it.
These efforts have helped to chase terrorist groups such as Islamic State off the best-known social media platforms, such as Twitter, and onto smaller group chat apps like Telegram and Signal, according to Peter Neumann, who runs the International Centre for the Study of Radicalisation and Political Violence at King’s College London. Rob Wainwright, the head of Europol, said that Telegram is “a major problem” and that Facebook, Twitter and “some others” were proving more cooperative.
Telegram has said in messages posted to all its users that it bans Isis content and shuts down terrorist content “within hours”. The company opposes allowing security agencies to penetrate the end-to-end encryption of its messages.
The big tech companies say their relationship with British law enforcement has never been better.
“When I have been in the room with industry leaders and government people, there has always been a quite constructive dialogue,” said Antony Walker, deputy CEO of techUK, a trade association representing technology companies. A lawmaker, speaking on condition of anonymity because of the issue’s sensitivity, also said the government has good private communication with the industry, a sense backed up by a European security source.
British home secretary Amber Rudd has conducted a series of closed-door meetings with tech executives, including in March after a terrorist attack targeted Westminster Bridge and the British Parliament. In late July, Rudd travelled to California and met with Facebook, Google, Microsoft and Twitter executives both separately and as a group.
These exchanges have done little to temper Rudd’s public stance. At the October Conservative Party conference, she again faulted social media companies for not doing enough and said she was tired of being “sneered at” and “patronised” by technologists.
The companies say complying with May’s proposed two-hour requirement would be difficult.
“We are making significant progress but removing all of this content within a few hours — or indeed stopping it from appearing on the Internet in the first place — poses an enormous technological and scientific challenge,” Kent Walker, Google’s general counsel, said at a September meeting with May.
News broadcasters, terrorism experts and anti-radicalisation activists all might have legitimate cause to use snippets of terrorist videos or other extremist content. Today’s artificial intelligence is not sophisticated enough to decipher context, Walker said.
“Machines are simply not at the stage where they can replace human judgment,” he said. — Reported by Jeremy Kahn and Kitty Donaldson, (c) 2017 Bloomberg LP