Comment

The era of self-regulation for technology companies is coming to an end

The time has come for the tech companies to be properly accountable, argues Jeremy Wright. Credit: Dominic Lipinski/PA Wire

The safety of the internet has been at the forefront of people’s minds in recent weeks. We have all heard tragic stories of young and vulnerable people being negatively influenced by social media. I remain a firm believer that technology has the power to do good. But it is clear that things need to change. With power comes responsibility, and the time has come for the tech companies to be properly accountable.

The Telegraph has been at the forefront of this debate, calling for change through its duty of care campaign. This is something that the Home Secretary and I are seriously considering as part of our forthcoming White Paper on online harms.

Tackling online harms is also why the Minister for Digital, Margot James, and I are flying to San Francisco today to talk to some of the world’s biggest technology firms, including Facebook, Twitter, Google and Apple. We will be making clear to them that we won’t stand by and see people unreasonably and unnecessarily exposed to harm. If it wouldn’t be acceptable offline, then it should not be acceptable online.

Safety is at the forefront of almost every other industry. The online world should be no different. Because, make no mistake, these firms are here to stay and, as a result, they have a big role to play as part of the solution. It’s vital that we make sure that they use their technology to protect the people – their customers – who use it every day.

It’s important not to lose sight of what online harms actually are. Yes, it means things like cyberbullying, images of self-harm, terrorism and grooming. But disinformation – which challenges our ideals of democracy and truth – must be tackled head on, too.

Disinformation isn’t new. But the rise of tech platforms has meant that it is arguably more prevalent than ever before. It is now possible for a range of players to reach large parts of the population with false information. This is why we are including tackling harms like disinformation in our White Paper. It will set out a new framework for making sure disinformation is tackled effectively, while respecting freedom of expression and promoting innovation.

In the UK, most people who read the news now do so online. When it is read across platforms like Facebook, Google and Twitter and then shared thousands of times, the reach is immense. False information on these platforms has the potential to threaten public safety, undermine national security, reduce trust in the media, damage our global influence and undermine our democratic processes.

Fortunately, we’re yet to see any evidence of disinformation affecting democratic processes in the UK. But it is something we will continue to keep a very close eye on.

Tools exist to enable action to be taken, particularly through the use of artificial intelligence. We’ve already seen welcome moves from platforms such as Facebook and Twitter, which have developed initiatives to help users identify the trustworthiness of sources and have shut down thousands of fake sites and profiles.

But voluntary measures have not been enough. I want trustworthy information to flourish online and for there to be transparency so that the public are not duped. Parliament cares deeply about this, as today’s Select Committee report into disinformation shows.

More needs to be done. One of the main recommendations in the Cairncross report on the future of journalism last week was to put a “news quality obligation” on the larger online platforms – placing their efforts to improve people’s understanding of the trustworthiness of news articles under regulatory supervision.

The industry should want to be part of the solution. That’s why my department is co-hosting a Technology Challenge in Bristol next month with colleagues from the US. It will see firms set out their innovative solutions to tackling disinformation, and will award the winner $250,000 to develop their ideas further.

Online firms rely on us spending time online. We will only do that if we feel safe there. So a safer internet is good for business too. But we can no longer rely on the industry’s goodwill. Around the world, governments are facing the challenge of how to keep citizens safe online. As the era of self-regulation comes to an end, the UK can and should lead the way.


Jeremy Wright is Culture Secretary.
