Regulator gets more social media power
New powers will be given to the watchdog Ofcom to force social media firms to act over harmful content. The companies have defended their own rules about taking down unacceptable content, but critics say independent rules are needed to keep people safe.
It is unclear what penalties Ofcom will be able to enforce to target violence, cyber-bullying and child abuse. There have been widespread calls for social media firms to take more responsibility for their content, especially after the death of Molly Russell, who took her own life after viewing graphic content on Instagram.
Later on Wednesday, the government will officially announce the new powers for Ofcom – which currently only regulates the media, not internet safety – as part of its plans for a new legal duty of care.
Ofcom will have the power to make tech firms responsible for protecting people from harmful content such as violence, terrorism, cyber-bullying and child abuse – and platforms will need to ensure that content is removed quickly.
They will also be expected to “minimise the risks” of it appearing at all.
The regulator has just announced the appointment of a new chief executive, Dame Melanie Dawes, who will take up the role in March.
“There are many platforms who ideally would not have wanted regulation, but I think that’s changing,” said Digital Secretary Baroness Nicky Morgan.
“I think they understand now that actually regulation is coming.”
In a statement, Facebook said it had “long called” for new regulation, and said it was “looking forward to carrying on the discussion” with the government and wider industry.
Communication watchdog Ofcom already regulates television and radio broadcasters, including the BBC, and deals with complaints about them.
This is the government’s first response to the Online Harms consultation it carried out in the UK in 2019, which received 2,500 replies.
The new rules will apply to firms hosting user-generated content, including comments, forums and video-sharing – that is likely to include Facebook, Snapchat, Twitter, YouTube and TikTok.
The intention is that government sets the direction of the policy but gives Ofcom the freedom to draw up and adapt the details. By doing this, the watchdog should have the ability to tackle new online threats as they emerge without the need for further legislation.
A full response will be published in the spring. Children’s charity the NSPCC welcomed the news.
“Too many times social media companies have said: ‘We don’t like the idea of children being abused on our sites, we’ll do something, leave it to us,'” said chief executive Peter Wanless.
“Thirteen self-regulatory attempts to keep children safe online have failed. Statutory regulation is essential.”
Seyi Akiwowo set up the online abuse awareness group Glitch after experiencing sexist and racist harassment online after a video of her giving a talk in her role as a councillor was posted on a neo-Nazi forum.
“When I first suffered abuse the response of the tech companies was below [what I’d hoped],” she said.
“I am excited by the Online Harms Bill – it places the duty of care on these multi-billion pound tech companies.”
In many countries, social media platforms are permitted to regulate themselves, as long as they adhere to local laws on illegal material.
Germany introduced the NetzDG Law in 2018, which requires social media platforms with more than two million registered German users to review and remove illegal content within 24 hours of it being posted, or face fines of up to €5m (£4.2m).
Australia passed the Sharing of Abhorrent Violent Material Act in April 2019, introducing criminal penalties for social media companies, possible jail sentences of up to three years for tech executives, and financial penalties worth up to 10% of a company’s global turnover.
China blocks many western tech giants including Twitter, Google and Facebook, and the state monitors Chinese social apps for politically sensitive content.