Report urges UK government to beef up online safety measures

FILE - In this Monday, Oct. 25, 2021 file photo, Facebook whistleblower Frances Haugen leaves after giving evidence to the joint committee for the Draft Online Safety Bill at the Houses of Parliament, in London. (AP Photo/Matt Dunham, File)

LONDON (AP) - Proposed British rules aimed at cracking down on harmful online content should be beefed up with tougher measures like making it illegal to send unsolicited graphic images, requiring porn sites to ensure children can’t gain access and moving faster to hold tech executives criminally liable for failing to uphold the regulations, lawmakers said in a new report.

The committee of lawmakers recommended a series of major changes to the U.K. government’s draft online safety bill early Tuesday that would make digital and social media companies more responsible for protecting users from child abuse, racist content and other harmful material found on their platforms.

The proposals by Prime Minister Boris Johnson’s government and similar rules that the European Union is working on underline how Europe is at the vanguard of the global movement to rein in the power of digital giants like Google and Facebook parent Meta Platforms.

The committee of British lawmakers is scrutinizing the draft bill to offer recommendations on how the government can improve it before it’s presented to Parliament next year for approval.

“The era of self-regulation for big tech has come to an end,” committee chairman Damian Collins said. “The companies are clearly responsible for services they have designed and profit from, and need to be held to account for the decisions they make.”

The committee has been hearing evidence from tech executives, researchers and whistleblowers such as former Facebook data scientist Frances Haugen, whose revelations alleging that the company put profits ahead of safety have galvanized legislative and regulatory efforts around the world to clamp down on online hate speech and misinformation.

Under the proposed rules, the U.K.’s communications regulator, Ofcom, would be appointed to investigate internet companies for any breaches, with the power to impose fines of up to 18 million pounds ($24 million) or 10% of a company’s annual revenue, whichever is higher.

New criminal offenses should be created to cover online harms such as cyberflashing — sending someone unsolicited graphic images — or sending flashing images to someone with epilepsy with the intention of causing a seizure, the report said.

Internet companies would have to follow codes of conduct on areas like child exploitation and terrorism and would be required to carry out risk assessments for algorithms recommending videos and other content to users that could lead them down “rabbit holes” filled with false information.

To protect children from accessing online porn, websites should use “age assurance” technology to ensure users are old enough to be on the site, the report said. To ease concerns that this would result in a data grab by big tech companies dominating the verification systems, the report advised Ofcom to set standards that protect privacy and restrict the amount of information those companies can collect.

One of the more controversial parts of the British government’s proposal is a provision holding senior managers at internet companies criminally liable for failing to give regulators information needed to assess whether they’re complying with the rules, which would kick in after a two-year review period.

That’s raised concerns that tech companies would use the time to stall instead of complying, spurring digital minister Nadine Dorries to vow to make it happen “under a much shorter timeframe” of up to six months — something the committee backed.

However, digital rights groups and tech companies have warned that the provision could have unintended consequences. Google said it would be an incentive for companies to use automated systems to take down content “at scale” to protect themselves from prosecution.

Authoritarian countries could look to the U.K. rules as justification for threatening staff in order to get political speech and journalistic content that their leaders don’t like taken down, Twitter’s senior director of global public policy strategy, Nick Pickles, told the committee in October.

The government has two months to respond to the committee’s report before the bill is presented to Parliament for approval.

