
On March 17, the UK Online Safety Act's illegal content duties took effect, granting Ofcom authority to regulate social media platforms. The law requires companies to remove harmful content or face fines of up to $23.3 million (£18 million) or 10% of global revenue, whichever is greater.
With the Act's rules on online safety and social media regulation now in force, Ofcom is enforcing its illegal content codes, pushing tech firms to take tougher action against harmful online content.
Social media platforms must actively identify and remove content related to child sexual abuse, terrorism, hate crimes, suicide promotion, and fraud.
UK Technology Secretary Peter Kyle hailed the law as “a major step forward in creating a safer online world,” stressing that for too long, child abuse material, terrorist content, and intimate image abuse have been too easy to access.
In parallel, Ofcom’s enforcement director, Suzanne Carter, said, “Platforms must now act quickly to comply with their legal duties, and our codes are designed to help them do that.”
The legislation also creates new offences, including cyberflashing, intimate image abuse (so-called revenge porn), and epilepsy trolling, and criminalises “threatening communications” and “sending false information intended to cause non-trivial harm.”
Social media platforms are now legally obligated to address content promoting violence and racially or religiously motivated public order offenses.
Penalties and Future Outlook
For offences involving child safety, senior managers and their companies can face criminal charges if they fail to comply with the UK Online Safety Act's duty to remove harmful content. The UK's Department for Science, Innovation, and Technology (DSIT) said the legislation will make the UK “the safest place in the world to be a child online.”
Despite its goals, the Online Safety Act has drawn criticism over its slow rollout: some argue that the 18-month gap between its passage in October 2023 and enforcement in March 2025 was too long, while others complain that the new UK social media regulations do not go far enough.
The legislation subjects social media platforms to increased scrutiny in the UK, as platforms must now provide evidence of robust content moderation measures, including automated tools for identifying illegal material.
Online safety legislation goes beyond regulating UK social media content; it is about saving lives in the largely lawless expanse of social media. If such safeguards were required and embedded from the start, would the online world not be a better place? A secure online world is not only about preventing financial fraud or cyberbullying, but also about protecting children from the offline consequences of online threats.
When tech companies take these obligations seriously and build the Online Safety Act's requirements into their platforms, they allow a healthier, safer online environment to flourish. The internet should be a haven for creativity, innovation, and connectivity, not a negative space where online safety and free speech are pitted against each other.
Inside Telecom provides you with an extensive list of content covering all aspects of the tech industry. Keep an eye on our Tech sections to stay informed and up-to-date with our daily articles.