AI Used for Sextortion and Child Abuse, Warns UK Police Chief

UK police warned that criminals are weaponizing AI tools for fraud, cyberattacks, and child exploitation, posing challenges for law enforcement.

On Sunday, the UK police warned that criminals are weaponizing AI tools for fraud, cyberattacks, and child exploitation, posing urgent challenges for global law enforcement.

The growing misuse of AI tools highlights an urgent need for global collaboration between law enforcement, policymakers, and technology companies. Criminals are leveraging AI’s capabilities to create highly convincing scams, exploit vulnerabilities, and produce illegal content, outpacing traditional crime-fighting methods.

“Criminals are inventive and will use any available tools to commit crimes. AI is now part of that toolkit,” stated Alex Murray, the national police lead for AI, citing the way fraudsters are using deepfake technology to impersonate executives and carry out financial fraud. In one recent case, a finance worker was tricked into transferring £20.5 million after a convincing impersonation over a video call.

AI and Child Exploitation

AI has fueled the rise of “sextortion” cases, in which criminals manipulate victims’ images into indecent material for blackmail. Offenders are also expanding their tactics beyond existing photos, creating fake but convincing AI-generated images from public social media profiles.

The most alarming trend, Murray said, is the weaponization of AI to create child sexual abuse material. “We’re talking about thousands of synthetic images being produced,” he revealed. These images, illegal under UK law, are being used by offenders to fuel harmful networks.

Hackers are also using AI to discover software vulnerabilities for isolated, targeted cyberattacks. Although most AI-related crime currently involves fraud and child exploitation, Murray warns that misuse is bound to escalate rapidly.

Weaponizing AI and Chatbot Radicalization

The rapid evolution of generative AI has sparked concerns about its misuse in spreading extremist content and facilitating criminal activities. Experts warn that without swift regulation, these tools could pose significant threats to public safety and security.

The UK’s independent reviewer of terrorism legislation, Jonathan Hall, identified “chatbot radicalization” as one such danger in comments to The Telegraph. He demonstrated how a chatbot version of Osama bin Laden could be created with ease using commercially available platforms.

Hall called on policymakers to take immediate action to regulate generative AI, drawing an analogy to the early days of the Internet, when unchecked activity led to rampant misuse.

“We need a common understanding of generative AI, and the confidence to act,” he said.

Murray echoed this view, adding that as AI tools become more sophisticated and accessible, their exploitation by criminals is likely to increase significantly by 2029. “The ease of use and realism will only grow,” he said. “Policing needs to evolve quickly to address this emerging threat.”

As AI technology continues to evolve, balancing its benefits with adequate safeguards against misuse will be key for law enforcement and policymakers globally.
