California Lawmakers Approve Deepfake Law

California lawmakers approved a series of bills addressing the spread of deepfakes, protecting workers from AI exploitation, and pushing for AI literacy in schools.

Democratic lawmakers led the effort, working through hundreds of proposals in the final days of the session. The bills will be sent to Governor Gavin Newsom, who has until September 30 of this year to sign or veto them.

California Deepfake Law for Election Protection

One of the main concerns addressed by the California deepfake law is the increasing use of AI tools to influence voters through deepfake content, such as manipulated images and videos, ahead of this year's US election.

One example is the deepfake video that went viral on social media after being shared by X platform owner Elon Musk, showing Vice President and Democratic candidate Kamala Harris making false statements about President Joe Biden.

In response, Democratic lawmakers approved bills banning the creation of election-related deepfakes and requiring social media platforms to remove misleading content from 120 days before until 60 days after an election.

Political campaigns must disclose whether AI tools were used in their content. 

Against Child Abuse Deepfake Content

Among these proposals are bills that make it illegal to use AI tools to create images and videos depicting child abuse. AI-generated child abuse content has long been a hurdle for lawmakers, since current laws allow district attorneys to prosecute offenders only when there is clear proof of child abuse.

Another aspect of the California deepfake law requires social media platforms to provide users with AI detection tools so they can distinguish real content from fake.

The proposals will also protect workers from exploitation by AI, recognizing the rapid growth of the AI industry as companies race to improve their services and attract and retain users.

Regulatory Moves for Safe AI Integration

One recent addition to AI models is voice assistants, with some companies cloning the voices of famous people without consent. In May, Scarlett Johansson threatened legal action against OpenAI for using a voice resembling hers in its GPT assistant without her permission.

In a broader regulatory move, California recently became the first state poised to pass a bill that would require tech companies to disclose the data used to train their AI models.

The bill has drawn sharply divided reactions: many companies oppose it, claiming it hinders innovation, while Elon Musk has voiced support, pushing for safe and ethical AI use.

Proposals under the California deepfake law also require agencies to institute safety protocols that curb risks and prevent algorithmic discrimination before entering any contracts involving AI models used in decision-making.

California lawmakers have also passed bills to integrate AI into school subjects such as math, science, and history, using the technology to enhance education.


Inside Telecom provides you with an extensive list of content covering all aspects of the tech industry. Keep an eye on our Ethical Tech section to stay informed and up-to-date with our daily articles.