Google Strikes Back Against AI Deepfakes with Algorithm Update
On Wednesday, Google announced an update to its search algorithm and removal process to combat unwanted sexually explicit AI deepfakes and further protect victims.
Google already allows victims to request the removal of explicit AI deepfakes, but it is now streamlining the reporting process. Once a victim reports explicit content and the company verifies it, Google Search will automatically filter out similar search results, eliminating the need for repeated reports.
An Enhanced Algorithm Against Deepfakes
AI deepfakes have become a constantly growing problem across social media platforms, as widely available AI tools are now used for many purposes, including the creation of sexually explicit deepfakes of almost anyone.
In light of this, Google will update its search algorithm to deal with these problems more efficiently. This includes downgrading, in search rankings, any websites that repeatedly host non-consensual AI deepfakes. The company also said this approach has proven successful elsewhere, stating, “This approach has worked well for other types of harmful content, and our testing shows that it will be a valuable way to reduce fake explicit content in search results.”
Real Vs. Fake
The search algorithm update aims to reduce the chances that explicit deepfakes appear in search results. It also attempts to differentiate between real sexually explicit content created with full consent and AI-generated media that lacks such consent.
However, Google acknowledges this is a “technical challenge” and that the system may not always be accurate or even effective. Despite these challenges, the search engine leader says the updates already rolled out have reduced the reappearance of deepfakes by more than 70%. “With these changes, people can read about the impact deepfakes are having on society, rather than see pages with actual non-consensual fake images,” Google said.
A Legal Push Is a Must
On the legal front, US officials are also advocating for laws to protect victims of non-consensual deepfakes by making platforms liable for hosting such media.
Last month, Republican Senator Ted Cruz of Texas introduced the Take It Down Act, aiming to make non-consensual sexual deepfakes a federal crime and to require social media platforms to remove them.
Last week, the Senate passed the Defiance Act, bipartisan legislation that seeks to address the rise of non-consensual, sexually explicit AI-generated deepfake images and videos by empowering victims to sue over the creation or sharing of unwanted sexual deepfakes.
Final Thoughts
Addressing issues related to AI deepfakes should be the responsibility of every tech company, not only Google. As technology rapidly advances, safeguards for people’s rights and dignity must keep pace, especially when AI tools can negatively affect society and influence people’s decisions, particularly during critical events like elections. The latest example of AI deepfakes and their influence is the altered video of Kamala Harris shared by Elon Musk, featuring false statements.