Big Tech's Deepfake Misinformation Fight Activated by Election Interests
Microsoft and OpenAI have committed a collective $2 million to a project aimed at curbing the spread of deepfakes around the 2024 elections, warning that such content could "deceive the voters and undermine democracy."
The Mother Year of All Elections
2024 has been called the mother of all election years, with over 2 billion people set to vote in elections spread across more than 50 nations. Big tech companies, such as Meta, Microsoft, Amazon, and Google, are keen to take part in these elections, directly or indirectly, particularly with elections being held in the US. With events unfolding in the Middle East, most notably the U.S.-Israel alliance in the war on Palestine, the American elections have become a major political event worldwide, one that everyone is watching closely.
The rise of generative AI and its language models, including popular chatbots such as ChatGPT, has created a significant new threat environment of AI-generated deepfakes, which experts associate with the exponential spread of misinformation. Compounding the issue is the widespread availability of such tools, enabling anyone to produce fake videos, pictures, or audio of well-known political figures; both presidential candidate Donald Trump and current President Joe Biden have fallen victim to such videos.
On Monday, India's Election Commission warned political parties, including the Bharatiya Janata Party (BJP), the Bahujan Samaj Party (BSP), the Communist Party of India (CPI), the Communist Party of India (Marxist) (CPI(M)), the Indian National Congress (INC), and the Nationalist Congress Party (NCP), against using deepfakes and similar false information in their online election campaigns.
Amid this election climate, major tech firms, most prominently Big Tech giant Microsoft and the company it backs, OpenAI, announced their commitment to counter such threats to American 'democracy' through voluntary agreements, and are working on a shared framework to address deepfakes specifically intended to deceive voters.
Elsewhere, major AI companies are addressing these risks by imposing restrictions on their own software. Google, for instance, stated it will not permit its Gemini AI chatbot to respond to inquiries about the 2024 elections, and Meta, Facebook's parent company, is likewise restricting election-related responses from its AI chatbot. Meta also announced that it will block election ads to stem the spread of misinformation.
On Wednesday, OpenAI unveiled a new deepfake detector aimed at disinformation researchers, designed to recognize fake content produced by its own DALL-E image generator. The company has also joined the steering committee of the Coalition for Content Provenance and Authenticity (C2PA), which addresses the spread of misinformation online by creating standards that certify the source and history of digital content; its members include Adobe, Microsoft, Google, and Intel.
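To make the provenance idea concrete, here is a minimal sketch in Python of what checking an image for embedded Content Credentials might look like. It is only a rough heuristic, not OpenAI's detector and not a full C2PA validator: it walks a JPEG's marker segments and reports whether an APP11 segment mentions the "c2pa" label, which is where C2PA manifests are typically embedded. The function name and command-line usage are illustrative assumptions.

```python
# A minimal, illustrative sketch (assumed names; not OpenAI's detector and
# not a full C2PA validator): scan a JPEG's marker segments for an APP11
# (0xFFEB) segment whose payload mentions the "c2pa" JUMBF label, the place
# where C2PA Content Credentials are typically embedded.
import struct
import sys

def has_c2pa_manifest(path: str) -> bool:
    """Return True if the JPEG at `path` appears to carry a C2PA manifest."""
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":        # every JPEG starts with an SOI marker
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:            # lost marker sync; stop scanning
            break
        marker = data[i + 1]
        if marker == 0xDA:             # SOS: compressed image data follows
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        payload = data[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in payload:
            return True                # APP11 segment carrying the C2PA label
        i += 2 + length                # jump to the next marker segment
    return False

if __name__ == "__main__":
    for image in sys.argv[1:]:
        status = "appears to carry" if has_c2pa_manifest(image) else "lacks"
        print(f"{image}: {status} embedded C2PA credentials")
```

Actual verification goes much further, parsing the embedded JUMBF manifest and validating its cryptographic signatures against trusted certificates, which is precisely what the C2PA standard and its dedicated libraries are designed to do.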
Why Now and Not Then?
Why were Microsoft and OpenAI able to deliver a deepfake image detector only now, during the mother of all elections? We as readers are well aware that abusers have long been generating deepfake images of children online. Has this become the norm? Why couldn't big tech release this capability sooner? Given the pace of technological advancement, and knowing how long such a feature takes to build, its arrival in an election year is no coincidence.
If the current student encampments worldwide have taught us anything, it is that today's children and their mental health are equally, if not more, important than political elections. As we watch the events unfolding on campuses, let us be mindful that the children of today are the leaders of tomorrow.