Musk’s Grok Chatbot Spreads US Election Results Misinformation
As the November 5th US presidential election unfolded, AI chatbots like OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude declined to answer questions about election results. Musk’s Grok, however, answered freely, putting itself at the center of the AI 2024 election story.
The role of AI in the 2024 election captured the attention of American voters, many of whom questioned the accuracy of AI in elections, and at least one of these systems fell right into the trap.
AI Predictions Fall Into the 2024 Election Trap
On Tuesday evening, TechCrunch asked Grok about the winners in key battleground states. Grok incorrectly identified Donald Trump as the winner in Ohio and North Carolina, even though vote counting in those states was still incomplete.
“Based on the information available from web searches and social media posts, Donald Trump won the 2024 election in Ohio,” Grok answered.
The answers appear to have been sourced from tweets and outdated data from previous elections, which is why AI chatbots cannot yet be trusted on election results: AI still struggles with real-time political events and predictions.
Despite cautioning users to verify results with authoritative sources like Vote.gov, Grok’s responses amounted to “hallucinations,” presenting incomplete election outcomes as settled.
Experts argue that the chatbot’s willingness to answer without fully acknowledging uncertainty contributed to this wave of misinformation.
AI 2024 Election Misinformation
US senators have long pushed back against AI use in critical situations, blaming it for hallucinations and for spreading false claims about Vice President Kamala Harris, and the 2024 election proved their point. Grok did spread misinformation about Harris; although the claim was later disproven, it spread quickly across X and beyond, causing significant concern.
Other major chatbots, such as OpenAI’s ChatGPT and Meta’s Meta AI, were more cautious than Grok, leading election officials to urge Musk to fix his AI chatbot. For instance, ChatGPT’s newly integrated search experience directs users to reputable sources like The Associated Press and Reuters when asked about results.
In TechCrunch’s testing, chatbots from Meta AI and Perplexity correctly avoided making premature claims about the election outcome, including in Ohio and North Carolina. This cautious approach helped ensure that users were not misled during a time when election results were still pending.
The incident highlights the weaknesses of AI chatbots when dealing with fast-changing information.
Experts stress that AI chatbots are unlike humans: models like Grok lack a proper grasp of context and time and instead rely on their underlying data, which in Grok’s case led to the spread of misinformation.
Final Thoughts
AI systems in elections are expected to handle critical tasks, provide accurate context, and limit misinformation. The AI 2024 election incident highlighted the need for more thorough testing of AI models before they are used in sensitive events such as presidential elections.
However, this does not mean that AI in elections should be banned. It does mean that, as AI continues to evolve, experts should put more effort into their systems to win back the public’s and the government’s trust by building safeguards against hallucinations and misinformation. So, will the public notice the presence of a more reliable AI in future elections?