AI Tests Human Fact-Checkers on X. Who Will Win? 

X's use of AI to write fact checks is sparking concern from a former UK technology minister over the risk of spreading misinformation online.

On Tuesday, X announced it would begin using AI to generate fact-checking notes on posts, sparking concern from former UK technology minister Damian Collins over the risk of misinformation spreading online.

According to X, the system will draft its community notes, the user-approved fact checks that appear beneath posts. Until now, these have all been written by humans. The move reflects a growing embrace of automated fact-checking across the internet.

Many welcome the move, while others fear it will do more harm than good.

AI’s Role in Online Fact-Checking

X describes the change as a way to make fact checks faster and higher quality.

“We designed this pilot to be AI helping humans, with humans deciding,” Keith Coleman, the company’s vice president of product, said.

The microblogging platform also highlighted how the system blends machine-written drafts with human moderation. A large language model generates the first draft of a note, which is shown only if users with different points of view rate it as helpful. The site calls this a form of automated content moderation that merges AI assistance with human judgment.
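The publication rule described above can be sketched in code. The toy function below is a hypothetical simplification: X's real Community Notes system scores notes with a matrix-factorization model over rater histories, whereas this sketch simply checks that every viewpoint group independently finds a draft note mostly helpful. The function name, the `min_support` threshold, and the group labels are all illustrative assumptions, not X's API.

```python
# Hypothetical, simplified sketch of a "bridging" publication rule:
# a draft note is shown only when raters from *different* viewpoint
# groups agree it is helpful. X's actual scoring is far more complex.

def should_show_note(ratings, min_support=0.6):
    """ratings: list of (viewpoint_group, is_helpful) tuples."""
    by_group = {}
    for group, helpful in ratings:
        by_group.setdefault(group, []).append(helpful)
    if len(by_group) < 2:
        return False  # require agreement across at least two viewpoints
    # every group must independently rate the note mostly helpful
    return all(sum(votes) / len(votes) >= min_support
               for votes in by_group.values())

ratings = [("left", True), ("left", True),
           ("right", True), ("right", False), ("right", True)]
print(should_show_note(ratings))  # True: both groups mostly rate it helpful
```

The key design idea, cross-group agreement rather than a simple majority, is what distinguishes this approach from ordinary upvote counting: a note loved by one side and rejected by the other is never shown.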

The company also released a research paper co-authored by academics from leading universities including MIT and Stanford. It argues that AI fact-checking techniques are faster and more accessible than traditional fact-checking, which it admits is often too slow and limited in reach. 

AI’s Limits and Potential for Harm 

Critics argue that the use of AI for misinformation detection must be tightly controlled. Collins suggested the system could enable “the industrial manipulation of what people see and decide to trust,” criticizing the decision as “leaving it to bots to edit the news.”

Andy Dudfield, head of AI at the UK fact-checking organization Full Fact, said the proposals would put extra pressure on human editors.

“These plans risk increasing the already significant burden on human reviewers to check even more draft notes, opening the door to a worrying and plausible situation in which notes could be drafted, reviewed, and published entirely by AI without the careful consideration that human input provides,” he told the BBC.  

That concern makes quality control central to generative AI fact-checking.

Samuel Stockwell, a researcher at the Alan Turing Institute, said that while an AI fact-checker can help manage the massive volume of claims on the web, it remains risky. He worries that such tools can produce confident but wrong information if not closely monitored.

AI Fact-Checker or Human Fact-Checker 

Studies have shown that users are more likely to trust human-written fact checks than AI-generated notes. During the 2020 US election, many misleading posts on X went uncorrected because fact-checking notes were not being seen or used by readers.

Comparing the two, AI fact-checkers outperform humans in some respects: they do not express the frustration or emotion that can hinder human debaters. When people refuse to change their minds, humans tend to argue more aggressively or disengage, while AI maintains a calm, consistent tone throughout the conversation.

Technology firms are cutting back on human moderation too, with Meta and Google following suit, and some fear this shift toward AI-generated fact-check notes will undermine the fight against online disinformation. While AI fact-checking can deliver pace and scale, many still believe it can never replace human reasoning.
