Meta AI Is Collecting Your Secrets, And Human Contractors Are Reading Them

Contractors say Meta’s AI chatbots collect users’ personal data in breach of its privacy policy, with chats and photos reviewed by humans.

On Wednesday, four Meta contractors recruited through third parties revealed that users share sensitive data – including their real names, email addresses, and “explicit photos” – with Instagram and Facebook’s AI chatbots, breaching each platform’s AI privacy policy.

The contractors, hired through Alignerr and Scale AI-owned Outlier, warned that Meta’s chatbots expose more raw user data than competitors’ AI systems, despite the company’s claims of strict privacy policies.

The findings raise red flags about Meta’s data privacy practices and its reliance on human contractors to train its AI.

One contractor described a common pattern of oversharing, in which individuals “discuss things with Meta’s AI like they were discussing things with friends, or even romantic partners.” Some even shared selfies and “explicit photos.”

Despite Meta’s history of data leaks and opaque practices, users still freely share personal information with its AI, seemingly unaware this data may later be reviewed, stored, or even used to train the very systems collecting it.

This cycle between users and Meta’s platforms has left users demanding answers about whether their data has been fueling the company’s AI from the start.

AI Data Privacy and Security Concerns

While oversharing with AI isn’t new, the level of intimacy users show on Meta’s platforms appears excessive given the company’s weak AI data privacy and security. Human contractors are tasked with reviewing these chats to improve AI behavior, a method used widely across the tech industry.

Other tech giants have faced similar backlash for violating privacy with AI.

In 2019, Apple drew fire when contractors accessed sensitive Siri conversations without privacy safeguards. Bloomberg exposed Amazon and Microsoft similarly permitting contractor reviews of Alexa and Xbox voice recordings, including accidental captures of children’s voices.

Still, the level of exposure reported by Meta’s contractors seems more concerning: “unredacted personal data was more common for the Meta projects they worked on” than for other companies’ projects, the contractors said.

Meta’s Data Mishandling History

The revelations are especially troubling given Meta’s troubled history of handling user data.

The 2018 Cambridge Analytica scandal exposed Facebook allowing unauthorized data harvesting by political consultants, affecting millions of users and influencing US elections. The scandal ultimately resulted in a record $5 billion Federal Trade Commission (FTC) fine – the largest privacy penalty ever imposed.

The problem is rooted both in Meta’s systems and in users’ growing trust in, and dependence on, conversational AI.

Despite Meta’s well documented privacy controversies, including the Cambridge Analytica data scandal and reports of contractors reviewing user conversations, many still interact with the company’s AI tools as if they were private, trusted confidants.

What’s remarkable is that people continue to overshare with AI models. Some experts attribute the issue to two factors:

  • Meta’s AI feels deceptively natural, personal, and human-like, lowering users’ defenses.
  • Users rarely realize they’re interacting with monitored, human-trained systems.

Lastly, while Meta is responsible for setting firm boundaries and ensuring privacy, the blurring of human and machine and the illusion of intimacy have placed numerous users in harm’s way. The result is a system where the company harvests massive amounts of personal data, and users, knowingly or unknowingly, help to feed it.

At the time, Facebook claimed users had opted in.

How to Stop Meta Using Your Data for AI

Questions linger, with few clear answers: Meta has not explained how users can opt out of having their data used for AI training.

In response to the Business Insider investigation, a Meta spokesperson told Fortune the company has “strict policies that govern personal data access for all employees and contractors,” claiming these limit personal information exposure and provide training and safeguards.

“For projects focused on AI personalization … contractors are permitted in the course of their work to access certain personal information in accordance with our publicly available privacy policies and AI terms. Regardless of the project, any unauthorized sharing or misuse of personal information is a violation of our data policies, and we will take appropriate action,” Meta added.

Despite the assurances, Meta’s long track record of privacy lapses continues to spark concern. Its use of human reviewers, though standard in the industry, combined with its massive user base, adds urgency to calls for stricter data governance and transparency in AI development.
