Meta Gave Sex-Trafficking Accounts 16 Chances, Unsealed Documents Show

On Friday, newly unsealed court filings in California revealed testimony from former Meta safety executives alleging the company tolerated harmful content, misled regulators, resisted teen-safety fixes, and allowed accounts tied to sex trafficking to accumulate multiple violations before finally suspending them, neglecting the safety of young users online, according to Time.

The evidence, foundational to the case brought by multiple state attorneys general, points to an internal culture at Meta in which systemic threats to user safety were identified but deprioritized.

The multidistrict lawsuit accuses Meta and other major platforms of prioritizing growth over the well-being of young users and over effective content moderation.

Amid the company’s broader legal troubles, former Meta executive Vaishnavi Jayakumar exposed the giant’s “17-strike” policy under oath, testifying that an account reported for violations 16 times would not necessarily be banned.

“That means you could incur 16 violations of prostitution and sexual solicitation, and upon the 17th violation, your account would be suspended,” Jayakumar said in her deposition.

The plaintiffs argue that internal testimony, research, and documents outline a pattern of delayed safety measures, concealed data, and policies that failed to protect minors from exploitation, mental-health risks, and predatory behavior.

Meta’s Tolerance for Sex-Trafficking Content

A court brief unsealed in the Northern District of California alleges that Instagram users engaged in “trafficking of humans for sex” were allowed up to 16 violations before suspension, a finding the plaintiffs describe as a glaring child-safety failure. The claim comes from sworn testimony by Instagram’s former head of safety and well-being, Vaishnavi Jayakumar, who said she was stunned by Meta’s “17x” strike policy when she joined in 2020.

“You could incur 16 violations for prostitution and sexual solicitation, and upon the 17th violation, your account would be suspended,” she testified, calling it “a very, very high strike threshold” by any industry measure.

Plaintiffs say internal documents confirm this policy.

Together, Jayakumar’s sworn testimony and internal records paint a portrait of a tech giant whose internal safeguards and enforcement mechanisms were, by its own design and admission, ill-equipped to curb the exploitation happening on its platforms.

The filing also claims Instagram had no specific way for users to report child sexual abuse material, despite public assurances of zero tolerance. Jayakumar allegedly raised the issue “multiple times,” only to be told it would be “too much work” to build, an omission plaintiffs say allowed harmful content to spread on Instagram.

A Meta spokesperson disputed these claims, saying the company has long removed accounts immediately when it suspects trafficking or exploitation and has expanded its reporting tools for child-safety issues.

The lawsuit is part of a massive case involving more than 1,800 plaintiffs, including school districts, attorneys general, and parents, who say Meta, TikTok, Snapchat, and YouTube operate “addictive and dangerous” platforms that fuel a youth mental-health crisis and expose gaps in laws protecting children online.

Meta Withheld Teen-Harm Research

The unsealed brief also alleges that Meta misled Congress and withheld internal research showing harm to teens.

According to the filing, a 2019 “deactivation study” found that users who stepped away from Facebook and Instagram for just one week experienced lower anxiety, depression, and loneliness.

Attorneys say these findings should have led to stronger child-safety protections on Facebook, but Meta allegedly halted the project and blocked the release of the results.

One internal employee worried the decision resembled tobacco-industry behavior: “If the results are bad and we don’t publish and they leak… is it going to look like tobacco companies doing research and knowing cigs were bad and then keeping that info to themselves?”

The filings also describe years of internal debates about teen-safety features. Researchers recommended making teen accounts private by default to prevent adults from messaging minors, a change that could have reduced harm to young users on Instagram.

Meta’s growth team opposed the idea, warning it would “likely smash engagement” and cost 1.5 million monthly active teen users. “Isn’t safety the whole point of this team?” one frustrated researcher reportedly asked.

Meta spokesperson Andy Stone pushed back, saying the allegations rely on “cherry-picked quotes and misinformed opinions,” and insisted the company has introduced meaningful protections, including Instagram’s new Teen Accounts, designed to reduce the risk of child exploitation on its platforms.

As the legal battle over youth safety unfolds, the unsealed testimony deepens scrutiny of Meta’s safety practices and intensifies public debate over whether tech giants can balance engagement-driven business models with their responsibility to protect the millions of young users on their platforms.
