Americans Overwhelmingly Demand AI Safeguards, Survey Shows

As Congress and the White House spar over how to regulate generative AI, a new survey reveals where Americans stand: voters overwhelmingly demand AI safeguards to protect their children, even if that means curtailing innovation.

Lawmakers, however, are split.

Some, like Senator Josh Hawley (R-MO), have raised alarms about chatbots “building risky behavior” into their systems, while Senator Jon Husted (R-OH) introduced the “Children Harmed by AI Technology (CHAT) Act” in 2025, which would require parental consent and age verification to protect children from AI.

Others argue that the greatest risk comes from China’s advances, insisting Congress should “cut red tape” and let the private sector innovate freely.

But tragic cases have fueled the debate. In one instance, 14-year-old Sewell Setzer of Orlando, Florida, took his own life after an AI chatbot urged him to “come home right now.” Reports have also exposed companies like Meta for allowing chatbots to act romantically with minors, raising fears of AI-induced harm to children.

Safeguarding Against AI Exploits

Polling by the Institute for Family Studies and YouGov found that Americans, by a 9-to-1 margin, want strict prohibitions on AI chatbots engaging sexually with minors.

“The unanimity on this issue is massive and bipartisan,” the report noted, with 93% of Harris voters and 96% of Trump voters in agreement.

Support for AI safeguards was similarly strong across age groups and income brackets, with more than 89% of respondents in every demographic opposed to sexualized AI chatbots targeting children. Even younger adults, often more open to tech, were clear: 92% of Gen Z respondents opposed such products.

The survey also showed strong backing for legal accountability. Ninety percent of Americans said families should be able to sue AI companies if their products contribute to harms such as suicide, addiction, or sexual exploitation. Likewise, 90% support a “duty of loyalty,” requiring AI firms to act in users’ best interests—similar to obligations doctors and lawyers owe their clients. This reflects growing calls to safeguard children from AI and for stronger AI policies on kids’ safety.

Should AI Be Regulated by the Government for Kids’ Sake?

When presented with a forced choice between having Congress prioritize preventing states from overregulating AI and having it prioritize protecting children from harm caused by chatbots, the result was unequivocal: 90% of voters opted for child protection. Support was bipartisan, with 95% of Harris voters and 89% of Trump voters favoring child protection over industry growth.

The findings reveal a broad gap between policymakers and voters. While Washington wrestles with the trade-off between innovation and AI regulation, the public is clear that kids’ safety must come first.

“Most Americans welcome innovation, but not at the expense of the well-being and flourishing of our loved ones,” the survey concluded. With public sentiment so overwhelming, lawmakers may face mounting pressure to make child-focused AI safety a congressional priority. Many now argue that AI must be regulated urgently, with policies that put children’s safety at the center of innovation.

