
Despite the recent and overwhelming Senate vote to defeat a proposed decade-long ban on state AI safety laws, some in Congress are preparing to undermine the will of four in five Americans and reverse this achievement.
After outcry from conservatives and liberals, state and federal lawmakers, and parents across the country, the Senate voted 99-1 to defeat the proposed ban, which was buried in the “one big beautiful” budget bill.
Their uproar was justified. A moratorium on state AI safety legislation would be a dream come true for AI companies. It would mean no rules, no accountability and total control — and a nightmare for families.
While Congress is failing to address urgent issues around AI, states are enacting laws that allow for industry growth while also protecting consumers.
Yet, despite the Senate’s July 1 vote to protect states’ rights to keep residents safe, a moratorium is expected to once again rear its ugly head, either as new legislation or as language tucked into some other large bill.
This is an irresponsible and indefensible policy approach, and it is a direct threat to the safety and well-being of consumers, especially children.
There are multiple signs that the push for a moratorium is not dead. A draft of an AI action plan that President Trump is reportedly set to unveil has circulated in Washington, and it could withhold federal funds from states with “restrictive” AI regulations.
The House Energy and Commerce Committee posted on social media last week against “burdensome AI regulations.”
Tech industry lobbyists, warning of a supposed threat from a patchwork of state laws, are talking up a revised version of the failed moratorium provision.
And tech policy observers are watching for a vehicle to block state regulation, such as a stand-alone bill, an amendment to a must-pass bill (like the National Defense Authorization Act) or an end-of-year appropriations bill.
AI’s risks to kids are well-documented and, in the worst cases, deadly. AI has supercharged kids’ exposure to misinformation. AI-generated child sexual abuse material is also flooding online spaces.
But perhaps the most alarming trend is the rapid rise of social AI companions. Research released earlier this year by my organization, Common Sense Media, shows that three-quarters of teens have used AI companions, and many are regularly turning to them for emotional support.
Our Social AI Companions Risk Assessments demonstrated that AI companions will readily produce inappropriate responses, including those involving sexual role-play, offensive stereotypes and dangerous “advice” that, if followed, could have life-threatening consequences.
In our test cases, AI companions shared a recipe for napalm, misled users with claims of “realness,” and increased mental health risks for already vulnerable teens. Based on our findings, we concluded that no one under 18 should use AI companions.
In response, states have moved swiftly to address these threats.
New York adopted new safeguards for AI companions and the largest, most advanced generative AI models. In California, bills are advancing to ban AI companions for minors, codify AI industry whistleblower protections and require greater transparency by AI companion platforms for all users.
Kentucky enacted a law to protect residents from AI-enabled discrimination by state agencies. The Maryland legislature is considering a bill to establish AI bias auditing requirements.
And last year, Tennessee’s Republican governor signed first-in-the-nation legislation to protect music artists from unauthorized AI-enabled voice cloning.
These laws aren’t radical overreaches. They are common-sense guardrails rooted in federalism.
Supporters of the proposed moratorium — AI industry lobbyists chief among them — argue that state laws will deter innovation. But that’s not how American governance works. States have always served as laboratories of democracy, and many of today’s strongest federal consumer protections began as state laws.
If Connecticut hadn’t led the way, you might still be breathing in cigarette smoke at restaurants. And if not for a New York law, your car might not have seatbelts today.
Smoking restrictions didn’t bankrupt Big Tobacco, and seatbelt laws didn’t kill the car industry. AI safety laws aren’t stopping America from leading on AI. But they will make the technology safer, smarter and more sustainable.
That ethos has always been core to our mission as tech policy advocates.
We believe in the power of technology, including AI, to do good, and we support well-crafted policy that protects kids without sacrificing innovation. What we don’t support is letting tech companies use kids as guinea pigs for AI, as they were allowed to do during the rise of social media.
And while we commend both red and blue states for protecting kids from unsafe AI, we also recognize the need for national leadership that enables both safety and growth. These aren’t opposing goals; in fact, safety is what makes growth sustainable. Congress ought to recognize that.
This is an all-hands-on-deck moment. Lawmakers at all levels must play an active role in ensuring that the AI revolution helps our kids thrive. And our polling shows that voters overwhelmingly want all levels of government involved.
That means crafting intelligent policies that support safe AI development, including risk-based audits, transparency and whistleblower protections. It means expanding data privacy protections, especially for kids. And it means ensuring that AI products impacting kids are built with safety and accountability in mind.
Congress made the right call last month, even if it had to be nudged, and it must do so again. U.S. senators and representatives, as well as the president, must reject new attempts to bar or restrict states from protecting residents from the known risks of new technology.
Their constituents demand it. The next generation demands it. Our AI future demands it.
James P. Steyer is founder and CEO of Common Sense Media.