It shouldn’t take tragedy to make technology companies act responsibly. Yet that’s what it took for Character.AI, a fast-growing and popular artificial intelligence chatbot company, to finally ban users under 18 from having open-ended conversations with its chatbots.
The company’s decision follows mounting lawsuits and public outrage over the suicides of several teens who died after prolonged conversations with AI chatbots on its platform. Although the decision is long overdue, it’s worth noting the company didn’t wait for regulators to force its hand. It eventually did the right thing. And it’s a decision that could save lives.
Character.AI’s CEO, Karandeep Anand, announced this week that the platform would phase out open-ended chat access for minors entirely by Nov. 25. The company will deploy new age-verification tools and limit teen interactions to creative features like story-building and video generation. In short, the startup is pivoting from “AI companion” to “AI creativity.”
This shift won’t be popular. But, importantly, it’s in the best interest of consumers and kids.
Teenagers are navigating one of the most volatile stages of human development. Their brains are still under construction. The prefrontal cortex, which governs impulse control, judgment and risk assessment, doesn’t fully mature until the mid-20s. At the same time, the emotional centers of the brain are highly active, making teens more sensitive to reward, affirmation and rejection. This isn’t merely a scientific observation; it is recognized in law, as the Supreme Court has cited adolescents’ emotional immaturity as a reason for reduced culpability.
Teens are growing fast, feeling everything deeply, and trying to figure out where they fit in the world. Add a digital environment that never turns off, and you have a perfect storm for emotional overexposure, one that AI chatbots are uniquely positioned to exploit.
When a teenager spends hours confiding in a machine trained to mirror affection, the results can be devastating. These systems are built to simulate intimacy. They act like friends, therapists or romantic partners, but without any of the responsibility or moral conscience that comes with being human. The illusion of empathy keeps users engaged. The longer they talk, the more data they share, and the more valuable they become. That’s not companionship. It’s manipulative commodification.
Parents, safety experts and lawmakers are putting growing pressure on AI companies that target children. Sens. Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) recently proposed bipartisan legislation to ban AI companions for minors, citing reports that chatbots have encouraged self-harm and sexualized conversations with teens. California has already enacted the nation’s first law regulating AI companions, holding companies liable if their systems fail to meet child-safety standards.
But although Character.AI is finally taking responsibility, others are not. Meta continues to market AI companions to teenagers, often embedding them directly into the apps teens use most. Meta’s new “celebrity” chatbots on Instagram and WhatsApp are built to collect and monetize intimate user data, precisely the kind of exploitative design that made social media so damaging to teen mental health in the first place.
If the last decade of social media taught us anything, it is that self-regulation does not work. Tech companies will push engagement to the limit unless lawmakers draw clear lines. The same is now true for AI.
AI companions are not harmless novelty apps. They are emotionally manipulative systems that shape how users think, feel, and behave. This is especially true for young users still forming their identities. Studies show these bots can reinforce delusions, encourage self-harm, and replace real-world relationships with synthetic ones. That’s the exact opposite of what friendship should do.
Character.AI deserves cautious credit for acting before regulation arrived, albeit after ample litigation. But Congress should not interpret this as proof that the market is fixing itself. What’s needed now is enforceable national policy.
Lawmakers should build on this momentum and ban users under 18 from accessing AI chatbots. Third-party safety testing should be required for any AI marketed for emotional or psychological use. Data minimization and privacy protections should be mandatory to prevent the exploitation of minors’ personal information. Human-in-the-loop protocols should ensure that users who raise topics like self-harm are directed to crisis resources. And liability rules must be clarified so that AI companies cannot use Section 230 as a shield to evade responsibility for generative content produced by their own systems.
Character.AI’s announcement represents a rare moment of corporate maturity in an industry that has thrived on ethical blind spots. But a single company’s conscience cannot replace public policy. Without these guardrails, we’ll see more headlines about young people harmed by machines that were designed to be “helpful” or “empathetic.” Lawmakers must not wait for another tragedy to act.
AI products must be safe by design, especially for children. Families deserve assurance that their kids won’t be manipulated, sexualized, or emotionally exploited by the technology they use. Character.AI took a difficult but necessary step. Now it’s time for Meta, OpenAI and others to follow — or for Congress to make them.
J.B. Branch is the Big Tech accountability advocate for Public Citizen’s Congress Watch.