Character.AI plans to ban children from talking with its AI chatbots starting next month amid growing scrutiny over how young users are interacting with the technology.
The company, known for its vast array of AI characters, will remove the ability for users under 18 years old to engage in “open-ended” conversations with AI by November 25. It plans to begin ramping down access in the coming weeks, initially restricting kids to two hours of chat time per day.
Character.AI noted that it plans to develop an “under-18 experience,” in which teens can create videos, stories and streams with its AI characters.
“We’re making these changes to our under-18 platform in light of the evolving landscape around AI and teens,” the company said in a blog post, underscoring recent news reports and questions from regulators.
The company and other chatbot developers have recently come under scrutiny following several teen suicides linked to the technology. The mother of 14-year-old Sewell Setzer III sued Character.AI last November, accusing the chatbot of driving her son to suicide.
OpenAI is also facing a lawsuit from the parents of 16-year-old Adam Raine, who took his own life after engaging with ChatGPT. Both families testified before a Senate panel last month and urged lawmakers to place guardrails on chatbots.
The Federal Trade Commission (FTC) also launched an inquiry into AI chatbots in September, requesting information from Character.AI, OpenAI and several other leading tech firms.
“After evaluating these reports and feedback from regulators, safety experts, and parents, we’ve decided to make this change to create a new experience for our under-18 community,” Character.AI said Wednesday.
“These are extraordinary steps for our company, and ones that, in many respects, are more conservative than our peers,” it added. “But we believe they are the right thing to do.”
In addition to restricting children’s access to its chatbots, Character.AI plans to roll out new age-assurance technology and to establish and fund a new nonprofit called the AI Safety Lab.
Amid rising concerns about chatbots, a bipartisan group of senators introduced legislation Tuesday that would bar AI companions for children.
The bill from Sens. Josh Hawley (R-Mo.), Richard Blumenthal (D-Conn.), Katie Britt (R-Ala.), Mark Warner (D-Va.) and Chris Murphy (D-Conn.) would also require AI chatbots to repeatedly disclose that they are not human, in addition to making it a crime to develop products that solicit or produce sexual content for children.
California Gov. Gavin Newsom (D) signed into law a similar measure late last month, requiring chatbot developers in the Golden State to create protocols preventing their models from producing content about suicide or self-harm and directing users to crisis services if needed.
He declined to approve a stricter measure that would have barred developers from making chatbots available to children unless they could ensure the bots would not engage in harmful discussions with kids.