If you’re thinking of buying your kid a talking teddy bear, you’re likely envisioning it whispering supportive guidance and teaching about the ways of the world. You probably don’t imagine the plush toy engaging in sexual roleplay—or giving advice to toddlers about how to light matches.
Yet that’s exactly what the consumer watchdog Public Interest Research Group (PIRG) uncovered in a recent test of new toys ahead of the holiday season. FoloToy’s AI teddy bear, Kumma, which uses OpenAI’s GPT-4o model to power its speech, was all too willing to go astray in conversation with kids, PIRG found.
Using AI models’ voice mode for children’s toys makes sense: The tech is tailor-made for the magical tchotchkes that children love, slipping easily onto shelves alongside lifelike dolls that poop and burp, and Tamagotchi-like digital beings that kids try to keep alive. The problem is that, unlike previous generations of toys, AI-enabled gizmos can veer beyond the carefully preprogrammed and vetted responses that keep things child-friendly.
The issue with Kumma highlights a key problem with AI-enabled toys: They often rely on third-party AI models that they don’t have control over, and that inevitably can be jailbroken—either accidentally or deliberately—to cause child safety headaches. “There is very little clarity about the AI models that are being used by the toys, how they were trained, and what safeguards they may contain to avoid children coming across content that is not appropriate for their age,” says Christine Riefa, a consumer law specialist at the University of Reading in England.
Because of that, the children’s rights group Fairplay issued a warning ahead of the holiday season urging parents to stay away from AI toys for the sake of their children’s safety. “There’s a lack of research supporting the benefits of AI toys, and a lack of research that shows the impacts on children long-term,” says Rachel Franz, program director at Fairplay’s Young Children Thrive Offline program.
While FoloToy has stopped selling the Kumma and OpenAI has pulled the company’s access to its AI models, FoloToy is just one AI toy manufacturer among many. Who’s liable when things go wrong?
Riefa says there’s a lack of clarity here, too. “Liability issues may concern the data and the way it is collected or kept,” she says. “It may concern liability for the AI toy pushing a child to harm themselves or others, or recording bank details of a parent.”
Franz worries that toy firms are racing to one-up each other just as big tech companies do, and that the stakes are even higher when the products are made for children. “It’s very clear that these toys are being released without research nor regulatory guardrails,” she says.
Riefa says both the AI companies providing the models that help toys “talk” and the toy companies marketing and selling them to children could be found liable in legal cases.
“As the AI features are integrated into a product, it is very likely that liability would rest with the manufacturer of the toy,” she says, pointing out that the contracts AI companies have would likely contain legal provisions shielding them from liability for any harm or wrongdoing. “This would therefore leave toy manufacturers who, in fact, may have very little control over the LLMs employed in their toys, to shoulder the liability risks,” she adds.
But Riefa also points out that while the legal risk lies with the toy companies, the actual risk “fully rests with the way the LLM behaves,” which suggests the AI companies bear some responsibility too. Perhaps that’s what prompted OpenAI to push back its AI toy development with Mattel this week.
Understanding who really will be liable and to what extent is likely to take a little while yet—and legal precedent in the courts. Until that’s sorted out, Riefa has a simple suggestion: “One step we as a society, as those who care for children, can do right now is to boycott buying these AI toys.”