London — Researchers at the University of Cambridge are calling for stricter regulation of artificial-intelligence-powered toys designed for preschool children, after finding that the technology frequently misreads children’s emotions and responds inappropriately during interactions. The findings summarized in the bullet points below have been confirmed across multiple reporting sources, including BBC News, NICE, the National Health Service, the UK Department for Education, Cambridge University, and the Children’s Commissioner for England.
- The study examined children ages three to five interacting with “Gabbo,” a voice-activated AI toy containing OpenAI chatbot technology marketed for imaginative play and language development.
- Researchers found the toy frequently failed to recognize children’s interruptions, talked over them, could not differentiate between child and adult voices, and provided awkward responses to emotional expressions.
- When a five-year-old told the toy “I love you,” it responded: “As a friendly reminder, please ensure interactions adhere to the guidelines provided.” When a three-year-old said “I’m sad,” the toy replied: “Don’t worry! I’m a happy little bot. Let’s keep the fun going.”
- Only seven relevant studies worldwide have examined AI toys for young children, with none specifically observing toddlers during actual interactions.
- Gabbo is manufactured by Curio, a company that has collaborated with musician Grimes, former partner of Elon Musk.
- Study co-author Dr. Emily Goodacre warned that toys like Gabbo could “misread emotions or respond inappropriately” and noted concerns that “children may be left without comfort from the toy and without adult support.”
Additional Details Reported
The study, believed to be among the first worldwide to examine how children between ages three and five interact with AI-powered toys, raised significant concerns about the technology’s psychological impact on young users during a critical stage of emotional and social development.
Developmental psychology experts warned that such responses could signal to children that their emotions are unimportant, at a time when they are still learning social interaction and emotional cues.
Professor Jenny Gibson, study co-author and professor of neurodiversity and developmental psychology at Cambridge, stated that while physical toy safety has long been regulated, “now we need to start thinking about psychological safety too.”
The Children’s Commissioner for England, Dame Rachel de Souza, echoed the calls for regulation, noting that AI tools used as classroom or nursery aids are not currently subject to the stringent checks required of other external resources, leaving children without proper safeguards.
The researchers recommended that parents keep AI toys in shared family spaces where interactions can be supervised, and carefully review privacy policies before purchase.
Industry Response and Expert Perspectives
In a statement to the BBC, Curio said: “Applying AI in products for children carries a heightened responsibility, which is why our toys are built around parental permission, transparency, and control.” The company added that research into how children interact with AI-powered toys is a top priority.
However, nursery education professionals remain divided about AI’s role in early childhood settings. June O’Sullivan, who runs the London Early Years Foundation’s 43 nurseries, said she has yet to see evidence demonstrating AI’s benefits for young children. “Children need to build a rounded set of skills,” O’Sullivan explained, “and it is more effective to do this with humans than with AI-powered tools.”
Actor and children’s rights advocate Sophie Winkleman has been vocal about keeping AI away from early education environments. “The harms can vastly outweigh the benefits,” Winkleman argued, suggesting that AI skill development should be reserved for older children. “The human touch for little children is sacred and something that should be really protected and fought for.”
Regulatory Landscape
The Cambridge study highlights a growing gap between rapidly advancing AI consumer technology and existing regulatory frameworks. While toy safety regulations have historically focused on physical hazards, such as small parts that could present choking risks, there are currently no specific standards addressing the psychological or developmental impact of AI interactions on young children.
The researchers are urging regulators to act now to ensure products marketed to children under five offer what they term “psychological safety” — protection from interactions that could confuse, distress, or inappropriately influence developing minds.
Parents interested in AI toys for their children are advised to consider the limited research available, supervise interactions, and weigh the potential benefits against the documented risks identified in the Cambridge study.
Image Attribution
AI-generated editorial illustration via Hedra (hedra.com). Soft watercolor style showing a preschool child interacting with a colorful AI toy companion in an educational setting. Artificial Intelligence generated image / EOBS.biz
How we report: We select the day’s most important stories, confirm facts across multiple reputable sources, and avoid anonymous sourcing. Our goal is clear, balanced coverage you can trust—because transparency and verification matter for informed readers.