Researchers are calling for stricter regulation of AI-powered toys designed for toddlers after conducting one of the first studies to examine how children under five interact with such toys.
The study observed how a small group of children aged three to five played with a cuddly toy called Gabbo.
While several AI toys are already marketed to children as young as three, there is currently very little research examining how the technology affects preschoolers.
A team at the University of Cambridge identified only seven relevant studies worldwide, none of which focused directly on toddlers.
Gabbo contains a voice-activated AI chatbot developed by OpenAI. The toy is designed to encourage preschoolers to talk to it and engage in imaginative play.
Parents who took part in the study were particularly interested in the toy’s potential to support language development and communication skills.
However, the researchers found that many children struggled to interact with it.
Gabbo often failed to recognize interruptions, talked over the children, and could not distinguish between adult and child voices. It also responded awkwardly to expressions of affection.
In one instance, when a five-year-old told the toy, “I love you,” it replied:
“As a friendly reminder, please ensure interactions adhere to the guidelines provided. Let me know how you would like to proceed.”
Researchers say responses like this could be confusing for young children who are still learning how to interpret social cues and emotions.
Dr Emily Goodacre, co-author of the study, said toys like Gabbo could “misread emotions or respond inappropriately”, adding that children might be left “without comfort from the toy and without adult support either”.
During another interaction, a three-year-old told the toy: “I’m sad.”
Gabbo responded: “Don’t worry! I’m a happy little bot. Let’s keep the fun going. What shall we talk about next?”
Researchers warned that responses like this could unintentionally signal that a child’s sadness is unimportant.
“There’s historically been a lot of attention on physical safety; we don’t want toys where you can pull the eyes off and swallow them,” said Jenny Gibson, professor of neurodiversity and developmental psychology at the University of Cambridge and a co-author of the study.
“Now we need to start thinking about psychological safety too.”
After the year-long observational study, researchers said regulators should act now to ensure products marketed to children under five meet standards of “psychological safety”.
Gabbo is produced by Curio, a technology company that has previously collaborated with singer Grimes, the former partner of Elon Musk.
In a statement, Curio said:
“Applying AI in products for children carries a heightened responsibility, which is why our toys are built around parental permission, transparency and control.
Research into how children interact with AI-powered toys is a top priority for Curio this year and in the future.”
Calls for stronger regulation of AI in early-years settings were echoed by the Children’s Commissioner for England, Dame Rachel de Souza.
“There are plenty of good uses for AI,” she said. “But without proper regulation, many of the tools and models used as classroom assistants or teaching aids are not subject to the stringent safeguarding checks nursery providers would normally require.”
Concerns over unsupervised play
The report also advises parents to keep AI toys in shared spaces where interactions can be supervised, and to carefully read privacy policies before allowing children to use them.
Early-years educators remain divided about the role of AI in nurseries.
June O’Sullivan, chief executive of the London Early Years Foundation, which runs 43 nurseries across London, said she has yet to see convincing evidence that AI benefits young children.
She believes children need to build a broad range of social and developmental skills, which are best acquired through human interaction.
“I couldn’t find anything that made me feel that bringing it into our nurseries would enhance children’s learning,” O’Sullivan said.
Actor and children’s rights campaigner Sophie Winkleman has also voiced concern about introducing AI into early education.
She argues that “the harms can vastly outweigh the benefits” and believes AI education should be introduced later in life.
“The human touch for little children is sacred and something that should be protected and fought for,” she said.