New study proposes child-safe AI framework after concerns about children and chatbots

A new University of Cambridge study has proposed a framework for child-safe artificial intelligence (AI), following incidents in which children treated chatbots as quasi-human and trustworthy, leaving researchers concerned about the risk of distress or harm.
AI chatbots frequently show signs of an “empathy gap”, the researchers argue, which puts young users at particular risk and makes “child-safe AI” an urgent priority.
Dr Nomisha Kurian, who led the research, has urged developers and policymakers to prioritise approaches to AI design that take greater account of children’s needs.
Children, she argues, are particularly susceptible to treating chatbots as lifelike, quasi-human confidantes, and as a result their interactions with the technology can go awry when it fails to respond to their unique needs and vulnerabilities.
The study links that gap in understanding to recent cases in which interactions with AI led to potentially dangerous situations for young users. They include an incident in 2021, when Amazon’s AI voice assistant, Alexa, instructed a 10-year-old to touch a live electrical plug with a coin. Last year, Snapchat’s My AI gave adult researchers posing as a 13-year-old girl tips on how to lose her virginity to a 31-year-old.
Both companies responded by implementing safety measures, but the study says there is also a need to be proactive in the long term to ensure that AI is child-safe. It offers a 28-item framework to help companies, teachers, school leaders, parents, developers and policy actors think systematically about how to keep younger users safe when they “talk” to AI chatbots.
“Children are probably AI’s most overlooked stakeholders,” Dr Kurian said. “Very few developers and companies currently have well-established policies on child-safe AI. That is understandable because people have only recently started using this technology on a large scale for free. But now that they are, rather than having companies self-correct after children have been put at risk, child safety should inform the entire design cycle to lower the risk of dangerous incidents occurring.”
Dr Kurian’s study examined cases where the interactions between AI and children, or adult researchers posing as children, exposed potential risks. It analysed these cases using insights from computer science about how the large language models (LLMs) in conversational generative AI function, alongside evidence about children’s cognitive, social and emotional development.
LLMs have been described as “stochastic parrots”: a reference to the fact that they use statistical probability to mimic language patterns without necessarily understanding them. A similar method underpins how they respond to emotions.
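As a loose illustration of what “statistical probability” means here, the minimal sketch below (not drawn from the study; the words and probabilities are made up) picks each next word in proportion to how likely it is to follow the preceding text, with no model of meaning or feeling behind the choice:

```python
import random

# Hypothetical, hand-made probabilities for the word that follows "I feel".
# A real LLM learns vast numbers of such distributions from text, but the
# principle is the same: it produces likely-sounding words, not understood ones.
next_word_probs = {
    "happy": 0.4,
    "sad": 0.3,
    "scared": 0.2,
    "alone": 0.1,
}

def sample_next_word(probs):
    """Pick a word in proportion to its probability."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

print("I feel", sample_next_word(next_word_probs))
```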
This means that even though chatbots have remarkable language abilities, they may handle the abstract, emotional and unpredictable aspects of conversation poorly, a problem that Dr Kurian characterises as their “empathy gap.”
They may have particular trouble responding to children, who are still developing linguistically and often use unusual speech patterns or ambiguous phrases. Children are also often more inclined than adults to confide sensitive personal information.
Despite this, children are much more likely than adults to treat chatbots as though they are human. Recent research found that children will disclose more about their own mental health to a friendly-looking robot than to an adult. Dr Kurian’s study suggests that many chatbots’ friendly and lifelike designs similarly encourage children to trust them, even though AI may not understand their feelings or needs.
“Making a chatbot sound human can help the user get more benefits out of it,” Dr Kurian said. “But for a child, it is very hard to draw a rigid, rational boundary between something that sounds human, and the reality that it may not be capable of forming a proper emotional bond.”
The full findings, including the 28 recommendations, are available in the published study.