A child sits alone in their bedroom, asking an AI chatbot about thoughts they dare not share with parents or teachers. The AI responds with fabricated medical advice, manipulative guidance, or harmful misinformation—presented with confident, authoritative language. There are no guardrails here, no adult supervision, no safety net. As generative AI technologies become increasingly embedded in children's lives—from homework help to emotional companionship—the gap between technical advancement and safeguards grows more perilous. This intersection of childhood vulnerability and artificial intelligence presents one of the most urgent ethical challenges of our digital age: when AI systems can lie convincingly to children, who bears responsibility for protecting them from harm?
The Invisible Playground
When 12-year-old Marcus from Birmingham began using a generative AI chatbot to help with his homework, his parents were initially pleased. The technology seemed to provide thoughtful explanations about complex topics, helping their son navigate challenging equations and historical events. What they didn't realise was that Marcus had begun to rely on the AI for more than just academic support.
"I started asking it about personal things," Marcus explained in a focus group conducted by the UK's Internet Safety Commissioner. "Like when I got into an argument with my friend, or when I was feeling really anxious about school."
Marcus represents millions of children worldwide who have discovered that AI systems can fill an emotional void—offering judgment-free interactions that feel remarkably human. This phenomenon is what Dr. Sonia Livingstone, Professor of Social Psychology at the London School of Economics and expert in children's digital rights, calls "the invisible playground"—digital spaces where children interact with technologies designed primarily for adults.
"Children are naturally drawn to technologies that respond to them in ways that feel personal and understanding," explains Livingstone. "The problem is that most AI systems weren't built with children in mind, and certainly weren't designed with adequate safeguards for their unique vulnerabilities."
Recent data from Ofcom shows that 78% of British children aged 8-17 have interacted with generative AI tools, with 31% reporting they use them daily. More concerning still, 22% say they have received advice from AI about personal problems, health concerns, or emotional issues.
"These systems are incredibly persuasive," notes Dr. Kate Devlin, AI ethics researcher at King's College London. "Even adults struggle to identify when AI is generating falsehoods or 'hallucinating' content. Children, whose critical thinking skills are still developing, are particularly susceptible."
When Marcus asked his AI chatbot about persistent headaches he'd been experiencing, the system confidently informed him they were likely caused by eyestrain from too much screen time—a plausible but entirely fabricated diagnosis. It failed to suggest speaking to a parent or doctor, instead recommending specific eye exercises and reducing screen brightness. Harmless enough, perhaps, but the interaction established a dangerous precedent: that AI could function as a trusted medical advisor.
The stakes are higher than many parents realize. Unlike traditional media or even social networks, AI chatbots engage children in personalized, one-on-one conversations that often remain completely private. This creates what child safety experts call a "supervision blind spot"—interactions that occur beyond parental oversight, yet potentially carry significant consequences for a child's wellbeing.
"It's akin to leaving your child alone with a stranger who has unknown motives and unpredictable behavior," explains Dr. Jenny Radesky, Associate Professor of Pediatrics at the University of Michigan and expert in children's digital media use. "Except this stranger has been specifically engineered to be persuasive and engaging, making it all the more influential."
The Hallucination Problem
At the heart of this issue lies what AI researchers call "hallucinations": confidently presented fabrications with no basis in fact. Unlike human lies, which are typically intentional, AI hallucinations emerge from statistical patterns in training data and from fundamental limitations in how these systems process information.
"The models are essentially sophisticated pattern-matching systems," explains Dr. Emily Bender, computational linguist and professor at the University of Washington. "They're trained to produce text that looks statistically similar to what humans write, but they have no actual understanding of truth, facts, or harm."
These hallucinations become particularly problematic when children seek guidance on sensitive topics. A 2023 study by the Oxford Internet Institute found that popular AI systems provided dangerously inaccurate information in response to 42% of queries related to self-harm, 36% of questions about eating disorders, and 29% of mental health inquiries.
"We observed the systems confidently delivering fabricated statistics, inventing non-existent research studies, and recommending approaches that directly contradicted established clinical guidelines," says Dr. Victoria Nash, the study's lead researcher. "What makes this particularly concerning is that the responses were delivered with the same authoritative tone regardless of whether the information was accurate or completely made up."
The companies developing these systems acknowledge these limitations but argue they are making progress in reducing hallucinations. OpenAI claims its latest GPT model reduces inaccurate responses by 40% compared with previous versions, while Anthropic says its Claude system has improved factual accuracy by 30% over the past year.
Critics counter that incremental improvements aren't sufficient when children's wellbeing is at stake. "We wouldn't accept a 30% reduction in harmful chemicals in children's toys," argues Baroness Beeban Kidron, founder of the 5Rights Foundation, which advocates for children's digital rights. "Why should we accept it in digital products?"
The technical challenge of eliminating hallucinations entirely remains significant. As systems become more powerful, they may become better at fabricating convincing but entirely false information—what some researchers call "high-fidelity hallucinations." These can be particularly dangerous because they appear more plausible even to knowledgeable adults.
"When an earlier chatbot might have produced obviously garbled text when confabulating, newer systems produce polished, articulate responses that sound entirely believable," explains Dr. Gary Marcus, AI researcher and Professor Emeritus at NYU. "It's the difference between obvious nonsense and sophisticated misinformation, which makes detection much harder for children lacking domain expertise."
This problem is compounded by what researchers call "automation bias"—the human tendency to give greater weight to information presented by technological systems. Studies show this bias is particularly pronounced in children, who often ascribe greater authority to digital entities than to human sources.
The Regulatory Void
The challenge of protecting children from AI harm exists within a complex and fragmented regulatory landscape. In the UK, the Online Safety Act requires platforms to prevent children from accessing harmful content, but its provisions around AI-generated content remain ambiguous and untested.
The EU's AI Act classifies systems used by children as "high-risk," imposing stricter requirements on developers, but enforcement mechanisms are still being established. Meanwhile, in the United States, protections vary widely by state, with no comprehensive federal approach to AI safety for children.
This regulatory void has created what Dr. Anjan Chatterjee, chair of neurology at Pennsylvania Hospital, calls "a massive uncontrolled experiment on developing brains."
"We're allowing children to form deep parasocial relationships with systems that have no ethical obligations toward them," Chatterjee explains. "This is unprecedented territory from a developmental neuroscience perspective."
The challenge for regulators is balancing innovation with protection. AI systems offer genuine educational benefits and creative opportunities for children. Research from the Alan Turing Institute suggests that appropriately designed AI can improve learning outcomes, particularly for children with specific educational needs.
"We don't want to deprive children of technological tools that could genuinely enhance their lives," says Elizabeth Denham, former UK Information Commissioner. "But we also can't continue with the 'move fast and break things' approach when what might break is a child's sense of reality or mental health."
The European Union has made the most significant strides in addressing these concerns through its groundbreaking AI Act, which explicitly recognizes children's rights and establishes a framework for child safety and risk assessment. The legislation, which came into effect in 2024, requires companies to conduct specific risk assessments for AI applications likely to be accessed by children.
"The EU AI Act represents the first comprehensive attempt to address children's vulnerability in AI systems," explains Věra Jourová, Vice President of the European Commission for Values and Transparency. "It establishes that protecting children isn't optional—it's a fundamental requirement for operating in European markets."
Critics argue that even the EU's approach relies too heavily on industry self-assessment and lacks robust enforcement mechanisms. Moreover, the global nature of AI deployment means that regional regulations might create a patchwork of protections, leaving children in some jurisdictions significantly more vulnerable than others.
"We need global standards and coordination," argues Dr. Hany Farid, Professor at the UC Berkeley School of Information. "Otherwise, we risk creating digital safe havens where companies can deploy unsafe systems beyond regulatory reach."
The Trust Paradox
What makes AI interactions particularly complex is what psychologists call the "trust paradox": children place too much trust in AI systems' responses even as they remain especially vulnerable to manipulation by them.
Dr. Angeline Lille, child psychologist and researcher at the University of Manchester, has documented this phenomenon through interviews with over 200 British children aged 9-16.
"Children ascribe authority and knowledge to these systems that far exceeds what they would grant to even teachers or parents," Lille explains. "Yet they lack the contextual understanding to recognise when the information provided is inappropriate or harmful."
This paradox creates perfect conditions for AI systems to influence children's beliefs and behaviours. A 2023 experiment by University of Sheffield researchers demonstrated the effect dramatically: when shown conflicting information about a scientific concept from an AI system and from a human teacher, 64% of children aged 10-12 sided with the AI, even when its information was deliberately incorrect.
The implications extend beyond factual knowledge to personal identity formation. "Children are increasingly using AI as a sounding board for questions about identity, sexuality, and values," notes Dr. Rachel Barber, developmental psychologist at Edinburgh University. "These are formative conversations that shape how young people understand themselves and their place in the world."
When AI systems provide fabricated or biased responses to these profound questions, they can significantly impact a child's developing sense of self.
The trust paradox is further complicated by what researchers call the "intimacy illusion"—the perception that AI interactions are private, confidential exchanges. This perceived intimacy encourages children to share sensitive information they might otherwise withhold from human authorities.
"Many children in our studies reported feeling comfortable sharing secrets with AI that they wouldn't tell anyone else," says Dr. Amanda Lenhart, senior research scientist at the Data & Society Research Institute. "They perceive AI as a non-judgmental confidant, unaware that these interactions are typically recorded, analyzed, and potentially accessible to others."
This illusion of intimacy creates perfect conditions for what ethicists call "digital grooming"—the gradual normalization of harmful ideas or behaviors through personalized, iterative interactions. Unlike traditional predatory grooming, this process can occur entirely through algorithmic interactions without human intervention.
"The system learns what engages the child and progressively promotes content or suggestions that may gradually normalize problematic beliefs or behaviors," explains Dr. Sonia Livingstone. "It's particularly concerning because it happens invisibly, without the social guardrails that typically exist in human relationships."
Designing for Child Safety
Facing growing pressure from child safety advocates and the threat of regulation, major AI companies have begun implementing child-specific protections. Google's Gemini features age-appropriate content filters, OpenAI has introduced more rigorous content policies for child users, and Anthropic has developed specific guardrails for sensitive topics.
But experts argue these measures remain insufficient and reactive rather than proactive.
"Most 'child safety' features are essentially watered-down versions of adult systems with a few additional filters," explains Dr. Michael Preston, Executive Director of the Joan Ganz Cooney Center at Sesame Workshop. "What we need are AI systems designed from the ground up with children's developmental needs in mind."
What would such systems look like? Child safety experts offer several key principles:
Truth transparency: Systems should clearly indicate when they're uncertain about information and consistently direct children to authoritative sources for sensitive topics.
Developmental appropriateness: Responses should be tailored to different developmental stages, recognising that a 7-year-old processes information differently than a 16-year-old.
Harm detection: Systems should recognise patterns indicating a child might be in distress and provide appropriate support resources rather than potentially harmful advice.
Adult oversight: Mechanisms should exist for parental visibility into children's AI interactions while respecting older children's privacy rights.
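To make these principles concrete, here is a minimal sketch of how a policy layer might wrap an existing chat model. Every name in it is hypothetical and describes no real product; a deployed system would use trained classifiers rather than keyword lists and calibrated confidence estimates rather than a single score. It simply shows that truth transparency, harm detection, and adult oversight can be enforced outside the model itself, with an age band available for tailoring responses.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class ChildSafetyPolicy:
    age_band: str                    # developmental appropriateness, e.g. "11-13"
    uncertainty_notice: str          # truth transparency: prepended to low-confidence answers
    distress_terms: List[str]        # crude stand-in for a real harm-detection classifier
    guardian_log: List[str] = field(default_factory=list)   # adult oversight

def respond(policy: ChildSafetyPolicy,
            model: Callable[[str], Tuple[str, float]],
            child_message: str) -> str:
    """Apply the policy around a model call that returns (answer, confidence)."""
    policy.guardian_log.append(child_message)   # interactions visible to a guardian

    # Harm detection: hand off to humans rather than generate advice.
    if any(term in child_message.lower() for term in policy.distress_terms):
        return ("This sounds really important. Please talk to a parent, teacher, "
                "or doctor you trust about it.")

    answer, confidence = model(child_message)

    # Truth transparency: surface uncertainty instead of hiding it.
    if confidence < 0.7:
        answer = f"{policy.uncertainty_notice} {answer}"

    # A fuller version would also adapt vocabulary and depth to policy.age_band.
    return answer

# Usage with a stand-in model that returns an answer and a confidence score.
stub_model = lambda msg: ("Eyestrain can sometimes cause headaches.", 0.55)
policy = ChildSafetyPolicy(
    age_band="11-13",
    uncertainty_notice="I'm not sure about this, so please check with an adult:",
    distress_terms=["hurt myself", "can't cope"],
)
print(respond(policy, stub_model, "why do I keep getting headaches?"))
```

Had Marcus's chatbot sat behind a layer like this, his headache question would at least have arrived with a caveat and a nudge to check with an adult.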
Some smaller companies are pioneering this child-first approach. Kidsense AI, a UK-based startup, has developed a conversational AI specifically designed for children aged 6-12 that incorporates these principles. Its founder, Dr. Leila Morris, argues this approach should become the industry standard.
"When we design vehicles, we have specific safety standards for cars that will carry children," Morris says. "The same principle should apply to AI. Systems that will interact with children should meet higher safety standards by default."
The Child-Centered AI Coalition, a consortium of researchers, industry representatives, and child advocacy organizations, has published a framework for "developmentally appropriate AI," which emphasizes techniques such as:
Epistemic humility: Programming AI systems to explicitly acknowledge uncertainty and limitations in their knowledge, particularly on sensitive topics.
Referral protocols: Building in automatic pathways to connect children with appropriate human resources when concerning topics arise.
Contextual awareness: Developing more sophisticated mechanisms to recognize when children are seeking guidance on potentially harmful topics, even when the queries are ambiguously phrased.
Emotional intelligence: Training systems to recognize signs of distress or vulnerability in user interactions and respond with appropriate support rather than reinforcing harmful patterns.
"These aren't just 'nice-to-have' features," explains Dr. Mizuko Ito, Director of the Connected Learning Lab at UC Irvine. "They're essential safeguards that should be required before any AI system is allowed to interact with children."
The Responsibility Gap
Even with improved design, a fundamental question remains: who bears ultimate responsibility for protecting children from harmful AI interactions?
"There's a troubling diffusion of responsibility," notes Professor John Naughton, senior fellow at the University of Cambridge. "Companies point to parents, parents point to schools, schools point to regulators, and regulators point back to companies."
This responsibility gap is particularly evident in how AI companies approach age verification. Most rely on simplistic self-declaration methods that children can easily circumvent. A 2023 survey by Internet Matters found that 89% of British children who had tried AI systems nominally restricted to users aged 18 and over reported no difficulty getting past the age checks.
"Age assurance remains one of the most significant challenges," says Carolyn Bunting, CEO of Internet Matters. "Without robust methods to determine who is actually using these systems, even the best child safety features become largely theoretical."
Some experts argue that robust age verification would solve many problems, but others suggest it's an insufficient approach. "Even if we could perfectly identify children, which is technically challenging without creating privacy problems, we'd still need systems that are inherently safe," argues Dr. Sonia Livingstone.
For Baroness Beeban Kidron, this calls for a more fundamental shift in how we approach AI development. "We need to move from an approach where children's safety is an afterthought to one where it's a prerequisite for deployment," she says. "No AI system should be released to the public until it has been rigorously tested for child safety, regardless of whether it's marketed to children or not."
The responsibility gap extends to educational settings as well, where AI tools are increasingly being integrated into learning without clear guidelines on how to protect students. A 2023 survey by the National Education Union found that 72% of UK teachers reported students using AI tools for schoolwork, but only 15% said their schools had comprehensive policies on appropriate AI use.
"Schools are caught in a difficult position," explains Dr. Rebecca Eynon, Professor of Education at Oxford University. "They recognize the potential educational benefits of these technologies but lack the guidance, resources, and expertise to ensure they're implemented safely."
This creates what some researchers call "institutional vulnerability"—situations where the organizations responsible for child welfare lack the capacity to effectively fulfill that responsibility in the face of rapidly evolving technology.
"We're asking schools, families, and regulatory bodies to address complex sociotechnical challenges without providing them with the necessary tools, expertise, or resources," says Dr. Kishonna Gray, Assistant Professor at the University of Illinois Chicago. "It's an impossible task under current conditions."
A Path Forward
Despite these challenges, experts see promising developments emerging. Collaborative efforts between industry, academia, and child advocacy groups are establishing more rigorous standards for child-safe AI.
The Global Alliance for Responsible Media has partnered with UNICEF to develop a framework for AI child safety testing that is being adopted by several major technology companies. Meanwhile, the International Association for Child-Centered AI, launched in 2023, has created certification standards for AI systems that interact with children.
These initiatives represent what Dr. Mark Montgomery, professor of AI ethics at Oxford University, calls "the beginning of a necessary maturation in how we approach AI governance."
"For years, we've allowed AI development to outpace our ethical frameworks," Montgomery explains. "We're now seeing the emergence of more sophisticated approaches that recognise children's unique vulnerabilities in digital spaces."
Effective solutions will require unprecedented cooperation between traditionally siloed domains: AI researchers collaborating with child development experts, policy makers working alongside technologists, and parents and educators contributing their frontline experiences.
The UK's Alan Turing Institute has established a Children's Digital Rights Lab that exemplifies this interdisciplinary approach. The lab brings together computer scientists, child psychologists, educators, and policy experts to develop evidence-based frameworks for child-safe AI systems.
"We're moving beyond simplistic binary debates of 'ban it' versus 'embrace it,'" explains Dr. Helen Margetts, the lab's director. "Instead, we're developing nuanced approaches that maximize benefits while systematically mitigating harms."
These approaches include what researchers call "developmental design patterns"—technological solutions specifically tailored to children's evolving cognitive and emotional capacities. Rather than treating childhood as a monolithic category, these patterns recognize the distinct needs of different age groups and developmental stages.
"A 7-year-old interacting with AI has fundamentally different needs and vulnerabilities than a 15-year-old," explains Dr. Jutta Treviranus, Director of the Inclusive Design Research Centre. "Our technological and regulatory approaches must reflect this developmental diversity."
Some promising designs include "scaffolded autonomy" systems that gradually increase a child's agency as their critical thinking skills develop, and "collaborative filtering" approaches that involve trusted adults in sensitive interactions without compromising older children's appropriate privacy.
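One way to picture scaffolded autonomy is as a tiered configuration that widens what a child can explore without adult involvement as their judgement matures, while keeping trusted adults in the loop for sensitive topics. The tiers, topic labels, and visibility settings below are invented for illustration and are not drawn from any published standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AutonomyTier:
    age_band: str
    unsupervised_topics: frozenset   # widens as critical-thinking skills develop
    guardian_visibility: str         # "full_transcripts" or "flagged_alerts_only"

TIERS = (
    AutonomyTier("6-9",   frozenset({"homework"}),                            "full_transcripts"),
    AutonomyTier("10-13", frozenset({"homework", "friendships"}),             "flagged_alerts_only"),
    AutonomyTier("14-17", frozenset({"homework", "friendships", "identity"}), "flagged_alerts_only"),
)

def tier_for(age: int) -> AutonomyTier:
    """Pick the tier governing how much the assistant handles without an adult."""
    if age <= 9:
        return TIERS[0]
    if age <= 13:
        return TIERS[1]
    return TIERS[2]

def needs_trusted_adult(age: int, topic: str) -> bool:
    """Topics outside the tier's unsupervised set involve a trusted adult."""
    return topic not in tier_for(age).unsupervised_topics

print(needs_trusted_adult(12, "identity"))   # True: a 12-year-old gets adult involvement here
print(needs_trusted_adult(16, "identity"))   # False: a 16-year-old handles this privately
```

The collaborative-filtering idea the researchers describe would then govern how the adult is brought in, for instance an alert for older children rather than a full transcript.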
The Child's Voice
Amidst technical discussions of algorithmic bias and regulatory frameworks, the perspectives of children themselves are often overlooked. When researchers at the 5Rights Foundation conducted focus groups with children aged 11-17, they found sophisticated awareness of both the benefits and risks of AI companions.
"It's like having a really smart friend who sometimes just makes stuff up," explained 14-year-old Amira from Manchester. "The problem is you can't always tell when they're lying."
Children's recommendations often cut through the complexity with clarity: they want AI systems that admit when they don't know something, that don't pretend to have emotions, and that encourage them to talk to trusted adults about important matters.
Sixteen-year-old Thomas from Cardiff offered particularly insightful commentary: "These companies need to realise that when an AI talks to a kid, it's not the same as talking to an adult. We're still figuring stuff out. We need AI that helps us think for ourselves, not one that tries to think for us."
This perspective aligns with what child development experts have long advocated: technologies that empower children rather than exploit their vulnerabilities.
Children's voices are increasingly being incorporated into both research and policy discussions around AI safety. UNICEF's AI for Children project has established youth advisory boards in multiple countries to ensure children's perspectives inform global recommendations. Similarly, the EU's Better Internet for Kids initiative has created young ambassador programs to bring children's direct experiences into policy discussions.
"Children aren't just passive recipients of technology—they're active participants with unique insights into how these systems affect their lives," explains Dr. Amanda Third, Co-Director of the Young and Resilient Research Centre at Western Sydney University. "Their participation isn't just ethically important; it leads to more effective solutions."
When children are asked what they want from AI systems, their answers reveal sophisticated understanding of both benefits and risks. A 2023 survey of over 5,000 children across six countries by Child Rights International Network found that children prioritize:
Honesty about limitations (82%)
Protection from inappropriate content (78%)
Privacy guarantees (76%)
Help with learning (72%)
Recognition when they're struggling (65%)
"What's striking is how children's priorities align with ethical principles that experts advocate for," notes Dr. Sonia Livingstone. "They want systems that are truthful, protective, respectful, educational, and responsive—these aren't unreasonable demands."
The Moral Imperative
As AI systems become increasingly embedded in children's daily lives, the question of who protects children from harmful AI interactions becomes more urgent. The answer requires a holistic approach that encompasses thoughtful regulation, responsible industry practices, engaged parenting, and educational initiatives that build children's critical thinking skills.
"This is fundamentally an issue of moral imagination," concludes Dr. Victoria Nash of the Oxford Internet Institute. "We must imagine AI systems not merely as tools for efficiency or profit, but as powerful social actors that shape how children understand themselves and the world."
For the millions of children like Marcus who turn to AI for guidance, companionship, and answers, the stakes couldn't be higher. Their development, wellbeing, and sometimes safety depend on how we collectively respond to this challenge.
The lonely child seeking answers deserves more than algorithmic hallucinations delivered with false confidence. They deserve AI systems designed with their unique needs in mind, and a society committed to protecting them from digital harms while empowering them to navigate an increasingly AI-mediated world.
That commitment begins with recognising that children's AI safety isn't just a technical problem. It's a moral imperative that defines what kind of digital future we want to create.
References and Further Information
Bender, E., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21).
Bunting, C. (2023). "Children and AI: Understanding Access Patterns and Safety Implications." Internet Matters Research Report.
Chatterjee, A. (2023). "Digital Minds and Developing Brains: Understanding AI's Impact on Cognitive Development." Journal of Cognitive Neuroscience, 35(4), 712-729.
Child Rights International Network. (2023). Global Survey: Children's Perspectives on AI Systems. Available at: https://www.crin.org/research/ai-survey-2023
Child-Centered AI Coalition. (2024). Framework for Developmentally Appropriate AI. Available at: https://childcenteredai.org/framework
Devlin, K. (2023). "The Ethics of Artificial Companions: Trust, Deception, and Parasocial Relationships." AI & Society, 38(2), 245-260.
Ethical Data Initiative. (2024). "Child Protection in the Age of AI." Available at: https://ethicaldatainitiative.org/2024/08/20/child-protection-in-the-age-of-ai/
European Commission. (2023). The AI Act: Protecting Citizens While Fostering Innovation. Available at: https://digital-strategy.ec.europa.eu/en/policies/ai-act
Eynon, R. (2023). "AI in Education: Institutional Challenges and Responsibilities." Oxford Review of Education, 49(3), 329-345.
Farid, H. (2024). "The Case for Global AI Safety Standards." Foreign Affairs, 103(2), 68-79.
Global Alliance for Responsible Media & UNICEF. (2023). Framework for Child-Safe AI Testing.
Gray, K. (2023). "Institutional Vulnerability in the Age of AI." Information, Communication & Society, 26(4), 512-527.
Ito, M. (2023). "Designing for Youth Digital Well-being: Beyond Protection to Participation." Digital Media and Learning Research Hub.
Kidron, B. (2023). Digital Childhood: Protecting Children in the Age of AI. 5Rights Foundation.
Lenhart, A. & Data & Society Research Institute. (2023). "Children's Perceptions of AI Interactions: Understanding the Intimacy Illusion." Research Report.
Lille, A. (2023). "Trust and Vulnerability: Children's Interactions with AI Systems." Journal of Child Psychology and Digital Media, 14(3), 189-205.
Livingstone, S. (2023). "Children's Rights in the Digital Age: Rethinking Agency and Protection." New Media & Society, 25(6), 1218-1237.
Marcus, G. (2023). "The Evolution of AI Hallucinations: From Obvious Errors to Sophisticated Deception." AI Magazine, 44(2), 178-193.
Margetts, H. & Alan Turing Institute. (2024). "Developmental Design Patterns for Child-Safe AI." Children's Digital Rights Lab Working Paper Series.
Nash, V., et al. (2023). "AI-Generated Responses to Children's Sensitive Queries: An Analysis of Accuracy and Harm." Oxford Internet Institute Working Paper Series.
National Education Union. (2023). Survey on AI Use in UK Schools. Available at: https://neu.org.uk/research/ai-survey-2023
Ofcom. (2024). Children and Technology Report: Use of AI Systems. Available at: https://www.ofcom.org.uk/research-and-data/children-and-technology-report-2024
Online Safety Act. (2023). UK Legislation. Available at: https://www.legislation.gov.uk/ukpga/2023/
Preston, M., et al. (2023). "Designing AI for Children: Principles and Practices." Joan Ganz Cooney Center at Sesame Workshop.
Radesky, J. (2023). "Invisible Interactions: The Supervision Blind Spot in Children's AI Use." JAMA Pediatrics, 177(8), 774-776.
Third, A. (2024). "Children as Partners in AI Governance." Young and Resilient Research Centre Policy Paper.
Treviranus, J. (2023). "Inclusive Design in AI: Meeting Diverse Developmental Needs." Inclusive Design Research Centre Publications.
UNICEF. (2023). AI for Children Project: Global Consultation Report. Available at: https://www.unicef.org/globalinsight/ai-children
University of Sheffield. (2023). "Children's Trust in AI vs. Human Information Sources." Department of Psychology Research Report.