AI “slop” is flooding children's feeds: What it means for early learning settings

by Fiona Alston

January 29, 2026

Low-effort, AI-generated videos are increasingly targeting the youngest viewers on YouTube and YouTube Kids. For early childhood education and care services, the issue raises practical questions about digital media, family partnerships and children’s wellbeing.


Generative AI has made it cheap and fast to produce children’s videos at scale. The result is a growing volume of low-quality, repetitive content designed to capture attention and accumulate views, rather than support children’s learning and development. 


Recent reporting and investigations have highlighted two overlapping trends:


  • creators using AI tools to churn out simple songs and animations aimed at very young children 
  • concerns that some AI-generated children’s videos can include unsafe or disturbing material disguised as child-friendly content 


For early childhood education and care (ECEC) leaders, the challenge sits at the intersection of child development, online safety and family expectations.


Why this matters for children’s learning and wellbeing


Infants and toddlers are already spending significant time with digital media, and research indicates that a growing share of that time involves YouTube.


At the same time, regulators and experts continue to flag the role that recommender systems and design features can play in keeping children watching, including auto-play and endless feeds. 


Even where content appears “educational”, quality varies widely. Repetitive, poorly made videos can crowd out richer experiences such as shared reading, sustained play, conversation, and active music-making.


What platforms say they are doing


YouTube has introduced disclosure requirements for content that is meaningfully altered or synthetically generated when it appears realistic, using an “altered content” setting in YouTube Studio. 


The platform has also signalled stronger enforcement around monetisation for “mass-produced” or “repetitive” videos, framing the changes as a clarification of existing expectations for original and authentic content. 


These moves matter, but they do not solve the day-to-day problem faced by families and educators: children can still encounter large volumes of low-quality material, and content that is “safe” is not necessarily “helpful”.


The Australian context: tighter expectations around children’s online safety


Australia’s policy settings continue to shift. From 10 December 2025, certain platforms, including YouTube, are required to take reasonable steps to prevent Australians under 16 from having accounts. Public content can still be viewed without logging in. 


Separately, the Online Safety Act framework and eSafety’s codes and standards focus on reducing exposure to illegal and age-restricted content and on raising expectations of platform responsibility.


For ECEC services, these developments reinforce an important point: families are navigating a more complex media environment, and services may be asked for guidance.


Practical actions for ECEC leaders and educators


  1. Make screen use part of the wellbeing conversation – Embed digital media into existing guidance on rest, routines and self-regulation. Keep the focus on what supports children’s development, rather than moralising about screens.
  2. Support co-viewing and active use – Where families choose screen content, encourage co-viewing and “talk alongside” strategies: naming objects, responding to children’s cues, and linking what is on screen to real-world play.
  3. Strengthen service policies for digital media – Review any screen-related practices in the service, including excursions, special events, or transitions, and ensure the rationale is clear and consistent with the service philosophy.
  4. Share credible resources on online safety – eSafety provides practical guidance for families on how platforms work and how to use parental controls, including for YouTube Kids. 
  5. Use the issue to deepen partnerships with families – Families are under pressure, and “quick fixes” can feel like survival. A respectful, strengths-based approach will land better than blanket rules.


AI-generated children’s content is not a niche problem. It is a scale problem, and one that will keep evolving as tools improve and creators chase easy revenue. For early learning services, the most useful response is calm, practical leadership: clear policies, credible information, and strong partnerships with families as they navigate children’s digital worlds. 
