AI in early learning: Balancing innovation with ethics in the age of artificial intelligence

by Fiona Alston

October 28, 2025

As artificial intelligence (AI) becomes increasingly embedded in everyday life, its presence in early childhood education (ECE) is growing rapidly, bringing new possibilities and pressing ethical questions.


A recent editorial by Jennifer J. Chen (Kean University, USA) and Hui Li (Education University of Hong Kong), published in the journal AI, Brain and Child, explores the dual promise and peril of AI in ECE, calling for robust, child-focused frameworks to guide its ethical integration into early learning environments.


From digital voice assistants like Alexa and Siri to AI-powered adaptive learning platforms, young children are increasingly engaging with AI in ways that influence their learning, communication and development. The authors describe this environment as a “digital playground,” where AI affords children the opportunity to explore, create and learn in unprecedented ways.


In classrooms, educators are also starting to embrace AI tools that support personalisation and accessibility. One example cited is the use of Amira, an AI-driven reading platform, to support bilingual students with individualised feedback and reading interventions.


While these tools offer clear educational benefits, Chen and Li warn of AI’s “double-edged sword” potential. Ethical risks, including data privacy breaches, biased algorithms, overuse and diminished child agency, are among the concerns raised, particularly for children aged three to eight, a period marked by rapid developmental change.


These risks highlight the need for thoughtful governance and shared responsibility. The authors call for coordinated efforts from policymakers, developers, educators and families to ensure AI is used in ways that are developmentally appropriate, equitable and transparent.


To guide ethical integration, Chen and Lin (2024) propose the POWER framework, which holds that AI in ECE should be:


  • Purposeful
  • Optimal
  • Wise
  • Ethical
  • Responsible


These principles aim to support early learning environments that enhance children’s experiences without compromising their rights or wellbeing.


Implications for the ECEC sector


For Australian early childhood education and care providers, this research offers timely insights. As digital technologies evolve, educators and centre leaders face increasing pressure to balance innovation with developmental appropriateness.


Embedding ethical AI use into service delivery, staff training, curriculum design, and family engagement is essential. This includes assessing the suitability of AI-powered tools, understanding how data is used, and ensuring technologies align with the National Quality Framework’s emphasis on children’s agency, wellbeing and inclusion.


Chen and Li’s work urges the sector to embrace AI not as a replacement for human connection, but as a supplement to enhance learning when guided by values of equity, responsibility and care.


“The current state of AI in ECE is marked by both educational benefits and ethical challenges, underscoring the need to generate transformative knowledge that can inform both policy and practice” – Chen and Li (2025)


The full editorial is available under a Creative Commons licence via ResearchGate: AI and the Developing Child – Chen & Li (2025)


Additional resources on Early Years programs for educators can be found on the eSafety Commissioner website.
