Storypark research reveals three key concerns about AI use in ECEC

by Storypark

October 14, 2025

Early childhood leaders and educators’ key concerns about the use of artificial intelligence (AI) in the sector came through loud and clear in recent research conducted with Storypark customers. We explore how a considered approach to AI development can address these challenges while supporting professional expertise.

 

Early childhood education’s next technological leap

 

Technology has long shaped how early childhood educators document children’s learning, from film cameras and paper portfolios to digital cameras and online platforms. Each shift has brought fresh possibilities alongside challenges: digital cameras reduced the time lag in sharing learning moments but introduced concerns about educators spending valuable non-contact time managing hundreds of images, while online portfolios transformed family engagement yet raised questions about maintaining children’s direct connection with their own learning stories.

 

Now AI is emerging as the next step in this evolution. Yet research conducted with Storypark customers reveals a gap between adoption and governance: 54% of respondents’ organisations lack any AI policy, while over 60% are already using AI tools like ChatGPT and Microsoft Copilot. As with most technological advances, adoption often outpaces effective and appropriate legal, regulatory, and societal frameworks.

 

Many early childhood professionals are using AI tools without formal guidance or institutional support for navigating the complex questions these tools raise. The key concerns identified through our customer research? Lack of training and clear guidelines (48% of respondents), privacy and security (46% of respondents), and ethical considerations (46% of respondents). These aren’t abstract fears but rather reflect genuine challenges as some professionals navigate AI platforms without support, guardrails or data protection guarantees.

 

The critical question facing the sector isn’t whether AI belongs in early childhood education; it’s already here. The question is how to implement it thoughtfully while upholding the sector’s values and standards.

 

Storypark CEO Jamie McDonald suggests that rather than responding with blanket acceptance or rejection, the early childhood sector’s history of reflective practice and critical thinking positions it well to navigate AI implementation thoughtfully.

 

Three key concerns about AI use in ECEC

 

Three main concerns are emerging as professionals integrate AI tools into their daily work: issues around training and guidance, child safety and data protection, and important ethical considerations. Understanding these challenges is crucial for any organisation looking to support thoughtful AI implementation.

 

Concern one: Lack of training and clear guidelines (48% of respondents)

 

Nearly half of respondents feel unprepared to navigate AI tools effectively, and it’s not just about learning the technology. The real challenge is understanding how AI fits within their teaching practice, how to maintain professional judgment while using it, and what they should tell families about its use.

 

Many educators are experimenting quietly with tools like ChatGPT to help write learning stories or family communication, but without clear guidance about appropriate boundaries. Some worry they’re compromising their professional integrity, while others fear they’re missing opportunities to grow their practice.

 

As part of the Storypark Responsible AI Commitments, we take a collaborative and community-centric approach.

 

Our recent webinar, Navigating the future of documentation: Aligning AI with purpose and practice, aimed to support this approach, bringing together academic research, pedagogical expertise, and ECEC leadership to explore opportunities and challenges. Dr Kate Highfield, whose research explores the impact of technology as a tool with young children, parents and educators, joined education project advisor Jess Mitchell, who is interested in how technology can empower educators. Together with Storypark’s lead pedagogical consultant Amanda Higgins, they explored the complexities AI brings to documentation practice, from best practice to professional responsibilities. 

 

We have also developed a range of free resources, including guidance for developing AI policy, a sample letter for family communication, and a guide to implementing AI in ECE created with the Storypark pedagogical team.

 

Concern two: Privacy and security (46% of respondents)

 

Educators have told us they are genuinely worried about what happens to children’s data when they use AI tools. The concerns go well beyond basic security – they want to know who owns the information once it’s uploaded, how long it’s stored, and whether children’s details could end up training AI models that might later reproduce that information. Many popular AI tools use uploaded content to improve their systems by default (OpenAI policy FAQ). While platforms like ChatGPT allow users to opt out through privacy settings, research shows users often lack awareness of how AI chatbots collect and use their data.

The terms of service for such platforms can also be unclear about data usage. For a sector handling sensitive information about young children, this uncertainty is particularly troubling.

 

As part of the Storypark Responsible AI Commitments, we commit to protecting your privacy, security and data.

 

Storypark has built privacy protection into our AI features from the ground up, using enterprise-grade services from AI providers OpenAI and Anthropic. When educators use AI-powered features in Storypark, such as grammar checks, only the text and title are securely sent to these providers to generate responses – never images, videos, PDF documents, comments, or information about which children are included.

 

All data is automatically deleted after processing and is never used to train AI models. Customer data is processed only for the duration of the associated AI query and response. Storypark collects metadata internally to understand feature usage and improve functionality, but this metadata is not shared with our AI partners.
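
To make this data-minimisation approach concrete, the sketch below shows how a documentation platform might strip a learning story down to only its title and text before calling an AI provider. It is an illustration written in TypeScript with hypothetical type and field names and an example URL; it is not Storypark’s actual code.

// Hypothetical schema for illustration - not Storypark's real data model.
interface LearningStory {
  title: string;
  text: string;
  images: string[];     // media - never sent to the AI provider
  comments: string[];   // family comments - never sent
  childIds: string[];   // identifying details - never sent
}

// Reduce a story to the minimum fields a grammar check needs.
function toAiPayload(story: LearningStory): { title: string; text: string } {
  return { title: story.title, text: story.text };
}

async function grammarCheck(story: LearningStory): Promise<string> {
  const payload = toAiPayload(story); // images, comments and child info stay behind
  const response = await fetch("https://ai-provider.example.com/v1/grammar", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  const result = await response.json();
  // Nothing from the request is persisted here; retention and training
  // opt-outs are assumed to be covered by the provider's enterprise terms.
  return result.suggestion;
}

The key design choice is that exclusion happens at the point of payload construction, so fields that should never leave the platform cannot reach the provider by accident.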

 

Storypark has set and maintains strict data boundaries, and provides transparent information through resources like the AI fact sheet, which explains exactly what happens to data when AI features are used: what is shared, how it’s deleted, and confirmation that it’s not used for training.

 

Concern three: Ethical considerations (46% of respondents)

 

This encompasses questions about authenticity, professional integrity, and the appropriateness of AI assistance in documenting children’s learning and development. Early childhood leaders and educators worry about maintaining their authentic voice, ensuring AI suggestions align with their professional values, and being transparent with families about technology use. Many also fear that using AI might diminish the perception of their professional competence.

 

This also includes concerns about AI bias, ensuring equitable support for all children, and preserving the relationships that define quality early childhood education. Early childhood leaders and educators want to avoid AI creating content that misrepresents children’s voices and abilities, or that produces documentation that feels ‘artificial’ rather than authentic.

 

As part of the Storypark Responsible AI Commitments, we commit to supporting quality practice, openness, and honesty.

 

When creating AI features in Storypark, keeping the educator’s authentic voice and thoughts is at the heart of our design approach. Instead of automatically writing entire learning stories or observations, we provide reflective prompts, developed with our team of pedagogical consultants, that encourage educators to think deeply about pedagogical practice. All AI-generated suggestions are fully editable, ensuring educators maintain control over their professional voice. AI tools are always optional in Storypark: even if a service has access, each educator can choose whether or not to use them, giving them more autonomy over their technology choices.
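
As a rough sketch of how an ‘always optional’ design can work in practice (hypothetical names, not Storypark’s implementation), every AI entry point can sit behind a two-level opt-in check:

// Hypothetical settings - AI must be enabled at the service level
// AND opted into by the individual educator before any feature runs.
interface ServiceSettings { aiFeaturesEnabled: boolean; }
interface EducatorSettings { aiOptIn: boolean; }

function canUseAi(service: ServiceSettings, educator: EducatorSettings): boolean {
  return service.aiFeaturesEnabled && educator.aiOptIn;
}

// A reflective prompt is only generated when both levels have said yes,
// and the educator can still edit or discard whatever comes back.
function requestReflectivePrompt(
  service: ServiceSettings,
  educator: EducatorSettings
): string | null {
  if (!canUseAi(service, educator)) return null; // feature stays fully off
  // Placeholder: a real system would generate a pedagogy-informed prompt here.
  return "What learning dispositions does this observation show?";
}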

 

At Storypark we believe technology should serve to strengthen rather than replace the professional expertise that defines quality early childhood education.

 

Looking forward: Purposeful innovation

 

The three concerns emerging across the sector – training gaps, privacy vulnerabilities, and ethical uncertainties – point to challenges that demand a thoughtful sector response and shouldn’t be ignored.

 

AI is being integrated in real-time, often by individual educators working without higher-level guidance or guardrails. This creates both risk and opportunity. The risk is fragmented adoption that undermines professional standards and data protection. The opportunity is for the sector to collectively define what responsible AI looks like in early childhood settings, enabling thoughtful implementation that supports quality documentation, professional development, and family engagement.

 

Organisations are exploring how AI can support pedagogical reflection and streamline administrative tasks. Some services are beginning to evaluate AI tools against transparent data practices and design approaches that preserve educator voice, though sector-wide guidance is still developing.

 

“I would always want my writing to sound like my personal voice, my genuine warmth for the child,” reflects one educator, capturing the tension at the heart of AI adoption in ECEC: an insistence that tools align with professional values.

 

Addressing these challenges will require collective effort: policy frameworks that help services make informed decisions, professional development that builds AI literacy alongside pedagogical understanding, and industry standards that protect children’s data.

 

At Storypark, we’ve focused on treating AI as a mentor rather than a replacement, keeping educators’ insight and voice at the heart, and building privacy protection into the design from the ground up. We’ve embedded optional AI features within our existing platform, reducing the need for separate tools and supporting consistency across services.

Despite everything AI can do, the essence of early childhood education remains deeply human. Children thrive through caring relationships and responsive interactions with educators who understand their individuality, culture, and development. As Storypark CEO Jamie McDonald notes, “People will always be at the centre of education, and the professional intuition and insight of early childhood educators is invaluable.” Technology makes documentation and planning easier, but it can’t replicate the warmth, intuition, or professional expertise that define quality teaching. Moving forward, the sector must embrace innovation with purpose, using technology to strengthen, not replace, the reflective, relationship-centred practice that ensures children’s wellbeing and upholds the integrity of early learning.

 

Storypark is Australia’s leading early childhood education technology platform, trusted by over 11,850 centres worldwide to support them in driving high quality outcomes for children, families, and their organisation. Storypark offers a complete childcare management platform with everything you need to manage your early childhood education service, strengthen learning and engagement, reduce admin, support compliance and boost occupancy. Learn more about our approach to AI at https://hubs.ly/Q03MtyZM0.
