The AI game is shifting – why ECEC needs to pay attention

by Freya Lucas

September 09, 2024

The advent of artificial intelligence (AI) has already been a ‘game changer’ in the field of early childhood education and care (ECEC). From products which write observations of children’s learning through to chatbots which respond to parent enquiries, AI has a ubiquitous presence in the sector.

 

As is often the case with emerging technologies, new products have arrived more rapidly than the regulations and ethical discussions needed to govern them, leaving legislation to protect end users from their consequences trailing after the fact.

 

In a bid to support better decision making in this space, businesses across Australia, including those working in the ECEC sector, are now being invited to adopt the Voluntary AI Safety Standard (or the International Organization for Standardization’s version) and to start gathering the documentation they need to make better decisions about AI.

 

Let’s explore what this document is, why it matters for ECEC services, and some other considerations around the use of AI in the sector.

 

What is the Voluntary AI Safety Standard? 

 

The Voluntary AI Safety Standard helps organisations of all types and sizes, including those working in ECEC, to develop and deploy AI systems in Australia safely and reliably.

 

The standard has 10 voluntary ‘guardrails’ which are designed to help organisations benefit from AI while mitigating and managing the risks it may pose to organisations, people and groups.

 

These guardrails are especially important for sectors working with vulnerable populations, such as ECEC, aged care and disability services.

 

The standard sets out what the guardrails are and when to use them, and also offers definitions, links to tools and resources, and information on how AI interacts with other business guidance and regulations.

 

It is hoped that as more and more businesses, across a multitude of sectors and industries, adopt the standard, Australian and international vendors and deployers will feel market pressure to ensure their products and services are fit for purpose.

 

In turn, these products and services should become cheaper, and it should become easier for end users to know whether the AI system they are buying, relying on or being assessed and rated against (in an ECEC context) is actually fit for purpose.

 

Why is this important for ECEC? 

 

A multitude of products and services are offered to the ECEC sector on any given day, and in the AI space, many of these products are pitched as solving the problem of ‘paperwork’ or ‘compliance’.

 

In a time-pressured sector experiencing a shortage of qualified staff, many of these products can seem like the answer to a prayer, removing the need to link observations of children to theory, philosophy, or the requirements of the National Quality Framework.

 

Claims such as “all your documentation gets created for you in seconds, saving you over four hours per week” can seem very appealing to an educator who has 11 children to write observations for, and to Directors and leaders who are told every day that their staff are suffering from burnout.

 

Eminent early childhood researcher Dr Kate Highfield has concerns about such products, concerns that are mirrored by many in the sector.

 

“They do have to be used with caution, as the responses these tools generate are only as good as the information we input,” she said when speaking with CELA, “and then we also need to consider the data we are uploading, the privacy of this work of our children and of course the requests we are making.” 

 

Valid concerns 

 

In a review across a multitude of sectors and industries, Deloitte recently found that nearly three quarters of respondents (73 per cent) were concerned about AI making factual errors in 2023. By 2024 this figure had risen to 87 per cent. Other major issues flagged by respondents were the misuse of personal, confidential or sensitive information (89 per cent of respondents), legal risk and copyright infringement (84 per cent) and a lack of accountability (also 84 per cent).

 

These concerns are worrying in all contexts, but especially so in ECEC. AI making a “mistake” with sensitive information about a child’s mental health and wellbeing could have far-reaching consequences for the child and their family.

 

While it may seem low risk, in that an observation of a child displaying complex emotion is unlikely to be read outside the context of the centre and the family, the information being ‘fed’ into AI tools adds to the volume of information held by these systems as a whole.

 

Essentially, AI is learning from itself. When it is ‘fed’ information about the way that children learn, grow and develop, it is learning from this information and drawing conclusions about what children are, what they need, and how best to meet their needs.


When this information is applied to children in ‘the real world,’ without the lens of critical thinking and the overlay of professional knowledge and practice, it presents a real and present risk.

 

What’s next? 

 

ECEC services, approved providers, leaders, researchers and advocates are strongly encouraged to look more deeply into the new voluntary standard, and into the use of AI in their services.

 

Adding ‘how does AI help to…’ to some reflective questions from the National Quality Standard may help to guide this thinking.

 

Using Quality Area 1 – Program and Practice as an example: 

 

  • How does AI help us to develop our understanding of the approved learning framework that we use in our service to foster learning outcomes for all children?
  • How does AI support all children to progress towards the learning outcomes? 
  • How does AI help us learn about each child’s knowledge, strengths, ideas, culture, abilities and interests?
  • How does AI help us make decisions about children’s daily experiences and routines, and who is involved in making these decisions?
  • How does AI ensure that experiences and routines are child-centred rather than adult-directed or clock-driven? 

 

Perhaps most important, when looking at the adoption of AI overall, are the questions: Who is advantaged when I work in this way? Who is disadvantaged? What questions do I have about my work? What am I challenged by? What am I curious about? What am I confronted by?

 

Learn more about the voluntary standard here.
