AI Regulation for Early Childhood Education Centres

by Dan Pearce, General Counsel at Holding Redlich

November 19, 2024

The Commonwealth government has released its latest initiative to regulate the development and use of artificial intelligence (AI) tools, with potential impacts on early childhood education and care (ECEC) services that use or plan to use AI-driven technologies. 

 

The package comprises two parts – a voluntary regime, and a proposed mandatory regime that will apply to so-called “high risk” uses of AI once the basis of its application has been resolved.

 

Both set out a series of almost identical guardrails or safety standards. The main difference is that the last safety standard in the voluntary scheme requires engagement with the organisation’s stakeholders, rather than the assessment and certification proposed under the mandatory guardrails.

 

Let’s take a closer look at the 10 guardrails and how these might be applied to ECEC services:

 

1. Establish an accountability process and arrangements to ensure regulatory compliance

 

This may involve designating a team member or external advisor to oversee AI compliance, including assessing risks and ensuring AI tools align with regulatory requirements and ethical standards. ECEC operators could conduct regular audits or set up a compliance committee to review AI tools, such as automated enrolment systems or apps that track children’s learning milestones.

 

2. Establish a risk management process to identify and minimise risks

 

Conduct a risk assessment for each AI application to identify potential risks to child privacy or safety. For example, if using AI-powered facial recognition for sign-in/sign-out processes, review the risk of data misuse and implement security protocols to minimise these risks.
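
For services that keep their compliance records digitally, a risk assessment can take the form of a simple register of each AI tool, its identified risks and the mitigations in place. The sketch below is purely illustrative – the tool, risk descriptions and ratings are hypothetical – and a spreadsheet kept by the compliance lead would serve the same purpose.

```python
from dataclasses import dataclass, field

@dataclass
class AIRiskEntry:
    """One row in a simple AI risk register (illustrative only)."""
    tool: str                        # e.g. a sign-in kiosk or learning app
    purpose: str
    risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)
    risk_rating: str = "unassessed"  # e.g. "low", "medium", "high"

# Hypothetical register entry for an AI-powered sign-in kiosk
register = [
    AIRiskEntry(
        tool="Facial-recognition sign-in kiosk",
        purpose="Automate sign-in/sign-out of children",
        risks=["Biometric data misuse", "Incorrect identity matches"],
        mitigations=["Encrypt stored images", "Staff confirm every match"],
        risk_rating="high",
    ),
]

# Flag anything rated high risk for review by the compliance lead
for entry in register:
    if entry.risk_rating == "high":
        print(f"Review required: {entry.tool} – {', '.join(entry.risks)}")
```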

 

3. Establish systems to manage the quality of data involved

 

Handling sensitive data, such as the personal records of children and families, requires maintaining privacy at all times. There is also a risk that the data an AI tool is trained on, or draws from, could introduce bias into its outputs. Consider how and where all of this data is stored and establish systems to protect it from breaches.

 

4. Test AI tools to assess performance, both before and following implementation

 

Before deploying an AI-driven tool, you could test its accuracy and reliability with a pilot program. Continue testing after implementation to monitor its effectiveness and then adjust based on real-world outcomes.
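
Where a service has access to technical support, a pilot can be as simple as comparing the tool’s output with educators’ own judgements on a small sample, and repeating the check after rollout. The sketch below is illustrative only – the sample data and the 90 per cent threshold are hypothetical choices a service would set for itself.

```python
def pilot_agreement(ai_outputs: list[str], educator_judgements: list[str]) -> float:
    """Share of cases where the AI tool agreed with an educator's judgement."""
    matches = sum(a == e for a, e in zip(ai_outputs, educator_judgements))
    return matches / len(educator_judgements)

# Hypothetical pilot sample: milestone flags from an AI tool vs. educators
ai_outputs = ["on track", "review", "on track", "on track", "review"]
educator_judgements = ["on track", "on track", "on track", "on track", "review"]

agreement = pilot_agreement(ai_outputs, educator_judgements)
print(f"Pilot agreement: {agreement:.0%}")

# A service might set its own threshold before moving beyond the pilot
if agreement < 0.9:
    print("Below threshold – keep the tool in pilot and investigate the disagreements.")
```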

 

5. Ensure there is a human in the loop to allow oversight and interventions as required

 

Even if an AI system generates learning recommendations, maintain a staff member’s involvement in approving these recommendations. For instance, educators should review AI-suggested lesson plans to ensure they align with the centre’s philosophy and are suitable for each child’s developmental stage.
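
In software terms, keeping a human in the loop simply means nothing the tool generates reaches children or families until an educator has signed off. The short sketch below (with hypothetical names) shows one way that approval step could be enforced rather than left to habit.

```python
from dataclasses import dataclass

@dataclass
class LessonSuggestion:
    """An AI-generated lesson plan awaiting educator review (illustrative)."""
    child_id: str
    content: str
    approved_by: str | None = None  # stays None until an educator signs off

def publish(suggestion: LessonSuggestion) -> str:
    # Guardrail 5: block release of anything that lacks human approval
    if suggestion.approved_by is None:
        raise PermissionError("Educator approval required before publishing.")
    return f"Published for {suggestion.child_id} (approved by {suggestion.approved_by})"

suggestion = LessonSuggestion(child_id="child-042", content="Water-play counting activity")
suggestion.approved_by = "Lead educator, Room 2"  # recorded only after review
print(publish(suggestion))
```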

 

6. Keep users informed regarding the use of AI and AI-generated content

 

Clearly communicate with parents about how AI is used within the centre, such as through newsletters or information sessions. For example, if AI is used to track developmental progress, provide parents with explanations of how the data is collected, stored, and used in their child’s learning journey.

 

7. Establish protocols for those affected by AI to challenge the outcomes

 

Set up a system for parents and staff to question AI-driven decisions – for example, where a parent disagrees with an AI-generated report on their child’s progress. Offer a straightforward process for these concerns to be reviewed by a qualified staff member or an external consultant.

 

8. Ensure transparency with other parties to facilitate their risk assessment

 

If third-party organisations or inspectors need to assess the AI systems in place, keep detailed records of how these systems are used. For example, if the centre uses AI to monitor attendance or incidents, be prepared to provide transparent reports on data handling and system functionality.

 

9. Maintain records to allow assessment of compliance

 

Document all decisions related to AI implementation, risk assessments, testing results, and any data protection measures. For instance, keep a log of parental consent for using AI-based tools or records of staff training on AI systems to demonstrate compliance if audited.
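
The key for guardrail 9 is that records are dated, attributable and easy to retrieve if a regulator asks. A minimal sketch of an append-only compliance log is below; the file and field names are hypothetical, and a well-kept spreadsheet or document register would do the same job.

```python
import csv
from datetime import date

LOG_FILE = "ai_compliance_log.csv"  # hypothetical file name

def log_event(event_type: str, detail: str, recorded_by: str) -> None:
    """Append one dated, attributable entry to the compliance log."""
    with open(LOG_FILE, "a", newline="") as f:
        csv.writer(f).writerow([date.today().isoformat(), event_type, detail, recorded_by])

# The kinds of entries guardrail 9 expects a service to be able to produce
log_event("parental_consent", "Consent received for AI milestone tracking (child-042)", "Director")
log_event("staff_training", "Room 2 educators completed AI tool induction", "Director")
log_event("risk_assessment", "Annual review of sign-in kiosk completed", "Compliance lead")
```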

 

10. Assess performance and certify compliance

 

Schedule regular reviews of all AI systems to confirm ongoing compliance with both voluntary and future mandatory requirements. This might involve updating data privacy protocols or adjusting AI usage in response to updated standards from the National Quality Framework (NQF) or ACECQA.

 

By applying these guardrails, ECEC services can more safely and effectively integrate AI in ways that align with regulatory expectations and service values, ensuring they protect child safety, uphold privacy, and remain compliant with evolving community standards.

 

What next?

 

There are a number of considerations for the government in implementing the mandatory guardrails, including determining what amounts to “high risk” AI.

 

First, the government must decide whether risk will be determined by way of lists of uses (as in the EU), or by way of principles (where the organisation makes its own decision having regard to issues such as the likely impact on users’ legal rights, safety or reputation).

 

Second, the government needs to consider how to treat AI that can be used for a variety of purposes or integrated into other products, and whether such AI should be regarded as high risk by default, given the possibility of its use in unforeseen situations.

 

Finally, there is the question of how best to implement the mandatory guardrails: proposals include introducing an AI-specific Act, supplementing existing regulation across privacy, child safety and consumer protection laws, or developing a general framework that coordinates existing requirements.

 

While mandatory requirements may not be legislated until next year, the Voluntary Safety Standards already apply to all AI scenarios, not just high-risk cases. By proactively reviewing and establishing AI safety measures now, ECEC services can demonstrate that they are taking AI risks seriously.
