AI in Early Childhood: Asking the right questions

Asking questions about the use of AI in early childhood education: Creating your AI policy

by Bernadette, Storypark

October 09, 2024

Early childhood education is easily one of the best spaces to ask questions and be curious. We are continually inspired not only by children, who explore their world with creativity and diligence by asking “Why?”, but also by educators and practitioners who hone their practice through critical thinking and feedback.

Daniel Wahl, in his book Designing Regenerative Cultures, says, “The art of asking beautiful questions is about i) challenging assumptions, ii) inquiring about things normally taken for granted, and iii) wondering about new possibilities.”

As we start to see artificial intelligence (AI) crop up in many of the software tools we use day to day, it’s worth asking questions about the use of AI. We’ve seen the release of the Voluntary AI Safety Standard and the Australian Framework for Generative AI in Schools, the Generative Artificial Intelligence in Education policy paper in the UK, and, in a similar vein, 22 state departments of education in the United States have issued AI guidance in the last year. Each is an attempt to help those in education use AI tools in their settings both safely and reliably.

But how are we to approach the use of AI in early childhood education specifically? In 2024, the ubiquity of AI might make us feel we have to minimise any operational or ethical considerations we hold about how educators use it. However, reflective practice and critical thinking have always defined the ECE sector. Storypark’s Responsible AI Commitments were born out of a desire to connect the excitement we feel about AI’s possibilities with consideration of the impact of anything we create and use – championing responsible technology use. Similarly, it’s worth every early childhood organisation considering how it wants AI tools to be used in its settings.

Just like many services’ existing ICT and Cyber Safety policies, an AI policy can capture on a single page your intent to use AI tools to support and promote children’s learning, and it becomes a useful resource for management, educators and families.

Asking questions to form your organisation’s AI policy

To develop your service or organisation’s thinking and policy around the use of AI, we recommend asking some or all of the questions below. These are great questions to ask even if you’re already using AI:

> Do we know how the AI tools we want to use work?

Understanding how AI tools work doesn’t mean anyone on your team has to become a software developer or data expert; rather, any provider of AI tools should be transparent with you about how the tool uses the data you provide. Beyond the efficiencies the tools create, you’ll want to know how the child data you input will be used by the provider and whether you still maintain ownership once it is uploaded. Importantly, is your data used to train the AI? If so, aspects of your data could appear in the tool’s answers in the future. At Storypark, we take the protection of children’s information very seriously, so we do not work with any platform without knowing how it will use the content educators input. We also take additional steps to remove any names or other personal identifiers from any content we pass to an AI model.
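To make that last step more concrete, here is a minimal sketch of how a service might strip known names from an observation before sending it to an AI model. The function name and placeholder are invented for this example; it is illustrative only, not Storypark’s actual implementation, which would need to handle far more kinds of identifiers.

```python
import re

# Hypothetical sketch only: redact known names before sending text to an AI model.
# A real implementation would draw identifiers from the child's record and cover
# more than names (addresses, dates of birth, and so on).

def redact_identifiers(text: str, names: list[str]) -> str:
    """Replace each known name with a neutral placeholder."""
    for name in names:
        # \b matches whole words only; re.escape guards against special characters.
        text = re.sub(rf"\b{re.escape(name)}\b", "[child]", text, flags=re.IGNORECASE)
    return text

observation = "Today Mia built a tall tower with Aroha and counted every block."
print(redact_identifiers(observation, ["Mia", "Aroha"]))
# Output: Today [child] built a tall tower with [child] and counted every block.
```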


> Will we be open with families and children about when and where we use AI?


Disclosing the use of AI tools is something to consider when creating your service’s AI policy. IT specialists Brightly note, “it’s recommended that you create a policy around transparency and disclosure in the use of AI in your business when there’s potential for content to mislead, confuse or misrepresent.” Depending on how you plan to use AI, families do have the right to know.


> How will we ensure we continue to lean on educators’ professional intuition and knowledge?

At a minimum, the Voluntary AI Safety Standard in Australia calls attention to the need for “meaningful human oversight” to reduce the potential for unintended consequences as a result of using AI. We also know that in ECE, educators hold an unparalleled depth of knowledge about each child: their family, cultural values, goals and aspirations, and links to prior learning. The possibilities of AI to assist with documentation, assessment and planning are very exciting when considered in partnership with educators who are aware of its limitations. In your AI policy, you may consider how AI will be positioned as a partner and assistant rather than just a time-saving tool, as in the sketch below.
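As one illustration of what “meaningful human oversight” can look like in software, here is a hypothetical sketch in which an AI-generated draft cannot be published until an educator has reviewed it and signed it off. The class and function names are invented for this example and do not describe any real product’s workings.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of "meaningful human oversight": an AI-generated draft is
# never published until an educator has reviewed it (and edited it if needed).

@dataclass
class Observation:
    text: str
    ai_generated: bool = False
    approved_by: Optional[str] = None  # set only after an educator signs off

    def approve(self, educator: str, edited_text: Optional[str] = None) -> None:
        """Record the educator's review, applying any edits they made."""
        if edited_text is not None:
            self.text = edited_text
        self.approved_by = educator

def publish(observation: Observation) -> None:
    # The gate: AI-generated content must carry an educator's sign-off.
    if observation.ai_generated and observation.approved_by is None:
        raise PermissionError("An AI draft needs educator review before publishing.")
    print(f"Published: {observation.text}")

draft = Observation("Ari explored patterns with pebbles today.", ai_generated=True)
draft.approve("Sam", edited_text="Ari sorted pebbles by size, then by colour.")
publish(draft)
```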


> How will we maintain fairness and inclusivity in our use of AI?


With any AI tool, we want to be able to trust that the decisions it makes in the background are fair and unbiased, especially when working with young people who don’t have agency over our decision to use AI. It’s why, as part of our Responsible AI Commitments, Storypark commits to actively working to remove unfair biases in our data and algorithms that might negatively affect educators or children.

Similarly, the content created by AI tools matters too. If, for example, a child’s voice is misquoted or their behaviour is misrepresented, we may be concerned – and not just for authenticity’s sake, as pedagogical documentation is about more than a faithful recording of events: “it’s a means to learn about how children learn and think.” We recommend stating in your AI policy if you want to commit to only using AI tools that allow you to edit the content they create.

Aside from these questions, you may also want to assess AI tools against ECE standards in your region or country – in Australia, Freya Lucas suggests, “adding ‘how does AI help to…’ to some reflective questions from the National Quality Standard may help to guide this thinking.”

Sharing your organisation’s AI policy

From your answers to the above questions, you can distil your service’s core principles in relation to AI use, forming the foundation of your AI usage policy. It could take the form of an internal document shared with staff, or be shared more widely with families and your service’s community.

Communicating your AI usage policy is important so that everyone understands the guidelines and expectations around using AI at your service. If your policy includes commitments around the ethical and inclusive use of AI, consider offering training opportunities on those topics too. Best practice is to involve your families and encourage their input to ensure better buy-in.

Engaging stakeholders like families can lead to robust, healthy discussions – share your AI policy in a community post or display it in your service.

The Voluntary AI Safety Standard in Australia also advises, “incorporate AI governance into your existing processes or create new processes as you need them.” Consider how your new AI policy will be adopted into your day-to-day operations and how you will regularly revisit it.

Remember, as with any new tool, it’s a journey! We’ll continue to share more thoughts about how we can harness the exciting potential of AI in early childhood education while using the tools mindfully at the same time.

This piece was reshared here with the kind permission of the team from Storypark. Find the original here.
