Bill to criminalise AI tools used for child abuse to be introduced in parliament

Readers are advised that the content of this article may prove distressing, and should consider their own circumstances before continuing. A list of support services is provided at the conclusion of the article.
A new private member’s bill seeks to close a critical legal gap by criminalising AI technologies designed to generate child sexual abuse material.
Independent MP Kate Chaney will this week introduce legislation to the House of Representatives aimed at making it illegal to possess, distribute or develop artificial intelligence (AI) tools built for creating child abuse material.
While the possession and distribution of child abuse material are already crimes in Australia, there is currently no specific prohibition on accessing or sharing the AI tools that can generate this content.
Ms Chaney, the Member for Curtin, said the legislation is urgently needed to address risks posed by the rapid spread of AI programs that allow offenders to create unlimited amounts of child sexual abuse imagery.
“These tools enable the on-demand, unlimited creation of this type of material,” Ms Chaney said. “Perpetrators can train AI with images of a particular child, delete the original material to avoid detection, and still generate new abuse content with word prompts. This creates an enormous risk to children and makes police work significantly more challenging.”
She warned that AI-generated abuse images still begin with real photos of children, meaning actual harm is caused to victims during the creation process.
The bill will introduce several new offences, including:
- using a carriage service to download, access, supply or facilitate technologies designed to create child abuse material
- scraping or distributing data with the intention of training or creating these AI tools
The offences would carry a maximum penalty of 15 years’ imprisonment. Exemptions would apply to law enforcement, intelligence agencies and others authorised to investigate child abuse crimes.
The legislation follows a roundtable discussion on AI and child exploitation, which recommended urgent action to outlaw the tools.
Child safety experts have described the proposal as a vital step in addressing a dangerous gap. Former Australian of the Year and advocate Grace Tame has also urged swift action, saying the government is moving too slowly to protect children online.
Jon Rouse, a former police detective inspector and child protection expert, said the bill addresses a “critical gap” in current law.
“Existing legislation prosecutes the production of child sexual abuse material, but does not yet address the use of AI to generate such material,” Professor Rouse said.
Colm Gannon, Australian chief of the International Centre for Missing and Exploited Children, said there was “no societal benefit” to the existence of these tools and supported the bill as “a clear and targeted step to close an urgent gap”.
The federal government has yet to respond to a major review of the Online Safety Act, which recommended banning so-called “nudify” apps that use AI to undress or sexualise images. Attorney-General Michelle Rowland has acknowledged the gap in current laws and said child protection remains a priority.
“Keeping young people safe from emerging harms is above politics,” Ms Rowland said. “The government will carefully consider any proposal that strengthens our responses to child sexual exploitation and abuse.”
The government continues to develop a broader framework for AI regulation, balancing safety with innovation and productivity. However, Ms Chaney stressed that immediate legislative action is required.
Advocates argue that the technology is evolving faster than legislative processes, and waiting for a comprehensive AI regulatory framework could leave children vulnerable.
Ms Chaney said: “While we work on a holistic approach to AI, we can plug the gaps in existing legislation to address high-risk cases like this, so we can continue to build trust in AI and protect children.”
She also noted that regulating AI must become a priority for this term of government.
The rise of AI abuse tools has significant implications for child safety, law enforcement and the regulation of emerging technologies. By explicitly targeting these technologies, the bill seeks to ensure that laws remain fit-for-purpose in the face of rapidly advancing digital threats.
Stakeholders in the early childhood education and care sector, which is committed to the highest standards of child safety, will be closely monitoring the bill’s progress. The proposed legislation underscores the need for proactive measures to address risks to children in both physical and online environments.
The bill is expected to be introduced to parliament shortly. If passed, it will mark a significant strengthening of Australia’s child protection laws and set a precedent for regulating high-risk AI technologies.
As the debate unfolds, the focus remains on safeguarding children from harm while ensuring laws keep pace with technological change.
This story is based on original coverage produced by ABC News.