Australia’s eSafety Commission Sounds the Alarm on Misuse of AI in Online Child Grooming
- The eSafety Commission in Australia has raised concerns over the potential misuse of AI for grooming children online. The commission reported that nearly 60% of the reports received last year were linked to sextortion, with deepfake technology playing a significant role in image-based abuse.
- The Australian government has responded by adopting a national set of AI ethics principles and is weighing AI’s implications for online safety and copyright. It has also allocated $41 million in the latest federal budget towards responsible AI deployment, underscoring its commitment to protecting citizens in an era of rapidly evolving artificial intelligence.

Sydney – Australia’s eSafety Commission is warning about the potential misuse of artificial intelligence (AI) in grooming children online. Amid a national debate over imposing restrictions on this swiftly advancing technology, the commission has highlighted the growing threat AI poses, particularly with regard to child exploitation and sextortion.
eSafety Commissioner Julie Inman Grant shared her concerns on Twitter, writing that “the manipulative power of generative AI to execute on grooming and sextortion is no longer speculative.” She pointed to the unchecked development and online deployment of AI technologies, as well as rising reports of cyberbullying and image-based abuse, particularly cases involving deepfake technology.
The Office of the Children’s eSafety Commissioner, established by the Enhancing Online Safety for Children Act in 2015, reported that almost 60% of the roughly 7,000 reports received last year through its image-based abuse scheme were connected to sextortion, the practice of extorting money or sexual favors by threatening to expose evidence of sexual activity. The commission believes the spread of deepfake technology, which can generate highly realistic synthetic media, could worsen the problem.
While Australian law requires social media services to comply with the commissioner’s safety requirements, the largely unregulated development and online distribution of AI technology have prompted concern.
In response, Australia has adopted a national set of AI ethics principles. A spokesperson for Ed Husic, the Minister for Industry and Science, said the government is also weighing AI’s implications for online safety and copyright.
“AI is not an unregulated area,” Husic’s spokesperson told Guardian Australia. “As part of explorations of additional regulation of AI, the government is consulting with a wide range of stakeholders regarding potential gaps and considering further policy.”
The Australian government has also sought advice on the “near-term implications of generative AI, including steps being taken by other countries.” Communications Minister Michelle Rowland confirmed that AI will be regulated by several bodies, including the eSafety Commissioner, the Australian Competition and Consumer Commission, the Australian Information Commissioner, and the National AI Centre.
Further demonstrating its commitment to safe AI practices, the Australian federal budget has allocated $41 million towards the responsible deployment of AI programs. The move underscores Australia’s determination to protect its citizens in an era of rapidly advancing artificial intelligence.