Key Takeaways
- The UK Children’s Commissioner is urging the government to immediately ban AI apps that create fake sexual images of children (“deepfakes”).
- Children report feeling scared and anxious about being targeted by this technology.
- Despite laws against creating and sharing such images, the AI tools themselves remain easy to find and use.
- The Commissioner says these apps are dangerous, serve no positive purpose, and should be outlawed completely.
- Further action is needed, including holding app developers accountable and making image removal easier.
The UK’s official advocate for children’s rights is demanding an immediate ban on artificial intelligence apps used to create fake sexual images of children.
These AI-generated “deepfakes” realistically depict real people in fabricated situations. The Children’s Commissioner, Dame Rachel de Souza, highlighted the growing fear among young people, especially girls, of becoming victims.
In a recent report, Dame Rachel said that children are actively avoiding posting photos online because they fear someone might use AI to create harmful, sexualized images of them.
“We cannot sit back and allow these bespoke AI apps to have such a dangerous hold over children’s lives,” she stated.
These apps, sometimes called “nudification” tools, have already caused distress, contributing to incidents like a school closure in Pennsylvania last year, as reported by PetaPixel.
Dame Rachel argues there is “no positive reason for these particular apps to exist” and is calling for decisive government action to ban them outright.
While UK law already makes creating and sharing sexual deepfakes illegal, the Commissioner points out a critical gap: the technology enabling this abuse is still legal and readily accessible.
“It is no longer confined to corners of the dark web but now accessible through large social media platforms and search engines,” Dame Rachel explained.
The report warns of the severe impact on child victims, linking deepfake abuse to serious mental health issues, including suicidal thoughts.
Beyond banning the apps, the Commissioner wants legal duties placed on app developers, easier ways to get abusive images taken down, and formal recognition of deepfake sexual abuse as violence against women and girls.
The core message is clear: the law must take the devastating harm caused by deepfakes much more seriously to protect children effectively.