These AI Apps Terrify Kids. The UK Wants Them Gone.

Key Takeaways

  • The Children’s Commissioner, Dame Rachel de Souza, is urging the UK government to ban AI apps used to create fake sexually explicit images of children, often called ‘nudification’ or deepfake apps.
  • These generative AI tools are widely available, sometimes free, and causing fear among young people, particularly girls, who are changing their online behaviour to avoid being targeted.
  • Dame Rachel’s new report calls for specific bans, legal responsibility for AI developers, and systems to remove AI-generated child sexual abuse material online.
  • School leaders support the call, highlighting the need for laws to keep pace with technology.
  • The government says creating or sharing such material is already illegal under the Online Safety Act and notes new offences target AI tools specifically designed for generating child abuse content.

The UK’s Children’s Commissioner is demanding a ban on artificial intelligence applications that create fake sexually explicit images of children.

Dame Rachel de Souza wants the government to outlaw any app enabling ‘nudification’ – altering real photos with AI to make individuals appear naked – and those used to create explicit deepfakes of young people.

She highlighted that these generative AI tools are already common and often free, making misuse easy. While producing or sharing explicit images of a child is illegal, the technology itself remains legal and accessible, according to a report from The Independent.

In her new report, Dame Rachel revealed that children, especially girls, are modifying how they act online out of fear of these apps.

“Children have told me they are frightened by the very idea of this technology even being available, let alone used,” Dame Rachel stated. They worry that anyone with a smartphone could misuse it to create manipulated fake images of them.

She stressed the negative impact, saying, “Girls have told me they now actively avoid posting images or engaging online to reduce the risk of being targeted.”

Dame Rachel argues there’s “no positive reason for these particular apps to exist” and they should have “no place in our society.”

Besides the ban, her report recommends making AI developers legally responsible for preventing child safety risks and creating better ways to remove deepfake child sexual abuse material from the internet.

It also suggests recognizing deepfake sexual abuse as a specific form of violence against women and girls in law and policy.

Support for this stance came from Paul Whiteman of the school leaders’ union NAHT, who noted that members share concerns about the technology being used against both students and staff.

He mentioned the union would discuss criminalizing the creation and sharing of non-consensual deepfakes, emphasizing that technology is outpacing legal frameworks.

In response, a government spokesperson affirmed that creating or distributing child sexual abuse material, including AI-generated images, is illegal and platforms must remove it under the Online Safety Act or face large fines.

The spokesperson added that the UK is introducing specific AI child sexual abuse offences, criminalizing tools designed to generate such harmful content.
