
Report warns: AI is being used to create deepfake child sexual abuse images of real victims

By Rebecca Camber, Crime and Security Editor, and Fran Wolfisz

02:40 July 22, 2024, updated 05:36 July 22, 2024

  • When Olivia was rescued by the police, she thought her rape ordeal was over
  • But thanks to advances in artificial intelligence, her torment may never end
  • Paedophiles use AI software to generate “tailor-made scenarios” of their abuse



When Olivia was rescued by police at the age of eight, she thought her five-year rape ordeal was over.

But thanks to advances in artificial intelligence, her suffering may never end.

Paedophiles use AI software to generate new content and “tailor-made scenarios” based on real images of their abuse shared online.

Olivia was rescued by police in 2013 after being repeatedly raped and sexually tortured since she was three years old.

Her tormentor posted so many pictures that analysts from the Internet Watch Foundation (IWF) saw them daily as part of their work.

Child sexual abuse victim Olivia is now being tormented by offenders using AI tools to create new abuse material from images of her (file photo)

Olivia, now 20, is victimised anew every time images of her abuse are shared and sold on the internet.


Perpetrators have created AI models of her that can be downloaded for free, allowing them to generate new images of her in any setting or engaging in any sexual activity imaginable.

In a new report released today, the IWF also found models for generating AI imagery of celebrity children and warned of a “new cottage industry of online criminals creating lifelike child sexual abuse imagery to order.”

Additionally, a darknet forum user reportedly shared an anonymous web page with links to AI models for 128 different named victims of child sexual abuse.

Susie Hargreaves, chief executive of the IWF, said: “Survivors of some of the worst traumas now have no peace, knowing that perpetrators can use images of their suffering to create any abuse scenario they desire.”

Paedophiles use AI software to generate new content and “tailor-made scenarios” based on real abuse images (archive photo)

“Without adequate controls, generative AI tools provide a playground for online predators.”

The IWF added that the AI tools used to create the images remain legal in the UK, even though artificially generated images of child sexual abuse are themselves illegal.


A spokesperson for the group said: “Although Olivia is now free from her abuser, like many other survivors, she will continue to be a victim of abuse as images of her abuse continue to be shared, sold and viewed online.”

“This torment has now reached a new level with the advent of generative text-to-image AI that is being exploited by criminals.

“Fine-tuned models like Olivia’s were trained on the images that IWF analysts saw every day, but which they could not eradicate despite their best efforts.

“This means that the suffering of survivors potentially has no end, as perpetrators can create as many images of the children as they want.”

“From conversations with adult survivors of repeated abuse, the IWF knows that it is psychological torture for them when images of their abuse continue to be shared on the internet.

“For many survivors, the idea that they could be identified or even recognised through images of the abuse is terrifying.”

IWF analysts concluded that 90 per cent of the AI images were realistic enough to fall under the same law as real child sexual abuse material (CSAM), and that the images were becoming increasingly extreme.

The report warned that “hundreds of images can be spat out with just a click of the mouse” and that some of them were of “almost flawless, photorealistic quality”.

Ms Hargreaves added: “We will be closely monitoring how industry, regulators and government respond to the threat to ensure that the suffering of Olivia and children like her is not exacerbated, reinterpreted and replicated using AI tools.”

Richard Collard of the NSPCC said: “The speed at which AI-generated child abuse imagery is developing is incredibly worrying, but it is also preventable.”

IWF analysts found that 90 per cent of the AI images were realistic enough to fall under the same law as real child sexual abuse material (CSAM) and that they were becoming increasingly extreme (archive photo)

“Too many AI products are being developed and brought to market without even the most basic considerations for child safety. This re-traumatises children who have been victims of abuse.”

“It is critical that child protection is a key pillar of any government legislation on AI safety.

“We must now also demand tough action from technology companies to stop the ever-increasing abuse of artificial intelligence and ensure that children whose images are used are identified and supported.”

Victoria Green, chief executive of the Marie Collins Foundation, said: “Knowing that perpetrators can now use readily available AI technology to create and distribute further content of their abuse is not only abhorrent to victims and survivors, it also causes enormous anxiety. Victims and survivors have a right not to live in fear of re-victimisation.”

A government spokesman said: “We welcome the Internet Watch Foundation’s report and will carefully consider its recommendations.”

“We are committed to taking further action to keep children safe online and to take action against those who cause harm, including using AI.”