
Generative AI and child sexual abuse material: An early cautionary lesson and a promise from tech companies

It is almost a truism that all technological innovations, including the tools of generative artificial intelligence (Gen AI), can be used for both good and evil purposes. This post looks at a decidedly disturbing use of Gen AI that clearly falls into the latter category – namely, its use to create sexually explicit images of minors. In fact, the indictment filed by the U.S. Department of Justice last month in United States v. Anderegg highlights the grim reality and “frightening concern” that my AEI colleague Daniel Lyons correctly predicted 14 months ago.

Anderegg, if the facts prove to be what the federal government claims, is an early warning example. It not only underscores the need for criminal prosecutions after misuse and abuse of Gen AI occurs, but also suggests that technology companies should take proactive measures beforehand to reduce the likelihood of such abuse without compromising the enormous benefits of the new technology. As AEI’s John Bailey aptly wrote in his summary of a report released in May by a bipartisan U.S. Senate task force on the future of AI in the United States, there must be “an environment in which AI innovation can flourish while ensuring safety and accountability.”

Facade of the Department of Justice. Via Adobe Stock.

Last month, the Justice Department’s Child Exploitation and Obscenity Section released a four-count indictment handed down by a federal grand jury against Steven Anderegg, a 42-year-old man from Holmen, Wisconsin. According to a Justice Department press release, Anderegg is “criminally charged in connection with his alleged production, distribution, and possession of AI-generated images of minors engaging in sexually explicit conduct, as well as his distribution of similar sexually explicit AI-generated images to minors.”

A government report describes “hundreds – if not thousands – of these images” as “hyperrealistic” and allegedly created by Anderegg “using a GenAI model called Stable Diffusion (created by Stability AI).” Specifically, he allegedly “used extremely specific and explicit prompts to create these images. Likewise, he used specific ‘negative’ prompts – that is, prompts that tell the GenAI model what not to include in the generated content – to avoid creating images depicting adults.”
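For readers unfamiliar with these tools, here is a minimal, deliberately benign sketch of how prompts and negative prompts are passed to an open-source text-to-image pipeline via the Hugging Face diffusers library. The model checkpoint and prompt text are illustrative assumptions, not details drawn from the case.

```python
# A minimal, benign illustration of prompt and negative-prompt mechanics
# in an open-source text-to-image pipeline (Hugging Face "diffusers").
# The checkpoint and prompts below are illustrative assumptions only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a public Stable Diffusion checkpoint
    torch_dtype=torch.float16,
).to("cuda")

result = pipe(
    prompt="a watercolor painting of a mountain lake at dawn",
    # A negative prompt tells the model what NOT to include in the output.
    negative_prompt="blurry, low quality, text, watermark",
)
result.images[0].save("landscape.png")
```

As the indictment alleges, the same steering mechanism that helps legitimate users suppress artifacts such as watermarks can be abused to exclude adult features from generated images – which is why model-level safeguards matter.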

Although the First Amendment to the U.S. Constitution generally protects speech from government censorship, the U.S. Supreme Court has made it clear that the production, distribution, and possession of child pornography are not protected by the First Amendment. Regarding nomenclature, the Department of Justice notes that while “child pornography” is a legal term, the term “‘child sexual abuse material’ (CSAM) is preferable because it better reflects the abuse depicted in the images and videos and the resulting trauma to the child.”

The case against Anderegg is, as the Washington Post reported, “possibly the first federal indictment for the creation of child sexual abuse material that relates to images created entirely using AI.” However, the indictment makes clear that those who use AI to create CSAM can rightly be prosecuted in the same way as other CSAM producers.

A federal child pornography statute cited in Anderegg’s indictment states: “It is not a required element of any offense under this section that the minor depicted actually exist.” (Emphasis added.) In addition, the statute reaches “a visual depiction of any kind” (emphasis added), including images such as cartoons, that depicts a minor engaging in “sexually explicit conduct.”

Given this legal basis, it is not surprising that the Federal Bureau of Investigation has stated that CSAM created using generative AI tools is illegal. The Department of Homeland Security shares this view. Nicole M. Argentieri, head of the Justice Department’s Criminal Division, stated in announcing Anderegg’s arrest that “using AI to create sexually explicit depictions of children is illegal and the Department of Justice will not hesitate to hold accountable those who possess, produce, or distribute AI-generated child sexual abuse material.”

Unfortunately, the misuse of Gen AI tools to generate CSAM, as in Anderegg, is not surprising. A study published in December by the Stanford Internet Observatory (SIO) found the “presence of repeated identical instances of CSAM” in an open training dataset for text-to-image models known as LAION-5B. An SIO blog post summarizing the study’s findings notes that the “dataset contained known CSAM gathered from a wide range of sources, including mainstream social media sites and popular adult video sites.” The dataset is “used to train popular text-to-image generation AI models such as Stable Diffusion.” According to the government, Anderegg allegedly entered text prompts into Stable Diffusion to generate “images based on its parameters.”

Here, Gen AI companies bear both an ethical and a legal responsibility: they must closely monitor and test the training sets they use for the presence of CSAM and, if they find any, immediately eradicate it. In April, Forbes reported that the leading generative AI companies have pledged to do just that by building safety features into their tools. This is a highly commendable step forward in the fight against such despicable content.
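One common protective measure consistent with that screening obligation is hash-based matching of training images against lists of known CSAM hashes maintained by clearinghouses such as NCMEC. The sketch below is a simplified assumption of how such a filter might look, not any company’s actual pipeline; the file paths and hash-list file are hypothetical placeholders.

```python
# A simplified sketch (not any company's actual pipeline) of screening a
# training-image corpus against a blocklist of known-bad file hashes.
# Paths and the hash-list file are hypothetical placeholders.
import hashlib
from pathlib import Path

def md5_of_file(path: Path) -> str:
    """Return the hex MD5 digest of a file, read in 1 MB chunks."""
    digest = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def screen_corpus(image_dir: Path, known_bad: set[str]) -> list[Path]:
    """Return every file in the corpus whose hash appears on the blocklist."""
    return [
        p for p in image_dir.rglob("*")
        if p.is_file() and md5_of_file(p) in known_bad
    ]

if __name__ == "__main__":
    # Hypothetical inputs: a corpus directory and a newline-delimited hash list.
    known_bad = set(Path("known_bad_md5.txt").read_text().split())
    for hit in screen_corpus(Path("training_images/"), known_bad):
        print(f"FLAGGED for removal and reporting: {hit}")
```

In practice, production systems typically rely on perceptual hashes (such as Microsoft’s PhotoDNA) that also catch near-duplicate images; the exact-match cryptographic hash used here is for illustration only.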