As artificial intelligence continues to reshape how people live, law enforcement and advocacy groups are sounding the alarm about the use of AI to produce and distribute child pornography.
Last week, a 42-year-old Wisconsin man was arrested for allegedly producing and distributing “AI-generated images of minors engaged in sexually explicit conduct,” according to a statement from the U.S. Department of Justice.
The case comes as similar incidents are popping up across the nation, as people are using text-to-image AI models to create lewd pictures of children. Experts are now worried the issue will become worse as the technology advances and its use becomes more widespread.
“It’s clear that it’s a growing problem that’s impacting communities around the country,” Wisconsin Attorney General Josh Kaul said. “Given the developments that we’re seeing in AI, we expect that this problem is going to continue to grow.”
The National Center for Missing and Exploited Children said it received 4,700 reports last year of AI-generated child pornography, also known as child sexual abuse material. The Federal Bureau of Investigation issued a warning in March that producing child sexual abuse material using AI is illegal.
Meanwhile, a Stanford Internet Observatory investigation identified hundreds of known images of child sexual abuse material in a dataset that is used to train AI text-to-image generation models. Lisa Thompson, the vice president and director of the research institute at the National Center on Sexual Exploitation, said the issue has become a “crisis.”
“Now imagine that you’re a survivor of child sexual abuse exploitation,” Thompson said. “Has your abuse been used to train AI … that would be so haunting to even think about, to contemplate.”
In March, Wisconsin Gov. Tony Evers signed a law that created a new crime of possession of “virtual child pornography,” making it a Class D felony to “knowingly receive, distribute, produce, possess, or access in any way, with intent to view, obscene material that contains a depiction of a purported child engaging in sexually explicit conduct and the person knew, or reasonably should have known, that the material contains such a depiction.”
In Wisconsin, a Class D felony is punishable by up to 25 years in prison, a maximum fine of $100,000, or both.
Thompson called that law an “exciting development.” Kaul said he believes the law is sufficient, but he wants to see more action from lawmakers.
“We also need to see more action on the federal level, including more information gathering by Congress,” Kaul said.
Kaul signed a letter with all 50 attorneys general urging Congress to establish a commission to “study the means and methods of AI that can be used to exploit children specifically and to propose solutions to deter and address such exploitation.”
The letter said AI can be used to create “deep fakes” by using real photographs of children and then generating new images of those children in sexual positions. AI can also be used to combine data from photographs of children to animate realistic sexual images.
“Creating these images is easier than ever, as anyone can download the AI tools to their computer and create images by simply typing in a short description of what the user wants to see,” the letter said. “And because many of these AI tools are ‘opensource,’ the tools can be run in an unrestricted and unpoliced way.”
Thompson said she wants to see more regulation against the developers and creators of the AI models that are being used to distribute the images.
“It’s like we’re fighting this raging inferno,” Thompson said. “And we don’t have the tools to extinguish it. And it’s mind-boggling what’s happening, all these tools that are out there, there’s insufficient regulation.”
But Carl Szabo, the vice president and general counsel for the nonprofit NetChoice, said it’s up to police and prosecutors to go after the “bad actors” who are using the models to create the lewd images.
“We do have laws out there today, what we need to make sure we do is prosecute the laws that exist to the fullest extent and make crystal clear that bad actors can’t hide behind a keyboard,” Szabo said.
Raven, a nonprofit focused on child exploitation, said 99,172 IP addresses, or Internet Protocol addresses, throughout the nation distributed known child sexual abuse images during a three-month period spanning 2022 and 2023. But only 782 were investigated, according to the testimony of John Pizzuro, CEO of Raven.
Kaul said more resources are needed when it comes to handling internet crimes against children. He said there’s been “enormous growth” in the number of tips and referrals the Wisconsin Department of Justice gets from the National Center for Missing and Exploited Children.
“With AI, that problem is going to grow dramatically, because there’s going to be the possibility that very large volumes of images will be generated in a very short period of time and that’s going to make enforcement efforts that much more difficult as there is more that needs to be tracked down,” Kaul said.
Holmen man charged
Steven Anderegg of Holmen allegedly used a text-to-image AI model called “Stable Diffusion” to create thousands of realistic images of minors, according to the U.S. Department of Justice. Many of the images pictured nude children.
“Evidence recovered from Anderegg’s electronic devices revealed that he generated these images using specific, sexually explicit text prompts related to minors, which he then stored on his computer,” a statement said.
Anderegg also allegedly communicated with a 15-year-old boy, describing how he used Stable Diffusion to convert his prompts into images of children. Anderegg was caught because of a cyber tip to the National Center for Missing and Exploited Children after Instagram reported Anderegg’s account for distributing some of the images through its application.
A criminal complaint said Anderegg’s resume indicated he has worked as a software engineer since 2003. It also lists a recent job at a “startup” where he utilized his “excellent technical understanding in formulating AI models.”
Principal Deputy Assistant Attorney General Nicole Argentieri, head of the Justice Department’s Criminal Division, said the DOJ will “aggressively pursue those who produce and distribute child sexual abuse material.”
“Today’s announcement sends a clear message: using AI to produce sexually explicit depictions of children is illegal, and the Justice Department will not hesitate to hold accountable those who possess, produce, or distribute AI-generated child sexual abuse material,” Argentieri said.
Anderegg’s attorney did not respond to a reporter’s request for comment.
According to the U.S. Department of Justice, a child psychiatrist in Charlotte, North Carolina, was sentenced to 40 years in prison last year after he used AI to create child sexual abuse material. The man used a web-based AI application to alter images of real children into child sexual abuse material.
A federal jury also convicted a registered sex offender in Pittsburgh for possessing pictures that “digitally superimposed the faces of child actors onto nude bodies and bodies engaged in sex acts,” according to the FBI.
Wisconsin Public Radio, © Copyright 2024, Board of Regents of the University of Wisconsin System and Wisconsin Educational Communications Board.