Wisconsin man charged with using AI to create and send explicit images of minors

  • A man was arrested on charges of creating and distributing explicit AI-generated images of minors.

  • The 42-year-old man used Stability AI's text-to-image model, Stable Diffusion, to make the images.

  • He faces up to 70 years in prison, the Department of Justice said.

A 42-year-old man was arrested on charges of creating sexually explicit images of minors and sending similar obscene material to a 15-year-old.

According to a press release from the Department of Justice, Steven Anderegg was arrested last week after a federal grand jury returned an indictment charging him with producing, distributing, and possessing explicit AI-generated images of minors in one of the first cases of its kind.

The Wisconsin resident was accused of using Stability AI's text-to-image model, Stable Diffusion, to create thousands of hyper-realistic images of nude or partially clothed minors, many of which showed them engaging in sexual acts with men or touching their genitals.

He was also accused of sending the 15-year-old boy an Instagram message containing AI-generated images of minors showing their genitals, and of describing to him how he had used Stable Diffusion to generate the explicit images, according to court documents.

The DoJ said he stored the images on his computer and that evidence was recovered from his electronic devices. It also said he created the images using sexually explicit text prompts referring to minors.

Anderegg is in federal custody, awaiting a detention hearing set for Wednesday. He faces up to 70 years in prison for the four counts alleged in the indictment, per the DoJ.

If convicted, Anderegg wouldn't be the first person imprisoned for using AI to generate sexually explicit images of minors.

In November, a child psychologist in North Carolina was sentenced to 40 years in prison, followed by 30 years of supervised release, for using an AI tool to create pornographic images of minors and for sexually exploiting a minor.

An investigation by the Stanford University Cyber Policy Center in December found that the public dataset LAION-5B, which was used to train AI image generators, including Stability AI's earlier model Stable Diffusion 1.5, contained over 1,000 images of confirmed child sexual abuse material.

A Stability AI spokesperson previously told Business Insider it had introduced filters to prevent users from creating illegal content with Stable Diffusion and that its models were only trained on a filtered subset of LAION-5B.

UK-based Stability AI states in its terms of use, which were last updated in March, that users will not use or allow others to use its models to commit "exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content."

The company said in a statement to BI that the model used in this case was Stable Diffusion 1.5, which was released by RunwayML, not Stability AI.

"Stability AI is committed to preventing the misuse of AI and prohibit the use of our image models and services for unlawful activity, including attempts to edit or create child sexual abuse material."

Do you work at Stability AI? Got a tip? Contact the reporter via email at jmann@businessinsider.com or reach out on the encrypted messaging app Signal at jyotimann.11
