Students used AI to create nude photos of their classmates. For some, arrests came next.

Stevie Hyder felt nauseated.

Standing in the hallway at her Illinois high school a few weeks ago, the 15-year-old found out one of her sophomore classmates was using artificial intelligence, or AI, to create nude photos of her. Dozens of doctored images of her and other teenage girls were floating around, a friend told her. Some even depicted teachers.

By the time the principal called her mom, Hyder was the 22nd girl on his list.

“Initially, it was very upsetting,” said Stephanie Essex, her mom. “It didn’t take very long, though, before we both got very angry about the situation.”

That anger is quickly becoming a sentiment shared by many parents, particularly those of young girls, after a spate of similar incidents across the country. As AI gains a stronger foothold in the American economy and culture, administrators are watching it creep into schools.

Except policing the nascent technology isn’t easy, as Hyder’s principal, Mike Baird, wrote in a message to parents.

“As we reflect on recent events, it has become clear that we are facing new challenges in the realm of technology and social media,” he wrote in a March 15 letter. “There is no playbook for much of what we are encountering.”

As cases crop up, principals and parents are being forced to navigate a patchwork of district policies and state laws, some of which are stricter than others.

An incident in Florida led to the arrests of two middle school boys in December, a warrant obtained by USA TODAY shows. At least so far, consequences in other states and school districts have been less severe.

If administrators want to avoid similar nightmares, now is the time for them to get crystal clear about their rules on AI, experts say.

“If we take preventative steps for everybody, we will put ourselves in a much better position than trying to play whack-a-mole,” said Kate Ruane, a free speech attorney at the Center for Democracy and Technology.

Middle schoolers arrested in Miami

In December, two middle school boys at a charter school in Miami were arrested on suspicion of using an AI app to create nude photos of their classmates, who were between the ages of 12 and 13, according to an arrest warrant.

Officials charged the boys with third-degree felonies, citing a state law that forbids the “unauthorized promotion of a sexually explicit image.” A number of states, including Texas and Virginia, have so-called “deepfake laws,” which criminalize the nonconsensual creation of pornography using an image of another person. Even more state legislatures are mulling putting such rules on the books.


Florida’s statute is particularly harsh, according to Mary Anne Franks, a professor at George Washington University Law School and an expert on revenge porn laws. In Franks' view, though, the Miami case is an example of overcharging.

“That’s extraordinarily young to be charging someone with a felony,” she said.

The Florida Charter School Alliance, a group that represents the boys' school, Pinecrest Cove Preparatory Academy, declined to comment. Their parents did not respond to requests for comment.

Beverly Hills students expelled over deepfake scandal

A few months later, a similar scandal hit a middle school in Beverly Hills, California.

In February, five eighth-graders at Beverly Vista Middle School were involved in using AI to superimpose the faces of 16 other eighth-graders onto photos of nude bodies, according to CBS Los Angeles and a statement from the district.

The Beverly Hills Police Department launched an investigation into the incident, according to department spokesperson Andrew Myers. The probe is ongoing.


On March 6, the Beverly Hills Unified School District board approved stipulations to expel the five eighth-graders involved, the Los Angeles Times reported.

“We recognize that kids are still learning and growing, and mistakes are part of this process,” said Michael Bregy, the district superintendent, in a statement shared with USA TODAY. “However, accountability is essential, and appropriate measures have been taken.”

Schools in 'uncharted territory,' mom says

In Hyder's case at Richmond-Burton Community High School in Illinois, the student disseminating doctored photos was using his school email address, according to Essex, Hyder's mom. Administrators didn’t catch wind of what was happening until another student reported it, she said.

“Their filters should’ve caught something,” Essex said. “It could’ve continued on for months.”

Baird, the principal, confirmed to USA TODAY that the Richmond Police Department is investigating the incident. He told parents law enforcement is giving him daily updates. In a message to the community Friday, he said the students accused of being involved in creating the photos likely won’t return for at least the rest of the school year.

Hyder just hopes the photos don't come back to haunt her.

“This is uncharted territory,” her mom said.

Zachary Schermele covers education and breaking news for USA TODAY. You can reach him by email at zschermele@usatoday.com. Follow him on X at @ZachSchermele.
