The rise of deepfake cyberbullying poses a growing problem for schools

Schools are facing a growing problem of students using artificial intelligence to transform innocent images of classmates into sexually explicit deepfakes.

The fallout from the spread of the manipulated photos and videos can create a nightmare for the victims.

The challenge for schools was highlighted this fall when AI-generated nude images swept through a Louisiana middle school. Two boys ultimately were charged, but not before one of the victims was expelled for starting a fight with a boy she accused of creating the images of her and her friends.

"While the ability to alter images has been available for decades, the rise ofA.I.has made it easier for anyone to alter or create such images with little to no training or experience," Lafourche Parish Sheriff Craig Webre said in a news release. "This incident highlights a serious concern that all parents should address with their children."

Here are key takeaways from AP's story on the rise of AI-generated nude images and how schools are responding.

More states pass laws to address deepfakes

The prosecution stemming from the Louisiana middle school deepfakes is believed to be the first under the state's new law, said Republican state Sen. Patrick Connick, who authored the legislation.

The law is one of many across the country taking aim at deepfakes. In 2025, at least half the states enacted legislation addressing the use of generative AI to create seemingly realistic, but fabricated, images and sounds, according to the National Conference of State Legislatures. Some of the laws address simulated child sexual abuse material.

Students also have been prosecuted in Florida and Pennsylvania and expelled in places like California. One fifth-grade teacher in Texas also was charged with using AI to create child pornography of his students.

Deepfakes become easier to create as technology evolves

Deepfakes started as a way to humiliate political opponents and young starlets. Until the past few years, people needed some technical skills to make them realistic, said Sergio Alexander, a research associate at Texas Christian University who has written about the issue.

"Now, you can do it on an app, you can download it on social media, and you don't have to have any technical expertise whatsoever," he said.

He described the scope of the problem as staggering. The National Center for Missing and Exploited Children said the number of AI-generated child sexual abuse images reported to its cyber tipline soared from 4,700 in 2023 to 440,000 in just the first six months of 2025.

Experts fear schools aren't doing enough

Sameer Hinduja, the co-director of the Cyberbullying Research Center, recommends that schools update their policies on AI-generated deepfakes and get better at explaining them. That way, he said, "students don't think that the staff, the educators are completely oblivious, which might make them feel like they can act with impunity."

He said many parents assume that schools are addressing the issue when they aren't.

"So many of them are just so unaware and so ignorant," said Hinduja, who is also a professor in the School of Criminology and Criminal Justice at Florida Atlantic University. "We hear about the ostrich syndrome, just kind of burying their heads in the sand, hoping that this isn't happening amongst their youth."

Trauma from AI deepfakes can be particularly harmful

AI deepfakes are different from traditional bullying because instead of a nasty text or rumor, there is a video or image that often goes viral and then continues to resurface, creating a cycle of trauma, Alexander said.

Many victims become depressed and anxious, he said.

"They literally shut down because it makes it feel like, you know, there's no way they can even prove that this is not real — because it does look 100% real," he said.

Parents are encouraged to talk to students

Parents can start the conversation by casually asking their kids if they've seen any funny fake videos online, Alexander said.

Take a moment to laugh at some of them, like Bigfoot chasing after hikers, he said. From there, parents can ask their kids, "Have you thought about what it would be like if you were in this video, even the funny one?" And then parents can ask if a classmate has made a fake video, even an innocuous one.

"Based on the numbers, I guarantee they'll say that they know someone," he said.

If kids encounter things like deepfakes, they need to know they can talk to their parents without getting in trouble, said Laura Tierney, who is the founder and CEO of The Social Institute, which educates people on responsible social media use and has helped schools develop policies. She said many kids fear their parents will overreact or take their phones away.

She uses the acronym SHIELD as a roadmap for how to respond. The "S" stands for "stop" and don't forward. "H" is for "huddle" with a trusted adult. The "I" is for "inform" any social media platforms on which the image is posted. "E" is a cue to collect "evidence," like who is spreading the image, but not to download anything. The "L" is for "limit" social media access. The "D" is a reminder to "direct" victims to help.

"The fact that that acronym is six steps I think shows that this issue is really complicated," she said.

The Associated Press' education coverage receives financial support from multiple private foundations. AP is solely responsible for all content. Find AP's standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.

 
