AI in Education · 10 min read · January 6, 2026

The Deepfake Crisis Schools Aren't Ready For

AI-generated abuse imagery reports jumped from 4,700 to 440,000 in 18 months. 13% of principals report deepfake bullying incidents. Here's what school leaders must do now.

Reports of AI-generated child sexual abuse material jumped from 4,700 in 2023 to 440,000 in the first six months of 2025, a 93-fold increase according to the National Center for Missing & Exploited Children. RAND's October 2024 survey found 13% of K-12 principals reported deepfake-related bullying incidents. Most schools are unprepared: over half of educators have received no training on recognizing or responding to AI-generated content.

In March 2025, 44 girls at Cascade High School in Iowa were discovered to be victims of AI-generated explicit imagery created by male classmates using free apps. Four boys were charged. The victims, calling themselves "Voices of the Strong 44," issued a joint statement after the school told them not to talk about it and failed to offer counseling. The images took seconds to create. The impact will last years.

This is not a future problem. Incidents are happening globally, wherever students have smartphones and internet access. International schools face unique challenges: varying legal frameworks across jurisdictions, families from multiple cultural contexts with different expectations around technology and privacy, and the reality that content created in one country can spread worldwide instantly.

How Bad Is the Deepfake Problem in Schools?

The numbers document a crisis:

AI CSAM reports: 4,700 in 2023 → 440,000 in the first half of 2025, a 93x increase in 18 months (Source: NCMEC)
Principals reporting incidents: 13%, meaning 1 in 8 schools is already affected (Source: RAND Corp)
Student exposure: 1 in 17 teens report having been a deepfake victim themselves (Source: Thorn)
Training gap: over 50% of educators have zero training on AI-generated content (Source: EdWeek)

The technology is now trivially easy to use. Apps that generate non-consensual intimate imagery from ordinary photos are freely available and require no technical skill. Sergio Alexander's research at Texas Christian University documents how "deepfake technology, once primarily associated with political disinformation and entertainment, is now being weaponized in schools as a new and insidious form of cyberbullying."

The victims are overwhelmingly female. The perpetrators are overwhelmingly male classmates.

A Louisiana middle school case in late 2025 illustrates the chaos schools face. According to reporting from NOLA.com and CBS News, AI-generated nude images of eight female students circulated through the school. When one victim confronted a boy sharing the images, she was expelled for fighting. The boy faced criminal charges under Louisiana's new deepfake law, but the victim's family is now suing the district for its handling of the situation.

⚠️ The readiness gap

Over half of educators report receiving either no training or poor-quality training on AI-generated content. Policies have not caught up to the technology students are already using.

What Laws Now Apply to School Deepfake Incidents?

The legal landscape shifted dramatically in 2025:

Federal law: The TAKE IT DOWN Act (signed May 2025) requires social media platforms to remove non-consensual intimate imagery within 48 hours of a report. It explicitly covers AI-generated content and provides criminal penalties for distribution. Schools should understand this law because it gives victims a federal mechanism for content removal.

State laws: According to the National Conference of State Legislatures, at least half of U.S. states enacted deepfake-related legislation in 2025, with varying approaches to criminal penalties, civil liability, and school requirements. Some states now treat AI-generated CSAM the same as traditional CSAM under criminal law.

School implications: Creating, possessing, or distributing non-consensual intimate imagery, including AI-generated imagery, can result in criminal charges for students, not just school discipline.

But legislation alone does not protect students. Schools must respond before images spread, support victims effectively, and educate students about both legal and human consequences. For international schools, understanding the legal frameworks of your host country and the home countries of your students adds another layer of complexity.

Why Aren't Schools Ready for This?

Three gaps leave schools vulnerable:

Policy gaps. Acceptable use policies and bullying policies written before deepfakes became student-accessible may not explicitly cover AI-generated content, creating ambiguity about consequences. The Cyberbullying Research Center's Sameer Hinduja recommends that schools update their policies to specifically address AI-generated deepfakes so "students don't think that the staff, the educators are completely oblivious, which might make them feel like they can act with impunity."

Training gaps. Deepfake content spreads through private channels (Snapchat, Discord, group texts) that adults do not see. Staff do not know the warning signs to watch for. The Louisiana case revealed that the school's cyberbullying curriculum dated from 2018.

Support gaps. Traditional harassment involves words or actions. Deepfake harassment creates permanent visual artifacts that can resurface indefinitely. The psychological impact is qualitatively different, and counselors may lack experience with image-based abuse.

What Should Schools Do Right Now?

Four immediate actions:

1. Update policies explicitly. Your acceptable use policy, student code of conduct, and harassment policies should explicitly mention AI-generated content. Name the specific behavior: "Creating, possessing, or distributing non-consensual intimate imagery, including AI-generated imagery, is a serious violation with serious consequences."

2. Train staff to recognize warning signs. Students clustered around phones who disperse when adults approach. Sudden social targeting of particular students. References to images or apps in overheard conversations. Counselors need protocols for supporting victims of image-based abuse.

3. Educate students directly. At the school I lead, we structure our digital citizenship curriculum around six core competencies, including relationships and communication (preventing and responding to cyberbullying), privacy and security, and digital footprint awareness. We take a developmental approach: foundational habits in elementary grades, increasing social complexity in middle school, and preparation for real-world consequences in high school. The goal is not compliance but competency, building the judgment students need to navigate situations policies cannot anticipate.

4. Build response protocols. When (not if) an incident occurs, you need clear procedures: who is notified, how evidence is preserved, how the victim is supported, how the perpetrator is held accountable, how families are informed, and how you balance privacy with community communication.

💡 The bystander angle works

Research shows students are more likely to intervene when they understand they are not powerless. Teaching what TO do (report, support the victim, do not share) is more effective than just teaching what NOT to do.

How Do You Talk to Students About This?

This is not comfortable. It involves discussing sexual imagery, consent, and technology in ways that feel inappropriate for younger grades and awkward for all ages.

But discomfort does not make it optional. Students encounter this technology whether schools address it or not. The question is whether they encounter it with guidance.

For elementary students: Start with consent and digital citizenship basics. You do not share pictures of people without permission. You do not use technology to hurt others.

For middle and high school students: Be direct. What deepfakes are. Why creating them causes lasting harm. What the legal consequences are (including potential criminal charges). What to do if you encounter them.

The schools that protect students best talk about this openly. They do not hope it will not happen to them.


Frequently Asked Questions

Can students really face criminal charges for creating deepfakes?

Yes. Under the TAKE IT DOWN Act and various state laws, creating or distributing non-consensual intimate imagery, including AI-generated imagery, can result in criminal charges. Minors are not exempt, though consequences vary by jurisdiction and age. In Louisiana, the student who shared deepfake images faced 10 criminal counts under state law.

What if the deepfake does not show nudity?

Non-sexual deepfakes can still constitute harassment, defamation, or bullying depending on content and context. Even "prank" deepfakes that embarrass or mock students may violate school policies and potentially laws.

How do we investigate when images are on private platforms we cannot access?

Focus on what you can verify: witness statements, device confiscation (following proper procedures), and reports from victims or bystanders. You do not need to see the images to respond to credible reports of their existence.

Should we involve law enforcement?

For any imagery depicting minors in sexual situations, including AI-generated imagery, consult with law enforcement. Many jurisdictions require reporting suspected CSAM. Document your decision-making process.

What support do deepfake victims need?

Immediate emotional support, assurance that they are not at fault, practical help removing content (using TAKE IT DOWN Act provisions), long-term counseling access, and protection from retaliation. The psychological impact of image-based abuse can be severe and lasting.


References

  1. Artificially Intelligent Bullies: Dealing with Deepfakes in K-12 Schools - RAND Corporation
  2. TAKE IT DOWN Act - U.S. Congress
  3. Spike in Online Crimes Against Children a Wake-Up Call - National Center for Missing & Exploited Children
  4. The Deepfake Dilemma: New Challenges Protecting Students - National Center for Missing & Exploited Children
  5. Louisiana Girl Expelled After Confronting Classmates Sharing AI-Generated Nude Images - NOLA.com
  6. Why Schools Need to Wake Up to the Threat of AI Deepfakes and Bullying - Education Week
  7. Deepfakes Reshape Cyberbullying: TCU Expert Calls for Action - Texas Christian University
  8. The Rise of Deepfake Cyberbullying Poses a Growing Problem for Schools - Education Week
  9. The ENFORCE Act: Critical Updates to Federal Law for Addressing AI-Generated CSAM Offenses - Thorn
Benedict Rinne, M.Ed.

Founder of KAIAK. Helping international school leaders simplify operations with AI. Connect on LinkedIn

Want help building systems like this?

I help school leaders automate the chaos and get their time back.