AI in Education · 9 min read · January 30, 2026

AI and Student Data: The Questions Every School Leader Should Ask

82% of K-12 schools experienced cyber incidents in 2024. COPPA 2025 requires explicit consent for under-13 data sharing. Here's what AI tools collect, what laws apply, and the 6 questions to ask every vendor.

The risk is already here: 82% of K-12 schools experienced a cyber incident between July 2023 and December 2024. Meanwhile, 86% of students now use AI tools for school or personal purposes. As AI adoption accelerates, the data protection gap widens.

Schools are racing to adopt AI while data protection lags behind. A Center for Democracy and Technology survey found that teachers who use AI extensively are more likely to report that their schools have experienced data breaches. The 2025 COPPA amendments now require separate parental consent before sharing children's data with third parties. And free AI tools often come with a hidden price: your students' data. Here are the six questions every school leader should ask before any AI tool touches student data.

What Data Do AI Tools Actually Collect?

AI tools collect more than most educators realize. The Future of Privacy Forum's guidance on vetting AI tools highlights that many edtech products using ChatGPT-style APIs may have very different data practices than schools expect.

Conversation histories: every prompt, response, and correction, including student information embedded in teacher prompts.
Behavioral patterns: time spent, topics explored, error patterns, and engagement levels.
Submitted work: essays, problems, and drafts shared with AI for feedback or grading.
Learning indicators: struggles, progress, and misconceptions captured as performance data by adaptive systems.

This data is valuable to schools for improving instruction, but also to AI companies for training models. The critical question: who controls it and how is it protected?

The FTC clarified in the 2025 COPPA amendments that "disclosures of children's personal information for the purposes of developing or training artificial intelligence technology are never integral to a website or online service" and require separate consent.

What Laws Govern Student Data in AI Tools?

Three regulatory frameworks matter most for U.S. schools:

FERPA protects student education records. Schools must ensure AI vendors are properly designated as "school officials" with legitimate educational interests. According to the Future of Privacy Forum, many schools struggle to determine whether AI tools protect PII in accordance with FERPA. Without proper contracts, using AI tools with student data may violate federal law.

COPPA 2025 Amendments regulate data from children under 13. The FTC finalized major changes effective June 2025 with a compliance deadline of April 2026. Critical changes include: requiring separate parental consent for third-party disclosures (not just general consent), expanding "personal information" to include biometric identifiers and government IDs, and requiring written information security programs.

State laws vary widely. According to Student Privacy Compass, about 21 states list data security as a focus area in their AI guidance, and 20 states reference FERPA or COPPA as baselines. Some require data localization, retention limits, or specific breach notification protocols.

⚠️ The free tool trap

If an AI tool is free, the product might be your students' data. The FTC specifically noted that using children's data for advertising, monetary compensation, or AI training is never considered "integral" to a service and always requires separate consent.

Why Does AI Use Increase Data Breach Risk?

The connection between AI adoption and data breaches isn't theoretical. The CDT's 2025 survey found that teachers who use AI for "many" school-related purposes were more likely to report their schools had experienced large-scale data breaches (28%) compared to teachers overall (23%).

According to the Center for Internet Security, 82% of K-12 schools in the U.S. experienced a cyber incident between July 2023 and December 2024. Ransomware attacks on education jumped 69% in Q1 2025 compared to the same period in 2024.

The most prominent recent example: the PowerSchool breach compromised sensitive records of 62 million students and 10 million teachers, including grades, attendance, and medical information. The attack used compromised contractor credentials without multi-factor authentication. More than 100 school districts have filed lawsuits.

82% of K-12 schools experienced a cyber incident (2023-2024)
69% increase in education ransomware attacks (Q1 2025)
62 million students exposed in the PowerSchool breach

What Six Questions Should You Ask Every AI Vendor?

Before adopting any AI tool that touches student data:

1. What data do you collect? Understand the full scope, not just the minimum necessary. Red flag: "Whatever is needed for the tool to function."
2. Where is data stored, and for how long? Jurisdiction affects legal protections. Red flag: "Indefinitely" or "various global locations."
3. Who has access to the data? The chain of access should be explicit and limited. Red flag: "Our partners" without specifics.
4. Is data used for model training? Under COPPA 2025, AI training requires separate consent. Red flag: vague language about "product improvement."
5. How is the tool FERPA/COPPA compliant? Require documentation, not assertions. Red flag: "We believe so" or no documentation at all.
6. What happens when we stop using the tool? You need a clear exit strategy. Red flag: "Data is retained" or an unclear deletion process.
Before: "This AI tool looks helpful. Let's try it with students."
After: "Let's review the privacy policy, verify FERPA compliance, understand data practices, and get appropriate consent before piloting."

What Does a Practical Vetting Process Look Like?

Step 1: Review before use. No AI tool goes into classrooms before someone reviews its data practices. The Future of Privacy Forum recommends checking whether the tool's terms of service actually permit use by the ages of your students. ChatGPT's terms, for instance, prohibit children under 13 entirely.

Step 2: Categorize by risk. Tools that collect no student data (teacher-only use with anonymized content) are lower risk. Tools collecting identifiable student data require more scrutiny. Tools that use data for AI training require the most scrutiny of all (a simple tiering sketch follows these steps).

Step 3: Require documentation. For higher-risk tools: written privacy policies, data processing agreements, and compliance certifications. The 2025 COPPA amendments now require vendors to maintain written information security programs.

Step 4: Get appropriate consent. Under the new COPPA rules, operators must obtain separate consent for third-party disclosures unless those disclosures are integral to the service. Using student data for AI training, advertising, or sale to third parties always requires separate consent.

Step 5: Review periodically. AI tools update constantly. Data practices acceptable at adoption may change. Build in annual reviews and monitor vendor communications.
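If it helps to make Step 2 concrete, the tiering logic can be written down so every reviewer applies it the same way. Here is a minimal sketch in Python; the tier labels and criteria are illustrative assumptions drawn from the step above, not a published standard:

def risk_tier(collects_student_data: bool,
              data_is_identifiable: bool,
              used_for_ai_training: bool) -> str:
    """Map a tool's data practices to a review tier, mirroring Step 2:
    no student data < identifiable student data < data used for AI training."""
    if used_for_ai_training:
        return "high: requires separate parental consent under COPPA 2025"
    if collects_student_data and data_is_identifiable:
        return "elevated: requires a data processing agreement and FERPA review"
    if collects_student_data:
        return "moderate: verify retention limits and access controls"
    return "low: teacher-only use with anonymized content"

# A lesson-planning assistant that never sees student work:
print(risk_tier(False, False, False))   # low
# An adaptive platform that stores identifiable performance data:
print(risk_tier(True, True, False))     # elevated

The point is less the code than the consistency: the same answers about a tool's data practices should always lead to the same level of scrutiny.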

💡 The approved tool list

Maintain a list of vetted and approved AI tools. Teachers can use listed tools without additional review. New tools require the vetting process. This balances innovation with protection.
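For teams that want to operationalize the approved list, each tool can be a small structured record rather than a row in a forgotten PDF. A minimal sketch, with hypothetical tool names and fields that do not describe any real vendor:

from datetime import date

# Each approved tool gets a record: what it collects, its risk tier,
# and when the approval expires so re-review actually happens.
APPROVED_TOOLS = {
    "example-reading-coach": {
        "risk_tier": "low",            # teacher-only use, no student identifiers
        "collects_student_data": False,
        "trains_on_data": False,
        "dpa_signed": True,            # data processing agreement on file
        "review_due": date(2026, 9, 1),
    },
    "example-adaptive-math": {
        "risk_tier": "elevated",       # identifiable student performance data
        "collects_student_data": True,
        "trains_on_data": False,
        "dpa_signed": True,
        "review_due": date(2026, 3, 15),
    },
}

def is_cleared(tool_name: str, today: date) -> bool:
    """A tool is cleared for classroom use only if it is on the list,
    has a signed agreement, and its periodic review is not overdue."""
    record = APPROVED_TOOLS.get(tool_name)
    if record is None:
        return False  # unlisted tools go through the vetting process first
    return record["dpa_signed"] and record["review_due"] >= today

print(is_cleared("example-adaptive-math", date.today()))  # True until the review lapses
print(is_cleared("some-new-chatbot", date.today()))       # False: not yet vetted

Tying an expiry date to every approval is one way to make Step 5's periodic review happen by default instead of by memory.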

When Should You Say No to an AI Tool?

Some AI tools should not be used with students regardless of educational promise:

No clear data policies. If a vendor can't or won't explain their data practices, they don't earn access to your students. The FPF guidance notes that reviewers should verify whether claims about not training on school data apply to their specific agreement, not just general marketing.

Data used for model training without consent. Under COPPA 2025, this is explicitly not considered integral to any service and requires separate parental consent for under-13s.

Indefinite retention without justification. AI tools should retain data only as long as educationally necessary. The new COPPA rules require data retention policies in writing.

Weak security practices. No encryption, unclear access controls, or breach history should disqualify a vendor. Given that 82% of schools experienced cyber incidents in the past 18 months, security isn't optional.

Different terms for free tiers. Many tools have different data practices for free vs. paid versions. Verify the specific terms that apply to your school's use.

How Do You Build a Privacy-Aware Culture?

This isn't just about policies. It's about building awareness at every level:

Teach teachers to think before prompting. Don't include student names or identifying information unless necessary. Anonymize when possible; a simple redaction sketch appears at the end of this section. Think about what data you're creating with every AI interaction.

Teach students data awareness. What you type into AI tools may be stored, analyzed, and used in unexpected ways. The CDT survey found that two-thirds of parents and students agree that parents have no idea how students are interacting with AI. Bridge that gap with education.

Make privacy part of every technology decision. Every conversation about adopting new tools should include "what are the data implications?" as a standard question. The 69% who say they are concerned about student privacy need to see that concern reflected in those decisions.
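As referenced above, here is a minimal illustration of what "anonymize before prompting" can look like in practice. The roster, names, and placeholder format are invented, and a simple find-and-replace is only a starting point, not a de-identification guarantee:

import re

# Hypothetical class roster; in practice this would come from the student information system.
ROSTER = ["Maya Lindqvist", "Omar Haddad", "Jonas Keller"]

def redact_names(prompt: str, names: list[str]) -> str:
    """Replace known student names with neutral placeholders
    before the text ever leaves the school's systems."""
    redacted = prompt
    for i, name in enumerate(names, start=1):
        redacted = re.sub(re.escape(name), f"Student {i}", redacted, flags=re.IGNORECASE)
    return redacted

draft = "Give feedback on Maya Lindqvist's essay about renewable energy."
print(redact_names(draft, ROSTER))
# -> "Give feedback on Student 1's essay about renewable energy."

A substitution like this will miss nicknames, initials, and context clues, which is why anonymization is a habit to teach rather than a box to tick.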


Frequently Asked Questions

Can teachers use ChatGPT with student work?

It depends on your school's agreement with OpenAI and what data is shared. ChatGPT's terms prohibit use by children under 13 entirely and require parental consent for teens under 18. Including student names, identifying information, or original student work in prompts creates FERPA implications. Safer: anonymize work before using AI for feedback, and use enterprise agreements with explicit FERPA compliance.

What about AI tools that say they don't train on school data?

Get it in writing. The FPF guidance recommends verifying that the claim applies to your specific agreement, not just marketing materials. Some tools have different terms for free vs. paid tiers. Also verify what happens to data from prompts versus uploaded documents.

How do we handle AI tools students use at home?

You can't control home use, but you can educate students about data implications and avoid requiring use of tools with problematic data practices for homework. If you assign work using AI, specify approved tools that have appropriate protections.

Is Google/Microsoft AI safer because we already have contracts?

Possibly, but AI features may have different data practices than core productivity tools. The FPF notes that many edtech products incorporate AI services like ChatGPT through APIs with their own terms. Verify that your existing data processing agreements explicitly cover the AI functionality you're using.

What if parents don't consent to AI tool use?

Provide alternatives that don't require AI. Students shouldn't be penalized for their parents' privacy concerns. This may require more teacher time, but it respects family choices and complies with the new COPPA requirement for separate consent.

What changed with COPPA in 2025?

The FTC's 2025 amendments (effective June 2025, compliance deadline April 2026) require separate parental consent for third-party data disclosures, expand "personal information" to include biometric and government IDs, and require operators to maintain written security programs. Critically, using children's data for AI training, advertising, or third-party sales is explicitly not considered integral to any service.


References

  1. Hand in Hand: AI Use Connected to Increased Risks to Students - Center for Democracy and Technology, October 2025
  2. FTC Finalizes Changes to Children's Privacy Rule - Federal Trade Commission, January 2025
  3. FTC Publishes Updates to COPPA Rule - Latham & Watkins Analysis, May 2025
  4. Vetting Generative AI Tools for Use in Schools - Future of Privacy Forum, October 2024
  5. State Guidance on Generative AI in K-12 Education - Student Privacy Compass, April 2025
  6. Ransomware Attacks in Education Jump 23% Year Over Year - K-12 Dive, July 2025
  7. Ransomware Attacks Surge 69% Across Global Education Sector - K-12 Dive, April 2025
  8. AI Use and Risk on the Rise for Students - Benton Institute, October 2025
  9. COPPA Rule Amendments in Federal Register - Federal Register, April 2025

Benedict Rinne, M.Ed.

Founder of KAIAK. Helping international school leaders simplify operations with AI. Connect on LinkedIn

Want help building systems like this?

I help school leaders automate the chaos and get their time back.