AI in Education · 10 min read · December 31, 2025

Your School Needs an AI Policy. Here's a Framework That Actually Works.

60% of schools have no AI guidance and 60% of teachers say policies are unclear. Generic bans fail. This three-domain framework with decision-making tools provides clarity without rigidity—plus a template you can adapt.

60% of U.S. schools have no guidance for generative AI usage, according to the Center for Democracy and Technology. Meanwhile, 70% of teens have used AI tools—and 46% used them for schoolwork without teacher permission. This policy vacuum isn't just an administrative gap. It's creating real consequences: lawsuits, inconsistent discipline, and students facing different rules in every classroom.

The Cost of No Policy: A Cautionary Tale

In October 2024, parents of a Hingham High School senior in Massachusetts filed a federal lawsuit after their son was disciplined for using AI on a history project. The student received detention, a D grade, and was initially barred from National Honor Society—consequences that, according to the lawsuit, jeopardized his applications to Stanford and MIT.

The core issue? The school's student handbook at the time didn't specifically address AI use. School officials argued the student should have known AI assistance violated academic integrity principles. The family argued you can't punish students for breaking rules that don't exist.

The judge ultimately denied the family's motion for a preliminary injunction, finding the school's discipline wasn't arbitrary. But the case exposed a fundamental problem: without explicit AI policies, schools face legal challenges and students face inconsistent expectations.

As Pat Yongpradit of Code.org told Education Week: "You can have, in the same school, a teacher allowing their 10th grade English class to use ChatGPT freely... And then literally, right down the hall, you can have another teacher banning it totally. Same school, different 10th grade English class."

At the school I lead, we didn't create a standalone "AI Policy." Instead, we embedded AI guidance into our existing Digital Citizenship Policy and Academic Honesty Policy. This approach recognizes that AI isn't a separate issue—it's a new dimension of how students learn, how teachers teach, and how schools operate.

The Data: How Bad Is the Policy Gap?

The numbers paint a troubling picture:

  • Schools without AI guidance: 60% (Center for Democracy and Technology, 2025)
  • Teachers who say policies are unclear: 60% (EdWeek Research Center, January 2025)
  • Teens unsure whether rules exist: 37% (Common Sense Media, September 2024)
  • States with AI guidance: 26 (TeachAI, April 2025)

The communication gap extends to parents. According to Common Sense Media's 2024 survey, 83% of parents say schools have not communicated with families about generative AI. When confusion reigns at school and home, students are left to figure out expectations themselves.

Why Do Generic AI Policies Fail?

Three common approaches, three common failures:

  • The Blanket Ban ("Students may not use AI"): Unenforceable and too broad. It can't distinguish grammar checking, which supports learning, from full assignment generation, which undermines it. And 85% of students are using AI anyway.
  • The Permission Maze ("Use AI with teacher permission"): Creates cognitive load. One teacher allows it; another doesn't. Students end up tracking rules instead of focusing on learning, exactly the confusion the Hingham case exposed.
  • The Vague Aspiration ("Use AI responsibly"): Offers no guidance for edge cases. What counts as responsible? Students and teachers guess differently, and when a dispute arises, there's no shared framework to resolve it.

⚠️ The enforcement test

Any policy you can't consistently enforce isn't really a policy—it's a suggestion. Selective enforcement breeds cynicism. The Center for Democracy and Technology found that only 34% of teachers report having guidance on what actions to take when they suspect AI misuse. Without clear protocols, responses vary wildly.

What Three Domains Should an AI Policy Address?

Effective policies address three distinct domains, each with different considerations:

1. Student use — about learning

  • Does this AI use support or undermine the learning objective?
  • Is the student developing skills or outsourcing them?
  • Can the student demonstrate understanding independent of AI?

2. Teacher use — about effectiveness and ethics

  • Does this use save time while maintaining quality?
  • Does it protect student privacy?
  • Does it model responsible use for students?

3. Institutional use — about data, equity, and resources

  • What data is collected and who has access?
  • Does this create or reduce inequities?
  • What training and support are needed?

Most schools focus only on student use. But teacher use (lesson planning, feedback, communication) and institutional use (which tools to adopt, what data practices to accept) need attention too. The TeachAI Toolkit emphasizes that guidance should complement existing policies on "technology use, data protection, academic integrity, and student support" rather than standing alone.

At my school, we address all three domains, but we scaffold expectations by grade level. A Grade 2 student using AI to help understand a concept has different disclosure requirements than a Grade 10 student using AI to draft an essay. The underlying principle—transparency about AI use—stays consistent, but the implementation adapts to developmental appropriateness.

What Decision-Making Frameworks Work Better Than Exhaustive Rules?

Rather than enumerating every allowed and prohibited use, effective policies provide frameworks for making decisions:

The AI Transparency Spectrum:

  • Must disclose: Significant AI assistance with drafts, analysis, creative work
  • Encouraged/normalized: Grammar checking, brainstorming, formatting
  • Prohibited: Submitting AI-generated work as original without disclosure

The Purpose Test: Is AI helping the student learn, or doing the learning for them? Using AI to understand a concept the student is struggling with = learning. Using AI to produce an assignment without engaging = substitution.

The Dependency Check: Can the student do this without AI? If not, they need the underlying skill before leveraging AI to enhance it. You don't skip learning to drive because autonomous vehicles exist.

Before: "AI is prohibited except when explicitly allowed by the teacher."

After: "AI use should be transparent, purposeful, and documented. Use the Purpose Test: Is AI helping you learn, or doing the learning for you?"

How Do You Build Buy-In for an AI Policy?

A policy nobody follows isn't a policy. Getting buy-in requires involving stakeholders in development.

With teachers: Start with their pain points. What AI questions are they struggling to answer? Let the policy solve real problems they face. Give them ownership over classroom implementation.

This requires real investment in professional development. The EdWeek Research Center found that 50% of teachers have received at least one PD session on AI, nearly double the 29% from early 2024. But that means half still haven't. At my school, we run workshops that go beyond policy compliance: teachers learn how to recognize AI use, how to use AI effectively themselves, and how to integrate it thoughtfully into their teaching. Vera Cubero's AI integration work has been a valuable resource for structuring these sessions.

With students: Be honest about why the policy exists. "We want you to develop skills AI can't replace" lands better than "because we said so." Ask for their input—they often know more about AI use than adults. The Center for Democracy and Technology reports that 85% of students used AI in the 2024-25 school year—they're the experts on what's actually happening.

With parents: Proactive communication beats reactive damage control. Share your policy before problems arise. With 83% of parents saying they've heard nothing from schools about AI, you have an opportunity to lead rather than react.

What Does a Working AI Policy Template Look Like?

Here's a framework you can adapt. Note that this works best when embedded in your existing Academic Honesty and Digital Citizenship policies rather than standing alone:

General Principle: AI tools are part of modern learning. Our goal is teaching students to use them responsibly, not pretending they don't exist. We expect transparency about AI use, purposeful application supporting learning, and continued development of foundational skills.

For Students: You may use AI tools unless an assignment specifically prohibits them. When you use AI significantly (beyond grammar checking or basic formatting), document your use: what tool, what prompts, how you used the output. Ask yourself: Am I learning, or am I outsourcing? If an assignment requires demonstrating your own understanding, AI assistance may not be appropriate.

For Teachers: You set AI expectations for your assignments. Communicate these expectations clearly. Consider requiring process documentation rather than only final products. Model responsible AI use yourself. If you use AI to provide feedback, let students know.

For the Institution: We evaluate AI tools for privacy compliance, educational value, and equity implications before adoption. We provide ongoing professional development. We review this policy annually and adapt as technology evolves.

💡 Scaffolding by grade level

Citation and documentation requirements should match developmental level. Early elementary might simply acknowledge "I used AI to help me." Upper grades need specific documentation of tools, prompts, and how output was used. The principle (transparency) stays consistent; the implementation grows with students.

The Equity Dimension

Policy gaps don't affect all students equally. The TeachAI Toolkit notes that only 18% of principals report their schools provide AI guidance—but in high-poverty schools, that drops to 13% compared to 25% in more affluent schools.

This matters because AI access and knowledge are already uneven. Students with stronger tech fluency or more access at home may use AI to get ahead while others struggle. Without clear policies, we risk widening existing inequities rather than closing them.


Frequently Asked Questions

How detailed should an AI policy be?

Detailed enough to handle common situations, flexible enough to handle edge cases. A 2-page framework with decision-making tools works better than a 20-page document trying to anticipate every scenario. The goal is guiding judgment, not eliminating it.

Should AI policies differ by grade level?

Yes. Elementary policies focus on digital citizenship basics. Middle school introduces documentation requirements. High school policies can be more nuanced about appropriate vs. inappropriate use. But the underlying framework (transparency, purpose, skill development) can stay consistent.

How often should AI policies be updated?

Review annually at minimum. AI capabilities change faster than most policy cycles. Build in flexibility: "This policy applies to AI tools including but not limited to..." avoids needing updates for every new tool. Currently, only Ohio and Tennessee require districts to have comprehensive AI policies.

What if teachers disagree about AI policy?

That's why institutional policy matters. Individual teacher discretion within a shared framework works; complete teacher-by-teacher variation creates the confusion that led to the Hingham lawsuit. The policy should establish principles while leaving room for subject-specific implementation.

How do you handle policy violations?

Treat early violations as teaching opportunities, not just discipline events. The goal is developing judgment, not just compliance. Repeat or egregious violations warrant stronger consequences, but first-time unclear-boundary situations deserve conversation.


Need help developing a comprehensive AI policy tailored to your school's context? Policy development for academic honesty, digital citizenship, and AI integration is one of the consulting services I offer. Get in touch to discuss your school's specific needs.


References

  1. Hand in Hand: Schools' Embrace of AI Connected to Increased Risks - Center for Democracy and Technology (October 2025)
  2. The Dawn of the AI Era: Teens, Parents, and the Adoption of Generative AI - Common Sense Media (September 2024)
  3. Schools' AI Policies Are Still Not Clear to Teachers and Students - Education Week (January 2025)
  4. AI Guidance for Schools Toolkit - TeachAI (Code.org, CoSN, Digital Promise)
  5. Schools Are Taking Too Long to Craft AI Policy - Education Week (February 2024)
  6. Hingham High School AI Lawsuit Coverage - NBC News (October 2024)
  7. More Teachers Are Using AI in Their Classrooms - Education Week (January 2026)
  8. Rising Use of AI in Schools Comes With Big Downsides - Education Week (October 2025)
  9. How School Districts Are Crafting AI Policy on the Fly - Education Week (October 2025)
  10. AI Integration Resources - Vera Cubero

Benedict Rinne, M.Ed.

Founder of KAIAK. Helping international school leaders simplify operations with AI. Connect on LinkedIn
