69% of high school students used generative AI for schoolwork in 2025, according to College Board research. Banning AI on assignments doesn't work: you can't enforce it, and the attempt creates adversarial dynamics. The better approach is to design assignments where authentic engagement is easier than shortcuts.
The evidence is clear. Pew Research found that teens using ChatGPT for schoolwork doubled from 13% to 26% between 2023 and 2024. Universities are responding dramatically—blue book sales surged 80% at UC Berkeley, 50% at the University of Florida, and 30% at Texas A&M as institutions scramble for AI-resistant assessment methods.
But returning entirely to handwritten, in-class exams isn't practical for K-12 education—and it misses an opportunity. The assignments that resist AI shortcuts are often better assignments anyway: more engaging, more meaningful, more likely to produce genuine learning.
What Makes Assignments Vulnerable to AI?
Generic assignments are AI's sweet spot. AI can produce a competent response to "Write a five-paragraph essay about the themes of the novel" in seconds.
Vulnerable assignments tend to share four characteristics:
- A generic prompt any student in any classroom could receive
- Only the final product is assessed
- Completed entirely outside class
- No connection to recent, local, or personal context
When an assignment has all four characteristics, you've designed a task optimized for AI completion.
What Are the Five Principles for AI-Resistant Design?
1. Require process, not just product.
When you only see the final essay, you can't tell how it was created. When you see brainstorming notes, rough drafts, revision history, and reflection on changes made, the process becomes visible.
This doesn't mean micromanaging. It means building in checkpoints: a first draft submitted before AI feedback is allowed, a reflection explaining which suggestions were accepted and why, an in-class component that can't be outsourced.
Jason Stephens of the International Center for Academic Integrity recommends updating assignments to "require screenshots of the AI prompts used and the responses." This makes AI use transparent rather than hidden.
2. Connect to local, recent, or personal context.
AI can write about the Civil War in general. It can't write about your grandmother's experience of segregation, the city council meeting from last Tuesday, or yesterday's class discussion.
Assignments grounded in specific, recent, local, or personal context require engagement AI can't provide.
Generic: "Analyze the economic impact of urbanization."
AI-resistant: "Interview someone who moved from a rural area to a city. Compare their experience to what researchers have found about urbanization's economic effects."
3. Include in-class components.
A timed, in-class writing sample establishes what the student can produce independently. Oral defenses require students to explain and extend their written work. Collaborative activities generate shared context that individual AI use can't replicate.
Research on AI-resistant assessment found that when students proposed solutions to AI cheating concerns, 46% recommended "multiple assessment formats including oral and process-based" components. Students themselves recognize that verbal explanation reveals genuine understanding in a way a submitted document cannot.
You don't need to make everything in-class. But some component in your presence provides a reference point and reduces the incentive to outsource the rest.
4. Make AI use explicit and bounded.
Rather than prohibiting AI, specify how it should and shouldn't be used: "Use AI to brainstorm topics, but write your thesis yourself." "After completing your first draft, use AI to identify weaknesses—then document which suggestions you accepted and why."
Harvard's guidelines recommend creating "scaffolded assignments where AI can be used for specific stages, but not others, providing a balanced approach to technology use."
When AI use is explicit, it becomes a tool rather than a cheat. Students learn to work with AI productively instead of hiding it.
5. Design for iteration.
A single submission incentivizes getting the "right answer" by any means. Multiple drafts with feedback cycles incentivize learning. When students know they'll revise based on feedback, the first attempt matters less than the learning trajectory.
Iteration also makes AI assistance visible. A student who suddenly improves dramatically between drafts raises natural questions.
💡 The bonus effect
Assignments designed to resist AI shortcuts tend to be more engaging. Students report preferring work that connects to their lives over generic academic tasks. You're not just preventing cheating; you're improving learning.
How Does This Connect to IB Programs?
For schools implementing IB programs, these principles align naturally with IB philosophy. The emphasis on inquiry, personal engagement, and demonstration of understanding over content reproduction maps directly to AI-resistant design.
ATL (Approaches to Learning) skills become more important in an AI environment:
- Critical thinking can't be outsourced
- Self-management requires personal engagement
- Communication demands authentic voice
Assignments developing these skills are inherently AI-resistant. The IB's focus on process (drafts, reflections, supervisor meetings) already builds in the visibility that makes AI shortcuts difficult.
At my school, we've leaned into this alignment. PYP exhibitions require students to demonstrate inquiry processes that can't be replicated by AI. MYP personal projects demand genuine personal engagement and documented reflection. The IB framework, designed long before generative AI existed, happens to embody exactly the principles that make assessment AI-resistant.
Where Should You Start?
You don't need to redesign every assignment. Start with one—ideally one where you suspect AI use is already happening.
Apply the principles:
- Add a process component (brainstorming doc, first draft, revision notes)
- Connect to something AI can't access (local, recent, personal)
- Consider an in-class element (oral defense, timed writing, collaboration)
- Make AI use explicit ("You may use AI for X but not Y")
- Build in iteration (multiple drafts with feedback)
Common Sense Education recommends using "programs that keep track of progress, like revision history in Google Docs" to monitor the development of student thinking—making it more difficult to simply use AI to produce a final product.
The goal isn't making cheating impossible. It's making authentic engagement the easier path.
Frequently Asked Questions
Won't this take more time to grade?
Process-based assignments generate more artifacts, but they also make grading more meaningful: you're assessing learning, not just outputs. Many teachers find they spend less time on integrity investigations once the process is visible.
What about standardized tests that AI can't help with?
In-class assessments remain important reference points. K-12 Dive reports that experts recommend "conducting summative assessments in class and on paper" when teachers need to verify that students have developed foundational skills. The goal isn't eliminating all AI-free work; it's making take-home assignments valuable learning experiences rather than AI-completion exercises.
How do I verify students actually did the interviews/research themselves?
Require specificity that's hard to fake: names, dates, direct quotes with context, photos of primary sources. Include follow-up questions in class discussion. The details of authentic engagement are hard to manufacture.
What if students use AI to prepare for oral defenses?
That's often fine—using AI to understand material better, then demonstrating that understanding orally, is legitimate learning. The oral defense reveals whether understanding is genuine. As research on oral exams notes, "The most compelling advantage of oral exams is that they are inherently resistant to AI misuse. A student cannot convincingly explain a concept they do not understand."
Do these principles work for elementary students?
Yes, with age-appropriate implementation. Young students naturally work in personal context (family stories, local observations). Process visibility (drafting in class, sharing thinking aloud) is already common in elementary classrooms.
Aren't oral exams unfair to anxious students?
Valid concern. Some educators note that oral assessments can create barriers for students with anxiety, speech differences, or neurodiverse conditions. The solution isn't avoiding oral components entirely, but providing accommodations and alternatives. Oral defenses can be one component among many, not the sole assessment method.
References
- New Research: Majority of High School Students Use Generative AI for Schoolwork - College Board (May 2025)
- Teens and ChatGPT: Usage Doubled - Pew Research Center (January 2025)
- With More Students Using AI, How Can Schools Promote Academic Integrity? - K-12 Dive (March 2025)
- AI-Assisted Academic Cheating: A Conceptual Model - Frontiers in Computer Science (October 2025)
- Academic Integrity and Teaching With(out) AI - Harvard Office of Academic Integrity
- How to Help Prevent AI Use for Plagiarism - Common Sense Education
- Oral Exams in AI Era - AcademyNC (August 2025)
- Student AI Cheating: How Big Is the Problem? - Curiously
- Essential Considerations for Addressing AI-Driven Cheating - Faculty Focus
- IB Approaches to Learning - International Baccalaureate
