May 1, 2025 | Vol. 82, No. 8

The Real Way to Stop Cheating in an AI World

As AI use in schools grows, educators may need to reassess their assumptions about what causes cheating and how to address it.

[Image: Students study renewable energy models while working on tablets in a classroom. Credit: Anna Stills / iStock]
Generative AI has become one of the fastest adopted technologies in human history. I’m continually surprised at the pace at which AI capabilities are increasing. I can ask AI to assemble a dinner recipe based on a list of random ingredients in my fridge, or help me write a haiku to celebrate my grandmother’s birthday. But when it comes to learning, the possibilities get even more exciting. AI can provide feedback on how to make my writing more persuasive or provide 10 (or 50) possible solutions to a problem I’m exploring. With a few clicks, AI can summarize the most essential research on a given domain and present it to me as a podcast interview that I can listen to on my way to the gym.
But as a learning tool, AI also brings its share of challenges, particularly when it comes to assessing student performance. One of the most common questions educators ask me is, “How can we stop students from using AI to cheat?” It’s fascinating to me that we are suddenly so interested in addressing cheating. Research from the Stanford Graduate School of Education shows that the prevalence of cheating among U.S. high school and college students is nothing new (Specter, 2023). The study found that nearly 70 percent of U.S. high school and college students regularly engaged in cheating long before the emergence of generative AI. Other studies have shown similar results (Todd, 2014).
What’s even more interesting is that the Stanford researchers found that the rate of cheating did not change after ChatGPT and other generative AI tools became available. The sudden urgency to do something about a problem we have been complacent about for decades highlights some fundamental misconceptions about why cheating happens and the role technology plays (or doesn’t) in enabling it.
In the early generative AI days, many schools looked to AI detectors, hoping to leverage technology as a quick fix to the cheating problem. But most AI writing simply isn’t detectable by AI tools because there is no watermark or embedded code. Put simply, there is nothing to “detect.” Legal content expert Debbie Mason Horacek cleverly demonstrated this by running the Declaration of Independence through an AI detector, which concluded that 98 percent of it was supposedly written by ChatGPT. Not surprisingly, reliance on AI detection tools has led to many embarrassing moments for teachers, like the Texas A&M professor who erroneously flunked all his students for plagiarism (Klee, 2023). In that case, the students were exonerated after one of them ran the professor’s own dissertation through the AI detection tool, which claimed it had been created by AI. And while AI detectors aren’t good at detecting AI-generated content, they are good at perpetuating bias: Black teenagers are twice as likely as their white peers to be falsely accused of cheating by AI detection tools (Clifton, 2024). The point is that AI can’t detect AI.

Cheating Causes and Correctives

First, we need to recognize that cheating is not caused by technology but by culture. If we really want to address the cheating problem, we can do so without buying expensive anti-plagiarism software or creating punitive policies. And none of the levers for improving academic integrity need to involve banning generative AI. Instead, we have to understand the underlying causes of cheating and the correctives available to create a culture of academic integrity. I have created a table (see fig. 1) of common causes of cheating and their antidotes (inspired by the work of Torrey Trust, an education researcher at the University of Massachusetts Amherst; see Trust, n.d.).
[Figure 1: Common causes of cheating and strategies to prevent them, such as reducing high-stakes tests and increasing student agency.]
While each of these elements deserves deeper exploration, the first two are the best places to start if you’re beginning the path to creating an ethical and thriving learning culture. Let’s look at them more closely.

Design Meaningful Assessments

The first antidote to cheating is to make sure the learning, and how we assess it, feels relevant to students. I used to work with preservice teachers at Brigham Young University. When new teachers were struggling with classroom management issues (students talking during class, distracting other students, etc.), they would ask for strategies to address the problematic behavior. They wanted tools to fix the students’ actions.
However, in almost every case, the fastest way to eliminate classroom management issues was to focus on making the learning activities more meaningful to the students. When the students saw value in what they were learning, most behavior problems went away. When cheating happens, it is likely because students don’t see relevance in what they are learning, how they are being tested, or both. If a student doesn’t see direct value to their lives in the concepts you are teaching, they are naturally incentivized to find the fastest, most efficient way through the experience. It doesn’t matter to them whether they actually learn the content, because the learning doesn’t feel necessary to them in the first place.
Students are willing to put forth the effort to learn new concepts when they feel there is a reasonable return on that investment of effort. This requires understanding student needs and interests to align the learning to what they care about. ISTE+ASCD has created a set of Transformational Learning Principles to guide educators in making learning and assessment meaningful for students. One critical element of the principles focuses on making assessments authentic. As much as possible, assessments should ask students to apply their learning in meaningful, real-world contexts and mirror the way they might demonstrate the learning concept in their “real” lives. Authentic assessments always happen in the context of a real-world situation—this could include explaining a concept to a peer or using a new skill to solve a problem they might encounter in their future job.
There are lots of great tips available for creating authentic assessments (including, for example, this guide from the University of Bath). The more relevant the learning feels, the less interest a student will have in cheating because they see value in the learning.


Establish Academic Integrity Norms with Student Input

The second antidote to cheating is to work with students to establish a shared understanding of what cheating is and what it isn’t. As you consider this conversation, keep in mind that our traditional definitions of cheating are not always completely logical, as a student recently pointed out to me. She was confused by a teacher who considered it cheating for students to put their paper into ChatGPT and ask how they could make it better. Yet the same teacher encouraged students to go to the writing center, where another student would read their paper and tell them how they could make it better.
Consider another example from a teacher workshop I was leading where we were discussing what counts as a student’s own answer. A teacher was adamant that simply copying and pasting an answer from Wikipedia or an AI tool was cheating. That’s fair. But when I asked how he recognized legitimate student work, his answer surprised me. “It’s really simple,” he said. “I write the exact answer on the board for them to copy. All they need to do is write that answer in their notebooks. And then when I give them a test, they need to give that answer back to me.” He didn’t see the irony in his philosophy, but I found myself wondering how copying the answer from the board would lead to any more learning than copying the answer from Wikipedia. From a student’s standpoint, why would one be considered cheating and the other not?
Address this issue by having regular and open conversations with your students about norms. Frame the conversation in a positive way: instead of asking, “How do we stop cheating?”, ask, “What is academic integrity?” Talk about the nuances of intent and transparency. For example, if AI use is disclosed, does that change the ethical equation? Discuss how you expect students to disclose their use of generative AI or other sources of inspiration. And how should they cite AI that was used in a brainstorming or feedback role but didn’t generate any of the actual text?
When students are involved in determining the definitions of academic integrity, they are much more likely to adhere to them of their own accord. This doesn’t mean that every student idea automatically constitutes an acceptable norm, nor should you imply that it does. But the simple act of having the conversation with them, and truly hearing their viewpoints, goes a long way toward creating a climate of integrity.

Don’t Let a Good Crisis Go to Waste

The rise of generative AI has placed a spotlight on cheating in school. Even if the prevalence of cheating has not actually changed, we should not, as Winston Churchill said, “let a good crisis go to waste.” Let’s take advantage of this moment to more deeply understand what causes cheating and shift our energy from panic toward creating a culture of academic integrity. By ensuring learning is relevant to students and actively involving them in exploring appropriate norms, we are well on our way to creating a world where the value of learning overshadows the attraction of cheating. In fact, AI’s role as a forcing function to spark efforts to make learning more valuable to students may be the most impactful (even if unintended) outcome of AI in education.
Editor’s note: This article originally appeared on the ASCD Blog.
References

Clifton, M. (2024, September 18). Black teenagers twice as likely to be falsely accused of using AI tools in homework. Semafor.

Klee, M. (2023, May 17). Professor flunks all his students after ChatGPT falsely claims it wrote their papers. Rolling Stone.

Specter, C. (2023, October 31). What do AI chatbots really mean for students and cheating? Stanford Graduate School of Education.

Todd, S. (2014, May 16). Three decades uncovering the truth about student cheating. Rutgers Today.

Trust, T. (n.d.). Technology is not the solution to cheating [Infographic]. OneHE. https://onehe.org/resources/torrey-trusts-technology-is-not-the-solution-to-cheating/

Richard Culatta is an internationally recognized innovator and learning designer. As the CEO of ISTE+ASCD, Culatta is focused on supporting education changemakers to create equitable and engaging learning experiences for students around the world.

Prior to joining ISTE+ASCD, Culatta served as the Chief Innovation Officer for the state of Rhode Island. In this role, he led an initiative to make Rhode Island the first state to offer computer science in every K-12 school and created a state vision for personalized learning.

Culatta was appointed by President Obama as the Executive Director of the Office of Educational Technology for the U.S. Department of Education. In that capacity, he led efforts to expand connectivity to schools across the country, promote personalized learning, and develop the National Education Technology Plan. He also pioneered new opportunities for engagement between the Department, education leaders, and tech developers, including bringing top game designers from around the world to the White House to help rethink the design of assessments. Culatta also served as an education policy adviser to U.S. Sen. Patty Murray and a Resident Designer for the global design firm IDEO.

Richard's book Digital for Good: Raising Kids to Thrive in an Online World (HBR Press) uncovers the challenges with our current approaches to preparing young people to be effective humans in virtual spaces and presents a path to a healthier and more civil digital world.

Culatta began his career in the classroom as a high school teacher and has coached educators and national leaders around the world on making learning more engaging. He holds a bachelor's degree in Spanish teaching and a master's in educational psychology and technology from Brigham Young University.

From our issue: Navigating Bias in Teaching and Learning