AI cheating is destroying higher education; here’s how to fight it
Introduction
Generative AI represents the most accessible and low-cost method for academic dishonesty ever created, and it is now being used by a large majority of students. A recent survey of 1,000 college students revealed that over 89 percent have turned to ChatGPT to help finish homework assignments.
These AI tools are widely available, and students can easily learn to bypass “AI detectors” by visiting popular online platforms such as Reddit and YouTube, where active communities share strategies for using AI in schoolwork without detection.
AI in Academic Institutions
How should schools and universities respond? Currently, most institutions leave AI policy up to individual teachers. This flexibility can be beneficial, allowing educators to set guidelines suited to their courses and tools. However, in conversations with teachers nationwide, I’ve found that most feel they lack sufficient understanding of AI to craft effective policies.
Instead of competing with students in a tech-driven arms race, educators should reconsider what counts as cheating—much like they did with calculators and smartphones. For generative AI, the essential question is: “What part of an assignment should come from the student, and what can come from AI, to achieve the learning goal?” Answering this allows teachers to set clear boundaries.
Appropriate AI use in the broader world will ultimately be defined by society and employers. To prepare students, educators must find a middle ground between banning AI outright and letting students rely on it so heavily that they fail to build essential life and career skills.
Some teachers have reacted by switching entirely to oral or handwritten exams, but this approach is unsustainable—largely because it overburdens instructors. There are more practical ways to redefine cheating while helping students cultivate relevant skills.
How Can Educators Navigate AI?
First, educators must decide what is truly worth teaching. As available tools evolve, so should education. Just as many schools no longer teach cursive writing, instructors in various fields may determine that certain formerly taught skills no longer merit instructional time, and choose instead to emphasize enduring abilities like critical thinking and collaboration.
Second, they should track and make visible how students use AI. Several platforms, when used well, can show teachers exactly how AI tools are being employed. Documenting this use offers insight, while surfacing it enables educators to give feedback and teach responsible practices. Real-time review allows guidance during the research and writing process, not just upon submission.
Finally, educators should adopt project-based learning. When the learning occurs throughout a project, teachers can assess not only the final output, but the steps taken along the way. Educators skilled in project-based learning provide feedback from the start, making it more likely the final product genuinely reflects a student’s own knowledge and abilities.
Conclusion
In discussions with educators across the country, some have opposed letting students use ChatGPT for research and writing, arguing that “struggle leads to learning.” I don’t dispute that. However, if the struggle does not help students develop skills relevant to today’s world—much like cursive writing for many—is it necessary to insist upon it?
Copyright 2025 PowerNotes LLC. All Rights Reserved.