Leo Goldsmith, an assistant professor of screen studies at the New School, can tell when you use AI to cheat on an assignment. There’s just no good way for him to prove it.
“I know a lot of examples where educators, and I’ve had this experience too, where they receive an assignment from a student, they’re like, ‘This is gotta be AI,’ and then they don’t have” any simple way of proving that, Goldsmith told me. “This is true with all kinds of cheating: The process itself is quite a lot of work, and if the goal of that process is to get an undergraduate, for example, kicked out of school, very few people want to do this.”
This is the underlying hum AI has created in academia: my students are using AI to cheat, and there's not much I can do about it. When I asked one professor, who requested anonymity, how he catches students using AI to cheat, he said, "I don't. I'm not a cop." Another replied that it's the students' choice whether or not they want to learn in class.
AI is a relatively new problem in academia, and not one that educators are particularly well armed to combat. Despite the rapid rise of AI tools like ChatGPT, most professors and academic institutions remain unequipped, technically and culturally, to detect AI-assisted cheating, while students are increasingly incentivized to use it.