A robot hand and a human hand holding a red heart

How I Learned to Stop Worrying and Love AI

"I now need to develop questions that will stump an AI but not a knowledgeable human." Universitetsläraren's columnist Oskar MacGregor has changed the way he formulates examination tasks.

2025-05-26

Senior Lecturer in Informatics at the University of Skövde

This is a column. The opinions expressed are the writer’s own.

Whether we like it or not, most of our students will employ chatbots to assist them in their writing. This ranges from the perfectly acceptable “I want to understand more about this topic” to the highly problematic “Generate an essay on this topic”. But the practice has already spread like wildfire across the world’s student populations, and it is here to stay.

Whether we like it or not, we will not catch most instances of AI cheating. Since chatbots are constantly being trained on human texts, “AI detection” is always a step behind (regardless of what the marketing hype might claim). And any “gotcha” tricks, like adding “invisible” white text to assignment instructions in order to get some telltale evidence of chatbot use, might work a few times, but students are quick to cotton on. Particularly when the solution is as easy as screenshotting instead of copy-pasting the instructions, or even just bothering to actually read through the AI-generated output before submitting it.

Whether we like it or not, most of the social and commercial pressures are strongly in favor of chatbot use. The massive valuations of companies like Nvidia and OpenAI are built on the expectation that AI solutions will soon completely suffuse our technologies. This might also help explain why so many, including us researchers, are obviously already using the tools in precisely the ways we seem to expect our students not to.

I’ve been thinking about what this all means for my own teaching practices. One alternative is to try to push back against the tide of technological developments by banning the use of AI tools in writing assignments. But given where we stand now, after the public launch of ChatGPT in late 2022, that’s a strategy based more on wishful thinking and a problematic yearning for “the halcyon days of yore” than on any factual reality.

Besides, it’s our responsibility, as educators, to prepare our students for the future. Given the predicted growth of AI, we ought therefore to actively teach them to work with the tools instead, to develop their ability to see how AI can both help and harm.

For instance, chatbots have a well-known tendency to “hallucinate” (or bullshit) in their responses, and students therefore need to learn to distinguish between more and less trustworthy parts of an AI’s output. As it turns out, this sort of knowledge actually just consists of higher education’s classical emphasis on critical thinking, in somewhat new clothes. But it can only be provided if we give our students repeated opportunities to really work actively with AI.

So instead of asking straightforward subject comprehension questions that a chatbot can generate an excellent response to in seconds, I now need to develop questions that will stump an AI but not a knowledgeable human. Like asking students to identify examples of some course-relevant phenomenon in the latest news, or some other form of slightly more open and lateral thinking that chatbots haven’t really mastered yet.

And instead of spending my time correcting grammar, reference formatting, structure, and the like, I now need to focus more on tone, flagging statements that are too general and vague, or any abundance of exuberant adjectives and phrases that belong more in ad copy than in an academic essay. And I need to strictly fail texts that include fabricated references or quotes or other straightforward factual errors.

It’s never easy when new developments change the conditions for an existing practice, as is now happening with writing. But in order to help our students prepare for the future, we have to realize that we can’t stop these developments, and must instead adjust our approach accordingly. Whether we like it or not.

Do you agree? Send your opinion to redaktionen@universitetslararen.se.
