ChatGPT has exploded into public awareness, and it is here to stay. In higher education, much of the ensuing discussion has, rather anxiously, revolved around students using the AI tool to cheat: for instance, to generate answers to assignments that, according to much of the research conducted so far, can receive quite good grades. This problem requires us, as educators, to think carefully about the assignments we give our students and about how to ensure fair and secure assessment.
But ChatGPT isn’t all doom and gloom; it also brings opportunities. The crux is that we have to grasp what the tool can actually do and how to approach it. And quickly. We still have no proper rules or laws governing how we may use it, and the technology is constantly evolving. So, in the absence of formal guidelines, how should we think about it, and what concrete benefits can we derive from AI right now?
For those of us tasked with imparting to our students knowledge about how the world actually works, it’s important to understand that ChatGPT and similar AI tools are designed to generate realistic language, not to constrain that generative process to reality. This is why ChatGPT tends to confabulate (“hallucinate”) facts and references. A fundamental principle for legitimate use of the tool therefore follows: the user must have the expertise to judge whether generated claims are actually accurate, and must take full responsibility for the generated text, including any inaccuracies, as if it were their own.
This fundamental principle points towards ethically defensible use cases. I don’t know about you, but I appreciate being able to put more of my focus on the parts that actually contribute to quality in teaching. It’s a bit like with calculators: instead of spending cognitive resources on mental arithmetic, I can focus on the bigger picture. Likewise, when I prepare a new course, I can now use ChatGPT to generate, for instance, relevant course objectives for a syllabus, a grading rubric for specific assessment components, or even a complete course structure, with its various components, examinations, and so on.
Regarding this last point, I often have a multitude of ideas, but assembling a reasonable course structure from these more or less free associations can take considerable mental energy. My solution these days is to pace back and forth and riff freely into a dictaphone app on my smartphone, which automatically transcribes everything I say. The resulting text is barely comprehensible to a human: it’s incoherent, with strange pauses in unexpected places, plenty of “umms” and “aahs,” and sometimes I even contradict myself (“no, wait, scratch that idea, do this instead…”).
But when I paste this rambling text into ChatGPT, with instructions to turn it into a course structure of a specific length, at a specific level, within a specific subject, it actually produces a pretty decent result! Not perfect: I still need to make adjustments, drawing on my expertise and experience (in accordance with the fundamental principle above), but the result is robust enough that I can focus more on the important quality issues and less on trivialities.
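For the programmatically inclined, the workflow can be sketched in a few lines of code. This is a minimal illustration, not my actual setup: the function name and the prompt wording are hypothetical, and the assembled string is simply what one would paste into ChatGPT (or send via an API).

```python
# Hypothetical sketch of the transcript-to-prompt workflow described above.
# The function name and prompt wording are illustrative assumptions.

def build_course_prompt(transcript: str, subject: str, level: str, weeks: int) -> str:
    """Wrap a raw, rambling voice transcript in instructions for ChatGPT."""
    instructions = (
        f"You are helping design a {weeks}-week university course in {subject} "
        f"at the {level} level. Below is a rough, unedited voice transcript of "
        "my ideas; it contains filler words and self-corrections. "
        "Turn it into a coherent course structure with modules, learning "
        "objectives, and assessment components."
    )
    return f"{instructions}\n\n--- TRANSCRIPT ---\n{transcript.strip()}"

# Example usage: the result is the text one would paste into ChatGPT.
prompt = build_course_prompt(
    transcript="umm so maybe start with ethics... no wait, scratch that...",
    subject="research methods",
    level="master's",
    weeks=10,
)
```

The point of wrapping the transcript this way is that the instructions, not the rambling, carry the constraints (length, level, subject), so the model knows to treat the transcript as raw material rather than as the desired output.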
There’s a lot about ChatGPT that we haven’t figured out yet, and it will take a while before we reach any real consensus on how it ought to be used. In the meantime, though, it’s reassuring to know that there are sensible use cases that help me improve my teaching by letting me invest more of my limited time in the parts that truly matter. And that’s a real benefit, for both me and my students!