Extract from Eureka Street
- Vol 36 No 4
- When AI writes the essay, who does the thinking?
- Erica Cervini
- 12 March 2026
At the start of the academic year, there is the familiar buzz on campus. New local and international students mingle as they explore club stalls in the first couple of weeks, while academics prepare fresh subjects to teach. But for many academics, this excitement is tempered by the thought of marking essays they know have been AI generated. They will be confronted with bland and homogenous phrases, pedestrian ideas and, in some cases, glaringly incorrect facts.
Universities have always had to deal with students cheating on their essays, but in the past couple of decades more ways to do this have emerged. A big one has been the commercial ‘essay mills’ that write papers to order. A cheaper option now is to use AI.
While many universities allow AI use in particular subjects, it’s generally frowned upon for writing essays and in media subjects that require news writing skills.
Typically, universities use plagiarism detection systems such as Turnitin, which some institutions also use to indicate what percentage of a student's essay is AI generated. Academics and tutors are told that essays with an AI score of under 20 per cent can basically be ignored. Even higher scores are contested.
While universities grapple with how to detect AI in essays and how to deal with students who use it, there is another AI issue they will need to get their heads around. Emerging research suggests that AI may diminish students' critical thinking. A 166-page report, 'Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task', by Massachusetts Institute of Technology (MIT) scientists, suggests there is a 'cognitive debt' to using AI for writing essays, owing to 'a likely decrease in learning skills'.
‘Studies indicate that while these systems (AI) reduce immediate cognitive load, they may simultaneously diminish critical thinking capabilities and lead to decreased engagement in deep analytical processes. This phenomenon is particularly concerning in educational contexts, where the development of robust cognitive skills is paramount.’
The MIT scientists studied three groups of students aged 18 to 39 and used electroencephalography (EEG) to assess their cognitive engagement during essay writing tasks. The first group had to rely on their brain, the second group could use a search engine, while the third used ChatGPT. The participants were interviewed after each writing task.
The researchers found the ChatGPT group had the lowest brain connectivity of the three groups. This negatively affected the students' creativity and memory, with many unable to quote what they had written. They also felt less ownership of their work.
The brain-only group fared the best on brain engagement. When the AI group switched to 'brain-only' writing, they fared worse still. The researchers suggest this finding may indicate that ongoing use of AI contributes to a 'cognitive debt', although they point out that more research needs to be done on the topic.
These results imply that the danger is not simply that AI produces mediocre essays, but that it removes the intellectual labour through which learning normally occurs.
I wanted to know what ChatGPT thought about critical thinking, so I asked it: 'Does the consistent use of ChatGPT for writing essays negatively affect our brain engagement and critical thinking?' Part of the response follows. It said nothing about changes in brain activity, although ChatGPT did offer the generic statement that 'long-term data is still developing'.
‘When It Can Weaken Critical Thinking
Using ChatGPT for essays may reduce critical thinking if you:
• Copy responses without questioning them
• Skip the research and reflection process
• Let it structure all your arguments for you
• Avoid forming your own position first
Critical thinking develops through:
• Struggling with ideas
• Comparing sources
• Evaluating evidence
• Revising your reasoning
If AI replaces those steps, your thinking muscles don’t get exercised.
When It Can Strengthen Critical Thinking
Used intentionally, ChatGPT can actually support deeper thinking. For example:
• Asking it to challenge your argument
• Requesting counterarguments to test your thesis
• Comparing your draft with alternative interpretations
• Using it to clarify complex theories before forming your own view
• Getting feedback on logic gaps
In this case, it functions more like a debate partner than a ghostwriter.’
Many students are using ChatGPT as a 'ghostwriter', a recent Australian feature suggests. Ros Thomas interviewed many students and academics, and the academics reported scarily high numbers of students embracing AI. Many students were proud of using it and found AI an easy way to complete assignments quickly. The MIT study also showed that the group that relied on ChatGPT became lazier and, by the end of the four-month study, often just copied and pasted from it.
While research shows students are relying on AI, universities need to tackle the reasons why students use AI to write their essays. I suspect that part of the reason students are willing to rely on AI stems from the financial pressures they face. Studies show that university students, particularly those living away from home, are working longer hours to make ends meet, which leaves less time for them to study. This is also the case for international students, who have high fees and rents to pay; although there are caps on the number of hours they can work, some still exceed them.
From my experience working in universities, I also know international students will use AI because their English skills are poor. This is not their fault but the universities', for admitting them to degrees that require proficient English. I've also found that universities do not give international students adequate support to attain the necessary English skills.
There are also students who are tempted to use AI because they are disengaged from their studies. Could one reason be that some subjects are taught the same way year after year with the same old resources, leaving students less motivated to study? A friend of mine who already has an undergraduate degree has gone back to do a master's in primary education at a Melbourne university. She has been disappointed with the coursework because it is not up to date with recent research into the benefits of teaching phonics. Ironically, she chose this university because it had done research in that very field.
Deakin University researchers argue in their latest AI study that ‘the problem of AI and assessment is far more difficult than media debates have been making out’. Instead, AI and assessment must be treated as something to be ‘continually negotiated rather than definitively resolved’. They are correct, but it’s a massive area that also needs more research on how AI affects learning and why students so readily use it, particularly in the humanities and social sciences where essay writing is the main form of assessment.
Unless this research is done, and if studies continue to show there is a 'cognitive debt' from relying on AI, cheating may be the least of universities' problems. If universities lose the intellectual rigour they are supposed to cultivate, what would be the reason for their existence?