Generative AI tools like ChatGPT have upended how universities design and police exams — and some researchers say the crisis has no clean solution.
A new paper published this month in the academic journal Assessment & Evaluation in Higher Education argues that artificial intelligence has turned assessment into what its authors call a “wicked problem”: a challenge so complex it cannot be fully solved.
The authors — Thomas Corbin, David Boud, Margaret Bearman, and Phillip Dawson of Deakin University — interviewed 20 unit chairs at an unnamed “large” Australian university in the second half of 2024 via one-hour, semi-structured Zoom interviews, and found widespread confusion, heavy workloads, and no clear path to AI-proof exams.
While some saw AI as a professional tool students must master, others viewed it as a form of fraud that undermines learning.
Many admitted they were simply “at a loss,” unsure how to balance the pressure to make assessments AI-proof against the need to keep them creative, authentic, and manageable.
Many described impossible trade-offs.
One tried to offer both AI-permitted and AI-free assignments — but found it “a nightmare” that doubled their workload.
Another teacher worried that stricter assessments might simply “test compliance rather than creativity.”
Others noted that oral exams, while more resistant to AI, are logistically impossible to scale for large cohorts.
In a piece for The Conversation published on Tuesday, the Deakin professors explained that wicked problems, a concept borrowed from urban planning and climate-policy debates, are “messy, interconnected, and resistant to closure.”
There are no “true” or “false” fixes, they wrote, adding that every attempt at a solution generates new tensions and unintended consequences.
Their prescription: Stop chasing the silver bullet.
Universities should give staff “permission to compromise, diverge, and iterate,” recognizing that what works in one area may fail in another, and that assessments will need constant revision.
“Universities that continue to chase the elusive ‘right answer’ to AI in assessment will exhaust their educators while failing their students,” the paper concluded.
What professors are trying — and what experts say about grading in the age of AI
On the ground, teachers are mixing analog and AI-aware tactics: handwritten or in-person tasks to establish a baseline voice; reflective components and live presentations; and “personalized” prompts that are harder to outsource.
Some are embracing AI to cut drudge work — using chatbots to draft lesson plans, quizzes, and report templates — so they can spend more time with students.
Others have swung the opposite way, banning AI in lower-level courses after catching hallucinated citations and prose indistinguishable from chatbot output.
Outside the classroom, prominent voices argue that the debate goes far beyond cheating.
Economist Tyler Cowen said AI is exposing how much of schooling rests on homework and easy-to-grade tests — and that both are becoming obsolete.