Unis need ‘systemic overhaul’ to deal with AI use

As many as 60 per cent of university students have used generative artificial intelligence to produce academic work, forcing universities to adapt to ensure students are actually learning, experts say.

The surging popularity of generative AI since the launch of ChatGPT now requires “a significant, systemic overhaul of assessment practices” in universities, says University of Queensland education professor Jason Lodge. “Adapting to the age of AI is moving from a focus on detecting cheating to focusing on detecting whether learning has occurred,” he writes in a recent paper for the Tertiary Education Quality and Standards Agency.

The siren call of generative AI entices students with its ability to produce targeted and reasonably well-crafted text in seconds, in response to a simple prompt typed into a website (the first iterations of the chatbots remain free).

Australian universities are responding in various ways to students who use genAI to produce academic work.

University academics can assess knowledge and understanding with oral exams, or “vivas”, but it’s an expensive and time-consuming procedure, says University of NSW Artificial Intelligence Institute chief scientist Professor Toby Walsh.

“Unfortunately, it doesn’t scale,” he adds. “There are places where we have done that and places where we will continue to do that, but higher education now is delivered at a much greater scale.”

GenAI presents a difficult and delicate problem for teaching academics, he says, noting that large language model chatbots are useful writing tools, particularly for the many students who are not writing in their mother tongue. Students can, of course, take the assistance too far and use genAI to come up with the intellectual ideas and to structure the argument.

Risk of false positives

Walsh says AI detection technologies should be deployed with great care because they can wrongly flag a student for cheating. “At best [the detection] is a probability, so making a serious allegation of academic misconduct based on something that is not certain is very difficult and problematic.”

He suggests teaching academics warn students that if they use genAI, they will have to submit not only the finished text but also all their AI prompts and stages of work.

“We have to realise these tools are going to be out there in the real world and many of us are going to profit by having access to them in our everyday lives,” Walsh says, adding that he, personally, will never write a business letter again, instead using AI to do the work.

Turnitin, an international company whose technology is widely used in Australian universities, makes a range of tools including genAI detection systems. Turnitin’s regional vice-president for Asia-Pacific, James Thorley, says the detection tools have been developed to minimise false positives.

“We don’t mind missing some generative AI text out there, in order to keep the false positive rate as low as possible,” he says. “Independent tests show we’re below 1 per cent.”

Turnitin’s detection tools rely on probability, so there are no definitive positives, he adds, just a high probability that AI has been used. An AI flag should be seen only as a single data point requiring further evidence and discussion with the student.

Recently, Turnitin launched another tool that can detect paraphrased genAI text, and the company is now looking to develop a tool to assist with assessing student work that has used AI, Thorley says.

“If a student uses a generative AI prompt, then works on the text for days and turns it into their work, I don’t think people care about that,” he says. “They care about those who put a prompt into genAI and pass off the end product as their own.”

Associate Professor Jemma Skeat, who has led assessment at Deakin University’s school of medicine, says there is still some debate about exactly where the line should be drawn regarding permissible genAI use. “You can use it to brainstorm a topic, and maybe use some of the output but add your own ideas and reframe it and it’s your generated content in the end,” she says.

Skeat says AI can legitimately be used to assist international students who don’t speak English as a native language – the tool can highlight sections of work that need rephrasing, or even help rephrase them – and to support students with disabilities.

Two anonymous surveys she conducted last year found students were very aware of genAI’s limitations, understanding that its output can include “entirely fabricated information”, Skeat says.

The survey at the end of last year found most students didn’t automatically see the use of AI as cheating. “They knew there were legitimate uses for genAI,” she says. “But they were very clear they needed to know where the line was.”

Australian Financial Review