Exam answers from ChatGPT were only detected in 6 out of 10 cases

Researchers at the University of Agder have investigated how well ChatGPT can answer exam questions. Only 60 per cent of the answers were detected as robot texts.

The researchers asked 15 colleagues to evaluate 10 answers to a question from an exam that had been held at the University of Agder (UiA) in 2022. Five of the answers were written by students and five by the chat robot ChatGPT, Khrono writes (link in Norwegian).

The answers from ChatGPT received an average grade of C. One of the responses was graded B+, while another received a D. The overall average grade was the same as that of the student responses.

40 per cent of ChatGPT responses passed without the examiners detecting that they were written by a bot.


“There are, unfortunately, significant opportunities for students who want to cheat. This is something that should be intensively researched in the future,” says Associate Professor Peter André Busch at the Department of Information Systems at UiA.

He carried out the study together with his colleague, Associate Professor Geir Inge Hausvik.

“It is also worth mentioning that the examiners with more teaching experience made more correct assessments than those with less experience,” Busch says.

The research duo at UiA are now calling for national regulation of the use of artificial intelligence and conversational robots in an academic context.

“A national set of regulations should apply to both the university and college sector, and to higher education,” Hausvik says.

———

Translated by Alette Bjordal Gjellesvik.

Read the Norwegian version of this article on forskning.no
