The telltale signs of essays written using ChatGPT

Cambridge researchers claim to have discovered the “telltale” signs of essays written using ChatGPT.

Repetition of words, paragraphs starting with “however”, and numbered lists are all giveaways that the artificial intelligence tool helped write the text, a study has found.

Cambridge University Press and Assessment compared essays written with the aid of ChatGPT by three first-year undergraduate students against 164 essays written by IGCSE students without it.

IGCSE is an international qualification which is the equivalent of a GCSE taken by UK pupils.

The essays were marked by examiners, and the undergraduates were interviewed and their essays analysed.

The study found essays written with the help of ChatGPT performed poorly on analysis and comparison skills compared to essays written without such assistance.

But ChatGPT-assisted essays performed strongly on information and reflection skills.

Researchers identified a number of key features of the ChatGPT writing style, which included repetition of words or phrases and ideas, the use of more words than are necessary to convey meaning, and Latinate vocabulary.

Essays written with the help of ChatGPT were also more likely to use paragraphs starting with words like “however”, “moreover” and “overall”, as well as numbered lists.

The researchers said ChatGPT’s default writing style “echoes the bland, clipped, and objective style that characterises much generic journalistic writing found on the internet”.

The report said: “The students found ChatGPT useful for gathering information quickly.

“However, they considered that complete reliance on this technology would produce essays of a low academic standard.”

Concerns about cheating

The study comes after the rise of generative AI tools such as ChatGPT sparked concerns in the education sector about cheating among pupils.

Last year, universities including Cambridge, Oxford and Edinburgh banned students from using the technology in assessed work.

However, Russell Group universities have signed up to a set of principles intended to ensure students are “AI literate” and more employable in the future.

The group said last summer that the principles will “shape institution and course-level work to support the ethical and responsible use of generative AI, new technology and software like ChatGPT”.

Staff will also be trained to use AI when they are teaching.

Lead researcher Jude Brady, of Cambridge University Press and Assessment, said: “Our findings offer insights into the growing area of generative AI and assessment, which is still largely uncharted territory.

“Despite the small sample size, we are excited about these findings as they have the capacity to inform the work of teachers as well as students.”

She added: “We hope our research might help people to identify when a piece of text has been written by ChatGPT.

“For students and the wider population, learning to use and detect generative AI forms an increasingly important aspect of digital literacy.”
