If you’ve been getting cozy with the idea of your friendly AI chatbot, ChatGPT, whipping up your final thesis paper, it’s time for a reality check. While AI-generated writing has wowed the masses, researchers at the University of Kansas have just upped the ante by developing a method to identify AI-generated academic science writing with over 99% accuracy.

The research, published in the journal Cell Reports Physical Science, details how the method was developed and tested.

“The need to discriminate human writing from AI is now both critical and urgent, particularly in domains like higher education and academic writing, where AI had not been a significant threat or contributor to authorship,” the researchers wrote.


Past attempts to detect AI-generated content have met with varying degrees of success. OpenAI’s own efforts, for instance, haven’t been reliably effective. But these detection tools have traditionally targeted a broad spectrum of writing styles rather than homing in on the specific tone and purpose that characterize academic science writing.

The team's method, by contrast, was tailored to academic writing. By comparing 64 human-written perspective articles with 128 ChatGPT-generated articles on the same research topics, the researchers identified key markers of AI writing.

For example, ChatGPT’s penchant for words like "others" and "researchers" over connectives such as "however," "but," and "although" proved to be a dead giveaway. Humans’ predilection for complex paragraph structures, varied sentence lengths, and fluctuating word counts was another fingerprint largely missing from the AI writing.
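To make those cues more concrete, here is a minimal Python sketch of how such stylistic markers could be counted. It is not the KU team's actual code, and the word lists and feature names are illustrative assumptions based only on the cues described above.

```python
# Illustrative sketch only: counting a few stylistic cues of the kind described
# in the article. Word lists and feature names are assumptions, not the KU
# team's actual feature set.
import re
import statistics

AI_LEANING_WORDS = {"others", "researchers"}          # reportedly favored by ChatGPT
HUMAN_LEANING_WORDS = {"however", "but", "although"}  # reportedly favored by humans

def stylometric_features(text: str) -> dict:
    """Return simple word-choice and sentence-variability cues for one document."""
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sentence_lengths = [len(s.split()) for s in sentences]

    return {
        # Rate of ChatGPT-leaning words per word of text.
        "ai_marker_rate": sum(w in AI_LEANING_WORDS for w in words) / max(len(words), 1),
        # Rate of human-leaning connectives per word of text.
        "human_marker_rate": sum(w in HUMAN_LEANING_WORDS for w in words) / max(len(words), 1),
        # Spread of sentence lengths, a proxy for "varied sentence lengths."
        "sentence_len_stdev": statistics.pstdev(sentence_lengths) if sentence_lengths else 0.0,
        # Average paragraph size, a rough stand-in for paragraph complexity.
        "mean_paragraph_words": statistics.mean(len(p.split()) for p in paragraphs) if paragraphs else 0.0,
    }

print(stylometric_features(
    "However, the results were noisy. But the trend held, although only weakly.\n\n"
    "Researchers and others have reported similar effects in larger samples."
))
```

Features like these can then be fed to any standard classifier; the real study's feature set was more extensive than this toy example.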

When put to the test, the model distinguished AI-generated articles from human-written ones with 100% accuracy and identified AI-written paragraphs within human articles with 92% accuracy. In the same tests, the team's model even outperformed an existing commercial AI text detector.
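For a sense of how document- or paragraph-level accuracy figures like these are typically produced, the sketch below trains a generic off-the-shelf classifier on labeled feature vectors and reports held-out accuracy. The random data, logistic-regression model, and train/test split are placeholder assumptions, not the team's actual model or dataset.

```python
# Illustrative sketch only: scoring a detector on labeled paragraph features.
# Random placeholder data stands in for real human- and AI-written paragraphs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))     # 200 paragraphs x 4 stylometric features
y = rng.integers(0, 2, size=200)  # 1 = AI-generated, 0 = human-written

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Paragraph-level test accuracy: {accuracy_score(y_test, clf.predict(X_test)):.0%}")
```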

According to an article Cell shared with Techxplore, the next step is to push the boundaries of the model’s applicability by testing it on larger datasets and other types of academic science writing. As ChatGPT and its ilk grow more sophisticated, it remains to be seen whether the model will maintain its effectiveness.


Moreover, users could potentially fine-tune open-source large language models (LLMs) and manipulate the language to evade this and other detection methods. It would be yet another technological arms race: “jailbreakers” tweaking AI tools to slip past detection on one side, and researchers building ever-better detectors for AI-generated content on the other.

But is this tool available for educators to use on student papers?

Lead author Heather Desaire says the model is not quite there yet. However, the team's methods can be easily replicated for different applications.

In the meantime, remember: Before you instruct ChatGPT to “write my thesis,” be aware that your AI pal might be more predictable than you think, so it's better to Do Your Own Research (papers). As for ChatGPT, it might be time to go back to AI school for some lessons in nuance and variety.
