

Study shows new tool detects AI-produced content with 99% accuracy


Do you think you can spot AI content? For college professors, AI-generated content, like that from ChatGPT, poses a real problem. If students aren’t turning in their own work, they aren’t earning their own degrees.

Worse still, one of the glaring problems with AI-generated text is that, because of how the output is produced, it isn’t always factual. These issues have prompted higher-education institutions to establish guidelines on ChatGPT use, spelling out how students may and may not use the artificial intelligence chatbot in their coursework so they don’t compromise their academic integrity.


However, this isn’t a foolproof solution. Not every student honors their school’s rules, and the AI-detection tools currently on the market have been known to return false positives, resulting in flagged work and delayed diplomas.

One university decided to take matters into its own hands and build a high-tech way to catch computer-generated content. A study published in the peer-reviewed journal Cell Reports Physical Science introduces a new tool, developed at the University of Kansas, that detected ChatGPT-generated text with more than 99% accuracy in the study’s tests. It is one of the only available tools capable of detecting AI-written content in academic papers.

University of Kansas flag (Adobe)

To train the detector, the researchers started with 64 human-written documents and used ChatGPT to generate 128 articles from them. That data set is roughly 100,000 times smaller than the ones used to train other detectors, said Heather Desaire, the study’s lead author, in a statement. The team’s use of a smaller data set and greater reliance on human knowledge is what increased the tool’s accuracy.

“So, we had this small data set, which could be processed super quickly, and all the documents could actually be read by people,” she said. “We used our human brains to find useful differences in the document sets, we didn’t rely on the strategies to differentiate humans and AI that had been developed previously.”
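The study described in this article doesn’t publish its code here, but the general approach it describes is familiar: compute a handful of human-chosen stylistic features from each document, then train an ordinary classifier on them. The sketch below is purely illustrative. The specific features, the logistic-regression model and the scikit-learn pipeline are assumptions for the sake of example, not the University of Kansas team’s actual implementation.

```python
# Illustrative sketch only: a small feature-based text classifier in the spirit
# described above. The features and model choice are assumptions, not the
# researchers' actual code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def extract_features(text: str) -> list[float]:
    """Hand-picked stylistic cues a human reader might notice."""
    sentences = [s for s in text.split(".") if s.strip()]
    words = text.split()
    sentence_lengths = [len(s.split()) for s in sentences] or [0]
    return [
        float(np.mean(sentence_lengths)),          # average sentence length
        float(np.std(sentence_lengths)),           # variation in sentence length
        float(text.count("(") + text.count(")")),  # use of parentheses
        float(text.count(";")),                    # use of semicolons
        float(sum(w[0].isdigit() for w in words)), # tokens starting with a digit
    ]

def train_detector(docs: list[str], labels: list[int]) -> LogisticRegression:
    """docs: article texts; labels: 1 = human-written, 0 = ChatGPT-generated."""
    X = np.array([extract_features(d) for d in docs])
    y = np.array(labels)
    model = LogisticRegression(max_iter=1000)
    # Report accuracy on held-out folds before fitting on the full data set.
    print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())
    return model.fit(X, y)
```

Because a data set this small fits easily in memory and every document can be read by a person, the features can be chosen by inspection rather than learned from millions of examples, which is the trade-off the researchers highlight.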

The results speak for themselves: when distinguishing human-written articles from AI-generated pieces, the tool’s accuracy rate was over 99%. Desaire’s team has made the tool available to researchers who would like to build on it, giving others an opportunity to contribute to AI detection.

This story originally appeared on Simplemost. Check out Simplemost for additional stories.