University of Kansas researchers published a paper Wednesday detailing an algorithm they developed that can detect with high accuracy whether a research article was written by a text-generation model such as ChatGPT.
The researchers, who published in Cell Reports Physical Science, claim that their algorithm attained 100% accuracy in detecting AI-written research papers, based on a pool of 30 human-written research papers and 60 AI-written papers. When tested at the paragraph level, the algorithm was 92% accurate.
If those results hold at scale, the tool would be more accurate than other algorithms designed for the same purpose; OpenAI’s own ChatGPT-detection classifier, by comparison, has performed poorly.
The researchers trained their algorithm using 64 human-written scientific papers and 128 AI-written articles.
They identified several consistent differences between the writing styles of AI and humans, such as the human propensity for longer paragraphs, a larger vocabulary, more punctuation and a greater use of qualifiers, such as “however.”
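Stylistic cues of this kind can be extracted with simple text statistics. The sketch below is illustrative only: the feature names, the qualifier word list and the punctuation set are assumptions for demonstration, not the researchers’ actual feature set or code.

```python
import re

# Illustrative qualifier list; the researchers' actual list is not public here.
QUALIFIERS = {"however", "although", "but", "because"}

def style_features(text: str) -> dict:
    """Compute simple per-document writing-style features of the kind
    described: paragraph length, vocabulary size, punctuation use and
    qualifier frequency."""
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    punctuation = re.findall(r"[,.;:!?]", text)
    return {
        "mean_paragraph_words": len(words) / len(paragraphs) if paragraphs else 0.0,
        "vocabulary_size": len(set(words)),
        "punctuation_count": len(punctuation),
        "qualifier_count": sum(1 for w in words if w in QUALIFIERS),
    }
```

Features like these could then be fed to any standard classifier; the paper’s reported accuracy figures come from its own model and data, not from this sketch.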
Such research could be of interest to higher education institutions, which have alternately prohibited text generation tools as cheating aids or embraced them as learning aids.