ChatGPT is being used for all kinds of purposes, but not all of these uses are equally legitimate.
Academic papers have started popping up with the phrase “As an AI language model.” As discovered by Twitter user Andrew Kean Gao, there are several papers on Google Scholar containing this particular phrase. “As an AI language model”, of course, is the default text that ChatGPT prepends to many of its answers, and its presence indicates that these papers were likely — at least in part — written by ChatGPT.
For instance, a peer-reviewed paper on electric vehicles in the International Journal of New Innovations in Engineering and Technology used the phrase “As an AI language model.”
Another paper on electric vehicles, and one on the economic effects of climatic change, also used the phrase “As an AI language model.”
And even a paper on human-robot interaction seemed to have been written by ChatGPT, employing the same telltale phrase.
Now it’s not surprising that researchers are using ChatGPT in the course of their work — ChatGPT can provide helpful summaries of topics, and can create broad outlines, which non-native English speakers often use as a starting point for writing English text. What’s more surprising is that researchers are lifting entire passages directly from ChatGPT’s output and passing them off as their own work. Worse, some researchers don’t even seem to read the output they paste into their papers — anyone who noticed the phrase “As an AI language model” would likely delete it from their draft to mask the use of ChatGPT, yet the phrase remains conspicuously present on many occasions. And incredibly, the phrase has also escaped the attention of peer reviewers and made its way into final published papers.
Researchers aren’t the only people using ChatGPT for plagiarism. Students have been using ChatGPT to complete assignments to such a degree that some schools have begun asking for assignments to be submitted through Google Docs, which lets a teacher check the document history to see whether the text was copy-pasted from ChatGPT or written gradually over time by the student. A lawyer was also caught using ChatGPT for a court filing after ChatGPT fabricated cases that were eventually flagged by opposing counsel, resulting in a censure from the court. With peer-reviewed papers appearing on Google Scholar bearing the phrase “As an AI language model”, it’s clear that few fields, if any, will remain immune to AI plagiarism, and companies may soon need robust ways to detect whether a piece of text was written by an AI.