
Scientific detectives discover dishonest use of ChatGPT in articles

The OpenAI website is displayed on a smartphone on a pile of open dictionaries.

Some researchers are using ChatGPT to write articles without disclosure. Credit: Jonathan Raa/NurPhoto via Getty

On 9 August, the journal Physica Scripta published a paper that aimed to uncover new solutions to a complex mathematical equation¹. It seemed authentic, but scientific detective Guillaume Cabanac noticed an odd phrase on the third page of the manuscript: ‘Regenerate response’.

The phrase is the label of a button on ChatGPT, a free-to-use AI chatbot that generates fluent text when users prompt it with a question. Cabanac, a computer scientist at the University of Toulouse in France, quickly posted a screenshot of the page in question on PubPeer – a website where scientists discuss published research.

The authors have since confirmed to the journal that they used ChatGPT to help draft their manuscript, said Kim Eggleton, head of peer review and research integrity at IOP Publishing, Physica Scripta’s publisher in Bristol, UK. The anomaly was not detected during the two months of peer review (the paper was submitted in May and a revised version in July) or during typesetting. The publisher has now decided to retract the article because the authors did not declare their use of the tool when they submitted it. “This is a violation of our ethics policies,” Eggleton said. Corresponding author Abdullahi Yusuf, who is jointly affiliated with Biruni University in Istanbul and the Lebanese American University in Beirut, did not respond to Nature’s request for comment.

‘The tip of the iceberg’

This is not the only case of a ChatGPT-assisted manuscript slipping into a peer-reviewed journal undeclared. Since April, Cabanac has flagged more than a dozen articles containing the telltale ChatGPT phrases ‘Regenerate response’ or ‘As an AI language model, I …’ and posted them on PubPeer. Many publishers, including Elsevier and Springer Nature, have said that authors may use ChatGPT and other large language model (LLM) tools to help produce their manuscripts, as long as they declare that use. (Nature’s news team is editorially independent of its publisher, Springer Nature.)

Searching for these key phrases picks up only naive undeclared uses of ChatGPT – those in which the authors forgot to edit out the telltale signs – so the number of peer-reviewed articles produced with ChatGPT’s undeclared assistance is likely to be much larger. “That’s just the tip of the iceberg,” Cabanac said. (The telltale signs are also changing: ChatGPT’s ‘Regenerate response’ button changed to ‘Regenerate’ in an update to the tool earlier this year.)
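This kind of key-phrase screening is simple enough to sketch in code. The example below is an illustrative Python snippet only, not Cabanac’s actual tooling; the folder name, the assumption of plain-text manuscripts and the phrase list are all assumptions made for the example.

```python
# Illustrative sketch only (not Cabanac's tooling): flag plain-text manuscripts
# that contain telltale ChatGPT phrases. Folder name and phrase list are
# assumptions for the example.
from pathlib import Path

TELLTALE_PHRASES = [
    "regenerate response",       # label of ChatGPT's former regenerate button
    "as an ai language model",   # boilerplate chatbot disclaimer text
]

def flag_manuscript(path: Path) -> list[str]:
    """Return any telltale phrases found in a plain-text manuscript."""
    text = path.read_text(encoding="utf-8", errors="ignore").lower()
    return [phrase for phrase in TELLTALE_PHRASES if phrase in text]

if __name__ == "__main__":
    for path in sorted(Path("manuscripts").glob("*.txt")):  # hypothetical folder
        hits = flag_manuscript(path)
        if hits:
            print(f"{path.name}: possible undeclared ChatGPT use -> {hits}")
```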

Cabanac has detected typical ChatGPT phrases in several papers published in Elsevier journals. The latest, published on 3 August in Resources Policy, explores the impact of e-commerce on fossil-fuel efficiency in developing countries². Cabanac noticed that some of the equations in the paper did not make sense, but the giveaway was above a table: ‘Please note that as an AI language model, I am unable to generate specific tables or conduct tests …’

An Elsevier spokesperson told Nature that the publisher was “aware of the issue” and was investigating it. The paper’s authors, at Liaoning University in Shenyang, China, and the Chinese Academy of International Trade and Economic Cooperation in Beijing, did not respond to Nature’s request for comment.

Frighteningly fluent

Articles written wholly or partly by computer software without the authors disclosing that fact are nothing new. However, they usually contain subtle but detectable traces – such as specific patterns of language or mistranslated ‘tortured phrases’ – that distinguish them from text written by humans, said Matt Hodgkinson, research-integrity manager at the UK Research Integrity Office in London. But if researchers delete the boilerplate ChatGPT phrases, the more sophisticated chatbot’s fluent text is “almost impossible” to spot, Hodgkinson said. “It’s essentially an arms race,” he said – “the fraudsters against the people trying to stop them.”

Cabanac and others have also found undisclosed use of ChatGPT (through the telltale phrases) in peer-reviewed conference papers and in preprints – manuscripts that have not yet gone through peer review. When these issues have been raised on PubPeer, authors have sometimes admitted that they used ChatGPT, undeclared, to help create the work.

Elisabeth Bik, a microbiologist and independent research-integrity consultant in San Francisco, California, said the rapid rise of ChatGPT and other generative AI tools will give firepower to paper mills – companies that create and sell fake manuscripts to researchers looking to boost their publication output. “It will make the problem a hundred times worse,” Bik said. “I’m very worried that we already have a huge number of these papers that we don’t even recognize any more.”

Stretched to the limit

The problem of undeclared LLM-produced articles slipping into journals points to a deeper issue: overstretched peer reviewers often do not have time to thoroughly scour manuscripts for red flags, said David Bimler, who uncovers fake papers under the pseudonym Smut Clyde. “The whole science ecosystem is publish or perish,” said Bimler, a retired psychologist formerly at Massey University in Palmerston North, New Zealand. “The number of gatekeepers can’t keep up.”

ChatGPT and other LLMs have a tendency to produce false references, which could be a signal for peer reviewers looking to spot use of these tools in manuscripts, Hodgkinson said. “If the reference doesn’t exist, then that’s a red flag,” he said. For example, the website Retraction Watch has reported on a preprint about millipedes that was written using ChatGPT; it was spotted by a researcher whom the work cited, who noticed that its references were fake.
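Hodgkinson’s red-flag check – whether a cited work actually exists – can be automated to a first approximation. The snippet below is a hedged Python sketch, not a tool used by anyone quoted here; it assumes the references carry DOIs and queries the public Crossref REST API, treating an unresolvable DOI as something a reviewer should look at.

```python
# Hedged sketch: treat a DOI that Crossref cannot resolve as a possible fake
# reference. Assumes cited works carry DOIs; the example DOIs are illustrative.
import urllib.error
import urllib.request

def doi_exists(doi: str) -> bool:
    """Return True if the Crossref REST API resolves the DOI, False on a 404."""
    url = f"https://api.crossref.org/works/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=10):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # other HTTP errors (rate limits, outages) need human attention

if __name__ == "__main__":
    for doi in ["10.1038/s41586-020-2649-2", "10.9999/not-a-real-paper"]:
        status = "found" if doi_exists(doi) else "NOT FOUND - check this reference"
        print(doi, status)
```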

Rune Stensvold, a microbiologist at the State Serum Institute in Copenhagen, encountered the problem of fake references when a student asked him for a copy of a paper that Stensvold had apparently co-authored with one of his colleagues in 2006. The paper did not exist. The student had asked an AI chatbot to suggest papers on Blastocystis – a genus of intestinal parasite – and the chatbot had pieced together a reference with Stensvold’s name on it. “It looked very real,” he said. “It taught me that when I review articles, I should probably start by looking at the references.”

Additional reporting by Chris Stokel-Walker.
