ChatGPT writes convincing fake scientific abstracts that fool reviewers in study

Nanowerk, January 16, 2023
A team of researchers in the US (Northwestern University, University of Chicago) took titles from recent papers in high-impact journals and asked ChatGPT to generate abstracts for them. They ran the generated abstracts and the original abstracts through a plagiarism detector and an AI output detector, and had blinded human reviewers try to tell the two apart. Each reviewer was given 25 abstracts, a mixture of generated and original, and asked to give a binary score: generated or original. The reviewers spotted the ChatGPT-generated abstracts only 68% of the time, and they incorrectly flagged 14% of the real abstracts as AI-generated. The fake abstracts did not raise alarms with traditional plagiarism-detection tools; however, a dedicated AI output detector was fairly reliable at flagging ChatGPT output. AI language models could help automate scientific writing, easing the publishing bottleneck and making it easier for non-English-speaking scientists to share their work with the broader community. The researchers therefore suggest building AI output detection into the scientific editorial process as a screening step, protecting journals from being targeted by organizations such as paper mills. Hard-to-detect fake abstracts could undermine science, and paper mills could increase production…read more.
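To make the screening step concrete, here is a minimal sketch of how an editorial workflow might score a submitted abstract with an AI output detector. It assumes the publicly released RoBERTa-based GPT-2 output detector on Hugging Face; the study's exact tooling, and the model's label names, may differ, so treat the model name and labels as assumptions to verify against the model card.

```python
# Minimal sketch of AI-output screening for a submitted abstract.
# Assumption: the "roberta-base-openai-detector" model on Hugging Face;
# check the model card for the exact label semantics of its output.
from transformers import pipeline

# Load the detector once; it returns a label with a confidence score.
detector = pipeline("text-classification", model="roberta-base-openai-detector")

def screen_abstract(text: str) -> dict:
    """Score a single abstract; truncation guards against over-long inputs."""
    return detector(text, truncation=True)[0]

if __name__ == "__main__":
    sample = "Paste a candidate abstract here."
    result = screen_abstract(sample)
    print(f"{result['label']} (confidence {result['score']:.2f})")
```

In an editorial pipeline, a score like this would flag suspect submissions for human follow-up rather than reject them outright, consistent with the screening role the researchers propose.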

Posted in Scholarly publishing.
