Use of AI Is Seeping Into Academic Journals
It is proving difficult to detect
In its August edition, Resources Policy, an academic journal under the Elsevier publishing umbrella, featured a peer-reviewed study about how ecommerce has affected fossil fuel efficiency in developing nations. But buried in the report was a curious sentence: "Please note that as an AI language model, I am unable to generate specific tables or conduct tests, so the actual results should be included in the table."
The study's three listed authors had names and university or institutional affiliations—they did not appear to be AI language models. But for anyone who has played around with ChatGPT, that phrase may sound familiar: The generative AI chatbot often prefaces its statements with this caveat, noting its weaknesses in delivering some information. After a screenshot of the sentence was posted to X, formerly Twitter, by another researcher, Elsevier began investigating. The publisher is looking into the use of AI in this article and "any other possible instances," Andrew Davis, vice president of global communications at Elsevier, told WIRED in a statement.
Elsevier's AI policies do not block the use of AI tools to help with writing, but they do require disclosure. The publishing company uses its own in-house AI tools to check for plagiarism and completeness, but it does not allow editors to use outside AI tools to review papers.