Bad science can be stopped through chain retraction


Feet of Clay: Detecting Red Flags in Authors' Reference Lists

It is important that the scientific literature does not perpetuate the flawed results of retracted papers. No one wants their reasoning to rest on false premises. In the same way that many people wouldn't accept a medical treatment bolstered by shaky clinical trials, the scientific community doesn't want researchers, the public and, increasingly, artificial intelligence (AI) systems to rely on erroneous data or conclusions from retracted articles.

I encourage publishers to donate to charity the article-processing charges they received for papers that are later retracted. For instance, IOP Publishing, owned by the Institute of Physics in London, was among the first publishers to retract articles on the basis of tortured phrases. It donates revenues from its retracted articles to Research4Life, an organization that provides institutions in low- and middle-income countries with online access to the academic literature.

To make all of these steps possible, publishers need to update their practices and devote more resources to their research-integrity teams.

In theory, meta-analyses and systematic reviews should be withdrawn or corrected if work they cite is later retracted, according to a policy issued in 2021 by the Cochrane Collaboration, an international group known for its gold-standard reviews of medical treatments.

Publishers should be more concerned. When a paper is retracted or flagged, the notes attached to it exist precisely to alert readers that the reliability of its conclusions has been called into question.

Until recently, publishers did not screen submitted manuscripts for citations to retracted papers. However, many publishers say they are aware of Cabanac's tool and monitor the issues he raises, and some are bringing in similar screening tools.

Tools exist to check reference lists, such as RetractoBot, which alerts scholars when papers they have cited are retracted. And the Feet of Clay Detector can be used, free of charge, to check whether the reference list of a published article contains any red flags. It can run checks on a single article, starting from just its title, or on an entire publisher's portfolio, making it easy for individual researchers and journals to vet the literature that interests them.
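
To illustrate the kind of check such tools perform, here is a minimal Python sketch. It assumes a local CSV export of a retraction database (Retraction Watch's data, for instance, is openly available); the file name and column name below are placeholders for illustration, not the real schema of any particular tool.

```python
"""Minimal sketch of a reference-list retraction check.

Assumes a local CSV export of a retraction database; the file path
and the 'OriginalPaperDOI' column name are illustrative assumptions.
"""
import csv


def load_retracted_dois(path: str) -> set[str]:
    """Build a lookup set of retracted DOIs from a CSV export."""
    retracted = set()
    with open(path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            doi = row.get("OriginalPaperDOI", "").strip().lower()
            if doi:
                retracted.add(doi)
    return retracted


def flag_references(cited_dois: list[str], retracted: set[str]) -> list[str]:
    """Return the cited DOIs that appear in the retraction database."""
    return [doi for doi in cited_dois if doi.strip().lower() in retracted]


if __name__ == "__main__":
    retracted = load_retracted_dois("retraction_watch.csv")  # assumed path
    refs = ["10.1234/example.2020.001", "10.5678/example.2019.042"]  # placeholder reference list
    for doi in flag_references(refs, retracted):
        print(f"Red flag: cited work {doi} has been retracted")
```

A production service such as the Feet of Clay Detector layers metadata lookups, PubPeer signals and human review on top of this basic matching step.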

The checks are also being integrated into the STM Integrity Hub, a platform under development that serves the editorial teams of subscribing publishers. The software aims to flag suspicious signals to editors, such as tortured phrases, comments on PubPeer or retracted references.

Co-authors, editors, referees and typesetters should keep an eye out for unnatural phrases in articles. Such phrases can expose text that has been generated by artificial intelligence, or by an elaborate form of copy-and-paste that runs text through a translation tool to make phrases unrecognizable to plagiarism-detection software.

Cabanac lists the detector's findings on his website; so far, he has flagged more than 1,700 papers whose reference lists lean on retracted work. Some authors have thanked him for alerting them to problems in their references. Others argue that it is unfair to effectively cast aspersions on their work because of retractions that occurred after publication and that, they say, don't affect their paper.

Before citing a study, authors should check for any post-publication criticism of it and acknowledge that criticism in their manuscript draft.

Detecting Problematic Research: Tools That Screen Scientific Papers

Two PubPeer browser extensions play an important role. One automatically flags any paper that has received comments on PubPeer, so that readers can find out more. The other flags such articles in a user's digital reference library. Readers can also check the status of an article by clicking the Crossmark button on the publisher's landing page.
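
At their core, these plug-ins look up each article identifier against PubPeer's records. The sketch below shows the idea; PubPeer's real API is available only to partners, so the endpoint and JSON field names here are hypothetical stand-ins.

```python
"""Sketch of how a plug-in might flag PubPeer-commented papers.

The endpoint URL and the 'comment_count' field are hypothetical
placeholders; PubPeer's actual API is restricted to partners.
"""
import requests

PUBPEER_API = "https://pubpeer.example/api/v1/publications"  # hypothetical endpoint


def has_pubpeer_comments(doi: str) -> bool:
    """Return True if the (assumed) API reports comments for this DOI."""
    resp = requests.get(PUBPEER_API, params={"doi": doi}, timeout=10)
    resp.raise_for_status()
    return resp.json().get("comment_count", 0) > 0  # assumed field name


if __name__ == "__main__":
    doi = "10.1234/example.2020.001"  # placeholder DOI
    if has_pubpeer_comments(doi):
        print(f"{doi}: discussed on PubPeer; read the thread before citing")
```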

I built a tool for the Problematic Paper Screener (PPS) to comb the literature for nonsensical 'tortured phrases'. Each tortured phrase must first be spotted by a human reader, then added as a 'fingerprint' to the tool, which screens some 130 million scientific documents in the Dimensions database. So far, 5,800 fingerprints have been collated. Humans are involved in a third step, checking for false positives. (Dimensions is in the portfolio of Digital Science, which is part of Holtzbrinck, the majority shareholder in Nature's publisher, Springer Nature.)
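
In code, the automated middle step amounts to matching a curated dictionary of fingerprints against full text. The toy version below uses three published examples of tortured phrases; the real PPS operates at vastly larger scale and, as noted, keeps humans in the loop at both ends.

```python
"""Toy version of the tortured-phrase screening step.

The three fingerprints are well-documented published examples;
the matching logic is deliberately simplified.
"""
import re

# fingerprint -> the standard phrase it mangles
FINGERPRINTS = {
    "counterfeit consciousness": "artificial intelligence",
    "profound neural organization": "deep neural network",
    "bosom peril": "breast cancer",
}


def screen(text: str) -> list[tuple[str, str]]:
    """Return (tortured phrase, expected phrase) pairs found in the text."""
    hits = []
    for tortured, expected in FINGERPRINTS.items():
        if re.search(r"\b" + re.escape(tortured) + r"\b", text, re.IGNORECASE):
            hits.append((tortured, expected))
    return hits


if __name__ == "__main__":
    sample = "We train a profound neural organization to detect bosom peril."
    for tortured, expected in screen(sample):
        print(f"Suspicious: '{tortured}' (expected '{expected}'); needs human review")
```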

A research-integrity sleuth has already flagged thousands of problematic papers in the literature with software he created. He hopes that the new detector he has been developing over the past two years, described in a Comment article in Nature this week, will provide another way to stop bad research from spreading through the scientific literature.

Retraction is not the only way a study can be called into question: a growing number of papers are critiqued on the PubPeer platform. More than 200,000 articles have received comments on PubPeer, the majority of them critical. But publishers typically don't monitor these comments, and the authors of a criticized paper aren't obliged to respond. It is common for post-publication comments, including those from eminent researchers in the field, to raise potentially important issues that go unacknowledged by the authors and the publishing journal.

Journals are also slow to act. The process requires journal staff to mediate a conversation between all parties, a discussion that the authors of the criticized paper are typically reluctant to engage in and that sometimes involves extra data and post-publication reviewers. Investigations can take months or years before the outcome is made public.

Scientists who discover a suspicious or problematic paper can flag it through the conventional route: contacting the editorial team of the journal in which it appeared. But it can be difficult to find out how to raise concerns, and with whom. Furthermore, this process is typically not anonymous and, depending on the power dynamics at play, some researchers might be unwilling or unable to enter these conversations.

Paper mills have sprung up to take advantage of the system. These businesses produce manuscripts based on made-up, manipulated or plagiarized data, sell those fake manuscripts along with authorship and citations, and engineer the peer-review process.

A researcher's performance metrics, including the number of papers published, citations acquired and peer-review reports submitted, all serve to build reputation and visibility, leading to invitations to speak at conferences, review manuscripts, guest-edit special issues and join editorial boards. Strong metrics can add weight to job applications, be key to attracting funding and help build a high-profile career. Institutions, too, benefit from hosting scientists who publish prolifically.

Article retractions have been growing steadily over the past few decades, soaring to a record-breaking figure of nearly 14,000 last year, compared with fewer than 1,000 per year before 2009 (see go.nature.com/3azcxan and go.nature.com/3x9uxfn).

In January, a review paper1 about ways to detect human illnesses by examining the eye appeared in a conference proceedings published by the Institute of Electrical and Electronics Engineers (IEEE) in New York City. What its authors apparently did not notice is that most of the work the review drew on had been retracted.

RetractoBot, a tool developed by researchers at the University of Oxford, UK, is designed to alert authors when a study they have cited is retracted. The software currently monitors 20,000 retracted papers and about 400,000 papers, published after 2000, that cite them. The team plans to publish the results of a randomized trial of the tool next year, says Nicholas DeVito, an integrity researcher at Oxford.
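
A system of this kind needs, at minimum, a way to enumerate the papers that cite each retracted study. RetractoBot's own pipeline is not public; the sketch below approximates that first step using the open OpenCitations COCI endpoint (field names follow its documented JSON output), and leaves the notification step as a comment.

```python
"""Sketch of RetractoBot-style monitoring: list papers that cite a
retracted study so that their authors can be notified.

Approximates the citation-lookup step with the public OpenCitations
COCI API; the DOI used is a placeholder.
"""
import requests

COCI = "https://opencitations.net/index/coci/api/v1/citations/"


def citing_dois(retracted_doi: str) -> list[str]:
    """Return DOIs of works that cite the given (retracted) DOI."""
    resp = requests.get(COCI + retracted_doi, timeout=30)
    resp.raise_for_status()
    return [record["citing"] for record in resp.json()]


if __name__ == "__main__":
    retracted = "10.1234/example.2015.100"  # placeholder DOI of a retracted paper
    for doi in citing_dois(retracted):
        # A real system would look up each citing paper's corresponding
        # author and send an e-mail alert at this point.
        print(f"{doi} cites retracted work {retracted}")
```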

“We are not accusing anybody of doing something wrong. We are just observing that in some bibliographies, the references have been retracted or withdrawn, meaning that the paper may be unreliable,” Cabanac says. The Feet of Clay Detector takes its name from a biblical image: an imposing statue standing on fragile feet of clay, a metaphor for hidden weakness.

Another flagged study5 is by Ahmad Salar Elahi, a physicist affiliated with the Islamic Azad University in Tehran, who has already had dozens of papers retracted, in many cases because of excessive self-citation or faked peer review. The website Retraction Watch has covered the Nazari and Chen cases, as well as reports that Elahi might be dismissed from the university. Ghoranneviss, a co-author of the Elahi papers who has since retired, says he was not aware of the problems, and that Elahi was barred only from the research centre where they worked, not from the rest of the university. Neither the university nor Elahi responded to Nature's queries. The IEEE and Springer Nature, which published the journals that ran the Elahi papers, say they are investigating.

Chen-Yuan Chen, a computer scientist who worked at the National Pingtung University of Education in Taiwan, was behind a syndicate that faked peer review and boosted citations; it came to light in 2014 after an investigation by the publisher SAGE. Some of Chen's papers that remain in the literature were published by Springer Nature, which says it hadn't been aware of the issue but is now investigating. Neither Chen nor Nazari responded to Nature's requests for comment.

Some authors are unhappy about Cabanac's work. After being alerted by one of his social-media posts last year, the journal Clinical and Translational Oncology placed an expression of concern on a paper, warning that it may not be reliable because a number of the articles it cites have been retracted, says the journal's publishing editor, Ying Jia at Springer Nature in Washington DC.

One study found that authors tend to be hesitant to update reviews even after being told that they cite retracted work. Researchers e-mailed the authors of 88 systematic reviews that cited now-retracted studies on bone health by Yoshihiro Sato, a Japanese researcher who committed fraud. Last year, the authors of 11 of those reviews told Nature that the reviews had not been updated.

Authors aren't routinely alerted if work cited in their past papers is withdrawn, although in recent years reference-management tools such as Zotero and EndNote have incorporated Retraction Watch's open database of retracted papers and have begun to flag cited works that have been retracted. Cabanac suggests that publishers could use tools similar to his to generate the same kind of alerts.

The team has alerted more than 100,000 researchers so far. DeVito says that a minority of authors are annoyed about being contacted, but that others are grateful. “We are merely trying to provide a service to the community to stop this practice from happening,” he says.