The false idea that vaccines cause autism began with a fraudulent 1998 study published by British doctor Andrew Wakefield — a claim whose misinformation has persisted for decades.
“There was a whole network and a whole industry and a whole set of people who believed that (claim) because it was published in a journal,” University of Colorado Boulder professor Daniel Acuna said. “And it actually was (published) in a very good journal, but the same kind of thing can happen in other journals.”
Acuna, in collaboration with other scientists and students, has developed an AI tool that can identify predatory scientific journals. These fake journals charge scientists hundreds or thousands of dollars to publish their research on the journal’s website without providing any real service. They will publish anything, in contrast to the standard process of peer review, in which experts review the ideas presented in a research paper and check it for scientific accuracy.
Out of nearly 15,200 scientific journals, the AI tool was able to identify more than 1,000 of these illegitimate journals.
“Science is a big web of ideas that are all interconnected,” Acuna said. “So if one of those ideas is wrong, it’s going to affect an entire sub-part of science and potentially all of science.”
Acuna started the project five years ago after realizing that the process of identifying and analyzing these journals is labor-intensive. And when a journal is caught, the same company will often simply spawn a new journal to take its place.
So, he developed an AI tool to replicate what people would do when trying to determine if a journal is reputable.
“We ask AI to go to the website, review the information, review the information of the authors, what they have published (and) whether they’re famous or associated with reputable institutions,” Acuna said.
Typically, predatory journals target younger, more inexperienced authors. The AI tool also considers how new a journal is and whether it uses an outdated website design. The AI then looks for correlations among those different signals to determine whether the journal is trustworthy.
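The signal-combining approach described above can be sketched in a few lines of code. This is a minimal illustration with hypothetical feature names and hand-picked weights — Acuna’s actual tool is not public in this form, and a real system would learn its weights from labeled examples of legitimate and predatory journals.

```python
# Minimal sketch: combining red-flag signals into a journal risk score.
# Feature names and weights are illustrative assumptions, not the real model.

def journal_risk_score(signals: dict) -> float:
    """Sum the weights of whichever red flags are present; result is in [0, 1]."""
    # Hypothetical weights; a production system would fit these from data.
    weights = {
        "no_peer_review_policy": 0.35,      # no evidence of genuine peer review
        "authors_unaffiliated": 0.20,       # authors lack reputable institutions
        "journal_very_new": 0.15,           # journal launched very recently
        "outdated_website_design": 0.10,    # stale or templated website
        "fees_without_services": 0.20,      # charges fees but offers no service
    }
    return sum(w for name, w in weights.items() if signals.get(name))

# Example: a journal exhibiting several red flags at once.
flags = {
    "no_peer_review_policy": True,
    "journal_very_new": True,
    "outdated_website_design": True,
}
print(f"risk score: {journal_risk_score(flags):.2f}")  # 0.35 + 0.15 + 0.10 = 0.60
```

A threshold on this score (say, flagging anything above 0.5 for human review) mirrors the idea of automating the checks a careful reader would perform by hand.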
Pawin Taechoyotin, a graduate student who works with Acuna, said the phenomenon is similar to misinformation that happens online in news and social media.
“It also happens in science as well,” Taechoyotin said. “There are a lot of papers that are retracted five years after it was published or maybe 10 years after it was published, and the reason the retraction could be found is that the results are not relevant or that there was manipulating of results.”
Acuna launched a startup, called ReviewerZero.ai, that uses AI to detect and prevent different types of research integrity issues. For example, the company helps publishers and institutions catch potential problems with authors and journals.
Moving forward, Acuna plans to examine potential networks of these predatory journals. Through his work, he has found that they often do not act alone: entire companies operate behind them, including authors who are involved in the fraud.