
Peer review in the age of artificial intelligence

It is Peer Review Week and the theme for 2025 is “Rethinking Peer Review in the AI Era”. This is not surprising given the rapid rise in the use and capabilities of artificial intelligence. However, views on AI are deeply polarized for reasons that span its legality, efficacy and even its morality.

A recent survey by IOP Publishing – the scientific publisher that brings you Physics World – reveals that physicists who do peer review are divided over whether AI should be used in the process.

IOPP’s Laura Feetham-Walker is lead author of AI and Peer Review 2025, which describes the survey and analyses its results. She joins me in this episode of the Physics World Weekly podcast in a conversation that explores reviewers’ perceptions of AI and their views of how it should, or shouldn’t, be used in peer review.

Artificial intelligence could help detect ‘predatory’ journals

Artificial intelligence (AI) could help sniff out questionable open-access publications that are more interested in profit than scientific integrity. That is according to an analysis of 15,000 scientific journals by an international team of computer scientists. They find that dubious journals tend to publish an unusually high number of articles and feature authors who have many affiliations and frequently self-cite (Sci. Adv. 11 eadt2792).

Open access removes the requirement for traditional subscriptions. Articles are instead made immediately and freely available for anyone to read, with publication costs covered by the authors, who pay an article-processing charge.

But as the popularity of open-access journals has risen, there has been a growth in “predatory” journals that exploit the open-access model by making scientists pay publication fees without a proper peer-review process in place.

To build an AI-based method for distinguishing legitimate from questionable journals, Daniel Acuña, a computer scientist at the University of Colorado Boulder, and colleagues used the Directory of Open Access Journals (DOAJ) – an online, community-curated index of open-access journals.

The researchers trained their machine-learning model on 12,869 journals indexed on the DOAJ and 2536 journals that have been removed from the DOAJ due to questionable practices that violate the community’s listing criteria. The team then tested the tool on 15,191 journals listed by Unpaywall, an online directory of free research articles.

To identify questionable journals, the AI system analyses journals’ bibliometric information and the content and design of their websites, scrutinising details such as the affiliations of editorial board members and the average author h-index – a metric that quantifies a researcher’s productivity and impact.
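The article does not spell out how the model itself is built, but as a rough illustration, a supervised classifier trained on this kind of bibliometric data could be set up along the following lines. This is a minimal sketch assuming scikit-learn’s gradient boosting; the feature names, the load_journal_features helper and the synthetic data inside it are placeholders, not the authors’ actual pipeline.

```python
# Hypothetical sketch of a journal-screening classifier; the real model,
# features and data used by Acuña and colleagues may differ.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split


def load_journal_features():
    """Placeholder: return one row of bibliometric features per journal.

    Example features: articles published per year, mean author h-index,
    mean number of author affiliations, self-citation rate.
    Labels: 1 for journals removed from the DOAJ for questionable
    practices, 0 for journals still indexed.
    """
    rng = np.random.default_rng(0)
    X = rng.normal(size=(12869 + 2536, 4))   # synthetic stand-in data
    y = np.concatenate([np.zeros(12869), np.ones(2536)])
    return X, y


X, y = load_journal_features()
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)

model = GradientBoostingClassifier()
model.fit(X_train, y_train)

# Flag held-out journals whose predicted probability of being
# questionable exceeds a chosen threshold.
probs = model.predict_proba(X_test)[:, 1]
flagged = probs > 0.5
print(f"Flagged {flagged.sum()} of {len(flagged)} held-out journals")
```

In practice the trained model would then be applied to an unlabelled directory such as Unpaywall, with the flagged journals passed to human reviewers for a final check.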

The AI model flagged 1437 journals as questionable; on closer inspection the researchers concluded that 1092 of these were genuinely questionable, while 345 were false positives.

They also identified around 1780 problematic journals that the AI screening failed to flag. According to the study authors, their analysis shows that problematic publishing practices leave detectable patterns in citation behaviour, such as last authors having a low h-index combined with a high rate of self-citation.
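Taken together, these figures imply rough performance numbers for the screening tool. A quick back-of-the-envelope calculation from the reported counts:

```python
# Precision and recall implied by the figures reported in the study
true_positives = 1092    # flagged and genuinely questionable
false_positives = 345    # flagged but legitimate
false_negatives = 1780   # questionable journals the screening missed

precision = true_positives / (true_positives + false_positives)
recall = true_positives / (true_positives + false_negatives)

print(f"precision ~ {precision:.2f}")  # ~0.76
print(f"recall    ~ {recall:.2f}")     # ~0.38
```

In other words, roughly three-quarters of the flagged journals were genuinely questionable, but a substantial fraction of problematic journals slipped through, which is consistent with the authors’ view that the tool is a pre-screening aid rather than a replacement for human judgement.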

Acuña says the tool could help to pre-screen large numbers of journals, but adds that “human professionals should do the final analysis”. The researchers’ AI screening system isn’t publicly accessible, but they hope to make it available to universities and publishing companies soon.
