
Ultra-rich media owners are tightening their grip on democracy. It’s time to wrest our power back | Robert Reich

13 November 2025 at 12:00

The Guardian has no billionaire or corporate owner: funded by readers, our fierce independence is guaranteed

The richest man on Earth owns X.

The family of the second-richest man owns Paramount, which owns CBS, and could soon own Warner Bros, which owns CNN.


© Photograph: JW Hendricks/NurPhoto/Shutterstock



Physicists discuss the future of machine learning and artificial intelligence

12 November 2025 at 16:00
Pierre Gentine, Jimeng Sun, Jay Lee and Kyle Cranmer
Looking ahead to the future of machine learning: (clockwise from top left) Jay Lee, Jimeng Sun, Pierre Gentine and Kyle Cranmer.

IOP Publishing’s Machine Learning series is the world’s first open-access journal series dedicated to the application and development of machine learning (ML) and artificial intelligence (AI) for the sciences.

The series began with Machine Learning: Science and Technology, launched in 2019, which bridges applications of and advances in machine learning across the sciences. Machine Learning: Earth is dedicated to the application of ML and AI across all areas of the Earth, environmental and climate sciences; Machine Learning: Health covers the healthcare, medical, biological, clinical and health sciences; and Machine Learning: Engineering focuses on applying AI and non-traditional machine learning to the most complex engineering challenges.

Here, the editors-in-chief (EiC) of the four journals discuss the growing importance of machine learning and their plans for the future.

Kyle Cranmer is a particle physicist and data scientist at the University of Wisconsin-Madison and is EiC of Machine Learning: Science and Technology (MLST). Pierre Gentine is a geophysicist at Columbia University and is EiC of Machine Learning: Earth. Jimeng Sun is a biophysicist at the University of Illinois at Urbana-Champaign and is EiC of Machine Learning: Health. Mechanical engineer Jay Lee is from the University of Maryland and is EiC of Machine Learning: Engineering.

To what do you attribute the huge growth over the past decade in machine-learning research and its applications?

Kyle Cranmer (KC): It is due to a convergence of multiple factors. The initial success of deep learning was driven largely by benchmark datasets, advances in computing with graphics processing units, and some clever algorithmic tricks. Since then, we’ve seen a huge investment in powerful, easy-to-use tools that have dramatically lowered the barrier to entry and driven extraordinary progress.

Pierre Gentine (PG): Machine learning has been transforming many fields of physics: it can accelerate physics simulations, better handle diverse sources of data (multimodality), and help us make better predictions.

Jimeng Sun (JS): Over the past decade, we have seen machine learning models consistently reach — and in some cases surpass — human-level performance on real-world tasks. This is not just in benchmark datasets, but in areas that directly impact operational efficiency and accuracy, such as medical imaging interpretation, clinical documentation, and speech recognition. Once ML proved it could perform reliably at human levels, many domains recognized its potential to transform labour-intensive processes.

Jay Lee (JL): Traditionally, ML growth has rested on the development of three elements: algorithms, big data, and computing. The past decade’s growth in ML research is due to the perfect storm of abundant data, powerful computing, open tools, commercial incentives, and groundbreaking discoveries — all occurring in a highly interconnected global ecosystem.

What areas of machine learning excite you the most and why?

KC: The advances in generative AI and self-supervised learning are very exciting. By generative AI, I don’t mean Large Language Models — though those are exciting too — but probabilistic ML models that can be useful in a huge number of scientific applications. The advances in self-supervised learning also allow us to imagine potential uses of ML beyond well-understood supervised learning tasks.

PG: I am very interested in the use of ML for climate simulations and fluid dynamics simulations.

JS: The emergence of agentic systems in healthcare — AI systems that can reason, plan, and interact with humans to accomplish complex goals. A compelling example is in clinical trial workflow optimization. An agentic AI could help coordinate protocol development, automatically identify eligible patients, monitor recruitment progress, and even suggest adaptive changes to trial design based on interim data. This isn’t about replacing human judgment — it’s about creating intelligent collaborators that amplify expertise, improve efficiency, and ultimately accelerate the path from research to patient benefit.

JL: One exciting area is generative and multimodal ML — integrating text, images, video, and more — which is transforming human–AI interaction, robotics, and autonomous systems. Equally exciting is applying ML to non-traditional domains like semiconductor fabs, smart grids, and electric vehicles, where complex engineering systems demand new kinds of intelligence.

What vision do you have for your journal in the coming years?

KC: The need for a venue to propagate advances in AI/ML in the sciences is clear. The large AI conferences are under stress, and their review system is designed to be a filter not a mechanism to ensure quality, improve clarity and disseminate progress. The large AI conferences also aren’t very welcoming to user-inspired research, often casting that work as purely applied. Similarly, innovation in AI/ML often takes a back seat in physics journals, which slows the propagation of those ideas to other fields. My vision for MLST is to fill this gap and nurture the community that embraces AI/ML research inspired by the physical sciences.

PG: I hope we can demonstrate that machine learning is more than a nice tool but that it can play a fundamental role in physics and Earth sciences, especially when it comes to better simulating and understanding the world.

JS: I see Machine Learning: Health becoming the premier venue for rigorous ML–health research — a place where technical novelty and genuine clinical impact go hand in hand. We want to publish work that not only advances algorithms but also demonstrates clear value in improving health outcomes and healthcare delivery. Equally important, we aim to champion open and reproducible science. That means encouraging authors to share code, data, and benchmarks whenever possible, and setting high standards for transparency in methods and reporting. By doing so, we can accelerate the pace of discovery, foster trust in AI systems, and ensure that our field’s breakthroughs are accessible to — and verifiable by — the global community.

JL:  Machine Learning: Engineering envisions becoming the global platform where ML meets engineering. By fostering collaboration, ensuring rigor and interpretability, and focusing on real-world impact, we aim to redefine how AI addresses humanity’s most complex engineering challenges.

The post Physicists discuss the future of machine learning and artificial intelligence appeared first on Physics World.

Peer review in the age of artificial intelligence

18 September 2025 at 13:00

It is Peer Review Week and the theme for 2025 is “Rethinking Peer Review in the AI Era”. This is not surprising given the rapid rise in the use and capabilities of artificial intelligence. However, views on AI are deeply polarized for reasons that span its legality, efficacy and even its morality.

A recent survey done by IOP Publishing – the scientific publisher that brings you Physics World – reveals that physicists who do peer review are polarized regarding whether AI should be used in the process.

IOPP’s Laura Feetham-Walker is lead author of AI and Peer Review 2025, which describes the survey and analyses its results. She joins me in this episode of the Physics World Weekly podcast in a conversation that explores reviewers’ perceptions of AI and their views on how it should, or shouldn’t, be used in peer review.


Artificial intelligence could help detect ‘predatory’ journals

17 September 2025 at 11:42

Artificial intelligence (AI) could help sniff out questionable open-access publications that are more interested in profit than scientific integrity. That is according to an analysis of 15,000 scientific journals by an international team of computer scientists. They find that dubious journals tend to publish an unusually high number of articles and feature authors who have many affiliations and frequently self-cite (Sci. Adv. 11, eadt2792).

Open access removes the requirement for traditional subscriptions. Articles are instead made immediately and freely available for anyone to read, with publication costs covered by the authors, who pay an article-processing charge.

But as the popularity of open-access journals has risen, there has been a growth in “predatory” journals that exploit the open-access model by making scientists pay publication fees without a proper peer-review process in place.

To build an AI-based method for distinguishing legitimate from questionable journals, Daniel Acuña, a computer scientist at the University of Colorado Boulder, and colleagues used the Directory of Open Access Journals (DOAJ) – an online, community-curated index of open-access journals.

The researchers trained their machine-learning model on 12,869 journals indexed on the DOAJ and 2536 journals that have been removed from the DOAJ due to questionable practices that violate the community’s listing criteria. The team then tested the tool on 15,191 journals listed by Unpaywall, an online directory of free research articles.
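The training setup described above can be sketched in miniature. The study’s actual features and model are not public, so the feature names, the toy values, and the simple nearest-centroid classifier below are illustrative assumptions only, standing in for whatever model the team trained on the DOAJ-derived labels:

```python
# Hypothetical sketch: classify journals from journal-level features.
# Features and classifier here are illustrative assumptions, not the
# study's actual method.

def centroid(rows):
    """Component-wise mean of a list of feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Toy journal-level features: [articles/year, mean affiliations per
# author, self-citation rate, mean last-author h-index]
legit = [[60, 1.5, 0.05, 20], [80, 1.8, 0.04, 25], [45, 1.2, 0.06, 18]]
questionable = [[400, 3.0, 0.30, 4], [550, 3.5, 0.40, 3], [350, 2.8, 0.25, 5]]

c_legit, c_quest = centroid(legit), centroid(questionable)

def classify(journal):
    """Label a journal 'questionable' if it lies closer to the centroid
    of known-questionable journals than to the legitimate one."""
    return ("questionable"
            if distance(journal, c_quest) < distance(journal, c_legit)
            else "legitimate")

print(classify([500, 3.2, 0.35, 2]))  # → questionable
```

In a real system the features would be standardised first (here the raw articles-per-year count dominates the distance), and a proper learned classifier would replace the centroid rule.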

To identify questionable journals, the AI system analyses journals’ bibliometric information and the content and design of their websites, scrutinising details such as the affiliations of editorial board members and the average author h-index – a metric that quantifies a researcher’s productivity and impact.
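The h-index mentioned above is simple to compute: it is the largest h such that the author has at least h papers with at least h citations each. A minimal sketch:

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i  # the i-th most-cited paper still has >= i citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # → 4
```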

The AI model flagged 1437 journals as questionable; on review, the researchers concluded that 1092 of these were genuinely questionable while 345 were false positives.
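From these figures one can derive the tool’s precision on the flagged set — a figure computed here from the counts above, not one stated in the study:

```python
flagged = 1437          # journals the model flagged as questionable
true_positives = 1092   # judged genuinely questionable on review
false_positives = 345   # flagged but apparently legitimate

precision = true_positives / flagged
print(f"precision on flagged set: {precision:.1%}")  # → 76.0%
```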

They also identified around 1780 problematic journals that the AI screening failed to flag. According to the study authors, their analysis shows that problematic publishing practices leave detectable patterns in citation behaviour such as the last authors having a low h-index together with a high rate of self-citation.

Acuña adds that the tool could help to pre-screen large numbers of journals, cautioning, however, that “human professionals should do the final analysis”. The researchers’ AI screening system isn’t publicly accessible, but they hope to make it available to universities and publishing companies soon.

