
The power of a poster

28 January 2026 at 12:00

Most researchers know the disappointment of submitting an abstract to give a conference lecture, only to find that it has been accepted as a poster presentation instead. If this has been your experience, I’m here to tell you that you need to rethink the value of a good poster.

For years, I pestered my university to erect a notice board outside my office so that I could showcase my group’s recent research posters. Each time, for reasons of cost, my request was unsuccessful. At the same time, I would see similar boards placed outside the offices of more senior and better-funded researchers in my university. I voiced my frustrations to a mentor whose advice was, “It’s better to seek forgiveness than permission.” So, since I couldn’t afford to buy a notice board, I simply used drawing pins to mount some unauthorized posters on the wall beside my office door.

Some weeks later, I rounded the corner to my office corridor to find the head porter standing with a group of visitors gathered around my posters. He was telling them all about my research using solar energy to disinfect contaminated drinking water in disadvantaged communities in Sub-Saharan Africa. Unintentionally, my illegal posters had been subsumed into the head porter’s official tour that he frequently gave to visitors.

The group moved on but one man stayed behind, examining the poster very closely. I asked him if he had any questions. “No, thanks,” he said, “I’m not actually with the tour, I’m just waiting to visit someone further up the corridor and they’re not ready for me yet. Your research in Africa is very interesting.” We chatted for a while about the challenges of working in resource-poor environments. He seemed quite knowledgeable on the topic but soon left for his meeting.

A few days later while clearing my e-mail junk folder I spotted an e-mail from an Asian “philanthropist” offering me €20,000 towards my research. To collect the money, all I had to do was send him my bank account details. I paused for a moment to admire the novelty and elegance of this new e-mail scam before deleting it. Two days later I received a second e-mail from the same source asking why I hadn’t responded to their first generous offer. While admiring their persistence, I resisted the urge to respond by asking them to stop wasting their time and mine, and instead just deleted it.

So, you can imagine my surprise when the following Monday morning I received a phone call from the university deputy vice-chancellor inviting me to pop up for a quick chat. On arrival, he wasted no time before asking why I had been so foolish as to ignore repeated offers of research funding from one of the college’s most generous benefactors. And that is how I learned that those e-mails from the Asian philanthropist weren’t bogus.

The gentleman that I’d chatted with outside my office was indeed a wealthy philanthropic funder who had been visiting our university. Having retrieved the e-mails from my deleted items folder, I re-engaged with him and subsequently received €20,000 to install 10,000-litre harvested-rainwater tanks in as many primary schools in rural Uganda as the money would stretch to.

Kevin McGuigan
Secret to success Kevin McGuigan discovered that one research poster can lead to generous funding contributions. (Courtesy: Antonio Jaen Osuna)

About six months later, I presented the benefactor with a full report accounting for the funding expenditure, replete with photos of harvested-rainwater tanks installed in 10 primary schools, with their very happy new owners standing in the foreground. Since you miss 100% of the chances you don’t take, I decided I should push my luck and added a “wish list” of other research items that the philanthropist might consider funding.

The list started small and grew steadily ambitious. I asked for funds for more tanks in other schools, a travel bursary, PhD registration fees, student stipends and so on. All told, the list came to a total of several hundred thousand euros, but I emphasized that they had been very generous so I would be delighted to receive funding for any one of the listed items and, even if nothing was funded, I was still very grateful for everything he had already done. The following week my generous patron deposited a six-figure-euro sum into my university research account with instructions that it be used as I saw fit for my research purposes, “under the supervision of your university finance office”.

In my career I have co-ordinated several large-budget, multi-partner, interdisciplinary, international research projects. In each case, that money was hard-earned, needing at least six months and many sleepless nights to prepare the grant submission. It still amuses me that I garnered such a large sum on the back of one research poster, one 10-minute chat and fewer than six e-mails.

So, if you have learned nothing else from this story, please don’t underestimate the power of a strategically placed and impactful poster describing your research. You never know with whom it may resonate and down which road it might lead you.


ATLAS narrows the hunt for dark matter

28 January 2026 at 10:04

Researchers at the ATLAS collaboration have been searching for signs of new particles in the dark sector of the universe, a hidden realm that could help explain dark matter. In some theories, this sector contains dark quarks (fundamental particles) that undergo a shower and hadronization process, forming long-lived dark mesons (dark quarks and antiquarks bound by a new dark strong force), which eventually decay into ordinary particles. These decays would appear in the detector as unusual “emerging jets”: bursts of particles originating from displaced vertices relative to the primary collision point.

Using 51.8 fb⁻¹ of proton–proton collision data at 13.6 TeV collected in 2022–2023, the ATLAS team looked for events containing two such emerging jets. They explored two possible production mechanisms: a vector mediator (Z′) produced in the s‑channel and a scalar mediator (Φ) exchanged in the t‑channel. The analysis combined two complementary strategies. The first, a cut-based strategy relying on high-level jet observables (track-, vertex- and jet-substructure-based selections), allows straightforward reinterpretation for alternative theoretical models. The second, a machine-learning approach, employs a per-jet tagger built on a transformer architecture and trained on low-level tracking variables to discriminate emerging jets from Standard Model jets, maximizing sensitivity for the specific models studied.
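
As a concrete illustration of the second strategy, here is a minimal sketch of a per-jet transformer tagger that scores a variable-length set of tracks. It is not the ATLAS tagger: the track features, dimensions and pooling choice are all illustrative assumptions.

```python
# Toy per-jet tagger (illustrative only, not the ATLAS implementation).
# Each jet is a padded set of tracks; the model outputs an "emerging jet" score.
import torch
import torch.nn as nn

class ToyJetTagger(nn.Module):
    def __init__(self, n_track_features=6, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(n_track_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)   # logit of the emerging-jet score

    def forward(self, tracks, pad_mask):
        # tracks: (batch, max_tracks, n_track_features), e.g. pT, eta, phi, d0, z0, ...
        # pad_mask: (batch, max_tracks), True where the slot is padding
        x = self.encoder(self.embed(tracks), src_key_padding_mask=pad_mask)
        x = x.masked_fill(pad_mask.unsqueeze(-1), 0.0).sum(1)      # zero the padding, then
        x = x / (~pad_mask).sum(1, keepdim=True)                   # mean-pool over real tracks
        return self.head(x).squeeze(-1)

# Score a dummy batch of 8 jets, each padded to 20 tracks with 6 features
tagger = ToyJetTagger()
scores = torch.sigmoid(tagger(torch.randn(8, 20, 6),
                              torch.zeros(8, 20, dtype=torch.bool)))
print(scores)
```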

No emerging‑jet signal excess was found, but the search set the first direct limits on emerging‑jet production via a Z′ mediator and the first constraints on t‑channel Φ production. Depending on the model assumptions, Z′ masses up to around 2.5 TeV and Φ masses up to about 1.35 TeV are excluded. These results significantly narrow the space in which dark sector particles could exist and form part of a broader ATLAS programme to probe dark quantum chromodynamics. The work sharpens future searches for dark matter and advances our understanding of how a dark sector might behave.

Read the full article

Search for emerging jets in pp collisions at √s = 13.6 TeV with the ATLAS experiment

The ATLAS Collaboration 2025 Rep. Prog. Phys. 88 097801

Do you want to learn more about this topic?

Dark matter and dark energy interactions: theoretical challenges, cosmological implications and observational signatures by B Wang, E Abdalla, F Atrio-Barandela and D Pavón (2016)


How do bacteria produce entropy?

28 January 2026 at 10:02

Active matter is matter composed of large numbers of active constituents, each of which consumes chemical energy in order to move or to exert mechanical forces.

This type of matter is commonly found in biology: swimming bacteria and migrating cells are both classic examples. In addition, a wide range of synthetic systems, such as active colloids and robotic swarms, also fall under this umbrella.

Active matter has therefore been the focus of much research over the past decade, unveiling many surprising theoretical features and suggesting a plethora of applications.

Perhaps most importantly, these systems’ ability to perform work leads to sustained non-equilibrium behaviour. This is distinctly different from that of relaxing equilibrium thermodynamic systems, commonly found in other areas of physics.

The concept of entropy production is often used to quantify this difference and to calculate how much useful work can be performed. If we want to harvest and utilise this work, however, we need to understand the small-scale dynamics of the system. And it turns out this is rather complicated.

One way to calculate entropy production is through field theory, the workhorse of statistical mechanics. Traditional field theories simplify the system by smoothing out details, which works well for predicting densities and correlations. However, these approximations often ignore the individual particle nature, leading to incorrect results for entropy production.

The new paper details a substantial improvement on this method. By making use of Doi-Peliti field theory, the authors are able to keep track of microscopic particle dynamics, including reactions and interactions.

The approach starts from the Fokker-Planck equation and provides a systematic way to calculate entropy production from first principles. It can be extended to include interactions between particles and produces general, compact formulas that work for a wide range of systems. These formulas are practical because they can be applied to both simulations and experiments.

The authors demonstrated their method with numerous examples, including systems of active Brownian particles, showing its broad usefulness. The big challenge going forward, though, is to extend their framework to non-Markovian systems – ones where future states depend not only on the present state but also on past states.
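
The paper’s formulas come from the Doi-Peliti construction itself, but the benchmark case can be checked with a much simpler, trajectory-level estimate. The sketch below simulates a single free two-dimensional active Brownian particle (unit mobility, kT = 1) and accumulates the medium entropy production as active force times displacement; in steady state the rate should approach v₀²/Dt. All parameter values are arbitrary choices for illustration, and this is not the field-theory method described above.

```python
# Trajectory-level estimate of entropy production for one free active Brownian
# particle (2D, overdamped, unit mobility, k_B T = 1). A standard
# stochastic-thermodynamics benchmark, not the Doi-Peliti calculation itself.
import numpy as np

rng = np.random.default_rng(0)
v0, Dt, Dr = 1.0, 0.5, 1.0        # self-propulsion speed, translational / rotational diffusion
dt, n_steps = 1e-3, 200_000

theta, sigma = 0.0, 0.0           # orientation angle, accumulated entropy (units of k_B)
for _ in range(n_steps):
    e = np.array([np.cos(theta), np.sin(theta)])              # propulsion direction
    dx = v0 * e * dt + np.sqrt(2 * Dt * dt) * rng.standard_normal(2)
    # active force (v0/Dt) e dotted with the displacement; rotational and
    # translational noises are independent, so the pre-point value of e suffices
    sigma += (v0 / Dt) * (e @ dx)
    theta += np.sqrt(2 * Dr * dt) * rng.standard_normal()

print("estimated entropy production rate:", sigma / (n_steps * dt))
print("analytic steady-state rate v0^2/Dt:", v0**2 / Dt)
```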

Read the full article

Field theories of active particle systems and their entropy production

G Pruessner and R Garcia-Millan 2025 Rep. Prog. Phys. 88 097601


Einstein’s recoiling slit experiment realized at the quantum limit

28 January 2026 at 10:00

Quantum mechanics famously limits how much information about a system can be accessed at once in a single experiment. The more precisely a particle’s path can be determined, the less visible its interference pattern becomes. This trade-off, known as Bohr’s complementarity principle, has shaped our understanding of quantum physics for nearly a century. Now, researchers in China have brought one of the most famous thought experiments surrounding this principle to the quantum limit, using a single atom as a movable slit.

The thought experiment dates back to the 1927 Solvay Conference, where Albert Einstein proposed a modification of the double-slit experiment in which one of the slits could recoil. He argued that if a photon caused the slit to recoil as it passed through, then measuring that recoil might reveal which path the photon had taken without destroying the interference pattern. Conversely, Niels Bohr argued that any such recoil would entangle the photon with the slit, washing out the interference fringes.

For decades, this debate remained largely philosophical. The challenge was not about adding a detector or a label to track a photon’s path. Instead, the question was whether the “which-path” information could be stored in the motion of the slit itself. Until now, however, no physical slit was sensitive enough to register the momentum kick from a single photon.

A slit that kicks back

To detect the recoil from a single photon, the slit’s momentum uncertainty must be comparable to the photon’s momentum. For any ordinary macroscopic slit, its quantum fluctuations are significantly larger than the recoil, washing out the which-path information. To give a sense of scale, the authors note that even a 1 g object modelled as a 100 kHz oscillator (for example, a mirror on a spring) would have a ground-state momentum uncertainty of about 10⁻¹⁶ kg m s⁻¹, roughly 11 orders of magnitude larger than the momentum of an optical photon (approximately 10⁻²⁷ kg m s⁻¹).
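
Those scales are easy to check. Here is a minimal sketch, assuming the harmonic-oscillator ground-state spread Δp = √(ħmω/2) and a 780 nm photon (rubidium’s D2 line) standing in for “an optical photon”:

```python
# Back-of-envelope check of the momentum scales quoted above.
import numpy as np

hbar, h = 1.055e-34, 6.626e-34            # J s
m, omega = 1e-3, 2 * np.pi * 1e5          # 1 g mirror modelled as a 100 kHz oscillator
wavelength = 780e-9                       # assumed optical wavelength (Rb D2 line), m

dp_slit = np.sqrt(hbar * m * omega / 2)   # ground-state momentum uncertainty
p_photon = h / wavelength                 # photon momentum
print(f"slit   dp ~ {dp_slit:.1e} kg m/s")      # ~2e-16
print(f"photon p  ~ {p_photon:.1e} kg m/s")     # ~8e-28
print(f"ratio     ~ {dp_slit / p_photon:.0e}")  # ~11 orders of magnitude
```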

Experimental realization To perform Einstein’s thought experiment in the lab, the researchers used a single trapped atom as a movable slit. Photon paths become correlated with the atom’s motion, allowing researchers to probe the trade-off between interference and which-path information. (Courtesy: Y-C Zhang et al. Phys. Rev. Lett. 135 230202)

In their study, published in Physical Review Letters, Yu-Chen Zhang and colleagues from the University of Science and Technology of China overcame this obstacle by replacing the movable slit with a single rubidium atom held in an optical tweezer and cooled to its three-dimensional motional ground state. In this regime, the atom’s momentum uncertainty reaches the quantum limit, making the recoil from a single photon directly measurable.

Rather than using a conventional double-slit geometry, the researchers built an optical interferometer in which photons scattered off the trapped atom. By tuning the depth of this optical trap, the researchers were able to precisely control the atom’s intrinsic momentum uncertainty, effectively adjusting how “movable” the slit was.

Watching interference fade 

As the researchers decreased the atom’s momentum uncertainty, they observed a loss of interference in the scattered photons. Increasing the atom’s momentum uncertainty caused the interference to reappear.

This behaviour directly revealed the trade-off between interference and which-path information at the heart of the Einstein–Bohr debate. The researchers note that the loss of interference arose not from classical noise, but from entanglement between the photon and the atom’s motion.

“The main challenge was matching the slit’s momentum uncertainty to that of a single photon,” says corresponding author Jian-Wei Pan. “For macroscopic objects, momentum fluctuations are far too large – they completely hide the recoil. Using a single atom cooled to its motional ground state allows us to reach the fundamental quantum limit.”

Maintaining interferometric phase stability was equally demanding. The team used active phase stabilization with a reference laser to keep the optical path length stable to within a few nanometres (roughly 3 nm) for over 10 h.

Beyond settling a historical argument, the experiment offers a clean demonstration of how entanglement plays a key role in Bohr’s complementarity principle. As Pan explains, the results suggest that “entanglement in the momentum degree-of-freedom is the deeper reason behind the loss of interference when which-path information becomes available”.

This experiment opens the door to exploring quantum measurement in a new regime. By treating the slit itself as a quantum object, future studies could probe how entanglement emerges between light and matter. Additionally, the same set-up could be used to gradually increase the mass of the slit, providing a new way to study the transition from quantum to classical behaviour.


European Space Agency unveils first images from Earth-observation ‘sounder’ satellite

27 January 2026 at 19:26

The European Space Agency has released the first images from the Meteosat Third Generation-Sounder (MTG-S) satellite. They show variations in temperature and humidity over Europe and northern Africa in unprecedented detail, with further data from the mission set to improve weather-forecasting models and measurements of air quality over Europe.

Launched on 1 July 2025 from the Kennedy Space Center in Florida aboard a SpaceX Falcon 9 rocket, MTG-S operates from a geostationary orbit about 36,000 km above Earth’s surface and is able to provide coverage of Europe and part of northern Africa on a 15-minute repeat cycle.

The satellite carries a hyperspectral sounding instrument that uses interferometry to capture data on temperature and humidity as well as being able to measure wind and trace gases in the atmosphere. It can scan nearly 2,000 thermal infrared wavelengths every 30 minutes.

The data will eventually be used to generate 3D maps of the atmosphere and help improve the accuracy of weather forecasting, especially for rapidly evolving storms.

The “temperature” image, above, was taken in November 2025 and shows heat (red) from the African continent, while a dark blue weather front covers Spain and Portugal.

The “humidity” image, below, was captured using the sounder’s medium-wave infrared channel. Blue colours represent regions in the atmosphere with higher humidity, while red colours correspond to lower humidity.

Whole-Earth image showing cloud formation
(Courtesy: EUMETSAT)

“Seeing the first infrared sounder images from MTG-S really brings this mission and its potential to life,” notes Simonetta Cheli, ESA’s director of Earth observation programmes. “We expect data from this mission to change the way we forecast severe storms over Europe – and this is very exciting for communities and citizens, as well as for meteorologists and climatologists.”

ESA is expected to launch a second Meteosat Third Generation-Imaging satellite later this year following the launch of the first one – MTG-I1 – in December 2022.


Uranus and Neptune may be more rocky than icy, say astrophysicists

27 January 2026 at 14:00

Our usual picture of Uranus and Neptune as “ice giant” planets may not be entirely correct. According to new work by scientists at the University of Zürich (UZH), Switzerland, the outermost planets in our solar system may in fact be rock-rich worlds with complex internal structures – something that could have major implications for our understanding of how these planets formed and evolved.

Within our solar system, planets fall into three categories based on their internal composition. Mercury, Venus, Earth and Mars are deemed terrestrial rocky planets; Jupiter and Saturn are gas giants; and Uranus and Neptune are ice giants.

An agnostic approach

The new work, which was led by PhD student Luca Morf in UZH’s astrophysics department, challenges this last categorization by numerically simulating the two planets’ interiors as a mixture of rock, water, hydrogen and helium. Morf explains that this modelling framework is initially “agnostic” – meaning unbiased – about what the density profiles of the planets’ interiors should be. “We then calculate the gravitational fields of the planets so that they match with observational measurements to infer a possible composition,” he says.

This process, Morf continues, is then repeated and refined to ensure that each model satisfies several criteria. The first criterion is that the planet should be in hydrostatic equilibrium, meaning that its internal pressure is enough to counteract its gravity and keep it stable. The second is that the planet should have the gravitational moments observed in spacecraft data. These moments describe the gravitational field of a planet, which is complex because planets are not perfect spheres.

The final criterion is that the modelled planets need to be thermodynamically and compositionally consistent with known physics. “For example, a simulation of the planets’ interiors must obey equations of state, which dictate how materials behave under given pressure and temperature conditions,” Morf explains.

After each iteration, the researchers adjust the density profile of each planet and test it to ensure that the model continues to adhere to the three criteria. “We wanted to bridge the gap between existing physics-based models that are overly constrained and empirical approaches that are too simplified,” Morf explains. Avoiding strict initial assumptions about composition, he says, “lets the physics and data guide the solution [and] allows us to probe a larger parameter space.”
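
For reference, the first two criteria can be written compactly in their standard textbook forms (these are not equations quoted from the paper, and the hydrostatic condition below is the simplest, non-rotating version):

```latex
\frac{\mathrm{d}P}{\mathrm{d}r} = -\frac{G\,m(r)\,\rho(r)}{r^{2}},
\qquad
U(r,\theta) = -\frac{GM}{r}\left[\,1 - \sum_{n\ge 1}\left(\frac{R_{\mathrm{eq}}}{r}\right)^{2n} J_{2n}\,P_{2n}(\cos\theta)\right],
```

where P is the pressure, ρ the density, m(r) the mass enclosed within radius r, R_eq the equatorial radius, P_2n the Legendre polynomials and J_2n the gravitational moments; a candidate density profile is accepted only if the J_2n it implies match those measured by spacecraft.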

A wide range of possible structures

Based on their models, the UZH astrophysicists concluded that the interiors of Uranus and Neptune could have a wide range of possible structures, encompassing both water-rich and rock-rich configurations. More specifically, their calculations yield rock-to-water ratios of 0.04–3.92 for Uranus and 0.20–1.78 for Neptune.

Diagrams showing possible "slices" of Uranus and Neptune. Four slices are shown, two for each planet. Each slice is filled with brown areas representing silicon dioxide rock and blue areas representing water ice, plus smaller areas of tan colouring for hydrogen-helium mixtures and (for Neptune only) grey areas representing iron. Two slices are mostly blue, while the other two contain large fractions of brown.
Slices of different pies: According to models developed with “agnostic” initial assumptions, Uranus (top) and Neptune (bottom) could be composed mainly of water ice (blue areas), but they could also contain substantial amounts of silicon dioxide rock (brown areas). (Courtesy: Luca Morf)

The models, which are detailed in Astronomy and Astrophysics, also contain convective regions with ionic water pockets. The presence of such pockets could explain the fact that Uranus and Neptune, unlike Earth, have more than two magnetic poles, as the pockets would generate their own local magnetic dynamos.

Traditional “ice giant” label may be too simple

Overall, the new findings suggest that the traditional “ice giant” label may oversimplify the true nature of Uranus and Neptune, Morf tells Physics World. Instead, these planets could have complex internal structures with compositional gradients and different heat transport mechanisms. Though much uncertainty remains, Morf stresses that Uranus and Neptune – and, by extension, similar intermediate-class planets that may exist in other solar systems – are so poorly understood that any new information about their internal structure is valuable.

A dedicated space mission to these outer planets would yield more accurate measurements of the planets’ gravitational and magnetic fields, enabling scientists to refine the limited existing observational data. In the meantime, the UZH researchers are looking for more solutions for the possible interiors of Uranus and Neptune and improving their models to account for additional constraints, such as atmospheric conditions. “Our work will also guide laboratory and theoretical studies on the way materials behave in general at high temperatures and pressures,” Morf says.


String-theory concept boosts understanding of biological networks

27 January 2026 at 10:35

Many biological networks – including blood vessels and plant roots – are not organized to minimize total length, as long assumed. Instead, their geometry follows a principle of surface minimization, following a rule that is also prevalent in string theory. That is the conclusion of physicists in the US, who have created a unifying framework that explains structural features long seen in real networks but poorly captured by traditional mathematical models.

Biological transport and communication networks have fascinated scientists for decades. Neurons branch to form synapses, blood vessels split to supply tissues, and plant roots spread through soil. Since the mid-20th century, many researchers believed that evolution favours networks that minimize total length or volume.

“There is a longstanding hypothesis, going back to Cecil Murray from the 1940s, that many biological networks are optimized for their length and volume,” Albert-László Barabási of Northeastern University explains. “That is, biological networks, like the brain and the vascular systems, are built to achieve their goals with the minimal material needs.” Until recently, however, it had been difficult to characterize the complicated nature of biological networks.

Now, advances in imaging have given Barabási and colleagues a detailed 3D picture of real physical networks, from individual neurons to entire vascular systems. With these new data in hand, the researchers found that previous theories are unable to describe real networks in quantitative terms.

From graphs to surfaces

To remedy this, the team defined the problem in terms of physical networks, systems whose nodes and links have finite thickness and occupy space. Rather than treating them as abstract graphs made of idealized edges, the team modelled them as geometrical objects embedded in 3D space.

To do this, the researchers turned to an unexpected mathematical tool. “Our work relies on the framework of covariant closed string field theory, developed by Barton Zwiebach and others in the 1980s,” says team member Xiangyi Meng at Rensselaer Polytechnic Institute. This framework provides a correspondence between network-like graphs and smooth surfaces.

Unlike string theory, their approach is entirely classical. “These surfaces, obtained in the absence of quantum fluctuations, are precisely the minimal surfaces we seek,” Meng says. No quantum mechanics, supersymmetry, or exotic string-theory ingredients are required. “Those aspects were introduced mainly to make string theory quantum and thus do not apply to our current context.”

Using this framework, the team analysed a wide range of biological systems. “We studied human and fruit fly neurons, blood vessels, trees, corals, and plants like Arabidopsis,” says Meng. Across all these cases, a consistent pattern emerged: the geometry of the networks is better predicted by minimizing surface area rather than total length.

Complex junctions

One of the most striking outcomes of the surface-minimization framework is its ability to explain structural features that previous models cannot. Traditional length-based theories typically predict simple Y-shaped bifurcations, where one branch splits into two. Real networks, however, often display far richer geometries.

“While traditional models are limited to simple bifurcations, our framework predicts the existence of higher-order junctions and ‘orthogonal sprouts’,” explains Meng.

These include three- or four-way splits and perpendicular, dead-end offshoots. Under a surface-based principle, such features arise naturally and allow neurons to form synapses using less membrane material overall and enable plant roots to probe their environment more effectively.

Ginestra Bianconi of the UK’s Queen Mary University of London says that the key result of the new study is the demonstration that “physical networks such as the brain or vascular networks are not wired according to a principle of minimization of edge length, but rather that their geometry follows a principle of surface minimization.”

Bianconi, who was not involved in the study, also highlights the interdisciplinary leap of invoking ideas from string theory: “This is a beautiful demonstration of how basic research works.”

Interdisciplinary leap

The team emphasizes that their work is not immediately technological. “This is fundamental research, but we know that such research may one day lead to practical applications,” Barabási says. In the near term, he expects the strongest impact in neuroscience and vascular biology, where understanding wiring and morphology is essential.

Bianconi agrees that important questions remain. “The next step would be to understand whether this new principle can help us understand brain function or have an impact on our understanding of brain diseases,” she says. Surface optimization could, for example, offer new ways to interpret structural changes observed in neurological disorders.

Looking further ahead, the framework may influence the design of engineered systems. “Physical networks are also relevant for new materials systems, like metamaterials, who are also aiming to achieve functions at minimal cost,” Barabási notes. Meng points to network materials as a particularly promising area, where surface-based optimization could inspire new architectures with tailored mechanical or transport properties.

The research is described in Nature.


The secret life of TiO₂ in foams

26 January 2026 at 17:31

Porous carbon foams are an exciting area of research because they are lightweight, electrically conductive, and have extremely high surface areas. Coating these foams with TiO₂ makes them chemically active, enabling their use in energy storage devices, fuel cells, hydrogen production, CO₂‑reduction catalysts, photocatalysis, and thermal management systems. While many studies have examined the outer surfaces of coated foams, much less is known about how TiO₂ coatings behave deep inside the foam structure.

In this study, researchers deposited TiO₂ thin films onto carbon foams using magnetron sputtering and applied different bias voltages to control ion energy, which in turn affects coating density, crystal structure, thickness, and adhesion. They analysed both the outer surface and the interior of the foam using microscopy, particle‑transport simulations, and X‑ray techniques.

They found that the TiO₂ coating on the outer surface is dense, correctly composed and crystalline (mainly anatase with a small amount of rutile), making it ideal for catalytic and energy applications. They also discovered that although fewer particles reach deep inside the foam, those that do retain the same energy, meaning particle quantity decreases with depth but particle energy does not. Because devices like batteries and supercapacitors rely on uniform coatings, variations in thickness or structure inside the foam can lead to poorer performance and faster degradation.

Overall, this research provides a much clearer understanding of how TiO₂ coatings grow inside complex 3D foams, showing how thickness, density, and crystal structure evolve with depth and how bias voltage can be used to tune these properties. By revealing how plasma particles move through the foam and validating models that predict coating behaviour, it enables the design of more reliable, higher‑performing foam‑based devices for energy and catalytic applications.

Read the full article

A comprehensive multi-scale study on the growth mechanisms of magnetron sputtered coatings on open-cell 3D foams

Loris Chavée et al 2026 Prog. Energy 8 015002

Do you want to learn more about this topic?

Advances in thermal conductivity for energy applications: a review by Qiye Zheng et al. (2021)


Laser processed thin NiO powder coating for durable anode-free batteries

26 January 2026 at 17:30

Traditional lithium‑ion batteries use a thick graphite anode, where lithium ions move in and out of the graphite during charging and discharging. In an anode‑free lithium metal battery, there is no anode material at the start, only a copper foil. During the first charge, lithium leaves the cathode and deposits onto the copper as pure lithium metal, effectively forming the anode. Removing the anode increases energy density dramatically by reducing weight, and it also simplifies and lowers the cost of manufacturing. Because of this, anode‑free batteries are considered to have major potential for next‑generation energy storage. However, a key challenge is that lithium deposits unevenly on bare copper, forming long needle‑like dendrites that can pierce the separator and cause short circuits. This uneven growth also leads to rapid capacity loss, so anode‑free batteries typically fail after only a few hundred cycles.

In this research, the scientists coated the copper foil with NiO powder and used a CO₂ laser (λ = 10.6 µm) in a rapid scanning mode to heat and transform the coating. The laser‑treated NiO becomes porous and strongly adherent to the copper, helping lithium spread out more evenly. The process is fast, energy‑efficient, and can be done in air. As a result, lithium ions move more easily across the surface, reducing dendrite formation. The exchange current density also doubled compared to bare copper, indicating better charge‑transfer behaviour. Overall, battery performance improved dramatically: the modified cells lasted 400 cycles at room temperature and 700 cycles at 40°C, compared with only 150 cycles for uncoated copper.

This simple, rapid, and scalable technique offers a powerful way to improve anode‑free lithium metal batteries, one of the most promising next‑generation battery technologies.

Read the full article

Microgradient patterned NiO coating on copper current collector for anode-free lithium metal battery

Supriya Kadam et al 2025 Prog. Energy 7 045003

Do you want to learn more about this topic?

Lithium aluminum alloy anodes in Li-ion rechargeable batteries: past developments, recent progress, and future prospects by Tianye Zheng and Steven T Boles (2023)


Planning a sustainable water future in the United States

26 January 2026 at 17:28

Within 45 years, water demand in the United States is predicted to double, while climate change is expected to put further strain on freshwater supplies, with 44% of the country already experiencing some form of drought. One way to expand water resources is desalination, where salt is removed from seawater or brackish groundwater to make clean, usable water. Brackish groundwater contains far less salt than seawater, making it much easier and cheaper to treat, and the United States has vast reserves of it in deep aquifers. The challenge is that desalination traditionally requires a lot of energy and produces a concentrated brine waste stream that is difficult and costly to dispose of. As a result, desalination currently provides only about 1% of the nation’s water supply, even though it is a major source of drinking water in regions such as the Middle East and North Africa.

Researchers Vasilis Fthenakis (left) and Zhuoran Zhang (right) from Columbia University, pictured at Nassau Point in Long Island (Courtesy: Zhuoran Zhang, Columbia University)

In this work, the researchers show how desalination of brackish groundwater can be made genuinely sustainable and economically viable for addressing the United States’ looming water shortages. A key part of the solution is zero‑liquid‑discharge, which avoids brine disposal by extracting more freshwater and recovering salts such as sodium, calcium, and magnesium for reuse. Crucially, the study demonstrates that when desalination is powered by low‑cost solar and wind energy, the overall process becomes far more affordable. By 2040, solar photovoltaics paired with optimised battery storage are projected to produce electricity at lower cost than the grid in the states facing the largest water deficits, making renewable‑powered desalination a competitive option.

The researchers also show that advanced technologies, such as high‑recovery reverse osmosis and crystallisation, can achieve zero‑liquid‑discharge without increasing costs, because the extra water and salt recovery offsets the expense of brine management. Their modelling indicates that a full renewable‑powered zero‑liquid‑discharge pathway can produce freshwater at an affordable cost, while reducing environmental impacts and avoiding brine disposal altogether. Taken together, this work outlines a realistic, sustainable pathway for large‑scale desalination in the United States, offering a credible strategy for securing future water supplies in increasingly water‑stressed regions.

Progress diagram adapted from article (Courtesy: Zhuoran Zhang, Columbia University)

Do you want to learn more about this topic?

Review of solar-enabled desalination and implications for zero-liquid-discharge applications by Vasilis Fthenakis et al. (2024)

 


Could silicon become the bedrock of quantum computers?

26 January 2026 at 17:00

Silicon, in the form of semiconductors, integrated chips and transistors, is the bedrock of modern classical computers – so much so that it lends its name to technological hubs around the world, beginning with Silicon Valley in the US. For quantum computers, the bedrock is still unknown, but a new platform developed by researchers in Australia suggests that silicon could play a role here, too.

Dubbed the 14|15 platform after the atomic numbers of its two elemental constituents, it combines a crystalline silicon substrate with qubits made from phosphorus atoms. By relying on only two types of atoms, team co-leader Michelle Simmons says the device “avoids the interfaces and complexities that plague so many multi-material platforms” while enabling “high-quality qubits with lower noise, simplicity of design and device stability”.

Boarding at platform 14|15

Quantum computers take registers of qubits, which store quantum information, and apply basic operations to them sequentially to execute algorithms. One of the primary challenges they face is scalability – that is, sustaining reliable, or high-fidelity, operations on an increasing number of qubits. Many of today’s platforms use only a small number of qubits, for which operations can be individually tuned for optimal performance. However, as the amount of hardware, complexity and noise increases, this hands-on approach becomes debilitating.

Silicon quantum processors may offer a solution. Writing in Nature, Simmons, Ludwik Kranz, and their team at Silicon Quantum Computing (a spinout from the University of New South Wales in Sydney) describe a system that uses the nuclei of phosphorus atoms as its primary qubit. Each nucleus behaves a little like a bar magnet with an orientation (north/south or up/down) that represents a 0 or 1.

These so-called spin qubits are particularly desirable because they exhibit relatively long coherence times, meaning information can be preserved for long enough to apply the numerous operations of an algorithm. Using monolithic, high-purity silicon as the substrate further benefits coherence since it reduces undesirable charge and magnetic noise arising from impurities and interfaces.

To make their quantum processor, the team deposited phosphorus atoms in small registers a few nanometres across. Within each register, the phosphorus nuclei do not interact enough to generate the entangled states required for a quantum computation. The team remedy this by loading each cluster of phosphorus atoms with an electron that is shared between the atoms. As a result, so-called hyperfine interactions – in which each nuclear spin and the shared electron interact like a pair of bar magnets – arise and provide the coupling needed to entangle the nuclear spins within each register.

By combining these interactions with control of individual nuclear spins, the researchers showed that they can generate Bell states (maximally entangled two-qubit states) between pairs of nuclei within a register with error rates as low as 0.5% – the lowest to date for semiconductor platforms.

Scaling through repulsion

The team’s next step was to connect multiple processors – a step that exponentially increases their combined capacity. To understand how, consider two quantum processors, one with n qubits and the other with m qubits. Isolated from one another, they can collectively represent at most 2ⁿ + 2ᵐ states. Once they are entangled, however, they can represent 2ⁿ⁺ᵐ states; for two four-qubit registers, for example, that is a jump from 16 + 16 = 32 states to 256.

Simmons says that silicon quantum processors offer an inherent advantage in scaling, too. Generating numerous registers on a single chip and using “naturally occurring” qubits, she notes, reduces their need for extraneous confinement gates and electronics as they scale.

The researchers showcased these scaling capabilities by entangling a register of four phosphorus atoms with a register of five, separated by 13 nm. The entanglement of these registers is mediated by the electron-exchange interaction, a phenomenon arising from the combination of Pauli’s exclusion principle and Coulomb repulsion when electrons are confined in a small region. By leveraging this and all the other interactions and controls in their toolkit, the researchers generated entanglement among eight data qubits across the two registers.

Retaining such high-quality qubits and individual control of them despite their high density demonstrates the scaling potential of the platform. Future avenues of exploration include increasing the size of 2D arrays of registers to increase the number of qubits, but Simmons says the rest is “top secret”, adding “the world will know soon enough”.


Is our embrace of AI naïve and could it lead to an environmental disaster?

26 January 2026 at 12:00

According to today’s leading experts in artificial intelligence (AI), this new technology is a danger to civilization. A statement on AI risk published in 2023 by the US non-profit Center for AI Safety warned that mitigating the risk of extinction from AI must now be “a global priority”, comparing it to other societal-scale dangers such as pandemics and nuclear war. It was signed by more than 600 people, including the winner of the 2024 Nobel Prize for Physics and so-called “Godfather of AI” Geoffrey Hinton. In a speech at the Nobel banquet after being awarded the prize, Hinton noted that AI may be used “to create terrible new viruses and horrendous lethal weapons that decide by themselves who to kill or maim”.

Despite signing the statement, Sam Altman of OpenAI, the firm behind ChatGPT, has said that the company’s explicit ambition is to create artificial general intelligence (AGI) within the next few years, to “win the AI-race”. AGI is predicted to surpass human cognitive capabilities for almost all tasks, but the real danger is if or when AGI is used to generate more powerful versions of itself. Sometimes called “superintelligence”, this would be impossible to control. Companies do not want any regulation of AI and their business model is for AGI to replace most employees at all levels. This is how firms are expected to benefit from AI, since wages are most companies’ biggest expense.

AI, to me, is not about saving the world, but about a handful of people wanting to make enormous amounts of money from it. No-one knows what internal mechanism makes even today’s AI work – just as one cannot find out what you think from how the neurons in your brain are firing. If we don’t even understand today’s AI models, how are we going to understand – and control – the more powerful models that already exist or are planned in the near future?

AI has some practical benefits but too often is put to meaningless, sometimes downright harmful, uses such as cheating your way through school or creating disinformation and fake videos online. What’s more, an online search with the help of AI requires at least 10 times as much energy as a search without it. AI already uses 5% of all electricity in the US, and by 2028 this figure is expected to reach 15%, equivalent to more than a quarter of all US households’ electricity consumption. AI data servers are also about 50% more carbon intensive than the rest of the US’s electricity supply.

Those energy needs are why some tech companies are building AI data centres – often under confidential, opaque agreements – very quickly for fear of losing market share. Indeed, the vast majority of those centres are powered by fossil-fuel energy sources – completely contrary to the Paris Agreement to limit global warming. We must wisely allocate Earth’s strictly limited resources, with what is wasted on AI instead going towards vital things.

To solve the climate crisis, there is definitely no need for AI. All the solutions have already been known for decades: phasing out fossil fuels, reversing deforestation, reducing energy and resource consumption, regulating global trade, reforming the economic system away from its dependence on growth. The problem is that the solutions are not implemented because of short-term selfish profiteering, which AI only exacerbates.

Playing with fire

AI, like all other technologies, is not a magic wand and, as Hinton says, potentially has many negative consequences. It is not, as the enthusiasts seem to think, a magical free resource that provides output without input (and waste). I believe we must rethink our naïve, uncritical, overly fast, total embrace of AI. Universities are known for wise reflection, but worryingly they seem to be hurrying to jump on the AI bandwagon. The problem is that the bandwagon may be going in the wrong direction or crash and burn entirely.

Why then should universities and organizations send their precious money to greedy, reckless and almost totalitarian tech billionaires? If we are going to use AI, shouldn’t we create our own AI tools that we can hopefully control better? Today, more money and power is transferred to a few AI companies that transcend national borders, which is also a threat to democracy. Democracy only works if citizens are well educated, committed, knowledgeable and have influence.

AI is like using a hammer to crack a nut. Sometimes a hammer may be needed but most of the time it is not and is instead downright harmful. Happy-go-lucky people at universities, companies and throughout society are playing with fire without knowing about the true consequences now, let alone in 10 years’ time. Our mapped-out path towards AGI is like a zebra on the savannah creating an artificial lion that begins to self-replicate, becoming bigger, stronger, more dangerous and more unpredictable with each generation.

Wise reflection today on our relationship with AI is more important than ever.


New sensor uses topological material to detect helium leaks

26 January 2026 at 10:00

A new sensor detects helium leaks by monitoring how sound waves propagate through a topological material – no chemical reactions required. Developed by acoustic scientists at Nanjing University, China, the innovative, physics-based device is compact, stable, accurate and capable of operating at very low temperatures.

Helium is employed in a wide range of fields, including aerospace, semiconductor manufacturing and medical applications as well as physics research. Because it is odourless, colourless, and inert, it is essentially invisible to traditional leak-detection equipment such as adsorption-based sensors. Specialist helium detectors are available, but they are bulky, expensive and highly sensitive to operating conditions.

A two-dimensional acoustic topological material

The new device created by Li Fan and colleagues at Nanjing consists of nine cylinders arranged in three sub-triangles with tubes in between the cylinders. The corners of the sub-triangles touch and the tubes allow air to enter the device. The resulting two-dimensional system has a so-called “kagome” structure and is an example of a topological material – that is, one that contains special, topologically protected, states that remain stable even if the bulk structure contains minor imperfections or defects. In this system, the protected states are the corners.

To test their setup, the researchers placed speakers under the corners to send sound waves into the structure and make the gas within it vibrate at a particular frequency (the resonance frequency). When they replaced the air in the device with helium, the sound waves travelled faster, changing the vibration frequency. Measuring this shift in frequency enabled the researchers to calculate the concentration of helium in the device.
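
The article gives no numbers, but the size of the effect follows from the ideal-gas sound speed c = √(γRT/M), since an acoustic resonance frequency scales in proportion to c. Below is a rough sketch, assuming mole-fraction-weighted heat capacities and molar mass for the air–helium mixture and an arbitrary illustrative base frequency; the device geometry and operating frequency are not given in the article.

```python
# Why helium shifts the resonance: sound travels much faster in helium than in
# air, and the resonance frequency scales with the sound speed. Ideal-gas mixture.
import numpy as np

R, T = 8.314, 293.0                        # J/(mol K), K
air = (28.97e-3, 29.1, 20.8)               # molar mass (kg/mol), Cp, Cv (J/(mol K))
he  = (4.003e-3, 20.8, 12.5)

def sound_speed(x_he):
    """Sound speed of an air-helium mixture with helium mole fraction x_he."""
    M  = x_he * he[0] + (1 - x_he) * air[0]
    cp = x_he * he[1] + (1 - x_he) * air[1]
    cv = x_he * he[2] + (1 - x_he) * air[2]
    return np.sqrt((cp / cv) * R * T / M)

f0 = 5000.0                                # assumed resonance frequency in pure air, Hz
for x in (0.0, 0.1, 0.5, 1.0):
    c = sound_speed(x)
    print(f"He fraction {x:3.1f}: c = {c:6.1f} m/s, f ~ {f0 * c / sound_speed(0.0):6.0f} Hz")
```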

Many advantages over traditional gas sensors

Fan explains that the device works because the interface/corner states are impacted by the properties of the gas within it. This mechanism has many advantages over traditional gas sensors. First, it does not rely on chemical reactions, making it ideal for detecting inert gases like helium. Second, the sensor is not affected by external conditions and can therefore work at extremely low temperatures – something that is challenging for conventional sensors that contain sensitive materials. Third, its sensitivity to the presence of helium does not change, meaning it does not need to be recalibrated during operation. Finally, it detects frequency changes quickly and rapidly returns to its baseline once helium levels decrease.

As well as detecting helium, Fan says the device can also pinpoint the direction a gas leak is coming from. This is because when helium begins to fill the device, the corner closest to the source of the gas is impacted first. Each corner thus acts as an independent sensing point, giving the device a spatial sensing capability that most traditional detectors lack.

Other gases could be detected

Detecting helium leaks is important in fields such as semiconductor manufacturing, where the gas is used for cooling, and in medical imaging systems that operate at liquid helium temperatures. “We think our work opens an avenue for inert gas detection using a simple device and is an example of a practical application for two-dimensional acoustic topological materials,” says Fan.

While the new sensor was fabricated to detect helium, the same mechanism could also be employed to detect other gases such as hydrogen, he adds.

Spurred on by these promising preliminary results, which they report in Applied Physics Letters, the researchers plan to extend their fabrication technique to create three-dimensional acoustic topological structures. “These could be used to orientate the corner points so that helium can be detected in 3D space,” says Fan. “Ultimately, we are trying to integrate our system into a portable structure that can be deployed in real-world environments without complex supporting equipment,” he tells Physics World.


Encrypted qubits can be cloned and stored in multiple locations

24 January 2026 at 16:09

Encrypted qubits can be cloned and stored in multiple locations without violating the no-cloning theorem of quantum mechanics, researchers in Canada have shown. Their work could potentially allow quantum-secure cloud storage, in which data can be stored on multiple servers, thereby allowing for redundancy without compromising security. The research also has implications for quantum fundamentals.

Heisenberg’s uncertainty principle – which states that it is impossible to measure conjugate variables of a quantum object with less than a combined minimum uncertainty – is one of the central tenets of quantum mechanics. The no-cloning theorem – that it is impossible to create identical clones of unknown quantum states – flows directly from this. Achim Kempf of the University of Waterloo explains, “If you had [clones] you could take half your copies and perform one type of measurement, and the other half of your copies and perform an incompatible measurement, and then you could beat the uncertainty principle.”

No-cloning poses a challenge to those trying to create a quantum internet. On today’s internet, storage of information on remote servers is common, and multiple copies of this information are usually stored in different locations to preserve data in case of disruption. Users of a quantum cloud server would presumably desire the same degree of information security, but the no-cloning theorem would apparently forbid this.

Signal and noise

In the new work, Kempf and his colleague Koji Yamaguchi, now at Japan’s Kyushu University, show that this is not the case. Their encryption protocol begins with the generation of a set of pairs of entangled qubits. When a qubit, called A, is encrypted, it interacts with one qubit (called a signal qubit) from each pair in turn. In the process of interaction, the signal qubits record information about the state of A, which has been altered by previous interactions. As each signal qubit is entangled with a noise qubit, the state of the noise qubits is also changed.

Another central tenet of quantum mechanics, however, is that quantum entanglement does not allow for information exchange. “The noise qubits don’t know anything about the state of A either classically or quantum mechanically,” says Kempf. “The noise qubits’ role is to serve as a record of noise…We use the noise that is in the signal qubit to encrypt the clone of A. You drown the information in noise, but the noise qubit has a record of exactly what noise has been added because [the signal qubits and noise qubits] are maximally entangled.”

Therefore, a user with all of the noise qubits knows nothing about the signal, but knows all of the noise that was added to it. Possession of just one of the signal qubits, therefore, allows them to recover the unencrypted qubit. This does not violate the uncertainty principle, however, because decrypting one copy of A involves making a measurement of the noise qubits: “At the end of [the measurement], the noise qubits are no longer what they were before, and they can no longer be used for the decryption of another encrypted clone,” explains Kempf.
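
The protocol itself is specific to the paper, but the basic idea of hiding a state in noise that a key-holder can undo is familiar from the textbook quantum one-time pad. The sketch below is that textbook construction, not Kempf and Yamaguchi’s scheme: a random Pauli operation scrambles a qubit so that, averaged over keys, it looks maximally mixed, yet the key-holder can recover it exactly.

```python
# Textbook quantum one-time pad (illustration only; NOT the protocol in the paper).
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

rng = np.random.default_rng(1)
psi = np.array([0.6, 0.8j])                   # an arbitrary "unknown" qubit state
rho = np.outer(psi, psi.conj())

a, b = rng.integers(2, size=2)                # secret two-bit key
U = np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
encrypted = U @ rho @ U.conj().T

# Without the key: averaging over the four possible keys gives the maximally
# mixed state, so an eavesdropper learns nothing about psi.
avg = sum(P @ rho @ P.conj().T for P in (I, X, Z, X @ Z)) / 4
print(np.allclose(avg, I / 2))                # True

# With the key: decryption is exact.
print(np.allclose(U.conj().T @ encrypted @ U, rho))   # True
```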

Cloning clones

Kempf says that, working with IBM, they have demonstrated hundreds of steps of iterative quantum cloning (quantum cloning of quantum clones) on a Heron 2 processor successfully and showed that the researchers could even clone entangled qubits and recover the entanglement after decryption. “We’ll put that on the arXiv this month,” he says.

The research is described in Physical Review Letters, and Barry Sanders at Canada’s University of Calgary is impressed by both the elegance and the generality of the result. He notes it could have significance for topics as distant as information loss from black holes. “It’s not a flash in the pan,” he says. “If I’m doing something that is related to no-cloning, I would look back and say ‘Gee, how do I interpret what I’m doing in this context?’ It’s a paper I won’t forget.”

Seth Lloyd of MIT agrees: “It turns out that there’s still low-hanging fruit out there in the theory of quantum information, which hasn’t been around long,” he says. “It turns out nobody ever thought to look at this before: Achim is a very imaginative guy and it’s no surprise that he did.” Both Lloyd and Sanders agree that quantum cloud storage remains hypothetical, but Lloyd says “I think it’s a very cool and unexpected result and, while it’s unclear what the implications are towards practical uses, I suspect that people will find some very nice applications in the near future.”


The forgotten pioneers of computational physics

11 November 2025 at 11:00

When you look back at the early days of computing, some familiar names pop up, including John von Neumann, Nicholas Metropolis and Richard Feynman. But they were not lonely pioneers – they were part of a much larger group, using mechanical and then electronic computers to do calculations that had never been possible before.

These people, many of whom were women, were the first scientific programmers and computational scientists. Skilled in the complicated operation of early computing devices, they often had degrees in maths or science, and were an integral part of research efforts. And yet, their fundamental contributions are mostly forgotten.

This was in part because of their gender – it was an age when sexism was rife, and it was standard for women to be fired from their job after getting married. However, there is another important factor that is often overlooked, even in today’s scientific community – people in technical roles are often underappreciated and underacknowledged, even though they are the ones who make research possible.

Human and mechanical computers

Originally, a “computer” was a human being who did calculations by hand or with the help of a mechanical calculator. It is thought that the world’s first computational lab was set up in 1937 at Columbia University. But it wasn’t until the Second World War that the demand for computation really exploded, driven by the need for artillery calculations, new technologies and code breaking.

Three women in a basement lab performing calculations by hand
Human computers The term “computer” originally referred to people who performed calculations by hand. Here, Kay McNulty, Alyse Snyder and Sis Stump operate the differential analyser in the basement of the Moore School of Electrical Engineering, University of Pennsylvania, circa 1942–1945. (Courtesy: US government)

In the US, the development of the atomic bomb during the Manhattan Project (established in 1943) required huge computational efforts, so it wasn’t long before the New Mexico site had a hand-computing group. Called the T-5 group of the Theoretical Division, it initially consisted of about 20 people. Most were women, including the spouses of other scientific staff. Among them was Mary Frankel, a mathematician married to physicist Stan Frankel; mathematician Augusta “Mici” Teller who was married to Edward Teller, the “father of the hydrogen bomb”; and Jean Bacher, the wife of physicist Robert Bacher.

As the war continued, the T-5 group expanded to include civilian recruits from the nearby towns and members of the Women’s Army Corps. Its staff worked around the clock, using printed mathematical tables and desk calculators in four-hour shifts – but that was not enough to keep up with the computational needs for bomb development. In the early spring of 1944, IBM punch-card machines were brought in to supplement the human power. They became so effective that the machines were soon being used for all large calculations, 24 hours a day, in three shifts.

The computational group continued to grow, and among the new recruits were Naomi Livesay and Eleonor Ewing. Livesay held an advanced degree in mathematics and had done a course in operating and programming IBM electric calculating machines, making her an ideal candidate for the T-5 division. She in turn recruited Ewing, a fellow mathematician who was a former colleague. The two young women supervised the running of the IBM machines around the clock.

The frantic pace of the T-5 group continued until the end of the war in September 1945. The development of the atomic bomb required an immense computational effort, which was made possible through hand and punch-card calculations.

Electronic computers

Shortly after the war ended, the first fully electronic, general-purpose computer – the Electronic Numerical Integrator and Computer (ENIAC) – became operational at the University of Pennsylvania, following two years of development. The project had been led by physicist John Mauchly and electrical engineer J Presper Eckert. The machine was operated and coded by six women – mathematicians Betty Jean Jennings (later Bartik); Kathleen, or Kay, McNulty (later Mauchly, then Antonelli); Frances Bilas (Spence); Marlyn Wescoff (Meltzer) and Ruth Lichterman (Teitelbaum); as well as Betty Snyder (Holberton) who had studied journalism.

Two women adjusting switches on a large room-sized computer
World first The ENIAC was the first programmable, electronic, general-purpose digital computer. It was built at the US Army’s Ballistic Research Laboratory in 1945, then moved to the University of Pennsylvania in 1946. Its initial team of six coders and operators were all women, including Betty Jean Jennings (later Bartik – left of photo) and Frances Bilas (later Spence – right of photo). They are shown preparing the computer for Demonstration Day in February 1946. (Courtesy: US Army/ ARL Technical Library)

Polymath John von Neumann also got involved when looking for more computing power for projects at the new Los Alamos Laboratory, established in New Mexico in 1947. In fact, although originally designed to solve ballistic trajectory problems, the first problem to be run on the ENIAC was “the Los Alamos problem” – a thermonuclear feasibility calculation for Teller’s group studying the H-bomb.

Like in the Manhattan Project, several husband-and-wife teams worked on the ENIAC, the most famous being von Neumann and his wife Klara Dán, and mathematicians Adele and Herman Goldstine. Dán von Neumann in particular worked closely with Nicholas Metropolis, who alongside mathematician Stanislaw Ulam had coined the term Monte Carlo to describe numerical methods based on random sampling. Indeed, between 1948 and 1949 Dán von Neumann and Metropolis ran the first series of Monte Carlo simulations on an electronic computer.

Work began on a new machine at Los Alamos in 1948 – the Mathematical Analyzer Numerical Integrator and Automatic Computer (MANIAC) – which ran its first large-scale hydrodynamic calculation in March 1952. Many of its users were physicists, and its operators and coders included mathematicians Mary Tsingou (later Tsingou-Menzel), Marjorie Jones (Devaney) and Elaine Felix (Alei); plus Verna Ellingson (later Gardiner) and Lois Cook (Leurgans).

Early algorithms

The Los Alamos scientists tried all sorts of problems on the MANIAC, including a chess-playing program – the first documented case of a machine defeating a human at the game. However, two of these projects stand out because they had profound implications for computational science.

In 1953 the Tellers, together with Metropolis and physicists Arianna and Marshall Rosenbluth, published the seminal article “Equation of state calculations by fast computing machines” (J. Chem. Phys. 21 1087). The work introduced the ideas behind the “Metropolis (later renamed Metropolis–Hastings) algorithm”, which is a Monte Carlo method that is based on the concept of “importance sampling”. (While Metropolis was involved in the development of Monte Carlo methods, it appears that he did not contribute directly to the article, but provided access to the MANIAC nightshift.) This is the progenitor of the Markov Chain Monte Carlo methods, which are widely used today throughout science and engineering.

Marshall later recalled how the research came about when he and Arianna had proposed using the MANIAC to study how solids melt (AIP Conf. Proc. 690 22).

Black and white photo of two men looking at a chess board on a table in front of large rack of computer switches
A mind for chess Paul Stein (left) and Nicholas Metropolis play “Los Alamos” chess against the MANIAC. “Los Alamos” chess was a simplified version of the game, with the bishops removed to reduce the MANIAC’s processing time between moves. The computer still needed about 20 minutes between moves. The MANIAC became the first computer to beat a human opponent at chess in 1956. (Courtesy: US government / Los Alamos National Laboratory)

Edward Teller meanwhile had the idea of using statistical mechanics and taking ensemble averages instead of following detailed kinematics for each individual disk, and Mici helped with programming during the initial stages. However, the Rosenbluths did most of the work, with Arianna translating and programming the concepts into an algorithm.

The 1953 article is remarkable, not only because it led to the Metropolis algorithm, but also as one of the earliest examples of using a digital computer to simulate a physical system. The main innovation of this work was in developing “importance sampling”. Instead of sampling configurations uniformly at random, the method samples with a bias toward the physically important configurations that contribute most to the integral.

Furthermore, the article also introduced another computational trick, known as “periodic boundary conditions” (PBCs): a set of conditions which are often used to approximate an infinitely large system by using a small part known as a “unit cell”. Both importance sampling and PBCs went on to become workhorse methods in computational physics.
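
To make these two ideas concrete, here is a minimal Python sketch of Metropolis importance sampling with periodic boundary conditions. It uses a toy 2D Lennard-Jones fluid rather than the hard disks of the 1953 paper, and every parameter value is an illustrative choice rather than a historical one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2D Lennard-Jones fluid sampled with the Metropolis rule.
# All parameters below are illustrative, not values from the 1953 paper.
N, L, T, dmax, n_sweeps = 16, 6.0, 1.0, 0.3, 2000

# Start the particles on a regular grid so that none overlap initially.
side = int(np.ceil(np.sqrt(N)))
pos = np.array([((i % side) + 0.5, (i // side) + 0.5) for i in range(N)]) * (L / side)

def lj(r2):
    """Lennard-Jones pair energy as a function of squared separation (sigma = epsilon = 1)."""
    inv6 = 1.0 / r2**3
    return 4.0 * (inv6**2 - inv6)

def energy_of(i, r_i):
    """Energy of particle i placed at r_i, interacting with all other particles.
    The minimum-image convention implements the periodic boundary conditions."""
    d = pos - r_i
    d -= L * np.round(d / L)          # wrap displacements into the central cell
    r2 = np.sum(d**2, axis=1)
    r2[i] = np.inf                    # exclude the self-interaction
    return np.sum(lj(r2))

accepted = 0
for sweep in range(n_sweeps):
    for _ in range(N):
        i = rng.integers(N)
        trial = (pos[i] + rng.uniform(-dmax, dmax, 2)) % L
        dE = energy_of(i, trial) - energy_of(i, pos[i])
        # Metropolis rule: always accept downhill moves, accept uphill moves
        # with probability exp(-dE / T) -- this is the importance sampling.
        if dE <= 0.0 or rng.random() < np.exp(-dE / T):
            pos[i] = trial
            accepted += 1

print(f"acceptance ratio: {accepted / (n_sweeps * N):.2f}")
```

Averaging any observable over the configurations visited by such a chain approximates its thermal, Boltzmann-weighted expectation value – the essence of importance sampling.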

In the summer of 1953, physicist Enrico Fermi, Ulam, Tsingou and physicist John Pasta also made a significant breakthrough using the MANIAC. They ran a “numerical experiment” as part of a series meant to illustrate possible uses of electronic computers in studying various physical phenomena.

The team modelled a 1D chain of oscillators with a small nonlinearity to see if it would behave as hypothesized, reaching an equilibrium with the energy redistributed equally across the modes (doi.org/10.2172/4376203). However, their work showed that this was not guaranteed for small perturbations – a non-trivial and non-intuitive observation that would not have been apparent without the simulations. It is the first example of a physics discovery made not by theoretical or experimental means, but through a computational approach. It would later lead to the discovery of solitons and integrable models, the development of chaos theory, and a deeper understanding of ergodic limits.
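
For readers who want to see the effect for themselves, the following Python sketch integrates a small FPUT “alpha” chain and tracks how much energy sits in each linear normal mode. The chain size, nonlinearity and time step are illustrative values, not those used on the MANIAC.

```python
import numpy as np

# Minimal sketch of the Fermi-Pasta-Ulam-Tsingou "alpha" chain: N oscillators
# with fixed ends and a weak quadratic nonlinearity. Parameter values are
# illustrative, not those of the original report.
N, alpha, dt, n_steps = 32, 0.25, 0.05, 200000

j = np.arange(1, N + 1)
k = np.arange(1, N + 1)
omega = 2.0 * np.sin(np.pi * k / (2 * (N + 1)))   # linear normal-mode frequencies

# Start with (essentially) all the energy in the lowest mode.
x = np.sin(np.pi * j / (N + 1))
v = np.zeros(N)

def accel(x):
    """Force on each oscillator: linear coupling plus the alpha nonlinearity."""
    xp = np.concatenate(([0.0], x, [0.0]))        # fixed boundary conditions
    dl = xp[1:-1] - xp[:-2]                       # left bond extension
    dr = xp[2:] - xp[1:-1]                        # right bond extension
    return (dr - dl) + alpha * (dr**2 - dl**2)

def mode_energies(x, v):
    """Energy stored in each linear normal mode."""
    S = np.sqrt(2.0 / (N + 1)) * np.sin(np.pi * np.outer(k, j) / (N + 1))
    a, adot = S @ x, S @ v
    return 0.5 * (adot**2 + (omega * a)**2)

# Velocity-Verlet integration, printing the share of energy in the first few
# modes from time to time. Equipartition would spread it evenly across modes;
# instead most of it keeps sloshing back into the lowest ones.
a_old = accel(x)
for step in range(n_steps):
    x += v * dt + 0.5 * a_old * dt**2
    a_new = accel(x)
    v += 0.5 * (a_old + a_new) * dt
    a_old = a_new
    if step % 40000 == 0:
        E = mode_energies(x, v)
        print(f"t = {step * dt:8.1f}  E1..E4 / Etot = {E[:4] / E.sum()}")
```

Run for long enough, the energy repeatedly flows back into the lowest modes rather than spreading evenly – the recurrence that so surprised Fermi, Pasta, Ulam and Tsingou.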

Although the paper says the work was done by all four scientists, Tsingou’s role was forgotten, and the results became known as the Fermi–Pasta–Ulam problem. It was not until 2008, when French physicist Thierry Dauxois advocated for giving her credit in a Physics Today article, that Tsingou’s contribution was properly acknowledged. Today the finding is called the Fermi–Pasta–Ulam–Tsingou problem.

The year 1953 also saw IBM’s first commercial, fully electronic computer – an IBM 701 – arrive at Los Alamos. Soon the theoretical division had two of these machines, which, alongside the MANIAC, gave the scientists unprecedented computing power. Among those to take advantage of the new devices were Martha Evans (about whom very little is known) and theoretical physicist Francis Harlow, who began to tackle the largely unexplored subject of computational fluid dynamics.

The idea was to use a mesh of cells through which the fluid, represented as particles, would move. This computational method made it possible to solve complex hydrodynamics problems (involving large distortions and compressions of the fluid) in 2D and 3D. Indeed, the method proved so effective that it became a standard tool in plasma physics where it has been applied to every conceivable topic from astrophysical plasmas to fusion energy.

The resulting internal Los Alamos report – The Particle-in-cell Method for Hydrodynamic Calculations, published in 1955 – showed Evans as first author and acknowledged eight people (including Evans) for the machine calculations. However, while Harlow is remembered as one of the pioneers of computational fluid dynamics, Evans was forgotten.

A clear-cut division of labour?

In an age where women had very limited access to the frontlines of research, the computational war effort brought many female researchers and technical staff in. As their contributions come more into the light, it becomes clearer that their role was not a simple clerical one.

Three black and white photos of people operating a large room-sized computer
Skilled role Operating the ENIAC required an analytical mind as well as technical skills. (Top) Irwin Goldstein setting the switches on one of the ENIAC’s function tables at the Moore School of Electrical Engineering in 1946. (Middle) Gloria Gordon (later Bolotsky – crouching) and Ester Gerston (standing) wiring the right side of the ENIAC with a new program, c. 1946. (Bottom) Glenn A Beck changing a tube on the ENIAC. Replacing a bad tube meant checking among the ENIAC’s 19,000 possibilities. (Courtesy: US Army / Harold Breaux; US Army / ARL Technical Library; US Army)

There is a view that the coders’ work was “the vital link between the physicist’s concepts (about which the coders more often than not didn’t have a clue) and their translation into a set of instructions that the computer was able to perform, in a language about which, more often than not, the physicists didn’t have a clue either”, as physicists Giovanni Battimelli and Giovanni Ciccotti wrote in 2018 (Eur. Phys. J. H 43 303). But the examples we have seen show that some of the coders had a solid grasp of the physics, and some of the physicists had a good understanding of the machine operation. Rather than a skilled–non-skilled/men–women separation, the division of labour was blurred. Indeed, it was more of an effective collaboration between physicists, mathematicians and engineers.

Even in the early days of the T-5 division before electronic computers existed, Livesay and Ewing, for example, attended maths lectures from von Neumann, and introduced him to punch-card operations. As has been documented in books including Their Day in the Sun by Ruth Howes and Caroline Herzenberg, they also took part in the weekly colloquia held by J Robert Oppenheimer and other project leaders. This shows they should not be dismissed as mere human calculators and machine operators who supposedly “didn’t have a clue” about physics.

Verna Ellingson (Gardiner) is another forgotten coder who worked at Los Alamos. While little information about her can be found, she appears as the last author on a 1955 paper (Science 122 465) written with Metropolis and physicist Joseph Hoffman – “Study of tumor cell populations by Monte Carlo methods”. The next year she was first author of “On certain sequences of integers defined by sieves” with mathematical physicist Roger Lazarus, Metropolis and Ulam (Mathematics Magazine 29 117). She also worked with physicist George Gamow on attempts to discover the code for DNA selection of amino acids, which just shows the breadth of projects she was involved in.

Evans not only worked with Harlow but took part in a 1959 conference on self-organizing systems, where she queried AI pioneer Frank Rosenblatt on his ideas about human and machine learning. Her attendance at such a meeting, in an age when women were not common attendees, implies we should not view her as “just a coder”.

With their many and wide-ranging contributions, it is more than likely that Evans, Gardiner, Tsingou and many others were full-fledged researchers, and were perhaps even the first computational scientists. “These women were doing work that modern computational physicists in the [Los Alamos] lab’s XCP [Weapons Computational Physics] Division do,” says Nicholas Lewis, a historian at Los Alamos. “They needed a deep understanding of both the physics being studied, and of how to map the problem to the particular architecture of the machine being used.”

An evolving identity

Black and white photo of a woman using equipment to punch a program onto paper tape
What’s in a name Marjory Jones (later Devaney), a mathematician, shown in 1952 punching a program onto paper tape to be loaded into the MANIAC. The name of this role evolved to programmer during the 1950s. (Courtesy: US government / Los Alamos National Laboratory)

In the 1950s there was no computational physics or computer science, so it’s unsurprising that the practitioners of these disciplines went by different names, and that their identity has evolved over the decades since.

1930s–1940s

Originally a “computer” was a person doing calculations by hand or with the help of a mechanical calculator.

Late 1940s – early 1950s

A “coder” was a person who translated mathematical concepts into a set of instructions in machine language. John von Neumann and Herman Goldstine distinguished between “coding” and “planning”: coding was the lower-level work of turning flow diagrams into machine language (and doing the physical configuration), while planning was the mathematical analysis of the problem.

Meanwhile, an “operator” would physically handle the computer (replacing punch cards, doing the rewiring, etc). In the late-1940s coders were also operators.

As historians note in the book ENIAC in Action, this was an age in which “It was hard to devise the mathematical treatment without a good knowledge of the processes of mechanical computation…It was also hard to operate the ENIAC without understanding something about the mathematical task it was undertaking.”

For the ENIAC a “programmer” was not a person but “a unit combining different sequences in a coherent computation”. The term would later shift and eventually overlap with the meaning of coder as a person’s job.

1960s

Computer scientist Margaret Hamilton, who led the development of the on-board flight software for NASA’s Apollo program, coined the term “software engineering” to distinguish the practice of designing, developing, testing and maintaining software from the engineering tasks associated with the hardware.

1980s – early 2000s

Use of the term “programmer” for someone who coded computers peaked in popularity in the 1980s, but by the 2000s it had given way to other job titles, such as various flavours of “developer” or “software architect”.

Early 2010s

A “research software engineer” is a person who combines professional software engineering expertise with an intimate understanding of scientific research.

Overlooked then, overlooked now

Credited or not, these pioneering women and their contributions have been mostly forgotten, and only in recent decades have their roles come to light again. But why were they obscured by history in the first place?

Secrecy and sexism seem to be the main factors at play. For example, Livesay was not allowed to pursue a PhD in mathematics because she was a woman, and in the cases of the many married couples, the team contributions were attributed exclusively to the husband. The existence of the Manhattan Project was publicly announced in 1945, but documents that contain certain nuclear-weapons-related information remain classified today. Because these are likely to remain secret, we will never know the full extent of these pioneers’ contributions.

But another often overlooked reason is the widespread underappreciation of the key role of computational scientists and research software engineers, a term that was only coined just over a decade ago. Even today, these non-traditional research roles end up being undervalued. A 2022 survey by the UK Software Sustainability Institute, for example, showed that only 59% of research software engineers were named as authors, with barely a quarter (24%) mentioned in the acknowledgements or main text, while a sixth (16%) were not mentioned at all.

The separation between those who understand the physics and those who write the code and understand and operate the hardware goes back to the early days of computing (see box above), but it wasn’t entirely accurate even then. People who implement complex scientific computations are not just coders or skilled operators of supercomputers, but truly multidisciplinary scientists with a deep understanding of the scientific problems, mathematics, computational methods and hardware.

Such people – whatever their gender – play a key role in advancing science and yet remain the unsung heroes of the discoveries their work enables. Perhaps what this story of the forgotten pioneers of computational physics tells us is that some views rooted in the 1950s are still influencing us today. It’s high time we moved on.

The post The forgotten pioneers of computational physics appeared first on Physics World.

Cosmic time capsules: the search for pristine comets

23 janvier 2026 à 14:40

In this episode of Physics World Stories, host Andrew Glester explores the fascinating hunt for pristine comets – icy bodies that preserve material from the solar system’s beginnings and even earlier. Unlike more familiar comets that repeatedly swing close to the Sun and transform, these frozen relics act as time capsules, offering unique insights into our cosmic history.

Pale blue circle against red streaks. composite image of interstellar comet 3I/ATLAS captured by the Europa Ultraviolet Spectrograph instrument on NASA’s Europa Clipper spacecraft
Interstellar comet 3I/ATLAS is seen in this composite image captured on 6 November 2025 by the Europa Ultraviolet Spectrograph instrument on NASA’s Europa Clipper spacecraft. (Courtesy: NASA/JPL-Caltech/SWRI)

The first guest is Tracy Becker, deputy principal investigator for the Ultraviolet Spectrograph on NASA’s Europa Clipper mission. Becker describes how the Jupiter-bound spacecraft recently turned its gaze to 3I/ATLAS, an interstellar visitor that appeared last July. Mission scientists quickly reacted to this unique opportunity, which also enabled them to test the mission’s instruments before it arrives at the icy world of Europa.

Michael Küppers then introduces the upcoming Comet Interceptor mission, set for launch in 2029. This joint ESA–JAXA mission will “park” in space until a suitable comet arrives from the outer reaches of the solar system. It will then deploy two probes to study the comet from multiple angles – offering a first-ever close look at material untouched since the solar system’s birth.

From interstellar wanderers to carefully orchestrated intercepts, this episode blends pioneering missions and cosmic detective work. Keep up to date with all the latest space and astronomy developments in the dedicated section of the Physics World website.

The post Cosmic time capsules: the search for pristine comets appeared first on Physics World.

Hot ancient galaxy cluster challenges current cosmological models

23 janvier 2026 à 12:30

As with people, behaviour in cosmology cannot always be extrapolated from age. An early-career politician may be more likely to win a debate with a student than with a seasoned diplomat, but put all three in a room with a toddler and the toddler will almost certainly get their own way – they are following a different set of rules. A team of global collaborators noticed a similar phenomenon when peering at a cluster of developing galaxies from a time when the universe was just a tenth of its current age.

Cosmological theories suggest that such infant clusters should host much cooler and less abundant gas than more mature clusters. But what the researchers saw was at least five times hotter than expected – apparently not abiding by those rules.

“That’s a massive surprise and forces us to rethink how large structures actually form and evolve in the universe,” says first author Dazhi Zhou, a PhD candidate at the University of British Columbia.

Eyes on the past

Looking into distant outer space allows us to peer into the past. The protocluster of developing galaxies that Zhou and collaborators investigated – known as SPT2349–56 – is 12.4 billion light years away, so the light observed from it left home when the universe was just 1.4 billion years old. Light from so far away will be quite faint and hard to detect by the time it reaches us, so the researchers used the Atacama Large Millimeter/submillimeter Array (ALMA) to study SPT2349–56 using a special type of shadow.

As this type of protocluster develops, Zhou explains, the gas around its galaxies becomes so hot that electrons in the gas interact with, and confer some of their energy upon, passing photons. This leaves light passing through the gas with more photons at the higher-energy end of the spectrum and fewer at the lower end. When viewing the cosmic microwave background radiation – the “afterglow” left behind by the Big Bang – this results in a shadow at low energies. This energy shift, discovered by physicists Rashid Sunyaev and Yakov Zeldovich, not only reveals the presence of the protocluster, but the strength of the signature also indicates the thermal energy of the gas in the protocluster.
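
For reference, the strength of this “shadow” is conventionally quantified by the Compton y-parameter, a line-of-sight integral of the electron pressure. The relations below are the standard textbook forms, not formulas quoted from the paper itself.

```latex
% Standard thermal Sunyaev--Zeldovich relations (textbook forms, not taken
% from the paper discussed above): y integrates the electron pressure along
% the line of sight, and sets the size of the low-frequency dimming.
\[
  y \;=\; \frac{\sigma_{\mathrm T}}{m_{\mathrm e} c^{2}}
          \int n_{\mathrm e}\, k_{\mathrm B} T_{\mathrm e}\, \mathrm{d}l ,
  \qquad
  \frac{\Delta T}{T_{\mathrm{CMB}}} \;\simeq\; -2y
  \quad \text{(Rayleigh--Jeans limit)} .
\]
```

In words: the measured decrement is proportional to the integrated electron pressure, which is why it acts as a direct probe of the gas’s total thermal energy.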

The team’s observations were not easy. “This shadow is actually pretty tiny,” Zhou explains. In addition, there is thermal emission from the dust inside galaxies at radio wavelengths, originally estimated to be 20 times stronger than the Sunyaev–Zeldovich signature. “It really is like finding a needle in a haystack,” he adds. Nonetheless, the team did identify a definite Sunyaev–Zeldovich signature from SPT2349–56, with a thermal energy indicating that it was at least five times hotter than expected – thousands of times hotter than the surface of our Sun.

Time to upgrade?

SPT2349–56 has some quirks that may explain its high thermal energy, including three supermassive black holes shooting out jets of high-energy matter – a known but rare phenomenon for such supermassive black holes. However, even simulations that include these outbursts as a heating mechanism – one that is more efficient, and acts much earlier, than the heating from gravitational collapse that current models assume – still do not reproduce the high temperatures observed, perhaps pointing to gaps in our knowledge of the underlying physics.

Eiichiro Komatsu from the Max-Planck-Institut für Astrophysik describes the work as “a wonderful measurement”. Although not directly involved in this research, Komatsu has also looked at what the Sunyaev–Zeldovich effect can reveal about the cosmos. “The amount of thermal energy measured by the authors is staggering, yet its origin is a mystery,” he tells Physics World. He suggests these results will motivate further observations of other systems in the early universe.

“We need to be cautious rather than making any big claim,” adds Zhou. This is the first Sunyaev–Zeldovich detection of a protocluster from the first three billion years of the universe’s existence. Next, he aims to study similar protoclusters, and he hopes others will also work to corroborate the observations.

The research is reported in Nature.

The post Hot ancient galaxy cluster challenges current cosmological models appeared first on Physics World.

Laser fusion: Focused Energy charts a course to commercial viability

22 janvier 2026 à 16:01

This episode of the Physics World Weekly podcast features a conversation with the plasma physicist Debbie Callahan, who is chief strategy officer at Focused Energy – a California- and Germany-based fusion-energy startup. Prior to that, she spent 35 years at Lawrence Livermore National Laboratory in the US, where she worked at the National Ignition Facility (NIF).

Focused Energy is developing a commercial system for generating energy from the laser-driven fusion of hydrogen isotopes. Callahan describes LightHouse, which is the company’s design for a laser-fusion power plant, and Pearl, which is the firm’s deuterium–tritium fuel capsule.

Callahan talks about the challenges and rewards of working in the fusion industry and also calls on early-career physicists to consider careers in this burgeoning sector.

The post Laser fusion: Focused Energy charts a course to commercial viability appeared first on Physics World.

Fuel cell catalyst requirements for heavy-duty vehicle applications

22 janvier 2026 à 12:25

Heavy-duty vehicles (HDVs) powered by hydrogen-based proton-exchange membrane (PEM) fuel cells offer a cleaner alternative to diesel-powered internal combustion engines for decarbonizing long-haul transportation sectors. The development path of sub-components for HDV fuel-cell applications is guided by the total cost of ownership (TCO) analysis of the truck.

TCO analysis suggests that the cost of the hydrogen fuel consumed over the lifetime of the HDV dominates over the fuel-cell stack capital expense (CapEx), because trucks typically operate over very high mileages (around a million miles). Commercial HDV applications consume more hydrogen and demand higher durability, meaning that TCO is largely determined by fuel-cell efficiency and catalyst durability.
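
As a purely illustrative back-of-the-envelope calculation – every number below is a hypothetical placeholder, not a figure from this article – the short Python sketch shows why lifetime fuel cost can dwarf the stack CapEx at million-mile mileages.

```python
# Purely illustrative, made-up numbers (none of these figures come from the
# article above) to show why lifetime fuel cost tends to dwarf the stack
# capital expense for a fuel-cell heavy-duty truck.
lifetime_miles  = 1_000_000    # hypothetical service life, miles
miles_per_kg_h2 = 8.0          # hypothetical fuel economy, miles per kg of H2
h2_price_per_kg = 6.0          # hypothetical delivered hydrogen price, $/kg
stack_capex     = 50_000.0     # hypothetical fuel-cell stack cost, $

fuel_cost = lifetime_miles / miles_per_kg_h2 * h2_price_per_kg
print(f"Lifetime fuel cost: ${fuel_cost:,.0f}  vs  stack CapEx: ${stack_capex:,.0f}")

# With these numbers, even a 10% efficiency gain saves more than the stack costs.
print(f"Saving from a 10% efficiency gain: ${0.10 * fuel_cost:,.0f}")
```

In these made-up numbers a 10% efficiency improvement is worth more than the entire stack, which is why catalyst efficiency and durability dominate the development priorities.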

This article is written to bridge the gap between industrial requirements and academic activity on advanced cathode catalysts, with an emphasis on durability. From a materials perspective, the underlying nature of the carbon support, the Pt-alloy crystal structure, the stability of the alloying element, the cathode ionomer volume fraction and the catalyst–ionomer interface all play a critical role in improving performance and durability.

We provide our perspective on four major approaches that are currently being pursued – namely, mesoporous carbon supports, ordered PtCo intermetallic alloys, thrifting of the ionomer volume fraction, and shell-protection strategies. While each approach has its merits and demerits, we highlight their key developmental needs for the future.

Nagappan Ramaswamy

Nagappan Ramaswamy joined the Department of Chemical Engineering at IIT Bombay as a faculty member in January 2025. He earned his PhD in 2011 from Northeastern University in Boston, specialising in fuel-cell electrocatalysis.

He then spent 13 years working in industrial R&D – two years at Nissan North America in Michigan, USA, focusing on lithium-ion batteries, followed by 11 years at General Motors, also in Michigan, focusing on low-temperature fuel cells and electrolyser technologies. While at GM, he led two multi-million-dollar research projects funded by the US Department of Energy focused on the development of proton-exchange membrane fuel cells for automotive applications.

At IIT Bombay, his primary research interests include low-temperature electrochemical energy-conversion and storage devices such as fuel cells, electrolysers and redox-flow batteries involving materials development, stack design and diagnostics.

The post Fuel cell catalyst requirements for heavy-duty vehicle applications appeared first on Physics World.

Ask me anything: Mažena Mackoit-Sinkevičienė – ‘Above all, curiosity drives everything’

22 janvier 2026 à 12:00

What skills do you use every day in your job?

Much of my time is spent trying to build and refine models in quantum optics, usually with just a pencil, paper and a computer. This requires an ability to sit with difficult concepts for a long time, sometimes far longer than is comfortable, until they finally reveal their structure.

Good communication is equally essential – I teach students; collaborate with colleagues from different subfields; and translate complex ideas into accessible language for the broader public. Modern physics connects with many different fields, so being flexible and open-minded matters as much as knowing the technical details. Above all, curiosity drives everything. When I don’t understand something, that uncertainty becomes my strongest motivation to keep going.

What do you like best and least about your job?

What I like the best is the sense of discovery – the moment when a problem that has evaded understanding for weeks suddenly becomes clear. Those flashes of insight feel like hearing the quiet whisper of nature itself. They are rare, but they bring along a joy that is hard to find elsewhere.

I also value the opportunity to guide the next generation of physicists, whether in the university classroom or through public science communication. Teaching brings a different kind of fulfilment: witnessing students develop confidence, curiosity and a genuine love for physics.

What I like the least is the inherent uncertainty of research. Questions do not promise favourable answers, and progress is rarely linear. Fortunately, I have come to see this lack of balance not as a weakness but as a source of power that forces growth, new perspectives, and ultimately deeper understanding.

What do you know today that you wish you knew when you were starting out in your career?

I wish I had known that feeling lost is not a sign of inadequacy but a natural part of doing physics at a high level. Not understanding something can be the greatest motivator, provided one is willing to invest time and effort. Passion and curiosity matter far more than innate brilliance. If I had realized earlier that steady dedication can carry you farther than talent alone, I would have embraced uncertainty with much more confidence.

The post Ask me anything: Mažena Mackoit-Sinkevičienė – ‘Above all, curiosity drives everything’ appeared first on Physics World.

Modelling wavefunction collapse as a continuous flow yields insights on the nature of measurement

22 janvier 2026 à 10:30

“God does not play dice.”

With this famous remark at the 1927 Solvay Conference, Albert Einstein set the tone for one of physics’ most enduring debates. At the heart of his dispute with Niels Bohr lay a question that continues to shape the foundations of physics: does the apparently probabilistic nature of quantum mechanics reflect something fundamental, or is it simply due to lack of information about some “hidden variables” of the system that we cannot access?

Physicists at University College London, UK (UCL) have now addressed this question via the concept of quantum state diffusion (QSD). In QSD, the wavefunction does not collapse abruptly. Instead, wavefunction collapse is modelled as a continuous interaction with the environment that causes the system to evolve gradually toward a definite state, restoring some degree of intuition to the counterintuitive quantum world.

A quantum coin toss

To appreciate the distinction (and the advantages it might bring), imagine tossing a coin. While the coin is spinning in midair, it is neither fully heads nor fully tails – its state represents a blend of both possibilities. This mirrors a quantum system in superposition.

When the coin eventually lands, the uncertainty disappears and we obtain a definite outcome. In quantum terms, this corresponds to wavefunction collapse: the superposition resolves into a single state upon measurement.

In the standard interpretation of quantum mechanics, wavefunction collapse is considered instantaneous. However, this abrupt transition is challenging from a thermodynamic perspective because uncertainty is closely tied to entropy. Before measurement, a system in superposition carries maximal uncertainty, and thus maximum entropy. After collapse, the outcome is definite and our uncertainty about the system is reduced, thereby reducing the entropy.

This apparent reduction in entropy immediately raises a deeper question. If the system suddenly becomes more ordered at the moment of measurement, where does the “missing” entropy go?

From instant jumps to continuous flows

Returning to the coin analogy, imagine that instead of landing cleanly and instantly revealing heads or tails, the coin wobbles, leans, slows and gradually settles onto one face. The outcome is the same, but the transition is continuous rather than abrupt.

This gradual settling captures the essence of QSD. Instead of an instantaneous “collapse”, the quantum state unfolds continuously over time. This makes it possible to track various parameters of thermodynamic change, including a quantity called environmental stochastic entropy production that measures how irreversible the process is.

Another benefit is that whereas standard projective measurements describe an abrupt “yes/no” outcome, QSD models a broader class of generalized or “weak” measurements, revealing the subtle ways quantum systems evolve. It also allows physicists to follow individual trajectories rather than just average outcomes, uncovering details that the standard framework smooths over.
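
As a rough illustration of what following a single trajectory can look like – a generic toy model of repeated weak measurements, not the UCL team’s specific equations – the Python sketch below tracks one qubit that starts in an equal superposition and drifts gradually into one of the two measurement outcomes, with the Shannon entropy of the outcome probabilities falling from ln 2 towards zero along the way.

```python
import numpy as np

rng = np.random.default_rng(1)

def outcome_entropy(p):
    """Shannon entropy (in nats) of the two measurement-outcome probabilities."""
    q = 1.0 - p
    return -(p * np.log(p + 1e-300) + q * np.log(q + 1e-300))

# Generic toy model (not the UCL equations): a qubit undergoes repeated weak
# measurements of sigma_z. Each step applies a Gaussian Kraus operator, so the
# state diffuses gradually toward |0> or |1> with Born-rule probabilities.
# tau sets how weak each individual measurement is (larger = weaker).
tau, n_steps = 25.0, 400
psi = np.array([1.0, 1.0]) / np.sqrt(2)           # equal superposition

print("initial outcome entropy:", outcome_entropy(abs(psi[0])**2))   # ~ ln 2

for _ in range(n_steps):
    p_up = abs(psi[0])**2
    centre = 1.0 if rng.random() < p_up else -1.0
    r = rng.normal(centre, np.sqrt(tau))          # noisy measurement record
    M = np.diag([np.exp(-(r - 1.0)**2 / (4 * tau)),   # back-action (Kraus) operator
                 np.exp(-(r + 1.0)**2 / (4 * tau))])
    psi = M @ psi
    psi /= np.linalg.norm(psi)                    # renormalize the state

print("final |<0|psi>|^2:", abs(psi[0])**2)                       # close to 0 or 1
print("final outcome entropy:", outcome_entropy(abs(psi[0])**2))  # close to 0
```

Averaged over many such runs the outcomes reproduce the Born-rule statistics, while each individual run shows the gradual “settling” described above.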

“The QSD framework helps us understand how unpredictable environmental influences affect quantum systems,” explains Sophia Walls, a PhD student at UCL and the first author of a paper in Physical Review A on the research. Environmental noise, Walls adds, is particularly important for quantum technologies, making the study’s insights valuable for quantum error correction, control protocols and feedback mechanisms.

Bridging determinism and probability

At first glance, QSD might seem to resemble decoherence, which also arises from system–environment interactions such as noise. But the two differ in scope. “Decoherence explains how a system becomes a classical mixed state,” Walls clarifies, “but not how it ultimately purifies into a single eigenstate.” QSD, with its stochastic term, describes this final purification – the point where the coin’s faint shimmer sharpens into heads or tails.

In this view, measurement is not a single act but a continuous, entropy-producing flow of information between system and environment – a process that gradually results in manifestation of one of the possible quantum states, rather than an abrupt “collapse”.

“Standard quantum mechanics separates two kinds of dynamics – the deterministic Schrödinger evolution and the probabilistic, instantaneous collapse,” Walls notes. “QSD connects both in a single dynamical equation, offering a more unified description of measurement.”

This continuous evolution makes otherwise intractable quantities, such as entropy production, measurable and meaningful. It also breathes life into the wavefunction itself. By simulating individual realizations, QSD distinguishes between two seemingly identical mixed states: one genuinely entangled with its environment, and another that simply represents our ignorance. Only in the first case does the system dynamically evolve – a distinction invisible in the orthodox picture.

A window on quantum gravity?

Could this diffusion-based framework also illuminate other fundamental questions beyond the nature of measurement? Walls thinks it’s possible. Recent work suggests that stochastic processes could provide experimental clues about how gravity behaves at the quantum scale. QSD may one day offer a way to formalize or test such ideas. “If the nature of quantum gravity can be studied through a diffusive or stochastic process, then QSD would be a relevant framework to explore it,” Walls says.

The post Modelling wavefunction collapse as a continuous flow yields insights on the nature of measurement appeared first on Physics World.

NPL unveils miniature atomic fountain clock  

21 janvier 2026 à 18:23

A miniature version of an atomic fountain clock has been unveiled by researchers at the UK’s National Physical Laboratory (NPL). Their timekeeper occupies just 5% of the volume of a conventional atomic fountain clock while delivering a time signal with a stability that is on par with a full-sized system. The team is now honing its design to create compact fountain clocks that could be used in portable systems and remote locations.

The ticking of an atomic clock is defined by the frequency of the electromagnetic radiation that is absorbed and emitted by a specific transition between atomic energy levels. Today, the second is defined using a transition in caesium atoms that involves microwave radiation. Caesium atoms are placed in a microwave cavity and a measurement-and-feedback mechanism is used to tune the frequency of the cavity radiation to the atomic transition – creating a source of microwaves with a very narrow frequency range centred at the clock frequency.

The first atomic clocks sent a fast-moving beam of atoms through a microwave cavity. The precision of such a beam clock is limited by the relatively short time that individual atoms spend in the cavity. Also, the speed of the atoms means that the measured frequency peak is shifted and broadened by the Doppler effect.

Launching atoms

These problems were addressed by the development of the fountain clock, in which the atoms are cooled (slowed down) by laser light, which also launches them upwards. The atoms pass through a microwave cavity on the way up, and again as they fall back down. Because they travel much more slowly than in a beam clock, the atoms spend much more time interacting with the cavity, so the time signal from a fountain clock is far more precise than that from a beam clock. However, longer flight times result in greater thermal spread of the atoms, which degrades clock performance. Trading off measurement time against thermal spread means that the caesium fountain clocks that currently define the second have drops of about 30 cm.
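
A quick back-of-the-envelope calculation, using nothing more than free-fall kinematics (the numbers are illustrative, not NPL’s specifications), shows why a 30 cm toss buys so much interrogation time.

```python
import math

# Back-of-the-envelope kinematics only (illustrative numbers, not NPL's
# specifications): atoms tossed to a height h above the microwave cavity fall
# back through it after a free-flight time t = 2 * sqrt(2h/g), and the width
# of the central Ramsey fringe scales roughly as 1 / (2t).
g = 9.81                              # m/s^2
for h in (0.30, 0.03):                # a fountain-style toss vs a much shorter one
    t = 2.0 * math.sqrt(2.0 * h / g)  # time between the two cavity passages, s
    print(f"h = {h:0.2f} m  ->  t = {t:0.2f} s,  fringe width ~ {1 / (2 * t):0.1f} Hz")
```

Because the flight time scales only as the square root of the height, a ten-times-shorter toss gives roughly a third of the interrogation time – which is why fountains keep the roughly half-second flight despite the bulk it adds.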

Other components are also needed to operate fountain clocks – including a vacuum system and laser and microwave instrumentation. This pushes the height of a typical clock to about 2 m, and makes it a complex and expensive instrument that cannot be easily transported.

Now, Sam Walby and colleagues at NPL have shrunk the overall height of a rubidium-based fountain clock to 80 cm, while retaining the 30 cm drop. The result is an instrument that is 5% the volume of one of NPL’s conventional caesium atomic fountain clocks.

Precise yet portable

“That’s taking it from barely being able to fit though a doorway, to something one could pick up and carry with one arm,” says Walby.

Despite the miniaturization, the mini-fountain achieved a stability of one part in 10¹⁵ after several days of operation – which NPL says is comparable to full-sized clocks.

Walby told Physics World that the NPL team achieved miniaturization by eliminating two conventional components from their clock design. One is a dedicated chamber used to measure the quantum states of the atoms. Instead, this measurement is made within the clock’s cooling chamber. Also eliminated is a dedicated state-selection microwave cavity, which puts the atoms into the quantum state from which the clock transition occurs.

“The mini-fountain also does this [state] selection,” explains Walby, “but instead of using a dedicated cavity, we use a coax-to-waveguide adapter that is directed into the cooling chamber, which creates a travelling wave of microwaves at the correct frequency.”

The NPL team also reduced the amount of magnetic shielding used, which meant that the edge effects of the magnetic field had to be considered more carefully. The optics system of the clock was greatly simplified, and the use of commercial components means that the clock is low maintenance and easy to operate – according to NPL.

Radical simplification

“By radically simplifying and shrinking the atomic fountain, we’re making ultra-precise timing technology available beyond national labs,” said Walby. “This opens new possibilities for resilient infrastructure and next-generation navigation.”

According to Walby, one potential use of a miniature atomic fountain clock is as a holdover clock. These are devices that produce a very stable time signal when not synchronized with other atomic clocks. This is important for creating resilience in infrastructure that relies on precision timing – such as communications networks, global navigation satellite systems (including GPS) and power grids. Synchronization is usually done using GNSS signals but these can be jammed or spoofed to disrupt timing systems.

Holdover clocks require time errors of just a few nanoseconds over a month, which the new NPL clock can deliver. The miniature atomic clock could also be used as a secondary frequency standard for the SI second.

The small size of the clock also lends itself to portable and even mobile applications, according to Walby: “The adaptation of the mini-fountain technology to mobile platforms will be subject of further developments”.

However, the mini-clock is large when compared to more compact or chip-based clocks – which do not perform as well. Therefore, he believes that the technology is more likely to be implemented on ships or ground vehicles than aircraft.

“At a minimum, it should be easily transportable compared to the current solutions of similar performance,” he says.

“Highly innovative”

Atomic-clock expert Elizabeth Donley tells Physics World, “NPL has been highly innovative in recent years in standardizing fountain clock designs and even supplying caesium fountains to other national standards labs and organizations around the world for timekeeping purposes. This new compact rubidium fountain is a continuation of this work and can provide a smaller frequency standard with comparable performance to the larger fountains based on caesium.”

Donley spent more than two decades developing atomic clocks at the US National Institute of Standards and Technology (NIST) and now works as a consultant in the field. She agrees that miniature fountain clocks would be useful for holding over timing information when time signals are interrupted.

She adds, “Once the international community decides to redefine the second to be based on an optical transition, it won’t matter if you use rubidium or caesium. So I see this work as more of a practical achievement than a ground-breaking one. Practical achievements are what drives progress most of the time.”

The new clock is described in Applied Physics Letters.

The post NPL unveils miniature atomic fountain clock   appeared first on Physics World.

Shining laser light on a material produces subtle changes in its magnetic properties

21 janvier 2026 à 15:00

Researchers in Switzerland have found an unexpected new use for an optical technique commonly used in silicon chip manufacturing. By shining a focused laser beam onto a sample of material, a team at the Paul Scherrer Institute (PSI) and ETH Zürich showed that it was possible to change the material’s magnetic properties on a scale of nanometres – essentially “writing” these magnetic properties into the sample in the same way as photolithography etches patterns onto wafers. The discovery could have applications for novel forms of computer memory as well as fundamental research.

In standard photolithography – the workhorse of the modern chip manufacturing industry – a light beam passes through a transmission mask and projects an image of the mask’s light-absorption pattern onto a (usually silicon) wafer. The wafer itself is covered with a photosensitive polymer called a resist. Changing the intensity of the light leads to different exposure levels in the resist-covered material, making it possible to create finely detailed structures.

In the new work, Laura Heyderman and colleagues in PSI-ETH Zürich’s joint Mesoscopic Systems group began by placing a thin film of a magnetic material in a standard photolithography machine, but without a photoresist. They then scanned a focused laser beam with a wavelength of 405 nm over the surface of the sample while modulating its intensity. This process is known as direct write laser annealing (DWLA), and it makes it possible to heat areas of the sample that measure just 150 nm across.

In each heated area, thermal energy from the laser is deposited at the surface and partially absorbed by the film down to a depth of around 100 nm. The remainder dissipates through a silicon substrate coated in 300-nm-thick silicon oxide. However, the thermal conductivity of this substrate is low, which maximizes the temperature increase in the film for a given laser fluence. The researchers also sought to keep the temperature increase as uniform as possible by using thin-film heterostructures with a total thickness of less than 20 nm.

Crystallization and interdiffusion effects

Members of the PSI-ETH Zürich team applied this technique to several technologically important magnetic thin-film systems, including ferromagnetic CoFeB/MgO, ferrimagnetic CoGd and synthetic antiferromagnets composed of Co/Cr, Co/Ta or CoFeB/Pt/Ru. They found that DWLA induces both crystallization and interdiffusion effects in these materials. During crystallization, the orientation of the sample’s magnetic moments gradually changes, while interdiffusion alters the magnetic exchange coupling between the layers of the structures.

The researchers say that both phenomena could have interesting applications. The magnetized regions in the structures could be used in data storage, for example, with the direction of the magnetization (“up” or “down”) corresponding to the “1” or “0” of a bit of data. In conventional data-storage systems, these bits are switched with a magnetic field, but team member Jeffrey Brock explains that the new technique allows electric currents to be used instead. This is advantageous because electric currents are easier to produce than magnetic fields, while data storage devices switched with electricity are both faster and capable of packing more data into a given space.

Team member Lauren Riddiford says the new work builds on previous studies by members of the same group, which showed it was possible to make devices suitable for computer memory by locally patterning magnetic properties. “The trick we used here was to locally oxidize the topmost layer in a magnetic multilayer,” she explains. “However, we found that this works only in a few systems and only produces abrupt changes in the material properties. We were therefore brainstorming possible alternative methods to create gradual, smooth gradients in material properties, which would open possibilities to even more exciting applications and realized that we could perform local annealing with a laser originally made for patterning polymer resist layers for photolithography.”

Riddiford adds that the method proved so fast and simple to implement that the team’s main challenge was to investigate all the material changes it produced. Physical characterization methods for ultrathin films can be slow and difficult, she tells Physics World.

The researchers, who describe their technique in Nature Communications, now hope to use it to develop structures that are compatible with current chip-manufacturing technology. “Beyond magnetism, our approach can be used to locally modify the properties of any material that undergoes changes when heated, so we hope researchers using thin films for many different devices – electronic, superconducting, optical, microfluidic and so on – could use this technique to design desired functionalities,” Riddiford says. “We are looking forward to seeing where this method will be implemented next, whether in magnetic or non-magnetic materials, and what kind of applications it might bring.”

The post Shining laser light on a material produces subtle changes in its magnetic properties appeared first on Physics World.

The obscure physics theory that helped Chinese science emerge from the shadows

21 janvier 2026 à 12:00

“The Straton Model of elementary particles had very limited influence in the West,” said Jinyan Liu as she sat with me in a quiet corner of the CERN cafeteria. Liu, who I caught up with during a break in a recent conference on the history of particle physics, was referring to a particular model of elementary particle physics first put together in China in the mid-1960s. The Straton Model was, and still largely is, unknown outside that country. “But it was an essential step forward,” Liu added, “for Chinese physicists in joining the international community.”

Liu was at CERN to give a talk on how Chinese theorists redirected their research efforts in the years after the Cultural Revolution, which ended in 1976. They switched from the Straton Model, which was a politically infused theory of matter favoured by Mao Zedong, the founder of the People’s Republic of China, to mainstream particle physics as practised by the rest of the world. It’s easy to portray the move as the long-overdue moment when Chinese scientists resumed their “real” physics research. But, Liu told me, “actually it was much more complicated”.

A physicist by training, Liu received her PhD on contemporary theories of spontaneous charge-parity (CP) violation from the Institute of Theoretical Physics at the Chinese Academy of Sciences (CAS) in 2013. She then switched to the CAS Institute for History of Natural Sciences, where she was its first member with a physics PhD. Her initial research topic was the history and development of the Straton Model.

The model is essentially a theory of the structure of hadrons – either baryons (such as protons and neutrons) or mesons (such as pions and kaons). But the model’s origins are as improbable as they are labyrinthine. Mao, who had a keen interest in natural science, was convinced that matter was infinitely divisible, and in 1963 he came across an article by the Marxist-inspired Japanese physicist Shoichi Sakata (1911–1970).

First published in Japanese in 1961 and later translated into Russian, Sakata’s paper was entitled “Dialogues concerning a new view of elementary particles”. It restated Sakata’s belief, which he had been working on since the 1950s, that hadrons are made of smaller constituents – “elementary particles are not the ultimate elements of matter” as he put it. With some Chinese scholars back then still paying close attention to publications from the Soviet Union, their former political and ideological ally, that paper was then translated into Chinese.

This version appeared in the Bulletin of the Studies of Dialectics of Nature in 1963. Mao, who received an issue of that bulletin from his son-in-law, was engrossed in Sakata’s paper, for it seemed to offer scientific support for his own views. Sakata’s article – both in the original Japanese and now in Chinese – cited Friedrich Engels’ view that matter has numerous stages of discrete but qualitatively different parts. In addition, it quoted Lenin’s remark that “even the electron is inexhaustible”.

A wider dimension

“International politics now also entered,” Liu told me, as we discussed the issue further at CERN. A split between China and the Soviet Union had begun to open up in the late 1950s, with Mao breaking off relations with the Soviet Union and starting to establish non-governmental science and technology exchanges between China and Japan. Indeed, when China hosted the Peking Symposium of foreign scientists in 1964, Japan brought the biggest delegation, with Sakata as its leader.

At the event, Mao personally congratulated Sakata on his theory. It was, Sakata later recalled, “the most unforgettable moment of my journey to China”. In 1965, Sakata’s paper was retranslated from the Japanese original, with an annotated version published in Red Flag and the newspaper Renmin ribao, or “People’s Daily”, both official organs of the Chinese Communist Party.

Chinese physicists, who had been assigned to work on the atomic bomb and other research deemed important by the Communist Party, now started to take note. Uninterested in philosophy, they realized that they could capitalize on Mao’s enthusiasm to make elementary particle physics a legitimate research direction.

As a result, 39 members of CAS, Peking University and the University of Science and Technology of China formed the Beijing Elementary Particle Group. Between 1965 and 1966, they wrote dozens of papers on a model of hadrons inspired by both Sakata’s work and quark theory based on the available experimental data. It was dubbed the Straton Model because it involved layers or “strata” of particles nested in each other.

Liu has interviewed most surviving members of the group and studied details of the model. It differed from the model being developed at the time by the US theorist Murray Gell-Mann, which saw quarks as not physical but mathematical elements. As Liu discovered, Chinese particle physicists were now given resources they’d never had before. In particular, they could use computers, which until then had been devoted to urgent national defence work. “To be honest,” Liu chuckled, “the elementary particle physicists didn’t use computers much, but at least they were made available.”

The high-water mark for the Straton Model occurred in July 1966 when members of the Beijing Elementary Particle Group presented it at a summer physics colloquium organized by the China Association for Science and Technology. The opening ceremony was held in Tiananmen Square, in what was then China’s biggest conference centre, with attendees including Abdus Salam from Imperial College London. The only high-profile figure to be invited from the West, Salam was deemed acceptable because he was science advisor to the president of Pakistan, a country considered outside the western orbit.

The proceedings of the colloquium were later published as “Research on the theory of elementary particles carried out under the brilliant illumination of Mao Tse-Tung’s thought”. Its introduction was what Liu calls a “militant document” – designed to reinforce the idea that the authors were carrying Mao’s thought into scientific research to repudiate “decadent feudal, bourgeois and revisionist ideologies”.

Participants in Beijing had expected to make their advances known internationally by publishing the proceedings in English. But the Cultural Revolution had begun just two months before, and publications in English were forbidden. “As a result,” Liu told me, “the model had very limited influence outside China.” Sakata, however, had an important influence on Japanese theorists, having co-authored the key paper on neutrino flavour oscillation (Prog. Theor. Phys. 28 870).

A resurfaced effort

In recent years, Liu has shed new light on the Straton Model, writing a paper in the journal Chinese Annals of History of Science and Technology (2 85). In 2022 she also published a Chinese-language book entitled Constructing a Theory of Hadron Structure: Chinese Physicists’ Straton Model, which describes the downfall of the model after 1966. None of its predicted material particles appeared, though a candidate event was once recorded at a cosmic-ray observatory in the south of China.

By 1976, quantum chromodynamics (QCD) had convincingly emerged as the established model of hadrons. The effective end of the Straton Model took place at a conference in January 1980 in Conghua, near Hong Kong. Hung-Yuan Tzu, one of the key leaders of the Beijing Group, gave a paper entitled “Reminiscences of the Straton Model”, signalling that physics had moved on.

During our meeting at CERN, Liu showed me photos of the 1980 event. “It was a very important conference in the history of Chinese physics,” she said, “the first opening to Chinese physicists in the West”. Visits by Chinese expatriates were organized by Tsung-Dao Lee and Chen-Ning Yang, who shared the 1957 Nobel Prize for Physics for their work on parity violation.

The critical point

It is easy for westerners to mock the Straton Model; Sheldon Glashow once referred to it as being about “Maons”. But Liu sees it as significant research that had many unexpected consequences, such as helping to advance physics research in China. “It gave physicists a way to pursue quantum field theory without having to do national defence work”.

The model also trained young researchers in particle physics and honed their research competence. After the post-Cultural Revolution reform and its opening to the West, these physicists could then integrate into the international community. “The story,” Liu said, “shows how ingeniously the Chinese physicists adapted to the political situation.”

The post The obscure physics theory that helped Chinese science emerge from the shadows appeared first on Physics World.
