
Melting ice propels itself across a patterned surface

Researchers in the US are the first to show how a melting ice disc can quickly propel itself across a patterned surface in a manner reminiscent of the Leidenfrost effect. Jonathan Boreyko and colleagues at Virginia Tech demonstrated how the discs can suddenly slingshot themselves along herringbone channels when a small amount of heat is applied.

The Leidenfrost effect is a classic physics experiment whereby a liquid droplet levitates above a hot surface – buoyed by vapour streaming from the bottom of the droplet. In 2022, Boreyko’s team extended the effect to a disc of ice. This three-phase Leidenfrost effect requires a much hotter surface because the ice must first melt to liquid, which then evaporates.

The team also noticed that the ice discs can propel themselves in specific directions across an asymmetrically-patterned surface. This ratcheting effect also occurs with Leidenfrost droplets, and is related to the asymmetric emission of vapour.

“Quite separately, we found out about a really interesting natural phenomenon at Death Valley in California, where boulders slowly move across the desert,” Boreyko adds. “It turns out this happens because they are sitting on thin rafts of ice, which the wind can then push over the underlying meltwater.”

Combined effects

In their latest study, Boreyko’s team considered how these two effects could be combined – allowing ice discs to propel themselves across cooler surfaces like the Death Valley boulders, but without any need for external forces like the wind.

They patterned a surface with a network of V-shaped herringbone channels, each branching off at an angle from a central channel. At first, meltwater formed an even ring around the disc – but as the channels directed its subsequent flow, the ice began to move in the same direction.

“For the Leidenfrost droplet ratchets, they have to heat the surface way above the boiling point of the liquid,” Boreyko explains. “In contrast, for melting ice discs, any temperature above freezing will cause the ice to melt and then move along with the meltwater.”

The speed of the disc’s movement depended on how easily water spreads out on to the herringbone channels. When etched onto bare aluminium, the channels were hydrophilic – encouraging meltwater to flow along them. Predictably, since liquid water is far more dense and viscous than vapour, this effect unfolded far more slowly than the three-phase Leidenfrost effect demonstrated in the team’s previous experiment.

Surprising result

Yet as Boreyko describes, “a much more surprising result was when we tried spraying a water-repellent coating over the surface structure.” While preventing meltwater from flowing quickly through the channels, this coating roughened the surface with nanostructures, which initially locked the ice disc in place as it rested on the ridges between the channels.

As the ice melted, the ring of meltwater partially filled the channels beneath the disc. Gradually, however, the ratcheted surface directed more water to accumulate in front of the disc – introducing a Laplace pressure difference between the two sides of the disc.

When this pressure difference is strong enough, the ice suddenly dislodges from the surface. “As the meltwater preferentially escaped on one side, it created a surface tension force that ‘slingshotted’ the ice at a dramatically higher speed,” Boreyko describes.
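
To get a feel for why an asymmetric ring of meltwater can fling a macroscopic object, the sketch below makes a back-of-the-envelope estimate of the surface-tension force on a small ice disc using the standard capillary scaling F ~ γL. The disc diameter and thickness are assumed for illustration and are not taken from the study.

```python
import math

# Rough order-of-magnitude estimate of the "slingshot" force. The disc size and
# thickness below are assumed for illustration; they are not taken from the paper.
GAMMA_WATER = 0.072   # N/m, surface tension of water near 0 °C
RHO_ICE = 917         # kg/m^3

diameter = 0.05       # m, assumed ice-disc diameter
thickness = 0.005     # m, assumed disc thickness

# A meniscus pinned along the front edge pulls with a force of order
# (surface tension) x (length of wetted edge), taken here as the disc diameter.
force = GAMMA_WATER * diameter                              # ~3.6e-3 N
mass = RHO_ICE * math.pi * (diameter / 2) ** 2 * thickness  # ~9e-3 kg
acceleration = force / mass                                 # ~0.4 m/s^2

print(f"force ~ {force * 1e3:.1f} mN, mass ~ {mass * 1e3:.0f} g, "
      f"acceleration ~ {acceleration:.2f} m/s^2")
```

Even with these modest assumed dimensions, a millinewton-scale pull on a few grams of ice gives an appreciable acceleration, which is consistent with the sudden dislodging described above.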

Applications of the new effect include surfaces that could be de-iced with just a small amount of heating. Alternatively, energy could be harvested from ice-disc motion. It could also be used to propel large objects across a surface, says Boreyko. “It turns out that whenever you have more liquid on the front side of an object, and less on the backside, it creates a surface tension force that can be dramatic.”

The research is described in ACS Applied Materials & Interfaces.


Android phone network makes an effective early warning system for earthquakes

The global network of Android smartphones makes a useful earthquake early warning system, giving many users precious seconds to act before the shaking starts. These findings, which come from researchers at Android’s parent organization Google, are based on a three-year-long study involving millions of phones in 98 countries. According to the researchers, the network’s capabilities could be especially useful in areas that lack established early warning systems.

“By using Android smartphones, which make up 70% of smartphones worldwide, the Android Earthquake Alert (AEA) system can help provide life-saving warnings in many places around the globe,” says study co-leader Richard Allen, a visiting faculty researcher at Google who directs the Berkeley Seismological Laboratory at the University of California, Berkeley, US.

Traditional earthquake early warning systems use networks of seismic sensors expressly designed for this purpose. First implemented in Mexico and Japan, and now also deployed in Taiwan, South Korea, the US, Israel, Costa Rica and Canada, they rapidly detect earthquakes in areas close to the epicentre and issue warnings across the affected region. Even a few seconds of warning can be useful, Allen explains, because it enables people to take protective actions such as the “drop, cover and hold on” (DCHO) sequence recommended in most countries.

Building such seismic networks is expensive, and many earthquake-prone regions do not have them. What they do have, however, is smartphones. Most such devices contain built-in accelerometers, and as their popularity soared in the 2010s, seismic scientists began exploring ways of using them to detect earthquakes. “Although the accelerometers in these phones are less sensitive than the permanent instruments used in traditional seismic networks, they can still detect tremors during strong earthquakes,” Allen tells Physics World.

A smartphone-based warning system

By the late 2010s, several teams had developed smartphone apps that could sense earthquakes when they happen, with early examples including Mexico’s SkyAlert and Berkeley’s MyShake. The latest study takes this work a step further. “By using the accelerometers in a network of smartphones like a seismic array, we are now able to provide warnings in some parts of the world where they didn’t exist before and are most needed,” Allen explains.

Working with study co-leader Marc Stogaitis, a principal software engineer at Android, Allen and colleagues tested the AEA system between 2021 and 2024. During this period, the app detected an average of 312 earthquakes a month, with magnitudes ranging from 1.9 to 7.8 (corresponding to events in Japan and Türkiye, respectively).

Detecting earthquakes with smartphones

Animation showing phones detecting shaking as a magnitude 6.2 earthquake in Türkiye progressed. Yellow dots are phones that detect shaking. The yellow circle is the P-wave’s estimated location and the red circle is for the S-wave. Note that phones can detect shaking for reasons other than an earthquake, and the system needs to handle this source of noise. This video has no sound. (Courtesy: Google)

For earthquakes of magnitude 4.5 or higher, the system sent “TakeAction” alerts to users. These alerts are designed to draw users’ attention immediately and prompt them to take protective actions such as DCHO. The system sent alerts of this type on average 60 times per month during the study period, for an average of 18 million individual alerts per month. The system also delivered lesser “BeAware” alerts to regions expected to experience a shaking intensity of 3 or 4.
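
The two alert tiers described above amount to a simple decision rule. The sketch below is a paraphrase of that description rather than Google's actual logic; the intensity cut-off used for a “TakeAction” alert is an assumption made for this illustration.

```python
def choose_alert(magnitude, predicted_intensity):
    """Toy paraphrase of the AEA alert tiers described in the article.

    predicted_intensity is on a Modified Mercalli-like scale; treating
    intensity 5+ as the "TakeAction" threshold is an assumption of this sketch.
    """
    if magnitude >= 4.5 and predicted_intensity >= 5:
        return "TakeAction"   # attention-grabbing alert prompting DCHO
    if predicted_intensity >= 3:
        return "BeAware"      # lighter notification for weaker expected shaking
    return None               # shaking too weak to warrant an alert

print(choose_alert(5.8, 6))   # -> TakeAction
print(choose_alert(5.8, 3))   # -> BeAware
```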

To assess how effective these alerts were, the researchers used Google Search to collect voluntary feedback via user surveys. Between 5 February 2023 and 30 April 2024, 1 555 006 people responded to a survey after receiving alerts generated from an AEA detection. Their responses indicated that 85% of them did indeed experience shaking, with 36% receiving the alert before the ground began to move, 28% during and 23% after.

Feeling the Earth move: Feedback from users who received an alert. A total of 1 555 006 responses to the user survey were collected over the period 5 February 2023 to 30 April 2024. During this time, alerts were issued for 1042 earthquakes detected by AEA. (Courtesy: Google)

Principles of operation

AEA works on the same principles of seismic wave propagation as traditional earthquake detection systems. When an Android smartphone is stationary, the system uses the output of its accelerometer to detect the type of sudden increase in acceleration that P and S waves in an earthquake would trigger. Once a phone detects such a pattern, it sends a message to Google servers with the acceleration information and an approximate location. The servers then search for candidate seismic sources that tally with this information.

“When a candidate earthquake source satisfies the observed data with a high enough confidence, an earthquake is declared and its magnitude, hypocentre and origin time are estimated based on the arrival time and amplitude of the P and S waves,” explains Stogaitis. “This detection capability is deployed as part of Google Play Services core system software, meaning it is on by default for most Android smartphones. As there are billions of Android phones around the world, this system provides an earthquake detection capability wherever there are people, in both wealthy and less-wealthy nations.”
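
The on-device step Stogaitis describes (a stationary phone watching its accelerometer for a sudden jump, then reporting to a server) can be sketched in a few lines. The threshold, the requirement of several consecutive anomalous samples and the fixed gravity baseline are simplifications invented for this illustration; they are not Android's actual trigger criteria.

```python
import math

G = 9.81  # m/s^2, expected magnitude of acceleration for a phone at rest

def detect_candidate_shaking(samples, threshold=0.3, min_hits=5):
    """Flag a possible earthquake trigger from accelerometer samples.

    samples is a sequence of (ax, ay, az) readings in m/s^2 from a stationary
    phone. The threshold (m/s^2) and the number of consecutive anomalous
    samples required are illustrative values, not the AEA system's parameters.
    """
    hits = 0
    for ax, ay, az in samples:
        deviation = abs(math.sqrt(ax**2 + ay**2 + az**2) - G)
        hits = hits + 1 if deviation > threshold else 0
        if hits >= min_hits:
            return True   # a real phone would now send the arrival time,
                          # amplitude and coarse location to a server, which
                          # searches for a consistent seismic source
    return False
```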

In the future, Allen says that he and his colleagues hope to use the same information to generate other hazard-reducing tools. Maps of ground shaking, for example, could assist the emergency response after an earthquake.

For now, the researchers, who report their work in Science, are focused on improving the AEA system. “We are learning from earthquakes as they occur around the globe and the Android Earthquake Alerts system is helping to collect information about these natural disasters at a rapid rate,” says Allen. “We think that we can continue to improve both the quality of earthquake detections, and also improve on our strategies to deliver effective alerts.”


Predicted quasiparticles called ‘neglectons’ hold promise for robust, universal quantum computing

Quantum computers open the door to profound increases in computational power, but the quantum states they rely on are fragile. Topologically protected quantum states are more robust, but the most experimentally promising route to topological quantum computing limits the calculations these states can perform. Now, however, a team of mathematicians and physicists in the US has found a way around this barrier. By exploiting a previously neglected aspect of topological quantum field theory, the team showed that these states can be much more broadly useful for quantum computation than was previously believed.

The quantum bits (qubits) in topological quantum computers are based on particle-like knots, or vortices, in the sea of electrons washing through a material. In two-dimensional materials, the behaviour of these quasiparticles diverges from that of everyday bosons and fermions, earning them the name of anyons (from “any”). The advantage of anyon-based quantum computing is that the only thing that can change the state of anyons is moving them around in relation to each other – a process called “braiding” that alters their relative topology.

Photo of a blackboard containing a diagram of anyon braiding. Writing on the blackboard says "Quantum gates are implemented by braiding anyons" and "Key idea: Quantum state evolves by braiding output only depends on the topology of the braid, *not* the path taken"
Topological protection: Diagram of a scheme for implementing quantum gates by braiding anyons. (Courtesy: Gus Ruelas/USC)

However, as team leader Aaron Lauda of the University of Southern California explains, not all anyons are up to the task. Certain anyons derived from mathematical symmetries appear to have a quantum dimension of zero, meaning that they cannot be manipulated in quantum computations. Traditionally, he says, “you just throw those things away”.

The problem is that in this so-called “semisimple” model, braiding the remaining anyons, which are known as Ising anyons, only lends itself to a limited range of computational logic gates. These gates are called Clifford gates, and they can be efficiently simulated by classical computers, which reduces their usefulness for truly ground-breaking quantum machines.
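
The claim that Clifford gates can be simulated efficiently on classical computers rests on the Gottesman-Knill theorem: Clifford gates map Pauli operators to Pauli operators under conjugation, so a state can be tracked by a short list of stabilizers rather than an exponentially large vector. The snippet below is a generic numerical check of that defining property for the Hadamard, S and CNOT gates; it is textbook background, not code from the new work.

```python
import itertools
import numpy as np

# Single-qubit Pauli matrices
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)
PAULIS = {"I": I, "X": X, "Y": Y, "Z": Z}

# The standard Clifford generators
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.diag([1, 1j])
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

def pauli_strings(n):
    """Yield all n-qubit Pauli strings as (label, matrix) pairs."""
    for labels in itertools.product(PAULIS, repeat=n):
        mat = np.array([[1]], dtype=complex)
        for l in labels:
            mat = np.kron(mat, PAULIS[l])
        yield "".join(labels), mat

def maps_paulis_to_paulis(gate, n):
    """Check that U P U^dagger is (up to phase) a Pauli string for every P."""
    for _, P in pauli_strings(n):
        conjugated = gate @ P @ gate.conj().T
        if not any(np.allclose(conjugated, phase * Q)
                   for _, Q in pauli_strings(n)
                   for phase in (1, -1, 1j, -1j)):
            return False
    return True

for name, gate, n in [("H", H, 1), ("S", S, 1), ("CNOT", CNOT, 2)]:
    print(name, maps_paulis_to_paulis(gate, n))   # prints True for all three
```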

New mathematical tools for anyons

Lauda’s interest in this problem was piqued when he realized that there had been some progress in the mathematical tools that apply to anyons. Notably, in 2011, Nathan Geer at Utah State University and Jonathan Kujawa at the University of Oklahoma in the US, together with Bertrand Patureau-Mirand at Université de Bretagne-Sud in France showed that what appear to be zero-dimensional objects in topological quantum field theory (TQFT) can actually be manipulated in ways that were not previously thought possible.

“What excites us is that these new TQFTs can be more powerful and possess properties not present in the traditional setting,” says Geer, who was not involved in the latest work.

Photo of a blackboard containing an explanation of how to encode qubits into the collective state of a neglecton and two Ising anyons, which are quasiparticle vortices in a 2D material. The explanation includes a diagram showing the neglecton and the Ising anyons in a 2D material placed in a vertically oriented magnetic field. It also includes sketches showing how to perform braiding with this collection of particles and create 0 and 1 ket states
Just add neglectons: Encoding qubits into collective state of three anyons. (Courtesy: Gus Ruelas/USC)

As Lauda explains it, this new approach to TQFT led to “a different way to measure the contribution” of the anyons that the semisimple model leaves out – and surprisingly, the result wasn’t zero. Better still, he and his colleagues found that when certain types of discarded anyons – which they call “neglectons” because they were neglected in previous approaches – are added back into the model, Ising anyons can be braided around them in such a way as to allow any quantum computation.

The role of unitarity

Here, the catch was that including neglectons meant that the new model lacked a property known as unitarity. This is essential in the widely held probabilistic interpretation of quantum mechanics. “Most physicists start to get squeamish when you have, like, ‘non-unitarity’ or what we say, non positive definite [objects],” Lauda explains.

The team solved this problem with some ingenious workarounds created by Lauda’s PhD student, Filippo Iulianelli. Thanks to these workarounds, the team was able to confine the computational space to only those regions where anyon transformations work out as unitary.

Shawn Cui, who was not involved in this work, but whose research at Purdue University, US, centres around topological quantum field theory and quantum computation, describes the research by Lauda and colleagues as “a substantial theoretical advance with important implications for overcoming limitations of semisimple models”. However, he adds that realizing this progress in experimental terms “remains a long-term goal”.

For his part, Lauda points out that there are good precedents for particles being discovered after mathematical principles of symmetry were used to predict their existence. Murray Gell-Mann’s prediction of the omega minus baryon in 1962 is, he says, a case in point. “One of the things I would say now is we already have systems where we’re seeing Ising anyons,” Lauda says. “We should be looking also for these neglectons in those settings.”


Graphite ‘hijacks’ the journey from molten carbon to diamond

At high temperatures and pressures, molten carbon has two options. It can crystallize into diamond and become one of the world’s most valuable substances. Alternatively, it can crystallize into graphite, which is industrially useful but somewhat less exciting.

Researchers in the US have now discovered what causes molten carbon to “choose” one crystalline form over the other. Their findings, which are based on sophisticated simulations that use machine learning to predict molecular behaviour, have implications for several fields, including geology, nuclear fusion and quantum computing as well as industrial diamond production.

Monitoring crystallization in molten carbon is challenging because the process is rapid and occurs under conditions that are hard to produce in a laboratory. When scientists have tried to study this region of carbon’s phase diagram using high pressure flash heating, their experiments have produced conflicting results.

A better understanding of phase changes near the crystallization point could bring substantial benefits. Liquid-phase carbon is a known intermediate in the synthesis of artificial diamonds, nanodiamonds and the nitrogen-vacancy-doped diamonds used in quantum computing. The presence of diamond in natural minerals can also shed light on tectonic processes in Earth-like planets and the deep-Earth carbon cycle.

Crystallization process can be monitored in detail

In the new work, a team led by chemist Davide Donadio of the University of California, Davis used machine-learning-accelerated, quantum-accurate molecular dynamics simulations to model how diamond and graphite form as liquid carbon cools from 5000 to 3000 K at pressures ranging from 5 to 30 GPa. While such extreme conditions can be created using laser heating, Donadio notes that doing so requires highly specialized equipment. Simulations also provide a level of control over conditions and an ability to monitor the crystallization process at the atomic scale that would be difficult, if not impossible, to achieve experimentally.
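
For readers unfamiliar with this kind of protocol, the sketch below shows the shape of a cooling-ramp molecular dynamics run using the ASE library. ASE's toy EMT calculator stands in for the machine-learned interatomic potential used in the study, the cell is tiny, and the run is at constant volume rather than at the 5-30 GPa pressures the team simulated, so this is an illustration of the workflow only, not a reproduction of their calculations.

```python
# Minimal sketch of a temperature-ramp MD protocol with ASE (not the study's code).
from ase import units
from ase.build import bulk
from ase.calculators.emt import EMT
from ase.md.langevin import Langevin
from ase.md.velocitydistribution import MaxwellBoltzmannDistribution

atoms = bulk("C", "diamond", a=3.57).repeat((3, 3, 3))   # small carbon cell
atoms.calc = EMT()   # crude placeholder for the machine-learned potential

MaxwellBoltzmannDistribution(atoms, temperature_K=5000)  # start from a hot state

# Step the thermostat down from 5000 K to 3000 K, mimicking the cooling ramps
# described above (step size and run lengths here are arbitrary choices).
for T in range(5000, 2999, -250):
    dyn = Langevin(atoms, timestep=1 * units.fs, temperature_K=T, friction=0.02)
    dyn.run(200)
    print(f"T = {T} K, potential energy = {atoms.get_potential_energy():.2f} eV")
```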

The team’s simulations showed that the crystallization behaviour of molten carbon is more complex than previously thought. While it crystallizes into diamond at higher pressures, at lower pressures (up to 15 GPa) it forms graphite instead. This was surprising, the researchers say, because even at these slightly lower pressures, the material’s most thermodynamically stable phase ought to be diamond rather than graphite.

“Nature taking the path of least resistance”

The team attributes this unexpected behaviour to an empirical observation known as Ostwald’s step rule, which states that crystallization often proceeds through intermediate metastable phases rather than directly to the phase that is most thermodynamically stable. In this case, the researchers say that graphite, a nucleating metastable crystal, acts as a stepping stone because its structure more closely resembles that of the parent liquid carbon. For this reason, it hinders the direct formation of the stable diamond phase.

“The liquid carbon essentially finds it easier to become graphite first, even though diamond is ultimately more stable under these conditions,” says co-author Tianshu Li, a professor of civil and environmental engineering at George Washington University. “It’s nature taking the path of least resistance.”

The insights gleaned from this work, which is described in Nature Communications, could help resolve inconsistencies among historical electrical and laser flash-heating experiments, Donadio says. Though these experiments were aimed at resolving the phase diagram of carbon near the graphite-diamond-liquid triple point, various experimental details and recrystallization conditions may have meant that their systems instead became “trapped” in metastable graphitic configurations. Understanding how this happens could prove useful for manufacturing carbon-based materials such as synthetic diamonds and nanodiamonds at high pressure and temperature.

“I have been studying crystal nucleation for 20 years and have always been intrigued by the behaviour of carbon,” Donadio tells Physics World. “Studies based on so-called empirical potentials have been typically unreliable in this context and ab initio density functional theory-based calculations are too slow. Machine learning potentials allow us to overcome these issues, having the right combination of accuracy and computational speed.”

Looking to the future, Donadio says he and his colleagues aim to study more complex chemical compositions. “We will also be focusing on targeted pressures and temperatures, the likes of which are found in the interiors of giant planets in our solar system.”


Building a quantum powerhouse in the US Midwest

In this episode of the Physics World Weekly podcast I am in conversation with two physicists who are leading lights in the quantum science and technology community in the US state of Illinois. They are Preeti Chalsani, who is chief quantum officer at Intersect Illinois, and David Awschalom, who is director of Q-NEXT.

As well as being home to Chicago, the third largest urban area in the US, the state also hosts two national labs (Fermilab and Argonne) and several top universities. In this episode, Awschalom and Chalsani explain how the state is establishing itself as a burgeoning hub for quantum innovation – along with neighbouring regions in Wisconsin and Indiana.

Chalsani talks about the Illinois Quantum and Microelectronics Park, a 128-acre technology campus that is being developed on the site of a former steel mill just south of Chicago. The park has already attracted its first major tenant, PsiQuantum, which will build a utility-scale, fault-tolerant quantum computer at the park.

Q-NEXT is led by Argonne National Laboratory, and Awschalom explains how academia, national labs, industry, and government are working together to make the region a quantum powerhouse.

  • Related podcasts include interviews with Celia Merzbacher of the US’s Quantum Economic Development Consortium; Nadya Mason of the Pritzker School of Molecular Engineering at the University of Chicago; and Travis Humble of the Quantum Science Center at Oak Ridge National Laboratory

(Courtesy: American Elements) This podcast is supported by American Elements, the world’s leading manufacturer of engineered and advanced materials. The company’s ability to scale laboratory breakthroughs to industrial production has contributed to many of the most significant technological advancements since 1990 – including LED lighting, smartphones, and electric vehicles.


Richard Muller: ‘Physics stays the same. What changes is how the president listens’

Richard Muller, a physicist at the University of California, Berkeley, was in his office when someone called Liz, who’d once taken one of his classes, showed up. She said her family had invited a physicist over for dinner who touted controlled nuclear fusion as a future energy source. When Liz suggested solar power was a better option, the guest grew patronizing. “If you wanted to power California,” he told her, “you’d have to plaster the entire state with solar cells.”

Fortunately, Liz remembered what she’d learned on Muller’s course, entitled “Physics for Future Presidents”, and explained why the dinner guest was wrong. “There’s a kilowatt in a square metre of sunlight,” she told him, “which means a gigawatt in a square kilometre – only about the space of a nuclear power plant.” Stunned, the physicist grew silent. “Your numbers don’t sound wrong,” he finally said. “Of course, today’s solar cells are only 15% efficient. But I’ll take a look again.”
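
Liz’s estimate is easy to redo explicitly. The short calculation below folds in the 15% cell efficiency the guest cited and a rough allowance for night and weather; the figure used for California’s average electricity demand is an assumed round number for illustration, not a number from the article.

```python
# Back-of-the-envelope version of Liz's solar estimate (illustrative numbers only).
SOLAR_IRRADIANCE = 1_000   # W per m^2 of full sunlight, i.e. ~1 kW/m^2
EFFICIENCY = 0.15          # the "15% efficient" cells the dinner guest mentioned
CAPACITY_FACTOR = 0.25     # crude allowance for night and weather (assumed)
DEMAND = 30e9              # W, assumed average California electric demand

raw_per_km2 = SOLAR_IRRADIANCE * 1e6            # 1 GW of sunlight per km^2, as Liz said
delivered_per_km2 = raw_per_km2 * EFFICIENCY * CAPACITY_FACTOR

area_km2 = DEMAND / delivered_per_km2
print(f"~{raw_per_km2 / 1e9:.0f} GW of sunlight per km^2")
print(f"~{area_km2:.0f} km^2 of panels for the assumed demand "
      f"({100 * area_km2 / 424_000:.2f}% of California's ~424,000 km^2)")
```

Even after efficiency and capacity-factor penalties, the required area comes out to a few hundred square kilometres, a small fraction of one percent of the state, which is the substance of Liz’s rebuttal.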

It’s a wonderful story that Muller told me when I visited him a few months ago to ask about his 2008 book Physics for Future Presidents: the Science Behind the Headlines. Based on the course that Liz took, the book tries to explain physics concepts underpinning key issues including energy and climate change. “She hadn’t just memorized facts,” Muller said. “She knew enough to shut up an expert who hadn’t done his homework. That’s what presidents should be able to do.” A president, Muller believes, should know enough science to have a sense for the value of expert advice.

Dissenting minds

Muller’s book was published shortly before Barack Obama’s two terms as US president. Obama was highly pro-science, appointing the Nobel-prize-winning physicist Steven Chu as his science adviser. With Donald Trump in the White House, I had come to ask Muller what advice – if any – he would change in the book. But it wasn’t easy for me to keep Muller on topic, as he derails easily with anecdotes of fascinating situations and extraordinary people that he’s encountered in his remarkable life.

Talking physics: Richard Muller explaining antimatter to students at the University of California, Berkeley, in 2005. (Courtesy: WikiCommons)

Born in New York City, Muller, 81, attended Bronx High School of Science and Columbia University, joining the University of California, Berkeley as a graduate student in the autumn of 1964. A few weeks after entering, he joined the Free Speech Movement to protest against the university’s ban on campus political activities. During a sit-in, Muller was arrested and dragged down the steps of Sproul Hall, Berkeley’s administration building.

As a graduate student, Muller worked with Berkeley physicist Luis Alvarez – who would later win the 1968 Nobel Prize for Physics – to send a balloon with a payload of cosmic-ray detectors over the Pacific. Known as the High Altitude Particle Physics Experiment (HAPPE), the apparatus crashed in the ocean. Or so Muller thought.

As Muller explained in a 2023 article in the Wall Street Journal, US intelligence recovered a Chinese surveillance device, shot down over Georgia by the US military, with a name that translated as “HAPI”. Muller found enough other similarities to conclude that the Chinese had recovered the device and copied it as a model for their balloons. But by then Muller had switched to studying negative kaon particles using bubble chambers. After his PhD, he stayed at Berkeley as a postdoc, eventually becoming a professor in 1980.

Muller is a prominent contrarian, publishing an article advancing the controversial – though some now argue that it’s plausible – view that the COVID-19 virus originated in a Chinese lab. For a long time he was a global-warming sceptic, but in 2012, after three years of careful analysis, he publicly changed his mind via an article in the New York Times. Former US President Bill Clinton cited Muller as “one of my heroes because he changed his mind on global warming”. Muller loved that remark, but told me: “I’m not a hero. I’m just a scientist.”

Muller was once shadowed by a sociology student for a week for a course project. “She was like [the primatologist] Dian Fossey and I was a gorilla,” Muller recalls. She was astonished. “I thought physicists spent all their time thinking and experimenting,” the student told him. “You spend most of your time talking.” Muller wasn’t surprised. “You don’t want to spend your time rediscovering something somebody already knows,” he said. “So physicists talk a lot.”

Recommended recommendations

I tried again to steer Muller back to the book. He said it was based on a physics course at Berkeley known originally as “Qualitative physics” and informally as physics for poets or dummies. One of the first people to teach it had been the theorist and “father of the fusion bomb” Edward Teller. “Teller was exceedingly popular,” Muller told me, “possibly because he gave everyone in class an A and no exams.”

After Teller, fewer and fewer students attended the course until enrolment dropped to 20. So when Muller took over in 1999, he retitled it “Physics for future presidents”, refocused it on contemporary issues, and rebuilt the enrolment until it typically filled a large auditorium with about 500 students. He retired in 2010 after a decade of teaching the course.

Making a final effort, I handed Muller a copy of his book, turned to the last page where he listed a dozen or so specific recommendations for future presidents, and asked him to say whether he had changed his mind in the intervening 17 years.

Fund strong programmes in energy efficiency and conservation? “Yup!”

Raise the miles-per-gallon of autos substantially? “Yup.”

Support efforts at sequestering carbon dioxide? “I’m not much in favour anymore because the developing world can’t afford it.”

Encourage the development of nuclear power? “Yeah. Particularly fission; fusion’s too far in the future. Also, I’d tell the president to make clear that nuclear waste storage is a solved problem, and make sure that Yucca Mountain is quickly approved.”

See that China and India are given substantial carbon credits for building coal-fired power stations and nuclear plants? “Nuclear power plants yes, carbon credits no. Over a million and a half people in China die from coal pollution each year.”

Encourage solar and wind technologies? “Yes.” Cancel subsidies on corn ethanol? “Yes.” Encourage developments in efficient lighting? “Yes.” Insulation is better than heating? “Yes.” Cool roofs save more energy than air conditioners and are often better than solar cells? “Yes.”

The critical point

Muller’s final piece of advice to the future president was that the “emphasis must be on technologies that the developing world can afford”. He was adamant. “If what you are doing is buying expensive electric automobiles that will never sell in the developing world, it’s just virtue signalling in luxury.”

I kept trying to find some new physics Muller would tell the president, but it wasn’t much. “Physics mostly stays the same,” Muller concluded, “so the advice mainly does, too.” But not everything remains unvarying. “What changes the most”, he conceded, “is how the president listens”. Or even whether the president is listening at all.


NASA launches TRACERS mission to study Earth’s ‘magnetic shield’

NASA has successfully launched a mission to explore the interactions between the Sun’s and Earth’s magnetic fields. The Tandem Reconnection and Cusp Electrodynamics Reconnaissance Satellites (TRACERS) craft was sent into low-Earth orbit on 23 July from Vandenberg Space Force Base in California by a SpaceX Falcon 9 rocket. Following a month of calibration, the twin-satellite mission is expected to operate for a year.

The spacecraft will observe particles and electromagnetic fields in the Earth’s northern magnetic “cusp region”, which encircles the North Pole where the Earth’s magnetic field lines curve down toward Earth.

This unique vantage point allows researchers to study how magnetic reconnection — when field lines connect and explosively reconfigure — affects the space environment. Such observations will help researchers understand how processes change over both space and time.

The two satellites will collect data from over 3000 cusp crossings during the one-year mission with the information being used to understand space-weather phenomena that can disrupt satellite operations, communications and power grids on Earth.

Each of the nearly identical octagonal satellites – weighing less than 200 kg – features six instruments, including magnetometers, electric-field instruments and devices to measure the energy of ions and electrons in the plasma around the spacecraft.

The satellites will operate in a Sun-synchronous orbit about 590 km above ground, following one behind the other in close separation and passing through regions of space at least 10 seconds apart.

“TRACERS is an exciting mission,” says Stephen Fuselier from the Southwest Research Institute in Texas, who is the mission’s deputy principal investigator. “The data from that single pass through the cusp were amazing. We can’t wait to get the data from thousands of cusp passes.”


Jet stream study set to improve future climate predictions

Driven by global warming: The researchers identified which factors influence the jet stream in the southern hemisphere. (Courtesy: Leipzig University/Office for University Communications)

An international team of meteorologists has found that half of the recently observed shifts in the southern hemisphere’s jet stream are directly attributable to global warming – and pioneered a novel statistical method to pave the way for better climate predictions in the future.

Prompted by recent changes in the behaviour of the southern hemisphere’s summertime eddy-driven jet (EDJ) – a band of strong westerly winds located at a latitude of between 30°S and 60°S – the Leipzig University-led team sifted through historical measurement data to show that wind speeds in the EDJ have increased, while the wind belt has moved consistently toward the South Pole. They then used a range of innovative methods to demonstrate that 50% of these shifts are directly attributable to global warming, with the remainder triggered by other climate-related changes, including warming of the tropical Pacific and the upper tropical atmosphere, and the strengthening of winds in the stratosphere.

“We found that human fingerprints on the EDJ are already showing,” says lead author Julia Mindlin, research fellow at Leipzig University’s Institute for Meteorology. “Global warming, springtime changes in stratospheric winds linked to ozone depletion, and tropical ocean warming are all influencing the jet’s strength and position.”

“Interestingly, the response isn’t uniform, it varies depending on where you look, and climate models are underestimating how strong the jet is becoming. That opens up new questions about what’s missing in our models and where we need to dig deeper,” she adds.

Storyline approach

Rather than collecting new data, the researchers used existing, high-quality observational and reanalysis datasets – including the long-running HadCRUT5 surface temperature data, produced by the UK Met Office and the University of East Anglia, and a variety of sea surface temperature (SST) products including HadISST, ERSSTv5 and COBE.

“We also relied on something called reanalysis data, which is a very robust ‘best guess’ of what the atmosphere was doing at any given time. It is produced by blending real observations with physics-based models to reconstruct a detailed picture of the atmosphere, going back decades,” says Mindlin.

To interpret the data, the team – which also included researchers at the University of Reading, the University of Buenos Aires and the Jülich Supercomputing Centre – used a statistical approach called causal inference to help isolate the effects of specific climate drivers. They also employed “storyline” techniques to explore multiple plausible futures rather than simply averaging qualitatively different climate responses.

“These tools offer a way to incorporate physical understanding while accounting for uncertainty, making the analysis both rigorous and policy-relevant,” says Mindlin.

Future blueprint

For Mindlin, these findings are important for several reasons. First, they demonstrate “that the changes predicted by theory and climate models in response to human activity are already observable”. Second, she notes that they “help us better understand the physical mechanisms that drive climate change, especially the role of atmospheric circulation”.

“Third, our methodology provides a blueprint for future studies, both in the southern hemisphere and in other regions where eddy-driven jets play a role in shaping climate and weather patterns,” she says. “By identifying where and why models diverge from observations, our work also contributes to improving future projections and enhances our ability to design more targeted model experiments or theoretical frameworks.”

The team is now focused on improving understanding of how extreme weather events, like droughts, heatwaves and floods, are likely to change in a warming world. Since these events are closely linked to atmospheric circulation, Mindlin stresses that it is critical to understand how circulation itself is evolving under different climate drivers.

One of the team’s current areas of focus is drought in South America. Mindlin notes that this is especially challenging due to the short and sparse observational record in the region, and the fact that drought is a complex phenomenon that operates across multiple timescales.

“Studying climate change is inherently difficult – we have only one Earth, and future outcomes depend heavily on human choices,” she says. “That’s why we employ ‘storylines’ as a methodology, allowing us to explore multiple physically plausible futures in a way that respects uncertainty while supporting actionable insight.”

The results are reported in the Proceedings of the National Academy of Sciences.


Festival opens up the quantum realm

Collaborative insights: The UK Quantum Hackathon, organized by the NQCC for the fourth consecutive year and a cornerstone of the Quantum Fringe festival, allowed industry experts to work alongside early-career researchers to explore practical use cases for quantum computing. (Courtesy: NQCC)

The International Year of Quantum Science and Technology (IYQ) has already triggered an explosion of activities around the world to mark 100 years since the emergence of quantum mechanics. In the UK, the UNESCO-backed celebrations have provided the perfect impetus for the University of Edinburgh’s Quantum Software Lab (QSL) to work with the National Quantum Computing Centre (NQCC) to organize and host a festival of events that have enabled diverse communities to explore the transformative power of quantum computing.

Known collectively as the Quantum Fringe, in a clear nod to Edinburgh’s famous cultural festival, some 16 separate events have been held across Scotland throughout June and July. Designed to make quantum technologies more accessible and more relevant to the outside world, the programme combined education and outreach with scientific meetings and knowledge exchange.

The Quantum Fringe programme evolved from several regular fixtures in the quantum calendar. One of these cornerstones was the NQCC’s flagship event, the UK Quantum Hackathon, which is now in its fourth consecutive year. In common with previous editions, the 2025 event challenged teams of hackers to devise quantum solutions to real-world use cases set by mentors from different industry sectors. The teams were supported throughout the three-day event by the industry mentors, as well as by technical experts from providers of various quantum resources.

Time constrained: The teams of hackers were given two days to formulate their solution and test it on simulators, annealers and physical processors. (Courtesy: NQCC)

This year, perhaps buoyed by the success of previous editions, there was a significant uptick in the number of use cases submitted by end-user organizations. “We had twice as many applications as we could accommodate, and over half of the use cases we selected came from newcomers to the event,” said Abby Casey, Quantum Readiness Delivery Lead at the NQCC. “That level of interest suggests that there is a real appetite among the end-user community for understanding how quantum computing could be used in their organizations.”

Reflecting the broader agenda of the IYQ, this year the NQCC particularly encouraged use cases that offered some form of societal benefit, and many of the 15 that were selected aimed to align with the UN’s Sustainable Development Goals. One team investigated the accuracy of quantum-powered neural networks for predicting the progression of a tumour, while another sought to optimize the performance of graphene-based catalysts for fuel cells. Moonbility, a start-up firm developing digital twins to optimize the usage of transport and infrastructure, challenged its team to develop a navigation system capable of mapping out routes for people with specific mobility requirements, such as step-free access or calmer environments for those with anxiety disorders.

During the event the hackers were given just two days to explore the use case, formulate a solution, and generate results using quantum simulators, annealers and physical processors. The last day provided an opportunity for the teams to share their findings with their peers and a five-strong judging panel chaired by Sir Peter Knight, one of the architects of the UK’s National Quantum Technologies Programme, co-chair of the IYQ’s Steering Committee and a prime mover in the IYQ celebrations. “Your effort, energy and passion have been quite extraordinary,” commented Sir Peter at the end of the event. “It’s truly impressive to see what you have achieved in just two days.”

From the presentations it was clear that some of the teams had adapted their solution to reflect the physical constraints of the hardware platform they had been allocated. Those explorations were facilitated by the increased participation of mentors from hardware developers, including QuEra and Pasqal for cold-atom architectures, and Rigetti and IBM for gate-based superconducting processors. “Cold atoms offer greater connectivity than superconducting platforms, which may make them more suited to solving particular types of problems,” said Gerard Milburn of the University of Sussex, who has recently become a Quantum Fellow at the NQCC.

Results day: The final day of the hackathon allowed the teams to share their results with the other participants and a five-strong judging panel. (Courtesy: NQCC)

The winning team, which had been challenged by Aioi R&D Lab to develop a quantum-powered solution for scheduling road maintenance, won particular praise for framing the problem in a way that recognized the needs of all road users, not just motorists. “It was really interesting that they thought about the societal value right at the start, and then used those ethical considerations to inform the way they approached the problem,” said Knight.

The wider impact of the hackathon is clear to see, with the event providing a short, intense and collaborative learning experience for early-career researchers, technology providers, and both small start-up companies and large multinationals. This year, however, the hackathon also provided the finale to the Quantum Fringe, which was the brainchild of Elham Kashefi and her team at the QSL. Taking inspiration from the better-known Edinburgh Fringe, the idea was to create a diverse programme of events to engage and inspire different audiences with the latest ideas in quantum computing.

“We wanted to celebrate the International Year of Quantum in a unique way,” said Mina Doosti, one of the QSL’s lead researchers. “We had lots of very different events, many of which we hadn’t foreseen at the start. It was very refreshing, and we had a lot of fun.”

One of Doosti’s favourite events was a two-day summer school designed for senior high-school students. As well as introducing the students to the concepts of quantum computing, the QSL researchers challenged them to write some code that could be run on IBM’s free-to-access quantum computer. “The organizers and lecturers from the QSL worked hard to develop material that would make sense to the students, and the attendees really grabbed the opportunity to come and learn,” Doosti explained. “From the questions they were asking and the way they tackled the games and challenges, we could see that they were interested and that they had learnt something.”

From the outset the QSL team were also keen for the Quantum Fringe to become a focal point for quantum-inspired activities that were being planned by other organizations. Starting from a baseline of four pillar events that had been organized by the NQCC and the QSL in previous years, the programme eventually swelled to 16 separate gatherings with different aims and outcomes. That included a public lecture organized by the new QCi3 Hub – a research consortium focused on interconnected quantum technologies – which attracted around 200 people who wanted to know more about the evolution of quantum science and its likely impact across technology, industry, and society. An open discussion forum hosted by Quantinuum, one of the main sponsors of the festival, also brought together academic researchers, industry experts and members of the public to identify strategies for ensuring that quantum computing benefits everyone in society, not just a privileged few.

Quantum researchers also had plenty of technical events to choose from. The regular AIMday Quantum Computing, now in its third year, enabled academics to work alongside industry experts to explore a number of business-led challenges. More focused scientific meetings allowed researchers to share their latest results in quantum cryptography and cybersecurity, algorithms and complexity, and error correction in neutral atoms. For her part, Doosti co-led the third edition of Foundations in Quantum Computing, a workshop that combines invited talks with dedicated time for focused discussion. “The speakers are briefed to cover the evolution of a particular field and to highlight open challenges, and then we use the discussion sessions to brainstorm ideas around a specific question,” she explained.

Those scientific meetings were complemented by a workshop on responsible quantum innovation, again hosted by the QCi3 Hub, and a week-long summer school on the Isle of Skye that was run by Heriot-Watt University and the London School of Mathematics. “All of our partners ran their events in the way they wanted, but we helped them with local support and some marketing and promotion,” said Ramin Jafarzadegan, the QSL’s operations manager and the chair of the Quantum Fringe festival. “Bringing all of these activities together delivered real value because visitors to Edinburgh could take part in multiple events.”

Indeed, one clear benefit of this approach was that some of the visiting scientists stayed for longer, which also enabled them to work alongside the QSL team. That has inspired a new scheme, called QSL Visiting Scholars, that aims to encourage scientists from other institutions to spend a month or so in Edinburgh to pursue collaborative projects.

As a whole, the Quantum Fringe has helped both the NQCC and the QSL in their ambitions to bring diverse stakeholders together to create new connections and to grow the ecosystem for quantum computing in the UK. “The NQCC should have patented the ‘quantum hackathon’ name,” joked Sir Peter. “Similar events are popping up everywhere these days, but the NQCC’s was among the first.”


Understanding strongly correlated topological insulators

Topological insulators have generated a lot of interest in recent years because of their potential applications in quantum computing, spintronics and information processing.

The defining property of these materials is that their interior behaves as an electrical insulator while their surface behaves as an electrical conductor. In other words, electrons can only move along the material’s surface.

In some cases however, known as strongly correlated systems, the strong interactions between electrons cause this relatively simple picture to break down.

Understanding and modelling strongly correlated topological insulators, it turns out, is extremely challenging.

A team of researchers from the Kavli Institute for Theoretical Sciences, China, have recently tackled this challenge by using a new approach employing fermionic tensor states.

Their framework notably reduces the number of parameters needed in numerical simulations. This should lead to a greatly improved computational efficiency when modelling these systems.

By combining their methods with advanced numerical techniques, the researchers expect to be able to overcome the challenges posed by strong interaction effects.

This will lead to a deeper understanding of the properties of strongly correlated systems and could also enable the discovery of new materials with exciting new properties.


Physicists get dark excitons under control

Dark exciton control: Researchers assemble a large cryostat in an experimental physics laboratory, preparing for ultra-low temperature experiments with quantum dots on a semiconductor chip. (Courtesy: Universität Innsbruck)

Physicists in Austria and Germany have developed a means of controlling quasiparticles known as dark excitons in semiconductor quantum dots for the first time. The new technique could be used to generate single pairs of entangled photons on demand, with potential applications in quantum information storage and communication.

Excitons are bound pairs of negatively charged electrons and positively charged “holes”. When these electrons and holes have opposite spins, they recombine easily, emitting a photon in the process. Excitons of this type are known as “bright” excitons. When the electrons and holes have parallel spins, however, direct recombination by emitting a photon is not possible because it would violate the conservation of spin angular momentum. This type of exciton is therefore known as a “dark” exciton.

Because dark excitons are not optically active, they have much longer lifetimes than their bright cousins. For quantum information specialists, this is an attractive quality, because it means that dark excitons can store quantum states – and thus the information contained within these states – for much longer. “This information can then be released at a later time and used in quantum communication applications, such as optical quantum computing, secure communication via quantum key distribution (QKD) and quantum information distribution in general,” says Gregor Weihs, a quantum photonics expert at the Universität Innsbruck, Austria who led the new study.

The problem is that dark excitons are difficult to create and control. In semiconductor quantum dots, for example, Weihs explains that dark excitons tend to be generated randomly, such as when a quantum dot in a higher-energy state decays into a lower-energy state.

Chirped laser pulses lead to reversible exciton production

In the new work, which is detailed in Science Advances, the researchers showed that they could control the production of dark excitons in quantum dots by using laser pulses that are chirped, meaning that the frequency (or colour) of the laser light varies within the pulse. Such chirped pulses, Weihs explains, can turn one quantum dot state into another.

“We first bring the quantum dot to the (bright) biexciton state using a conventional technique and then apply a (storage) chirped laser pulse that turns this biexciton occupation (adiabatically) into a dark state,” he says. “The storage pulse is negatively chirped – its frequency decreases with time, or in terms of colour, it turns redder.” Importantly, the process is reversible: “To convert the dark exciton back into a bright state, we apply a (positively chirped) retrieval pulse to it,” Weihs says.
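
A chirped pulse is simply one whose instantaneous frequency drifts over the course of the pulse. The sketch below constructs a negatively chirped Gaussian pulse of the kind described for the storage step; the duration, centre frequency and chirp rate are arbitrary illustrative values, not the experimental parameters, and flipping the sign of the chirp gives the positively chirped retrieval pulse.

```python
import numpy as np

t = np.linspace(-5e-12, 5e-12, 4000)   # time axis in seconds (illustrative)
tau = 1.5e-12                          # pulse duration (assumed)
omega0 = 2 * np.pi * 375e12            # centre angular frequency, ~800 nm light (assumed)
alpha = -2e24                          # chirp rate in rad/s^2; negative = frequency falls with time

envelope = np.exp(-t**2 / (2 * tau**2))
field = envelope * np.cos(omega0 * t + 0.5 * alpha * t**2)

# Instantaneous angular frequency = d(phase)/dt = omega0 + alpha * t, so a
# negative alpha gives a pulse that starts "blue" and ends "red" (negative chirp).
inst_freq_THz = (omega0 + alpha * t) / (2 * np.pi) / 1e12
print(f"frequency sweeps from {inst_freq_THz[0]:.1f} THz to {inst_freq_THz[-1]:.1f} THz")
```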

One possible application for the new technique would be to generate single pairs of entangled photons on demand – the starting point for many quantum communication protocols. Importantly, Weihs adds that this should be possible with almost any type of quantum dot, whereas an alternative method known as polarization entanglement works for only a few quantum dot types with very special properties. “For example, it could be used to create ‘time-bin’ entangled photon pairs,” he tells Physics World. “Time-bin entanglement is particularly suited to transmitting quantum information through optical fibres because the quantum state stays preserved over very long distances.”

The study’s lead author, Florian Kappe, and his colleague Vikas Remesh describe the project as “a challenging but exciting and rewarding experience” that combined theoretical and experimental tools. “The nice thing, we feel, is that on this journey, we developed a number of optical excitation methods for quantum dots for various applications,” they say via e-mail.

The physicists are now studying the coherence time of the dark exciton states, which is an important property in determining how long they can store quantum information. According to Weihs, the results from this work could make it possible to generate higher-dimensional time-bin entangled photon pairs – for example, pairs of quantum states called qutrits that have three possible values.

“Thinking beyond this, we imagine that the technique could even be applied to multi-excitonic complexes in quantum dot molecules,” he adds. “This could possibly result in multi-photon entanglement, such as so-called GHZ (Greenberger-Horne-Zeilinger) states, which are an important resource in multiparty quantum communication scenarios.”


IOP president-elect Michele Dougherty named next Astronomer Royal

The space scientist Michele Dougherty from Imperial College London has been appointed the next Astronomer Royal – the first woman to hold the position. She will succeed the University of Cambridge cosmologist Martin Rees, who has held the role for the past three decades.

The title of Astronomer Royal dates back to the creation of the Royal Observatory in Greenwich in 1675, when it mostly involved advising Charles II on using the stars to improve navigation at sea. John Flamsteed from Derby was the first Astronomer Royal and since then 15 people have held the role.

Dougherty will now act as the official adviser to King Charles III on astronomical matters. She will hold the role alongside her Imperial job as well as being executive chair of the Science and Technology Facilities Council and the next president of the Institute of Physics (IOP), a two-year position she will take up in October.

After gaining a PhD in 1988 from the University of Natal in South Africa, Dougherty moved to Imperial in 1991, where she was head of physics from 2018 until 2024. She has been principal investigator of the magnetometer on the Cassini-Huygens mission to Saturn and its moons and also for the magnetometer for the JUICE craft, which is currently travelling to Jupiter to study its three icy moons.

She was made Commander of the Order of the British Empire in the 2018 New Year Honours for “services to UK Physical Science Research”. Dougherty is also a fellow of the Royal Society, winning its Hughes Medal in 2008 for her studies of Saturn’s moons, and she held a Royal Society Research Professorship from 2014 to 2019.

“I am absolutely delighted to be taking on the important role of Astronomer Royal,” says Dougherty. “As a young child I never thought I’d end up working on planetary spacecraft missions and science, so I can’t quite believe I’m actually taking on this position. I look forward to engaging the general public in how exciting astronomy is, and how important it and its outcomes are to our everyday life.”

Tom Grinyer, IOP group chief executive officer, offered his “warmest congratulations” to Dougherty. “As incoming president of the IOP and the first woman to hold this historic role [of Astronomer Royal], Dougherty is an inspirational ambassador for science and a role model for every young person who has gazed up at the stars and imagined a future in physics or astronomy.”


MOND versus dark matter: the clash for cosmology’s soul

The clash between dark matter and modified Newtonian dynamics (MOND) can get a little heated at times. On one side is the vast majority of astronomers who vigorously support the concept of dark matter and its foundational place in cosmology’s standard model. On the other side is the minority – a group of rebels convinced that tweaking the laws of gravity rather than introducing a new particle is the answer to explaining the composition of our universe.

Both sides argue passionately and persuasively, pointing out evidence that supports their view while discrediting the other side. Often it seems to come down to a matter of perspective – both sides use the same results as evidence for their cause. For the rest of us, how can we tell who is correct?

As long as we still haven’t identified what dark matter is made of, there will remain some ambiguity, leaving a door ajar for MOND. However, it’s a door that dark-matter researchers hope will be slammed shut in the not-too-distant future.

Crunch time for WIMPs

In part two of this series, where I looked at the latest proposals from dark-matter scientists, we met University College London’s Chamkaur Ghag, who is the spokesperson for Lux-ZEPLIN. This experiment is searching for “weakly interacting massive particles” or WIMPs – the leading dark-matter candidate – down a former gold mine in South Dakota, US. A huge seven-tonne tank of liquid xenon, surrounded by an array of photomultiplier tubes, watches patiently for the flashes of light that may occur when a passing WIMP interacts with a xenon atom.

Running since 2021, the experiment just released the results of its most recent search through 280 days of data, which uncovered no evidence of WIMPs above a mass of 9 GeV/c² (Phys. Rev. Lett. 135 011802). These results help to narrow the range of possible dark-matter theories, as the new limits impose constraints on WIMP parameters that are almost five times more rigorous than the previous best. Another experiment at the INFN Laboratori Nazionali del Gran Sasso in Italy, called XENONnT, is also hoping to spot the elusive WIMPs – in its case by looking for rare nuclear recoil interactions in a liquid xenon target chamber.

Huge water tank surrounded by pipes
Deep underground The XENON Dark Matter Project is hosted by the INFN Gran Sasso National Laboratory in Italy. The latest detector in this programme is the XENONnT (pictured) which uses liquid xenon to search for dark-matter particles. (Courtesy: XENON Collaboration)

Lux-ZEPLIN and XENONnT will cover half the parameter space of masses and energies that WIMPs could in theory have, but Ghag is more excited about a forthcoming, next-generation xenon-based WIMP detector dubbed XLZD that might settle the matter. XLZD brings together both the Lux-ZEPLIN and XENONnT collaborations, to design and build a single, common multi-tonne experiment that will hopefully leave WIMPs with no place to hide. “XLZD will probably be the final experiment of this type,” says Ghag. “It’s designed to be much larger and more sensitive, and is effectively the definitive experiment.”

I think none of us are ever going to fully believe it completely until we’ve found a WIMP and can reproduce it in a lab

Richard Massey

If WIMPs do exist, then this detector will find them, and it could happen on UK shores. Several locations around the world are in the running to host the experiment, including Boulby Mine Underground Laboratory near Whitby Bay on the north-east coast of England. If everything goes to plan, XLZD – which will contain between 40 and 100 tonnes of xenon – will be up and running and providing answers by the 2030s. It will be a huge moment for dark matter, and a nervous one for its researchers.

“I think none of us are ever going to fully believe it completely until we’ve found [a WIMP] and can reproduce it in a lab and show that it’s not just some abstract stuff that we call dark matter, but that it is a particular particle that we can identify,” says astronomer Richard Massey of the University of Durham, UK.

But if WIMPs are in fact a dead-end, then it’s not a complete death-blow for dark matter – there are other dark-matter candidates and other dark-matter experiments. For example, the Forward Search Experiment (FASER) at CERN’s Large Hadron Collider is looking for less massive dark-matter particles such as axions (read more about them in part 2). However, WIMPs have been a mainstay of dark-matter models since the 1980s. If the xenon-based experiments turn up empty-handed it will be a huge blow, and the door will creak open just a little bit more for MOND.

Galactic frontier

MOND’s battleground isn’t in particle detectors – it’s in the outskirts of galaxies and galaxy clusters, and its proof lies in the history of how our universe formed. This is dark matter’s playground too, with the popular models for how galaxies grow being based on a universe in which dark matter forms 85% of all matter. So it’s out in the depths of space where the two models clash.

The current standard model of cosmology describes how the growth of the large-scale structure of the universe, over the past 13.8 billion years of cosmic history since the Big Bang, is influenced by a combination of dark matter and dark energy (responsible for the accelerated expansion of the universe). Essentially, density fluctuations in the cosmic microwave background (CMB) radiation reflect the clumping of dark matter in the very early universe. As the cosmos aged, these clumps evolved into the cosmic web of matter – a universe-spanning network of dark-matter filaments along which most of the matter lies, separated by voids that are far more sparsely populated. Galaxies can form inside “dark matter haloes”, and at the densest points in the dark-matter filaments, galaxy clusters coalesce.

Simulations in this paradigm – known as lambda cold dark matter (ΛCDM) – suggest that galaxy and galaxy-cluster formation should be a slow process, with small galaxies forming first and gradually merging over billions of years to build up into the more massive galaxies that we see in the universe today. And it works – kind of. Recently, the James Webb Space Telescope (JWST) peered back in time to between just 300 and 400 million years after the Big Bang and found the universe to be populated by tiny galaxies perhaps just a thousand or so light-years across (ApJ 970 31). This is as expected, and over time they would grow and merge into larger galaxies.

1 Step back in time

(a) An infrared image from the JWST’s NIRCam, highlighting galaxy JADES-GS-z14-0 among thousands of stars and galaxies. (Courtesy: NASA/ESA/CSA/STScI/ Brant Robertson, UC Santa Cruz/ Ben Johnson, CfA/ Sandro Tacchella, University of Cambridge/ Phill Cargile, CfA)

(b) The spectrum of JADES-GS-z14-0 obtained with the JWST’s NIRSpec (Near-Infrared Spectrograph), showing a clear peak at roughly 1.8 microns. (Courtesy: NASA/ESA/CSA/ Joseph Olmsted, STScI/ S Carniani, Scuola Normale Superiore/ JADES Collaboration)

Data from the James Webb Space Telescope (JWST) form the basis of the JWST Advanced Deep Extragalactic Survey (JADES). A galaxy’s redshift can be determined from the location of a critical wavelength known as the Lyman-alpha break. For JADES-GS-z14-0 the redshift value is 14.32 (+0.08/–0.20), making it the second most distant galaxy known, seen less than 300 million years after the Big Bang. The current record holder, as of August 2025, is MoM-z14, which has a redshift of 14.4 (+0.02/–0.02), placing it less than 280 million years after the Big Bang (arXiv:2505.11263). Both galaxies belong to an era referred to as the “cosmic dawn”, which preceded the epoch of reionization, during which the universe became transparent to ultraviolet light. JADES-GS-z14-0 is particularly interesting to researchers not just because of its distance, but also because it is very bright. Indeed, it is much more intrinsically luminous and massive than expected for a galaxy that formed so soon after the Big Bang, raising further questions about the evolution of stars and galaxies in the early universe.
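As a rough guide to how such a redshift is read off a spectrum: the Lyman-alpha break sits at a rest-frame wavelength of about 121.6 nm (a standard value, not a number quoted in the caption), and cosmological redshift stretches it by a factor of 1 + z.

```latex
% Observed position of the Lyman-alpha break for JADES-GS-z14-0
\lambda_{\mathrm{obs}} = (1+z)\,\lambda_{\mathrm{rest}}
\approx (1 + 14.32)\times 121.6\ \mathrm{nm}
\approx 1.86\ \mu\mathrm{m}
```

This lands close to the 1.8 μm feature in the NIRSpec spectrum of panel (b), which is essentially how the redshift is pinned down.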

Yet the deeper we push into the universe, the more we observe challenges to the ΛCDM model – challenges that ultimately threaten the very existence of dark matter. For example, those early galaxies that the JWST has observed, while quite small, are also surprisingly bright – more so than ΛCDM predicts. This has been attributed to an initial mass function (IMF – the property that determines the average mass of the stars that form) that skews towards higher-mass, and therefore more luminous, stars than today’s. That sounds reasonable, except that astronomers still don’t understand why the IMF is what it is today (favouring the smallest stars; massive stars are rare), never mind what it might have been more than 13 billion years ago.

Not everyone is convinced, and the doubts are compounded by slightly later galaxies, seen around a billion years after the Big Bang, which continue the trend of being more luminous and more massive than expected. Indeed, some of these galaxies sport truly enormous black holes, hundreds of times more massive than the one at the heart of our Milky Way. Just a couple of billion years later, remarkably large galaxy clusters are already present – earlier than ΛCDM would lead one to expect.

The fall of ΛCDM?

Astrophysicist and MOND advocate Pavel Kroupa, from the University of Bonn in Germany, highlights giant elliptical galaxies in the early universe as an example of what he sees as a divergence from ΛCDM.

“We know from observations that the massive elliptical galaxies formed on shorter timescales than the less massive ellipticals,” he explains. This phenomenon has been referred to as “downsizing”, and Kroupa declares it is “a big problem for  ΛCDM” because the model says that “the big galaxies take longer to form, but what we see is exactly the opposite”.

To quantify this problem, a 2020 study (MNRAS 498 5581) by Australian astronomer Sabine Bellstedt and colleagues showed that half the mass in present-day elliptical galaxies was in place 11 billion years ago, compared with other galaxy types that only accrued half their mass on average about 6 billion years ago. The smallest galaxies only accrued that mass as recently as 4 billion years ago, in apparent contravention of ΛCDM.

Observations (ApJ 905 40) of a giant elliptical galaxy catalogued as C1-23152, which we see as it existed 12 billion years ago, show that it formed 200 billion solar masses worth of stars in just 450 million years – a huge firestorm of star formation that ΛCDM simulations just can’t explain. Perhaps it is an outlier – we’ve only sampled a few parts of the sky, not conducted a comprehensive census yet. But as astronomers probe these cosmic depths more extensively, such explanations begin to wear thin.

Kroupa argues that, by replacing dark matter with MOND, such giant early elliptical galaxies suddenly make sense. Working with Robin Eappen, a PhD student at Charles University in Prague, he modelled giant gas clouds in the very early universe collapsing under gravity according to MOND rather than in the presence of dark matter.

“It is just stunning that the time [of formation of such a large elliptical] comes out exactly right,” says Kroupa. “The more massive cloud collapses faster on exactly the correct timescale, compared to the less massive cloud that collapses slower. So when we look at an elliptical galaxy, we know that thing formed from MOND and nothing else.”

Elliptical galaxies are not the only thing with a size problem. In 2021 Alexia Lopez, a PhD student at the University of Central Lancashire, UK, discovered a “Giant Arc” of galaxies spanning 3.3 billion light-years, some 9.2 billion light-years away. And in 2023 Lopez spotted another gigantic structure, a “Big Ring” (shaped more like a coil) of galaxies 1.3 billion light-years in diameter, but with a circumference of about 4 billion light-years. At the opposite extreme of these giant structures are the massive under-dense voids that take up the space between the filaments of the cosmic web. The KBC Void (sometimes called the “Local Hole”), for example, is about two billion light-years across, and the Milky Way, along with a host of other galaxies, sits inside it. The trouble is that simulations in ΛCDM, with dark matter at its heart, cannot replicate structures and voids this big.

“We live in this huge under-density; we’re not at the centre of it but we are within it and such an under-density is completely impossible in ΛCDM,” says Kroupa, before declaring, “Honestly, it’s not worthwhile to talk about the ΛCDM model anymore.”

A bohemian model

Such fighting talk is dismissed by dark-matter astronomers because although there are obviously deficiencies in the ΛCDM model, it does such a good job of explaining so many other things. If we’re to kill ΛCDM because it cannot explain a few large ellipticals or some overly large galaxy groups or voids, then there needs to be a new model that can explain not only these anomalies, but also everything else that ΛCDM does explain.

“Ultimately we need to explain all the observations, and some of those MOND does better and some of those ΛCDM does better, so it’s how you weigh those different baskets,” says Stacy McGaugh, a MOND researcher from Case Western Reserve University in the US.

As it happens, Kroupa and his Bonn colleague Jan Pflamm-Altenburg are working on a new model that they think has what it takes to overthrow dark matter and the broader ΛCDM paradigm. They call it the Bohemian model (the name has a double meaning – Kroupa is originally from Czechia); it incorporates MOND as its main pillar, and Kroupa describes the results they are getting from their simulations in this paradigm as “stunning” (A&A 698 A167).

A lot of experts at Ivy League universities will say it’s all completely impossible. But I know that part of the community is just itching to have a completely different model

Pavel Kroupa

But Kroupa admits that not everybody will be happy to see it published. “If it’s published, a lot of experts at Ivy League universities will say it’s all completely impossible,” he says. “But I know for a fact that there is part of the community, the ‘bright part’ as I call them, which is just itching to have a completely different model.”

Kroupa is staying tight-lipped on the precise details of his new model, but says that according to simulations the puzzle of large-scale structure forming earlier than expected, and growing larger faster than expected, is answered by the Bohemian model. “These structures [such as the Giant Arc and the KBC Void] are so radical that they are not possible in the ΛCDM model,” he says. “However, they pop right out of this Bohemian model.”

Binary battle

Whether you believe Kroupa’s promises of a better model or whether you see it all as bluster, the fact remains that a dark-matter-dominated universe still has some problems. Maybe they’re not serious, and all it will take is a few tweaks to make those problems go away. But maybe they’ll persist, and require new physics of some kind, and it’s this possibility that continues to leave the door open for MOND. For the rest of us, we’re still grasping for a definitive statement one way or another.

For MOND, perhaps that definitive statement could still turn out to be binary stars, as discussed in the first article in this series. Researchers have been particularly interested in so-called “wide binaries” – pairs of stars that are more than 500 AU apart. Thanks to the vast distance between them, the gravitational impact of each star on the other is weak, making it a perfect test for MOND. Indranil Banik, of the University of St Andrews, UK, controversially concluded that there was no evidence for MOND operating on the smaller scales of binary-star systems. However, other researchers such as Kyu-Hyun Chae of Sejong University in South Korea argue that they have found evidence for MOND in binary systems, and have hit out at Banik’s findings.

Indeed, after the first part of this series was published, Chae reached out to me, arguing that Banik had analysed the data incorrectly. Specifically, Chae points out that the fraction of wide binaries with an extra, unseen close stellar companion to one or both of the binary stars – a factor designated fmulti – must be calibrated for when performing the MOND calculations. Often when two stars are extremely close together, their angular separation is so small that we can’t resolve them and don’t realize that they are a binary, he explains. So we might mistake a triple system – two stars so close together that we can’t distinguish them, plus a third star on a wider circumbinary orbit – for just a wide binary.

“I initially believed Banik’s claim, but because what’s at stake is too big and I started feeling suspicious, I chose to do my own investigation,” says Chae (ApJ 952 128). “I came to realize the necessity of calibrating fmulti due to the intrinsic degeneracy between mass and gravity (one cannot simultaneously determine the gravity boost factor and the amount of hidden mass).”

The probability of a wide binary having an unseen extra stellar companion is the same as for shorter binaries (those that we can resolve). But for shorter binaries the gravitational acceleration is high enough that they obey regular Newtonian gravity – MOND only comes into the picture at wider separations. Therefore, the mass uncertainty in the study of wide binaries in a MOND regime can be calibrated for using those shorter-period binaries. Chae argues that Banik did not do this. “I’m absolutely confident that if the Banik et al. analysis is properly carried out, it will reveal MOND’s low-acceleration gravitational anomaly to some degree.”
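To see why an unseen companion mimics a gravity boost – the mass–gravity degeneracy Chae describes – here is a toy Newtonian estimate. It is a minimal sketch with invented numbers (a 1000 AU binary of two solar-mass stars plus a hypothetical 0.5-solar-mass hidden companion), not the analysis used by either group.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, m

def circular_speed(total_mass_msun, separation_au):
    """Newtonian relative orbital speed for a circular orbit of the given total mass."""
    return math.sqrt(G * total_mass_msun * M_SUN / (separation_au * AU))

# Wide binary as observed: two 1-solar-mass stars, 1000 AU apart
v_visible = circular_speed(2.0, 1000)

# Same system if one "star" is really an unresolved close pair
# hiding an extra 0.5 solar masses (the f_multi contamination)
v_hidden = circular_speed(2.5, 1000)

print(f"Speed from visible mass only:   {v_visible:.0f} m/s")
print(f"Speed with hidden companion:    {v_hidden:.0f} m/s")
print(f"Apparent boost to acceleration: {(v_hidden / v_visible)**2:.2f}")
```

The hidden 25% of extra mass inflates the inferred gravitational acceleration by the same 25% – exactly the kind of excess that could otherwise be read as a MOND-like departure from Newtonian gravity, which is why fmulti has to be calibrated before either conclusion can be drawn.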

So perhaps there is hope for MOND in binary systems. Given that dark matter shouldn’t be present on the scale of binary systems, any anomalous gravitational effect could only be explained by MOND. A detection would be pretty definitive, if only everyone could agree upon it.

the Bullet Cluster
Bullet time and mass This spectacular new image of the Bullet Cluster was created using NASA’s James Webb Space Telescope and Chandra X-ray Observatory. The new data allow for an improved measurement of the thousands of galaxies in the Bullet Cluster. This means astronomers can more accurately “weigh” both the visible and invisible mass in these galaxy clusters. Astronomers also now have an improved idea of how that mass is distributed. (X-ray: NASA/CXC/SAO; near-infrared: NASA/ESA/CSA/STScI; processing: NASA/STScI/ J DePasquale)

But let’s not kid ourselves – MOND still has a lot of catching up to do on dark matter, which has become a multi-billion-dollar industry with thousands of researchers working on it and space missions such as the European Space Agency’s Euclid space telescope. Dark matter is still in pole position, and its own definitive answers might not be too far away.

“Finding dark matter is definitely not too much to hope for, and that’s why I’m doing it,” says Richard Massey. He highlights not only Euclid, but also the work of the James Webb Space Telescope in imaging gravitational lensing on smaller scales and the Nancy Grace Roman Space Telescope, which will launch later this decade on a mission to study weak gravitational lensing – the way in which small clumps of matter, such as individual dark matter haloes around galaxies, subtly warp space.

“These three particular telescopes give us the opportunity over the next 10 years to catch dark matter doing something, and to be able to observe it when it does,” says Massey. That “something” could be dark-matter particles interacting, perhaps in a cluster merger in deep space, or in a xenon tank here on Earth.

“That’s why I work on dark matter rather than anything else,” concludes Massey. “Because I am optimistic.”

  • In the first instalment of this three-part series, Keith Cooper explored the struggles and successes of modified gravity in explaining phenomena at varying galactic scales
  • In the second part of the series, Keith Cooper explored competing theories of dark matter

The post MOND versus dark matter: the clash for cosmology’s soul appeared first on Physics World.


Amorphous carbon membrane creates precision proton beams for cancer therapy

A new method for generating high-energy proton beams could one day improve the precision of proton therapy for treating cancer. Developed by an international research collaboration headed up at the National University of Singapore, the technique involves accelerating H2+ ions and then using a novel two-dimensional carbon membrane to split the high-energy ion beam into beams of protons.

One obstacle when accelerating large numbers of protons together is that they all carry the same positive charge and thus naturally repel each other. This so-called space–charge effect makes it difficult to keep the beam tight and focused.

“By accelerating H₂⁺ ions instead of single protons, the particles don’t repel each other as strongly,” says project leader Jiong Lu. “This enables delivery of proton beam currents up to an order of magnitude higher than those from existing cyclotrons.”

Lu explains that a high-current proton beam can deliver more protons in a shorter time, making proton treatments quicker and more precise, and allowing tumours to be targeted more effectively. Such a proton beam could also be employed in FLASH therapy, an emerging treatment that delivers therapeutic radiation at ultrahigh dose rates to reduce normal tissue toxicity while preserving anti-tumour activity.

Industry-compatible fabrication

The key to this technique lies in the choice of an optimal membrane with which to split the H₂⁺ ions. For this task, Lu and colleagues developed a new material – ultraclean monolayer amorphous carbon (UC-MAC). MAC is similar in structure to graphene, but instead of an ordered honeycomb structure of hexagonal rings, it contains a disordered mix of five-, six-, seven- and eight-membered carbon rings. This disorder creates angstrom-scale pores in the films, which can be used to split the H₂⁺ ions into protons as they pass through.

Ultraclean monolayer amorphous carbon
Pentagons, hexagons, heptagons, octagons Illustration of disorder-to-disorder synthesis (left); scanning transmission electron microscopy image of UC-MAC (right). (Courtesy: National University of Singapore)

Scaling the manufacture of ultrathin MAC films, however, has previously proved challenging, with no industrial synthesis method available. To address this problem, the researchers proposed a new fabrication approach in which the emergence of long-range order in the material is suppressed, not by the conventional approach of low-temperature growth, but by a novel disorder-to-disorder (DTD) strategy.

DTD synthesis uses plasma-enhanced chemical vapour deposition (CVD) to create a MAC film on a copper substrate containing numerous nanoscale crystalline grains. This disordered substrate induces high levels of randomized nucleation in the carbon layer and disrupts long-range order. The approach enabled wafer-scale (8-inch) production of UC-MAC films within just 3 s – an order of magnitude faster than conventional CVD methods.

Disorder creates precision

To assess the ability of UC-MAC to split H₂⁺ ions into protons, the researchers generated a high-energy H2+ nanobeam and focused it onto a freestanding two-dimensional UC-MAC crystal. This resulted in the ion beam splitting to create high-precision proton beams. For comparison they repeated the experiment (with beam current stabilities controlled within 10%) using single-crystal graphene, non-clean MAC with metal impurities and commercial carbon thin films (8 nm).

Measuring double-proton events – in which two proton signals are detected from a single H2+ ion splitting – as an indicator for proton scattering revealed that the UC-MAC membrane produced far fewer unwanted scattered protons than the other films. Ion splitting using UC-MAC resulted in about 47 double-proton events over a 20 s collection time, while the graphene film exhibited roughly twice this number and the non-clean MAC slightly more. The carbon thin film generated around 46 times more scattering events.
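Put as rates, the comparison reads as follows – a back-of-the-envelope restatement of the counts quoted above, taking the graphene and commercial-film totals as roughly 2 and 46 times the UC-MAC value.

```python
# Double-proton (scattering) events per second, from the counts quoted in the text
collection_time_s = 20
ucmac_events = 47

totals = {
    "UC-MAC": ucmac_events,
    "single-crystal graphene (~2x)": 2 * ucmac_events,
    "commercial carbon film (~46x)": 46 * ucmac_events,
}

for membrane, events in totals.items():
    print(f"{membrane:30s} ~{events / collection_time_s:6.1f} events per second")
```

In other words, the commercial film scatters protons at well over 100 events per second under the same conditions, compared with a couple per second for UC-MAC.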

The researchers point out that the reduced double-proton events in UC-MAC “demonstrate its superior ability to minimize proton scattering compared with commercial materials”. They note that as well as UC-MAC creating a superior quality proton beam, the technique provides control over the splitting rate, with yields ranging from 88.8 to 296.0 proton events per second per detector.

“Using UC-MAC to split H₂⁺ produces a highly sharpened, high-energy proton beam with minimal scattering and high spatial precision,” says Lu. “This allows more precise targeting in proton therapy – particularly for tumours in delicate or critical organs.”

“Building on our achievement of producing proton beams with greatly reduced scattering, our team is now developing single molecule ion reaction platforms based on two-dimensional amorphous materials using high-energy ion nanobeam systems,” he tells Physics World. “Our goal is to make proton beams for cancer therapy even more precise, more affordable and easier to use in clinical settings.”

The study is reported in Nature Nanotechnology.

The post Amorphous carbon membrane creates precision proton beams for cancer therapy appeared first on Physics World.


Elusive scattering of antineutrinos from nuclei spotted using small detector

Evidence of the coherent elastic scattering of reactor antineutrinos from atomic nuclei has been reported by the German-Swiss Coherent Neutrino Nucleus Scattering (CONUS) collaboration. This interaction has a higher cross section (probability) than the processes currently used to detect neutrinos, and could therefore lead to smaller detectors. It also involves lower-energy neutrinos, which could offer new ways to look for new physics beyond the Standard Model.

Antineutrinos only occasionally interact with matter, which makes them very difficult to detect. They can be observed using inverse beta decay, which involves the capture of electron antineutrinos by protons, producing neutrons and positrons. An alternative method involves observing the scattering of antineutrinos from electrons. Both these reactions have small cross sections, so huge detectors are required to capture just a few events. Moreover, inverse beta decay can only detect antineutrinos if they have energies above about 1.8 MeV, which precludes searches for low-energy physics beyond the Standard Model.

It is also possible to detect neutrinos by the tiny kick a nucleus receives when a neutrino scatters off it. “It’s very hard to detect experimentally because the recoil energy of the nucleus is so low, but on the other hand the interaction probability is a factor of 100–1000 higher than these typical reactions that are otherwise used,” says Christian Buck of the Max Planck Institute for Nuclear Physics in Heidelberg. This enables measurements with kilogram-scale detectors.
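To see just how low “so low” is, the maximum kinetic energy a neutrino of energy E_ν can transfer to a nucleus of mass M in an elastic collision follows from two-body kinematics. The numbers below are illustrative, assuming a germanium nucleus (Mc² ≈ 68 GeV) and a reactor antineutrino energy of a few MeV.

```latex
% Maximum nuclear recoil energy in coherent elastic neutrino-nucleus scattering
E_{R,\mathrm{max}} = \frac{2E_\nu^2}{Mc^2 + 2E_\nu}
\approx \frac{2\,(4\ \mathrm{MeV})^2}{68\,000\ \mathrm{MeV}}
\approx 0.5\ \mathrm{keV}
```

A recoil of at most a few hundred electronvolts is why detectors for this channel need far lower thresholds than typical particle-physics experiments.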

This was first observed in 2017 by the COHERENT collaboration using a 14.6 kg caesium iodide crystal to detect neutrinos from the Spallation Neutron Source at the Oak Ridge National Laboratory in the US. These neutrinos have a maximum energy of 55 MeV, making them ideal for the interaction. Moreover, the neutrinos come in pulses, allowing the signal to be distinguished from background radiation.

Reactor search

Multiple groups have subsequently looked for signals from nuclear reactors, which produce lower-energy neutrinos. These include the CONUS collaboration, which operated at the Brokdorf nuclear reactor in Germany until 2022. However, the only group to report a strong hint of a signal included Juan Collar of the University of Chicago. In 2022 it published results suggesting a stronger than expected signal at the Dresden-2 power reactor in the US.

Now, Buck and his CONUS colleagues present data from the CONUS+ experiment conducted at the Leibstadt reactor in Switzerland. They used three 1 kg germanium diodes sensitive to energies as low as 160 eV, and extracted the neutrino spectrum from the background radiation by taking data both when the reactor was running and when it was not. Writing in Nature, the team conclude that 395±106 neutrinos were detected during 119 days of operation – a signal 3.7σ away from zero and consistent with the Standard Model prediction. The experiment is currently in its second run, with the detector masses increased to 2.4 kg to provide better statistics and potentially a lower threshold energy.
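The quoted significance is simply the measured signal divided by its uncertainty:

```latex
\frac{395}{106} \approx 3.7\,\sigma
```

that is, the excess is 3.7 standard deviations away from the no-signal hypothesis.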

Collar, however, is sceptical of the result. “[The researchers] seem to have an interest in dismissing the limitations of these detectors – limitations that affect us too,” he says. “The main difference between our approach and theirs is that we have made a best effort to demonstrate that our data are not contaminated by residual sources of low-energy noise dominant in this type of device prior to a careful analysis.” His group will soon release data taken at the Vandellòs reactor in Spain. “When we release these, we will take the time to point out the issues visible in their present paper,” he says. “It is a long list.”

Buck accepts that, if the previous measurements by Collar’s group are correct, the CONUS+ researchers should have detected at least 10 times more neutrinos than they actually did. “I would say the control of backgrounds at our site in Leibstadt is better because we do not have such a strong neutron background. We have clearly demonstrated that the noise Collar has in mind is not dominant in the energy region of interest in our case.”

Patrick Huber at Virginia Tech in the US says, “Let’s see what Collar’s new result is going to be. I think this is a good example of the scientific method at work. Science doesn’t care who’s first – scientists care, but for us, what matters is that we get it right. But with the data that we have in hand, most experts, myself included, think that the current result is essentially the result we have been looking for.”

The post Elusive scattering of antineutrinos from nuclei spotted using small detector appeared first on Physics World.


‘I left the school buzzing and on a high’

After 40 years lecturing on physics and technology, you’d think I’d be ready for any classroom challenge thrown at me. Surely, during that time, I’d have covered all the bases? As an academic with a background in designing military communication systems, I’m used to giving in-depth technical lectures to specialists. I’ve delivered PowerPoint presentations to a city mayor and council dignitaries (I’m still not sure why, to be honest). And perhaps most terrifying of all, I’ve even had my mother sit in on one of my classes.

During my retirement, I’ve taken part in outreach events at festivals, where I’ve learned how to do science demonstrations to small groups that have included everyone from babies to great-grandparents. I once even gave a talk about noted local engineers to a meeting of the Women’s Institute in what was basically a shed in a Devon hamlet. But nothing could have prepared me for a series of three talks I gave earlier this year.

I’d been invited to a school to speak to three classes, each with about 50 children aged between six and 11. The remit from the headteacher was simple: talk about “My career as a physicist”. To be honest, most of my working career focused on things like phased-array antennas, ferrite anisotropy and computer modelling of microwave circuits, which isn’t exactly easy to adapt for a young audience.

But for a decade or so my research switched to sports physics and I’ve given talks to more than 200 sports scientists in a single room. I once even wrote a book called Projectile Dynamics in Sport (Routledge, 2011). So I turned up at the school armed with a bag full of balls, shuttlecocks, Frisbees and flying rings. I also had a javelin (in the form of a telescopic screen pointer) and a “secret weapon” for my grand finale.

Our first game was “guess the sport”. The pupils did well, correctly distinguishing between a basketball, a softball and a football, and even between an American football and a rugby ball. We discussed the purposes of dimples on a golf ball, the seam on a cricket ball and the “skirt” on a shuttlecock – the feathers, which are always taken from the right wing of a goose. Unless they are plastic.

As physicists, you’re probably wondering why the feathers are taken from its right side – and I’ll leave that as an exercise for the reader. But one pupil was more interested in the poor goose, asking me what happens when its feathers are pulled out. Thinking on my feet, I said the feathers grow back and the bird isn’t hurt. Truth is I have no idea, but I didn’t want to upset her.

Despite the look of abject terror on the teachers’ faces, we did not descend into anarchy

Then: the finale. From my bag I took out a genuine Aboriginal boomerang, complete with authentic religious symbols. Not wanting to delve into Indigenous Australian culture or discuss a boomerang’s return mechanism in terms of gyroscopy and precession, I instead allowed the class to throw around three foam versions of it. Despite the look of abject terror on the teachers’ faces, we did not descend into anarchy but ended each session with five minutes of carefree enjoyment.

There is something uniquely joyful about the energy of children when they engage in learning. At this stage, curiosity is all. They ask questions because they genuinely want to know how the world works. And when I asked them a question, hands shot up so fast and arms were waved around so frantically to attract my attention that some pupils’ entire body shook. At one point I picked out an eager firecracker who swiftly realized he didn’t know the answer and shrank into a self-aware ball of discomfort.

Mostly, though, children’s excitement is infectious. I left the school buzzing and on a high. I loved it. In this vibrant environment, learning isn’t just about facts or skills; it’s about puzzle-solving, discovery, imagination, excitement and a growing sense of independence. The enthusiasm of young learners turns the classroom into a place of shared exploration, where every day brings something new to spark their imagination.

How lucky primary teachers are to work in such a setting, and how lucky I was to be invited into their world.

The post ‘I left the school buzzing and on a high’ appeared first on Physics World.


New metalaser is a laser researcher’s dream

A new type of nanostructured lasing system called a metalaser emits light with highly tuneable wavefronts – something that had proved impossible to achieve with conventional semiconductor lasers. According to the researchers in China who developed it, the new metalaser can generate speckle-free laser holograms and could revolutionize the field of laser displays.

The first semiconductor lasers were invented in the 1960s and many variants have since been developed. Their numerous advantages – including small size, long lifetimes and low operating voltages – mean they are routinely employed in applications ranging from optical communications and interconnects to biomedical imaging and optical displays.

To make further progress with this class of lasers, researchers have been exploring ways of creating them at the nanoscale. One route for doing this is to integrate light-scattering arrays called metasurfaces with laser mirrors or insert them inside resonators. However, the wavefronts of the light emitted by these metalasers have proven very difficult to control, and to date only a few simple profiles have been possible without introducing additional optical elements.

Not significantly affected by perturbations

In the new work, a team led by Qinghai Song of the Harbin Institute of Technology, Shenzhen, created a metalaser consisting of silicon nitride nanodisks that have holes in their centres and are arranged in a periodic array. This configuration generates optical modes known as bound states in the continuum (BICs). Since the laser energy is concentrated in the centre of each nanodisk, the wavelength of the BIC is not significantly affected by perturbations such as the tiny holes in the structure.

“At the same time, the in-plane electric fields of these modes are distributed along the periphery of each nanodisk,” Song explains. “This greatly enhances the light field inside the centre of the hole and induces an effective dipole moment there, which is what produces a geometric phase change to the light emission at each pixel.”

By rotating the holes in the nanodisks, Song says that it is possible to introduce specific geometric phase profiles into the metasurface. The laser emission can then be tailored to create focal spots, focal lines and doughnut shapes as well as holographic images.
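For context, in the usual Pancharatnam–Berry (geometric-phase) picture for metasurfaces, an anisotropic element rotated by an angle θ imprints a phase of roughly twice that angle on circularly polarized light. This is the generic textbook relation rather than the exact expression used in the paper, but it conveys how a pattern of hole orientations becomes a phase profile.

```latex
\varphi_{\mathrm{geo}}(x, y) \approx \pm\, 2\theta(x, y)
```

Mapping out θ(x, y) across the array therefore encodes an arbitrary phase map φ(x, y), which is what allows focal spots, doughnut beams and holograms to be written directly into the laser emission.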

And that is not all. Unlike in conventional laser modes, the waves scattered from the new metalaser are too weak to undergo resonant amplification. This means that the speckle noise generated is negligibly small, which resolves the longstanding challenge of reducing speckle noise in holographic displays without reducing image quality.

According to Song, this property could revolutionize laser displays. He adds that the physical concept outlined in the team’s work could be extended to other nanophotonic devices, substantially improving their performance in various optics and photonics applications.

“Controlling laser emission at will has always been a dream of laser researchers,” he tells Physics World. “Researchers have traditionally done this by introducing metasurfaces into structures such as laser oscillators. This approach, while very straightforward, is severely limited by the resonant conditions of this type of laser system. With other types of laser, they had to either integrate a metasurface wave plate outside the laser cavity or use bulky and complicated components to compensate for phase changes.”

With the new metalaser, the laser emission can be changed from fixed profiles such as Hermite-Gaussian modes and Laguerre-Gaussian modes to arbitrarily customized beams, he says. One consequence of this is that the lasers could be fabricated to match the numerical aperture of fibres or waveguides, potentially boosting the performance of optical communications and optical information processing.

Developing a programmable metalaser will be the researchers’ next goal, Song says.

The new metalaser design is described in Nature.

The post New metalaser is a laser researcher’s dream appeared first on Physics World.


New laser-plasma accelerator could soon deliver X-ray pulses

A free-electron laser (FEL) that is driven by a plasma-based electron accelerator has been unveiled by Sam Barber at Lawrence Berkeley National Laboratory and colleagues. The device is a promising step towards compact, affordable free-electron lasers that are capable of producing intense, ultra-short X-ray laser pulses. It was developed in collaboration with researchers at Berkeley Lab, University of California Berkeley, University of Hamburg and Tau Systems.

A FEL creates X-rays by the rapid back-and-forth acceleration of fast-moving electron pulses using a series of magnets called an undulator. These X-rays are emitted at a narrow wavelength and then interact with the pulse as it travels down the undulator. The result is a bright X-ray pulse with laser-like coherence.

What is more, the wavelength of the emitted X-rays can be adjusted simply by changing the energy of the electron pulses, making FELs highly tuneable.
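That tunability follows from the standard undulator resonance condition, a textbook relation (not specific to this experiment) linking the output wavelength to the undulator period λ_u, the dimensionless undulator strength K and the electron Lorentz factor γ.

```latex
\lambda_{\mathrm{FEL}} = \frac{\lambda_u}{2\gamma^2}\left(1 + \frac{K^2}{2}\right)
```

Because γ is proportional to the electron energy, doubling the beam energy shortens the emitted wavelength by roughly a factor of four.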

Big and expensive

FELs are especially useful for generating intense, ultra-short X-ray pulses, which cannot be produced using conventional laser systems. So far, several X-ray FELs have been built for this purpose – but each of them relies on kilometre-scale electron accelerators costing huge amounts of money to build and maintain.

To create cheaper and more accessible FELs, researchers are exploring the use of laser-plasma accelerators (LPAs) – which can accelerate electron pulses to high energies over distances of just a few centimetres.
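The “few centimetres” reflects the accelerating gradients involved. Taking representative orders of magnitude (not numbers from this paper) of around 100 GV/m for an LPA and a few tens of MV/m for a conventional radio-frequency linac, the length needed to reach a given beam energy shrinks by roughly three orders of magnitude.

```latex
L \approx \frac{E_{\mathrm{beam}}}{G}:\qquad
L_{\mathrm{LPA}} \approx \frac{1\ \mathrm{GeV}}{100\ \mathrm{GV/m}} = 1\ \mathrm{cm},
\qquad
L_{\mathrm{RF}} \approx \frac{1\ \mathrm{GeV}}{50\ \mathrm{MV/m}} = 20\ \mathrm{m}
```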

Yet as Barber explains, “LPAs have had a reputation for being notoriously hard to use for FELs because of things like parameter jitter and the large energy spread of the electron beam compared to conventional accelerators. But sustained research across the international landscape continues to drive improvements in all aspects of LPA performance.”

Recently, important progress was made by a group at the Chinese Academy of Sciences (CAS), which used an LPA to drive an FEL that amplified its light by a factor of 50. The resulting pulses had a wavelength of 27 nm – close to the X-ray regime – but only about 10% of shots produced successful amplification.

Very stable laser

Now, Barber and his colleagues have built on this result by making several improvements to their FEL setup, with the aim of enhancing its compatibility with LPAs. “On our end, we have taken great pains to ensure a very stable laser with several active feedback systems,” Barber explains. “Our strategy has essentially been to follow the playbook established by the original FEL research: start at longer wavelengths where it is easier to optimize and learn about the process and then scale the system to the shorter wavelengths.”

With these refinements, the team amplified their FEL’s output by a factor of 1000, achieving this in over 90% of their shots. This vastly outperformed the CAS result – albeit at a longer wavelength. “We designed the experiment to operate the FEL at around 420 nm, which is not a particularly exciting wavelength for scientific use cases – it’s just blue light,” Barber says. “But, with very minor upgrades, we plan to scale it for sub-100 nm wavelength where scientific applications become interesting.”

The researchers are optimistic that further breakthroughs are within reach, which could improve the prospects for LPA-driven FEL experiments. One especially important target is reaching the “saturation level” at X-ray wavelengths: the point beyond which FEL amplification no longer increases significantly.

“Another really crucial component is developing laser technology to scale the current laser systems to much higher repetition rates,” Barber says. “Right now, the typical laser used for LPAs can operate at around 10 Hz, but that will need to scale up dramatically to compare to the performance of existing light sources that are pushing megahertz.”

The research is described in Physical Review Letters.

The post New laser-plasma accelerator could soon deliver X-ray pulses appeared first on Physics World.


Space ice reveals its secrets

The most common form of water in the universe appears to be much more complex than was previously thought. While past measurements suggested that this “space ice” is amorphous, researchers in the UK have now discovered that it contains crystals. The result poses a challenge to current models of ice formation and could alter our understanding of ordinary liquid water.

Unlike most other materials, water is denser as a liquid than it is as a solid. It also expands rather than contracts when it cools; becomes less viscous when compressed; and exists in many physical states, including at least 20 polymorphs of ice.

One of these polymorphs is commonly known as space ice. Found in the bulk matter in comets, on icy moons and in the dense molecular clouds where stars and planets form, it is less dense than liquid water (0.94 g cm⁻³ rather than 1 g cm⁻³), and X-ray diffraction images indicate that it is an amorphous solid. These two properties give it its formal name: low-density amorphous ice, or LDA.

While space ice was discovered almost a century ago, Michael Davies, who studied LDA as part of his PhD research at University College London and the University of Cambridge, notes that its exact atomic structure is still being debated. “It is unclear, for example, whether LDA is a ‘true glassy state’ (meaning a frozen liquid with no ordered structure) or a highly disordered crystal,” Davies explains.

The memory of ice

In the new work, Davies and colleagues used two separate computational simulations to better understand this atomic structure. In the first simulation, they froze “boxes” of water molecules by cooling them to -150 °C at different rates, which produced crystalline and amorphous ice in varying proportions. They then compared this spectrum of structures to the structure of amorphous ice as measured by X-ray diffraction.

“The best model to match experiments was a ‘goldilocks’ scenario – that is, one that is not too amorphous and not too crystalline,” Davies explains. “Specifically, we found ice that was up to 20% crystalline and 80% amorphous, with the structure containing tiny crystals around 3-nm wide.”
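The “goldilocks” comparison can be thought of as a one-parameter fit: mix a fully amorphous and a fully crystalline diffraction pattern in varying proportions and ask which crystalline fraction best matches the measured one. The sketch below is a minimal illustration of that idea, with made-up Gaussian “patterns” standing in for the real simulated and experimental structure factors – it is not the team’s code or data.

```python
import numpy as np

# Toy structure factors on a common q-grid (arbitrary units); in practice these
# would come from the simulations and from the X-ray diffraction measurement.
q = np.linspace(1.0, 10.0, 200)                      # scattering vector, 1/angstrom
S_amorphous = np.exp(-0.5 * (q - 1.7) ** 2 / 0.3)    # broad, amorphous-like peak
S_crystal = np.exp(-0.5 * (q - 1.7) ** 2 / 0.02)     # sharp, Bragg-like peak
S_experiment = 0.8 * S_amorphous + 0.2 * S_crystal   # stand-in for the measurement

def misfit(x):
    """Least-squares misfit between a mixed model and the 'experimental' pattern."""
    model = (1 - x) * S_amorphous + x * S_crystal
    return np.sum((model - S_experiment) ** 2)

fractions = np.linspace(0.0, 1.0, 101)
best = fractions[np.argmin([misfit(x) for x in fractions])]
print(f"Best-fit crystalline fraction: {best:.2f}")   # ~0.20 for this toy input
```

In the real study the comparison is against fully atomistic simulations cooled at different rates rather than a two-component mixture, but the logic – neither too amorphous nor too crystalline gives the best match – is the same.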

The second simulation began with large “boxes” of ice consisting of many small ice crystals packed together. “Here, we varied the number of crystals in the boxes to again give a range of very crystalline to amorphous models,” Davies says. “We found very close agreement to experiment with models that had very similar structures compared to the first approach with 25% crystalline ice.”

To back up these findings, the UCL/Cambridge researchers performed a series of experiments. “By re-crystallizing different samples of LDA formed via different ‘parent ice phases’ we found that the final crystal structure formed varied depending on the pathway to creation,” Davies tells Physics World. In other words, he adds, “The final structure had a memory of its parent.”

This is important, Davies continues, because if LDA was truly amorphous and contained no crystalline grains at all, this “memory” effect would not be possible.

Impact on our understanding

The discovery that LDA is not completely amorphous has implications for our understanding of ordinary liquid water. The prevailing “two state” model for water is appealing because it accounts for many of water’s thermodynamic anomalies. However, it rests on the assumption that both LDA and high-density amorphous ice have corresponding liquid forms, and that liquid water can be modelled as a mixture of the two.

“Our finding that LDA actually contains many small crystallites presents some challenges to this model,” Davies says. “It is thus of paramount importance for us to now confirm if a truly amorphous version of LDA is achievable in experiments.”

The existence of structure within LDA also has implications for “panspermia” theory, which hypothesizes that the building blocks of life (such as simple amino acids) were carried to Earth within an icy comet.  “Our findings suggest that LDA would be a less efficient transporting material for these organic molecules because a partly crystalline structure has less space in which these ingredients could become embedded,” Davies says.

“The theory could still hold true, though,” he adds, “as there are amorphous regions in the ice where such molecules could be trapped and stored.”

Challenges in determining atomic structure

The study, which is detailed in Physical Review B, highlights the difficulty of determining the exact atomic structure of materials. According to Davies, it could therefore be important for understanding other amorphous materials, including some that are widely used in technologies such as OLEDs and fibre optics.

“Our methodology could be applied to these materials to determine whether they are truly glassy,” he says. “Indeed, glass fibres that transport data along long distances need to be amorphous to function efficiently. If they are found to contain tiny crystals, these could then be removed to improve performance.”

The researchers are now focusing on understanding the structure of other amorphous ices, including high-density amorphous ice. “There is much for us to investigate with regards to the links between amorphous ice phases and liquid water,” Davies concludes.

The post Space ice reveals its secrets appeared first on Physics World.


Building a career from a passion for science communication

This episode of the Physics World Weekly podcast features an interview with Kirsty McGhee, who is a scientific writer at the quantum-software company Qruise. It is the second episode in our two-part miniseries on careers for physicists.

While she was doing a PhD in condensed matter physics, McGhee joined Physics World’s Student Contributors Network. This involved writing articles about peer-reviewed research, as well as proofreading articles written by other contributors.

McGhee explains how the network broadened her knowledge of physics and improved her communication skills. She also says that potential employers looked favourably on her writing experience.

At Qruise, McGhee has a range of responsibilities that include writing documentation, marketing, website design, and attending conference exhibitions. She explains how her background in physics prepared her for these tasks, and what new skills she is learning.

The post Building a career from a passion for science communication appeared first on Physics World.


Tritium and helium targets shed light on three-nucleon interactions

An experiment that scattered high-energy electrons from helium-3 and tritium nuclei has provided the first evidence for three-nucleon short-range correlations. The data were taken in 2018 at Jefferson Lab in the US and further studies of these correlations could improve our understanding of both atomic nuclei and neutron stars.

Atomic nuclei contain nucleons (protons and neutrons) that are bound together by the strong force. These nucleons are not static and they can move rapidly about the nucleus. While nucleons can move independently, they can also move as correlated pairs, trios and larger groupings. Studying this correlated motion can provide important insights into interactions between nucleons – interactions that define the structures of tiny nuclei and huge neutron stars.

The momenta of nucleons can be measured by scattering a beam of high-energy electrons from nuclei. This is because the de Broglie wavelength of these electrons is smaller than the size of the nucleons – allowing individual nucleons to be resolved. During the scattering process, momentum is exchanged between a nucleon and an electron, and how this occurs provides important insights into the correlations between nucleons.
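A quick estimate shows why high-energy electrons can resolve individual nucleons. For an ultra-relativistic electron the de Broglie wavelength is hc divided by its energy; the 4 GeV figure below is a representative Jefferson Lab beam energy, not a number taken from the paper.

```latex
\lambda = \frac{hc}{pc} \approx \frac{1240\ \mathrm{MeV\,fm}}{4000\ \mathrm{MeV}} \approx 0.3\ \mathrm{fm}
```

That is comfortably smaller than a nucleon’s radius of roughly 0.8–0.9 fm, so the electron can pick out individual nucleons within the nucleus.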

Electron scattering has already revealed that most of the momentum in nuclei is associated with single nucleons, with some also assigned to correlated pairs. These experiments also suggested that nuclei have additional momenta that had not been accounted for.

Small but important

“We know that the three-nucleon interaction is important in the description of nuclear properties, even though it’s a very small contribution,” explains John Arrington at the Lawrence Berkeley National Laboratory in the US. “Until now, there’s never really been any indication that we’d observed them at all. This work provides a first glimpse at them.”

In 2018, Arrington and others did a series of electron-scattering experiments at Jefferson Lab with helium-3 and tritium targets. Now Arrington and an international team of physicists has scoured this scattering data for evidence of short-range, three-nucleon correlations.

Studying these correlations in nuclei with just three nucleons is advantageous because there are no correlations between four or more nucleons. These correlations would make it more difficult to isolate three-nucleon effects in the scattering data.

A further benefit of looking at tritium and helium-3 is that they are “mirror nuclei”. Tritium comprises one proton and two neutrons, while helium-3 comprises two protons and a neutron. The strong force that binds nucleons together acts equally on protons and neutrons. However, there are subtle differences in how protons and neutrons interact with each other – and these differences can be studied by comparing tritium and helium-3 electron scattering experiments.

A clean picture

“We’re trying to show that it’s possible to study three-nucleon correlations at Jefferson Lab even though we can’t get the energies necessary to do these studies in heavy nuclei,” says principal investigator Shujie Li at Lawrence Berkeley. “These light systems give us a clean picture – that’s the reason we put in the effort of getting a radioactive target material.”

Both helium-3 and tritium are rare isotopes of their respective elements. Helium-3 is produced from the radioactive decay of tritium, which itself is produced in nuclear reactors. Tritium is a difficult isotope to work with because it is used to make nuclear weapons; has a half-life of about 12 years; and is toxic when ingested or inhaled. To succeed, the team had to create a special cryogenic chamber to contain their target of tritium gas.

Analysis of the scattering experiments revealed tantalizing hints of three-nucleon short-range correlations. Further investigation is needed to determine exactly how the correlations occur. Three nucleons could become correlated simultaneously, for example, or an existing correlated pair could become correlated with a third nucleon.

Three-nucleon interactions are believed to play an important role in the properties of neutron stars, so further investigation into some of the smallest of nuclei could shed light on the inner workings of much more massive objects. “It’s much easier to study a three-nucleon correlation in the lab than in a neutron star,” says Arrington.

The research is described in Physics Letters B.

The post Tritium and helium targets shed light on three-nucleon interactions appeared first on Physics World.


The Butler-Volmer equation revisited: effect of metal work function


The Butler-Volmer equation is widely regarded as the standard model of electrochemical kinetics. Typically, the effects of applied voltage on the free energies of activation of the forward and backward reactions are analysed and used to derive a current-voltage relationship. Traditionally, specific properties of the electrode metal were not considered in this derivation, and consequently the resulting expression contained no information on the variation of exchange current density with electrode-material-specific parameters such as work function Φ. In recent papers [1, 2], Buckley and Leddy revisited the classical derivation of the Butler-Volmer equation to include the effect of the electrode metal. We considered in detail the complementary relationship between the chemical potential of electrons μe and the Galvani potential φ, and so derived expressions for the current-voltage relationship and the exchange current density that include μe. The exchange current density j0 appears as an exponential function of Δμe. Making the approximation Δμe ≈ −FΔΦ yields a linear relationship between ln j0 and Φ. This linear increase in ln j0 with Φ had long been reported [3] but had not been explained. In this webinar, these recent modifications of the Butler-Volmer equation and their consequences will be discussed.
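For reference, the classical Butler-Volmer relation and the modification described above can be summarized as follows. Here η is the overpotential, αa and αc the anodic and cathodic transfer coefficients, F the Faraday constant, R the gas constant and T the temperature; the coefficient β in the second line is a stand-in for the prefactor that emerges from the derivation in the cited papers, shown here only to indicate the structure of the result.

```latex
j = j_0\left[\exp\!\left(\frac{\alpha_a F\eta}{RT}\right) - \exp\!\left(-\frac{\alpha_c F\eta}{RT}\right)\right],
\qquad
\ln j_0 = \mathrm{const} + \beta\,\Delta\mu_e \approx \mathrm{const} - \beta F\,\Delta\Phi
```

The second relation is what makes ln j0 vary linearly with the work function Φ, as reported experimentally by Trasatti [3].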

[1] K S R Dadallagei, D L Parr IV, J R Coduto, A Lazicki, S DeBie, C D Haas and J Leddy, J. Electrochem. Soc. 170 086508 (2023)

[2] D N Buckley and J Leddy, J. Electrochem. Soc. 171 116503 (2024)

[3] S Trasatti, J. Electroanal. Chem. 39 163–184 (1972)

D Noel Buckley

D Noel Buckley is professor of physics emeritus at the University of Limerick, Ireland and adjunct professor of chemical and biomolecular engineering at Case Western Reserve University.   He is a fellow and past-president of ECS and has served as an editor of both the Journal of the Electrochemical Society and Electrochemical and Solid State Letters. He has over 50 years of research experience on a range of topics.  His PhD research on oxygen electrochemistry at University College Cork, Ireland was followed by postdoctoral research on high-temperature corrosion at the University of Pennsylvania.  From 1979 to 1996, he worked at Bell Laboratories (Murray Hill, NJ), initially on lithium batteries but principally on III-V semiconductors for electronics and photonics. His research at the University of Limerick has been on semiconductor electrochemistry, stress in electrodeposited nanofilms and electrochemical energy storage, principally vanadium flow batteries in collaboration with Bob Savinell’s group at Case. His recent interest in the theory of electron transfer kinetics arose from collaboration with Johna Leddy at the University of Iowa. He has taught courses in scientific writing since 2006 at the University of Limerick and short courses at several ECS Meetings. He is a recipient of the Heinz Gerischer Award and the ECS Electronics and Photonics Division Award. Recently, he led Poetry Evenings at ECS Meetings in Gothenburg and Montreal.


The post The Butler-Volmer equation revisited: effect of metal work function appeared first on Physics World.


Entangled histories: women in quantum physics

Writing about women in science remains an important and worthwhile thing to do. That’s the premise that underlies Women in the History of Quantum Physics: Beyond Knabenphysik – an anthology charting the participation of women in quantum physics, edited by Patrick Charbonneau, Michelle Frank, Margriet van der Heijden and Daniela Monaldi.

What does a history of women in science accomplish? This volume firmly establishes that women have for a long time made substantial contributions to quantum physics. It raises the profiles of figures like Chien-Shiung Wu, whose early work on photon entanglement is often overshadowed by her later fame in nuclear physics; and Grete Hermann, whose critiques of John von Neumann and Werner Heisenberg make her central to early quantum theory.

But in specifically recounting the work of these women in quantum, do we risk reproducing the same logic of exclusion that once kept them out – confining women to a specialized narrative? The answer is no, and this book is an especially compelling illustration of why.

A reference and a reminder

This volume demonstrates its necessity in two big ways: as a reference – a place to look up the accomplishments and contributions of women in quantum physics – and as a reminder that we still have far to go before there is anything like true diversity, equality or the disappearance of prejudice in science.

The subtitle Beyond Knabenphysik – meaning “boys’ physics” in German – points to one of the book’s central aims: to move past a vision of quantum physics as a purely male domain. Originally a nickname for quantum mechanics, coined because of the youth of its pioneers, Knabenphysik came to be emblematic of a culture of collaboration and mentorship that welcomed male physicists and consistently excluded women.

The exclusion was not only symbolic but material. Hendrika Johanna van Leeuwen, who co-developed a key theorem in classical magnetism, was left out of the camaraderie and recognition extended to her male colleagues. Similarly, Laura Chalk’s research into the Stark effect – an early confirmation of Schrödinger’s wave equation – was under-acknowledged, with credit going largely to her male collaborator.

Something this book does especially well is combine the sometimes conflicting aims of history of science and biography. We learn not only about the trajectories of these women’s careers, but also about the scientific developments they were a part of. The chapter on Hertha Sponer, for instance, traces both her personal journey and her pioneering role in quantum spectroscopy. The piece on Freda Friedman Salzman situates her theoretical contributions within the professional and social networks that both enabled and constrained her. In so doing, the book treats each of these women not only as a whole human being, but also as an integral player in the complex history of one of the most successful and most debated physical theories yet devised.

Lost physics

Because the history is told chronologically, we trace quantum physics from some of the early astronomical images suggesting discrete quantized elements to later developments in quantum electrodynamics. Along the way, we encounter women like Maria McEachern, who revisits Williamina Fleming’s spectral work; Maria Lluïsa Canut, whose career spanned crystallography and feminist activism; and Sonja Ashauer, a Brazilian physicist whose PhD at Cambridge placed her at the heart of theoretical developments but whose story remains little known.

This history could lead to a broader reflection on how credit, networking and even theorizing are accomplished in physics. Who knows how many discoveries in quantum physics, and science more broadly, could have been made more quickly or easily without the barriers and prejudice women and other marginalized persons faced then and still face today? Or what discoveries still lie latent?

Not all the women profiled here found lasting professional homes in physics. Some faced barriers of racism as well as gender discrimination, like Carolyn Parker, who worked on the Manhattan Project’s polonium research and is recognized as the first African American woman to have earned a postgraduate degree in physics. She died young, without receiving full recognition in her lifetime. Others – like Elizabeth Monroe Boggs, who did early work in quantum chemistry – turned to policy after their research careers. Their paths reflect both the barriers they faced and the broader range of contributions they made.

Calculate, don’t think

The book makes a compelling argument that the heroic narrative of science undermines not just the contributions of women, but those of the less prestigious more broadly. Placing these stories side by side yields something greater than the sum of its parts. It challenges the idea that physics is the work of lone geniuses by revealing the collective infrastructures of knowledge-making, much of which has historically relied not only on women’s labour – and labour they did – but also on their intellectual rigour and originality.

Many of the women highlighted were at times employed “to calculate, not to think” as “computers”, or worked as teachers, analysts or managers. They were often kept from more visible positions even when colleagues recognized their expertise. Katharine Way, for instance, was praised by peers and made vital contributions to nuclear data, yet was rarely credited with the same prominence as her male collaborators. Such cases show clearly that those employed to support from behind the scenes could and did contribute to theoretical physics in foundational ways.

The book also critiques the idea of a “leaky pipeline”, arguing that the metaphor oversimplifies: it obscures how educational and institutional investments in women often translate into contributions both inside and outside formal science. Ana María Cetto Kramis, for example, who played a foundational role in stochastic electrodynamics, combined research with science diplomacy and advocacy.

Should women’s accomplishments be recognized in relation to other women’s, or should they be integrated into a broader historiography? The answer is both. We need inclusive histories that acknowledge all contributors, and specialized works like this one that repair the record and show what emerges specifically and significantly from women’s experiences in science. Quantum physics is a unique field, and women played a crucial and distinctive role in its formation. This recognition offers an indispensable lesson: in physics and in life it’s sometimes easy to miss what’s right in front of us, no less so in the history of women in quantum physics.

  • 2025 Cambridge University Press 486 pp £37.99hb

A new milestone in particle physics with tau lepton pair production

Tau leptons are fundamental particles in the lepton family, similar to electrons and muons, but with unique properties that make them particularly challenging to study. Like other leptons, they have a half-integer spin, but they are significantly heavier and have extremely short lifetimes, decaying rapidly into other particles. These characteristics limit opportunities for direct observation and detailed analysis.

The Standard Model of particle physics describes the fundamental particles and forces, along with the mathematical framework that governs their interactions. According to quantum electrodynamics (QED), a component of the Standard Model, protons in high-energy environments can emit photons (γ), which can then fuse to create a pair of tau leptons (τ⁺τ⁻):

γγ → τ⁺τ⁻

Using the equations of QED, scientists have previously calculated the probability of this process – how the tau lepton pairs would be produced and how often the reaction should occur at specific energies. While muons have been extensively studied in proton collisions, tau leptons have remained more elusive due to their short lifetimes.

In a major breakthrough, researchers at CERN have used data from the CMS detector at the Large Hadron Collider (LHC) to make the first measurement of tau lepton pair production via photon-photon fusion in proton-proton collisions. Previously, this phenomenon had only been observed in lead-ion (PbPb) collisions by the ATLAS and CMS collaborations. In those cases, the photons were generated by the strong electromagnetic fields of the heavy nuclei, within a highly complex environment filled with many particles and background noise. In contrast, proton-proton collisions are much cleaner but also much rarer, making the detection of photon-induced tau production a greater technical challenge.

Notably, the team were able to distinguish QED photon-photon collisions from quantum chromodynamics (QCD) processes by the absence of an underlying event. Using the excellent vertex resolution of the CMS pixel detector, they showed that the tau leptons were produced with no other nearby tracks (the paths left by particles). To verify the technique, the researchers carried out careful studies of the same process in muon pair production and developed corrections that were then applied to the tau lepton analysis.
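
As a rough illustration of the idea – not the CMS analysis code – the sketch below applies a simple exclusivity selection of the kind described above: it counts reconstructed charged-particle tracks lying close to the di-tau vertex along the beam axis and keeps only events with none nearby. The event format, the names and the 1 mm window are hypothetical choices made for the example.

from dataclasses import dataclass

# Hypothetical, simplified event record: the di-tau vertex position along the
# beam axis (z, in cm) and the z positions of all other reconstructed tracks.
@dataclass
class Event:
    ditau_vertex_z: float
    other_track_z: list[float]

def is_exclusive(event: Event, window_cm: float = 0.1) -> bool:
    """Keep the event only if no extra track lies within window_cm of the
    di-tau vertex in z, a stand-in for the 'no other nearby tracks' requirement.
    The 1 mm window is illustrative, not the value used by CMS."""
    n_nearby = sum(
        abs(z - event.ditau_vertex_z) < window_cm for z in event.other_track_z
    )
    return n_nearby == 0

# Toy usage: an isolated photon-fusion-like candidate passes, while a busy
# QCD-like event with several tracks near the vertex is rejected.
clean = Event(ditau_vertex_z=0.02, other_track_z=[3.1, -2.4])
busy = Event(ditau_vertex_z=0.02, other_track_z=[0.03, 0.05, -0.01, 1.2])
print(is_exclusive(clean), is_exclusive(busy))  # True False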

Demonstrating tau pair production in proton-proton collisions not only confirms theoretical predictions but also opens a new avenue for studying tau leptons in high-energy environments. This breakthrough enhances our understanding of lepton interactions and provides a valuable tool for testing the Standard Model with greater precision.
