
High-speed 3D microscope improves live imaging of fast biological processes

12 September 2025 at 10:00

A new high-speed multifocus microscope could facilitate discoveries in developmental biology and neuroscience thanks to its ability to image rapid biological processes over the entire volume of tiny living organisms in real time.

The pictures from many 3D microscopes are obtained sequentially by scanning through different depths, making them too slow for accurate live imaging of fast-moving natural functions in individual cells and microscopic animals. Even current multifocus microscopes that capture 3D images simultaneously have either relatively poor image resolution or can only image to shallow depths.

In contrast, the new 25-camera “M25” microscope – developed during his doctorate by Eduardo Hirata-Miyasaki and his supervisor Sara Abrahamsson, both then at the University of California Santa Cruz, together with collaborators at the Marine Biological Laboratory in Massachusetts and the New Jersey Institute of Technology – enables high-resolution 3D imaging over a large field-of-view, with each camera capturing 180 × 180 × 50 µm volumes at a rate of 100 per second.

“Because the M25 microscope is geared towards advancing biomedical imaging we wanted to push the boundaries for speed, high resolution and looking at large volumes with a high signal-to-noise ratio,” says Hirata-Miyasaki, who is now based in the Chan Zuckerberg Biohub in San Francisco.

The M25, detailed in Optica, builds on previous diffractive-based multifocus microscopy work by Abrahamsson, explains Hirata-Miyasaki. In order to capture multiple focal planes simultaneously, the researchers devised a multifocus grating (MFG) for the M25. This diffraction grating splits the image beam coming from the microscope into a 5 × 5 grid of evenly illuminated 2D focal planes, each of which is recorded on one of the 25 synchronized machine vision cameras, such that every camera in the array captures a 3D volume focused on a different depth. To avoid blurred images, a custom-designed blazed grating in front of each camera lens corrects for the chromatic dispersion (which spreads out light of different wavelengths) introduced by the MFG.
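
To make the geometry concrete, the sketch below assigns each camera in a 5 × 5 array to the focal depth it records, assuming the 25 planes are spread evenly across the roughly 50 µm depth range quoted above; the ordering of planes across the grid and the exact spacing are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch of the 5 x 5 multifocus geometry: each camera (row, col)
# records one focal plane at a different depth. The ~50 um depth range is taken
# from the article; the even spacing and the ordering of planes across the grid
# are assumptions made purely for illustration.
DEPTH_RANGE_UM = 50.0   # total axial range covered by the 25 focal planes
GRID = 5                # 5 x 5 grid of diffraction orders / cameras

def camera_depth(row: int, col: int) -> float:
    """Assumed focal-plane depth (in um) recorded by camera (row, col)."""
    plane_index = row * GRID + col                  # 0 .. 24, one plane per camera
    spacing = DEPTH_RANGE_UM / (GRID * GRID - 1)    # ~2.1 um between focal planes
    return plane_index * spacing                    # 0 um (shallowest) .. 50 um (deepest)

for r in range(GRID):
    print("  ".join(f"{camera_depth(r, c):5.1f}" for c in range(GRID)))
```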

The team used computer simulations to reveal the optimal designs for the diffractive optics, before creating them at the University of California Santa Barbara nanofabrication facility by etching nanometre-scale patterns into glass. To encourage widespread use of the M25, the researchers have published the fabrication recipes for their diffraction gratings and made the bespoke software for acquiring the microscope images open source. In addition, the M25 mounts to the side port of a standard microscope, and uses off-the-shelf cameras and camera lenses.

The M25 can image a range of biological systems, since it can be used for fluorescence microscopy – in which fluorescent dyes or proteins are used to tag structures or processes within cells – and can also work in transmission mode, in which light is shone through transparent samples. The latter allows small organisms like C. elegans larvae, which are commonly used for biological research, to be studied without disrupting them.

The researchers performed various imaging tests using the prototype M25, including observations of the natural swimming motion of entire C. elegans larvae. This ability to study cellular-level behaviour in microscopic organisms over their whole volume may pave the way for more detailed investigations into how the nervous system of C. elegans controls its movement, and how genetic mutations, diseases or medicinal drugs affect that behaviour, Hirata-Miyasaki tells Physics World. He adds that such studies could further our understanding of human neurodegenerative and neuromuscular diseases.

“We live in a 3D world that is also very dynamic. So with this microscope I really hope that we can keep pushing the boundaries of acquiring live volumetric information from small biological organisms, so that we can capture interactions between them and also [see] what is happening inside cells to help us understand the biology,” he continues.

As part of his work at the Chan Zuckerberg Biohub, Hirata-Miyasaki is now developing deep-learning models for analysing multichannel live datasets of dynamic cells and organisms, like those acquired by the M25, “so that we can extract as much information as possible and learn from their dynamics”.

Meanwhile Abrahamsson, who is currently working in industry, hopes that other microscopy development labs will make their own M25 systems.  She is also considering commercializing the instrument to help ensure its widespread use.

The post High-speed 3D microscope improves live imaging of fast biological processes appeared first on Physics World.

LIGO could observe intermediate-mass black holes using artificial intelligence

10 September 2025 at 19:41

A machine learning-based approach that could help astronomers detect lower-frequency gravitational waves has been unveiled by researchers in the UK, US, and Italy. Dubbed deep loop shaping, the system would apply real-time corrections to the mirrors used in gravitational wave interferometers. This would dramatically reduce noise in the system, and could lead to a new wave of discoveries of black hole and neutron star mergers – according to the team.

In 2015 the two LIGO interferometers made the first ever observation of a gravitational wave, which was attributed to the merger of two black holes roughly 1.3 billion light-years from Earth.

Since then, numerous gravitational waves with frequencies ranging from about 30 Hz to 2000 Hz have been observed. These are believed to come from mergers of relatively small black holes and of neutron stars.

So far, however, the lower reaches of the gravitational wave frequency spectrum (corresponding to much larger black holes) have gone largely unexplored. Being able to detect gravitational waves at 10–30 Hz would allow us to observe the mergers of intermediate-mass black holes at 100–100,000 solar masses. We could also measure the eccentricities of binary black hole orbits. However, these detections are not currently possible because of vibrational noise in the mirrors at the end of each interferometer arm.
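
To see roughly why this frequency band corresponds to more massive black holes, a standard back-of-the-envelope estimate (not given in the article) is the gravitational-wave frequency at the innermost stable circular orbit, which scales inversely with the total mass of the binary:

```python
# Back-of-the-envelope estimate (not from the article): gravitational-wave
# frequency at the innermost stable circular orbit (ISCO) of a binary with
# total mass M, f_ISCO = c^3 / (6^(3/2) * pi * G * M) ~ 4.4 kHz * (M_sun / M).
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def f_isco_hz(total_mass_solar: float) -> float:
    """Approximate GW frequency (Hz) near merger for a binary of given total mass."""
    m = total_mass_solar * M_SUN
    return C**3 / (6**1.5 * math.pi * G * m)

for mass in (10, 100, 300, 1000):
    print(f"{mass:>6} M_sun  ->  ~{f_isco_hz(mass):6.0f} Hz")
# Since f_ISCO scales as 1/M, pushing the detection band down from ~30 Hz to
# ~10 Hz roughly triples the total binary mass whose merger signal falls in band.
```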

Subatomic precision

“As gravitational waves pass through LIGO’s two 4-km arms, they warp the space between them, changing the distance between the mirrors at either end,” explains Rana Adhikari at Caltech, who is part of the team that has developed the machine-learning technique. “These tiny differences in length need to be measured to an accuracy of 10⁻¹⁹ m, which is 1/10,000th the size of a proton. [Vibrational] noise has limited LIGO for decades.”

To minimize noise today, these mirrors are suspended by a multi-stage pendulum system to suppress seismic disturbances. The mirrors are also polished and coated to eliminate surface imperfections almost entirely. On top of this, a feedback control system corrects for many of the remaining vibrations and imperfections in the mirrors.

Yet for lower-frequency gravitational waves, even this subatomic level of precision and correction is not enough. As a laser beam impacts a mirror, the mirror can absorb minute amounts of energy – creating tiny thermal distortions that complicate mirror alignment. In addition, radiation pressure from the laser, combined with seismic motions that are not fully eliminated by the pendulum system, can introduce unwanted vibrations in the mirror.

The team proposed that this problem could finally be addressed with the help of artificial intelligence (AI). “Deep loop shaping is a new AI method that helps us to design and improve control systems, with less need for deep expertise in control engineering,” describes Jonas Buchli at Google DeepMind, who led the research. “While this is helping us to improve control over high precision devices, it can also be applied to many different control problems.”

Deep reinforcement learning

The team’s approach is based on deep reinforcement learning, whereby a system tests small adjustments to its controls and adapts its strategy over time through a feedback system of rewards and penalties.
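
The article does not describe the algorithm in detail, but the toy sketch below captures the general idea of reward-driven controller tuning: a feedback gain acting on a noisy one-dimensional “mirror” is perturbed at random, and adjustments that reduce the residual motion (the penalty) are kept. It is a deliberately simplified stand-in, not the deep loop shaping method itself.

```python
# Toy illustration of reward-driven controller tuning (NOT the actual deep loop
# shaping algorithm): a single feedback gain damping a noisy 1D "mirror" is
# perturbed at random, and changes that lower the mean residual motion are kept.
import random

def residual_motion(gain: float, steps: int = 2000, seed: int = 0) -> float:
    """Mean squared displacement of a noisy, lightly damped system under feedback."""
    rng = random.Random(seed)
    x, v, total = 0.0, 0.0, 0.0
    for _ in range(steps):
        force = -0.5 * x - gain * v + rng.gauss(0.0, 0.01)  # spring + feedback + noise
        v += force * 0.1
        x += v * 0.1
        total += x * x
    return total / steps

gain, best = 0.0, residual_motion(0.0)
rng = random.Random(42)
for episode in range(200):
    trial = gain + rng.gauss(0.0, 0.05)          # try a small random adjustment
    cost = residual_motion(trial, seed=episode)  # "penalty" = residual mirror motion
    if cost < best:                              # keep adjustments that are rewarded
        gain, best = trial, cost

print(f"learned damping gain ~ {gain:.2f}, residual motion ~ {best:.2e}")
```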

With deep loop shaping, the team introduced smarter feedback controls for the pendulum system suspending the interferometer’s mirrors. This system can adapt in real time to keep the mirrors aligned with minimal control noise – counteracting thermal distortions, seismic vibrations, and forces induced by radiation pressure.

“We tested our controllers repeatedly on the LIGO system in Livingston, Louisiana,” Buchli continues. “We found that they worked as well on hardware as in simulation, confirming that our controller keeps the observatory’s system stable over prolonged periods.”

Based on these promising results, the team is now hopeful that deep loop shaping could help to boost the cosmological reach of LIGO and other existing detectors, along with future generations of gravitational-wave interferometers.

“We are opening a new frequency band, and we might see a different universe much like the different electromagnetic bands like radio, light, and X-rays tell complementary stories about the universe,” says team member Jan Harms at the Gran Sasso Science Institute in Italy. “We would gain the ability to observe larger black holes, and to provide early warnings for neutron star mergers. This would allow us to tell other astronomers where to point their telescopes before the explosion occurs.”

The research is described in Science.

The post LIGO could observe intermediate-mass black holes using artificial intelligence appeared first on Physics World.

Physicists set to decide location for next-generation Einstein Telescope

10 September 2025 at 11:30

A decade ago, on 14 September 2015, the twin detectors of the Laser Interferometer Gravitational-Wave Observatory (LIGO) in Hanford, Washington, and Livingston, Louisiana, finally detected a gravitational wave. The LIGO detectors – two L-shaped laser interferometers with 4 km-long arms – had measured tiny differences in laser beams bouncing off mirrors at the end of each arm. The variations in the length of the arms, caused by the presence of a gravitational wave, were converted into the now famous audible “chirp signal”, which indicated the final approach between two merging black holes.

Since that historic detection, which led to the 2017 Nobel Prize for Physics, the LIGO detectors, together with VIRGO in Italy, have measured several hundred gravitational waves – from mergers of black holes to neutron-star collisions. More recently, they have been joined by the KAGRA detector in Japan, which is located some 200 m underground, shielding it from vibrations and environmental noise.

Yet the current number of gravitational waves could be dwarfed by what the planned Einstein Telescope (ET) would measure. This European-led, third-generation gravitational-wave detector would be built several hundred metres underground and be at least 10 times more sensitive than its second-generation counterparts, including KAGRA. Capable of “listening” to a volume of the universe a thousand times larger, the new detector would be able to spot many more sources of gravitational waves. In fact, the ET will be able to gather in a day what it took LIGO and VIRGO a decade to collect.

The ET is designed to operate in two frequency domains. The low-frequency regime – 2–40 Hz – is below current detectors’ capabilities and will let the ET pick up waves from more massive black holes. The high-frequency domain, on the other hand, would operate from 40 Hz to 10 kHz  and detect a wide variety of astrophysical sources, including merging black holes and other high-energy events. The detected signals from waves would also be much longer with the ET, lasting for hours. This would allow physicists to “tune in” much earlier as black holes or neutron stars approach each other.

Location, location, location

But all that is still a pipe dream, because the ET, which has a price tag of €2bn, is not yet fully funded and is unlikely to be ready until 2035 at the earliest. The precise costs will depend on the final location of the experiment, which is still up for grabs.

Three regions are vying to host the facility: the Italian island of Sardinia, the Belgian-German-Dutch border region and the German state of Saxony. Each candidate is currently investigating the suitability of its preferred site (see box below), the results of which will be published in a “bid book” by the end of 2026. The winning site will be picked in 2027 with construction beginning shortly after.

Other factors that will dictate where the ET is built include logistics in the host region, the presence of companies and research institutes (to build and exploit the facility) and government support. With the ET offering high-quality jobs, economic return, scientific appeal and prestige, that could give the German-Belgian-Dutch candidacy the edge given the three nations could share the cost.

Another major factor is the design of the ET. One proposal is to build it as an equilateral triangle with 10 km-long sides. The other is a twin L-shaped design, in which two detectors located far from each other would each have 15 km-long arms. The latter design is similar to the two LIGO over-ground detectors, which are 3000 km apart. If the “2L design” is chosen, the detector would then be built at two of the three competing sites.

The 2L design is being investigated by all three sites, but those behind the Sardinia proposal strongly favour this approach. “With the detectors properly oriented relative to each other, this design could outperform the triangular design across all key scientific objectives,” claims Domenico D’Urso, scientific director of the Italian candidacy. He points to a study by the ET collaboration in 2023 that investigated the impact of the ET design on its scientific goals. “The 2L design enables, for example, more precise localization of gravitational wave sources, enhancing sky-position reconstruction,” he says. “And it provides superior overall sensitivity.”

Where could the next-generation Einstein Telescope be built?

Three sites are vying to host the Einstein Telescope (ET), with each offering various geological advantages. Lausitz in Saxony benefits from being a former coal-mining area. “Because of this mining past, the subsurface was mapped in great detail decades ago,” says Günther Hasinger, founding director of the German Center for Astrophysics, which is currently being built in Lausitz and would house the ET if picked. The granite formation in Lausitz is also suitable for a tunnel complex because the rock is relatively dry. Not much water would need to be pumped away, causing less vibration.

Thanks to the former lead, zinc and silver mine of Sos Enattos, meanwhile, the subsurface near Nuoro in Sardinia – another potential location for the ET – is also well known. The island is on a very stable, tectonic microplate, making it seismically quiet. Above ground, the area is undeveloped and sparsely populated, further shielding the experiment from noise.

The third ET candidate, lying near the point where Belgium, Germany and the Netherlands meet, also has a hard subsurface, which is needed for the tunnels. It is topped by a softer, clay-like layer that would dampen vibrations from traffic and industry. “We are busy investigating the suitability of the subsurface and the damping capacity of the top layer,” says Wim Walk of the Dutch Center for Subatomic Physics (Nikhef), which is co-ordinating the candidacy for this location. “That research requires a lot of work, because the subsurface here has not yet been properly mapped.”

Localization is important for multimessenger astronomy. In other words, if a gravitational-wave source can be located quickly and precisely in the sky, other telescopes can be pointed towards it to observe any accompanying light or other electromagnetic (EM) signals. This is what happened after LIGO detected a gravitational wave on 17 August 2017 originating from a neutron-star collision. Dozens of ground- and space-based telescopes were able to pick up a gamma-ray burst and the subsequent EM afterglow.

The triangle design, however, is favoured by the Belgian-German-Dutch consortium. It would be the Earth-based equivalent of the European Space Agency’s planned LISA space-borne gravitational-wave detector, which will consist of three spacecraft in a triangular configuration and is set for launch in 2035, the same year that the ET could open. LISA would detect gravitational waves at much lower frequencies still, coming, for example, from mergers of supermassive black holes.

While the Earth-based triangle design would not be able to locate the source as precisely, it would – unlike the 2L design – be able to do “null stream” measurements. These would yield  a clearer picture of the noise from the environment and the detector itself, including  “glitches”, which are bursts of noise that overlap with gravitational-wave signals. “With a non-stop influx of gravitational waves but also of noise and glitches, we need some form of automatic clean-up of the data,” says Jan Harms, a physicist at the Gran Sasso Science Institute in Italy and member of the scientific ET collaboration. “The null stream could provide that.”

However, it is not clear whether the null stream would be a fundamental advantage for data analysis; Harms and colleagues think more work is needed. “For example, different forms of noise could be connected to each other, which would compromise the null stream,” he says. A further problem is that a detector with a null stream has never been realized, and that applies to the triangle design in general, “while the 2L design is well established in the scientific community,” adds D’Urso.

Backers of the triangle design see the ET as being part of a wider, global network of third-generation detectors, where the localization argument no longer matters. Indeed, the US already has plans for an above-ground successor to LIGO. Known as the Cosmic Explorer, it would feature two L-shaped detectors with arm lengths of up to 40 km. But with US politics in turmoil, it is questionable how realistic these plans are.

Matthew Evans, a physicist at the Massachusetts Institute of Technology and member of the LIGO collaboration, recognizes the “network argument”. “I think that the global gravitational waves community are double counting in some sense,” he says. Yet for Evans it is all about the exciting discoveries that could be made with a next-generation gravitational-wave detector. “The best science will be done with ET as 2Ls,” he says.

The post Physicists set to decide location for next-generation Einstein Telescope appeared first on Physics World.

New hollow-core fibres break a 40-year limit on light transmission

9 September 2025 at 11:32

Optical fibres form the backbone of the Internet, carrying light signals across the globe. But some light is always lost as it travels, becoming attenuated by about 0.14 decibels per kilometre even in the best fibres. That means signals must be amplified every few dozen kilometres – a performance that hasn’t improved in nearly four decades.

Physicists at the University of Southampton, UK have now developed an alternative that could call time on that decades-long lull. Writing in Nature Photonics, they report hollow-core fibres that exhibit 35% less attenuation while transmitting signals 45% faster than standard glass fibres.

“A bit like a soap bubble”

The core of conventional fibres is made of pure glass and is surrounded by a cladding of slightly different glass. Because the core has a higher refractive index than the cladding, light entering the fibre reflects internally, bouncing back and forth in a process known as total internal reflection. This effect traps the light and guides it along the fibre’s length.

The Southampton team led by Francesco Poletti swapped the standard glass core for air. Because air is more transparent than glass, channelling light through it cuts down on scattering and speeds up signals. The problem is that air’s refractive index is lower, so the new fibre can’t use total internal reflection. Instead, Poletti and colleagues guided the light using a mechanism called anti-resonance, which requires the walls of the hollow core to be made from ultra-thin glass membranes.

“It’s a bit like a soap bubble,” Poletti says, explaining that such bubbles appear iridescent because their thin films reflect some wavelengths and let others through. “We designed our fibre the same way, with glass membranes that reflect light at certain frequencies back into the core.” That anti-resonant reflection, he adds, keeps the light trapped and moving through the fibre’s hollow centre.

Greener telecommunications

To make the new air-core fibre, the researchers stacked thin glass capillaries in a precise pattern, forming a hollow channel in the middle. Heating and drawing the stack into a hair-thin filament preserved this pattern on a microscopic scale. The finished fibre has a nested design: an air core surrounded by ultra-thin layers that provide anti-resonant guidance and cut down on leakage.

To test their design, the team measured transmission through a full spool of fibre, then cut the fibre shorter and compared the results. They also fired in light pulses and tracked the echoes. Their results show that the hollow fibres reduce attenuation to just 0.091 decibels per kilometre. This lower loss implies that fewer amplifiers would be needed in long cables, lowering costs and energy use. “There’s big potential for greener telecommunications when using our fibres,” says Poletti.

Poletti adds that reduced attenuation (and thus lower energy use) is only one of the new fibre’s advantages. At the 0.14 dB/km attenuation benchmark, the new hollow fibre supports a bandwidth of 54 THz compared to 10 THz for a normal fibre. At the reduced 0.1 dB/km attenuation, the bandwidth is still 18 THz, which is close to twice that of a normal cable. This means that a single strand can carry far more channels at once.

Perhaps the most impressive advantage is that because the speed of light is faster in air than in glass, data could travel the same distance up to 45% faster. “It’s almost the same speed light takes when we look at a distant star,” Poletti says. The resulting drop in latency, he adds, could be crucial for real-time services like online gaming or remote surgery, and could also speed up computing tasks such as training large language models.
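
These figures can be sanity-checked with simple arithmetic. The sketch below estimates amplifier spacing and one-way latency for the two fibre types, using the quoted losses together with some illustrative assumptions: a 20 dB loss budget between amplifiers, group indices of about 1.468 for a silica core and 1.0003 for air, and a 5000 km route.

```python
# Rough sanity checks on the quoted fibre numbers. The 0.14 and 0.091 dB/km
# losses are from the article; the 20 dB amplifier loss budget, the group
# indices (1.468 for silica, ~1.0003 for air) and the 5000 km route are
# assumptions chosen purely for illustration.
C_KM_PER_S = 299_792.458

def amp_spacing_km(loss_db_per_km: float, budget_db: float = 20.0) -> float:
    """Distance over which the signal stays within the assumed loss budget."""
    return budget_db / loss_db_per_km

def one_way_latency_ms(length_km: float, group_index: float) -> float:
    """Propagation delay for light travelling length_km in a medium of given index."""
    return length_km * group_index / C_KM_PER_S * 1000.0

print(f"solid glass : amplifier every ~{amp_spacing_km(0.14):.0f} km, "
      f"latency {one_way_latency_ms(5000, 1.468):.1f} ms over 5000 km")
print(f"hollow core : amplifier every ~{amp_spacing_km(0.091):.0f} km, "
      f"latency {one_way_latency_ms(5000, 1.0003):.1f} ms over 5000 km")
# ~143 km vs ~220 km between amplifiers, and ~24.5 ms vs ~16.7 ms of latency:
# consistent with the ~35% lower loss and up-to-45% speed advantage quoted above.
```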

Field testing

As well as the team’s laboratory tests, Microsoft has begun testing the fibres in real systems, installing segments in its network and sending live traffic through them. These trials prove the hollow-core design works with existing telecom equipment, opening the door to gradual rollout. In the longer run, adapting amplifiers and other gear that are currently tuned for solid glass fibres could unlock even better performance.

Poletti believes the team’s new fibres could one day replace existing undersea cables. “I’ve been working on this technology for more than 20 years,” he says, adding that over that time, scepticism has given way to momentum, especially now with Microsoft as an industry partner. But scaling up remains a real hurdle. Making short, flawless samples is one thing; mass-producing thousands of kilometres at low cost is another. The Southampton team is now refining the design and pushing toward large-scale manufacturing. They’re hopeful that improvements could slash losses by another order of magnitude and that the anti-resonant design can be tuned to different frequency bands, including those suited to new, more efficient amplifiers.

Other experts agree the advance marks a turning point. “The work builds on decades of effort to understand and perfect hollow-core fibres,” says John Ballato, whose group at Clemson University in the US develops fibres with specialty cores for high-energy laser and biomedical applications. While Ballato notes that such fibres have been used commercially in shorter-distance communications “for some years now”, he believes this work will open them up to long-haul networks.

The post New hollow-core fibres break a 40-year limit on light transmission appeared first on Physics World.

Quantum foundations: towards a coherent view of physical reality

3 September 2025 at 12:00

One hundred years after its birth, quantum mechanics remains one of the most powerful and successful theories in all of science. From quantum computing to precision sensors, its technological impact is undeniable – and one reason why 2025 is being celebrated as the International Year of Quantum Science and Technology.

Yet as we celebrate these achievements, we should still reflect on what quantum mechanics reveals about the world itself. What, for example, does this formalism actually tell us about the nature of reality? Do quantum systems have definite properties before we measure them? Do our observations create reality, or merely reveal it?

These are not just abstract, philosophical questions. Having a clear understanding of what quantum theory is all about is essential to its long-term coherence and its capacity to integrate with the rest of physics. Unfortunately, there is no scientific consensus on these issues, which continue to provoke debate in the research community.

That uncertainty was underlined by a recent global survey of physicists about quantum foundational issues, conducted by Nature (643 1157). It revealed a persistent tension between “realist” views, which seek an objective, visualizable account of quantum phenomena, and “epistemic” views that regard the formalism as merely a tool for organizing our knowledge and predicting measurement outcomes.

Only 5% of the 1100 people who responded to the Nature survey expressed full confidence in the Copenhagen interpretation, which is still prevalent in textbooks and laboratories. Further divisions were revealed over whether the wavefunction is a physical entity, a mere calculation device, or a subjective reflection of belief. The lack of agreement on such a central feature underscores the theoretical fragility underlying quantum mechanics.

More broadly, 75% of respondents believe that quantum theory will eventually be replaced, at least partially, by a more complete framework. Encouragingly, 85% agree that attempts to interpret the theory in intuitive or physical terms are valuable. This willingness to explore alternatives reflects the intellectual vitality of the field but also underscores the inadequacy of current approaches.

Beyond interpretation

We believe that this interpretative proliferation stems from a deeper problem: quantum mechanics lacks a well-defined physical foundation. It describes the statistical outcomes of measurements, but it does not explain the mechanisms behind them. The concept of causality has been largely abandoned in favour of operational prescriptions, with the result that quantum theory works impressively in practice but remains conceptually opaque.

In our view, the way forward is not to multiply interpretations or continue debating them, but to pursue a deeper physical understanding of quantum phenomena. One promising path is stochastic electrodynamics (SED), a classical theory augmented by a random electromagnetic background field, the real vacuum or zero-point field discovered by Max Planck as early as 1911. This framework restores causality and locality by explaining quantum behaviour as the statistical response of particles to this omnipresent background field.

Over the years, several researchers from different lines of thought have contributed to SED. Since our early days with Trevor Marshall, Timothy Boyer and others, we have refined the theory to the point that it can now account for the emergence of features that are considered building blocks of quantum formalism, such as the basic commutator and Heisenberg inequalities.

Particles acquire wave-like properties not by intrinsic duality, but as a consequence of their interaction with the vacuum field. Quantum fluctuations, interference patterns and entanglement emerge from this interaction, without the need to resort to non-local influences or observer-dependent realities. The SED approach is not merely mechanical, but rather electrodynamic.
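
For reference, the “basic commutator and Heisenberg inequalities” mentioned above are the standard textbook relations (not restated in the original piece), which SED seeks to recover as statistical consequences of a particle’s interaction with the zero-point field rather than as postulates:

```latex
[\hat{x},\hat{p}] = \hat{x}\hat{p} - \hat{p}\hat{x} = i\hbar,
\qquad
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}.
```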

Coherent thoughts

We’re not claiming that SED is the final word. But it does offer a coherent picture of microphysical processes based on physical fields and forces. Importantly, it doesn’t abandon the quantum formalism but rather reframes it as an effective theory – a statistical summary of deeper dynamics. Such a perspective enables us to maintain the successes of quantum mechanics while seeking to explain its origins.

For us, SED highlights that quantum phenomena can be reconciled with concepts central to the rest of physics, such as realism, causality and locality. It also shows that alternative approaches can yield testable predictions and provide new insights into long-standing puzzles. One phenomenon lying beyond current quantum formalism that we could now test, thanks to progress in experimental physics, is the predicted violation of Heisenberg’s inequalities over very short time periods.

As quantum science continues to advance, we must not lose sight of its conceptual foundations. Indeed, a coherent, causally grounded understanding of quantum mechanics is not a distraction from technological progress but a prerequisite for its full realization. By turning our attention once again to the foundations of the theory, we may finally complete the edifice that began to rise a century ago.

The centenary of quantum mechanics should be a time not just for celebration but critical reflection too.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ.

Find out more on our quantum channel.

The post Quantum foundations: towards a coherent view of physical reality appeared first on Physics World.

Broadband wireless gets even broader thanks to integrated transmitter

3 September 2025 at 10:00

Researchers in China have unveiled an ultrabroadband system that uses the same laser and resonator to process signals at frequencies ranging from below 1 GHz up to more than 100 GHz. The system, which is based on a thin-film lithium niobate resonator developed in 2018 by members of the same team, could facilitate the spread of the so-called “Internet of things” in which huge numbers of different devices are networked together at different frequency bands to avoid interference.

Modern complementary metal oxide semiconductor (CMOS) electronic devices generally produce signals at frequencies of a few GHz. These signals are then often shifted into other frequency bands for processing and transmission. For example, sending signals long distances down silica optical fibres generally means using a frequency of around 200 THz, as silica is transparent at the corresponding “telecoms” wavelength of 1550 nm.

One of the most popular materials for performing this conversion is lithium niobate. This material has been called “the silicon of photonics” because it is highly nonlinear, allowing optical signals to be generated efficiently at a wide range of frequencies.

Bulk lithium niobate modulators are, however, ill-suited to integrated devices. In 2018 Cheng Wang and colleagues, led by Marko Lončar of Harvard University in Massachusetts, US, developed a miniaturized, thin-film version that used an interferometric design to create a much stronger electro-optic effect over a shorter distance. “Usually, the bandwidth limit is set by the radiofrequency loss,” explains Wang, who is now at the City University of Hong Kong, China. “Being shorter means you can go to much higher frequencies.”

A broadband data transmission system

In the new work, Wang, together with researchers at Peking University in China and the University of California, Santa Barbara in the US, used an optimized version of this setup to make a broadband data transmission system. They divided the output of a telecom-wavelength oscillator into two arms. In one of these arms, optical signal modulation software imprinted a complex amplitude-phase pattern on the wave. The other arm was exposed to the data signal and a lithium niobate microring resonator. The two arms were then recombined at a photodetector, and the frequency difference between the two arms (in the GHz range) was transmitted using an antenna to a detector, where the process was reversed.

Crucially, the offset between the centre frequencies of the two arms (the frequency of the beat note at the photodetector when the two arms are recombined) is determined solely by the frequency shift imposed by the lithium niobate resonator. This can be tuned anywhere between 0.5 GHz and 115 GHz via the thermo-optic effect – essentially, incorporating a small electronic heater and using it to tune the refractive index. The signal is then encoded in modulations of the beat frequency, with additional information imprinted into the phase of the waves.
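
The heterodyne principle at the heart of this scheme can be illustrated with a toy numerical example: two tones offset by a small frequency difference are summed and detected by squaring, as a photodiode does, leaving a beat note at exactly that offset. The numbers below are arbitrary illustration values, not the experimental parameters.

```python
# Toy illustration of heterodyne down-conversion (arbitrary numbers, not the
# experimental parameters): two tones offset by df are combined on a square-law
# detector, and the detected power carries a beat note at exactly df.
import math

f0 = 1000.0                    # "optical" carrier of arm 1 (arbitrary units)
df = 7.0                       # frequency shift imposed on arm 2 (the tunable offset)
dt = 1.0 / (20.0 * (f0 + df))  # sample finely compared with both carriers
n = 20000

# Square-law detection of the summed fields, as a photodiode would do.
power = []
for k in range(n):
    t = k * dt
    field = math.cos(2 * math.pi * f0 * t) + math.cos(2 * math.pi * (f0 + df) * t)
    power.append(field * field)
mean_p = sum(power) / n
ac = [p - mean_p for p in power]

# Crude spectrum scan of the detected power at low frequencies: the strongest
# component sits at the arm-to-arm offset df, not at f0 or f0 + df.
best_f, best_amp = 0.0, 0.0
for f_test in [0.5 * j for j in range(1, 61)]:   # scan 0.5 ... 30
    re = sum(a * math.cos(2 * math.pi * f_test * k * dt) for k, a in enumerate(ac))
    im = sum(a * math.sin(2 * math.pi * f_test * k * dt) for k, a in enumerate(ac))
    if math.hypot(re, im) > best_amp:
        best_f, best_amp = f_test, math.hypot(re, im)
print(f"strongest low-frequency component at ~{best_f} (expected {df})")
```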

The researchers say this system is an improvement on standard electronic amplifiers because such devices usually operate in relatively narrow bands. Using them to make large jumps in frequency therefore means that signals need to be shifted multiple times. This introduces cumulative noise into the signal and is also problematic for applications such as robotic surgery, where the immediate arrival of a signal can literally be a matter of life and death.

Internet of things applications

The researchers demonstrated wireless data transfer across a distance of 1.3 m, achieving speeds of up to 100 gigabits per second. In the present setup, they used three different horn antennas to transmit microwaves of different frequencies through free space, but they hope to improve this: “That is our next goal – to get a fully frequency-tuneable link,” says Peking University’s Haowen Shu.

The researchers believe such a wideband setup could be crucial to the development of the “Internet of things”, in which all sorts of different electronic devices are networked together without unwanted interference. Atmospheric transparency windows below 6 GHz, where losses are lower and propagation distances longer, are likely to be essential for providing wireless Internet access to rural areas. Meanwhile, higher frequencies – with their higher data rates – will probably be needed for augmented reality and remote surgery applications.

Alan Willner, an electrical engineer and optical scientist at the University of Southern California, US, who was not involved in the research, thinks the team is on the right track. “You have lots of spectrum in various radio bands for wireless communications,” he says. “But how are you going to take advantage of these bands to transmit high data rates in a cost-effective and flexible way? Are you going to use multiple different systems – one each for microwave, millimetre wave, and terahertz?  Using one tuneable and reconfigurable integrated platform to cover these bands is significantly better. This research is a great step in that direction.”

The research is published in Nature.

The post Broadband wireless gets even broader thanks to integrated transmitter appeared first on Physics World.

Super sticky underwater hydrogels designed using data mining and AI

29 August 2025 at 10:00

The way in which new materials are designed is changing, with data becoming ever more important in the discovery and design process. Designing soft materials is a particularly tricky task that requires selection of different “building blocks” (monomers in polymeric materials, for example) and optimization of their arrangement in molecular space.

Soft materials also exhibit many complex behaviours that need to be balanced, and their molecular and structural complexities make it difficult for computational methods to help in the design process – often requiring costly trial and error experimental approaches instead. Now, researchers at Hokkaido University in Japan have combined artificial intelligence (AI) with data mining methods to develop an ultra-sticky hydrogel material suitable for very wet environments – a difficult design challenge because the properties that make materials soft don’t usually promote adhesion. They report their findings in Nature.

Challenges of designing sticky hydrogels

Hydrogels are permeable soft materials composed of interlinked polymer networks that hold water within the network. They are highly versatile, with properties that can be controlled by altering the chemical makeup and structure of the material.

Designing hydrogels computationally to perform a specific function is difficult, however, because the polymers used to build the hydrogel network can contain a plethora of chemical functional groups, complicating the discovery of suitable polymers and the structural makeup of the hydrogel. The properties of hydrogels are also influenced by factors including the molecular arrangement and intermolecular interactions between molecules (such as van der Waals forces and hydrogen bonds). There are further challenges for adhesive hydrogels in wet environments, as hydrogels will swell in the presence of water, which needs to be factored into the material design.

Data driven methods provide breakthrough

To develop a hydrogel with a strong and lasting underwater adhesion, the researchers mined data from the National Center for Biotechnology Information (NCBI) Protein database. This database contains the amino acid sequences responsible for adhesion in underwater biological systems – such as those found in bacteria, viruses, archaea and eukaryotes. The protein sequences were synthetically mimicked and adapted for the polymer strands in hydrogels.

“We were inspired by nature’s adhesive proteins, but we wanted to go beyond mimicking a few examples. By mining the entire protein database, we aimed to systematically explore new design rules and see how far AI could push the boundaries of underwater adhesion,” says co-lead author Hailong Fan.

The researchers used information from the database to initially design and synthesize 180 bioinspired hydrogels, each with a unique polymer network and all of which showed adhesive properties beyond other hydrogels. To improve them further, the team employed machine learning to create hydrogels demonstrating the strongest underwater adhesive properties to date, with instant and repeatable adhesive strengths exceeding 1 MPa – an order-of-magnitude improvement over previous underwater adhesives. In addition, the AI-designed hydrogels were found to be functional across many different surfaces in both fresh and saline water.
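
The article does not specify which machine-learning model the team used, but the general shape of such a data-driven screening loop can be sketched as follows: measured adhesion values for an initial set of candidates train a surrogate model, which then ranks a larger pool of untested compositions for the next round of synthesis. Everything in the snippet (the random features, the random-forest surrogate, the data sizes) is a generic illustration rather than the authors’ pipeline.

```python
# Generic sketch of an ML-guided materials screening loop (illustrative only;
# this is not the authors' actual pipeline). Each candidate hydrogel is reduced
# to a feature vector, a surrogate model is fitted to measured adhesion values,
# and untested candidates are ranked by predicted adhesion for the next round.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Stand-in data: 180 "synthesized" candidates with 10 composition features each
# and a measured adhesion strength (MPa); values are random placeholders.
X_measured = rng.random((180, 10))
y_measured = rng.random(180) * 1.2          # pretend adhesion strengths, MPa

# Fit a surrogate model to the measured candidates.
surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(X_measured, y_measured)

# Rank a larger pool of untested candidate compositions by predicted adhesion.
X_pool = rng.random((5000, 10))
predicted = surrogate.predict(X_pool)
top = np.argsort(predicted)[::-1][:10]
print("candidates suggested for the next round of synthesis:", top.tolist())
print("their predicted adhesion strengths (MPa):", np.round(predicted[top], 2).tolist())
```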

“The key achievement is not just creating a record-breaking underwater adhesive hydrogel but demonstrating a new pathway – moving from biomimetic experience to data-driven, AI-guided material design,” says Fan.

A versatile adhesive

The researchers took the three best-performing hydrogels and tested them in different wet environments to show that they could maintain their adhesive properties over long periods. One hydrogel was used to stick a rubber duck to a rock by the sea; the duck remained in place despite continuous wave impacts over many tide cycles. A second hydrogel was used to patch a 20 mm hole in a water-filled pipe, instantly stopping a high-pressure leak; it remained in place for five months without issue. The third hydrogel was placed under the skin of mice to demonstrate biocompatibility.

The super strong adhesive properties in wet environments could have far ranging applications, from biomedical engineering (prosthetic coatings or wearable biosensors) to deep-sea exploration and marine farming. The researchers also note that this data-driven approach could be adapted for designing other functional soft materials.

When asked about what’s next for this research, Fan says that “our next step is to study the molecular mechanisms behind these adhesives in more depth, and to expand this data-driven design strategy to other soft materials, such as self-healing and biomedical hydrogels”.

The post Super sticky underwater hydrogels designed using data mining and AI appeared first on Physics World.

Extremely stripped star reveals heavy elements as it explodes

28 August 2025 at 10:00
Stripped star Artist’s impression of the star that exploded to create SN 2021yfj, showing the ejection of silicon (grey), sulphur (yellow) and argon (purple) just before the final explosion. (Courtesy: WM Keck Observatory/Adam Makarenko)

For the first time, astronomers have observed clear evidence for a heavily stripped star that has shed many of its outer layers before its death in a supernova explosion. Led by Steve Schulze at Northwestern University, the team has spotted the spectral signatures of heavier elements that are usually hidden deep within stellar interiors.

Inside a star, atomic nuclei fuse together to form heavier elements in a process called nucleosynthesis. This releases a vast amount of energy that offsets the crushing force of gravity.

As stars age, different elements are consumed and produced. “Observations and models of stars tell us that stars are enormous balls of hydrogen when they are born,” Schulze explains. “The temperature and density at the core are so high that hydrogen is fused into helium. Subsequently, helium fuses into carbon, and this process continues until iron is produced.”

Ageing stars are believed to have an onion-like structure, with a hydrogen outer shell enveloping deeper layers of successively heavier elements. Near the end of a star’s life, inner-shell elements including silicon, sulphur, and argon fuse to form a core of iron. Unlike lighter elements, iron does not release energy as it fuses, but instead consumes energy from its surroundings. As a result, the star can no longer withstand its own gravity: it rapidly collapses in on itself and then explodes in a dramatic supernova.

Hidden elements

Rarely, astronomers can observe an old star that has blown out its outer layers before exploding. When the explosion finally occurs, heavier elements that are usually hidden within deeper shells create absorption lines in the supernova’s light spectrum, allowing astronomers to determine the compositions of these inner layers. So far, inner-layer elements as heavy as carbon and oxygen have been observed, but not direct evidence for elements in deeper layers.

Yet in 2021, a mysterious new observation was made by a programme of the Zwicky Transient Facility headed by Avishay Gal-Yam at the Weizmann Institute of Science in Israel. The team was scanning the sky for signs of infant supernovae at the very earliest stages following their initial explosion.

“On 7 September 2021 it was my duty to look for infant supernovae,” Schulze recounts. “We discovered SN 2021yfj due to its rapid increase in brightness. We immediately contacted Alex Filippenko’s group at the University of California Berkeley to ask whether they could obtain a spectrum of this supernova.”

When the results arrived, the team realised that the absorption lines in the supernova’s spectrum were unlike anything they had encountered previously. “We initially had no idea that most of the features in the spectrum were produced by silicon, sulphur, and argon,” Schulze continues. Gal-Yam took up the challenge of identifying the mysterious features in the spectrum.

Shortly before death

In the meantime, the researchers examined simultaneous observations of SN 2021yfj, made by a variety of ground- and space-based telescopes. When Gal-Yam’s analysis was complete, all of the team’s data confirmed the same result. “We had detected a supernova embedded in a shell of material rich in silicon, sulphur, and argon,” Schulze describes. “These elements are formed only shortly before a star dies, and are often hidden beneath other materials – therefore, they are inaccessible under normal circumstances.”

The result provided clear evidence that the star had been more heavily stripped back towards the end of its life than any other observed previously: shedding many of its outer layers before the final explosion.

“SN 2021yfj demonstrates that stars can die in far more extreme ways than previously imagined,” says Schulze. “It reveals that our understanding of how stars evolve and die is still not complete, despite billions of them having already been studied.” By studying their results, the team now hopes that astronomers can better understand the later stages of stellar evolution, and the processes leading up to these dramatic ends.

The research is described in Nature.

The post Extremely stripped star reveals heavy elements as it explodes appeared first on Physics World.

Famous double-slit experiment gets its cleanest test yet

27 August 2025 at 14:00

Scientists at the Massachusetts Institute of Technology (MIT) in the US have achieved the cleanest demonstration yet of the famous double-slit experiment. Using two single atoms as the slits, they inferred the photon’s path by measuring subtle changes in the atoms’ properties after photon scattering. Their results matched the predictions of quantum theory: interference fringes when no path was observed, two bright spots when it was.

First performed in the 1800s by Thomas Young, the double-slit experiment has been revisited many times. Its setup is simple: send light toward a pair of slits in a screen and watch what happens. Its outcome, however, is anything but. If the light passes through the slits unobserved, as it did in Young’s original experiment, an interference pattern of bright and dark fringes appears, like ripples overlapping in a pond. But if you observe which slit the light goes through, as Albert Einstein proposed in a 1920s “thought experiment” and as other physicists have since demonstrated in the laboratory, the fringes vanish in favour of two bright spots. Hence, whether light acts as a wave (fringes) or a particle (spots) depends on whether anyone observes it. Reality itself seems to shift with the act of looking.

The great Einstein–Bohr debate

Einstein disliked the implications of this, and he and Niels Bohr debated them extensively. According to Einstein, observation only has an effect because it introduces noise. If the slits were mounted on springs, he suggested, their recoil would reveal the photon’s path without destroying the fringes.

Bohr countered that measuring the photon’s recoil precisely enough to reveal its path would blur the slits’ positions and erase interference. For him, this was not a flaw of technology but a law of nature – namely, his own principle of complementarity, which states that quantum systems can show wave-like or particle-like behaviour, but never both at once.

Physicists have performed numerous versions of the experiment since, and each time the results have sided with Bohr. Yet the unavoidable noise in real set-ups left room for doubt that this counterintuitive rule was truly fundamental.

Atoms as slits

To celebrate the International Year of Quantum Science and Technology, physicists in Wolfgang Ketterle’s group at MIT performed Einstein’s thought experiment directly. They began by cooling more than 10,000 rubidium atoms to near absolute zero and trapping them in a laser-made lattice such that each one acted as an individual scatterer of light. If a faint beam of light was sent through this lattice, a single photon could scatter off an atom.

Since the beam was so faint, the team could collect very little information per experimental cycle. “This was the most difficult part,” says team member Hanzhen Lin, a PhD student at MIT. “We had to repeat the experiment thousands of times to collect enough data.”

In every such experiment, the key was to control how much photon path information the atoms provided. The team did this by adjusting the laser traps to tune the “fuzziness” of the atoms’ position. Tightly trapped atoms had well-defined positions and so, according to Heisenberg’s uncertainty principle, they could not reveal much about the photon’s path. In these experiments, fringes appeared. Loosely trapped atoms, in contrast, had more position uncertainty and were able to move, meaning an atom struck by a photon could carry a trace of that interaction. This faint record was enough to collapse the interference fringes, leaving only spots. Once again, Bohr was right.
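
The trade-off described here has a standard quantitative form, not quoted in the article, relating the visibility V of the interference fringes to the distinguishability D of the photon’s path:

```latex
V^{2} + D^{2} \;\leq\; 1
```

Tightly trapped atoms provide essentially no which-path information (D ≈ 0), allowing full-contrast fringes (V ≈ 1); loosely trapped atoms that record a trace of the photon’s recoil raise D, and the fringe visibility must fall accordingly.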

While Lin acknowledges that theirs is not the first experiment to measure scattered light from trapped atoms, he says it is the first to repeat the measurements after the traps were removed, while the atoms floated freely. This went further than Einstein’s spring-mounted slit idea, and (since the results did not change) eliminated the possibility that the traps were interfering with the observation.

“I think this is a beautiful experiment and a testament to how far our experimental control has come,” says Thomas Hird, a physicist who studies atom-light interactions at the University of Birmingham, UK, and was not involved in the research. “This probably far surpasses what Einstein could have imagined possible.”

The MIT team now wants to observe what happens when there are two atoms per site in the lattice instead of one. “The interactions between the atoms at each site may give us interesting results,” Lin says.

The team describes the experiment in Physical Review Letters.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

The post Famous double-slit experiment gets its cleanest test yet appeared first on Physics World.

Optical imaging tool could help diagnose and treat sudden hearing loss

26 August 2025 at 15:00

Optical coherence tomography (OCT), a low-cost imaging technology used to diagnose and plan treatment for eye diseases, also shows potential as a diagnostic tool for assessing rapid hearing loss.

Researchers at the Keck School of Medicine of USC have developed an OCT device that can acquire diagnostic quality images of the inner ear during surgery. These images enable accurate measurement of fluids in the inner ear compartments. The team’s proof-of-concept study, described in Science Translational Medicine, revealed that the fluid levels correlated with the severity of a patient’s hearing loss.

An imbalance between the two inner ear fluids, endolymph and perilymph, is associated with sudden, unexplainable hearing loss and acute vertigo, symptoms of ear conditions such as Ménière’s disease, cochlear hydrops and vestibular schwannomas. This altered fluid balance – known as endolymphatic hydrops (ELH) – occurs when the volume of endolymph increases in one compartment and the volume of perilymph decreases in the other.

Because the fluid chambers of the inner ear are so small, there has previously been no effective way to assess endolymph-to-perilymph fluid balance in a living patient. Now, the Keck OCT device enables imaging of inner ear structures in real time during mastoidectomy – a procedure performed during many ear and skull base surgeries, and which provides optical access to the lateral and posterior semicircular canals (SCCs) of the inner ear.

OCT offers a quicker, more accurate and less expensive way to see inner ear fluids, hair cells and other structures compared with the “gold standard” MRI scans. The researchers hope that ultimately, the device will evolve into an outpatient assessment tool for personalized treatments for hearing loss and vertigo. If it can be used outside a surgical suite, OCT technology could also support the development and testing of new treatments, such as gene therapies to regenerate lost hair cells in the inner ear.

Intraoperative OCT

The intraoperative OCT system, developed by senior author John Oghalai and colleagues, comprises an OCT adaptor containing the entire interferometer, which attaches to the surgical microscope, plus a medical cart containing electronic devices including the laser, detector and computer.

The OCT system uses a swept-source laser with a central wavelength of 1307 nm and a bandwidth of 89.84 nm. The scanning beam spot size is 28.8 µm and has a depth-of-focus of 3.32 mm. The system’s axial resolution of 14.0 µm and lateral resolution of 28.8 µm provide an in-plane resolution of 403 µm².

The laser output is directed into a 90:10 optical fibre fused coupler, with the 10% portion illuminating the interferometer’s reference arm. The other 90% illuminates the sample arm, passes through a fibre-optic circulator, and is combined with a red aiming beam that’s used to visually position the scanning beam on the region-of-interest.

After the OCT and aiming beams are guided onto the sample for scanning, and the interferometric signal needed for OCT imaging is generated, two output ports of the 50:50 fibre optic coupler direct the light signal into a balanced photodetector for conversion into an electronic signal. A low-pass dichroic mirror allows back-reflected visible light to pass through into an eyepiece and a camera. The surgeon can then use the eyepiece and real-time video to ensure correct positioning for the OCT imaging.

Feasibility study

The team performed a feasibility study on 19 patients undergoing surgery at USC to treat Ménière’s disease (an inner-ear disorder), vestibular schwannoma (a benign tumour) or middle-ear infection with normal hearing (the control group). All surgical procedures required a mastoidectomy.

Immediately after performing the mastoidectomy, the surgeon positioned the OCT microscope with the red aiming beam targeted at the SCCs of the inner ear. After acquiring a 3D volume image of the fluid compartments in the inner ear, which took about 2 min, the OCT microscope was removed from the surgical suite and the surgical procedure continued.

The OCT system could clearly distinguish the two fluid chambers within the SCCs. The researchers determined that higher endolymph levels correlated with patients having greater hearing loss. In addition to accurately measuring fluid levels, the system revealed that patients with vestibular schwannoma had higher endolymph-to-perilymph ratios than patients with Ménière’s disease, and that compared with the controls, both groups had increased endolymph and reduced perilymph, indicating ELH.

The success of this feasibility study may help improve current microsurgery techniques, by guiding complex temporal bone surgery that requires drilling close to the inner ear. OCT technology could help reduce surgical damage to delicate ear structures and better distinguish brain tumours from healthy tissue. The OCT system could also be used to monitor the endolymph-to-perilymph ratio in patients with Ménière’s disease undergoing endolymphatic shunting, to verify that the procedure adequately decompresses the endolymphatic space. Efforts to make a smaller, less expensive system for these types of surgical use are underway.

The researchers are currently working to improve the software and image processing techniques in order to obtain images from patients without having to remove the mastoid bone, which would enable use of the OCT system for outpatient diagnosis.

The team also plans to adapt a handheld version of an OCT device currently used to image the tympanic membrane and middle ear to enable imaging of the human cochlea in the clinic. Imaging down the ear canal non-invasively offers many potential benefits when diagnosing and treating patients who do not require surgery. For example, patients determined to have ELH could be diagnosed and treated rapidly, a process that currently takes 30 days or more.

Oghalai and colleagues are optimistic about improvements being made in OCT technology, particularly in penetration depth and tissue contrast. “This will enhance the utility of this imaging modality for the ear, complementing its potential to be completely non-invasive and expanding its indication to a wider range of diseases,” they write.

The post Optical imaging tool could help diagnose and treat sudden hearing loss appeared first on Physics World.

Electrochemical loading boosts deuterium fusion in a palladium target

25 août 2025 à 10:02

Researchers in Canada have used electrochemistry to increase the rate of nuclear fusion within a metal target that is bombarded with high-energy deuterium ions. While the process is unlikely to lead to a new source of energy – it consumes far more energy than it produces – further research could provide new insights into fusion and other areas of science.

Although modern fusion reactors are huge projects sometimes costing billions, the first evidence for an artificial fusion reaction – observed by Mark Oliphant and Ernest Rutherford in 1934 – was a simple experiment in which deuterium nuclei in a solid target were bombarded with deuterium ions.

Palladium is a convenient target for such experiments because the metal’s lattice has the unusual propensity to selectively absorb hydrogen (and deuterium) atoms. In 1989 the chemists Stanley Pons of the University of Utah and Martin Fleischmann of the University of Southampton excited the world by claiming that the electrolysis of heavy water using a palladium cathode caused absorbed deuterium atoms to undergo spontaneous nuclear fusion under ambient conditions (with no ion bombardment). However, this observation of “cold fusion” could not be reproduced by others.

Now, Curtis Berlinguette at the University of British Columbia and colleagues have looked at whether electrochemistry could enhance the rate of fusion triggered by bombarding palladium with high-energy deuterium ions.

Benchtop accelerator

In the new work, the researchers used a palladium foil as the cathode in an electrochemical cell that was used in the electrolysis of heavy water. The other side of the cathode was the target for a custom-made benchtop megaelectronvolt particle accelerator. Kuo-Yi Chen, a postdoc in Berlinguette’s group, developed a microwave plasma thruster that was used to dissociate deuterium into ions. “Then we have a magnetic field that directs the ions into that metal target,” explains Berlinguette. The process, called plasma immersion ion implantation, is sometimes used to dope semiconductors, but has never previously been used to trigger nuclear fusion. Their apparatus is dubbed the Thunderbird Reactor.

The researchers used a neutron detector surrounding the apparatus to count the fusion events occurring. They found that, when they turned on the reactor, they initially detected very few events. However, as the amount of deuterium implanted in the palladium grew, the number of fusion events grew and eventually plateaued. The researchers then switched on the electrochemical cell, driving deuterium into the palladium from the other side using a simple lead-acid battery. They found that the number of fusion events detected increased another 15%.

Currently, the reactor produces less than 10⁻¹⁰ times the amount of energy it consumes. However, the researchers believe it could be used in future research. “We provide the community with an apparatus to study fusion reactions at lower energy conditions than has been done before,” says Berlinguette. “It’s an uncharted experimental space so perhaps there might be some interesting surprises there… What we are really doing is providing the first clear experimental link between electrochemistry and fusion science.”

Berlinguette also notes that, even if the work never finds any productive application in nuclear fusion research, the techniques involved could be useful elsewhere. In high temperature superconductivity, for example, researchers often use extreme pressures to create metal hydrides: “Now we’re showing you can do this using electrochemistry instead,” he says. He also points to the potential for deuteration of drugs, which is an active area of research in pharmacology.

The research is described in a paper in Nature, with Chen as lead author.

Jennifer Dionne and her graduate student Amy McKeown-Green at Stanford University in the US are impressed: “In the work back in the 1930s they had a static target,” says McKeown-Green. “This is a really cool example of how you can perturb the system in this low-energy, sub-million Kelvin regime.” She would be interested to see further analysis on exactly what the temperature is and whether other metals show similar behaviours.

“Hydrogen and elements like deuterium tend to sit in the interstitial sites in the palladium lattice and, at room temperature and pressure, about 70% of those will be full,” explains Dionne. “A cool thing about this paper is that they showed how an electrical bias increases the amount of deuteration of the target. It was either completely obvious or completely counter-intuitive depending on how you look at it, and they’ve proved definitively that you can increase the amount of deuteration and then increase the fusion rate.”

The post Electrochemical loading boosts deuterium fusion in a palladium target appeared first on Physics World.

Tenured scientists in the US slow down and produce less impactful work, finds study

23 août 2025 à 16:00

Researchers in the US who receive tenure produce more novel but less impactful work, according to an analysis of the output of more than 12,000 academics across 15 disciplines. The study also finds that publication rates rise steeply and steadily during tenure-track, typically peaking the year before a scientist receives a permanent position. After tenure, their average publication rate settles near the peak value.

Carried out by data scientists led by Giorgio Tripodi from Northwestern University in Illinois, the study examined the publication history of academics five years before tenure and five years after. The researchers say that the observed pattern – a rise before tenure, followed by a peak and then a steady level – is highly reproducible.
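The core of this kind of alignment is simple enough to sketch (this is illustrative only, not the authors’ actual pipeline; the file name and column names are hypothetical): re-index each researcher’s annual publication counts relative to their tenure year, then average across the sample.

```python
# Illustrative sketch: align annual publication counts to the tenure year and
# average across researchers. Input format and names are hypothetical.
import pandas as pd

# one row per researcher per year: researcher_id, year, tenure_year, n_papers
df = pd.read_csv("publications.csv")

df["rel_year"] = df["year"] - df["tenure_year"]    # 0 = year tenure was granted
window = df[df["rel_year"].between(-5, 5)]         # five years either side of tenure

trajectory = window.groupby("rel_year")["n_papers"].mean()
print(trajectory)   # the reported pattern: a steep rise peaking around rel_year = -1
```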

“Tenure in the US academic system is a very peculiar contract,” explains Tripodi. “It [features] a relatively long probation period followed by a permanent appointment [which is] a strong incentive to maximize research output and avoid projects that are more likely to fail during the tenure track.”

The study reveals that academics in non-lab-based disciplines, such as mathematics, business, economics, sociology and political science, exhibit a fall in research output after tenure. But for those in the other 10 disciplines, including physics, publication rates are sustained around the pre-tenure peak.

“In lab-based fields, collaborative teams and sustained funding streams may help maintain high productivity post-tenure,” says Tripodi. “In contrast, in more individual-centred disciplines like mathematics or sociology, where research output is less dependent on continuous lab operation, the post-tenure slowdown appears to be more pronounced.”

The team also looked at the proportion of high-impact papers – defined as those in the top 5% of a field – and found that researchers in all 15 disciplines publish more high-impact papers before tenure than after. As for “novelty” – defined as atypical combinations of work – this increases with time, but the most novel papers tend to appear after tenure.

According to Tripodi, once tenure and job security have been secured, the pressure to publish shifts towards other objectives – a change that explains the plateau or decline seen in the publication data. “Our results show that tenure allows scientists to take more risks, explore novel research directions, and reorganize their research portfolio,” he adds.

The post Tenured scientists in the US slow down and produce less impactful work, finds study appeared first on Physics World.

Exoplanets suffering from a plague of dark matter could turn into black holes

21 août 2025 à 17:00

Dark matter could be accumulating inside planets close to the galactic centre, potentially even forming black holes that might consume the afflicted planets from the inside-out, new research has predicted.

According to the standard model of cosmology, all galaxies including the Milky Way sit inside huge haloes of dark matter, with the greatest density at the centre. This dark matter interacts primarily through gravity, although some popular models, such as weakly interacting massive particles (WIMPs), do imply that dark-matter particles may occasionally scatter off normal matter.

This has led PhD student Mehrdad Phoroutan Mehr and Tara Fetherolf of the University of California, Riverside, to make an extraordinary proposal: that dark matter could elastically scatter off molecules inside planets, lose energy and become trapped there, and then grow so dense that it collapses to form a black hole. In some cases, a black hole could be produced in just ten months, according to Mehr and Fetherolf’s calculations, reported in Physical Review D.

Even more remarkable is that while many planets would be consumed by their parasitic black hole, it is feasible that some planets could actually survive with a black hole inside them, while in others the black hole might evaporate, Mehr tells Physics World.

“Whether a black hole inside a planet survives or not depends on how massive it is when it first forms,” he says.

This leads to a trade-off between how quickly the black hole can grow and how soon the black hole can evaporate via Hawking radiation – the quantum effect that sees a black hole’s mass radiated away as energy.

The mass of a dark-matter particle remains unknown, but the lighter the particle and the more massive the planet, the greater the chance the planet has of capturing dark matter, and the more massive a black hole it can form. If the black hole starts out relatively massive, then the planet is in big trouble, but if it starts out very small then it can evaporate before it becomes dangerous. Of course, if it evaporates, another black hole could replace it in the future.

“Interestingly,” adds Mehr, “there is also a special in-between mass where these two effects balance each other out. In that case, the black hole neither grows nor evaporates – it could remain stable inside the planet for a long time.”
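The trade-off can be sketched with a back-of-the-envelope balance (an illustrative simplification, not the paper’s calculation, which treats the rate at which the planet captures dark matter, written \dot{M}_{\rm cap} below, as constant). Using the standard Hawking mass-loss rate, the black hole’s mass evolves roughly as

```latex
\frac{\mathrm{d}M}{\mathrm{d}t} \;\approx\; \dot{M}_{\rm cap} \;-\; \frac{\hbar c^{4}}{15360\,\pi G^{2} M^{2}},
\qquad\text{with the two terms cancelling at}\qquad
M_{\rm crit} \;\approx\; \left(\frac{\hbar c^{4}}{15360\,\pi G^{2}\,\dot{M}_{\rm cap}}\right)^{1/2}.
```

A black hole born heavier than this critical mass grows faster than it evaporates; one born lighter radiates away before it can consume its host.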

Keeping planets warm

It’s not the first time that dark matter has been postulated to accumulate inside planets. In 2011 Dan Hooper and Jason Steffen of Fermilab proposed that dark matter could become trapped inside planets and that the energy released through dark-matter particles annihilating could keep a planet outside the habitable zone warm enough for liquid water to exist on its surface.

Mehr and Fetherolf’s new hypothesis “is worth looking into more carefully”, says Hooper.

That said, Hooper cautions that the ability of dark matter to accumulate inside a planet and form a black hole should not be a general expectation for all models of dark matter. Rather, “it seems to me that there could be a small window of dark-matter models where such particles could be captured in stars at a rate that is high enough to lead to black hole formation,” he says.

Currently there remains a large parameter space for the possible properties of dark matter. Experiments and observations continue to chip away at it, but a very wide range of possibilities remains. The ability of dark matter to self-annihilate is just one of those properties – not all models of dark matter allow for this.

If dark-matter particles do annihilate at a sufficiently high rate when they come into contact, then it is unlikely that the mass of dark matter inside a planet would ever grow large enough to form a black hole. But if they don’t self-annihilate, or at least not at an appreciable rate, then a black hole formed of dark matter could still keep a planet warm with its Hawking radiation.

Searching for planets with black holes inside

The temperature anomaly that this would create could provide a means of detecting planets with black holes inside them. It would be challenging – the planets that we expect to contain the most dark matter would be near the centre of the galaxy 26,000 light years away, where the dark-matter concentration in the halo is densest.

Even if the James Webb Space Telescope (JWST) could detect anomalous thermal radiation from such a distant planet, Mehr says that it would not necessarily be a smoking gun.

“If JWST were to observe that a planet is hotter than expected, there could be many possible explanations, we would not immediately attribute this to dark matter or a black hole,” says Mehr. “Rather, our point is that if detailed studies reveal temperatures that cannot be explained by ordinary processes, then dark matter could be considered as one possible – though still controversial – explanation.”

Another problem is that black holes cannot be distinguished from planets purely through their gravity. A Jupiter-mass planet has the same gravitational pull as a Jupiter-mass black hole that has just eaten a Jupiter-mass planet. This means that planetary detection methods that rely on gravity, from radial velocity Doppler shift measurements to astrometry and gravitational microlensing events, could not tell a planet and a black hole apart.

The planets in our own Solar System are also unlikely to contain much dark matter, says Mehr. “We assume that the dark matter density primarily depends on the distance from the centre of the galaxy,” he explains.

Where we are, the density of dark matter is too low for the planets to capture much of it, since the dark-matter halo is concentrated in the galactic centre. Therefore, we needn’t worry about Jupiter or Saturn, or even Earth, turning into a black hole.

The post Exoplanets suffering from a plague of dark matter could turn into black holes appeared first on Physics World.

Nano-engineered flyers could soon explore Earth’s mesosphere

21 août 2025 à 13:00

Small levitating platforms that can stay airborne indefinitely at very high altitudes have been developed by researchers in the US and Brazil. Using photophoresis, the devices could be adapted to carry small payloads in the mesosphere where flight is notoriously difficult. It could even be used in the atmospheres of moons and other planets.

Photophoresis occurs when light illuminates one side of a particle, heating it slightly more than the other. The resulting temperature difference in the surrounding gas means that molecules rebound with more energy on the warmer side than the cooler side – producing a tiny but measurable push.

For most of the time since its discovery in the 1870s, the effect was little more than a curiosity. But with more recent advances in nanotechnology, researchers have begun to explore how photophoresis could be put to practical use.

“In 2010, my graduate advisor, David Keith, had previously written a paper that described photophoresis as a way of flying microscopic devices in the atmosphere, and we wanted to see if larger devices could carry useful payloads,” explains Ben Schafer at Harvard University, who led the research. “At the same time, [Igor Bargatin’s group at the University of Pennsylvania] was doing fascinating work on larger devices that generated photophoretic forces.”

Carrying payloads

These studies considered a wide variety of designs: from artificial aerosols, to thin disks with surfaces engineered to boost the effect. Building on this earlier work, Schafer’s team investigated how lightweight photophoretic devices could be optimized to carry payloads in the mesosphere: the atmospheric layer at about 50–80 km above Earth’s surface, where the sparsity of air creates notoriously difficult flight conditions for conventional aircraft or balloons.

“We used these results to fabricate structures that can fly in near-space conditions, namely, under less than the illumination intensity of sunlight and at the same pressures as the mesosphere,” Schafer explains.

The team’s design consists of two alumina membranes – each 100 nm thick and perforated with nanoscale holes. The membranes are positioned a short distance apart and connected by ligaments. In addition, the bottom membrane is coated with a light-absorbing chromium layer, causing it to heat the surrounding air more than the top layer as it absorbs incoming sunlight.

As a result, air molecules move preferentially from the cooler top side toward the warmer bottom side through the membranes’ perforations: a photophoretic process known as thermal transpiration. This one-directional flow creates a pressure imbalance across the device, generating upward thrust. If this force exceeds the device’s weight, it can levitate and even carry a payload. The team also suggests that the devices could be kept aloft at night using the infrared radiation emitted by Earth into space.
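A rough way to see the levitation requirement (a schematic sketch, not the paper’s detailed model) is that the transpiration-driven pressure difference Δp, acting over the device’s open area A, must at least balance the total weight. In the free-molecular (Knudsen) limit, a standard estimate bounds Δp by the temperature difference the membranes can sustain:

```latex
\Delta p \, A \;\gtrsim\; \left(m_{\rm device} + m_{\rm payload}\right) g,
\qquad
\Delta p_{\max} \;\approx\; p\left(\sqrt{T_{\rm hot}/T_{\rm cold}} - 1\right) \;\approx\; \frac{p\,\Delta T}{2T},
```

where p and T are the ambient pressure and temperature, and ΔT is the temperature difference sustained across the device.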

Simulations and experiments

Through a combination of simulations and experiments, Schafer and his colleagues examined how factors such as device size, hole density, and ligament distribution could be tuned to maximize thrust at different mesospheric altitudes – where both pressure and temperature can vary dramatically. They showed that platforms 10 cm in radius could feasibly remain aloft throughout the mesosphere, powered by sunlight at intensities lower than those actually present there.

Based on these results, the team created a feasible design for a photophoretic flyer with a 3 cm radius, capable of carrying a 10 mg payload indefinitely at an altitude of 75 km. With an optimized design, they predict that payloads as large as 100 mg could be supported during daylight.

“These payloads could support a lightweight communications payload that could transmit data directly to the ground from the mesosphere,” Schafer explains. “Small structures without payloads could fly for weeks or months without falling out of the mesosphere.”

With this proof of concept, the researchers are now eager to see photophoretic flight tested in real mesospheric conditions. “Because there’s nothing else that can sustainably fly in the mesosphere, we could use these devices to collect ground-breaking atmospheric data to benefit meteorology, perform telecommunications, and predict space weather,” Schafer says.

Requiring no fuel, batteries, or solar panels, the devices would be completely sustainable. And the team’s ambitions go beyond Earth: with the ability to stay aloft in any low-pressure atmosphere with sufficient light, photophoretic flight could also provide a valuable new approach to exploring the atmosphere of Mars.

The research is described in Nature.

The post Nano-engineered flyers could soon explore Earth’s mesosphere appeared first on Physics World.

Equations, quarks and a few feathers: more physics than birds

20 août 2025 à 12:00

Lots of people like birds. In Britain alone, 17 million households collectively spend £250m annually on 150,000 tonnes of bird food, while 1.2 million people are paying members of the Royal Society for the Protection of Birds (RSPB), Europe’s largest conservation charity. But what is the Venn diagram overlap between those who like birds and those who like physics?

The 11,000 or more species of birds in the world have evolved to occupy separate ecological niches, with many remarkable abilities that, while beyond human capabilities, can be explained by physics. Owls, for example, detect their prey by hearing with asymmetric ears then fly almost silently to catch it. Kingfishers and ospreys, meanwhile, dive for fish in freshwater or sea, compensating for the change of refractive index at the surface. Kestrels and hummingbirds, on the other hand, can hover through clever use of aerodynamics.

Many birds choose when to migrate by detecting subtle changes in barometric pressure. They are often colourful and can even appear blue – a colour whose pigments are scarce in nature – thanks to the structure of their feathers, which can make them look kaleidoscopic depending on the viewing angle. Many species can even see into the ultraviolet; the blue tits in our gardens look very different in each other’s eyes than they do to ours.

Those of us with inquisitive minds cannot help but wonder how they do these things. Now, The Physics of Birds and Birding: the Sounds, Colors and Movements of Birds, and Our Tools for Watching Them by retired physicist Michael Hurben covers all of these wonders and more.

Where are the birds?

In each chapter Hurben introduces a new physics-related subject, often with an unexpected connection to birds. The more abstruse topics include fractals, gravity, electrostatics, osmosis and Fourier transforms. You might not think quarks would be mentioned in a book on birds, but they are. Some of these complicated subjects, however, take the author several pages to explain, and it can then be a disappointment to discover just a short paragraph mentioning a bird. It is also only in the final chapter that the author explains flight – an ability that, among vertebrates, is shared only by birds and bats.

The antepenultimate chapter justifies the second part of the book’s title – birding. It describes the principles underlying some of the optical instruments used by humans to detect and identify birds, such as binoculars, telescopes and cameras. The physics is simpler, so the answers here might be more familiar to non-scientist birders. Indeed, focal lengths, refractive indices, shape of lenses and anti-reflection coatings, for example, are often covered in school physics and known to anyone wearing spectacles.

Unfortunately, Hurben has not heeded the warning that the editor of A Brief History of Time gave Stephen Hawking: that each equation would halve the book’s readership. That masterpiece includes only a single equation, which any physicist could predict. But The Physics of Birds and Birding sets the scene with seven equations in its first chapter, and many more throughout. While understanding is helped by over 100 small diagrams, if you’re expecting beautiful photos and illustrations of birds, you’ll be disappointed. In fact, there are no images of birds whatsoever – and without them the book looks like an old-fashioned black-and-white textbook.

Physicist or birder?

The author’s interest in birds appears to be in travelling to see them, and he has a “life-list” of over 5000 species. But not much attention in this book is paid to those of us who are more interested in studying birds for conservation. For example, there is no mention of thermal imaging instruments or drones – technology that depends a lot on physics – which are increasingly being used to avoid fieldworkers having to search through sensitive vegetation or climb trees to find birds or their nests. Nowadays, there are more interactions between humans and birds using devices such as smartphones, GPS or digital cameras, or indeed the trackers attached to birds by skilled and licensed scientists, but none of these is covered in The Physics of Birds and Birding.

Although I am a Fellow of the Institute of Physics and the Royal Society of Biology who has spent more than 50 years as an amateur birder and published many papers on both topics, it is not clear who is the intended target audience for this volume. It seems to me that it would be of more interest to some physicists who enjoy seeing physics being applied to the natural world, than for birders who want to understand how birds work. Either way, the book is definitely for only a select part of the birder-physicist Venn diagram.

  • 2025 Pelagic Publishing 240pp £30 pb; £30 ebook

The post Equations, quarks and a few feathers: more physics than birds appeared first on Physics World.

Big data, big wins: how solar astrophysics can be a ‘game-changer’ in sports analytics

18 août 2025 à 16:15
NASA image of the sun plus a high-tech outline of a football player
Star potential Data-analysis techniques originally developed for studying information about the Sun could help nurture the sporting stars of tomorrow. (Courtesy: NASA/Goddard/SDO; Shutterstock/E2.art.lab)

If David Jess were a professional footballer – and not a professional physicist – he’d probably be a creative midfielder: someone who links defence and attack to set up goal-scoring opportunities for his team mates. Based in the Astrophysics Research Centre at Queen’s University Belfast (QUB), Northern Ireland, Jess orchestrates his scientific activities in much the same way. Combining vision, awareness and decision-making, he heads a cross-disciplinary research team pursuing two very different and seemingly unconnected lines of enquiry.

Jess’s research within the QUB’s solar-physics groups centres on optical studies of the Sun’s lower atmosphere. That involves examining how the Sun’s energy travels through its near environment – in the form of both solar flares and waves. In addition, his group is developing instruments to support international research initiatives in astrophysics, including India’s upcoming National Large Solar Telescope.

But Jess is also a founding member of the Predictive Sports Analytics (PSA) research group within QUB and Ulster University’s AI Collaboration Centre – a £16m R&D facility supporting the adoption of AI and machine-learning technologies in local industry. PSA links researchers from a mix of disciplines – including physics, mathematics, statistics and computer science – with sports scientists in football, rugby, cycling and athletics. Its goal is to advance the fundamental science and application of predictive modelling in sports and health metrics. 

Joined-up thinking

Astronomy and sports science might seem worlds apart, but they have lots in common, not least because both yield vast amounts of data. “We’re lucky,” says Jess. “Studying the closest star in the solar system means we are not photon-starved – there’s no shortage of light – and we are able to make observations of the Sun’s atmosphere at very high frame rates, which means we’re accustomed to managing and manipulating really big data sets.”

Similarly, big data also fuels the sports analytics industry. Many professional athletes wear performance-tracking sports vests with embedded GPS trackers that can generate tens of millions of data points over the course of, say, a 90-minute football match. The trackers capture information such as a player’s speed, their distance travelled, and the number of sprints and high-intensity runs.

“Trouble is,” says Jess, “you’re not really getting the ebb and flow of all that data by just summing it all up into the ‘one big number’.” Researchers in the PSA group are therefore trying to understand how athlete data evolves over time – often in real-time – to see if there’s some nuance or wrinkle that’s been missed in the “big-picture” metrics that emerge at the end of a game or training session.
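As a minimal illustration of what moving beyond “one big number” can look like (the data format, 10 Hz sampling rate and 5.5 m/s high-intensity threshold are assumptions, and this is not PSA’s or STATSports’ actual tooling), a raw speed trace can be turned into rolling, time-resolved work-rate metrics:

```python
# Illustrative sketch: time-resolved work-rate metrics from a single player's
# GPS speed trace, rather than one end-of-match total.
import numpy as np
import pandas as pd

fs = 10                                          # samples per second (assumed)
speed = pd.Series(np.load("player_speed.npy"))   # speed in m/s across a match

window = 5 * 60 * fs                             # 5-minute rolling window
rolling_distance = (speed / fs).rolling(window).sum()                       # metres per window
rolling_hi_time = (speed > 5.5).astype(float).rolling(window).sum() / fs    # seconds above threshold

# A late-match decline in rolling_hi_time is one crude signature of fatigue.
```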

It’s all in the game for PSA

Meeting to look at PSA's data
Team talk As PSA’s research in sports analytics grows, David Jess (second right) wants to recruit PhD students keen to move beyond their core physics and maths to develop skills in other disciplines too. (Courtesy: QUB)

Set up in 2023, the Predictive Sports Analysis (PSA) research group in Belfast has developed collaborations with professional football teams, rugby squads and other sporting organizations across Northern Ireland and beyond. From elite-level to grassroots sports, real-world applications of PSA’s research aim to give athletes and coaches a competitive edge. Current projects include:

  • Player/squad speed distribution analyses to monitor strength and conditioning improvements with time (also handy for identifying growth and performance trajectories in youth sport)
  • Longitudinal examination of acceleration intensity as a proxy for explosive strength, which correlates with heart-rate variability (a useful aid to alert coaching staff to potential underlying cardiac conditions)
  • 3D force vectorization to uncover physics-based thresholds linked to concussion and musculoskeletal injury in rugby

The group’s work might, for example, make it possible not only to measure how tired a player becomes after a 90-minute game but also to pinpoint the rates and causes of fatigue during the match. “Insights like this have the power to better inform coaching staff so they can create bespoke training regimes to target these specific areas,” adds Jess.

Work at PSA involves a mix of data mining, analysis, interpretation and visualization – teasing out granular insights from raw, unfiltered data streams by adapting and applying tried-and-tested statistical and mathematical methods from QUB’s astrophysics research. Take, for example, observational studies of solar flares – large eruptions of electromagnetic radiation from the Sun’s atmosphere lasting for a few minutes up to several hours.

David Jess
Solar insights David Jess from Queen’s University Belfast assembles a near-UV instrument for hyperspectral imaging at the Dunn Solar Telescope in New Mexico, US. (Courtesy: QUB)

“We might typically capture a solar-flare event at multiple wavelengths – optical, X-ray and UV, for example – to investigate the core physical processes from multiple vantage points,” says Jess. In other words, they can see how one wavelength component differs from another or how the discrete spectral components correlate and influence each other. “Statistically, that’s not so different from analysing the player data during a football match, with each player offering a unique vantage point in terms of the data points they generate,” he adds.

If that sounds like a stretch, Jess insists that PSA is not an indulgence or sideline. “We are experts in big data at PSA and, just as important, all of us have a passion for sports,” says Jess, who is a big fan of Chelsea FC. “What’s more, knowledge transfer between QUB’s astrophysics and sports analytics programmes works in both directions and delivers high-impact research dividends.”  

The benefits of association

In-house synergies are all well and good, but the biggest operational challenge for PSA since it was set up in 2023 has been external. As a research group in QUB’s School of Mathematics and Physics, Jess and colleagues need to find ways to “get in the door” with prospective clients and clubs in the professional sports community. Bridging that gap isn’t straightforward for a physics lab that isn’t established in the sports-analytics business.

But clear communication as well as creative and accessible data visualization can help successful engagement. “Whenever we meet sports scientists at a professional club, the first thing we tell them is we’re not trying to do their job,” says Jess. “Rather, it’s about making their job easier to do and putting more analytical tools at their disposal.”

PSA’s skill lies in extracting “hidden signals” from big data sets to improve how athlete performance is monitored. Those insights can then be used by coaches, physiotherapists and medical staff to optimize training and recovery schedules as well as to improve the fitness, health and performance of individual athletes and teams.

Validation is everything in the sports analytics business, however, and the barriers to entry are high. That’s one reason why PSA’s R&D collaboration with STATSports could be a game-changer. Founded in 2007 in Newry, Northern Ireland, the company makes wearable devices that record and transmit athlete performance metrics hundreds of times each second.

Athlete running and being monitored by the PSA team.
Fast-track physics Real-time monitoring of athlete performance by PSA PhD students Jack Brown (left) and Eamon McGleenan. The researchers capture acceleration and sprint metrics to provide feedback on sprint profiling and ways to mitigate injury risks. (Courtesy: QUB)

STATSports is now a global leader in athlete monitoring and GPS performance analysis. Its technology is used by elite football clubs such as Manchester City, Liverpool, Arsenal and Juventus, as well as national football teams (including England, Argentina, USA and Australia) and leading teams in rugby and American football.

The tie-up lets PSA work with an industry “name”, while STATSports gets access to blue-sky research that could translate into technological innovation and commercial opportunities.

“PSA is an academic research team first and foremost, so we don’t want to just rest on our laurels,” explains Jess. “With so much data – whether astrophysics or sports analytics – we want to be at the cutting edge and deliver new advances that loop back to enhance the big data techniques we’re developing.”

Right now, physics PhD student Eamon McGleenan provides the direct line from PSA into STATSports, which is funding his postgraduate work. The joint research project, which also involves sports scientists from Saudi Pro League football club Al Qadsiah, uses detailed data about player sprints during a game. The aim is to use force, velocity and acceleration curves – as well as the power generated by a player’s legs – to evaluate the performance metrics that underpin athlete fatigue.

By reviewing these metrics during the course of a game, McGleenan and colleagues can model how an athlete’s performance drops off in real-time, indicating their level of fatigue. The hope is that the research will lead to in-game modelling systems to help coaches and medical staff at pitch-side to make data-driven decisions about player substitutions (rather than just taking a player off because they “look leggy”).

Six physicists who also succeeded at sport

Illustration of people doing a range of sports shown in silhouette
(Courtesy: Shutterstock/Christos Georghiou)

Quantum physicist Niels Bohr was a keen footballer, who played in goal for Danish side Akademisk Boldklub in the early 1900s. He once let a goal in because he was more focused on solving a maths problem mid-game by scribbling calculations on the goal post. His mathematician brother Harald Bohr also played for the club and won silver at the 1908 London Olympics for the Danish national team.

Jonathan Edwards, who originally studied physics at Durham University, still holds the men’s world record for the triple-jump. Edwards broke the record twice on 7 August 1995 at the World Athletics Championships in Gothenburg, Sweden, first jumping 18.16m and then 18.29m barely 20 minutes later.

David Florence, who studied physics at the University of Nottingham, won silver in the single C1 canoe slalom at the Beijing Olympics in 2008. He also won silver in the doubles C2 slalom at the 2012 Olympics in London and in Rio de Janeiro four years later.

Louise Shanahan is a middle-distance runner who competed for Ireland in the women’s 800m race at the delayed 2020 Summer Olympics while still doing a PhD in physics on the properties of nanodiamonds at the University of Cambridge. She has recently set up a sports website called TrackAthletes.

US professional golfer Bryson DeChambeau is nicknamed “The Scientist” owing to his analytical, science-based approach to the sport – and the fact that he majored in physics at Southern Methodist University in Dallas, US. DeChambeau won the 2020 and 2024 US Open.

In 2023 Harvard University’s Jenny Hoffman, who studies the electronic properties of exotic materials, became the fastest woman to run across the US, completing the 5000 km journey in 47 days, 12 hours and 35 minutes. In doing so, she beat the previous record by more than a week.

Matin Durrani

The transfer market

Jess says that the PSA group has been inundated with applications from physics students since it was set up. That’s not surprising, argues Jess, given that a physics degree provides many transferable skills to suit PSA’s broad scientific remit. Those skills include being able to manage, mine and interpret large data sets; disseminate complex results and actionable insights to a non-specialist audience; and work with industry partners in the sports technology sector.

“We’re looking for multidisciplinarians at PSA,” says Jess, with a nod to his group’s ongoing PhD recruitment opportunities. “The ideal candidates will be keen to move beyond their existing knowledge base in physics and maths to develop skills in other specialist fields.” There have also been discussions with QUB’s research and enterprise department about the potential for a PSA spin-out venture – though Jess, for his part, remains focused on research.

“My priority is to ensure the sustainability of PSA,” he concludes. “That means more grant funding – whether from the research councils or industry partners – while training up the next generation of early-career researchers. Longer term, though, I do think that PSA has the potential to be a ‘disruptor’ in the sports-analytics industry.”

The post Big data, big wins: how solar astrophysics can be a ‘game-changer’ in sports analytics appeared first on Physics World.

Melting ice propels itself across a patterned surface

15 août 2025 à 15:42

Researchers in the US are the first to show how a melting ice disc can quickly propel itself across a patterned surface in a manner reminiscent of the Leidenfrost effect. Jonathan Boreyko and colleagues at Virginia Tech demonstrated how the discs can suddenly slingshot themselves along herringbone channels when a small amount of heat is applied.

The Leidenfrost effect is a classic physics experiment whereby a liquid droplet levitates above a hot surface – buoyed by vapour streaming from the bottom of the droplet. In 2022, Boreyko’s team extended the effect to a disc of ice. This three-phase Leidenfrost effect requires a much hotter surface because the ice must first melt to liquid, which then evaporates.

The team also noticed that the ice discs can propel themselves in specific directions across an asymmetrically-patterned surface. This ratcheting effect also occurs with Leidenfrost droplets, and is related to the asymmetric emission of vapour.

“Quite separately, we found out about a really interesting natural phenomenon at Death Valley in California, where boulders slowly move across the desert,” Boreyko adds. “It turns out this happens because they are sitting on thin rafts of ice, which the wind can then push over the underlying meltwater.”

Combined effects

In their latest study, Boreyko’s team considered how these two effects could be combined – allowing ice discs to propel themselves across cooler surfaces like the Death Valley boulders, but without any need for external forces like the wind.

They patterned a surface with a network of V-shaped herringbone channels, each branching off at an angle from a central channel. At first, meltwater formed an even ring around the disc – but as the channels directed its subsequent flow, the ice began to move in the same direction.

“For the Leidenfrost droplet ratchets, they have to heat the surface way above the boiling point of the liquid,” Boreyko explains. “In contrast, for melting ice discs, any temperature above freezing will cause the ice to melt and then move along with the meltwater.”

The speed of the disc’s movement depended on how easily water spreads out on to the herringbone channels. When etched onto bare aluminium, the channels were hydrophilic – encouraging meltwater to flow along them. Predictably, since liquid water is far more dense and viscous than vapour, this effect unfolded far more slowly than the three-phase Leidenfrost effect demonstrated in the team’s previous experiment.

Surprising result

Yet as Boreyko describes, “a much more surprising result was when we tried spraying a water-repellent coating over the surface structure.” While preventing meltwater from flowing quickly through the channels, this coating roughened the surface with nanostructures, which initially locked the ice disc in place as it rested on the ridges between the channels.

As the ice melted, the ring of meltwater partially filled the channels beneath the disc. Gradually, however, the ratcheted surface directed more water to accumulate in front of the disc – introducing a Laplace pressure difference between both sides of the disc.
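The driving term here is the Young–Laplace pressure jump across a curved liquid surface (stated schematically below; this is the textbook relation, not the paper’s full analysis):

```latex
\Delta p \;=\; \gamma \left(\frac{1}{R_{1}} + \frac{1}{R_{2}}\right),
```

where γ is the surface tension of water and R₁, R₂ are the principal radii of curvature of the meniscus. Because the ratcheted channels leave the menisci at the front and back of the disc with different curvatures, the pressures on the two sides no longer balance, producing the net force described next.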

When this pressure difference is strong enough, the ice suddenly dislodges from the surface. “As the meltwater preferentially escaped on one side, it created a surface tension force that ‘slingshotted’ the ice at a dramatically higher speed,” Boreyko describes.

Applications of the new effect include surfaces that could be de-iced with just a small amount of heating. Alternatively, energy could be harvested from ice-disc motion, or the effect could be used to propel large objects across a surface, says Boreyko. “It turns out that whenever you have more liquid on the front side of an object, and less on the backside, it creates a surface tension force that can be dramatic.”

The research is described in ACS Applied Materials & Interfaces.

The post Melting ice propels itself across a patterned surface appeared first on Physics World.

Jet stream study set to improve future climate predictions

13 août 2025 à 11:35
Factors influencing the jet stream in the southern hemisphere
Driven by global warming The researchers identified which factors influence the jet stream in the southern hemisphere. (Courtesy: Leipzig University/Office for University Communications)

An international team of meteorologists has found that half of the recently observed shifts in the southern hemisphere’s jet stream are directly attributable to global warming – and pioneered a novel statistical method to pave the way for better climate predictions in the future.

Prompted by recent changes in the behaviour of the southern hemisphere’s summertime eddy-driven jet (EDJ) – a band of strong westerly winds located at a latitude of between 30°S and 60°S – the Leipzig University-led team sifted through historical measurement data to show that wind speeds in the EDJ have increased, while the wind belt has moved consistently toward the South Pole. They then used a range of innovative methods to demonstrate that 50% of these shifts are directly attributable to global warming, with the remainder triggered by other climate-related changes, including warming of the tropical Pacific and the upper tropical atmosphere, and the strengthening of winds in the stratosphere.

“We found that human fingerprints on the EDJ are already showing,” says lead author Julia Mindlin, research fellow at Leipzig University’s Institute for Meteorology. “Global warming, springtime changes in stratospheric winds linked to ozone depletion, and tropical ocean warming are all influencing the jet’s strength and position.”

“Interestingly, the response isn’t uniform, it varies depending on where you look, and climate models are underestimating how strong the jet is becoming. That opens up new questions about what’s missing in our models and where we need to dig deeper,” she adds.

Storyline approach

Rather than collecting new data, the researchers used existing, high-quality observational and reanalysis datasets – including the long-running HadCRUT5 surface temperature data, produced by the UK Met Office and the University of East Anglia, and a variety of sea surface temperature (SST) products including HadISST, ERSSTv5 and COBE.

“We also relied on something called reanalysis data, which is a very robust ‘best guess’ of what the atmosphere was doing at any given time. It is produced by blending real observations with physics-based models to reconstruct a detailed picture of the atmosphere, going back decades,” says Mindlin.

To interpret the data, the team – which also included researchers at the University of Reading, the University of Buenos Aires and the Jülich Supercomputing Centre – used a statistical approach called causal inference to help isolate the effects of specific climate drivers. They also employed “storyline” techniques to explore multiple plausible futures rather than simply averaging qualitatively different climate responses.
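In heavily simplified form, driver attribution of this kind can be pictured as a regression of the observed jet index on the driver time series (an illustrative sketch only; the study’s causal-inference and storyline machinery is far more careful, and the file names and indices below are hypothetical):

```python
# Illustrative sketch: attribute an observed jet-stream index to climate drivers
# with ordinary least squares. Inputs are hypothetical, standardized annual indices.
import numpy as np

gw    = np.load("global_warming_index.npy")      # global-mean warming
sst   = np.load("tropical_pacific_sst.npy")      # tropical Pacific SST index
strat = np.load("stratospheric_wind_index.npy")  # springtime stratospheric winds
jet   = np.load("edj_index.npy")                 # observed eddy-driven jet shift

X = np.column_stack([gw, sst, strat])
coefs, *_ = np.linalg.lstsq(X, jet, rcond=None)  # sensitivity of the jet to each driver

contributions = X * coefs                        # per-driver reconstruction over time
residual = jet - contributions.sum(axis=1)       # variability the drivers don't explain
```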

“These tools offer a way to incorporate physical understanding while accounting for uncertainty, making the analysis both rigorous and policy-relevant,” says Mindlin.

Future blueprint

For Mindlin, these findings are important for several reasons. First, they demonstrate “that the changes predicted by theory and climate models in response to human activity are already observable”. Second, she notes that they “help us better understand the physical mechanisms that drive climate change, especially the role of atmospheric circulation”.

“Third, our methodology provides a blueprint for future studies, both in the southern hemisphere and in other regions where eddy-driven jets play a role in shaping climate and weather patterns,” she says. “By identifying where and why models diverge from observations, our work also contributes to improving future projections and enhances our ability to design more targeted model experiments or theoretical frameworks.”

The team is now focused on improving understanding of how extreme weather events, like droughts, heatwaves and floods, are likely to change in a warming world. Since these events are closely linked to atmospheric circulation, Mindlin stresses that it is critical to understand how circulation itself is evolving under different climate drivers.

One of the team’s current areas of focus is drought in South America. Mindlin notes that this is especially challenging due to the short and sparse observational record in the region, and the fact that drought is a complex phenomenon that operates across multiple timescales.

“Studying climate change is inherently difficult – we have only one Earth, and future outcomes depend heavily on human choices,” she says. “That’s why we employ ‘storylines’ as a methodology, allowing us to explore multiple physically plausible futures in a way that respects uncertainty while supporting actionable insight.”

The results are reported in the Proceedings of the National Academy of Sciences.

The post Jet stream study set to improve future climate predictions appeared first on Physics World.

Festival opens up the quantum realm

13 août 2025 à 10:07
quantum hackathon day 1 NQCC
Collaborative insights: The UK Quantum Hackathon, organized by the NQCC for the fourth consecutive year and a cornerstone of the Quantum Fringe festival, allowed industry experts to work alongside early-career researchers to explore practical use cases for quantum computing. (Courtesy: NQCC)

The International Year of Quantum Science and Technology (IYQ) has already triggered an explosion of activities around the world to mark 100 years since the emergence of quantum mechanics. In the UK, the UNESCO-backed celebrations have provided the perfect impetus for the University of Edinburgh’s Quantum Software Lab (QSL) to work with the National Quantum Computing Centre (NQCC) to organize and host a festival of events that have enabled diverse communities to explore the transformative power of quantum computing.

Known collectively as the Quantum Fringe, in a clear nod to Edinburgh’s famous cultural festival, some 16 separate events have been held across Scotland throughout June and July. Designed to make quantum technologies more accessible and more relevant to the outside world, the programme combined education and outreach with scientific meetings and knowledge exchange.

The Quantum Fringe programme evolved from several regular fixtures in the quantum calendar. One of these cornerstones was the NQCC’s flagship event, the UK Quantum Hackathon, which is now in its fourth consecutive year. In common with previous editions, the 2025 event challenged teams of hackers to devise quantum solutions to real-world use cases set by mentors from different industry sectors. The teams were supported throughout the three-day event by the industry mentors, as well as by technical experts from providers of various quantum resources.

quantum hackathon - NQCC
Time constrained: the teams of hackers were given two days to formulate their solution and test it on simulators, annealers and physical processors. (Courtesy: NQCC)

This year, perhaps buoyed by the success of previous editions, there was a significant uptick in the number of use cases submitted by end-user organizations. “We had twice as many applications as we could accommodate, and over half of the use cases we selected came from newcomers to the event,” said Abby Casey, Quantum Readiness Delivery Lead at the NQCC. “That level of interest suggests that there is a real appetite among the end-user community for understanding how quantum computing could be used in their organizations.”

Reflecting the broader agenda of the IYQ, this year the NQCC particularly encouraged use cases that offered some form of societal benefit, and many of the 15 that were selected aimed to align with the UN’s Sustainable Development Goals. One team investigated the accuracy of quantum-powered neural networks for predicting the progression of a tumour, while another sought to optimize the performance of graphene-based catalysts for fuel cells. Moonbility, a start-up firm developing digital twins to optimize the usage of transport and infrastructure, challenged its team to develop a navigation system capable of mapping out routes for people with specific mobility requirements, such as step-free access or calmer environments for those with anxiety disorders.

During the event the hackers were given just two days to explore the use case, formulate a solution, and generate results using quantum simulators, annealers and physical processors. The last day provided an opportunity for the teams to share their findings with their peers and a five-strong judging panel chaired by Sir Peter Knight, one of the architects of the UK’s National Quantum Technologies Programme, co-chair of the IYQ’s Steering Committee and a prime mover in the IYQ celebrations. “Your effort, energy and passion have been quite extraordinary,” commented Sir Peter at the end of the event. “It’s truly impressive to see what you have achieved in just two days.”

From the presentations it was clear that some of the teams had adapted their solution to reflect the physical constraints of the hardware platform they had been allocated. Those explorations were facilitated by the increased participation of mentors from hardware developers, including QuEra and Pasqal for cold-atom architectures, and Rigetti and IBM for gate-based superconducting processors. “Cold atoms offer greater connectivity than superconducting platforms, which may make them more suited to solving particular types of problems,” said Gerard Milburn of the University of Sussex, who has recently become a Quantum Fellow at the NQCC.

quantum hackathon day 3 NQCC
Results day: The final day of the hackathon allowed the teams to share their results with the other participants and a five-strong judging panel. (Courtesy: NQCC)

The winning team, which had been challenged by Aioi R&D Lab to develop a quantum-powered solution for scheduling road maintenance, won particular praise for framing the problem in a way that recognized the needs of all road users, not just motorists. “It was really interesting that they thought about the societal value right at the start, and then used those ethical considerations to inform the way they approached the problem,” said Knight.

The wider impact of the hackathon is clear to see, with the event providing a short, intense and collaborative learning experience for early-career researchers, technology providers, and both small start-up companies and large multinationals. This year, however, the hackathon also provided the finale to the Quantum Fringe, which was the brainchild of Elham Kashefi and her team at the QSL. Taking inspiration from the better-known Edinburgh Fringe, the idea was to create a diverse programme of events to engage and inspire different audiences with the latest ideas in quantum computing.

“We wanted to celebrate the International Year of Quantum in a unique way,” said Mina Doosti, one of the QSL’s lead researchers. “We had lots of very different events, many of which we hadn’t foreseen at the start. It was very refreshing, and we had a lot of fun.”

One of Doosti’s favourite events was a two-day summer school designed for senior high-school students. As well as introducing the students to the concepts of quantum computing, the QSL researchers challenged them to write some code that could be run on IBM’s free-to-access quantum computer. “The organizers and lecturers from the QSL worked hard to develop material that would make sense to the students, and the attendees really grabbed the opportunity to come and learn,” Doosti explained. “From the questions they were asking and the way they tackled the games and challenges, we could see that they were interested and that they had learnt something.”
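The article doesn’t say what the students actually wrote, but a minimal example of the kind of program that can be run on IBM’s free-to-access machines (shown here on a local simulator, and assuming the open-source qiskit and qiskit-aer packages) is a two-qubit Bell state:

```python
# Build and simulate a Bell-state circuit: the canonical first quantum program.
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2, 2)
qc.h(0)                      # put qubit 0 into an equal superposition
qc.cx(0, 1)                  # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])   # read both qubits out

counts = AerSimulator().run(qc, shots=1000).result().get_counts()
print(counts)                # roughly half '00' and half '11'
```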

From the outset the QSL team were also keen for the Quantum Fringe to become a focal point for quantum-inspired activities that were being planned by other organizations. Starting from a baseline of four pillar events that had been organized by the NQCC and the QSL in previous years, the programme eventually swelled to 16 separate gatherings with different aims and outcomes. That included a public lecture organized by the new QCi3 Hub – a research consortium focused on interconnected quantum technologies – which attracted around 200 people who wanted to know more about the evolution of quantum science and its likely impact across technology, industry, and society. An open discussion forum hosted by Quantinuum, one of the main sponsors of the festival, also brought together academic researchers, industry experts and members of the public to identify strategies for ensuring that quantum computing benefits everyone in society, not just a privileged few.

Quantum researchers also had plenty of technical events to choose from. The regular AIMday Quantum Computing, now in its third year, enabled academics to work alongside industry experts to explore a number of business-led challenges. More focused scientific meetings allowed researchers to share their latest results in quantum cryptography and cybersecurity, algorithms and complexity, and error correction in neutral atoms. For her part, Doosti co-led the third edition of Foundations in Quantum Computing, a workshop that combines invited talks with dedicated time for focused discussion. “The speakers are briefed to cover the evolution of a particular field and to highlight open challenges, and then we use the discussion sessions to brainstorm ideas around a specific question,” she explained.

Those scientific meetings were complemented by a workshop on responsible quantum innovation, again hosted by the QCi3 Hub, and a week-long summer school on the Isle of Skye that was run by Heriot-Watt University and the London School of Mathematics. “All of our partners ran their events in the way they wanted, but we helped them with local support and some marketing and promotion,” said Ramin Jafarzadegan, the QSL’s operations manager and the chair of the Quantum Fringe festival. “Bringing all of these activities together delivered real value because visitors to Edinburgh could take part in multiple events.”

Indeed, one clear benefit of this approach was that some of the visiting scientists stayed for longer, which also enabled them to work alongside the QSL team. That has inspired a new scheme, called QSL Visiting Scholars, that aims to encourage scientists from other institutions to spend a month or so in Edinburgh to pursue collaborative projects.

As a whole, the Quantum Fringe has helped both the NQCC and the QSL in their ambitions to bring diverse stakeholders together to create new connections and to grow the ecosystem for quantum computing in the UK. “The NQCC should have patented the ‘quantum hackathon’ name,” joked Sir Peter. “Similar events are popping up everywhere these days, but the NQCC’s was among the first.”

The post Festival opens up the quantum realm appeared first on Physics World.

MOND versus dark matter: the clash for cosmology’s soul

12 août 2025 à 12:00

The clash between dark matter and modified Newtonian dynamics (MOND) can get a little heated at times. On one side is the vast majority of astronomers who vigorously support the concept of dark matter and its foundational place in cosmology’s standard model. On the other side is the minority – a group of rebels convinced that tweaking the laws of gravity rather than introducing a new particle is the answer to explaining the composition of our universe.

Both sides argue passionately and persuasively, pointing out evidence that supports their view while discrediting the other side. Often it seems to come down to a matter of perspective – both sides use the same results as evidence for their cause. For the rest of us, how can we tell who is correct?

As long as we still haven’t identified what dark matter is made of, there will remain some ambiguity, leaving a door ajar for MOND. However, it’s a door that dark-matter researchers hope will be slammed shut in the not-too-distant future.

Crunch time for WIMPs

In part two of this series, where I looked at the latest proposals from dark-matter scientists, we met University College London’s Chamkaur Ghag, who is the spokesperson for Lux-ZEPLIN. This experiment is searching for “weakly interacting massive particles” or WIMPs – the leading dark-matter candidate – down a former gold mine in South Dakota, US. A huge seven-tonne tank of liquid xenon, surrounded by an array of photomultiplier tubes, watches patiently for the flashes of light that may occur when a passing WIMP interacts with a xenon atom.

Running since 2021, the experiment just released the results of its most recent search through 280 days of data, which uncovered no evidence of WIMPs above a mass of 9 GeV/c² (Phys. Rev. Lett. 135 011802). These results help to narrow the range of possible dark-matter theories, as the new limits impose constraints on WIMP parameters that are almost five times more rigorous than the previous best. Another experiment at the INFN Laboratori Nazionali del Gran Sasso in Italy, called XENONnT, is also hoping to spot the elusive WIMPs – in its case by looking for rare nuclear recoil interactions in a liquid xenon target chamber.

Deep underground The XENON Dark Matter Project is hosted by the INFN Gran Sasso National Laboratory in Italy. The latest detector in this programme is XENONnT (pictured), which uses liquid xenon to search for dark-matter particles. (Courtesy: XENON Collaboration)

Lux-ZEPLIN and XENONnT will cover half the parameter space of masses and energies that WIMPs could in theory have, but Ghag is more excited about a forthcoming, next-generation xenon-based WIMP detector dubbed XLZD that might settle the matter. XLZD brings together both the Lux-ZEPLIN and XENONnT collaborations, to design and build a single, common multi-tonne experiment that will hopefully leave WIMPs with no place to hide. “XLZD will probably be the final experiment of this type,” says Ghag. “It’s designed to be much larger and more sensitive, and is effectively the definitive experiment.”

If WIMPs do exist, then this detector will find them, and it could happen on UK shores. Several locations around the world are in the running to host the experiment, including Boulby Mine Underground Laboratory near Whitby Bay on the north-east coast of England. If everything goes to plan, XLZD – which will contain between 40 and 100 tonnes of xenon – will be up and running and providing answers by the 2030s. It will be a huge moment for dark matter, and a nervous one for its researchers.

“I think none of us are ever going to fully believe it completely until we’ve found [a WIMP] and can reproduce it in a lab and show that it’s not just some abstract stuff that we call dark matter, but that it is a particular particle that we can identify,” says astronomer Richard Massey of the University of Durham, UK.

But if WIMPs are in fact a dead-end, then it’s not a complete death-blow for dark matter – there are other dark-matter candidates and other dark-matter experiments. For example, the Forward Search Experiment (FASER) at CERN’s Large Hadron Collider is looking for less massive dark-matter particles such as axions (read more about them in part 2). However, WIMPs have been a mainstay of dark-matter models since the 1980s. If the xenon-based experiments turn up empty-handed it will be a huge blow, and the door will creak open just a little bit more for MOND.

Galactic frontier

MOND’s battleground isn’t in particle detectors – it’s in the outskirts of galaxies and galaxy clusters, and its proof lies in the history of how our universe formed. This is dark matter’s playground too, with the popular models for how galaxies grow being based on a universe in which dark matter forms 85% of all matter. So it’s out in the depths of space where the two models clash.

The current standard model of cosmology describes how the growth of the large-scale structure of the universe, over the 13.8 billion years of cosmic history since the Big Bang, is shaped by a combination of dark matter and dark energy (the latter responsible for the accelerated expansion of the universe). Essentially, density fluctuations in the cosmic microwave background (CMB) radiation reflect the clumping of dark matter in the very early universe. As the cosmos aged, these clumps grew and stretched into the cosmic web: a universe-spanning network of dark-matter filaments, where most of the matter lies, separated by voids that are far less densely populated. Galaxies can form inside "dark matter haloes", and at the densest points in the dark-matter filaments, galaxy clusters coalesce.

Simulations in this paradigm – known as lambda cold dark matter (ΛCDM) – suggest that galaxy and galaxy-cluster formation should be a slow process, with small galaxies forming first and gradually merging over billions of years to build up into the more massive galaxies that we see in the universe today. And it works – kind of. Recently, the James Webb Space Telescope (JWST) peered back in time to between just 300 and 400 million years after the Big Bang and found the universe to be populated by tiny galaxies perhaps just a thousand or so light-years across (ApJ 970 31). This is as expected, and over time they would grow and merge into larger galaxies.

1 Step back in time

(a) Infrared image from the JWST's NIRCam highlighting galaxy JADES-GS-z14-0. (Courtesy: NASA/ESA/CSA/STScI/Brant Robertson, UC Santa Cruz/Ben Johnson, CfA/Sandro Tacchella, University of Cambridge/Phill Cargile, CfA)

(b) Spectrum of JADES-GS-z14-0 obtained with the JWST's NIRSpec (Near-Infrared Spectrograph), showing a clear spectral break at roughly 1.8 microns. (Courtesy: NASA/ESA/CSA/Joseph Olmsted, STScI/S Carniani, Scuola Normale Superiore/JADES Collaboration)

Data from the James Webb Space Telescope (JWST) form the basis of the JWST Advanced Deep Extragalactic Survey (JADES). A galaxy's redshift can be determined from the location of a critical spectral feature known as the Lyman-alpha break. For JADES-GS-z14-0 the redshift is 14.32 (+0.08/–0.20), making it the second most distant galaxy known, seen less than 300 million years after the Big Bang. The current record holder, as of August 2025, is MoM-z14, which has a redshift of 14.4 (+0.02/–0.02), placing it less than 280 million years after the Big Bang (arXiv:2505.11263). Both galaxies belong to the era referred to as the "cosmic dawn", which preceded the completion of the epoch of reionization, when light from the first stars and galaxies ionized the intergalactic hydrogen. JADES-GS-z14-0 is particularly interesting to researchers not just because of its distance, but also because it is much more intrinsically luminous and massive than expected for a galaxy that formed so soon after the Big Bang, raising further questions about the evolution of stars and galaxies in the early universe.
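As a rough illustration of how a redshift maps onto cosmic age, the short sketch below uses the Planck 2018 parameters bundled with astropy; the choice of cosmology is an assumption, so the ages differ slightly from those quoted in the caption depending on the parameters each survey adopts.

```python
# Rough conversion from redshift to cosmic age, using the Planck 2018
# cosmology bundled with astropy. The cosmology is an assumption, so the
# ages come out slightly differently from the figures quoted above,
# depending on the parameters each survey adopts.
from astropy.cosmology import Planck18

for name, z in [("JADES-GS-z14-0", 14.32), ("MoM-z14", 14.4)]:
    age_myr = Planck18.age(z).to("Myr").value   # age of the universe at that redshift
    print(f"{name}: z = {z} -> ≈ {age_myr:.0f} million years after the Big Bang")
```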

Yet the deeper we push into the universe, the more we observe challenges to the ΛCDM model that ultimately threaten the very existence of dark matter. For example, those early galaxies that the JWST has observed, while quite small, are also surprisingly bright – more so than ΛCDM predicts. This has been attributed to an initial mass function (IMF – the distribution of masses with which stars form) that skewed more towards higher-mass, and therefore more luminous, stars than it does today. It does sound reasonable, except that astronomers still don't understand why the IMF is what it is today (favouring the smallest stars; massive stars are rare), never mind what it might have been more than 13 billion years ago.

Not everyone is convinced, and the puzzle is compounded by slightly later galaxies, seen around a billion years after the Big Bang, which continue the trend of being more luminous and more massive than expected. Indeed, some of these galaxies sport truly enormous black holes, hundreds of times more massive than the black hole at the heart of our Milky Way. Just a couple of billion years later, remarkably large galaxy clusters are already present, earlier than one would expect from ΛCDM.

The fall of ΛCDM?

Astrophysicist and MOND advocate Pavel Kroupa, from the University of Bonn in Germany, highlights giant elliptical galaxies in the early universe as an example of what he sees as a divergence from ΛCDM.

“We know from observations that the massive elliptical galaxies formed on shorter timescales than the less massive ellipticals,” he explains. This phenomenon has been referred to as “downsizing”, and Kroupa declares it is “a big problem for ΛCDM” because the model says that “the big galaxies take longer to form, but what we see is exactly the opposite”.

To quantify this problem, a 2020 study (MNRAS 498 5581) by Australian astronomer Sabine Bellstedt and colleagues showed that half the mass in present-day elliptical galaxies was in place 11 billion years ago, compared with other galaxy types that only accrued half their mass on average about 6 billion years ago. The smallest galaxies only accrued that mass as recently as 4 billion years ago, in apparent contravention of ΛCDM.

Observations (ApJ 905 40) of a giant elliptical galaxy catalogued as C1-23152, which we see as it existed 12 billion years ago, show that it formed 200 billion solar masses worth of stars in just 450 million years – a huge firestorm of star formation that ΛCDM simulations just can’t explain. Perhaps it is an outlier – we’ve only sampled a few parts of the sky, not conducted a comprehensive census yet. But as astronomers probe these cosmic depths more extensively, such explanations begin to wear thin.
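For a sense of scale, the back-of-the-envelope calculation below converts those figures into an average star-formation rate; the Milky Way comparison value of roughly one to two solar masses per year is a ballpark from the general literature, not a number from the article.

```python
# Back-of-the-envelope star-formation rate implied for C1-23152. The Milky
# Way comparison (~1-2 solar masses per year today) is a ballpark from the
# general literature, not a figure quoted in the article.
stellar_mass = 2.0e11    # solar masses of stars formed
duration_yr = 4.5e8      # over roughly 450 million years
mean_sfr = stellar_mass / duration_yr
print(f"mean star-formation rate ≈ {mean_sfr:.0f} solar masses per year")
# roughly 440 solar masses per year, hundreds of times the Milky Way's current rate
```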

Kroupa argues that by replacing dark matter with MOND, such giant early elliptical galaxies suddenly make sense. Together with Robin Eappen, a PhD student at Charles University in Prague, he modelled a giant gas cloud in the very early universe collapsing under gravity according to MOND, rather than in the presence of dark matter.

“It is just stunning that the time [of formation of such a large elliptical] comes out exactly right,” says Kroupa. “The more massive cloud collapses faster on exactly the correct timescale, compared to the less massive cloud that collapses slower. So when we look at an elliptical galaxy, we know that thing formed from MOND and nothing else.”

Elliptical galaxies are not the only things with a size problem. In 2021 Alexia Lopez, a PhD student at the University of Central Lancashire, UK, discovered a "Giant Arc" of galaxies spanning 3.3 billion light-years, some 9.2 billion light-years away. And in 2023 Lopez spotted another gigantic structure, a "Big Ring" (shaped more like a coil) of galaxies 1.3 billion light-years in diameter, with a circumference of about 4 billion light-years. At the opposite extreme are the massive under-dense voids that take up the space between the filaments of the cosmic web. The KBC Void (sometimes called the "Local Hole"), for example, is about two billion light-years across, and the Milky Way, along with a host of other galaxies, sits inside it. The trouble is that simulations in ΛCDM, with dark matter at their heart, cannot replicate structures and voids this big.

“We live in this huge under-density; we’re not at the centre of it but we are within it and such an under-density is completely impossible in ΛCDM,” says Kroupa, before declaring, “Honestly, it’s not worthwhile to talk about the ΛCDM model anymore.”

A bohemian model

Such fighting talk is dismissed by dark-matter astronomers because although there are obviously deficiencies in the ΛCDM model, it does such a good job of explaining so many other things. If we’re to kill ΛCDM because it cannot explain a few large ellipticals or some overly large galaxy groups or voids, then there needs to be a new model that can explain not only these anomalies, but also everything else that ΛCDM does explain.

“Ultimately we need to explain all the observations, and some of those MOND does better and some of those ΛCDM does better, so it’s how you weigh those different baskets,” says Stacy McGaugh, a MOND researcher from Case Western Reserve University in the US.

As it happens, Kroupa and his Bonn colleague Jan Pflamm-Altenburg are working on a new model that they think has what it takes to overthrow dark matter and the broader ΛCDM paradigm. They call it the Bohemian model (the name has a double meaning, as Kroupa is originally from Czechia), and it incorporates MOND as its main pillar; Kroupa describes the results they are getting from their simulations in this paradigm as "stunning" (A&A 698 A167).

But Kroupa admits that not everybody will be happy to see it published. “If it’s published, a lot of experts at Ivy League universities will say it’s all completely impossible,” he says. “But I know for a fact that there is part of the community, the ‘bright part’ as I call them, which is just itching to have a completely different model.”

Kroupa is staying tight-lipped on the precise details of his new model, but says that according to simulations the puzzle of large-scale structure forming earlier than expected, and growing larger faster than expected, is answered by the Bohemian model. “These structures [such as the Giant Arc and the KBC Void] are so radical that they are not possible in the ΛCDM model,” he says. “However, they pop right out of this Bohemian model.”

Binary battle

Whether you believe Kroupa’s promises of a better model or whether you see it all as bluster, the fact remains that a dark-matter-dominated universe still has some problems. Maybe they’re not serious, and all it will take is a few tweaks to make those problems go away. But maybe they’ll persist, and require new physics of some kind, and it’s this possibility that continues to leave the door open for MOND. For the rest of us, we’re still grasping for a definitive statement one way or another.

For MOND, perhaps that definitive statement could still come from binary stars, as discussed in the first article in this series. Researchers have been particularly interested in so-called "wide binaries" – pairs of stars that are more than 500 AU apart. Thanks to the vast distance between them, the gravitational pull of each star on the other is weak, making such pairs a perfect test for MOND. Indranil Banik, of the University of St Andrews, UK, controversially concluded that there was no evidence for MOND operating on the smaller scales of binary-star systems. However, other researchers, such as Kyu-Hyun Chae of Sejong University in South Korea, argue that they have found evidence for MOND in binary systems, and have hit out at Banik's findings.
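To see why such large separations matter, the sketch below compares the Newtonian acceleration that a solar-mass star feels from its companion at various separations with MOND's characteristic acceleration scale, below which any MOND effects are expected to appear; the value a0 ≈ 1.2 × 10⁻¹⁰ m/s² and the separations chosen are assumptions drawn from the MOND literature rather than figures from the studies discussed here.

```python
# Why separation matters for the wide-binary test: the Newtonian pull of a
# solar-mass companion compared with the MOND acceleration scale
# a0 ≈ 1.2e-10 m/s^2. The a0 value and the separations are assumptions from
# the MOND literature, not figures from the studies discussed here.
G = 6.674e-11        # gravitational constant (m^3 kg^-1 s^-2)
M_SUN = 1.989e30     # solar mass (kg)
AU = 1.496e11        # astronomical unit (m)
A0 = 1.2e-10         # MOND acceleration scale (m/s^2), assumed

for sep_au in (500, 2000, 10000, 30000):
    r = sep_au * AU
    a_newton = G * M_SUN / r**2          # acceleration due to a solar-mass companion
    print(f"{sep_au:>6} AU: a ≈ {a_newton:.1e} m/s^2 (≈ {a_newton / A0:.2f} a0)")
# The pull drops towards and below a0 only for the widest pairs, which is
# where any MOND-style anomaly should be most visible.
```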

Indeed, after the first part of this series was published, Chae reached out to me, arguing that Banik had analysed the data incorrectly. Specifically, Chae points out that the fraction of wide binaries with an extra, unseen close stellar companion to one or both of the stars (a factor designated fmulti) must be calibrated for when performing the MOND calculations. Often when two stars are extremely close together, their angular separation is so small that we can't resolve them and don't realize that they are a binary, he explains. So we might mistake a triple system, with two stars so close together that we can't distinguish them and a third star on a wider circumbinary orbit, for just a wide binary.

“I initially believed Banik’s claim, but because what’s at stake is too big and I started feeling suspicious, I chose to do my own investigation,” says Chae (ApJ 952 128). “I came to realize the necessity of calibrating fmulti due to the intrinsic degeneracy between mass and gravity (one cannot simultaneously determine the gravity boost factor and the amount of hidden mass).”

The probability of a wide binary having an unseen extra stellar companion is the same as for shorter binaries (those that we can resolve). But for shorter binaries the gravitational acceleration is high enough that they obey regular Newtonian gravity – MOND only comes into the picture at wider separations. Therefore, the mass uncertainty in the study of wide binaries in a MOND regime can be calibrated for using those shorter-period binaries. Chae argues that Banik did not do this. “I’m absolutely confident that if the Banik et al. analysis is properly carried out, it will reveal MOND’s low-acceleration gravitational anomaly to some degree.”

So perhaps there is hope for MOND in binary systems. Given that dark matter shouldn’t be present on the scale of binary systems, any anomalous gravitational effect could only be explained by MOND. A detection would be pretty definitive, if only everyone could agree upon it.

Bullet time and mass This spectacular new image of the Bullet Cluster was created using NASA’s James Webb Space Telescope and Chandra X-ray Observatory. The new data allow for an improved measurement of the thousands of galaxies in the Bullet Cluster. This means astronomers can more accurately “weigh” both the visible and invisible mass in these galaxy clusters. Astronomers also now have an improved idea of how that mass is distributed. (X-ray: NASA/CXC/SAO; near-infrared: NASA/ESA/CSA/STScI; processing: NASA/STScI/ J DePasquale)

But let’s not kid ourselves – MOND still has a lot of catching up to do on dark matter, which has become a multi-billion-dollar industry with thousands of researchers working on it and space missions such as the European Space Agency’s Euclid space telescope. Dark matter is still in pole position, and its own definitive answers might not be too far away.

“Finding dark matter is definitely not too much to hope for, and that’s why I’m doing it,” says Richard Massey. He highlights not only Euclid, but also the work of the James Webb Space Telescope in imaging gravitational lensing on smaller scales, and the Nancy Grace Roman Space Telescope, which will launch later this decade on a mission to study weak gravitational lensing – the way in which small clumps of matter, such as individual dark-matter haloes around galaxies, subtly warp space.

“These three particular telescopes give us the opportunity over the next 10 years to catch dark matter doing something, and to be able to observe it when it does,” says Massey. That “something” could be dark-matter particles interacting, perhaps in a cluster merger in deep space, or in a xenon tank here on Earth.

“That’s why I work on dark matter rather than anything else,” concludes Massey. “Because I am optimistic.”

  • In the first instalment of this three-part series, Keith Cooper explored the struggles and successes of modified gravity in explaining phenomena at varying galactic scales
  • In the second part of the series, Keith Cooper explored competing theories of dark matter

The post MOND versus dark matter: the clash for cosmology’s soul appeared first on Physics World.

Elusive scattering of antineutrinos from nuclei spotted using small detector

11 August 2025 at 18:01

Evidence of the coherent elastic scattering of reactor antineutrinos from atomic nuclei has been reported by the German-Swiss Coherent Neutrino Nucleus Scattering (CONUS) collaboration. This interaction has a higher cross section (probability) than the processes currently used to detect neutrinos, and could therefore lead to smaller detectors. It also involves lower-energy neutrinos, which could offer new ways to look for new physics beyond the Standard Model.

Antineutrinos only occasionally interact with matter, which makes them very difficult to detect. They can be observed using inverse beta decay, which involves the capture of electron antineutrinos by protons, producing neutrons and positrons. An alternative method involves observing the scattering of antineutrinos from electrons. Both these reactions have small cross sections, so huge detectors are required to capture just a few events. Moreover, inverse beta decay can only detect antineutrinos if they have energies above about 1.8 MeV, which precludes searches for low-energy physics beyond the Standard Model.

It is also possible to detect neutrinos by the tiny kick a nucleus receives when a neutrino scatters off it. “It’s very hard to detect experimentally because the recoil energy of the nucleus is so low, but on the other hand the interaction probability is a factor of 100–1000 higher than these typical reactions that are otherwise used,” says Christian Buck of the Max Planck Institute for Nuclear Physics in Heidelberg. This enables measurements with kilogram-scale detectors.
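The rough kinematics behind both numbers (the 1.8 MeV inverse-beta-decay threshold mentioned above, and the tiny nuclear recoils Buck describes) can be sketched as follows; the 4 MeV antineutrino energy is an illustrative choice for a reactor spectrum, not a figure from the CONUS papers.

```python
# Back-of-the-envelope kinematics for reactor antineutrinos, as a minimal
# sketch. Particle masses are standard values; the 4 MeV antineutrino energy
# is an illustrative choice, not a figure from the CONUS+ paper.
M_P, M_N, M_E = 938.272, 939.565, 0.511   # proton, neutron, electron rest energies (MeV)
M_GE = 72 * 931.494                        # approximate rest energy of a germanium-72 nucleus (MeV)

# Inverse beta decay: the antineutrino must supply at least the mass
# difference between the products (neutron + positron) and the target proton.
print(f"IBD threshold ≈ {M_N + M_E - M_P:.2f} MeV")        # ~1.8 MeV, as quoted above

# Coherent elastic scattering: maximum nuclear recoil energy for a neutrino
# of energy E on a nucleus of rest energy M is E_max = 2E^2 / (M + 2E).
e_nu = 4.0                                                  # MeV (assumed)
e_rec_max = 2 * e_nu**2 / (M_GE + 2 * e_nu)
print(f"max Ge recoil ≈ {e_rec_max * 1e6:.0f} eV")          # a few hundred eV at most
```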

This was first observed in 2017 by the COHERENT collaboration using a 14.6 kg caesium iodide crystal to detect neutrinos from the Spallation Neutron Source at the Oak Ridge National Laboratory in the US. These neutrinos have a maximum energy of 55 MeV, making them ideal for the interaction. Moreover, the neutrinos come in pulses, allowing the signal to be distinguished from background radiation.

Reactor search

Multiple groups have subsequently looked for signals from nuclear reactors, which produce lower-energy neutrinos. These include the CONUS collaboration, which operated at the Brokdorf nuclear reactor in Germany until 2022. However, the only group to report a strong hint of a signal was one including Juan Collar of the University of Chicago, which in 2022 published results suggesting a stronger-than-expected signal at the Dresden-2 power reactor in the US.

Now, Buck and his CONUS colleagues present data from the CONUS+ experiment conducted at the Leibstadt reactor in Switzerland. They used three 1 kg germanium diodes sensitive to energies as low as 160 eV, and extracted the neutrino signal from the background by comparing data taken when the reactor was running with data taken when it was not. Writing in Nature, the team conclude that 395±106 neutrino events were detected during 119 days of operation, a signal 3.7σ away from zero that is consistent with the prediction of the Standard Model. The experiment is currently in its second run, with the detector mass increased to 2.4 kg to provide better statistics and potentially a lower threshold energy.
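The way a reactor-on/reactor-off comparison yields a significance can be sketched as below; the on and off counts are invented placeholders, chosen only so the toy arithmetic lands near the published 395 ± 106 events, whereas the real analysis involves a detailed background model and spectral fit.

```python
# Sketch of the reactor-on/reactor-off logic behind a result like 395 ± 106
# events. The counts below are invented placeholders, chosen only so that
# the toy arithmetic lands near the published figures.
from math import sqrt

n_on, n_off = 5815, 5420          # hypothetical counts with equal livetimes
signal = n_on - n_off
sigma = sqrt(n_on + n_off)        # Poisson uncertainties added in quadrature
print(f"toy signal: {signal} ± {sigma:.0f} events")
print(f"significance ≈ {signal / sigma:.1f}σ")   # ≈ 3.7σ, as reported
```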

Collar, however, is sceptical of the result. “[The researchers] seem to have an interest in dismissing the limitations of these detectors – limitations that affect us too,” he says. “The main difference between our approach and theirs is that we have made a best effort to demonstrate that our data are not contaminated by residual sources of low-energy noise dominant in this type of device prior to a careful analysis.” His group will soon release data taken at the Vandellòs reactor in Spain. “When we release these, we will take the time to point out the issues visible in their present paper,” he says. “It is a long list.”

Buck accepts that, if the previous measurements by Collar’s group are correct, the CONUS+ researchers should have detected at least 10 times more neutrinos than they actually did. “I would say the control of backgrounds at our site in Leibstadt is better because we do not have such a strong neutron background. We have clearly demonstrated that the noise Collar has in mind is not dominant in the energy region of interest in our case.”

Patrick Huber at Virginia Tech in the US says, “Let’s see what Collar’s new result is going to be. I think this is a good example of the scientific method at work. Science doesn’t care who’s first – scientists care, but for us, what matters is that we get it right. But with the data that we have in hand, most experts, myself included, think that the current result is essentially the result we have been looking for.”

The post Elusive scattering of antineutrinos from nuclei spotted using small detector appeared first on Physics World.

‘I left the school buzzing and on a high’

11 August 2025 at 12:00

After 40 years lecturing on physics and technology, you’d think I’d be ready for any classroom challenge thrown at me. Surely, during that time, I’d have covered all the bases? As an academic with a background in designing military communication systems, I’m used to giving in-depth technical lectures to specialists. I’ve delivered PowerPoint presentations to a city mayor and council dignitaries (I’m still not sure why, to be honest). And perhaps most terrifying of all, I’ve even had my mother sit in on one of my classes.

During my retirement, I’ve taken part in outreach events at festivals, where I’ve learned how to do science demonstrations to small groups that have included everyone from babies to great-grandparents. I once even gave a talk about noted local engineers to a meeting of the Women’s Institute in what was basically a shed in a Devon hamlet. But nothing could have prepared me for a series of three talks I gave earlier this year.

I’d been invited to a school to speak to three classes, each with about 50 children aged between six and 11. The remit from the headteacher was simple: talk about “My career as a physicist”. To be honest, most of my working career focused on things like phased-array antennas, ferrite anisotropy and computer modelling of microwave circuits, which isn’t exactly easy to adapt for a young audience.

But for a decade or so my research switched to sports physics and I’ve given talks to more than 200 sports scientists in a single room. I once even wrote a book called Projectile Dynamics in Sport (Routledge, 2011). So I turned up at the school armed with a bag full of balls, shuttlecocks, Frisbees and flying rings. I also had a javelin (in the form of a telescopic screen pointer) and a “secret weapon” for my grand finale.

Our first game was “guess the sport”. The pupils did well, correctly distinguishing between a basketball, a softball and a football, and even between an American football and a rugby ball. We discussed the purposes of dimples on a golf ball, the seam on a cricket ball and the “skirt” on a shuttlecock – the feathers, which are always taken from the right wing of a goose. Unless they are plastic.

As physicists, you’re probably wondering why the feathers are taken from its right side – and I’ll leave that as an exercise for the reader. But one pupil was more interested in the poor goose, asking me what happens when its feathers are pulled out. Thinking on my feet, I said the feathers grow back and the bird isn’t hurt. Truth is I have no idea, but I didn’t want to upset her.

Then: the finale. From my bag I took out a genuine Aboriginal boomerang, complete with authentic religious symbols. Not wanting to delve into Indigenous Australian culture or discuss a boomerang’s return mechanism in terms of gyroscopy and precession, I instead allowed the class to throw around three foam versions of it. Despite the look of abject terror on the teachers’ faces, we did not descend into anarchy but ended each session with five minutes of carefree enjoyment.

There is something uniquely joyful about the energy of children when they engage in learning. At this stage, curiosity is all. They ask questions because they genuinely want to know how the world works. And when I asked them a question, hands shot up so fast and arms were waved around so frantically to attract my attention that some pupils’ entire body shook. At one point I picked out an eager firecracker who swiftly realized he didn’t know the answer and shrank into a self-aware ball of discomfort.

Mostly, though, children’s excitement is infectious. I left the school buzzing and on a high. I loved it. In this vibrant environment, learning isn’t just about facts or skills; it’s about puzzle-solving, discovery, imagination, excitement and a growing sense of independence. The enthusiasm of young learners turns the classroom into a place of shared exploration, where every day brings something new to spark their imagination.

How lucky primary teachers are to work in such a setting, and how lucky I was to be invited into their world.

The post ‘I left the school buzzing and on a high’ appeared first on Physics World.

New laser-plasma accelerator could soon deliver X-ray pulses

8 August 2025 at 10:00

A free-electron laser (FEL) that is driven by a plasma-based electron accelerator has been unveiled by Sam Barber at Lawrence Berkeley National Laboratory and colleagues. The device is a promising step towards compact, affordable free-electron lasers that are capable of producing intense, ultra-short X-ray laser pulses. It was developed in collaboration with researchers at Berkeley Lab, University of California Berkeley, University of Hamburg and Tau Systems.

A FEL creates X-rays by rapidly wiggling fast-moving electron pulses back and forth using a series of magnets called an undulator. The X-rays are emitted in a narrow band of wavelengths and then interact with the electron pulse as it travels down the undulator. The result is a bright X-ray pulse with laser-like coherence.

What is more, the wavelength of the emitted X-rays can be adjusted simply by changing the energy of the electron pulses, making FELs highly tuneable.
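That tunability follows from the standard undulator resonance condition, sketched below with illustrative undulator parameters (not those of the Berkeley experiment) to show how a roughly 100 MeV beam radiates blue light while a few-GeV beam reaches X-ray wavelengths.

```python
# The planar-undulator resonance condition behind FEL tunability:
# lambda = (lambda_u / 2 gamma^2) * (1 + K^2 / 2). The undulator period and
# strength parameter K below are illustrative assumptions, not the Berkeley
# team's actual parameters.
M_E_MEV = 0.511              # electron rest energy (MeV)
LAMBDA_U = 0.02              # undulator period (m), assumed
K = 1.0                      # undulator strength parameter, assumed

def fel_wavelength(beam_energy_mev: float) -> float:
    """Resonant FEL wavelength in metres for a given electron beam energy."""
    gamma = beam_energy_mev / M_E_MEV
    return LAMBDA_U / (2 * gamma**2) * (1 + K**2 / 2)

for e_mev in (100, 300, 3000):
    print(f"{e_mev:>5} MeV beam -> wavelength ≈ {fel_wavelength(e_mev) * 1e9:.2f} nm")
# roughly 390 nm (blue) at 100 MeV, tens of nm at 300 MeV, sub-nm (X-ray) at 3 GeV
```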

Big and expensive

FELs are especially useful for generating intense, ultra-short X-ray pulses, which cannot be produced using conventional laser systems. So far, several X-ray FELs have been built for this purpose – but each of them relies on kilometre-scale electron accelerators costing huge amounts of money to build and maintain.

To create cheaper and more accessible FELs, researchers are exploring the use of laser-plasma accelerators (LPAs) – which can accelerate electron pulses to high energies over distances of just a few centimetres.

Yet as Barber explains, “LPAs have had a reputation for being notoriously hard to use for FELs because of things like parameter jitter and the large energy spread of the electron beam compared to conventional accelerators. But sustained research across the international landscape continues to drive improvements in all aspects of LPA performance.”

Recently, important progress was made by a group at the Chinese Academy of Sciences (CAS), who used an LPA-driven FEL to amplify light pulses by a factor of 50. Their pulses had a wavelength of 27 nm – which is close to the X-ray regime – but only about 10% of shots achieved amplification.

Very stable laser

Now, the team has built on this by making several improvements to the FEL setup, with the aim of enhancing its compatibility with LPAs. “On our end, we have taken great pains to ensure a very stable laser with several active feedback systems,” Barber explains. “Our strategy has essentially been to follow the playbook established by the original FEL research: start at longer wavelengths where it is easier to optimize and learn about the process and then scale the system to the shorter wavelengths.”

With these refinements, the team amplified their FEL’s output by a factor of 1000, achieving this in over 90% of their shots. This vastly outperformed the CAS result – albeit at a longer wavelength. “We designed the experiment to operate the FEL at around 420 nm, which is not a particularly exciting wavelength for scientific use cases – it’s just blue light,” Barber says. “But, with very minor upgrades, we plan to scale it for sub-100 nm wavelength where scientific applications become interesting.”

The researchers are optimistic that further breakthroughs are within reach, which could improve the prospects for LPA-driven FEL experiments. One especially important target is reaching the “saturation level” at X-ray wavelengths: the point beyond which FEL amplification no longer increases significantly.

“Another really crucial component is developing laser technology to scale the current laser systems to much higher repetition rates,” Barber says. “Right now, the typical laser used for LPAs can operate at around 10 Hz, but that will need to scale up dramatically to compare to the performance of existing light sources that are pushing megahertz.”

The research is described in Physical Review Letters.

The post New laser-plasma accelerator could soon deliver X-ray pulses appeared first on Physics World.

Entangled histories: women in quantum physics

6 August 2025 at 12:00

Writing about women in science remains an important and worthwhile thing to do. That’s the premise that underlies Women in the History of Quantum Physics: Beyond Knabenphysik – an anthology charting the participation of women in quantum physics, edited by Patrick Charbonneau, Michelle Frank, Margriet van der Heijden and Daniela Monaldi.

What does a history of women in science accomplish? This volume firmly establishes that women have for a long time made substantial contributions to quantum physics. It raises the profiles of figures like Chien-Shiung Wu, whose early work on photon entanglement is often overshadowed by her later fame in nuclear physics; and Grete Hermann, whose critiques of John von Neumann and Werner Heisenberg make her central to early quantum theory.

But in specifically recounting the work of these women in quantum, do we risk reproducing the same logic of exclusion that once kept them out – confining women to a specialized narrative? The answer is no, and this book is an especially compelling illustration of why.

A reference and a reminder

Two big ways this volume demonstrates its necessity are by its success as a reference, a place to look for the accomplishments and contributions of women in quantum physics; and as a reminder that we still have far to go before there is anything like true diversity, equality or the disappearance of prejudice in science.

The subtitle Beyond Knabenphysik – meaning “boys’ physics” in German – points to one of the book’s central aims: to move past a vision of quantum physics as a purely male domain. Originally a nickname for quantum mechanics given because of the youth of its pioneers, Knabenphysik comes to be emblematic of the collaboration and mentorship that welcomed male physicists and consistently excluded women.

The exclusion was not only symbolic but material. Hendrika Johanna van Leeuwen, who co-developed a key theorem in classical magnetism, was left out of the camaraderie and recognition extended to her male colleagues. Similarly, Laura Chalk’s research into the Stark effect – an early confirmation of Schrödinger’s wave equation – was under-acknowledged, with credit going largely to her male collaborator.

Something this book does especially well is combine the sometimes conflicting aims of history of science and biography. We learn not only about the trajectories of these women’s careers, but also about the scientific developments they were a part of. The chapter on Hertha Sponer, for instance, traces both her personal journey and her pioneering role in quantum spectroscopy. The piece on Freda Friedman Salzman situates her theoretical contributions within the professional and social networks that both enabled and constrained her. In so doing, the book treats each of these women as not only whole human beings, but also integral players in a complex history of one of the most successful and debated physical theories in history.

Lost physics

Because the history is told chronologically, we trace quantum physics from some of the early astronomical images suggesting discrete quantized elements to later developments in quantum electrodynamics. Along the way, we encounter women like Maria McEachern, who revisits Williamina Fleming’s spectral work; Maria Lluïsa Canut, whose career spanned crystallography and feminist activism; and Sonja Ashauer, a Brazilian physicist whose PhD at Cambridge placed her at the heart of theoretical developments but whose story remains little known.

This history could lead to a broader reflection on how credit, networking and even theorizing are accomplished in physics. Who knows how many discoveries in quantum physics, and science more broadly, could have been made more quickly or easily without the barriers and prejudice women and other marginalized persons faced then and still face today? Or what discoveries still lie latent?

Not all the women profiled here found lasting professional homes in physics. Some faced barriers of racism as well as gender discrimination, like Carolyn Parker who worked on the Manhattan Project’s polonium research and is recognized as the first African American woman to have earned a postgraduate degree in physics. She died young without having received full recognition in her lifetime. Others – like Elizabeth Monroe Boggs who performed work in quantum chemistry – turned to policy work after early research careers. Their paths reflect both the barriers they faced and the broader range of contributions they made.

Calculate, don’t think

The book makes a compelling argument that the heroic narrative of science doesn’t just undermine the contributions of women, but of the less prestigious more broadly. Placing these stories side by side yields something greater than the sum of its parts. It challenges the idea that physics is the work of lone geniuses by revealing the collective infrastructures of knowledge-making, much of which has historically relied not only on women’s labour – and did they labour – but on their intellectual rigour and originality.

Many of the women highlighted were at times employed “to calculate, not to think” as “computers”, or worked as teachers, analysts or managers. They were often kept from more visible positions even when they were recognized by colleagues for their expertise. Katharine Way, for instance, was praised by peers and made vital contributions to nuclear data, yet was rarely credited with the same prominence as her male collaborators. It shows clearly that those employed to support from behind the scenes could and did contribute to theoretical physics in foundational ways.

The book also critiques the idea of a “leaky pipeline”, showing that this metaphor oversimplifies. It minimizes how educational and institutional investments in women often translate into contributions both inside and outside formal science. Ana María Cetto Kramis, for example, who played a foundational role in stochastic electrodynamics, combined research with science diplomacy and advocacy.

Should women’s accomplishments be recognized in relation to other women’s, or should they be integrated into a broader historiography? The answer is both. We need inclusive histories that acknowledge all contributors, and specialized works like this one that repair the record and show what emerges specifically and significantly from women’s experiences in science. Quantum physics is a unique field, and women played a crucial and distinctive role in its formation. This recognition offers an indispensable lesson: in physics and in life it’s sometimes easy to miss what’s right in front of us, no less so in the history of women in quantum physics.

  • 2025 Cambridge University Press 486 pp £37.99hb

The post Entangled histories: women in quantum physics appeared first on Physics World.
