
‘Breathing’ crystal reversibly releases oxygen

9 September 2025 at 17:03

A new transition-metal oxide crystal that reversibly and repeatedly absorbs and releases oxygen could be ideal for use in fuel cells and as the active medium in clean energy technologies such as thermal transistors, smart windows and new types of batteries. The “breathing” crystal, discovered by scientists at Pusan National University in Korea and Hokkaido University in Japan, is made from strontium, cobalt and iron and contains oxygen vacancies.

Transition-metal oxides boast a huge range of electrical properties that can be tuned all the way from insulating to superconducting. This means they can find applications in areas as diverse as energy storage, catalysis and electronic devices.

Among the different material parameters that can be tuned are the oxygen vacancies. Indeed, ordering these vacancies can produce new structural phases that show much promise for oxygen-driven programmable devices.

Element-specific behaviours

In the new work, a team of researchers led by physicist Hyoungjeen Jeen of Pusan and materials scientist Hiromichi Ohta in Hokkaido studied SrFe0.5Co0.5Ox. The researchers focused on this material, they say, since it belongs to the family of topotactic oxides, which are the main oxides being studied today in solid-state ionics. “However, previous work had not discussed which ion in this compound was catalytically active,” explains Jeen. “What is more, the cobalt-containing topotactic oxides studied so far were fragile and easily fractured during chemical reactions.”

The team succeeded in creating a unique platform from a solid solution of epitaxial SrFe0.5Co0.5O2.5 in which both the cobalt and iron ions bathed in the same chemical environment. “In this way, we were able to test which ion was better for reduction reactions and whether or not it sustained its structural integrity,” Jeen tells Physics World. “We found that our material showed element-specific reduction behaviours and reversible redox reactions.”

The researchers made their material using pulsed laser deposition, a technique ideal for the epitaxial synthesis of multi-element oxides, which allowed them to grow SrFe0.5Co0.5O2.5 crystals in which the iron and cobalt ions are randomly distributed. This random arrangement was key to the material’s ability to repeatedly release and absorb oxygen, they say.

“It’s like giving the crystal ‘lungs’ so that it can inhale and exhale oxygen on command,” says Jeen.

Stable and repeatable

This simple breathing picture comes from the difference in the catalytic activity of cobalt and iron in the compound, he explains. Cobalt ions prefer to lose and gain oxygen and these ions are the main sites for the redox activity. However, since iron ions prefer not to lose oxygen during the reduction reaction, they serve as pillars in this architecture. This allows for stable and repeatable oxygen release and uptake.

Until now, most materials that absorb and release oxygen in such a controlled fashion were either too fragile or only functioned at extremely high temperatures. The new material, by contrast, works under near-ambient conditions and is stable. “This finding is striking in two ways: only cobalt ions are reduced, and the process leads to the formation of an entirely new and stable crystal structure,” explains Jeen.

The researchers also showed that the material could return to its original form when oxygen was reintroduced, so proving that the process is fully reversible. “This is a major step towards the realization of smart materials that can adjust themselves in real time,” says Ohta. “The potential applications include developing a cathode for intermediate solid oxide fuel cells, an active medium for thermal transistors (devices that can direct heat like electrical switches), smart windows that adjust their heat flow depending on the weather and even new types of batteries.”

Looking ahead, Jeen, Ohta and colleagues aim to investigate the material’s potential for practical applications.

They report their present work in Nature Communications.


Zero-point motion of atoms measured directly for the first time

5 September 2025 at 10:11

Physicists in Germany say they have measured the correlated behaviour of atoms in molecules prepared in their lowest quantum energy state for the first time. Using a technique known as Coulomb explosion imaging, they showed that the atoms do not simply vibrate individually. Instead, they move in a coupled fashion that displays fixed patterns.

According to classical physics, molecules with no thermal energy – for example, those held at absolute zero – should not move. However, according to quantum theory, the atoms making up these molecules are never completely “frozen”, so they should exhibit some motion even at this chilly temperature. This motion comes from the atoms’ zero-point energy, which is the minimum energy allowed by quantum mechanics for atoms in their ground state at absolute zero. It is therefore known as zero-point motion.
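The size of this irreducible motion is not quoted in the article, but for a single vibrational mode modelled as a quantum harmonic oscillator (a textbook idealization, not a calculation from the study) the ground state has

```latex
E_0 = \tfrac{1}{2}\hbar\omega,
\qquad
\langle x^{2}\rangle_0 = \frac{\hbar}{2m\omega},
```

where ω is the mode frequency and m the effective mass. Even at absolute zero the root-mean-square displacement √⟨x²⟩ stays finite, and it is this residual spread in atomic positions that the experiment set out to image.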

Reconstructing the molecule’s original structure

To study this motion, a team led by Till Jahnke from the Institute for Nuclear Physics at Goethe University Frankfurt and the Max Planck Institute for Nuclear Physics in Heidelberg used the European XFEL in Hamburg to bombard their sample – an iodopyridine molecule consisting of 11 atoms – with ultrashort, high-intensity X-ray pulses. These high-intensity pulses violently eject electrons out of the iodopyridine, causing its constituent atoms to become positively charged (and thus to repel each other) so rapidly that the molecule essentially explodes.

To image the molecular fragments generated by the explosion, the researchers used a customized version of a COLTRIMS reaction microscope. This approach allowed them to reconstruct the molecule’s original structure.

From this reconstruction, the researchers were able to show that the atoms do not simply vibrate individually, but that they do so in correlated, coordinated patterns. “This is known, of course, from quantum chemistry, but it had so far not been measured in a molecule consisting of so many atoms,” Jahnke explains.

Data challenges

One of the biggest challenges Jahnke and colleagues faced was interpreting what the microscope data was telling them. “The dataset we obtained is super-rich in information and we had already recorded it in 2019 when we began our project,” he says. “It took us more than two years to understand that we were seeing something as subtle (and fundamental) as ground-state fluctuations.”

Since the technique provides detailed information that is hidden to other imaging approaches, such as crystallography, the researchers are now using it to perform further time-resolved studies – for example, of photochemical reactions. Indeed, they performed and published the first measurements of this type at the beginning of 2025, while the current study (which is published in Science) was undergoing peer review.

“We have pushed the boundaries of the current state-of-the-art of this measurement approach,” Jahnke tells Physics World, “and it is nice to have seen a fundamental process directly at work.”

For theoretical condensed matter physicist Asaad Sakhel at Balqa Applied University, Jordan, who was not involved in this study, the new work is “an outstanding achievement”. “Being able to actually ‘see’ zero-point motion allows us to delve deeper into the mysteries of quantum mechanics in our quest to a further understanding of its foundations,” he says.


Quantum sensors reveal ‘smoking gun’ of superconductivity in pressurized bilayer nickelates

4 September 2025 at 10:00

Physicists at the Chinese Academy of Sciences (CAS) have used diamond-based quantum sensors to uncover what they say is the first unambiguous experimental evidence for the Meissner effect – a hallmark of superconductivity – in bilayer nickelate materials at high pressures. The discovery could spur the development of highly sensitive quantum detectors that can be operated under high-pressure conditions.

Superconductors are materials that conduct electricity without resistance when cooled to below a certain critical transition temperature Tc. Apart from a sharp drop in electrical resistance, another important sign that a material has crossed this threshold is the appearance of the Meissner effect, in which the material expels a magnetic field from its interior (diamagnetism). This expulsion creates such a strong repulsive force that a magnet placed atop the superconducting material will levitate above it.

In “conventional” superconductors such as solid mercury, the Tc is so low that the materials must be cooled with liquid helium to keep them in the superconducting state. In the late 1980s, however, physicists discovered a new class of superconductors that have a Tc above the boiling point of liquid nitrogen (77 K). These “unconventional” or high-temperature superconductors are derived not from metals but from insulators containing copper oxides (cuprates).

Since then, the search has been on for materials that superconduct at still higher temperatures, and perhaps even at room temperature. Discovering such materials would have massive implications for technologies ranging from magnetic resonance imaging machines to electricity transmission lines.

Enter nickel oxides

In 2019 researchers at Stanford University in the US identified nickel oxides (nickelates) as additional high-temperature superconductors. This created a flurry of interest in the superconductivity community because these materials appear to superconduct in a way that differs from their copper-oxide cousins.

Among the nickelates studied, La3Ni2O7-δ (where δ can range from 0 to 0.04) is considered particularly promising because in 2023, researchers led by Meng Wang of China’s Sun Yat-Sen University spotted certain signatures of superconductivity at a temperature of around 80 K. However, these signatures only appeared when crystals of the material were placed in a device called a diamond anvil cell (DAC). This device subjects samples of material to extreme pressures of more than 400 GPa (or 4 × 10⁶ atmospheres) as it squeezes them between the flattened tips of two tiny, gem-grade diamond crystals.

The problem, explains Xiaohui Yu of the CAS’ Institute of Physics, is that it is not easy to spot the Meissner effect under such high pressures. This is because the structure of the DAC limits the available sample volume and hinders the use of highly sensitive magnetic measurement techniques such as SQUID. Another problem is that the sample used in the 2023 study contains several competing phases that could mix and degrade the signal of the La3Ni2O7-δ.

Nitrogen-vacancy centres embedded as in-situ quantum sensors

In the new work, Yu and colleagues used nitrogen-vacancy (NV) centres embedded in the DAC as in-situ quantum sensors to track and image the Meissner effect in pressurized bilayer La3Ni2O7-δ. This newly developed magnetic sensing technique boasts both high sensitivity and high spatial resolution, Yu says. What is more, it fits perfectly into the DAC high-pressure chamber.

Next, they applied a small external magnetic field of around 120 G. Under these conditions, they measured the optically detected magnetic resonance (ODMR) spectra of the NV centres point by point. They could then extract the local magnetic field from the resonance frequencies of these spectra. “We directly mapped the Meissner effect of the bilayer nickelate samples,” Yu says, noting that the team’s image of the magnetic field clearly shows both a diamagnetic region and a region where magnetic flux is concentrated.
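The article does not spell out how the resonance frequencies are converted into a field value, but for NV centres the standard relation is that the two ground-state spin resonances are Zeeman-shifted symmetrically about the 2.87 GHz zero-field splitting, so their separation is proportional to the field component along the NV axis. A minimal sketch of that textbook conversion (not the group’s actual analysis code; the numbers are illustrative) is:

```python
# Minimal sketch: local magnetic field along the NV axis from the two ODMR
# resonance frequencies. Standard NV ground-state relations; illustrative only.

D_ZFS_MHZ = 2870.0          # NV zero-field splitting (~2.87 GHz)
GAMMA_NV_MHZ_PER_G = 2.803  # NV gyromagnetic ratio (~2.8 MHz per gauss)

def field_from_odmr(f_minus_mhz: float, f_plus_mhz: float) -> float:
    """Return the field component along the NV axis, in gauss.

    For moderate fields the resonances sit at f± ≈ D ± γ·B∥,
    so B∥ ≈ (f₊ − f₋) / (2γ).
    """
    return (f_plus_mhz - f_minus_mhz) / (2.0 * GAMMA_NV_MHZ_PER_G)

# Example: resonances split by ~672 MHz correspond to a local field of ~120 G.
print(field_from_odmr(2534.0, 3206.0))  # ≈ 119.9
```

Mapped pixel by pixel, regions where the extracted field falls below the applied 120 G appear diamagnetic, while regions of concentrated flux show an enhanced field, matching the two features visible in the team’s image.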

Weak demagnetization signal

The researchers began their project in late 2023, shortly after receiving single-crystal samples of La3Ni2O7-δ from Wang. “However, after two months of collecting data, we still had no meaningful results,” Yu recalls. “From these experiments, we learnt that the demagnetization signal in La3Ni2O7-δ crystals was quite weak and that we needed to improve either the nickelate sample or the sensitivity of the quantum sensor.”

To overcome these problems, they switched to using polycrystalline samples, enhancing the quality of the nickelate samples by doping them with praseodymium to make La2PrNi2O7. This produced a sample with an almost pure bilayer structure and thus a much stronger demagnetization signal. They also used shallow NV centres implanted on the DAC culet (the smaller face of the two diamond tips).

“Unlike the NV centres in the original experiments, which were randomly distributed in the pressure-transmitting medium and have relatively large ODMR widths, leading to only moderate sensitivity in the measurements, these shallow centres are evenly distributed and well aligned, making it easier for us to perform magnetic imaging with increased sensitivity,” Yu explains.

These improvements enabled the team to obtain a demagnetization signal from the La2PrNi2O7 and La3Ni2O7-δ samples, he tells Physics World. “We found that the diamagnetic signal from the La2PrNi2O7 samples is about five times stronger than that from the La3Ni2O7-δ ones prepared under similar conditions – a result that is consistent with the fact that the Pr-doped samples are of a better quality.”

Physicist Jun Zhao of Fudan University, China, who was not involved in this work, says that Yu and colleagues’ measurement represents “an important step forward” in nickelate research. “Such measurements are technically very challenging, and their success demonstrates both experimental ingenuity and scientific significance,” he says. “More broadly, their result strengthens the case for pressurized nickelates as a new platform to study high-temperature superconductivity beyond the cuprates. It will certainly stimulate further efforts to unravel the microscopic pairing mechanism.”

As well as allowing for the precise sensing of magnetic fields, NV centres can also be used to accurately measure many other physical quantities that are difficult to measure under high pressure, such as strain and temperature distribution. Yu and colleagues say they are therefore looking to further expand the application of these structures for use as quantum sensors in high-pressure sensing.

They report their current work in National Science Review.


Desert dust helps freeze clouds in the northern hemisphere

2 September 2025 at 10:07

Micron-sized dust particles in the atmosphere could trigger the formation of ice in certain types of clouds in the Northern Hemisphere. This is the finding of researchers in Switzerland and Germany, who used 35 years of satellite data to show that nanoscale defects on the surface of these aerosol particles are responsible for the effect. Their results, which agree with laboratory experiments on droplet freezing, could be used to improve climate models and to advance studies of cloud seeding for geoengineering.

In the study, which was led by environmental scientist Diego Villanueva of ETH Zürich, the researchers focused on clouds in the so-called mixed-phase regime, which form at temperatures of between −39 °C and 0 °C and are commonly found in mid- and high-latitudes, particularly over the North Atlantic, Siberia and Canada. These mixed-phase regime clouds (MPRCs) are often topped by a liquid or ice layer, and their makeup affects how much sunlight they reflect back into space and how much water they can release as rain or snow. Understanding them is therefore important for forecasting weather and making projections of future climate.

Researchers have known for a while that MPRCs are extremely sensitive to the presence of ice-nucleating particles in their environment. Such particles mainly come from mineral dust aerosols (such as K-feldspar, quartz, albite and plagioclase) that get swept up into the upper atmosphere from deserts. The Sahara Desert in northern Africa, for example, is a prime source of such dust in the Northern Hemisphere.

More dust leads to more ice clouds

Using 35 years of satellite data collected as part of the Cloud_cci project and MERRA-2 aerosol reanalyses, Villanueva and colleagues looked for correlations between dust levels and the formation of ice-topped clouds. They found that at temperatures of between -15°C and -30°C, the more dust there was, the more frequent the ice clouds were. What is more, their calculated increase in ice-topped clouds with increasing dust loading agrees well with previous laboratory experiments that predicted how dust triggers droplet freezing.
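The paper’s statistical pipeline is more involved than this, but the basic operation described here (asking whether ice-topped clouds become more frequent as dust loading rises within a given cloud-top temperature band) can be sketched as a simple binned frequency analysis. The data frame and its column names below are assumptions for illustration, not the actual Cloud_cci or MERRA-2 formats:

```python
# Illustrative sketch: bin cloud observations by cloud-top temperature and dust
# loading, then compute the fraction of ice-topped clouds in each bin.
# The DataFrame and its column names are assumptions, not the real datasets.
import numpy as np
import pandas as pd

def ice_fraction_by_dust(df: pd.DataFrame) -> pd.DataFrame:
    """Assumed columns: 'ctt_c' (cloud-top temperature, °C), 'dust' (dust loading),
    'ice_topped' (bool flag from the satellite retrieval)."""
    temp_bins = np.arange(-40, 1, 5)                            # −40 °C to 0 °C in 5 °C steps
    dust_bins = np.quantile(df["dust"], np.linspace(0, 1, 6))   # dust-loading quintiles
    groups = df.groupby(
        [pd.cut(df["ctt_c"], temp_bins), pd.cut(df["dust"], dust_bins)],
        observed=True,
    )
    # Mean of a boolean column = frequency of ice-topped clouds in each bin.
    return groups["ice_topped"].mean().unstack()

# A frequency that rises across the dust quintiles within the −30 °C to −15 °C rows
# would reproduce, qualitatively, the correlation reported in the study.
```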

The new study, which is detailed in Science, shows that there is a connection between aerosols in the micrometre-size range and cloud ice observed over distances of several kilometres, Villanueva says. “We found that it is the nanoscale defects on the surface of dust aerosols that trigger ice clouds, so the process of ice glaciation spans more than 15 orders of magnitude in length,” he explains.

Thanks to this finding, Villanueva tells Physics World that climate modellers can use the team’s dataset to better constrain aerosol-cloud processes, potentially helping them to construct better estimates of cloud feedback and global temperature projections.

The result also shows how sensitive clouds are to varying aerosol concentrations, he adds. “This could help bring forward the field of cloud seeding and include this in climate geoengineering efforts.”

The researchers say they have successfully replicated their results using a climate model and are now drafting a new manuscript to further explore the implications of dust-driven cloud glaciation for climate, especially for the Arctic.


Making molecules with superheavy elements could shake up the periodic table

29 August 2025 at 14:00

Nuclear scientists at the Lawrence Berkeley National Laboratory (LBNL) in the US have produced and identified molecules containing nobelium for the first time. This element, which has an atomic number of 102, is the heaviest ever to be observed in a directly-identified molecule, and team leader Jennifer Pore says the knowledge gained from such work could lead to a shake-up at the bottom of the periodic table.

“We compared the chemical properties of nobelium side-by-side to simultaneously produced molecules containing actinium (element number 89),” says Pore, a research scientist at LBNL. “The success of these measurements demonstrates the possibility to further improve our understanding of heavy and superheavy-element chemistry and so ensure that these elements are placed correctly on the periodic table.”

The periodic table currently lists 118 elements. As well as vertical “groups” containing elements with similar properties and horizontal “periods” in which the number of protons (atomic number Z) in the nucleus increases from left to right, these elements are arranged in three blocks. The block that contains actinides such as actinium (Ac) and nobelium (No), as well as the slightly lighter lanthanide series, is often shown offset, below the bottom of the main table.

The end of a predictive periodic table?

Arranging the elements this way is helpful because it gives scientists an intuitive feel for the chemical properties of different elements. It has even made it possible to predict the properties of new elements as they are discovered in nature or, more recently, created in the laboratory.

The problem is that the traditional patterns we’ve come to know and love may start to break down for elements at the bottom of the table, putting an end to the predictive periodic table as we know it. The reason, Pore explains, is that these heavy nuclei have a very large number of protons. In the actinides (Z > 88), for example, the intense charge of these “extra” protons exerts such a strong pull on the inner electrons that relativistic effects come into play, potentially changing the elements’ chemical properties.

“As some of the electrons are sucked towards the centre of the atom, they shield some of the outer electrons from the pull,” Pore explains. “The effect is expected to be even stronger in the superheavy elements, and this is why they might potentially not be in the right place on the periodic table.”

Understanding the full impact of these relativistic effects is difficult because elements heavier than fermium (Z = 100) need to be produced and studied atom by atom. This means resorting to complex equipment such as accelerated ion beams and the FIONA (For the Identification Of Nuclide A) device at LBNL’s 88-Inch Cyclotron Facility.

Producing and directly identifying actinide molecules

The team chose to study Ac and No in part because they represent the extremes of the actinide series. As the first in the series, Ac has no electrons in its 5f shell and is so rare that the crystal structure of an actinium-containing molecule was only determined recently. The chemistry of No, which contains a full complement of 14 electrons in its 5f shell and is the heaviest of the actinides, is even less well known.

In the new work, which is described in Nature, Pore and colleagues produced and directly identified molecular species containing Ac and No ions. To do this, they first had to produce Ac and No. They achieved this by accelerating beams of 48Ca with the 88-Inch Cyclotron and directing them onto targets of 169Tm and 208Pb, respectively. They then used the Berkeley Gas-filled Separator to separate the resulting actinide ions from unreacted beam material and reaction by-products.

The next step was to inject the ions into a chamber in the FIONA spectrometer known as a gas catcher. This chamber was filled with high-purity helium, as well as trace amounts of H2O and N2, at a pressure of approximately 150 torr. After interactions with the helium gas reduced the actinide ions to their 2+ charge state, so-called “coordination compounds” were able to form between the 2+ actinide ions and the H2O and N2 impurities. This compound-formation step took place either in the gas buffer cell itself or as the gas-ion mixture exited the chamber via a 1.3-mm opening and entered a low-pressure (several torr) environment. This transition caused the gas to expand at supersonic speeds, cooling it rapidly and allowing the molecular species to stabilize.

Once the actinide molecules formed, the researchers transferred them to a radio-frequency quadrupole cooler-buncher ion trap. This trap confined the ions for up to 50 ms, during which time they continued to collide with the helium buffer gas, eventually reaching thermal equilibrium. After they had cooled, the molecules were reaccelerated using FIONA’s mass spectrometer and identified according to their mass-to-charge ratio.

A fast and sensitive instrument

FIONA is much faster than previous such instruments and more sensitive. Both properties are important when studying the chemistry of heavy and superheavy elements, which Pore notes are difficult to make, and which decay quickly. “Previous experiments measured the secondary particles made when a molecule with a superheavy element decayed, but they couldn’t identify the exact original chemical species,” she explains. “Most measurements reported a range of possible molecules and were based on assumptions from better-known elements. Our new approach is the first to directly identify the molecules by measuring their masses, removing the need for such assumptions.”

As well as improving our understanding of heavy and superheavy elements, Pore says the new work might also have applications in radioactive isotopes used in medical treatment. For example, the 225Ac isotope shows promise for treating certain metastatic cancers, but it is difficult to make and only available in small quantities, which limits access for clinical trials and treatment. “This means that researchers have had to forgo fundamental chemistry experiments to figure out how to get it into patients,” Pore notes. “But if we could understand such radioactive elements better, we might have an easier time producing the specific molecules needed.”


Highest-resolution images ever taken of a single atom reveal new kind of vibrations

25 August 2025 at 16:00

Researchers in the US have directly imaged a class of extremely low-energy atomic vibrations called moiré phasons for the first time. In doing so, they proved that these vibrations are not just a theoretical concept, but are in fact the main way that atoms vibrate in certain twisted two-dimensional materials. Such vibrations may play a critical role in heat and charge transport and how quantum phases behave in these materials.

“Phasons had only been predicted by theory until now, and no one had ever directly observed them, or even thought that this was possible,” explains Yichao Zhang of the University of Maryland, who co-led the effort with Pinshane Huang of the University of Illinois at Urbana-Champaign. “Our work opens up an entirely new way of understanding lattice vibrations in 2D quantum materials.”

A second class of moiré phonons

When two sheets of a 2D material are placed on top of each other and slightly twisted, their atoms form a moiré pattern, or superlattice. This superlattice contains a quasi-periodic arrangement of rotationally aligned regions (denoted AA or AB) separated by a network of stacking faults called solitons.

Materials of this type are also known to possess distinctive vibrational modes known as moiré phonons, which arise from vibrations of the material’s crystal lattice. These modes vary with the twist angle between layers and can change the physical properties of the materials.

In addition to moiré phonons, two-dimensional moiré materials are also predicted to host a second class of vibrational mode known as phasons. However, these phasons had never been directly observed experimentally until now.

Imaging phasons at the picometre scale

In the new work, which is published in Science, the researchers used a powerful microscopy technique called electron ptychography that enabled them to image samples with spatial resolutions as fine as 15 picometres (1 pm = 10⁻¹² m). At this level of precision, explains Zhang, subtle changes in thermally driven atomic vibrations can be detected by analysing the shape and size of individual atoms. “This meant we could map how atoms vibrate across different stacking regions of the moiré superlattice,” she says. “What we found was striking: the vibrations weren’t uniform – atoms showed larger amplitudes in AA-stacked regions and highly anisotropic behaviour at soliton boundaries. These patterns align precisely with theoretical predictions for moiré phasons.”

Good vibrations: The experiment measured thermal vibrations in a single atom. (Courtesy: Yichao Zhang et al.)
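The published analysis is considerably more sophisticated, but the core idea of inferring a vibration amplitude from how much each imaged atom is blurred beyond the instrument’s intrinsic resolution can be illustrated with a simple quadrature subtraction. The function name, the fixed static-blur value and the example numbers below are assumptions for illustration only:

```python
# Illustrative sketch: estimate a thermal RMS displacement from the measured
# Gaussian width of an atom in a ptychographic reconstruction, assuming the
# static (probe/instrument) blur and the thermal blur add in quadrature.
import numpy as np

def thermal_rms_displacement(sigma_measured_pm: float, sigma_static_pm: float) -> float:
    """Both widths in picometres; returns the implied thermal RMS displacement in pm."""
    variance = sigma_measured_pm**2 - sigma_static_pm**2
    return float(np.sqrt(max(variance, 0.0)))

# Example: an atom imaged with a 60 pm width where the static blur is 50 pm
# implies roughly 33 pm of thermal motion (purely illustrative numbers).
print(thermal_rms_displacement(60.0, 50.0))  # ≈ 33.2
```

Mapping such estimates atom by atom across AA, AB and soliton regions is, in spirit, how stacking-dependent vibration patterns like those described above become visible.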

Zhang has been studying phonons using electron microscopy for years, but limitations on imaging resolutions had largely restricted her previous studies to nanometre (10⁻⁹ m) scales. She recently realized that electron ptychography would resolve atomic vibrations with much higher precision, and therefore detect moiré phasons varying across picometre scales.

She and her colleagues chose to study twisted 2D materials because they can support many exotic electronic phenomena, including superconductivity and correlated insulated states. However, the role of lattice dynamics, including the behaviour of phasons in these structures, remains poorly understood. “The problem,” she explains, “is that phasons are both extremely low in energy and spatially non-uniform, making them undetectable by most experimental techniques. To overcome this, we had to push electron ptychography to its limits and validate our observations through careful modelling and simulations.”

This work opens new possibilities for understanding (and eventually controlling) how vibrations behave in complex 2D systems, she tells Physics World. “Phasons can affect how heat flows, how electrons move, and even how new phases of matter emerge. If we can harness these vibrations, we could design materials with programmable thermal and electronic properties, which would be important for future low-power electronics, quantum computing and nanoscale sensors.”

More broadly, electron ptychography provides a powerful new tool for exploring lattice dynamics in a wide range of advanced materials. The team is now using electron ptychography to study how defects, strain and interfaces affect phason behaviour. These imperfections are common in many real-world materials and devices and can cause their performance to deteriorate significantly. “Ultimately, we hope to capture how phasons respond to external stimuli, like how they evolve with change in temperature or applied fields,” Zhang reveals. “That could give us an even deeper understanding of how they interact with electrons, excitons or other collective excitations in quantum materials.”


Starlink satellite emissions interfere with radio astronomy

22 August 2025 at 11:42
Prototype system: A drone photo of the Engineering Development Array 2 (EDA2) used in the survey. (Courtesy: D Grigg)

The largest ever survey of low-frequency radio emissions from satellites has detected emissions from the Starlink satellite “mega-constellation” across scientifically important low-frequency bands, including some that are protected for radio astronomy by international regulations. These emissions, which come from onboard electronics and are not intentional transmissions, could mask the weak radio-wave signals that astronomers seek to detect. As well as being damaging for radio astronomy, the researchers at Australia’s Curtin University who conducted the survey say their findings highlight the need for new regulations that cover unintended transmissions, not just deliberate ones.

“It is important to note that Starlink is not violating current regulations, so is doing nothing wrong,” says Steven Tingay, the executive director of the Curtin Institute of Radio Astronomy (CIRA) and a member of the survey team. Discussions with Starlink operator SpaceX on this topic, he adds, have been “constructive”.

The main purpose of Starlink and other mega-constellations is to provide Internet coverage around the world, including in areas that were previously unable to access it. In addition to SpaceX’s Starlink, other mega-constellations include Amazon’s Kuiper (US) and Eutelsat’s OneWeb (UK). This list is likely to expand in the future, with hundreds to tens of thousands of additional satellites planned for launch by China’s Shanghai Spacecom Satellite Technology (operator of the G60 Starlink/Qianfan constellation) and the Russian Federation (operator of the Sfera constellation).

While the effects of mega-constellations on optical astronomy have been widely studied, study leader Dylan Grigg, a PhD student in CIRA’s International Centre for Radio Astronomy Research, says that researchers are just beginning to realize the extent to which they are also adversely affecting radio astronomy. These effects extend to some of the most radio-quiet places on Earth. Indeed, several radio telescopes that were deliberately built in low-radio-noise locations – including the Murchison Widefield Array (MWA) in Western Australia and the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile, as well as Europe’s Low Frequency Array (LOFAR) – have recently detected interfering satellite signals.

Largest survey of satellite effects on radio astronomy data

To understand the scale of the problem, Tingay, Grigg and colleagues turned to a radio telescope called the Engineering Development Array 2 (EDA2). This is a prototype station for the low-frequency half of the Square Kilometre Array (SKA-Low), which will be the world’s largest and most sensitive radio telescope when it comes online later this decade.

Using the EDA2, the researchers imaged the sky every two seconds at the frequencies that SKA-Low will cover. They did this using a software package Grigg developed that autonomously detects and identifies satellites in the images the EDA2 creates.

Although this was not the first time EDA2 has been deployed to analyse the effects of satellites on radio astronomy data, Grigg says it is the most comprehensive. “Ours is the largest survey looking into Starlink emissions at SKA-Low frequencies, with over 76 million of the images analysed,” he explains. “With the real SKA-Low coming online soon, we need as much information as possible to understand the threat satellite interference poses to radio astronomy.”

Emissions at protected frequencies

During the survey period, the researchers say they detected more than 112 000 radio emissions from over 1800 Starlink satellites. At some frequencies, up to 30% of all survey images contained at least one Starlink detection.

“While Starlink is not the only satellite network, it is the most immediate and frequent source of potential interference for radio astronomy,” Grigg says. “Indeed, it launched 477 satellites during this study’s four-month data collection period alone and has the most satellites in orbit – more than 7000 during the time of this study.”

But it is not only the sheer number of satellites that poses a challenge for astronomers. So, too, does the strength and frequency of their emissions. “Some satellites were detected emitting in bands where no signals are supposed to be present at all,” Grigg says. The list of rogue emitters, he adds, included 703 satellites the team identified at 150.8 MHz – a frequency that is meant to be reserved for radio astronomy under International Telecommunication Union regulations. “Since these emissions may come from components like onboard electronics and they’re not part of an intentional signal, astronomers can’t easily predict them or filter them out,” he says.

Potential for new regulations and mitigations

From a regulatory perspective, the widespread detection of unintended emissions, including within protected frequency bands, demonstrates the need for international regulation and limits on unintended emissions, Grigg tells Physics World. The Curtin team is now working with other radio astronomy research groups around the world with the aim of introducing updated policies that would regulate the impact of satellite constellations on radio astronomy.

In the meantime, Grigg says, “We are in an ongoing dialogue with SpaceX and are hopeful that we can continue to work with them to introduce mitigations to their satellites in the future.”

The survey is described in Astronomy & Astrophysics.


Deep-blue LEDs get a super-bright, non-toxic boost

21 August 2025 at 10:00

A team led by researchers at Rutgers University in the US has discovered a new semiconductor that emits bright, deep-blue light. The hybrid copper iodide material is stable, non-toxic, can be processed in solution and has already been integrated into a light-emitting diode (LED). According to its developers, it could find applications in solid-state lighting and display technologies.

Creating white light for solid-state lighting and full-colour displays requires bright, pure sources of red, green and blue light. While stable materials that efficiently emit red or green light are relatively easy to produce, those that generate blue light (especially deep-blue light) are much more challenging. Existing blue-light emitters based on organic materials are unstable, meaning they lose their colour quality over time. Alternatives based on lead-halide perovskites or cadmium-containing colloidal quantum dots are more stable, but also toxic for humans and the environment.

Hybrid copper-halide-based emitters promise the best of both worlds, being both non-toxic and stable. They are also inexpensive, with tuneable optical properties and a high luminescence efficiency, meaning they are good at converting power into visible light.

Researchers have already used a pure inorganic copper iodide material, Cs3Cu2I5, to make deep-blue LEDs. This material emits light at the ideal wavelength of 445 nm, is robust to heat and moisture, and re-emits 87–95% of the excitation photons it absorbs as luminescence photons, giving it a high photoluminescence quantum yield (PLQY).

However, the maximum ratio of photon output to electron input (known as the maximum external quantum efficiency, EQEmax) for this material is very low, at just 1.02%.
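The two figures of merit quoted here measure different things: PLQY counts photons emitted per photon absorbed under optical excitation, whereas an LED’s external quantum efficiency counts photons extracted from the device per electron injected. A common textbook decomposition (not taken from this study) is

```latex
\mathrm{EQE}
= \frac{\text{photons emitted out of the device}}{\text{electrons injected}}
= \gamma \,\eta_{\mathrm{r}}\,\mathrm{PLQY}\,\eta_{\mathrm{out}},
```

where γ is the charge-balance factor, η_r the fraction of excitons that are allowed to decay radiatively and η_out the light-outcoupling efficiency, which is why a material with near-unity PLQY can still sit in a device whose EQE is only a few per cent.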

Strong deep-blue photoluminescence

In the new work, a team led by Rutgers materials chemist Jing Li developed a hybrid copper iodide with the chemical formula 1D-Cu4I8(Hdabco)4, abbreviated CuI(Hda), where Hdabco is 1,4-diazabicyclo[2.2.2]octane-1-ium. This material emits strong deep-blue light at 449 nm with a PLQY near unity (99.6%).

Li and colleagues opted to use CuI(Hda) as the sole light-emitting layer and built a thin-film LED out of it using a solution process. The new device has an EQEmax of 12.6% with colour coordinates (0.147, 0.087) and a peak brightness of around 4000 cd m⁻². It is also relatively stable, with an operational half-lifetime (T50) of approximately 204 hours under ambient conditions. These figures mean that its performance rivals the best existing solution-processed deep-blue LEDs, Li says. The team also fabricated a large-area device measuring 4 cm² to demonstrate that the material could be used in real-world applications.

Interfacial hydrogen-bond passivation strategy

The low EQE of previous such devices is partly due to the fact that charge carriers (electrons and holes) in these materials rapidly recombine in a non-radiative way, typically because of surface and bulk defects, or traps. The charge carriers also have a low radiative recombination rate, which is associated with a small exciton (electron-hole pair) binding energy.

Li and colleagues overcame this problem in their new device thanks to a dual-interfacial hydrogen-bond passivation (DIHP) strategy that involves introducing hydrogen bonds via an ultrathin sheet of poly(methyl methacrylate) (PMMA) and a carbazole-phosphonic acid-based self-assembled monolayer (Ac2PACz) at the two interfaces of the CuI(Hda) emissive layer. This effectively passivates both heterojunctions of the hybrid copper iodide light-emitting layer and optimizes exciton binding energies. “Such a synergistic surface modification dramatically boosts the performance of the deep-blue LED by a factor of fourfold,” explains Li.

According to Li, the study suggests a promising route for developing blue emitters that are both energy-efficient and environmentally benign, without compromising on performance. “Through the fabrication of blue LEDs using a low cost, stable and nontoxic material capable of delivering efficient deep-blue light, we address major energy and ecological limitations found in other types of solution-processable emitters,” she tells Physics World.

Li adds that the hydrogen-bonding passivation technique is not limited to the material studied in this work. It could also be applied to minimize interfacial energy losses in a wide range of other solution-based, light-emitting optoelectronic systems.

The team is now pursuing strategies for developing other solution-processable, high-performance hybrid copper iodide-based emitter materials similar to CuI(Hda). “Our goal is to further enhance the efficiency and extend the operational lifetime of LEDs utilizing these next-generation materials,” says Li.

The present work is detailed in Nature.


Physicists discover a new proton magic number

20 August 2025 at 15:00

The first precise mass measurements of an extremely short-lived and proton-rich nucleus, silicon-22, have revealed the “magic” – that is, unusually tightly bound – nature of nuclei containing 14 protons. As well as shedding light on nuclear structure, the discovery could improve our understanding of the strong nuclear force and the mechanisms by which elements form.

At the lighter end of the periodic table, stable nuclei tend to contain similar numbers of neutrons and protons. As the number of protons increases, additional neutrons are needed to balance out the mutual repulsion of the positively-charged protons. As a rule, therefore, an isotope of a given element will be unstable if it contains either too few neutrons or too many.

In 1949, Maria Goeppert Mayer and J Hans D Jensen proposed an explanation for this rule. According to their nuclear shell model, nuclei that contain certain “magic” numbers of nucleons (neutrons and/or protons) are more tightly bound because they have just the right number of nucleons to completely fill their shells. Nuclei that contain magic numbers of both protons and neutrons are more tightly bound still and are said to be “doubly magic”. Subsequent studies showed that for neutrons, these magic numbers are 2, 8, 20, 28, 50, 82 and 126.

While the magic numbers for stable and long-lived nuclei are now well-established, those for exotic, short-lived ones with unusual proton-neutron ratios are comparatively little understood. Do these highly unstable nuclei have the same magic numbers as their more stable counterparts? Or are they different?

In recent years, studies showing that neutron-rich nuclei have magic numbers of 14, 16, 32 and 34 have brought scientists closer to answering this question. But what about protons?

“The hunt for new magic numbers in proton-rich nuclei is just as exciting,” says Yuan-Ming Xing, a physicist at the Institute for Modern Physics (IMP) of the Chinese Academy of Sciences, who led the latest study on silicon-22. “This is because we know much less about the evolution of the shell structure of these nuclei, in which the valence protons are loosely bound.” Protons in these nuclei can even couple to states in the continuum, Xing adds, forming the open quantum systems that have become such a hot topic in quantum research.

Mirror nuclei

After measurements on oxygen-22 (14 neutrons, 8 protons) showed that 14 is a magic number of neutrons for this neutron-rich isotope, the hunt was on for a proton-rich counterpart. An important theory in nuclear physics known as isospin symmetry states that nuclei with interchanged numbers of protons and neutrons will have identical characteristics. The magic numbers for protons and neutrons for these “mirror” nuclei, as they are known, are therefore expected to be the same. “Of all the new neutron-rich doubly-magic nuclei discovered, only one loosely bound mirror nucleus for oxygen-22 exists,” says IMP team member Yuhu Zhang. “This is silicon-22.”

The problem is that silicon-22 (14 protons, 8 neutrons) has a short half-life and is hard to produce in quantities large enough to study. To overcome this, the researchers used an improved version of a technique known as Bρ-defined isochronous mass spectroscopy.

Working at the Cooler-Storage Ring of the Heavy Ion Research Facility in Lanzhou, China, Xing, Zhang and an international team of collaborators began by accelerating a primary beam of stable 36Ar15+ ions to around two thirds the speed of light. They then directed this beam onto a 15-mm-thick beryllium target, causing some of the 36Ar ions to fragment into silicon-22 nuclei. After injecting these nuclei into the storage ring, the researchers could measure their velocity and the time it took them to circle the ring. From this, they could determine their mass. This measurement confirmed that the proton number 14 is indeed magic in silicon-22.
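The article does not give the relation that connects these measured quantities to the mass. In a storage ring the magnetic rigidity Bρ fixes an ion’s momentum-to-charge ratio, so once the velocity is known the mass-to-charge ratio follows from standard storage-ring kinematics (our summary, not the collaboration’s calibration procedure):

```latex
\frac{m}{q} = \frac{B\rho}{\gamma\beta c},
\qquad
\gamma = \frac{1}{\sqrt{1-\beta^{2}}},
```

where βc is the measured ion velocity, obtained from the revolution time around the known orbit length, and Bρ is the rigidity set by the ring. Comparing the masses determined this way with those of neighbouring nuclei reveals the extra binding that signals a magic number.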

A better understanding of nucleon interactions

“Our work offers an excellent opportunity to test the fundamental theories of nuclear physics for a better understanding of nucleon interactions, of how exotic nuclear structures evolve and of the limit of existence of extremely exotic nuclei,” says team member Giacomo de Angelis, a nuclear physicist affiliated with the National Laboratories of Legnaro in Italy as well as the IMP. “It could also help shed more light on the reaction rates for element formation in stars – something that could help astrophysicists to better model cosmic events and understand how our universe works.”

According to de Angelis, this first mass measurement of the silicon-22 nucleus and the discovery of the magic proton number 14 is “a strong invitation not only for us, but also for other nuclear physicists around the world to investigate further”. He notes that researchers at the Facility for Rare Isotope Beams (FRIB) at Michigan State University, US, recently measured the energy of the first excited state of the silicon-22 nucleus. The new High Intensity Heavy-Ion Accelerator Facility (HIAF) in Huizhou, China, which is due to come online soon, should enable even more detailed studies.

“HIAF will be a powerful accelerator, promising us ideal conditions to explore other loosely bound systems, thereby helping theorists to more deeply understand nucleon-nucleon interactions, quantum mechanics of open quantum systems and the origin of elements in the universe,” he says.

The present study is detailed in Physical Review Letters.


Laser-driven implosion could produce megatesla magnetic fields

20 August 2025 at 10:00

Magnetic fields so strong that they are typically only observed in astrophysical jets and highly magnetized neutron stars could be created in the laboratory, say physicists at the University of Osaka, Japan. Their proposed approach relies on directing extremely short, intense laser pulses into a hollow tube housing sawtooth-like inner blades. The fields created in this improved version of the established “microtube implosion” technique could be used to imitate effects that occur in various high-energy-density processes, including non-linear quantum phenomena and laser fusion as well as astrophysical systems.

Researchers have previously shown that advanced ultra-intense femtosecond (10⁻¹⁵ s) lasers can generate magnetic fields with strengths of up to several kilotesla. More recently, a suite of techniques that combines advanced laser technologies with complex microstructures promises to push this limit even higher, into the megatesla regime.

Microtube implosion is one such technique. Here, femtosecond laser pulses with intensities between 10²⁰ and 10²² W/cm² are aimed at a hollow cylindrical target with an inner radius of between 1 and 10 mm. This produces a plasma of hot electrons with MeV energies that form a sheath field along the inner wall of the tube. These electrons accelerate ions radially inward, causing the cylinder to implode.

At this point, a “seed” magnetic field deflects the ions and electrons in opposite azimuthal directions via the Lorentz force. The loop currents induced in the same direction ultimately generate a strong axial magnetic field.

Self-generated loop current

Although the microtube implosion technique is effective, it does require a kilotesla-scale seed field. This complicates the apparatus and makes it rather bulky.

 In the latest work, Osaka’s Masakatsu Murakami and colleagues propose a new setup that removes the need for this seed field. It does this by replacing the 1‒10 mm cylinder with a micron-sized one that has a periodically slanted inner surface housing sawtooth-shaped blades. These blades introduce a geometrical asymmetry in the cylinder, causing the imploding plasma to swirl asymmetrically inside it and generating circulating currents near its centre. These self-generated loop currents then produce an intense axial magnetic field with a magnitude in the gigagauss range (1 gigagauss = 100 000 T).
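To put that number in perspective, a simple magnetostatic estimate (ours, not the paper’s) treats the circulating current as a solenoid-like sheet, for which the axial field is set by the azimuthal surface current density K:

```latex
B_z \approx \mu_0 K
\quad\Longrightarrow\quad
K \approx \frac{10^{5}\ \mathrm{T}}{4\pi\times10^{-7}\ \mathrm{T\,m\,A^{-1}}}
\approx 8\times10^{10}\ \mathrm{A\,m^{-1}},
```

so reaching one gigagauss implies self-generated loop currents of order 10¹¹ amperes per metre of tube length, which gives a feel for how extreme the plasma conditions inside the target must be.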

Using “particle-in-cell” simulations running the fully relativistic EPOCH code on Osaka’s SQUID supercomputer, the researchers found that such vortex structures and their associated magnetic field arise from a self-consistent positive feedback mechanism. The initial loop current amplifies the central magnetic field, which in turn constrains the motion of charged particles more tightly via the Lorentz force – and thereby reinforces and intensifies the loop current further.

“This approach offers a powerful new way to create and study extreme magnetic fields in a compact format,” Murakami says. “It provides an experimental bridge between laboratory plasmas and the astrophysical universe and could enable controlled studies of strongly magnetized plasmas, relativistic particle dynamics and potentially magnetic confinement schemes relevant to both fusion and astrophysics.”

The researchers, who report their work in Physics of Plasmas, are now looking to realize their scheme in an experiment using petawatt-class lasers. “We will also investigate how these magnetic fields can be used to steer particles or compress plasmas,” Murakami tells Physics World.


How hot can you make a solid before it melts?

19 August 2025 at 14:56

Gold can remain solid at temperatures of over 14 times its melting point, far beyond a long-assumed theoretical limit dubbed the “entropy catastrophe”. This finding is based on temperature measurements made using high-resolution inelastic X-ray scattering, and according to team leader Thomas White of the University of Nevada, US, it implies that the question “How hot can you make a solid before it melts?” has no good answer.

“Until now, we thought that solids could not exist above about three times their melting temperatures,” White says. “Our results show that if we heat a material rapidly – that is, before it has time to expand – it is possible to bypass this limit entirely.”

Gold maintains its solid crystalline structure

In their experiments, which are detailed in Nature, White and colleagues heated a 50-nanometre-thick film of gold using intense laser pulses just 50 femtoseconds long (1 fs = 10⁻¹⁵ s). The key is the speed at which heating occurs. “By depositing energy faster than the gold lattice could expand, we created a state in which the gold was incredibly hot, but still maintained its solid crystalline structure,” White explains.

The team’s setup made it possible to achieve heating rates in excess of 10¹⁵ K s⁻¹, and ultimately to heat the gold to 14 times its 1064 °C melting temperature. This is far beyond the boundary of the supposed entropy catastrophe, which was previously predicted to strike at 3000 °C.

To measure such extreme temperatures accurately, the researchers used the Linac Coherent Light Source (LCLS) at Stanford University as an ultrabright X-ray thermometer. In this technique, the atoms or molecules in a sample absorb photons from an X-ray laser at one frequency and then re-emit photons of a different frequency. The difference in these frequencies depends on a photon’s Doppler shift, and thus on whether it is propagating towards or away from the detector.

The method works because all the atoms in a material exhibit random thermal motion. The temperature of the sample therefore depends on the average kinetic energy of its atoms: higher temperatures correspond to faster-moving atoms and a bigger spread in the velocities of atoms moving towards or away from the detector. Hence, the width of the spectrum of light scattered by the sample can be used to estimate its temperature. “This approach bypasses the need for complex computer modelling because we simply measure the velocity distribution of atoms directly,” White explains.
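Quantitatively, for atoms of mass m in thermal equilibrium the line-of-sight velocity distribution is Gaussian with variance k_BT/m, so a temperature can be read straight off the measured width. A minimal sketch of that conversion (standard Doppler-broadening thermometry with illustrative numbers, not the team’s data or analysis code) is:

```python
# Minimal sketch: infer a temperature from the width of the line-of-sight
# velocity distribution, using k_B * T = m * sigma_v**2 for thermal motion.
# The numbers below are illustrative, not measurements from the experiment.

K_B = 1.380649e-23            # Boltzmann constant, J/K
M_GOLD = 196.97 * 1.66054e-27 # mass of a gold atom, kg

def temperature_from_velocity_width(sigma_v_m_per_s: float, mass_kg: float = M_GOLD) -> float:
    """Temperature (K) from the one-dimensional RMS velocity spread sigma_v (m/s)."""
    return mass_kg * sigma_v_m_per_s**2 / K_B

# A ~900 m/s RMS spread in gold corresponds to roughly 19,000 K,
# i.e. about 14 times gold's melting temperature of ~1337 K.
print(temperature_from_velocity_width(900.0))  # ≈ 1.9e4
```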

A direct, model-independent method

The team, which also includes researchers from the UK, Germany and Italy, undertook this project because its members wanted to develop a direct, model-independent method to measure atom temperatures in extreme conditions. The technical challenges of doing so were huge, White recalls. “We not only needed a high-resolution X-ray spectrometer capable of resolving energy features of just millielectronvolts (meV) but also an X-ray source bright enough to generate meaningful signals from small, short-lived samples,” he says.

A further challenge is that while pressure and density measurements under extreme conditions are more or less routine, temperature is typically inferred – often with large uncertainties. “In our experiments, these extreme states last just picoseconds or nanoseconds,” he says. “We can’t exactly insert a thermometer.”

White adds that this limitation has slowed progress across plasma and materials physics. “Our work provides the first direct method for measuring ion temperatures in dense, strongly driven matter, unlocking new possibilities in areas like planetary science – where we can now probe conditions inside giant planets – and in fusion energy, where temperature diagnostics are critical.”

Fundamental studies in materials science could also benefit, he adds, pointing out that scientists will now be able to explore the ultimate stability limits of solids experimentally as well as theoretically, studying how materials behave when pushed far beyond conventional thermodynamic boundaries.

The researchers are now applying their method to shock-compressed materials. “Just a few weeks ago, we completed a six-night experiment at the LCLS using the same high-resolution scattering platform to measure both particle velocity and temperature in shock-melted iron,” White says. “This is a major step forward. Not only are we tracking temperature in the solid phase, but now we’re accessing molten states under dynamic compression, that is, at conditions like those found inside planetary interiors.”

White tells Physics World that these experiments also went well, and he and his colleagues are now analysing the results. “Ultimately, our goal is to extend this approach to a wide range of materials and conditions, allowing for a new generation of precise, real-time diagnostics in extreme environments,” he says.


Android phone network makes an effective early warning system for earthquakes

15 August 2025 at 10:00

The global network of Android smartphones makes a useful earthquake early warning system, giving many users precious seconds to act before the shaking starts. These findings, which come from researchers at Android’s parent organization Google, are based on a three-year-long study involving millions of phones in 98 countries. According to the researchers, the network’s capabilities could be especially useful in areas that lack established early warning systems.

“By using Android smartphones, which make up 70% of smartphones worldwide, the Android Earthquake Alert (AEA) system can help provide life-saving warnings in many places around the globe,” says study co-leader Richard Allen, a visiting faculty researcher at Google who directs the Berkeley Seismological Laboratory at the University of California, Berkeley, US.

Traditional earthquake early warning systems use networks of seismic sensors expressly designed for this purpose. First implemented in Mexico and Japan, and now also deployed in Taiwan, South Korea, the US, Israel, Costa Rica and Canada, they rapidly detect earthquakes in areas close to the epicentre and issue warnings across the affected region. Even a few seconds of warning can be useful, Allen explains, because it enables people to take protective actions such as the “drop, cover and hold on” (DCHO) sequence recommended in most countries.

Building such seismic networks is expensive, and many earthquake-prone regions do not have them. What they do have, however, is smartphones. Most such devices contain built-in accelerometers, and as their popularity soared in the 2010s, seismic scientists began exploring ways of using them to detect earthquakes. “Although the accelerometers in these phones are less sensitive than the permanent instruments used in traditional seismic networks, they can still detect tremors during strong earthquakes,” Allen tells Physics World.

A smartphone-based warning system

By the late 2010s, several teams had developed smartphone apps that could sense earthquakes when they happen, with early examples including Mexico’s SkyAlert and Berkeley’s MyShake. The latest study takes this work a step further. “By using the accelerometers in a network of smartphones like a seismic array, we are now able to provide warnings in some parts of the world where they didn’t exist before and are most needed,” Allen explains.

Working with study co-leader Marc Stogaitis, a principal software engineer at Android, Allen and colleagues tested the AEA system between 2021 and 2024. During this period, the app detected an average of 312 earthquakes a month, with magnitudes ranging from 1.9 to 7.8 (corresponding to events in Japan and Türkiye, respectively).

Detecting earthquakes with smartphones

Animation showing phones detecting shaking as a magnitude 6.2 earthquake in Türkiye progressed. Yellow dots are phones that detect shaking. The yellow circle is the P-wave’s estimated location and the red circle is for the S-wave. Note that phones can detect shaking for reasons other than an earthquake, and the system needs to handle this source of noise. This video has no sound. (Courtesy: Google)

For earthquakes of magnitude 4.5 or higher, the system sent “TakeAction” alerts to users. These alerts are designed to draw users’ attention immediately and prompt them to take protective actions such as DCHO. The system sent alerts of this type on average 60 times per month during the study period, for an average of 18 million individual alerts per month. The system also delivered lesser “BeAware” alerts to regions expected to experience a shaking intensity of 3 or 4.
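The two alert tiers map onto simple thresholds on magnitude and expected shaking intensity. A minimal sketch of that decision logic is below: the magnitude cut-off for “TakeAction” and the intensity 3–4 range for “BeAware” come from the figures above, while the intensity cut-off for “TakeAction” and the function itself are illustrative assumptions, not details of Google’s implementation.

```python
# Minimal sketch of the two-tier alert logic described above. The magnitude
# threshold (>= 4.5 for TakeAction) and the intensity 3-4 range for BeAware
# come from the article; the intensity cut-off for TakeAction and the
# function itself are illustrative assumptions, not Google's implementation.
from typing import Optional

def choose_alert(magnitude: float, expected_intensity: float) -> Optional[str]:
    """Pick the alert tier for one location, or None if no alert is warranted."""
    if magnitude >= 4.5 and expected_intensity >= 5:  # assumed cut-off for strong shaking
        return "TakeAction"   # full-screen alert prompting drop, cover and hold on
    if magnitude >= 4.5 and 3 <= expected_intensity < 5:
        return "BeAware"      # lower-priority notification for lighter shaking
    return None

print(choose_alert(5.8, 6.0))  # TakeAction
print(choose_alert(5.8, 3.5))  # BeAware
```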

To assess how effective these alerts were, the researchers used Google Search to collect voluntary feedback via user surveys. Between 5 February 2023 and 30 April 2024, 1 555 006 people responded to a survey after receiving alerts generated from an AEA detection. Their responses indicated that 85% of them did indeed experience shaking, with 36% receiving the alert before the ground began to move, 28% during and 23% after.

Feeling the Earth move: Feedback from users who received an alert. A total of 1 555 006 responses to the user survey were collected over the period 5 February 2023 to 30 April 2024. During this time, alerts were issued for 1042 earthquakes detected by AEA. (Courtesy: Google)

Principles of operation

AEA works on the same principles of seismic wave propagation as traditional earthquake detection systems. When an Android smartphone is stationary, the system uses the output of its accelerometer to detect the type of sudden increase in acceleration that P and S waves in an earthquake would trigger. Once a phone detects such a pattern, it sends a message to Google servers with the acceleration information and an approximate location. The servers then search for candidate seismic sources that tally with this information.

“When a candidate earthquake source satisfies the observed data with a high enough confidence, an earthquake is declared and its magnitude, hypocentre and origin time are estimated based on the arrival time and amplitude of the P and S waves,” explains Stogaitis. “This detection capability is deployed as part of Google Play Services core system software, meaning it is on by default for most Android smartphones. As there are billions of Android phones around the world, this system provides an earthquake detection capability wherever there are people, in both wealthy and less-wealthy nations.”
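The server-side search that Stogaitis describes can be pictured as a toy source-association step: pick the candidate epicentre and origin time whose predicted P-wave arrivals best match the trigger times reported by phones. The grid search, the uniform P-wave speed and the example trigger data in the sketch below are illustrative assumptions, not the production algorithm.

```python
# Toy illustration of the server-side step described above: find the
# candidate source (epicentre and origin time) whose predicted P-wave
# arrivals best match the trigger times reported by phones. The grid
# search, the uniform P-wave speed and the data below are assumptions
# for illustration only, not the production algorithm.
import itertools
import math

P_SPEED_KM_S = 6.0  # assumed average crustal P-wave speed

def misfit(source, triggers):
    """RMS difference between observed and predicted P-wave arrival times."""
    sx, sy, t0 = source
    errs = []
    for (px, py, t_obs) in triggers:
        dist = math.hypot(px - sx, py - sy)
        t_pred = t0 + dist / P_SPEED_KM_S
        errs.append((t_obs - t_pred) ** 2)
    return math.sqrt(sum(errs) / len(errs))

def locate(triggers, grid_km=200, step_km=10):
    """Grid search over candidate epicentres, with a crude origin-time estimate."""
    best = None
    for sx, sy in itertools.product(range(-grid_km, grid_km + 1, step_km), repeat=2):
        # earliest trigger minus travel time to that phone bounds the origin time
        t0 = min(t - math.hypot(px - sx, py - sy) / P_SPEED_KM_S for px, py, t in triggers)
        cand = (sx, sy, t0)
        score = misfit(cand, triggers)
        if best is None or score < best[0]:
            best = (score, cand)
    return best[1]

# phones reporting (x_km, y_km, trigger_time_s) relative to an arbitrary origin
triggers = [(30, 40, 9.0), (-20, 10, 5.2), (0, -50, 9.4), (60, 0, 10.5)]
print(locate(triggers))
```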

In the future, Allen says that he and his colleagues hope to use the same information to generate other hazard-reducing tools. Maps of ground shaking, for example, could assist the emergency response after an earthquake.

For now, the researchers, who report their work in Science, are focused on improving the AEA system. “We are learning from earthquakes as they occur around the globe and the Android Earthquake Alerts system is helping to collect information about these natural disasters at a rapid rate,” says Allen. “We think that we can continue to improve both the quality of earthquake detections, and also improve on our strategies to deliver effective alerts.”

The post Android phone network makes an effective early warning system for earthquakes appeared first on Physics World.

Graphite ‘hijacks’ the journey from molten carbon to diamond

14 août 2025 à 13:00

At high temperatures and pressures, molten carbon has two options. It can crystallize into diamond and become one of the world’s most valuable substances. Alternatively, it can crystallize into graphite, which is industrially useful but somewhat less exciting.

Researchers in the US have now discovered what causes molten carbon to “choose” one crystalline form over the other. Their findings, which are based on sophisticated simulations that use machine learning to predict molecular behaviour, have implications for several fields, including geology, nuclear fusion and quantum computing as well as industrial diamond production.

Monitoring crystallization in molten carbon is challenging because the process is rapid and occurs under conditions that are hard to produce in a laboratory. When scientists have tried to study this region of carbon’s phase diagram using high-pressure flash heating, their experiments have produced conflicting results.

A better understanding of phase changes near the crystallization point could bring substantial benefits. Liquid-phase carbon is a known intermediate in the synthesis of artificial diamonds, nanodiamonds and the nitrogen-vacancy-doped diamonds used in quantum computing. The presence of diamond in natural minerals can also shed light on tectonic processes in Earth-like planets and the deep-Earth carbon cycle.

Crystallization process can be monitored in detail

In the new work, a team led by chemist Davide Donadio of the University of California, Davis used machine-learning-accelerated, quantum-accurate molecular dynamics simulations to model how diamond and graphite form as liquid carbon cools from 5000 to 3000 K at pressures ranging from 5 to 30 GPa. While such extreme conditions can be created using laser heating, Donadio notes that doing so requires highly specialized equipment. Simulations also provide a level of control over conditions and an ability to monitor the crystallization process at the atomic scale that would be difficult, if not impossible, to achieve experimentally.

The team’s simulations showed that the crystallization behaviour of molten carbon is more complex than previously thought. While it crystallizes into diamond at higher pressures, at lower pressures (up to 15 GPa) it forms graphite instead. This was surprising, the researchers say, because even at these slightly lower pressures, the material’s most thermodynamically stable phase ought to be diamond rather than graphite.

“Nature taking the path of least resistance”

The team attributes this unexpected behaviour to an empirical observation known as Ostwald’s step rule, which states that crystallization often proceeds through intermediate metastable phases rather than directly to the phase that is most thermodynamically stable. In this case, the researchers say that graphite, a nucleating metastable crystal, acts as a stepping stone because its structure more closely resembles that of the parent liquid carbon. For this reason, it hinders the direct formation of the stable diamond phase.

“The liquid carbon essentially finds it easier to become graphite first, even though diamond is ultimately more stable under these conditions,” says co-author Tianshu Li, a professor of civil and environmental engineering at George Washington University. “It’s nature taking the path of least resistance.”

The insights gleaned from this work, which is described in Nature Communications, could help resolve inconsistencies among historical electrical and laser flash-heating experiments, Donadio says. Though these experiments were aimed at resolving the phase diagram of carbon near the graphite-diamond-liquid triple point, various experimental details and recrystallization conditions may have meant that their systems instead became “trapped” in metastable graphitic configurations. Understanding how this happens could prove useful for manufacturing carbon-based materials such as synthetic diamonds and nanodiamonds at high pressure and temperature.

“I have been studying crystal nucleation for 20 years and have always been intrigued by the behaviour of carbon,” Donadio tells Physics World. “Studies based on so-called empirical potentials have been typically unreliable in this context and ab initio density functional theory-based calculations are too slow. Machine learning potentials allow us to overcome these issues, having the right combination of accuracy and computational speed.”

Looking to the future, Donadio says he and his colleagues aim to study more complex chemical compositions. “We will also be focusing on targeted pressures and temperatures, the likes of which are found in the interiors of giant planets in our solar system.”

The post Graphite ‘hijacks’ the journey from molten carbon to diamond appeared first on Physics World.

Physicists get dark excitons under control

12 août 2025 à 16:30
Dark exciton control: Researchers assemble a large cryostat in an experimental physics laboratory, preparing for ultra-low temperature experiments with quantum dots on a semiconductor chip. (Courtesy: Universität Innsbruck)

Physicists in Austria and Germany have developed a means of controlling quasiparticles known as dark excitons in semiconductor quantum dots for the first time. The new technique could be used to generate single pairs of entangled photons on demand, with potential applications in quantum information storage and communication.

Excitons are bound pairs of negatively charged electrons and positively charged “holes”. When these electrons and holes have opposite spins, they recombine easily, emitting a photon in the process. Excitons of this type are known as “bright” excitons. When the electrons and holes have parallel spins, however, direct recombination by emitting a photon is not possible because it would violate the conservation of spin angular momentum. This type of exciton is therefore known as a “dark” exciton.

Because dark excitons are not optically active, they have much longer lifetimes than their bright cousins. For quantum information specialists, this is an attractive quality, because it means that dark excitons can store quantum states – and thus the information contained within these states – for much longer. “This information can then be released at a later time and used in quantum communication applications, such as optical quantum computing, secure communication via quantum key distribution (QKD) and quantum information distribution in general,” says Gregor Weihs, a quantum photonics expert at the Universität Innsbruck, Austria, who led the new study.

The problem is that dark excitons are difficult to create and control. In semiconductor quantum dots, for example, Weihs explains that dark excitons tend to be generated randomly, such as when a quantum dot in a higher-energy state decays into a lower-energy state.

Chirped laser pulses lead to reversible exciton production

In the new work, which is detailed in Science Advances, the researchers showed that they could control the production of dark excitons in quantum dots by using laser pulses that are chirped, meaning that the frequency (or colour) of the laser light varies within the pulse. Such chirped pulses, Weihs explains, can turn one quantum dot state into another.

“We first bring the quantum dot to the (bright) biexciton state using a conventional technique and then apply a (storage) chirped laser pulse that turns this biexciton occupation (adiabatically) into a dark state,” he says. “The storage pulse is negatively chirped – its frequency decreases with time, or in terms of colour, it turns redder.” Importantly, the process is reversible: “To convert the dark exciton back into a bright state, we apply a (positively chirped) retrieval pulse to it,” Weihs says.

One possible application for the new technique would be to generate single pairs of entangled photons on demand – the starting point for many quantum communication protocols. Importantly, Weihs adds that this should be possible with almost any type of quantum dot, whereas an alternative method known as polarization entanglement works for only a few quantum dot types with very special properties. “For example, it could be used to create ‘time-bin’ entangled photon pairs,” he tells Physics World. “Time-bin entanglement is particularly suited to transmitting quantum information through optical fibres because the quantum state stays preserved over very long distances.”

The study’s lead author, Florian Kappe, and his colleague Vikas Remesh describe the project as “a challenging but exciting and rewarding experience” that combined theoretical and experimental tools. “The nice thing, we feel, is that on this journey, we developed a number of optical excitation methods for quantum dots for various applications,” they say via e-mail.

The physicists are now studying the coherence time of the dark exciton states, which is an important property in determining how long they can store quantum information. According to Weihs, the results from this work could make it possible to generate higher-dimensional time-bin entangled photon pairs – for example, pairs of quantum states called qutrits that have three possible values.

“Thinking beyond this, we imagine that the technique could even be applied to multi-excitonic complexes in quantum dot molecules,” he adds. “This could possibly result in multi-photon entanglement, such as so-called GHZ (Greenberger-Horne-Zeilinger) states, which are an important resource in multiparty quantum communication scenarios.”

The post Physicists get dark excitons under control appeared first on Physics World.

New metalaser is a laser researcher’s dream

11 août 2025 à 10:26

A new type of nanostructured lasing system called a metalaser emits light with highly tuneable wavefronts – something that had proved impossible to achieve with conventional semiconductor lasers. According to the researchers in China who developed it, the new metalaser can generate speckle-free laser holograms and could revolutionize the field of laser displays.

The first semiconductor lasers were invented in the 1960s and many variants have since been developed. Their numerous advantages – including small size, long lifetimes and low operating voltages – mean they are routinely employed in applications ranging from optical communications and interconnects to biomedical imaging and optical displays.

To make further progress with this class of lasers, researchers have been exploring ways of creating them at the nanoscale. One route for doing this is to integrate light-scattering arrays called metasurfaces with laser mirrors or insert them inside resonators. However, the wavefronts of the light emitted by these metalasers have proven very difficult to control, and to date only a few simple profiles have been possible without introducing additional optical elements.

Not significantly affected by perturbations

In the new work, a team led by Qinghai Song of the Harbin Institute of Technology, Shenzhen, created a metalaser that consists of silicon nitride nanodisks that have holes in their centres and are arranged in a periodic array. This configuration generates bound states in the continuum (BICs). Since the laser energy is concentrated in the centre of each nanodisk, the wavelength of the BIC is not significantly affected by perturbations such as tiny holes in the structure.

“At the same time, the in-plane electric fields of these modes are distributed along the periphery of each nanodisk,” Song explains. “This greatly enhances the light field inside the centre of the hole and induces an effective dipole moment there, which is what produces a geometric phase change to the light emission at each pixel.”

By rotating the holes in the nanodisks, Song says that it is possible to introduce specific geometric phase profiles into the metasurface. The laser emission can then be tailored to create focal spots, focal lines and doughnut shapes as well as holographic images.
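Metasurfaces that encode phase through element rotation typically follow the geometric-phase (Pancharatnam–Berry) rule, in which rotating an element by an angle θ imparts a phase of roughly 2θ on circularly polarized light. Assuming that convention applies here – the paper’s exact design rules may differ – the mapping from a target wavefront to hole orientations could be sketched as follows, with all numerical parameters chosen purely for illustration.

```python
# Sketch: mapping a target phase profile to hole-rotation angles, assuming
# the common geometric-phase convention phi = 2*theta. The lens profile,
# wavelength, focal length and pixel pitch are illustrative values only,
# not parameters from the Nature paper.
import numpy as np

wavelength = 1.55e-6   # m, assumed emission wavelength
focal_length = 100e-6  # m, assumed target focal length
pitch = 0.8e-6         # m, assumed nanodisk spacing
n_pixels = 64

coords = (np.arange(n_pixels) - n_pixels / 2) * pitch
x, y = np.meshgrid(coords, coords)

# hyperbolic lens phase profile for a focal spot on the optical axis
target_phase = (2 * np.pi / wavelength) * (focal_length - np.sqrt(x**2 + y**2 + focal_length**2))
target_phase = np.mod(target_phase, 2 * np.pi)

# geometric-phase convention: rotating the hole by theta adds a phase of 2*theta
hole_rotation = target_phase / 2.0   # radians, one angle per nanodisk

print(hole_rotation.shape, hole_rotation.min(), hole_rotation.max())
```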

And that is not all. Unlike in conventional laser modes, the waves scattered from the new metalaser are too weak to undergo resonant amplification. This means that the speckle noise generated is negligibly small, which resolves the longstanding challenge of reducing speckle noise in holographic displays without reducing image quality.

According to Song, this property could revolutionize laser displays. He adds that the physical concept outlined in the team’s work could be extended to other nanophotonic devices, substantially improving their performance in various optics and photonics applications.

“Controlling laser emission at will has always been a dream of laser researchers,” he tells Physics World. “Researchers have traditionally done this by introducing metasurfaces into structures such as laser oscillators. This approach, while very straightforward, is severely limited by the resonant conditions of this type of laser system. With other types of laser, they had to either integrate a metasurface wave plate outside the laser cavity or use bulky and complicated components to compensate for phase changes.”

With the new metalaser, the laser emission can be changed from fixed profiles such as Hermite-Gaussian modes and Laguerre-Gaussian modes to arbitrarily customized beams, he says. One consequence of this is that the lasers could be fabricated to match the numerical aperture of fibres or waveguides, potentially boosting the performance of optical communications and optical information processing.

Developing a programmable metalaser will be the researchers’ next goal, Song says.

The new metalaser design is described in Nature.

The post New metalaser is a laser researcher’s dream appeared first on Physics World.

Space ice reveals its secrets

7 août 2025 à 18:00

The most common form of water in the universe appears to be much more complex than was previously thought. While past measurements suggested that this “space ice” is amorphous, researchers in the UK have now discovered that it contains crystals. The result poses a challenge to current models of ice formation and could alter our understanding of ordinary liquid water.

Unlike most other materials, water is denser as a liquid than it is as a solid. It also expands rather than contracts when it cools; becomes less viscous when compressed; and exists in many physical states, including at least 20 polymorphs of ice.

One of these polymorphs is commonly known as space ice. Found in the bulk matter in comets, on icy moons and in the dense molecular clouds where stars and planets form, it is less dense than liquid water (0.94 g cm−3 rather than 1 g cm−3), and X-ray diffraction images indicate that it is an amorphous solid. These two properties give it its formal name: low-density amorphous ice, or LDA.

While space ice was discovered almost a century ago, Michael Davies, who studied LDA as part of his PhD research at University College London and the University of Cambridge, notes that its exact atomic structure is still being debated. “It is unclear, for example, whether LDA is a ‘true glassy state’ (meaning a frozen liquid with no ordered structure) or a highly disordered crystal,” Davies explains.

The memory of ice

In the new work, Davies and colleagues used two separate computational simulations to better understand this atomic structure. In the first simulation, they froze “boxes” of water molecules by cooling them to -150 °C at different rates, which produced crystalline and amorphous ice in varying proportions. They then compared this spectrum of structures to the structure of amorphous ice as measured by X-ray diffraction.

“The best model to match experiments was a ‘goldilocks’ scenario – that is, one that is not too amorphous and not too crystalline,” Davies explains. “Specifically, we found ice that was up to 20% crystalline and 80% amorphous, with the structure containing tiny crystals around 3-nm wide.”
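One simple way to picture the comparison Davies describes is to model the measured diffraction pattern as a linear mixture of a fully crystalline and a fully amorphous reference pattern and fit the crystalline fraction by least squares. The sketch below does this with synthetic stand-in patterns; it illustrates the idea, not the team’s actual analysis or data.

```python
# Toy version of the comparison described above: treat the measured X-ray
# pattern as a linear mix of crystalline and amorphous reference patterns
# and fit the crystalline fraction by least squares. All "data" here are
# synthetic placeholders, not the published simulations or experiments.
import numpy as np

q = np.linspace(1.0, 5.0, 400)  # scattering vector, arbitrary units

# stand-in reference patterns: a sharp Bragg-like peak vs a broad amorphous halo
crystalline = np.exp(-((q - 1.7) ** 2) / (2 * 0.02 ** 2))
amorphous = np.exp(-((q - 1.9) ** 2) / (2 * 0.15 ** 2))

# pretend "experimental" pattern: 20% crystalline plus a little noise
rng = np.random.default_rng(0)
measured = 0.2 * crystalline + 0.8 * amorphous + rng.normal(0, 0.01, q.size)

# least-squares fit of the crystalline fraction f in f*C + (1-f)*A
diff = crystalline - amorphous
f = np.dot(diff, measured - amorphous) / np.dot(diff, diff)
print(f"fitted crystalline fraction: {f:.2f}")  # ~0.20 for this synthetic example
```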

The second simulation began with large “boxes” of ice consisting of many small ice crystals packed together. “Here, we varied the number of crystals in the boxes to again give a range of very crystalline to amorphous models,” Davies says. “We found very close agreement with experiment for models containing around 25% crystalline ice, whose structures were very similar to those from the first approach.”

To back up these findings, the UCL/Cambridge researchers performed a series of experiments. “By re-crystallizing different samples of LDA formed via different ‘parent ice phases’, we found that the final crystal structure formed varied depending on the pathway to creation,” Davies tells Physics World. In other words, he adds, “The final structure had a memory of its parent.”

This is important, Davies continues, because if LDA was truly amorphous and contained no crystalline grains at all, this “memory” effect would not be possible.

Impact on our understanding

The discovery that LDA is not completely amorphous has implications for our understanding of ordinary liquid water. The prevailing “two state” model for water is appealing because it accounts for many of water’s thermodynamic anomalies. However, it rests on the assumption that both LDA and high-density amorphous ice have corresponding liquid forms, and that liquid water can be modelled as a mixture of the two.

“Our finding that LDA actually contains many small crystallites presents some challenges to this model,” Davies says. “It is thus of paramount importance for us to now confirm if a truly amorphous version of LDA is achievable in experiments.”

The existence of structure within LDA also has implications for “panspermia” theory, which hypothesizes that the building blocks of life (such as simple amino acids) were carried to Earth within an icy comet.  “Our findings suggest that LDA would be a less efficient transporting material for these organic molecules because a partly crystalline structure has less space in which these ingredients could become embedded,” Davies says.

“The theory could still hold true, though,” he adds, “as there are amorphous regions in the ice where such molecules could be trapped and stored.”

Challenges in determining atomic structure

The study, which is detailed in Physical Review B, highlights the difficulty of determining the exact atomic structure of materials. According to Davies, it could therefore be important for understanding other amorphous materials, including some that are widely used in technologies such as OLEDs and fibre optics.

“Our methodology could be applied to these materials to determine whether they are truly glassy,” he says. “Indeed, glass fibres that transport data along long distances need to be amorphous to function efficiently. If they are found to contain tiny crystals, these could then be removed to improve performance.”

The researchers are now focusing on understanding the structure of other amorphous ices, including high-density amorphous ice. “There is much for us to investigate with regards to the links between amorphous ice phases and liquid water,” Davies concludes.

The post Space ice reveals its secrets appeared first on Physics World.

Too-close exoplanet triggers flares from host star

1 août 2025 à 10:00

A young gas giant exoplanet appears to be causing its host star to emit energetic outbursts. This finding, which comes from astronomers at the Netherlands Institute for Radio Astronomy (ASTRON) and collaborators in Germany, Sweden and Switzerland, is the first evidence of planets actively influencing their stars, rather than merely orbiting them.

“Until now, we had only seen stars flare on their own, but theorists have long suspected that close-in planets might disturb their stars’ magnetic fields enough to trigger extra flares,” explains Maximilian Günther, a project scientist with the European Space Agency’s Cheops (Characterising ExOPlanet Satellite) mission. “This study now offers the first observational hint that this might indeed be happening.”

Stars with flare(s)

Most stars produce flares at least occasionally. This is because as they spin, they build up magnetic energy – a process that Günther compares to the dynamos on Dutch bicycles. “When their twisted magnetic field lines occasionally snap, they release bursts of radiation,” he explains. “Our own Sun regularly behaves like this, and we experience its bursts of energy as part of space weather on Earth.” The charged particles that follow such flares, he adds, are responsible for the aurorae at our planet’s poles.

The flares the ASTRON team spotted came from a star called HIP 67522. Although classified as a G dwarf star like our own Sun, HIP 67522 is much younger, being 17 million years old rather than 4.5 billion. It is also slightly larger and cooler, and astronomers had previously used data from NASA’s Transiting Exoplanet Survey Satellite (TESS) to identify two planets orbiting it. Denoted HIP 67522 b and HIP 67522 c, both are located near their host, but HIP 67522 b is especially close, completing an orbit in just seven Earth days.

In the latest work, which is detailed in Nature, ASTRON’s Ekaterina Ilin and colleagues used Cheops’ precision targeting to make more detailed observations of the HIP 67522 system. These observations revealed a total of 15 flares, and Ilin notes that almost all of them appeared to be coming towards us as HIP 67522 b transited in front of its host as seen from Earth. This is significant, she says, because it suggests that the flares are being triggered by the planet, rather than by some other process.
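The clustering argument rests on where in the planet’s seven-day orbit the flares occur. A simple way to illustrate it is to fold the flare times on the orbital period and count how many land near the transit phase, as in the sketch below; the flare epochs and window width are invented for the example, and only the roughly seven-day period comes from the article.

```python
# Illustration of the phase-clustering argument: fold flare times on the
# planet's ~7-day orbital period and count how many land near the transit
# phase. The flare epochs and window width are invented for illustration;
# only the orbital period (about seven Earth days) comes from the article.
import numpy as np

period_days = 7.0
window = 0.1  # assumed half-width of the "near transit" window, in orbital phase

# hypothetical flare epochs (days since an arbitrary reference transit)
flare_times = np.array([0.3, 3.5, 6.9, 7.2, 13.8, 14.1, 21.0, 24.6, 28.3])

phases = np.mod(flare_times / period_days, 1.0)       # phase 0 = mid-transit
near_transit = np.minimum(phases, 1.0 - phases) < window  # wrapped distance to phase 0

print(f"{near_transit.sum()} of {len(flare_times)} flares fall within "
      f"{window} in phase of transit")
```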

“This is the first time we have seen a planet influencing its host star, overturning our previous assumptions that stars behave independently,” she says.

Six times more flaring

The ASTRON team estimate that HIP 67522 b is exposed to around six times as many flares as it would be if it wasn’t triggering some of them itself. This is an unusually high level of radiation, and it may help explain recent observations from the James Webb Space Telescope (JWST) that show HIP 67522 b losing its atmosphere faster than expected.

“The new study estimates that the planet is cutting its own atmosphere’s life short by half,” Günther says. “It might lose its atmosphere in the next 400‒700 million years, compared to the 1 billion years it would otherwise.”

If such a phenomenon turns out to be common, he adds, “it could help explain why some young planets have inflated atmospheres or evolve into smaller, denser worlds. And it could inform how we see the demography of ‘adult planets’.”

Astrobiology implications

One big unanswered question, Günther says, is whether the slightly more distant planet HIP 67522 c shows similar interactions with its host. “Comparing the two would be incredible, not only doubling the sampling size, but revealing how distance from the star affects magnetic interactions.”

The ASTRON researchers say they also want to understand the magnetic field of HIP 67522 b itself. More broadly, they plan to look for other such systems, hoping to find out how common they really are.

For Günther, who was not directly involved in the present study, even a single example is already important. “I have worked on exoplanets and stellar flares myself for many years, mostly inspired by the astrobiology implications, but this discovery opens a whole new window into how stars and planets can influence each other,” he says. “It is a wake-up call to me that planets are not just passive passengers; they actively shape their environments,” he tells Physics World. “That has big implications for how we think about planetary atmospheres, habitability and the evolution of worlds across the galaxy.”

The post Too-close exoplanet triggers flares from host star appeared first on Physics World.

Stacked perovskite photodetector outperforms conventional silicon image sensors

29 juillet 2025 à 10:00

A new photodetector made up of vertically stacked perovskite-based light absorbers can produce real photographic images, potentially challenging the dominance of silicon-based technologies in this sector.  The detector is the first to exploit the concept of active optical filtering, and its developers at ETH Zurich and Empa in Switzerland say it could be used to produce highly sensitive, artefact-free images with much improved colour fidelity compared to conventional sensors.

The human eye uses individual cone cells in the retina to distinguish between red, green and blue (RGB) colours. Imaging devices such as those found in smartphones and digital cameras are designed to mimic this capability. However, because their silicon-based sensors absorb light over the entire visible spectrum, they must split the light into its RGB components. Usually, they do this using colour-filter arrays (CFAs) positioned on top of a monochrome light sensor. Then, once the device has collected the raw data, complex algorithms are used to reconstruct a colour image.

Although this approach is generally effective, it is far from ideal. One drawback is the presence of “de-mosaicing” artefacts from the reconstruction process. Another is large optical losses, as pixels for red light contain filters that block green and blue light, while those for green block red and blue, and so on. This means that each pixel in the image sensor only receives about a third of the incident light spectrum, greatly reducing the efficacy of light capture.

No need for filters

A team led by ETH Zurich materials scientist Maksym Kovalenko has now developed an alternative image sensor based on lead halide perovskites. These crystalline semiconductor materials have the chemical formula APbX3, where A is a formamidinium, methylammonium or caesium cation and X is a halide such as chlorine, bromine or iodine.

Crucially, the composition of these materials determines which wavelengths of light they will absorb. For example, when they contain more iodide ions, they absorb red light, while materials containing more bromide or chloride ions absorb green or blue light, respectively. Stacks of these materials can thus be used to absorb these wavelengths selectively without the need for filters, since each material layer remains transparent to the other colours.

Silicon vs perovskite: Perovskite image sensors can, in theory, capture three times as much light as conventional silicon image sensors of the same surface area while also providing three times higher spatial resolution. This is because their chemical composition determines how much they absorb or transmit different colours. (Courtesy: Sergii Yakunin / ETH Zurich and Empa)

The idea of vertically stacked detectors that filter each other optically has been discussed since at least 2017, including in early work from the ETH-Empa group, says team member Sergey Tsarev. “The benefits of doing this were clear, but the technical complexity discouraged many researchers,” Tsarev says.

To build their sensor, the ETH-Empa researchers had to fabricate around 30 functional thin-film layers on top of each other, without damaging prior layers. “It’s a long and often unrewarding process, especially in today’s fast-paced research environment where quicker results are often prioritized,” Tsarev explains. “This project took us nearly three years to complete, but we chose to pursue it because we believe challenging problems with long-term potential deserve our attention. They can push boundaries and bring meaningful innovation to society.”

The team’s measurements show that the new, stacked sensors reproduce RGB colours more precisely than conventional silicon technologies. The sensors also boast high external quantum efficiencies (the fraction of incident photons converted into collected charge carriers) of 50%, 47% and 53% for the red, green and blue channels respectively.

Machine vision and hyperspectral imaging

Kovalenko says that in purely technical terms, the most obvious application for this sensor would be in consumer-grade colour cameras. However, he says that this path to commercialization would be very difficult due to competition from highly optimized and cost-effective conventional technologies already on the market. “A more likely and exciting direction,” he tells Physics World, “is in machine vision and in so-called hyperspectral imaging – that is, imaging at wavelengths other than red, green and blue.”

Thin-film technology: One of the two perovskite-based sensor prototypes that the researchers made to demonstrate that the technology can be successfully miniaturized. (Courtesy: Empa / ETH Zurich)

Perovskite sensors are particularly interesting in this context, explains team member Sergii Yakunin, because the wavelength range they absorb over can be precisely controlled, making it possible to define a larger number of colour channels that are clearly separated from each other. In contrast, silicon’s broad absorption spectrum means that silicon-based hyperspectral imaging devices require numerous filters and complex computer algorithms.

“This is very impractical even with a relatively small number of colours,” Kovalenko says. “Hyperspectral image sensors based on perovskite could be used in medical analysis or in automated monitoring of agriculture and the environment, for example, or in other specialized imaging systems that can isolate and enhance particular wavelengths with high colour fidelity.”

The researchers now aim to devise a strategy for making their sensor compatible with standard CMOS technology. “This might include vertical interconnects and miniaturized detector pixels,” says Tsarev, “and would enable seamless transfer of our multilayer detector concept onto commercial silicon readout chips, bringing the technology closer to real-world applications and large-scale deployment.”

The study is detailed in Nature.

The post Stacked perovskite photodetector outperforms conventional silicon image sensors appeared first on Physics World.

New experiment uses levitated magnets to search for dark matter

28 juillet 2025 à 10:00
Dark matter search: Team co-leader Christopher Tunnell is an associate professor of physics and astronomy at Rice University. (Courtesy: Jeff Fitlow/Rice)

A tiny neodymium particle suspended inside a superconducting trap could become a powerful new platform in the search for dark matter, say physicists at Rice University in the US and Leiden University in the Netherlands. Although they have not detected any dark matter signals yet, they note that their experiment marks the first time that magnetic levitation technology has been tested in this context, making it an important proof of concept.

“By showing what current technology can already achieve, we open the door to a promising experimental path to solving one of the biggest mysteries in modern physics,” says postdoctoral researcher Dorian Amaral, who co-led the project with his Rice colleague Christopher Tunnell, as well as Dennis Uitenbroek and Tjerk Oosterkamp in Leiden.

Dark matter is thought to make up most of the matter in our universe. However, since it has only ever been observed through its gravitational effects, we know very little about it, including whether it interacts (either with itself or with other particles) via forces other than gravity. Other fundamental properties, such as its mass and spin, are equally mysterious. Indeed, various theories predict dark matter particle masses that range from around 10−19 eV/c2 to a few times the mass of our own Sun – a staggering 90 orders of magnitude.

The B‒L model

The theory that predicts masses at the lower end of this range is known as the ultralight dark matter (ULDM) model. Some popular ULDM candidates include the QCD axion, axion-like particles and vector particles.

In their present work, Amaral and colleagues concentrated on vector particles. This type of dark-matter particle, they explain, can “communicate”, or interact, via charges that are different from those found in ordinary electromagnetism. Their goal, therefore, was to detect the forces arising from these so-called dark interactions.

To do this, the team focused on interactions that differ based on the baryon (B) and lepton (L) numbers of a particle. Several experiments, including fifth-force detectors such as MICROSCOPE and Eöt-Wash as well as gravitational wave interferometers such as LIGO/Virgo and KAGRA, likewise seek to explore interactions within this so-called B‒L model. Other platforms, such as torsion balances, optomechanical cavities and atomic interferometers, also show promise for making such measurements.

Incredibly sensitive setup

The Rice-Leiden team, however, chose to explore an alternative that involves levitating magnets with superconductors via the Meissner effect. “Levitated magnets are excellent force and acceleration sensors, making them ideal for detecting the minuscule signatures expected from ULDM,” Amaral says.

Such detectors also have a further advantage, he adds. Because they operate at ultralow temperatures, they are much less affected by thermal noise than is the case for detectors that rely on optical or electrical levitation. This allows them to levitate much larger and heavier objects, making them more sensitive to interactions such as those expected from B‒L model dark matter.

In their experiment, which is called POLONAISE (Probing Oscillations using Levitated Objects for Novel Accelerometry In Searches of Exotic physics), the Rice and Leiden physicists levitated a tiny magnet composed of three neodymium-iron-boron cubes inside a superconducting trap cooled to nearly absolute zero. “This setup was incredibly sensitive, enabling us to detect incredibly small motions caused by tiny external forces,” Amaral explains. “If ultralight dark matter exists, it would behave like a wave passing through the Earth, gently tugging on the magnet in a predictable, wave-like pattern. Detecting such a motion would be a direct signature of this elusive form of dark matter.”
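For ultralight candidates, the oscillation frequency of that wave-like tug is set by the particle’s mass through the Compton relation f = mc²/h, so each candidate mass corresponds to a specific frequency at which the magnet would be driven. The snippet below evaluates that relation for an arbitrarily chosen mass; it is a generic illustration, not a parameter from the POLONAISE analysis.

```python
# The wave-like dark-matter signal oscillates at the Compton frequency
# f = m c^2 / h, so the candidate particle mass sets the frequency at which
# the levitated magnet would be driven. The example mass below is arbitrary,
# not a value from the POLONAISE analysis.
h = 6.62607015e-34    # Planck constant, J s
eV = 1.602176634e-19  # joules per electronvolt

def compton_frequency_hz(mass_ev: float) -> float:
    """Oscillation frequency of an ultralight dark-matter field of mass m (in eV/c^2)."""
    return mass_ev * eV / h

print(f"{compton_frequency_hz(1e-13):.1f} Hz")  # ~24 Hz for a 1e-13 eV/c^2 candidate
```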

An unconventional idea

The Rice-Leiden collaboration began after Oosterkamp and Tunnell met at a climate protest and got to chatting about their scientific work. After over a decade working on some of the world’s most sensitive dark matter experiments – with no clear detections to show for it – Tunnell was eager to return to the drawing board in terms of detector technologies. Oosterkamp, for his part, was exploring how quantum technologies could be applied to fundamental questions in physics. This shared interest in cross-disciplinary thinking, Amaral remembers, led them to the unconventional idea at the heart of their experiment. “From there, we spent a year bridging experimental and theoretical worlds. It was a leap outside our comfort zones – but one that paid off,” he says.

“Although we did not detect dark matter, our result is still valuable – it tells us what dark matter is not,” he adds. “It’s like searching a room and not finding the object you are looking for: now you know to look somewhere else.”

The team’s findings, which are detailed in Physical Review Letters, should help physicists refine theoretical models of dark matter, Amaral tells Physics World. “And on the experimental side, our work advises the key improvements needed to turn magnetic levitation into a world-leading tool for dark matter detection.”

The post New experiment uses levitated magnets to search for dark matter appeared first on Physics World.

Squid use Bragg reflectors in their skin to change colour

25 juillet 2025 à 10:00

Cephalopods such as squid and octopus can rapidly change the colour of their skin, but the way they do it has been something of a mystery – until now. Using a microscopy technique known as holotomography, scientists in the US discovered that the tuneable optical properties of squid skin stem from winding columns of platelets in certain cells. These columns have sinusoidal-wave refractive index profiles, and they function as Bragg reflectors, able to selectively transmit and reflect light at specific wavelengths.

“Our new result not only helps advance our understanding of structural colouration in cephalopod skin cells, it also provides new insights into how such gradient refractive index distributions can be leveraged to manipulate light in both biological and engineered systems,” says Alon Gorodetsky of the University of California, Irvine, who co-led the study with then-PhD student Georgii Bogdanov.

Stacked and winding columns of platelets

In their study, Gorodetsky, Bogdanov and colleagues including Roger Hanlon of the Marine Biological Laboratory (MBL) in Woods Hole, Massachusetts, examined the iridescent cells (iridophores) and cell clusters (splotches) responsible for producing colours in longfin inshore squids (Doryteuthis pealeii). To do this, they used holotomography, which creates three-dimensional images of individual cells and cell clusters by measuring subtle changes in a light beam as it passes through a sample of tissue. From this, they were able to map out changes in the sample’s refractive index across different structures.

The holotomography images revealed that the iridophores comprise stacked and winding columns of platelets made from a protein known as reflectin, which has a high refractive index, alternating with a low-refractive-index extracellular space. These Bragg-reflector-like structures are what allow tissue in the squid’s mantle to switch from nearly transparent to vibrantly coloured and back again.
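For a periodic stack of alternating high- and low-index layers, the strongest normal-incidence reflection sits near λ ≈ 2(n_H d_H + n_L d_L), which is how platelet thickness and spacing set the reflected colour. The sketch below evaluates this first-order Bragg condition for illustrative layer parameters; the thicknesses and refractive indices are assumptions, not values measured for squid iridophores.

```python
# First-order, normal-incidence Bragg condition for a periodic two-layer stack:
# lambda_peak ~= 2 * (n_high * d_high + n_low * d_low).
# The layer thicknesses and refractive indices below are illustrative
# assumptions, not the measured values for reflectin platelets.
def bragg_peak_nm(n_high, d_high_nm, n_low, d_low_nm):
    """Approximate peak reflected wavelength of a periodic Bragg stack (nm)."""
    return 2 * (n_high * d_high_nm + n_low * d_low_nm)

# e.g. ~100 nm higher-index platelets alternating with ~120 nm lower-index gaps
print(bragg_peak_nm(1.55, 100, 1.35, 120))  # ~634 nm: reflection in the red
```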

Other natural Bragg reflectors

Squids aren’t the only animals that use Bragg reflectors for structural colouration, Gorodetsky notes. The scales of Morpho butterflies, for example, get their distinctive blue colouration from nanostructured Bragg gratings made from alternating high-refractive-index lamellae and low-refractive-index air gaps. Another example is the panther chameleon. The skin cells of this famously colour-changing reptile contain reconfigurable photonic lattices consisting of high-refractive-index nanocrystals within a low-refractive-index cytoplasm. These structures allow the animal to regulate its temperature as well as change its colour.

Yet despite these previous findings, and extensive research on cephalopod colouration, Gorodetsky says the question of how squid splotch iridophores can change from transparent to colourful, while maintaining their spectral purity, had not previously been studied in such depth. “In particular, the cells’ morphologies and refractive index distributions in three dimensions had not been previously resolved,” he explains. “Overcoming the existing knowledge gap required the development and application of combined experimental and computational approaches, including advanced imaging, refractive index mapping and optical modelling.”

Extending to infrared wavelengths

After using advanced computational modelling to capture the optical properties of the squid cells, the researchers, who report their work in Science, built on this result by designing artificial nanomaterials inspired by the natural structures they discovered. While the squid iridophores only change their visible appearance in response to neurophysiological stimuli, the researchers’ elastomeric composite materials (which contain both nanocolumnar metal oxide Bragg reflectors and nanostructured metal films) also change at infrared wavelengths.

Composite materials like the ones the UC Irvine-MBL team developed could have applications in adaptive camouflage or fabrics that adjust to hot and cold temperatures. They might also be used to improve multispectral displays, sensors, lasers, fibre optics and photovoltaics, all of which exploit multilayered Bragg reflectors with sinusoidal-wave refractive index profiles, says Gorodetsky.

The researchers now plan to further explore how gradient refractive index distributions contribute to light manipulation in other biological systems. “We also hope to refine our engineered multispectral composite materials to enhance their performance for specific practical applications, such as advanced camouflage and other wearable optical technologies,” Gorodetsky tells Physics World.

The post Squid use Bragg reflectors in their skin to change colour appeared first on Physics World.

Earth-shaking waves from Greenland mega-tsunamis imaged for the first time

24 juillet 2025 à 10:00

In September 2023, seismic detectors around the world began picking up a mysterious signal. Something – it wasn’t clear what – was causing the entire Earth to shake every 90 seconds. After a period of puzzlement, and a second, similar signal in October, theoretical studies proposed an explanation. The tremors, these studies suggested, were caused by standing waves, or seiches, that formed after landslides triggered huge tsunamis in a narrow waterway off the coast of Greenland.

Engineers at the University of Oxford, UK, have now confirmed this hypothesis. Using satellite altimetry data from the Surface Water Ocean Topography (SWOT) mission, the team constructed the first images of the seiches, demonstrating that they did indeed originate from landslide-triggered mega-tsunamis in Dickson Fjord, Greenland. While events of this magnitude are rare, the team say that climate change is likely to increase their frequency, making continued investments in advanced satellite missions essential for monitoring and responding to them.

An unprecedented view into the fjord

Unlike other altimeters, SWOT provides two-dimensional measurements of sea surface height down to the centimetre across the entire globe, including hard-to-reach areas like fjords, rivers and estuaries. For team co-leader Thomas Monahan, who studied the seiches as part of his PhD research at Oxford, this capability was crucial. “It gave us an unprecedented view into Dickson Fjord during the seiche events in September and October 2023,” he says. “By capturing such high-resolution images of sea-surface height at different time points following the two tsunamis, we could estimate how the water surface tilted during the wave – in other words, gauge the ‘slope’ of the seiche.”

The maps revealed clear cross-channel slopes with height differences of up to two metres. Importantly, these slopes pointed in opposite directions, showing that water was moving backwards as well as forwards across the channel. But that wasn’t the end of the investigation. “Finding the ‘seiche in the fjord’ was exciting but it turned out to be the easy part,” Monahan says. “The real challenge was then proving that what we had observed was indeed a seiche and not something else.”

Enough to shake the Earth for days

To do this, the Oxford engineers approached the problem like a game of Cluedo, ruling out other oceanographic “suspects” one by one. They also connected the slope measurements with ground-based seismic data that captured how the Earth’s crust moved as the wave passed through it. “By combining these two very different kinds of observations, we were able to estimate the size of the seiches and their characteristics even during periods in which the satellite was not overhead,” Monahan says.

Although no-one was present in Dickson Fjord during the seiches, the Oxford team’s estimates suggest that the event would have been terrifying to witness. Based on probabilistic (Bayesian) machine-learning analyses, the team say that the September seiche was initially 7.9 m tall, while the October one measured about 3.9 m.

“That amount of water sloshing back and forth over a 10-km-section of fjord walls creates an enormous force,” Monahan says. The September seiche, he adds, produced a force equivalent to 14 Saturn V rockets launching at once, around 500 GN. “[It] was literally enough to shake the entire Earth for days,” he says.

What made these events so powerful was the geometry of the fjord, Monahan says. “A sharp bend near its outlet effectively trapped the seiches, allowing them to reverberate for days,” he explains. “Indeed, the repeated impacts of water against fjord walls acted like a hammer striking the Earth’s crust, creating long-period seismic waves that propagated around the globe and that were strong enough to be detected worldwide.”

Risk of tsunamigenic landslides will likely grow

As for what caused the seiches, Monahan suggests that climate change may have been a contributing factor. As glaciers thin, they undergo a process called de-buttressing, wherein the loss of ice removes support from the surrounding rock, leading it to collapse. It was likely this de-buttressing that caused two enormous landslides in Dickson Fjord within a month, and continued global warming will only increase the frequency of such events. “As these events become more common, especially in steep, ice-covered terrain, the risk of tsunamigenic landslides will likely grow,” Monahan says.

The researchers say they would now like to better understand how the seiches dissipated afterwards. “Although previous work successfully simulated how the megatsunamis stabilized into seiches, how they decayed is not well understood,” says Monahan. “Future research could make use of SWOT satellite observations as a benchmark to better constrain the processes behind dissipation.”

The findings, which are detailed in Nature Communications, show how top-of-the-line satellites like SWOT can fill these observational gaps, he adds. To fully leverage these capabilities, however, researchers need better processing algorithms tailored to complex fjord environments and new techniques for detecting and interpreting anomalous signals within these vast datasets. “We think scientific machine learning will be extremely useful here,” he says.

The post Earth-shaking waves from Greenland mega-tsunamis imaged for the first time appeared first on Physics World.

Scientists image excitons in carbon nanotubes for the first time

23 juillet 2025 à 10:00

Researchers in Japan have directly visualized the formation and evolution of quasiparticles known as excitons in carbon nanotubes for the first time. The work could aid the development of nanotube-based nanoelectronic and nanophotonic devices.

Carbon nanotubes (CNTs) are rolled-up hexagonal lattices of carbon just one atom thick. When exposed to light, they generate excitons, which are bound pairs of negatively-charged electrons and positively-charged “holes”. The behaviour of these excitons governs processes such as light absorption, emission and charge carrier transport that are crucial for CNT-based devices. However, because excitons are confined to extremely small regions in space and exist for only tens of femtoseconds (fs) before annihilating, they are very difficult to observe directly with conventional imaging techniques.

Ultrafast and highly sensitive

In the new work, a team led by Jun Nishida and Takashi Kumagai at the Institute for Molecular Science (IMS)/SOKENDAI, together with colleagues at the University of Tokyo and RIKEN, developed a technique for imaging excitons in CNTs. Known as ultrafast infrared scattering-type scanning near-field optical microscopy (IR s-SNOM), it first illuminates the CNTs with a short visible laser pulse to create excitons and then uses a time-delayed mid-infrared pulse to probe how these excitons behave.

“By scanning a sharp gold-coated atomic force microscope (AFM) tip across the surface and detecting the scattered infrared signal with high sensitivity, we can measure local changes in the optical response of the CNTs with 130-nm spatial resolution and around 150-fs precision,” explains Kumagai. “These changes correspond to where and how excitons are formed and annihilated.”

According to the researchers, the main challenge was to develop a measurement that was ultrafast and highly sensitive while also having a spatial resolution high enough to detect a signal from as few as around 10 excitons. “This required not only technical innovations in the pump-probe scheme in IR s-SNOM, but also a theoretical framework to interpret the near-field response from such small systems,” Kumagai says.

The measurements reveal that local strain and interactions between CNTs (especially in complex, bundled nanotube structures) govern how excitons are created and annihilated. Being able to visualize this behaviour in real time and real space makes the new technique a “powerful platform” for investigating ultrafast quantum dynamics at the nanoscale, Kumagai says. It also has applications in device engineering: “The ability to map where excitons are created and how they move and decay in real devices could lead to better design of CNT-based photonic and electronic systems, such as quantum light sources, photodetectors, or energy-harvesting materials,” Kumagai tells Physics World.

Extending to other low-dimensional systems

Kumagai thinks the team’s approach could be extended to other low-dimensional systems, enabling insights into local dynamics that have previously been inaccessible. Indeed, the researchers now plan to apply their technique to other 1D and 2D materials (such as semiconducting nanowires or transition metal dichalcogenides) and to explore how external stimuli like strain, doping, or electric fields affect local exciton dynamics.

“We are also working on enhancing the spatial resolution and sensitivity further, possibly toward single-exciton detection,” Kumagai says. “Ultimately, we aim to combine this capability with in operando device measurements to directly observe nanoscale exciton behaviour under realistic operating conditions.”

The technique is detailed in Science Advances.

The post Scientists image excitons in carbon nanotubes for the first time appeared first on Physics World.

How to keep the second law of thermodynamics from limiting clock precision

16 juillet 2025 à 10:00

The second law of thermodynamics demands that if we want to make a clock more precise – thereby reducing the disorder, or entropy, in the system – we must add energy to it. Any increase in energy, however, necessarily increases the amount of waste heat the clock dissipates to its surroundings. Hence, the more precise the clock, the more the entropy of the universe increases – and the tighter the ultimate limits on the clock’s precision become.

This constraint might sound unavoidable – but is it? According to physicists at TU Wien in Austria, Chalmers University of Technology, Sweden, and the University of Malta, it is in fact possible to turn this seemingly inevitable consequence on its head for certain carefully designed quantum systems. The result: an exponential increase in clock accuracy without a corresponding increase in energy.

Solving a timekeeping conundrum

Accurate timekeeping is of great practical importance in areas ranging from navigation to communication and computation. Recent technological advancements have brought clocks to astonishing levels of precision. However, theorist Florian Meier of TU Wien notes that these gains have come at a cost.

“It turns out that the more precisely one wants to keep time, the more energy the clock requires to run to suppress thermal noise and other fluctuations that negatively affect the clock,” says Meier, who co-led the new study with his TU Wien colleague Marcus Huber and a Chalmers experimentalist, Simone Gasparinetti. “In many classical examples, the clock’s precision is linearly related to the energy the clock dissipates, meaning a clock twice as accurate would produce twice the (entropy) dissipation.”

Clock’s precision can grow exponentially faster than the entropy

The key to circumventing this constraint, Meier continues, lies in one of the knottiest aspects of quantum theory: the role of observation. For a clock to tell the time, he explains, its ticks must be continually observed. It is this observation process that causes the increase in entropy. Logically, therefore, making fewer observations ought to reduce the degree of increase – and that’s exactly what the team showed.

“In our new work, we found that with quantum systems, if designed in the right way, this dissipation can be circumvented, ultimately allowing exponentially higher clock precision with the same dissipation,” Meier says. “We developed a model that, instead of using a classical clock hand to show the time, makes use of a quantum particle coherently travelling around a ring structure without being observed. Only once it completes a full revolution around the ring is the particle measured, creating an observable ‘tick’ of the clock.”

The clock’s precision can thus be improved by letting the particle travel through a longer ring, Meier adds. “This would not create more entropy because the particle is still only measured once every cycle,” he tells Physics World. “The mathematics here is of course much more involved, but what emerges is that, in the quantum case, the clock’s precision can grow exponentially faster than the entropy. In the classical analogue, in contrast, this relationship is linear.”
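As a rough way of seeing the contrast (a schematic summary, not the paper’s exact result), the two scalings can be written as

N_classical ∝ ΔS_tick        N_quantum ∝ exp(ΔS_tick)

where N is a measure of the clock’s precision – roughly, how many ticks it delivers before drifting by a full tick – and ΔS_tick is the entropy dissipated per tick.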

“Within reach of our technology”

Although such a clock has not yet been realized in the laboratory, Gasparinetti says it could be made by arranging many superconducting quantum bits in a line.

“My group is an experimental group that studies superconducting circuits, and we have been working towards implementing autonomous quantum clocks in our platform,” he says. “We have expertise in all the building blocks that are needed to build the type of clock proposed in this work: generating quasithermal fields in microwave waveguides and coupling them to superconducting qubits; detecting single microwave photons (the clock ‘ticks’); and building arrays of superconducting resonators that could be used to form the ‘ring’ that gives the proposed clock its exponential boost.”

While Gasparinetti acknowledges that demonstrating this advantage experimentally will be a challenge, he isn’t daunted. “We believe it is within reach of our technology,” he says.

Solving a future problem

At present, dissipation is not the main factor limiting the performance of state-of-the-art clocks. As clock technology continues to advance, however, Meier says we are approaching a point where dissipation could become more significant. “A useful analogy here is in classical computing,” he explains. “For many years, heat dissipation was considered negligible, but in today’s data centres that process vast amounts of information, dissipation has become a major practical concern.

“In a similar way, we anticipate that for certain applications of high-precision clocks, dissipation will eventually impose limits,” he adds. “Our clock highlights some fundamental physical principles that can help minimize such dissipation when that time comes.”

The clock design is detailed in Nature Physics.

The post How to keep the second law of thermodynamics from limiting clock precision appeared first on Physics World.

High-quality muon beam holds promise for future collider

15 July 2025 at 10:00

Researchers in Japan have accelerated muons into the most precise, high-intensity beam to date, reaching energies as high as 100 keV. The achievement could enable next-generation experiments such as better measurements of the muon’s anomalous magnetic moment – measurements that could, in turn, point to new physics beyond the Standard Model.

Muons are sub-atomic particles similar to electrons, but around 200 times heavier. Thanks to this extra mass, muons radiate less energy than electrons as they travel in circles – meaning that a muon accelerator could, in principle, produce more energetic collisions than a conventional electron machine for a given energy input.
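To put a rough number on this (a back-of-the-envelope estimate, not a figure from the J-PARC work): the synchrotron power radiated by a particle of mass m and energy E on a circular orbit of radius ρ scales as

P ∝ E⁴ / (m⁴ ρ²)

so at the same energy and bending radius a muon, being roughly 207 times heavier than an electron, radiates about (1/207)⁴ ≈ 5 × 10⁻¹⁰ of the power – some two billion times less.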

However, working with muons comes with challenges. Although scientists can produce high-intensity muon beams from the decay of other sub-atomic particles known as pions, these beams must then be cooled to make the velocities of their constituent particles more uniform before they can be accelerated to collider speeds. And while this cooling process is relatively straightforward for electrons, for muons it is greatly complicated by the particles’ short lifetime of just 2.2 μs. Indeed, traditional cooling techniques (such as synchrotron radiation cooling, laser cooling, stochastic cooling and electron cooling) simply do not work.
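To get a feel for how tight that timing budget is, here is a minimal Python sketch (the handling times are illustrative assumptions, not MUSE parameters) of the fraction of muons surviving a given cooling or transport time, using the roughly 2.2 μs lifetime at rest:

import math

MUON_LIFETIME_S = 2.2e-6  # muon lifetime at rest, roughly 2.2 microseconds

def surviving_fraction(handling_time_s, gamma=1.0):
    """Fraction of muons remaining after handling_time_s.

    gamma is the Lorentz factor: time dilation stretches the lab-frame
    lifetime for fast muons, but for the near-thermal muons produced in
    the cooling step gamma is essentially 1.
    """
    return math.exp(-handling_time_s / (gamma * MUON_LIFETIME_S))

# Illustrative (assumed) handling times, not values from the experiment
for t in (1e-6, 5e-6, 10e-6):
    print(f"{t * 1e6:4.0f} us -> {surviving_fraction(t):.1%} of muons survive")

Even a few microseconds of handling costs most of the beam, which is why the cooling and re-acceleration steps have to be fast.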

Another muon cooling and acceleration technique

To overcome this problem, researchers at the MUon Science Facility (MUSE) in the Japan Proton Accelerator Research Complex (J-PARC) have been developing an alternative muon cooling and acceleration technique. The MUSE method involves cooling positively-charged muons, or antimuons, down to thermal energies of 25 meV and then accelerating them using radio-frequency (rf) cavities.

In the new work, a team led by particle and nuclear physicist Shusei Kamioka directed antimuons (μ⁺) into a target made from a silica aerogel. This material has a unique property: a muon that stops inside it gets re-emitted as a muonium atom (an exotic atom consisting of an antimuon and an electron) with very low thermal energy. The researchers then fired a laser beam at these low-energy muonium atoms to remove their electrons, thereby producing antimuons with much lower – and, crucially, far more uniform – velocities than was the case for the starting beam. Finally, they guided the slowed particles into an rf cavity, where an electric field accelerated them to an energy of 100 keV.

Towards a muon accelerator?

The final beam has an intensity of 2 × 10⁻³ μ⁺ per pulse, and a measured emittance that is much lower (by a factor of 2.0 × 10² in the horizontal direction and 4.1 × 10² vertically) than that of the starting beam. This represents a two-orders-of-magnitude reduction in the spread of positions and momenta in the beam and makes accelerating the muons more efficient, says Kamioka.
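As a quick sanity check on those figures (a sketch only; the reduction factors are the ones quoted above, and treating the two transverse planes as independent is an assumption), the combined shrinkage of the transverse phase space is simply their product:

# Emittance reduction factors quoted above for the cooled, re-accelerated beam
horizontal_factor = 2.0e2  # reduction in horizontal emittance
vertical_factor = 4.1e2    # reduction in vertical emittance

# Each transverse plane shrinks by roughly two orders of magnitude;
# assuming the planes are uncoupled, the 4D transverse phase-space
# volume shrinks by the product of the two factors.
print(f"horizontal: {horizontal_factor:.1e}x smaller")
print(f"vertical:   {vertical_factor:.1e}x smaller")
print(f"combined 4D phase space: ~{horizontal_factor * vertical_factor:.1e}x smaller")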

According to the researchers, who report their work in Physical Review Letters, these improvements are important steps on the road to a muon collider. To make further progress, however, they will need to increase the beam’s energy and intensity even further, which they acknowledge will be challenging.

“We are now preparing for the next acceleration test at the new experimental area dedicated to muon acceleration,” Kamioka tells Physics World. “A 4 MeV acceleration with 1000 muon/s is planned for 2027 and a 212 MeV acceleration with 10⁵ muon/s is planned for 2029.”

In total, the MUSE team expects that various improvements will produce a factor of 10⁵–10⁶ increase in the muon rate, which could be enough to enable applications such as the muon g−2/EDM experiment at J-PARC, he adds.

The post High-quality muon beam holds promise for future collider appeared first on Physics World.
