
‘Phononic shield’ protects mantis shrimp from its own shock waves

4 March 2025 at 15:59

When a mantis shrimp uses shock waves to strike and kill its prey, how does it prevent those shock waves from damaging its own tissues? Researchers at Northwestern University in the US have answered this question by identifying a structure within the shrimp that filters out harmful frequencies. Their findings, which they obtained by using ultrasonic techniques to investigate surface and bulk wave propagation in the shrimp’s dactyl club, could lead to novel advanced protective materials for military and civilian applications.

Dactyl clubs are hammer-like structures located on each side of a mantis shrimp’s body. They store energy in elastic structures similar to springs that are latched in place by tendons. When the shrimp contracts its muscles, the latch releases, unleashing the stored energy and propelling the club forward with a peak force of up to 1500 N.

This huge force (relative to the animal’s size) creates stress waves in both the shrimp’s target – typically a hard-shelled animal such as a crab or mollusc – and the dactyl club itself, explains biomechanical engineer Horacio Dante Espinosa, who led the Northwestern research effort. The club’s punch also creates bubbles that rapidly collapse to produce shockwaves in the megahertz range. “The collapse of these bubbles (a process known as cavitation collapse), which takes place in just nanoseconds, releases intense bursts of energy that travel through the target and shrimp’s club,” he explains. “This secondary shockwave effect makes the shrimp’s strike even more devastating.”

Protective phononic armour

So how do the shrimp’s own soft tissues escape damage? To answer this question, Espinosa and colleagues studied the animal’s armour using transient grating spectroscopy (TGS) and asynchronous optical sampling (ASOPS). These ultrasonic techniques respectively analyse how stress waves propagate through a material and characterize the material’s microstructure. In this work, Espinosa and colleagues used them to provide high-resolution, frequency-dependent wave propagation characteristics that previous studies had not investigated experimentally.

The team identified three distinct regions in the shrimp’s dactyl club. The outermost layer consists of a hard hydroxyapatite coating approximately 70 μm thick, which is durable and resists damage. Beneath this, an approximately 500 μm-thick layer of mineralized chitin fibres arranged in a herringbone pattern enhances the club’s fracture resistance. Deeper still, Espinosa explains, is a region that features twisted fibre bundles organized in a corkscrew-like arrangement known as a Bouligand structure. Within this structure, each successive layer is rotated relative to its neighbours, giving it a unique and crucial role in controlling how stress waves propagate through the shrimp.

“Our key finding was the existence of phononic bandgaps (through which waves within a specific frequency range cannot travel) in the Bouligand structure,” Espinosa explains. “These bandgaps filter out harmful stress waves so that they do not propagate back into the shrimp’s club and body. They thus preserve the club’s integrity and protect soft tissue in the animal’s appendage.”

The team also employed finite element simulations incorporating so-called Bloch-Floquet analyses and graded mechanical properties to understand the phononic bandgap effects. The most surprising result, Espinosa tells Physics World, was the formation of a flat branch in the 450 to 480 MHz range, which corresponds to the frequencies produced by bubble collapse during club impact.
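
The existence of such a bandgap can be illustrated with a one-dimensional Bloch-Floquet calculation for a periodically layered medium, a crude stand-in for the Bouligand stack. The sketch below uses the standard transfer-matrix dispersion relation for a two-layer unit cell; all material parameters are illustrative assumptions, not values from the study.

```python
import numpy as np

# 1D two-layer unit cell. Bloch-Floquet dispersion from the transfer matrix:
#   cos(q*a) = cos(k1*d1)*cos(k2*d2)
#              - 0.5*(Z1/Z2 + Z2/Z1)*sin(k1*d1)*sin(k2*d2)
# Frequencies where |cos(q*a)| > 1 admit no real Bloch wavevector q: a bandgap.
rho1, c1, d1 = 3000.0, 6000.0, 20e-6   # "mineralized" layer (illustrative)
rho2, c2, d2 = 1200.0, 2500.0, 20e-6   # "organic" layer (illustrative)
Z1, Z2 = rho1 * c1, rho2 * c2          # acoustic impedances

f = np.linspace(1e6, 1e9, 200_000)     # 1 MHz to 1 GHz
w = 2 * np.pi * f
rhs = (np.cos(w * d1 / c1) * np.cos(w * d2 / c2)
       - 0.5 * (Z1 / Z2 + Z2 / Z1) * np.sin(w * d1 / c1) * np.sin(w * d2 / c2))
in_gap = np.abs(rhs) > 1.0             # no propagating Bloch wave here

edges = np.flatnonzero(np.diff(in_gap.astype(int))) + 1
for lo, hi in zip(edges[::2], edges[1::2]):
    print(f"bandgap: {f[lo]/1e6:6.1f} - {f[hi]/1e6:6.1f} MHz")
```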

Evolution and its applications

For Espinosa and his colleagues, a key goal of their research is to understand how evolution leads to natural composite materials with unique photonic, mechanical and thermal properties. In particular, they seek to uncover how hierarchical structures in natural materials and the chemistry of their constituents produce emergent mechanical properties. “The mantis shrimp’s dactyl club is an example of how evolution leads to materials capable of resisting extreme conditions,” Espinosa says. “In this case, it is the violent impacts the animal uses for predation or protection.”

The properties of the natural “phononic shield” unearthed in this work might inspire advanced protective materials for both military and civilian applications, he says. Examples could include the design of helmets, personnel armour, and packaging for electronics and other sensitive devices.

In this study, which is described in Science, the researchers analysed two-dimensional simulations of wave behaviour. Future research, they say, should focus on more complex three-dimensional simulations to fully capture how the club’s structure interacts with shock waves. “Designing aquatic experiments with state-of-the-art instrumentation would also allow us to investigate how phononic properties function in submerged underwater conditions,” says Espinosa.

The team would also like to use biomimetics to make synthetic metamaterials based on the insights gleaned from this work.


Black hole’s shadow changes from one year to the next

28 February 2025 at 10:30

New statistical analyses of the supermassive black hole M87* may explain changes observed since it was first imaged. The findings, from the same Event Horizon Telescope (EHT) that produced the iconic first image of a black hole’s shadow, confirm that M87*’s rotational axis points away from Earth. The analyses also indicate that turbulence within the rotating envelope of gas that surrounds the black hole – the accretion disc – plays a role in changing its appearance.

The first image of M87*’s shadow was based on observations made in 2017, though the image itself was not released until 2019. It resembles a fiery doughnut, with the shadow appearing as a dark region around three times the diameter of the black hole’s event horizon (the boundary beyond which not even light can escape its gravitational pull) and the accretion disc forming a bright ring around it.

Because the shadow is caused by the gravitational bending and capture of light at the event horizon, its size and shape can be used to infer the black hole’s mass. The larger the shadow, the higher the mass. In 2019, the EHT team calculated that M87* has a mass of about 6.5 billion times that of our Sun, in line with previous theoretical predictions. Team members also determined that the event horizon has an angular radius of 3.8 micro-arcseconds; that the black hole is rotating in a clockwise direction; and that its spin points away from us.

Hot and violent region

The latest analysis focuses less on the shadow and more on the bright ring outside it. As matter accelerates, it produces huge amounts of light. In the vicinity of the black hole, this acceleration occurs as matter is sucked into the black hole, but it also arises when matter is blasted out in jets. The way these jets form is still not fully understood, but some astrophysicists think magnetic fields could be responsible. Indeed, in 2021, when researchers working on the EHT analysed the polarization of light emitted from the bright region, they concluded that only the presence of a strongly magnetized gas could explain their observations.

The team has now combined an analysis of EHT observations made in 2018 with a re-analysis of the 2017 results using a Bayesian approach. This statistical technique, applied for the first time in this context, treats the two sets of observations as independent experiments. This is possible because the event horizon of M87* is about a light-day across, so the accretion disc should present a new version of itself every few days, explains team member Avery Broderick from the Perimeter Institute and the University of Waterloo, both in Canada. In more technical language, the gap between observations exceeds the correlation timescale of the turbulent environment surrounding the black hole.
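
A quick back-of-the-envelope check of that light-day figure, using the 6.5-billion-solar-mass estimate from 2019 (a rough sketch with standard constants, not the collaboration's analysis):

```python
G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30   # SI units
M = 6.5e9 * M_sun                            # EHT's 2019 mass estimate for M87*

r_s = 2 * G * M / c**2                       # Schwarzschild radius, m
crossing_days = 2 * r_s / c / 86400          # light-crossing time of the diameter

print(f"Schwarzschild radius: {r_s:.2e} m")
print(f"diameter light-crossing time: {crossing_days:.1f} days")
```

This gives a diameter of roughly 1.5 light-days, so statistically independent snapshots of the turbulent disc do indeed arrive on a timescale of days.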

New result reinforces previous interpretations

The part of the ring that appears brightest to us stems from the relativistic movement of material in a clockwise direction as seen from Earth. In the original 2017 observations, this bright region was further “south” on the image than the EHT team expected. However, when members of the team compared these observations with those from 2018, they found that the region reverted to its mean position. This result corroborated computer simulations of the general relativistic magnetohydrodynamics of the turbulent environment surrounding the black hole.

Even in the 2018 observations, though, the ring remains brightest at the bottom of the image. According to team member Bidisha Bandyopadhyay, a postdoctoral researcher at the Universidad de Concepción in Chile, this finding provides substantial information about the black hole’s spin and reinforces the EHT team’s previous interpretation of its orientation: the black hole’s rotational axis is pointing away from Earth. The analyses also reveal that the turbulence within the accretion disc can help explain the differences observed in the bright region from one year to the next.

Very long baseline interferometry

To observe M87* in detail, the EHT team needed an instrument with an angular resolution comparable to the black hole’s event horizon, which is around tens of micro-arcseconds across. Achieving this resolution with an ordinary telescope would require a dish the size of the Earth, which is clearly not possible. Instead, the EHT uses very long baseline interferometry, which involves detecting radio signals from an astronomical source using a network of individual radio telescopes and telescopic arrays spread across the globe.

The facilities contributing to this work were the Atacama Large Millimeter Array (ALMA) and the Atacama Pathfinder Experiment, both in Chile; the South Pole Telescope (SPT) in Antarctica; the IRAM 30-metre telescope and NOEMA Observatory in Spain; the James Clerk Maxwell Telescope (JCMT) and the Submillimeter Array (SMA) on Mauna Kea, Hawai’i, US; the Large Millimeter Telescope (LMT) in Mexico; the Kitt Peak Telescope in Arizona, US; and the Greenland Telescope (GLT). The distance between these telescopes – the baseline – ranges from 160 m to 10 700 km. Data were correlated at the Max-Planck-Institut für Radioastronomie (MPIfR) in Germany and the MIT Haystack Observatory in the US.

“This work demonstrates the power of multi-epoch analysis at horizon scale, providing a new statistical approach to studying the dynamical behaviour of black hole systems,” says EHT team member Hung-Yi Pu from National Taiwan Normal University. “The methodology we employed opens the door to deeper investigations of black hole accretion and variability, offering a more systematic way to characterize their physical properties over time.”

Looking ahead, the EHT astronomers plan to continue analysing observations made in 2021 and 2022. With these results, they aim to place even tighter constraints on models of black hole accretion environments. “Extending multi-epoch analysis to the polarization properties of M87* will also provide deeper insights into the astrophysics of strong gravity and magnetized plasma near the event horizon,” EHT management team member Rocco Lico tells Physics World.

The analyses are detailed in Astronomy and Astrophysics.


Radioactive anomaly appears in the deep ocean

27 February 2025 at 10:30

Something extraordinary happened on Earth around 10 million years ago, and whatever it was, it left behind a “signature” of radioactive beryllium-10. This finding, which is based on studies of rocks located deep beneath the ocean, could be evidence for a previously-unknown cosmic event or major changes in ocean circulation. With further study, the newly-discovered beryllium anomaly could also become an independent time marker for the geological record.

Most of the beryllium-10 found on Earth originates in the upper atmosphere, where it forms when cosmic rays interact with oxygen and nitrogen molecules. Afterwards, it attaches to aerosols, falls to the ground and is transported into the oceans. Eventually, it reaches the seabed and accumulates, becoming part of what scientists call one of the most pristine geological archives on Earth.

Because beryllium-10 has a half-life of 1.4 million years, it is possible to use its abundance to pin down the dates of geological samples that are more than 10 million years old. This is far beyond the limits of radiocarbon dating, which relies on an isotope (carbon-14) with a half-life of just 5730 years, and can only date samples less than 50 000 years old.
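
The trade-off between the two isotopes follows directly from the exponential decay law N(t) = N₀(1/2)^(t/t½); a minimal sketch comparing them:

```python
def remaining_fraction(t_years: float, half_life_years: float) -> float:
    """Fraction of a radioisotope surviving after time t."""
    return 0.5 ** (t_years / half_life_years)

# Beryllium-10 (half-life 1.4 Myr) is still measurable after 10 Myr...
print(f"10Be after 10 Myr: {remaining_fraction(10e6, 1.4e6):.1%}")   # ~0.7%
# ...whereas carbon-14 (half-life 5730 yr) is already marginal at 50 kyr:
print(f"14C after 50 kyr: {remaining_fraction(50e3, 5730):.2%}")     # ~0.24%
```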

Almost twice as much ¹⁰Be as expected

In the new work, which is detailed in Nature Communications, physicists in Germany and Australia measured the amount of beryllium-10 in geological samples taken from the Pacific Ocean. The samples are primarily made up of iron and manganese and formed slowly over millions of years. To date them, the team used a technique called accelerator mass spectrometry (AMS) at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR). This method can distinguish beryllium-10 from its decay product, boron-10, which has the same mass, and from other beryllium isotopes.

The researchers found that samples dated to around 10 million years ago, a period known as the late Miocene, contained almost twice as much beryllium-10 as they expected to see. The source of this overabundance is a mystery, says team member Dominik Koll, but he offers three possible explanations. The first is that changes to the ocean circulation near the Antarctic, which scientists recently identified as occurring between 10 and 12 million years ago, could have distributed beryllium-10 unevenly across the Earth. “Beryllium-10 might thus have become particularly concentrated in the Pacific Ocean,” says Koll, a postdoctoral researcher at TU Dresden and an honorary lecturer at the Australian National University.

Another possibility is that a supernova exploded in our galactic neighbourhood 10 million years ago, producing a temporary increase in cosmic radiation. The third option is that the Sun’s magnetic shield, which deflects cosmic rays away from the Earth, became weaker through a collision with an interstellar cloud, making our planet more vulnerable to cosmic rays. Both scenarios would have increased the amount of beryllium-10 that fell to Earth without affecting its geographic distribution.

To distinguish between these competing hypotheses, the researchers now plan to analyse additional samples from different locations on Earth. “If the anomaly were found everywhere, then the astrophysics hypothesis would be supported,” Koll says. “But if it were detected only in specific regions, the explanation involving altered ocean currents would be more plausible.”

Whatever the reason for the anomaly, Koll suggests it could serve as a cosmogenic time marker for periods spanning millions of years, the likes of which do not yet exist. “We hope that other research groups will also investigate their deep-ocean samples in the relevant period to eventually come to a definitive answer on the origin of the anomaly,” he tells Physics World.


Quantum-inspired technique simulates turbulence with high speed

26 February 2025 at 14:00

Quantum-inspired “tensor networks” can simulate the behaviour of turbulent fluids in just a few hours rather than the several days required for a classical algorithm. The new technique, developed by physicists in the UK, Germany and the US, could advance our understanding of turbulence, which has been called one of the greatest unsolved problems of classical physics.

Turbulence is all around us, found in weather patterns, water flowing from a tap or a river and in many astrophysical phenomena. It is also important for many industrial processes. However, the way in which turbulence arises and then sustains itself is still not understood, despite the seemingly simple and deterministic physical laws governing it.

The reason for this is that turbulence is characterized by large numbers of eddies and swirls of differing shapes and sizes that interact in chaotic and unpredictable ways across a wide range of spatial and temporal scales. Such fluctuations are difficult to simulate accurately, even using powerful supercomputers, because doing so requires solving sets of coupled partial differential equations on very fine grids.

An alternative is to treat turbulence in a probabilistic way. In this case, the properties of the flow are defined as random variables that are distributed according to mathematical relationships called joint Fokker-Planck probability density functions. These functions are neither chaotic nor multiscale, so they are straightforward to derive. They are nevertheless challenging to solve because of the high number of dimensions contained in turbulent flows.

For this reason, the probability density function approach was widely considered to be computationally infeasible. In response, researchers turned to indirect Monte Carlo algorithms to perform probabilistic turbulence simulations. However, while this approach has chalked up some notable successes, it can be slow to yield results.

Highly compressed “tensor networks”

To overcome this problem, a team led by Nikita Gourianov of the University of Oxford, UK, decided to encode turbulence probability density functions as highly compressed “tensor networks” rather than simulating the fluctuations themselves. Such networks have already been used to simulate otherwise intractable quantum systems like superconductors, ferromagnets and quantum computers, they say.

These quantum-inspired tensor networks represent the turbulence probability distributions in a hyper-compressed format, which then allows them to be simulated. By simulating the probability distributions directly, the researchers can then extract important parameters, such as lift and drag, that describe turbulent flow.
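
The compression idea can be sketched in a few lines: discretize a distribution on a 2ⁿ grid, reshape it into n binary indices, and factorize it with sequential truncated SVDs into a tensor train (the numerical format underlying such networks). The toy example below illustrates the format only; it is not the authors' turbulence solver, and the test function is a synthetic stand-in for a real probability density.

```python
import numpy as np

# Toy tensor-train (matrix-product-state) compression of a discretized
# probability density on a 2**n grid.
n, chi = 16, 8                      # 2**16 grid points, max bond dimension
x = np.linspace(-5.0, 5.0, 2**n)
p = np.exp(-x**2) + 0.5 * np.exp(-((x - 2.0)**2) / 0.1)   # synthetic "PDF"
p /= p.sum()

cores, mat = [], p.reshape(1, -1)
for _ in range(n - 1):
    r = mat.shape[0]
    mat = mat.reshape(2 * r, -1)                 # split off one binary index
    U, S, Vt = np.linalg.svd(mat, full_matrices=False)
    k = min(chi, len(S))                         # truncate the bond dimension
    cores.append(U[:, :k].reshape(r, 2, k))      # store one tensor-train core
    mat = S[:k, None] * Vt[:k]                   # carry the remainder forward
cores.append(mat.reshape(-1, 2, 1))

params = sum(c.size for c in cores)
print(f"{2**n} grid values -> {params} tensor-train parameters")
```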

Importantly, the new technique allows an ordinary single CPU (central processing unit) core to compute a turbulent flow in just a few hours, compared to several days using a classical algorithm on a supercomputer.

This significantly improved way of simulating turbulence could be particularly useful in the area of chemically reactive flows in areas such as combustion, says Gourianov. “Our work also opens up the possibility of probabilistic simulations for all kinds of chaotic systems, including weather or perhaps even the stock markets,” he adds.

The researchers now plan to apply tensor networks to deep learning, a form of machine learning that uses artificial neural networks. “Neural networks are famously over-parameterized and there are several publications showing that they can be compressed by orders of magnitude in size simply by representing their layers as tensor networks,” Gourianov tells Physics World.

The study is detailed in Science Advances.


Astronomers create a ‘weather map’ for a gas giant exoplanet

25 February 2025 at 10:45

Astronomers have constructed the first “weather map” of the exoplanet WASP-127b, and the forecast there is brutal. Winds roar around its equator at speeds as high as 33 000 km/hr, far exceeding anything found in our own solar system. Its poles are cooler than the rest of its surface, though “cool” is a relative term on a planet where temperatures routinely exceed 1000 °C. And its atmosphere contains water vapour, so rain – albeit not in the form we’re accustomed to on Earth – can’t be ruled out.

Astronomers have been studying WASP-127b since its discovery in 2016. A gas giant exoplanet located over 500 light-years from Earth, it is slightly larger than Jupiter but much less dense, and it orbits its host – a G-type star like our own Sun – in just 4.18 Earth days. To probe its atmosphere, astronomers record the light transmitted as it passes in front of its host star along our line of sight. During such passes, or transits, some starlight gets filtered through the planet’s upper atmosphere and is “imprinted” with the characteristic pattern of absorption lines of the atoms and molecules present there.

Observing the planet during a transit event

On the night of 24/25 March 2022, astronomers used the CRyogenic InfraRed Echelle Spectrograph (CRIRES+) on the European Southern Observatory’s Very Large Telescope to observe WASP-127b at wavelengths of 1972‒2452 nm during a transit event lasting 6.6 hours. The data they collected show that the planet is home to supersonic winds travelling at nearly six times the planet’s rotation speed – something that has never been observed before. By comparison, the fastest wind speeds measured in our solar system were on Neptune, where they top out at “just” 1800 km/hr, or 0.5 km/s.

Such strong winds – the fastest ever observed on a planet – would be hellish to experience. But for the astronomers, they were crucial for mapping WASP-127b’s weather.

“The light we measure still looks to us as if it all came from one point in space, because we cannot resolve the planet optically/spatially like we can do for planets in our own solar system,” explains Lisa Nortmann, an astronomer at the University of Göttingen, Germany and the lead author of an Astronomy and Astrophysics paper describing the measurements. However, Nortmann continues, “the unexpectedly fast velocities measured in this planet’s atmosphere have allowed us to investigate different regions on the planet, as it causes their signals to shift to different parts of the light spectrum. This meant we could reconstruct a rough weather map of the planet, even though we cannot resolve these different regions optically.”
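
Underlying this mapping is the ordinary Doppler shift, Δλ = λ₀v/c: regions moving toward or away from us displace their absorption signatures by different amounts. A toy estimate at the CRIRES+ band edges, using the reported 33 000 km/h wind speed:

```python
c_kms = 2.998e5                 # speed of light, km/s
v_wind = 33_000 / 3600          # 33,000 km/h -> ~9.2 km/s

for lam0_nm in (1972.0, 2452.0):    # edges of the observed wavelength band
    shift = lam0_nm * v_wind / c_kms
    print(f"{lam0_nm:.0f} nm line shifts by about {shift:.3f} nm")
```

Shifts of a few hundredths of a nanometre are within reach of a high-resolution spectrograph such as CRIRES+.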

The astronomers also used the transit data to study the composition of WASP-127b’s atmosphere. They detected both water vapour and carbon monoxide. In addition, they found that the temperature was lower at the planet’s poles than elsewhere.

Removing unwanted signals

According to Nortmann, one of the challenges in the study was removing signals from Earth’s atmosphere and WASP-127b’s host star so as to focus on the planet itself. She notes that the work will have implications for researchers working on theoretical models that aim to predict wind patterns on exoplanets.

“They will now have to try to see if their models can recreate the winds speeds we have observed,” she tells Physics World. “The results also really highlight that when we investigate this and other planets, we have to take the 3D structure of winds into account when interpreting our results.”

The astronomers say they are now planning further observations of WASP-127b to find out whether its weather patterns are stable or change over time. “We would also like to investigate molecules on the planet other than H2O and CO,” Nortmann says. “This could possibly allow us to probe the wind at different altitudes in the planet’s atmosphere and understand the conditions there even better.”


‘Sneeze simulator’ could improve predictions of pathogen spread

20 February 2025 at 10:30

A new “sneeze simulator” could help scientists understand how respiratory illnesses such as COVID-19 and influenza spread. Built by researchers at the Universitat Rovira i Virgili (URV) in Spain, the simulator is a three-dimensional model that incorporates a representation of the nasal cavity as well as other parts of the human upper respiratory tract. According to the researchers, it should help scientists to improve predictive models for respiratory disease transmission in indoor environments, and could even inform the design of masks and ventilation systems that mitigate the effects of exposure to pathogens.

For many respiratory illnesses, pathogen-laden aerosols expelled when an infected person coughs, sneezes or even breathes are important ways of spreading disease. Our understanding of how these aerosols disperse has advanced in recent years, mainly through studies carried out during and after the COVID-19 pandemic. Some of these studies deployed techniques such as spirometry and particle imaging to characterize the distributions of particle sizes and airflow when we cough and sneeze. Others developed theoretical models that predict how clouds of particles will evolve after they are ejected and how droplet sizes change as a function of atmospheric humidity and composition.

To build on this work, the URV researchers sought to understand how the shape of the nasal cavity affects these processes. They argue that neglecting this factor leads to an incomplete understanding of airflow dynamics and particle dispersion patterns, which in turn affects the accuracy of transmission modelling. As evidence, they point out that studies focused on sneezing (which occurs via the nose) and coughing (which occurs primarily via the mouth) detected differences in how far droplets travelled, the amount of time they stayed in the air and their pathogen-carrying potential – all parameters that feed into transmission models. The nasal cavity also affects the shape of the particle cloud ejected, which has previously been found to influence how pathogens spread.

The challenge they face is that the anatomy of the nasal cavity varies greatly from person to person, making it difficult to model. However, the URV researchers say that their new simulator, which is based on realistic 3D printed models of the upper respiratory tract and nasal cavity, overcomes this limitation, precisely reproducing the way particles are produced when people cough and sneeze.

Reproducing human coughs and sneezes

One of the features that allows the simulator to do this is a variable nostril opening. This enables the researchers to control air flow through the nasal cavity, and thus to replicate different sneeze intensities. The simulator also controls the strength of exhalations, meaning that the team could investigate how this and the size of nasal airways affects aerosol cloud dispersion.

During their experiments, which are detailed in Physics of Fluids, the URV researchers used high-speed cameras and a laser beam to observe how particles disperse following a sneeze. They studied three airflow rates typical of coughs and sneezes and monitored what happened with and without nasal cavity flow. Based on these measurements, they used a well-established model to predict the range of the aerosol cloud produced.
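
For context on why suspended particles matter for exposure, a standard Stokes-law estimate (not the model used in the study) shows how strongly settling time depends on droplet size:

```python
# Stokes settling speed for a small sphere: v = rho * g * d**2 / (18 * mu)
g = 9.81          # gravitational acceleration, m/s^2
mu = 1.8e-5       # dynamic viscosity of air, Pa*s
rho = 1000.0      # density of a water droplet, kg/m^3

for d_um in (1, 10, 100):
    d = d_um * 1e-6
    v = rho * g * d**2 / (18 * mu)
    print(f"{d_um:>3} um droplet falls 1.5 m in {1.5 / v / 60:8.1f} min")
```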

Simulator: Team member Nicolás Catalán with the three-dimensional model of the human upper respiratory tract. The mask in the background hides the 3D model to simulate any impact of the facial geometry on the particle dispersion. (Courtesy: Bureau for Communications and Marketing of the URV)

“We found that nasal exhalation disperses aerosols more vertically and less horizontally, unlike mouth exhalation, which projects them toward nearby individuals,” explains team member Salvatore Cito. “While this reduces direct transmission, the weaker, more dispersed plume allows particles to remain suspended longer and become more uniformly distributed, increasing overall exposure risk.”

These findings have several applications, Cito says. For one, the insights gained could be used to improve models used in epidemiology and indoor air quality management.

“Understanding how nasal exhalation influences aerosol dispersion can also inform the design of ventilation systems in public spaces, such as hospitals, classrooms and transportation systems to minimize airborne transmission risks,” he tells Physics World.

The results also suggest that protective measures such as masks should be designed to block both nasal and oral exhalations, he says, adding that full-face coverage is especially important in high-risk settings.

The researchers’ next goal is to study the impact of environmental factors such as humidity and temperature on aerosol dispersion. Until now, such experiments have only been carried out under controlled isothermal conditions, which does not reflect real-world situations. “We also plan to integrate our experimental findings with computational fluid dynamics simulations to further refine protective models for respiratory aerosol dispersion,” Cito reveals.


Scientists discover secret of ice-free polar-bear fur

19 February 2025 at 10:39

In the teeth of the Arctic winter, polar-bear fur always remains free of ice – but how? Researchers in Ireland and Norway say they now have the answer, and it could have applications far beyond wildlife biology. Having traced the fur’s ice-shedding properties to a substance produced by glands near the root of each hair, the researchers suggest that chemicals found in this substance could form the basis of environmentally-friendly new anti-icing surfaces and lubricants.

The substance in the bear’s fur is called sebum, and team member Julian Carolan, a PhD candidate at Trinity College Dublin and the AMBER Research Ireland Centre, explains that it contains three major components: cholesterol, diacylglycerols and anteiso methyl-branched fatty acids. These chemicals have a similar ice adsorption profile to that of perfluoroalkyl (PFAS) polymers, which are commonly employed in anti-icing applications.

“While PFAS are very effective, they can be damaging to the environment and have been dubbed ‘forever chemicals’,” explains Carolan, the lead author of a Science Advances paper on the findings. “Our results suggest that we could replace these fluorinated substances with these sebum components.”

With and without sebum

Carolan and colleagues obtained these results by comparing polar bear hairs naturally coated with sebum to hairs where the sebum had been removed using a surfactant found in washing-up liquid. Their experiment involved forming a 2 × 2 × 2 cm block of ice on the samples and placing them in a cold chamber. Once the ice was in place, the team used a force gauge on a track to push it off. By measuring the maximum force needed to remove the ice and dividing this by the area of the sample, they obtained ice adhesion strengths for the washed and unwashed fur.
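
The adhesion strength itself is simply the peak removal force divided by the iced contact area. A sketch with a hypothetical force-gauge reading chosen to land at the reported ~50 kPa scale (the 2 × 2 cm footprint comes from the ice block described above):

```python
area = 0.02 * 0.02        # 2 cm x 2 cm iced footprint, m^2
peak_force = 20.0         # hypothetical force-gauge reading, N

adhesion_kpa = peak_force / area / 1e3
print(f"ice adhesion strength: {adhesion_kpa:.0f} kPa")   # -> 50 kPa
```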

This experiment showed that the ice adhesion of unwashed polar bear fur is exceptionally low. While the often-accepted threshold for “icephobicity” is around 100 kPa, the unwashed fur measured as little as 50 kPa. In contrast, the ice adhesion of washed (sebum-free) fur is much higher, coming in at least 100 kPa greater than the unwashed fur.

What is responsible for the low ice adhesion?

Guided by this evidence of sebum’s role in keeping the bears ice-free, the researchers’ next task was to determine its exact composition. They did this using a combination of techniques, including gas chromatography, mass spectrometry, liquid chromatography-mass spectrometry and nuclear magnetic resonance spectroscopy. They then used density functional theory methods to calculate the adsorption energy of the major components of the sebum. “In this way, we were able to identify which elements were responsible for the low ice adhesion we had identified,” Carolan tells Physics World.

This is not the first time that researchers have investigated animals’ anti-icing properties. A team led by Anne-Marie Kietzig at Canada’s McGill University, for example, previously found that penguin feathers also boast an impressively low ice adhesion. Team leader Bodil Holst says that she was inspired to study polar bear fur by a nature documentary that depicted the bears entering and leaving water to hunt, rolling around in the snow and sliding down hills – all while remaining ice-free. She and her colleagues collaborated with Jon Aars and Magnus Andersen of the Norwegian Polar Institute, which carries out a yearly polar bear monitoring campaign in Svalbard, Norway, to collect their samples.

Insights into human technology

As well as solving an ecological mystery and, perhaps, inspiring more sustainable new anti-icing lubricants, Carolan says the team’s work is also yielding insights into technologies developed by humans living in the Arctic. “Inuit people have long used polar bear fur for hunting stools (nikorfautaq) and sandals (tuterissat),” he explains. “It is notable that traditional preparation methods protect the sebum on the fur by not washing the hair-covered side of the skin. This maintains its low ice adhesion property while allowing for quiet movement on the ice – essential for still hunting.”

The researchers now plan to explore whether it is possible to apply the sebum components they identified to surfaces as lubricants. Another potential extension, they say, would be to pursue questions about the ice-free properties of other Arctic mammals such as reindeer, the arctic fox and wolverine. “It would be interesting to discover if these animals share similar anti-icing properties,” Carolan says. “For example, wolverine fur is used in parka ruffs by Canadian Inuit as frost formed on it can easily be brushed off.”


Schrödinger’s cat states appear in the nuclear spin state of antimony

13 February 2025 at 17:15

Physicists at the University of New South Wales (UNSW) are the first to succeed in creating and manipulating quantum superpositions of a single, large nuclear spin. The superposition involves spin states that are very far apart, which makes it a Schrödinger’s cat state. The work could be important for applications in quantum information processing and quantum error correction.

It was Erwin Schrödinger who, in 1935, devised his famous thought experiment involving a cat that could, worryingly, be both dead and alive at the same time. In his gedanken experiment, the decay of a radioactive atom triggers a mechanism (the breaking of a vial containing a poisonous gas) that kills the cat. However, since the decay of the radioactive atom is a quantum phenomenon, the atom is in a superposition of being decayed and not decayed. If the cat and poison are hidden in a box, we do not know if the cat is alive or dead. Instead, the state of the feline is a superposition of dead and alive – known as a Schrödinger’s cat state – until we open the box.

The term “Schrödinger’s cat state” (or just “cat state”) is now used to refer to a superposition of two very different states of a quantum system. Creating cat states in the lab is no easy task, but researchers have managed to do this in recent years using the quantum superposition of coherent states of a laser field with different amplitudes, or phases, of the field. They have also created cat states using a trapped ion (with the vibrational state of the ion in the trap playing the role of the cat) and coherent microwave fields confined to superconducting boxes combined with Rydberg atoms and superconducting quantum bits (qubits).

Antimony atom cat

The cat state in the UNSW study is an atom of antimony, which is a heavy atom with a large nuclear spin. The high spin value implies that, instead of just pointing up and down (that is, in one of two directions), the nuclear spin of antimony can be in spin states corresponding to eight different directions. This makes it a high-dimensional quantum system that is valuable for quantum information processing and for encoding error-correctable logical qubits. The atom was embedded in a silicon quantum chip that allows for readout and control of the nuclear spin state.

Normally, a qubit is described by just two quantum states, explains Xi Yu, who is lead author of a paper describing the study. For example, an atom with its spin pointing down can be labelled as the “0” state and the spin pointing up, the “1” state. The problem with such a system is that information contained in these states is fragile and can be easily lost when a 0 switches to a 1, or vice versa. The probability of this logical error occurring is reduced by creating a qubit using a system like the antimony atom. With its eight different spin directions, a single error is not enough to erase the quantum information – there are still seven quantum states left, and it would take seven consecutive errors to turn the 0 into a 1.
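
To see why, note that a nuclear spin with eight levels corresponds to I = 7/2 (as in antimony-123, the isotope assumed in this sketch), and a typical error shifts the spin projection m by only ±1:

```python
import numpy as np

I = 7 / 2                                  # nuclear spin of 123Sb (assumed)
m = np.arange(-I, I + 1)                   # 2I + 1 = 8 spin projections
print(f"{len(m)} levels, m = {m}")

# Encode logical 0 and 1 at the ladder's extremes: a single delta-m = +/-1
# error moves the state one rung, so flipping 0 into 1 takes 2I = 7
# consecutive errors rather than one.
print(f"errors needed to turn 0 into 1: {int(2 * I)}")
```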

More room for error

The information is still encoded in binary code (0 and 1), but there is more room for error between the logical codes, says team leader Andrea Morello. “If an error occurs, we detect it straight away, and we can correct it before further errors accumulate.”

The researchers say they were not initially looking to make and manipulate cat states but started with a project on high-spin nuclei for reasons unrelated to quantum information. They were in fact interested in observing quantum chaos in a single nuclear spin, which had been an experimental “holy grail” for a very long time, says Morello. “Once we began working with this system, we first got derailed by the serendipitous discovery of nuclear electric resonance,” he remembers. “We then became aware of some new theoretical ideas for the use of high-spin systems in quantum information and quantum error correcting codes.

“We therefore veered towards that research direction, and this is our first big result in that context,” he tells Physics World.

Scalable technology

The main challenge the team had to overcome in their study was to set up seven “clocks” that had to be precisely synchronized, so they could keep track of the quantum state of the eight-level system. Until quite recently, this would have involved cumbersome programming of waveform generators, explains Morello. “The advent of FPGA [field-programmable gate array] generators, tailored for quantum applications, has made this research much easier to conduct now.”

While there have already been a few examples of such physical platforms in which quantum information can be encoded in a (Hilbert) space of dimension larger than two – for example, microwave cavities or trapped ions – these were relatively large in size: bulk microwave cavities are typically the size of a matchbox, he says. “Here, we have reconstructed many of the properties of other high-dimensional systems, but within an atomic-scale object – a nuclear spin. It is very exciting, and quite plausible, to imagine a quantum processor in silicon, containing millions of such Schrödinger cat states.”

The fact that the cat is hosted in a silicon chip means that this technology could be scaled up in the long-term using methods similar to those already employed in the computer chip industry today, he adds.

Looking ahead, the UNSW team now plans to demonstrate quantum error correction in its antimony system. “Beyond that, we are working to integrate the antimony atoms with lithographic quantum dots, to facilitate the scalability of the system and perform quantum logic operations between cat-encoded qubits,” reveals Morello.

The present study is detailed in Nature Physics.


Bacterial ‘cables’ form a living gel in mucus

12 February 2025 at 15:00

Bacterial cells in solutions of polymers such as mucus grow into long cable-like structures that buckle and twist on each other, forming a “living gel” made of intertwined cells. This behaviour is very different from what happens in polymer-free liquids, and researchers at the California Institute of Technology (Caltech) and Princeton University, both in the US, say that understanding it could lead to new treatments for bacterial infections in patients with cystic fibrosis. It could also help scientists understand how cells organize themselves into polymer-secreting conglomerations of bacteria called biofilms that can foul medical and industrial equipment.

Interactions between bacteria and polymers are ubiquitous in nature. For example, many bacteria live as multicellular colonies in polymeric fluids, including host-secreted mucus, exopolymers in the ocean and the extracellular polymeric substance that encapsulates biofilms. Often, these growing colonies can become infectious, including in cystic fibrosis patients, whose mucus is more concentrated than it is in healthy individuals.

Laboratory studies of bacteria, however, typically focus on cells in polymer-free fluids, explains study leader Sujit Datta, a biophysicist and bioengineer at Caltech. “We wondered whether interactions with extracellular polymers influence proliferating bacterial colonies,” says Datta, “and if so, how?”

Watching bacteria grow in mucus

In their work, which is detailed in Science Advances, the Caltech/Princeton team used a confocal microscope to monitor how different species of bacteria grew in purified samples of mucus. The samples, Datta explains, were provided by colleagues at the Massachusetts Institute of Technology and the Albert Einstein College of Medicine.

Normally, when bacterial cells divide, the resulting “daughter” cells diffuse away from each other. However, in polymeric mucus solutions, Datta and colleagues observed that the cells instead remained stuck together and began to form long cable-like structures. These cables can contain thousands of cells, and eventually they start bending and folding on top of each other to form an entangled network.

“We found that we could quantitatively predict the conditions under which such cables form using concepts from soft-matter physics typically employed to describe non-living gels,” Datta says.

Support for bacterial colonies

The team’s work reveals that polymers, far from being a passive medium, play a pivotal role in supporting bacterial life by shaping how cells grow in colonies. The form of these colonies – their morphology – is known to influence cell-cell interactions and is important for maintaining their genetic diversity. It also helps determine how resilient a colony is to external stressors.

“By revealing this previously-unknown morphology of bacterial colonies in concentrated mucus, our finding could help inform ways to treat bacterial infections in patients with cystic fibrosis, in which the mucus that lines the lungs and gut becomes more concentrated, often causing the bacterial infections that take hold in that mucus to become life-threatening,” Datta tells Physics World.

Friend or foe?

As for why cable formation is important, Datta explains that there are two schools of thought. The first is that by forming large cables, bacteria may become more resilient against the body’s immune system, making them more infectious. The other possibility is that the reverse is true – that cable formation could in fact leave bacteria more exposed to the host’s defence mechanisms. These include “mucociliary clearance”, which is the process by which tiny hairs on the surface of the lungs constantly sweep up mucus and propel it upwards.

“Could it be that when bacteria are all clumped together in these cables, it is actually easier to get rid of them by expelling them out of the body?” Datta asks.

Investigating these hypotheses is an avenue for future research, he adds. “Ours is a fundamental discovery on how bacteria grow in complex environments, more akin to their natural habitats,” Datta says. “We also expect it will motivate further work exploring how cable formation influences the ways in which bacteria interact with hosts, phages, nutrients and antibiotics.”


Organic photovoltaic solar cells could withstand harsh space environments

11 February 2025 at 15:00

Carbon-based organic photovoltaics (OPVs) may be much better than previously thought at withstanding the high-energy radiation and sub-atomic particle bombardments of space environments. This finding, by researchers at the University of Michigan in the US, challenges a long-standing belief that OPV devices systematically degrade under conditions such as those encountered by spacecraft in low-Earth orbit. If verified in real-world tests, the finding suggests that OPVs could one day rival traditional thin-film photovoltaic technologies based on rigid semiconductors such as gallium arsenide.

Lightweight, robust, radiation-resilient photovoltaics are critical technologies for many aerospace applications. OPV cells are particularly attractive for this sector because they are ultra-lightweight, thermally stable and highly flexible. This last property allows them to be integrated onto curved surfaces as well as flat ones.

Today’s single-junction OPV devices also have a further advantage. Thanks to power conversion efficiencies (PCEs) that now exceed 20%, their specific power – that is, the power generated per weight – can be up to 40 W/g. This is significantly higher than traditional photovoltaic technologies, including those based on silicon (1 W/g) and gallium arsenide (3 W/g) on flexible substrates. Devices with such a large specific power could provide energy for small spacecraft heading into low-Earth orbit and beyond.
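
Those specific-power figures follow from the incident power, the conversion efficiency and the device's mass per unit area. A back-of-the-envelope sketch; the areal mass is an illustrative guess for an ultrathin flexible cell, not a number from the study:

```python
irradiance = 1000.0     # W/m^2, roughly one-sun illumination
pce = 0.20              # power conversion efficiency (>20% for modern OPVs)
areal_mass = 5.0        # g/m^2, assumed mass per area of an ultrathin device

print(f"specific power: {irradiance * pce / areal_mass:.0f} W/g")  # -> 40 W/g
```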

Until now, however, scientists believed that these materials had a fatal flaw for space applications: they weren’t robust to irradiation by the energetic particles (predominantly fluxes of electrons and protons) that spacecraft routinely encounter.

Testing two typical OPV materials

In the new work, researchers led by electrical and computer engineer Yongxi Li and physicist Stephen Forrest analysed how two typical OPV materials behave when exposed to proton particles with differing energies. They did this by characterizing their optoelectronic properties before and after irradiation exposure. The first group was made up of small molecules (DBP, DTDCPB and C70) that had been grown using a technique called vacuum thermal evaporation (VTE). The second group consisted of solution-processed small molecules and polymers (PCE-10, PM6, BT-CIC and Y6).

The team’s measurements show that the OPVs grown by VTE retained their initial PV efficiency under radiation fluxes of up to 10¹² cm⁻². In contrast, polymer-based OPVs lose 50% of their original efficiency under the same conditions. This, say the researchers, is because proton irradiation breaks carbon-hydrogen bonds in the polymers’ molecular alkyl side chains. This leads to polymer cross-linking and the generation of charge traps that imprison electrons and prevent them from generating useful current.

The good news, Forrest says, is that many of these defects can be mended by thermally annealing the materials at temperatures of 45 °C or less. After such an annealing, the cell’s PCE returns to nearly 90% of its value before irradiation. This means that Sun-facing solar cells made of these materials could essentially “self-heal”, though Forrest acknowledges that whether this actually happens in deep space is a question that requires further investigation. “It may be more straightforward to design the material so that the electron traps never appear in the first place or by filling them with other atoms, so eliminating this problem,” he says.

According to Li, the new study, which is detailed in Joule, could aid the development of standardized stability tests for how protons interact with OPV devices. Such tests already exist for c-Si and GaAs solar cells, but not for OPVs, he says.

The Michigan researchers say they will now be developing materials that combine high PCEs with strong resilience to proton exposure. “We will then use these materials to fabricate OPV devices that we will then test on CubeSats and spacecraft in real-world environments,” Li tells Physics World.


New class of quasiparticle appears in bilayer graphene

10 February 2025 at 10:00

A newly-discovered class of quasiparticles known as fractional excitons offers fresh opportunities for condensed-matter research and could reveal unprecedented quantum phases, say physicists at Brown University in the US. The new quasiparticles, which are neither bosons nor fermions and carry no charge, could have applications in quantum computing and sensing, they say.

In our everyday, three-dimensional world, particles are classified as either fermions or bosons. Fermions such as electrons follow the Pauli exclusion principle, which prevents them from occupying the same quantum state. This property underpins phenomena like the structure of atoms and the behaviour of metals and insulators. Bosons, on the other hand, can occupy the same state, allowing for effects like superconductivity and superfluidity.

Fractional excitons defy this traditional classification, says Jia Leo Li, who led the research. Their properties lie somewhere in between those of fermions and bosons, making them more akin to anyons, which are particles that exist only in two-dimensional systems. But that’s only one aspect of their unusual nature, Li adds. “Unlike typical anyons, which carry a fractional charge of an electron, fractional excitons are neutral particles, representing a distinct type of quantum entity,” he says.

The experiment

Li and colleagues created the fractional excitons using two sheets of graphene – a form of carbon just one atom thick – separated by a layer of another two-dimensional material, hexagonal boron nitride. This layered setup allowed them to precisely control the movement of electrons and positively-charged “holes” and thus to generate excitons, which are pairs of electrons and holes that behave like single particles.

The team then applied a 12 T magnetic field to their bilayer structure. This strong field caused the electrons in the graphene to split into fractional charges – a well-known phenomenon that occurs in the fractional quantum Hall effect. “Here, strong magnetic fields create Landau electronic levels that induce particles with fractional charges,” Li explains. “The bilayer structure facilitates pairing between these positive and negative charges, making fractional excitons possible.”
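
The relevant dial here is the Landau-level filling factor, ν = nh/(eB), which compares the carrier density with the density of magnetic flux quanta. A sketch with an illustrative carrier density chosen to give a fractional filling at the experiment's 12 T field:

```python
h = 6.626e-34     # Planck constant, J*s
e = 1.602e-19     # elementary charge, C
B = 12.0          # magnetic field used in the experiment, T

n = 9.7e14        # carriers per m^2 (illustrative value)
nu = n * h / (e * B)
print(f"filling factor nu = {nu:.2f}")   # ~1/3: fractional quantum Hall regime
```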

“Distinct from any known particles”

The fractional excitons represent a quantum system of neutral particles that obey fractional quantum statistics, interact via dipolar forces and are distinct from any known particles, Li tells Physics World. He adds that his team’s study, which is detailed in Nature, builds on prior works that predicted the existence of excitons in the fractional quantum Hall effect (see, for example, Nature Physics 13 751 (2017); Nature Physics 15 898–903 (2019); and Science 375 205–209 (2022)).

The researchers now plan to explore the properties of fractional excitons further. “Our key objectives include measuring the fractional charge of the constituent particles and confirming their anyonic statistics,” Li explains. Studies of this nature could shed light on how fractional excitons interact and flow, potentially revealing new quantum phases, he adds.

“Such insights could have profound implications for quantum technologies, including ultra-sensitive sensors and robust quantum computing platforms,” Li says. “As research progresses, fractional excitons may redefine the boundaries of condensed-matter physics and applied quantum science.”


Two-faced graphene nanoribbons could make the first purely carbon-based ferromagnets

6 February 2025 at 13:00

A new graphene nanostructure could become the basis for the first ferromagnets made purely from carbon. Known as an asymmetric or “Janus” graphene nanoribbon after the two-faced god in Roman mythology, the opposite edges of this structure have different properties, with one edge taking a zigzag form. Lu Jiong, a researcher at the National University of Singapore (NUS) who co-led the effort to make the structure, explains that it is this zigzag edge that gives rise to the ferromagnetic state, making the structure the first of its kind.

“The work is the first demonstration of the concept of a Janus graphene nanoribbon (JGNR) strand featuring a single ferromagnetic zigzag edge,” Lu says.

Graphene nanostructures with zigzag-shaped edges show much promise for technological applications thanks to their electronic and magnetic properties. Zigzag GNRs (ZGNRs) are especially appealing because the behaviour of their electrons can be tuned from metal-like to semiconducting by adjusting the length or width of the ribbons; modifying the structure of their edges; or doping them with non-carbon atoms. The same techniques can also be used to make such materials magnetic. This versatility means they can be used as building blocks for numerous applications, including quantum and spintronics technologies.

Previously, only two types of symmetric ZGNRs had been synthesized via on-surface chemistry: 6-ZGNR and nitrogen-doped 6-ZGNR, where the “6” refers to the number of carbon rows across the nanoribbon’s width. In the latest work, Lu and co-team leaders Hiroshi Sakaguchi of the University of Kyoto, Japan and Steven Louie at the University of California, Berkeley, US sought to expand this list.

“It has been a long-sought goal to make other forms of zigzag-edge related GNRs with exotic quantum magnetic states for studying new science and developing new applications,” says team member Song Shaotang, the first author of a paper in Nature about the research.

ZGNRs with asymmetric edges

Building on topological classification theory developed in previous research by Louie and colleagues, theorists in the Singapore-Japan-US collaboration predicted that it should be possible to tune the magnetic properties of these structures by making ZGNRs with asymmetric edges. “These nanoribbons have one pristine zigzag edge and another edge decorated with a pattern of topological defects spaced by a certain number m of missing motifs,” Louie explains. “Our experimental team members, using innovative z-shaped precursor molecules for synthesis, were able to make two kinds of such ZGNRs. Both of these have one edge that supports a benzene motif array with a spacing of m = 2 missing benzene rings in between. The other edge is a conventional zigzag edge.”

Crucially, the theory predicted that the magnetic behaviour – ranging from antiferromagnetism to ferrimagnetism to ferromagnetism – of these JGNRs could be controlled by varying the value of m. In particular, says Louie, the configuration of m = 2 is predicted to show ferromagnetism – that is, all electron spins aligned in the same direction – concentrated entirely on the pristine zigzag edge. This behaviour contrasts sharply with that of symmetric ZGNRs, where spin polarization occurs on both edges and the aligned edge spins are antiferromagnetically coupled across the width of the ribbon.

Precursor design and synthesis

To validate these theoretical predictions, the team synthesized JGNRs on a surface. They then used advanced scanning tunnelling microscope (STM) and atomic force microscope (AFM) measurements to visualize the materials’ exact real-space chemical structure. These measurements also revealed the emergence of exotic magnetic states in the JGNRs synthesized in Lu’s lab at the NUS.

Two sides: An atomic model of the Janus graphene nanoribbons (left) and its atomic force microscopic image (right). (Courtesy: National University of Singapore)

Sakaguchi explains that, in the past, GNRs were mainly synthesized using symmetric precursor chemical structures, largely because their asymmetric counterparts were so scarce. One of the challenges in this work, he notes, was to design asymmetric polymeric precursors that could undergo the essential fusion (dehydrogenation) process to form JGNRs. These molecules often orient randomly, so the researchers needed to use additional techniques to align them unidirectionally prior to the polymerization reaction. “Addressing this challenge in the future could allow us to produce JGNRs with a broader range of magnetic properties,” Sakaguchi says.

Towards carbon-based ferromagnets

According to Lu, the team’s research shows that JGNRs could become the first carbon-based spin transport channels to show ferromagnetism. They might even lead to the development of carbon-based ferromagnets, capping off a research effort that began in the 1980s.

However, Lu acknowledges that there is much work to do before these structures find real-world applications. For one, they are not currently very robust when exposed to air. “The next goal,” he says, “is to develop chemical modifications that will enhance the stability of these 1D structures so that they can survive under ambient conditions.”

A further goal, he continues, is to synthesize JGNRs with different values of m, as well as other classes of JGNRs with different types of defective edges. “We will also be exploring the 1D spin physics of these structures and [will] investigate their spin dynamics using techniques such as scanning tunnelling microscopy combined with electron spin resonance, paving the way for their potential applications in quantum technologies.”

The post Two-faced graphene nanoribbons could make the first purely carbon-based ferromagnets appeared first on Physics World.

Alternative building materials could store massive amounts of carbon dioxide

27 janvier 2025 à 13:00

Replacing conventional building materials with alternatives that sequester carbon dioxide could allow the world to lock away up to half the CO2 generated by humans each year – about 16 billion tonnes. This is the finding of researchers at the University of California Davis and Stanford University, both in the US, who studied the sequestration potential of materials such as carbonate-based aggregates and biomass fibre in brick.

Despite efforts to reduce greenhouse gas emissions by decarbonizing industry and switching to renewable sources of energy, it is likely that humans will continue to produce significant amounts of CO2 beyond the target “net zero” date of 2050. Carbon storage and sequestration – either at source or directly from the atmosphere – are therefore worth exploring as an additional route towards this goal. Researchers have proposed several possible ways of doing this, including injecting carbon underground or deep under the ocean. However, all these scenarios are challenging to implement practically and pose their own environmental risks.

Modifying common building materials

In the present work, a team of civil engineers and earth systems scientists led by Elisabeth van Roijen (then a PhD student at UC Davis) calculated how much carbon could be stored in modified versions of several common building materials. These include concrete (cement) and asphalt containing carbonate-based aggregates; bio-based plastics; wood; biomass-fibre bricks (from waste biomass); and biochar filler in cement.

The researchers obtained the “16 billion tonnes of CO2” figure by assuming that all aggregates currently employed in concrete would be replaced with carbonate-based versions. They also supplemented 15% of cement with biochar and the remainder with carbonatable cements; increased the amount of wood used in all new construction by 20%; and supplemented 15% of bricks with biomass and the remainder with carbonatable calcium hydroxide. A final element in their calculation was to replace all plastics used in construction today with bio-based plastics and all bitumen with bio-oil in asphalt.

“We calculated the carbon storage potential of each material based on the mass ratio of carbon in each material,” explains van Roijen. “These values were then scaled up based on 2016 consumption values for each material.”
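As a rough illustration of that mass-ratio method, the sketch below scales a per-tonne carbon fraction by an annual consumption figure for each material. All of the numbers in it are hypothetical placeholders, not the study’s data:

```python
CO2_PER_C = 44.0 / 12.0  # tonnes of CO2 stored per tonne of elemental carbon

# name: (carbon mass fraction, annual consumption in Gt) -- hypothetical values
materials = {
    "carbonate aggregate": (0.12, 20.0),
    "biomass-fibre brick": (0.25, 1.5),
    "bio-based plastic": (0.50, 0.3),
}

total = 0.0
for name, (c_frac, gt_per_year) in materials.items():
    stored = gt_per_year * c_frac * CO2_PER_C  # Gt of CO2 locked away per year
    total += stored
    print(f"{name}: {stored:.2f} Gt CO2/yr")
print(f"total: {total:.2f} Gt CO2/yr")
```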

“The sheer magnitude of carbon storage is pretty impressive”

While the production of some replacement materials would need to increase to meet the resulting demand, van Roijen and colleagues found that resources readily available today – for example, mineral-rich waste streams – would already let us replace 10% of conventional aggregates with carbonate-based ones. “These alone could store 1 billion tonnes of CO2,” she says. “The sheer magnitude of carbon storage is pretty impressive, especially when you put it in context of the level of carbon dioxide removal needed to stay below the 1.5 and 2 °C targets set by the Intergovernmental Panel on Climate Change (IPCC).”

Indeed, even if the world doesn’t implement these technologies until 2075, we could still store enough carbon between 2075 and 2100 to stay below these targets, she tells Physics World. “This is assuming, of course, that all other decarbonization efforts outlined in the IPCC reports are also implemented to achieve net-zero emissions,” she says.

Building materials are a good option for carbon storage

The motivation for the study, she explains, came from the urgent need – as expressed by the IPCC – to not only reduce new carbon emissions through rapid and significant decarbonization, but to also remove large amounts of CO2 already present in the atmosphere. “Rather than burying it in geological, terrestrial or ocean reservoirs, we wanted to look into the possibility of leveraging existing technology – namely conventional building materials – as a way to store CO2. Building materials are a good option for carbon storage given the massive quantity (30 billion tonnes) produced each year, not to mention their durability.”

Van Roijen, who is now a postdoctoral researcher at the US Department of Energy’s National Renewable Energy Laboratory, hopes that this work, which is detailed in Science, will go beyond the reach of the research lab and attract the attention of policymakers and industrialists. While some of the technologies outlined in this study are new and require further research, others, such as bio-based plastics, are well established and simply need some economic and political support, she says. “That said, conventional building materials such as concrete and plastics are pretty cheap, so there will need to be some incentive for industries to make the switch over to these low-carbon materials.”

The post Alternative building materials could store massive amounts of carbon dioxide appeared first on Physics World.

Fast radio burst came from a neutron star’s magnetosphere, say astronomers

24 janvier 2025 à 16:00

The exact origins of cosmic phenomena known as fast radio bursts (FRBs) are not fully understood, but scientists at the Massachusetts Institute of Technology (MIT) in the US have identified a fresh clue: at least one of these puzzling cosmic discharges got its start very close to the object that emitted it. This result, which is based on measurements of a fast radio burst called FRB 20221022A, puts to rest a long-standing debate about whether FRBs can escape their emitters’ immediate surroundings. The conclusion: they can.

“Competing theories argued that FRBs might instead be generated much farther away in shock waves that propagate far from the central emitting object,” explains astronomer Kenzie Nimmo of MIT’s Kavli Institute for Astrophysics and Space Research. “Our findings show that, at least for this FRB, the emission can escape the intense plasma near a compact object and still be detected on Earth.”

As their name implies, FRBs are brief, intense bursts of radio waves. The first was detected in 2007, and since then astronomers have spotted thousands of others, including some within our own galaxy. They are believed to originate from cataclysmic processes involving compact celestial objects such as neutron stars, and they typically last a few milliseconds. However, astronomers have recently found evidence for bursts a thousand times shorter, further complicating the question of where they come from.

Nimmo and colleagues say they have now conclusively demonstrated that FRB 20221022A, which was detected by the Canadian Hydrogen Intensity Mapping Experiment (CHIME) in 2022, comes from a region only 10 000 km in size. This, they claim, means it must have originated in the highly magnetized region that surrounds a star: the magnetosphere.

“Fairly intuitive” concept

The researchers obtained their result by measuring the FRB’s scintillation, which Nimmo explains is conceptually similar to the twinkling of stars in the night sky. Stars twinkle because they are so far away that they appear to us as point sources. This means that their apparent brightness is more affected by the Earth’s atmosphere than is the case for planets and other objects that are closer to us and appear larger.

“We applied this same principle to FRBs using plasma in their host galaxy as the ‘scintillation screen’, analogous to Earth’s atmosphere,” Nimmo tells Physics World. “If the plasma causing the scintillation is close to the FRB source, we can use this to infer the apparent size of the FRB emission region.”

According to Nimmo, different models of FRB origins predict very different sizes for this region. “Emissions originating within the magnetized environments of compact objects (for example, magnetospheres) would produce a much smaller apparent size compared to emission generated in distant shocks propagating far from the central object,” she explains. “By constraining the emission region size through scintillation, we can determine which physical model is more likely to explain the observed FRB.”

Challenge to existing models

The idea for the new study, Nimmo says, stemmed from a conversation with another astronomer, Pawan Kumar of the University of Texas at Austin, early last year. “He shared a theoretical result showing how scintillation could be used as a ‘probe’ to constrain the size of the FRB emission region, and, by extension, the FRB emission mechanism,” Nimmo says. “This sparked our interest and we began exploring the FRBs discovered by CHIME to search for observational evidence for this phenomenon.”

The researchers say that their study, which is detailed in Nature, shows that at least some FRBs originate from magnetospheric processes near compact objects such as neutron stars. This finding is a challenge for models of conditions in these extreme environments, they say, because if FRB signals can escape the dense plasma expected to exist near such objects, the plasma may be less opaque than previously assumed. Alternatively, unknown factors may be influencing FRB propagation through these regions.

A diagnostic tool

One advantage of studying FRB 20221022A is that it is relatively conventional in terms of its brightness and the duration of its signal (around 2 milliseconds). It does have one special property, however, as discovered by Nimmo’s colleagues at McGill University in Canada: its light is highly polarized. What is more, the pattern of its polarization implies that its emitter must be rotating in a way that is reminiscent of pulsars, which are highly magnetized, rotating neutron stars. This result is reported in a separate paper in Nature.

In Nimmo’s view, the MIT team’s study of this (mostly) conventional FRB establishes scintillation as a “powerful diagnostic tool” for probing FRB emission mechanisms. “By applying this method to a larger sample of FRBs, which we now plan to investigate, future studies could refine our understanding of their underlying physical processes and the diverse environments they occupy.”

The post Fast radio burst came from a neutron star’s magnetosphere, say astronomers appeared first on Physics World.

Terahertz light produces a metastable magnetic state in an antiferromagnet

24 janvier 2025 à 10:00

Physicists in the US, Europe and Korea have produced a long-lasting light-driven magnetic state in an antiferromagnetic material for the first time. While their project started out as a fundamental study, they say the work could have applications for faster and more compact memory and processing devices.

Antiferromagnetic materials are promising candidates for future high-density memory devices. This is because in antiferromagnets, the spins used as the bits or data units flip quickly, at frequencies in the terahertz range. Such rapid spin flips are possible because, by definition, the spins in antiferromagnets align antiparallel to each other, leading to strong interactions among the spins. This is different from ferromagnets, which have parallel electron spins and are used in today’s memory devices such as computer hard drives.

Another advantage is that antiferromagnets display almost no macroscopic magnetization. This means that bits can be packed more densely onto a chip than is the case for the ferromagnets employed in conventional magnetic memory, which do have a net magnetization.

A further attraction is that the values of bits in antiferromagnetic memory devices are generally unaffected by the presence of stray magnetic fields. However, Nuh Gedik of the Massachusetts Institute of Technology (MIT), who led the latest research effort, notes that this robustness can be a double-edged sword: the fact that antiferromagnet spins are insensitive to weak magnetic fields also makes them difficult to control.

Antiferromagnetic state lasts for more than 2.5 milliseconds

In the new work, Gedik and colleagues studied FePS3, which becomes an antiferromagnet below a critical temperature of around 118 K. By applying intense pulses of terahertz-frequency light to this material, they were able to control this transition, placing the material in a metastable magnetic state that lasts for more than 2.5 milliseconds even after the light source is switched off. While such light-induced transitions have been observed before, Gedik notes that they typically only last for picoseconds.

The technique works because the terahertz source stimulates the atoms in the FePS3 at the same frequency at which the atoms collectively vibrate (the resonance frequency). When this happens, Gedik explains that the atomic lattice undergoes a unique form of stretching. This stretching cannot be achieved with external mechanical forces, and it pushes the spins of the atoms out of their magnetically alternating alignment.

The result is a state in which spins pointing in one direction dominate, transforming the originally antiferromagnetic material into one with a net magnetization. This metastable state becomes increasingly robust as the temperature of the material approaches the antiferromagnetic transition point. That is a sign that critical fluctuations near the phase transition point are a key factor in enhancing both the magnitude and lifetime of the new magnetic state, Gedik says.

A new experimental setup

The team, which includes researchers from the Max Planck Institute for the Structure and Dynamics of Matter in Germany, the University of the Basque Country in Spain, Seoul National University and the Flatiron Institute in New York, wasn’t originally aiming to produce long-lived magnetic states. Instead, its members were investigating nonlinear interactions among low-energy collective modes, such as phonons (vibrations of the atomic lattice) and spin excitations called magnons, in layered magnetic materials like FePS3. It was for this purpose that they developed a new experimental setup capable of generating strong terahertz pulses with a wide spectral bandwidth.

“Since nonlinear interactions are generally weak, we chose a family of materials known for their strong coupling between magnetic spins and phonons,” Gedik says. “We also suspected that, under such intense resonant excitation in these particular materials, something intriguing might occur – and indeed, we discovered a new magnetic state with an exceptionally long lifetime.”

While the researchers’ focus remains on fundamental questions, they say the new findings may enable a “significant step” toward practical applications for ultrafast science. “The antiferromagnetic nature of the material holds great potential for enabling faster and more compact memory and processing devices,” says Gedik’s MIT colleague Batyr Ilyas. He adds that the observed long lifetime of the induced state means that it can be explored further using conventional experimental probes used in spintronic technologies.

The team’s next step will be to study the nonlinear interactions between phonons and magnons more closely using two-dimensional spectroscopy experiments. “Second, we plan to demonstrate the feasibility of probing this metastable state through electrical transport experiments,” Ilyas tells Physics World. “Finally, we aim to investigate the generalizability of this phenomenon in other materials, particularly those exhibiting enhanced fluctuations near room temperature.”

The work is detailed in Nature.

The post Terahertz light produces a metastable magnetic state in an antiferromagnet appeared first on Physics World.

New candidate emerges for a universal quantum electrical standard

23 janvier 2025 à 10:00

Physicists in Germany have developed a new way of defining the standard unit of electrical resistance. The advantage of the new technique is that because it is based on the quantum anomalous Hall effect rather than the ordinary quantum Hall effect, it does not require the use of applied magnetic fields. While the method in its current form requires ultracold temperatures, an improved version could allow quantum-based voltage and resistance standards to be integrated into a single, universal quantum electrical reference.

Since 2019, all base units in the International System of Units (SI) have been defined with reference to fundamental constants of nature. For example, the definition of the kilogram, which was previously based on a physical artefact (the international prototype kilogram), is now tied to Planck’s constant, h.

These new definitions do come with certain challenges. For example, today’s gold-standard way to experimentally determine the value of h (as well as the elementary charge e, another base SI constant) is to measure a quantized electrical resistance (the von Klitzing constant R_K = h/e²) and a quantized voltage (the Josephson constant K_J = 2e/h). With R_K and K_J pinned down, scientists can then calculate e and h.
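Inverting those two relations gives h = 4/(R_K K_J²) and e = 2/(R_K K_J). The short sketch below is a consistency check on that algebra, building R_K and K_J from the exact post-2019 SI values of h and e and then recovering the constants:

```python
# Inverting R_K = h / e**2 and K_J = 2 * e / h gives h = 4 / (R_K * K_J**2)
# and e = 2 / (R_K * K_J). Check this using the exact post-2019 SI values.
h_exact = 6.62607015e-34   # Planck constant in J s (exact since 2019)
e_exact = 1.602176634e-19  # elementary charge in C (exact since 2019)

R_K = h_exact / e_exact**2   # von Klitzing constant, ~25812.807 ohm
K_J = 2 * e_exact / h_exact  # Josephson constant, ~4.836e14 Hz/V

h = 4 / (R_K * K_J**2)
e = 2 / (R_K * K_J)
print(h, e)  # reproduces h_exact and e_exact
```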

To measure R_K with high precision, physicists use the fact that it is related to the quantized values of the Hall resistance of a two-dimensional electron system (such as the ones that form in semiconductor heterostructures) in the presence of a strong magnetic field. This quantized change in resistance is known as the quantum Hall effect (QHE), and in semiconductors like GaAs or AlGaAs, it shows up at fields of around 10 tesla. In graphene, a two-dimensional carbon sheet, fields of about 5 T are typically required.

The problem with this method is that K_J is measured by means of a separate phenomenon known as the AC Josephson effect, and the large external magnetic fields that are so essential to the QHE measurement render Josephson devices inoperable. According to Charles Gould of the Institute for Topological Insulators at the University of Würzburg (JMU), who led the latest research effort, this makes it difficult to integrate a QHE-based resistance standard with the voltage standard.

A way to measure R_K at zero external magnetic field

Relying on the quantum anomalous Hall effect (QAHE) instead would solve this problem. This variant of the QHE arises from electron transport phenomena recently identified in a family of materials known as ferromagnetic topological insulators. Such quantum spin Hall systems, as they are also known, conduct electricity along their (quantized) edge channels or surfaces, but act as insulators in their bulk. In these materials, spontaneous magnetization means the QAHE manifests as a quantization of resistance even at weak (or indeed zero) magnetic fields.

In the new work, Gould and colleagues made Hall resistance quantization measurements in the QAHE regime on a device made from V-doped (Bi,Sb)2Te3. These measurements showed that the relative deviation of the Hall resistance from R_K at zero external magnetic field is just (4.4 ± 8.7) nΩ Ω⁻¹. The method thus makes it possible to determine R_K at zero magnetic field with the needed precision — something Gould says was not previously possible.

The snag is that the measurement only works under demanding experimental conditions: extremely low temperatures (below about 0.05 K) and low electrical currents (below 0.1 µA). “Ultimately, both these parameters will need to be significantly improved for any large-scale use,” Gould explains. “To compare, the QHE works at temperatures of 4.2 K and electrical currents of about 10 µA, making it much easier and cheaper to operate.”

Towards a universal electrical reference instrument

The new study, which is detailed in Nature Electronics, was made possible thanks to a collaboration between two teams, he adds. The first is at Würzburg, which has pioneered studies on electron transport in topological materials for some two decades. The second is at the Physikalisch-Technische Bundesanstalt (PTB) in Braunschweig, which has been establishing QHE-based resistance standards for even longer. “Once the two teams became aware of each other’s work, the potential of a combined effort was obvious,” Gould says.

Because the project brings together two communities with very different working methods and procedures, they first had to find a window of operations where their work could co-exist. “As a simple example,” explains Gould, “the currents of ~100 nA used in the present study are considered extremely low for metrology, and extreme care was required to allow the measurement instrument to perform under such conditions. At the same time, this current is some 200 times larger than that typically used when studying topological properties of materials.”

As well as simplifying access to the constants h and e, Gould says the new work could lead to a universal electrical reference instrument based on the QAHE and the Josephson effect. Beyond that, it could even provide a quantum standard of voltage, resistance, and (by means of Ohm’s law) current, all in one compact experiment.

The possible applications of the QAHE in metrology have attracted a lot of attention from the European Union, he adds. “The result is a Europe-wide EURAMET metrology consortium QuAHMET aimed specifically at further exploiting the effect and operation of the new standard at more relaxed experimental conditions.”

The post New candidate emerges for a universal quantum electrical standard appeared first on Physics World.

Altermagnets imaged at the nanoscale

13 janvier 2025 à 14:00

A recently discovered class of magnets called altermagnets has been imaged in detail for the first time thanks to a technique developed by physicists at the University of Nottingham’s School of Physics and Astronomy in the UK. The team exploited the unique properties of altermagnetism to map the magnetic domains in the altermagnet manganese telluride (MnTe) down to the nanoscale, raising hopes that its unusual magnetic ordering could be controlled and exploited in technological applications.

In most magnetically-ordered materials, the spins of atoms (that is, their magnetic moments) have two options: they can line up parallel with each other, or antiparallel, alternating up and down. These arrangements arise from the exchange interaction between atoms, and lead to ferromagnetism and antiferromagnetism, respectively.

Altermagnets, which were discovered in 2024, are different. While their neighbouring spins are antiparallel, like an antiferromagnet, the atoms hosting these spins are rotated relative to their neighbours. This means that they combine some properties from both types of conventional magnetism. For example, the up, down, up ordering of their spins leads to a net magnetization of zero because – as in antiferromagnets – the spins essentially cancel each other out. However, their spin splitting is non-relativistic, as in ferromagnets.

Resolving altermagnetic states down to nanoscale

Working at the MAX IV international synchrotron facility in Sweden, a team led by Nottingham’s Peter Wadley used photoemission electron microscopy to detect the electrons emitted from the surface of MnTe when it was irradiated with a polarized X-ray beam.

“The emitted electrons depend on the polarization of the X-ray beam in ways not seen in other classes of magnetic materials,” explains Wadley, “and this can be used to map the magnetic domains in the material with unprecedented detail.”

Using this technique, the team was able to resolve altermagnetic states down to the nanoscale – from 100-nm-scale vortices and domain walls up to 10-μm-sized single-domain states. And that is not all: Wadley and colleagues found that they could control these features by cooling the material while a magnetic field is applied.

Potential uses of altermagnets

Magnetic materials are found in most long-term computer memory devices and in many advanced microchips, including those used for Internet of Things and artificial intelligence applications. If these materials were replaced with altermagnets, Wadley and colleagues say that the switching speed of microelectronic components and digital memory could increase by up to a factor of 1000, with lower energy consumption.

“The predicted properties of altermagnets make them very attractive from the point of view of fundamental research and applications,” Wadley tells Physics World. “With strong theoretical guidance from our collaborators at FZU Prague and the Max Planck Institute for the Physics of Complex Systems, we realised that our experience in materials development and magnetic imaging positioned us well to attempt to image and control altermagnetic domains.”

One of the main challenges the researchers faced was developing thin films of MnTe with surfaces of sufficiently high quality to allow them to detect the subtle X-ray spectroscopy signatures of the altermagnetic order. They hope that their study, detailed in Nature, will spur further interest in these materials.

“Altermagnets provide a new vista of predicted phenomena from unconventional domain walls to unique band structure effects,” Wadley says. “We are exploring these effects on multiple fronts and one of the major goals is to demonstrate a more efficient means of controlling the magnetic domains, for example, by applying electric currents rather than cooling them down.”

The post Altermagnets imaged at the nanoscale appeared first on Physics World.

Very thin films of a novel semimetal conduct electricity better than copper

13 janvier 2025 à 10:00

Metals usually become less conductive as they get thinner. Niobium phosphide, however, is different. According to researchers at Stanford University in the US, very thin films of this topological semimetal conduct electricity better than copper, even when the films are non-crystalline. This surprising result could aid the development of ultrathin low-resistivity wires for nanoelectronics applications.

“As today’s electronic devices and chips become smaller and more complex, the ultrathin metallic wires that carry electrical signals within these chips can become a bottleneck when they are scaled down,” explains study leader Asir Intisar Khan, a visiting postdoctoral scholar and former PhD student in Eric Pop’s group at Stanford.

The solution, he says, is to create ultrathin conductors with a lower electrical resistivity to make the metal interconnects that enable dense logic and memory operations within neuromorphic and spintronic devices. “Low resistance will lead to lower voltage drops and lower signal delays, ultimately helping to reduce power dissipation at the system level,” Khan says.

The problem is that the resistivity of conventional metals increases when they are made into thin films. The thinner the film, the worse it conducts electricity.

Topological semimetals are different

Topological semimetals are different. Analogous to the better-known topological insulators, which conduct electricity along special edge states while remaining insulating in their bulk, these materials can carry large amounts of current along their surface even when their structure is somewhat disordered. Crucially, they maintain this surface-conducting property even as they are thinned down.

In the new work, Khan and colleagues found that the effective resistivity of non-crystalline films of niobium phosphide (NbP) decreases dramatically as the film thickness is reduced. Indeed, the thinnest films (< 5 nm) have lower resistivities at room temperature than conventional metals such as copper at similar thicknesses.
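One way to see why surface conduction can invert the usual thickness trend is a toy model in which a fixed surface sheet conductance sits in parallel with a shrinking bulk channel. The sketch below uses hypothetical numbers and is only an illustration, not the Stanford team’s analysis:

```python
import numpy as np

# Toy parallel-conduction model: a film of thickness t carries current through
# a bulk channel of conductivity sigma_b plus two fixed surface sheet
# conductances G_s that do not shrink as the film thins.
sigma_b = 1.0e5   # bulk conductivity in S/m (hypothetical)
G_s = 5.0e-3      # sheet conductance per surface in S (hypothetical)

t = np.array([2e-9, 5e-9, 10e-9, 50e-9])   # film thicknesses in metres
rho_eff = t / (sigma_b * t + 2 * G_s)      # effective resistivity in ohm metres

for ti, ri in zip(t, rho_eff):
    print(f"t = {ti * 1e9:4.0f} nm -> rho_eff = {ri:.2e} ohm m")
# rho_eff falls as t shrinks: the fixed surface channels dominate thin films,
# inverting the usual "thinner means more resistive" behaviour.
```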

Another advantage is that these films can be created and deposited on substrates at relatively low temperatures (around 400 °C). This makes them compatible with modern semiconductor and chip fabrication processes such as industrial back-end-of-line (BEOL). Such materials would therefore be relatively easy to integrate into state-of-the-art nanoelectronics. The fact that the films are non-crystalline is also an important practical advantage.

A “huge” collaboration

Khan says he began thinking about this project in 2022 after discussions with a colleague, Ching-Tzu Chen, from IBM’s TJ Watson Research Center. “At IBM, they were exploring the theoretical concept of using topological semimetals for this purpose,” he recalls. “Upon further discussion with Prof. Eric Pop, we wanted to explore the possibility of experimental realization of thin films of such semimetals at Stanford.”

This turned out to be more difficult than expected, he says. While physicists have been experimenting with single crystals of bulk NbP and this class of topological semimetals since 2015, fabricating them as ultrathin films less than 5 nm thick, at temperatures and with deposition methods compatible with industrial nanoelectronic fabrication, was new. “We therefore had to optimize the deposition process from a variety of angles: substrate choice, strain engineering, temperature, pressure and stoichiometry, to name a few,” Khan tells Physics World.

The project turned out to be a “huge” collaboration in the end, with researchers from Stanford, Ajou University in Korea and IBM Watson all getting involved, he adds.

The researchers say they will now be running further tests on their material. “We also think NbP is not the only material with this property, so there’s much more to discover,” Pop says.

The results are detailed in Science.

The post Very thin films of a novel semimetal conduct electricity better than copper appeared first on Physics World.

Quasiparticles become massless – but only when they’re moving in the right direction

10 janvier 2025 à 10:00

Physicists at Penn State and Columbia University in the US say they have seen the “smoking gun” signature of an elusive quasiparticle predicted by theorists 16 years ago. Known as semi-Dirac fermions, the quasiparticles were spotted in a crystal of the topological semimetal ZrSiS and they have a peculiar property: they only behave like they have mass when they’re moving in a certain direction.

“When we shine infrared light on ZrSiS crystals and carefully measure the reflected light, we observed optical transitions that follow a unique power-law scaling, B^(2/3), with B being the magnetic field,” explains Yinming Shao, a physicist at Penn State and lead author of a study in Physical Review X on the quasiparticle. “This special power-law turns out to be the exact prediction from 16 years ago of semi-Dirac fermions.”

The team performed the experiments using the 17.5-tesla magnet at the US National High Magnetic Field Laboratory in Florida. This high field was crucial to the result, Shao explains, because applying a magnetic field to a material causes its electronic energy levels to become quantized into discrete (Landau) levels. The energy gap between these levels then depends on the electrons’ mass and the strength of the field.

Normally, the energy levels of the electrons should increase by set amounts as the magnetic field increases, but in this case they didn’t. Instead, they followed the B^(2/3) pattern.
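To illustrate how such a power law can be extracted, the sketch below generates synthetic Landau-level energies following the theoretically predicted semi-Dirac form, E_n ∝ [(n + 1/2)B]^(2/3), and recovers the exponent from a log-log fit. The field range and units are illustrative, not the measured data:

```python
import numpy as np

# Recover the B^(2/3) exponent from synthetic Landau-level energies.
# Theory predicts E_n ~ ((n + 1/2) * B)**(2/3) for semi-Dirac fermions,
# whereas a conventional Dirac system would give E_n ~ sqrt(n * B).
B = np.linspace(1.0, 17.5, 50)      # magnetic field in tesla
n = 1                               # Landau-level index
E = ((n + 0.5) * B) ** (2 / 3)      # energies in arbitrary units

slope, _ = np.polyfit(np.log(B), np.log(E), 1)
print(f"fitted power-law exponent: {slope:.3f}")  # -> 0.667, i.e. 2/3
```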

Realizing semi-Dirac fermions

Previous efforts to create semi-Dirac fermions relied on stretching graphene (a sheet of carbon just one atom thick) until the material’s two so-called Dirac points touch. These points occur in the region where the material’s valence and conduction bands meet. At these points, something special happens: the relationship between the energy and momentum of charge carriers (electrons and holes) in graphene is described by the Dirac equation, rather than the standard Schrödinger equation as is the case for most crystalline materials. The presence of these unusual band structures (known as Dirac cones) enables the charge carriers in graphene to behave like massless particles.

The problem is that making Dirac points touch in graphene turned out to require an unrealistically high level of strain. Shao and colleagues chose to work with ZrSiS instead because it also has Dirac points, but in this case, they exist continuously along a so-called nodal line. The researchers found evidence for semi-Dirac fermions at the crossing points of these nodal lines.
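The direction-dependent mass can be read off from the dispersion relation commonly used for semi-Dirac fermions in the theoretical literature, which is linear (massless) along one axis and quadratic (massive) along the other. Below is a minimal numerical sketch in natural units with illustrative parameters, not values fitted to ZrSiS:

```python
import numpy as np

# Semi-Dirac dispersion in its commonly quoted form:
# E = sqrt((hbar*v*kx)**2 + (hbar**2 * ky**2 / (2*m))**2).
hbar, v, m = 1.0, 1.0, 1.0  # natural units, illustrative values

def energy(kx, ky):
    return np.sqrt((hbar * v * kx) ** 2 + (hbar**2 * ky**2 / (2 * m)) ** 2)

k = np.linspace(0.0, 1.0, 5)
print("along k_x (linear, massless): ", energy(k, 0.0))   # grows like k
print("along k_y (quadratic, massive):", energy(0.0, k))  # grows like k^2
```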

Interesting optical responses

The idea for the study stemmed from an earlier project in which researchers investigating a similar compound, ZrSiSe, spotted some interesting optical responses when they applied a magnetic field to the material out-of-plane. “I found that similar band-structure features that make ZrSiSe interesting would require applying a magnetic field in-plane for ZrSiS, so we carried out this measurement and indeed observed many unexpected features,” Shao says.

The greatest challenge, he recalls, was figuring out how to interpret the observations, since real materials like ZrSiS have a much more complicated Fermi surface than the ones that feature in early theoretical models. “We collaborated with many different theorists and eventually singled out the signatures originating from semi-Dirac fermions in this material,” he says.

The team still has much to understand about the material’s behaviour, he tells Physics World. “There are some unexplained fine electronic energy-level splittings in the data that we do not fully understand yet, which may originate from electronic interaction effects.”

As for applications, Shao notes that ZrSiS is a layered material, much like graphite – a form of carbon that is, in effect, made up of many layers of graphene. “This means that once we can figure out how to obtain a single layer cut of this compound, we can harness the power of semi-Dirac fermions and control its properties with the same precision as graphene,” he says.

The post Quasiparticles become massless – but only when they’re moving in the right direction appeared first on Physics World.

Sun-like stars produce ‘superflares’ about once a century

9 janvier 2025 à 10:00

Stars like our own Sun produce “superflares” around once every 100 years, surprising astronomers who had previously estimated that such events occurred only every 3000 to 6000 years. The result, from a team of astronomers in Europe, the US and Japan, could be important not only for fundamental stellar physics but also for forecasting space weather.

The Sun regularly produces solar flares, which are energetic outbursts of electromagnetic radiation. Sometimes these flares are accompanied by eruptions of plasma known as coronal mass ejections. Both phenomena can trigger powerful solar storms when they interact with the Earth’s upper atmosphere, posing a danger to spacecraft and satellites as well as electrical grids and radio communications on the ground.

Despite their power, though, these events are much weaker than the “superflares” recently observed by NASA’s Kepler and TESS missions at other Sun-like stars in our galaxy. The most intense superflares release energies of about 10²⁵ J, which show up as short, sharp peaks in the stars’ visible light spectrum.

Observations from the Kepler space telescope

In the new study, which is detailed in Science, astronomers sought to find out whether our Sun is also capable of producing superflares, and if so, how often they happen. This question can be approached in two different ways, explains study first author Valeriy Vasilyev, a postdoctoral researcher at the Max Planck Institute for Solar System Research, Germany. “One option is to observe the Sun directly and record events, but it would take a very long time to gather enough data,” Vasilyev says. “The other approach is to study a large number of stars with characteristics similar to those of the Sun and extrapolate their flare activity to our Sun.”

The researchers chose the second option. Using a new method they developed, they analysed Kepler space telescope data on the fluctuations of more than 56,000 Sun-like stars between 2009 and 2013. This dataset, which is much larger and more representative than previous datasets because it is based on recent advances in our understanding of Sun-like stars, corresponds to around 220,000 years of solar observations.

The new technique can detect superflares and precisely localize them on the telescope images with sub-pixel resolution, Vasilyev says. It also accounts for how light propagates through the telescope’s optics as well as instrumental effects that could “contaminate” the data.

The team, which also includes researchers from the University of Graz, Austria; the University of Oulu, Finland; the National Astronomical Observatory of Japan; the University of Colorado Boulder in the US; and the Commissariat of Atomic and Alternative Energies of Paris-Saclay and the University of Paris-Cité, both in France – carefully analysed the detected flares. They checked for potential sources of error, such as those originating from unresolved binary stars, flaring M- and K-dwarf stars and fast-rotating active stars that might have been wrongly classified. Thanks to these robust statistical evaluations, they identified almost 3000 bright stellar flares in the population they observed – a detection rate that implies that superflares occur roughly once per century per star.
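The headline rate follows directly from the two figures quoted above, as this back-of-envelope check shows:

```python
# Back-of-envelope check using the figures in the text: almost 3000 flares
# seen in roughly 220,000 star-years of Kepler monitoring.
n_flares = 3000
star_years = 220_000  # ~56,000 Sun-like stars observed for about four years

rate_per_century = 100 * n_flares / star_years
print(f"~{rate_per_century:.1f} superflares per star per century")  # ~1.4
```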

Sun should also be capable of producing superflares

According to Vasilyev, the team’s results also suggest that solar flares and stellar superflares are generated by the same physical mechanisms. This is important because reconstructions of past solar activity, which are based on the concentrations of cosmogenic isotopes in terrestrial archives such as tree rings, tell us that our Sun occasionally experiences periods of higher or lower solar activity lasting several decades.

One example is the Maunder Minimum, a decades-long period during the 17th century when very few sunspots were recorded. At the other extreme, solar activity was comparatively high during the Modern Maximum that occurred around the mid-20th century. Based on the team’s analysis, Vasilyev says that “so-called grand minima and grand maxima are not regular but tend to cluster in time. This means that centuries could pass by without extreme solar flares, followed by several such events occurring over just a few years or decades.”

It is possible, he adds, that a superflare occurred in the past century but went unnoticed. “While we have no evidence of such an event, excluding it with certainty would require continuous and systematic monitoring of the Sun,” he tells Physics World. The most intense solar flare in recorded history, the so-called “Carrington event” of September 1859, was documented essentially by chance: “By the time he [the English astronomer Richard Carrington] called someone to show them the bright glow he observed (which lasted only a few minutes), the brightness had already faded.”

Between 1996 and 2002, when instruments provided direct measurements of total solar brightness with sufficient accuracy and temporal resolution, 12 flares with Carrington-like energies were detected. Had these flares been aimed at Earth, it is possible that they would have had similar effects, he says.

The researchers now plan to investigate the conditions required to produce superflares. “We will be extending our research by analysing data from next-generation telescopes, such as the European mission PLATO, which I am actively involved in developing,” Vasilyev says. “PLATO’s launch is due for the end of 2026 and will provide valuable information with which we can refine our understanding of stellar activity and even the impact of superflares on exoplanets.”

The post Sun-like stars produce ‘superflares’ about once a century appeared first on Physics World.

New method recycles quantum dots used in microscopic lasers

7 janvier 2025 à 15:00

Researchers at the University of Strathclyde, UK, have developed a new method to recycle the valuable semiconductor colloidal quantum dots used to fabricate supraparticle lasers. The recovered particles can be reused to build new lasers with a photoluminescence quantum yield almost as high as lasers made from new particles.

Supraparticle lasers are a relatively new class of micro-scale lasers that show much promise in applications such as photocatalysis, environmental sensing, integrated photonics and biomedicine. The active media in these lasers – the supraparticles – are made by assembling and densely packing colloidal quantum dots (CQDs) in the microbubbles formed in a surfactant-stabilized oil-and-water emulsion. The underlying mechanism is similar to the way that dish soap, cooking oil and water mix when we do the washing up, explains Dillon H Downie, a physics PhD student at Strathclyde and a member of the research team led by Nicolas Laurand.

Supraparticles have a high refractive index compared to their surrounding medium. Thanks to this difference, light at the interface between them experiences total internal reflection. This means that when the light circulating inside a supraparticle fits a whole number of wavelengths around its circumference – a condition met only for particular combinations of particle diameter and wavelength – so-called whispering gallery modes (resonant light waves that travel around a concave boundary) form within the supraparticles.

“The supraparticles are therefore microresonators made of an optical gain material (the quantum dots),” explains Downie, “and individual supraparticles can be made to lase by optically pumping them.”

Resonating and recyclable: Supraparticle lasers confine and amplify light through whispering gallery modes — resonant light waves circulating along a spherical boundary — inside a tiny sphere made from aggregated colloidal quantum dots. (Courtesy: Dillon H Downie, University of Strathclyde)
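For a feel for the numbers, one standard way to write the whispering-gallery condition is that the optical path around the circumference, πd·n_eff, equals a whole number m of vacuum wavelengths. The sketch below applies this relation with hypothetical values for the diameter and effective refractive index; the study’s actual parameters may differ:

```python
import numpy as np

# Whispering-gallery resonance sketch: m * lambda = pi * d * n_eff.
# Both parameter values below are hypothetical, chosen only for illustration.
n_eff = 1.8   # effective refractive index of the packed quantum-dot sphere
d = 5.0e-6    # supraparticle diameter in metres

m = np.arange(40, 46)                 # wavelengths per round trip
wavelengths = np.pi * d * n_eff / m   # resonant vacuum wavelengths in metres

for mi, wl in zip(m, wavelengths):
    print(f"m = {mi}: lambda = {wl * 1e9:.0f} nm")
```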

The problem is that many CQDs are made from expensive and sometimes toxic elements. Demand for these increasingly scarce elements will likely outstrip supply before the end of this decade, but at present, only 2% of quantum dots made from these rare-earth elements are recycled. While researchers have been exploring ways of recovering them from electronic waste, the techniques employed often require specialized instruments, complex bio-metallurgical absorbents and hazardous acid-leaching processes. A more environmentally friendly approach is thus sorely needed.

Exceptional recycling potential

In the new work, Laurand, Downie and colleagues recycled supraparticle lasers by first disassembling the CQDs in them. They did this by suspending the dots in an oil phase and applying high-frequency ultrasound and heat. They then added water to separate out the dots. Finally, they filtered and purified the disassembled CQDs and tested their fluorescence efficiency before reassembling them into a new laser configuration.

Using this process, the researchers were able to recover 85% of the quantum dots from the initial supraparticle batch. They also found that the recycled quantum dots boasted a photoluminescence quantum yield of 83 ± 16%, which is comparable to the 86 ± 9% for the original particles.

“By testing the lasers’ performance both before and after this process we confirmed their exceptional recycling potential,” Downie says.

Simple, practical technique

Downie describes the team’s technique as simple and practical even for research labs that lack specialized equipment such as centrifuges and scrubbers. He adds that it could also be applied to other self-assembled nanocomposites.

“As we expect nanoparticle aggregates in everything from wearable medical devices to ultrabright LEDs in the future, it is, therefore, not inconceivable that some of these could be sent back for specialized recycling in the same way we do with commercial batteries today,” he tells Physics World. “We may even see a future where rare-earth or some semiconductor elements become critically scarce, necessitating the recycling for any and all devices containing such valuable nanoparticles.”

By proving that supraparticles are reusable, Downie adds, the team’s method provides “ample justification” to anyone wishing to incorporate supraparticle technology into their devices. “This is seen as especially relevant if they are to be used in biomedical applications such as targeted drug delivery systems, which would otherwise be limited to single-use,” he says.

With work on colloidal quantum dots and supraparticle lasers maturing at an incredible rate, Downie adds that it is “fantastic to be able to mature the process of their recycling alongside this progress, especially at such an early stage in the field”.

The study is detailed in Optical Materials Express.

The post New method recycles quantum dots used in microscopic lasers appeared first on Physics World.

Cross-linked polymer is both stiff and stretchy

6 janvier 2025 à 12:33

A new foldable “bottlebrush” polymer network is both stiff and stretchy – two properties that have been difficult to combine in polymers until now. The material, which has a Young’s modulus of 30 kPa even when stretched up to 800% of its original length, could be used in biomedical devices, wearable electronics and soft robotics systems, according to its developers at the University of Virginia School of Engineering and Applied Science in the US.

Polymers are made by linking together building blocks of monomers into chains. To make polymers elastic, these chains are crosslinked by covalent chemical bonds. The crosslinks connect the polymer chains so that when a force is applied to stretch the polymer, it recovers its shape when the force is removed.

A polymer can be made stiffer by adding more crosslinks, which shortens the chains between them. The stiffness increases because the crosslinks suppress the thermal fluctuations of network strands, but this also makes the material brittle. This limitation has held back the development of materials that need both stiffness and stretchability, says materials scientist and engineer Liheng Cai, who led this new research effort.

Foldable bottlebrush polymers

In their new work, the researchers hypothesized that foldable bottlebrush-like polymers might not suffer from this problem. These polymers consist of many densely packed linear side chains randomly separated by small spacer monomers. There is a prerequisite, however: the side chains need to have a relatively high molecular weight (MW) and a low glass transition temperature (Tg), while the spacer monomer needs to have a low MW and be incompatible with the side chains. Achieving this requires control over the incompatibility between backbone and side-chain chemistries, explains Baiqiang Huang, who is a PhD student in Cai’s group.

The researchers discovered that poly(dimethyl siloxane) (PDMS) and benzyl methacrylate (BnMA) fit the bill here. PDMS is used as the side-chain material and BnMA as the spacer monomer. The two are highly incompatible and have very different Tg values of −100 °C and 54 °C, respectively.

When stretched, the collapsed backbone in the polymer unfolds to release the stored length, so allowing it to be “remarkably extensible”, write the researchers in Science Advances. In contrast, the stiffness of the material changes little thanks to the molecular properties of the side chains in the polymer, says Huang. “Indeed, in our experiments, we demonstrated a significant enhancement in mechanical performance, achieving a constant Young’s modulus of 30 kPa and a tensile breaking strain that increased 40-fold, from 20% to 800%, compared to standard polymers.”

And that is not all: the design of the new foldable bottlebrush polymer means that stiffness and stretchability can be controlled independently in a material for the first time.

Potential applications

The work will be important when it comes to developing next-generation materials with tailored mechanical properties. According to the researchers, potential applications include durable and flexible prosthetics, high-performance wearable electronics and stretchable materials for soft robotics and medical implants.

Looking forward, the researchers say they will now be focusing on optimizing the molecular structure of their polymer network to fine-tune its mechanical properties for specific applications. They also aim to incorporate functional metallic nanoparticles into the networks, so creating multifunctional materials with specific electrical, magnetic or optical properties. “These efforts will extend the utility of foldable bottlebrush polymer networks to a broader range of applications,” says Cai.

The post Cross-linked polymer is both stiff and stretchy appeared first on Physics World.

Solar wind squashed Uranus’s magnetosphere during Voyager 2 flyby

2 janvier 2025 à 10:45

Some of our understanding of Uranus may be false, say physicists at NASA’s Jet Propulsion Laboratory who have revisited Voyager 2 data before and after its 1986 flyby of this ice-giant planet. The new analyses could shed more light on some of the mysterious and hitherto unexplainable measurements made by the spacecraft. For example, why did it register a strongly asymmetric, plasma-free magnetosphere – something that is unheard of for planets in our solar system – and belts of highly energetic electrons?

Voyager 2 reached Uranus, the seventh planet in our solar system, 38 years ago. The spacecraft gathered its data in just five days and the discoveries from this one and, so far, only flyby provide most of our understanding of this ice giant. Two major findings that delighted astronomers were its 10 new moons and two rings. Other observations perplexed researchers, however.

One of these, explains Jamie Jasinski, who led this new study, was the observation of the second most intense electron radiation belt after Jupiter’s. How such a belt could be maintained or even exist at Uranus lacked an explanation until now. “The other mystery was that the magnetosphere did not have any plasma,” he says. “Indeed, we have been calling the Uranian magnetosphere a ‘vacuum magnetosphere’ because of how empty it is.”

Unrepresentative conditions

These observations, however, may not be representative of the conditions that usually prevail at Uranus, Jasinski explains, because they were simply made during an anomalous period. Indeed, just before the flyby, unusual solar activity squashed the planet’s magnetosphere down to about 20% of its original volume. Such a situation exists only very rarely and was likely responsible for creating a plasma-free magnetosphere with the observed highly excited electron radiation belts.

Jasinski and colleagues came to their conclusions by analysing Voyager 2 data of the solar wind (a stream of charged particles emanating from the Sun) upstream of Uranus for the few days before the flyby started. They saw that the dynamic pressure of the solar wind increased by a factor of 20, meaning that it dramatically compressed the magnetosphere of Uranus. They then looked at eight months of solar wind data obtained by the spacecraft at Uranus’ orbit and found that the solar wind conditions present during the flyby only occur 4% of the time.
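That 20% figure is consistent with simple pressure balance: the magnetopause sits where the planet’s dipole magnetic pressure, which falls off as r⁻⁶, matches the solar-wind dynamic pressure P, so the standoff distance scales as P^(−1/6) and the enclosed volume as P^(−1/2). The sketch below checks those scalings; it is a consistency check, not the authors’ model:

```python
# Standard dipole magnetopause scaling: magnetic pressure ~ r**-6 balances
# the solar-wind dynamic pressure P, so the standoff distance goes as
# P**(-1/6) and the magnetospheric volume as r**3 ~ P**(-1/2).
pressure_increase = 20.0  # factor reported for the solar-wind dynamic pressure

standoff_ratio = pressure_increase ** (-1 / 6)  # new/old standoff distance
volume_ratio = pressure_increase ** (-1 / 2)    # new/old magnetospheric volume

print(f"standoff distance shrinks to {standoff_ratio:.0%} of its usual value")
print(f"volume shrinks to {volume_ratio:.0%} -- close to the ~20% quoted")
```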

“The flyby therefore occurred during the maximum peak solar wind intensity in that entire eight-month period,” explains Jasinski.

The scientific picture we have had of Uranus since the Voyager 2 flyby is that it has an extreme magnetospheric environment, he says. But the flyby may simply have coincided with a period of unusual solar activity, rather than capturing the planet’s typical state.

The timing was just wrong

Jasinski previously worked on NASA’s MESSENGER mission to Mercury. Out of the thousands of orbits made by this spacecraft around the planet over a four-year period, there were occasional times where activity from the Sun completely eroded the entire magnetic field. “That really highlighted for me that if we had made an observation during one of those events, we would have a very different idea of Mercury.”

Following this line of thought, Jasinski asked himself whether we had simply observed Uranus during a similar anomalous time. “The Voyager 2 flyby lasted just five days, so we may have observed Uranus at just the ‘wrong time’,” he says.

One of the most important take-home messages from this study is that we can’t take the results from just one flyby as being a good representation of the Uranus system, he tells Physics World. Future missions must therefore be designed so that a spacecraft remains in orbit for a few years, enabling variations to be observed over long time periods.

Why we need to go back to Uranus

One of the reasons that we need to go back to Uranus, Jasinski says, is to find out whether any of its moons have subsurface liquid oceans. To observe such oceans with a spacecraft, the moons need to be inside the magnetosphere. This is because the magnetosphere, as it rotates, provides a predictable, steadily varying magnetic field at the moon. This field can then induce a magnetic field response from the ocean that can be measured by the spacecraft. The conductivity of the ocean – and therefore the magnetic signal from the moon – will vary with the depth, thickness and salinity of the ocean.

If the moon is outside the magnetosphere, this steady and predictable external field is absent and can no longer drive the induction response, so the spacecraft cannot detect a magnetic field from the ocean.

Before these latest results, researchers thought that the outermost moons, Titania and Oberon, would spend a significant part of their orbit around the planet outside of the magnetosphere, Jasinski explains. This is because we thought that Uranus’s magnetosphere was generally small. However, in light of the new findings, this is probably not true and both moons will orbit inside the magnetosphere since it is much larger than previously thought.

Titania and Oberon are the most likely candidates for harbouring oceans, he adds, because they are slightly larger than the other moons. This means that they can retain heat better and therefore be warmer and less likely to be completely frozen.

“A future mission to Uranus is critical in collecting the scientific measurements to answer some of the most intriguing science questions in our solar system,” says Jasinski. “Only by going back to Uranus and orbiting the planet can we really gain an understanding of this curious planet.”

Happily, in 2022, the US National Academies outlined that a Uranus Orbiter and Probe mission should be a future NASA flagship mission that NASA should prioritize in the coming decade. Such a mission would help us unravel the nature of Uranus’s magnetosphere and its interaction with the planet’s atmosphere, moons and rings, and with the solar wind. “Of course, modern instrumentation would also revolutionize the type of discoveries we would make compared to previous missions,” says Jasinski.

The present study is detailed in Nature Astronomy.

The post Solar wind squashed Uranus’s magnetosphere during Voyager 2 flyby appeared first on Physics World.

Supramolecular biomass foam removes microplastics from water

23 décembre 2024 à 15:00

A reusable and biodegradable fibrous foam developed by researchers at Wuhan University in China can remove up to 99.8% of microplastics from polluted water. The foam, which is made from a self-assembled network of chitin and cellulose obtained from biomass wastes, has been successfully field-tested in four natural aquatic environments.

The amount of plastic waste in the environment has reached staggering levels and is now estimated at several billion metric tons. This plastic degrades extremely slowly and poses a hazard for ecosystems throughout its lifetime. Aquatic life is particularly vulnerable, as micron-sized plastic particles can combine with other pollutants in water and be ingested by a wide range of organisms. Removing these microplastic particles would help limit the damage, but standard filtration technologies are ineffective as the particles are so small.

A highly porous interconnected structure

The new adsorbent developed by Wuhan’s Hongbing Deng and colleagues consists of intertwined beta-chitin nanofibre sheets (obtained from squid bone) with protonated amines and suspended cellulose fibres (obtained from cotton). The material contains a number of functional groups, including –OH, –NH3+ and –NHCO–, that allow it to self-assemble into a highly porous interconnected network.

This self-assembly is important, Deng tells Physics World, because it means the foam does not require “complex processing (no cross-linking and minimal use of chemical reagents) or adulteration with toxic or expensive substances”.

The functional groups make the surface of the foam rough and positively charged, providing numerous sites that can interact with and adsorb plastic particles ranging in size from less than 100 nm to over 1000 μm. Deng explains that multiple mechanisms are at work during this process, including physical interception, electrostatic attraction and intermolecular interactions such as hydrogen bonding and van der Waals forces, as well as weaker contacts between OH and CH groups.
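Adsorption studies of this kind typically quantify uptake with isotherm fits. As an illustration only, the sketch below implements the standard Langmuir model with placeholder parameters (the authors may well use different models and values) to show how the adsorbed amount saturates as surface sites fill up.

```python
def langmuir(c_eq, q_max=200.0, k_l=0.05):
    """Langmuir isotherm: adsorbed amount q (mg/g) versus equilibrium
    concentration c_eq (mg/L), assuming a finite number of identical,
    independent surface sites. q_max and k_l are placeholder values,
    not parameters fitted to the Wuhan data.
    """
    return q_max * k_l * c_eq / (1 + k_l * c_eq)

for c in (1, 10, 50, 200, 1000):  # equilibrium concentrations in mg/L
    print(f"c_eq = {c:5d} mg/L -> q = {langmuir(c):6.1f} mg/g")
```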

The researchers tested their foam in lake water, coastal water, still water (a small pond) and water used for agricultural irrigation. They also combined these systematic adsorption experiments with molecular dynamics (MD) simulations and independent gradient model based on Hirshfeld partition (IGMH) calculations to better understand how the foam works.

They found that the foam can adsorb a variety of nanoplastics and microplastics, including the polystyrene, polymethyl methacrylate, polypropylene and polyethylene terephthalate found in everyday objects such as electronic components, food packaging and textiles. Importantly, the foam can adsorb these plastics even in water bodies polluted with toxic metals such as lead and with chemical dyes. It adsorbed nearly 100% of the particles in its first cycle and around 96–98% of the particles in each of the following five cycles.

“The great potential of biomass”

Because the raw materials needed to make the foam are readily available, and the fabrication process is straightforward, Deng thinks it could be produced on a large scale. “Other microplastic removal materials made from biomass feedstocks have been reported in recent years, but some of these needed to be functionalized with other chemicals,” he says. “Such treatments can increase costs or hinder their large-scale production.”

 Deng and his team have applied for a patent on the material and are now looking for industrial partners to help them produce it. In the meantime, he hopes the work will help draw attention to the microplastic problem and convince more scientists to work on it. “We believe that the great potential of biomass will be recognized and that the use of biomass resources will become more diverse and thorough,” he says.

The present work is described in Science Advances.

The post Supramolecular biomass foam removes microplastics from water appeared first on Physics World.

Immiscible ice layers may explain why Uranus and Neptune lack magnetic poles

17 December 2024 at 14:26

When the Voyager 2 spacecraft flew past Uranus and Neptune in 1986 and 1989, it detected something strange: neither of these “ice giant” planets has a well-defined north and south magnetic pole. This absence has remained mysterious ever since, but simulations performed at the University of California, Berkeley (UCB) in the US have now suggested an explanation. According to UCB planetary scientist Burkhard Militzer, the disorganized magnetic fields of Uranus and Neptune may arise from a separation of the icy fluids that make up their interiors. The theory could be tested in laboratory experiments of fluids at high pressures, as well as by a proposed mission to Uranus in the 2040s.

On Earth, the dipole magnetic field that loops from the North Pole to the South Pole arises from convection in the planet’s liquid-iron outer core. Since Uranus and Neptune lack such a dipole field, this implies that the convective movement of material in their interiors must be very different.

In 2004, planetary scientists Sabine Stanley and Jeremy Bloxham suggested that the planets’ interiors might contain immiscible layers. This separation would make widespread convection impossible, preventing a global dipolar magnetic field from forming, while convection in just one layer would produce the disorganized magnetic field that Voyager 2 observed. The nature of these non-mixing layers, however, remained unexplained, in part because of a lack of data.

“Since both planets have been visited by only one spacecraft (Voyager 2), we do not have many measurements to analyse,” Militzer says.

Two immiscible fluids

To investigate conditions deep beneath Uranus and Neptune’s icy surfaces, Militzer developed computer models to simulate how a mixture of water, methane and ammonia will behave at the temperatures (above 4750 K) and pressures (above 3 × 10⁶ atmospheres) that prevail there. The results surprised him. “One morning, I opened my laptop,” he recalls. “When I started analysing my latest simulations, I could not believe my eyes. An initially homogeneous mixture of water, methane and ammonia had separated into two distinct layers.”

The upper layer, he explains, is thin, rich in water and convecting, which allows it to generate the disordered magnetic field. The lower layer is magnetically inactive and composed of carbon, nitrogen and hydrogen. “This had never been observed before and I could tell right then that this result might allow us to understand what has been going on in the interiors of Uranus and Neptune,” he says.

A plastic polymer-like and a water-rich layer

Militzer’s model, which he describes in PNAS, shows that the hydrogen content of the methane-ammonia mixture gradually decreases with depth, transforming it into a C-N-H fluid. This C-N-H layer is almost like a plastic polymer, Militzer explains, and cannot support even a disorganized magnetic field – unlike the upper, water-rich layer, which likely convects.

A future mission to Uranus with the right instruments on board could provide observational evidence for this structure, Militzer says. “I would advocate for a Doppler imager so we can detect the planet’s natural oscillation frequencies,” he tells Physics World. Though such instruments are expensive and heavy, he says they are essential for detecting the two predicted ice layers in Uranus’s interior: “Like one can distinguish between an oboe and a clarinet, these frequencies can tell [us] about a planet’s interior structure.”

A follow-up to Voyager 2 could also reveal how the ice giants’ structures have evolved since they formed 4.5 billion years ago. Initially, their interiors would have contained only a single ice layer, and this layer would have generated a strong dipolar magnetic field with well-defined north and south poles. “Then, at some point, this ice separated into two distinct layers and their magnetic field switched from dipolar to disordered fields that we see today,” Militzer explains.

Determining when this switch occurred would help us understand not only Uranus and Neptune, but also ice giants orbiting stars other than our Sun. “The most common exoplanets discovered to date are around the same size as Uranus and Neptune, so when we observe the magnetic field of such ‘sub-Neptune’ exoplanets in the future, we might be able to say something about their age,” Militzer says.

In the near term, Militzer hopes that experimentalists will be able to test his theory in fluid systems at extremely high temperatures and pressures that mimic the proportions of elements found on Uranus and Neptune. But his long-term hopes are pinned on a new mission that could detect the predicted layers directly. “While I will have long retired when such a detection might eventually be made, I would be so happy to see it in my lifetime,” he says.

The post Immiscible ice layers may explain why Uranus and Neptune lack magnetic poles appeared first on Physics World.

Laser beam casts a shadow in a ruby crystal

16 December 2024 at 14:00

Particles of light – photons – are massless, so they normally pass right through each other. This generally means they can’t cast a shadow. In a new work, however, physicist Jeff Lundeen of the University of Ottawa, Canada, and colleagues have shown that a laser beam can, in fact, cast a shadow when it is illuminated by another light source while passing through a highly nonlinear medium. As well as being important for basic science, the work could have applications in laser fabrication and imaging.

The light-shadow experiment began when physicists led by Raphael Akel Abrahao sent a high-power beam of green laser light through a cube-shaped ruby crystal. They then illuminated this beam from the side with blue light and observed that the beam cast a shadow on a piece of white paper. This shadow extended through an entire face of the crystal. Writing in Optica, they note that “under ordinary circumstances, photons do not interact with each other, much less block each other as needed for a shadow.” What was going on?

Photon-photon interactions

The answer, they explain, boils down to some unusual photon-photon interactions that take place in media that absorb light in a highly nonlinear way. While several materials fit this basic description, most become saturated at high laser intensities. This means they become more transparent in the presence of a strong laser field, producing an “anti-shadow” that is even brighter than the background – the opposite of what the team was looking for.

What they needed, instead, was a material that absorbs more light at higher optical intensities. Such behaviour is known as “reverse saturation of absorption” or “saturable transmission”, and it only occurs if four conditions are met. Firstly, the light-absorbing system needs to have two electronic energy levels: a ground state and an excited state. Secondly, the transition from the ground to the excited state must be less strong (technically, it must have a smaller cross-section) than the transition from the first excited state to a higher excited state. Thirdly, after the material absorbs light, neither the first nor the second excited state should decay back to other levels when the light is re-emitted. Finally, the incident light should only saturate the first transition.

Shadow experiment: a high-power green laser beam is directed through a ruby cube and illuminated from the side with a blue laser beam. The green beam increases the crystal’s absorption of the blue illuminating light, creating a darker region in the blue illumination that appears as a shadow of the green beam. (Courtesy: R. A. Abrahao, H. P. N. Morin, J. T. R. Pagé, A. Safari, R. W. Boyd, J. S. Lundeen)

That might sound like a tall order, but it turns out that ruby fits the bill. Ruby is an aluminium oxide crystal that contains impurities of chromium atoms. These impurities distort its crystal lattice and give it its familiar red colour. When green laser light (532 nm) is applied to ruby, it drives an electronic transition from the ground state (denoted 4A2) to an excited state 4T2. This excited state then decays rapidly via phonons (vibrations of the crystal lattice) to the 2E state.

At this point, the electrons absorb blue light (450 nm) and transition from 2E to a different excited state, denoted 2T1. While electrons in the 4A2 state could, in principle, absorb blue light directly, without any intermediate step, the absorption cross-section of the transition from 2E to 2T1 is larger, Abrahao explains.

The result is that in the presence of the green laser beam, the ruby absorbs more of the illuminating blue light. This leaves behind a lower-optical-intensity region of blue illumination within the ruby – in other words, the green laser beam’s shadow.
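The logic of this two-step process can be captured in a minimal steady-state rate model, sketched below under the assumption that the green pump parks a saturable fraction of the chromium ions in the metastable 2E level, from which the blue probe is absorbed more strongly than from the ground state. All cross-sections, lifetimes and densities are placeholders, not the paper’s measured values.

```python
import numpy as np

H = 6.626e-34  # Planck constant (J s)
C = 3.0e8      # speed of light (m/s)

def n2e_fraction(i_green, sigma_g=1e-23, tau=3e-3, lam=532e-9):
    """Steady-state fraction of Cr3+ ions in the metastable 2E level.

    i_green -- green pump intensity (W/m^2)
    sigma_g -- pump absorption cross-section (m^2), placeholder
    tau     -- 2E lifetime (s); ruby's is a few milliseconds
    lam     -- pump wavelength (m)
    """
    rate = sigma_g * i_green * lam / (H * C)  # pump rate per ion (1/s)
    return rate * tau / (1 + rate * tau)      # saturable two-level form

def blue_absorption(i_green, n_cr=1.6e25, sigma_gs=1e-24, sigma_es=5e-24):
    """Effective absorption coefficient (1/m) seen by the blue probe,
    combining weak ground-state and stronger excited-state absorption."""
    f = n2e_fraction(i_green)
    return n_cr * ((1 - f) * sigma_gs + f * sigma_es)

for i in (1e4, 1e6, 1e8):  # green intensities in W/m^2
    print(f"I_green = {i:8.0e} W/m^2 -> alpha_blue = {blue_absorption(i):5.1f} /m")
```

In this toy model the blue absorption climbs with green intensity and saturates once most ions sit in 2E, which is exactly the reverse-saturation behaviour the shadow relies on.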

Shadow behaves like an ordinary shadow

This laser shadow behaves like an ordinary shadow in many respects. It follows the shape of the object (the green laser beam) and conforms to the contours of the surfaces it falls on. The team also developed a theoretical model that predicts that the darkness of the shadow will increase as a function of the power of the green laser beam. In their experiment, the maximum contrast was 22% – a figure that Abrahao says is similar to a typical shadow on a sunny day. He adds that it could be increased in the future.
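For reference, shadow contrast is conventionally defined relative to the surrounding illumination (the paper may use an equivalent form):

$$C = \frac{I_{\text{background}} - I_{\text{shadow}}}{I_{\text{background}}}$$

so a contrast of 22% means the shadow region receives about 78% of the blue light falling on its surroundings.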

Lundeen offers another way of looking at the team’s experiment. “Fundamentally, a light wave is actually composed of a hybrid particle made up of light and matter, called a polariton,” he explains. “When light travels in a glass or crystal, both aspects of the polariton are important and, for example, explain why the wave travels more slowly in these media than in vacuum. In the absence of either part of the polariton, either the photon or atom, there would be no shadow.”

Strictly speaking, it is therefore not massless light that is creating the shadow, but the material component of the polariton, which has mass, adds Abrahao, who is now a postdoctoral researcher at Brookhaven National Laboratory in the US.

As well as helping us to better understand light-matter interactions, Abrahao tells Physics World that the experiment “could also come in useful in any device in which we need to control the transmission of a laser beam with another laser beam”. The team now plans to search for other materials and combinations of wavelengths that might produce a similar “laser shadow” effect.

The post Laser beam casts a shadow in a ruby crystal appeared first on Physics World.

Generative AI has an electronic waste problem, researchers warn

13 December 2024 at 10:34

The rising popularity of generative artificial intelligence (GAI), and in particular large language models such as ChatGPT, could produce a significant surge in electronic waste, according to new analyses by researchers in Israel and China. Without mitigation measures, the researchers warn that this stream of e-waste could reach 2.5 million tons (2.2 billion kg) annually by 2030, and potentially even more.

“Geopolitical factors, such as restrictions on semiconductor imports, and the trend for rapid server turnover for operational cost saving, could further exacerbate e-waste generation,” says study team member Asaf Tzachor, who studies existential risks at Reichman University in Herzliya, Israel.

GAI, or Gen AI, is a form of artificial intelligence that creates new content such as text, images, music or videos using patterns it has learned from existing data. Some of the principles that make this pattern-based learning possible were developed by the physicist John Hopfield, who shared the 2024 Nobel Prize for Physics with computer scientist and AI pioneer Geoffrey Hinton. Perhaps the best-known example of Gen AI is ChatGPT (the “GPT” stands for “generative pre-trained transformer”), which is an example of a large language model (LLM).

While the potential benefits of LLMs are significant, they come at a price. Notably, they require so much energy to train and operate that some major players in the field, including Google and ChatGPT developer OpenAI, are exploring the possibility of building new nuclear reactors for this purpose.

Quantifying and evaluating Gen AI’s e-waste problem

Energy use is not the only environmental challenge associated with Gen AI, however. The amount of e-waste it produces – including printed circuit boards and batteries that can contain toxic materials such as lead and chromium – is also a potential issue. “While the benefits of AI are well-documented, the sustainability aspects, and particularly e-waste generation, have been largely overlooked,” Tzachor says.

Tzachor and his colleagues decided to address what they describe as a “significant knowledge gap” regarding how GAI contributes to e-waste. Led by sustainability scientist Peng Wang at the Institute of Urban Environment, Chinese Academy of Sciences, they developed a computational power-driven material flow analysis (CP-MFA) framework to quantify and evaluate the e-waste it produces. This involved modelling the computational resources required for training and deploying LLMs, explains Tzachor, and translating these resources into material flows and e-waste projections.
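The CP-MFA framework itself is more elaborate, but the chain of reasoning can be caricatured in a few lines: assume a compute demand, convert it into a server count, and let the replacement cycle set the mass of retired hardware. Every number in the sketch below is a placeholder assumption, not a figure from the study.

```python
def ewaste_per_year(demand_pflops, server_pflops=2.0,
                    server_mass_kg=30.0, lifespan_years=3.0):
    """Rough annual e-waste (kg) from an AI server fleet.

    demand_pflops  -- sustained compute the fleet must supply (PFLOP/s)
    server_pflops  -- throughput of one server (PFLOP/s), assumed
    server_mass_kg -- mass of one server (kg), assumed
    lifespan_years -- replacement cycle; faster turnover means more waste
    """
    n_servers = demand_pflops / server_pflops
    retired_per_year = n_servers / lifespan_years
    return retired_per_year * server_mass_kg

# Halving the server lifespan doubles the waste stream for a fixed demand
for life in (4.0, 2.0):
    kg = ewaste_per_year(demand_pflops=1e5, lifespan_years=life)
    print(f"lifespan {life:.0f} yr -> ~{kg/1e3:.0f} t of retired servers per year")
```

Even this toy version shows why rapid server turnover and extended hardware lifespans matter so much in the scenarios discussed below.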

“We considered various future scenarios of GAI development, ranging from the most aggressive to the most conservative growth,” he tells Physics World. “We also incorporated factors such as geopolitical restrictions and server lifecycle turnover.”

Using this CP-MFA framework, the researchers estimate that the total amount of Gen AI-related e-waste produced between 2023 and 2030 could reach 5 million tons in a “worst-case” scenario in which AI finds the most widespread applications.

A range of mitigation measures

That worst-case scenario is far from inevitable, however. Writing in Nature Computational Science, the researchers also modelled the effectiveness of different e-waste management strategies. Among the strategies they studied were increasing the lifespan of existing computing infrastructures through regular maintenance and upgrades; reusing or remanufacturing key components; and improving recycling processes to recover valuable materials in a so-called “circular economy”.

Taken together, these strategies could reduce e-waste generation by up to 86%, according to the team’s calculations. Investing in more energy-efficient technologies and optimizing AI algorithms could also significantly reduce the computational demands of LLMs, Tzachor adds, and would reduce the need to update hardware so frequently.

Another mitigation strategy would be to design AI infrastructure in a way that uses modular components, which Tzachor says are easier to upgrade and recycle. “Encouraging policies that promote sustainable manufacturing practices, responsible e-waste disposal and extended producer responsibility programmes can also play a key role in reducing e-waste,” he explains.

As well as helping policymakers create regulations that support sustainable AI development and effective e-waste management, the study should also encourage AI developers and hardware manufacturers to adopt circular economy principles, says Tzachor. “On the academic side, it could serve as a foundation for future research aimed at exploring the environmental impacts of AI applications other than LLMs and developing more comprehensive sustainability frameworks in general.”

The post Generative AI has an electronic waste problem, researchers warn appeared first on Physics World.
