
Photovoltaic battery runs on nuclear waste

4 April 2025 at 14:50

Scientists in the US have developed a new type of photovoltaic battery that runs on the energy given off by nuclear waste. The battery uses a scintillator crystal to transform the intense gamma rays from radioisotopes into electricity and can produce more than a microwatt of power. According to its developers at Ohio State University and the University of Toledo, it could be used to power microelectronic devices such as microchips.

The idea of a nuclear waste battery is not new. Indeed, Raymond Cao, the Ohio State nuclear engineer who led the new research effort, points out that the first experiments in this field date back to the early 1950s. These studies, he explains, used a 50 millicurie ⁹⁰Sr–⁹⁰Y source to produce electricity via the electron-voltaic effect in p–n junction devices.

However, the maximum power output of these devices was just 0.8 μW, and their power conversion efficiency (PCE) was an abysmal 0.4%. Since then, the PCE of nuclear voltaic batteries has remained low, typically in the 1–3% range, and even the most promising devices have produced, at best, a few hundred nanowatts of power.
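For reference, the power conversion efficiency is simply electrical power out divided by radiation power deposited. The two early figures quoted above together imply roughly 200 μW of input power; that input value is an inference from the arithmetic, not a number given in the article:

```python
def pce_percent(p_out_w, p_in_w):
    """Power conversion efficiency: electrical power out over radiation power in, as %."""
    return 100.0 * p_out_w / p_in_w

# The 1950s electron-voltaic device delivered 0.8 uW at 0.4% PCE,
# which implies roughly 200 uW of radiation power deposited in the junction:
p_out = 0.8e-6
implied_p_in = p_out / 0.004
print(implied_p_in)                      # ~2e-4 W, i.e. about 200 uW
print(pce_percent(p_out, implied_p_in))  # ~0.4
```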

Exploiting the nuclear photovoltaic effect

Cao is confident that his team’s work will change this. “Our yet-to-be-optimized battery has already produced 1.5 μW,” he says, “and there is much room for improvement.”

To achieve this benchmark, Cao and colleagues focused on a different physical process called the nuclear photovoltaic effect. This effect captures the energy from highly penetrating gamma rays indirectly, by coupling a photovoltaic solar cell to a scintillator crystal that emits visible light when it absorbs radiation. This radiation can come from several possible sources, including nuclear power plants, storage facilities for spent nuclear fuel, space- and submarine-based nuclear reactors or, really, anywhere that happens to have large amounts of gamma-ray-producing radioisotopes on hand.

The scintillator crystal Cao and colleagues used is gadolinium aluminium garnet (GAGG), and they attached it to a solar cell made from polycrystalline CdTe. The resulting device measures around 2 × 2 × 1 cm, and they tested it using intense gamma rays emitted by two different radioactive sources, ¹³⁷Cs and ⁶⁰Co, that produced 1.5 kRad/h and 10 kRad/h, respectively. ¹³⁷Cs is the most common fission product found in spent nuclear fuel, while ⁶⁰Co is an activation product.

Enough power for a microsensor

The Ohio–Toledo team found that the maximum power output of their battery was around 288 nW with the ¹³⁷Cs source. Using the ⁶⁰Co irradiator boosted this to 1.5 μW. “The greater the radiation intensity, the more light is produced, resulting in increased electricity generation,” Cao explains.
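Cao's point about intensity can be checked against the two reported operating points: scaling the ¹³⁷Cs output linearly with dose rate lands in the same ballpark as the measured ⁶⁰Co figure. A quick sketch of that arithmetic:

```python
# Reported operating points: (source dose rate in kRad/h, output power in W)
p_cs137 = 288e-9   # 137Cs source at 1.5 kRad/h
p_co60 = 1.5e-6    # 60Co source at 10 kRad/h

# If output scaled perfectly linearly with dose rate, the 60Co point would sit at:
linear_prediction = p_cs137 * (10.0 / 1.5)
print(linear_prediction)  # ~1.9e-06 W, close to the measured 1.5 uW
```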

The higher figure is already enough to power a microsensor, he says, and he and his colleagues aim to scale the system up to milliwatts in future efforts. However, they acknowledge that doing so presents several challenges. Scaling up the technology will be expensive, and gamma radiation gradually damages both the scintillator and the solar cell. To overcome the latter problem, Cao says they will need to replace the materials in their battery with new ones. “We are interested in finding alternative scintillator and solar cell materials that are more radiation-hard,” he tells Physics World.

The researchers are optimistic, though, arguing that optimized nuclear photovoltaic batteries could be a viable option for harvesting ambient radiation that would otherwise be wasted. They report their work in Optical Materials X.

The post Photovoltaic battery runs on nuclear waste appeared first on Physics World.


Zwitterions make medical implants safer for patients

2 April 2025 at 17:00

A new technique could reduce the risk of blood clots associated with medical implants, making them safer for patients. The technique, which was developed by researchers at the University of Sydney, Australia, involves coating the implants with highly hydrophilic molecules known as zwitterions, thereby inhibiting the build-up of clot-triggering proteins.

Proteins in blood can stick to the surfaces of medical implants such as heart valves and vascular stents. When this happens, it produces a cascade effect in which multiple mechanisms lead to the formation of extensive clots and fibrous networks. These clots and networks can impair the function of implanted medical devices so much that invasive surgery may be required to remove or replace the implant.

To prevent this from happening, the surfaces of implants are often treated with polymeric coatings that resist biofouling. Hydrophilic polymeric coatings such as polyethylene glycol are especially useful, as their water-loving nature allows a thin layer of water to form between them and the surface of the implants, held in place via hydrogen and/or electrostatic bonds. This water layer forms a barrier that prevents proteins from sticking, or adsorbing, to the implant.

An extra layer of zwitterions

Recently, researchers discovered that polymers coated with an extra layer of small molecules called zwitterions provided even more protection against protein adsorption. “Zwitter” means “hybrid” in German; hence, zwitterions are molecules that carry both positive and negative charge, making them neutrally charged overall. These molecules are also very hydrophilic and easily form tight bonds with water molecules. The resulting layer of water has a structure that is similar to that of bulk water, which is energetically stable.

A further attraction of zwitterionic coatings for medical implants is that zwitterions are naturally present in our bodies. In fact, they make up the hydrophilic phospholipid heads of mammalian cell membranes, which play a vital role in regulating interactions between biological cells and the extracellular environment.

Plasma functionalization

In the new work, researchers led by Sina Naficy grafted nanometre-thick zwitterionic coatings onto the surfaces of implant materials using a technique called plasma functionalization. They found that the resulting structures reduce the amount of fibrinogen proteins that adsorb onto the implants by roughly nine-fold and decrease blood clot formation (thrombosis) by almost 75%.

Naficy and colleagues achieved their results by optimizing the density, coverage and thickness of the coating. This was critical for realizing the full potential of these materials, they say, because a coating that is not fully optimized would not reduce clotting.

Naficy tells Physics World that the team’s main goal is to enhance the surface properties of medical devices. “These devices when implanted are in contact with blood and can readily cause thrombosis or infection if the surface initiates certain biological cascade reactions,” he explains. “Most such reactions begin when specific proteins adsorb on the surface and activate the next stage of cascade. Optimizing surface properties with the aid of zwitterions can control/inhibit protein adsorption, hence reducing the severity of adverse body reactions.”

The researchers say they will now be evaluating the long-term stability of the zwitterion-polymer coatings and trying to scale up their grafting process. They report their work in Communications Materials and Cell Biomaterials.


AI speeds up detection of neutron star mergers

31 March 2025 at 17:05

A new artificial intelligence/machine learning method rapidly and accurately characterizes binary neutron star mergers based on the gravitational wave signature they produce. Though the method has not yet been tested on new mergers happening “live”, it could enable astronomers to make quicker estimates of properties such as the location of mergers and the masses of the neutron stars. This information, in turn, could make it possible for telescopes to target and observe the electromagnetic signals that accompany such mergers.

When massive objects such as black holes and neutron stars collide and merge, they emit ripples in spacetime known as gravitational waves (GWs). In 2015, scientists on Earth began observing these ripples using kilometre-scale interferometers that measure the minuscule expansion and contraction of spacetime that occurs when a gravitational wave passes through our planet. These interferometers are located in the US, Italy and Japan and are known collectively as the LVK observatories after their initials: the Laser Interferometer GW Observatory (LIGO), the Virgo GW Interferometer (Virgo) and the Kamioka GW Detector (KAGRA).

When two neutron stars in a binary pair merge, they emit electromagnetic waves as well as GWs. While both types of wave travel at the speed of light, certain poorly understood processes that occur within and around the merging pair cause the electromagnetic signal to be slightly delayed. This means that the LVK observatories can detect the GW signal coming from a binary neutron star (BNS) merger seconds, or even minutes, before its electromagnetic counterpart arrives. Being able to identify GWs quickly and accurately therefore increases the chances of detecting other signals from the same event.

This is no easy task, however. GW signals are long and complex, and the main technique currently used to interpret them, Bayesian inference, is slow. While faster alternatives exist, they often make algorithmic approximations that negatively affect their accuracy.

Trained with millions of GW simulations

Physicists led by Maximilian Dax of the Max Planck Institute for Intelligent Systems in Tübingen, Germany have now developed a machine learning (ML) framework that accurately characterizes and localizes BNS mergers within a second of a GW being detected, without resorting to such approximations. To do this, they trained a deep neural network model with millions of GW simulations.
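The idea behind this kind of amortized inference (spend compute up front on simulations, then answer new queries almost instantly) can be caricatured in a few lines. The toy below stands in a nearest-neighbour lookup over simulated one-parameter "waveforms" for the team's deep neural network; every function and number here is invented purely for illustration:

```python
import math
import random

random.seed(0)

def simulate(mass, n=64):
    """Toy 'waveform': the oscillation frequency tracks the single parameter."""
    return [math.sin(2 * math.pi * mass * t / n) for t in range(n)]

# Build a bank of (waveform, parameter) pairs from simulations up front:
train = [(simulate(m), m) for m in [1 + 0.1 * i for i in range(50)]]

def predict(signal):
    """Amortized 'inference' by nearest neighbour over the simulation bank --
    a crude stand-in for a network trained on millions of simulations."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda pair: dist(pair[0], signal))[1]

# A noisy observation of a mass-3.0 system is matched almost instantly:
noisy = [s + random.gauss(0, 0.05) for s in simulate(3.0)]
print(predict(noisy))  # ~3.0
```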

Once trained, the neural network can take fresh GW data as input and predict corresponding properties of the merging BNSs – for example, their masses, locations and spins – based on its training dataset. Crucially, this neural network output includes a sky map. This map, Dax explains, provides a fast and accurate estimate for where the BNS is located.

The new work built on the group’s previous studies, which used ML systems to analyse GWs from binary black hole (BBH) mergers. “Fast inference is more important for BNS mergers, however,” Dax says, “to allow for quick searches for the aforementioned electromagnetic counterparts, which are not emitted by BBH mergers.”

The researchers, who report their work in Nature, hope their method will help astronomers to observe electromagnetic counterparts for BNS mergers more often and detect them earlier – that is, closer to when the merger occurs. Being able to do this could reveal important information on the underlying processes that occur during these events. “It could also serve as a blueprint for dealing with the increased GW signal duration that we will encounter in the next generation of GW detectors,” Dax says. “This could help address a critical challenge in future GW data analysis.”

So far, the team has focused on data from current GW detectors (LIGO and Virgo) and has only briefly explored next-generation ones. They now plan to apply their method to these new GW detectors in more depth.


Brillouin microscopy speeds up by a factor of 1000

28 March 2025 at 16:00

Researchers at the EMBL in Germany have dramatically reduced the time required to create images using Brillouin microscopy, making it possible to study the viscoelastic properties of biological samples far more quickly and with less damage than ever before. Their new technique can image samples with a field of view of roughly 10 000 pixels at a speed of 0.1 Hz – a 1000-fold improvement in speed and throughput compared to standard confocal techniques.

Mechanical properties such as the elasticity and viscosity of biological cells are closely tied to their function. These properties also play critical roles in processes such as embryo and tissue development and can even dictate how diseases such as cancer evolve. Measuring these properties is therefore important, but it is not easy since most existing techniques to do so are invasive and thus inherently disruptive to the systems being imaged.

Non-destructive, label- and contact-free

In recent years, Brillouin microscopy has emerged as a non-destructive, label- and contact-free optical spectroscopy method for probing the viscoelastic properties of biological samples with high resolution in three dimensions. It relies on Brillouin scattering, which occurs when light interacts with the phonons (or collective vibrational modes) that are present in all matter. This interaction produces two additional peaks, known as Stokes and anti-Stokes Brillouin peaks, in the spectrum of the scattered light. The position of these peaks (the Brillouin shift) and their linewidth (the Brillouin width) are related to the elastic and viscous properties, respectively, of the sample.
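The quantities named above follow from a simple relation: in the usual backscattering geometry the Brillouin shift is ν_B = 2nv/λ for refractive index n, sound speed v and probe wavelength λ, and the longitudinal elastic modulus is M = ρv². A sketch with textbook numbers for water, not values from the article:

```python
import math

def brillouin_shift_hz(n, v_sound, wavelength_m):
    """Backscattering Brillouin shift: nu_B = 2 * n * v / lambda."""
    return 2.0 * n * v_sound / wavelength_m

def longitudinal_modulus_pa(rho, n, wavelength_m, nu_b_hz):
    """Invert the shift for the sound speed, then M = rho * v^2."""
    v = nu_b_hz * wavelength_m / (2.0 * n)
    return rho * v * v

# Illustrative numbers for water probed at 532 nm:
nu_b = brillouin_shift_hz(n=1.33, v_sound=1490.0, wavelength_m=532e-9)
print(nu_b / 1e9)  # ~7.4 GHz
print(longitudinal_modulus_pa(998.0, 1.33, 532e-9, nu_b) / 1e9)  # ~2.2 GPa
```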

The downside is that standard Brillouin microscopy approaches analyse just one point in a sample at a time. Because the scattering signal from a single point is weak, imaging speeds are slow, yielding long light exposure times that can damage photosensitive components within biological cells.

“Light sheet” Brillouin imaging

To overcome this problem, EMBL researchers led by Robert Prevedel began exploring ways to speed up the rate at which Brillouin microscopy can acquire two- and three-dimensional images. In the early days of their project, they were only able to visualize one pixel at a time. With typical measurement times of tens to hundreds of milliseconds for a single data point, it therefore took several minutes, or even hours, to obtain two-dimensional images of 50–250 square pixels.

In 2022, however, they succeeded in expanding the field of view to include an entire spatial line — that is, acquiring image data from more than 100 points in parallel. In their latest work, which they describe in Nature Photonics, they extended the technique further to allow them to view roughly 10 000 pixels in parallel over the full plane of a sample. They then used the new approach to study mechanical changes in live zebrafish larvae.

“This advance enables much faster Brillouin imaging, and in terms of microscopy, allows us to perform ‘light sheet’ Brillouin imaging,” says Prevedel. “In short, we are able to ‘under-sample’ the spectral output, which leads to around 1000 fewer individual measurements than normally needed.”

Towards a more widespread use of Brillouin microscopy

Prevedel and colleagues hope their result will lead to more widespread use of Brillouin microscopy, particularly for photosensitive biological samples. “We wanted to speed-up Brillouin imaging to make it a much more useful technique in the life sciences, yet keep overall light dosages low. We succeeded in both aspects,” he tells Physics World.

Looking ahead, the researchers plan to further optimize the design of their approach and merge it with microscopes that enable more robust and straightforward imaging. “We then want to start applying it to various real-world biological structures and so help shed more light on the role mechanical properties play in biological processes,” Prevedel says.


Sterile neutrinos are a no-show (again)

27 March 2025 at 17:00

New data from the NOvA experiment at Fermilab in the US contain no evidence for so-called “sterile” neutrinos, in line with results from most – though not all – other neutrino detectors to date. As well as being consistent with previous experiments, the finding aligns with standard theoretical models of neutrino oscillation, in which three active types, or flavours, of neutrino convert into each other. The result also sets more stringent limits on how much an additional sterile type of neutrino could affect the others.

“The global picture on sterile neutrinos is still very murky, with a number of experiments reporting anomalous results that could be attributed to sterile neutrinos on one hand and a number of null results on the other,” says NOvA team member Adam Lister of the University of Wisconsin, Madison, US. “Generally, these anomalous results imply we should see large amounts of sterile-driven neutrino disappearance at NOvA, but this is not consistent with our observations.”

Neutrinos were first proposed in 1930 by Wolfgang Pauli as a way to account for missing energy and spin in the beta decay of nuclei. They were observed in the laboratory in 1956, and we now know that they come in (at least) three flavours: electron, muon and tau. We also know that these three flavours oscillate, changing from one to another as they travel through space, and that this oscillation means they are not massless (as was initially thought).

Significant discrepancies

Over the past few decades, physicists have used underground detectors to probe neutrino oscillation more deeply. A few of these detectors, including LSND at Los Alamos National Laboratory, BEST in Russia, and Fermilab’s own MiniBooNE, have observed significant discrepancies between the number of neutrinos they detect and the number that mainstream theories predict.

One possible explanation for this excess, which appears in some extensions of the Standard Model of particle physics, is the existence of a fourth flavour of neutrino. Neutrinos of this “sterile” type do not interact with the other flavours via the weak nuclear force. Instead, they interact only via gravity.

Detecting sterile neutrinos would fundamentally change our understanding of particle physics. Indeed, some physicists think sterile neutrinos could be a candidate for dark matter – the mysterious substance that is thought to make up around 85% of the matter in the universe but has so far only made itself known through the gravitational force it exerts.

Near and far detectors

The NOvA experiment uses two liquid scintillator detectors to monitor a stream of neutrinos created by firing protons at a carbon target. The near detector is located at Fermilab, approximately 1 km from the target, while the far detector is 810 km away in northern Minnesota. In the new study, the team measured how many muon-type neutrinos survive the journey through the Earth’s crust from the near detector to the far one. The idea is that if fewer neutrinos survive than the conventional three-flavour oscillations picture predicts, some of them could have oscillated into sterile neutrinos.
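The conventional expectation the team compares against can be sketched with the standard two-flavour survival formula. The mixing parameters below are generic atmospheric-sector values, not NOvA's fitted numbers; note that the 810 km baseline at around 2 GeV sits near the oscillation maximum, which is why NOvA sees strong muon-neutrino disappearance even without sterile neutrinos:

```python
import math

def nu_mu_survival(sin2_2theta, dm2_ev2, l_km, e_gev):
    """Two-flavour nu_mu survival probability:
    P = 1 - sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)."""
    phase = 1.27 * dm2_ev2 * l_km / e_gev
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

# Illustrative parameters: maximal mixing, dm2 ~ 2.5e-3 eV^2:
p = nu_mu_survival(sin2_2theta=1.0, dm2_ev2=2.5e-3, l_km=810.0, e_gev=2.0)
print(round(p, 3))  # a small survival probability, near the oscillation maximum
```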

The experimenters studied two different interactions between neutrinos and normal matter, says team member V Hewes of the University of Cincinnati, US. “We looked for both charged current muon neutrino and neutral current interactions, as a sterile neutrino would manifest differently in each,” Hewes explains. “We then compared our data across those samples in both detectors to simulations of neutrino oscillation models with and without the presence of a sterile neutrino.”

No excess of neutrinos seen

Writing in Physical Review Letters, the researchers state that they found no evidence of neutrinos oscillating into sterile neutrinos. What is more, introducing a fourth, sterile neutrino did not provide better agreement with the data than sticking with the standard model of three active neutrinos.

This result is in line with several previous experiments that looked for sterile neutrinos, including those performed at T2K, Daya Bay, RENO and MINOS+. However, Lister says it places much stricter constraints on active-sterile neutrino mixing than these earlier results. “We are really tightening the net on where sterile neutrinos could live, if they exist,” he tells Physics World.

The NOvA team now hopes to tighten the net further by reducing systematic uncertainties. “To that end, we are developing new data samples that will help us better understand the rate at which neutrinos interact with our detector and the composition of our beam,” says team member Adam Aurisano, also at the University of Cincinnati. “This will help us better distinguish between the potential imprint of sterile neutrinos and more mundane causes of differences between data and prediction.”

NOvA co-spokesperson Patricia Vahle, a physicist at the College of William & Mary in Virginia, US, sums up the results. “Neutrinos are full of surprises, so it is important to check when anomalies show up,” she says. “So far, we don’t see any signs of sterile neutrinos, but we still have some tricks up our sleeve to extend our reach.”


Atomic anomaly explained without recourse to hypothetical ‘dark force’

27 March 2025 at 10:00

Physicists in Germany have found an alternative explanation for an anomaly that had previously been interpreted as potential evidence for a mysterious “dark force”. Originally spotted in ytterbium atoms, the anomaly turns out to have a more mundane cause. However, the investigation, which involved high-precision measurements of shifts in ytterbium’s energy levels and the mass ratios of its isotopes, could help us better understand the structure of heavy atomic nuclei and the physics of neutron stars.

Isotopes are forms of an element that have the same number of protons and electrons, but different numbers of neutrons. These different numbers of neutrons produce shifts in the atom’s electronic energy levels. Measuring these so-called isotope shifts is therefore a way of probing the interactions between electrons and neutrons.

In 2020, a team of physicists at the Massachusetts Institute of Technology (MIT) in the US observed an unexpected deviation in the isotope shift of ytterbium. One possible explanation for this deviation was the existence of a new “dark force” that would interact with both ordinary, visible matter and dark matter via hypothetical new force-carrying particles (bosons).
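The standard way to hunt for such deviations (not spelled out in the article) is a King plot: modified isotope shifts measured on two transitions should fall on a straight line, and any nonlinearity signals either unmodelled nuclear structure or new physics. A minimal least-squares sketch with entirely hypothetical numbers:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares straight-line fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical modified isotope shifts (GHz) for two transitions across
# four isotope pairs; a real analysis would use the measured values:
shift_a = [10.0, 12.0, 14.0, 16.0]
shift_b = [20.1, 24.0, 27.9, 32.2]

slope, intercept = linear_fit(shift_a, shift_b)
residuals = [y - (slope * x + intercept) for x, y in zip(shift_a, shift_b)]
print(residuals)  # any King-plot nonlinearity shows up as non-zero residuals
```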

Although dark matter is thought to make up about 85 percent of the universe’s total matter, and its presence can be inferred from the way light bends as it travels towards us from distant galaxies, it has never been detected directly. Evidence for a new, fifth force (in addition to the known strong, weak, electromagnetic and gravitational forces) that acts between ordinary and dark matter would therefore be very exciting.

A team led by Tanja Mehlstäubler from the Physikalisch-Technische Bundesanstalt (PTB) in Braunschweig and Klaus Blaum from the Max Planck Institute for Nuclear Physics (MPIK) in Heidelberg has now confirmed that the anomaly is real. However, the PTB-MPIK researchers say it does not stem from a dark force. Instead, it arises from the way the nuclear structure of ytterbium isotopes deforms as more neutrons are added.

Measuring ytterbium isotope shifts and atomic masses

Mehlstäubler, Blaum and colleagues came to this conclusion after measuring shifts in the atomic energy levels of five different ytterbium isotopes: ¹⁶⁸,¹⁷⁰,¹⁷²,¹⁷⁴,¹⁷⁶Yb. They did this by trapping ions of these isotopes in an ion trap at the PTB and then using an ultrastable laser to drive certain electronic transitions. This allowed them to pin down the frequencies of specific transitions (²S₁/₂ → ²D₅/₂ and ²S₁/₂ → ²F₇/₂) with a fractional precision of 4 × 10⁻⁹, the highest to date.

They also measured the atomic masses of the ytterbium isotopes by trapping individual highly charged Yb⁴²⁺ ions in the cryogenic PENTATRAP Penning trap mass spectrometer at the MPIK. In the strong magnetic field of this trap, team member and study lead author Menno Door explains, the ions are bound to follow a circular orbit. “We measure the rotational frequency of this orbit by amplifying the minuscule induced current in surrounding electrodes,” he says. “The measured frequencies allowed us to very precisely determine the related mass ratios of the various isotopes with a precision of 4 × 10⁻¹².”
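The measurement principle reduces to the cyclotron relation ν_c = qB/2πm: for two ions carrying the same charge in the same field, the mass ratio is just the inverse of the frequency ratio, so the field itself drops out. A sketch with integer mass numbers and an invented field value, neither taken from the experiment:

```python
import math

def cyclotron_freq_hz(q_coulomb, b_tesla, m_kg):
    """Free-space cyclotron frequency: nu_c = q * B / (2 * pi * m)."""
    return q_coulomb * b_tesla / (2.0 * math.pi * m_kg)

E_CHARGE = 1.602176634e-19   # elementary charge (C)
AMU = 1.66053906660e-27      # atomic mass unit (kg)
B_FIELD = 7.0                # illustrative trap field (T)

# Two Yb(42+) ions in the same trap; integer masses used for illustration:
nu_174 = cyclotron_freq_hz(42 * E_CHARGE, B_FIELD, 174.0 * AMU)
nu_176 = cyclotron_freq_hz(42 * E_CHARGE, B_FIELD, 176.0 * AMU)
mass_ratio_176_over_174 = nu_174 / nu_176
print(mass_ratio_176_over_174)  # equals 176/174, independent of B and q
```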

From these data, the researchers were able to extract new parameters that describe how the ytterbium nucleus deforms. To back up their findings, a group at TU Darmstadt led by Achim Schwenk simulated the ytterbium nuclei on large supercomputers, calculating their structure from first principles based on our current understanding of the strong and electromagnetic interactions. “These calculations confirmed that the leading signal we measured was due to the evolving nuclear structure of ytterbium isotopes, not a new fifth force,” says team member Matthias Heinz.

“Our work complements a growing body of research that aims to place constraints on a possible new interaction between electrons and neutrons,” team member Chih-Han Yeh tells Physics World. “In our work, the unprecedented precision of our experiments refined existing constraints.”

The researchers say they would now like to measure other isotopes of ytterbium, including rare isotopes with high or low neutron numbers. “Doing this would allow us to control for uncertain ‘higher-order’ nuclear structure effects and further improve the constraints on possible new physics,” says team member Fiona Kirk.

Door adds that isotope chains of other elements such as calcium, tin and strontium would also be worth investigating. “These studies would allow us to further test our understanding of nuclear structure and neutron-rich matter, and with this understanding allow us to probe for possible new physics again,” he says.

The work is detailed in Physical Review Letters.


Novel zinc alloys could make bone screws biodegradable

26 March 2025 at 10:58

Orthopaedic implants that bear loads while bones heal, then disappear once they’re no longer needed, could become a reality thanks to a new technique for enhancing the mechanical properties of zinc alloys. Developed by researchers at Monash University in Australia, the technique involves controlling the orientation and size of microscopic grains in these strong yet biodegradable materials.

Implants such as plates and screws provide temporary support for fractured bones until they knit together again. Today, these implants are mainly made from sturdy materials such as stainless steel or titanium that remain in the body permanently. Such materials can, however, cause discomfort and bone loss, and subsequent injuries to the same area risk additional damage if the permanent implants warp or twist.

To address these problems, scientists have developed biodegradable alternatives that dissolve once the bone has healed. These alternatives include screws made from magnesium-based materials such as MgYREZr (trade name MAGNEZIX), MgYZnMn (NOVAMag) and MgCaZn (RESOMET). However, these materials have compressive yield strengths of just 50 to 260 MPa, which is too low to support bones that need to bear a patient’s weight. They also produce hydrogen gas as they degrade, possibly affecting how biological tissues regenerate.

Zinc alloys do not suffer from the hydrogen gas problem. They are biocompatible, dissolving slowly and safely in the body. There is even evidence that Zn2+ ions can help the body heal by stimulating bone formation. But again, their mechanical strength is low: at less than 30 MPa, they are even worse than magnesium in this respect.

Making zinc alloys strong enough for load-bearing orthopaedic implants is not easy. Mechanical strategies such as hot-extruding binary alloys have not helped much. And methods that focus on reducing the materials’ grain size (to hamper effects like dislocation slip) have run up against a discouraging problem: at body temperature (37 °C), ultrafine-grained Zn alloys become mechanically weaker as their so-called “creep resistance” decreases.

Grain size goes bigger

In the new work, a team led by materials scientist and engineer Jian-Feng Nie tried a different approach. By increasing grain size in Zn alloys rather than decreasing it, the Monash team was able to balance the alloys’ strength and creep resistance – something they say could offer a route to stronger zinc alloys for biodegradable implants.

In compression tests of extruded Zn–0.2 wt% Mg alloy samples with grain sizes of 11 μm, 29 μm and 47 μm, the team measured stress-strain curves that show a markedly higher yield strength for coarse-grained samples than for fine-grained ones. What is more, the compressive yield strengths of these coarser-grained zinc alloys are notably higher than those of MAGNEZIX, NOVAMag and RESOMET biodegradable magnesium alloys. At the upper end, they even rival those of high-strength medical-grade stainless steels.

The researchers attribute this increased compressive yield to a phenomenon called the inverse Hall–Petch effect. This effect comes about because larger grains favour metallurgical effects such as intra-granular pyramidal slip as well as a variation of a well-known metal phenomenon called twinning, in which a specific kind of defect forms when part of the material’s crystal structure flips its orientation. Larger grains also make the alloys more flexible, allowing them to better adapt to surrounding biological tissues. This is the opposite of what happens with smaller grains, which facilitate inter-granular grain boundary sliding and make alloys more rigid.
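For contrast, the classical Hall–Petch relation that the team's results invert predicts strength rising as grain size shrinks, σ_y = σ₀ + k/√d. A sketch with illustrative coefficients, not values fitted to the Monash data; classically the 11 μm grains would be the strongest, the opposite of the trend reported here:

```python
import math

def hall_petch_mpa(sigma0_mpa, k_mpa_sqrt_um, d_um):
    """Classical Hall-Petch yield strength: sigma_y = sigma_0 + k / sqrt(d)."""
    return sigma0_mpa + k_mpa_sqrt_um / math.sqrt(d_um)

# The three grain sizes from the compression tests, with invented coefficients:
for d in (11.0, 29.0, 47.0):
    print(d, round(hall_petch_mpa(30.0, 200.0, d), 1))  # strength falls as d grows
```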

The new work, which is detailed in Nature, could aid the development of advanced biodegradable implants for orthopaedics, cardiovascular applications and other devices, says Nie. “With improved biocompatibility, these implants could be safer and do away with the need for removal surgeries, lowering patient risk and healthcare costs,” he tells Physics World. “What is more, new alloys and processing techniques could allow for more personalized treatments by tailoring materials to specific medical needs, ultimately improving patient outcomes.”

The Monash team now aims to improve the composition of the alloys and achieve more control over how they degrade. “Further studies on animals and then clinical trials will test their strength, safety and compatibility with the body,” says Nie. “After that, regulatory approvals will ensure that the biodegradable metals meet medical standards for orthopaedic implants.”

The team is also setting up a start-up company with the goal of developing and commercializing the materials, he adds.


How would an asteroid strike affect life on Earth?

14 March 2025 at 10:30

How would the climate and the environment on our planet change if an asteroid struck? Researchers at the IBS Center for Climate Physics (ICCP) at Pusan National University in South Korea have now tried to answer this question by running several impact simulations with a state-of-the-art Earth system model on their in-house supercomputer. The results show that the climate, atmospheric chemistry and even global photosynthesis would be dramatically disrupted in the three to four years following the event, due to the huge amounts of dust produced by the impact.

Beyond immediate effects such as scorching heat, earthquakes and tsunamis, an asteroid impact would have long-lasting effects on the climate because of the large quantities of aerosols and gases ejected into the atmosphere. Indeed, previous studies of the 10 km Chicxulub asteroid impact, which happened around 66 million years ago, revealed that dust, soot and sulphur led to a global “impact winter” that was very likely responsible for the extinction of the dinosaurs at the Cretaceous–Paleogene boundary.

“This winter is characterized by reduced sunlight, because of the dust filtering it out, cold temperatures and decreased precipitation at the surface,” says Axel Timmermann, director of the ICCP and leader of this new study. “Severe ozone depletion would occur in the stratosphere too because of strong warming caused by the dust particles absorbing solar radiation there.”

These unfavourable climate conditions would inhibit plant growth via a decline in photosynthesis both on land and in the sea and would thus affect food productivity, Timmermann adds.

Something surprising and potentially positive would also happen though, he says: plankton in the ocean would recover within just six months and its abundance could even increase afterwards. Indeed, diatoms (silicate-rich algae) would be more plentiful than before the collision. This might be because the dust created by the asteroid is rich in iron, which would trigger plankton growth as it sinks into the ocean. These phytoplankton “blooms” could help alleviate emerging food crises triggered by the reduction in terrestrial productivity, at least for several years after the impact, explains Timmermann.

The effect of a “Bennu”-sized asteroid impact

In this latest study, published in Science Advances, the researchers simulated the effect of a “Bennu”-sized asteroid impact. Bennu is a so-called medium-sized asteroid with a diameter of around 500 m. Asteroids of this type are more likely to strike Earth than the larger “planet killer” asteroids, but they have been studied far less.

There is an estimated 0.037% chance of such an asteroid colliding with Earth in September 2182. While this probability is small, such an impact would be very serious, says Timmermann, and would lead to climate conditions similar to those observed after some of the largest volcanic eruptions in the last 100 000 years. “It is therefore important to assess the risk, which is the product of the probability and the damage that would be caused, rather than just the probability by itself,” he tells Physics World. “Our results can serve as useful benchmarks to estimate the range of environmental effects from future medium-sized asteroid collisions.”
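Timmermann’s point that risk is the product of probability and damage is just an expected-value calculation. A minimal sketch, using the 0.037% probability from the article and a purely illustrative placeholder for the damage figure (not a value from the study):

```python
# Expected-value framing of impact risk: risk = probability x damage.
# The 0.037% probability is quoted in the article; the damage cost below
# is a purely hypothetical placeholder for illustration.
p_impact = 0.00037        # 0.037% chance of a Bennu-sized impact in 2182

damage_cost = 1.0e14      # hypothetical damage in dollars (illustrative only)
expected_loss = p_impact * damage_cost
print(f"Expected loss: ${expected_loss:.2e}")
```

Even a sub-0.1% probability yields a large expected loss when the damage term is large, which is why Timmermann argues the risk merits assessment despite the small probability.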

The team ran the simulations on the IBS’ supercomputer Aleph using the Community Earth System Model Version 2 (CESM2) and the Whole Atmosphere Community Climate Model Version 6 (WACCM6). The simulations injected up to 400 million tonnes of dust into the stratosphere.

The climate effects of impact-dust aerosols mainly depend on their abundance in the atmosphere and how they evolve there. The simulations revealed that global mean temperatures would drop by 4 °C, a value comparable with the cooling estimated to have followed the eruption of the Toba volcano around 74 000 years ago (which emitted 2000 Tg (2 × 10¹⁵ g) of sulphur dioxide). Precipitation would also decrease by 15% worldwide, and ozone would drop by a dramatic 32%, in the first year following the asteroid impact.

Asteroid impacts may have shaped early human evolution

“On average, medium-sized asteroids collide with Earth about every 100 000 to 200 000 years,” says Timmermann. “This means that our early human ancestors may have experienced some of these medium-sized events. These may have impacted human evolution and even affected our species’ genetic makeup.”

The researchers admit that their model has some inherent limitations. For one, CESM2/WACCM6, like other modern climate models, is not designed and optimized to simulate the effects of massive amounts of aerosol injected into the atmosphere. Second, the researchers only focused on the asteroid colliding with the Earth’s land surface. This is obviously less likely than an impact on the ocean, because roughly 70% of Earth’s surface is covered by water, they say. “An impact in the ocean would inject large amounts of water vapour rather than climate-active aerosols such as dust, soot and sulphur into the atmosphere and this vapour needs to be better modelled – for example, for the effect it has on ozone loss,” they say.

The effect of the impact on specific regions on the planet also needs to be better simulated, the researchers add. Whether the asteroid impacts during winter or summer also needs to be accounted for since this can affect the extent of the climate changes that would occur.

Finally, as well as the dust nanoparticles investigated in this study, future work should also look at soot emissions from wildfires ignited by impact spherules, and at the sulphur and CO2 released from target evaporites, say Timmermann and colleagues. “The ‘impact winter’ would be intensified and prolonged if other aerosols such as soot and sulphur were taken into account.”

The post How would an asteroid strike affect life on Earth? appeared first on Physics World.

Perovskite solar cells can be completely recycled

11 mars 2025 à 16:00

A research team headed up at Linköping University in Sweden and Cornell University in the US has succeeded in recycling almost all of the components of perovskite solar cells using simple, non-toxic, water-based solvents. What’s more, the researchers were able to use the recycled components to make new perovskite solar cells with almost the same power conversion efficiency as those created from new materials. This work could pave the way to a sustainable perovskite solar economy, they say.

While solar energy is considered an environmentally friendly source of energy, most of the solar panels available today are based on silicon, which is difficult to recycle. As a result, the first generation of silicon solar panels, now reaching the end of its life cycle, is ending up in landfill, says Xun Xiao, one of the team members at Linköping University.

“When developing emerging solar cell technologies, we therefore need to take recycling into consideration,” adds one of the leaders of the new study, Feng Gao, also at Linköping. “If we don’t know how to recycle them, maybe we shouldn’t put them on the market at all.”

To this end, many countries around the world are imposing legal requirements on photovoltaic manufacturers, to ensure that they collect and recycle any solar cell waste they produce. These initiatives include the WEEE directive 2012/19/EU in the European Union and equivalent legislation in Asia and the US.

Perovskites are one of the most promising materials for making next-generation solar cells. Not only are they relatively inexpensive, they are also easy to fabricate, lightweight, flexible and transparent. This allows them to be placed on top of a variety of surfaces, unlike their silicon counterparts. And since they boast a power conversion efficiency (PCE) of more than 25%, this makes them comparable to existing photovoltaics on the market.

A shorter lifespan

One of their downsides, however, is that perovskite solar cells have a shorter lifespan than silicon solar cells. This means that recycling is even more critical for these materials. Today, perovskite solar cells are disassembled using dangerous solvents such as dimethylformamide, but Gao and colleagues have now developed a technique in which water can be used as the solvent.

Perovskites are crystalline materials with an ABX3 structure, where A is caesium, methylammonium (MA) or formamidinium (FA); B is lead or tin; and X is chlorine, bromine or iodine. Solar cells made of these materials are composed of different layers: the hole/electron transport layers; the perovskite layer; indium tin oxide substrates; and cover glasses.

In their work, which they detail in Nature, the researchers succeeded in delaminating end-of-life devices layer by layer, using water containing three low-cost additives: sodium acetate, sodium iodide and hypophosphorous acid. Despite being able to dissolve organic iodide salts such as methylammonium iodide and formamidinium iodide, water only marginally dissolves lead iodide (about 0.044 g per 100 ml at 20 °C). The researchers therefore developed a way to increase the amount of lead iodide that dissolves in water by introducing acetate ions into the mix. These ions readily coordinate with lead ions, forming highly soluble lead acetate (about 44.31 g per 100 ml at 20 °C).
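The scale of the acetate trick is clear from the two solubility figures quoted above. A quick check:

```python
# Solubility figures quoted in the article (g per 100 ml of water at 20 degC)
pbi2_solubility = 0.044        # lead iodide: only marginally water-soluble
pb_acetate_solubility = 44.31  # lead acetate: highly soluble

# Coordinating lead with acetate ions boosts how much lead the aqueous
# solution can carry by roughly three orders of magnitude
enhancement = pb_acetate_solubility / pbi2_solubility
print(f"Solubility enhancement: ~{enhancement:.0f}x")
```

That thousand-fold increase is what makes a water-based solvent viable for dissolving the lead-containing perovskite layer.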

Once the degraded perovskites had dissolved in the aqueous solution, the researchers set about recovering pure, high-quality perovskite crystals from it. They did this by providing extra iodide ions to coordinate with the lead. As a result, [PbI]+ transitioned to [PbI2]0 and eventually to [PbI3]−, re-forming the perovskite framework.

To remove the indium tin oxide substrates, the researchers sonicated these layers in a solution of water/ethanol (50%/50% volume ratio) for 15 min. Finally, they delaminated the cover glasses by placing the degraded solar cells on a hotplate preheated to 150 °C for 3 min.

They were able to apply their technology to recycle both MAPbI3 and FAPbI3 perovskites.

New devices made from the recycled perovskites had an average power conversion efficiency of 21.9 ± 1.1%, with the best samples clocking in at 23.4%. This represents an efficiency recovery of more than 99% compared with those prepared using fresh materials (which have a PCE of 22.1 ± 0.9%).
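The quoted “efficiency recovery of more than 99%” follows directly from the two average PCE values:

```python
recycled_pce = 21.9  # % average PCE of devices made from recycled perovskite
fresh_pce = 22.1     # % average PCE of devices made from fresh materials

# Efficiency recovery: recycled performance as a fraction of fresh performance
recovery = 100 * recycled_pce / fresh_pce
print(f"Efficiency recovery: {recovery:.1f}%")
```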

Looking forward, Gao and colleagues say they would now like to demonstrate that their technique works on a larger scale. “Our life-cycle assessment and techno-economic analysis has already confirmed that our strategy not only preserves raw materials, but also appreciably lowers overall manufacturing costs of solar cells made from perovskites,” says co-team leader Fengqi You, who works at Cornell University. “In particular, reclaiming the valuable layers in these devices drives down expenses and helps reduce the ‘levelized cost’ of electricity they produce, making the technology potentially more competitive and sustainable at scale,” he tells Physics World.

The post Perovskite solar cells can be completely recycled appeared first on Physics World.

‘Phononic shield’ protects mantis shrimp from its own shock waves

4 mars 2025 à 15:59

When a mantis shrimp uses shock waves to strike and kill its prey, how does it prevent those shock waves from damaging its own tissues? Researchers at Northwestern University in the US have answered this question by identifying a structure within the shrimp that filters out harmful frequencies. Their findings, which they obtained by using ultrasonic techniques to investigate surface and bulk wave propagation in the shrimp’s dactyl club, could lead to novel advanced protective materials for military and civilian applications.

Dactyl clubs are hammer-like structures located on each side of a mantis shrimp’s body. They store energy in spring-like elastic structures that are latched in place by tendons. When the shrimp contracts its muscles, the latch releases, freeing the stored energy and propelling the club forward with a peak force of up to 1500 N.

This huge force (relative to the animal’s size) creates stress waves in both the shrimp’s target – typically a hard-shelled animal such as a crab or mollusc – and the dactyl club itself, explains biomechanical engineer Horacio Dante Espinosa, who led the Northwestern research effort. The club’s punch also creates bubbles that rapidly collapse to produce shockwaves in the megahertz range. “The collapse of these bubbles (a process known as cavitation collapse), which takes place in just nanoseconds, releases intense bursts of energy that travel through the target and shrimp’s club,” he explains. “This secondary shockwave effect makes the shrimp’s strike even more devastating.”

Protective phononic armour

So how do the shrimp’s own soft tissues escape damage? To answer this question, Espinosa and colleagues studied the animal’s armour using transient grating spectroscopy (TGS) and asynchronous optical sampling (ASOPS). These ultrasonic techniques respectively analyse how stress waves propagate through a material and characterize the material’s microstructure. In this work, Espinosa and colleagues used them to provide high-resolution, frequency-dependent wave propagation characteristics that previous studies had not investigated experimentally.

The team identified three distinct regions in the shrimp’s dactyl club. The outermost layer consists of a hard hydroxyapatite coating approximately 70 μm thick, which is durable and resists damage. Beneath this, an approximately 500 μm-thick layer of mineralized chitin fibres arranged in a herringbone pattern enhances the club’s fracture resistance. Deeper still, Espinosa explains, is a region that features twisted fibre bundles organized in a corkscrew-like arrangement known as a Bouligand structure. Within this structure, each successive layer is rotated relative to its neighbours, giving it a unique and crucial role in controlling how stress waves propagate through the shrimp.

“Our key finding was the existence of phononic bandgaps (through which waves within a specific frequency range cannot travel) in the Bouligand structure,” Espinosa explains. “These bandgaps filter out harmful stress waves so that they do not propagate back into the shrimp’s club and body. They thus preserve the club’s integrity and protect soft tissue in the animal’s appendage.”
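The bandgap idea can be illustrated with the textbook dispersion relation for an infinite one-dimensional bilayer phononic crystal (a much-simplified stand-in for the Bouligand structure; the material parameters below are invented for illustration, not the shrimp’s). Bloch waves exist only where the right-hand side of the dispersion relation has magnitude at most 1; frequencies where it exceeds 1 lie in a bandgap and cannot propagate:

```python
import numpy as np

# Illustrative 1D phononic crystal of alternating layers A/B. For an infinite
# periodic bilayer, Bloch waves satisfy
#   cos(q*a) = cos(k1*d1)*cos(k2*d2)
#              - 0.5*(Z1/Z2 + Z2/Z1)*sin(k1*d1)*sin(k2*d2),
# with k_i = omega/c_i and impedance Z_i = rho_i*c_i. Where |RHS| > 1 there is
# no real Bloch wavevector q: a phononic bandgap. (Parameters are hypothetical.)
rho1, c1, d1 = 1200.0, 2000.0, 5e-6   # layer A: density, sound speed, thickness
rho2, c2, d2 = 3000.0, 4500.0, 5e-6   # layer B: higher acoustic impedance
Z1, Z2 = rho1 * c1, rho2 * c2

freqs = np.linspace(1e6, 1e9, 20000)  # scan 1 MHz to 1 GHz
w = 2 * np.pi * freqs
rhs = (np.cos(w / c1 * d1) * np.cos(w / c2 * d2)
       - 0.5 * (Z1 / Z2 + Z2 / Z1) * np.sin(w / c1 * d1) * np.sin(w / c2 * d2))
in_gap = np.abs(rhs) > 1              # True where no propagating wave exists
print(f"Fraction of scanned band inside gaps: {in_gap.mean():.2f}")
```

The larger the impedance contrast between the layers, the wider the gaps, which is the filtering mechanism the Bouligand structure exploits (in a more intricate, graded, three-dimensional form).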

The team also employed finite element simulations incorporating so-called Bloch-Floquet analyses and graded mechanical properties to understand the phononic bandgap effects. The most surprising result, Espinosa tells Physics World, was the formation of a flat branch in the 450‒480 MHz range, which corresponds to the frequencies produced by bubble collapse during club impact.

Evolution and its applications

For Espinosa and his colleagues, a key goal of their research is to understand how evolution leads to natural composite materials with unique photonic, mechanical and thermal properties. In particular, they seek to uncover how hierarchical structures in natural materials and the chemistry of their constituents produce emergent mechanical properties. “The mantis shrimp’s dactyl club is an example of how evolution leads to materials capable of resisting extreme conditions,” Espinosa says. “In this case, it is the violent impacts the animal uses for predation or protection.”

The properties of the natural “phononic shield” unearthed in this work might inspire advanced protective materials for both military and civilian applications, he says. Examples could include the design of helmets, personnel armour, and packaging for electronics and other sensitive devices.

In this study, which is described in Science, the researchers analysed two-dimensional simulations of wave behaviour. Future research, they say, should focus on more complex three-dimensional simulations to fully capture how the club’s structure interacts with shock waves. “Designing aquatic experiments with state-of-the-art instrumentation would also allow us to investigate how phononic properties function in submerged underwater conditions,” says Espinosa.

The team would also like to use biomimetics to make synthetic metamaterials based on the insights gleaned from this work.

The post ‘Phononic shield’ protects mantis shrimp from its own shock waves appeared first on Physics World.

Black hole’s shadow changes from one year to the next

28 février 2025 à 10:30

New statistical analyses of the supermassive black hole M87* may explain changes observed since it was first imaged. The findings, from the same Event Horizon Telescope (EHT) that produced the iconic first image of a black hole’s shadow, confirm that M87*’s rotational axis points away from Earth. The analyses also indicate that turbulence within the rotating envelope of gas that surrounds the black hole – the accretion disc – plays a role in changing its appearance.

The first image of M87*’s shadow was based on observations made in 2017, though the image itself was not released until 2019. It resembles a fiery doughnut, with the shadow appearing as a dark region around three times the diameter of the black hole’s event horizon (the point beyond which even light cannot escape its gravitational pull) and the accretion disc forming a bright ring around it.

Because the shadow is caused by the gravitational bending and capture of light at the event horizon, its size and shape can be used to infer the black hole’s mass. The larger the shadow, the higher the mass. In 2019, the EHT team calculated that M87* has a mass of about 6.5 billion times that of our Sun, in line with previous theoretical predictions. Team members also determined that the radius of the event horizon is 3.8 micro-arcseconds; that the black hole is rotating in a clockwise direction; and that its spin points away from us.
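The quoted angular scale can be checked with a back-of-envelope calculation. The mass is from the EHT analysis; the ~16.8 Mpc distance to M87 is an assumed literature value, and the 3.8 micro-arcsecond figure corresponds to the gravitational radius GM/c², which for a rapidly spinning black hole is close to the event-horizon radius:

```python
import math

# Back-of-envelope check of the quoted angular scale of M87*.
G = 6.674e-11    # m^3 kg^-1 s^-2
c = 2.998e8      # m/s
M_sun = 1.989e30 # kg
Mpc = 3.086e22   # m

M = 6.5e9 * M_sun  # mass of M87* (from the EHT analysis)
D = 16.8 * Mpc     # assumed distance to M87 (literature value)

theta_g = G * M / (c**2 * D)                          # gravitational radius, rad
theta_g_uas = theta_g * (180 / math.pi) * 3600 * 1e6  # in micro-arcseconds
print(f"Angular gravitational radius: {theta_g_uas:.1f} micro-arcseconds")
```

The result lands on the 3.8 micro-arcsecond scale quoted by the team, and the shadow (a few times larger) then spans roughly 40 micro-arcseconds on the sky.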

Hot and violent region

The latest analysis focuses less on the shadow and more on the bright ring outside it. As matter accelerates, it produces huge amounts of light. In the vicinity of the black hole, this acceleration occurs as matter is sucked into the black hole, but it also arises when matter is blasted out in jets. The way these jets form is still not fully understood, but some astrophysicists think magnetic fields could be responsible. Indeed, in 2021, when researchers working on the EHT analysed the polarization of light emitted from the bright region, they concluded that only the presence of a strongly magnetized gas could explain their observations.

The team has now combined an analysis of EHT observations made in 2018 with a re-analysis of the 2017 results using a Bayesian approach. This statistical technique, applied for the first time in this context, treats the two sets of observations as independent experiments. This is possible because the event horizon of M87* is about a light-day across, so the accretion disc should present a new version of itself every few days, explains team member Avery Broderick from the Perimeter Institute and the University of Waterloo, both in Canada. In more technical language, the gap between observations exceeds the correlation timescale of the turbulent environment surrounding the black hole.

New result reinforces previous interpretations

The part of the ring that appears brightest to us stems from the relativistic movement of material in a clockwise direction as seen from Earth. In the original 2017 observations, this bright region was further “south” on the image than the EHT team expected. However, when members of the team compared these observations with those from 2018, they found that the region reverted to its mean position. This result corroborated computer simulations of the general relativistic magnetohydrodynamics of the turbulent environment surrounding the black hole.

Even in the 2018 observations, though, the ring remains brightest at the bottom of the image. According to team member Bidisha Bandyopadhyay, a postdoctoral researcher at the Universidad de Concepción in Chile, this finding provides substantial information about the black hole’s spin and reinforces the EHT team’s previous interpretation of its orientation: the black hole’s rotational axis is pointing away from Earth. The analyses also reveal that the turbulence within the accretion disc can help explain the differences observed in the bright region from one year to the next.

Very long baseline interferometry

To observe M87* in detail, the EHT team needed an instrument with an angular resolution comparable to the black hole’s event horizon, which is around tens of micro-arcseconds across. Achieving this resolution with an ordinary telescope would require a dish the size of the Earth, which is clearly not possible. Instead, the EHT uses very long baseline interferometry, which involves detecting radio signals from an astronomical source using a network of individual radio telescopes and telescopic arrays spread across the globe.

The facilities contributing to this work were the Atacama Large Millimeter Array (ALMA) and the Atacama Pathfinder Experiment, both in Chile; the South Pole Telescope (SPT) in Antarctica; the IRAM 30-metre telescope and NOEMA Observatory in Spain; the James Clerk Maxwell Telescope (JCMT) and the Submillimeter Array (SMA) on Mauna Kea, Hawai’i, US; the Large Millimeter Telescope (LMT) in Mexico; the Kitt Peak Telescope in Arizona, US; and the Greenland Telescope (GLT). The distance between these telescopes – the baseline – ranges from 160 m to 10 700 km. Data were correlated at the Max-Planck-Institut für Radioastronomie (MPIfR) in Germany and the MIT Haystack Observatory in the US.
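The resolving power of such a network follows the diffraction limit of an interferometer, theta ≈ lambda/B. Using the 10 700 km maximum baseline from the article and assuming the EHT’s typical 1.3 mm observing wavelength (not stated above):

```python
import math

# Diffraction-limited resolution of an interferometer: theta ~ lambda / B.
# The 10 700 km baseline is from the article; the 1.3 mm wavelength is an
# assumed typical EHT observing wavelength.
wavelength = 1.3e-3   # m
baseline = 10_700e3   # m, longest separation between telescopes

theta = wavelength / baseline                     # radians
theta_uas = theta * (180 / math.pi) * 3600 * 1e6  # micro-arcseconds
print(f"Angular resolution: ~{theta_uas:.0f} micro-arcseconds")
```

A resolution of a few tens of micro-arcseconds is exactly what is needed to resolve a shadow tens of micro-arcseconds across.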

“This work demonstrates the power of multi-epoch analysis at horizon scale, providing a new statistical approach to studying the dynamical behaviour of black hole systems,” says EHT team member Hung-Yi Pu from National Taiwan Normal University. “The methodology we employed opens the door to deeper investigations of black hole accretion and variability, offering a more systematic way to characterize their physical properties over time.”

Looking ahead, the EHT astronomers plan to continue analysing observations made in 2021 and 2022. With these results, they aim to place even tighter constraints on models of black hole accretion environments. “Extending multi-epoch analysis to the polarization properties of M87* will also provide deeper insights into the astrophysics of strong gravity and magnetized plasma near the event horizon,” EHT management team member Rocco Lico tells Physics World.

The analyses are detailed in Astronomy and Astrophysics.

The post Black hole’s shadow changes from one year to the next appeared first on Physics World.

Radioactive anomaly appears in the deep ocean

27 février 2025 à 10:30

Something extraordinary happened on Earth around 10 million years ago, and whatever it was, it left behind a “signature” of radioactive beryllium-10. This finding, which is based on studies of rocks located deep beneath the ocean, could be evidence for a previously-unknown cosmic event or major changes in ocean circulation. With further study, the newly-discovered beryllium anomaly could also become an independent time marker for the geological record.

Most of the beryllium-10 found on Earth originates in the upper atmosphere, where it forms when cosmic rays interact with oxygen and nitrogen molecules. Afterwards, it attaches to aerosols, falls to the ground and is transported into the oceans. Eventually, it reaches the seabed and accumulates, becoming part of what scientists call one of the most pristine geological archives on Earth.

Because beryllium-10 has a half-life of 1.4 million years, it is possible to use its abundance to pin down the dates of geological samples that are more than 10 million years old. This is far beyond the limits of radiocarbon dating, which relies on an isotope (carbon-14) with a half-life of just 5730 years, and can only date samples less than 50 000 years old.
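The difference between the two dating ranges comes straight from exponential decay: the surviving fraction after time t is 0.5 raised to the power t divided by the half-life. A quick sketch using the figures above:

```python
def remaining_fraction(age, half_life):
    """Fraction of a radioisotope left after a given time (same time units)."""
    return 0.5 ** (age / half_life)

# Beryllium-10 (half-life 1.4 Myr) after 10 Myr: still measurable
be10_left = remaining_fraction(10e6, 1.4e6)
# Carbon-14 (half-life 5730 yr) at its ~50 000-year practical limit
c14_left = remaining_fraction(50_000, 5730)

print(f"Be-10 left after 10 Myr: {be10_left:.4f}")  # roughly 0.7% remains
print(f"C-14 left after 50 kyr:  {c14_left:.4f}")
```

Even after seven beryllium-10 half-lives, enough of the isotope survives for accelerator mass spectrometry to count individual atoms, which is what makes the late-Miocene samples datable.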

Almost twice as much 10Be as expected

In the new work, which is detailed in Nature Communications, physicists in Germany and Australia measured the amount of beryllium-10 in geological samples taken from the Pacific Ocean. The samples are primarily made up of iron and manganese and formed slowly over millions of years. To date them, the team used a technique called accelerator mass spectrometry (AMS) at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR). This method can distinguish beryllium-10 from its decay product, boron-10, which has the same mass, and from other beryllium isotopes.

The researchers found that samples dated to around 10 million years ago, a period known as the late Miocene, contained almost twice as much beryllium-10 as they expected to see. The source of this overabundance is a mystery, says team member Dominik Koll, but he offers three possible explanations. The first is that changes to the ocean circulation near the Antarctic, which scientists recently identified as occurring between 10 and 12 million years ago, could have distributed beryllium-10 unevenly across the Earth. “Beryllium-10 might thus have become particularly concentrated in the Pacific Ocean,” says Koll, a postdoctoral researcher at TU Dresden and an honorary lecturer at the Australian National University.

Another possibility is that a supernova exploded in our galactic neighbourhood 10 million years ago, producing a temporary increase in cosmic radiation. The third option is that the Sun’s magnetic shield, which deflects cosmic rays away from the Earth, became weaker through a collision with an interstellar cloud, making our planet more vulnerable to cosmic rays. Either of these astrophysical scenarios would have increased the amount of beryllium-10 that fell to Earth without affecting its geographic distribution.

To distinguish between these competing hypotheses, the researchers now plan to analyse additional samples from different locations on Earth. “If the anomaly were found everywhere, then the astrophysics hypothesis would be supported,” Koll says. “But if it were detected only in specific regions, the explanation involving altered ocean currents would be more plausible.”

Whatever the reason for the anomaly, Koll suggests it could serve as a cosmogenic time marker for periods spanning millions of years, the likes of which do not yet exist. “We hope that other research groups will also investigate their deep-ocean samples in the relevant period to eventually come to a definitive answer on the origin of the anomaly,” he tells Physics World.

The post Radioactive anomaly appears in the deep ocean appeared first on Physics World.

Quantum-inspired technique simulates turbulence with high speed

26 février 2025 à 14:00

Quantum-inspired “tensor networks” can simulate the behaviour of turbulent fluids in just a few hours rather than the several days required for a classical algorithm. The new technique, developed by physicists in the UK, Germany and the US, could advance our understanding of turbulence, which has been called one of the greatest unsolved problems of classical physics.

Turbulence is all around us, found in weather patterns, water flowing from a tap or a river and in many astrophysical phenomena. It is also important for many industrial processes. However, the way in which turbulence arises and then sustains itself is still not understood, despite the seemingly simple and deterministic physical laws governing it.

The reason for this is that turbulence is characterized by large numbers of eddies and swirls of differing shapes and sizes that interact in chaotic and unpredictable ways across a wide range of spatial and temporal scales. Such fluctuations are difficult to simulate accurately, even using powerful supercomputers, because doing so requires solving sets of coupled partial differential equations on very fine grids.

An alternative is to treat turbulence in a probabilistic way. In this case, the properties of the flow are defined as random variables distributed according to mathematical relationships called joint Fokker-Planck probability density functions. These functions are neither chaotic nor multiscale, so they are straightforward to derive. They are nevertheless challenging to solve because of the large number of dimensions a turbulent flow contains.

For this reason, the probability density function approach was widely considered to be computationally infeasible. In response, researchers turned to indirect Monte Carlo algorithms to perform probabilistic turbulence simulations. However, while this approach has chalked up some notable successes, it can be slow to yield results.

Highly compressed “tensor networks”

To overcome this problem, a team led by Nikita Gourianov of the University of Oxford, UK, decided to encode turbulence probability density functions as highly compressed “tensor networks” rather than simulating the fluctuations themselves. Such networks have already been used to simulate otherwise intractable quantum systems like superconductors, ferromagnets and quantum computers, they say.

These quantum-inspired tensor networks represent the turbulence probability distributions in a hyper-compressed format, which then allows them to be simulated. By simulating the probability distributions directly, the researchers can then extract important parameters, such as lift and drag, that describe turbulent flow.
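The compression idea can be illustrated with a toy example (not the authors’ actual scheme): a smooth probability density discretized on a grid is well approximated by a low-rank factorization, which is the simplest two-site special case of a tensor network. Keeping only a handful of singular values reproduces the distribution to high accuracy at a fraction of the storage:

```python
import numpy as np

# Toy illustration of tensor-network compression: a smooth, correlated 2D
# probability density on a 256x256 grid, approximated by a truncated SVD
# (the two-site case of a tensor network). Parameters are hypothetical.
n = 256
x = np.linspace(-3, 3, n)
X, Y = np.meshgrid(x, x)
pdf = np.exp(-(X**2 + Y**2 + 0.5 * X * Y))  # correlated Gaussian-like density
pdf /= pdf.sum()                            # normalize to a probability table

U, s, Vt = np.linalg.svd(pdf)
rank = 8                                    # keep 8 of 256 singular values
approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]

rel_error = np.linalg.norm(pdf - approx) / np.linalg.norm(pdf)
compression = (n * n) / (rank * (2 * n + 1))  # dense entries vs stored factors
print(f"Relative error: {rel_error:.2e}, compression: ~{compression:.0f}x")
```

Real turbulence probability density functions live in many more dimensions, where the dense table is astronomically large; chaining such factorizations into a tensor train is what makes the representation tractable.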

Importantly, the new technique allows an ordinary single CPU (central processing unit) core to compute a turbulent flow in just a few hours, compared to several days using a classical algorithm on a supercomputer.

This significantly improved way of simulating turbulence could be particularly useful in the area of chemically reactive flows in areas such as combustion, says Gourianov. “Our work also opens up the possibility of probabilistic simulations for all kinds of chaotic systems, including weather or perhaps even the stock markets,” he adds.

The researchers now plan to apply tensor networks to deep learning, a form of machine learning that uses artificial neural networks. “Neural networks are famously over-parameterized and there are several publications showing that they can be compressed by orders of magnitude in size simply by representing their layers as tensor networks,” Gourianov tells Physics World.

The study is detailed in Science Advances.

The post Quantum-inspired technique simulates turbulence with high speed appeared first on Physics World.

Astronomers create a ‘weather map’ for a gas giant exoplanet

25 février 2025 à 10:45

Astronomers have constructed the first “weather map” of the exoplanet WASP-127b, and the forecast there is brutal. Winds roar around its equator at speeds as high as 33 000 km/hr, far exceeding anything found in our own solar system. Its poles are cooler than the rest of its surface, though “cool” is a relative term on a planet where temperatures routinely exceed 1000 °C. And its atmosphere contains water vapour, so rain – albeit not in the form we’re accustomed to on Earth – can’t be ruled out.

Astronomers have been studying WASP-127b since its discovery in 2016. A gas giant exoplanet located over 500 light-years from Earth, it is slightly larger than Jupiter but much less dense, and it orbits its host – a G-type star like our own Sun – in just 4.18 Earth days. To probe its atmosphere, astronomers record the light transmitted as the planet passes in front of its host star along our line of sight. During such passes, or transits, some starlight is filtered through the planet’s upper atmosphere and “imprinted” with the characteristic absorption lines of the atoms and molecules present there.

Observing the planet during a transit event

On the night of 24/25 March 2022, astronomers used the CRyogenic InfraRed Echelle Spectrograph (CRIRES+) on the European Southern Observatory’s Very Large Telescope to observe WASP-127b at wavelengths of 1972‒2452 nm during a transit event lasting 6.6 hours. The data they collected show that the planet is home to supersonic winds travelling at speeds nearly six times faster than its own rotation – something that has never been observed before. By comparison, the fastest wind speeds measured in our solar system were on Neptune, where they top out at “just” 1800 km/hr, or 0.5 km/s.
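The “six times faster than its own rotation” claim can be sanity-checked. The 4.18-day period and the 33 000 km/h wind speed are from the article; assuming the planet is tidally locked (so it rotates once per orbit) and has a radius of about 1.3 Jupiter radii (an assumed literature value):

```python
import math

# Sanity check: wind speed vs equatorial rotation speed of WASP-127b.
# Assumptions (not stated in the article): tidal locking, radius ~1.3 R_Jup.
R_JUP_KM = 71_492            # Jupiter's equatorial radius in km
radius_km = 1.3 * R_JUP_KM   # assumed radius of WASP-127b
period_h = 4.18 * 24         # rotation period in hours, if tidally locked

v_rot = 2 * math.pi * radius_km / period_h  # equatorial rotation speed, km/h
v_wind = 33_000                             # measured equatorial wind, km/h
print(f"Rotation: ~{v_rot:.0f} km/h, wind/rotation ratio: {v_wind / v_rot:.1f}")
```

Under these assumptions the ratio comes out close to six, consistent with the figure quoted by the observers.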

Such strong winds – the fastest ever observed on a planet – would be hellish to experience. But for the astronomers, they were crucial for mapping WASP-127b’s weather.

“The light we measure still looks to us as if it all came from one point in space, because we cannot resolve the planet optically/spatially like we can do for planets in our own solar system,” explains Lisa Nortmann, an astronomer at the University of Göttingen, Germany, and the lead author of an Astronomy and Astrophysics paper describing the measurements. However, Nortmann continues, “the unexpectedly fast velocities measured in this planet’s atmosphere have allowed us to investigate different regions on the planet, as it causes their signals to shift to different parts of the light spectrum. This meant we could reconstruct a rough weather map of the planet, even though we cannot resolve these different regions optically.”

The astronomers also used the transit data to study the composition of WASP-127b’s atmosphere. They detected both water vapour and carbon monoxide. In addition, they found that the temperature was lower at the planet’s poles than elsewhere.

Removing unwanted signals

According to Nortmann, one of the challenges in the study was removing signals from Earth’s atmosphere and WASP-127b’s host star so as to focus on the planet itself. She notes that the work will have implications for researchers working on theoretical models that aim to predict wind patterns on exoplanets.

“They will now have to try to see if their models can recreate the wind speeds we have observed,” she tells Physics World. “The results also really highlight that when we investigate this and other planets, we have to take the 3D structure of winds into account when interpreting our results.”

The astronomers say they are now planning further observations of WASP-127b to find out whether its weather patterns are stable or change over time. “We would also like to investigate molecules on the planet other than H2O and CO,” Nortmann says. “This could possibly allow us to probe the wind at different altitudes in the planet’s atmosphere and understand the conditions there even better.”

The post Astronomers create a ‘weather map’ for a gas giant exoplanet appeared first on Physics World.

‘Sneeze simulator’ could improve predictions of pathogen spread

20 February 2025 at 10:30

A new “sneeze simulator” could help scientists understand how respiratory illnesses such as COVID-19 and influenza spread. Built by researchers at the Universitat Rovira i Virgili (URV) in Spain, the simulator is a three-dimensional model that incorporates a representation of the nasal cavity as well as other parts of the human upper respiratory tract. According to the researchers, it should help scientists to improve predictive models for respiratory disease transmission in indoor environments, and could even inform the design of masks and ventilation systems that mitigate the effects of exposure to pathogens.

For many respiratory illnesses, pathogen-laden aerosols expelled when an infected person coughs, sneezes or even breathes are important ways of spreading disease. Our understanding of how these aerosols disperse has advanced in recent years, mainly through studies carried out during and after the COVID-19 pandemic. Some of these studies deployed techniques such as spirometry and particle imaging to characterize the distributions of particle sizes and airflow when we cough and sneeze. Others developed theoretical models that predict how clouds of particles will evolve after they are ejected and how droplet sizes change as a function of atmospheric humidity and composition.

To build on this work, the URV researchers sought to understand how the shape of the nasal cavity affects these processes. They argue that neglecting this factor leads to an incomplete understanding of airflow dynamics and particle dispersion patterns, which in turn affects the accuracy of transmission modelling. As evidence, they point out that studies focused on sneezing (which occurs via the nose) and coughing (which occurs primarily via the mouth) detected differences in how far droplets travelled, the amount of time they stayed in the air and their pathogen-carrying potential – all parameters that feed into transmission models. The nasal cavity also affects the shape of the particle cloud ejected, which has previously been found to influence how pathogens spread.

The challenge they face is that the anatomy of the nasal cavity varies greatly from person to person, making it difficult to model. However, the URV researchers say that their new simulator, which is based on realistic 3D printed models of the upper respiratory tract and nasal cavity, overcomes this limitation, precisely reproducing the way particles are produced when people cough and sneeze.

Reproducing human coughs and sneezes

One of the features that allows the simulator to do this is a variable nostril opening. This enables the researchers to control air flow through the nasal cavity, and thus to replicate different sneeze intensities. The simulator also controls the strength of exhalations, meaning that the team could investigate how this and the size of nasal airways affect aerosol cloud dispersion.

During their experiments, which are detailed in Physics of Fluids, the URV researchers used high-speed cameras and a laser beam to observe how particles disperse following a sneeze. They studied three airflow rates typical of coughs and sneezes and monitored what happened with and without nasal cavity flow. Based on these measurements, they used a well-established model to predict the range of the aerosol cloud produced.

Simulator: Team member Nicolás Catalán with the three-dimensional model of the human upper respiratory tract. The mask in the background hides the 3D model to simulate any impact of the facial geometry on the particle dispersion. (Courtesy: Bureau for Communications and Marketing of the URV)

“We found that nasal exhalation disperses aerosols more vertically and less horizontally, unlike mouth exhalation, which projects them toward nearby individuals,” explains team member Salvatore Cito. “While this reduces direct transmission, the weaker, more dispersed plume allows particles to remain suspended longer and become more uniformly distributed, increasing overall exposure risk.”

These findings have several applications, Cito says. For one, the insights gained could be used to improve models used in epidemiology and indoor air quality management.

“Understanding how nasal exhalation influences aerosol dispersion can also inform the design of ventilation systems in public spaces, such as hospitals, classrooms and transportation systems to minimize airborne transmission risks,” he tells Physics World.

The results also suggest that protective measures such as masks should be designed to block both nasal and oral exhalations, he says, adding that full-face coverage is especially important in high-risk settings.

The researchers’ next goal is to study the impact of environmental factors such as humidity and temperature on aerosol dispersion. Until now, such experiments have only been carried out under controlled isothermal conditions, which does not reflect real-world situations. “We also plan to integrate our experimental findings with computational fluid dynamics simulations to further refine protective models for respiratory aerosol dispersion,” Cito reveals.


Scientists discover secret of ice-free polar-bear fur

19 February 2025 at 10:39

In the teeth of the Arctic winter, polar-bear fur always remains free of ice – but how? Researchers in Ireland and Norway say they now have the answer, and it could have applications far beyond wildlife biology. Having traced the fur’s ice-shedding properties to a substance produced by glands near the root of each hair, the researchers suggest that chemicals found in this substance could form the basis of environmentally-friendly new anti-icing surfaces and lubricants.

The substance in the bear’s fur is called sebum, and team member Julian Carolan, a PhD candidate at Trinity College Dublin and the AMBER Research Ireland Centre, explains that it contains three major components: cholesterol, diacylglycerols and anteiso-methyl-branched fatty acids. These chemicals have a similar ice adsorption profile to that of perfluoroalkyl substances (PFAS), which are commonly employed in anti-icing applications.

“While PFAS are very effective, they can be damaging to the environment and have been dubbed ‘forever chemicals’,” explains Carolan, the lead author of a Science Advances paper on the findings. “Our results suggest that we could replace these fluorinated substances with these sebum components.”

With and without sebum

Carolan and colleagues obtained these results by comparing polar bear hairs naturally coated with sebum to hairs where the sebum had been removed using a surfactant found in washing-up liquid. Their experiment involved forming a 2 × 2 × 2 cm block of ice on the samples and placing them in a cold chamber. Once the ice was in place, the team used a force gauge on a track to push it off. By measuring the maximum force needed to remove the ice and dividing this by the area of the sample, they obtained ice adhesion strengths for the washed and unwashed fur.

This experiment showed that the ice adhesion of unwashed polar bear fur is exceptionally low. While the often-accepted threshold for “icephobicity” is around 100 kPa, the unwashed fur measured as little as 50 kPa. In contrast, the ice adhesion of washed (sebum-free) fur is much higher, coming in at least 100 kPa greater than the unwashed fur.
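The adhesion figures above follow directly from the force-gauge measurement: adhesion strength is the peak detachment force divided by the ice–sample contact area. A minimal sketch (the 20 N peak force is a hypothetical number chosen to reproduce the ~50 kPa value quoted for unwashed fur; it is not reported in the article):

```python
# Ice adhesion strength = peak shear force needed to detach the ice block,
# divided by the ice-sample contact area. The 20 N peak force is a
# hypothetical value for illustration only.

def ice_adhesion_kpa(max_force_n, area_m2):
    """Adhesion strength in kPa from peak force (N) and contact area (m^2)."""
    return max_force_n / area_m2 / 1000.0

AREA_M2 = 0.02 * 0.02            # 2 cm x 2 cm face of the ice block
ICEPHOBIC_THRESHOLD_KPA = 100.0  # often-accepted icephobicity threshold

unwashed = ice_adhesion_kpa(20.0, AREA_M2)          # ~50 kPa
is_icephobic = unwashed < ICEPHOBIC_THRESHOLD_KPA   # True for unwashed fur
```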

What is responsible for the low ice adhesion?

Guided by this evidence of sebum’s role in keeping the bears ice-free, the researchers’ next task was to determine its exact composition. They did this using a combination of techniques, including gas chromatography, mass spectrometry, liquid chromatography-mass spectrometry and nuclear magnetic resonance spectroscopy. They then used density functional theory methods to calculate the adsorption energy of the major components of the sebum. “In this way, we were able to identify which elements were responsible for the low ice adhesion we had identified,” Carolan tells Physics World.

This is not the first time that researchers have investigated animals’ anti-icing properties. A team led by Anne-Marie Kietzig at Canada’s McGill University, for example, previously found that penguin feathers also boast an impressively low ice adhesion. Team leader Bodil Holst says that she was inspired to study polar bear fur by a nature documentary that depicted the bears entering and leaving water to hunt, rolling around in the snow and sliding down hills – all while remaining ice-free. She and her colleagues collaborated with Jon Aars and Magnus Andersen of the Norwegian Polar Institute, which carries out a yearly polar bear monitoring campaign in Svalbard, Norway, to collect their samples.

Insights into human technology

As well as solving an ecological mystery and, perhaps, inspiring more sustainable new anti-icing lubricants, Carolan says the team’s work is also yielding insights into technologies developed by humans living in the Arctic. “Inuit people have long used polar bear fur for hunting stools (nikorfautaq) and sandals (tuterissat),” he explains. “It is notable that traditional preparation methods protect the sebum on the fur by not washing the hair-covered side of the skin. This maintains its low ice adhesion property while allowing for quiet movement on the ice – essential for still hunting.”

The researchers now plan to explore whether it is possible to apply the sebum components they identified to surfaces as lubricants. Another potential extension, they say, would be to pursue questions about the ice-free properties of other Arctic mammals such as reindeer, the arctic fox and wolverine. “It would be interesting to discover if these animals share similar anti-icing properties,” Carolan says. “For example, wolverine fur is used in parka ruffs by Canadian Inuit as frost formed on it can easily be brushed off.”


Schrödinger’s cat states appear in the nuclear spin state of antimony

13 February 2025 at 17:15

Physicists at the University of New South Wales (UNSW) are the first to succeed in creating and manipulating quantum superpositions of a single, large nuclear spin. The superposition involves spin states that are very far apart, and it is therefore considered a Schrödinger’s cat state. The work could be important for applications in quantum information processing and quantum error correction.

It was Erwin Schrödinger who, in 1935, devised his famous thought experiment involving a cat that could, worryingly, be both dead and alive at the same time. In his gedanken experiment, the decay of a radioactive atom triggers a mechanism (the breaking of a vial containing a poisonous gas) that kills the cat. However, since the decay of the radioactive atom is a quantum phenomenon, the atom is in a superposition of being decayed and not decayed. If the cat and poison are hidden in a box, we do not know if the cat is alive or dead. Instead, the state of the feline is a superposition of dead and alive – known as a Schrödinger’s cat state – until we open the box.

Schrödinger’s cat state (or just cat state) is now used to refer to a superposition of two very different states of a quantum system. Creating cat states in the lab is no easy task, but researchers have managed to do this in recent years using the quantum superposition of coherent states of a laser field with different amplitudes, or phases, of the field. They have also created cat states using a trapped ion (with the vibrational state of the ion in the trap playing the role of the cat) and coherent microwave fields confined to superconducting boxes combined with Rydberg atoms and superconducting quantum bits (qubits).

Antimony atom cat

The cat state in the UNSW study is hosted in an atom of antimony, a heavy atom with a large nuclear spin. The high spin value implies that, instead of just pointing up or down (that is, in one of two directions), the nuclear spin of antimony can be in spin states corresponding to eight different directions. This makes it a high-dimensional quantum system that is valuable for quantum information processing and for encoding error-correctable logical qubits. The atom was embedded in a silicon quantum chip that allows for readout and control of the nuclear spin state.

Normally, a qubit is described by just two quantum states, explains Xi Yu, who is lead author of a paper describing the study. For example, an atom with its spin pointing down can be labelled as the “0” state and the spin pointing up, the “1” state. The problem with such a system is that information contained in these states is fragile and can be easily lost when a 0 switches to a 1, or vice versa. The probability of this logical error occurring is reduced by creating a qubit using a system like the antimony atom. With its eight different spin directions, a single error is not enough to erase the quantum information – there are still seven quantum states left, and it would take seven consecutive errors to turn the 0 into a 1.
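The counting argument can be made concrete with a toy model: encode logical 0 and 1 in the two extreme levels of a d-level system, and let each physical error move the state by one level. This is purely illustrative and not the actual antimony encoding used in the study:

```python
# Toy model of the robustness argument: logical 0 and 1 live in the two
# extreme levels of a d-level system, and each physical error shifts the
# state by one level. Illustrative only -- not the real antimony encoding.

def errors_to_flip(levels):
    """Consecutive single-level errors needed to turn logical 0 into logical 1."""
    return levels - 1

assert errors_to_flip(2) == 1  # ordinary two-level qubit: one flip is fatal
assert errors_to_flip(8) == 7  # antimony's eight nuclear-spin levels
```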

More room for error

The information is still encoded in binary code (0 and 1), but there is more room for error between the logical codes, says team leader Andrea Morello. “If an error occurs, we detect it straight away, and we can correct it before further errors accumulate.”

The researchers say they were not initially looking to make and manipulate cat states but started with a project on high-spin nuclei for reasons unrelated to quantum information. They were in fact interested in observing quantum chaos in a single nuclear spin, which had been an experimental “holy grail” for a very long time, says Morello. “Once we began working with this system, we first got derailed by the serendipitous discovery of nuclear electric resonance,” he remembers. “We then became aware of some new theoretical ideas for the use of high-spin systems in quantum information and quantum error-correcting codes.

“We therefore veered towards that research direction, and this is our first big result in that context,” he tells Physics World.

Scalable technology

The main challenge the team had to overcome in their study was to set up seven “clocks” that had to be precisely synchronized, so they could keep track of the quantum state of the eight-level system. Until quite recently, this would have involved cumbersome programming of waveform generators, explains Morello. “The advent of FPGA [field-programmable gate array] generators, tailored for quantum applications, has made this research much easier to conduct now.”

While there have already been a few examples of such physical platforms in which quantum information can be encoded in a (Hilbert) space of dimension larger than two – for example, microwave cavities or trapped ions – these were relatively large in size: bulk microwave cavities are typically the size of a matchbox, he says. “Here, we have reconstructed many of the properties of other high-dimensional systems, but within an atomic-scale object – a nuclear spin. It is very exciting, and quite plausible, to imagine a quantum processor in silicon, containing millions of such Schrödinger cat states.”

The fact that the cat is hosted in a silicon chip means that this technology could be scaled up in the long-term using methods similar to those already employed in the computer chip industry today, he adds.

Looking ahead, the UNSW team now plans to demonstrate quantum error correction in its antimony system. “Beyond that, we are working to integrate the antimony atoms with lithographic quantum dots, to facilitate the scalability of the system and perform quantum logic operations between cat-encoded qubits,” reveals Morello.

The present study is detailed in Nature Physics.


Bacterial ‘cables’ form a living gel in mucus

12 February 2025 at 15:00

Bacterial cells in solutions of polymers such as mucus grow into long cable-like structures that buckle and twist on each other, forming a “living gel” made of intertwined cells. This behaviour is very different from what happens in polymer-free liquids, and researchers at the California Institute of Technology (Caltech) and Princeton University, both in the US, say that understanding it could lead to new treatments for bacterial infections in patients with cystic fibrosis. It could also help scientists understand how cells organize themselves into polymer-secreting conglomerations of bacteria called biofilms that can foul medical and industrial equipment.

Interactions between bacteria and polymers are ubiquitous in nature. For example, many bacteria live as multicellular colonies in polymeric fluids, including host-secreted mucus, exopolymers in the ocean and the extracellular polymeric substance that encapsulates biofilms. Often, these growing colonies can become infectious, including in cystic fibrosis patients, whose mucus is more concentrated than it is in healthy individuals.

Laboratory studies of bacteria, however, typically focus on cells in polymer-free fluids, explains study leader Sujit Datta, a biophysicist and bioengineer at Caltech. “We wondered whether interactions with extracellular polymers influence proliferating bacterial colonies,” says Datta, “and if so, how?”

Watching bacteria grow in mucus

In their work, which is detailed in Science Advances, the Caltech/Princeton team used a confocal microscope to monitor how different species of bacteria grew in purified samples of mucus. The samples, Datta explains, were provided by colleagues at the Massachusetts Institute of Technology and the Albert Einstein College of Medicine.

Normally, when bacterial cells divide, the resulting “daughter” cells diffuse away from each other. However, in polymeric mucus solutions, Datta and colleagues observed that the cells instead remained stuck together and began to form long cable-like structures. These cables can contain thousands of cells, and eventually they start bending and folding on top of each other to form an entangled network.

“We found that we could quantitatively predict the conditions under which such cables form using concepts from soft-matter physics typically employed to describe non-living gels,” Datta says.

Support for bacterial colonies

The team’s work reveals that polymers, far from being a passive medium, play a pivotal role in supporting bacterial life by shaping how cells grow in colonies. The form of these colonies – their morphology – is known to influence cell-cell interactions and is important for maintaining their genetic diversity. It also helps determine how resilient a colony is to external stressors.

“By revealing this previously-unknown morphology of bacterial colonies in concentrated mucus, our finding could help inform ways to treat bacterial infections in patients with cystic fibrosis, in which the mucus that lines the lungs and gut becomes more concentrated, often causing the bacterial infections that take hold in that mucus to become life-threatening,” Datta tells Physics World.

Friend or foe?

As for why cable formation is important, Datta explains that there are two schools of thought. The first is that by forming large cables, bacteria may become more resilient against the body’s immune system, making them more infectious. The other possibility is that the reverse is true – that cable formation could in fact leave bacteria more exposed to the host’s defence mechanisms. These include “mucociliary clearance”, which is the process by which tiny hairs on the surface of the lungs constantly sweep up mucus and propel it upwards.

“Could it be that when bacteria are all clumped together in these cables, it is actually easier to get rid of them by expelling them out of the body?” Datta asks.

Investigating these hypotheses is an avenue for future research, he adds. “Ours is a fundamental discovery on how bacteria grow in complex environments, more akin to their natural habitats,” Datta says. “We also expect it will motivate further work exploring how cable formation influences the ways in which bacteria interact with hosts, phages, nutrients and antibiotics.”


Organic photovoltaic solar cells could withstand harsh space environments

11 February 2025 at 15:00

Carbon-based organic photovoltaics (OPVs) may be much better than previously thought at withstanding the high-energy radiation and sub-atomic particle bombardments of space environments. This finding, by researchers at the University of Michigan in the US, challenges a long-standing belief that OPV devices systematically degrade under conditions such as those encountered by spacecraft in low-Earth orbit. If verified in real-world tests, the finding suggests that OPVs could one day rival traditional thin-film photovoltaic technologies based on rigid semiconductors such as gallium arsenide.

Lightweight, robust, radiation-resilient photovoltaics are critical technologies for many aerospace applications. OPV cells are particularly attractive for this sector because they are ultra-lightweight, thermally stable and highly flexible. This last property allows them to be integrated onto curved surfaces as well as flat ones.

Today’s single-junction OPV devices also have a further advantage. Thanks to power conversion efficiencies (PCEs) that now exceed 20%, their specific power – that is, the power generated per weight – can be up to 40 W/g. This is significantly higher than that of traditional photovoltaic technologies, including those based on silicon (1 W/g) and gallium arsenide (3 W/g) on flexible substrates. Devices with such a large specific power could provide energy for small spacecraft heading into low-Earth orbit and beyond.
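Specific power is simply output power divided by cell mass, so the quoted figures translate directly into the photovoltaic mass a spacecraft must carry. A quick sketch using the article’s numbers (the 10 W power budget is an invented example):

```python
# Specific power (W/g) converts directly into the photovoltaic mass needed
# for a given power budget. Specific-power figures are those quoted in the
# article; the 10 W budget is an invented example.

SPECIFIC_POWER_W_PER_G = {
    "OPV": 40.0,
    "GaAs (flexible)": 3.0,
    "Si (flexible)": 1.0,
}

def cell_mass_g(power_budget_w, specific_power_w_per_g):
    """Mass of photovoltaic material needed to supply a given power."""
    return power_budget_w / specific_power_w_per_g

masses = {tech: cell_mass_g(10.0, sp) for tech, sp in SPECIFIC_POWER_W_PER_G.items()}
# OPV needs 0.25 g of cells versus 10 g for flexible silicon
```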

Until now, however, scientists believed that these materials had a fatal flaw for space applications: they weren’t robust to irradiation by the energetic particles (predominantly fluxes of electrons and protons) that spacecraft routinely encounter.

Testing two typical OPV materials

In the new work, researchers led by electrical and computer engineer Yongxi Li and physicist Stephen Forrest analysed how two typical OPV materials behave when exposed to proton particles with differing energies. They did this by characterizing their optoelectronic properties before and after irradiation exposure. The first materials were made up of small molecules (DBP, DTDCPB and C70) that had been grown using a technique called vacuum thermal evaporation (VTE). The second group consisted of solution-processed small molecules and polymers (PCE-10, PM6, BT-CIC and Y6).

The team’s measurements show that the OPVs grown by VTE retained their initial PV efficiency under proton fluxes of up to 10¹² cm⁻². In contrast, the polymer-based OPVs lost 50% of their original efficiency under the same conditions. This, say the researchers, is because proton irradiation breaks carbon–hydrogen bonds in the polymers’ molecular alkyl side chains. This leads to polymer cross-linking and the generation of charge traps that imprison electrons and prevent them from generating useful current.

The good news, Forrest says, is that many of these defects can be mended by thermally annealing the materials at temperatures of 45 °C or less. After such an annealing, the cell’s PCE returns to nearly 90% of its value before irradiation. This means that Sun-facing solar cells made of these materials could essentially “self-heal”, though Forrest acknowledges that whether this actually happens in deep space is a question that requires further investigation. “It may be more straightforward to design the material so that the electron traps never appear in the first place or by filling them with other atoms, so eliminating this problem,” he says.

According to Li, the new study, which is detailed in Joule, could aid the development of standardized stability tests for how protons interact with OPV devices. Such tests already exist for c-Si and GaAs solar cells, but not for OPVs, he says.

The Michigan researchers say they will now be developing materials that combine high PCEs with strong resilience to proton exposure. “We will then use these materials to fabricate OPV devices that we will then test on CubeSats and spacecraft in real-world environments,” Li tells Physics World.


New class of quasiparticle appears in bilayer graphene

10 February 2025 at 10:00

A newly-discovered class of quasiparticles known as fractional excitons offers fresh opportunities for condensed-matter research and could reveal unprecedented quantum phases, say physicists at Brown University in the US. The new quasiparticles, which are neither bosons nor fermions and carry no charge, could have applications in quantum computing and sensing, they say.

In our everyday, three-dimensional world, particles are classified as either fermions or bosons. Fermions such as electrons follow the Pauli exclusion principle, which prevents them from occupying the same quantum state. This property underpins phenomena like the structure of atoms and the behaviour of metals and insulators. Bosons, on the other hand, can occupy the same state, allowing for effects like superconductivity and superfluidity.

Fractional excitons defy this traditional classification, says Jia Leo Li, who led the research. Their properties lie somewhere in between those of fermions and bosons, making them more akin to anyons, which are particles that exist only in two-dimensional systems. But that’s only one aspect of their unusual nature, Li adds. “Unlike typical anyons, which carry a fractional charge of an electron, fractional excitons are neutral particles, representing a distinct type of quantum entity,” he says.

The experiment

Li and colleagues created the fractional excitons using two sheets of graphene – a form of carbon just one atom thick – separated by a layer of another two-dimensional material, hexagonal boron nitride. This layered setup allowed them to precisely control the movement of electrons and positively-charged “holes” and thus to generate excitons, which are pairs of electrons and holes that behave like single particles.

The team then applied a 12 T magnetic field to their bilayer structure. This strong field caused the electrons in the graphene to split into fractional charges – a well-known phenomenon that occurs in the fractional quantum Hall effect. “Here, strong magnetic fields create Landau electronic levels that induce particles with fractional charges,” Li explains. “The bilayer structure facilitates pairing between these positive and negative charges, making fractional excitons possible.”

“Distinct from any known particles”

The fractional excitons represent a quantum system of neutral particles that obey fractional quantum statistics, interact via dipolar forces and are distinct from any known particles, Li tells Physics World. He adds that his team’s study, which is detailed in Nature, builds on prior works that predicted the existence of excitons in the fractional quantum Hall effect (see, for example, Nature Physics 13 751 (2017); Nature Physics 15 898–903 (2019); and Science 375 205–209 (2022)).

The researchers now plan to explore the properties of fractional excitons further. “Our key objectives include measuring the fractional charge of the constituent particles and confirming their anyonic statistics,” Li explains. Studies of this nature could shed light on how fractional excitons interact and flow, potentially revealing new quantum phases, he adds.

“Such insights could have profound implications for quantum technologies, including ultra-sensitive sensors and robust quantum computing platforms,” Li says. “As research progresses, fractional excitons may redefine the boundaries of condensed-matter physics and applied quantum science.”


Two-faced graphene nanoribbons could make the first purely carbon-based ferromagnets

6 February 2025 at 13:00

A new graphene nanostructure could become the basis for the first ferromagnets made purely from carbon. Known as an asymmetric or “Janus” graphene nanoribbon after the two-faced god in Roman mythology, the opposite edges of this structure have different properties, with one edge taking a zigzag form. Lu Jiong, a researcher at the National University of Singapore (NUS) who co-led the effort to make the structure, explains that it is this zigzag edge that gives rise to the ferromagnetic state, making the structure the first of its kind.

“The work is the first demonstration of the concept of a Janus graphene nanoribbon (JGNR) strand featuring a single ferromagnetic zigzag edge,” Lu says.

Graphene nanostructures with zigzag-shaped edges show much promise for technological applications thanks to their electronic and magnetic properties. Zigzag GNRs (ZGNRs) are especially appealing because the behaviour of their electrons can be tuned from metal-like to semiconducting by adjusting the length or width of the ribbons; modifying the structure of their edges; or doping them with non-carbon atoms. The same techniques can also be used to make such materials magnetic. This versatility means they can be used as building blocks for numerous applications, including quantum and spintronics technologies.

Previously, only two types of symmetric ZGNRs had been synthesized via on-surface chemistry: 6-ZGNR and nitrogen-doped 6-ZGNR, where the “6” refers to the number of carbon rows across the nanoribbon’s width. In the latest work, Lu and co-team leaders Hiroshi Sakaguchi of the University of Kyoto, Japan and Steven Louie at the University of California, Berkeley, US sought to expand this list.

“It has been a long-sought goal to make other forms of zigzag-edge related GNRs with exotic quantum magnetic states for studying new science and developing new applications,” says team member Song Shaotang, the first author of a paper in Nature about the research.

ZGNRs with asymmetric edges

Building on topological classification theory developed in previous research by Louie and colleagues, theorists in the Singapore-Japan-US collaboration predicted that it should be possible to tune the magnetic properties of these structures by making ZGNRs with asymmetric edges. “These nanoribbons have one pristine zigzag edge and another edge decorated with a pattern of topological defects spaced by a certain number m of missing motifs,” Louie explains. “Our experimental team members, using innovative z-shaped precursor molecules for synthesis, were able to make two kinds of such ZGNRs. Both of these have one edge that supports a benzene motif array with a spacing of m = 2 missing benzene rings in between. The other edge is a conventional zigzag edge.”

Crucially, the theory predicted that the magnetic behaviour – ranging from antiferromagnetism to ferrimagnetism to ferromagnetism – of these Janus GNRs (JGNRs), so called because of their two distinct edges, could be controlled by varying the value of m. In particular, says Louie, the configuration of m = 2 is predicted to show ferromagnetism – that is, all electron spins aligned in the same direction – concentrated entirely on the pristine zigzag edge. This behaviour contrasts sharply with that of symmetric ZGNRs, where spin polarization occurs on both edges and the aligned edge spins are antiferromagnetically coupled across the width of the ribbon.

Precursor design and synthesis

To validate these theoretical predictions, the team synthesized JGNRs on a surface. They then used advanced scanning tunnelling microscope (STM) and atomic force microscope (AFM) measurements to visualize the materials’ exact real-space chemical structure. These measurements also revealed the emergence of exotic magnetic states in the JGNRs synthesized in Lu’s lab at the National University of Singapore (NUS).

Two sides: An atomic model of the Janus graphene nanoribbons (left) and an atomic force microscopy image of the same structure (right). (Courtesy: National University of Singapore)

Sakaguchi explains that, in the past, GNRs were mainly synthesized using symmetric precursor chemical structures, largely because their asymmetric counterparts were so scarce. One of the challenges in this work, he notes, was to design asymmetric polymeric precursors that could undergo the essential fusion (dehydrogenation) process to form JGNRs. These molecules often orient randomly, so the researchers needed to use additional techniques to align them unidirectionally prior to the polymerization reaction. “Addressing this challenge in the future could allow us to produce JGNRs with a broader range of magnetic properties,” Sakaguchi says.

Towards carbon-based ferromagnets

According to Lu, the team’s research shows that JGNRs could become the first carbon-based spin transport channels to show ferromagnetism. They might even lead to the development of carbon-based ferromagnets, capping off a research effort that began in the 1980s.

However, Lu acknowledges that there is much work to do before these structures find real-world applications. For one, they are not currently very robust when exposed to air. “The next goal,” he says, “is to develop chemical modifications that will enhance the stability of these 1D structures so that they can survive under ambient conditions.”

A further goal, he continues, is to synthesize JGNRs with different values of m, as well as other classes of JGNRs with different types of defective edges. “We will also be exploring the 1D spin physics of these structures and [will] investigate their spin dynamics using techniques such as scanning tunnelling microscopy combined with electron spin resonance, paving the way for their potential applications in quantum technologies.”

The post Two-faced graphene nanoribbons could make the first purely carbon-based ferromagnets appeared first on Physics World.

Alternative building materials could store massive amounts of carbon dioxide

27 janvier 2025 à 13:00

Replacing conventional building materials with alternatives that sequester carbon dioxide could allow the world to lock away up to half the CO2 generated by humans each year – about 16 billion tonnes. This is the finding of researchers at the University of California Davis and Stanford University, both in the US, who studied the sequestration potential of materials such as carbonate-based aggregates and biomass fibre in brick.

Despite efforts to reduce greenhouse gas emissions by decarbonizing industry and switching to renewable sources of energy, it is likely that humans will continue to produce significant amounts of CO2 beyond the target “net zero” date of 2050. Carbon storage and sequestration – either at source or directly from the atmosphere – are therefore worth exploring as an additional route towards this goal. Researchers have proposed several possible ways of doing this, including injecting carbon underground or deep under the ocean. However, all these scenarios are challenging to implement practically and pose their own environmental risks.

Modifying common building materials

In the present work, a team of civil engineers and earth systems scientists led by Elisabeth van Roijen (then a PhD student at UC Davis) calculated how much carbon could be stored in modified versions of several common building materials. These include concrete (cement) and asphalt containing carbonate-based aggregates; bio-based plastics; wood; biomass-fibre bricks (from waste biomass); and biochar filler in cement.

The researchers obtained the “16 billion tonnes of CO2” figure by assuming that all aggregates currently employed in concrete would be replaced with carbonate-based versions. They also supplemented 15% of cement with biochar and the remainder with carbonatable cements; increased the amount of wood used in all new construction by 20%; and supplemented 15% of bricks with biomass and the remainder with carbonatable calcium hydroxide. A final element in their calculation was to replace all plastics used in construction today with bio-based plastics and all bitumen with bio-oil in asphalt.

“We calculated the carbon storage potential of each material based on the mass ratio of carbon in each material,” explains van Roijen. “These values were then scaled up based on 2016 consumption values for each material.”
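The mass-ratio scaling van Roijen describes can be sketched in a few lines of Python. Every figure below is a hypothetical placeholder chosen for illustration, not a value from the study:

```python
# Illustrative sketch of the mass-ratio scaling described above.
# All numbers here are hypothetical placeholders, not the study's data.

CO2_PER_C = 44.0 / 12.0  # tonnes of CO2 represented by one tonne of stored carbon

def co2_stored(annual_consumption_t, carbon_mass_fraction):
    """Annual CO2 sequestration for one material, in tonnes."""
    return annual_consumption_t * carbon_mass_fraction * CO2_PER_C

# Hypothetical example: 1 billion tonnes of biomass-fibre brick per year
# at 10% carbon by mass
print(f"{co2_stored(1e9, 0.10):.2e} tonnes of CO2 per year")
```

Summing such per-material estimates, each scaled to real consumption data, is essentially how a headline sequestration figure is assembled.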

“The sheer magnitude of carbon storage is pretty impressive”

While the production of some replacement materials would need to increase to meet the resulting demand, van Roijen and colleagues found that resources readily available today – for example, mineral-rich waste streams – would already let us replace 10% of conventional aggregates with carbonate-based ones. “These alone could store 1 billion tonnes of CO2,” she says. “The sheer magnitude of carbon storage is pretty impressive, especially when you put it in context of the level of carbon dioxide removal needed to stay below the 1.5 and 2 °C targets set by the Intergovernmental Panel on Climate Change (IPCC).”

Indeed, even if the world doesn’t implement these technologies until 2075, we could still store enough carbon between 2075 and 2100 to stay below these targets, she tells Physics World. “This is assuming, of course, that all other decarbonization efforts outlined in the IPCC reports are also implemented to achieve net-zero emissions,” she says.

Building materials are a good option for carbon storage

The motivation for the study, she explains, came from the urgent need – as expressed by the IPCC – to not only reduce new carbon emissions through rapid and significant decarbonization, but to also remove large amounts of CO2 already present in the atmosphere. “Rather than burying it in geological, terrestrial or ocean reservoirs, we wanted to look into the possibility of leveraging existing technology – namely conventional building materials – as a way to store CO2. Building materials are a good option for carbon storage given the massive quantity (30 billion tonnes) produced each year, not to mention their durability.”

Van Roijen, who is now a postdoctoral researcher at the US Department of Energy’s National Renewable Energy Laboratory, hopes that this work, which is detailed in Science, will go beyond the reach of the research lab and attract the attention of policymakers and industrialists. While some of the technologies outlined in this study are new and require further research, others, such as bio-based plastics, are well established and simply need some economic and political support, she says. “That said, conventional building materials such as concrete and plastics are pretty cheap, so there will need to be some incentive for industries to make the switch over to these low-carbon materials.”

The post Alternative building materials could store massive amounts of carbon dioxide appeared first on Physics World.

Fast radio burst came from a neutron star’s magnetosphere, say astronomers

24 janvier 2025 à 16:00

The exact origins of cosmic phenomena known as fast radio bursts (FRBs) are not fully understood, but scientists at the Massachusetts Institute of Technology (MIT) in the US have identified a fresh clue: at least one of these puzzling cosmic discharges got its start very close to the object that emitted it. This result, which is based on measurements of a fast radio burst called FRB 20221022A, puts to rest a long-standing debate about whether FRBs can escape their emitters’ immediate surroundings. The conclusion: they can.

“Competing theories argued that FRBs might instead be generated much farther away in shock waves that propagate far from the central emitting object,” explains astronomer Kenzie Nimmo of MIT’s Kavli Institute for Astrophysics and Space Research. “Our findings show that, at least for this FRB, the emission can escape the intense plasma near a compact object and still be detected on Earth.”

As their name implies, FRBs are brief, intense bursts of radio waves. The first was detected in 2007, and since then astronomers have spotted thousands of others, including some within our own galaxy. They are believed to originate from cataclysmic processes involving compact celestial objects such as neutron stars, and they typically last a few milliseconds. However, astronomers have recently found evidence for bursts a thousand times shorter, further complicating the question of where they come from.

Nimmo and colleagues say they have now conclusively demonstrated that FRB 20221022A, which was detected by the Canadian Hydrogen Intensity Mapping Experiment (CHIME) in 2022, comes from a region only 10 000 km in size. This, they claim, means it must have originated in the highly magnetized region that surrounds a star: the magnetosphere.

“Fairly intuitive” concept

The researchers obtained their result by measuring the FRB’s scintillation, which Nimmo explains is conceptually similar to the twinkling of stars in the night sky. The reason stars twinkle is that they are so far away that they appear to us as point sources. This means that their apparent brightness is more affected by the Earth’s atmosphere than is the case for planets and other objects that are closer to us and appear larger.

“We applied this same principle to FRBs using plasma in their host galaxy as the ‘scintillation screen’, analogous to Earth’s atmosphere,” Nimmo tells Physics World. “If the plasma causing the scintillation is close to the FRB source, we can use this to infer the apparent size of the FRB emission region.”

According to Nimmo, different models of FRB origins predict very different sizes for this region. “Emissions originating within the magnetized environments of compact objects (for example, magnetospheres) would produce a much smaller apparent size compared to emission generated in distant shocks propagating far from the central object,” she explains. “By constraining the emission region size through scintillation, we can determine which physical model is more likely to explain the observed FRB.”
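The intuition behind this size constraint can be sketched with a simplified Fresnel-scale estimate. This is only a back-of-envelope illustration – the frequency and screen distance below are hypothetical placeholders, and the collaboration’s actual analysis is considerably more involved:

```python
import math

# Back-of-the-envelope Fresnel-scale estimate for a scintillation "screen".
# A source much smaller than this scale (suitably projected) behaves as a
# point source and scintillates strongly; a much larger source washes the
# twinkling out. Numbers are hypothetical, not values from the study.

c = 3.0e8              # speed of light, m/s
freq = 600e6           # radio frequency in the CHIME band, Hz
wavelength = c / freq  # = 0.5 m

d_screen = 3.0e19      # source-to-screen distance, m (hypothetical)

r_fresnel = math.sqrt(wavelength * d_screen / (2 * math.pi))
print(f"Fresnel scale at the screen: {r_fresnel / 1e3:.2e} km")
```

A screen close to the source therefore probes a correspondingly small emission region, which is why detecting scintillation from host-galaxy plasma can pin the emission down to magnetospheric scales.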

Challenge to existing models

The idea for the new study, Nimmo says, stemmed from a conversation with another astronomer, Pawan Kumar of the University of Texas at Austin, early last year. “He shared a theoretical result showing how scintillation could be used as a ‘probe’ to constrain the size of the FRB emission region, and, by extension, the FRB emission mechanism,” Nimmo says. “This sparked our interest and we began exploring the FRBs discovered by CHIME to search for observational evidence for this phenomenon.”

The researchers say that their study, which is detailed in Nature, shows that at least some FRBs originate from magnetospheric processes near compact objects such as neutron stars. This finding is a challenge for models of conditions in these extreme environments, they say, because if FRB signals can escape the dense plasma expected to exist near such objects, the plasma may be less opaque than previously assumed. Alternatively, unknown factors may be influencing FRB propagation through these regions.

A diagnostic tool

One advantage of studying FRB 20221022A is that it is relatively conventional in terms of its brightness and the duration of its signal (around 2 milliseconds). It does have one special property, however, as discovered by Nimmo’s colleagues at McGill University in Canada: its light is highly polarized. What is more, the pattern of its polarization implies that its emitter must be rotating in a way that is reminiscent of pulsars, which are highly magnetized, rotating neutron stars. This result is reported in a separate paper in Nature.

In Nimmo’s view, the MIT team’s study of this (mostly) conventional FRB establishes scintillation as a “powerful diagnostic tool” for probing FRB emission mechanisms. “By applying this method to a larger sample of FRBs, which we now plan to investigate, future studies could refine our understanding of their underlying physical processes and the diverse environments they occupy.”

The post Fast radio burst came from a neutron star’s magnetosphere, say astronomers appeared first on Physics World.

Terahertz light produces a metastable magnetic state in an antiferromagnet

24 janvier 2025 à 10:00

Physicists in the US, Europe and Korea have produced a long-lasting light-driven magnetic state in an antiferromagnetic material for the first time. While their project started out as a fundamental study, they say the work could have applications for faster and more compact memory and processing devices.

Antiferromagnetic materials are promising candidates for future high-density memory devices. This is because in antiferromagnets, the spins used as the bits or data units flip quickly, at frequencies in the terahertz range. Such rapid spin flips are possible because, by definition, the spins in antiferromagnets align antiparallel to each other, leading to strong interactions among the spins. This is different from ferromagnets, which have parallel electron spins and are used in today’s memory devices such as computer hard drives.

Another advantage is that antiferromagnets display almost no macroscopic magnetization. This means that bits can be packed more densely onto a chip than is the case for the ferromagnets employed in conventional magnetic memory, which do have a net magnetization.

A further attraction is that the values of bits in antiferromagnetic memory devices are generally unaffected by the presence of stray magnetic fields. However, Nuh Gedik of the Massachusetts Institute of Technology (MIT), who led the latest research effort, notes that this robustness can be a double-edged sword: the fact that antiferromagnet spins are insensitive to weak magnetic fields also makes them difficult to control.

Antiferromagnetic state lasts for more than 2.5 milliseconds

In the new work, Gedik and colleagues studied FePS3, which becomes an antiferromagnet below a critical temperature of around 118 K. By applying intense pulses of terahertz-frequency light to this material, they were able to control this transition, placing the material in a metastable magnetic state that lasts for more than 2.5 milliseconds even after the light source is switched off. While such light-induced transitions have been observed before, Gedik notes that they typically only last for picoseconds.

The technique works because the terahertz source stimulates the atoms in the FePS3 at the same frequency at which the atoms collectively vibrate (the resonance frequency). When this happens, Gedik explains that the atomic lattice undergoes a unique form of stretching. This stretching cannot be achieved with external mechanical forces, and it pushes the spins of the atoms out of their magnetically alternating alignment.

The result is a state in which spins pointing in one direction outweigh those pointing in the other, transforming the originally antiferromagnetic material into one with a net magnetization. This metastable state becomes increasingly robust as the temperature of the material approaches the antiferromagnetic transition point. That is a sign that critical fluctuations near the phase transition are a key factor in enhancing both the magnitude and lifetime of the new magnetic state, Gedik says.

A new experimental setup

The team, which includes researchers from the Max Planck Institute for the Structure and Dynamics of Matter in Germany, the University of the Basque Country in Spain, Seoul National University and the Flatiron Institute in New York, wasn’t originally aiming to produce long-lived magnetic states. Instead, its members were investigating nonlinear interactions among low-energy collective modes, such as phonons (vibrations of the atomic lattice) and spin excitations called magnons, in layered magnetic materials like FePS3. It was for this purpose that they developed a new experimental setup capable of generating strong terahertz pulses with a wide spectral bandwidth.

“Since nonlinear interactions are generally weak, we chose a family of materials known for their strong coupling between magnetic spins and phonons,” Gedik says. “We also suspected that, under such intense resonant excitation in these particular materials, something intriguing might occur – and indeed, we discovered a new magnetic state with an exceptionally long lifetime.”

While the researchers’ focus remains on fundamental questions, they say the new findings may enable a “significant step” toward practical applications for ultrafast science. “The antiferromagnetic nature of the material holds great potential for enabling faster and more compact memory and processing devices,” says Gedik’s MIT colleague Batyr Ilyas. He adds that the observed long lifetime of the induced state means that it can be explored further using conventional experimental probes used in spintronic technologies.

The team’s next step will be to study the nonlinear interactions between phonons and magnons more closely using two-dimensional spectroscopy experiments. “Second, we plan to demonstrate the feasibility of probing this metastable state through electrical transport experiments,” Ilyas tells Physics World. “Finally, we aim to investigate the generalizability of this phenomenon in other materials, particularly those exhibiting enhanced fluctuations near room temperature.”

The work is detailed in Nature.

The post Terahertz light produces a metastable magnetic state in an antiferromagnet appeared first on Physics World.

New candidate emerges for a universal quantum electrical standard

23 janvier 2025 à 10:00

Physicists in Germany have developed a new way of defining the standard unit of electrical resistance. The advantage of the new technique is that because it is based on the quantum anomalous Hall effect rather than the ordinary quantum Hall effect, it does not require the use of applied magnetic fields. While the method in its current form requires ultracold temperatures, an improved version could allow quantum-based voltage and resistance standards to be integrated into a single, universal quantum electrical reference.

Since 2019, all base units in the International System of Units (SI) have been defined with reference to fundamental constants of nature. For example, the definition of the kilogram, which was previously based on a physical artefact (the international prototype kilogram), is now tied to Planck’s constant, h.

These new definitions do come with certain challenges. For example, today’s gold-standard way to experimentally determine the value of h (as well as the elementary charge e, another base SI constant) is to measure a quantized electrical resistance (the von Klitzing constant RK = h/e²) and a quantized voltage (the Josephson constant KJ = 2e/h). With RK and KJ pinned down, scientists can then calculate e and h.
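The algebra behind that last step is short: from RK = h/e² and KJ = 2e/h it follows that e = 2/(RK·KJ) and h = 4/(RK·KJ²). A minimal sketch, using the conventional values of the two constants:

```python
# Recovering e and h from the two quantum electrical constants.
# From RK = h/e^2 and KJ = 2e/h:
#   e = 2 / (RK * KJ)   and   h = 4 / (RK * KJ^2)

RK = 25812.80745    # von Klitzing constant, ohms
KJ = 483597.8484e9  # Josephson constant, Hz/V

e = 2 / (RK * KJ)     # elementary charge, ~1.602e-19 C
h = 4 / (RK * KJ**2)  # Planck's constant, ~6.626e-34 J s

print(f"e = {e:.6e} C")
print(f"h = {h:.6e} J s")
```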

To measure RK with high precision, physicists use the fact that it is related to the quantized values of the Hall resistance of a two-dimensional electron system (such as the ones that form in semiconductor heterostructures) in the presence of a strong magnetic field. This quantized change in resistance is known as the quantum Hall effect (QHE), and in semiconductors like GaAs or AlGaAs, it shows up at fields of around 10 Tesla. In graphene, a two-dimensional carbon sheet, fields of about 5 T are typically required.

The problem with this method is that KJ is measured by means of a separate phenomenon known as the AC Josephson effect, and the large external magnetic fields that are so essential to the QHE measurement render Josephson devices inoperable. According to Charles Gould of the Institute for Topological Insulators at the University of Würzburg (JMU), who led the latest research effort, this makes it difficult to integrate a QHE-based resistance standard with the voltage standard.

A way to measure RK at zero external magnetic field

Relying on the quantum anomalous Hall effect (QAHE) instead would solve this problem. This variant of the QHE arises from electron transport phenomena recently identified in a family of materials known as ferromagnetic topological insulators. Such quantum spin Hall systems, as they are also known, conduct electricity along their (quantized) edge channels or surfaces, but act as insulators in their bulk. In these materials, spontaneous magnetization means the QAHE manifests as a quantization of resistance even at weak (or indeed zero) magnetic fields.

In the new work, Gould and colleagues made Hall resistance quantization measurements in the QAHE regime on a device made from V-doped (Bi,Sb)2Te3. These measurements showed that the relative deviation of the Hall resistance from RK at zero external magnetic field is just (4.4 ± 8.7) nΩ/Ω. The method thus makes it possible to determine RK at zero magnetic field with the needed precision — something Gould says was not previously possible.
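For a sense of scale, that relative deviation can be converted into absolute resistance units:

```python
# Converting the reported relative deviation into ohms.
RK = 25812.80745  # von Klitzing constant, ohms
rel_dev = 4.4e-9  # reported relative deviation from RK (dimensionless)
rel_unc = 8.7e-9  # reported uncertainty

abs_dev = RK * rel_dev  # ~1.1e-4 ohms out of ~25.8 kilohms
abs_unc = RK * rel_unc

print(f"deviation: ({abs_dev * 1e6:.0f} ± {abs_unc * 1e6:.0f}) µΩ")
```

In other words, the measured Hall plateau agrees with RK to within roughly a tenth of a milliohm on a ~25.8 kΩ resistance.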

The snag is that the measurement only works under demanding experimental conditions: extremely low temperatures (below about 0.05 K) and low electrical currents (below 0.1 μA). “Ultimately, both these parameters will need to be significantly improved for any large-scale use,” Gould explains. “To compare, the QHE works at temperatures of 4.2 K and electrical currents of about 10 μA, making its detection much easier and cheaper to operate.”

Towards a universal electrical reference instrument

The new study, which is detailed in Nature Electronics, was made possible thanks to a collaboration between two teams, he adds. The first is at Würzburg, which has pioneered studies on electron transport in topological materials for some two decades. The second is at the Physikalisch-Technische Bundesanstalt (PTB) in Braunschweig, which has been establishing QHE-based resistance standards for even longer. “Once the two teams became aware of each other’s work, the potential of a combined effort was obvious,” Gould says.

Because the project brings together two communities with very different working methods and procedures, they first had to find a window of operations where their work could co-exist. “As a simple example,” explains Gould, “the currents of ~100 nA used in the present study are considered extremely low for metrology, and extreme care was required to allow the measurement instrument to perform under such conditions. At the same time, this current is some 200 times larger than that typically used when studying topological properties of materials.”

As well as simplifying access to the constants h and e, Gould says the new work could lead to a universal electrical reference instrument based on the QAHE and the Josephson effect. Beyond that, it could even provide a quantum standard of voltage, resistance, and (by means of Ohm’s law) current, all in one compact experiment.

The possible applications of the QAHE in metrology have attracted a lot of attention from the European Union, he adds. “The result is a Europe-wide EURAMET metrology consortium QuAHMET aimed specifically at further exploiting the effect and operation of the new standard at more relaxed experimental conditions.”

The post New candidate emerges for a universal quantum electrical standard appeared first on Physics World.
