New candidate emerges for a universal quantum electrical standard

Physicists in Germany have developed a new way of defining the standard unit of electrical resistance. The advantage of the new technique is that because it is based on the quantum anomalous Hall effect rather than the ordinary quantum Hall effect, it does not require the use of applied magnetic fields. While the method in its current form requires ultracold temperatures, an improved version could allow quantum-based voltage and resistance standards to be integrated into a single, universal quantum electrical reference.

Since 2019, all base units in the International System of Units (SI) have been defined with reference to fundamental constants of nature. For example, the definition of the kilogram, which was previously based on a physical artefact (the international prototype kilogram), is now tied to Planck’s constant, h.

These new definitions do come with certain challenges. For example, today’s gold-standard way to experimentally determine the value of h (as well as the elementary charge e, another base SI constant) is to measure a quantized electrical resistance (the von Klitzing constant RK = h/e²) and a quantized voltage (the Josephson constant KJ = 2e/h). With RK and KJ pinned down, scientists can then calculate e and h.
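
Since RK and KJ are simple combinations of e and h, measuring both pins down the two constants. A minimal sketch of the inversion, using the values that the exact 2019 SI definitions of h and e imply for RK and KJ:

```python
# Minimal sketch (not from the article): recovering e and h from the two
# measured electrical constants.  R_K = h/e**2 and K_J = 2e/h invert to
#   e = 2 / (R_K * K_J)   and   h = 4 / (R_K * K_J**2).
R_K = 25812.80745    # ohms; value implied by the exact 2019 SI definitions of h and e
K_J = 483597.8484e9  # hertz per volt, likewise fixed by the 2019 SI definitions

e = 2 / (R_K * K_J)
h = 4 / (R_K * K_J**2)
print(f"e = {e:.9e} C")    # ~1.602176634e-19 C
print(f"h = {h:.9e} J s")  # ~6.62607015e-34 J s
```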

To measure RK with high precision, physicists use the fact that it is related to the quantized values of the Hall resistance of a two-dimensional electron system (such as the ones that form in semiconductor heterostructures) in the presence of a strong magnetic field. This quantization of the Hall resistance is known as the quantum Hall effect (QHE), and in semiconductors such as GaAs or AlGaAs it shows up at fields of around 10 T. In graphene, a two-dimensional carbon sheet, fields of about 5 T are typically required.

The problem with this method is that KJ is measured by means of a separate phenomenon known as the AC Josephson effect, and the large external magnetic fields that are so essential to the QHE measurement render Josephson devices inoperable. According to Charles Gould of the Institute for Topological Insulators at the University of Würzburg (JMU), who led the latest research effort, this makes it difficult to integrate a QHE-based resistance standard with the voltage standard.

A way to measure RK at zero external magnetic field

Relying on the quantum anomalous Hall effect (QAHE) instead would solve this problem. This variant of the QHE arises from electron transport phenomena recently identified in a family of materials known as ferromagnetic topological insulators. Such quantum spin Hall systems, as they are also known, conduct electricity along their (quantized) edge channels or surfaces, but act as insulators in their bulk. In these materials, spontaneous magnetization means the QAHE manifests as a quantization of resistance even at weak (or indeed zero) magnetic fields.

In the new work, Gould and colleagues made Hall resistance quantization measurements in the QAHE regime on a device made from V-doped (Bi,Sb)₂Te₃. These measurements showed that the relative deviation of the Hall resistance from RK at zero external magnetic field is just (4.4 ± 8.7) nΩ Ω⁻¹. The method thus makes it possible to determine RK at zero magnetic field with the needed precision — something Gould says was not previously possible.
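
Here nΩ Ω⁻¹ means a deviation of one part in 10⁹ from RK. A quick illustration of the conversion, using a hypothetical measured resistance rather than the paper’s data:

```python
# Illustration only: expressing a (hypothetical) measured Hall resistance as a
# relative deviation from R_K in nano-ohm per ohm, i.e. parts in 1e9.
R_K = 25812.80745           # ohms
R_measured = 25812.807564   # ohms -- hypothetical value, not taken from the paper

relative_deviation = (R_measured - R_K) / R_K      # dimensionless
print(f"{relative_deviation * 1e9:.1f} nOhm/Ohm")  # ~4.4 nOhm/Ohm for this made-up value
```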

The snag is that the measurement only works under demanding experimental conditions: extremely low temperatures (below about 0.05 K) and low electrical currents (below 0.1 µA). “Ultimately, both these parameters will need to be significantly improved for any large-scale use,” Gould explains. “To compare, the QHE works at temperatures of 4.2 K and electrical currents of about 10 µA, making its detection much easier and cheaper to operate.”

Towards a universal electrical reference instrument

The new study, which is detailed in Nature Electronics, was made possible thanks to a collaboration between two teams, he adds. The first is at Würzburg, which has pioneered studies on electron transport in topological materials for some two decades. The second is at the Physikalisch-Technische Bundesanstalt (PTB) in Braunschweig, which has been establishing QHE-based resistance standards for even longer. “Once the two teams became aware of each other’s work, the potential of a combined effort was obvious,” Gould says.

Because the project brings together two communities with very different working methods and procedures, they first had to find a window of operations where their work could co-exist. “As a simple example,” explains Gould, “the currents of ~100 nA used in the present study are considered extremely low for metrology, and extreme care was required to allow the measurement instrument to perform under such conditions. At the same time, this current is some 200 times larger than that typically used when studying topological properties of materials.”

As well as simplifying access to the constants h and e, Gould says the new work could lead to a universal electrical reference instrument based on the QAHE and the Josephson effect. Beyond that, it could even provide a quantum standard of voltage, resistance, and (by means of Ohm’s law) current, all in one compact experiment.

The possible applications of the QAHE in metrology have attracted a lot of attention from the European Union, he adds. “The result is a Europe-wide EURAMET metrology consortium QuAHMET aimed specifically at further exploiting the effect and operation of the new standard at more relaxed experimental conditions.”

Altermagnets imaged at the nanoscale

A recently-discovered class of magnets called altermagnets has been imaged in detail for the first time thanks to a technique developed by physicists at the University of Nottingham’s School of Physics and Astronomy in the UK. The team exploited the unique properties of altermagnetism to map the magnetic domains in the altermagnet manganese telluride (MnTe) down to the nanoscale level, raising hopes that its unusual magnetic ordering could be controlled and exploited in technological applications.

In most magnetically-ordered materials, the spins of atoms (that is, their magnetic moments) have two options: they can line up parallel with each other, or antiparallel, alternating up and down. These arrangements arise from the exchange interaction between atoms, and lead to ferromagnetism and antiferromagnetism, respectively.

Altermagnets, which were discovered in 2024, are different. While their neighbouring spins are antiparallel, like an antiferromagnet, the atoms hosting these spins are rotated relative to their neighbours. This means that they combine some properties from both types of conventional magnetism. For example, the up, down, up ordering of their spins leads to a net magnetization of zero because – as in antiferromagnets – the spins essentially cancel each other out. However, their spin splitting is non-relativistic, as in ferromagnets.

Resolving altermagnetic states down to nanoscale

Working at the MAX IV international synchrotron facility in Sweden, a team led by Nottingham’s Peter Wadley used photoemission electron microscopy to detect the electrons emitted from the surface of MnTe when it was irradiated with a polarized X-ray beam.

“The emitted electrons depend on the polarization of the X-ray beam in ways not seen in other classes of magnetic materials,” explains Wadley, “and this can be used to map the magnetic domains in the material with unprecedented detail.”

Using this technique, the team was able to resolve altermagnetic states down to the nanoscale – from 100-nm-scale vortices and domain walls up to 10-μm-sized single-domain states. And that is not all: Wadley and colleagues found that they could control these features by cooling the material while a magnetic field is applied.

Potential uses of altermagnets

Magnetic materials are found in most long-term computer memory devices and in many advanced microchips, including those used for Internet of Things and artificial intelligence applications. If these materials were replaced with altermagnets, Wadley and colleagues say that the switching speed of microelectronic components and digital memory could increase by up to a factor of 1000, with lower energy consumption.

“The predicted properties of altermagnets make them very attractive from the point of view of fundamental research and applications,” Wadley tells Physics World. “With strong theoretical guidance from our collaborators at FZU Prague and the Max Planck Institute for the Physics of Complex Systems, we realised that our experience in materials development and magnetic imaging positioned us well to attempt to image and control altermagnetic domains.”

One of the main challenges the researchers faced was developing thin films of MnTe with surfaces of sufficiently high quality to allow them to detect the subtle X-ray spectroscopy signatures of the altermagnetic order. They hope that their study, detailed in Nature, will spur further interest in these materials.

“Altermagnets provide a new vista of predicted phenomena from unconventional domain walls to unique band structure effects,” Wadley says. “We are exploring these effects on multiple fronts and one of the major goals is to demonstrate a more efficient means of controlling the magnetic domains, for example, by applying electric currents rather than cooling them down.”

Very thin films of a novel semimetal conduct electricity better than copper

Metals usually become less conductive as they get thinner. Niobium phosphide, however, is different. According to researchers at Stanford University in the US, very thin films of this topological semimetal conduct electricity better than copper even when the films are non-crystalline. This surprising result could aid the development of ultrathin low-resistivity wires for nanoelectronics applications.

“As today’s electronic devices and chips become smaller and more complex, the ultrathin metallic wires that carry electrical signals within these chips can become a bottleneck when they are scaled down,” explains study leader Asir Intisar Khan, a visiting postdoctoral scholar and former PhD student in Eric Pop’s group at Stanford.

The solution, he says, is to create ultrathin conductors with a lower electrical resistivity to make the metal interconnects that enable dense logic and memory operations within neuromorphic and spintronic devices. “Low resistance will lead to lower voltage drops and lower signal delays, ultimately helping to reduce power dissipation at the system level,” Khan says.

The problem is that the resistivity of conventional metals increases when they are made into thin films. The thinner the film, the worse it conducts electricity.

Topological semimetals are different

Topological semimetals are different. Analogous to the better-known topological insulators, which conduct electricity along special edge states while remaining insulating in their bulk, these materials can carry large amounts of current along their surface even when their structure is somewhat disordered. Crucially, they maintain this surface-conducting property even as they are thinned down.

In the new work, Khan and colleagues found that the effective resistivity of non-crystalline films of niobium phosphide (NbP) decreases dramatically as the film thickness is reduced. Indeed, the thinnest films (< 5 nm) have resistivities lower than conventional metals like copper of similar thicknesses at room temperature.

Another advantage is that these films can be created and deposited on substrates at relatively low temperatures (around 400 °C). This makes them compatible with modern semiconductor and chip fabrication processes such as industrial back-end-of-line (BEOL). Such materials would therefore be relatively easy to integrate into state-of-the-art nanoelectronics. The fact that the films are non-crystalline is also an important practical advantage.

A “huge” collaboration

Khan says he began thinking about this project in 2022 after discussions with a colleague, Ching-Tzu Chen, from IBM’s TJ Watson Research Center. “At IBM, they were exploring the theoretical concept of using topological semimetals for this purpose,” he recalls. “Upon further discussion with Prof. Eric Pop, we wanted to explore the possibility of experimental realization of thin films of such semimetals at Stanford.”

This turned out to be more difficult than expected, he says. While physicists have been experimenting with single crystals of bulk NbP and this class of topological semimetals since 2015, fabricating them at the ultrathin film limit of less than 5 nm, at temperatures and with deposition methods compatible with industry and nanoelectronic fabrication, was new. “We therefore had to optimize the deposition process from a variety of angles: substrate choice, strain engineering, temperature, pressure and stoichiometry, to name a few,” Khan tells Physics World.

The project turned out to be a “huge” collaboration in the end, with researchers from Stanford, Ajou University, Korea, and IBM Watson all getting involved, he adds.

The researchers say they will now be running further tests on their material. “We also think NbP is not the only material with this property, so there’s much more to discover,” Pop says.

The results are detailed in Science.

Quasiparticles become massless – but only when they’re moving in the right direction

Physicists at Penn State and Columbia University in the US say they have seen the “smoking gun” signature of an elusive quasiparticle predicted by theorists 16 years ago. Known as semi-Dirac fermions, the quasiparticles were spotted in a crystal of the topological semimetal ZrSiS and they have a peculiar property: they only behave like they have mass when they’re moving in a certain direction.

“When we shine infrared light on ZrSiS crystals and carefully measure the reflected light, we observed optical transitions that follow a unique power-law scaling, B^(2/3), with B being the magnetic field,” explains Yinming Shao, a physicist at Penn State and lead author of a study in Physical Review X on the quasiparticle. “This special power-law turns out to be the exact prediction from 16 years ago of semi-Dirac fermions.”

The team performed the experiments using the 17.5 T magnet at the US National High Magnetic Field Laboratory in Florida. This high field was crucial to the result, Shao explains, because applying a magnetic field to a material causes its electronic energy levels to become quantized into discrete (Landau) levels. The energy gap between these levels then depends on the electrons’ mass and the strength of the field.

Normally, the energy levels of the electrons should increase by set amounts as the magnetic field increases, but in this case they didn’t. Instead, they followed the B^(2/3) pattern.
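
For context, the textbook expectations are that Landau levels of ordinary massive electrons grow linearly with B, those of massless Dirac fermions grow as √B, and those of semi-Dirac fermions are predicted to grow as ((n + 1/2)B)^(2/3). A purely illustrative comparison in arbitrary units (not a fit to the ZrSiS data):

```python
# Illustrative Landau-level scalings in arbitrary units (not a fit to the data).
import numpy as np

B = np.linspace(1.0, 17.5, 5)   # magnetic field in tesla, up to the 17.5 T used here
n = 1                           # a representative Landau-level index

E_massive    = (n + 0.5) * B           # ordinary (Schroedinger) electrons: E ~ B
E_dirac      = np.sqrt(n * B)          # massless Dirac fermions:           E ~ sqrt(B)
E_semi_dirac = ((n + 0.5) * B)**(2/3)  # semi-Dirac fermions:               E ~ B^(2/3)

for Bi, Em, Ed, Esd in zip(B, E_massive, E_dirac, E_semi_dirac):
    print(f"B = {Bi:5.2f} T   linear: {Em:6.2f}   sqrt: {Ed:5.2f}   2/3 power: {Esd:5.2f}")
```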

Realizing semi-Dirac fermions

Previous efforts to create semi-Dirac fermions relied on stretching graphene (a sheet of carbon just one atom thick) until the material’s two so-called Dirac points touch. These points occur in the region where the material’s valence and conduction bands meet. At these points, something special happens: the relationship between the energy and momentum of charge carriers (electrons and holes) in graphene is described by the Dirac equation, rather than the standard Schrödinger equation as is the case for most crystalline materials. The presence of these unusual band structures (known as Dirac cones) enables the charge carriers in graphene to behave like massless particles.

The problem is that making Dirac points touch in graphene turned out to require an unrealistically high level of strain. Shao and colleagues chose to work with ZrSiS instead because it also has Dirac points, but in this case, they exist continuously along a so-called nodal line. The researchers found evidence for semi-Dirac fermions at the crossing points of these nodal lines.

Interesting optical responses

The idea for the study stemmed from an earlier project in which researchers investigating a similar compound, ZrSiSe, spotted some interesting optical responses when they applied a magnetic field to the material out-of-plane. “I found that similar band-structure features that make ZrSiSe interesting would require applying a magnetic field in-plane for ZrSiS, so we carried out this measurement and indeed observed many unexpected features,” Shao says.

The greatest challenge, he recalls, was to figure out how to interpret the observations, since real materials like ZrSiS have a much more complicated Fermi surface than the ones that feature in early theoretical models. “We collaborated with many different theorists and eventually singled out the signatures originating from semi-Dirac fermions in this material,” he says.

The team still has much to understand about the material’s behaviour, he tells Physics World. “There are some unexplained fine electronic energy level-splitting in the data that we do not fully understand yet and which may originate from electronic interaction effects.”

As for applications, Shao notes that ZrSiS is a layered material, much like graphite – a form of carbon that is, in effect, made up of many layers of graphene. “This means that once we can figure out how to obtain a single layer cut of this compound, we can harness the power of semi-Dirac fermions and control its properties with the same precision as graphene,” he says.

Sun-like stars produce ‘superflares’ about once a century

Stars like our own Sun produce “superflares” around once every 100 years, surprising astronomers who had previously estimated that such events occurred only every 3000 to 6000 years. The result, from a team of astronomers in Europe, the US and Japan, could be important not only for fundamental stellar physics but also for forecasting space weather.

The Sun regularly produces solar flares, which are energetic outbursts of electromagnetic radiation. Sometimes these flares are accompanied by ejections of plasma known as coronal mass ejections. Both types of activity can trigger powerful solar storms when they interact with the Earth’s upper atmosphere, posing a danger to spacecraft and satellites as well as electrical grids and radio communications on the ground.

Despite their power, though, these events are much weaker than the “superflares” recently observed by NASA’s Kepler and TESS missions at other Sun-like stars in our galaxy. The most intense superflares release energies of about 10²⁵ J, which show up as short, sharp peaks in the stars’ visible light spectrum.

Observations from the Kepler space telescope

In the new study, which is detailed in Science, astronomers sought to find out whether our Sun is also capable of producing superflares, and if so, how often they happen. This question can be approached in two different ways, explains study first author Valeriy Vasilyev, a postdoctoral researcher at the Max Planck Institute for Solar System Research, Germany. “One option is to observe the Sun directly and record events, but it would take a very long time to gather enough data,” Vasilyev says. “The other approach is to study a large number of stars with characteristics similar to those of the Sun and extrapolate their flare activity to our Sun.”

The researchers chose the second option. Using a new method they developed, they analysed Kepler space telescope data on the fluctuations of more than 56,000 Sun-like stars between 2009 and 2013. This dataset, which is much larger and more representative than previous datasets because it is based on recent advances in our understanding of Sun-like stars, corresponds to around 220,000 years of solar observations.

The new technique can detect superflares and precisely localize them on the telescope images with sub-pixel resolution, Vasilyev says. It also accounts for how light propagates through the telescope’s optics as well as instrumental effects that could “contaminate” the data.

The team, which also includes researchers from the University of Graz, Austria; the University of Oulu, Finland; the National Astronomical Observatory of Japan; the University of Colorado Boulder in the US; and the Commissariat of Atomic and Alternative Energies of Paris-Saclay and the University of Paris-Cité, both in France, carefully analysed the detected flares. They checked for potential sources of error, such as those originating from unresolved binary stars, flaring M- and K-dwarf stars and fast-rotating active stars that might have been wrongly classified. Thanks to these robust statistical evaluations, they identified almost 3000 bright stellar flares in the population they observed – a detection rate that implies that superflares occur roughly once per century, per star.
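
The quoted rate follows from straightforward arithmetic on the numbers above (the exact observing span varies from star to star, so this is only a rough check):

```python
# Back-of-the-envelope check using only numbers quoted in the article.
n_stars        = 56_000   # Sun-like stars in the Kepler sample
years_observed = 4        # roughly 2009 to 2013
n_superflares  = 3_000    # bright stellar flares identified

star_years = n_stars * years_observed  # ~224,000 star-years (~220,000 quoted)
rate = n_superflares / star_years      # flares per star per year
print(f"{star_years:,} star-years of observation")
print(f"about one superflare per star every ~{1 / rate:.0f} years")  # roughly once a century
```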

Sun should also be capable of producing superflares

According to Vasilyev, the team’s results also suggest that solar flares and stellar superflares are generated by the same physical mechanisms. This is important because reconstructions of past solar activity, which are based on the concentrations of cosmogenic isotopes in terrestrial archives such as tree rings, tell us that our Sun occasionally experiences periods of higher or lower solar activity lasting several decades.

One example is the Maunder Minimum, a decades-long period during the 17th century when very few sunspots were recorded. At the other extreme, solar activity was comparatively higher during the Modern Maximum that occurred around the mid-20th century. Based on the team’s analysis, Vasilyev says that “so-called grand minima and grand maxima are not regular but tend to cluster in time. This means that centuries could pass by without extreme solar flares followed by several such events occurring over just a few years or decades.”

It is possible, he adds, that a superflare occurred in the past century but went unnoticed. “While we have no evidence of such an event, excluding it with certainty would require continuous and systematic monitoring of the Sun,” he tells Physics World.  The most intense solar flare in recorded history, the so-called “Carrington event” of September 1859, was documented essentially by chance: “By the time he [the English astronomer Richard Carrington] called someone to show them the bright glow he observed (which lasted only a few minutes), the brightness had already faded.”

Between 1996 and 2002, when instruments provided direct measurements of total solar brightness with sufficient accuracy and temporal resolution, 12 flares with Carrington-like energies were detected. Had these flares been aimed at Earth, it is possible that they would have had similar effects, he says.

The researchers now plan to investigate the conditions required to produce superflares. “We will be extending our research by analysing data from next-generation telescopes, such as the European mission PLATO, which I am actively involved in developing,” Vasilyev says. “PLATO’s launch is due for the end of 2026 and will provide valuable information with which we can refine our understanding of stellar activity and even the impact of superflares on exoplanets.”

New method recycles quantum dots used in microscopic lasers

Researchers at the University of Strathclyde, UK, have developed a new method to recycle the valuable semiconductor colloidal quantum dots used to fabricate supraparticle lasers. The recovered particles can be reused to build new lasers with a photoluminescence quantum yield almost as high as lasers made from new particles.

Supraparticle lasers are a relatively new class of micro-scale lasers that show much promise in applications such as photocatalysis, environmental sensing, integrated photonics and biomedicine. The active media in these lasers – the supraparticles – are made by assembling and densely packing colloidal quantum dots (CQDs) in the microbubbles formed in a surfactant-stabilized oil-and-water emulsion. The underlying mechanism is similar to the way that dish soap, cooking oil and water mix when we do the washing up, explains Dillon H Downie, a physics PhD student at Strathclyde and a member of the research team led by Nicolas Laurand.

Supraparticles have a high refractive index compared to their surrounding medium. Thanks to this difference, light at the interface between them experiences total internal reflection. This means that when the diameter of the supraparticles is an integer multiple of the wavelength of the incident light, so-called whispering gallery modes (resonant light waves that travel around a concave boundary) form within the supraparticles.
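
A closely related textbook approximation says that resonance occurs when an integer number of wavelengths, measured inside the material, fits around the sphere’s circumference. The sketch below is purely illustrative: the diameter and effective refractive index are hypothetical values, not taken from the study:

```python
# Rough whispering-gallery-mode estimate with made-up parameters (illustration only).
import math

diameter_um = 5.0   # hypothetical supraparticle diameter in micrometres
n_eff = 1.8         # hypothetical effective refractive index of the packed quantum dots

circumference_um = math.pi * diameter_um
for m in range(44, 48):  # a few mode numbers that land in the visible range
    wavelength_nm = 1e3 * n_eff * circumference_um / m
    print(f"mode m = {m}: resonant wavelength ~ {wavelength_nm:.0f} nm")
```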

“The supraparticles are therefore microresonators made of an optical gain material (the quantum dots),” explains Downie, “and individual supraparticles can be made to lase by optically pumping them.”

Resonating and recyclable: Supraparticle lasers confine and amplify light through whispering gallery modes — resonant light waves circulating along a spherical boundary — inside a tiny sphere made from aggregated colloidal quantum dots. (Courtesy: Dillon H Downie, University of Strathclyde)

The problem is that many CQDs are made from expensive and sometimes toxic elements. Demand for these increasingly scarce elements will likely outstrip supply before the end of this decade, but at present, only 2% of quantum dots made from these rare-earth elements are recycled. While researchers have been exploring ways of recovering them from electronic waste, the techniques employed often require specialized instruments, complex bio-metallurgical absorbents and hazardous acid-leaching processes. A more environmentally friendly approach is thus sorely needed.

Exceptional recycling potential

In the new work, Laurand, Downie and colleagues recycled supraparticle lasers by first disassembling the CQDs in them. They did this by suspending the dots in an oil phase and applying ultrasonic high-frequency sound waves and heat. They then added water to separate out the dots. Finally, they filtered and purified the disassembled CQDs and tested their fluorescence efficiency before reassembling them into a new laser configuration.

Using this process, the researchers were able to recover 85% of the quantum dots from the initial supraparticle batch. They also found that the recycled quantum dots boasted a photoluminescence quantum yield of 83 ± 16%, which is comparable to the 86 ± 9% for the original particles.

“By testing the lasers’ performance both before and after this process we confirmed their exceptional recycling potential,” Downie says.

Simple, practical technique

Downie describes the team’s technique as simple and practical even for research labs that lack specialized equipment such as centrifuges and scrubbers. He adds that it could also be applied to other self-assembled nanocomposites.

“As we expect nanoparticle aggregates in everything from wearable medical devices to ultrabright LEDs in the future, it is, therefore, not inconceivable that some of these could be sent back for specialized recycling in the same way we do with commercial batteries today,” he tells Physics World. “We may even see a future where rare-earth or some semiconductor elements become critically scarce, necessitating the recycling for any and all devices containing such valuable nanoparticles.”

By proving that supraparticles are reusable, Downie adds, the team’s method provides “ample justification” to anyone wishing to incorporate supraparticle technology into their devices. “This is seen as especially relevant if they are to be used in biomedical applications such as targeted drug delivery systems, which would otherwise be limited to single-use,” he says.

With work on colloidal quantum dots and supraparticle lasers maturing at an incredible rate, Downie adds that it is “fantastic to be able to mature the process of their recycling alongside this progress, especially at such an early stage in the field”.

The study is detailed in Optical Materials Express.

Cross-linked polymer is both stiff and stretchy

A new foldable “bottlebrush” polymer network is both stiff and stretchy – two properties that have been difficult to combine in polymers until now. The material, which has a Young’s modulus of 30 kPa even when stretched up to 800% of its original length, could be used in biomedical devices, wearable electronics and soft robotics systems, according to its developers at the University of Virginia School of Engineering and Applied Science in the US.

Polymers are made by linking together building blocks of monomers into chains. To make polymers elastic, these chains are crosslinked by covalent chemical bonds. The crosslinks connect the polymer chains so that when a force is applied to stretch the polymer, it recovers its shape when the force is removed.

A polymer can be made stiffer by adding more crosslinks, which shortens the polymer chains. The stiffness increases because the crosslinks suppress the thermal fluctuations of network strands, but this also makes the material brittle. This limitation has held back the development of materials that need both stiffness and stretchability, says materials scientist and engineer Liheng Cai, who led this new research effort.

Foldable bottlebrush polymers

In their new work, the researchers hypothesized that foldable bottlebrush-like polymers might not suffer from this problem. These polymers consist of many densely packed linear side chains randomly separated by small spacer monomers. There is a prerequisite, however: the side chains need to have a relatively high molecular weight (MW) and a low glass transition temperature (Tg) while the spacer monomer needs to be low MW and incompatible with the side chains. Achieving this requires control over the incompatibility between backbones and side chain chemistries, explains Baiqiang Huang, who is a PhD student in Cai’s group.

The researchers discovered that two materials, poly(dimethyl siloxane) (PDMS) and benzyl methacrylate (BnMA), fit the bill here. PDMS is used as the side chain material and BnMA as the spacer monomer. The two are highly incompatible and have very different Tg values of −100°C and 54°C, respectively.

When stretched, the collapsed backbone in the polymer unfolds to release the stored length, so allowing it to be “remarkably extensible”, write the researchers in Science Advances. In contrast, the stiffness of the material changes little thanks to the molecular properties of the side chains in the polymer, says Huang. “Indeed, in our experiments, we demonstrated a significant enhancement in mechanical performance, achieving a constant Young’s modulus of 30 kPa and a tensile breaking strain that increased 40-fold, from 20% to 800%, compared to standard polymers.”

And that is not all: the design of the new foldable bottlebrush polymer means that stiffness and stretchability can be controlled independently in a material for the first time.

Potential applications

The work will be important when it comes to developing next-generation materials with tailored mechanical properties. According to the researchers, potential applications include durable and flexible prosthetics, high-performance wearable electronics and stretchable materials for soft robotics and medical implants.

Looking forward, the researchers say they will now be focusing on optimizing the molecular structure of their polymer network to fine-tune its mechanical properties for specific applications. They also aim to incorporate functional metallic nanoparticles into the networks, so creating multifunctional materials with specific electrical, magnetic or optical properties. “These efforts will extend the utility of foldable bottlebrush polymer networks to a broader range of applications,” says Cai.

Solar wind squashed Uranus’s magnetosphere during Voyager 2 flyby

Some of our understanding of Uranus may be false, say physicists at NASA’s Jet Propulsion Laboratory who have revisited Voyager 2 data before and after its 1986 flyby of this ice-giant planet. The new analyses could shed more light on some of the mysterious and hitherto unexplainable measurements made by the spacecraft. For example, why did it register a strongly asymmetric, plasma-free magnetosphere – something that is unheard of for planets in our solar system – and belts of highly energetic electrons?

Voyager 2 reached Uranus, the seventh planet in our solar system, 38 years ago. The spacecraft gathered its data in just five days and the discoveries from this one and, so far, only flyby provide most of our understanding of this ice giant. Two major findings that delighted astronomers were its 10 new moons and two rings. Other observations perplexed researchers, however.

One of these, explains Jamie Jasinski, who led this new study, was the observation of the second most intense electron radiation belt after Jupiter’s. How such a belt could be maintained or even exist at Uranus lacked an explanation until now. “The other mystery was that the magnetosphere did not have any plasma,” he says. “Indeed, we have been calling the Uranian magnetosphere a ‘vacuum magnetosphere’ because of how empty it is.”

Unrepresentative conditions

These observations, however, may not be representative of the conditions that usually prevail at Uranus, Jasinski explains, because they were simply made during an anomalous period. Indeed, just before the flyby, unusual solar activity squashed the planet’s magnetosphere down to about 20% of its original volume. Such a situation exists only very rarely and was likely responsible for creating a plasma-free magnetosphere with the observed highly excited electron radiation belts.

Jasinski and colleagues came to their conclusions by analysing Voyager 2 data of the solar wind (a stream of charged particles emanating from the Sun) upstream of Uranus for the few days before the flyby started. They saw that the dynamic pressure of the solar wind increased by a factor of 20, meaning that it dramatically compressed the magnetosphere of Uranus. They then looked at eight months of solar wind data obtained by the spacecraft at Uranus’ orbit and found that the solar wind conditions present during the flyby only occur 4% of the time.
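
As a consistency check, the standard pressure-balance scaling for a planetary dipole field has the magnetopause standoff distance varying as the solar-wind dynamic pressure to the power −1/6; that exponent is the textbook result, not a number taken from the paper:

```python
# Rough consistency check using the textbook dipole pressure-balance scaling.
pressure_increase = 20                          # factor quoted in the article

standoff_ratio = pressure_increase ** (-1 / 6)  # standoff distance ~ P_dyn**(-1/6)
volume_ratio   = standoff_ratio ** 3            # crude estimate: volume ~ standoff**3

print(f"standoff distance shrinks to ~{standoff_ratio:.0%} of its quiet-time value")
print(f"magnetospheric volume shrinks to ~{volume_ratio:.0%}")  # ~22%, close to the ~20% quoted
```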

“The flyby therefore occurred during the maximum peak solar wind intensity in that entire eight-month period,” explains Jasinski.

The scientific picture we have of Uranus since the Voyager 2 flyby is that it has an extreme magnetospheric environment, he says. But the flyby may simply have coincided with a period of unusual activity, rather than reflecting the planet’s typical state.

The timing was just wrong

Jasinski previously worked on NASA’s MESSENGER mission to Mercury. Out of the thousands of orbits made by this spacecraft around the planet over a four-year period, there were occasional times where activity from the Sun completely eroded the entire magnetic field. “That really highlighted for me that if we had made an observation during one of those events, we would have a very different idea of Mercury.”

Following this line of thought, Jasinski asked himself whether we had simply observed Uranus during a similar anomalous time. “The Voyager 2 flyby lasted just five days, so we may have observed Uranus at just the ‘wrong time’,” he says.

One of the most important take-home messages from this study is that we can’t take the results from just one flyby as being a good representation of the Uranus system, he tells Physics World. Future missions must therefore be designed so that a spacecraft remains in orbit for a few years, enabling variations to be observed over long time periods.

Why we need to go back to Uranus

One of the reasons that we need to go back to Uranus, Jasinski says, is to find out whether any of its moons have subsurface liquid oceans. To observe such oceans with a spacecraft, the moons need to be inside the magnetosphere. This is because the magnetosphere, as it rotates, provides a predictable, steadily varying magnetic field at the moon. This field can then induce a magnetic field response from the ocean that can be measured by the spacecraft. The conductivity of the ocean – and therefore the magnetic signal from the moon – will vary with the depth, thickness and salinity of the ocean.

If the moon is outside the magnetosphere, this steady and predictable external field does not exist and so cannot drive the induction response. We therefore cannot detect a magnetic field from the ocean if the moon is outside the magnetosphere.

Before these latest results, researchers thought that the outermost moons, Titania and Oberon, would spend a significant part of their orbit around the planet outside of the magnetosphere, Jasinski explains. This is because we thought that Uranus’s magnetosphere was generally small. However, in light of the new findings, this is probably not true and both moons will orbit inside the magnetosphere since it is much larger than previously thought.

Titania and Oberon are the most likely candidates for harbouring oceans, he adds, because they are slightly larger than the other moons. This means that they can retain heat better and therefore be warmer and less likely to be completely frozen.

“A future mission to Uranus is critical in collecting the scientific measurements to answer some of the most intriguing science questions in our solar system,” says Jasinski. “Only by going back to Uranus and orbiting the planet can we really gain an understanding of this curious planet.”

Happily, in 2022, the US National Academies recommended a Uranus Orbiter and Probe mission as a future flagship mission that NASA should prioritize in the coming decade. Such a mission would help us unravel the nature of Uranus’s magnetosphere and its interaction with the planet’s atmosphere, moons and rings, and with the solar wind. “Of course, modern instrumentation would also revolutionize the type of discoveries we would make compared to previous missions,” says Jasinski.

The present study is detailed in Nature Astronomy.

Supramolecular biomass foam removes microplastics from water

A reusable and biodegradable fibrous foam developed by researchers at Wuhan University in China can remove up to 99.8% of microplastics from polluted water. The foam, which is made from a self-assembled network of chitin and cellulose obtained from biomass wastes, has been successfully field-tested in four natural aquatic environments.

The amount of plastic waste in the environment has reached staggering levels and is now estimated at several billion metric tons. This plastic degrades extremely slowly and poses a hazard for ecosystems throughout its lifetime. Aquatic life is particularly vulnerable, as micron-sized plastic particles can combine with other pollutants in water and be ingested by a wide range of organisms. Removing these microplastic particles would help limit the damage, but standard filtration technologies are ineffective as the particles are so small.

A highly porous interconnected structure

The new adsorbent developed by Wuhan’s Hongbing Deng and colleagues consists of intertwined beta-chitin nanofibre sheets (obtained from squid bone) with protonated amines and suspended cellulose fibres (obtained from cotton). This structure contains a number of functional groups, including –OH, –NH₃⁺ and –NHCO–, that allow it to self-assemble into a highly porous interconnected network.

This self-assembly is important, Deng explains, because it means the foam does not require “complex processing (no cross-linking and minimal use of chemical reagents) or adulteration with toxic or expensive substances,” he tells Physics World.

The functional groups make the surface of the foam rough and positively charged, providing numerous sites that can interact with and adsorb plastic particles ranging in size from less than 100 nm to over 1000 microns. Deng explains that multiple mechanisms are at work during this process, including physical interception, electrostatic attraction and intermolecular interactions. The latter group includes hydrogen bonding and van der Waals forces, as well as weaker hydrogen-bonding interactions between OH and CH groups, for example.

The researchers tested their foam in lake water, coastal water, still water (a small pond) and water used for agricultural irrigation. They also combined these systematic adsorption experiments with molecular dynamics (MD) simulations and Hirshfeld partition (IGMH) calculations to better understand how the foam was working.

They found that the foam can adsorb a variety of nanoplastics and microplastics, including the polystyrene, polymethyl methacrylate, polypropylene and polyethylene terephthalate found in everyday objects such as electronic components, food packaging and textiles. Importantly, the foam can adsorb these plastics even in water bodies polluted with toxic metals such as lead and chemical dyes. It adsorbed nearly 100% of the particles in its first cycle and around 96-98% of the particles over the following five cycles.

“The great potential of biomass”

Because the raw materials needed to make the foam are readily available, and the fabrication process is straightforward, Deng thinks it could be produced on a large scale. “Other microplastic removal materials made from biomass feedstocks have been reported in recent years, but some of these needed to be functionalized with other chemicals,” he says. “Such treatments can increase costs or hinder their large-scale production.”

Deng and his team have applied for a patent on the material and are now looking for industrial partners to help them produce it. In the meantime, he hopes the work will help draw attention to the microplastic problem and convince more scientists to work on it. “We believe that the great potential of biomass will be recognized and that the use of biomass resources will become more diverse and thorough,” he says.

The present work is described in Science Advances.

Immiscible ice layers may explain why Uranus and Neptune lack magnetic poles

When the Voyager 2 spacecraft flew past Uranus and Neptune in 1986 and 1989, it detected something strange: neither of these “ice giant” planets has a well-defined north and south magnetic pole. This absence has remained mysterious ever since, but simulations performed at the University of California, Berkeley (UCB) in the US have now suggested an explanation. According to UCB planetary scientist Burkhard Militzer, the disorganized magnetic fields of Uranus and Neptune may arise from a separation of the icy fluids that make up their interiors. The theory could be tested in laboratory experiments of fluids at high pressures, as well as by a proposed mission to Uranus in the 2040s.

On Earth, the dipole magnetic field that loops from the North Pole to the South Pole arises from convection in the planet’s liquid-iron outer core. Since Uranus and Neptune lack such a dipole field, this implies that the convective movement of material in their interiors must be very different.

In 2004, planetary scientists Sabine Stanley and Jeremy Bloxham suggested that the planets’ interiors might contain immiscible layers. This separation would make widespread convection impossible, preventing a global dipolar magnetic field from forming, while convection in just one layer would produce the disorganized magnetic field that Voyager 2 observed. However, the nature of these non-mixing layers was still unexplained – hampered, in part, by a lack of data.

“Since both planets have been visited by only one spacecraft (Voyager 2), we do not have many measurements to analyse,” Militzer says.

Two immiscible fluids

To investigate conditions deep beneath Uranus and Neptune’s icy surfaces, Militzer developed computer models to simulate how a mixture of water, methane and ammonia will behave at the temperatures (above 4750 K) and pressures (above 3 × 10⁶ atmospheres) that prevail there. The results surprised him. “One morning, I opened my laptop,” he recalls. “When I started analysing my latest simulations, I could not believe my eyes. An initially homogeneous mixture of water, methane and ammonia had separated into two distinct layers.”

The upper layer, he explains, is thin, rich in water and convecting, which allows it to generate the disordered magnetic field. The lower layer is magnetically inactive and composed of carbon, nitrogen and hydrogen. “This had never been observed before and I could tell right then that this result might allow us to understand what has been going on in the interiors of Uranus and Neptune,” he says.

A plastic polymer-like layer and a water-rich layer

Militzer’s model, which he describes in PNAS, shows that the hydrogen content in the methane-ammonia mixture gradually decreases with depth, transforming into a C-N-H fluid. This C-N-H layer is almost like a plastic polymer, Militzer explains, and cannot support even a disorganized magnetic field – unlike the upper, water-rich layer, which likely convects.

A future mission to Uranus with the right instruments on board could provide observational evidence for this structure, Militzer says. “I would advocate for a Doppler imager so we can detect the planet’s natural oscillation frequencies,” he tells Physics World. Though such instruments are expensive and heavy, he says they are essential to detecting the presence of the predicted two ice layers in Uranus’ interior: “Like one can distinguish between an oboe and a clarinet, these frequencies can tell [us] about a planet’s interior structure.”

A follow-up to Voyager 2 could also reveal how the ice giants’ structures have evolved since they formed 4.5 billion years ago. Initially, their interiors would have contained only a single ice layer, and this layer would have generated a strong dipolar magnetic field with well-defined north and south poles. “Then, at some point, this ice separated into two distinct layers and their magnetic field switched from dipolar to disordered fields that we see today,” Militzer explains.

Determining when this switch occurred would help us understand not only Uranus and Neptune, but also ice giants orbiting stars other than our Sun. “The most common exoplanets discovered to date are around the same size as Uranus and Neptune, so when we observe the magnetic field of such ‘sub-Neptune’ exoplanets in the future, we might be able to say something about their age,” Militzer says.

In the near term, Militzer hopes that experimentalists will be able to test his theory in fluid systems at extremely high temperatures and pressures that mimic the proportions of elements found on Uranus and Neptune. But his long-term hopes are pinned on a new mission that could detect the predicted layers directly. “While I will have long retired when such a detection might eventually be made, I would be so happy to see it in my lifetime,” he says.

Laser beam casts a shadow in a ruby crystal

Particles of light – photons – are massless, so they normally pass right through each other. This generally means they can’t cast a shadow. In a new work, however, physicist Jeff Lundeen of the University of Ottawa, Canada, and colleagues found that a laser beam can, in fact, cast a shadow when it is illuminated by another light source as it passes through a highly nonlinear medium. As well as being important for basic science, the work could have applications in laser fabrication and imaging.

The light-shadow experiment began when physicists led by Raphael Akel Abrahao sent a high-power beam of green laser light through a cube-shaped ruby crystal. They then illuminated this beam from the side with blue light and observed that the beam cast a shadow on a piece of white paper. This shadow extended through an entire face of the crystal. Writing in Optica, they note that “under ordinary circumstances, photons do not interact with each other, much less block each other as needed for a shadow.” What was going on?

Photon-photon interactions

The answer, they explain, boils down to some unusual photon-photon interactions that take place in media that absorb light in a highly nonlinear way. While several materials fit this basic description, most become saturated at high laser intensities. This means they become more transparent in the presence of a strong laser field, producing an “anti-shadow” that is even brighter than the background – the opposite of what the team was looking for.

What they needed, instead, was a material that absorbs more light at higher optical intensities. Such behaviour is known as “reverse saturation of absorption” or “saturable transmission”, and it only occurs if four conditions are met. Firstly, the light-absorbing system needs to have two electronic energy levels: a ground state and an excited state. Secondly, the transition from the ground to the excited state must be less strong (technically, it must have a smaller cross-section) than the transition from the first excited state to a higher excited state. Thirdly, after the material absorbs light, neither the first nor the second excited states should decay back to other levels when the light is re-emitted. Finally, the incident light should only saturate the first transition.
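
The second condition is what makes absorption grow, rather than shrink, with intensity. A toy steady-state model (not the paper’s analysis; all numbers are arbitrary placeholders) shows the trend:

```python
# Toy model of reverse saturation of absorption (arbitrary units, illustration only).
import numpy as np

sigma_ground  = 1.0   # ground-state absorption cross-section
sigma_excited = 3.0   # excited-state cross-section; larger value gives reverse saturation
I_sat = 1.0           # saturation intensity of the first transition

I = np.array([0.0, 0.5, 1.0, 5.0, 20.0])          # pump intensity
excited_fraction = (I / I_sat) / (1 + I / I_sat)  # steady-state excited-state population

alpha = sigma_ground * (1 - excited_fraction) + sigma_excited * excited_fraction
for Ii, a in zip(I, alpha):
    print(f"I = {Ii:5.1f}  ->  absorption coefficient {a:.2f}")
# absorption rises from 1.0 towards 3.0 as the pump saturates the first transition
```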

Shadow experiment: A high-power green laser beam is directed through a ruby cube and illuminated with a blue laser beam from the side. The green beam increases the optical absorption of the blue illuminating light, creating a darker region that appears as a shadow of the green beam. (Courtesy: R. A. Abrahao, H. P. N. Morin, J. T. R. Pagé, A. Safari, R. W. Boyd, J. S. Lundeen)

That might sound like a tall order, but it turns out that ruby fits the bill. Ruby is an aluminium oxide crystal that contains impurities of chromium atoms. These impurities distort its crystal lattice and give it its familiar red colour. When green laser light (532 nm) is applied to ruby, it drives an electronic transition from the ground state (denoted ⁴A₂) to an excited state ⁴T₂. This excited state then decays rapidly via phonons (vibrations of the crystal lattice) to the ²E state.

At this point, the electrons absorb blue light (450 nm) and transition from ²E to a different excited state, denoted ²T₁. While electrons in the ⁴A₂ state could, in principle, absorb blue light directly, without any intermediate step, the absorption cross-section of the transition from ²E to ²T₁ is larger, Abrahao explains.

The result is that in the presence of the green laser beam, the ruby absorbs more of the illuminating blue light. This leaves behind a lower-optical-intensity region of blue illumination within the ruby – in other words, the green laser beam’s shadow.

Shadow behaves like an ordinary shadow

This laser shadow behaves like an ordinary shadow in many respects. It follows the shape of the object (the green laser beam) and conforms to the contours of the surfaces it falls on. The team also developed a theoretical model that predicts that the darkness of the shadow will increase as a function of the power of the green laser beam. In their experiment, the maximum contrast was 22% – a figure that Abrahao says is similar to a typical shadow on a sunny day. He adds that it could be increased in the future.

Lundeen offers another way of looking at the team’s experiment. “Fundamentally, a light wave is actually composed of a hybrid particle made up of light and matter, called a polariton,” he explains. “When light travels in a glass or crystal, both aspects of the polariton are important and, for example, explain why the wave travels more slowly in these media than in vacuum. In the absence of either part of the polariton, either the photon or atom, there would be no shadow.”

Strictly speaking, it is therefore not massless light that is creating the shadow, but the material component of the polariton, which has mass, adds Abrahao, who is now a postdoctoral researcher at Brookhaven National Laboratory in the US.

As well as helping us to better understand light-matter interactions, Abrahao tells Physics World that the experiment “could also come in useful in any device in which we need to control the transmission of a laser beam with another laser beam”. The team now plans to search for other materials and combinations of wavelengths that might produce a similar “laser shadow” effect.

Generative AI has an electronic waste problem, researchers warn

The rising popularity of generative artificial intelligence (GAI), and in particular large language models such as ChatGPT, could produce a significant surge in electronic waste, according to new analyses by researchers in Israel and China. Without mitigation measures, the researchers warn that this stream of e-waste could reach 2.5 million tons (2.2 billion kg) annually by 2030, and potentially even more.

“Geopolitical factors, such as restrictions on semiconductor imports, and the trend for rapid server turnover for operational cost saving, could further exacerbate e-waste generation,” says study team member Asaf Tzachor, who studies existential risks at Reichman University in Herzliya, Israel.

GAI or Gen AI is a form of artificial intelligence that creates new content, such as text, images, music, or videos using patterns it has learned from existing data. Some of the principles that make this pattern-based learning possible were developed by the physicist John Hopfield, who shared the 2024 Nobel Prize for Physics with computer scientist and AI pioneer Geoffrey Hinton. Perhaps the best-known example of Gen AI is ChatGPT (the “GPT” stands for “generative pre-trained transformer”), which is an example of a Large Language Model (LLM).

While the potential benefits of LLMs are significant, they come at a price. Notably, they require so much energy to train and operate that some major players in the field, including Google and ChatGPT developer OpenAI, are exploring the possibility of building new nuclear reactors for this purpose.

Quantifying and evaluating Gen AI’s e-waste problem

Energy use is not the only environmental challenge associated with Gen AI, however. The amount of e-waste it produces – including printed circuit boards and batteries that can contain toxic materials such as lead and chromium – is also a potential issue. “While the benefits of AI are well-documented, the sustainability aspects, and particularly e-waste generation, have been largely overlooked,” Tzachor says.

Tzachor and his colleagues decided to address what they describe as a “significant knowledge gap” regarding how GAI contributes to e-waste. Led by sustainability scientist Peng Wang at the Institute of Urban Environment, Chinese Academy of Sciences, they developed a computational power-driven material flow analysis (CP-MFA) framework to quantify and evaluate the e-waste it produces. This involved modelling the computational resources required for training and deploying LLMs, explains Tzachor, and translating these resources into material flows and e-waste projections.
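
The skeleton of such a compute-to-material-flow estimate can be written in a few lines. Every number below is an arbitrary placeholder chosen for illustration; the study’s actual CP-MFA model and its growth scenarios are far more detailed:

```python
# Highly simplified illustration of a compute-driven e-waste estimate (toy numbers only).
compute_growth        = 1.5      # hypothetical year-on-year growth of GAI compute demand
servers_in_service    = 500_000  # hypothetical installed base of AI servers in 2023
server_lifetime_years = 4        # hypothetical average service life before decommissioning
mass_per_server_kg    = 30       # hypothetical mass of one retired server

for i, year in enumerate(range(2023, 2031)):
    fleet   = servers_in_service * compute_growth ** i  # fleet assumed to track compute demand
    retired = fleet / server_lifetime_years             # steady-state turnover
    tonnes  = retired * mass_per_server_kg / 1000
    print(f"{year}: ~{tonnes:,.0f} t of server e-waste (toy numbers)")
```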

“We considered various future scenarios of GAI development, ranging from the most aggressive to the most conservative growth,” he tells Physics World. “We also incorporated factors such as geopolitical restrictions and server lifecycle turnover.”

Using this CP-MFA framework, the researchers estimate that the total amount of Gen AI-related e-waste produced between 2023 and 2030 could reach the level of 5 million tons in a “worst-case” scenario where AI finds the most widespread applications.
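To make the logic of such an estimate concrete, here is a deliberately simplified material-flow sketch in Python. Every number in it is a placeholder chosen for illustration; none of the values, and none of the detail of the published CP-MFA framework, are reproduced here.

```python
# Toy computational-power-driven material-flow estimate (illustrative numbers only).
# Chain of reasoning: projected compute demand -> servers in service -> servers
# retired each year -> mass of e-waste. The published CP-MFA framework is far more
# detailed, but the bookkeeping runs along the same chain.

compute_demand_pflops = {2023: 4.0e5, 2030: 6.0e6}  # assumed aggregate AI demand (PFLOPS)
server_perf_pflops = 2.0      # assumed sustained PFLOPS delivered per AI server
server_mass_kg = 30.0         # assumed mass of boards, drives and chassis per server
server_lifetime_yr = 3.0      # assumed turnover time before replacement

def annual_ewaste_tonnes(year: int) -> float:
    """E-waste produced by retiring the installed base once per lifetime."""
    servers_in_service = compute_demand_pflops[year] / server_perf_pflops
    servers_retired_per_year = servers_in_service / server_lifetime_yr
    return servers_retired_per_year * server_mass_kg / 1000.0

for year in sorted(compute_demand_pflops):
    print(year, f"{annual_ewaste_tonnes(year):,.0f} tonnes of e-waste per year")
```

Mitigation measures of the kind discussed below enter a model like this as a longer hardware lifetime or as a recovered fraction subtracted from the retired mass.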

A range of mitigation measures

That worst-case scenario is far from inevitable, however. Writing in Nature Computational Science, the researchers also modelled the effectiveness of different e-waste management strategies. Among the strategies they studied were increasing the lifespan of existing computing infrastructures through regular maintenance and upgrades; reusing or remanufacturing key components; and improving recycling processes to recover valuable materials in a so-called “circular economy”.

Taken together, these strategies could reduce e-waste generation by up to 86%, according to the team’s calculations. Investing in more energy-efficient technologies and optimizing AI algorithms could also significantly reduce the computational demands of LLMs, Tzachor adds, and would reduce the need to update hardware so frequently.

Another mitigation strategy would be to design AI infrastructure in a way that uses modular components, which Tzachor says are easier to upgrade and recycle. “Encouraging policies that promote sustainable manufacturing practices, responsible e-waste disposal and extended producer responsibility programmes can also play a key role in reducing e-waste,” he explains.

As well as helping policymakers create regulations that support sustainable AI development and effective e-waste management, the study should also encourage AI developers and hardware manufacturers to adopt circular economy principles, says Tzachor. “On the academic side, it could serve as a foundation for future research aimed at exploring the environmental impacts of AI applications other than LLMs and developing more comprehensive sustainability frameworks in general.”

Vertical-nanowire transistors defeat the Boltzmann tyranny

A new transistor made from semiconducting vertical nanowires of gallium antimonide (GaSb) and indium arsenide (InAs) could rival today’s best silicon-based devices. The new transistors are switched on and off by electrons tunnelling through an energy barrier, making them highly energy-efficient. According to their developers at the Massachusetts Institute of Technology (MIT) in the US, they could be ideal for low-energy applications such as the Internet of Things (IoT).

Electronic transistors use an applied voltage to regulate the flow of electricity – that is, electrons – within a semiconductor chip. When this voltage is applied to a conventional silicon transistor, electrons climb over an energy barrier from one side of the device to the other, and it switches from an “off” state to an “on” one. This type of switching is the basis of modern information technology, but there is a fundamental physical limit on the threshold voltage required to get the electrons moving. This limit, which is sometimes termed the “Boltzmann tyranny” because it stems from the Boltzmann-like energy distribution of electrons in a semiconductor, puts a cap on the energy efficiency of this type of transistor.
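The numerical value of that cap follows directly from the thermal energy at room temperature. A quick check, using nothing but textbook constants (this is the generic limit, not anything specific to the MIT device):

```python
import math

# "Boltzmann tyranny": a transistor that switches by pushing electrons over a
# barrier cannot turn on more steeply than (kT/q)*ln(10) volts per decade of current.
k_B = 1.380649e-23      # Boltzmann constant, J/K
q = 1.602176634e-19     # elementary charge, C
T = 300.0               # room temperature, K

swing = (k_B * T / q) * math.log(10) * 1e3   # mV per decade of drain current
print(f"{swing:.1f} mV/decade")              # about 59.6 mV/decade at 300 K
```

Tunnelling devices sidestep this bound because their electrons pass through the barrier rather than over it, which is what makes the sub-60 mV/decade switching described below physically possible.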

Highly precise process

In the new work, MIT researchers led by electrical engineer Jesús A del Alamo made their transistor using a top-down fabrication technique they developed. This extremely precise process uses high-quality, epitaxially-grown structures and both dry and wet etching to fabricate nanowires just 6 nm in diameter. The researchers then placed a gate stack composed of a very thin gate dielectric and a metal gate on the sidewalls of the nanowires. Finally, they added point contacts to the source, gate and drain of the transistors using multiple planarization and etch-back steps.

The sub-10 nm size of the devices and the extreme thinness of the gate dielectric (just 2.4 nm) means that electrons are confined in a space so small that they can no longer move freely. In this quantum confinement regime, electrons no longer climb over the thin energy barrier at the GaSb/InAs heterojunction. Instead, they tunnel through it. The voltage required for such a device to switch is much lower than it is for traditional silicon-based transistors.

Steep switching slope and high drive current

Researchers have been studying tunnelling-type transistors for more than 20 years, notes Yanjie Shao, a postdoctoral researcher in nanoelectronics and semiconductor physics at MIT and the lead author of a study in Nature Electronics on the new transistor. Such devices are considered attractive because they allow for ultra-low-power electronics. However, they come with a major challenge: it is hard to maintain a sharp transition between “off” and “on” while delivering a high drive current.

When the project began five years ago, Shao says the team “believed in the potential of the GaSb/InAs ‘broken-band’ system to overcome this difficulty”. But it wasn’t all plain sailing. Fabricating such small vertical nanowires was, he says, “one of the biggest problems we faced”. Making a high-quality gate stack with a very low density of electronic trap states (states within dielectric materials that capture and release charge carriers in a semiconductor channel) was another challenge.

After many unsuccessful attempts, the team found a way to make the system work. “We devised a plasma-enhanced deposition method to make the gate dielectric and this was key to obtaining exciting transistor performance,” Shao tells Physics World.

The researchers also needed to understand the behaviour of tunnelling transistors, which Shao calls “not easy”. The task was made possible, he adds, by a combination of experimental work and first-principles modelling by Ju Li’s group at MIT, together with quantum transport simulation by David Esseni’s group at the University of Udine, Italy. These studies revealed that band alignment and near-periphery scaling of the number of conduction modes at the heterojunction interface play key roles in the physics of electrons under extreme confinement.

The reward for all this work is a device with a drive current as high as 300 µA/µm and a switching slope of less than 60 mV/decade (a decade here meaning a factor-of-10 change in current), with a supply voltage of just 0.3 V. This performance is below the fundamental limit achievable with silicon-based devices, and around 20 times better than that of other tunnelling transistors of its type.

Potential for novel devices

Shao says the most likely applications for the new transistor are in ultra-low-voltage electronics. These will be useful for artificial intelligence and Internet of Things (IoT) applications, which require devices with higher energy efficiencies. Shao also hopes the team’s work will bring about a better understanding of the physics at surfaces and interfaces that feature extreme quantum confinement – something that could lead to novel devices that benefit from such nanoscale physics.

The MIT team is now developing transistors with a slightly different configuration that features vertical “nano-fins”. These could make it possible to build more uniform devices with less structural variation across the surface. “Being so small, even a variation of just 1 nm can adversely affect their operation,” Shao says. “We also hope that we can bring this technology closer to real manufacturing by optimizing the process technology.”

Nuclear shape transitions visualized for the first time

Shape shifter: The nucleus of the xenon atom can assume different shapes depending on the balance of internal forces at play. When two xenon atoms collide at the LHC, simulations indicate that the extremely hot conditions will trigger changes in these shapes. (Courtesy: You Zhou, NBI)

Xenon nuclei change shape as they collide, transforming from soft, oval-shaped particles to rigid, spherical ones. This finding, which is based on simulations of experiments at CERN’s Large Hadron Collider (LHC), provides a first look at how the shapes of atomic nuclei respond to extreme conditions. While the technique is still at the theoretical stage, physicists at the Niels Bohr Institute (NBI) in Denmark and Peking University in China say that ultra-relativistic nuclear collisions at the LHC could allow for the first experimental observations of these so-called nuclear shape phase transitions.

The nucleus of an atom is made up of protons and neutrons, which are collectively known as nucleons. Like electrons, nucleons exist in different energy levels, or shells. To minimize the energy of the system, these shells take different shapes, with possibilities including pear, spherical, oval or peanut-shell-like formations. These shapes affect many properties of the atomic nucleus as well as nuclear processes such as the strong interactions between protons and neutrons. Being able to identify them is thus very useful for predicting how nuclei will behave.

Colliding pairs of 129Xe atoms at the LHC

In the new work, a team led by You Zhou at the NBI and Huichao Song at Peking University studied xenon-129 (129Xe). This isotope has 54 protons and 75 neutrons and is considered a relatively large atom, making its nuclear shape easier, in principle, to study than that of smaller atoms.

Usually, the nucleus of xenon-129 is oval-shaped (technically, it is a 𝛾-soft rotor). However, low-energy nuclear theory predicts that it can transition to a spherical, prolate or oblate shape under certain conditions. “We propose that to probe this change (called a shape phase transition), we could collide pairs of 129Xe atoms at the LHC and use the information we obtain to extract the geometry and shape of the initial colliding nuclei,” Zhou explains. “Probing these initial conditions would then reveal the shape of the 129Xe atoms after they had collided.”
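For readers who want the shape vocabulary pinned down, nuclear physicists usually describe quadrupole-deformed nuclei with two parameters, β (how deformed) and γ (how triaxial). In the standard parametrization, quoted here for orientation rather than taken from the paper, the nuclear surface is

```latex
R(\theta,\varphi) = R_0\left[\,1 + \beta\left(\cos\gamma\, Y_{20}(\theta,\varphi)
  + \frac{\sin\gamma}{\sqrt{2}}\,\bigl(Y_{22}(\theta,\varphi) + Y_{2,-2}(\theta,\varphi)\bigr)\right)\right],
```

where β = 0 is a sphere, γ = 0° a prolate (rugby-ball) shape and γ = 60° an oblate (discus) shape. A “γ-soft” nucleus such as xenon-129 has an energy that depends only weakly on γ, which is why its shape can wander between these limits.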

A quark-gluon plasma

To test the viability of such experiments, the researchers simulated accelerating atoms to ultra-relativistic speeds, equivalent to the energies involved in a typical particle-physics experiment at the LHC. At these energies, when nuclei collide with each other, their constituent protons and neutrons break down into smaller particles. These smaller particles are mainly quarks and gluons, and together they form a quark-gluon plasma, which is a liquid with virtually no viscosity.

Zhou, Song and colleagues modelled the properties of this “almost perfect” liquid using an advanced hydrodynamic model they developed called IBBE-VISHNU. According to these analyses, the Xe nuclei go from being soft and oval-shaped to rigid and spherical as they collide.

Studying shape transitions was not initially part of the researchers’ plan. The original aim of their work was to study conditions that prevailed in the first 10⁻⁶ seconds after the Big Bang, when the very early universe is thought to have been filled with a quark-gluon plasma of the type produced at the LHC. But after they realized that their simulations could shed light on a different topic, they shifted course.

“Our new study was initiated to address the open question of how nuclear shape transitions manifest in high-energy collisions,” Zhou explains, “and we also wanted to provide experimental insights into existing theoretical nuclear structure predictions.”

One of the team’s greatest difficulties lay in developing the complex models required to account for nuclear deformation and probe the structure of xenon and its fluctuations, Zhou tells Physics World. “There was also a need for compelling new observables that allow for a direct probe of the shape of the colliding nuclei,” he says.

Applications in both high-energy nuclear physics and low-energy nuclear structure physics

The work could advance our understanding of fundamental nuclear properties and the operation of the theory of quantum chromodynamics (QCD) under extreme conditions, Zhou adds. “The insights gleaned from this work could guide future nuclear collision experiments and influence our understanding of nuclear phase transitions, with applications extending to both high-energy nuclear physics and low-energy nuclear structure physics,” he says.

The NBI/Peking University researchers say that future experiments could validate the nuclear shape phase transitions they observed in their simulations. Expanding the study to other nuclei that could be collided at the LHC is also on the cards, says Zhou. “This could deepen our understanding of nuclear structure at ultra-short timescales of 10⁻²⁴ seconds.”

The research is published in Physical Review Letters.

Physicists close in on fractionally-charged electron mystery in graphene

Physicists in the US have found an explanation for why electrons in a material called pentalayer moiré graphene carry fractional charges even in the absence of a magnetic field. This phenomenon is known as the fractional quantum anomalous Hall effect, and teams at the Massachusetts Institute of Technology (MIT), Johns Hopkins University and Harvard University/University of California, Berkeley have independently suggested that an interaction-induced topological “flat” band in the material’s electronic structure may be responsible.

Scientists already knew that electrons in graphene could, in effect, split into fractions of themselves in the presence of a very strong magnetic field. This is an example of the fractional quantum Hall effect, which occurs when a material’s Hall conductance is quantized at fractional multiples of e²/h.
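For orientation, the quantization in both cases can be written compactly; this is the textbook statement, not something specific to the new work:

```latex
\sigma_{xy} = \nu\,\frac{e^{2}}{h},\qquad
\nu = 1, 2, 3, \ldots \ \text{(integer QHE)},\qquad
\nu = \tfrac{1}{3}, \tfrac{2}{5}, \tfrac{2}{3}, \ldots \ \text{(fractional QHE)}.
```

The “anomalous” versions discussed below show the same quantized values, but with no external magnetic field applied.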

In 2023, several teams of researchers introduced a new twist by observing this fractional quantization even without a magnetic field. The fractional quantum anomalous Hall effect, as it was dubbed, was initially observed in a material called twisted molybdenum ditelluride (MoTe2).

Then, in February this year, an MIT team led by physicist Long Ju spotted the same effect in pentalayer moiré graphene. This material consists of a layer of two-dimensional hexagonal boron nitride (hBN) with five layers of graphene (carbon sheets just one atom thick) stacked on top of it. The graphene and hBN layers are twisted at a small angle with respect to each other, resulting in a moiré pattern that can induce conflicting properties such as superconductivity and insulating behaviour within the structure.

Answering questions

Although Ju and colleagues were the first to observe the fractional quantum anomalous Hall effect in graphene, their paper did not explain why it occurred. In the latest group of studies, other scientists have put forward a possible solution to the mystery.

According to MIT’s Senthil Todadri, the effect could stem from the fact that electrons in two-dimensional materials like graphene are confined in such small spaces that they start interacting strongly. This means that they can no longer be considered as independent charges that naturally repel each other. The Johns Hopkins team led by Ya-Hui Zhang and the Harvard/Berkeley team led by Ashvin Vishwanath and Daniel E Parker came to similar conclusions, and published their work in Physical Review Letters alongside that of the MIT team.

Crystal-like periodic patterns form an electronic “flat” band

Todadri and colleagues started their analyses with a reasonably realistic model of the pentalayer graphene. This model treats the inter-electron Coulomb repulsion in an approximate way, replacing the “push” of all the other electrons on any given electron with a single potential, Todadri explains. “Such a strategy is routinely employed in quantum mechanical calculations of, say, the structure of atoms, molecules or solids,” he notes.
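Written schematically, that replacement is a mean-field (Hartree-type) step: each electron moves in one effective potential built from the average density of all the others. The notation below is generic, not the authors’ specific Hamiltonian:

```latex
\hat{H} \approx \sum_i \Bigl[\hat{h}_0(\mathbf{r}_i) + V_{\mathrm{eff}}(\mathbf{r}_i)\Bigr],
\qquad
V_{\mathrm{eff}}(\mathbf{r}) = \int \mathrm{d}^2 r'\,
\frac{e^{2}}{4\pi\varepsilon_0\varepsilon\,\lvert\mathbf{r}-\mathbf{r}'\rvert}\; n(\mathbf{r}'),
```

Because the density n(r′) is itself built from the occupied states of the Hamiltonian, potential and states must be iterated to self-consistency, and it is in this interacting, self-consistent treatment that the flat band described below appears.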

The MIT physicists found that the moiré arrangement of pentalayer graphene induces a weak electron potential that forces electrons passing through it to arrange themselves in crystal-like periodic patterns that form a “flat” electronic band. This band is absent in calculations that do not account for electron–electron interactions, they say.

Such flat bands are especially interesting because electrons in them become “dispersionless” – that is, their kinetic energy is suppressed. As the electrons slow almost to a halt, their effective mass approaches infinity, leading to exotic topological phenomena as well as strongly correlated states of matter associated with high-temperature superconductivity and magnetism. Other quantum properties of solids such as fractional splitting of electrons can also occur.

“Mountain and valley” landscapes

So what causes the topological flat band in pentalayer graphene to form? The answer lies in the “mountain and valley” landscapes that naturally appear in the electronic crystal. Electrons in this material experience these landscapes as pseudo-magnetic fields, which affect their motion and, in effect, do away with the need to apply a real magnetic field to induce the fractional Hall quantization.

“This interaction-induced topological (‘valley-polarized Chern-1’) band is also predicted by our theory to occur in the four- and six-layer versions of multilayer graphene,” Todadri says. “These structures may then be expected to host phases where electron fractions appear.”

In this study, the MIT team presented only a crude treatment of the fractional states. Future work, Todadri says, may focus on understanding the precise role of the moiré potential produced by aligning the graphene with a substrate. One possibility, he suggests, is that it simply pins the topological electron crystal in place. However, it could also stabilize the crystal by tipping its energy to be lower than a competing liquid state. Another open question is whether these fractional electron phenomena at zero magnetic field require a periodic potential in the first place. “The important next question is to develop a better theoretical understanding of these states,” Todadri tells Physics World.

Optimization algorithm gives laser fusion a boost

A new algorithmic technique could enhance the output of fusion reactors by smoothing out the laser pulses used to compress hydrogen to fusion densities. Developed by physicists at the University of Bordeaux, France, a simulated version of the new technique has already been applied to conditions at the US National Ignition Facility (NIF) and could also prove useful at other laser fusion experiments.

A major challenge in fusion energy is keeping the fuel – a mixture of the hydrogen isotopes deuterium and tritium – hot and dense enough for fusion reactions to occur. The two main approaches to doing this confine the fuel with strong magnetic fields or intense laser light and are known respectively as magnetic confinement fusion and inertial confinement fusion (ICF). In either case, when the pressure and temperature become high enough, the hydrogen nuclei fuse into helium. Since the energy released in this fusion reaction is, in principle, greater than the energy needed to get it going, fusion has long been viewed as a promising future energy source.

In 2022, scientists at NIF became the first to demonstrate “energy gain” from fusion, meaning that the fusion reactions produced more energy than was delivered to the fuel target via the facility’s system of super-intense lasers. The method they used was somewhat indirect. Instead of compressing the fuel itself, NIF’s lasers heated a gold container known as a hohlraum with the fuel capsule inside. The appeal of this so-called indirect-drive ICF is that it is less sensitive to inhomogeneities in the laser’s illumination. These inhomogeneities arise from interactions between the laser beams and the highly compressed plasma produced during fusion, and they are hard to get rid of.

In principle, though, direct-drive ICF is a stronger candidate for a fusion reactor, explains Duncan Barlow, a postdoctoral researcher at Bordeaux who led the latest research effort. This is because it couples more energy into the target, meaning it can deliver more fusion energy per unit of laser energy.

Reducing computing cost and saving time

To work out which laser configurations are the most homogeneous, researchers typically use iterative radiation-hydrodynamic simulations. These are time-consuming and computationally expensive (requiring around 1 million CPU hours per evaluation). “This expense means that only a few evaluations were run, and each step was best performed by an expert who could use her or his experience and the data obtained to pick the next configurations of beams to test the illumination uniformity,” Barlow says.

The new approach, he explains, relies on approximating some of the laser beam-plasma interactions by considering isotropic plasma profiles. This means that each iteration uses less than 1000 CPU hours, so thousands can be run for the cost of a single simulation using the old method. Barlow and his colleagues also created an automated method to quantify improvements and select the most promising step forward for the process.
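In outline, this is a surrogate-assisted greedy search: many cheap, approximate evaluations of illumination uniformity, with an automated rule picking the next beam configuration to try. The sketch below shows only the structure of that loop; the cost function and the beam parametrization are stand-ins, not the Bordeaux group’s actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def illumination_rms(beam_weights: np.ndarray) -> float:
    """Stand-in for the cheap (isotropic-profile) estimate of drive non-uniformity.
    In the real workflow this would be a reduced laser-plasma calculation."""
    return float(np.std(beam_weights))   # toy metric: equal beam weights are "best"

def optimise(n_beams: int = 48, n_iters: int = 2000, step: float = 0.02) -> np.ndarray:
    config = rng.uniform(0.5, 1.5, size=n_beams)          # initial per-beam power weights
    best_cost = illumination_rms(config)
    for _ in range(n_iters):
        trial = config + step * rng.normal(size=n_beams)  # perturb the configuration
        cost = illumination_rms(trial)
        if cost < best_cost:                              # automated "most promising step" rule
            config, best_cost = trial, cost
    return config

best = optimise()
print(f"final non-uniformity (rms): {illumination_rms(best):.4f}")
```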

The researchers demonstrated their technique using simulations of a spherical target at NIF. These simulations showed that the optimized configuration should produce convergent shocks in the fuel target, resulting in pressures three times higher (and densities almost two times higher) than in the original experiment. Although their simulations focused on NIF, they say it could also apply to other pellet geometries and other facilities.

Developing tools

The study builds on work by Barlow’s supervisor, Arnaud Colaïtis, who developed a tool for simulating laser-plasma interactions that incorporates a phenomenon known as cross-beam energy transfer (CBET) that contributes to inhomogeneities. Even with this and other such tools, however, Barlow explains that fusion scientists have long struggled to define optimal illuminations when the system deviates from a simple mathematical description. “My supervisor recognized the need for a new solution, but it took us a year of further development to identify such a methodology,” he says. “Initially, we were hoping to apply neural networks – similar to image recognition – to speed up the technique, but we realized that this required prohibitively large training data.”

As well as working on this project, Barlow is also involved in a French project called Taranis that aims to use ICF to produce energy – an approach known as inertial fusion energy (IFE). “I am applying the methodology from my ICF work in a new way to ensure the robust, uniform drive of targets with the aim of creating a new IFE facility and eventually a power plant,” he tells Physics World.

A broader physics application, he adds, would be to incorporate more laser-plasma instabilities beyond CBET that are non-linear and normally too expensive to model accurately with radiation-hydrodynamic simulations. Some examples include stimulated Brillouin scattering, stimulated Raman scattering and two-plasmon decay. “The method presented in our work, which is detailed in Physical Review Letters, is a great accelerated scheme for better evaluating these laser-plasma instabilities, their impact for illumination configurations and post-shot analysis,” he says.

New imaging technique could change how we look at certain objects in space

A new imaging technique that takes standard two-dimensional (2D) radio images and reconstructs them as three-dimensional (3D) ones could tell us more about structures such as the jet-like features streaming out of galactic black holes. According to the technique’s developers, it could even call into question physical models of how radio galaxies formed in the first place.

“We will now be able to obtain information about the 3D structures in polarized radio sources whereas currently we only see their 2D structures as they appear in the plane of the sky,” explains Lawrence Rudnick, an observational astrophysicist at the University of Minnesota, US, who led the study. “The analysis technique we have developed can be performed not only on the many new maps to be made with powerful telescopes such as the Square Kilometre Array and its precursors, but also from decades of polarized maps in the literature.”

Analysis of data from the MeerKAT radio telescope array

In their new work, Rudnick and colleagues in Australia, Mexico, the UK and the US studied polarized light data from the MeerKAT radio telescope array at the South African Radio Astronomy Observatory. They exploited an effect called Faraday rotation, which rotates the angle of polarized radiation as it travels through a magnetized ionized region. By measuring the amount of rotation for each pixel in an image, they can determine how much material that radiation passed through.

In the simplest case of a uniform medium, says Rudnick, this information tells us the relative distance between us and the emitting region for that pixel. “This allows us to reconstruct the 3D structure of the radiating plasma,” he explains.
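In practice this is done pixel by pixel: the polarization angle χ is measured at many frequencies and fitted against wavelength squared, χ(λ²) = χ₀ + RM λ², with the slope (the rotation measure RM) carrying the information about the magnetized material along the line of sight. Below is a minimal, synthetic version of that per-pixel fit; the numbers are invented, not MeerKAT data.

```python
import numpy as np

# Recover a rotation measure from polarization angles measured at several
# frequencies, using chi = chi0 + RM * lambda^2 (one pixel's worth of data).
c = 299_792_458.0                                   # speed of light, m/s
freqs = np.array([0.9e9, 1.1e9, 1.3e9, 1.5e9])      # Hz, L-band-like channels
lam2 = (c / freqs) ** 2                             # wavelength squared, m^2

true_RM, true_chi0 = 35.0, 0.4                      # rad/m^2 and rad, assumed values
chi = true_chi0 + true_RM * lam2
chi += np.random.default_rng(1).normal(0.0, 0.01, chi.size)   # measurement noise

RM_fit, chi0_fit = np.polyfit(lam2, chi, 1)         # linear fit: slope is the RM
print(f"fitted RM = {RM_fit:.1f} rad/m^2, chi0 = {chi0_fit:.2f} rad")
# Real analyses use RM synthesis to deal with the n*pi ambiguity in measured angles.
```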

An indication of the position of the emitting region

The new study builds on a previous effort that focused on a specific cluster of galaxies for which the researchers already had cubes of data representing its 2D appearance in the sky, plus a third axis given by the amount of Faraday rotation. In the latest work, they decided to look at this data in a new way, viewing the cubes from different angles.

“We realized that the third axis was actually giving us an indication of the position of the emitting region,” Rudnick says. “We therefore extended the technique to situations where we didn’t have cubes to start with, but could re-create them from a pair of 2D images.”

There is a problem, however, in that polarization angle can also rotate as the radiation travels through regions of space that are anything but uniform, including our own Milky Way galaxy and other intervening media. “In that case, the amount of rotation doesn’t tell us anything about the actual 3D structure of the emitting source,” Rudnick adds. “Separating out this information from the rest of the data is perhaps the most difficult aspect of our work.”

Shapes of structures are very different in 3D

Using this technique, Rudnick and colleagues were able to determine the line-of-sight orientation of active galactic nuclei (AGN) jets as they are expelled from a massive black hole at the centre of the Fornax A galaxy. They were also able to observe how the materials in these jets interact with “cosmic winds” (essentially larger-scale versions of the magnetic solar wind streaming from our own Sun) and other space weather, and to analyse the structures of magnetic fields inside the jets from the M87 galaxy’s black hole.

The team found that the shapes of structures as inferred from 2D radio images were sometimes very different from those that appear in the 3D reconstructions. Rudnick notes that some of the mental “pictures” we have in our heads of the 3D structure of radio sources will likely turn out to be wrong after they are re-analysed using the new method. One good example in this study was a radio source that, in 2D, looks like a tangled string of filaments filling a large volume. When viewed in 3D, it turns out that these filamentary structures are in fact confined to a band on the surface of the source. “This could change the physical models of how radio galaxies are formed, basically how the jets from the black holes in their centres interact with the surrounding medium,” Rudnick tells Physics World.

The work is detailed in the Monthly Notices of the Royal Astronomical Society.

Electromagnetic waves solve partial differential equations

Waveguide-based structures can solve partial differential equations by mimicking elements in standard electronic circuits. This novel approach, developed by researchers at Newcastle University in the UK, could boost efforts to use analogue computers to investigate complex mathematical problems.

Many physical phenomena – including heat transfer, fluid flow and electromagnetic wave propagation, to name just three – can be described using partial differential equations (PDEs). Apart from a few simple cases, these equations are hard to solve analytically, and sometimes even impossible. Mathematicians have developed numerical techniques such as finite difference or finite-element methods to solve more complex PDEs. However, these numerical techniques require a lot of conventional computing power, even after using methods such as mesh refinement and parallelization to reduce calculation time.

Alternatives to numerical computing

To address this, researchers have been investigating alternatives to numerical computing. One possibility is electromagnetic (EM)-based analogue computing, where calculations are performed by controlling the propagation of EM signals through a materials-based processor. These processors are typically made up of optical elements such as Bragg gratings, diffractive networks and interferometers as well as optical metamaterials, and the systems that use them are termed “metatronic” by analogy with more familiar electronic circuit elements.

The advantage of such systems is that because they use EM waves, computing can take place literally at light speeds within the processors. Systems of this type have previously been used to solve ordinary differential equations, and to perform operations such as integration, differentiation and matrix multiplication.

Some mathematical operations can also be computed with electronic systems – for example, with grid-like arrays of “lumped” circuit elements (that is, components such as resistors, inductors and capacitors that produce a predictable output from a given input). Importantly, these grids can emulate the mesh elements that feature in the finite-element method of solving various types of PDEs numerically.

Recently, researchers demonstrated that this emulation principle also applies to photonic computing systems. They did this using the splitting and superposition of EM signals within an engineered network of dielectric waveguide junctions known as photonic Kirchhoff nodes. At these nodes, a combination of photonic structures, such as ring resonators and X-junctions, can similarly imitate lumped circuit elements.

Interconnected metatronic elements

In the latest work, Victor Pacheco-Peña of Newcastle’s School of Mathematics, Statistics and Physics and colleagues showed that such waveguide-based structures can be used to calculate solutions to PDEs that take the form of the Helmholtz equation ∇²f(x,y) + k²f(x,y) = 0. This equation is used to model many physical processes, including the propagation, scattering and diffraction of light and sound as well as the interactions of light and sound with resonators.

Unlike in previous setups, however, Pacheco-Peña’s team exploited a grid-like network of parallel plate waveguides filled with dielectric materials. This structure behaves like a network of interconnected T-circuits, or metatronic elements, with the waveguide junctions acting as sampling points for the PDE solution, Pacheco-Peña explains. “By carefully manipulating the impedances of the metatronic circuits connecting these points, we can fully control the parameters of the PDE to be solved,” he says.

The researchers used this structure to solve various boundary value problems by inputting signals to the network edges. Such problems frequently crop up in situations where information from the edges of a structure is used to infer details of physical processes in other regions in it. For example, by measuring the electric potential at the edge of a semiconductor, one can calculate the distribution of electric potential near its centre.
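Numerically, the same boundary-value problem is usually handled by discretizing the Helmholtz equation on a mesh and solving the resulting linear system; the waveguide junctions play the role of the mesh’s sampling points. The snippet below is a generic solver shown for orientation only (it is not a model of the metatronic network, and it uses finite differences rather than finite elements for brevity):

```python
import numpy as np

# Finite-difference Helmholtz boundary-value problem on the unit square:
#   d2f/dx2 + d2f/dy2 + k^2 f = 0, with f prescribed on the boundary (Dirichlet).
N, k = 31, 8.0                       # grid points per side, wavenumber
h = 1.0 / (N - 1)
x = np.linspace(0.0, 1.0, N)

f = np.zeros((N, N))
f[0, :] = np.sin(np.pi * x)          # "input signal" imposed along one edge
# the other three edges are held at zero

n = N - 2                            # interior points per direction
A = np.zeros((n * n, n * n))
b = np.zeros(n * n)

def row(i, j):                       # map interior point (i, j) to a matrix row
    return (i - 1) * n + (j - 1)

for i in range(1, N - 1):
    for j in range(1, N - 1):
        r = row(i, j)
        A[r, r] = (k * h) ** 2 - 4.0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii, jj = i + di, j + dj
            if 1 <= ii <= N - 2 and 1 <= jj <= N - 2:
                A[r, row(ii, jj)] = 1.0
            else:
                b[r] -= f[ii, jj]    # known boundary values move to the right-hand side

f[1:-1, 1:-1] = np.linalg.solve(A, b).reshape(n, n)
print(f"field at the centre of the domain: {f[N // 2, N // 2]:.4f}")
```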

Pacheco-Peña says the new technique can be applied to “open” boundary problems, such as calculating how light focuses and scatters, as well as “closed” ones, like sound waves reflecting within a room. However, he acknowledges that the method is not yet perfect because some undesired reflections at the boundary of the waveguide network distort the calculated PDE solution. “We have identified the origin of these reflections and proposed a method to reduce them,” he says.

In this work, which is detailed in Advanced Photonics Nexus, the researchers numerically simulated the PDE solving scheme at microwave frequencies. In the next stages of their work, they aim to extend their technique to higher frequency ranges. “Previous works have demonstrated metatronic elements working in these frequency ranges, so we believe this should be possible,” Pacheco-Peña tells Physics World. “This might also allow the waveguide-based structure to be integrated with silicon photonics or plasmonic devices.”

Lightning sets off bursts of high-energy electrons in Earth’s inner radiation belt

A supposedly stable belt of radiation 7000 km above the Earth’s surface may in fact be producing damaging bursts of high-energy electrons. According to scientists at the University of Colorado Boulder, US, the bursts appear to be triggered by lightning, and understanding them could help determine the safest “windows” for launching spacecraft – especially those with a human cargo.

The Earth is surrounded by two doughnut-shaped radiation belts that lie within our planet’s magnetosphere. While both belts contain high concentrations of energetic electrons, the electrons in the outer belt (which starts from about 4 Earth radii above the Earth’s surface and extends to about 9–10 Earth radii) typically have energies in the MeV range. In contrast, electrons in the inner belt, which is located between about 1.1 and 2 Earth radii, have energies between 10 and a few hundred kiloelectronvolts (keV).

At the higher end of this energy scale, these electrons easily penetrate the walls of spacecraft and can damage sensitive electronics inside. They also pose risks to astronauts who leave the protective environment of their spacecraft to perform extravehicular activities.

The size of the radiation belts, as well as the energy and number of electrons they contain, varies considerably over time. One cause of these variations is sub-second bursts of energetic electrons that enter the atmosphere from the magnetosphere that surrounds it. These rapid microbursts are most commonly seen in the outer radiation belt, where they are the result of interactions with phenomena called whistler mode chorus radio waves. However, they can also be observed in the inner belt, where they are generated by whistlers produced by lightning storms. Such lightning-induced precipitation, as it is known, typically occurs at low energies of tens of keV to around 100 keV.

Outer-belt energies in inner-belt electrons

In the new study, researchers led by CU Boulder aerospace engineering student Max Feinland observed clumps of electrons with MeV energies in the inner belt for the first time. This serendipitous discovery came while Feinland was analysing data from a now-decommissioned NASA satellite called the Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX). He originally intended to focus on outer-belt electrons, but “after stumbling across these events in the inner belt, we thought they were interesting and decided to investigate further,” he tells Physics World.

After careful analysis, Feinland, who was working as an undergraduate research assistant in Lauren Blum’s team at CU Boulder’s Laboratory for Atmospheric and Space Physics at the time, identified 45 bursts of high-energy electrons in the inner belt in data from 1996 to 2006. At first, he and his colleagues weren’t sure what could be causing them, since the chorus waves known to produce such high-energy bursts are generally an outer-belt phenomenon. “We actually hypothesized a number of processes that could explain our observations,” he says. “We even thought that they might be due to Very Low Frequency (VLF) transmitters used for naval communications.”

The lightbulb moment, however, came when Feinland compared the bursts to records of lightning strikes in North America. Intriguingly, he found that several of the peaks in the electron bursts seemed to happen less than a second after the lightning strikes.
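The comparison itself is conceptually simple: for each electron burst, check whether a lightning stroke was recorded within the preceding second or so. A toy version of that timestamp matching (made-up times, not the SAMPEX or lightning-network records):

```python
import numpy as np

# Which electron bursts follow a lightning stroke by less than one second?
lightning_times = np.array([12.0, 57.3, 203.9, 412.6])       # s, assumed strokes
burst_times = np.array([12.4, 130.0, 204.1, 412.9, 500.2])   # s, assumed bursts
max_lag = 1.0                                                 # s, association window

for t_burst in burst_times:
    lags = t_burst - lightning_times
    lags = lags[(lags > 0.0) & (lags < max_lag)]
    tag = f"follows lightning by {lags.min():.1f} s" if lags.size else "no match"
    print(f"burst at t = {t_burst:6.1f} s: {tag}")
```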

A lightning trigger

The researchers’ explanation for this is that radio waves produced after a lightning strike interact with electrons in the inner belt. These electrons then begin to oscillate between the Earth’s northern and southern hemispheres with a period of just 0.2 seconds. With each oscillation, some electrons drop out of the inner belt and into the atmosphere. This last finding was unexpected: while researchers knew that high-energy electrons can fall into the atmosphere from the outer radiation belt, this is the first time that they have observed them coming from the inner belt.

Feinland says the team’s discovery could help space-launch firms and national agencies decide when to launch their most sensitive payloads. With further studies, he adds, it might even be possible to determine how long these high-energy electrons remain in the inner belt after geomagnetic storms. “If we can quantify these lifetimes, we could determine when it is safest to launch spacecraft,” he says.

The researchers are now seeking to calculate the exact energies of the electrons. “Some of them may be even more energetic than 1 MeV,” Feinland says.

The present work is detailed in Nature Communications.

‘Buddy star’ could explain Betelgeuse’s varying brightness

An unseen low-mass companion star may be responsible for the recently observed “Great Dimming” of the red supergiant star Betelgeuse. According to this hypothesis, which was put forward by researchers in the US and Hungary, the star’s apparent brightness varies when an orbiting companion – dubbed α Ori B or, less formally, “Betelbuddy” – displaces light-blocking dust, thereby changing how much of Betelgeuse’s light reaches the Earth.

Located about 548 light-years away, in the constellation Orion, Betelgeuse is the 10th brightest star in the night sky. Usually, its brightness varies over a period of 416 days, but in 2019–2020, its output dropped to the lowest level ever recorded.

At the time, some astrophysicists speculated that this “Great Dimming” might mean that the star was reaching the end of its life and would soon explode as a supernova. Over the next three years, however, Betelgeuse’s brightness recovered, and alternative hypotheses gained favour. One such suggestion is that a cooler spot formed on the star and began ejecting material and dust, causing its light to dim as seen from Earth.

Pulsation periods

The latest hypothesis was inspired, in part, by the fact that Betelgeuse experiences another cycle in addition to its fundamental 416-day pulsation period. This second cycle, known as the long secondary period (LSP), lasts 2170 days, and the Great Dimming occurred after its minimum brightness coincided with a minimum in the 416-day cycle.

While astrophysicists are not entirely sure what causes LSPs, one leading theory suggests that they stem from a companion star. As this companion orbits its parent star, it displaces the cosmic dust the star produces and expels, which in turn changes the amount of starlight that reaches us.
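A rough sense of how often the two cycles bottom out together comes from treating them as two sinusoids with the quoted 416-day and 2170-day periods. The snippet below does only that; real stellar pulsations are neither sinusoidal nor this regular, so it is purely illustrative.

```python
import numpy as np

# Two idealized brightness cycles with Betelgeuse's quoted periods.
P1, P2 = 416.0, 2170.0                     # days: fundamental pulsation, LSP
t = np.arange(0.0, 60 * 365.25, 1.0)       # sixty years, daily sampling

b1 = np.cos(2.0 * np.pi * t / P1)          # minimum when b1 = -1
b2 = np.cos(2.0 * np.pi * t / P2)

both_near_min = (b1 < -0.9) & (b2 < -0.9)  # both cycles close to minimum
episodes = int(np.sum(np.diff(both_near_min.astype(int)) == 1))
print(f"episodes of coincident minima in 60 years: {episodes}")
```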

Lots of observational data

To understand whether this might be happening with Betelgeuse, a team led by Jared Goldberg at the Flatiron Institute’s Center for Computational Astrophysics, Meridith Joyce at the University of Wyoming and László Molnár of the Konkoly Observatory (HUN-REN CSFK, Budapest) analysed a wealth of observational data from the American Association of Variable Star Observers. “This association has been collecting data from both professional and amateur astronomers, so we had access to decades’ worth of data,” explains Molnár. “We also looked at data from the space-based SMEI instrument and spectroscopic observations collected by the STELLA robotic telescope.”

The researchers combined these direct-observation data with advanced computer models that simulate Betelgeuse’s activity. When they studied how the star’s brightness and its velocity varied relative to each other, they realized that the brightest phase must correspond to a companion being in front of it. “This is the opposite of what others have proposed,” Molnár notes. “For example, one popular hypothesis postulates that companions are enveloped in dense dust clouds, obscuring the giant star when they pass in front of them. But in this case, the companion must remove dust from its vicinity.”

As for how the companion does this, Molnár says they are not sure whether it evaporates the dust away or shepherds it to the opposite side of Betelgeuse with its gravitational pull. Both are possible, and Goldberg adds that other processes may also contribute. “Our new hypothesis complements the previous one involving the formation of a cooler spot on the star that ejects material and dust,” he says. “The dust ejection could occur because the companion star was out of the way, behind Betelgeuse rather than along the line of sight.”

The least absurd of all hypotheses?

The prospect of a connection between an LSP and the activity of a companion star is a longstanding one, Goldberg tells Physics World. “We know that Betelgeuse has an LSP and if an LSP exists, that means a ‘buddy’ for Betelgeuse,” he says.

The researchers weren’t always so confident, though. Indeed, they initially thought the idea of a companion star for Betelgeuse was absurd, so the hardest part of their work was to prove to themselves that this was, in fact, the least absurd of all hypotheses for what was causing the LSP.

“We’ve been interested in Betelgeuse for a while now, and in a previous paper, led by Meridith, we already provided new size, distance and mass estimates for the star based on our models,” says Molnár. “Our new data started to point in one direction, but first we had to convince ourselves that we were right and that our claims are novel.”

The findings could have more far-reaching implications, he adds. While around one third of all red giants and supergiants have LSPs, the relationships between LSPs and brightness vary. “There are therefore a host of targets out there and potentially a need for more detailed models on how companions and dust clouds may interact,” Molnár says.

The researchers are now applying for observing time on space telescopes in hopes of finding direct evidence that the companion exists. One challenge they face is that because Betelgeuse is so bright – indeed, too bright for many sensitive instruments – a “Betelbuddy”, as Goldberg has nicknamed it, may be simpler to explain than it is to observe. “We’re throwing everything we can at it to actually find it,” Molnár says. “We have some ideas on how to detect its radiation in a way that can be separated from the absolute deluge of light Betelgeuse is producing, but we have to collect and analyse our data first.”

The study is published in The Astrophysical Journal.

Physicists propose new solution to the neutron lifetime puzzle

Neutrons inside the atomic nucleus are incredibly stable, but free neutrons decay within 15 minutes – give or take a few seconds. The reason we don’t know this figure more precisely is that the two main techniques used to measure it produce conflicting results. This so-called neutron lifetime problem has perplexed scientists for decades, but now physicists at TU Wien in Austria have come up with a possible explanation. The difference in lifetimes, they say, could stem from the neutron being in not-yet-discovered excited states that have different lifetimes as well as different energies.

According to the Standard Model of particle physics, free neutrons undergo a process called beta decay that transforms a neutron into a proton, an electron and an antineutrino. To measure the neutrons’ average lifetime, physicists employ two techniques. The first, known as the bottle technique, involves housing neutrons within a container and then counting how many of them remain after a certain amount of time. The second approach, known as the beam technique, is to fire a neutron beam with a known intensity through an electromagnetic trap and measure how many protons exit the trap within a fixed interval.

Researchers have been performing these experiments for nearly 30 years but they always encounter the same problem: the bottle technique yields an average neutron survival time of 880 s, while the beam method produces a lifetime of 888 s. Importantly, this eight-second difference is larger than the uncertainties of the measurements, meaning that known sources of error cannot explain it.

A mix of different neutron states?

A team led by Benjamin Koch and Felix Hummel of TU Wien’s Institute of Theoretical Physics is now suggesting that the discrepancy could be caused by nuclear decay producing free neutrons in a mix of different states. Some neutrons might be in the ground state, for example, while others could be in a higher-energy excited state. This would alter the neutrons’ lifetimes, they say, because elements in the so-called transition matrix that describes how neutrons decay into protons would be different for neutrons in excited states and neutrons in ground states.

As for how this would translate into different beam and bottle lifetime measurements, the team say that neutron beams would naturally contain several different neutron states. Neutrons in a bottle, in contrast, would almost all be in the ground state – simply because they would have had time to cool down before being measured in the container.
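A back-of-the-envelope calculation shows the size of admixture such a picture would need. If a bottle holds essentially pure ground-state neutrons while the beam carries a small fraction p of longer-lived excited ones, the beam’s rate-based lifetime is the weighted mixture of the two decay rates. The excited-state lifetime below is invented purely for illustration; it is not a value from the TU Wien paper.

```python
# Two-component toy model:  1/tau_beam = (1 - p)/tau_ground + p/tau_excited
tau_bottle = 880.0     # s, bottle result (taken here as the ground-state lifetime)
tau_beam = 888.0       # s, beam result
tau_excited = 3000.0   # s, ASSUMED lifetime of a hypothetical excited state

p = (1.0 / tau_bottle - 1.0 / tau_beam) / (1.0 / tau_bottle - 1.0 / tau_excited)
print(f"required excited-state fraction in the beam: p = {p:.3f}")   # roughly 1%
```

Under these assumptions, an admixture of roughly one per cent would be enough to open an eight-second gap.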

Towards experimental tests

Could these different states be detected? The researchers say it’s possible, but they caution that experiments will be needed to prove it. They also note that theirs is not the first hypothesis put forward to explain the neutron lifetime discrepancy. Perhaps the simplest explanation is that the gap stems from unknown systematic errors in either the beam experiment, the bottle experiment, or both. Other, more theoretical approaches have also been proposed, but Koch says they do not align with existing experimental data.

“Personally, I find hypotheses that require fewer and smaller new assumptions – and that are experimentally testable – more appealing,” Koch says. As an example, he cites a 2020 study showing that a phenomenon called the inverse quantum Zeno effect could speed up the decay of bottle-confined neutrons, calling it “an interesting idea”. Another possible explanation of the puzzle, which he says he finds “very intriguing”, has just been published and describes the admixture of novel bound electron-proton states in the final state of a weak decay, known as “Second Flavor Hydrogen Atoms”.

As someone with a background in quantum gravity and theoretical physics beyond the Standard Model, Koch is no stranger to predictions that are hard (and sometimes impossible, at least in the near term) to test. “Contributing to the understanding of a longstanding problem in physics with a hypothesis that could be experimentally tested soon is therefore particularly exciting for me,” he tells Physics World. “If our hypothesis of excited neutron states is confirmed by future experiments, it would shed a completely new light on the structure of neutral nuclear matter.”

The researchers now plan to collaborate with colleagues from the Institute for Atomic and Subatomic Physics at TU Wien to re-evaluate existing experimental data and explore various theoretical models. “We’re also hopeful about designing experiments specifically aimed at testing our hypothesis,” Koch reveals.

The present study is detailed in Physical Review D.
