
Symmetric crystals can absorb light asymmetrically

Scientists have discovered a centrosymmetric crystal that behaves as though it is chiral – absorbing left- and right-handed circularly polarized light differently. This counterintuitive finding, from researchers at Northwestern University and the University of Wisconsin-Madison in the US, could help in the development of new technologies that control light. Applications include brighter optical displays and improved sensors.

Centrosymmetric crystals are those that look identical when reflected through a central point. Until now, only non-centrosymmetric crystals were thought to exhibit differential absorption of circularly polarized light, owing to their chirality – a property that describes how an object differs from its mirror image (such as our left and right hands).

In the new work, a team led by chemist Roel Tempelaar studied how a centrosymmetric crystal made from lithium, cobalt and selenium oxide interacts with circularly polarized light – that is, light whose electric-field direction rotates in a helical or “corkscrew-like” fashion as it propagates through space. Such light is routinely employed to study the conformation of chiral biomolecules such as proteins, DNA and amino acids, as they absorb left- and right-handed circularly polarized light differently, a phenomenon known as circular dichroism.
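For reference, circular dichroism is usually quantified as the difference between the absorbances for left- and right-handed circularly polarized light, often normalized as the absorption dissymmetry factor (a standard textbook definition, not a formula quoted from this study):

\[ \Delta A = A_{\mathrm{L}} - A_{\mathrm{R}}, \qquad g_{\mathrm{abs}} = \frac{2\,(A_{\mathrm{L}} - A_{\mathrm{R}})}{A_{\mathrm{L}} + A_{\mathrm{R}}}, \qquad g_{\mathrm{abs}} \in [-2, 2]. \]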

The crystal, which has the chemical formula Li₂Co₃(SeO₃)₄, was first synthesized in 1999, but has not (to the best of the researchers’ knowledge) been discussed in the literature since.

A photophysical process involving strong chiroptical signals

Tempelaar and colleagues found that the material absorbed one handedness of circularly polarized light more strongly than the other. This property, they say, stems from a photophysical process that produces strong chiroptical signals, which invert when the sample is flipped. Such a mechanism differs from the conventional chiroptical response to circularly polarized light and had not previously been seen in single centrosymmetric crystals.

Not only does the discovery challenge long-held assumptions about crystals and chiroptical responses, it opens up opportunities for engineering new optical materials that control light, says Tempelaar. Potential applications could include brighter optical displays, polarization-dependent optical diodes, chiral lasing, more sensitive sensors and new types of faster, more secure light-based communication.

“Our work has shown that centrosymmetric crystals should not be dismissed when designing materials for circularly polarized light absorption,” Tempelaar tells Physics World. “Indeed, we found such absorption to be remarkably strong for Li₂Co₃(SeO₃)₄.”

The researchers say they took on this study after their theoretical calculations revealed that Li₂Co₃(SeO₃)₄ should show circular dichroism. They then successfully grew the crystals by mixing cobalt hydroxide, lithium hydroxide monohydrate and selenium dioxide and heating the mixture for five days in an autoclave at about 220 °C.

The “tip of the iceberg”

“This crystal is the first candidate material that we resorted to in order to test our prediction,” says Tempelaar. “The fact that it behaved the way it does could just be a great stroke of luck, but it is more likely that Li₂Co₃(SeO₃)₄ is just the tip of the iceberg spanning many centrosymmetric materials for circularly polarized light absorption.”

Some of those compounds may compete with current champion materials for circularly polarized light absorption, allowing researchers to push the boundaries of optical materials engineering, he adds. “Much remains to be discovered, however, and we are eager to progress this research direction further.”

“We are also interested in incorporating such materials into photonic structures such as optical microcavities to amplify their desirable optical properties and yield devices with new functionality,” Tempelaar reveals.

Full details of the study are reported in Science.

Soliton structure protects superfluorescence

Superfluorescence is a collective quantum phenomenon in which many excited particles emit light coherently in a sudden, intense burst. It is usually only observed at cryogenic temperatures, but researchers in the US and France have now determined how and why superfluorescence occurs at room temperature in a lead halide perovskite. The work could help in the development of materials that host exotic coherent quantum states – like superconductivity, superfluidity or superfluorescence – under ambient conditions, they say.

Superfluorescence and other collective quantum phenomena are rapidly destroyed at temperatures higher than cryogenic ones because of thermal vibrations produced in the crystal lattice. In the system studied in this work, the researchers, led by physicist Kenan Gundogdu of North Carolina State University, found that excitons (bound electron–hole pairs) spontaneously form localized, coherence-preserving domains. “These solitons act like quantum islands,” explains Gundogdu. “Excitons inside these islands remain coherent while those outside rapidly dephase.”

The soliton structure acts as a shield, he adds, protecting its contents from thermal disturbances – a kind of quantum analogue of “soundproofing”, that is, isolation from vibrations. “Here, coherence is maintained not by external cooling but by intrinsic self-organization,” he says.

Intense, time-delayed bursts of coherent emission

The team, which also includes researchers from Duke University, Boston University and the Institut Polytechnique de Paris, began their experiment by exciting lead halide perovskite samples with intense femtosecond laser pulses to generate a dense population of excitons in the material. Under normal conditions, these excitons recombine and emit light incoherently, but at high enough densities, as was the case here, the researchers observed intense, time-delayed bursts of coherent emission, which is a signature of superfluorescence.
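For context, the textbook signatures of such a burst go back to Dicke’s analysis of cooperative emission (standard results, not specific to this paper): for N emitters radiating coherently, the peak intensity, burst duration and delay before the burst scale roughly as

\[ I_{\mathrm{peak}} \propto N^{2}, \qquad \tau_{\mathrm{SF}} \sim \frac{\tau_{\mathrm{spont}}}{N}, \qquad t_{\mathrm{D}} \sim \tau_{\mathrm{SF}} \ln N, \]

where τ_spont is the single-emitter spontaneous lifetime – hence the intense, time-delayed flashes that distinguish superfluorescence from ordinary incoherent emission.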

When they analysed how the emission evolved over time, the researchers observed that it fluctuated. Surprisingly, these fluctuations were not random, explains Gundogdu, but were modulated by a well-defined frequency, corresponding to a specific lattice vibrational mode. “This suggested that the coherent excitons that emit superfluorescence come from a region in the lattice in which the lattice modes themselves oscillate in synchrony.”

So how can coherent lattice oscillations arise in a thermally disordered environment? The answer involves polarons, says Gundogdu. These are excitons “dressed” by the local lattice deformations they create. “Above a critical excitation density, these polarons self-organize into a soliton, which concentrates energy into specific vibrational modes while suppressing others. This process filters out incoherent lattice motion, allowing a stable collective oscillation to emerge.”

The new work, which is detailed in Nature, builds on a previous study in which the researchers observed superfluorescence in perovskites at room temperature – an unexpected result. They suspected that an intrinsic effect was protecting excitons from dephasing – possibly through the quantum analogue of vibration isolation mentioned above – but the mechanism behind this was unclear.

In this latest experiment, the team determined how polarons can self-organize into soliton states, and revealed an unconventional form of superfluorescence where coherence emerges intrinsically inside solitons. This coherence protection mechanism might be extended to other macroscopic quantum phenomena such as superconductivity and superfluidity.

“These effects are foundational for quantum technologies, yet how coherence survives at high temperatures is still unresolved,” Gundogdu tells Physics World. “Our findings provide a new principle that could help close this knowledge gap and guide the design of more robust, high-temperature quantum systems.”

Statistical physics reveals how ‘condenser’ occupations limit worker mobility

Occupational transition network: graphical visualization of the weighted, directed labour-market network in France, derived from the transition probability matrix computed from data spanning 2012 to 2020. Each node symbolizes an occupation, with links illustrating transitions between them. Node sizes correspond to the occupation’s workforce size; line widths are proportional to the transition probability. (Courtesy: M Knicker, K Naumann-Woleske and M Benzaquen, École Polytechnique Paris)

A new statistical physics analysis of the French labour market has revealed that the vast majority of occupations act as so-called condenser occupations. These structural bottlenecks attract workers from many other types of jobs, but offer very limited options for further mobility. This finding could help explain why changing jobs in response to shocks like technological change or economic crises is often so slow, say scientists at the École Polytechnique in Paris, who performed the study.

“By pinpointing where mobility gets ‘stuck’, we provide a new lens to understand – and potentially improve – the adaptability of labour markets,” explains Max Knicker of the EconophysiX lab, who led this new research effort.

Knicker and colleagues borrowed a concept from statistical physics known as the “fitness and complexity” framework, which is used to study the structure of economies and ecosystems. In their work, the researchers treated occupations as nodes in a network and analysed real transition data – that is, how workers actually moved between different jobs in France from 2012 to 2020. The data came from official sources and were provided by the National Institute of Statistics and Economic Studies through the Secure Data Access Center (CASD).

“In total, we had access to information on about 30 million workers and employers in France, whom we tracked over a 10-year period,” explains Knicker. “We also worked with high-resolution administrative data from INSEE (the French National Institute of Statistics), and specifically the BTS-Postes (Base Tous Salariés-Postes).”

Two key metrics

The researchers assigned a score to each occupation and developed two key metrics. These were: accessibility, which measures how many different jobs “feed into” a given occupation; and transferability, which measures how many different jobs someone can move to from that occupation.
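As a rough illustration of how such metrics can be computed from a transition probability matrix, the sketch below scores each occupation by the effective number of occupations flowing in (accessibility) and out (transferability), using Shannon diversity. This is an illustrative simplification under invented numbers, not the authors’ fitness-and-complexity algorithm.

```python
import numpy as np

def shannon_diversity(p):
    """Effective number of options implied by a probability vector p."""
    p = p[p > 0]
    return np.exp(-np.sum(p * np.log(p)))

# Toy transition matrix: T[i, j] = P(worker moves from occupation i to j).
# Rows are normalized over destinations (staying put is excluded for simplicity).
T = np.array([
    [0.0, 0.5, 0.5, 0.0],
    [0.1, 0.0, 0.8, 0.1],
    [0.2, 0.2, 0.0, 0.6],
    [0.3, 0.3, 0.4, 0.0],
])

# Transferability: how many distinct occupations one can move *to* (row-wise).
transferability = np.array([shannon_diversity(row) for row in T])

# Accessibility: how many distinct occupations feed *into* each one
# (column-wise, after normalizing each column into a distribution over origins).
cols = T / T.sum(axis=0, keepdims=True)
accessibility = np.array([shannon_diversity(cols[:, j]) for j in range(T.shape[1])])

# A "condenser" occupation is then one with high accessibility but low transferability.
for j, (a, t) in enumerate(zip(accessibility, transferability)):
    print(f"occupation {j}: accessibility={a:.2f}, transferability={t:.2f}")
```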

By studying the network of job flows with these metrics, they observed hidden patterns and constraints in occupational mobility and identified four main clusters, or categories, of jobs. The first are defined as “diffuser” occupations and have high transferability but low accessibility. “These require specific training to enter, but that training allows for transitions to many other areas,” explains Knicker. “This means they are more difficult to get into, but offer a wide range of exit opportunities.”

The second group are called “channel” occupations. These are both hard to enter and offer few onward transitions, he says. “They often involve highly specialized skills, such as specific types of machine operation.”

The third class are “hubs” and are both widely accessible and highly transferable – so much so that they act as central nodes in the transition network. “This class includes jobs like retail sellers, which require a broad, yet not highly specialized skill set,” says Knicker.

The fourth and last category is the most common type and dubbed “condenser” occupations. “Workers from many different backgrounds can easily enter these, but they can’t easily get out afterwards,” explains Knicker. “Examples of such jobs include caregiving roles.”

A valuable tool for policymakers

The researchers explain that they undertook their study to answer a broader question: why do some economies adapt quickly to shocks while others struggle? “Despite increasing attention to issues like automation or the green transition, we still lacked tools to diagnose where worker mobility breaks down,” says Knicker. “A key challenge was dealing with the sheer complexity and size of the labour flow data – we analysed over 250 million person–year observations. Another was interpreting the results in a meaningful, policy-relevant way, since the transition network is shaped by many intertwined factors like skill compatibility, employer preferences and worker choices.”

The new framework could become a valuable tool for policymakers seeking to make labour markets more responsive, he tells Physics World. “For example, by identifying specific occupations that function as bottlenecks, we can better target reskilling efforts or job transition programmes. It also suggests that simply increasing training isn’t enough – what matters is where people are coming from and where they can go next.”

The researchers also showed that the structure of job transitions itself can limit mobility. Over time, this could inform the design of more strategic labour interventions, especially in the face of structural shocks like AI-driven job displacement, states Knicker.

Looking forward, the École Polytechnique team plans to extend its approach by studying how the career paths of individual workers evolve over time. This, says Knicker, will be done using panel data, not just year-to-year snapshots as in the present analysis. He and his colleagues are also interested in linking their metrics to wage dynamics – for example, does low transferability make workers more vulnerable to exploitation or wage stagnation? “Finally, we hope to explore whether similar bottleneck structures exist in other countries, which could reveal whether these patterns are universal or country-specific.”

Full details of the analysis are reported in the Journal of Statistical Mechanics: Theory and Experiment.

Liquid carbon reveals its secrets

Thanks to new experiments using the DIPOLE 100-X high-performance laser at the European X-ray Free Electron Laser (XFEL), an international collaboration of physicists has obtained the first detailed view of the microstructure of carbon in its liquid state. The work will help refine models of liquid carbon, enabling important insights into the role that it plays in the interior of ice giant planets like Uranus and Neptune, where liquid carbon exists in abundance. It could also inform the choice of ablator materials in future technologies such as nuclear fusion.

Carbon is one of the most abundant elements on Earth, and indeed in the universe, but we still know very little about how it behaves in its liquid state. This is because producing liquid carbon is extremely difficult: at ambient pressure it sublimes rather than melts, and the liquid phase requires pressures of at least several hundred atmospheres to form. What is more, carbon boasts the highest melting temperature (roughly 4500 °C) of all known materials under these high-pressure conditions, which means that no substance can contain it for long enough for it to be studied and characterized.

In situ probing of laser-compressed carbon

There is an alternative, though, which involves using X-ray free electron laser pulses – such as those produced at the European XFEL – to transform solid carbon into a liquid for a few nanoseconds. The next challenge is to make measurements during this very short period of time. But this is exactly what a team led by Dominik Kraus of the University of Rostock and the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) has succeeded in doing.

In their work, Kraus and colleagues transiently created liquid carbon by driving strong compression waves into solid carbon samples using the pulsed high-energy laser DIPOLE 100-X, which is a new experimental platform at the European XFEL. In this way, the researchers were able to achieve pressures exceeding one million atmospheres, with the compression waves simultaneously heating the samples to around 7000 K to form liquid carbon. They then obtained in situ snapshots of the structure using ultrabright X-ray pulses at the European XFEL that lasted just 25 fs – that is, about 100,000 times shorter than the already very short lifetime of the liquid carbon samples.

Relevance to planetary interiors and inertial fusion

Studying liquid carbon is important for modelling the interior of planets such as the ice giants Neptune and Uranus, as well as the atmosphere of white dwarfs, in which it also exists, explains Kraus. The insights gleaned from the team’s experiments will help to clarify the role that liquid carbon plays in the ice giants and perhaps even comparable carbon-rich exoplanets.

Liquid carbon also forms as a transient state in some technical processes, such as the synthesis of carbon-based materials like carbon nanotubes, nanodiamonds or “Q-carbon”, and it may be key to the synthesis of new carbon materials, such as the long-sought (but still only predicted) “BC-8” structure. The team’s findings could also help inform the choice of materials for inertial fusion implosions aiming for clean and reliable energy production, where carbon is used as an ablator material.

“Because of its relevance in these areas, I had already tried to study liquid carbon during my doctoral work more than 10 years ago,” Kraus says. “Without an XFEL for characterization, I could only obtain a tiny hint of the liquid structure of carbon (and with large error bars) and was barely able to refine any existing models.”

Until now, however, that early work was considered the best attempt to characterize the structure of liquid carbon at Mbar pressures, he tells Physics World. “Using the XFEL as a characterization tool and the subsequent analysis was incredibly simple in comparison to all the previous work and, in the end, the most important challenge was to get the European XFEL facility ready – something that I had already discussed more than 10 years ago too when the first plans were being made for studying matter under extreme conditions at such an installation.”

The results of the new study, which is detailed in Nature, prove that simple models cannot describe the liquid state of carbon very well, and that sophisticated atomistic simulations are required for predicting processes involving this material, he says.

Looking forward, the Rostock University and HZDR researchers now plan to extend their methodology to the liquid states of various other materials. “In particular, we will study mixtures of light elements that may exist in planetary interiors and the resulting chemistry at extreme conditions,” reveals Kraus. “This work may also be interesting for forming doped nanodiamonds or other phases with potential technological applications.”

Short-lived eclipsing binary pulsar spotted in Milky Way

Astronomers in China have observed a pulsar that becomes partially eclipsed by an orbiting companion star every few hours. This type of observation is very rare and could shed new light on how binary star systems evolve.

While most stars in our galaxy exist in pairs, the way these binary systems form and evolve is still little understood. According to current theories, when two stars orbit each other, one of them may expand so much that its atmosphere becomes large enough to encompass the other. During this “common envelope” phase, mass can be transferred from one star to the other, causing the stars’ orbit to shrink over a period of around 1000 years. After this, the stars either merge or the envelope is ejected.

In the special case where one star in the pair is a neutron star, the envelope-ejection scenario should, in theory, produce a helium star that has been “stripped” of much of its material and a “recycled” millisecond pulsar – that is, a rapidly spinning neutron star that flashes radio pulses hundreds of times per second. In this type of binary system, the helium star can periodically eclipse the pulsar as it orbits around it, blocking its radio pulses and preventing us from detecting them here on Earth. Only a few examples of such a binary system have ever been observed, however, and all previous ones were in nearby dwarf galaxies called the Magellanic Clouds, rather than our own Milky Way.

A special pulsar

Astronomers led by Jinlin Han from the National Astronomical Observatories of China say they have now identified the first system of this type in the Milky Way. The pulsar in the binary, denoted PSR J1928+1815, had been previously identified using the Five-hundred-meter Aperture Spherical radio Telescope (FAST) during the FAST Galactic Plane Pulsar Snapshot survey. These observations showed that PSR J1928+1815 has a spin period of 10.55 ms, which is relatively short for a pulsar of this type and suggests it had recently sped up by accreting mass from a companion.

The researchers used FAST to observe this suspected binary system at radio frequencies ranging from 1.0 to 1.5 GHz over a period of four and a half years. They fitted the times that the radio pulses arrived at the telescope with a binary orbit model to show that the system has an eccentricity of less than 3 × 10⁻⁵. This suggests that the pulsar and its companion star are in a nearly circular orbit. The diameter of this orbit, Han points out, is smaller than that of our own Sun, and its period – that is, the time it takes the two stars to circle each other – is correspondingly short, at 3.6 hours. For a sixth of this time, the companion star blocks the pulsar’s radio signals.
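As a consistency check (using illustrative masses, since the excerpt does not quote them – say a 1.4 solar-mass neutron star and a roughly one solar-mass companion), Kepler’s third law gives the orbital separation for a 3.6 h (≈1.3 × 10⁴ s) period:

\[ a = \left[ \frac{G\,(M_1 + M_2)\,P^2}{4\pi^2} \right]^{1/3} \approx \left[ \frac{(6.67\times10^{-11})\,(2.4 \times 1.99\times10^{30})\,(1.3\times10^{4})^{2}}{4\pi^2} \right]^{1/3} \approx 1.1\times10^{9}\ \mathrm{m}, \]

comfortably smaller than the Sun’s diameter of about 1.39 × 10⁹ m, as Han notes.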

The team also found that the rate at which the pulsar’s spin period is changing (the so-called spin period derivative) is unusually high for a millisecond pulsar, at 3.63 × 10⁻¹⁸ s s⁻¹. This shows that energy is rapidly being lost from the system as the pulsar spins down.
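That spin-down rate translates into an energy-loss rate via the standard expression for the loss of rotational kinetic energy (assuming the canonical neutron-star moment of inertia I ≈ 10⁴⁵ g cm², a value not given in the excerpt):

\[ \dot{E} = \frac{4\pi^{2} I \dot{P}}{P^{3}} \approx \frac{4\pi^{2}\,(10^{45}\ \mathrm{g\,cm^{2}})\,(3.63\times10^{-18})}{(1.055\times10^{-2}\ \mathrm{s})^{3}} \approx 1.2\times10^{35}\ \mathrm{erg\,s^{-1}}. \]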

“We knew that PSR J1928+1815 was special from November 2021 onwards,” says Han. “Once we’d accumulated data with FAST, one of my students, ZongLin Yang, studied the evolution of such binaries in general and completed the timing calculations from the data we had obtained for this system. His results suggested the existence of the helium star companion and everything then fell into place.”

Short-lived phenomenon

This is the first time a short-lived (10⁷ years) binary consisting of a neutron star and a helium star has ever been detected, Han tells Physics World. “It is a product of the common envelope evolution that lasted for only 1000 years and that we couldn’t observe directly,” he says.

“Our new observation is the smoking gun for long-standing binary star evolution theories, such as those that describe how stars exchange mass and shrink their orbits, how the neutron star spins up by accreting matter from its companion and how the shared hydrogen envelope is ejected.”

The system could help astronomers study how neutron stars accrete matter and then cool down, he adds. “The binary detected in this work will evolve to become a system of two compact stars that will eventually merge and become a future source of gravitational waves.”

Full details of the study are reported in Science.

Handheld device captures airborne signs of disease

A sensitive new portable device can detect gas molecules associated with certain diseases by condensing dilute airborne biomarkers into concentrated liquid droplets. According to its developers at the University of Chicago in the US, the device could be used to detect airborne viruses or bacteria in hospitals and other public places, improve neonatal care, and even allow diabetic patients to read glucose levels in their breath, to list just three examples.

Many disease biomarkers are only found in breath or ambient air at levels of a few parts per trillion. This makes them very difficult to detect compared with biomarkers in biofluids such as blood, saliva or mucus, where they are much more concentrated. Traditionally, reaching a high enough sensitivity required bulky and expensive equipment such as mass spectrometers, which are impractical for everyday environments.

Rapid and sensitive identification

Researchers led by biophysicist and materials chemist Bozhi Tian have now developed a highly portable alternative. Their new Airborne Biomarker Localization Engine (ABLE) can detect both non-volatile and volatile molecules in air in around 15 minutes.

This handheld device comprises a cooled condenser surface, an air pump and microfluidic enrichment modules, and it works in the following way. First, air that (potentially) contains biomarkers flows into a cooled chamber. Within this chamber, Tian explains, the supersaturated moisture condenses onto nanostructured superhydrophobic surfaces and forms droplets. Any particles in the air thus become suspended inside the droplets, which means they can be analysed using conventional liquid-phase biosensors such as colorimetric test strips or electrochemical probes. This allows them to be identified rapidly with high sensitivity.

Tiny babies and a big idea

Tian says the inspiration for this study, which is detailed in Nature Chemical Engineering, came from a visit he made to a neonatal intensive care unit (NICU) in 2021. “Here, I observed the vulnerability and fragility of preterm infants and realized how important non-invasive monitoring is for them,” Tian explains.

“My colleagues and I envisioned a contact-free system capable of detecting disease-related molecules in air. Our biggest challenge was sensitivity and initial trials failed to detect key chemicals,” he remembers. “We overcame this problem by developing a new enrichment strategy using nanostructured condensation and molecular sieves while also exploiting evaporation physics to stabilize and concentrate the captured biomarkers.”

The technology opens new avenues for non-contact, point-of-care diagnostics, he tells Physics World. Possible near-term applications include the early detection of ailments such as inflammatory bowel disease (IBD), which can lead to markers of inflammation appearing in patients’ breath. Respiratory disorders and neurodevelopment conditions in babies could be detected in a similar way. Tian suggests the device could even be used for mental health monitoring via volatile stress biomarkers (again found in breath) and for monitoring air quality in public spaces such as schools and hospitals.

“Thanks to its high sensitivity and low cost (of around $200), ABLE could democratize biomarker sensing, moving diagnostics beyond the laboratory and into homes, clinics and underserved areas, allowing for a new paradigm in preventative and personalized medicine,” he says.

Widespread applications driven by novel physics

The University of Chicago scientists’ next goal is to further miniaturize and optimize the ABLE device. They are especially interested in enhancing its sensitivity and energy efficiency, as well as exploring the possibility of real-time feedback through closed-loop integration with wearable sensors. “We also plan to extend its applications to infectious disease surveillance and food spoilage detection,” Tian reveals.

The researchers are currently collaborating with health professionals to test ABLE in real-world settings such as NICUs and outpatient clinics. In the future, though, they also hope to explore novel physical processes that might improve the efficiency at which devices like these can capture hydrophobic or nonpolar airborne molecules.

According to Tian, the work has unveiled “unexpected evaporation physics” in dilute droplets with multiple components. Notably, they have seen evidence that such droplets defy the limit set by Henry’s law, which states that at constant temperature, the amount of a gas that dissolves in a liquid of a given type and volume is directly proportional to the partial pressure of the gas in equilibrium with the liquid. “This opens a new physical framework for such condensation-driven sensing and lays the foundation for widespread applications in the non-contact diagnostics, environmental monitoring and public health applications mentioned,” Tian says.
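In its simplest form, Henry’s law can be written as

\[ c = k_{\mathrm{H}}\, p, \]

where c is the equilibrium concentration of the dissolved gas, p is its partial pressure above the liquid and k_H is the temperature-dependent Henry’s law constant – the linear limit that the team’s multi-component droplets appear to exceed.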

Worm slime could inspire recyclable polymer design

The animal world – including some of its ickiest parts – never ceases to amaze. According to researchers in Canada and Singapore, velvet worm slime contains an ingredient that could revolutionize the design of high-performance polymers, making them far more sustainable than current versions.

“We have been investigating velvet worm slime as a model system for inspiring new adhesives and recyclable plastics because of its ability to reversibly form strong fibres,” explains Matthew Harrington, the McGill University chemist who co-led the research with Ali Miserez of Nanyang Technological University (NTU). “We needed to understand the mechanism that drives this reversible fibre formation, and we discovered a hitherto unknown feature of the proteins in the slime that might provide a very important clue in this context.”

The velvet worm (phylum Onychophora) is a small, caterpillar-like creature that lives in humid forests. Although several organisms, including spiders and mussels, produce protein-based slimy material outside their bodies, the slime of the velvet worm is unique. Produced from specialized papillae on each side of the worm’s head, and squirted out in jets whenever the worm needs to capture prey or defend itself, it quickly transforms from a sticky, viscoelastic gel into stiff, glassy fibres as strong as nylon.

When dissolved in water, these stiff fibres return to their biomolecular precursors. Remarkably, new fibres can then be drawn from the solution – implying that the instructions for fibre self-assembly are “encoded” within the precursors themselves, Harrington says.

High-molecular-weight protein identified

Previously, the molecular mechanisms behind this reversibility were little understood. In the present study, however, the researchers used protein sequencing and the AI-guided protein structure prediction algorithm AlphaFold to identify a specific high-molecular-weight protein in the slime. Built from leucine-rich repeats, this protein has a structure similar to that of a cell-surface receptor protein called a Toll-like receptor (TLR).

In biology, Miserez explains, this type of receptor is involved in immune system response. It also plays a role in embryonic or neural development. In the worm slime, however, that’s not the case.

“We have now unveiled a very different role for TLR proteins,” says Miserez, who works in NTU’s materials science and engineering department. “They play a structural, mechanical role and can be seen as a kind of ‘glue protein’ at the molecular level that brings together many other slime proteins to form the macroscopic fibres.”

Miserez adds that the team found this same protein in different species of velvet worms that diverged from a common ancestor nearly 400 million years ago. “This means that this different biological function is very ancient from an evolutionary perspective,” he explains.

“It was very unusual to find such a protein in the context of a biological material,” Harrington adds. “By predicting the protein’s structure and its ability to bind to other slime proteins, we were able to hypothesize its important role in the reversible fibre formation behaviour of the slime.”

The team’s hypothesis is that the reversibility of fibre formation is based on receptor-ligand interactions between several slime proteins. While Harrington acknowledges that much work remains to be done to verify this, he notes that such binding is a well-described principle in many groups of organisms, including bacteria, plants and animals. It is also crucial for cell adhesion, development and innate immunity. “If we can confirm this, it could provide inspiration for making high-performance non-toxic (bio)polymeric materials that are also recyclable,” he tells Physics World.

The study, which is detailed in PNAS, was mainly based on computational modelling and protein structure prediction. The next step, say the McGill researchers, is to purify or recombinantly express the proteins of interest and test their interactions in vitro.

Sound waves control droplet movement in microfluidic processor

Thanks to a new sound-based control system, a microfluidic processor can precisely manipulate droplets with an exceptionally broad range of volumes. The minimalist device is compatible with many substrates, including metals, polymers and glass. It is also biocompatible, and its developers at the Hong Kong Polytechnic University say it could be a transformative tool for applications in biology, chemistry and lab-on-a-chip systems.

Nano- and microfluidic systems use the principles of micro- and nanotechnology, biochemistry, engineering and physics to manipulate the behaviour of liquids on a small scale. Over the past few decades, they have revolutionized fluid processing, enabling researchers in a host of fields to perform tasks on chips that would previously have required painstaking test-tube-based work. The benefits include real-time, high-throughput testing for point-of care diagnostics using tiny sample sizes.

Microfluidics also play a role in several everyday technologies, including inkjet printer heads, pregnancy tests and, as the world recently discovered, tests for viruses like SARS-CoV-2, which causes COVID-19. Indeed, the latter example involves a whole series of fluidic operations, as viral RNA is extracted from swabs, amplified and quantified using the polymerase chain reaction (PCR).

In each of these operations, it is vital to avoid contaminating the sample with other fluids. Researchers have therefore been striving to develop contactless techniques – for instance, those that rely on light, heat or magnetic and electric fields to move the fluids around. However, such approaches often require strong fields or high temperatures that can damage delicate chemical or biological samples.

In recent years, scientists have experimented with using acoustic fields instead. However, this method was previously found to work only for certain types of fluids, and with a limited volume range from hundreds of nanolitres (nL) to tens of microlitres (μL).

Versatile, residue-free fluid control

The new sound-controlled fluidic processor (SFP) developed by Liqiu Wang and colleagues is not bound by this limit. Thanks to an ultrasonic transducer and a liquid-infused slippery surface that minimizes adhesion of the samples, it can manipulate droplets with volumes of between 1 nL and 3000 μL. “By adjusting the sound source’s position, we can shape acoustic pressure fields to push, pull, mix or even split droplets on demand,” explains Wang. “This method ensures versatile, residue-free fluid control.”

The technique’s non-invasive nature and precision make it ideal for point-of-care diagnostics, drug screening and automated biochemical assays, Wang adds. “It could also help streamline reagent delivery in high-throughput systems,” he tells Physics World.

A further use, Wang suggests, would be fundamental biological applications such as organoid research. Indeed, the Hong Kong researchers demonstrated this by culturing mouse primary liver organoids and screening for molecules like verapamil, a drug that can protect the liver by preventing harmful calcium buildup.

Wang and colleagues, who report their work in Science Advances, say they now plan to integrate their sound-controlled fluidic processor into fully automated, programmable lab-on-a-chip systems. “Future steps include miniaturization and incorporating multiple acoustic sources for parallel operations, paving the way for next-generation diagnostics and chemical processing,” Wang reveals.

Quantum physics guides proton motion in biological systems

If you dig deep enough, you’ll find that most biochemical and physiological processes rely on shuttling hydrogen ions – protons – around living systems. Until recently, this proton transfer process was thought to occur when protons jump from water molecule to water molecule and between chains of amino acids. In 2023, however, researchers suggested that protons might, in fact, transfer at the same time as electrons. Scientists in Israel have now confirmed this is indeed the case, while also showing that proton movement is linked to the electrons’ spin, or magnetic moment. Since the properties of electron spin are defined by quantum mechanics, the new findings imply that essential life processes are intrinsically quantum in nature.

The scientists obtained this result by placing crystals of lysozyme – an enzyme commonly found in living organisms – on a magnetic substrate. Depending on the direction of the substrate’s magnetization, the spin of the electrons ejected from this substrate may be up or down. Once the electrons are ejected from the substrate, they enter the lysozymes. There, they become coupled to phonons, or vibrations of the crystal lattice.

Crucially, this coupling is not random. Instead, the chirality, or “handedness”, of the phonons determines which electron spin they will couple with – a property known as chiral-induced spin selectivity.

Excited chiral phonons mediate electron spin coupling

When the scientists turned their attention to proton transfer through the lysozymes, they discovered that the protons moved much more slowly with one magnetization direction than they did with the opposite. This connection between proton transfer and spin-selective electron transfer did not surprise Yossi Paltiel, who co-led the study with his Hebrew University of Jerusalem (HUJI) colleagues Naama Goren, Nir Keren and Oded Livnah in collaboration with Nurit Ashkenazy of Ben Gurion University and Ron Naaman of the Weizmann Institute.

“Proton transfer in living organisms occurs in a chiral environment and is an essential process,” Paltiel says. “Since protons also have spin, it was logical for us to try to relate proton transfer to electron spin in this work.”

The finding could shed light on proton hopping in biological environments, Paltiel tells Physics World. “It may ultimately help us understand how information and energy are transferred inside living cells, and perhaps even allow us to control this transfer in the future.

“The results also emphasize the role of chirality in biological processes,” he adds, “and show how quantum physics and biochemistry are fundamentally related.”

The HUJI team now plans to study how the coupling between the proton transfer process and the transfer of spin polarized electrons depends on specific biological environments. “We also want to find out to what extent the coupling affects the activity of cells,” Paltiel says.

Their present study is detailed in PNAS.

Laboratory-scale three-dimensional X-ray diffraction makes its debut

Trips to synchrotron facilities could become a thing of the past for some researchers thanks to a new laboratory-scale three-dimensional X-ray diffraction microscope designed by a team from the University of Michigan, US. The device, which is the first of its kind, uses a liquid-metal-jet anode to produce high-energy X-rays and can probe almost everything a traditional synchrotron can. It could therefore give a wider community of academic and industrial researchers access to synchrotron-style capabilities.

Synchrotrons are high-energy particle accelerators that produce bright, high-quality beams of coherent electromagnetic radiation at wavelengths ranging from the infrared to soft X-rays. To do this, they use powerful magnets to accelerate electrons in a storage ring, taking advantage of the fact that accelerated electrons emit electromagnetic radiation.

One application for this synchrotron radiation is a technique called three-dimensional X-ray diffraction (3DXRD) microscopy. This powerful technique enables scientists to study the mechanical behaviour of polycrystalline materials, and it works by constructing three-dimensional images of a sample from X-ray images taken at multiple angles, much as a CT scan images the human body. Instead of the imaging device rotating around a patient, however, it is the sample that rotates in the focus of the powerful X-ray beam.

At present, 3DXRD can only be performed at synchrotrons. These are national and international facilities, and scientists must apply for beamtime months or even years in advance. If successful, they receive a block of time lasting six days at the most, during which they must complete all their experiments.

A liquid-metal-jet anode

Previous attempts to make 3DXRD more accessible by downscaling it have largely been unsuccessful. In particular, efforts to produce high-energy X-rays with conventional X-ray tube anodes have foundered because these anodes are traditionally made of solid metal, which cannot withstand the extremely high electron-beam power needed to produce X-rays.

The new lab-scale device developed by mechanical engineer Ashley Bucsek and colleagues overcomes this problem thanks to a liquid-metal-jet anode that can absorb more power and therefore produce a greater number of X-ray photons per unit anode surface area. The sample volume is illuminated by a monochromatic box- or line-focused X-ray beam while diffraction patterns are serially recorded as the sample rotates full circle. “The technique is capable of measuring the volume, position, orientation and strain of thousands of polycrystalline grains simultaneously,” Bucsek says.
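To get a feel for the diffraction geometry, the short sketch below computes the Bragg angle for a high-energy laboratory X-ray line. The numbers are assumptions for illustration only – roughly the 24.2 keV indium Kα line of a liquid-metal-jet source and the (0002) plane spacing of alpha-titanium – not parameters reported by the Michigan team.

```python
import numpy as np

HC_KEV_ANGSTROM = 12.398  # hc in keV*angstrom, converts photon energy to wavelength

def bragg_angle_deg(d_angstrom, energy_kev):
    """Return the Bragg angle theta (degrees) from lambda = 2 d sin(theta)."""
    wavelength = HC_KEV_ANGSTROM / energy_kev
    s = wavelength / (2 * d_angstrom)
    if s > 1:
        raise ValueError("reflection unreachable at this photon energy")
    return np.degrees(np.arcsin(s))

energy = 24.2  # keV -- assumed In K-alpha line of a liquid-metal-jet source
d = 2.34       # angstrom -- approximate alpha-Ti (0002) interplanar spacing

theta = bragg_angle_deg(d, energy)
print(f"theta = {theta:.2f} deg, scattering angle 2*theta = {2*theta:.2f} deg")
# High photon energies give small scattering angles, so a single flat detector
# behind the rotating sample can capture many grains' diffraction spots at once.
```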

When members of the Michigan team tested the device by imaging titanium alloy samples, they found it was as accurate as synchrotron-based 3DXRD, making it a practical alternative. “I conducted my PhD doing 3DXRD experiments at synchrotron user facilities, so having full-time access to a personal 3DXRD microscope was always a dream,” Bucsek says. “My colleagues and I hope that the adaptation of this technology from the synchrotron to the laboratory scale will make it more accessible.”

The design for the device, which is described in Nature Communications, was developed in collaboration with a US-based instrumentation firm, PROTO Manufacturing. Bucsek says she is excited by the possibility that commercialization will make 3DXRD more “turn-key” and thus reduce the need for specialized knowledge in the field.

The Michigan researchers now hope to use their instrument to perform experiments that must be carried out over long periods of time. “Conducting such prolonged experiments at synchrotron user facilities would be difficult, if not impossible, due to the high demand, so, lab-3DXRD can fill a critical capability gap in this respect,” Bucsek tells Physics World.

Ancient woodworking technique inspires improved memristor

Researchers in China have adapted the interlocking structure of mortise-and-tenon joints – as used by woodworkers around the world since ancient times – to the design of nanoscale devices known as memristors. The new devices are far more uniform than previous such structures, and the researchers say they could be ideal for scientific computing applications.

The memory-resistor, or “memristor”, was described theoretically at the beginning of the 1970s, but the first practical version was not built until 2008. Unlike standard resistors, the resistance of a memristor changes depending on the current previously applied to it, hence the “memory” in its name. This means that a desired resistance can be programmed into the device and subsequently stored. Importantly, the remembered value of the resistive state persists even when the power is switched off.
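In Chua’s original formulation, the memristor is the circuit element linking charge q and flux φ; its resistance (memristance) depends on the whole history of the current through it:

\[ v(t) = M\!\big(q(t)\big)\, i(t), \qquad \frac{\mathrm{d}q}{\mathrm{d}t} = i(t), \qquad M(q) \equiv \frac{\mathrm{d}\varphi}{\mathrm{d}q}, \]

so when the current stops, q – and with it the programmed resistance – simply stays where it was.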

Thanks to numerous technical advances since 2008, memristors can now be integrated onto chips in large numbers. They are also capable of processing large amounts of data in parallel, meaning they could be ideal for emerging “in-memory” computing technologies that require calculations known as large-scale matrix-vector multiplications (MVMs). Many such calculations involve solving partial differential equations (PDEs), which are used to model complex behaviour in fields such as weather forecasting, fluid dynamics and astrophysics, to name but a few.
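To see why fast MVMs matter for PDEs, consider the toy Jacobi solver below for the one-dimensional Poisson equation −u″ = f: each iteration is exactly one matrix–vector multiplication, the operation a memristor crossbar performs in a single analogue step. This is a generic illustration of the principle, not the Nanjing group’s solver.

```python
import numpy as np

n = 64                  # interior grid points on (0, 1)
h = 1.0 / (n + 1)       # grid spacing
f = np.ones(n)          # source term: -u'' = 1 with u(0) = u(1) = 0

# Jacobi update matrix for the three-point Laplacian stencil:
# u_i <- (u_{i-1} + u_{i+1}) / 2 + (h^2 / 2) * f_i
A = (np.eye(n, k=1) + np.eye(n, k=-1)) / 2.0

u = np.zeros(n)
for _ in range(5000):
    u = A @ u + (h**2 / 2.0) * f   # one matrix-vector product per iteration

# Exact solution u(x) = x(1 - x)/2 has maximum 1/8 = 0.125
print("max of numerical solution:", u.max())
```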

One remaining hurdle, however, is that it is hard to make memristors with uniform characteristics. The electronic properties of devices containing multiple memristors can therefore vary considerably, which adversely affects the computational accuracy of large-scale arrays.

Inspiration from an ancient technique

Physicists co-led by Shi-Jun Liang and Feng Miao of Nanjing University’s School of Physics say they have now overcome this problem by designing a memristor that uses a mortise-tenon-shaped (MTS) architecture. Humans have been using these strong and stable structures in wooden furniture for thousands of years, with one of the earliest examples dating back to the Hemudu culture in China 7000 years ago.

Liang, Miao and colleagues created the mortise part of their structure by using plasma etching to create a hole within a nanosized layer of hexagonal boron nitride (h-BN). They then constructed a tenon in a top electrode made of tantalum (Ta) that precisely matches the mortise. This ensures that the electrode directly contacts the device’s switching layer (made from HfO₂) only in the designated region. A bottom electrode completes the device.

The new architecture ensures highly uniform switching within the designated mortise-and-tenon region, resulting in a localized path for electronic conduction. “The result is a memristor with exceptional fundamental properties across three key metrics,” Miao tells Physics World. “These are: high endurance (over more than 10⁹ cycles); long-term and stable memory retention (of over 10⁴ s); and a fast switching speed of around 4.2 ns.”

The cycle-to-cycle variation of the low-resistance state (LRS) is also reduced, from 30.3% for a traditional memristor to 2.5% for the MTS architecture, while that of the high-resistance state (HRS) falls from 62.4% to 27.2%.

To test their device, the researchers built a PDE solver with it. They found that their new MTS memristor could solve the Poisson equation five times faster than a conventional memristor based on HfO₂ without h-BN.

The new technique, which is detailed in Science Advances, is a promising strategy for developing high-uniformity memristors, and could pave the way for high-accuracy, energy-efficient scientific computing platforms, Liang claims. “We are now looking to develop large-scale integration of our MTS device and make a prototype system,” he says.

New contact lenses allow wearers to see in the near-infrared

A new contact lens enables humans to see near-infrared light without night vision goggles or other bulky equipment. The lens, which incorporates metallic nanoparticles that “upconvert” normally invisible wavelengths into visible ones, could have applications for rescue workers and others who would benefit from enhanced vision in conditions with poor visibility.

The infrared (IR) part of the electromagnetic spectrum encompasses light with wavelengths between 700 nm and 1 mm. Human eyes cannot normally detect these wavelengths because opsins, the light-sensitive protein molecules that allow us to see, do not have the required thermodynamic properties. This means we see only a small fraction of the electromagnetic spectrum, typically between 400‒700 nm.

While devices such as night vision goggles and infrared-visible converters can extend this range, they require external power sources. They also cannot distinguish between different wavelengths of IR light.

Photoreceptor-binding nanoparticles

In previous work, researchers led by neuroscientist Tian Xue of the University of Science and Technology of China (USTC) injected photoreceptor-binding nanoparticles into the retinas of mice. While this technique was effective, it is too invasive and risky for human volunteers. In the new study, therefore, Xue and colleagues integrated the nanoparticles into biocompatible polymeric materials similar to those used in standard soft contact lenses.

The nanoparticles in the lenses are made from Au/NaGdF₄:Yb³⁺,Er³⁺ and each has a diameter of approximately 45 nm. They work by capturing photons with lower energies (longer wavelengths) and re-emitting them as photons with higher energies (shorter wavelengths). This process is known as upconversion, and the emitted light is said to be anti-Stokes shifted.
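A quick energy budget shows why more than one infrared photon is needed per visible photon (the two-photon pathway assumed here is typical of Yb³⁺/Er³⁺ upconverters, though the excerpt does not spell it out). Using E = hc/λ with hc ≈ 1240 eV nm:

\[ E_{980\,\mathrm{nm}} \approx 1.27\ \mathrm{eV}, \qquad 2 \times 1.27\ \mathrm{eV} \approx 2.53\ \mathrm{eV} > E_{540\,\mathrm{nm}} \approx 2.30\ \mathrm{eV}, \]

so two absorbed 980 nm photons carry enough combined energy to emit one green 540 nm photon, with the small surplus lost non-radiatively.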

When the researchers tested the new upconverting contact lenses (UCLs) on mice, the rodents’ behaviour suggested they could sense IR wavelengths. For example, when given a choice between a dark box and an IR-illuminated one, the lens-wearing mice scurried into the dark box. In contrast, a control group of mice not wearing lenses showed no preference for one box over the other. The pupils of the lens-wearing mice also constricted when exposed to IR light, and brain imaging revealed that processing centres in their visual cortex were activated.

Flickering seen even with eyes closed

The team then moved on to human volunteers. “In humans, the near-infrared UCLs enabled participants to accurately detect flashing Morse code-like signals and perceive the incoming direction of near-infrared (NIR) light,” Xue says, referring to light at wavelengths between 800‒1600 nm. Counterintuitively, the flashing images appeared even clearer when the volunteers closed their eyes – probably because IR light is better than visible light at penetrating biological tissue such as eyelids. Importantly, Xue notes that wearing the lenses did not affect participants’ normal vision.

The team also developed a wearable system with built-in flat UCLs. This system allowed volunteers to distinguish between patterns such as horizontal and vertical lines; S and O shapes; and triangles and squares.

But Xue and colleagues did not stop there. By replacing the upconverting nanoparticles with trichromatic orthogonal ones, they succeeded in converting NIR light into three different spectral bands. For example, they converted infrared wavelengths of 808, 980 and 1532 nm into 540, 450 and 650 nm, respectively – wavelengths that humans perceive as green, blue and red.

“As well as allowing wearers to garner more detail within the infrared spectrum, this technology could also help colour-blind individuals see wavelengths they would otherwise be unable to detect by appropriately adjusting the absorption spectrum,” Xue tells Physics World.

According to the USTC researchers, who report their work in Cell, the devices could have several other applications. Apart from providing humans with night vision and offering an adaptation for colour blindness, the lenses could also give wearers better vision in foggy or dusty conditions.

At present, the devices only work with relatively bright IR emissions (the study used LEDs). However, the researchers hope to increase the photosensitivity of the nanoparticles so that lower levels of light can trigger the upconversion process.

‘Zombie’ volcano reveals its secrets

The first high-resolution images of Bolivia’s Uturuncu volcano have yielded unprecedented insights into whether this volcanic “zombie” is likely to erupt in the near future. The images were taken using a technique that combines seismology, rock physics and petrological analyses, and the scientists who developed it say it could apply to other volcanoes, too.

Volcanic eruptions occur when bubbles of gases such as SO₂ and CO₂ rise to the Earth’s surface through dikes and sills in the planet’s crust, bringing hot, molten rock known as magma with them. To evaluate the chances of this happening, researchers need to understand how much gas and melted rock have accumulated in the volcano’s shallow upper crust. This is not easy, however, as the structures that convey gas and magma to the surface are complex and mapping them is challenging with current technologies.

A zombie volcano

In the new work, a team led by Mike Kendall of the University of Oxford, UK and Haijiang Zhang from the University of Science and Technology of China (USTC) employed a combination of seismological and petrophysical analyses to create such a map for Uturuncu. Located in the Central Andes, this volcano formed in the Pleistocene epoch (around 2.58 million to 11,700 years ago) as the oceanic Nazca plate was forced beneath the South American continental plate. It is made up of around 50 km³ of homogeneous, porphyritic dacite lava flows that are between 62% and 67% silicon dioxide (SiO₂) by weight, and it sits atop the Altiplano–Puna magma body, which is the world’s largest body of partially-melted silicic rock.

Although Uturuncu has not erupted for nearly 250,000 years, it is not extinct. It regularly emits plumes of gas, and earthquakes are a frequent occurrence in the shallow crust beneath and around it. Previous geodetic studies also detected a 150-km-wide deformed region of rock centred around 3 km south-west of its summit. These signs of activity, coupled with Uturuncu’s lack of a geologically recent eruption, have led some scientists to describe it as a “zombie”.

Movement of liquid and gas explains Uturuncu’s unrest

To tease out the reasons for Uturuncu’s semi-alive behaviour, the team turned to seismic tomography – a technique Kendall compares to medical imaging of a human body. The idea is to detect the seismic waves produced by earthquakes travelling through the Earth’s crust, analyse their arrival times, and use this information to create three-dimensional images of what lies beneath the surface of the structure being studied.

Writing in PNAS, Kendall and colleagues explain that they used seismic tomography to analyse signals from more than 1700 earthquakes in the region around Uturuncu. They performed this analysis in two ways. First, they assumed that seismic waves travel through the crust at the same speed regardless of their direction of propagation. This isotropic form of tomography gave them a first image of the region’s structure. In their second analysis, they took the directional dependence of the seismic waves’ speed into account. This anisotropic tomography gave them complementary information about the structure.
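The underlying observable in both analyses is the travel time of each seismic wave, which in the standard tomographic formulation is the path integral of slowness along the ray (the generic textbook relation, not a formula quoted from the paper):

\[ t = \int_{\mathrm{ray}} \frac{\mathrm{d}s}{v(\mathbf{r})}, \]

where v(r) is the local wave speed. Isotropic tomography inverts many such travel times for v(r); the anisotropic version additionally lets v depend on the direction of propagation.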

The researchers then combined their tomographic measurements with previous geophysical imaging results to construct rock physics models. These models contain information about the paths that hot migrating fluids and gases take as they migrate to the surface. In Uturuncu’s case, the models showed fluids and gases accumulating in shallow magma reservoirs directly below the volcano’s crater and down to a depth of around 5 km. This movement of liquid and gas explains Uturuncu’s unrest, the team say, but the good news is that it has a low probability of producing eruptions any time soon.

According to Kendall, the team’s methods should be applicable to more than 1400 other potentially active volcanoes around the world. “It could also be applied to identifying potential geothermal energy sites and for critical metal recovery in volcanic fluids,” he tells Physics World.

The evolution of the metre: How a product of the French Revolution became a mainstay of worldwide scientific collaboration

The 20th of May is World Metrology Day, and this year it was extra special because it was also the 150th anniversary of the treaty that established the metric system as the preferred international measurement standard. Known as the Metre Convention, the treaty was signed in 1875 in Paris, France by representatives of 17 nations, establishing the Bureau International des Poids et Mesures (BIPM) and making it one of the first truly international agreements. Though nations might come and go, the hope was that this treaty would endure “for all times and all peoples”.

To celebrate the treaty’s first century and a half, the BIPM and the United Nations Educational, Scientific and Cultural Organisation (UNESCO) held a joint symposium at the UNESCO headquarters in Paris. The event focused on the achievements of BIPM as well as the international scientific collaborations the Metre Convention enabled. It included talks from the Nobel prize-winning physicist William Phillips of the US National Institute of Standards and Technology (NIST) and the BIPM director Martin Milton, as well as panel discussions on the future of metrology featuring representatives of other national metrology institutes (NMIs) and metrology professionals from around the globe.

A long and revolutionary tradition

The history of metrology dates back to ancient times. As UNESCO’s Hu Shaofeng noted in his opening remarks, the Egyptians recognized the importance of precision measurements as long ago as the 21st century BCE.  Like other early schemes, the Egyptians’ system of measurement used parts of the human body as references, with units such as the fathom (the length of a pair of outstretched arms) and the foot. This was far from ideal since, as Phillips pointed out in his keynote address, people come in various shapes and sizes. These variations led to a profusion of units. By some estimates, pre-revolutionary France had a whopping 250,000 different measures, with differences arising not only between towns but also between professions.

The French Revolutionaries were determined to put an end to this mess. In 1795, just six years after the Revolution, the law of 18 Germinal An III (according to the new calendar of the French Republic) created a preliminary version of the world’s first metric system. The new system tied length and mass to natural standards (the metre was originally one-forty-millionth of the Paris meridian, while the kilogram was the mass of a cubic decimetre of water), and it became the standard for all of France in 1799. That same year, the system also became more practical, with units becoming linked, for the first time, to physical artefacts: a platinum metre and kilogram deposited in the French National Archives.

When the Metre Convention adopted this standard internationally 80 years later, it kick-started the construction of new length and mass standards. The new International Prototype of the Metre and International Prototype of the Kilogram were manufactured in 1879 and officially adopted as replacements for the Revolutionaries’ metre and kilogram in 1889, though they continued to be calibrated against the old prototypes held in the National Archives.

A short history of the BIPM

The BIPM itself was originally conceived as a means of reconciling France and Germany after the 1870–1871 Franco–Prussian War. At first, its primary roles were to care for the kilogram and metre prototypes and to calibrate the standards of its member states. In the opening decades of the 20th century, however, it extended its activities to cover other kinds of measurements, including those related to electricity, light and radiation. Then, from the 1960s onwards, it became increasingly interested in improving the definition of length, thanks to new interferometer technology that made it possible to measure distance at a precision rivalling that of the physical metre prototype.

Metre man: William Phillips giving the keynote address at the Metre Convention’s 150th anniversary symposium. (Courtesy: Isabelle Dumé)

It was around this time that the BIPM decided to replace its expanded metric system with a framework encompassing the entire field of metrology. This new framework consisted of six base units – the metre, kilogram, second, ampere, degree Kelvin (later simply the kelvin) and candela, with the mole added in 1971 – plus a set of “derived” units (the newton, hertz, joule and watt) built from the base ones. Thus was born the International System of Units, or SI after the French initials for Système International d’unités.

The next major step – a “brilliant choice”, in Phillips’ words – came in 1983, when the BIPM decided to redefine the metre in terms of the speed of light. From then on, the Bureau decreed, the metre would officially be the length travelled by light in vacuum during a time interval of 1/299,792,458 of a second.
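
The arithmetic of the definition is easy to verify, and it shows why the redefinition makes the speed of light exact by construction (a trivial check, shown for concreteness):

    c = 299_792_458         # speed of light in vacuum, m/s (exact by definition)
    t = 1 / 299_792_458     # the time interval in the 1983 definition, in seconds
    print(c * t)            # distance light travels in that interval: 1.0 m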

This decision set the stage for defining the rest of the seven base units in terms of natural fundamental constants. The most recent unit to join the club was the kilogram, which was defined in terms of the Planck constant, h, in 2019. In fact, the only base unit currently not defined in terms of a fundamental constant is the second, which is instead determined by the transition between the two hyperfine levels of the ground state of caesium-133. The international metrology community is, however, working to remedy this, with meetings being held on the subject in Versailles this month.

Measurement affects every aspect of our daily lives, and as the speakers at last week’s celebrations repeatedly reminded the audience, a unified system of measurement has long acted as a means of building trust across international and disciplinary borders. The Metre Convention’s survival for 150 years is proof that peaceful collaboration can triumph, and it has allowed humankind to advance in ways that would not have been possible without such unity. A lesson indeed for today’s troubled world.

The post The evolution of the metre: How a product of the French Revolution became a mainstay of worldwide scientific collaboration appeared first on Physics World.

  •  

Ultrasound-activated structures clear biofilms from medical implants

When implanted medical devices like urinary stents and catheters get clogged with biofilms, the usual solution is to take them out and replace them with new ones. Now, however, researchers at the University of Bern and ETH Zurich, Switzerland have developed an alternative. By incorporating ultrasound-activated moving structures into their prototype “stent-on-a-chip” device, they showed it is possible to remove biofilms without removing the device itself. If translated into clinical practice, the technology could increase the safe lifespan of implants, saving money and avoiding operations that are uncomfortable and sometimes hazardous for patients.

Biofilms are communities of bacterial cells that adhere to natural surfaces in the body as well as artificial structures such as catheters, stents and other implants. Because they are encapsulated by a protective, self-produced extracellular matrix made from polymeric substances, they are mechanically robust and resistant to standard antibacterial measures. If not removed, they can cause infections, obstructions and other complications.

Intense, steady flows push away impurities

The new technology, which was co-developed by Cornel Dillinger, Pedro Amado and other members of Francesco Clavica and Daniel Ahmed’s research teams, takes advantage of recent advances in the fields of robotics and microfluidics. Its main feature is a coating made from microscopic hair-like structures known as cilia. Under the influence of an acoustic field, which is applied externally via a piezoelectric transducer, these cilia begin to move. This movement produces intense, steady fluid flows with velocities of up to 10 mm/s – enough to break apart encrusted deposits (made from calcium carbonate, for example) and flush away biofilms from the inner and outer surfaces of implanted urological devices.

All fouled up: Typical examples of crystals known as encrustations that develop on the surfaces of urinary stents and catheters. (Courtesy: Pedro Amado and Shaokai Zheng)

“This is a major advance compared to existing stents and catheters, which require regular replacements to avoid obstruction and infections,” Clavica says.

The technology is also an improvement on previous efforts to clear implants by mechanical means, Ahmed adds. “Our polymeric cilia in fact amplify the effects of ultrasound by allowing for an effect known as acoustic streaming at frequencies of 20 to 100 kHz,” he explains. “This frequency is lower than that possible with previous microresonator devices, which were developed to work in a similar way but had to operate in the MHz frequency range.”

The lower frequency achieves the desired therapeutic effects while prioritizing patient safety and minimizing the risk of tissue damage, he adds.

Wider applications

In creating their technology, the researchers were inspired by biological cilia, which are a natural feature of physiological systems such as the reproductive and respiratory tracts and the central nervous system. Future versions, they say, could apply the ultrasound probe directly to a patient’s skin, much as handheld probes of ultrasound scanners are currently used for imaging. “This technology has potential applications beyond urology, including fields like visceral surgery and veterinary medicine, where keeping implanted medical devices clean is also essential,” Clavica says.

The researchers now plan to test new coatings that would reduce contact reactions (such as inflammation) in the body. They will also explore ways of improving the device’s responsiveness to ultrasound – for example by depositing thin metal layers. “These modifications could not only improve acoustic streaming performance but could also provide additional antibacterial benefits,” Clavica tells Physics World.

In the longer term, the team hope to translate their technology into clinical applications. Initial tests that used a custom-built ultrasonic probe coupled to artificial tissue have already demonstrated promising results in generating cilia-induced acoustic streaming, Clavica notes. “In vivo animal studies will then be critical to validate safety and efficacy prior to clinical adoption,” he says.

The present study is detailed in PNAS.

The post Ultrasound-activated structures clear biofilms from medical implants appeared first on Physics World.

  •  

Protons take to the road

Physicists at CERN have completed a “test run” for taking antimatter out of the laboratory and transporting it across the site of the European particle-physics facility. Although the test was carried out with ordinary protons, the team that performed it says that antiprotons could soon get the same treatment. The goal, they add, is to study antimatter in places other than the labs that create it, as this would enable more precise measurements of the differences between matter and antimatter. It could even help solve one of the biggest mysteries in physics: why does our universe appear to be made up almost entirely of matter, with only tiny amounts of antimatter?

According to the Standard Model of particle physics, each of the matter particles we see around us – from baryons like protons to leptons such as electrons – should have a corresponding antiparticle that is identical in every way apart from its charge and magnetic properties (which are reversed). This might sound straightforward, but it leads to a peculiar prediction. Under the Standard Model, the Big Bang that formed our universe nearly 14 billion years ago should have generated equal amounts of antimatter and matter. But if that were the case, there shouldn’t be any matter left, because whenever pairs of antimatter and matter particles collide, they annihilate each other in a burst of energy.

Physicists therefore suspect that there are other, more subtle differences between matter particles and their antimatter counterparts – differences that could explain why the former prevailed while the latter all but disappeared. By searching for these differences, they hope to shed more light on antimatter-matter asymmetry – and perhaps even reveal physics beyond the Standard Model.

Extremely precise measurements

At CERN’s Baryon-Antibaryon Symmetry Experiment (BASE), the search for matter-antimatter differences focuses on measuring the magnetic moments and charge-to-mass ratios of protons and antiprotons. These measurements need to be extremely precise, but this is difficult at CERN’s “Antimatter Factory” (AMF), which manufactures the necessary low-energy antiprotons in profusion. This is because essential nearby equipment – including the Antiproton Decelerator and ELENA, which reduce the energy of incoming antiprotons from GeV to MeV – produces magnetic field fluctuations that blur the signal.

To carry out more precise measurements, the team therefore needs a way of transporting the antiprotons to other, better-shielded, laboratories. This is easier said than done, because antimatter needs to be carefully isolated from its environment to prevent it from annihilating with the walls of its container or with ambient gas molecules.

The BASE team’s solution was to develop a device that can transport trapped antiprotons on a truck for substantial distances. It is this device, known as BASE-STEP (for Symmetry Tests in Experiments with Portable Antiprotons), that has now been field-tested for the first time.

Protons on the go

During the test, the team successfully transported a cloud of about 10⁵ trapped protons out of the AMF and across CERN’s Meyrin campus over a period of four hours. Although protons are not the same as antiprotons, BASE-STEP team leader Christian Smorra says they are just as sensitive to disturbances in their environment caused by, say, driving them around. “They are therefore ideal stand-ins for initial tests, because if we can transport protons, we should also be able to transport antiprotons,” he says.

The next step: BASE-STEP on a transfer trolley, watched over by BASE team members Fatma Abbass and Christian Smorra. (Photo: BASE/Maria Latacz)

The BASE-STEP device is mounted on an aluminium frame and measures 1.95 m x 0.85 m x 1.65 m. At 850‒900 kg, it is light enough to be transported using standard forklifts and cranes.

Like BASE, it traps particles in a Penning trap composed of gold-plated cylindrical electrode stacks made from oxygen-free copper. To further confine the protons and prevent them from colliding with the trap’s walls, this trap sits inside the bore of a superconducting magnet operated at cryogenic temperatures. The second electrode stack is also kept at an ultralow pressure of 10⁻¹⁹ bar, which Smorra says is low enough to keep antiparticles from annihilating with residual gas molecules. To transport antiprotons instead of protons, Smorra adds, they would just need to switch the polarity of the electrodes.

The transportable trap system, which is detailed in Nature, is designed to remain operational on the road. It uses a carbon-steel vacuum chamber to shield the particles from stray magnetic fields, and its frame can handle accelerations of up to 1g (9.81 m/s²) in all directions over and above the usual (vertical) force of gravity. This means it can travel up and down slopes with a gradient of up to 10%, or approximately 6°.
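
The two slope figures quoted above are mutually consistent, as a one-line check shows (the 1g margin itself is a specification of the frame, not something derived here):

    import math

    gradient = 0.10                            # a 10% slope: 1 m of rise per 10 m of run
    print(math.degrees(math.atan(gradient)))   # 5.71..., i.e. roughly 6 degrees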

Once the BASE-STEP device is re-configured to transport antiprotons, the first destination on the team’s list is a new Penning-trap system currently being constructed at the Heinrich Heine University in Düsseldorf, Germany. Here, physicists hope to search for charge-parity-time (CPT) violations in protons and antiprotons with a precision at least 100 times higher than is possible at CERN’s AMF.

“At BASE, we are currently performing measurements with a precision of 16 parts in a trillion,” explains BASE spokesperson Stefan Ulmer, an experimental physicist at Heinrich Heine and a researcher at CERN and Japan’s RIKEN laboratory. “These experiments are the most precise tests of matter/antimatter symmetry in the baryon sector to date, but to make these experiments better, we have no choice but to transport the particles out of CERN’s antimatter factory,” he tells Physics World.

The post Protons take to the road appeared first on Physics World.

  •  

Plasma physics sets upper limit on the strength of ‘dark electromagnetism’

Physicists have set a new upper bound on the interaction strength of dark matter by simulating the collision of two clouds of interstellar plasma. The result, from researchers at Ruhr University Bochum in Germany, CINECA in Italy and the Instituto Superior Tecnico in Portugal, could force a rethink on theories describing this mysterious substance, which is thought to make up more than 85% of the mass in the universe.

Since dark matter has only ever been observed through its effect on gravity, we know very little about what it’s made of. Indeed, various theories predict that dark matter particles could have masses ranging from around 10⁻²² eV to around 10¹⁹ GeV — a staggering 50 orders of magnitude.
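
That span is easy to verify once both masses are expressed in the same unit (a quick sanity check):

    import math

    m_low = 1e-22                        # lightest candidate mass, in eV
    m_high = 1e19 * 1e9                  # heaviest candidate: 1e19 GeV converted to eV
    print(math.log10(m_high / m_low))    # ~50 orders of magnitude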

Another major unknown about dark matter is whether it interacts via forces other than gravity, either with itself or with other particles. Some physicists have hypothesized that dark matter particles might possess positive and negative “dark charges” that interact with each other via “dark electromagnetic forces”. According to this supposition, dark matter could behave like a cold plasma of self-interacting particles.

Bullet Cluster experiment

In the new study, the team searched for evidence of dark interactions in a cluster of galaxies located several billion light years from Earth. This galactic grouping is known as the Bullet Cluster, and it contains a subcluster that is moving away from the main body after passing through it at high speed.

Since the most basic model of dark-matter interactions relies on the same equations as ordinary electromagnetism, the researchers chose to simulate these interactions in the Bullet Cluster system using the same computational tools they would use to describe electromagnetic interactions in a standard plasma. They then compared their results with real observations of the Bullet Cluster.

Interaction strength: constraints on the dark electromagnetic coupling constant α_D, as a function of the dark matter mass m_D, based on observations of the Bullet Cluster. α_D must lie below the blue, green and red regions; only the region at the bottom right of the plot is not excluded by the measurements. Dashed lines show the reference value used for the mass of 1 TeV. (Courtesy: K Schoefler et al., “Can plasma physics establish a significant bound on long-range dark matter interactions?” Phys Rev D 111 L071701, https://doi.org/10.1103/PhysRevD.111.L071701)

The new work builds on a previous study in which members of the same team simulated the collision of two clouds of standard plasma passing through one another. This study found that as the clouds merged, electromagnetic instabilities developed. These instabilities had the effect of redistributing energy from the opposing flows of the clouds, slowing them down while also broadening the temperature range within them.

Ruling out many of the simplest dark matter theories

The latest study showed that, as expected, the plasma components of the subcluster and main body slowed down thanks to ordinary electromagnetic interactions. That, however, appeared to be all that happened, as the data contained no sign of additional dark interactions. While the team’s finding doesn’t rule out dark electromagnetic interactions entirely, team member Kevin Schoeffler explains that it does mean that these interactions, which are characterized by a parameter known as α_D, must be far weaker than their ordinary-matter counterpart. “We can thus calculate an upper limit for the strength of this interaction,” he says.

This limit, which the team calculated as α_D < 4 × 10⁻²⁵ for a dark matter particle with a mass of 1 TeV, rules out many of the simplest dark matter theories and will require them to be rethought, Schoeffler says. “The calculations were made possible thanks to detailed discussions with scientists working outside of our speciality of physics, namely plasma physicists,” he tells Physics World. “Throughout this work, we had to overcome the challenge of connecting with very different fields and interacting with communities that speak an entirely different language to ours.”

As for future work, the physicists plan to compare the results of their simulations with other astronomical observations, with the aim of constraining the upper limit of the dark electromagnetic interaction even further. More advanced calculations, such as those that include finer details of the cloud models, would also help refine the limit. “These more realistic setups would include other plasma-like electromagnetic scenarios and ‘slowdown’ mechanisms, leading to potentially stronger limits,” Schoeffler says.

The present study is detailed in Physical Review D.

The post Plasma physics sets upper limit on the strength of ‘dark electromagnetism’ appeared first on Physics World.

  •  

Evidence for a superconducting gap emerges in hydrogen sulphides

Researchers in Germany report that they have directly measured a superconducting gap in a hydrogen sulphide material for the first time. The new finding represents “smoking gun” evidence for superconductivity in these materials, while also confirming that the electron pairing that causes it is mediated by phonons.

Superconductors are materials that conduct electricity without resistance. Many materials behave this way when cooled below a certain transition temperature Tc, but in most cases this temperature is very low. For example, solid mercury, the first superconductor to be discovered, has a Tc of 4.2 K. Superconductors that operate at higher temperatures – perhaps even at room temperature – are thus highly desirable, as an ambient-temperature superconductor would dramatically increase the efficiency of electrical generators and transmission lines.

The rise of the superhydrides

The 1980s and 1990s saw considerable progress towards this goal thanks to the discovery of high-temperature copper oxide superconductors, which have Tcs of between 30 and 133 K. Then, in 2015, the maximum known critical temperature rose even higher thanks to the discovery that a sulphide material, H3S, has a Tc of 203 K when compressed to pressures of 150 GPa.

This result sparked a flurry of interest in solid materials containing hydrogen atoms bonded to other elements. In 2019, the record was broken again, this time by lanthanum decahydride (LaH10), which was found to have a Tc of 250–260 K, again at very high pressures.

A further advance occurred in 2021 with the discovery of high-temperature superconductivity in cerium hydrides. These novel phases of CeH9 and another newly-synthesized material, CeH10, are remarkable in that they are stable and display high-temperature superconductivity at lower pressures (about 80 GPa, or 0.8 million atmospheres) than the other so-called “superhydrides”.

But how does it work?

One question left unanswered amid these advances concerned the mechanism for superhydride superconductivity. According to the Bardeen–Cooper–Schrieffer (BCS) theory of “conventional” superconductivity, superconductivity occurs when electrons overcome their mutual electrical repulsion to form pairs. These electron pairs, which are known as Cooper pairs, can then travel unhindered through the material as a supercurrent, without being scattered by phonons (quasiparticles arising from vibrations of the material’s crystal lattice) or by impurities.

Cooper pairing is characterized by a tell-tale energy gap near what’s known as the Fermi level, which is the highest energy level that electrons can occupy in a solid at a temperature of absolute zero. This gap corresponds to the energy required to break up a Cooper pair of electrons, and spotting it is regarded as unambiguous proof of a material’s superconducting nature.

For the superhydrides, however, this is easier said than done, because measuring such a gap requires instruments that can withstand the extremely high pressures required for superhydrides to exist and behave as superconductors. Traditional techniques such as scanning tunnelling spectroscopy or angle-resolved photoemission spectroscopy do not work, and there was little consensus on what might take their place.

Planar electron tunnelling spectroscopy

A team led by researchers at Germany’s Max Planck Institute for Chemistry has now stepped in by developing a form of spectroscopy that can operate under extreme pressures. The technique, known as planar electron tunnelling spectroscopy, required the researchers to synthesize highly pure planar tunnel junctions of H3S and its deuterated equivalent D3S under pressures of over 100 GPa. Using a technique called laser heating, they created junctions with three parts: a metal, tantalum; a barrier made of tantalum pentoxide, Ta2O5; and the H3S or D3S superconductors. By measuring the differential conductance across the junctions, they determined the density of electron states in H3S and D3S near the Fermi level.

These tunnelling spectra revealed that both H3S and D3S have fully open superconducting gaps of 60 meV and 44 meV respectively. According to team member Feng Du, the smaller gap in D3S confirms that the superconductivity in H3S comes about thanks to interactions between electrons and phonons – a finding that backs up long-standing predictions.
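
Read as the full gap 2Δ, these values sit close to the weak-coupling BCS estimate 2Δ ≈ 3.53 kB Tc – one way to see why the measurement supports phonon-mediated pairing. Below is a quick consistency check using the Tc of 203 K quoted above for H3S; the Tc of roughly 150 K assumed here for D3S is not given in this article:

    k_B = 8.617e-5                  # Boltzmann constant, eV/K

    def bcs_full_gap_meV(Tc):
        """Weak-coupling BCS estimate of the full gap, 2*Delta, in meV."""
        return 3.53 * k_B * Tc * 1e3

    print(bcs_full_gap_meV(203))    # ~61.8 meV, versus 60 meV measured for H3S
    print(bcs_full_gap_meV(150))    # ~45.6 meV, versus 44 meV for D3S (Tc assumed)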

The researchers hope their work, which they report on in Nature, will inspire more detailed studies of superhydrides. They now plan to measure the superconducting gap of other metal superhydrides and compare them with the covalent superhydrides they studied in this work. “The results from such experiments could help us understand the origin of the high Tc in these superconductors,” Du tells Physics World.

The post Evidence for a superconducting gap emerges in hydrogen sulphides appeared first on Physics World.

  •  

Neutron Airy beams make their debut

Physicists have succeeded in making neutrons travel in a curved parabolic waveform known as an Airy beam. This behaviour, which had previously been observed in photons and electrons but never in a non-elementary particle, could be exploited in fundamental quantum science research and in advanced imaging techniques for materials characterization and development.

In free space, beams of light propagate in straight lines. When they pass through an aperture, they diffract, becoming wider and less intense. Airy beams, however, are different. Named after the 19th-century British scientist George Biddell Airy, who developed the mathematics behind them while studying rainbows, they follow a parabola-shaped path – a property known as self-acceleration – and do not spread out as they travel. Airy beams are also “self-healing”, meaning that they reconstruct themselves after passing through an obstacle that blocked part of the beam.

Scientists have been especially interested in Airy beams since 1979, when theoretical work by the physicist Michael Berry suggested several possible applications for them, says Dmitry Pushin, a physicist at the Institute for Quantum Computing (IQC) and the University of Waterloo, Canada. Researchers created the first Airy beams from light in 2007, followed by an electron Airy beam in 2013.
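
The parabolic path can be made concrete with the ideal one-dimensional Airy wave packet from Berry’s 1979 work (with Balazs), whose transverse profile in dimensionless units is Ai(s − (ξ/2)²): as the propagation distance ξ grows, the main lobe shifts quadratically while keeping its shape. A minimal numerical sketch of that textbook profile – not a model of the neutron experiment – follows:

    import numpy as np
    from scipy.special import airy

    s = np.linspace(-10, 30, 4000)        # dimensionless transverse coordinate
    for xi in (0.0, 4.0, 8.0):            # dimensionless propagation distance
        Ai = airy(s - (xi / 2) ** 2)[0]   # airy() returns (Ai, Ai', Bi, Bi')
        peak = s[np.argmax(Ai ** 2)]      # position of the main intensity lobe
        print(f"xi = {xi}: main lobe at s = {peak:.2f}")  # tracks (xi/2)**2 - 1.02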

“Inspired by the unusual properties of these beams in optics and electron experiments, we wondered whether similar effects could be harnessed for neutrons,” Pushin says.

Making such beams out of neutrons turned out to be challenging, however. Because neutrons have no charge, they cannot be shaped by electric fields. Also, lenses that focus neutron beams do not exist.

A holographic approach

A team led by Pushin and Dusan Sarenac of the University at Buffalo’s Department of Physics in the US has now overcome these difficulties using a holographic approach based on a custom-microfabricated silicon diffraction grating. The team made this grating from an array of 6 250 000 micron-sized cubic phase patterns etched onto a silicon slab. “The grating modulates incoming neutrons into an Airy form and the resulting beam follows a curved trajectory, exhibiting the characteristics of a two-dimensional Airy profile at a neutron detector,” Sarenac explains.

According to Pushin, it took years of work to figure out the correct dimensions for the array. Once the design was optimized, however, fabricating it took just 48 hours at the IQC’s nanofabrication facility. “Developing a precise wave phase modulation method using holography and silicon microfabrication allowed us to overcome the difficulties in manipulating neutrons,” he says.

The researchers say the self-acceleration and self-healing properties of Airy beams could improve existing neutron imaging techniques (including neutron scattering and diffraction), potentially delivering sharper and more detailed images. The new beams might even allow for new types of neutron optics and could be particularly useful, for example, when targeting specific regions of a sample or navigating around structures.

Creating the neutron Airy beams required access to international neutron science facilities such as the US National Institute of Standards and Technology’s Center for Neutron Research; the US Department of Energy’s Oak Ridge National Laboratory; and the Paul Scherrer Institute in Villigen, Switzerland. To continue their studies, the researchers plan to use the UK’s ISIS Neutron and Muon Source to explore ways of combining neutron Airy beams with other structured neutron beams (such as helical waves of neutrons or neutron vortices). This could make it possible to investigate complex properties such as the chirality, or handedness, of materials. Such work could be useful in drug development and materials science. Since a material’s chirality affects how its electrons spin, it could be important for spintronics and quantum computing, too.

“We also aim to further optimize beam shaping for specific applications,” Sarenac tells Physics World. “Ultimately, we hope to establish a toolkit for advanced neutron optics that can be tailored for a wide range of scientific and industrial uses.”

The present work is detailed in Physical Review Letters.

The post Neutron Airy beams make their debut appeared first on Physics World.

  •  

Geophysicists pinpoint location of Yellowstone magma reservoir

The first clear images of Yellowstone’s shallowest magma reservoir have revealed its depth with unprecedented precision, providing information that could help scientists determine how dangerous it is. By pinpointing the reservoir’s location, geophysicists and seismologists from Rice University and the universities of Utah, New Mexico and Texas at Dallas hope to develop more accurate predictions of when this so-called “supervolcano” will erupt again.

Yellowstone is America’s oldest national park, and it owes its spectacular geysers and hot springs to its location above one of the world’s largest volcanoes. The last major eruption of the Yellowstone supervolcano happened around 630 000 years ago, and was violent enough to create a collapsed crater, or caldera, over 60 km across. Though it shows no sign of repeating this cataclysm anytime soon, it is still an active volcano, and it is slowly forming a new magma reservoir.

Previous estimates of the depth of this magma reservoir were highly imprecise, ranging from three to eight kilometres. Scientists also lacked an accurate location for the reservoir’s top and were unsure how its properties changed with increasing depth.

The latest results, from a team led by Brandon Schmandt and Chenglong Duan at Rice and Jamie Farrell at Utah, show that the reservoir’s top lies 3.8 km below the surface. They also show evidence of an abrupt downward transition into a mixture of gas bubbles and magma filling the pore space of volcanic rock. The gas bubbles are made mostly of H2O in supercritical form, while the magma comprises molten silicic rock such as rhyolite.

Creating artificial seismic waves

Duan and colleagues obtained their result by using a mechanical vibration source (a specialized truck built by the oil and gas firm Dawson Geophysical) to create artificial seismic waves across the ground beneath the northeast portion of Yellowstone’s caldera. They also deployed a network of hundreds of portable seismometers capable of recording both vertical and horizontal ground vibrations, spaced at 100 to 150-m intervals, across the national park. “Researchers already knew from previous seismic and geochemical studies that this region was underlain by magma, but we needed new field data and an innovative adaptation of conventional seismic imaging techniques,” explains Schmandt. The new study, he tells Physics World, is “a good example of how the same technologies are relevant to energy industry imaging and studies of natural hazards”.

Over a period of a few days, the researchers created artificial earthquakes at 110 different locations using 20 shocks lasting 40 seconds apiece. This enabled them to generate two types of seismic wave, known as S- and P-waves, which reflect off molten rock at different velocities. Using this information, they were able to locate the top of the magma chamber and determine that 86% of this upper portion was solid rock.

The rest, they discovered, was made up of pores filled with molten material such as rhyolite, together with volatile gases (mostly water in supercritical form) and liquids in roughly equal proportion. Importantly, they say, this moderate concentration of pores allows the volatile bubbles to gradually escape to the surface, so they do not accumulate and increase the buoyancy deeper inside the chamber. This is good news, as it means that the Yellowstone supervolcano is unlikely to erupt any time soon.

A key aspect of this analysis was a wave-equation imaging method that Duan developed, which substantially improved the spatial resolution of the features observed. “This was important since we had to adapt the data we obtained to its less than theoretically ideal properties,” Schmandt explains.

The work, which is detailed in Nature, could also help scientists monitor the eruption potential of other volcanos, Schmandt adds. This is because estimating the accumulation and buoyancy of volatile material beneath sharp magmatic cap layers is key to assessing the stability of the system. “There are many types of similar hazardous magmatic systems and their older remnants on our planet that are important for resources like metal ores and critical minerals,” he explains. “We therefore have plenty of targets left to understand and now some refined ideas about how we might approach them in the field and on the computer.”

The post Geophysicists pinpoint location of Yellowstone magma reservoir appeared first on Physics World.

  •  

Axion quasiparticle appears in a topological antiferromagnet

Physicists have observed axion quasiparticles for the first time in a two-dimensional quantum material. As well as having applications in materials science, the discovery could aid the search for fundamental axions, which are a promising (but so far hypothetical) candidate for the unseen dark matter pervading our universe.

Theorists first proposed axions in the 1970s as a way of solving a puzzle involving the strong nuclear force and charge-parity (CP) symmetry. In systems that obey this symmetry, the laws of physics are the same for a particle and the spatial mirror image of its oppositely charged antiparticle. Weak interactions are known to violate CP symmetry, and the theory of quantum chromodynamics (QCD) allows strong interactions to do so, too. However, no-one has ever seen evidence of this happening, and the so-called “strong CP problem” remains unresolved.

More recently, the axion has attracted attention as a potential constituent of dark matter – the mysterious substance that appears to make up more than 85% of matter in the universe. Axions are an attractive dark matter candidate because although they have mass – and theory predicts that the Big Bang should have generated them in large numbers – they are much less massive than electrons and carry no charge. This combination means that axions interact only very weakly with matter and electromagnetic radiation – exactly the behaviour we expect to see from dark matter.

Despite many searches, though, axions have never been detected directly. Now, however, a team of physicists led by Jianxiang Qiu of Harvard University has proposed a new detection strategy based on quasiparticles that are axions’ condensed-matter analogue. According to Qiu and colleagues, these quasiparticle axions, as they are known, could serve as axion “simulators”, and might offer a route to detecting dark matter in quantum materials.

Topological antiferromagnet

To detect axion quasiparticles, the Harvard team constructed gated electronic devices made from several two-dimensional layers of manganese bismuth telluride (MnBi2Te4). This material is a rare example of a topological antiferromagnet – that is, a material that is insulating in its bulk while conducting electricity on its surface, and that has magnetic moments that point in opposite directions. These properties allow quasiparticles known as magnons (collective oscillations of spin magnetic moments) to appear in and travel through the MnBi2Te4. Two types of magnon mode are possible: one in which the spins oscillate in sync; and another in which they are out of phase.

Qiu and colleagues applied a static magnetic field across the plane of their MnBi2Te4 sheets and bombarded the devices with sub-picosecond light pulses from a laser. This technique, known as ultrafast pump-probe spectroscopy, allowed them to observe the 44 GHz coherent oscillation of the so-called condensed-matter θ field. This field is the analogue of the CP-violating θ term in QCD, and it is proportional to a material’s magnetoelectric coupling constant. “This is uniquely enabled by the out-of-phase magnon in this topological material,” explains Qiu. “Such coherent oscillations are the smoking-gun evidence for the axion quasiparticle and it is the combination of topology and magnetism in MnBi2Te4 that gives rise to it.”

A laboratory for axion studies

Now that they have detected axion quasiparticles, Qiu and colleagues say their next step will be to do experiments that involve hybridizing them with particles such as photons. Such experiments would create a new type of “axion-polariton” that would couple to a magnetic field in a unique way – something that could be useful for applications in ultrafast antiferromagnetic spintronics, in which spin-polarized currents can be controlled with an electric field.

The axion quasiparticle could also be used to build an axion dark matter detector. According to the team’s estimates, the detection frequency for the quasiparticle is in the milli-electronvolt (meV) range. While several theories for the axion predict that it could have a mass in this range, most existing laboratory detectors and astrophysical observations search for masses outside this window.

“The main technical barrier to building such a detector would be to grow high-quality, large crystals of MnBi2Te4 to maximize sensitivity,” Qiu tells Physics World. “In contrast to other high-energy experiments, such a detector would not require expensive accelerators or giant magnets, but it would require extensive materials engineering.”

The research is described in Nature.

The post Axion quasiparticle appears in a topological antiferromagnet appeared first on Physics World.

  •  

Fluid electrodes make soft, stretchable batteries

Researchers at Linköping University in Sweden have developed a new fluid electrode and used it to make a soft, malleable battery that can recharge and discharge over 500 cycles while maintaining its high performance. The device, which continues to function even when stretched to twice its length, might be used in next-generation wearable electronics.

Futuristic wearables such as e-skin patches, e-textiles and even internal e-implants on the organs or nerves will need to conform far more closely to the contours of the human body than today’s devices can. To fulfil this requirement of being soft and stretchable as well as flexible, such devices will need to be made from mechanically pliant components powered by soft, supple batteries. Today’s batteries, however, are mostly rigid. They also tend to be bulky because long-term operations and power-hungry functions such as wireless data transfer, continuous sensing and complex processing demand plenty of stored energy.

To overcome these barriers, researchers led by the Linköping chemist Aiman Rahmanudin decided to rethink the very concept of battery electrode design. Instead of engineering softness and stretchability into a solid electrode, as was the case in most previous efforts, they made the electrode out of a fluid. “Bulky batteries compromise the mechanical compliance of wearable devices, but since fluids can be easily shaped into any configuration, this limitation is removed, opening up new design possibilities for next-generation wearables,” Rahmanudin says.

A “holistic approach”

Designing a stretchable battery requires a holistic approach, he adds, as all the device’s components need to be soft and stretchy. For example, the team used a modified version of the wood-based biopolymer lignin as the cathode and a conjugated polymer, poly(1-amino-5-chloroanthraquinone) (PACA), as the anode. They made these electrodes fluid by dispersing them separately, together with conductive carbon fillers, in an aqueous electrolyte medium consisting of 0.1 M HClO4.

To integrate these electrodes into a complete cell, they had to design a stretchable current collector and an ion-selective membrane to prevent the cathodic and anodic fluids from crossing over. They also encapsulated the fluids in a robust, low-permeability elastomer to prevent them from drying up.

Designing energy storage devices from the “inside out”

Previous flexible, high-performance electrode work by the Linköping team focused on engineering the mechanical properties of solid battery electrodes by varying their Young’s modulus. “For example, think of a rubber composite that can be stretched and bent,” explains Rahmanudin. “The thicker the rubber, however, the higher the force required to stretch it, which affects mechanical compliance.

“Learning from our past experience and work on electrofluids (which are conductive particles dispersed in a liquid medium employed as stretchable conductors), we figured that mixing redox particles with conductive particles and suspending them in an electrolyte could potentially work as battery electrodes. And we found that it did.”

Rahmanudin tells Physics World that fluid-based electrodes could lead to entirely new battery designs, including batteries that could be moulded into textiles, embedded in skin-worn patches or integrated into soft robotics.

After reporting their work in Science Advances, the researchers are now working on increasing the voltage output of their battery, which currently stands at 0.9 V. “We are also looking into using Earth-abundant and sustainable materials like zinc and manganese oxide for future versions of our device and aim to replace the acidic electrolyte we used with a safer, pH-neutral and biocompatible equivalent,” Rahmanudin says.

Another exciting direction, he adds, will be to exploit the fluid nature of such materials to build batteries with more complex three-dimensional shapes, such as spirals or lattices, that are tailored for specific applications. “Since the electrodes can be poured, moulded or reconfigured, we envisage a lot of creative potential here,” Rahmanudin says.

The post Fluid electrodes make soft, stretchable batteries appeared first on Physics World.

  •  

Photonic computer chips perform as well as purely electronic counterparts, say researchers

Researchers in Singapore and the US have independently developed two new types of photonic computer chips that match existing purely electronic chips in terms of their raw performance. The chips, which can be integrated with conventional silicon electronics, could find use in energy-hungry technologies such as artificial intelligence (AI).

For nearly 60 years, the development of electronic computers proceeded according to two rules of thumb: Moore’s law (which states that the number of transistors in an integrated circuit doubles every two years) and Dennard scaling (which says that as the size of transistors decreases, their power density will stay constant). However, both rules have begun to fail, even as AI systems such as large language models, reinforcement learning and convolutional neural networks are becoming more complex. Consequently, electronic computers are struggling to keep up.

Light-based computation, which exploits photons instead of electrons, is a promising alternative because it can perform multiply-and-accumulate (MAC) operations much more quickly and efficiently than electronic devices. These operations are crucial for AI, and especially for neural networks. However, while photonic systems such as photonic accelerators and processors have made considerable progress in performing linear algebra operations such as matrix multiplication, integrating them into conventional electronics hardware has proved difficult.
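
A MAC is nothing more exotic than a running sum of pairwise products, and a neural-network layer is essentially many such operations in parallel. The plain NumPy sketch below – purely illustrative – shows the primitive that photonic hardware accelerates:

    import numpy as np

    def mac(acc, a, b):
        """One multiply-accumulate step."""
        return acc + a * b

    x = np.array([0.5, -1.0, 2.0])   # input activations of one neuron
    w = np.array([0.1, 0.4, 0.3])    # the corresponding weights

    acc = 0.0
    for a, b in zip(x, w):           # a dot product is just repeated MACs
        acc = mac(acc, a, b)
    print(acc, np.dot(x, w))         # both give the same result, ~0.25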

A hybrid photonic-electronic system

The Singapore device was made by researchers at the photonic computing firm Lightelligence and is called PACE, for Photonic Arithmetic Computing Engine. It is a hybrid photonic-electronic system made up of more than 16 000 photonic components integrated on a single silicon chip and performs matrix MAC on 64-entry binary vectors.

“The input vector data elements start in electronic form and are encoded as binary intensities of light (dark or light) and fed into a 64 x 64 array of optical weight modulators that then perform multiply and summing operations to accumulate the results,” explains Maurice Steinman, Lightelligence’s senior vice president and general manager for product strategy. “The result vectors are then converted back to the electronic domain where each element is compared to its corresponding programmable 8-bit threshold, producing new binary vectors that subsequently re-circulate optically through the system.”

The process repeats until the resultant vectors reach “convergence” with settled values, Steinman tells Physics World. Each recurrent step requires only a few nanoseconds and the entire process completes quickly.
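
In software terms, the recirculation Steinman describes resembles a thresholded fixed-point iteration over binary vectors. The sketch below is schematic only – the weights and thresholds are arbitrary stand-ins, not Lightelligence’s parameters – but it captures the multiply-sum, threshold and recirculate dataflow:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 64
    W = rng.integers(0, 2, (N, N))   # stand-in for the 64 x 64 optical weight array
    theta = W.sum(axis=1) / 2        # stand-in for the programmable 8-bit thresholds

    v = rng.integers(0, 2, N)        # binary input vector, encoded as dark or light
    for step in range(100):          # each optical pass takes only nanoseconds
        v_new = (W @ v >= theta).astype(int)   # multiply and sum, then threshold
        if np.array_equal(v_new, v):           # convergence: the vector has settled
            print(f"converged after {step} passes")
            break
        v = v_new                    # re-circulate (a toy run may instead cycle)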

The Lightelligence device, which the team describe in Nature, can solve complex computational problems known as max-cut/optimization problems that are important for applications in areas such as logistics. Notably, its greatly reduced minimum latency – a key measure of computation speed – means it can solve a type of problem known as an Ising model in just five nanoseconds. This makes it 500 times faster than today’s best graphics-processing-unit-based systems at this task.

High level of integration achieved

Independently, researchers led by Nicholas Harris at Lightmatter in Mountain View, California, have fabricated the first photonic processor capable of executing state-of-the-art neural network tasks such as classification, segmentation and running reinforcement learning algorithms. Lightmatter’s design consists of six chips in a single package with high-speed interconnects between vertically aligned photonic tensor cores (PTCs) and control dies. The team’s processor integrates four 128 x 128 PTCs, with each PTC occupying an area of 14 x 24.96 mm. It contains all the photonic components and analogue mixed-signal circuits required to operate, and members of the team say that the current architecture could be scaled to 512 x 512 computing units in a single die.

The result is a device that can perform 65.5 trillion adaptive block floating-point (ABFP) 16-bit operations per second with just 78 W of electrical power and 1.6 W of optical power. Writing in Nature, the researchers claim that this represents the highest level of integration achieved in photonic processing.

The team also showed that the Lightmatter processor can implement complex AI models such as the neural network ResNet (used for image processing) and the natural language processing model BERT (short for Bidirectional Encoder Representations from Transformers) – all with an accuracy rivalling that of standard electronic processors. It can also run reinforcement learning algorithms such as those DeepMind used to master Atari games. Harris and colleagues have already applied their device to several real-world AI applications, such as generating literary texts and classifying film reviews, and they say that their photonic processor marks an essential step in post-transistor computing.

Both teams fabricated their photonic and electronic chips using standard complementary metal-oxide-semiconductor (CMOS) processing techniques. This means that existing infrastructures could be exploited to scale up their manufacture. Another advantage: both systems were fully integrated in a standard chip interface – a first.

Given these results, Steinman says he expects to see innovations emerging from algorithm developers who seek to exploit the unique advantages of photonic computing, including low latency. “This could benefit the exploration of new computing models, system architectures and applications based on large-scale integrated photonics circuits.”

The post Photonic computer chips perform as well as purely electronic counterparts, say researchers appeared first on Physics World.

  •  

Layer-spintronics makes its debut

A new all-electrical way of controlling spin-polarized currents has been developed by researchers at the Singapore University of Technology and Design (SUTD). By using bilayers of recently-discovered materials known as altermagnets, the researchers developed a tuneable and magnetic-free alternative to current approaches – something they say could bring spintronics closer to real-world applications.

Spintronics stores and processes information by exploiting the quantum spin (or intrinsic angular momentum) of electrons rather than their charge. The technology works by switching electronic spins, which can point either “up” or “down”, to perform binary logical operations in much the same way as electronic circuits use electric charge. One of the main advantages is that when an electron’s spin switches direction, its new state is stored permanently; it is said to be “non-volatile”. Spintronics circuits therefore do not require any additional input power to keep their states stable, which could make them more efficient and faster than the circuits in conventional electronic devices.

The problem is that the spin currents that carry information in spintronics circuits are usually generated using ferromagnetic materials and the magnetization of these materials can only be switched using very strong magnetic fields. Doing this requires bulky apparatus, which hinders the creation of ultracompact devices – a prerequisite for real-world applications.

“Notoriously difficult to achieve”

Controlling the spins with electric fields instead would be ideal, but Ang Yee Sin, who led the new research, says it has proved notoriously difficult to achieve – until now. “We have now shown that we can generate and reverse the spin direction of the electron current in an altermagnet made of two very thin layers of chromium sulphide (CrS) at room temperature using only an electric field,” Ang says.

Altermagnets, which were only discovered in 2024, are different from the conventional magnetically-ordered materials, ferromagnets and antiferromagnets. In ferromagnets, the magnetic moments (or spins) of atoms line up parallel to each other. In antiferromagnets, they line up antiparallel. The spins in altermagnets are also antiparallel, but the atoms that host these spins are rotated with respect to their neighbours. This combination gives altermagnets some properties of both ferromagnets and antiferromagnets, plus new properties of their own.

In bilayers of CrS, explains Ang, the electrons in each layer naturally prefer to spin in opposite directions, essentially cancelling each other out. “When we apply an electric field across the layers, however, one layer becomes more ‘active’ than the other. The current flowing through the device therefore becomes spin-polarized.”

A new device concept

The main challenge the researchers faced in their work was to identify a suitable material and a stacking arrangement in which spin and layers intertwined just right. This required detailed quantum-level simulations and theoretical modelling to prove that CrS bilayers could do the job, says Ang.

The work opens up a new device concept that the team calls layer-spintronics in which spin control is achieved via layer selection using an electric field. According to Ang, this concept has clear applications for next-generation, energy-efficient, compact and magnet-free memory and logic devices. And, since the technology works at room temperature and uses electric gating – a common approach in today’s electronics – it could make it possible to integrate spintronics devices with current semiconductor technology. This could lead to novel spin transistors, reconfigurable logic gates, or ultrafast memory cells based entirely on spin in the future, he says.

The SUTD researchers, who report their work in Materials Horizons, now aim to identify other 2D altermagnets that can host similar or even more robust spin-electric effects. “We are also collaborating with experimentalists to synthesize and characterize CrS bilayers to validate our predictions in the lab and investigating how to achieve non-volatile spin control by integrating them with ferroelectric materials,” reveals Ang. “This could potentially allow for memory devices that can retain information for longer.”

The post Layer-spintronics makes its debut appeared first on Physics World.

  •  

Solar wind burst caused a heatwave on Jupiter

A burst of solar wind triggered a planet-wide heatwave in Jupiter’s upper atmosphere, say astronomers at the University of Reading, UK. The hot region, which had a temperature of over 750 K, propagated at thousands of kilometres per hour and stretched halfway around the planet.

“This is the first time we have seen something like a travelling ionospheric disturbance, the likes of which are found on Earth, at a giant planet,” says James O’Donoghue, a Reading planetary scientist and lead author of a study in Geophysical Research Letters on the phenomenon. “Our finding shows that Jupiter’s atmosphere is not as self-contained as we thought, and that the Sun can drive dramatic, global changes, even this far out in the solar system.”

Jupiter’s upper atmosphere begins hundreds of kilometres above its cloud tops and has two components. One is a neutral thermosphere composed mainly of molecular hydrogen. The other is a charged ionosphere comprising electrons and ions. Jupiter also has a protective magnetic shield, or magnetosphere.

When emissions from Jupiter’s volcanic moon, Io, become ionized by extreme ultraviolet radiation from the Sun, the resulting plasma becomes trapped in the magnetosphere. This trapped plasma then generates magnetosphere-ionosphere currents that heat the planet’s polar regions and produce aurorae. Thanks to this heating, the hottest places on Jupiter, at around 900 K, are its poles. From there, temperatures gradually decrease, reaching 600 K at the equator.

Quite a different temperature-gradient pattern

In 2021, however, O’Donoghue and colleagues observed quite a different temperature-gradient pattern in near-infrared spectral data recorded by the 10-metre Keck II telescope in Hawaii, US, during an event in 2017. When they analysed these data, they found an enormous hot region far from Jupiter’s aurorae and stretching across 180° in longitude – half the planet’s circumference.

“At the time, we could not definitively explain this hot feature, which is roughly 150 K hotter than the typical ambient temperature of Jupiter,” says O’Donoghue, “so we re-analysed the Keck data using updated solar wind propagation models.”

Two instruments on NASA’s Juno spacecraft were pivotal in the re-analysis, he explains. The first, called Waves, can measure electron densities locally. Its data showed that these electron densities ramped up as the spacecraft approached Jupiter’s magnetosheath, which is the region between the planet’s magnetic field and the solar wind. The second instrument was Juno’s magnetometer, which recorded measurements that backed up the Waves-based analyses, O’Donoghue says.

A new interpretation

In their latest study, the Reading scientists analysed a burst of fast solar wind that emanated from the Sun in January 2017 and propagated towards Jupiter. They found that a high-speed stream of this wind arrived several hours before the Keck telescope recorded the data that led them to identify the hot region.
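As a rough cross-check on that timing, a purely ballistic estimate of the Sun-to-Jupiter travel time already lands in the right ballpark. The sketch below is ours, not the study's method – the researchers used updated solar wind propagation models that track how the stream evolves en route – and the 700 km/s stream speed is an assumed typical value for fast wind.

```python
# Back-of-the-envelope, ballistic travel time of a fast solar wind stream to
# Jupiter. Illustrative only: the researchers used updated propagation models
# that account for the stream's evolution on the way out.

AU_KM = 1.496e8            # one astronomical unit in km
JUPITER_ORBIT_AU = 5.2     # Jupiter's mean distance from the Sun
WIND_SPEED_KM_S = 700.0    # assumed fast-stream speed

travel_time_s = JUPITER_ORBIT_AU * AU_KM / WIND_SPEED_KM_S
print(f"~{travel_time_s / 86400:.0f} days from Sun to Jupiter")   # ~13 days
```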

“Our analysis of Juno’s magnetometer measurements also showed that this spacecraft exited the magnetosphere of Jupiter early,” says O’Donoghue. “This is a strong sign that the solar wind probably compressed Jupiter’s magnetic field several hours before the hot region appeared.

“We therefore see the hot region emerging as a response to solar wind compression: the aurorae flared up and heat spilled equatorward.”

The result shows that the Sun can significantly reshape the global energy balance in Jupiter’s upper atmosphere, he tells Physics World. “That changes how we think about energy balance at all giant planets, not just Jupiter, but potentially Saturn, Uranus, Neptune and exoplanets too,” he says. “It also shows that solar wind can trigger complex atmospheric responses far from Earth and it could help us understand space weather in general.”

The Reading researchers say they would now like to hunt for more of these events, especially in the southern hemisphere of Jupiter where they expect a mirrored response. “We are also working on measuring wind speeds and temperatures across more of the planet and at different times to better understand how often this happens and how energy moves around,” O’Donoghue reveals. “Ultimately, we want to build a more complete picture of how space weather shapes Jupiter’s upper atmosphere and drives (or interferes) with global circulation there.”

The post Solar wind burst caused a heatwave on Jupiter appeared first on Physics World.

  •  

Speedy worms behave like active polymers in disordered mazes

Worms move faster in an environment riddled with randomly-placed obstacles than they do in an empty space. This surprising observation by physicists at the University of Amsterdam in the Netherlands can be explained by modelling the worms as polymer-like “active matter”, and it could come in handy for developers of robots for soil aeration, fertility treatments and other biomedical applications.

When humans move, the presence of obstacles – disordered or otherwise – has a straightforward effect: it slows us down, as anyone who has ever driven through “traffic calming” measures like speed bumps and chicanes can attest. Worms, however, are different, says Antoine Deblais, who co-led the new study with Rosa Sinaasappel and theorist colleagues in Sara Jabbari Farouji’s group. “The arrangement of obstacles fundamentally changes how worms move,” he explains. “In disordered environments, they spread faster as crowding increases, while in ordered environments, more obstacles slow them down.”

A maze of cylindrical pillars

The team obtained this result by placing single living worms at the bottom of a water chamber containing a 50 × 50 cm array of cylindrical pillars, each with a radius of 2.5 mm. By tracking the worms’ movement and shape changes with a camera for two hours, the scientists could see how the animals behaved when faced with two distinct pillar arrangements: a periodic (square-lattice) structure and a disordered array. The minimum distance between any two pillars was set to the characteristic width of a worm (around 0.5 mm) to ensure they could always pass through.
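For readers who want a feel for the geometry, here is a minimal sketch (our own, using an assumed random-sequential-placement scheme; the experimental arrays were placed by hand) of a disordered pillar array obeying the same dimensions and minimum-gap rule:

```python
import numpy as np

# Random sequential placement of pillars in a 50 x 50 cm chamber, keeping an
# edge-to-edge gap of at least one worm width (~0.5 mm) between pillars.
# Illustrative sketch of a disordered array only.

rng = np.random.default_rng(0)
BOX_MM, RADIUS_MM, MIN_GAP_MM = 500.0, 2.5, 0.5
N_TARGET = 2000                          # the experiments used up to 10 000
min_dist = 2 * RADIUS_MM + MIN_GAP_MM    # minimum centre-to-centre distance

centres, attempts = [], 0
while len(centres) < N_TARGET and attempts < 200_000:
    attempts += 1
    p = rng.uniform(RADIUS_MM, BOX_MM - RADIUS_MM, size=2)
    if not centres or np.min(np.linalg.norm(np.asarray(centres) - p, axis=1)) >= min_dist:
        centres.append(p)

phi = len(centres) * np.pi * RADIUS_MM**2 / BOX_MM**2   # pillar area fraction
print(f"placed {len(centres)} pillars, area fraction {phi:.2f}")
```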

“By varying the number and arrangement of the pillars (up to 10 000 placed by hand!), we tested how different environments affect the worm’s movement,” Sinaasappel explains. “We also reduced or increased the worm’s activity by lowering or raising the temperature of the chamber.”

These experiments showed that when the chamber contained a “maze” of obstacles placed at random, the worms moved faster, not slower. The same thing happened when the researchers increased the number of obstacles. More surprisingly still, the worms got through the maze faster when the temperature was lower, even though the cold reduced their activity.

Active polymer-like filaments

To explain these counterintuitive results, the team developed a statistical model that treats the worms as active polymer-like filaments and accounts for both the worms’ flexibility and the fact that they are self-driven. This analysis revealed that in a space containing disordered pillar arrays, the long-time diffusion coefficient of active polymers with a worm-like degree of flexibility increases significantly as the fraction of the surface occupied by pillars goes up. In regular, square-lattice arrangements, the opposite happens.

The team say that this increased diffusivity comes about because randomly-positioned pillars create narrow tube-like structures between them. These curvilinear gaps guide the worms and allow them to move as if they were straight rods for longer before they reorient. In contrast, ordered pillar arrangements create larger open spaces, or pores, in which worms can coil up. This temporarily traps them and they slow down.

Similarly, the team found that reducing the worm’s activity by lowering ambient temperatures increases a parameter known as its persistence length. This is essentially a measure of how straight the worm is, and straighter worms pass between the pillars more easily.
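The link between straightness and spreading can be reproduced with a toy model: a persistent random walker in two dimensions, whose long-time diffusion coefficient grows linearly with the persistence length (D = v·l_p/2 in 2D). This sketch is our illustrative stand-in, not the team's full active-polymer model:

```python
import numpy as np

# Toy 2D persistent random walk: a self-propelled walker whose heading
# diffuses with rotational noise set by the persistence length l_p = v / D_r.
# Straighter paths (longer l_p) give a larger long-time diffusion coefficient.

def diffusion_estimate(l_p, v=1.0, dt=0.05, n_steps=20_000, n_walkers=100, seed=1):
    rng = np.random.default_rng(seed)
    d_r = v / l_p                                        # rotational diffusion rate
    dtheta = np.sqrt(2 * d_r * dt) * rng.standard_normal((n_walkers, n_steps))
    theta = np.cumsum(dtheta, axis=1)
    x = np.cumsum(v * dt * np.cos(theta), axis=1)
    y = np.cumsum(v * dt * np.sin(theta), axis=1)
    msd = np.mean(x[:, -1] ** 2 + y[:, -1] ** 2)         # ensemble-averaged MSD
    return msd / (4 * n_steps * dt)                      # <r^2> = 4 D t at long times

for l_p in (1.0, 5.0, 20.0):                             # longer l_p = "straighter worm"
    print(f"l_p = {l_p:5.1f}:  D ≈ {diffusion_estimate(l_p):.2f}  (theory {l_p / 2:.2f})")
```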

“Self-tuning plays a key role”

Identifying the right active polymer model was no easy task, says Jabbari Farouji. One challenge was to incorporate the way worms adjust their flexibility depending on their surroundings. “This self-tuning plays a key role in their surprising motion,” says Jabbari Farouji, who credits this insight to team member Twan Hooijschuur.

Understanding how active, flexible objects move through crowded environments is crucial in physics, biology and biophysics, but the role of environmental geometry in shaping this movement was previously unclear, Jabbari Farouji says. The team’s discovery that movement in active, flexible systems can be controlled simply by adjusting the environment has important implications, adds Deblais.

“Such a capability could be used to sort worms by activity and therefore optimize soil aeration by earthworms or even influence bacterial transport in the body,” he says. “The insights gleaned from this study could also help in fertility treatments – for instance, by sorting sperm cells based on how fast or slow they move.”

Looking ahead, the researchers say they are now expanding their work to study the effects of different obstacle shapes (not just simple pillars), more complex arrangements and even movable obstacles. “Such experiments would better mimic real-world environments,” Deblais says.

The present work is detailed in Physical Review Letters.

The post Speedy worms behave like active polymers in disordered mazes appeared first on Physics World.

  •  

Supercritical water reveals its secrets

Contrary to some theorists’ expectations, water does not form hydrogen bonds in its supercritical phase. This finding, which is based on terahertz spectroscopy measurements and simulations by researchers at Ruhr University Bochum, Germany, puts to rest a long-standing controversy and could help us better understand the chemical processes that occur near deep-sea vents.

Water is unusual. Unlike most other materials, it is denser as a liquid than it is as the ice that forms when it freezes. It also expands rather than contracts as it cools below 4 °C; becomes less viscous when compressed; and exists in no fewer than 17 different crystalline phases.

Another unusual property is that at high temperatures and pressures – above 374 °C and 221 bars – water mostly exists as a supercritical fluid, meaning it shares some properties with both gases and liquids. Though such extreme conditions are rare on the Earth’s surface (at least outside a laboratory), they are typical for the planet’s crust and mantle. They are also present in so-called black smokers, which are hydrothermal vents that exist on the seabed in certain geologically active locations. Understanding supercritical water is therefore important for understanding the geochemical processes that occur in such conditions, including the genesis of gold ore.

Supercritical water also shows promise as an environmentally friendly solvent for industrial processes such as catalysis, and even as a coolant and neutron moderator in nuclear power plants. Before any such applications see the light of day, however, researchers need to better understand the structure of water’s supercritical phase.

Probing the hydrogen bonding between molecules

At ambient conditions, the tetrahedrally-arranged hydrogen bonds (H-bonds) in liquid water produce a three-dimensional H-bonded network. Many of water’s unusual properties stem from this network, but as water approaches its supercritical point, this structure changes.

Previous studies of this change have produced results that were contradictory or unclear at best. While some pointed to the existence of distorted H-bonds, others identified heterogeneous structures involving rigid H-bonded dimers or, more generally, small clusters of tetrahedrally-bonded water surrounded by nonbonded gas-like molecules.

To resolve this mystery, an experimental team led by Gerhard Schwaab and Martina Havenith, together with Philipp Schienbein and Dominik Marx, investigated how water absorbs light in the far infrared/terahertz (THz) range of the spectrum. They performed their experiments and simulations at temperatures from 20 °C to 400 °C and pressures from 1 bar up to 240 bars. In this way, they were able to investigate the hydrogen bonding between molecules in samples of water that were entering the supercritical state and samples that were already in it.

Diamond and gold cell

Because supercritical water is highly corrosive, the researchers carried out their experiments in a specially-designed cell made from diamond and gold. By comparing their experimental data with the results of extensive ab initio simulations that probed different parts of water’s high-temperature phase diagram, they obtained a molecular picture of what was happening.

The researchers found that the terahertz spectrum of water in its supercritical phase was practically identical to that of hot gaseous water vapour. This, they say, proves that supercritical water is different from both liquid water at ambient conditions and water in a low-temperature gas phase where clusters of molecules form directional hydrogen bonds. No such molecular clusters appear in supercritical water, they note.

The team’s ab initio molecular dynamics simulations also revealed that two water molecules in the supercritical phase remain close to each other for a very limited time – much shorter than the typical lifetime of hydrogen bonds in liquid water – before distancing themselves. What is more, the bonds between hydrogen and oxygen atoms in supercritical water do not have a preferred orientation. Instead, they are permanently and randomly rotating. “This is completely different to the hydrogen bonds that connect the water molecules in liquid water at ambient conditions, which do have a persisting preferred orientation,” Havenith says.
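A standard way to quantify such behaviour in simulation data is the orientational autocorrelation function of the O–H bond vectors, which decays to zero quickly when bonds rotate freely and randomly. The following sketch shows this analysis on a hypothetical trajectory array; the variable names and synthetic demo data are our assumptions, not the team's actual pipeline:

```python
import numpy as np

# Orientational autocorrelation C(t) = <u(0) . u(t)> of O-H bond unit vectors,
# averaged over bonds and time origins. Rapid decay of C(t) indicates freely,
# randomly rotating bonds with no persistent preferred orientation, as
# reported for supercritical water. `traj` is a hypothetical array of unit
# vectors with shape (n_frames, n_bonds, 3) extracted from an MD trajectory.

def orientational_acf(traj, max_lag):
    n_frames = traj.shape[0]
    c = np.empty(max_lag)
    for lag in range(max_lag):
        dots = np.sum(traj[: n_frames - lag] * traj[lag:], axis=-1)
        c[lag] = dots.mean()            # average over bonds and time origins
    return c

# demo with synthetic, fully randomized vectors (stand-in for real MD data)
rng = np.random.default_rng(0)
traj = rng.standard_normal((500, 64, 3))
traj /= np.linalg.norm(traj, axis=-1, keepdims=True)
print(orientational_acf(traj, 5))       # ~[1, 0, 0, ...] for free rotation
```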

Now that they have identified a clear spectroscopic fingerprint for supercritical water, the researchers want to study how solutes affect the solvation properties of this substance. They anticipate that the results from this work, which is published in Science Advances, will enable them to characterize the properties of supercritical water for use as a “green” solvent.

The post Supercritical water reveals its secrets appeared first on Physics World.

  •  

Abnormal ‘Arnold’s tongue’ patterns appear in a real oscillating system

[Figure: Synchronization studies – mapping the laser’s breathing-frequency intensity in the parameter space of pump current and intracavity loss (left) reveals unusual features; the areas contoured by blue dashed lines correspond to strong intensity and represent the main synchronization regions. Synchronization regions extracted from this map (right) highlight their leaf-like structure. (Courtesy: DOI: 10.1126/sciadv.ads3660)]

Abnormal versions of synchronization patterns known as “Arnold’s tongues” have been observed in a femtosecond fibre laser that generates oscillating light pulses. While these unconventional patterns had been theorized to exist in certain strongly-driven oscillatory systems, the new observations represent the first experimental confirmation.

Scientists have known about synchronization since 1665, when Christiaan Huygens observed that two pendulum clocks mounted on a common support eventually begin to swing in unison, coupled by vibrations within the support. It was not until the mid-20th century, however, that the Russian mathematician Vladimir Arnold discovered that plotting certain parameters of such coupled oscillating systems produces a series of tongue-like triangular shapes.

These shapes are now known as Arnold’s tongues, and they are an important indicator of synchronization. When the system’s parameters are in the tongue region, the system is synchronized. Otherwise, it is not.
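The textbook system for seeing these tongues is the sine circle map, in which the winding number locks to rational values inside triangular regions that widen with the coupling strength. The sketch below uses this standard toy model (not the laser of the present study) to show a tongue broadening as the drive grows:

```python
from math import pi, sin
import numpy as np

# Sine circle map: theta_{n+1} = theta_n + Omega + (K / 2 pi) * sin(2 pi theta_n).
# Inside an Arnold tongue the winding number locks to a rational p/q, and the
# classic tongues are triangles that widen as the coupling K grows.

def winding_number(omega, k, n_transient=300, n_iter=3000):
    theta = 0.0
    for _ in range(n_transient):                  # let transients die out
        theta += omega + k / (2 * pi) * sin(2 * pi * theta)
    start = theta
    for _ in range(n_iter):
        theta += omega + k / (2 * pi) * sin(2 * pi * theta)
    return (theta - start) / n_iter               # average phase advance per step

# scan the drive frequency Omega across the 1:2 tongue for several couplings
omegas = np.linspace(0.4, 0.6, 101)
for k in (0.2, 0.6, 1.0):
    locked = [abs(winding_number(o, k) - 0.5) < 1e-3 for o in omegas]
    print(f"K = {k:.1f}: approximate 1:2 tongue width ≈ {sum(locked) * 0.002:.3f}")
```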

Arnold’s tongues are found in all real-world synchronized systems, explains Junsong Peng, a physicist at East China Normal University. They have previously been studied in systems such as nanomechanical and biological resonators to which external driving frequencies are applied. More recently, they have been observed in the motion of two bound solitons (wave packets that maintain their shapes and sizes as they propagate) when they are subject to external forces.

Abnormal synchronization regions

In the new work, Peng, Sonia Boscolo of Aston University in the UK, Christophe Finot of the University of Burgundy in France, and colleagues studied Arnold’s tongue patterns in a laser that emits solitons. Lasers of this type possess two natural synchronization frequencies: the repetition frequency of the solitons (determined by the laser’s cavity length) and the frequency at which the energy of the soliton becomes self-modulating, or “breathing”.

In their experiments, which they describe in Science Advances, the researchers found that as they increased the driving force applied to this so-called breathing-soliton laser, the synchronization region first broadened, then narrowed. These changes produced Arnold’s tongues with very peculiar shapes. Instead of being triangle-like, they appeared as two regions shaped like leaves or rays.

Avoiding amplitude death

Although theoretical studies had previously predicted that Arnold’s-tongue patterns would deviate substantially from the norm as the driving force increased, Peng says that demonstrating this in a real system was not easy. The driving force required to access the anomalous regime is so strong that it can destroy fragile coherent pulsing states, leading to “amplitude death” in which all oscillations are completely suppressed.

In the breathing-soliton laser, however, the two frequencies synchronized without amplitude death even though the repetition frequency is about two orders of magnitude higher than the breathing frequency. “These lasers therefore open up a new frontier for studying synchronization phenomena,” Peng says.

To demonstrate the system’s potential, the researchers explored the effects of using an optical attenuator to modulate the laser’s dissipation while changing the laser’s pump current to modulate its gain. Having precise control over both parameters enabled them to identify “holes” within the ray-shaped tongue regions. These holes appear when the driving force exceeds a certain strength, and they represent quasi-periodic (unsynchronized) states inside the larger synchronized regions.

“The manifestation of holes is interesting not only for nonlinear science, it is also important for practical applications,” Peng explains. “This is because these holes, which have not been realized in experiments until now, can destabilize the synchronized system.”

Understanding when and under which conditions these holes appear, Peng adds, could help scientists ensure that oscillating systems operate more stably and reliably.

Extending synchronization to new regimes

The researchers also used simulations to produce a “map” of the synchronization regions. These simulations perfectly reproduced the complex synchronization structures they observed in their experiments, confirming the existence of the “hole” effect.

Despite these successes, however, Peng says it is “still quite challenging” to understand why such patterns appear. “We would like to do more investigations on this issue and get a better understanding of the dynamics at play,” he says.

The current work extends studies of synchronization into a regime where the synchronized region no longer exhibits a linear relationship with the coupling strength (as is the case for normal Arnold’s-tongue patterns), he adds. “This nonlinear relationship can generate even broader synchronization regions compared to the linear regime, making it highly significant for enhancing the stability of oscillating systems in practical applications,” he tells Physics World.

The post Abnormal ‘Arnold’s tongue’ patterns appear in a real oscillating system appeared first on Physics World.

  •  

Strange metals get their strangeness from quantum entanglement

A concept from quantum information theory appears to explain at least some of the peculiar behaviour of so-called “strange” metals. The new approach, which was developed by physicists at Rice University in the US, attributes the unusually poor electrical conductivity of these metals to an increase in the quantum entanglement of their electrons. The team say the approach could advance our understanding of certain high-temperature superconductors and other correlated quantum structures.

While electrons can travel through ordinary metals such as gold or copper relatively freely, strange metals resist their flow. Intriguingly, some high-temperature superconductors have a strange metal phase as well as a superconducting one. This phenomenon cannot be explained by conventional theories, which treat electrons as independent particles and ignore any interactions between them.

To unpick these and other puzzling behaviours, a team led by Qimiao Si turned to the concept of quantum Fisher information (QFI). This statistical tool from quantum information theory quantifies how sensitive a quantum state is to perturbations, and it can serve as a witness of multipartite entanglement – here, of how correlations between electrons evolve under extreme conditions. The team applied it to a theoretical model known as the Anderson/Kondo lattice, which describes how local magnetic moments couple to electron spins in a material.
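In practice, the QFI of a thermal state can be extracted from a dynamical susceptibility, which is what makes comparisons with inelastic neutron scattering possible. A commonly used relation (following Hauke and co-workers; conventions and normalizations vary, so treat this as a sketch rather than the paper's exact expression) is

$$F_Q(T) \;=\; \frac{8}{\pi} \int_0^{\infty} \mathrm{d}\omega \,\tanh\!\left(\frac{\hbar\omega}{2k_{\mathrm B}T}\right) \chi''(\omega, T),$$

where $\chi''$ is the imaginary (dissipative) part of the dynamical spin susceptibility. With suitable normalization, a QFI density exceeding $k$ witnesses at least $(k+1)$-partite entanglement, which is how a peak in the QFI signals a web of quantum correlations among many electrons.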

Correlations become strongest when strange metallicity appears

These analyses revealed that electron-electron correlations become strongest at precisely the point at which strange metallicity appears in a material. “In other words, the electrons become maximally entangled at this quantum critical point,” Si explains. “Indeed, the peak signals a dramatic amplification of multipartite electron spin entanglement, leading to a complex web of quantum correlations between many electrons.”

What is striking, he adds, is that this surge of entanglement provides a new and positive characterization of why strange metals are so strange, while also revealing why conventional theory fails. “It’s not just that traditional theory falls short, it is that it overlooks this rich web of quantum correlations, which prevents the survival of individual electrons as the elementary objects in this metallic substance,” he explains.

To test their finding, the researchers, who report their work in Nature Communications, compared their predictions with neutron scattering data from real strange-metal materials. They found that the experimental data was a good match. “Our earlier studies had also led us to suspect that strange metals might host a deeply entangled electron fluid – one whose hidden quantum complexity had yet to be fully understood,” adds Si.

The implications of this work are far-reaching, he tells Physics World. “Strange metals may hold the key to unlocking the next generation of superconductors — materials poised to transform how we transmit energy and, perhaps one day, eliminate power loss from the electric grid altogether.”

The Rice researchers say they now plan to explore how QFI manifests itself in the charge of electrons as well as their spins. “Until now, our focus has only been on the QFI associated with electron spins, but electrons also of course carry charge,” Si says.

The post Strange metals get their strangeness from quantum entanglement appeared first on Physics World.

  •  

Helium nanobubble measurements shed light on origins of heavy elements in the universe

New measurements by physicists from the University of Surrey in the UK have shed fresh light on where the universe’s heavy elements come from. The measurements, which were made by smashing high-energy protons into a uranium target to generate strontium ions, then accelerating these ions towards a second, helium-filled target, might also help improve nuclear reactors.

The origin of the elements that follow iron in the periodic table is one of the biggest mysteries in nuclear astrophysics. As Surrey’s Matthew Williams explains, the standard picture is that these elements were formed when other elements captured neutrons, then underwent beta decay. The two ways this can happen are known as the rapid (r) and slow (s) processes.

The s-process occurs in the cores of stars and is relatively well understood. The r-process is comparatively mysterious. It occurs during violent astrophysical events such as certain types of supernovae and neutron star mergers that create an abundance of free neutrons. In these neutron-rich environments, atomic nuclei essentially capture neutrons before the neutrons can turn into protons via beta-minus decay, which occurs when a neutron emits an electron and an antineutrino.
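The distinction boils down to a competition of timescales: if the mean time between neutron captures is much shorter than the beta-decay lifetime, the r-process operates. Here is a toy comparison with order-of-magnitude numbers (all assumed, purely illustrative):

```python
# Toy comparison deciding which nucleosynthesis path dominates: the r-process
# operates when neutron capture is much faster than beta-minus decay.
# All numbers below are illustrative order-of-magnitude assumptions only.

TAU_BETA_S = 1.0                 # assumed beta-decay lifetime of a neutron-rich nucleus

def capture_time(n_density_cm3, sigma_v_cm3_s=1e-17):
    """Mean time between neutron captures, tau = 1 / (n_n * <sigma v>)."""
    return 1.0 / (n_density_cm3 * sigma_v_cm3_s)

for n_n in (1e8, 1e22):          # s-process-like vs r-process-like neutron densities
    tau_c = capture_time(n_n)
    regime = "r-process (capture wins)" if tau_c < TAU_BETA_S else "s-process (decay wins)"
    print(f"n_n = {n_n:.0e} /cm^3: capture every {tau_c:.1e} s -> {regime}")
```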

From the night sky to the laboratory

One way of studying the r-process is to observe older stars. “Studies on heavy element abundance patterns in extremely old stars provide important clues here because these stars formed at times too early for the s-process to have made a significant contribution,” Williams explains. “This means that the heavy element pattern in these old stars may have been preserved from material ejected by prior extreme supernovae or neutron star merger events, in which the r-process is thought to happen.”

Recent observations of this type have revealed that the r-process is not necessarily a single scenario with a single abundance pattern. It may also have a “weak” component that is responsible for making elements with atomic numbers ranging from 37 (rubidium) to 47 (silver), without getting all the way up to the heaviest elements such as gold (atomic number 79) or actinides like thorium (90) and uranium (92).

This weak r-process could occur in a variety of situations, Williams explains. One scenario involves radioactive isotopes (that is, those with a few more neutrons than their stable counterparts) forming in hot neutrino-driven winds streaming from supernovae. This “flow” of nucleosynthesis towards higher neutron numbers is caused by processes known as (alpha,n) reactions, which occur when a radioactive isotope fuses with a helium nucleus and spits out a neutron. “These reactions impact the final abundance pattern before the neutron flux dissipates and the radioactive nuclei decay back to stability,” Williams says. “So, to match predicted patterns to what is observed, we need to know how fast the (alpha,n) reactions are on radioactive isotopes a few neutrons away from stability.”
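The quantity behind “how fast the (alpha,n) reactions are” is conventionally the Maxwellian-averaged reaction rate per particle pair. In standard form (a textbook expression, not specific to this paper), with $\sigma(E)$ the energy-dependent cross-section and $\mu$ the reduced mass of the 94Sr + alpha system:

$$\langle \sigma v \rangle \;=\; \sqrt{\frac{8}{\pi\mu}}\;\frac{1}{(k_{\mathrm B}T)^{3/2}} \int_0^{\infty} \sigma(E)\, E\, \exp\!\left(-\frac{E}{k_{\mathrm B}T}\right) \mathrm{d}E .$$

Measuring $\sigma(E)$ at supernova-like energies is what pins down this rate for weak r-process network calculations.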

The 94Sr(alpha,n)97Zr reaction

To obtain this information, Williams and colleagues studied a reaction in which radioactive strontium-94 absorbs an alpha particle (a helium nucleus), then emits a neutron and transforms into zirconium-97. To produce the radioactive 94Sr beam, they fired high-energy protons at a uranium target at TRIUMF, the Canadian national accelerator centre. Using lasers, they selectively ionized and extracted strontium from the resulting debris before selecting 94Sr ions with a magnetic spectrometer.

The team then accelerated a beam of these 94Sr ions to energies representative of collisions that would happen when a massive star explodes as a supernova. Finally, they directed the beam onto a nanomaterial target made of a silicon thin film containing billions of small nanobubbles of helium. This target was made by researchers at the Materials Science Institute of Seville (CSIC) in Spain.

“This thin film crams far more helium into a small target foil than previous techniques allowed, thereby enabling the measurement of helium burning reactions with radioactive beams that characterize the weak r-process,” Williams explains.

To identify the 94Sr(alpha,n)97Zr reactions, the researchers used a mass spectrometer to select for 97Zr while simultaneously using an array of gamma-ray detectors around the target to look for the gamma rays it emits. When they saw both a heavy ion with a mass number of 97 and a 97Zr gamma ray, they knew they had identified the reaction of interest. In doing so, Williams says, they were able to measure the probability that this reaction occurs at the energies and temperatures present in supernovae.
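Conceptually, this event selection is a coincidence filter: keep only events where the recoil mass analysis returns A = 97 and a coincident gamma ray matches a known 97Zr transition. A schematic sketch follows; the event fields, gamma-ray energy and tolerance used here are illustrative assumptions, not the experiment's actual values:

```python
# Schematic coincidence filter for identifying 94Sr(alpha,n)97Zr events:
# require a recoil with mass number 97 AND a coincident gamma ray consistent
# with a known 97Zr transition. Field names, the line energy and the window
# below are illustrative assumptions, not values from the experiment.

GAMMA_LINE_KEV = 1103.0   # hypothetical 97Zr transition energy
TOL_KEV = 3.0             # assumed detector-resolution window

def is_candidate(event):
    """event: dict with 'recoil_mass' (int) and 'gammas' (list of keV energies)."""
    if event["recoil_mass"] != 97:
        return False
    return any(abs(e - GAMMA_LINE_KEV) < TOL_KEV for e in event["gammas"])

events = [
    {"recoil_mass": 97, "gammas": [511.0, 1102.1]},    # coincidence -> keep
    {"recoil_mass": 97, "gammas": [846.8]},            # wrong gamma -> reject
    {"recoil_mass": 94, "gammas": [1103.4]},           # wrong mass  -> reject
]
print([is_candidate(ev) for ev in events])             # [True, False, False]
```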

Williams thinks that scientists should be able to measure many more weak r-process reactions using this technology. This should help them constrain where the weak r-process comes from. “Does it happen in supernovae winds? Or can it happen in a component of ejected material from neutron star mergers?” he asks.

As well as shedding light on the origins of heavy elements, the team’s findings might also help us better understand how materials respond to the high radiation environments in nuclear reactors. “By updating models of how readily nuclei react, especially radioactive nuclei, we can design components for these reactors that will operate and last longer before needing to be replaced,” Williams says.

The work is detailed in Physical Review Letters.

The post Helium nanobubble measurements shed light on origins of heavy elements in the universe appeared first on Physics World.

  •  

Two-dimensional metals make their debut

Researchers from the Institute of Physics of the Chinese Academy of Sciences have produced the first two-dimensional (2D) sheets of metal. At just angstroms thick, these metal sheets could be an ideal system for studying the fundamental physics of the quantum Hall effect, 2D superfluidity and superconductivity, topological phase transitions and other phenomena that feature tight quantum confinement. They might also be used to make novel electronic devices such as ultrathin low-power transistors, high-frequency devices and transparent displays.

Since the discovery of graphene – a 2D sheet of carbon just one atom thick – in 2004, hundreds of other 2D materials have been fabricated and studied. In most of these, layers of covalently-bonded atoms are separated by gaps. The presence of these gaps means that neighbouring layers are held together only by weak van der Waals (vdW) interactions, making it relatively easy to “shave off” single layers to make 2D sheets.

Making atomically thin metals would expand this class of technologically important structures. However, because each atom in a metal is strongly bonded to surrounding atoms in all directions, thinning metal sheets to this degree has proved difficult. Indeed, many researchers thought it might be impossible.

Melting and squeezing pure metals

The technique developed by Guangyu Zhang, Luojun Du and colleagues involves heating powders of pure metals between two monolayer-MoS2/sapphire vdW anvils. The team used MoS2/sapphire because both materials are atomically flat and lack dangling bonds that could react with the metals. They also have high Young’s moduli, of 430 GPa and 300 GPa respectively, meaning they can withstand extremely high pressures.

Once the metal powders melted into a droplet, the researchers applied a pressure of 200 MPa. They then continued this “vdW squeezing” until the opposite sides of the anvils cooled to room temperature and 2D sheets of metal formed.

The team produced five atomically thin 2D metals using this technique. The thinnest, at around 5.8 Å, was tin, followed by bismuth (~6.3 Å), lead (~7.5 Å), indium (~8.4 Å) and gallium (~9.2 Å).

“Arduous explorations”

Zhang, Du and colleagues started this project around 10 years ago after they decided it would be interesting to work on 2D materials other than graphene and its layered vdW cousins. At first, they had little success. “Since 2015, we tried out a host of techniques, including using a hammer to thin a metal foil – a technique that we borrowed from gold foil production processes – all to no avail,” Du recalls. “We were not even able to make micron-thick foils using these techniques.”

After 10 years of what Du calls “arduous explorations”, the team finally took a crucial step forward by developing the vdW squeezing method.

Writing in Nature, the researchers say that the five 2D metals they’ve realized so far are just the “tip of the iceberg” for their method. They now intend to increase this number. “In terms of novel properties, there is still a knowledge gap in the emerging electrical, optical, magnetic properties of 2D metals, so it would be nice to see how these materials behave physically as compared to their bulk counterparts thanks to 2D confinement effects,” says Zhang. “We would also like to investigate to what extent such 2D metals could be used for specific applications in various technological fields.”

The post Two-dimensional metals make their debut appeared first on Physics World.

  •