Globular springtails (Dicyrtomina minuta) are tiny arthropods, about five millimetres long, that can be seen crawling through leaf litter and garden soil. While they do not have wings and cannot fly, they more than make up for it with their ability to hop heights and distances many times their body length.
This jumping feat is thanks to a tail-like appendage on their abdomen called a furcula, which is folded in beneath their body, held under tension.
When released, it snaps against the ground in as little as 20 milliseconds, flipping the springtail up to 6 cm into the air and 10 cm horizontally.
Researchers at Harvard University have now modified a cockroach-inspired robot to include a latch-mediated spring actuator, in which potential energy is stored in an elastic element – essentially a robotic fork-like furcula.
Using computer simulations and experiments in which they varied the length of the linkages in the furcula, as well as the energy stored in them, the team found that the robot could jump some 1.4 m horizontally, or 23 times its body length – the longest jump relative to body length of any existing robot.
The work could help design robots that can traverse places that are hazardous to humans.
“Walking provides a precise and efficient locomotion mode but is limited in terms of obstacle traversal,” notes Harvard’s Robert Wood. “Jumping can get over obstacles but is less controlled. The combination of the two modes can be effective for navigating natural and unstructured environments.”
The internal temperature of a building is important – particularly in offices and work environments – for maximizing comfort and productivity. Managing the temperature is also essential for reducing the energy consumption of a building. In the US, buildings account for around 29% of total end-use energy consumption, with more than 40% of this energy dedicated to managing the internal temperature of a building via heating and cooling.
The human body is sensitive to both radiative and convective heat. The convective part depends on humidity and air temperature, whereas radiative heat depends upon the temperatures of the surrounding surfaces inside the building. Understanding both thermal aspects is key for balancing energy consumption with occupant comfort. However, few practical methods are available for measuring the impact of radiative heat inside buildings. Researchers from the University of Minnesota Twin Cities have developed an optical sensor that could help solve this problem.
Limitations of thermostats for radiative heat
Room thermostats are used in almost every building today to regulate the internal temperature and improve the comfort levels for the occupants. However, modern thermostats only measure the local air temperature and don’t account for the effects of radiant heat exchange between surfaces and occupants, resulting in suboptimal comfort levels and inefficient energy use.
Finding a way to measure the mean radiant temperature in real time inside buildings could provide a more efficient way of heating the building, leading to more advanced and efficient thermostat controls. Currently, radiant temperature can be measured using either radiometers or black globe sensors. But radiometers are too expensive for commercial use, while black globe sensors are slow, bulky and error-prone in many internal environments.
In search of a new approach, first author Fatih Evren (now at Pacific Northwest National Laboratory) and colleagues used low-resolution, low-cost infrared sensors to measure the longwave mean radiant temperature inside buildings. These sensors eliminate the pan/tilt mechanism (where sensors rotate periodically to measure the temperature at different points and an algorithm determines the surface temperature distribution) required by many other sensors used to measure radiative heat. The new optical sensor also requires 4.5 times less computation power than pan/tilt approaches with the same resolution.
Integrating optical sensors to improve room comfort
The researchers tested infrared thermal array sensors with 32 × 32 pixels in four real-world environments (three living spaces and an office) with different room sizes and layouts. They examined three sensor configurations: one sensor on each of the room's four walls; a two-sensor setup; and a single sensor. The sensors measured the mean radiant temperature for 290 h at internal temperatures of between 18 and 26.8 °C.
The optical sensors capture raw 2D thermal data containing temperature information for adjacent walls, floor and ceiling. To determine surface temperature distributions from these raw data, the researchers used projective homographic transformations – mappings between two different geometric planes. By marking the corners of the room, each surface is associated with a homography matrix; applying the corresponding transformation to the raw thermal image yields the temperature distribution on that surface. The surface temperatures can then be used to calculate the mean radiant temperature.
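To make the geometry concrete, the sketch below (a minimal illustration, not the authors' code) uses the OpenCV library with made-up corner coordinates to warp one wall's portion of a 32 × 32 thermal frame onto a regular grid on that wall; repeating this for every surface gives the temperature distributions from which the mean radiant temperature is calculated.

```python
import numpy as np
import cv2  # OpenCV, assumed available for the homography utilities

# Stand-in 32 x 32 thermal frame (deg C) from one low-cost IR array sensor
thermal = np.random.uniform(18.0, 27.0, (32, 32)).astype(np.float32)

# Hypothetical pixel positions of one wall's corners in the thermal image,
# and a destination grid representing that wall at roughly 10 cm resolution
corners_px = np.float32([[3, 4], [28, 5], [29, 27], [2, 26]])
wall_w, wall_h = 40, 25   # e.g. a 4.0 m x 2.5 m wall sampled on a 40 x 25 grid
corners_dst = np.float32([[0, 0], [wall_w - 1, 0],
                          [wall_w - 1, wall_h - 1], [0, wall_h - 1]])

# Projective (homographic) transformation from the image plane to the wall plane
H = cv2.getPerspectiveTransform(corners_px, corners_dst)
wall_temps = cv2.warpPerspective(thermal, H, (wall_w, wall_h))

# With every surface processed this way, the mean radiant temperature follows
# from a view-factor-weighted average of the surface temperatures (in kelvin).
print(wall_temps.mean())
```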
The team compared the temperatures measured by their sensors against ground truth measurements obtained via the net-radiometer method. The optical sensor was found to be repeatable and reliable for different room sizes, layouts and temperature sensing scenarios, with most approaches agreeing within ±0.5 °C of the ground truth measurement, and a maximum error (arising from a single-sensor configuration) of only ±0.96 °C. The optical sensors were also more accurate than the black globe sensor method, which tends to have higher errors due to under/overestimating solar effects.
The researchers conclude that the sensors are repeatable, scalable and predictable, and that they could be integrated into room thermostats to improve human comfort and energy efficiency – especially for controlling the radiant heating and cooling systems now commonly used in high-performance buildings. They also note that a future direction could be to integrate machine learning and other advanced algorithms to improve the calibration of the sensors.
New statistical analyses of the supermassive black hole M87* may explain changes observed since it was first imaged. The findings, from the same Event Horizon Telescope (EHT) that produced the iconic first image of a black hole’s shadow, confirm that M87*’s rotational axis points away from Earth. The analyses also indicate that turbulence within the rotating envelope of gas that surrounds the black hole – the accretion disc – plays a role in changing its appearance.
The first image of M87*'s shadow was based on observations made in 2017, though the image itself was not released until 2019. It resembles a fiery doughnut, with the shadow appearing as a dark region around three times the diameter of the black hole's event horizon (the boundary beyond which even light cannot escape its gravitational pull) and the accretion disc forming a bright ring around it.
Because the shadow is caused by the gravitational bending and capture of light at the event horizon, its size and shape can be used to infer the black hole’s mass. The larger the shadow, the higher the mass. In 2019, the EHT team calculated that M87* has a mass of about 6.5 billion times that of our Sun, in line with previous theoretical predictions. Team members also determined that the radius of the event horizon is 3.8 micro-arcseconds; that the black hole is rotating in a clockwise direction; and that its spin points away from us.
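As a rough consistency check (not part of the EHT analysis itself), the quoted angular scale can be reproduced in a few lines, assuming the commonly used distance to M87 of about 16.8 Mpc:

```python
import math

# Illustrative calculation with the EHT mass estimate and an assumed distance
G, c = 6.674e-11, 2.998e8          # gravitational constant, speed of light (SI)
M_sun, pc = 1.989e30, 3.086e16     # solar mass in kg, parsec in metres

M = 6.5e9 * M_sun                  # mass of M87*
D = 16.8e6 * pc                    # distance to M87

theta_g = G * M / (c**2 * D)                      # angular gravitational radius, rad
print(theta_g * (180 / math.pi) * 3600 * 1e6)     # ~3.8 micro-arcseconds
```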
Hot and violent region
The latest analysis focuses less on the shadow and more on the bright ring outside it. As matter accelerates, it produces huge amounts of light. In the vicinity of the black hole, this acceleration occurs as matter is sucked into the black hole, but it also arises when matter is blasted out in jets. The way these jets form is still not fully understood, but some astrophysicists think magnetic fields could be responsible. Indeed, in 2021, when researchers working on the EHT analysed the polarization of light emitted from the bright region, they concluded that only the presence of a strongly magnetized gas could explain their observations.
The team has now combined an analysis of EHT observations made in 2018 with a re-analysis of the 2017 results using a Bayesian approach. This statistical technique, applied for the first time in this context, treats the two sets of observations as independent experiments. This is possible because the event horizon of M87* is about a light-day across, so the accretion disc should present a new version of itself every few days, explains team member Avery Broderick of the Perimeter Institute and the University of Waterloo, both in Canada. In more technical language, the gap between observations exceeds the correlation timescale of the turbulent environment surrounding the black hole.
New result reinforces previous interpretations
The part of the ring that appears brightest to us stems from the relativistic movement of material in a clockwise direction as seen from Earth. In the original 2017 observations, this bright region was further “south” on the image than the EHT team expected. However, when members of the team compared these observations with those from 2018, they found that the region reverted to its mean position. This result corroborated computer simulations of the general relativistic magnetohydrodynamics of the turbulent environment surrounding the black hole.
Even in the 2018 observations, though, the ring remains brightest at the bottom of the image. According to team member Bidisha Bandyopadhyay, a postdoctoral researcher at the Universidad de Concepción in Chile, this finding provides substantial information about the black hole’s spin and reinforces the EHT team’s previous interpretation of its orientation: the black hole’s rotational axis is pointing away from Earth. The analyses also reveal that the turbulence within the accretion disc can help explain the differences observed in the bright region from one year to the next.
Very long baseline interferometry
To observe M87* in detail, the EHT team needed an instrument with an angular resolution comparable to the black hole’s event horizon, which is around tens of micro-arcseconds across. Achieving this resolution with an ordinary telescope would require a dish the size of the Earth, which is clearly not possible. Instead, the EHT uses very long baseline interferometry, which involves detecting radio signals from an astronomical source using a network of individual radio telescopes and telescopic arrays spread across the globe.
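The baseline needed follows from the usual diffraction limit, θ ≈ λ/B. With illustrative numbers – the EHT's 1.3 mm observing wavelength and a baseline comparable to the Earth's diameter – the resolution comes out at a few tens of micro-arcseconds:

```python
import math

wavelength = 1.3e-3     # EHT observing wavelength, metres
baseline = 1.27e7       # roughly the diameter of the Earth, metres

theta = wavelength / baseline                     # diffraction-limited resolution, rad
print(theta * (180 / math.pi) * 3600 * 1e6)       # ~20 micro-arcseconds
```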
“This work demonstrates the power of multi-epoch analysis at horizon scale, providing a new statistical approach to studying the dynamical behaviour of black hole systems,” says EHT team member Hung-Yi Pu from National Taiwan Normal University. “The methodology we employed opens the door to deeper investigations of black hole accretion and variability, offering a more systematic way to characterize their physical properties over time.”
Looking ahead, the EHT astronomers plan to continue analysing observations made in 2021 and 2022. With these results, they aim to place even tighter constraints on models of black hole accretion environments. “Extending multi-epoch analysis to the polarization properties of M87* will also provide deeper insights into the astrophysics of strong gravity and magnetized plasma near the event horizon,” EHT management team member Rocco Lico tells Physics World.
A new technique for using frequency combs to measure trace concentrations of gas molecules has been developed by researchers in the US. The team reports single-digit parts-per-trillion detection sensitivity and extremely broadband coverage spanning more than 1000 cm⁻¹. This record-level sensing performance could open up a variety of hitherto inaccessible applications in fields such as medicine, environmental chemistry and chemical kinetics.
Each molecular species will absorb light at a specific set of frequencies. So, shining light through a sample of gas and measuring this absorption can reveal the molecular composition of the gas.
Cavity ringdown spectroscopy is an established way to increase the sensitivity of absorption spectroscopy, and it needs no calibration. Laser light is injected into a cavity formed between two mirrors, creating an optical standing wave. A sample of gas injected into the cavity is then traversed by the light, normally many thousands of times. The absorption of light by the gas is determined from the rate at which the intracavity light intensity “rings down” – in other words, the rate at which the standing wave decays away.
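In its simplest textbook form (a sketch with assumed numbers, not the JILA group's analysis), comparing the ring-down times measured with and without the gas gives the absorption coefficient directly, with no intensity calibration required:

```python
c = 2.998e8        # speed of light, m/s
tau_empty = 10e-6  # assumed ring-down time of the empty cavity, s
tau_gas = 9.5e-6   # assumed ring-down time with the absorbing gas loaded, s

# Extra optical loss per unit length introduced by the gas
alpha = (1 / tau_gas - 1 / tau_empty) / c
print(f"absorption coefficient ~ {alpha:.1e} per metre")
```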
Researchers have used this method with frequency comb lasers to probe the absorption of gas samples at a range of different light frequencies. A frequency comb produces light at a series of very sharp intensity peaks that are equidistant in frequency – resembling the teeth of a comb.
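Each tooth sits at a frequency given by the standard comb relation f_n = f_ceo + n·f_rep; with illustrative (assumed) values for the repetition rate and offset frequency, a single tooth index pins down an optical frequency to extraordinary precision:

```python
f_rep = 250e6    # assumed repetition rate, Hz
f_ceo = 20e6     # assumed carrier-envelope offset frequency, Hz

n = 1_200_000                 # tooth index (illustrative)
f_n = f_ceo + n * f_rep       # optical frequency of that comb tooth
print(f"{f_n:.3e} Hz")        # ~3e14 Hz, i.e. a wavelength of about 1 micrometre
```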
Shifting resonances
However, the more reflective the mirrors become (the higher the cavity finesse), the narrower each cavity resonance becomes. Because the resonance frequencies are not evenly spaced and can be shifted substantially by the loaded gas, researchers normally oscillate the length of the cavity so that all the resonance frequencies sweep back and forth across the comb lines. Multiple resonances are sequentially excited, and the transient comb intensity dynamics are captured by a camera after being spatially separated by an optical grating.
“That experimental scheme works in the near-infrared, but not in the mid-infrared,” says Qizhong Liang. “Mid-infrared cameras are not fast enough to capture those dynamics yet.” This is a problem because the mid-infrared is where many molecules can be identified by their unique absorption spectra.
Liang is a member of Jun Ye's group at JILA in Colorado, which has shown that it is possible to measure transient comb dynamics simply with a Michelson interferometer. The spectrometer requires only beam splitters, a delay stage and photodetectors. The researchers worked out that the periodically generated intensity dynamics arising from each tooth of the frequency comb can be detected as a set of Fourier components offset by Doppler frequency shifts. Absorption by the loaded gas can thus be determined.
Dithering the cavity
This process of reading out the transient dynamics of a “dithered” cavity with a passive Michelson interferometer is much simpler than previous set-ups and can therefore be used by people with little experience with combs, says Liang. It also places no restrictions on the finesse of the cavity, spectral resolution or spectral coverage. “If you're dithering the cavity resonances, then no matter how narrow the cavity resonance is, it's guaranteed that the comb lines can be deterministically coupled to the cavity resonance twice per cavity round trip modulation,” he explains.
The researchers reported detections of various molecules at concentrations as low as parts-per-billion, with parts-per-trillion uncertainty, in exhaled air from volunteers. This included biomedically relevant molecules such as acetone, which is a sign of diabetes, and formaldehyde, which is diagnostic of lung cancer. “Detection of molecules in exhaled breath in medicine has been done in the past,” explains Liang. “The more important point here is that, even if you have no prior knowledge about what the gas sample composition is, be it in industrial applications, environmental science applications or whatever, you can still use it.”
Konstantin Vodopyanov of the University of Central Florida in Orlando comments: “This achievement is remarkable, as it integrates two cutting-edge techniques: cavity ringdown spectroscopy, where a high-finesse optical cavity dramatically extends the laser beam’s path to enhance sensitivity in detecting weak molecular resonances, and frequency combs, which serve as a precise frequency ruler composed of ultra-sharp spectral lines. By further refining the spectral resolution to the Doppler broadening limit of less than 100 MHz and referencing the absolute frequency scale to a reliable frequency standard, this technology holds great promise for applications such as trace gas detection and medical breath analysis.”
In this episode of the Physics World Weekly podcast, online editor Margaret Harris chats about her recent trip to CERN. There, she caught up with physicists working on some of the lab’s most exciting experiments and heard from CERN’s current and future leaders.
Founded in Geneva in 1954, today CERN is most famous for the Large Hadron Collider (LHC), which is currently in its winter shutdown. Harris describes her descent 100 m below ground level to visit the huge ATLAS detector and explains why some of its components will soon be updated as part of the LHC’s upcoming high luminosity upgrade.
She explains why new “crab cavities” will boost the number of particle collisions at the LHC. Among other things, this will allow physicists to better study how Higgs bosons interact with each other, which could provide important insights into the early universe.
Harris describes her visit to CERN’s Antimatter Factory, which hosts several experiments that are benefitting from a 2021 upgrade to the lab’s source of antiprotons. These experiments measure properties of antimatter – such as its response to gravity – to see if its behaviour differs from that of normal matter.
Harris also heard about the future of the lab from CERN’s director general Fabiola Gianotti and her successor Mark Thomson, who will take over next year.
Something extraordinary happened on Earth around 10 million years ago, and whatever it was, it left behind a “signature” of radioactive beryllium-10. This finding, which is based on studies of rocks located deep beneath the ocean, could be evidence for a previously-unknown cosmic event or major changes in ocean circulation. With further study, the newly-discovered beryllium anomaly could also become an independent time marker for the geological record.
Most of the beryllium-10 found on Earth originates in the upper atmosphere, where it forms when cosmic rays interact with oxygen and nitrogen molecules. Afterwards, it attaches to aerosols, falls to the ground and is transported into the oceans. Eventually, it reaches the seabed and accumulates, becoming part of what scientists call one of the most pristine geological archives on Earth.
Because beryllium-10 has a half-life of 1.4 million years, it is possible to use its abundance to pin down the dates of geological samples that are more than 10 million years old. This is far beyond the limits of radiocarbon dating, which relies on an isotope (carbon-14) with a half-life of just 5730 years, and can only date samples less than 50 000 years old.
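The difference is easy to quantify with the standard decay law N(t) = N₀ × 2^(−t/t½), sketched here with illustrative figures:

```python
def fraction_remaining(age_years, half_life_years):
    """Fraction of a radioisotope surviving after a given time."""
    return 0.5 ** (age_years / half_life_years)

# Roughly 0.7% of beryllium-10 survives 10 million years (still measurable
# by accelerator mass spectrometry), whereas carbon-14 from that era has
# decayed away completely.
print(fraction_remaining(10e6, 1.4e6))   # ~7e-3
print(fraction_remaining(10e6, 5730))    # ~0 (underflows to zero)
```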
Almost twice as much 10Be as expected
In the new work, which is detailed in Nature Communications, physicists in Germany and Australia measured the amount of beryllium-10 in geological samples taken from the Pacific Ocean. The samples are primarily made up of iron and manganese and formed slowly over millions of years. To date them, the team used a technique called accelerator mass spectrometry (AMS) at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR). This method can distinguish beryllium-10 from its decay product, boron-10, which has the same mass, and from other beryllium isotopes.
The researchers found that samples dated to around 10 million years ago, a period known as the late Miocene, contained almost twice as much beryllium-10 as they expected to see. The source of this overabundance is a mystery, says team member Dominik Koll, but he offers three possible explanations. The first is that changes to the ocean circulation near the Antarctic, which scientists recently identified as occurring between 10 and 12 million years ago, could have distributed beryllium-10 unevenly across the Earth. “Beryllium-10 might thus have become particularly concentrated in the Pacific Ocean,” says Koll, a postdoctoral researcher at TU Dresden and an honorary lecturer at the Australian National University.
Another possibility is that a supernova exploded in our galactic neighbourhood 10 million years ago, producing a temporary increase in cosmic radiation. The third option is that the Sun’s magnetic shield, which deflects cosmic rays away from the Earth, became weaker through a collision with an interstellar cloud, making our planet more vulnerable to cosmic rays. Both scenarios would have increased the amount of beryllium-10 that fell to Earth without affecting its geographic distribution.
To distinguish between these competing hypotheses, the researchers now plan to analyse additional samples from different locations on Earth. “If the anomaly were found everywhere, then the astrophysics hypothesis would be supported,” Koll says. “But if it were detected only in specific regions, the explanation involving altered ocean currents would be more plausible.”
Whatever the reason for the anomaly, Koll suggests it could serve as a cosmogenic time marker for periods spanning millions of years – a kind of marker that does not yet exist. “We hope that other research groups will also investigate their deep-ocean samples in the relevant period to eventually come to a definitive answer on the origin of the anomaly,” he tells Physics World.
The private firm Intuitive Machines has launched a lunar lander to test extraction methods for water and volatile gases. The six-legged Moon lander, dubbed Athena, took off yesterday aboard a SpaceX Falcon 9 rocket from NASA's Kennedy Space Center in Florida. Also aboard the rocket was NASA's Lunar Trailblazer – a lunar orbiter that will investigate water on the Moon and its geology.
In February 2024, Intuitive Machines’ Odysseus mission became the first US mission to make a soft landing on the Moon since Apollo 17 and the first private craft to do so. After a few hiccups during landing, the mission carried out measurements with an optical and radio telescope before it ended seven days later.
Athena is Intuitive Machines' second lunar lander, part of the firm's quest to build the infrastructure on the Moon that will be required for long-term lunar exploration.
The lander, which stands almost five metres tall, aims to touch down in the Mons Mouton region, about 160 km from the lunar south pole.
It will use a drill to bore one metre into the surface and test the extraction of substances – including volatiles such as carbon dioxide as well as water – that it will then analyse with a mass spectrometer.
Athena also contains a “hopper” dubbed Grace that can travel up to 25 kilometres on the lunar surface. Carrying about 10 kg of payloads, the rocket-propelled drone will aim to take images of the lunar surface and explore nearby craters.
As well as Grace, Athena carries two rovers. MAPP, built by Lunar Outpost, will autonomously navigate the lunar surface, while a small, lightweight rover dubbed Yaoki, built by the Japanese firm Dymon, will explore the Moon within 50 metres of the lander.
Athena is part of NASA’s $2.6bn Commercial Lunar Payload Services initiative, which contracts the private sector to develop missions with the aim of reducing costs.
Taking the Moon’s temperature
Lunar Trailblazer, meanwhile, will spend two years orbiting the Moon in a 100 km altitude polar orbit. Weighing 200 kg and about the size of a washing machine, it will map the distribution of water on the Moon's surface about 12 times a day with a resolution of about 50 metres.
While it is known that water exists on the lunar surface, little is known about its form, abundance, distribution or how it arrived. Hypotheses range from “wet” asteroids crashing into the Moon to volcanic eruptions producing water vapour from the Moon's interior.
Water hunter: NASA’s Lunar Trailblazer will spend two years mapping the distribution of water on the surface of the Moon (courtesy: Lockheed Martin Space for Lunar Trailblazer)
To help answer that question, the craft will examine water deposits via an imaging spectrometer dubbed the High-resolution Volatiles and Minerals Moon Mapper that has been built by NASA’s Jet Propulsion Laboratory.
A thermal mapper developed by the University of Oxford, meanwhile, will plot the temperature of the Moon's surface and help to confirm the presence and location of water.
Lunar Trailblazer was selected in 2019 as part of NASA’s Small Innovative Missions for Planetary Exploration programme.
While the biology of how an entire organism develops from a single cell has long been a source of fascination, recent research has increasingly highlighted the role of mechanical forces. “If we want to have rigorous predictive models of morphogenesis, of tissues and cells forming organs of an animal,” says Konstantin Doubrovinski at the University of Texas Southwestern Medical Center, “it is absolutely critical that we have a clear understanding of material properties of these tissues.”
Now Doubrovinski and his colleagues report a rheological study explaining why the developing fruit fly (Drosophila melanogaster) epithelial tissue stretches as it does over time to allow the embryo to change shape.
Previous studies had shown that under a constant force, tissue extension was proportional to the time the force had been applied to the power of one half. This had puzzled the researchers, since it did not fit a simple model in which epithelial tissues behave like linear springs. In such a model, the extension obeys Hooke’s law and is proportional to the force applied alone, such that the exponent of time in the relation would be zero.
They and other groups had tried to explain this observation of an exponent equal to 0.5 as due to the viscosity of the medium surrounding the cells, which would lead to deformation near the point of pulling that then gradually spreads. However, their subsequent experiments ruled out viscosity as a cause of the non-zero exponent.
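The signature in question is easiest to see on a log–log plot, where a power law x ∝ t^α appears as a straight line of slope α. The short sketch below (synthetic data, not the group's measurements) shows how such an exponent is extracted:

```python
import numpy as np

# Synthetic creep data following the observed law: extension ~ t^0.5
t = np.logspace(-1, 2.5, 50)    # times from 0.1 s to about 5 minutes
x = 3.0 * np.sqrt(t)            # stand-in extension values

# A straight-line fit in log-log space recovers the exponent
slope, intercept = np.polyfit(np.log(t), np.log(x), 1)
print(round(slope, 2))          # 0.5; a Hookean (spring-like) response would give 0
```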
Tissue pulling experiments: schematic showing how a ferrofluid droplet positioned inside one cell is used to stretch the epithelium via an external magnetic field. The lower images are snapshots from an in vivo measurement. (Courtesy: Konstantin Doubrovinski/bioRxiv 10.1101/2023.09.12.557407)
For their measurements, the researchers exploited a convenient feature of Drosophila epithelial cells – a small hole through which a droplet of ferrofluid can be introduced into the cell using a permanent magnet. Once the droplet was inside, a magnet acting on it could exert forces on the cell to stretch the surrounding tissue.
For the current study, the researchers first tested the observed scaling law over longer periods of time. A power law gives a straight line on a log–log plot but as Doubrovinski points out, curves also look like straight lines over short sections. However, even when they increased the time scales probed in their experiments to cover three orders of magnitude – from fractions of a second to several minutes – the observed power law still held.
Understanding the results
One of the postdocs on the team – Mohamad Ibrahim Cheikh – stumbled upon the relation behind the power law with an exponent of 0.5 while working on a largely unrelated problem. He had been modelling ellipsoids in a hexagonal meshwork on a surface, in what Doubrovinski describes as a “large” and “relatively complex” simulation. He decided to examine what would happen if he allowed the mesh to relax in its stretched position, which would model the process of actin turnover in cells.
Cheikh’s simulation gave the power law observed in the epithelial cells. “We totally didn’t expect it,” says Doubrovinski. “We pursued it and thought, why are we getting it? What’s going on here?”
Although this simulation yielded the power law with an exponent of 0.5, because the simulation was so complex, it was hard to get a handle on why. “There are all these different physical effects that we took into account that we thought were relevant,” he tells Physics World.
To get a more intuitive understanding of the system, the researchers attempted to simplify the model into a lattice of springs in one dimension, keeping only some of the physical effects from the simulations, until they identified the effects required to give the exponent value of 0.5. They could then scale this simplified one-dimensional model back up to three dimensions and test how it behaved.
According to their model, if they changed the magnitude of various parameters, they should be able to rescale the curves so that they essentially collapse onto a single curve. “This makes our prediction falsifiable,” says Doubrovinski, and in fact the experimental curves could be rescaled in this way.
When the researchers used measured values for the relaxation constant based on the actin turnover rate, along with other known parameters such as the size of the force and the size of the extension, they were able to calculate the force constant of the epithelial cell. This value also agreed with their previous estimates.
Doubrovinski explains how the ferrofluid droplet engages with individual “springs” of the lattice as it moves through the mesh. “The further it moves, the more springs it catches on,” he says. “So the rapid increase of one turns into a slow increase with an exponent of 0.5.” With this model, all the pieces fall into place.
“I find it inspiring that the authors, first motivated by in vivo mechanical measurements, could develop a simple theory capturing a new phenomenological law of tissue rheology,” says Pierre-François Lenne, group leader at the Institut de Biologie du Développement de Marseille at Aix-Marseille Université. Lenne specializes in the morphogenesis of multicellular systems but was not involved in the current research.
Next, Doubrovinski and his team are keen to see where else their results might apply, such as in other developmental stages or in other organisms, including mammals.
Quantum-inspired “tensor networks” can simulate the behaviour of turbulent fluids in just a few hours rather than the several days required for a classical algorithm. The new technique, developed by physicists in the UK, Germany and the US, could advance our understanding of turbulence, which has been called one of the greatest unsolved problems of classical physics.
Turbulence is all around us, found in weather patterns, water flowing from a tap or a river and in many astrophysical phenomena. It is also important for many industrial processes. However, the way in which turbulence arises and then sustains itself is still not understood, despite the seemingly simple and deterministic physical laws governing it.
The reason for this is that turbulence is characterized by large numbers of eddies and swirls of differing shapes and sizes that interact in chaotic and unpredictable ways across a wide range of spatial and temporal scales. Such fluctuations are difficult to simulate accurately, even using powerful supercomputers, because doing so requires solving sets of coupled partial differential equations on very fine grids.
An alternative is to treat turbulence in a probabilistic way. In this case, the properties of the flow are defined as random variables that are distributed according to mathematical relationships called joint Fokker-Planck probability density functions. These functions are neither chaotic nor multiscale, so they are straightforward to derive. However, they are nevertheless challenging to solve because of the high number of dimensions contained in turbulent flows.
For this reason, the probability density function approach was widely considered to be computationally infeasible. In response, researchers turned to indirect Monte Carlo algorithms to perform probabilistic turbulence simulations. However, while this approach has chalked up some notable successes, it can be slow to yield results.
Highly compressed “tensor networks”
To overcome this problem, a team led by Nikita Gourianov of the University of Oxford, UK, decided to encode turbulence probability density functions as highly compressed “tensor networks” rather than simulating the fluctuations themselves. Such networks have already been used to simulate otherwise intractable quantum systems like superconductors, ferromagnets and quantum computers, they say.
These quantum-inspired tensor networks represent the turbulence probability distributions in a hyper-compressed format, which then allows them to be simulated. By simulating the probability distributions directly, the researchers can then extract important parameters, such as lift and drag, that describe turbulent flow.
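The idea can be illustrated with a toy example (not the authors' algorithm): a joint probability table over many variables is factorized into a chain of small tensors, a matrix product state, by repeated truncated singular-value decompositions, trading a small approximation error for a large reduction in the number of stored values.

```python
import numpy as np

# Toy illustration: compress a joint distribution over 12 binary variables
# into a matrix product state (a simple tensor network) via truncated SVDs.
n, chi = 12, 8                          # number of variables, max bond dimension
p = np.random.rand(*(2,) * n)           # stand-in joint probability table
p /= p.sum()

cores, mat, rank = [], p.reshape(1, -1), 1
for _ in range(n - 1):
    u, s, vt = np.linalg.svd(mat.reshape(rank * 2, -1), full_matrices=False)
    keep = min(chi, len(s))                            # truncate the bond dimension
    cores.append(u[:, :keep].reshape(rank, 2, keep))   # one small tensor per variable
    mat, rank = np.diag(s[:keep]) @ vt[:keep], keep
cores.append(mat.reshape(rank, 2, 1))

print(sum(c.size for c in cores), "stored numbers instead of", p.size)
```

For the far higher-dimensional probability density functions of turbulence, the compression is much more dramatic, which is what makes the direct probabilistic simulation tractable.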
Importantly, the new technique allows an ordinary single CPU (central processing unit) core to compute a turbulent flow in just a few hours, compared to several days using a classical algorithm on a supercomputer.
This significantly improved way of simulating turbulence could be particularly useful in the area of chemically reactive flows in areas such as combustion, says Gourianov. “Our work also opens up the possibility of probabilistic simulations for all kinds of chaotic systems, including weather or perhaps even the stock markets,” he adds.
The researchers now plan to apply tensor networks to deep learning, a form of machine learning that uses artificial neural networks. “Neural networks are famously over-parameterized and there are several publications showing that they can be compressed by orders of magnitude in size simply by representing their layers as tensor networks,” Gourianov tells Physics World.
Vacuum technology is routinely used in both scientific research and industrial processes. In physics, high-quality vacuum systems make it possible to study materials under extremely clean and stable conditions. In industry, vacuum is used to lift, position and move objects precisely and reliably. Without these technologies, a great deal of research and development would simply not happen. But for all its advantages, working under vacuum does come with certain challenges. For example, once something is inside a vacuum system, how do you manipulate it without opening the system up?
Heavy duty: The new transfer arm. (Courtesy: UHV Design)
The UK-based firm UHV Design has been working on this problem for over a quarter of a century, developing and manufacturing vacuum manipulation solutions for new research disciplines as well as emerging industrial applications. Its products, which are based on magnetically coupled linear and rotary probes, are widely used at laboratories around the world, in areas ranging from nanoscience to synchrotron and beamline applications. According to engineering director Jonty Eyres, the firm’s latest innovation – a new sample transfer arm released at the beginning of this year – extends this well-established range into new territory.
“The new product is a magnetically coupled probe that allows you to move a sample from point A to point B in a vacuum system,” Eyres explains. “It was designed to have an order of magnitude improvement in terms of both linear and rotary motion thanks to the magnets in it being arranged in a particular way. It is thus able to move and position objects that are much heavier than was previously possible.”
The new sample arm, Eyres explains, is made up of a vacuum “envelope” comprising a welded flange and tube assembly. This assembly has an outer magnet array that magnetically couples to an inner magnet array attached to an output shaft. The output shaft extends beyond the mounting flange and incorporates a support bearing assembly. “Depending on the model, the shafts can either be in one or more axes: they move samples around either linearly, linear/rotary or incorporating a dual axis to actuate a gripper or equivalent elevating plate,” Eyres says.
Continual development, review and improvement
While similar devices are already on the market, Eyres says that the new product has a significantly larger magnetic coupling strength in terms of its linear thrust and rotary torque. These features were developed in close collaboration with customers who expressed a need for arms that could carry heavier payloads and move them with more precision. In particular, Eyres notes that in the original product, the maximum weight that could be placed on the end of the shaft – a parameter that depends on the stiffness of the shaft as well as the magnetic coupling strength – was too small for these customers’ applications.
“From our point of view, it was not so much the magnetic coupling that needed to be reviewed, but the stiffness of the device in terms of the size of the shaft that extends out to the vacuum system,” Eyres explains. “The new arm deflects much less from its original position even with a heavier load and when moving objects over longer distances.”
The new product – a scaled-up version of the original – can support a load of up to 50 N (equivalent to a mass of about 5 kg) over an axial stroke of up to 1.5 m. Eyres notes that it also requires minimal maintenance, which is important for moving higher loads. “It is thus targeted to customers who wish to move larger objects around over longer periods of time without having to worry about intervening too often,” he says.
Moving multiple objects
As well as moving larger, single objects, the new arm’s capabilities make it suitable for moving multiple objects at once. “Rather than having one sample go through at a time, we might want to nest three or four samples onto a large plate, which inevitably increases the size of the overall object,” Eyres explains.
Before they created this product, he continues, he and his UHV Design colleagues were not aware of any magnetic coupled solution on the marketplace that enabled users to do this. “As well as being capable of moving heavy samples, our product can also move lighter samples, but with a lot less shaft deflection over the stroke of the product,” he says. “This could be important for researchers, particularly if they are limited in space or if they wish to avoid adding costly supports in their vacuum system.”
Researchers at Microsoft in the US claim to have made the first topological quantum bit (qubit) – a potentially transformative device that could make quantum computing robust against the errors that currently restrict what it can achieve. “If the claim stands, it would be a scientific milestone for the field of topological quantum computing and physics beyond,” says Scott Aaronson, a computer scientist at the University of Texas at Austin.
However, the claim is controversial because the evidence supporting it has not yet been presented in a peer-reviewed paper. It is made in a press release from Microsoft accompanying a paper in Nature (638 651) that has been written by more than 160 researchers from the company’s Azure Quantum team. The paper stops short of claiming a topological qubit but instead reports some of the key device characterization underpinning it.
Writing in a peer-review file accompanying the paper, the Nature editorial team says that it sought additional input from two of the article’s reviewers to “establish its technical correctness”, concluding that “the results in this manuscript do not represent evidence for the presence of Majorana zero modes [MZMs] in the reported devices”. An MZM is a quasiparticle (a particle-like collective electronic state) that can act as a topological qubit.
“That’s a big no-no”
“The peer-reviewed publication is quite clear [that it contains] no proof for topological qubits,” says Winfried Hensinger, a physicist at the University of Sussex who works on quantum computing using trapped ions. “But the press release speaks differently. In academia that’s a big no-no: you shouldn’t make claims that are not supported by a peer-reviewed publication” – or that have at least been presented in a preprint.
Chetan Nayak, leader of Microsoft Azure Quantum, which is based in Redmond, Washington, says that the evidence for a topological qubit was obtained in the period between submission of the paper in March 2024 and its publication. He will present those results at a talk at the Global Physics Summit of the American Physical Society in Anaheim in March.
But Hensinger is concerned that “the press release doesn’t make it clear what the paper does and doesn’t contain”. He worries that some might conclude that the strong claim of having made a topological qubit is now supported by a paper in Nature. “We don’t need to make these claims – that is just unhealthy and will really hurt the field,” he says, because it could lead to unrealistic expectations about what quantum computers can do.
As with the qubits used in current quantum computers, such as superconducting components or trapped ions, MZMs would be able to encode superpositions of the two readout states (representing a 1 or 0). By quantum-entangling such qubits, information could be manipulated in ways not possible for classical computers, greatly speeding up certain kinds of computation. In MZMs the two states are distinguished by “parity”: whether the quasiparticles contain even or odd numbers of electrons.
Built-in error protection
As MZMs are “topological” states, their settings cannot easily be flipped by random fluctuations to introduce errors into the calculation. Rather, the states are like a twist in a buckled belt that cannot be smoothed out unless the buckle is undone. Topological qubits would therefore suffer far less from the errors that afflict current quantum computers, and which limit the scale of the computations they can support. Because quantum error correction is one of the most challenging issues for scaling up quantum computers, “we want some built-in level of error protection”, explains Nayak.
It has long been thought that MZMs might be produced at the ends of nanoscale wires made of a superconducting material. Indeed, Microsoft researchers have been trying for several years to fabricate such structures and look for the characteristic signature of MZMs at their tips. But it can be hard to distinguish this signature from those of other electronic states that can form in these structures.
In 2018 researchers at labs in the US and the Netherlands (including the Delft University of Technology and Microsoft) claimed to have evidence of an MZM in such devices. However, they then had to retract the work after others raised problems with the data. “That history is making some experts cautious about the new claim,” says Aaronson.
Now, though, it seems that Nayak and colleagues have cracked the technical challenges. In the Nature paper, they report measurements in a nanowire heterostructure made of superconducting aluminium and semiconducting indium arsenide that are consistent with, but not definitive proof of, MZMs forming at the two ends. The crucial advance is an ability to accurately measure the parity of the electronic states. “The paper shows that we can do these measurements fast and accurately,” says Nayak.
“The device is a remarkable achievement from the materials science and fabrication standpoint,” says Ivar Martin, a materials scientist at Argonne National Laboratory in the US. “They have been working hard on these problems, and seems like they are nearing getting the complexities under control.” In the press release, the Microsoft team claims now to have put eight MZM topological qubits on a chip called Majorana 1, which is designed to house a million of them (see figure).
Even if the Microsoft claim stands up, a lot will still need to be done to get from a single MZM to a quantum computer, says Hensinger. Topological quantum computing is “probably 20–30 years behind the other platforms”, he says. Martin agrees. “Even if everything checks out and what they have realized are MZMs, cleaning them up to take full advantage of topological protection will still require significant effort,” he says.
Regardless of the debate about the results and how they have been announced, researchers are supportive of the efforts at Microsoft to produce a topological quantum computer. “As a scientist who likes to see things tried, I’m grateful that at least one player stuck with the topological approach even when it ended up being a long, painful slog,” says Aaronson.
“Most governments won’t fund such work, because it’s way too risky and expensive,” adds Hensinger. “So it’s very nice to see that Microsoft is stepping in there.”
Solid-state batteries are considered a next-generation energy storage technology, as they promise higher energy density and safety than lithium-ion batteries with a liquid electrolyte. However, major obstacles to commercialization are the requirement for high stack pressures and insufficient power density. Both aspects are closely related to limitations of charge transport within the composite cathode.
This webinar presents an introduction to using electrochemical impedance spectroscopy to investigate composite cathode microstructures and identify kinetic bottlenecks. Effective conductivities can be obtained using transmission line models and used to evaluate the main factors limiting electronic and ionic charge transport.
In combination with high-resolution 3D imaging techniques and electrochemical cell cycling, the crucial role of the cathode microstructure can be revealed, the relevant factors influencing cathode performance identified, and optimization strategies for improved performance derived.
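For illustration only, a generic blocking-electrode transmission-line model (a textbook form, not necessarily the exact model used by the presenters) relates the measured impedance to the total ionic resistance R_ion and interfacial capacitance C as Z(ω) = √(R_ion/(jωC))·coth(√(jωR_ion·C)); the low-frequency real part tends to R_ion/3, which is one way an effective ionic conductivity can be extracted from a fit:

```python
import numpy as np

def tlm_impedance(freq_hz, r_ion=50.0, c_dl=1e-3):
    """Blocking-electrode transmission-line model with assumed example values."""
    jw = 2j * np.pi * np.asarray(freq_hz)
    x = np.sqrt(jw * r_ion * c_dl)
    return np.sqrt(r_ion / (jw * c_dl)) / np.tanh(x)   # coth(x) = 1/tanh(x)

freqs = np.logspace(-2, 4, 7)                          # 10 mHz to 10 kHz
for f, z in zip(freqs, tlm_impedance(freqs)):
    print(f"{f:9.2e} Hz   Re(Z) = {z.real:7.2f} ohm")
# Re(Z) approaches r_ion/3 (about 16.7 ohm here) at low frequency.
```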
Philip Minnmann
Philip Minnmann received his M.Sc. in Material Science from RWTH Aachen University. He later joined Prof. Jürgen Janek's group at JLU Giessen as part of the BMBF Cluster of Competence for Solid-State Batteries FestBatt. During his Ph.D., he worked on composite cathode characterization for sulfide-based solid-state batteries, as well as on processing scalable, slurry-based solid-state batteries. Since 2023, he has been a project manager for high-throughput battery material research at HTE GmbH.
Johannes Schubert
Johannes Schubert holds an M.Sc. in Material Science from the Justus-Liebig University Giessen, Germany. He is currently a Ph.D. student in the research group of Prof. Jürgen Janek in Giessen, where he is part of the BMBF Competence Cluster for Solid-State Batteries FestBatt. His main research focuses on characterization and optimization of composite cathodes with sulfide-based solid electrolytes.
The fusion physicist Ian Chapman is to be the next head of UK Research and Innovation (UKRI) – the UK's biggest public research funder. He will take up the position in June, replacing the geneticist Ottoline Leyser, who has held the position since 2020.
UK science minister Patrick Vallance notes that Chapman’s “leadership experience, scientific expertise and academic achievements make him an exceptionally strong candidate to lead UKRI”.
UKRI chairman Andrew Mackenzie, meanwhile, states that Chapman “has the skills, experience, leadership and commitment to unlock this opportunity to improve the lives and livelihoods of everyone”.
Hard act to follow
After gaining an MSc in mathematics and physics from Durham University, Chapman completed a PhD at Imperial College London in fusion science, which he partly did at Culham Science Centre in Oxfordshire.
In 2014 he became head of tokamak science at Culham and then became fusion programme manager a year later. In 2016, aged just 34, he was named chief executive of the UK Atomic Energy Authority (UKAEA), which saw him lead the UK’s magnetic confinement fusion research programme at Culham.
In that role he oversaw an upgrade to the lab’s Mega Amp Spherical Tokamak as well as the final operation of the Joint European Torus (JET) – one of the world’s largest nuclear fusion devices – that closed in 2024.
Chapman also played a part in planning a prototype fusion power plant. Known as the Spherical Tokamak for Energy Production (STEP), it was first announced by the UK government in 2019, with operations expected to begin in the 2040s. STEP aims to prove the commercial viability of fusion by demonstrating net energy, fuel self-sufficiency and a viable route to plant maintenance.
Chapman, who currently sits on UKRI’s board, says that he is “excited” to take over as head of UKRI. “Research and innovation must be central to the prosperity of our society and our economy, so UKRI can shape the future of the country,” he notes. “I was tremendously fortunate to represent UKAEA, an organisation at the forefront of global research and innovation of fusion energy, and I look forward to building on those experiences to enable the wider UK research and innovation sector.”
The UKAEA has announced that its current deputy chief executive, Tim Bestwick, will serve as interim head until a permanent replacement is found.
Steve Cowley, director of the Princeton Plasma Physics Laboratory in the US and a former chief executive of UKAEA, told Physics World that Chapman is an “astonishing science leader” and that the UKRI is in “excellent hands”. “[Chapman] has set a direction for UK fusion research that is bold and inspired,” adds Cowley. “It will be a hard act to follow but UK fusion development will go ahead with great energy.”
A team at the Trento Proton Therapy Centre in Italy has delivered the first clinical treatments using proton arc therapy (PAT), an emerging proton delivery technique. Following successful dosimetric comparisons with clinically delivered proton plans, the researchers confirmed the feasibility of PAT delivery and used it to treat nine cancer patients, reporting their findings in Medical Physics.
Currently, proton therapy is mostly delivered using pencil-beam scanning (PBS), which provides highly conformal dose distributions. But PBS delivery can be compromised by the small number of beam directions deliverable in an acceptable treatment time. PAT overcomes this limitation by moving to an arc trajectory.
“Proton arc treatments are different from any other pencil-beam proton delivery technique because of the large number of beam angles used and the possibility to optimize the number of energies used for each beam direction, which enables optimization of the delivery time,” explains first author Francesco Fracchiolla. “The ability to optimize both the number of energy layers and the spot weights makes these treatments superior to any previous delivery technique.”
Plan comparisons
The Trento researchers – working with colleagues from RaySearch Laboratories – compared the dosimetric parameters of PAT plans with those of state-of-the-art multiple-field optimized (MFO) PBS plans, for 10 patients with head-and-neck cancer. They focused on this site due to the high number of organs-at-risk (OARs) close to the target that may be spared using this new technique.
In future, PAT plans will be delivered with the beam on during gantry motion (dynamic mode). This requires dynamic arc plan delivery with all system settings automatically adjusted as a function of gantry angle – an approach with specific hardware and software requirements that have so far impeded clinical rollout.
Instead, Fracchiolla and colleagues employed an alternative version of static PAT, in which the static arc is converted into a series of PBS beams and delivered using conventional delivery workflows. Using the RayStation treatment planning system, they created MFO plans (using six noncoplanar beam directions) and PAT plans (with 30 beam directions), robustly optimized against setup and range uncertainties.
PAT plans dramatically improved dose conformality compared with MFO treatments. While target coverage was of equal quality for both treatment types, PAT decreased the mean doses to OARs for all patients. The biggest impact was in the brainstem, where PAT reduced maximum and mean doses by 19.6 and 9.5 Gy(RBE), respectively. Dose to other primary OARs did not differ significantly between plans, but PAT achieved an impressive reduction in mean dose to secondary OARs not directly adjacent to the target.
The team also evaluated how these dosimetric differences impact normal tissue complication probability (NTCP). PAT significantly reduced (by 8.5%) the risk of developing dry mouth and slightly lowered other NTCP endpoints (swallowing dysfunction, tube feeding and sticky saliva).
To verify the feasibility of clinical PAT, the researchers delivered MFO and PAT plans for one patient on a clinical gantry. Importantly, delivery times (from the start of the first beam to the end of the last) were similar for both techniques: 36 min for PAT with 30 beam directions and 31 min for MFO. Reducing the number of beam directions to 20 reduced the delivery time to 25 min, while maintaining near-identical dosimetric data.
First patient treatments
The successful findings of the plan comparison and feasibility test prompted the team to begin clinical treatments.
“The final trigger to go live was the fact that the discretized PAT plans maintained pretty much exactly the optimal dosimetric characteristics of the original dynamic (continuous rotation) arc plan from which they derived, so there was no need to wait for full arc to put the potential benefits to clinical use. Pretreatment verification showed excellent dosimetric accuracy and everything could be done in a fully CE-certified environment,” say Frank Lohr and Marco Cianchetti, director and deputy director, respectively, of the Trento Proton Therapy Center. “The only current drawback is that we are not at the treatment speed that we could be with full dynamic arc.”
To date, nine patients have received or are undergoing PAT treatment: five with head-and-neck tumours, three with brain tumours and one with a thoracic tumour. For the first two head-and-neck patients, the team created PAT plans with a half arc (180° to 0°) with 10 beam directions and a mean treatment time of 12 min. The next two were treated with a complete arc (360°) with 20 beam directions; here, the mean treatment time was 24 min. Patient-specific quality assurance revealed an average gamma passing rate (3%, 3 mm) of 99.6%, and only one patient required replanning.
All PAT treatments were performed using the centre’s IBA ProteusPlus proton therapy unit and the existing clinical workflow. “Our treatment planning system can convert an arc plan into a PBS plan with multiple beams,” Fracchiolla explains. “With this workaround, the entire clinical chain doesn’t change and the plan can be delivered on the existing system. This ability to convert the arc plans into PBS plans means that basically every proton centre can deliver these treatments with the current hardware settings.”
The researchers are now analysing acute toxicity data from the patients, to determine whether PAT reduces toxicity. They are also looking to further reduce the delivery times.
“Hopefully, together with IBA, we will streamline the current workflow between the OIS [oncology information system] and the treatment control system to reduce treatment times, thus being competitive in comparison with conventional approaches, even before full dynamic arc treatments become a clinical reality,” adds Lohr.
Inside view: Private companies like Tokamak Energy in the UK are developing compact tokamaks that, they hope, could bring fusion power to the grid in the 2030s. (Courtesy: Tokamak Energy)
Fusion – the process that powers the Sun – offers a tantalizing opportunity to generate almost unlimited amounts of clean energy. In the Sun’s core, matter is more than 10 times denser than lead and temperatures reach 15 million K. In these conditions, ionized isotopes of hydrogen (deuterium and tritium) can overcome their electrostatic repulsion, fusing into helium nuclei and ejecting high-energy neutrons. The products of this reaction are slightly lighter than the two reacting nuclei, and the excess mass is converted to lots of energy.
The engineering and materials challenges of creating what is essentially a ‘Sun in a freezer’ are formidable
The Sun’s core is kept hot and dense by the enormous gravitational force exerted by its huge mass. To achieve nuclear fusion on Earth, different tactics are needed. Instead of gravity, the most common approach uses strong superconducting magnets operating at ultracold temperatures to confine the intensely hot hydrogen plasma.
The engineering and materials challenges of creating what is essentially a “Sun in a freezer”, and harnessing its power to make electricity, are formidable. This is partly because, over time, high-energy neutrons from the fusion reaction will damage the surrounding materials. Superconductors are incredibly sensitive to this kind of damage, so substantial shielding is needed to maximize the lifetime of the reactor.
The traditional roadmap towards fusion power, led by large international projects, has set its sights on bigger and bigger reactors, at greater and greater expense. However, these are moving at a snail's pace, with the first power to the grid not anticipated until the 2060s, leading to the common perception that “fusion power is 30 years away, and always will be.”
There is therefore considerable interest in alternative concepts for smaller, simpler reactors to speed up the fusion timeline. Such novel reactors will need a different toolkit of superconductors. Promising materials exist, but because fusion can still only be sustained in brief bursts, we have no way to directly test how these compounds will degrade over decades of use.
Is smaller better?
A leading concept for a nuclear fusion reactor is a machine called a tokamak, in which the plasma is confined to a doughnut-shaped region. In a tokamak, D-shaped electromagnets are arranged in a ring around a central column, producing a circulating (toroidal) magnetic field. This exerts a force (the Lorentz force) on the positively charged hydrogen nuclei, making them trace helical paths that follow the field lines and keep them away from the walls of the vessel.
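To get a feel for why the toroidal field confines the plasma so effectively, it helps to estimate the radius of those helical paths. The following sketch is a rough back-of-the-envelope calculation rather than anything from the article: the 10 keV ion energy and 5 T field are illustrative assumptions, chosen only to be in the right ballpark for a tokamak.

```python
import math

# Illustrative values (assumptions, not taken from the article):
# a deuterium ion with ~10 keV of kinetic energy in a 5 T toroidal field.
E_keV = 10.0                       # ion kinetic energy in keV
B = 5.0                            # magnetic field in tesla
m_d = 3.34e-27                     # deuteron mass in kg
e = 1.602e-19                      # elementary charge in coulombs

E_joules = E_keV * 1e3 * e         # convert keV to joules (1 eV = 1.602e-19 J)
v = math.sqrt(2 * E_joules / m_d)  # ion speed, roughly 1e6 m/s

# Larmor (gyro) radius r = m*v / (q*B): the radius of the helical path that
# the Lorentz force imposes on the ion as it circles a field line.
r_larmor = m_d * v / (e * B)

print(f"ion speed  ~ {v:.2e} m/s")
print(f"gyroradius ~ {r_larmor * 1000:.1f} mm")
```

A gyroradius of a few millimetres, in a vessel that is metres across, is why the charged particles spiral tightly around the field lines instead of flying into the walls.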
In 2010, construction began in France on ITER, a tokamak that is designed to demonstrate the viability of nuclear fusion for energy generation. The aim is to produce burning plasma, where more than half of the energy heating the plasma comes from fusion in the plasma itself, and to generate, for short pulses, a tenfold return on the power input.
But despite being proposed 40 years ago, ITER’s projected first operation was recently pushed back by another 10 years to 2034. The project’s budget has also been revised multiple times and it is currently expected to cost tens of billions of euros. One reason ITER is such an ambitious and costly project is its sheer size. ITER’s plasma radius of 6.2 m is twice that of the JT-60SA in Japan, the world’s current largest tokamak. The power generated by a tokamak roughly scales with the radius of the doughnut cubed, which means that doubling the radius should yield an eight-fold increase in power.
Small but mighty Tokamak Energy’s ST40 compact tokamak uses copper electromagnets, which would be unsuitable for long-term operation due to overheating. REBCO compounds, which are high-temperature superconductors that can generate very high magnetic fields, are an attractive alternative. (Courtesy: Tokamak Energy)
However, instead of chasing larger and larger tokamaks, some organizations are going in the opposite direction. Private companies like Tokamak Energy in the UK and Commonwealth Fusion Systems in the US are developing compact tokamaks that, they hope, could bring fusion power to the grid in the 2030s. Their approach is to ramp up the magnetic field rather than the size of the tokamak. The fusion power of a tokamak has a stronger dependence on the magnetic field than the radius, scaling with the fourth power.
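Putting the two scalings together suggests why the high-field route is attractive. The sketch below assumes the radius-cubed and field-to-the-fourth dependences simply multiply, a common rule of thumb at fixed plasma conditions rather than a relation stated in the article, and the numbers are purely illustrative.

```python
def relative_fusion_power(radius_ratio: float, field_ratio: float) -> float:
    """Relative fusion power under the assumed scaling P proportional to R^3 * B^4.

    Both arguments are ratios of the new machine's major radius and
    magnetic field to those of a reference tokamak.
    """
    return radius_ratio**3 * field_ratio**4

# Doubling the radius at fixed field reproduces the eight-fold increase
# mentioned earlier for large machines such as ITER:
print(relative_fusion_power(2.0, 1.0))   # 8.0

# A compact machine with half the radius but twice the field of the
# reference design would, under this scaling, produce twice the power:
print(relative_fusion_power(0.5, 2.0))   # 0.125 * 16 = 2.0
```

In other words, under this assumed scaling a modest gain in field strength can compensate for a large reduction in size, which is exactly the bet the compact-tokamak companies are making.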
The drawback of smaller tokamaks is that the materials will sustain more damage from neutrons during operation. Of all the materials in the tokamak, the superconducting magnets are most sensitive to this. If the reactor is made more compact, they are also closer to the plasma and there will be less space for shielding. So if compact tokamaks are to succeed commercially, we need to choose superconducting materials that will be functional even after many years of irradiation.
1 Superconductors
Operation window for Nb-Ti, Nb3Sn and REBCO superconductors. (Courtesy: Susie Speller/IOP Publishing)
Superconductors are materials that have zero electrical resistance when they are cooled below a certain critical temperature (Tc). Superconducting wires can therefore carry electricity much more efficiently than conventional resistive metals like copper.
What’s more, a superconducting wire can carry a much higher current than a copper wire of the same diameter. As you pass ever more current through a copper wire, it heats up and its resistance rises even further, until eventually it melts. With no resistive heating, a superconducting wire instead supports a far greater current density (current per unit cross-sectional area), which enables high-field superconducting magnets to be more compact than resistive ones.
However, there is an upper limit to the strength of the magnetic field that a superconductor can usefully tolerate without losing the ability to carry lossless current. This is known as the “irreversibility field”, and for a given superconductor its value decreases as temperature is increased, as shown above.
High-performance fusion materials
Superconductors are a class of materials that, when cooled below a characteristic temperature, conduct with no resistance (see box 1, above). Magnets made from superconducting wires can carry high currents without overheating, making them ideal for generating the very high fields required for fusion. Superconductivity is highly sensitive to the arrangement of the atoms; whilst some amorphous superconductors exist, most superconducting compounds only conduct high currents in a specific crystalline state. A few defects will always arise, and can sometimes even improve the material’s performance. But introducing significant disorder to a crystalline superconductor will eventually destroy its ability to superconduct.
The most common material for superconducting magnets is a niobium-titanium (Nb-Ti) alloy, which is used in hospital MRI machines and CERN’s Large Hadron Collider. Nb-Ti superconducting magnets are relatively cheap and easy to manufacture, but – like all superconducting materials – the alloy has an upper limit to the magnetic field in which it can superconduct, known as the irreversibility field. In Nb-Ti this value is too low for the material to be used for the high-field magnets in ITER. The ITER tokamak will instead use a niobium-tin (Nb3Sn) superconductor, which has a higher irreversibility field than Nb-Ti, even though it is much more expensive and challenging to work with.
2 REBCO unit cell
(Courtesy: redrawn from Wikimedia Commons/IOP Publishing)
The unit cell of a REBCO high-temperature superconductor. The pink atoms are copper, the red atoms are oxygen, the barium atoms are green and the rare-earth element (in this case yttrium) is blue.
Needing stronger magnetic fields, compact tokamaks require a superconducting material with an even higher irreversibility field. Over the last decade, another class of superconducting materials, known as “REBCO”, has been proposed as an alternative. Short for rare earth barium copper oxide, these are a family of superconductors with the chemical formula REBa2Cu3O7, where RE is a rare-earth element such as yttrium, gadolinium or europium (see Box 2 “REBCO unit cell”).
REBCO compounds are high-temperature superconductors, which are defined as having transition temperatures above 77 K, meaning they can be cooled with liquid nitrogen rather than the more expensive liquid helium. REBCO compounds also have a much higher irreversibility field than niobium-tin, and so can sustain the high fields necessary for a small fusion reactor.
REBCO wires: Bendy but brittle
REBCO materials have attractive superconducting properties, but it is not easy to manufacture them into flexible wires for electromagnets. REBCO is a brittle ceramic so can’t be made into wires in the same way as ductile materials like copper or Nb-Ti, where the material is drawn through progressively smaller holes.
Instead, REBCO tapes are manufactured by coating metallic ribbons with a series of very thin ceramic layers, one of which is the superconducting REBCO compound. Ideally, the REBCO would be a single crystal, but in practice it will be composed of many small grains. The metal gives mechanical stability and flexibility whilst the underlying ceramic “buffer” layers protect the REBCO from chemical reactions with the metal and act as a template for aligning the REBCO grains. This is important because the boundaries between individual grains reduce the maximum current the wire can carry.
Another potential problem is that these compounds are chemically sensitive and are “poisoned” by nearly all the impurities that may be introduced during manufacture. These impurities can produce insulating compounds that block supercurrent flow or degrade the performance of the REBCO compound itself.
Despite these challenges, and thanks to impressive materials engineering from several companies and institutions worldwide, REBCO is now made in kilometre-long, flexible tapes capable of carrying thousands of amps of current. In 2024, more than 10,000 km of this material was manufactured for the burgeoning fusion industry. This is impressive given that only 1000 km was made in 2020. However, a single compact tokamak will require up to 20,000 km of this REBCO-coated conductor for the magnet systems, and because the superconductor is so expensive to manufacture it is estimated that this would account for a considerable fraction of the total cost of a power plant.
Pushing superconductors to the limit
Another problem with REBCO materials is that the temperature below which they superconduct falls steeply once they’ve been irradiated with neutrons. Their lifetime in service will depend on the reactor design and amount of shielding, but research from the Vienna University of Technology in 2018 suggested that REBCO materials can withstand about a thousand times less damage than structural materials like steel before they start to lose performance (Supercond. Sci. Technol. 31 044006).
These experiments are currently being used by the designers of small fusion machines to assess how much shielding will be required, but they don’t tell the whole story. The 2018 study used neutrons from a fission reactor, which have a different spectrum of energies compared to fusion neutrons. They also did not reproduce the environment inside a compact tokamak, where the superconducting tapes will be at cryogenic temperatures, carrying high currents and under considerable strain from Lorentz forces generated in the magnets.
Even if we could get a sample of REBCO inside a working tokamak, the maximum runtime of current machines is measured in minutes, meaning we cannot do enough damage to test how susceptible the superconductor will be in a real fusion environment. The current record for fusion energy produced by a tokamak is 69 megajoules, achieved in a 5-second burst at the Joint European Torus (JET) in the UK.
Given the difficulty of using neutrons from fusion reactors, our team is looking for answers using ions instead. Ion irradiation is much more readily available, quicker to perform, and doesn’t make the samples radioactive. It is also possible to access a wide range of energies and ion species to tune the damage mechanisms in the material. The trouble is that because ions are charged they won’t interact with materials in exactly the same way as neutrons, so it is not clear if these particles cause the same kinds of damage or by the same mechanisms.
To find out, we first tried to directly image the crystalline structure of REBCO after both neutron and ion irradiation using transmission electron microscopy (TEM). When we compared the samples, we saw small amorphous regions in the neutron-irradiated REBCO where the crystal structure was destroyed (J. Microsc. 286 3), which are not observed after light ion irradiation (see Box 3 below).
TEM images of REBCO before (a) and after (b) helium ion irradiation. The image on the right (c) shows only the positions of the copper, barium and rare-earth atoms – the oxygen atoms in the crystal lattice cannot be imaged using this technique. After ion irradiation, REBCO materials exhibit a lower superconducting transition temperature. However, the above images show no corresponding defects in the lattice, indicating that defects caused by oxygen atoms being knocked out of place are responsible for this effect.
We believe these regions to be collision cascades generated initially by a single violent neutron impact that knocks an atom out of its place in the lattice with enough energy that the atom ricochets through the material, knocking other atoms from their positions. However, these amorphous regions are small, and superconducting currents should be able to pass around them, so it was likely that another effect was reducing the superconducting transition temperature.
Searching for clues
The TEM images didn’t show any other defects, so on our hunt to understand the effect of neutron irradiation, we instead thought about what we couldn’t see in the images. The TEM technique we used cannot resolve the oxygen atoms in REBCO because they are too light to scatter the electrons by large angles. Oxygen is also the most mobile atom in a REBCO material, which led us to think that oxygen point defects – single oxygen atoms that have been moved out of place and which are distributed randomly throughout the material – might be responsible for the drop in transition temperature.
In REBCO, the oxygen atoms are all bonded to copper, so the bonding environment of the copper atoms can be used to identify oxygen defects. To test this theory we switched from electrons to photons, using a technique called X-ray absorption spectroscopy. Here the sample is illuminated with X-rays that preferentially excite the copper atoms; the precise energies where absorption is highest indicate specific bonding arrangements, and therefore point to specific defects. We have started to identify the defects that are likely to be present in the irradiated samples, finding spectral changes that are consistent with oxygen atoms moving into unoccupied sites (Communications Materials 3 52).
We see very similar changes to the spectra when we irradiate with helium ions and neutrons, suggesting that similar defects are created in both cases (Supercond. Sci. Technol. 36 10LT01). This work has increased our confidence that light ions are a good proxy for neutron damage in REBCO superconductors, and that this damage is due to changes in the oxygen lattice.
The Surrey Ion Beam Centre allows users to carry out a wide variety of research using ion implantation, ion irradiation and ion beam analysis. (Courtesy: Surrey Ion Beam Centre)
Another advantage of ion irradiation is that, compared to neutrons, it is easier to access experimentally relevant cryogenic temperatures. Our experiments are performed at the Surrey Ion Beam Centre, where a cryocooler can be attached to the end of the ion accelerator, enabling us to recreate some of the conditions inside a fusion reactor.
We have shown that when REBCO is irradiated at cryogenic temperatures and then allowed to warm to room temperature, it recovers some of its superconducting properties (Supercond. Sci. Technol. 34 09LT01). We attribute this to annealing, where rearrangements of atoms occur in a material warmed below its melting point, smoothing out defects in the crystal lattice. We have also shown that further recovery of a perfect superconducting lattice can be induced using careful heat treatments that avoid loss of oxygen from the samples (MRS Bulletin 48 710).
Lots more experiments are required to fully understand the effect of irradiation temperature on the degradation of REBCO. Our results indicate that room-temperature and cryogenic irradiation with helium ions lead to a similar rate of degradation, but similar work by a group at the Massachusetts Institute of Technology (MIT) in the US using proton irradiation has found that the superconductor degrades more rapidly at cryogenic temperatures (Rev. Sci. Instrum. 95 063907). The effect of other critical parameters like magnetic field and strain also still needs to be explored.
Towards net zero
The remarkable properties of REBCO high-temperature superconductors present new opportunities for designing fusion reactors that are substantially smaller (and cheaper) than traditional tokamaks, and which private companies ambitiously promise will enable the delivery of power to the grid on vastly accelerated timescales. REBCO tape can already be manufactured commercially with the required performance, but more research is needed to understand the neutron damage that the magnets will be subjected to, so that they can achieve the desired service lifetimes.
This would open up extensive new applications, such as lossless transmission cables, wind turbine generators and magnet-based energy storage devices
Scale-up of REBCO tape production is already happening at pace, and it is expected that this will drive down the cost of manufacture. This would open up extensive new applications, not only in fusion but also in power applications such as lossless transmission cables, for which the historically high costs of the superconducting material have proved prohibitive. Superconductors are also being introduced into wind turbine generators, and magnet-based energy storage devices.
This symbiotic relationship between fusion and superconductor research could lead not only to the realization of clean fusion energy but also many other superconducting technologies that will contribute to the achievement of net zero.
Astronomers have constructed the first “weather map” of the exoplanet WASP-127b, and the forecast there is brutal. Winds roar around its equator at speeds as high as 33 000 km/hr, far exceeding anything found in our own solar system. Its poles are cooler than the rest of its surface, though “cool” is a relative term on a planet where temperatures routinely exceed 1000 °C. And its atmosphere contains water vapour, so rain – albeit not in the form we’re accustomed to on Earth – can’t be ruled out.
Astronomers have been studying WASP-127b since its discovery in 2016. A gas giant exoplanet located over 500 light-years from Earth, it is slightly larger than Jupiter but much less dense, and it orbits its host – a G-type star like our own Sun – in just 4.18 Earth days. To probe its atmosphere, astronomers record the light transmitted as the planet passes in front of its host star along our line of sight. During such passes, or transits, some starlight gets filtered through the planet’s upper atmosphere and is “imprinted” with the characteristic pattern of absorption lines of the atoms and molecules present there.
Observing the planet during a transit event
On the night of 24/25 March 2022, astronomers used the CRyogenic InfraRed Echelle Spectrograph (CRIRES+) on the European Southern Observatory’s Very Large Telescope to observe WASP-127b at wavelengths of 1972‒2452 nm during a transit event lasting 6.6 hours. The data they collected show that the planet is home to supersonic winds travelling at speeds nearly six times faster than its own rotation – something that has never been observed before. By comparison, the fastest wind speeds measured in our solar system were on Neptune, where they top out at “just” 1800 km/hr, or 0.5 km/s.
Such strong winds – the fastest ever observed on a planet – would be hellish to experience. But for the astronomers, they were crucial for mapping WASP-127b’s weather.
“The light we measure still looks to us as if it all came from one point in space, because we cannot resolve the planet optically/spatially like we can do for planets in our own solar system,” explains Lisa Nortmann, an astronomer at the University of Göttingen, Germany and the lead author of an Astronomy and Astrophysics paper describing the measurements. However, Nortmann continues, “the unexpectedly fast velocities measured in this planet’s atmosphere have allowed us to investigate different regions on the planet, as it causes their signals to shift to different parts of the light spectrum. This meant we could reconstruct a rough weather map of the planet, even though we cannot resolve these different regions optically.”
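The mapping works because gas moving towards or away from us along the line of sight Doppler-shifts its absorption lines by Δλ = λv/c. The sketch below uses the wind speed quoted above and an arbitrary wavelength inside the observed CRIRES+ band (the choice of 2000 nm is illustrative, not a value from the paper) to show roughly how large that shift is.

```python
# Rough illustration of the Doppler shift produced by WASP-127b's equatorial winds.
c = 299_792_458.0                  # speed of light in m/s

wind_kmh = 33_000.0                # equatorial wind speed quoted above, in km/h
wind_ms = wind_kmh * 1000 / 3600   # about 9.2 km/s

wavelength_nm = 2000.0             # illustrative wavelength inside the 1972-2452 nm band

# Non-relativistic Doppler shift: delta_lambda = lambda * v / c
shift_nm = wavelength_nm * wind_ms / c

print(f"wind speed ~ {wind_ms / 1000:.1f} km/s")
print(f"line shift at {wavelength_nm:.0f} nm ~ {shift_nm * 1000:.0f} pm")
```

A shift of tens of picometres, with opposite signs for gas on the approaching and receding sides of the planet, is small but well within reach of a high-resolution spectrograph, which is what lets the team assign signals to different regions of an otherwise unresolved planet.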
The astronomers also used the transit data to study the composition of WASP-127b’s atmosphere. They detected both water vapour and carbon monoxide. In addition, they found that the temperature was lower at the planet’s poles than elsewhere.
Removing unwanted signals
According to Nortmann, one of the challenges in the study was removing signals from Earth’s atmosphere and WASP-127b’s host star so as to focus on the planet itself. She notes that the work will have implications for researchers working on theoretical models that aim to predict wind patterns on exoplanets.
“They will now have to try to see if their models can recreate the wind speeds we have observed,” she tells Physics World. “The results also really highlight that when we investigate this and other planets, we have to take the 3D structure of winds into account when interpreting our results.”
The astronomers say they are now planning further observations of WASP-127b to find out whether its weather patterns are stable or change over time. “We would also like to investigate molecules on the planet other than H2O and CO,” Nortmann says. “This could possibly allow us to probe the wind at different altitudes in the planet’s atmosphere and understand the conditions there even better.”
Join us for an insightful webinar that delves into the role of Cobalt-60 in intracranial radiosurgery using Leksell Gamma Knife.
Through detailed discussions and expert insights, attendees will learn how Leksell Gamma Knife, powered by cobalt-60, has revolutionized – and continues to revolutionize – the field of radiosurgery, offering patients a safe and effective treatment option.
Participants will gain a comprehensive understanding of the use of cobalt in medical applications, highlighting its significance, and learn more about the unique properties of cobalt-60. The webinar will explore the benefits of cobalt-60 in intracranial radiosurgery and why it is an ideal choice for treating brain lesions while minimizing damage to surrounding healthy tissue.
Don’t miss this opportunity to enhance your knowledge and stay at the forefront of medical advancements in radiosurgery!
Riccardo Bevilacqua
Riccardo Bevilacqua, a nuclear physicist with a PhD in neutron data for Generation IV nuclear reactors from Uppsala University, has worked as a scientist for the European Commission and at various international research facilities. His career has transitioned from research to radiation safety and back to medical physics, the field that first interested him as a student in Italy. Based in Stockholm, Sweden, he leads global radiation safety initiatives at Elekta. Outside of work, Riccardo is a father, a stepfather, and writes popular science articles on physics and radiation.
Working with “student LEGO enthusiasts”, physicists at the University of Nottingham have developed a fully functional LEGO interferometer kit that consists of lasers, mirrors, beamsplitters and, of course, some LEGO bricks.
The set, designed as a teaching aid for secondary-school pupils and older, is aimed at making quantum science more accessible and engaging as well as demonstrating the basic principles of interferometry such as interference patterns.
“Developing this project made me realise just how incredibly similar my work as a quantum scientist is to the hands-on creativity of building with LEGO,” notes Nottingham quantum physicist Patrik Svancara. “It’s an absolute thrill to show the public that cutting-edge research isn’t just complex equations. It’s so much more about curiosity, problem-solving, and gradually bringing ideas to life, brick by brick!”
A team at Cardiff University will now work on the kit’s design and develop materials that can be used to train science teachers, with the hope that the sets will eventually be made available throughout the UK.
“We are sharing our experiences, LEGO interferometer blueprints, and instruction manuals across various online platforms to ensure our activities have a lasting impact and reach their full potential,” adds Svancara.
If you want to see the LEGO interferometer in action for yourself then it is being showcased at the Cosmic Titans: Art, Science, and the Quantum Universe exhibition at Nottingham’s Djanogly Art Gallery, which runs until 27 April.
2 In quantum cryptography, who eavesdrops on Alice and Bob?
(Courtesy: Andy Roberts IBM Research/Science Photo Library)
3 Which artist made the Quantum Cloud sculpture in London?
4 IBM used which kind of atoms to create its Quantum Mirage image?
5 When Werner Heisenberg developed quantum mechanics on Helgoland in June 1925, he had travelled to the island to seek respite from what? A His allergies B His creditors C His funders D His lovers
6 According to the State of Quantum 2024 report, how many countries around the world had government initiatives in quantum technology at the time of writing? A 6 B 17 C 24 D 33
7 The E91 quantum cryptography protocol was invented in 1991. What does the E stand for? A Edison B Ehrenfest C Einstein D Ekert
8 British multinational consumer-goods firm Reckitt sells a “Quantum” version of which of its household products? A Air Wick freshener B Finish dishwasher tablets C Harpic toilet cleaner D Vanish stain remover
9 John Bell’s famous theorem of 1964 provides a mathematical framework for understanding what quantum paradox? A Einstein–Podolsky–Rosen B Quantum indefinite causal order C Schrödinger’s cat D Wigner’s friend
10 Which celebrated writer popularized the notion of Schrödinger’s cat in the mid-1970s? A Douglas Adams B Margaret Atwood C Arthur C Clarke D Ursula K Le Guin
11 Which of these isn’t an interpretation of quantum mechanics? A Copenhagen B Einsteinian C Many worlds D Pilot wave
12 Which of these companies is not a real quantum company? A Qblox B Qruise C Qrypt D Qtips
13 Which celebrity was spotted in the audience at a meeting about quantum computers and music in London in December 2022? A Peter Andre B Peter Capaldi C Peter Gabriel D Peter Schmeichel
14 Which of the following birds has not yet been chosen by IBM as the name for different versions of its quantum hardware? A Condor B Eagle C Flamingo D Peregrine
15 When quantum theorist Erwin Schrödinger fled Nazi-controlled Vienna in 1938, where did he hide his Nobel-prize medal? A In a filing cabinet B Under a pot plant C Behind a sofa D In a desk drawer
16 Which of the following versions of the quantum Hall effect has not been observed so far in the lab? A Fractional quantum Hall effect B Anomalous fractional quantum Hall effect C Anyonic fractional quantum Hall effect D Excitonic fractional quantum Hall effect
17 What did Quantum Coffee on Front Street West in Toronto call its recently launched pastry, which is a superposition of a croissant and muffin? A Croissin B Cruffin C Muffant D Muffcro
18 What destroyed the Helgoland guest house where Heisenberg stayed in 1925 while developing quantum mechanics? A A bomb B A gas leak C A rat infestation D A storm
This quiz is for fun and there are no prizes. Answers will be revealed on the Physics World website in April.
This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.
Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.
As physicists, we like to think that physics and politics are – indeed, ought to be – unconnected. And a lot of the time, that’s true.
Certainly, the value of the magnetic moment of the muon or the behaviour of superconductors in a fusion reactor (look out for our feature article next week) has nothing to do with where anyone sits on the political spectrum. It’s subjects like climate change, evolution and medical research that tend to get caught in the political firing line.
But scientists of all disciplines in the US are now feeling the impact of politics at first hand. The new administration of Donald Trump has ordered the National Institutes of Health to slash the “indirect” costs of its research projects, threatening medical science and putting the universities that support it at risk. The National Science Foundation, which funds much of US physics, is under fire too, with staff sacked and grant funding paused.
Trump has also signed a flurry of executive orders that, among other things, ban federal government initiatives to boost diversity, equity and inclusion (DEI) and instruct government departments to “combat illegal private-sector DEI preferences, mandates, policies, programs and activities”. Some organizations are already abandoning such efforts for fear of these future repercussions.
What’s troubling for physics is that attacks on diversity initiatives fall most heavily on people from under-represented groups, who are more likely to quit physics or not go into it in the first place. That’s bad news for our subject as a whole because we know that a diverse community brings in smart ideas, new approaches and clever thinking.
The speed of change in the US is bewildering too. Yes, the proportion of federal grants that goes on indirect costs might be too high, but making dramatic changes at short notice, with no consultation, is bizarre. There’s also a danger that universities will try to recoup lost money by raising tuition fees, which will hit poorer students the hardest.
So far, it’s been left to senior leaders such as James Gates – a theoretical physicist at the University of Maryland – to warn of the dangers in store. “My country,” he said at an event earlier this month, “is in for a 50-year period of a new dark ages.”
This episode of the Physics World Weekly podcast features an interview with the theoretical physicist Jim Gates, who is at the University of Maryland and Brown University – both in the US.
He updates his theorist’s bucket list, which he first shared with Physics World back in 2014. This is a list of breakthroughs in physics that Gates would like to see happen before he dies.
One list item – the observation of gravitational waves – happened in 2015 and Gates explains the importance of the discovery. He also explains why the observation of gravitons, which are central to a theory of quantum gravity, is on his bucket list.
Quantum information
Gates is known for his work on supersymmetry and superstring theory, so it is not surprising that experimental evidence for those phenomena are on the bucket list. Gates also talks about a new item on his list that concerns the connections between quantum physics and information theory.
In this interview with Physics World’s Margaret Harris, Gates also reflects on how the current political upheaval in the US is affecting science and society – and what scientists can do to ensure that the public has faith in science.
I studied physics at the University of Oxford and I was the first person in my family to go to university. I then completed a DPhil at Oxford in 1991 studying cosmic rays and neutrinos. In 1992 I moved to University College London as a research fellow. That was the first time I went to CERN and two years later I began working on the Large Electron-Positron Collider, which was the predecessor of the Large Hadron Collider. I was fortunate enough to work on some of the really big measurements of the W and Z bosons and electroweak unification, so it was a great time in my life. In 2000 I worked at the University of Cambridge where I set up a neutrino group. It was then that I began working at Fermilab – the US’s premier particle physics lab.
So you flipped from collider physics to neutrino physics?
Over the past 20 years, I have oscillated between them and sometimes have done both in parallel. Probably the biggest step forward was in 2013 when I became spokesperson for the Deep Underground Neutrino Experiment – a really fascinating, challenging and ambitious project. In 2018 I was then appointed executive chair of the Science and Technology Facilities Council (STFC) – one of the main UK funding agencies. The STFC funds particle physics and astronomy in the UK and maintains relationships with organizations such as CERN and the Square Kilometre Array Observatory, as well as operating some of the UK’s biggest national infrastructures such as the Rutherford Appleton Laboratory and the Daresbury Laboratory.
What did that role involve?
It covered strategic funding of particle physics and astronomy in the UK and also involved running a very large scientific organization with about 2800 scientific, technical and engineering staff. It was very good preparation for the role as CERN director-general.
What attracted you to become CERN director-general?
CERN is such an important part of the global particle-physics landscape. But I don’t think there was ever a moment where I just thought “Oh, I must do this”. I’ve spent six years on the CERN Council, so I know the organization well. I realized I had all of the tools to do the job – a combination of the science, knowing the organization and then my experience in previous roles. CERN has been a large part of my life for many years, so it’s a fantastic opportunity for me.
It was quite a surreal moment. My first thoughts were “Well, OK, that’s fun”, so it didn’t really sink in until the evening. I’m obviously very happy and it was fantastic news but it was almost a feeling of “What happens now?”.
So what happens now as CERN director-general designate?
There will be a little bit of shadowing, but you can’t shadow someone for the whole year, that doesn’t make very much sense. So what I really have to do is understand the organization, how it works from the inside and, of course, get to know the fantastic CERN staff, which I’ve already started doing. A lot of my time at the moment is meeting people and understanding how things work.
How might you do things differently?
I don’t think I will do anything too radical. I will have a look at where we can make things work better. But my priority for now is putting in place the team that will work with me from January. That’s quite a big chunk of work.
We have a decision to make on what comes after the High-Luminosity LHC in the mid-2040s
What do you think your leadership style will be?
I like to put around me a strong leadership team and then delegate and trust the leadership team to deliver. I’m there to set the strategic direction but also to empower them to deliver. That means I can take an outward focus and engage with the member states to promote CERN. I think my leadership style is to put in place a culture where the staff can thrive and operate in a very open and transparent way. That’s very important to me because it builds trust both within the organization and with CERN’s partners. The final thing is that I’m 100% behind CERN being an inclusive organization.
So diversity is an important aspect for you?
I am deeply committed to diversity and CERN is deeply committed to it in all its forms, and that will not change. This is a common value across Europe: our member states absolutely see diversity as being critical, and it means a lot to our scientific communities as well. From a scientific point of view, if we’re not supporting diversity, we’re losing people who are no different from others who come from more privileged backgrounds. Also, diversity at CERN has a special meaning: it means all the normal protected characteristics, but also national diversity. CERN is a community of 24 member states and quite a few associate member states, and ensuring nations are represented is incredibly important. It’s the way you do the best science, ultimately, and it’s the right thing to do.
The LHC is undergoing a £1bn upgrade towards the High-Luminosity LHC (HL-LHC) – what will that entail?
The HL-LHC is a big step up in terms of capability and the goal will be to increase the luminosity of the machine. We are also upgrading the detectors to make them even more precise. The HL-LHC will run from about 2030 to the early 2040s. So by the end of LHC operations, we would have only taken about 10% of the overall data set once you add what the HL-LHC is expected to produce.
What physics will that allow?
There’s a very specific measurement that we would like to make around the nature of the Higgs mechanism. There’s something very special about the Higgs boson that it has a very strange vacuum potential, so it’s always there in the vacuum. With the HL-LHC, we’re going to start to study the structure of that potential. That’s a really exciting and fundamental measurement and it’s a place where we might start to see new physics.
Beyond the HL-LHC, you will also be involved in planning what comes next. What are the options?
We have a decision to make on what comes after the HL-LHC in the mid-2040s. It seems a long way off but these projects need a 20-year lead-in. I think the consensus amongst the scientific community for a number of years has been that the next machine must explore the Higgs boson. The motivation for a Higgs factory is incredibly strong.
Yet there has not been much consensus whether that should be a linear or circular machine?
My personal view is that a circular collider is the way forward. One option is the Future Circular Collider (FCC) – a 91 km circumference collider that would be built at CERN.
What would the benefits of the FCC be?
We know how to build circular colliders and it gives you significantly more capability than a linear machine by producing more Higgs bosons. It is also a piece of research infrastructure that will be there for many years beyond the electron–positron collider. The other aspect is that at some point in the future, we are going to want a high-energy hadron collider to explore the unknown.
But it won’t come cheap, with estimates being about £12–15bn for the electron–positron version, dubbed FCC-ee?
While the price tag for the FCC-ee is significant, that is spread over 24 member states for 15 years and contributions can also come from elsewhere. I’m not saying it’s going to be easy to actually secure that jigsaw puzzle of resource, because money will need to come from outside Europe as well.
China is also considering the Circular Electron Positron Collider (CEPC) that could, if approved, be built by the 2030s. What would happen to the FCC if the CEPC were to go ahead?
I think that will be part of the European Strategy for Particle Physics, which will happen throughout this year, to think about the ifs and buts. Of course, nothing has really been decided in China. It’s a big project and it might not go ahead. I would say it’s quite easy to put down aggressive timescales on paper but actually delivering them is always harder. The big advantage of CERN is that we have the scientific and engineering heritage in building colliders and operating them. There is only one CERN in the world.
What do you make of alternative technologies such as muon colliders that could be built in the existing LHC tunnel and offer high energies?
It’s an interesting concept but technically we don’t know how to do it. There’s a lot of development work but it’s going to take a long time to turn that into a real machine. So looking at a muon collider on the time scale of the mid-2040s is probably unrealistic. What is critical for an organization like CERN and for global particle physics is that when the HL-LHC stops by 2040, there’s not a large gap without a collider project.
Last year CERN celebrated its 70th anniversary, what do you think particle physics might look like in the next 70 years?
If you look back at the big discoveries over the last 30 years we’ve seen neutrino oscillations, the Higgs boson, gravitational waves and dark energy. That’s four massive discoveries. In the coming decade we will know a lot more about the nature of the neutrino and the Higgs boson via the HL-LHC. The big hope is we find something else that we don’t expect.
A new “sneeze simulator” could help scientists understand how respiratory illnesses such as COVID-19 and influenza spread. Built by researchers at the Universitat Rovira i Virgili (URV) in Spain, the simulator is a three-dimensional model that incorporates a representation of the nasal cavity as well as other parts of the human upper respiratory tract. According to the researchers, it should help scientists to improve predictive models for respiratory disease transmission in indoor environments, and could even inform the design of masks and ventilation systems that mitigate the effects of exposure to pathogens.
For many respiratory illnesses, pathogen-laden aerosols expelled when an infected person coughs, sneezes or even breathes are important ways of spreading disease. Our understanding of how these aerosols disperse has advanced in recent years, mainly through studies carried out during and after the COVID-19 pandemic. Some of these studies deployed techniques such as spirometry and particle imaging to characterize the distributions of particle sizes and airflow when we cough and sneeze. Others developed theoretical models that predict how clouds of particles will evolve after they are ejected and how droplet sizes change as a function of atmospheric humidity and composition.
To build on this work, the URV researchers sought to understand how the shape of the nasal cavity affects these processes. They argue that neglecting this factor leads to an incomplete understanding of airflow dynamics and particle dispersion patterns, which in turn affects the accuracy of transmission modelling. As evidence, they point out that studies focused on sneezing (which occurs via the nose) and coughing (which occurs primarily via the mouth) detected differences in how far droplets travelled, the amount of time they stayed in the air and their pathogen-carrying potential – all parameters that feed into transmission models. The nasal cavity also affects the shape of the particle cloud ejected, which has previously been found to influence how pathogens spread.
The challenge they face is that the anatomy of the nasal cavity varies greatly from person to person, making it difficult to model. However, the URV researchers say that their new simulator, which is based on realistic 3D printed models of the upper respiratory tract and nasal cavity, overcomes this limitation, precisely reproducing the way particles are produced when people cough and sneeze.
Reproducing human coughs and sneezes
One of the features that allows the simulator to do this is a variable nostril opening. This enables the researchers to control air flow through the nasal cavity, and thus to replicate different sneeze intensities. The simulator also controls the strength of exhalations, meaning that the team could investigate how this and the size of nasal airways affects aerosol cloud dispersion.
During their experiments, which are detailed in Physics of Fluids, the URV researchers used high-speed cameras and a laser beam to observe how particles disperse following a sneeze. They studied three airflow rates typical of coughs and sneezes and monitored what happened with and without nasal cavity flow. Based on these measurements, they used a well-established model to predict the range of the aerosol cloud produced.
Simulator: Team member Nicolás Catalán with the three-dimensional model of the human upper respiratory tract. The mask in the background hides the 3D model to simulate any impact of the facial geometry on the particle dispersion. (Courtesy: Bureau for Communications and Marketing of the URV)
“We found that nasal exhalation disperses aerosols more vertically and less horizontally, unlike mouth exhalation, which projects them toward nearby individuals,” explains team member Salvatore Cito. “While this reduces direct transmission, the weaker, more dispersed plume allows particles to remain suspended longer and become more uniformly distributed, increasing overall exposure risk.”
These findings have several applications, Cito says. For one, the insights gained could be used to improve models used in epidemiology and indoor air quality management.
“Understanding how nasal exhalation influences aerosol dispersion can also inform the design of ventilation systems in public spaces, such as hospitals, classrooms and transportation systems to minimize airborne transmission risks,” he tells Physics World.
The results also suggest that protective measures such as masks should be designed to block both nasal and oral exhalations, he says, adding that full-face coverage is especially important in high-risk settings.
The researchers’ next goal is to study the impact of environmental factors such as humidity and temperature on aerosol dispersion. Until now, such experiments have only been carried out under controlled isothermal conditions, which does not reflect real-world situations. “We also plan to integrate our experimental findings with computational fluid dynamics simulations to further refine protective models for respiratory aerosol dispersion,” Cito reveals.
Physicists in Austria have shown that the static electricity acquired by identical material samples can evolve differently over time, based on each sample’s history of contact with other samples. Led by Juan Carlos Sobarzo and Scott Waitukaitis at the Institute of Science and Technology Austria, the team hope that their experimental results could provide new insights into one of the oldest mysteries in physics.
Static electricity – also known as contact electrification or triboelectrification – has been studied for centuries. However, physicists still do not understand some aspects of how it works.
“It’s a seemingly simple effect,” Sobarzo explains. “Take two materials, make them touch and separate them, and they will have exchanged electric charge. Yet, the experiments are plagued by unpredictability.”
This mystery is epitomized by an early experiment carried out by the German-Swedish physicist Johan Wilcke in 1757. When glass was touched to paper, Wilcke found that the glass gained a positive charge, while when paper was touched to sulphur, the paper became positively charged.
Triboelectric series
Wilcke concluded that glass will become positively charged when touched to sulphur. This concept formed the basis of the triboelectric series, which ranks materials according to the charge they acquire when touched to another material.
Yet in the intervening centuries, the triboelectric series has proven to be notoriously inconsistent. Despite our vastly improved knowledge of material properties since the time of Wilcke’s experiments, even the latest attempts at ordering materials into triboelectric series have repeatedly failed to hold up to experimental scrutiny.
According to Sobarzo and colleagues, this problem has been confounded by the diverse array of variables associated with a material’s contact electrification. These include its electronic properties, pH, hydrophobicity and mechanochemistry, to name just a few.
In their new study, the team approached the problem from a new perspective. “In order to reduce the number of variables, we decided to use identical materials,” Sobarzo describes. “Our samples are made of a soft polymer (PDMS) that I fabricate myself in the lab, cut from a single piece of material.”
Starting from scratch
For these identical materials, the team proposed that triboelectric properties could evolve over time as the samples were brought into contact with other, initially identical samples. If this were the case, it would allow the team to build a triboelectric series from scratch.
At first, the results seemed as unpredictable as ever. However, as the same set of samples underwent repeated contacts, the team found that their charging behaviour became more consistent, gradually forming a clear triboelectric series.
Initially, the researchers attempted to uncover correlations between this evolution and variations in the parameters of each sample – with no conclusive results. This led them to consider whether the triboelectric behaviour of each sample was affected by the act of contact itself.
Contact history
“Once we started to keep track of the contact history of our samples – that is, the number of times each sample has been contacted to others – the unpredictability we saw initially started to make sense,” Sobarzo explains. “The more contacts samples would have in their history, the more predictable they would behave. Not only that, but a sample with more contacts in its history will consistently charge negative against a sample with less contacts in its history.”
To explain the origins of this history-dependent behaviour, the team used a variety of techniques to analyse differences between the surfaces of uncontacted samples, and those which had already been contacted several times. Their measurements revealed just one difference between samples at different positions on the triboelectric series. This was their nanoscale surface roughness, which smoothed out as the samples experienced more contacts.
“I think the main take away is the importance of contact history and how it can subvert the widespread unpredictability observed in tribocharging,” Sobarzo says. “Contact is necessary for the effect to happen, it’s part of the name ‘contact electrification’, and yet it’s been widely overlooked.”
The team is still uncertain of how surface roughness could be affecting their samples’ place within the triboelectric series. However, their results could now provide the first steps towards a comprehensive model that can predict a material’s triboelectric properties based on its contact-induced surface roughness.
Sobarzo and colleagues are hopeful that such a model could enable robust methods for predicting the charges which any given pair of materials will acquire as they touch each other and separate. In turn, it may finally help to provide a solution to one of the most long-standing mysteries in physics.
Nanoparticle-mediated DBS (I) Pulsed NIR irradiation triggers the thermal activation of TRPV1 channels. (II, III) NIR-induced β-syn peptide release into neurons disaggregates α-syn fibrils and thermally activates autophagy to clear the fibrils. This therapy effectively reverses the symptoms of Parkinson’s disease. Created using BioRender.com. (Courtesy: CC BY-NC/Science Advances 10.1126/sciadv.ado4927)
A photothermal, nanoparticle-based deep brain stimulation (DBS) system has successfully reversed the symptoms of Parkinson’s disease in laboratory mice. Under development by researchers in Beijing, China, the injectable, wireless DBS not only reversed neuron degeneration, but also boosted dopamine levels by clearing out the buildup of harmful fibrils around dopamine neurons. Following DBS treatment, diseased mice exhibited near comparable locomotive behaviour to that of healthy control mice.
Parkinson’s disease is a chronic brain disorder characterized by the degeneration of dopamine-producing neurons and the subsequent loss of dopamine in regions of the brain. Current DBS treatments focus on amplifying dopamine signalling and production, and may require permanent implantation of electrodes in the brain. Another approach under investigation is optogenetics, which involves gene modification. Both techniques increase dopamine levels and reduce Parkinsonian motor symptoms, but they do not restore degenerated neurons to stop disease progression.
Team leader Chunying Chen from the National Center for Nanoscience and Technology. (Courtesy: Chunying Chen)
The research team, at the National Center for Nanoscience and Technology of the Chinese Academy of Sciences, hypothesized that the heat-sensitive receptor TRPV1, which is highly expressed in dopamine neurons, could serve as a modulatory target to activate dopamine neurons in the substantia nigra of the midbrain. This region contains a large concentration of dopamine neurons and plays a crucial role in how the brain controls bodily movement.
Previous studies have shown that neuron degeneration is mainly driven by α-synuclein (α-syn) fibrils aggregating in the substantia nigra. Successful treatment, therefore, relies on removing this buildup, which requires restarting the intracellular autophagic process (in which a cell breaks down and removes unnecessary or dysfunctional components).
As such, principal investigator Chunying Chen and colleagues aimed to develop a therapeutic system that could reduce α-syn accumulation by simultaneously disaggregating α-syn fibrils and initiating the autophagic process. Their three-component DBS nanosystem, named ATB (Au@TRPV1@β-syn), combines photothermal gold nanoparticles, dopamine neuron-activating TRPV1 antibodies, and β-synuclein (β-syn) peptides that break down α-syn fibrils.
The ATB nanoparticles anchor to dopamine neurons through the TRPV1 receptor then, acting as nanoantennae, convert pulsed near-infrared (NIR) irradiation into heat. This activates the heat-sensitive TRPV1 receptor and restores degenerated dopamine neurons. At the same time, the nanoparticles release β-syn peptides that clear out α-syn fibril buildup and stimulate intracellular autophagy.
The researchers first tested the system in vitro in cellular models of Parkinson’s disease. They verified that under NIR laser irradiation, ATB nanoparticles activate neurons through photothermal stimulation by acting on the TRPV1 receptor, and that the nanoparticles successfully counteracted the α-syn preformed fibril (PFF)-induced death of dopamine neurons. In cell viability assays, neuron death was reduced from 68% to zero following ATB nanoparticle treatment.
Next, Chen and colleagues investigated mice with PFF-induced Parkinson’s disease. The DBS treatment begins with stereotactic injection of the ATB nanoparticles directly into the substantia nigra. They selected this approach over systemic administration because it provides precise targeting, avoids the blood–brain barrier and achieves a high local nanoparticle concentration with a low dose – potentially boosting treatment effectiveness.
Following injection of either nanoparticles or saline, the mice underwent pulsed NIR irradiation once a week for five weeks. The team then performed a series of tests to assess the animals’ motor abilities (after a week of training), comparing the performance of treated and untreated PFF mice, as well as healthy control mice. This included the rotarod test, which measures the time until the animal falls from a rotating rod that accelerates from 5 to 50 rpm over 5 min, and the pole test, which records the time for mice to crawl down a 75 cm-long pole.
Motor tests Results of (left to right) rotarod, pole and open field tests, for control mice, mice with PFF-induced Parkinson’s disease, and PFF mice treated with ATB nanoparticles and NIR laser irradiation. (Courtesy: CC BY-NC/Science Advances 10.1126/sciadv.ado4927)
The team also performed an open field test to evaluate locomotive activity and exploratory behaviour. Here, mice are free to move around a 50 x 50 cm area, while their movement paths and the number of times they cross a central square are recorded. In all tests, mice treated with nanoparticles and irradiation significantly outperformed untreated controls, with near comparable performance to that of healthy mice.
Visualizing the dopamine neurons via immunohistochemistry revealed a reduction in neurons in PFF-treated mice compared with controls. This loss was reversed following nanoparticle treatment. Safety assessments determined that the treatment did not cause biochemical toxicity and that the heat generated by the NIR-irradiated ATB nanoparticles did not cause any considerable damage to the dopamine neurons.
Eight weeks after treatment, none of the mice experienced any toxicities. The ATB nanoparticles remained stable in the substantia nigra, with only a few particles migrating to cerebrospinal fluid. The researchers also report that the particles did not migrate to the heart, liver, spleen, lung or kidney and were not found in blood, urine or faeces.
Chen tells Physics World that having discovered the neuroprotective properties of gold clusters in Parkinson’s disease models, the researchers are now investigating therapeutic strategies based on gold clusters. Their current research focuses on engineering multifunctional gold cluster nanocomposites capable of simultaneously targeting α-syn aggregation, mitigating oxidative stress and promoting dopamine neuron regeneration.
Three decades ago – in May 1995 – the British-born mathematical physicist Freeman Dyson published an article in the New York Review of Books. Entitled “The scientist as rebel”, it described how all scientists have one thing in common. No matter what their background or era, they are rebelling against the restrictions imposed by the culture in which they live.
“For the great Arab mathematician and astronomer Omar Khayyam, science was a rebellion against the intellectual constraints of Islam,” Dyson wrote. Leading Indian physicists in the 20th century, he added, were rebelling against their British colonial rulers and the “fatalistic ethic of Hinduism”. Even Dyson traced his interest in science as an act of rebellion against the drudgery of compulsory Latin and football at school.
“Science is an alliance of free spirits in all cultures rebelling against the local tyranny that each culture imposes,” he wrote. Through those acts of rebellion, scientists expose “oppressive and misguided conceptions of the world”. The discovery of evolution and of DNA changed our sense of what it means to be human, he said, while black holes and Gödel’s theorem gave us new views of the universe and the nature of mathematics.
But Dyson feared that this view of science was being occluded. Writing in the 1990s, a time of furious academic debate about the “social construction of science”, he worried that science’s liberating role was being obscured by a cabal of sociologists and philosophers who viewed scientists as being just like other humans, governed by social, psychological and political motives. Dyson didn’t disagree with that view, but underlined that nature is the ultimate arbiter of what’s important.
Today’s rebels
One wonders what Dyson, who died in 2020, would make of current events were he alive today. It’s no longer just a small band of academics disputing science. Its opponents now include powerful and highly placed politicians, who accuse scientists and scientific findings of lacking objectivity and being politically motivated. Science, they say, is politics by other means. They then use that charge to justify ignoring or openly rejecting scientific findings when creating regulations and making decisions.
Thousands of researchers, for instance, contribute to efforts by the United Nations Intergovernmental Panel on Climate Change (IPCC) to measure the impact and consequences of the rising amounts of carbon dioxide in the atmosphere. Yet US President Donald Trump – speaking after Hurricane Helene left a trail of destruction across the south-east US last year – called climate change “one of the great scams”. Meanwhile, US chief justice John Roberts once rejected using mathematics to quantify the partisan effects of gerrymandering, calling it “sociological gobbledygook”.
These attitudes are not only anti-science but also undermine democracy by sidelining experts and dissenting voices, curtailing real debate, scapegoating and harming citizens.
A worrying precedent for how things may play out in the Trump administration occurred in 2012 when North Carolina’s legislators passed House Bill 819. By prohibiting the use of models of sea-level rise to protect people living near the coast from flooding, the bill damaged the ability of state officials to protect the state’s coastline, resources and citizens. It also prevented other officials from fulfilling their duty to advise and protect people against threats to life and property.
In the current superheated US political climate, many scientific findings are charged with being agenda-driven rather than the outcomes of checked and peer-reviewed investigations. In the first Trump administration, bills were introduced in the US Congress to stop politicians from using science produced by the Department of Energy in policies to avoid admitting the reality of climate change.
We can expect more anti-scientific efforts, if the first Trump administration is anything to go by. Dyson’s rebel alliance, it seems, now faces not just posturing academics but a Galactic Empire.
The critical point
In his 1995 essay, Dyson described how scientists can be liberators by abstaining from political activity rather than militantly engaging in it. But how might he have seen them meeting this moment? Dyson would surely not see them turning away from their work to become politicians themselves. After all, it’s abstaining from politics that empowers scientists to be “in rebellion against the restrictions” in the first place. But Dyson would also see them as aware that science is not the driving force in creating policies; political implementation of scientific findings ultimately depends on politicians appreciating the authority and independence of these findings.
One of Trump’s most audacious “Presidential Actions”, made in the first week of his presidency, was to define sex. The action makes a female “a person belonging, at conception, to the sex that produces the large reproductive cell” and a male “a person belonging, at conception, to the sex that produces the small reproductive cell”. Trump ordered the government to use this “fundamental and incontrovertible reality” in all regulations.
An editorial in Nature (563 5) said that this “has no basis in science”, while cynics, citing certain biological interpretations that all human zygotes and embryos are initially effectively female, gleefully insisted that the order makes all of us female, including the new US president. For me and other Americans, Trump’s action restructures the world as it has been since Genesis.
Still, I imagine that Dyson would still see his rebels as hopeful, knowing that politicians don’t have the last word on what they are doing. For, while politicians can create legislation, they cannot legislate creation.
In the teeth of the Arctic winter, polar-bear fur always remains free of ice – but how? Researchers in Ireland and Norway say they now have the answer, and it could have applications far beyond wildlife biology. Having traced the fur’s ice-shedding properties to a substance produced by glands near the root of each hair, the researchers suggest that chemicals found in this substance could form the basis of environmentally-friendly new anti-icing surfaces and lubricants.
The substance in the bear’s fur is called sebum, and team member Julian Carolan, a PhD candidate at Trinity College Dublin and the AMBER Research Ireland Centre, explains that it contains three major components: cholesterol, diacylglycerols and anteiso-methyl-branched fatty acids. These chemicals have an ice adsorption profile similar to that of perfluoroalkyl substances (PFAS), which are commonly employed in anti-icing applications.
“While PFAS are very effective, they can be damaging to the environment and have been dubbed ‘forever chemicals’,” explains Carolan, the lead author of a Science Advances paper on the findings. “Our results suggest that we could replace these fluorinated substances with these sebum components.”
With and without sebum
Carolan and colleagues obtained these results by comparing polar bear hairs naturally coated with sebum to hairs where the sebum had been removed using a surfactant found in washing-up liquid. Their experiment involved forming a 2 x 2 x 2 cm block of ice on the samples and placing them in a cold chamber. Once the ice was in place, the team used a force gauge on a track to push it off. By measuring the maximum force needed to remove the ice and dividing this by the area of the sample, they obtained ice adhesion strengths for the washed and unwashed fur.
This experiment showed that the ice adhesion of unwashed polar bear fur is exceptionally low. While the often-accepted threshold for “icephobicity” is around 100 kPa, the unwashed fur measured as little as 50 kPa. In contrast, the ice adhesion of washed (sebum-free) fur is much higher, coming in at least 100 kPa greater than the unwashed fur.
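As a rough back-of-the-envelope check – a minimal sketch using hypothetical numbers consistent with the figures quoted above, not the team’s actual data – the adhesion strength is simply the peak removal force divided by the iced contact area:

```python
# Illustrative estimate of ice adhesion strength (hypothetical numbers,
# not the team's measured data): stress = peak force / contact area.

ICE_CONTACT_AREA = 0.02 * 0.02      # 2 cm x 2 cm ice-block footprint, in m^2

def ice_adhesion_strength(peak_force_newtons: float) -> float:
    """Return adhesion strength in kilopascals."""
    return peak_force_newtons / ICE_CONTACT_AREA / 1e3

# A peak force of ~20 N over 4 cm^2 corresponds to ~50 kPa (icephobic),
# whereas ~60 N over the same area corresponds to ~150 kPa.
print(ice_adhesion_strength(20.0))   # ~50 kPa, like unwashed fur
print(ice_adhesion_strength(60.0))   # ~150 kPa, like washed (sebum-free) fur
```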
What is responsible for the low ice adhesion?
Guided by this evidence of sebum’s role in keeping the bears ice-free, the researchers’ next task was to determine its exact composition. They did this using a combination of techniques, including gas chromatography, mass spectrometry, liquid chromatography-mass spectrometry and nuclear magnetic resonance spectroscopy. They then used density functional theory methods to calculate the adsorption energy of the major components of the sebum. “In this way, we were able to identify which elements were responsible for the low ice adhesion we had identified,” Carolan tells Physics World.
This is not the first time that researchers have investigated animals’ anti-icing properties. A team led by Anne-Marie Kietzig at Canada’s McGill University, for example, previously found that penguin feathers also boast an impressively low ice adhesion. Team leader Bodil Holst says that she was inspired to study polar bear fur by a nature documentary that depicted the bears entering and leaving water to hunt, rolling around in the snow and sliding down hills – all while remaining ice-free. She and her colleagues collaborated with Jon Aars and Magnus Andersen of the Norwegian Polar Institute, which carries out a yearly polar bear monitoring campaign in Svalbard, Norway, to collect their samples.
Insights into human technology
As well as solving an ecological mystery and, perhaps, inspiring more sustainable new anti-icing lubricants, Carolan says the team’s work is also yielding insights into technologies developed by humans living in the Arctic. “Inuit people have long used polar bear fur for hunting stools (nikorfautaq) and sandals (tuterissat),” he explains. “It is notable that traditional preparation methods protect the sebum on the fur by not washing the hair-covered side of the skin. This maintains its low ice adhesion property while allowing for quiet movement on the ice – essential for still hunting.”
The researchers now plan to explore whether it is possible to apply the sebum components they identified to surfaces as lubricants. Another potential extension, they say, would be to investigate the ice-free properties of other Arctic mammals such as reindeer, the Arctic fox and the wolverine. “It would be interesting to discover if these animals share similar anti-icing properties,” Carolan says. “For example, wolverine fur is used in parka ruffs by Canadian Inuit as frost formed on it can easily be brushed off.”
For the first time, inverse design has been used to engineer specific functionalities into a universal spin-wave-based device. It was created by Andrii Chumak and colleagues at Austria’s University of Vienna, who hope that their magnonic device could pave the way for substantial improvements to the energy efficiency of data processing techniques.
Inverse design is a fast-growing technique for developing new materials and devices that are specialized for highly specific uses. Starting from a desired functionality, inverse-design algorithms work backwards to find the best system or structure to achieve that functionality.
“Inverse design has a lot of potential because all we have to do is create a highly reconfigurable medium, and give it control over a computer,” Chumak explains. “It will use algorithms to get any functionality we want with the same device.”
One area where inverse design could be useful is creating systems for encoding and processing data using quantized spin waves called magnons. These quasiparticles are collective excitations that propagate in magnetic materials. Information can be encoded in the amplitude, phase, and frequency of magnons – which interact with radio-frequency (RF) signals.
Collective rotation
A magnon propagates via the collective rotation of stationary spins (no particles move), so it offers a highly energy-efficient way to transfer and process information. So far, however, magnonics has been held back by existing approaches to the design of RF devices.
“Usually we use direct design – where we know how the spin waves behave in each component, and put the components together to get a working device,” Chumak explains. “But this sometimes takes years, and only works for one functionality.”
Recently, two theoretical studies considered how inverse design could be used to create magnonic devices. These took the physics of magnetic materials as a starting point to engineer a neural-network device.
Building on these results, Chumak’s team set out to show how that approach could be realized in the lab using a 7×7 array of independently-controlled current loops, each generating a small magnetic field.
Thin magnetic film
The team attached the array to a thin magnetic film of yttrium iron garnet. As RF spin waves propagated through the film, differences in the strengths of the magnetic fields generated by the loops induced a variety of effects, including phase shifts, interference and scattering. This in turn created complex patterns that could be tuned in real time by adjusting the current in each individual loop.
To make these adjustments, the researchers developed a pair of feedback-loop algorithms. These took a desired functionality as an input, and iteratively adjusted the current in each loop to optimize the spin wave propagation in the film for specific tasks.
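The paper’s specific algorithms are not reproduced here, but the general shape of such a feedback loop can be sketched in a few lines. This is a minimal illustration under assumed interfaces: `measure_transmission` stands in for an experiment or simulation that returns the device’s frequency response, and the simple accept-if-better random search is a placeholder for the team’s actual optimizers.

```python
import numpy as np

# Minimal sketch of an inverse-design feedback loop for a 7x7 array of
# current loops (hypothetical interface; not the Vienna team's actual code).

rng = np.random.default_rng(0)

def loss(response: np.ndarray, target: np.ndarray) -> float:
    """Mean-squared deviation from the desired functionality (e.g. a notch filter)."""
    return float(np.mean((response - target) ** 2))

def optimize(measure_transmission, target: np.ndarray,
             n_iters: int = 5000, step: float = 0.05) -> np.ndarray:
    """Iteratively adjust loop currents to approach the target response."""
    currents = np.zeros((7, 7))                      # start with all loops off
    best = loss(measure_transmission(currents), target)
    for _ in range(n_iters):
        trial = currents + step * rng.standard_normal((7, 7))
        trial_loss = loss(measure_transmission(trial), target)
        if trial_loss < best:                        # keep changes that help
            currents, best = trial, trial_loss
    return currents
```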
This approach enabled them to engineer two specific signal-processing functionalities in their device. These are a notch filter, which blocks a specific range of frequencies while allowing others to pass through; and a demultiplexer, which separates a combined signal into its distinct component signals. “These RF applications could potentially be used for applications including cellular communications, WiFi, and GPS,” says Chumak.
While the device is a success in terms of functionality, it has several drawbacks, explains Chumak. “The demonstrator is big and consumes a lot of energy, but it was important to understand whether this idea works or not. And we proved that it did.”
Through their future research, the team will now aim to reduce these energy requirements, and will also explore how inverse design could be applied more universally – perhaps paving the way for ultra-efficient magnonic logic gates.
A tense particle-physics showdown will reach new heights in 2025. Over the past 25 years researchers have seen a persistent and growing discrepancy between the theoretical predictions and experimental measurements of an inherent property of the muon – its anomalous magnetic moment. Known as the “muon g-2”, this property serves as a robust test of our understanding of particle physics.
Theoretical predictions of the muon g-2 are based on the Standard Model of particle physics (SM). This is our current best theory of fundamental forces and particles, but it does not agree with everything observed in the universe. While the tensions between g-2 theory and experiment have challenged the foundations of particle physics and potentially offer a tantalizing glimpse of new physics beyond the SM, it turns out that there is more than one way to make SM predictions.
In recent years, a new SM prediction of the muon g-2 has emerged that questions whether the discrepancy exists at all, suggesting that there is no new physics in the muon g-2. For the particle-physics community, the stakes are higher than ever.
Rising to the occasion?
To understand how this discrepancy in the value of the muon g-2 arises, imagine you’re baking some cupcakes. A well-known and trusted recipe tells you that by accurately weighing the ingredients using your kitchen scales you will make enough batter to give you 10 identical cupcakes of a given size. However, to your surprise, after portioning out the batter, you end up with 11 cakes of the expected size instead of 10.
What has happened? Maybe your scales are imprecise. You check and find that you’re confident that your measurements are accurate to 1%. This means each of your 10 cupcakes could be 1% larger than they should be, or you could have enough leftover mixture to make 1/10th of an extra cupcake, but there’s no way you should have a whole extra cupcake.
You repeat the process several times, always with the same outcome. The recipe clearly states that you should have batter for 10 cupcakes, but you always end up with 11. Not only do you now have a worrying number of cupcakes to eat but, thanks to all your repeated experiments, you’re more confident that you are following all the steps and measurements accurately. You start to wonder whether something is missing from the recipe itself.
Before you jump to conclusions, it’s worth checking that there isn’t something systematically wrong with your scales. You ask several friends to follow the same recipe using their own scales. Amazingly, when each friend follows the recipe, they all end up with 11 cupcakes. You are more sure than ever that the cupcake recipe isn’t quite right.
You’re really excited now, as you have corroborating evidence that something is amiss. This is unprecedented, as the recipe is considered sacrosanct. Cupcakes have never been made differently and if this recipe is incomplete there could be other, larger implications. What if all cake recipes are incomplete? These claims are causing a stir, and people are starting to take notice.
Food for thought Just as a trusted cake recipe can be relied on to produce consistent results, so the Standard Model has been incredibly successful at predicting the behaviour of fundamental particles and forces. However, there are instances where the Standard Model breaks down, prompting scientists to hunt for new physics that will explain this mystery. (Courtesy: iStock/Shutter2U)
Then, a new friend comes along and explains that they checked the recipe by simulating baking the cupcakes using a computer. This approach doesn’t need physical scales, but it uses the same recipe. To your shock, the simulation produces 11 cupcakes of the expected size, with a precision as good as when you baked them for real.
There is no explaining this. You were certain that the recipe was missing something crucial, but now a computer simulation is telling you that the recipe has always predicted 11 cupcakes.
Of course, one extra cupcake isn’t going to change the world. But what if instead of cake, the recipe was particle physics’ best and most-tested theory of everything, and the ingredients were the known particles and forces? And what if the number of cupcakes was a measurable outcome of those particles interacting, one hurtling towards a pivotal bake-off between theory and experiment?
What is the muon g-2?
The muon is an elementary particle in the SM with half-integer spin; it is similar to the electron but some 207 times heavier. Muons interact directly with other SM particles via electromagnetism (photons) and the weak force (W and Z bosons, and the Higgs particle). All quarks and leptons – such as electrons and muons – have a magnetic moment due to their intrinsic angular momentum or “spin”. Quantum theory dictates that the magnetic moment is related to the spin by a quantity known as the “g-factor”. Initially, this value was predicted to be g = 2 for both the electron and the muon.
However, these calculations did not take into account the effects of “radiative corrections” – the continuous emission and re-absorption of short-lived “virtual particles” (see box) by the electron or muon – which increase g by about 0.1%. This seemingly minute difference is referred to as the anomalous g-factor, aµ = (g – 2)/2. As well as the electromagnetic and weak interactions, the muon’s magnetic moment also receives contributions from the strong force, even though the muon does not itself participate in strong interactions. The strong contributions arise through the muon’s interaction with the photon, which in turn interacts with quarks. The quarks then themselves interact via the strong-force mediator, the gluon.
This effect, and any discrepancies, are of particular interest to physicists because the g-factor acts as a probe of the existence of other particles – both known particles such as electrons and photons, and other, as yet undiscovered, particles that are not part of the SM.
“Virtual” particles
(Courtesy: CERN)
The Standard Model of particle physics (SM) describes the basic building blocks – the particles and forces – of our universe. It includes the elementary particles – quarks and leptons – that make up all known matter as well as the force-carrying particles, or bosons, that influence the quarks and leptons. The SM also explains three of the four fundamental forces that govern the universe – electromagnetism, the strong force and the weak force. Gravity, however, is not adequately explained within the model.
“Virtual” particles arise from the universe’s underlying, non-zero background energy, known as the vacuum energy. Heisenberg’s uncertainty principle – in its energy–time form – allows “something” to arise from “nothing”, provided that the “something” returns to “nothing” within a very short interval, before it can be observed. Therefore, at every point in space and time, virtual particles are rapidly created and annihilated.
The “g-factor” in muon g-2 represents the total value of the magnetic moment of the muon, including all corrections from the vacuum. If there were no virtual interactions, the muon’s g-factor would be exactly g = 2. The first confirmation of g > 2 came in 1948 when Julian Schwinger calculated the simplest contribution from a virtual photon interacting with an electron (Phys. Rev. 73 416). His famous result explained a measurement from the same year that found the electron’s g-factor to be slightly larger than 2 (Phys. Rev. 74 250). This confirmed the existence of virtual particles and paved the way for the invention of relativistic quantum field theories like the SM.
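Written out explicitly – standard textbook QED rather than a detail taken from the article – Schwinger’s one-loop correction is

$$a = \frac{g-2}{2} = \frac{\alpha}{2\pi} \approx 0.00116,$$

where α ≈ 1/137 is the fine-structure constant; higher-order virtual processes shift this number only slightly.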
The muon, the (lighter) electron and the (heavier) tau lepton all have an anomalous magnetic moment. However, because the muon is heavier than the electron, the impact of heavy new particles on the muon g-2 is amplified. While tau leptons are even heavier than muons, tau leptons are extremely short-lived (muons have a lifetime of 2.2 μs, while the lifetime of tau leptons is 0.29 ns), making measurements impracticable with current technologies. Neither too light nor too heavy, the muon is the perfect tool to search for new physics.
New physics beyond the Standard Model (commonly known as BSM physics) is sorely needed because, despite its many successes, the SM does not provide the answers to all that we observe in the universe, such as the existence of dark matter. “We know there is something beyond the predictions of the Standard Model, we just don’t know where,” says Patrick Koppenburg, a physicist at the Dutch National Institute for Subatomic Physics (Nikhef) in the Netherlands, who works on the LHCb Experiment at CERN and on future collider experiments. “This new physics will provide new particles that we haven’t observed yet. The LHC collider experiments are actively searching for such particles but haven’t found anything to date.”
Testing the Standard Model: experiment vs theory
In 2021 the Muon g-2 experiment at Fermilab in the US captured the world’s attention with the release of its first result (Phys. Rev. Lett. 126 141801). It had directly measured the muon g-2 to an unprecedented precision of 460 parts per billion (ppb). While the LHC experiments attempt to produce and detect BSM particles directly, the Muon g-2 experiment takes a different, complementary approach – it compares precision measurements of particles with SM predictions to expose discrepancies that could be due to new physics. In the Muon g-2 experiment, muons travel round and round a circular ring, confined by a strong magnetic field. In this field, the muons precess like spinning tops (see image at the top of this article). The frequency of this precession is proportional to the anomalous magnetic moment, and it can be extracted by detecting where and when the muons decay.
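In its simplest idealized form – neglecting the electric-field and beam-dynamics corrections that the real analysis must handle – the quantity the experiment extracts is the anomalous precession frequency

$$\omega_a = a_\mu \frac{eB}{m_\mu},$$

so measuring ωa together with the magnetic field B yields aµ directly. (This is a standard textbook relation, not a detail reported in the article itself.)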
Magnetic muons The Muon g-2 experiment at the Fermi National Accelerator Laboratory. (Courtesy: Reidar Hahn/Fermilab, US Department of Energy)
Muon g-2 is an awe-inspiring feat of science and engineering, involving more than 200 scientists from 35 institutions in seven countries. Having served as a manager and run co-ordinator on the experiment, I have been involved in both its operation and the analysis of its results. “A lot of my favourite memories from g-2 are ‘firsts’,” says Saskia Charity, a researcher at the University of Liverpool in the UK and a principal analyser of the Muon g-2 experiment’s results. “The first time we powered the magnet; the first time we stored muons and saw particles in the detectors; and the first time we released a result in 2021.”
The Muon g-2 result turned heads because the measured value was significantly higher than the best SM prediction (at that time) of the muon g-2 (Phys. Rep. 887 1). This SM prediction was the culmination of years of collaborative work by the Muon g-2 Theory Initiative, an international consortium of roughly 200 theoretical physicists (myself among them). In 2020 the collaboration published one community-approved number for the muon g-2. This value had a precision comparable to the Fermilab experiment – resulting in a deviation between the two that has a chance of 1 in 40,000 of being a statistical fluke – making the discrepancy all the more intriguing.
While much of the SM prediction, including contributions from virtual photons and leptons, can be calculated from first principles alone, the strong force contributions involving quarks and gluons are more difficult. However, there is a mathematical link between the strong force contributions to muon g-2 and the probability of experimentally producing hadrons (composite particles made of quarks) from electron–positron annihilation. These so-called “hadronic processes” are something we can observe with existing particle colliders; much like weighing cupcake ingredients, these measurements determine how much each hadronic process contributes to the SM correction to the muon g-2. This is the approach used to calculate the 2020 result, producing what is called a “data-driven” prediction.
Measurements were performed at many experiments, including the BaBar Experiment at the Stanford Linear Accelerator Center (SLAC) in the US, the BESIII Experiment at the Beijing Electron–Positron Collider II in China, the KLOE Experiment at the DAFNE collider in Italy, and the SND and CMD-2 experiments at the VEPP-2000 electron–positron collider in Russia. These different experiments measured a complete catalogue of hadronic processes in different ways over several decades. Other members of the Muon g-2 Theory Initiative and I combined these findings to produce the data-driven SM prediction of the muon g-2. There was (and still is) strong, corroborating evidence that this SM prediction is reliable.
This discrepancy strongly indicated, to a very high level of confidence, the existence of new physics. It seemed more likely than ever that BSM physics had finally been detected in a laboratory.
1 Eyes on the prize
(Courtesy: Muon g-2 collaboration/IOP Publishing)
Over the last two decades, direct experimental measurements of the muon g-2 have become much more precise. The predecessor to the Fermilab experiment was based at Brookhaven National Laboratory in the US, and when that experiment ended, the magnetic ring in which the muons are confined was transported to its current home at Fermilab.
That was until the release of the first SM prediction of the muon g-2 using an alternative method called lattice QCD (Nature 593 51). Like the data-driven prediction, lattice QCD is a way to tackle the tricky hadronic contributions, but it doesn’t use experimental results as a basis for the calculation. Instead, it treats the universe as a finite box containing a grid of points (a lattice) that represent points in space and time. Virtual quarks and gluons are simulated inside this box, and the results are extrapolated to a universe of infinite size and continuous space and time. This method requires a huge amount of computer power to arrive at an accurate, physical result but it is a powerful tool that directly simulates the strong-force contributions to the muon g-2.
The researchers who published this new result are also part of the Muon g-2 Theory Initiative. Several other groups within the consortium have since published lattice QCD calculations, producing values for g-2 that are in good agreement with each other and with the experiment at Fermilab. “Striking agreement, to better than 1%, is seen between results from multiple groups,” says Christine Davies of the University of Glasgow in the UK, a member of the High-precision lattice QCD (HPQCD) collaboration within the Muon g-2 Theory Initiative. “A range of methods have been developed to improve control of uncertainties, meaning further, more complete, lattice QCD calculations are now appearing. The aim is for several results with 0.5% uncertainty in the near future.”
If these lattice QCD predictions are the true SM value, there is no muon g-2 discrepancy between experiment and theory. However, this would conflict with the decades of experimental measurements of hadronic processes that were used to produce the data-driven SM prediction.
To make the situation even more confusing, a new experimental measurement of the muon g-2’s dominant hadronic process was released in 2023 by the CMD-3 experiment (Phys. Rev. D 109 112002). This result is significantly larger than all the other, older measurements of the same process, including its own predecessor experiment, CMD-2 (Phys. Lett. B 648 28). With this new value, the data-driven SM prediction of aµ = (g – 2)/2 is in agreement with the Muon g-2 experiment and lattice QCD. Over the last few years, the CMD-3 measurements (and all older measurements) have been scrutinized in great detail, but the source of the difference between the measurements remains unknown.
2 Which Standard Model?
(Courtesy: Alex Keshavarzi/IOP Publishing)
Summary of the four values of the anomalous magnetic moment of the muon aμ that have been obtained from different experiments and models. The 2020 and CMD-3 predictions were both obtained using a data-driven approach. The lattice QCD value is a theoretical prediction and the Muon g-2 experiment value was measured at Fermilab in the US. The positions of the points with respect to the y axis have been chosen for clarity only.
Since then, the Muon g-2 experiment at Fermilab has confirmed and improved on that first result to a precision of 200 ppb (Phys. Rev. Lett. 131 161802). “Our second result based on the data from 2019 and 2020 has been the first step in increasing the precision of the magnetic anomaly measurement,” says Peter Winter of Argonne National Laboratory in the US and co-spokesperson for the Muon g-2 experiment.
The new result is in full agreement with the SM predictions from lattice QCD and the data-driven prediction based on CMD-3’s measurement. However, with the increased precision, it now disagrees with the 2020 SM prediction by even more than in 2021.
The community therefore faces a conundrum. The muon g-2 either exhibits a much-needed discovery of BSM physics or a remarkable, multi-method confirmation of the Standard Model.
On your marks, get set, bake!
In 2025 the Muon g-2 experiment at Fermilab will release its final result. “It will be exciting to see our final result for g-2 in 2025 that will lead to the ultimate precision of 140 parts-per-billion,” says Winter. “This measurement of g-2 will be a benchmark result for years to come for any extension to the Standard Model of particle physics.” Assuming this agrees with the previous results, it will further widen the discrepancy with the 2020 data-driven SM prediction.
For the lattice QCD SM prediction, the many groups calculating the muon’s anomalous magnetic moment have since corroborated and improved the precision of the first lattice QCD result. Their next task is to combine the results from the various lattice QCD predictions to arrive at one SM prediction from lattice QCD. While this is not a trivial task, the agreement between the groups means a single lattice QCD result with improved precision is likely within the next year, increasing the tension with the 2020 data-driven SM prediction.
New, robust experimental measurements of the muon g-2’s dominant hadronic processes are also expected over the next couple of years. The previous experiments will update their measurements with more precise results and a newcomer measurement is expected from the Belle-II experiment in Japan. It is hoped that they will confirm either the catalogue of older hadronic measurements or the newer CMD-3 result. Should they confirm the older data, the potential for new physics in the muon g-2 lives on, but the discrepancy with the lattice QCD predictions will still need to be investigated. If the CMD-3 measurement is confirmed, it is likely the older data will be superseded, and the muon g-2 will have once again confirmed the Standard Model as the best and most resilient description of the fundamental nature of our universe.
International consensus The Muon g-2 Theory Initiative pictured at their seventh annual plenary workshop at the KEK Laboratory, Japan in September 2024. (Courtesy: KEK-IPNS)
The task before the Muon g-2 Theory Initiative is to solve these dilemmas and update the 2020 data-driven SM prediction. Two new publications are planned. The first will be released in 2025 (to coincide with the new experimental result from Fermilab). This will describe the current status and ongoing body of work, but a full, updated SM prediction will have to wait for the second paper, likely to be published several years later.
It’s going to be an exciting few years. Being part of both the experiment and the theory means I have been privileged to see the process from both sides. For the SM prediction, much work is still to be done but science with this much at stake cannot be rushed and it will be fascinating work. I’m looking forward to the journey just as much as the outcome.
Treatment with low-temperature plasma is emerging as a novel cancer therapy. Previous studies have shown that plasma can deactivate cancer cells in vitro, suppress tumour growth in vivo and potentially induce anti-tumour immunity. Researchers at the University of Tokyo are investigating another promising application – the use of plasma to inhibit tumour recurrence after surgery.
Lead author Ryo Ono and colleagues demonstrated that treating cancer resection sites with streamer discharge – a type of low-temperature atmospheric plasma – significantly reduced the recurrence rate of melanoma tumours in mice.
“We believe that plasma is more effective when used as an adjuvant therapy rather than as a standalone treatment, which led us to focus on post-surgical treatment in this study,” says Ono.
In vivo experiments
To create the streamer discharge, the team applied high-voltage pulses (25 kV, 20 ns, 100 pulses/s) to a 3 mm-diameter rod electrode with a hemispherical tip. The rod was placed in a quartz tube with a 4 mm inner diameter, and the working gas – humid oxygen mixed with ambient air – was flowed through the tube. As electrons in the plasma collide with molecules in the gas, the mixture generates cytotoxic reactive oxygen and nitrogen species.
The researchers performed three experiments on mice with melanoma, a skin cancer with a local recurrence rate of up to 10%. In the first experiment, they injected 11 mice with mouse melanoma cells, resecting the resulting tumours eight days later. They then treated five of the mice with streamer discharge for 10 min, with the mouse placed on a grounded plate and the electrode tip 10 mm above the resection site.
Experimental setup Streamer discharge generation and treatment. (Courtesy: J. Phys. D: Appl. Phys. 10.1088/1361-6463/ada98c)
Tumour recurrence occurred in five of the six control mice (no plasma treatment) and two of the five plasma-treated mice, corresponding to recurrence rates of 83% and 40%, respectively. In a second experiment with the same parameters, recurrence rates were 44% in nine control mice and 25% in eight plasma-treated mice.
In a third experiment, the researchers delayed the surgery until 12 days after cell injection, increasing the size of the tumour before resection. This led to a 100% recurrence rate in the control group of five mice. Only one recurrence was seen in five plasma-treated mice, although one mouse that died of unknown causes was counted as a recurrence, resulting in a recurrence rate of 40%.
All of the experiments showed that plasma treatment reduced the recurrence rate by roughly 50%. The researchers note that the plasma treatment did not affect the animals’ overall health.
Cytotoxic mechanisms
To further confirm the cytotoxicity of streamer discharge, Ono and colleagues treated cultured melanoma cells for between 0 and 250 s, at an electrode–surface distance of 10 mm. The cells were then incubated for 3, 6 or 24 h. Following plasma treatments of up to 100 s, most cells were still viable 24 h later. But between 100 and 150 s of treatment, the cell survival rate decreased rapidly.
The experiment also revealed a rapid transition from apoptosis (natural programmed cell death) to late apoptosis/necrosis (cell death due to external toxins) between 3 and 24 h post-treatment. Indeed, 24 h after a 150 s plasma treatment, 95% of the dead cells were in the late stages of apoptosis/necrosis. This finding suggests that the observed cytotoxicity may arise from direct induction of apoptosis and necrosis, combined with inhibition of cell growth at extended time points.
In a previous experiment, the researchers used streamer discharge to treat tumours in mice before resection. This treatment delayed tumour regrowth by at least six days, but all mice still experienced local recurrence. In contrast, in the current study, plasma treatment reduced the recurrence rate.
The difference may be due to different mechanisms by which plasma inhibits tumour recurrence: cytotoxic reactive species killing residual cancer cells at the resection site; or reactive species triggering immunogenic cell death. The team note that either or both of these mechanisms may be occurring in the current study.
“Initially, we considered streamer discharge as the main contributor to the therapeutic effect, as it is the primary source of highly reactive short-lived species,” explains Ono. “However, recent experiments suggest that the discharge within the quartz tube also generates a significant amount of long-lived reactive species (with lifetimes typically exceeding 0.1 s), which may contribute to the therapeutic effect.”
One advantage of the streamer discharge device is that it uses only room air and oxygen, without requiring the noble gases employed in other cold atmospheric plasmas. “Additionally, since different plasma types generate different reactive species, we hypothesized that streamer discharge could produce a unique therapeutic effect,” says Ono. “Conducting in vivo experiments with different plasma sources will be an important direction for future research.”
Looking ahead to use in the clinic, Ono believes that the low cost of the device and its operation should make it feasible to use plasma treatment immediately after tumour resection to reduce recurrence risk. “Currently, we have only obtained preliminary results in mice,” he tells Physics World. “Clinical application remains a long-term goal.”
Using an observatory located deep beneath the Mediterranean Sea, an international team has detected an ultra-high-energy cosmic neutrino with an energy greater than 100 PeV, which is well above the previous record. Made by the KM3NeT neutrino observatory, such detections could enhance our understanding of cosmic neutrino sources or reveal new physics.
“We expect neutrinos to originate from very powerful cosmic accelerators that also accelerate other particles, but which have never been clearly identified in the sky. Neutrinos may provide the opportunity to identify these sources,” explains Paul de Jong, a professor at the University of Amsterdam and spokesperson for the KM3NeT collaboration. “Apart from that, the properties of neutrinos themselves have not been studied as well as those of other particles, and further studies of neutrinos could open up possibilities to detect new physics beyond the Standard Model.”
Neutrinos are subatomic particles with masses less than a millionth that of the electron. They are electrically neutral and interact only rarely with matter, via the weak force. As a result, neutrinos can travel vast cosmic distances without being deflected by magnetic fields or absorbed by interstellar material. “[This] makes them very good probes for the study of energetic processes far away in our universe,” de Jong explains.
Scientists expect high-energy neutrinos to come from powerful astrophysical accelerators – objects that are also expected to produce high-energy cosmic rays and gamma rays. These objects include active galactic nuclei powered by supermassive black holes, gamma-ray bursts, and other extreme cosmic events. However, pinpointing such accelerators remains challenging because their cosmic rays are deflected by magnetic fields as they travel to Earth, while their gamma rays can be absorbed on their journey. Neutrinos, however, move in straight lines and this makes them unique messengers that could point back to astrophysical accelerators.
Underwater detection
Because they rarely interact, neutrinos are studied using large-volume detectors. The largest observatories use natural environments such as deep water or ice, which are shielded from most background noise including cosmic rays.
The KM3NeT observatory is situated on the Mediterranean seabed, with detectors more than 2000 m below the surface. Occasionally, a high-energy neutrino will collide with a water molecule, producing a secondary charged particle. This particle moves faster than the speed of light in water, creating a faint flash of Cherenkov radiation. The detector’s array of optical sensors captures these flashes, allowing researchers to reconstruct the neutrino’s direction and energy.
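The geometry behind this is textbook physics rather than anything specific to KM3NeT: a charged particle travelling at speed βc in a medium of refractive index n emits Cherenkov light only if nβ > 1, on a cone of half-angle

$$\cos\theta_c = \frac{1}{n\beta}.$$

For deep sea water (n ≈ 1.35) and an ultra-relativistic particle (β ≈ 1), θc is roughly 42°, which sets the characteristic light pattern that the optical sensors reconstruct.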
KM3NeT has already identified many high-energy neutrinos, but in 2023 it detected a neutrino with an energy far in excess of any previously detected cosmic neutrino. Now, analysis by de Jong and colleagues puts this neutrino’s energy at about 30 times higher than that of the previous record-holder, which was spotted by the IceCube observatory at the South Pole. “It is a surprising and unexpected event,” he says.
Scientists suspect that such a neutrino could originate from the most powerful cosmic accelerators, such as blazars. The neutrino could also be cosmogenic, being produced when ultra-high-energy cosmic rays interact with the cosmic microwave background radiation.
New class of astrophysical messengers
While this single neutrino has not been traced back to a specific source, it opens the possibility of studying ultra-high-energy neutrinos as a new class of astrophysical messengers. “Regardless of what the source is, our event is spectacular: it tells us that either there are cosmic accelerators that result in these extreme energies, or this could be the first cosmogenic neutrino detected,” de Jong notes.
Neutrino experts not associated with KM3NeT agree on the significance of the observation. Elisa Resconi at the Technical University of Munich tells Physics World, “This discovery confirms that cosmic neutrinos extend to unprecedented energies, suggesting that somewhere in the universe, extreme astrophysical processes – or even exotic phenomena like decaying dark matter – could be producing them”.
Francis Halzen at the University of Wisconsin-Madison, who is IceCube’s principal investigator, adds, “Observing neutrinos with a million times the energy of those produced at Fermilab (ten million for the KM3NeT event!) is a great opportunity to reveal the physics beyond the Standard Model associated with neutrino mass.”
With ongoing upgrades to KM3NeT and other neutrino observatories, scientists hope to detect more of these rare but highly informative particles, bringing them closer to answering fundamental questions in astrophysics.
Resconi explains: “With a global network of neutrino telescopes, we will detect more of these ultra-high-energy neutrinos, map the sky in neutrinos, and identify their sources. Once we do, we will be able to use these cosmic messengers to probe fundamental physics in energy regimes far beyond what is possible on Earth.”
Volcanoes are awe-inspiring beasts. They spew molten rivers, towering ash plumes, and – in rarer cases – delicate glassy formations known as Pele’s hair and Pele’s tears. These volcanic materials, named after the Hawaiian goddess of volcanoes and fire, are the focus of the latest Physics World Stories podcast, featuring volcanologists Kenna Rubin (University of Rhode Island) and Tamsin Mather (University of Oxford).
Pele’s hair is striking: fine, golden filaments of volcanic glass that shimmer like spider silk in the sunlight. Formed when lava is ejected explosively and rapidly stretched into thin strands, these fragile fibres range from 1 to 300 µm thick – similar to human hair. Meanwhile, Pele’s tears – small, smooth droplets of solidified lava – can preserve tiny bubbles of volcanic gases within themselves, trapped in cavities.
These materials are more than just geological curiosities. By studying their structure and chemistry, researchers can infer crucial details about past eruptions. Understanding these “fossil” samples provides insights into the history of volcanic activity and its role in shaping planetary environments.
Rubin and Mather describe what it’s like working in extreme volcanic landscapes. One day, you might be near the molten slopes of active craters, and then on another trip you could be exploring the murky depths of underwater eruptions via deep-sea research submersibles like Alvin.
For a deeper dive into Pele’s hair and tears, listen to the podcast and explore our recent Physics World feature on the subject.
Researchers led by Denis Bartolo, a physicist at the École Normale Supérieure (ENS) of Lyon, France, have constructed a theoretical model that forecasts the movements of confined, densely packed crowds. The study could help predict potentially life-threatening crowd behaviour in confined environments.
To investigate what makes some confined crowds safe and others dangerous, Bartolo and colleagues – also from the Université Claude Bernard Lyon 1 in France and the Universidad de Navarra in Pamplona, Spain – studied the Chupinazo opening ceremony of the San Fermín Festival in Pamplona in four different years (2019, 2022, 2023 and 2024).
The team analysed high-resolution video captured from two locations above the gathering of around 5000 people as the crowd grew in the 50 x 20 m city plaza: swelling from two to six people per square metre, and ultimately peaking at local densities of nine per square metre. A machine-learning algorithm enabled automated detection of the position of each person’s head, from which the localized crowd density was then calculated.
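As a rough illustration of that last step – a simple grid-binning estimate assumed here for clarity, not the pipeline used in the study – the local density can be computed by counting detected heads in small cells and dividing by the cell area:

```python
import numpy as np

# Minimal sketch: estimate local crowd density (people per square metre)
# from detected head positions, using simple grid binning. Illustrative only.

def local_density(head_xy: np.ndarray, plaza_size=(50.0, 20.0), cell=1.0):
    """head_xy: (N, 2) array of head positions in metres within the plaza."""
    nx, ny = int(plaza_size[0] / cell), int(plaza_size[1] / cell)
    counts, _, _ = np.histogram2d(
        head_xy[:, 0], head_xy[:, 1],
        bins=[nx, ny], range=[[0, plaza_size[0]], [0, plaza_size[1]]]
    )
    return counts / cell**2   # people per square metre in each cell

# Example: 5000 uniformly scattered people give ~5 people/m^2 on average
positions = np.random.default_rng(1).uniform([0, 0], [50, 20], size=(5000, 2))
print(local_density(positions).mean())
```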
“The Chupinazo is an ideal experimental platform to study the spontaneous motion of crowds, as it repeats from one year to the next with approximately the same amount of people, and the geometry of the plaza remains the same,” says theoretical physicist Benjamin Guiselin, a study co-author formerly from ENS Lyon and now at the Université de Montpellier.
In a first for crowd studies, the researchers treated the densely packed crowd as a continuum like water, and “constructed a mechanics theory for the crowd movement without making any behavioural assumptions on the motion of individuals,” Guiselin tells Physics World.
Their studies, recently described in Nature, revealed a change in behaviour akin to a phase change when the crowd density passed a critical threshold of four individuals per square metre. Below this density the crowd remained relatively inactive. But above that threshold it started moving, exhibiting localized oscillations that were periodic over about 18 s, and occurred without any external guiding such as corralling.
Unlike a back-and-forth oscillation, this motion – which involves hundreds of people moving over several metres – has an almost circular trajectory that shows chirality (or handedness) and a 50:50 chance of turning to either the right or left. “Our model captures the fact that the chirality is not fixed. Instead it emerges in the dynamics: the crowd spontaneously decides between clockwise or counter-clockwise circular motion,” explains Guiselin, who worked on the mathematical modelling.
“The dynamics is complicated because if the crowd is pushed, then it will react by creating a propulsion force in the direction in which it is pushed: we’ve called this the windsock effect. But the crowd also has a resistance mechanism, a counter-reactive effect, which is a propulsive force opposite to the direction of motion: what we have called the weathercock effect,” continues Guiselin, adding that it is these two competing mechanisms in conjunction with the confined situation that gives rise to the circular oscillations.
The team observed similar oscillations in footage of the 2010 tragedy at the Love Parade music festival in Duisburg, Germany, in which 21 people died and several hundred were injured during a crush.
Early results suggest that the oscillation period for such crowds is proportional to the size of the space they are confined in. But the team want to test their theory at other events, and learn more about both the circular oscillations and the compression waves they observed when people started pushing their way into the already crowded square at the Chupinazo.
If their model is proven to work for all densely packed, confined crowds, it could in principle form the basis for a crowd management protocol. “You could monitor crowd motion with a camera, and as soon as you detect these oscillations emerging try to evacuate the space, because we see these oscillations well before larger amplitude motions set in,” Guiselin explains.
The exhibition – Freedom in the Equation – shares the stories of 10 scientists, highlighting both the scientific potential Ukraine has lost to Russia’s aggression and the contributions Ukrainian scientists have made.
Among them are physicists Vasyl Kladko and Lev Shubnikov. Kladko worked on semiconductor physics and was deputy director of the Institute of Semiconductor Physics in Kyiv. He was killed in 2022 at the age of 65 as he tried to help his family flee Russia’s invasion.
Shubnikov, meanwhile, established a cryogenic lab at the Ukrainian Institute of Physics and Technology in Kharkiv (now known as the Kharkiv Institute of Physics and Technology) in the early 1930s. In 1937, Shubnikov was arrested during Stalin’s regime and accused of espionage and was executed shortly after.
The scientists were selected by Oleksii Boldyrev, a molecular biologist and founder of the online platform myscience.ua, together with Krystyna Semeryn, a literary scholar and publicist.
The portraits were created by Niklas Elmehed, the official artist of the Nobel prize, with the text compiled by Olesia Pavlyshyn, editor-in-chief at the Ukrainian popular-science outlet Kunsht.
The exhibition, which is part of the Science at Risk project, runs until 10 March. “Today, I witness scientists being killed, and preserving their names has become a continuation of my work in historical research and a continuation of resistance against violence toward Ukrainian science,” says Boldyrev.
Physicists at the University of New South Wales (UNSW) are the first to succeed in creating and manipulating quantum superpositions of a single, large nuclear spin. The superposition involves spin states that are very far apart, so it is considered a Schrödinger’s cat state. The work could be important for applications in quantum information processing and quantum error correction.
It was Erwin Schrödinger who, in 1935, devised his famous thought experiment involving a cat that could, worryingly, be both dead and alive at the same time. In his gedanken experiment, the decay of a radioactive atom triggers a mechanism (the breaking of a vial containing a poisonous gas) that kills the cat. However, since the decay of the radioactive atom is a quantum phenomenon, the atom is in a superposition of being decayed and not decayed. If the cat and poison are hidden in a box, we do not know if the cat is alive or dead. Instead, the state of the feline is a superposition of dead and alive – known as a Schrödinger’s cat state – until we open the box.
A Schrödinger’s cat state (or just cat state) is now used to refer to a superposition of two very different states of a quantum system. Creating cat states in the lab is no easy task, but researchers have managed to do this in recent years using the quantum superposition of coherent states of a laser field with different amplitudes, or phases, of the field. They have also created cat states using a trapped ion (with the vibrational state of the ion in the trap playing the role of the cat) and coherent microwave fields confined to superconducting boxes combined with Rydberg atoms and superconducting quantum bits (qubits).
Antimony atom cat
The cat state in the UNSW study is an atom of antimony, which is a heavy atom with a large nuclear spin. The high spin value implies that, instead of just pointing up and down (that is, in one of two directions), the nuclear spin of antimony can be in spin states corresponding to eight different directions. This makes it a high-dimensional quantum system that is valuable for quantum information processing and for encoding error-correctable logical qubits. The atom was embedded in a silicon quantum chip that allows for readout and control of the nuclear spin state.
Normally, a qubit is described by just two quantum states, explains Xi Yu, who is lead author of a paper describing the study. For example, an atom with its spin pointing down can be labelled as the “0” state and the spin pointing up, the “1” state. The problem with such a system is that the information contained in these states is fragile and can be easily lost when a 0 switches to a 1, or vice versa. The probability of this logical error occurring is reduced by creating a qubit using a system like the antimony atom. With its eight different spin directions, a single error is not enough to erase the quantum information – there are still seven quantum states left, and it would take seven consecutive errors to turn the 0 into a 1.
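The count of eight follows from elementary quantum mechanics, assuming the antimony nucleus used has spin I = 7/2 (true of ¹²³Sb, though the isotope is not specified above): a spin I has

$$2I + 1 = 2\times\tfrac{7}{2} + 1 = 8$$

distinct projection states along the quantization axis, compared with just two for a spin-1/2 qubit.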
More room for error
The information is still encoded in binary code (0 and 1), but there is more room for error between the logical codes, says team leader Andrea Morello. “If an error occurs, we detect it straight away, and we can correct it before further errors accumulate.”
The researchers say they were not initially looking to make and manipulate cat states but started with a project on high-spin nuclei for reasons unrelated to quantum information. They were in fact interested in observing quantum chaos in a single nuclear spin, which had been an experimental “holy grail” for a very long time, says Morello. “Once we began working with this system, we first got derailed by the serendipitous discovery of nuclear electric resonance,” he remembers. “We then became aware of some new theoretical ideas for the use of high-spin systems in quantum information and quantum error-correcting codes.”
“We therefore veered towards that research direction, and this is our first big result in that context,” he tells Physics World.
Scalable technology
The main challenge the team had to overcome in their study was to set up seven “clocks” that had to be precisely synchronized, so they could keep track of the quantum state of the eight-level system. Until quite recently, this would have involved cumbersome programming of waveform generators, explains Morello. “The advent of FPGA [field-programmable gate array] generators, tailored for quantum applications, has made this research much easier to conduct now.”
While there have already been a few examples of such physical platforms in which quantum information can be encoded in a (Hilbert) space of dimension larger than two – for example, microwave cavities or trapped ions – these were relatively large in size: bulk microwave cavities are typically the size of a matchbox, he says. “Here, we have reconstructed many of the properties of other high-dimensional systems, but within an atomic-scale object – a nuclear spin. It is very exciting, and quite plausible, to imagine a quantum processor in silicon, containing millions of such Schrödinger cat states.”
The fact that the cat is hosted in a silicon chip means that this technology could be scaled up in the long-term using methods similar to those already employed in the computer chip industry today, he adds.
Looking ahead, the UNSW team now plans to demonstrate quantum error correction in its antimony system. “Beyond that, we are working to integrate the antimony atoms with lithographic quantum dots, to facilitate the scalability of the system and perform quantum logic operations between cat-encoded qubits,” reveals Morello.
The United Nations Educational, Scientific and Cultural Organization (UNESCO) has declared 2025 the International Year of Quantum Science and Technology – or IYQ.
UNESCO kicked off IYQ on 4–5 February at a gala opening ceremony in Paris. Physics World’s Matin Durrani was there, and he shares his highlights from the event in this episode of the Physics World Weekly podcast.
No fewer than four physics Nobel laureates took part in the ceremony, alongside representatives from governments and industry. While some speakers celebrated the current renaissance in quantum research and the burgeoning quantum-technology sector, others called on the international community to ensure that people in all nations benefit from a potential quantum revolution – not just those in wealthier countries. The dangers of promising too much from quantum computers and other technologies were also discussed – as Durrani explains.
Scientists across the US have been left reeling after a spate of executive orders from US President Donald Trump has led to research funding being slashed, staff being told to quit and key programmes being withdrawn. In response to the orders, government departments and external organizations have axed diversity, equity and inclusion (DEI) programmes, scrubbed mentions of climate change from websites, and paused research grants pending tests for compliance with the new administration’s goals.
Since taking up office on 20 January, Trump has signed dozens of executive orders. One ordered the closure of the US Agency for International Development, which has supported medical and other missions worldwide for more than six decades. The administration said it was withdrawing almost all of the agency’s funds and wanted to sack its entire workforce. A federal judge has temporarily blocked the plans, saying they may violate the US’s constitution, which reserves decisions on funding to Congress.
Individual science agencies are under threat too. Politico reported that the Trump administration has asked the National Science Foundation (NSF), which funds much US basic and applied research, to lay off between a quarter and a half of its staff in the next two months. Another report suggests there are plans to cut the agency’s annual budget from roughly $9bn to $3bn. Meanwhile, former officials of the National Oceanic and Atmospheric Administration (NOAA) told CBS News that half its staff could be sacked and its budget slashed by 30%.
Even before they had learnt of plans to cut its staff and budget, officials at the NSF were starting to examine details of thousands of grants it had awarded for references to DEI, climate change and other topics that Trump does not like. The swiftness of the announcements has caused chaos, with recipients of grants suddenly finding themselves unable to access the NSF’s award cash management service, which holds grantees’ funds, including their salaries.
NSF bosses have taken some steps to reassure grantees. “Our top priority is resuming our funding actions and services to the research community and our stakeholders,” NSF spokesperson Mike England told Physics World in late January. In what is a highly fluid situation, there was some respite on 2 February when the NSF announced that access had been restored with the system able to accept payment requests.
“Un-American” actions
Trump’s anti-DEI orders have caused shockwaves throughout US science. According to 404 Media, NASA staff were told on 22 January to “drop everything” to remove mentions of DEI, Indigenous people, environmental justice and women in leadership from public websites. Another victim has been NASA’s Here to Observe programme, which links undergraduates from under-represented groups with scientists who oversee NASA’s missions. Science reported that contracts for half the scientists involved in the programme had been cancelled by the end of January.
It is still unclear, however, what impact the Trump administration’s DEI rules will have on the make-up of NASA’s astronaut corps. Since choosing its first female astronaut in 1978, NASA has sought to make the corps more representative of US demographics. How exactly the agency should move forward will fall to Jared Isaacman, the space entrepreneur and commercial astronaut who has been nominated as NASA’s next administrator.
Anti-DEI initiatives have hit individual research labs too. Physics World understands that Fermilab – the US’s premier particle-physics lab – suspended its DEI office and its women in engineering group in January. Meanwhile, the Fermilab LGBTQ+ group, called Spectrum, was ordered to cease all activities and its mailing list was deleted. Even the rainbow “Pride” flag was removed from the lab’s iconic Wilson Hall.
There was also some confusion when the American Chemical Society appeared to have removed its webpage on diversity and inclusion; in fact, the society had published a new page but failed to put a redirect in place. “Inclusion and Belonging is a core value of the American Chemical Society, and we remain committed to creating environments where people from diverse backgrounds, cultures, perspectives and experiences thrive,” a spokesperson told Physics World. “We know the broken link caused confusion and some alarm, and we apologize.”
Dismantling all federal DEI programmes and related activities will damage lives and careers of millions of American women and men
Neal Lane, Rice University
Such a response – which some opponents denounce as going beyond what is legally required for fear of repercussions if no action is taken – has left it up to individual leaders to underline the importance of diversity in science. Neal Lane, a former science adviser to President Clinton, told Physics World that “dismantling all federal DEI programmes and related activities will damage lives and careers of millions of American women and men, including scientists, engineers, technical workers – essentially everyone who contributes to advancing America’s global leadership in science and technology”.
Lane, who is now a science and technology policy fellow at Rice University in Texas, thinks that the new administration’s anti-DEI actions “will weaken the US” and believes they should be considered “un-American”. “The purpose of DEI policies, programmes and activities is to ensure all Americans have the opportunity to participate and the country is able to benefit from their participation,” he says.
One senior physicist at a US university, who wishes to remain anonymous, told Physics World that those behind the executive orders are relying on institutions and individuals to “comply in advance” with what they perceive to be the spirit of the orders. “They are relying on people to ignore the fine print, which says that executive orders can’t and don’t overwrite existing law. But it is up to scientists to do the reading — and to follow our consciences. More than universities are on the line: the lives of our students and colleagues are on the line.”
Education turmoil
Another target of the Trump administration is the US Department of Education, which was set up in 1979 to oversee everything from pre-school to postgraduate education. It has already put dozens of its civil servants on leave, ostensibly because their work involves DEI issues. Meanwhile, the withholding of funds has led to the cancellation of scientific meetings, mostly focused on medicine and the life sciences, that were scheduled in the US for late January and early February.
Colleges and universities in the US have also reacted to Trump’s anti-DEI executive order. Academic divisions at Harvard University and the Massachusetts Institute of Technology, for example, have already indicated that they will no longer require applicants for jobs to indicate how they plan to advance the goals of DEI. Northeastern University in Boston has removed the words “diversity” and “inclusion” from a section of its website.
Not all academic organizations have fallen into line, however. Danielle Holly, president of the women-only Mount Holyoke College in South Hadley, Massachusetts, says it will forgo contracts with the federal government if they require abolishing DEI. “We obviously can’t enter into contracts with people who don’t allow DEI work,” she told the Boston Globe. “So for us, that wouldn’t be an option.”
Climate concerns
For an administration that doubts the reality of climate change and opposes anti-pollution laws, the Environmental Protection Agency (EPA) is an obvious target, and it too is under fire. Trump administration representatives were taking action even before the Senate approved Lee Zeldin – a former Republican Congressman from New York who has criticized much environmental legislation – as EPA administrator. They removed all outside advisers on the EPA’s scientific advisory board and its clean air scientific advisory committee, purportedly to “depoliticize” the boards.
Once the Senate approved Zeldin on 29 January, the EPA sent an e-mail warning more than 1000 probationary employees who had spent less than a year in the agency that their roles could be “terminated” immediately. Then, according to the New York Times, the agency developed plans to demote longer-term employees who have overseen research, enforcement of anti-pollution laws, and clean-ups of hazardous waste. According to Inside Climate News, staff also found their individual pronouns scrubbed from their e-mails and websites without their permission – the result of an order to remove “gender ideology extremism”.
Critics have also questioned the nomination of Neil Jacobs to lead the NOAA. He was its acting head during Trump’s first term in office, serving during the 2019 “Sharpiegate” affair when Trump used a Sharpie pen to alter a NOAA weather map to indicate that Hurricane Dorian would affect Alabama. While conceding Jacobs’s experience and credentials, Rachel Cleetus of the Union of Concerned Scientists asserts that Jacobs is “unfit to lead” given that he “fail[ed] to uphold scientific integrity at the agency”.
Spending cuts
Another concern for scientists is the quasi-official team led by “special government employee” and SpaceX founder Elon Musk. The administration has charged Musk and his so-called “department of government efficiency”, or DOGE, to identify significant cuts to government spending. Though some of DOGE’s activities have been blocked by US courts, agencies have nevertheless been left scrambling for ways to reduce day-to-day costs.
The National Institutes of Health (NIH), for example, has said it will significantly reduce its funding for the “indirect” costs of research projects it supports – the overheads that, for example, cover the cost of maintaining laboratories, administering grants and paying staff salaries. Under the plans, indirect-cost reimbursement for federally funded research would be capped at 15%, a drastic cut from its usual range.
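To see what such a cap means in practice, consider an illustrative example; the 50% figure below is an assumed, typical-looking negotiated rate used purely for arithmetic, not a number from the NIH plan or from any specific institution.
\[
\underbrace{0.50 \times \$1{,}000{,}000}_{\text{assumed current rate}} = \$500{,}000 \quad\longrightarrow\quad \underbrace{0.15 \times \$1{,}000{,}000}_{\text{capped rate}} = \$150{,}000
\]
On a grant with $1m of direct costs, the institution’s overhead recovery would fall from $500,000 to $150,000, leaving a $350,000 shortfall to absorb or recoup elsewhere.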
NIH personnel have tried to put a positive gloss on its actions. “The United States should have the best medical research in the world,” a statement from NIH declared. “It is accordingly vital to ensure that as many funds as possible go towards direct scientific research costs rather than administrative overhead.”
Just because Elon Musk doesn’t understand indirect costs doesn’t mean Americans should have to pay the price with their lives
US senator Patty Murray
Opponents of the Trump administration, however, are unconvinced. They argue that the measure will imperil critical clinical research because many academic recipients of NIH funds do not have the endowments to compensate for the losses. “Just because Elon Musk doesn’t understand indirect costs doesn’t mean Americans should have to pay the price with their lives,” says US senator Patty Murray, a Democrat from Washington state.
Slashing universities’ share of grants to below 15% could, however, force institutions to make up the lost income by raising tuition fees, which could “go through the roof”, according to the anonymous senior physicist contacted by Physics World. “Far from being a populist policy, these cuts to overheads are an attack on the subsidies that make university education possible for students from a range of socioeconomic backgrounds. The alternative is to essentially shut down the university research apparatus, which would in many ways be the death of American scientific leadership and innovation.”
Musk and colleagues have also gained unprecedented access to government websites related to civil servants and the country’s entire payments system. That access has drawn criticism from several commentators who note that, since Musk is a recipient of significant government support through his SpaceX company, he could use the information for his own advantage.
“Musk has access to all the data on federal research grantees and contractors: social security numbers, tax returns, tax payments, tax rebates, grant disbursements and more,” wrote physicist Michael Lubell from City College of New York. “Anyone who depends on the federal government and doesn’t toe the line might become a target. This is right out of (Hungarian prime minister) Viktor Orbán’s playbook.”
A new ‘dark ages’
As for the long-term impact of these changes, James Gates – a theoretical physicist at the University of Maryland and a past president of the US National Society of Black Physicists – is blunt. “My country is in for a 50-year period of a new dark ages,” he told an audience at the Royal College of Art in London, UK, on 7 February.
My country is in for a 50-year period of a new dark ages
James Gates, University of Maryland
Speaking at an event sponsored by the college’s association for Black students – RCA BLK – and supported by the UK’s organization for Black physicists, the Blackett Lab Family, he pointed out that the US has been through such periods before. As examples, Gates cited the 1950s “Red Scare” and the period after 1876 when the federal government abandoned efforts to enforce the civil rights of Black Americans in southern states and elsewhere.
However, he is not entirely pessimistic. “Nothing is permanent in human behaviour. The question is the timescale,” Gates said. “There will be another dawn, because that’s part of the human spirit.”
With additional reporting by Margaret Harris, online editor of Physics World, in London and Michael Banks, news editor of Physics World
Bacterial cells in solutions of polymers such as mucus grow into long cable-like structures that buckle and twist on each other, forming a “living gel” made of intertwined cells. This behaviour is very different from what happens in polymer-free liquids, and researchers at the California Institute of Technology (Caltech) and Princeton University, both in the US, say that understanding it could lead to new treatments for bacterial infections in patients with cystic fibrosis. It could also help scientists understand how cells organize themselves into polymer-secreting conglomerations of bacteria called biofilms that can foul medical and industrial equipment.
Interactions between bacteria and polymers are ubiquitous in nature. For example, many bacteria live as multicellular colonies in polymeric fluids, including host-secreted mucus, exopolymers in the ocean and the extracellular polymeric substance that encapsulates biofilms. Often, these growing colonies can become infectious, including in cystic fibrosis patients, whose mucus is more concentrated than it is in healthy individuals.
Laboratory studies of bacteria, however, typically focus on cells in polymer-free fluids, explains study leader Sujit Datta, a biophysicist and bioengineer at Caltech. “We wondered whether interactions with extracellular polymers influence proliferating bacterial colonies,” says Datta, “and if so, how?”
Watching bacteria grow in mucus
In their work, which is detailed in Science Advances, the Caltech/Princeton team used a confocal microscope to monitor how different species of bacteria grew in purified samples of mucus. The samples, Datta explains, were provided by colleagues at the Massachusetts Institute of Technology and the Albert Einstein College of Medicine.
Normally, when bacterial cells divide, the resulting “daughter” cells diffuse away from each other. However, in polymeric mucus solutions, Datta and colleagues observed that the cells instead remained stuck together and began to form long cable-like structures. These cables can contain thousands of cells, and eventually they start bending and folding on top of each other to form an entangled network.
“We found that we could quantitatively predict the conditions under which such cables form using concepts from soft-matter physics typically employed to describe non-living gels,” Datta says.
Support for bacterial colonies
The team’s work reveals that polymers, far from being a passive medium, play a pivotal role in supporting bacterial life by shaping how cells grow in colonies. The form of these colonies – their morphology – is known to influence cell-cell interactions and is important for maintaining their genetic diversity. It also helps determine how resilient a colony is to external stressors.
“By revealing this previously-unknown morphology of bacterial colonies in concentrated mucus, our finding could help inform ways to treat bacterial infections in patients with cystic fibrosis, in which the mucus that lines the lungs and gut becomes more concentrated, often causing the bacterial infections that take hold in that mucus to become life-threatening,” Datta tells Physics World.
Friend or foe?
As for why cable formation is important, Datta explains that there are two schools of thought. The first is that by forming large cables, bacteria may become more resilient against the body’s immune system, making them more infectious. The other possibility is that the reverse is true – that cable formation could in fact leave bacteria more exposed to the host’s defence mechanisms. These include “mucociliary clearance”, which is the process by which tiny hairs on the surface of the lungs constantly sweep up mucus and propel it upwards.
“Could it be that when bacteria are all clumped together in these cables, it is actually easier to get rid of them by expelling them out of the body?” Datta asks.
Investigating these hypotheses is an avenue for future research, he adds. “Ours is a fundamental discovery on how bacteria grow in complex environments, more akin to their natural habitats,” Datta says. “We also expect it will motivate further work exploring how cable formation influences the ways in which bacteria interact with hosts, phages, nutrients and antibiotics.”
IBM is on a mission to transform quantum computers from applied research endeavour to mainstream commercial opportunity. It wants to go beyond initial demonstrations of “quantum utility”, where these devices outperform classical computers only in a few niche applications, and reach the new frontier of “quantum advantage”. That’ll be where quantum computers routinely deliver significant, practical benefits beyond approximate classical computing methods, calculating solutions that are cheaper, faster and more accurate.
Unlike classical computers, which rely on binary bits that can be either 0 or 1, quantum computers exploit quantum bits (qubits) that can exist in a superposition of the 0 and 1 states. This superposition, coupled with quantum entanglement (a correlation between qubits), enables quantum computers to perform some types of calculation significantly faster than classical machines – such as problems in quantum chemistry and molecular reaction kinetics.
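In symbols – a standard textbook statement rather than anything specific to IBM’s hardware – a single qubit occupies the state
\[
|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,
\]
where the complex amplitudes α and β are constrained only by normalization: a measurement returns 0 with probability |α|² and 1 with probability |β|².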
In the vanguard of IBM’s quantum R&D effort is Sarah Sheldon, a principal research scientist and senior manager of quantum theory and capabilities at the IBM Thomas J Watson Research Center in Yorktown Heights, New York. After a double-major undergraduate degree in physics and nuclear science and engineering at Massachusetts Institute of Technology (MIT), Sheldon received her PhD from MIT in 2013 – though she did much of her graduate research in nuclear science and engineering as a visiting scholar at the Institute for Quantum Computing (IQC) at the University of Waterloo, Canada.
At IQC, Sheldon was part of a group studying quantum control techniques, manipulating the spin states of nuclei in nuclear-magnetic-resonance (NMR) experiments. “Although we were using different systems to today’s leading quantum platforms, we were applying a lot of the same kinds of control techniques now widely deployed across the quantum tech sector,” Sheldon explains.
“Upon completion of my PhD, I opted instinctively for a move into industry, seeking to apply all that learning in quantum physics into immediate and practical engineering contributions,” she says. “IBM, as one of only a few industry players back then with an experimental group in quantum computing, was the logical next step.”
Physics insights, engineering solutions
Sheldon currently heads a cross-disciplinary team of scientists and engineers developing techniques for handling noise and optimizing performance in novel experimental demonstrations of quantum computers. It’s ambitious work that ties together diverse lines of enquiry spanning everything from quantum theory and algorithm development to error mitigation, error correction and techniques for characterizing quantum devices.
We’re investigating how to extract the optimum performance from current machines online today as well as from future generations of quantum computers.
Sarah Sheldon, IBM
“From algorithms to applications,” says Sheldon, “we’re investigating what we can do with quantum computers: how to extract the optimum performance from current machines online today as well as from future generations of quantum computers – say, five or 10 years down the line.”
A core priority for Sheldon and colleagues is how to manage the environmental noise that plagues current quantum computing systems. Qubits are all too easily disturbed, for example, by their interactions with environmental fluctuations in temperature, electric and magnetic fields, vibrations, stray radiation and even interference between neighbouring qubits.
The ideal solution – a strategy called error correction – involves storing the same information across multiple qubits, such that errors are detected and corrected when one or more of the qubits are affected by noise. The problem with such “fault-tolerant” quantum computers is that they need millions of qubits, far beyond what today’s small-scale quantum architectures can offer. (For context, IBM’s latest Quantum Development Roadmap outlines a practical path to error-corrected quantum computers by 2029.)
“Ultimately,” Sheldon notes, “we’re working towards large-scale error-corrected systems, though for now we’re exploiting near-term techniques like error mitigation and other ways of managing noise in these systems.” In practical terms, this means squeezing more out of today’s quantum architectures without increasing the number of qubits – essentially, pairing them with classical computers and reducing noise by taking more samples on the quantum hardware and combining the results with classical post-processing.
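One way to make “more samples plus classical processing” concrete is zero-noise extrapolation, a widely used error-mitigation idea. The Python sketch below illustrates the general concept under simplified assumptions; it is not a description of IBM’s specific pipeline, and noisy_expectation is a hypothetical stand-in for a measurement on real quantum hardware.

import numpy as np

rng = np.random.default_rng(1)

def noisy_expectation(noise_scale):
    # Hypothetical stand-in for a measured expectation value: the "true"
    # answer is 1.0, degraded roughly linearly as the noise is amplified.
    return 1.0 - 0.3 * noise_scale + rng.normal(0.0, 0.01)

# Evaluate the stand-in at several amplified noise levels (on hardware this
# would mean re-running the same circuit with the noise deliberately boosted)
scales = np.array([1.0, 1.5, 2.0, 3.0])
values = np.array([noisy_expectation(s) for s in scales])

# Fit a simple model classically and read off its value at zero noise
coeffs = np.polyfit(scales, values, deg=1)
print("mitigated estimate:", np.polyval(coeffs, 0.0))

The extra work all happens in repeated runs on the quantum processor and in the classical fit, which is the sense in which no additional qubits are needed.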
Strength in diversity
For Sheldon, one big selling point of the quantum tech industry is the opportunity to collaborate with people from a wide range of disciplines. “My team covers a broad-scope R&D canvas,” she says. There are mathematicians and computer scientists, for example, working on complexity theory and novel algorithm development; physicists specializing in quantum simulation and incorporating error suppression techniques; as well as quantum chemists working on simulations of molecular systems.
“Quantum is so interdisciplinary – you are constantly learning something new from your co-workers,” she adds. “I started out specializing in quantum control techniques, before moving onto experimental demonstrations of larger multiqubit systems while working ever more closely with theorists.”
Computing reimagined Quantum scientists and engineers at the IBM Thomas J Watson Research Center are working to deliver IBM’s Quantum Development Roadmap and a practical path to error-corrected quantum computers by 2029. (Courtesy: Connie Zhou for IBM)
External research collaborations are also essential for Sheldon and her colleagues. Front and centre is the IBM Quantum Network, which provides engagement opportunities with more than 250 organizations across the “quantum ecosystem”. These range from top-tier labs – such as CERN, the University of Tokyo and the UK’s National Quantum Computing Centre – to quantum technology start-ups like Q-CTRL and Algorithmiq. It also encompasses established industry players aiming to be early-adopting end-users of quantum technologies (among them Bosch, Boeing and HSBC).
“There’s a lot of innovation happening across the quantum community,” says Sheldon, “so external partnerships are incredibly important for IBM’s quantum R&D programme. While we have a deep and diverse skill set in-house, we can’t be the domain experts across every potential use-case for quantum computing.”
Opportunity knocks
Notwithstanding the pace of innovation, there are troubling clouds on the horizon. In particular, there is a shortage of skilled workers in the quantum workforce, with established technology companies and start-ups alike desperate to attract more physical scientists and engineers. The task is to fill not only specialist roles – be it error-correction scientists or quantum-algorithm developers – but more general positions such as test and measurement engineers, data scientists, cryogenic technicians and circuit designers.
Yet Sheldon remains upbeat about addressing the skills gap. “There are just so many opportunities in the quantum sector,” she notes. “The field has changed beyond all recognition since I finished my PhD.” Perhaps the biggest shift has been the dramatic growth of industry engagement and, with it, all sorts of attractive career pathways for graduate scientists and engineers. Those range from firms developing quantum software or hardware to the end-users of quantum technologies in sectors such as pharmaceuticals, finance or healthcare.
“As for the scientific community,” argues Sheldon, “we’re also seeing the outline take shape for a new class of quantum computational scientist. Make no mistake, students able to integrate quantum computing capabilities into their research projects will be at the leading edge of their fields in the coming decades.”
Ultimately, Sheldon concludes, early-career scientists shouldn’t necessarily over-think things regarding that near-term professional pathway. “Keep it simple and work with people you like on projects that are going to interest you – whether quantum or otherwise.”
The COVID-19 pandemic provided a driving force for researchers to seek out new disinfection methods that could tackle future viral outbreaks. One promising approach relies on the use of nanoparticles, with several metal and metal oxide nanoparticles showing anti-viral activity against SARS-CoV-2, the virus that causes COVID-19. With this in mind, researchers from Sweden and Estonia investigated the effect of such nanoparticles on two different virus types.
Aiming to elucidate the nanoparticles’ mode of action, they discovered a previously unknown antiviral mechanism, reporting their findings in Nanoscale.
The researchers – from the Swedish University of Agricultural Sciences (SLU) and the University of Tartu – examined triethanolamine terminated titania (TATT) nanoparticles, spherical 3.5-nm diameter titanium dioxide (titania) particles that are expected to interact strongly with viral surface proteins.
They tested the antiviral activity of the TATT nanoparticles against two types of virus: swine transmissible gastroenteritis virus (TGEV) – an enveloped coronavirus that’s surrounded by a phospholipid membrane and transmembrane proteins; and the non-enveloped encephalomyocarditis virus (EMCV), which does not have a phospholipid membrane. SARS-CoV-2 has a similar structure to TGEV: an enveloped virus with an outer lipid membrane and three proteins forming the surface.
In this latest investigation, the team aimed to determine which of two potential mechanisms – blocking of surface proteins, or membrane disruption via oxidation by nanoparticle-generated reactive oxygen species – was the likely cause of TATT’s antiviral activity. The first of these effects usually occurs at low (nanomolar to micromolar) nanoparticle concentrations, the latter at higher (millimolar) concentrations.
Mode of action
To assess the nanoparticles’ antiviral activity, the researchers exposed viral suspensions to colloidal TATT solutions for 1 h, at room temperature and in the dark (without UV illumination). For comparison, they repeated the process with silicotungstate polyoxometalate (POM) nanoparticles, which are not able to bind strongly to cell membranes.
The nanoparticle-exposed viruses were then used to infect cells and the resulting cell viability served as a measure of the virus infectivity. The team note that the nanoparticles alone showed no cytotoxicity against the host cells.
Measuring viral infectivity after nanoparticle exposure revealed that POM nanoparticles did not exhibit antiviral effects on either virus, even at relatively high concentrations of 1.25 mM. TATT nanoparticles, on the other hand, showed significant antiviral activity against the enveloped TGEV virus at concentrations starting from 0.125 mM, but did not affect the non-enveloped EMCV virus.
Based on previous evidence that TATT nanoparticles interact strongly with proteins in darkness, the researchers expected to see antiviral activity at a nanomolar level. But the finding that TATT activity only occurred at millimolar concentrations, and only affected the enveloped virus, suggests that the antiviral effect is not due to blocking of surface proteins. And as titania is not oxidative in darkness, the team propose that the antiviral effect is actually due to direct complexation of nanoparticles with membrane phospholipids – a mode of antiviral action not previously considered.
“Typical nanoparticle concentrations required for effects on membrane proteins correspond to the protein content on the virus surface. With a 1:1 complex, we would need at most nanomolar concentrations,” Kessler explains. “We saw an effect at about 1 mM, which is far higher. This was the indication for us that the effect was on the whole of the membrane.”
Verifying the membrane effect
To corroborate their hypothesis, the researchers examined the leakage of dye-labelled RNA from the TGEV coronavirus after 1 h exposure to nanoparticles. The fluorescence signal from the dye showed that TATT-treated TGEV released significantly more RNA than non-exposed virus, attributed to the nanoparticles disrupting the virus’s phospholipid membrane.
Finally, the team studied the interactions between TATT nanoparticles and two model phospholipid compounds. Both molecules formed strong complexes with TATT nanoparticles, while their interaction with POM nanoparticles was weak. This additional verification led the researchers to conclude that the antiviral effect of TATT in dark conditions is due to direct membrane disruption via complexation of titania nanoparticles with phospholipids.
“To the best of our knowledge, [this] proves a new pathway for metal oxide nanoparticles antiviral action,” they write.
Importantly, the nanoparticles are non-toxic, and work at room temperature without requiring UV illumination – enabling simple and low-cost disinfection methods. “While it was known that disinfection with titania could work in UV light, we showed that no special technical measures are necessary,” says Kessler.
Kessler suggests that the nanoparticles could be used to coat surfaces to destroy enveloped viruses, or in cost-effective filters to decontaminate air or water. “[It should be] possible to easily create antiviral surfaces that don’t require any UV activation just by spraying them with a solution of TATT, or possibly other oxide nanoparticles with an affinity to phosphate, including iron and aluminium oxides in particular,” he tells Physics World.
Carbon-based organic photovoltaics (OPVs) may be much better than previously thought at withstanding the high-energy radiation and sub-atomic particle bombardments of space environments. This finding, by researchers at the University of Michigan in the US, challenges a long-standing belief that OPV devices systematically degrade under conditions such as those encountered by spacecraft in low-Earth orbit. If verified in real-world tests, the finding suggests that OPVs could one day rival traditional thin-film photovoltaic technologies based on rigid semiconductors such as gallium arsenide.
Lightweight, robust, radiation-resilient photovoltaics are critical technologies for many aerospace applications. OPV cells are particularly attractive for this sector because they are ultra-lightweight, thermally stable and highly flexible. This last property allows them to be integrated onto curved surfaces as well as flat ones.
Today’s single-junction OPV devices have a further advantage. Thanks to power conversion efficiencies (PCEs) that now exceed 20%, their specific power – that is, the power generated per unit weight – can be as high as 40 W/g. This is significantly higher than that of traditional photovoltaic technologies, including those based on silicon (1 W/g) and gallium arsenide (3 W/g) on flexible substrates. Devices with such a high specific power could provide energy for small spacecraft heading into low-Earth orbit and beyond.
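A rough back-of-the-envelope check shows what such figures imply about weight; the solar irradiance value used here, roughly 1.36 kW/m² above the atmosphere, is an assumption for illustration rather than a number taken from the study.
\[
\frac{0.20 \times 1360\ \mathrm{W\,m^{-2}}}{40\ \mathrm{W\,g^{-1}}} \approx 6.8\ \mathrm{g\,m^{-2}}
\]
In other words, a 20%-efficient cell delivers roughly 270 W per square metre of sunlight, so achieving 40 W/g requires the complete cell and substrate to weigh only a few grams per square metre – far lighter than rigid crystalline panels.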
Until now, however, scientists believed that these materials had a fatal flaw for space applications: they weren’t robust to irradiation by the energetic particles (predominantly fluxes of electrons and protons) that spacecraft routinely encounter.
Testing two typical OPV materials
In the new work, researchers led by electrical and computer engineer Yongxi Li and physicist Stephen Forrest analysed how two typical classes of OPV materials behave when exposed to protons with differing energies. They did this by characterizing the materials’ optoelectronic properties before and after irradiation. The first group was made up of small molecules (DBP, DTDCPB and C70) grown using a technique called vacuum thermal evaporation (VTE). The second group consisted of solution-processed small molecules and polymers (PCE-10, PM6, BT-CIC and Y6).
The team’s measurements show that the OPVs grown by VTE retained their initial photovoltaic efficiency under radiation fluences of up to 10¹² cm⁻². In contrast, the polymer-based OPVs lost 50% of their original efficiency under the same conditions. This, say the researchers, is because proton irradiation breaks carbon–hydrogen bonds in the polymers’ alkyl side chains. This leads to polymer cross-linking and the generation of charge traps that imprison electrons and prevent them from generating useful current.
The good news, Forrest says, is that many of these defects can be mended by thermally annealing the materials at temperatures of 45 °C or less. After such an annealing, the cell’s PCE returns to nearly 90% of its value before irradiation. This means that Sun-facing solar cells made of these materials could essentially “self-heal”, though Forrest acknowledges that whether this actually happens in deep space is a question that requires further investigation. “It may be more straightforward to design the material so that the electron traps never appear in the first place or by filling them with other atoms, so eliminating this problem,” he says.
According to Li, the new study, which is detailed in Joule, could aid the development of standardized stability tests for how protons interact with OPV devices. Such tests already exist for c-Si and GaAs solar cells, but not for OPVs, he says.
The Michigan researchers say they will now be developing materials that combine high PCEs with strong resilience to proton exposure. “We will then use these materials to fabricate OPV devices that we will then test on CubeSats and spacecraft in real-world environments,” Li tells Physics World.
International conferences are a great way to meet people from all over the world to share the excitement of physics and discuss the latest developments in the subject. But the International Conference on Women in Physics (ICWIP) offers more, allowing us to listen to the experiences of people from many diverse backgrounds and cultures. At the same time, it highlights the many challenges that women in physics still face.
The ICWIP series is organized by the International Union of Pure and Applied Physics (IUPAP) and the week-long event typically features a mixture of plenaries, workshops and talks. Prior to the COVID-19 pandemic, the conferences were held in various locations across the world, but the last two have been held entirely online. The last such meeting – the 8th ICWIP run from India in 2023 – saw around 300 colleagues from 57 countries attend. I was part of a seven-strong UK contingent – at various stages of our careers – who gave a presentation describing the current situation for women in physics in the UK.
Being held solely online didn’t stop delegates fostering a sense of community or discussing their predicaments and challenges. What became evident during the week was the extent and types of issues that women from across the globe still have to contend with. One is the persistence of implicit and explicit gender bias in their institutions or workplaces. This, along with negative stereotyping of women, produces discrepancies between male and female numbers in institutions, particularly at postgraduate level and beyond. Women often end up choosing not to pursue physics later into their careers and being reluctant to take up leadership roles.
Much more needs to be done to ensure women are encouraged in their careers. Indeed, women often face challenging work–life balances, with some expected to play a greater role in family commitments than men, and have little support at their workplaces. One postdoctoral researcher at the 2023 meeting, for example, attempted to discuss her research poster in the virtual conference room while looking after her young children at home – the literal balancing of work and life in action.
Open forum The author and co-presenters at the most recent International Conference on Women in Physics. Represented by avatars online, they gave a presentation on women in physics in the UK. (Courtesy: Chethana Setty)
To improve their circumstances, delegates suggested enhancing legislation to combat gender bias and improve institutional culture through education to reduce negative stereotypes. More should also be done to improve networks and professional associations for women in physics. Another factor mentioned at the meeting, meanwhile, is the importance of early education and issues related to equity of teaching, whether delivered face-to-face or online.
But women can face disadvantages other than their gender, such as socioeconomic status and identity, resulting in a unique set of challenges for them. This is the principle of intersectionality and was widely discussed in the context of problems in career progression.
As we now look forward to the next ICWIP, there is still a lot more to do. We must ensure that women can continue in their physics careers while recognizing that intersectionality will play an increasingly significant role in shaping future equity, diversity and inclusion policies. It is likely that a new team will soon be sought from academia and industry, comprising individuals at various career stages, to represent the UK at the next ICWIP. Please do get involved if you are interested – participation is not limited to women.
Women are doing physics in a variety of challenging circumstances. Gaining an international outlook of different cultural perspectives, as is possible at an international conference like the ICWIP, helps to put things in context and highlights the many common issues faced by women in physics. Taking the time to listen and learn from each other is critical, a process that can facilitate collaboration on issues that affect us all. Fundamentally, we all share a passion for physics, and endeavour to be catalysts for positive change for future generations.
This article was based on discussions with Sally Jordan from the Open University; Holly Campbell, UK Atomic Energy Authority; Josie C, AWE; Wendy Sadler and Nils Rehm, Cardiff University; and Sarah Bakewell and Miriam Dembo, Institute of Physics
We tend to define ourselves by the subjects we studied, and I am no different. I originally did physics before going on to complete a PhD in aeronautical engineering, which has led to a lifelong career in aerospace.
However, it took me quite a few years before I realized that there is more than one route to an enjoyable and successful career. I used to think that a career began at the “coal face” – doing things you were trained for or had a specialist knowledge of – before managing projects then products or people as you progressed to loftier heights.
Many of us naturally fall into one of three fundamental roles: artisan, architect or artist. So which are you?
At some point, I began to realize that while companies often adopt this linear approach to career paths, not everyone is comfortable with it. In fact, I now think that many of us naturally fall into one of three fundamental roles: artisan, architect or artist. So which are you?
Artisans are people who focus on creating functional, practical and often decorative items using hands-on methods or skills. Their work emphasizes craftsmanship, attention to detail and the quality of the finished product. For scientists and engineers, artisans are highly skilled people who apply their technical knowledge and know-how. Let’s be honest: they are the ones who get the “real work” done. From programmers to machinists and assemblers, these are the people who create detailed designs and make or maintain a high-quality product.
Architects, on the other hand, combine vision with technical knowledge to create functional and effective solutions. Their work involves designing, planning and overseeing. They have a broader view of what’s happening and may be responsible for delivering projects. They need to ensure tasks are appropriately prioritized and keep things on track and within budget.
Architects also help by guiding on best practice and resolving or unblocking issues. They are the people responsible for ensuring that the end result meets the needs of users and, where applicable, complies with regulations. Typically, this role involves running a project or team – think principal investigator, project manager, software architect or systems engineer.
As for artists, they are the people who have a big picture view of the world – they will not have eyes for the finer details. They are less constrained by a framework and are comfortable working with minimal formal guidance and definition. They have a vision of what will be needed for the future – whether that’s new products and strategic goals or future skills and technology requirements.
Artists set the targets for how an organization, department or business needs to grow and they define strategies for how a business will develop its competitive edge. Artists are often leaders and chiefs.
Which type are you?
To see how these personas work in practice, imagine working for a power utility provider. If there’s a power outage, the artisans will be the people who get the power back on by locating and fixing damaged power lines, repairing substations and so on. They are practical people who know how to make things work.
The architect will be organizing the repair teams, working out who goes to which location, and what to prioritize, ensuring that customers are kept happy and senior leaders are kept informed of progress. The artist, meanwhile, will be thinking about the future. How, for example, can utilities protect themselves better from storm damage and what new technologies or designs can be introduced to make the supply more resilient and minimize disruption?
Predominantly, artisans are practical, architects are tactical and artists are strategic, but there is overlap between these qualities. Artisans, architects and artists differ in their goals and methods, yet the boundaries between them are blurred. Based on my gut experience as a physicist in industry, I’d say the breakdown between the different skills is roughly as shown in the figure below.
Varying values Artisans, architects and artists don’t have only one kind of attribute but are practical, tactical and strategic in different proportions. The numbers shown here are based on the author’s gut feeling after working in industry for more than 30 years.
Now this breakdown is not hard and fast. To succeed in your career, you need to be creative, inventive and skilful – whatever your role. While working with your colleagues, you need to engage in common processes such as adhering to relevant standards, regulations and quality requirements to deliver quality solutions and products. But thinking of ourselves as artisans, architects or artists may explain why each of us is suited to a certain role.
Know your strengths
Even though we all have something of the other personas in us, what’s important is to know what your own core strength is. I used to believe that the only route to a successful career was to work through each of these personas – starting out as an artisan, turning into an architect, and ultimately becoming an artist. And to be fair, this is how many career paths are structured, which is why we’re often encouraged to think this way.
However, I have worked with people who liked “hands-on” work so much that they didn’t want to move to a different role, even though it meant turning down a significant promotion. I also know others who have indeed moved between personas, only to discover that the new type of work did not suit them.
Trouble is, although it’s usually possible to retrace steps, it’s not always straightforward to do so. Quite why that should be the case is not entirely clear. It’s certainly not because people are unwilling to accept a pay cut, but more because changing tack is seen as a retrograde step for both employees and their employers.
To be successful, any team, department or business needs to not only understand the importance of this skills mix but also recognize it’s not a simple pipeline – all three personas are critical to success. So if you don’t know already, I encourage you to think about what you enjoy doing most, using your insights to proactively drive career conversations and decisions. Don’t be afraid to emphasize where your “value add” lies.
If you’re not sure whether a change in persona is right for you, seek advice from mentors and peers or look for a secondment to try it out. The best jobs are the ones where you can spend most of your time doing what you love doing. Whether you’re an artisan, architect or artist – the most impactful employees are the ones who really enjoy what they do.
A new type of quantum bit (qubit) that stores information in a quantum dot with the help of an ensemble of nuclear spin states has been unveiled by physicists in the UK and Austria. Led by Dorian Gangloff and Mete Atatüre at the University of Cambridge, the team created a collective quantum state that could be used as a quantum register to store and relay information in a quantum communication network of the future.
Quantum communication networks are used to exchange and distribute quantum information between remotely-located quantum computers and other devices. As well as enabling distributed quantum computing, quantum networks can also support secure quantum cryptography. Today, these networks are in the very early stages of development and use the entangled quantum states of photons to transmit information. Network performance is severely limited by decoherence, whereby the quantum information held by photons is degraded as they travel long distances. As a result, effective networks need repeater nodes that receive and then amplify weakened quantum signals.
“To address these limitations, researchers have focused on developing quantum memories capable of reliably storing entangled states to enable quantum repeater operations over extended distances,” Gangloff explains. “Various quantum systems are being explored, with semiconductor quantum dots being the best single-photon generators delivering both photon coherence and brightness.”
Single-photon emission
Quantum dots are widely used for their ability to emit single photons at specific wavelengths. These photons are created by electronic transitions in quantum dots and are ideal for encoding and transmitting quantum information.
However, the electronic spin states of quantum dots are not particularly good at storing quantum information for long enough to be useful as stationary qubits (or nodes) in a quantum network. This is because they contain hundreds or thousands of nuclei with spins that fluctuate. The noise generated by these fluctuations causes the decoherence of qubits based on electronic spin states.
In their previous research, Gangloff and Atatüre’s team showed how this noise could be controlled by sensing how it interacts with the electronic spin states.
Atatüre says, “Building on our previous achievements, we suppressed random fluctuations in the nuclear ensemble using a quantum feedback algorithm. This is already very useful as it dramatically improves the electron spin qubit performance.”
Magnon excitation
Now, using a gallium arsenide quantum dot, the team has used the feedback algorithm to stabilize 13,000 nuclear spin states in a collective, entangled “dark state”. This is a stable quantum state that cannot absorb or emit photons. By introducing just a single nuclear magnon (spin flip) excitation, shared across all 13,000 nuclei, they could then flip the entire ensemble between two different collective quantum states.
Each of these collective states could respectively be defined as a 0 and a 1 in a binary quantum logic system. The team then showed how quantum information could be exchanged between the nuclear system and the quantum dot’s electronic qubit with a fidelity of about 70%.
“The quantum memory maintained the stored state for approximately 130 µs, validating the effectiveness of our protocol,” Gangloff explains. “We also identified unambiguously the factors limiting the current fidelity and storage time, including crosstalk between nuclear modes and optically induced spin relaxation.”
The researchers are hopeful that their approach could transform one of the biggest limitations to quantum dot-based communication networks into a significant advantage.
“By integrating a multi-qubit register with quantum dots – the brightest and already commercially available single-photon sources – we elevate these devices to a much higher technology readiness level,” Atatüre explains.
With some further improvements to their system’s fidelity, the researchers are now confident that it could be used to strengthen interactions between quantum dot qubits and the photonic states they produce, ultimately leading to longer coherence times in quantum communication networks. Elsewhere, it could even be used to explore new quantum phenomena, and gather new insights into the intricate dynamics of quantum many-body systems.
Whether it’s the Olympics or the FIFA World Cup, all big global events need a cheeky, fun mascot. So welcome to Quinnie – the official mascot for the International Year of Quantum Science and Technology (IYQ) 2025.
Unveiled at the launch of the IYQ at the headquarters of UNESCO in Paris on 4 February, Quinnie has been drawn by Jorge Cham, the creator of the long-running cartoon strip PHD Comics.
Quinnie was developed for UNESCO in a collaboration between Cham and Physics Magazine, which is published by the American Physical Society (APS) – one of the founding partners of IYQ.
Riding high Quinnie surfing on a quantum wave function. (Courtesy: Jorge Cham)
“Quinnie represents a young generation approaching quantum science with passion, ingenuity, and energy,” says Physics editor Matteo Rini. “We imagine her effortlessly surfing on quantum-mechanical wave functions and playfully engaging with the knottiest quantum ideas, from entanglement to duality.”
Quinnie is set to appear in a series of animated cartoons that the APS will release throughout the year.
This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.
Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.
A newly-discovered class of quasiparticles known as fractional excitons offers fresh opportunities for condensed-matter research and could reveal unprecedented quantum phases, say physicists at Brown University in the US. The new quasiparticles, which are neither bosons nor fermions and carry no charge, could have applications in quantum computing and sensing, they say.
In our everyday, three-dimensional world, particles are classified as either fermions or bosons. Fermions such as electrons follow the Pauli exclusion principle, which prevents them from occupying the same quantum state. This property underpins phenomena like the structure of atoms and the behaviour of metals and insulators. Bosons, on the other hand, can occupy the same state, allowing for effects like superconductivity and superfluidity.
Fractional excitons defy this traditional classification, says Jia Leo Li, who led the research. Their properties lie somewhere in between those of fermions and bosons, making them more akin to anyons, which are particles that exist only in two-dimensional systems. But that’s only one aspect of their unusual nature, Li adds. “Unlike typical anyons, which carry a fractional charge of an electron, fractional excitons are neutral particles, representing a distinct type of quantum entity,” he says.
The experiment
Li and colleagues created the fractional excitons using two sheets of graphene – a form of carbon just one atom thick – separated by a layer of another two-dimensional material, hexagonal boron nitride. This layered setup allowed them to precisely control the movement of electrons and positively-charged “holes” and thus to generate excitons, which are pairs of electrons and holes that behave like single particles.
The team then applied a 12 T magnetic field to their bilayer structure. This strong field caused the electrons in the graphene to split into fractional charges – a well-known phenomenon that occurs in the fractional quantum Hall effect. “Here, strong magnetic fields create Landau electronic levels that induce particles with fractional charges,” Li explains. “The bilayer structure facilitates pairing between these positive and negative charges, making fractional excitons possible.”
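For reference, the relevant regime is set by the Landau-level filling factor, and at the simplest “Laughlin” fillings the emergent quasiparticles carry a fraction of the electron charge. These are standard textbook relations rather than values quoted from the Brown study:
\[
\nu = \frac{n h}{e B}, \qquad \nu = \frac{1}{2k+1} \;\Rightarrow\; e^{*} = \frac{e}{2k+1} \quad (\text{for example } e/3 \text{ at } \nu = 1/3),
\]
where n is the sheet density of electrons, B the applied magnetic field and h Planck’s constant.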
“Distinct from any known particles”
The fractional excitons represent a quantum system of neutral particles that obey fractional quantum statistics, interact via dipolar forces and are distinct from any known particles, Li tells Physics World. He adds that his team’s study, which is detailed in Nature, builds on prior work that predicted the existence of excitons in the fractional quantum Hall effect (see, for example, Nature Physics 13 751 (2017); Nature Physics 15 898 (2019); Science 375 205 (2022)).
The researchers now plan to explore the properties of fractional excitons further. “Our key objectives include measuring the fractional charge of the constituent particles and confirming their anyonic statistics,” Li explains. Studies of this nature could shed light on how fractional excitons interact and flow, potentially revealing new quantum phases, he adds.
“Such insights could have profound implications for quantum technologies, including ultra-sensitive sensors and robust quantum computing platforms,” Li says. “As research progresses, fractional excitons may redefine the boundaries of condensed-matter physics and applied quantum science.”
The European Space Agency (ESA) has released a spectacular image of an Einstein ring – a circle of light formed around a galaxy by gravitational lensing. Taken by the €1.4bn Euclid mission, the ring is a result of the gravitational effects of a galaxy located around 590 million light-years from Earth.
Euclid was launched in July 2023 and is currently located in a spot in space called Lagrange Point 2 – a gravitational balance point some 1.5 million kilometres beyond the Earth’s orbit around the Sun. Euclid has a 1.2 m-diameter telescope, a camera and a spectrometer that it uses to plot a 3D map of the distribution of more than two billion galaxies. The images it takes are about four times as sharp as current ground-based telescopes.
Einstein’s general theory of relativity predicts that light will bend around objects in space, so that they focus the light like a giant lens. This gravitational lensing effect is bigger for more massive objects and means we can sometimes see the light from distant galaxies that would otherwise be hidden.
Yet if the alignment is just right, the light from the distant source galaxy bends to form a spectacular ring around the foreground object. In this case, the mass of galaxy NGC 6505 is bending and magnifying the light from a more distant galaxy, which is about 4.42 billion light-years away, into a ring.
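For a point-like lens, the angular radius of such a ring – the Einstein radius – takes the standard textbook form (quoted here as background, not from the Euclid analysis):

$$\theta_{\rm E} = \sqrt{\frac{4GM}{c^{2}}\,\frac{D_{\rm LS}}{D_{\rm L}D_{\rm S}}}$$

where M is the mass of the lensing galaxy, G is the gravitational constant, c is the speed of light, and D_L, D_S and D_LS are the distances to the lens, to the source and between lens and source, respectively. The more massive the foreground galaxy, and the closer it lies to us relative to the source, the larger the ring appears on the sky.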
Studying such rings can shed light on the expansion of the universe as well as the nature of dark matter.
Euclid’s first science results were released in May 2024, following its first shots of the cosmos in November 2023. Hints of the ring were first spotted in September 2023 while Euclid was being tested, with follow-up measurements now revealing it in exquisite detail.
Unexpected behaviour at phase transitions between classical and quantum magnetism has been observed in different quantum simulators operated by two independent groups. One investigation was led by researchers at Harvard University and used Rydberg atoms as quantum bits (qubits). The other study was led by scientists at Google Research and involved superconducting qubits. Both projects revealed deviations from the canonical mechanism of magnetic freezing, including unexpected oscillations near the phase transition.
A classical magnetic material can be understood as a fluid mixture of magnetic domains that are oriented in opposite directions, with the domain walls in constant motion. As a strengthening magnetic field is applied to the system, the energy associated with a domain wall increases, so the magnetic domains themselves become larger and less mobile. At some point, when the magnetism becomes sufficiently strong, a quantum phase transition occurs, causing the magnetism of the material to become fixed and crystalline: “A good analogy is like water freezing,” says Mikhail Lukin of Harvard University.
The traditional quantitative model for these transitions is the Kibble–Zurek mechanism, which was first formulated to describe cosmological phase transitions in the early universe. It predicts that the dynamics of a system begin to “freeze” when the system gets so close to the transition point that the domains crystallize more quickly than they can come to equilibrium.
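In its textbook form (given here for orientation, rather than taken from either experiment), the Kibble–Zurek argument compares the system’s relaxation time with the time remaining before the transition is crossed. For a linear quench ε(t) = t/τ_Q and a relaxation time that diverges as τ ∝ |ε|^{−νz}, the dynamics freeze out at a time t̂ before the transition, and the resulting domain size ξ̂ scales with the quench time:

$$\hat{t} \sim \tau_Q^{\,\nu z/(1+\nu z)}, \qquad \hat{\xi} \sim \tau_Q^{\,\nu/(1+\nu z)}$$

where ν and z are the correlation-length and dynamical critical exponents. Deviations from this scaling – such as the oscillations reported here – therefore point to physics beyond the standard picture.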
“There are some very good theories of various types of quantum phase transitions that have been developed,” says Lukin, “but typically these theories make some approximations. In many cases they’re fantastic approximations that allow you to get very good results, but they make some assumptions which may or may not be correct.”
Highly reconfigurable platform
In their work, Lukin and colleagues used a highly reconfigurable platform based on Rydberg-atom qubits. The system was pioneered by Lukin and others in 2016 to study a specific type of magnetic quantum phase transition in detail. They used a laser to simulate the effect of a magnetic field on the Rydberg atoms, and adjusted the laser frequency to tune the field strength.
The researchers found that, rather than simply becoming progressively larger and less mobile as the field strength increased (a phenomenon called coarsening), the domain sizes underwent unexpected oscillations around the phase transition.
“We were really quite puzzled,” says Lukin. “Eventually we figured out that this oscillation is a sign of a special type of excitation mode similar to the Higgs mode in high-energy physics. This is something we did not anticipate…That’s an example where doing quantum simulations on quantum devices really can lead to new discoveries.”
Meanwhile, the Google-led study used a new approach to quantum simulation with superconducting qubits. Such qubits have proved extremely successful and scalable because they use solid-state technology – and they are used in most of the world’s leading commercial quantum computers, such as IBM’s Osprey and Google’s own Willow chips. Much of the previous work using such chips, however, has focused on sequential “digital” quantum logic, in which one set of gates is activated only after the previous set has concluded. The long times needed for such calculations allow the effects of noise to accumulate, resulting in computational errors.
Hybrid approach
In the new work, the Google team developed a hybrid analogue–digital approach in which a digital universal quantum gate set was used to prepare well-defined input qubit states. They then switched the processor to analogue mode, using capacitive couplers to tune the interactions between the qubits. In this mode, all the qubits were allowed to operate on each other simultaneously, without the quantum logic being shoehorned into a linear set of gate operations. Finally, the researchers characterized the output by switching back to digital mode.
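The division of labour can be illustrated with a toy numerical model. The sketch below is a minimal illustration in Python/NumPy – not Google’s actual control stack, and with all parameters chosen arbitrarily. It prepares a product state with single-qubit “digital” gates, evolves it continuously under an Ising-like Hamiltonian in which all qubits interact at once (the “analogue” step), then reads out computational-basis probabilities.

```python
import numpy as np
from scipy.linalg import expm

# Single-qubit operators
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def kron_all(ops):
    """Tensor product of a list of single-qubit operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

n = 3  # a toy three-qubit chain

# "Digital" step: prepare a well-defined input state with single-qubit gates
hadamard = (X + Z) / np.sqrt(2)
psi = np.zeros(2**n, dtype=complex)
psi[0] = 1.0                          # start in |000>
psi = kron_all([hadamard] * n) @ psi  # one gate layer gives |+++>

# "Analogue" step: all qubits interact simultaneously under a transverse-field
# Ising Hamiltonian H = sum_j J Z_j Z_{j+1} + sum_j h X_j
J, h, t = 1.0, 0.5, 2.0
H = sum(J * kron_all([Z if k in (j, j + 1) else I2 for k in range(n)])
        for j in range(n - 1))
H = H + sum(h * kron_all([X if k == j else I2 for k in range(n)])
            for j in range(n))
psi_t = expm(-1j * H * t) @ psi       # continuous (not gate-by-gate) evolution

# "Digital" readout: probabilities of each computational-basis outcome
print(np.round(np.abs(psi_t)**2, 3))
```

The point of the analogue step is that the evolution is generated by the full many-body Hamiltonian in one continuous sweep, rather than being broken into a long sequence of discrete gates during which noise can accumulate.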
The researchers used a 69-qubit superconducting system to simulate a similar, but non-identical, magnetic quantum phase transition to that studied by Lukin’s group. They were also puzzled by similar unexpected behaviour in their system. The groups subsequently became aware of each other’s work, as Google Research’s Trond Anderson explains: “It’s very exciting to see consistent observations from the Lukin group. This not only provides supporting evidence, but also demonstrates that the phenomenon appears in several contexts, making it extra important to understand”.
Both groups are now seeking to push their research deeper into the exploration of complex many-body quantum physics. The Google group estimates that conducting its simulations of the highly entangled quantum states involved, with the same level of experimental fidelity, would take the US Department of Energy’s Frontier supercomputer – one of the world’s most powerful – more than a million years. The researchers now want to look at problems that are completely intractable classically, such as magnetic frustration. “The analogue–digital approach really combines the best of both worlds, and we’re very excited about this as a new promising direction towards making discoveries in systems that are too complex for classical computers,” says Anderson.
The Harvard researchers are also looking to push their system to study more and more complex quantum systems. “There are many interesting processes where dynamics – especially across a quantum phase transition – remains poorly understood,” says Lukin. “And it ranges from the science of complex quantum materials to systems in high-energy physics such as lattice gauge theories, which are notorious for being hard to simulate classically to the point where people literally give up…We want to apply these kinds of simulators to real open quantum problems and really use them to study the dynamics of these systems.”
The research is described in side-by-side papers in Nature.
An international team of researchers has detected a series of significant X-ray oscillations near the innermost orbit of a supermassive black hole – an unprecedented discovery that could indicate the presence of a nearby stellar-mass orbiter such as a white dwarf.
Optical outburst
The Massachusetts Institute of Technology (MIT)-led team began studying the extreme supermassive black hole 1ES 1927+654 – located around 270 million light years away and about a million times more massive than the Sun – in 2018, when it brightened by a factor of around 100 at optical wavelengths. Shortly after this optical outburst, X-ray monitoring revealed a period of dramatic variability as X-rays dropped rapidly – at first becoming undetectable for about a month, before returning with a vengeance and making 1ES 1927+654 the brightest supermassive black hole in the X-ray sky.
“All of this dramatic variability seemed to be over by 2021, as the source appeared to have returned to its pre-2018 state. However, luckily, we continued to watch this source, having learned the lesson that this supermassive black hole will always surprise us. The discovery of these millihertz oscillations was indeed quite a surprise, but it gives us a direct probe of regions very close to the supermassive black hole,” says Megan Masterson, a fifth-year PhD candidate at the MIT Kavli Institute for Astrophysics and Space Research, who co-led the study with MIT’s Erin Kara – alongside researchers based elsewhere in the US, as well as at institutions in Chile, China, Israel, Italy, Spain and the UK.
“We found that the period of these oscillations rapidly changed – dropping from around 18 minutes in 2022 to around seven minutes in 2024. This period evolution is unprecedented, having never been seen before in the small handful of other supermassive black holes that show similar oscillatory behaviour,” she adds.
White dwarf
According to Masterson, one of the key ideas behind the study was that the rapid X-ray period change could be driven by a white dwarf – the compact remnant of a star like our Sun – orbiting around the supermassive black hole close to its event horizon.
“If this white dwarf is driving these oscillations, it should produce a gravitational wave signal that will be detectable with next-generation gravitational wave observatories, like ESA’s Laser Interferometer Space Antenna (LISA),” she says.
To test their hypothesis, the researchers used X-ray data from ESA’s XMM-Newton observatory to detect the oscillations, which allowed them to track how the X-ray brightness changed over time. The findings were presented in mid-January at the 245th meeting of the American Astronomical Society in National Harbor, Maryland, and subsequently reported in Nature.
According to Masterson, these insights into the behaviour of X-rays near a black hole will have major implications for future efforts to detect multi-messenger signals from supermassive black holes.
“We really don’t understand how common stellar-mass companions around supermassive black holes are, but these findings tell us that it may be possible for stellar-mass objects to survive very close to supermassive black holes and produce gravitational wave signals that will be detected with the next-generation gravitational wave observatories,” she says.
Looking ahead, Masterson confirms that the immediate next step for MIT research in this area is to continue to monitor 1ES 1927+654 – with both existing and future telescopes – in an effort to deepen understanding of the extreme physics at play in and around the innermost environments of black holes.
“We’ve learned from this discovery that we should expect the unexpected with this source,” she adds. “We’re also hoping to find other sources like this one through large time-domain surveys and dedicated X-ray follow-up of interesting transients.”
This episode of the Physics World Weekly podcast looks at how climate and environmental change affect the efficiency of solar panels. Our guest is the climate scientist Sushovan Ghosh, who is lead author of a paper that explores how aerosols, rising temperatures and other environmental factors will affect solar-energy output in India in the coming decades.
Today, India ranks fifth amongst nations in terms of installed solar-energy capacity, and boosting this capacity will be crucial for the country’s drive to reduce its greenhouse gas emissions by 45% by 2030, compared with 2005 levels.
While much of India is blessed with abundant sunshine, it is experiencing a persistent decline in incoming solar radiation that is associated with aerosol pollution. What is more, higher temperatures associated with climate change reduce the efficiency of solar cells – and their performance is also impacted in India by other climate-related phenomena.
In this podcast, Ghosh explains how changes in the climate and environment affect the generation of solar energy and what can be done to mitigate these effects.
A new graphene nanostructure could become the basis for the first ferromagnets made purely from carbon. Known as an asymmetric or “Janus” graphene nanoribbon after the two-faced god of Roman mythology, the structure has two edges with different properties, one of which takes a zigzag form. Lu Jiong, a researcher at the National University of Singapore (NUS) who co-led the effort to make the structure, explains that it is this zigzag edge that gives rise to the ferromagnetic state, making the structure the first of its kind.
“The work is the first demonstration of the concept of a Janus graphene nanoribbon (JGNR) strand featuring a single ferromagnetic zigzag edge,” Lu says.
Graphene nanostructures with zigzag-shaped edges show much promise for technological applications thanks to their electronic and magnetic properties. Zigzag GNRs (ZGNRs) are especially appealing because the behaviour of their electrons can be tuned from metal-like to semiconducting by adjusting the length or width of the ribbons; modifying the structure of their edges; or doping them with non-carbon atoms. The same techniques can also be used to make such materials magnetic. This versatility means they can be used as building blocks for numerous applications, including quantum and spintronics technologies.
“It has been a long-sought goal to make other forms of zigzag-edge related GNRs with exotic quantum magnetic states for studying new science and developing new applications,” says team member Song Shaotang, the first author of a paper in Nature about the research.
ZGNRs with asymmetric edges
Building on topological classification theory developed in previous research by Louie and colleagues, theorists in the Singapore-Japan-US collaboration predicted that it should be possible to tune the magnetic properties of these structures by making ZGNRs with asymmetric edges. “These nanoribbons have one pristine zigzag edge and another edge decorated with a pattern of topological defects spaced by a certain number m of missing motifs,” Louie explains. “Our experimental team members, using innovative z-shaped precursor molecules for synthesis, were able to make two kinds of such ZGNRs. Both of these have one edge that supports a benzene motif array with a spacing of m = 2 missing benzene rings in between. The other edge is a conventional zigzag edge.”
Crucially, the theory predicted that the magnetic behaviour – ranging from antiferromagnetism to ferrimagnetism to ferromagnetism – of these JGNRs could be controlled by varying the value of m. In particular, says Louie, the configuration of m = 2 is predicted to show ferromagnetism – that is, all electron spins aligned in the same direction – concentrated entirely on the pristine zigzag edge. This behaviour contrasts sharply with that of symmetric ZGNRs, where spin polarization occurs on both edges and the aligned edge spins are antiferromagnetically coupled across the width of the ribbon.
Precursor design and synthesis
To validate these theoretical predictions, the team synthesized JGNRs on a surface. They then used advanced scanning tunnelling microscope (STM) and atomic force microscope (AFM) measurements to visualize the materials’ exact real-space chemical structure. These measurements also revealed the emergence of exotic magnetic states in the JGNRs synthesized in Lu’s lab at the NUS.
Two sides: An atomic model of the Janus graphene nanoribbons (left) and its atomic force microscopic image (right). (Courtesy: National University of Singapore)
Sakaguchi explains that, in the past, GNRs were mainly synthesized using symmetric precursor chemical structures, largely because their asymmetric counterparts were so scarce. One of the challenges in this work, he notes, was to design asymmetric polymeric precursors that could undergo the essential fusion (dehydrogenation) process to form JGNRs. These molecules often orient randomly, so the researchers needed to use additional techniques to align them unidirectionally prior to the polymerization reaction. “Addressing this challenge in the future could allow us to produce JGNRs with a broader range of magnetic properties,” Sakaguchi says.
Towards carbon-based ferromagnets
According to Lu, the team’s research shows that JGNRs could become the first carbon-based spin transport channels to show ferromagnetism. They might even lead to the development of carbon-based ferromagnets, capping off a research effort that began in the 1980s.
However, Lu acknowledges that there is much work to do before these structures find real-world applications. For one, they are not currently very robust when exposed to air. “The next goal,” he says, “is to develop chemical modifications that will enhance the stability of these 1D structures so that they can survive under ambient conditions.”
A further goal, he continues, is to synthesize JGNRs with different values of m, as well as other classes of JGNRs with different types of defective edges. “We will also be exploring the 1D spin physics of these structures and [will] investigate their spin dynamics using techniques such as scanning tunnelling microscopy combined with electron spin resonance, paving the way for their potential applications in quantum technologies.”
A sample of asteroid dirt brought back to Earth by NASA’s OSIRIS-REx mission contains amino acids and the nucleobases of RNA and DNA, plus brines that could have facilitated the formation of organic molecules, scanning electron microscopy has shown.
The 120 g of material came from the near-Earth asteroid 101955 Bennu, which OSIRIS-REx visited in 2020. The findings “bolster the hypothesis that asteroids like Bennu could have delivered the raw ingredients to Earth prior to the emergence of life,” Dan Glavin of NASA’s Goddard Space Flight Center tells Physics World.
Bennu has an interesting history. It is 565 m across at its widest point and was once part of a much larger parent body, possibly 100 km in diameter, that was smashed apart in a collision in the Asteroid Belt between 730 million and 1.55 billion years ago. Bennu coalesced from the debris as a rubble pile that found itself in Earth’s vicinity.
The sample from Bennu was parachuted back to Earth in 2023 and shared among teams of researchers. Now two new papers, published in Nature and Nature Astronomy, reveal some of the findings from those teams.
Saltwater residue
In particular, researchers identified a diverse range of salt minerals, including sodium-bearing phosphates and carbonates that formed brines when liquid water on Bennu’s parent body either evaporated or froze.
Mineral rich: SEM images of trona (water-bearing sodium carbonate) found in Bennu samples. The needles form a vein through surrounding clay-rich rock, with small pieces of rock resting on top of the needles. (Courtesy: Rob Wardell, Tim Gooding and Tim McCoy, Smithsonian)
The liquid water would have been present on Bennu’s parent body during the dawn of the Solar System, in the first few million years after the planets began to form. Heat generated by the radioactive decay of aluminium-26 would have kept pockets of water liquid deep inside Bennu’s parent body. The brines that this liquid water bequeathed would have played a role in kickstarting organic chemistry.
Tim McCoy, of the Smithsonian’s National Museum of Natural History and the lead author of the Nature paper, says that “brines play two important roles”.
One of those roles is producing the minerals that serve as templates for organic molecules. “As an example, brines precipitate phosphates that can serve as a template on which sugars needed for life are formed,” McCoy tells Physics World. The phosphate is like a pegboard with holes, and atoms can use those spaces to arrange themselves into sugar molecules.
The second role that brines can play is to then release the organic molecules that have formed on the minerals back into the brine, where they can combine with other organic molecules to form more complex compounds.
Ambidextrous amino acids
Meanwhile, the study reported in Nature Astronomy, led by Dan Glavin and Jason Dworkin of NASA’s Goddard Space Flight Center, focused on the detection of 14 of the 20 amino acids used by life to build proteins, deepening the mystery of why life only uses “left-handed” amino acids.
Amino acid molecules lack rotational symmetry – think of how, no matter how much you twist or turn your left hand, you will never be able to superimpose it on your right hand. As such, amino acids can randomly be either left- or right-handed, a property known as chirality.
However, for some reason that no one has been able to figure out yet, all life on Earth uses left-handed amino acids.
One hypothesis was that due to some quirk, amino acids formed in space and brought to Earth in impacts had a bias for being left-handed. This possibility now looks unlikely after Glavin and Dworkin’s team discovered that the amino acids in the Bennu sample are a mix of left- and right-handed, with no evidence that one is preferred over the other.
“So far we have not seen any evidence for a preferred chirality,” Glavin says. This goes for both the Bennu sample and a previous sample from the asteroid 162173 Ryugu, collected by Japan’s Hayabusa2 mission, which contained 23 different forms of amino acid. “For now, why life turned left on Earth remains a mystery.”
A step closer to the origin of life
Another mystery is why the organic chemistry on Bennu’s parent body reached a certain point and then stopped. Why didn’t it form more complex organic molecules, or even life?
Near-Earth asteroid A mosaic image of Bennu, as observed by NASA’s OSIRIS-REx spacecraft. (Courtesy: NASA/Goddard/University of Arizona)
Amino acids are the building blocks of proteins. In turn, proteins are one of the primary molecules for life, facilitating biological processes within cells. Nucleobases have also been identified in the Bennu sample, but although chains of nucleobases are the molecular skeleton of RNA and DNA, neither nucleic acid has been found in an extraterrestrial sample yet.
“Although the wet and salty conditions inside Bennu’s parent body provided an ideal environment for the formation of amino acids and nucleobases, it is not clear yet why more complex organic polymers did not evolve,” says Glavin.
Researchers are still looking for that complex chemistry. McCoy cites the 5-carbon sugar ribose, which is a component of RNA, as an essential organic molecule for life that scientists hope to one day find in an asteroid sample.
“But as you might imagine, as organic molecules increase in complexity, they decrease in number,” says McCoy, explaining that we will need to search ever larger amounts of asteroidal material before we might get lucky and find them.
The answers will ultimately help astrobiologists figure out where life began. Could proteins, RNA or even biological cells have formed in the early Solar System within objects such as Bennu’s parent planetesimal? Or did complex biochemistry begin only on Earth once the base materials had been delivered from space?
“What is becoming very clear is that the basic chemical building blocks of life could have been delivered to Earth, where further chemical evolution could have occurred in a habitable environment, including the origin of life itself,” says Glavin.
What’s really needed are more samples. China’s Tianwen-2 mission is blasting off later this year on a mission to capture a 100 g sample from the small near-Earth asteroid 469219 Kamo‘oalewa. The findings are likely to be similar to those of OSIRIS-REx and Hayabusa2, but there’s always the chance that something more complex might be in that sample too. If and when those organic molecules are found, they will have huge repercussions for the origin of life on Earth.
More than 800 researchers, policy makers and government officials from around the world gathered in Paris this week to attend the official launch of the International Year of Quantum Science and Technology (IYQ). Held at the headquarters of the United Nations Educational, Scientific and Cultural Organisation (UNESCO), the two-day event included contributions from four Nobel prize-winning physicists – Alain Aspect, Serge Haroche, Anne l’Huillier and William Phillips.
Opening remarks came from Cephas Adjej Mensah, a research director in the Ghanaian government, which last year submitted the draft resolution to the United Nations for 2025 to be proclaimed as the IYQ. “Let us commit to making quantum science accessible to all,” Mensah declared, reminding delegates that the IYQ is intended to be a global initiative, spreading the benefits of quantum equitably around the world. “We can unleash the power of quantum science and technology to make an equitable and prosperous future for all.”
The keynote address was given by l’Huillier, a quantum physicist at Lund University in Sweden, who shared the 2023 Nobel Prize for Physics with Pierre Agostini and Ferenc Krausz for their work on attosecond pulses. “Quantum mechanics has been extremely successful,” she said, explaining how it was invented 100 years ago by Werner Heisenberg on the island of Helgoland. “It has led to new science and new technology – and it’s just the beginning.”
Let’s go Stephanie Simmons, chief quantum officer at Photonic and co-chair of Canada’s National Quantum Strategy advisory council, speaking at the IYQ launch in Paris. (Courtesy: Matin Durrani)
Some of that promise was outlined by Phillips in his plenary lecture. The first quantum revolution led to lasers, semiconductors and transistors, he reminded participants, but said that the second quantum revolution promises more by exploiting effects such as quantum entanglement and superposition – even if its potential can be hard to grasp. “It’s not that there’s something deeply wrong with quantum mechanics – it’s that there’s something deeply wrong with our ability to understand it,” Phillips explained.
The benefits of quantum technology to society were echoed by leading Chinese quantum physicist Jian-Wei Pan of the University of Science and Technology of China in Hefei. “The second quantum revolution will likely provide another human leap in human civilization,” said Pan, who was not at the meeting, in a pre-recorded video statement. “Sustainable funding from government and private sector is essential. Intensive and proactive international co-operation and exchange will undoubtedly accelerate the benefit of quantum information to all of humanity.”
Leaders of the burgeoning quantum tech sector were in Paris too. Addressing the challenges and opportunities of scaling quantum technologies to practical use was a panel made up of Quantinuum chief executive Rajeeb Hazra, QuEra president Takuya Kitagawa, IBM’s quantum-algorithms vice president Katie Pizzolato, ID Quantique boss Grégoire Ribordy and Microsoft technical fellow Krysta Svore. Also present was Alexander Ling from the National University of Singapore, co-founder of two hi-tech start-ups.
“We cannot imagine what weird and wonderful things quantum mechanics will lead to, but you can be sure it’ll be marvellous,” said Celia Merzbacher, executive director of the Quantum Economic Development Consortium (QED-C), who chaired the session. All panellists stressed the importance of having a supply of talented quantum scientists and engineers if the industry is to succeed. Hazra also underlined that new products based on “quantum 2.0” technology had to be developed with – and to serve the needs of – users if they are to turn a profit.
The ethical challenges of quantum advancements were also examined in a special panel, as was the need for responsible quantum innovation to avoid a “digital divide” where quantum technology benefits some parts of society but not others. “Quantum science should elevate human dignity and human potential,” said Diederick Croese, a lawyer and director of the Centre for Quantum and Society at Quantum Delta NL in the Netherlands.
Science in action German artist Robin Baumgarten explains the physics behind his Quantum Jungle art installation. (Courtesy: Matin Durrani)
The cultural impact of quantum science and technology was not forgotten in Paris either. Delegates flocked to an art installation created by Berlin-based artist and game developer Robin Baumgarten. Dubbed Quantum Jungle, it attempts to “visualize quantum physics in a playful yet scientifically accurate manner” by using an array of lights controlled by flickable, bendy metal door stops. Baumgarten claims it is a “mathematically accurate model of a quantum object”, with the brightness of each ring being proportional to the chance of an object being there.
This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.
Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.
“What makes a good astronaut?” asks director Hannah Berryman in the opening scene of Spacewoman. It’s a question few can answer better than Eileen Collins. As the first woman to pilot and command a NASA Space Shuttle, her career was marked by historic milestones, extraordinary challenges and personal sacrifices. Collins looks down the lens of the camera and, as she pauses for thought, we cut to footage of her being suited up in astronaut gear for the third time. “I would say…a person who is not prone to panicking.”
In Spacewoman, Berryman crafts a thoughtful, emotionally resonant documentary that traces Collins’s life from a determined young girl in Elmira, New York, to a spaceflight pioneer.
The film’s strength lies in its compelling balance of personal narrative and technical achievement. Through intimate interviews with Collins, her family and former colleagues, alongside a wealth of archival footage, Spacewoman paints a vivid portrait of a woman whose journey was anything but straightforward. From growing up in a working-class family affected by her parents’ divorce and Hurricane Agnes’s destruction, to excelling in the male-dominated world of aviation and space exploration, Collins’s resilience shines through.
Berryman wisely centres the film on the four key missions that defined Collins’s time at NASA. While this approach necessitates a brisk overview of her early military career, it allows for an in-depth exploration of the stakes, risks and triumphs of spaceflight. Collins’s pioneering 1995 mission, STS-63, saw her pilot the Space Shuttle Discovery in the first rendezvous with the Russian space station Mir, a mission fraught with political and technical challenges. The archival footage from this and subsequent missions provides gripping, edge-of-your-seat moments that demonstrate both the precision and unpredictability of space travel.
Perhaps Spacewoman’s most affecting thread is its examination of how Collins’s career intersected with her family life. Her daughter, Bridget, born shortly after her first mission, offers a poignant perspective on growing up with a mother whose job carried life-threatening risks. In one of the film’s most emotionally charged scenes, Collins recounts explaining the Challenger disaster to a young Bridget. Despite her mother’s assurances that NASA had learned from the tragedy, the subsequent Columbia disaster two weeks later underscores the constant shadow of danger inherent in space exploration.
These deeply personal reflections elevate Spacewoman beyond a straightforward biographical documentary. Collins’s son Luke, though younger and less directly affected by his mother’s missions, also shares touching memories, offering a fuller picture of a family shaped by space exploration’s highs and lows. Berryman’s thoughtful editing intertwines these recollections with historic footage, making the stakes feel immediate and profoundly human.
The film’s tension peaks during Collins’s final mission, STS-114, the first “return to flight” after Columbia. As the mission teeters on the brink of disaster due to familiar technical issues, Berryman builds a heart-pounding narrative, even for viewers unfamiliar with the complexities of spaceflight. Without getting bogged down in technical jargon, she captures the intense pressure of a mission fraught with tension – for those on Earth, at least.
Berryman’s previous films include Miss World 1970: Beauty Queens and Bedlam and Banned! The Mary Whitehouse Story. In a recent episode of the Physics World Stories podcast, she told me that she was inspired to make the film after reading Collins’s autobiography Through the Glass Ceiling to the Stars. “It was so personal,” she said, “it took me into space and I thought maybe we could do that with the viewer.” Collins herself joined us for that podcast episode and I found her to be that same calm, centred, thoughtful person we see in the film and who NASA clearly very carefully chose to command such an important mission.
Spacewoman isn’t just about near-misses and peril. It also celebrates moments of wonder: Collins describing her first sunrise from space or recalling the chocolate shuttles she brought as gifts for the Mir cosmonauts. These light-hearted anecdotes reveal her deep appreciation for the unique experience of being an astronaut. On the podcast, I asked Collins what one lesson she would bring from space to life on Earth. After her customary moment’s pause for thought, she replied, “Reading books about science fiction is very important.” She was a fan of science fiction in her younger years, which enabled her to dream of the future that she realized at NASA and in space. But, she told me, these days she also reads about real science of the future (she was deep into a book on artificial intelligence when we spoke) and history too. Looking back at Collins’s history in space certainly holds lessons for us all.
Berryman’s directorial focus ultimately circles back to a profound question: how much risk is acceptable in the pursuit of human progress? Spacewoman suggests that those committed to something greater than themselves are willing to risk everything. Collins’s career embodies this ethos, defined by an unshakeable resolve, even in the face of overwhelming odds.
In the film’s closing moments, we see Collins speaking to a wide-eyed girl at a book signing. The voiceover from interviews talks of the women slated to be instrumental in humanity’s return to the Moon and future missions to Mars. If there’s one thing I would change about the film, it’s that the final word is given to someone other than Collins. The message is a fitting summation of her life and legacy, but I would like to have seen it delivered with her understated confidence of someone who has lived it. It’s a quibble, though, in a compelling film that I would recommend to anyone with an interest in space travel or the human experience here on Earth.
When someone as accomplished as Collins says that you need to work hard and practise, practise, practise it has a gravitas few others can muster. After all, she spent 10 years practising to fly the Space Shuttle – and got to do it for real twice. We see Collins speak directly to the wide-eyed girl in a flight suit as she signs her book and, as she does so, you can feel the words really hit home precisely because of who says them: “Reach for the stars. Don’t give up. Keep trying because you can do it.”
Spacewoman is more than a tribute to a trailblazer; it’s a testament to human perseverance, curiosity and courage. In Collins’s story, Berryman finds a gripping, deeply personal narrative that will resonate with audiences across the planet.
Spacewoman premiered at DOC NYC in November 2024 and is scheduled for theatrical release in 2025. A Haviland Digital Film in association with Tigerlily Productions.
Watch this short video filmed at the APS March Meeting in 2024, where Mark Elo, chief marketing officer of Tabor Quantum Solutions, introduces the Echo-5Q, which he explains is an industry collaboration between FormFactor and Tabor Quantum Solutions, using the QuantWare quantum processing unit (QPU).
Elo points out that it is an out-of-the-box solution, allowing customers to order a full-stack system, including the software, refrigeration, control electronics and the actual QPU. The Echo-5Q is delivered and installed so that the customer can start doing quantum measurements immediately. He explains that the Echo-5Q is designed at a price and feature point that increases the accessibility of on-site quantum computing.
Brandon Boiko, senior applications engineer with FormFactor, describes how FormFactor developed the dilution refrigeration technology that the qubits are installed into. Boiko explains that the product has been designed to reduce the cost of entry into the quantum field – made accessible through FormFactor’s test-and-measurement programme, which allows people to bring their samples on site to take measurements.
Alessandro Bruno is founder and CEO of QuantWare, which provides the quantum processor for the Echo-5Q – the part that sits at the millikelvin stage of the dilution refrigerator and hosts five qubits. Bruno hopes that the Echo-5Q will democratize access to quantum devices – for education, academic research and start-ups.
Researchers at the University of Chicago’s Pritzker School of Molecular Engineering have created a groundbreaking hydrogel that doubles as a semiconductor. The material combines the soft, flexible properties of biological tissues with the electronic capabilities of semiconductors, making it ideal for advanced medical devices.
In a study published in Science, the research team, led by Sihong Wang, developed a stretchy, jelly-like material that provides the robust semiconducting properties necessary for use in devices such as pacemakers, biosensors and drug delivery systems.
Rethinking hydrogel design
Hydrogels are ideal for many biomedical applications because they are soft, flexible and water-absorbent – just like human tissues. Materials scientists, who have long recognized the vast potential of hydrogels, have pushed the boundaries of this class of material. One way is to create hydrogels with semiconducting abilities that can be useful for transmitting information between living tissues and bioelectronic device interfaces – in other words, a hydrogel semiconductor.
Imparting semiconducting properties to hydrogels is no easy task, however. Semiconductors, while known for their remarkable electronic properties, are typically rigid, brittle and water-repellent, making them inherently incompatible with hydrogels. By overcoming this fundamental mismatch, Wang and his team have created a material that could revolutionize the way medical devices interface with the human body.
Traditional hydrogels are made by dissolving hydrogel precursors (monomers or polymers) in water and adding chemicals to crosslink the polymers and form a water-swelled state. Since most polymers are inherently insulating, creating a hydrogel with semiconducting properties requires a special class of semiconducting polymers. The challenges do not stop there, however. These polymers typically only dissolve in organic solvents, not in water.
“The question becomes how to achieve a well-dispersed distribution of these semiconducting materials within a hydrogel matrix,” says first author Yahao Dai, a PhD student in the Wang lab. “This isn’t just about randomly dispersing particles into the matrix. To achieve strong electrical performance, a 3D interconnected network is essential for effective charge transport. So, the fundamental question is: how do you build a hydrophobic, 3D interconnected network within the hydrogel matrix?”
Innovative material Sihong Wang (left), Yahao Dai (right) and colleagues have developed a novel hydrogel with semiconducting properties. (Courtesy: UChicago Pritzker School of Molecular Engineering/John Zich)
To address this challenge, the researchers first dissolved the polymer in an organic solvent that is miscible with water, forming an organogel – a gel-like material composed of an organic liquid phase in a 3D gel network. They then immersed the organogel in water and allowed the water to gradually replace the organic solvent, transforming it into a hydrogel.
The researchers point out that this versatile solvent exchange process can be adapted to a variety of semiconducting polymers, opening up new possibilities for hydrogel semiconductors with diverse applications.
A two-in-one material
The result is a hydrogel semiconductor material that’s soft enough to match the feel of human tissue. With a Young’s modulus as low as 81 kPa – comparable to that of jelly – and the ability to stretch up to 150% of its original length, this material mimics the flexibility and softness of living tissue. These tissue-like characteristics allow the material to seamlessly interface with the human body, reducing the inflammation and immune responses that are often triggered by rigid medical implants.
The material also has a high charge carrier mobility – a measure of how quickly charge carriers move through it under an applied electric field – of up to 1.4 cm²/(V s). This makes it suitable for biomedical devices that require effective semiconducting performance.
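For context (this is the standard definition of mobility, not anything specific to this material), the mobility μ relates the carriers’ drift velocity to the applied field:

$$v_{\rm d} = \mu E$$

so a mobility of 1.4 cm²/(V s) implies that, in a field of 100 V/cm, carriers drift at roughly 140 cm/s – a value comparable to those of good organic semiconductors, despite the material being a water-swollen gel.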
The potential applications extend beyond implanted devices. The material’s high hydration and porosity enable efficient volumetric biosensing and mass transport throughout the entire thickness of the semiconducting layer, which is useful for biosensing, tissue engineering and drug delivery applications. The hydrogel also responds to light effectively, opening up possibilities for light-controlled therapies, such as light-activated wireless pacemakers or wound dressings that use heat to accelerate healing.
A vision for transforming healthcare
The research team’s hydrogel material is now patented and being commercialized through UChicago’s Polsky Center for Entrepreneurship and Innovation. “Our goal is to further develop this material system and enhance its performance and application space,” says Dai. While the immediate focus is on enhancing the electrical and light modulation properties of the hydrogel, the team envisions future work in biochemical sensing.
“An important consideration is how to functionalize various bioreceptors within the hydrogel semiconductor,” explains Dai. “As each biomarker requires a specific bioreceptor, the goal is to target as many biomarkers as possible.”
The team is already exploring new methods to incorporate bioreceptors, such as antibodies and aptamers, within the hydrogels. With these advances, this class of semiconductor hydrogels could act as next-generation interfaces between human tissues and bioelectronic devices, from sensors to tailored drug-delivery systems. This breakthrough material may soon bridge the gap between living systems and electronics in ways once thought impossible.
Journal of Reliability Science and Engineering (Courtesy: IOP Publishing)
As our world becomes ever more dependent on technology, an important question emerges: how much can we truly rely on that technology? To help researchers explore this question, IOP Publishing (which publishes Physics World) is launching a new peer-reviewed, open-access publication called Journal of Reliability Science and Engineering (JRSE). The journal will operate in partnership with the Institute of Systems Engineering (part of the China Academy of Engineering Physics) and will benefit from the editorial and commissioning support of the University of Electronic Science and Technology of China, Hunan University and the Beijing Institute of Structure and Environment Engineering.
“Today’s society relies much on sophisticated engineering systems to manufacture products and deliver services,” says JRSE’s co-editor-in-chief, Mingjian Zuo, a professor of mechanical engineering at the University of Alberta, Canada. “Such systems include power plants, vehicles, transportation and manufacturing. The safe, reliable and economical operation of all these requires the continuing advancement of reliability science and engineering.”
Defining reliability
The reliability of an object is commonly defined as the probability that it will perform its intended function adequately for a specified period of time. “The object in question may be a human being, product, system, or process,” Zuo explains. “Depending on its nature, corresponding sub-disciplines are human-, material-, structural-, equipment-, software- and system reliability.”
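In the simplest textbook case (given here as an illustration; the definition above does not commit to any particular model), a component with a constant failure rate λ has a reliability function and mean time-to-failure of

$$R(t) = e^{-\lambda t}, \qquad \mathrm{MTTF} = \int_0^{\infty} R(t)\,\mathrm{d}t = \frac{1}{\lambda}$$

so, for example, a hypothetical component with λ = 10⁻⁴ failures per hour has a mean time-to-failure of 10,000 hours and survives its first year of continuous operation (8760 hours) with probability e^(−0.876) ≈ 0.42.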
Key concepts in reliability science include failure modes, failure rates, the reliability function and coherency, as well as measures such as mean time-to-failure, mean time between failures, availability and maintainability. “Failure modes can be caused by effects like corrosion, cracking, creep, fracture, fatigue, delamination and oxidation,” Zuo explains.
To analyse such effects, researchers may use approaches such as fault tree analysis (FTA); failure modes, effects and criticality analysis (FMECA); and binary decomposition, he adds. These and many other techniques lie within the scope of JRSE, which aims to publish high-quality research on all aspects of reliability. This could, for example, include studies of failure modes and damage propagation as well as techniques for managing them and related risks through optimal design and reliability-centred maintenance.
A focus on extreme environments
To give the journal structure, Zuo and his colleagues identified six major topics: reliability theories and methods; physics of failure and degradation; reliability testing and simulation; prognostics and health management; reliability engineering applications; and emerging topics in reliability-related fields.
JRSE’s co-editor-in-chief, Mingjian Zuo, a professor of mechanical engineering at the University of Alberta, Canada. (Courtesy: IOP Publishing)
As well as regular issues published four times a year, JRSE will also produce special issues. A special issue on system reliability and safety in varying and extreme environments, for example, focuses on reliability and safety methods, physical/mathematical and data-driven models, reliability testing, system lifetime prediction and performance evaluation. Intelligent operation and maintenance of complex systems in varying and extreme environments are also covered.
Interest in extreme environments was one of the factors driving the journal’s development, Zuo says, due to the increasing need for modern engineering systems to operate reliably in highly demanding conditions. As examples, he cites wind farms being built further offshore; faster trains; and autonomous systems such as drones, driverless vehicles and social robots that must respond quickly and safely to ever-changing surroundings in close proximity to humans.
“As a society, we are setting ever higher requirements on critical systems such as the power grid and Internet, water distribution and transport networks,” he says. “All of these demand further advances in reliability science and engineering to develop tools for the design, manufacture and operation as well as the maintenance of today’s sophisticated engineering systems.”
The go-to platform for researchers and industrialists alike
Another factor behind the journal’s launch is that previously, there were no international journals focusing on reliability research by Chinese organizations. Since the discipline’s leaders include several such organizations, Zuo says the lack of international visibility has seriously limited scientific exchange and promotion of reliability research between China and the global community. He hopes the new journal will remedy this. “Notable features of the journal include gold open access (thanks to our partnership with IOP Publishing, a learned-society publisher that does not have shareholders) and a fast review process,” he says.
In general, the number of academic journals focusing on reliability science and engineering is limited, he adds. “JRSE will play a significant role in promoting the advances in reliability research by disseminating cutting-edge scientific discoveries and creative reliability assurance applications in a timely way.
“We are aiming that the journal will become the go-to platform for reliability researchers and industrialists alike.”
The first issue of JRSE will be published in March 2025, and its editors welcome submissions of original research reports as well as review papers co-authored by experts. “There will also be space for perspectives, comments, replies, and news insightful to the reliability community,” says Zuo. In the future, the journal plans to sponsor reliability-related academic forums and international conferences.
With over 100 experts from around the world on its editorial board, Zuo describes JRSE as scientist-led, internationally-focused and highly interdisciplinary. “Reliability is a critical measure of performance of all engineering systems used in every corner of our society,” he says. “This journal will therefore be of interest to disciplines such as mechanical-, electrical-, chemical-, mining- and aerospace engineering as well as the mathematical and life sciences.”
Hot material The crystal structure of cordierite gives the material its unique thermal properties. (Courtesy: M Dove and L Li/Matter)
The anomalous and ultra-low thermal expansion of cordierite results from the interplay between lattice vibrations and the elastic properties of the material. That is the conclusion of Martin Dove at China’s Sichuan University and Queen Mary University of London in the UK and Li Li at the Civil Aviation Flight University of China. They showed that the material’s unusual behaviour stems from direction-dependent elastic forces in its lattice, which cause its thermal expansion to differ along the three crystal axes.
Cordierite is a naturally-occurring mineral that can also be synthesized. Thanks to its remarkable thermal properties, it is used in products ranging from pizza stones to catalytic converters. When heated to high temperatures, it undergoes ultra-low thermal expansion along two directions, and it shrinks a tiny amount along the third direction. This makes it incredibly useful as a material that can be heated and cooled without changing size or suffering damage.
Despite its widespread use, scientists lack a fundamental understanding of how cordierite’s anomalous thermal expansion arises from the properties of its crystal lattice. Normally, thermal expansion (positive or negative) is understood in terms of Grüneisen parameters. These describe how vibrational modes (phonons) in the lattice cause it to expand or contract along each axis as the temperature changes.
Negative Grüneisen parameters describe a lattice that shrinks when heated, and are seen as key to understanding the thermal contraction of cordierite. However, the material’s thermal response is not isotropic (it contracts along only one axis when heated to high temperatures), so understanding cordierite in terms of its Grüneisen parameters alone is difficult.
Advanced molecular dynamics
In their study, Dove and Li used advanced molecular dynamics simulations to accurately model the behaviour of atoms in the cordierite lattice. Their simulations closely matched experimental observations of the material’s thermal expansion, providing them with key insights into why the material has a negative thermal expansion in just one direction.
“Our research demonstrates that the anomalous thermal expansion of cordierite originates from a surprising interplay between atomic vibrations and elasticity,” Dove explains. The elasticity is described in the form of an elastic compliance tensor, which predicts how a material will distort in response to a force applied along a specific direction.
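In quasi-harmonic lattice dynamics the two ingredients combine in a standard way (shown here schematically; the paper’s full treatment is more detailed). The linear thermal expansion coefficient along axis i is

$$\alpha_i = \frac{1}{V}\sum_j s_{ij} \sum_k \gamma_{j,k}\, c_k$$

where V is the cell volume, s_ij are components of the elastic compliance tensor, γ_{j,k} is the generalized Grüneisen parameter of phonon mode k for strain along axis j, and c_k is that mode’s heat capacity. Because the sum runs over the compliances, a set of entirely positive Grüneisen parameters can still produce a negative α_i along one axis if the off-diagonal s_ij pull in the opposite direction.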
At lower temperatures, lattice vibrations occur at lower frequencies. In this case, the simulations predicted negative thermal expansion in all directions – which is in line with observations of the material.
At higher temperatures, the lattice becomes dominated by high-frequency vibrations. In principle, this should result in positive thermal expansion in all three directions. Crucially, however, Dove and Li discovered that this expansion is cancelled out by the material’s elastic properties, as described by its elastic compliance tensor.
What is more, the unique arrangement of the crystal lattice meant that this tensor varied depending on the direction of the applied force, creating an imbalance that amplifies the differences between the material’s expansion along each axis.
Cancellation mechanism
“This cancellation mechanism explains why cordierite exhibits small positive expansion in two directions and small negative expansion in the third,” Dove explains. “Initially, I was sceptical of the results. The initial data suggested uniform expansion behaviour at both high and low temperatures, but the final results revealed a delicate balance of forces. It was a moment of scientific serendipity.”
Altogether, Dove and Li’s result clearly shows that cordierite’s anomalous behaviour cannot be understood by focusing solely on the Grüneisen parameters of its three axes. It is crucial to take its elastic compliance tensor into account.
In solving this long-standing mystery, the duo now hope their results could help researchers to better predict how cordierite’s thermal expansion will vary at different temperatures. In turn, they could help to extend the useful applications of the material even further.
“Anisotropic materials like cordierite hold immense potential for developing high-performance materials with unique thermal behaviours,” Dove says. “Our approach can rapidly predict these properties, significantly reducing the reliance on expensive and time-consuming experimental procedures.”
Gravitational waves are distortions of space–time that occur when massive bodies, such as black holes, are accelerated. They were first detected in 2015 by researchers working on the Advanced Laser Interferometer Gravitational-wave Observatory (aLIGO), which has detectors in Hanford, Washington and Livingston, Louisiana.
LISA comprises three identical satellites in an equilateral triangle in space, with each side of the triangle being 2.5 million kilometres – more than six times the distance between the Earth and the Moon.
While ground-based instruments detect gravitational waves with frequencies from a few hertz to a few kilohertz, a space-based mission could pick up gravitational waves with frequencies between 10⁻⁴ and 10⁻¹ Hz.
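The arm length sets this band. As a rule of thumb (a standard estimate, not a figure quoted by the mission teams), an interferometer’s response begins to roll off above the frequency at which the gravitational wavelength becomes comparable to the arm, f* ≈ c/(2πL). For LISA’s 2.5 million km arms that is

$$f_* \approx \frac{c}{2\pi L} \approx \frac{3\times 10^{8}\ \mathrm{m/s}}{2\pi \times 2.5\times 10^{9}\ \mathrm{m}} \approx 0.02\ \mathrm{Hz}$$

which is why a space-based detector naturally targets the millihertz regime, well below the seismically limited band of ground-based instruments.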
According to Hong-Bo Jin from the National Astronomical Observatories, Chinese Academy of Sciences, in Beijing, one disadvantage of a triangular array is that when the direction of gravitational-wave propagation as a transverse wave is parallel to the plane of the triangle, it is more difficult to detect the source of the gravitational wave.
A tetrahedral configuration could get around this problem. Jin says that an additional advantage is the extra combinations of optical paths possible with six arms, which means the detector could be sensitive to six polarization modes of gravitational waves. Einstein’s general theory of relativity predicts that gravitational waves have only two tensor polarization modes, so any detection of so-called vector or scalar polarization modes could signal new physics.
“Detecting gravitational waves based on the TEGO configuration will possibly reveal more polarization modes of gravitational waves, which is conducive to deepening our understanding of general relativity and revealing the essence of gravity and spacetime,” says Jin.
Yet such a design will come with costs. Given that the equipment for TEGO, including the telescopes and optical benches, is twice that of a triangular configuration, the cost estimate for a tetrahedral set-up could also be roughly double.
While TEGO follows a separate technical route from TAIJI, Jin says it can “refer” to some of its mature technologies. Given that many technologies still need to be demonstrated and developed, however, TEGO has no specific timeline for when it could be launched.
Italian gravitational-wave physicist Stefano Vitale, a former principal investigator of the LISA Pathfinder mission, told Physics World that “polyhedric” configurations of gravitational-wave detectors are “not new” and are much more difficult to implement than LISA. He adds that even aligning a three-satellite configuration such as LISA is “extremely challenging” and is something the aerospace community has never tried before.
“Going off-plane, like the TEGO colleagues want to do, with telescope add-ons, opens a completely new chapter [and] cannot be considered as incremental relative to LISA,” adds Vitale.
Michelle Lollie is an advanced laser scientist at Quantinuum, supporting the design, development and construction of complex optical systems that will serve as the foundations of world-class quantum computers. Lollie also participates in various diversity, equity, inclusion and accessibility initiatives, advocating for those who are marginalized in STEM fields, particularly in physics. Outside of wrangling photons, you can often find her at home practising the violin.
Your initial bachelor’s degree was in finance, and you went on to work in the field through your 20s before pivoting to physics – what made you take the leap to make this change, and what inspired you to pick physics for your second bachelor’s degree?
I had dreams of working in finance since high school – indeed, at the time I was on my way to being the most dedicated, most fashionable, and most successful investment banker on Wall Street. I would like to think that, in some other quantum universe, there’s still a Michelle Lollie – investment banker extraordinaire.
So my interest in physics wasn’t sparked until much later in life, when I was 28 years old – I was no longer excited by a career in finance, and was looking for a professional pivot. I came across a groundbreaking theory paper about the quantum teleportation of states. I honestly thought that it referred to “Beam me up, Scotty” from Star Trek, and I was amazed.
But all jokes aside, quantum physics holds many a mystery that we’re still exploring. As a field, it’s quite new – there are approximately 100 years of dedicated quantum study and discovery, compared to millennia of classical physics. Perusing the paper and understanding about 2% of it, I just decided that this is what I would study. I wanted to learn about this “entanglement” business – a key concept of quantum physics. The rest is history.
Can you tell me a bit about your PhD pathway? You were a part of the APS Bridge Program at Indiana University – how did the programme help you?
After deciding to pursue a physics degree, I had to pick an academic institution to get said degree. What was news to me was that, for second baccalaureate degrees, funding at a public university was hard to come by. I was looking for universities with a strong optics programme, having decided that quantum optics was for me.
I learned about the Rose-Hulman Institute of Technology, in Terre Haute, Indiana by searching for optical engineering programmes. What I didn’t know was that, in terms of producing top engineers, you’d be hard pressed to find a finer institution. The same can be said for their pure science disciplines, although those disciplines aren’t usually ranked. I reached out to inquire about enrolment, was invited to visit and fell in love with the campus. I was funded and my physics journey began.
Prior to graduation, I was struggling with most of my grad-school applications being denied. I wasn’t the most solid student at Rose (it’s a rigorous place), but I wasn’t a poorly performing student, either. Enter the APS Bridge Program, which focuses on students who, for whatever reason, were having challenges applying to grad school. The programme funded two years of education, wherein the student could have more exposure to coursework (which was just what I needed) or have more opportunity for research, after which they could achieve a master’s degree and continue to a PhD.
I was accepted at a bridge programme site at Indiana University Bloomington. The additional two years allowed for a repeat of key undergraduate courses in the first year, with the second year filled with grad courses. I continued on and obtained my master’s degree. I then decided to leave IU to collaborate with a professor at Louisiana State University (LSU) who I had always wanted to work with and had previously done research with. So I transferred to LSU and obtained my PhD, focusing on high-dimensional orbital angular momentum states of light for fibre-based quantum cryptography and communication protocols. Without the Bridge Program, you might not be reading this article.
It’s funny, but at the time, no-one was really talking about this. I think, for the individual who has to face various challenges due to race, sexual orientation and preference, gender, immigration status and the like, you just try to take your classes and do your research. But, just by your existence and certain aspects that may come along with that, you are often faced with a decision to advocate for yourself in a space that historically was not curated with you or your value in mind.
Beyond beamlines Alongside her days spent in the lab, Michelle Lollie is a keen violinist. (Courtesy: Samuel Cooper/@photoscoops)
So while no-one was going up and down the halls saying “Hey, look at us, we have five Black students in our department!”, most departments would bend over backwards for those diversity numbers. Note that five Black students in a department of well over 100 is nothing to write home about. It should be an order of magnitude higher, with 20–30 Black students at least. This is the sad state of affairs across physics and other sciences: people get excited about one Black student and think that they’re doing something great. But, once I brought this fact to the attention of those in the front office and my adviser, a bit of talk started. Consequently, and fortuitously, the president of the university happened to visit our lab the fall before my graduation. Someone at that event noticed me, a Black woman in the physics department, and reached out to have me participate in several high-profile opportunities within the LSU community. This sparked more interest in my identity as a Black woman in the field; and it turned out that I was the first Black woman who would be getting a PhD from the department, in 2022. I am happy to report that three more Black women have earned degrees (one master’s in medical physics, and two PhDs in physics) since then.
My family and I were featured on LSU socials for the historic milestone, especially thanks to Mimi LaValle, who is the media relations guru for the LSU Physics and Astronomy department. They even shared my grandmother’s experience as a Black woman growing up in the US during the 1930s, and the juxtaposition of her opportunities versus mine was highlighted. It was a great moment and I’m glad that LSU not only acknowledged this story, but emphasized and amplified it. I will always be grateful that I was able to hand my doctoral degree to my grandmother at graduation. She passed away in August 2024, but was always proud of my achievements. I was just as proud of her, for her determination to survive. Different times indeed.
What are some barriers and challenges you have faced through your education and career, if any?
The barriers have mostly been structural, embedded within the culture and fabric of physics. But this has made my dedication to be successful in the field a more unique and customized experience that only those who can relate to my identity will understand. There is a concerted effort to say that science doesn’t see colour, gender, etc., and so these societal aspects shouldn’t affect change within the field. I’d argue that human beings do science, so it is a decidedly “social” science, which is impacted significantly by culture – past and present. In fact, if we had more actual social scientists doing research on effecting change in the field for us physical scientists, the negative aspects of working in the field – as told by those who have lived experience – would be mitigated and true scientific broadening could be achieved.
What were the pitfalls, or stresses, of following this career random walk?
Other than the internal work of recognizing that, on a daily basis, I have to make space for myself in a field that’s not used to me, there hasn’t been anything of the sort. I have definitely had to advocate for myself and my presence within the field. But I love what I do and that I get to explore the mysteries of quantum physics. So, I’m not going anywhere anytime soon. And the more space I create, the easier it is for others to come in and feel just fine.
I want things to be as comfortable as possible for future generations of Black scientists. I am a Black woman, so I will always advocate for Black people within the space. This is unique to the history of the African Diaspora. I often advocate for those with cross-marginalized identities not within my culture, but no-one else has as much incentive to root for Black people but Black people. I urge everyone to do the same in highlighting those in their respective cultures and identities. If not you, then who?
What were the next steps for you after your PhD – how did you decide between staying in academia or pursuing a role in industry?
I always knew I was going to industry. I was actually surprised to learn that many physics graduates plan to go into academia. I started interviewing shortly before graduation and knew which companies I had on my radar. I applied to them, received several offers, and decided on Quantinuum.
Tools of the trade At Quantinuum, Michelle Lollie works on the lasers and optics of quantum computers. (Courtesy: Quantinuum)
You are now an advanced laser scientist with Quantinuum – what does that involve, and what’s a “day in the life” like for you now?
Nowadays, I can be found either doing CAD models of beamlines, or in the lab building said beamlines. This involves a lot of lasers, alignment, testing and validation. It’s so cool to see an optical system that you’ve designed come to life on an optical table. It’s even more satisfying when it is integrated within a full ion-trap system and it works. I love practical work in the lab – when I have been designing a system for too long, I often say “Okay, I’ve been in front of this screen long enough. Time to go get the goggles and get the hands dirty.”
What do you know today, that you wish you knew when you were starting your career?
Had I known what I would have had to go through, I might not have ever done it. So, the ignorance of my path was actually a plus. I had no idea what this road entailed so, although the journey was a course in who-is-Michelle-going-to-be-101, I would wish for the “ignorance is bliss” state – on any new endeavour, even now. It’s in the unknowing that we learn who we are.
What’s your advice for today’s students hoping to pursue a career in the quantum sector?
I always highlight what I’ve learned from Garfield Warren, a physics professor at Indiana University, and one of my mentors. He always emphasized learning skills beyond science that you’ll need to be successful. Those who work in physics often lack direct communication skills, and there can be a lot of miscommunication. Be direct and succinct, and leave no room for speculation about what you are saying. This skill is key.
Also, learn the specific tools of your trade. If you’re in optics, for example, learn the ins and outs of how lasers work. If you have opportunities to build laser set-ups, do so. Learn what the knobs do. Determine what it takes for you to be confident that the readout data is what you want. You should understand each and every component that relates to work that you are doing. Learn all that you can for each project that you work on. Employers know that they will need to train you on company-specific tasks, but technical acumen is assumed to a point. Whatever the skills are for your area, the more that you understand the minutiae, the better.
A new way to measure the temperatures of objects by studying the effect of their black-body radiation on Rydberg atoms has been demonstrated by researchers at the US National Institute of Standards and Technology (NIST). The system, which provides a direct, calibration-free measure of temperature based on the fact that all atoms of a given species are identical, has a systematic temperature uncertainty of around 1 part in 2000.
The black-body temperature of an object is defined by the spectrum of the photons it emits. In the laboratory and in everyday life, however, temperature is usually measured by comparison to a reference. “Radiation is inherently quantum mechanical,” says NIST’s Noah Schlossberger, “but if you go to the store and buy a temperature sensor that measures the radiation via some sort of photodiode, the rate of photons converted into some value of temperature that you see has to be calibrated. Usually that’s done using some reference surface that’s held at a constant temperature via some sort of contact thermometer, and that contact thermometer has been calibrated to another contact thermometer – which in some indirect way has been tied into some primary standard at NIST or some other facility that offers calibration services.” However, each step introduces potential error.
This latest work offers a much more direct way of determining temperature. It involves measuring the black-body radiation emitted by an object directly, using atoms as a reference standard. Such a sensor does not need calibration because quantum mechanics dictates that every atom of the same type is identical. In Rydberg atoms the electrons are promoted to highly excited states. This makes the atoms much larger, less tightly bound and more sensitive to external perturbations. As part of an ongoing project studying their potential to detect electromagnetic fields, the researchers turned their attention to atom-based thermometry. “These atoms are exquisitely sensitive to black-body radiation,” explains NIST’s Christopher Holloway, who headed the work.
Packet of rubidium atoms
Central to the new apparatus is a magneto-optical trap inside a vacuum chamber containing a pure rubidium vapour. Every 300 ms, the researchers load a new packet of rubidium atoms into the trap, cool them to around 1 mK and excite them from the 5S energy level to the 32S Rydberg state using lasers. They then allow them to absorb black-body radiation from the surroundings for around 100 μs, causing some of the 32S atoms to change state. Finally, they apply a strong, ramped electric field, ionizing the atoms. “The higher energy states get ripped off easier than the lower energy states, so the electrons that were in each state arrive at the detector at a different time. That’s how we get this readout that tells us the population in each of the states,” explains Schlossberger, the work’s first author. The researchers can use this ratio to infer the spectrum of the black-body radiation absorbed by the atoms and, therefore, the temperature of the black body itself.
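The essence of the readout is that the fraction of atoms transferred out of the initial Rydberg state encodes the black-body photon occupation at the relevant transition frequencies, which is a known function of temperature. As a rough illustration of that idea only – not the NIST analysis, which sums over many transitions weighted by their Einstein coefficients – the sketch below assumes a single dominant transition whose transfer fraction is proportional to the Bose–Einstein occupation, and inverts a “measured” fraction back to a temperature; the transition frequency and proportionality constant are made-up numbers.

```python
# Toy model (not the NIST analysis): assume one dominant Rydberg transition and
# that the fraction of atoms transferred out of the initial state is proportional
# to the black-body photon occupation n_bar(nu, T) at that transition frequency.
import numpy as np
from scipy.optimize import brentq

h  = 6.62607015e-34   # Planck constant (J s)
kB = 1.380649e-23     # Boltzmann constant (J/K)

def n_bar(nu, T):
    """Bose-Einstein photon occupation of a mode at frequency nu and temperature T."""
    return 1.0 / np.expm1(h * nu / (kB * T))

# Hypothetical numbers for illustration only:
nu_transition = 130e9   # assumed frequency of a transition between neighbouring Rydberg states (Hz)
C = 0.05                # assumed proportionality between occupation and measured transfer fraction

def transfer_fraction(T):
    return C * n_bar(nu_transition, T)

# Invert a "measured" transfer fraction back to a temperature.
measured = transfer_fraction(296.0)   # pretend this ratio came from the state-selective readout
T_inferred = brentq(lambda T: transfer_fraction(T) - measured, 10.0, 1000.0)
print(f"Inferred temperature: {T_inferred:.1f} K")   # ~296 K
```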
The researchers calculated the fractional systematic uncertainty of their measurement as 0.006, which corresponds to around 2 K at room temperature. Schlossberger concedes that this sounds relatively unimpressive compared to many commercial thermometers, but he notes that their thermometer measures absolute temperature, not relative temperature. “If I had two skyscrapers next to each other, touching, and they were an inch different in height, you could probably measure that difference to less than a millimetre,” he says, “If I asked you to tell me the total height of the skyscraper, you probably couldn’t.”
One application of their system, the researchers say, could lie in optical clocks, where frequency shifts due to thermal background noise are a key source of uncertainty. At present, researchers have to perform a lot of in situ thermometry to try to infer the black-body radiation experienced by the clock without disturbing the clock itself. Schlossberger says that, in future, one additional laser could potentially allow the creation of Rydberg states in the clock atoms. “It’s sort of designed so that all the hardware is the same as atomic clocks, so without modifying the clock significantly it would tell you the radiation experienced by the same atoms that are used in the clock in the location they’re used.”
The work is described in a paper in Physical Review Research. Atomic physicist Kevin Weatherill of Durham University in the UK says “it’s an interesting paper and I enjoyed reading it”. “The direction of travel is to look for a quantum measurement for temperature – there are a lot of projects going on at NIST and some here in the UK,” he says. He notes, however, that this experiment is highly complex and says “I think at the moment just measuring the width of an atomic transition in a vapour cell [which is broadened by the Doppler effect as atoms move faster] gives you a better bound on temperature than what’s been demonstrated in this paper.”
I am one of two co-chairs, along with my colleague Hendrik Ohldag, of the Quantum Materials Research and Discovery Thrust Area at ALS. Among other things, our remit is to advise ALS management on long-term strategy regarding quantum science. We also launch and manage beamline development projects to enhance the quantum research capability at ALS and, more broadly, establish collaborations with quantum scientists and engineers in academia and industry.
In terms of specifics, the thrust area addresses problems of condensed-matter physics related to spin and quantum properties – for example, in atomically engineered multilayers, 2D materials and topological insulators with unusual electronic structures. As a beamline scientist, active listening is the key to establishing productive research collaborations with our scientific end-users – helping them to figure out the core questions they’re seeking to answer and, by extension, the appropriate experimental techniques to generate the data they need.
The task, always, is to translate external users’ scientific goals into practical experiments that will run reliably on the ALS beamlines. High-level organizational skills, persistence and exhaustive preparation go a long way: it takes a lot of planning and dialogue to ensure scientific users get high-quality experimental results.
What do you like best and least about your job?
A core part of my remit is to foster the collective conversation between ALS staff scientists and the quantum community, demystifying synchrotron science and the capabilities of the ALS for prospective end-users. The outreach activity is exciting and challenging in equal measure – whether that’s initiating dialogue with quantum experts at scientific conferences or making first contact using Teams or Zoom.
Internally, we also track the latest advances in fundamental quantum science and applied R&D. In-house colloquia are mandatory, with guest speakers from the quantum community engaging directly with ALS staff teams to figure out how our portfolio of synchrotron-based techniques – whether spectroscopy, scattering or imaging – can be put to work by users from research or industry. This learning and development programme, in turn, underpins continuous improvement of the beamline support services we offer to all our quantum end-users.
As for downsides: it’s never ideal when a piece of instrumentation suddenly “breaks” on a Friday afternoon. This sort of troubleshooting is probably the part of the job I like least, though it doesn’t happen often and, in any case, is a hit I’m happy to take given the flexibility inherent to my role.
What do you know today that you wish you knew when you were starting out in your career?
It’s still early days, but I guess the biggest lesson so far is to trust in my own specialist domain knowledge and expertise when it comes to engaging with the diverse research community working on quantum materials. My know-how in photon science – from coherent X-ray scattering and X-ray detector technology to in situ magnetic- and electric-field studies and automated measurement protocols – enables visiting researchers to get the most out of their beamtime at ALS.
Radiation therapy is a targeted cancer treatment that’s typically delivered over several weeks, using a plan that’s optimized on a CT scan taken before treatment begins. But during this time, the geometry of the tumour and the surrounding anatomy can vary, with different patients responding in different ways to the delivered radiation. To optimize treatment quality, such changes must be taken into consideration. And this is where adaptive radiotherapy comes into play.
Adaptive radiotherapy uses patient images taken throughout the course of treatment to update the initial plan and compensate for any anatomical variations. By adjusting the daily plan to match the patient’s daily anatomy, adaptive treatments ensure more precise, personalized and efficient radiotherapy, improving tumour control while reducing toxicity to healthy tissues.
The implementation of adaptive radiotherapy is continuing to expand, as technology developments enable adaptive treatments in additional tumour sites. And as more cancer centres worldwide choose this approach, there’s a need for flexible, innovative software to streamline this increasing clinical uptake.
Designed to meet these needs, RayStation – the treatment planning system from oncology software specialist RaySearch Laboratories – makes adaptive radiotherapy faster and easier to implement in clinical practice. The versatile and holistic RayStation software provides all of the tools required to support adaptive planning, today and into the future.
“We need to be fast, we need to be predictable and we need to be user friendly,” says Anna Lundin, technical product manager at RaySearch Laboratories.
Meeting the need for speed
Typically, adaptive radiotherapy uses the cone-beam CT (CBCT) images acquired for daily patient positioning to perform plan adaptation. To fully reflect daily anatomical changes and fit seamlessly into the clinical workflow, this procedure should be performed “online”, with the patient on the treatment table, rather than “offline”, where plan adaptation occurs after the patient has left the treatment session. Such online adaptation, however, requires the ability to analyse patient scans and perform adaptive replanning as rapidly as possible.
To streamline all types of adaptive workflow – online or offline – RayStation incorporates a package of advanced algorithms that perform key tasks including segmentation, deformable registration, CBCT image enhancement and recontouring, all while taking the previously delivered dose into consideration. By automating these steps, RayStation accelerates the replanning process to the speed needed for online adaptation, with the ability to create an adaptive plan in less than a minute.
Anna Lundin: “Fast and predictable replanning is crucial to allow us to treat more patients with greater specificity using less clinical resources.” (Courtesy: RaySearch Laboratories)
Central to this process is RayStation’s dose tracking, which uses the daily images to calculate the actual dose delivered to the patient in each fraction. This ability to evaluate treatment progress, both on a daily basis and considering the estimated total dose, enables informed decisions as to whether to replan or not. The software’s flexible workflow allows users to perform daily dose tracking, compare plans with daily anatomical information against the original plans and adapt when needed.
“You can document trigger points for when adaptation is needed,” Lundin explains. “So you can evaluate whether the original plan is still good to go or whether you want to update or adapt the treatment plan to changes that have occurred.”
User friendly
Another challenge when implementing online adaptation is that its time constraints necessitate access to intuitive tools that enable quick decision making. “One of the big challenges with adaptive radiotherapy has been that a lot of the decision making and processes have been done on an ad hoc basis,” says Lundin. “We need to utilize the same protocol-based planning for adaptive as we do for standard treatment planning.”
As such, RaySearch Laboratories has focused on developing software that’s easy to use, efficient and accessible to a large proportion of clinical personnel. RayStation enables clinics to define and validate clinical procedures for a specific patient category in advance, eliminating the need to repeat this each time.
“By doing this, we let the clinicians focus on what they do best – taking responsibility for the clinical decisions – while RayStation focuses on providing all the data that they need to make that possible,” Lundin adds.
Versatile design
Lundin emphasizes that this accelerated adaptive replanning solution is built upon RayStation’s pre-existing comprehensive framework. “It’s not a parallel solution, it’s a progression,” she explains. “That means that all the tools that we have for robust optimization and evaluation, tools to assess biological effects, support for multiple treatment modalities – all that is also available when performing adaptive assessments and adaptive planning.”
This flexibility allows RayStation to support both photon- and ion-based treatments, as well as multiple imaging modalities. “We have built a framework that can be configured for each site and each clinical indication,” says Lundin. “We believe in giving users the freedom to select which techniques and which strategies to employ.”
In particular, adaptive radiotherapy is gaining interest among the proton therapy community. For such highly conformal treatments, it’s even more important to regularly assess the actual delivered dose and ensure that the plan is updated to deliver the correct dose each day. “We have the first clinics using RayStation to perform adaptive proton treatments in an online fashion,” Lundin says.
It’s likely that we will also soon see the emergence of biologically adapted radiotherapy, in which treatments are adapted not just to the patient’s anatomy, but to the tumour’s biological characteristics and biological response. Here again, RayStation’s flexible and holistic architecture can support the replanning needs of this advanced treatment approach.
Predictable performance
Lundin points out that the progression towards online adaptation has been valuable for radiotherapy as a whole. “A lot of the improvements required to handle the time-critical procedures of online adaptive are of large benefit to all adaptive assessments,” she explains. “Fast and predictable replanning is crucial to allow us to treat more patients with greater specificity using less clinical resources. I see it as strictly necessary for online adaptive, but good for all.”
Artificial intelligence (AI) is not only a key component in enhancing the speed and consistency of treatment planning (with tools such as deep learning segmentation and planning), but also enables the handling of massive data sets, which in turn allows users to improve the treatment “intents” that they prescribe.
Key component AI plays a central role in enabling RayStation to deliver predictable and consistent treatment planning, with deep learning segmentation (shown in the image) being an integral part. (Courtesy: RaySearch Laboratories)
Learning more about how the delivered dose correlates with clinical outcome provides important feedback on the performance and effectiveness of current adaptive processes. This will help optimize and personalize future treatments and, ultimately, make the adaptive treatments more predictable and effective as a whole.
Lundin explains that full automation is the only way to generate the large amount of data in the predictable and consistent manner required for such treatment advancements, noting that it is not possible to achieve this manually.
RayStation’s ability to preconfigure and automate all of the steps needed for daily dose assessment enables these larger-scale dose follow-up clinical studies. The treatment data can be combined with patient outcomes, with AI employed to gain insight into how to best design treatments or predict how a tumour will respond to therapy.
“I look forward to seeing more outcome-related studies of adaptive radiotherapy, so we can learn from each other and have more general recommendations, as has been done in the field of standard radiotherapy planning,” says Lundin. “We need to learn and we need to improve. I think that is what adaptive is all about – to adapt each person’s treatment, but also adapt the processes that we use.”
Future evolution
Looking to the future, adaptive radiotherapy is expected to evolve rapidly, bolstered by ongoing advances in imaging techniques and increasing data processing speeds. RayStation’s machine learning-based segmentation and plan optimization algorithms will continue to play a central role in supporting this evolution, with AI making treatment adaptations more precise, personalized and efficient, enhancing the overall effectiveness of cancer treatment.
“RaySearch, with the foundation that we have in optimization and advancing treatment planning and workflows, is very well equipped to take on the challenges of these future developments,” Lundin adds. “We are looking forward to the improvements to come and determined to meet the expectations with our holistic software.”
This webinar will present the overall experience of a radiotherapy department that utilizes RTsafe QA solutions, including the RTsafe Prime and SBRT anthropomorphic phantoms for intracranial stereotactic radiosurgery (SRS) and stereotactic body radiation therapy (SBRT) applications, respectively, as well as the remote dosimetry services offered by RTsafe. The session will explore how these phantoms can be employed for end-to-end QA measurements and dosimetry audits in both conventional linacs and a Unity MR-Linac system. Key features of RTsafe phantoms, such as their compatibility with RTsafe’s remote dosimetry services for point (OSLD, ionization chamber), 2D (films), and 3D (gel) dosimetry, will be discussed. These capabilities enable a comprehensive SRS/SBRT accuracy evaluation across the entire treatment workflow – from imaging and treatment planning to dose delivery.
Christopher W Schneider
Christopher Schneider is the adaptive radiotherapy technical director at Mary Bird Perkins Cancer Center and serves as an adjunct assistant professor in the Department of Physics and Astronomy at Louisiana State University in Baton Rouge. Under his supervision, Mary Bird’s MR-guided adaptive radiotherapy program has provided treatment to more than 150 patients in its first year alone. Schneider’s research group focuses on radiation dosimetry, late effects of radiation, and the development of radiotherapy workflow and quality-assurance enhancements.
Imagine you have been transported to another universe with four spatial dimensions. What would the colour of the Sun be in this four-dimensional universe? You may assume that the surface temperature of the Sun is the same as in our universe and is approximately T = 6 × 10³ K. [10 marks]
Boltzmann constant, k_B = 1.38 × 10⁻²³ J K⁻¹
Speed of light, c = 3 × 10⁸ m s⁻¹
Solution
Black-body radiation, spectral energy density: ε(ν) dν = hν ρ(ν) n̄(ν) dν
The photon energy is E = hν, where h is Planck’s constant and ν is the photon frequency.
The density of states is ρ(ν) = Aν^(n−1), where A is a constant independent of the frequency and the power of ν comes from the scaling of the surface area of an n-dimensional sphere.
The Bose–Einstein distribution,
n̄(ν) = 1/(e^(hν/k_BT) − 1),
where k_B is the Boltzmann constant and T is the temperature.
We let
x = hν/(k_BT)
and get
ε(x) ∝ x^n/(e^x − 1).
We do not need the constant of proportionality (which is not simple to calculate in 4D) to find the maximum of ε(x). Working out the constant just tells us how tall the peak is, but we are interested in where the peak is, not the total radiation.
We differentiate ε(x) with respect to x and set this equal to zero for the maximum of the distribution,
dε/dx ∝ [n x^(n−1)(e^x − 1) − x^n e^x]/(e^x − 1)² = 0.
This yields x = n(1 − e^(−x)), where x = hν_max/(k_BT), and we can relate the peak frequency to a peak wavelength via λ_max = c/ν_max, with c being the speed of light.
This equation has the solution x = n + W(−ne^(−n)), where W is the Lambert W function z = W(y) that solves ze^z = y (although there is a subtlety about which branch of the function to take). This is kind of useless to do anything with, though. One can numerically solve this equation using bisection/Newton–Raphson/iteration. Alternatively, one could notice that as the number of dimensions increases, e^(−x) is small, so to leading approximation x ≈ n. One can do a little better iterating this, x ≈ n − ne^(−n), which is what we will use. Note the second iteration yields x ≈ n(1 − e^(−n + ne^(−n))).
Number of dimensions, n     Numerical solution     Approximation
2                           1.594                  1.729
3                           2.821                  2.851
4 (the one we want)         3.921                  3.927
5                           4.965                  4.966
6                           5.985                  5.985
Using the result above, λ_max = hc/(x k_BT) ≈ 616 nm. This is in the middle of the visible spectrum, so the Sun will look white with a green-blue tint. Note, we have used T = 6000 K for the temperature here, as given in the question.
It would also be valid to look at ε (λ) dλ instead of ε (ν) dν.
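For readers who want to check these numbers, here is a minimal sketch (not part of the official solution) that solves x = n(1 − e^(−x)) by fixed-point iteration and converts the n = 4 root into a peak wavelength. A value for Planck’s constant is supplied here, since only k_B and c are listed above, and differences of a few nanometres simply reflect how the constants are rounded.

```python
# Numerical check of Question 1: solve x = n(1 - exp(-x)) by fixed-point iteration
# and convert the n = 4 result into a peak wavelength.
import math

h, kB, c = 6.626e-34, 1.38e-23, 3e8    # SI units; h is not listed in the question
T = 6e3                                 # surface temperature of the Sun (K)

def peak_x(n, iterations=50):
    """Iterate x -> n(1 - e^(-x)) starting from x = n."""
    x = float(n)
    for _ in range(iterations):
        x = n * (1.0 - math.exp(-x))
    return x

for n in (2, 3, 4, 5, 6):
    print(n, round(peak_x(n), 3))       # reproduces the 'numerical solution' column

x4 = peak_x(4)
lam_max = h * c / (x4 * kB * T)         # peak wavelength for four spatial dimensions
print(f"lambda_max = {lam_max*1e9:.0f} nm")   # roughly 610-620 nm, i.e. visible light
```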
Question 2: Heavy stuff
In a parallel universe, two point masses, each of 1 kg, start at rest a distance of 1 m apart. The only force on them is their mutual gravitational attraction, F = −Gm₁m₂/r². If it takes 26 hours and 42 minutes for the two masses to meet in the middle, calculate the value of the gravitational constant G in this universe. [10 marks]
Solution
First we will set up the equations of motion for our system. We will set one mass to be at position −x and the other to be at x, so the masses are a distance of 2x from each other. Starting from Newton’s law of gravity,
F = −Gm²/(2x)²,
we can then use Newton’s second law, F = m(d²x/dt²), to rewrite the LHS,
m(d²x/dt²) = −Gm²/(2x)²,
which we can simplify to
d²x/dt² = −Gm/(4x²).
It is important that you get the right factor here depending on your choice for the particle coordinates at the start. Note there are other methods of getting to this point, e.g. using the reduced mass.
We can now solve the second-order ODE above. We will not show the whole process here but present the starting point and key results. We can write the acceleration in terms of the velocity, d²x/dt² = v(dv/dx). The initial velocity is zero and the initial position is x₀ = 0.5 m.
So,
∫₀^v v′ dv′ = −(Gm/4) ∫_{x₀}^{x} dx′/x′²,
and once the integrals are solved we can rearrange for the velocity,
v = −√[(Gm/2)(1/x − 1/x₀)].
Now we can form an expression for the total time taken for the masses to meet in the middle,
t = ∫₀^{x₀} dx/√[(Gm/2)(1/x − 1/x₀)].
There are quite a few steps involved in solving this integral; for these solutions, we shall make use of the following standard result (but do attempt to solve it for yourselves in full):
∫₀^{x₀} √[x/(x₀ − x)] dx = (π/2)x₀.
Hence,
t = (π/2)√(2x₀³/(Gm)).
We can now rearrange for G and substitute in the values given in the question (don’t forget to convert the time into seconds, t = 96,120 s):
G = π²x₀³/(2t²m) ≈ 6.67 × 10⁻¹¹ m³ kg⁻¹ s⁻².
This is the generally accepted value for the gravitational constant of our universe as well.
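As a cross-check of this result (a sketch under the stated assumptions, not part of the official solution), one can integrate the equation of motion derived above, d²x/dt² = −Gm/(4x²), using the accepted value of G and confirm that the masses meet after roughly 26 hours and 42 minutes:

```python
# Sanity check of Question 2: with G = 6.674e-11, integrate x'' = -G m / (4 x^2)
# from x0 = 0.5 m (half the initial separation) until the masses meet, and compare
# with the analytic result t = (pi/2) * sqrt(2 x0^3 / (G m)).
import math

G, m = 6.674e-11, 1.0
x, v = 0.5, 0.0          # half-separation (m) and velocity (m/s)
t, dt = 0.0, 0.5         # elapsed time and time step (s)

while x > 1e-4:          # stop just before the (singular) collision point
    a = -G * m / (4.0 * x * x)
    v += a * dt          # semi-implicit (Euler-Cromer) update
    x += v * dt
    t += dt

analytic = (math.pi / 2.0) * math.sqrt(2 * 0.5**3 / (G * m))
print(f"numerical: {t/3600:.2f} h, analytic: {analytic/3600:.2f} h")  # both close to 26.7 h
```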
Question 3: Just like clockwork
Consider a pendulum clock that is accurate on the Earth’s surface. Figure 1 shows a simplified view of this mechanism.
1 Tick tock Simplified schematic of a pendulum clock mechanism. When the pendulum swings one way (a), the escapement releases the gear attached to the hanging mass and allows it to fall. When the pendulum swings the other way (b) the escapement stops the gear attached to the mass moving so the mass stays in place. (Courtesy: Katherine Skipper/IOP Publishing)
A pendulum clock runs on the gravitational potential energy from a hanging mass (1). The other components of the clock mechanism regulate the speed at which the mass falls so that it releases its gravitational potential energy over the course of a day. This is achieved using a swinging pendulum of length l (2), whose period is given by
T = 2π√(l/g),
where g is the acceleration due to gravity.
Each time the pendulum swings, it rocks a mechanism called an “escapement” (3). When the escapement moves, the gear attached to the mass (4) is released. The mass falls freely until the pendulum swings back and the escapement catches the gear again. The motion of the falling mass transfers energy to the escapement, which gives a “kick” to the pendulum that keeps it moving throughout the day.
Radius of the Earth, R = 6.3781 × 106 m
Period of one Earth day, τ0 = 8.64 × 104 s
How slow will the clock be over the course of a day if it is lifted to the hundredth floor of a skyscraper? Assume the height of each storey is 3 m. [4 marks]
Solution
We will write the period of oscillation of the pendulum at the surface of the Earth to be
T₀ = 2π√(l/g₀).
At a height h above the surface of the Earth the period of oscillation will be
T_h = 2π√(l/g_h),
where g₀ and g_h are the acceleration due to gravity at the surface of the Earth and at a height h above it, respectively.
We can define τ₀ to be the total duration of the day, which is 8.64 × 10⁴ seconds and equal to N complete oscillations of the pendulum at the surface. The lag is then τ_h, which will equal N times the difference in one period of the two clocks, τ_h = NΔT, where ΔT = (T_h − T₀). We can now take the ratio of the lag over the day to the total duration of the day:
τ_h/τ₀ = NΔT/(NT₀) = (T_h − T₀)/T₀.
Then by substituting in the expressions we have for the period of a pendulum at the surface and at height h, we can write this in terms of the accelerations due to gravity,
τ_h/τ₀ = √(g₀/g_h) − 1.
[Award 1 mark for finding the ratio of the lag over the day and the total period of the day.]
The acceleration due to gravity at the Earth’s surface is
g₀ = GM/R²,
where G is the universal gravitational constant, M is the mass of the Earth and R is the radius of the Earth. At an altitude h, it will be
g_h = GM/(R + h)².
[Award 1 mark for finding the expression for the acceleration due to gravity at height h.]
Substituting into our expression for the lag, we get:
τ_h/τ₀ = √[(R + h)²/R²] − 1 = (R + h)/R − 1 = h/R.
This simplifies to an expression for the lag over a day, τ_h = τ₀h/R. We can then substitute in the given values (with h = 300 m for the hundredth floor) to find
τ_h ≈ 4 s.
[Award 2 marks for completing the simplification of the ratio and finding the lag to be ≈ 4 s.]
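As a quick numerical check of the final step (a sketch, not part of the mark scheme; it assumes the stated 3 m storey height):

```python
# Quick check of Question 3: daily lag of a pendulum clock raised to the hundredth floor.
tau0 = 8.64e4        # duration of a day (s)
R = 6.3781e6         # radius of the Earth (m)
h = 100 * 3.0        # height of the hundredth floor (m)
print(f"lag = {tau0 * h / R:.2f} s per day")   # about 4 s
```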
Question 4: Quantum stick
Imagine an infinitely thin stick of length 1 m and mass 1 kg that is balanced on its end. Classically this is an unstable equilibrium, although the stick will stay there forever if it is perfectly balanced. However, in quantum mechanics there is no such thing as perfectly balanced due to the uncertainty principle – you cannot have the stick perfectly upright and not moving at the same time. One could argue that the quantum mechanical effects of the uncertainty principle on the system are overpowered by others, such as air molecules and photons hitting it or the thermal excitation of the stick. Therefore, to investigate we would need ideal conditions such as a dark vacuum, and cooling to a few millikelvins, so the stick is in its ground state.
Moment of inertia for a rod rotating about one end,
I = ml²/3,
where m is the mass and l is the length.
Uncertainty principle,
ΔxΔp ≥ ħ/2.
There are several possible approximations and simplifications you could make in solving this problem, including:
sinθ ≈ θ for small θ
and
cosh⁻¹x = ln(x + √(x² − 1)), sinh⁻¹x = ln(x + √(x² + 1)).
Calculate the maximum time it would take such a stick to fall over and hit the ground if it is placed in a state compatible with the uncertainty principle. Assume that you are on the Earth’s surface. [10 marks]
Hint: Consider the two possible initial conditions that arise from the uncertainty principle.
Solution
We can imagine this as an inverted pendulum, with gravity acting from the centre of mass and at an angle θ from the unstable equilibrium point.
[Award 1 mark for a suitable diagram of the system.]
We must now find the equations of motion of the system. For this we can use Newton’s second law in its rotational form, τ = Iα (torque = moment of inertia × angular acceleration). We have another expression for the torque we can use as well,
τ = rF sinθ,
where r is the distance from the pivot to the centre of mass (here l/2) and F is the force, which in this case is gravity, mg. We can then equate these, giving
Iα = (l/2)mg sinθ.
Substituting in the given moment of inertia of the stick and that the angular acceleration α = d²θ/dt², we can cancel a few things and rearrange to get a differential equation of the form
d²θ/dt² = (3g/(2l)) sinθ.
We then can take the small-angle approximation sinθ ≈ θ, resulting in
d²θ/dt² = (3g/(2l)) θ.
[Award 2 marks for finding the equation of motion for the system and using the small angle approximation.]
Solve with the ansatz θ = Ae^(ωt) + Be^(−ωt), where we have chosen
ω = √(3g/(2l)).
We can clearly see that this will satisfy the differential equation.
Now we can apply initial conditions to find A and B, by looking at the two cases that arise from the uncertainty principle.
Case 1: The stick is at an angle but not moving
At t = 0, θ = Δθ, so
Δθ = A + B.
At t = 0, dθ/dt = 0, which gives
A = B.
This implies Δθ = 2A and we can then find A = B = Δθ/2.
So we can now write
θ(t) = (Δθ/2)(e^(ωt) + e^(−ωt)),
or
θ(t) = Δθ cosh(ωt).
Case 2: The stick is upright but moving
At t = 0, θ = 0.
This condition gives us A = −B.
At t = 0, dθ/dt = 2Δv/l.
This initial condition comes from the relationship between the tangential velocity of the centre of mass, Δv, and the angular velocity: Δv = (l/2)(dθ/dt), where l/2 is the distance from the pivot point to the centre of mass. Using the above initial condition gives A = Δv/(ωl).
We can now write
θ(t) = (2Δv/(ωl)) sinh(ωt).
[Award 4 marks for finding the two expressions for θ by using the two cases of the uncertainty principle.]
Now there are a few ways we can finish off this problem; we shall look at three different methods. In each case, when the stick has fallen on the ground, θ = π/2.
Method 1
Take Δθ cosh(ωt_f) = π/2 and (2Δv/(ωl)) sinh(ωt_f) = π/2, then rearrange for t_f in both cases. We have
t_f = (1/ω) cosh⁻¹(π/(2Δθ)) and t_f = (1/ω) sinh⁻¹(πωl/(4Δv)).
Look at the expressions for cosh⁻¹x and sinh⁻¹x given in the question. They are almost identical, so we can approximate the two arguments to be equal to each other, and we find
π/(2Δθ) = πωl/(4Δv), i.e. Δv = ωΔx, where Δx = (l/2)Δθ.
We can then substitute in the uncertainty principle as ΔxΔp = mΔxΔv = ħ/2, write an expression for Δx, and put it back into our arccosh expression (or do it for Δv and put it into the arcsinh). This gives
t_f = (1/ω) cosh⁻¹(πl/(4Δx)),
where Δx = √(ħ/(2mω)) and ω = √(3g/(2l)).
Method 2
In this next method, when you get to the inverse hyperbolic functions, you can take an expansion of their natural-log forms in the limit of large argument. To first order both functions give ln(2x), so we can equate the arguments, find Δx or Δv in terms of the other and use the uncertainty principle. This gives the time taken as
t_f = (1/ω) ln(πl/(2Δx)),
where Δx = √(ħ/(2mω)) and ω = √(3g/(2l)).
Method 3
Rather than using hyperbolic functions, you could do something like the above and expand the exponentials in the two expressions for t_f, or we could make life even easier and do the following.
Disregard the e^(−ωt) terms, as they will be much smaller than the e^(ωt) terms. Equate the two expressions for θ(t_f) = π/2 and then take natural logs, once again arriving at an expression of
t_f = (1/ω) ln(πl/(2Δx)),
where Δx = √(ħ/(2mω)) and ω = √(3g/(2l)).
This method effectively sets B = 0 when applying the initial conditions.
[Award 2 marks for reaching an expression for t using one of the methods above or a suitable alternative that gives the correct units for time.]
Then, by using one of the expressions above for time, substitute in the values and find that t = 10.58 seconds.
[Award 1 mark for finding the correct time value of t = 10.58 seconds.]
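As a final numerical check (a sketch under the assumptions above, not the official mark scheme), the expression from Method 1 can be evaluated directly:

```python
# Numerical check of Question 4: t = (1/omega) * arccosh(pi*l/(4*dx)),
# with dx = sqrt(hbar/(2*m*omega)) from the uncertainty principle.
import math

hbar = 1.0545718e-34    # reduced Planck constant (J s)
g, l, m = 9.81, 1.0, 1.0

omega = math.sqrt(3 * g / (2 * l))       # from the linearized equation of motion
dx = math.sqrt(hbar / (2 * m * omega))   # uncertainty-limited initial displacement
t_fall = math.acosh(math.pi * l / (4 * dx)) / omega
print(f"t = {t_fall:.2f} s")             # approximately 10.58 s, as above
```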
If you’re a student who wants to sign up for the 2025 edition of PLANCKS UK and Ireland, entries are now open at plancks.uk
This episode of the Physics World Weekly podcast features Mark Thomson, who will become the next director-general of CERN in January 2026. In a conversation with Physics World’s Michael Banks, Thomson shares his vision of the future of the world’s preeminent particle physics lab, which is home to the Large Hadron Collider (LHC).
They chat about the upcoming high-luminosity upgrade to the LHC (HL-LHC), which will be completed in 2030. The interview explores long-term strategies for particle physics research and the challenges of managing large international scientific organizations. Thomson also looks back on his career in particle physics and his involvement with some of the field’s biggest experiments.
This podcast is supported by Atlas Technologies, specialists in custom aluminium and titanium vacuum chambers as well as bonded bimetal flanges and fittings used everywhere from physics labs to semiconductor fabs.
Oil spills can pollute large volumes of surrounding water – thousands of times greater than the spill itself – causing long-term economic, environmental, social and ecological damage. Effective methods for in situ capture of spilled oil are thus essential to minimize contamination from such disasters.
Many oil spill cleanup technologies, however, exhibit poor hydrodynamic stability under complex flow conditions, which leads to poor oil-capture efficiency. To address this shortfall, researchers from Harbin Institute of Technology in China have come up with a new approach to oil cleanup using a vortex-anchored filter (VAF).
“Since the 1979 Atlantic Empress disaster, interception and adsorption have been the primary methods for oil spill recovery, but these are sensitive to water-flow fluctuation,” explains lead author Shijie You. Oil-in-water emulsions from leaking pipelines and offshore industrial discharge are particularly challenging, says You, adding that “these problems inspire us to consider how we can address hydrodynamic stability of oil-capture devices under turbulent conditions”.
Inspired by the natural world
You and colleagues believe that the answers to oil spill challenges could come from nature – arguably the world’s greatest scientist. They found that the deep-sea glass sponge E. aspergillum, which lives at depths of up to 1000 m in the Pacific Ocean, has an excellent ability to filter feed with high effectiveness, selectivity and robustness, and that its food particles share similarities with oil droplets.
The anatomical structure of E. aspergillum – also known as Venus’ flower basket – provided inspiration for the researchers to design their VAF. By mimicking the skeletal architecture and filter feeding patterns of the sponge, they created a filter that exhibited a high mass transfer and hydrodynamic stability in cleaning up oil spills under turbulent flow.
“The E. aspergillum has a multilayered skeleton–flagellum architecture, which creates 3D streamlines with frequent collision, deflection, convergence and separation,” explains You. “This can dissipate macro-scale turbulent flows into small-scale swirling flow patterns called low-speed vortical flows within the body cavity, which reduces hydrodynamic load and enhances interfacial mass transfer.”
For the sponges, this allows them to maintain a high mechanical stability while absorbing nutrients from the water. The same principles can be applied to synthetic materials for cleaning up oil spills.
VAF design Skeletal motif of E. aspergillum and (right column) front and top views of the VAF with a bio-inspired hollow cylinder skeleton and flagellum adsorbent. (Courtesy: Y Yu et al. Nat. Commun. 10.1038/s41467-024-55587-y)
The VAF is a synthetic form of the sponge’s architecture and, according to You, “is capable of transferring kinematic energy from an external water flow into multiple small-scale low-speed vortical flows within the body cavity to enhance hydrodynamic stability and oil capture efficiency”.
The tubular outer skeleton of the VAF comprises a helical ridge and chequerboard lattice. It is this skeleton that creates a slow vortex field inside the cavity and enables mass transfer of oil during the filtering process. Once the oil has been forced into the filter, the internal area – composed of flagellum-shaped adsorbent materials – provides a large interfacial area for oil adsorption.
Using the VAF to clean up oil spills
The researchers used their nature-inspired VAF to clean up oil spills under complex hydrodynamic conditions. You states that “the VAF can retain the external turbulent-flow kinetic energy in the low-speed vortical flows – with a small Kolmogorov microscale (85 µm) [the size of the smallest eddy in a turbulent flow] – inside the cavity of the skeleton, leading to enhanced interfacial mass transfer and residence time”.
“This led to an improvement in the hydrodynamic stability of the filter compared to other approaches by reducing the Reynolds stresses in nearly quiescent wake flows,” You explains. The filter was also highly resistant to the bending stresses generated at its boundary when separating viscous fluids. When put into practice, the VAF was able to capture more than 97% of floating, underwater and emulsified oils, even under strong turbulent flow.
When asked how the researchers plan to improve the filter further, You tells Physics World that they “will integrate the VAF with photothermal, electrothermal and electrochemical modules for environmental remediation and resource recovery”.
“We look forward to applying VAF-based technologies to solve sea pollution problems with a filter that has an outstanding flexibility and adaptability, easy-to-handle operability and scalability, environmental compatibility and life-cycle sustainability,” says You.
A topological electronic crystal (TEC) in which the quantum Hall effect emerges without the need for an external magnetic field has been unveiled by an international team of physicists. Led by Josh Folk at the University of British Columbia, the group observed the effect in a stack of bilayer and trilayer graphene that is twisted at a specific angle.
In a classical electrical conductor, the Hall voltage and its associated resistance appear perpendicular both to the direction of an applied electrical current and an applied magnetic field. A similar effect is also seen in 2D electron systems that have been cooled to ultra-low temperatures. But in this case, the Hall resistance becomes quantized in discrete steps.
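For reference (the figure is not quoted in the article itself), the plateaux in the quantum Hall regime take the values R_xy = h/(νe²) ≈ 25.8 kΩ/ν, where h is Planck’s constant, e is the electron charge and ν is the integer (or, in the fractional regime, rational) filling factor; the quantum anomalous Hall effect discussed below gives the same quantized values without an external magnetic field.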
This quantum Hall effect can emerge in electronic crystals, also known as Wigner crystals. These are arrays of electrons that are held in place by their mutual repulsion. Some researchers have considered the possibility of a similar effect occurring in structures called TECs, but without an applied magnetic field. This is called the “quantum anomalous Hall effect”.
Anomalous Hall crystal
“Several theory groups have speculated that analogues of these structures could emerge in quantized anomalous Hall systems, giving rise to a type of TEC termed an ‘anomalous Hall crystal’,” Folk explains. “This structure would be insulating, due to a frozen-in electronic ordering in its interior, with dissipation-free currents along the boundary.”
For Folk’s team, the possibility of anomalous Hall crystals emerging in real systems was not the original focus of their research. Initially, a team at the University of Washington had aimed to investigate the diverse phenomena that emerge when two or more flakes of graphene are stacked on top of each other and twisted relative to each other at different angles.
While many interesting behaviours emerged from these structures, one particular stack caught the attention of Washington’s Dacen Waters, which inspired his team to get in touch with Folk and his colleagues in British Columbia.
In the vast majority of cases, the twisted structures studied by the team had moiré patterns – formed when two lattices are overlaid and rotated relative to each other – that were very disordered. Yet out of tens of thousands of permutations of twisted graphene stacks, one structure appeared to be different.
Exceptionally low levels of disorder
“One of the stacks seemed to have exceptionally low levels of disorder,” Folk describes. “Waters shared that one with our group to explore in our dilution refrigerator, where we have lots of experience measuring subtle magnetic effects that appear at a small fraction of a degree above absolute zero.”
As they studied this highly ordered structure, the team found that its moiré pattern helped to modulate the system’s electronic properties, allowing a TEC to emerge.
“We observed the first clear example of a TEC, in a device made up of bilayer graphene stacked atop trilayer graphene with a small, 1.5° twist,” Folk explains. “The underlying topology of the electronic system, combined with strong electron-electron interactions, provide the essential ingredients for the crystal formation.”
After decades of theoretical speculation, Folk, Waters and colleagues have identified an anomalous Hall crystal, where the quantum Hall effect emerges from an in-built electronic structure, rather than an applied magnetic field.
Beyond confirming the theoretical possibility of TECs, the researchers are hopeful that their results could lay the groundwork for a variety of novel lines of research.
“One of the most exciting long-term directions this work may lead is that the TEC by itself – or perhaps a TEC coupled to a nearby superconductor – may host new kinds of particles,” Folk says. “These would be built out of the ‘normal’ electrons in the TEC, but totally unlike them in many ways: such as their fractional charge, and properties that would make them promising as topological qubits.”
Pollution from microplastics – small plastic particles less than 5 mm in size – poses an ongoing threat to human health. Independent studies have found microplastics in human tissues and within the bloodstream. And as blood circulates throughout the body and through vital organs, these microplastics can reach critical regions and lead to tissue dysfunction and disease. Microplastics can also cause functional irregularities in the brain, but exactly how they exert neurotoxic effects remains unclear.
A research collaboration headed up at the Chinese Research Academy of Environmental Sciences and Peking University has shed light on this conundrum. In a series of cerebral imaging studies reported in Science Advances, the researchers tracked the progression of fluorescent microplastics through the brains of mice. They found that microplastics entering the bloodstream become engulfed by immune cells, which then obstruct blood vessels in the brain and cause neurobehavioral abnormalities.
“Understanding the presence and the state of microplastics in the blood is crucial. Therefore, it is essential to develop methods for detecting microplastics within the bloodstream,” explains principal investigator Haipeng Huang from Peking University. “We focused on the brain due to its critical importance: if microplastics induce lesions in this region, it could have a profound impact on the entire body. Our experimental technology enables us to observe the blood vessels within the brain and detect microplastics present in these vessels.”
In vivo imaging
Huang and colleagues developed a microplastics imaging system by integrating a two-photon microscopy system with fluorescent plastic particles and demonstrated that it could image brain blood vessels in awake mice. They then fed five mice with water containing 5-µm diameter fluorescent microplastics. After a couple of hours, fluorescence images revealed microplastics within the animals’ cerebral vessels.
Lightning bolt The “MP-flash” observed as two plastic particles rapidly fly through the cerebral blood vessels. (Courtesy: Haipeng Huang)
As they move through rapidly flowing blood, the microplastics generate a fluorescence signal resembling a lightning bolt, which the researchers call a “microplastic flash” (MP-flash). This MP-flash was observed in four of the mice, with the entire MP-flash trajectory captured in a single imaging frame of less than 208 ms.
Three hours after administering the microplastics, the researchers observed fluorescent cells in the bloodstream. The signals from these cells were of comparable intensity to the MP-flash signal, suggesting that the cells had engulfed microplastics in the blood to create microplastic-labelled cells (MPL-cells). The team note that the microplastics did not directly attach to the vessel wall or cross into brain tissue.
To test this idea further, the researchers injected microplastics directly into the bloodstream of the mice. Within minutes, they saw the MP-flash signal in the brain’s blood vessels, and roughly 6 min later MPL-cells appeared. No fluorescent cells were seen in non-treated mice. Flow cytometry of mouse blood after microplastics injection revealed that the MPL-cells, which were around 21 µm in diameter, were immune cells, mostly neutrophils and macrophages.
Tracking these MPL-cells revealed that they sometimes became trapped within a blood vessel. Some cells exited the imaging field following a period of obstruction while others remained in cerebral vessels for extended durations, in some instances for nearly 2.5 h of imaging. The team also found that one week after injection, the MPL-cells had still not cleared, although the density of blockages was much reduced.
“[While] most MPL-cells flow rapidly with the bloodstream, a small fraction become trapped within the blood vessels,” Huang tells Physics World. “We provide an example where an MPL-cell is trapped at a microvascular turn and, after some time, is fortunate enough to escape. Many obstructed cells are less fortunate, as the blockage may persist for several weeks. Obstructed cells can also trigger a crash-like chain reaction, resulting in several MPL-cells colliding in a single location and posing significant risks.”
The MPL-cell blockages also impeded blood flow in the mouse brain. Using laser speckle contrast imaging to monitor blood flow, the researchers saw reduced perfusion in the cerebral cortical vessels, notably at 30 min after microplastics injection and particularly affecting smaller vessels.
Reduced blood flow These laser speckle contrast images show blood flow in the mouse brain at various times after microplastics injection. The images indicate that blockages of microplastic-labelled cells inhibit perfusion in the cerebral cortical vessels. (Courtesy: Huang et al. Sci. Adv.11 eadr8243 (2025))
Changing behaviour
Lastly, Huang and colleagues investigated whether the reduced blood supply to the brain caused by cell blockages caused behavioural changes in the mice. In an open-field experiment (used to assess rodents’ exploratory behaviour) mice injected with microplastics travelled shorter distances at lower speeds than mice in the control group.
The Y-maze test for assessing memory also showed that microplastics-treated mice travelled smaller total distances than control animals, with a significant reduction in spatial memory. Tests to evaluate motor coordination and endurance revealed that microplastics additionally inhibited motor abilities. By day 28 after injection, these behavioural impairments had resolved, corresponding with the observed recovery of MPL-cell obstruction in the cerebral vasculature at 28 days.
The researchers conclude that their study demonstrates that microplastics harm the brain indirectly – via cell obstruction and disruption of blood circulation – rather than directly penetrating tissue. They emphasize, however, that this mechanism may not necessarily apply to humans, who have roughly 1200 times the circulating blood volume of mice and significantly different vascular diameters.
“In the future, we plan to collaborate with clinicians,” says Huang. “We will enhance our imaging techniques for the detection of microplastics in human blood vessels, and investigate whether ‘MPL-cell-car-crash’ happens in human. We anticipate that this research will lead to exciting new discoveries.”
Huang emphasizes how the use of fluorescent microplastic imaging technology has fundamentally transformed research in this field over the past five years. “In the future, advancements in real-time imaging of depth and the enhanced tracking ability of microplastic particles in vivo may further drive innovation in this area of study,” he says.
If you have worked in a university, research institute or business during the past two decades you will be familiar with the term equality, diversity and inclusion (EDI). There is likely to be an EDI strategy that includes measures and targets to nurture a workforce that looks more like the wider population and a culture in which everyone can thrive. You may find a reasoned business case for EDI, which extends beyond the organization’s legal obligations, to reflect and understand the people that you work with.
Look more closely and it is possible that the “E” in EDI is not actually equality, but rather equity. Equity is increasingly being used as a more active commitment, not least by the Institute of Physics, which publishes Physics World. How, though, is equity different to equality? What is causing this change of language and will it make any difference in practice?
These questions have become more pressing as discussions around equality and equity have become entwined in the culture wars. This is a particularly live issue in the US, where Donald Trump, at the start of his second term as president, has begun to withdraw funding from EDI activities. But it has also influenced science policy in the UK.
The distinction between equality and equity is often illustrated by a cartoon published in 2016 by the UK artist Angus Maguire (above). It shows people of varying heights gaining an equal view of a baseball match over a fence, thanks to the different numbers of crates they stand on. The cartoon has itself, however, prompted arguments about other factors, such as the conditions necessary to watch the game in the stadium, or indeed even to join in. That requires consideration of how the teams and the stadium could adapt to the needs of all potential participants, but also of how these changes might affect the experience of others involved.
In terms of education, the Organization for Economic Co-operation and Development (OECD) states that equity “does not mean that all students obtain equal education outcomes, but rather that differences in students’ outcomes are unrelated to their background or to economic and social circumstances over which the students have no control”. This is an admirable goal, but there are questions about how to achieve it.
In OECD member countries, freedom of choice and competition yield social inequalities that flow through to education and careers. This means that governments are continually balancing the benefits of inspiring and rewarding individuals alongside concerns about group injustice.
In 2024, we hosted a multidisciplinary workshop about equity in science, and especially physics. Held at the University of Birmingham, it brought together physicists at different career stages with social scientists and people who had worked on science and education in government, charities and learned societies. At the event, social scientists told us that equality is commonly conceived as a basic right to be treated equally and not discriminated against, regardless of personal characteristics. This right provides a platform for “equality of opportunity” whereby barriers are removed so talent and effort can be rewarded.
Actions like these have helped to improve participation and progression across physics education and careers, but there is still significant underrepresentation and marginalization due to gender, ethnicity and social background. This is not unusual in open and competitive societies where the effects of promoting equal opportunities are often outweighed by the resources and connections of people with characteristics that are highly represented. Talent and effort are crucial in “high-performance” sectors such as academia and industry, but they are not the only factors influencing success.
Physicists at the meeting told us that they are motivated by intellectual curiosity, fascination with the natural world and love for their subject. Yet there is also, in physics, a culture of “genius” and competition, in which confidence is crucial. Facilities and working conditions, which often involve short-term contracts and international mobility, are difficult to balance alongside other life commitments. Although inequalities and exclusions are recognized, they are often ascribed to broader social factors or the inherent requirements of research. As a result, physicists tend not to accept responsibility for inequities within the discipline.
Physics has a culture of “hyper-meritocracy” where being correct counts more than respecting others
Many physicists want merit to be a reflection of talent and effort. But we identified that physics has a culture of “hyper-meritocracy” where being correct counts more than respecting others. Across the community, some believe in positive action beyond the removal of discrimination, but others can be actively hostile to any measure associated with EDI. This is a challenging environment for any young researcher and we heard distressing stories of isolation from women and colleagues who had hidden disabilities or those who were the first in their family to go to university.
The experience, positive or not, when joining a research group as a postgraduate or postdoctoral researcher is often linked with the personality of leaders. Peer groups and networks have helped many physicists through this period of their career, but it is also where the culture in a research group or department can drive some to the margins and ultimately out of the profession. In environments like this, equal opportunities have proved insufficient to advance diversity, let alone inclusion.
Culture change
Organizations that have replaced equality with equity want to signal a commitment not just to equal treatment, but also more equitable outcomes. However, those who have worked in government told us that some people become disengaged, thinking such efforts can only be achieved by reducing standards and threatening cultures they value. Given that physics needs technical proficiency and associated resources and infrastructure, it is not a discipline where equity can mean an equal distribution of positions and resources.
Physics can, though, counter the influence of wider inequalities by helping colleagues who are under-represented to gain the attributes, experiences and connections that are needed to compete successfully for doctoral studentships, research contracts and academic positions. It can also face up to its cultural problems, so colleagues who are minoritized feel less marginalized and they are ultimately recognized for their efforts and contributions.
This will require physicists to give more prominence to marginalized voices, as well as to examine their culture critically and honestly and to tackle unacceptable behaviour. We believe we can achieve this by collaborating with our social science colleagues. That includes gathering and interpreting qualitative data, so there is a shared understanding of problems, as well as designing strategies with the people who are most affected, so that everyone has a stake in success.
If this happens, we can look forward to a physics community that genuinely practises equity, rather than espousing equality of opportunity.
This video has no voice over. (Video courtesy: Space Production)
The aim of the International Year of Quantum Science & Technology (IYQ) in 2025 is to help raise the public’s awareness of the importance and impact of quantum science and applications on all aspects of life.
Ukraine-born artist Oksana Kondratyeva has certainly taken that message to heart. A London-based designer and producer of architectural glass art, she has recently created an intriguing piece of stained glass inspired by the casing for a quantum computer.
In this video specially made by Kondratyeva for Physics World, you can see her artwork, which was displayed at the 2024 British Glass Biennale, and glimpse the artist in the protective gear she wears while working with the chemicals to make her piece.
In the feature, Kondratyeva describes how her work fuses science and art – and reveals how her collaboration with the quantum computing firm Rigetti came about. As it happens, it was an article in Physics World during another international year – devoted to glass – that inspired the project.
Brilliant mind Illustration of the Danish physicist and Nobel laureate Niels Bohr (1885-1962). Bohr made numerous contributions to physics during his career, but it was his work on atomic structure and quantum theory that won him the 1922 Nobel Prize for Physics. (Courtesy: Sam Falconer, Debut Art/Science Photo Library)
One hundred and one years ago, Danish physicist Niels Bohr proposed a radical theory together with two young colleagues – Hendrik Kramers and John Slater – in an attempt to resolve some of the most perplexing issues in fundamental physics at the time. Entitled “The Quantum Theory of Radiation”, and published in the Philosophical Magazine, their hypothesis was quickly proved wrong, and has since become a mere footnote in the history of quantum mechanics.
Despite its swift demise, their theory perfectly illustrates the sense of crisis felt by physicists at that moment, and the radical ideas they were prepared to contemplate to resolve it. For in their 1924 paper Bohr and his colleagues argued that the discovery of the “quantum of action” might require the abandonment of nothing less than the first law of thermodynamics: the conservation of energy.
As we celebrate the centenary of Werner Heisenberg’s 1925 quantum breakthrough with the International Year of Quantum Science and Technology (IYQ) 2025, Bohr’s 1924 paper offers a lens through which to look at how the quantum revolution unfolded. Most physicists at that time felt that if anyone was going to rescue the field from the crisis, it would be Bohr. Indeed, this attempt clearly shows signs of the early rift between Bohr and Albert Einstein over the quantum realm that would turn into a lifelong argument. Remarkably, the paper also drew on an idea that later featured in one of today’s most prominent alternatives to Bohr’s “Copenhagen” interpretation of quantum mechanics.
Genesis of a crisis
The quantum crisis began when German physicist Max Planck proposed the quantization of energy in 1900, as a mathematical trick for calculating the spectrum of radiation from a warm, perfectly absorbing “black body”. Later, in 1905, Einstein suggested taking this idea literally to account for the photoelectric effect, arguing that light consisted of packets or quanta of electromagnetic energy, which we now call photons.
Bohr entered the story in 1912 when, working in the laboratory of Ernest Rutherford in Manchester, he began devising a quantum theory of the atom. In Bohr’s picture, the electrons encircling the atomic nucleus (which Rutherford had discovered in 1911) are constrained to specific orbits with quantized energies. The electrons can hop between orbits in “quantum jumps” by emitting or absorbing photons with the corresponding energy.
Conflicting views Stalwart physicists Albert Einstein and Niels Bohr had opposing views on quantum fundamentals from early on, which turned into a lifelong scientific argument between the two. (Paul Ehrenfest/Wikimedia Commons)
Bohr had no theoretical justification for this ad hoc assumption, but he showed that, by accepting it, he could predict (more or less) the spectrum of the hydrogen atom. For this work Bohr was awarded the 1922 Nobel Prize for Physics, the same year that Einstein collected the prize for his work on light quanta and the photoelectric effect (he had been awarded it in 1921 but was unable to attend the ceremony).
After establishing an institute of theoretical physics (now the Niels Bohr Institute) in Copenhagen in 1917, Bohr’s mission was to find a true theory of the quantum: a mechanics to replace, at the atomic scale, the classical physics of Isaac Newton that worked at larger scales. It was clear that classical physics did not work at the scale of the atom, although Bohr’s correspondence principle asserted that quantum theory should give the same results as classical physics at a large enough scale.
Mathematical mind Dutch physicist Hendrik Kramers spent 10 years as Niels Bohr’s assistant in Copenhagen. (Wikimedia Commons)
Quantum theory was at the forefront of physics at the time, and so was the most exciting topic for any aspiring young physicist. Three groups stood out as the most desirable places to work for anyone seeking a fundamental mathematical theory to replace the makeshift and sometimes contradictory “old” quantum theory that Bohr had cobbled together: those of Arnold Sommerfeld in Munich, Max Born in Göttingen, and Bohr in Copenhagen.
Dutch physicist Hendrik Kramers had hoped to work on his doctorate with Born – but in 1916 the First World War ruled that out, and so he opted instead for Copenhagen, in politically neutral Denmark. There he became Bohr’s assistant for ten years: as was the case with several of Bohr’s students, Kramers did the maths (it was never Bohr’s forte) while Bohr supplied the ideas, philosophy and kudos. Kramers ended up working on an impressive range of problems, from chemical physics to pure mathematics.
Reckless and radical
One of the most vexing questions for Bohr and his Copenhagen circle in the early 1920s was how to think about electron orbits in atoms. Try as they might, they couldn’t find a way to make the orbits “fit” with experimental observations of atomic spectra.
Perhaps, in quantum systems like atoms, we have to abandon any attempt to construct a physical picture at all
Bohr and others, including Heisenberg, began to voice a possibility that seemed almost reckless: perhaps, in quantum systems like atoms, we have to abandon any attempt to construct a physical picture at all. Maybe we just can’t think of quantum particles as objects moving along trajectories in space and time.
This struck others, such as Einstein, as desperate, if not crazy. Surely the goal of science had always been to offer a picture of the world in terms of “things happening to objects in space”. What else could there be? How could we just give it all up?
But it was worse than that. For one thing, Bohr’s quantum jumps were supposed to happen instantaneously: an electron, say, jumping from one orbit to another in no time at all. In classical physics, everything happens continuously: a particle gets from here to there by moving smoothly across the intervening space, in some finite time. The discontinuities of quantum jumps seemed to some – like the Austrian physicist Erwin Schrödinger in Vienna – to border on the obscene.
Worse still was the fact that while the old quantum theory stipulated the energy of quantum jumps, there was nothing to dictate when they would happen – they simply did. In other words, there was no causal kick that instigated a quantum jump: the electron just seemed to make up its own mind about when to jump. As Heisenberg would later proclaim in his 1927 paper on the uncertainty principle (Zeitschrift für Physik 43 172), quantum theory “establishes the final failure of causality”.
Such notions were not the only source of friction between the Copenhagen team and Einstein. Bohr didn’t like light quanta. While they seemed to explain the photoelectric effect, Bohr was convinced that light had to be fundamentally wave-like, so that photons (to use the anachronistic term) were only a way of speaking, not real entities.
To add to the turmoil in 1924, the French physicist Louis de Broglie had, in his doctoral thesis for the Sorbonne, turned the quantum idea on its head by proposing that particles such as electrons might show wave-like behaviour. Einstein had at first considered this too wild, but soon came round to the idea.
Go where the waves take you
In 1924 these virtually heretical ideas were only beginning to surface, but they were creating such a sense of crisis that it seemed anything was possible. In the early 1970s, the science historian Paul Forman suggested that the feverish atmosphere in physics had been part of an even wider cultural current. By rejecting causality and materialism, the German quantum physicists, Forman said, were attempting to align their ideas with a rejection of mechanistic thinking while embracing the irrational – as was the fashion in the philosophical and intellectual circles of the beleaguered Weimar republic. The idea has been hotly debated by historians and philosophers of science – but it was surely in Copenhagen, not Munich or Göttingen, that the most radical attitudes to quantum theory were developing.
Particle pilot In 1923, US physicist John Clarke Slater moved to Copenhagen, and suggested the concept of a “virtual field” that spread throughout a quantum system. (Emilio Segrè Visual Archives General Collection/MIT News Office)
Then, just before Christmas in 1923, a new student arrived at Copenhagen. John Clarke Slater, who had a PhD in physics from Harvard, turned up at Bohr’s institute with a bold idea. “You know those difficulties about not knowing whether light is old-fashioned waves or Mr Einstein’s light particles”, he wrote to his family during a spell in Cambridge that November. “I had a really hopeful idea… I have both the waves and the particles, and the particles are sort of carried along by the waves, so that the particles go where the waves take them.” The waves were manifested in a “virtual field” of some kind that spread throughout the system, and they acted to “pilot” the particles.
Bohr was not a fan of Slater’s idea, not least because it retained the light particles that he wished to dispose of. But he liked Slater’s notion of a virtual field that could put one part of a quantum system in touch with others. Together with Slater and Kramers, Bohr prepared a paper in a remarkably short time (especially for him) outlining what became known as the Bohr-Kramers-Slater (BKS) theory. They sent it off to the Philosophical Magazine (where Bohr had published his seminal papers on the quantum atom) at the end of January 1924, and it was published in May (47 785). As was increasingly characteristic of Bohr’s style, it was free of any mathematics (beyond Einstein’s quantum relationship E=hν).
In the BKS picture, an excited atom about to emit light can “communicate continually” with the other atoms around it via the virtual field. The transition, with emission of a light quantum, is then not spontaneous but induced by the virtual field. This mechanism could solve the long-standing question of how an atom “knows” which frequency of light to emit in order to reach another energy level: the virtual field effectively puts the atom “in touch” with all the possible energy states of the system.
The problem was that this meant the emitting atom was in instant communication with its environment all around – which violated the law of causality. Well then, so much the worse for causality: BKS abandoned it. The trio’s theory also violated the conservation of energy and momentum – so they had to go too.
Causality and conservation, abandoned
But wait: hadn’t these conservation laws been proved? In 1923 the American physicist Arthur Compton, working at Washington University in St. Louis, had shown that when light is scattered by electrons, they exchange energy, and the frequency of the light decreases as it gives up energy to the electrons. The results of Compton’s experiments agreed perfectly with predictions made on the assumptions that light is a stream of quanta (photons) and that their collisions with electrons conserve energy and momentum.
Ah, said BKS, but that’s only true statistically. The quantities are conserved on average, but not in individual collisions. After all, such statistical outcomes were familiar to physicists: that was the basis of the second law of thermodynamics, which presented the inexorable increase in entropy as a statistical phenomenon that need not constrain processes involving single particles.
The radicalism of the BKS paper got a mixed reception. Einstein, perhaps predictably, was dismissive. “Abandonment of causality as a matter of principle should be permitted only in the most extreme emergency”, he wrote. Wolfgang Pauli, who had worked in Copenhagen in 1922–23, confessed to being “completely negative” about the idea. Born and Schrödinger were more favourable.
At the Physikalisch-Technische Reichsanstalt in Berlin, Walther Bothe proposed to his colleague Hans Geiger that they put this statistical claim to the test. Geiger agreed, and the duo devised a scheme for detecting both the scattered electron and the scattered photon in separate detectors. If causality and energy conservation were preserved, the detections should be simultaneous; any delay between them would indicate a violation. As Bothe would later recall: “The ‘question to Nature’ which the experiment was designed to answer could therefore be formulated as follows: is it exactly a scatter quantum and a recoil electron that are simultaneously emitted in the elementary process, or is there merely a statistical relationship between the two?” It was incredibly painstaking work to seek such coincident detections using the resources then available. But in April 1925 Geiger and Bothe reported simultaneity within a millisecond – close enough to make a strong case that Compton’s treatment, which assumed energy conservation, was correct. Compton himself, working with Alfred Simon using a cloud chamber, confirmed that energy and momentum were conserved for individual events (Phys. Rev. 26 289).
Revolutionary defeat… singularly important
Bothe was awarded the 1954 Nobel Prize for Physics for the work. He shared it with Born for his work on quantum theory, and Geiger would surely have been a third recipient, if he had not died in 1945. In his Nobel speech, Bothe definitively stated that “the strict validity of the law of the conservation of energy even in the elementary process had been demonstrated, and the ingenious way out of the wave-particle problem discussed by Bohr, Kramers, and Slater was shown to be a blind alley.”
Bohr was gracious in his defeat, writing to a colleague in April 1925 that “It seems… there is nothing else to do than to give our revolutionary efforts as honourable a funeral as possible.” Yet he was soon to have no need of that particular revolution, for just a few months later Heisenberg, who had returned to Göttingen after working with Bohr in Copenhagen for six months, came up with the first proper theory of quantum mechanics, later called matrix mechanics.
“In spite of its short lifetime, the BKS theory was singularly important,” says historian of science Helge Kragh, now emeritus professor at the Niels Bohr Institute. “Its radically new approach paved the way for a greater understanding, that methods and concepts of classical physics could not be carried over in a future quantum mechanics.”
The Bothe-Geiger experiment that [the paper] inspired was not just an important milestone in early particle physics. It was also a crucial factor in Heisenberg’s argument [about] the probabilistic character of his matrix mechanics
The BKS paper was thus in a sense merely a mistaken curtain-raiser for the main event. But the Bothe-Geiger experiment that it inspired was not just an important milestone in early particle physics. It was also a crucial factor in Heisenberg’s argument that the probabilistic character of his matrix mechanics (and also of Schrödinger’s 1926 version of quantum mechanics, called wave mechanics) couldn’t be explained away as a statistical expression of our ignorance about the details, as it is in classical statistical mechanics.
Radical approach Despite its swift defeat, the BKS proposal showed how classical concepts could not apply to a quantum reality. (Courtesy: Shutterstock/Vink Fan)
Rather, the probabilities that emerged from Heisenberg’s and Schrödinger’s theories applied to individual events: they were, Heisenberg said, fundamental to the way single particles behave. Schrödinger was never happy with that idea, but today it seems inescapable.
Over the next few years, Bohr and Heisenberg argued that the new quantum mechanics indeed smashed causality and shattered the conventional picture of reality as an objective world of objects moving in space–time with fixed properties. Assisted by Born, Wolfgang Pauli and others, they articulated the “Copenhagen interpretation”, which became the predominant vision of the quantum world for the rest of the century.
Failed connections
Slater wasn’t at all pleased with what became of the idea he took to Copenhagen. Bohr and Kramers had pressured him into accepting their take on it, “without the little lump carried along on the waves”, as he put it in mid-January. “I am willing to let them have their way”, he wrote at the time, but in retrospect he felt very unhappy about his time in Denmark. After the BKS theory was disproved, Bohr wrote to Slater saying “I have a bad conscience in persuading you to our views”.
Slater replied that there was no need for that. But in later life – after he had made a name for himself in solid-state physics – Slater admitted to a great deal of resentment. “I completely failed to make any connection with Bohr”, he said in a 1963 interview with the historian of science Thomas Kuhn. “I fought with them [Bohr and Kramers] so seriously that I’ve never had any respect for those people since. I had a horrible time in Copenhagen.” While most of Bohr’s colleagues and students expressed adulation, Slater’s was a rare dissenting voice.
But Slater might have reasonably felt more aggrieved at what became of his “pilot-wave” idea. Today, that interpretation of quantum theory is generally attributed to de Broglie – who intimated a similar notion in his 1924 thesis, before presenting the theory in more detail at the famous 1927 Solvay Conference – and to the American physicist David Bohm, who revitalized the idea in the 1950s. Initially dismissed on both occasions, the de Broglie-Bohm theory has gained advocates in recent years, not least because it has a classical hydrodynamic analogue, in which oil droplets are steered by waves on the surface of a vibrating oil bath.
Whether or not it is the right way to think about quantum mechanics, the pilot-wave theory touches on the deep philosophical problems of the field. Can we rescue an objective reality of concrete particles with properties described by hidden variables, as Einstein had advocated, from the fuzzy veil that Bohr and Heisenberg seemed to draw over the quantum world? Perhaps Slater would at least be gratified to know that Bohr has not yet had the last word.
In a ground-breaking theoretical study, two physicists have identified a new class of particle called the paraparticle. Their calculations suggest that paraparticles exhibit quantum properties that are fundamentally different from those of familiar bosons and fermions, such as photons and electrons respectively.
Using advanced mathematical techniques, Kaden Hazzard at Rice University in the US and his former graduate student Zhiyuan Wang, now at the Max Planck Institute of Quantum Optics in Germany, have meticulously analysed the mathematical properties of paraparticles and proposed a real physical system that could exhibit paraparticle behaviour.
“Our main finding is that it is possible for particles to have exchange statistics different from those of fermions or bosons, while still satisfying the important physical principles of locality and causality,” Hazzard explains.
Particle exchange
In quantum mechanics, the behaviour of particles (and quasiparticles) is probabilistic in nature and is described by mathematical entities known as wavefunctions. These govern the likelihood of finding a particle in a particular state, as defined by properties like position, velocity, and spin. The exchange statistics of a specific type of particle dictates how its wavefunction behaves when two identical particles swap places.
For bosons such as photons, the wavefunction remains unchanged when particles are exchanged. This means that many bosons can occupy the same quantum state, enabling phenomena like lasers and superfluidity. In contrast, when fermions such as electrons are exchanged, the sign of the wavefunction flips from positive to negative or vice versa. This antisymmetric property prevents fermions from occupying the same quantum state. This underpins the Pauli exclusion principle and results in the electronic structure of atoms and the nature of the periodic table.
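In symbols, for a two-particle wavefunction ψ(x1, x2), exchange gives ψ(x2, x1) = +ψ(x1, x2) for bosons and ψ(x2, x1) = −ψ(x1, x2) for fermions. Setting x1 = x2 in the fermionic case forces ψ = 0, which is the mathematical statement of the Pauli exclusion principle.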
Until now, physicists believed that these two types of particle statistics – bosonic and fermionic – were the only possibilities in 3D space. This is the result of fundamental principles like locality, which states that events occurring at one point in space cannot instantaneously influence events at a distant location.
Breaking boundaries
Hazzard and Wang’s research overturns the notion that 3D systems are limited to bosons and fermions and shows that new types of particle statistics, called parastatistics, can exist without violating locality.
The key insight in their theory lies in the concept of hidden internal characteristics. Beyond the familiar properties like position and spin, paraparticles require additional internal parameters that enable more complex wavefunction behaviour. This hidden information allows paraparticles to exhibit exchange statistics that go beyond the binary distinction of bosons and fermions.
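One way to picture the generalization – a schematic sketch rather than the authors’ full formalism – is to give each particle a hidden internal label, so the wavefunction carries extra indices, ψ_ab. Swapping two paraparticles then acts as ψ_ab → Σ_cd R_ab,cd ψ_cd, where R is a matrix that shuffles the hidden labels. Choosing R to be simply +1 or −1 recovers bosons and fermions, while richer, self-consistent choices of R yield genuinely new exchange statistics.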
Paraparticles exhibit phenomena that resemble – but are distinct from – fermionic and bosonic behaviours. For example, while fermions cannot occupy the same quantum state, up to two paraparticles could be allowed to coexist at the same point in space. This behaviour strikes a balance between the exclusivity of fermions and the clustering tendency of bosons.
Bringing paraparticles to life
While no elementary particles are known to exhibit paraparticle behaviour, the researchers believe that paraparticles might manifest as quasiparticles in engineered quantum systems or certain materials. A quasiparticle is a particle-like collective excitation of a system. A familiar example is the hole, which is created in a semiconductor when a valence-band electron is excited to the conduction band. The vacancy (or hole) left in the valence band behaves as a positively charged particle that can travel through the semiconductor lattice.
Experimental systems of ultracold atoms created by collaborators of the duo could be one place to look for the exotic particles. “We are working with them to see if we can detect paraparticles there,” explains Wang.
In ultracold atom experiments, lasers and magnetic fields are used to trap and manipulate atoms at temperatures near absolute zero. Under these conditions, atoms can mimic the behaviour of more exotic particles. The team hopes that similar setups could be used to observe paraparticle-like behaviour in higher-dimensional systems, such as 3D space. However, further theoretical advances are needed before such experiments can be designed.
Far-reaching implications
The discovery of paraparticles could have far-reaching implications for physics and technology. Fermionic and bosonic statistics have already shaped our understanding of phenomena ranging from the stability of neutron stars to the behaviour of superconductors. Paraparticles could similarly unlock new insights into the quantum world.
“Fermionic statistics underlie why some systems are metals and others are insulators, as well as the structure of the periodic table,” Hazzard explains. “Bose-Einstein condensation [of bosons] is responsible for phenomena such as superfluidity. We can expect a similar variety of phenomena from paraparticles, and it will be exciting to see what these are.”
As research into paraparticles continues, it could open the door to new quantum technologies, novel materials, and deeper insights into the fundamental workings of the universe. This theoretical breakthrough marks a bold step forward, pushing the boundaries of what we thought possible in quantum mechanics.
If you’re a postdoc who wants to nail down that permanent faculty position, it’s wise to publish a highly cited paper after your PhD. That’s the conclusion of a study by an international team of researchers, which finds that publication rate and performance during the postdoc period are key to academic retention and early-career success. Their analysis also reveals that more than four in 10 postdocs drop out of academia.
A postdoc is usually a temporary appointment that is seen as preparation for an academic career. Many researchers, however, end up doing several postdocs in a row as they hunt for a permanent faculty job. “There are many more postdocs than there are faculty positions, so it is a kind of systemic bottleneck,” says Petter Holme, a computer scientist at Aalto University in Finland, who led the study.
Previous research into academic career success has tended to overlook the role of a postdoc, focusing instead on, say, the impact of where researchers did their PhD. To tease out the effect of a postdoc, Holme and colleagues combined information about academics’ career stages from LinkedIn with their publication history obtained from Microsoft Academic Graph. The resulting global dataset covered 45,572 careers spanning 25 years across all academic disciplines.
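As a rough illustration of that kind of record linkage – with entirely hypothetical column names and toy values, not the study’s actual data or code – the merging step might look something like this in Python:

```python
import pandas as pd

# Toy career-stage records (stand-in for LinkedIn-derived data)
careers = pd.DataFrame({
    "researcher_id": [1, 2],
    "career_stage": ["postdoc", "faculty"],
    "stage_start_year": [2015, 2018],
})

# Toy publication records (stand-in for Microsoft Academic Graph data)
papers = pd.DataFrame({
    "researcher_id": [1, 1, 2],
    "year": [2016, 2021, 2019],
    "citations": [3, 150, 12],
})

# Link each paper to its author's career record and flag highly cited work
merged = papers.merge(careers, on="researcher_id", how="left")
merged["highly_cited"] = merged["citations"] >= 100  # illustrative threshold

print(merged.groupby("researcher_id")["highly_cited"].any())
```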
Overall, they found, 41% of postdocs left academia. But researchers who publish a highly cited paper as a postdoc are much more likely to pursue a faculty career – whether or not they published a highly cited paper during their PhD. Publication rate is also vital: researchers who publish less as postdocs than they did during their PhD are more likely to drop out of academia. Conversely, as productivity increased, so did the likelihood of a postdoc gaining a faculty position.
Expanding horizons
Holme says their results suggest that a researcher only has a few years “to get on the positive feedback loop, where one success leads to another”. In fact, the team found that a “moderate” change in research topic when moving from PhD to postdoc could improve future success. “It is a good thing to change your research focus, but not too much,” says Holme, because it widens a researcher’s perspective without their having to learn an entirely new research topic from scratch.
Likewise, shifting perspective by moving abroad can also benefit postdocs. The analysis shows that a researcher moving abroad for a postdoc boosts their citations, but a move to a different institution in the same country has a negligible impact.
Replacing conventional building materials with alternatives that sequester carbon dioxide could allow the world to lock away up to half the CO2 generated by humans each year – about 16 billion tonnes. This is the finding of researchers at the University of California Davis and Stanford University, both in the US, who studied the sequestration potential of materials such as carbonate-based aggregates and biomass fibre in brick.
Despite efforts to reduce greenhouse gas emissions by decarbonizing industry and switching to renewable sources of energy, it is likely that humans will continue to produce significant amounts of CO2 beyond the target “net zero” date of 2050. Carbon storage and sequestration – either at source or directly from the atmosphere – are therefore worth exploring as an additional route towards this goal. Researchers have proposed several possible ways of doing this, including injecting carbon underground or deep under the ocean. However, all these scenarios are challenging to implement practically and pose their own environmental risks.
Modifying common building materials
In the present work, a team of civil engineers and earth systems scientists led by Elisabeth van Roijen (then a PhD student at UC Davis) calculated how much carbon could be stored in modified versions of several common building materials. These include concrete (cement) and asphalt containing carbonate-based aggregates; bio-based plastics; wood; biomass-fibre bricks (from waste biomass); and biochar filler in cement.
The researchers obtained the “16 billion tonnes of CO2” figure by assuming that all aggregates currently employed in concrete would be replaced with carbonate-based versions. They also supplemented 15% of cement with biochar and the remainder with carbonatable cements; increased the amount of wood used in all new construction by 20%; and supplemented 15% of bricks with biomass and the remainder with carbonatable calcium hydroxide. A final element in their calculation was to replace all plastics used in construction today with bio-based plastics and all bitumen with bio-oil in asphalt.
“We calculated the carbon storage potential of each material based on the mass ratio of carbon in each material,” explains van Roijen. “These values were then scaled up based on 2016 consumption values for each material.”
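The arithmetic behind that scaling is straightforward. As a minimal sketch – using placeholder carbon fractions and consumption figures, not the values used by van Roijen and colleagues – it might look like this:

```python
# Illustrative sketch of the scaling logic described above.
CO2_PER_C = 44.0 / 12.0  # tonnes of CO2 locked away per tonne of carbon stored

materials = {
    # material: (carbon mass fraction, annual consumption in billion tonnes)
    "carbonate aggregate": (0.08, 20.0),
    "biomass-fibre brick": (0.30, 1.0),
    "bio-based plastic": (0.60, 0.3),
}

total_co2 = 0.0
for name, (carbon_fraction, consumption_gt) in materials.items():
    stored = carbon_fraction * consumption_gt * CO2_PER_C  # billion tonnes CO2
    total_co2 += stored
    print(f"{name}: ~{stored:.1f} Gt CO2 per year")

print(f"total: ~{total_co2:.1f} Gt CO2 per year")
```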
“The sheer magnitude of carbon storage is pretty impressive”
While the production of some replacement materials would need to increase to meet the resulting demand, van Roijen and colleagues found that resources readily available today – for example, mineral-rich waste streams – would already let us replace 10% of conventional aggregates with carbonate-based ones. “These alone could store 1 billion tonnes of CO2,” she says. “The sheer magnitude of carbon storage is pretty impressive, especially when you put it in context of the level of carbon dioxide removal needed to stay below the 1.5 and 2 °C targets set by The Intergovernmental Panel on Climate Change (IPCC).”
Indeed, even if the world doesn’t implement these technologies until 2075, we could still store enough carbon between 2075 and 2100 to stay below these targets, she tells Physics World. “This is assuming, of course, that all other decarbonization efforts outlined in the IPCC reports are also implemented to achieve net-zero emissions,” she says.
Building materials are a good option for carbon storage
The motivation for the study, she explains, came from the urgent need – as expressed by the IPCC – to not only reduce new carbon emissions through rapid and significant decarbonization, but to also remove large amounts of CO2 already present in the atmosphere. “Rather than burying it in geological, terrestrial or ocean reservoirs, we wanted to look into the possibility of leveraging existing technology – namely conventional building materials – as a way to store CO2. Building materials are a good option for carbon storage given the massive quantity (30 billion tonnes) produced each year, not to mention their durability.”
Van Roijen, who is now a postdoctoral researcher at the US Department of Energy’s National Renewable Energy Laboratory, hopes that this work, which is detailed in Science, will go beyond the reach of the research lab and attract the attention of policymakers and industrialists. While some of the technologies outlined in this study are new and require further research, others, such as bio-based plastics, are well established and simply need some economic and political support, she says. “That said, conventional building materials such as concrete and plastics are pretty cheap, so there will need to be some incentive for industries to make the switch over to these low-carbon materials.”
Braille is a tactile writing system that helps people who are blind or partially sighted acquire information by touching patterns of tiny raised dots. Braille uses combinations of six dots (two columns of three) to represent letters, numbers and punctuation. But learning to read braille can be challenging, particularly for those who lose their sight later in life, prompting researchers to create automated braille recognition technologies.
One approach involves simply imaging the dots and using algorithms to extract the required information. This visual method, however, struggles with the small size of braille characters and can be impacted by differing light levels. Another option is tactile sensing; but existing tactile sensors aren’t particularly sensitive, with small pressure variations leading to incorrect readings.
To tackle these limitations, researchers from Beijing Normal University and Shenyang Aerospace University in China have employed an optical fibre ring resonator (FRR) to create a tactile braille recognition system that accurately reads braille in real time.
“Current braille readers often struggle with accuracy and speed, especially when it comes to dynamic reading, where you move your finger across braille dots in real time,” says team leader Zhuo Wang. “I wanted to create something that could read braille more reliably, handle slight variations in pressure and do it quickly. Plus, I saw an opportunity to apply cutting-edge technology – like flexible optical fibres and machine learning – to solve this challenge in a novel way.”
Flexible fibre sensor
At the core of the braille sensor is the optical FRR – a resonant cavity made from a loop of fibre containing circulating laser light. Wang and colleagues created the sensing region by embedding an optical fibre in flexible polymer and connecting it into the FRR ring. Three small polymer protrusions on top of the sensor act as probes to transfer the applied pressure to the optical fibre. Spaced 2.5 mm apart to align with the dot spacing, each protrusion responds to the pressure from one of the three braille dots (or absence of a dot) in a vertical column.
Sensor fabrication The optical FRR is made by connecting ports of a 2×2 fibre coupler to form a loop. The sensing region is then connected into the loop. (Courtesy: Optics Express 10.1364/OE.546873)
As the sensor is scanned over the braille surface, the pressure exerted by the raised dots slightly changes the length and refractive index of the fibre, causing tiny shifts in the frequency of the light travelling through the FRR. The device employs a technique called Pound-Drever-Hall (PDH) demodulation to “lock” onto these shifts, amplify them and convert them into readable data.
“The PDH demodulation curve has an extremely steep linear slope, which means that even a very tiny frequency shift translates into a significant, measurable voltage change,” Wang explains. “As a result, the system can detect even the smallest variations in pressure with remarkable precision. The steep slope significantly enhances the system’s sensitivity and resolution, allowing it to pick up subtle differences in braille dots that might be too small for other sensors to detect.”
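In essence – and as a rough sketch of the principle rather than the team’s exact calibration – the demodulated output near resonance is approximately linear in the frequency shift: V ≈ D δν, where D is the slope of the PDH error signal in volts per hertz. The steeper the slope, the larger the voltage produced by a given pressure-induced shift δν, and hence the finer the pressure differences the sensor can resolve.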
The eight possible configurations of three dots generate eight distinct pressure signals, with each braille character defined by two pressure outputs (one per column). Each protrusion has a slightly different hardness level, enabling the sensor to differentiate pressures from each dot. Rather than measuring each dot individually, the sensor reads the overall pressure signal and instantly determines the combination of dots and the character they correspond to.
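To see how two column readings pin down a character, consider the following minimal sketch – the function names and the three-entry letter table are illustrative only, not the team’s software:

```python
# Illustrative sketch: combining two classified column readings (each one of
# the eight possible three-dot patterns) into a six-dot braille cell.

def column_bits(code):
    """Convert a column class (0-7) into its three dot states, top to bottom."""
    return ((code >> 2) & 1, (code >> 1) & 1, code & 1)

# Braille numbers the dots 1-3 down the left column and 4-6 down the right;
# this tiny table covers just three letters for illustration.
BRAILLE = {
    (1, 0, 0, 0, 0, 0): "a",   # dot 1
    (1, 1, 0, 0, 0, 0): "b",   # dots 1 and 2
    (1, 0, 0, 1, 0, 0): "c",   # dots 1 and 4
}

def decode(left_code, right_code):
    dots = column_bits(left_code) + column_bits(right_code)
    return BRAILLE.get(dots, "?")

print(decode(0b100, 0b000))  # left column: dot 1 only -> "a"
```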
The researchers note that, in practice, the contact force may vary slightly during the scanning process, resulting in the same dot patterns exhibiting slightly different pressure signals. To combat this, they used neural networks trained on large amounts of experimental data to correctly classify braille patterns, even with small pressure variations.
“This design makes the sensor incredibly efficient,” Wang explains. “It doesn’t just feel the braille, it understands it in real time. As the sensor slides over a braille board, it quickly decodes the patterns and translates them into readable information. This allows the system to identify letters, numbers, punctuation, and even words or poems with remarkable accuracy.”
Stable and accurate
Measurements on the braille sensor revealed that it responds to pressures of up to 3 N, as typically exerted by a finger when touching braille, with an average response time of below 0.1 s, suitable for fast dynamic braille reading. The sensor also exhibited excellent stability under temperature or power fluctuations.
To assess its ability to read braille dots, the team used the sensor to read eight different arrangements of three dots. Using a multilayer perceptron (MLP) neural network, the system effectively distinguished the eight different tactile pressures with a classification accuracy of 98.57%.
Next, the researchers trained a long short-term memory (LSTM) neural network to classify signals generated by five English words. Here, the system demonstrated a classification accuracy of 100%, implying that slight errors in classifying signals in each column will not affect the overall understanding of the braille.
Finally, they used the MLP-LSTM model to read short sentences, either sliding the sensor manually or scanning it electronically to maintain a consistent contact force. In both cases, the sensor accurately recognised the phrases.
The team concludes that the sensor can advance intelligent braille recognition, with further potential in smart medical care and intelligent robotics. The next phase of development will focus on making the sensor more durable, improving the machine learning models and making it scalable.
“Right now, the sensor works well in controlled environments; the next step is to test its use by different people with varying reading styles, or under complex application conditions,” Wang tells Physics World. “We’re also working on making the sensor more affordable so it can be integrated into devices like mobile braille readers or wearables.”
Together with Los Alamos National Laboratory theoretical physicist Ian Tregillis, who is also a science-fiction author of several books, they have derived a mathematical model of the so-called Wild Card virus.
The Wild Cards universe is a series of novels created by a consortium of writers including Martin and Tregillis.
Set largely during an alternate history of the US following the Second World War, the series follows events after an extraterrestrial virus, known as the Wild Card virus, has spread worldwide. It mutates human DNA causing profound changes in human physiology and society at large.
The virus follows a fixed statistical distribution of outcomes in that 90% of those infected die, 9% become physically mutated (referred to as “jokers”) and 1% gain superhuman abilities (known as “aces”). Such capabilities include the ability to fly as well as being able to move between dimensions. The stories in the series then follow the individuals that have been impacted by the virus.
Tregillis and Martin have now derived a formula for the viral behaviour of the Wild Card virus. “Like any physicist, I started with back-of-the-envelope estimates, but then I went off the deep end,” notes Tregillis. “Being a theoretician, I couldn’t help but wonder if a simple underlying model might tidy up the canon.”
The model takes into consideration the severity of the changes (for the 10% who don’t instantly die) and the mix of joker/ace traits. After all, those infected can also become crypto-jokers or crypto-aces – undetected cases where individuals have subtle changes or powers – as well as joker-aces, in which a human develops both mutations and superhuman abilities.
The result is a dynamical system in which a carrier’s state vector constantly evolves through the model space – until their “card” turns. At that point the state vector becomes fixed and its permanent location determines the fate of the carrier. “The time-averaged behavior of this system generates the statistical distribution of outcomes,” adds Tregillis.
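As a toy numerical illustration of that picture – emphatically not Tregillis and Martin’s actual model – one can let a carrier’s state wander through a one-dimensional “model space” and freeze it at a random moment, with the outcome regions sized to reproduce the canonical 90/9/1 split:

```python
import random

# Toy illustration only: a state wanders over the unit interval and is frozen
# when the "card turns"; the region it lands in decides the carrier's fate.

def draw_outcome(rng):
    x = rng.random()
    for _ in range(rng.randint(1, 50)):   # the card turns at a random time
        x = (x + rng.random()) % 1.0      # wandering keeps x uniformly spread
    if x < 0.90:
        return "fatal"
    elif x < 0.99:
        return "joker"
    return "ace"

rng = random.Random(42)
outcomes = [draw_outcome(rng) for _ in range(100_000)]
for label in ("fatal", "joker", "ace"):
    print(label, round(outcomes.count(label) / len(outcomes), 4))
```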
The purpose of the paper, and the model, is also to provide an exercise in demonstrating how “whimsical” scenarios can be used to explore concepts in physics and mathematics.
“The fictional virus is really just an excuse to justify the world of Wild Cards, the characters who inhabit it, and the plot lines that spin out from their actions,” says Tregillis.
The exact origins of cosmic phenomena known as fast radio bursts (FRBs) are not fully understood, but scientists at the Massachusetts Institute of Technology (MIT) in the US have identified a fresh clue: at least one of these puzzling cosmic discharges got its start very close to the object that emitted it. This result, which is based on measurements of a fast radio burst called FRB 20221022A, puts to rest a long-standing debate about whether FRBs can escape their emitters’ immediate surroundings. The conclusion: they can.
“Competing theories argued that FRBs might instead be generated much farther away in shock waves that propagate far from the central emitting object,” explains astronomer Kenzie Nimmo of MIT’s Kavli Institute for Astrophysics and Space Research. “Our findings show that, at least for this FRB, the emission can escape the intense plasma near a compact object and still be detected on Earth.”
As their name implies, FRBs are brief, intense bursts of radio waves. The first was detected in 2007, and since then astronomers have spotted thousands of others, including some within our own galaxy. They are believed to originate from cataclysmic processes involving compact celestial objects such as neutron stars, and they typically last a few milliseconds. However, astronomers have recently found evidence for bursts a thousand times shorter, further complicating the question of where they come from.
Nimmo and colleagues say they have now conclusively demonstrated that FRB 20221022A, which was detected by the Canadian Hydrogen Intensity Mapping Experiment (CHIME) in 2022, comes from a region only 10 000 km in size. This, they claim, means it must have originated in the highly magnetized region that surrounds a star: the magnetosphere.
“Fairly intuitive” concept
The researchers obtained their result by measuring the FRB’s scintillation, which Nimmo explains is conceptually similar to the twinkling of stars in the night sky. The reason stars twinkle is that because they are so far away, they appear to us as point sources. This means that their apparent brightness is more affected by the Earth’s atmosphere than is the case for planets and other objects that are closer to us and appear larger.
“We applied this same principle to FRBs using plasma in their host galaxy as the ‘scintillation screen’, analogous to Earth’s atmosphere,” Nimmo tells Physics World. “If the plasma causing the scintillation is close to the FRB source, we can use this to infer the apparent size of the FRB emission region.”
According to Nimmo, different models of FRB origins predict very different sizes for this region. “Emissions originating within the magnetized environments of compact objects (for example, magnetospheres) would produce a much smaller apparent size compared to emission generated in distant shocks propagating far from the central object,” she explains. “By constraining the emission region size through scintillation, we can determine which physical model is more likely to explain the observed FRB.”
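The geometry behind this is simple: a source of physical size L at a distance D subtends an angle of roughly θ ≈ L/D, and only sources that appear sufficiently point-like twinkle strongly through a given plasma screen. A measured scintillation pattern can therefore be converted into an upper limit on the apparent size of the emitting region.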
Challenge to existing models
The idea for the new study, Nimmo says, stemmed from a conversation with another astronomer, Pawan Kumar of the University of Texas at Austin, early last year. “He shared a theoretical result showing how scintillation could be used as a ‘probe’ to constrain the size of the FRB emission region, and, by extension, the FRB emission mechanism,” Nimmo says. “This sparked our interest and we began exploring the FRBs discovered by CHIME to search for observational evidence for this phenomenon.”
The researchers say that their study, which is detailed in Nature, shows that at least some FRBs originate from magnetospheric processes near compact objects such as neutron stars. This finding is a challenge for models of conditions in these extreme environments, they say, because if FRB signals can escape the dense plasma expected to exist near such objects, the plasma may be less opaque than previously assumed. Alternatively, unknown factors may be influencing FRB propagation through these regions.
A diagnostic tool
One advantage of studying FRB 20221022A is that it is relatively conventional in terms of its brightness and the duration of its signal (around 2 milliseconds). It does have one special property, however, as discovered by Nimmo’s colleagues at McGill University in Canada: its light is highly polarized. What is more, the pattern of its polarization implies that its emitter must be rotating in a way that is reminiscent of pulsars, which are highly magnetized, rotating neutron stars. This result is reported in a separate paper in Nature.
In Nimmo’s view, the MIT team’s study of this (mostly) conventional FRB establishes scintillation as a “powerful diagnostic tool” for probing FRB emission mechanisms. “By applying this method to a larger sample of FRBs, which we now plan to investigate, future studies could refine our understanding of their underlying physical processes and the diverse environments they occupy.”
In June 1925 a relatively unknown physics postdoc by the name of Werner Heisenberg developed the basic mathematical framework that would be the basis for the first quantum revolution. Heisenberg, who would later win the Nobel Prize for Physics, famously came up with quantum mechanics on a two-week vacation on the tiny island of Helgoland off the coast of Germany, where he had gone to cure a bad bout of hay fever.
Now, a century later, we are on the cusp of a second quantum revolution, with quantum science and technologies growing rapidly across the globe. According to the State of Quantum 2024 report, a total of 33 countries around the world currently have government initiatives in quantum technology, of which more than 20 have national strategies with large-scale funding. The report estimates that up to $50bn in public cash has already been committed.
It’s a fitting tribute, then, that the United Nations (UN) has chosen 2025 to be the International Year of Quantum Science and Technology (IYQ). They hope that the year will raise global awareness of the impact that quantum physics and its applications have already had on our world. The UN also aims to highlight to the global public the myriad potential future applications of quantum technologies and how they could help tackle universal issues – from climate and clean energy to health and infrastructure – while also addressing the UN’s sustainable development goals.
The Institute of Physics (IOP), which publishes Physics World, is one of the IYQ’s six “founding partners” alongside the German (DPG) and American physical societies (APS), SPIE, Optica and the Chinese Optical Society. “The UNESCO International Year of Quantum is a wonderful opportunity to spread the word about quantum research and technology and the transformational opportunities it is opening up,” says Tom Grinyer, chief executive of the IOP. “The Institute of Physics is co-ordinating the UK and Irish elements of the year, which mark the 100th anniversary of the first formulation of quantum mechanics, and we are keen to celebrate the milestone, making sure that as many people as possible get the opportunity to find out more about this fascinating area of science and technology,” he adds.
“IYQ provides the opportunity for societies and organizations around the world to come together in marking both the 100-year history of the field, as well as the longer-term real-world impact that quantum science is certain to have for decades to come,” says Tim Smith, head of portfolio development at IOP Publishing. “Quantum science and technology represents one of the most exciting and rapidly developing areas of science today, encompassing the global physical-sciences community in a way that connects scientific wonder with fundamental research, technological innovation, industry, and funding programmes worldwide.”
Taking shape
The official opening ceremony for IYQ takes place on 4–5 February at the UNESCO headquarters in Paris, France, although several countries, including Germany and India, held their own launches in advance of the main event. Working together, the IOP and IOP Publishing have planned a wide array of quantum resources, talks, conferences, festivals and public events as part of the UK’s celebrations for IYQ.
In late February, meanwhile, the Royal Society – the world’s oldest continuously active learned society – will host a two-day quantum conference. Dubbed “Quantum Information”, it will bring together scientists, industry leaders and public-sector stakeholders to discuss the current challenges involved in quantum computing, networks and sensing systems.
The IOP will use the focus this year brings to continue making the case for investment in research and development, and for support for physics skills, both of which will be crucial if we are to fully unlock the economic and social potential of the quantum sector.
With the quantum marketplace booming, it’s no surprise that employers are hunting for skilled physicists to join the workforce. Indeed, there is a significant shortage of skilled quantum professionals for the many roles across industry and academia. And with quantum research advancing everything from software and machine learning to materials science and drug discovery, your skills will be transferable across the board.
If you plan to join the quantum workforce, then choosing the right PhD programme, having the right skills for a specific role and managing risk and reward in the emerging quantum industry are all crucial. There are a number of careers events on the IYQ calendar where you can learn more about the many career prospects for physicists in the sector. In April, for example, the University of Bristol’s Quantum Engineering Centre for Doctoral Training is hosting a Careers in Quantum event, while the Economist magazine is hosting its annual Commercialising Quantum conference in May.
There will also be a special quantum careers panel discussion, featuring top speakers from the UK and the US, as part of our newly launched Physics World Live panel discussions in April. This year’s Physics World Careers 2025 guide has a special quantum focus, and there’ll also be a bumper, quantum-themed issue of the Physics World Briefing in June. The Physics World quantum channel will be regularly updated throughout the year so you don’t miss a thing.
Read all about it
IOP Publishing’s journals will include specially curated content, starting with a series of Perspectives articles (personal viewpoints from leading quantum scientists) in Quantum Science and Technology. The journal will also be publishing roadmaps in quantum computing, sensing and communication, as well as focus issues on topics such as quantum machine learning, technologies for quantum gravity and thermodynamics in quantum coherent platforms.
“Going right to the core of IOP Publishing’s own historic coverage, we’re excited to be celebrating the IYQ through a year-long programme of articles in Physics World and across our journals that will hopefully show a wide audience just why everyone should care about quantum science and the people behind it,” says Smith.
In this episode of Physics World Stories, we celebrate the 100th anniversary of Werner Heisenberg’s trip to the North Sea island of Helgoland, where he developed the first formulation of quantum theory. Listen to the podcast as we delve into the latest advances in quantum science and technology with three researchers who will be attending a six-day workshop on Helgoland in June 2025.
Featuring in the episode are Nathalie De Leon of Princeton University, Ana Maria Rey from the University of Colorado Boulder, and Jack Harris from Yale University, a member of the programme committee. These experts share their insights on the current state of quantum science and technology, discussing the latest developments in quantum sensing, quantum information and quantum computing.
They also reflect on the significance of attending a conference at a location that is so deeply ingrained in the story of quantum mechanics. Talks at the event will span the science and the history of quantum theory, as well as the nature of scientific revolutions.