Peer review is a cornerstone of academic publishing. It is how we ensure that published science is valid. Peer review, by which researchers judge the quality of papers submitted to journals, stops pseudoscience from being peddled as equivalent to rigorous research. At the same time, the peer-review system is under considerable strain as the number of journal articles published each year increases, jumping from 1.9 million in 2016 to 2.8 million in 2022, according to Scopus and Web of Science.
All these articles require experienced peer reviewers, with papers typically taking months to go through peer review. The delay cannot be blamed solely on the time taken to pass manuscripts and reviews back and forth between editors and reviewers; it is largely a result of high workloads and, fundamentally, of how busy everyone is. Because peer reviewers need to be experts in their field, the pool of potential reviewers is inherently limited. A bottleneck is emerging as the number of papers grows more quickly than the number of researchers in academia.
Scientific publishers have long been central to managing the process of peer review. For anyone outside academia, the concept of peer review may seem illogical given that researchers spend their time on it without much acknowledgement. While initiatives are in place to change this, such as outstanding-reviewer awards and the Web of Science recording reviewer data, there is no guarantee that such recognition will count when researchers apply for permanent positions or promotion.
The impact of open access
Why, then, do we agree to review? As an active researcher myself in quantum physics, I peer-reviewed more than 40 papers last year and I’ve always viewed it as a duty. It’s a necessary time-sink to make our academic system function, to ensure that published research is valid and to challenge questionable claims. However, like anything people do out of a sense of duty, inevitably there are those who will seek to exploit it for profit.
Many journals today are open access, in which fees, known as article-processing charges, are levied to make the published work freely available online. It makes sense that costs need to be covered – staff working at publishing companies need paying; articles need editing and typesetting; servers need to be maintained and web-hosting fees have to be paid. Recently, publishers have invested heavily in digital technology and developed new ways to disseminate research to a wider audience.
Open access, however, has encouraged some publishers to boost revenues by simply publishing as many papers as possible. At the same time, there has been an increase in retractions, especially of fabricated or manipulated manuscripts sold by “paper mills”. The rise of retractions isn’t directly linked to the emergence of open access, but it’s not a good sign, especially when the academic publishing industry reports profit margins of roughly 40% – higher than many other industries. Elsevier, for instance, publishes nearly 3000 journals and in 2023 its parent company, Relx, recorded a profit of £1.79bn. This is all money that was either paid in open-access fees or by libraries (or private users) for journal subscriptions but ends up going to shareholders rather than science.
It’s important to add that not all academic publishers are for-profit. Some, like the American Physical Society (APS), IOP Publishing, Optica, AIP Publishing and the American Association for the Advancement of Science – as well as university presses – are wings of academic societies and universities. Any profit they make is reinvested into research, education or the academic community. Indeed, IOP Publishing, AIP Publishing and the APS have formed a new “purpose-led publishing” coalition, in which the three publishers confirm that they will continue to reinvest the funds generated from publishing back into research and “never” have shareholders that result in putting “profit above purpose”.
But many of the largest publishers – the likes of Springer Nature, Elsevier, Taylor and Francis, MDPI and Wiley – are for-profit companies and are making massive sums for their shareholders. Should we just accept that this is how the system is? If not, what can we do about it and what impact can we as individuals have on a multi-billion-dollar industry? I have decided that I will no longer review for, nor submit my articles (when corresponding author) to, any for-profit publishers.
I’m lucky in my field that I have many good alternatives such as the arXiv overlay journal Quantum, IOP Publishing’s Quantum Science and Technology, APS’s PRX Quantum and Optica Quantum. If your field doesn’t, then why not push for them to be created? We may not be able to dismantle the entire for-profit publishing industry, but we can stop contributing to it (especially those who have a permanent job in academia and are not as tied down by the need to publish in high-impact-factor journals). Such actions may seem small, but together they can have an effect and help make academia the environment we want to be contributing to. It may sound radical to take change into your own hands, but it’s worth a try. You never know, but it could help more money make its way back into science.
Visual assistance system The wearable system uses intuitive multimodal feedback to assist visually impaired people with daily life tasks. (Courtesy: J Tang et al. Nature Machine Intelligence 10.1038/s42256-025-01018-6, 2025, Springer Nature)
Researchers from four universities in Shanghai, China, are developing a practical visual assistance system to help blind and partially sighted people navigate. The prototype system combines lightweight camera headgear, rapid-response AI-facilitated software and artificial “skins” worn on the wrist and finger that provide physiological sensing. Functionality testing suggests that the integration of visual, audio and haptic senses can create a wearable navigation system that overcomes the adoptability and usability concerns of current designs.
Worldwide, 43 million people are blind, according to 2021 estimates by the International Agency for the Prevention of Blindness. Millions more are so severely visually impaired that they require the use of a cane to navigate.
Visual assistance systems offer huge potential as navigation tools, but current designs have many drawbacks and challenges for potential users. These include limited functionality with respect to the size and weight of headgear, battery life and charging issues, slow real-time processing speeds, audio command overload, high system latency that can create safety concerns, and extensive and sometimes complex learning requirements.
Innovations in miniaturized computer hardware, battery charge longevity, AI-trained software to decrease latency in auditory commands, and the addition of lightweight wearable sensory augmentation material providing near-real-time haptic feedback are expected to make visual navigation assistance viable.
The team’s prototype visual assistance system, described in Nature Machine Intelligence, incorporates an RGB-D (red, green, blue, depth) camera mounted on a 3D-printed glasses frame, ultrathin artificial skins, a commercial lithium-ion battery, a wireless bone-conducting earphone and a virtual reality training platform interfaced via triboelectric smart insoles. The camera is connected to a microcontroller via USB, enabling all computations to be performed locally without the need for a remote server.
When a user sets a target using a voice command, AI algorithms process the RGB-D data to estimate the target’s orientation and determine an obstacle-free direction in real time. As the user begins to walk to the target, bone conduction earphones deliver spatialized cues to guide them, and the system updates the 3D scene in real time.
The system’s real-time visual recognition incorporates changes in distance and perspective, and can compensate for low ambient light and motion blur. To provide robust obstacle avoidance, it combines a global threshold method with a ground interval approach to accurately detect overhead hanging, ground-level and sunken obstacles, as well as sloping or irregular ground surfaces.
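The paper does not spell out the implementation beyond this description, but the general idea – flagging depth-image pixels that are both within range and inconsistent with a simple flat-ground model – can be sketched as below. This is a minimal illustration only; the function names, camera geometry and thresholds are hypothetical and not taken from the study.

```python
# Minimal sketch (not the authors' code) of obstacle detection from a depth
# image, combining a global range threshold with a crude "ground interval"
# test. Camera geometry and all numbers below are illustrative assumptions.
import numpy as np

def find_obstacles(depth_m, max_range=3.0, cam_height=1.5,
                   focal_px=500.0, ground_tol=0.15):
    """Return a boolean mask of pixels judged to be obstacles."""
    rows, cols = depth_m.shape
    # Global threshold: anything closer than max_range is a candidate obstacle.
    candidate = (depth_m > 0) & (depth_m < max_range)

    # Flat-ground prediction for a level, forward-facing pinhole camera:
    # below the image centre, flat ground at cam_height below the camera
    # would appear at depth ~ cam_height * focal_px / (row - centre_row).
    v = np.arange(rows, dtype=float).reshape(-1, 1)
    centre_row = rows / 2.0
    with np.errstate(divide="ignore"):
        ground_depth = np.where(v > centre_row,
                                cam_height * focal_px / (v - centre_row),
                                np.inf)
    ground_depth = np.broadcast_to(ground_depth, (rows, cols))

    # Ground interval: pixels whose depth agrees with the flat-ground
    # prediction (within a tolerance) are treated as walkable floor.
    is_ground = np.abs(depth_m - ground_depth) < ground_tol * ground_depth
    return candidate & ~is_ground

# Tiny synthetic example: distant background, a floor, and a box 1.2 m away.
depth = np.full((480, 640), 10.0)
rows_floor = np.arange(300, 480, dtype=float).reshape(-1, 1)
depth[300:480, :] = 1.5 * 500.0 / (rows_floor - 240.0)
depth[200:300, 300:400] = 1.2
print("obstacle pixels:", int(find_obstacles(depth).sum()))
```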
First author Jian Tang of Shanghai Jiao Tong University and colleagues tested three audio feedback approaches: spatialized cues, 3D sounds and verbal instructions. They determined that spatialized cues were the quickest to convey and understand, and provided precise perception of direction.
Real-world testing A visually impaired person navigates through a cluttered conference room. (Courtesy: Tang et al. Nature Machine Intelligence)
To complement the audio feedback, the researchers developed stretchable artificial skin – an integrated sensory-motor device that provides near-distance alerting. The core component is a compact time-of-flight sensor that vibrates to stimulate the skin when the distance to an obstacle or object is smaller than a predefined threshold. The actuator is designed as a slim, lightweight polyethylene terephthalate cantilever. A gap between the driving circuit and the skin promotes air circulation to improve skin comfort, breathability and long-term wearability, as well as facilitating actuator vibration.
Users wear the sensor on the back of an index or middle finger, while the actuator and driving circuit are worn on the wrist. When the artificial skin detects a lateral obstacle, it provides haptic feedback in just 18 ms.
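As a rough illustration of that alert logic, the control loop can be as simple as a distance threshold check. The sketch below is hypothetical rather than the device firmware: the driver functions, the 0.4 m threshold and the polling rate are invented for the example.

```python
# Hypothetical sketch of the wrist-worn alert logic: poll a time-of-flight
# sensor and vibrate the cantilever actuator whenever an object comes
# closer than a preset distance. Hardware drivers are placeholders.
import time

ALERT_DISTANCE_M = 0.4   # illustrative threshold, not a value from the paper
POLL_INTERVAL_S = 0.005  # poll at 200 Hz to keep the response well under 18 ms

def read_tof_distance_m() -> float:
    """Placeholder for the time-of-flight sensor driver."""
    raise NotImplementedError

def set_vibration(on: bool) -> None:
    """Placeholder for the PET cantilever actuator driver."""
    raise NotImplementedError

def haptic_loop() -> None:
    while True:
        set_vibration(read_tof_distance_m() < ALERT_DISTANCE_M)
        time.sleep(POLL_INTERVAL_S)
```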
The researchers tested the trained system in virtual and real-world environments, with both humanoid robots and 20 visually impaired individuals who had no prior experience of using visual assistance systems. Testing scenarios included walking to a target while avoiding a variety of obstacles and navigating through a maze. Participants’ navigation speed increased with training and proved comparable to walking with a cane. Users were also able to turn more smoothly and were more efficient at pathfinding when using the navigation system than when using a cane.
“The proficient completion of tasks mirroring real-world challenges underscores the system’s effectiveness in meeting real-life challenges,” the researchers write. “Overall, the system stands as a promising research prototype, setting the stage for the future advancement of wearable visual assistance.”
The universe’s maximum lifespan may be considerably shorter than was previously thought, but don’t worry: there’s still plenty of time to finish streaming your favourite TV series.
According to new calculations by black hole expert Heino Falcke, quantum physicist Michael Wondrak, and mathematician Walter van Suijlekom of Radboud University in the Netherlands, the most persistent stellar objects in the universe – white dwarf stars – will decay away to nothingness in around 10⁷⁸ years. This, Falcke admits, is “a very long time”, but it’s a far cry from previous predictions, which suggested that white dwarfs could persist for at least 10¹¹⁰⁰ years. “The ultimate end of the universe comes much sooner than expected,” he says.
Writing in the Journal of Cosmology and Astroparticle Physics, Falcke and colleagues explain that the discrepancy stems from different assumptions about how white dwarfs decay. Previous calculations of their lifetime assumed that, in the absence of proton decay (which has never been observed experimentally), their main decay process would be something called pycnonuclear fusion. This form of fusion occurs when nuclei in a crystalline lattice essentially vibrate their way into becoming fused with their nearest neighbours.
If that sounds a little unlikely, that’s because it is. However, in the dense, cold cores of white dwarf stars, and over stupendously long time periods, pycnonuclear fusion happens often enough to gradually (very, very gradually) turn the white dwarf’s carbon into nickel, which then transmutes into iron by emitting a positron. The resulting iron-cored stars are known as black dwarfs, and some theories predict that they will eventually (very, very eventually) collapse into black holes. Depending on how massive they were to start with, the whole process takes between 10¹¹⁰⁰ and 10³²⁰⁰⁰ years.
An alternative mechanism
Those estimates, however, do not take into account an alternative decay mechanism known as Hawking radiation. First proposed in the early 1970s by Stephen Hawking and Jacob Bekenstein, Hawking radiation arises from fluctuations in the vacuum of spacetime. These fluctuations allow particle-antiparticle pairs to pop into existence by essentially “borrowing” energy from the vacuum for brief periods before the pairs recombine and annihilate.
If this pair production happens in the vicinity of a black hole, one particle in the pair may stray over the black hole’s event horizon before it can recombine. This leaves its partner free to carry away some of the “borrowed” energy as Hawking radiation. After an exceptionally long time – but, crucially, not as long as the time required to disappear a white dwarf via pycnonuclear fusion – Hawking radiation will therefore cause black holes to dissipate.
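For scale, the standard textbook result for how long a non-rotating black hole of mass M takes to evaporate by Hawking radiation alone is

```latex
t_{\mathrm{evap}} \;\approx\; \frac{5120\,\pi\, G^{2} M^{3}}{\hbar c^{4}},
```

which for a solar-mass black hole works out at roughly 10⁶⁷ years – consistent with the figure the trio quote for black holes below. (This is the classic black-hole formula, not the team’s generalized calculation for other objects.)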
The fate of life, the universe and everything?
But what about objects other than black holes? Well, in a previous work published in 2023, Falcke, Wondrak and van Suijlekom showed that a similar process can occur for any object that curves spacetime with its gravitational field, not just objects that have an event horizon. This means that white dwarfs, neutron stars, the Moon and even human beings can, in principle, evaporate away into nothingness via Hawking radiation – assuming that what the trio delicately call “other astrophysical evolution and decay channels” don’t get there first.
Based on this tongue-in-cheek assumption, the trio calculated that white dwarfs will dissipate in around 10⁷⁸ years, while denser objects such as black holes and neutron stars will vanish in no more than 10⁶⁷ years. Less dense objects such as humans, meanwhile, could persist for as long as 10⁹⁰ years – albeit only in a vast, near-featureless spacetime devoid of anything that would make life worth living, or indeed possible.
While that might sound unrealistic as well as morbid, the trio’s calculations do have a somewhat practical goal. “By asking these kinds of questions and looking at extreme cases, we want to better understand the theory,” van Suijlekom says. “Perhaps one day, we [will] unravel the mystery of Hawking radiation.”
Subtle quantum effects within atomic nuclei can dramatically affect how some nuclei break apart. By studying 100 isotopes with masses below that of lead, an international team of physicists uncovered a previously unknown region in the nuclear landscape where fragments of fission split in an unexpected way. This is driven not by the usual forces, but by shell effects rooted in quantum mechanics.
“When a nucleus splits apart into two fragments, the mass and charge distribution of these fission fragments exhibits the signature of the underlying nuclear structure effect in the fission process,” explains Pierre Morfouace of Université Paris-Saclay, who led the study. “In the exotic region of the nuclear chart that we studied, where nuclei do not have many neutrons, a symmetric split was previously expected. However, the asymmetric fission means that a new quantum effect is at stake.”
This unexpected discovery not only sheds light on the fine details of how nuclei break apart but also has far-reaching implications. These range from the development of safer nuclear energy to understanding how heavy elements are created during cataclysmic astrophysical events like stellar explosions.
Quantum puzzle
Fission is the process by which a heavy atomic nucleus splits into smaller fragments. It is governed by a complex interplay of forces. The strong nuclear force, which binds protons and neutrons together, competes with the electromagnetic repulsion between positively charged protons. The result is that certain nuclei are unstable, and this balance of forces alone would typically lead to a symmetric split.
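A standard way to quantify this competition – a textbook liquid-drop estimate rather than anything from the new study – is the fissility parameter, which compares the destabilizing Coulomb energy with the stabilizing surface energy:

```latex
x \;=\; \frac{E_{\mathrm{Coulomb}}}{2E_{\mathrm{surface}}} \;\simeq\; \frac{Z^{2}/A}{(Z^{2}/A)_{\mathrm{crit}}},
\qquad (Z^{2}/A)_{\mathrm{crit}} \approx 50.
```

Nuclei with x approaching 1 sit ever closer to spontaneous fission; uranium-238, for example, has Z²/A ≈ 36, giving x ≈ 0.7.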
But there’s another, subtler phenomenon at play: quantum shell effects. These arise because protons and neutrons inside the nucleus tend to arrange themselves into discrete energy levels or “shells,” much like electrons do in atoms.
“Quantum shell effects [in atomic electrons] play a major role in chemistry, where they are responsible for the properties of noble gases,” says Cedric Simenel of the Australian National University, who was not involved in the study. “In nuclear physics, they provide extra stability to spherical nuclei with so-called ‘magic’ numbers of protons or neutrons. Such shell effects drive heavy nuclei to often fission asymmetrically.”
In the case of very heavy nuclei, such as uranium or plutonium, this asymmetry is well documented. But in lighter, neutron-deficient nuclei – those with fewer neutrons than their stable counterparts – researchers had long expected symmetric fission, where the nucleus breaks into two roughly equal parts. This new study challenges that view.
New fission landscape
To investigate fission in this less-explored part of the nuclear chart, scientists from the R3B-SOFIA collaboration carried out experiments at the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt, Germany. They focused on nuclei ranging from iridium to thorium, many of which had never been studied before. The nuclei were fired at high energies into a lead target to induce fission.
The fragments produced in each fission event were carefully analysed using a suite of high-resolution detectors. A double ionization chamber captured the number of protons in each product, while a superconducting magnet and time-of-flight detectors tracked their momentum, enabling a detailed reconstruction of how the split occurred.
Using this method, the researchers found that the lightest fission fragments were frequently formed with 36 protons, which is the atomic number of krypton. This pattern suggests the presence of a stabilizing shell effect at that specific proton number.
“Our data reveal the stabilizing effect of proton shells at Z=36,” explains Morfouace. “This marks the identification of a new ‘island’ of asymmetric fission, one driven by the light fragment, unlike the well-known behaviour in heavier actinides. It expands our understanding of how nuclear structure influences fission outcomes.”
Future prospects
“Experimentally, what makes this work unique is that they provide the distribution of protons in the fragments, while earlier measurements in sub-lead nuclei were essentially focused on the total number of nucleons,” comments Simenel.
Since quantum shell effects are tied to specific numbers of protons or neutrons, not just the overall mass, these new measurements offer direct evidence of how proton shell structure shapes the outcome of fission in lighter nuclei. This makes the results particularly valuable for testing and refining theoretical models of fission dynamics.
“This work will undoubtedly lead to further experimental studies, in particular with more exotic light nuclei,” Simenel adds. “However, to me, the ball is now in the camp of theorists who need to improve their modelling of nuclear fission to achieve the predictive power required to study the role of fission in regions of the nuclear chart not accessible experimentally, as in nuclei formed in the astrophysical processes.”
How it works Diagram showing simulated light from an exoplanet and its companion star (far left) moving through the new coronagraph. (Courtesy: Nico Deshler/University of Arizona)
A new type of coronagraph that could capture images of dim exoplanets that are extremely close to bright stars has been developed by a team led by Nico Deshler at the University of Arizona in the US. As well as boosting the direct detection of exoplanets, the new instrument could support advances in areas including communications, quantum sensing, and medical imaging.
Astronomers have confirmed the existence of nearly 6000 exoplanets, which are planets that orbit stars other than the Sun. The majority of these were discovered based on their effects on their companion stars, rather than being observed directly. This is because most exoplanets are too dim and too close to their companion stars for the exoplanet light to be differentiated from starlight. That is where a coronagraph can help.
A coronagraph is an astronomical instrument that blocks light from an extremely bright source to allow the observation of dimmer objects in the nearby sky. Coronagraphs were first developed a century ago to allow astronomers to observe the outer atmosphere (corona) of the Sun, which would otherwise be drowned out by light from the much brighter photosphere.
At the heart of a coronagraph is a mask that blocks the light from a star, while allowing light from nearby objects into a telescope. However, the mask (and the telescope aperture) will cause the light to interfere and create diffraction patterns that blur tiny features. This prevents the observation of dim objects that are closer to the star than the instrument’s inherent diffraction limit.
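The scale of the problem is set by the familiar Rayleigh criterion: for a circular aperture of diameter D observing at wavelength λ, the smallest resolvable angular separation is roughly

```latex
\theta_{\min} \;\approx\; 1.22\,\frac{\lambda}{D}.
```

As an illustrative example (numbers not from the study), a 2.4 m telescope working at 550 nm has θ_min ≈ 2.8 × 10⁻⁷ rad, or about 0.06 arcseconds.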
Off limits
Most exoplanets lie within the diffraction limit of today’s coronagraphs and Deshler’s team addressed this problem using two spatial mode sorters. The first device uses a sequence of optical elements to separate starlight from light originating from the immediate vicinity of the star. The starlight is then blocked by a mask while the rest of the light is sent through a second spatial mode sorter, which reconstructs an image of the region surrounding the star.
As well as offering spatial resolution below the diffraction limit, the technique approaches the fundamental limit on resolution that is imposed by quantum mechanics.
“Our coronagraph directly captures an image of the surrounding object, as opposed to measuring only the quantity of light it emits without any spatial orientation,” says Deshler. “Compared to other coronagraph designs, ours promises to supply more information about objects in the sub-diffraction regime – which lie below the resolution limits of the detection instrument.”
To test their approach, Deshler and colleagues simulated an exoplanet orbiting at a sub-diffraction distance from a host star some 1000 times brighter. After passing the light through the spatial mode sorters, they could resolve the exoplanet’s position – something that would not have been possible with a conventional coronagraph.
Context and composition
The team believe that their technique will improve astronomical images. “These images can provide context and composition information that could be used to determine exoplanet orbits and identify other objects that scatter light from a star, such as exozodiacal dust clouds,” Deshler says.
The team’s coronagraph could also have applications beyond astronomy. With the ability to detect extremely faint signals close to the quantum limit, it could help to improve the resolution of quantum sensors. This could lead to new methods for detecting tiny variations in magnetic or gravitational fields.
Elsewhere, the coronagraph could help to improve non-invasive techniques for imaging living tissue on the cellular scale – with promising implications in medical applications such as early cancer detection and the imaging of neural circuits. Another potential use could be in new multiplexing techniques for optical communications, with the coronagraph used to differentiate between overlapping signals. This has the potential to boost the rate at which data can be transferred between satellites and ground-based receivers.
Experimental setup Top: schematic and photo of the setup for measurements behind a homogeneous phantom. Bottom: IMPT treatment plan for the head phantom (left); the detector sensor position (middle, sensor thickness not to scale); and the setup for measurements behind the phantom (right). (Courtesy: Phys. Med. Biol. 10.1088/1361-6560/adcaf9)
Proton therapy is a highly effective and conformal cancer treatment. Proton beams deposit most of their energy at a specific depth – the Bragg peak – and then stop, enabling proton treatments to destroy tumour cells while sparing surrounding normal tissue. To further optimize the clinical treatment planning process, there’s recently been increased interest in considering the radiation quality, quantified by the proton linear energy transfer (LET).
LET – defined as the mean energy deposited by a charged particle over a given distance – increases towards the end of the proton range. Incorporating LET as an optimization parameter could better exploit the radiobiological properties of protons, by reducing LET in healthy tissue, while maintaining or increasing it within the target volume. This approach, however, requires a method for experimental verification of proton LET distributions and patient-specific quality assurance in terms of proton LET.
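In practice the quantity optimized and verified is usually the dose-averaged LET, which weights the LET distribution by the dose deposited at each LET value (a standard definition in the field, not specific to this study):

```latex
\mathrm{LET} = \frac{\mathrm{d}E}{\mathrm{d}x}, \qquad
\mathrm{LET}_{d} = \frac{\int L\,D(L)\,\mathrm{d}L}{\int D(L)\,\mathrm{d}L},
```

where D(L) is the dose delivered by particles with LET L.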
To meet this need, researchers at the Institute of Nuclear Physics, Polish Academy of Sciences have used the miniaturized semiconductor pixel detector Timepix3 to perform LET characterization of intensity-modulated proton therapy (IMPT) plans in homogeneous and heterogeneous phantoms. They report their findings in Physics in Medicine & Biology.
Experimental validation
First author Paulina Stasica-Dudek and colleagues performed a series of experiments in a gantry treatment room at the Cyclotron Centre Bronowice (CCB), a proton therapy facility equipped with a proton cyclotron accelerator and pencil-beam scanning system that provides IMPT for up to 50 cancer patients per day.
The MiniPIX Timepix3 is a radiation imaging pixel detector based on the Timepix3 chip developed at CERN within the Medipix collaboration (provided commercially by Advacam). It provides quasi-continuous single particle tracking, allowing particle type recognition and spectral information in a wide range of radiation environments.
For this study, the team used a Timepix3 detector with a 300 µm-thick silicon sensor operated as a miniaturized online radiation camera. To overcome the problem of detector saturation in the relatively high clinical beam currents, the team developed a pencil-beam scanning method with the beam current reduced to the picoampere (pA) level.
The researchers used Timepix3 to measure the deposited energy and LET spectra for spread-out Bragg peak (SOBP) and IMPT plans delivered to a homogeneous water-equivalent slab phantom, with each plan energy layer irradiated and measured separately. They also performed measurements on an IMPT plan delivered to a heterogeneous head phantom. For each scenario, they used a Monte Carlo (MC) code to simulate the corresponding spectra of deposited energy and LET for comparison.
The team first performed a series of experiments using a homogeneous phantom irradiated with various fields, mimicking patient-specific quality assurance procedures. The measured and simulated dose-averaged LET (LETd) and LET spectra agreed to within a few percent, demonstrating proper calibration of the measurement methodology.
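Conceptually, forming a dose-averaged LET from single-event Timepix3 data comes down to weighting each event’s LET estimate by the energy it deposits. The sketch below illustrates that bookkeeping only: it is not the authors’ analysis code, the numbers are invented, and real processing would typically also convert LET in silicon to LET in water.

```python
# Minimal sketch of dose-averaged LET from per-event quantities of the kind a
# Timepix3 detector provides: deposited energy and reconstructed track length
# in the silicon sensor. Illustrative only; values are not from the paper.
import numpy as np

def dose_averaged_let(energy_keV, track_length_um):
    """Dose-averaged LET (keV/um), weighting each event by its deposited energy."""
    energy_keV = np.asarray(energy_keV, dtype=float)
    track_length_um = np.asarray(track_length_um, dtype=float)
    let = energy_keV / track_length_um              # per-event LET estimate
    return float(np.sum(energy_keV * let) / np.sum(energy_keV))

# Invented example events (keV deposited, um path length in the 300-um sensor)
energies = [450.0, 300.0, 520.0]
lengths = [300.0, 310.0, 305.0]
print(f"LET_d = {dose_averaged_let(energies, lengths):.2f} keV/um")
```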
The researchers also performed an end-to-end test in a heterogeneous CIRS head phantom, delivering a single field of an IMPT plan to a central 4 cm-diameter target volume in 13 energy layers (96.57–140.31 MeV) and 315 spots.
End-to-end testing Energy deposition (left) and LET in water (right) spectra for an IMPT plan measured in the CIRS head phantom obtained based on measurements (blue) and MC simulations (orange). The vertical lines indicate LETd values. (Courtesy: Phys. Med. Biol. 10.1088/1361-6560/adcaf9)
For head phantom measurements, the peak positions for deposited energy and LET spectra obtained based on experiment and simulation agreed within the error bars, with LETd values of about 1.47 and 1.46 keV/µm, respectively. The mean LETd values derived from MC simulation and measurement differed on average by 5.1% for individual energy layers.
Clinical translation
The researchers report that implementing the proposed LET measurement scheme using Timepix3 in a clinical setting requires irradiating IMPT plans with a reduced beam current (at the pA level). While they successfully conducted LET measurements at low beam currents in the accelerator’s research mode, pencil-beam scanning at pA-level currents is not currently available in the commercial clinical or quality assurance modes. Therefore, they note that translating the proposed approach into clinical practice would require vendors to upgrade the beam delivery system to enable beam monitoring at low beam currents.
“The presented results demonstrate the feasibility of the Timepix3 detector to validate LET computations in IMPT fields and perform patient-specific quality assurance in terms of LET. This will support the implementation of LET in treatment planning, which will ultimately increase the effectiveness of the treatment,” Stasica-Dudek and colleagues write. “Given the compact design and commercial availability of the Timepix3 detector, it holds promise for broad application across proton therapy centres.”
Physicists at CERN have completed a “test run” for taking antimatter out of the laboratory and transporting it across the site of the European particle-physics facility. Although the test was carried out with ordinary protons, the team that performed it says that antiprotons could soon get the same treatment. The goal, they add, is to study antimatter in places other than the labs that create it, as this would enable more precise measurements of the differences between matter and antimatter. It could even help solve one of the biggest mysteries in physics: why does our universe appear to be made up almost entirely of matter, with only tiny amounts of antimatter?
According to the Standard Model of particle physics, each of the matter particles we see around us – from baryons like protons to leptons such as electrons – should have a corresponding antiparticle that is identical in every way apart from its charge and magnetic properties (which are reversed). This might sound straightforward, but it leads to a peculiar prediction. Under the Standard Model, the Big Bang that formed our universe nearly 14 billion years ago should have generated equal amounts of antimatter and matter. But if that were the case, there shouldn’t be any matter left, because whenever pairs of antimatter and matter particles collide, they annihilate each other in a burst of energy.
Physicists therefore suspect that there are other, more subtle differences between matter particles and their antimatter counterparts – differences that could explain why the former prevailed while the latter all but disappeared. By searching for these differences, they hope to shed more light on antimatter-matter asymmetry – and perhaps even reveal physics beyond the Standard Model.
Extremely precise measurements
At CERN’s Baryon-Antibaryon Symmetry Experiment (BASE), the search for matter-antimatter differences focuses on measuring the magnetic moment (or charge-to-mass ratio) of protons and antiprotons. These measurements need to be extremely precise, but this is difficult at CERN’s “Antimatter Factory” (AMF), which manufactures the necessary low-energy antiprotons in profusion. This is because essential nearby equipment – including the Antiproton Decelerator and ELENA, which reduce the energy of incoming antiprotons from GeV to MeV – produces magnetic field fluctuations that blur the signal.
To carry out more precise measurements, the team therefore needs a way of transporting the antiprotons to other, better-shielded, laboratories. This is easier said than done, because antimatter needs to be carefully isolated from its environment to prevent it from annihilating with the walls of its container or with ambient gas molecules.
The BASE team’s solution was to develop a device that can transport trapped antiprotons on a truck for substantial distances. It is this device, known as BASE-STEP (for Symmetry Tests in Experiments with Portable Antiprotons), that has now been field-tested for the first time.
Protons on the go
During the test, the team successfully transported a cloud of about 10⁵ trapped protons out of the AMF and across CERN’s Meyrin campus over a period of four hours. Although protons are not the same as antiprotons, BASE-STEP team leader Christian Smorra says they are just as sensitive to disturbances in their environment caused by, say, driving them around. “They are therefore ideal stand-ins for initial tests, because if we can transport protons, we should also be able to transport antiprotons,” he says.
The next step: BASE-STEP on a transfer trolley, watched over by BASE team members Fatma Abbass and Christian Smorra. (Photo: BASE/Maria Latacz)
The BASE-STEP device is mounted on an aluminium frame and measures 1.95 m x 0.85 m x 1.65 m. At 850‒900 kg, it is light enough to be transported using standard forklifts and cranes.
Like BASE, it traps particles in a Penning trap composed of gold-plated cylindrical electrode stacks made from oxygen-free copper. To further confine the protons and prevent them from colliding with the trap’s walls, this trap is surrounded by a superconducting magnet bore operated at cryogenic temperatures. The second electrode stack is also kept at ultralow pressures of 10⁻¹⁹ bar, which Smorra says is low enough to keep antiparticles from annihilating with residual gas molecules. To transport antiprotons instead of protons, Smorra adds, they would just need to switch the polarity of the electrodes.
The transportable trap system, which is detailed in Nature, is designed to remain operational on the road. It uses a carbon-steel vacuum chamber to shield the particles from stray magnetic fields, and its frame can handle accelerations of up to 1g (9.81 m/s²) in all directions over and above the usual (vertical) force of gravity. This means it can travel up and down slopes with a gradient of up to 10%, or approximately 6°.
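The quoted angle follows from simple trigonometry: a 10% gradient corresponds to

```latex
\theta = \arctan(0.10) \approx 5.7^{\circ},
```

which rounds to the approximately 6° stated above.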
Once the BASE-STEP device is re-configured to transport antiprotons, the first destination on the team’s list is a new Penning-trap system currently being constructed at the Heinrich Heine University in Düsseldorf, Germany. Here, physicists hope to search for charge-parity-time (CPT) violations in protons and antiprotons with a precision at least 100 times higher than is possible at CERN’s AMF.
“At BASE, we are currently performing measurements with a precision of 16 parts in a trillion,” explains BASE spokesperson Stefan Ulmer, an experimental physicist at Heinrich Heine and a researcher at CERN and Japan’s RIKEN laboratory. “These experiments are the most precise tests of matter/antimatter symmetry in the baryon sector to date, but to make these experiments better, we have no choice but to transport the particles out of CERN’s antimatter factory,” he tells Physics World.
Many creative industries rely on cutting-edge digital technologies, so it is not surprising that this sector could easily become an early adopter of quantum computing.
In this episode of the Physics World Weekly podcast I am in conversation with James Wootton, who is chief scientific officer at Moth Quantum. Based in the UK and Switzerland, the company is developing quantum-software tools for the creative industries – focusing on artists, musicians and game developers.
Wootton joined Moth Quantum in September 2024 after working on quantum error correction at IBM. He also has a long-standing interest in quantum gaming and in creating tools that make quantum computing more accessible. If you enjoyed this interview with Wootton, check out this article that he wrote for Physics World in 2018: “Playing games with quantum computers”.
Five-body recombination, in which five identical atoms form a tetramer molecule and a single free atom, could be the largest contributor to loss from ultracold atom traps at specific “Efimov resonances”, according to calculations done by physicists in the US. The process, which is less well understood than three- and four-body recombination, could be useful for building molecules, and potentially for modelling nuclear fusion.
A collision involving trapped atoms can be either elastic – in which the internal states of the atoms and their total kinetic energy remain unchanged – or inelastic, in which there is an interchange between the kinetic energy of the system and the internal energy states of the colliding atoms.
Most collisions in a dilute quantum gas involve only two atoms, and when physicists were first studying Bose-Einstein condensates (the ultralow-temperature state of some atomic gases), they suppressed inelastic two-body collisions, keeping the atoms in the desired state and preserving the condensate. A relatively small number of collisions, however, involve three or more bodies colliding simultaneously.
“They couldn’t turn off three body [inelastic collisions], and that turned out to be the main reason atoms leaked out of the condensate,” says theoretical physicist Chris Greene of Purdue University in the US.
Something remarkable
While attempting to understand inelastic three-body collisions, Greene and colleagues made the connection to work done in the 1970s by the Soviet theoretician Vitaly Efimov. He showed that at specific “resonances” of the scattering length, quantum mechanics allowed two colliding particles that could otherwise not form a bound state to do so in the presence of a third particle. While Efimov first considered the scattering of nucleons (protons and neutrons) or alpha particles, the effect applies to atoms and other quantum particles.
In the case of trapped atoms, the bound dimer and free atom are then ejected from the trap by the energy released from the binding event. “There were signatures of this famous Efimov effect that had never been seen experimentally,” Greene says. This was confirmed in 2005 by experiments from Rudolf Grimm’s group at the University of Innsbruck in Austria.
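A hallmark of Efimov physics, and the reason these loss features are called resonances, is that for three identical bosons successive Efimov states appear at scattering lengths separated by a universal geometric factor (a standard result, quoted here for context):

```latex
a_{n+1} \;=\; e^{\pi/s_{0}}\, a_{n} \;\approx\; 22.7\, a_{n}, \qquad s_{0} \approx 1.00624.
```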
Hundreds of scientific papers have now been written about three-body recombination. Greene and colleagues subsequently predicted resonances at which four-body Efimov recombination could occur, producing a trimer. These were observed almost immediately by Grimm and colleagues. “Five was just too hard for us to do at the time, and only now are we able to go that next step,” says Greene.
Principal loss channel
In the new work, Greene and colleague Michael Higgins modelled collisions between identical caesium atoms in an optical trap. At specific resonances, five-body recombination – in which five atoms collide and four of them bind into a tetramer, leaving the fifth as a free particle – is not only enhanced but becomes the principal loss channel. The researchers believe these resonances should be experimentally observable using today’s laser box traps, which hold atomic gases in a square-well potential.
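The reason such high-order processes normally go unnoticed is the steep density scaling of N-body recombination. In the usual rate-equation picture (a generic scaling argument, not the detailed calculation in the paper), the trapped-gas density n obeys

```latex
\frac{\mathrm{d}n}{\mathrm{d}t} \;=\; -K_{3}\,n^{3} \;-\; K_{4}\,n^{4} \;-\; K_{5}\,n^{5} \;-\;\cdots,
```

so in a dilute gas the five-body term is usually negligible; only when K₅ is resonantly enhanced can it dominate the loss.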
“For most ultracold experiments, researchers will be avoiding loss as much as possible – they would stay away from these resonances,” says Greene. “But for those of us in the few-body community interested in how atoms bind and resonate and how to describe complicated rearrangement, it’s really interesting to look at these points where the loss becomes resonant and very strong.” This is one technique that can be used to create new molecules, for example.
In future, Greene hopes to apply the model to nucleons themselves. “There have been very few people in the few-body theory community willing to tackle a five-particle collision – the Schrödinger equation has so many dimensions,” he says.
Fusion reactions
He hopes it may be possible to apply the researchers’ toolkit to nuclear reactions. “The famous one is the deuterium/tritium fusion reaction. When they collide they can form an alpha particle and a neutron and release a ton of energy, and that’s the basis of fusion reactors…There’s only one theory in the world from the nuclear community, and it’s such an important reaction I think it needs to be checked,” he says.
The researchers also wish to study the possibility of even larger bound states. However, they foresee a problem because the scattering length of the ground state resonance gets shorter and shorter with each additional particle. “Eventually the scattering length will no longer be the dominant length scale in the problem, and we think between five and six is about where that border line occurs,” Greene says. Nevertheless, higher-lying, more loosely-bound six-body Efimov resonances could potentially be visible at longer scattering lengths.
Theoretical physicist Ravi Rau of Louisiana State University in the US is impressed by Greene and Higgins’ work. “For quite some time Chris Greene and a succession of his students and post-docs have been extending the three-body work that they did, using the same techniques, to four and now five particles,” he says. “Each step is much more complicated, and that he could use this technique to extend it to five bosons is what I see as significant.” Rau says, however, that “there is a vast gulf” between five atoms and the number treated by statistical mechanics, so new theoretical approaches may be required to bridge the gap.
The Mars rover Perseverance has captured the first image of an aurora as seen from the surface of another planet. The visible-light image, which was taken during a solar storm on 18 March 2024, is not as detailed or as colourful as the high-resolution photos of green swirls, blue shadows and pink whorls familiar to aurora aficionados on Earth. Nevertheless, it shows the Martian sky with a distinctly greenish tinge, and the scientists who obtained it say that similar aurorae would likely be visible to future human explorers.
“Kind of like with aurora here on Earth, we need a good solar storm to induce a bright green colour, otherwise our eyes mostly pick up on a faint grey-ish light,” explains Elise Wright Knutsen, a postdoctoral researcher in the Centre for Space Sensors and Systems at the University of Oslo, Norway. The storm Knutsen and her colleagues captured was, she adds, “rather moderate”, and the aurora it produced was probably too faint to see with the naked eye. “But with a camera, or if the event had been more intense, the aurora will appear as a soft green glow covering more or less the whole sky.”
The role of planetary magnetic fields
Aurorae happen when charged particles from the Sun – the solar wind – interact with the magnetic field around a planet. On Earth, this magnetic field is the product of an internal, planetary-scale magnetic dynamo. Mars, however, lost its dynamo (and, with it, its oceans and its thick protective atmosphere) around four billion years ago, so its magnetic field is much weaker. Nevertheless, it retains some residual magnetization in its southern highlands, and its conductive ionosphere affects the shape of the nearby interplanetary magnetic field. Together, these two phenomena give Mars a hybrid magnetosphere too feeble to protect its surface from cosmic rays, but strong enough to generate an aurora.
Scientists had previously identified various types of aurorae on Mars (and every other planet with an atmosphere in our solar system) in data from orbiting spacecraft. However, no Mars rover had ever observed an aurora before, and all the orbital aurora observations, from Mars and elsewhere, were at ultraviolet wavelengths.
Awesome sight: An artist’s impression of the aurora and the Perseverance rover. (Courtesy: Alex McDougall-Page)
How to spot an aurora on Mars
According to Knutsen, the lack of visible-light, surface-based aurora observations has several causes. First, the visible-wavelength instruments on Mars rovers are generally designed to observe the planet’s bright “dayside”, not to detect faint emissions on its nightside. Second, rover missions focus primarily on geology, not astronomy. Finally, aurorae are fleeting, and there is too much demand for Perseverance’s instruments to leave them pointing at the sky just in case something interesting happens up there.
“We’ve spent a significant amount of time and effort improving our aurora forecasting abilities,” Knutsen says.
Getting the timing of observations right was the most challenging part, she adds. The clock started whenever solar satellites detected events called coronal mass ejections (CMEs) that create unusually strong pulses of solar wind. Next, researchers at the NASA Community Coordinated Modeling Center simulated how these pulses would propagate through the solar system. Once they posted the simulation results online, Knutsen and her colleagues – an international consortium of scientists in Belgium, France, Germany, the Netherlands, Spain, the UK and the US as well as Norway – had a decision to make. Was this CME likely to trigger an aurora bright enough for Perseverance to detect?
If the answer was “yes”, their next step was to request observation time on Perseverance’s SuperCam and Mastcam-Z instruments. Then they had to wait, knowing that although CMEs typically take three days to reach Mars, the simulations are only accurate to within a few hours and the forecast could change at any moment. Even if they got the timing right, the CME might be too weak to trigger an aurora.
“We have to pick the exact time to observe, the whole observation only lasts a few minutes, and we only get one chance to get it right per solar storm,” Knutsen says. “It took three unsuccessful attempts before we got everything right, but when we did, it appeared exactly as we had imagined it: as a diffuse green haze, uniform in all directions.”
Future observations
Writing in Science Advances, Knutsen and colleagues say it should now be possible to investigate how Martian aurorae vary in time and space – information which, they note, is “not easily obtained from orbit with current instrumentation”. They also point out that the visible-light instruments they used tend to be simpler and cheaper than UV ones.
“This discovery will open up new avenues for studying processes of particle transport and magnetosphere dynamics,” Knutsen tells Physics World. “So far we have only reported our very first detection of this green emission, but observations of aurora can tell us a lot about how the Sun’s particles are interacting with Mars’s magnetosphere and upper atmosphere.”
Late on Friday 18 April, the provost of Stony Brook University, where I teach, received a standard letter from the National Science Foundation (NSF), the body that funds much academic research in the US. “Termination of certain awards is necessary,” the e-mail ran, “because they are not in alignment with current NSF priorities”. The e-mail mentioned “NSF Award Id 2318247”. Mine.
The termination notice, forwarded to me a few minutes later, was the same one that 400 other researchers all over the US received the same day, in which the agency, following a directive from the Trump administration, grabbed back $233m in grant money. According to the NSF website, projects terminated were “including but not limited to those on diversity, equity, and inclusion (DEI) and misinformation/disinformation”.
Losing grant money is disastrous for research and for the faculty, postdocs, graduate students and support staff who depend on that support. A friend of mine tried to console me by saying that I had earned a badge of honour for being among the 400 people who threatened the Trump Administration so much that it set out to stop their work. Still, I was baffled. Did I really deserve the axe?
My award, entitled “Social and political dynamics of lab-community relations”, was small potatoes. As the sole principal investigator, I’d hired no postdocs or grad students. I’d also finished most of the research and been given a “no-cost extension” to write it up that was due to expire in a few months. In fact, I’d spent all but $21,432 of the $263,266 of cash.
That may sound like a lot for a humanities researcher, but it barely covered a year of my salary and included indirect costs (to which my grant was subject like any other), along with travel and so on. What’s more, my project’s stated aim was to “enhance the effectiveness of national scientific facilities”, which was clearly within the NSF’s mission.
Such facilities, I had pointed out in my official proposal, are vital if the US is to fulfil its national scientific, technological, medical and educational goals. But friction between a facility and the surrounding community can hamper its work, particularly if the lab’s research is seen as threatening – for example, involving chemical, radiological or biological hazards. Some labs, in fact, have had important, yet perfectly safe, facilities permanently closed out of such fear.
“In an age of Big Science,” I argued, “understanding the dynamics of lab-community interaction is crucial to advancing national, scientific, and public interests.” What’s so contentious about that?
“New bad words”
Maybe I had been careless. After all, Ted Cruz, who chairs the Senate’s commerce committee, had claimed in February that 3400 NSF awards worth over $2 billion made during the Biden–Harris administration had promoted DEI and advanced “neo-Marxist class warfare propaganda”. I wondered if I might have inadvertently used some trigger word that outed me as an enemy of the state.
I knew, for instance, that the Trump Administration had marked for deletion photos of the Enola Gay aircraft, which had dropped an atomic bomb on Hiroshima, in a Defense Department database because officials had not realized that “Gay” was part of the name of the pilot’s mother. Administration officials had made similar misinterpretations in scientific proposals that included the words “biodiversity” and “transgenic”.
Had I used one of those “new bad words”? I ran a search on my proposal. Did it mention “equity”? No. “Inclusion”? Also no. The word “diversity” appeared only once, in the subtitle of an article in the bibliography about radiation fallout. “Neo-Marxist”? Again, no. Sure, I’d read Marx’s original texts during my graduate training in philosophy, but my NSF documents hadn’t tapped him or his followers as essential to my project.
Then I remembered a sentence in my proposal. “Well-established scientific findings,” I wrote, “have been rejected by activists and politicians, distorted by lurid headlines, and fuelled partisan agendas.” These lead in turn to “conspiracy theories, fake facts, science denial and charges of corruption”.
Was that it, I wondered? Had the NSF officials thought that I had meant to refer to the administration’s attacks on climate change science, vaccines, green energy and other issues? If so, that was outrageous! There was not a shred of truth to it – no truth at all!
I was shocked to discover that almost everything about it in the NSF database was wrong, including the abstract. The abstract given for my grant was apparently that of another NSF award, for a study that touched on DEI themes – a legitimate and useful thing to study under any normal regime, but not this one. At last, I had the reason for my grant termination: an NSF error.
The next day, 24 April, I managed to speak to the beleaguered NSF programme director, who was kind and understanding and said there’d been a mistake in the database. When I asked her if it could be fixed she said, “I don’t know”. When I asked her if the termination can be reversed, she said, “I don’t know”. I alerted Stony Brook’s grants-management office, which began to press the NSF to reverse its decision. A few hours later I learned that NSF director Sethuraman Panchanathan had resigned.
I briefly wondered if Panchanathan had been fired because my grant had been bungled. No such luck; he was probably disgusted with the administration’s treatment of the agency. But while the mistake over my abstract evidently wasn’t deliberate, the malice behind my grant’s termination certainly was. Further, doesn’t one routinely double-check before taking such an unprecedented and monumental step as terminating a grant by a major scientific agency?
I then felt guilty about my anger; who was I to complain? After all, some US agencies have been shockingly incompetent lately. A man was mistakenly sent by the Department of Homeland Security to a dangerous prison in El Salvador and they couldn’t (or wouldn’t) get him back. The Department of Health and Human Services has downplayed the value of vaccines, fuelling a measles epidemic in Texas, while defence secretary Pete Hegseth used the Signal messaging app to release classified military secrets regarding a war in progress to a journalist.
How narcissistic of me to become livid only when personally affected by termination of an award that’s almost over anyway.
A few days later, on 28 April, Stony Brook’s provost received another e-mail about my grant from the NSF. Forwarded to me, it said: “the termination notice is retracted; NSF terminated this project in error”. Since then, the online documents at the NSF, and the information about my grant in the tracker, have thankfully been corrected.
The critical point
In a few years’ time, I’ll put together another proposal to study the difference between the way that US government handles science and the needs of its citizens. I’ll certainly have a lot more material to draw on. Meanwhile, I’ll reluctantly wear my badge of honour. For I deserve it – though not, as I initially thought, because I had threatened the Trump Administration enough that they tried to halt my research.
I got it simply because I’m yet another victim of the Trump Administration’s incompetence.
Physicists have set a new upper bound on the interaction strength of dark matter by simulating the collision of two clouds of interstellar plasma. The result, from researchers at Ruhr University Bochum in Germany, CINECA in Italy and the Instituto Superior Tecnico in Portugal, could force a rethink on theories describing this mysterious substance, which is thought to make up more than 85% of the mass in the universe.
Since dark matter has only ever been observed through its effect on gravity, we know very little about what it’s made of. Indeed, various theories predict that dark matter particles could have masses ranging from around 10⁻²² eV to around 10¹⁹ GeV – a staggering 50 orders of magnitude.
Another major unknown about dark matter is whether it interacts via forces other than gravity, either with itself or with other particles. Some physicists have hypothesized that dark matter particles might possess positive and negative “dark charges” that interact with each other via “dark electromagnetic forces”. According to this supposition, dark matter could behave like a cold plasma of self-interacting particles.
Bullet Cluster experiment
In the new study, the team searched for evidence of dark interactions in a cluster of galaxies located several billion light years from Earth. This galactic grouping is known as the Bullet Cluster, and it contains a subcluster that is moving away from the main body after passing through it at high speed.
Since the most basic model of dark-matter interactions relies on the same equations as ordinary electromagnetism, the researchers chose to simulate these interactions in the Bullet Cluster system using the same computational tools they would use to describe electromagnetic interactions in a standard plasma. They then compared their results with real observations of the Bullet Cluster.
Interaction strength: Constraints on the dark electromagnetic coupling constant 𝛼𝐷 based on observations from the Bullet Cluster. 𝛼𝐷 must lie below the blue, green and red regions. Dashed lines show the reference value used for the mass of 1 TeV. (Courtesy: K Schoefler et al., “Can plasma physics establish a significant bound on long-range dark matter interactions?” Phys Rev D111 L071701, https://doi.org/10.1103/PhysRevD.111.L071701)
The new work builds on a previous study in which members of the same team simulated the collision of two clouds of standard plasma passing through one another. This study found that as the clouds merged, electromagnetic instabilities developed. These instabilities had the effect of redistributing energy from the opposing flows of the clouds, slowing them down while also broadening the temperature range within them.
Ruling out many of the simplest dark matter theories
The latest study showed that, as expected, the plasma components of the subcluster and main body slowed down thanks to ordinary electromagnetic interactions. That, however, appeared to be all that happened, as the data contained no sign of additional dark interactions. While the team’s finding doesn’t rule out dark electromagnetic interactions entirely, team member Kevin Schoeffler explains that it does mean that these interactions, which are characterized by a parameter known as 𝛼𝐷, must be far weaker than their ordinary-matter counterpart. “We can thus calculate an upper limit for the strength of this interaction,” he says.
This limit, which the team calculated as αD < 4 × 10⁻²⁵ for a dark matter particle with a mass of 1 TeV, rules out many of the simplest dark matter theories and will require them to be rethought, Schoeffler says. “The calculations were made possible thanks to detailed discussions with scientists working outside of our speciality of physics, namely plasma physicists,” he tells Physics World. “Throughout this work, we had to overcome the challenge of connecting with very different fields and interacting with communities that speak an entirely different language to ours.”
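To get a feel for how stringent this bound is, the short calculation below (ours, not the team’s) compares the reported limit on αD with the ordinary fine-structure constant, α ≈ 1/137, which sets the strength of electromagnetism.

```python
# Rough comparison (not from the paper): how the reported upper bound on the
# dark coupling constant alpha_D stacks up against the ordinary
# fine-structure constant alpha_EM that governs electromagnetism.

alpha_EM = 1 / 137.036        # ordinary electromagnetic coupling strength
alpha_D_limit = 4e-25         # reported upper bound for a 1 TeV dark-matter particle

print(f"alpha_D must be at least {alpha_EM / alpha_D_limit:.1e} times weaker than alpha_EM")
# prints roughly 1.8e+22
```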
As for future work, the physicists plan to compare the results of their simulations with other astronomical observations, with the aim of constraining the upper limit of the dark electromagnetic interaction even further. More advanced calculations, such as those that include finer details of the cloud models, would also help refine the limit. “These more realistic setups would include other plasma-like electromagnetic scenarios and ‘slowdown’ mechanisms, leading to potentially stronger limits,” Schoeffler says.
In the quantum world, observing a particle is not a passive act. If you shine light on a quantum object to measure its position, photons scatter off it and disturb its motion. This disturbance is known as quantum backaction noise, and it limits how precisely physicists can observe or control delicate quantum systems.
Physicists at Swansea University have now proposed a technique that could eliminate quantum backaction noise in optical traps, allowing a particle to remain suspended in space undisturbed. This would bring substantial benefits for quantum sensors, as the amount of noise in a system determines how precisely a sensor can measure forces such as gravity; detect as-yet-unseen interactions between gravity and quantum mechanics; and perhaps even search for evidence of dark matter.
There’s just one catch: for the technique to work, the particle needs to become invisible.
Levitating nanoparticles
Backaction noise is a particular challenge in the field of levitated optomechanics, where physicists seek to trap nanoparticles using light from lasers. “When you levitate an object, the whole thing moves in space and there’s no bending or stress, and the motion is very pure,” explains James Millen, a quantum physicist who studies levitated nanoparticles at King’s College London, UK. “That’s why we are using them to detect crazy stuff like dark matter.”
While some noise is generally unavoidable, Millen adds that there is a “sweet spot” called the Heisenberg limit. “This is where you have exactly the right amount of measurement power to measure the position optimally while causing the least noise,” he explains.
The problem is that laser beams powerful enough to suspend a nanoparticle tend to push the system away from the Heisenberg limit, producing an increase in backaction noise.
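The trade-off can be pictured with a toy model: assume an imprecision noise that falls as 1/P with measurement power P and a backaction noise that grows as P, so that the total is minimized at an intermediate “sweet spot”. The sketch below uses arbitrary, purely illustrative coefficients; it is not taken from the Swansea proposal.

```python
import numpy as np

# Toy model of the measurement trade-off: imprecision noise ~ A/P falls with
# measurement power P, backaction noise ~ B*P grows with it, and their sum is
# minimised at P = sqrt(A/B). Coefficients are arbitrary illustrative values.

A, B = 1.0, 1.0
P = np.logspace(-2, 2, 501)          # measurement power, arbitrary units

total_noise = A / P + B * P
P_opt = P[np.argmin(total_noise)]

print(f"numerical optimum: P = {P_opt:.3f}, analytic optimum: {np.sqrt(A/B):.3f}")
```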
Blocking information flow
The Swansea team’s method avoids this problem by, in effect, blocking the flow of information from the trapped nanoparticle. Its proposed setup uses a standing-wave laser to trap a nanoparticle in space with a hemispherical mirror placed around it. When the mirror has a specific radius, the scattered light from the particle and its reflection interfere so that the outgoing field no longer encodes any information about the particle’s position.
At this point, the particle is effectively invisible to the observer, with an interesting consequence: because the scattered light carries no usable information about the particle’s location, quantum backaction disappears. “I was initially convinced that we wanted to suppress the scatter,” team leader James Bateman tells Physics World. “After rigorous calculation, we arrived at the correct and surprising answer: we need to enhance the scatter.”
In fact, when scattering radiation is at its highest, the team calculated that the noise should disappear entirely. “Even though the particle shines brighter than it would in free space, we cannot tell in which direction it moves,” says Rafał Gajewski, a postdoctoral researcher at Swansea and Bateman’s co-author on a paper in Physical Review Research describing the technique.
Gajewski and Bateman’s result flips a core principle of quantum mechanics on its head. While it’s well known that measuring a quantum system disturbs it, the reverse is also true: if no information can be extracted, then no disturbance occurs, even when photons continuously bombard the particle. If physicists do need to gain information about the trapped nanoparticle, they can use a different, lower-energy laser to make their measurements, allowing experiments to be conducted at the Heisenberg limit with minimal noise.
Putting it into practice
For the method to work experimentally, the team say the mirror needs a high-quality surface and a radius that is stable with temperature changes. “Both requirements are challenging, but this level of control has been demonstrated and is achievable,” Gajewski says.
Positioning the particle precisely at the center of the hemisphere will be a further challenge, he adds, while the “disappearing” effect depends on the mirror’s reflectivity at the laser wavelength. The team is currently investigating potential solutions to both issues.
If demonstrated experimentally, the team says the technique could pave the way for quieter, more precise experiments and unlock a new generation of ultra-sensitive quantum sensors. Millen, who was not involved in the work, agrees. “I think the method used in this paper could possibly preserve quantum states in these particles, which would be very interesting,” he says.
Because nanoparticles are far more massive than atoms, Millen adds, they interact more strongly with gravity, making them ideal candidates for testing whether gravity follows the strange rules of quantum theory. “Quantum gravity – that’s like the holy grail in physics!” he says.
The UK-based company Delta.g has bagged the 2025 qBIG prize, which is awarded by the Institute of Physics (IOP). Initiated in 2023, qBIG celebrates and promotes the innovation and commercialization of quantum technologies in the UK and Ireland.
Based in Birmingham, Delta.g makes quantum sensors that measure the local gravity gradient. This is done using atom interferometry, whereby laser pulses are fired at a cloud of cold atoms that is freefalling under gravity.
On the Earth’s surface, this gradient is sensitive to the presence of buildings and underground voids such as tunnels. The technology was developed by physicists at the University of Birmingham and in 2022 they showed how it could be used to map out a tunnel below a road on campus. The system has also been deployed in a cave and on a ship to test its suitability for use in navigation.
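The underlying measurement can be sketched with the standard atom-interferometer phase relation φ = k_eff g T², where k_eff is the effective laser wavenumber and T the time between pulses; a gradiometer compares two clouds separated by a vertical baseline so that common-mode vibrations cancel. The numbers below are generic textbook values, not Delta.g’s specifications.

```python
# Sketch of the standard atom-interferometer phase relation phi = k_eff * g * T**2
# and of a two-cloud gradiometer. All numbers are generic textbook values and
# are not the specifications of Delta.g's instrument.

k_eff = 1.6e7      # effective two-photon wavenumber for rubidium Raman pulses, rad/m
T = 0.1            # time between interferometer pulses, s
g = 9.81           # local gravitational acceleration, m/s^2

phi_single = k_eff * g * T**2
print(f"single-cloud phase: {phi_single:.3e} rad")

# Gradiometer: difference in phase between two clouds separated vertically by d,
# for which common-mode platform vibrations largely cancel.
d = 1.0                     # vertical baseline, m
free_air_gradient = 3.1e-6  # approximate change in g per metre of height, s^-2
delta_phi = k_eff * free_air_gradient * d * T**2
print(f"differential phase over a {d:.0f} m baseline: {delta_phi:.2f} rad")
```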
Challenging to measure
“Gravity is a fundamental force, yet its full potential remains largely untapped because it is so challenging to measure,” explains Andrew Lamb who is co-founder and chief technology officer at Delta.g. “As the first to take quantum technology gravity gradiometry from the lab to the field, we have set a new benchmark for high-integrity, noise-resistant data transforming how we understand and navigate the subsurface.”
Awarded by the IOP, the qBIG prize is sponsored by Quantum Exponential, which is the UK’s first enterprise venture capital fund focused on quantum technology. The winner was announced today at the Economist’s Commercialising Quantum Global 2025 event in London. Delta.g receives a £10,000 unrestricted cash prize; 10 months of mentoring from Quantum Exponential; and business support from the IOP.
Louis Barson, the IOP’s director of science, innovation and skills says, “The IOP’s role as UK and Ireland coordinator of the International Year of Quantum 2025 gives us a unique opportunity to showcase the exciting developments in the quantum sector. Huge congratulations must go to the Delta.g team, whose incredible work stood out in a diverse and fast-moving field.”
Two runners-up were commended by the IOP. One is Glasgow-based Neuranics, which makes quantum sensors that detect tiny magnetic signals from the human body. The other is Southampton-based Smith Optical, which makes an augmented-reality display based on quantum technology.
The electrochemical reduction of carbon dioxide is used to produce a range of chemical and energy feedstocks including syngas (hydrogen and carbon monoxide), formic acid, methane and ethylene. As well as being an important industrial process, the large-scale reduction of carbon dioxide by electrolysis offers a practical way to capture and utilize carbon dioxide.
As a result, developing new and improved electrochemical processes for carbon-dioxide reduction is an important R&D activity. This work involves identifying which catalyst and electrolyte materials are optimal for efficient production. And when a promising electrochemical system is identified in the lab, the work is not over because the design must be then scaled up to create an efficient and practical industrial process.
Such R&D activities must overcome several challenges in operating and characterizing potential electrochemical systems. These include maintaining the correct humidification of carbon-dioxide gas during the electrolysis process and minimizing the production of carbonates – which can clog membranes and disrupt electrolysis.
While these challenges can be daunting, they can be overcome using the 670 Electrolysis Workstation from US-based Scribner. This is a general-purpose electrolysis system designed to test the materials used in the conversion of electrical energy to fuels and chemical feedstocks – and it is ideal for developing systems for carbon-dioxide reduction.
Turn-key and customizable
The workstation is a flexible system that is both turn-key and customizable. Liquid and gas reactants can be used on one or both of the workstation’s electrodes. Scribner has equipped the 670 Electrolysis Workstation with cells that feature gas diffusion electrodes and membranes from US-based Dioxide Materials. The company specializes in the development of technologies for converting carbon dioxide into fuels and chemicals, and it was chosen by Scribner because Dioxide Materials’ products are well documented in the scientific literature.
The gas diffusion electrodes are porous graphite cathodes through which carbon-dioxide gas flows between input and output ports. The gas can migrate from the graphite into a layer containing a metal catalyst. Membranes are used in electrolysis cells to ensure that only the desired ions are able to migrate across the cell, while blocking the movement of gases.
Fully integrated Scribner’s Jarrett Mansergh (left) and Luke Levin-Pompetzki of Hiden Analytical in Scribner’s lab after integrating the electrolysis and mass-spectrometry systems. (Courtesy: Scribner)
The system employs a multi-range ±20 A and 5 V potentiostat for high-accuracy operation over a wide range of reaction rates and cell sizes. The workstation is controlled by Scribner’s FlowCell™ software, which provides full control and monitoring of test cells and comes pre-loaded with a wide range of experimental protocols. This includes electrochemical impedance spectroscopy (EIS) capabilities up to 20 kHz and cyclic voltammetry protocols – both of which are used to characterize the health and performance of electrochemical systems. FlowCell™ also allows users to set up long-duration experiments while providing safety monitoring with alarm settings for the purging of gases.
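As a rough illustration of what an EIS measurement on such a cell yields, the sketch below evaluates the impedance of a simple Randles-type equivalent circuit over the workstation’s 20 kHz frequency range. The component values are invented for illustration and are not tied to any particular cell or to Scribner’s software.

```python
import numpy as np

# Illustrative EIS sketch: impedance of a simple Randles-type equivalent
# circuit (series resistance R_s, plus charge-transfer resistance R_ct in
# parallel with double-layer capacitance C_dl), swept up to 20 kHz.
# Component values are invented for illustration only.

R_s, R_ct, C_dl = 0.5, 2.0, 1e-3                 # ohm, ohm, farad (assumed)
f = np.logspace(-1, np.log10(2e4), 200)          # 0.1 Hz to 20 kHz
omega = 2 * np.pi * f

Z = R_s + R_ct / (1 + 1j * omega * R_ct * C_dl)

# A Nyquist plot of Re(Z) against -Im(Z) traces a semicircle whose diameter
# equals R_ct; the low- and high-frequency limits are printed below.
print(f"|Z| at 0.1 Hz: {abs(Z[0]):.2f} ohm, |Z| at 20 kHz: {abs(Z[-1]):.2f} ohm")
```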
Humidified gas
The 670 Electrolysis Workstation features a gas handling unit that can supply humidified gas to test cells. Adding water vapour to the carbon-dioxide reactant is crucial because the water provides the protons that are needed to convert carbon dioxide to products such as methane and syngas. Humidifying gas is very difficult and getting it wrong leads to unwanted condensation in the system. The 670 Electrolysis Workstation uses temperature control to minimize condensation. The same degree of control can be difficult to achieve in homemade systems, leading to failure.
The workstation offers electrochemical cells with 5 cm2 and 25 cm2 active areas. These can be used to build carbon-dioxide reduction cells using a range of materials, catalysts and membranes – allowing the performance of these prototype cells to be thoroughly evaluated. By studying cells at these two different sizes, researchers can scale up their electrochemical systems from a preliminary experiment to something that is closer in size to an industrial system. This makes the 670 Electrolysis Workstation ideal for use across university labs, start-up companies and corporate R&D labs.
The workstation can handle acids, bases and organic solutions. For carbon-dioxide reduction, the cell is operated with a liquid electrolyte on the positive electrode (anode) and gaseous carbon dioxide at the negative electrode (cathode). An electric potential is applied across the electrodes and the product gas comes off the cathode side.
The specific product is largely dependent on the catalyst used at the cathode. If a silver catalyst is used, for example, the cell is likely to produce syngas. If a tin catalyst is used, the product is more likely to be formic acid.
Mass spectrometry
The best way to ensure that the desired products are being made in the cell is to connect the gas output to a mass spectrometer. As a result, Scribner has joined forces with Hiden Analytical to integrate the UK-based company’s HPR-20 mass spectrometer for gas analysis. The Hiden system is specifically configured to perform continuous analysis of evolved gases and vapours from the 670 Electrolysis Workstation.
The Scribner CO2 Reduction Cell Fixture (Courtesy: Scribner)
If a cell is designed to create syngas, for example, the mass spectrometer will determine exactly how much carbon monoxide is being produced and how much hydrogen is being produced. At the same time, researchers can monitor the electrochemical properties of the cell. This allows researchers to study relationships between a system’s electrical performance and the chemical species that it produces.
Monitoring gas output is crucial for optimizing electrochemical processes to minimize unwanted side effects such as the production of carbonates – a significant problem in carbon-dioxide reduction.
In electrochemical cells, carbon dioxide is dissolved in a basic solution. This results in the precipitation of carbonate salts that clog up the membranes in cells, greatly reducing performance. This is a significant problem when scaling up cell designs for industrial use because commercial cells must be very long-lived.
Pulsed-mode operation
One strategy for dealing with carbonates is to operate electrochemical cells in pulsed mode, rather than in a steady state. The off time allows the carbonates to migrate away from electrodes, which minimizes clogging. The 670 Electrolysis Workstation allows users to explore the use of short, second-scale pulses. Another option that researchers can explore is the use of pulses of fresh water to flush carbonates away from the cathode area. These and other options are available in a set of pre-programmed experiments that allow users to explore the mitigation of salt formation in their electrochemical cells.
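A pulse programme of the kind described above can be summarized as a simple duty-cycle waveform, as in the hypothetical sketch below; the current level and on/off durations are invented values, not a recommended recipe for the 670 workstation.

```python
import numpy as np

# Hypothetical pulsed-mode current programme: constant current during the "on"
# phase, zero current (or a water flush) during the "off" phase, repeated.
# The current level and durations are invented illustrative values.

i_on = 0.2                 # applied current during a pulse, A
t_on, t_off = 2.0, 1.0     # pulse on/off durations, s
n_cycles = 5
dt = 0.01                  # time step, s

t = np.arange(0, n_cycles * (t_on + t_off), dt)
current = np.where(t % (t_on + t_off) < t_on, i_on, 0.0)

charge = current.sum() * dt   # total charge passed over the programme
print(f"duty cycle: {t_on / (t_on + t_off):.0%}, charge passed: {charge:.2f} C")
```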
The gaseous products of these carbonate-mitigation modes can be monitored in real time using Hiden’s mass spectrometer. This allows researchers to identify any changes in cell performance that are related to pulsed operation. Currently, electrochemical and product characteristics can be observed on time scales as short as 100 ms. This allows researchers to fine-tune how pulses are applied to minimize carbonate production and maximize the production of desired gases.
Real-time monitoring of product gases is also important when using EIS to observe the degradation of the electrochemical performance of a cell over time. This provides researchers with a fuller picture of what is happening in a cell as it ages.
The integration of Hiden’s mass spectrometer to the 670 Electrolysis Workstation is the latest innovation from Scribner. Now, the company is working on improving the time resolution of the system so that even shorter pulse durations can be studied by users. The company is also working on boosting the maximum current of the 670 to 100 A.
As we celebrate the International Year of Quantum Science and Technology, the quantum technology landscape is a swiftly evolving place. From developments in error correction and progress in hybrid classical-quantum architectures all the way to the commercialization of quantum sensors, there is much to celebrate.
An expert in quantum information processing and quantum technology, physicist Mauro Paternostro is based at the University of Palermo and Queen’s University Belfast. He is also editor-in-chief of the IOP Publishing journal Quantum Science and Technology, which celebrates its 10th anniversary this year. Paternostro talks to Tushna Commissariat about the most exciting recent developments in the field, his call for a Quantum Erasmus programme and his plans for the future of the journal.
What’s been the most interesting development in quantum technologies over the last year or so?
I have a straightforward answer as well as a more controversial one. First, the simpler point: the advances in quantum error correction for large-scale quantum registers are genuinely exciting. I’m specifically referring to the work conducted by Mikhail Lukin, Dolev Bluvstein and colleagues at Harvard University, and at the Massachusetts Institute of Technology and QuEra Computing, who built a quantum processor with 48 logical qubits that can execute algorithms while correcting errors in real time. In my opinion, this marks a significant step forward in developing computational platforms with embedded robustness. Error correction plays a vital role in the development of practical quantum computers, and Lukin and colleagues won Physics World’s 2024 Breakthrough of the Year award for their work.
Logical minds Dolev Bluvstein (left) and Mikhail Lukin with their quantum processor. (Courtesy: Jon Chase/Harvard University)
Now, for the more complex perspective. Aside from ongoing debate about whether Microsoft’s much-discussed eight-qubit topological quantum processor – Majorana 1 – is genuinely using topological qubits, I believe the device will help to catalyze progress in integrated quantum chips. While it may not qualify as a genuine breakthrough in the long run, this moment could be the pivotal turning-point in the evolution of quantum computational platforms. All the major players will likely feel compelled to accelerate their efforts toward the unequivocal demonstration of “quantum chip” capabilities, and such a competitive drive is just what both industry and government need right now.
Technical turning-point? Microsoft has unveiled a quantum processor called Majorana 1 that boasts a “topological core”. (Courtesy: John Brecher/Microsoft)
How do you think quantum technologies will scale up as they emerge from the lab and into real-world applications?
I am optimistic in this regard. In fact, progress is already underway, with quantum-sensing devices and atomic quantum clocks achieving the levels of technological readiness necessary for practical, real-world applications. In the future, hybrid quantum-high-performance computing (HPC) architectures will play crucial roles in bridging classical data-analysis with whatever the field evolves into, once quantum computers can offer genuine “quantum advantage” over classical machines.
Regarding communication, the substantial push toward networked, large-scale communication structures is noteworthy. The availability of the first operating system for programmable quantum networks opens “highways” toward constructing a large-scale “quantum internet”. This development promises to transform the landscape of communication, enabling new possibilities that we are just beginning to explore.
What needs to be done to ensure that the quantum sector can deliver on its promises in Europe and the rest of the world?
We must prioritize continuity and stability to maintain momentum. The national and supranational funding programmes that have supported developments and achievements over the past few years should not only continue, but be enhanced. I am concerned, however, that the current geopolitical climate, which is undoubtedly challenging, may divert attention and funding away from quantum technologies. Additionally, I worry that some researchers might feel compelled to shift their focus toward areas that align more closely with present priorities, such as military applications. While such shifts are understandable, they may not help us keep pace with the remarkable progress the field has made since governments in Europe and beyond began to invest substantially.
On a related note, we must take education seriously. It would be fantastic to establish a Quantum Erasmus programme that allows bachelor’s, master’s and PhD students in quantum technology to move freely across Europe so that they can acquire knowledge and expertise. We need coordinated national and supranational initiatives to build a pipeline of specialists in this field. Such efforts would provide the significant boost that quantum technology needs to continue thriving.
How can the overlap between quantum technology and artificial intelligence (AI) help each other develop?
The intersection and overlap between AI, high-performance computing, and quantum technologies are significant, and their interplay is, in my opinion, one of the most promising areas of exploration. While we are still in the early stages, we have only just started to tap into the potential of AI-based tools for tackling quantum tasks. We are already witnessing the emergence of the first quantum experiments supported by this hybrid approach to information processing.
The convergence of AI, HPC, and quantum computing would revolutionize how we conceive data processing, analysis, forecasting and many other such tasks. As we continue to explore and refine these technologies, the possibilities for innovation and advancement are vast, paving the way for transformations in various fields.
What do you hope the International Year of Quantum Science and Technology (IYQ) will have achieved, going forward?
The IYQ represents a global acknowledgment, at the highest levels, of the immense potential within this field. It presents a genuine opportunity to raise awareness worldwide about what a quantum paradigm for technological development can mean for humankind. It serves as a keyhole into the future, and IYQ could enable an unprecedented number of individuals – governments, leaders and policymakers alike – to peek through it and glimpse at this potential.
All stakeholders in the field should contribute to making this a memorable year. With IYQ, 2025 might even be considered as “year zero” of the quantum technology era.
As we mark its 10th anniversary, how have you enjoyed your time over the last year as editor-in-chief of the journal Quantum Science and Technology (QST)?
Time flies when you have fun, and this is a good time for me to reflect on the past year. Firstly, I want to express my heartfelt gratitude to Rob Thew, the founding editor-in-chief of QST, for his remarkable leadership during the journal’s early years. With unwavering dedication, he and the rest of the editorial board have established QST as an authoritative and selective reference point for the community engaged in the broad field of quantum science and technology. The journal is now firmly recognized as a leading platform for timely and significant research outcomes. A 94% increase in submissions since our fifth anniversary has led to an impressive 747 submissions from 62 countries in 2024 alone, revealing the growing recognition and popularity of QST among scholars. Our acceptance rate of 27% further demonstrates our commitment to publishing only the highest calibre research.
As we celebrate IYQ, QST will lead the way with several exciting editorial initiatives aimed at disseminating the latest achievements in addressing the essential “pillars” of quantum technologies – computing, communication, sensing, and simulation – while also providing authoritative perspectives and visions for the future. Our focus collections are currently seeking research for “Quantum technologies for quantum gravity” and “Focus on perspectives on the future of variational quantum computing”.
What are your goals with QST, looking ahead?
As quantum technologies advance into an inter- and multi-disciplinary realm, merging fundamental quantum-science with technological applications, QST is evolving as well. We have an increasing number of submissions addressing the burgeoning area of machine learning-enhanced quantum information processing, alongside pioneering studies exploring the application of quantum computing in fields such as chemistry, materials science and quantitative finance. All of this illustrates how QST is proactive in seizing opportunities to advance knowledge from our community of scholars and authors.
This dynamic growth is a fantastic way to celebrate the journal’s 10th anniversary, especially with the added significant milestone of IYQ. Finally, I want to highlight a matter that is very close to my heart, reflecting a much-needed “duty of care” for our readership. As editor-in-chief, I am honoured to support a journal that is part of the ‘Purpose-Led Publishing’ initiative. I view this as a significant commitment to integrity, ethics, high standards, and transparency, which should be the foundation of any scientific endeavour.
Researchers in Germany report that they have directly measured a superconducting gap in a sulphur hydride material for the first time. The new finding represents “smoking gun” evidence for superconductivity in these materials, while also confirming that the electron pairing that causes it is mediated by phonons.
Superconductors are materials that conduct electricity without resistance. Many materials behave this way when cooled below a certain transition temperature Tc, but in most cases this temperature is very low. For example, solid mercury, the first superconductor to be discovered, has a Tc of 4.2 K. Superconductors that operate at higher temperatures – perhaps even at room temperature – are thus highly desirable, as an ambient-temperature superconductor would dramatically increase the efficiency of electrical generators and transmission lines.
The rise of the superhydrides
The 1980s and 1990s saw considerable progress towards this goal thanks to the discovery of high-temperature copper oxide superconductors, which have Tcs in the range 30–133 K. Then, in 2015, the maximum known critical temperature rose even higher thanks to the discovery that a sulphide material, H3S, has a Tc of 203 K when compressed to pressures of 150 GPa.
This result sparked a flurry of interest in solid materials containing hydrogen atoms bonded to other elements. In 2019, the record was broken again, this time by lanthanum decahydride (LaH10), which was found to have a Tc of 250–260 K, again at very high pressures.
A further advance occurred in 2021 with the discovery of high-temperature superconductivity in cerium hydrides. These novel phases of CeH9 and another newly-synthesized material, CeH10, are remarkable in that they are stable and display high-temperature superconductivity at lower pressures (about 80 GPa, or 0.8 million atmospheres) than the other so-called “superhydrides”.
But how does it work?
One question left unanswered amid these advances concerned the mechanism for superhydride superconductivity. According to the Bardeen–Cooper–Schrieffer (BCS) theory of “conventional” superconductivity, superconductivity occurs when electrons overcome their mutual electrical repulsion to form pairs. These electron pairs, which are known as Cooper pairs, can then travel unhindered through the material as a supercurrent without scattering off phonons (quasiparticles arising from vibrations of the material’s crystal lattice) or other impurities.
Cooper pairing is characterized by a tell-tale energy gap near what’s known as the Fermi level, which is the highest energy level that electrons can occupy in a solid at a temperature of absolute zero. This gap is equivalent to the maximum energy required to break up a Cooper pair of electrons, and spotting it is regarded as unambiguous proof of that material’s superconducting nature.
For the superhydrides, however, this is easier said than done, because measuring such a gap requires instruments that can withstand the extremely high pressures required for superhydrides to exist and behave as superconductors. Traditional techniques such as scanning tunnelling spectroscopy or angle-resolved photoemission spectroscopy do not work, and there was little consensus on what might take their place.
Planar electron tunnelling spectroscopy
A team led by researchers at Germany’s Max Planck Institute for Chemistry has now stepped in by developing a form of spectroscopy that can operate under extreme pressures. The technique, known as planar electron tunnelling spectroscopy, required the researchers to synthesize highly pure planar tunnel junctions of H3S and its deuterated equivalent D3S under pressures of over 100 GPa. Using a technique called laser heating, they created junctions with three parts: a metal, tantalum; a barrier made of tantalum pentoxide, Ta2O5; and the H3S or D3S superconductors. By measuring the differential conductance across the junctions, they determined the density of electron states in H3S and D3S near the Fermi level.
These tunnelling spectra revealed that both H3S and D3S have fully open superconducting gaps of 60 meV and 44 meV respectively. According to team member Feng Du, the smaller gap in D3S confirms that the superconductivity in H3S comes about thanks to interactions between electrons and phonons – a finding that backs up long-standing predictions.
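A quick back-of-the-envelope check (ours, not the researchers’) shows why the two gap values point to phonons: for phonon-mediated pairing the gap should scale roughly as 1/√M of the vibrating ion, so swapping hydrogen for the twice-as-heavy deuterium should shrink it by a factor of about 1/√2 ≈ 0.71, close to the measured ratio.

```python
import math

# Back-of-the-envelope isotope-effect check (not a calculation from the paper):
# for phonon-mediated pairing the gap scales roughly as 1/sqrt(M) of the
# vibrating ion, so deuterium (M = 2) should give a gap about 1/sqrt(2) times
# that of hydrogen (M = 1).

gap_H3S, gap_D3S = 60.0, 44.0     # reported gap energies, meV
print(f"measured ratio D3S/H3S: {gap_D3S / gap_H3S:.2f}")
print(f"simple 1/sqrt(2) expectation: {1 / math.sqrt(2):.2f}")

# For scale, the thermal energy at H3S's 203 K transition temperature:
k_B = 0.08617                      # Boltzmann constant in meV/K
print(f"k_B * Tc = {k_B * 203:.1f} meV")
```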
The researchers hope their work, which they report on in Nature, will inspire more detailed studies of superhydrides. They now plan to measure the superconducting gap of other metal superhydrides and compare them with the covalent superhydrides they studied in this work. “The results from such experiments could help us understand the origin of the high Tc in these superconductors,” Du tells Physics World.
Researchers on the AEgIS collaboration at CERN have designed an experiment that could soon boost our understanding of how antimatter falls under gravity. Created by a team led by Francesco Guatieri at the Technical University of Munich, the scheme uses modified smartphone camera sensors to improve the spatial resolution of measurements of antimatter annihilations. This approach could be used in rigorous tests of the weak equivalence principle (WEP).
The WEP is a key concept of Albert Einstein’s general theory of relativity, which underpins our understanding of gravity. It suggests that within a gravitational field, all objects should be accelerated at the same rate, regardless of their mass or whether they are matter or antimatter. Therefore, if matter and antimatter accelerate at different rates in freefall, it would reveal serious problems with the WEP.
In 2023 the ALPHA-g experiment at CERN was the first to observe how antimatter responds to gravity. Its researchers found that antimatter falls down, with the tantalizing possibility that its gravitational response is weaker than matter’s. Today, several experiments are seeking to improve on this observation.
Falling beam
AEgIS’ approach is to create a horizontal beam of cold antihydrogen atoms and observe how the atoms fall under gravity. The drop will be measured by a moiré deflectometer in which a beam passes through two successive and aligned grids of horizontal slits before striking a position-sensitive detector. As the beam falls under gravity between the grids, the effect is similar to a slight horizontal misalignment of the grids. This creates a moiré pattern – or superlattice – that results in the particles making a distinctive pattern on the detector. By detecting a difference between the measured moiré pattern and that predicted by the WEP, the AEgIS collaboration hopes to reveal a discrepancy with general relativity.
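To see the size of the effect being sought, the sketch below applies the basic deflectometer relation: an atom crossing the distance L between the gratings at speed v falls by roughly δ = g(L/v)², and that drop registers as a shift of the moiré fringes measured in units of the grating period. All numbers are invented for illustration; they are not AEgIS design parameters.

```python
import math

# Basic moire-deflectometer estimate. All numbers are invented for
# illustration and are not AEgIS design parameters.

g = 9.81        # m/s^2, assuming antihydrogen falls like ordinary matter
L = 0.5         # separation between the two gratings, m (assumed)
v = 500.0       # horizontal beam speed, m/s (assumed)
d = 40e-6       # grating period, m (assumed)

tau = L / v                     # time of flight between the gratings
drop = g * tau**2               # vertical fall accumulated between the gratings
phase = 2 * math.pi * drop / d  # corresponding shift of the moire fringes

print(f"drop: {drop * 1e6:.1f} micron, fringe phase shift: {phase:.2f} rad")
```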
However, as Guatieri explains, a number of innovations are required for this to work. “For AEgIS to work, we need a detector with incredibly high spatial resolution. Previously, photographic plates were the only option, but they lacked real-time capabilities.”
AEgIS physicists are addressing this by developing a new vertexing detector. Instead of focussing on the antiparticles directly, their approach detects the secondary particles produced when the antimatter annihilates on contact with the detector. Tracing the trajectories of these particles back to their vertex gives the precise location of the annihilation.
Vertexing detector
Borrowing from industry, the team has created its vertexing detector using an array of modified mobile-phone camera sensors. Guatieri had already used this approach to measure the real-time positions of low-energy positrons (anti-electrons) with unprecedented precision.
“Mobile camera sensors have pixels smaller than 1 micron,” Guatieri describes. “We had to strip away the first layers of the sensors, which are made to deal with the advanced integrated electronics of mobile phones. This required high-level electronic design and micro-engineering.”
With these modifications in place, the team measured the positions of antiproton annihilations to within just 0.62 micron, making their detector some 35 times more precise than previous designs.
Many benefits
“Our solution, demonstrated for antiprotons and directly applicable to antihydrogen, combines photographic-plate-level resolution, real-time diagnostics, self-calibration and a good particle collection surface, all in one device,” Guatieri says.
With some further improvements, the AEgIS team is confident that their vertexing detector will boost the resolution of measurements of the freefall of horizontal antihydrogen beams – allowing rigorous tests of the WEP.
AEgIS team member Ruggero Caravita of Italy’s University of Trento adds, “This game-changing technology could also find broader applications in experiments where high position resolution is crucial, or to develop high-resolution trackers”. He says, “Its extraordinary resolution enables us to distinguish between different annihilation fragments, paving the way for new research on low-energy antiparticle annihilation in materials”.
A ceremony has been held today to officially open the Ray Dolby Centre at the University of Cambridge. Named after the Cambridge physicist and sound pioneer Ray Dolby, who died in 2013, the facility is the new home of the Cavendish Laboratory and will feature 173 labs as well as lecture halls, workshops, cleanrooms and offices.
Spanning 33 000 m² across five floors, the new centre will house 1100 staff members and students.
The basement will feature microscopy and laser labs containing vibration-sensitive equipment as well as 2500 m² of clean rooms.
The Ray Dolby Centre will also serve as a national hub for physics, hosting the Collaborative R&D Environment – an EPSRC National Facility – that will foster collaboration between industry and university researchers and enhance public access to new research.
Parts of the centre will be open to the public, including a café as well as outreach and exhibition spaces that are organised around six courtyards.
The centre also provides a new home for the Cavendish Museum, which includes the model of DNA created by James Watson and Francis Crick as well as the cathode ray tube that was used to discover the electron.
The ceremony today was attended by Dagmar Dolby, president of the Ray and Dagmar Dolby Family Fund, Deborah Prentice, vice-chancellor of the University of Cambridge and physicist Mete Atatüre, who is head of the Cavendish Laboratory.
“The greatest impacts on society – including the Cavendish’s biggest discoveries – have happened because of that combination of technological capability and human ingenuity,” notes Atatüre. “Science is getting more complex and technically demanding with progress, but now we have the facilities we need for our scientists to ask those questions, in the pursuit of discovering creative paths to the answers – that’s what we hope to create with the Ray Dolby Centre.”
Physicists have succeeded in making neutrons travel in a curved parabolic waveform known as an Airy beam. This behaviour, which had previously been observed in photons and electrons but never in a non-elementary particle, could be exploited in fundamental quantum science research and in advanced imaging techniques for materials characterization and development.
In free space, beams of light propagate in straight lines. When they pass through an aperture, they diffract, becoming wider and less intense. Airy beams, however, are different. Named after the 19th-century British scientist George Biddell Airy, who developed the mathematics behind them while studying rainbows, they follow a parabola-shaped path – a property known as self-acceleration – and do not spread out as they travel. Airy beams are also “self-healing”, meaning that they reconstruct themselves after passing through an obstacle that blocked part of the beam.
Scientists have been especially interested in Airy beams since 1979, when theoretical work by the physicist Michael Berry suggested several possible applications for them, says Dmitry Pushin, a physicist at the Institute for Quantum Computing (IQC) and the University of Waterloo, Canada. Researchers created the first Airy beams from light in 2007, followed by an electron Airy beam in 2013.
“Inspired by the unusual properties of these beams in optics and electron experiments, we wondered whether similar effects could be harnessed for neutrons,” Pushin says.
Making such beams out of neutrons turned out to be challenging, however. Because neutrons have no charge, they cannot be shaped by electric fields. Also, lenses that focus neutron beams do not exist.
A holographic approach
A team led by Pushin and Dusan Sarenac of the University at Buffalo’s Department of Physics in the US has now overcome these difficulties using a holographic approach based on a custom-microfabricated silicon diffraction grating. The team made this grating from an array of 6 250 000 micron-sized cubic phase patterns etched onto a silicon slab. “The grating modulates incoming neutrons into an Airy form and the resulting beam follows a curved trajectory, exhibiting the characteristics of a two-dimensional Airy profile at a neutron detector,” Sarenac explains.
According to Pushin, it took years of work to figure out the correct dimensions for the array. Once the design was optimized, however, fabricating it took just 48 hours at the IQC’s nanofabrication facility. “Developing a precise wave phase modulation method using holography and silicon microfabrication allowed us to overcome the difficulties in manipulating neutrons,” he says.
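The general principle behind such a cubic-phase grating is familiar from the optical Airy beams made in 2007: imprint a cubic phase on an incoming plane wave and the far-field pattern, which is essentially a Fourier transform, takes on an Airy profile. The one-dimensional numerical sketch below illustrates that generic recipe in arbitrary units; it is not a model of the team’s silicon grating.

```python
import numpy as np

# Generic 1D illustration of the cubic-phase recipe behind Airy beams: a plane
# wave picks up a phase proportional to x**3, and its far field (computed here
# as a Fourier transform) shows the asymmetric, lobed Airy profile. The units
# and the phase strength are arbitrary.

N = 4096
x = np.linspace(-1.0, 1.0, N)
a = 300.0                              # cubic-phase strength, arbitrary

field_after_mask = np.exp(1j * a * x**3)
far_field = np.fft.fftshift(np.fft.fft(field_after_mask))
intensity = np.abs(far_field)**2

# The profile is asymmetric: oscillatory side lobes on one side of the main
# peak, rapid decay on the other. Compare the intensity in a window on each side.
peak = int(np.argmax(intensity))
side = 50
tail = intensity[peak - side:peak].sum()          # oscillatory (lobed) side
dark = intensity[peak + 1:peak + 1 + side].sum()  # rapidly decaying side
print(f"main lobe at sample {peak} of {N}; lobed-to-dark intensity ratio: {tail / dark:.1f}")
```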
The researchers say the self-acceleration and self-healing properties of Airy beams could improve existing neutron imaging techniques (including neutron scattering and diffraction), potentially delivering sharper and more detailed images. The new beams might even allow for new types of neutron optics and could be particularly useful, for example, when targeting specific regions of a sample or navigating around structures.
Creating the neutron Airy beams required access to international neutron science facilities such as the US National Institute of Standards and Technology’s Center for Neutron Research; the US Department of Energy’s Oak Ridge National Laboratory; and the Paul Scherrer Institute in Villigen, Switzerland. To continue their studies, the researchers plan to use the UK’s ISIS Neutron and Muon Source to explore ways of combining neutron Airy beams with other structured neutron beams (such as helical waves of neutrons or neutron vortices). This could make it possible to investigate complex properties such as the chirality, or handedness, of materials. Such work could be useful in drug development and materials science. Since a material’s chirality affects how its electrons spin, it could be important for spintronics and quantum computing, too.
“We also aim to further optimize beam shaping for specific applications,” Sarenac tells Physics World. “Ultimately, we hope to establish a toolkit for advanced neutron optics that can be tailored for a wide range of scientific and industrial uses.”
Chatbots could boost students’ interest in maths and physics and make learning more enjoyable. So say researchers in Germany, who compared the emotional response of students using artificial intelligence (AI) texts to learn physics with that of students who only read traditional textbooks. The team, however, found no difference in test performance between the two groups.
The study was led by Julia Lademann, a physics-education researcher from the University of Cologne, who wanted to see if AI could boost students’ interest in physics. The team did this by creating a customized chatbot using OpenAI’s ChatGPT model with a tone and language that was considered accessible to second-year high-school students in Germany.
After testing the chatbot for factual accuracy and for its use of motivating language, the researchers prompted it to generate explanatory text on proportional relationships in physics and mathematics. They then split 214 students, who had an average age of 11.7, into two groups. One was given textbook material on the topic along with the chatbot text, while the control group only got the textbook.
The researchers first surveyed the students’ interest in mathematics and physics and then gave them 15 minutes to review the learning material. Their interest was assessed again afterwards along with the students’ emotional state and “cognitive load” – the mental effort required to do the work – through a series of questionnaires.
Higher confidence
The chatbot was found to significantly enhance students’ positive emotions – including pleasure and satisfaction, interest in the learning material and self-belief in their understanding of the subject – compared with those who only used textbook text. “The text of the chatbot is more human-like, more conversational than texts you will find in a textbook,” explains Lademann. “It is more chatty.”
Chatbot text was also found to reduce cognitive load. “The group that used the chatbot explanation experienced higher positive feelings about the subject [and] they also had a higher confidence in their learning comprehension,” adds Lademann.
Tests taken within 30 minutes of the “learning phase” of the experiment, however, found no difference in performance between students that received the AI-generated explanatory text and the control group, despite the former receiving more information. Lademann says this could be due to the short study time of 15 minutes.
The researchers say that while their findings suggest that AI could provide a superior learning experience for students, further research is needed to assess its impact on learning performance and long-term outcomes. “It is also important that this improved interest manifests in improved learning performance,” Lademann adds.
Lademann would now like to see “longer term studies with a lot of participants and with children actually using the chatbot”. Such research would explore the potential key strength of chatbots: their ability to respond in real time to students’ queries and adapt their learning level to each individual student.
First light The cosmic microwave background, as imaged by the European Space Agency’s Planck mission. (Courtesy: ESA and the Planck Collaboration)
In classical physics, gravity is universally attractive. At the quantum level, however, this may not always be the case. If vast quantities of matter are present within an infinitesimally small volume – at the centre of a black hole, for example, or during the very earliest moments of the universe – space–time becomes curved at scales that approach the Planck length. This is the fundamental quantum unit of distance, and is around 10²⁰ times smaller than a proton.
In these extremely curved regions, the classical theory of gravity – Einstein’s general theory of relativity – breaks down. However, research on loop quantum cosmology offers a possible solution. It suggests that gravity, in effect, becomes repulsive. Consequently, loop quantum cosmology predicts that our present universe began in a so-called “cosmic bounce”, rather than the Big Bang singularity predicted by general relativity.
In a recent paper published in EPL, Edward Wilson-Ewing, a mathematical physicist at the University of New Brunswick, Canada, explores the interplay between loop quantum cosmology and a phenomenon sometimes described as “the echo of the Big Bang”: the cosmic microwave background (CMB). This background radiation pervades the entire visible universe, and it stems from the moment the universe became cool enough for neutral atoms to form. At this point, light was suddenly able to travel through space without being continually scattered by the plasma of electrons and light nuclei that existed before. It is this freshly liberated light that makes up the CMB, so studying it offers clues to what the early universe was like.
Cosmologist Edward Wilson-Ewing uses loop quantum gravity to study quantum effects in the very early universe. (Courtesy: University of New Brunswick)
What was the motivation for your research?
Observations of the CMB show that the early universe (that is, the universe as it was when the CMB formed) was extremely homogeneous, with relative anisotropies of the order of one part in 10⁴. Classical general relativity has trouble explaining this homogeneity on its own, because a purely attractive version of gravity tends to drive things in the opposite direction. This is because if a region has a higher density than the surrounding area, then according to general relativity, that region will become even denser; there is more mass in that region and therefore particles surrounding it will be attracted to it. Indeed, this is how the small inhomogeneities we do see in the CMB grew over time to form stars and galaxies today.
The main way this gets resolved in classical general relativity is to suggest that the universe experienced an episode of super-rapid growth in its earliest moments. This super-rapid growth is known as inflation, and it can suffice to generate homogeneous regions. However, in general, this requires a very large amount of inflation (much more than is typically considered in most models).
Alternately, if for some reason there happens to be a region that is moderately homogeneous when inflation starts, this region will increase exponentially in size while also becoming further homogenized. This second possibility requires a little more than a minimal amount of inflation, but not much more.
My goal in this work was to explore whether, if gravity becomes repulsive in the deep quantum regime (as is the case in loop quantum cosmology), this will tend to dilute regions of higher density, leading to inhomogeneities being smoothed out. In other words, one of the main objectives of this work was to find out whether quantum gravity could be the source of the high degree of homogeneity observed in the CMB.
What did you do in the paper?
In this paper, I studied spherically symmetric space–times coupled to dust (a simple model for matter) in loop quantum cosmology. These space–times are known as Lemaître–Tolman–Bondi space–times, and they allow arbitrarily large inhomogeneities in the radial direction. They therefore provide an ideal arena to explore whether homogenization can occur: they are simple enough to be mathematically tractable, while still allowing for large inhomogeneities (which, in general, are very hard to handle).
Loop quantum cosmology predicts several leading-order quantum effects. One of these effects is that space–time, at the quantum level, is discrete: there are quanta of geometry just as there are quanta of matter. This has implications for the equations of motion, which relate the geometry of space–time to the matter in it: if we take into account the discrete nature of quantum geometry, we have to modify the equations of motion.
These modifications are captured by so-called effective equations, and in the paper I solved these equations numerically for a wide range of initial conditions. From this, I found that while homogenization doesn’t occur everywhere, it always occurs in some regions. These homogenized regions can then be blown up to cosmological scales by inflation (and inflation will further homogenize them). Therefore, this quantum gravity homogenization process could indeed explain the homogeneity observed in the CMB.
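For readers who want the flavour of how repulsion enters, the best-known effective equation of homogeneous loop quantum cosmology is reproduced below; it is quoted here only as background, since the paper itself works with the more general spherically symmetric (Lemaître–Tolman–Bondi) effective equations.

```latex
% Effective Friedmann equation of homogeneous loop quantum cosmology,
% quoted as background only (the paper works with the more general
% Lemaitre-Tolman-Bondi effective equations):
\[
  H^{2} \;=\; \frac{8\pi G}{3}\,\rho\left(1-\frac{\rho}{\rho_{\mathrm{c}}}\right),
\]
% where H is the Hubble rate, \rho the energy density and \rho_c a critical,
% Planck-scale density. As \rho approaches \rho_c the right-hand side goes to
% zero and the contraction reverses: gravity effectively becomes repulsive,
% producing the "cosmic bounce" in place of the Big Bang singularity.
```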
What do you plan to do next?
It is important to extend this work in several directions to check the robustness of the homogenization effect in loop quantum cosmology. The restriction to spherical symmetry should be relaxed, although this will be challenging from a mathematical perspective. It will also be important to go beyond dust as a description of matter. The simplicity of dust makes calculations easier, but it is not particularly realistic.
Other relevant forms of matter include radiation and the so-called inflaton field, which is a type of matter that can cause inflation to occur. That said, in cosmology, the physics is to some extent independent of the universe’s matter content, at least at a qualitative level. This is because while different types of matter content may dilute more rapidly than others in an expanding universe, and the universe may expand at different rates depending on its matter content, the main properties of the cosmological dynamics (for example, the expanding universe, the occurrence of an initial singularity and so on) within general relativity are independent of the specific matter being considered.
I therefore think it is reasonable to expect that the quantitative predictions will depend on the matter content, but the qualitative features (in particular, that small regions are homogenized by quantum gravity) will remain the same. Still, further research is needed to test this expectation.
This episode of the Physics World Weekly podcast comes from the Chicago metropolitan area – a scientific powerhouse that is home to two US national labs and some of the country’s leading universities.
Physics World’s Margaret Harris was there recently and met Nadya Mason. She is dean of the Pritzker School of Molecular Engineering at the University of Chicago, which focuses on quantum engineering; materials for sustainability; and immunoengineering. Mason explains how molecular-level science is making breakthroughs in these fields and she talks about her own research on the electronic properties of nanoscale and correlated systems.
Harris also spoke to Jeffrey Spangenberger who leads the Materials Recycling Group at Argonne National Laboratory, which is on the outskirts of Chicago. Spangenberger talks about the challenges of recycling batteries and how we could make it easier to recover materials from batteries of the future. Spangenberger leads the ReCell Center, a national collaboration of industry, academia and national laboratories that is advancing recycling technologies along the entire battery life-cycle.
On 13–14 May, The Economist is hosting Commercialising Quantum Global 2025 in London. The event is supported by the Institute of Physics – which brings you Physics World. Participants will join global leaders from business, science and policy for two days of real-world insights into quantum’s future. In London you will explore breakthroughs in quantum computing, communications and sensing, and discover how these technologies are shaping industries, economies and global regulation. Register now.
What is the main role of the European Centre for Medium-Range Weather Forecasts (ECMWF)?
Making weather forecasts more accurate is at the heart of what we do at the ECMWF, working in close collaboration with our member states and their national meteorological services (see box below). That means enhanced forecasting for the weeks and months ahead as well as seasonal and annual predictions. We also have a remit to monitor the atmosphere and the environment – globally and regionally – within the context of a changing climate.
How does the ECMWF produce its weather forecasts?
Our task is to get the best representation, in a 3D sense, of the current state of the atmosphere versus key metrics like wind, temperature, humidity and cloud cover. We do this via a process of reanalysis and data assimilation: combining the previous short-range weather forecast, and its component data, with the latest atmospheric observations – from satellites, ground stations, radars, weather balloons and aircraft. Unsurprisingly, using all this observational data is a huge challenge, with the exploitation of satellite measurements a significant driver of improved forecasting over the past decade.
In what ways do satellite measurements help?
Consider the EarthCARE satellite that was launched in May 2024 by the European Space Agency (ESA) and is helping ECMWF to improve its modelling of clouds, aerosols and precipitation. EarthCARE has a unique combination of scientific instruments – a cloud-profiling radar, an atmospheric lidar, a multispectral imager and a broadband radiometer – to infer the properties of clouds and how they interact with solar radiation as well as thermal-infrared radiation emitted by different layers of the atmosphere.
How are you combining such data with modelling?
The ECMWF team is learning how to interpret and exploit the EarthCARE data to directly initiate our models. Put simply, mathematical models that better represent clouds and, in turn, yield more accurate forecasts. Indirectly, EarthCARE is also revealing a clearer picture of the fundamental physics governing cloud formation, distribution and behaviour. This is just one example of numerous developments taking advantage of new satellite data. We are looking forward, in particular, to fully exploiting next-generation satellite programmes from the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) – including the EPS-SG polar-orbiting system and the Meteosat Third Generation geostationary satellite for continuous monitoring over Europe, Africa and the Indian Ocean.
Big data, big opportunities: the ECMWF’s high-performance computing facility in Bologna, Italy, is the engine-room of the organization’s weather and climate modelling efforts. (Courtesy: ECMWF)
What other factors help improve forecast accuracy?
We talk of “a day, a decade” improvement in weather forecasting, such that a five-day forecast now is as good as a three-day forecast 20 years ago. A richer and broader mix of observational data underpins that improvement, with diverse data streams feeding into bigger supercomputers that can run higher-resolution models and better algorithms. Equally important is ECMWF’s team of multidisciplinary scientists, whose understanding of the atmosphere and climate helps to optimize our models and data assimilation methods. A case study in this regard is Destination Earth, an ambitious European Union initiative to create a series of “digital twins” – interactive computer simulations – of our planet by 2030. Working with ESA and EUMETSAT, the ECMWF is building the software and data environment for Destination Earth as well as developing the first two digital twins.
What are these two twins?
Our Digital Twin on Weather-Induced and Geophysical Extremes will assess and predict environmental extremes to support risk assessment and management. Meanwhile, in collaboration with others, the Digital Twin on Climate Change Adaptation complements and extends existing capabilities for the analysis and testing of “what if” scenarios – supporting sustainable development and climate adaptation and mitigation policy-making over multidecadal timescales.
What kind of resolution will these models have?
Both digital twins integrate the sea, atmosphere, land, hydrology and sea ice – and their deep connections – at a resolution currently impossible to reach. Right now, for example, the ECMWF’s operational forecasts cover the whole globe in a 9 km grid – effectively a localized forecast every 9 km. With Destination Earth, we’re experimenting with 4 km, 2 km, and even 1 km grids.
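To get a sense of why finer grids are so demanding, the rough estimate below (ours, not an ECMWF figure) counts how many horizontal grid columns a global model needs at each of the spacings mentioned: the total grows with the square of the refinement.

```python
import math

# Rough estimate (not an ECMWF figure) of the number of horizontal grid
# columns needed to tile the globe at the grid spacings mentioned above.
# The count scales as Earth's surface area divided by the cell area.

R_EARTH_KM = 6371.0
surface_area = 4 * math.pi * R_EARTH_KM**2      # km^2

for dx in (9, 4, 2, 1):                         # grid spacing, km
    columns = surface_area / dx**2
    print(f"{dx:>2} km grid: ~{columns:.1e} columns")
```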
In February, the ECMWF unveiled a 10-year strategy to accelerate the use of machine learning and AI. How will this be implemented?
The new strategy prioritizes growing exploitation of data-driven methods anchored on established physics-based modelling – rapidly scaling up our previous deployment of machine learning and AI. There are also a variety of hybrid approaches combining data-driven and physics-based modelling.
What will this help you achieve?
On the one hand, data assimilation and observations will help us to directly improve as well as initialize our physics-based forecasting models – for example, by optimizing uncertain parameters or learning correction terms. We are also investigating the potential of applying machine-learning techniques directly on observations – in effect, to make another step beyond the current state-of-the-art and produce forecasts without the need for reanalysis or data assimilation.
How is machine learning deployed at the moment?
Progress in machine learning and AI has been dramatic over the past couple of years – so much so that we launched our Artificial Intelligence Forecasting System (AIFS) back in February. Trained on many years of reanalysis and using traditional data assimilation, AIFS is already an important addition to our suite of forecasts, though still working off the coat-tails of our physics-based predictive models. Another notable innovation is our Probability of Fire machine-learning model, which incorporates multiple data sources beyond weather prediction to identify regional and localized hot-spots at risk of ignition. Those additional parameters – among them human presence, lightning activity as well as vegetation abundance and its dryness – help to pinpoint areas of targeted fire risk, improving the model’s predictive skill by up to 30%.
What do you like most about working at the ECMWF?
Every day, the ECMWF addresses cutting-edge scientific problems – as challenging as anything you’ll encounter in an academic setting – by applying its expertise in atmospheric physics, mathematical modelling, environmental science, big data and other disciplines. What’s especially motivating, however, is that the ECMWF is a mission-driven endeavour with a straight line from our research outcomes to wider societal and economic benefits.
ECMWF at 50: new frontiers in weather and climate prediction
The European Centre for Medium-Range Weather Forecasts (ECMWF) is an independent intergovernmental organization supported by 35 states – 23 member states and 12 co-operating states. Established in 1975, the centre employs around 500 staff from more than 30 countries at its headquarters in Reading, UK, and sites in Bologna, Italy, and Bonn, Germany. As a research institute and 24/7 operational service, the ECMWF produces global numerical weather predictions four times per day and other data for its member/cooperating states and the broader meteorological community.
The ECMWF processes data from around 90 satellite instruments as part of its daily activities (yielding 60 million quality-controlled observations each day for use in its Integrated Forecasting System). The centre is a key player in Copernicus – the Earth observation component of the EU’s space programme – by contributing information on climate change for the Copernicus Climate Change Service; atmospheric composition to the Copernicus Atmosphere Monitoring Service; as well as flooding and fire danger for the Copernicus Emergency Management Service. This year, the ECMWF is celebrating its 50th anniversary and has a series of celebratory events scheduled in Bologna (15–19 September) and Reading (1–5 December).
During this webinar, the key steps of integrating an MRI scanner and an MR-Linac into a radiotherapy department will be presented, with a special focus on the quality assurance required for the use of the MRI images. Furthermore, the use of phantoms and their synergy with each other across the multi-vendor facility will be discussed.
Akos Gulyban
Akos Gulyban is a medical physicist with a PhD in Physics (in Medicine), renowned for his expertise in MRI-guided radiotherapy (MRgRT). Currently based at Institut Jules Bordet in Brussels, he plays a pivotal role in advancing MRgRT technologies, particularly through the integration of the Elekta Unity MR-Linac system alongside the implementation of dedicated MRI simulation for radiotherapy.
In addition to his clinical research, Gulyban has been involved in developing quality assurance protocols for MRI-linear accelerator (MR-Linac) systems, contributing to guidelines that ensure safe and effective implementation of MRI-guided radiotherapy.
Gulyban is playing a pivotal role in integrating advanced imaging technologies into radiotherapy, striving to enhance treatment outcomes for cancer patients.
Finding fakes Illustration of how neutrons can pass easily through the metallic regions of an old coin, but are blocked by hydrogen-bearing compounds formed by corrosion. (Courtesy: S Kelley/NIST)
The presence of hydrogen in a sample is usually a bad thing in neutron scattering experiments, but now researchers in the US have turned the tables on the lightest element and used it to spot fake antique coins.
The scattering of relatively slow-moving neutrons from materials provides a wide range of structural information. This is because these “cold” neutrons have wavelengths on a par with the separations of atoms in a material. However, materials that contain large amounts of hydrogen-1 nuclei (protons) can be difficult to study because hydrogen is very good at scattering neutrons in random directions – creating a noisy background signal. Indeed, biological samples containing lots of hydrogen are usually “deuterated” – replacing hydrogen with deuterium – before they are placed in a neutron beam.
However, there are some special cases where this incoherent scattering of hydrogen can be useful – measuring the water content of samples, for example.
Surfeit of hydrogen
Now, researchers in the US and South Korea have used a neutron beam to differentiate between genuine antique coins and fakes. The technique relies on the fact that the genuine coins have suffered corrosion that has resulted in the inclusion of hydrogen-bearing compounds within the coins.
Led by Youngju Kim and Daniel Hussey at the National Institute of Standards and Technology (NIST) in Colorado, the team fired a parallel beam of neutrons through individual coins (see figure). The particles travel with ease through a coin’s original metal, but tend to be scattered by the hydrogen-rich corrosion inclusions. This creates a 2D pattern of high- and low-intensity regions on a neutron-sensitive screen behind the coin. The coin is then rotated and a series of images taken, and the researchers use computed tomography to build a 3D image showing the corroded regions of the coin.
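The reconstruction step works in the same way as medical CT. The hypothetical sketch below uses scikit-image's filtered back-projection to turn a stack of rotation-angle projections (a sinogram) into one cross-sectional slice; the random array simply stands in for real attenuation data, and NIST's actual processing pipeline is not described here.

```python
import numpy as np
from skimage.transform import iradon

angles = np.linspace(0.0, 180.0, 181)           # rotation angles of the coin, in degrees
sinogram = np.random.rand(256, angles.size)     # placeholder for measured neutron attenuation profiles

# Filtered back-projection recovers one 2D slice through the coin;
# stacking reconstructed slices builds the 3D map of hydrogen-rich corrosion.
slice_2d = iradon(sinogram, theta=angles)
print(slice_2d.shape)
```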
The team used this neutron tomography technique to examine an authentic 19th-century coin that was recovered from a shipwreck, and a coin that is known to be a replica. Although both coins had surface corrosion, the corrosion extended much deeper into the bulk of the authentic coin than it did in the replica.
The researchers also used a separate technique called neutron grating interferometry to characterize the pores in the surfaces of the coins. Pores are common on the surface of coins that have been buried or submerged. Authentic antique coins are often found buried or submerged, whereas replica coins are sometimes deliberately buried or submerged to make them look more authentic.
Small-angle scattering
Neutron grating interferometry looks at the small-angle scattering of neutrons from a sample and focuses on structures that range in size from about 1 nm to 1 micron.
The team found that the authentic coin had many more tiny pores than the replica coin, which was dominated by much larger (millimetre scale) pores.
This observation was expected because when a coin is buried or submerged, chemical reactions cause metals to leach out of its surface, creating millimetre-sized pores. As time progresses, however, further chemical reactions cause corrosion by-products such as copper carbonates to fill in the pores. The result is that the pores in the older authentic coin are smaller than the pores in the newer replica coin.
The team now plans to expand its study to include more Korean coins and other metallic artefacts. The techniques could also be used to pinpoint corrosion damage in antique coins, allowing these areas to be protected using coatings.
As well as being important to coin collectors and dealers, the ability to verify the age of coins is of interest to historians and economists – who use the presence of coins as evidence in their research.
The study was done using neutrons from NIST’s research reactor in Maryland. That facility is currently offline and scheduled to restart in 2026, so in the meantime the team plans to continue its investigation using a neutron source in South Korea.
The first clear images of Yellowstone’s shallowest magma reservoir have revealed its depth with unprecedented precision, providing information that could help scientists determine how dangerous it is. By pinpointing the reservoir’s location, geophysicists and seismologists from Rice University and the universities of Utah, New Mexico and Texas at Dallas, hope to develop more accurate predictions of when this so-called “supervolcano” will erupt again.
Yellowstone is America’s oldest national park, and it owes its spectacular geysers and hot springs to its location above one of the world’s largest volcanoes. The last major eruption of the Yellowstone supervolcano happened around 630 000 years ago, and was violent enough to create a collapsed crater, or caldera, over 60 km across. Though it shows no sign of repeating this cataclysm anytime soon, it is still an active volcano, and it is slowly forming a new magma reservoir.
Previous estimates of the depth of this magma reservoir were highly imprecise, ranging from three to eight kilometres. Scientists also lacked an accurate location for the reservoir’s top and were unsure how its properties changed with increasing depth.
The latest results, from a team led by Brandon Schmandt and Chenglong Duan at Rice and Jamie Farrell at Utah, show that the reservoir’s top lies 3.8 km below the surface. They also show evidence of an abrupt downward transition into a mixture of gas bubbles and magma filling the pore space of volcanic rock. The gas bubbles are made mostly of H2O in supercritical form, while the magma comprises molten silicic rock such as rhyolite.
Creating artificial seismic waves
Duan and colleagues obtained their result by using a mechanical vibration source (a specialized truck built by the oil and gas firm Dawson Geophysical) to create artificial seismic waves across the ground beneath the northeast portion of Yellowstone’s caldera. They also deployed a network of hundreds of portable seismometers capable of recording both vertical and horizontal ground vibrations, spaced at 100 to 150-m intervals, across the national park. “Researchers already knew from previous seismic and geochemical studies that this region was underlain by magma, but we needed new field data and an innovative adaptation of conventional seismic imaging techniques,” explains Schmandt. The new study, he tells Physics World, is “a good example of how the same technologies are relevant to energy industry imaging and studies of natural hazards”.
Over a period of a few days, the researchers created artificial earthquakes at 110 different locations using 20 shocks lasting 40 seconds apiece. This enabled them to generate two types of seismic wave, known as S- and P-waves, which reflect off molten rock at different velocities. Using this information, they were able to locate the top of the magma chamber and determine that 86% of this upper portion was solid rock.
The rest, they discovered, was made up of pores filled with molten material such as rhyolite and volatile gases (mostly water in supercritical form) and liquids in roughly equal proportion. Importantly, they say, this moderate concentration of pores allows the volatile bubbles to gradually escape to the surface so they do not accumulate and increase the buoyancy deeper inside the chamber. This is good news as it means that the Yellowstone supervolcano is unlikely to erupt any time soon.
A key aspect of this analysis was a wave-equation imaging method that Duan developed, which substantially improved the spatial resolution of the features observed. “This was important since we had to adapt the data we obtained to its less than theoretically ideal properties,” Schmandt explains.
The work, which is detailed in Nature, could also help scientists monitor the eruption potential of other volcanos, Schmandt adds. This is because estimating the accumulation and buoyancy of volatile material beneath sharp magmatic cap layers is key to assessing the stability of the system. “There are many types of similar hazardous magmatic systems and their older remnants on our planet that are important for resources like metal ores and critical minerals,” he explains. “We therefore have plenty of targets left to understand and now some refined ideas about how we might approach them in the field and on the computer.”
I’ve said this neat line more times than I can count at the start of a public lecture. It summarizes one of the most incomprehensible ideas in science: that the universe began in an extreme, hot, dense and compact state, before expanding and evolving into everything we now see around us. The certainty of the simple statement is reassuring, and it is an easy way of quickly setting the background to any story in astronomy.
But what if it isn’t just an oversimplified summary? What if it is misleading, perhaps even wholly inaccurate?
When a theory becomes so widely accepted that it is immune to question, we’ve moved from science supported by evidence to belief upheld by faith
Early on, authors Niayesh Afshordi and Phil Halper say “in some sense the theory of the Big Bang cannot be trusted”, which caused me to raise an eyebrow and wonder what I had let myself in for. After all, for many astronomers, myself included, the Big Bang is practically gospel. And therein lies the problem. When a theory becomes so widely accepted that it is immune to question, we’ve moved from science supported by evidence to belief upheld by faith.
It is easy to read the first few pages of The Battle of the Big Bang with deep scepticism but don’t worry, your eyebrows will eventually lower. That the universe has evolved from a “hot Big Bang” is not in doubt – observations such as the measurements of the cosmic microwave background leave no room for debate. But the idea that the universe “began” as a singularity – a region of space where the curvature of space–time becomes infinite – is another matter. The authors argue that no current theory can describe such a state, and there is no evidence to support it.
An astronomical crowbar
Given the confidence with which we teach it, many might have assumed the Big Bang theory beyond any serious questioning, thereby shutting the door on their own curiosity. Well, Afshordi and Halper have written the popular science equivalent of a crowbar, gently prising that door back open without judgement, keen only to share the adventure still to be had.
A cosmologist at the University of Waterloo, Canada, Afshordi is obsessed with finding observational ways of solving problems in fundamental physics, and is known for his creative alternative theories, such as a non-constant speed of light. Meanwhile Halper, a science popularizer, has carved out a niche by interviewing leading voices in early universe cosmology on YouTube, often facilitating fierce debates between competing thinkers. The result is a book that is both authoritative and accessible – and refreshingly free from ego.
Over 12 chapters, the book introduces more than two dozen alternatives to the Big Bang singularity, with names as tongue-twisting as the theories are mind-bending. For most readers, and even this astrophysicist, the distinctions between the theories quickly blur. But that’s part of the point. The focus isn’t on convincing you which model is correct, it’s about making clear that many alternatives exist that are all just as credible (give or take). Reading this book feels like walking through an art gallery with a knowledgeable and thoughtful friend explaining each work’s nuance. They offer their own opinions in hushed tones, but never suggest that their favourite should be yours too, or even that you should have a favourite.
If you do find yourself feeling dizzy reading about the details of holographic cosmology or eternal inflation, then it won’t be long before an insight into the nature of scientific debate or a crisp analogy brings you up for air. This is where the co-authorship begins to shine: Halper’s presence is felt in the moments when complicated theories are reduced to an idea anyone can relate to; while Afshordi brings deep expertise and an insider’s view of the cosmological community. These vivid and sometimes gossipy glimpses into the lives and rivalries of his colleagues paint a fascinating picture. It is a huge cast of characters – including Roger Penrose, Alan Guth and Hiranya Peiris – most of whom appear only for a page. But even though you won’t remember all the names, you are left with the feeling that Big Bang cosmology is a passionate, political and philosophical side of science very much still in motion.
Keep the door open
The real strength of this book is its humility and lack of defensiveness. As much as reading about the theory behind a multiverse is interesting, as a scientist, I’m always drawn to data. A theory that cannot be tested can feel unscientific, and the authors respect that instinct. Surprisingly, some of the most fantastical ideas, like pre-Big Bang cosmologies, are testable. But the tools required are almost science fiction themselves – such as a fleet of gravitational-wave detectors deployed in space. It’s no small task, and one of the most delightful moments in the book is a heartfelt thank you to taxpayers, for funding the kind of fundamental research that might one day get us to an answer.
In the concluding chapters, the authors pre-emptively respond to scepticism, giving real thought to discussing when thinking outside the box becomes going beyond science altogether. There are no final answers in this book, and it does not pretend to offer any. In fact, it actively asks the reader to recognize that certainty does not belong at the frontiers of science. Afshordi doesn’t mind if his own theories are proved wrong, the only terror for him is if people refuse to ask questions or pursue answers simply because the problem is seen as intractable.
Curiosity, unashamed and persistent, is far more scientific than shutting the door for fear of the uncertain
A book that leaves you feeling like you understand less about the universe than when you started it might sound like it has failed. But when that “understanding” was an illusion based on dogma, and a book manages to pry open a long-sealed door in your mind, that’s a success.
The Battle of the Big Bang offers both intellectual humility and a reviving invitation to remain eternally open-minded. It reminded me of how far I’d drifted from being one of the fearless schoolchildren who, after I declare with certainty that the universe began with a Big Bang, ask, “But what came before it?”. That curiosity, unashamed and persistent, is far more scientific than shutting the door for fear of the uncertain.
May 2025 University of Chicago Press 360pp $32.50/£26.00 hb
The Canadian firm General Fusion is to lay off about 25% of its 140-strong workforce and reduce the operation of its fusion device, dubbed Lawson Machine 26 (LM26). The announcement was made in an open letter published on 5 May by the company’s chief executive Greg Twinney. The move follows what the firm says is an “unexpected and urgent financing constraint”.
Founded in 2002 by the Canadian plasma physicist Michel Laberge, General Fusion is based in Richmond, British Columbia. It is one of the first private fusion companies and has attracted more than $325m of funding from private investors, including Amazon boss Jeff Bezos, as well as the Canadian government.
The firm is pursuing commercial fusion energy via Magnetized Target Fusion (MTF) technology, based on the concept of an enclosed, liquid-metal vortex. Plasma is injected into the centre of the vortex before numerous pistons hammer on the outside of the enclosure, compressing the plasma and sparking a fusion reaction, with the resulting heat being absorbed by the liquid metal.
The firm was hoping to achieve “scientific breakeven equivalent” in the coming years, with the aim of potentially building a commercial-scale machine with the technology in the 2030s. But that timescale now looks unlikely as General Fusion downscales its efforts due to funding issues. In his letter, Twinney said the firm has “proven a lot with a lean budget”.
Challenging environment
“Today’s funding landscape is more challenging than ever as investors and governments navigate a rapidly shifting and uncertain political and market climate,” says Twinney. “We are ready to execute our plan but are caught in an economic and geopolitical environment that is forcing us to wait.” But he insists that General Fusion, which is seeking new investors, remains an “attractive opportunity”.
Andrew Holland, chief executive of the non-profit Fusion Industry Association, told Physics World that the “nature of private enterprise is that business cycles go up and go down” and claims that excitement about fusion is growing around the world. “I hope that business cycles and geopolitics don’t interrupt the good work of scientific advancement,” he says. “I’m hopeful investors see the value being created with every experiment.”
In a sunny office, Ji-Seon Kim holds up a sheet of stripy plastic. In the middle of dark blue and transparent bands, a small red glow catches the eye, clearly visible even against the bright daylight. There are no sockets or chargers, but that little light is no magic trick.
“It’s a printed solar cell from my industrial collaborator,” Kim explains. “This blue material is the organic semiconductor printed in the plastic. It absorbs indoor light and generates electricity to power the LED.”
Yet she came to the field almost by accident. After completing her master’s degree in theoretical physics in Seoul in 1994, Kim was about to embark on a theory-focused PhD studying nonlinear optics at Imperial, when her master’s supervisor told her about some exciting work happening at the University of Cambridge.
A team there had just created the first organic light-emitting diodes (OLEDs) based on conjugated polymers, successfully stimulating carbon-based molecules to glow under an applied voltage. Intrigued by the nascent field, Kim contacted Richard Friend, who led the research and, following an interview, he offered her a PhD position. Friend himself won the IOP’s Isaac Newton Medal and Prize in 2024.
I was really lucky to be in the right place at the right time, just after this new discovery
Ji-Seon Kim
“I spent almost six months learning how to use certain equipment in the lab,” Kim recalls of the tricky transition from theory to experimental work. “For example, there’s a big glove box you have to put your hands in to make the devices inside it, and I wasn’t sure whether I was even able to open the chamber.”
But as she found her feet, she became increasingly passionate about the work. “I was really lucky to be in the right place at the right time, just after this new discovery.”
Seeing the light
You could hardly find a clearer example of fundamental research moving into consumer applications in recent years than OLEDs – now a familiar term in the world of TVs and smartphones. But when Kim joined the field, the first OLEDs were inefficient and degraded quickly due to high electric fields, heat and oxygen exposure. So, during her PhD, Kim focused on making the devices more efficient and last for longer.
Skilling up When Ji-Seon Kim moved from theoretical physics to experimental work she spent six months learning to use the lab equipment. (Courtesy: Imperial College London)
She also helped to develop a better understanding of the physics underlying the phenomenon. At the time, researchers disagreed about the fundamental limit on device efficiency set by excited-state (singlet versus triplet) formation under charge injection. Drawing on her theoretical background, Kim developed innovative simulations of display-device outcoupling, which provided a new way of determining the orientation of emitting molecules and the device efficiency, an approach that is now commonly used in the OLED community.
Kim completed her PhD in 2000 and continued studying organic semiconductors, moving to Imperial in 2007. Besides display screens, she is interested in numerous other potential applications of the materials, including sustainable energy. After all, just as the molecules can emit light in response to injected charges, so too can they absorb photons and generate electricity.
Organic semiconductors have several advantages over traditional silicon-based photovoltaic materials. As well as being lightweight, carbon molecules can be tuned to absorb different wavelengths. Whereas silicon solar cells only work with sunlight, and must be installed as heavy panels on roofs or in fields, organic semiconductors offer more options. They could be inconspicuously integrated into buildings, capturing indoor office light that is normally wasted and using it to power appliances. They could even be made into a transparent film and incorporated into windows to convert sunlight into electricity.
Plastic fabrication methods offer a further benefit. Unlike silicon, carbon-based semiconductors can be dissolved in common organic solvents to create a kind of ink, opening the door to low-cost, flexible printing techniques.
And it doesn’t stop there. “A future direction I am particularly interested in is using organic semiconductors for neuromorphic applications,” says Kim. “You can make synaptic transistors – which mimic biological neurons – using molecular semiconductors.”
With all the promise of these materials, the field has flourished. Kim’s group is currently tackling the challenge of the high binding energy between the electron–hole pair in organic semiconductors, which resists separation into free charges, increasing the intrinsic energy cost of using them. Kim and her team are exploring new small molecules, which create an energy level offset by simply changing their packing and orientations, providing an extra driving force to separate the charges.
“International interactions are critical not only for scientific development but also for future technology,” Kim says. “The UK is really strong in fundamental science, but we don’t have many manufacturing sites compared to Asian countries like Korea. For a fundamental discovery to be applied in a commercial device, there’s a transition from the lab to the manufacturing scale. For that we need a partner, and those partners are overseas.”
Kim is also seeking to build bridges across disciplines. She will soon be moving to the University of Oxford to work on physical chemistry as part of a research initiative focused on sustainable materials and chemistry. She will draw on her expertise in spectroscopic techniques to study and engineer molecules for sustainable applications.
“These days physics is multidisciplinary,” she notes. “For future technology and science, you have to be able to integrate different disciplines. I hope I can contribute as a physicist to bridge different disciplines in molecular semiconductors.”
But one constant is how Kim mentors undergraduate students. Her advice is to engage them with innovations from the lab, which is why she likes to get out the plastic sheet powering the LED. The emphasis on tangible experience is inspired by the excitement and motivation she remembers feeling when she saw organic semiconductors glowing at the start of her PhD.
“Even though the efficiency was so poor that we had to turn the overhead light off and use a really high voltage to see the faint light, that exposure to the real physics was really important,” she says. “That was for me a Eureka moment.”
What does the word “overselling” mean to you? At one level, it can just mean selling more of something than already exists or can be delivered. It’s what happens when airlines overbook flights by selling more seats than physically exist on their planes. They assume a small fraction of passengers won’t turn up, which is fine – until you can’t fly because everyone else has rocked up ahead of you.
Overselling can also involve selling more of something than is strictly required. Also known as “upselling”, you might have experienced it when buying a car or taking out a new broadband contract. You end up paying for extras and add-ons that were offered but you didn’t really need or even want, which explains why you’ve got all those useless WiFi boosters lying around the house.
There’s also a third meaning of “overselling”, which is to exaggerate the merits of something. You see it when a pharmaceutical company claims its amazing anti-ageing product “will make you live 20 years longer”, which it won’t. Overselling in this instance means overstating a product’s capability or functionality. It’s pretending something is more mature than it is, or claiming a technology is real when it’s still at the proof-of-concept stage.
From my experience in science and technology, this form of overselling often happens when companies and their staff want to grab attention or to keep customers or consumers on board. Sometimes firms do it because they are genuinely enthusiastic (possibly too much so) about the future possibilities of their product. I’m not saying overselling is necessarily a bad thing, just that it comes with caveats.
Fact and fiction
Before I go any further, let’s learn the lingo of overselling. First off, there’s “vapourware”, which refers to a product that either doesn’t exist or doesn’t fulfil the stated technical capability. Often, it’s something a firm wants to include in its product portfolio because they’re sure people would like to own it. Deep down, though, the company knows the product simply isn’t possible, at least not right now. Like a vapour, it’s there but can’t be touched.
Sometimes vapourware is just a case of waiting for product development to catch up with a genuine product plan. Sales staff know they haven’t got the product at the right specification yet, and while the firm will definitely get there one day, they’re pretending the hurdles have already been crossed. But genuine over-enthusiasm can sometimes cross over into wishful thinking – the idea that a certain functionality can be achieved with an existing technical approach.
Do you remember Google Glass? This was wearable tech, integrated into spectacle frames, that was going to become the ubiquitous portable computer. Information would be requested via voice commands, with the user receiving back the results, visible on a small heads-up display. While the computing technology worked, the product didn’t succeed. Not only did it look clunky, there were also deployment constraints and concerns about privacy and safety.
Google Glass simply didn’t capture the public’s imagination or meet the needs of enough consumers
Google Glass failed on multiple levels and was discontinued in 2015, barely a year after it hit the market. Subsequent relaunches didn’t succeed either and the product was pulled for a final time in 2023. Despite Google’s best efforts, the product simply didn’t capture the public’s imagination or meet the needs of enough consumers.
Next up in our dictionary of overselling is “unobtanium”, which is a material or material specification that we would like to exist, but simply doesn’t. In the aerospace sector, where I work, we often dream of unobtanium. We’re always looking for materials that can repeatedly withstand the operational extremes encountered during a flight, while also being sustainable without cutting corners on safety.
Like other engine manufacturers, my company – GE Aerospace – is pioneering multiple approaches to help develop such materials. We know that engines become more efficient when they burn at higher temperatures and pressures. We also know that nitrogen oxide (NOx) emissions fall when an engine burns more leanly. Unfortunately, there are no metals we know of that can survive such high temperatures.
But the quest for unobtanium can drive innovative technical solutions. At GE, for example, we’re making progress by looking instead at composite materials, such as carbon fibre and ceramic matrix composites. Stronger and more tolerant to heat and pressure than metals, they’ve already been included on the turbofan engines in planes such as the Boeing 787 Dreamliner.
We’re also using “additive manufacturing” to build components layer by layer. This approach lets us make highly intricate components with far less waste than conventional techniques, in which a block of material is machined away. We’re also developing innovative lean-burn combustion technologies, such as novel cooling and flow strategies, to reduce NOx emissions.
While unobtanium can never be reached, it’s worth trying to get there to drive technology forward
A further example is the single crystal turbine blade developed by Rolls-Royce in 2012. Each blade is cast to form a single crystal of super alloy, making it extremely strong and able to resist the intense heat inside a jet engine. According to the company, the single crystal turbine blades operate up to 200 degrees above the melting point of their alloy. So while unobtanium can never be reached, it’s worth trying to get there to drive technology forward.
Lead us not into temptation
Now, here’s the caveat. There’s an unwelcome side to overselling, which is that it can easily morph into downright mis-selling. This was amply demonstrated by the Volkswagen diesel emissions scandal, which saw the German carmaker install “defeat devices” in its diesel engines. The software changed how the engine performed when it was undergoing emissions tests to make its NOx emissions levels appear much lower than they really were.
VW was essentially falsifying its diesel-engine emissions to conform with international standards. After regulators worldwide began investigating the company, VW took a huge financial hit, ultimately costing it more than $33bn in fines, penalties and settlements. Senior chiefs got the sack and the company’s reputation was seriously damaged.
It’s tempting – and sometimes even fun – to oversell. Stretching the truth draws interest from customers and consumers. But when your product no longer does “what it says on the tin”, your brand can suffer, probably more than it would from offering something slightly less functional.
On the upside, the quest for unobtanium and, to some extent, the selling of vapourware can drive technical progress and lead to better technical solutions. I suspect this was the case for Google Glass. The underlying technology has had some success in certain niche applications such as medical surgery and manufacturing. So even though Google Glass didn’t succeed, it did create a gap for other vendors to fill.
Google Glass was essentially a portable technology with similar functionality to smartphones, such as wireless Internet access and GPS connectivity. Customers, however, proved to be happier carrying this kind of technology in their hands than wearing it on their heads. The smartphone took off; Google Glass didn’t. But the underlying tech – touchpads, cameras, displays, processors and so on – got diverted into other products.
Vapourware, in other words, can give a firm a competitive edge while it waits for its product to mature. Who knows, maybe one day even Google Glass will make a comeback?
By adapting their quantum twisting microscope to operate at cryogenic temperatures, researchers have made the first observations of a type of phonon that occurs in twisted bilayer graphene. These “phasons” could have implications for the electron dynamics in these materials.
Graphene is a layer of carbon just one atom thick and it has a range of fascinating and useful properties – as do bilayer and multilayer versions of graphene. Since 2018, condensed-matter physicists have been captivated by the intriguing electron behaviour in two layers of graphene that are rotated with respect to each other.
As the twist angle deviates from zero, the bilayer becomes a moiré superlattice. The emergence of this structure influences the electronic properties of the material, which can transform from a semiconductor to a superconductor.
In 2023, researchers led by Shahal Ilani at the Weizmann Institute of Science in Israel developed a quantum twisting microscope to study these effects. Based on a scanning probe microscope with graphene on the substrate and graphene folded over the tip so as to give it a flat end, the instrument allows precise control over the relative orientation between two graphene surfaces – in particular, the twist angle.
Strange metals
Now Ilani and an international team have operated the microscope at cryogenic temperatures for the first time. So far, their measurements support the current understanding of how electrons couple to phasons, which are specific modes of phonons (quantized lattice vibrations). Characterizing this coupling could help us understand “strange metals”, whose electrical resistance varies linearly with temperature down to very low temperatures – behaviour not seen in normal metals.
There are different types of phonons, such as acoustic phonons, where atoms within the same unit cell oscillate in phase with each other, and optical phonons, where they oscillate out of phase. Phasons are phonons involving lattice oscillations in one layer that are out of phase, or antisymmetric, with oscillations in the layer above.
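A toy two-mass model helps make the in-phase/out-of-phase distinction concrete. In the hypothetical sketch below, each "layer" is a single mass and a weak spring couples them; diagonalizing the dynamical matrix yields one mode where both layers move together and one where they move against each other, the latter being the analogue of the antisymmetric phason motion described above. The mass and spring constant are arbitrary placeholders.

```python
import numpy as np

m, k = 1.0, 0.1                                   # arbitrary layer mass and interlayer spring constant
D = (k / m) * np.array([[ 1.0, -1.0],
                        [-1.0,  1.0]])            # dynamical matrix coupling the two layers

freqs_sq, modes = np.linalg.eigh(D)               # eigenvalues are squared mode frequencies
for w2, mode in zip(freqs_sq, modes.T):
    kind = "in-phase" if mode[0] * mode[1] > 0 else "out-of-phase (phason-like)"
    print(f"omega^2 = {w2:.2f}, layer displacements = {mode.round(2)}, {kind}")
```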
“This is the one that turns out to be very important for how the electrons behave between the layers because even a small relative displacement between the two layers affects how the electrons go from one layer to the other,” explains Weizmann’s John Birkbeck as he describes the role of phasons in twisted bilayer graphene materials.
For most phonons, the coupling to electrons is weaker the lower the energy of the phonon mode. However, for twisted bilayer materials, theory suggests that phason coupling to electrons increases as the twist between the two layers approaches alignment, due to the antisymmetric motion of the two layers and the heightened sensitivity of interlayer tunnelling to small relative displacements.
Unique perspective
“There are not that many tools to see phonons, particularly in moiré systems,” adds Birkbeck. This is where the quantum twisting microscope offers a unique perspective. Thanks to the atomically flat end of the tip, electrons can tunnel between the layer on the substrate and the layer on the tip whenever there is a matching state in terms of not just energy but also momentum.
Where there is a momentum mismatch, tunnelling between tip and substrate is still possible by balancing the mismatch with the emission or absorption of a phonon. By operating at cryogenic temperatures, the researchers were able to get a measure of these momentum transactions and probe the electron phonon coupling too.
“What was interesting from this work is not only that we could image the phonon dispersion, but also we can quantify it,” says Birkbeck, stressing the absolute nature of these electron–phonon coupling-strength measurements.
The measurements are the first observations of phasons in twisted bilayer graphene and reveal a strong increase in coupling as the layers approach alignment, as predicted by theory. However, the researchers were not able to study angles smaller than 6°. Below this angle the tunnelling resistance is so low that the contact resistance starts to warp readings, among other limiting factors.
Navigating without eyes
A certain amount of technical adjustment was needed to operate the tool at cryogenic temperatures, not least to “navigate without eyes” because the team was not able to incorporate their usual optics with the cryogenic set-up. The researchers hope that with further technical adjustments they will be able to use the quantum twisting microscope in cryogenic conditions at the magic angle of 1.1°, where superconductivity occurs.
Pablo Jarillo-Herrero, who led the team at MIT that first reported superconductivity in twisted bilayer graphene in 2018 but was not involved in this research, describes it as an “interesting study”, adding: “I’m looking forward to seeing more interesting results from low temperature QTM research!”
Hector Ochoa De Eguileor Romillo at Columbia University in the US, who proposed a role for phason–electron interactions in these materials in 2019 but was also not involved in this research, describes it as “a beautiful experiment”. He adds, “I think it is fair to say that this is the most exciting experimental technique of the last 15 years or so in condensed matter physics; new interesting data are surely coming.”
Roses have been cultivated for thousands of years, admired for their beauty. Despite their use in fragrance, skincare and even in teas and jams, there are some things, however, that we still don’t know about these symbolic flowers.
And that includes the physical mechanism behind the shape of rose petals.
The curves and curls of leaves and flower petals arise due to the interplay between their natural growth and geometry.
Uneven growth in a flat sheet, in which the edges grow more quickly than the interior, gives rise to strain; in plant leaves and petals this can result in a variety of forms, such as saddle and ripple shapes.
Yet when it comes to rose petals, the sharply pointed cusps – points where two curves meet – that form at the edges of the petals set them apart from the soft, wavy patterns seen in many other plants.
While young rose petals have smooth edges, as the rose matures the petals change to a polygonal shape with multiple cusps.
Researchers have now found that the pointed cusps that form at the edge of rose petals are due to a type of geometric frustration called a Mainardi-Codazzi-Peterson (MCP) incompatibility.
This mechanism concentrates stress in specific regions, which go on to form cusps, allowing the petal to avoid tearing or folding unnaturally.
When the researchers suppressed the formation of cusps, they found that the discs reverted to being smooth and concave.
The researchers say that the findings could be used for applications in soft robotics and even the deployment of spacecraft components.
It also goes some way to deepening our appreciation of nature’s ability to juggle growth and geometry.
Physicists have observed axion quasiparticles for the first time in a two-dimensional quantum material. As well as having applications in materials science, the discovery could aid the search for fundamental axions, which are a promising (but so far hypothetical) candidate for the unseen dark matter pervading our universe.
Theorists first proposed axions in the 1970s as a way of solving a puzzle involving the strong nuclear force and charge-parity (CP) symmetry. In systems that obey this symmetry, the laws of physics are the same for a particle and the spatial mirror image of its oppositely charged antiparticle. Weak interactions are known to violate CP symmetry, and the theory of quantum chromodynamics (QCD) allows strong interactions to do so, too. However, no-one has ever seen evidence of this happening, and the so-called “strong CP problem” remains unresolved.
More recently, the axion has attracted attention as a potential constituent of dark matter – the mysterious substance that appears to make up more than 85% of matter in the universe. Axions are an attractive dark matter candidate because while they do have mass, and theory predicts that the Big Bang should have generated them in large numbers, they are much less massive than electrons, and they carry no charge. This combination means that axions interact only very weakly with matter and electromagnetic radiation – exactly the behaviour we expect to see from dark matter.
Despite many searches, though, axions have never been detected directly. Now, however, a team of physicists led by Jianxiang Qiu of Harvard University has proposed a new detection strategy based on quasiparticles that are axions’ condensed-matter analogue. According to Qiu and colleagues, these quasiparticle axions, as they are known, could serve as axion “simulators”, and might offer a route to detecting dark matter in quantum materials.
Topological antiferromagnet
To detect axion quasiparticles, the Harvard team constructed gated electronic devices made from several two-dimensional layers of manganese bismuth telluride (MnBi2Te4). This material is a rare example of a topological antiferromagnet – that is, a material that is insulating in its bulk while conducting electricity on its surface, and that has magnetic moments that point in opposite directions. These properties allow quasiparticles known as magnons (collective oscillations of spin magnetic moments) to appear in and travel through the MnBi2Te4. Two types of magnon mode are possible: one in which the spins oscillate in sync; and another in which they are out of phase.
Qiu and colleagues applied a static magnetic field across the plane of their MnBi2Te4 sheets and bombarded the devices with sub-picosecond light pulses from a laser. This technique, known as ultrafast pump-probe spectroscopy, allowed them to observe the 44 GHz coherent oscillation of the so-called condensed-matter axion field. This field is the analogue of the CP-violating term in QCD, and it is proportional to a material’s magnetoelectric coupling constant. “This is uniquely enabled by the out-of-phase magnon in this topological material,” explains Qiu. “Such coherent oscillations are the smoking-gun evidence for the axion quasiparticle and it is the combination of topology and magnetism in MnBi2Te4 that gives rise to it.”
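To see what "observing a 44 GHz coherent oscillation" means in practice, here is a hypothetical sketch: it simulates a damped oscillation in a pump-probe delay trace and recovers its frequency with a Fourier transform. The time step, damping time and noise level are invented, and this is not the authors' analysis code.

```python
import numpy as np

dt = 1e-12                                        # 1 ps sampling of the pump-probe delay
t = np.arange(0, 500e-12, dt)                     # 500 ps trace
f0 = 44e9                                         # the reported 44 GHz mode
trace = np.exp(-t / 2e-10) * np.cos(2 * np.pi * f0 * t) + 0.05 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(trace))
freqs = np.fft.rfftfreq(t.size, dt)
peak = spectrum[1:].argmax() + 1                  # skip the DC bin when finding the peak
print(f"dominant oscillation frequency: {freqs[peak] / 1e9:.0f} GHz")
```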
A laboratory for axion studies
Now that they have detected axion quasiparticles, Qiu and colleagues say their next step will be to do experiments that involve hybridizing them with particles such as photons. Such experiments would create a new type of “axion-polariton” that would couple to a magnetic field in a unique way – something that could be useful for applications in ultrafast antiferromagnetic spintronics, in which spin-polarized currents can be controlled with an electric field.
The axion quasiparticle could also be used to build an axion dark matter detector. According to the team’s estimates, the detection frequency for the quasiparticle is in the milli-electronvolt (meV) range. While several theories for the axion predict that it could have a mass in this range, most existing laboratory detectors and astrophysical observations search for masses outside this window.
“The main technical barrier to building such a detector would be growing high-quality, large crystals of MnBi2Te4 to maximize sensitivity,” Qiu tells Physics World. “In contrast to other high-energy experiments, such a detector would not require expensive accelerators or giant magnets, but it will require extensive materials engineering.”
In a conversation with Physics World’s Tami Freeman, Krausz talks about his research into using ultrashort-pulsed laser technology to develop a diagnostic tool for early disease detection. He also discusses his collaboration with Semmelweis University to establish the John von Neumann Institute for Data Science, and describes the Science4People initiative, a charity that he and his colleagues founded to provide education for children who have been displaced by the war in Ukraine.
On 13–14 May, The Economist is hosting Commercialising Quantum Global 2025 in London. The event is supported by the Institute of Physics – which brings you Physics World. Participants will join global leaders from business, science and policy for two days of real-world insights into quantum’s future. In London you will explore breakthroughs in quantum computing, communications and sensing, and discover how these technologies are shaping industries, economies and global regulation. Register now and use code QUANTUM20 to receive 20% off. This offer ends on 4 May.
Researchers at Linköping University in Sweden have developed a new fluid electrode and used it to make a soft, malleable battery that can recharge and discharge over 500 cycles while maintaining its high performance. The device, which continues to function even when stretched to twice its length, might be used in next-generation wearable electronics.
Futuristic wearables such as e-skin patches, e-textiles and even internal e-implants on the organs or nerves will need to conform far more closely to the contours of the human body than today’s devices can. To fulfil this requirement of being soft and stretchable as well as flexible, such devices will need to be made from mechanically pliant components powered by soft, supple batteries. Today’s batteries, however, are mostly rigid. They also tend to be bulky because long-term operations and power-hungry functions such as wireless data transfer, continuous sensing and complex processing demand plenty of stored energy.
To overcome these barriers, researchers led by the Linköping chemist Aiman Rahmanudin decided to rethink the very concept of battery electrode design. Instead of engineering softness and stretchability into a solid electrode, as was the case in most previous efforts, they made the electrode out of a fluid. “Bulky batteries compromise the mechanical compliance of wearable devices, but since fluids can be easily shaped into any configuration, this limitation is removed, opening up new design possibilities for next-generation wearables,” Rahmanudin says.
A “holistic approach”
Designing a stretchable battery requires a holistic approach, he adds, as all the device’s components need to be soft and stretchy. For example, the researchers used a modified version of the wood-based biopolymer lignin as the cathode and the conjugated polymer poly(1-amino-5-chloroanthraquinone) (PACA) as the anode. They made these electrodes fluid by dispersing them separately, together with conductive carbon fillers, in an aqueous electrolyte consisting of 0.1 M HClO4.
To integrate these electrodes into a complete cell, they had to design a stretchable current collector and an ion-selective membrane to prevent the cathodic and anodic fluids from crossing over. They also encapsulated the fluids in a robust, low-permeability elastomer to prevent them from drying up.
Designing energy storage devices from the “inside out”
Previous flexible, high-performance electrode work by the Linköping team focused on engineering the mechanical properties of solid battery electrodes by varying their Young’s modulus. “For example, think of a rubber composite that can be stretched and bent,” explains Rahmanudin. “The thicker the rubber, however, the higher the force required to stretch it, which affects mechanical compliancy.
“Learning from our past experience and work on electrofluids (which are conductive particles dispersed in a liquid medium employed as stretchable conductors), we figured that mixing redox particles with conductive particles and suspending them in an electrolyte could potentially work as battery electrodes. And we found that it did.”
Rahmanudin tells Physics World that fluid-based electrodes could lead to entirely new battery designs, including batteries that could be moulded into textiles, embedded in skin-worn patches or integrated into soft robotics.
After reporting their work in Science Advances, the researchers are now working on increasing the voltage output of their battery, which currently stands at 0.9 V. “We are also looking into using Earth-abundant and sustainable materials like zinc and manganese oxide for future versions of our device and aim at replacing the acidic electrolyte we used with a safer, pH-neutral and biocompatible equivalent,” Rahmanudin says.
Another exciting direction, he adds, will be to exploit the fluid nature of such materials to build batteries with more complex three-dimensional shapes, such as spirals or lattices, that are tailored for specific applications. “Since the electrodes can be poured, moulded or reconfigured, we envisage a lot of creative potential here,” Rahmanudin says.
The first strong evidence for an exoplanet with an orbit perpendicular to that of the binary system it orbits has been observed by astronomers in the UK and Portugal. Based on observations from the ESO’s Very Large Telescope (VLT), researchers led by Tom Baycroft, a PhD student at the University of Birmingham, suggest that such an exoplanet is required to explain the changing orientation in the orbit of a pair of brown dwarfs – objects that are intermediate in mass between the heaviest gas-giant planets and the lightest stars.
The Milky Way is known to host a diverse array of planetary systems, providing astronomers with extensive insights into how planets form and systems evolve. One thing that is evident is that most exoplanets (planets that orbit stars other than the Sun) and systems that have been observed so far bear little resemblance to Earth and the solar system.
Among the most interesting planets are the circumbinaries, which orbit two stars in a binary system. So far, 16 of these planets have been discovered. In each case, they have been found to orbit in the same plane as the orbits of their binary host stars. In other words, the planetary system is flat. This is much like the solar system, where each planet orbits the Sun within the same plane.
“But there has been evidence that planets might exist in a different configuration around a binary star,” Baycroft explains. “Inclined at 90° to the binary, these polar orbiting planets have been theorized to exist, and discs of dust and gas have been found in this configuration.”
Especially interesting
Baycroft’s team had set out to investigate a binary pair of brown dwarfs around 120 light-years away. The system, called 2M1510, comprises two brown dwarfs that are only about 45 million years old and have masses about 18 times that of Jupiter. The pair are especially interesting because they are eclipsing: periodically passing in front of each other along our line of sight. Observing the system with the VLT from this unique vantage allowed the astronomers to determine the masses and radii of the brown dwarfs and the nature of their orbit.
“This is a rare object, one of only two eclipsing binary brown dwarfs, which is useful for understanding how brown dwarfs form and evolve,” Baycroft explains. “In our study, we were not looking for a planet, only aiming to improve our understanding of the brown dwarfs.”
Yet as they analysed the VLT’s data, the team noticed something strange about the pair’s orbit. Doppler shifts in the light they emitted revealed that their elliptical orbit was slowly changing orientation, an effect known as apsidal precession.
Not unheard of
This behaviour is not unheard of. In its orbit around the Sun, Mercury undergoes apsidal precession, which is explained by Albert Einstein’s general theory of relativity. But Baycroft says that the precession must have had an entirely different cause in the brown-dwarf pair.
“Unlike Mercury, this precession is going backwards, in the opposite direction to the orbit,” he explains. “Ruling out any other causes for this, we find that the best explanation is that there is a companion to the binary on a polar orbit, inclined at close to 90° relative to the binary.” As it exerts its gravitational pull on the binary pair, the inclination of this third, smaller body induces a gradual rotation in the orientation of the binary’s elliptical orbit.
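For readers who want to see what a precessing orbit looks like in radial-velocity data, the hypothetical sketch below builds a standard Keplerian velocity curve whose argument of periastron drifts linearly in time; a negative omega_dot corresponds to the retrograde (backwards) precession described above. All the orbital parameters are illustrative placeholders rather than the measured values for 2M1510.

```python
import numpy as np

def radial_velocity(t, K, P, e, omega0, omega_dot, T0=0.0):
    """Keplerian radial velocity with a slowly precessing argument of periastron (radians)."""
    omega = omega0 + omega_dot * t                 # the orientation of the ellipse drifts with time
    M = 2 * np.pi * (t - T0) / P                   # mean anomaly
    E = M.copy()
    for _ in range(50):                            # fixed-point iteration solves Kepler's equation
        E = M + e * np.sin(E)
    nu = 2 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                        np.sqrt(1 - e) * np.cos(E / 2))
    return K * (np.cos(nu + omega) + e * np.cos(omega))

t = np.linspace(0.0, 2000.0, 5000)                 # observation times in days
rv = radial_velocity(t, K=5.0, P=21.0, e=0.3, omega0=1.0, omega_dot=-1e-4)  # placeholder values
```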
For now, the characteristics of this planet are difficult to pin down, and the team believes its mass could lie anywhere between 10 and 100 Earth masses. All the same, the astronomers are confident that their results confirm the possibility of polar exoplanets existing in circumbinary orbits – providing valuable guidance for future observations.
“This result exemplifies how the many different configurations of planetary systems continue to astound us,” Baycroft comments. “It also paves the way for more studies aiming to find out how common such polar orbits may be.”
Researchers in Singapore and the US have independently developed two new types of photonic computer chips that match existing purely electronic chips in terms of their raw performance. The chips, which can be integrated with conventional silicon electronics, could find use in energy-hungry technologies such as artificial intelligence (AI).
For nearly 60 years, the development of electronic computers proceeded according to two rules of thumb: Moore’s law (which states that the number of transistors in an integrated circuit doubles every two years) and Dennard scaling (which says that as the size of transistors decreases, their power density will stay constant). However, both rules have begun to fail, even as AI systems such as large language models, reinforcement learning and convolutional neural networks are becoming more complex. Consequently, electronic computers are struggling to keep up.
Light-based computation, which exploits photons instead of electrons, is a promising alternative because it can perform multiplication and accumulation (MAC) much more quickly and efficiently than electronic devices. These operations are crucial for AI, and especially for neural networks. However, while photonic systems such as photonic accelerators and processors have made considerable progress in performing linear algebra operations such as matrix multiplication, integrating them into conventional electronics hardware has proved difficult.
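To make the MAC idea concrete, here is a minimal Python sketch – purely illustrative, not tied to either chip described below – showing that a neural-network layer’s matrix–vector product is nothing more than a large number of multiply-accumulate steps, exactly the operation photonic hardware aims to accelerate.

```python
import numpy as np

# A multiply-accumulate (MAC) combines a multiplication with a running sum:
# acc += w * x. A matrix-vector product, the core of a neural-network layer,
# is just many MACs.

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))   # weight matrix for a small 64-neuron layer
x = rng.normal(size=64)         # input activation vector

# Explicit MAC loops (the operations a photonic accelerator parallelizes)
y = np.zeros(64)
for i in range(64):
    for j in range(64):
        y[i] += W[i, j] * x[j]  # one multiply, one accumulate

# Equivalent single matrix-vector multiplication
assert np.allclose(y, W @ x)
```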
A hybrid photonic-electronic system
The Singapore device was made by researchers at the photonic computing firm Lightelligence and is called PACE, for Photonic Arithmetic Computing Engine. It is a hybrid photonic-electronic system made up of more than 16 000 photonic components integrated on a single silicon chip and performs matrix MAC on 64-entry binary vectors.
“The input vector data elements start in electronic form and are encoded as binary intensities of light (dark or light) and fed into a 64 x 64 array of optical weight modulators that then perform multiply and summing operations to accumulate the results,” explains Maurice Steinman, Lightelligence’s senior vice president and general manager for product strategy. “The result vectors are then converted back to the electronic domain where each element is compared to its corresponding programmable 8-bit threshold, producing new binary vectors that subsequently re-circulate optically through the system.”
The process repeats until the resultant vectors reach “convergence” with settled values, Steinman tells Physics World. Each recurrent step requires only a few nanoseconds and the entire process completes quickly.
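Based purely on Steinman’s description above, a rough software analogue of the recirculation cycle might look like the following Python sketch. The weights, thresholds and convergence test here are placeholders of my own choosing, not details of the actual PACE hardware.

```python
import numpy as np

def pace_like_iteration(v, W, thresholds, max_steps=100):
    """Toy electronic analogue of the recirculating loop described by
    Steinman: multiply a 64-entry binary vector by a 64 x 64 weight array,
    threshold the sums to obtain a new binary vector, and repeat until the
    vector settles ("convergence")."""
    for _ in range(max_steps):
        summed = W @ v                              # multiply-and-sum stage
        new_v = (summed > thresholds).astype(int)   # per-element threshold
        if np.array_equal(new_v, v):                # settled values reached
            return new_v
        v = new_v
    return v

rng = np.random.default_rng(1)
W = rng.integers(-3, 4, size=(64, 64))       # placeholder weights
thresholds = rng.integers(-10, 11, size=64)  # placeholder thresholds
v0 = rng.integers(0, 2, size=64)             # random binary input vector
print(pace_like_iteration(v0, W, thresholds))
```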
The Lightelligence device, which the team describe in Nature, can solve complex computational problems known as max-cut/optimization problems that are important for applications in areas such as logistics. Notably, its greatly reduced minimum latency – a key measure of computation speed – means it can solve a type of problem known as an Ising model in just five nanoseconds. This makes it 500 times faster than today’s best graphics-processing-unit-based systems at this task.
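For readers unfamiliar with how max-cut relates to the Ising model, the brute-force Python toy below – nothing like the photonic hardware itself, and using a made-up four-node graph – shows that finding the minimum-energy Ising spin configuration is the same as finding the maximum cut of a weighted graph.

```python
import itertools
import numpy as np

# Max-cut: split a graph's nodes into two sets so that the total weight of
# edges crossing the split is as large as possible. Assign each node a spin
# s_i = +/-1; maximizing the cut is equivalent to minimizing the Ising
# energy E(s) = sum_{i<j} w_ij * s_i * s_j.

J = np.array([[0, 1, 2, 0],
              [1, 0, 1, 3],
              [2, 1, 0, 1],
              [0, 3, 1, 0]])   # symmetric edge weights of a toy 4-node graph

best_energy, best_spins = None, None
for spins in itertools.product([-1, 1], repeat=4):
    s = np.array(spins)
    energy = 0.5 * s @ J @ s   # Ising energy (each pair counted once)
    if best_energy is None or energy < best_energy:
        best_energy, best_spins = energy, s

cut_weight = 0.25 * J.sum() - 0.5 * best_energy
print("spins:", best_spins, "cut weight:", cut_weight)
```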
High level of integration achieved
Independently, researchers led by Nicholas Harris at Lightmatter in Mountain View, California, have fabricated the first photonic processor capable of executing state-of-the-art neural network tasks such as classification, segmentation and running reinforcement learning algorithms. Lightmatter’s design consists of six chips in a single package with high-speed interconnects between vertically aligned photonic tensor cores (PTCs) and control dies. The team’s processor integrates four 128 x 128 PTCs, with each PTC occupying an area of 14 x 24.96 mm. It contains all the photonic components and analogue mixed-signal circuits required to operate, and members of the team say that the current architecture could be scaled to 512 x 512 computing units in a single die.
The result is a device that can perform 65.5 trillion adaptive block floating-point (ABFP) 16-bit operations per second with just 78 W of electrical power and 1.6 W of optical power. Writing in Nature, the researchers claim that this represents the highest level of integration achieved in photonic processing.
The team also showed that the Lightmatter processor can implement complex AI models such as the neural network ResNet (used for image processing) and the natural language processing model BERT (short for Bidirectional Encoder Representations from Transformers) – all with an accuracy rivalling that of standard electronic processors. It can also run reinforcement learning algorithms such as those DeepMind used to master Atari games. Harris and colleagues have already applied their device to several real-world AI applications, such as generating literary texts and classifying film reviews, and they say that their photonic processor marks an essential step in post-transistor computing.
Both teams fabricated their photonic and electronic chips using standard complementary metal-oxide-semiconductor (CMOS) processing techniques. This means that existing infrastructures could be exploited to scale up their manufacture. Another advantage: both systems were fully integrated in a standard chip interface – a first.
Given these results, Steinman says he expects to see innovations emerging from algorithm developers who seek to exploit the unique advantages of photonic computing, including low latency. “This could benefit the exploration of new computing models, system architectures and applications based on large-scale integrated photonics circuits.”
In this episode of Physics World Stories, writer Kevlin Henney discusses his new flash fiction, Heisenberg (not) in Helgoland – written exclusively for Physics World as part of the International Year of Quantum Science and Technology. The story spans two worlds: the one we know, and an alternate reality in which Werner Heisenberg never visits the island of Helgoland – a trip that played a key role in the development of quantum theory.
Henney reads an extract from the piece and reflects on the power of flash fiction – why the format’s brevity and clarity make it an interesting space for exploring complex ideas. In conversation with host Andrew Glester, he also discusses his varied career as an independent software consultant, trainer and writer. Tune in to hear his thoughts on quantum computing, and why there should be greater appreciation for how modern physics underpins the technologies we use every day.
The full version of Henney’s story will be published in the Physics World Quantum Briefing 2025 – a free-to-read digital issue launching in May. Packed with features on the history, mystery and applications of quantum mechanics, it will be available via the Physics World website.
The image accompanying this article shows Werner Heisenberg in 1933 (credit: German Federal Archive; posterised version by James Dacey/Physics World; CC BY-SA 3.0).
Mathematical genius Emmy Noether, around 1900. (Public domain. Photographer unknown)
In his debut book, Einstein’s Tutor: the Story of Emmy Noether and the Invention of Modern Physics, Lee Phillips champions the life and work of German mathematician Emmy Noether (1882–1935). Despite living a life filled with obstacles, injustices and discrimination as a Jewish mathematician, Noether revolutionized the field and discovered “the single most profound result in all of physics”. Phillips’ book weaves the story of her extraordinary life around the central subject of “Noether’s theorem”, which itself sits at the heart of a fascinating era in the development of modern theoretical physics.
Noether grew up at a time when women had few rights. Unable to officially register as a student, she was instead able to audit courses at the University of Erlangen in Bavaria, with the support of her father, who was a mathematics professor there. At the time, the young Noether was one of only two female auditors among the university’s 986 students. Just two years previously, the university faculty had declared that mixed-sex education would “overthrow academic order”. Despite going against this formidable status quo, she was able to graduate in 1903.
Noether continued her pursuit of advanced mathematics, travelling to the “[world’s] centre of mathematics” – the University of Göttingen. Here, she was able to sit in the lectures of some of the brightest mathematical minds of the time – Karl Schwarzschild, Hermann Minkowski, Otto Blumenthal, Felix Klein and David Hilbert. While there, the law finally changed: women were, at last, allowed to enrol as students at university. In 1904 Noether returned to the University of Erlangen to complete her postgraduate dissertation under the supervision of Paul Gordan. At the time, she was the only woman to matriculate alongside 46 men.
Despite being more than qualified, Noether was unable to secure a university position after completing her PhD in 1907. Instead, she worked unpaid for almost a decade – teaching her father’s courses and supervising his PhD students. As of 1915, Noether was the only woman in the whole of Europe with a PhD in mathematics. She had worked hard to be recognized as an expert on symmetry and invariant theory, and eventually accepted an invitation from Klein and Hilbert to work alongside them in Göttingen. Here, the three of them would meet Albert Einstein to discuss his latest project – a general theory of relativity.
Infiltrating the boys’ club
In Einstein’s Tutor, Phillips paints an especially vivid picture of Noether’s life at Göttingen, among colleagues including Klein, Hilbert and Einstein, who loom large and bring a richness to the story. Indeed, much of the first three chapters are dedicated to these men, setting the scene for Noether’s arrival in Göttingen. Phillips makes it easy to imagine these exceptionally talented and somewhat eccentric individuals working at the forefront of mathematics and theoretical physics together. And it was here, when supporting Einstein with the development of general relativity (GR), that Noether discovered a profound result: for every symmetry in the universe, there is a corresponding conservation law.
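To give a flavour of that result (my own compact summary, not a passage from the book), the simplest mechanical statement of Noether’s theorem is:

```latex
\text{If } L(q,\dot q,t) \text{ is invariant under the continuous transformation } q \to q + \epsilon K(q),
\text{ then } Q = \frac{\partial L}{\partial \dot q}\,K(q) \text{ is conserved: } \frac{\mathrm{d}Q}{\mathrm{d}t}=0 .
```

Spatial translation symmetry thus yields conservation of momentum, rotational symmetry yields angular momentum, and time-translation symmetry yields energy.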
Throughout the book, Phillips makes the case that, without Noether, Einstein would never have been able to get to the heart of GR. Einstein himself “expressed wonderment at what happened to his equations in her hands, how he never imagined that things could be expressed with such elegance and generality”. Phillips argues that Einstein should not be credited as the sole architect of GR. Indeed, the contributions of Grossmann, Klein, Besso, Hilbert and, crucially, Noether remain largely unacknowledged – a wrong that Phillips is trying to right with this book.
Phillips makes the case that, without Noether, Einstein would never have been able to get to the heart of general relativity
A key theme running through Einstein’s Tutor is the importance of the support and allyship that Noether received from her male contemporaries. While at Göttingen, there was a battle to allow Noether to receive her habilitation (eligibility for tenure). Many argued in her favour but considered her an exception, and believed that in general, women were not suited as university professors. Hilbert, in contrast, saw her sex as irrelevant (famously declaring “this is not a bath house”) and pointed out that science requires the best people, of which she was one. Einstein also fought for her on the basis of equal rights for women.
Eventually, in 1919 Noether was allowed to habilitate (as an exception to the rule) and was promoted to professor in 1922. However, she was still not paid for her work. In fact, her promotion came with the specific condition that she remained unpaid, making it clear that Noether “would not be granted any form of authority over any male employee”. Hilbert, however, managed to secure a contract with a small salary for her from the university administration.
Her allies rose to the cause again in 1933, when Noether was one of the first Jewish academics to be dismissed under the Nazi regime. After her expulsion, German mathematician Helmut Hasse convinced 14 other colleagues to write letters advocating for her importance, asking that she be allowed to continue as a teacher to a small group of advanced students – the government denied this request.
When the time came to leave Germany, many colleagues wrote testimonials in her support for immigration, with one writing “She is one of the 10 or 12 leading mathematicians of the present generation in the entire world.” Rather than being placed at a prestigious university or research institute (Hermann Weyl and Einstein were both placed at “the men’s university”, the Institute for Advanced Study in Princeton), it was recommended she join Bryn Mawr, a women’s college in Pennsylvania, US. Her position there would “compete with no-one… the most distinguished feminine mathematician connected with the most distinguished feminine university”. Phillips makes clear his distaste for the phrasing of this recommendation. However, all accounts show that she was happy at Bryn Mawr and stayed there until her unexpected death in 1935 at the age of 53.
Noether’s legacy
With a PhD in theoretical physics, Phillips has worked for many years in both academia and industry. His background shows itself clearly in some unusual writing choices. While his writing style is relaxed and conversational, it includes the occasional academic turn of phrase (e.g. “In this chapter I will explain…”), which feels out of place in a popular-science book. He also has a habit of piling repetitive and overly sincere praise onto Noether. I personally prefer stories that adopt the “show, don’t tell” approach – her abilities speak for themselves, so it should be easy to let the reader come to their own conclusions.
Phillips has made the ambitious choice to write a popular-science book about complex mathematical concepts such as symmetries and conservation laws that are challenging to explain, especially to general readers. He does his best to describe the mathematics and physics behind some of the key concepts around Noether’s theorem. However, in places, you do need to have some familiarity with university-level physics and maths to properly follow his explanations. The book also includes a 40-page appendix filled with additional physics content, which I found unnecessary.
Einstein’s Tutor does achieve its primary goal of familiarizing the reader with Emmy Noether and the tremendous significance of her work. The final chapter on her legacy breezes quickly through developments in particle physics, astrophysics, quantum computers, economics and XKCD comics to highlight the range and impact this single theorem has had. Phillips’ goal was to take Noether into the mainstream, and this book is a small step in the right direction. As cosmologist and author Katie Mack summarizes perfectly: “Noether’s theorem is to theoretical physics what natural selection is to biology.”
Sending an email, typing a text message, streaming a movie. Many of us do these activities every day. But what if you couldn’t move your muscles and navigate the digital world? This is where brain–computer interfaces (BCIs) come in.
BCIs that are implanted in the brain can bypass pathways damaged by illness and injury. They analyse neural signals and produce an output for the user, such as interacting with a computer.
A major focus for scientists developing BCIs has been to interpret brain activity associated with movements to control a computer cursor. The user drives the BCI by imagining arm and hand movements, which often originate in the dorsal motor cortex. Speech BCIs, which restore communication by decoding attempted speech from neural activity in sensorimotor cortical areas such as the ventral precentral gyrus, have also been developed.
Researchers at the University of California, Davis recently found that the same part of the brain that supported a speech BCI could also support computer cursor control for an individual with amyotrophic lateral sclerosis (ALS). ALS is a progressive neurodegenerative disease affecting the motor neurons in the brain and spinal cord.
“Once that capability [to control a computer mouse] became reliably achievable roughly a decade ago, it stood to reason that we should go after another big challenge, restoring speech, that would help people unable to speak. And from there – and this is where this new paper comes in – we recognized that patients would benefit from both of these capabilities [speech and computer cursor control],” says Sergey Stavisky, who co-directs the UC Davis Neuroprosthetics Lab with David Brandman.
Their clinical case study suggests that computer cursor control may not be as body-part-specific as scientists previously believed. If results are replicable, this could enable the creation of multi-modal BCIs that restore communication and movement to people with paralysis. The researchers share information about their cursor BCI and the case study in the Journal of Neural Engineering.
The study participant, a 45-year-old man with ALS, had previous success working with a speech BCI. The researchers recorded neural activity from the participant’s ventral precentral gyrus while he imagined controlling a computer cursor, and built a BCI to interpret that neural activity and predict where and when he wanted to move and click the cursor. The participant then used the new cursor BCI to send texts and emails, watch Netflix, and play The New York Times Spelling Bee game on his personal computer.
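As a rough illustration of what “interpreting neural activity to predict cursor movement” can involve – this is a generic linear-decoder sketch with simulated data, not the UC Davis team’s actual algorithm – binned firing rates are often mapped to cursor velocities with a regression model:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Simulate binned firing rates from 96 hypothetical electrodes with a linear
# relationship to 2D cursor velocity, then fit and test a ridge decoder.
rng = np.random.default_rng(42)
n_samples, n_channels = 2000, 96
true_map = rng.normal(size=(n_channels, 2))
rates = rng.poisson(lam=5, size=(n_samples, n_channels)).astype(float)
velocity = rates @ true_map + rng.normal(scale=5.0, size=(n_samples, 2))

decoder = Ridge(alpha=1.0).fit(rates[:1500], velocity[:1500])  # calibration
print("decoding R^2:", decoder.score(rates[1500:], velocity[1500:]))
```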
“This finding, that the tiny region of the brain we record from has a lot more than just speech information, has led to the participant also being able to control his own computer on a daily basis, and get back some independence for him and his family,” says first author Tyler Singer-Clark, a graduate student in biomedical engineering at UC Davis.
The researchers found that most of the information driving cursor control came from one of the participant’s four implanted microelectrode arrays, while click information was available on all four of the BCI arrays.
“The neural recording arrays are the same ones used in many prior studies,” explains Singer-Clark. “The result that our cursor BCI worked well given this choice makes it all the more convincing that this brain area (speech motor cortex) has untapped potential for controlling BCIs in multiple useful ways.”
The researchers are working to incorporate more computer actions into their cursor BCI, to make the control faster and more accurate, and to reduce calibration time. They also note that it’s important to replicate these results in more people to understand how generalizable the results of their case study may be.
A new all-electrical way of controlling spin-polarized currents has been developed by researchers at the Singapore University of Technology and Design (SUTD). By using bilayers of recently-discovered materials known as altermagnets, the researchers developed a tuneable and magnetic-free alternative to current approaches – something they say could bring spintronics closer to real-world applications.
Spintronics stores and processes information by exploiting the quantum spin (or intrinsic angular momentum) of electrons rather than their charge. The technology works by switching electronic spins, which can point either “up” or “down”, to perform binary logical operations in much the same way as electronic circuits use electric charge. One of the main advantages is that when an electron’s spin switches direction, its new state is stored permanently; it is said to be “non-volatile”. Spintronics circuits therefore do not require any additional input power to keep their states stable, which could make them more efficient and faster than the circuits in conventional electronic devices.
The problem is that the spin currents that carry information in spintronics circuits are usually generated using ferromagnetic materials and the magnetization of these materials can only be switched using very strong magnetic fields. Doing this requires bulky apparatus, which hinders the creation of ultracompact devices – a prerequisite for real-world applications.
“Notoriously difficult to achieve”
Controlling the spins with electric fields instead would be ideal, but Ang Yee Sin, who led the new research, says it has proved notoriously difficult to achieve – until now. “We have now shown that we can generate and reverse the spin direction of the electron current in an altermagnet made of two very thin layers of chromium sulphide (CrS) at room temperature using only an electric field,” Ang says.
Altermagnets, which were only discovered in 2024, are different from the conventional magnetically-ordered materials, ferromagnets and antiferromagnets. In ferromagnets, the magnetic moments (or spins) of atoms line up parallel to each other. In antiferromagnets, they line up antiparallel. The spins in altermagnets are also antiparallel, but the atoms that host these spins are rotated with respect to their neighbours. This combination gives altermagnets some properties of both ferromagnets and antiferromagnets, plus new properties of their own.
In bilayers of CrS, explains Ang, the electrons in each layer naturally prefer to spin in opposite directions, essentially cancelling each other out. “When we apply an electric field across the layers, however, one layer becomes more ‘active’ than the other. The current flowing through the device therefore becomes spin-polarized.”
A new device concept
The main challenge the researchers faced in their work was to identify a suitable material and a stacking arrangement in which the spin and layer degrees of freedom intertwine in just the right way. This required detailed quantum-level simulations and theoretical modelling to prove that CrS bilayers could do the job, says Ang.
The work opens up a new device concept that the team calls layer-spintronics in which spin control is achieved via layer selection using an electric field. According to Ang, this concept has clear applications for next-generation, energy-efficient, compact and magnet-free memory and logic devices. And, since the technology works at room temperature and uses electric gating – a common approach in today’s electronics – it could make it possible to integrate spintronics devices with current semiconductor technology. This could lead to novel spin transistors, reconfigurable logic gates, or ultrafast memory cells based entirely on spin in the future, he says.
The SUTD researchers, who report their work in Materials Horizons, now aim to identify other 2D altermagnets that can host similar or even more robust spin-electric effects. “We are also collaborating with experimentalists to synthesize and characterize CrS bilayers to validate our predictions in the lab and investigating how to achieve non-volatile spin control by integrating them with ferroelectric materials,” reveals Ang. “This could potentially allow for memory devices that can retain information for longer.”
Most of us have heard of Schrödinger’s eponymous cat, but it is not the only feline in the quantum physics bestiary. Quantum Cheshire cats may not be as well known, yet their behaviour is even more insulting to our classical-world common sense.
These quantum felines get their name from the Cheshire cat in Lewis Carroll’s Alice’s Adventures in Wonderland, which disappears leaving its grin behind. As Alice says: “I’ve often seen a cat without a grin, but a grin without a cat! It’s the most curious thing I ever saw in my life!”
Things are curiouser in the quantum world, where the property of a particle seems to be in a different place from the particle itself. A photon’s polarization, for example, may exist in a totally different location from the photon itself: that’s a quantum Cheshire cat.
While the prospect of disembodied properties might seem disturbing, it’s a way of interpreting the elegant predictions of quantum mechanics. That at least was the thinking when quantum Cheshire cats were first put forward by Yakir Aharonov, Sandu Popescu, Daniel Rohrlich and Paul Skrzypczyk in an article published in 2013 (New J. Phys. 15 113015).
Strength of a measurement
To get to grips with the concept, remember that making a measurement on a quantum system will “collapse” it into one of its eigenstates – think of opening the box and finding Schrödinger’s cat either dead or alive. However, by playing on the trade-off between the strength of a measurement and the uncertainty of the result, one can gain a tiny bit of information while disturbing the system as little as possible. If such a measurement is done many times, or on an ensemble of particles, it is possible to average out the results, to obtain a precise value.
First proposed in the 1980s, this method of teasing out information from the quantum system by a series of gentle pokes is known as weak measurement. While the idea of weak measurement in itself does not appear a radical departure from quantum formalism, “an entire new world appeared” as Popescu puts it. Indeed, Aharonov and his collaborators have spent the last four decades investigating all kinds of scenarios in which weak measurement can lead to unexpected consequences, with the quantum Cheshire cat being one they stumbled upon.
In their 2013 paper, Aharonov and colleagues imagined a simple optical interferometer set-up, in which the “cat” is a photon that can be in either the left or the right arm, while the “grin” is the photon’s circular polarization. The cat (the photon) is first prepared in a certain superposition state, known as pre-selection. After it enters the set-up, the cat can leave via several possible exits. The disembodiment between particle and property appears in the cases in which the particle emerges in a particular exit (post-selection).
Certain measurements, analysing the properties of the particle, are performed while the particle is in the interferometer (in between the pre- and post-selection). Being weak measurements, they have to be carried out many times to get the average. For certain pre- and post-selection, one finds the cat will be in the left arm while the grin is in the right. It’s a Cheshire cat disembodied from its grin.
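For readers who want the formal statement, the quantity extracted by such pre- and post-selected weak measurements is the “weak value”. For an observable A, pre-selected state |ψ⟩ and post-selected state |φ⟩, it reads:

```latex
A_{w} \;=\; \frac{\langle \phi \,|\, \hat{A} \,|\, \psi \rangle}{\langle \phi \,|\, \psi \rangle}
```

Unlike an eigenvalue, a weak value can lie far outside the spectrum of the observable and can even be complex; in the Cheshire-cat set-up, the weak value of the photon’s presence is non-zero in one arm while that of its polarization is non-zero only in the other.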
The mathematical description of this curious state of affairs was clear, but the interpretation seemed preposterous and the original article spent over a year in peer review, with its eventual publication still sparking criticism. Soon after, experiments with polarized neutrons (Nature Comms 5 4492) and photons (Phys. Rev. A 94 012102) tested the original team’s set-up. However, these experiments and subsequent tests, despite confirming the theoretical predictions, did not settle the debate – after all, the issue was with the interpretation.
A quantum of probabilities
To come to terms with this perplexing notion, think of the type of pre- and post-selected set-up as a pachinko machine, in which a ball starts at the top in a single pre-selected slot and goes down through various obstacles to end up in a specific point (post-selection): the jackpot hole. If you count how many balls hit the jackpot hole, you can calculate the probability distribution. In the classical world, measuring the position and properties of the ball at different points, say with a camera, is possible.
This observation will not affect the trajectory of the ball, or the probability of the jackpot. In a quantum version of the pachinko machine, the pre- and post-selection will work in a similar way, except you could feed in balls in superposition states. A weak measurement will not disturb the system, so multiple measurements can tease out the probability of certain outcomes. The result will not be an eigenvalue, which corresponds to a physical property of the system, but a “weak value”, and how one should interpret these values is not clear-cut.
Quantum Cheshire cats are a curious phenomenon, whereby the property of a quantum particle can be completely separate from the particle itself. A photon’s polarization, for example, may exist at a location where there is no photon at all. In this illustration, our quantum Cheshire cats (the photons) are at a pachinko parlour. Depending on certain pre- and post-selection criteria, the cats end up in one location – in one arm of the detector or the other – and their grins in a different location, on the chairs.
To make sense of this quantum behaviour, we need an intuitive mental image, even a limited one. This is why quantum Cheshire cats are a powerful metaphor, but they are also more than that, guiding researchers into new directions. Indeed, since the initial discovery, Aharonov, Popescu and colleagues have stumbled upon more surprises.
In 2021 they generalized the quantum Cheshire cat effect to a dynamical picture in which the “disembodied” property can propagate in space (Nature Comms 12 4770). For example, there could be a flow of angular momentum without anything carrying it (Phys. Rev. A 110 L030201). In another generalization, Aharonov imagined a massive particle with a mass that could be measured in one place with no momentum, while its momentum could be measured in another place without its mass (Quantum 8 1536). A gedankenexperiment to test this effect would involve a pair of nested Mach–Zehnder interferometers with moving mirrors and beam splitters.
Provocative interpretations
If you find these ideas bewildering, you’re in good company. “They’re brain teasers,” explains Jonte Hance, a researcher in quantum foundations at Newcastle University, UK. In fact, Hance thinks that quantum Cheshire cats are a great way to get people interested in the foundations of quantum mechanics.
Physicists were too busy applying quantum mechanics to various problems to be bothered with foundational questions
Sure, the early years of quantum physics saw famous debates between Niels Bohr and Albert Einstein, culminating in the Einstein–Podolsky–Rosen (EPR) paradox (Phys. Rev. 47 777) of 1935. But after that, physicists were too busy applying quantum mechanics to various problems to be bothered with foundational questions.
This lack of interest in quantum fundamentals is perfectly illustrated by two anecdotes, the first involving Aharonov himself. When he was studying physics at Technion in Israel in the 1950s, he asked Nathan Rosen (the R of the EPR) about working on the foundations of quantum mechanics. The topic was deemed so unfashionable that Rosen advised him to focus on applications. Luckily, Aharonov ignored the advice and went on to work with American quantum theorist David Bohm.
The other story concerns Alain Aspect, who in 1975 visited CERN physicist John Bell to ask for advice on his plans to do an experimental test of Bell’s inequalities to settle the EPR paradox. Bell’s very first question was not about the details of the experiment – but whether Aspect had a permanent position (Nature Phys. 3 674). Luckily, Aspect did, so he carried out the test, which went on to earn him a share of the 2022 Nobel Prize for Physics.
As quantum computing and quantum information began to emerge, there was a brief renaissance in quantum foundations culminating in the early 2010s. But over the past decade, with many aspects of quantum physics reaching commercial fruition, research interest has shifted firmly once again towards applications.
Despite popular science’s constant reminder of how “weird” quantum mechanics is, physicists often take the pragmatic “shut up and calculate” approach. Hance says that researchers “tend to forget how weird quantum mechanics is, and to me you need that intuition of it being weird”. Indeed, paradoxes like Schrödinger’s cat and EPR have attracted and inspired generations of physicists and have been instrumental in the development of quantum technologies.
The point of the quantum Cheshire cat, and related paradoxes, is to challenge our intuition and provoke us to think outside the box. That’s important even if applications may not be immediately in sight. “Most people agree that although we know the basic laws of quantum mechanics, we don’t really understand what quantum mechanics is all about,” says Popescu.
Aharonov and colleagues’ programme is to develop a correct intuition that can guide us further. “We strongly believe that one can find an intuitive way of thinking about quantum mechanics,” adds Popescu. That may, or may not, involve felines.
India must intensify its efforts in quantum technologies as well as boost private investment if it is to become a leader in the burgeoning field. That is according to the first report from India’s National Quantum Mission (NQM), which also warns that the country must improve its quantum security and regulation to make its digital infrastructure quantum-safe.
Approved by the Indian government in 2023, the NQM is an eight-year $750m (60bn INR) initiative that aims to make the country a leader in quantum tech. Its new report focuses on developments in four aspects of NQM’s mission: quantum computing; communication; sensing and metrology; and materials and devices.
Entitled India’s International Technology Engagement Strategy for Quantum Science, Technology and Innovation, the report finds that India’s research interests include error-correction algorithms for quantum computers. It is also involved in building quantum hardware with superconducting circuits, trapped atoms/ions and engineered quantum dots.
The NQM-supported Bengaluru-based startup QPiAI, for example, recently developed a 25-qubit superconducting quantum computer called “Indus”, although the qubits were fabricated abroad.
Ajay Sood, principal scientific advisor to the Indian government, told Physics World that while India is strong in “software-centric, theoretical and algorithmic aspects of quantum computing, work on completely indigenous development of quantum computing hardware is…at a nascent stage.”
Sood, who is a physicist by training, adds that while there are a few groups working on different platforms, these are at less than the 10-qubit stage. “[It is] important for [India] to have indigenous capabilities for fabricating qubits and other ancillary hardware for quantum computers,” he says.
India is also developing secure protocols and satellite-based systems and implementing quantum systems for precision measurements. QNu Labs – another Bengaluru startup – is, for example, developing a quantum-safe communication-chip module to secure satellite and drone communications with a built-in quantum randomness and security micro-stack.
Lagging behind
The report highlights the need for greater involvement of Indian industry in hardware-related activities. Unlike in other countries, industry funding in India is limited, with most of it coming from angel investors and little participation from institutional investors such as venture-capital firms, tech corporates and private-equity funds.
There are many areas of quantum tech that are simply not being pursued in India
Arindam Ghosh
The report also calls for more indigenous development of essential sensors and devices such as single-photon detectors, quantum repeaters, and associated electronics, with necessary testing facilities for quantum communication. “There is also room for becoming global manufacturers and suppliers for associated electronic or cryogenic components,” says Sood. “Our industry should take this opportunity.”
India must work on its quantum security and regulation as well, according to the report. It warns that the Indian financial sector, which is one of the major drivers for quantum tech applications, “risks lagging behind” in quantum security and regulation, with limited participation of Indian financial-service providers.
“Our cyber infrastructure, especially related to our financial systems, power grids, and transport systems, need to be urgently protected by employing the existing and evolving post quantum cryptography algorithms and quantum key distribution technologies,” says Sood.
India currently has about 50 quantum-related educational programmes in various universities and institutions. Yet Arindam Ghosh, who runs the Quantum Technology Initiative at the Indian Institute of Science, Bangalore, says that the country faces a lack of people going into quantum-related careers.
“In spite of [a] very large number of quantum-educated graduates, the human resource involved in developing quantum technologies is abysmally small,” says Ghosh. “As a result, there are many areas of quantum tech that are simply not being pursued in India.” Other problems, according to Ghosh, include “modest” government funding compared to other countries as well as “slow and highly bureaucratic” government machinery.
Sood, however, is optimistic, pointing out recent Indian initiatives such as setting up hardware fabrication and testing facilities, supporting start-ups as well as setting up a $1.2bn (100bn INR) fund to promote “deep-tech” startups. “[With such initiatives] there is every reason to believe that India would emerge even stronger in the field,” says Sood.
A new theoretical framework proposes that gravity may arise from entropy, offering a fresh perspective on the deep connections between geometry, quantum mechanics and statistical physics. Developed by Ginestra Bianconi, a mathematical physicist at Queen Mary University of London, UK, and published in Physical Review D, this modified version of gravity provides new quantum information theory insights on the well-established link between statistical mechanics and gravity that is rooted in the thermodynamic properties of black holes.
Quantum relative entropy
At the heart of Bianconi’s theory is the concept of quantum relative entropy (QRE). This is a fundamental concept of information theory, and it quantifies the difference in information encoded in two quantum states. More specifically, QRE is a measure of how much information of one quantum state is carried by another quantum state.
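For two quantum states described by density matrices ρ and σ, the standard information-theoretic definition of the quantum relative entropy is as follows (the operators entering Bianconi’s gravitational action are more elaborate, but the structure is the same):

```latex
S(\rho \,\|\, \sigma) \;=\; \mathrm{Tr}\!\left[\rho \left( \ln\rho - \ln\sigma \right)\right]
```

It vanishes when the two states coincide and grows as they become more distinguishable.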
Bianconi’s idea is that the metrics associated with spacetime are quantum operators that encode the quantum state of its geometry. Building on this geometrical insight, she proposes that the action for gravity is the QRE between two different metrics: one defined by the geometry of spacetime and another by the matter fields present within it. In this sense, the theory takes inspiration from John Wheeler’s famous description of gravity: “Matter tells space how to curve, and space tells matter how to move.” However, it also goes further, as it aims to make this relationship explicit in the mathematical formulation of gravity, framing it in terms of an action rooted in statistical mechanics and information theory.
Additionally, the theory adapts QRE to the Dirac-Kähler formalism extended to bosons, allowing for a more nuanced understanding of spacetime. The Dirac-Kähler formalism is a geometric reformulation of fermions using differential forms, unifying spinor and tensor descriptions in a coordinate-free way. In simpler terms, it offers an elegant way to describe particles like electrons using the language of geometry and calculus on manifolds.
The role of the G-field
For small energies and low values of spacetime curvature (the “low coupling” regime), the equations Bianconi presents reduce to the standard equations of Einstein’s general theory of relativity. Beyond this regime, the full modified Einstein equations can be written in terms of a new field, the G-field, that gives rise to a non-zero cosmological constant. Often associated with the accelerated expansion of the universe, the cosmological constant contributes to the still-mysterious substance known as dark energy, which is estimated to make up 68% of the mass-energy in the universe. A key feature of Bianconi’s entropy-based theory is that the cosmological constant is actually not constant, but dependent on the G-field. Hence, a key feature of the G-field is that it might provide new insight into what the cosmological constant really is, and where it comes from.
The G-field also has implications for black hole physics. In a follow-up work, Bianconi shows that a common solution in general relativity known as the Schwarzschild metric is an approximation, with the full solution requiring consideration of the G-field’s effects.
What does this mean for quantum gravity and cosmology?
The existence of a connection between black holes and entropy also raises the possibility that Bianconi’s framework could shed new light on the black hole information paradox. Since black holes are supposed to evaporate due to Hawking radiation, the paradox addresses the question of whether information that falls into a black hole is truly lost after evaporation. Namely, does a black hole destroy information forever, or is it somehow preserved?
Bianconi’s framework predicts that the QRE for the Schwarzschild black hole follows the area law, a key feature of black hole thermodynamics, suggesting that further exploration of this framework might lead to new answers about the fundamental nature of black holes.
Unlike other approaches to quantum gravity that are primarily phenomenological, Bianconi’s framework seeks to understand gravity from first principles by linking it directly to quantum information and statistical mechanics. When asked how she became interested in this line of research, she emphasizes the continuity between her previous work on the topology and geometry of higher-order networks, her work on the topological Dirac operator and her current pursuits.
“I was especially struck by a passage in Gian Francesco Giudice’s recent book Before the Big Bang, where a small girl asks, ‘If your book speaks about the universe, does it also speak about me?’” Bianconi says. “This encapsulates the idea that new bridges between different scientific domains could be key to advancing our understanding.”
Future directions
There is still much to explore in this approach. In particular, Bianconi hopes to extend this theory into second quantization, where fields are thought of as operators just as physical quantities (position, momentum, so on) are in first quantization. Additionally, the modified Einstein equations derived in this theory have yet to be fully solved, and understanding the full implications of the theory for classical gravity is an ongoing challenge.
Though the research is still in its early stages, Bianconi emphasizes that it could eventually lead to testable hypotheses. The relationship between the theory’s predicted cosmological constant and experimental measurements, for example, could offer a way to test it against existing data.
Vaire Computing is a start-up seeking to commercialize computer chips based on the principles of reversible computing – a topic Earley studied during her PhD in applied mathematics and theoretical physics at the University of Cambridge, UK. The central idea behind reversible computing is that reversible operations use much less energy, and thus generate much less waste heat, than those in conventional computers.
What skills do you use every day in your job?
In an early-stage start-up environment, you have to wear lots of different hats. Right now, I’m planning for the next few years, but I’m also very deep into the engineering side of Vaire, which spans a lot of different areas.
The skill I use most is my ability to jump into a new field and get up to speed with it as quickly as possible, because I cannot claim to be an expert in all the different areas we work in. I cannot be an expert in integrated circuit design as well as developing electronic design automation tooling as well as building better resonators. But what I can do is try to learn about all these things at as deep a level as I can, very quickly, and then guide the people around me with higher-level decisions while also having a bit of fun and actually doing some engineering work.
What do you like best and least about your job?
We have so many great people at Vaire, and being able to talk with them and discuss all the most interesting aspects of their specialities is probably the part I like best. But I’m also enjoying the fact that in a few years, all this work will culminate in an actual product based on things I worked on when I was in academia. I love theory, and I love thinking about what could be possible in hundreds of years’ time, but seeing an idea get closer and closer to reality is great.
The part I have more of a love-hate relationship with is just how intense this job is. I’m probably intrinsically a workaholic. I don’t think I’ve ever had a good balance in terms of how much time I spend on work, whether now or when I was doing my PhD or even before. But when you are responsible for making your company succeed, that degree of intensity becomes unavoidable. It feels difficult to take breaks or to feel comfortable taking breaks, but I hope that as our company grows and gets more structured, that part will improve.
What do you know now that you wish you’d known when you were starting out in your career?
There are so many specifics of what it means to build a computer chip that I wish I’d known. I may even have suffered a little bit from the Dunning–Kruger effect [in which people with limited experience of a particular topic overestimate their knowledge] at the beginning, thinking, “I know what a transistor is like. How hard can it be to build a large-scale integrated circuit?”
It turns out it’s very, very hard, and there’s a lot of complexity around it. When I was a PhD student, it felt like there wasn’t that big a gap between theory and implementation. But there is, and while to some extent it’s not possible to know about something until you’ve done it, I wish I’d known a lot more about chip design a few years ago.
Quantum transducer A niobium microwave LC resonator (silver) is capacitively coupled to two hybridized lithium niobate racetrack resonators in a paperclip geometry (black) to exchange energy between the microwave and optical domains using the electro-optic effect. (Courtesy: Lončar group/Harvard SEAS)
The future of quantum communication and quantum computing technologies may well revolve around superconducting qubits and quantum circuits, which have already been shown to improve processing capabilities over classical supercomputers – even when there is noise within the system. This scenario could be one step closer with the development of a novel quantum transducer by a team headed up at the Harvard John A Paulson School of Engineering and Applied Sciences (SEAS).
Realising this future will rely on systems having hundreds (or more) logical qubits (each built from multiple physical qubits). However, because superconducting qubits require ultralow operating temperatures, large-scale refrigeration is a major challenge – there is no technology available today that can provide the cooling power to realise such large-scale qubit systems.
Superconducting microwave qubits are a promising option for quantum processor nodes, but they currently require bulky microwave components. These components create a lot of heat that can easily disrupt the refrigeration systems cooling the qubits.
One way to combat this cooling conundrum is to use a modular approach, with small-scale quantum processors connected via quantum links, and each processor having its own dilution refrigerator. Superconducting qubits can be accessed using microwave photons between 3 and 8 GHz, thus the quantum links could be used to transmit microwave signals. The downside of this approach is that it would require cryogenically cooled links between each subsystem.
On the other hand, optical signals at telecoms frequency (around 200 THz) can be generated using much smaller form factor components, leading to lower thermal loads and noise, and can be transmitted via low-loss optical fibres. The transduction of information between optical and microwave frequencies is therefore key to controlling superconducting microwave qubits without the high thermal cost.
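To put those frequencies in energy terms (a back-of-the-envelope comparison, not a figure quoted in the study):

```latex
E_{\mu\mathrm{w}} = h f_{\mu\mathrm{w}} \approx (4.14\times10^{-15}\,\mathrm{eV\,s})\times(5\times10^{9}\,\mathrm{Hz}) \approx 2\times10^{-5}\,\mathrm{eV},
\qquad
E_{\mathrm{opt}} = h f_{\mathrm{opt}} \approx (4.14\times10^{-15}\,\mathrm{eV\,s})\times(2\times10^{14}\,\mathrm{Hz}) \approx 0.8\,\mathrm{eV}
```

so a telecoms-band photon carries roughly 40 000 times more energy than a 5 GHz microwave photon – the gap a transducer must bridge coherently.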
The large energy gap between microwave and optical photons makes it difficult to control microwave qubits with optical signals and requires a microwave–optical quantum transducer (MOQT). These MOQTs provide a coherent, bidirectional link between microwave and optical frequencies while preserving the quantum states of the qubit. A team led by SEAS researcher Marko Lončar has now created such a device, describing it in Nature Physics.
Lončar and collaborators have developed a thin-film lithium niobate (TFLN) cavity electro-optic (CEO)-based MOQT (clad with silica to aid thermal dissipation and mitigate optical losses) that converts optical frequencies into microwave frequencies with low loss. The team used the CEO-MOQT to facilitate coherent optical driving of a superconducting qubit (controlling the state of the quantum system by manipulating its energy).
The on-chip transducer system contains three resonators: a microwave LC resonator capacitively coupled to two optical resonators using the electro-optic effect. The device creates hybridized optical modes in the transducer that enable a resonance-enhanced exchange of energy between the microwave and optical modes.
The transducer uses a process known as difference frequency generation to create a new frequency output from two input frequencies. The optical modes – an optical pump in a classical red-pumping regime and an optical idler – interact to generate a microwave signal at the qubit frequency, in the form of a shaped, symmetric single microwave photon.
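Schematically, energy conservation in this difference-frequency process ties the two optical tones to the generated microwave photon (a generic relation rather than a detail taken from the paper):

```latex
\omega_{\mathrm{microwave}} \;=\; \left|\,\omega_{\mathrm{pump}} - \omega_{\mathrm{idler}}\,\right| \;\approx\; 2\pi\times(3\text{–}8)\ \mathrm{GHz}
```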
This microwave signal is then transmitted from the transducer to a superconducting qubit (in the same refrigerator system) using a coaxial cable. The qubit is coupled to a readout resonator that enables its state to be read by measuring the transmission of a readout pulse.
The MOQT operated with a peak conversion efficiency of 1.18% (in both microwave-to-optical and optical-to-microwave regimes), low microwave noise generation and the ability to drive Rabi oscillations in a superconducting qubit. Because of the low noise, the researchers state that stronger optical-pump fields could be used without affecting qubit performance.
Having effectively demonstrated the ability to control superconducting circuits with optical light, the researchers suggest a number of future improvements that could increase the device performance by orders of magnitude. For example, microwave and optical coupling losses could be reduced by fabricating a single-ended microwave resonator directly onto the silicon wafer instead of on silica. A flux tuneable microwave cavity could increase the optical bandwidth of the transducer. Finally, the use of improved measurement methods could improve control of the qubits and allow for more intricate gate operations between qubit nodes.
The researchers suggest this type of device could be used for networking superconductor qubits when scaling up quantum systems. The combination of this work with other research on developing optical readouts for superconducting qubit chips “provides a path towards forming all-optical interfaces with superconducting qubits…to enable large scale quantum processors,” they conclude.
Nonlocal correlations that define quantum entanglement could be reconciled with Einstein’s theory of relativity if space–time had two temporal dimensions. That is the implication of new theoretical work that extends nonlocal hidden variable theories of quantum entanglement and proposes a potential experimental test.
Marco Pettini, a theoretical physicist at Aix Marseille University in France, says the idea arose from conversations with the mathematical physicist Roger Penrose – who shared the 2020 Nobel Prize for Physics for showing that the general theory of relativity predicted black holes. “He told me that, from his point of view, quantum entanglement is the greatest mystery that we have in physics,” says Pettini. The puzzle is encapsulated by Bell’s inequality, which was derived in the mid-1960s by the Northern Irish physicist John Bell.
Bell’s breakthrough was inspired by the 1935 Einstein–Podolsky–Rosen paradox, a thought experiment in which entangled particles in quantum superpositions (using the language of modern quantum mechanics) travel to spatially separated observers Alice and Bob. They make measurements of the same observable property of their particles. As they are superposition states, the outcome of neither measurement is certain before it is made. However, as soon as Alice measures the state, the superposition collapses and Bob’s measurement is now fixed.
Quantum scepticism
A sceptic of quantum indeterminacy could hypothetically suggest that the entangled particles carried hidden variables all along, so that when Alice made her measurement, she simply found out the state that Bob would measure rather than actually altering it. If the observers are separated by a distance so great that information about the hidden variable’s state would have to travel faster than light between them, then hidden variable theory violates relativity. Bell derived an inequality showing the maximum degree of correlation between the measurements possible if each particle carried such a “local” hidden variable, and showed it was indeed violated by quantum mechanics.
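Bell’s bound is most often quoted in its CHSH form (stated here for reference): with two measurement settings per observer and correlation functions E, any local hidden variable theory obeys

```latex
S \;=\; E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2,
```

whereas quantum mechanics allows |S| to reach 2√2 ≈ 2.83 for suitably chosen entangled states and measurement settings.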
A more sophisticated alternative investigated by the theoretical physicists David Bohm and his student Jeffrey Bub, as well as by Bell himself, is a nonlocal hidden variable. This postulates that the particle – including the hidden variable – is indeed in a superposition and defined by an evolving wavefunction. When Alice makes her measurement, this superposition collapses. Bob’s value then correlates with Alice’s. For decades, researchers believed the wavefunction collapse could travel faster than light without allowing superluminal exchange of information – therefore without violating the special theory of relativity. However, in 2012 researchers showed that any finite-speed collapse propagation would enable superluminal information transmission.
“I met Roger Penrose several times, and while talking with him I asked ‘Well, why couldn’t we exploit an extra time dimension?’,” recalls Pettini. Particles could have five-dimensional wavefunctions (three spatial, two temporal), and the collapse could propagate through the extra time dimension – allowing it to appear instantaneous. Pettini says that the problem Penrose foresaw was that this would enable time travel, and the consequent possibility that one could travel back through the “extra time” to kill one’s ancestors or otherwise violate causality. However, Pettini says he “recently found in the literature a paper which has inspired some relatively standard modifications of the metric of an enlarged space–time in which massive particles are confined with respect to the extra time dimension…Since we are made of massive particles, we don’t see it.”
Toy model
Pettini believes it might be possible to test this idea experimentally. In a new paper, he proposes a hypothetical experiment (which he describes as a toy model), in which two sources emit pairs of entangled, polarized photons simultaneously. The photons from one source are collected by recipients Alice and Bob, while the photons from the other source are collected by Eve and Tom using identical detectors. Alice and Eve compare the polarizations of the photons they detect. Alice’s photon must, by fundamental quantum mechanics, be entangled with Bob’s photon, and Eve’s with Tom’s, but otherwise simple quantum mechanics gives no reason to expect any entanglement in the system.
Pettini proposes, however, that Alice and Eve should be placed much closer together, and closer to the photon sources, than to the other observers. In this case, he suggests, when the wavefunction of Alice’s photon collapses and the entanglement is communicated to Bob’s photon through the extra time dimension (or Eve’s to Tom’s), information would also be transmitted between the much closer photons received by Alice and Eve. This could affect the interference between Alice’s and Eve’s photons and cause a violation of Bell’s inequality. “[Alice and Eve] would influence each other as if they were entangled,” says Pettini. “This would be the smoking gun.”
Bub, now a distinguished professor emeritus at the University of Maryland, College Park, is not holding his breath. “I’m intrigued by [Pettini] exploiting my old hidden variable paper with Bohm to develop his two-time model of entanglement, but to be frank I can’t see this going anywhere,” he says. “I don’t feel the pull to provide a causal explanation of entanglement, and I don’t any more think of the ‘collapse’ of the wave function as a dynamical process.” He says the central premise of Pettini’s model – that adding an extra time dimension could allow the transmission of entanglement between otherwise unrelated photons – is “a big leap”. “Personally, I wouldn’t put any money on it,” he says.
A burst of solar wind triggered a planet-wide heatwave in Jupiter’s upper atmosphere, say astronomers at the University of Reading, UK. The hot region, which had a temperature of over 750 K, propagated at thousands of kilometres per hour and stretched halfway around the planet.
“This is the first time we have seen something like a travelling ionospheric disturbance, the likes of which are found on Earth, at a giant planet,” says James O’Donoghue, a Reading planetary scientist and lead author of a study in Geophysical Research Letters on the phenomenon. “Our finding shows that Jupiter’s atmosphere is not as self-contained as we thought, and that the Sun can drive dramatic, global changes, even this far out in the solar system.”
Jupiter’s upper atmosphere begins hundreds of kilometres above its surface and has two components. One is a neutral thermosphere composed mainly of molecular hydrogen. The other is a charged ionosphere comprising electrons and ions. Jupiter also has a protective magnetic shield, or magnetosphere.
When emissions from Jupiter’s volcanic moon, Io, become ionized by extreme ultraviolet radiation from the Sun, the resulting plasma becomes trapped in the magnetosphere. This trapped plasma then generates magnetosphere-ionosphere currents that heat the planet’s polar regions and produce aurorae. Thanks to this heating, the hottest places on Jupiter, at around 900 K, are its poles. From there, temperatures gradually decrease, reaching 600 K at the equator.
Quite a different temperature-gradient pattern
In 2021, however, O’Donoghue and colleagues observed quite a different temperature-gradient pattern in near-infrared spectral data recorded by the 10-metre Keck II telescope in Hawaii, US, during an event in 2017. When they analysed these data, they found an enormous hot region far from Jupiter’s aurorae and stretching across 180° in longitude – half the planet’s circumference.
“At the time, we could not definitively explain this hot feature, which is roughly 150 K hotter than the typical ambient temperature of Jupiter,” says O’Donoghue, “so we re-analysed the Keck data using updated solar wind propagation models.”
Two instruments on NASA’s Juno spacecraft were pivotal in the re-analysis, he explains. The first, called Waves, can measure electron densities locally. Its data showed that these electron densities ramped up as the spacecraft approached Jupiter’s magnetosheath, which is the region between the planet’s magnetic field and the solar wind. The second instrument was Juno’s magnetometer, which recorded measurements that backed up the Waves-based analyses, O’Donoghue says.
A new interpretation
In their latest study, the Reading scientists analysed a burst of fast solar wind that emanated from the Sun in January 2017 and propagated towards Jupiter. They found that a high-speed stream of this wind arrived several hours before the Keck telescope recorded the data that led them to identify the hot region.
“Our analysis of Juno’s magnetometer measurements also showed that this spacecraft exited the magnetosphere of Jupiter early,” says O’Donoghue. “This is a strong sign that strong solar winds probably compressed Jupiter’s magnetic field several hours before the hot region appeared.
“We therefore see the hot region emerging as a response to solar wind compression: the aurorae flared up and heat spilled equatorward.”
The result shows that the Sun can significantly reshape the global energy balance in Jupiter’s upper atmosphere, he tells Physics World. “That changes how we think about energy balance at all giant planets, not just Jupiter, but potentially Saturn, Uranus, Neptune and exoplanets too,” he says. “It also shows that solar wind can trigger complex atmospheric responses far from Earth and it could help us understand space weather in general.”
The Reading researchers say they would now like to hunt for more of these events, especially in the southern hemisphere of Jupiter where they expect a mirrored response. “We are also working on measuring wind speeds and temperatures across more of the planet and at different times to better understand how often this happens and how energy moves around,” O’Donoghue reveals. “Ultimately, we want to build a more complete picture of how space weather shapes Jupiter’s upper atmosphere and drives (or interferes) with global circulation there.”
The world’s smallest pacemaker to date is smaller than a single grain of rice, optically controlled and dissolves after it’s no longer needed. According to researchers involved in the work, the pacemaker could work in human hearts of all sizes that need temporary pacing, including those of newborn babies with congenital heart defects.
“Our major motivation was children,” says Igor Efimov, a professor of medicine and biomedical engineering, in a press release from Northwestern University. Efimov co-led the research with Northwestern bioelectronics pioneer John Rogers.
“About 1% of children are born with congenital heart defects – regardless of whether they live in a low-resource or high-resource country,” Efimov explains. “Now, we can place this tiny pacemaker on a child’s heart and stimulate it with a soft, gentle, wearable device. And no additional surgery is necessary to remove it.”
The current clinical standard-of-care involves sewing pacemaker electrodes directly onto a patient’s heart muscle during surgery. Wires from the electrodes protrude from the patient’s chest and connect to an external pacing box. Placing the pacemakers – and removing them later – does not come without risk. Complications include infection, dislodgment, torn or damaged tissues, bleeding and blood clots.
To minimize these risks, the researchers sought to develop a dissolvable pacemaker, which they introduced in Nature Biotechnology in 2021. By varying the composition and thickness of materials in the devices, Rogers’ lab can control how long the pacemaker functions before dissolving. The dissolvable device also eliminates the need for bulky batteries and wires.
“The heart requires a tiny amount of electrical stimulation,” says Rogers in the Northwestern release. “By minimizing the size, we dramatically simplify the implantation procedures, we reduce trauma and risk to the patient, and, with the dissolvable nature of the device, we eliminate any need for secondary surgical extraction procedures.”
Light-controlled pacing When the wearable device (left) detects an irregular heartbeat, it emits light to activate the pacemaker. (Courtesy: John A Rogers/Northwestern University)
The latest iteration of the device – reported in Nature – advances the technology further. The pacemaker is paired with a small, soft, flexible, wireless device that is mounted onto the patient’s chest. The skin-interfaced device continuously captures electrocardiogram (ECG) data. When it detects an irregular heartbeat, it automatically shines a pulse of infrared light to activate the pacemaker and control the pacing.
“The new device is self-powered and optically controlled – totally different than our previous devices in those two essential aspects of engineering design,” says Rogers. “We moved away from wireless power transfer to enable operation, and we replaced RF wireless control strategies – both to eliminate the need for an antenna (the size-limiting component of the system) and to avoid the need for external RF power supply.”
Measurements demonstrated that the pacemaker – which is 1.8 mm wide, 3.5 mm long and 1 mm thick – delivers as much stimulation as a full-sized pacemaker. Initial studies in animals and in the human hearts of organ donors suggest that the device could work in human infants and adults. The devices are also versatile, the researchers say, and could be used across different regions of the heart or the body. They could also be integrated with other implantable devices for applications in nerve and bone healing, treating wounds and blocking pain.
The next steps for the research (supported by the Querrey Simpson Institute for Bioelectronics, the Leducq Foundation and the National Institutes of Health) include further engineering improvements to the device. “From the translational standpoint, we have put together a very early-stage startup company to work individually and/or in partnerships with larger companies to begin the process of designing the device for regulatory approval,” Rogers says.
In a conversation with Physics World’s Matin Durrani, Meredith talks about the importance of semiconductors in a hi-tech economy and why it is crucial for the UK to have a homegrown semiconductor industry.
Founded in 2020, CISM moved into a new, state-of-the-art £50m building in 2023 and is now in its first full year of operation. Meredith explains how technological innovation and skills training at CISM are supporting chipmakers in the M4 hi-tech corridor, which begins in Swansea in South Wales and stretches eastward to London.
Harvard University is suing the Trump administration over its plan to block up to $9bn of government research grants to the institution. The suit, filed in a federal court on 21 April, claims that the administration’s “attempt to coerce and control” Harvard violates the academic freedom protected by the first amendment of the US constitution.
The action comes in the wake of the US administration claiming that Harvard and other universities have not protected Jewish students during pro-Gaza campus demonstrations. Columbia University has already agreed to change its teaching policies and clamp down on demonstrations in the hope of regaining some $400m of government grants.
Harvard president Alan Garber also sought negotiations with the administration on ways that it might satisfy its demands. But a letter sent to Garber dated 11 April, signed by three Trump administration officials, asserted that the university had “failed to live up to both the intellectual and civil rights conditions that justify federal investments”.
The letter demanded that Harvard reform and restructure its governance, stop all diversity, equality and inclusion (DEI) programmes and reform how it hires staff and students. It also said Harvard must stop recruiting international students who are “hostile to American values” and provide an audit on “viewpoint diversity” on admissions and hiring.
Some administration sources suggested that the letter, which effectively insists on government oversight of Harvard’s affairs, was an internal draft sent to Harvard by mistake. Nevertheless, Garber decided to end negotiations, leading Harvard to instead sue the government over the blocked funds.
A letter on 14 April from Harvard’s lawyers states that the university is “committed to fighting antisemitism and other forms of bigotry in its community”. It adds that it is “open to dialogue” about what it has done, and is planning to do, to “improve the experience of every member” of its community but concludes that Harvard “is not prepared to agree to demands that go beyond the lawful authority of this or any other administration”.
Writing in an open letter to the community dated 22 April, Garber says that “we stand for the values that have made American higher education a beacon for the world”. The administration has hit back by threatening to withdraw Harvard’s non-profit status, tax its endowment and jeopardise its ability to enrol overseas students, who currently make up more than 27% of its intake.
Budget woes
The Trump administration is also planning swingeing cuts to government science agencies. If its budget request for 2026 is approved by Congress, funding for NASA’s Science Mission Directorate would be almost halved from $7.3bn to $3.9bn. The Nancy Grace Roman Space Telescope, a successor to the Hubble and James Webb space telescopes, would be axed. Two missions to Venus – the DAVINCI atmosphere probe and the VERITAS surface-mapping project – as well as the Mars Sample Return mission would lose their funding too.
“The impacts of these proposed funding cuts would not only be devastating to the astronomical sciences community, but they would also have far-reaching consequences for the nation,” says Dara Norman, president of the American Astronomical Society. “These cuts will derail not only cutting-edge scientific advances, but also the training of the nation’s future STEM workforce.”
The National Oceanic and Atmospheric Administration (NOAA) also stands to lose key programmes, with the budget for its Ocean and Atmospheric Research Office slashed from $485m to just over $170m. Surviving programmes from the office, including research on tornado warning and ocean acidification, would move to the National Weather Service and National Ocean Service.
“This administration’s hostility toward research and rejection of climate science will have the consequence of eviscerating the weather forecasting capabilities that this plan claims to preserve,” says Zoe Lofgren, a senior Democrat who sits on the House of Representatives’ Science, Space, and Technology Committee.
The National Science Foundation (NSF), meanwhile, is unlikely to receive $234m for major building projects this financial year, which could spell the end of the Horizon supercomputer being built at the University of Texas at Austin. The NSF has already halved the number of graduate students in its research fellowship programme, while Science magazine reports that the agency is calling back all grant proposals that had been approved but not signed off, apparently to check that awardees conform to Trump’s stance on DEI.
A survey of 292 department chairs at US institutions in early April, carried out by the American Institute of Physics, reveals that almost half of respondents are experiencing or anticipate cuts in federal funding in the coming months. Entitled Impacts of Restrictions on Federal Grant Funding in Physics and Astronomy Graduate Programs, the report also says that the number of first-year graduate students in physics and astronomy is expected to drop by 13% in the next enrolment.
Update: 25/04/2025: Sethuraman Panchanathan has resigned as NSF director five years into his six-year term. Panchanathan took up the position in 2020 during Trump’s first term as US President. “I believe that I have done all I can to advance the mission of the agency and feel that it is time to pass the baton to new leadership,” Panchanathan said in a statement yesterday. “This is a pivotal moment for our nation in terms of global competitiveness. We must not lose our competitive edge.”
A series of spectacular images of the cosmos has been released to celebrate the Hubble Space Telescope‘s 35 years in space. The images include pictures of Mars, planetary nebulae and a spiral galaxy.
Hubble was launched into low-Earth orbit in April 1990, stowed in the payload bay of the space shuttle Discovery. The telescope experienced a difficult start as its 2.4 m primary mirror suffered from spherical aberration – a flaw in the mirror’s curvature that prevented light from being brought to a single focus. The problem was fixed three years later during a daring spacewalk in which astronauts successfully installed the corrective optics package COSTAR.
During Hubble’s operational life, the telescope has made nearly 1.7 million observations, studying approximately 55,000 astronomical targets. Its discoveries have resulted in over 22,000 papers and over 1.3 million citations.
Over more than three decades of operation, Hubble has allowed astronomers to see astronomical changes such as seasonal variability on the planets in our solar system, black-hole jets travelling at nearly the speed of light as well as stellar convulsions, asteroid collisions and expanding supernova bubbles.
Despite 35 years in orbit around the Earth, Hubble is still one of the most sought-after observatories, with demand for observing time oversubscribed by 6:1.
“[Hubble’s] stunning imagery inspired people across the globe, and the data behind those images revealed surprises about everything from early galaxies to planets in our own solar system,” notes Shawn Domagal-Goldman, acting director of NASA’s astrophysics division. “The fact that it is still operating today is a testament to the value of our flagship observatories.”
Worms move faster in an environment riddled with randomly-placed obstacles than they do in an empty space. This surprising observation by physicists at the University of Amsterdam in the Netherlands can be explained by modelling the worms as polymer-like “active matter”, and it could come in handy for developers of robots for soil aeration, fertility treatments and other biomedical applications.
When humans move, the presence of obstacles – disordered or otherwise – has a straightforward effect: it slows us down, as anyone who has ever driven through “traffic calming” measures like speed bumps and chicanes can attest. Worms, however, are different, says Antoine Deblais, who co-led the new study with Rosa Sinaasappel and theorist colleagues in Sara Jabbari Farouji’s group. “The arrangement of obstacles fundamentally changes how worms move,” he explains. “In disordered environments, they spread faster as crowding increases, while in ordered environments, more obstacles slow them down.”
A maze of cylindrical pillars
The team obtained this result by placing single living worms at the bottom of a water chamber containing a 50 × 50 cm array of cylindrical pillars, each with a radius of 2.5 mm. By tracking the worms’ movement and shape changes with a camera for two hours, the scientists could see how the animals behaved when faced with two distinct pillar arrangements: a periodic (square lattice) structure and a disordered array. The minimum distance between any two pillars was set to the characteristic width of a worm (around 0.5 mm) to ensure they could always pass through.
“By varying the number and arrangement of the pillars (up to 10 000 placed by hand!), we tested how different environments affect the worm’s movement,” Sinaasappel explains. “We also reduced or increased the worm’s activity by lowering or raising the temperature of the chamber.”
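As a rough illustration of the two geometries described above (not the authors’ own analysis code), the ordered and disordered pillar arrangements, with a minimum gap of about one worm width between pillar surfaces, could be generated along the following lines in Python:

import numpy as np

rng = np.random.default_rng(seed=1)

BOX = 500.0      # chamber side length in mm (50 cm)
R_PILLAR = 2.5   # pillar radius in mm
GAP_MIN = 0.5    # minimum gap between pillar surfaces, roughly one worm width, in mm
D_MIN = 2 * R_PILLAR + GAP_MIN   # minimum centre-to-centre distance

def ordered_lattice(n_per_side):
    # Pillar centres on a periodic square lattice spanning the chamber
    xs = np.linspace(R_PILLAR, BOX - R_PILLAR, n_per_side)
    return np.array([(x, y) for x in xs for y in xs])

def disordered_array(n_pillars, max_tries=2_000_000):
    # Random sequential placement with a hard minimum separation
    centres = []
    tries = 0
    while len(centres) < n_pillars and tries < max_tries:
        tries += 1
        candidate = rng.uniform(R_PILLAR, BOX - R_PILLAR, size=2)
        if all(np.hypot(*(candidate - c)) >= D_MIN for c in centres):
            centres.append(candidate)
    return np.array(centres)

# Example: comparable pillar numbers in each geometry
ordered = ordered_lattice(50)        # 2500 pillars on a square lattice
disordered = disordered_array(2500)  # 2500 randomly placed pillars

The fraction of the chamber floor covered by pillars – the quantity the team varied – then follows directly from the pillar count and radius.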
These experiments showed that when the chamber contained a “maze” of obstacles placed at random, the worms moved faster, not slower. The same thing happened when the researchers increased the number of obstacles. More surprisingly still, the worms got through the maze faster when the temperature was lower, even though the cold reduced their activity.
Active polymer-like filaments
To explain these counterintuitive results, the team developed a statistical model that treats the worms as active polymer-like filaments and accounts for both the worms’ flexibility and the fact that they are self-driven. This analysis revealed that in a space containing disordered pillar arrays, the long-time diffusion coefficient of active polymers with a worm-like degree of flexibility increases significantly as the fraction of the surface occupied by pillars goes up. In regular, square-lattice arrangements, the opposite happens.
The team say that this increased diffusivity comes about because randomly-positioned pillars create narrow tube-like structures between them. These curvilinear gaps guide the worms and allow them to move as if they were straight rods for longer before they reorient. In contrast, ordered pillar arrangements create larger open spaces, or pores, in which worms can coil up. This temporarily traps them and they slow down.
Similarly, the team found that reducing the worm’s activity by lowering ambient temperatures increases a parameter known as its persistence length. This is essentially a measure of how straight the worm is, and straighter worms pass between the pillars more easily.
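For reference, the persistence length ℓp is conventionally defined through the decay of orientational correlations along a filament’s contour: in the standard worm-like-chain picture (a textbook relation rather than a result of this study), ⟨cos θ(s)⟩ = exp(−s/ℓp), where θ(s) is the angle between the filament’s local directions at two points separated by a contour distance s. A larger ℓp therefore means these correlations decay more slowly and the worm stays straighter over longer distances.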
“Self-tuning plays a key role”
Identifying the right active polymer model was no easy task, says Jabbari Farouji. One challenge was to incorporate the way worms adjust their flexibility depending on their surroundings. “This self-tuning plays a key role in their surprising motion,” says Jabbari Farouji, who credits this insight to team member Twan Hooijschuur.
Understanding how active, flexible objects move through crowded environments is crucial in physics, biology and biophysics, but the role of environmental geometry in shaping this movement was previously unclear, Jabbari Farouji says. The team’s discovery that movement in active, flexible systems can be controlled simply by adjusting the environment has important implications, adds Deblais.
“Such a capability could be used to sort worms by activity and therefore optimize soil aeration by earthworms or even influence bacterial transport in the body,” he says. “The insights gleaned from this study could also help in fertility treatments – for instance, by sorting sperm cells based on how fast or slow they move.”
Looking ahead, the researchers say they are now expanding their work to study the effects of different obstacle shapes (not just simple pillars), more complex arrangements and even movable obstacles. “Such experiments would better mimic real-world environments,” Deblais says.
Precise control over the generation of intense, ultrafast changes in magnetic fields called “magnetic steps” has been achieved by researchers in Hamburg, Germany. Using ultrashort laser pulses, Andrea Cavalleri and colleagues at the Max Planck Institute for the Structure and Dynamics of Matter disrupted the currents flowing through a superconducting disc. This alters the superconductor’s local magnetic environment on very short timescales – creating a magnetic step.
Magnetic steps rise to their peak intensity in just a few picoseconds, before decaying more slowly in several nanoseconds. They are useful to scientists because they rise and fall on timescales far shorter than the time it takes for materials to respond to external magnetic fields. As a result, magnetic steps could provide fundamental insights into the non-equilibrium properties of magnetic materials, and could also have practical applications in areas such as magnetic memory storage.
So far, however, progress in this field has been held back by technical difficulties in generating and controlling magnetic steps on ultrashort timescales. Previous strategies have employed technologies including microcoils, specialized antennas and circularly polarized light pulses. However, each of these schemes offers only a limited degree of control over the properties of the magnetic steps it generates.
Quenching supercurrents
Now, Cavalleri’s team has developed a new technique that involves the quenching of currents in a superconductor. Normally, these “supercurrents” will flow indefinitely without losing energy, and will act to expel any external magnetic fields from the superconductor’s interior. However, if these currents are temporarily disrupted on ultrashort timescales, a sudden change will be triggered in the magnetic field close to the superconductor – which could be used to create a magnetic step.
To achieve this, Cavalleri and colleagues applied ultrashort laser pulses to a thin, superconducting disc of yttrium barium copper oxide (YBCO), while also exposing the disc to an external magnetic field.
To detect whether magnetic steps had been generated, they placed a crystal of the semiconductor gallium phosphide in the superconductor’s vicinity. This material exhibits an extremely rapid Faraday response, in which the polarization of light passing through the semiconductor rotates in response to changes in the local magnetic field. Crucially, this rotation can occur on sub-picosecond timescales.
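For small fields this Faraday rotation is conventionally written as θ = V B L – a standard relation, not one specific to this experiment – where V is the Verdet constant of the gallium phosphide crystal, B is the magnetic-field component along the light’s propagation direction and L is the optical path length. A rapid change in B therefore maps directly onto a rapid change in the probe light’s polarization angle.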
In their experiments, the researchers monitored changes to the polarization of an ultrashort “probe” laser pulse passing through the semiconductor shortly after they quenched supercurrents in their YBCO disc using a separate ultrashort “pump” laser pulse.
“By abruptly disrupting the material’s supercurrents using ultrashort laser pulses, we could generate ultrafast magnetic field steps with rise times of approximately one picosecond – or one trillionth of a second,” explains team member Gregor Jotzu.
Broadband step
The approach was used to generate an extremely broadband magnetic step, containing frequencies ranging from sub-gigahertz to terahertz. In principle, this should make the technique suitable for studying magnetization in a diverse variety of materials.
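A rough way to see why a picosecond-scale rise produces such a broad spectrum is the standard rise-time–bandwidth rule of thumb for step-like signals (an order-of-magnitude estimate, not a figure from the paper): the spectral content extends up to roughly 0.35/t_rise ≈ 0.35/(1 ps) ≈ 0.35 THz, while the much slower nanosecond-scale decay fills in the spectrum down to sub-gigahertz frequencies.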
To demonstrate practical applications, the team used these magnetic steps to control the magnetization of a ferrimagnet. Such a magnet has opposing magnetic moments, but has a non-zero spontaneous magnetization in zero magnetic field.
When they placed a ferrimagnet on top of their superconductor and created a magnetic step, the step field caused the ferrimagnet’s magnetization to rotate.
For now, the magnetic steps generated through this approach do not have the speed or amplitude needed to switch materials like a ferrimagnet between stable states. Yet through further tweaks to the geometry of their setup, the researchers are confident that this ability may not be far out of reach.
“Our goal is to create a universal, ultrafast stimulus that can switch any magnetic sample between stable magnetic states,” Cavalleri says. “With suitable improvements, we envision applications ranging from phase transition control to complete switching of magnetic order parameters.”
Over the years, first as a PhD student and now as a postdoc, I have been approached by many students and early-career academics who have confided their problems with me. Their issues, which they struggled to deal with alone, ranged from anxiety and burnout to personal and professional relationships as well as mental-health concerns. Sadly, such discussions were not one-off incidents but seemed worryingly common in academia where people are often under pressure to perform, face uncertainty over their careers and need to juggle lots of different tasks simultaneously.
But it can be challenging to even begin to approach someone else with a problem. That first step can take days or weeks of mental preparation, so those of us who are approached for help have a responsibility to listen and act appropriately when someone does finally open up. This is especially so given that a supervisor, mentor, teaching assistant, or anybody in a position of seniority, may be the first point of contact when a difficulty becomes debilitating.
I am fortunate to have had excellent relationships with my PhD and postdoc supervisors – providing great examples to follow. Even then, however, it was difficult to subdue the feeling of nausea when I knocked on their office doors to have a difficult conversation. I was worried about their response and reaction and how they would judge me. While that first conversation is challenging for both parties, fortunately it does get easier from there.
Yet it can also be hard for the person who is trying to offer help, especially if they haven’t done so before. In fact, when colleagues began to confide in me, I’d had no formal preparation or training to support them. But through experience and some research, I found a few things that worked well in such complex situations. The first is to set and maintain boundaries – to know where your personal limits lie. This includes which topics are off limits and to what extent you will engage with somebody. Someone who has recently experienced bereavement, for example, may not want to engage deeply with a student who is enduring the same, and so should make it clear that they can’t offer help. Yet at the same time, that person may feel confident providing support for someone struggling with imposter syndrome – a feeling that you don’t deserve to be there and aren’t good at your work.
Time restrictions can also be used as boundaries. If you are working on a critical experiment, have an article deadline or are about to go on holiday, explain that you can only help them until a certain point, after which you will explore alternative solutions together. Mentors can also prepare in advance to help someone who is struggling. This could involve taking a mental-health first-aid course to support a person who experiences panic attacks or is relapsing into depression. It could also mean finding contact details for professionals, either on campus or beyond, who could help. While providing such information might sound trivial and unimportant, remember that for a person who is feeling overwhelmed, it can be hugely appreciated.
Following up
Sharing problems takes courage. It also requires trust because if information leaks out, rumours and accusations can spread quickly and worsen situations. It is, however, possible to ask more senior colleagues for advice without identifying anyone or their exact circumstances, perhaps in cases when dealing with less than amicable relationships with collaborators. It is also possible to let colleagues know that a particular person needs more support without explicitly saying why.
There are times, however, when that confidentiality must be broken. In my experience, this should always first be addressed with the person at hand, and the confidence broken only to somebody who is sure to have a concrete solution. For a student who is struggling with a particular subject, it could, for example, be the lecturer responsible for that course. For a colleague who is not coping with a divorce, say, it could be someone from HR or their supervisor. It could even be a university’s support team or the police for a student who has experienced sexual assault.
I have broken confidentiality at times and it can be nerve-wracking, but it is essential to provide the best possible support and take a situation that you cannot handle off your hands. Even if the issue has been handed over to someone else, it’s important to follow up with the person struggling, which helps them know they’re being heard and respected. Following up is not always a comfortable conversation, potentially invoking trauma or broaching sensitive topics. But it also allows them to admit that they are still looking for more support or that their situation has worsened.
A follow-up conversation could also be held in a discreet environment, with reassurance that nobody is obliged to go into detail. It may be as simple as asking “How are you feeling today?”. Letting someone express themselves without judgement can help them come to terms with their situation, let them speak or give them the confidence to approach you again.
Regularly reflecting on your boundaries and limits as well as having a good knowledge of possible resources can help you prepare for unexpected circumstances. It gives students and colleagues immediate care and relief at what might be their lowest point. But perhaps the most important aspect when approached by someone is to ask yourself this: “What kind of person would I want to speak to if I were struggling?”. That is the person you want to be.
Until now, researchers have had to choose between thermal and visible imaging: one reveals heat signatures while the other provides structural detail. Recording both and trying to align them manually – or, harder still, synchronizing them temporally – can be inconsistent and time-consuming. The result is data that is close but never quite complete. The new FLIR MIX is a game changer, capturing and synchronizing high-speed thermal and visible imagery at up to 1000 fps. Visible and high-performance infrared cameras with FLIR Research Studio software work together to deliver one data set with perfect spatial and temporal alignment – no missed details or second guessing, just a complete picture of fast-moving events.
Jerry Beeney
Jerry Beeney is a seasoned global business development leader with a proven track record of driving product growth and sales performance in the Teledyne FLIR Science and Automation verticals. With more than 20 years at Teledyne FLIR, he has played a pivotal role in launching new thermal imaging solutions, working closely with technical experts, product managers, and customers to align products with market demands and customer needs. Before assuming his current role, Beeney held a variety of technical and sales positions, including senior scientific segment engineer. In these roles, he managed strategic accounts and delivered training and product demonstrations for clients across diverse R&D and scientific research fields. Beeney’s dedication to achieving meaningful results and cultivating lasting client relationships remains a cornerstone of his professional approach.
Researchers at the University of Victoria in Canada are developing a low-cost radiotherapy system for use in low- and middle-income countries and geographically remote rural regions. Initial performance characterization of the proof-of-concept device produced encouraging results, and the design team is now refining the system with the goal of clinical commercialization.
This could be good news for people living in low-resource settings, where access to cancer treatment is an urgent global health concern. The WHO’s International Agency for Research on Cancer estimates that there are at least 20 million new cases of cancer diagnosed annually and 9.7 million annual cancer-related deaths, based on 2022 data. By 2030, approximately 75% of cancer deaths are expected to occur in low- and middle-income countries, due to rising populations, healthcare and financial disparities, and a general lack of personnel and equipment resources compared with high-income countries.
The team’s orthovoltage radiotherapy system, known as KOALA (kilovoltage optimized alternative for adaptive therapy), is designed to create, optimize and deliver radiation treatments in a single session. The device, described in Biomedical Physics & Engineering Express, consists of a dual-robot system with a 225 kVp X-ray tube mounted onto one robotic arm and a flat-panel detector mounted on the other.
The same X-ray tube can be used to acquire cone-beam CT (CBCT) images, as well as to deliver treatment, with a peak tube voltage of 225 kVp and a maximum tube current of 2.65 mA for a 1.2 mm focal spot. Due to its maximum reach of 2.05 m and collision restrictions, the KOALA system has a limited range of motion, achieving 190° arcs for both CBCT acquisition and treatments.
Device testing
To characterize the KOALA system, lead author Olivia Masella and colleagues measured X-ray spectra for tube voltages of 120, 180 and 225 kVp. At 120 and 180 kVp, they observed good agreement with spectra from SpekPy (a Python software toolkit for modelling X-ray tube spectra). For the 225 kVp spectrum, they found a notable overestimation at higher energies.
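Because SpekPy is an openly available Python package, a comparison of this kind can be sketched roughly as follows. This is illustrative only: the calls follow SpekPy’s published usage examples, and the anode angle and filtration values are placeholders rather than the parameters used by the KOALA team.

import numpy as np
import spekpy as sp  # SpekPy: toolkit for modelling X-ray tube spectra

for kvp in (120, 180, 225):
    spectrum_model = sp.Spek(kvp=kvp, th=12)   # th = anode angle in degrees (placeholder)
    spectrum_model.filter('Al', 2.0)           # example added filtration in mm (placeholder)
    energy, fluence = spectrum_model.get_spectrum()   # energy bins (keV) and fluence per bin
    mean_energy = np.average(energy, weights=fluence)
    print(f"{kvp} kVp model: mean photon energy ~ {mean_energy:.1f} keV")

# Measured spectra could then be loaded and compared bin-by-bin against the
# modelled fluence to quantify any over- or underestimation at high energies.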
The researchers performed dosimetric tests by measuring percent depth dose (PDD) curves for a 120 kVp imaging beam and a 225 kVp therapy beam, using solid water phantom blocks with a Farmer ionization chamber at various depths. They used an open beam with 40° divergence and a source-to-surface distance of 30 cm. They also measured 2D dose profiles with radiochromic film at various depths in the phantom for a collimated 225 kVp therapy beam and a dose of approximately 175 mGy at the surface.
The PDD curves showed excellent agreement between experiment and simulations at both 120 and 225 kVp, with dose errors of less than 2%. The 2D profile results were less than optimal. The team aims to correct this by using an optimized source-to-collimator distance (100 mm) and a custom-built motorized collimator.
Workflow proof-of-concept The team tested the workflow by acquiring a CBCT image of a dosimetry phantom containing radiochromic film, delivering a 190° arc to the phantom, and scanning and analysing the film. The CBCT image was then processed for Monte Carlo dose calculation and compared to the film dose. (Courtesy: CC BY 4.0/Biomed. Phys. Eng. Express 10.1088/2057-1976/adbcb2)
Geometrical evaluation conducted using a coplanar star-shot test showed that the system demonstrated excellent geometrical accuracy, generating a wobble circle with a diameter of just 0.3 mm.
Low costs and clinical practicality
Principal investigator Magdalena Bazalova-Carter describes the rationale behind the KOALA’s development. “I began the computer simulations of this project about 15 years ago, but the idea originated from Michael Weil, a radiation oncologist in Northern California,” she tells Physics World. “He and our industrial partner, Tai-Nang Huang, the president of Linden Technologies, are overseeing the progress of the project. Our university team is diversified, working in medical physics, computer science, and electrical and mechanical engineering. Orimtech, a medical device manufacturer and collaborator, developed the CBCT acquisition and reconstruction software and built the imaging prototype.”
Masella says that the team is keeping costs low in various ways. “Megavoltage X-rays are most commonly used in conventional radiotherapy, but KOALA’s design utilizes low-energy kilovoltage X-rays for treatment. By using a 225 kVp X-ray tube, the X-ray generation alone is significantly cheaper compared to a conventional linac, at a cost of USD $150,000 compared to $3 million,” she explains. “By operating in the kilovoltage instead of megavoltage range, only about 4 mm of lead shielding is required, instead of 6 to 7 feet of high-density concrete, bringing the shielding cost down from $2 million to $50,000. We also have incorporated components that are much lower cost than [those in] a conventional radiotherapy system.”
“Our novel iris collimator leaves are only 1-mm thick due to the lower treatment X-ray beam energy, and its 12 leaves are driven by a single motor,” adds Bazalova-Carter. “Although multileaf collimators with 120 leaves utilized with megavoltage X-ray radiotherapy are able to create complex fields, they are about 8-cm thick and are controlled by 120 separate motors. Given the high cost and mechanical vulnerability of multileaf collimators, our single motor design offers a more robust and reliable alternative.”
The team is currently developing a new motorized collimator, an improved treatment couch and a treatment planning system. They plan to improve CBCT imaging quality with hardware modifications, develop a CBCT-to-synthetic CT machine learning algorithm, refine the auto-contouring tool and integrate all of the software to smooth the workflow.
The researchers are planning to work with veterinarians to test the KOALA system with dogs diagnosed with cancer. They will also develop quality assurance protocols specific to the KOALA device using a dog-head phantom.
“We hope to demonstrate the capabilities of our system by treating beloved pets for whom available cancer treatment might be cost-prohibitive. And while our system could become clinically adopted in veterinary medicine, our hope is that it will be used to treat people in regions where conventional radiotherapy treatment is insufficient to meet demand,” they say.
Contrary to some theorists’ expectations, water does not form hydrogen bonds in its supercritical phase. This finding, which is based on terahertz spectroscopy measurements and simulations by researchers at Ruhr University Bochum, Germany, puts to rest a long-standing controversy and could help us better understand the chemical processes that occur near deep-sea vents.
Water is unusual. Unlike most other materials, it is denser as a liquid than it is as the ice that forms when it freezes. It also expands rather than contracting when it cools; becomes less viscous when compressed; and exists in no fewer than 17 different crystalline phases.
Another unusual property is that at high temperatures and pressures – above 374 °C and 221 bars – water mostly exists as a supercritical fluid, meaning it shares some properties with both gases and liquids. Though such extreme conditions are rare on the Earth’s surface (at least outside a laboratory), they are typical for the planet’s crust and mantle. They are also present in so-called black smokers, which are hydrothermal vents that exist on the seabed in certain geologically active locations. Understanding supercritical water is therefore important for understanding the geochemical processes that occur in such conditions, including the genesis of gold ore.
Supercritical water also shows promise as an environmentally friendly solvent for industrial processes such as catalysis, and even as a mediator in nuclear power plants. Before any such applications see the light of day, however, researchers need to better understand the structure of water’s supercritical phase.
Probing the hydrogen bonding between molecules
At ambient conditions, the tetrahedrally-arranged hydrogen bonds (H-bonds) in liquid water produce a three-dimensional H-bonded network. Many of water’s unusual properties stem from this network, but as it approaches its supercritical point, its structure changes.
Previous studies of this change have produced results that were contradictory or unclear at best. While some pointed to the existence of distorted H-bonds, others identified heterogeneous structures involving rigid H-bonded dimers or, more generally, small clusters of tetrahedrally-bonded water surrounded by nonbonded gas-like molecules.
To resolve this mystery, an experimental team led by Gerhard Schwaab and Martina Havenith, together with Philipp Schienbein and Dominik Marx, investigated how water absorbs light in the far-infrared/terahertz (THz) range of the spectrum. They performed their experiments and simulations at temperatures of 20 °C to 400 °C and pressures from 1 bar up to 240 bar. In this way, they were able to investigate the hydrogen bonding between molecules in samples of water that were entering the supercritical state and samples that were already in it.
Diamond and gold cell
Because supercritical water is highly corrosive, the researchers carried out their experiments in a specially-designed cell made from diamond and gold. By comparing their experimental data with the results of extensive ab initio simulations that probed different parts of water’s high-temperature phase diagram, they obtained a molecular picture of what was happening.
The researchers found that the terahertz spectrum of water in its supercritical phase was practically identical to that of hot gaseous water vapour. This, they say, proves that supercritical water is different from both liquid water at ambient conditions and water in a low-temperature gas phase where clusters of molecules form directional hydrogen bonds. No such molecular clusters appear in supercritical water, they note.
The team’s ab initio molecular dynamics simulations also revealed that two water molecules in the supercritical phase remain close to each other for a very limited time – much shorter than the typical lifetime of hydrogen bonds in liquid water – before distancing themselves. What is more, the bonds between hydrogen and oxygen atoms in supercritical water do not have a preferred orientation. Instead, they are permanently and randomly rotating. “This is completely different to the hydrogen bonds that connect the water molecules in liquid water at ambient conditions, which do have a persisting preferred orientation,” Havenith says.
Now that they have identified a clear spectroscopic fingerprint for supercritical water, the researchers want to study how solutes affect the solvation properties of this substance. They anticipate that the results from this work, which is published in Science Advances, will enable them to characterize the properties of supercritical water for use as a “green” solvent.
Physicists working on the ATLAS experiment on the Large Hadron Collider (LHC) are the first to report the production of top quark–antiquark pairs in collisions involving heavy nuclei. By colliding lead ions, CERN’s LHC creates a fleeting state of matter called the quark–gluon plasma. This is an extremely hot and dense soup of subatomic particles that includes deconfined quarks and gluons. This plasma is believed to have filled the early universe microseconds after the Big Bang.
“Heavy-ion collisions at the LHC recreate the quark–gluon plasma in a laboratory setting,” says Anthony Badea, a postdoctoral researcher at the University of Chicago and one of the lead authors of a paper describing the research. As well as boosting our understanding of the early universe, studying the quark–gluon plasma at the LHC could also provide insights into quantum chromodynamics (QCD), which is the theory of how quarks and gluons interact.
Although the quark–gluon plasma at the LHC vanishes after about 10⁻²³ s, scientists can study it by analysing how other particles produced in collisions move through it. The top quark is the heaviest known elementary particle, and its short lifetime and distinct decay pattern offer a unique way to explore the quark–gluon plasma. This is because the top quark decays before the quark–gluon plasma dissipates.
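To put rough numbers on that ordering of timescales (an order-of-magnitude estimate using the top quark’s measured decay width of about 1.4 GeV, not a figure quoted by the ATLAS team): the top quark’s mean lifetime is τ ≈ ħ/Γ ≈ (6.6 × 10⁻²⁵ GeV s)/(1.4 GeV) ≈ 5 × 10⁻²⁵ s, comfortably shorter than the roughly 10⁻²³ s over which the plasma survives.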
“The top quark decays into lighter particles that subsequently further decay,” explains Stefano Forte at the University of Milan, who was not involved in the research. “The time lag between these subsequent decays is modified if they happen within the quark–gluon plasma, and thus studying them has been suggested as a way to probe [quark–gluon plasma’s] structure. In order for this to be possible, the very first step is to know how many top quarks are produced in the first place, and determining this experimentally is what is done in this [ATLAS] study.”
First observations
The ATLAS team analysed data from lead–lead collisions and searched for events in which a top quark and its antimatter counterpart were produced. These particles can then decay in several different ways and the researchers focused on a less frequent but more easily identifiable mode known as the di-lepton channel. In this scenario, each top quark decays into a bottom quark and a W boson, which is a weak force-carrying particle that then transforms into a detectable lepton and an invisible neutrino.
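Written out schematically, the di-lepton channel corresponds to the standard decay chain t → b W⁺ and t̄ → b̄ W⁻, followed by W⁺ → ℓ⁺ν and W⁻ → ℓ⁻ν̄, so the experimental signature is two opposite-sign charged leptons, two jets from the bottom quarks and missing energy carried off by the undetected neutrinos.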
The results not only confirmed that top quarks are created in this complex environment but also showed that their production rate matches predictions based on our current understanding of the strong nuclear force.
“This is a very important study,” says Juan Rojo, a theoretical physicist at the Free University of Amsterdam who did not take part in the research. “We have studied the production of top quarks, the heaviest known elementary particle, in the relatively simple proton–proton collisions for decades. This work represents the first time that we observe the production of these very heavy particles in a much more complex environment, with two lead nuclei colliding among them.”
As well as confirming QCD’s prediction of heavy-quark production in heavy-nuclei collisions, Rojo explains that “we have a novel probe to resolve the structure of the quark–gluon plasma”. He also says that future studies will enable us “to understand novel phenomena in the strong interactions such as how much gluons in a heavy nucleus differ from gluons within the proton”.
Crucial first step
“This is a first step – a crucial one – but further studies will require larger samples of top quark events to explore more subtle effects,” adds Rojo.
The number of top quarks created in the ATLAS lead–lead collisions agrees with theoretical expectations. In the future, more detailed measurements could help refine our understanding of how quarks and gluons behave inside nuclei. Eventually, physicists hope to use top quarks not just to confirm existing models, but to reveal entirely new features of the quark–gluon plasma.
Rojo says we could “learn about the time structure of the quark–gluon plasma, measurements which are ‘finer’ would be better, but for this we need to wait until more data is collected, in particular during the upcoming high-luminosity run of the LHC”.
Badea agrees that ATLAS’s observation opens the door to deeper explorations. “As we collect more nuclei collision data and improve our understanding of top-quark processes in proton collisions, the future will open up exciting prospects”.
Great mind Grete Hermann, pictured here in 1955, was one of the first scientists to consider the philosophical implications of quantum mechanics. (Photo: Lohrisch-Achilles. Courtesy: Bremen State Archives)
In the early days of quantum mechanics, physicists found its radical nature difficult to accept – even though the theory had successes. In particular Werner Heisenberg developed the first comprehensive formulation of quantum mechanics in 1925, while the following year Erwin Schrödinger was able to predict the spectrum of light emitted by hydrogen using his eponymous equation. Satisfying though these achievements were, there was trouble in store.
Long accustomed to Isaac Newton’s mechanical view of the universe, physicists had assumed that identical systems always evolve with time in exactly the same way, that is to say “deterministically”. But Heisenberg’s uncertainty principle and the probabilistic nature of Schrödinger’s wave function suggested worrying flaws in this notion. Those doubts were famously expressed by Albert Einstein, Boris Podolsky and Nathan Rosen in their “EPR” paper of 1935 (Phys. Rev. 47 777) and in debates between Einstein and Niels Bohr.
But the issues at stake went deeper than just a disagreement among physicists. They also touched on long-standing philosophical questions about whether we inhabit a deterministic universe, the related question of human free will, and the centrality of cause and effect. One person who rigorously addressed the questions raised by quantum theory was the German mathematician and philosopher Grete Hermann (1901–1984).
Hermann stands out in an era when it was rare for women to contribute to physics or philosophy, let alone to both. Writing in The Oxford Handbook of the History of Quantum Interpretations, published in 2022, the City University of New York philosopher of science Elise Crull has called Hermann’s work “one of the first, and finest, philosophical treatments of quantum mechanics”.
What’s more, Hermann upended the famous “proof”, developed by the Hungarian-American mathematician and physicist John von Neumann, that “hidden variables” are impossible in quantum mechanics. But why have Hermann’s successes in studying the roots and meanings of quantum physics been so often overlooked? With 2025 being the International Year of Quantum Science and Technology, it’s time to find out.
Free thinker
Hermann was born on 2 March 1901 in the north German port city of Bremen. One of seven children, her mother was deeply religious, while her father was a merchant, a sailor and later an itinerant preacher. According to the 2016 book Grete Hermann: Between Physics and Philosophy by Crull and Guido Bacciagaluppi, she was raised according to her father’s maxim: “I train my children in freedom!” Essentially, he enabled Hermann to develop a wide range of interests and benefit from the best that the educational system could offer a woman at the time.
She was eventually admitted as one of a handful of girls at the Neue Gymnasium – a grammar school in Bremen – where she took a rigorous and broad programme of subjects. In 1921 Hermann earned a certificate to teach high-school pupils – an interest in education that reappeared in her later life – and began studying mathematics, physics and philosophy at the University of Göttingen.
In just four years, Hermann earned a PhD under the exceptional Göttingen mathematician Emmy Noether (1882–1935), famous for her groundbreaking theorem linking symmetry to physical conservation laws. Hermann’s final oral exam in 1925 featured not just mathematics, which was the subject of her PhD, but physics and philosophy too. She had specifically requested to be examined in the latter by the Göttingen philosopher Leonard Nelson, whose “logical sharpness” in lectures had impressed her.
Mutual interconnections Grete Hermann was fascinated by the fundamental overlap between physics and philosophy. (Courtesy: iStock/agsandrew)
By this time, Hermann’s interest in philosophy was starting to dominate her commitment to mathematics. Although Noether had found a mathematics position for her at the University of Freiburg, Hermann instead decided to become Nelson’s assistant, editing his books on philosophy. “She studies mathematics for four years,” Noether declared, “and suddenly she discovers her philosophical heart!”
Hermann found Nelson to be demanding and sometimes overbearing but benefitted from the challenges he set. “I gradually learnt to eke out, step by step,” she later declared, “the courage for truth that is necessary if one is to utterly place one’s trust, also within one’s own thinking, in a method of thought recognized as cogent.” Hermann, it appeared, was searching for a path to the internal discovery of truth, rather like Einstein’s Gedankenexperimente.
After Nelson died in 1927 aged just 45, Hermann stayed in Göttingen, where she continued editing and expanding his philosophical work and related political ideas. Espousing a form of socialism based on ethical reasoning to produce a just society, Nelson had co-founded a political action group and set up the associated Philosophical-Political Academy (PPA) to teach his ideas. Hermann contributed to both and also wrote for the PPA’s anti-Nazi newspaper.
Hermann’s involvement in the organizations Nelson had founded later saw her move to other locations in Germany, including Berlin. But after Hitler came to power in 1933, the Nazis banned the PPA, and Hermann and her socialist associates drew up plans to leave Germany. Initially, she lived at a PPA “school-in-exile” in neighbouring Denmark. As the Nazis began to arrest socialists, Hermann feared that Germany might occupy Denmark (as it indeed later did) and so moved again, first to Paris and then London.
Arriving in Britain in early 1938, Hermann became acquainted with Edward Henry, another socialist, whom she later married. It was, however, merely a marriage of convenience that gave Hermann British citizenship and – when the Second World War started in 1939 – stopped her from being interned as an enemy alien. (The couple divorced after the war.) Amid all these disruptions, Hermann continued to bring her dual philosophical and mathematical perspectives to physics, and especially to quantum mechanics.
Mixing philosophy and physics
A major stimulus for Hermann’s work came from discussions she had in 1934 with Heisenberg and Carl Friedrich von Weizsäcker, who was then his research assistant at the Institute for Theoretical Physics in Leipzig. The previous year Hermann had written an essay entitled “Determinism and quantum mechanics”, which analysed whether the indeterminate nature of quantum mechanics – central to the “Copenhagen interpretation” of quantum behaviour – challenged the concept of causality.
Much cherished by physicists, causality says that every event has a cause, and that a given cause always produces a single specific event. Causality was also a tenet of the 18th-century German philosopher Immanuel Kant, best known for his famous 1781 treatise Critique of Pure Reason. He believed that causality is fundamental for how humans organize their experiences and make sense of the world.
Hermann, like Nelson, was a “neo-Kantian” who believed that Kant’s ideas should be treated with scientific rigour. In her 1933 essay, Hermann examined how the Copenhagen interpretation undermines Kant’s principle of causality. Although the article was not published at the time, she sent copies to Heisenberg, von Weizsäcker, Bohr and also Paul Dirac, who was then at the University of Cambridge in the UK.
In fact, we only know of the essay’s existence because Crull and Bacciagaluppi discovered a copy in Dirac’s archives at Churchill College, Cambridge. They also found a 1933 letter to Hermann from Gustav Heckmann, a physicist who said that Heisenberg, von Weizsäcker and Bohr had all read her essay and took it “absolutely and completely seriously”. Heisenberg added that Hermann was a “fabulously clever woman”.
Heckmann then advised Hermann to discuss her ideas more fully with Heisenberg, who he felt would be more open than Bohr to new ideas from an unexpected source. In 1934 Hermann visited Heisenberg and von Weizsäcker in Leipzig, with Heisenberg later describing their interaction in his 1971 memoir Physics and Beyond: Encounters and Conversations.
In that book, Heisenberg relates how rigorously Hermann wanted to treat philosophical questions. “[She] believed she could prove that the causal law – in the form Kant had given it – was unshakable,” Heisenberg recalled. “Now the new quantum mechanics seemed to be challenging the Kantian conception, and she had accordingly decided to fight the matter out with us.”
Their interaction was no fight, but a spirited discussion, with some sharp questioning from Hermann. When Heisenberg suggested, for instance, that a particular radium atom emitting an electron is an example of an unpredictable random event that has no cause, Hermann countered by saying that just because no cause has been found, it didn’t mean no such cause exists.
Significantly, this was a reference to what we now call “hidden variables” – the idea that quantum mechanics is being steered by additional parameters that we possibly don’t know anything about. Heisenberg then argued that even with such causes, knowing them would lead to complications in other experiments because of the wave nature of electrons.
Forward thinker Grete Hermann was one of the first people to study the notion that quantum mechanics might be steered by mysterious additional parameters – now dubbed “hidden variables” – that we know nothing about. (Courtesy: iStock/pobytov)
Suppose that, using a hidden variable, we could predict exactly which direction an electron would move. The electron wave would then no longer be able to split and interfere with itself, and the partial extinction of the wave that interference produces would disappear. But such electron interference effects are experimentally observed, which Heisenberg took as evidence that no additional hidden variables are needed to make quantum mechanics complete. Once again, Hermann pointed out a discrepancy in Heisenberg’s argument.
In the end, neither side fully convinced the other, but inroads were made, with Heisenberg concluding in his 1971 book that “we had all learned a good deal about the relationship between Kant’s philosophy and modern science”. Hermann herself paid tribute to Heisenberg in a 1935 paper “Natural-philosophical foundations of quantum mechanics”, which appeared in a relatively obscure philosophy journal called Abhandlungen der Fries’schen Schule (6 69). In it, she thanked Heisenberg “above all for his willingness to discuss the foundations of quantum mechanics, which was crucial in helping the present investigations”.
Quantum indeterminacy versus causality
In her 1933 paper, Hermann aimed to understand if the indeterminacy of quantum mechanics threatens causality. Her overall finding was that wherever indeterminacy is invoked in quantum mechanics, it is not logically essential to the theory. So without claiming that quantum theory actually supports causality, she left the possibility open that it might.
To illustrate her point, Hermann considered Heisenberg’s uncertainty principle, which says that there’s a limit to the accuracy with which complementary variables, such as position, q, and momentum, p, can be measured, namely ΔqΔp ≥ h where h is Planck’s constant. Does this principle, she wondered, truly indicate quantum indeterminism?
Hermann asserted that this relation can mean only one of two possible things. One is that measuring one variable leaves the value of the other undetermined. Alternatively, the result of measuring the other variable can’t be precisely predicted. Hermann dismissed the first option because its very statement implies that exact values exist, and so it cannot be logically used to argue against determinism. The second choice could be valid, but that does not exclude the possibility of finding new properties – hidden variables – that give an exact prediction.
In making her argument about hidden variables, Hermann used her mathematical training to point out a flaw in von Neumann’s famous 1932 proof, which said that no hidden-variable theory can ever reproduce the features of quantum mechanics. Quantum mechanics, according to von Neumann, is complete and no extra deterministic features need to be added.
For decades, his result was cited as “proof” that any deterministic addition to quantum mechanics must be wrong. Indeed, von Neumann had such a well-deserved reputation as a brilliant mathematician that few people had ever bothered to scrutinize his analysis. But in 1964 the Northern Irish theorist John Bell famously showed that a valid hidden-variable theory could indeed exist, though only if it’s “non-local” (Physics Physique Fizika 1 195).
Non-locality means that measurements made on particles at widely separated locations can show correlations that no local mechanism can account for, even though it does not allow faster-than-light communication. Despite being a notion that Einstein never liked, non-locality has been widely confirmed experimentally. In fact, non-locality is a defining feature of quantum physics and one that’s eminently useful in quantum technology.
Then, in 1966 Bell examined von Neumann’s reasoning and found an error that decisively refuted the proof (Rev. Mod. Phys. 38 447). Bell, in other words, showed that quantum mechanics could permit hidden variables after all – a finding that opened the door to alternative interpretations of quantum mechanics. However, Hermann had reported the very same error in her 1933 paper, and again in her 1935 essay, with an especially lucid exposition that almost exactly anticipates Bell’s objection.
She had got there first, more than three decades earlier (see box).
Grete Hermann: 30 years ahead of John Bell
According to Grete Hermann, John von Neumann’s 1932 proof that quantum mechanics doesn’t need hidden variables “stands or falls” on his assumption concerning “expectation values”, which is the sum of all possible outcomes weighted by their respective probabilities. In the case of two quantities, say, r and s, von Neumann supposed that the expectation value of (r + s) is the same as the expectation value of r plus the expectation value of s. In other words, <(r + s)> = <r> + <s>.
This is clearly true in classical physics, Hermann writes, but the truth is more complicated in quantum mechanics. Suppose r and s are conjugate variables in an uncertainty relationship, such as position q and momentum p, which obey ΔqΔp ≥ h. By definition, a precise measurement of q rules out a precise measurement of p, so the two can never be measured simultaneously and the relation <q + p> = <q> + <p> cannot simply be taken for granted for them.
Further analysis, which Hermann supplied and Bell presented more fully, shows exactly why this invalidates or at least strongly limits the applicability of von Neumann’s proof; but Hermann caught the essence of the error first. Bell did not recognize or cite Hermann’s work, most probably because it was hardly known to the physics community until years after his 1966 paper.
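To make the gap concrete, here is a minimal numerical sketch (an illustration using spin observables rather than position and momentum, not an example taken from Hermann’s or Bell’s papers): expectation values do add when averaged over an ordinary quantum state, but the individual measured values of non-commuting observables do not, so additivity cannot simply be assumed for hypothetical “dispersion-free” states that give every observable a definite value.

```python
import numpy as np

# Two non-commuting observables (Pauli matrices), each with eigenvalues +1 and -1
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])

# Expectation values are additive for any quantum state |psi> ...
psi = np.array([np.cos(0.3), np.sin(0.3)])
lhs = psi @ (sx + sz) @ psi
rhs = psi @ sx @ psi + psi @ sz @ psi
print(np.isclose(lhs, rhs))          # True: <sx + sz> = <sx> + <sz>

# ... but the individual measured values are not: sx and sz each yield +/-1,
# while a measurement of (sx + sz) yields +/-sqrt(2), never a sum of +/-1 and +/-1
print(np.linalg.eigvalsh(sx + sz))   # [-1.414..., 1.414...]
```

Assigning definite values to sx, sz and their sum while respecting additivity is therefore impossible, which is essentially the loophole in von Neumann’s assumption that Hermann, and later Bell, exposed.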
A new view of causality
After rebutting von Neumann’s proof in her 1935 essay, Hermann didn’t actually turn to hidden variables. Instead, Hermann went in a different and surprising direction, probably as a result of her discussions with Heisenberg. She accepted that quantum mechanics is a complete theory that makes only statistical predictions, but proposed an alternative view of causality within this interpretation.
We cannot foresee precise causal links in a quantum mechanics that is statistical, she wrote. But once a measurement has been made with a known result, we can work backwards to get a cause that led to that result. In fact, Hermann showed exactly how to do this with various examples. In this way, she maintains, quantum mechanics does not refute the general Kantian category of causality.
Not all philosophers have been satisfied by the idea of retroactive causality. But writing in The Oxford Handbook of the History of Quantum Interpretations, Crull says that Hermann “provides the contours of a neo-Kantian interpretation of quantum mechanics”. “With one foot squarely on Kant’s turf and the other squarely on Bohr’s and Heisenberg’s,” Crull concludes, “[Hermann’s] interpretation truly stands on unique ground.”
But Hermann’s 1935 paper did more than just upset von Neumann’s proof. In the article, she shows a deep and subtle grasp of elements of the Copenhagen interpretation such as its correspondence principle, which says that – in the limit of large quantum numbers – answers derived from quantum physics must approach those from classical physics.
The paper also shows that Hermann was fully aware – and indeed extended the meaning – of the implications of Heisenberg’s thought experiment that he used to illustrate the uncertainty principle. Heisenberg envisaged a photon colliding with an electron, but after that contact, she writes, the wave function of the physical system is a linear combination of terms, each being “the product of one wave function describing the electron and one describing the light quantum”.
As she went on to say, “The light quantum and the electron are thus not described each by itself, but only in their relation to each other. Each state of the one is associated with one of the other.” Remarkably, this amounts to an early perception of quantum entanglement, which Schrödinger described and named later in 1935. There is no evidence, however, that Schrödinger knew of Hermann’s insights.
Hermann’s legacy
On the centenary of the birth of a full theory of quantum mechanics, how should we remember Hermann? According to Crull, the early founders of quantum mechanics were “asking philosophical questions about the implications of their theory [but] none of these men were trained in both physics and philosophy”. Hermann, however, was an expert in the two. “[She] composed a brilliant philosophical analysis of quantum mechanics, as only one with her training and insight could have done,” Crull says.
Sadly for Hermann, few physicists at the time were aware of her 1935 paper even though she had sent copies to some of them. Had it been more widely known, her paper could have altered the early development of quantum mechanics. Reading it today shows how Hermann’s style of incisive logical examination can bring new understanding.
Hermann leaves other legacies too. As the Second World War drew to a close, she started writing about the ethics of science, especially the way in which it was carried out under the Nazis. After the war, she returned to Germany, where she devoted herself to pedagogy and teacher training. She disseminated Nelson’s views as well as her own through the reconstituted PPA, and took on governmental positions where she worked to rebuild the German educational system, apparently to good effect according to contemporary testimony.
Hermann also became active in politics as an adviser to the Social Democratic Party. She continued to have an interest in quantum mechanics, but it is not clear how seriously she pursued it in later life, which saw her move back to Bremen to care for an ill comrade from her early socialist days.
Hermann’s achievements first came to light in 1974 when the physicist and historian Max Jammer revealed her 1935 critique of von Neumann’s proof in his book The Philosophy of Quantum Mechanics. Following Hermann’s death in Bremen on 15 April 1984, interest slowly grew, culminating in Crull and Bacciagaluppi’s 2016 landmark study Grete Hermann: Between Physics and Philosophy.
The life of this deep thinker, who also worked to educate others and to achieve worthy societal goals, remains an inspiration for any scientist or philosopher today.
Synchronization studies: When the experimenters mapped the laser’s breathing frequency intensity in the parameter space of pump current and intracavity loss (left), unusual features appeared. The areas contoured by blue dashed lines correspond to strong intensity, and represent the main synchronization regions. (right) Synchronization regions extracted from this map highlight their leaf-like structure. (Courtesy: DOI: 10.1126/sciadv.ads3660)
Abnormal versions of synchronization patterns known as “Arnold’s tongues” have been observed in a femtosecond fibre laser that generates oscillating light pulses. While these unconventional patterns had been theorized to exist in certain strongly-driven oscillatory systems, the new observations represent the first experimental confirmation.
Scientists have known about synchronization since 1665, when Christiaan Huygens observed that pendulums placed on a table eventually begin to sway in unison, coupled by vibrations within the table. It was not until the mid-20th century, however, that a Russian mathematician, Vladimir Arnold, discovered that plotting certain parameters of such coupled oscillating systems produces a series of tongue-like triangular shapes.
These shapes are now known as Arnold’s tongues, and they are an important indicator of synchronization. When the system’s parameters are in the tongue region, the system is synchronized. Otherwise, it is not.
Arnold’s tongues are found in all real-world synchronized systems, explains Junsong Peng, a physicist at East China Normal University. They have previously been studied in systems such as nanomechanical and biological resonators to which external driving frequencies are applied. More recently, they have been observed in the motion of two bound solitons (wave packets that maintain their shapes and sizes as they propagate) when they are subject to external forces.
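For readers who want to see a tongue emerge for themselves, the textbook “standard circle map” is a convenient stand-in (a generic toy model, not the laser system studied below): its winding number locks to a rational value over a finite range of detuning, and that locked range is a cross-section of an Arnold tongue.

```python
import numpy as np

def winding_number(omega, K, n_steps=5000, n_skip=1000):
    """Average phase advance per iteration of the standard circle map
    theta -> theta + omega + (K / (2*pi)) * sin(2*pi*theta)."""
    theta, total = 0.0, 0.0
    for n in range(n_steps):
        step = omega + (K / (2 * np.pi)) * np.sin(2 * np.pi * theta)
        theta = (theta + step) % 1.0
        if n >= n_skip:
            total += step
    return total / (n_steps - n_skip)

# Scan the detuning omega at fixed coupling K: plateaus where the winding number
# sticks at a rational value are cross-sections of the tongues
K = 0.9
for omega in np.linspace(0.44, 0.56, 13):
    print(f"omega = {omega:.2f} -> winding number = {winding_number(omega, K):.3f}")
```

Repeating the scan for many values of K and shading the locked regions in the (omega, K) plane traces out the familiar triangular tongues.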
Abnormal synchronization regions
In the new work, Peng, Sonia Boscolo of Aston University in the UK, Christophe Finot of the University of Burgundy in France, and colleagues studied Arnold’s tongue patterns in a laser that emits solitons. Lasers of this type possess two natural synchronization frequencies: the repetition frequency of the solitons (determined by the laser’s cavity length) and the frequency at which the energy of the soliton becomes self-modulating, or “breathing”.
In their experiments, which they describe in Science Advances, the researchers found that as they increased the driving force applied to this so-called breathing-soliton laser, the synchronization region first broadened, then narrowed. These changes produced Arnold’s tongues with very peculiar shapes. Instead of being triangle-like, they appeared as two regions shaped like leaves or rays.
Avoiding amplitude death
Although theoretical studies had previously predicted that Arnold’s-tongue patterns would deviate substantially from the norm as the driving force increased, Peng says that demonstrating this in a real system was not easy. The driving force required to access the anomalous regime is so strong that it can destroy fragile coherent pulsing states, leading to “amplitude death” in which all oscillations are completely suppressed.
In the breathing-soliton laser, however, the two frequencies synchronized without amplitude death even though the repetition frequency is about two orders of magnitude higher than the breathing frequency. “These lasers therefore open up a new frontier for studying synchronization phenomena,” Peng says.
To demonstrate the system’s potential, the researchers explored the effects of using an optical attenuator to modulate the laser’s dissipation while changing the laser’s pump current to modulate its gain. Having precise control over both parameters enabled them to identify “holes” within the ray-shaped tongue regions. These holes appear when the driving force exceeds a certain strength, and they represent quasi-periodic (unsynchronized) states inside the larger synchronized regions.
“The manifestation of holes is interesting not only for nonlinear science, it is also important for practical applications,” Peng explains. “This is because these holes, which have not been realized in experiments until now, can destabilize the synchronized system.”
Understanding when and under which conditions these holes appear, Peng adds, could help scientists ensure that oscillating systems operate more stably and reliably.
Extending synchronization to new regimes
The researchers also used simulations to produce a “map” of the synchronization regions. These simulations perfectly reproduced the complex synchronization structures they observed in their experiments, confirming the existence of the “hole” effect.
Despite these successes, however, Peng says it is “still quite challenging” to understand why such patterns appear. “We would like to do more investigations on this issue and get a better understanding of the dynamics at play,” he says.
The current work extends studies of synchronization into a regime where the synchronized region no longer exhibits a linear relationship with the coupling strength (as is the case for normal Arnold’s-tongue patterns), he adds. “This nonlinear relationship can generate even broader synchronization regions compared to the linear regime, making it highly significant for enhancing the stability of oscillating systems in practical applications,” he tells Physics World.
A new retinal stimulation technique called Oz enabled volunteers to see colours that lie beyond the natural range of human vision. Developed by researchers at UC Berkeley, Oz works by stimulating individual cone cells in the retina with targeted microdoses of laser light, while compensating for the eye’s motion.
Colour vision is enabled by cone cells in the retina. Most humans have three types of cone cells, known as L, M and S (long, medium and short), which respond to different wavelengths of visible light. During natural human vision, the spectral distribution of light reaching these cone cells determines the colours that we see.
Spectral sensitivity curves The response function of M cone cells overlaps completely with those of L and S cones. (Courtesy: Ben Rudiak-Gould)
Some colours, however, simply cannot be seen. The spectral sensitivity curves of the three cone types overlap – in particular, there is no wavelength of light that stimulates only the M cone cells without stimulating nearby L (and sometimes also S) cones as well.
The Oz approach, however, is fundamentally different. Rather than being based on spectral distribution, colour perception is controlled by shaping the spatial distribution of light on the retina.
Describing the technique in Science Advances, Ren Ng and colleagues showed that targeting individual cone cells with a 543 nm laser enabled subjects to see a range of colours in both images and videos. Intriguingly, stimulating only the M cone cells sent a colour signal to the brain that never occurs in natural vision.
The Oz laser system uses a technique called adaptive optics scanning light ophthalmoscopy (AOSLO) to simultaneously image and stimulate the retina with a raster scan of laser light. The device images the retina with infrared light to track eye motion in real time and targets pulses of visible laser light at individual cone cells, at a rate of 10⁵ per second.
In a proof-of-principle experiment, the researchers tested a prototype Oz system on five volunteers. In a preparatory step, they used adaptive optics-based optical coherence tomography (AO-OCT) to classify the LMS spectral type of 1000 to 2000 cone cells in a region of each subject’s retina.
When exclusively targeting M cone cells in these retinal regions, subjects reported seeing a new blue–green colour of unprecedented saturation – which the researchers named “olo”. They could also clearly perceive Oz hues in image and video form, reliably detecting the orientation of a red line and the motion direction of a rotating red dot on olo backgrounds. In colour matching experiments, subjects could only match olo with the closest monochromatic light by desaturating it with white light – demonstrating that olo lies beyond the range of natural vision.
The team also performed control experiments in which the Oz microdoses were intentionally “jittered” by a few microns. With the target locations no longer delivered accurately, the subjects instead perceived the natural colour of the stimulating laser. In the image and video recognition experiments, jittering the microdose target locations reduced the task accuracy to guessing rate.
Ng and colleagues conclude that “Oz represents a new class of experimental platform for vision science and neuroscience [that] will enable diverse new experiments”. They also suggest that the technique could one day help to elicit full colour vision in people with colour blindness.
Oh, balls A record-breaking 34-ball, 12-storey tower with three balls per layer (photo a); a 21-ball six-storey tower with four balls per layer (photo b); an 11-ball, three-storey tower with five balls per layer (photo c); and why a tower with six balls per layer would be impossible as the “locker” ball just sits in the middle (photo d). (Courtesy: Andria Rogava)
A few years ago, I wrote in Physics World about various bizarre structures I’d built from tennis balls, the most peculiar of which I termed “tennis-ball towers”. They consisted of a series of three-ball layers topped by a single ball (“the locker”) that keeps the whole tower intact. Each tower had (3n + 1) balls, where n is the number of triangular layers. The tallest tower I made was a seven-storey, 19-ball structure (n = 6). Shortly afterwards, I made an even bigger, nine-storey, 25-ball structure (n = 8).
Now, in the latest exciting development, I have built a new, record-breaking tower with 34 balls (n = 11), in which all 30 balls from the second to the eleventh layer are kept in equilibrium by the locker on the top (see photo a). The three balls in the bottom layer aren’t influenced by the locker as they stay in place by virtue of being on the horizontal surface of a table.
I tried going even higher but failed to build a structure that would stay intact without supporting “scaffolds”. Now in case you think I’ve just glued the balls together, watch the video below to see how the incredible 34-ball structure collapses spontaneously, probably due to a slight vibration as I walked around the table.
Even more unexpectedly, I have been able to make tennis-ball towers consisting of layers of four balls (4n + 1) and five balls too (5n + 1). Their equilibria are more delicate and, in the case of four-ball structures, so far I have only managed to build (photo b) a 21-ball, six-storey tower (n = 5). You can also see the tower in the video below.
The (5n + 1) towers are even trickier to make and (photo c) I have only got up to a three-storey structure with 11 balls (n = 2): two layers of five balls, topped by a single locker ball. In case you’re wondering, towers with six balls in each layer are physically impossible to build because they form a regular hexagon. You can’t just use another ball as a locker because it would simply sit in the middle of the other six (photo d).
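For anyone keeping count, the ball totals follow straight from the layer rule, as this trivial check shows:

```python
# Balls in a tower with n layers of k balls each, plus one "locker" on top: k*n + 1
def balls(k, n):
    return k * n + 1

print(balls(3, 11))  # 34 - the record-breaking tower
print(balls(4, 5))   # 21
print(balls(5, 2))   # 11
```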
This podcast features Alonso Gutierrez, who is chief of medical physics at the Miami Cancer Institute in the US. In a wide-ranging conversation with Physics World’s Tami Freeman, Gutierrez talks about his experience using Elekta’s Leksell Gamma Knife for radiosurgery in a busy radiotherapy department.
A concept from quantum information theory appears to explain at least some of the peculiar behaviour of so-called “strange” metals. The new approach, which was developed by physicists at Rice University in the US, attributes the unusually poor electrical conductivity of these metals to an increase in the quantum entanglement of their electrons. The team say the approach could advance our understanding of certain high-temperature superconductors and other correlated quantum structures.
While electrons can travel through ordinary metals such as gold or copper relatively freely, strange metals resist their flow. Intriguingly, some high-temperature superconductors have a strange metal phase as well as a superconducting one. This phenomenon cannot be explained by conventional theories that treat electrons as independent particles, ignoring any interactions between them.
To unpick these and other puzzling behaviours, a team led by Qimiao Si turned to the concept of quantum Fisher information (QFI). This statistical tool is typically used to measure how correlations between electrons evolve under extreme conditions. In this case, the team focused on a theoretical model known as the Anderson/Kondo lattice that describes how magnetic moments are coupled to electron spins in a material.
Correlations become strongest when strange metallicity appears
These analyses revealed that electron-electron correlations become strongest at precisely the point at which strange metallicity appears in a material. “In other words, the electrons become maximally entangled at this quantum critical point,” Si explains. “Indeed, the peak signals a dramatic amplification of multipartite electron spin entanglement, leading to a complex web of quantum correlations between many electrons.”
What is striking, he adds, is that this surge of entanglement provides a new and positive characterization of why strange metals are so strange, while also revealing why conventional theory fails. “It’s not just that traditional theory falls short, it is that it overlooks this rich web of quantum correlations, which prevents the survival of individual electrons as the elementary objects in this metallic substance,” he explains.
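For readers wondering what QFI actually measures: for a pure state it is four times the variance of a chosen generator, and a value exceeding the number of spins witnesses entanglement. The snippet below is a generic illustration using a four-spin GHZ state and a collective spin generator; it is not the Rice team’s Anderson/Kondo-lattice calculation.

```python
import numpy as np
from functools import reduce

N = 4
I2 = np.eye(2)
sz = np.diag([1.0, -1.0])

# Collective spin operator J_z = (1/2) * sum_i sigma_z^(i)
Jz = sum(reduce(np.kron, [sz if j == i else I2 for j in range(N)])
         for i in range(N)) / 2

# Maximally entangled GHZ state (|00...0> + |11...1>) / sqrt(2)
psi = np.zeros(2**N)
psi[0] = psi[-1] = 1 / np.sqrt(2)

# Quantum Fisher information of a pure state: F_Q = 4 * (variance of the generator)
mean = psi @ Jz @ psi
FQ = 4 * (psi @ Jz @ Jz @ psi - mean**2)
print(FQ / N)   # 4.0 here; any value above 1 per spin signals entanglement
```

In the study itself, it is this kind of multipartite entanglement measure that peaks at the quantum critical point where strange metallicity sets in.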
To test their finding, the researchers, who report their work in Nature Communications, compared their predictions with neutron scattering data from real strange-metal materials. They found that the experimental data was a good match. “Our earlier studies had also led us to suspect that strange metals might host a deeply entangled electron fluid – one whose hidden quantum complexity had yet to be fully understood,” adds Si.
The implications of this work are far-reaching, he tells Physics World. “Strange metals may hold the key to unlocking the next generation of superconductors — materials poised to transform how we transmit energy and, perhaps one day, eliminate power loss from the electric grid altogether.”
The Rice researchers say they now plan to explore how QFI manifests itself in the charge of electrons as well as their spins. “Until now, our focus has only been on the QFI associated with electron spins, but electrons also of course carry charge,” Si says.
Researchers from the Karlsruhe Tritium Neutrino experiment (KATRIN) have announced the most precise upper limit yet on the neutrino’s mass. Thanks to new data and upgraded techniques, the new limit – 0.45 electron volts (eV) at 90% confidence – is half that of the previous tightest constraint, and marks a step toward answering one of particle physics’ longest-standing questions.
Neutrinos are ghostlike particles that barely interact with matter, slipping through the universe almost unnoticed. They come in three types, or flavours: electron, muon, and tau. For decades, physicists assumed all three were massless, but that changed in the late 1990s when experiments revealed that neutrinos can oscillate between flavours as they travel. This flavour-shifting behaviour is only possible if neutrinos have mass.
Although neutrino oscillation experiments confirmed that neutrinos have mass, and revealed differences between the masses, they did not divulge the actual scale of these masses. Doing so requires an entirely different approach.
Looking for clues in electrons
In KATRIN’s case, that means focusing on a process called tritium beta decay, where a tritium nucleus (a proton and two neutrons) decays into a helium-3 nucleus (two protons and one neutron) by releasing an electron and an electron antineutrino. Due to energy conservation, the total energy from the decay is shared between the electron and the antineutrino. The neutrino’s mass determines the balance of the split.
“If the neutrino has even a tiny mass, it slightly lowers the energy that the electron can carry away,” explains Christoph Wiesinger, a physicist at the Technical University of Munich, Germany and a member of the KATRIN collaboration. “By measuring that [electron] spectrum with extreme precision, we can infer how heavy the neutrino is.”
Because the subtle effects of neutrino mass are most visible in decays where the antineutrino carries away very little energy (most of it bound up in mass), KATRIN concentrates on measuring electrons that have taken the lion’s share. From these measurements, physicists can calculate neutrino mass without having to detect these notoriously weakly-interacting particles directly.
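As a rough sketch of why the endpoint region is so informative (illustrative numbers only; tritium’s endpoint energy is about 18.6 keV, and the real KATRIN analysis involves far more detailed spectral modelling), a non-zero neutrino mass removes the last m_ν of energy from the electron spectrum and reshapes its final few electronvolts:

```python
import numpy as np

E0 = 18_575.0   # approximate endpoint energy of tritium beta decay, in eV

def spectrum_tail(E, m_nu):
    """Neutrino phase-space factor that shapes the beta spectrum near the
    endpoint (c = 1 units): proportional to eps * sqrt(eps^2 - m_nu^2),
    where eps = E0 - E, and zero beyond the shifted endpoint E0 - m_nu."""
    eps = E0 - E
    out = np.zeros_like(E)
    ok = eps >= m_nu
    out[ok] = eps[ok] * np.sqrt(eps[ok]**2 - m_nu**2)
    return out

E = np.linspace(E0 - 5.0, E0, 6)      # the last 5 eV below the endpoint
for m in (0.0, 0.45):                 # massless neutrino vs the new KATRIN limit
    print(m, np.round(spectrum_tail(E, m), 3))
```

The tiny difference between the two printed curves is, in essence, the signal that KATRIN’s millions of electron measurements are statistically fighting to resolve.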
Improvements over previous results
The new neutrino mass limit is based on data taken between 2019 and 2021, with 259 days of operations yielding over 36 million electron measurements. “That’s six times more than the previous result,” Wiesinger says.
Other improvements include better temperature control in the tritium source and a new calibration method using a monoenergetic krypton source. “We were able to reduce background noise rates by a factor of two, which really helped the precision,” he adds.
Keeping track: Laser system for the analysis of the tritium gas composition at KATRIN’s Windowless Gaseous Tritium Source. Improvements to temperature control in this source helped raise the precision of the neutrino mass limit. (Courtesy: Tritium Laboratory, KIT)
At 0.45 eV, the new limit means the neutrino is at least a million times lighter than the electron. “This is a fundamental number,” Wiesinger says. “It tells us that neutrinos are the lightest known massive particles in the universe, and maybe that their mass has origins beyond the Standard Model.”
Despite the new tighter limit, however, definitive answers about the neutrino’s mass are still some ways off. “Neutrino oscillation experiments tell us that the lower bound on the neutrino mass is about 0.05 eV,” says Patrick Huber, a theoretical physicist at Virginia Tech, US, who was not involved in the experiment. “That’s still about 10 times smaller than the new KATRIN limit… For now, this result fits comfortably within what we expect from a Standard Model that includes neutrino mass.”
Model independence
Though Huber emphasizes that there are “no surprises” in the latest measurement, KATRIN has a key advantage over its rivals. Unlike cosmological methods, which infer neutrino mass based on how it affects the structure and evolution of the universe, KATRIN’s direct measurement is model-independent, relying only on energy and momentum conservation. “That makes it very powerful,” Wiesinger argues. “If another experiment sees a measurement in the future, it will be interesting to check if the observation matches something as clean as ours.”
KATRIN’s own measurements are ongoing, with the collaboration aiming for 1000 days of operations by the end of 2025 and a final sensitivity approaching 0.3 eV. Beyond that, the plan is to repurpose the instrument to search for sterile neutrinos – hypothetical heavier particles that don’t interact via the weak force and could be candidates for dark matter.
“We’re testing things like atomic tritium sources and ultra-precise energy detectors,” Wiesinger says. “There are exciting ideas, but it’s not yet clear what the next-generation experiment after KATRIN will look like.”
The high-street bank HSBC has worked with the NQCC, hardware provider Rigetti and the Quantum Software Lab to investigate the advantages that quantum computing could offer for detecting the signs of fraud in transactional data. (Courtesy: Shutterstock/Westend61 on Offset)
Rapid technical innovation in quantum computing is expected to yield an array of hardware platforms that can run increasingly sophisticated algorithms. In the real world, however, such technical advances will remain little more than a curiosity if they are not adopted by businesses and the public sector to drive positive change. As a result, one key priority for the UK’s National Quantum Computing Centre (NQCC) has been to help companies and other organizations to gain an early understanding of the value that quantum computing can offer for improving performance and enhancing outcomes.
To meet that objective the NQCC has supported several feasibility studies that enable commercial organizations in the UK to work alongside quantum specialists to investigate specific use cases where quantum computing could have a significant impact within their industry. One prime example is a project involving the high-street bank HSBC, which has been exploring the potential of quantum technologies for spotting the signs of fraud in financial transactions. Such fraudulent activity, which affects millions of people every year, now accounts for about 40% of all criminal offences in the UK and in 2023 generated total losses of more than £2.3 bn across all sectors of the economy.
Banks like HSBC currently exploit classical machine learning to detect fraudulent transactions, but these techniques require a large computational overhead to train the models and deliver accurate results. Quantum specialists at the bank have therefore been working with the NQCC, along with hardware provider Rigetti and the Quantum Software Lab at the University of Edinburgh, to investigate the capabilities of quantum machine learning (QML) for identifying the tell-tale indicators of fraud.
“HSBC’s involvement in this project has brought transactional fraud detection into the realm of cutting-edge technology, demonstrating our commitment to pushing the boundaries of quantum-inspired solutions for near-term benefit,” comments Philip Intallura, Group Head of Quantum Technologies at HSBC. “Our philosophy is to innovate today while preparing for the quantum advantage of tomorrow.”
Another study focused on a key problem in the aviation industry that has a direct impact on fuel consumption and the amount of carbon emissions produced during a flight. In this logistical challenge, the aim was to find the optimal way to load cargo containers onto a commercial aircraft. One motivation was to maximize the amount of cargo that can be carried; the other was to balance the weight of the cargo to reduce drag and improve fuel efficiency.
“Even a small shift in the centre of gravity can have a big effect,” explains Salvatore Sinno of technology solutions company Unisys, who worked on the project along with applications engineers at the NQCC and mathematicians at the University of Newcastle. “On a Boeing 747 a displacement of just 75 cm can increase the carbon emissions on a flight of 10,000 miles by four tonnes, and also increases the fuel costs for the airline company.”
A hybrid quantum–classical solution has been used to optimize the configuration of air freight, which can improve fuel efficiency and lower carbon emissions. (Courtesy: Shutterstock/supakitswn)
With such a large number of possible loading combinations, classical computers cannot find the exact optimal arrangement of cargo containers in any practical amount of time. In their project the team improved the precision of the solution by combining quantum annealing with high-performance computing, a hybrid approach that Unisys believes can offer immediate value for complex optimization problems. “We have reached the limit of what we can achieve with classical computing, and with this work we have shown the benefit of incorporating an element of quantum processing into our solution,” explains Sinno.
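To give a flavour of how such a task is posed for an annealer, here is a deliberately simplified toy illustration (not the formulation used in the project): each binary variable records whether a container goes into a given slot, and the cost function rewards cargo carried while penalizing any offset of the centre of gravity.

```python
import numpy as np
from itertools import product

# Toy problem: 3 containers, 3 slots; x[i, j] = 1 if container i goes in slot j
weights = np.array([3.0, 2.0, 1.5])    # container masses (tonnes)
distance = np.array([-1.0, 0.0, 1.0])  # signed slot distance from the target centre of gravity
lam_cg = 10.0                          # penalty weight for an unbalanced load

best_cost, best_x = None, None
for bits in product([0, 1], repeat=9):          # brute force; an annealer samples this space instead
    x = np.array(bits).reshape(3, 3)
    if (x.sum(axis=1) > 1).any() or (x.sum(axis=0) > 1).any():
        continue                                # one slot per container, one container per slot
    loaded = (weights * x.sum(axis=1)).sum()    # total cargo carried
    moment = (weights[:, None] * distance[None, :] * x).sum()
    cost = -loaded + lam_cg * moment**2         # reward cargo, penalize imbalance
    if best_cost is None or cost < best_cost:
        best_cost, best_x = cost, x

print(best_x)
```

A real instance has vastly more variables and constraints, which is exactly why the team turned to a hybrid quantum–classical solver rather than enumeration.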
The HSBC project team also found that a hybrid quantum–classical solution could provide an immediate performance boost for detecting anomalous transactions. In this case, a quantum simulator running on a classical computer was used to run quantum algorithms for machine learning. “These simulators allow us to execute simple QML programmes, even though they can’t be run to the same level of complexity as we could achieve with a physical quantum processor,” explains Marco Paini, the project lead for Rigetti. “These simulations show the potential of these low-depth QML programmes for fraud detection in the near term.”
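For illustration only, a low-depth QML circuit of the sort that runs comfortably on a simulator can be sketched in a few lines. This example uses the open-source PennyLane library as a stand-in; the software and models the project actually used are not described here.

```python
import numpy as np
import pennylane as qml

# A two-qubit simulator standing in for quantum hardware
dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def classifier(features, weights):
    qml.AngleEmbedding(features, wires=[0, 1])       # encode two transaction features as rotation angles
    qml.BasicEntanglerLayers(weights, wires=[0, 1])  # shallow, trainable entangling block
    return qml.expval(qml.PauliZ(0))                 # single expectation value used as an anomaly score

weights = 0.3 * np.ones((2, 2))            # shape (layers, wires); these would be trained in practice
print(classifier([0.1, 0.7], weights))     # score in [-1, 1], thresholded to flag a transaction
```

Deeper versions of the same idea, with more qubits and layers, are what the team expects to run on physical processors as the hardware matures.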
The team also simulated more complex QML approaches using a similar but smaller-scale problem, demonstrating a further improvement in performance. This outcome suggests that running deeper QML algorithms on a physical quantum processor could deliver an advantage for detecting anomalies in larger datasets, even though the hardware does not yet provide the performance needed to achieve reliable results. “This initiative not only showcases the near-term applicability of advanced fraud models, but it also equips us with the expertise to leverage QML methods as quantum computing scales,” comments Intallura.
Indeed, the results obtained so far have enabled the project partners to develop a roadmap that will guide their ongoing development work as the hardware matures. One key insight, for example, is that even a fault-tolerant quantum computer would struggle to process the huge financial datasets produced by a bank like HSBC, since a finite amount of time is needed to run the quantum calculation for each data point. “From the simulations we found that the hybrid quantum–classical solution produces more false positives than classical methods,” says Paini. “One approach we can explore would be to use the simulations to flag suspicious transactions and then run the deeper algorithms on a quantum processor to analyse the filtered results.”
This particular project also highlighted the need for agreed protocols to navigate the strict rules on data security within the banking sector. For this project the HSBC team was able to run the QML simulations on its existing computing infrastructure, avoiding the need to share sensitive financial data with external partners. In the longer term, however, banks will need reassurance that their customer information can be protected when processed using a quantum computer. Anticipating this need, the NQCC has already started to work with regulators such as the Financial Conduct Authority, which is exploring some of the key considerations around privacy and data security, with that initial work feeding into international initiatives that are starting to consider the regulatory frameworks for using quantum computing within the financial sector.
For the cargo-loading project, meanwhile, Sinno says that an important learning point has been the need to formulate the problem in a way that can be tackled by the current generation of quantum computers. In practical terms that means defining constraints that reduce the complexity of the problem, but that still reflect the requirements of the real-world scenario. “Working with the applications engineers at the NQCC has helped us to understand what is possible with today’s quantum hardware, and how to make the quantum algorithms more viable for our particular problem,” he says. “Participating in these studies is a great way to learn and has allowed us to start using these emerging quantum technologies without taking a huge risk.”
Indeed, one key feature of these feasibility studies is the opportunity they offer for different project partners to learn from each other. Each project includes an end-user organization with a deep knowledge of the problem, quantum specialists who understand the capabilities and limitations of present-day solutions, and academic experts who offer an insight into emerging theoretical approaches as well as methodologies for benchmarking the results. The domain knowledge provided by the end users is particularly important, says Paini, to guide ongoing development work within the quantum sector. “If we only focused on the hardware for the next few years, we might come up with a better technical solution but it might not address the right problem,” he says. “We need to know where quantum computing will be useful, and to find that convergence we need to develop the applications alongside the algorithms and the hardware.”
Another major outcome from these projects has been the ability to make new connections and identify opportunities for future collaborations. As a national facility, the NQCC has played an important role in providing networking opportunities that bring diverse stakeholders together, creating a community of end users and technology providers, and supporting project partners with an expert and independent view of emerging quantum technologies. The NQCC has also helped the project teams to share their results more widely, generating positive feedback from the wider community that has already sparked new ideas and interactions.
“We have been able to network with start-up companies and larger enterprise firms, and with the NQCC we are already working with them to develop some proof-of-concept projects,” says Sinno. “Having access to that wider network will be really important as we continue to develop our expertise and capability in quantum computing.”
Through new experiments, researchers in Switzerland have tested models of how microwaves affect low-temperature chemical reactions between ions and molecules. Through their innovative setup, Valentina Zhelyazkova and colleagues at ETH Zurich showed for the first time how the application of microwave pulses can slow down reaction rates via nonthermal mechanisms.
Physicists have been studying chemical reactions between ions and neutral molecules for some time. At close to room temperature, classical models can closely predict how the electric fields emanating from ions will induce dipoles in nearby neutral molecules, allowing researchers to calculate these reaction rates with impressive accuracy. Yet as temperatures drop close to absolute zero, a wide array of more complex effects come into play, which have gradually been incorporated into the latest theoretical models.
“At low temperatures, models of reactivity must include the effects of the permanent electric dipoles and quadrupole moments of the molecules, the effect of their vibrational and rotational motion,” Zhelyazkova explains. “At extremely low temperatures, even the quantum-mechanical wave nature of the reactants must be considered.”
Rigorous experiments
Although these low-temperature models have steadily improved in recent years, the ability to put them to the test through rigorous experiments has so far been hampered by external factors.
In particular, stray electric fields in the surrounding environment can heat the ions and molecules, so that any important quantum effects are quickly drowned out by noise. “Consequently, it is only in the past few years that experiments have provided information on the rates of ion–molecule reactions at very low temperatures,” Zhelyazkova explains.
In their study, Zhelyazkova’s team improved on these past experiments through an innovative approach to cooling the internal motions of the molecules being heated by stray electric fields. Their experiment involved a reaction between positively-charged helium ions and neutral molecules of carbon monoxide (CO). This creates neutral atoms of helium and oxygen, and a positively-charged carbon atom.
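Written out explicitly, the reaction is

$$\mathrm{He^{+} + CO \;\longrightarrow\; He + C^{+} + O},$$

with the single positive charge ending up on the carbon atom.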
To initiate the reaction, the researchers created separate but parallel supersonic beams of helium and CO that were combined in a reaction cell. “In order to overcome the problem of heating the ions by stray electric fields, we study the reactions within the distant orbit of a highly excited electron, which makes the overall system electrically neutral without affecting the ion–molecule reaction taking place within the electron orbit,” explains ETH’s Frédéric Merkt.
Giant atoms
In such a “Rydberg atom”, the highly excited electron is some distance from the helium nucleus and its other electron. As a result, a Rydberg helium atom can be considered an ion with a “spectator” electron, which has little influence over how the reaction unfolds. To ensure the best possible accuracy, “we use a printed circuit board device with carefully designed surface electrodes to deflect one of the two beams,” explains ETH’s Fernanda Martins. “We then merged this beam with the other, and controlled the relative velocity of the two beams.”
Altogether, this approach enabled the researchers to cool the molecules internally to temperatures below 10 K – where their quantum effects can dominate over externally induced noise. With this setup, Zhelyazkova, Merkt, Martins, and their colleagues could finally put the latest theoretical models to the test.
According to the latest low-temperature models, the rate of the CO–helium ion reaction should be determined by the quantized rotational states of the CO molecule – whose energies lie within the microwave range. In this case, the team used microwave pulses to put the CO into different rotational states, allowing them to directly probe their influence on the overall reaction rate.
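For context, and using standard spectroscopic values rather than numbers from the paper, the rotational energy levels of a linear molecule such as CO follow

$$E_J = hBJ(J+1), \qquad \nu_{J=0\rightarrow1} = \frac{E_1 - E_0}{h} = 2B \approx 115\ \mathrm{GHz},$$

since CO’s rotational constant B is roughly 57.6 GHz. This is why the pulses that shuffle the molecules between rotational states sit in the microwave/millimetre-wave range.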
Three important findings
Altogether, their experiment yielded three important findings. First, it confirmed that the reaction rate can vary depending on the rotational state of the CO molecule. Second, it showed that this reactivity can be modified by using a short microwave pulse to excite the CO molecule from its ground state to its first excited state – with the first excited state being less reactive than the ground state.
The third and most counterintuitive finding is that microwaves can slow down the reaction rate, via mechanisms unrelated to the heat they impart on the molecules absorbing them. “In most applications of microwaves in chemical synthesis, the microwaves are used as a way to thermally heat the molecules up, which always makes them more reactive,” Zhelyazkova says.
Building on the success of their experimental approach, the team now hopes to investigate these nonthermal mechanisms in more detail – with the aim of shedding new light on how microwaves can influence chemical reactions via effects other than heating. In turn, their results could ultimately pave the way for advanced new techniques for fine-tuning the rate of reactions between ions and neutral molecules.
A quarter of a century ago, in May 2000, I published an article entitled “Why science thrives on criticism”. The article, which ran to slightly over a page in Physics World magazine, was the first in a series of columns called Critical Point. Periodicals, I said, have art and music critics as well as sports and political commentators, and book and theatre reviewers too. So why shouldn’t Physics World have a science critic?
The implication that I had a clear idea of the “critical point” for this series was not entirely accurate. As the years go by, I have found myself improvising, inspired by politics, books, scientific discoveries, readers’ thoughts, editors’ suggestions and more. If there is one common theme, it’s that science is like a workshop – or a series of loosely related workshops – as I argued in The Workshop and the World, a book that sprang from my columns.
Workshops are controlled environments, inside which researchers can stage and study special things – elementary particles, chemical reactions, plant uptakes of nutrients – that appear rarely or in a form difficult to study in the surrounding world. Science critics do not participate in the workshops themselves or even judge their activities. What they do is evaluate how workshops and worlds interact.
This can happen in three ways.
Critical triangle
First is to explain why what’s going on inside the workshops matters to outsiders. Sometimes, those activities can be relatively simple to describe, which leads to columns concerning all manner of everyday activities. I have written, for example, about the physics of coffee and breadmaking. I’ve also covered toys, tops, kaleidoscopes, glass and other things that all of us – physicists and non-physicists alike – use, value and enjoy.
When viewing science as workshops, a second role is to explain why what’s outside the workshops matters to insiders. That’s because physicists often engage in activities that might seem inconsequential to them – they’re “just what the rest of the world does” – yet are an intrinsic part of the practice of physics. I’ve covered, for example, physicists taking out patents, creating logos, designing lab architecture, taking holidays, organizing dedications, going on retirement and writing memorials for the deceased.
Such activities I term “black elephants”. That’s because they’re a cross between things physicists don’t want to talk about (“elephants in the room”) and things that force them to renounce cherished notions (just as “black swans” disprove that “all swans are white”).
A third role of a science critic is to explain things that matter and that take place both inside and outside the workshop. I’m thinking of things like competition, leadership, trust, surprise, workplace training courses, cancel culture and even jokes and funny tales. Interpretations of the meaning of quantum mechanics, such as “QBism”, which I covered both in 2019 and 2022, are an ongoing interest. That’s because they’re relevant both to the structure of physics and to philosophy as they disrupt notions of realism, objectivity, temporality and the scientific method.
Being critical
The term “critic” may suggest someone with a congenitally negative outlook, but that’s wrong. My friend Fred Cohn, a respected opera critic, told me that, in a conversation after a concert, he criticized the performance of the singer Luciano Pavarotti. His remark provoked a woman to shout angrily at him: “Could you do better?” Of course not! It’s the critic’s role to evaluate performances of an activity, not to perform the activity oneself.
Working practices In his first Critical Point column for Physics World, philosopher and historian of science Robert P Crease interrogated the role of the science critic. (Courtesy: iStock/studiostockart)
Having said that, sometimes a critic must be critical to be honest. In particular, I hate it when scientists try to delegitimize the experience of non-scientists by saying, for example, that “time does not exist”. Or when they pretend that they don’t see rainbows but only wavelengths of light, or that they don’t see sunrises or the plane of a Foucault pendulum move but only the Earth spinning. Comments like that turn non-scientists off science by making it seem elitist and other-worldly. It’s what I call “scientific gaslighting”.
Most of all, I hate it when scientists pontificate that philosophy is foolish or worthless, especially when it’s the likes of Steven Pinker, who ought to know better. Writing in Nature (518 300), I once criticized the great theoretical physicist Steven Weinberg, whom I counted as a friend, for taking a complex and multivalent text, plucking out a single line, and misreading it as if the line were from a physics text.
The text in question was Plato’s Phaedo, where Socrates expresses his disappointment with his fellow philosopher Anaxagoras for giving descriptions of heavenly bodies “in purely physical terms, without regard to what is best”. Weinberg claimed this statement meant that Socrates “was not very interested in natural science”. Nothing could be further from the truth.
At that moment in the Phaedo, Socrates is recounting his intellectual autobiography. He has just come to the point where, as a youth, he was entranced by materialism and was eager to hear Anaxagoras’s opposing position. When Anaxagoras promised to describe the heavens both mechanically and as the product of a wise and divine mind but could do only the former, Socrates says he was disappointed.
Weinberg’s jibe ignores the context. Socrates is describing how he had once embraced Anaxagoras’s view of a universe ruled by a divine mind but later rejected that view. As an adult, Socrates learned to assess hypotheses and other claims by putting them to the test, just as modern-day scientists do. Weinberg was misrepresenting Socrates by describing a position that he later abandoned.
The critical point of the critical point
Ultimately, the “critical point” of my columns over the last 25 years has been to provoke curiosity and excitement about what philosophers, historians and sociologists do for science. I’ve also wanted to raise awareness that these fields are not just fripperies but essential if we are to fully understand and protect scientific activity.
As I have explained several times – especially in the wake of the US shutting its High Flux Beam Reactor and National Tritium Labeling Facility – scientists need to understand and relate to the surrounding world with the insight of humanities scholars. Because if they don’t, they are in danger of losing their workshops altogether.
New measurements by physicists from the University of Surrey in the UK have shed fresh light on where the universe’s heavy elements come from. The measurements, which were made by smashing high-energy protons into a uranium target to generate strontium ions, then accelerating these ions towards a second, helium-filled target, might also help improve nuclear reactors.
The origin of the elements that follow iron in the periodic table is one of the biggest mysteries in nuclear astrophysics. As Surrey’s Matthew Williams explains, the standard picture is that these elements were formed when other elements captured neutrons, then underwent beta decay. The two ways this can happen are known as the rapid (r) and slow (s) processes.
The s-process occurs in the cores of stars and is relatively well understood. The r-process is comparatively mysterious. It occurs during violent astrophysical events such as certain types of supernovae and neutron star mergers that create an abundance of free neutrons. In these neutron-rich environments, atomic nuclei essentially capture neutrons before the neutrons can turn into protons via beta-minus decay, which occurs when a neutron emits an electron and an antineutrino.
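In symbols, that beta-minus step is

$$\mathrm{n \;\longrightarrow\; p + e^{-} + \bar{\nu}_{e}},$$

and it is the race between this decay and further neutron capture that separates the rapid process from the slow one.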
From the night sky to the laboratory
One way of studying the r-process is to observe older stars. “Studies on heavy element abundance patterns in extremely old stars provide important clues here because these stars formed at times too early for the s-process to have made a significant contribution,” Williams explains. “This means that the heavy element pattern in these old stars may have been preserved from material ejected by prior extreme supernovae or neutron star merger events, in which the r-process is thought to happen.”
Recent observations of this type have revealed that the r-process is not necessarily a single scenario with a single abundance pattern. It may also have a “weak” component that is responsible for making elements with atomic numbers ranging from 37 (rubidium) to 47 (silver), without getting all the way up to the heaviest elements such as gold (atomic number 79) or actinides like thorium (90) and uranium (92).
This weak r-process could occur in a variety of situations, Williams explains. One scenario involves radioactive isotopes (that is, those with a few more neutrons than their stable counterparts) forming in hot neutrino-driven winds streaming from supernovae. This “flow” of nucleosynthesis towards higher neutron numbers is caused by processes known as (alpha,n) reactions, which occur when a radioactive isotope fuses with a helium nucleus and spits out a neutron. “These reactions impact the final abundance pattern before the neutron flux dissipates and the radioactive nuclei decay back to stability,” Williams says. “So, to match predicted patterns to what is observed, we need to know how fast the (alpha,n) reactions are on radioactive isotopes a few neutrons away from stability.”
The 94Sr(alpha,n)97Zr reaction
To obtain this information, Williams and colleagues studied a reaction in which radioactive strontium-94 absorbs an alpha particle (a helium nucleus), then emits a neutron and transforms into zirconium-97. To produce the radioactive 94Sr beam, they fired high-energy protons at a uranium target at TRIUMF, the Canadian national accelerator centre. Using lasers, they selectively ionized and extracted strontium from the resulting debris before filtering out 94Sr ions with a magnetic spectrometer.
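In nuclear shorthand, the reaction is

$$\mathrm{^{94}Sr + {}^{4}He \;\longrightarrow\; {}^{97}Zr + n},$$

with the mass numbers (94 + 4 = 97 + 1) and the proton numbers (38 + 2 = 40) balancing on both sides.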
The team then accelerated a beam of these 94Sr ions to energies representative of collisions that would happen when a massive star explodes as a supernova. Finally, they directed the beam onto a nanomaterial target made of a silicon thin film containing billions of small nanobubbles of helium. This target was made by researchers at the Materials Science Institute of Seville (CSIC) in Spain.
“This thin film crams far more helium into a small target foil than previous techniques allowed, thereby enabling the measurement of helium burning reactions with radioactive beams that characterize the weak r-process,” Williams explains.
To identify the 94Sr(alpha,n)97Zr reactions, the researchers used a mass spectrometer to select for 97Zr while simultaneously using an array of gamma-ray detectors around the target to look for the gamma rays it emits. When they saw both a heavy ion with an atomic mass of 97 and a 97Zr gamma ray, they knew they had identified the reaction of interest. In doing so, Williams says, they were able to measure the probability that this reaction occurs at the energies and temperatures present in supernovae.
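The coincidence logic at the heart of this identification is simple enough to sketch in a few lines of code. The snippet below is purely illustrative – the event structure, the choice of gamma-ray energy and the tolerance window are assumptions, not the collaboration’s actual analysis – but it captures the idea of demanding both a mass-97 recoil and a 97Zr gamma ray in the same event.

```python
# Illustrative sketch of the coincidence selection described above.
# Field names, the gamma-ray energy and the tolerance are placeholders,
# not values taken from the TRIUMF analysis.

ZR97_GAMMA_KEV = 1103.0   # placeholder energy for a 97Zr de-excitation line
GAMMA_TOL_KEV = 2.0       # assumed detector resolution window

def is_reaction_candidate(event):
    """event: dict with 'recoil_mass' (int) and 'gammas' (list of energies in keV)."""
    mass_ok = event["recoil_mass"] == 97
    gamma_ok = any(abs(e - ZR97_GAMMA_KEV) < GAMMA_TOL_KEV for e in event["gammas"])
    return mass_ok and gamma_ok

def count_candidates(events):
    """Count events consistent with the 94Sr(alpha,n)97Zr reaction."""
    return sum(is_reaction_candidate(ev) for ev in events)
```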
Williams thinks that scientists should be able to measure many more weak r-process reactions using this technology. This should help them constrain where the weak r-process comes from. “Does it happen in supernovae winds? Or can it happen in a component of ejected material from neutron star mergers?” he asks.
As well as shedding light on the origins of heavy elements, the team’s findings might also help us better understand how materials respond to the high radiation environments in nuclear reactors. “By updating models of how readily nuclei react, especially radioactive nuclei, we can design components for these reactors that will operate and last longer before needing to be replaced,” Williams says.
Superpositions of quantum states known as Schrödinger cat states can be created in “hot” environments with temperatures up to 1.8 K, say researchers in Austria and Spain. By reducing the restrictions involved in obtaining ultracold temperatures, the work could benefit fields such as quantum computing and quantum sensing.
In 1935, Erwin Schrödinger used a thought experiment now known as “Schrödinger’s cat” to emphasize what he saw as a problem with some interpretations of quantum theory. His gedankenexperiment involved placing a quantum system (a cat in a box with a radioactive sample and a flask of poison) in a state that is a superposition of two states (“alive cat” if the sample has not decayed and “dead cat” if it has). These superposition states are now known as Schrödinger cat states (or simply cat states) and are useful in many fields, including quantum computing, quantum networks and quantum sensing.
Creating a cat state, however, requires quantum particles to be in their ground state. This, in turn, means cooling them to extremely low temperatures. Even marginally higher temperatures were thought to destroy the fragile nature of these states, rendering them useless for applications. But the need for ultracold temperatures comes with its own challenges, as it restricts the range of possible applications and hinders the development of large-scale systems such as powerful quantum computers.
Cat on a hot tin…microwave cavity?
The new work, which was carried out by researchers at the University of Innsbruck and IQOQI in Austria together with colleagues at the ICFO in Spain, challenges the idea that ultralow temperatures are a must for generating cat states. Instead of starting from the ground state, they used thermally excited states to show that quantum superpositions can exist at temperatures of up to 1.8 K – an environment that might as well be an oven in the quantum world.
Team leader Gerhard Kirchmair, a physicist at the University of Innsbruck and the IQOQI, says the study evolved from one of those “happy accidents” that characterize work in a collaborative environment. During a coffee break with a colleague, he realized he was well-equipped to prove the hypothesis of another colleague, Oriol Romero-Isart, who had shown theoretically that cat states can be generated out of a thermal state.
The experiment involved creating cat states inside a microwave cavity that acts as a quantum harmonic oscillator. This cavity is coupled to a superconducting transmon qubit that behaves as a two-level system where the superposition is generated. While the overall setup is cooled to 30 mK, the cavity mode itself is heated by equilibrating it with amplified Johnson-Nyquist noise from a resistor, making it 60 times hotter than its environment.
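To see why 1.8 K counts as “hot” here, it helps to look at the mean number of thermal photons in a harmonic mode. Taking an illustrative cavity frequency of about 6 GHz (an assumption for this estimate, not a figure quoted from the paper):

$$\bar{n} = \frac{1}{e^{\hbar\omega/k_B T} - 1}, \qquad \bar{n}(30\ \mathrm{mK}) \sim 10^{-4}, \qquad \bar{n}(1.8\ \mathrm{K}) \approx 6.$$

In other words, the mode goes from being essentially empty to carrying several thermal photons – a decidedly non-ground-state starting point for building a cat state.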
To establish the existence of quantum correlations at this higher temperature, the team directly measured the Wigner functions of the states. Doing so revealed the characteristic interference patterns of Schrödinger cat states.
Benefits for quantum sensing and error correction
According to Kirchmair, being able to realize cat states without ground-state cooling could bring benefits for quantum sensing. The mechanical oscillator systems used to sense acceleration or force, for example, are normally cooled to the ground state to achieve the necessary high sensitivity, but such extreme cooling may not be necessary. He adds that quantum error correction schemes could also benefit, as they rely on being able to create cat states reliably; the team’s work shows that a residual thermal population places fewer limitations on this than previously thought.
“For next steps we will use the system for what it was originally designed, i.e. to mediate interactions between multiple qubits for novel quantum gates,” he tells Physics World.
Yiwen Chu, a quantum physicist from ETH Zürich in Switzerland who was not involved in this research, praises the “creativeness of the idea”. She describes the results as interesting and surprising because they seem to counter the common view that lack of purity in a quantum state degrades quantum features. She also agrees that the work could be important for quantum sensing, adding that many systems – including some more suited for sensing – are difficult to prepare in the ground state.
However, Chu notes that, for reasons stemming from the system’s parameters and the protocols the team used to generate the cat states, it should be possible to cool this particular system very efficiently to the ground state. This, she says, somewhat diminishes the argument that the method will be useful for systems where this isn’t the case. “However, these parameters and the protocols they showed might not be the only way to prepare such states, so on a fundamental level it is still very interesting,” she concludes.
Electron therapy has long played an important role in cancer treatments. Electrons with energies of up to 20 MeV can treat superficial tumours while minimizing the dose delivered to underlying tissues; they are also ideal for performing total skin therapy and intraoperative radiotherapy. The shallow penetration depth of such low-energy electrons, however, limits the range of tumour sites that they can treat. And as photon-based radiotherapy technology continues to progress, electron therapy has somewhat fallen out of fashion.
That could all be about to change with the introduction of radiation treatments based on very high-energy electrons (VHEEs). Once realised in the clinic, VHEEs – with energies from 50 up to 400 MeV – will deliver highly penetrating, easily steerable, conformal treatment beams with the potential to enable emerging techniques such as FLASH radiotherapy. French medical technology company THERYQ is working to make this opportunity a reality.
Therapeutic electron beams are produced using radio frequency (RF) energy to accelerate electrons within a vacuum cavity. An accelerator just over 1 m in length can boost electrons to energies of about 25 MeV – corresponding to a tissue penetration depth of a few centimetres. It’s possible to create higher-energy beams by simply daisy-chaining additional vacuum chambers, but such systems soon become too large and impractical for clinical use.
THERYQ is focusing on a totally different approach to generating VHEE beams. “In an ideal case, these accelerators allow you to reach energy transfers of around 100 MeV/m,” explains THERYQ’s Sébastien Curtoni. “The challenge is to create a system that’s as compact as possible, closer to the footprint and cost of current radiotherapy machines.”
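A back-of-the-envelope comparison shows why that gradient matters. Ignoring the electron injector, focusing magnets and beam transport, the active accelerating length needed to reach a beam energy $E$ at gradient $G$ is roughly $L \approx E/G$:

$$L_{200\,\mathrm{MeV}} \approx \frac{200\ \mathrm{MeV}}{25\ \mathrm{MeV/m}} = 8\ \mathrm{m} \quad \text{versus} \quad \frac{200\ \mathrm{MeV}}{100\ \mathrm{MeV/m}} = 2\ \mathrm{m},$$

which is the difference between a research-hall installation and something that could plausibly fit in a hospital bunker.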
Working in collaboration with CERN, THERYQ is aiming to modify CERN’s Compact Linear Collider technology for clinical applications. “We are adapting the CERN technology, which was initially produced for particle physics experiments, to radiotherapy,” says Curtoni. “There are definitely things in this design that are very useful for us and other things that are difficult. At the moment, this is still in the design and conception phase; we are not there yet.”
VHEE advantages
The higher energy of VHEE beams provides sufficient penetration to treat deep tumours, with the dose peak region extending up to 20–30 cm in depth for parallel (non-divergent) beams using energy levels of 100–150 MeV (for field sizes of 10 x 10 cm or above). And in contrast to low-energy electrons, which have significant lateral spread, VHEE beams have extremely narrow penumbra with sharp beam edges that help to create highly conformal dose distributions.
“Electrons are extremely light particles and propagate through matter in very straight lines at very high energies,” Curtoni explains. “If you control the initial direction of the beam, you know that the patient will receive a very steep and well defined dose distribution and that, even for depths above 20 cm, the beam will remain sharp and not spread laterally.”
Electrons are also relatively insensitive to tissue inhomogeneities, such as those encountered as the treatment beam passes through different layers of muscle, bone, fat or air. “VHEEs have greater robustness against density variations and anatomical changes,” adds THERYQ’s Costanza Panaino. “This is a big advantage for treatments in locations where there is movement, such as the lung and pelvic areas.”
It’s also possible to manipulate VHEEs via electromagnetic scanning. Electrons have a charge-to-mass ratio roughly 1800 times higher than that of protons, meaning that they can be steered with a much weaker magnetic field than required for protons. “As a result, the technology that you are building has a smaller footprint and the possibility of costing less,” Panaino explains. “This is extremely important because the cost of building a proton therapy facility is prohibitive for some countries.”
Enabling FLASH
In addition to expanding the range of clinical indications that can be treated with electrons, VHEE beams can also provide a tool to enable the emerging – and potentially game changing – technique known as FLASH radiotherapy. By delivering therapeutic radiation at ultrahigh dose rates (higher than 100 Gy/s), FLASH vastly reduces normal tissue toxicity while maintaining anti-tumour activity, potentially minimizing harmful side-effects.
The recent interest in the FLASH effect began back in 2014 with the report of a differential response between normal and tumour tissue in mice exposed to high dose-rate, low-energy electrons. Since then, most preclinical FLASH studies have used electron beams, as did the first patient treatment in 2019 – a skin cancer treatment at Lausanne University Hospital (CHUV) in Switzerland, performed with the Oriatron eRT6 prototype from PMB-Alcen, the French company from which THERYQ originated.
FLASH radiotherapy is currently being used in clinical trials with proton beams, as well as with low-energy electrons, where it remains intrinsically limited to superficial treatments. Treating deep-seated tumours with FLASH requires more highly penetrating beams. And while the most obvious option would be to use photons, it’s extremely difficult to produce an X-ray beam with a high enough dose rate to induce the FLASH effect without excessive heat generation destroying the conversion target.
“It’s easier to produce a high dose-rate electron beam for FLASH than trying to [perform FLASH] with X-rays, as you use the electron beam directly to treat the patient,” Curtoni explains. “The possibility to treat deep-seated tumours with high-energy electron beams compensates for the fact that you can’t use X-rays.”
Panaino points out that in addition to high dose rates, FLASH radiotherapy also relies on various interdependent parameters. “Ideally, to induce the FLASH effect, the beam should be pulsed at a frequency of about 100 Hz, the dose-per-pulse should be 1 Gy or above, and the dose rate within the pulse should be higher than 10⁶ Gy/s,” she explains.
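Those parameters hang together in a simple way: the time-averaged dose rate is the dose-per-pulse multiplied by the repetition rate, and the quoted in-pulse dose rate implies microsecond-scale pulses:

$$\bar{\dot{D}} = D_{\mathrm{pulse}} \times f \approx 1\ \mathrm{Gy} \times 100\ \mathrm{Hz} = 100\ \mathrm{Gy/s}, \qquad t_{\mathrm{pulse}} \approx \frac{D_{\mathrm{pulse}}}{\dot{D}_{\mathrm{pulse}}} \lesssim \frac{1\ \mathrm{Gy}}{10^{6}\ \mathrm{Gy/s}} = 1\ \mu\mathrm{s},$$

consistent with the ultrahigh mean dose rates (above 100 Gy/s) usually quoted for the FLASH effect.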
Into the clinic
THERYQ is using its VHEE expertise to develop a clinical FLASH radiotherapy system called FLASHDEEP, which will use electrons at energies of 100 to 200 MeV to treat tumours at depths of up to 20 cm. The first FLASHDEEP systems will be installed at CHUV (which is part of a consortium with CERN and THERYQ) and at the Gustave Roussy cancer centre in France.
“We are trying to introduce FLASH into the clinic, so we have a prototype FLASHKNiFE machine that allows us to perform low-energy, 6 and 9 MeV, electron therapy,” says Charlotte Robert, head of the medical physics department research group at Gustave Roussy. “The first clinical trials using low-energy electrons are all on skin tumours, aiming to show that we can safely decrease the number of treatment sessions.”
While these initial studies are limited to skin lesions, clinical implementation of the FLASHDEEP system will extend the benefits of FLASH to many more tumour sites. Robert predicts that VHEE-based FLASH will prove most valuable for treating radioresistant cancers that cannot currently be cured. The rationale is that FLASH’s ability to spare normal tissue will allow delivery of higher target doses without increasing toxicity.
“You will not use this technology for diseases that can already be cured, at least initially,” she explains. “The first clinical trial, I’m quite sure, will be either glioblastoma or pancreatic cancers that are not effectively controlled today. If we can show that VHEE FLASH can spare normal tissue more than conventional radiotherapy can, we hope this will have a positive impact on lesion response.”
“There are a lot of technological challenges around this technology and we are trying to tackle them all,” Curtoni concludes. “The ultimate goal is to produce a VHEE accelerator with a very compact beamline that makes this technology and FLASH a reality for a clinical environment.”
Brain–computer interfaces (BCIs) enable the flow of information between the brain and an external device such as a computer, smartphone or robotic limb. Applications range from use in augmented and virtual reality (AR and VR), to restoring function to people with neurological disorders or injuries.
Electroencephalography (EEG)-based BCIs use sensors on the scalp to noninvasively record electrical signals from the brain and decode them to determine the user’s intent. Currently, however, such BCIs require bulky, rigid sensors that prevent use during movement and don’t work well with hair on the scalp, which affects the skin–electrode impedance. A team headed up at Georgia Tech’s WISH Center has overcome these limitations by creating a brain sensor that’s small enough to fit between strands of hair and is stable even while the user is moving.
“This BCI system can find wide applications. For example, we can realize a text spelling interface for people who can’t speak,” says W Hong Yeo, Harris Saunders Jr Professor at Georgia Tech and director of the WISH Center, who co-led the project with Tae June Kang from Inha University in Korea. “For people who have movement issues, this BCI system can offer connectivity with human augmentation devices, a wearable exoskeleton, for example. Then, using their brain signals, we can detect the user’s intentions to control the wearable system.”
A tiny device
The microscale brain sensor comprises a cross-shaped structure of five microneedle electrodes, with sharp tips (less than 30°) that penetrate the skin easily with nearly pain-free insertion. The researchers used UV replica moulding to create the array, followed by femtosecond laser cutting to shape it to the required dimensions – just 850 x 1000 µm – to fit into the space between hair follicles. They then coated the microsensor with a highly conductive polymer (PEDOT:Tos) to enhance its electrical conductivity.
Between the hairs: the sensor’s small size and lightweight design significantly reduce motion artefacts. (Courtesy: W Hong Yeo)
The microneedles capture electrical signals from the brain and transmit them along ultrathin serpentine wires that connect to a miniaturized electronics system on the back of the neck. The serpentine interconnector stretches as the skin moves, isolating the microsensor from external vibrations and preventing motion artefacts. The miniaturized circuits then wirelessly transmit the recorded signals to an external system (AR glasses, for example) for processing and classification.
Yeo and colleagues tested the performance of the BCI using three microsensors inserted into the scalp over the occipital lobe (the brain’s visual processing centre). The BCI exhibited excellent stability, offering high-quality measurement of neural signals – steady-state visual evoked potentials (SSVEPs) – for up to 12 h, while maintaining low contact impedance density (0.03 kΩ/cm²).
The team also compared the quality of EEG signals measured using the microsensor-based BCI with those obtained from conventional gold-cup electrodes. Participants wearing both sensor types closed and opened their eyes while standing, walking or running.
With the participant standing still, both electrode types recorded stable EEG signals, with an increased amplitude upon closing the eyes, due to the rise in alpha wave power. During motion, however, the EEG time series recorded with the conventional electrodes showed noticeable fluctuations. The microsensor measurements, on the other hand, exhibited minimal fluctuations while walking and significantly fewer fluctuations than the gold-cup electrodes while running.
Overall, the alpha wave power recorded by the microsensors during eye-closing was higher than that of the conventional electrode, which could not accurately capture EEG signals while the user was running. The microsensors only exhibited minor motion artefacts, with little to no impact on the EEG signals in the alpha band, allowing reliable data extraction even during excessive motion.
Real-world scenario
Next, the team showed how the BCI could be used within everyday activities – such as making calls or controlling external devices – that require a series of decisions. The BCI enables a user to make these decisions using their thoughts, without needing physical input such as a keyboard, mouse or touchscreen. And the new microsensors free the user from environmental and movement constraints.
The researchers demonstrated this approach in six subjects wearing AR glasses and a microsensor-based EEG monitoring system. They performed experiments with the subjects standing, walking or running on a treadmill, with two distinct visual stimuli from the AR system used to induce SSVEP responses. Using a train-free SSVEP classification algorithm, the BCI determined which stimulus the subject was looking at with a classification accuracy of 99.2%, 97.5% and 92.5%, while standing, walking and running, respectively.
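The article does not specify which train-free algorithm the team used, but the classic training-free approach to SSVEP classification is canonical correlation analysis (CCA) between the recorded EEG and sine–cosine reference templates at each candidate stimulus frequency. The sketch below illustrates that general idea only; the sampling rate, stimulus frequencies and number of harmonics are assumptions for illustration, not parameters from the study.

```python
# Minimal CCA-based SSVEP classifier (illustrative; not the team's code).
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 250                    # sampling rate in Hz (assumed)
STIM_FREQS = [10.0, 12.0]   # candidate stimulus frequencies in Hz (assumed)
N_HARMONICS = 2             # number of harmonics in each reference template

def reference_signals(freq, n_samples, fs=FS, n_harmonics=N_HARMONICS):
    """Build sine/cosine reference templates for one stimulus frequency."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)

def classify_epoch(eeg):
    """eeg: array of shape (n_samples, n_channels).
    Returns the index of the stimulus frequency whose reference set is
    most strongly correlated with the EEG epoch."""
    scores = []
    for f in STIM_FREQS:
        refs = reference_signals(f, eeg.shape[0])
        cca = CCA(n_components=1)
        x_c, y_c = cca.fit_transform(eeg, refs)
        scores.append(abs(np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1]))
    return int(np.argmax(scores))
```

Because the reference signals are generated analytically, nothing needs to be trained on user data – which is what makes this family of methods attractive for plug-and-play BCIs.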
The team also developed an AR-based video call system controlled by EEG, which allows users to manage video calls (rejecting, answering and ending) with their thoughts, demonstrating its use during scenarios such as ascending and descending stairs and navigating hallways.
“By combining BCI and AR, this system advances communication technology, offering a preview of the future of digital interactions,” the researchers write. “Additionally, this system could greatly benefit individuals with mobility or dexterity challenges, allowing them to utilize video calling features without physical manipulation.”
With so much turmoil in the world at the moment, it’s always great to meet enthusiastic physicists celebrating all that their subject has to offer. That was certainly the case when I travelled with my colleague Tami Freeman to the 2025 Celebration of Physics at Nottingham Trent University (NTU) on 10 April.
Organized by the Institute of Physics (IOP), which publishes Physics World, the event was aimed at “physicists, creative thinkers and anyone interested in science”. It also featured some of the many people who won IOP awards last year, including Nick Stone from the University of Exeter, who was awarded the 2024 Rosalind Franklin medal and prize.
Stone was honoured for his “pioneering use of light for diagnosis and therapy in healthcare”, including “developing novel Raman spectroscopic tools and techniques for rapid in vivo cancer diagnosis and monitoring”. Speaking in a Physics World Live chat, Stone explained why Raman spectroscopy is such a useful technique for medical imaging.
Nottingham is, of course, a city famous for medical imaging, thanks in particular to the University of Nottingham Nobel laureate Peter Mansfield (1933–2017), who pioneered magnetic resonance imaging (MRI). In an entertaining talk, Rob Morris from NTU explained how MRI is also crucial for imaging foodstuffs, helping the food industry to boost productivity, reduce waste – and make tastier pork pies.
Still on the medical theme, Niall Holmes from Cerca Magnetics, which was spun out from the University of Nottingham, explained how his company has developed wearable magnetoencephalography (MEG) sensors that can measure magnetic fields generated by neuronal firings in the brain. In 2023 Cerca won one of the IOP’s business and innovation awards.
Richard Friend from the University of Cambridge, who won the IOP’s top Isaac Newton medal and prize, discussed some of the many recent developments that have followed from his seminal 1990 discovery that semiconducting polymers can be used in light-emitting diodes (LEDs).
The event ended with a talk from particle physicist Tara Shears from the University of Liverpool, who outlined some of the findings of the new IOP report Physics and AI, to which she was an adviser. Based on a survey with 700 responses and a workshop with experts from academia and industry, the report concludes that physics doesn’t just benefit from AI – it underpins it too.
I’m sure AI will be good for physics overall, but I hope it never removes the need for real-life meetings like the Celebration of Physics.
Researchers from the Institute of Physics of the Chinese Academy of Sciences have produced the first two-dimensional (2D) sheets of metal. At just angstroms thick, these metal sheets could be an ideal system for studying the fundamental physics of the quantum Hall effect, 2D superfluidity and superconductivity, topological phase transitions and other phenomena that feature tight quantum confinement. They might also be used to make novel electronic devices such as ultrathin low-power transistors, high-frequency devices and transparent displays.
Since the discovery of graphene – a 2D sheet of carbon just one atom thick – in 2004, hundreds of other 2D materials have been fabricated and studied. In most of these, layers of covalently bonded atoms are separated by gaps. The presence of these gaps means that neighbouring layers are held together only by weak van der Waals (vdW) interactions, making it relatively easy to “shave off” single layers to make 2D sheets.
Making atomically thin metals would expand this class of technologically important structures. However, because each atom in a metal is strongly bonded to surrounding atoms in all directions, thinning metal sheets to this degree has proved difficult. Indeed, many researchers thought it might be impossible.
Melting and squeezing pure metals
The technique developed by Guangyu Zhang, Luojun Du and colleagues involves heating powders of pure metals between two monolayer-MoS2/sapphire vdW anvils. The team used MoS2/sapphire because both materials are atomically flat and lack dangling bonds that could react with the metals. They also have high Young’s moduli, of 430 GPa and 300 GPa respectively, meaning they can withstand extremely high pressures.
Once the metal powders melted into a droplet, the researchers applied a pressure of 200 MPa. They then continued this “vdW squeezing” until the opposite sides of the anvils cooled to room temperature and 2D sheets of metal formed.
The team produced five atomically thin 2D metals using this technique. The thinnest, at around 5.8 Å, was tin, followed by bismuth (~6.3 Å), lead (~7.5 Å), indium (~8.4 Å) and gallium (~9.2 Å).
“Arduous explorations”
Zhang, Du and colleagues started this project around 10 years ago after they decided it would be interesting to work on 2D materials other than graphene and its layered vdW cousins. At first, they had little success. “Since 2015, we tried out a host of techniques, including using a hammer to thin a metal foil – a technique that we borrowed from gold foil production processes – all to no avail,” Du recalls. “We were not even able to make micron-thick foils using these techniques.”
After 10 years of what Du calls “arduous explorations”, the team finally took a crucial step forward by developing the vdW squeezing method.
Writing in Nature, the researchers say that the five 2D metals they’ve realized so far are just the “tip of the iceberg” for their method. They now intend to increase this number. “In terms of novel properties, there is still a knowledge gap in the emerging electrical, optical, magnetic properties of 2D metals, so it would be nice to see how these materials behave physically as compared to their bulk counterparts thanks to 2D confinement effects,” says Zhang. “We would also like to investigate to what extent such 2D metals could be used for specific applications in various technological fields.”
A proposed experiment that would involve trapping atoms on a two-layered laser grid could be used to study the mechanism behind high-temperature superconductivity. Developed by physicists in Germany and France led by Henning Schlömer, the new technique could revolutionize our understanding of this phenomenon.
Superconductivity is a phenomenon characterized by an abrupt drop to zero of electric resistance when certain materials are cooled below a critical temperature. It has remained in the physics zeitgeist for over a hundred years and continues to puzzle contemporary physicists. While scientists have a good understanding of “conventional” superconductors (which tend to have low critical temperatures), the physics of high-temperature superconductors remains poorly understood. A deeper understanding of the mechanisms responsible for high-temperature superconductivity could unveil the secrets behind macroscopic quantum phenomena in many-body systems.
Mimicking real crystalline materials
Optical lattices have emerged as a powerful tool to study such many-body quantum systems. Here, two counter-propagating laser beams overlap to create a standing wave. Extending this into two dimensions creates a grid (or lattice) of potential-energy minima where atoms can be trapped (see figure). The interactions between these trapped atoms can then be tuned to mimic real crystalline materials, giving us an unprecedented ability to study their properties.
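In the simplest case of a single retro-reflected beam of wavelength λ, the optical potential takes the familiar standing-wave form, and adding a second, orthogonal beam pair produces the 2D grid described above:

$$V(x) = V_0 \cos^2(kx), \qquad V(x,y) = V_0\left[\cos^2(kx) + \cos^2(ky)\right], \qquad k = \frac{2\pi}{\lambda},$$

with potential minima spaced λ/2 apart, each of which can trap an atom.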
Superconductivity is characterized by the formation of long-range correlations between electron pairs. While the electronic properties of high-temperature superconductors can be studied in the lab, it can be difficult to test hypotheses because the properties of each superconductor are fixed. In contrast, correlations between atoms in an optical lattice can be tuned, allowing different models and parameters to be explored.
Henning Schlömer (left) and Hannah Lange: the Ludwig Maximilian University of Munich PhD students collaborated on the proposal. (Courtesy: Henning Schlömer/Hannah Lange)
This could be done by trapping fermionic atoms (analogous to electrons in a superconducting material) in an optical lattice and enabling them to form pair correlations. However, this has proved to be challenging because these correlations only occur at very low temperatures that are experimentally inaccessible. Measuring these correlations presents an additional challenge of adding or removing atoms at specific sites in the lattice without disturbing the overall lattice state. But now, Schlömer and colleagues propose a new protocol to overcome these challenges.
The proposal
The researchers propose trapping fermionic atoms on a two-layered lattice. By introducing a potential-energy offset between the two layers, they ensure that the atoms can only move within a layer and there is no hopping between layers. They then enable a magnetic interaction between the two layers, allowing the atoms to form spin correlations such as singlets, in which paired atoms always have opposing spins. The dynamics of such interlayer correlations will give rise to superconducting behaviour.
This system is modelled using a “mixed-dimensional bilayer” (MBD) model. It accounts for three phenomena: the hopping of atoms between lattice sites within a layer; the magnetic (spin) interaction between the atoms of the two layers; and the magnetic interactions between the atoms within a layer.
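Schematically – and in generic notation rather than the authors’ own – a Hamiltonian containing these three ingredients can be written as a bilayer t–J-type model:

$$H = -t \sum_{\langle i,j\rangle,\alpha,\sigma} \left(c^{\dagger}_{i\alpha\sigma} c_{j\alpha\sigma} + \mathrm{h.c.}\right) + J_{\perp} \sum_{i} \mathbf{S}_{i,1}\cdot\mathbf{S}_{i,2} + J_{\parallel} \sum_{\langle i,j\rangle,\alpha} \mathbf{S}_{i\alpha}\cdot\mathbf{S}_{j\alpha},$$

where α = 1, 2 labels the two layers, ⟨i,j⟩ runs over neighbouring sites within a layer, t is the intralayer hopping, and J⊥ and J∥ are the interlayer and intralayer spin couplings (with double occupancy of a site excluded, in the t–J spirit).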
Numerical simulations of the MBD model suggest the occurrence of superconductor-like behaviour in optical lattices at critical temperatures much higher than those of traditional models. These temperatures are readily accessible in experiments.
To measure the correlations, one needs to track pair formation in the lattice. One way to track pairs is to add or remove atoms from the lattice without disturbing the overall lattice state, but this is experimentally infeasible. Instead, the researchers propose doping the energetically higher layer with holes – that is, removing atoms to create vacant sites. The energetically lower layer is doped with doublons, which are atom pairs that occupy a single lattice site. The potential offset between the two layers can then be tuned to enable controlled interactions between the doublons and holes. This would allow researchers to study pair formation via this interaction rather than having to add or remove atoms from specific lattice sites.
Clever mathematical trick
To study superconducting correlations in the doped system, the researchers employ a clever mathematical trick. Using a mathematical transformation, they map the model onto an equivalent model described by only “hole-type” dopants without changing the underlying physics. This allows them to map superconducting correlations onto density correlations, which can be routinely accessed in existing experiments.
With their proposal, Schlömer and colleagues are able both to prepare the optical lattice in a state where superconducting behaviour occurs at experimentally accessible temperatures, and to study this behaviour by measuring pair formation.
When asked about possible experimental realizations, Schlömer is optimistic: “While certain subtleties remain to be addressed, the technology is already in place – we expect it will become experimental reality in the near future”.
Imagine, if you will, that you are a quantum system. Specifically, you are an unstable quantum system – one that would, if left to its own devices, rapidly decay from one state (let’s call it “awake”) into another (“asleep”). But whenever you start to drift into the “asleep” state, something gets in the way. Maybe it’s a message pinging on your phone. Maybe it’s a curious child peppering you with questions. Whatever it is, it jolts you out of your awake–asleep superposition and projects you back into wakefulness. And because it keeps happening faster than you can fall asleep, you remain awake, diverted from slumber by a stream of interruptions – or, in quantum terms, measurements.
This phenomenon of repeated measurements “freezing” an unstable quantum system into a particular state is known as the quantum Zeno effect (figure 1). Named after a paradox from ancient Greek philosophy, it was hinted at in the 1950s by the scientific polymaths Alan Turing and John von Neumann but only fully articulated in 1977 by the physicists Baidyanath Misra and George Sudarshan (J. Math. Phys. 18 756). Since then, researchers have observed it in dozens of quantum systems, including trapped ions, superconducting flux qubits and atoms in optical cavities. But the apparent ubiquity of the quantum Zeno effect cannot hide the strangeness at its heart. How does the simple act of measuring a quantum system have such a profound effect on its behaviour?
A watched quantum pot
“When you come across it for the first time, you think it’s actually quite amazing because it really shows that the measurement in quantum mechanics influences the system,” says Daniel Burgarth, a physicist at the Friedrich-Alexander-Universität in Erlangen-Nürnberg, Germany, who has done theoretical work on the quantum Zeno effect.
Giovanni Barontini, an experimentalist at the University of Birmingham, UK, who has studied the quantum Zeno effect in cold atoms, agrees. “It doesn’t have a classical analogue,” he says. “I can watch a classical system doing something forever and it will continue doing it. But a quantum system really cares if it’s watched.”
1 A watched quantum pot
(Illustration courtesy: Mayank Shreshtha; Zeno image public domain; Zeno crop CC BY S Perquin)
Applying heat to a normal, classical pot of water will cause it to evolve from state 1 (not boiling) to state 2 (boiling) at the same rate regardless of whether anyone is watching it (even if it doesn’t seem like it). In the quantum world, however, a system that would normally evolve from one state to the other if left unobserved (blindfolded Zeno) can be “frozen” in place by repeated frequent measurements (eyes-open Zeno).
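The textbook way to see this freezing is through the short-time behaviour of the survival probability. Quantum mechanics predicts that an unstable state initially decays quadratically rather than exponentially, and repeated projective measurements exploit exactly that:

$$P(\tau) \approx 1 - \left(\frac{\tau}{\tau_Z}\right)^{2} \quad\Longrightarrow\quad P(t) \approx \left[1 - \left(\frac{t}{N\tau_Z}\right)^{2}\right]^{N} \xrightarrow{\;N\to\infty\;} 1,$$

where τ_Z is the characteristic “Zeno time” and N is the number of equally spaced measurements made during a total time t. The more often the system is watched, the closer it stays to the state it started in.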
For the physicists who laid the foundations of quantum mechanics a century ago, any connection between measurement and outcome was a stumbling block. Several tried to find ways around it, for example by formalizing a role for observers in quantum wavefunction collapse (Niels Bohr and Werner Heisenberg); introducing new “hidden” variables (Louis de Broglie and David Bohm); and even hypothesizing the creation of new universes with each measurement (the “many worlds” theory of Hugh Everett).
But none of these solutions proved fully satisfactory. Indeed, the measurement problem seemed so intractable that most physicists in the next generation avoided it, preferring the approach sometimes described – not always pejoratively – as “shut up and calculate”.
Today’s quantum physicists are different. Rather than treating what Barontini calls “the apotheosis of the measurement effect” as a barrier to overcome or a triviality to ignore, they are doing something few of their forebears could have imagined. They are turning the quantum Zeno effect into something useful.
Noise management
To understand how freezing a quantum system by measuring it could be useful, consider a qubit in a quantum computer. Many quantum algorithms begin by initializing qubits into a desired state and keeping them there until they’re required to perform computations. The problem is that quantum systems seldom stay where they’re put. In fact, they’re famously prone to losing their quantum nature (decohering) at the slightest disturbance (noise) from their environment. “Whenever we build quantum computers, we have to embed them in the real world, unfortunately, and that real world causes nothing but trouble,” Burgarth says.
Quantum scientists have many strategies for dealing with environmental noise. Some of these strategies are passive, such as cooling superconducting qubits with dilution refrigerators and using electric and magnetic fields to suspend ionic and atomic qubits in a vacuum. Others, though, are active. They involve, in effect, tricking qubits into staying in the states they’re meant to be in, and out of the states they’re not.
The quantum Zeno effect is one such trick. “The way it works is that we apply a sequence of kicks to the system, and we are actually rotating the qubit with each kick,” Burgarth explains. “You’re rotating the system, and then effectively the environment wants to rotate it in the other direction.” Over time, he adds, these opposing rotations average out, protecting the system from noise by freezing it in place.
Quantum state engineering
While noise mitigation is useful, it’s not the quantum Zeno application that interests Burgarth and Barontini the most. The real prize, they agree, is something called quantum state engineering, which is much more complex than simply preventing a quantum system from decaying or rotating.
The source of this added complexity is that real quantum systems – much like real people – usually have more than two states available to them. For example, the set of permissible “awake” states for a person – the Hilbert space of wakefulness, let’s call it – might include states such as cooking dinner, washing dishes and cleaning the bathroom. The goal of quantum state engineering is to restrict this state-space so the system can only occupy the state(s) required for a particular application.
As for how the quantum Zeno effect does this, Barontini explains it by referring to Zeno’s original, classical paradox. In the fifth century BCE, the philosopher Zeno of Elea posed a conundrum based on an arrow flying through the air. If you look at this arrow at any possible moment during its flight, you will find that in that instant, it is motionless. Yet somehow, the arrow still moves. How?
In the quantum version, Barontini explains, looking at the arrow freezes it in place. But that isn’t the only thing that happens. “The funniest thing is that if I look somewhere, then the arrow cannot go where I’m looking,” he says. “It will have to go around it. It will have to modify its trajectory to go outside my field of view.”
By shaping this field of view, Barontini continues, physicists can shape the system’s behaviour. As an example, he cites work by Serge Haroche, who shared the 2012 Nobel Prize for Physics with another notable quantum Zeno experimentalist, David Wineland.
In 2014 Haroche and colleagues at the École Normale Supérieure (ENS) in Paris, France, sought to control the dynamics of an electron within a so-called Rydberg atom. In this type of atom, the outermost electron is very weakly bound to the nucleus and can occupy any of several highly excited states.
The researchers used a microwave field to divide 51 of these highly excited Rydberg states into two groups, before applying radio-frequency pulses to the system. Normally, these pulses would cause the electron to hop between states. However, the continual “measurement” supplied by the microwave field meant that although the electron could move within either group of states, it could not jump from one group to the other. It was stuck – or, more precisely, it was in a special type of quantum superposition known as a Schrödinger cat state.
Restricting the behaviour of an electron might not sound very exciting in itself. But in this and other experiments, Haroche and colleagues showed that imposing such restrictions brings forth a slew of unusual quantum states. It’s as if telling the system what it can’t do forces it to do a bunch of other things instead, like a procrastinator who cooks dinner and washes dishes to avoid cleaning the bathroom. “It really enriches your quantum toolbox,” explains Barontini. “You can generate an entangled state that is more entangled or methodologically more useful than other states you could generate with traditional means.”
Just what is a measurement, anyway?
As well as generating interesting quantum states, the quantum Zeno effect is also shedding new light on the nature of quantum measurements. The question of what constitutes a “measurement” for quantum Zeno purposes turns out to be surprisingly broad. This was elegantly demonstrated in 2014, when physicists led by Augusto Smerzi at the Università di Firenze, Italy, showed that simply shining a resonant laser at their quantum system (figure 2) produced the same quantum Zeno dynamics as more elaborate “projective” measurements – which in this case involved applying pairs of laser pulses to the system at frequencies tailored to specific atomic transitions. “It’s fair to say that almost anything causes a Zeno effect,” says Burgarth. “It’s a very universal and easy-to-trigger phenomenon.”
2 Experimental realization of quantum Zeno dynamics
(First published in Nature Commun. 5 3194. Reproduced with permission from Springer Nature)
The energy level structure of a population of ultracold 87Rb atoms, evolving in a five-level Hilbert space given by the five spin orientations of the F=2 hyperfine ground state. An applied RF field (red arrows) couples neighbouring quantum states together and allows atoms to “hop” between states. Normally, atoms initially placed in the |F, mF> = |2,2> state would cycle between this state and the other four F=2 states in a process known as Rabi oscillation. However, by introducing a “measurement” – shown here as a laser beam (green arrow) resonant with the transition between the |1,0> state and the |2,0> state – Smerzi and colleagues drastically changed the system’s dynamics, forcing the atoms to oscillate between just the |2,2> and |2,1> states (represented by up and down arrows on the so-called Bloch sphere at right). An additional laser beam (orange arrow) and the detector D were used to monitor the system’s evolution over time.
Other research has broadened our understanding of what measurement can do. While the quantum Zeno effect uses repeated measurements to freeze a quantum system in place (or at least slow its evolution from one state to another), it is also possible to do the opposite and use measurements to accelerate quantum transitions. This phenomenon is known as the quantum anti-Zeno effect, and it has applications of its own. It could, for example, speed up reactions in quantum chemistry.
Over the past 25 years or so, much work has gone into understanding where the ordinary quantum Zeno effect leaves off and the quantum anti-Zeno effect begins. Some systems can display both Zeno and anti-Zeno dynamics, depending on the frequency of the measurements and various environmental conditions. Others seem to favour one over the other.
But regardless of which version turns out to be the most important, quantum Zeno research is anything but frozen in place. Some 2500 years after Zeno posed his paradox, his intellectual descendants are still puzzling over it.
With increased water scarcity and global warming looming, electrochemical technology offers low-energy mitigation pathways via desalination and carbon capture. This webinar will demonstrate how the solid-state concentration swings of less than 5 molar afforded by cation intercalation materials – used originally in rocking-chair batteries – can effect desalination using Faradaic deionization (FDI). We show how the salt depletion/accumulation effect – which plagues Li-ion battery capacity under fast-charging conditions – is exploited in a symmetric Na-ion battery to achieve seawater desalination, exceeding the limits of capacitive deionization with electric double layers by an order of magnitude. Initial modeling that introduced this architecture blazed the trail for the development of new and old intercalation materials in FDI. Experimental demonstration of seawater-level desalination using Prussian blue analogs, however, required cell engineering to overcome the performance-degrading processes that are unique to the cycling of intercalation electrodes in the presence of flow, leading to innovative embedded, micro-interdigitated flow fields with broader application toward fuel cells, flow batteries and other flow-based electrochemical devices. Similar symmetric FDI architectures using proton intercalation materials are also shown to facilitate direct-air capture of carbon dioxide with unprecedentedly low energy input by reversibly shifting pH within the aqueous electrolyte.
Kyle Smith
Kyle C Smith joined the faculty of Mechanical Science and Engineering at the University of Illinois Urbana-Champaign (UIUC) in 2014 after completing his PhD in mechanical engineering (Purdue, 2012) and his post-doc in materials science and engineering (MIT, 2014). His group uses understanding of flow, transport, and thermodynamics in electrochemical devices and materials to innovate toward separations, energy storage, and conversion. For his research he was awarded the 2018 ISE-Elsevier Prize in Applied Electrochemistry of the International Society of Electrochemistry and the 2024 Dean’s Award for Early Innovation as an associate professor by UIUC’s Grainger College. Among his 59 journal papers and 14 patents and patents pending, his work that introduced Na-ion battery-based desalination using porous electrode theory [Smith and Dmello, J. Electrochem. Soc., 163, p. A530 (2016)] was among the top ten most downloaded in the Journal of the Electrochemical Society for five months in 2016. His group was also the first to experimentally demonstrate seawater-level salt removal using this approach [Do et al., Energy Environ. Sci., 16, p. 3025 (2023); Rahman et al., Electrochimica Acta, 514, p. 145632 (2025)], introducing flow fields embedded in electrodes to do so.
A model that could help explain how heavy elements are forged within collapsing stars has been unveiled by Matthew Mumpower at Los Alamos National Laboratory and colleagues in the US. The team suggests that energetic photons generated by newly forming black holes or neutron stars transmute protons within ejected stellar material into neutrons, thereby providing ideal conditions for heavy elements to form.
Astrophysicists believe that elements heavier than iron are created in violent processes such as the explosions of massive stars and the mergers of neutron stars. One way that this is thought to occur is the rapid neutron-capture process (r-process), whereby lighter nuclei created in stars capture neutrons in rapid succession. However, exactly where the r-process occurs is not well understood.
As Mumpower explains, the r-process must be occurring in environments where free neutrons are available in abundance. “But there’s a catch,” he says. “Free neutrons are unstable and decay in about 15 min. Only a few places in the universe have the right conditions to create and use these neutrons quickly enough. Identifying those places has been one of the toughest open questions in physics.”
Intense flashes of light
In their study, Mumpower’s team – which included researchers from the Los Alamos and Argonne national laboratories – looked at how large numbers of neutrons could be created within massive stars that are collapsing to become neutron stars or black holes. Their idea focuses on the intense flashes of light that are known to be emitted from the cores of these objects.
This radiation is emitted at wavelengths across the electromagnetic spectrum – including highly energetic gamma rays. Furthermore, the light is emitted along a pair of narrow jets, which blast outward above each pole of the star’s collapsing core. As they form, these jets plough through the envelope of stellar material surrounding the core, which had been previously ejected by the star. This is believed to create a “cocoon” of hot, dense material surrounding each jet.
In this environment, Mumpower’s team suggests that energetic photons in a jet collide with protons to create a neutron and a pion. Since these neutrons have no electrical charge, many of them could diffuse into the cocoon, providing ideal conditions for the r-process to occur.
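The reaction in question is photopion production, and a quick kinematic estimate shows why only the most energetic photons in the jet can drive it. For a target proton at rest – a simplification that ignores the bulk motion and temperatures in the jet and cocoon – the threshold photon energy is

$$\gamma + p \rightarrow n + \pi^{+}, \qquad E_{\gamma}^{\mathrm{th}} = \frac{(m_n + m_{\pi^{+}})^{2} - m_p^{2}}{2 m_p} \approx 150\ \mathrm{MeV},$$

placing the process firmly in the gamma-ray part of the spectrum.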
To test their hypothesis, the researchers carried out detailed computer simulations to predict the number of free neutrons entering the cocoon due to this process.
Gold and platinum
“We found that this light-based process can create a large number of neutrons,” Mumpower says. “There may be enough neutrons produced this way to build heavy elements, from gold and platinum all the way up to the heaviest elements in the periodic table – and maybe even beyond.”
If their model is correct, it suggests that the origin of some heavy elements involves processes associated with the high-energy particle physics that is studied at facilities like the Large Hadron Collider.
“This process connects high-energy physics – which usually focuses on particles like quarks – with low-energy astrophysics, which studies stars and galaxies,” Mumpower says. “These are two areas that rarely intersect in the context of forming heavy elements.”
Kilonova explosions
The team’s findings also shed new light on some other astrophysical phenomena. “Our study offers a new explanation for why certain cosmic events, like long gamma-ray bursts, are often followed by kilonova explosions – the glow from the radioactive decay of freshly made heavy elements,” Mumpower continues. “It also helps explain why the pattern of heavy elements in old stars across the galaxy looks surprisingly similar.”
The findings could also improve our understanding of the chemical makeup of deep-sea deposits on Earth. The presence of both iron and plutonium in this material suggests that both elements may have been created in the same type of event, before coalescing into the newly forming Earth.
For now, the team will aim to strengthen their model through further simulations – which could better reproduce the complex, dynamic processes taking place as massive stars collapse.