When people think of wind energy, they usually think of windmill-like turbines dotted among hills or lined up on offshore platforms. But there is also another kind of wind energy, one that replaces stationary, earthbound generators with tethered kites that harvest energy as they soar through the sky.
This airborne form of wind energy, or AWE, is not as well-developed as the terrestrial version, but in principle it has several advantages. Power-generating kites are much less massive than ground-based turbines, which reduces both their production costs and their impact on the landscape. They are also far easier to install in areas that lack well-developed road infrastructure. Finally, and perhaps most importantly, wind speeds are many times greater at high altitudes than they are near the ground, significantly enhancing the power densities available for kites to harvest.
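The altitude advantage follows from the cubic scaling of wind power density, P = ½ρv³: even a modest increase in wind speed pays off disproportionately. A quick back-of-the-envelope sketch (the wind speeds here are illustrative values, not figures from the article):

```python
RHO = 1.2  # approximate air density near sea level, kg/m^3

def power_density(v):
    """Kinetic power flux through a unit area facing the wind, in W/m^2."""
    return 0.5 * RHO * v**3

v_ground, v_altitude = 5.0, 12.0   # illustrative wind speeds, m/s
ratio = power_density(v_altitude) / power_density(v_ground)
print(f"{power_density(v_ground):.0f} W/m^2 near the ground vs "
      f"{power_density(v_altitude):.0f} W/m^2 aloft (x{ratio:.1f})")
```

Roughly doubling the wind speed thus yields an order-of-magnitude more harvestable power per unit area.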
There is, however, one major technical challenge for AWE, and it can be summed up in a single word: control. AWE technology is operationally more complex than conventional turbines, and the traditional method of controlling kites (known as model-predictive control) struggles to adapt to turbulent wind conditions. At best, this reduces the efficiency of energy generation. At worst, it makes it challenging to keep devices safe, stable and airborne.
In a paper published in EPL, Antonio Celani and his colleagues Lorenzo Basile and Maria Grazia Berni of the University of Trieste, Italy, and the Abdus Salam International Centre for Theoretical Physics (ICTP) propose an alternative control method based on reinforcement learning. In this form of machine learning, an agent learns to make decisions by interacting with its environment and receiving feedback in the form of “rewards” for good performance. This form of control, they say, should be better at adapting to the variable and uncertain conditions that power-generating kites encounter while airborne.
What was your motivation for doing this work?
Our interest originated from some previous work where we studied a fascinating bird behaviour called thermal soaring. Many birds, from the humble seagull to birds of prey and frigatebirds, exploit atmospheric currents to rise in the sky without flapping their wings, and then glide or swoop down. They then repeat this cycle of ascent and descent for hours, or even for weeks if they are migratory birds. They’re able to do this because birds are very effective at extracting energy from the atmosphere and turning it into potential energy, even though the atmospheric flow is turbulent, and hence very dynamic and unpredictable.
Antonio Celani. (Courtesy: Antonio Celani)
In those works, we showed that we could use reinforcement learning to train virtual birds and also real toy gliders to soar. That got us wondering whether this same approach could be exported to AWE.
When we started looking at the literature, we saw that in most cases, the goal was to control the kite to follow a predetermined path, irrespective of the changing wind conditions. These cases typically used only simple models of atmospheric flow, and almost invariably ignored turbulence.
This is very different from what we see in birds, which adapt their trajectories on the fly depending on the strength and direction of the fluctuating wind they experience. This led us to ask: can a reinforcement learning (RL) algorithm discover efficient, adaptive ways of controlling a kite in a turbulent environment to extract energy for human consumption?
What is the most important advance in the paper?
We offer a proof of principle that it is indeed possible to do this using a minimal set of sensor inputs and control variables, plus an appropriately designed reward/punishment structure that guides trial-and-error learning. The algorithm we deploy finds a way to manoeuvre the kite such that it generates net energy over one cycle of operation. Most importantly, this strategy autonomously adapts to the ever-fluctuating conditions induced by turbulence.
Lorenzo Basile. (Courtesy: Lorenzo Basile)
The main point of RL is that it can learn to control a system just by interacting with the environment, without requiring any a priori knowledge of the dynamical laws that rule its behaviour. This is extremely useful when the systems are very complex, like the turbulent atmosphere and the aerodynamics of a kite.
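As a rough illustration of the trial-and-error principle described here (a toy sketch, not the authors' kite controller), a minimal tabular Q-learning loop can teach an agent a simple task from rewards alone, with no model of the environment's dynamics:

```python
# A minimal tabular Q-learning loop on a toy task -- purely illustrative,
# not the authors' kite controller. An agent on a short 1D chain learns,
# by trial and error alone, that walking right reaches the rewarded state.
import random

N_STATES = 6          # states 0..5; state 5 is the rewarded goal
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment: move, clip to the chain, reward only at the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

def greedy(state):
    """Best-known action, breaking ties at random."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

random.seed(0)
for _ in range(2000):                       # training episodes
    s = 0
    for _ in range(100):                    # step cap per episode
        a = random.choice(ACTIONS) if random.random() < EPS else greedy(s)
        s2, r = step(s, a)
        # Q-learning update: bootstrap on the best next-state value
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2
        if s == N_STATES - 1:
            break

# The learned policy prefers +1 (move right) in every non-goal state
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

A real kite controller would swap the toy states for wind-sensor readings and the reward for net generated energy, but the update rule embodies the same learn-by-interaction idea.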
What are the barriers to implementing RL in real AWE kites, and how might these barriers be overcome?
The virtual environment that we use in our paper to train the kite controller is very simplified, and in general the gap between simulations and reality is wide. We therefore regard the present work mostly as a stimulus for the AWE community to look deeper into alternatives to model-predictive control, like RL.
On the physics side, we found that some phases of an AWE generating cycle are very difficult for our system to learn, and they require a painful fine-tuning of the reward structure. This is especially true when the kite is close to the ground, where winds are weaker and errors are the most punishing. In those cases, it might be a wise choice to use other heuristic, hard-wired control strategies rather than RL.
Finally, in a virtual environment like the one we used to do the RL training in this work, it is possible to perform many trials. In real power kites, this approach is not feasible – it would take too long. However, techniques like offline RL might resolve this issue by interleaving a few field experiments where data are collected with extensive off-line optimization of the strategy. We successfully used this approach in our previous work to train real gliders for soaring.
What do you plan to do next?
We would like to explore the use of offline RL to optimize energy production for a small, real AWE system. In our opinion, the application to low-power systems is particularly relevant in contexts where access to the power grid is limited or uncertain. A lightweight, easily portable device that can produce even small amounts of energy might make a big difference in the everyday life of remote, rural communities, and more generally in the global south.
Circularly polarized (CP) light is encoded with information through its photon spin and can be utilized in applications such as low-power displays, encrypted communications and quantum technologies. Organic light emitting diodes (OLEDs) produce CP light with a left or right “handedness”, depending on the chirality of the light-emitting molecules used to create the device.
While OLEDs usually only emit either left- or right-handed CP light, researchers have now developed OLEDs that can electrically switch between emitting left- or right-handed CP light – without needing different molecules for each handedness.
“We had recently identified an alternative mechanism for the emission of circularly polarized light in OLEDs, using our chiral polymer materials, which we called anomalous circularly polarized electroluminescence,” says lead author Matthew Fuchter from the University of Oxford. “We set about trying to better understand the interplay between this new mechanism and the generally established mechanism for circularly polarized emission in the same chiral materials”.
Light handedness controlled by molecular chirality
The CP light handedness of an organic emissive molecule is controlled by its chirality. A chiral molecule is one that exists as two mirror-image forms that cannot be superimposed on each other. Each of these non-superimposable forms is called an enantiomer, and will absorb, emit and refract CP light with a defined spin angular momentum. Each enantiomer will produce CP light with a different handedness, through an optical mechanism called normal circularly polarized electroluminescence (NCPE).
OLED designs typically require access to both enantiomers, but most chemical synthesis processes will produce racemic mixtures (equal amounts of the two enantiomers) that are difficult to separate. Extracting each enantiomer so that they can be used individually is complex and expensive, but the research at Oxford has simplified this process by using a molecule that can switch between emitting left- and right-handed CP light.
The molecule in question is a helical molecule called (P)-aza[6]helicene, which is the right-handed enantiomer. Even though it is just a one-handed form, the researchers found a way to control the handedness of the OLED, enabling it to switch between both forms.
Switching handedness without changing the structure
The researchers designed the helicene molecules so that the handedness of the light could be switched electrically, without needing to change the structure of the material itself. “Our work shows that either handedness can be accessed from a single-handed chiral material without changing the composition or thickness of the emissive layer,” says Fuchter. “From a practical standpoint, this approach could have advantages in future circularly polarized OLED technologies.”
Instead of making a structural change, the researchers changed the way that electric charges recombine in the device, using interlayers to alter the recombination position and charge carrier mobility. Depending on where the recombination zone is located, the charge transport is either balanced or unbalanced, which in turn determines the handedness of the CP light the device emits.
When the recombination zone is located in the centre of the emissive layer, the charge transport is balanced, which generates an NCPE mechanism. In these situations, the helicene adopts its normal handedness (right handedness).
However, when the recombination zone is located close to one of the transport layers, it creates an unbalanced charge transport mechanism called anomalous circularly polarized electroluminescence (ACPE). The ACPE overrides the NCPE mechanism and inverts the handedness of the device to left handedness by altering the balance of induced orbital angular momentum in electrons versus holes. The presence of these two electroluminescence mechanisms in the device enables it to be controlled electrically by tuning the charge carrier mobility and the recombination zone position.
The research allows the creation of OLEDs with controllable spin angular momentum information using a single emissive enantiomer, while probing the fundamental physics of chiral optoelectronics. “This work contributes to the growing body of evidence suggesting further rich physics at the intersection of chirality, charge and spin. We have many ongoing projects to try and understand and exploit such interplay,” Fuchter concludes.
Born in 1916, Crick studied physics at University College London in the mid-1930s, before working for the Admiralty Research Laboratory during the Second World War. But after reading physicist Erwin Schrödinger’s 1944 book What Is Life? The Physical Aspect of the Living Cell, and a 1946 article on the structure of biological molecules by chemist Linus Pauling, Crick left his career in physics and switched to molecular biology in 1947.
Six years later, while working at the University of Cambridge, he played a key role in decoding the double-helix structure of DNA, working in collaboration with biologist James Watson, biophysicist Maurice Wilkins and other researchers including chemist and X-ray crystallographer Rosalind Franklin. Crick, alongside Watson and Wilkins, went on to receive the 1962 Nobel Prize in Physiology or Medicine for the discovery.
Finally, Crick’s career took one more turn in the mid-1970s. After experiencing a mental health crisis, Crick left Britain and moved to California. He took up neuroscience in an attempt to understand the roots of human consciousness, as discussed in his 1994 book, The Astonishing Hypothesis: the Scientific Search for the Soul.
Parallel lives
When he died in 2004, Crick’s office wall at the Salk Institute in La Jolla, US, carried portraits of Charles Darwin and Albert Einstein, as Cobb notes on the final page of his deeply researched and intellectually fascinating biography. But curiously, there is not a single other reference to Einstein in Cobb’s massive book. Furthermore, there is no reference at all to Einstein in the equally large 2009 biography of Crick, Francis Crick: Hunter of Life’s Secrets, by historian of science Robert Olby, who – unlike Cobb – knew Crick personally.
Nevertheless, a comparison of Crick and Einstein is illuminating. Crick’s family background (in the shoe industry), and his childhood and youth, are in some ways reminiscent of Einstein’s. Both men came from provincial business families of limited financial success, with some interest in science yet little intellectual distinction. Both did moderately well at school and college, but were not academic stars. And both were exposed to established religion, but rejected it in their teens; they had little intrinsic respect for authority, without being open rebels until later in life.
The similarities continue into adulthood, with the two men following unconventional early scientific careers. Both of them were extroverts who loved to debate ideas with fellow scientists (at times devastatingly), although they were equally capable of long, solitary periods of concentration throughout their careers. In middle age, they migrated from their home countries – Germany (Einstein) and Britain (Crick) – to take up academic positions in the US, where they were much admired and inspiring to other scientists, but failed to match their earlier scientific achievements.
In their personal lives, both Crick and Einstein had a complicated history with women. Having divorced their first wives, they had a variety of extramarital affairs – as discussed by Cobb without revealing the names of these women – while remaining married to their second wives. Interestingly, Crick’s second wife, Odile Crick (whom he was married to for 55 years) was an artist, and drew the famous schematic drawing of the double helix published in Nature in 1953.
Stories of friendships
Although Cobb misses this fascinating comparison with Einstein, many other vivid stories light up his book. For example, he recounts Watson’s claim that just after their success with DNA in 1953, “Francis winged into the Eagle [their local pub in Cambridge] to tell everyone within hearing distance that we had found the secret of life” – a story that later appeared on a plaque outside the pub.
“Francis always denied he said anything of the sort,” notes Cobb, “and in 2016, at a celebration of the centenary of Crick’s birth, Watson publicly admitted that he had made it up for dramatic effect (a few years earlier, he had confessed as much to Kindra Crick, Francis’s granddaughter).” No wonder Watson’s much-read 1968 book The Double Helix caused a furious reaction from Crick and a temporary breakdown in their friendship, as Cobb dissects in excoriating detail.
Watson’s deprecatory comments on Franklin helped to provoke the current widespread belief that Crick and Watson succeeded by stealing Franklin’s data. After an extensive analysis of the available evidence, however, Cobb argues that the data was willingly shared with them by Franklin, but that they should have formally asked her permission to use it in their published work – “Ambition, or thoughtlessness, stayed their hand.”
In fact, it seems Crick and Franklin were friends in 1953, and remained so – with Franklin asking Crick for his advice on her draft scientific papers – until her premature death from ovarian cancer in 1958. Indeed, after her first surgery in 1956, Franklin went to stay with Crick and his wife at their house in Cambridge, and then returned to them after her second operation. There certainly appears to have been no breakdown in trust between the two. When Crick was nominated for the Nobel prize in 1961, he openly stated, “The data which really helped us obtain the structure was mainly obtained by Rosalind Franklin.”
As for Crick’s later study of consciousness, Cobb comments, “It would be easy to dismiss Crick’s switch to studying the brain as the quixotic project of an ageing scientist who did not know his limits. After all, he did not make any decisive breakthrough in understanding the brain – nothing like the double helix… But then again, nobody else did, in Crick’s lifetime or since.” One is perhaps reminded once again of Einstein, and his preoccupation during later life with his unified field theory, which remains an open line of research today.
Sound waves can make small objects hover in the air, but applying this acoustic levitation technique to an array of objects is difficult because the objects tend to clump together. Physicists at the Institute of Science and Technology Austria (ISTA) have now overcome this problem thanks to hybrid structures that emerge from the interplay between attractive acoustic forces and repulsive electrostatic ones. By proving that it is possible to levitate many particles while keeping them separated, the finding could pave the way for advances in acoustic-levitation-assisted 3D printing, mid-air chemical synthesis and micro-robotics.
In acoustic levitation, particles ranging in size from tens of microns to millimetres are drawn up into the air and confined by an acoustic force. The origins of this force lie in the momentum that the applied acoustic field transfers to a particle as sound waves scatter off its surface. While the technique works well for single particles, multiple particles tend to aggregate into a single dense object in mid-air because the sound waves they scatter can, collectively, create an attractive interaction between them.
Keeping particles separated
Led by Scott Waitukaitis, the ISTA researchers found a way to avoid this so-called “acoustic collapse” by using a tuneable repulsive electrostatic force to counteract the attractive acoustic one. They began by levitating a single silver-coated poly(methyl methacrylate) (PMMA) microsphere 250‒300 µm in diameter above a reflector plate coated with a transparent and conductive layer of indium tin oxide (ITO). They then imbued the particle with a precisely controlled amount of electrical charge by letting it rest on the ITO plate with the acoustic field off, but with a high-voltage DC potential applied between the plate and a transducer. This produces a capacitive build-up of charge on the particle, and the amount of charge can be estimated from Maxwell’s solutions for two contacting conductive spheres (assuming, in the calculations, that the lower plate acts like a sphere with infinite radius).
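As a rough numerical sketch of that charge estimate: for a conducting sphere resting on a plane electrode in a uniform field E, Maxwell's classical contact-charge result is Q = (2/3)π³ε₀R²E. The voltage and gap below are illustrative assumptions, not parameters reported in the paper:

```python
import math

EPS0 = 8.854e-12            # vacuum permittivity, F/m

def maxwell_contact_charge(radius_m, field_V_per_m):
    """Charge acquired by a conducting sphere in contact with a plane
    electrode in a uniform field E: Q = (2/3) * pi^3 * eps0 * R^2 * E."""
    return (2.0 / 3.0) * math.pi**3 * EPS0 * radius_m**2 * field_V_per_m

R = 137.5e-6                # sphere radius (~275 um diameter bead)
V, d = 1.0e3, 5.0e-3        # assumed: 1 kV across a 5 mm plate gap
E = V / d                   # uniform-field approximation

Q = maxwell_contact_charge(R, E)
print(f"estimated charge: {Q:.2e} C")
```

With these assumed values the bead picks up a fraction of a picocoulomb; scaling V scales the charge linearly, which is what makes the loading method tuneable.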
The next step in the process is to switch on the acoustic field and, after just 10 ms, add the electric field to it. During the short period in which both fields are on, and provided the electric field is strong enough, either field is capable of launching the particle towards the centre of the levitation setup. The electric field is then switched off. A few seconds later, the particle levitates stably in the trap, with a charge given, in principle, by Maxwell’s approximations.
A visually mesmerizing dance of particles
This charging method works equally well for multiple particles, allowing the researchers to load particles into the trap with high efficiency and virtually any charge they want, limited only by the breakdown voltage of the surrounding air. Indeed, the physicists found they could tune the charge to levitate particles separately or collapse them into a single, dense object. They could even create hybrid states that mix separated and collapsed particles.
And that wasn’t all. According to team member Sue Shi, a PhD student at ISTA and the lead author of a paper in PNAS about the research, the most exciting moment came when they saw the compact parts of the hybrid structures spontaneously begin to rotate, while the expanded parts remained in one place while oscillating in response to the rotation. The result was “a visually mesmerizing dance,” Shi says, adding that “this is the first time that such acoustically and electrostatically coupled interactions have been observed in an acoustically levitated system.”
As well as having applications in areas such as materials science and micro-robotics, Shi says the technique developed in this work could be used to study non-reciprocal effects that lead to the particles rotating or oscillating. “This would pave the way for understanding more elusive and complex non-reciprocal forces and many-body interactions that likely influence the behaviours of our system,” Shi tells Physics World.
Heat travels across a metal via the movement of electrons. In an insulator, however, there are no free charge carriers; instead, atomic vibrations (phonons) carry heat from hot regions to cool regions along a straight path. In some materials, when a magnetic field is applied, the phonons begin to move sideways; this is known as the phonon Hall effect. Quantised collective excitations of the spin structure, called magnons, can also do this, via the magnon Hall effect. A combined effect, the magnon–polaron Hall effect, occurs when magnons and phonons interact strongly and are deflected sideways together.
This transverse heat flow is usually explained by a quantum mechanical property known as Berry curvature. Yet in some materials, the effect is greater than Berry curvature alone can explain. In this research, an exceptionally large thermal Hall effect is recorded in MnPS₃, an insulating antiferromagnetic material with strong magnetoelastic coupling and a spin-flop transition. The thermal Hall angle remains large down to 4 K and cannot be accounted for by standard Berry curvature-based models.
This work provides an in-depth analysis of the role of the spin-flop transition in MnPS₃’s thermal properties and highlights the need for new theoretical approaches to understand magnon–phonon coupling and scattering. Materials with large thermal Hall effects could be used to control heat in nanoscale devices such as thermal diodes and transistors.
Topological insulators are materials that are insulating in the bulk, yet exhibit conductive states on their surface at frequencies within the bulk bandgap. These surface states are topologically protected, meaning they cannot be easily disrupted by local perturbations. In general, a material of n dimensions can host (n−1)-dimensional topological boundary states. If the symmetry protecting these states is further broken, a bandgap can open between the (n−1)-dimensional states, enabling the emergence of (n−2)-dimensional topological states. For example, a 3D material can host 2D protected surface states, and breaking additional symmetry can create a bandgap between these surface states, allowing for protected 1D edge states. A material undergoing such a process is known as a higher-order topological insulator. In general, higher-order topological states appear in dimensions one lower than the parent topological phase due to the further unit-cell symmetry reduction. This requires at least a 2D lattice for second-order states, with the maximal order in 3D systems being three.
The researchers here introduce a new method for repeatedly opening the bandgap between topological states and generating new states within those gaps in an unbounded manner – without breaking symmetries or reducing dimensions. Their approach creates hierarchical topological insulators by repositioning domain walls between different topological regions. This process opens bandgaps between original topological states while preserving symmetry, enabling the formation of new hierarchical states within the gaps. Using one‑ and two‑dimensional Su–Schrieffer–Heeger models, they show that this procedure can be repeated to generate multiple, even infinite, hierarchical levels of topological states, exhibiting fractal-like behavior reminiscent of a Matryoshka doll. These higher-level states are characterized by a generalized winding number that extends conventional topological classification and maintains bulk-edge correspondence across hierarchies.
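The Su–Schrieffer–Heeger model the authors build on is simple enough to diagonalize directly. The sketch below covers only the plain SSH chain, not the hierarchical domain-wall construction, and shows the model's hallmark: a pair of mid-gap edge states in the topological phase:

```python
# Sketch: zero-energy edge states of a finite SSH chain (the standard
# model, not the authors' hierarchical construction). With weak intracell
# hopping t1 < t2, the open chain is topological and hosts two edge modes.
import numpy as np

def ssh_hamiltonian(n_cells, t1, t2):
    """Tight-binding SSH Hamiltonian with open boundaries:
    alternating hoppings t1 (intracell) and t2 (intercell)."""
    n = 2 * n_cells
    H = np.zeros((n, n))
    for i in range(n - 1):
        t = t1 if i % 2 == 0 else t2
        H[i, i + 1] = H[i + 1, i] = t
    return H

H = ssh_hamiltonian(n_cells=20, t1=0.3, t2=1.0)
energies = np.linalg.eigvalsh(H)

# Two states sit exponentially close to zero energy, inside the bulk
# gap of width 2*|t2 - t1|; they are localized at the chain's two ends.
n_edge = int(np.sum(np.abs(energies) < 1e-3))
print(n_edge)  # 2
```

Moving a domain wall between the two dimerization patterns, as in the paper, traps further mid-gap states at the wall; the same diagonalization approach reveals those too.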
The researchers confirm the existence of second‑ and third-level domain‑wall and edge states and demonstrate that these states remain robust against perturbations. Their approach is scalable to higher dimensions and applicable not only to quantum systems but also to classical waves such as phononics. This broadens the definition of topological insulators and provides a flexible way to design complex networks of protected states. Such networks could enable advances in electronics, photonics, and phonon‑based quantum information processing, as well as engineered structures for vibration control. The ability to design complex, robust, and tunable hierarchical topological states could lead to new types of waveguides, sensors, and quantum devices that are more fault-tolerant and programmable.
The boundary between a substance’s liquid and solid phases may not be as clear-cut as previously believed. A new state of matter that is a hybrid of both has emerged in research by scientists at the University of Nottingham, UK, and the University of Ulm, Germany, and they say the discovery could have applications in catalysis and other thermally activated processes.
In liquids, atoms move rapidly, sliding over and around each other in a random fashion. In solids, they are fixed in place. The transition between the two states, solidification, occurs when random atomic motion transitions to an ordered crystalline structure.
At least, that’s what we thought. Thanks to a specialist microscopy technique, researchers led by Nottingham’s Andrei Khlobystov found that this simple picture isn’t entirely accurate. In fact, liquid metal nanoparticles can contain stationary atoms – and as the liquid cools, their number and position play a significant role in solidification.
Some atoms remain stationary
The team used a method called spherical and chromatic aberration-corrected high-resolution transmission electron microscopy (Cc/Cs-corrected HRTEM) at the low-voltage SALVE instrument at Ulm to study melted metal nanoparticles (such as platinum, gold and palladium) deposited on an atomically thin layer of graphene. This carbon-based material acted as a sort of “hob” for heating the particles, says team member Christopher Leist, who was in charge of the HRTEM experiments. “As they melted, the atoms in the nanoparticles began to move rapidly, as expected,” Leist says. “To our surprise, however, we found that some atoms remained stationary.”
At high temperatures, these static atoms bind strongly to point defects in the graphene support. When the researchers used the electron beam from the transmission microscope to increase the number of these defects, the number of stationary atoms within the liquid increased, too. Khlobystov says that this had a knock-on effect on how the liquid solidified: when the stationary atoms are few in number, a crystal forms directly from the liquid and continues to grow until the entire particle has solidified. When their numbers increase, the crystallization process cannot take place and no crystals form.
“The effect is particularly striking when stationary atoms create a ring (corral) that surrounds and confines the liquid,” he says. “In this unique state, the atoms within the liquid droplet are in motion, while the atoms forming the corral remain motionless, even at temperatures well below the freezing point of the liquid.”
Unprecedented level of detail
The researchers chose to use Cc/Cs-corrected HRTEM in their study because minimizing spherical and chromatic aberrations through specialized hardware installed on the microscope enabled them to resolve single atoms in their images.
“Additionally, we can control both the energy of the electron beam and the sample temperature (the latter using MEMS-heated chip technology),” Khlobystov explains. “As a result, we can study metal samples at temperatures of up to 800 °C, even in a molten state, without sacrificing atomic resolution. We can therefore observe atomic behaviour during crystallization while actively manipulating the environment around the metal particles using the electron beam or by cooling the particles. This level of detail under such extreme conditions is unprecedented.”
Effect could be harnessed for catalysis
The Nottingham-Ulm researchers, who report their work in ACS Nano, say they obtained their results by chance while working on an EPSRC-funded project on 1-2 nm metal particles for catalysis applications. “Our approach involves assembling catalysts from individual metal atoms, utilizing on-surface phenomena to control their assembly and dynamics,” explains Khlobystov. “To gain this control, we needed to investigate the behaviour of metal atoms at varying temperatures and within different local environments on a support material.
“We suspected that the interplay between vacancy defects in the support and the sample temperature creates a powerful mechanism for controlling the size and structure of the metal particles,” he tells Physics World. “Indeed, this study revealed the fundamental mechanisms behind this process with atomic precision.”
The experiments were far from easy, he recalls, with one of the key challenges being to identify a thin, robust and thermally conductive support material for the metal. Happily, graphene meets all these criteria.
“Another significant hurdle to overcome was to be able to control the number of defect sites surrounding each particle,” he adds. “We successfully accomplished this by using the TEM’s electron beam not just as an imaging tool, but also as a means to modify the environment around the particles by creating defects.”
The researchers say they would now like to explore whether the effect can be harnessed for catalysis. To do this, Khlobystov says it will be essential to improve control over defect production and its scale. “We also want to image the corralled particles in a gas environment to understand how the phenomenon is influenced by reaction conditions, since our present measurements were conducted in a vacuum,” he adds.
Rob Farr is a theorist and computer modeller whose career has taken him down an unconventional path. He studied physics at the University of Cambridge, UK, from 1991 to 1994, staying on to do a PhD in statistical physics. But while many of his contemporaries then went into traditional research fields – such as quantum science, high-energy physics and photonic technologies – Farr got a taste for the food and drink manufacturing industry. It’s a multidisciplinary field in which Farr has worked for more than 25 years.
After leaving academia in 1998, his first stop was Unilever’s €13bn foods division. For two decades, latterly as a senior scientist, Farr guided R&D teams working across diverse lines of enquiry – “doing the science, doing the modelling”, as he puts it. Along the way, Farr worked on all manner of consumer products including ice-cream, margarine and non-dairy spreads, as well as “dry” goods such as bouillon cubes. There was also the occasional foray into cosmetics, skin creams and other non-food products.
As a theoretical physicist working in industrial-scale food production, Farr’s focus has always been on the materials science of the end-product and how it gets processed. “Put simply,” says Farr, “that means making production as efficient as possible – regarding both energy and materials use – while developing ‘new customer experiences’ in terms of food taste, texture and appearance.”
Ice-cream physics
One tasty multiphysics problem that preoccupied Farr for a good chunk of his time at Unilever is ice cream. It is a hugely complex material that Farr likens to a high-temperature ceramic, in the sense that the crystalline part of it is stored very near to the melting point of ice. “Equally, the non-ice phase contains fats,” he says, “so there’s all sorts of emulsion physics and surface science to take into consideration.”
Ice cream also has polymers in the mix, so theoretical modelling needs to incorporate the complex physics of polymer–polymer phase separation as well as polymer flow, or “rheology”, which contributes to the product’s texture and material properties. “Air is another significant component of ice cream,” adds Farr, “which means it’s a foam as well as an emulsion.”
As well as trying to understand how all these subcomponents interact, there’s also the thorny issue of storage. After it’s produced, ice cream is typically kept at low temperatures of about –25 °C – first in the factory, then in transit and finally in a supermarket freezer. But once that tub of salted-caramel or mint choc chip reaches a consumer’s home, it’s likely to be popped in the ice compartment of a fridge freezer at a much milder –6 or –7 °C.
Manufacturers therefore need to control how those temperature transitions affect the recrystallization of ice. This unwanted process can lead to phenomena like “sintering” (which makes a harder product) and “ripening” (which produces big ice crystals that can be detected in the mouth and detract from the creamy texture).
“Basically, the whole panoply of soft-matter physics comes into play across the production, transport and storage of ice cream,” says Farr. “Figuring out what sort of materials systems will lead to better storage stability or a more consistent product texture is a non-trivial question, given that the global market for ice cream is worth in excess of €100bn annually.”
A shot of coffee?
After almost 20 years working at Unilever, in 2017 Farr took up a role as coffee science expert at JDE Peet’s, the Dutch multinational coffee and tea company. Switching from the chilly depths of ice cream science to the dark arts of coffee production and brewing might seem like a steep career phase change, but the physics of the former provides a solid bridge to the latter.
The overlap is evident, for example, in how instant coffee gets freeze-dried – a low-temperature dehydration process that manufacturers use to extend the shelf-life of perishable materials and make them easier to transport. In the case of coffee, freeze drying (or lyophilization, to use its technical name) also helps to retain flavour and aromas.
After roasting and grinding the raw coffee beans, manufacturers extract a coffee concentrate using high pressure and water. This extract is then frozen, ground up and placed in a vacuum well below 0 °C. A small amount of heat is applied to sublime the ice away and remove the remaining water from the non-ice phase.
The quality of the resulting freeze-dried instant coffee is better than ordinary instant coffee. However, freeze-drying is also a complex and expensive process, which manufacturers seek to fine-tune by implementing statistical methods to optimize, for example, the amount of energy consumed during production.
Such approaches involve interpolating the gaps between existing experimental data sets, which is where a physics mind-set comes in. “If you want to study a parameter space that’s not been explored before,” says Farr, “the only way to do that is to simulate the core processes using fundamental physics.”
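As an illustration of the kind of fundamental-physics simulation Farr describes, the toy model below treats freeze-drying as a quasi-steady sublimation front: heat conducted through the growing dried layer drives ice at the front to sublime. Every parameter value here is an invented, order-of-magnitude assumption for illustration, not data from Unilever or JDE Peet’s:

```python
# Toy 1D freeze-drying model: heat conducted through the growing dried
# layer sublimes ice at a receding front. All parameters are illustrative
# guesses, not a manufacturer's proprietary values.

RHO_ICE = 920.0   # density of frozen extract, kg/m^3 (assumed)
DH_SUB = 2.84e6   # latent heat of sublimation of ice, J/kg
K_DRY = 0.05      # thermal conductivity of dried layer, W/(m K) (assumed)
DT = 30.0         # temperature difference driving sublimation, K (assumed)

def drying_time(thickness, k=K_DRY, dT=DT):
    """Quasi-steady estimate: t = rho * dH * L^2 / (2 k dT), in seconds."""
    return RHO_ICE * DH_SUB * thickness**2 / (2 * k * dT)

# Sweep an unexplored corner of parameter space, as Farr describes:
for L in (2e-3, 5e-3, 10e-3):
    hours = drying_time(L) / 3600
    print(f"slab thickness {L*1e3:.0f} mm -> roughly {hours:.1f} h to dry")
```

Because drying time scales with the square of slab thickness in this model, halving the thickness quarters the drying time – the sort of relationship a simulated parameter sweep can expose before any pilot-plant trial.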
Beyond the production line, Farr has also sought to make coffee more stable when it’s stored at home. Sustainability is the big driver here: JDE Peet’s has committed to make all its packaging compostable, recyclable or reusable by 2030. “Shelf-life prediction has been a big part of this R&D initiative,” he explains. “The work entails using materials science and the physics of mass transfer to develop next-generation packaging and container systems.”
Line of sight
After eight years unpacking the secrets of coffee physics at JDE Peet’s, Farr was given the option to relocate to the Netherlands in mid-2025 as part of a wider reorganization of the manufacturer’s corporate R&D function. However, he decided to stay put in Oxford and is now deciding between another role in the food-manufacturing sector and a move into a new area of research, such as nuclear energy, or even education.
Cool science “The whole panoply of soft-matter physics comes into play across the production, transport and storage of ice-cream,” says industrial physicist Rob Farr. (Courtesy: London Institute for Mathematical Sciences)
Farr believes he gained a lot from his time at JDE Peet’s. As well as studying a wide range of physics problems, he also benefited from the company’s rigorous approach to R&D, whereby projects are regularly assessed for profitability and quickly killed off if they don’t make the cut. Such prioritization avoids wasted effort and investment, but it also demands agility from staff scientists, who have to build long-term research strategies against a project landscape in constant flux.
To thrive in that setting, Farr says collaboration and an open mind are essential. “A senior scientist needs to be someone who colleagues come to informally to discuss their technical challenges,” he says. “You can then find the scientific question which underpins seemingly disparate problems and work with colleagues to deliver commercially useful solutions.” For Farr, it’s a self-reinforcing dynamic. “As more people come to you, the more helpful you become – and I love that way of working.”
What Farr calls “line-of-sight” is another distinctive feature of industrial R&D in food materials. “Maybe you’re only building one span of a really long bridge,” he notes, “but when you can see the process end-to-end, as well as your part in it, that is a fantastic motivator.” Indeed, Farr believes that for physicists who want a job doing something useful, the physics of food materials makes a great career. “There are,” he concludes, “no end of intriguing and challenging research questions.”
Physicists in the UK have succeeded in routing and teleporting entangled states of light between two four-user quantum networks – an important milestone in the development of scalable quantum communications. Led by Mehul Malik and Natalia Herrera Valencia of Heriot-Watt University in Edinburgh, Scotland, the team achieved this feat thanks to a new method that uses light-scattering processes in an ordinary optical fibre to program a circuit. This approach, which is radically different from conventional methods based on photonic chips, allows the circuit to function as a programmable entanglement router that can implement several different network configurations on demand.
The team performed the experiments using commercially available optical fibres, which are multi-mode structures that scatter light via random linear optical processes. In simple terms, Herrera Valencia explains that this means the light tends to ricochet chaotically through the fibres along hundreds of internal pathways. While this effect can scramble entanglement, researchers at the Institut Langevin in Paris, France, had previously found that the scrambling can be calculated by analysing how the fibre transmits light. What is more, the light-scattering processes in such a medium can be harnessed to make programmable optical circuits – which is exactly what Malik, Herrera Valencia and colleagues did.
“Top-down” approach
The researchers explain that this “top-down” approach simplifies the circuit’s architecture because it separates the layer where the light is controlled from the layer in which it is mixed. Using waveguides for transporting and manipulating the quantum states of light also reduces optical losses. The result is a reconfigurable multi-port device that can distribute quantum entanglement between many users simultaneously in multiple patterns, switching between different channels (local connections, global connections or both) as required.
A further benefit is that the channels can be multiplexed, allowing many quantum processors to access the system at the same time. The researchers say this is similar to multiplexing in classical telecommunications networks, which makes it possible to send huge amounts of data through a single optical fibre using different wavelengths of light.
Access to a large number of modes
Although controlling and distributing entangled states of light is key for quantum networks, Malik says it comes with several challenges. One of these is that conventional methods based on photonic chips cannot be scaled up easily. They are also very sensitive to imperfections in how they’re made. In contrast, the waveguide-based approach developed by the Heriot-Watt team “opens up access to a large number of modes, providing significant improvements in terms of achievable circuit size, quality and loss,” Malik tells Physics World, adding that the approach also fits naturally with existing optical fibre infrastructures.
Gaining control over the complex scattering process inside a waveguide was not easy, though. “The main challenge was the learning curve and understanding how to control quantum states of light inside such a complex medium,” Herrera Valencia recalls. “It took time and iteration, but we now have the precise and reconfigurable control required for reliable entanglement distribution, and even more so for entanglement swapping, which is essential for scalable networks.”
While the Heriot-Watt team used the technique to demonstrate flexible quantum networking, Malik and Herrera Valencia say it might also be used for implementing large-scale photonic circuits. Such circuits could have many applications, ranging from machine learning to quantum computing and networking, they add.
Looking ahead, the researchers, who report their work in Nature Photonics, say they are now aiming to explore larger-scale circuits that can operate on more photons and light modes. “We would also like to take some of our network technology out of the laboratory and into the real world,” says Malik, adding that Herrera Valencia is leading a commercialization effort in that direction.
Multimodal monitoring Pressure and strain sensors on a clinical trial volunteer undergoing an ultrasound scan (left). Snapshot image of the ultrasound video recording (right). (Courtesy: Yap et al., Sci. Adv. 11 eady2661)
The ability to continuously monitor and interpret foetal movement patterns in the third trimester of a pregnancy could help detect any potential complications and improve foetal wellbeing. Currently, however, such assessment of foetal movement is performed only periodically, with an ultrasound exam at a hospital or clinic.
A lightweight, easily wearable, adhesive patch-based sensor developed by engineers and obstetricians at Monash University in Australia may change this. The patches, two of which are worn on the abdomen, can detect foetal movements such as kicking, waving, hiccups, breathing, twitching, and head and trunk motion.
Reduced foetal movement can be associated with potential impairment in the central nervous system and musculoskeletal system, and is a common feature observed in pregnancies that end in foetal death and stillbirth. A foetus compromised in utero may reduce movements as a compensatory strategy to lower oxygen consumption and conserve energy.
To help identify foetuses at risk of complications, the Monash team developed an artificial intelligence (AI)-powered wearable pressure–strain combo sensor system that continuously and accurately detects foetal movement-induced motion in the mother’s abdominal skin. As reported in Science Advances, the “band-aid”-like sensors can discriminate between foetal and non-foetal movement with over 90% accuracy.
The system comprises two soft, thin and flexible patches designed to conform to the abdomen of a pregnant woman. One patch incorporates an octagonal gold nanowire-based strain sensor (the “Octa” sensor); the other is an interdigitated electrode-based pressure sensor.
Pressure and strain combo Photograph of the sensors on a pregnant mother (A). Exploded illustration of the foetal kicks strain sensor (B) and the pressure sensor (C). Dimensions of the strain (D) and pressure (E) sensors. (Courtesy: Yap et al., Sci. Adv. 11 eady2661)
The patches feature a soft polyimide-based flexible printed circuit (FPC) that integrates a thin lithium polymer battery and various integrated circuit chips, including a Bluetooth radiofrequency system for reading the sensor’s electrical resistance, storing data and communicating with a smartphone app. Each patch is encapsulated with kinesiology tape and sticks to the abdomen using a medical double-sided silicone adhesive.
The Octa sensor connects to the primary device through a separate FPC connector, enabling easy replacement after each study. The pressure sensor is mounted on the silicone adhesive, connecting with the interdigitated electrode beneath the primary device. The Octa and pressure sensor patches are lightweight (about 3 g) and compact, measuring 63 × 30 × 4 mm and 62 × 28 × 2 mm, respectively.
Trialling the device
The researchers validated their foetal movement monitoring system via comparison with simultaneous ultrasound exams, examining 59 healthy pregnant women at Monash Health. Each participant had the pressure sensor attached to the area of their abdomen where they felt the most vigorous foetal movements, typically in the lower quadrant, while the strain sensor was attached to the region closest to foetal limbs. An accelerometer placed on the participant’s chest captured non-foetal movement data for signal denoising and training the machine-learning model.
Principal investigator Wenlong Cheng, now at the University of Sydney, and colleagues report that “the wearable strain sensor featured isotropic omnidirectional sensitivity, enabling detection of maternal abdominal [motion] over a large area, whereas the wearable pressure sensor offered high sensitivity with a small domain, advantageous for accurate localized foetal movement detection”.
The researchers note that the pressure sensor demonstrated higher sensitivity to movements directly beneath it compared with motion farther away, while the Octa sensor performed consistently across a wider sensing area. “The combination of both sensor types resulted in a substantial performance enhancement, yielding an overall AUROC [area under the receiver operating characteristic curve] accuracy of 92.18% in binary detection of foetal movement, illustrating the potential of combining diverse sensing modalities to achieve more accurate and reliable monitoring outcomes,” they write.
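For readers unfamiliar with the metric, AUROC can be computed directly from classifier scores as the probability that a randomly chosen positive example outscores a randomly chosen negative one. The sketch below uses invented labels and scores for illustration, not the study’s data:

```python
# Minimal AUROC computation from classifier scores, to unpack the 92.18%
# figure quoted above. Labels and scores here are made-up illustrative
# data, not measurements from the Monash study.

def auroc(labels, scores):
    """AUROC = probability that a random positive outscores a random
    negative (the Mann-Whitney U statistic), with ties counted as 0.5."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Labels: 1 = foetal movement seen on ultrasound, 0 = no movement.
labels = [1, 1, 1, 0, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.4, 0.35, 0.1, 0.7, 0.6, 0.2]  # model outputs
print(f"AUROC = {auroc(labels, scores):.4f}")  # 0.9375 for this toy data
```

An AUROC of 0.5 corresponds to guessing at random and 1.0 to perfect separation, which is why the reported 92.18% indicates strong discrimination between foetal and non-foetal movement.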
In a press statement, co-author Fae Marzbanrad explains that the device’s strength lies in a combination of soft sensing materials, intelligent signal processing and AI. “Different foetal movements create distinct strain patterns on the abdominal surface, and these are captured by the two sensors,” she says. “The machine-learning system uses the signals to detect when movement occurs while cancelling maternal movements.”
The lightweight and flexible device can be worn by pregnant women for long periods without disrupting daily life. “By integrating sensor data with AI, the system automatically captures a wider range of foetal movements than existing wearable concepts while staying compact and comfortable,” Marzbanrad adds.
The next steps towards commercialization of the sensors will include large-scale clinical studies in out-of-hospital settings, to evaluate foetal movements and investigate the relationship between movement patterns and pregnancy complications.
Since the beginning of radiation therapy, almost all treatments have been delivered with the patient lying on a table while the beam rotates around them. But a resurgence in upright patient positioning is changing that paradigm. Novel machines for delivering proton therapy, very high-energy electron (VHEE) therapy and FLASH therapy are often too large to rotate around the patient, which limits access to these treatments. By instead rotating the patient, these previously hard-to-access beams could become mainstream in the future.
Join leading clinicians and experts as they discuss how this shift in patient positioning is enabling exploration of new treatment geometries and supporting the development of advanced future cancer therapies.
L-R Serdar Charyyev, Eric Deutsch, Bill Loo, Rock Mackie
Novel beams covered and their representative speaker
Serdar Charyyev – Proton Therapy – Clinical Assistant Professor at Stanford University School of Medicine
Eric Deutsch – VHEE FLASH – Head of Radiotherapy at Gustave Roussy
Bill Loo – FLASH Photons – Professor of Radiation Oncology at Stanford Medicine
Rock Mackie – Emeritus Professor at University of Wisconsin and Co-Founder and Chairman of Leo Cancer Care
Be curious As CTO of a quantum tech company, Andrew Lamb values evidence-based decision making and being surrounded by experts. (Courtesy: Delta.g)
What skills do you use every day in your job?
A quantum sensor is a combination of lots of different parts working together in harmony: a sensor head containing the atoms and isolating them from the environment; a laser system to probe the quantum structure and manipulate atomic states; electronics to drive the power and timing of a device; and software to control everything and interpret the data. As the person building, developing and maintaining these devices you need to have expertise across all these areas. In addition to these skills, as the CTO my role also requires me to set the company’s technical priorities, determine the focus of R&D activities and act as the top technical authority in the firm.
In a developing field like quantum metrology, evidence-based decision making is crucial as you critically assess information, disregarding what is irrelevant and making an informed choice – especially when the “right answer” may not be obvious for months or even years. Challenges arise that may never have been solved before, and the best way to do so is to dive deep into the “why and how” something happens. Once the root cause is identified a creative solution then needs to be found; whether it is something brand new, or implementing an approach from an entirely different discipline.
What do you like best and least about your job?
The best thing about my job is the way in which it enables me to grow my knowledge and understanding of a wide variety of fields, while also providing me opportunities for creative problem solving. When you surround yourself with people who are experts in their field, there is no end to the opportunities to learn. Before co-founding Delta.g I was a researcher at the University of Birmingham where I learnt my technical skills. Moving into a start-up, we built a multidisciplinary team to address the operational, regulatory and technical barriers to establish a disruptive product in the marketplace. The diversity created within our company has afforded a greater pool of experts to learn from.
As the CTO, my role sits at the intersection of the technical and the commercial within the business. That means it is my responsibility to translate commercial milestones into a scientific plan, while also explaining our progress to non-experts. This can be challenging and quite stressful at times – particularly when I need to describe our scientific achievements in a way that truly reflects our advances, while still being accessible.
What do you know today that you wish you knew when you were starting out in your career?
For a long time, I didn’t know what direction I wanted to take, and I used to worry that the lack of a clear purpose would hold me back. Today I know that it doesn’t. Instead of fixating on finding a perfect path early on, it’s far more valuable to focus on developing skills that open doors. Whether those skills are technical, managerial or commercial, no knowledge is ever wasted. I’m still surprised by how often something I learned as far back as GCSE ends up being useful in my work now.
I also wish I had understood just how important it is to stay open to new opportunities. Looking back, every pivotal point in my career – switching from civil engineering to a physics degree, choosing certain undergraduate modules, applying for unexpected roles, even co-founding Delta.g – came from being willing to make a shift when an opportunity appeared. Being flexible and curious matters far more than having everything mapped out from the beginning.
Despite not being close to the frontline of Russia’s military assault on Ukraine, life at the Ivano-Frankivsk National Technical University of Oil and Gas is far from peaceful. “While we continue teaching and research, we operate under constant uncertainty – air raid alerts, electricity outages – and the emotional toll on staff and students,” says Lidiia Davybida, an associate professor of geodesy and land management.
Last year, the university became the target of a Russian missile strike, causing extensive damage to buildings that has still not been fully repaired – although, fortunately, there were no casualties. The university also continues to lose staff and students to the war effort – some of whom will tragically never return – while new student numbers dwindle as many school graduates leave Ukraine to study abroad.
Despite these major challenges, Davybida and her colleagues remain resolute. “We adapt – moving lectures online when needed, adjusting schedules, and finding ways to keep research going despite limited opportunities and reduced funding,” she says.
Resolute research
Davybida’s research focuses on environmental monitoring using geographic information systems (GIS), geospatial analysis and remote sensing. She has been using these techniques to monitor the devastating impact that the war is having on the environment and its significant contribution to climate change.
In 2023 she published results from using Sentinel-5P satellite data and Google Earth Engine to monitor the air quality impacts of war on Ukraine (IOP Conf. Ser.: Earth Environ. Sci. 1254 012112). As with the COVID-19 lockdowns worldwide, her results reveal that levels of common pollutants such as carbon monoxide, nitrogen dioxide and sulphur dioxide were, on average, down from pre-invasion levels. This reflects the temporary disruption to economic activity that war has brought on the country.
Wider consequences Ukrainian military, emergency services and volunteers work together to rescue people from a large flooded area in Kherson on 8 June 2023. Two days earlier, the Russian army blew up the dam of the Kakhovka hydroelectric power station, meaning about 80 settlements in the flood zone had to be evacuated. (Courtesy: Sergei Chuzavkov/SOPPA Images/Shutterstock)
More worrying, from an environmental and climate perspective, were the huge concentrations of aerosols, smoke and dust in the atmosphere. “High ozone concentrations damage sensitive vegetation and crops,” Davybida explains. “Aerosols generated by explosions and fires may carry harmful substances such as heavy metals and toxic chemicals, further increasing environmental contamination.” She adds that these pollutants can alter sunlight absorption and scattering, potentially disrupting local climate and weather patterns, and contributing to long-term ecological imbalances.
A significant toll has been wrought by individual military events too. A prime example is Russia’s destruction of the Kakhovka Dam in southern Ukraine in June 2023. An international team – including Ukrainian researchers – recently attempted to quantify this damage by combining on-the-ground field surveys, remote-sensing data and hydrodynamic modelling, a tool used to predict water flow and pollutant dispersion.
The results of this work are sobering (Science 387 1181). Though 80% of the ecosystem is expected to re-establish itself within five years, the dam’s destruction released as much as 1.7 cubic kilometres of sediment contaminated by a host of persistent pollutants, including nitrogen, phosphorus and 83,000 tonnes of heavy metals. Discharging this toxic sludge across the land and waterways will have unknown long-term environmental consequences for the region, as the contaminants could be spread by future floods, the researchers concluded (figure 1).
1 Dams and rivers at risk
This map shows areas of Ukraine affected or threatened by dam destruction in military operations. Arabic numbers 1 to 6 indicate rivers: Irpen, Oskil, Inhulets, Dnipro, Dnipro-Bug Estuary and Dniester, respectively. Roman numbers I to VII indicate large reservoir facilities: Kyiv, Kaniv, Kremenchuk, Kaminske, Dnipro, Kakhovka and Dniester, respectively. Letters A to C indicate nuclear power plants: Chornobyl, Zaporizhzhia and South Ukraine, respectively.
Dangerous data
A large part of the reason for the researchers’ uncertainty, and indeed more general uncertainty in environmental and climate impacts of war, stems from data scarcity. It is near-impossible for scientists to enter an active warzone to collect samples and conduct surveys and experiments. Environmental monitoring stations also get damaged and destroyed during conflict, explains Davybida – a wrong she is attempting to right in her current work. Many efforts to monitor, measure and hopefully mitigate the environmental and climate impact of the war in Ukraine are therefore less direct.
In 2022, for example, climate-policy researcher Mathijs Harmsen from the PBL Netherlands Environmental Assessment Agency and international collaborators decided to study the global energy crisis (which was sparked by Russia’s invasion of Ukraine) to look at how the war will alter climate policy (Environ. Res. Lett. 19 124088).
They did this by feeding the most recent energy price, trade and policy data (up to May 2023) into an integrated assessment model that simulates the environmental consequences of human activities worldwide. They then imposed different potential scenarios and outcomes and let it run to 2030 and 2050. Surprisingly, all scenarios led to a global reduction of 1–5% in carbon dioxide emissions by 2030, largely because trade barriers increase fossil-fuel prices, which in turn drives greater uptake of renewables.
But even though the sophisticated model represents the global energy system in detail, some factors are hard to incorporate and some actions can transform the picture completely, argues Harmsen. “Despite our results, I think the net effect of this whole war is a negative one, because it doesn’t really build trust or add to any global collaboration, which is what we need to move to a more renewable world,” he says. “Also, the recent intensification of Ukraine’s ‘kinetic sanctions’ [attacks on refineries and other fossil fuel infrastructure] will likely have a larger effect than anything we explored in our paper.”
Elsewhere, Toru Kobayakawa was, until recently, working for the Japan International Cooperation Agency (JICA), leading the Ukraine support team. Kobayakawa used a non-standard method to more realistically estimate the carbon footprint of reconstructing Ukraine when the war ends (Environ. Res.: Infrastruct. Sustain. 5 015015). The Intergovernmental Panel on Climate Change (IPCC) and other international bodies only account for carbon emissions within the territorial country. “The consumption-based model I use accounts for the concealed carbon dioxide from the production of construction materials like concrete and steel imported from outside of the country,” he says.
Using Eora26, an open-source database that tracks financial flows between countries’ major economic sectors in simple input–output tables, Kobayakawa calculated that Ukraine’s post-war reconstruction will generate 741 million tonnes of carbon dioxide equivalent over 10 years. That is 4.1 times Ukraine’s pre-war annual carbon dioxide emissions, or the combined annual emissions of Germany and Austria.
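The consumption-based accounting behind such estimates is an environmentally extended input–output (Leontief) model. The two-sector table and emission intensities below are invented for illustration; the real calculation draws on Eora26’s full multi-region tables:

```python
# Sketch of consumption-based emissions accounting with a Leontief
# input-output model. The 2x2 table and intensities are invented for
# illustration, not taken from Eora26 or Kobayakawa's study.

def leontief_emissions(A, y, f):
    """Total embodied emissions = f . (I - A)^-1 . y for two sectors.
    A: technical coefficients, y: final demand, f: emission intensities."""
    # Invert the 2x2 matrix (I - A) by hand.
    a, b = 1 - A[0][0], -A[0][1]
    c, d = -A[1][0], 1 - A[1][1]
    det = a * d - b * c
    x0 = (d * y[0] - b * y[1]) / det   # total output, sector 0
    x1 = (-c * y[0] + a * y[1]) / det  # total output, sector 1
    return f[0] * x0 + f[1] * x1

# Sectors: 0 = construction, 1 = materials (cement, steel).
A = [[0.1, 0.0],   # construction buys little from itself
     [0.4, 0.2]]   # construction demands lots of materials
y = [100.0, 0.0]   # reconstruction spend lands entirely on construction
f = [0.5, 3.0]     # emission intensity per unit of output (invented)
print(f"embodied emissions: {leontief_emissions(A, y, f):.1f} units")
```

In this toy economy most of the embodied emissions come from the materials sector even though final demand falls entirely on construction – the “concealed” carbon that Kobayakawa describes.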
However, as with most war-related findings, these figures come with a caveat. “Our input–output model doesn’t take into account the current situation,” notes Kobayakawa. “It is the worst-case scenario.” Nevertheless, the research has provided useful insights, such as that the Ukrainian construction industry will account for 77% of total emissions.
“Their construction industry is notorious for inefficiency, needing frequent rework, which incurs additional costs, as well as additional carbon-dioxide emissions,” he says. “So, if they can improve efficiency by modernizing construction processes and implementing large-scale recycling of construction materials, that will contribute to reducing emissions during the reconstruction phase and ensure that they build back better.”
Military emissions gap
As the experiences of Davybida, Harmsen and Kobayakawa show, cobbling together relevant and reliable data in the midst of war is a significant challenge, from which only limited conclusions can be drawn. Researchers and policymakers need a fuller view of the environmental and climate cost of war if they are to improve matters once a conflict ends.
At present, reporting military emissions is voluntary, so data are often absent or incomplete – but gathering such data is vital. According to a 2022 estimate extrapolated from the small number of nations that do share their data, the total military carbon footprint is approximately 5.5% of global emissions. This would make the world’s militaries the fourth biggest carbon emitter if they were a nation.
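That ranking claim can be sanity-checked with back-of-envelope arithmetic. The emitter figures below are round, approximate values chosen for illustration, not numbers from the 2022 estimate itself:

```python
# Back-of-envelope check of the "fourth biggest emitter" claim, using
# round figures for annual emissions (GtCO2e, approximate values -
# assumptions for illustration, not data from the cited estimate).

GLOBAL = 55.0  # total global greenhouse-gas emissions, GtCO2e (approx.)
emitters = {"China": 12.7, "US": 6.0, "India": 3.5, "Russia": 2.5}

military = 0.055 * GLOBAL  # the 5.5% estimate, about 3 GtCO2e
rank = 1 + sum(1 for v in emitters.values() if v > military)
print(f"military footprint ~{military:.1f} GtCO2e -> would rank #{rank}")
```

With these rough numbers only China, the US and India emit more, which is consistent with the article’s “fourth biggest emitter” framing.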
The Military Emissions Gap website is an attempt to fill this gap. “We hope that the UNFCCC picks up on this and mandates transparent and visible reporting of military emissions,” says Benjamin Neimark of Queen Mary University of London (figure 2).
2 Closing the data gap
(Reused with permission from Neimark et al. 2025 War on the Climate: A Multitemporal Study of Greenhouse Gas Emissions of the Israel–Gaza Conflict. Available at SSRN)
Current United Nations Framework Convention on Climate Change (UNFCCC) greenhouse-gas emissions reporting obligations do not include all the possible types of conflict emissions, and there is no commonly agreed methodology or scope on how different countries collect emissions data. In a recent publication, War on the Climate: a Multitemporal Study of Greenhouse Gas Emissions of the Israel–Gaza Conflict, Benjamin Neimark et al. came up with this framework, using the UNFCCC’s existing protocols. These reporting categories cover militaries and armed conflicts, and aim to highlight previously “hidden” emissions.
Measuring the destruction
Beyond plugging the military emissions gap, Neimark is also involved in developing and testing methods that he and other researchers can use to estimate the overall climate impact of war. Building on foundational work from his collaborator, Dutch climate specialist Lennard de Klerk – who developed a methodology for identifying, classifying and providing ways of estimating the various sources of emissions associated with the Russia–Ukraine war – Neimark and colleagues are trying to estimate the greenhouse-gas emissions from the Israel–Gaza conflict.
Their studies encompass pre-conflict preparation, the conflict itself and post-conflict reconstruction. “We were working with colleagues who were doing similar work in Ukraine, but every war is different,” says Neimark. “In Ukraine, they don’t have large tunnel networks, or they didn’t, and they don’t have this intensive, incessant onslaught of air strikes from carbon-intensive F16 fighter aircraft.” Some of these factors, like the carbon impact of Hamas’ underground maze of tunnels under Gaza, seem unquantifiable, but Neimark has found a way.
“There’s some pretty good data for how big these are in terms of height, the amount of concrete, how far down they’re dug and how thick they are,” says Neimark. “It’s just the length we had to work out based on reported documentation.” Finding the total amount of concrete and steel used in these tunnels involved triangulating open-source information with media reports to finalize an estimate of the dimensions of these structures. Standard emission factors could then be applied to obtain the total carbon emissions. According to data from Neimark’s Confronting Military Greenhouse Gas Emissions report, the carbon emissions from construction of concrete infrastructure by both Israel and Hamas were more than the annual emissions of 33 individual countries and territories (figure 3).
3 Climate change and the Gaza war
(Reused with permission from Neimark et al. 2024 Confronting Military Greenhouse Gas Emissions, Interactive Policy Brief, London, UK. Available from QMUL.)
Data from Benjamin Neimark, Patrick Bigger, Frederick Otu-Larbi and Reuben Larbi’s Confronting Military Greenhouse Gas Emissions report estimates the carbon emissions of the war in Gaza for three distinct periods: direct war activities; large-scale war infrastructure; and future reconstruction.
The impact of Hamas’ tunnels and Israel’s “iron wall” border fence are just two of many pre-war activities that must be factored in to estimate the Israel–Gaza conflict’s climate impact. Then, the huge carbon cost of the conflict itself must be calculated, including, for example, bombing raids, reconnaissance flights, tanks and other vehicles, cargo flights and munitions production.
Gaza’s eventual reconstruction must also be included, which makes up a big proportion of the total impact of the war, as Kobayakawa’s Ukraine reconstruction calculations showed. The United Nations Environment Programme (UNEP) has been systematically studying and reporting on “Sustainable debris management in Gaza” as it tracks debris from damaged buildings and infrastructure in Gaza since the outbreak of the conflict in October 2023. Alongside estimating the amounts of debris, UNEP also models different management scenarios – ranging from disposal to recycling – to evaluate the time, resource needs and environmental impacts of each option.
Visa restrictions and the security situation have prevented UNEP staff from entering the Gaza strip to undertake environmental field assessments to date. “While remote sensing can provide a valuable overview of the situation … findings should be verified on the ground for greater accuracy, particularly for designing and implementing remedial interventions,” says a UNEP spokesperson. They add that when it comes to the issue of contamination, UNEP needs “confirmation through field sampling and laboratory analysis” and that UNEP “intends to undertake such field assessments once conditions allow”.
The main risk from hazardous debris – which is likely to make up about 10–20% of the total debris – arises when it is mixed with and contaminates the rest of the debris stock. “This underlines the importance of preventing such mixing and ensuring debris is systematically sorted at source,” adds the UNEP spokesperson.
The ultimate cost
With all these estimates, and adopting a Monte Carlo analysis to account for uncertainties, Neimark and colleagues concluded that, from the first 15 months of the Israel–Gaza conflict, total carbon emissions were 32 million tonnes, which is huge given that the territory has a total area of just 365 km². The number also continues to rise.
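The Monte Carlo step propagates the uncertainty in each emissions source into an uncertainty on the total: sample each source within its plausible range many times, sum, and read off the spread. A minimal sketch of the approach – the source categories and ranges here are invented for illustration, not taken from the study:

```python
import random

# Hypothetical emission sources, each with a (low, high) range in MtCO2e.
# These ranges are placeholders, not values from Neimark and colleagues.
sources = {
    "direct war activities": (0.5, 1.5),
    "war infrastructure": (2.0, 6.0),
    "reconstruction": (20.0, 35.0),
}

random.seed(42)
N = 100_000
totals = []
for _ in range(N):
    # Sample each source uniformly within its uncertainty range, then sum
    totals.append(sum(random.uniform(lo, hi) for lo, hi in sources.values()))

totals.sort()
mean = sum(totals) / N
p5, p95 = totals[int(0.05 * N)], totals[int(0.95 * N)]
print(f"Total: {mean:.1f} MtCO2e (90% interval {p5:.1f}-{p95:.1f})")
```

In a real analysis the sampling distributions and correlations between sources would be chosen to reflect the data, but the principle – a central estimate with an honest interval around it – is the same.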
Rubble and ruins Khan Younis in the Gaza Strip on 11 February 2025, showing the widespread damage to buildings and infrastructure. (Courtesy: Shutterstock/Anas Mohammed)
Why does this number matter? When lives are being lost in Gaza, Ukraine, and across Sudan, Myanmar and other regions of the world, calculating the environmental and climate cost of war might seem like something only worth bothering about when the fighting stops.
But doing so even while conflicts are taking place can help protect important infrastructure and land, avoid environmentally disastrous events, and ensure that the long rebuild, wherever the conflict may be happening, is informed by science. The UNEP spokesperson says that it is important to “systematically integrate environmental considerations into humanitarian and early recovery planning from the outset” rather than treating the environment as an afterthought. They highlight that governments should “embed it within response plans – particularly in areas where it can directly impact life-saving activities, such as debris clearance and management”.
With Ukraine still in the midst of war, it seems right to leave the final word to Davybida. “Armed conflicts cause profound and often overlooked environmental damage that persists long after the fighting stops,” she says. “Recognizing and monitoring these impacts is vital to guide practical recovery efforts, protect public health, prevent irreversible harm to ecosystems and ensure a sustainable future.”
I used to set myself the challenge every December of predicting what might happen in physics over the following year. Gazing into my imaginary crystal ball, I tried to speculate on the potential discoveries, the likely trends, and the people who might make the news over the coming year. It soon dawned on me that making predictions in physics is a difficult, if not futile, task.
Apart from space missions pencilled in for launch on set dates, or particle colliders or light sources due to open, so much in science is simply unknown. That uncertainty of science is, of course, also its beauty; if you knew what was out there, looking for it wouldn’t be quite as much fun. So if you’re wondering what’s in store for 2026, I don’t know – you’ll just have to read Physics World to find out.
Having said that – and setting aside the insane upheaval going on in US science – this year’s Physics World Live series will give you some sense of what’s hot in physics right now, at least as far as we here at Physics World headquarters are concerned.
The first online panel discussion will be on quantum metrology – a burgeoning field that seeks to ensure companies and academics can test, validate and commercialize new quantum tech. Yes, the International Year of Quantum Science and Technology officially ends with a closing ceremony in Ghana in February, but the impact of quantum physics will continue to reverberate throughout 2026.
Another of our online panels will be on medical physics, bringing together the current and two past editors-in-chief of Physics in Medicine & Biology. Published by IOP Publishing on behalf of the Institute of Physics and Engineering in Medicine, the journal turns 70 this year. The speakers will be reflecting on the vital role of medical-physics research to medicine and biology and examining how the field has evolved since the journal was set up.
Medical physics will also be the focus of a new “impact project” from the IOP in 2026, and the institute will be starting another on artificial intelligence (AI) as well. The IOP will, in addition, continue its existing impact work on metamaterials, which were of course pioneered by – among others – the Imperial College theorist John Pendry. I wonder if a Nobel prize could be in store for him this year? That’s one prediction I’ll make that would be great if it came true.
Until then, on behalf of everyone at Physics World, I wish all readers – wherever you are – a happy and successful 2026. Your continued support is greatly valued.
From cutting onions to a LEGO Jodrell Bank, physics has had its fair share of quirky stories this year. Here is our pick of the best, not in any particular order.
Flight of the nematode
Researchers in the US this year discovered that a tiny jumping worm uses static electricity to increase its chances of attaching to unsuspecting prey. The parasitic roundworm Steinernema carpocapsae can leap some 25 times its body length by curling into a loop and springing into the air. If the nematode lands successfully on a victim, it releases bacteria that kill the insect within a couple of days, after which the worm feasts on the corpse and lays its eggs. To investigate whether static electricity aids their flight, a team at Emory University and the University of California, Berkeley, used high-speed microscopy to film the worms as they leapt onto a fruit fly that was tethered with a copper wire connected to a high-voltage power supply. The researchers found that a charge of a few hundred volts – similar to that generated in the wild by an insect’s wings rubbing against ions in the air – induces a negative charge on the worm, creating an attractive force with the positively charged fly. They discovered that without any electrostatics, only 1 in 19 worm trajectories successfully reached their target. The greater the voltage, however, the greater the chance of landing, with 880 V resulting in an 80% probability of success. “We’re helping to pioneer the emerging field of electrostatic ecology,” notes Emory physicist Ranjiangshang Ran.
Tear-jerking result
While it is known that volatile chemicals released from onions irritate the nerves in the cornea to produce tears, how such chemical-laden droplets reach the eyes and whether they are influenced by the knife or cutting technique remain less clear. To investigate, Sunghwan Jung from Cornell University and colleagues built a guillotine-like apparatus and used high-speed video to observe the droplets released from onions as they were cut by steel blades. They found that droplets, which can reach up to 60 cm high, were released in two stages – the first being a fast mist-like outburst that was followed by threads of liquid fragmenting into many droplets. The most energetic droplets were released during the initial contact between the blade and the onion’s skin. When they began varying the sharpness of the blade and the cutting speed, they discovered that a greater number of droplets were released by blunter blades and faster cutting speeds. “That was even more surprising,” notes Jung. “Blunter blades and faster cuts – up to 40 m/s – produced significantly more droplets with higher kinetic energy.” Another surprise was that refrigerating the onions prior to cutting also produced an increased number of droplets of similar velocity, compared to room-temperature vegetables.
LEGO telescope
Students at the University of Manchester in the UK created a 30 500-piece LEGO model of the iconic Lovell Telescope to mark the 80th anniversary of the Jodrell Bank Observatory, which was founded in December 1945. Built in 1957, the 76.2 m diameter telescope was the largest steerable dish radio telescope in the world at the time. The LEGO model has been designed by Manchester’s undergraduate physics society and is based on the telescope’s original engineering blueprints. Student James Ruxton spent six months perfecting the design, which even involved producing custom-designed LEGO bricks with a 3D printer. Ruxton and fellow students began construction in April and the end result is a model weighing 30 kg with 30 500 pieces and a whopping 4000-page instruction manual. “It’s definitely the biggest and most challenging build I’ve ever done, but also the most fun,” says Ruxton. “I’ve been a big fan of LEGO since I was younger, and I’ve always loved creating my own models, so recreating something as iconic as the Lovell is like taking that to the next level!” The model has gone on display in a “specially modified cabinet” at the university’s Schuster building, taking pride of place alongside a decade-old LEGO model of CERN’s ATLAS detector.
Petal physics
The curves and curls of leaves and flower petals arise from the interplay between their natural growth and geometry. Uneven growth in a flat sheet, in which the edges grow quicker than the interior, gives rise to strain; in plant leaves and petals this can produce a variety of forms, such as saddles and ripples. Yet when it comes to rose petals, the sharply pointed cusps – points where two curves meet – that form at the edges set them apart from the soft, wavy patterns seen in many other plants.
To investigate this intriguing difference, researchers from the Hebrew University of Jerusalem carried out theoretical modelling and conducted a series of experiments with synthetic disc “petals”. They found that the pointed cusps that form at the edge of rose petals are due to a type of geometric frustration called a Mainardi–Codazzi–Peterson (MCP) incompatibility. This mechanism concentrates stress in specific areas, which then form cusps to avoid tearing or unnatural folding. When the researchers suppressed the formation of cusps, they found that the discs revert to being smooth and concave. The researchers say that the findings could be used for applications in soft robotics and even in the deployment of spacecraft components.
Wild Card physics
The Wild Cards universe is a series of novels set largely during an alternate history of the US following the Second World War. The series follows events after an extraterrestrial virus, known as the Wild Card virus, has spread worldwide. It mutates human DNA causing profound changes in human physiology. The virus follows a fixed statistical distribution in that 90% of those infected die, 9% become physically mutated (referred to as “jokers”) and 1% gain superhuman abilities (known as “aces”). Such capabilities include the ability to fly as well as being able to move between dimensions. George R R Martin, the author who co-edits the Wild Cards series, co-authored a paper examining the complex dynamics of the Wild Card virus together with Los Alamos National Laboratory theoretical physicist Ian Tregillis, who is also a science-fiction author. The model takes into consideration the severity of the changes (for the 10% that don’t instantly die) and the mix of joker/ace traits. The result is a dynamical system in which a carrier’s state vector constantly evolves through the model space – until their “card” turns. At that point the state vector becomes fixed and its permanent location determines the fate of the carrier. “The fictional virus is really just an excuse to justify the world of Wild Cards, the characters who inhabit it, and the plot lines that spin out from their actions,” says Tregillis.
Bubble vision: researchers have discovered that triple-fermented beers feature the most stable foam heads (courtesy: AIP/Chatzigiannakis et al.)
Foamy top
And finally, a clear sign of a good brew is a big head of foam at the top of a poured glass. Beer foam is made of many small bubbles of air, separated from each other by thin films of liquid. These thin films must remain stable, or the bubbles will pop and the foam will collapse. What holds these thin films together is not completely understood, but likely candidates include conglomerates of proteins, surface viscosity or the presence of surfactants – molecules that reduce surface tension and are found in soaps and detergents. To find out more, researchers from ETH Zurich and Eindhoven University of Technology investigated beer-foam stability for different types of beers at varying stages of the fermentation process. They found that for single-fermentation beers, the foams are mostly held together by the surface viscosity of the beer. This is largely set by the proteins in the beer – the more proteins it contains, the more viscous the film and the more stable the foam. However, for double-fermented beers, the proteins are slightly denatured by the yeast cells and come together to form a two-dimensional membrane that keeps the foam intact for longer. The head was found to be even more stable for triple-fermented beers, which include Trappist beers. The team says that the work could be used to identify ways to increase or decrease the amount of foam so that everyone can pour a perfect glass of beer every time. Cheers!
You can be sure that 2026 will throw up its fair share of quirky stories from the world of physics. See you next year!
Popularity isn’t everything. But it is something, so for the second year running, we’re finishing our trip around the Sun by looking back at the physics stories that got the most attention over the past 12 months. Here, in ascending order of popularity, are the 10 most-read stories published on the Physics World website in 2025.
We’ve had quantum science on our minds all year long, courtesy of 2025 being UNESCO’s International Year of Quantum Science and Technology. But according to theoretical work by Partha Ghose and Dimitris Pinotsis, it’s possible that the internal workings of our brains could also literally be driven by quantum processes.
Though neurons are generally regarded as too big to display quantum effects, Ghose and Pinotsis established that the equations describing the classical physics of brain responses are mathematically equivalent to the equations describing quantum mechanics. They also derived a Schrödinger-like equation specifically for neurons. So if you’re struggling to wrap your head around complex quantum concepts, take heart: it’s possible that your brain is ahead of you.
Testing times A toy model from Marco Pettini seeks to reconcile quantum entanglement with Einstein’s theory of relativity. (Courtesy: Shutterstock/Eugene Ivanov)
Einstein famously disliked the idea of quantum entanglement, dismissing its effects as “spooky action at a distance”. But would he have liked the idea of an extra time dimension any better? We’re not sure he would, but that is the solution proposed by theoretical physicist Marco Pettini, who suggests that wavefunction collapse could propagate through a second time dimension. Pettini got the idea from discussions with the Nobel laureate Roger Penrose and from reading old papers by David Bohm, but not everyone is impressed by these distinguished intellectual antecedents. In this article, Bohm’s former student and frequent collaborator Jeffrey Bub went on the record to say he “wouldn’t put any money on” Pettini’s theory being correct. Ouch.
Continuing the theme of intriguing, blue-sky theoretical research, the eighth-most-read article of 2025 describes how two theoretical physicists, Kaden Hazzard and Zhiyuan Wang, proposed a new class of quasiparticles called paraparticles. Based on their calculations, these paraparticles exhibit quantum properties that are fundamentally different from those of bosons and fermions. Notably, paraparticles strike a balance between the exclusivity of fermions and the clustering tendency of bosons, with up to two paraparticles allowed to occupy the same quantum state (rather than zero for fermions or infinitely many for bosons). But do they really exist? No-one knows yet, but Hazzard and Wang say that experimental studies of ultracold atoms could hold the answer.
Capturing colour A still life taken by Lippmann using his method sometime between 1890 and 1910. By the latter part of this period, the method had fallen out of favour, superseded by the simpler Autochrome process. (Courtesy: Photo in public domain)
The list of early Nobel laureates in physics is full of famous names – Roentgen, Curie, Becquerel, Rayleigh and so on. But if you go down the list a little further, you’ll find that the 1908 prize went to a now mostly forgotten physicist by the name of Gabriel Lippmann, for a version of colour photography that almost nobody uses (though it’s rather beautiful, as the photo shows). This article tells the story of how and why this happened. A companion piece on the similarly obscure 1912 laureate, Gustaf Dalén, fell just outside this year’s top 10; if you’re a member of the Institute of Physics, you can read both of them together in the November issue of Physics World.
Why should physicists have all the fun of learning about the quantum world? This episode of the Physics World Weekly podcast focuses on the outreach work of Aleks Kissinger and Bob Coecke, who developed a picture-driven way of teaching quantum physics to a group of 15-17-year-old students. One of the students in the original pilot programme, Arjan Dhawan, is now studying mathematics at the University of Durham, and he joined his former mentors on the podcast to answer the crucial question: did it work?
Conflicting views Stalwart physicists Albert Einstein and Niels Bohr had opposing views on quantum fundamentals from early on, which turned into a lifelong scientific argument between the two. (Paul Ehrenfest/Wikimedia Commons)
Niels Bohr had many good ideas in his long and distinguished career. But he also had a few that didn’t turn out so well, and this article by science writer Phil Ball focuses on one of them. Known as the Bohr-Kramers-Slater (BKS) theory, it was developed in 1923 with help from two of the assistants/students/acolytes who flocked to Bohr’s institute in Copenhagen. Several notable physicists hated it because it violated both causality and the conservation of energy, and within two years, experiments by Walther Bothe and Hans Geiger proved them right. The twist, though, is that Bothe went on to win a share of the 1954 Nobel Prize for Physics for this work – making Bohr surely one of the only scientists who won himself a Nobel Prize for his good ideas, and someone else a Nobel Prize for a bad one.
Black holes are fascinating objects in their own right. Who doesn’t love the idea of matter-swallowing cosmic maws floating through the universe? For some theoretical physicists, though, they’re also a way of exploring – and even extending – Einstein’s general theory of relativity. This article describes how thinking about black hole collisions inspired Jiaxi Wu, Siddharth Boyeneni and Elias Most to develop a new formulation of general relativity that mirrors the equations that describe electromagnetic interactions. According to this formulation, general relativity behaves the same way as the gravitational force described by Isaac Newton more than 300 years ago, with the “gravito-electric” field fading with the inverse square of distance.
“Best of” lists are a real win-win. If you agree with the author’s selections, you go away feeling confirmed in your mutual wisdom. If you disagree, you get to have a good old moan about how foolish the author was for forgetting your favourites or including something you deem unworthy. Either way, it’s a success – as this very popular list of the top 5 Nobel Prizes for Physics awarded since the year 2000 (as chosen by Physics World editor-in-chief Matin Durrani) demonstrates.
We’re back to black holes again for the year’s second-most-read story, which focuses on a possible link between gravity and quantum information theory via the concept of entropy. Such a link could help explain the so-called black hole information paradox – the still-unresolved question of whether information that falls into a black hole is retained in some form or lost as the black hole evaporates via Hawking radiation. Fleshing out this connection could also shed light on quantum information theory itself, and the theorist who’s proposing it, Ginestra Bianconi, says that experimental measurements of the cosmological constant could one day verify or disprove it.
Experiment schematic Two single atoms floating in a vacuum chamber are illuminated by a laser beam and act as the two slits. The interference of the scattered light is recorded with a highly sensitive camera depicted as a screen. Incoherent light appears as background and implies that the photon has acted as a particle passing only through one slit. (Courtesy: Wolfgang Ketterle, Vitaly Fedoseev, Hanzhen Lin, Yu-Kun Lu, Yoo Kyung Lee and Jiahao Lyu)
Back in 2002, readers of Physics World voted Thomas Young’s electron double-slit experiment “the most beautiful experiment in physics”. More than 20 years later, it continues to fascinate the physics community, as this, the most widely read article of any that Physics World published in 2025, shows.
Young’s original experiment demonstrated the wave-like nature of light by sending it through a pair of slits; later versions with electrons showed that they, too, create an interference pattern on a screen even when they pass through the slits one by one. In this modern update, physicists at the Massachusetts Institute of Technology (MIT), US, stripped the experiment back to the barest possible bones.
Using two single atoms as the slits, they inferred the path of photons by measuring subtle changes in the atoms’ properties after photon scattering. Their results matched the predictions of quantum theory: interference fringes when they didn’t observe the photons’ path, and two bright spots when they did.
It’s an elegant result, and the fact that the MIT team performed the experiment specifically to celebrate the International Year of Quantum Science and Technology 2025 makes its popularity with Physics World readers especially gratifying.
So here’s to another year full of elegant experiments and the theories that inspire them. Long may they both continue, and thank you, as always, for taking the time to read about them.
Our blue planet is a Goldilocks world. We’re at just the right distance from the Sun that Earth – like Baby Bear’s porridge – is not too hot or too cold, allowing our planet to be bathed in oceans of liquid water. But further out in our solar system are icy moons that eschew the Goldilocks principle, maintaining oceans and possibly even life far from the Sun.
We call them icy moons because their surface, and part of their interior, is made of solid water-ice. There are over 400 icy moons in the solar system – most are teeny moonlets just a few kilometres across, but a handful are quite sizeable, from hundreds to thousands of kilometres in diameter. Of the big ones, the best known are Jupiter’s moons, Europa, Ganymede and Callisto, and Saturn’s Titan and Enceladus.
Yet these moons are more than just ice. Deep beneath their frozen shells – at temperatures of –160 to –200 °C and bathed in radiation – lie oceans of water, kept liquid thanks to tidal heating as their interiors flex in the strong gravitational grip of their parent planets. With water being a prerequisite for life as we know it, these frigid systems are our best chance of finding life beyond Earth.
The first hints that these icy moons could harbour oceans of liquid water came when NASA’s Voyager 1 and 2 missions flew past Jupiter in 1979. On Europa they saw a broken and geologically youthful-looking surface, just millions of years old, featuring dark cracks that seemed to have slushy material welling up from below. Those hints turned into certainty when NASA’s Galileo mission visited Jupiter between 1995 and 2003. Gravity and magnetometer experiments proved that not only does Europa contain a liquid layer, but so do Ganymede and Callisto.
Meanwhile at Saturn, NASA’s Cassini spacecraft (which arrived in 2004) encountered disturbances in the ringed planet’s magnetic field. They turned out to be caused by plumes of water vapour erupting out of giant fractures splitting the surface of Enceladus, and it is believed that this vapour originates from an ocean beneath the moon’s ice shell. Evidence for an ocean on Titan is a little less certain, but gravity and radio measurements performed by Cassini and its European-built lander Huygens point towards the possibility of some liquid or slushy water beneath the surface.
Water, ice and JUICE
“All of these ocean worlds are going to be different, and we have to go to all of them to understand the whole spectrum of icy moons,” says Amanda Hendrix, director of the Planetary Science Institute in Arizona, US. “Understanding what their oceans are like can tell us about habitability in the solar system and where life can take hold and evolve.”
To that end, an armada of spacecraft will soon be on their way to the icy moons of the outer planets, building on the successes of their predecessors Voyager, Galileo and Cassini–Huygens. Leading the charge is NASA’s Europa Clipper, which is already heading to Jupiter. Clipper will reach its destination in 2030, with the Jupiter Icy moons Explorer (JUICE) from the European Space Agency (ESA) just a year behind it. Europa is the primary target of scientists because it is possibly Jupiter’s most interesting moon as a result of its “astrobiological potential”. That’s the view of Olivier Witasse, who is JUICE project scientist at ESA, and it’s why Europa Clipper will perform nearly 50 fly-bys of the icy moon, some as low as 25 km above the surface. JUICE will also visit Europa twice on its tour of the Jovian system.
The challenge at Europa is that it’s close enough to Jupiter to be deep inside the giant planet’s magnetosphere, which is loaded with high-energy charged particles that bathe the moon’s surface in radiation. That’s why Clipper and JUICE are limited to fly-bys; the radiation dose in orbit around Europa would be too great to linger. Clipper’s looping orbit will take it back out to safety each time. Meanwhile, JUICE will focus more on Callisto and Ganymede – which are both farther out from Jupiter than Europa is – and will eventually go into orbit around Ganymede.
“Ganymede is a super-interesting moon,” says Witasse. For one thing, at 5262 km across it is larger than Mercury, a planet. It also has its own intrinsic magnetic field – one of only three solid bodies in the solar system to do so (the others being Mercury and Earth).
Beneath the icy exterior
It’s the interiors of these moons that are of the most interest to JUICE and Clipper. That’s where the oceans are, hidden beneath many kilometres of ice. While the missions won’t be landing on the Jovian moons, these internal structures aren’t as inaccessible as we might at first think. In fact, there are three independent methods for probing them.
Many layers A cross section of Jupiter’s moon Europa, showing its internal layering: a rocky core and ocean floor (possibly with hydrothermal vents), the ocean itself and the ice shell above. (Courtesy: NASA/JPL–Caltech)
If a moon’s ocean contains salts or other electrically conductive contaminants, interesting things happen when passing through the parent planet’s variable magnetic field. “The liquid is a conductive layer within a varying magnetic field and that induces a magnetic field in the ocean that we can measure with a magnetometer using Faraday’s law,” says Witasse. The amount of salty contaminants, plus the depth of the ocean, influence the magnetometer readings.
Then there’s radio science – the way that an icy moon’s mass bends a radio signal from a spacecraft to Earth. By making multiple fly-bys with different trajectories during different points in a moon’s orbit around its planet, the moon’s gravity field can be measured. Once that is known to exacting detail, it can be applied to models of that moon’s internal structure.
Perhaps the most remarkable method, however, is using a laser altimeter to search for a tidal bulge in the surface of a moon. This is exactly what JUICE will be doing when in orbit around Ganymede. Its laser altimeter will map the shape of the surface – such as hills and crevasses – but gravitational tidal forces from Jupiter are expected to cause a bulge on the surface, deforming it by 1–10 m. How large the bulge is depends upon how deep the ocean is.
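This logic can be made roughly quantitative with the standard equilibrium-tide formula: the amplitude of the eccentricity-driven (time-varying) tide scales with the Love number h2, which is of order unity for an ice shell floating on an ocean but much smaller for a fully solid body. A sketch of the calculation for Ganymede – the orbital parameters are approximate published values, while the two h2 values are assumed purely for illustration:

```python
# Eccentricity-driven tidal deformation: dh ~ 3 * e * h2 * (M_p/M_m) * (R/a)^3 * R
# Approximate physical parameters for Ganymede; h2 values are assumed.
M_JUP = 1.898e27      # Jupiter mass, kg
M_GAN = 1.482e23      # Ganymede mass, kg
R_GAN = 2.634e6       # Ganymede radius, m
A_ORB = 1.070e9       # orbital semi-major axis, m
ECC = 0.0013          # orbital eccentricity

def tidal_amplitude(h2):
    """Surface deformation over an orbit, in metres, for Love number h2."""
    static_tide = (M_JUP / M_GAN) * (R_GAN / A_ORB) ** 3 * R_GAN
    return 3 * ECC * h2 * static_tide

for label, h2 in [("ice shell over ocean (h2 ~ 1.3, assumed)", 1.3),
                  ("fully solid body (h2 ~ 0.1, assumed)", 0.1)]:
    print(f"{label}: ~{tidal_amplitude(h2):.2f} m")
```

The ocean case comes out at metre scale while the solid case is tens of centimetres – consistent with the 1–10 m bulge JUICE’s altimeter is designed to detect, and with why measuring its size constrains the ocean beneath.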
“If the surface ice is sitting above a liquid layer then the tide will be much bigger because if you sit on liquid, you are not attached to the rest of the moon,” says Witasse. “Whereas if Ganymede were solid the tide would be quite small because it is difficult to move one big, solid body.”
As for what’s below the oceans, those same gravity and radio-science experiments during previous missions have given us a general idea about the inner structures of Jupiter’s Europa, Ganymede and Callisto. All three have a rocky core. Inside Europa, the ocean surrounds the core, with a ceiling of ice above it. The rock–ocean interface potentially provides a source of chemical energy and nutrients for the ocean and any life there.
Ganymede’s interior structure is more complex. Separating the 3400 km-wide rocky core and the ocean is a layer, or perhaps several layers, of high-pressure ice, and there is another ice layer above the ocean. Without that rock–ocean interface, Ganymede is less interesting from an astrobiological perspective.
Meanwhile, Callisto, being the farthest from Jupiter, receives the least tidal heating of the three. This is reflected in Callisto’s lack of evolution, with its interior having not differentiated into layers as distinct as Europa and Ganymede. “Callisto looks very old,” says Witasse. “We’re seeing it more or less as it was at the beginning of the solar system.”
Crazy cryovolcanism
Tidal forces don’t just keep the interiors of the icy moons warm. They can also drive dramatic activity, such as cryovolcanoes – icy eruptions that spew out gases and volatile materials like liquid water (which quickly freezes in space), ammonia and hydrocarbons. The most obvious example of this is found on Saturn’s Enceladus, where giant water plumes squirt out through “tiger stripe” cracks at the moon’s south pole.
But there’s also growing evidence of cryovolcanism on Europa. In 2012 the Hubble Space Telescope caught sight of what looked like a water plume jetting out 200 km from the moon. But the discovery is controversial despite more data from Hubble and even supporting evidence found in archive data from the Galileo mission. What’s missing is cast-iron proof for Europa’s plumes. That’s where Clipper comes in.
By Jove Three of Jupiter’s largest moons have solid water-ice. (Left) Europa, imaged by the JunoCam on NASA’s Juno mission to Jupiter. The surface sports myriad fractures and dark markings. (Middle) Ganymede, also imaged by the Juno mission, is the largest moon in our solar system. (Right) Our best image of ancient Callisto was taken by NASA’s Galileo spacecraft in 2001. The arrival of JUICE in the Jovian system in 2031 will place Callisto under much-needed scrutiny. (CC BY 3.0 NASA/JPL–Caltech/SwRI/MSS/ image processing by Björn Jónsson; CC BY 3.0 NASA/JPL–Caltech/SwRI/MSS/ image processing by Kalleheikki Kannisto; NASA/JPL/DLR)
“We need to find out if the plumes are real,” says Hendrix. “What we do know is if there is plume activity happening on Europa then it’s not as consistent or ongoing as is clearly happening at Enceladus.”
At Enceladus, the plumes are driven by tidal forces from Saturn, which squeeze and flex the 500 km-wide moon’s innards, forcing out water from an underground ocean through the tiger stripes. If there are plumes at Europa then they would be produced the same way, and would provide access to material from an ocean that’s dozens of kilometres below the icy crust. “I think we have a lot of evidence that something is happening at Europa,” says Hendrix.
These plumes could therefore be the key to characterizing the hidden oceans. One instrument on Clipper that will play an important role in investigating the plumes at Europa is an ultraviolet spectrometer – an instrument that proved very useful on the Cassini mission.
Because Enceladus’ plumes were not known until Cassini discovered them, the spacecraft’s instruments had not been designed to study them. However, scientists were able to use the mission’s ultraviolet imaging spectrometer to analyse the vapour when it was between Cassini and the Sun. The resulting absorption lines in the spectrum showed the plumes to be mostly pure water, ejected into space at a rate of 200 kg per second.
Ocean spray Geysers of water vapour loaded with salts and organic molecules spray out from the tiger stripes on Enceladus. (Courtesy: NASA/JPL/Space Science Institute)
The erupted vapour freezes as it reaches space and some of it snows back down onto the surface. Cassini’s ultraviolet spectrometer was again used, this time to detect solar ultraviolet light reflected and scattered off these icy particles in the uppermost layers of Enceladus’ surface. Scientists found that any freshly deposited snow from the plumes has a different chemistry from older surface material that has been weathered and chemically altered by micrometeoroids and radiation, and therefore a different ultraviolet spectrum.
Icy moon landing
Another two instruments that Cassini’s scientists adapted to study the plumes were the cosmic dust analyser, and the ion and neutral mass spectrometer. When Cassini flew through the fresh plumes and Saturn’s E-ring, which is formed from older plume ejections, it could “taste” the material by sampling it directly. Recent findings from this data indicate that the plumes are rich in salt as well as organic molecules, including aliphatic and cyclic esters and ethers (carbon-bonded acid-based compounds such as fatty acids) (Nature Astron. 9 1662). Scientists also found nitrogen- and oxygen-bearing compounds that play a role in basic biochemistry and which could therefore potentially be building blocks of prebiotic molecules or even life in Enceladus’ ocean.
Blue moon Enceladus, as seen by Cassini in 2006. The tiger stripes are the blue fractures towards the south. (Courtesy: NASA/JPL/Space Science Institute)
While Cassini could only observe Enceladus’ plumes and fresh snow from orbit, astronomers are planning a lander that could let them directly inspect the surface snow. Currently in the technology development phase, it would be launched by ESA sometime in the 2040s to arrive at the moon in 2054, when winter at Enceladus’ southern, tiger stripe-adorned pole turns to spring and daylight returns.
“What makes the mission so exciting to me is that although it looks like every large icy moon has an ocean, Enceladus is one where there is a very high chance of actually sampling ocean water,” says Jörn Helbert, head of the solar system section at ESA, and the science lead on the prospective mission.
The planned spacecraft will fly through the plumes with more sophisticated instruments than Cassini’s, designed specifically to sample the vapour (like Clipper will do at Europa). Yet adding a lander could get us even closer to the plume material. By landing close to the edge of a tiger stripe, a lander would dramatically increase the mission’s ability to analyse the material from the ocean in the form of fresh snow. In particular, it would look for biosignatures – evidence of the ocean being habitable, or perhaps even inhabited by microbes.
However, new research urges caution in drawing hasty conclusions about organic molecules present in the plumes and snow. While not as powerful as Jupiter’s, Saturn also has a magnetosphere filled with high-energy ions that bombard Enceladus. A recent laboratory study, led by Grace Richards of the Istituto Nazionale di Astrofisica e Planetologia Spaziale (IAPS-INAF) in Rome, found that when these ions hit surface ice they trigger chemical reactions that produce organic molecules, including some that are precursors to amino acids, similar to what Cassini tasted in the plumes.
So how can we be sure that the organics in Enceladus’ plumes originate from the ocean, and not from radiation-driven chemistry on the surface? It is the same quandary for dark patches around cracks on the surface of Europa, which seem to be rich with organic molecules that could either originate via upwelling from the ocean below, or just from radiation triggering organic chemistry. A lander on Enceladus might solve not just the mystery of that particular moon, but provide important pointers to explain what we’re seeing on Europa too.
More icy companions
Enceladus is not Saturn’s only icy moon; there’s Titan too. As the ringed planet’s largest moon at 5150 km across, Titan (like Ganymede) is larger than Mercury. However, unlike the other moons in the solar system, Titan has a thick atmosphere rich in nitrogen and methane. The atmosphere is opaque, hiding the surface from spacecraft in orbit except at infrared and radar wavelengths, which means that getting below the smoggy atmosphere is a must.
ESA did this in 2005 with the Huygens lander, which, as it parachuted down to Titan’s frozen surface, revealed it to be a land of hills and dune plains with river channels, lakes and seas of flowing liquid hydrocarbons. These organic molecules originate from the methane in its atmosphere reacting with solar ultraviolet.
Until recently, it was thought that Titan has a core of rock, surrounded by a shell of high-pressure ice, above which sits a layer of salty liquid water and then an outer crust of water ice. However, new evidence from re-analysing Cassini’s data suggests that rather than oceans of liquid water, Titan has “slush” below the frozen exterior, with pockets of liquid water (Nature 648 556). The team, led by Flavio Petricca from NASA’s Jet Propulsion Laboratory, looked at how Titan’s shape morphs as it orbits Saturn. There is a several-hour lag between the moon passing the peak of Saturn’s gravitational pull and its shape shifting, implying that while there must be some form of non-solid substance below Titan’s surface to allow for deformation, more energy is lost or dissipated than would be the case if it were liquid water. Instead, the researchers found that a layer of high-pressure ice close to its melting point – or slush – better fits the data.
Hello halo Titan is different to other icy moons in that it has a thick atmosphere, seen here with the moon in silhouette. (Courtesy: NASA/JPL/Space Science Institute)
To find out more about Titan, NASA is planning to follow in Huygens’ footsteps with the Dragonfly mission but in an excitingly different way. Set to launch in 2028, Dragonfly should arrive at Titan in 2034 where it will deploy a rotorcraft that will fly over the moon’s surface, beneath the smog, occasionally touching down to take readings. Scientists are intending to use Dragonfly to sample surface material with a mass spectrometer to identify organic compounds and therefore better assess Titan’s biological potential. It will also perform atmospheric and geological measurements, even listening for seismic tremors while landed, which could provide further clues about Titan’s interior.
Jupiter and Saturn are also not the only planets to possess icy moons. We find them around Uranus and Neptune too. Even the dwarf planet Pluto and its largest moon Charon have strong similarities to icy moons. Whether any of these bodies, so far out from the Sun, can maintain an ocean is unclear, however.
Recent findings point to an ocean deep inside Uranus’ moon Ariel that may once have been 170 km deep, kept warm by tidal heating (Icarus 444 116822). But over time Ariel’s orbit around Uranus has become increasingly circular, weakening the tidal forces acting on it, and the ocean has partly frozen. Another of Uranus’ moons, Miranda, has a chaotic surface that appears to have melted and refrozen, and the pattern of cracks on its surface strongly suggests that the moon also contains an ocean, or at least did 150 million years ago. A new mission to Uranus is a top priority in the US’s most recent planetary science decadal survey.
It’s becoming clear that icy ocean moons could far outnumber more traditional habitable planets like Earth, not just in our solar system, but across the galaxy (although none have been confirmed yet). Understanding the internal structures of the icy moons in our solar system, and characterizing their oceans, is vital if we are to expand the search for life beyond Earth.
Particle and nuclear physics evokes images of huge accelerators probing the extremes of matter. But in this round-up of my favourite research of 2025 I have chosen five stories in which particle and nuclear physics forms the basis for a range of quirky and fascinating research from astrophysics to archaeology.
Stable discovery The Fireball experiment installed in the HiRadMat irradiation area at CERN. (Courtesy: Gianluca Gregori)
My first pick involves simulating the vast cosmic plasma in the lab. Blazars are extremely bright galaxies that are powered by supermassive black holes. They emit intense jets of radiation, including teraelectronvolt gamma rays – which can be detected by astronomers if a jet happens to point at Earth. As these high-energy photons travel through intergalactic space, they interact with background starlight, producing numerous electron–positron pairs. These pairs should, in theory, generate gigaelectronvolt gamma rays – but this secondary radiation has never been observed. One explanation is that intergalactic magnetic fields deflect these pairs and the resulting gamma rays away from our line of sight. However, there is no conclusive evidence for such fields. Another theory is that plasma instabilities in the sparse intergalactic medium could dissipate the energy of the pair beams. Now, physicists working on the Fireball experiment at CERN have simulated the effect of plasma instabilities by firing a beam of electron–positron pairs through a metre-long argon plasma. They found that plasma instabilities are too weak to account for the missing gamma radiation – strengthening the case for the existence of primordial intergalactic magnetic fields.
A compact source of muons could soon be discovering hidden chambers in ancient pyramids. Muons are subatomic particles similar to electrons but 200 times heavier. They are produced in copious amounts in the atmosphere by cosmic rays. These cosmic muons can penetrate long distances into materials and are finding increasing use in “muon tomography” – a technique that has imaged the interiors of huge objects such as volcanoes, pyramids and nuclear reactors. One downside of muon tomography is that cosmic muons arrive predominantly from above, limiting opportunities for imaging. While beams of muons can be made in accelerators, these are large and expensive facilities – and the direction of such beams is also fixed. Now, physicists at Lawrence Berkeley National Laboratory have demonstrated a compact, and potentially portable, method for generating high-energy muon beams using laser plasma acceleration. It uses an ultra-intense, tightly focused laser pulse to accelerate electrons in a short plasma channel. These electrons then strike a metal target, creating a muon beam. With more work, compact and portable muon sources could be developed, leading to new possibilities for non-destructive imaging in archaeology, geology and nuclear safety.
Could a “superradiant neutrino laser” be created using radioactive atoms in an ultracold Bose–Einstein condensate (BEC)? The answer is “maybe”, according to theoretical work by two physicists in the US. Their proposal involves creating a BEC of rubidium-83, which undergoes beta decay involving the emission of neutrinos. Unlike photons, neutrinos are fermions and therefore cannot form the basis of a conventional laser. However, if the atoms in the BEC are close enough together, quantum interactions between the atomic nuclei could accelerate beta decay and create a coherent, laser-like burst of neutrinos. This is a well-known phenomenon called superradiance. While the idea could be tested using existing technologies for making BECs, it would be a challenge to deploy radioactive rubidium in a conventional atomic physics lab. Another drawback is that there are no obvious applications for a neutrino laser – at least for now. However, the very idea of a neutrino laser is so cool that I am hoping that someone will try to build one soon!
Lifted by crane The BASE-STEP system is moved to a lorry at CERN. Marcel Leonhardt (right), physicist at HHU, checks the status of the device and confinement of the protons on a tablet. (Courtesy: BASE/Julia Jäger)
If you happen to be driving between Geneva and Düsseldorf in the future, you might just overtake a shipment of antimatter. It will be on its way to an experiment that could solve some of the biggest mysteries in physics – including why there is much more matter than antimatter in the universe. While antielectrons (positrons) can be created in a small lab, antiprotons can only be created at large and expensive accelerators. This limits where antimatter experiments can be done. But now, physicists on the BASE collaboration at CERN have shown that it should be possible to transport antiprotons by road. Protons stood in for antiprotons in their demonstration and the particles were held in an electromagnetic trap at cryogenic temperatures and ultralow pressure. By transporting their BASE-STEP system around CERN’s Meyrin site, they showed it was stable and robust enough to handle the rigours of road travel. The system will now be re-configured to transport antiprotons about 700 km to Germany’s Heinrich Heine University. There, physicists hope to search for charge–parity–time (CPT) violations in protons and antiprotons with a precision at least 100 times higher than is currently possible at CERN. The BASE collaboration is also cited in our Top 10 Breakthroughs of 2025 for their quantum control of a single antiproton.
Solid quartz crystals revolutionized time keeping in the 20th century, so could solid-state nuclear clocks soon do the same? Today, the best timekeepers use the light emitted in atomic transitions. In principle, even better clocks could be made using very-low-energy gamma rays emitted in some nuclear transitions. Nuclei are much smaller than atoms and these transitions are governed by the strong force. This means that such nuclear clocks would be far less susceptible to performance-degrading electromagnetic noise. And unlike atomic clocks, the nuclei could be embedded in solids – which would greatly simplify clock design. Thorium-229 shows great promise as a clock nucleus but it has two practical shortcomings: it is radioactive and extremely expensive. The solution to both of these problems is a clock design that uses only a tiny amount of thorium-229. Now researchers in the US have shown that physical vapour deposition can be used to create extremely thin films of thorium tetrafluoride. Characterization using a vacuum ultraviolet laser confirmed the accessibility of the clock transition – but its lifetime was shorter and the signal less intense than measured in thorium-doped crystals. However, the researchers believe that these unexpected results should not dissuade those aiming to build nuclear clocks.
There’s only a few days left in the International Year of Quantum Science and Technology, but we’re still finding plenty to celebrate here at Physics World HQ thanks to a long list of groundbreaking work by quantum physicists in 2025. Here are a few of our favourite stories from the past 12 months.
By this point in 2025, “negative time” may sound like the answer to the question “How long have I got left to buy holiday presents for my loved ones?” Earlier in the year, though, physicists led by experimentalist Aephraim Steinberg of the University of Toronto, Canada and theorist Howard Wiseman of Griffith University in Australia showed that the concept can also describe the average amount of time a photon spends in an excited atomic state. While experts have cautioned against interpreting “negative time” too literally – we aren’t in time machine territory here – it does seem like there’s something interesting going on in this system of ultracold rubidium atoms.
It is a truth universally acknowledged that any sufficiently advanced technology must be in want of a simple system to operate it. In April, the quantum world passed this milestone thanks to Stephanie Wehner and colleagues at Delft University of Technology in the Netherlands. Their operating system is called QNodeOS, and they developed it with the aim of improving access to quantum computing for the 99.99999% of people who aren’t (and mostly don’t need to be) intimately familiar with how quantum information processors work. Another advantage of QNodeOS is that it makes it easier for classical and quantum machines (and quantum devices built with different qubit architectures) to communicate with each other.
How big does an object have to be before it stops being quantum and starts behaving like the billiard-ball-like solids familiar from introductory classical mechanics courses? It’s a question that featured in our annual “Breakthrough of the Year” back in 2021, when two independent teams demonstrated quantum entanglement in pairs of 10-micron drumheads, and we’re returning to it this year in a different system: levitated nanoparticles around 100 nm in diameter.
In one boundary-pushing experiment, Massimiliano Rossi and colleagues at ETH Zurich, Switzerland and the Institute of Photonic Sciences in Barcelona, Spain cooled silica nanoparticles enough to extend their wave-like behaviour to 73 pm. In another study, Kiyotaka Aikawa and colleagues at the University of Tokyo, Japan performed the first quantum mechanical squeezing on a nanoparticle, narrowing its velocity distribution at the expense of its position distribution. We may not know exactly where the quantum-classical boundary is yet, but the list of quantum behaviours we’ve observed in usually-not-quantum objects keeps getting longer.
What’s the best way to generate random numbers? In part, the answer depends on how random those numbers really need to be. For many applications, the pseudorandom numbers generated by classical computers, or the random-but-with-systematic-biases numbers found in, say, radio static, are good enough. But if you really, really need those numbers to be random, you need a quantum source – and thanks to work published this year by Scott Aaronson, Shi-Han Hung, Marco Pistoia and colleagues, that quantum source can now be a quantum computer. Which is a neat way of tying things together, don’t you think?
Quantum cats Left to right are UNSW researchers Benjamin Wilhelm, Xi Yu, Andrea Morello, Danielle Holmes. (Courtesy: UNSW Sydney)
Finally, we would be remiss not to mention the work of Andrea Morello and colleagues at the University of New South Wales, Australia. This year, they became the first to create quantum superpositions known as Schrödinger’s cat states in a heavy atom, antimony, that has a large nuclear spin. They also created what is certainly the year’s best scientific team photo, posing with cats on their laps and deadpan expressions more usually associated with too-cool-for-school indie musicians.
So congratulations to them, and to all the other teams in this list, for setting the bar high in a year that offered plenty for the quantum community to celebrate. We hope you enjoyed the International Year of Quantum Science and Technology, and we look forward to many more exciting discoveries in 2026.
This year saw Physics World report on a raft of innovative and exciting developments in the worlds of medical physics and biotech. These included novel cancer therapies using low-temperature plasma or laser ablation, intriguing new devices such as biodegradable bone screws and a pacemaker smaller than a grain of rice, and neural engineering breakthroughs including an ultrathin bioelectric implant that improves movement in rats with spinal cord injuries and a tiny brain sensor that enables thought control of external devices. Here are a few more research highlights that caught my eye.
Vision transformed
One remarkable device introduced in 2025 was an eye implant that restored vision to patients with incurable sight loss. In a clinical study headed up at the University of Bonn, participants with sight loss due to age-related macular degeneration had a tiny wireless implant inserted under their retina. Used in combination with specialized glasses, the system restored the ability to read in 27 of 32 participants followed up a year later.
Learning to read again Study participant Sheila Irvine, a patient at Moorfields Eye Hospital, training with the PRIMA device. (Courtesy: Moorfields Eye Hospital)
We also described a contact lens that enables wearers to see near-infrared light without night vision goggles, reported on a fascinating retinal stimulation technique that enabled volunteers to see colours never before seen by the human eye, and chatted with researchers in Hungary about how a tiny dissolvable eye insert they are developing could help astronauts suffering from eye conditions.
Radiation therapy advances
2025 saw several firsts in the field of radiation therapy. Researchers in Germany performed the first cancer treatment using a radioactive carbon ion beam, on a mouse with a bone tumour close to the spine. And a team at the Trento Proton Therapy Centre in Italy delivered the first clinical treatments using proton arc therapy – a development that made it onto our top 10 Breakthroughs of the Year.
Meanwhile, the ASTRO meeting saw Leo Cancer Care introduce its first upright photon therapy system, called Grace, which will deliver X-ray radiation to patients in an upright position. This new take on radiation delivery is also under investigation by a team at RaySearch Laboratories, who showed that combining static arcs and shoot-through beams could increase plan quality and reduce delivery times in upright proton therapy.
It’s particularly interesting to examine how the rapid evolution of artificial intelligence (AI) is impacting healthcare, especially considering its potential for use in data-intensive tasks. Earlier this year, a team at Northwestern Medicine integrated a generative AI tool into a live clinical workflow for the first time, using it to draft radiology reports on X-ray images. In routine use, the AI model increased documentation efficiency by an average of 15.5%, while maintaining diagnostic accuracy.
Samir Abboud: “For me and my colleagues, it’s not an exaggeration to say that [the AI tool] doubled our efficiency.” (Courtesy: José M Osorio/Northwestern Medicine)
When introducing AI into the clinic, however, it’s essential that any AI-driven software is accurate, safe and trustworthy. To help assess these factors, a multinational research team identified potential pitfalls in the evaluation of algorithmic bias in AI radiology models, suggesting best practices to mitigate such bias.
A quantum focus
Finally, with 2025 being the International Year of Quantum Science and Technology, Physics World examined how quantum physics looks set to play a key role in medicine and healthcare. Many quantum-based companies and institutions are already working in the healthcare sector, with quantum sensors, in particular, close to being commercialized. As detailed in this feature on quantum sensing, such technologies are being applied in areas ranging from lab and point-of-care diagnostics to consumer wearables for medical monitoring, body scanning and microscopy.
Alongside this, scientists at Jagiellonian University are applying quantum entanglement to cancer diagnostics and developing the world’s first whole-body quantum PET scanner, while researchers at the University of Warwick have created an ultrasensitive magnetometer based on nitrogen-vacancy centres in diamond that could detect small cancer metastases via keyhole surgery. There’s even a team designing a protein qubit that can be produced directly inside living cells and used as a magnetic field sensor (which also featured in this year’s top 10 breakthroughs).
And in September, we ran a Physics World Live event examining how quantum optics, quantum sensors and quantum entanglement can enable advanced disease diagnostics and transform medical imaging. The recording is available to watch here.
ZAP-X is a next-generation, cobalt-free, vault-free stereotactic radiosurgery system purpose-built for the brain. Delivering highly precise, non-invasive treatments with exceptionally low whole-brain and whole-body dose, ZAP-X’s gyroscopic beam delivery, refined beam geometry and fully integrated workflow enable state-of-the-art SRS without the burdens of radioactive sources or traditional radiation bunkers.
Theresa Hofman
Theresa Hofman is deputy head of medical physics at the European Radiosurgery Center Munich (ERCM), specializing in stereotactic radiosurgery with the CyberKnife and ZAP‑X systems. She has been part of the ERCM team since 2018 and has extensive clinical experience with ZAP‑X; ERCM was one of the first centres worldwide to implement the technology, in 2021. Since then, the team has treated more than 900 patients with ZAP‑X, and she is deeply involved in both clinical use and evaluation of its planning software.
She holds a master’s degree in physics from Ludwig Maximilian University of Munich, where she authored two first‑author publications on range verification in carbon‑ion therapy. At ERCM, she has published additional first‑author studies on CyberKnife kidney‑treatment accuracy and on comparative planning between ZAP‑X and CyberKnife. She is currently conducting further research on the latest ZAP‑X planning software. Her work is driven by the goal of advancing high‑quality radiosurgery and ensuring the best possible treatment for every patient.
This episode of the Physics World Weekly podcast features Pat Hanrahan, who studied nuclear engineering and biophysics before becoming a founding employee of Pixar Animation Studios. As well as winning three Academy Awards for his work on computer animation, Hanrahan won the Association for Computing Machinery’s A M Turing Award for his contributions to 3D computer graphics, or CGI.
Earlier this year, Hanrahan spoke to Physics World’s Margaret Harris at the Heidelberg Laureate Forum in Germany. He explains how he was introduced to computer graphics by his need to visualize the results of computer simulations of nervous systems. That initial interest led him to Pixar and his development of physically-based rendering, which uses the principles of physics to create realistic images.
Hanrahan explains that light interacts with different materials in very different ways, making detailed animations very challenging. Indeed, he says that creating realistic looking skin is particularly difficult – comparing it to the quest for a grand unified theory in physics.
He also talks about how having a background in physics has helped his career – citing his physicist’s knack for creating good models and then using them to solve problems.
Electrochemical impedance spectroscopy (EIS) provides valuable insights into the physical processes within batteries – but how can these measurements directly inform physics-based models? In this webinar, we present recent work showing how impedance data can be used to extract grouped parameters for physics-based models such as the Doyle–Fuller–Newman (DFN) model or the reduced-order single-particle model with electrolyte (SPMe).
We will introduce PyBaMM (Python Battery Mathematical Modelling), an open-source framework for flexible and efficient battery simulation, and show how our extension, PyBaMM-EIS, enables fast numerical impedance computation for any implemented model at any operating point. We also demonstrate how PyBOP, another open-source tool, performs automated parameter fitting of models using measured impedance data across multiple states of charge.
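The kind of parameter extraction the webinar describes can be illustrated in miniature without PyBaMM itself. The sketch below is an illustrative assumption, not the webinar’s actual workflow: it fits the three parameters of a simple Randles-type equivalent circuit to a synthetic impedance spectrum by least squares, which is the same measurement-to-model idea that PyBaMM-EIS and PyBOP apply to grouped physics-based parameters.

```python
import numpy as np
from scipy.optimize import least_squares

def randles_impedance(freqs, r0, r_ct, c_dl):
    """Impedance of a series resistance R0 followed by a parallel
    charge-transfer resistance / double-layer capacitance branch."""
    omega = 2 * np.pi * np.asarray(freqs)
    return r0 + r_ct / (1 + 1j * omega * r_ct * c_dl)

# Synthetic "measured" spectrum with made-up parameter values
freqs = np.logspace(-2, 4, 50)                     # 10 mHz to 10 kHz
z_meas = randles_impedance(freqs, 0.05, 0.2, 1.0)  # ohm, ohm, farad

def residuals(params):
    # Stack real and imaginary parts so the fit sees the full spectrum
    z = randles_impedance(freqs, *params)
    return np.concatenate([(z - z_meas).real, (z - z_meas).imag])

# Recover the circuit parameters from the impedance data
fit = least_squares(residuals, x0=[0.01, 0.1, 0.5], bounds=(0, np.inf))
print(np.round(fit.x, 3))
```

On noiseless synthetic data the fit recovers the generating parameters; with real measurements, choices such as weighting, frequency range and state of charge matter, which is exactly where the dedicated tools earn their keep.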
Battery modelling is challenging, and obtaining accurate fits can be difficult. Our technique offers a flexible way to update model equations and parameterize models using impedance data.
Join us to see how our tools create a smooth path from measurement to model to simulation.
An interactive Q&A session follows the presentation.
Noël Hallemans
Noël Hallemans is a postdoctoral research assistant in engineering science at the University of Oxford, where he previously lectured in mathematics at St Hugh’s College. He earned his PhD in 2023 from the Vrije Universiteit Brussel and the University of Warwick, focusing on frequency-domain, data-driven modelling of electrochemical systems.
His research at the Battery Intelligence Lab, led by Professor David Howey, integrates electrochemical impedance spectroscopy (EIS) with physics-based modelling to improve understanding and prediction of battery behaviour. He also develops multisine EIS techniques for battery characterisation during operation (for example, charging or relaxation).