I am intrigued by entrepreneurship. Is it something we all innately possess – or can entrepreneurship be taught to anyone (myself included) for whom it doesn’t come naturally? Could we all – with enough time, training and support – become the next Jeff Bezos, Richard Branson or Martha Lane Fox?
In my professional life as an engineer in industry, we often talk about the importance of invention and innovation. Without them, products will become dated and firms will lose their competitive edge. However, inventions don’t necessarily sell themselves, which is where entrepreneurs have a key influence.
So what’s the difference between inventors, innovators and entrepreneurs? An inventor, to me, is someone who creates a new process, application or machine. An innovator is a person who introduces something new or does something for the first time. An entrepreneur, however, is someone who sets up a business or takes on a venture, embracing financial risks with the aim of profit.
Scientists and engineers are naturally good inventors and innovators. We like to solve problems, improve how we do things, and make the world more ordered and efficient. In fact, many of the greatest inventors and innovators of all time were scientists and engineers – think James Watt, George Stephenson and Frank Whittle.
But entrepreneurship requires different, additional qualities. Many entrepreneurs come from a variety of backgrounds – not just science and engineering – and tend to have finance in their blood. They embrace risk and have unlimited amounts of courage and business acumen – skills I’d need to pick up if I wanted to be an entrepreneur myself.
Risk and reward
Engineers are encouraged to take risks, exploring new technologies and designs; in fact, it’s critical for companies seeking to stay competitive. But we take risks in a calculated and professional manner that prioritizes safety, quality, regulations and ethics, and project success. We balance risk taking with risk management, spotting and assessing potential risks – and mitigating or removing them if they’re big.
Courage is not something I’ve always had professionally. Over time, I have learned to speak up if I feel I have something to say that’s important to the situation or contributes to our overall understanding. Still, there’s always a fear of saying something silly in front of other people or being unable to articulate a view adequately. But entrepreneurs have courage in their DNA.
So can entrepreneurship be taught? Specifically, can it be taught to people like me with a technical background – and, if so, how? Some of the most famous innovators, like Henry Ford, Thomas Edison, Steve Jobs, James Dyson and Benjamin Franklin, had scientific or engineering backgrounds, so is there a formula for making more people like them?
Skill sets and gaps
Let’s start by listing the skills that most engineers have that could be beneficial for entrepreneurship. In no particular order, these include:
problem-solving ability: essential for designing effective solutions and identifying market gaps;
innovative mindset: critical for building a successful business venture;
analytical thinking: engineers make decisions based on data and logic, which is vital for business planning and decision making;
persistence: a prerequisite for delivering engineering projects and for overcoming the challenges of starting a business;
technical expertise: a significant competitive advantage that provides credibility, especially for tech start-ups.
However, there are mindset differences between engineers and entrepreneurs that any training would need to overcome. These include:
risk tolerance: engineers typically focus on improving reliability and reducing risk, whilst entrepreneurs are more comfortable with embracing greater uncertainty;
focus: engineers concentrate on delivering to requirements, whilst entrepreneurs focus on consumer needs and speed to market;
business acumen: a typical engineering education doesn’t cover essential business skills such as marketing, sales and finance, all of which are vital for running a company.
Such skills may not always come naturally to engineers and scientists, but they can be incorporated into our teaching and learning. Some great examples of how to do this were covered in Physics World last year. In addition, there is a growing number of UK universities offering science and engineering degrees combined with entrepreneurship.
The message is that whilst some scientists and engineers become entrepreneurs, not all do. Simply having a science or engineering background is no guarantee of becoming an entrepreneur, nor is it a requirement. Nevertheless, the problem-solving and technical skills developed by scientists and engineers are powerful assets that, when combined with business acumen and entrepreneurial drive, can lead to business success.
Of course, entrepreneurship may not suit everybody – and that’s perfectly fine. No-one should be forced to become an entrepreneur if they don’t want to. We all need to play to our core strengths and interests and build well-rounded teams with complementary skillsets – something that every successful business needs. But surely there’s a way of teaching entrepreneurship too?
Shapiro steps – a series of abrupt jumps in the voltage–current characteristic of a Josephson junction that is exposed to microwave radiation – have been observed for the first time in ultracold gases by groups in Germany and Italy. Their work on atomic Josephson junctions provides new insights into the phenomenon, and could lead to a standard for chemical potential.
In 1962 Brian Josephson of the University of Cambridge calculated that, if two superconductors were separated by a thin insulating barrier, the phase difference between the wavefunctions on either side should induce quantum tunneling, leading to a current at zero potential difference.
A year later, Sidney Shapiro and colleagues at the consultants Arthur D. Little showed that inducing an alternating electric current using a microwave field causes the phase of the wavefunction on either side of a Josephson junction to evolve at different rates, leading to quantized increases in potential difference across the junction. The height of these “Shapiro steps” depends only on the applied frequency of the field and the electrical charge. This is now used as a reference standard for the volt.
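In quantitative terms (a standard textbook relation rather than something spelled out in the new papers), the voltage of the nth step is

V_n = nhf/2e,

where h is Planck’s constant, f is the microwave frequency and 2e is the charge of a Cooper pair – so fixing the drive frequency fixes the voltage.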
Researchers have subsequently developed analogues of Josephson junctions in other systems such as liquid helium and ultracold atomic gases. In the new work, two groups have independently observed Shapiro steps in ultracold quantum gases. Instead of placing a fixed insulator in the centre and driving the system with a field, the researchers used focused laser beams to create potential barriers that divided the traps into two. Then they moved the positions of the barriers to alter the potentials of the atoms on either side.
Current emulation
“If we move the atoms with a constant velocity, that means there’s a constant velocity of atoms through the barrier,” says Herwig Ott of RPTU University Kaiserslautern-Landau in Germany, who led one of the groups. “This is how we emulate a DC current. Now for the Shapiro protocol you have to apply an AC current, and the AC current you simply get by modulating your barrier in time.”
Ott and colleagues in Kaiserslautern, in collaboration with researchers in Hamburg and the United Arab Emirates (UAE), used a Bose–Einstein condensate (BEC) of rubidium-87 atoms. Meanwhile in Italy, Giulia Del Pace of the European Laboratory for Nonlinear Spectroscopy at the University of Florence and colleagues (including the same UAE collaborators) studied ultracold lithium-6 atoms, which are fermions.
Both groups observed the theoretically-predicted Shapiro steps, but Ott and Del Pace explain that these observations do not simply confirm predictions. “The message is that no matter what your microscopic mechanism is, the phenomenon of Shapiro steps is universal,” says Ott. In superconductors, the Shapiro steps are caused by the breaking of Cooper pairs; in ultracold atomic gases, vortex rings are created. Nevertheless, the same mathematics applies. “This is really quite remarkable,” says Ott.
Del Pace says it was unclear whether Shapiro steps would be seen in strongly-interacting fermions, which are “way more interacting than the electrons in superconductors”. She asks, “Is it a limitation to have strong interactions or is it something that actually helps the dynamics to happen? It turns out it’s the latter.”
Magnetic tuning
Del Pace’s group applied a variable magnetic field to tune their system between a BEC of molecules, a system dominated by Cooper pairs and a unitary Fermi gas in which the particles were as strongly interacting as permitted by quantum mechanics. The size of the Shapiro steps was dependent on the strength of the interparticle interaction.
Ott and Del Pace both suggest that this effect could be used to create a reference standard for chemical potential – a measure of the strength of the atomic interaction (or equation of state) in a system.
“This equation of state is very well known for a BEC or for a strongly interacting Fermi gas…but there is a range of interaction strengths where the equation of state is completely unknown, so one can imagine taking inspiration from the way Josephson junctions are used in superconductors and using atomic Josephson junctions to study the equation of state in systems where the equation of state is not known,” explains Del Pace.
The two papers – one from Del Pace’s group and one from Ott’s – are published side by side in Science.
Rocío Jáuregui Renaud of the National Autonomous University of Mexico is impressed, especially by the demonstration in both bosons and fermions. “The two papers are important, and they are congruent in their results, but the platform is different,” she says. “At this point, the idea is not to give more information directly about superconductivity, but to learn more about phenomena that sometimes you are not able to see in electronic systems but you would probably see in neutral atoms.”
Insect-inspired robots are often based on bees and flies. These designs rely on constant flapping, which requires a lot of power, so the robots either carry heavy batteries or are tethered to a power supply.
Grasshoppers, however, can jump and glide as well as flap their wings. And while they are not the best gliders among insects, they have another trick: they can retract and unfurl their wings.
Grasshoppers have two sets of wings, the forewings and hindwings. The front wing is mainly used for protection and camouflage while the hindwing is used for flight. The hindwing is corrugated, which allows it to fold in neatly like an accordion.
A team of engineers, biologists and entomologists analysed the wings of the American grasshopper, also known as the bird grasshopper due to its superior flying skills. They took CT scans of the insects and then used the findings to 3D-print model wings. They then attached the wings to small frames to create grasshopper-inspired gliders, finding that their performance was on par with that of actual grasshoppers.
They also tweaked certain wing features such as the shape, camber and corrugation, finding that a smooth wing actually produced gliding that was more efficient and repeatable than one with corrugations. “This showed us that these corrugations might have evolved for other reasons,” notes Princeton engineer Aimy Wissa, who adds that “very little” is known about how grasshoppers deploy their wings.
The researchers say that further work could result in new ways to extend the flight time of insect-sized robots without the need for heavy batteries or tethering. “This grasshopper research opens up new possibilities not only for flight, but also for multimodal locomotion,” adds Lee. “By combining biology with engineering, we’re able to build and ideate on something completely new.”
What if a chemical reaction, ocean waves or even your heartbeat could all be used as clocks? That’s the starting point of a new study by Kacper Prech, Gabriel Landi and collaborators, who uncovered a fundamental, universal limit to how precisely time can be measured in noisy, fluctuating systems. Their discovery – the clock uncertainty relation (CUR) – doesn’t just refine existing theory, it reframes timekeeping as an information problem embedded in the dynamics of physical processes, from nanoscale biology to engineered devices.
At the foundation of this work is a simple but powerful reframing: anything that “clicks” regularly is a clock. In the research paper’s opening analogy, a castaway tries to cook a fish without a wristwatch. They could count bird calls, ocean waves or heartbeats – each a potential timekeeper with a different cadence and regularity. But questions remain: given real-world fluctuations, what’s the best way to estimate time, and what are the inescapable limits?
The authors answer both. They show that for a huge class of systems – those described by classical, Markovian jump processes (systems where the future depends only on the present state, not the past history – a standard model across statistical physics and biophysics) – there is a tight, achievable bound on timekeeping precision. The bound is controlled not by how often the system jumps on average (the traditional “dynamical activity”), but by a subtler quantity: the mean residual time, or the average time you’d wait for the next event if you start observing at a random moment. That distinction matters.
The inspection paradox: the graphic illustrates the mean residual time used in the CUR and how it connects to the so-called inspection paradox – a counterintuitive bias whereby randomly arriving observers are more likely to land in longer gaps between events. Buses arrive in clusters (gaps of 5 min) separated by long intervals (15 min), so while the average time between buses might seem moderate, a randomly arriving passenger (represented by the coloured figures) is statistically more likely to land in one of the long 15-min gaps than in a short 5-min one. The mean residual time is the average time a passenger waits for their bus if they arrive at the bus stop at a random time. Counterintuitively, for sufficiently irregular schedules this can even exceed the average time between buses. The visual also demonstrates why the mean residual time captures more information than the simple average interval, since it accounts for the uneven distribution of gaps that biases your real waiting experience. (Courtesy: IOP Publishing)
The study introduces the CUR, a universal, tight bound on timekeeping precision that – unlike earlier bounds – can be saturated, and the researchers identify the exact observables that achieve this limit. Surprisingly, the optimal strategy for estimating time from a noisy process is remarkably simple: sum the expected waiting times of each observed state along the trajectory, rather than relying on complex fitting methods. The work also reveals that the true limiting factor for precision isn’t the traditional dynamical activity, but rather the inverse of the mean residual time. This makes the CUR provably tighter than the earlier kinetic uncertainty relation, especially in systems far from equilibrium.
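To make the bus-stop picture concrete, here is a minimal simulation sketch of the inspection paradox (an illustration written for this article, not code from the paper, and the authors’ formal definition of the mean residual time may differ in detail). With an equal mix of 5-minute and 15-minute gaps, a randomly arriving passenger lands in a long gap three times out of four and waits longer, on average, than the naive “half the average gap” estimate would suggest:

```python
import random

# Hypothetical bus schedule: equal numbers of short (5 min) and long (15 min) gaps.
GAPS = [5, 15]
N_ARRIVALS = 100_000
random.seed(0)

long_gap_hits = 0
total_wait = 0.0
for _ in range(N_ARRIVALS):
    # A random arrival is more likely to fall inside a long gap: the chance of
    # landing in a gap is proportional to its length (length-biased sampling).
    gap = random.choices(GAPS, weights=GAPS)[0]
    long_gap_hits += (gap == 15)
    total_wait += random.uniform(0, gap)   # time left until the next bus

mean_gap = sum(GAPS) / len(GAPS)           # 10 min
print(f"fraction of arrivals landing in a 15-min gap: {long_gap_hits / N_ARRIVALS:.2f}")  # ~0.75
print(f"average wait for the next bus: {total_wait / N_ARRIVALS:.2f} min")                # ~6.25 min
print(f"naive guess (half the average gap): {mean_gap / 2:.2f} min")                      # 5.00 min
```

The more uneven the gaps, the stronger this bias becomes – which is why the mean residual time, rather than the simple average interval, is the quantity that ends up limiting timekeeping precision.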
The team also connects precision to two practical clock metrics: resolution (how often a clock ticks) and accuracy (how many ticks it produces before drifting by one tick). The two trade off against each other: achieving steadier ticks comes at the cost of accepting fewer of them per unit of time.
This framework offers practical tools across several domains. It can serve as a diagnostic for detecting hidden states in complex biological or chemical systems: if measured event statistics violate the CUR, that signals the presence of hidden transitions or memory effects. For nanoscale and molecular clocks – like biomolecular oscillators (cellular circuits that produce rhythmic chemical signals) and molecular motors (protein machines that walk along cellular tracks) – the CUR sets fundamental performance limits and guides the design of optimal estimators. Finally, while this work focuses on classical systems, it establishes a benchmark for quantum clocks, pointing toward potential quantum advantages and opening new questions about what trade-offs emerge in the quantum regime.
Landi, an associate professor of theoretical quantum physics at the University of Rochester, emphasizes the conceptual shift: that clocks aren’t just pendulums and quartz crystals. “Anything is a clock,” he notes. The team’s framework “gives the recipe for constructing the best possible clock from whatever fluctuations you have,” and tells you “what the best noise-to-signal ratio” can be. In everyday terms, the Sun is accurate but low-resolution for cooking; ocean waves are higher resolution but noisier. The CUR puts that intuition on firm mathematical ground.
Looking forward, the group is exploring quantum generalizations and leveraging CUR violations to infer hidden structure in biological data. A tantalizing foundational question lingers: can robust biological timekeeping emerge from many bad, noisy clocks synchronizing into a good one?
Ultimately, this research doesn’t just sharpen a bound; it reframes timekeeping as a universal inference task grounded in the flow of events. Whether you’re a cell sensing a chemical signal, a molecular motor stepping along a track or an engineer building a nanoscale device, the message is clear: to tell time well, count cleverly – and respect the gaps.
A new microscope that can simultaneously measure both forward- and backward-scattered light from a sample could allow researchers to image both micro- and nanoscale objects at the same time. The device could be used to observe structures as small as individual proteins, as well as the environment in which they move, say the researchers at the University of Tokyo who developed it.
“Our technique could help us link cell structures with the motion of tiny particles inside and outside cells,” explains Kohki Horie of the University of Tokyo’s department of physics, who led this research effort. “Because it is label-free, it is gentler on cells and better for long observations. In the future, it could help quantify cell states, holding potential for drug testing and quality checks in the biotechnology and pharmaceutical industries.”
Detecting forward and backward scattered light at the same time
The new device combines two powerful imaging techniques routinely employed in biomedical applications: quantitative phase microscopy (QPM) and interferometric scattering (iSCAT).
QPM measures forward-scattered (FS) light – that is, light waves that travel in the same direction as before they were scattered. This technique is excellent at imaging structures in the Mie scattering region (greater than 100 nm, referred to as microscale in this study). This makes it ideal for visualizing complex structures such as biological cells. It falls short, however, when it comes to imaging structures in the Rayleigh scattering region (smaller than 100 nm, referred to as nanoscale in this study).
The second technique, iSCAT, detects backward-scattered (BS) light. This is light that’s reflected back towards the direction from which it came and which predominantly contains Rayleigh scattering. As such, iSCAT exhibits high sensitivity for detecting nanoscale objects. Indeed, the technique has recently been used to image single proteins, intracellular vesicles and viruses. It cannot, however, image microscale structures because of its limited ability to detect in the Mie scattering region.
The team’s new bidirectional quantitative scattering microscope (BiQSM) is able to detect both FS and BS light at the same time, thereby overcoming these previous limitations.
Cleanly separating the signals from FS and BS
The BiQSM system illuminates a sample through an objective lens from two opposite directions and detects both the FS and BS light using a single image sensor. The researchers use the spatial-frequency multiplexing method of off-axis digital holography to capture both images simultaneously. The biggest challenge, says Horie, was to cleanly separate the signals from FS and BS light in the images while keeping noise low and avoiding mixing between them.
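The principle behind that separation can be illustrated with a generic off-axis holography demodulation sketch (written for this article under simplifying assumptions; it is not the team’s actual processing pipeline): each signal rides on its own spatial-frequency carrier, so the two can be pulled apart by isolating their sidebands in Fourier space.

```python
import numpy as np

def extract_field(hologram, carrier, window=0.08):
    """Recover the complex field encoded on a given spatial-frequency carrier.

    hologram : 2D real array (the recorded interference pattern)
    carrier  : (fx, fy) carrier frequency in cycles per pixel
    window   : radius (cycles per pixel) of the band kept around the carrier
    """
    ny, nx = hologram.shape
    FX, FY = np.meshgrid(np.fft.fftfreq(nx), np.fft.fftfreq(ny))

    spectrum = np.fft.fft2(hologram)
    # Keep only the sideband centred on this channel's carrier...
    mask = (FX - carrier[0])**2 + (FY - carrier[1])**2 < window**2
    sideband = spectrum * mask
    # ...then remove the carrier fringe to bring the field back to zero frequency.
    x = np.arange(nx)
    y = np.arange(ny)[:, None]
    return np.fft.ifft2(sideband) * np.exp(-2j * np.pi * (carrier[0] * x + carrier[1] * y))

# Two channels multiplexed on different carriers (for example, forward- and
# backward-scattered signals) can then be recovered from a single camera frame:
# field_fs = extract_field(frame, carrier=(0.15, 0.0))
# field_bs = extract_field(frame, carrier=(0.0, 0.15))
```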
Horie and colleagues Keiichiro Toda, Takuma Nakamura and team leader Takuro Ideguchi tested their technique by imaging live cells. They were able to visualize micron-sized cell structures, including the nucleus, nucleoli and lipid droplets, as well as nanoscale particles. They compared the FS and BS results using the scattering-field amplitude (SA), defined as the ratio of the amplitude of the scattered wave to that of the incident illumination wave.
“SA characterizes the light scattered in both the forward and backward directions within a unified framework,” says Horie, “so allowing for a direct comparison between FS and BS light images.”
Spurred on by their findings, which are detailed in Nature Communications, the researchers say they now plan to study even smaller particles such as exosomes and viruses.
This episode of the Physics World Weekly podcast features Alex May, whose research explores the intersection of quantum gravity and quantum information theory. Based at Canada’s Perimeter Institute for Theoretical Physics, May explains how ideas being developed in the burgeoning field of quantum information theory could help solve one of the most enduring mysteries in physics – how to reconcile quantum mechanics with Einstein’s general theory of relativity, creating a viable theory of quantum gravity.
This interview was recorded in autumn 2025 when I had the pleasure of visiting the Perimeter Institute and speaking to four physicists about their research. This is the last of those conversations to appear on the podcast.
Chess is a seemingly simple game, but one that hides incredible complexity. In the standard game, the starting positions of the pieces are fixed so top players rely on memorizing a plethora of opening moves, which can sometimes result in boring, predictable games. It’s also the case that playing as white, and therefore going first, offers an advantage.
In the 1990s, former chess world champion Bobby Fischer proposed another way to play chess to encourage more creative play.
This form of the game – dubbed Chess960 – keeps the pawns in the same position but randomizes where the pieces at the back of the board – the knights, bishops, rooks, king and queen – are placed at the start while keeping the rest of the rules the same. It is named after the 960 starting positions that result from mixing it up at the back.
It was thought that the extra permutations in Chess960 would make the game fairer for both players. Yet research by physicist Marc Barthelemy at Paris-Saclay University suggests it’s not that simple.
Initial advantage
He used the open-source chess engine Stockfish to analyse each of the 960 starting positions and developed a statistical method to measure decision-making complexity by calculating how much “information” a player needs to identify the best moves.
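Barthelemy’s precise statistical measure is defined in his paper; purely as an illustration of the general idea, one could turn a Stockfish multi-line evaluation of a position into a probability distribution over candidate moves and compute its Shannon entropy – positions where several moves look almost equally good then need more “information” to resolve. The sketch below (written for this article) assumes the python-chess library, a Stockfish binary on the system path and an arbitrary softmax temperature:

```python
import math
import chess
import chess.engine

def move_entropy(board, engine, depth=18, n_lines=5, temperature=100.0):
    """Shannon entropy (bits) of a softmax over the engine's top candidate moves --
    one rough proxy for how hard it is to identify the best move."""
    infos = engine.analyse(board, chess.engine.Limit(depth=depth), multipv=n_lines)
    # Centipawn scores from the side-to-move's point of view.
    scores = [info["score"].pov(board.turn).score(mate_score=100_000) for info in infos]
    best = max(scores)
    weights = [math.exp((s - best) / temperature) for s in scores]
    probs = [w / sum(weights) for w in weights]
    return -sum(p * math.log2(p) for p in probs if p > 0)

if __name__ == "__main__":
    engine = chess.engine.SimpleEngine.popen_uci("stockfish")  # engine path is an assumption
    board = chess.Board.from_chess960_pos(518)  # index 518 is the standard starting position
    print(f"opening-move entropy: {move_entropy(board, engine):.2f} bits")
    engine.quit()
```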
He found that the standard game can be unfair: the player with the black pieces, who moves second, is constantly forced to keep up with white’s moves.
Yet regardless of the starting positions at the back, Barthelemy discovered that white still has an advantage in almost all – 99.6% – of the 960 positions. He also found that the standard set-up – rook, knight, bishop, queen, king, bishop, knight, rook – is nothing special and is presumably a historical accident, perhaps surviving because the arrangement is visually symmetrical and easy to remember.
“Standard chess, despite centuries of cultural evolution, does not occupy an exceptional location in this landscape: it exhibits a typical initial advantage and moderate total complexity, while displaying above-average asymmetry in decision difficulty,” writes Barthelemy.
For a fairer, more balanced match, Barthelemy suggests playing position #198, in which the back-rank pieces start as queen, knight, bishop, rook, king, bishop, knight and rook.
The Compact Muon Solenoid (CMS) Collaboration has made the first measurements of the quantum properties of a family of three “all-charm” tetraquarks recently discovered at the Large Hadron Collider (LHC) at CERN. The findings could shed more light on the strong nuclear force, which holds protons and neutrons together in nuclei, and could ultimately help us better understand how ordinary matter forms.
In recent years, the LHC has discovered dozens of new hadrons – massive particles made of quarks bound together by the strong force. Quarks come in six types: up, down, charm, strange, top and bottom. Most observed hadrons comprise two or three quarks (called mesons and baryons, respectively). Physicists have also observed exotic hadrons that comprise four or five quarks – tetraquarks and pentaquarks, respectively. Those seen so far usually contain a charm quark and its antimatter counterpart (a charm antiquark), with the remaining two or three quarks being up, down or strange quarks, or their antiquarks.
Identifying and studying tetraquarks and pentaquarks helps physicists to better understand how the strong force binds quarks together. This force also binds protons and neutrons in atomic nuclei.
Physicists are still divided as to the nature of these exotic hadrons. Some models suggest that their quarks are tightly bound via the strong force, so making these hadrons compact objects. Others say that the quarks are only loosely bound. To confuse things further, there is evidence that in some exotic hadrons, the quarks might be both tightly and loosely bound at the same time.
Now, new findings from the CMS Collaboration suggest that these all-charm tetraquarks are tightly bound, though they do not completely rule out other models.
Measuring quantum numbers
In their work, which is detailed in Nature, CMS physicists studied all-charm tetraquarks. These comprise two charm quarks and two charm antiquarks and were produced by colliding protons at high energies at the LHC. Three states of this tetraquark have been identified at the LHC: X(6900), X(6600) and X(7100), where the numbers denote their approximate masses in millions of electron volts. The team measured the fundamental properties of these tetraquarks, including their quantum numbers: parity (P), charge conjugation (C) and total angular momentum, or spin (J). P determines whether a particle has the same properties as its spatial mirror image; C whether it has the same properties as its antiparticle; and J is the total angular momentum of the hadron. These numbers provide information on the internal structure of a tetraquark.
The researchers used a version of a well-known technique called angular analysis, which is similar to the technique used to characterize the Higgs boson. This approach focuses on the angles at which the decay products of the all-charm tetraquarks are scattered.
“We call this technique quantum state tomography,” explains CMS team member Chiara Mariotti of the INFN Torino in Italy. “Here, we deduce the quantum state of an exotic state X from the analysis of its decay products. In particular, the angular distributions in the decay X → J/ψ J/ψ, followed by J/ψ decays into two muons, serve as analysers of polarization of two J/ψ particles,” she explains.
The researchers analysed all-charm tetraquarks produced at the CMS experiment between 2016 and 2018. They calculated that J is likely to be 2 and that P and C are both +1. This combination of properties is expressed as 2++.
Result favours tightly-bound quarks
“This result favours models in which all four quarks are tightly bound,” says particle physicist Timothy Gershon of the UK’s University of Warwick, who was not involved in this study. “However, the question is not completely put to bed. The sample size in the CMS analysis is not sufficient to exclude fully other possibilities, and additionally certain assumptions are made that will require further testing in future.”
Gershon adds, “These include assumptions that all three states have the same quantum numbers, and that all correspond to tetraquark decays to two J/ψ mesons with no additional particles not included in the reconstruction (for example there could be missing photons that have been radiated in the decay).”
Further studies with larger data samples are warranted, he adds. “Fortunately, CMS as well as both the LHCb and the ATLAS collaborations [at CERN] already have larger samples in hand, so we should not have to wait too long for updates.”
Indeed, the CMS Collaboration is now gathering more data and exploring additional decay modes of these exotic tetraquarks. “This will ultimately improve our understanding how this matter forms, which, in turn, could help refine our theories of how ordinary matter comes into being,” Mariotti tells Physics World.
When people think of wind energy, they usually think of windmill-like turbines dotted among hills or lined up on offshore platforms. But there is also another kind of wind energy, one that replaces stationary, earthbound generators with tethered kites that harvest energy as they soar through the sky.
This airborne form of wind energy, or AWE, is not as well-developed as the terrestrial version, but in principle it has several advantages. Power-generating kites are much less massive than ground-based turbines, which reduces both their production costs and their impact on the landscape. They are also far easier to install in areas that lack well-developed road infrastructure. Finally, and perhaps most importantly, wind speeds are many times greater at high altitudes than they are near the ground, significantly enhancing the power densities available for kites to harvest.
There is, however, one major technical challenge for AWE, and it can be summed up in a single word: control. AWE technology is operationally more complex than conventional turbines, and the traditional method of controlling kites (known as model-predictive control) struggles to adapt to turbulent wind conditions. At best, this reduces the efficiency of energy generation. At worst, it makes it challenging to keep devices safe, stable and airborne.
In a paper published in EPL, Antonio Celani and his colleagues Lorenzo Basile and Maria Grazia Berni of the University of Trieste, Italy, and the Abdus Salam International Centre for Theoretical Physics (ICTP) propose an alternative control method based on reinforcement learning. In this form of machine learning, an agent learns to make decisions by interacting with its environment and receiving feedback in the form of “rewards” for good performance. This form of control, they say, should be better at adapting to the variable and uncertain conditions that power-generating kites encounter while airborne.
What was your motivation for doing this work?
Our interest originated from some previous work where we studied a fascinating bird behaviour called thermal soaring. Many birds, from the humble seagull to birds of prey and frigatebirds, exploit atmospheric currents to rise in the sky without flapping their wings, and then glide or swoop down. They then repeat this cycle of ascent and descent for hours, or even for weeks if they are migratory birds. They’re able to do this because birds are very effective at extracting energy from the atmosphere to turn it into potential energy, even though the atmospheric flow is turbulent, hence very dynamic and unpredictable.
Antonio Celani. (Courtesy: Antonio Celani)
In those works, we showed that we could use reinforcement learning to train virtual birds and also real toy gliders to soar. That got us wondering whether this same approach could be exported to AWE.
When we started looking at the literature, we saw that in most cases, the goal was to control the kite to follow a predetermined path, irrespective of the changing wind conditions. These cases typically used only simple models of atmospheric flow, and almost invariably ignored turbulence.
This is very different from what we see in birds, which adapt their trajectories on the fly depending on the strength and direction of the fluctuating wind they experience. This led us to ask: can a reinforcement learning (RL) algorithm discover efficient, adaptive ways of controlling a kite in a turbulent environment to extract energy for human consumption?
What is the most important advance in the paper?
We offer a proof of principle that it is indeed possible to do this using a minimal set of sensor inputs and control variables, plus an appropriately designed reward/punishment structure that guides trial-and-error learning. The algorithm we deploy finds a way to manoeuvre the kite such that it generates net energy over one cycle of operation. Most importantly, this strategy autonomously adapts to the ever-fluctuating conditions induced by turbulence.
Lorenzo Basile. (Courtesy: Lorenzo Basile)
The main point of RL is that it can learn to control a system just by interacting with the environment, without requiring any a priori knowledge of the dynamical laws that rule its behaviour. This is extremely useful when the systems are very complex, like the turbulent atmosphere and the aerodynamics of a kite.
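To give a flavour of what such trial-and-error learning looks like in code, here is a generic tabular Q-learning loop on a made-up, drastically simplified “kite” environment (an illustration written for this article; the states, actions and reward below are invented and are not the authors’ setup):

```python
import random

# Hypothetical discretization: the agent senses a wind bin and a kite-angle bin,
# and chooses how to adjust the kite's angle of attack.
WIND_BINS, ANGLE_BINS = 5, 7
ACTIONS = [-1, 0, +1]              # decrease / keep / increase the angle
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

Q = {}                             # Q[(state, action)] -> expected discounted reward

def q(s, a):
    return Q.get((s, a), 0.0)

def choose_action(state):
    if random.random() < EPSILON:                    # occasionally explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q(state, a))   # otherwise exploit what was learned

def toy_environment(state, action):
    """Stand-in for a kite-flight simulator. In a real AWE setting the reward
    would be the net energy generated over the step; here it is a made-up
    penalty for failing to track the fluctuating wind."""
    wind, angle = state
    angle = min(ANGLE_BINS - 1, max(0, angle + action))
    wind = random.randrange(WIND_BINS)               # turbulence: wind jumps randomly
    return (wind, angle), -abs(angle - wind)

state = (WIND_BINS // 2, ANGLE_BINS // 2)
for _ in range(50_000):
    action = choose_action(state)
    next_state, reward = toy_environment(state, action)
    # Q-learning update: nudge Q towards reward + discounted best future value.
    best_next = max(q(next_state, a) for a in ACTIONS)
    Q[(state, action)] = q(state, action) + ALPHA * (reward + GAMMA * best_next - q(state, action))
    state = next_state
```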
What are the barriers to implementing RL in real AWE kites, and how might these barriers be overcome?
The virtual environment that we use in our paper to train the kite controller is very simplified, and in general the gap between simulations and reality is wide. We therefore regard the present work mostly as a stimulus for the AWE community to look deeper into alternatives to model-predictive control, like RL.
On the physics side, we found that some phases of an AWE generating cycle are very difficult for our system to learn, and they require a painful fine-tuning of the reward structure. This is especially true when the kite is close to the ground, where winds are weaker and errors are the most punishing. In those cases, it might be a wise choice to use other heuristic, hard-wired control strategies rather than RL.
Finally, in a virtual environment like the one we used to do the RL training in this work, it is possible to perform many trials. In real power kites, this approach is not feasible – it would take too long. However, techniques like offline RL might resolve this issue by interleaving a few field experiments where data are collected with extensive off-line optimization of the strategy. We successfully used this approach in our previous work to train real gliders for soaring.
What do you plan to do next?
We would like to explore the use of offline RL to optimize energy production for a small, real AWE system. In our opinion, the application to low-power systems is particularly relevant in contexts where access to the power grid is limited or uncertain. A lightweight, easily portable device that can produce even small amounts of energy might make a big difference in the everyday life of remote, rural communities, and more generally in the global south.
Circularly polarized (CP) light is encoded with information through its photon spin and can be utilized in applications such as low-power displays, encrypted communications and quantum technologies. Organic light emitting diodes (OLEDs) produce CP light with a left or right “handedness”, depending on the chirality of the light-emitting molecules used to create the device.
While OLEDs usually only emit either left- or right-handed CP light, researchers have now developed OLEDs that can electrically switch between emitting left- or right-handed CP light – without needing different molecules for each handedness.
“We had recently identified an alternative mechanism for the emission of circularly polarized light in OLEDs, using our chiral polymer materials, which we called anomalous circularly polarized electroluminescence,” says lead author Matthew Fuchter from the University of Oxford. “We set about trying to better understand the interplay between this new mechanism and the generally established mechanism for circularly polarized emission in the same chiral materials”.
Light handedness controlled by molecular chirality
The CP light handedness of an organic emissive molecule is controlled by its chirality. A chiral molecule is one that cannot be superimposed on its mirror image; the two mirror-image forms are called enantiomers. Each enantiomer absorbs, emits and refracts CP light with a defined spin angular momentum, and the two enantiomers produce CP light of opposite handedness through an optical mechanism called normal circularly polarized electroluminescence (NCPE).
OLED designs typically require access to both enantiomers, but most chemical synthesis processes will produce racemic mixtures (equal amounts of the two enantiomers) that are difficult to separate. Extracting each enantiomer so that they can be used individually is complex and expensive, but the research at Oxford has simplified this process by using a molecule that can switch between emitting left- and right-handed CP light.
The molecule in question is a helical molecule called (P)-aza[6]helicene, which is the right-handed enantiomer. Even though it is just a one-handed form, the researchers found a way to control the handedness of the OLED, enabling it to switch between both forms.
Switching handedness without changing the structure
The researchers designed the helicene molecules so that the handedness of the light could be switched electrically, without needing to change the structure of the material itself. “Our work shows that either handedness can be accessed from a single-handed chiral material without changing the composition or thickness of the emissive layer,” says Fuchter. “From a practical standpoint, this approach could have advantages in future circularly polarized OLED technologies.”
Instead of making a structural change, the researchers changed the way that the electric charges are recombined in the device, using interlayers to alter the recombination position and charge carrier mobility inside the device. Depending on where the recombination zone is located, this leads to situations where there is balanced or unbalanced charge transport, which then leads to different handedness of CP light in the device.
When the recombination zone is located in the centre of the emissive layer, the charge transport is balanced, which generates an NCPE mechanism. In these situations, the helicene adopts its normal handedness (right handedness).
However, when the recombination zone is located close to one of the transport layers, it creates an unbalanced charge transport mechanism called anomalous circularly polarized electroluminescence (ACPE). The ACPE overrides the NCPE mechanism and inverts the handedness of the device to left handedness by altering the balance of induced orbital angular momentum in electrons versus holes. The presence of these two electroluminescence mechanisms in the device enables it to be controlled electrically by tuning the charge carrier mobility and the recombination zone position.
The research allows the creation of OLEDs with controllable spin angular momentum information using a single emissive enantiomer, while probing the fundamental physics of chiral optoelectronics. “This work contributes to the growing body of evidence suggesting further rich physics at the intersection of chirality, charge and spin. We have many ongoing projects to try and understand and exploit such interplay,” Fuchter concludes.
Born in 1916, Crick studied physics at University College London in the mid-1930s, before working for the Admiralty Research Laboratory during the Second World War. But after reading physicist Erwin Schrödinger’s 1944 book What Is Life? The Physical Aspect of the Living Cell, and a 1946 article on the structure of biological molecules by chemist Linus Pauling, Crick left his career in physics and switched to molecular biology in 1947.
Six years later, while working at the University of Cambridge, he played a key role in decoding the double-helix structure of DNA, working in collaboration with biologist James Watson, biophysicist Maurice Wilkins and other researchers including chemist and X-ray crystallographer Rosalind Franklin. Crick, alongside Watson and Wilkins, went on to receive the 1962 Nobel Prize in Physiology or Medicine for the discovery.
Finally, Crick’s career took one more turn in the mid-1970s. After experiencing a mental health crisis, Crick left Britain and moved to California. He took up neuroscience in an attempt to understand the roots of human consciousness, as discussed in his 1994 book, The Astonishing Hypothesis: the Scientific Search for the Soul.
Parallel lives
When he died in 2004, Crick’s office wall at the Salk Institute in La Jolla, US, carried portraits of Charles Darwin and Albert Einstein, as Cobb notes on the final page of his deeply researched and intellectually fascinating biography. But curiously, there is not a single other reference to Einstein in Cobb’s massive book. Furthermore, there is no reference at all to Einstein in the equally large 2009 biography of Crick, Francis Crick: Hunter of Life’s Secrets, by historian of science Robert Olby, who – unlike Cobb – knew Crick personally.
Nevertheless, a comparison of Crick and Einstein is illuminating. Crick’s family background (in the shoe industry), and his childhood and youth are in some ways reminiscent of Einstein’s. Both physicists came from provincial business families of limited financial success, with some interest in science yet little intellectual distinction. Both did moderately well at school and college, but were not academic stars. And both were exposed to established religion, but rejected it in their teens; they had little intrinsic respect for authority, without being open rebels until later in life.
The similarities continue into adulthood, with the two men following unconventional early scientific careers. Both of them were extroverts who loved to debate ideas with fellow scientists (at times devastatingly), although they were equally capable of long, solitary periods of concentration throughout their careers. In middle age, they migrated from their home countries – Germany (Einstein) and Britain (Crick) – to take up academic positions in the US, where they were much admired and inspiring to other scientists, but failed to match their earlier scientific achievements.
In their personal lives, both Crick and Einstein had a complicated history with women. Having divorced their first wives, they had a variety of extramarital affairs – as discussed by Cobb without revealing the names of these women – while remaining married to their second wives. Interestingly, Crick’s second wife, Odile Crick (to whom he was married for 55 years), was an artist, and drew the famous schematic drawing of the double helix published in Nature in 1953.
Stories of friendships
Although Cobb misses this fascinating comparison with Einstein, many other vivid stories light up his book. For example, he recounts Watson’s claim that just after their success with DNA in 1953, “Francis winged into the Eagle [their local pub in Cambridge] to tell everyone within hearing distance that we had found the secret of life” – a story that later appeared on a plaque outside the pub.
“Francis always denied he said anything of the sort,” notes Cobb, “and in 2016, at a celebration of the centenary of Crick’s birth, Watson publicly admitted that he had made it up for dramatic effect (a few years earlier, he had confessed as much to Kindra Crick, Francis’s granddaughter).” No wonder Watson’s much-read 1968 book The Double Helix caused a furious reaction from Crick and a temporary breakdown in their friendship, as Cobb dissects in excoriating detail.
Watson’s deprecatory comments on Franklin helped to provoke the current widespread belief that Crick and Watson succeeded by stealing Franklin’s data. After an extensive analysis of the available evidence, however, Cobb argues that the data was willingly shared with them by Franklin, but that they should have formally asked her permission to use it in their published work – “Ambition, or thoughtlessness, stayed their hand.”
In fact, it seems Crick and Franklin were friends in 1953, and remained so – with Franklin asking Crick for his advice on her draft scientific papers – until her premature death from ovarian cancer in 1958. Indeed, after her first surgery in 1956, Franklin went to stay with Crick and his wife at their house in Cambridge, and then returned to them after her second operation. There certainly appears to be no breakdown in trust between the two. When Crick was nominated for the Nobel prize in 1961, he openly stated, “The data which really helped us obtain the structure was mainly obtained by Rosalind Franklin.”
As for Crick’s later study of consciousness, Cobb comments, “It would be easy to dismiss Crick’s switch to studying the brain as the quixotic project of an ageing scientist who did not know his limits. After all, he did not make any decisive breakthrough in understanding the brain – nothing like the double helix… But then again, nobody else did, in Crick’s lifetime or since.” One is perhaps reminded once again of Einstein, and his preoccupation during later life with his unified field theory, which remains an open line of research today.
Sound waves can make small objects hover in the air, but applying this acoustic levitation technique to an array of objects is difficult because the objects tend to clump together. Physicists at the Institute of Science and Technology Austria (ISTA) have now overcome this problem thanks to hybrid structures that emerge from the interplay between attractive acoustic forces and repulsive electrostatic ones. By proving that it is possible to levitate many particles while keeping them separated, the finding could pave the way for advances in acoustic-levitation-assisted 3D printing, mid-air chemical synthesis and micro-robotics.
In acoustic levitation, particles ranging in size from tens of microns to millimetres are drawn up into the air and confined by an acoustic force. The origins of this force lie in the momentum that the applied acoustic field transfers to a particle as sound waves scatter off its surface. While the technique works well for single particles, multiple particles tend to aggregate into a single dense object in mid-air because the sound waves they scatter can, collectively, create an attractive interaction between them.
Keeping particles separated
Led by Scott Waitukaitis, the ISTA researchers found a way to avoid this so-called “acoustic collapse” by using a tuneable repulsive electrostatic force to counteract the attractive acoustic one. They began by levitating a single silver-coated poly(methyl methacrylate) (PMMA) microsphere 250‒300 µm in diameter above a reflector plate coated with a transparent and conductive layer of indium tin oxide (ITO). They then imbued the particle with a precisely controlled amount of electrical charge by letting it rest on the ITO plate with the acoustic field off, but with a high-voltage DC potential applied between the plate and a transducer. This produces a capacitive build-up of charge on the particle, and the amount of charge can be estimated from Maxwell’s solutions for two contacting conductive spheres (assuming, in the calculations, that the lower plate acts like a sphere with infinite radius).
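For a sense of scale (using a commonly quoted form of Maxwell’s result, not a figure taken from the paper), a conducting sphere of radius R resting on a plane electrode in a uniform field E = V/d picks up a charge of roughly

q ≈ (2π³/3) ε₀R²E,

so the charge on each particle can be dialled up or down simply by changing the applied voltage.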
The next step in the process is to switch on the acoustic field and, after just 10 ms, add the electric field to it. During the short period in which both fields are on, and provided the electric field is strong enough, either field is capable of launching the particle towards the centre of the levitation setup. The electric field is then switched off. A few seconds later, the particle levitates stably in the trap, with a charge given, in principle, by Maxwell’s approximations.
A visually mesmerizing dance of particles
This charging method works equally well for multiple particles, allowing the researchers to load particles into the trap with high efficiency and virtually any charge they want, limited only by the breakdown voltage of the surrounding air. Indeed, the physicists found they could tune the charge to levitate particles separately or collapse them into a single, dense object. They could even create hybrid states that mix separated and collapsed particles.
And that wasn’t all. According to team member Sue Shi, a PhD student at ISTA and the lead author of a paper in PNAS about the research, the most exciting moment came when they saw the compact parts of the hybrid structures spontaneously begin to rotate, while the expanded parts stayed in place, oscillating in response to the rotation. The result was “a visually mesmerizing dance,” Shi says, adding that “this is the first time that such acoustically and electrostatically coupled interactions have been observed in an acoustically levitated system.”
As well as having applications in areas such as materials science and micro-robotics, Shi says the technique developed in this work could be used to study non-reciprocal effects that lead to the particles rotating or oscillating. “This would pave the way for understanding more elusive and complex non-reciprocal forces and many-body interactions that likely influence the behaviours of our system,” Shi tells Physics World.
Heat travels across a metal via the movement of electrons. In an insulator, however, there are no free charge carriers; instead, vibrations of the atoms (phonons) carry the heat from hot regions to cool regions along a straight path. In some materials, when a magnetic field is applied, the phonons begin to move sideways – this is known as the phonon Hall effect. Quantised collective excitations of the spin structure, called magnons, can do the same via the magnon Hall effect. When magnons and phonons interact strongly, a combined effect occurs in which the coupled excitations are deflected sideways – the magnon–polaron Hall effect.
The transverse heat flow is usually attributed to a quantum mechanical property known as Berry curvature. Yet in some materials the effect is greater than Berry curvature alone can explain. In this research, an exceptionally large thermal Hall effect is recorded in MnPS₃, an insulating antiferromagnetic material with strong magnetoelastic coupling and a spin-flop transition. The thermal Hall angle remains large down to 4 K and cannot be accounted for by standard Berry-curvature-based models.
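For context, the thermal Hall angle is the standard figure of merit for such measurements: it compares the transverse (Hall) thermal conductivity with the longitudinal one, tan θ_H = κ_xy/κ_xx, so a “large” angle means that a sizeable fraction of the heat current is deflected sideways by the magnetic field.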
This work provides an in-depth analysis of the role of the spin-flop transition in MnPS₃’s thermal properties and highlights the need for new theoretical approaches to understand magnon–phonon coupling and scattering. Materials with large thermal Hall effects could be used to control heat in nanoscale devices such as thermal diodes and transistors.
Topological insulators are materials that are insulating in the bulk, yet exhibit conductive states on their surfaces at energies (or, for classical waves, frequencies) within the bulk bandgap. These surface states are topologically protected, meaning they cannot easily be disrupted by local perturbations. In general, a material of n dimensions can host (n−1)-dimensional topological boundary states. If the symmetry protecting these states is further broken, a bandgap can open between the (n−1)-dimensional states, enabling the emergence of (n−2)-dimensional topological states. For example, a 3D material can host protected 2D surface states, and breaking an additional symmetry can open a bandgap between these surface states, allowing protected 1D edge states to appear. A material undergoing such a process is known as a higher-order topological insulator. In general, higher-order topological states appear in dimensions one lower than the parent topological phase because of the further reduction of unit-cell symmetry, so second-order states require at least a 2D lattice and the maximal order in 3D systems is three.
The researchers here introduce a new method for repeatedly opening the bandgap between topological states and generating new states within those gaps in an unbounded manner – without breaking symmetries or reducing dimensions. Their approach creates hierarchical topological insulators by repositioning domain walls between different topological regions. This process opens bandgaps between original topological states while preserving symmetry, enabling the formation of new hierarchical states within the gaps. Using one‑ and two‑dimensional Su–Schrieffer–Heeger models, they show that this procedure can be repeated to generate multiple, even infinite, hierarchical levels of topological states, exhibiting fractal-like behavior reminiscent of a Matryoshka doll. These higher-level states are characterized by a generalized winding number that extends conventional topological classification and maintains bulk-edge correspondence across hierarchies.
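For readers unfamiliar with the model, here is a minimal numerical sketch of the textbook SSH winding-number calculation (a generic illustration written for this article, not the authors’ generalized hierarchical invariant):

```python
import numpy as np

def ssh_winding_number(v, w, n_k=2001):
    """Winding number of the standard SSH chain with intracell hopping v and
    intercell hopping w: 1 in the topological phase (w > v), 0 otherwise."""
    k = np.linspace(-np.pi, np.pi, n_k)
    # Off-diagonal element of the bulk Bloch Hamiltonian H(k) = [[0, h(k)], [h*(k), 0]].
    h = v + w * np.exp(1j * k)
    # Count how many times h(k) winds around the origin as k sweeps the Brillouin zone.
    phase = np.unwrap(np.angle(h))
    return int(round((phase[-1] - phase[0]) / (2 * np.pi)))

print(ssh_winding_number(v=0.5, w=1.0))  # 1 -> protected edge states appear
print(ssh_winding_number(v=1.0, w=0.5))  # 0 -> trivial phase, no edge states
```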
The researchers confirm the existence of second‑ and third-level domain‑wall and edge states and demonstrate that these states remain robust against perturbations. Their approach is scalable to higher dimensions and applicable not only to quantum systems but also to classical waves such as phononics. This broadens the definition of topological insulators and provides a flexible way to design complex networks of protected states. Such networks could enable advances in electronics, photonics, and phonon‑based quantum information processing, as well as engineered structures for vibration control. The ability to design complex, robust, and tunable hierarchical topological states could lead to new types of waveguides, sensors, and quantum devices that are more fault-tolerant and programmable.
The boundary between a substance’s liquid and solid phases may not be as clear-cut as previously believed. A new state of matter that is a hybrid of both has emerged in research by scientists at the University of Nottingham, UK and the University of Ulm, Germany, and they say the discovery could have applications in catalysis and other thermally-activated processes.
In liquids, atoms move rapidly, sliding over and around each other in a random fashion. In solids, they are fixed in place. The transition between the two states, solidification, occurs when random atomic motion transitions to an ordered crystalline structure.
At least, that’s what we thought. Thanks to a specialist microscopy technique, researchers led by Nottingham’s Andrei Khlobystov found that this simple picture isn’t entirely accurate. In fact, liquid metal nanoparticles can contain stationary atoms – and as the liquid cools, their number and position play a significant role in solidification.
Some atoms remain stationary
The team used a method called spherical and chromatic aberration-corrected high-resolution transmission electron microscopy (Cc/Cs-corrected HRTEM) at the low-voltage SALVE instrument at Ulm to study melted metal nanoparticles (such as platinum, gold and palladium) deposited on an atomically thin layer of graphene. This carbon-based material acted as a sort of “hob” for heating the particles, says team member Christopher Leist, who was in charge of the HRTEM experiments. “As they melted, the atoms in the nanoparticles began to move rapidly, as expected,” Leist says. “To our surprise, however, we found that some atoms remained stationary.”
At high temperatures, these static atoms bind strongly to point defects in the graphene support. When the researchers used the electron beam from the transmission microscope to increase the number of these defects, the number of stationary atoms within the liquid increased, too. Khlobystov says that this had a knock-on effect on how the liquid solidified: when the stationary atoms are few in number, a crystal forms directly from the liquid and continues to grow until the entire particle has solidified. When their numbers increase, the crystallization process cannot take place and no crystals form.
“The effect is particularly striking when stationary atoms create a ring (corral) that surrounds and confines the liquid,” he says. “In this unique state, the atoms within the liquid droplet are in motion, while the atoms forming the corral remain motionless, even at temperatures well below the freezing point of the liquid.”
Unprecedented level of detail
The researchers chose to use Cc/Cs-corrected HRTEM in their study because minimizing spherical and chromatic aberrations through specialized hardware installed on the microscope enabled them to resolve single atoms in their images.
“Additionally, we can control both the energy of the electron beam and the sample temperature (the latter using MEMS-heated chip technology),” Khlobystov explains. “As a result, we can study metal samples at temperatures of up to 800 °C, even in a molten state, without sacrificing atomic resolution. We can therefore observe atomic behaviour during crystallization while actively manipulating the environment around the metal particles using the electron beam or by cooling the particles. This level of detail under such extreme conditions is unprecedented.”
Effect could be harnessed for catalysis
The Nottingham-Ulm researchers, who report their work in ACS Nano, say they obtained their results by chance while working on an EPSRC-funded project on 1-2 nm metal particles for catalysis applications. “Our approach involves assembling catalysts from individual metal atoms, utilizing on-surface phenomena to control their assembly and dynamics,” explains Khlobystov. “To gain this control, we needed to investigate the behaviour of metal atoms at varying temperatures and within different local environments on a support material.
“We suspected that the interplay between vacancy defects in the support and the sample temperature creates a powerful mechanism for controlling the size and structure of the metal particles,” he tells Physics World. “Indeed, this study revealed the fundamental mechanisms behind this process with atomic precision.”
The experiments were far from easy, he recalls, with one of the key challenges being to identify a thin, robust and thermally conductive support material for the metal. Happily, graphene meets all these criteria.
“Another significant hurdle to overcome was to be able to control the number of defect sites surrounding each particle,” he adds. “We successfully accomplished this by using the TEM’s electron beam not just as an imaging tool, but also as a means to modify the environment around the particles by creating defects.”
The researchers say they would now like to explore whether the effect can be harnessed for catalysis. To do this, Khlobystov says it will be essential to improve control over defect production and its scale. “We also want to image the corralled particles in a gas environment to understand how the phenomenon is influenced by reaction conditions, since our present measurements were conducted in a vacuum,” he adds.
Rob Farr is a theorist and computer modeller whose career has taken him down an unconventional path. He studied physics at the University of Cambridge, UK, from 1991 to 1994, staying on to do a PhD in statistical physics. But while many of his contemporaries then went into traditional research fields – such as quantum science, high-energy physics and photonic technologies – Farr got a taste for the food and drink manufacturing industry. It’s a multidisciplinary field in which Farr has worked for more than 25 years.
After leaving academia in 1998, his first stop was Unilever’s €13bn foods division. For two decades, latterly as a senior scientist, Farr guided R&D teams working across diverse lines of enquiry – “doing the science, doing the modelling”, as he puts it. Along the way, Farr worked on all manner of consumer products including ice-cream, margarine and non-dairy spreads, as well as “dry” goods such as bouillon cubes. There was also the occasional foray into cosmetics, skin creams and other non-food products.
As a theoretical physicist working in industrial-scale food production, Farr’s focus has always been on the materials science of the end-product and how it gets processed. “Put simply,” says Farr, “that means making production as efficient as possible – regarding both energy and materials use – while developing ‘new customer experiences’ in terms of food taste, texture and appearance.”
Ice-cream physics
One tasty multiphysics problem that preoccupied Farr for a good chunk of his time at Unilever is ice cream. It is a hugely complex material that Farr likens to a high-temperature ceramic, in the sense that the crystalline part of it is stored very near to the melting point of ice. “Equally, the non-ice phase contains fats,” he says, “so there’s all sorts of emulsion physics and surface science to take into consideration.”
Ice cream also has polymers in the mix, so theoretical modelling needs to incorporate the complex physics of polymer–polymer phase separation as well as polymer flow, or “rheology”, which contributes to the product’s texture and material properties. “Air is another significant component of ice cream,” adds Farr, “which means it’s a foam as well as an emulsion.”
As well as trying to understand how all these subcomponents interact, there’s also the thorny issue of storage. After it’s produced, ice cream is typically kept at low temperatures of about –25 °C – first in the factory, then in transit and finally in a supermarket freezer. But once that tub of salted-caramel or mint choc chip reaches a consumer’s home, it’s likely to be popped in the ice compartment of a fridge freezer at a much milder –6 or –7 °C.
Manufacturers therefore need to control how those temperature transitions affect the recrystallization of ice – an unwanted outcome that can lead to phenomena like “sintering” (which makes a harder product) and “ripening” (which can produce big ice crystals that are detectable in the mouth and detract from the creamy texture).
“Basically, the whole panoply of soft-matter physics comes into play across the production, transport and storage of ice cream,” says Farr. “Figuring out what sort of materials systems will lead to better storage stability or a more consistent product texture is a non-trivial question, given that the global market for ice cream is worth in excess of €100bn annually.”
A shot of coffee?
After almost 20 years working at Unilever, in 2017 Farr took up a role as coffee science expert at JDE Peet’s, the Dutch multinational coffee and tea company. Switching from the chilly depths of ice cream science to the dark arts of coffee production and brewing might seem like a steep career phase change, but the physics of the former provides a solid bridge to the latter.
The overlap is evident, for example, in how instant coffee gets freeze-dried – a low-temperature dehydration process that manufacturers use to extend the shelf-life of perishable materials and make them easier to transport. In the case of coffee, freeze drying (also known as lyophilization) also helps to retain flavour and aromas.
If you want to study a parameter space that’s not been explored before, the only way to do that is to simulate the core processes using fundamental physics
After roasting and grinding the raw coffee beans, manufacturers extract a coffee concentrate using high pressure and water. This extract is then frozen, ground up and placed in a vacuum well below 0 °C. A small amount of heat is applied to sublime the ice away and remove the remaining water from the non-ice phase.
The quality of the resulting freeze-dried instant coffee is better than ordinary instant coffee. However, freeze-drying is also a complex and expensive process, which manufacturers seek to fine-tune by implementing statistical methods to optimize, for example, the amount of energy consumed during production.
Such approaches involve interpolating the gaps between existing experimental data sets, which is where a physics mind-set comes in. “If you want to study a parameter space that’s not been explored before,” says Farr, “the only way to do that is to simulate the core processes using fundamental physics.”
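One concrete quantity such models and optimizations chase is the energy budget of the sublimation step itself. The back-of-the-envelope sketch below is illustrative only: the water content and plant efficiency are assumptions rather than JDE Peet’s figures, and only the latent heat of sublimation of ice is a physical constant.

```python
# Rough energy bookkeeping for freeze-drying (illustrative assumptions throughout).
L_SUBLIMATION_J_PER_KG = 2.84e6      # latent heat of sublimation of ice near -20 degC

water_removed_kg = 1.5               # assumed kg of ice sublimed per kg of finished instant coffee
ideal_energy_MJ = L_SUBLIMATION_J_PER_KG * water_removed_kg / 1e6

# Real dryers are far from ideal: vacuum pumps, condensers and heat losses all add up
plant_efficiency = 0.3               # assumed fraction of input energy that actually sublimes ice
actual_energy_MJ = ideal_energy_MJ / plant_efficiency

print(f"Thermodynamic minimum: {ideal_energy_MJ:.1f} MJ per kg of product")
print(f"With plant losses:     {actual_energy_MJ:.1f} MJ per kg of product")
```

Even small percentage gains in that efficiency translate into significant cost and carbon savings at production scale, which is what makes the optimization worthwhile.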
Beyond the production line, Farr has also sought to make coffee more stable when it’s stored at home. Sustainability is the big driver here: JDE Peet’s has committed to make all its packaging compostable, recyclable or reusable by 2030. “Shelf-life prediction has been a big part of this R&D initiative,” he explains. “The work entails using materials science and the physics of mass transfer to develop next-generation packaging and container systems.”
Line of sight
After eight years unpacking the secrets of coffee physics at JDE Peet’s, Farr was given the option to relocate to the Netherlands in mid-2025 as part of a wider reorganization of the manufacturer’s corporate R&D function. However, he decided to stay put in Oxford and is now deciding between another role in the food manufacturing sector, or moving into a new area of research, such as nuclear energy, or even education.
Cool science “The whole panoply of soft-matter physics comes into play across the production, transport and storage of ice-cream,” says industrial physicist Rob Farr. (Courtesy: London Institute for Mathematical Sciences)
Farr believes he gained a lot from his time at JDE Peet’s. As well as studying a wide range of physics problems, he also benefited from the company’s rigorous approach to R&D, whereby projects are regularly assessed for profitability and quickly killed off if they don’t make the cut. Such prioritization avoids wasted effort and investment, but it also demands agility from staff scientists, who have to build long-term research strategies against a project landscape in constant flux.
A senior scientist needs to be someone who colleagues come to informally to discuss their technical challenges
To thrive in that setting, Farr says collaboration and an open mind are essential. “A senior scientist needs to be someone who colleagues come to informally to discuss their technical challenges,” he says. “You can then find the scientific question which underpins seemingly disparate problems and work with colleagues to deliver commercially useful solutions.” For Farr, it’s a self-reinforcing dynamic. “As more people come to you, the more helpful you become – and I love that way of working.”
What Farr calls “line-of-sight” is another unique feature of industrial R&D in food materials. “Maybe you’re only building one span of a really long bridge,” he notes, “but when you can see the process end-to-end, as well as your part in it, that is a fantastic motivator.” Indeed, Farr believes that for physicists who want a job doing something useful, the physics of food materials makes a great career. “There are,” he concludes, “no end of intriguing and challenging research questions.”
Physicists in the UK have succeeded in routing and teleporting entangled states of light between two four-user quantum networks – an important milestone in the development of scalable quantum communications. Led by Mehul Malik and Natalia Herrera Valencia of Heriot-Watt University in Edinburgh, Scotland, the team achieved this feat thanks to a new method that uses light-scattering processes in an ordinary optical fibre to program a circuit. This approach, which is radically different from conventional methods based on photonic chips, allows the circuit to function as a programmable entanglement router that can implement several different network configurations on demand.
The team performed the experiments using commercially available optical fibres, which are multi-mode structures that scatter light via random linear optical processes. In simple terms, Herrera Valencia explains that this means the light tends to ricochet chaotically through the fibres along hundreds of internal pathways. While this effect can scramble entanglement, researchers at the Institut Langevin in Paris, France, had previously found that the scrambling can be calculated by analysing how the fibre transmits light. What is more, the light-scattering processes in such a medium can be harnessed to make programmable optical circuits – which is exactly what Malik, Herrera Valencia and colleagues did.
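The underlying idea – that a well-characterized scattering medium can be “programmed” – is easiest to see in a linear-algebra toy model. The sketch below is not the Heriot-Watt team’s method and all the numbers are invented: it simply treats the fibre as a known random unitary transmission matrix acting on classical field amplitudes, and shows that once that matrix has been measured, the right input superposition routes the light into any output mode you choose.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Toy "fibre": a random unitary matrix that mixes N optical modes, standing in
# for the chaotic (but linear) scattering inside a multimode fibre.
N = 64
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
T, _ = np.linalg.qr(A)               # random unitary transmission matrix

# Desired output: all of the light concentrated in one chosen mode
target = np.zeros(N, dtype=complex)
target[10] = 1.0

# If T has been measured, the required input follows by inverting the scattering;
# for a unitary T the inverse is just the conjugate transpose.
e_in = T.conj().T @ target
e_out = T @ e_in

print(f"Fraction of power routed into mode 10: {abs(e_out[10])**2:.3f}")
```

Changing the target vector reprograms the “circuit” without touching the fibre itself, which is loosely the sense in which the scattering medium acts as a reconfigurable router.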
“Top-down” approach
The researchers explain that this “top-down” approach simplifies the circuit’s architecture because it separates the layer where the light is controlled from the layer in which it is mixed. Using waveguides for transporting and manipulating the quantum states of light also reduces optical losses. The result is a reconfigurable multi-port device that can distribute quantum entanglement between many users simultaneously in multiple patterns, switching between different channels (local connections, global connections or both) as required.
A further benefit is that the channels can be multiplexed, allowing many quantum processors to access the system at the same time. The researchers say this is similar to multiplexing in classical telecommunications networks, which makes it possible to send huge amounts of data through a single optical fibre using different wavelengths of light.
Access to a large number of modes
Although controlling and distributing entangled states of light is key for quantum networks, Malik says it comes with several challenges. One of these is that conventional methods based on photonic chips cannot be scaled up easily. They are also very sensitive to imperfections in how they’re made. In contrast, the waveguide-based approach developed by the Heriot-Watt team “opens up access to a large number of modes, providing significant improvements in terms of achievable circuit size, quality and loss,” Malik tells Physics World, adding that the approach also fits naturally with existing optical fibre infrastructures.
Gaining control over the complex scattering process inside a waveguide was not easy, though. “The main challenge was the learning curve and understanding how to control quantum states of light inside such a complex medium,” Herrera Valencia recalls. “It took time and iteration, but we now have the precise and reconfigurable control required for reliable entanglement distribution, and even more so for entanglement swapping, which is essential for scalable networks.”
While the Heriot-Watt team used the technique to demonstrate flexible quantum networking, Malik and Herrera Valencia say it might also be used for implementing large-scale photonic circuits. Such circuits could have many applications, ranging from machine learning to quantum computing and networking, they add.
Looking ahead, the researchers, who report their work in Nature Photonics, say they are now aiming to explore larger-scale circuits that can operate on more photons and light modes. “We would also like to take some of our network technology out of the laboratory and into the real world,” says Malik, adding that Herrera Valencia is leading a commercialization effort in that direction.
Multimodal monitoring Pressure and strain sensors on a clinical trial volunteer undergoing an ultrasound scan (left). Snapshot image of the ultrasound video recording (right). (Courtesy: Yap et al., Sci. Adv. 11 eady2661)
The ability to continuously monitor and interpret foetal movement patterns in the third trimester of a pregnancy could help detect any potential complications and improve foetal wellbeing. Currently, however, such assessment of foetal movement is performed only periodically, with an ultrasound exam at a hospital or clinic.
A lightweight, easily wearable, adhesive patch-based sensor developed by engineers and obstetricians at Monash University in Australia may change this. The patches, two of which are worn on the abdomen, can detect foetal movements such as kicking, waving, hiccups, breathing, twitching, and head and trunk motion.
Reduced foetal movement can be associated with potential impairment in the central nervous system and musculoskeletal system, and is a common feature observed in pregnancies that end in foetal death and stillbirth. A foetus compromised in utero may reduce movements as a compensatory strategy to lower oxygen consumption and conserve energy.
To help identify foetuses at risk of complications, the Monash team developed an artificial intelligence (AI)-powered wearable pressure–strain combo sensor system that continuously and accurately detects foetal movement-induced motion in the mother’s abdominal skin. As reported in Science Advances, the “band-aid”-like sensors can discriminate between foetal and non-foetal movement with over 90% accuracy.
The system comprises two soft, thin and flexible patches designed to conform to the abdomen of a pregnant woman. One patch incorporates an octagonal gold nanowire-based strain sensor (the “Octa” sensor), the other is an interdigitated electrode-based pressure sensor.
Pressure and strain combo Photograph of the sensors on a pregnant mother (A). Exploded illustration of the foetal kicks strain sensor (B) and the pressure sensor (C). Dimensions of the strain (D) and pressure (E) sensors. (Courtesy: Yap et al., Sci. Adv. 11 eady2661)
The patches feature a soft polyimide-based flexible printed circuit (FPC) that integrates a thin lithium polymer battery and various integrated circuit chips, including a Bluetooth radiofrequency system for reading the sensor’s electrical resistance, storing data and communicating with a smartphone app. Each patch is encapsulated with kinesiology tape and sticks to the abdomen using a medical double-sided silicone adhesive.
The Octa sensor connects to the primary device via a separate FPC connector, enabling easy replacement after each study. The pressure sensor is mounted on the silicone adhesive, connecting with the interdigitated electrode beneath the primary device. The Octa and pressure sensor patches are lightweight (about 3 g) and compact, measuring 63 × 30 × 4 mm and 62 × 28 × 2 mm, respectively.
Trialling the device
The researchers validated their foetal movement monitoring system via comparison with simultaneous ultrasound exams, examining 59 healthy pregnant women at Monash Health. Each participant had the pressure sensor attached to the area of their abdomen where they felt the most vigorous foetal movements, typically in the lower quadrant, while the strain sensor was attached to the region closest to foetal limbs. An accelerometer placed on the participant’s chest captured non-foetal movement data for signal denoising and training the machine-learning model.
Principal investigator Wenlong Cheng, now at the University of Sydney, and colleagues report that “the wearable strain sensor featured isotropic omnidirectional sensitivity, enabling detection of maternal abdominal [motion] over a large area, whereas the wearable pressure sensor offered high sensitivity with a small domain, advantageous for accurate localized foetal movement detection”.
The researchers note that the pressure sensor demonstrated higher sensitivity to movements directly beneath it compared with motion farther away, while the Octa sensor performed consistently across a wider sensing area. “The combination of both sensor types resulted in a substantial performance enhancement, yielding an overall AUROC [area under the receiver operating characteristic curve] accuracy of 92.18% in binary detection of foetal movement, illustrating the potential of combining diverse sensing modalities to achieve more accurate and reliable monitoring outcomes,” they write.
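For readers curious about what an AUROC of roughly 0.92 implies – and why fusing two imperfect channels can beat either one alone – here is a minimal, entirely synthetic sketch. The feature distributions, the logistic-regression classifier and every number below are assumptions for illustration, not the Monash team’s actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-epoch features from the two patches:
# column 0 ~ pressure-sensor amplitude, column 1 ~ strain-sensor amplitude.
# Label 1 = foetal movement present (per ultrasound), 0 = no movement.
n = 2000
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 2)) + y[:, None] * np.array([1.2, 0.8])   # movements shift both channels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Score each sensor alone, then the combination
for cols, name in [([0], "pressure only"), ([1], "strain only"), ([0, 1], "combined")]:
    clf = LogisticRegression().fit(X_train[:, cols], y_train)
    scores = clf.predict_proba(X_test[:, cols])[:, 1]
    print(f"{name:13s} AUROC = {roc_auc_score(y_test, scores):.3f}")
```

In this toy setting the combined classifier scores noticeably higher than either channel alone, mirroring the qualitative result the researchers report.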
In a press statement, co-author Fae Marzbanrad explains that the device’s strength lies in a combination of soft sensing materials, intelligent signal processing and AI. “Different foetal movements create distinct strain patterns on the abdominal surface, and these are captured by the two sensors,” she says. “The machine-learning system uses the signals to detect when movement occurs while cancelling maternal movements.”
The lightweight and flexible device can be worn by pregnant women for long periods without disrupting daily life. “By integrating sensor data with AI, the system automatically captures a wider range of foetal movements than existing wearable concepts while staying compact and comfortable,” Marzbanrad adds.
The next steps towards commercialization of the sensors will include large-scale clinical studies in out-of-hospital settings, to evaluate foetal movements and investigate the relationship between movement patterns and pregnancy complications.
Since the beginning of radiation therapy, almost all treatments have been delivered with the patient lying on a table while the beam rotates around them. But a resurgence in upright patient positioning is changing that paradigm. The accelerators behind novel treatments such as proton therapy, very high energy electron (VHEE) therapy and FLASH therapy are often too large to rotate around the patient, which limits access to these beams. By rotating the patient instead, such previously hard-to-access beams could become mainstream in the future.
Join leading clinicians and experts as they discuss how this shift in patient positioning is enabling exploration of new treatment geometries and supporting the development of advanced future cancer therapies.
L-R Serdar Charyyev, Eric Deutsch, Bill Loo, Rock Mackie
Novel beams covered and their representative speaker
Serdar Charyyev – Proton Therapy – Clinical Assistant Professor at Stanford University School of Medicine
Eric Deutsch – VHEE FLASH – Head of Radiotherapy at Gustave Roussy
Bill Loo – FLASH Photons – Professor of Radiation Oncology at Stanford Medicine
Rock Mackie – Emeritus Professor at University of Wisconsin and Co-Founder and Chairman of Leo Cancer Care
Be curious As CTO of a quantum tech company, Andrew Lamb values evidence-based decision making and being surrounded by experts. (Courtesy: Delta.g)
What skills do you use every day in your job?
A quantum sensor is a combination of lots of different parts working together in harmony: a sensor head containing the atoms and isolating them from the environment; a laser system to probe the quantum structure and manipulate atomic states; electronics to drive the power and timing of a device; and software to control everything and interpret the data. As the person building, developing and maintaining these devices you need to have expertise across all these areas. In addition to these skills, as the CTO my role also requires me to set the company’s technical priorities, determine the focus of R&D activities and act as the top technical authority in the firm.
In a developing field like quantum metrology, evidence-based decision making is crucial: you have to assess information critically, disregard what is irrelevant and make an informed choice – especially when the “right answer” may not be obvious for months or even years. Challenges arise that may never have been solved before, and the best way to tackle them is to dive deep into the “why and how” of what is happening. Once the root cause is identified, a creative solution then needs to be found, whether that is something brand new or an approach borrowed from an entirely different discipline.
What do you like best and least about your job?
The best thing about my job is the way in which it enables me to grow my knowledge and understanding of a wide variety of fields, while also providing me with opportunities for creative problem solving. When you surround yourself with people who are experts in their field, there is no end to the opportunities to learn. Before co-founding Delta.g I was a researcher at the University of Birmingham, where I learnt my technical skills. Moving into a start-up, we built a multidisciplinary team to address the operational, regulatory and technical barriers to establishing a disruptive product in the marketplace. The diversity created within our company has afforded a greater pool of experts to learn from.
As the CTO, my role sits at the intersection of the technical and the commercial within the business. That means it is my responsibility to translate commercial milestones into a scientific plan, while also explaining our progress to non-experts. This can be challenging and quite stressful at times – particularly when I need to describe our scientific achievements in a way that truly reflects our advances, while still being accessible.
What do you know today that you wish you knew when you were starting out in your career?
For a long time, I didn’t know what direction I wanted to take, and I used to worry that the lack of a clear purpose would hold me back. Today I know that it doesn’t. Instead of fixating on finding a perfect path early on, it’s far more valuable to focus on developing skills that open doors. Whether those skills are technical, managerial or commercial, no knowledge is ever wasted. I’m still surprised by how often something I learned as far back as GCSE ends up being useful in my work now.
I also wish I had understood just how important it is to stay open to new opportunities. Looking back, every pivotal point in my career – switching from civil engineering to a physics degree, choosing certain undergraduate modules, applying for unexpected roles, even co-founding Delta.g – came from being willing to make a shift when an opportunity appeared. Being flexible and curious matters far more than having everything mapped out from the beginning.
Despite not being close to the frontline of Russia’s military assault on Ukraine, life at the Ivano-Frankivsk National Technical University of Oil and Gas is far from peaceful. “While we continue teaching and research, we operate under constant uncertainty – air raid alerts, electricity outages – and the emotional toll on staff and students,” says Lidiia Davybida, an associate professor of geodesy and land management.
Last year, the university became a target of a Russian missile strike, which caused extensive damage to buildings that has still not been fully repaired – although, fortunately, no casualties were reported. The university also continues to lose staff and students to the war effort – some of whom will tragically never return – while new student numbers dwindle as many school graduates leave Ukraine to study abroad.
Despite these major challenges, Davybida and her colleagues remain resolute. “We adapt – moving lectures online when needed, adjusting schedules, and finding ways to keep research going despite limited opportunities and reduced funding,” she says.
Resolute research
Davybida’s research focuses on environmental monitoring using geographic information systems (GIS), geospatial analysis and remote sensing. She has been using these techniques to monitor the devastating impact that the war is having on the environment and its significant contribution to climate change.
In 2023 she published results from using Sentinel-5P satellite data and Google Earth Engine to monitor the air-quality impacts of the war on Ukraine (IOP Conf. Ser.: Earth Environ. Sci. 1254 012112). As with the COVID-19 lockdowns worldwide, her results reveal that levels of common pollutants such as carbon monoxide, nitrogen dioxide and sulphur dioxide were, on average, down from pre-invasion levels, reflecting the temporary disruption to economic activity that the war has brought to the country.
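For readers who want a sense of how such satellite-based monitoring is done in practice, the snippet below is a minimal sketch using the Google Earth Engine Python API. It is not Davybida’s workflow: the dataset ID, band name, bounding box and date ranges are assumptions to be checked against the Earth Engine catalogue, and the script simply compares mean NO2 columns over a rough box around Ukraine before and after February 2022.

```python
import ee

ee.Initialize()  # requires an authenticated Google Earth Engine account

# Rough bounding box over Ukraine: (lon_min, lat_min, lon_max, lat_max)
ukraine = ee.Geometry.Rectangle([22.0, 44.0, 40.5, 52.5])

def mean_no2(start, end):
    """Mean tropospheric NO2 column (mol/m^2) over the box for a date range."""
    coll = (ee.ImageCollection('COPERNICUS/S5P/OFFL/L3_NO2')
            .select('tropospheric_NO2_column_number_density')
            .filterDate(start, end))
    return coll.mean().reduceRegion(
        reducer=ee.Reducer.mean(), geometry=ukraine,
        scale=10_000, maxPixels=1e9).getInfo()

print('pre-invasion period :', mean_no2('2021-03-01', '2021-09-01'))
print('wartime period      :', mean_no2('2022-03-01', '2022-09-01'))
```

A real analysis would mask clouds, work pollutant by pollutant and region by region, and compare like-for-like seasons, but the basic pattern – filter a satellite image collection, then reduce it over a region of interest – is the same.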
Wider consequences Ukrainian military, emergency services and volunteers work together to rescue people from a large flooded area in Kherson on 8 June 2023. Two days earlier, the Russian army blew up the dam of the Kakhovka hydroelectric power station, meaning about 80 settlements in the flood zone had to be evacuated. (Courtesy: Sergei Chuzavkov/SOPPA Images/Shutterstock)
More worrying, from an environment and climate perspective, were the huge concentrations of aerosols, smoke and dust in the atmosphere. “High ozone concentrations damage sensitive vegetation and crops,” Davybida explains. “Aerosols generated by explosions and fires may carry harmful substances such as heavy metals and toxic chemicals, further increasing environmental contamination.” She adds that these pollutants can alter sunlight absorption and scattering, potentially disrupting local climate and weather patterns, and contributing to long-term ecological imbalances.
A significant toll has been wrought by individual military events too. A prime example is Russia’s destruction of the Kakhovka Dam in southern Ukraine in June 2023. An international team – including Ukrainian researchers – recently attempted to quantify this damage by combining on-the-ground field surveys, remote-sensing data and hydrodynamic modelling, a tool for predicting water flow and pollutant dispersion.
The results of this work are sobering (Science 387 1181). Though 80% of the ecosystem is expected to re-establish itself within five years, the dam’s destruction released as much as 1.7 cubic kilometres of sediment contaminated by a host of persistent pollutants, including nitrogen, phosphorus and 83,000 tonnes of heavy metals. Discharging this toxic sludge across the land and waterways will have unknown long-term environmental consequences for the region, as the contaminants could be spread by future floods, the researchers concluded (figure 1).
This map shows areas of Ukraine affected or threatened by dam destruction in military operations. Arabic numbers 1 to 6 indicate rivers: Irpen, Oskil, Inhulets, Dnipro, Dnipro-Bug Estuary and Dniester, respectively. Roman numbers I to VII indicate large reservoir facilities: Kyiv, Kaniv, Kremenchuk, Kaminske, Dnipro, Kakhovka and Dniester, respectively. Letters A to C indicate nuclear power plants: Chornobyl, Zaporizhzhia and South Ukraine, respectively.
Dangerous data
A large part of the reason for the researchers’ uncertainty, and indeed more general uncertainty in environmental and climate impacts of war, stems from data scarcity. It is near-impossible for scientists to enter an active warzone to collect samples and conduct surveys and experiments. Environmental monitoring stations also get damaged and destroyed during conflict, explains Davybida – a wrong she is attempting to right in her current work. Many efforts to monitor, measure and hopefully mitigate the environmental and climate impact of the war in Ukraine are therefore less direct.
In 2022, for example, climate-policy researcher Mathijs Harmsen from the PBL Netherlands Environmental Assessment Agency and international collaborators decided to study the global energy crisis sparked by Russia’s invasion of Ukraine, to see how the war will alter climate policy (Environ. Res. Lett. 19 124088).
They did this by plugging the most recent energy price, trade and policy data (up to May 2023) into an integrated assessment model that simulates the environmental consequences of human activities worldwide. They then imposed different potential scenarios and outcomes and ran the model to 2030 and 2050. Surprisingly, all scenarios led to a global reduction of 1–5% in carbon dioxide emissions by 2030, largely because trade barriers push up fossil fuel prices, which in turn would lead to increased uptake of renewables.
But even though the sophisticated model represents the global energy system in detail, some factors are hard to incorporate and some actions can transform the picture completely, argues Harmsen. “Despite our results, I think the net effect of this whole war is a negative one, because it doesn’t really build trust or add to any global collaboration, which is what we need to move to a more renewable world,” he says. “Also, the recent intensification of Ukraine’s ‘kinetic sanctions’ [attacks on refineries and other fossil fuel infrastructure] will likely have a larger effect than anything we explored in our paper.”
Elsewhere, Toru Kobayakawa was, until recently, working for the Japan International Cooperation Agency (JICA), leading the Ukraine support team. Kobayakawa used a non-standard method to more realistically estimate the carbon footprint of reconstructing Ukraine when the war ends (Environ. Res.: Infrastruct. Sustain. 5 015015). The Intergovernmental Panel on Climate Change (IPCC) and other international bodies only account for carbon emissions within the territorial country. “The consumption-based model I use accounts for the concealed carbon dioxide from the production of construction materials like concrete and steel imported from outside of the country,” he says.
Using Eora26, an open-source database that tracks financial flows between countries’ major economic sectors in simple input–output tables, Kobayakawa calculated that Ukraine’s post-war reconstruction will amount to 741 million tonnes of carbon dioxide equivalent over 10 years. This is 4.1 times Ukraine’s pre-war annual carbon-dioxide emissions, or the combined annual emissions of Germany and Austria.
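Consumption-based accounting of this kind rests on standard environmentally extended input–output analysis. The toy sketch below uses invented numbers rather than Eora26 data, but it shows the mechanics: build a matrix of technical coefficients from inter-industry flows, take the Leontief inverse, and multiply by an emissions-intensity vector so that the carbon embodied in imported construction materials is attributed to the final demand that drives it.

```python
import numpy as np

# Toy three-sector environmentally extended input-output model (made-up numbers,
# not Eora26 data): construction, steel/cement, everything else.
Z = np.array([[10.,  5., 15.],      # inter-industry flows (monetary units)
              [30.,  8., 12.],
              [20., 10., 40.]])
x = np.array([200., 120., 400.])    # total output of each sector
e = np.array([5., 60., 30.])        # direct CO2 emitted by each sector (Mt)

A = Z / x                           # technical coefficients: inputs per unit of each sector's output
L = np.linalg.inv(np.eye(3) - A)    # Leontief inverse: total output required per unit of final demand
intensity = e / x                   # direct emissions per unit of output

# Final demand representing a reconstruction programme dominated by construction
y_reconstruction = np.array([100., 0., 10.])

# Consumption-based footprint, including emissions embodied in intermediate and imported inputs
footprint = intensity @ L @ y_reconstruction
print(f"Embodied CO2 of the reconstruction demand: {footprint:.1f} Mt")
```

The real calculation works the same way, just with many more sectors per country and the emissions of trading partners folded in.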
However, as with most war-related findings, these figures come with a caveat. “Our input–output model doesn’t take into account the current situation,” notes Kobayakawa. “It is the worst-case scenario.” Nevertheless, the research has provided useful insights, such as that the Ukrainian construction industry will account for 77% of total emissions.
“Their construction industry is notorious for inefficiency, needing frequent rework, which incurs additional costs, as well as additional carbon-dioxide emissions,” he says. “So, if they can improve efficiency by modernizing construction processes and implementing large-scale recycling of construction materials, that will contribute to reducing emissions during the reconstruction phase and ensure that they build back better.”
Military emissions gap
As the experiences of Davybida, Harmsen and Kobayakawa show, cobbling together relevant and reliable data in the midst of war is a significant challenge, from which only limited conclusions can be drawn. Researchers and policymakers need a fuller view of the environmental and climate cost of war if they are to improve matters once a conflict ends.
At present, reporting military emissions is voluntary, so data are often absent or incomplete – but gathering such data is vital. According to a 2022 estimate extrapolated from the small number of nations that do share their data, the total military carbon footprint is approximately 5.5% of global emissions. This would make the world’s militaries the fourth biggest carbon emitter if they were a nation.
The Military Emissions Gap website – which collates the military emissions data that states do report – is an attempt to fill this gap. “We hope that the UNFCCC picks up on this and mandates transparent and visible reporting of military emissions,” says Benjamin Neimark, a researcher at Queen Mary University of London (figure 2).
2 Closing the data gap
(Reused with permission from Neimark et al. 2025 War on the Climate: A Multitemporal Study of Greenhouse Gas Emissions of the Israel–Gaza Conflict. Available at SSRN)
Current United Nations Framework Convention on Climate Change (UNFCCC) greenhouse-gas emissions reporting obligations do not include all the possible types of conflict emissions, and there is no commonly agreed methodology or scope on how different countries collect emissions data. In a recent publication, War on the Climate: A Multitemporal Study of Greenhouse Gas Emissions of the Israel–Gaza Conflict, Benjamin Neimark and colleagues came up with this framework using the UNFCCC’s existing protocols. These reporting categories cover militaries and armed conflicts, and aim to highlight previously “hidden” emissions.
Measuring the destruction
Beyond plugging the military emissions gap, Neimark is also involved in developing and testing methods that he and other researchers can use to estimate the overall climate impact of war. Building on foundational work from his collaborator, Dutch climate specialist Lennard de Klerk – who developed a methodology for identifying, classifying and providing ways of estimating the various sources of emissions associated with the Russia–Ukraine war – Neimark and colleagues are trying to estimate the greenhouse-gas emissions from the Israel–Gaza conflict.
Their studies encompass pre-conflict preparation, the conflict itself and post-conflict reconstruction. “We were working with colleagues who were doing similar work in Ukraine, but every war is different,” says Neimark. “In Ukraine, they don’t have large tunnel networks, or they didn’t, and they don’t have this intensive, incessant onslaught of air strikes from carbon-intensive F16 fighter aircraft.” Some of these factors, like the carbon impact of Hamas’ underground maze of tunnels under Gaza, seem unquantifiable, but Neimark has found a way.
“There’s some pretty good data for how big these are in terms of height, the amount of concrete, how far down they’re dug and how thick they are,” says Neimark. “It’s just the length we had to work out based on reported documentation.” Finding the total amount of concrete and steel used in these tunnels involved triangulating open-source information with media reports to finalize an estimate of the dimensions of these structures. Standard emission factors could then be applied to obtain the total carbon emissions. According to data from Neimark’s Confronting Military Greenhouse Gas Emissions report, the carbon emissions from construction of concrete infrastructure by both Israel and Hamas were more than the annual emissions of 33 individual countries and territories (figure 3).
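The final step Neimark describes – turning estimated dimensions into emissions via standard emission factors – is simple arithmetic. The sketch below illustrates that step only; every figure in it (tunnel length, lining cross-section, concrete density, emission factor) is an invented placeholder, not a number from the report.

```python
# Back-of-the-envelope "dimensions times emission factor" calculation.
# All values below are illustrative assumptions, not figures from Neimark's report.
tunnel_length_m = 500_000            # assumed total network length (500 km)
concrete_cross_section_m2 = 1.5      # assumed area of concrete lining per metre of tunnel

concrete_volume_m3 = tunnel_length_m * concrete_cross_section_m2
concrete_mass_t = concrete_volume_m3 * 2.4          # roughly 2.4 tonnes per cubic metre of concrete

EMISSION_FACTOR_T_CO2_PER_T = 0.15   # assumed embodied carbon of typical concrete

embodied_co2_t = concrete_mass_t * EMISSION_FACTOR_T_CO2_PER_T
print(f"Concrete: {concrete_mass_t:,.0f} t  ->  embodied CO2: {embodied_co2_t:,.0f} t")
```

The hard part, as Neimark notes, is not this multiplication but pinning down the dimensions and choosing defensible emission factors in the first place.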
3 Climate change and the Gaza war
(Reused with permission from Neimark et al. 2024 Confronting Military Greenhouse Gas Emissions, Interactive Policy Brief, London, UK. Available from QMUL.)
Data from Benjamin Neimark, Patrick Bigger, Frederick Otu-Larbi and Reuben Larbi’s Confronting Military Greenhouse Gas Emissions report estimates the carbon emissions of the war in Gaza for three distinct periods: direct war activities; large-scale war infrastructure; and future reconstruction.
The impact of Hamas’ tunnels and Israel’s “iron wall” border fence are just two of many pre-war activities that must be factored in to estimate the Israel–Gaza conflict’s climate impact. Then, the huge carbon cost of the conflict itself must be calculated, including, for example, bombing raids, reconnaissance flights, tanks and other vehicles, cargo flights and munitions production.
Gaza’s eventual reconstruction must also be included, and it makes up a big proportion of the total impact of the war, as Kobayakawa’s Ukraine reconstruction calculations showed. The United Nations Environment Programme (UNEP) has been systematically studying and reporting on “Sustainable debris management in Gaza”, tracking debris from damaged buildings and infrastructure since the outbreak of the conflict in October 2023. Alongside estimating the amounts of debris, UNEP also models different management scenarios – ranging from disposal to recycling – to evaluate the time, resource needs and environmental impacts of each option.
Visa restrictions and the security situation have prevented UNEP staff from entering the Gaza strip to undertake environmental field assessments to date. “While remote sensing can provide a valuable overview of the situation … findings should be verified on the ground for greater accuracy, particularly for designing and implementing remedial interventions,” says a UNEP spokesperson. They add that when it comes to the issue of contamination, UNEP needs “confirmation through field sampling and laboratory analysis” and that UNEP “intends to undertake such field assessments once conditions allow”.
The main risk from hazardous debris – which is likely to make up about 10–20% of the total debris – arises when it is mixed with and contaminates the rest of the debris stock. “This underlines the importance of preventing such mixing and ensuring debris is systematically sorted at source,” adds the UNEP spokesperson.
The ultimate cost
With all these estimates, and adopting a Monte Carlo analysis to account for uncertainties, Neimark and colleagues concluded that, from the first 15 months of the Israel–Gaza conflict, total carbon emissions were 32 million tonnes, which is huge given that the territory has a total area of just 365 km². The number also continues to rise.
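Monte Carlo analysis is a natural fit here because each emissions category carries its own, often large, uncertainty. The sketch below shows the general approach with made-up categories and spreads – they bear no relation to the actual inputs used by Neimark and colleagues – by drawing each category from a lognormal distribution and reading off a central estimate and interval for the total.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000   # number of Monte Carlo draws

# Hypothetical emission categories as (central estimate in Mt CO2e, relative uncertainty).
# Illustrative values only -- not those of the actual study.
categories = {
    "aircraft sorties":           (6.0, 0.30),
    "munitions production":       (4.0, 0.40),
    "tunnels and fortifications": (3.0, 0.50),
    "future reconstruction":      (18.0, 0.35),
}

total = np.zeros(N)
for name, (mean, rel_sd) in categories.items():
    # Lognormal draws keep each category positive while matching the stated spread
    sigma = np.sqrt(np.log(1 + rel_sd**2))
    mu = np.log(mean) - 0.5 * sigma**2
    total += rng.lognormal(mean=mu, sigma=sigma, size=N)

lo, med, hi = np.percentile(total, [5, 50, 95])
print(f"Total emissions: {med:.1f} Mt CO2e (90% interval {lo:.1f}-{hi:.1f})")
```

Summing the draws rather than the central estimates is what lets the headline figure carry an honest uncertainty range with it.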
Rubble and ruins Khan Younis in the Gaza Strip on 11 February 2025, showing the widespread damage to buildings and infrastructure. (Courtesy: Shutterstock/Anas Mohammed)
Why does this number matter? When lives are being lost in Gaza, Ukraine, and across Sudan, Myanmar and other regions of the world, calculating the environmental and climate cost of war might seem like something only worth bothering about when the fighting stops.
But doing so even while conflicts are taking place can help protect important infrastructure and land, avoid environmentally disastrous events, and ensure that the long rebuild, wherever the conflict may be happening, is informed by science. The UNEP spokesperson says that it is important to “systematically integrate environmental considerations into humanitarian and early recovery planning from the outset” rather than treating the environment as an afterthought. They highlight that governments should “embed it within response plans – particularly in areas where it can directly impact life-saving activities, such as debris clearance and management”.
With Ukraine still in the midst of war, it seems right to leave the final word to Davybida. “Armed conflicts cause profound and often overlooked environmental damage that persists long after the fighting stops,” she says. “Recognizing and monitoring these impacts is vital to guide practical recovery efforts, protect public health, prevent irreversible harm to ecosystems and ensure a sustainable future.”
I used to set myself the challenge every December of predicting what might happen in physics over the following year. Gazing into my imaginary crystal ball, I tried to speculate on the potential discoveries, the likely trends, and the people who might make the news over the coming year. It soon dawned on me that making predictions in physics is a difficult, if not futile, task.
Apart from space missions pencilled in for launch on set dates, or particle colliders or light sources due to open, so much in science is simply unknown. That uncertainty of science is, of course, also its beauty; if you knew what was out there, looking for it wouldn’t be quite as much fun. So if you’re wondering what’s in store for 2026, I don’t know – you’ll just have to read Physics World to find out.
Having said that – and setting aside the insane upheaval going on in US science – this year’s Physics World Live series will give you some sense of what’s hot in physics right now, at least as far as we here at Physics World headquarters are concerned.
The first online panel discussion will be on quantum metrology – a burgeoning field that seeks to ensure companies and academics can test, validate and commercialize new quantum tech. Yes, the International Year of Quantum Science and Technology officially ends with a closing ceremony in Ghana in February, but the impact of quantum physics will continue to reverberate throughout 2026.
Another of our online panels will be on medical physics, bringing together the current and two past editors-in-chief of Physics in Medicine & Biology. Published by IOP Publishing on behalf of the Institute of Physics and Engineering in Medicine, the journal turns 70 this year. The speakers will be reflecting on the vital role of medical-physics research to medicine and biology and examining how the field’s evolved since the journal was set up.
Medical physics will also be the focus of a new “impact project” in 2026 from the IOP, which will be starting another on artificial intelligence (AI) as well. The IOP will in addition be continuing its existing impact work on metamaterials, which were of course pioneered by – among others – the Imperial College theorist John Pendry. I wonder if a Nobel prize could be in store for him this year? That’s one prediction I’ll make that would be great if it came true.
Until then, on behalf of everyone at Physics World, I wish all readers – wherever you are – a happy and successful 2026. Your continued support is greatly valued.
From cutting onions to a LEGO Jodrell Bank, physics has had its fair share of quirky stories this year. Here is our pick of the best, not in any particular order.
Flight of the nematode
Researchers in the US this year discovered that a tiny jumping worm uses static electricity to increase its chances of attaching to unsuspecting prey. The parasitic roundworm Steinernema carpocapsae can leap some 25 times its body length by curling into a loop and springing into the air. If the nematode lands successfully on a victim, it releases bacteria that kill the insect within a couple of days, after which the worm feasts and lays its eggs. To investigate whether static electricity aids their flight, a team at Emory University and the University of California, Berkeley, used high-speed microscopy to film the worms as they leapt onto a fruit fly that was tethered with a copper wire connected to a high-voltage power supply. The researchers found that a charge of a few hundred volts – similar to that generated in the wild by an insect’s wings rubbing against ions in the air – induces a negative charge on the worm, creating an attractive force with the positively charged fly. They discovered that without any electrostatics, only 1 in 19 worm trajectories successfully reached their target. The greater the voltage, however, the greater the chance of landing, with 880 V resulting in an 80% probability of success. “We’re helping to pioneer the emerging field of electrostatic ecology,” notes Emory physicist Ranjiangshang Ran.
Tear-jerking result
While it is known that volatile chemicals released from onions irritate the nerves in the cornea to produce tears, how such chemical-laden droplets reach the eyes and whether they are influenced by the knife or cutting technique remain less clear. To investigate, Sunghwan Jung from Cornell University and colleagues built a guillotine-like apparatus and used high-speed video to observe the droplets released from onions as they were cut by steel blades. They found that droplets, which can reach up to 60 cm high, were released in two stages – the first being a fast mist-like outburst that was followed by threads of liquid fragmenting into many droplets. The most energetic droplets were released during the initial contact between the blade and the onion’s skin. When they began varying the sharpness of the blade and the cutting speed, they discovered that a greater number of droplets were released by blunter blades and faster cutting speeds. “That was even more surprising,” notes Jung. “Blunter blades and faster cuts – up to 40 m/s – produced significantly more droplets with higher kinetic energy.” Another surprise was that refrigerating the onions prior to cutting also produced an increased number of droplets of similar velocity, compared to room-temperature vegetables.
LEGO telescope
Students at the University of Manchester in the UK created a 30,500-piece LEGO model of the iconic Lovell Telescope to mark the 80th anniversary of the Jodrell Bank Observatory, which was founded in December 1945. Built in 1957, the 76.2 m diameter telescope was the largest steerable dish radio telescope in the world at the time. The LEGO model was designed by Manchester’s undergraduate physics society and is based on the telescope’s original engineering blueprints. Student James Ruxton spent six months perfecting the design, which even involved producing custom-designed LEGO bricks with a 3D printer. Ruxton and fellow students began construction in April and the end result is a model weighing 30 kg with 30,500 pieces and a whopping 4000-page instruction manual. “It’s definitely the biggest and most challenging build I’ve ever done, but also the most fun,” says Ruxton. “I’ve been a big fan of LEGO since I was younger, and I’ve always loved creating my own models, so recreating something as iconic as the Lovell is like taking that to the next level!” The model has gone on display in a “specially modified cabinet” at the university’s Schuster building, taking pride of place alongside a decade-old LEGO model of CERN’s ATLAS detector.
Petal physics
The curves and curls of leaves and flower petals arise from the interplay between their natural growth and geometry. Uneven growth in a flat sheet, in which the edges grow quicker than the interior, gives rise to strain; in plant leaves and petals this can produce a variety of forms, such as saddles and ripples. Yet when it comes to rose petals, the sharply pointed cusps – points where two curves meet – that form at the edge of the petals set them apart from the soft, wavy patterns seen in many other plants.
To investigate this intriguing difference, researchers from the Hebrew University of Jerusalem carried out theoretical modelling and conducted a series of experiments with synthetic disc “petals”. They found that the pointed cusps that form at the edge of rose petals are due to a type of geometric frustration called a Mainardi–Codazzi–Peterson (MCP) incompatibility. This mechanism concentrates stress in specific areas, which go on to form cusps to avoid tearing or unnatural folding. When the researchers suppressed the formation of cusps, they found that the discs reverted to being smooth and concave. The researchers say that the findings could be used for applications in soft robotics and even in the deployment of spacecraft components.
Wild Card physics
The Wild Cards universe is a series of novels set largely during an alternate history of the US following the Second World War. The series follows events after an extraterrestrial virus, known as the Wild Card virus, has spread worldwide. It mutates human DNA, causing profound changes in human physiology. The virus follows a fixed statistical distribution: 90% of those infected die, 9% become physically mutated (referred to as “jokers”) and 1% gain superhuman abilities (known as “aces”). Such capabilities include the ability to fly as well as being able to move between dimensions. George R R Martin, the author who co-edits the Wild Cards series, co-authored a paper examining the complex dynamics of the Wild Card virus together with Los Alamos National Laboratory theoretical physicist Ian Tregillis, who is also a science-fiction author. The model takes into account the severity of the changes (for the 10% that don’t instantly die) and the mix of joker/ace traits. The result is a dynamical system in which a carrier’s state vector constantly evolves through the model space – until their “card” turns. At that point the state vector becomes fixed and its permanent location determines the fate of the carrier. “The fictional virus is really just an excuse to justify the world of Wild Cards, the characters who inhabit it, and the plot lines that spin out from their actions,” says Tregillis.
Bubble vision: researchers have discovered that triple-fermented beers feature the most stable foam heads (courtesy: AIP/Chatzigiannakis et al.)
Foamy top
And finally, a clear sign of a good brew is a big head of foam at the top of a poured glass. Beer foam is made of many small bubbles of air, separated from each other by thin films of liquid. These thin films must remain stable, or the bubbles will pop and the foam will collapse. What holds the films together is not completely understood, but is likely down to conglomerates of proteins, surface viscosity or the presence of surfactants – molecules that reduce surface tension and are found in soaps and detergents. To find out more, researchers from ETH Zurich and Eindhoven University of Technology investigated beer-foam stability for different types of beers at varying stages of the fermentation process. They found that for single-fermentation beers, the foams are mostly held together by the surface viscosity of the beer. This is largely determined by the proteins in the beer – the more protein it contains, the more viscous the film and the more stable the foam will be. However, for double-fermented beers, the proteins are slightly denatured by the yeast cells and come together to form a two-dimensional membrane that keeps the foam intact for longer. The head was found to be even more stable for triple-fermented beers, which include Trappist beers. The team says that the work could be used to identify ways to increase or decrease the amount of foam, so that everyone can pour a perfect glass of beer every time. Cheers!
You can be sure that 2026 will throw up its fair share of quirky stories from the world of physics. See you next year!
Popularity isn’t everything. But it is something, so for the second year running, we’re finishing our trip around the Sun by looking back at the physics stories that got the most attention over the past 12 months. Here, in ascending order of popularity, are the 10 most-read stories published on the Physics World website in 2025.
We’ve had quantum science on our minds all year long, courtesy of 2025 being UNESCO’s International Year of Quantum Science and Technology. But according to theoretical work by Partha Ghose and Dimitris Pinotsis, it’s possible that the internal workings of our brains could also literally be driven by quantum processes.
Though neurons are generally regarded as too big to display quantum effects, Ghose and Pinotsis established that the equations describing the classical physics of brain responses are mathematically equivalent to the equations describing quantum mechanics. They also derived a Schrödinger-like equation specifically for neurons. So if you’re struggling to wrap your head around complex quantum concepts, take heart: it’s possible that your brain is ahead of you.
Testing times A toy model from Marco Pettini seeks to reconcile quantum entanglement with Einstein’s theory of relativity. (Courtesy: Shutterstock/Eugene Ivanov)
Einstein famously disliked the idea of quantum entanglement, dismissing its effects as “spooky action at a distance”. But would he have liked the idea of an extra time dimension any better? We’re not sure he would, but that is the solution proposed by theoretical physicist Marco Pettini, who suggests that wavefunction collapse could propagate through a second time dimension. Pettini got the idea from discussions with the Nobel laureate Roger Penrose and from reading old papers by David Bohm, but not everyone is impressed by these distinguished intellectual antecedents. In this article, Bohm’s former student and frequent collaborator Jeffrey Bub went on the record to say he “wouldn’t put any money on” Pettini’s theory being correct. Ouch.
Continuing the theme of intriguing, blue-sky theoretical research, the eighth-most-read article of 2025 describes how two theoretical physicists, Kaden Hazzard and Zhiyuan Wang, proposed a new class of quasiparticles called paraparticles. Based on their calculations, these paraparticles exhibit quantum properties that are fundamentally different from those of bosons and fermions. Notably, paraparticles strike a balance between the exclusivity of fermions and the clustering tendency of bosons, with up to two paraparticles allowed to occupy the same quantum state (compared with at most one for fermions or infinitely many for bosons). But do they really exist? No-one knows yet, but Hazzard and Wang say that experimental studies of ultracold atoms could hold the answer.
Capturing colour A still life taken by Lippmann using his method sometime between 1890 and 1910. By the latter part of this period, the method had fallen out of favour, superseded by the simpler Autochrome process. (Courtesy: Photo in public domain)
The list of early Nobel laureates in physics is full of famous names – Roentgen, Curie, Becquerel, Rayleigh and so on. But if you go down the list a little further, you’ll find that the 1908 prize went to a now mostly forgotten physicist by the name of Gabriel Lippmann, for a version of colour photography that almost nobody uses (though it’s rather beautiful, as the photo shows). This article tells the story of how and why this happened. A companion piece on the similarly obscure 1912 laureate, Gustaf Dalén, fell just outside this year’s top 10; if you’re a member of the Institute of Physics, you can read both of them together in the November issue of Physics World.
Why should physicists have all the fun of learning about the quantum world? This episode of the Physics World Weekly podcast focuses on the outreach work of Aleks Kissinger and Bob Coecke, who developed a picture-driven way of teaching quantum physics to a group of 15-17-year-old students. One of the students in the original pilot programme, Arjan Dhawan, is now studying mathematics at the University of Durham, and he joined his former mentors on the podcast to answer the crucial question: did it work?
Conflicting views Stalwart physicists Albert Einstein and Niels Bohr had opposing views on quantum fundamentals from early on, which turned into a lifelong scientific argument between the two. (Paul Ehrenfest/Wikimedia Commons)
Niels Bohr had many good ideas in his long and distinguished career. But he also had a few that didn’t turn out so well, and this article by science writer Phil Ball focuses on one of them. Known as the Bohr-Kramers-Slater (BKS) theory, it was developed in 1923 with help from two of the assistants/students/acolytes who flocked to Bohr’s institute in Copenhagen. Several notable physicists hated it because it violated both causality and the conservation of energy, and within two years, experiments by Walther Bothe and Hans Geiger proved them right. The twist, though, is that Bothe went on to win a share of the 1954 Nobel Prize for Physics for this work – making Bohr surely one of the only scientists who won himself a Nobel Prize for his good ideas, and someone else a Nobel Prize for a bad one.
Black holes are fascinating objects in their own right. Who doesn’t love the idea of matter-swallowing cosmic maws floating through the universe? For some theoretical physicists, though, they’re also a way of exploring – and even extending – Einstein’s general theory of relativity. This article describes how thinking about black hole collisions inspired Jiaxi Wu, Siddharth Boyeneni and Elias Most to develop a new formulation of general relativity that mirrors the equations that describe electromagnetic interactions. According to this formulation, general relativity behaves in the same way as the gravitational force described by Isaac Newton more than 300 years ago, with the “gravito-electric” field fading with the inverse square of distance.
“Best of” lists are a real win-win. If you agree with the author’s selections, you go away feeling confirmed in your mutual wisdom. If you disagree, you get to have a good old moan about how foolish the author was for forgetting your favourites or including something you deem unworthy. Either way, it’s a success – as this very popular list of the top 5 Nobel Prizes for Physics awarded since the year 2000 (as chosen by Physics World editor-in-chief Matin Durrani) demonstrates.
We’re back to black holes again for the year’s second-most-read story, which focuses on a possible link between gravity and quantum information theory via the concept of entropy. Such a link could help explain the so-called black hole information paradox – the still-unresolved question of whether information that falls into a black hole is retained in some form or lost as the black hole evaporates via Hawking radiation. Fleshing out this connection could also shed light on quantum information theory itself, and the theorist who’s proposing it, Ginestra Bianconi, says that experimental measurements of the cosmological constant could one day verify or disprove it.
Experiment schematic Two single atoms floating in a vacuum chamber are illuminated by a laser beam and act as the two slits. The interference of the scattered light is recorded with a highly sensitive camera depicted as a screen. Incoherent light appears as background and implies that the photon has acted as a particle passing only through one slit. (Courtesy: Wolfgang Ketterle, Vitaly Fedoseev, Hanzhen Lin, Yu-Kun Lu, Yoo Kyung Lee and Jiahao Lyu)
Back in 2002, readers of Physics World voted the double-slit experiment performed with single electrons “the most beautiful experiment in physics”. More than 20 years later, it continues to fascinate the physics community, as this, the most widely read article that Physics World published in 2025, shows.
Thomas Young’s original experiment demonstrated the wave-like nature of light; its later electron-based versions showed that electrons, too, create an interference pattern on a screen after passing through a pair of slits, even when they go through one-by-one. In this modern update, physicists at the Massachusetts Institute of Technology (MIT), US, stripped the experiment back to the barest possible bones.
Using two single atoms as the slits, they inferred the path of photons by measuring subtle changes in the atoms’ properties after photon scattering. Their results matched the predictions of quantum theory: interference fringes when they didn’t observe the photons’ path, and two bright spots when they did.
It’s an elegant result, and the fact that the MIT team performed the experiment specifically to celebrate the International Year of Quantum Science and Technology 2025 makes its popularity with Physics World readers especially gratifying.
So here’s to another year full of elegant experiments and the theories that inspire them. Long may they both continue, and thank you, as always, for taking the time to read about them.