Born in 1898, Lysenko was a Ukrainian plant breeder, who in 1927 found he could make pea and grain plants develop at different rates by applying the right temperatures to their seeds. The Soviet news organ Pravda was enthusiastic, saying his discovery could make crops grow in winter, turn barren fields green, feed starving cattle and end famine.
Despite having trained as a horticulturist, Lysenko rejected the then-emerging science of genetics in favour of Lamarckism, according to which organisms can pass acquired traits on to their offspring. This meshed well with the Soviet philosophy of “dialectical materialism”, which sees both the natural and human worlds as shaped not by fixed internal mechanisms but by their environment.
Stalin took note of Lysenko’s activities and had him installed as head of key Soviet science agencies. Once in power, Lysenko dismissed scientists who opposed his views, cancelled their meetings, funded studies of discredited theories, and stocked committees with loyalists. Although Lysenko had lost his influence by the time Stalin died in 1953 – with even Pravda having turned against him – Soviet agricultural science had been destroyed.
A modern parallel
Lysenko’s views and actions have a resonance today when considering the activities of Robert F Kennedy Jr, who was appointed by Donald Trump as secretary of the US Department of Health and Human Services in February 2025. Of course, Trump has repeatedly sought to impose his own agenda on US science, with his destructive impact outlined in a detailed report published by the Union of Concerned Scientists in July 2025.
But after Trump appointed Kennedy, the assault on science continued into US medicine, health and human services. In what might be called a philosophy of “political materialism”, Kennedy fired all 17 members of the Advisory Committee on Immunization Practices of the US Centers for Disease Control and Prevention (CDC), cancelled nearly $500m in mRNA vaccine contracts, hired a vaccine sceptic to study the supposed link between vaccines and autism despite numerous studies showing no connection, and ordered the CDC to revise its website to reflect his own views on the cause of autism.
Of course, there are fundamental differences between the 1930s Soviet Union and the 2020s United States. Stalin murdered and imprisoned his opponents, while the US administration only defunds and fires them. Stalin and Lysenko were not voted in, while Trump came democratically to power, with elected representatives confirming Kennedy. Kennedy has also apologized for his most inflammatory remarks, though Stalin and Lysenko never did (nor does Trump for that matter).
What’s more, Stalin’s and Lysenko’s actions were more grounded in apparent scientific realities and social vision than Trump’s or Kennedy’s. Stalin built up much of the Soviet science and technology infrastructure, whose dramatic successes included the launch of Sputnik, the first artificial Earth satellite, in 1957. Though it strains credulity to praise Stalin, his vision of expanding Soviet agricultural production during a famine was at least plausible, and its intention could be portrayed as humanitarian. Lysenko was a scientist; Kennedy is not.
As for Lysenko, his findings seemed to build on those of his scientific predecessors. Experimentally, he extended the work of the Russian botanist Ivan Michurin, who bred new kinds of plants able to grow in different regions. Theoretically, his work connected not only with dialectical materialism but also with the ideas of the French naturalist Jean-Baptiste Lamarck, who claimed that acquired traits can be inherited.
US Presidents often have pet scientific projects. Harry Truman created the National Science Foundation, Dwight D Eisenhower set up NASA, John F Kennedy started the Apollo programme, while Richard Nixon launched the Environmental Protection Agency (EPA) and the War on Cancer. But it’s one thing to support science that might promote a political agenda and another to quash science that will not.
One ought to be able to take comfort in the fact that if you fight nature, you lose – except that the rest of us lose as well. Thanks to Lysenko’s actions, the Soviet Union lost millions of tons of grain and hundreds of herds of cattle. The promise of his work evaporated and Stalin’s dreams vanished.
Lysenko, at least, was motivated by seeming scientific promise and social vision; the current US assault on science has neither. Trump has damaged the most important US scientific agencies, destroyed databases and eliminated the EPA’s research arm, while Kennedy has replaced health advisory committees with party loyalists.
While Kennedy may not last his term – most Trump Cabinet officials don’t – the paths he has set science policy on surely will. For Trump and Kennedy, that policy seems to consist only of supporting pet projects. Meanwhile, cases of measles in the US have reached their highest level in three decades, the seas continue to rise and the climate is changing. It is hard to imagine how enemy agents could damage US science more effectively.
Early diagnosis of primary central nervous system lymphoma (PCNSL) remains challenging because brain biopsies are invasive and imaging often lacks molecular specificity. A team led by researchers at Shenzhen University has now developed a minimally invasive fibre-optic plasmonic sensor capable of detecting PCNSL-associated microRNAs in the eye’s aqueous humor with attomolar sensitivity.
At the heart of the approach is a black phosphorus (BP)–engineered surface plasmon resonance (SPR) interface. An ultrathin BP layer is deposited on a gold-coated fibre tip. Because of the work-function difference between BP and gold, electrons transfer from BP into the Au film, creating a strongly enhanced local electric field at the metal–semiconductor interface. This BP–Au charge-transfer nano-interface amplifies refractive-index changes at the surface far more efficiently than conventional metal-only SPR chips, enabling the detection of molecular interactions that would otherwise be too subtle to resolve and pushing the limit of detection down to 21 attomolar without nucleic-acid amplification. The BP layer also provides a biocompatible, high-surface-area platform for immobilizing RNA reporters.
To achieve sequence specificity, the researchers integrated CRISPR-Cas13a, an RNA-guided nuclease that becomes catalytically active only when its target sequence is perfectly matched to a designed CRISPR RNA (crRNA). When the target microRNA (miR-21) is present, activated Cas13a cleaves RNA reporters attached to the BP-modified fibre surface, releasing gold nanoparticles and reducing the local refractive index. The resulting optical shift is read out in real time through the SPR response of the BP-enhanced fibre probe, providing single-nucleotide-resolved detection directly on the plasmonic interface.
With this combined strategy, the sensor achieved a limit of detection of 21 attomolar in buffer and successfully distinguished single-base-mismatched microRNAs. In tests on aqueous-humor samples from patients with PCNSL, the CRISPR-BP-FOSPR assay produced results that closely matched clinical qPCR data, despite operating without any amplification steps.
Because aqueous-humor aspiration is a minimally invasive ophthalmic procedure, this BP-driven plasmonic platform may offer a practical route for early PCNSL screening, longitudinal monitoring, and potentially the diagnosis of other neurological diseases reflected in eye-fluid biomarkers. More broadly, the work showcases how black-phosphorus-based charge-transfer interfaces can be used to engineer next-generation, fibre-integrated biosensors that combine extreme sensitivity with molecular precision.
Plutonium is considered a fascinating element. It was first chemically isolated in 1941 at the University of California, but its discovery was hidden until after the Second World War. There are six distinct allotropic phases of plutonium with very different properties. At ambient pressure, continuously increasing the temperature converts the room-temperature, simple monoclinic α phase through five phase transitions, the final one occurring at approximately 450 °C.
The delta (δ) phase is perhaps the most interesting allotrope of plutonium. δ-plutonium is technologically important and has a very simple crystal structure, yet its electronic structure has been debated for decades. Researchers have long attempted to understand its anomalous behaviour and how the properties of δ-plutonium are connected to its 5f electrons.
The 5f electrons are characteristic of the actinide group of elements, which includes plutonium, and their behaviour is counterintuitive. They are sensitive to temperature, pressure and composition, and can behave both in a localized manner, staying close to the nucleus, and in a delocalized (itinerant) manner, spreading out and contributing to bonding. Both states can support magnetism, depending on the actinide element. The 5f electrons contribute to δ-phase stability, to anomalies in the material’s volume and bulk modulus, and to a negative thermal expansion in which the δ phase shrinks when heated.
Research group from Lawrence Livermore National Laboratory. Left to right: Lorin Benedict, Alexander Landa, Kyoung Eun Kweon, Emily Moore, Per Söderlind, Christine Wu, Nir Goldman, Randolph Hood and Aurelien Perron. Not in image: Babak Sadigh and Lin Yang (Courtesy: Blaise Douros/Lawrence Livermore National Laboratory)
In this work, the researchers present a comprehensive model to predict the thermodynamic behaviour of δ-plutonium, which has a face-centred cubic structure. They use density functional theory, a computational technique based on the overall electron density of the system, and incorporate relativistic effects to capture the behaviour of fast-moving electrons and complex magnetic interactions. The model includes a parameter-free orbital polarization mechanism to account for orbital–orbital interactions, and incorporates anharmonic lattice vibrations and magnetic fluctuations, both transverse and longitudinal modes, driven by temperature-induced excitations. Importantly, it shows that the negative thermal expansion results from magnetic fluctuations.
This is the first model to integrate electronic effects, magnetic fluctuations, and lattice vibrations into a cohesive framework that aligns with experimental observations and semi-empirical models such as CALPHAD. It also accounts for fluctuating states beyond the ground state and explains how gallium composition influences thermal expansion. Additionally, the model captures the positive thermal expansion behaviour of the high-temperature epsilon phase, offering new insight into plutonium’s complex thermodynamics.
Silver iodide crystals have long been used to “seed” clouds and trigger precipitation, but scientists have never been entirely sure why the material works so well for that purpose. Researchers at TU Wien in Austria are now a step closer to solving the mystery thanks to a new study that characterized surfaces of the material in atomic-scale detail.
“Silver iodide has been used in atmospheric weather modification programs around the world for several decades,” explains Jan Balajka from TU Wien’s Institute of Applied Physics, who led this research. “In fact, it was chosen for this purpose as far back as the 1940s because of its atomic crystal structure, which is nearly identical to that of ice – it has the same hexagonal symmetry and very similar distances between atoms in its lattice structure.”
The basic idea, Balajka continues, originated with the 20th-century American atmospheric scientist Bernard Vonnegut, who suggested in 1947 that introducing small silver iodide (AgI) crystals into a cloud could provide nuclei for ice to grow on. But while Vonnegut’s proposal worked (and helped to inspire his brother Kurt’s novel Cat’s Cradle), this simple picture is not entirely accurate. The stumbling block is that nucleation occurs at the surface of a crystal, not inside it, and the atomic structure of an AgI surface differs significantly from its interior.
A task that surface science has solved
To investigate further, Balajka and colleagues used high-resolution atomic force microscopy (AFM) and advanced computer simulations to study the atomic structure of 2‒3 nm diameter AgI crystals after they were broken into two pieces. The team’s measurements revealed that both freshly cleaved surfaces differed from the structure found inside the crystal.
More specifically, team member Johanna Hütner, who performed the experiments, explains that when an AgI crystal is cleaved, the silver atoms end up on one side while the iodine atoms appear on the other. This has implications for ice growth, because while the silver side maintains a hexagonal arrangement that provides an ideal template for the growth of ice layers, the iodine side reconstructs into a rectangular pattern that no longer lattice-matches the hexagonal symmetry of ice crystals. The iodine side is therefore incompatible with the epitaxial growth of hexagonal ice.
“Our work solves this decades-long controversy of the surface vs bulk structure of AgI, and shows that structural compatibility does matter,” Balajka says.
Difficult experiments
According to Balajka, the team’s experiments were far from easy. Many experimental methods for studying the structure and properties of material surfaces are based on interactions with charged particles such as electrons or ions, but AgI is an electrical insulator, which “excludes most of the tools available,” he explains. Using AFM enabled them to overcome this problem, he adds, because this technique detects interatomic forces between a sharp tip and the surface and does not require a conductive sample.
Another problem is that AgI is photosensitive and decomposes when exposed to visible light. While this property is useful in other contexts – AgI was a common ingredient in early photographic plates – it created complications for the TU Wien team. “Conventional AFM setups make use of optical laser detection to map the topography of a sample,” Balajka notes.
To avoid destroying their sample while studying it, the researchers therefore had to use a non-contact AFM based on a piezoelectric sensor that detects electrical signals and does not require optical readout. They also adapted their setup to operate in near-darkness, using only red light while manipulating the AgI to ensure that stray light did not degrade the samples.
The computational modelling part of the work introduced yet another hurdle to overcome. “Both Ag and I are atoms with a high number of electrons in their electron shells and are thus highly polarizable,” Balajka explains. “The interaction between such atoms cannot be accurately described by standard computational modelling methods such as density functional theory (DFT), so we had to employ highly accurate random-phase approximation (RPA) calculations to obtain reliable results.”
Highly controlled conditions
The researchers acknowledge that their study, which is detailed in Science Advances, was conducted under highly controlled conditions – ultrahigh vacuum, low pressure and temperature, and a dark environment – that are very different from those that prevail inside real clouds. “The next logical step for us is therefore to confirm whether our findings hold under more representative conditions,” Balajka says. “We would like to find out whether the structure of AgI surfaces is the same in air and water, and if not, why.”
The researchers would also like to better understand the atomic arrangement of the rectangular reconstruction of the iodine surface. “This would complete the picture for the use of AgI in ice nucleation, as well as our understanding of AgI as a material overall,” Balajka says.
Using a novel spectroscopy technique, physicists in Japan have revealed how organic materials accumulate electrical charge through long-term illumination by sunlight – leading to material degradation. Ryota Kabe and colleagues at the Okinawa Institute of Science and Technology have shown how charge separation occurs gradually via a rare multi-photon ionization process, offering new insights into how plastics and organic semiconductors degrade in sunlight.
In a typical organic solar cell, an electron-donating material is interfaced with an electron acceptor. When the donor absorbs a photon, one of its electrons may jump across the interface, creating a bound electron-hole pair which may eventually dissociate – creating two free charges from which useful electrical work can be extracted.
Although such an interface vastly boosts the efficiency of this process, it is not necessary for charge separation to occur when an electron donor is illuminated. “Even single-component materials can generate tiny amounts of charge via multiphoton ionization,” Kabe explains. “However, experimental evidence has been scarce due to the extremely low probability of this process.”
To trigger charge separation in this way, an electron needs to absorb one or more additional photons while in its excited state. Since the vast majority of electrons fall back into their ground states before this can happen, the spectroscopic signature of this charge separation is very weak. This makes it incredibly difficult to detect using conventional spectroscopy techniques, which can generally only make observations over timescales of up to a few milliseconds.
The opposite approach
“While weak multiphoton pathways are easily buried under much stronger excited-state signals, we took the opposite approach in our work,” Kabe describes. “We excited samples for long durations and searched for traces of accumulated charges in the slow emission decay.”
Key to this approach was an electron donor called NPD. This organic material has a relatively long triplet lifetime – in the triplet state, the excited electron’s return to the ground state is spin-forbidden, so it persists far longer than an ordinary excited state. As a result, these molecules emit phosphorescence over relatively long timescales.
In addition, Kabe’s team dispersed their NPD samples into different host materials with carefully selected energy levels. In one medium, the energies of both the highest-occupied and lowest-unoccupied molecular orbitals lay below NPD’s corresponding levels, so that the host material acted as an electron acceptor. As a result, charge transfer occurred in the same way as it would across a typical donor-acceptor interface.
Yet in another medium, the host’s lowest-unoccupied orbital lay above NPD’s – blocking charge transfer, and allowing triplet states to accumulate instead. In this case, the only way for charge separation to occur was through multi-photon ionization.
Slow emission decay analysis
Since NPD’s long triplet lifetime allowed its electrons to be excited gradually over an extended period of illumination, its weak charge accumulation became detectable through slow emission decay analysis. In contrast, more conventional methods involve multiple, ultra-fast laser pulses, severely restricting the timescale over which measurements can be made. Altogether, this approach enabled the team to clearly distinguish between the two charge generation pathways.
“Using this method, we confirmed that charge generation occurred via resonance-enhanced multiphoton ionization mediated by long-lived triplet states, even in single-component organic materials,” Kabe describes.
This result offers insights into how plastics and organic semiconductors are degraded by sunlight over years or decades. The conventional explanation is that sunlight generates free radicals. These are molecules that lose an electron through ionization, leaving behind an unpaired electron which readily reacts with other molecules in the surrounding environment. Since photodegradation unfolds over such a long timescale, researchers could not observe this charge generation in single-component organic materials – until now.
“The method will be useful for analysing charge behaviour in organic semiconductor devices and for understanding long-term processes such as photodegradation that occur gradually under continuous light exposure,” Kabe says.
Fermilab has officially opened a new building named after the particle physicist Helen Edwards. Officials from the lab and the US Department of Energy (DOE) opened the Helen Edwards Engineering Research Center at a ceremony held on 5 December. It is Fermilab’s largest purpose-built lab and office space since the iconic Wilson Hall, which was completed in 1974.
Construction of the Helen Edwards Engineering Research Center began in 2019 and was completed three years later. The centre is a 7500 m² multi-storey lab and office building that is adjacent and connected to Wilson Hall.
The new centre is designed as a collaborative lab where engineers, scientists and technicians design, build and test technologies across several areas of research such as neutrino science, particle detectors, quantum science and electronics.
The centre also features cleanrooms, vibration-sensitive labs and cryogenic facilities in which the components of the near detector for the Deep Underground Neutrino Experiment will be assembled and tested.
A pioneering spirit
With a PhD in experimental particle physics from Cornell University, Edwards was heavily involved with commissioning the university’s 10 GeV electron synchrotron. In 1970 Fermilab’s director Robert Wilson appointed Edwards as associate head of the lab’s booster section and she later became head of the accelerator division.
While at Fermilab, Edwards’ primary responsibility was designing, constructing, commissioning and operating the Tevatron, which led to the discoveries of the top quark in 1995 and the tau neutrino in 2000.
Edwards retired in the early 1990s but continued to work as a guest scientist at Fermilab and officially switched the Tevatron off during a ceremony held on 30 September 2011. She died in 2016.
Darío Gil, the undersecretary for science at the DOE, says that Edwards’ scientific work “is a symbol of the pioneering spirit of US research”.
“Her contributions to the Tevatron and the lab helped the US become a world leader in the study of elementary particles,” notes Gil. “We honour her legacy by naming this research centre after her as Fermilab continues shaping the next generation of research using [artificial intelligence], [machine learning] and quantum physics.”
A proposed new way of defining the standard unit of electrical resistance would do away with the need for strong magnetic fields when measuring it. The new technique is based on memristors, which are programmable resistors originally developed as building blocks for novel computing architectures, and its developers say it would considerably simplify the experimental apparatus required to measure a single quantum of resistance for some applications.
Electrical resistance is a physical quantity that represents how much a material opposes the flow of electrical current. It is measured in ohms (Ω), and since 2019, when the base units of the International System of Units (SI) were most recently revised, the ohm has been defined in terms of the von Klitzing constant h/e2, where h and e are the Planck constant and the charge on an electron, respectively.
To measure this resistance with high precision, scientists use the fact that the von Klitzing constant is related to the quantized Hall resistance of a two-dimensional electron system (such as the one that forms in a semiconductor heterostructure) in the presence of a strong magnetic field. This quantization of resistance is known as the quantum Hall effect (QHE), and in a heterostructure such as GaAs/AlGaAs it shows up at fields of around 10 tesla. Generating such high fields typically requires a superconducting electromagnet, however.
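For reference, these standard relations – which follow directly from the fixed 2019 values of h and e, and are not specific to the work described here – set the size of the resistance and conductance quanta involved:

\[
R_{\mathrm K} = \frac{h}{e^{2}} \approx 25\,812.807\ \Omega, \qquad
R_{xy} = \frac{R_{\mathrm K}}{\nu}\quad (\nu = 1, 2, 3, \dots), \qquad
G_{0} = \frac{2e^{2}}{h} = \frac{2}{R_{\mathrm K}} \approx 77.48\ \mu\mathrm{S},
\]

where \(R_{xy}\) is the quantized Hall resistance on the \(\nu\)-th plateau and \(G_{0}\) is the conductance quantum referred to later in this article.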
A completely different approach
Researchers connected to a European project called MEMQuD are now advocating a completely different approach. Their idea is based on memristors, which are programmable resistors that “remember” their previous resistance state even after they have been switched off. This previous resistance state can be changed by applying a voltage or current.
The MEMQuD team reports that the quantum conductance levels achieved in this set-up are precise enough to be exploited as intrinsic standard values. Indeed, a large inter-laboratory comparison confirmed that the values deviated by just -3.8% and 0.6% from the agreed SI values for the fundamental quantum of conductance, G0, and 2G0, respectively. The researchers attribute this precision to tight, atomic-level control over the morphology of the nanochannels responsible for quantum conductance effects, which they achieved by electrochemically polishing the silver filaments into the desired configuration.
A national metrology institute condensed into a microchip
The researchers say their results are building towards a concept known as an “NMI-in-a-chip” – that is, condensing the services of a national metrology institute into a microchip. “This could lead to measuring devices that have their resistance references built-in directly into the chip,” says Milano, “so doing away with complex measurements in laboratories and allowing for devices with zero-chain traceability – that is, those that do not require calibration since they have embedded intrinsic standards.”
“Notably, this method can be demonstrated at room temperature and under ambient conditions, in contrast to conventional methods that require cryogenic and vacuum equipment, which is expensive and requires a lot of electrical power,” Okazaki says. “If such a user-friendly quantum standard becomes more stable and its uncertainty is improved, it could lead to a new calibration scheme for ensuring the accuracy of electronics used in extreme environments, such as space or the deep ocean, where traditional quantum standards that rely on cryogenic and vacuum conditions cannot be readily used.”
The MEMQuD researchers, who report their work in Nature Nanotechnology, now plan to explore ways to further decrease deviations from the agreed SI values for G0 and 2G0. These include better material engineering, an improved measurement protocol, and strategies for topologically protecting the memristor’s resistance.
Travis Humble is a research leader who’s thinking big, dreaming bold, yet laser-focused on operational delivery. The long-game? To translate advances in fundamental quantum science into a portfolio of enabling technologies that will fast-track the practical deployment of quantum computers for at-scale scientific, industrial and commercial applications.
Validation came in spades last month when, despite the current turbulence around US science funding, the Quantum Science Center (QSC) that Humble directs was given follow-on DOE backing of $125 million over five years (2025–30) to create “a new scientific ecosystem” for fault-tolerant, quantum-accelerated high-performance computing (QHPC). In short, QSC will target the critical research needed to amplify the impact of quantum computing through its convergence with leadership-class exascale HPC systems.
“Our priority in Phase II QSC is the creation of a common software ecosystem to host the compilers, programming libraries, simulators and debuggers needed to develop hybrid-aware algorithms and applications for QHPC,” explains Humble. Equally important, QSC researchers will develop and integrate new techniques in quantum error correction, fault-tolerant computing protocols and hybrid algorithms that combine leading-edge computing capabilities for pre- and post-processing of quantum programs. “These advances will optimize quantum circuit constructions and accelerate the most challenging computational tasks within scientific simulations,” Humble adds.
Classical computing, quantum opportunity
At the heart of the QSC programme sits the leading-edge classical HPC infrastructure of Oak Ridge National Laboratory (ORNL), a capability that includes Frontier, the first supercomputer to break the exascale barrier and still one of the world’s most powerful. On that foundation, QSC is committed to building QHPC architectures that take advantage of both quantum computers and exascale supercomputing to tackle all manner of scientific and industrial problems beyond the reach of today’s HPC systems alone.
“Hybrid classical-quantum computing systems are the future,” says Humble. “With quantum computers connecting both physically and logically to existing HPC systems, we can forge a scalable path to integrate quantum technologies into our scientific infrastructure.”
Quantum acceleration ORNL’s current supercomputer, Frontier, was the first high-performance machine to break the exascale barrier. Plans are in motion for a next-generation supercomputer, Discovery, to come online at ORNL by 2028. (Courtesy: Carlos Jones/ORNL, US DOE)
Industry partnerships are especially important in this regard. Working in collaboration with the likes of IonQ, Infleqtion and QuEra, QSC scientists are translating a range of computationally intensive scientific problems – quantum simulations of exotic matter, for example – onto the vendors’ quantum computing platforms, generating excellent results out the other side.
“With our broad representation of industry partners,” notes Humble, “we will establish a common framework by which scientific end-users, software developers and hardware architects can collaboratively advance these tightly coupled, scalable hybrid computing systems.”
It’s a co-development model that industry values greatly. “Reciprocity is key,” Humble adds. “At QSC, we get to validate that QHPC can address real-world research problems, while our industry partners gather user feedback to inform the ongoing design and optimization of their quantum hardware and software.”
Quantum impact
Innovation being what it is, quantum computing systems will continue on an accelerating trajectory, with more qubits, enhanced fidelity, error correction and fault tolerance as key reference points on the development roadmap. Phase II QSC, for its part, will integrate five parallel research thrusts to advance the viability and uptake of QHPC technologies.
The collaborative software effort, led by ORNL’s Vicente Leyton, will develop openQSE, an adaptive, end-to-end software ecosystem for QHPC systems and applications. Yigit Subasi from Los Alamos National Laboratory (LANL) will lead the hybrid algorithms thrust, which will design algorithms that combine conventional and quantum methods to solve challenging problems in the simulation of model materials.
Meanwhile, the QHPC architectures thrust, under the guidance of ORNL’s Chris Zimmer, will co-design hybrid computing systems that integrate quantum computers with leading-edge HPC systems. The scientific applications thrust, led by LANL’s Andrew Sornberger, will develop and validate applications of quantum simulation to be implemented on prototype QHPC systems. Finally, ORNL’s Michael McGuire will lead the thrust to establish experimental baselines for quantum materials that ultimately validate QHPC simulations against real-world measurements.
Longer term, ORNL is well placed to scale up the QHPC model. After all, the laboratory is credited with pioneering the hybrid supercomputing model that uses graphics processing units in addition to conventional central processing units (including the launch in 2012 of Titan, the first supercomputer of this type operating at over 10 petaFLOPS).
“The priority for all the QSC partners,” notes Humble, “is to transition from this still-speculative research phase in quantum computing, while orchestrating the inevitable convergence between quantum technology, existing HPC capabilities and evolving scientific workflows.”
Collaborate, coordinate, communicate
Much like the DOE’s other National Quantum Information Science Research Centers (which have also been allocated further funding through 2030), QSC provides the “operational umbrella” for a broad-scope collaboration of more than 300 scientists and engineers from 20 partner institutions. With its own distinct set of research priorities, that collective activity cuts across other National Laboratories (Los Alamos and Pacific Northwest), universities (among them Berkeley, Cornell and Purdue) and businesses (including IBM and IQM) to chart an ambitious R&D pathway addressing quantum-state (qubit) resilience, controllability and, ultimately, the scalability of quantum technologies.
“QSC is a multidisciplinary melting pot,” explains Humble, “and I would say, alongside all our scientific and engineering talent, it’s the pooled user facilities that we are able to exploit here at Oak Ridge and across our network of partners that gives us our ‘grand capability’ in quantum science [see box, “Unique user facilities unlock QSC opportunities”]. Certainly, when you have a common research infrastructure, orchestrated as part of a unified initiative like QSC, then you can deliver powerful science that translates into real-world impacts.”
Unique user facilities unlock QSC opportunities
Neutron insights ORNL director Stephen Streiffer tours the linear accelerator tunnel at the Spallation Neutron Source (SNS). QSC scientists are using the SNS to investigate entirely new classes of strongly correlated materials that demonstrate topological order and quantum entanglement. (Courtesy: Alonda Hines/ORNL, US DOE)
Deconstructed, QSC’s Phase I remit (2020–25) spanned three dovetailing and cross-disciplinary research pathways: discovery and development of advanced materials for topological quantum computing (in which quantum information is stored in a stable topological state – or phase – of a physical system rather than the properties of individual particles or atoms); development of next-generation quantum sensors (to characterize topological states and support the search for dark matter); as well as quantum algorithms and simulations (for studies in fundamental physics and quantum chemistry).
Underpinning that collective effort: ORNL’s unique array of scientific user facilities. A case in point is the Spallation Neutron Source (SNS), an accelerator-based neutron-scattering facility that enables a diverse programme of pure and applied research in the physical sciences, life sciences and engineering. QSC scientists, for example, are using SNS to investigate entirely new classes of strongly correlated materials that demonstrate topological order and quantum entanglement – properties that show great promise for quantum computing and quantum metrology applications.
“The high-brightness neutrons at SNS give us access to this remarkable capability for materials characterization,” says Humble. “Using the SNS neutron beams, we can probe exotic materials, recover the neutrons that scatter off of them and, from the resultant signals, infer whether or not the materials exhibit quantum properties such as entanglement.”
While SNS may be ORNL’s “big-ticket” user facility, the laboratory is also home to another high-end resource for quantum studies: the Center for Nanophase Material Science (CNMS), one of the DOE’s five national Nanoscience Research Centers, which offers QSC scientists access to specialist expertise and equipment for nanomaterials synthesis; materials and device characterization; as well as theory, modelling and simulation in nanoscale science and technology.
Thanks to these co-located capabilities, QSC scientists pioneered another intriguing line of enquiry – one that will now be taken forward elsewhere within ORNL – by harnessing so-called quantum spin liquids, in which electron spins can become entangled with each other to demonstrate correlations over very large distances (relative to the size of individual atoms).
In this way, it is possible to take materials that have been certified as quantum-entangled and use them to design new types of quantum devices with unique geometries – as well as connections to electrodes and other types of control systems – to unlock novel physics and exotic quantum behaviours. The long-term goal? Translation of quantum spin liquids into a novel qubit technology to store and process quantum information.
SNS, CNMS and Oak Ridge Leadership Computing Facility (OLCF) are DOE Office of Science user facilities.
When he’s not overseeing the technical direction of QSC, Humble is acutely attuned to the need for sustained and accessible messaging. The priority? To connect researchers across the collaboration – physicists, chemists, material scientists, quantum information scientists and engineers – as well as key external stakeholders within the DOE, government and industry.
“In my experience,” he concludes, “the ability of the QSC teams to communicate efficiently – to understand each other’s concepts and reasoning and to translate back and forth across disciplinary boundaries – remains fundamental to the success of our scientific endeavours.”
The next generation Quantum science graduate students and postdoctoral researchers present and discuss their work during a poster session at the fifth annual QSC Summer School. Hosted at Purdue University in April this year, the school is one of several workforce development efforts supported by QSC. (Courtesy: Dave Mason/Purdue University)
With an acknowledged shortage of skilled workers across the quantum supply chain, QSC is doing its bit to bolster the scientific and industrial workforce. Front-and-centre: the fifth annual QSC Summer School, which was held at Purdue University in April this year and took 130 graduate students (the largest cohort to date) through an intensive four-day training programme.
The Summer School sits as part of a long-term QSC initiative to equip ambitious individuals with the specialist domain knowledge and skills needed to thrive in a quantum sector brimming with opportunity – whether that’s in scientific research or out in industry with hardware companies, software companies or, ultimately, the end-users of quantum technologies in key verticals like pharmaceuticals, finance and healthcare.
“While PhD students and postdocs are integral to the QSC research effort, the Summer School exposes them to the fundamental ideas of quantum science elaborated by leading experts in the field,” notes Vivien Zapf, a condensed-matter physicist at Los Alamos National Laboratory who heads up QSC’s advanced characterization efforts.
“It’s all about encouraging the collective conversation,” she adds, “with lots of opportunities for questions and knowledge exchange. Overall, our emphasis is very much on training up scientists and engineers to work across the diversity of disciplines needed to translate quantum technologies out of the lab into practical applications.”
The programme isn’t for the faint-hearted, though. Student delegates kicked off this year’s proceedings with a half-day of introductory presentations on quantum materials, devices and algorithms. Next up: three and a half days of intensive lectures, panel discussions and poster sessions covering everything from entangled quantum networks to quantum simulations of superconducting qubits.
Many of the Summer School’s sessions were also made available virtually on Purdue’s Quantum Coffeehouse Live Stream on YouTube – the streamed content reaching quantum learners across the US and further afield. Lecturers were drawn from the US National Laboratories, leading universities (such as Harvard and Northwestern) and the quantum technology sector (including experts from IBM, PsiQuantum, NVIDIA and JPMorganChase).
As a physicist in industry, I spend my days developing new types of photovoltaic (PV) panels. But I’m also keen to do something for the transition to green energy outside work, which is why I recently installed two PV panels on the balcony of my flat in Munich. Fitting them was great fun – and I can now enjoy sunny days even more knowing that each panel is generating electricity.
However, the panels, which each have a peak power of 440 W, don’t cover all my electricity needs, which prompted me to take an interest in a plan to build six wind turbines in a forest near me on the outskirts of Munich. Curious about the project, I particularly wanted to find out when the turbines will start generating electricity for the grid. So when I heard that a weekend cycle tour of the site was being organized to showcase it to local residents, I grabbed my bike and joined in.
As we cycle, I discover that the project – located in Forstenrieder Park – is the joint effort of four local councils and two “citizen-energy” groups, who’ve worked together for the last five years to plan and start building the six turbines. Each tower will be 166 m high and the rotor blades will be 80 m long, with the plan being for them to start operating in 2027.
I’ve never thought of Munich as a particularly windy city. But tour leader Dieter Maier, who’s a climate adviser to Neuried council, explains that at the height at which the blades operate, there’s always a steady, reliable flow of wind. In fact, each turbine has a designed power output of 6.5 MW and will deliver a total of 10 GWh of energy over the course of a year.
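A quick back-of-the-envelope check – assuming the 10 GWh figure refers to a single turbine – shows what that implies about how often the blades will actually be turning at full tilt:

\[
\frac{10\ \mathrm{GWh}}{6.5\ \mathrm{MW}\times 8760\ \mathrm{h}} \approx \frac{10\,000\ \mathrm{MWh}}{56\,940\ \mathrm{MWh}} \approx 0.18,
\]

a capacity factor of roughly 18%, meaning each turbine would produce just under a fifth of its maximum possible annual output on average.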
Practical questions
Cycling around, I’m excited to think that a single turbine could end up providing the entire electricity demand for Neuried. But installing wind turbines involves much more than just the technicalities of generating electricity. How do you connect the turbines to the grid? How do you ensure planes don’t fly into the turbines? What about wildlife conservation and biodiversity?
At one point on our tour, we cycle round a 90-degree bend in the forest and I wonder how a huge, 80 m-long blade will be transported round that kind of tight angle. Trees will almost certainly have to be felled to get the blade in place, which sounds questionable for a supposedly green project. Fortunately, the project leaders have been working with the local forest manager and conservationists, finding ways to help improve the local biodiversity despite the loss of trees.
As a representative of BUND (one of Germany’s biggest conservation charities) explains on the tour, a natural, or “unmanaged”, forest consists of a mix of areas with a higher or lower density of trees. But Forstenrieder Park has been a managed forest for well over a century and is mostly thick with trees. Clearing trees for the turbines will therefore allow conservationists to grow more of the bushes and plants that currently struggle to find space to flourish.
Cut and cover Trees in Forstenrieder Park have had to be chopped down to provide room for new wind turbines to be installed, but the open space will let conservationists grow plants and bushes to boost biodiversity. (Courtesy: Janina Moereke)
To avoid endangering birds and bats native to this forest, meanwhile, the turbines will be turned off when the animals are most active, which coincidentally corresponds to low wind periods in Munich. Insurance costs have to be factored in too. Thankfully, it’s quite unlikely that a turbine will burn down or get ice all over its blades, which means liability insurance costs are low. But vandalism is an ever-present worry.
In fact, at the end of our bike tour, we’re taken to a local wind turbine that is already up and running about 13 km further south of Forstenrieder Park. This turbine, I’m disappointed to discover, was vandalized back in 2024, which led to it being fenced off and video surveillance cameras being installed.
But for all the difficulties, I’m excited by the prospect of the wind turbines supporting the local energy needs. I can’t wait for the day when I’m on my balcony, solar panels at my side, sipping a cup of tea made with water boiled by electricity generated by the rotor blades I can see turning round and round on the horizon.
Excess radiation Gamma-ray intensity map excluding components other than the halo, spanning approximately 100° in the direction of the centre of the Milky Way. The blank horizontal bar is the galactic plane area, which was excluded from the analysis to avoid strong astrophysical radiation. (Courtesy: Tomonori Totani/The University of Tokyo)
Gamma rays emitted from the halo of the Milky Way could be produced by hypothetical dark-matter particles. That is the conclusion of an astronomer in Japan who has analysed data from NASA’s Fermi Gamma-ray Space Telescope. The energy spectrum of the emission is what would be expected from the annihilation of particles called WIMPs. If this can be verified, it would mark the first observation of dark matter via electromagnetic radiation.
Since the 1930s astronomers have known that there is something odd about galaxies, galaxy clusters and larger structures in the universe. The problem is that there is not nearly enough visible matter in these objects to explain their dynamics and structure. A rotating galaxy, for example, should be flinging out its stars because it does not have enough self-gravitation to hold itself together.
Today, the most popular solution to this conundrum is the existence of a hypothetical substance called dark matter. Dark-matter particles would have mass and interact with each other and normal matter via the gravitational force, gluing rotating galaxies together. However, the fact that we have never observed dark matter directly means that the particles must rarely, if ever, interact via the other three forces.
Annihilating WIMPs
The weakly interacting massive particle (WIMP) is a dark-matter candidate that interacts via the weak nuclear force (or a similarly weak force). As a result of this interaction, pairs of WIMPs are expected to occasionally annihilate to create high-energy gamma rays and other particles. If this is true, dense areas of the universe such as galaxies should be sources of these gamma rays.
Now, Tomonori Totani of the University of Tokyo has analysed data from the Fermi telescope and identified an excess of gamma rays emanating from the halo of the Milky Way. What is more, Totani’s analysis suggests that the energy spectrum of the excess radiation (from about 10–100 GeV) is consistent with hypothetical WIMP annihilation processes.
“If this is correct, to the extent of my knowledge, it would mark the first time humanity has ‘seen’ dark matter,” says Totani. “This signifies a major development in astronomy and physics,” he adds.
While Totani is confident of his analysis, his conclusion must be verified independently. Furthermore, work will be needed to rule out conventional astrophysical sources of the excess radiation.
Catherine Heymans, who is Astronomer Royal for Scotland, told Physics World: “I think it’s a really nice piece of work, and exactly what should be happening with the Fermi data”. Heymans describes Totani’s paper as “well written and thorough”. The research is described in the Journal of Cosmology and Astroparticle Physics.
Researchers in the US have shed new light on the puzzling and complex flight physics of creatures such as hummingbirds, bumblebees and dragonflies that flap their wings to hover in place. According to an interdisciplinary team at the University of Cincinnati, the mechanism these animals deploy can be described by a very simple, computationally basic, stable and natural feedback mechanism that operates in real time. The work could aid the development of hovering robots, including those that could act as artificial pollinators for crops.
If you’ve ever watched a flapping insect or hummingbird hover in place – often while engaged in other activities such as feeding or even mating – you’ll appreciate how remarkable they are. To stay aloft and stable, these animals must constantly sense their position and motion and make corresponding adjustments to their wing flaps.
Feedback mechanism relies on two main components
Biophysicists have previously put forward many highly complex explanations for how they do this, but according to the Cincinnati team of Sameh Eisa and Ahmed Elgohary, some of this complexity is not necessary. Earlier this year, the pair developed their own mathematical and control theory based on a mechanism they call “extremum seeking for vibrational stabilization”.
Eisa describes this mechanism as “very natural” because it relies on just two main components. The first is the wing flapping motion itself, which he says is “naturally built in” for flapping creatures that use it to propel themselves. The second is a simple feedback mechanism involving sensations and measurements related to the altitude at which the creatures aim to stabilize their hovering.
The general principle, he continues, is that a system (in this case an insect or hummingbird) can steer itself towards a stable position by continuously adjusting a high-amplitude, high-frequency input control or signal (in this case, a flapping wing action). “This adjustment is simply based on the feedback of measurement (the insects’ perceptions) and stabilization (hovering) occurs when the system optimizes what it is measuring,” he says.
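To make the principle concrete, here is a minimal numerical sketch of extremum seeking. It is an illustrative toy, not the Cincinnati team’s model: the objective function, gains and frequencies below are hypothetical choices. A fast sinusoidal dither plays the role of wing flapping, and correlating the measured objective with that dither steers the control parameter towards the optimum, where the system settles.

```python
import numpy as np

# Minimal extremum-seeking sketch (illustrative only -- not the Cincinnati model).
# The objective J, the gains and the frequencies are hypothetical choices.

def J(theta):
    """Hypothetical measured objective, maximal at theta = 2.0 (the hover point)."""
    return -(theta - 2.0) ** 2

dt = 1e-3        # integration time step, s
omega = 50.0     # dither ("flapping") angular frequency, rad/s
a = 0.1          # dither amplitude
k = 5.0          # adaptation gain
theta_hat = 0.0  # initial estimate of the optimal control parameter

for step in range(200_000):          # simulate 200 s
    t = step * dt
    dither = a * np.sin(omega * t)
    y = J(theta_hat + dither)        # feedback: measure the perturbed objective
    # The product y*sin(omega*t) has a slowly varying part proportional to
    # dJ/dtheta, so integrating it performs gradient ascent on J.
    theta_hat += dt * k * y * np.sin(omega * t)

print(f"estimated optimum: {theta_hat:.3f} (true optimum: 2.000)")
```

Running the loop drives theta_hat towards 2.0 with only a small residual ripple from the dither – the system never computes a gradient explicitly; it simply optimizes what it measures.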
As well as being relatively easy to describe, Eisa tells Physics World that this mechanism is biologically plausible and computationally basic, dramatically simplifying the physics of hovering. “It is also categorically different from all available results and explanations in the literature for how stable hovering by insects and hummingbirds can be achieved,” he adds.
The researchers and colleagues. (Courtesy: S Eisa)
Interdisciplinary work
In the latest study, which is detailed in Physical Review E, the researchers compared their simulation results to reported biological data on a hummingbird and five flapping insects (a bumblebee, a cranefly, a dragonfly, a hawkmoth and a hoverfly). They found that their simulation fit the data very closely. They also ran an experiment on a flapping, light-sensing robot and observed that it behaved like a moth: it elevated itself to the level of the light source and then stabilized its hovering motion.
Eisa says he has always been fascinated by such optimized biological behaviours. “This is especially true for flyers, where mistakes in execution could potentially mean death,” he says. “The physics behind the way they do it is intriguing and it probably needs elegant and sophisticated mathematics to be described. However, the hovering creatures appear to be doing this very simply and I found discovering the secret of this puzzle very interesting and exciting.”
Eisa adds that this element of the work ended up being very interdisciplinary, and both his own PhD in applied mathematics and the aerospace engineering background of Elgohary came in very useful. “We also benefited from lengthy discussions with a biologist colleague who was a reviewer of our paper,” Eisa says. “Luckily, they recognized the value of our proposed technique and ended up providing us with very valuable inputs.”
Eisa thinks the work could open up new lines of research in several areas of science and engineering. “For example, it opens up new ideas in neuroscience and animal sensory mechanisms and could almost certainly be applied to the development of airborne robotics and perhaps even artificial pollinators,” he says. “The latter might come in useful in the future given the high rate of death many species of pollinating insects are encountering today.”
This episode of the Physics World Weekly podcast features Tim Hsieh of Canada’s Perimeter Institute for Theoretical Physics. We explore some of today’s hottest topics in quantum science and technology – including topological phases of matter, quantum error correction and quantum simulation.
Our conversation begins with an exploration of the quirky properties of quantum matter and how these can be exploited to create quantum technologies. We look at the challenges that must be overcome to create large-scale quantum computers, and Hsieh reveals which problem he would solve first if he had access to a powerful quantum processor.
This interview was recorded earlier this autumn when I had the pleasure of visiting the Perimeter Institute and speaking to four physicists about their research. This is the third of those conversations to appear on the podcast.
Generative classification The CytoDiffusion classifier accurately identifies a wide range of blood cell appearances and detects unusual or rare blood cells that may indicate disease. The diagonal grid elements display original images of each cell type, while the off-diagonal elements show heat maps that provide insight into the model’s decision-making rationale. (Courtesy: Simon Deltadahl)
The shape and structure of blood cells provide vital indicators for diagnosis and management of blood disease and disorders. Recognizing subtle differences in the appearance of cells under a microscope, however, requires the skills of experts with years of training, motivating researchers to investigate whether artificial intelligence (AI) could help automate this onerous task. A UK-led research team has now developed a generative AI-based model, known as CytoDiffusion, that characterizes blood cell morphology with greater accuracy and reliability than human experts.
Conventional discriminative machine learning models can match human performance at classifying cells in blood samples into predefined classes. But discriminative models, which learn to recognize cell images based on expert labels, struggle with never-before-seen cell types and images from differing microscopes and staining techniques.
To address these shortfalls, the team – headed up at the University of Cambridge, University College London and Queen Mary University of London – created CytoDiffusion around a diffusion-based generative AI classifier. Rather than just learning to separate cell categories, CytoDiffusion models the full range of blood cell morphologies to provide accurate classification with robust anomaly detection.
“Our approach is motivated by the desire to achieve a model with superhuman fidelity, flexibility and metacognitive awareness that can capture the distribution of all possible morphological appearances,” the researchers write.
Authenticity and accuracy
For AI-based analysis to be adopted in the clinic, it’s essential that users trust a model’s learned representations. To assess whether CytoDiffusion could effectively capture the distribution of blood cell images, the team used it to generate synthetic blood cell images. Analysis by experienced haematologists revealed that these synthetic images were near-indistinguishable from genuine images, showing that CytoDiffusion genuinely learns the morphological distribution of blood cells rather than using artefactual shortcuts.
The researchers used multiple datasets to develop and evaluate their diffusion classifier, including CytoData, a custom dataset containing more than half a million anonymized cell images from almost 3000 blood smear slides. In standard classification tasks across these datasets, CytoDiffusion achieved state-of-the-art performance, matching or exceeding the capabilities of traditional discriminative models.
Effective diagnosis from blood smear samples also requires the ability to detect rare or previously unseen cell types. The researchers evaluated CytoDiffusion’s ability to detect blast cells (immature blood cells) in the test datasets. Blast cells are associated with blood malignancies such as leukaemia, and high detection sensitivity is essential to minimize false negatives.
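For context, sensitivity and specificity are the standard measures of missed detections and false alarms (general definitions, not specific to this study):

\[
\mathrm{sensitivity} = \frac{TP}{TP + FN}, \qquad \mathrm{specificity} = \frac{TN}{TN + FP},
\]

where TP, FN, TN and FP count true positives, false negatives, true negatives and false positives; a sensitivity close to 1 therefore means very few blast cells are missed.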
In one dataset, CytoDiffusion detected blast cells with sensitivity and specificity of 0.905 and 0.962, respectively. In contrast, a discriminative model exhibited a poor sensitivity of 0.281. In datasets with erythroblasts as the abnormal cells, CytoDiffusion again outperformed the discriminative model, demonstrating that it can detect abnormal cell types not present in its training data, with the high sensitivity required for clinical applications.
Robust model
It’s important that a classification model is robust to different imaging conditions and can function with sparse training data, as commonly found in clinical applications. When trained and tested on diverse image datasets (different hospitals, microscopes and staining procedures), CytoDiffusion achieved state-of-the-art accuracy in all cases. Likewise, after training on limited subsets of 10, 20 and 50 images per class, CytoDiffusion consistently outperformed discriminative models, particularly in the most data-scarce conditions.
Another essential feature of clinical classification tasks, whether performed by a human or an algorithm, is knowing the uncertainty in the final decision. The researchers developed a framework for evaluating uncertainty and showed that CytoDiffusion produced superior uncertainty estimates to human experts. With uncertainty quantified, cases with high certainty could be processed automatically, with uncertain cases flagged for human review.
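As an illustration of how such a triage step might look in practice – a minimal sketch with a hypothetical threshold, not code from the CytoDiffusion pipeline – predictions could be split by their uncertainty score:

```python
# Illustrative triage rule (hypothetical threshold; not from the CytoDiffusion paper).
UNCERTAINTY_THRESHOLD = 0.2  # assumed cut-off, chosen on a validation set

def triage(predictions):
    """Split model outputs into auto-reported cases and cases for human review."""
    auto_report, needs_review = [], []
    for case_id, label, uncertainty in predictions:
        if uncertainty <= UNCERTAINTY_THRESHOLD:
            auto_report.append((case_id, label))
        else:
            needs_review.append((case_id, label, uncertainty))
    return auto_report, needs_review

# Example: two confident calls are auto-reported, one uncertain call is flagged.
preds = [("case-1", "neutrophil", 0.03), ("case-2", "blast", 0.45), ("case-3", "lymphocyte", 0.08)]
auto, review = triage(preds)
print(auto)    # [('case-1', 'neutrophil'), ('case-3', 'lymphocyte')]
print(review)  # [('case-2', 'blast', 0.45)]
```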
“When we tested its accuracy, the system was slightly better than humans,” says first author Simon Deltadahl from the University of Cambridge in a press statement. “But where it really stood out was in knowing when it was uncertain. Our model would never say it was certain and then be wrong, but that is something that humans sometimes do.”
Finally, the team demonstrated CytoDiffusion’s ability to create heat maps highlighting regions that would need to change for an image to be reclassified. This feature provides insight into the model’s decision-making process and shows that it understands subtle differences between similar cell types. Such transparency is essential for clinical deployment of AI, making models more trustworthy as practitioners can verify that classifications are based on legitimate morphological features.
“The true value of healthcare AI lies not in approximating human expertise at lower cost, but in enabling greater diagnostic, prognostic and prescriptive power than either experts or simple statistical models can achieve,” adds co-senior author Parashkev Nachev from University College London.
Almost every image that will be taken by future space observatories in low-Earth orbit could be tainted due to light contamination from satellites. That is according to a new analysis from researchers at NASA, which stresses that light pollution from satellites orbiting Earth must be reduced to guarantee astronomical research is not affected.
The number of satellites orbiting Earth has increased from about 2000 in 2019 to 15 000 today. Many of these are part of so-called mega-constellations that provide services such as Internet coverage around the world, including in areas that were previously unable to access it. Examples of such constellations include SpaceX’s Starlink as well as Amazon’s Kuiper and Eutelsat’s OneWeb.
Many of these mega-constellations share the same space as space-based observatories such as NASA’s Hubble Space Telescope. This means that the telescopes can capture streaks of reflected light from the satellites that render the images or data completely unusable for research purposes. That is despite anti-reflective coating that is applied to some newer satellites in SpaceX’s Starlink constellation, for example.
Previous work has explored the impact of such satellite constellations on ground-based astronomy, both optical and radio. Yet their impact on telescopes in space has been largely overlooked.
To find out more, Alejandro Borlaff from NASA’s Ames Research Center and colleagues simulated the view from four space-based telescopes: Hubble and the near-infrared observatory SPHEREx, which launched in 2025, as well as the European Space Agency’s proposed near-infrared ARRAKIHS mission and China’s planned Xuntian telescope.
These observatories operate, or will operate, at altitudes of between 400 and 800 km above the Earth’s surface.
The authors found that if the population of mega-constellation satellites grows to the 56 000 that is projected by the end of the decade, it would contaminate about 39.6% of Hubble’s images and 96% of images from the other three telescopes.
Borlaff and colleagues predict that the average number of satellites observed per exposure would be 2.14 for Hubble, 5.64 for SPHEREx, 69 for ARRAKIHS, and 92 for Xuntian.
The authors note that one solution could be to deploy satellites at altitudes below those at which the telescopes operate, which would make the satellites appear about four magnitudes fainter. The downside is that emissions from these lower satellites could have implications for Earth’s ozone layer.
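Four magnitudes is a large factor: on the astronomical magnitude scale, a difference Δm corresponds to a brightness ratio of

\[ 10^{0.4\,\Delta m} = 10^{0.4 \times 4} \approx 40, \]

so such satellites would appear roughly 40 times fainter to the telescopes above them.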
An ‘urgent need for dialogue’
Katherine Courtney, chair of the steering board for the Global Network on Sustainability in Space, says that without astronomy, the modern space economy “simply wouldn’t exist”.
“The space industry owes its understanding of orbital mechanics, and much of the technology development that has unlocked commercial opportunities for satellite operators, to astronomy,” she says. “The burgeoning growth of the satellite population brings many benefits to life on Earth, but the consequences for the future of astronomy must be taken into consideration.”
Courtney adds that there is now “an urgent need for greater dialogue and collaboration between astronomers and satellite operators to mitigate those impacts and find innovative ways for commercial and scientific operations to co-exist in space.”
Katherine Courtney, chair of the Global Network on Sustainability in Space, and Alice Gorman from Flinders University in Adelaide, Australia, appeared on a Physics World Live panel discussion about the impact of space debris that was held on 10 November. A recording of the event is available here.
Physicists have obtained the first detailed picture of the internal structure of radium monofluoride (RaF) thanks to the molecule’s own electrons, which penetrated the radium nucleus and interacted with its protons and neutrons. This behaviour is known as the Bohr-Weisskopf effect, and study co-leader Shane Wilkins says that this marks the first time it has been observed in a molecule. The measurements themselves, he adds, are an important step towards testing for nuclear symmetry violation, which might explain why our universe contains much more matter than antimatter.
RaF contains the radioactive isotope 225Ra, which is not easy to make, let alone measure. Producing it requires a large accelerator facility, where the molecules are created at high temperatures and emerge at high velocities, and it is only available in tiny quantities (less than a nanogram in total) for short periods (225Ra has a nuclear half-life of around 15 days).
“This imposes significant challenges compared to the study of stable molecules, as we need extremely selective and sensitive techniques in order to elucidate the structure of molecules containing 225Ra,” says Wilkins, who performed the measurements as a member of Ronald Fernando Garcia Ruiz’s research group at the Massachusetts Institute of Technology (MIT), US.
The team chose RaF despite these difficulties because theory predicts that it is particularly sensitive to small nuclear effects that break the symmetries of nature. “This is because, unlike most atomic nuclei, the radium atom’s nucleus is octupole deformed, which basically means it has a pear shape,” explains the study’s other co-leader, Silviu-Marian Udrescu.
Electrons inside the nucleus
In their study, which is detailed in Science, the MIT team and colleagues at CERN, the University of Manchester in the UK and KU Leuven in Belgium focused on RaF’s hyperfine structure. This structure arises from interactions between nuclear and electron spins, and studying it can reveal valuable clues about the nucleus. For example, the nuclear magnetic dipole moment can provide information on how protons and neutrons are distributed inside the nucleus.
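As a rough guide – this is the textbook atomic expression, not the full molecular treatment used in the paper – the magnetic hyperfine interaction shifts a level with nuclear spin I, electronic angular momentum J and total angular momentum F by

\[ \Delta E = \frac{A}{2}\left[ F(F+1) - I(I+1) - J(J+1) \right], \]

where the hyperfine constant A encodes the nuclear magnetic moment. The Bohr-Weisskopf effect appears as a small correction to A that depends on how the nuclear magnetization is spread through the nucleus, which is what makes such measurements sensitive to the nucleus’s interior.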
In most experiments, physicists treat electron-nucleus interactions as taking place at (relatively) long ranges. With RaF, that’s not the case. Udrescu describes the radium atom’s electrons as being “squeezed” within the molecule, which increases the probability that they will interact with, and penetrate, the radium nucleus. This behaviour manifests itself as a slight shift in the energy levels of the radium atom’s electrons, and the team’s precision measurements – combined with state-of-the-art molecular structure calculations – confirm that this is indeed what happens.
“We see a clear breakdown of this [long-range interactions] picture because the electrons spend a significant amount of time within the nucleus itself due to the special properties of this radium molecule,” Wilkins explains. “The electrons thus act as highly sensitive probes to study phenomena inside the nucleus.”
Searching for violations of fundamental symmetries
According to Udrescu, the team’s work “lays the foundations for future experiments that use this molecule to investigate nuclear symmetry violation and test the validity of theories that go beyond the Standard Model of particle physics.” In this model, each of the matter particles we see around us – from baryons like protons to leptons such as electrons – should have a corresponding antiparticle that is identical in every way apart from its charge and magnetic properties (which are reversed).
The problem is that the Standard Model predicts that the Big Bang that formed our universe nearly 14 billion years ago should have generated equal amounts of antimatter and matter – yet measurements and observations made today reveal an almost entirely matter-based universe. Subtler differences between matter particles and their antimatter counterparts might explain why the former prevailed, so by searching for these differences, physicists hope to explain antimatter-matter asymmetry.
Wilkins says the team’s work will be important for future such searches in species like RaF. Indeed, Wilkins, who is now at Michigan State University’s Facility for Rare Isotope Beams (FRIB), is building a new setup to cool and slow beams of radioactive molecules to enable higher-precision spectroscopy of species relevant to nuclear structure, fundamental symmetries and astrophysics. His long-term goal, together with other members of the RaX collaboration (which includes FRIB and the MIT team as well as researchers at Harvard University and the California Institute of Technology), is to implement advanced laser-based techniques using radium-containing molecules.
A new, microscopic formulation of the second law of thermodynamics for coherently driven quantum systems has been proposed by researchers in Switzerland and Germany. The researchers applied their formulation to several canonical quantum systems, such as a three-level maser. They believe the result provides a tighter definition of entropy in such systems, and could form a basis for further exploration.
In any physical process, the first law of thermodynamics says that the total energy must always be conserved, with some converted to useful work and the remainder dissipated as heat. The second law of thermodynamics says that, in any allowed process, the total entropy must never decrease.
“I like to think of work being mediated by degrees of freedom that we control and heat being mediated by degrees of freedom that we cannot control,” explains theoretical physicist Patrick Potts of the University of Basel in Switzerland. “In the macroscopic scenario, for example, work would be performed by some piston – we can move it.” The heat, meanwhile, goes into modes such as phonons generated by friction.
Murky at small scales
This distinction, however, becomes murky at small scales: “Once you go microscopic everything’s microscopic, so it becomes much more difficult to say ‘what is it that you control – where is the work mediated – and what is it that you cannot control?’,” says Potts.
Potts and colleagues in Basel and at RWTH Aachen University in Germany examined the case of optical cavities driven by laser light, systems that can do work: “If you think of a laser as being able to promote a system from a ground state to an excited state, that’s very important to what’s being done in quantum computers, for example,” says Potts. “If you rotate a qubit, you’re doing exactly that.”
The light interacts with the cavity and makes an arbitrary number of bounces before leaking out. This emergent light is traditionally treated as heat in quantum simulations. However, it can still be partially coherent – if the cavity is empty, it can be just as coherent as the incoming light and can do just as much work.
In 2020, quantum optician Alexia Auffèves of Université Grenoble Alpes in France and colleagues noted that the coherent component of the light exiting a cavity could potentially do work. In the new study, the researchers embedded this in a consistent thermodynamic framework. They studied several examples and formulated physically consistent laws of thermodynamics.
In particular, they looked at the three-level maser, which is a canonical example of a quantum heat engine. However, it has generally been modelled semi-classically by assuming that the cavity contains a macroscopic electromagnetic field.
Work vanishes
“The old description will tell you that you put energy into this macroscopic field and that is work,” says Potts, “But once you describe the cavity quantum mechanically using the old framework then – poof! – the work is gone…Putting energy into the light field is no longer considered work, and whatever leaves the cavity is considered heat.”
The researchers’ new thermodynamic treatment allows them to treat the cavity quantum mechanically and to quantify the minimum entropy in the radiation that emerges – how much of the radiation must be converted to uncontrolled degrees of freedom that can do no useful work, and how much can remain coherent.
The researchers are now applying their formalism to study thermodynamic uncertainty relations as an extension of the traditional second law of thermodynamics. “It’s actually a trade-off between three things – not just efficiency and power, but fluctuations also play a role,” says Potts. “So the more fluctuations you allow for, the higher you can get the efficiency and the power at the same time. These three things are very interesting to look at with this new formalism because these thermodynamic uncertainty relations hold for classical systems, but not for quantum systems.”
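For context, the classical thermodynamic uncertainty relation referred to here is a known result for classical Markov processes (not a result of this paper): the relative fluctuations of a current J are bounded by the total entropy production Σ over the observation time,

\[ \frac{\mathrm{Var}(J)}{\langle J \rangle^{2}} \;\geq\; \frac{2 k_{\mathrm B}}{\Sigma}, \]

so suppressing fluctuations always costs dissipation in the classical case. The interest is in how far quantum coherence allows a system to beat this bound.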
“This [work] fits very well into a question that has been heavily discussed for a long time in the quantum thermodynamics community, which is how to properly define work and how to properly define useful resources,” says quantum theorist Federico Cerisola of the UK’s University of Exeter. “In particular, they very convincingly argue that, in the particular family of experiments they’re describing, there are resources that have been ignored in the past when using more standard approaches that can still be used for something useful.”
Cerisola says that, in his view, the logical next step is to propose a system – ideally one that can be implemented experimentally – in which radiation that would traditionally have been considered waste actually does useful work.
When I was five years old, my family moved into a 1930s semi-detached house with a long strip of garden. At the end of the garden was a miniature orchard of eight apple trees the previous owners had planted – and it was there that I, much like another significantly more famous physicist, learned an important lesson about gravity.
As I read in the shade of the trees, an apple would sometimes fall with a satisfying thunk into the soft grass beside me. Less satisfyingly, they sometimes landed on my legs, or even my head – and the big cooking apples really hurt. I soon took to sitting on old wooden pallets crudely wedged among the higher branches. It was not comfortable, but at least I could return indoors without bruises.
The effects of gravity become common sense so early in life that we rarely stop to think about them past childhood. In his new book Crush: Close Encounters with Gravity, James Riordon has decided to take us back to the basics of this most fundamental of forces. Indeed, he explores an impressively wide range of topics – from why we dream of falling and why giraffes should not exist (but do), to how black holes form and the existence of “Planet 9”.
Riordon, a physicist turned science writer, makes for a deeply engaging author. He is not afraid to put himself into the story, introducing difficult concepts through personal experience and explaining them with the help of everything including the kitchen sink, which in his hands becomes an analogue for a black hole.
Gravity as a subject can easily be both too familiar and too challenging. In Riordon’s words, “Things with mass attract each other. That’s really all there is to Newtonian gravity.” Albert Einstein’s theory of general relativity, by contrast, is so intricate that it takes years of university-level study to truly master. Riordon avoids both pitfalls: he manages to make the simple fascinating again, and the complex understandable.
He provides captivating insights into how gravity has shaped the animal kingdom, a perspective I had never much considered. Did you know that tree snakes have their hearts positioned closer to their heads than their land-based cousins? I certainly didn’t. The higher placement ensures a steady blood flow to the brain, even when the snake is climbing vertically. It is one of many examples that make you look again at the natural world with fresh eyes.
Riordon’s treatment of gravity in Einstein’s abstract space–time is equally impressive – perhaps unsurprisingly, as his previous books include Very Easy Relativity and Relatively Easy Relativity. He takes a careful, patient approach – though I have never before heard general relativity reduced to “space–time is squishy”. But why not? The phrase sticks and gives us a handhold as we scale the complications of the theory. For those who want to extend the challenge, a mathematical background to the theory is provided in an appendix, and every chapter is well referenced and accompanied by suggestions for further reading.
If anything, I found myself wanting more examples of gravity as experienced by humans and animals on Earth, as opposed to in the astronomical realm. I found these down-to-earth chapters the most fascinating: they formed a bridge between the vast and the local, reminding us that the same force that governs the orbits of galaxies also brings an apple to the ground. This may be a reaction felt only by astronomers like me, who already spend their days looking upward. I can easily see how the balance Riordon chose is necessary for someone without that background – and Einstein’s gravity does require galactic scales to appreciate, after all.
Crush is a generally uncomplicated and pleasurable read. The anecdotes can sometimes be a little long-winded and there are parts of the book that are not without challenge. But it is pitched perfectly for the curious general reader and even for those dipping their toes into popular science for the first time. I can imagine an enthusiastic A-level student devouring it; it is exactly the kind of book I would have loved at that age. Even if some of it would have gone over my head, Riordon’s enthusiasm and gift for storytelling would have kept me more than interested, as I sat up on that pallet in my favourite apple tree.
I left that house, and that tree, a long time ago, but just a few miles down the road from where I live now stands another, far more famous apple tree. In the garden of Woolsthorpe Manor near Grantham, Newton is said to have watched an apple fall. From that small event, he began to ask the questions that reshaped his and our understanding of the universe. Whether or not the story is true hardly matters – Newton was constantly inspired by the natural world, so it isn’t improbable, and that apple tree remains a potent symbol of curiosity and insight.
“[Newton] could tell us that an apple falls, and how quickly it will do it. As for the question of why it falls, that took Einstein to answer,” writes Riordon. Crush is a crisp and fresh tour through a continuum from orchards to observatories, showing that every planetary orbit, pulse of starlight and even every apple fall is part of the same wondrous story.
A new phase of water ice, dubbed ice XXI, has been discovered by researchers working at the European XFEL and PETRA III facilities. The ice, which exists at room temperature and is structurally distinct from all previously observed phases of ice, was produced by rapidly compressing water to high pressures of 2 GPa. The finding could shed light on how different ice phases form at high pressures, including on icy moons and planets.
On Earth, ice can take many forms, and its properties depend strongly on its structure. The main type of naturally-occurring ice is hexagonal ice (Ih), so-called because the water molecules arrange themselves in a hexagonal lattice (this is the reason why snowflakes have six-fold symmetry). However, under certain conditions – usually involving very high pressures and low temperatures – ice can take on other structures. Indeed, 20 different forms of ice have been identified so far, denoted by roman numerals (ice I, II, III and so on up to ice XX).
Pressures of up to 2 GPa allow ice to form even at room temperature
Researchers from the Korea Research Institute of Standards and Science (KRISS) have now produced a 21st form of ice by applying pressures of up to two gigapascals. Such high pressures are roughly 20 000 times higher than normal air pressure at sea level, and they allow ice to form even at room temperature – albeit only within a device known as a dynamic diamond anvil cell (dDAC) that is capable of producing such extremely high pressures.
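The comparison is a simple one:

\[ \frac{2\ \text{GPa}}{101.3\ \text{kPa}} = \frac{2\times 10^{9}\ \text{Pa}}{1.013\times 10^{5}\ \text{Pa}} \approx 2\times 10^{4}, \]

i.e. about 20 000 times atmospheric pressure at sea level.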
“In this special pressure cell, samples are squeezed between the tips of two opposing diamond anvils and can be compressed along a predefined pressure pathway,” explains Cornelius Strohm, a member of the DESY HIBEF team that set up the experiment using the High Energy Density (HED) instrument at the European XFEL.
Much more tightly packed molecules
The structure of ice XXI is different from all previously observed phases of ice because its molecules are much more tightly packed. This gives it the largest unit cell volume of all currently known types of ice, says KRISS scientist Geun Woo Lee. It is also metastable, meaning that it can exist even though another form of ice (in this case ice VI) would be more stable under the conditions in the experiment.
“This rapid compression of water allows it to remain liquid up to higher pressures, where it should have already crystallized to ice VI,” explains Lee. “Ice VI is an especially intriguing phase, thought to be present in the interior of icy moons such as Titan and Ganymede. Its highly distorted structure may allow complex transition pathways that lead to metastable ice phases.”
Ice XXI has a body-centred tetragonal crystal structure
To study how the new ice phase formed, the researchers rapidly compressed and decompressed the water sample over 1000 times in the diamond anvil cell while imaging it every microsecond using the European XFEL, which delivers X-ray pulses at megahertz rates. They found that the liquid water crystallizes into different structures depending on how strongly it is supercompressed.
The KRISS team then used the P02.2 beamline at PETRA III to determine that ice XXI has a body-centred tetragonal crystal structure with a large unit cell (a = b = 20.197 Å and c = 7.891 Å) at approximately 1.6 GPa. This unit cell contains 152 water molecules, resulting in a density of 1.413 g cm−3.
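Those numbers are self-consistent: the tetragonal cell volume is V = a²c ≈ (20.197 Å)² × 7.891 Å ≈ 3219 Å³, and with 152 water molecules of molar mass 18.02 g/mol the density works out as

\[ \rho = \frac{152 \times 18.02\ \text{g mol}^{-1}}{(6.022\times 10^{23}\ \text{mol}^{-1}) \times (3.219\times 10^{-21}\ \text{cm}^{3})} \approx 1.41\ \text{g cm}^{-3}, \]

in agreement with the quoted value.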
The experiments were far from easy, recalls Lee. Upon crystallization, ice XXI grows upwards (that is, in the vertical direction), which makes it difficult to precisely analyse its crystal structure. “The difficulty for us is to keep it stable for a long enough period to make precise structural measurements in a single-crystal diffraction study,” he says.
The multiple pathways of ice crystallization unearthed in this work, which is detailed in Nature Materials, imply that many more ice phases may exist. Lee says it is therefore important to analyse the mechanism behind the formation of these phases. “This could, for example, help us better understand the formation and evolution of these phases on icy moons or planets,” he tells Physics World.
Attosecond science is undoubtedly one of the fastest growing branches of physics today.
Its popularity was demonstrated by the award of the 2023 Nobel Prize in Physics to Anne L’Huillier, Paul Corkum and Ferenc Krausz for experimental methods that generate attosecond pulses of light for the study of electron dynamics in matter.
One of the most important processes in this field is dephasing. This happens when an electron loses its phase coherence because of interactions with its surroundings.
This loss of coherence can obscure the fine details of electron dynamics, making it harder to capture precise snapshots of these rapid processes.
The most common way to model this process in light-matter interactions is by using the relaxation time approximation. This approach greatly simplifies the picture as it avoids the need to model every single particle in the system.
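In its simplest form – a generic textbook statement rather than the Ottawa team’s specific implementation – the approximation replaces all collision and many-body terms in the equation of motion for the electron distribution f with a single decay towards equilibrium:

\[ \left(\frac{\partial f}{\partial t}\right)_{\text{coll}} \simeq -\,\frac{f - f_{0}}{\tau}, \]

where f₀ is the equilibrium distribution and τ is the dephasing (relaxation) time.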
Its use is fine for dilute gases, but it doesn’t work as well with intense lasers and denser materials, such as solids, because it greatly overestimates ionisation.
This is a significant problem as ionisation is the first step in many processes such as electron acceleration and high-harmonic generation.
To address this, a team led by researchers from the University of Ottawa has developed a new method that corrects for the overestimate.
By introducing a heat bath into the model they were able to represent the many-body environment that interacts with electrons, without significantly increasing the complexity.
This new approach should enable the identification of new effects in attosecond science or wherever strong electromagnetic fields interact with matter.
Describing the non-classical properties of a complex many-body system (such as entanglement or coherence) is an important part of quantum technologies.
An ideal tool for this task would work well with large systems and be both easily computable and easily measurable. Unfortunately, no such universal tool yet exists.
With this goal in mind a team of researchers – Marcin Płodzień and Maciej Lewenstein (ICFO, Barcelona, Spain) and Jan Chwedeńczuk (University of Warsaw, Poland) – began work on a special type of quantum state used in quantum computing – graph states.
These states can be visualised as graphs or networks where each vertex represents a qubit, and each edge represents an interaction between pairs of qubits.
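A minimal sketch of the standard construction (not the authors’ formalism) makes this concrete: prepare every qubit in the |+> state and apply a controlled-Z gate across each edge. The function name and the four-qubit ring example below are purely illustrative.

import numpy as np

def graph_state(n_qubits, edges):
    # Start every qubit in |+>: an equal superposition over all bitstrings
    psi = np.ones(2**n_qubits) / np.sqrt(2**n_qubits)
    # Apply a controlled-Z gate across each edge (i, j): flip the sign
    # of every basis state in which both qubits are 1
    for i, j in edges:
        for basis in range(2**n_qubits):
            if (basis >> i) & 1 and (basis >> j) & 1:
                psi[basis] *= -1
    return psi

# Example: the four-qubit ring (cycle) graph
psi = graph_state(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
print(round(float(np.vdot(psi, psi)), 6))  # normalisation check: 1.0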
The team studied four different shapes of graph states using new mathematical tools they developed. They found that one of these in particular, the Turán graph, could be very useful in quantum metrology.
Their method is (relatively) straightforward and does not require many assumptions. This means that it could be applied to any shape of graph beyond the four studied here.
The results will be useful in various quantum technologies wherever precise knowledge of many-body quantum correlations is necessary.
Higher levels of carbon dioxide (CO2) in the Earth’s atmosphere could harm radio communications by enhancing a disruptive effect in the ionosphere. According to researchers at Kyushu University, Japan, who modelled the effect numerically for the first time, this little-known consequence of climate change could have significant impacts on shortwave radio systems such as those employed in broadcasting, air traffic control and navigation.
“While increasing CO2 levels in the atmosphere warm the Earth’s surface, they actually cool the ionosphere,” explains study leader Huixin Liu of Kyushu’s Faculty of Science. “This cooling doesn’t mean it is all good: it decreases the air density in the ionosphere and accelerates wind circulation. These changes affect the orbits and lifespan of satellites and space debris and also disrupt radio communications through localized small-scale plasma irregularities.”
The sporadic E-layer
One such irregularity is a dense but transient layer of metal ions that forms at altitudes of 90–120 km above the Earth’s surface. This sporadic E-layer (Es), as it is known, is roughly 1–5 km thick and can stretch from tens to hundreds of kilometres in the horizontal direction. Its density is highest during the day, and it peaks around the time of the summer solstice.
The formation of the Es is hard to predict, and the mechanisms behind it are not fully understood. However, the prevailing “wind shear” theory suggests that vertical shears in horizontal winds, combined with the Earth’s magnetic field, cause metallic ions such as Fe+, Na+ and Ca+ to converge in the ionospheric dynamo region and form thin layers of enhanced ionization. The ions themselves largely come from metals in meteoroids that enter the Earth’s atmosphere and disintegrate at altitudes of around 80–100 km.
Effects of increasing CO2 concentrations
While previous research has shown that increases in CO2 trigger atmospheric changes on a global scale, relatively little is known about how these increases affect smaller-scale ionospheric phenomena like the Es. In the new work, which is published in Geophysical Research Letters, Liu and colleagues used a whole-atmosphere model to simulate the upper atmosphere at two different CO2 concentrations: 315 ppm and 667 ppm.
“The 315 ppm represents the CO2 concentration in 1958, the year in which recordings started at the Mauna Loa observatory, Hawaii,” Liu explains. “The 667 ppm represents the projected CO2 concentration for the year 2100, based on a conservative assumption that the increase in CO2 is constant at a rate of around 2.5 ppm/year since 1958.”
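That assumption is easy to check: a constant rise from 315 ppm in 1958 to 667 ppm in 2100 corresponds to

\[ \frac{667 - 315}{2100 - 1958} \approx 2.5\ \text{ppm per year}, \]

consistent with the stated rate.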
The researchers then evaluated how these different CO2 levels influence a phenomenon known as vertical ion convergence (VIC) which, according to the wind shear theory, drives the Es. The simulations revealed that the higher the atmospheric CO2 levels, the greater the VIC at altitudes of 100-120 km. “What is more, this increase is accompanied by the VIC hotspots shifting downwards by approximately 5 km,” says Liu. “The VIC patterns also change dramatically during the day and these diurnal variability patterns continue into the night.”
According to the researchers, the physical mechanism underlying these changes depends on two factors. The first is reduced collisions between metallic ions and the neutral atmosphere, a direct result of cooling in the ionosphere. The second is changes in the zonal wind shear, which are likely caused by long-term trends in atmospheric tides.
“These results are exciting because they show that the impacts of CO2 increase can extend all the way from Earth’s surface to altitudes at which HF and VHF radio waves propagate and communications satellites orbit,” Liu tells Physics World. “This may be good news for ham radio amateurs, as you will likely receive more signals from faraway countries more often. For radio communications, however, especially at HF and VHF frequencies employed for aviation, ships and rescue operations, it means more noise and frequent disruption in communication and hence safety. The telecommunications industry might therefore need to adjust their frequencies or facility design in the future.”
Switchable camouflage: a toy gecko featuring a flexible layer of the thermally tunable colour coating appears greenish blue at room temperature (left); upon heating (right), its body changes to a dark magenta colour. (Courtesy: Aritra Biswa)
Structural colours – created using nanostructures that scatter and reflect specific wavelengths of light – offer a non-toxic, fade-resistant and environmentally friendly alternative to chemical dyes. Large-scale production of structural colour-based materials, however, has been hindered by fabrication challenges and a lack of effective tuning mechanisms.
In a step towards commercial viability, a team at the University of Central Florida has used vanadium dioxide (VO2) – a material with temperature-sensitive optical and structural properties – to generate tunable structural colour on both rigid and flexible surfaces, without requiring complex nanofabrication.
Senior author Debashis Chanda and colleagues created their structural colour platform by stacking a thin layer of VO2 on top of a thick, reflective layer of aluminium to form a tunable thin-film cavity. At specific combinations of VO2 grain size and layer thickness this structure strongly absorbs certain frequency bands of visible light, producing the appearance of vivid colours.
The key enabler of this approach is the fact that at a critical transition temperature, VO2 reversibly switches from insulator to metal, accompanied by a change in its crystalline structure. This phase change alters the interference conditions in the thin-film cavity, varying the reflectance spectra and changing the perceived colour. Controlling the thickness of the VO2 layer enables the generation of a wide range of structural colours.
The bilayer structures are grown via a combination of magnetron sputtering and electron-beam deposition, techniques compatible with large-scale production. By adjusting the growth parameters during fabrication, the researchers could broaden the colour palette and control the temperature at which the phase transition occurs. To expand the available colour range further, they added a third ultrathin layer of high-refractive index titanium dioxide on top of the bilayer.
The researchers describe a range of applications for their flexible coloration platform, including a colour-tunable maple leaf pattern, a thermal sensing label on a coffee cup and tunable structural coloration on flexible fabrics. They also demonstrated its use on complex shapes, such as a toy gecko with a flexible tunable colour coating and an embedded heater.
“These preliminary demonstrations validate the feasibility of developing thermally responsive sensors, reconfigurable displays and dynamic colouration devices, paving the way for innovative solutions across fields such as wearable electronics, cosmetics, smart textiles and defence technologies,” the team concludes.
Susumu Noda of Kyoto University has won the 2026 Rank Prize for Optoelectronics for the development of the photonic-crystal surface-emitting laser (PCSEL). Noda has spent more than 25 years developing this new form of laser, which has potential applications in high-precision manufacturing as well as in LIDAR technologies.
In the decades since the first laser was demonstrated in 1960, optical fibre lasers and semiconductor lasers have emerged as competing technologies.
A semiconductor laser works by pumping an electrical current into a region where an n-doped (excess of electrons) and a p-doped (excess of “holes”) semiconductor material meet, causing electrons and holes to combine and release photons.
Semiconductor lasers have several advantages, including compactness, high “wallplug” efficiency and ruggedness, but they fall short in other areas, notably brightness and functionality.
As a result, conventional semiconductor lasers require external optical and mechanical elements to improve their performance, which leads to large and impractical systems.
‘A great honour’
In the late 1990s, Noda began working on a new type of semiconductor laser that could challenge the performance of optical fibre lasers. These so-called PCSELs employ a photonic crystal layer in between the semiconductor layers. Photonic crystals are nanostructured materials in which a periodic variation of the dielectric constant — formed, for example, by a lattice of holes — creates a photonic band-gap.
Noda and his research group made a series of breakthroughs in the technology, such as demonstrating control of polarization and beam shape by tailoring the photonic crystal structure, and extending operation into blue–violet wavelengths.
The resulting PCSELs emit a high-quality, symmetric beam with narrow divergence and boast high brightness and high functionality while maintaining the benefits of conventional semiconductor lasers. In 2013, 0.2 W PCSELs became available, and a few years later watt-class devices became operational.
Noda says that it is “a great honour and a surprise” to receive the prize. “I am extremely happy to know that more than 25 years of research on photonic-crystal surface-emitting lasers has been recognized in this way,” he adds. “I do hope to continue to further develop the research and its social implementation.”
Susumu Noda received his BSc and then PhD in electronics from Kyoto University in 1982 and 1991, respectively. From 1984 he also worked at Mitsubishi Electric Corporation, before joining Kyoto University in 1988 where he is currently based.
Founded in 1972 by the British industrialist and philanthropist Lord J Arthur Rank, the Rank Prize is awarded biennially in nutrition and optoelectronics. The 2026 Rank Prize for Optoelectronics, which has a cash award of £100 000, will be awarded formally at an event held in June.
As a theoretical and mathematical physicist at Imperial College London, UK, Bhavin Khatri spent years using statistical physics to understand how organisms evolve. Then the COVID-19 pandemic struck, and like many other scientists, he began searching for ways to apply his skills to the crisis. This led him to realize that the equations he was using to study evolution could be repurposed to model the spread of the virus – and, crucially, to understand how it could be curtailed.
In a paper published in EPL, Khatri models the spread of a SARS-CoV-2-like virus using branching process theory, which he’d previously used to study how advantageous alleles (variations in a genetic sequence) become more prevalent in a population. He then uses this model to assess the duration that interventions such as lockdowns would need to be applied in order to completely eliminate infections, with the strength of the intervention measured in terms of the number of people each infected person goes on to infect (the virus’ effective reproduction number, R).
Tantalizingly, the paper concludes that applying such interventions worldwide in June 2020 could have eliminated the COVID virus by January 2021, several months before the widespread availability of vaccines reduced its impact on healthcare systems and led governments to lift restrictions on social contact. Physics World spoke to Khatri to learn more about his research and its implications for future pandemics.
What are the most important findings in your work?
One important finding is that we can accurately calculate the distribution of times required for a virus to become extinct by making a relatively simple approximation. This approximation amounts to assuming that people have relatively little population-level “herd” immunity to the virus – exactly the situation that many countries, including the UK, faced in March 2020.
Making this approximation meant I could reduce the three coupled differential equations of the well-known SIR model (which models pandemics via the interplay between Susceptible, Infected and Recovered individuals) to a single differential equation for the number of infected individuals in the population. This single equation turned out to be the same one that physics students learn when studying radioactive decay. I then used the discrete stochastic version of exponential decay and standard approaches in branching process theory to calculate the distribution of extinction times.
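In outline – a sketch of the reduction described above, written in standard SIR notation rather than the paper’s exact symbols – when nearly everyone remains susceptible, S ≈ N, the equation for the infected population collapses to

\[ \frac{\mathrm{d}I}{\mathrm{d}t} = \beta \frac{S}{N} I - \gamma I \;\approx\; (\beta - \gamma)\,I = \gamma (R - 1)\,I, \]

which for R = β/γ < 1 is simply exponential decay, I(t) = I₀ e^{-γ(1-R)t} – the same equation that governs radioactive decay.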
Simulation trajectories: (a) the decline in the number of infected individuals over time; (b) the probability density of extinction times for the same parameters as in (a), showing that the most likely extinction times are measured in months. (Courtesy: Bhavin S Khatri 2025 EPL 152 11003, DOI: 10.1209/0295-5075/ae0c31, CC BY 4.0 https://creativecommons.org/licenses/by/4.0/)
Alongside the formal theory, I also used my experience in population genetic theory to develop an intuitive approach for calculating the mean of this extinction time distribution. In population genetics, when a mutation is sufficiently rare, changes in its number of copies in the population are dominated by randomness. This is true even if the mutation has a large selective advantage: it has to grow by chance to sufficient critical size – on the order of 1/(selection strength) – for selection to take hold.
The same logic works in reverse when applied to a declining number of infections. Initially, they will decline deterministically, but once they go below a threshold number of individuals, changes in infection numbers become random. Using the properties of such random walks, I calculated an expression for the threshold number and the mean duration of the stochastic phase. These agree well with the formal branching process calculation.
In practical terms, the main result of this theoretical work is to show that for sufficiently strong lockdowns (where, on average, only one of every two infected individuals goes on to infect another person, R=0.5), this distribution of extinction times was narrow enough to ensure that the COVID pandemic virus would have gone extinct in a matter of months, or at most a year.
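As a rough numerical illustration of that claim – a toy branching-process simulation, not the model in the paper, with the initial 1000 infections, six-day generation interval and 2000 repeat runs all chosen purely for illustration – each generation’s new infections can be drawn from a Poisson distribution with mean R times the current case count:

import numpy as np

rng = np.random.default_rng(1)

def extinction_time(I0=1000, R=0.5, generation_days=6):
    """Toy branching process: each generation's new infections are
    Poisson-distributed with mean R times the current case count."""
    cases, generations = I0, 0
    while cases > 0:
        cases = rng.poisson(R * cases)
        generations += 1
    return generations * generation_days

times = np.array([extinction_time() for _ in range(2000)])
print(f"mean time to extinction: {times.mean():.0f} days "
      f"(~{times.mean()/30:.1f} months)")

With these toy numbers the epidemic dies out in roughly two to three months, in line with the conclusion that extinction times under a strong intervention are measured in months rather than years.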
How realistic is this counterfactual scenario of eliminating SARS-CoV-2 within a year?
Leaving politics and the likelihood of social acceptance aside for the moment, if a sufficiently strong lockdown could have been maintained for a period of roughly six months across the globe, then I am confident that the virus could have been reduced to very low levels, or even made extinct.
The question then is: is this a stable situation? From the perspective of a single nation, if the rest of the world still has infections, then that nation either needs to maintain its lockdown or be prepared to re-impose it if there are new imported cases. From a global perspective, a COVID-free world should be a stable state, unless an animal reservoir of infections causes re-infections in humans.
Modelling the decline of a virus: Theoretical physicist and biologist Bhavin Khatri. (Courtesy: Bhavin Khatri)
As for the practical success of such a strategy, that depends on politics and the willingness of individuals to remain in lockdown. Clearly, this is not in the model. One thing I do discuss, though, is that this strategy becomes far more difficult once more infectious variants of SARS-CoV-2 evolve. However, the problem I was working on before this one (which I eventually published in PNAS) concerned the probability of evolutionary rescue or resistance, and that work suggests that evolution of new COVID variants reduces significantly when there are fewer infections. So an elimination strategy should also be more robust against the evolution of new variants.
What lessons would you like experts (and the public) to take from this work when considering future pandemic scenarios?
I’d like them to conclude that pandemics with similar properties are, in principle, controllable to small levels of infection – or complete extinction – on timescales of months, not years, and that controlling them minimizes the chance of new variants evolving. So, although the question of the political and social will to enact such an elimination strategy is not in the scope of the paper, I think if epidemiologists, policy experts, politicians and the public understood that lockdowns have a finite time horizon, then it is more likely that this strategy could be adopted in the future.
I should also say that my work makes no comment on the social harms of lockdowns, which shouldn’t be minimized and would need to be weighed against the potential benefits.
What do you plan to do next?
I think the most interesting next avenue will be to develop theory that lets us better understand the stability of the extinct state at the national and global level, under various assumptions about declining infections in other countries that adopted different strategies and the role of an animal reservoir.
It would also be interesting to explore the role of “superspreaders”, or infected individuals who infect many other people. There’s evidence that many infections spread primarily through relatively few superspreaders, and heuristic arguments suggest that taking this into account would decrease the time to extinction compared to the estimates in this paper.
I’ve also had a long-term interest in understanding the evolution of viruses through the lens of what are known as genotype-phenotype maps, which capture the non-trivial and often redundant mapping from genetic sequences to function, and in which the role of stochasticity in evolution can be described using analogies from statistical physics. For the evolution of the antibodies that recognize viral antigens, this would be a driven system, and theories of non-equilibrium statistical physics could play a role in answering questions about the evolution of new variants.