An effect first observed decades ago by Nobel laureate Arthur Ashkin has been used to fine-tune the electrical charge on objects held in optical tweezers. Developed by an international team led by Scott Waitukaitis of the Institute of Science and Technology Austria, the new technique could improve our understanding of aerosols and clouds.
Optical tweezers use focused laser beams to trap and manipulate small objects about 100 nm to 1 micron in size. Their precision and versatility have made them a staple across fields from quantum optics to biochemistry.
Ashkin shared the 2018 Nobel prize for inventing optical tweezers, and in the 1970s he noticed that trapped objects can become electrically charged by the laser light. “However, his paper didn’t get much attention, and the observation has essentially gone ignored,” explains Waitukaitis.
Waitukaitis’ team rediscovered the effect while using optical tweezers to study how charges build up in the ice crystals accumulating inside clouds. In their experiment, micron-sized silica spheres stood in for the ice, but Ashkin’s charging effect got in their way.
Bummed out
“Our goal has always been to study charged particles in air in the context of atmospheric physics – in lightning initiation or aerosols, for example,” Waitukaitis recalls. “We never intended for the laser to charge the particle, and at first we were a bit bummed out that it did so.”
Their next thought was that they had discovered a new and potentially useful phenomenon. “Out of due diligence we of course did a deep dive into the literature to be sure that no one had seen it, and that’s when we found the old paper from Ashkin,” says Waitukaitis.
In 1976, Ashkin described how optically trapped objects become charged through a nonlinear process whereby electrons absorb two photons simultaneously. These electrons can acquire enough energy to escape the object, leaving it with a positive charge.
Yet beyond this insight, Ashkin “wasn’t able to make much sense of the effect,” Waitukaitis explains. “I have the feeling he found it an interesting curiosity and then moved on.”
Shaking and scattering
To study the effect in more detail, the team modified their optical tweezers setup so its two copper lens holders doubled as electrodes, allowing them to apply an electric field along the axis of the confining, opposite-facing laser beams. If the silica sphere became charged, this field would cause it to shake, scattering a portion of the laser light back towards each lens.
The researchers picked off this portion of the scattered light using a beam splitter, then diverted it to a photodiode, allowing them to track the sphere’s position. Finally, they converted the measured amplitude of the shaking particle into a real-time charge measurement. This allowed them to track the relationship between the sphere’s charge and the laser’s tuneable intensity.
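The principle behind this conversion can be illustrated with a simple driven-oscillator model. The sketch below is only a rough illustration of that idea: it assumes the trapped sphere responds to the oscillating field as a damped harmonic oscillator, and every numerical value in it (sphere mass, trap frequency, damping rate, field amplitude and drive frequency) is an assumed placeholder rather than a parameter from the team’s experiment.

```python
import numpy as np

# Illustrative sketch: a charged sphere in an optical trap, driven by a field
# E0*cos(w*t), behaves as a driven, damped harmonic oscillator, so its measured
# oscillation amplitude scales linearly with its charge. All values are assumed.

m = 2.0e-15              # sphere mass in kg (assumed, ~micron-sized silica)
w0 = 2 * np.pi * 50e3    # trap resonance (angular) frequency (assumed)
gamma = 2 * np.pi * 5e3  # damping rate in air (assumed)
E0 = 1.0e4               # applied field amplitude in V/m (assumed)
w = 2 * np.pi * 10e3     # drive (angular) frequency (assumed)

def charge_from_amplitude(x_amp):
    """Invert |x| = q*E0 / (m*sqrt((w0^2 - w^2)^2 + (gamma*w)^2)) for the charge q."""
    response = m * np.sqrt((w0**2 - w**2)**2 + (gamma * w)**2)
    return x_amp * response / E0

# Example: a 5 nm oscillation amplitude extracted from the photodiode signal
q = charge_from_amplitude(5e-9)
print(f"inferred charge: {q:.2e} C (~{q / 1.602e-19:.0f} elementary charges)")
```

In practice the calibration would have to account for the trap’s measured stiffness and damping, but the linear amplitude-to-charge relationship is what makes a real-time readout possible.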
Their measurements confirmed Ashkin’s 1976 hypothesis that electrons in optically trapped objects undergo two-photon absorption, giving them enough energy to escape. Waitukaitis and colleagues improved on this model and showed how the charge on a trapped object can be controlled precisely by simply adjusting the laser’s intensity.
As for the team’s original research goal, the effect has actually been very useful for studying the behaviour of charged aerosols.
“We can get [an object] so charged that it shoots off little ‘microdischarges’ from its surface due to breakdown of the air around it, involving just a few or tens of electron charges at a time,” Waitukaitis says. “This is going to be really cool for studying electrostatic phenomena in the context of particles in the atmosphere.”
In the early universe, moments after the Big Bang and cosmic inflation, clusters of exotic, massive particles could have collapsed to form bizarre objects called cannibal stars and boson stars. In turn, these could have then collapsed to form primordial black holes – all before the first elements were able to form.
This curious chain of events is predicted by a new model proposed by a trio of scientists at SISSA, the International School for Advanced Studies in Trieste, Italy.
Their proposal involves a hypothetical moment in the early universe called the early matter-dominated (EMD) epoch. This would have lasted only a few seconds after the Big Bang, but could have been dominated by exotic particles, such as the massive, supersymmetric particles predicted by string theory.
“There are no observations that hint at the existence of an EMD epoch – yet!” says SISSA’s Pranjal Ralegankar. “But many cosmologists are hoping that an EMD phase occurred because it is quite natural in many models.”
Some models of the early universe predict the formation of primordial black holes from quantum fluctuations in the inflationary field. Now, Ralegankar and his colleagues Daniele Perri and Takeshi Kobayashi propose a new and more natural pathway for forming primordial black holes via an EMD epoch.
They postulate that in the first second of existence, when the universe was small and incredibly hot, exotic massive particles emerged and clustered in dense haloes. The SISSA physicists propose that the haloes then collapsed into hypothetical objects called cannibal stars and boson stars.
Cannibal stars are powered by particles annihilating each other, which would have allowed the objects to resist further gravitational collapse for a few seconds. However, they would not have produced light like normal stars.
“The particles in a cannibal star can only talk to each other, which is why they are forced to annihilate each other to counter the immense pressure from gravity,” Ralegankar tells Physics World. “They are immensely hot, simply because the particles that we consider are so massive. The temperature of our cannibal stars can range from a few GeV to on the order of 10¹⁰ GeV. For comparison, the Sun is on the order of keV.”
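For readers unused to temperatures quoted in energy units, the conversion to kelvin is simply T = E/k_B. The short sketch below applies that conversion to the figures in the quote; it is purely a unit check, not part of the SISSA calculation.

```python
# Unit check: a temperature quoted in energy units converts to kelvin via T = E / k_B.

K_B_EV_PER_K = 8.617e-5  # Boltzmann constant in eV per kelvin

def kelvin(energy_ev):
    return energy_ev / K_B_EV_PER_K

print(f"1 keV    ~ {kelvin(1e3):.1e} K   (solar-interior scale)")
print(f"1 GeV    ~ {kelvin(1e9):.1e} K")
print(f"1e10 GeV ~ {kelvin(1e19):.1e} K")
```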
Boson stars, meanwhile, would be made from a pure Bose–Einstein condensate, a state of matter in which the individual particles act quantum mechanically as one.
Both the cannibal stars and boson stars would exist within larger haloes that would quickly collapse to form primordial black holes with masses about the same as asteroids (about 10¹⁴–10¹⁹ kg). All of this could have taken place just 10 s after the Big Bang.
Dark matter possibility
Ralegankar, Perri and Kobayashi point out that the total mass of primordial black holes that their model produces matches the amount of dark matter in the universe.
“Current observations rule out black holes to be dark matter, except in the asteroid-mass range,” says Ralegankar. “We showed that our models can produce black holes in that mass range.”
Richard Massey, who is a dark-matter researcher at Durham University in the UK, agrees that microlensing observations by projects such as the Optical Gravitational Lensing Experiment (OGLE) have ruled out a population of black holes with planetary masses, but not asteroid masses. However, Massey is doubtful that these black holes could make up dark matter.
“It would be pretty contrived for them to make up a large fraction of what we call dark matter,” he says. “It’s possible that dark matter could be these primordial black holes, but they’d need to have been created with the same mass no matter where they were and whatever environment they were in, and that mass would have to be tuned to evade current experimental evidence.”
In the coming years, upgrades to OGLE and the launch of NASA’s Roman Space Telescope should finally provide sensitivity to microlensing events produced by objects in the asteroid mass range, allowing researchers to settle the matter.
It is also possible that cannibal and boson stars exist today, produced by collapsing haloes of dark matter. But unlike those proposed for the early universe, modern cannibal and boson stars would be stable and long-lasting.
“Much work has already been done for boson stars from dark matter, and we are simply suggesting that future studies should also think about the possibility of cannibal stars from dark matter,” explains Ralegankar. “Gravitational lensing would be one way to search for them, and depending on models, maybe also gamma rays from dark-matter annihilation.”
The deliberate targeting of scientists in recent years has become one of the most disturbing, and overlooked, developments in modern conflict. In particular, Iranian physicists and engineers have been singled out for almost two decades, with sometimes fatal consequences. In 2007 Ardeshir Hosseinpour, a nuclear physicist at Shiraz University, died in mysterious circumstances that were widely attributed to poisoning or radioactive exposure.
Over the following years, at least five more Iranian researchers were killed. They include particle physicist Masoud Ali-Mohammadi, who was Iran’s representative at the Synchrotron-light for Experimental Science and Applications in the Middle East project. Known as SESAME, it is the only scientific project in the Middle East where Iran and Israel collaborate.
Others to have died include nuclear engineer Majid Shahriari, another Iranian representative at SESAME, and nuclear physicist Mohsen Fakhrizadeh, who were both killed by bombing or gunfire in Tehran. These attacks were never formally acknowledged, nor were they condemned by international scientific institutions. The message, however, was implicit: scientists in politically sensitive fields could be treated as strategic targets, even far from battlefields.
What began as covert killings of individual researchers has now escalated, dangerously, into open military strikes on academic communities. Israeli airstrikes on residential areas in Tehran and Isfahan during the 12-day conflict between the two countries in June led to at least 14 Iranian scientists and engineers and members of their families being killed. The scientists worked in areas such as materials science, aerospace engineering and laser physics. I believe this shift, from covert assassinations to mass casualties, crossed a line. It treats scientists as enemy combatants simply because of their expertise.
The assassinations of scientists are not just isolated tragedies; they are a direct assault on the global commons of knowledge, corroding both international law and international science. Unless the world responds, I believe the precedent being set will endanger scientists everywhere and undermine the principle that knowledge belongs to humanity, not the battlefield.
Drawing a red line
International humanitarian law is clear: civilians, including academics, must be protected. Targeting scientists based solely on their professional expertise undermines the Geneva Conventions and erodes the civilian–military distinction at the heart of international law.
Iran, whatever its politics, remains a member of the Nuclear Non-Proliferation Treaty and the International Atomic Energy Agency. Its scientists are entitled under international law to conduct peaceful research in medicine, energy and industry. Their work is no more inherently criminal than research that other countries carry out in artificial intelligence (AI), quantum technology or genetics.
If we normalize the preemptive assassination of scientists, what stops global rivals from targeting, say, AI researchers in Silicon Valley, quantum physicists in Beijing or geneticists in Berlin? Once knowledge itself becomes a liability, no researcher is safe. Equally troubling is the silence of the international scientific community: organizations such as the UN, UNESCO and the European Research Council, as well as leading national academies, have not condemned these killings, past or present.
Silence is not neutral. It legitimizes the treatment of scientists as military assets. It discourages international collaboration in sensitive but essential research and it creates fear among younger researchers, who may abandon high-impact fields to avoid risk. Science is built on openness and exchange, and when researchers are murdered for their expertise, the very idea of science as a shared human enterprise is undermined.
The assassinations are not solely Iran’s loss. The scientists killed were part of a global community: collaborators and colleagues in the pursuit of knowledge. Their deaths should alarm every nation and every institution that depends on research to confront global challenges, from climate change to pandemics.
I believe that international scientific organizations should act. At a minimum, they should publicly condemn the assassination of scientists and their families; support independent investigations under international law; and advocate for explicit protections for scientists and academic facilities in conflict zones.
Importantly, voices within Israel’s own scientific community can play a critical role too. Israeli academics, deeply committed to collaboration and academic freedom, understand the costs of blurring the boundary between science and war. Solidarity cannot be selective.
Recent events are a test case for the future of global science. If the international community tolerates the targeting of scientists, it sets a dangerous precedent that others will follow. What appears today as a regional assault on scientists from the Global South could tomorrow endanger researchers in China, Europe, Russia or the US.
Science without borders can only exist if scientists are recognized and protected as civilians without borders. That principle is now under direct threat and the world must draw a red line – killing scientists for their expertise is unacceptable. To ignore these attacks is to invite a future in which knowledge itself becomes a weapon, and the people who create it become expendable. That is a world no-one should accept.
In proton exchange membrane water electrolysis (PEMWE) systems, voltage cycles dropping below a threshold are associated with reversible performance improvements, which remain poorly understood despite being documented in the literature. The distinction between reversible and irreversible performance changes is crucial for accurate degradation assessments. One approach in the literature to explain this behaviour is the oxidation and reduction of iridium. The activity and stability of iridium-based electrocatalysts in PEMWE hinge on their oxidation state, which is influenced by the applied voltage. Yet the dynamic performance of full PEMWE cells remains under-explored, with the focus typically on stability rather than activity. This study systematically investigates reversible performance behaviour in PEMWE cells using Ir-black as the anodic catalyst. The results reveal a recovery effect when the lower voltage level drops below 1.5 V, with further enhancements observed as the voltage decreases, even for a holding time as short as 0.1 s. This reversible recovery is primarily driven by improved anode reaction kinetics, likely due to changing iridium oxidation states, and is supported by the agreement between the experimental data and a dynamic model that links iridium oxidation/reduction processes to performance metrics. The model makes it possible to distinguish between reversible and irreversible effects and enables the derivation of optimized operating schemes that exploit the recovery effect.
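To make the idea of a voltage-driven, reversible recovery more concrete, here is a deliberately simplified toy model – not the authors’ model – in which a single “oxidized fraction” of the iridium catalyst relaxes towards a voltage-dependent equilibrium and adds extra overpotential when the surface is more oxidized. The threshold, time constant and overpotential penalty are all assumed values chosen only for illustration.

```python
import numpy as np

# Toy sketch (not the authors' model): theta is the oxidized fraction of the
# iridium catalyst. It relaxes towards a voltage-dependent equilibrium, and a
# more oxidized surface is assumed to add extra kinetic overpotential.
# All parameters below are illustrative.

def theta_eq(v_cell, v_threshold=1.5):
    # Assumed equilibrium: strongly oxidized above the threshold, reduced below it
    return 1.0 if v_cell > v_threshold else 0.2

def simulate(voltage_profile, dt=0.01, tau=5.0):
    """Euler integration of d(theta)/dt = (theta_eq(V) - theta) / tau."""
    theta, history = 1.0, []
    for v in voltage_profile:
        theta += dt * (theta_eq(v) - theta) / tau
        history.append(theta)
    return np.array(history)

# Voltage cycle: hold at 2.0 V, dip to 1.3 V for one second, return to 2.0 V
t = np.arange(0, 60, 0.01)
v = np.where((t > 20) & (t < 21), 1.3, 2.0)

theta = simulate(v)
extra_overpotential_mv = 50 * theta  # assumed 50 mV penalty when fully oxidized
print(f"extra overpotential before the dip: {extra_overpotential_mv[1900]:.1f} mV")
print(f"extra overpotential just after it:  {extra_overpotential_mv[2200]:.1f} mV")
```

Even this crude picture reproduces the qualitative behaviour described above: a brief excursion below the threshold partially reduces the catalyst and temporarily lowers the overpotential, after which the effect reversibly fades.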
Tobias Krenz
Tobias Krenz is a simulation and modelling engineer at Siemens Energy in the Transformation of Industry business area, focusing on reducing energy consumption and carbon-dioxide emissions in industrial processes. He completed his PhD at Leibniz University Hannover in February 2025. He earned a degree from Berlin University of Applied Sciences in 2017 and an MSc from Technische Universität Darmstadt in 2020.
Alexander Rex
Alexander Rex is a PhD candidate at the Institute of Electric Power Systems at Leibniz University Hannover. He holds a degree in mechanical engineering from Technische Universität Braunschweig, an MEng from Tongji University, and an MSc from Karlsruhe Institute of Technology (KIT). He was a visiting scholar at Berkeley Lab from 2024 to 2025.
Quantum advantage International standardization efforts will, over time, drive economies of scale and multivendor interoperability across the nascent quantum supply chain. (Courtesy: iStock/Peter Hansen)
How do standards support the translation of quantum science into at-scale commercial opportunities?
The standardization process helps to promote the legitimacy of emerging quantum technologies by distilling technical inputs and requirements from all relevant stakeholders across industry, research and government. Put simply: if you understand a technology well enough to standardize elements of it, that’s when you know it’s moved beyond hype and theory into something of practical use for the economy and society.
What are the upsides of standardization for developers of quantum technologies and, ultimately, for end-users in industry and the public sector?
Standards will, over time, help the quantum technology industry achieve critical mass on the supply side, with those economies of scale driving down prices and increasing demand. As the nascent quantum supply chain evolves – linking component manufacturers, subsystem developers and full-stack quantum computing companies – standards will also ensure interoperability between products from different vendors and different regions.
Those benefits flow downstream as well because standards, when implemented properly, increase trust among end-users by defining a minimum quality of products, processes and services. Equally important, as new innovations are rolled out into the marketplace by manufacturers, standards will ensure compatibility across current and next-generation quantum systems, reducing the likelihood of lock-ins to legacy technologies.
What’s your role in coordinating NPL’s standards effort in quantum science and technology?
I have strategic oversight of our core technical programmes in quantum computing, quantum networking, quantum metrology and quantum-enabled PNT (position, navigation and timing). It’s a broad-scope remit that spans research and training, as well as responsibility for standardization and international collaboration, with the latter two often going hand-in-hand.
Right now, we have over 150 people working within the NPL quantum metrology programme. Their collective focus is on developing the measurement science necessary to build, test and evaluate a wide range of quantum devices and systems. Our research helps innovators, whether in an industry or university setting, to push the limits of quantum technology by providing leading-edge capabilities and benchmarking to measure the performance of new quantum products and services.
Tim Prior “We believe that quantum metrology and standardization are key enablers of quantum innovation.” (Courtesy: NPL)
It sounds like there are multiple layers of activity.
That’s right. For starters, we have a team focusing on the inter-country strategic relationships, collaborating closely with colleagues at other National Metrology Institutes (like NIST in the US and PTB in Germany). A key role in this regard is our standards specialist who, given his background working in the standards development organizations (SDOs), acts as a “connector” between NPL’s quantum metrology teams and, more widely, the UK’s National Quantum Technology Programme and the international SDOs.
We also have a team of technical experts who sit on specialist working groups within the SDOs. Their inputs to standards development are not about promoting NPL’s interests; rather, they provide expertise and experience gained from cutting-edge metrology, while building a consolidated set of requirements gathered from stakeholders across the quantum community to further the UK’s strategic and technical priorities in quantum.
So NPL’s quantum metrology programme provides a focal point for quantum standardization?
Absolutely. We believe that quantum metrology and standardization are key enablers of quantum innovation, fast-tracking the adoption and commercialization of quantum technologies while building confidence among investors and across the quantum supply chain and early-stage user base. For NPL and its peers, the task right now is to agree on the terminology and best practice as we figure out the performance metrics, benchmarks and standards that will enable quantum to go mainstream.
How does NPL engage the UK quantum community on standards development?
Front-and-centre is the UK Quantum Standards Network Pilot. This initiative – which is being led by NPL – brings together representatives from industry, academia and government to work on all aspects of standards development: commenting on proposals and draft standards; discussing UK standards policy and strategy; and representing the UK in the European and international SDOs. The end-game? To establish the UK as a leading voice in quantum standardization, both strategically and technically, and to ensure that UK quantum technology companies have access to global supply chains and markets.
What about NPL outreach to prospective end-users of quantum technologies?
The Quantum Standards Network Pilot also provides a direct line to prospective end-users of quantum technologies in business sectors like finance, healthcare, pharmaceuticals and energy. What’s notable is that the end-users are often preoccupied with questions that link in one way or another to standardization. For example: how well do quantum technologies stack up against current solutions? Are quantum systems reliable enough yet? What does quantum cost to implement and maintain, including long-term operational costs? Are there other emerging technologies that could do the same job? Is there a solid, trustworthy supply chain?
It’s clear that international collaboration is mandatory for successful standards development. What are the drivers behind the recently announced NMI-Q collaboration?
The quantum landscape is changing fast, with huge scope for disruptive innovation in quantum computing, quantum communications and quantum sensing. Faced with this level of complexity, NMI-Q leverages the combined expertise of the world’s leading National Metrology Institutes – from the G7 countries and Australia – to accelerate the development and adoption of quantum technologies.
No one country can do it all when it comes to performance metrics, benchmarks and standards in quantum science and technology. As such, NMI-Q’s priorities are to conduct collaborative pre-standardization research; develop a set of “best measurement practices” needed by industry to fast-track quantum innovation; and, ultimately, shape the global standardization effort in quantum. NPL’s prominent role within NMI-Q (I am the co-chair along with Barbara Goldstein of NIST) underscores our commitment to evidence-based decision-making in standards development and, ultimately, to the creation of a thriving quantum ecosystem.
What are the attractions of NPL’s quantum programme for early-career physicists?
Every day, our measurement scientists address cutting-edge problems in quantum – as challenging as anything they’ll have encountered previously in an academic setting. What’s especially motivating, however, is that NPL is a mission-driven endeavour with measurement outcomes linking directly to wider societal and economic benefits – not just in the UK, but internationally as well.
Quantum metrology: at your service
Measurement for Quantum (M4Q) is a flagship NPL programme that provides industry partners with up to 20 days of quantum metrology expertise to address measurement challenges in applied R&D and product development. The service – which is free of charge for projects approved after peer review – helps companies to bridge the gap from technology prototype to full commercialization.
To date, more than two-thirds of the companies to participate in M4Q report that their commercial opportunity has increased as a direct result of NPL support. In terms of specifics, the M4Q offering includes the following services:
Small-current and quantum-noise measurements
Measurement of material-induced noise in superconducting quantum circuits
Nanoscale imaging of physical properties for applications in quantum devices
Characterization of single-photon sources and detectors
Characterization of compact lasers and other photonic components
Semiconductor device characterization at cryogenic temperatures
New experiments at CERN by an international team have ruled out a potential source of intergalactic magnetic fields. The existence of such fields is invoked to explain why we do not observe secondary gamma rays originating from blazars.
Led by Charles Arrowsmith at the UK’s University of Oxford, the team suggests the absence of gamma rays could be the result of an unexplained phenomenon that took place in the early universe.
A blazar is an extraordinarily bright object with a supermassive black hole at its core. Some of the matter falling into the black hole is accelerated outwards in a pair of opposing jets, creating intense beams of radiation. If a blazar jet points towards Earth, we observe a bright source of light including high-energy teraelectronvolt gamma rays.
During their journey across intergalactic space, these gamma-ray photons will occasionally collide with the background starlight that permeates the universe. These collisions can create cascades of electrons and positrons that can then scatter off photons to create gamma rays in the gigaelectronvolt energy range. These gamma rays should travel in the direction of the original jet, but this secondary radiation has never been detected.
Deflecting field
Magnetic fields could be the reason for this dearth, as Arrowsmith explains: “The electrons and positrons in the pair cascade would be deflected by an intergalactic magnetic field, so if this is strong enough, we could expect these pairs to be steered away from the line of sight to the blazar, along with the reprocessed gigaelectronvolt gamma rays.” It is not clear, however, that such fields exist – and if they do, what could have created them.
Another explanation for the missing gamma rays involves the extremely sparse plasma that permeates intergalactic space. The beam of electron–positron pairs could interact with this plasma, generating magnetic fields that separate the pairs. Over millions of years of travel, this process could lead to beam–plasma instabilities that reduce the beam’s ability to create gigaelectronvolt gamma rays that are focused on Earth.
Oxford’s Gianluca Gregori explains, “We created an experimental platform at the HiRadMat facility at CERN to create electron–positron pairs and transport them through a metre-long ambient argon plasma, mimicking the interaction of pair cascades from blazars with the intergalactic medium”. Once the pairs had passed through the plasma, the team measured the degree to which they had been separated.
Tightly focused
Called Fireball, the experiment found that the beams remained far more tightly focused than expected. “When these laboratory results are scaled up to the astrophysical system, they confirm that beam–plasma instabilities are not strong enough to explain the absence of the gigaelectronvolt gamma rays from blazars,” Arrowsmith explains. Indeed, the team found that instabilities were actively suppressed in the plasma unless the pair beam was perfectly collimated or composed of pairs with exactly equal energies.
While the experiment suggests that an intergalactic magnetic field remains the best explanation for the lack of gamma rays, the mystery is far from solved. Gregori explains, “The early universe is believed to be extremely uniform – but magnetic fields require electric currents, which in turn need gradients and inhomogeneities in the primordial plasma.” As a result, confirming the existence of such a field could point to new physics beyond the Standard Model, which may have dominated in the early universe.
More information could come with the opening of the Cherenkov Telescope Array Observatory. This will comprise ground-based gamma-ray detectors at sites in Spain and Chile, and will vastly improve on the resolution of current-generation detectors.
Physics Around the Clock: Adventures in the Science of Everyday Living By Michael Banks
Why do Cheerios tend to stick together while floating in a bowl of milk? Why does a runner’s ponytail swing side to side? These might not be the most pressing questions in physics, but getting to the answers is fun and provides insights into important scientific concepts. These are just two examples of everyday physics that Physics World news editor Michael Banks explores in his book Physics Around the Clock, which begins with the physics (and chemistry) of your morning coffee and ends with a formula for predicting the winner of those cookery competitions that are mainstays of evening television. Hamish Johnston
Quantum 2.0: The Past, Present and Future of Quantum Physics By Paul Davies
You might wonder why the world needs yet another book about quantum mechanics, but for physicists there’s no better guide than Paul Davies. Based for the last two decades at Arizona State University in the US, in Quantum 2.0 Davies tackles the basics of quantum physics – along with its mysteries, applications and philosophical implications – with great clarity and insight. The book ends with truly strange topics such as quantum Cheshire cats and delayed-choice quantum erasers – see if you prefer his descriptions to those we’ve attempted in Physics World this year. Matin Durrani
Can You Get Music on the Moon? The Amazing Science of Sound and Space By Sheila Kanani, illustrated by Liz Kay
Why do dogs bark but wolves howl? How do stars “sing”? Why does thunder rumble? This delightful, fact-filled children’s book answers these questions and many more, taking readers on an adventure through sound and space. Written by planetary scientist Sheila Kanani and illustrated by Liz Kay, Can You Get Music on the Moon? reveals not only how sound is produced but why it can make us feel certain things. Each of the 100 or so pages brims with charming illustrations that illuminate the many ways that sound is all around us. Michael Banks
A Short History of Nearly Everything 2.0 By Bill Bryson
Alongside books such as Stephen Hawking’s A Brief History of Time and Carl Sagan’s Cosmos, British-American author Bill Bryson’s A Short History of Nearly Everything is one of the bestselling popular-science books of the last 50 years. First published in 2003, the book became a fan favourite of readers across the world and across disciplines as Bryson wove together a clear and humorous narrative of our universe. Now, 22 years later, he has released an updated and revised volume – A Short History of Nearly Everything 2.0 – that covers major updates in science from the past two decades. This includes the discovery of the Higgs boson and the latest on dark-matter research. The new edition is still imbued with all the wit and wisdom of the original, making it the perfect Christmas present for scientists and anyone else curious about the world around us. Tushna Commissariat
Coherent crystalline interfaces Atomic-resolution image of a superconducting germanium:gallium (Ge:Ga) trilayer with alternating Ge:Ga and silicon layers demonstrating precise control of atomic interfaces. (Courtesy: Salva Salmani-Rezaie)
The ability to induce superconductivity in materials that are inherently semiconducting has been a longstanding research goal. Superconducting semiconductors could help in the development of fast, energy-efficient quantum technologies, including superconducting quantum bits (qubits) and cryogenic CMOS control circuitry. However, this task has proved challenging in traditional semiconductors – such as silicon or germanium – as it is difficult to maintain the atomic structure needed for superconductivity.
In a new study, published in Nature Nanotechnology, researchers have used molecular beam epitaxy (MBE) to grow gallium-hyperdoped germanium films that retain their superconductivity. When asked about the motivation for this latest work, Peter Jacobson from the University of Queensland tells Physics World about his collaboration with Javad Shabani from New York University.
“I had been working on superconducting circuits when I met Javad and discovered the new materials their team was making,” he explains. “We are all trying to understand how to control materials and tune interfaces in ways that could improve quantum devices.”
Germanium: from semiconductor to superconductor
Germanium is a group IV element, so its properties bridge those of metals and insulators. Superconductivity can be induced in germanium by manipulating its atomic structure to introduce additional charge carriers into the lattice. These extra carriers interact with the germanium lattice to form pairs that move without resistance – in other words, the material becomes superconducting.
Hyperdoping germanium with gallium – at concentrations well above the solid solubility limit – induces a superconducting state. However, this material has traditionally been unstable owing to structural defects, dopant clustering and poor thickness control. Many questions have also been raised as to whether these materials are intrinsically superconducting, or whether gallium clusters and unintended phases are solely responsible for the superconductivity of gallium-doped germanium.
Considering these issues and looking for a potential new approach, Jacobson notes that X-ray absorption measurements at the Australian Synchrotron were “the first real sign” that Shabani’s team had grown something special. “The gallium signal was exceptionally clean, and early modelling showed that the data lined up almost perfectly with a purely substitutional picture,” he explains. “That was a genuine surprise. Once we confirmed and extended those results, it became clear that we could probe the mechanism of superconductivity in these films without the usual complications from disorder or spurious phases.”
Epitaxial growth improves superconductivity control
In a new approach, Jacobson, Shabani and colleagues used MBE to grow the crystals instead of relying on ion implantation techniques, allowing the germanium to be hyperdoped with gallium. Using MBE forces the gallium atoms to replace germanium atoms within the crystal lattice at levels much higher than previously seen. The process also provided better control over parasitic heating during film growth, allowing the researchers to achieve the structural precision required to understand and control the superconductivity of these germanium:gallium (Ge:Ga) materials, which were found to become superconducting at 3.5 K with a carrier concentration of 4.15 × 10²¹ holes/cm³. The critical gallium dopant threshold to achieve this was 17.9%.
Using synchrotron-based X-ray absorption, the team found that the gallium dopants were substitutionally incorporated into the germanium lattice and induced a tetragonal distortion to the unit cell. Density functional theory calculations showed that this causes a shift in the Fermi level into the valence band and flattens electronic bands. This suggests that the structural order of gallium in the germanium lattice creates a narrow band that facilitates superconductivity in germanium, and that this superconductivity arises intrinsically in the germanium, rather than being governed by defects and gallium clusters.
The researchers tested trilayer heterostructures – Ge:Ga/Si/Ge:Ga and Ge:Ga/Ge/Ge:Ga – as proof-of-principle designs for vertical Josephson junction device architectures. In the future, they hope to develop these into fully fledged Josephson junction devices.
Commenting on the team’s future plans for this research, Jacobson concludes: “I’m very keen to examine this material with low-temperature scanning tunnelling microscopy (STM) to directly measure the superconducting gap, because STM adds atomic-scale insights that complement our other measurements and will help clarify what sets hyperdoped germanium apart”.
“This is one of the big remaining frontiers in astronomy,” says Phil Bull, a cosmologist at the Jodrell Bank Centre for Astrophysics at the University of Manchester. “It’s quite a pivotal era of cosmic history that, it turns out, we don’t actually understand.”
Bull is referring to the vital but baffling period in the early universe – from 380,000 years to one billion years after the Big Bang – when its structure went from simple to complex. To lift the veil on this epoch, experiments around the world – from Australia to the Arctic – are racing to find a specific but elusive signal from the earliest hydrogen atoms. This signal could confirm or disprove scientists’ theories of how the universe evolved and the physics that governs it.
Hydrogen is the most abundant element in the universe. When the relative spin orientation of a neutral hydrogen atom’s proton and electron flips, the atom emits or absorbs a photon. This hyperfine transition, which can be stimulated by radiation, produces an emission or absorption signal at a radio wavelength of 21 cm. To find out what happened during that early epoch, astronomers are searching for these 21 cm photons that were emitted by primordial hydrogen atoms.
But despite more teams joining the hunt every year, no-one has yet had a confirmed detection of this radiation. So who will win the race to find this signal and how is the hunt being carried out?
A blank spot
Let’s first return to about 380,000 years after the Big Bang, when the universe had expanded and cooled to below 3000 K. At this stage, neutral atoms, including atomic hydrogen, could form. With the free electrons bound up in atoms, light decoupled from ordinary matter and could travel freely across the universe. This ancient radiation that permeates the sky is known as the cosmic microwave background (CMB).
But after that we don’t know much about what happened for the next few hundred million years. Meanwhile, the oldest known galaxy MoM-z14 – which existed about 280 million years after the Big Bang – was observed in April 2025 by the James Webb Space Telescope. So there is currently a gap of just under 280 million years in our observations of the early universe. “It’s one of the last blank spots,” says Anastasia Fialkov, an astrophysicist at the Institute of Astronomy of the University of Cambridge.
This “blank spot” is a bridge between the early, simple universe and today’s complex structured cosmos. During this early epoch, the universe went from being filled with a thick cloud of neutral hydrogen, to being diversely populated with stars, black holes and everything in between. It covers the end of the cosmic dark ages, the cosmic dawn, and the epoch of reionization – and is arguably one of the most exciting periods in our universe’s evolution.
During the cosmic dark ages, after the CMB flooded the universe, the only “ordinary” matter (made up of protons, neutrons and electrons) was neutral hydrogen (75% by mass) and neutral helium (25%), and there were no stellar structures to provide light. It is thought that gravity then magnified any slight fluctuations in density, causing some of this primordial gas to clump and eventually form the first stars and galaxies – a time called the cosmic dawn. Next came the epoch of reionization, when ultraviolet and X-ray emissions from those first celestial objects heated and ionized the hydrogen atoms, turning the neutral gas into a charged plasma of electrons and protons.
Stellar imprint
The 21 cm signal astronomers are searching for was produced when the spectral transition was excited by collisions in the hydrogen gas during the dark ages and then by the first photons from the first stars during the cosmic dawn. However, the intensity of the 21 cm signal can only be measured against the CMB, which acts as a steady background source of 21 cm photons.
When the hydrogen was colder than the background radiation, there were few collisions, and the atoms would have absorbed slightly more 21 cm photons from the CMB than they emitted themselves. The 21 cm signal would appear as a deficit, or absorption signal, against the CMB. But when the neutral gas was hotter than the CMB, the atoms would emit more photons than they absorbed, causing the 21 cm signal to be seen as a brighter emission against the CMB. These absorption and emission rates depend on the density and temperature of the gas, and the timing and intensity of radiation from the first cosmic sources. Essentially, the 21 cm signal became imprinted with how those early sources transformed the young universe.
One way scientists are trying to observe this imprint is to measure the average – or “global” – signal across the sky, looking at how it shifts from absorption to emission compared to the CMB. Normally, a 21 cm radio wave signal has a frequency of about 1420 MHz. But this ancient signal, according to theory, has been emitted and absorbed at different intensities throughout this cosmic “blank spot”, depending on the universe’s evolutionary processes at the time. The expanding universe has also stretched and distorted the signal as it travelled to Earth. Theories predict that it would now be in the 1 to 200 MHz frequency range – with lower frequencies corresponding to older eras – and would have a wavelength of metres rather than centimetres.
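As a rough guide to how observed frequency maps onto cosmic epoch, the redshifted frequency follows the standard relation ν_obs = 1420.4 MHz/(1 + z). The minimal sketch below simply applies that relation; the example frequencies are chosen to span the range mentioned above, with 78 MHz being the frequency of the EDGES dip discussed later in the article.

```python
# Minimal sketch of the frequency-redshift relation: the 1420.4 MHz (21 cm) line
# emitted at redshift z is observed at nu_obs = 1420.4 MHz / (1 + z),
# so lower observed frequencies probe earlier cosmic times.

NU_REST_MHZ = 1420.4  # rest-frame frequency of the hydrogen 21 cm line

def redshift(nu_obs_mhz):
    return NU_REST_MHZ / nu_obs_mhz - 1

for nu in (200, 78, 50):
    print(f"{nu:3d} MHz  ->  z = {redshift(nu):4.1f}")
```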
Importantly, the shape of the global 21 cm signal over time could confirm the lambda-cold dark matter (ΛCDM) model, which is the most widely accepted theory of the cosmos; or it could upend it. Many astronomers have dedicated their careers to finding this radiation, but it is challenging for a number of reasons.
Unfortunately, the signal is incredibly faint. Its brightness temperature, which is measured as the change in the CMB’s black body temperature (2.7 K), will only be in the region of 0.1 K.
(a) A simulation of the sky-averaged (global) signal as a function of time (horizontal) and space (vertical). (b) A typical model of the global 21 cm line with the main cosmic events highlighted. Each experiment searching for the global 21 cm signal focuses on a particular frequency band. For example, the Radio Experiment for the Analysis of Cosmic Hydrogen (REACH) is looking at the 50–170 MHz range (blue).
There is also no single source of this emission, so, like the CMB, it permeates the universe. “If it was the only signal in the sky, we would have found it by now,” says Eloy de Lera Acedo, head of Cavendish Radio Astronomy and Cosmology at the University of Cambridge. But the universe is full of contamination, with the Milky Way being a major culprit. Scientists are searching for 0.1 K in an environment “that’s a million times brighter”, he explains.
And even before this signal reaches the radio-noisy Earth, it has to travel through the atmosphere, which further distorts and contaminates it. “It’s a very difficult measurement,” says Rigel Cappallo, a research scientist at the MIT Haystack Observatory. “It takes a really, really well calibrated instrument that you understand really well, plus really good modelling.”
The Experiment to Detect the Global EoR Signature (EDGES) instrument is a dipole antenna that resembles a ping-pong table with a gap in the middle (see photo at top of article for the 2024 set-up). It is mounted on a large metal groundsheet measuring about 30 × 30 m. Its ground-breaking observation – a claimed detection of the global 21 cm signal reported in 2018 – was made at a remote site in western Australia, far from radio frequency interference.
But in the intervening seven years, no-one else has been able to replicate the EDGES results.
The spectrum dip that EDGES detected was very different from what theorists had expected. “There is a whole family of models that are predicted by the different cosmological scenarios,” explains Ravi Subrahmanyan, a research scientist at Australia’s national science agency CSIRO. “When we take measurements, we compare them with the models, so that we can rule those models in or out.”
In general, the current models predict a very specific envelope of signal possibilities (see figure 1). First, they anticipate an absorption dip in brightness temperature of around 0.1 to 0.2 K, caused by the temperature difference between the cold hydrogen gas (in an expanding universe) and the warmer CMB. Then, a speedy rise and photon emission is predicted as the gas starts to warm when the first stars form, and the signal should spike dramatically when the first X-ray binary stars fire up and heat up the surrounding gas. The signal is then expected to fade as the epoch of reionization begins, because ionized particles cannot undergo the spectral transition. With models, scientists theorize when this happened, how many stars there were, and how the cosmos unfurled.
2 Weird signal The 21 cm signals predicted by current cosmology models (coloured lines) and the detection by the EDGES experiment (dashed black line). (Courtesy: SARAS Team)
“It’s just one line, but it packs in so many physical phenomena,” says Fialkov, referring to the shape of the 21 cm signal’s brightness temperature over time. The timing of the dip, its gradient and magnitude all represent different milestones in cosmic history, which affect how it evolved.
The EDGES team, however, reported a dip of more than double the predicted size, at about 78 MHz (see figure 2). While the frequency was consistent with predictions, the very wide and deep dip of the signal took the community by surprise.
“It would be a revolution in physics, because that signal will call for very, very exotic physics to explain it,” says de Lera Acedo. “Of course, the first thing we need to do is to make sure that that is actually the signal.”
A spanner in the works
The EDGES claim has galvanized the cosmology community. “It set a cat among the pigeons,” says Bull. “People realized that, actually, there’s some very exciting science to be done here.” Some groups are trying to replicate the EDGES observation, while others are trying new approaches to detect the signal that the models promise.
The Radio Experiment for the Analysis of Cosmic Hydrogen (REACH) – a collaboration between the University of Cambridge and Stellenbosch University in South Africa – focuses on the 50–170 MHz frequency range. Sitting on the dry and empty plains of South Africa’s Northern Cape, it is targeting the EDGES observation (Nature Astronomy 6 984).
The race to replicate REACH went online in the Karoo region of South Africa in December 2023. (Courtesy: Saurabh Pegwal, REACH collaboration)
In this radio-quiet environment, REACH has set up two antennas: one looks like EDGES’ dipole ping-pong table, while the other is a spiral cone. They sit on top of a giant metallic mesh – the ground plate – in the shape of a many-pointed star, which aims to minimize reflections from the ground.
Hunting for this signal “requires precision cosmology and engineering”, says de Lera Acedo, the principal investigator on REACH. Reflections from the ground or mesh, calibration errors, and signals from the soil, are the kryptonite of cosmic dawn measurements. “You need to reduce your systemic noise, do better analysis, better calibration, better cleaning [to remove other sources from observations],” he says.
Desert, water, snow
Another radio telescope, dubbed the Shaped Antenna measurement of the background Radio Spectrum (SARAS) – which was established in the late 2000s by the Raman Research Institute (RRI) in Bengaluru, India – has undergone a number of transformations to reduce noise and limit other sources of radiation. Over time, it has morphed from a dipole on the ground to a metallic cone floating on a raft. It is looking at 40 to 200 MHz (Exp. Astron. 51 193).
After the EDGES claim, SARAS pivoted its attention to verifying the detection, explains Saurabh Singh, a research scientist at the RRI. “Initially, we were not able to get down to the required sensitivity to be able to say anything about their detection,” he explains. “That’s why we started floating our radiometer on water.” Buoying the experiment reduces ground contamination and creates a more predictable surface to include in calculations.
Floating telescope Evolution of the SARAS experiment and sites up to 2020. The third edition of the telescope, SARAS 3, was deployed on lakes to further reduce radio interference. (Courtesy: SARAS Team)
Using data from their floating radiometer, in 2022 Singh and colleagues disfavoured EDGES’ claim (Nature Astronomy 6 607), but for many groups the detection still remains a target for observations.
While SARAS has yet to detect a cosmic-dawn signal of its own, Singh says that non-detection is also an important element of finding the global 21 cm signal. “Non-detection gives us an opportunity to rule out a lot of these models, and that has helped us to reject a lot of properties of these stars and galaxies,” he says.
Raul Monsalve Jara – a cosmologist at the University of California, Berkeley – has been part of the EDGES collaboration since 2012, but decided to also explore other ways to detect the signal. “My view is that we need several experiments doing different things and taking different approaches,” he says.
The Mapper of the IGM Spin Temperature (MIST) experiment, of which Monsalve is co-principal investigator, is a collaboration between Chilean, Canadian, Australian and American researchers. These instruments are looking at 25 to 105 MHz (MNRAS 530 4125). “Our approach was to simplify the instrument, get rid of the metal ground plate, and to take small, portable instruments to remote locations,” he explains. These locations have to fulfil very specific requirements – everything around the instrument, from mountains to the soil, can impact the instrument’s performance. “If the soil itself is irregular, that will be very difficult to characterize and its impact will be difficult to remove [from observations],” Monsalve says.
Physics on the move MIST conducts measurements of the sky-averaged radio spectrum at frequencies below 200 MHz. Its monopole and dipole variants are highly portable and have been deployed in some of the most remote sites on Earth, including the Arctic (top) and the Nevada desert (bottom). (Courtesy: Raul Monsalve)
So far, the MIST instrument, which is also a dipole ping-pong table, has visited a desert in California, another in Nevada, and even the Arctic. The instrument is portable and easy to set up, and each time the researchers spend a few weeks at the site collecting data, Monsalve explains. The team is planning more observations in Chile. “If you suspect that your environment could be doing something to your measurements, then you need to be able to move around,” continues Monsalve. “And we are contributing to the field by doing that.”
Aaron Parsons, also from the University of California, Berkeley, decided that the best way to detect this elusive signal would be to try and eliminate the ground entirely – by suspending a rotating antenna over a giant canyon, with 100 m of empty space in every direction.
His Electromagnetically Isolated Global Signal Estimation Platform (EIGSEP) includes an antenna hanging four storeys above the ground, attached to a Kevlar cable strung across a canyon in Utah. It observes at 50 to 250 MHz. “It continuously rotates around and twists every which way,” Parsons explains. The hope is that this will allow the team to calibrate the instrument very accurately, while two antennas on the ground cross-correlate the observations. EIGSEP began making observations last year.
More experiments are expected to come online in the next year. The Remote HI eNvironment Observer (RHINO), an initiative of the University of Manchester, will have a horn-shaped receiver made of a metal mesh that is usually used to construct skyscrapers. Horn shapes are particularly good for calibration, allowing for very precise measurements. The most famous horn-shaped antenna is Bell Laboratories’ Holmdel Horn Antenna in the US, with which two scientists accidentally discovered the CMB in 1965.
Initially, RHINO will be based at Jodrell Bank Observatory in the UK, but like other experiments, it could travel to other remote locations to hunt for the 21 cm signal.
Similarly, Subrahmanyan – who established the SARAS experiment in India and is now with CSIRO in Australia – is working to design a new radiometer from scratch. The instrument, which will focus on 40–160 MHz, is called Global Imprints from Nascent Atoms to Now (GINAN). He says that it will feature a recently patented self-calibrating antenna. “It gives a much more authentic measurement of the sky signal as measured by the antenna,” he explains.
In the meantime, the EDGES collaboration has not been idle. Cappallo, of MIT Haystack Observatory, is project manager of EDGES, which is currently in its third iteration. It is still the size of a desk, but its top now looks like a box, with closed sides and its electronics tucked inside, and it sits on an even larger metal ground plate. The team has now made observations from islands in the Canadian archipelago and in Alaska’s Aleutian island chain (see photo at top of article).
“The 2018 EDGES result is not going to be accepted by the community until somebody completely independently verifies it,” Cappallo explains. “But just for our own sanity and also to try to improve on what we can do, we want to see it from as many places as possible and as many conditions as possible.” The EDGES team has replicated its results using the same data analysis pipeline, but no-one else has been able to reproduce the unusual signal.
All the astronomers interviewed welcomed the introduction of new experiments. “I think it’s good to have a rich field of people trying to do this experiment because nobody is going to trust any one measurement,” says Parsons. “We need to build consensus here.”
Taking off
Some astronomers have decided to avoid the struggles of trying to detect the global 21 cm signal from Earth – instead, they have their sights set on the Moon. Earth’s atmosphere is one of the reasons why the 21 cm signal is so difficult to measure. The ionosphere, a charged region of the atmosphere, distorts and contaminates this incredibly faint signal. On the far side of the Moon, any antenna would also be shielded from the cacophony of radio-frequency interference from Earth.
“This is why some experiments are going to the Moon,” says Parsons, adding that he is involved in NASA’s LuSEE-Night experiment. LuSEE-Night, or the Lunar Surface Electromagnetics Experiment, aims to land a low-frequency experiment on the Moon next year.
In July, at the National Astronomy Meeting in Durham, the University of Cambridge’s de Lera Acedo presented a proposal to put a miniature radiometer into lunar orbit. Dubbed “CosmoCube”, it will be a nanosatellite orbiting the Moon in search of this 21 cm signal.
Taking the hunt to space Provisional illustration of the CosmoCube with its antenna deployed for 21 cm signal detection, i.e. in operational mode in space. This nanosatellite would travel to the far side of the Moon to get away from the Earth’s ionosphere, which introduces substantial distortions and absorption effects to any radio signal detection. (CC BY 4.0 Artuc and de Lera Acedo 2024 RAS Techniques and Instruments 4 rzae061)
“It is just in the making,” says de Lera Acedo, adding that it will not be in operation for at least a decade. “But it is the next step.”
In the meantime, groups here on Earth are racing to detect this elusive signal. The instruments are getting more sensitive, the modelling is improving, and the unknowns are shrinking. “If we do the experiments right, we will find the signal,” Monsalve believes. The big question is which of the many groups with their hats in the ring is doing the experiment “right”.
Measuring blood flow to the brain is essential for diagnosing and developing treatments for neurological disorders such as stroke, vascular dementia or traumatic brain injury. Performing this measurement non-invasively is challenging, however, and achieved predominantly using costly MRI and nuclear medicine imaging techniques.
Emerging as an alternative, modalities based on optical transcranial measurement are cost-effective and easy to use. In particular, speckle contrast optical spectroscopy (SCOS) – an offshoot of laser speckle contrast imaging, which uses laser light speckles to visualize blood vessels – can measure cerebral blood flow (CBF) with high temporal resolution, typically above 30 Hz, and cerebral blood volume (CBV) through optical signal attenuation.
Researchers at the California Institute of Technology (Caltech) and the Keck School of Medicine’s USC Neurorestoration Center have designed a lightweight SCOS system that accurately measures blood flow to the brain, distinguishing it from blood flow to the scalp. Co-senior author Charles Liu of the Keck School of Medicine and team describe the system and their initial experimentation with it in APL Bioengineering.
Seven simultaneous measurements Detection channels with differing source-to-detector distances monitor blood dynamics in the scalp, skull and brain layers. (Courtesy: CC BY 4.0/APL Bioeng. 10.1063/5.0263953)
The SCOS system consists of a 3D-printed head mount designed for secure placement over the temple region. It holds a single 830 nm laser illumination fibre and seven detector fibres positioned at seven different source-to-detector (S–D) distances (between 0.6 and 2.6 cm) to simultaneously capture blood flow dynamics across layers of the scalp, skull and brain. Fibres with shorter S–D distances acquire shallower optical data from the scalp, while those with greater distances obtain deeper and broader data. The seven channels are synchronized and exhibit identical oscillation frequencies corresponding to the heart rate and cardiac cycle.
When the SCOS system directs the laser light onto a sample, multiple random scattering events occur before the light exits the sample, creating speckles. These speckles, which materialize on rapid timescales, are the result of interference of light travelling along different trajectories. Movement within the sample (of red blood cells, for instance) causes dynamic changes in the speckle field. These changes are captured by a multi-million-pixel camera with a frame rate above 30 frames/s and quantified by calculating the speckle contrast value for each image.
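To make that quantification step concrete, the sketch below (not the authors’ code) shows one common way of turning a raw camera frame into a local speckle-contrast map K = σ/μ with a sliding window; the window size, the synthetic test frame and the NumPy/SciPy tooling are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(frame, window=7):
    """Local speckle contrast K = sigma/mean over a sliding window."""
    frame = frame.astype(float)
    mean = uniform_filter(frame, size=window)
    mean_sq = uniform_filter(frame**2, size=window)
    var = np.clip(mean_sq - mean**2, 0, None)
    return np.sqrt(var) / (mean + 1e-12)

# Synthetic stand-in for a camera frame of fully developed speckle
rng = np.random.default_rng(0)
frame = rng.exponential(scale=100.0, size=(512, 512))
K = speckle_contrast(frame, window=7)
print(f"mean speckle contrast: {K.mean():.3f}")
```

Faster blood flow blurs the speckles within the camera exposure and lowers K, which is how the contrast map is converted into a relative flow index.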
Human testing
The researchers used the SCOS system to perform CBF and CBV measurements in 20 healthy volunteers. To isolate and obtain surface blood dynamics from brain signals, the researchers gently pressed on the superficial temporal artery (a terminal branch of the external carotid artery that supplies blood to the face and scalp) to block blood flow to the scalp.
In tests on the volunteers, when temporal artery blood flow was occluded for 8 s, scalp-sensitive channels exhibited significant decreases in blood flow while brain-sensitive channels showed minimal change, enabling signals from the internal carotid artery that supplies blood to the brain to be clearly distinguished. Additionally, the team found that positioning the detector 2.3 cm or more away from the source allowed for optimal brain blood flow measurement while minimizing interference from the scalp.
“Combined with the simultaneous measurements at seven S–D separations, this approach enables the first quantitative experimental assessment of how scalp and brain signal contributions vary with depth in SCOS-based CBF measurements and, more broadly, in optical measurements,” they write. “This work also provides crucial insights into the optimal device S–D distance configuration for preferentially probing brain signal over scalp signal, with a practical and subject-friendly alternative for evaluating depth sensitivity, and complements more advanced, hardware-intensive strategies such as time-domain gating.”
The researchers are now working to improve the signal-to-noise ratio of the system. They plan to introduce a compact, portable laser and develop a custom-designed extended camera that spans over 3 cm in one dimension, enabling simultaneous and continuous measurement of blood dynamics across S–D distances from 0.5 to 3.5 cm. These design advancements will enhance spatial resolution and enable deeper brain measurements.
“This crucial step will help transition the system into a compact, wearable form suitable for clinical use,” comments Liu. “Importantly, the measurements described in this publication were achieved in human subjects in a very similar manner to how the final device will be used, greatly reducing barriers to clinical application.”
“I believe this study will advance the engineering of SCOS systems and bring us closer to a wearable, clinically practical device for monitoring brain blood flow,” adds co-author Simon Mahler, now at Stevens Institute of Technology. “I am particularly excited about the next stage of this project: developing a wearable SCOS system that can simultaneously measure both scalp and brain blood flow, which will unlock many fascinating new experiments.”
Significant progress towards answering one of the Clay Mathematics Institute’s seven Millennium Prize Problems has been achieved using deep learning. The challenge is to establish whether or not the Navier–Stokes equation of fluid dynamics develops singularities. The work was done by researchers in the US and UK – including some at Google DeepMind. Some team members had already shown that simplified versions of the equation could develop stable singularities, which reliably form. In the new work, the researchers found unstable singularities, which form only under very specific conditions.
The Navier–Stokes partial differential equation was developed in the 19th century by Claude-Louis Navier and George Stokes. It has proved its worth for modelling incompressible fluids in scenarios including water flow in pipes; airflow around aeroplanes; blood moving in veins; and magnetohydrodynamics in plasmas.
No-one has yet proved, however, whether smooth, non-singular solutions to the equation always exist in three dimensions. “In the real world, there is no singularity…there is no energy going to infinity,” says fluid dynamics expert Pedram Hassanzadeh of the University of Chicago. “So if you have an equation that has a singularity, it tells you that there is some physics that is missing.” In 2000, the Clay Mathematics Institute in Denver, Colorado listed this proof as one of seven key unsolved problems in mathematics, offering a reward of $1,000,000 for an answer.
Computational approaches
Researchers have traditionally tackled the problem analytically, but in recent decades high-level computational simulations have been used to assist in the search. In a 2023 paper, mathematician Tristan Buckmaster of New York University and colleagues used a special type of machine learning algorithm called a physics-informed neural network to address the question.
“The main difference is…you represent [the solution] in a highly non-linear way in terms of a neural network,” explains Buckmaster. This allows it to occupy a lower-dimensional space with fewer free parameters, and therefore to be optimized more efficiently. Using this approach, the researchers successfully obtained the first stable singularity in the Euler equation – an analogue of the Navier–Stokes equation that does not include viscosity.
A stable singularity will still occur if the initial conditions of the fluid are changed slightly – although the time taken for it to form may be altered. An unstable singularity, however, may never occur if the initial conditions are perturbed even infinitesimally. Some researchers have hypothesized that any singularities in the Navier–Stokes equation must be unstable, but finding unstable singularities in a computer model is extraordinarily difficult.
“Before our result there hadn’t been an unstable singularity for an incompressible fluid equation found numerically,” says geophysicist Ching-Yao Lai of California’s Stanford University.
Physics-informed neural network
In the new work the authors of the original paper and others teamed up with researchers at Google DeepMind to search for unstable singularities in a bounded 3D version of the Euler equation using a physics-informed neural network. “Unlike conventional neural networks that learn from vast datasets, we trained our models to match equations that model the laws of physics,” writes Yongji Wang of New York University and Stanford on DeepMind’s blog. “The network’s output is constantly checked against what the physical equations expect, and it learns by minimizing its ‘residual’, the amount by which its solution fails to satisfy the equations.”
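The residual-minimization idea can be illustrated on something far simpler than the Euler equation. The sketch below – a toy example assuming PyTorch and the one-dimensional test equation u′(x) + u(x) = 0 with u(0) = 1, chosen purely for illustration – trains a small network by penalizing how badly its output violates the equation and the boundary condition; the real calculation works at vastly higher precision and dimensionality.

```python
import torch

# Toy physics-informed network for u'(x) + u(x) = 0 with u(0) = 1.
# The loss is the mean-squared equation "residual" plus the boundary-condition error.
torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    x = torch.rand(128, 1, requires_grad=True)    # random collocation points in [0, 1]
    u = net(x)
    du_dx, = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)
    residual = du_dx + u                          # how badly the equation is violated
    loss = (residual**2).mean() + (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# The trained network should approximate the exact solution u(x) = exp(-x)
x_test = torch.linspace(0, 1, 5).reshape(-1, 1)
print(torch.cat([net(x_test), torch.exp(-x_test)], dim=1))
```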
After an exhaustive search at a precision that is orders of magnitude higher than a normal deep learning protocol, the researchers discovered new families of singularities in the 3D Euler equation. They also found singularities in the related incompressible porous media equation used to model fluid flows in soil or rock; and in the Boussinesq equation that models atmospheric flows.
The researchers also gleaned insights into the strength of the singularities. This could be important as stronger singularities might be less readily smoothed out by viscosity when moving from the Euler equation to the Navier-Stokes equation. The researchers are now seeking to model more open systems to study the problem in a more realistic space.
Hassanzadeh, who was not involved in the work, believes that it is significant – although the results are not unexpected. “If the Euler equation tells you that ‘Hey, there is a singularity,’ it just tells you that there is physics that is missing and that physics becomes very important around that singularity,” he explains. “In the case of Euler we know that you get the singularity because, at the very smallest scales, the effects of viscosity become important…Finding a singularity in the Euler equation is a big achievement, but it doesn’t answer the big question of whether Navier-Stokes is a representation of the real world, because for us Navier-Stokes represents everything.”
He says the extension to studying the full Navier–Stokes equation will be challenging but that “they are working with the best AI people in the world at DeepMind,” and concludes “I’m sure it’s something they’re thinking about”.
The work is available on the arXiv pre-print server.
NASA’s Goddard Space Flight Center (GSFC) looks set to lose a big proportion of its budget as a two-decade reorganization plan for the centre is being accelerated. The move, which is set to be complete by March, has left the Goddard campus with empty buildings and disillusioned employees. Some staff even fear that the actions during the 43-day US government shutdown, which ended on 12 November, could see the end of much of the centre’s activities.
Based in Greenbelt, Maryland, the GSFC has almost 10 000 scientists and engineers, about 7000 of whom are employed by NASA contractors. Responsible for many of NASA’s most important uncrewed missions, telescopes and probes, the centre is currently working on the Nancy Grace Roman Space Telescope, which is scheduled to launch in 2027, as well as the Dragonfly mission that is due to head for Saturn’s largest moon Titan in 2028.
The ability to meet those schedules has now been put in doubt by the Trump administration’s proposed budget for financial year 2026, which started in October. It calls for NASA to receive almost $19bn – far less than the $25bn it has received for the past two years. If passed, the budget would see Goddard lose more than 42% of its staff.
Congress, which passes the final budget, is not planning to cut NASA so deeply as it prepares its 2026 budget proposal. But on 24 September, Goddard managers began what they told employees was “a series of moves…that will reduce our footprint into fewer buildings”. The shift is intended to “bring down overall operating costs while maintaining the critical facilities we need for our core capabilities of the future”.
While this is part of a 20-year “master plan” for the GSFC that NASA’s leadership approved in 2019, the management’s memo stated that “all planned moves will take place over the next several months and be completed by March 2026”. A report in September by Democratic members of the Senate Committee on Commerce, Science, and Transportation, which is responsible for NASA, asserts that the cuts are “in clear violation of the [US] constitution [without] regard for the impacts on NASA’s science missions and workforce”.
On 3 November, the Goddard Engineers, Scientists and Technicians Association, a union representing NASA workers, reported that the GSFC had already closed over a third of its buildings, including some 100 labs. This had been done, it says, “with extreme haste and with no transparent strategy or benefit to NASA or the nation”. The union adds that the “closures are being justified as cost-saving but no details are being provided and any short-term savings are unlikely to offset a full account of moving costs and the reduced ability to complete NASA missions”.
Accounting for the damage
Zoe Lofgren, the lead Democrat on the House of Representatives Science Committee, has demanded of Sean Duffy, NASA’s acting administrator, that the agency “must now halt” any laboratory, facility and building closure and relocation activities at Goddard. In a letter to Duffy dated 10 November, she also calls for the “relocation, disposal, excessing, or repurposing of any specialized equipment or mission-related activities, hardware and systems” to also end immediately.
Lofgren now wants NASA to carry out a “full accounting of the damage inflicted on Goddard thus far” by 18 November. Owing to the government shutdown, no GSFC or NASA official was available to respond to Physics World’s requests for comment.
Meanwhile, the Trump administration has renominated billionaire entrepreneur Jared Isaacman as NASA’s administrator. Trump had originally nominated Isaacman, who had flown on a private SpaceX mission and carried out a spacewalk, on the recommendation of SpaceX founder Elon Musk. But the administration withdrew the nomination in May following concerns among some Republicans that Isaacman had donated to the Democratic Party.
Games played under the laws of quantum mechanics dissipate less energy than their classical equivalents. This is the finding of researchers at Singapore’s Nanyang Technological University (NTU), who worked with colleagues in the UK, Austria and the US to apply the mathematics of game theory to quantum information. The researchers also found that for more complex game strategies, the quantum-classical energy difference can increase without bound, raising the possibility of a “quantum advantage” in energy dissipation.
Game theory is the field of mathematics that aims to formally understand the payoff or gains that a person or other entity (usually called an agent) will get from following a certain strategy. Concepts from game theory are often applied to studies of quantum information, especially when trying to understand whether agents who can use the laws of quantum physics can achieve a better payoff in the game.
In the latest work, which is published in Physical Review Letters, Jayne Thompson, Mile Gu and colleagues approached the problem from a different direction. Rather than focusing on differences in payoffs, they asked how much energy must be dissipated to achieve identical payoffs for games played under the laws of classical versus quantum physics. In doing so, they were guided by Landauer’s principle, an important concept in thermodynamics and information theory that states that there is a minimum energy cost to erasing a piece of information.
This Landauer minimum is known to hold for both classical and quantum systems. However, in practice systems will spend more than the minimum energy erasing memory to make space for new information, and this energy will be dissipated as heat. What the NTU team showed is that this extra heat dissipation can be reduced in the quantum system compared to the classical one.
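For scale, Landauer’s principle puts the minimum heat cost of erasing a single bit at k_BT ln 2. A quick back-of-the-envelope calculation (room temperature assumed for illustration) shows just how small that floor is:

```python
import numpy as np

k_B = 1.380649e-23               # Boltzmann constant, J/K (exact SI value)
T = 300.0                        # assumed room temperature, K

E_min = k_B * T * np.log(2)      # minimum heat dissipated per erased bit
print(f"Landauer limit at {T:.0f} K: {E_min:.2e} J per bit")  # ~2.9e-21 J
```

Real devices dissipate far more than this per bit; the NTU result concerns how much of that excess can, in principle, be avoided by a quantum agent.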
Planning for future contingencies
To understand why, consider that when a classical agent creates a strategy, it must plan for all possible future contingencies. This means it stores possibilities that never occur, wasting resources. Thompson explains this with a simple analogy. Suppose you are packing to go on a day out. Because you are not sure what the weather is going to be, you must pack items to cover all possible weather outcomes. If it’s sunny, you’d like sunglasses. If it rains, you’ll need your umbrella. But if you only end up using one of these items, you’ll have wasted space in your bag.
“It turns out that the same principle applies to information,” explains Thompson. “Depending on future outcomes, some stored information may turn out to be unnecessary – yet an agent must still maintain it to stay ready for any contingency.”
For a classical system, this can be a very wasteful process. Quantum systems, however, can use superposition to store past information more efficiently. When systems in a quantum superposition are measured, they probabilistically reveal an outcome associated with only one of the states in the superposition. Hence, while superposition can be used to store multiple possibilities at once, upon measurement all excess information is automatically erased, “almost as if they had never stored this information at all,” Thompson explains.
The upshot is that because information erasure has close ties to energy dissipation, this gives quantum systems an energetic advantage. “This is a fantastic result focusing on the physical aspect that many other approaches neglect,” says Vlatko Vedral, a physicist at the University of Oxford, UK who was not involved in the research.
Implications of the research
Gu and Thompson say their result could have implications for the large language models (LLMs) behind popular AI tools such as ChatGPT, as it suggests there might be theoretical advantages, from an energy consumption point of view, in using quantum computers to run them.
Another, more foundational question they hope to understand regarding LLMs is the inherent asymmetry in their behaviour. “It is likely a lot more difficult for an LLM to write a book from back cover to front cover, as opposed to in the more conventional temporal order,” Thompson notes. When considered from an information-theoretic point of view, the two tasks are equivalent, making this asymmetry somewhat surprising.
In Thompson and Gu’s view, taking waste into consideration could shed light on this asymmetry. “It is likely we have to waste more information to go in one direction over the other,” Thompson says, “and we have some tools here which could be used to analyse this”.
For Vedral, the result also has philosophical implications. If quantum agents are more optimal, he says, it “surely is telling us that the most coherent picture of the universe is the one where the agents are also quantum and not just the underlying processes that they observe”.
This article was amended on 19 November 2025 to correct a reference to the minimum energy cost of erasing information. It is the Landauer minimum, not the Landau minimum.
When you look back at the early days of computing, some familiar names pop up, including John von Neumann, Nicholas Metropolis and Richard Feynman. But they were not lonely pioneers – they were part of a much larger group, using mechanical and then electronic computers to do calculations that had never been possible before.
These people, many of whom were women, were the first scientific programmers and computational scientists. Skilled in the complicated operation of early computing devices, they often had degrees in maths or science, and were an integral part of research efforts. And yet, their fundamental contributions are mostly forgotten.
This was in part because of their gender – it was an age when sexism was rife, and it was standard for women to be fired from their job after getting married. However, there is another important factor that is often overlooked, even in today’s scientific community – people in technical roles are often underappreciated and underacknowledged, even though they are the ones who make research possible.
Human and mechanical computers
Originally, a “computer” was a human being who did calculations by hand or with the help of a mechanical calculator. It is thought that the world’s first computational lab was set up in 1937 at Columbia University. But it wasn’t until the Second World War that the demand for computation really exploded, driven by the need for artillery calculations, new technologies and code breaking.
Human computers The term “computer” originally referred to people who performed calculations by hand. Here, Kay McNulty, Alyse Snyder and Sis Stump operate the differential analyser in the basement of the Moore School of Electrical Engineering, University of Pennsylvania, circa 1942–1945. (Courtesy: US government)
In the US, the development of the atomic bomb during the Manhattan Project (established in 1943) required huge computational efforts, so it wasn’t long before the New Mexico site had a hand-computing group. Called the T-5 group of the Theoretical Division, it initially consisted of about 20 people. Most were women, including the spouses of other scientific staff. Among them were Mary Frankel, a mathematician married to physicist Stan Frankel; mathematician Augusta “Mici” Teller, who was married to Edward Teller, the “father of the hydrogen bomb”; and Jean Bacher, the wife of physicist Robert Bacher.
As the war continued, the T-5 group expanded to include civilian recruits from the nearby towns and members of the Women’s Army Corps. Its staff worked around the clock, using printed mathematical tables and desk calculators in four-hour shifts – but that was not enough to keep up with the computational needs for bomb development. In the early spring of 1944, IBM punch-card machines were brought in to supplement the human power. They became so effective that the machines were soon being used for all large calculations, 24 hours a day, in three shifts.
The computational group continued to grow, and among the new recruits were Naomi Livesay and Eleonor Ewing. Livesay held an advanced degree in mathematics and had done a course in operating and programming IBM electric calculating machines, making her an ideal candidate for the T-5 division. She in turn recruited Ewing, a fellow mathematician who was a former colleague. The two young women supervised the running of the IBM machines around the clock.
The frantic pace of the T-5 group continued until the end of the war in September 1945. The development of the atomic bomb required an immense computational effort, which was made possible through hand and punch-card calculations.
Electronic computers
Shortly after the war ended, the first fully electronic, general-purpose computer – the Electronic Numerical Integrator and Computer (ENIAC) – became operational at the University of Pennsylvania, following two years of development. The project had been led by physicist John Mauchly and electrical engineer J Presper Eckert. The machine was operated and coded by six women – mathematicians Betty Jean Jennings (later Bartik); Kathleen, or Kay, McNulty (later Mauchly, then Antonelli); Frances Bilas (Spence); Marlyn Wescoff (Meltzer) and Ruth Lichterman (Teitelbaum); as well as Betty Snyder (Holberton) who had studied journalism.
World first The ENIAC was the first programmable, electronic, general-purpose digital computer. It was built at the US Army’s Ballistic Research Laboratory in 1945, then moved to the University of Pennsylvania in 1946. Its initial team of six coders and operators were all women, including Betty Jean Jennings (later Bartik – left of photo) and Frances Bilas (later Spence – right of photo). They are shown preparing the computer for Demonstration Day in February 1946. (Courtesy: US Army/ ARL Technical Library)
Polymath John von Neumann also got involved when looking for more computing power for projects at the new Los Alamos Laboratory, established in New Mexico in 1947. In fact, although the ENIAC was originally designed to solve ballistic trajectory problems, the first problem to be run on it was “the Los Alamos problem” – a thermonuclear feasibility calculation for Teller’s group studying the H-bomb.
Like in the Manhattan Project, several husband-and-wife teams worked on the ENIAC, the most famous being von Neumann and his wife Klara Dán, and mathematicians Adele and Herman Goldstine. Dán von Neumann in particular worked closely with Nicholas Metropolis, who alongside mathematician Stanislaw Ulam had coined the term Monte Carlo to describe numerical methods based on random sampling. Indeed, between 1948 and 1949 Dán von Neumann and Metropolis ran the first series of Monte Carlo simulations on an electronic computer.
Work began on a new machine at Los Alamos in 1948 – the Mathematical Analyzer Numerical Integrator and Automatic Computer (MANIAC) – which ran its first large-scale hydrodynamic calculation in March 1952. Many of its users were physicists, and its operators and coders included mathematicians Mary Tsingou (later Tsingou-Menzel), Marjorie Jones (Devaney) and Elaine Felix (Alei); plus Verna Ellingson (later Gardiner) and Lois Cook (Leurgans).
Early algorithms
The Los Alamos scientists tried all sorts of problems on the MANIAC, including a chess-playing program – the first documented case of a machine defeating a human at the game. However, two of these projects stand out because they had profound implications on computational science.
In 1953 the Tellers, together with Metropolis and physicists Arianna and Marshall Rosenbluth, published the seminal article “Equation of state calculations by fast computing machines” (J. Chem. Phys. 21 1087). The work introduced the ideas behind the “Metropolis (later renamed Metropolis–Hastings) algorithm”, which is a Monte Carlo method that is based on the concept of “importance sampling”. (While Metropolis was involved in the development of Monte Carlo methods, it appears that he did not contribute directly to the article, but provided access to the MANIAC nightshift.) This is the progenitor of the Markov Chain Monte Carlo methods, which are widely used today throughout science and engineering.
Marshall later recalled how the research came about when he and Arianna had proposed using the MANIAC to study how solids melt (AIP Conf. Proc. 690 22).
A mind for chess Paul Stein (left) and Nicholas Metropolis play “Los Alamos” chess against the MANIAC. “Los Alamos” chess was a simplified version of the game, with the bishops removed to reduce the MANIAC’s processing time between moves. The computer still needed about 20 minutes between moves. The MANIAC became the first computer to beat a human opponent at chess in 1956. (Courtesy: US government / Los Alamos National Laboratory)
Edward Teller meanwhile had the idea of using statistical mechanics and taking ensemble averages instead of following detailed kinematics for each individual disk, and Mici helped with programming during the initial stages. However, the Rosenbluths did most of the work, with Arianna translating and programming the concepts into an algorithm.
The 1953 article is remarkable, not only because it led to the Metropolis algorithm, but also as one of the earliest examples of using a digital computer to simulate a physical system. The main innovation of this work was in developing “importance sampling”. Instead of sampling from random configurations, it samples with a bias toward physically important configurations which contribute more towards the integral.
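The acceptance rule at the heart of the algorithm is compact enough to sketch. The toy example below samples a single coordinate in a harmonic potential rather than the interacting hard disks of the 1953 paper, but the logic – propose a random move, always accept it if the energy drops, otherwise accept with probability exp(−βΔE) – is the same importance-sampling idea; the potential, temperature and step size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 1.0                      # inverse temperature 1/kT (arbitrary units)

def energy(x):                  # toy potential; the 1953 work used interacting hard disks
    return 0.5 * x**2

x, step, samples = 0.0, 0.5, []
for _ in range(100_000):
    x_new = x + rng.uniform(-step, step)          # trial move
    dE = energy(x_new) - energy(x)
    # Metropolis rule: accept downhill moves; accept uphill ones with probability exp(-beta*dE)
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        x = x_new
    samples.append(x)

# Configurations are visited with weight exp(-beta*E), so plain averages over the
# chain estimate thermal expectation values (importance sampling in action).
print(f"<x^2> = {np.mean(np.square(samples)):.3f}  (exact value: {1/beta:.3f})")
```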
Furthermore, the article also introduced another computational trick, known as “periodic boundary conditions” (PBCs): a set of conditions which are often used to approximate an infinitely large system by using a small part known as a “unit cell”. Both importance sampling and PBCs went on to become workhorse methods in computational physics.
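Periodic boundary conditions are just as simple to state in code: coordinates are wrapped back into the unit cell, and separations are measured to the nearest periodic image of a neighbour. The snippet below is a generic illustration with an arbitrary cell size, not code from the original work.

```python
import numpy as np

L = 10.0   # side length of the cubic unit cell (arbitrary units)

def wrap(positions):
    """Map coordinates back into the primary cell."""
    return positions % L

def minimum_image(r_ij):
    """Displacement between two particles using the nearest periodic image."""
    return r_ij - L * np.round(r_ij / L)

# A particle leaving one face re-enters through the opposite face, and distances
# are always measured to the closest copy of a neighbour:
r1, r2 = np.array([0.5, 9.8, 5.0]), np.array([9.7, 0.3, 5.0])
print(wrap(np.array([10.2, -0.1, 5.0])))        # -> approximately [0.2, 9.9, 5.0]
print(np.linalg.norm(minimum_image(r1 - r2)))   # ~0.94 rather than ~13
```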
In the summer of 1953, physicist Enrico Fermi, Ulam, Tsingou and physicist John Pasta also made a significant breakthrough using the MANIAC. They ran a “numerical experiment” as part of a series meant to illustrate possible uses of electronic computers in studying various physical phenomena.
The team modelled a 1D chain of oscillators with a small nonlinearity to see if it would behave as hypothesized, reaching an equilibrium with the energy redistributed equally across the modes (doi.org/10.2172/4376203). However, their work showed that this was not guaranteed for small perturbations – a non-trivial and non-intuitive observation that would not have been apparent without the simulations. It is the first example of a physics discovery made not by theoretical or experimental means, but through a computational approach. It would later lead to the discovery of solitons and integrable models, the development of chaos theory, and a deeper understanding of ergodic limits.
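A modern laptop can reproduce the gist of that numerical experiment in a few dozen lines. The sketch below integrates an FPUT-style chain with a weak quadratic nonlinearity, starts with all the energy in the lowest normal mode, and reports how the energy is shared among the first few modes afterwards; the chain length, nonlinearity strength and integration time are illustrative choices rather than the 1953 values.

```python
import numpy as np

# FPUT alpha-chain: N oscillators with fixed ends and a weak quadratic nonlinearity
N, alpha, dt, steps = 32, 0.25, 0.05, 200_000
x = np.sin(np.pi * np.arange(1, N + 1) / (N + 1))    # all energy in the lowest mode
v = np.zeros(N)

def force(x):
    xp = np.concatenate(([0.0], x, [0.0]))           # fixed boundary conditions
    d_right = xp[2:] - xp[1:-1]
    d_left = xp[1:-1] - xp[:-2]
    return (d_right - d_left) + alpha * (d_right**2 - d_left**2)

def mode_energy(x, v, k):
    """Energy in normal mode k of the linearized chain."""
    modes = np.sin(np.pi * k * np.arange(1, N + 1) / (N + 1))
    qk = np.sqrt(2 / (N + 1)) * (x @ modes)
    pk = np.sqrt(2 / (N + 1)) * (v @ modes)
    omega_k = 2 * np.sin(np.pi * k / (2 * (N + 1)))
    return 0.5 * (pk**2 + (omega_k * qk)**2)

for _ in range(steps):                               # leapfrog (velocity Verlet) integration
    v += 0.5 * dt * force(x)
    x += dt * v
    v += 0.5 * dt * force(x)

# Instead of spreading evenly over all 32 modes, the energy remains shared
# among only the lowest few - the behaviour that surprised the original team.
print([round(mode_energy(x, v, k), 4) for k in range(1, 6)])
```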
Although the paper says the work was done by all four scientists, Tsingou’s role was forgotten, and the results became known as the Fermi–Pasta–Ulam problem. It was not until 2008, when French physicist Thierry Dauxois advocated for giving her credit in a Physics Today article, that Tsingou’s contribution was properly acknowledged. Today the finding is called the Fermi–Pasta–Ulam–Tsingou problem.
The year 1953 also saw IBM’s first commercial, fully electronic computer – an IBM 701 – arrive at Los Alamos. Soon the theoretical division had two of these machines, which, alongside the MANIAC, gave the scientists unprecedented computing power. Among those to take advantage of the new devices were Martha Evans (about whom very little is known) and theoretical physicist Francis Harlow, who began to tackle the largely unexplored subject of computational fluid dynamics.
The idea was to use a mesh of cells through which the fluid, represented as particles, would move. This computational method made it possible to solve complex hydrodynamics problems (involving large distortions and compressions of the fluid) in 2D and 3D. Indeed, the method proved so effective that it became a standard tool in plasma physics where it has been applied to every conceivable topic from astrophysical plasmas to fusion energy.
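The essence of the method – particles carry the material, a mesh carries the fields – can be sketched without a full hydrodynamics solver. The snippet below shows only the generic particle-to-mesh deposit and mesh-to-particle gather steps with linear (“cloud-in-cell”) weighting, using made-up numbers; it illustrates the particle-in-cell bookkeeping rather than reconstructing Evans and Harlow’s code.

```python
import numpy as np

L, ncells = 1.0, 16
dx = L / ncells
rng = np.random.default_rng(2)
xp = rng.uniform(0, L, 1000)                  # particle positions on a periodic 1D domain

def deposit(xp, weight=1.0):
    """Spread each particle's weight onto its two nearest grid cells."""
    rho = np.zeros(ncells)
    cell = np.floor(xp / dx).astype(int) % ncells
    frac = xp / dx - np.floor(xp / dx)        # fractional position within the cell
    np.add.at(rho, cell, weight * (1 - frac))
    np.add.at(rho, (cell + 1) % ncells, weight * frac)
    return rho / dx

def gather(field, xp):
    """Interpolate a grid-defined field back to the particle positions."""
    cell = np.floor(xp / dx).astype(int) % ncells
    frac = xp / dx - np.floor(xp / dx)
    return field[cell] * (1 - frac) + field[(cell + 1) % ncells] * frac

rho = deposit(xp)                                           # mesh density built from particles
E_grid = np.sin(2 * np.pi * np.arange(ncells) * dx / L)     # stand-in for a solved field
E_particles = gather(E_grid, xp)                            # values used to push the particles
print(rho.sum() * dx, E_particles[:3])
```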
The resulting internal Los Alamos report – The Particle-in-cell Method for Hydrodynamic Calculations, published in 1955 – showed Evans as first author and acknowledged eight people (including Evans) for the machine calculations. However, while Harlow is remembered as one of the pioneers of computational fluid dynamics, Evans was forgotten.
A clear-cut division of labour?
In an age when women had very limited access to the frontlines of research, the computational war effort brought many female researchers and technical staff in. As their contributions come to light, it becomes clear that their role was not simply a clerical one.
Skilled role Operating the ENIAC required an analytical mind as well as technical skills. (Top) Irwin Goldstein setting the switches on one of the ENIAC’s function tables at the Moore School of Electrical Engineering in 1946. (Middle) Gloria Gordon (later Bolotsky – crouching) and Ester Gerston (standing) wiring the right side of the ENIAC with a new program, c. 1946. (Bottom) Glenn A Beck changing a tube on the ENIAC. Replacing a bad tube meant checking among the ENIAC’s 19,000 possibilities. (Courtesy: US Army / Harold Breaux; US Army / ARL Technical Library; US Army)
There is a view that the coders’ work was “the vital link between the physicist’s concepts (about which the coders more often than not didn’t have a clue) and their translation into a set of instructions that the computer was able to perform, in a language about which, more often than not, the physicists didn’t have a clue either”, as physicists Giovanni Battimelli and Giovanni Ciccotti wrote in 2018 (Eur. Phys. J. H43 303). But the examples we have seen show that some of the coders had a solid grasp of the physics, and some of the physicists had a good understanding of the machine operation. Rather than a skilled–non-skilled/men–women separation, the division of labour was blurred. Indeed, it was more of an effective collaboration between physicists, mathematicians and engineers.
Even in the early days of the T-5 division before electronic computers existed, Livesay and Ewing, for example, attended maths lectures from von Neumann, and introduced him to punch-card operations. As has been documented in books including Their Day in the Sun by Ruth Howes and Caroline Herzenberg, they also took part in the weekly colloquia held by J Robert Oppenheimer and other project leaders. This shows they should not be dismissed as mere human calculators and machine operators who supposedly “didn’t have a clue” about physics.
Verna Ellingson (Gardiner) is another forgotten coder who worked at Los Alamos. While little information about her can be found, she appears as the last author on a 1955 paper (Science 122 465) written with Metropolis and physicist Joseph Hoffman – “Study of tumor cell populations by Monte Carlo methods”. The next year she was first author of “On certain sequences of integers defined by sieves” with mathematical physicist Roger Lazarus, Metropolis and Ulam (Mathematics Magazine 29 117). She also worked with physicist George Gamow on attempts to discover the code for DNA selection of amino acids, which just shows the breadth of projects she was involved in.
Evans not only worked with Harlow but took part in a 1959 conference on self-organizing systems, where she queried AI pioneer Frank Rosenblatt on his ideas about human and machine learning. Her attendance at such a meeting, in an age when women were not common attendees, implies we should not view her as “just a coder”.
With their many and wide-ranging contributions, it is more than likely that Evans, Gardiner, Tsingou and many others were full-fledged researchers, and were perhaps even the first computational scientists. “These women were doing work that modern computational physicists in the [Los Alamos] lab’s XCP [Weapons Computational Physics] Division do,” says Nicholas Lewis, a historian at Los Alamos. “They needed a deep understanding of both the physics being studied, and of how to map the problem to the particular architecture of the machine being used.”
An evolving identity
What’s in a name Marjorie Jones (later Devaney), a mathematician, shown in 1952 punching a program onto paper tape to be loaded into the MANIAC. The name of this role evolved to programmer during the 1950s. (Courtesy: US government / Los Alamos National Laboratory)
In the 1950s there was no computational physics or computer science, so it’s unsurprising that the practitioners of these disciplines went by different names, and that their identity has evolved over the decades since.
1930s–1940s
Originally a “computer” was a person doing calculations by hand or with the help of a mechanical calculator.
Late 1940s – early 1950s
A “coder” was a person who translated mathematical concepts into a set of instructions in machine language. John von Neumann and Herman Goldstine distinguished between “coding” and “planning”, with the former being the lower-level work of turning flow diagrams into machine language (and doing the physical configuration) while the latter did the mathematical analysis of the problem.
Meanwhile, an “operator” would physically handle the computer (replacing punch cards, doing the rewiring, etc). In the late-1940s coders were also operators.
As historians note in the book ENIAC in Action, this was an age when “It was hard to devise the mathematical treatment without a good knowledge of the processes of mechanical computation…It was also hard to operate the ENIAC without understanding something about the mathematical task it was undertaking.”
For the ENIAC a “programmer” was not a person but “a unit combining different sequences in a coherent computation”. The term would later shift and eventually overlap with the meaning of coder as a person’s job.
1960s
Computer scientist Margaret Hamilton, who led the development of the on-board flight software for NASA’s Apollo program, coined the term “software engineering” to distinguish the practice of designing, developing, testing and maintaining software from the engineering tasks associated with the hardware.
1980s – early 2000s
Using the term “programmer” for someone who coded computers peaked in popularity in the 1980s, but by the 2000s it had given way to other job titles, such as various flavours of “developer” or “software architect”.
Early 2010s
A “research software engineer” is a person who combines professional software engineering expertise with an intimate understanding of scientific research.
Overlooked then, overlooked now
Credited or not, these pioneering women and their contributions have been mostly forgotten, and only in recent decades have their roles come to light again. But why were they obscured by history in the first place?
Secrecy and sexism seem to be the main factors at play. For example, Livesay was not allowed to pursue a PhD in mathematics because she was a woman, and in the cases of the many married couples, the team contributions were attributed exclusively to the husband. The existence of the Manhattan Project was publicly announced in 1945, but documents that contain certain nuclear-weapons-related information remain classified today. Because these are likely to remain secret, we will never know the full extent of these pioneers’ contributions.
But another often overlooked reason is the widespread underappreciation of the key role of computational scientists and research software engineers, a term that was only coined just over a decade ago. Even today, these non-traditional research roles end up being undervalued. A 2022 survey by the UK Software Sustainability Institute, for example, showed that only 59% of research software engineers were named as authors, with barely a quarter (24%) mentioned in the acknowledgements or main text, while a sixth (16%) were not mentioned at all.
The separation between those who understand the physics and those who write the code and operate the hardware goes back to the early days of computing (see box above), but it wasn’t entirely accurate even then. People who implement complex scientific computations are not just coders or skilled operators of supercomputers, but truly multidisciplinary scientists who have a deep understanding of the scientific problems, mathematics, computational methods and hardware.
Such people – whatever their gender – play a key role in advancing science and yet remain the unsung heroes of the discoveries their work enables. Perhaps what this story of the forgotten pioneers of computational physics tells us is that some views rooted in the 1950s are still influencing us today. It’s high time we moved on.
Gravity might be able to quantum-entangle particles even if the gravitational field itself is classical. That is the conclusion of a new study by Joseph Aziz and Richard Howl at Royal Holloway, University of London, which challenges a popular view that such entanglement would necessarily imply that gravity must be quantized. The result could be important in the ongoing attempt to develop a theory of quantum gravity that unites quantum mechanics with Einstein’s general theory of relativity.
“When you try to quantize the gravitational interaction in exactly the same way we tried to mathematically quantize the other forces, you end up with mathematically inconsistent results – you end up with infinities in your calculations that you can’t do anything about,” Howl tells Physics World.
“With the other interactions, we quantized them assuming they live within an independent background of classical space and time,” Howl explains. “But with quantum gravity, arguably you cannot do this [because] gravity describes space−time itself rather than something within space−time.”
Quantum entanglement occurs when two particles share linked quantum states even when separated. While it has become a powerful probe of the gravitational field, the central question is whether gravity can mediate entanglement only if it is itself quantum in nature.
General treatment
“It has generally been considered that the gravitational interaction can only entangle matter if the gravitational field is quantum,” Howl says. “We have argued that you could treat the gravitational interaction as more general than just the mediation of the gravitational field such that even if the field is classical, you could in principle entangle matter.”
Quantum field theory postulates that entanglement between masses arises through the exchange of virtual gravitons. These are hypothetical, transient quantum excitations of the gravitational field. Aziz and Howl propose that even if the field remains classical, virtual-matter processes can still generate entanglement indirectly. These processes, he says, “will persist even when the gravitational field is considered classical and could in principle allow for entanglement”.
The idea of probing the quantum nature of gravity through entanglement goes back to a suggestion by Richard Feynman in the 1950s. He envisioned placing a tiny mass in a superposition of two locations and checking whether its gravitational field was also superposed. Though elegant, the idea seemed untestable at the time.
“Recently, two proposals showed that one way you could test that the field is in a superposition (and thus quantum) is by putting two masses in a quantum superposition of two locations and seeing if they become entangled through the gravitational interaction,” says Howl. “This also seemed to be much more feasible than Feynman’s original idea.” Such experiments might use levitated diamonds, metallic spheres, or cold atoms – systems where both position and gravitational effects can be precisely controlled.
Aziz and Howl’s work, however, considers whether such entanglement could arise even if gravity is not quantum. They find that certain classical-gravity processes can in principle entangle particles, though the predicted effects are extremely small.
“These classical-gravity entangling effects are likely to be very small in near-future experiments,” Howl says. “This though is actually a good thing: it means that if we see entanglement…we can be confident that this means that gravity is quantized.”
The paper has drawn a strong response from some leading figures in the field, including Chiara Marletto at the University of Oxford, who co-developed the original idea of using gravitationally induced entanglement as a test of quantum gravity.
“The phenomenon of gravitationally induced entanglement … is a game changer in the search for quantum gravity, as it provides a way to detect quantum effects in the gravitational field indirectly, with laboratory-scale equipment,” she says. Detecting it would, she adds, “constitute the first experimental confirmation that gravity is quantum, and the first experimental refutation of Einstein’s relativity as an adequate theory of gravity”.
However, Marletto disputes Aziz and Howl’s interpretation. “No classical theory of gravity can mediate entanglement via local means, contrary to what the study purports to show,” she says. “What the study actually shows is that a classical theory with direct, non-local interactions between the quantum probes can get them entangled.” In her view, that mechanism “is not new and has been known for a long time”.
Despite the controversy, Howl and Marletto agree that experiments capable of detecting gravitationally induced entanglement would be transformative. “We see our work as strengthening the case for these proposed experiments,” Howl says. Marletto concurs that “detecting gravitationally induced entanglement will be a major milestone … and I hope and expect it will happen within the next decade.”
Howl hopes the work will encourage further discussion about quantum gravity. “It may also lead to more work on what other ways you could argue that classical gravity can lead to entanglement,” he says.
At-scale quantum By integrating Delft Circuits’ Cri/oFlex® cabling technology (above) into Bluefors’ dilution refrigerators, the vendors’ combined customer base will benefit from an industrially proven and fully scalable I/O solution for their quantum systems. Cri/oFlex® cabling combines fully integrated filtering with a compact footprint and low heatload. (Courtesy: Delft Circuits)
Better together. That’s the headline take on a newly inked technology partnership between Bluefors, a heavyweight Finnish supplier of cryogenic measurement systems, and Delft Circuits, a Dutch manufacturer of specialist I/O cabling solutions designed for the scale-up and industrial deployment of next-generation quantum computers.
The drivers behind the tie-up are clear: as quantum systems evolve – think vastly increased qubit counts plus ever-more exacting requirements on gate fidelity – developers in research and industry will reach a point where current coax cabling technology doesn’t cut it anymore. The answer? Collaboration, joined-up thinking and product innovation.
In short, by integrating Delft Circuits’ Cri/oFlex® cabling technology into Bluefors’ dilution refrigerators, the vendors’ combined customer base will benefit from a complete, industrially proven and fully scalable I/O solution for their quantum systems. The end-game: to overcome the quantum tech industry’s biggest bottleneck, forging a development pathway from quantum computing systems with hundreds of qubits today to tens of thousands of qubits by 2030.
Joined-up thinking
For context, Cri/oFlex® cryogenic RF cables comprise a stripline (a type of transmission line) based on planar microwave circuitry – essentially a conducting strip encapsulated in dielectric material and sandwiched between two conducting ground planes. The use of the polyimide Kapton® as the dielectric ensures Cri/oFlex® cables remain flexible in cryogenic environments (which are necessary to generate quantum states, manipulate them and read them out), with silver or superconducting NbTi providing the conductive strip and ground layer. The standard product comes as a multichannel flex (eight channels per flex) with a range of I/O channel configurations tailored to the customer’s application needs, including flux bias lines, microwave drive lines, signal lines or read-out lines.
“Together with Bluefors, we will accelerate the journey to quantum advantage,” says Robby Ferdinandus of Delft Circuits. (Courtesy: Delft Circuits)
“Reliability is a given with Cri/oFlex®,” says Robby Ferdinandus, global chief commercial officer for Delft Circuits and a driving force behind the partnership with Bluefors. “By integrating components such as attenuators and filters directly into the flex,” he adds, “we eliminate extra parts and reduce points of failure. Combined with fast thermalization at every temperature stage, our technology ensures stable performance across thousands of channels, unmatched by any other I/O solution.”
Technology aside, the new partnership is informed by a “one-stop shop” mindset, offering the high-density Cri/oFlex® solution pre-installed and fully tested in Bluefors cryogenic measurement systems. For the end-user, think turnkey efficiency: streamlined installation, commissioning, acceptance and, ultimately, enhanced system uptime.
Scalability is front-and-centre too, thanks to Delft Circuits’ pre-assembled and tested side-loading systems. The high-density I/O cabling solution delivers up to 50% more channels per side-loading port than Bluefors’ current High Density Wiring, providing a total of 1536 input or control lines to an XLDsl cryostat. In addition, more wiring lines can be added to multiple KF ports as a custom option.
Doubling up for growth
“Our market position in cryogenics is strong, so we have the ‘muscle’ and specialist know-how to integrate innovative technologies like Cri/oFlex®,” says Reetta Kaila of Bluefors. (Courtesy: Bluefors)
Reciprocally, there’s significant commercial upside to this partnership. Bluefors is the quantum industry’s leading cryogenic systems OEM and, by extension, Delft Circuits now has access to the former’s established global customer base, amplifying its channels to market by orders of magnitude. “We have stepped into the big league here and, working together, we will ensure that Cri/oFlex® becomes a core enabling technology on the journey to quantum advantage,” notes Ferdinandus.
That view is amplified by Reetta Kaila, director for global technical sales and new products at Bluefors (and, alongside Ferdinandus, a main-mover behind the partnership). “Our market position in cryogenics is strong, so we have the ‘muscle’ and specialist know-how to integrate innovative technologies like Cri/oFlex® into our dilution refrigerators,” she explains.
A win-win, it seems, along several coordinates. “The Bluefors sales teams are excited to add Cri/oFlex® into the product portfolio,” Kaila adds. “It’s worth noting, though, that the collaboration extends across multiple functions – technical and commercial – and will therefore ensure close alignment of our respective innovation roadmaps.”
Scalable I/O will accelerate quantum innovation
Deconstructed, Delft Circuits’ value proposition is all about enabling, from an I/O perspective, the transition of quantum technologies out of the R&D lab into at-scale practical applications. More specifically: Cri/oFlex® technology allows quantum scientists and engineers to increase the I/O cabling density of their systems easily – and by a lot – while guaranteeing high gate fidelities (minimizing noise and heating) as well as market-leading uptime and reliability.
To put some hard-and-fast performance milestones against that claim, the company has published a granular product development roadmap that aligns Cri/oFlex® cabling specifications against the anticipated evolution of quantum computing systems – from 150+ qubits today out to 40,000 qubits and beyond in 2029 (see figure below, “Quantum alignment”).
The resulting milestones are based on a study of the development roadmaps of more than 10 full-stack quantum computing vendors – a consolidated view that ensures the “guiding principles” of Delft Circuits’ innovation roadmap align with the aggregate quantity and quality of qubits targeted by the system developers over time.
Quantum alignment The new product development roadmap from Delft Circuits starts with the guiding principles, highlighting performance milestones to be achieved by the quantum computing industry over the next five years – specifically, the number of physical qubits per system and gate fidelities. By extension, cabling metrics in the Delft Circuits roadmap focus on “quantity”: the number of I/O channels per loader (i.e. the wiring trees that insert into a cryostat, with typical cryostats having between 6 and 24 slots for loaders) and the number of channels per cryostat (summing across all loaders); also on “quality” (the crosstalk in the cabling flex). To complete the picture, the roadmap outlines product introductions at a conceptual level to enable both the quantity and quality timelines. (Courtesy: Delft Circuits)
International research collaborations will be increasingly led by scientists in China over the coming decade. That is according to a new study by researchers at the University of Chicago, which finds that the power balance in international science has shifted markedly away from the US and towards China over the last 25 years (Proc. Natl. Acad. Sci. 122 e2414893122).
To explore China’s role in global science, the team used a machine-learning model to predict the lead researchers of almost six million scientific papers that involved international collaboration listed by online bibliographic catalogue OpenAlex. The model was trained on author data from 80 000 papers published in high-profile journals that routinely detail author contributions, including team leadership.
The study found that between 2010 and 2012 there were only 4429 scientists from China who were likely to have led China-US collaborations. By 2023, this number had grown to 12 714, meaning that the proportion of team leaders affiliated with Chinese institutions had risen from 30% to 45%.
Key areas
If this trend continues, China will hit “leadership parity” with the US in chemistry, materials science and computer science by 2028, with maths, physics and engineering being level by 2031. The analysis also suggests that China will achieve leadership parity with the US in eight “critical technology” areas by 2030, including AI, semiconductors, communications, energy and high-performance computing.
For China-UK partnerships, the model found that equality had already been reached in 2019, while EU and China leadership roles will be on par this year or next. The authors also found that China has been actively training scientists in nations involved in the “Belt and Road Initiative”, which seeks to connect China more closely to the rest of the world through investments and infrastructure projects.
This, the researchers warn, limits the ability to isolate science done in China. Instead, they suggest that it could inspire a different course of action, with the US and other countries expanding their engagement with the developing world to train a global workforce and accelerate scientific advancements beneficial to their economies.
The LIGO–Virgo–KAGRA collaboration has detected strong evidence for second-generation black holes, which were formed from earlier mergers of smaller black holes. The two gravitational wave signals provide one of the strongest confirmations to date for how Einstein’s general theory of relativity describes rotating black holes. Studying such objects also provides a testbed for probing new physics beyond the Standard Model.
Over the past decade, the global network of interferometers operated by LIGO, Virgo and KAGRA has detected close to 300 gravitational waves (GWs) – mostly from the mergers of binary black holes.
In October 2024 the network detected a clear signal that pointed back to a merger that occurred 700 million light-years away. The progenitor black holes were 20 and 6 solar masses and the larger object was spinning at 370 Hz, which makes it one of the fastest-spinning black holes ever observed.
Just one month later, the collaboration detected the coalescence of another highly imbalanced binary (17 and 8 solar masses), 2.4 billion light-years away. This signal was even more unusual – showing for the first time that the larger companion was spinning in the opposite direction of the binary orbit.
Massive and spinning
While conventional wisdom says black holes should not be spinning at such high rates, the observations were not entirely unexpected. “With both events having one black hole, which is both significantly more massive than the other and rapidly spinning, [the observations] provide tantalizing evidence that these black holes were formed from previous black hole mergers,” explains Stephen Fairhurst at Cardiff University, spokesperson of the LIGO Collaboration. If this were the case, the two GW signals – called GW241011 and GW241110 – would be the first observations of second-generation black holes. This is because when a binary merges, the resulting second-generation object tends to have a large spin.
The GW241011 signal was particularly clear, which allowed the team to make the third-ever observation of higher harmonic modes. These are overtones in the GW signal that become far clearer when the masses of the coalescing bodies are highly imbalanced.
The precision of the GW241011 measurement provides one of the most stringent verifications so far of general relativity. The observations also support Roy Kerr’s prediction that rapid rotation distorts the shape of a black hole.
Kerr and Einstein confirmed
“We now know that black holes are shaped like Einstein and Kerr predicted, and general relativity can add two more checkmarks in its list of many successes,” says team member Carl-Johan Haster at the University of Nevada, Las Vegas. “This discovery also means that we’re more sensitive than ever to any new physics that might lie beyond Einstein’s theory.”
This new physics could include hypothetical particles called ultralight bosons. These could form in clouds just outside the event horizons of spinning black holes, and would gradually drain a black hole’s rotational energy via a quantum effect called superradiance.
The idea is that the observed second-generation black holes had been spinning for billions of years before their mergers occurred. This means that if ultralight bosons were present, they cannot have removed lots of angular momentum from the black holes. This places the tightest constraint to date on the mass of ultralight bosons.
“Planned upgrades to the LIGO, Virgo and KAGRA detectors will enable further observations of similar systems,” Fairhurst says. “They will enable us to better understand both the fundamental physics governing these black hole binaries and the astrophysical mechanisms that lead to their formation.”
Haster adds, “Each new detection provides important insights about the universe, reminding us that each observed merger is both an astrophysical discovery but also an invaluable laboratory for probing the fundamental laws of physics”.
Using a new type of low-power, compact, fluid-based prism to steer the beam in a laser scanning microscope could transform brain imaging and help researchers learn more about neurological conditions such as Alzheimer’s disease.
“We quickly became interested in biological imaging, and work with a neuroscience group at University of Colorado Denver Anschutz Medical Campus that uses mouse models to study neuroscience,” Gopinath tells Physics World. “Neuroscience is not well understood, as illustrated by the neurodegenerative diseases that don’t have good cures. So a great benefit of this technology is the potential to study, detect and treat neurodegenerative diseases such as Alzheimer’s, Parkinson’s and schizophrenia,” she explains.
The researchers fabricated their patented electrowetting prism using custom deposition and lithography methods. The device consists of two immiscible liquids housed in a 5 mm tall, 4 mm diameter glass tube, with a dielectric layer on the inner wall coating four independent electrodes. When an electric field is produced by applying a potential difference between a pair of electrodes on opposite sides of the tube, it changes the surface tension and therefore the curvature of the meniscus between the two liquids. Light passing through the device is refracted by a different amount depending on the angle of tilt of the meniscus (as well as on the optical properties of the liquids chosen), enabling beams to be steered by changing the voltage on the electrodes.
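To get a feel for the optics, here is a minimal sketch (not taken from the paper) that applies Snell’s law at a single tilted liquid–liquid interface to estimate how far a beam travelling along the tube axis is deflected for a given meniscus tilt. The refractive indices and tilt angles are illustrative assumptions.

```python
# Illustrative sketch only: estimates beam deflection at a tilted liquid-liquid
# meniscus using Snell's law. The refractive indices (n1, n2) and tilt angles
# are assumed example values, not parameters from the published device.

import math

def deflection_deg(n1: float, n2: float, tilt_deg: float) -> float:
    """Angular deviation of a beam that arrives along the tube axis and
    crosses a flat interface tilted by tilt_deg from the perpendicular."""
    theta_i = math.radians(tilt_deg)                   # angle of incidence at the interface
    theta_t = math.asin(n1 / n2 * math.sin(theta_i))   # Snell's law: n1 sin(i) = n2 sin(t)
    return math.degrees(theta_i - theta_t)             # deviation from the original direction

# Example: water-like and oil-like liquids, meniscus tilts of 5-20 degrees
n_polar, n_oil = 1.33, 1.48
for tilt in (5, 10, 15, 20):
    print(f"tilt {tilt:2d} deg -> beam steered by about {deflection_deg(n_polar, n_oil, tilt):.2f} deg")
```

In the real device the beam also passes through the glass walls and the second liquid’s outer surface, and the meniscus is curved rather than flat, so this single-interface estimate only indicates the order of magnitude of the steering angle.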
Beam steering for scanning in imaging and microscopy can be achieved via several means, including mechanically controlled mirrors, glass prisms or acousto-optic deflectors (in which a sound wave is used to diffract the light beam). But, unlike the new electrowetting prisms, these methods consume too much power and are not small or lightweight enough to be used for miniature microscopy of neural activity in the brains of living animals.
In tests detailed in Optics Express, the researchers integrated their electrowetting prism into an existing two-photon laser scanning microscope and successfully imaged individual 5 µm-diameter fluorescent polystyrene beads, as well as large clusters of those beads.
They also used computer simulations to study how the liquid–liquid interface moves, finding that sinusoidal actuation voltages at 25 and 75 Hz excite standing-wave resonance modes at the meniscus – a result closely matched by a subsequent experiment, which showed resonances at 24 and 72 Hz. These resonance modes are important for device performance because they increase the angle through which the meniscus can tilt, allowing optical beams to be steered through a greater range of angles and helping to minimize distortions when raster scanning in two dimensions.
Bright explains that this research built on previous work in which an electrowetting prism was used in a benchtop microscope to image a mouse brain. He cites seeing the individual neurons as a standout moment that, coupled with the current results, shows their prism is now “proven and ready to go”.
Gopinath and Bright caution that “more work is needed to allow human brain scans, such as limiting voltage requirements, allowing the device to operate at safe voltage levels, and miniaturization of the device to allow faster scan speeds and acquiring images at a much faster rate”. But they add that miniaturization would also make the device useful for endoscopy, robotics, chip-scale atomic clocks and space-based communication between satellites.
The team has already begun investigating two other potential applications: LiDAR (light detection and ranging) systems and optical coherence tomography (OCT). Next, the researchers “hope to integrate the device into a miniaturized microscope to allow imaging of the brain in freely moving animals in natural outside environments,” they say. “We also aim to improve the packaging of our devices so they can be integrated into many other imaging systems.”
Modular and scalable: the ICE-Q cryogenics platform delivers the performance and reliability needed for professional computing environments while also providing a flexible and extendable design. The standard configuration includes a cooling module, a payload with a large sample space, and a side-loading wiring module for scalable connectivity (Courtesy: ICEoxford)
At the centre of most quantum labs is a large cylindrical cryostat that keeps the delicate quantum hardware at ultralow temperatures. These cryogenic chambers have expanded to accommodate larger and more complex quantum systems, but the scientists and engineers at UK-based cryogenics specialist ICEoxford have taken a radical new approach to the challenge of scalability. They have split the traditional cryostat into a series of cube-shaped modules that slot into a standard 19-inch rack mount, creating an adaptable platform that can easily be deployed alongside conventional computing infrastructure.
“We wanted to create a robust, modular and scalable solution that enables different quantum technologies to be integrated into the cryostat,” says Greg Graf, the company’s engineering manager. “This approach offers much more flexibility, because it allows different modules to be used for different applications, while the system also delivers the efficiency and reliability that are needed for operational use.”
The standard configuration of the ICE-Q platform has three separate modules: a cryogenics unit that provides the cooling power, a large payload for housing the quantum chip or experiment, and a patent-pending wiring module that attaches to the side of the payload to provide the connections to the outside world. Up to four of these side-loading wiring modules can be bolted onto the payload at the same time, providing thousands of external connections while still fitting into a standard rack. For applications where space is not such an issue, the payload can be further extended to accommodate larger quantum assemblies and potentially tens of thousands of radio-frequency or fibre-optic connections.
The cube-shaped form factor provides much improved access to these external connections, whether for designing and configuring the system or for ongoing maintenance work. The outer shell of each module consists of panels that are easily removed, offering a simple mechanism for bolting modules together or stacking them on top of each other to provide a fully scalable solution that grows with the qubit count.
The flexible design also offers a more practical solution for servicing or upgrading an installed system, since individual modules can be simply swapped over as and when needed. “For quantum computers running in an operational environment it is really important to minimize the downtime,” says Emma Yeatman, senior design engineer at ICEoxford. “With this design we can easily remove one of the modules for servicing, and replace it with another one to keep the system running for longer. For critical infrastructure devices, it is possible to have built-in redundancy that ensures uninterrupted operation in the event of a failure.”
Other features have been integrated into the platform to make it simple to operate, including a new software system for controlling and monitoring the ultracold environment. “Most of our cryostats have been designed for researchers who really want to get involved and adapt the system to meet their needs,” adds Yeatman. “This platform offers more options for people who want an out-of-the-box solution and who don’t want to get hands on with the cryogenics.”
Such a bold design choice was enabled in part by a collaborative research project with Canadian company Photonic Inc, funded jointly by the UK and Canada, that was focused on developing an efficient and reliable cryogenics platform for practical quantum computing. That R&D funding helped to reduce the risk of developing an entirely new technology platform that addresses many of the challenges that ICEoxford and its customers had experienced with traditional cryostats. “Quantum technologies typically need a lot of wiring, and access had become a real issue,” says Yeatman. “We knew there was an opportunity to do better.”
However, converting a large cylindrical cryostat into a slimline and modular form factor demanded some clever engineering solutions. Perhaps the most obvious was creating a frame that allows the modules to be bolted together while still remaining leak tight. Traditional cryostats are welded together to ensure a leak-proof seal, but for greater flexibility the ICEoxford team developed an assembly technique based on mechanical bonding.
The side-loading wiring module also presented a design challenge. To squeeze more wires into the available space, the team developed a high-density connector for the coaxial cables to plug into. An additional cold-head was also integrated into the module to pre-cool the cables, reducing the overall heat load generated by such large numbers of connections entering the ultracold environment.
Flexible for the future: the outer shell of the modules is covered with removable panels that make it easy to extend or reconfigure the system (Courtesy: ICEoxford)
Meanwhile, the speed of the cooldown and the efficiency of operation have been optimized by designing a new type of heat exchanger that is fabricated using a 3D printing process. “When warm gas is returned into the system, a certain amount of cooling power is needed just to compress and liquefy that gas,” explains Kelly. “We designed the heat exchangers to exploit the returning cold gas much more efficiently, which enables us to pre-cool the warm gas and use less energy for the liquefaction.”
The initial prototype has been designed to operate at 1 K, which is ideal for the photonics-based quantum systems being developed by ICEoxford’s research partner. But the modular nature of the platform allows it to be adapted to diverse applications, with a second project now underway with the Rutherford Appleton Laboratory to develop a module that will be used at the forefront of the global hunt for dark matter.
Already on the development roadmap are modules that can sustain temperatures as low as 10 mK – which is typically needed for superconducting quantum computing – and a 4 K option for trapped-ion systems. “We already have products for each of those applications, but our aim was to create a modular platform that can be extended and developed to address the changing needs of quantum developers,” says Kelly.
As these different options come onstream, the ICEoxford team believes that it will become easier and quicker to deliver high-performance cryogenic systems that are tailored to the needs of each customer. “It normally takes between six and twelve months to build a complex cryogenics system,” says Graf. “With this modular design we will be able to keep some of the components on the shelf, which would allow us to reduce the lead time by several months.”
More generally, the modular and scalable platform could be a game-changer for commercial organizations that want to exploit quantum computing in their day-to-day operations, as well as for researchers who are pushing the boundaries of cryogenics design with increasingly demanding specifications. “This system introduces new avenues for hardware development that were previously constrained by the existing cryogenics infrastructure,” says Kelly. “The ICE-Q platform directly addresses the need for colder base temperatures, larger sample spaces, higher cooling powers, and increased connectivity, and ensures our clients can continue their aggressive scaling efforts without being bottlenecked by their cooling environment.”
You can find out more about the ICE-Q platform by contacting the ICEoxford team at iceoxford.com, or via email at sales@iceoxford.com. They will also be presenting the platform at the UK’s National Quantum Technologies Showcase in London on 7 November, with a further launch at the American Physical Society meeting in March 2026.
When it comes to building a fully functional “fault-tolerant” quantum computer, companies and government labs all over the world are rushing to be the first over the finish line. But a truly useful universal quantum computer capable of running complex algorithms would have to entangle millions of coherent qubits, which are extremely fragile. Because of environmental factors such as temperature, interference from other electronic systems in hardware, and even errors in measurement, today’s devices would fail under an avalanche of errors long before reaching that point.
So the problem of error correction is a key issue for the future of the market. It arises because errors in qubits can’t be corrected simply by keeping multiple copies, as they are in classical computers: quantum rules forbid copying a qubit state that is still entangled with others and is therefore unknown. To run quantum circuits with millions of gates, we therefore need new tricks to enable quantum error correction (QEC).
Protected states
The general principle of QEC is to spread the information over many qubits so that an error in any one of them doesn’t matter too much. “The essential idea of quantum error correction is that if we want to protect a quantum system from damage then we should encode it in a very highly entangled state,” says John Preskill, director of the Institute for Quantum Information and Matter at the California Institute of Technology in Pasadena.
There is no unique way of achieving that spreading, however. Different error-correcting codes can depend on the connectivity between qubits – whether, say, they are coupled only to their nearest neighbours or to all the others in the device – which tends to be determined by the physical platform being used. However error correction is done, it must be done fast. “The mechanisms for error correction need to be running at a speed that is commensurate with that of the gate operations,” says Michael Cuthbert, founding director of the UK’s National Quantum Computing Centre (NQCC). “There’s no point in doing a gate operation in a nanosecond if it then takes 100 microseconds to do the error correction for the next gate operation.”
At the moment, dealing with errors is largely about compensation rather than correction: patching up the problems of errors in retrospect, for example by using algorithms that can throw out some results that are likely to be unreliable (an approach called “post-selection”). It’s also a matter of making better qubits that are less error-prone in the first place.
Qubits are so fragile that their quantum state is very susceptible to the local environment, and can easily be lost through the process of decoherence. Current quantum computers therefore have very high error rates – roughly one error in every few hundred operations. For quantum computers to be truly useful, this error rate will have to be reduced to around one in a million, while larger, more complex algorithms would require error rates of one in a billion or even one in a trillion. Achieving this requires real-time QEC.
To protect the information stored in qubits, a multitude of unreliable physical qubits have to be combined in such a way that if one qubit fails and causes an error, the others can help protect the system. Essentially, by combining many physical qubits (shown above on the left), one can build a few “logical” qubits that are strongly resistant to noise.
According to Maria Maragkou, commercial vice-president of quantum error-correction company Riverlane, the goal of full QEC has ramifications for the design of the machines all the way from hardware to workflow planning. “The shift to support error correction has a profound effect on the way quantum processors themselves are built, the way we control and operate them, through a robust software stack on top of which the applications can be run,” she explains. The “stack” includes everything from programming languages to user interfaces and servers.
With genuinely fault-tolerant qubits, errors can be kept under control and prevented from proliferating during a computation. Such qubits might be made in principle by combining many physical qubits into a single “logical qubit” in which errors can be corrected (see figure 1). In practice, though, this creates a large overhead: huge numbers of physical qubits might be needed to make just a few fault-tolerant logical qubits. The question is then whether errors in all those physical qubits can be checked faster than they accumulate (see figure 2).
The illustration gives an overview of quantum error correction (QEC) in action within a quantum processing unit. UK-based company Riverlane is building its Deltaflow QEC stack that will correct millions of data errors in real time, allowing a quantum computer to go beyond the reach of any classical supercomputer.
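To make the encoding idea concrete, the following toy sketch – a classical stand-in, not the Deltaflow stack or any production decoder – simulates the three-bit repetition code under independent bit-flip noise with probability p. Majority voting corrects any single flip, so the logical error rate falls to roughly 3p², which is why spreading information across many physical qubits pays off once the physical error rate is low enough.

```python
# Toy sketch: Monte Carlo model of the three-bit repetition code under
# independent bit-flip noise. One logical bit is spread across three physical
# bits; a majority vote corrects single flips, so the logical error rate
# scales as ~3p^2 for small p. This is a classical analogue used purely to
# illustrate the principle of encoding, not a real quantum error-correction code.

import random

def logical_error_rate(p: float, trials: int = 100_000) -> float:
    """Estimate the logical error rate of the 3-bit repetition code
    when each physical bit flips independently with probability p."""
    failures = 0
    for _ in range(trials):
        flips = [random.random() < p for _ in range(3)]  # independent bit flips
        if sum(flips) >= 2:                              # majority vote fails if >= 2 bits flip
            failures += 1
    return failures / trials

if __name__ == "__main__":
    for p in (0.3, 0.1, 0.01, 0.001):
        est = logical_error_rate(p)
        theory = 3 * p**2 - 2 * p**3
        print(f"physical p = {p:>6}: logical ~ {est:.4f} (theory ~ {theory:.4f})")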
Fault-tolerant quantum computing is the ultimate goal, says Jay Gambetta, director of IBM research at the company’s centre in Yorktown Heights, New York. He believes that to perform truly transformative quantum calculations, the system must go beyond demonstrating a few logical qubits – instead, you need arrays of at least 100 of them that can perform more than 100 million quantum operations (10⁸ QuOps). “The number of operations is the most important thing,” he says.
It sounds like a tall order, but Gambetta is confident that IBM will achieve these figures by 2029. By building on what has been achieved so far with error correction and mitigation, he feels “more confident than I ever did before that we can achieve a fault-tolerant computer.” Jerry Chow, previous manager of the Experimental Quantum Computing group at IBM, shares that optimism. “We have a real blueprint for how we can build [such a machine] by 2029,” he says (see figure 3).
Others suspect the breakthrough threshold may be a little lower: Steve Brierley, chief executive of Riverlane, believes that the first error-corrected quantum computer, with around 10 000 physical qubits supporting 100 logical qubits and capable of a million QuOps (a megaQuOp), could come as soon as 2027. Following on, gigaQuOp machines (10⁹ QuOps) should be available by 2030–32, and teraQuOp machines (10¹² QuOps) by 2035–37.
Platform independent
Error mitigation and error correction are just two of the challenges for developers of quantum software. Fundamentally, to develop a truly quantum algorithm involves taking full advantage of the key quantum-mechanical properties such as superposition and entanglement. Often, the best way to do that depends on the hardware used to run the algorithm. But ultimately the goal will be to make software that is not platform-dependent and so doesn’t require the user to think about the physics involved.
“At the moment, a lot of the platforms require you to come right down into the quantum physics, which is a necessity to maximize performance,” says Richard Murray of photonic quantum-computing company Orca. Try to generalize an algorithm by abstracting away from the physics and you’ll usually lower the efficiency with which it runs. “But no user wants to talk about quantum physics when they’re trying to do machine learning or something,” Murray adds. He believes that ultimately it will be possible for quantum software developers to hide those details from users – but Brierley thinks this will require fault-tolerant machines.
“In due time everything below the logical circuit will be a black box to the app developers”, adds Maragkou over at Riverlane. “They will not need to know what kind of error correction is used, what type of qubits are used, and so on.” She stresses that creating truly efficient and useful machines depends on developing the requisite skills. “We need to scale up the workforce to develop better qubits, better error-correction codes and decoders, write the software that can elevate those machines and solve meaningful problems in a way that they can be adopted.” Such skills won’t come only from quantum physicists, she adds: “I would dare say it’s mostly not!”
Yet even now, working on quantum software doesn’t demand a deep expertise in quantum theory. “You can be someone working in quantum computing and solving problems without having a traditional physics training and knowing about the energy levels of the hydrogen atom and so on,” says Ashley Montanaro, who co-founded the quantum software company Phasecraft.
On the other hand, insights can flow in the other direction too: working on quantum algorithms can lead to new physics. “Quantum computing and quantum information are really pushing the boundaries of what we think of as quantum mechanics today,” says Montanaro, adding that QEC “has produced amazing physics breakthroughs.”
Early adopters?
Once we have true error correction, Cuthbert at the UK’s NQCC expects to see “a flow of high-value commercial uses” for quantum computers. What might those be?
One promising arena is quantum chemistry and materials science, where genuine quantum advantage – calculating something that is impossible using classical methods alone – is more or less here already, says Chow. Crucially, however, quantum methods needn’t be used for the entire simulation but can be added to classical ones to give them a boost for particular parts of the problem.
Joint effort In June 2025 IBM in the US and Japan’s national research laboratory RIKEN unveiled the IBM Quantum System Two, the first to be used outside the US. It paired IBM’s 156-qubit Heron quantum computing system (left) with RIKEN’s supercomputer Fugaku (right), one of the most powerful classical systems on Earth. The computers are linked through a high-speed network at the fundamental instruction level to form a proving ground for quantum-centric supercomputing. (Courtesy: IBM and RIKEN)
For example, last year researchers at IBM teamed up with scientists at several RIKEN institutes in Japan to calculate the minimum energy state of the iron sulphide cluster (4Fe-4S) at the heart of the bacterial nitrogenase enzyme that fixes nitrogen. This cluster is too big and complex to be accurately simulated using the classical approximations of quantum chemistry. The researchers therefore combined quantum computing (using IBM’s 72-qubit Heron chip) with high-performance computing (HPC) on RIKEN’s Fugaku machine. This idea of “improving classical methods by injecting quantum as a subroutine” is likely to be a more general strategy, says Gambetta. “The future of computing is going to be heterogeneous accelerators [of discovery] that include quantum.”
Likewise, Montanaro says that Phasecraft is developing “quantum-enhanced algorithms”, where a quantum computer is used, not to solve the whole problem, but just to help a classical computer in some way. “There are only certain problems where we know quantum computing is going to be useful,” he says. “I think we are going to see quantum computers working in tandem with classical computers in a hybrid approach. I don’t think we’ll ever see workloads that are entirely run using a quantum computer.” Among the first important problems that quantum machines will solve, according to Montanaro, are the simulation of new materials – to develop, for example, clean-energy technologies (see figure 4).
“For a physicist like me,” says Preskill, “what is really exciting about quantum computing is that we have good reason to believe that a quantum computer would be able to efficiently simulate any process that occurs in nature.”
3 Structural insights
(Courtesy: Phasecraft)
A promising application of quantum computers is simulating novel materials. Researchers from the quantum algorithms firm Phasecraft, for example, have already shown how a quantum computer could help simulate complex materials such as the polycrystalline compound LK-99, which was purported by some researchers in 2023 to be a room-temperature superconductor.
Using a classical/quantum hybrid workflow, together with the firm’s proprietary material simulation approach to encode and compile materials on quantum hardware, Phasecraft researchers were able to establish a classical model of the LK-99 structure that allowed them to extract an approximate representation of the electrons within the material. The illustration above shows the green and blue electronic structure around red and grey atoms in LK-99.
Montanaro believes another likely near-term goal for useful quantum computing is solving optimization problems – both here and in quantum simulation, “we think genuine value can be delivered already in this NISQ era with hundreds of qubits.” (NISQ, a term coined by Preskill, refers to noisy intermediate-scale quantum computing, with relatively small numbers of rather noisy, error-prone qubits.)
One further potential benefit of quantum computing is that it tends to require less energy than classical high-performance computing, whose energy consumption is notoriously high. If the energy cost could be cut by even a few percent, it would be worth using quantum resources for that reason alone. “Quantum has real potential for an energy advantage,” says Chow. One study in 2020 showed that a particular quantum-mechanical calculation carried out on an HPC system used many orders of magnitude more energy than when it was simulated on a quantum circuit. Such comparisons are not easy, however, in the absence of an agreed and well-defined metric for energy consumption.
Building the market
Right now, the quantum computing market is in a curious superposition of states itself – it has ample proof of principle, but today’s devices are still some way from being able to perform a computation relevant to a practical problem that could not be done with classical computers. Yet to get to that point, the field needs plenty of investment.
The fact that quantum computers, especially if used with HPC, are already unique scientific tools should establish their value in the immediate term, says Gambetta. “I think this is going to accelerate, and will keep the funding going.” It is why IBM is focusing on utility-scale systems of around 100 qubits or so and more than a thousand gate operations, he says, rather than simply trying to build ever bigger devices.
Montanaro sees a role for governments to boost the growth of the industry “where it’s not the right fit for the private sector”. One role of government is simply as a customer. For example, Phasecraft is working with the UK’s National Grid to develop a quantum algorithm for optimizing the energy network. “Longer-term support for academic research is absolutely critical,” Montanaro adds. “It would be a mistake to think that everything is done in terms of the underpinning science, and governments should continue to support blue-skies research.”
The road ahead IBM’s current roadmap charts how the company plans to scale up its hardware to achieve a fault-tolerant device by 2029. Alongside hardware development, the firm will also focus on developing new algorithms and software for these devices. (Courtesy: IBM)
It’s not clear, though, whether there will be a big demand for quantum machines that every user will own and run. Before 2010, “there was an expectation that banks and government departments would all want their own machine – the market would look a bit like HPC,” Cuthbert says. But that demand depends in part on what commercial machines end up being like. “If it’s going to need a premises the size of a football field, with a power station next to it, that becomes the kind of infrastructure that you only want to build nationally.” Even for smaller machines, users are likely to try them first on the cloud before committing to installing one in-house.
According to Cuthbert, the real challenge in developing the supply chain is that many of today’s technologies were developed for the science community – where, say, achieving millikelvin cooling or using high-power lasers is routine. “How do you go from a specialist scientific clientele to something that starts to look like a washing machine factory, where you can make them to a certain level of performance,” while also being much cheaper and easier to use?
But Cuthbert is optimistic about bridging this gap to get to commercially useful machines, encouraged in part by looking back at the classical computing industry of the 1970s. “The architects of those systems could not imagine what we would use our computation resources for today. So I don’t think we should be too discouraged that you can grow an industry when we don’t know what it’ll do in five years’ time.”
Montanaro too sees analogies with those early days of classical computing. “If you think what the computer industry looked like in the 1940s, it’s very different from even 20 years later. But there are some parallels. There are companies that are filling each of the different niches we saw previously, there are some that are specializing in quantum hardware development, there are some that are just doing software.” Cuthbert thinks that the quantum industry is likely to follow a similar pathway, “but more quickly and leading to greater market consolidation more rapidly.”
However, while the classical computing industry was revolutionized by the advent of personal computing in the 1970s and 80s, it seems very unlikely that we will have any need for quantum laptops. Rather, we might increasingly see apps and services appear that use cloud-based quantum resources for particular operations, merging so seamlessly with classical computing that we don’t even notice.
That, perhaps, would be the ultimate sign of success: that quantum computing becomes invisible, no big deal but just a part of how our answers are delivered.
Female university students do much better in introductory physics exams if they have the option of retaking the tests. That’s according to a new analysis of almost two decades of US exam results for more than 26,000 students. The study’s authors say it shows that female students benefit from lower-stakes assessments – and that the persistent “gender grade gap” in physics exam results does not reflect a gender difference in physics knowledge or ability.
The study has been carried out by David Webb from the University of California, Davis, and Cassandra Paul from San Jose State University. It builds on previous work they did in 2023, which showed that the gender gap disappears in introductory physics classes that offer the chance for all students to retake the exams. That study did not, however, explore why the offer of a retake has such an impact.
In the new study, the duo analysed exam results from 1997 to 2015 for a series of introductory physics classes at a public university in the US. The dataset included 26,783 students, mostly in biosciences, of whom about 60% were female. Some of the classes let students retake exams while others did not, thereby letting the researchers explore why retakes close the gender gap.
When Webb and Paul examined the data for classes that offered retakes, they found that in first-attempt exams female students slightly outperformed their male counterparts. But male students performed better than female students in retakes.
This, the researchers argue, discounts the notion that retakes close the gender gap by allowing female students to improve their grades. Instead, they suggest that the benefit of retakes is that they lower the stakes of the first exam.
The team then compared the classes that offered retakes with those that did not, which they called high-stakes courses. They found that the gender gap in exam results was much larger in the high-stakes classes than the lower-stakes classes that allowed retakes.
“This suggests that high-stakes exams give a benefit to men, on average, [and] lowering the stakes of each exam can remove that bias,” Webb told Physics World. He thinks that as well as allowing students to retake exams, physics might benefit from not having comprehensive high-stakes final exams but instead “use final exam time to let students retake earlier exams”.
Improving the efficiency of solar cells will likely be one of the key approaches to achieving net zero emissions in many parts of the world. Many types of solar cells will be required, with some of the best performance and efficiency expected to come from multi-junction devices. Multi-junction solar cells comprise a vertical stack of semiconductor materials with distinct bandgaps, each layer converting a different part of the solar spectrum to maximize how much of the Sun’s energy is turned into electricity.
When there are no constraints on the choice of materials, triple-junction solar cells can outperform double-junction and single-junction solar cells, with a power conversion efficiency (PCE) of up to 51% theoretically possible. But material constraints – due to fabrication complexity, cost or other technical challenges – mean that many such devices still perform far from the theoretical limits.
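To illustrate how a stack divides up the spectrum, the short sketch below converts each junction’s bandgap into the longest wavelength it can absorb, using λ = hc/E. The bandgap values are generic assumptions for a perovskite–perovskite–silicon stack, not those of the device described below.

```python
# Rough illustration only: converts assumed bandgaps of a perovskite-perovskite-
# silicon stack into absorption cutoff wavelengths via lambda_cutoff = hc / E_gap.
# The bandgap values are generic estimates, not those of any reported device.

H = 6.626e-34      # Planck constant (J s)
C = 2.998e8        # speed of light (m/s)
EV = 1.602e-19     # joules per electronvolt

junctions = {
    "top perovskite (wide gap)": 2.0,    # eV, assumed
    "middle perovskite":         1.6,    # eV, assumed
    "bottom silicon":            1.12,   # eV
}

for name, e_gap_ev in junctions.items():
    cutoff_nm = H * C / (e_gap_ev * EV) * 1e9
    print(f"{name:<28} gap {e_gap_ev:.2f} eV -> absorbs photons below ~{cutoff_nm:.0f} nm")
```

Photons with energies above a junction’s bandgap are absorbed in that layer, while lower-energy light passes through to the narrower-gap junctions beneath – which is why a well-chosen stack converts more of the solar spectrum than any single junction can.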
Perovskites are one of the most promising materials in the solar cell world today, but fabricating practical triple-junction solar cells beyond 1 cm² in area has remained a challenge. A research team from Australia, China, Germany and Slovenia set out to change this, recently publishing a paper in Nature Nanotechnology describing the largest and most efficient triple-junction perovskite–perovskite–silicon tandem solar cell to date.
When asked why this device architecture was chosen, Anita Ho-Baillie, one of the lead authors from The University of Sydney, states: “I am interested in triple-junction cells because of the larger headroom for efficiency gains”.
Addressing surface defects in perovskite solar cells
Solar cells formed from metal halide perovskites have potential to be commercially viable, due to their cost-effectiveness, efficiency, ease of fabrication and their ability to be paired with silicon in multi-junction devices. The ease of fabrication means that the junctions can be directly fabricated on top of each other through monolithic integration – which leads to only two terminal connections, instead of four or six. However, these junctions can still contain surface defects.
To enhance the performance and resilience of their triple-junction cell (top and middle perovskite junctions on a bottom silicon cell), the researchers optimized the chemistry of the perovskite material and the cell design. They addressed surface defects in the top perovskite junction by replacing traditional lithium fluoride materials with piperazine-1,4-diium chloride (PDCl). They also replaced methylammonium – which is commonly used in perovskite cells – with rubidium. “The rubidium incorporation in the bulk and the PDCl surface treatment improved the light stability of the cell,” explains Ho-Baillie.
To connect the two perovskite junctions, the team used gold nanoparticles on tin oxide. Because the gold was in a nanoparticle form, the junctions could be engineered to maximize the flow of electric charge and light absorption by the solar cell.
“Another interesting aspect of the study is the visualization of the gold nanoparticles [using transmission electron microscopy] and the critical point when they become a semi-continuous film, which is detrimental to the multi-junction cell performance due to its parasitic absorption,” says Ho-Baillie. “The optimization for achieving minimal particle coverage while achieving sufficient ohmic contact for vertical carrier flow are useful insights”.
Record performance for a large-scale perovskite triple-junction cell
Using these design strategies, Ho-Baillie and colleagues developed a 16 cm² triple-junction cell that achieved an independently certified steady-state PCE of 23.3% – the highest reported for a large-area device. While triple-junction perovskite solar cells have exhibited higher PCEs – with all-perovskite triple-junction cells reaching 28.7% and perovskite–perovskite–silicon devices reaching 27.1% – these were all achieved on a 1 cm² cell, not a large-area cell.
In this study, the researchers also developed a 1 cm² cell that was close to the best, with a PCE of 27.06%, but it is the large-area cell that’s the record breaker. The 1 cm² cell also passed the International Electrotechnical Commission’s (IEC) 61215 thermal cycling test, which exposes the cell to 200 cycles under extreme temperature swings, ranging from –40 to 85°C. During this test, the 1 cm² cell retained 95% of its initial efficiency after 407 h of continuous operation.
The successful thermal cycling test, combined with the high efficiency achieved on a larger cell, suggests that this triple-junction architecture could find real-world use in the near future, even though such devices remain far from their theoretical limits.
It’s rare to come across someone who’s been responsible for enabling a seismic shift in society that has affected almost everyone and everything. Tim Berners-Lee, who invented the World Wide Web, is one such person. His new memoir This is for Everyone unfolds the history and development of the Web and, in places, of the man himself.
Berners-Lee was born in London in 1955 to parents, originally from Birmingham, who met while working on the Ferranti Mark 1 computer and knew Alan Turing. Theirs was a creative, intellectual and slightly chaotic household. His mother could maintain a motorbike with fence wire and pliers, and was a crusader for equal rights in the workplace. His father – brilliant and absent minded – taught Berners-Lee about computers and queuing theory. A childhood of camping and model trains, it was, in Berners-Lee’s view, idyllic.
Berners-Lee had the good fortune to be supported by a series of teachers and managers who recognized his potential and unique way of working. He studied physics at the University of Oxford (his tutor “going with the flow” of Berners-Lee’s unconventional notation and ability to approach problems from oblique angles) and built his own computer. After graduating, he married and, following a couple of jobs, took a six-month placement at the CERN particle-physics lab in Geneva in 1980.
This placement set “a seed that sprouted into a tool that shook up the world”. Berners-Lee saw how difficult it was to share information stored in different languages in incompatible computer systems and how, in contrast, information flowed easily when researchers met over coffee, connected semi-randomly and talked. While at CERN, he therefore wrote a rough prototype for a program to link information in a type of web rather than a structured hierarchy.
Back at CERN, Tim Berners-Lee developed his vision of a “universal portal” to information
The placement ended and the program was ignored, but four years later Berners-Lee was back at CERN. Now divorced and soon to remarry, he developed his vision of a “universal portal” to information. It proved to be the perfect time. All the tools necessary to achieve the Web – the Internet, address labelling of computers, network cables, data protocols, the hypertext language that allowed cross-referencing of text and links on the same computer – had already been developed by others.
Berners-Lee saw the need for a user-friendly interface, using hypertext that could link to information on other computers across the world. His excitement was “uncontainable”, and according to his line manager “few of us if any could understand what he was talking about”. But Berners-Lee’s managers supported him and freed his time away from his actual job to become the world’s first web developer.
Having a vision was one thing, but getting others to share it was another. People at CERN only really started to use the Web properly once the lab’s internal phone book was made available on it. As a student at the time, I can confirm that it was much, much easier to use the Web than log on to CERN’s clunky IBM mainframe, where phone numbers had previously been stored.
Wider adoption relied on a set of volunteer developers, working with open-source software, to make browsers and platforms that were attractive and easy to use. CERN agreed to donate the intellectual property for web software to the public domain, which helped. But the path to today’s Web was not smooth: standards risked diverging and companies wanted to build applications that hindered information sharing.
Feeling that “the Web was outgrowing my institution” and “would be a distraction” to a lab whose core mission was physics, Berners-Lee moved to the Massachusetts Institute of Technology in 1994. There he founded the World Wide Web Consortium (W3C) to ensure consistent, accessible standards were followed by everyone as the Web developed into a global enterprise. The progression sounds straightforward, although earlier accounts, such as James Gillies and Robert Cailliau’s 2000 book How the Web Was Born, imply some rivalry between institutions that is glossed over here.
The rest is history, but not quite the history that Berners-Lee had in mind. By 1995 big business had discovered the possibilities of the Web to maximize influence and profit. Initially inclined to advise people to share good things and not search for bad things, Berners-Lee had reckoned without the insidious power of “manipulative and coercive” algorithms on social networks. Collaborative sites like Wikipedia are closer to his vision of an ideal Web; an emergent good arising from individual empowerment. The flip side of human nature seems to come as a surprise.
The rest of the book brings us up to date with Berners-Lee’s concerns (data, privacy, misuse of AI, toxic online culture), his hopes (the good use of AI), a third marriage and his move into a data-handling business. There are some big awards and an impressive amount of name dropping; he is excited by Order of Merit lunches with the Queen and by sitting next to Paul McCartney’s family at the opening ceremony of the London Olympics in 2012. A flick through the index reveals names ranging from Al Gore and Bono to Lucian Freud. These are not your average computing technology circles.
There are brief character studies to illustrate some of the main players, but don’t expect much insight into their lives. This goes for Berners-Lee too, who rarely steps back to reflect on those around him, or indeed on his own motives beyond that vision of a Web for all, enabling the best of humankind. He is firmly future focused.
Still, there is no-one more qualified to describe what the Web was intended for, its core philosophy, and what caused it to develop to where it is today. You’ll enjoy the book whether you want an insight into the inner workings that make your web browsing possible, relive old and forgotten browser names, or see how big tech wants to monetize and monopolize your online time. It is an easy read from an important voice.
The book ends with a passionate statement for what the future could be, with businesses and individuals working together to switch the Web from “the attention economy to the intention economy”. It’s a future where users are no longer distracted by social media and manipulated by attention-grabbing algorithms; instead, computers and services do what users want them to do, with the information that users want them to have.
Berners-Lee is still optimistic, still an incurable idealist, still driven by vision. And perhaps still a little naïve too in believing that everyone’s values will align this time.