
Superconducting innovation: SQMS shapes up for scalable success in quantum computing

5 June 2025 at 16:00

Developing quantum computing systems with high operational fidelity, enhanced processing capabilities plus inherent (and rapid) scalability is high on the list of fundamental problems preoccupying researchers within the quantum science community. One promising R&D pathway in this regard is being pursued by the Superconducting Quantum Materials and Systems (SQMS) National Quantum Information Science Research Center at the US Department of Energy’s Fermi National Accelerator Laboratory, the pre-eminent US particle physics facility on the outskirts of Chicago, Illinois.

The SQMS approach involves placing a superconducting qubit chip (held at temperatures as low as 10–20 mK) inside a three-dimensional superconducting radiofrequency (3D SRF) cavity – a workhorse technology for particle accelerators employed in high-energy physics (HEP), nuclear physics and materials science. In this set-up, it becomes possible to preserve and manipulate quantum states by encoding them in microwave photons (modes) stored within the SRF cavity (which is also cooled to the millikelvin regime).

Put another way: by pairing superconducting circuits and SRF cavities at cryogenic temperatures, SQMS researchers create environments where microwave photons can have long lifetimes and be protected from external perturbations – conditions that, in turn, make it possible to generate quantum states, manipulate them and read them out. The endgame is clear: reproducible and scalable realization of such highly coherent superconducting qubits opens the way to more complex and scalable quantum computing operations – capabilities that, over time, will be used within Fermilab’s core research programme in particle physics and fundamental physics more generally.

Fermilab is in a unique position to turn this quantum technology vision into reality, given its decades of expertise in developing high-coherence SRF cavities. In 2020, for example, Fermilab researchers demonstrated record coherence lifetimes (of up to two seconds) for quantum states stored in an SRF cavity.

“It’s no accident that Fermilab is a pioneer of SRF cavity technology for accelerator science,” explains Sir Peter Knight, senior research investigator in physics at Imperial College London and an SQMS advisory board member. “The laboratory is home to a world-leading team of RF engineers whose niobium superconducting cavities routinely achieve very high quality factors (Q) from 10¹⁰ to above 10¹¹ – figures of merit that can lead to dramatic increases in coherence time.”
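
To put those figures of merit in context, a cavity mode’s photon lifetime scales as τ = Q/ω = Q/(2πf), so higher Q directly buys longer storage. Below is a minimal back-of-the-envelope sketch, assuming an illustrative cavity frequency of 1.3 GHz (typical of TESLA-style SRF cavities, but not quoted in the article):

```python
import math

def photon_lifetime(q_factor: float, f_hz: float) -> float:
    """Photon (energy) lifetime of a cavity mode: tau = Q / omega."""
    return q_factor / (2 * math.pi * f_hz)

f = 1.3e9  # assumed cavity frequency in Hz (illustrative; not quoted in the article)
for q in (1e10, 1e11):
    print(f"Q = {q:.0e}: tau ≈ {photon_lifetime(q, f):.1f} s")
# Q = 1e10 gives a lifetime of roughly a second; Q = 1e11 pushes it past ten seconds,
# which is why such quality factors translate into multi-second coherence times.
```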

Moreover, Fermilab offers plenty of intriguing HEP use-cases where quantum computing platforms could yield significant research dividends. In theoretical studies, for example, the main opportunities relate to the evolution of quantum states, lattice-gauge theory, neutrino oscillations and quantum field theories in general. On the experimental side, quantum computing efforts are being lined up for jet and track reconstruction during high-energy particle collisions; also for the extraction of rare signals and for exploring exotic physics beyond the Standard Model.

Collaborate to accumulate SQMS associate scientists Yao Lu (left) and Tanay Roy (right) worked with PhD student Taeyoon Kim (centre) to develop a two-qudit superconducting QPU with a record coherence lifetime (>20 ms). (Courtesy: Hannah Brumbaugh, Fermilab)

Cavities and qubits

SQMS has already notched up some notable breakthroughs on its quantum computing roadmap, not least the demonstration of chip-based transmon qubits (a type of charge qubit circuit exhibiting decreased sensitivity to noise) showing systematic and reproducible improvements in coherence, record-breaking lifetimes of over a millisecond, and reductions in performance variation.

Key to success here is an extensive collaborative effort in materials science and the development of novel chip fabrication processes, with the resulting transmon qubit ancillas shaping up as the “nerve centre” of the 3D SRF cavity-based quantum computing platform championed by SQMS. What’s in the works is essentially a unique quantum analogue of a classical computing architecture: the transmon chip providing a central logic-capable quantum information processor and microwave photons (modes) in the 3D SRF cavity acting as the random-access quantum memory.

As for the underlying physics, the coupling between the transmon qubit and discrete photon modes in the SRF cavity allows for the exchange of coherent quantum information, as well as enabling quantum entanglement between the two. “The pay-off is scalability,” says Alexander Romanenko, a senior scientist at Fermilab who leads the SQMS quantum technology thrust. “A single logic-capable processor qubit, such as the transmon, can couple to many cavity modes acting as memory qubits.”

In principle, a single transmon chip could manipulate more than 10 qubits encoded inside a single-cell SRF cavity, substantially streamlining the number of microwave channels required for system control and manipulation as the number of qubits increases. “What’s more,” adds Romanenko, “instead of using quantum states in the transmon [coherence times just crossed into milliseconds], we can use quantum states in the SRF cavities, which have higher quality factors and longer coherence times [up to two seconds].”
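
The architecture sketched above, with one logic-capable transmon addressing several long-lived cavity modes, can be pictured with a toy Hamiltonian. Below is a minimal QuTiP sketch (the library choice and all parameter values are illustrative assumptions, not the SQMS device model) of a two-level transmon coupled to two cavity memory modes through Jaynes–Cummings-type interactions:

```python
from qutip import destroy, qeye, sigmaz, sigmap, sigmam, tensor

N = 5                       # photon-number cutoff per cavity mode (toy value)
wq, w1, w2 = 5.0, 6.0, 6.5  # transmon and cavity-mode frequencies (GHz, illustrative)
g1, g2 = 0.05, 0.05         # qubit-mode couplings (GHz, illustrative)

# Operators on the joint Hilbert space: qubit ⊗ mode 1 ⊗ mode 2
sz = tensor(sigmaz(), qeye(N), qeye(N))
sp = tensor(sigmap(), qeye(N), qeye(N))
sm = tensor(sigmam(), qeye(N), qeye(N))
a1 = tensor(qeye(2), destroy(N), qeye(N))
a2 = tensor(qeye(2), qeye(N), destroy(N))

# One logic-capable transmon coupled to two long-lived cavity "memory" modes
H = (0.5 * wq * sz
     + w1 * a1.dag() * a1 + w2 * a2.dag() * a2
     + g1 * (a1 * sp + a1.dag() * sm)
     + g2 * (a2 * sp + a2.dag() * sm))

print(H.eigenenergies()[:5])  # lowest few dressed energies of the coupled system
```

Adding further memory modes is just another tensor factor and coupling term, which is the sense in which a single processor qubit can serve many cavity-mode memories.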

In terms of next steps, continuous improvement of the ancilla transmon coherence times will be critical to ensure high-fidelity operation of the combined system – with materials breakthroughs likely to be a key rate-determining step. “One of the unique differentiators of the SQMS programme is this ‘all-in’ effort to understand and get to grips with the fundamental materials properties that lead to losses and noise in superconducting qubits,” notes Knight. “There are no short-cuts: wide-ranging experimental and theoretical investigations of materials physics – per the programme implemented by SQMS – are mandatory for scaling superconducting qubits into industrial and scientifically useful quantum computing architectures.”

Laying down a marker, SQMS researchers recently achieved a major milestone in superconducting quantum technology by developing the longest-lived multimode superconducting quantum processing unit (QPU) ever built (coherence lifetime >20 ms). Their processor is based on a two-cell SRF cavity and leverages its exceptionally high quality factor (~10¹⁰) to preserve quantum information far longer than conventional superconducting platforms (typically 1 or 2 ms for rival best-in-class implementations).

Coupled with a superconducting transmon, the two-cell SRF module enables precise manipulation of cavity quantum states (photons) using ultrafast control/readout schemes (allowing for approximately 10⁴ high-fidelity operations within the qubit lifetime). “This represents a significant achievement for SQMS,” claims Yao Lu, an associate scientist at Fermilab and co-lead for QPU connectivity and transduction in SQMS. “We have demonstrated the creation of high-fidelity [>95%] quantum states with large photon numbers [20 photons] and achieved ultra-high-fidelity single-photon entangling operations between modes [>99.9%]. It’s work that will ultimately pave the way to scalable, error-resilient quantum computing.”

Scalable thinking The SQMS multiqudit QPU prototype (above) exploits 3D SRF cavities held at millikelvin temperatures. (Courtesy: Ryan Postel, Fermilab)

Fast scaling with qudits

There’s no shortage of momentum either, with these latest breakthroughs laying the foundations for SQMS “qudit-based” quantum computing and communication architectures. A qudit is a multilevel quantum unit that can occupy more than two states and, in turn, hold more information – i.e. instead of working with a large number of qubits to scale information-processing capability, it may be more efficient to maintain a smaller number of qudits (with each holding a greater range of values for optimized computations).
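
A rough way to quantify that gain: a d-level qudit carries log₂(d) bits of quantum information, so n qudits span the same Hilbert-space dimension as n·log₂(d) qubits. A short illustrative calculation (the dimensions below are examples, not SQMS specifications):

```python
import math

def equivalent_qubits(n_qudits: int, d: int) -> float:
    """Number of qubits whose Hilbert space matches n_qudits d-level qudits."""
    return n_qudits * math.log2(d)

for d in (2, 4, 8, 16):
    print(f"10 qudits with d = {d:2d} levels ≈ {equivalent_qubits(10, d):.0f} qubits")
# d = 2 reduces to ordinary qubits; d = 16 packs four qubits' worth of states
# into each qudit, so fewer physical elements (and control lines) are needed.
```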

Scale-up to a multiqudit QPU system is already underway at SQMS via several parallel routes (and all with a modular computing architecture in mind). In one approach, coupler elements and low-loss interconnects link a nine-cell multimode SRF cavity (the memory) to a two-cell SRF cavity quantum processor. Another iteration uses only two-cell modules, while yet another option exploits custom-designed multimodal cavities (10+ modes) as building blocks.

One thing is clear: with the first QPU prototypes now being tested, verified and optimized, SQMS will soon move to a phase in which many of these modules will be assembled and operated together. By extension, the SQMS effort also encompasses crucial developments in control systems and microwave equipment, where many devices must be optimally synchronized to encode and analyse quantum information in the QPUs.

Along a related coordinate, qudit encodings allow complex algorithms to be run with fewer gates and reduced circuit depth. What’s more, for many simulation problems in HEP and other fields, it’s evident that multilevel systems (qudits) – rather than qubits – provide a more natural representation of the physics in play, making simulation tasks significantly more accessible. The work of encoding several such problems into qudits – including lattice-gauge-theory calculations and others – is similarly ongoing within SQMS.

Taken together, this massive R&D undertaking – spanning quantum hardware and quantum algorithms – can only succeed with a “co-design” approach across strategy and implementation: from identifying applications of interest to the wider HEP community to full deployment of QPU prototypes. Co-design is especially suited to these efforts as it demands sustained alignment of scientific goals with technological implementation to drive innovation and societal impact.

In addition to their quantum computing promise, these cavity-based quantum systems will play a central role as “adapters” and low-loss channels, operating at elevated temperatures, for interconnecting chip- or cavity-based QPUs hosted in different refrigerators. These interconnects will provide an essential building block for the efficient scale-up of superconducting quantum processors into larger quantum data centres.

Quantum insights Researchers in the control room of the SQMS Quantum Garage facility, developing architectures and gates for SQMS hardware tailored toward HEP quantum simulations. From left to right: Nick Bornman, Hank Lamm, Doga Kurkcuoglu, Silvia Zorzetti, Julian Delgado, Hans Johnson (Courtesy: Hannah Brumbaugh)

“The SQMS collaboration is ploughing its own furrow – in a way that nobody else in the quantum sector really is,” says Knight. “Crucially, the SQMS partners can build stuff at scale by tapping into the phenomenal engineering strengths of the National Laboratory system. Designing, commissioning and implementing big machines has been part of the ‘day job’ at Fermilab for decades. In contrast, many quantum computing start-ups must scale their R&D infrastructure and engineering capability from a far-less-developed baseline.”

The last word, however, goes to Romanenko. “Watch this space,” he concludes, “because SQMS is on a roll. We don’t know which quantum computing architecture will ultimately win out, but we will ensure that our cavity-based quantum systems will play an enabling role.”

Scaling up: from qubits to qudits

Left: conceptual illustration of the SQMS Center’s superconducting TESLA cavity coupled to a transmon ancilla qubit (AI-generated). Right: an ancilla qubit with two energy levels – ground ∣g⟩ and excited ∣e⟩ – is used to control a high-coherence (d+1) dimensional qudit encoded in a cavity resonator. The ancilla enables state preparation, control and measurement of the qudit. (Courtesy: Fermilab)



Black-hole scattering calculations could shed light on gravitational waves

4 June 2025 at 17:00

By adapting mathematical techniques used in particle physics, researchers in Germany have developed an approach that could boost our understanding of the gravitational waves that are emitted when black holes collide. Led by Jan Plefka at the Humboldt University of Berlin, the team’s results could prove vital to the success of future gravitational-wave detectors.

Nearly a decade on from the first direct observations of gravitational waves, physicists are hopeful that the next generation of ground- and space-based observatories will soon allow us to study these ripples in space–time with unprecedented precision. But to ensure the success of upcoming projects like the LISA space mission, the increased sensitivity offered by these detectors will need to be accompanied by a deeper theoretical understanding of how gravitational waves are generated through the merging of two black holes.

In particular, they will need to predict more accurately the physical properties of gravitational waves produced by any given colliding pair and account for factors including their respective masses and orbital velocities. For this to happen, physicists will need to develop more precise solutions to the relativistic two-body problem. This problem is a key application of the Einstein field equations, which relate the geometry of space–time to the distribution of matter within it.

No exact solution

“Unlike its Newtonian counterpart, which is solved by Kepler’s Laws, the relativistic two-body problem cannot be solved exactly,” Plefka explains. “There is an ongoing international effort to apply quantum field theory (QFT) – the mathematical language of particle physics – to describe the classical two-body problem.”

In their study, Plefka’s team started from state-of-the-art techniques used in particle physics for modelling the scattering of colliding elementary particles, while accounting for their relativistic properties. When viewed from far away, each black hole can be approximated as a single point which, much like an elementary particle, carries a single mass, charge, and spin.

Taking advantage of this approximation, the researchers modified existing techniques in particle physics to create a framework called worldline quantum field theory (WQFT). “The advantage of WQFT is a clean separation between classical and quantum physics effects, allowing us to precisely target the classical physics effects relevant for the vast distances involved in astrophysical observables,” Plefka says.

Ordinarily, doing calculations with such an approach would involve solving millions of integrals that sum up every single contribution to the black hole pair’s properties across all possible ways that the interaction between them could occur. To simplify the problem, Plefka’s team used a new algorithm that identified relationships between the integrals. This reduced the problem to just 250 “master integrals”, making the calculation vastly more manageable.
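
The reduction step can be pictured as linear algebra: the relations between integrals form a matrix, and the number of independent “master integrals” is the number of integrals minus the rank of that relation matrix. Here is a toy sketch with invented relations among five integrals, purely to illustrate the idea rather than the actual WQFT calculation:

```python
import sympy as sp

# Five toy "integrals" I1..I5 and two invented linear relations among them,
# standing in for the millions of integration-by-parts-style identities.
n_integrals = 5
relations = sp.Matrix([
    [1, -2,  1,  0,  0],   # I1 - 2*I2 + I3 = 0
    [0,  1,  0, -3,  2],   # I2 - 3*I4 + 2*I5 = 0
])

n_masters = n_integrals - relations.rank()
print(f"{n_integrals} integrals, {relations.rank()} independent relations "
      f"-> {n_masters} master integrals")
# Every other integral can then be written as a linear combination of the masters,
# which is what shrinks the real problem down to ~250 master integrals.
```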

With these master integrals, the team could finally produce expressions for three key physical properties of black hole binaries within WQFT. These include the changes in momentum during the gravity-mediated scattering of two black holes and the total energy radiated by both bodies over the course of the scattering.

Genuine physical process

Altogether, the team’s WQFT framework produced the most accurate solution to the Einstein field equations ever achieved to date. “In particular, the radiated energy we found contains a new class of mathematical functions known as ‘Calabi–Yau periods’,” Plefka explains. “While these functions are well-known in algebraic geometry and string theory, this marks the first time they have been shown to describe a genuine physical process.”

With its unprecedented insights into the structure of the relativistic two-body problem, the team’s approach could now be used to build more precise models of gravitational-wave formation, which could prove invaluable for the next generation of gravitational-wave detectors.

More broadly, however, Plefka predicts that the appearance of Calabi–Yau periods in their calculations could lead to an entirely new class of mathematical functions applicable to many areas beyond gravitational waves.

“We expect these periods to show up in other branches of physics, including collider physics, and the mathematical techniques we employed to calculate the relevant integrals will no doubt also apply there,” he says.

The research is described in Nature.


Harmonious connections: bridging the gap between music and science

4 June 2025 at 12:00

CP Snow’s classic The Two Cultures lecture, published in book form in 1959, is the usual go-to reference when exploring the divide between the sciences and humanities. It is a culture war that was raging long before the term became social-media shorthand for today’s tribal battles over identity, values and truth.

While Snow eloquently lamented the lack of mutual understanding between scientific and literary elites, the 21st-century version of the two-cultures debate often plays out with a little less decorum and a lot more profanity. Hip hop duo Insane Clown Posse certainly didn’t hold back in their widely memed 2010 track “Miracles”, which included the lyric “And I don’t wanna talk to a scientist / Y’all motherfuckers lying and getting me pissed”. An extreme example to be sure, but it hammers home the point: Snow’s two-culture concerns continue to resonate strongly almost 70 years after his influential lecture and writings.

A Perfect Harmony: Music, Mathematics and Science by David Darling is the latest addition to a growing genre that seeks to bridge that cultural rift. Like Peter Pesic’s Music and the Making of Modern Science, Susan Rogers and Ogi Ogas’ This Is What It Sounds Like, and Philip Ball’s The Music Instinct, Darling’s book adds to the canon that examines the interplay between musical creativity and the analytical frameworks of science (including neuroscience) and mathematics.

I’ve also contributed, in a nanoscopically small way, to this music-meets-science corpus with an analysis of the deep and fundamental links between quantum physics and heavy metal (When The Uncertainty Principle Goes To 11), and have a long-standing interest in music composed from maths and physics principles and constants (see my Lateral Thoughts articles from September 2023 and July 2024). Darling’s book, therefore, struck a chord with me.

Darling is not only a talented science writer with an expansive back-catalogue to his name, but he is also an accomplished musician (check out his album Songs Of The Cosmos), and his enthusiasm for all things musical spills off the page. Furthermore, he is a physicist, with a PhD in astronomy from the University of Manchester. So if there’s a writer who can genuinely and credibly inhabit both sides of the arts–science cultural divide, it’s Darling.

But is A Perfect Harmony in tune with the rest of the literary ensemble, or marching to a different beat? In other words, is this a fresh new take on the music-meets-maths (meets pop sci) genre or, like too many bands I won’t mention, does it sound suspiciously like something you’ve heard many times before? Well, much like an old-school vinyl album, Darling’s work has the feel of two distinct sides. (And I’ll try to make that my final spin on groan-worthy musical metaphors. Promise.)

Not quite perfect pitch

Although the subtitle for A Perfect Harmony is “Music, Mathematics and Science”, the first half of the book is more of a history of the development and evolution of music and musical instruments in various cultures, rather than a new exploration of the underpinning mathematical and scientific principles. Engaging and entertaining though this is – and all credit to Darling for working in a reference to Van Halen in the opening lines of chapter 1 – it’s well-worn ground: Pythagorean tuning, the circle of fifths, equal temperament, Music of the Spheres (not the Coldplay album, mercifully), resonance, harmonics, etc. I found myself wishing, at times, for a take that felt a little more off the beaten track.

One case in point is Darling’s brief discussion of the theremin. If anything earns the title of “The Physicist’s Instrument”, it’s the theremin – a remarkable device that exploits the innate electrical capacitance of the human body to load a resonant circuit and thus produce an ethereal, haunting tone whose pitch can be varied, without, remarkably, any physical contact.

While I give kudos to Darling for highlighting the theremin, the brevity of the description is arguably a lost opportunity when put in the broader context of the book’s aim to explain the deeper connections between music, maths and science. This could have been a novel and fascinating take on the links between electrical and musical resonance that went well beyond the familiar territory mapped out in standard physics-of-music texts.


As the book progresses, however, Darling moves into more distinctive territory, choosing a variety of inventive examples that are often fascinating and never short of thought-provoking. I particularly enjoyed his description of orbital resonance in the system of seven planets orbiting the red dwarf TRAPPIST-1, 41 light-years from Earth. The orbital periods have ratios, which, when mapped to musical intervals, correspond to a minor sixth, a major sixth, two perfect fifths, a perfect fourth and another perfect fifth. And it’s got to be said that using the music of the eclectic Australian band King Gizzard and the Lizard Wizard to explain microtonality is nothing short of inspired.
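
Curious readers can check the TRAPPIST-1 “chord” themselves. The sketch below uses approximate orbital periods for the seven planets (rounded literature values, quoted here purely as illustrative inputs) and maps each adjacent-pair period ratio onto the nearest just-intonation interval:

```python
# Approximate orbital periods of TRAPPIST-1 b..h in days (rounded literature values)
periods = [1.51, 2.42, 4.05, 6.10, 9.21, 12.35, 18.77]

# A few just-intonation intervals and their frequency ratios
intervals = {
    "perfect fourth": 4 / 3,
    "perfect fifth": 3 / 2,
    "minor sixth": 8 / 5,
    "major sixth": 5 / 3,
}

for inner, outer in zip(periods, periods[1:]):
    ratio = outer / inner
    name, _ = min(intervals.items(), key=lambda kv: abs(kv[1] - ratio))
    print(f"period ratio {ratio:.3f} ≈ {name}")
# Walking outwards this gives: minor sixth, major sixth, perfect fifth, perfect fifth,
# perfect fourth, perfect fifth - the sequence described in the book (exact matches
# depend on the rounded periods used here).
```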

A Perfect Harmony doesn’t entirely close the cultural gap highlighted by Snow all those years ago, but it does hum along pleasantly in the space between. Though the subject matter occasionally echoes well-trodden themes, Darling’s perspective and enthusiasm lend it freshness. There’s plenty here to enjoy, especially for physicists inclined to tune into the harmonies of the universe.

  • 2025 Oneworld Publications 288pp £10.99pb/£6.99e-book


Bury it, don’t burn it: turning biomass waste into a carbon solution

3 June 2025 at 12:00

If a tree fell in a forest almost 4000 years ago, did it make a sound? Well, in the case of an Eastern red cedar in what is now Quebec, Canada, it’s certainly still making noise today.

That’s because in 2013, a team of scientists were digging a trench when they came across the 3775-year-old log. Despite being buried for nearly four millennia, the wood wasn’t rotten and useless. In fact, recent analysis unearthed an entirely different story.

The team, led by atmospheric scientist Ning Zeng of the University of Maryland in the US, found that the wood had only lost 5% of its carbon compared with a freshly cut Eastern red cedar log. “The wood is nice and solid – you could probably make a piece of furniture out of it,” says Zeng. The log had been preserved in such remarkable shape because the clay soil it was buried in was highly impermeable. That limited the amount of oxygen and water reaching the wood, suppressing the activity of micro-organisms that would otherwise have made it decompose.

Fortified and ancient Ning Zeng and colleagues discovered this 3775-year-old preserved log while conducting a biomass burial pilot project in Quebec, Canada. (Courtesy: Mark Sherwood)

This ancient log is a compelling example of “biomass burial”. When plants decompose or are burnt, they release the carbon dioxide (CO2) they had absorbed from the atmosphere. One idea to prevent this CO2 being released back into the atmosphere is to bury the waste biomass under conditions that prevent or slow decomposition, thereby trapping the carbon underground for centuries.

In fact, Zeng and his colleagues discovered the cedar log while they were digging a huge trench to bury 35 tonnes of wood to test this very idea. Nine years later, when they dug up some samples, they found that the wood had barely decomposed. Further analysis suggested that if the logs had been left buried for a century, they would still hold 97% of the carbon that was present when they were felled.
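
Those retention figures are easy to sanity-check under the simplest possible assumption of first-order (exponential) decay; the sketch below uses that assumed model for illustration, whereas the team’s own extrapolation is based on their measured data:

```python
import math

def retention(fraction_left: float, elapsed_years: float, horizon_years: float) -> float:
    """Extrapolate carbon retention assuming simple exponential decay."""
    k = -math.log(fraction_left) / elapsed_years   # first-order rate constant
    return math.exp(-k * horizon_years)

# Ancient Quebec log: 95% of its carbon left after ~3775 years
print(f"100-year retention implied by the ancient log: "
      f"{retention(0.95, 3775, 100):.4f}")   # ≈ 0.9986, i.e. ~99.9%

# Loss rate consistent with the 9-year trial's projection of 97% after a century
k_trial = -math.log(0.97) / 100
print(f"implied loss rate from the trial projection: {k_trial:.2e} per year")
```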

Digging holes

To combat climate change, there is often much discussion about how to remove carbon from the atmosphere. As well as conventional techniques like restoring peatland and replanting forests, there are a variety of more technical methods being developed (figure 1). These include direct air capture (DAC) and ocean alkalinity enhancement, which involves tweaking the chemistry of oceans so that they absorb more CO2. But some scientists – like Sinéad Crotty, a managing director at the Carbon Containment Lab in Connecticut, US – think that biomass burial could be a simpler and cheaper way to sequester carbon.

1 Ready or not

(Adapted from Smith et al. (2024) State of Carbon Dioxide Removal – Edition 2. DOI:10.17605/OSF.IO/F85QJ)

There are multiple methods being developed for capturing, converting and storing carbon dioxide (CO2), each at different stages of readiness for deployment, with varying removal capabilities and storage durability timescales.

This figure – adapted from the State of Carbon Dioxide Removal report – shows methods that are already deployed or analysed in research literature. They are categorized as either “conventional”, processes that are widely established and deployed at scale; or “novel”, those that are at a lower level of readiness and therefore only used on smaller scales. The figure also rates their Technology Readiness Level (TRL), maximum mitigation potential (how many gigatonnes (10⁹ tonnes) of CO2 can be sequestered per year), and storage timescale.

The report defines each technique as follows:

  • Afforestation – Conversion to forest of land that was previously not forest.
  • Reforestation – Conversion to forest of land that was previously deforested.
  • Agroforestry – Growing trees on agricultural land while maintaining agricultural production.
  • Forest management – Stewardship and use of existing forests. To count as carbon dioxide removal (CDR), forest management practices must enhance the long-term average carbon stock in the forest system.
  • Peatland and coastal wetland restoration – Assisted recovery of inland ecosystems that are permanently or seasonally flooded or saturated by water (such as peatlands) and of coastal ecosystems (such as tidal marshes, mangroves and seagrass meadows). To count as CDR, this recovery must lead to a durable increase in the carbon content of these systems.
  • Durable wood products – Wood products which meet a given threshold of durability, typically used in construction. These can include sawn wood, wood panels and composite beams, but exclude less durable products such as paper.
  • Biochar – Relatively stable, carbon-rich material produced by heating biomass in an oxygen-limited environment. Assumed to be applied as a soil amendment unless otherwise stated.
  • Mineral products – Production of solid carbonate materials for use in products such as aggregates, asphalt, cement and concrete, using CO2 captured from the atmosphere.
  • Enhanced rock weathering – Increasing the natural rate of removal of CO2 from the atmosphere by applying crushed rocks, rich in calcium and magnesium, to soil or beaches.
  • Biomass burial – Burial of biomass in land sites such as soils or exhausted mines. Excludes storage in the typical geological formations associated with carbon capture and storage (CCS).
  • Bio-oil storage – Oil made by biomass conversion and placed into geological storage.
  • Bioenergy with carbon capture and storage – Process by which biogenic CO2 is captured from a bioenergy facility, with subsequent geological storage.
  • Direct air carbon capture and storage – Chemical process by which CO2 is captured from the ambient air, with subsequent geological storage.
  • Ocean fertilization – Enhancement of nutrient supply to the near-surface ocean with the aim of sequestering additional CO2 from the atmosphere stimulated through biological production. Methods include direct addition of micro-nutrients or macro-nutrients. To count as CDR, the biomass must reach the deep ocean where the carbon has the potential to be sequestered durably.
  • Ocean alkalinity enhancement – Spreading of alkaline materials on the ocean surface to increase the alkalinity of the water and thus increase ocean CO2 uptake.
  • Biomass sinking – Sinking of terrestrial (e.g. straw) or marine (e.g. macroalgae) biomass in the marine environment. To count as CDR, the biomass must reach the deep ocean where the carbon has the potential to be sequestered durably.
  • Direct ocean carbon capture and storage – Chemical process by which CO2 is captured directly from seawater, with subsequent geological storage. To count as CDR, this capture must lead to increased ocean CO2 uptake.

The 3775-year-old log shows that carbon can be stored for centuries underground, but the wood has to be buried under specific conditions. “People tend to think, ‘Who doesn’t know how to dig a hole and bury some wood?’” Zeng says. “But think about how many wooden coffins were buried in human history. How many of them survived? For a timescale of hundreds or thousands of years, we need the right conditions.”

The key for scientists seeking to test biomass burial is to create dry, low-oxygen environments, similar to those in the Quebec clay soil. Last year, for example, Crotty and her colleagues dug more than 100 pits at a site in Colorado, in the US, filled them with woody material and then covered them up again. In five years’ time they plan to dig the biomass back out of the pits to see how much it has decomposed.

The pits vary in depth, and have been refilled and packed in different ways, to test how their build impacts carbon storage. The researchers will also be calculating the carbon emissions of processes such as transporting and burying the biomass – including the amount of carbon released from the soil when the pits are dug. “What we are trying to do here is build an understanding of what works and what doesn’t, but also how we can measure, report and verify that what we are doing is truly carbon negative,” Crotty says.

Over the next five years the team will continuously measure surface CO2 and methane fluxes from several of the pits, while every pit will have its CO2 and methane emissions measured monthly. There are also moisture sensors and oxygen probes buried in the pits, plus a full weather station on the site.

Crotty says that all this data will allow them to assess how different depths, packing styles and the local environment alter conditions in the chambers. When the samples are excavated in five years, the researchers will also explore what types of decomposition the burial did and did not suppress. This will include tests to identify different fungal and bacterial signatures, to uncover the micro-organisms involved in any decay.

The big questions

Experiments like Crotty’s will help answer one of the key concerns about terrestrial storage of biomass: how long can the carbon be stored?

In 2023 a team led by Lawrence Livermore National Laboratory (LLNL) did a large-scale analysis of the potential for CO2 removal in the US. The resulting Road to Removal report outlined how CO2 removal could be used to help the US achieve its net zero goals (these have since been revoked by the Trump administration), focusing on techniques like direct air capture (DAC), increasing carbon uptake in forests and agricultural lands, and converting waste biomass into fuels and CO2.

The report did not, however, look at biomass burial. One of the report authors, Sarah Baker – an expert in decarbonization and CO2 removal at LLNL – told Physics World that this was because of a lack of evidence around the durability of the carbon stored. The report’s minimum requirement for carbon storage was at least 100 years, and there were not enough data available to show how much carbon stored in biomass would remain after that period, Baker explains.

The US Department of Energy is also working to address this question. It has funded a set of projects, which Baker is involved with, to bridge some of the knowledge gaps on carbon-removal pathways. This includes one led by the National Renewable Energy Lab, measuring how long carbon in buried biomass remains stored under different conditions.

Bury the problem

Crotty’s Colorado experiment is also addressing another question: are all forms of biomass equally appropriate for burial? To test this, Crotty’s team filled the pits with a range of woody materials, including different types of wood and wood chip as well as compressed wood, and “slash” – small branches, leaves, bark and other debris created by logging and other forestry work.

Indeed, Crotty and her colleagues see biomass storage as crucial for those managing our forests. The western US states, in particular, have seen an increased risk of wildfires through a mix of climate change and aggressive fire-suppression policies that do not allow smaller fires to burn and thereby produce overgrown forests. “This has led to a build-up of fuels across the landscape,” Crotty says. “So, in a forest that would typically have a high number of low-severity fires, it’s changed the fire regime into a very high-intensity one.”

These concerns led the US Forest Service to announce a 10-year wildfire crisis plan in 2022 that seeks to reduce the risk of fires by thinning and clearing 50 million acres of forest land, in addition to 20 million acres already slated for treatment. But this creates a new problem.

“There are currently very few markets for the types of residues that need to come out of these forests – it is usually small-diameter, low-value timber,” explains Crotty. “They typically can’t pay their way out of the forests, so business as usual in many areas is to simply put them in a pile and burn them.”

Cheap but costly Typically, waste biomass from forest management is burnt, like this pile of slash at the edge of Coconino National Forest in Arizona – but doing so releases carbon dioxide. (Courtesy: Josh Goldstein/Coconino National Forest)

A recent study Crotty co-authored suggests that every year “pile burning” in US National Forests emits greenhouse gases equivalent to almost two million tonnes of CO2, and more than 11 million tonnes of fine particulate matter – air pollution that is linked to a range of health problems. Conservative estimates by the Carbon Containment Lab indicate that the material scheduled for clearance under the Forest Service’s 10-year crisis plan will contain around two gigatonnes (Gt) of CO2 equivalents. This is around 5% of current annual global CO2 emissions.

There are also cost implications. Crotty’s recent analysis found that piling and burning forest residue costs around $700 to $1300 per acre. By adding value to the carbon in the forest residues and keeping it out of the atmosphere, biomass storage may offer a solution to these issues, Crotty says.

As an incentive to remove carbon from the atmosphere, trading mechanisms exist whereby individuals, companies and governments can buy and sell carbon emissions. In essence, carbon has a price attached to it, meaning that someone who has emitted too much, say, can pay someone else to capture and store the equivalent amount of emissions, with an often-touted figure being $100 per tonne of CO2 stored. For a long time, this has been seen as the price at which carbon capture becomes affordable, enabling scale up to the volumes needed to tackle climate change.

“There is only so much capital that we will ever deploy towards [carbon removal] and thus the cheaper the solution, the more credits we’ll be able to generate, the more carbon we will be able to remove from the atmosphere,” explains Justin Freiberg, a managing director of the Carbon Containment Lab. “$100 is relatively arbitrary, but it is important to have a target and aim low on pricing for high quality credits.”

DAC has not managed to reach this magical price point. Indeed, the Swiss firm Climeworks – which is one of the biggest DAC companies – has stated that its costs might be around $300 per tonne by 2030.

A tomb in a mine

Another carbon-removal company, however, claims it has hit this benchmark using biomass burial. “We’re selling our first credits at $100 per tonne,” says Hannah Murnen, chief technology officer at Graphyte – a US firm backed by Bill Gates.

Graphyte is confident that there is significant potential in biomass burial. Based in Pine Bluff, Arkansas, the firm dries and compresses waste biomass into blocks before storage. “We dry it to below a level at which life can exist,” says Murnen, which effectively halts decomposition.

The company claims that it will soon be storing 50,000 tonnes of CO2 per year and is aiming for five million tonnes per year by 2030. Murnen acknowledges that these are “really significant figures”, particularly compared with what has been achieved in carbon capture so far. Nevertheless, she adds, if you look at the targets around carbon capture “this is the type of scale we need to get to”.

The need for carbon capture

The Intergovernmental Panel on Climate Change says that carbon capture is essential to limit global warming to 1.5 °C above pre-industrial levels.

To stay within the Paris Agreement’s climate targets, the 2024 State of Carbon Dioxide Removal report estimated that 7–9 gigatonnes (Gt) of CO2 removal will be needed annually by 2050. According to the report – which was put together by multiple institutions, led by the University of Oxford – currently two billion tonnes of CO2 are being removed per year, mostly through “conventional” methods like tree planting and wetland restoration. “Novel” methods – such as direct air capture (DAC), bioenergy with carbon capture, and ocean alkalinity enhancement – contribute 1.3 million tonnes of CO₂ removal per year, less than 0.1% of the total.

Graphyte is currently working with sawmill residue and rice hulls, but in the future Murnen says it plans to accept all sorts of biomass waste. “One of the great things about biomass for the purpose of carbon removal is that, because we are not doing any sort of chemical transformation on the biomass, we’re very flexible to the type of biomass,” Murnen adds.

And there appears to be plenty available. Estimates by researchers in the UK and India (NPJ Climate and Atmospheric Science 2 35) suggest that every year around 140 Gt of biomass waste is generated globally from forestry and agriculture. Around two-thirds of the agricultural residues are from cereals, like wheat, rice, barley and oats, while sugarcane stems and leaves are the second largest contributors. The rest is made up of things like leaves, roots, peels and shells from other crops. Like forest residues, much of this waste ends up being burnt or left to rot, releasing its carbon.

Currently, Graphyte has one storage site about 30 km from Pine Bluff, where its compressed biomass blocks are stored underground, enclosed in an impermeable layer that prevents water ingress. “We took what used to be an old gravel mine – so basically a big hole in the ground – and we’ve created a lined storage tomb where we are placing the biomass and then sealing it closed,” says Murnen.

Big hole in the ground Graphyte is using an old gravel mine 30 km from Pine Bluff in Arkansas to store its compressed biomass bricks. (Courtesy: Graphyte)

Once sealed, Graphyte monitors the CO2 and methane concentrations in the headspace of the vaults, to check for any decomposition of the biomass. The company also analyses biomass as it enters the facility, to track how much carbon it is storing. Wood residues, like sawmill waste are generally around 50% carbon, says Murnen, but rice hulls are closer to 35% carbon.
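
Those carbon fractions translate directly into CO2-equivalent figures, because each tonne of carbon kept out of the atmosphere corresponds to 44/12 ≈ 3.67 tonnes of CO2 (the ratio of molar masses). A small sketch with illustrative tonnages (not Graphyte’s actual throughput):

```python
CO2_PER_TONNE_CARBON = 44.0 / 12.0   # molar mass ratio of CO2 to carbon

def co2_equivalent(biomass_tonnes: float, carbon_fraction: float) -> float:
    """Tonnes of CO2 kept out of the atmosphere by burying dry biomass."""
    return biomass_tonnes * carbon_fraction * CO2_PER_TONNE_CARBON

# Illustrative feedstocks: 1000 t of sawmill residue (~50% C) and rice hulls (~35% C)
for name, frac in (("sawmill residue", 0.50), ("rice hulls", 0.35)):
    print(f"1000 t of {name}: ~{co2_equivalent(1000, frac):.0f} t CO2 equivalent")
# ~1833 t and ~1283 t respectively - which is why the carbon content of each
# incoming load matters for the credits a storage site can claim.
```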

Graphyte is confident that its storage is physically robust and could avoid any re-emission for what Murnen calls “a very long period of time”. However, it is also exploring how to prevent accidental disturbance of the biomass in the future – possibly long after the company ceases to exist. One option is to add a conservation easement to the site, a well-established US legal mechanism for adding long-term protection to land.

“We feel pretty strongly that the way we are approaching [carbon removal] is one of the most scalable ways,” Murnen says. “In as far as impediments or barriers to scale, we have a much easier permitting pathway, we don’t need pipelines, we are pretty flexible on the type of land that we can use for our storage sites, and we have a huge variety of feedstocks that we can take into the process.”

A simple solution

Back at LLNL, Baker says that although she hasn’t “run the numbers”, and there are a lot of caveats, she suspects that biomass burial is “true carbon removal because it is so simple”.

Once associated upstream and downstream emissions are taken into account, many techniques that people call carbon removal are probably not, she says, because they emit more fossil CO2 than they store.

Biomass burial is also cheap. As the Road to Removal analysis found, “thermal chemical” techniques, like pyrolysis, have great potential for removing and storing carbon while converting biomass into hydrogen and sustainable aviation fuel. But they require huge investment, with larger facilities potentially costing hundreds of millions of dollars. Biomass burial could even act as temporary storage until facilities are ready to convert the carbon into sustainable fuels. “Buy ourselves time and then use it later,” says Baker.

Either way, biomass burial has great potential for the future of carbon storage, and therefore our environment. “The sooner we can start doing these things the greater the climate impact,” Baker says.

We just need to know that the storage is durable – and if that 3775-year-old log is any indication, there’s the potential to store biomass for hundreds, maybe thousands of years.


Andromeda galaxy may not collide with the Milky Way after all

2 June 2025 at 17:00

Since 1912, we’ve known that the Andromeda galaxy is racing towards our own Milky Way at about 110 kilometres per second. A century later, in 2012, astrophysicists at the Space Telescope Science Institute (STScI) in Maryland, US came to a striking conclusion. In four billion years, they predicted, a collision between the two galaxies was a sure thing.

Now, it’s not looking so sure.

Using the latest data from the European Space Agency’s Gaia astrometric mission, astrophysicists led by Till Sawala of the University of Helsinki, Finland re-modelled the impending crash, and found that it’s 50/50 as to whether a collision happens or not.

This new result differs from the 2012 one because it considers the gravitational effect of an additional galaxy, the Large Magellanic Cloud (LMC), alongside the Milky Way, Andromeda and the nearby Triangulum spiral galaxy, M33. While M33’s gravity, in effect, adds to Andromeda’s motion towards us, Sawala and colleagues found that the LMC’s gravity tends to pull the Milky Way out of Andromeda’s path.

“We’re not predicting that the merger is not going to happen within 10 billion years, we’re just saying that from the data we have now, we can’t be certain of it,” Sawala tells Physics World.

“A step in the right direction”

While the LMC contains only around 10% of the Milky Way’s mass, Sawala and colleagues’ work indicates that it may nevertheless be massive enough to turn a head-on collision into a near-miss. Incorporating its gravitational effects into simulations is therefore “a step in the right direction”, says Sangmo Tony Sohn, a support scientist at the STScI and a co-author of the 2012 paper that predicted a collision.

Even with more detailed simulations, though, uncertainties in the motion and masses of the galaxies leave room for a range of possible outcomes. According to Sawala, the uncertainty with the greatest effect on merger probability lies in the so-called “proper motion” of Andromeda, which is its motion as it appears on our night sky. This motion is a mixture of Andromeda’s radial motion towards the centre of the Milky Way and the two galaxies’ transverse motion perpendicular to the line of sight.
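
To see why this measurement is so delicate, the transverse velocity implied by a proper motion µ at distance d follows the standard conversion v_t ≈ 4.74 µ[arcsec/yr] d[pc] km/s. The sketch below plugs in a purely illustrative proper-motion value (not the Gaia or Hubble result) to show the scale involved:

```python
# v_t [km/s] = 4.74 * mu [arcsec/yr] * d [pc]  (standard proper-motion conversion)
K = 4.74

d_pc = 2.5e6 * 0.3066          # Andromeda: ~2.5 million light-years -> parsecs
mu_arcsec_per_yr = 30e-6       # assumed 30 microarcsec/yr, illustrative only

v_t = K * mu_arcsec_per_yr * d_pc
print(f"distance ≈ {d_pc:.3e} pc, transverse velocity ≈ {v_t:.0f} km/s")
# A few tens of microarcseconds per year at ~770 kpc corresponds to a transverse
# speed of order 100 km/s - tiny shifts on the sky translate into big changes
# in whether the closest approach falls inside or outside ~200 kpc.
```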

If the combined transverse motion is large enough, Andromeda will pass the Milky Way at a distance greater than 200 kiloparsecs (652,000 light years). This would avert a collision in the next 10 billion years, because even when the two galaxies loop back on each other, their next pass would still be too distant, according to the models.

Conversely, a smaller transverse motion would limit the distance at closest approach to less than 200 kiloparsecs. If that happens, Sawala says the two galaxies are “almost certain to merge” because of the dynamical friction effect, which arises from the diffuse halo of old stars and dark matter around galaxies. When two galaxies get close enough, these haloes begin interacting with each other, generating tidal and frictional heating that robs the galaxies of orbital energy and makes them fall ever closer.

The LMC itself is an excellent example of how this works. “The LMC is already so close to the Milky Way that it is losing its orbital energy, and unlike [Andromeda], it is guaranteed to merge with the Milky Way,” Sawala says, adding that, similarly, M33 stands a good chance of merging with Andromeda.

“A very delicate task”

Because Andromeda is 2.5 million light years away, its proper motion is very hard to measure. Indeed, no-one had ever done it until the STScI team spent 10 years monitoring the galaxy, which is also known as M31, with the Hubble Space Telescope – something Sohn describes as “a very delicate task” that continues to this day.

Another area where there is some ambiguity is in the mass estimate of the LMC. “If the LMC is a little more massive [than we think], then it pulls the Milky Way off the collision course with M31 a little more strongly, reducing the possibility of a merger between the Milky Way and M31,” Sawala explains.

The good news is that these ambiguities won’t be around forever. Sohn and his team are currently analysing new Hubble data to provide fresh constraints on the Milky Way’s orbital trajectory, and he says their results have been consistent with the Gaia analyses so far. Sawala agrees that new data will help reduce uncertainties. “There’s a good chance that we’ll know more about what is going to happen fairly soon, within five years,” he says.

Even if the Milky Way and Andromeda don’t collide in the next 10 billion years, though, that won’t be the end of the story. “I would expect that there is a very high probability that they will eventually merge, but that could take tens of billions of years,” Sawala says.

The research is published in Nature Astronomy.


Thinking of switching research fields? Beware the citation ‘pivot penalty’ revealed by new study

2 June 2025 at 14:00

Scientists who switch research fields suffer a drop in the impact of their new work – a so-called “pivot penalty”. That is according to a new analysis of scientific papers and patents, which finds that the pivot penalty increases the further away a researcher shifts from their previous topic of research.

The analysis has been carried out by a team led by Dashun Wang and Benjamin Jones of Northwestern University in Illinois. They analysed more than 25 million scientific papers published between 1970 and 2015 across 154 fields as well as 1.7 million US patents across 127 technology classes granted between 1985 and 2020.

To identify pivots and quantify how far a scientist moves from their existing work, the team looked at the scientific journals referenced in a paper and compared them with those cited by previous work. The more the set of journals referenced in the main work diverged from those usually cited, the larger the pivot. For patents, the researchers used “technological field codes” to measure pivots.
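
As a concrete, simplified picture of such a measure, the divergence between the journal set cited by a new paper and the set cited by an author’s earlier work can be scored with something like a Jaccard distance. The sketch below is an illustrative stand-in, not necessarily the exact metric used in the study:

```python
def pivot_score(new_refs: set[str], prior_refs: set[str]) -> float:
    """1 - Jaccard similarity between journal sets: 0 = no pivot, 1 = maximal pivot."""
    if not new_refs and not prior_refs:
        return 0.0
    overlap = len(new_refs & prior_refs)
    union = len(new_refs | prior_refs)
    return 1.0 - overlap / union

prior = {"Phys. Rev. B", "Nature Physics", "PRL"}
small_shift = {"Phys. Rev. B", "PRL", "Nano Letters"}
big_shift = {"The Lancet", "J. Virology", "Nature Medicine"}

print(f"small pivot: {pivot_score(small_shift, prior):.2f}")  # modest pivot
print(f"large pivot: {pivot_score(big_shift, prior):.2f}")    # maximal pivot
```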

Larger pivots are associated with fewer citations and a lower propensity for high-impact papers, defined as those in the top 5% of citations received in their field and publication year. Low-pivot work – moving only slightly away from the typical field of research – led to a high-impact paper 7.4% of the time, yet the highest-pivot shift resulted in a high-impact paper only 2.2% of the time. A similar trend was seen for patents.

When looking at the output of an individual researcher, low-pivot work was 2.1% more likely to have a high-impact paper while high-pivot work was 1.8% less likely to do so. The study found the pivot penalty to be almost universal across scientific fields and it persists regardless of a scientist’s career stage, productivity and collaborations.

COVID impact

The researchers also studied the impact of COVID-19, when many researchers pivoted to research linked to the pandemic. After analyzing 83 000 COVID-19 papers and 2.63 million non-COVID papers published in 2020, they found that COVID-19 research was not immune to the pivot penalty. Such research had a higher impact than average, but the further a scientist shifted from their previous work to study COVID-19 the less impact the research had.

“Shifting research directions appears both difficult and costly, at least initially, for individual researchers,” Wang told Physics World. He thinks, however, that researchers should not avoid change but rather “approach it strategically”. Researchers should, for example, try anchoring their new work in the conventions of their prior field or the one they are entering.

To help researchers pivot, Wang says research institutions should “acknowledge the friction” and not “assume that a promising researcher will thrive automatically after a pivot”. Instead, he says, institutions need to design support systems, such as funding or protected time to explore new ideas, or pairing researchers with established scholars in the new field.


Majorana bound states spotted in system of three quantum dots

31 May 2025 at 12:46

Firm evidence of Majorana bound states in quantum dots has been reported by researchers in the Netherlands. Majorana modes appeared at both edges of a quantum dot chain when an energy gap suppressed them in the centre, and the experiment could allow researchers to investigate the unique properties of these particles in hitherto unprecedented detail. This could bring topologically protected quantum bits (qubits) for quantum computing one step closer.

Majorana fermions were first proposed in 1937 by the Italian physicist Ettore Majorana. They were imagined as elementary particles that would be their own antiparticles. However, such elementary particles have never been definitively observed. Instead, physicists have worked to create Majorana quasiparticles (particle-like collective excitations) in condensed matter systems.

In 2001, the theoretical physicist Alexei Kitaev, then at Microsoft Research, proposed that “Majorana bound states” could be produced in nanowires comprising topological superconductors. The Majorana quasiparticle would exist as a single nonlocal mode at either end of a wire, while being zero-valued in the centre. Both ends would be constrained by the laws of physics to remain identical despite being spatially separated. This phenomenon could produce “topological qubits” robust to local disturbance.

Microsoft and others continue to research Majorana modes using this platform to this day. Multiple groups claim to have observed them, but this remains controversial. “It’s still a matter of debate in these extended 1D systems: have people seen them? Have they not seen them?” says Srijit Goswami of QuTech in Delft.

Controlling disorder

In 2012, theoretical physicists Jay Sau, then of Harvard University, and Sankar Das Sarma of the University of Maryland proposed looking for Majorana bound states in quantum dots. “We looked at [the nanowires] and thought ‘OK, this is going to be a while given the amount of disorder that system has – what are the ways this disorder could be controlled?’ and this is exactly one of the ways we thought it could work,” explains Sau. The research was not taken seriously at the time, however, Sau says, partly because people underestimated the problem of disorder.

Goswami and others have previously observed “poor man’s Majoranas” (PMMs) in two quantum dots. While they share some properties with Majorana modes, PMMs lack topological protection. Last year the group coupled two spin-polarized quantum dots connected by a semiconductor–superconductor hybrid material. At specific points, the researchers found zero-bias conductance peaks.

“Kitaev says that if you tune things exactly right you have one Majorana on one dot and another Majorana on another dot,” says Sau. “But if you’re slightly off then they’re talking to each other. So it’s an uncomfortable notion that they’re spatially separated if you just have two dots next to each other.”

Recently, a group that included Goswami’s colleagues at QuTech found that the introduction of a third quantum dot stabilized the Majorana modes. However, they were unable to measure the energy levels in the quantum dots.

Zero energy

In new work, Goswami’s team used systems of three electrostatically-gated, spin-polarized quantum dots in a 2D electron gas joined by hybrid semiconductor–superconductor regions. The quantum dots had to be tuned to zero energy. The dots exchanged charge in two ways: by standard electron hopping through the semiconductor and by Cooper-pair mediated coupling through the superconductor.

“You have to change the energy level of the superconductor–semiconductor hybrid region so that these two processes have equal probability,” explains Goswami. “Once you satisfy these conditions, then you get Majoranas at the ends.”

Besides providing more topological protection, the addition of a third quantum dot gave the team crucial physical insight. “Topology is actually a property of a bulk system,” he explains. “Something special happens in the bulk which gives rise to things happening at the edges. Majoranas are something that emerge on the edges because of something happening in the bulk.” With three quantum dots, there is a well-defined bulk and edge that can be probed separately: “We see that when you have what is called a gap in the bulk your Majoranas are protected, but if you don’t have that gap your Majoranas are not protected,” Goswami says.
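
The underlying idea can be illustrated with the minimal three-site Kitaev chain that such an experiment approximates. The toy numerical sketch below uses made-up parameters at the idealized “sweet spot” (hopping equal to pairing, zero chemical potential); it is a cartoon of the physics, not a simulation of the actual device:

```python
import numpy as np

# Three-site Kitaev chain at the sweet spot: hopping t equals pairing delta, mu = 0.
N, t, delta, mu = 3, 1.0, 1.0, 0.0

h = np.zeros((N, N))          # normal (hopping) block
d = np.zeros((N, N))          # pairing block (antisymmetric)
for i in range(N - 1):
    h[i, i + 1] = h[i + 1, i] = -t
    d[i, i + 1], d[i + 1, i] = delta, -delta
np.fill_diagonal(h, -mu)

# Bogoliubov-de Gennes matrix in the basis (c_1..c_3, c_1^dag..c_3^dag)
H = np.block([[h, d], [-d, -h]])

energies, modes = np.linalg.eigh(H)
print("BdG energies:", np.round(energies, 6))   # a +/- pair pinned at zero

# Site-resolved weight of one (near-)zero mode: particle + hole components
zero = modes[:, np.argmin(np.abs(energies))]
weight = zero[:N] ** 2 + zero[N:] ** 2
print("zero-mode weight per dot:", np.round(weight, 6))
# The weight sits on the two end dots and vanishes on the middle one -
# the edge-localized Majorana pair, protected as long as the bulk gap survives.
```

Moving away from these sweet-spot values splits the zero-energy pair and leaks weight onto the middle dot, the toy-model analogue of losing the protection Goswami describes.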

To produce a qubit will require more work to achieve the controllable coupling of four Majorana bound states and the integration of a readout circuit to detect this coupling. In the near-term, the researchers are investigating other phenomena, such as the potential to swap Majorana bound states.

Sau is now at the University of Maryland and says that an important benefit of the experimental platform is that it can be determined unambiguously whether or not Majorana bound states have been observed. “You can literally put a theory simulation next to the experiment and they look very similar.”

The research is published in Nature.


How magnetar flares give birth to gold and platinum

30 May 2025 at 10:21

Powerful flares on highly-magnetic neutron stars called magnetars could produce up to 10% of the universe’s gold, silver and platinum, according to a new study. What is more, astronomers may have already observed this cosmic alchemy in action.

Gold, silver, platinum and a host of other rare heavy nuclei are known as rapid-process (r-process) elements. This is because astronomers believe that these elements are produced by the rapid capture of neutrons by lighter nuclei. Neutrons can only exist outside of an atomic nucleus for about 15 min before decaying (except in the most extreme environments). This means that the r-process must be fast and take place in environments rich in free neutrons.

In August 2017, an explosion resulting from the merger of two neutron stars was witnessed by telescopes operating across the electromagnetic spectrum and by gravitational-wave detectors. Dubbed a kilonova, the explosion produced approximately 16,000 Earth-masses worth of r-process elements, including about ten Earth masses of gold and platinum.

While the observations seem to answer the question of where precious metals came from, there remains a suspicion that neutron-star mergers cannot explain the entire abundance of r-process elements in the universe.

Giant flares

Now researchers led by Anirudh Patel, who is a PhD student at New York’s Columbia University, have created a model that describes how flares on the surface of magnetars can create r-process elements.

Patel tells Physics World that “The rate of giant flares is significantly greater than mergers.” However, given that one merger “produces roughly 10,000 times more r-process mass than a single magnetar flare”, neutron-star mergers are still the dominant factory of rare heavy elements.

A magnetar is an extreme type of neutron star with a magnetic field strength of up to a thousand trillion gauss. This makes magnetars the most magnetic objects in the universe. Indeed, if a magnetar were as close to Earth as the Moon, its magnetic field would wipe your credit card.

Astrophysicists believe that when a magnetar’s powerful magnetic fields are pulled taut, the magnetic tension will inevitably snap. This would result in a flare, which is an energetic ejection of neutron-rich material from the magnetar’s surface.

Mysterious mechanism

However, the physics isn’t entirely understood, according to Jakub Cehula of Charles University in the Czech Republic, who is a member of Patel’s team. “While the source of energy for a magnetar’s giant flares is generally agreed to be the magnetic field, the exact mechanism by which this energy is released is not fully understood,” he explains.

One possible mechanism is magnetic reconnection, which creates flares on the Sun. Flares could also be produced by energy released during starquakes following a build-up of magnetic stress. However, neither satisfactorily explains the giant flares, of which only nine have thus far been detected.

In 2024 Cehula led research that attempted to explain the flares by combining starquakes with magnetic reconnection. “We assumed that giant flares are powered by a sudden and total dissipation of the magnetic field right above a magnetar’s surface,” says Cehula.

This sudden release of energy drives a shockwave into the magnetar’s neutron-rich crust, blasting a portion of it into space at velocities greater than a tenth of the speed of light, where in theory heavy elements are formed via the r-process.

Gamma-ray burst

Remarkably, astronomers may have already witnessed this in 2004, when a giant magnetar flare was spotted as a half-second gamma-ray burst that released more energy than the Sun does in a million years. What happened next remained unexplained until now. Ten minutes after the initial burst, the European Space Agency’s INTEGRAL satellite detected a second, weaker signal that was not understood.

Now, Patel and colleagues have shown that the r-process in this flare created unstable isotopes that quickly decayed into stable heavy elements – creating the gamma-ray signal.

Patel calculates that the 2004 flare resulted in the creation of two million billion billion kilograms of r-process elements, equivalent to about the mass of Mars.

Extrapolating, Patel calculates that giant flares on magnetars contribute between 1 and 10% of all the r-process elements in the universe.
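As a rough plausibility check, the figures quoted in this article can be combined in a few lines of Python. This is an illustrative back-of-envelope sketch, not a calculation from the paper; the 10,000-to-1 mass ratio and the 1–10% contribution come from the text above, while the assumption that mergers supply essentially all of the remainder is mine.

# Back-of-envelope consistency check (illustrative, using only the figures quoted above):
# one merger yields roughly 10,000 times the r-process mass of a single giant flare,
# so for flares to supply a fraction f of the total (with mergers supplying essentially
# all the rest) the Galactic flare rate must exceed the merger rate by ~10,000 * f / (1 - f).
mass_ratio = 1e4  # (r-process mass per merger) / (mass per giant flare)
for f in (0.01, 0.10):
    rate_ratio = mass_ratio * f / (1 - f)
    print(f"flares supply {f:.0%} -> flare rate ~{rate_ratio:.0f}x the merger rate")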

Lots of magnetars

“This estimate accounts for the fact that these giant flares are rare,” he says, “But it’s also important to note that magnetars have lifetimes of 1000 to 10,000 years, so while there may only be a couple of dozen magnetars known to us today, there have been many more magnetars that have lived and died over the course of the 13 billion-year history of our galaxy.”

Magnetars would have been produced early in the universe by the supernovae of massive stars, whereas it can take a billion years or longer for two neutron stars to merge. Hence, magnetars would have been a more dominant source of r-process elements in the early universe. However, they may not have been the only source.

“If I had to bet, I would say there are other environments in which r-process elements can be produced, for example in certain rare types of core-collapse supernovae,” says Patel.

Either way, it means that some of the gold and silver in your jewellery was forged in the violence of immense magnetic fields snapping on a dead star.

The research is described in Astrophysical Journal Letters.

The post How magnetar flares give birth to gold and platinum appeared first on Physics World.

Shengxi Huang: how defects can boost 2D materials as single-photon emitters

28 mai 2025 à 17:01
Photo of researchers in a lab at Rice University.
Hidden depths Shengxi Huang (left) with members of her lab at Rice University in the US, where she studies 2D materials as single-photon sources. (Courtesy: Jeff Fitlow)

Everyday life is three dimensional, with even a sheet of paper having a finite thickness. Shengxi Huang from Rice University in the US, however, is attracted by 2D materials, which are usually just one atomic layer thick. Graphene is perhaps the most famous example — a single layer of carbon atoms arranged in a hexagonal lattice. But since it was first created in 2004, all sorts of other 2D materials, notably boron nitride, have been created.

An electrical engineer by training, Huang did a PhD at the Massachusetts Institute of Technology and postdoctoral research at Stanford University before spending five years as an assistant professor at the Pennsylvania State University. Huang has been at Rice since 2022, where she is now an associate professor in the Department of Electrical and Computer Engineering, the Department of Materials Science and NanoEngineering, and the Department of Bioengineering.

Her group at Rice currently has 12 people, including eight graduate students and four postdocs. Some are physicists, some are engineers, while others have backgrounds in materials science or chemistry. But they all share an interest in understanding the optical and electronic properties of quantum materials and seeing how they can be used, for example, as biochemical sensors. Lab equipment from PicoQuant plays a vital role in that quest, as Huang explains in an interview with Physics World.

Why are you fascinated by 2D materials?

I’m an electrical engineer by training, which is a very broad field. Some electrical engineers focus on things like communication and computing, but others, like myself, are more interested in how we can use fundamental physics to build useful devices, such as semiconductor chips. I’m particularly interested in using 2D materials for optoelectronic devices and as single-photon emitters.

What kinds of 2D materials do you study?

The materials I am particularly interested in are transition metal dichalcogenides, which consist of a layer of transition-metal atoms sandwiched between two layers of chalcogen atoms – sulphur, selenium or tellurium. One of the most common examples is molybdenum disulphide, which in its monolayer form has a layer of sulphur on either side of a layer of molybdenum. In multi-layer molybdenum disulphide, the van der Waals forces between the tri-layers are relatively weak, meaning that the material is widely used as a lubricant – just like graphite, which is a many-layer version of graphene.

Why do you find transition metal dichalcogenides interesting?

Transition metal dichalcogenides have some very useful optoelectronic properties. In particular, they emit light whenever the electron and hole that make up an “exciton” recombine. Now because these dichalcogenides are so thin, most of the light they emit can be used. In a 3D material, in contrast, most light is generated deep in the bulk of the material and doesn’t penetrate beyond the surface. Such 2D materials are therefore very efficient and, what’s more, can be easily integrated onto chip-based devices such as waveguides and cavities.

Transition metal dichalcogenide materials also have promising electronic applications, particularly as the active material in transistors. Over the years, we’ve seen silicon-based transistors get smaller and smaller as we’ve followed Moore’s law, but we’re rapidly reaching a limit where we can’t shrink them any further, partly because the electrons in very thin layers of silicon move so slowly. In 2D transition metal dichalcogenides, in contrast, the electron mobility can actually be higher than in silicon of the same thickness, making them a promising material for future transistor applications.

What can such sources of single photons be used for?

Single photons are useful for quantum communication and quantum cryptography. Carrying information as zeroes and ones, they basically function as qubits, providing a very secure communication channel. Single photons are also interesting for quantum sensing and even quantum computing. But it’s vital that you have a highly pure source of photons. You don’t want them mixed up with “classical photons”, which — like those from the Sun — are emitted in bunches, as otherwise the tasks you’re trying to perform cannot be completed.

What approaches are you taking to improve 2D materials as single-photon emitters?

What we do is introduce atomic defects into a 2D material to give it optical properties that are different to what you’d get in the bulk. There are several ways of doing this. One is to irradiate a sample with ions or electrons, which can knock individual atoms out to generate “vacancy defects”. Another option is to use plasmas, whereby atoms in the sample get replaced by atoms from the plasma.

So how do you study the samples?

We can probe defect emission using a technique called photoluminescence, which basically involves shining a laser beam onto the material. The laser excites electrons from the ground state to an excited state, prompting them to emit light. As the laser beam is about 500-1000 nm in diameter, we can see single photon emission from an individual defect if the defect density is suitable.
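A rough way to see what a “suitable” defect density means is to demand no more than one emitter, on average, inside the excitation spot. The short sketch below is my own arithmetic rather than a figure from the interview.

import numpy as np

# Rough estimate (my own arithmetic, not a figure from the interview): to address one
# defect at a time, the excitation spot should contain on average no more than a single
# emitter, which caps the usable defect density at roughly one to a few per square micron.
for d_nm in (500, 1000):
    spot_area_um2 = np.pi * (d_nm * 1e-3 / 2) ** 2  # spot area in square microns
    max_density = 1.0 / spot_area_um2               # at most ~1 defect per spot
    print(f"{d_nm} nm spot: area {spot_area_um2:.2f} um^2, "
          f"density below ~{max_density:.1f} defects/um^2")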

Photo of researchers in a lab at Rice University
Beyond the surface Shengxi Huang (second right) uses equipment from PicoQuant to probe 2D materials. (Courtesy: Jeff Fitlow)

What sort of experiments do you do in your lab?

We start by engineering our materials at the atomic level to introduce the correct type of defect. We also try to strain the material, which can increase how many single photons are emitted at a time. Once we’ve confirmed we’ve got the correct defects in the correct location, we check the material is emitting single photons by carrying out optical measurements, such as photoluminescence. Finally, we characterize the purity of our single photons – ideally, they shouldn’t be mixed up with classical photons but in reality, you never have a 100% pure source. As single photons are emitted one at a time, they have different statistical characteristics to classical light. We also check the brightness and lifetime of the source, the efficiency, how stable it is, and if the photons are polarized. In fact, we have a feedback loop: what improvements can we do at the atomic level to get the properties we’re after?

Is it difficult adding defects to a sample?

It’s pretty challenging. You want to add just one defect to an area that might be just one micron square so you have to control the atomic structure very finely. It’s made harder because 2D materials are atomically thin and very fragile. So if you don’t do the engineering correctly, you may accidentally introduce other types of defects that you don’t want, which will alter the defects’ emission.

What techniques do you use to confirm the defects are in the right place?

Because the defect concentration is so low, we cannot use methods that are typically used to characterise materials, such as X-ray photo-emission spectroscopy or scanning electron microscopy. Instead, the best and most practical way is to see if the defects generate the correct type of optical emission predicted by theory. But even that is challenging because our calculations, which we work on with computational groups, might not be completely accurate.

How do your PicoQuant instruments help in that regard?

We have two main pieces of equipment – a MicroTime 100 photoluminescence microscope and a FluoTime 300 spectrometer. These have been customized to form a Hanbury Brown Twiss interferometer, which measures the purity of a single-photon source. We also use the microscope and spectrometer to characterise the photoluminescence spectrum and lifetime. Essentially, if the material emits light, we can then work out how long it takes before the emission dies down.
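For readers unfamiliar with Hanbury Brown Twiss measurements, the sketch below shows schematically how a purity figure can be extracted from two lists of photon time tags. It is a toy illustration with simulated data, not PicoQuant's analysis software.

import numpy as np

# Minimal sketch of a Hanbury Brown Twiss analysis (hypothetical data): estimate g2(0)
# from the photon arrival times recorded at the two detector outputs. An ideal
# single-photon source never fires both detectors at once, so g2(0) approaches zero;
# Poissonian "classical" light gives g2(0) of about 1.
def g2_zero(t1, t2, window, total_time):
    """Coincidences within +/-window, normalised by the accidental-coincidence rate."""
    t2 = np.sort(t2)
    lo = np.searchsorted(t2, t1 - window)
    hi = np.searchsorted(t2, t1 + window)
    coincidences = np.sum(hi - lo)
    accidentals = len(t1) * len(t2) * (2 * window) / total_time
    return coincidences / accidentals

rng = np.random.default_rng(0)
T = 1.0                                  # one second of acquisition
ta = np.sort(rng.uniform(0, T, 50_000))  # time tags from detector A
tb = np.sort(rng.uniform(0, T, 50_000))  # time tags from detector B
print(g2_zero(ta, tb, window=5e-9, total_time=T))  # ~1 for uncorrelated light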

Did you buy the equipment off-the-shelf?

It’s more of a customised instrument with different components – lasers, microscopes, detectors and so on – connected together so we can do multiple types of measurement. I put in a request to PicoQuant, who discussed my requirements with me to work out how to meet my needs. The equipment has been very important for our studies as we can carry out high-throughput measurements over and over again. We’ve tailored it for our own research purposes basically.

So how good are your samples?

The best single-photon source that we currently work with is boron nitride, which has a single-photon purity of 98.5% at room temperature. In other words, for every 200 photons only three are classical. With transition-metal dichalcogenides, we get a purity of 98.3% at cryogenic temperatures.

What are your next steps?

There’s still lots to explore in terms of making better single-photon emitters and learning how to control them at different wavelengths. We also want to see if these materials can be used as high-quality quantum sensors. In some cases, if we have the right types of atomic defects, we get a high-quality source of single photons, which we can then entangle with their spin. The emitters can therefore monitor the local magnetic environment with better performance than is possible with classical sensing methods.

The post Shengxi Huang: how defects can boost 2D materials as single-photon emitters appeared first on Physics World.

No laughing matter: a comic book about the climate crisis

28 mai 2025 à 12:00
Comic depicting a parachutist whose chute is on fire and their thought process about not using their backup chute
Blunt message Anti-nuclear thinking is mocked in World Without End by Jean-Marc Jancovici and Christophe Blain. (Published by Particular Books. Illustration © DARGAUD — Blancovici & Blain)

Comics are regarded as an artform in France, where they account for a quarter of all book sales. Nevertheless, the graphic novel World Without End: an Illustrated Guide to the Climate Crisis was a surprise French bestseller when it first came out in 2022. Taking the form of a Socratic dialogue between French climate expert Jean-Marc Jancovici and acclaimed comic artist Christophe Blain, it’s serious, scientific stuff.

Now translated into English by Edward Gauvin, the book follows the conventions of French-language comic strips or bandes dessinées. Jancovici is drawn with a small nose – denoting seriousness – while Blain’s larger nose signals humour. The first half explores energy and consumption, with the rest addressing the climate crisis and possible solutions.

Overall, this is a Trojan horse of a book: what appears to be a playful comic is packed with dense, academic content. Though marketed as a graphic novel, it reads more like illustrated notes from a series of sharp, provocative university lectures. It presents a frightening vision of the future and the humour doesn’t always land.

The book spans a vast array of disciplines – not just science and economics but geography and psychology too. In fact, there’s so much to unpack that, had I Blain’s skills, I might have reviewed it in the form of a comic strip myself. The old adage that “a picture is worth a thousand words” has never rung more true.

Absurd yet powerful visual metaphors feature throughout. We see a parachutist with a flaming main chute that represents our dependence on fossil fuels. The falling man jettisons his reserve chute – nuclear power – and tries to knit an alternative using clean energy, mid-fall. The message is blunt: nuclear may not be ideal, but it works.

World Without End is bold, arresting, provocative and at times polemical.

The book is bold, arresting, provocative and at times polemical. Charts and infographics are presented to simplify complex issues, even if the details invite scrutiny. Explanations are generally clear and concise, though the author’s claim that accidents like Chernobyl and Fukushima couldn’t happen in France smacks of hubris.

Jancovici makes plenty of attention-grabbing statements. Some are sound, such as the notion that fossil fuels spared whales from extinction as we didn’t need this animal’s oil any more. Others are dubious – would a 4 °C temperature rise really leave a third of humanity unable to survive outdoors?

But Jancovici is right to say that the use of fossil fuels makes logical sense. Oil can be easily transported and one barrel delivers the equivalent of five years of human labour. A character called Armor Man (a parody of Iron Man) reminds us that fossil fuels are like having 200 mechanical slaves per person, equivalent to an additional 1.5 trillion people on the planet.

Fossil fuels brought prosperity – but now threaten our survival. For Jancovici, the answer is nuclear power, which is perhaps not surprising as it produces 72% of electricity in the author’s homeland. But he cherry picks data, accepting – for example – the United Nations figure that only about 50 people died from the Chernobyl nuclear accident.

While acknowledging that many people had to move following the disaster, the author downplays the fate of those responsible for “cleaning up” the site, the long-term health effects on the wider population and the staggering economic impact – estimated at €200–500bn. He also sidesteps nuclear-waste disposal and the cost and complexity of building new plants.

While conceding that nuclear is “not the whole answer”, Jancovici dismisses hydrogen and views renewables like wind and solar as too intermittent – they require batteries to ensure electricity is supplied on demand – and diffuse. Imagine blanketing the Earth in wind turbines.

Cartoon of a doctor and patient. The patient has increased their alcohol intake but also added in some healthy orange juice
Humorous point A joke from World Without End by Jean-Marc Jancovici and Christophe Blain. (Published by Particular Books. Illustration © DARGAUD — Blancovici & Blain)

Still, his views on renewables seem increasingly out of step. They now supply nearly 30% of global electricity – 13% from wind and solar, ahead of nuclear at 9%. Renewables also attract 70% of all new investment in electricity generation and (unlike nuclear) continue to fall in price. It’s therefore disingenuous of the author to say that relying on renewables would be like returning to pre-industrial life; today’s wind turbines are far more efficient than anything back then.

Beyond his case for nuclear, Jancovici offers few firm solutions. Weirdly, he suggests “educating women” and providing pensions in developing nations – to reduce reliance on large families – to stabilize population growth. He also cites French journalist Sébastien Bohler, who thinks our brains are poorly equipped to deal with long-term threats.

But he says nothing about the need for more investment in nuclear fusion or for “clean” nuclear fission via, say, liquid fluoride thorium reactors (LFTRs), which generate minimal waste, won’t melt down and cannot be weaponized.

Perhaps our survival depends on delaying gratification, resisting the lure of immediate comfort, and adopting a less extravagant but sustainable world. We know what changes are needed – yet we do nothing. The climate crisis is unfolding before our eyes, but we’re paralysed by a global-scale bystander effect, each of us hoping someone else will act first. Jancovici’s call for “energy sobriety” (consuming less) seems idealistic and futile.

Still, World Without End is a remarkable and deeply thought-provoking book that deserves to be widely read. I fear that it will struggle to replicate its success beyond France, though Raymond Briggs’ When the Wind Blows – a Cold War graphic novel about nuclear annihilation – was once a British bestseller. If enough people engaged with the book, it would surely spark discussion and, one day, even lead to meaningful action.

  • 2024 Particular Books £25.00hb 196pp

The post No laughing matter: a comic book about the climate crisis appeared first on Physics World.

The quantum eraser doesn’t rewrite the past – it rewrites observers

27 mai 2025 à 15:00

“Welcome to this special issue of Physics World, marking the 200th anniversary of quantum mechanics. In this double-quantum edition, the letters in this text are stored using qubits. As you read, you project the letters into a fixed state, and that information gets copied into your mind as the article that you are reading. This text is actually in a superposition of many different articles, but only one of them gets copied into your memory. We hope you enjoy the one that you are reading.”

That’s how I imagine the opening of the 2125 Physics World quantum special issue, when fully functional quantum computers are commonplace, and we have even figured out how to control individual qubits on display screens. If you are lucky enough to experience reading such a magazine, you might be disappointed as you can read only one of the articles the text gets projected into. The problem is that by reading the superposition of articles, you made them decohere, because you copied the information about each letter into your memory. Can you figure out a way to read the others too? After all, more Physics World articles is always better.

A possible solution might be to restore the coherence of the text by simply erasing your memory of the particular article you read. Once you no longer have information identifying which article your magazine was projected into, there is no fundamental reason for it to remain decohered into a single state. You could then reread it to enjoy a different article.

While this thought experiment may sound fantastical, the concept is closely connected to a mind-bending twist on the famous double-slit experiment, known as the delayed-choice quantum eraser. It is often claimed to exhibit a radical phenomenon: where measurements made in the present alter events that occurred in the past. But is such a paradoxical suggestion real, even in the notoriously strange quantum realm?

A double twist on the double slit

In a standard double-slit experiment, photons are sent one by one through two slits to create an interference pattern on a screen, illustrating the wave-like behaviour of light. But if we add a detector that can spot which of the two slits the photon goes through, the interference disappears and we see only two distinct clumps on the screen, signifying particle-like behaviour. Crucially, gaining information about which path the photon took changes the photon’s quantum state, from the wave-like interference pattern to the particle-like clumps.

The first twist on this thought experiment is attributed to proposals from physicist John Wheeler in 1978, and a later collaboration with Wojciech Zurek in 1983. Wheeler’s idea was to delay the measurement of which slit the photon goes through. Instead of measuring the photon as it passes through the double slit, the measurement could be delayed until just before the photon hits the screen. Interestingly, the delayed detection of which slit the photon goes through still determines whether it displays wave-like or particle-like behaviour. In other words, even a detection done long after the photon has gone through the slit determines whether or not that photon is measured to have interfered with itself.

If that’s not strange enough, the delayed-choice quantum eraser is a further modification of this idea. First proposed by American physicists Marlan Scully and Kai Drühl in 1982 (Phys. Rev. A 25 2208), it was later experimentally implemented by Yoon-Ho Kim and collaborators using photons in 2000 (Phys. Rev. Lett. 84 1). This variation adds a second twist: if recording which slit the photon passes through causes it to decohere, then what happens if we were to erase that information? Imagine shrinking the detector to a single qubit that becomes entangled with the photon: “left” slit might correlate to the qubit being 0, “right” slit to 1. Instead of measuring whether the qubit is a 0 or 1 (revealing the path), we could measure it in a complementary way, randomising the 0s and 1s (erasing the path information).

1 Delayed detections, path revelations and complementary measurements

Detailed illustration explaining the quantum eraser effect
(Courtesy: Mayank Shreshtha)

This illustration depicts how the quantum eraser restores the wave-like behaviour of photons in a double-slit experiment, using 3D-glasses as an analogy.

The top left box shows the set-up for the standard double-slit experiment. As there are no detectors at the slits measuring which pathway a photon takes, an interference pattern emerges on the screen. In box 1, detectors are present at each slit, and by measuring which slit the photon might have passed through, the interference pattern is destroyed. Boxes 2 and 3 show that by erasing the “which-slit” information, the interference patterns are restored. This is done by separating out the photons using the eraser, represented here by a red filter and a blue filter of the 3D glasses. The final box 4 shows that the overall pattern with the eraser has no interference, identical to the pattern seen in box 1.

In boxes 2, 3 and 4, a detector qubit measures “which-slit” information, with states |0> for left and |1> for right. These are points on the z-axis of the “Bloch sphere”, an abstract representation of the qubit. Then the eraser measures the detector qubit in a complementary way, along the x-axis of the Bloch sphere. This destroys the “which-slit information”, but reveals the red and blue lens information used to filter the outcomes, as depicted in the image of the 3D glasses.

Strikingly, while the screen still shows particle-like clumps overall, these complementary measurements of the single-qubit detector can actually be used to extract a wave-like interference pattern. This works through a sorting process: the two possible outcomes of the complementary measurements are used to separate out the photon detections on the screen. The separated patterns then each individually show bright and dark fringes.

I like to visualize this using a pair of 3D glasses, with one blue and one red lens. Each colour lens reveals a different individual image, like the two separate interference patterns. Without the 3D glasses, you see only the overall sum of the images. In the quantum eraser experiment, this sum of the images is a fully decohered pattern, with no trace of interference. Having access to the complementary measurements of the detector is like getting access to the 3D glasses: you now get an extra tool to filter out the two separate interference patterns.
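The sorting step can also be reproduced in a few lines of numerical Python. The toy model below is my own illustration of the idea, not the published analysis: ignoring the detector qubit gives a fringeless pattern, while conditioning on the two complementary-measurement outcomes recovers two interleaved fringe patterns whose sum is still fringeless.

import numpy as np

# Toy model of the sorting described above (my own sketch, not the published analysis).
# The photon-plus-detector state is (psi_L(x)|0> + psi_R(x)|1>)/sqrt(2).
x = np.linspace(-20, 20, 2001)              # screen coordinate, arbitrary units
k, d, sigma = 2.0, 3.0, 6.0                 # assumed wavenumber, slit separation, envelope width
envelope = np.exp(-x**2 / (2 * sigma**2))
psi_L = envelope * np.exp(1j * k * d * x / 2)    # amplitude via the left slit
psi_R = envelope * np.exp(-1j * k * d * x / 2)   # amplitude via the right slit

# Which-slit information present (or simply ignored): incoherent sum, no fringes
p_total = 0.5 * (np.abs(psi_L)**2 + np.abs(psi_R)**2)

# Erase the which-slit information: condition on the |+> or |-> outcome of the qubit
p_plus  = 0.25 * np.abs(psi_L + psi_R)**2   # fringes
p_minus = 0.25 * np.abs(psi_L - psi_R)**2   # anti-fringes

# The two sorted patterns interleave so that their sum is still the fringeless one
assert np.allclose(p_plus + p_minus, p_total)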

Rewriting the past – or not?

If erasing the information at the detector lets us extract wave-like patterns, it may seem like we’ve restored wave-like behaviour to an already particle-like photon. That seems truly head-scratching. However, Jonte Hance, a quantum physicist at Newcastle University in the UK, highlights a different conclusion, focused on how the individual interference patterns add up to show the usual decohered pattern. “They all feel like they shouldn’t be able to fit together,” Hance explains. “It’s really showing that the correlations you get through entanglement have to be able to fit every possible way you could measure a system.” The results therefore reveal an intriguing aspect of quantum theory – the rich, counterintuitive structure of quantum correlations from entanglement – rather than past influences.

Even Wheeler himself did not believe the thought experiment implies backward-in-time influence, as explained by Lorenzo Catani, a researcher at the International Iberian Nanotechnology Laboratory (INL) in Portugal. Commenting on the history of the thought experiment, Catani notes that “Wheeler concluded that one must abandon a certain type of realism – namely, the idea that the past exists independently of its recording in the present. As far as I know, only a minority of researchers have interpreted the experiment as evidence for retrocausality.”

Eraser vs Bell: a battle of the bizarre

One physicist who is attempting to unpack this problem is Johannes Fankhauser at the University of Innsbruck, Austria. “I’d heard about the quantum eraser, and it had puzzled me a lot because of all these bizarre claims of backwards-in-time influence”, he explains. “I see something that sounds counterintuitive and puzzling and bizarre and then I want to understand it, and by understanding it, it gets a bit demystified.”

Fankhauser realized that the quantum eraser set-up can be translated into a very standard Bell experiment. These experiments are based on entangling a pair of qubits, the idea being to rule out local “hidden-variable” models of quantum theory. This led him to see that there is no need to explain the eraser using backwards-in-time influence, since the related Bell experiments can be understood without it, as explained in his 2017 paper (Quanta 8 44). Fankhauser then further analysed the thought experiment using the de Broglie–Bohm interpretation of quantum theory, which gives a physical model for the quantum wavefunction (as particles are guided by a “pilot” wave). Using this, he showed explicitly that the outcomes of the eraser experiment can be fully explained without requiring backwards-in-time influences.

So does that mean that the eraser doesn’t tell us anything else beyond what Bell experiments already tell us? Not quite. “It turns different knobs than the Bell experiment,” explains Fankhauser. “I would say it asks the question ‘what do measurements signify?’, and ‘when can I talk about the system having a property?’. That’s an interesting question and I would say we don’t have a full answer to this.”

In particular, the eraser demonstrates the importance that the very act of observation has on outcomes, with the detector playing the role of an observer. “You measure some of its properties, you change another property,” says Fankhauser. “So the next time you measure it, the new property was created through the observation. And I’m trying to formalize this now more concretely. I’m trying to come up with a new approach and framework to study these questions.”

Meanwhile, Catani found an intriguing contrast between Bell experiments and the eraser in his research. “The implications of Bell’s theorem are far more profound,” says Catani. In the 2023 paper (Quantum 7 1119) he co-authored, Catani considers a model for classical physics, with an extra condition: there is a restriction on what you can know about the underlying physical states. Applying this model to the quantum eraser, he finds that its results can be reproduced by such a classical theory. By contrast, the classical model cannot reproduce the statistical violations of a Bell experiment. This shows that having incomplete knowledge of the physical state is not, by itself, enough to explain the strange results of the Bell experiment. It therefore demonstrates a more powerful deviation from classical physics than the eraser. Catani also contrasts the mathematical rigour of the two cases. While Bell experiments are based on explicitly formulated assumptions, claims about backwards-in-time influence in the quantum eraser rely on a particular narrative – one that gives rise to the apparent paradox.

The eraser as a brainteaser

Physicists therefore broadly agree that the mathematics of the quantum eraser thought experiment fits well within standard quantum theory. Even so, Hance argues that formal results alone are not the entire story: “This is something we need to pick apart, not just in terms of mathematical assumptions, but also in terms of building intuitions for us to be able to actually play around with what quantumness is.” Hance has been analysing the physical implications of different assumptions in the thought experiment, with some options discussed in his 2021 preprint (arXiv:2111.09347) with collaborators on the quantum eraser paradox.

It therefore provides a tool for understanding how quantum correlations match up in a way that is not described by classical physics. “It’s a great thinking aid – partly brainteaser, partly demonstration of the nature of this weirdness.”

Information, observers and quantum computers

Every quantum physicist takes something different from the quantum eraser, whether it is a spotlight on the open problems surrounding the properties of measured systems; a lesson from history in mathematical rigour; or a counterintuitive puzzle to make sense of. For a minority that deviate from standard approaches to quantum theory, it may even be some form of backwards-in-time influence.

For myself, as explained in my video on YouTube and my 2023 paper (IEEE International Conference on Quantum Computing and Engineering 10.1109/QCE57702.2023.20325) on quantum thought experiments, the most dramatic implication of the quantum eraser is explaining the role of observers in the double-slit experiment. The quantum eraser emphasizes that even a single entanglement between qubits will cause decoherence, whether or not it is measured afterwards – meaning that no mysterious macroscopic observer is required. This also explains why building a quantum computer is so challenging, as unwanted entanglement with even one particle can cause the whole computation to collapse into a random state.

The quantum eraser emphasizes that even a single entanglement between qubits will cause decoherence, whether or not it is measured afterwards – meaning that no mysterious macroscopic observer is required
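That statement can be checked directly with a two-qubit density-matrix calculation. The snippet below is my own minimal illustration, not taken from the paper or video: it traces out the single “detector” qubit and shows that the system’s off-diagonal coherence vanishes as soon as the entanglement exists, whether or not anything is ever measured.

import numpy as np

# Tiny check (my own illustration): entangling a "system" qubit with just one other
# qubit wipes out the system's coherence. Coherence lives in the off-diagonal
# elements of the reduced density matrix.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho_isolated = np.outer(plus, plus)                      # |+><+|, off-diagonals = 0.5

bell = np.zeros(4); bell[0] = bell[3] = 1 / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
rho_joint = np.outer(bell, bell)
# trace out the second qubit (the "detector")
rho_reduced = rho_joint.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(np.round(rho_isolated, 3))  # off-diagonals 0.5: interference still possible
print(np.round(rho_reduced, 3))   # off-diagonals 0.0: fully decohered, no observer needed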

Where does this leave the futuristic readers of our 200-year double-quantum special issue of Physics World? Simply erasing their memories is not enough to restore the quantum behaviour of the article. It is too late to change which article was selected. Though, following an eraser-type protocol, our futurists can do one better than those sneaky magazine writers: they can use the outcomes of complementary measurements on their memory, to sort the article into two individual smaller articles, each displaying their own quantum entanglement structure that was otherwise hidden. So even if you can’t use the quantum eraser to rewrite the past, perhaps it can rewrite what you read in the future.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

The post The quantum eraser doesn’t rewrite the past – it rewrites observers appeared first on Physics World.

Proton arc therapy eliminates hard-to-treat cancer with minimal side effects

27 mai 2025 à 09:30

Head-and-neck cancers are difficult to treat with radiation therapy because they are often located close to organs that are vital for patients to maintain a high quality-of-life. Radiation therapy can also alter a person’s shape, through weight loss or swelling, making it essential to monitor such changes throughout the treatment to ensure effective tumour targeting.

Researchers from Corewell Health William Beaumont University Hospital have now used a new proton therapy technique called step-and-shoot proton arc therapy (a spot-scanning proton arc method) to treat head-and-neck cancer in a human patient – the first person in the US to receive this highly accurate treatment.

“We envisioned that this technology could significantly improve the quality of treatment plans for patients and the treatment efficiency compared with the current state-of-the-art technique of intensity-modulated proton therapy (IMPT),” states senior author Xuanfeng Ding.

Progression towards dynamic proton arc therapy

“The first paper on spot-scanning proton arc therapy was published in 2016 and the first prototype for it was built in 2018,” says Ding. However, step-and-shoot proton arc therapy is an interim solution towards a more advanced technique known as dynamic proton arc therapy – which delivered its first pre-clinical treatment in 2024. Dynamic proton arc therapy is still undergoing development and regulatory approval clearance, so researchers have chosen to use step-and-shoot proton arc therapy clinically in the meantime.

Other proton therapies are more manual in nature and require a lot of monitoring, but the step-and-shoot technology delivers radiation directly to a tumour in a more continuous and automated fashion, with less lag time between radiation dosages. “Step-and-shoot proton arc therapy uses more beam angles per plan compared to the current clinical practice using IMPT and optimizes the spot and energy layers sparsity level,” explains Ding.

The extra beam angles provide more degrees of freedom to optimize the treatment plan, giving better dose conformity, robustness and linear energy transfer (LET, the energy deposited by ionizing radiation) through a more automated approach. During treatment delivery, the gantry rotates to each beam angle and stops to deliver the treatment irradiation.

In the dynamic proton arc technique that is also being developed, the gantry rotates continuously while irradiating the proton spot or switching energy layer. The step-and-shoot proton arc therapy therefore acts as an interim stage that is allowing more clinical data to be acquired to help dynamic proton arc therapy become clinically approved. The pinpointing ability of these proton therapies enables tumours to be targeted more precisely without damaging surrounding healthy tissue and organs.

The first clinical treatment

The team trialled the new technique on a patient with adenoid cystic carcinoma in her salivary gland – a rare and highly invasive cancer that’s difficult to treat as it targets the nerves in the body. This tendency to target nerves also means that fighting such tumours typically causes a lot of side effects. Using the new step-and-shoot proton arc therapy, however, the patient experienced minimal side effects and no radiation toxicity to other areas of her body (including the brain) after 33 treatments. Since finishing her treatment in August 2024, she continues to be cancer-free.

Tiffiney Beard and Rohan Deraniyagala
First US patient Tiffiney Beard, who underwent step-and-shoot proton arc therapy to treat her rare head-and-neck cancer, at a follow-up appointment with Rohan Deraniyagala. (Courtesy: Emily Rose Bennett, Corewell Health)

“Radiation to the head-and-neck typically results in dryness of the mouth, pain and difficulty swallowing, abnormal taste, fatigue and difficulty with concentration,” says Rohan Deraniyagala, a Corewell Health radiation oncologist involved with this research. “Our patient had minor skin irritation but did not have any issues with eating or performing at her job during treatment and for the last year since she was diagnosed.”

Describing the therapeutic process, Ding tells Physics World that “we developed an in-house planning optimization algorithm to select spot and energy per beam angle so the treatment irradiation time could be reduced to four minutes. However, because the gantry still needs to stop at each beam angle, the total treatment time is about 16 minutes per fraction.”

On monitoring the progression of the tumour over time and developing treatment plans, Ding confirms that the team “implemented a machine-learning-based synthetic CT platform which allows us to track the daily dosage of radiation using cone-beam computed tomography (CBCT) so that we can schedule an adaptive treatment plan for the patient.”

On the back of this research, Ding says that the next step is to help further develop the dynamic proton arc technique – known as DynamicARC – in collaboration with industry partner IBA.

The research was published in the International Journal of Particle Therapy.

The post Proton arc therapy eliminates hard-to-treat cancer with minimal side effects appeared first on Physics World.

Superconducting microwires detect high-energy particles

23 mai 2025 à 10:10

Arrays of superconducting wires have been used to detect beams of high-energy charged particles. Much thinner wires are already used to detect single photons, but this latest incarnation uses thicker wires that can absorb the large amounts of energy carried by fast-moving protons, electrons, and pions. The new detector was created by an international team led by Cristián Peña at Fermilab.

In a single-photon detector, an array of superconducting nanowires is operated below the critical temperature for superconductivity – with current flowing freely through the nanowires. When a nanowire absorbs a photon it creates a hotspot that temporarily destroys superconductivity and boosts the electrical resistance. This creates a voltage spike across the nanowire, allowing the location and time of the photon detection to be determined very precisely.
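In practice the read-out boils down to spotting a fast voltage pulse on each biased wire and recording when it occurs. The sketch below is a generic, hypothetical illustration of that time-tagging step, not the Fermilab team's actual electronics or software.

import numpy as np

# Illustrative sketch (not the group's actual readout): a hotspot on the biased wire
# appears as a fast voltage pulse, and time-tagging the detection amounts to finding
# where the digitised trace first crosses a discrimination threshold.
rng = np.random.default_rng(1)
fs = 5e9                                    # assumed sampling rate: 5 GS/s
t = np.arange(0, 200e-9, 1 / fs)            # a 200 ns trace
pulse = 0.4 * np.exp(-(t - 80e-9) / 20e-9) * (t >= 80e-9)   # toy pulse arriving at 80 ns
trace = pulse + rng.normal(0.0, 0.02, t.size)               # add readout noise

threshold = 0.2   # volts, chosen well above the noise floor
crossings = np.flatnonzero((trace[:-1] < threshold) & (trace[1:] >= threshold))
if crossings.size:
    print(f"detection time-tag: {t[crossings[0] + 1] * 1e9:.2f} ns")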

“These detectors have emerged as the most advanced time-resolved single-photon sensors in a wide range of wavelengths,” Peña explains. “Applications of these photon detectors include quantum networking and computing, space-to-ground communication, exoplanet exploration and fundamental probes for new physics such as dark matter.”

A similar hotspot is created when a superconducting wire is struck by a high-energy charged particle. In principle, this effect could be used to create particle detectors for experiments at labs such as Fermilab and CERN.

New detection paradigm

“As with photons, the ability to detect charged particles with high spatial and temporal precision, beyond what traditional sensing technologies can offer, has the potential to propel the field of high-energy physics towards a new detection paradigm,” Peña explains.

However, the nanowire single-photon detector design is not appropriate for detecting charged particles. Unlike photons, charged particles do not deposit all of their energy at a single point in a wire. Instead, the energy can be spread out along a track, which becomes longer as particle energy increases. Also, at the relativistic energies reached at particle accelerators, the nanowires used in single-photon detectors are too thin to collect the energy required to trigger a particle detection.

To create their new particle detector, Peña’s team used the latest advances in superconductor fabrication. On a thin film of tungsten silicide, they deposited an 8×8, 2 mm² array of micron-thick superconducting wires.

Tested at Fermilab

To test out their superconducting microwire single-photon detector (SMSPD), they used it to detect high-energy particle beams generated at the Fermilab Test Beam Facility. These included a 12 GeV beam of protons and 8 GeV beams of electrons and pions.

“Our study shows for the first time that SMSPDs are sensitive to protons, electrons, and pions,” Peña explains. “In fact, they behave very similarly when exposed to different particle types. We measured almost the same detection efficiency, as well as spatial and temporal properties.”

The team now aims to develop a deeper understanding of the physics that unfolds as a charged particle passes through a superconducting microwire. “That will allow us to begin optimizing and engineering the properties of the superconducting material and sensor geometry to boost the detection efficiency, the position and timing precision, as well as optimize for the operating temperature of the sensor,” Peña says. With further improvements, SMSPDs could become an integral part of high-energy physics experiments – perhaps paving the way for a deeper understanding of fundamental physics.

The research is described in the Journal of Instrumentation.

The post Superconducting microwires detect high-energy particles appeared first on Physics World.

What is meant by neuromorphic computing – a webinar debate

23 mai 2025 à 10:08
AI circuit board
(Courtesy: Shutterstock/metamorworks)

There are two main approaches to what we consider neuromorphic computing. The first emulates biological neural processing systems using computational substrates whose physics gives them properties and constraints similar to those of real neural systems, offering the potential for denser structures and lower energy costs. The second simulates neural processing systems on scalable architectures that allow large neural networks to be simulated with a higher degree of abstraction, arbitrary precision, high resolution and no constraints imposed by the physics of the computing medium.
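To make the distinction concrete, the snippet below sketches the second, “simulation” approach: a leaky integrate-and-fire neuron stepped entirely in software. It is my own minimal illustration, not material from the webinar, and the parameter values are arbitrary.

import numpy as np

# Minimal sketch of the "simulation" route (my own illustration): a leaky
# integrate-and-fire neuron stepped in software, with parameters chosen freely
# rather than dictated by a physical substrate.
dt, tau = 1e-3, 20e-3        # time step and membrane time constant, seconds (assumed)
v_th, v_reset = 1.0, 0.0     # spike threshold and reset value, arbitrary units
steps = 500
i_in = 1.2 * np.ones(steps)  # constant drive, arbitrary units

v, spike_times = 0.0, []
for n in range(steps):
    v += dt * (-v + i_in[n]) / tau      # leaky integration: dv/dt = (-v + I)/tau
    if v >= v_th:                       # threshold crossing: emit spike and reset
        spike_times.append(n * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in {steps * dt:.1f} s of simulated time")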

Both may be required to advance the field, but is either approach ‘better’? Hosted by Neuromorphic Computing and Engineering, this webinar will see teams of leading experts in the field of neuromorphic computing argue the case for either approach, overseen by an impartial moderator.

Speakers image. Left to right: Elisa Donati, Jennifer Hasler, Catherine (Katie) Schuman, Emre Neftci, Giulia D’Angelo
Left to right: Elisa Donati, Jennifer Hasler, Catherine (Katie) Schuman, Emre Neftci, Giulia D’Angelo

Team emulation:
Elisa Donati. Elisa’s research focuses on designing neuromorphic circuits that are ideally suited to interfacing with the nervous system, and on showing how they can be used to build closed-loop hybrid artificial and biological neural processing systems. She is also involved in the development of neuromorphic hardware and software systems able to mimic the functions of biological brains for medical and robotics applications.

Jennifer Hasler received her BSE and MS degrees in electrical engineering from Arizona State University in August 1991. She received her PhD in computation and neural systems from California Institute of Technology in February 1997. Jennifer is a professor at the Georgia Institute of Technology in the School of Electrical and Computer Engineering; Atlanta is the coldest climate in which she has lived. Jennifer founded the Integrated Computational Electronics (ICE) laboratory at Georgia Tech, a laboratory affiliated with the Laboratories for Neural Engineering. She is a member of Tau Beta Pi, Eta Kappa Nu, and the IEEE.

Team simulation:
Catherine (Katie) Schuman is an assistant professor in the Department of Electrical Engineering and Computer Science at the University of Tennessee (UT). She received her PhD in computer science from UT in 2015, where she completed her dissertation on the use of evolutionary algorithms to train spiking neural networks for neuromorphic systems. Katie previously served as a research scientist at Oak Ridge National Laboratory, where her research focused on algorithms and applications of neuromorphic systems. Katie co-leads the TENNLab Neuromorphic Computing Research Group at UT. She has authored more than 70 publications as well as seven patents in the field of neuromorphic computing. She received the Department of Energy Early Career Award in 2019. Katie is a senior member of the Association for Computing Machinery and the IEEE.

Emre Neftci received his MSc degree in physics from EPFL in Switzerland, and his PhD in 2010 at the Institute of Neuroinformatics at the University of Zurich and ETH Zurich. He is currently an institute director at the Jülich Research Centre and professor at RWTH Aachen. His current research explores the bridges between neuroscience and machine learning, with a focus on the theoretical and computational modelling of learning algorithms that are best suited to neuromorphic hardware and non-von Neumann computing architectures.

Discussion chair:
Giulia D’Angelo is currently a Marie Skłodowska-Curie postdoctoral fellow at the Czech Technical University in Prague, where she focuses on neuromorphic algorithms for active vision. She obtained a bachelor’s degree in biomedical engineering from the University of Genoa and a master’s degree in neuroengineering with honours. During her master’s, she developed a neuromorphic system for the egocentric representation of peripersonal visual space at King’s College London. She earned her PhD in neuromorphic algorithms at the University of Manchester, receiving the President’s Doctoral Scholar Award, in collaboration with the Event-Driven Perception for Robotics Laboratory at the Italian Institute of Technology. There, she proposed a biologically plausible model for event-driven, saliency-based visual attention. She was recently awarded the Marie Skłodowska-Curie Fellowship to explore sensorimotor contingency theories in the context of neuromorphic active vision algorithms.

About this journal
Neuromorphic Computing and Engineering journal cover

Neuromorphic Computing and Engineering is a multidisciplinary, open access journal publishing cutting-edge research on the design, development and application of artificial neural networks and systems from both a hardware and computational perspective.

Editor-in-chief: Giacomo Indiveri, University of Zurich, Switzerland

 

The post What is meant by neuromorphic computing – a webinar debate appeared first on Physics World.

Bacteria-killing paint could dramatically improve hospital hygiene

21 mai 2025 à 17:20
Antimicrobial efficacy of chlorhexidine epoxy resin
Antimicrobial efficacy SEM images of steel surfaces inoculated with bacteria show a large bacterial concentration on surfaces painted with control epoxy resin (left) and little to no bacteria on those painted with chlorhexidine epoxy resin. (Courtesy: University of Nottingham)

Scientists have created a novel antimicrobial coating that, when mixed with paint, can be applied to a range of surfaces to destroy bacteria and viruses – including particularly persistent and difficult-to-kill strains like MRSA, flu virus and SARS-CoV-2. The development potentially paves the way for substantial improvements in scientific, commercial and clinical hygiene.

The University of Nottingham-led team made the material by combining chlorhexidine digluconate (CHX) – a disinfectant commonly used by dentists to treat mouth infections and by clinicians for cleaning before surgery – with everyday paint-on epoxy resin. Using this material, the team worked with staff at Birmingham-based specialist coating company Indestructible Paint to create a prototype antimicrobial paint. They found that, when dried, the coating can kill a wide range of pathogens.

The findings of the study, which was funded by the Royal Academy of Engineering Industrial Fellowship Scheme, were published in Scientific Reports.

Persistent antimicrobial protection

As part of the project, the researchers painted the antimicrobial coating onto a surface and used a range of scientific techniques to analyse the distribution of the biocide in the paint, to confirm that it remained uniformly distributed at a molecular level.

According to project leader Felicity de Cogan, the new paint can be used to provide antimicrobial protection on a wide array of plastic and hard non-porous surfaces. Crucially, it could be effective in a range of clinical environments, where surfaces like hospital beds and toilet seats can act as a breeding ground for bacteria for extended periods of time – even after the introduction of stringent cleaning regimes.

The team, based at the University’s School of Pharmacy, is also investigating the material’s use in the transport and aerospace industries, especially on frequently touched surfaces in public spaces such as aeroplane seats and tray tables.

“The antimicrobial in the paint is chlorhexidine – a biocide commonly used in products like mouthwash. Once it is added, the paint works in exactly the same way as all other paint and the addition of the antimicrobial doesn’t affect its application or durability on the surface,” says de Cogan.

Madeline Berrow from the University of Nottingham
In the lab Co-first author Madeline Berrow, who performed the laboratory work for the study. (Courtesy: University of Nottingham)

The researchers also note that adding CHX to the epoxy resin did not affect its optical transparency.

According to de Cogan, the novel concoction has a range of potential scientific, clinical and commercial applications.

“We have shown that it is highly effective against a range of different pathogens like E. coli and MRSA. We have also shown that it is effective against bacteria even when they are already resistant to antibiotics and biocides,” she says. “This means the technology could be a useful tool to circumvent the global problem of antimicrobial resistance.”

In de Cogan’s view, there are also a number of major advantages to using the new coating to tackle bacterial infection – especially when compared to existing approaches – further boosting the prospects of future applications.

The key advantage of the technology is that the paint is “self-cleaning” – meaning that it would no longer be necessary to carry out the arduous task of repeatedly cleaning a surface to remove harmful microbes. Instead, after a single application, the simple presence of the paint on the surface would actively and continuously kill bacteria and viruses whenever they come into contact with it.

“This means that you can be sure a surface won’t pass on infections when you touch it,” says de Cogan.

“We are looking at more extensive testing in harsher environments and long-term durability testing over months and years. This work is ongoing and we will be following up with another publication shortly,” she adds.

The post Bacteria-killing paint could dramatically improve hospital hygiene appeared first on Physics World.

Why I stopped submitting my work to for-profit publishers

21 mai 2025 à 12:00

Peer review is a cornerstone of academic publishing. It is how we ensure that published science is valid. Peer review, by which researchers judge the quality of papers submitted to journals, stops pseudoscience from being peddled as equivalent to rigorous research. At the same time, the peer-review system is under considerable strain as the number of journal articles published each year increases, jumping from 1.9 million in 2016 to 2.8 million in 2022, according to Scopus and Web of Science.

All these articles require experienced peer reviewers, with papers typically taking months to go through peer review. This cannot be blamed solely on the time taken to post manuscripts and reviews back and forth between editors and reviewers; rather, it is a result of high workloads and, fundamentally, how busy everyone is. Given that peer reviewers need to be experts in their field, the pool of potential reviewers is inherently limited. A bottleneck is emerging as the number of papers grows more quickly than the number of researchers in academia.

Scientific publishers have long been central to managing the process of peer review. For anyone outside academia, the concept of peer review may seem illogical given that researchers spend their time on it without much acknowledgement. While initiatives are in place to change this such as outstanding-reviewer awards and the Web of Science recording reviewer data, there is no promise that such recognition will be considered when looking for permanent positions or applying for promotion.

The impact of open access

Why, then, do we agree to review? As an active researcher myself in quantum physics, I peer-reviewed more than 40 papers last year and I’ve always viewed it as a duty. It’s a necessary time-sink to make our academic system function, to ensure that published research is valid and to challenge questionable claims. However, like anything people do out of a sense of duty, inevitably there are those who will seek to exploit it for profit.

Many journals today are open access, in which fees, known as article-processing charges, are levied to make the published work freely available online. It makes sense that costs need to be imposed – staff working at publishing companies need paying; articles need editing and typesetting; servers need to be maintained and web-hosting fees have to be paid. Recently, publishers have invested heavily in digital technology and developed new ways to disseminate research to a wider audience.

Open access, however, has encouraged some publishers to boost revenues by simply publishing as many papers as possible. At the same time, there has been an increase in retractions, especially of fabricated or manipulated manuscripts sold by “paper mills”. The rise of retractions isn’t directly linked to the emergence of open access, but it’s not a good sign, especially when the academic publishing industry reports profit margins of roughly 40% – higher than many other industries. Elsevier, for instance, publishes nearly 3000 journals and in 2023 its parent company, Relx, recorded a profit of £1.79bn. This is all money that was either paid in open-access fees or by libraries (or private users) for journal subscriptions but ends up going to shareholders rather than science.

It’s important to add that not all academic publishers are for-profit. Some, like the American Physical Society (APS), IOP Publishing, Optica, AIP Publishing and the American Association for the Advancement of Science – as well as university presses – are wings of academic societies and universities. Any profit they make is reinvested into research, education or the academic community. Indeed, IOP Publishing, AIP Publishing and the APS have formed a new “purpose-led publishing” coalition, in which the three publishers confirm that they will continue to reinvest the funds generated from publishing back into research and “never” have shareholders that result in putting “profit above purpose”.

But many of the largest publishers – the likes of Springer Nature, Elsevier, Taylor and Francis, MDPI and Wiley – are for-profit companies and are making massive sums for their shareholders. Should we just accept that this is how the system is? If not, what can we do about it and what impact can we as individuals have on a multi-billion-dollar industry? I have decided that I will no longer review for, nor submit my articles (when corresponding author) to, any for-profit publishers.

I’m lucky in my field that I have many good alternatives such as the arXiv overlay journal Quantum, IOP Publishing’s Quantum Science and Technology, APS’s Physical Review X Quantum and Optica Quantum. If your field doesn’t, then why not push for them to be created? We may not be able to dismantle the entire for-profit publishing industry, but we can stop contributing to it (especially those who have a permanent job in academia and are not as tied down by the need to publish in high impact factor journals). Such actions may seem small, but together can have an effect and push to make academia the environment we want to be contributing to. It may sound radical to take change into your own hands, but it’s worth a try. You never know, but it could help more money make its way back into science.

The post Why I stopped submitting my work to for-profit publishers appeared first on Physics World.

Visual assistance system helps blind people navigate

21 May 2025 at 10:00
Structure and workflow of a wearable visual assistance system
Visual assistance system The wearable system uses intuitive multimodal feedback to assist visually impaired people with daily life tasks. (Courtesy: J Tang et al. Nature Machine Intelligence 10.1038/s42256-025-01018-6, 2025, Springer Nature)

Researchers from four universities in Shanghai, China, are developing a practical visual assistance system to help blind and partially sighted people navigate. The prototype system combines lightweight camera headgear, rapid-response AI-facilitated software and artificial “skins” worn on the wrist and fingers that provide proximity sensing and haptic feedback. Functionality testing suggests that integrating visual, audio and haptic senses can create a wearable navigation system that overcomes the adoption and usability concerns of current designs.

Worldwide, 43 million people are blind, according to 2021 estimates by the International Agency for the Prevention of Blindness. Millions more are so severely visually impaired that they require the use of a cane to navigate.

Visual assistance systems offer huge potential as navigation tools, but current designs have many drawbacks and challenges for potential users. These include limited functionality with respect to the size and weight of headgear, battery life and charging issues, slow real-time processing speeds, audio command overload, high system latency that can create safety concerns, and extensive and sometimes complex learning requirements.

Innovations in miniaturized computer hardware, battery charge longevity, AI-trained software to decrease latency in auditory commands, and the addition of lightweight wearable sensory augmentation material providing near-real-time haptic feedback are expected to make visual navigation assistance viable.

The team’s prototype visual assistance system, described in Nature Machine Intelligence, incorporates an RGB-D (red, green, blue, depth) camera mounted on a 3D-printed glasses frame, ultrathin artificial skins, a commercial lithium-ion battery, a wireless bone-conducting earphone and a virtual reality training platform interfaced via triboelectric smart insoles. The camera is connected to a microcontroller via USB, enabling all computations to be performed locally without the need for a remote server.

When a user sets a target using a voice command, AI algorithms process the RGB-D data to estimate the target’s orientation and determine an obstacle-free direction in real time. As the user begins to walk to the target, bone conduction earphones deliver spatialized cues to guide them, and the system updates the 3D scene in real time.

The system’s real-time visual recognition incorporates changes in distance and perspective, and can compensate for low ambient light and motion blur. To provide robust obstacle avoidance, it combines a global threshold method with a ground interval approach to accurately detect overhead hanging, ground-level and sunken obstacles, as well as sloping or irregular ground surfaces.
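
The published pipeline is far more involved than this, but the depth-thresholding idea at its core can be illustrated with a short sketch. Everything below – the sector layout, the 1.5 m safety distance and the function name – is an assumption made for illustration, not the authors’ code: the sketch simply splits a depth frame into vertical sectors, finds the nearest reading in each and steers towards the most open one.

```python
import numpy as np

def clearest_heading(depth_map, safe_distance=1.5, n_sectors=9):
    """Toy obstacle-avoidance step on a single RGB-D depth frame.

    depth_map: 2D array of distances in metres (rows x columns).
    Returns the index of the vertical sector with the most free space,
    or None if every sector is blocked closer than safe_distance.
    """
    _, width = depth_map.shape
    sectors = np.array_split(np.arange(width), n_sectors)  # vertical strips of the image

    # Nearest valid reading in each strip (zeros mark missing depth data)
    nearest = []
    for cols in sectors:
        strip = depth_map[:, cols]
        valid = strip[strip > 0]
        nearest.append(valid.min() if valid.size else np.inf)
    nearest = np.array(nearest)

    if np.all(nearest < safe_distance):
        return None  # fully blocked: the caller should stop and alert the user
    return int(np.argmax(nearest))  # steer towards the most open sector


# Example: a synthetic 480 x 640 depth frame with an obstacle 0.8 m away on the left
frame = np.full((480, 640), 4.0)
frame[:, :200] = 0.8
print(clearest_heading(frame))  # a sector index to the right of the obstacle
```

A real system would also need something like the ground-interval logic described above to tell sunken or overhead obstacles apart from walkable slopes, which a single global threshold cannot do.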

First author Jian Tang of Shanghai Jiao Tong University and colleagues tested three audio feedback approaches: spatialized cues, 3D sounds and verbal instructions. They determined that spatialized cues are the quickest to deliver and to understand, while also providing precise directional perception.

Real-world testing A visually impaired person navigates through a cluttered conference room. (Courtesy: Tang et al. Nature Machine Intelligence)

To complement the audio feedback, the researchers developed stretchable artificial skin – an integrated sensory-motor device that provides near-distance alerting. The core component is a compact time-of-flight sensor that vibrates to stimulate the skin when the distance to an obstacle or object is smaller than a predefined threshold. The actuator is designed as a slim, lightweight polyethylene terephthalate cantilever. A gap between the driving circuit and the skin promotes air circulation to improve skin comfort, breathability and long-term wearability, as well as facilitating actuator vibration.

Users wear the sensor on the back of an index or middle finger, while the actuator and driving circuit are worn on the wrist. When the artificial skin detects a lateral obstacle, it provides haptic feedback in just 18 ms.

The researchers tested the trained system in virtual and real-world environments, with both humanoid robots and 20 visually impaired individuals who had no prior experience of using visual assistance systems. Testing scenarios included walking to a target while avoiding a variety of obstacles and navigating through a maze. Participants’ navigation speed increased with training and proved comparable to walking with a cane. Users were also able to turn more smoothly and were more efficient at pathfinding when using the navigation system than when using a cane.

“The proficient completion of tasks mirroring real-world challenges underscores the system’s effectiveness in meeting real-life challenges,” the researchers write. “Overall, the system stands as a promising research prototype, setting the stage for the future advancement of wearable visual assistance.”

The post Visual assistance system helps blind people navigate appeared first on Physics World.

Subtle quantum effects dictate how some nuclei break apart

20 May 2025 at 14:46

Subtle quantum effects within atomic nuclei can dramatically affect how some nuclei break apart. By studying 100 isotopes with masses below that of lead, an international team of physicists uncovered a previously unknown region of the nuclear landscape in which fissioning nuclei split in an unexpected way. This behaviour is driven not by the usual balance of forces, but by shell effects rooted in quantum mechanics.

“When a nucleus splits apart into two fragments, the mass and charge distribution of these fission fragments exhibits the signature of the underlying nuclear structure effect in the fission process,” explains Pierre Morfouace of Université Paris-Saclay, who led the study. “In the exotic region of the nuclear chart that we studied, where nuclei do not have many neutrons, a symmetric split was previously expected. However, the asymmetric fission means that a new quantum effect is at stake.”

This unexpected discovery not only sheds light on the fine details of how nuclei break apart but also has far-reaching implications. These range from the development of safer nuclear energy to understanding how heavy elements are created during cataclysmic astrophysical events like stellar explosions.

Quantum puzzle

Fission is the process by which a heavy atomic nucleus splits into smaller fragments. It is governed by a complex interplay of forces: the strong nuclear force, which binds protons and neutrons together, competes with the electromagnetic repulsion between positively charged protons. The result is that certain nuclei are unstable, and this interplay alone would typically lead to symmetric fission.

But there’s another, subtler phenomenon at play: quantum shell effects. These arise because protons and neutrons inside the nucleus tend to arrange themselves into discrete energy levels or “shells,” much like electrons do in atoms.

“Quantum shell effects [in atomic electrons] play a major role in chemistry, where they are responsible for the properties of noble gases,” says Cedric Simenel of the Australian National University, who was not involved in the study. “In nuclear physics, they provide extra stability to spherical nuclei with so-called ‘magic’ numbers of protons or neutrons. Such shell effects drive heavy nuclei to often fission asymmetrically.”

In the case of very heavy nuclei, such as uranium or plutonium, this asymmetry is well documented. But in lighter, neutron-deficient nuclei – those with fewer neutrons than their stable counterparts – researchers had long expected symmetric fission, where the nucleus breaks into two roughly equal parts. This new study challenges that view.

New fission landscape

To investigate fission in this less-explored part of the nuclear chart, scientists from the R3B-SOFIA collaboration carried out experiments at the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt, Germany. They focused on nuclei ranging from iridium to thorium, many of which had never been studied before. The nuclei were fired at high energies into a lead target to induce fission.

The fragments produced in each fission event were carefully analysed using a suite of high-resolution detectors. A double ionization chamber captured the number of protons in each product, while a superconducting magnet and time-of-flight detectors tracked their momentum, enabling a detailed reconstruction of how the split occurred.

Using this method, the researchers found that the lightest fission fragments were frequently formed with 36 protons, which is the atomic number of krypton. This pattern suggests the presence of a stabilizing shell effect at that specific proton number.

“Our data reveal the stabilizing effect of proton shells at Z=36,” explains Morfouace. “This marks the identification of a new ‘island’ of asymmetric fission, one driven by the light fragment, unlike the well-known behaviour in heavier actinides. It expands our understanding of how nuclear structure influences fission outcomes.”

Future prospects

“Experimentally, what makes this work unique is that they provide the distribution of protons in the fragments, while earlier measurements in sub-lead nuclei were essentially focused on the total number of nucleons,” comments Simenel.

Since quantum shell effects are tied to specific numbers of protons or neutrons, not just the overall mass, these new measurements offer direct evidence of how proton shell structure shapes the outcome of fission in lighter nuclei. This makes the results particularly valuable for testing and refining theoretical models of fission dynamics.

“This work will undoubtedly lead to further experimental studies, in particular with more exotic light nuclei,” Simenel adds. “However, to me, the ball is now in the camp of theorists who need to improve their modelling of nuclear fission to achieve the predictive power required to study the role of fission in regions of the nuclear chart not accessible experimentally, as in nuclei formed in the astrophysical processes.”

The research is described in Nature.

The post Subtle quantum effects dictate how some nuclei break apart appeared first on Physics World.

New coronagraph pushes exoplanet discovery to the quantum limit

19 May 2025 at 18:21
Diagram of the new coronagraph
How it works Diagram showing simulated light from an exoplanet and its companion star (far left) moving through the new coronagraph. (Courtesy: Nico Deshler/University of Arizona)

A new type of coronagraph that could capture images of dim exoplanets that are extremely close to bright stars has been developed by a team led by Nico Deshler at the University of Arizona in the US. As well as boosting the direct detection of exoplanets, the new instrument could support advances in areas including communications, quantum sensing, and medical imaging.

Astronomers have confirmed the existence of nearly 6000 exoplanets, which are planets that orbit stars other than the Sun. The majority of these were discovered through their effects on their companion stars, rather than by being observed directly. This is because most exoplanets are too dim and too close to their companion stars for the exoplanet light to be differentiated from starlight. That is where a coronagraph can help.

A coronagraph is an astronomical instrument that blocks light from an extremely bright source to allow the observation of dimmer objects in the nearby sky. Coronagraphs were first developed a century ago to allow astronomers to observe the outer atmosphere (corona) of the Sun, which would otherwise be drowned out by light from the much brighter photosphere.

At the heart of a coronagraph is a mask that blocks the light from a star, while allowing light from nearby objects into a telescope. However, the mask (and the telescope aperture) will cause the light to interfere and create diffraction patterns that blur tiny features. This prevents the observation of dim objects that are closer to the star than the instrument’s inherent diffraction limit.
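
For a sense of scale, the conventional (Rayleigh) diffraction limit of a telescope of aperture D observing at wavelength λ is the standard textbook relation below; the 2.4 m aperture and 550 nm wavelength plugged in are illustrative values, not parameters of this study.

```latex
\[
\theta_{\min} \approx 1.22\,\frac{\lambda}{D}
\approx 1.22 \times \frac{550\times10^{-9}\,\mathrm{m}}{2.4\,\mathrm{m}}
\approx 2.8\times10^{-7}\,\mathrm{rad}
\approx 0.06~\mathrm{arcsec}.
\]
```

Any exoplanet whose angular separation from its star is smaller than this sits in the sub-diffraction regime that a conventional coronagraph mask cannot reach.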

Off limits

Most exoplanets lie within the diffraction limit of today’s coronagraphs and Deshler’s team addressed this problem using two spatial mode sorters. The first device uses a sequence of optical elements to separate starlight from light originating from the immediate vicinity of the star. The starlight is then blocked by a mask while the rest of the light is sent through a second spatial mode sorter, which reconstructs an image of the region surrounding the star.

As well as offering spatial resolution below the diffraction limit, the technique approaches the fundamental limit on resolution that is imposed by quantum mechanics.

“Our coronagraph directly captures an image of the surrounding object, as opposed to measuring only the quantity of light it emits without any spatial orientation,” Deshler describes. “Compared to other coronagraph designs, ours promises to supply more information about objects in the sub-diffraction regime – which lie below the resolution limits of the detection instrument.”

To test their approach, Deshler and colleagues simulated an exoplanet orbiting at a sub-diffraction distance from a host star some 1000 times brighter. After passing the light through the spatial mode sorters, they could resolve the exoplanet’s position – which would have been impossible with any other coronagraph.

Context and composition

The team believe that their technique will improve astronomical images. “These images can provide context and composition information that could be used to determine exoplanet orbits and identify other objects that scatter light from a star, such as exozodiacal dust clouds,” Deshler says.

The team’s coronagraph could also have applications beyond astronomy. With the ability to detect extremely faint signals close to the quantum limit, it could help to improve the resolution of quantum sensors. This could lead to new methods for detecting tiny variations in magnetic or gravitational fields.

Elsewhere, the coronagraph could help to improve non-invasive techniques for imaging living tissue on the cellular scale – with promising implications in medical applications such as early cancer detection and the imaging of neural circuits. Another potential use is in new multiplexing techniques for optical communications, in which the coronagraph would differentiate between overlapping signals – potentially boosting the rate at which data can be transferred between satellites and ground-based receivers.

The research is described in Optica.

The post New coronagraph pushes exoplanet discovery to the quantum limit appeared first on Physics World.

Five-body recombination could cause significant loss from atom traps

15 May 2025 at 10:05

Five-body recombination, in which five identical atoms form a tetramer molecule and a single free atom, could be the largest contributor to loss from ultracold atom traps at specific “Efimov resonances”, according to calculations done by physicists in the US. The process, which is less well understood than three- and four-body recombination, could be useful for building molecules, and potentially for modelling nuclear fusion.

A collision involving trapped atoms can be either elastic – in which the internal states of the atoms and their total kinetic energy remain unchanged – or inelastic, in which there is an interchange between the kinetic energy of the system and the internal energy states of the colliding atoms.

Most collisions in a dilute quantum gas involve only two atoms, and when physicists were first studying Bose-Einstein condensates (the ultralow-temperature state of some atomic gases), they suppressed inelastic two-body collisions, keeping the atoms in the desired state and preserving the condensate. A relatively small number of collisions, however, involve three or more bodies colliding simultaneously.

“They couldn’t turn off three body [inelastic collisions], and that turned out to be the main reason atoms leaked out of the condensate,” says theoretical physicist Chris Greene of Purdue University in the US.

Something remarkable

While attempting to understand inelastic three-body collisions, Greene and colleagues made the connection to work done in the 1970s by the Soviet theoretician Vitaly Efimov. He showed that at specific “resonances” of the scattering length, quantum mechanics allowed two colliding particles that could otherwise not form a bound state to do so in the presence of a third particle. While Efimov first considered the scattering of nucleons (protons and neutrons) or alpha particles, the effect applies to atoms and other quantum particles.

In the case of trapped atoms, the bound dimer and free atom are then ejected from the trap by the energy released from the binding event. “There were signatures of this famous Efimov effect that had never been seen experimentally,” Greene says. This was confirmed in 2005 by experiments from Rudolf Grimm’s group at the University of Innsbruck in Austria.

Hundreds of scientific papers have now been written about three-body recombination. Greene and colleagues subsequently predicted resonances at which four-body Efimov recombination could occur, producing a trimer. These were observed almost immediately by Grimm and colleagues. “Five was just too hard for us to do at the time, and only now are we able to go that next step,” says Greene.

Principal loss channel

In the new work, Greene and colleague Michael Higgins modelled collisions between identical caesium atoms in an optical trap. At specific resonances, five-body recombination – in which five colliding atoms produce a four-atom tetramer plus a single free atom – is not only enhanced but becomes the principal loss channel. The researchers believe these resonances should be experimentally observable using today’s laser box traps, which hold atomic gases in a square-well potential.

“For most ultracold experiments, researchers will be avoiding loss as much as possible – they would stay away from these resonances,” says Greene. “But for those of us in the few-body community interested in how atoms bind and resonate and how to describe complicated rearrangement, it’s really interesting to look at these points where the loss becomes resonant and very strong.” This is one technique that can be used to create new molecules, for example.

In future, Greene hopes to apply the model to nucleons themselves. “There have been very few people in the few-body theory community willing to tackle a five-particle collision – the Schrödinger equation has so many dimensions,” he says.

Fusion reactions

He hopes it may be possible to apply the researchers’ toolkit to nuclear reactions. “The famous one is the deuterium/tritium fusion reaction. When they collide they can form an alpha particle and a neutron and release a ton of energy, and that’s the basis of fusion reactors…There’s only one theory in the world from the nuclear community, and it’s such an important reaction I think it needs to be checked,” he says.

The researchers also wish to study the possibility of even larger bound states. However, they foresee a problem because the scattering length of the ground state resonance gets shorter and shorter with each additional particle. “Eventually the scattering length will no longer be the dominant length scale in the problem, and we think between five and six is about where that border line occurs,” Greene says. Nevertheless, higher-lying, more loosely-bound six-body Efimov resonances could potentially be visible at longer scattering lengths.

The research is described in Proceedings of the National Academy of Sciences.

Theoretical physicist Ravi Rau of Louisiana State University in the US is impressed by Greene and Higgins’ work. “For quite some time Chris Greene and a succession of his students and post-docs have been extending the three-body work that they did, using the same techniques, to four and now five particles,” he says. “Each step is much more complicated, and that he could use this technique to extend it to five bosons is what I see as significant.” Rau says, however, that “there is a vast gulf” between five atoms and the number treated by statistical mechanics, so new theoretical approaches may be required to bridge the gap.

The post Five-body recombination could cause significant loss from atom traps appeared first on Physics World.

Quantum effect could tame noisy nanoparticles by rendering them invisible

14 May 2025 at 10:00

In the quantum world, observing a particle is not a passive act. If you shine light on a quantum object to measure its position, photons scatter off it and disturb its motion. This disturbance is known as quantum backaction noise, and it limits how precisely physicists can observe or control delicate quantum systems.

Physicists at Swansea University have now proposed a technique that could eliminate quantum backaction noise in optical traps, allowing a particle to remain suspended in space undisturbed. This would bring substantial benefits for quantum sensors, as the amount of noise in a system determines how precisely a sensor can measure forces such as gravity; detect as-yet-unseen interactions between gravity and quantum mechanics; and perhaps even search for evidence of dark matter.

There’s just one catch: for the technique to work, the particle needs to become invisible.

Levitating nanoparticles

Backaction noise is a particular challenge in the field of levitated optomechanics, where physicists seek to trap nanoparticles using light from lasers. “When you levitate an object, the whole thing moves in space and there’s no bending or stress, and the motion is very pure,” explains James Millen, a quantum physicist who studies levitated nanoparticles at King’s College London, UK. “That’s why we are using them to detect crazy stuff like dark matter.”

While some noise is generally unavoidable, Millen adds that there is a “sweet spot” called the Heisenberg limit. “This is where you have exactly the right amount of measurement power to measure the position optimally while causing the least noise,” he explains.

The problem is that laser beams powerful enough to suspend a nanoparticle tend to push the system away from the Heisenberg limit, producing an increase in backaction noise.

Blocking information flow

The Swansea team’s method avoids this problem by, in effect, blocking the flow of information from the trapped nanoparticle. Its proposed setup uses a standing-wave laser to trap a nanoparticle in space with a hemispherical mirror placed around it. When the mirror has a specific radius, the scattered light from the particle and its reflection interfere so that the outgoing field no longer encodes any information about the particle’s position.

At this point, the particle is effectively invisible to the observer, with an interesting consequence: because the scattered light carries no usable information about the particle’s location, quantum backaction disappears. “I was initially convinced that we wanted to suppress the scatter,” team leader James Bateman tells Physics World. “After rigorous calculation, we arrived at the correct and surprising answer: we need to enhance the scatter.”

In fact, when scattering radiation is at its highest, the team calculated that the noise should disappear entirely. “Even though the particle shines brighter than it would in free space, we cannot tell in which direction it moves,” says Rafał Gajewski, a postdoctoral researcher at Swansea and Bateman’s co-author on a paper in Physical Review Research describing the technique.

Gajewski and Bateman’s result flips a core principle of quantum mechanics on its head. While it’s well known that measuring a quantum system disturbs it, the reverse is also true: if no information can be extracted, then no disturbance occurs, even when photons continuously bombard the particle. If physicists do need to gain information about the trapped nanoparticle, they can use a different, lower-energy laser to make their measurements, allowing experiments to be conducted at the Heisenberg limit with minimal noise.
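
One standard way to express this trade-off – a textbook result from quantum measurement theory, not a formula taken from the Swansea paper – is the imprecision-backaction inequality for continuous position measurement:

```latex
\[
S_x^{\mathrm{imp}}\, S_F^{\mathrm{ba}} \;\geq\; \frac{\hbar^{2}}{4},
\]
```

where the two factors are the spectral densities of the position-measurement imprecision and of the backaction force noise, and operating at the Heisenberg limit means saturating the bound. If the outgoing light encodes no position information at all, detection imposes no finite imprecision, so the inequality places no lower limit on how small the backaction can be.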

Putting it into practice

For the method to work experimentally, the team say the mirror needs a high-quality surface and a radius that is stable with temperature changes. “Both requirements are challenging, but this level of control has been demonstrated and is achievable,” Gajewski says.

Positioning the particle precisely at the centre of the hemisphere will be a further challenge, he adds, while the “disappearing” effect depends on the mirror’s reflectivity at the laser wavelength. The team is currently investigating potential solutions to both issues.

If demonstrated experimentally, the team says the technique could pave the way for quieter, more precise experiments and unlock a new generation of ultra-sensitive quantum sensors. Millen, who was not involved in the work, agrees. “I think the method used in this paper could possibly preserve quantum states in these particles, which would be very interesting,” he says.

Because nanoparticles are far more massive than atoms, Millen adds, they interact more strongly with gravity, making them ideal candidates for testing whether gravity follows the strange rules of quantum theory.  “Quantum gravity – that’s like the holy grail in physics!” he says.

The post Quantum effect could tame noisy nanoparticles by rendering them invisible appeared first on Physics World.

Smartphone sensors and antihydrogen could soon put relativity to the test

10 May 2025 at 14:36

Researchers on the AEgIS collaboration at CERN have designed an experiment that could soon boost our understanding of how antimatter falls under gravity. Created by a team led by Francesco Guatieri at the Technical University of Munich, the scheme uses modified smartphone camera sensors to improve the spatial resolution of measurements of antimatter annihilations. This approach could be used in rigorous tests of the weak equivalence principle (WEP).

The WEP is a key concept of Albert Einstein’s general theory of relativity, which underpins our understanding of gravity. It suggests that within a gravitational field, all objects should be accelerated at the same rate, regardless of their mass or whether they are made of matter or antimatter. Therefore, if matter and antimatter were found to accelerate at different rates in freefall, it would reveal serious problems with the WEP.

In 2023 the ALPHA-g experiment at CERN was the first to observe how antimatter responds to gravity. The researchers found that it falls down, with the tantalizing possibility that antimatter’s gravitational response is weaker than matter’s. Today, several experiments are seeking to improve on this observation.

Falling beam

AEgIS’ approach is to create a horizontal beam of cold antihydrogen atoms and observe how the atoms fall under gravity. The drop will be measured by a moiré deflectometer, in which the beam passes through two successive, aligned grids of horizontal slits before striking a position-sensitive detector. As the beam falls under gravity between the grids, the effect is similar to a slight horizontal misalignment of the grids. This creates a moiré pattern – or superlattice – that results in the particles making a distinctive pattern on the detector. By detecting a difference between the measured moiré pattern and that predicted by the WEP, the AEgIS collaboration hopes to reveal a discrepancy with general relativity.
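
The size of the effect can be estimated from the classical moiré-deflectometer relation – a generic textbook expression in which the grating separation L, beam speed v and grating period d are symbols for illustration, not AEgIS design values:

```latex
\[
\delta y = g\,\tau^{2}, \qquad \tau = \frac{L}{v},
\qquad \Delta\varphi = 2\pi\,\frac{\delta y}{d} = \frac{2\pi\,g\,L^{2}}{d\,v^{2}},
\]
```

so the fringe pattern at the detector is displaced vertically by gτ² relative to the gravity-free case. Measuring the gravitational acceleration to useful precision therefore means resolving pattern shifts that are a small fraction of the grating period d, which is what drives the demand for the micron-scale position resolution described below.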

However, as Guatieri explains, a number of innovations are required for this to work. “For AEgIS to work, we need a detector with incredibly high spatial resolution. Previously, photographic plates were the only option, but they lacked real-time capabilities.”

AEgIS physicists are addressing this by developing a new vertexing detector. Instead of focussing on the antiparticles directly, their approach detects the secondary particles produced when the antimatter annihilates on contact with the detector. Tracing the trajectories of these particles back to their vertex gives the precise location of the annihilation.

Vertexing detector

Borrowing from industry, the team has created its vertexing detector using an array of modified mobile-phone camera sensors. Guatieri had already used this approach to measure the real-time positions of low-energy positrons (anti-electrons) with unprecedented precision.

“Mobile camera sensors have pixels smaller than 1 micron,” Guatieri describes. “We had to strip away the first layers of the sensors, which are made to deal with the advanced integrated electronics of mobile phones. This required high-level electronic design and micro-engineering.”

With these modifications in place, the team measured the positions of antiproton annihilations to within just 0.62 micron: making their detector some 35 times more precise than previous designs.

Many benefits

“Our solution, demonstrated for antiprotons and directly applicable to antihydrogen, combines photographic-plate-level resolution, real-time diagnostics, self-calibration and a good particle collection surface, all in one device,” Guatieri says.

With some further improvements, the AEgIS team is confident that their vertexing detector will boost the resolution of measurements of the freefall of horizontal antihydrogen beams – allowing rigorous tests of the WEP.

AEgIS team member Ruggero Caravita of Italy’s University of Trento adds, “This game-changing technology could also find broader applications in experiments where high position resolution is crucial, or to develop high-resolution trackers”. He says, “Its extraordinary resolution enables us to distinguish between different annihilation fragments, paving the way for new research on low-energy antiparticle annihilation in materials”.

The research is described in Science Advances.

The post Smartphone sensors and antihydrogen could soon put relativity to the test appeared first on Physics World.

‘Chatty’ artificial intelligence could improve student enthusiasm for physics and maths, finds study

9 May 2025 at 13:38

Chatbots could boost students’ interest in maths and physics and make learning more enjoyable. So say researchers in Germany, who compared the emotional response of students using artificial intelligence (AI) generated texts to learn physics with that of students who only read traditional textbooks. The team, however, found no difference in test performance between the two groups.

The study was led by Julia Lademann, a physics-education researcher at the University of Cologne, who wanted to see if AI could boost students’ interest in physics. The team did this by creating a customized chatbot using OpenAI’s ChatGPT model, with a tone and language considered accessible to second-year high-school students in Germany.

After testing the chatbot for factual accuracy and for its use of motivating language, the researchers prompted it to generate explanatory text on proportional relationships in physics and mathematics. They then split 214 students, with an average age of 11.7, into two groups. One was given textbook material on the topic along with the chatbot text, while the control group received only the textbook material.

The researchers first surveyed the students’ interest in mathematics and physics and then gave them 15 minutes to review the learning material. Their interest was assessed again afterwards along with the students’ emotional state and “cognitive load” – the mental effort required to do the work – through a series of questionnaires.

Higher confidence

The chatbot was found to significantly enhance students’ positive emotions – including pleasure and satisfaction, interest in the learning material and self-belief in their understanding of the subject – compared with students who only used the textbook text. “The text of the chatbot is more human-like, more conversational than texts you will find in a textbook,” explains Lademann. “It is more chatty.”

Chatbot text was also found to reduce cognitive load. “The group that used the chatbot explanation experience[d] higher positive feelings about the subject [and] they also had a higher confidence in their learning comprehension,” adds Lademann.

Tests taken within 30 minutes of the “learning phase” of the experiment, however, found no difference in performance between students that received the AI-generated explanatory text and the control group, despite the former receiving more information. Lademann says this could be due to the short study time of 15 minutes.

The researchers say that while their findings suggest that AI could provide a superior learning experience for students, further research is needed to assess its impact on learning performance and long-term outcomes. “It is also important that this improved interest manifests in improved learning performance,” Lademann adds.

Lademann would now like to see “longer term studies with a lot of participants and with children actually using the chatbot”. Such research would explore the key potential strength of chatbots: their ability to respond in real time to students’ queries and adapt the learning level to each individual student.

The post ‘Chatty’ artificial intelligence could improve student enthusiasm for physics and maths, finds study appeared first on Physics World.

European centre celebrates 50 years at the forefront of weather forecasting

8 May 2025 at 15:01

What is the main role of the European Centre for Medium-Range Weather Forecasts (ECMWF)?

Making weather forecasts more accurate is at the heart of what we do at the ECMWF, working in close collaboration with our member states and their national meteorological services (see box below). That means enhanced forecasting for the weeks and months ahead as well as seasonal and annual predictions. We also have a remit to monitor the atmosphere and the environment – globally and regionally – within the context of a changing climate.

How does the ECMWF produce its weather forecasts?

Our task is to get the best possible representation, in a 3D sense, of the current state of the atmosphere in terms of key metrics such as wind, temperature, humidity and cloud cover. We do this via a process of reanalysis and data assimilation: combining the previous short-range weather forecast, and its component data, with the latest atmospheric observations – from satellites, ground stations, radars, weather balloons and aircraft. Unsurprisingly, using all this observational data is a huge challenge, with the exploitation of satellite measurements a significant driver of improved forecasting over the past decade.
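
Operational assimilation at the ECMWF uses four-dimensional variational methods and ensembles far beyond anything that fits in a few lines, but the underlying idea – weighting a prior forecast against new observations according to their respective uncertainties – can be sketched with a toy optimal-interpolation update. The state size, error covariances and observation operator below are illustrative assumptions, not values from the Integrated Forecasting System.

```python
import numpy as np

def analysis_update(x_b, B, y, H, R):
    """One optimal-interpolation / Kalman-style analysis step.

    x_b : background (prior forecast) state vector
    B   : background error covariance
    y   : vector of observations
    H   : observation operator mapping state space to observation space
    R   : observation error covariance
    Returns the analysis state that blends forecast and observations
    according to their relative uncertainties.
    """
    S = H @ B @ H.T + R                  # innovation covariance
    K = B @ H.T @ np.linalg.inv(S)       # Kalman gain
    return x_b + K @ (y - H @ x_b)       # correct the forecast towards the data


# Tiny example: a 3-variable "atmosphere" with one direct observation of variable 0
x_b = np.array([280.0, 285.0, 290.0])   # background temperatures (K)
B = 4.0 * np.eye(3)                     # assumed background error covariance
H = np.array([[1.0, 0.0, 0.0]])         # we only observe the first variable
R = np.array([[1.0]])                   # observation error variance
y = np.array([276.0])                   # the observation disagrees with the prior

print(analysis_update(x_b, B, y, H, R))  # variable 0 is pulled most of the way to 276
```

In the real system the state vector is vastly larger and the error covariances are never formed explicitly, which is why so much of the effort goes into algorithms and supercomputing.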

In what ways do satellite measurements help?

Consider the EarthCARE satellite that was launched in May 2024 by the European Space Agency (ESA) and is helping ECMWF to improve its modelling of clouds, aerosols and precipitation. EarthCARE has a unique combination of scientific instruments – a cloud-profiling radar, an atmospheric lidar, a multispectral imager and a broadband radiometer – to infer the properties of clouds and how they interact with solar radiation as well as thermal-infrared radiation emitted by different layers of the atmosphere.

How are you combining such data with modelling?

The ECMWF team is learning how to interpret and exploit the EarthCARE data to directly initialize our models. Put simply, that means mathematical models that better represent clouds and, in turn, yield more accurate forecasts. Indirectly, EarthCARE is also revealing a clearer picture of the fundamental physics governing cloud formation, distribution and behaviour. This is just one example of numerous developments taking advantage of new satellite data. We are looking forward, in particular, to fully exploiting next-generation satellite programmes from the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) – including the EPS-SG polar-orbiting system and the Meteosat Third Generation geostationary satellite for continuous monitoring over Europe, Africa and the Indian Ocean.

ECMWF high-performance computing centre
Big data, big opportunities: the ECMWF’s high-performance computing facility in Bologna, Italy, is the engine-room of the organization’s weather and climate modelling efforts. (Courtesy: ECMWF)

What other factors help improve forecast accuracy?

We talk of “a day, a decade” improvement in weather forecasting, such that a five-day forecast now is as good as a three-day forecast 20 years ago. A richer and broader mix of observational data underpins that improvement, with diverse data streams feeding into bigger supercomputers that can run higher-resolution models and better algorithms. Equally important is ECMWF’s team of multidisciplinary scientists, whose understanding of the atmosphere and climate helps to optimize our models and data assimilation methods. A case study in this regard is Destination Earth, an ambitious European Union initiative to create a series of “digital twins” – interactive computer simulations – of our planet by 2030. Working with ESA and EUMETSAT, the ECMWF is building the software and data environment for Destination Earth as well as developing the first two digital twins.

What are these two twins?

Our Digital Twin on Weather-Induced and Geophysical Extremes will assess and predict environmental extremes to support risk assessment and management. Meanwhile, in collaboration with others, the Digital Twin on Climate Change Adaptation complements and extends existing capabilities for the analysis and testing of “what if” scenarios – supporting sustainable development and climate adaptation and mitigation policy-making over multidecadal timescales.

Progress in machine learning and AI has been dramatic over the past couple of years

What kind of resolution will these models have?

Both digital twins integrate the sea, atmosphere, land, hydrology and sea ice – and the deep connections between them – at a resolution that is currently impossible to reach. Right now, for example, the ECMWF’s operational forecasts cover the whole globe in a 9 km grid – effectively a localized forecast every 9 km. With Destination Earth, we’re experimenting with 4 km, 2 km and even 1 km grids.

In February, the ECMWF unveiled a 10-year strategy to accelerate the use of machine learning and AI. How will this be implemented?

The new strategy prioritizes growing exploitation of data-driven methods anchored on established physics-based modelling – rapidly scaling up our previous deployment of machine learning and AI. There are also a variety of hybrid approaches combining data-driven and physics-based modelling.

What will this help you achieve?

On the one hand, data assimilation and observations will help us to directly improve as well as initialize our physics-based forecasting models – for example, by optimizing uncertain parameters or learning correction terms. We are also investigating the potential of applying machine-learning techniques directly on observations – in effect, to make another step beyond the current state-of-the-art and produce forecasts without the need for reanalysis or data assimilation.

How is machine learning deployed at the moment?

Progress in machine learning and AI has been dramatic over the past couple of years – so much so that we launched our Artificial Intelligence Forecasting System (AIFS) back in February. Trained on many years of reanalysis and using traditional data assimilation, AIFS is already an important addition to our suite of forecasts, though still working off the coat-tails of our physics-based predictive models. Another notable innovation is our Probability of Fire machine-learning model, which incorporates multiple data sources beyond weather prediction to identify regional and localized hot-spots at risk of ignition. Those additional parameters – among them human presence, lightning activity as well as vegetation abundance and its dryness – help to pinpoint areas of targeted fire risk, improving the model’s predictive skill by up to 30%.
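
As an illustration of the kind of data fusion being described – and nothing more, since the feature names and weights below are invented for this sketch rather than taken from the Probability of Fire model – a minimal ignition-risk score might combine weather and non-weather inputs like this:

```python
import numpy as np

# Illustrative only: feature names and weights are invented for this sketch,
# not taken from ECMWF's Probability of Fire model.
FEATURES = ["dryness", "temperature_anom", "wind_speed", "lightning_density", "human_presence"]
WEIGHTS = np.array([2.0, 1.2, 0.8, 1.5, 0.9])
BIAS = -4.0

def fire_probability(x):
    """Map a feature vector (each entry scaled to roughly 0-1) to an ignition probability."""
    return 1.0 / (1.0 + np.exp(-(WEIGHTS @ x + BIAS)))

# A grid cell that is dry and windy, with recent lightning but few people nearby
cell = np.array([0.9, 0.6, 0.7, 0.8, 0.1])
print(f"P(fire) = {fire_probability(cell):.2f}")
```

An operational model would be trained on historical fire data rather than using hand-set weights; the sketch simply shows the principle of folding non-meteorological predictors into a single risk score.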

What do you like most about working at the ECMWF?

Every day, the ECMWF addresses cutting-edge scientific problems – as challenging as anything you’ll encounter in an academic setting – by applying its expertise in atmospheric physics, mathematical modelling, environmental science, big data and other disciplines. What’s especially motivating, however, is that the ECMWF is a mission-driven endeavour with a straight line from our research outcomes to wider societal and economic benefits.

ECMWF at 50: new frontiers in weather and climate prediction

The European Centre for Medium-Range Weather Forecasts (ECMWF) is an independent intergovernmental organization supported by 35 states – 23 member states and 12 co-operating states. Established in 1975, the centre employs around 500 staff from more than 30 countries at its headquarters in Reading, UK, and sites in Bologna, Italy, and Bonn, Germany. As a research institute and 24/7 operational service, the ECMWF produces global numerical weather predictions four times per day and other data for its member/cooperating states and the broader meteorological community.

The ECMWF processes data from around 90 satellite instruments as part of its daily activities (yielding 60 million quality-controlled observations each day for use in its Integrated Forecasting System). The centre is a key player in Copernicus – the Earth observation component of the EU’s space programme – by contributing information on climate change for the Copernicus Climate Change Service; atmospheric composition to the Copernicus Atmosphere Monitoring Service; as well as flooding and fire danger for the Copernicus Emergency Management Service. This year, the ECMWF is celebrating its 50th anniversary and has a series of celebratory events scheduled in Bologna (15–19 September) and Reading (1–5 December).

The post European centre celebrates 50 years at the forefront of weather forecasting appeared first on Physics World.

Beyond the Big Bang: reopening the doors on how it all began

7 May 2025 at 12:00

“The universe began with a Big Bang.”

I’ve said this neat line more times than I can count at the start of a public lecture. It summarizes one of the most incomprehensible ideas in science: that the universe began in an extreme, hot, dense and compact state, before expanding and evolving into everything we now see around us. The certainty of the simple statement is reassuring, and it is an easy way of quickly setting the background to any story in astronomy.

But what if it isn’t just an oversimplified summary? What if it is misleading, perhaps even wholly inaccurate?

The Battle of the Big Bang: the New Tales of Our Cosmic Origin aims to dismantle the complacency many of us have fallen into when it comes to our knowledge of the earliest time. And it succeeds – if you push through the opening pages.

When a theory becomes so widely accepted that it is immune to question, we’ve moved from science supported by evidence to belief upheld by faith

Early on, authors Niayesh Afshordi and Phil Halper say “in some sense the theory of the Big Bang cannot be trusted”, which caused me to raise an eyebrow and wonder what I had let myself in for. After all, for many astronomers, myself included, the Big Bang is practically gospel. And therein lies the problem. When a theory becomes so widely accepted that it is immune to question, we’ve moved from science supported by evidence to belief upheld by faith.

It is easy to read the first few pages of The Battle of the Big Bang with deep scepticism but don’t worry, your eyebrows will eventually lower. That the universe has evolved from a “hot Big Bang” is not in doubt – observations such as the measurements of the cosmic microwave background leave no room for debate. But the idea that the universe “began” as a singularity – a region of space where the curvature of space–time becomes infinite – is another matter. The authors argue that no current theory can describe such a state, and there is no evidence to support it.

An astronomical crowbar

Given the confidence with which we teach it, many might have assumed the Big Bang theory beyond any serious questioning, thereby shutting the door on their own curiosity. Well, Afshordi and Halper have written the popular science equivalent of a crowbar, gently prising that door back open without judgement, keen only to share the adventure still to be had.

A cosmologist at the University of Waterloo, Canada, Afshordi is obsessed with finding observational ways of solving problems in fundamental physics, and is known for his creative alternative theories, such as a non-constant speed of light. Meanwhile Halper, a science popularizer, has carved out a niche by interviewing leading voices in early universe cosmology on YouTube, often facilitating fierce debates between competing thinkers. The result is a book that is both authoritative and accessible – and refreshingly free from ego.

Over 12 chapters, the book introduces more than two dozen alternatives to the Big Bang singularity, with names as tongue-twisting as the theories are mind-bending. For most readers, and even this astrophysicist, the distinctions between the theories quickly blur. But that’s part of the point. The focus isn’t on convincing you which model is correct, it’s about making clear that many alternatives exist that are all just as credible (give or take). Reading this book feels like walking through an art gallery with a knowledgeable and thoughtful friend explaining each work’s nuance. They offer their own opinions in hushed tones, but never suggest that their favourite should be yours too, or even that you should have a favourite.

If you do find yourself feeling dizzy reading about the details of holographic cosmology or eternal inflation, then it won’t be long before an insight into the nature of scientific debate or a crisp analogy brings you up for air. This is where the co-authorship begins to shine: Halper’s presence is felt in the moments when complicated theories are reduced to an idea anyone can relate to; while Afshordi brings deep expertise and an insider’s view of the cosmological community. These vivid and sometimes gossipy glimpses into the lives and rivalries of his colleagues paint a fascinating picture. It is a huge cast of characters – including Roger Penrose, Alan Guth and Hiranya Peiris – most of whom appear only for a page. But even though you won’t remember all the names, you are left with the feeling that Big Bang cosmology is a passionate, political and philosophical side of science very much still in motion.

Keep the door open

The real strength of this book is its humility and lack of defensiveness. As much as reading about the theory behind a multiverse is interesting, as a scientist, I’m always drawn to data. A theory that cannot be tested can feel unscientific, and the authors respect that instinct. Surprisingly, some of the most fantastical ideas, like pre-Big Bang cosmologies, are testable. But the tools required are almost science fiction themselves – such as a fleet of gravitational-wave detectors deployed in space. It’s no small task, and one of the most delightful moments in the book is a heartfelt thank you to taxpayers, for funding the kind of fundamental research that might one day get us to an answer.

In the concluding chapters, the authors pre-emptively respond to scepticism, giving real thought to discussing when thinking outside the box becomes going beyond science altogether. There are no final answers in this book, and it does not pretend to offer any. In fact, it actively asks the reader to recognize that certainty does not belong at the frontiers of science. Afshordi doesn’t mind if his own theories are proved wrong, the only terror for him is if people refuse to ask questions or pursue answers simply because the problem is seen as intractable.

Curiosity, unashamed and persistent, is far more scientific than shutting the door for fear of the uncertain

A book that leaves you feeling like you understand less about the universe than when you started it might sound like it has failed. But when that “understanding” was an illusion based on dogma, and a book manages to pry open a long-sealed door in your mind, that’s a success.

The Battle of the Big Bang offers both intellectual humility and a reviving invitation to remain eternally open-minded. It reminded me of how far I’d drifted from being one of the fearless schoolchildren who, after I declare with certainty that the universe began with a Big Bang, ask, “But what came before it?”. That curiosity, unashamed and persistent, is far more scientific than shutting the door for fear of the uncertain.

  • May 2025 University of Chicago Press 360pp $32.50/£26.00 hb

The post Beyond the Big Bang: reopening the doors on how it all began appeared first on Physics World.

Exoplanet could be in a perpendicular orbit around two brown dwarfs

30 April 2025 at 17:52

The first strong evidence for an exoplanet with an orbit perpendicular to that of the binary system it orbits has been observed by astronomers in the UK and Portugal. Based on observations from the ESO’s Very Large Telescope (VLT), researchers led by Tom Baycroft, a PhD student at the University of Birmingham, suggest that such an exoplanet is required to explain the changing orientation in the orbit of a pair of brown dwarfs – objects that are intermediate in mass between the heaviest gas-giant planets and the lightest stars.

The Milky Way is known to host a diverse array of planetary systems, providing astronomers with extensive insights into how planets form and systems evolve. One thing that is evident is that most exoplanets (planets that orbit stars other than the Sun) and systems that have been observed so far bear little resemblance to Earth and the solar system.

Among the most interesting planets are the circumbinaries, which orbit two stars in a binary system. So far, 16 of these planets have been discovered. In each case, they have been found to orbit in the same plane as the orbits of their binary host stars. In other words, the planetary system is flat. This is much like the solar system, where each planet orbits the Sun within the same plane.

“But there has been evidence that planets might exist in a different configuration around a binary star,” Baycroft explains. “Inclined at 90° to the binary, these polar orbiting planets have been theorized to exist, and discs of dust and gas have been found in this configuration.”

Especially interesting

Baycroft’s team had set out to investigate a binary pair of brown dwarfs around 120 light-years away. The system is called 2M1510, and each brown dwarf is only about 45 million years old and has a mass about 18 times that of Jupiter. The pair is especially interesting because the two objects are eclipsing: periodically passing in front of each other along our line of sight. When observed by the VLT, this unique vantage allowed the astronomers to determine the masses and radii of the brown dwarfs and the nature of their orbit.

“This is a rare object, one of only two eclipsing binary brown dwarfs, which is useful for understanding how brown dwarfs form and evolve,” Baycroft explains. “In our study, we were not looking for a planet, only aiming to improve our understanding of the brown dwarfs.”

Yet as they analysed the VLT data, the team noticed something strange about the pair’s orbit. Doppler shifts in the light the brown dwarfs emitted revealed that their elliptical orbit was slowly changing orientation, an effect known as apsidal precession.

Not unheard of

This behaviour is not unheard of. In its orbit around the Sun, Mercury undergoes apsidal precession, which is explained by Albert Einstein’s general theory of relativity. But Baycroft says that the precession must have had an entirely different cause in the brown-dwarf pair.

“Unlike Mercury, this precession is going backwards, in the opposite direction to the orbit,” he explains. “Ruling out any other causes for this, we find that the best explanation is that there is a companion to the binary on a polar orbit, inclined at close to 90° relative to the binary.” As it exerts its gravitational pull on the binary pair, the inclination of this third, smaller body induces a gradual rotation in the orientation of the binary’s elliptical orbit.

For now, the characteristics of this planet are difficult to pin down, and the team believes its mass could lie anywhere between 10 and 100 Earth masses. All the same, the astronomers are confident that their results confirm the possibility of polar exoplanets existing in circumbinary orbits – providing valuable guidance for future observations.

“This result exemplifies how the many different configurations of planetary systems continue to astound us,” Baycroft comments. “It also paves the way for more studies aiming to find out how common such polar orbits may be.”

The observations are described in Science Advances.

The post Exoplanet could be in a perpendicular orbit around two brown dwarfs appeared first on Physics World.

Mathematical genius: celebrating the life and work of Emmy Noether

30 April 2025 at 11:00
Mathematical genius Emmy Noether, around 1900. (Public domain. Photographer unknown)

In his debut book, Einstein’s Tutor: the Story of Emmy Noether and the Invention of Modern Physics, Lee Phillips champions the life and work of German mathematician Emmy Noether (1882–1935). Despite living a life filled with obstacles, injustices and discrimination as a Jewish mathematician, Noether revolutionized the field and discovered “the single most profound result in all of physics”. Phillips’ book weaves the story of her extraordinary life around the central subject of “Noether’s theorem”, which itself sits at the heart of a fascinating era in the development of modern theoretical physics.

Noether grew up at a time when women had few rights. Unable to officially register as a student, she was instead able to audit courses at the University of Erlangen in Bavaria, with the support of her father, who was a mathematics professor there. At the time, young Noether was one of only two female auditors in a university of 986 students. Just two years previously, the university faculty had declared that mixed-sex education would “overthrow academic order”. Despite going against this formidable status quo, she was able to graduate in 1903.

Noether continued her pursuit of advanced mathematics, travelling to the “[world’s] centre of mathematics” – the University of Göttingen. Here, she was able to sit in the lectures of some of the brightest mathematical minds of the time – Karl Schwarzschild, Hermann Minkowski, Otto Blumenthal, Felix Klein and David Hilbert. While there, the law finally changed: women were, at last, allowed to enrol as students at university. In 1904 Noether returned to the University of Erlangen to complete her postgraduate dissertation under the supervision of Paul Gordan. At the time, she was the only woman to matriculate alongside 46 men.

Despite being more than qualified, Noether was unable to secure a university position after completing her PhD in 1907. Instead, she worked unpaid for almost a decade – teaching her father’s courses and supervising his PhD students. As of 1915, Noether was the only woman in the whole of Europe with a PhD in mathematics. She had worked hard to be recognized as an expert on symmetry and invariant theory, and eventually accepted an invitation from Klein and Hilbert to work alongside them in Göttingen. Here, the three of them would meet Albert Einstein to discuss his latest project – a general theory of relativity.

Infiltrating the boys’ club

In Einstein’s Tutor, Phillips paints an especially vivid picture of Noether’s life at Göttingen, among colleagues including Klein, Hilbert and Einstein, who loom large and bring a richness to the story. Indeed, much of the first three chapters are dedicated to these men, setting the scene for Noether’s arrival in Göttingen. Phillips makes it easy to imagine these exceptionally talented and somewhat eccentric individuals working at the forefront of mathematics and theoretical physics together. And it was here, when supporting Einstein with the development of general relativity (GR), that Noether discovered a profound result: for every symmetry in the universe, there is a corresponding conservation law.
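
As a concrete, textbook illustration of the theorem (not specific to Phillips’ book): if a system’s Lagrangian L(q_i, \dot q_i) does not depend explicitly on time – a symmetry under time translation – then the quantity

E = \sum_i \frac{\partial L}{\partial \dot q_i}\,\dot q_i - L

is conserved, which is simply the system’s energy. In the same way, symmetry under spatial translation yields conservation of momentum, and symmetry under rotation yields conservation of angular momentum.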

Throughout the book, Phillips makes the case that, without Noether, Einstein would never have been able to get to the heart of GR. Einstein himself “expressed wonderment at what happened to his equations in her hands, how he never imagined that things could be expressed with such elegance and generality”. Phillips argues that Einstein should not be credited as the sole architect of GR. Indeed, the contributions of Grossmann, Klein, Besso, Hilbert and, crucially, Noether remain largely unacknowledged – a wrong that Phillips is trying to right with this book.

Phillips makes the case that, without Noether, Einstein would never have been able to get to the heart of general relativity

A key theme running through Einstein’s Tutor is the importance of the support and allyship that Noether received from her male contemporaries. While at Göttingen, there was a battle to allow Noether to receive her habilitation (eligibility for tenure). Many argued in her favour but considered her an exception, and believed that in general, women were not suited as university professors. Hilbert, in contrast, saw her sex as irrelevant (famously declaring “this is not a bath house”) and pointed out that science requires the best people, of which she was one. Einstein also fought for her on the basis of equal rights for women.

Eventually, in 1919 Noether was allowed to habilitate (as an exception to the rule) and was promoted to professor in 1922. However, she was still not paid for her work. In fact, her promotion came with the specific condition that she remained unpaid, making it clear that Noether “would not be granted any form of authority over any male employee”. Hilbert, however, managed to secure a contract with a small salary for her from the university administration.

Her allies rose to the cause again in 1933, when Noether was one of the first Jewish academics to be dismissed under the Nazi regime. After her expulsion, German mathematician Helmut Hasse convinced 14 other colleagues to write letters advocating for her importance, asking that she be allowed to continue as a teacher to a small group of advanced students – the government denied this request.

When the time came to leave Germany, many colleagues wrote testimonials in her support for immigration, with one writing “She is one of the 10 or 12 leading mathematicians of the present generation in the entire world.” Rather than being placed at a prestigious university or research institute (Hermann Weyl and Einstein were both placed at “the men’s university”, the Institute for Advanced Study in Princeton), it was recommended she join Bryn Mawr, a women’s college in Pennsylvania, US. Her position there would “compete with no-one… the most distinguished feminine mathematician connected with the most distinguished feminine university”. Phillips makes clear his distaste for the phrasing of this recommendation. However, all accounts show that she was happy at Bryn Mawr and stayed there until her unexpected death in 1935 at the age of 53.

Noether’s legacy

With a PhD in theoretical physics, Phillips has worked for many years in both academia and industry. His background shows itself clearly in some unusual writing choices. While his writing style is relaxed and conversational, it includes the occasional academic turn of phrase (e.g. “In this chapter I will explain…”), which feels out of place in a popular-science book. He also has a habit of piling repetitive and overly sincere praise onto Noether. I personally prefer stories that adopt the “show, don’t tell” approach – her abilities speak for themselves, so it should be easy to let the reader come to their own conclusions.

Phillips has made the ambitious choice to write a popular-science book about complex mathematical concepts such as symmetries and conservation laws that are challenging to explain, especially to general readers. He does his best to describe the mathematics and physics behind some of the key concepts around Noether’s theorem. However, in places, you do need to have some familiarity with university-level physics and maths to properly follow his explanations. The book also includes a 40-page appendix filled with additional physics content, which I found unnecessary.

Einstein’s Tutor does achieve its primary goal of familiarizing the reader with Emmy Noether and the tremendous significance of her work. The final chapter on her legacy breezes quickly through developments in particle physics, astrophysics, quantum computers, economics and XKCD Comics to highlight the range and impact this single theorem has had. Phillips’ goal was to take Noether into the mainstream, and this book is a small step in the right direction. As cosmologist and author Katie Mack summarizes perfectly: “Noether’s theorem is to theoretical physics what natural selection is to biology.”

  • 2024 Hachette UK £25hb 368pp

The post Mathematical genius: celebrating the life and work of Emmy Noether appeared first on Physics World.

Brain region used for speech decoding also supports BCI cursor control

30 April 2025 at 10:00

Sending an email, typing a text message, streaming a movie. Many of us do these activities every day. But what if you couldn’t move your muscles and navigate the digital world? This is where brain–computer interfaces (BCIs) come in.

BCIs that are implanted in the brain can bypass pathways damaged by illness and injury. They analyse neural signals and produce an output for the user, such as interacting with a computer.

A major focus for scientists developing BCIs has been to interpret brain activity associated with movements to control a computer cursor. The user drives the BCI by imagining arm and hand movements, which often originate in the dorsal motor cortex. Speech BCIs, which restore communication by decoding attempted speech from neural activity in sensorimotor cortical areas such as the ventral precentral gyrus, have also been developed.
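
To give a flavour of what “interpreting brain activity” means in cursor BCIs generally, the sketch below shows the common linear-decoding idea: binned neural firing rates are mapped to an intended cursor velocity with a regularized linear fit. It is purely illustrative – not the decoder used in the study described here – and the channel counts, stand-in data and variable names are assumptions invented for the example.

import numpy as np
from sklearn.linear_model import Ridge

# Illustrative sizes only (assumptions, not values from the study)
n_bins, n_channels = 5000, 256          # e.g. 20 ms bins from 256 electrodes
rng = np.random.default_rng(0)

# Binned firing rates (features) and intended 2D cursor velocity (targets)
# from a calibration block; random stand-ins are used here
firing_rates = rng.poisson(lam=5.0, size=(n_bins, n_channels)).astype(float)
cursor_velocity = rng.normal(size=(n_bins, 2))

# Fit a regularized linear map from neural activity to velocity
decoder = Ridge(alpha=1.0)
decoder.fit(firing_rates, cursor_velocity)

# At run time, each new bin of neural activity becomes a velocity command
vx, vy = decoder.predict(firing_rates[-1:])[0]
print(f"decoded velocity command: ({vx:.2f}, {vy:.2f})")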

Researchers at the University of California, Davis recently found that the same part of the brain that supported a speech BCI could also support computer cursor control for an individual with amyotrophic lateral sclerosis (ALS). ALS is a progressive neurodegenerative disease affecting the motor neurons in the brain and spinal cord.

“Once that capability [to control a computer mouse] became reliably achievable roughly a decade ago, it stood to reason that we should go after another big challenge, restoring speech, that would help people unable to speak. And from there – and this is where this new paper comes in – we recognized that patients would benefit from both of these capabilities [speech and computer cursor control],” says Sergey Stavisky, who co-directs the UC Davis Neuroprosthetics Lab with David Brandman.

Their clinical case study suggests that computer cursor control may not be as body-part-specific as scientists previously believed. If results are replicable, this could enable the creation of multi-modal BCIs that restore communication and movement to people with paralysis. The researchers share information about their cursor BCI and the case study in the Journal of Neural Engineering.

The study participant, a 45-year-old man with ALS, had previous success working with a speech BCI. The researchers recorded neural activity from the participant’s ventral precentral gyrus while he imagined controlling a computer cursor, and built a BCI to interpret that neural activity and predict where and when he wanted to move and click the cursor. The participant then used the new cursor BCI to send texts and emails, watch Netflix, and play The New York Times Spelling Bee game on his personal computer.

“This finding, that the tiny region of the brain we record from has a lot more than just speech information, has led to the participant also being able to control his own computer on a daily basis, and get back some independence for him and his family,” says first author Tyler Singer-Clark, a graduate student in biomedical engineering at UC Davis.

The researchers found that most of the information driving cursor control came from one of the participant’s four implanted microelectrode arrays, while click information was available on all four of the BCI arrays.

“The neural recording arrays are the same ones used in many prior studies,” explains Singer-Clark. “The result that our cursor BCI worked well given this choice makes it all the more convincing that this brain area (speech motor cortex) has untapped potential for controlling BCIs in multiple useful ways.”

The researchers are working to incorporate more computer actions into their cursor BCI, to make the control faster and more accurate, and to reduce calibration time. They also note that it’s important to replicate these results in more people to understand how generalizable the results of their case study may be.

The research was conducted as part of the BrainGate2 clinical trial.

The post Brain region used for speech decoding also supports BCI cursor control appeared first on Physics World.

Curiouser and curiouser: delving into quantum Cheshire cats

29 April 2025 at 12:00

Most of us have heard of Schrödinger’s eponymous cat, but it is not the only feline in the quantum physics bestiary. Quantum Cheshire cats may not be as well known, yet their behaviour is even more insulting to our classical-world common sense.

These quantum felines get their name from the Cheshire cat in Lewis Carroll’s Alice’s Adventures in Wonderland, which disappears leaving its grin behind. As Alice says: “I’ve often seen a cat without a grin, but a grin without a cat! It’s the most curious thing I ever saw in my life!”

Things are curiouser in the quantum world, where the property of a particle seems to be in a different place from the particle itself. A photon’s polarization, for example, may exist in a totally different location from the photon itself: that’s a quantum Cheshire cat.

While the prospect of disembodied properties might seem disturbing, it’s a way of interpreting the elegant predictions of quantum mechanics. That at least was the thinking when quantum Cheshire cats were first put forward by Yakir Aharonov, Sandu Popescu, Daniel Rohrlich and Paul Skrzypczyk in an article published in 2013 (New J. Phys. 15 113015).

Strength of a measurement

To get to grips with the concept, remember that making a measurement on a quantum system will “collapse” it into one of its eigenstates – think of opening the box and finding Schrödinger’s cat either dead or alive. However, by playing on the trade-off between the strength of a measurement and the uncertainty of the result, one can gain a tiny bit of information while disturbing the system as little as possible. If such a measurement is done many times, or on an ensemble of particles, it is possible to average out the results, to obtain a precise value.

First proposed in the 1980s, this method of teasing out information from the quantum system by a series of gentle pokes is known as weak measurement. While the idea of weak measurement in itself does not appear a radical departure from quantum formalism, “an entire new world appeared” as Popescu puts it. Indeed, Aharonov and his collaborators have spent the last four decades investigating all kinds of scenarios in which weak measurement can lead to unexpected consequences, with the quantum Cheshire cat being one they stumbled upon.

In their 2013 paper, Aharonov and colleagues imagined a simple optical interferometer set-up, in which the “cat” is a photon that can be in either the left or the right arm, while the “grin” is the photon’s circular polarization. The cat (the photon) is first prepared in a certain superposition state, known as pre-selection. After it enters the set-up, the cat can leave via several possible exits. The disembodiment between particle and property appears in the cases in which the particle emerges in a particular exit (post-selection).

Certain measurements, analysing the properties of the particle, are performed while the particle is in the interferometer (in between the pre- and post-selection). Being weak measurements, they have to be carried out many times to get the average. For certain pre- and post-selection, one finds the cat will be in the left arm while the grin is in the right. It’s a Cheshire cat disembodied from its grin.

The mathematical description of this curious state of affairs was clear, but the interpretation seemed preposterous and the original article spent over a year in peer review, with its eventual publication still sparking criticism. Soon after, experiments with polarized neutrons (Nature Comms 5 4492) and photons (Phys. Rev. A 94 012102) tested the original team’s set-up. However, these experiments and subsequent tests, despite confirming the theoretical predictions, did not settle the debate – after all, the issue was with the interpretation.

A quantum of probabilities

To come to terms with this perplexing notion, think of this type of pre- and post-selected set-up as a pachinko machine, in which a ball starts at the top in a single pre-selected slot and goes down through various obstacles to end up at a specific point (post-selection): the jackpot hole. If you count how many balls hit the jackpot hole, you can calculate the probability distribution. In the classical world, measuring the position and properties of the ball at different points, say with a camera, is possible.

This observation will not affect the trajectory of the ball, or the probability of hitting the jackpot. In a quantum version of the pachinko machine, the pre- and post-selection work in a similar way, except that you could feed in balls in superposition states. A weak measurement will not disturb the system, so multiple measurements can tease out the probability of certain outcomes. The measurement result will not yield an eigenvalue, which corresponds to a physical property of the system, but a weak value – and the way one should interpret weak values is not clear-cut.
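
The quantity such a protocol returns is the so-called weak value which, for an observable A measured between a pre-selected state |ψ⟩ and a post-selected state |φ⟩, takes the standard form

A_w = \frac{\langle \phi | \hat{A} | \psi \rangle}{\langle \phi | \psi \rangle}.

Unlike an eigenvalue, A_w can be negative, complex or far outside the spectrum of \hat{A} – which is how a “cat” can register in one arm of an interferometer while its “grin” registers in the other.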

1 Split particle property

(Illustration courtesy: Mayank Shreshtha)

Quantum Cheshire cats are a curious phenomenon, whereby the property of a quantum particle can be completely separate from the particle itself. A photon’s polarization, for example, may exist at a location where there is no photon at all. In this illustration, our quantum Cheshire cats (the photons) are at a pachinko parlour. Depending on certain pre- and post-selection criteria, the cats end up in one location – in one arm of the detector or the other – and their grins in a different location, on the chairs.

To make sense of this, we need an intuitive mental image, even a limited one. This is why quantum Cheshire cats are a powerful metaphor, but they are also more than that, guiding researchers in new directions. Indeed, since the initial discovery, Aharonov, Popescu and colleagues have stumbled upon more surprises.

In 2021 they generalized the quantum Cheshire cat effect to a dynamical picture in which the “disembodied” property can propagate in space (Nature Comms 12 4770). For example, there could be a flow of angular momentum without anything carrying it (Phys. Rev. A 110 L030201). In another generalization, Aharonov imagined a massive particle with a mass that could be measured in one place with no momentum, while its momentum could be measured in another place without its mass (Quantum 8 1536). A gedankenexperiment to test this effect would involve a pair of nested Mach–Zehnder interferometers with moving mirrors and beam splitters.

Provocative interpretations

If you find these ideas bewildering, you’re in good company. “They’re brain teasers,” explains Jonte Hance, a researcher in quantum foundations at Newcastle University, UK. In fact, Hance thinks that quantum Cheshire cats are a great way to get people interested in the foundations of quantum mechanics.

Physicists were too busy applying quantum mechanics to various problems to be bothered with foundational questions

Sure, the early years of quantum physics saw famous debates between Niels Bohr and Albert Einstein, culminating in the criticism embodied in the 1935 Einstein–Podolsky–Rosen (EPR) paradox (Phys. Rev. 47 777). But after that, physicists were too busy applying quantum mechanics to various problems to be bothered with foundational questions.

This lack of interest in quantum fundamentals is perfectly illustrated by two anecdotes, the first involving Aharonov himself. When he was studying physics at Technion in Israel in the 1950s, he asked Nathan Rosen (the R of the EPR) about working on the foundations of quantum mechanics. The topic was deemed so unfashionable that Rosen advised him to focus on applications. Luckily, Aharonov ignored the advice and went on to work with American quantum theorist David Bohm.

The other story concerns Alain Aspect, who in 1975 visited CERN physicist John Bell to ask for advice on his plans to do an experimental test of Bell’s inequalities to settle the EPR paradox. Bell’s very first question was not about the details of the experiment – but whether Aspect had a permanent position (Nature Phys. 3 674). Luckily, Aspect did, so he carried out the test, which went on to earn him a share of the 2022 Nobel Prize for Physics.

As quantum computing and quantum information began to emerge, there was a brief renaissance in quantum foundations culminating in the early 2010s. But over the past decade, with many aspects of quantum physics reaching commercial fruition, research interest has shifted firmly once again towards applications.

Despite popular science’s constant reminder of how “weird” quantum mechanics is, physicists often take the pragmatic “shut up and calculate” approach. Hance says that researchers “tend to forget how weird quantum mechanics is, and to me you need that intuition of it being weird”. Indeed, paradoxes like Schrödinger’s cat and EPR have attracted and inspired generations of physicists and have been instrumental in the development of quantum technologies.

The point of the quantum Cheshire cat, and related paradoxes, is to challenge our intuition and provoke us to think outside the box. That’s important even if applications may not be immediately in sight. “Most people agree that although we know the basic laws of quantum mechanics, we don’t really understand what quantum mechanics is all about,” says Popescu.

Aharonov and colleagues’ programme is to develop a correct intuition that can guide us further. “We strongly believe that one can find an intuitive way of thinking about quantum mechanics,” adds Popescu. That may, or may not, involve felines.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

The post Curiouser and curiouser: delving into quantum Cheshire cats appeared first on Physics World.

India must boost investment in quantum technologies to become world leader, says report

28 April 2025 at 17:00

India must intensify its efforts in quantum technologies as well as boost private investment if it is to become a leader in the burgeoning field. That is according to the first report from India’s National Quantum Mission (NQM), which also warns that the country must improve its quantum security and regulation to make its digital infrastructure quantum-safe.

Approved by the Indian government in 2023, the NQM is an eight-year $750m (60bn INR) initiative that aims to make the country a leader in quantum tech. Its new report focuses on developments in four aspects of NQM’s mission: quantum computing; communication; sensing and metrology; and materials and devices.

Entitled India’s International Technology Engagement Strategy for Quantum Science, Technology and Innovation, the report finds that India’s research interests include error-correction algorithms for quantum computers. It is also involved in building quantum hardware with superconducting circuits, trapped atoms/ions and engineered quantum dots.

The NQM-supported Bengaluru-based startup QPiAI, for example, recently developed a 25-superconducting qubit quantum computer called “Indus”, although the qubits were fabricated abroad.

Ajay Sood, principal scientific advisor to the Indian government, told Physics World that while India is strong in “software-centric, theoretical and algorithmic aspects of quantum computing, work on completely indigenous development of quantum computing hardware is…at a nascent stage.”

Sood, who is a physicist by training, adds that while there are a few groups working on different platforms, these are at less than the 10-qubit stage. “[It is] important for [India] to have indigenous capabilities for fabricating qubits and other ancillary hardware for quantum computers,” he says.

India is also developing secure protocols and satellite-based systems and implementing quantum systems for precision measurements. QNu Labs – another Bengaluru-based startup – is, for example, developing a quantum-safe communication-chip module to secure satellite and drone communications with built-in quantum randomness and a security micro-stack.

Lagging behind

The report highlights the need for greater involvement of Indian industry in hardware-related activities. Unlike other countries, India struggles with limited industry funding, most of which comes from angel investors, with limited participation from institutional investors such as venture-capital firms, tech corporates and private-equity funds.

There are many areas of quantum tech that are simply not being pursued in India

Arindam Ghosh

The report also calls for more indigenous development of essential sensors and devices such as single-photon detectors, quantum repeaters, and associated electronics, with necessary testing facilities for quantum communication. “There is also room for becoming global manufacturers and suppliers for associated electronic or cryogenic components,” says Sood. “Our industry should take this opportunity.”

India must work on its quantum security and regulation as well, according to the report. It warns that the Indian financial sector, which is one of the major drivers for quantum tech applications, “risks lagging behind” in quantum security and regulation, with limited participation of Indian financial-service providers.

“Our cyber infrastructure, especially related to our financial systems, power grids, and transport systems, need to be urgently protected by employing the existing and evolving post quantum cryptography algorithms and quantum key distribution technologies,” says Sood.

India currently has about 50 educational programmes in various universities and institutions. Yet Arindam Ghosh, who runs the Quantum Technology Initiative at the Indian Institute of Science, Bangalore, says that the country faces a lack of people going into quantum-related careers.

“In spite of [a] very large number of quantum-educated graduates, the human resource involved in developing quantum technologies is abysmally small,” says Ghosh. “As a result, there are many areas of quantum tech that are simply not being pursued in India.”  Other problems, according to Ghosh, include “modest” government funding compared to other countries as well as “slow and highly bureaucratic” government machinery.

Sood, however, is optimistic, pointing out recent Indian initiatives such as setting up hardware fabrication and testing facilities, supporting start-ups as well as setting up a $1.2bn (100bn INR) fund to promote “deep-tech” startups. “[With such initiatives] there is every reason to believe that India would emerge even stronger in the field,” says Sood.

The post India must boost investment in quantum technologies to become world leader, says report appeared first on Physics World.

Quantum transducer enables optical control of a superconducting qubit

28 April 2025 at 10:10
Quantum transducer An optical micrograph showing a niobium microwave LC resonator (silver) capacitively coupled to two hybridized lithium niobate racetrack resonators in a paperclip geometry (black), which exchange energy between the microwave and optical domains via the electro-optic effect. (Courtesy: Lončar group/Harvard SEAS)

The future of quantum communication and quantum computing technologies may well revolve around superconducting qubits and quantum circuits, which have already been shown to improve processing capabilities over classical supercomputers – even when there is noise within the system. This scenario could be one step closer with the development of a novel quantum transducer by a team headed up at the Harvard John A Paulson School of Engineering and Applied Sciences (SEAS).

Realising this future will rely on systems having hundreds (or more) logical qubits (each built from multiple physical qubits). However, because superconducting qubits require ultralow operating temperatures, large-scale refrigeration is a major challenge – there is no technology available today that can provide the cooling power to realise such large-scale qubit systems.

Superconducting microwave qubits are a promising option for quantum processor nodes, but they currently require bulky microwave components. These components create a lot of heat that can easily disrupt the refrigeration systems cooling the qubits.

One way to combat this cooling conundrum is to use a modular approach, with small-scale quantum processors connected via quantum links, and each processor having its own dilution refrigerator. Superconducting qubits can be accessed using microwave photons between 3 and 8 GHz, thus the quantum links could be used to transmit microwave signals. The downside of this approach is that it would require cryogenically cooled links between each subsystem.

On the other hand, optical signals at telecoms frequency (around 200 THz) can be generated using much smaller form factor components, leading to lower thermal loads and noise, and can be transmitted via low-loss optical fibres. The transduction of information between optical and microwave frequencies is therefore key to controlling superconducting microwave qubits without the high thermal cost.

The large energy gap between microwave and optical photons makes it difficult to control microwave qubits with optical signals and requires a microwave–optical quantum transducer (MOQT). These MOQTs provide a coherent, bidirectional link between microwave and optical frequencies while preserving the quantum states of the qubit. A team led by SEAS researcher Marko Lončar has now created such a device, describing it in Nature Physics.
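
The scale of that energy gap follows from a quick back-of-the-envelope estimate using E = hf (representative numbers, not figures from the paper): a 5 GHz microwave photon carries

E_{\mathrm{mw}} = h \times 5\ \mathrm{GHz} \approx 2 \times 10^{-5}\ \mathrm{eV},

while a 200 THz telecom photon carries

E_{\mathrm{opt}} = h \times 200\ \mathrm{THz} \approx 0.8\ \mathrm{eV},

a mismatch of roughly four orders of magnitude.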

Electro-optic transducer controls superconducting qubits

Lončar and collaborators have developed a thin-film lithium niobate (TFLN) cavity electro-optic (CEO)-based MOQT (clad with silica to aid thermal dissipation and mitigate optical losses) that converts optical frequencies into microwave frequencies with low loss. The team used the CEO-MOQT to facilitate coherent optical driving of a superconducting qubit (controlling the state of the quantum system by manipulating its energy).

The on-chip transducer system contains three resonators: a microwave LC resonator capacitively coupled to two optical resonators using the electro-optic effect. The device creates hybridized optical modes in the transducer that enable a resonance-enhanced exchange of energy between the microwave and optical modes.

The transducer uses a process known as difference frequency generation to create a new frequency output from two input frequencies. The optical modes – an optical pump in a classical red-pumping regime and an optical idler – interact to generate a microwave signal at the qubit frequency, in the form of a shaped, symmetric single microwave photon.
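
In terms of frequencies, difference frequency generation means the emitted microwave photon sits at the difference of the two optical tones – the usual three-wave-mixing energy-conservation condition:

\omega_{\mathrm{microwave}} = \omega_{\mathrm{pump}} - \omega_{\mathrm{idler}}.

Two optical fields separated by a few gigahertz around 200 THz therefore yield a microwave photon at the qubit frequency (the specific detunings used in the device are not quoted here).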

This microwave signal is then transmitted from the transducer to a superconducting qubit (in the same refrigerator system) using a coaxial cable. The qubit is coupled to a readout resonator that enables its state to be read by measuring the transmission of a readout pulse.

The MOQT operated with a peak conversion efficiency of 1.18% (in both microwave-to-optical and optical-to-microwave regimes), low microwave noise generation and the ability to drive Rabi oscillations in a superconducting qubit. Because of the low noise, the researchers state that stronger optical-pump fields could be used without affecting qubit performance.

Having effectively demonstrated the ability to control superconducting circuits with optical light, the researchers suggest a number of future improvements that could increase the device performance by orders of magnitude. For example, microwave and optical coupling losses could be reduced by fabricating a single-ended microwave resonator directly onto the silicon wafer instead of on silica. A flux tuneable microwave cavity could increase the optical bandwidth of the transducer. Finally, the use of improved measurement methods could improve control of the qubits and allow for more intricate gate operations between qubit nodes.

The researchers suggest this type of device could be used for networking superconductor qubits when scaling up quantum systems. The combination of this work with other research on developing optical readouts for superconducting qubit chips “provides a path towards forming all-optical interfaces with superconducting qubits…to enable large scale quantum processors,” they conclude.

The post Quantum transducer enables optical control of a superconducting qubit appeared first on Physics World.

Could an extra time dimension reconcile quantum entanglement with local causality?

25 April 2025 at 14:33

Nonlocal correlations that define quantum entanglement could be reconciled with Einstein’s theory of relativity if space–time had two temporal dimensions. That is the implication of new theoretical work that extends nonlocal hidden variable theories of quantum entanglement and proposes a potential experimental test.

Marco Pettini, a theoretical physicist at Aix Marseille University in France, says the idea arose from conversations with the mathematical physicist Roger Penrose – who shared the 2020 Nobel Prize for Physics for showing that the general theory of relativity predicted black holes. “He told me that, from his point of view, quantum entanglement is the greatest mystery that we have in physics,” says Pettini. The puzzle is encapsulated by Bell’s inequality, which was derived in the mid-1960s by the Northern Irish physicist John Bell.

Bell’s breakthrough was inspired by the 1935 Einstein–Podolsky–Rosen paradox, a thought experiment in which entangled particles in quantum superpositions (using the language of modern quantum mechanics) travel to spatially separated observers Alice and Bob. They make measurements of the same observable property of their particles. As they are superposition states, the outcome of neither measurement is certain before it is made. However, as soon as Alice measures the state, the superposition collapses and Bob’s measurement is now fixed.

Quantum scepticism

A sceptic of quantum indeterminacy could hypothetically suggest that the entangled particles carried hidden variables all along, so that when Alice made her measurement, she simply found out the state that Bob would measure rather than actually altering it. If the observers are separated by a distance so great that information about the hidden variable’s state would have to travel faster than light between them, then hidden variable theory violates relativity. Bell derived an inequality showing the maximum degree of correlation between the measurements possible if each particle carried such a “local” hidden variable, and showed it was indeed violated by quantum mechanics.
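
In its most commonly tested (CHSH) form, the bound that Bell-type reasoning imposes on any local hidden variable model is

|E(a,b) + E(a,b') + E(a',b) - E(a',b')| \le 2,

where E(a,b) is the correlation between outcomes at detector settings a and b. Quantum mechanics predicts values up to 2\sqrt{2} for suitably entangled states, which is the violation referred to here.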

A more sophisticated alternative investigated by the theoretical physicists David Bohm and his student Jeffrey Bub, as well as by Bell himself, is a nonlocal hidden variable. This postulates that the particle – including the hidden variable – is indeed in a superposition and defined by an evolving wavefunction. When Alice makes her measurement, this superposition collapses. Bob’s value then correlates with Alice’s. For decades, researchers believed the wavefunction collapse could travel faster than light without allowing superluminal exchange of information – therefore without violating the special theory of relativity. However, in 2012 researchers showed that any finite-speed collapse propagation would enable superluminal information transmission.

“I met Roger Penrose several times, and while talking with him I asked ‘Well, why couldn’t we exploit an extra time dimension?’,” recalls Pettini. Particles could have five-dimensional wavefunctions (three spatial, two temporal), and the collapse could propagate through the extra time dimension – allowing it to appear instantaneous. Pettini says that the problem Penrose foresaw was that this would enable time travel, and the consequent possibility that one could travel back through the “extra time” to kill one’s ancestors or otherwise violate causality. However, Pettini says he “recently found in the literature a paper which has inspired some relatively standard modifications of the metric of an enlarged space–time in which massive particles are confined with respect to the extra time dimension…Since we are made of massive particles, we don’t see it.”

Toy model

Pettini believes it might be possible to test this idea experimentally. In a new paper, he proposes a hypothetical experiment (which he describes as a toy model), in which two sources emit pairs of entangled, polarized photons simultaneously. The photons from one source are collected by recipients Alice and Bob, while the photons from the other source are collected by Eve and Tom using identical detectors. Alice and Eve compare the polarizations of the photons they detect. Alice’s photon must, by fundamental quantum mechanics, be entangled with Bob’s photon, and Eve’s with Tom’s, but otherwise simple quantum mechanics gives no reason to expect any entanglement in the system.

Pettini proposes, however, that Alice and Eve should be placed much closer together, and closer to the photon sources, than to the other observers. In this case, he suggests, when the wavefunction of Alice’s particle collapses (transmitting this to Bob), or when Eve’s collapses (transmitting this to Tom), the communication of entanglement through the extra time dimension would also transmit information to the much closer, identical particle received by the other woman. This could affect the interference between Alice’s and Eve’s photons and cause a violation of Bell’s inequality. “[Alice and Eve] would influence each other as if they were entangled,” says Pettini. “This would be the smoking gun.”

Bub, now a distinguished professor emeritus at the University of Maryland, College Park, is not holding his breath. “I’m intrigued by [Pettini] exploiting my old hidden variable paper with Bohm to develop his two-time model of entanglement, but to be frank I can’t see this going anywhere,” he says. “I don’t feel the pull to provide a causal explanation of entanglement, and I don’t any more think of the ‘collapse’ of the wave function as a dynamical process.” He says the central premise of Pettini’s proposal – that adding an extra time dimension could allow the transmission of entanglement between otherwise unrelated photons – is “a big leap”. “Personally, I wouldn’t put any money on it,” he says.

The research is described in Physical Review Research.

The post Could an extra time dimension reconcile quantum entanglement with local causality? appeared first on Physics World.

Light-activated pacemaker is smaller than a grain of rice

24 April 2025 at 17:50

The world’s smallest pacemaker to date is smaller than a single grain of rice, optically controlled and dissolves after it’s no longer needed. According to researchers involved in the work, the pacemaker could work in human hearts of all sizes that need temporary pacing, including those of newborn babies with congenital heart defects.

“Our major motivation was children,” says Igor Efimov, a professor of medicine and biomedical engineering, in a press release from Northwestern University. Efimov co-led the research with Northwestern bioelectronics pioneer John Rogers.

“About 1% of children are born with congenital heart defects – regardless of whether they live in a low-resource or high-resource country,” Efimov explains. “Now, we can place this tiny pacemaker on a child’s heart and stimulate it with a soft, gentle, wearable device. And no additional surgery is necessary to remove it.”

The current clinical standard-of-care involves sewing pacemaker electrodes directly onto a patient’s heart muscle during surgery. Wires from the electrodes protrude from the patient’s chest and connect to an external pacing box. Placing the pacemakers – and removing them later – does not come without risk. Complications include infection, dislodgment, torn or damaged tissues, bleeding and blood clots.

To minimize these risks, the researchers sought to develop a dissolvable pacemaker, which they introduced in Nature Biotechnology in 2021. By varying the composition and thickness of materials in the devices, Rogers’ lab can control how long the pacemaker functions before dissolving. The dissolvable device also eliminates the need for bulky batteries and wires.

“The heart requires a tiny amount of electrical stimulation,” says Rogers in the Northwestern release. “By minimizing the size, we dramatically simplify the implantation procedures, we reduce trauma and risk to the patient, and, with the dissolvable nature of the device, we eliminate any need for secondary surgical extraction procedures.”

Light-controlled pacing When the wearable device (left) detects an irregular heartbeat, it emits light to activate the pacemaker. (Courtesy: John A Rogers/Northwestern University)

The latest iteration of the device – reported in Nature – advances the technology further. The pacemaker is paired with a small, soft, flexible, wireless device that is mounted onto the patient’s chest. The skin-interfaced device continuously captures electrocardiogram (ECG) data. When it detects an irregular heartbeat, it automatically shines a pulse of infrared light to activate the pacemaker and control the pacing.

“The new device is self-powered and optically controlled – totally different than our previous devices in those two essential aspects of engineering design,” says Rogers. “We moved away from wireless power transfer to enable operation, and we replaced RF wireless control strategies – both to eliminate the need for an antenna (the size-limiting component of the system) and to avoid the need for external RF power supply.”

Measurements demonstrated that the pacemaker – which is 1.8 mm wide, 3.5 mm long and 1 mm thick – delivers as much stimulation as a full-sized pacemaker. Initial studies in animals and in the human hearts of organ donors suggest that the device could work in human infants and adults. The devices are also versatile, the researchers say, and could be used across different regions of the heart or the body. They could also be integrated with other implantable devices for applications in nerve and bone healing, treating wounds and blocking pain.

The next steps for the research (supported by the Querrey Simpson Institute for Bioelectronics, the Leducq Foundation and the National Institutes of Health) include further engineering improvements to the device. “From the translational standpoint, we have put together a very early-stage startup company to work individually and/or in partnerships with larger companies to begin the process of designing the device for regulatory approval,” Rogers says.

The post Light-activated pacemaker is smaller than a grain of rice appeared first on Physics World.

Harvard University sues Trump administration as attacks on US science deepen

24 April 2025 at 14:55

Harvard University is suing the Trump administration over its plan to block up to $9bn of government research grants to the institution. The suit, filed in a federal court on 21 April, claims that the administration’s “attempt to coerce and control” Harvard violates the academic freedom protected by the first amendment of the US constitution.

The action comes in the wake of the US administration claiming that Harvard and other universities have not protected Jewish students during pro-Gaza campus demonstrations. Columbia University has already agreed to change its teaching policies and clamp down on demonstrations in the hope of regaining some $400m of government grants.

Harvard president Alan Garber also sought negotiations with the administration on ways that it might satisfy its demands. But a letter sent to Garber dated 11 April, signed by three Trump administration officials, asserted that the university had “failed to live up to both the intellectual and civil rights conditions that justify federal investments”.

The letter demanded that Harvard reform and restructure its governance, stop all diversity, equality and inclusion (DEI) programmes and reform how it hires staff and students. It also said Harvard must stop recruiting international students who are “hostile to American values” and provide an audit on “viewpoint diversity” on admissions and hiring.

Some administration sources suggested that the letter, which effectively insists on government oversight of Harvard’s affairs, was an internal draft sent to Harvard by mistake. Nevertheless, Garber decided to end negotiations, leading Harvard to instead sue the government over the blocked funds.

We stand for the values that have made American higher education a beacon for the world

Alan Garber

A letter on 14 April from Harvard’s lawyers states that the university is “committed to fighting antisemitism and other forms of bigotry in its community”. It adds that it is “open to dialogue” about what it has done, and is planning to do, to “improve the experience of every member” of its community but concludes that Harvard “is not prepared to agree to demands that go beyond the lawful authority of this or any other administration”.

Writing in an open letter to the community dated 22 April, Garber says that “we stand for the values that have made American higher education a beacon for the world”. The administration has hit back by threatening to withdraw Harvard’s non-profit status, tax its endowment and jeopardise its ability to enrol overseas students, who currently make up more than 27% of its intake.

Budget woes

The Trump administration is also planning swingeing cuts to government science agencies. If its budget request for 2026 is approved by Congress, funding for NASA’s Science Mission Directorate would be almost halved from $7.3bn to $3.9bn. The Nancy Grace Roman Space Telescope, a successor to the Hubble and James Webb space telescopes, would be axed. Two missions to Venus – the DAVINCI atmosphere probe and the VERITAS surface-mapping project – as well as the Mars Sample Return mission would lose their funding too.

“The impacts of these proposed funding cuts would not only be devastating to the astronomical sciences community, but they would also have far-reaching consequences for the nation,” says Dara Norman, president of the American Astronomical Society. “These cuts will derail not only cutting-edge scientific advances, but also the training of the nation’s future STEM workforce.”

The National Oceanic and Atmospheric Administration (NOAA) also stands to lose key programmes, with the budget for its Ocean and Atmospheric Research Office slashed from $485m to just over $170m. Surviving programmes from the office, including research on tornado warning and ocean acidification, would move to the National Weather Service and National Ocean Service.

“This administration’s hostility toward research and rejection of climate science will have the consequence of eviscerating the weather forecasting capabilities that this plan claims to preserve,” says Zoe Lofgren, a senior Democrat who sits on the House of Representatives’ Science, Space, and Technology Committee.

The National Science Foundation (NSF), meanwhile, is unlikely to receive $234m for major building projects this financial year, which could spell the end of the Horizon supercomputer being built at the University of Texas at Austin. The NSF has already halved the number of graduate students in its research fellowship programme, while Science magazine says it is calling back all grant proposals that had been approved but not signed off, apparently to check that awardees conform to Trump’s stance on DEI.

A survey of 292 department chairs at US institutions in early April, carried out by the American Institute of Physics, reveals that almost half of respondents are experiencing or anticipate cuts in federal funding in the coming months. Entitled Impacts of Restrictions on Federal Grant Funding in Physics and Astronomy Graduate Programs, the report also says that the number of first-year graduate students in physics and astronomy is expected to drop by 13% in the next enrolment.

Update: 25/04/2025: Sethuraman Panchanathan has resigned as NSF director five years into his six-year term. Panchanathan took up the position in 2020 during Trump’s first term as US President. “I believe that I have done all I can to advance the mission of the agency and feel that it is time to pass the baton to new leadership,” Panchanathan said in a statement yesterday. “This is a pivotal moment for our nation in terms of global competitiveness. We must not lose our competitive edge.”

The post Harvard University sues Trump administration as attacks on US science deepen appeared first on Physics World.

Superconducting device delivers ultrafast changes in magnetic field

23 April 2025 at 18:12

Precise control over the generation of intense, ultrafast changes in magnetic fields called “magnetic steps” has been achieved by researchers in Hamburg, Germany. Using ultrashort laser pulses, Andrea Cavalleri and colleagues at the Max Planck Institute for the Structure and Dynamics of Matter disrupted the currents flowing through a superconducting disc. This alters the superconductor’s local magnetic environment on very short timescales – creating a magnetic step.

Magnetic steps rise to their peak intensity in just a few picoseconds, before decaying more slowly in several nanoseconds. They are useful to scientists because they rise and fall on timescales far shorter than the time it takes for materials to respond to external magnetic fields. As a result, magnetic steps could provide fundamental insights into the non-equilibrium properties of magnetic materials, and could also have practical applications in areas such as magnetic memory storage.

So far, however, progress in this field has been held back by technical difficulties in generating and controlling magnetic steps on ultrashort timescales. Previous strategies have employed technologies including microcoils, specialized antennas and circularly polarized light pulses. However, each of these schemes offers only a limited degree of control over the properties of the magnetic steps it generates.

Quenching supercurrents

Now, Cavalleri’s team has developed a new technique that involves the quenching of currents in a superconductor. Normally, these “supercurrents” will flow indefinitely without losing energy, and will act to expel any external magnetic fields from the superconductor’s interior. However, if these currents are temporarily disrupted on ultrashort timescales, a sudden change will be triggered in the magnetic field close to the superconductor – which could be used to create a magnetic step.

To realize this process, Cavalleri and colleagues applied ultrashort laser pulses to a thin superconducting disc of yttrium barium copper oxide (YBCO), while also exposing the disc to an external magnetic field.

To detect whether magnetic steps had been generated, they placed a crystal of the semiconductor gallium phosphide in the superconductor’s vicinity. This material exhibits an extremely rapid Faraday response. This involves the rotation of the polarization of light passing through the semiconductor in response to changes in the local magnetic field. Crucially, this rotation can occur on sub-picosecond timescales.
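
For a simple magneto-optic medium the rotation angle is, to a good approximation, given by the textbook Faraday relation

\theta = V B L,

where V is the crystal’s Verdet constant, B the magnetic field component along the propagation direction and L the optical path length – so tracking the rotation of the probe light in the gallium phosphide gives a time-resolved readout of the local field (the team’s exact calibration may of course differ in detail).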

In their experiments, the researchers monitored changes to the polarization of an ultrashort “probe” laser pulse passing through the semiconductor shortly after they quenched the supercurrents in their YBCO disc using a separate ultrashort “pump” laser pulse.

“By abruptly disrupting the material’s supercurrents using ultrashort laser pulses, we could generate ultrafast magnetic field steps with rise times of approximately one picosecond – or one trillionth of a second,” explains team member Gregor Jotzu.

Broadband step

This was used to generate an extremely broadband magnetic step, which contains frequencies ranging from sub-gigahertz to terahertz. In principle, this should make the technique suitable for studying magnetization in a diverse variety of materials.

To demonstrate practical applications, the team used these magnetic steps to control the magnetization of a ferrimagnet. Such a magnet has opposing magnetic moments, but has a non-zero spontaneous magnetization in zero magnetic field.

When they placed a ferrimagnet on top of their superconductor and created a magnetic step, the step field caused the ferrimagnet’s magnetization to rotate.

For now, the magnetic steps generated through this approach do not have the speed or amplitude needed to switch materials like a ferrimagnet between stable states. Yet through further tweaks to the geometry of their setup, the researchers are confident that this ability may not be far out of reach.

“Our goal is to create a universal, ultrafast stimulus that can switch any magnetic sample between stable magnetic states,” Cavalleri says. “With suitable improvements, we envision applications ranging from phase transition control to complete switching of magnetic order parameters.”

The research is described in Nature Photonics.

The post Superconducting device delivers ultrafast changes in magnetic field appeared first on Physics World.

FLIR MIX – a breakthrough in infrared and visible imaging

23 April 2025 at 16:25


Until now, researchers have had to choose between thermal and visible imaging: one reveals heat signatures while the other provides structural detail. Recording both and trying to align them manually – or harder still, synchronizing them temporally – can be inconsistent and time-consuming. The result is data that is close but never quite complete. The new FLIR MIX is a game changer, capturing and synchronizing high-speed thermal and visible imagery at up to 1000 fps. Visible and high-performance infrared cameras with FLIR Research Studio software work together to deliver one data set with perfect spatial and temporal alignment – no missed details or second guessing, just a complete picture of fast-moving events.

Jerry Beeney

Jerry Beeney is a seasoned global business development leader with a proven track record of driving product growth and sales performance in the Teledyne FLIR Science and Automation verticals. With more than 20 years at Teledyne FLIR, he has played a pivotal role in launching new thermal imaging solutions, working closely with technical experts, product managers, and customers to align products with market demands and customer needs. Before assuming his current role, Beeney held a variety of technical and sales positions, including senior scientific segment engineer. In these roles, he managed strategic accounts and delivered training and product demonstrations for clients across diverse R&D and scientific research fields. Beeney’s dedication to achieving meaningful results and cultivating lasting client relationships remains a cornerstone of his professional approach.

The post FLIR MIX – a breakthrough in infrared and visible imaging appeared first on Physics World.

Dual-robot radiotherapy system designed to reduce the cost of cancer treatment

23 April 2025 at 13:00

Researchers at the University of Victoria in Canada are developing a low-cost radiotherapy system for use in low- and middle-income countries and geographically remote rural regions. Initial performance characterization of the proof-of-concept device produced encouraging results, and the design team is now refining the system with the goal of clinical commercialization.

This could be good news for people living in low-resource settings, where access to cancer treatment is an urgent global health concern. The WHO’s International Agency for Research on Cancer estimates that there are at least 20 million new cases of cancer diagnosed annually and 9.7 million annual cancer-related deaths, based on 2022 data. By 2030, approximately 75% of cancer deaths are expected to occur in low- and middle-income countries, due to rising populations, healthcare and financial disparities, and a general lack of personnel and equipment resources compared with high-income countries.

The team’s orthovoltage radiotherapy system, known as KOALA (kilovoltage optimized alternative for adaptive therapy), is designed to create, optimize and deliver radiation treatments in a single session. The device, described in Biomedical Physics & Engineering Express, consists of a dual-robot system with a 225 kVp X-ray tube mounted onto one robotic arm and a flat-panel detector mounted on the other.

The same X-ray tube can be used to acquire cone-beam CT (CBCT) images, as well as to deliver treatment, with a peak tube voltage of 225 kVp and a maximum tube current of 2.65 mA for a 1.2 mm focal spot. Due to its maximum reach of 2.05 m and collision restrictions, the KOALA system has a limited range of motion, achieving 190° arcs for both CBCT acquisition and treatments.

Device testing

To characterize the KOALA system, lead author Olivia Masella and colleagues measured X-ray spectra for tube voltages of 120, 180 and 225 kVp. At 120 and 180 kVp, they observed good agreement with spectra from SpekPy (a Python software toolkit for modelling X-ray tube spectra). For the 225 kVp spectrum, they found a notable overestimation at higher energies.
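
For readers who want to try this kind of model-versus-measurement comparison themselves, the snippet below is a minimal sketch of how modelled spectra at the three tube voltages could be generated with SpekPy. The anode angle and aluminium filtration values are illustrative assumptions, not parameters taken from the KOALA study.

```python
import numpy as np
import spekpy as sp  # SpekPy: Python toolkit for modelling X-ray tube spectra

# Illustrative tube parameters (assumptions, not values from the KOALA paper)
ANODE_ANGLE_DEG = 12     # tungsten anode angle
FILTRATION_MM_AL = 2.5   # total aluminium filtration in mm

for kvp in (120, 180, 225):
    spectrum = sp.Spek(kvp=kvp, th=ANODE_ANGLE_DEG)  # model the unfiltered spectrum
    spectrum.filter('Al', FILTRATION_MM_AL)          # apply aluminium filtration
    energy_keV, fluence = spectrum.get_spectrum()    # tabulated energy bins and fluence
    mean_energy = np.sum(energy_keV * fluence) / np.sum(fluence)
    print(f"{kvp} kVp beam: mean energy ≈ {mean_energy:.1f} keV")
```

Comparing such modelled spectra with measured ones, energy bin by energy bin, is how discrepancies like the 225 kVp overestimation show up.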

The researchers performed dosimetric tests by measuring percent depth dose (PDD) curves for a 120 kVp imaging beam and a 225 kVp therapy beam, using solid water phantom blocks with a Farmer ionization chamber at various depths. They used an open beam with 40° divergence and a source-to-surface distance of 30 cm. They also measured 2D dose profiles with radiochromic film at various depths in the phantom for a collimated 225 kVp therapy beam and a dose of approximately 175 mGy at the surface.
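
As a rough illustration of what a PDD curve is, the sketch below simply normalizes a set of depth-dose readings to their maximum value; the depths and readings are invented for illustration and are not the team's measurements.

```python
import numpy as np

def percent_depth_dose(readings):
    """Normalize ionization-chamber readings to their maximum, giving a PDD curve in %."""
    readings = np.asarray(readings, dtype=float)
    return 100.0 * readings / readings.max()

# Hypothetical readings (arbitrary units) at depths in a solid water phantom;
# for a kilovoltage beam the maximum dose sits at or near the surface.
depths_cm = np.array([0.0, 1.0, 2.0, 5.0, 10.0])
readings  = np.array([1.00, 0.78, 0.62, 0.35, 0.14])

for depth, pdd in zip(depths_cm, percent_depth_dose(readings)):
    print(f"depth {depth:4.1f} cm : PDD {pdd:5.1f} %")
```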

The PDD curves showed excellent agreement between experiment and simulations at both 120 and 225 kVp, with dose errors of less than 2%. The 2D profile results were less satisfactory; the team aims to improve them by using a better-suited source-to-collimator distance (100 mm) and a custom-built motorized collimator.

Workflow proof-of-concept for the KOALA system
Workflow proof-of-concept The team tested the workflow by acquiring a CBCT image of a dosimetry phantom containing radiochromic film, delivering a 190° arc to the phantom, and scanning and analysing the film. The CBCT image was then processed for Monte Carlo dose calculation and compared to the film dose. (Courtesy: CC BY 4.0/Biomed. Phys. Eng. Express 10.1088/2057-1976/adbcb2)

A coplanar star-shot test demonstrated the system's excellent geometrical accuracy, yielding a wobble circle with a diameter of just 0.3 mm.

Low costs and clinical practicality

Principal investigator Magdalena Bazalova-Carter describes the rationale behind the KOALA’s development. “I began the computer simulations of this project about 15 years ago, but the idea originated from Michael Weil, a radiation oncologist in Northern California,” she tells Physics World. “He and our industrial partner, Tai-Nang Huang, the president of Linden Technologies, are overseeing the progress of the project. Our university team is diversified, working in medical physics, computer science, and electrical and mechanical engineering. Orimtech, a medical device manufacturer and collaborator, developed the CBCT acquisition and reconstruction software and built the imaging prototype.”

Masella says that the team is keeping costs low in various ways. “Megavoltage X-rays are most commonly used in conventional radiotherapy, but KOALA’s design utilizes low-energy kilovoltage X-rays for treatment. By using a 225 kVp X-ray tube, the X-ray generation alone is significantly cheaper compared to a conventional linac, at a cost of USD $150,000 compared to $3 million,” she explains. “By operating in the kilovoltage instead of megavoltage range, only about 4 mm of lead shielding is required, instead of 6 to 7 feet of high-density concrete, bringing the shielding cost down from $2 million to $50,000. We also have incorporated components that are much lower cost than [those in] a conventional radiotherapy system.”

“Our novel iris collimator leaves are only 1-mm thick due to the lower treatment X-ray beam energy, and its 12 leaves are driven by a single motor,” adds Bazalova-Carter. “Although multileaf collimators with 120 leaves utilized with megavoltage X-ray radiotherapy are able to create complex fields, they are about 8-cm thick and are controlled by 120 separate motors. Given the high cost and mechanical vulnerability of multileaf collimators, our single motor design offers a more robust and reliable alternative.”

The team is currently developing a new motorized collimator, an improved treatment couch and a treatment planning system. They plan to improve CBCT imaging quality with hardware modifications, develop a CBCT-to-synthetic CT machine learning algorithm, refine the auto-contouring tool and integrate all of the software to smooth the workflow.

The researchers are planning to work with veterinarians to test the KOALA system with dogs diagnosed with cancer. They will also develop quality assurance protocols specific to the KOALA device using a dog-head phantom.

“We hope to demonstrate the capabilities of our system by treating beloved pets for whom available cancer treatment might be cost-prohibitive. And while our system could become clinically adopted in veterinary medicine, our hope is that it will be used to treat people in regions where conventional radiotherapy treatment is insufficient to meet demand,” they say.

The post Dual-robot radiotherapy system designed to reduce the cost of cancer treatment appeared first on Physics World.

Top-quark pairs at ATLAS could shed light on the early universe

22 April 2025 at 18:02

Physicists working on the ATLAS experiment on the Large Hadron Collider (LHC) are the first to report the production of top quark–antiquark pairs in collisions involving heavy nuclei. By colliding lead ions, CERN’s LHC creates a fleeting state of matter called the quark–gluon plasma. This is an extremely hot and dense soup of subatomic particles that includes deconfined quarks and gluons. This plasma is believed to have filled the early universe microseconds after the Big Bang.

“Heavy-ion collisions at the LHC recreate the quark–gluon plasma in a laboratory setting,” says Anthony Badea, a postdoctoral researcher at the University of Chicago and one of the lead authors of a paper describing the research. As well as boosting our understanding of the early universe, studying the quark–gluon plasma at the LHC could also provide insights into quantum chromodynamics (QCD), which is the theory of how quarks and gluons interact.

Although the quark–gluon plasma at the LHC vanishes after about 10⁻²³ s, scientists can study it by analysing how other particles produced in collisions move through it. The top quark is the heaviest known elementary particle, and its short lifetime and distinct decay pattern offer a unique way to explore the quark–gluon plasma. This is because the top quark decays before the quark–gluon plasma dissipates.

“The top quark decays into lighter particles that subsequently further decay,” explains Stefano Forte at the University of Milan, who was not involved in the research. “The time lag between these subsequent decays is modified if they happen within the quark–gluon plasma, and thus studying them has been suggested as a way to probe [quark–gluon plasma’s] structure. In order for this to be possible, the very first step is to know how many top quarks are produced in the first place, and determining this experimentally is what is done in this [ATLAS] study.”

First observations

The ATLAS team analysed data from lead–lead collisions and searched for events in which a top quark and its antimatter counterpart were produced. These particles can then decay in several different ways and the researchers focused on a less frequent but more easily identifiable mode known as the di-lepton channel. In this scenario, each top quark decays into a bottom quark and a W boson, which is a weak force-carrying particle that then transforms into a detectable lepton and an invisible neutrino.

The results not only confirmed that top quarks are created in this complex environment but also showed that their production rate matches predictions based on our current understanding of the strong nuclear force.

“This is a very important study,” says Juan Rojo, a theoretical physicist at the Free University of Amsterdam who did not take part in the research. “We have studied the production of top quarks, the heaviest known elementary particle, in the relatively simple proton–proton collisions for decades. This work represents the first time that we observe the production of these very heavy particles in a much more complex environment, with two lead nuclei colliding among them.”

As well as confirming QCD’s prediction of heavy-quark production in heavy-nuclei collisions, Rojo explains that “we have a novel probe to resolve the structure of the quark–gluon plasma”. He also says that future studies will enable us “to understand novel phenomena in the strong interactions such as how much gluons in a heavy nucleus differ from gluons within the proton”.

Crucial first step

“This is a first step – a crucial one – but further studies will require larger samples of top quark events to explore more subtle effects,” adds Rojo.

The number of top quarks created in the ATLAS lead–lead collisions agrees with theoretical expectations. In the future, more detailed measurements could help refine our understanding of how quarks and gluons behave inside nuclei. Eventually, physicists hope to use top quarks not just to confirm existing models, but to reveal entirely new features of the quark–gluon plasma.

Rojo says we could “learn about the time structure of the quark–gluon plasma, measurements which are ‘finer’ would be better, but for this we need to wait until more data is collected, in particular during the upcoming high-luminosity run of the LHC”.

Badea agrees that ATLAS’s observation opens the door to deeper explorations. “As we collect more nuclei collision data and improve our understanding of top-quark processes in proton collisions, the future will open up exciting prospects”.

The research is described in Physical Review Letters.

The post Top-quark pairs at ATLAS could shed light on the early universe appeared first on Physics World.

Grete Hermann: the quantum physicist who challenged Werner Heisenberg and John von Neumann

22 April 2025 at 17:00
Grete Hermann
Great mind Grete Hermann, pictured here in 1955, was one of the first scientists to consider the philosophical implications of quantum mechanics. (Photo: Lohrisch-Achilles. Courtesy: Bremen State Archives)

In the early days of quantum mechanics, physicists found its radical nature difficult to accept – even though the theory had successes. In particular Werner Heisenberg developed the first comprehensive formulation of quantum mechanics in 1925, while the following year Erwin Schrödinger was able to predict the spectrum of light emitted by hydrogen using his eponymous equation. Satisfying though these achievements were, there was trouble in store.

Long accustomed to Isaac Newton’s mechanical view of the universe, physicists had assumed that identical systems always evolve with time in exactly the same way, that is to say “deterministically”. But Heisenberg’s uncertainty principle and the probabilistic nature of Schrödinger’s wave function suggested worrying flaws in this notion. Those doubts were famously expressed by Albert Einstein, Boris Podolsky and Nathan Rosen in their “EPR” paper of 1935 (Phys. Rev. 47 777) and in debates between Einstein and Niels Bohr.

But the issues at stake went deeper than just a disagreement among physicists. They also touched on long-standing philosophical questions about whether we inhabit a deterministic universe, the related question of human free will, and the centrality of cause and effect. One person who rigorously addressed the questions raised by quantum theory was the German mathematician and philosopher Grete Hermann (1901–1984).

Hermann stands out in an era when it was rare for women to contribute to physics or philosophy, let alone to both. Writing in The Oxford Handbook of the History of Quantum Interpretations, published in 2022, the City University of New York philosopher of science Elise Crull has called Hermann’s work “one of the first, and finest, philosophical treatments of quantum mechanics”.

What’s more, Hermann upended the famous “proof”, developed by the Hungarian-American mathematician and physicist John von Neumann, that “hidden variables” are impossible in quantum mechanics. But why have Hermann’s successes in studying the roots and meanings of quantum physics been so often overlooked? With 2025 being the International Year of Quantum Science and Technology, it’s time to find out.

Free thinker

Hermann was born on 2 March 1901 in the north German port city of Bremen. One of seven children, she had a deeply religious mother, while her father was a merchant, a sailor and later an itinerant preacher. As Crull and Guido Bacciagaluppi recount in their 2016 book Grete Hermann: Between Physics and Philosophy, she was raised according to her father’s maxim: “I train my children in freedom!” Essentially, he enabled Hermann to develop a wide range of interests and benefit from the best that the educational system could offer a woman at the time.

She was eventually admitted as one of a handful of girls at the Neue Gymnasium – a grammar school in Bremen – where she took a rigorous and broad programme of subjects. In 1921 Hermann earned a certificate to teach high-school pupils – an interest in education that reappeared in her later life – and began studying mathematics, physics and philosophy at the University of Göttingen.

In just four years, Hermann earned a PhD under the exceptional Göttingen mathematician Emmy Noether (1882–1935), famous for her groundbreaking theorem linking symmetry to physical conservation laws. Hermann’s final oral exam in 1925 featured not just mathematics, which was the subject of her PhD, but physics and philosophy too. She had specifically requested to be examined in the latter by the Göttingen philosopher Leonard Nelson, whose “logical sharpness” in lectures had impressed her.

abstract illustration of human heads overlapping
Mutual interconnections Grete Hermann was fascinated by the fundamental overlap between physics and philosophy. (Courtesy: iStock/agsandrew)

By this time, Hermann’s interest in philosophy was starting to dominate her commitment to mathematics. Although Noether had found a mathematics position for her at the University of Freiburg, Hermann instead decided to become Nelson’s assistant, editing his books on philosophy. “She studies mathematics for four years,” Noether declared, “and suddenly she discovers her philosophical heart!”

Hermann found Nelson to be demanding and sometimes overbearing but benefitted from the challenges he set. “I gradually learnt to eke out, step by step,” she later declared, “the courage for truth that is necessary if one is to utterly place one’s trust, also within one’s own thinking, in a method of thought recognized as cogent.” Hermann, it appeared, was searching for a path to the internal discovery of truth, rather like Einstein’s Gedankenexperimente.

After Nelson died in 1927 aged just 45, Hermann stayed in Göttingen, where she continued editing and expanding his philosophical work and related political ideas. Espousing a form of socialism based on ethical reasoning to produce a just society, Nelson had co-founded a political action group and set up the associated Philosophical-Political Academy (PPA) to teach his ideas. Hermann contributed to both and also wrote for the PPA’s anti-Nazi newspaper.

Hermann’s involvement in the organizations Nelson had founded later saw her move to other locations in Germany, including Berlin. But after Hitler came to power in 1933, the Nazis banned the PPA, and Hermann and her socialist associates drew up plans to leave Germany. Initially, she lived at a PPA “school-in-exile” in neighbouring Denmark. As the Nazis began to arrest socialists, Hermann feared that Germany might occupy Denmark (as it indeed later did) and so moved again, first to Paris and then London.

Arriving in Britain in early 1938, Hermann became acquainted with Edward Henry, another socialist, whom she later married. It was, however, merely a marriage of convenience that gave Hermann British citizenship and – when the Second World War started in 1939 – stopped her from being interned as an enemy alien. (The couple divorced after the war.) Amid all these disruptions, Hermann continued to bring her dual philosophical and mathematical perspectives to physics, and especially to quantum mechanics.

Mixing philosophy and physics

A major stimulus for Hermann’s work came from discussions she had in 1934 with Heisenberg and Carl Friedrich von Weizsäcker, who was then his research assistant at the Institute for Theoretical Physics in Leipzig. The previous year Hermann had written an essay entitled “Determinism and quantum mechanics”, which analysed whether the indeterminate nature of quantum mechanics – central to the “Copenhagen interpretation” of quantum behaviour – challenged the concept of causality.

Much cherished by physicists, causality says that every event has a cause, and that a given cause always produces a single specific event. Causality was also a tenet of the 18th-century German philosopher Immanuel Kant, best known for his famous 1781 treatise Critique of Pure Reason. He believed that causality is fundamental for how humans organize their experiences and make sense of the world.

Hermann, like Nelson, was a “neo-Kantian” who believed that Kant’s ideas should be treated with scientific rigour. In her 1933 essay, Hermann examined how the Copenhagen interpretation undermines Kant’s principle of causality. Although the article was not published at the time, she sent copies to Heisenberg, von Weizsäcker, Bohr and also Paul Dirac, who was then at the University of Cambridge in the UK.

In fact, we only know of the essay’s existence because Crull and Bacciagaluppi discovered a copy in Dirac’s archives at Churchill College, Cambridge. They also found a 1933 letter to Hermann from Gustav Heckmann, a physicist who said that Heisenberg, von Weizsäcker and Bohr had all read her essay and took it “absolutely and completely seriously”. Heisenberg added that Hermann was a “fabulously clever woman”.

Heckmann then advised Hermann to discuss her ideas more fully with Heisenberg, who he felt would be more open than Bohr to new ideas from an unexpected source. In 1934 Hermann visited Heisenberg and von Weizsäcker in Leipzig, with Heisenberg later describing their interaction in his 1971 memoir Physics and Beyond: Encounters and Conversations.

In that book, Heisenberg relates how rigorously Hermann wanted to treat philosophical questions. “[She] believed she could prove that the causal law – in the form Kant had given it – was unshakable,” Heisenberg recalled. “Now the new quantum mechanics seemed to be challenging the Kantian conception, and she had accordingly decided to fight the matter out with us.”

Their interaction was no fight, but a spirited discussion, with some sharp questioning from Hermann. When Heisenberg suggested, for instance, that a particular radium atom emitting an electron is an example of an unpredictable random event that has no cause, Hermann countered by saying that just because no cause has been found, it didn’t mean no such cause exists.

Significantly, this was a reference to what we now call “hidden variables” – the idea that quantum mechanics is being steered by additional parameters that we possibly don’t know anything about. Heisenberg then argued that even with such causes, knowing them would lead to complications in other experiments because of the wave nature of electrons.

Abstract illustration of atomic physics
Forward thinker Grete Hermann was one of the first people to study the notion that quantum mechanics might be steered by mysterious additional parameters – now dubbed “hidden variables” – that we know nothing about. (Courtesy: iStock/pobytov)

Suppose, using a hidden variable, we could predict exactly which direction an electron would move. The electron wave wouldn’t then be able to split and interfere with itself, resulting in an extinction of the electron. But such electron interference effects are experimentally observed, which Heisenberg took as evidence that no additional hidden variables are needed to make quantum mechanics complete. Once again, Hermann pointed out a discrepancy in Heisenberg’s argument.

In the end, neither side fully convinced the other, but inroads were made, with Heisenberg concluding in his 1971 book that “we had all learned a good deal about the relationship between Kant’s philosophy and modern science”. Hermann herself paid tribute to Heisenberg in a 1935 paper “Natural-philosophical foundations of quantum mechanics”, which appeared in a relatively obscure philosophy journal called Abhandlungen der Fries’schen Schule (6 69). In it, she thanked Heisenberg “above all for his willingness to discuss the foundations of quantum mechanics, which was crucial in helping the present investigations”.

Quantum indeterminacy versus causality

In her 1933 paper, Hermann aimed to understand if the indeterminacy of quantum mechanics threatens causality. Her overall finding was that wherever indeterminacy is invoked in quantum mechanics, it is not logically essential to the theory. So without claiming that quantum theory actually supports causality, she left the possibility open that it might.

To illustrate her point, Hermann considered Heisenberg’s uncertainty principle, which says that there’s a limit to the accuracy with which complementary variables, such as position, q, and momentum, p, can be measured, namely ΔqΔp ≥ h, where h is Planck’s constant. Does this principle, she wondered, truly indicate quantum indeterminism?

Hermann asserted that this relation can mean only one of two possible things. One is that measuring one variable leaves the value of the other undetermined. Alternatively, the result of measuring the other variable can’t be precisely predicted. Hermann dismissed the first option because its very statement implies that exact values exist, and so it cannot be logically used to argue against determinism. The second choice could be valid, but that does not exclude the possibility of finding new properties – hidden variables – that give an exact prediction.

In making her argument about hidden variables, Hermann used her mathematical training to point out a flaw in von Neumann’s famous 1932 proof, which said that no hidden-variable theory can ever reproduce the features of quantum mechanics. Quantum mechanics, according to von Neumann, is complete and no extra deterministic features need to be added.

For decades, his result was cited as “proof” that any deterministic addition to quantum mechanics must be wrong. Indeed, von Neumann had such a well-deserved reputation as a brilliant mathematician that few people had ever bothered to scrutinize his analysis. But in 1964 the Northern Irish theorist John Bell famously showed that a valid hidden-variable theory could indeed exist, though only if it’s “non-local” (Physics Physique Fizika 1 195).

Non-locality means that measurements made on widely separated parts of an entangled system can be correlated in ways that no local theory can explain, yet without any faster-than-light communication. Despite being a notion that Einstein never liked, non-locality has been widely confirmed experimentally. In fact, non-locality is a defining feature of quantum physics and one that’s eminently useful in quantum technology.

Then, in 1966 Bell examined von Neumann’s reasoning and found an error that decisively refuted the proof (Rev. Mod. Phys. 38 447). Bell, in other words, showed that quantum mechanics could permit hidden variables after all – a finding that opened the door to alternative interpretations of quantum mechanics. However, Hermann had reported the very same error in her 1933 paper, and again in her 1935 essay, with an especially lucid exposition that almost exactly foresees Bell’s objection.

She had got there first, more than three decades earlier (see box).

Grete Hermann: 30 years ahead of John Bell

artist impression of a quantum computer core
(Courtesy: iStock/Chayanan)

According to Grete Hermann, John von Neumann’s 1932 proof that quantum mechanics doesn’t need hidden variables “stands or falls” on his assumption concerning “expectation values” – an observable’s expectation value being the sum of all its possible outcomes weighted by their respective probabilities. In the case of two quantities, say, r and s, von Neumann supposed that the expectation value of (r + s) is the same as the expectation value of r plus the expectation value of s. In other words, <(r + s)> = <r> + <s>.

This is clearly true in classical physics, Hermann writes, but the truth is more complicated in quantum mechanics. Suppose r and s are conjugate variables in an uncertainty relationship, such as position q and momentum p, which obey ΔqΔp ≥ h. By definition, precisely measuring q rules out a precise measurement of p, so it is impossible to measure both simultaneously and thereby verify the relation <(q + p)> = <q> + <p>.

Further analysis, which Hermann supplied and Bell presented more fully, shows exactly why this invalidates or at least strongly limits the applicability of von Neumann’s proof; but Hermann caught the essence of the error first. Bell did not recognize or cite Hermann’s work, most probably because it was hardly known to the physics community until years after his 1966 paper.
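
A quick way to see why additivity is troublesome for non-commuting quantities is the numerical illustration below – the standard example later popularized by Bell, not one taken from Hermann’s paper. The possible measured values of σx + σz are ±√2, which are not sums of the possible values (±1) of σx and σz separately, so no “dispersion-free” assignment of definite values can obey von Neumann’s additivity assumption, even though quantum expectation values themselves are perfectly additive.

```python
import numpy as np

# Pauli matrices: a canonical pair of non-commuting observables
sigma_x = np.array([[0., 1.], [1., 0.]])
sigma_z = np.array([[1., 0.], [0., -1.]])

# Possible measurement outcomes are the eigenvalues of each observable
print("eigenvalues of sigma_x          :", np.linalg.eigvalsh(sigma_x))            # [-1.  1.]
print("eigenvalues of sigma_z          :", np.linalg.eigvalsh(sigma_z))            # [-1.  1.]
print("eigenvalues of sigma_x + sigma_z:", np.linalg.eigvalsh(sigma_x + sigma_z))  # [-1.414  1.414]

# Expectation values, by contrast, are additive in any quantum state psi
rng = np.random.default_rng(seed=1)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
expect = lambda A: np.real(np.conj(psi) @ A @ psi)
print(np.isclose(expect(sigma_x) + expect(sigma_z), expect(sigma_x + sigma_z)))    # True
```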

A new view of causality

After rebutting von Neumann’s proof in her 1935 essay, Hermann didn’t actually turn to hidden variables. Instead, Hermann went in a different and surprising direction, probably as a result of her discussions with Heisenberg. She accepted that quantum mechanics is a complete theory that makes only statistical predictions, but proposed an alternative view of causality within this interpretation.

We cannot foresee precise causal links in a quantum mechanics that is statistical, she wrote. But once a measurement has been made with a known result, we can work backwards to get a cause that led to that result. In fact, Hermann showed exactly how to do this with various examples. In this way, she maintains, quantum mechanics does not refute the general Kantian category of causality.

Not all philosophers have been satisfied by the idea of retroactive causality. But writing in The Oxford Handbook of the History of Quantum Interpretations, Crull says that Hermann “provides the contours of a neo-Kantian interpretation of quantum mechanics”. “With one foot squarely on Kant’s turf and the other squarely on Bohr’s and Heisenberg’s,” Crull concludes, “[Hermann’s] interpretation truly stands on unique ground.”

But Hermann’s 1935 paper did more than just upset von Neumann’s proof. In the article, she shows a deep and subtle grasp of elements of the Copenhagen interpretation such as its correspondence principle, which says that – in the limit of large quantum numbers – answers derived from quantum physics must approach those from classical physics.

The paper also shows that Hermann was fully aware – and indeed extended the meaning – of the implications of Heisenberg’s thought experiment that he used to illustrate the uncertainty principle. Heisenberg envisaged a photon colliding with an electron, but after that contact, she writes, the wave function of the physical system is a linear combination of terms, each being “the product of one wave function describing the electron and one describing the light quantum”.

As she went on to say, “The light quantum and the electron are thus not described each by itself, but only in their relation to each other. Each state of the one is associated with one of the other.” Remarkably, this amounts to an early perception of quantum entanglement, which Schrödinger described and named later in 1935. There is no evidence, however, that Schrödinger knew of Hermann’s insights.

Hermann’s legacy

On the centenary of the birth of a full theory of quantum mechanics, how should we remember Hermann? According to Crull, the early founders of quantum mechanics were “asking philosophical questions about the implications of their theory [but] none of these men were trained in both physics and philosophy”. Hermann, however, was an expert in the two. “[She] composed a brilliant philosophical analysis of quantum mechanics, as only one with her training and insight could have done,” Crull says.

Sadly for Hermann, few physicists at the time were aware of her 1935 paper even though she had sent copies to some of them. Had it been more widely known, her paper could have altered the early development of quantum mechanics. Reading it today shows how Hermann’s style of incisive logical examination can bring new understanding.

Hermann leaves other legacies too. As the Second World War drew to a close, she started writing about the ethics of science, especially the way in which it was carried out under the Nazis. After the war, she returned to Germany, where she devoted herself to pedagogy and teacher training. She disseminated Nelson’s views as well as her own through the reconstituted PPA, and took on governmental positions where she worked to rebuild the German educational system, apparently to good effect according to contemporary testimony.

Hermann also became active in politics as an adviser to the Social Democratic Party. She continued to have an interest in quantum mechanics, but it is not clear how seriously she pursued it in later life, which saw her move back to Bremen to care for an ill comrade from her early socialist days.

Hermann’s achievements first came to light in 1974 when the physicist and historian Max Jammer revealed her 1935 critique of von Neumann’s proof in his book The Philosophy of Quantum Mechanics. Following Hermann’s death in Bremen on 15 April 1984, interest slowly grew, culminating in Crull and Bacciagaluppi’s 2016 landmark study Grete Hermann: Between Physics and Philosophy.

The life of this deep thinker, who also worked to educate others and to achieve worthy societal goals, remains an inspiration for any scientist or philosopher today.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

The post Grete Hermann: the quantum physicist who challenged Werner Heisenberg and John von Neumann appeared first on Physics World.

Tennis-ball towers reach record-breaking heights with 12-storey, 34-ball structure

18 April 2025 at 12:00
Four photos of tennis ball towers: 34 balls with base 3n+1; 21 balls with base 4n+1; 11 balls with base 5n+1; and six balls in a single layer
Oh, balls A record-breaking 34-ball, 12-storey tower with three balls per layer (photo a); a 21-ball six-storey tower with four balls per layer (photo b); an 11-ball, three-storey tower with five balls per layer (photo c); and why a tower with six balls per layer would be impossible as the “locker” ball just sits in the middle (photo d). (Courtesy: Andria Rogava)

A few years ago, I wrote in Physics World about various bizarre structures I’d built from tennis balls, the most peculiar of which I termed “tennis-ball towers”. They consisted of a series of three-ball layers topped by a single ball (“the locker”) that keeps the whole tower intact. Each tower had (3n + 1) balls, where n is the number of triangular layers. The tallest tower I made was a seven-storey, 19-ball structure (n = 6). Shortly afterwards, I made an even bigger, nine-storey, 25-ball structure (n = 8).

Now, in the latest exciting development, I have built a new, record-breaking tower with 34 balls (n = 11), in which all 30 balls from the second to the eleventh layer are kept in equilibrium by the locker on the top (see photo a). The three balls in the bottom layer aren’t influenced by the locker as they stay in place by virtue of being on the horizontal surface of a table.

I tried going even higher but failed to build a structure that would stay intact without supporting “scaffolds”. Now in case you think I’ve just glued the balls together, watch the video below to see how the incredible 34-ball structure collapses spontaneously, probably due to a slight vibration as I walked around the table.

Even more unexpectedly, I have been able to make tennis-ball towers consisting of layers of four balls (4n + 1) and five balls too (5n + 1). Their equilibria are more delicate and, in the case of four-ball structures, so far I have only managed to build (photo b) a 21-ball, six-storey tower (n = 5). You can also see the tower in the video below.

The (5n + 1) towers are even trickier to make and (photo c) I have only got up to a three-storey structure with 11 balls (n = 2): two lots of five balls with a sixth single ball on top. In case you’re wondering, towers with six balls in each layer are physically impossible to build because they form a regular hexagon. You can’t just use another ball as a locker because it would simply sit between the other six (photo d).
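
For anyone checking the arithmetic, all of the towers above follow the same (kn + 1) pattern; here is a trivial sketch that tallies the counts quoted in this article (the formula, of course, says nothing about whether a given tower is actually stable).

```python
def tower_balls(balls_per_layer: int, layers: int) -> int:
    """Total balls in a tower of equal layers plus a single 'locker' ball on top."""
    return balls_per_layer * layers + 1

# (balls per layer, number of layers) for the towers described above
for k, n in [(3, 11), (4, 5), (5, 2)]:
    print(f"{k} balls per layer, n = {n}: {tower_balls(k, n)} balls, {n + 1} storeys")
# prints 34 balls / 12 storeys, 21 balls / 6 storeys, 11 balls / 3 storeys
```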

The post Tennis-ball towers reach record-breaking heights with 12-storey, 34-ball structure appeared first on Physics World.
