Reformulation of general relativity brings it closer to Newtonian physics

The first-ever detection of gravitational waves was made by LIGO in 2015, and since then researchers have been trying to understand the physics of the black-hole and neutron-star mergers that create the waves. That physics, however, is very complicated and is described by Albert Einstein’s general theory of relativity.

Now Jiaxi Wu, Siddharth Boyeneni and Elias Most at the California Institute of Technology (Caltech) have addressed this challenge by developing a new formulation of general relativity that is inspired by the equations that describe electromagnetic interactions. They show that general relativity behaves in the same way as the gravitational inverse square law described by Isaac Newton more than 300 years ago. “This is a very non-trivial insight,” says Most.

One of the fascinations of black holes is the extreme physics they invoke. These astronomical objects pack so much mass into so little space that not even light can escape their gravitational pull. Black holes (and neutron stars) can exist in binary systems in which the objects orbit each other. These pairs eventually merge to create single black holes in events that create detectable gravitational waves. The study of these waves provides an important testbed for gravitational physics. However, the mathematics of general relativity that describes these mergers is very complicated.

Inverse square law

According to Newtonian physics, the gravitational attraction between two masses is proportional to the inverse of the square of the distance between them – the inverse square law. However, as Most points out, “Unless in special cases, general relativity was not thought to act in the same way.”

Over the past decade, gravitational-wave researchers have taken various approaches including post-Newtonian theory and effective one-body approaches to better understand the physics of black-hole mergers. One important challenge is how to model parameters such as orbital eccentricity and precession in black hole systems and how best to understand “ringdown”. The latter is the process whereby a black hole formed by a merger emits gravitational waves as it relaxes into a stable state.

The trio’s recasting of the equations of general relativity was inspired by the Maxwell equations that describe how electric and magnetic fields leapfrog each other through space. According to these equations, the forces between electric charges diminish according to the same inverse square law as Newton’s gravitational attraction.
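As a reminder of the parallel the team exploited – a textbook relation, not a result from the paper itself – Newton’s gravitational force and Coulomb’s electrostatic force share the same inverse-square form:

```latex
F_{\mathrm{grav}} = \frac{G\,m_1 m_2}{r^{2}}, \qquad
F_{\mathrm{Coulomb}} = \frac{1}{4\pi\varepsilon_0}\,\frac{q_1 q_2}{r^{2}} .
```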

Early reformulations

The original reformulations of “gravitoelectromagnetism” date back to the 1990s. Most explains that among those who did this early work was his Caltech colleague and LIGO Nobel laureate Kip Thorne, who exploited a special mathematical structure of the curvature of space–time.

“This structure mathematically looks like the equations governing light and the attraction of electric charges, but the physics is quite different,” Most tells Physics World. The gravito-electric field thus derived describes how an object might squish under the forces of gravity. “Mathematically this means that the previous gravito-electric field falls off with inverse distance cubed, which is unlike the inverse distance square law of Newtonian gravity or electrostatic attraction,” adds Most.
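As a rough, textbook-level illustration of that contrast (not taken from the paper): the Newtonian field of a mass M falls off as the inverse square of the distance, whereas the tidal, gravito-electric field associated with space–time curvature falls off as the inverse cube:

```latex
g(r) \sim \frac{GM}{r^{2}}, \qquad
E_{\mathrm{tidal}}(r) \sim \frac{GM}{r^{3}} .
```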

Most’s own work follows on from previous studies of the potential radio emission produced when magnetic fields interact during collisions of neutron stars and black holes, from which it seemed reasonable to “think about whether some of these insights naturally carry over to Einstein’s theory of gravity”. The trio began with different formulations of general relativity and electromagnetism, with the aim of deriving gravitational analogues of the electric and magnetic fields that behave more like their counterparts in classical electromagnetism. They then demonstrated how their formulation might describe the behaviour of a non-rotating Schwarzschild black hole, as well as a black-hole binary.

Not so different

“Our work says that actually general relativity is not so different from Newtonian gravity (or better, electric forces) when expressed in the right way,” explains Most. The actual behaviour predicted is the same in both formulations but the trio’s reformulation reveals how general relativity and Newtonian physics are more similar than they are generally considered to be. “The main new thing is then what does it mean to ‘observe’ gravity, and what does it mean to measure distances relative to how you ‘observe’.”

Alexander Philippov is a black-hole expert at the University of Maryland in the US who was not directly involved with Most’s research. He describes the research as “very nice”, adding that while the analogy between gravity and electromagnetism has been extensively explored in the past, there is novelty in the interpretation of results from fully nonlinear general-relativistic simulations in terms of effective electromagnetic fields. “It promises to provide valuable intuition for a broad class of problems involving compact object mergers.”

The research is described in Physical Review Letters.


  •  

Researchers create glow-in-the-dark succulents that recharge with sunlight

“Picture the world of Avatar, where glowing plants light up an entire ecosystem,” says Shuting Liu of South China Agricultural University in Guangzhou.

Well, that vision is now a step closer thanks to researchers in China who have created glow-in-the-dark succulents that recharge in sunlight.

Instead of coaxing cells to glow through genetic modification, the team used afterglow phosphor particles – materials similar to those found in glow-in-the-dark toys – that can absorb light and release it slowly over time.

The researchers then injected the particles into succulents, finding that the plants produced a strong glow thanks to the narrow, uniform and evenly distributed channels within the leaves, which helped to disperse the particles.

After a couple of minutes of exposure to sunlight or indoor LED light, the modified plants glowed for up to two hours. By using different types of phosphors, the researchers created plants that shine in various colours, including green, red, and blue.

The team even built a glowing plant wall with 56 succulents, which was bright enough to illuminate nearby objects.

“I just find it incredible that an entirely human-made, micro-scale material can come together so seamlessly with the natural structure of a plant,” notes Liu. “The way they integrate is almost magical. It creates a special kind of functionality.”


  •  

Big data helps Gaelic football club achieve promotion following 135-year wait

An astrophysics PhD student from County Armagh in Northern Ireland has combined his passion for science with Gaelic football to help his club achieve a historic promotion.

Eamon McGleenan plays for his local team – O’Connell’s GAC Tullysaran – and is a PhD student at Queen’s University Belfast, where he is a member of the Predictive Sports Analytics (PSA) research team, which was established in 2023.

McGleenan and his PhD supervisor David Jess teamed up with GAC Tullysaran to investigate whether data analysis and statistical techniques could improve their training and results.

Over five months, the Queen’s University researchers took over 550 million individual measurements from the squad, which included information such as player running speed, accelerations and heart rates.

“We applied mathematical models to the big data we obtained from the athletes,” notes McGleenan. “This allowed us to examine how the athletes evolved over time and we then provided key insights for the coaching staff, who then generated bespoke training routines and match tactics.”

The efforts immediately paid off as in July GAC Tullysaran won their league by two points and were promoted for the first time in 135 years to the top-flight Senior Football League, which they will start in March.

“The statistical insight provided by PSA is of great use and I like how it lets me get the balance of training right, especially in the run-up to match day,” noted Tullysaran manager Pauric McGlone, who adds that it also provided a bit of competition in the squad that ensured the players were “conditioned in a way that allows them to perform at their best”.

For more about the PSA’s activities, see here.


  •  

Zero-point motion of atoms measured directly for the first time

Physicists in Germany say they have measured the correlated behaviour of atoms in molecules prepared in their lowest quantum energy state for the first time. Using a technique known as Coulomb explosion imaging, they showed that the atoms do not simply vibrate individually. Instead, they move in a coupled fashion that displays fixed patterns.

According to classical physics, molecules with no thermal energy – for example, those held at absolute zero – should not move. However, according to quantum theory, the atoms making up these molecules are never completely “frozen”, so they should exhibit some motion even at this chilly temperature. This motion comes from the atoms’ zero-point energy, which is the minimum energy allowed by quantum mechanics for atoms in their ground state at absolute zero. It is therefore known as zero-point motion.
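For a quantum harmonic oscillator of frequency ω and mass m – the standard textbook model for a vibrating bond, quoted here for context rather than from the study itself – the ground state retains a zero-point energy and a corresponding spread in position:

```latex
E_0 = \tfrac{1}{2}\hbar\omega, \qquad
\Delta x = \sqrt{\frac{\hbar}{2 m \omega}} .
```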

Reconstructing the molecule’s original structure

To study this motion, a team led by Till Jahnke from the Institute for Nuclear Physics at Goethe University Frankfurt and the Max Planck Institute for Nuclear Physics in Heidelberg used the European XFEL in Hamburg to bombard their sample – an iodopyridine molecule consisting of 11 atoms – with ultrashort, high-intensity X-ray pulses. These high-intensity pulses violently eject electrons out of the iodopyridine, causing its constituent atoms to become positively charged (and thus to repel each other) so rapidly that the molecule essentially explodes.

To image the molecular fragments generated by the explosion, the researchers used a customized version of a COLTRIMS reaction microscope. This approach allowed them to reconstruct the molecule’s original structure.

From this reconstruction, the researchers were able to show that the atoms do not simply vibrate individually, but rather move in correlated, coordinated patterns. “This is known, of course, from quantum chemistry, but it had so far not been measured in a molecule consisting of so many atoms,” Jahnke explains.

Data challenges

One of the biggest challenges Jahnke and colleagues faced was interpreting what the microscope data was telling them. “The dataset we obtained is super-rich in information and we had already recorded it in 2019 when we began our project,” he says. “It took us more than two years to understand that we were seeing something as subtle (and fundamental) as ground-state fluctuations.”

Since the technique provides detailed information that is hidden to other imaging approaches, such as crystallography, the researchers are now using it to perform further time-resolved studies – for example, of photochemical reactions. Indeed, they performed and published the first measurements of this type at the beginning of 2025, while the current study (which is published in Science) was undergoing peer review.

“We have pushed the boundaries of the current state-of-the-art of this measurement approach,” Jahnke tells Physics World, “and it is nice to have seen a fundamental process directly at work.”

For theoretical condensed matter physicist Asaad Sakhel at Balqa Applied University, Jordan, who was not involved in this study, the new work is “an outstanding achievement”. “Being able to actually ‘see’ zero-point motion allows us to delve deeper into the mysteries of quantum mechanics in our quest for a further understanding of its foundations,” he says.


  •  

Artificial intelligence predicts future directions in quantum science

Can artificial intelligence predict future research directions in quantum science? Listen to this episode of the Physics World Weekly podcast to discover what is already possible.

My guests are Mario Krenn – who heads the Artificial Scientist Lab at Germany’s Max Planck Institute for the Science of Light – and Felix Frohnert, who is doing a PhD on the intersection of quantum physics and machine learning at Leiden University in the Netherlands.

Frohnert, Krenn and colleagues published a paper earlier this year called “Discovering emergent connections in quantum physics research via dynamic word embeddings” in which they analysed more than 66,000 abstracts from the quantum-research literature to see if they could predict future trends in the field. They were particularly interested in the emergence of connections between previously isolated subfields of quantum science.
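As a rough illustration of the general idea – not the authors’ actual pipeline – one could train a separate word-embedding model on each year’s abstracts and track how the similarity between two subfield terms evolves; a rising curve would hint that previously separate subfields are connecting. The corpus format, function name and example terms below are assumptions made for the sketch.

```python
# Illustrative sketch only, not the authors' pipeline: train one word-embedding
# model per year of abstracts and track how the similarity between two subfield
# terms changes over time.
from gensim.models import Word2Vec

def concept_similarity_by_year(abstracts_by_year, term_a, term_b):
    """abstracts_by_year: dict mapping year -> list of tokenized abstracts."""
    similarity = {}
    for year, corpus in sorted(abstracts_by_year.items()):
        model = Word2Vec(sentences=corpus, vector_size=100, window=5,
                         min_count=5, workers=4, seed=0)
        if term_a in model.wv and term_b in model.wv:
            similarity[year] = float(model.wv.similarity(term_a, term_b))
    return similarity

# e.g. concept_similarity_by_year(abstracts, "quantum_error_correction", "neural_network")
```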

We chat about what motivated the duo to use machine learning to study quantum science; how their prediction system works; and I ask them whether they have been able to predict current trends in quantum science using historical data.

Their paper appears in the journal Machine Learning: Science and Technology. It is published by IOP Publishing, which also brings you Physics World. Krenn is on the editorial board of the journal and in the podcast he explains why it is important to have a platform to publish research at the intersection of physics and machine learning.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

 


  •  

Errors in A-level physics papers could jeopardize student university admissions, Institute of Physics warns

Errors in some of this year’s A-level physics exam papers could leave students without good enough grades to study physics at university. The mistakes have forced Tom Grinyer, chief executive of the Institute of Physics (IOP), to write to all heads of physics at UK universities, calling on them to “take these exceptional circumstances into account during the final admissions process”. The IOP is particularly concerned about students whose grades are lower than expected or are “a significant outlier” compared to other subjects.

The mistakes in question appeared in the physics (A) exam papers 1 and 2 set by the OCR exam board. Erratum notices had been issued to students at the start of the exam in June, but a further error in paper 2 was only spotted after the exam had taken place, causing some students to get stuck. Physics paper 2 from the rival AQA exam board was also seen to contain complex phrasing that hindered students’ ability to answer questions and led to time pressures.

A small survey of physics teachers carried out after the exam by the IOP, which publishes Physics World, reveals that 41% were dissatisfied with the OCR physics exam papers and more than half (58%) felt that students had a negative experience. Two-thirds of teachers, meanwhile, reported that students had a negative experience during the AQA exam. A-levels are mostly taken by 18-year-olds in England, Wales and Northern Ireland, with the grades being used by universities to decide admission.

Grinyer says that the IOP is engaging in “regular, open dialogue with exam boards” to ensure that the assessment process supports and encourages students, while maintaining the rigour and integrity of the qualification. “Our immediate concern,” Grinyer warns, “is that the usual standardization processes and adjustments to grade boundaries – particularly for the OCR paper with errors – may fail to compensate fully for the negative effect on exam performance for some individuals.”

An OCR spokesperson told Physics World that the exam board is “sorry to the physics students and teachers affected by errors in A-level physics this year”. The board says that it “evaluated student performance across all physics papers, and took all necessary steps to mitigate the impact of these errors”. The OCR claims that the 13,000 students who sat OCR A-level physics A this year “can be confident” in their A-level physics results.

“We have taken immediate steps to review and strengthen our quality assurance processes to prevent such issues from occurring in the future,” the OCR adds. “We appreciated the opportunity to meet with the Institute of Physics to discuss these issues, and also to discuss our shared interest in encouraging the growth of this vital subject.”

Almost 23,500 students sat AQA A-level physics this year and an AQA spokesperson told Physics World that the exam board “listened to feedback and took steps to make A-level physics more accessible” to students and that there “is no need for universities to make an exception for AQA physics outcomes when it comes to admissions criteria”.

“These exam papers were error-free, as teachers and students would expect, and we know that students found the papers this year to be more accessible than last year,” they say. “We’ll continue to engage with any feedback that we receive, including feedback from the Institute of Physics, to explore how we can enhance our A-level physics assessments and give students the best possible experience when they sit exams.”

Students ‘in tears’

The IOP now wants A-level physics students to be given a “fair opportunity” when it comes to university admissions. “These issues are particularly concerning for students on widening participation pathways, many of whom already face structural barriers to high-stakes assessment,” the IOP letter states. “The added challenge of inaccessible or error-prone exam papers risks compounding disadvantage and may not reflect the true potential of these students.”

The IOP also contacted AQA last year over inaccessible contexts and language used in previous physics exams. But despite AQA’s assurances that the problems would be addressed, some of the same issues have now recurred. Helen Sinclair, head of physics at the all-girls Wimbledon High School, believes that the “variable quality” of recent A-level papers has had “far-reaching consequences” for young people thinking of going on to study physics at university.

“Our students have exceptionally high standards for themselves and the opaque nature of many questions affects them deeply, no matter what grades they ultimately achieve. This has even led some to choose to apply for other subjects at university,” she told Physics World. “This is not to say that papers should not be challenging; however, better scaffolding within some questions would help students anchor themselves in what is an already stressful environment, and would ultimately enable them to better demonstrate their full potential within an exam.”

Students come out of the exams feeling disheartened, and those students share their perceptions with younger students

Abbie Hope, Stokesley School

Those concerns are echoed by Abbie Hope, head of physics at Stokesley School in Stockton-on-Tees. She says the errors in this year’s exam papers are “not acceptable” and believes that OCR has “failed their students”. Hope says that AQA physics papers in recent years have been “very challenging” and have resulted in students feeling like they cannot do physics. She also says some have emerged from exam halls in tears.

“Students come out of the exams feeling disheartened and share their perceptions with younger students,” she says. “I would rather students sat a more accessible paper, with higher grade boundaries so they feel more successful when leaving the exam hall, rather than convinced they have underachieved and then getting a surprise on results day.” Hope fears the mistakes will undermine efforts to encourage uptake and participation in physics and that exam boards need to serve students and teachers better.

A ‘growing unease’

Rachael Houchin, head of physics at Royal Grammar School Newcastle, says this year’s errors have added to her “growing unease” about the state of physics education in the UK. “Such incidents – particularly when they are public and recurring – do little to improve the perception of the subject or encourage its uptake,” she says. “Everyone involved in physics education – at any level – has a duty to get it right. If we fail, we risk physics drifting into the category of subjects taught predominantly in selective or independent schools, and increasingly absent from the mainstream.”

Hari Rentala, associate director of education and workforce at the IOP, is concerned that the errors unfairly “perpetuate the myth” that physics is a difficult subject. “OCR appear to have managed the situation as best they can, but this is not much consolation for how students will have felt during the exam and over the ensuing weeks,” says Rentala. “Once again AQA set some questions that were overly challenging. We can only hope that the majority of students who had a negative experience as a result of these issues at least receive a fair grade – as grade boundaries have been adjusted down.”

Mixed news for pupils

Despite the problems with some specific papers, almost 45,000 students took A-level physics in the UK – a rise of 4.3% on last year and the highest number for 25 years. Physics is now the sixth most popular subject at A-level, up from ninth last year, with girls representing a quarter of all candidates. Meanwhile, in Scotland the numbers of entries in National 5 and Higher physics were 13,680 and 8560, respectively, up from 13,355 and 8065 last year.

“We are delighted so many young people, and increasing numbers of girls, are hearing the message that physics can open up a lifetime of opportunities,” says Grinyer. “If we can build on this momentum there is a real opportunity to finally close the gap between boys and girls in physics at A-level. To do that we need to continue to challenge the stereotypes that still put too many young people off physics and ensure every young person knows that physics – and a career in science and innovation – could be for them.”

However, there is less good news for younger pupils, with a new IOP report finding that more than half a million GCSE students are expected to start the new school year with no physics teacher. It reveals that a quarter of English state schools have no specialist physics teachers at all, and warns that more than 12,000 students could miss out on taking A-level physics as a result. The IOP wants the UK government to invest £120m over the next 10 years to address the shortage by retaining, recruiting and retraining a new generation of physics teachers.


  •  

Quantum sensors reveal ‘smoking gun’ of superconductivity in pressurized bilayer nickelates

Physicists at the Chinese Academy of Sciences (CAS) have used diamond-based quantum sensors to uncover what they say is the first unambiguous experimental evidence for the Meissner effect – a hallmark of superconductivity – in bilayer nickelate materials at high pressures. The discovery could spur the development of highly sensitive quantum detectors that can be operated under high-pressure conditions.

Superconductors are materials that conduct electricity without resistance when cooled to below a certain critical transition temperature Tc. Apart from a sharp drop in electrical resistance, another important sign that a material has crossed this threshold is the appearance of the Meissner effect, in which the material expels a magnetic field from its interior (diamagnetism). This expulsion creates such a strong repulsive force that a magnet placed atop the superconducting material will levitate above it.
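For context, the textbook description of this expulsion – not something reported in the new work – is the London equation, which forces an applied field to decay exponentially inside a superconductor over the London penetration depth λ_L:

```latex
\nabla^{2}\mathbf{B} = \frac{\mathbf{B}}{\lambda_L^{2}}
\quad\Rightarrow\quad
B(x) = B_0\, e^{-x/\lambda_L} .
```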

In “conventional” superconductors such as solid mercury, the Tc is so low that the materials must be cooled with liquid helium to keep them in the superconducting state. In the late 1980s, however, physicists discovered a new class of superconductors that have a Tc above the boiling point of liquid nitrogen (77 K). These “unconventional” or high-temperature superconductors are derived not from metals but from insulators containing copper oxides (cuprates).

Since then, the search has been on for materials that superconduct at still higher temperatures, and perhaps even at room temperature. Discovering such materials would have massive implications for technologies ranging from magnetic resonance imaging machines to electricity transmission lines.

Enter nickel oxides

In 2019 researchers at Stanford University in the US identified nickel oxides (nickelates) as additional high-temperature superconductors. This created a flurry of interest in the superconductivity community because these materials appear to superconduct in a way that differs from their copper-oxide cousins.

Among the nickelates studied, La3Ni2O7-δ (where δ can range from 0 to 0.04) is considered particularly promising because in 2023, researchers led by Meng Wang of China’s Sun Yat-Sen University spotted certain signatures of superconductivity at a temperature of around 80 K. However, these signatures only appeared when crystals of the material were placed in a device called a diamond anvil cell (DAC). This device subjects samples of material to extreme pressures of more than 400 GPa (or 4 × 10⁶ atmospheres) as it squeezes them between the flattened tips of two tiny, gem-grade diamond crystals.

The problem, explains Xiaohui Yu of the CAS’ Institute of Physics, is that it is not easy to spot the Meissner effect under such high pressures. This is because the structure of the DAC limits the available sample volume and hinders the use of highly sensitive magnetic measurement techniques such as SQUID. Another problem is that the sample used in the 2023 study contains several competing phases that could mix and degrade the signal of the La3Ni2O7-δ.

Nitrogen-vacancy centres embedded as in-situ quantum sensors

In the new work, Yu and colleagues used nitrogen-vacancy (NV) centres embedded in the DAC as in-situ quantum sensors to track and image the Meissner effect in pressurized bilayer La3Ni2O7-δ. This newly developed magnetic sensing technique boasts both high sensitivity and high spatial resolution, Yu says. What is more, it fits perfectly into the DAC high-pressure chamber.

Next, they applied a small external magnetic field of around 120 G. Under these conditions, they measured the optically detected magnetic resonance (ODMR) spectra of the NV centres point by point. They could then extract the local magnetic field from the resonance frequencies of these spectra. “We directly mapped the Meissner effect of the bilayer nickelate samples,” Yu says, noting that the team’s image of the magnetic field clearly shows both a diamagnetic region and a region where magnetic flux is concentrated.
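The relation behind this step is the standard NV-magnetometry formula rather than one quoted from the paper: the two ODMR resonances are split by the field component along the NV axis, so the local field follows from the measured splitting. (The zero-field splitting D is about 2.87 GHz at ambient conditions and shifts with pressure and temperature.)

```latex
f_{\pm} \simeq D \pm \gamma_{\mathrm{NV}} B_{\parallel}, \qquad
B_{\parallel} \simeq \frac{f_{+} - f_{-}}{2\,\gamma_{\mathrm{NV}}},
\qquad \gamma_{\mathrm{NV}} \approx 2.8\ \mathrm{MHz\,G^{-1}} .
```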

Weak demagnetization signal

The researchers began their project in late 2023, shortly after receiving single-crystal samples of La3Ni2O7-δ from Wang. “However, after two months of collecting data, we still had no meaningful results,” Yu recalls. “From these experiments, we learnt that the demagnetization signal in La3Ni2O7-δ crystals was quite weak and that we needed to improve either the nickelate sample or the sensitivity of the quantum sensor.”

To overcome these problems, they switched to using polycrystalline samples, enhancing the quality of the nickelate samples by doping them with praseodymium to make La2PrNi2O7. This produced a sample with an almost pure bilayer structure and thus a much stronger demagnetization signal. They also used shallow NV centres implanted at the DAC culet (the small flattened face at the tip of the diamond anvil).

“Unlike the NV centres in the original experiments, which were randomly distributed in the pressure-transmitting medium and have relatively large ODMR widths, leading to only moderate sensitivity in the measurements, these shallow centres are evenly distributed and well aligned, making it easier for us to perform magnetic imaging with increased sensitivity,” Yu explains.

These improvements enabled the team to obtain a demagnetization signal from the La2PrNi2O7 and La3Ni2O7-δ samples, he tells Physics World. “We found that the diamagnetic signal from the La2PrNi2O7 samples is about five times stronger than that from the La3Ni2O7-δ ones prepared under similar conditions – a result that is consistent with the fact that the Pr-doped samples are of a better quality.”

Physicist Jun Zhao of Fudan University, China, who was not involved in this work, says that Yu and colleagues’ measurement represents “an important step forward” in nickelate research. “Such measurements are technically very challenging, and their success demonstrates both experimental ingenuity and scientific significance,” he says. “More broadly, their result strengthens the case for pressurized nickelates as a new platform to study high-temperature superconductivity beyond the cuprates. It will certainly stimulate further efforts to unravel the microscopic pairing mechanism.”

As well as allowing for the precise sensing of magnetic fields, NV centres can also be used to accurately measure many other physical quantities that are difficult to measure under high pressure, such as strain and temperature distribution. Yu and colleagues say they are therefore looking to further expand the application of these structures for use as quantum sensors in high-pressure sensing.

They report their current work in National Science Review.


  •  

Quantum foundations: towards a coherent view of physical reality

One hundred years after its birth, quantum mechanics remains one of the most powerful and successful theories in all of science. From quantum computing to precision sensors, its technological impact is undeniable – and one reason why 2025 is being celebrated as the International Year of Quantum Science and Technology.

Yet as we celebrate these achievements, we should still reflect on what quantum mechanics reveals about the world itself. What, for example, does this formalism actually tell us about the nature of reality? Do quantum systems have definite properties before we measure them? Do our observations create reality, or merely reveal it?

These are not just abstract, philosophical questions. Having a clear understanding of what quantum theory is all about is essential to its long-term coherence and its capacity to integrate with the rest of physics. Unfortunately, there is no scientific consensus on these issues, which continue to provoke debate in the research community.

That uncertainty was underlined by a recent global survey of physicists about quantum foundational issues, conducted by Nature (643 1157). It revealed a persistent tension between “realist” views, which seek an objective, visualizable account of quantum phenomena, and “epistemic” views that regard the formalism as merely a tool for organizing our knowledge and predicting measurement outcomes.

Only 5% of the 1100 people who responded to the Nature survey expressed full confidence in the Copenhagen interpretation, which is still prevalent in textbooks and laboratories. Further divisions were revealed over whether the wavefunction is a physical entity, a mere calculation device, or a subjective reflection of belief. The lack of agreement on such a central feature underscores the theoretical fragility underlying quantum mechanics.

The willingness to explore alternatives reflects the intellectual vitality of the field but also underscores the inadequacy of current approaches

More broadly, 75% of respondents believe that quantum theory will eventually be replaced, at least partially, by a more complete framework. Encouragingly, 85% agree that attempts to interpret the theory in intuitive or physical terms are valuable. This willingness to explore alternatives reflects the intellectual vitality of the field but also underscores the inadequacy of current approaches.

Beyond interpretation

We believe that this interpretative proliferation stems from a deeper problem, which is that quantum mechanics lacks a well-defined physical foundation. It describes the statistical outcomes of measurements, but it does not explain the mechanisms behind them. The concept of causality has been largely abandoned in favour of operational prescriptions such that quantum theory works impressively in practice but remains conceptually opaque.

In our view, the way forward is not to multiply interpretations or continue debating them, but to pursue a deeper physical understanding of quantum phenomena. One promising path is stochastic electrodynamics (SED), a classical theory augmented by a random electromagnetic background field, the real vacuum or zero-point field discovered by Max Planck as early as 1911. This framework restores causality and locality by explaining quantum behaviour as the statistical response of particles to this omnipresent background field.

Over the years, several researchers from different lines of thought have contributed to SED. Since our early days with Trevor Marshall, Timothy Boyer and others, we have refined the theory to the point that it can now account for the emergence of features that are considered building blocks of quantum formalism, such as the basic commutator and Heisenberg inequalities.
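For reference, these are the familiar textbook relations being referred to:

```latex
[\hat{x}, \hat{p}] = i\hbar, \qquad
\Delta x\, \Delta p \ge \frac{\hbar}{2} .
```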

Particles acquire wave-like properties not by intrinsic duality, but as a consequence of their interaction with the vacuum field. Quantum fluctuations, interference patterns and entanglement emerge from this interaction, without the need to resort to non-local influences or observer-dependent realities. The SED approach is not merely mechanical, but rather electrodynamic.

Coherent thoughts

We’re not claiming that SED is the final word. But it does offer a coherent picture of microphysical processes based on physical fields and forces. Importantly, it doesn’t abandon the quantum formalism but rather reframes it as an effective theory – a statistical summary of deeper dynamics. Such a perspective enables us to maintain the successes of quantum mechanics while seeking to explain its origins.

For us, SED highlights that quantum phenomena can be reconciled with concepts central to the rest of physics, such as realism, causality and locality. It also shows that alternative approaches can yield testable predictions and provide new insights into long-standing puzzles. One phenomenon lying beyond current quantum formalism that we could now test, thanks to progress in experimental physics, is the predicted violation of Heisenberg’s inequalities over very short time periods.

As quantum science continues to advance, we must not lose sight of its conceptual foundations. Indeed, a coherent, causally grounded understanding of quantum mechanics is not a distraction from technological progress but a prerequisite for its full realization. By turning our attention once again to the foundations of the theory, we may finally complete the edifice that began to rise a century ago.

The centenary of quantum mechanics should be a time not just for celebration but critical reflection too.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ.

Find out more on our quantum channel.


  •  

Twisted graphene reveals a new type of chirality

Structural chirality refers to the geometric property of objects that are not superimposable on their mirror images, a concept that is central to organic chemistry. In contrast, topological chirality in physics involves quantum properties like spin and is essential for understanding topological edge states. The connection between these two forms of chirality remains an open question.

Traditionally, topological phenomena have been studied in spinful systems, where the presence of spin allows for chiral interactions and symmetry-breaking effects. This new study challenges that paradigm by demonstrating that topological chirality can arise even in spinless systems, purely from the three-dimensional structural arrangement of otherwise featureless units.

The researchers mathematically investigate two types of twisted 3D graphite systems, composed of stacked 2D graphene layers. Importantly, a large twist angle (21.8°) is used. In one configuration, the layers are twisted into a helical, screw-like structure, while in the other the twist angles alternate between layers, forming a periodic chiral pattern. These structural designs give rise to novel topological phases.

A key mechanism underlying these effects is intervalley Umklapp scattering. This scattering captures the chirality of the twisted interfaces and induces a sign-flipped interlayer hopping, by introducing a π-flux lattice gauge field. This field alters the symmetry algebra of the system, enabling the emergence of spinless topological chirality.

This research opens up a new design principle for topological materials. By engineering the spatial patterning of structureless units, researchers can induce topological chirality without relying on spin. This has significant implications for the development of topological photonic and acoustic devices, potentially leading to simpler, more tunable materials for applications in quantum computing, sensing, and waveguiding technologies.

Read the full article

Spinless topological chirality from Umklapp scattering in twisted 3D structures

Cong Chen et al 2025 Rep. Prog. Phys. 88 018001

Do you want to learn more about this topic?

Interacting topological insulators: a review by Stephan Rachel (2018)


  •  

Unveiling topological edge states with attosecond precision

In condensed matter physics, topological phase transitions are a key area of research because they lead to unusual and potentially useful states of matter. One example is the Floquet topological insulator, which can switch from a non-topological to a topological phase when exposed to a laser pulse. However, detecting these transitions is difficult due to the extremely fast timescales involved and interference from infrared fields, which can distort the photoelectron signals.

A Chern insulator is a unique material that acts as an insulator in its bulk but conducts electricity along its edges. These edge states arise from the topology of the material’s bulk band structure. Unlike other topological materials, Chern insulators do not require magnetic fields. Their edge conduction is topologically protected, meaning it is highly resistant to defects and noise. This makes them promising candidates for quantum technologies, spintronics, and energy-efficient electronics.

In this study, researchers developed a new method to detect phase changes in Chern insulators. Using numerical simulations, they demonstrated that attosecond x-ray absorption spectroscopy, combined with polarization-dependent dichroism, can effectively reveal these transitions. Their semi-classical approach isolates the intra-band Berry connection, providing deeper insight into how topological edge states form and how electrons behave in these systems.

This work represents a significant advance in topological materials research. It offers a new way to observe changes in quantum materials in real time, expands the use of attosecond spectroscopy from simple atoms and molecules to complex solids, and opens the door to studying dynamic systems like Floquet topological insulators.

Read the full article

Topological phase transitions via attosecond x-ray absorption spectroscopy

Juan F P Mosquera et al 2024 Rep. Prog. Phys. 87 117901

Do you want to learn more about this topic?

Strong-laser-field physics, non-classical light states and quantum information science by U Bhattacharya, Th Lamprou, A S Maxwell, A Ordóñez, E Pisanty, J Rivera-Dean, P Stammer, M F Ciappina, M Lewenstein and P Tzallas (2023)


  •  

Broadband wireless gets even broader thanks to integrated transmitter

Researchers in China have unveiled an ultrabroadband system that uses the same laser and resonator to process signals at frequencies ranging from below 1 GHz up to more than 100 GHz. The system, which is based on a thin-film lithium niobate resonator developed in 2018 by members of the same team, could facilitate the spread of the so-called “Internet of things” in which huge numbers of different devices are networked together at different frequency bands to avoid interference.

Modern complementary metal oxide semiconductor (CMOS) electronic devices generally produce signals at frequencies of a few GHz. These signals are then often shifted into other frequency bands for processing and transmission. For example, sending signals over long distances down silica optical fibres generally means using a frequency of around 200 THz, as the glass is transparent at the corresponding “telecoms” wavelength of 1550 nm.
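The wavelength-frequency conversion is simply f = c/λ; for the 1550 nm band this gives roughly 193 THz, the “around 200 THz” quoted above:

```latex
f = \frac{c}{\lambda} \approx \frac{3.0\times10^{8}\ \mathrm{m\,s^{-1}}}{1.55\times10^{-6}\ \mathrm{m}} \approx 1.93\times10^{14}\ \mathrm{Hz} \approx 193\ \mathrm{THz} .
```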

One of the most popular materials for performing this conversion is lithium niobate. This material has been called “the silicon of photonics” because it is highly nonlinear, allowing optical signals to be generated efficiently at a wide range of frequencies.

Bulk lithium niobate modulators are, however, too large to be practical for integrated devices. In 2018 Cheng Wang and colleagues led by Marko Lončar of Harvard University in Massachusetts, US, developed a miniaturized, thin-film version that used an interferometric design to create a much stronger electro-optic effect over a shorter distance. “Usually, the bandwidth limit is set by the radiofrequency loss,” explains Wang, who is now at the City University of Hong Kong, China. “Being shorter means you can go to much higher frequencies.”

A broadband data transmission system

In the new work, Wang, together with researchers at Peking University in China and the University of California, Santa Barbara in the US, used an optimized version of this setup to make a broadband data transmission system. They divided the output of a telecom-wavelength oscillator into two arms. In one of these arms, optical signal modulation software imprinted a complex amplitude-phase pattern on the wave. The other arm was exposed to the data signal and a lithium niobate microring resonator. The two arms were then recombined at a photodetector, and the frequency difference between the two arms (in the GHz range) was transmitted using an antenna to a detector, where the process was reversed.

Crucially, the offset between the centre frequencies of the two arms (the frequency of the beat note at the photodetector when the two arms are recombined) is determined solely by the frequency shift imposed by the lithium niobate resonator. This can be tuned anywhere between 0.5 GHz and 115 GHz via the thermo-optic effect – essentially, incorporating a small electronic heater and using it to tune the refractive index. The signal is then encoded in modulations of the beat frequency, with additional information imprinted into the phase of the waves.
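A minimal numerical sketch of the heterodyne principle at play here – not the authors’ code, and with an assumed 10 GHz offset and arbitrary amplitudes – shows how recombining the two arms at a photodetector yields an output oscillating at the difference frequency:

```python
import numpy as np

# Minimal illustrative sketch (not the authors' code): two arms derived from the
# same telecom laser recombine at a photodetector. One arm has been frequency-
# shifted by the lithium niobate microring; the detector responds to intensity,
# so its output oscillates at the difference (beat) frequency.
f_offset = 10e9                    # assumed frequency shift from the resonator (Hz)
fs = 20 * f_offset                 # sample fast enough to resolve the beat
t = np.arange(0, 2e-9, 1 / fs)     # 2 ns observation window

E_ref = np.ones_like(t, dtype=complex)             # reference arm (carrier frame)
E_shift = 0.8 * np.exp(2j * np.pi * f_offset * t)  # arm shifted by the microring

photocurrent = np.abs(E_ref + E_shift) ** 2
# photocurrent = 1.64 + 1.6*cos(2*pi*f_offset*t): a 10 GHz beat note that the
# antenna can radiate, with data carried in its amplitude and phase.
```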

The researchers say this system is an improvement on standard electronic amplifiers because such devices usually operate in relatively narrow bands. Using them to make large jumps in frequency therefore means that signals need to be shifted multiple times. This introduces cumulative noise into the signal and is also problematic for applications such as robotic surgery, where the immediate arrival of a signal can literally be a matter of life and death.

Internet of things applications

The researchers demonstrated wireless data transfer across a distance of 1.3 m, achieving speeds of up to 100 gigabits per second. In the present setup, they used three different horn antennas to transmit microwaves of different frequencies through free space, but they hope to improve this: “That is our next goal – to get a fully frequency-tuneable link,” says Peking University’s Haowen Shu.

The researchers believe such a wideband setup could be crucial to the development of the “Internet of things”, in which all sorts of different electronic devices are networked together without unwanted interference. Atmospheric transparency windows below 6 GHz, where losses are lower and propagation distances longer, are likely to be crucial for providing wireless Internet access to rural areas. Meanwhile, higher frequencies – with higher data rates – will probably be needed for augmented reality and remote surgery applications.

Alan Willner, an electrical engineer and optical scientist at the University of Southern California, US, who was not involved in the research, thinks the team is on the right track. “You have lots of spectrum in various radio bands for wireless communications,” he says. “But how are you going to take advantage of these bands to transmit high data rates in a cost-effective and flexible way? Are you going to use multiple different systems – one each for microwave, millimetre wave, and terahertz?  Using one tuneable and reconfigurable integrated platform to cover these bands is significantly better. This research is a great step in that direction.”

The research is published in Nature.


  •  

From ‘rewarding and exciting’ to ‘challenging and overwhelming’: what it means to have a career in intelligence and cyber security

Your next move? A career in intelligence can suit physicists with the right mindset. (Courtesy: Shutterstock/shaneinswedenx)

As a physics graduate or an early career researcher looking for a job, you might not think of the UK’s primary intelligence and security agency – Government Communications Headquarters (GCHQ) – as somewhere you might consider. But GCHQ, which covers counter-terrorism, cybersecurity, organized crime and defence support for the UK, hires a vast number of physicists. Indeed, to celebrate the 2025 International Year of Quantum Science and Technology, the agency has hosted many internal talks, informational campaigns and more.

GCHQ works with the Secret Intelligence Service (MI6) and MI5, as well as the armed forces, a number of international partners, and firms in the private sector and academia. To find out more about a career at GCHQ – working with cutting-edge technology to identify, analyse and disrupt threats to the UK – Physics World speaks to two people with academic backgrounds who have spent long careers at the organization. They tell us about the benefits, the difficulties and the complexity of working at an intelligence agency.

Nia is the deputy director for science at GCHQ, where she has worked for the past 15 years. After studying physics at university, she joined GCHQ as a graduate and has since contributed to a wide range of scientific and technological initiatives in support of national security. She is a Fellow of both the Institute of Physics (IOP), which publishes Physics World, and the Institution of Engineering and Technology (IET).

Cheryl leads GCHQ’s adoption of quantum technologies. Following a degree in engineering, her career began as an apprentice at an avionics company. Since then, she has had many roles across research and development at GCHQ and across broader UK government departments, with a focus on understanding and implementing emerging technology. Cheryl is a Fellow of the IET and a Member of the IOP. 

When did your interest in science first develop?

Nia My fascination with science was nurtured from a young age, largely inspired by my parents. My mum was a physics teacher, and my dad is a passionate historian with an insatiable curiosity about the world. Growing up in an environment rich with books, experiments, and discussions about how things work – whether exploring astrophysics, geology or ancient Egypt – instilled in me a lifelong desire to understand our universe. My mum’s electronics, mechanics and physics lessons meant there were always breadboards, crocodile clips and even a Van de Graaff generator in the house, transforming learning into an exciting tangible experience.

Cheryl As a child I was always interested in nature and in how things work. I used to build bug farms in the garden and still have my old Observer’s books with the butterflies, etc, ticked off when spotted. Leaning towards my practical side of constantly making things (and foolishly believing my careers teacher that a physics degree would only lead to teaching), I took physics, chemistry and maths A-levels and a degree in engineering.

Could you tell us a bit about your educational background and your career path that led to you work at GCHQ?

Nia I was born and grew up in South Wales and attended a Welsh-language school where I studied physics, maths and chemistry at A-level. I then studied physics at Durham University for four years, before I started working at GCHQ as a graduate. My first role was in an area that is now the National Cyber Security Centre (NCSC). As the cyber security arm of GCHQ, it researches the reliability of semiconductors in national security applications and uses that research to shape policy and security standards. This was great for me as my final year in university was focused on material science and condensed matter physics which came in very useful.

Cheryl My engineering degree apprenticeship was through an aerospace company in Cheltenham, and I worked there afterwards designing test kits for the RAF. It was almost natural that I should at least try a few years at GCHQ as a local employer and I had plans to then move to other R&D labs.

What’s it like to work here – what are some of the stresses of working in this kind of an environment and not being able to discuss your job with friends and family? What are some of the best aspects of working at GCHQ?

Nia Working at GCHQ is rewarding and exciting especially as we look at the most exciting developments in emerging technologies. It can also be challenging especially when navigating the complexities of global security challenges amid an unpredictable geopolitical landscape. There are days when media reports or international events feel overwhelming, but knowing that my work contributes towards safeguarding the UK’s interests today and into the future offers a strong sense of purpose.

The most rewarding aspect, by far, is the people. We have some of the brightest, most dedicated experts – mentors, colleagues, friends – whose commitment inspires me daily. Their support and collaboration make even the most demanding days manageable.

Cheryl At GCHQ I found that I have been able to enjoy several very different “careers” within the organization, including opportunities to travel and to develop diverse skills. This, together with a flexibility to change working patterns to suit stages of family life, has meant I have stayed for most of my career.

I’ve had some amazing and unique opportunities and experiences

Cheryl, GCHQ

I’ve had some amazing and unique opportunities and experiences. In the Cheltenham area it’s accepted that so many people work here and is widely respected that we cannot talk about the detail of what we do.

Safety net: maintaining secure communication and anticipating new threats are key to the work carried out at GCHQ. (Shutterstock/S and V Design)

What role does physics and especially quantum science play in what you do? And what role does physics play when it comes to the national security of the UK?

Nia As deputy director of science at GCHQ, my role involves collaborating with experts to understand how emerging technologies, including quantum science, impact national security. Quantum offers extraordinary potential for secure communication and advanced sensing – but it equally threatens to upend existing security protocols if adversaries harness it maliciously. A deep understanding of physics is crucial – not only to spot opportunities but also to anticipate and counter threats.

Quantum science is just one example of how a fundamental understanding of physics and maths gives you the foundations to understand the broad waterfront of emerging technologies coming our way. We work closely with government departments, academia, industry and start-ups to ensure the UK remains at the forefront of this field, shaping a resilient and innovative security ecosystem.

Cheryl I first came across quantum science, technologies and quantum computing around 15 years ago through an emerging technology analysis role in R&D; and I watched and learned keenly as I could see that these would be game changing. Little did I know at the time that I would later be leading our adoption of quantum and just how significant these emerging technologies for sensing, timing and computing would grow to be.

The UK national ecosystem developing around quantum technologies is a great mix of minds from academia, industry and government departments and is one of the most collegiate, inspiring and well-motivated communities that I have interacted with.

For today’s physics graduates who might be interested in a career at GCHQ, what are some of the key skills they require?

Nia Many people will have heard of historic tales of the tap on the shoulder for people to work in intelligence agencies, but as with all other jobs the reality is that people can find out about careers at GCHQ in much the same way they would with any other kind of job.

Maintaining a hunger to learn and adapt is what will set you apart

Nia, GCHQ

I would emphasize qualities like curiosity, problem-solving and resilience as being key. The willingness to roll up your sleeves, a genuine care for collaborative work, and empathy are equally important – particularly because much of what we do is sensitive and demands trust and discretion. Maintaining a hunger to learn and adapt is what will set you apart.

Cheryl We have roles where you will be helping to solve complex problems – doing work you simply won’t find anywhere else. It’s key to have curiosity, an open mind and don’t be put off by the fact you can’t ask too many questions in advance!

What sort of equality, diversity and inclusion initiatives do you have at GCHQ and how are you looking to get more women and minorities working there?

Nia Diversity and inclusion are mission-critical for us at GCHQ, gathering the right mix of minds to find innovative solutions to the toughest of problems. We’re committed to building on our work to better represent the communities we serve, including increasing the number of people from ethnic minority backgrounds and the number of women in senior roles.

Cheryl We are committed to having a workforce that reflects the communities we serve. Our locations in the north-west, in both Manchester and now Lancashire, are part of the mission to find the right mix of minds.

What is your advice to today’s physics grads? What is it that you know today that you wish you knew at the start of your career?

Nia One key lesson is that career paths are rarely linear. When starting out, uncertainty can feel daunting, but it’s an opportunity for growth. Embrace challenges and seize opportunities that excite you – whether they seem narrowly related to your studies or not. Every experience contributes to your development. Additionally, don’t underestimate the importance of work–life balance. GCHQ offers a supportive environment – remember, careers are marathons, not sprints. Patience and curiosity will serve you well.

Cheryl It takes multidisciplinary teams to deliver game-changers and new ecosystems. Your initial “career choices” are just a stepping stone from which you can forge your own path and follow your instincts.

The post From ‘rewarding and exciting’ to ‘challenging and overwhelming’: what it means to have a career in intelligence and cyber security appeared first on Physics World.

  •  

Desert dust helps freeze clouds in the northern hemisphere

Micron-sized dust particles in the atmosphere could trigger the formation of ice in certain types of clouds in the Northern Hemisphere. This is the finding of researchers in Switzerland and Germany, who used 35 years of satellite data to show that nanoscale defects on the surface of these aerosol particles are responsible for the effect. Their results, which agree with laboratory experiments on droplet freezing, could be used to improve climate models and to advance studies of cloud seeding for geoengineering.

In the study, which was led by environmental scientist Diego Villanueva of ETH Zürich, the researchers focused on clouds in the so-called mixed-phase regime, which form at temperatures of between −39 °C and 0 °C and are commonly found in mid- and high-latitudes, particularly over the North Atlantic, Siberia and Canada. These mixed-phase regime clouds (MPRCs) are often topped by a liquid or ice layer, and their makeup affects how much sunlight they reflect back into space and how much water they can release as rain or snow. Understanding them is therefore important for forecasting weather and making projections of future climate.

Researchers have known for a while that MPRCs are extremely sensitive to the presence of ice-nucleating particles in their environment. Such particles mainly come from mineral dust aerosols (such as K-feldspar, quartz, albite and plagioclase) that get swept up into the upper atmosphere from deserts. The Sahara Desert in northern Africa, for example, is a prime source of such dust in the Northern Hemisphere.

More dust leads to more ice clouds

Using 35 years of satellite data collected as part of the Cloud_cci project and MERRA-2 aerosol reanalyses, Villanueva and colleagues looked for correlations between dust levels and the formation of ice-topped clouds. They found that at temperatures of between −15 °C and −30 °C, the more dust there was, the more frequent the ice clouds were. What is more, their calculated increase in ice-topped clouds with increasing dust loading agrees well with previous laboratory experiments that predicted how dust triggers droplet freezing.
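
As a rough illustration of what such a correlation analysis involves, the sketch below bins synthetic cloud-top observations by temperature and compares the frequency of ice-topped clouds under low and high dust loading. The arrays are invented stand-ins for the Cloud_cci and MERRA-2 products, not the team's actual data or analysis pipeline.

```python
import numpy as np

# Invented stand-ins for the satellite and reanalysis products (illustration only):
# cloud-top temperature (deg C), dust loading (arbitrary units) and an ice-topped flag
rng = np.random.default_rng(0)
temp = rng.uniform(-35, -10, 100_000)
dust = rng.lognormal(mean=0.0, sigma=1.0, size=temp.size)
# Toy "truth": ice-topped clouds become more likely when it is colder and dustier
p_ice = 1.0 / (1.0 + np.exp(-(0.15 * (-temp - 20) + 0.8 * np.log(dust))))
ice = rng.random(temp.size) < p_ice

# Compare ice-cloud frequency in clean and dusty conditions, per temperature bin
dusty = dust > np.median(dust)
for lo in range(-30, -14, 5):  # 5-degree bins spanning -30 to -15 deg C
    sel = (temp >= lo) & (temp < lo + 5)
    f_clean = ice[sel & ~dusty].mean()
    f_dusty = ice[sel & dusty].mean()
    print(f"{lo} to {lo + 5} degC: ice fraction {f_clean:.2f} (clean) vs {f_dusty:.2f} (dusty)")
```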

The new study, which is detailed in Science, shows that there is a connection between aerosols in the micrometre-size range and cloud ice observed over distances of several kilometres, Villanueva says. “We found that it is the nanoscale defects on the surface of dust aerosols that trigger ice clouds, so the process of ice glaciation spans more than 15 orders of magnitude in length,” he explains.

Thanks to this finding, Villanueva tells Physics World that climate modellers can use the team’s dataset to better constrain aerosol–cloud processes, potentially helping them to construct better estimates of cloud feedback and global temperature projections.

The result also shows how sensitive clouds are to varying aerosol concentrations, he adds. “This could help bring forward the field of cloud seeding and include this in climate geoengineering efforts.”

The researchers say they have successfully replicated their results using a climate model and are now drafting a new manuscript to further explore the implications of dust-driven cloud glaciation for climate, especially for the Arctic.

The post Desert dust helps freeze clouds in the northern hemisphere appeared first on Physics World.

  •  

Radioactive ion beams enable simultaneous treatment and imaging in particle therapy

Researchers in Germany have demonstrated the first cancer treatment using a radioactive carbon ion beam (11C), on a mouse with a bone tumour close to the spine. Performing particle therapy with radioactive ion beams enables simultaneous treatment and visualization of the beam within the body.

Particle therapy using beams of protons or heavy ions is a highly effective cancer treatment, with the favourable depth–dose deposition – the Bragg peak – providing extremely conformal tumour targeting. This conformality, however, makes particle therapy particularly sensitive to range uncertainties, which can impact the Bragg peak position.

One way to reduce such uncertainties is to use positron emission tomography (PET) to map the isotopes generated as the treatment beam interacts with tissues in the patient. For therapy with carbon (12C) ions, currently performed at 17 centres worldwide, this involves detecting the beta decay of 10C and 11C projectile fragments. Unfortunately, such fragments generate a small PET signal, while their lower mass shifts the measured activity peak away from the Bragg peak.

The researchers – working within the ERC-funded BARB (Biomedical Applications of Radioactive ion Beams) project – propose that treatment with positron-emitting ions such as 11C could overcome these obstacles. Radioactive ion beams have the same biological effectiveness as their corresponding stable ion beams, but generate an order of magnitude larger PET signal. They also reduce the shift between the activity and dose peaks, enabling precise localization of the ion beam in vivo.

“Range uncertainty remains the main problem of particle therapy, as we do not know exactly where the Bragg peak is,” explains Marco Durante, head of biophysics at the GSI Helmholtz Centre for Heavy Ion Research and principal investigator of the BARB project. “If we ‘aim-and-shoot’ using a radioactive beam and PET imaging, we can see where the beam is and can then correct it. By doing this, we can reduce the margins around the target that spoil the precision of particle therapy.”

In vivo experiments

To test this premise, Durante and colleagues performed in vivo experiments at the GSI/FAIR accelerator facility in Darmstadt. For online range verification, they used a portable small-animal in-beam PET scanner built by Katia Parodi and her team at LMU Munich. The scanner, initially designed for the ERC project SIRMIO (Small-animal proton irradiator for research in molecular image-guided radiation-oncology), contains 56 depth-of-interaction detectors – based on scintillator blocks of pixelated LYSO crystals – arranged spherically with an inner diameter of 72 mm.

LMU researchers with small-animal PET scanner
LMU researchers Members of the LMU team involved in the BARB project (left to right: Peter Thirolf, Giulio Lovatti, Angelica Noto, Francesco Evangelista, Munetaka Nitta and Katia Parodi) with the small-animal PET scanner. (Courtesy: Katia Parodi/Francesco Evangelista, LMU)

“Not only does our spherical in-beam PET scanner offer unprecedented sensitivity and spatial resolution, but it also enables on-the-fly monitoring of the activity implantation for direct feedback during irradiation,” says Parodi, co-principal investigator of the BARB project.

The researchers used a radioactive 11C-ion beam – produced at the GSI fragment separator – to treat 32 mice with an osteosarcoma tumour implanted in the neck near the spinal cord. To encompass the full target volume, they employed a range modulator to produce a spread-out Bragg peak (SOBP) and a plastic compensator collar, which also served to position and immobilize the mice. The anaesthetized animals were placed vertically inside the PET scanner and treated with either 20 or 5 Gy at a dose rate of around 1 Gy/min.

For each irradiation, the team compared the measured activity with Monte Carlo-simulated activity based on pre-treatment microCT scans. The activity distributions were shifted by about 1 mm, attributed to anatomical changes between the scans (with mice positioned horizontally) and irradiation (vertical positioning). After accounting for this anatomical shift, the simulation accurately matched the measured activity. “Our findings reinforce the necessity of vertical CT planning and highlight the potential of online PET as a valuable tool for upright particle therapy,” the researchers write.
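
The range check described above boils down to measuring the offset between a measured depth–activity profile and a simulated one. The sketch below extracts that offset for two synthetic profiles using a cross-correlation; it illustrates the idea only and is not the BARB collaboration's analysis code.

```python
import numpy as np

# Synthetic depth-activity profiles on a 0.1 mm grid (illustration only)
depth = np.arange(0.0, 40.0, 0.1)          # depth in mm

def profile(peak_mm):
    """Gaussian-shaped stand-in for a measured or simulated activity profile."""
    return np.exp(-0.5 * ((depth - peak_mm) / 2.0) ** 2)

simulated = profile(25.0)                  # activity peak expected at 25 mm
measured = profile(26.0) + 0.02 * np.random.default_rng(1).random(depth.size)

# The best-aligning shift follows from the maximum of the cross-correlation
corr = np.correlate(measured - measured.mean(), simulated - simulated.mean(), mode="full")
lag_bins = np.argmax(corr) - (depth.size - 1)
print(f"estimated range shift: {lag_bins * 0.1:.1f} mm")   # ~1.0 mm for this example
```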

With the tumour so close to the spine, even small range uncertainties risk damage to the spinal cord, so the team used the online PET images generated during the irradiation to check that the SOBP did not cover the spine. It did not in any of the animals, but Durante notes that if it had, the beam could have been moved to enable “truly adaptive” particle therapy. Assessing the mice for signs of radiation-induced myelopathy (which can lead to motor deficits and paralysis) revealed that none exhibited severe toxicity, further demonstrating that the spine was not exposed to high doses.

PET imaging in a mouse
PET imaging in a mouse (a) Simulation showing the expected 11C-ion dose distribution in the pre-treatment microCT scan. (b) Corresponding simulated PET activity. (c) Online PET image of the activity during 11C irradiation, overlaid on the same microCT used for simulations. The target is outlined in black, the spine in red. (Courtesy: CC BY 4.0/Nat. Phys. 10.1038/s41567-025-02993-8)

Following treatment, tumour measurements revealed complete tumour control after 20 Gy irradiation and prolonged tumour growth delay after 5 Gy, suggesting complete target coverage in all animals.

The researchers also assessed the washout of the signal from the tumour, which includes a slow activity decrease due to the decay of 11C (which has a half-life of 20.34 min), plus a faster decrease as blood flow removes the radioactive isotopes from the tumour. The results showed that the biological washout was dose-dependent, with the fast component visible at 5 Gy but disappearing at 20 Gy.
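
One common way to express such a measurement, assuming a generic two-component washout model rather than the paper's exact parametrization, is to factor the measured tumour activity into the known physical decay of 11C and a biological clearance term:

```latex
% Generic two-component washout model (an assumed form, not the paper's exact fit)
\[
  A(t) = A_0 \, e^{-\lambda_{\mathrm{phys}} t}
  \left[\, f\, e^{-\lambda_{\mathrm{bio}} t} + (1 - f) \,\right],
  \qquad
  \lambda_{\mathrm{phys}} = \frac{\ln 2}{T_{1/2}} = \frac{\ln 2}{20.34\ \mathrm{min}} \approx 0.034\ \mathrm{min}^{-1},
\]
% where f is the fraction of activity removed by the fast (blood-flow) component
% and lambda_bio its clearance rate.
```

In this picture, the dose dependence reported by the team corresponds to the fast fraction f shrinking towards zero at 20 Gy.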

“We propose that this finding is due to damage to the blood vessel feeding the tumour,” says Durante. “If this is true, high-dose radiotherapy may work in a completely different way from conventional radiotherapy: rather than killing all the cancer stem cells, we just starve the tumour by damaging the blood vessels.”

Future plans

Next, the team intends to investigate the use of 10C or 15O treatment beams, which should provide stronger signals and increased temporal resolution. A new Super-FRS fragment separator at the FAIR accelerator facility will provide the high-intensity beams required for studies with 10C.

Looking further ahead, clinical translation will require a realistic and relatively cheap design, says Durante. “CERN has proposed a design [the MEDICIS-Promed project] based on ISOL [isotope separation online] that can be used as a source of radioactive beams in current accelerators,” he tells Physics World. “At GSI we are also working on a possible in-flight device for medical accelerators.”

The findings are reported in Nature Physics.

The post Radioactive ion beams enable simultaneous treatment and imaging in particle therapy appeared first on Physics World.

  •  

Garbage in, garbage out: why the success of AI depends on good data

Artificial intelligence (AI) is fast becoming the new “Marmite”. Like the salty spread that polarizes taste-buds, you either love AI or you hate it. To some, AI is miraculous, to others it’s threatening or scary. But one thing is for sure – AI is here to stay, so we had better get used to it.

In many respects, AI is very similar to other data-analytics solutions in that how it works depends on two things. One is the quality of the input data. The other is the integrity of the user to ensure that the outputs are fit for purpose.

Previously a niche tool for specialists, AI is now widely available for general-purpose use, in particular through Generative AI (GenAI) tools. Typically built on large language models (LLMs), these are now widely available through, for example, OpenAI’s ChatGPT, Microsoft Copilot, Anthropic’s Claude, Adobe Firefly or Google Gemini.

GenAI has become possible thanks to the availability of vast quantities of digitized data and significant advances in computing power. Neural-network models of this size would simply not have been feasible without those two fundamental ingredients.

GenAI is incredibly powerful when it comes to searching and summarizing large volumes of unstructured text. It exploits unfathomable amounts of data and is getting better all the time, offering users significant benefits in terms of efficiency and labour saving.

Many people now use it routinely for writing meeting minutes, composing letters and e-mails, and summarizing the content of multiple documents. AI can also tackle complex problems that would be difficult for humans to solve, such as climate modelling, drug discovery and protein-structure prediction.

I’d also like to give a shout out to tools such as Microsoft Live Captions and Google Translate, which help people from different locations and cultures to communicate. But like all shiny new things, AI comes with caveats, which we should bear in mind when using such tools.

User beware

LLMs, by their very nature, have been trained on historical data. They can’t therefore tell you exactly what may happen in the future, or indeed what may have happened since the model was originally trained. Models can also be constrained in their answers.

Take the Chinese AI app DeepSeek. When the BBC asked it what had happened at Tiananmen Square in Beijing on 4 June 1989 – when Chinese troops cracked down on protestors – the Chatbot’s answer was suppressed. Now, this is a very obvious piece of information control, but subtler instances of censorship will be harder to spot.

Trouble is, we can’t know all the nuances of the data that models have been trained on

We also need to be conscious of model bias. At least some of the training data will probably come from social media and public chat forums such as X, Facebook and Reddit. Trouble is, we can’t know all the nuances of the data that models have been trained on – or the inherent biases that may arise from this.

One example of unfair gender bias was when Amazon developed an AI recruiting tool. Based on 10 years’ worth of CVs – mostly from men – the tool was found to favour men. Thankfully, Amazon ditched it. But then there was Apple’s gender-biased credit-card algorithm that led to men being given higher credit limits than women with similar credit ratings.

Another problem with AI is that it sometimes acts as a black box, making it hard for us to understand how, why or on what grounds it arrived at a certain decision. Think about those online Captcha tests we have to take when accessing online accounts. They often present us with a street scene and ask us to select those parts of the image containing a traffic light.

The tests are designed to distinguish between humans and computers or bots – the expectation being that AI can’t consistently recognize traffic lights. However, AI-based advanced driver assist systems (ADAS) presumably perform this function seamlessly on our roads. If not, surely drivers are being put at risk?

A colleague of mine, who drives an electric car that happens to share its name with a well-known physicist, confided that the ADAS in his car becomes unresponsive, especially when at traffic lights with filter arrows or multiple sets of traffic lights. So what exactly is going on with ADAS? Does anyone know?

Caution needed

My message when it comes to AI is simple: be careful what you ask for. Many GenAI applications will store user prompts and conversation histories and will likely use this data for training future models. Once you enter your data, there’s no guarantee it’ll ever be deleted. So think carefully before sharing any personal data, such as medical or financial information. It also pays to keep prompts non-specific (avoiding using your name or date of birth) so that they cannot be traced directly to you.

Democratization of AI is a great enabler and it’s easy for people to apply it without an in-depth understanding of what’s going on under the hood. But we should be checking AI-generated output before we use it to make important decisions and we should be careful of the personal information we divulge.

It’s easy to become complacent when we are not doing all the legwork. We are reminded under the terms of use that “AI can make mistakes”, but I wonder what will happen if models start consuming AI-generated erroneous data. Just as with other data-analytics problems, AI suffers from the old adage of “garbage in, garbage out”.

But sometimes I fear it’s even worse than that. We’ll need a collective vigilance to avoid AI being turned into “garbage in, garbage squared”.

The post Garbage in, garbage out: why the success of AI depends on good data appeared first on Physics World.

  •  

Why foamy heads on Belgian beers last so long

It’s well documented that a frothy head on a beverage can stop the liquid from sloshing around and onto the floor – it’s one reason why coffee, which has no foam, swills around more than beer when you carry it.

When it comes to beer, a clear sign of a good brew is a big head of foam at the top of a poured glass.

Beer foam is made of many small bubbles of air, separated from each other by thin films of liquid. These thin films must remain stable, or the bubbles will pop, and the foam will collapse.

What holds these thin films together is not completely understood; likely contributors include protein aggregates, surface viscosity and surfactants – molecules that reduce surface tension and are found in soaps and detergents.

To find out more, researchers from ETH Zurich and Eindhoven University of Technology (EUT) investigated beer-foam stability for different types of beers at varying stages of the fermentation process.

They found that for single-fermentation beers, the foams are held together mostly by the surface viscosity of the liquid. This is influenced by proteins in the beer: the more protein it contains, the more viscous the film and the more stable the foam.

“We can directly visualize what’s happening when two bubbles come into close proximity,” notes EUT material scientist Emmanouil Chatzigiannakis. “We can directly see the bubble’s protein aggregates, their interface, and their structure.”

When it comes to double-fermented beers, however, the proteins in the beer are altered slightly by yeast cells and come together to form a two-dimensional membrane that keeps foam intact longer.

The head was found to be even more stable for triple-fermented beers, which include Belgian Trappist beers. The proteins change further and behave like a surfactant that stabilizes the bubbles.

The team says that understanding how the fermentation process alters bubble stability could lead to more efficient ways of creating foams – or to ways of controlling the amount of froth so that everyone can pour a perfect glass of beer every time. Cheers!

The post Why foamy heads on Belgian beers last so long appeared first on Physics World.

  •  

Making molecules with superheavy elements could shake up the periodic table

Nuclear scientists at the Lawrence Berkeley National Laboratory (LBNL) in the US have produced and identified molecules containing nobelium for the first time. This element, which has an atomic number of 102, is the heaviest ever to be observed in a directly identified molecule, and team leader Jennifer Pore says the knowledge gained from such work could lead to a shake-up at the bottom of the periodic table.

“We compared the chemical properties of nobelium side-by-side to simultaneously produced molecules containing actinium (element number 89),” says Pore, a research scientist at LBNL. “The success of these measurements demonstrates the possibility to further improve our understanding of heavy and superheavy-element chemistry and so ensure that these elements are placed correctly on the periodic table.”

The periodic table currently lists 118 elements. As well as vertical “groups” containing elements with similar properties and horizontal “periods” in which the number of protons (atomic number Z) in the nucleus increases from left to right, these elements are arranged in three blocks. The block that contains actinides such as actinium (Ac) and nobelium (No), as well as the slightly lighter lanthanide series, is often shown offset, below the bottom of the main table.

The end of a predictive periodic table?

Arranging the elements this way is helpful because it gives scientists an intuitive feel for the chemical properties of different elements. It has even made it possible to predict the properties of new elements as they are discovered in nature or, more recently, created in the laboratory.

The problem is that the traditional patterns we’ve come to know and love may start to break down for elements at the bottom of the table, putting an end to the predictive periodic table as we know it. The reason, Pore explains, is that these heavy nuclei have a very large number of protons. In the actinides (Z > 88), for example, the intense charge of these “extra” protons exerts such a strong pull on the inner electrons that relativistic effects come into play, potentially changing the elements’ chemical properties.

“As some of the electrons are sucked towards the centre of the atom, they shield some of the outer electrons from the pull,” Pore explains. “The effect is expected to be even stronger in the superheavy elements, and this is why they might potentially not be in the right place on the periodic table.”

Understanding the full impact of these relativistic effects is difficult because elements heavier than fermium (Z = 100) need to be produced and studied atom by atom. This means resorting to complex equipment such as accelerated ion beams and the FIONA (For the Identification Of Nuclide A) device at LBNL’s 88-Inch Cyclotron Facility.

Producing and directly identifying actinide molecules

The team chose to study Ac and No in part because they represent the extremes of the actinide series. As the first in the series, Ac has no electrons in its 5f shell and is so rare that the crystal structure of an actinium-containing molecule was only determined recently. The chemistry of No, which contains a full complement of 14 electrons in its 5f shell and is the heaviest of the actinides, is even less well known.

In the new work, which is described in Nature, Pore and colleagues produced and directly identified molecular species containing Ac and No ions. To do this, they first had to produce Ac and No. They achieved this by accelerating beams of 48Ca with the 88-Inch Cyclotron and directing them onto targets of 169Tm and 208Pb, respectively. They then used the Berkeley Gas-filled Separator to separate the resulting actinide ions from unreacted beam material and reaction by-products.

The next step was to inject the ions into a chamber in the FIONA spectrometer known as a gas catcher. This chamber was filled with high-purity helium, as well as trace amounts of H2O and N2, at a pressure of approximately 150 torr. After interactions with the helium gas reduced the actinide ions to their 2+ charge state, so-called “coordination compounds” were able to form between the 2+ actinide ions and the H2O and N2 impurities. This compound-formation step took place either in the gas buffer cell itself or as the gas-ion mixture exited the chamber via a 1.3-mm opening and entered a low-pressure (several torr) environment. This transition caused the gas to expand at supersonic speeds, cooling it rapidly and allowing the molecular species to stabilize.

Once the actinide molecules formed, the researchers transferred them to a radio-frequency quadrupole cooler-buncher ion trap. This trap confined the ions for up to 50 ms, during which time they continued to collide with the helium buffer gas, eventually reaching thermal equilibrium. After they had cooled, the molecules were reaccelerated using FIONA’s mass spectrometer and identified according to their mass-to-charge ratio.
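
Identification then amounts to comparing the measured mass-to-charge ratio of each species with those of candidate coordination compounds. The toy calculation below uses placeholder integer masses and hypothetical [M(H2O)n(N2)m]2+ species chosen purely for illustration; the isotopes and molecules actually observed are those reported in the paper.

```python
# Placeholder integer masses in atomic mass units (illustration only)
M_H2O, M_N2 = 18, 28

def mz(actinide_mass_u, n_h2o=0, n_n2=0, charge=2):
    """m/z of a hypothetical [M(H2O)n(N2)m]^q+ coordination ion."""
    return (actinide_mass_u + n_h2o * M_H2O + n_n2 * M_N2) / charge

for label, mass in [("No (~254 u)", 254), ("Ac (~213 u)", 213)]:
    for n_h2o, n_n2 in [(0, 0), (1, 0), (2, 0), (1, 1)]:
        print(f"{label} + {n_h2o} H2O + {n_n2} N2, 2+ ion -> m/z ~ {mz(mass, n_h2o, n_n2):.1f}")
```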

A fast and sensitive instrument

FIONA is much faster than previous such instruments and more sensitive. Both properties are important when studying the chemistry of heavy and superheavy elements, which Pore notes are difficult to make, and which decay quickly. “Previous experiments measured the secondary particles made when a molecule with a superheavy element decayed, but they couldn’t identify the exact original chemical species,” she explains. “Most measurements reported a range of possible molecules and were based on assumptions from better-known elements. Our new approach is the first to directly identify the molecules by measuring their masses, removing the need for such assumptions.”

As well as improving our understanding of heavy and superheavy elements, Pore says the new work might also benefit research on radioactive isotopes used in medical treatment. For example, the 225Ac isotope shows promise for treating certain metastatic cancers, but it is difficult to make and only available in small quantities, which limits access for clinical trials and treatment. “This means that researchers have had to forgo fundamental chemistry experiments to figure out how to get it into patients,” Pore notes. “But if we could understand such radioactive elements better, we might have an easier time producing the specific molecules needed.”

The post Making molecules with superheavy elements could shake up the periodic table appeared first on Physics World.

  •  

Super sticky underwater hydrogels designed using data mining and AI

The way in which new materials are designed is changing, with data becoming ever more important in the discovery and design process. Designing soft materials is a particularly tricky task that requires selection of different “building blocks” (monomers in polymeric materials, for example) and optimization of their arrangement in molecular space.

Soft materials also exhibit many complex behaviours that need to be balanced, and their molecular and structural complexities make it difficult for computational methods to help in the design process – often requiring costly trial and error experimental approaches instead. Now, researchers at Hokkaido University in Japan have combined artificial intelligence (AI) with data mining methods to develop an ultra-sticky hydrogel material suitable for very wet environments – a difficult design challenge because the properties that make materials soft don’t usually promote adhesion. They report their findings in Nature.

Challenges of designing sticky hydrogels

Hydrogels are permeable soft materials composed of interlinked polymer networks that hold water within the network. They are highly versatile, with properties controlled by altering the chemical makeup and structure of the material.

Designing hydrogels computationally to perform a specific function is difficult, however, because the polymers used to build the hydrogel network can contain a plethora of chemical functional groups, complicating the discovery of suitable polymers and the structural makeup of the hydrogel. The properties of hydrogels are also influenced by factors including the molecular arrangement and intermolecular interactions between molecules (such as van der Waals forces and hydrogen bonds). There are further challenges for adhesive hydrogels in wet environments, as hydrogels will swell in the presence of water, which needs to be factored into the material design.

Data-driven methods provide breakthrough

To develop a hydrogel with a strong and lasting underwater adhesion, the researchers mined data from the National Center for Biotechnology Information (NCBI) Protein database. This database contains the amino acid sequences responsible for adhesion in underwater biological systems – such as those found in bacteria, viruses, archaea and eukaryotes. The protein sequences were synthetically mimicked and adapted for the polymer strands in hydrogels.

“We were inspired by nature’s adhesive proteins, but we wanted to go beyond mimicking a few examples. By mining the entire protein database, we aimed to systematically explore new design rules and see how far AI could push the boundaries of underwater adhesion,” says co-lead author Hailong Fan.

The researchers used information from the database to initially design and synthesize 180 bioinspired hydrogels, each with a unique polymer network and all of which showed adhesive properties beyond other hydrogels. To improve them further, the team employed machine learning to create hydrogels demonstrating the strongest underwater adhesive properties to date, with instant and repeatable adhesive strengths exceeding 1 MPa – an order-of-magnitude improvement over previous underwater adhesives. In addition, the AI-designed hydrogels were found to be functional across many different surfaces in both fresh and saline water.
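
In broad terms, that machine-learning step resembles a surrogate-model screening loop: fit a regressor to the adhesion measured for the synthesized hydrogels, then use it to rank untested candidate recipes. The sketch below illustrates the idea with invented data, hypothetical monomer-fraction descriptors and a generic random-forest model; it is not the authors' dataset or algorithm.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Invented dataset standing in for the 180 bioinspired hydrogels: each row holds the
# fractions of four monomer classes; y is a toy underwater adhesion strength in MPa
rng = np.random.default_rng(42)
X = rng.dirichlet(alpha=np.ones(4), size=180)
y = 0.3 + 2.0 * X[:, 0] * X[:, 2] + 0.1 * rng.normal(size=180)

# Fit a surrogate model to the synthesized recipes...
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)

# ...then screen a large pool of untested recipes and propose the most promising ones
candidates = rng.dirichlet(alpha=np.ones(4), size=10_000)
predicted = model.predict(candidates)
top = np.argsort(predicted)[-3:]
print("top predicted adhesion (MPa):", np.round(predicted[top], 2))
print("suggested monomer fractions:\n", np.round(candidates[top], 2))
```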

“The key achievement is not just creating a record-breaking underwater adhesive hydrogel but demonstrating a new pathway – moving from biomimetic experience to data-driven, AI-guided material design,” says Fan.

A versatile adhesive

The researchers took the three best-performing hydrogels and tested them in different wet environments to show that they could maintain their adhesive properties over long periods. One hydrogel was used to stick a rubber duck to a rock by the sea, where it remained in place despite continuous wave impacts over many tide cycles. A second was used to patch a 20 mm hole in a water-filled pipe, instantly stopping a high-pressure leak; this hydrogel remained in place for five months without issue. The third was placed under the skin of mice to demonstrate biocompatibility.

The super-strong adhesion in wet environments could have wide-ranging applications, from biomedical engineering (prosthetic coatings or wearable biosensors) to deep-sea exploration and marine farming. The researchers also note that this data-driven approach could be adapted for designing other functional soft materials.

When asked about what’s next for this research, Fan says that “our next step is to study the molecular mechanisms behind these adhesives in more depth, and to expand this data-driven design strategy to other soft materials, such as self-healing and biomedical hydrogels”.

The post Super sticky underwater hydrogels designed using data mining and AI appeared first on Physics World.

  •  

From a laser lab to The Economist: physicist Jason Palmer on his move to journalism

My guest in this episode of the Physics World Weekly podcast is the journalist Jason Palmer, who co-hosts “The Intelligence” podcast at The Economist.

Palmer did a PhD in chemical physics at Imperial College London before turning his hand to science writing with stints at the BBC and New Scientist.

He explains how he made the transition from the laboratory to the newsroom and offers tips for scientists planning to make the same career journey. We also chat about how artificial intelligence is changing how journalists work.

The post From a laser lab to <em>The Economist</em>: physicist Jason Palmer on his move to journalism appeared first on Physics World.

  •  

Crainio’s Panicos Kyriacou explains how their light-based instrument can help diagnose brain injury

Traumatic brain injury (TBI), caused by a sudden impact to the head, is a leading cause of death and disability. After such an injury, the most important indicator of its severity is intracranial pressure – the pressure inside the skull. Currently, however, the only way to assess this is by inserting a pressure sensor into the patient’s brain. UK-based startup Crainio aims to change this by developing a non-invasive method for measuring intracranial pressure using a simple optical probe attached to the patient’s forehead.

Can you explain why diagnosing TBI is such an important clinical challenge?

Every three minutes in the UK, someone is admitted to hospital with a head injury – it’s a very common problem. But when someone has a blow to the head, nobody knows how bad it is until they actually reach the hospital. TBI is something that, at the moment, cannot be assessed at the point of injury.

From the time of impact to the time that the patient receives an assessment by a neurosurgical expert is known as the golden hour. And nobody knows what’s happening to the brain during this time – you don’t know how best to manage the patient, whether they have a severe TBI with intracranial pressure rising in the head, or just a concussion or a moderate TBI.

Once at the hospital, the neurosurgeons have to assess the patient’s intracranial pressure, to determine whether it is above the threshold that classifies the injury as severe. And to do that, they have to drill a hole in the head – literally – and place an electrical probe into the brain. This really is one of the most invasive non-therapeutic procedures, and you obviously can’t do this to every patient that comes with a blow in the head. It has its risks, there is a risk of haemorrhage or of infection.

Therefore, there’s a need to develop technologies that can measure intracranial pressure more effectively, earlier and in a non-invasive manner. For many years, this was almost like a dream: “How can you access the brain and see if the pressure is rising in the brain, just by placing an optical sensor on the forehead?”

Crainio has now created such a non-invasive sensor; what led to this breakthrough?

The research goes back to 2016, at the Research Centre for Biomedical Engineering at City, University of London (now City St George’s, University of London), when the National Institute for Health Research (NIHR) gave us our first grant to investigate the feasibility of a non-invasive intracranial sensor based on light technologies. We developed a prototype, secured the intellectual property and conducted a feasibility study on TBI patients at the Royal London Hospital, the biggest trauma hospital in the UK.

It was back in 2021, before Crainio was established, that we first discovered that after we shone certain frequencies of light, like near-infrared, into the brain through the forehead, the optical signals coming back – known as the photoplethysmogram, or PPG – contained information about the physiology or the haemodynamics of the brain.

When the pressure in the brain rises, the brain swells up, but it cannot go anywhere because the skull is like concrete. Therefore, the arteries and vessels in the brain are compressed by that pressure. PPG measures changes in blood volume as it pulses through the arteries during the cardiac cycle. If you have a viscoelastic artery that is opening and closing, the volume of blood changes and this is captured by the PPG. Now, if you have an artery that is compromised, pushed down because of pressure in the brain, that viscoelastic property is impacted and that will impact the PPG.

Changes in the PPG signal arising from compression of the vessels in the brain can therefore give us information about the intracranial pressure. We developed algorithms to interrogate this optical signal and machine learning models to estimate intracranial pressure from it.
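
Schematically, such a pipeline extracts morphological features from each PPG pulse and maps them to an intracranial pressure estimate with a model trained against invasive reference readings. The sketch below uses invented pulses, a hypothetical feature set and a generic regressor; it illustrates the approach rather than Crainio's proprietary algorithm.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def pulse_features(pulse, fs=100.0):
    """A few simple morphological features of one PPG pulse (hypothetical feature set)."""
    return np.array([
        pulse.max() - pulse.min(),      # pulse amplitude
        np.argmax(pulse) / fs,          # time to systolic peak (s)
        pulse.sum() / fs,               # area under the pulse
        np.std(np.diff(pulse)) * fs,    # slope variability
    ])

# Invented training data: PPG pulses paired with invasive ICP readings in mmHg
rng = np.random.default_rng(3)
pulses = [a * np.sin(np.linspace(0, np.pi, 100)) + 0.05 * rng.normal(size=100)
          for a in rng.uniform(0.5, 1.5, 500)]
icp_reference = rng.uniform(5, 30, 500)     # stand-in for intracranial bolt readings

X = np.vstack([pulse_features(p) for p in pulses])
model = GradientBoostingRegressor().fit(X, icp_reference)
print("predicted ICP for one pulse (mmHg):", round(float(model.predict(X[:1])[0]), 1))
```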

How did the establishment of Crainio help to progress the sensor technology?

Following our research within the university, Crainio was set up in 2022. It brought together a team of experts in medical devices and optical sensors to lead the further development and commercialization of this device. And this small team worked tirelessly over the last few years to generate funding to progress the development of the optical sensor technology and bring it to a level that is ready for further clinical trials.

Panicos Kyriacou
Panicos Kyriacou “At Crainio we want to create a technology that could be used widely, because there is a massive need, but also because it’s affordable.” (Courtesy: Crainio)

In 2023, Crainio was successful with an Innovate UK biomedical catalyst grant, which will enable the company to engage in a clinical feasibility study, optimize the probe technology and further develop the algorithms. The company was later awarded another NIHR grant to move into a validation study.

The interest in this project has been overwhelming. We’ve had a very positive feedback from the neurocritical care community. But we also see a lot of interest from communities where injury to the brain is significant, such as rugby associations, for example.

Could the device be used in the field, at the site of an accident?

While Crainio’s primary focus is to deliver a technology for use in critical care, the system could also be used in ambulances, in helicopters, in transfer patients and beyond. The device is non-invasive, the sensor is just like a sticking plaster on the forehead and the backend is a small box containing all the electronics. In the past few years, working in a research environment, the technology was connected into a laptop computer. But we are now transferring everything into a graphical interface, with a monitor to be able to see the signals and the intracranial pressure values in a portable device.

Following preliminary tests on patients, Crainio is now starting a new clinical trial. What do you hope to achieve with the next measurements?

The first study, a feasibility study on the sensor technology, was done during the time when the project was within the university. The second round is led by Crainio using a more optimized probe. Learning from the technical challenges we had in the first study, we tried to mitigate them with a new probe design. We’ve also learned more about the challenges associated with the acquisition of signals, the type of patients, how long we should monitor.

We are now at the stage where Crainio has redeveloped the sensor and it looks amazing. The technology has received approval from the MHRA, the UK regulator, for clinical studies, and ethical approvals have been secured. This will be an opportunity to work with the new probe, which has more advanced electronics that enable more detailed acquisition of signals from TBI patients.

We are again partnering with the Royal London Hospital, as well as collaborators from the traumatic brain injury team at Cambridge and we’re expecting to enter clinical trials soon. These are patients admitted into neurocritical trauma units and they all have an invasive intracranial pressure bolt. This will allow us to compare the physiological signal coming from our intracranial pressure sensor with the gold standard.

The signals will be analysed by Crainio’s data science team, with machine learning algorithms used to look at changes in the PPG signal, extract morphological features and build models to develop the technology further. So we’re enriching the study with a more advanced technology, and this should lead to more accurate machine learning models for correctly capturing dynamic changes in intracranial pressure.

The primary motivation of Crainio is to create solutions for healthcare, developing a technology that can help clinicians to diagnose traumatic brain injury effectively, faster, accurately and earlier

This time around, we will also record more information from the patients. We will look at CT scans to see whether scalp density and thickness have an impact. We will also collect data from commercial medical monitors within neurocritical care to see the relation between intracranial pressure and other physiological data acquired in the patients. We aim to expand our knowledge of what happens when a patient’s intracranial pressure rises – what happens to their blood pressures? What happens to other physiological measurements?

How far away is the system from being used as a standard clinical tool?

Crainio is very ambitious. We’re hoping that within the next couple of years we will progress far enough to achieve CE marking and meet all the standards necessary to launch a medical device.

The primary motivation of Crainio is to create solutions for healthcare, developing a technology that can help clinicians to diagnose TBI effectively, faster, accurately and earlier. This can only yield better outcomes and improve patients’ quality-of-life.

Of course, as a company we’re interested in being successful commercially. But the ambition here is, first of all, to keep the cost affordable. We live in a world where medical technologies need to be affordable, not only for Western nations, but for nations that cannot afford state-of-the-art technologies. So this is another of Crainio’s primary aims, to create a technology that could be used widely, because there is a massive need, but also because it’s affordable.

The post Crainio’s Panicos Kyriacou explains how their light-based instrument can help diagnose brain injury appeared first on Physics World.

  •  

Extremely stripped star reveals heavy elements as it explodes

Artist's impression of a star just before it explodes
Stripped star Artist’s impression of the star that exploded to create SN 2021yfj. Shown are the ejection of silicon (grey), sulphur (yellow) and argon (purple) just before the final explosion. (Courtesy: WM Keck Observatory/Adam Makarenko)

For the first time, astronomers have observed clear evidence for a heavily stripped star that has shed many of its outer layers before its death in a supernova explosion. Led by Steve Schulze at Northwestern University, the team has spotted the spectral signatures of heavier elements that are usually hidden deep within stellar interiors.

Inside a star, atomic nuclei fuse together to form heavier elements in a process called nucleosynthesis. This releases a vast amount of energy that offsets the crushing force of gravity.

As stars age, different elements are consumed and produced. “Observations and models of stars tell us that stars are enormous balls of hydrogen when they are born,” Schulze explains. “The temperature and density at the core are so high that hydrogen is fused into helium. Subsequently, helium fuses into carbon, and this process continues until iron is produced.”

Ageing stars are believed to have an onion-like structure, with a hydrogen outer shell enveloping deeper layers of successively heavier elements. Near the end of a star’s life, inner-shell elements including silicon, sulphur and argon fuse to form a core of iron. Unlike lighter elements, iron does not release energy as it fuses, but instead consumes energy from its surroundings. As a result, the star can no longer withstand its own gravity: it collapses rapidly inwards and then explodes in a dramatic supernova.

Hidden elements

Rarely, astronomers can observe an old star that has blown out its outer layers before exploding. When the explosion finally occurs, heavier elements that are usually hidden within deeper shells create absorption lines in the supernova’s light spectrum, allowing astronomers to determine the compositions of these inner layers. So far, inner-layer elements as heavy as carbon and oxygen have been observed, but not direct evidence for elements in deeper layers.

Yet in 2021, a mysterious new observation was made by a programme of the Zwicky Transient Facility headed by Avishay Gal-Yam at the Weizmann Institute of Science in Israel. The team was scanning the sky for signs of infant supernovae at the very earliest stages following their initial explosion.

“On 7 September 2021 it was my duty to look for infant supernovae,” Schulze recounts. “We discovered SN 2021yfj due to its rapid increase in brightness. We immediately contacted Alex Filippenko’s group at the University of California Berkeley to ask whether they could obtain a spectrum of this supernova.”

When the results arrived, the team realised that the absorption lines in the supernova’s spectrum were unlike anything they had encountered previously. “We initially had no idea that most of the features in the spectrum were produced by silicon, sulphur, and argon,” Schulze continues. Gal-Yam took up the challenge of identifying the mysterious features in the spectrum.

Shortly before death

In the meantime, the researchers examined simultaneous observations of SN 2021yfj, made by a variety of ground- and space-based telescopes. When Gal-Yam’s analysis was complete, all of the team’s data confirmed the same result. “We had detected a supernova embedded in a shell of material rich in silicon, sulphur, and argon,” Schulze describes. “These elements are formed only shortly before a star dies, and are often hidden beneath other materials – therefore, they are inaccessible under normal circumstances.”

The result provided clear evidence that the star had been more heavily stripped back towards the end of its life than any other observed previously: shedding many of its outer layers before the final explosion.

“SN 2021yfj demonstrates that stars can die in far more extreme ways than previously imagined,” says Schulze. “It reveals that our understanding of how stars evolve and die is still not complete, despite billions of them having already been studied.” By studying their results, the team now hopes that astronomers can better understand the later stages of stellar evolution, and the processes leading up to these dramatic ends.

The research is described in Nature.

The post Extremely stripped star reveals heavy elements as it explodes appeared first on Physics World.

  •  

Rainer Weiss: US gravitational-wave pioneer dies aged 92

Rainer Weiss, who shared the Nobel Prize for Physics in 2017 for the discovery of gravitational waves, died on 25 August at the age of 92. Weiss came up with the idea of detecting gravitational waves by measuring changes in distance as tiny as 10–18 m via an interferometer several kilometres long. His proposal eventually led to the formation of the twin Laser Interferometer Gravitational-Wave Observatory (LIGO), which first detected such waves in 2015.
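
To put that number in context, the strain h of a gravitational wave is the fractional change in arm length, so for values typical of detected events (illustrative figures, not a specific detection):

```latex
% Order-of-magnitude estimate for a kilometre-scale interferometer
\[
  h = \frac{\Delta L}{L}
  \quad\Longrightarrow\quad
  \Delta L = h\,L \approx 10^{-21} \times 4\ \mathrm{km} = 4\times10^{-18}\ \mathrm{m},
\]
% roughly a thousandth of the diameter of a proton over a 4 km arm.
```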

Weiss was born in Berlin, Germany, on 29 September 1932 shortly before the Nazis rose to power. With a father who was Jewish and an ardent communist, Weiss and his family were forced to flee the country – first to Czechoslovakia and then to the US in 1939.  Weiss was raised in New York, finishing his school days at the private Columbia Grammar School thanks to a scholarship from a refugee relief organization.

In 1950 Weiss began studying electrical engineering at the Massachusetts Institute of Technology (MIT) before switching to physics, eventually earning a PhD in 1962 for developing atomic clocks under the supervision of Jerrold Zacharias. He then worked at Tufts University before moving to Princeton University, where he was a research associate with the astronomer and physicist Robert Dicke.

In 1964 Weiss returned to MIT, where he began developing his idea of using a large interferometer to measure gravitational waves. Teaming up with Kip Thorne at the California Institute of Technology (Caltech), Weiss drew up a feasibility study for a kilometre-scale laser interferometer. In 1979 the National Science Foundation funded Caltech and MIT to develop the proposal to build LIGO.

Construction of two LIGO detectors – one in Hanford, Washington and the other at Livingston, Louisiana, each of which featured arms 4 km long – began in 1990, with the facilities opening in 2002. After almost a decade of operation, however, no waves had been detected so in 2011 the two observatories were upgraded to make them 10 times more sensitive than before.

On 14 September 2015 – during the first observation run of what was known as Advanced LIGO, or aLIGO – the interferometer detected gravitational waves from two merging black holes some  1.3 billion light-years from Earth. The discovery was announced by those working on aLIGO in February 2016.

The following year, Weiss was awarded one half of the 2017 Nobel Prize for Physics “for decisive contributions to the LIGO detector and the observation of gravitational waves”. The other half was shared by Thorne and fellow Caltech physicist Barry Barish, who was LIGO project director.

‘An indelible mark’

As well as pioneering the detection of gravitational waves, Weiss also developed atomic clocks and led efforts to measure the spectrum of the cosmic microwave background via weather balloons. He co-founded NASA’s Cosmic Background Explorer project, measurements from which have helped support the Big Bang theory describing the expansion of the universe.

In addition to the Nobel prize, Weiss was awarded the Gruber Prize in Cosmology in 2006, the Einstein Prize from the American Physical Society in 2007 as well as the Shaw Prize and the Kavli Prize in Astrophysics, both in 2016.

MIT’s dean of science Nergis Mavalvala, who worked with Weiss to build an early prototype of a gravitational-wave detector as part of her PhD in the 1990s, says that every gravitational-wave event that is observed “will be a reminder of his legacy”.

“[Weiss] leaves an indelible mark on science and a gaping hole in our lives,” says Mavalvala. “I am heartbroken, but also so grateful for having him in my life, and for the incredible gifts he has given us – of passion for science and discovery, but most of all to always put people first.”

The post Rainer Weiss: US gravitational-wave pioneer dies aged 92 appeared first on Physics World.

  •  

Famous double-slit experiment gets its cleanest test yet

Scientists at the Massachusetts Institute of Technology (MIT) in the US have achieved the cleanest demonstration yet of the famous double-slit experiment. Using two single atoms as the slits, they inferred the photon’s path by measuring subtle changes in the atoms’ properties after photon scattering. Their results matched the predictions of quantum theory: interference fringes when no path was observed, two bright spots when it was.

First performed in the 1800s by Thomas Young, the double-slit experiment has been revisited many times. Its setup is simple: send light toward a pair of slits in a screen and watch what happens. Its outcome, however, is anything but. If the light passes through the slits unobserved, as it did in Young’s original experiment, an interference pattern of bright and dark fringes appears, like ripples overlapping in a pond. But if you observe which slit the light goes through, as Albert Einstein proposed in a 1920s “thought experiment” and as other physicists have since demonstrated in the laboratory, the fringes vanish in favour of two bright spots. Hence, whether light acts as a wave (fringes) or a particle (spots) depends on whether anyone observes it. Reality itself seems to shift with the act of looking.

The great Einstein–Bohr debate

Einstein disliked the implications of this, and he and Niels Bohr debated them extensively. According to Einstein, observation only has an effect because it introduces noise. If the slits were mounted on springs, he suggested, their recoil would reveal the photon’s path without destroying the fringes.

Bohr countered that measuring the photon’s recoil precisely enough to reveal its path would blur the slits’ positions and erase interference. For him, this was not a flaw of technology but a law of nature – namely, his own principle of complementarity, which states that quantum systems can show wave-like or particle-like behaviour, but never both at once.
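
Complementarity is often expressed quantitatively through the visibility–distinguishability relation, a standard result that is not derived in the MIT paper but captures the trade-off at stake:

```latex
% Wave-particle duality relation: fringe visibility V versus which-path distinguishability D
\[
  V^{2} + D^{2} \le 1 ,
\]
% D = 0 (no path information) permits full-contrast fringes, V = 1;
% D = 1 (path fully known) forces V = 0, i.e. two bright spots and no interference.
```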

Physicists have performed numerous versions of the experiment since, and each time the results have sided with Bohr. Yet the unavoidable noise in real set-ups left room for doubt that this counterintuitive rule was truly fundamental.

Atoms as slits

To celebrate the International Year of Quantum Science and Technology, physicists in Wolfgang Ketterle’s group at MIT performed Einstein’s thought experiment directly. They began by cooling more than 10,000 rubidium atoms to near absolute zero and trapping them in a laser-made lattice such that each one acted as an individual scatterer of light. If a faint beam of light was sent through this lattice, a single photon could scatter off an atom.

Since the beam was so faint, the team could collect very little information per experimental cycle. “This was the most difficult part,” says team member Hanzhen Lin, a PhD student at MIT. “We had to repeat the experiment thousands of times to collect enough data.”

In every such experiment, the key was to control how much photon path information the atoms provided. The team did this by adjusting the laser traps to tune the “fuzziness” of the atoms’ position. Tightly trapped atoms had well-defined positions and so, according to Heisenberg’s uncertainty principle, they could not reveal much about the photon’s path. In these experiments, fringes appeared. Loosely trapped atoms, in contrast, had more position uncertainty and were able to move, meaning an atom struck by a photon could carry a trace of that interaction. This faint record was enough to collapse the interference fringes, leaving only spots. Once again, Bohr was right.

While Lin acknowledges that theirs is not the first experiment to measure scattered light from trapped atoms, he says it is the first to repeat the measurements after the traps were removed, while the atoms floated freely. This went further than Einstein’s spring-mounted slit idea, and (since the results did not change) eliminated the possibility that the traps were interfering with the observation.

“I think this is a beautiful experiment and a testament to how far our experimental control has come,” says Thomas Hird, a physicist who studies atom-light interactions at the University of Birmingham, UK, and was not involved in the research. “This probably far surpasses what Einstein could have imagined possible.”

The MIT team now wants to observe what happens when there are two atoms per site in the lattice instead of one. “The interactions between the atoms at each site may give us interesting results,” Lin says.

The team describes the experiment in Physical Review Letters.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

The post Famous double-slit experiment gets its cleanest test yet appeared first on Physics World.

  •  

Towards quantum PET: harnessing the diagnostic power of positronium imaging

Positron emission tomography (PET) is a diagnostic imaging technique that uses an injected radioactive tracer to detect early signs of cancer, brain disorders or other diseases. At Jagiellonian University in Poland, a research team headed up by Paweł Moskal is developing a totally new type of PET scanner. The Jagiellonian PET (J-PET) can image the properties of positronium, a positron–electron bound state produced during PET scans, offering potential to increase the specificity of PET diagnoses.

The researchers have now recorded the first ever in vivo positronium image of the human brain. They also used the J-PET to show that annihilation photons generated during PET scans are not completely quantum entangled, opening up the possibility of using the degree of quantum entanglement as a diagnostic indicator. Moskal tells Physics World’s Tami Freeman about these latest breakthroughs and the team’s ongoing project to build the world’s first whole-body quantum PET scanner.

Can you describe how conventional PET images are generated?

PET is based on the annihilation of a positron with an electron to create two photons. The patient is administered a radiopharmaceutical labelled with a positron-emitting radionuclide (for example, fluoro-deoxy-glucose (FDG) labelled with 18F), which localizes in targeted tissues. The 18F emits positrons inside the body, which annihilate with electrons from the body, and the resulting annihilation photons are registered by the PET scanner.

By measuring the locations and times of the photons’ interactions in the scanner, we can reconstruct the density distribution of annihilation points in the body. With 18F-FDG, this image correlates with the density distribution of glucose, which in turn, indicates the rate of glucose metabolism. Thus the PET scanner delivers an image of the radiopharmaceutical’s metabolic rate in the body.
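
The timing information is what localizes each annihilation along the line joining the two detected photons. As an illustrative order-of-magnitude example of this time-of-flight principle (generic numbers, not the specification of any particular scanner):

```latex
% Offset of the annihilation point from the midpoint of the line of response
\[
  \Delta x = \frac{c\,\Delta t}{2},
  \qquad
  \Delta t = 200\ \mathrm{ps}
  \;\Rightarrow\;
  \Delta x = \frac{(3\times10^{8}\ \mathrm{m/s})(200\times10^{-12}\ \mathrm{s})}{2} = 3\ \mathrm{cm}.
\]
```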

Such an image enables physicians to identify tissues with abnormal metabolism, such as cancers that metabolize glucose up to 10 times more intensively than healthy tissues. Therefore, PET scanners can provide information about alterations in cell function, even before cancer may be visible in anatomical images recorded using CT or MRI.

During annihilation, a short-lived atom called positronium can form. What’s the rationale for imaging this positronium?

It’s amazing that in tissue, positron–electron annihilation proceeds via the formation of positronium in about 40% of cases. Positronium, a bound state of matter and antimatter (an electron and a positron), is short lived because it can undergo self-annihilation into photons. In tissue, however, it can decay via additional processes that further shorten its lifetime. For example, its positron may annihilate by “picking off” an electron from a surrounding atom, or it may convert from the long-lived state (ortho-positronium) to the short-lived state (para-positronium) through interaction with oxygen molecules.

In tissue, therefore, positronium lifetime is an indicator of the intra- and inter-molecular structure and the concentration of oxygen molecules. Both molecular composition and the degree of oxygen concentration differ between healthy and cancerous tissues, with hypoxia (a deficit in tissue oxygenation) a major feature of solid tumours that’s related to the development of metastases and treatment resistance.

As such, imaging positronium lifetime can help in early disease recognition at the stage of molecular alterations. It can also improve diagnosis and the proper choice of anti-cancer therapy. In the case of brain diagnostics, positronium imaging may become an early diagnostic indicator for neurodegenerative disorders such as dementia, Alzheimer’s disease and Parkinson’s disease.

So how does the J-PET detect positronium?

To reconstruct the positronium lifetime we use a radionuclide (44Sc, 82Rb or 124I, for example) that, after emitting a positron, promptly (within a few picoseconds) emits an additional gamma photon. This “prompt gamma” can be used to measure the exact time that the positron was emitted into the tissue and formed positronium.

Multiphoton detection in a PET scanner
Multiphoton detection In about 1% of cases, after emitting a positron that annihilates with an electron into photons (blue arrows), 68Ga also emits a prompt gamma (solid arrow). (Courtesy: CC BY/Sci. Adv. 10.1126/sciadv.adp2840)

Current PET scanners are designed to register only two annihilation photons, which makes them incapable of determining positronium lifetime. The J-PET is the first multiphoton PET scanner designed for simultaneous registration of any number of photons.

The registration of annihilation photons enables us to reconstruct the time and location of the positronium decay, while registration of the prompt gamma provides the time of its formation. The positronium lifetime is then calculated as the time difference between annihilation and prompt gamma emission.
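As a rough illustration of that time difference (hypothetical inputs, not the J-PET analysis chain), and assuming the annihilation position has already been reconstructed from the annihilation-photon hits, a time-of-flight-corrected lifetime estimate for one event could look like this:

```python
# Illustrative only: lifetime = annihilation time minus positron-emission time,
# with each detector hit corrected for the photon's flight time back to the
# reconstructed annihilation point. The emission point is approximated by the
# annihilation point (the positron range in tissue is only a few mm).
import numpy as np

C = 299.792458  # mm/ns

def positronium_lifetime(annih_point, annih_hit_pos, annih_hit_time,
                         prompt_hit_pos, prompt_hit_time):
    annih_point = np.asarray(annih_point, float)
    d_annih = np.linalg.norm(np.asarray(annih_hit_pos, float) - annih_point)
    d_prompt = np.linalg.norm(np.asarray(prompt_hit_pos, float) - annih_point)
    t_annihilation = annih_hit_time - d_annih / C    # back-propagated to the source
    t_emission = prompt_hit_time - d_prompt / C
    return t_annihilation - t_emission               # lifetime in ns
```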

Can you describe how your team recorded the first in vivo positronium image?

Last year we presented the world’s first in vivo images of positronium lifetime in a human, reported in Science Advances. For this, we designed and constructed a modular, lightweight and portable J-PET tomograph, consisting of 24 independent detection modules, each weighing only 2 kg. The device uses a multiphoton data acquisition system, invented by us, to simultaneously register prompt gamma and annihilation photons – the first PET scanner in the world to achieve this.

The research was performed at the Medical University of Warsaw, with studies conducted following routine procedures so as not to interfere with diagnostics and therapy. If a patient agreed to stay longer on the platform, we had about 10 minutes to install the J-PET tomograph around them and collect data.

First patient imaging with J-PET
In vivo imaging The first imaging of a patient, illustrating the advantages of the J-PET as a portable, lightweight device with an adaptable imaging volume. (Courtesy: Paweł Moskal)

The first patient was a 45-year-old man with glioblastoma (an aggressive brain tumour) undergoing alpha-particle radiotherapy. The primary aim of his therapy was to destroy the tumour using alpha particles emitted by the radionuclide 225Ac. The positronium imaging was made possible by the concurrent theranostic application of the radionuclide 68Ga to monitor the site of cancer lesions using a PET scanner.

The patient was administered a simultaneous intra-tumoural injection of the alpha-particle-emitting radiopharmaceutical (225Ac-DOTA-SP) for therapy and the positron-emitting radiopharmaceutical (68Ga-DOTA-SP) for diagnosis. In about 1% of cases, after emitting a positron that annihilates with an electron, 68Ga also emits a prompt gamma ray.

We determined the annihilation location by measuring the time and position of interaction of the annihilation photons in the scanner. For each image voxel, we also determined a lifetime spectrum as the distribution of differences between the time of annihilation and the time of prompt gamma emission.
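Schematically, and only as a toy example, the per-voxel lifetime spectrum and a simplified mean-lifetime estimate might be built as below, assuming the event-by-event time differences for one voxel are already available; the published analysis resolves several decay components rather than the single exponential used here.

```python
# Toy sketch: histogram one voxel's annihilation-minus-prompt time differences
# and fit a single exponential. The real analysis separates para- and
# ortho-positronium plus direct annihilation; inputs here are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def mean_lifetime(time_differences_ns, bin_width=0.1, t_max=10.0):
    counts, edges = np.histogram(time_differences_ns,
                                 bins=np.arange(0.0, t_max + bin_width, bin_width))
    centres = 0.5 * (edges[:-1] + edges[1:])
    decay = lambda t, n0, tau: n0 * np.exp(-t / tau)
    (n0, tau), _ = curve_fit(decay, centres, counts, p0=[counts.max(), 2.0])
    return tau  # estimated mean lifetime in ns

# e.g. voxel_tau = mean_lifetime(lifetimes_for_this_voxel)
```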

Our study found that positronium lifetimes in glioblastoma cells are shorter than in salivary glands and healthy brain tissues. We showed for the first time that the mean lifetime of ortho-positronium in a glioma (1.77±0.58 ns) is shorter than in healthy brain tissue (2.72±0.72 ns). This finding demonstrates that positronium imaging could be used for in vivo diagnosis to differentiate between healthy and cancerous tissues.

Positronium images of a patient with glioblastoma
Lifetime distributions Positronium images of a patient with glioblastoma, showing the difference in mean ortho-positronium lifetime between glioma and healthy brain. (Courtesy: CC BY/Sci. Adv. 10.1126/sciadv.adp2840)

You recently demonstrated that J-PET can detect quantum entanglement of annihilation photons. How could this impact cancer diagnostics?

For this study, reported earlier this year in Science Advances, we used the laboratory prototype of the J-PET scanner (as employed previously for the first ex vivo positronium imaging). The crucial result was the first ever observation that photons from electron–positron annihilation in matter are not completely quantum entangled. Our study is pioneering in revealing a clear dependence of the degree of photon entanglement on the material in which the annihilation occurs.

These results are totally new compared with all previous investigations of photons from positron–electron annihilations. Up to this point, all experiments had focused on showing that this entanglement is maximal and, for that purpose, were performed in metals. None of the previous studies mentioned or even hypothesized a possible material dependence.

Laboratory prototype of the J-PET scanner
Lab prototype The J-PET scanner used to discover non-maximal entanglement, with (left to right) Deepak Kumar, Sushil Sharma and Pawel Moskal. (Courtesy: Damian Gil and Deepak Kumar)

If the degree of quantum entanglement of annihilation photons depends on the material, it may also differ according to tissue type or the degree of hypoxia. This is a hypothesis that we will test in future studies. I recently received an ERC Advanced Grant, entitled “Can tissue oxidation be sensed by positronium?”, to investigate whether the degree of oxidation in tissue can be sensed by the degree of quantum entanglement of photons originating from positron annihilation.

What causes annihilation photons to be entangled (or not)?

Quantum entanglement is a fascinating phenomenon that cannot be explained by our classical perception of the world. Entangled photons behave as if one instantly knows what is happening with the other, regardless of how far apart they are, so they propagate in space as a single object.

Annihilation photons are entangled if they originate from a pure quantum state. A state is “pure” if we know everything that can be known about it. For example, if the photons originate from the ground state of para-positronium (a pure state), then we expect them to be maximally entangled.

However, if electron–positron annihilation occurs in a mixed state (a statistical mixture of different pure states) where we have incomplete information, then the resulting photons will not be maximally entangled. In our case, this could be the annihilation of a positron from positronium with electrons from the patient’s body. Because these electrons can have different angular momenta with respect to the positron, the annihilation generally occurs from a mixed state.

You have also measured the polarization of the annihilation photons; how is this information used?

In current PET scanners, images are reconstructed based on the position and time of interaction of annihilation photons within the scanner. However, annihilation photons also carry information about their polarization.

Theoretically, annihilation photons are quantum entangled in polarization and exhibit non-local correlations. In the case of electron–positron annihilation into two photons, this means that the amplitude of the distribution of the relative angle between their polarization planes is larger when they are quantum entangled than when they propagate in space as independent objects.

State-of-the-art PET scanners, however, cannot access polarization information. Annihilation photons have energy in the mega-electronvolt range and their polarization cannot be determined using established optical methods, which are designed for optical photons in the electronvolt range. Because these energetic annihilation photons interact with single electrons, their polarization can only be sensed via Compton scattering.

The angular distribution of photons scattered by electrons is not isotropic with respect to the polarization direction. Instead, scattering is most likely to occur in a plane perpendicular to the polarization plane of the photon before scattering. Thus, by determining the scattering plane (containing the primary and scattered photon), one can estimate the direction of polarization as being perpendicular to that plane. Therefore, to practically determine the polarization plane of the photon, you need to know its directions of flight both before and after Compton scattering in the material.
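Purely to illustrate the geometry (with hypothetical inputs rather than the J-PET software), each photon’s scattering plane can be reconstructed from its flight directions before and after scattering, and the relative angle between the two photons’ planes then follows from the plane normals:

```python
# Geometry illustration only: the scattering plane is spanned by the flight
# directions before and after Compton scattering; the polarization before
# scattering most likely lies close to that plane's normal. The relative
# angle between the two photons' planes is the angle between the two normals.
import numpy as np

def unit(v):
    v = np.asarray(v, float)
    return v / np.linalg.norm(v)

def scattering_plane_normal(direction_in, direction_out):
    return unit(np.cross(unit(direction_in), unit(direction_out)))

def relative_plane_angle_deg(n1, n2):
    cos_angle = abs(np.dot(n1, n2))  # planes are unoriented, so 0-90 degrees
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# Photon 1 flies along +x and scatters in the x-y plane; photon 2 flies along
# -x and scatters in a plane tilted by roughly 30 degrees with respect to it.
n1 = scattering_plane_normal([1, 0, 0], [0.7, 0.7, 0.0])
n2 = scattering_plane_normal([-1, 0, 0], [-0.7, 0.6, 0.35])
print(relative_plane_angle_deg(n1, n2))  # ~30 degrees
```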

In plastic scintillators, annihilation photons primarily interact via the Compton effect. As the J-PET is built from plastic scintillators, it’s ideally suited to provide information about the photons’ polarization, which can be determined by registering both the annihilation photon and the scattered photon and then reconstructing the scattering plane.

Using the J-PET scanner, we determined the distribution of the relative angle between the polarization planes of photons from positron–electron annihilation in a porous polymer. The amplitude of the observed distribution is smaller than predicted for maximally quantum-entangled two-photon states, but larger than expected for separable photons.

This result can be explained by assuming that photons from pick-off annihilation are not entangled, while photons from direct and para-positronium annihilations are maximally entangled. Our finding indicates that the degree of entanglement depends on the annihilation mechanism in matter, opening avenues for exploring polarization correlations in PET as a diagnostic indicator.

What further developments are planned for the J-PET scanner?

When creating the J-PET technology, we started with a two-strip prototype, then a 24-strip prototype in 2014, followed by a full-scale 192-strip prototype in 2016. In 2021 we completed the construction of a lightweight (60 kg) J-PET version that is both modular and portable, and which we used to demonstrate the first clinical images.

The next step is the construction of the total-body quantum J-PET scanner. We are now at the stage of collecting all the elements of this scanner and expect to complete construction in 2028. The scanner will be installed at the Center for Theranostics, established by myself and Ewa Stępień, medical head of the J-PET team, at Jagiellonian University.

Schematic of the full-body J-PET scanner
Future developments Schematic cross-section of the full-body J-PET scanner under construction at Jagiellonian University. The diagram shows the patient and several examples of electron–positron annihilation. (Courtesy: Rev. Mod. Phys. 10.1103/RevModPhys.95.021002)

Total-body PET provides the ability to image the metabolism of all tissues in the body at the same time. Additionally, due to the high sensitivity of total-body PET scanners, it is possible to perform dynamic imaging – essentially, creating a movie of how the radiopharmaceutical distributes throughout the body over time.

The total-body J-PET will also be able to register the pharmacokinetics of drugs administered to a patient. However, its true distinction is that it will be the world’s first quantum PET scanner with the ability to image the degree of quantum entanglement of annihilation photons throughout the patient’s body. Additionally, it will be the world’s first total-body multiphoton PET, enabling simultaneous positronium imaging in the entire human body.

How do you see the J-PET’s clinical applications evolving in the future?

We have already performed the first clinical imaging using J-PET at the Medical University of Warsaw and the University Hospital in Kraków. The studies included the diagnosis of patients with neuroendocrine, prostate and glioblastoma tumours. The data collected at these hospitals were used to reconstruct standard PET images as well as positronium lifetime images.

Next, we plan to conduct positronium imaging of phantoms and humans with various radionuclides to explore its clinical applications as a biomarker for tissue pathology and hypoxia. We also intend to explore the J-PET’s multiphoton capabilities for simultaneous double-tracer imaging, as well as study the degree of quantum entanglement as a function of the annihilation mechanism.

Finally, we plan to explore the possibilities of applying quantum entanglement to diagnostics, and we look forward to performing total-body positronium and quantum entanglement imaging with the total-body J-PET in the Center for Theranostics.

  • Paweł Moskal is a panellist in the forthcoming Physics World Live event on 25 September 2025. The event, which also features Miles Padgett from the University of Glasgow and Matt Brookes from the University of Nottingham, will examine how medical physics can make the most of the burgeoning field of quantum science. You can sign up free here.

The post Towards quantum PET: harnessing the diagnostic power of positronium imaging appeared first on Physics World.

  •