A new method for generating high-energy proton beams could one day improve the precision of proton therapy for treating cancer. Developed by an international research collaboration headed up at the National University of Singapore, the technique involves accelerating H₂⁺ ions and then using a novel two-dimensional carbon membrane to split the high-energy ion beam into beams of protons.
One obstacle when accelerating large numbers of protons together is that they all carry the same positive charge and thus naturally repel each other. This so-called space–charge effect makes it difficult to keep the beam tight and focused.
“By accelerating H₂⁺ ions instead of single protons, the particles don’t repel each other as strongly,” says project leader Jiong Lu. “This enables delivery of proton beam currents up to an order of magnitude higher than those from existing cyclotrons.”
Lu explains that a high-current proton beam can deliver more protons in a shorter time, making proton treatments quicker and more precise, and enabling tumours to be targeted more effectively. Such a proton beam could also be employed in FLASH therapy, an emerging treatment that delivers therapeutic radiation at ultrahigh dose rates to reduce normal tissue toxicity while preserving anti-tumour activity.
Industry-compatible fabrication
The key to this technique lies in the choice of an optimal membrane with which to split the H₂⁺ ions. For this task, Lu and colleagues developed a new material – ultraclean monolayer amorphous carbon (UC-MAC). MAC is similar in structure to graphene, but instead of an ordered honeycomb structure of hexagonal rings, it contains a disordered mix of five-, six-, seven- and eight-membered carbon rings. This disorder creates angstrom-scale pores in the films, which can be used to split the H₂⁺ ions into protons as they pass through.
Pentagons, hexagons, heptagons, octagons: illustration of disorder-to-disorder synthesis (left); scanning transmission electron microscopy image of UC-MAC (right). (Courtesy: National University of Singapore)
Scaling the manufacture of ultrathin MAC films, however, has previously proved challenging, with no industrial synthesis method available. To address this problem, the researchers proposed a new fabrication approach in which the emergence of long-range order in the material is suppressed, not by the conventional approach of low-temperature growth, but by a novel disorder-to-disorder (DTD) strategy.
DTD synthesis uses plasma-enhanced chemical vapor deposition (CVD) to create a MAC film on a copper substrate containing numerous nanoscale crystalline grains. This disordered substrate induces high levels of randomized nucleation in the carbon layer and disrupts long-range order. The approach enabled wafer-scale (8-inch) production of UC-MAC films within just 3 s – an order of magnitude faster than conventional CVD methods.
Disorder creates precision
To assess the ability of UC-MAC to split H₂⁺ ions into protons, the researchers generated a high-energy H₂⁺ nanobeam and focused it onto a freestanding two-dimensional UC-MAC crystal. This resulted in the ion beam splitting to create high-precision proton beams. For comparison, they repeated the experiment (with beam current stabilities controlled within 10%) using single-crystal graphene, non-clean MAC with metal impurities and commercial carbon thin films (8 nm).
Measuring double-proton events – in which two proton signals are detected from a single H₂⁺ ion splitting – as an indicator for proton scattering revealed that the UC-MAC membrane produced far fewer unwanted scattered protons than the other films. Ion splitting using UC-MAC resulted in about 47 double-proton events over a 20 s collection time, while the graphene film exhibited roughly twice this number and the non-clean MAC slightly more. The carbon thin film generated around 46 times more scattering events.
The researchers point out that the reduced double-proton events in UC-MAC “demonstrate its superior ability to minimize proton scattering compared with commercial materials”. They note that as well as UC-MAC creating a superior quality proton beam, the technique provides control over the splitting rate, with yields ranging from 88.8 to 296.0 proton events per second per detector.
“Using UC-MAC to split H₂⁺ produces a highly sharpened, high-energy proton beam with minimal scattering and high spatial precision,” says Lu. “This allows more precise targeting in proton therapy – particularly for tumours in delicate or critical organs.”
“Building on our achievement of producing proton beams with greatly reduced scattering, our team is now developing single molecule ion reaction platforms based on two-dimensional amorphous materials using high-energy ion nanobeam systems,” he tells Physics World. “Our goal is to make proton beams for cancer therapy even more precise, more affordable and easier to use in clinical settings.”
Evidence of the coherent elastic scattering of reactor antineutrinos from atomic nuclei has been reported by the German-Swiss Coherent Neutrino Nucleus Scattering (CONUS) collaboration. This interaction has a higher cross section (probability) than the processes currently used to detect neutrinos, and could therefore lead to smaller detectors. It also involves lower-energy neutrinos, which could offer new ways to search for physics beyond the Standard Model.
Antineutrinos only occasionally interact with matter, which makes them very difficult to detect. They can be observed using inverse beta decay, which involves the capture of electron antineutrinos by protons, producing neutrons and positrons. An alternative method involves observing the scattering of antineutrinos from electrons. Both these reactions have small cross sections, so huge detectors are required to capture just a few events. Moreover, inverse beta decay can only detect antineutrinos if they have energies above about 1.8 MeV, which precludes searches for low-energy physics beyond the Standard Model.
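For readers who want the detection reaction explicitly, inverse beta decay and its kinematic threshold (standard textbook results, not specific to CONUS) can be written as

\[
\bar{\nu}_e + p \;\to\; n + e^{+}, \qquad E_{\nu}^{\mathrm{min}} = \frac{(m_n + m_e)^2 - m_p^2}{2m_p}\,c^2 \approx 1.8\ \mathrm{MeV},
\]

which is why inverse beta decay is blind to antineutrinos below roughly 1.8 MeV.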
It is also possible to detect neutrinos by the tiny kick a nucleus receives when a neutrino scatters off it. “It’s very hard to detect experimentally because the recoil energy of the nucleus is so low, but on the other hand the interaction probability is a factor of 100–1000 higher than these typical reactions that are otherwise used,” says Christian Buck of the Max Planck Institute for Nuclear Physics in Heidelberg. This enables measurements with kilogram-scale detectors.
This was first observed in 2017 by the COHERENT collaboration using a 14.6 kg caesium iodide crystal to detect neutrinos from the Spallation Neutron Source at the Oak Ridge National Laboratory in the US. These neutrinos have a maximum energy of 55 MeV, making them ideal for the interaction. Moreover, the neutrinos come in pulses, allowing the signal to be distinguished from background radiation.
Reactor search
Multiple groups have subsequently looked for signals from nuclear reactors, which produce lower-energy neutrinos. These include the CONUS collaboration, which operated at the Brokdorf nuclear reactor in Germany until 2022. However, the only group to report a strong hint of a signal included Juan Collar of the University of Chicago. In 2022 it published results suggesting a stronger than expected signal at the Dresden-2 power reactor in the US.
Now, Buck and his CONUS colleagues present data from the CONUS+ experiment conducted at the Leibstadt reactor in Switzerland. They used three 1 kg germanium diodes sensitive to energies as low as 160 eV. They extracted the neutrino spectrum from background radiation by taking data when the reactor was running and when it was not. Writing in Nature, the team conclude that 395±106 neutrinos were detected during 119 days of operation – a signal 3.7σ away from zero (395/106 ≈ 3.7) and consistent with the Standard Model prediction. The experiment is currently in its second run, with the detector masses increased to 2.4 kg to provide better statistics and potentially a lower threshold energy.
Collar, however, is sceptical of the result. “[The researchers] seem to have an interest in dismissing the limitations of these detectors – limitations that affect us too,” he says. “The main difference between our approach and theirs is that we have made a best effort to demonstrate that our data are not contaminated by residual sources of low-energy noise dominant in this type of device prior to a careful analysis.” His group will soon release data taken at the Vandellòs reactor in Spain. “When we release these, we will take the time to point out the issues visible in their present paper,” he says. “It is a long list.”
Buck accepts that, if the previous measurements by Collar’s group are correct, the CONUS+ researchers should have detected at least 10 times more neutrinos than they actually did. “I would say the control of backgrounds at our site in Leibstadt is better because we do not have such a strong neutron background. We have clearly demonstrated that the noise Collar has in mind is not dominant in the energy region of interest in our case.”
Patrick Huber at Virginia Tech in the US says, “Let’s see what Collar’s new result is going to be. I think this is a good example of the scientific method at work. Science doesn’t care who’s first – scientists care, but for us, what matters is that we get it right. But with the data that we have in hand, most experts, myself included, think that the current result is essentially the result we have been looking for.”
After 40 years lecturing on physics and technology, you’d think I’d be ready for any classroom challenge thrown at me. Surely, during that time, I’d have covered all the bases? As an academic with a background in designing military communication systems, I’m used to giving in-depth technical lectures to specialists. I’ve delivered PowerPoint presentations to a city mayor and council dignitaries (I’m still not sure why, to be honest). And perhaps most terrifying of all, I’ve even had my mother sit in on one of my classes.
During my retirement, I’ve taken part in outreach events at festivals, where I’ve learned how to do science demonstrations to small groups that have included everyone from babies to great-grandparents. I once even gave a talk about noted local engineers to a meeting of the Women’s Institute in what was basically a shed in a Devon hamlet. But nothing could have prepared me for a series of three talks I gave earlier this year.
I’d been invited to a school to speak to three classes, each with about 50 children aged between six and 11. The remit from the headteacher was simple: talk about “My career as a physicist”. To be honest, most of my working career focused on things like phased-array antennas, ferrite anisotropy and computer modelling of microwave circuits, which isn’t exactly easy to adapt for a young audience.
But for a decade or so my research switched to sports physics and I’ve given talks to more than 200 sports scientists in a single room. I once even wrote a book called Projectile Dynamics in Sport (Routledge, 2011). So I turned up at the school armed with a bag full of balls, shuttlecocks, Frisbees and flying rings. I also had a javelin (in the form of a telescopic screen pointer) and a “secret weapon” for my grand finale.
Our first game was “guess the sport”. The pupils did well, correctly distinguishing between a basketball, a softball and a football, and even between an American football and a rugby ball. We discussed the purposes of dimples on a golf ball, the seam on a cricket ball and the “skirt” on a shuttlecock – the feathers, which are always taken from the right wing of a goose. Unless they are plastic.
As physicists, you’re probably wondering why the feathers are taken from its right side – and I’ll leave that as an exercise for the reader. But one pupil was more interested in the poor goose, asking me what happens when its feathers are pulled out. Thinking on my feet, I said the feathers grow back and the bird isn’t hurt. Truth is I have no idea, but I didn’t want to upset her.
Then: the finale. From my bag I took out a genuine Aboriginal boomerang, complete with authentic religious symbols. Not wanting to delve into Indigenous Australian culture or discuss a boomerang’s return mechanism in terms of gyroscopy and precession, I instead allowed the class to throw around three foam versions of it. Despite the look of abject terror on the teachers’ faces, we did not descend into anarchy but ended each session with five minutes of carefree enjoyment.
There is something uniquely joyful about the energy of children when they engage in learning. At this stage, curiosity is all. They ask questions because they genuinely want to know how the world works. And when I asked them a question, hands shot up so fast and arms were waved around so frantically to attract my attention that some pupils’ entire body shook. At one point I picked out an eager firecracker who swiftly realized he didn’t know the answer and shrank into a self-aware ball of discomfort.
Mostly, though, children’s excitement is infectious. I left the school buzzing and on a high. I loved it. In this vibrant environment, learning isn’t just about facts or skills; it’s about puzzle-solving, discovery, imagination, excitement and a growing sense of independence. The enthusiasm of young learners turns the classroom into a place of shared exploration, where every day brings something new to spark their imagination.
How lucky primary teachers are to work in such a setting, and how lucky I was to be invited into their world.
A new type of nanostructured lasing system called a metalaser emits light with highly tuneable wavefronts – something that had proved impossible to achieve with conventional semiconductor lasers. According to the researchers in China who developed it, the new metalaser can generate speckle-free laser holograms and could revolutionize the field of laser displays.
The first semiconductor lasers were invented in the 1960s and many variants have since been developed. Their numerous advantages – including small size, long lifetimes and low operating voltages – mean they are routinely employed in applications ranging from optical communications and interconnects to biomedical imaging and optical displays.
To make further progress with this class of lasers, researchers have been exploring ways of creating them at the nanoscale. One route for doing this is to integrate light-scattering arrays called metasurfaces with laser mirrors or insert them inside resonators. However, the wavefronts of the light emitted by these metalasers have proven very difficult to control, and to date only a few simple profiles have been possible without introducing additional optical elements.
Not significantly affected by perturbations
In the new work, a team led by Qinghai Song of the Harbin Institute of Technology, Shenzhen, created a metalaser that consists of silicon nitride nanodisks that have holes in their centres and are arranged in a periodic array. This configuration generates optical modes known as bound states in the continuum (BICs). Since the laser energy is concentrated in the centre of each nanodisk, the wavelength of the BIC is not significantly affected by perturbations such as tiny holes in the structure.
“At the same time, the in-plane electric fields of these modes are distributed along the periphery of each nanodisk,” Song explains. “This greatly enhances the light field inside the centre of the hole and induces an effective dipole moment there, which is what produces a geometric phase change to the light emission at each pixel.”
By rotating the holes in the nanodisks, Song says that it is possible to introduce specific geometric phase profiles into the metasurface. The laser emission can then be tailored to create focal spots, focal lines and doughnut shapes as well as holographic images.
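This rotation-controlled phase follows the familiar geometric (Pancharatnam–Berry) picture. As a rough guide – the textbook relation, not necessarily the exact dependence in this particular device – rotating the effective dipole at a pixel by an angle α imparts a phase

\[
\phi_{\mathrm{geom}} = \pm 2\alpha
\]

on circularly polarized emission, with the sign set by the handedness, so a full 0 to 2π phase range is available from rotations between 0 and π.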
And that is not all. Unlike in conventional laser modes, the waves scattered from the new metalaser are too weak to undergo resonant amplification. This means that the speckle noise generated is negligibly small, which resolves the longstanding challenge of reducing speckle noise in holographic displays without reducing image quality.
According to Song, this property could revolutionize laser displays. He adds that the physical concept outlined in the team’s work could be extended to other nanophotonic devices, substantially improving their performance in various optics and photonics applications.
“Controlling laser emission at will has always been a dream of laser researchers,” he tells Physics World. “Researchers have traditionally done this by introducing metasurfaces into structures such as laser oscillators. This approach, while very straightforward, is severely limited by the resonant conditions of this type of laser system. With other types of laser, they had to either integrate a metasurface wave plate outside the laser cavity or use bulky and complicated components to compensate for phase changes.”
With the new metalaser, the laser emission can be changed from fixed profiles such as Hermite-Gaussian modes and Laguerre-Gaussian modes to arbitrarily customized beams, he says. One consequence of this is that the lasers could be fabricated to match the numerical aperture of fibres or waveguides, potentially boosting the performance of optical communications and optical information processing.
Developing a programmable metalaser will be the researchers’ next goal, Song says.
A free-electron laser (FEL) that is driven by a plasma-based electron accelerator has been unveiled by Sam Barber at Lawrence Berkeley National Laboratory and colleagues. The device is a promising step towards compact, affordable free-electron lasers that are capable of producing intense, ultra-short X-ray laser pulses. It was developed in collaboration with researchers at Berkeley Lab, University of California Berkeley, University of Hamburg and Tau Systems.
An FEL creates X-rays by the rapid back-and-forth acceleration of fast-moving electron pulses using a series of magnets called an undulator. These X-rays are emitted within a narrow band of wavelengths and then interact with the electron pulse as it travels down the undulator. The result is a bright X-ray pulse with laser-like coherence.
What is more, the wavelength of the emitted X-rays can be adjusted simply by changing the energy of the electron pulses, making FELs highly tuneable.
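As a rough guide to where that tunability comes from, the standard on-axis undulator resonance condition (a textbook relation, not a detail of the Berkeley experiment) links the emitted wavelength to the electron energy:

\[
\lambda \;\approx\; \frac{\lambda_u}{2\gamma^2}\left(1 + \frac{K^2}{2}\right),
\]

where λ_u is the undulator period, K the undulator strength parameter and γ the electron Lorentz factor. Because λ scales as 1/γ², doubling the electron energy shortens the output wavelength by roughly a factor of four.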
Big and expensive
FELs are especially useful for generating intense, ultra-short X-ray pulses, which cannot be produced using conventional laser systems. So far, several X-ray FELs have been built for this purpose – but each of them relies on kilometre-scale electron accelerators costing huge amounts of money to build and maintain.
To create cheaper and more accessible FELs, researchers are exploring the use of laser-plasma accelerators (LPAs) – which can accelerate electron pulses to high energies over distances of just a few centimetres.
Yet as Barber explains, “LPAs have had a reputation for being notoriously hard to use for FELs because of things like parameter jitter and the large energy spread of the electron beam compared to conventional accelerators. But sustained research across the international landscape continues to drive improvements in all aspects of LPA performance.”
Recently, important progress was made by a group at the Chinese Academy of Sciences (CAS), who used an LPA to amplify FEL pulses by a factor of 50. Their pulses have a wavelength of 27 nm – which is close to the X-ray regime – but only about 10% of shots achieved this amplification.
Very stable laser
Now, the team has built on this by making several improvements to the FEL setup, with the aim of enhancing its compatibility with LPAs. “On our end, we have taken great pains to ensure a very stable laser with several active feedback systems,” Barber explains. “Our strategy has essentially been to follow the playbook established by the original FEL research: start at longer wavelengths where it is easier to optimize and learn about the process and then scale the system to the shorter wavelengths.”
With these refinements, the team amplified their FEL’s output by a factor of 1000, achieving this in over 90% of their shots. This vastly outperformed the CAS result – albeit at a longer wavelength. “We designed the experiment to operate the FEL at around 420 nm, which is not a particularly exciting wavelength for scientific use cases – it’s just blue light,” Barber says. “But, with very minor upgrades, we plan to scale it for sub-100 nm wavelength where scientific applications become interesting.”
The researchers are optimistic that further breakthroughs are within reach, which could improve the prospects for LPA-driven FEL experiments. One especially important target is reaching the “saturation level” at X-ray wavelengths: the point beyond which FEL amplification no longer increases significantly.
“Another really crucial component is developing laser technology to scale the current laser systems to much higher repetition rates,” Barber says. “Right now, the typical laser used for LPAs can operate at around 10 Hz, but that will need to scale up dramatically to compare to the performance of existing light sources that are pushing megahertz.”
The most common form of water in the universe appears to be much more complex than was previously thought. While past measurements suggested that this “space ice” is amorphous, researchers in the UK have now discovered that it contains crystals. The result poses a challenge to current models of ice formation and could alter our understanding of ordinary liquid water.
Unlike most other materials, water is denser as a liquid than it is as a solid. It also expands rather than contracts when it cools; becomes less viscous when compressed; and exists in many physical states, including at least 20 polymorphs of ice.
One of these polymorphs is commonly known as space ice. Found in the bulk matter in comets, on icy moons and in the dense molecular clouds where stars and planets form, it is less dense than liquid water (0.94 g cm⁻³ rather than 1 g cm⁻³), and X-ray diffraction images indicate that it is an amorphous solid. These two properties give it its formal name: low-density amorphous ice, or LDA.
While space ice was discovered almost a century ago, Michael Davies, who studied LDA as part of his PhD research at University College London and the University of Cambridge, notes that its exact atomic structure is still being debated. “It is unclear, for example, whether LDA is a ‘true glassy state’ (meaning a frozen liquid with no ordered structure) or a highly disordered crystal,” Davies explains.
The memory of ice
In the new work, Davies and colleagues used two separate computational simulations to better understand this atomic structure. In the first simulation, they froze “boxes” of water molecules by cooling them to -150 °C at different rates, which produced crystalline and amorphous ice in varying proportions. They then compared this spectrum of structures to the structure of amorphous ice as measured by X-ray diffraction.
“The best model to match experiments was a ‘goldilocks’ scenario – that is, one that is not too amorphous and not too crystalline,” Davies explains. “Specifically, we found ice that was up to 20% crystalline and 80% amorphous, with the structure containing tiny crystals around 3-nm wide.”
The second simulation began with large “boxes” of ice consisting of many small ice crystals packed together. “Here, we varied the number of crystals in the boxes to again give a range of very crystalline to amorphous models,” Davies says. “We found very close agreement to experiment with models that had very similar structures compared to the first approach with 25% crystalline ice.”
To back up these findings, the UCL/Cambridge researchers performed a series of experiments. “By re-crystallizing different samples of LDA formed via different ‘parent ice phases’ we found that the final crystal structure formed varied depending on the pathway to creation,” Davies tells Physics World. In other words, he adds, “The final structure had a memory of its parent.”
This is important, Davies continues, because if LDA was truly amorphous and contained no crystalline grains at all, this “memory” effect would not be possible.
Impact on our understanding
The discovery that LDA is not completely amorphous has implications for our understanding of ordinary liquid water. The prevailing “two state” model for water is appealing because it accounts for many of water’s thermodynamic anomalies. However, it rests on the assumption that both LDA and high-density amorphous ice have corresponding liquid forms, and that liquid water can be modelled as a mixture of the two.
“Our finding that LDA actually contains many small crystallites presents some challenges to this model,” Davies says. “It is thus of paramount importance for us to now confirm if a truly amorphous version of LDA is achievable in experiments.”
The existence of structure within LDA also has implications for “panspermia” theory, which hypothesizes that the building blocks of life (such as simple amino acids) were carried to Earth within an icy comet. “Our findings suggest that LDA would be a less efficient transporting material for these organic molecules because a partly crystalline structure has less space in which these ingredients could become embedded,” Davies says.
“The theory could still hold true, though,” he adds, “as there are amorphous regions in the ice where such molecules could be trapped and stored.”
Challenges in determining atomic structure
The study, which is detailed in Physical Review B, highlights the difficulty of determining the exact atomic structure of materials. According to Davies, it could therefore be important for understanding other amorphous materials, including some that are widely used in technologies such as OLEDs and fibre optics.
“Our methodology could be applied to these materials to determine whether they are truly glassy,” he says. “Indeed, glass fibres that transport data along long distances need to be amorphous to function efficiently. If they are found to contain tiny crystals, these could then be removed to improve performance.”
The researchers are now focusing on understanding the structure of other amorphous ices, including high-density amorphous ice. “There is much for us to investigate with regards to the links between amorphous ice phases and liquid water,” Davies concludes.
This episode of the Physics World Weekly podcast features an interview with Kirsty McGhee, who is a scientific writer at the quantum-software company Qruise. It is the second episode in our two-part miniseries on careers for physicists.
While she was doing a PhD in condensed matter physics, McGhee joined Physics World’s Student Contributors Network. This involved writing articles about peer-reviewed research and also proofreading articles written by other contributors.
McGhee explains how the network broadened her knowledge of physics and improved her communication skills. She also says that potential employers looked favourably on her writing experience.
At Qruise, McGhee has a range of responsibilities that include writing documentation, marketing, website design, and attending conference exhibitions. She explains how her background in physics prepared her for these tasks, and what new skills she is learning.
An experiment that scattered high-energy electrons from helium-3 and tritium nuclei has provided the first evidence for three-nucleon short-range correlations. The data were taken in 2018 at Jefferson Lab in the US and further studies of these correlations could improve our understanding of both atomic nuclei and neutron stars.
Atomic nuclei contain nucleons (protons and neutrons) that are bound together by the strong force. These nucleons are not static and they can move rapidly about the nucleus. While nucleons can move independently, they can also move as correlated pairs, trios and larger groupings. Studying this correlated motion can provide important insights into interactions between nucleons – interactions that define the structures of tiny nuclei and huge neutron stars.
The momenta of nucleons can be measured by scattering a beam of high-energy electrons from nuclei. This is because the de Broglie wavelength of these electrons is smaller than the size of the nucleons – allowing individual nucleons to be isolated. During the scattering process, momentum is exchanged between a nucleon and an electron, and how this occurs provides important insights into the correlations between nucleons.
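A back-of-the-envelope estimate (not taken from the paper) shows why multi-GeV electrons are needed. Using the de Broglie relation with hc ≈ 1240 MeV fm,

\[
\lambda = \frac{h}{p} = \frac{hc}{pc} \approx \frac{1240\ \mathrm{MeV\,fm}}{pc},
\]

so an electron with a momentum of a few GeV/c has a wavelength well below the roughly 1 fm size of a nucleon, letting it resolve individual protons and neutrons inside the nucleus.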
Electron scattering has already revealed that most of the momentum in nuclei is associated with single nucleons, with some also assigned to correlated pairs. These experiments also suggested that nuclei have additional momenta that had not been accounted for.
Small but important
“We know that the three-nucleon interaction is important in the description of nuclear properties, even though it’s a very small contribution,” explains John Arrington at the Lawrence Berkeley National Laboratory in the US. “Until now, there’s never really been any indication that we’d observed them at all. This work provides a first glimpse at them.”
In 2018, Arrington and others did a series of electron-scattering experiments at Jefferson Lab with helium-3 and tritium targets. Now Arrington and an international team of physicists have scoured this scattering data for evidence of short-range, three-nucleon correlations.
Studying these correlations in nuclei with just three nucleons is advantageous because there are no correlations between four or more nucleons. These correlations would make it more difficult to isolate three-nucleon effects in the scattering data.
A further benefit of looking at tritium and helium-3 is that they are “mirror nuclei”. Tritium comprises one proton and two neutrons, while helium-3 comprises two protons and a neutron. The strong force that binds nucleons together acts equally on protons and neutrons. However, there are subtle differences in how protons and neutrons interact with each other – and these differences can be studied by comparing tritium and helium-3 electron scattering experiments.
A clean picture
“We’re trying to show that it’s possible to study three-nucleon correlations at Jefferson Lab even though we can’t get the energies necessary to do these studies in heavy nuclei,” says principal investigator Shujie Li, at Lawrence Berkeley. “These light systems give us a clean picture — that’s the reason we put in the effort of getting a radioactive target material.”
Both helium-3 and tritium are rare isotopes of their respective elements. Helium-3 is produced from the radioactive decay of tritium, which itself is produced in nuclear reactors. Tritium is a difficult isotope to work with because it is used to make nuclear weapons; has a half-life of about 12 years; and is toxic when ingested or inhaled. To succeed, the team had to create a special cryogenic chamber to contain their target of tritium gas.
Analysis of the scattering experiments revealed tantalizing hints of three-nucleon short-range correlations. Further investigation is needed to determine exactly how the correlations occur. Three nucleons could become correlated simultaneously, for example, or an existing correlated pair could become correlated to a third nucleon.
Three-nucleon interactions are believed to play an important role in the properties of neutron stars, so further investigation into some of the smallest of nuclei could shed light on the inner workings of much more massive objects. “It’s much easier to study a three-nucleon correlation in the lab than in a neutron star,” says Arrington.
The Butler-Volmer equation is commonly regarded as the standard model of electrochemical kinetics. Typically, the effects of applied voltage on the free energies of activation of the forward and backward reactions are analysed and used to derive a current-voltage relationship. Traditionally, specific properties of the electrode metal were not considered in this derivation, and consequently the resulting expression contained no information on the variation of exchange current density with electrode-material-specific parameters such as work function Φ. In recent papers1,2, Buckley and Leddy revisited the classical derivation of the Butler-Volmer equation to include the effect of the electrode metal. We considered in detail the complementary relationship of the chemical potential of electrons μe and the Galvani potential φ, and so derived expressions for the current-voltage relationship and the exchange current density that include μe. The exchange current density j0 appears as an exponential function of Δμe. Making the approximation Δμe ≈ −FΔΦ yields a linear relationship between ln j0 and Φ. This linear increase in ln j0 with Φ had long been reported3 but had not been explained. In this webinar, these recent modifications of the Butler-Volmer equation and their consequences will be discussed.
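For context, the conventional form of the Butler-Volmer equation for a single-electron transfer (the standard textbook expression; the metal-dependent modifications are derived in refs 1 and 2) is

\[
j = j_0\left[\exp\!\left(\frac{\alpha_a F \eta}{RT}\right) - \exp\!\left(-\frac{\alpha_c F \eta}{RT}\right)\right],
\]

where η is the overpotential and α_a, α_c are the anodic and cathodic transfer coefficients. The modification described above yields μe-dependent versions of both this expression and of j0; in particular, j0 becomes an exponential function of Δμe, so that with Δμe ≈ −FΔΦ the exchange current density follows a schematic linear relation ln j0 = a + bΦ, with the constants a and b given in refs 1 and 2.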
1 K S R Dadallagei, D L Parr IV, J R Coduto, A Lazicki, S DeBie, C D Haas and J Leddy, J. Electrochem. Soc. 170, 086508 (2023)
2 D N Buckley and J Leddy, J. Electrochem. Soc. 171, 116503 (2024)
3 S Trasatti, J. Electroanal. Chem. 39, 163–184 (1972)
D Noel Buckley
D Noel Buckley is professor of physics emeritus at the University of Limerick, Ireland and adjunct professor of chemical and biomolecular engineering at Case Western Reserve University. He is a fellow and past-president of ECS and has served as an editor of both the Journal of the Electrochemical Society and Electrochemical and Solid State Letters. He has over 50 years of research experience on a range of topics. His PhD research on oxygen electrochemistry at University College Cork, Ireland was followed by postdoctoral research on high-temperature corrosion at the University of Pennsylvania. From 1979 to 1996, he worked at Bell Laboratories (Murray Hill, NJ), initially on lithium batteries but principally on III-V semiconductors for electronics and photonics. His research at the University of Limerick has been on semiconductor electrochemistry, stress in electrodeposited nanofilms and electrochemical energy storage, principally vanadium flow batteries in collaboration with Bob Savinell’s group at Case. His recent interest in the theory of electron transfer kinetics arose from collaboration with Johna Leddy at the University of Iowa. He has taught courses in scientific writing since 2006 at the University of Limerick and short courses at several ECS Meetings. He is a recipient of the Heinz Gerischer Award and the ECS Electronics and Photonics Division Award. Recently, he led Poetry Evenings at ECS Meetings in Gothenburg and Montreal.
What does a history of women in science accomplish? This volume firmly establishes that women have for a long time made substantial contributions to quantum physics. It raises the profiles of figures like Chien-Shiung Wu, whose early work on photon entanglement is often overshadowed by her later fame in nuclear physics; and Grete Hermann, whose critiques of John von Neumann and Werner Heisenberg make her central to early quantum theory.
But in specifically recounting the work of these women in quantum, do we risk reproducing the same logic of exclusion that once kept them out – confining women to a specialized narrative? The answer is no, and this book is an especially compelling illustration of why.
A reference and a reminder
Two big ways this volume demonstrates its necessity are by its success as a reference, a place to look for the accomplishments and contributions of women in quantum physics; and as a reminder that we still have far to go before there is anything like true diversity, equality or the disappearance of prejudice in science.
The subtitle Beyond Knabenphysik – meaning “boys’ physics” in German – points to one of the book’s central aims: to move past a vision of quantum physics as a purely male domain. Originally a nickname for quantum mechanics given because of the youth of its pioneers, Knabenphysik comes to be emblematic of the collaboration and mentorship that welcomed male physicists and consistently excluded women.
The exclusion was not only symbolic but material. Hendrika Johanna van Leeuwen, who co-developed a key theorem in classical magnetism, was left out of the camaraderie and recognition extended to her male colleagues. Similarly, credit for Laura Chalk’s research into the Stark effect – an early confirmation of Schrödinger’s wave equation – was under-acknowledged in favour of her male collaborator’s.
Something this book does especially well is combine the sometimes conflicting aims of history of science and biography. We learn not only about the trajectories of these women’s careers, but also about the scientific developments they were a part of. The chapter on Hertha Sponer, for instance, traces both her personal journey and her pioneering role in quantum spectroscopy. The piece on Freda Friedman Salzman situates her theoretical contributions within the professional and social networks that both enabled and constrained her. In so doing, the book treats each of these women as not only whole human beings, but also integral players in a complex history of one of the most successful and debated physical theories in history.
Lost physics
Because the history is told chronologically, we trace quantum physics from some of the early astronomical images suggesting discrete quantized elements to later developments in quantum electrodynamics. Along the way, we encounter women like Maria McEachern, who revisits Williamina Fleming’s spectral work; Maria Lluïsa Canut, whose career spanned crystallography and feminist activism; and Sonja Ashauer, a Brazilian physicist whose PhD at Cambridge placed her at the heart of theoretical developments but whose story remains little known.
This history could lead to a broader reflection on how credit, networking and even theorizing are accomplished in physics. Who knows how many discoveries in quantum physics, and science more broadly, could have been made more quickly or easily without the barriers and prejudice women and other marginalized persons faced then and still face today? Or what discoveries still lie latent?
Not all the women profiled here found lasting professional homes in physics. Some faced barriers of racism as well as gender discrimination, like Carolyn Parker who worked on the Manhattan Project’s polonium research and is recognized as the first African American woman to have earned a postgraduate degree in physics. She died young without having received full recognition in her lifetime. Others – like Elizabeth Monroe Boggs who performed work in quantum chemistry – turned to policy work after early research careers. Their paths reflect both the barriers they faced and the broader range of contributions they made.
Calculate, don’t think
The book makes a compelling argument that the heroic narrative of science doesn’t just undermine the contributions of women, but of the less prestigious more broadly. Placing these stories side by side yields something greater than the sum of its parts. It challenges the idea that physics is the work of lone geniuses by revealing the collective infrastructures of knowledge-making, much of which has historically relied not only on women’s labour – and did they labour – but on their intellectual rigour and originality.
Many of the women highlighted were at times employed “to calculate, not to think” as “computers”, or worked as teachers, analysts or managers. They were often kept from more visible positions even when they were recognized by colleagues for their expertise. Katharine Way, for instance, was praised by peers and made vital contributions to nuclear data, yet was rarely credited with the same prominence as her male collaborators. It shows clearly that those employed to support from behind the scenes could and did contribute to theoretical physics in foundational ways.
The book also critiques the idea of a “leaky pipeline”, showing that this metaphor oversimplifies. It minimizes how educational and institutional investments in women often translate into contributions both inside and outside formal science. Ana María Cetto Kramis, for example, who played a foundational role in stochastic electrodynamics, combined research with science diplomacy and advocacy.
Should women’s accomplishments be recognized in relation to other women’s, or should they be integrated into a broader historiography? The answer is both. We need inclusive histories that acknowledge all contributors, and specialized works like this one that repair the record and show what emerges specifically and significantly from women’s experiences in science. Quantum physics is a unique field, and women played a crucial and distinctive role in its formation. This recognition offers an indispensable lesson: in physics and in life it’s sometimes easy to miss what’s right in front of us, no less so in the history of women in quantum physics.
Tau leptons are fundamental particles in the lepton family, similar to electrons and muons, but with unique properties that make them particularly challenging to study. Like other leptons, they have a half-integer spin, but they are significantly heavier and have extremely short lifetimes, decaying rapidly into other particles. These characteristics limit opportunities for direct observation and detailed analysis.
The Standard Model of particle physics describes the fundamental particles and forces, along with the mathematical framework that governs their interactions. According to quantum electrodynamics (QED), a component of the Standard Model, protons in high-energy environments can emit photons (γ), which can then fuse to create a pair of tau leptons (τ⁺τ⁻): γγ → τ⁺τ⁻.
Using QED equations, scientists have previously calculated the probability of this process, how the tau leptons would be produced, and how often it should occur at specific energies. While muons have been extensively studied in proton collisions, tau leptons have remained more elusive due to their short lifetimes.
In a major breakthrough, researchers at CERN have used data from the CMS detector at the Large Hadron Collider (LHC) to make the first measurement of tau lepton pair production via photon-photon fusion in proton-proton collisions. Previously, this phenomenon had only been observed in lead-ion (PbPb) collisions by the ATLAS and CMS collaborations. In those cases, the photons were generated by the strong electromagnetic fields of the heavy nuclei, within a highly complex environment filled with many particles and background noise. In contrast, proton-proton collisions are much cleaner but also much rarer, making the detection of photon-induced tau production a greater technical challenge.
Notably, the team were able to distinguish QED photon-photon collisions from quantum chromodynamics (QCD) processes by the absence of an underlying event. Using the excellent vertex resolution of their pixel detector, they demonstrated that the tau leptons were produced without other nearby tracks (the paths left by particles). To verify the technique, the researchers did careful studies of the same processes in muon pair production and developed corrections to apply to the tau lepton processes.
Demonstrating tau pair production in proton-proton collisions not only confirms theoretical predictions but also opens a new avenue for studying tau leptons in high-energy environments. This breakthrough enhances our understanding of lepton interactions and provides a valuable tool for testing the Standard Model with greater precision.
Understanding the behaviour of atoms and molecules at the quantum level is crucial for advances in chemistry, physics, and materials science. However, simulating these systems is extremely complex.
Traditional methods rely on mathematical functions that must be smooth and differentiable. This limits the types of models that can be used—especially modern machine learning models.
In order to remove this requirement, a team of researchers from Tel Aviv University have developed a new approach by combining a stochastic representation of many-body wavefunctions with path integrals.
Their work opens the door to using more flexible and powerful machine learning architectures, such as diffusion models and piecewise transformers.
They demonstrated their method on a simplified model of interacting particles in a 2D harmonic trap. They were able to show that it can accurately capture complex quantum behaviours, including symmetry breaking and the formation of Wigner molecules (a type of ordered quantum state).
The approach is computationally efficient and scales better with system size than traditional methods.
Most importantly though, this work allows for more accessible and scalable quantum simulations using modern AI techniques, potentially transforming how scientists study quantum systems.
Studying physics can be so busy and stressful that deciding what you should do after graduating is probably the last thing on any student’s mind. Here to help you work out what to do next are four careers experts, who took part in an episode of Physics World Live earlier this year. They all studied physics or engineering – and have thought long and hard about the career opportunities available for physics graduates.
The four experts are:
Crystal Bailey, director of programmes and inclusive practices at the American Physical Society (APS);
Tamara Clelford, a physics consultant working in aerospace and currently leading a review of the Chartered Physicist standard at the Institute of Physics, which publishes Physics World;
Araceli Venegas-Gomez, founder of QURECA, which offers resources, careers advice and education to people who want to work in the quantum sector;
Tushna Commissariat, careers and features editor at Physics World.
The career options for physicists are wide but can also seem overwhelming – so what advice do you have for people starting out on their career journey today?
Crystal Bailey: Finding a fulfilling career means trying to find something that matches your values. I don’t just mean what you’re interested in or what you like – but who you are as a person. So the first step always starts with self-assessment and self-exploration – working out what it is you really want from your life.
Do you want a job that has good work-life balance? Do you want something with a flexible schedule? Or do you want to make money? Making money is a very righteous and noble thing to want to do – there’s nothing wrong with that. But when I give careers talks and ask the audience if they’ve asked themselves those questions, almost nobody raises their hand.
So I encourage you to reflect on a time when you’ve been really happy and fulfilled. I don’t just mean were you doing, say, a quantum-mechanics problem, but were you with other people? Were you alone? Were you doing something with your hands, building something? Or was it something theoretical? You need to understand what will be a good match for you.
After you’ve done that self-assessment and understand what you need, I advise you do “informational interviews”, which basically involves getting in touch with somebody – online or in person – to ask them what they do day-to-day. What advice do they have? Where’s their sector going?
You’ll get real insider knowledge and, more importantly, it’ll help you build your network – especially if you follow up, say, every six months to thank them for advice and update them on your situation. It’ll keep that relationship fresh and serve you later when you’re actually looking for jobs in a more targeted way.
Tamara Clelford: You need to understand what it is you enjoy. Are you a leader or do you like to be managed? Do you prefer to be told what to do? Do you like working in a team or working alone? Are you theoretical or more experimental? Do you prefer research or the real world? Maybe you just want to work with, say, aeroplanes, which is a perfectly valid reason to do so.
You also need to ask yourself where you want to work. Do you want to work in a big company, a medium-sized firm, or a small start-up? I began in a large defence company, where I could easily switch jobs if something wasn’t the right fit. But in a big firm you often get taken off work as priorities change, so I now work for myself, which is fabulous.
Araceli Venegas-Gomez: The hardest thing is finding out what you like. Your long-term goal might be to get rich or have your own company. Once you work that out, you’ll need a short-term plan. It’ll probably change but having a plan is a great start. Then ask yourself: are you good at it? That self-assessment – understanding your skills and talents – is really important.
Tushna Commissariat: My advice is don’t leave your job search until just before you graduate. Start looking at internships and summer jobs as early as you can. I recall interviewing one physicist who sent an e-mail to NASA and got an internship at the age of 15. But on the other hand, remember that even if you land your perfect job, it might not work out, and it’s always okay to change your mind.
Our expert panel
Sound advice: (from left to right) Crystal Bailey, Tamara Clelford, Araceli Venegas-Gomez and Tushna Commissariat. (Courtesy: APS, T Clelford, Qureca, IOP Publishing)
After getting interested in science at high school, Crystal Bailey majored in electrical engineering at the University of Arkansas in Fayetteville but soon realized that “physics was the most beautiful thing ever” and did a PhD in nuclear physics at Indiana University in Bloomington. A chance encounter with someone who was in her Morris-dancing group led to Bailey working as career-programme manager at the American Physical Society, where she now serves as its director of programmes and inclusive practices.
Having declared aged five that she wanted to be a nuclear physicist, Tamara Clelford studied physics and astrophysics at the University of Sheffield in the UK. She has a PhD in antenna design and simulation from Queen Mary, University of London. After a year teaching physics in secondary schools, Clelford then spent a decade working as an antenna engineer in the defence industry. Following a short spell in a start-up, she now works as a freelance physics consultant in the aerospace sector.
Araceli Venegas-Gomez always wanted to work in science or technology and studied aerospace engineering at the Universidad Politécnica de Madrid, before getting a job at Airbus in Germany. However, she always had a passion for physics and in her spare time did a master’s in medical physics via distance learning. After taking an online course in quantum physics at the University of Maryland, Venegas-Gomez did a PhD in quantum simulation at the University of Strathclyde, UK. Her experience of business and academia led her to set up QURECA in 2019, which offers resources, careers advice and education to people who want to work in the burgeoning quantum sector.
Tushna Commissariat grew up in Mumbai, India, where gazing up at the few stars she could make out in the big-city skies inspired her to study science. While doing a bachelor’s degree in physics at Xavier’s College, she did a summer astrophysics placement in Pune, where she quickly realized she wasn’t cut out for academia. Instead, Commissariat did a master’s in science journalism at City, University of London. After an internship at the International Centre for Theoretical Physics in Trieste, Italy, she joined Physics World in 2011, where she now works as careers and features editor.
What is the number one skill – over and above technical knowledge – that physicists have that will help them in their career?
Crystal Bailey: Physicists often go into well-paid jobs that have “engineering” in the title, working alongside other STEM graduates. In fact, physicists have many of the same scientific and technical skills that make engineers and computer scientists so attractive to employers. But what sets physicists apart is a confidence that they can teach themselves whatever they need to know to go to the next step.
It’s a kind of “intellectual fearlessness” that is part of being a physicist. You’re used to marching up to the edge of what is known about the universe and taking that next step over to discover new knowledge. You might not know the answer, but you know you can teach yourself how to find the answer – or find somebody who can help you get there.
Tamara Clelford: It might not help us narrow down where we want to work, but physicists are capable of solving a huge range of problems. We can root around a problem, look for its fundamental aspects, and use mathematical and experimental skills to solve it. Whether it’s a hardware problem, a software problem or the need to derive an equation, we can do all that.
If we’re not an expert in a particular area, we know we can go and get the relevant expertise. As physicists, we know where our limits are. We’re not going to make stuff up to sound better than we are. We have the ability to upskill, to improve and to solve whatever problem we want.
Araceli Venegas-Gomez: As physicists, we have a multidisciplinarity that we often don’t realize we have. If you’re, say, a marine engineer, you’re going to work in marine engineering. But as a physicist, you can work anywhere there’s a job for you. What’s more, physicists don’t only solve problems; we also want to know why they exist. It might take us a bit longer to find a solution, but we look at it in a way that engineers might not.
Tushna Commissariat: One of the brilliant things about physicists is that they’re absolutely confident that they can come in and fix a problem. You see physicists going into biology and saying “Oh cancer, I can do that”. There are physicists who’ve gone into politics and into sport. I’ve even seen physicists improving nappies for babies.
At the same time, there’s almost a joy in failure: if something doesn’t work or goes wrong, it means something exciting and interesting is about to happen. I remember Rolf-Dieter Heuer, who was then director-general of CERN, saying it’ll be more exciting if we don’t find the Higgs boson because it would have meant the Standard Model of particle physics is broken – which would open up a wealth of possibilities.
What do you know today that you wish you’d known at the start of your career?
Crystal Bailey: When I went to grad school, I liked physics and thought “I’m good at it and I want to keep doing physics”. But I didn’t really have a clear reason for staying in academia. I was just doing what I thought was expected of me and didn’t even want a career in academia. So I wish I had had more of a sense of ownership and a little more confidence about my career.
The key message is: don’t doubt yourself. Don’t let anybody tell you that you can’t do something. It’s your life – and what you want is the most important thing. I just wish I had been given a little more encouragement and a little more confidence to go in new directions.
Tamara Clelford: In life, your priorities change and it’s very difficult to project into the future. At any particular time, you have certain experience and knowledge, on which you make the best decision you can make. But if, in five or 10 years’ time, you realize things aren’t working, then change and do something else. Trust your instincts – and change when you need to change.
Araceli Venegas-Gomez: I wish I’d known at the start of my career that everything’s going to be okay and there’s no need to panic. If you’re doing a PhD and you don’t finish it, that’s fine – I don’t think I’ve ever met a single physicist who’s ended up jobless. There are millions of options so remind yourself that everything is going to be okay.
Tushna Commissariat: When you’re studying, it’s easy to feel you’re in a kind of bubble universe of exams, practicals or labs. Setbacks can feel like the end of the world when they really aren’t: your marks on a particular test won’t determine your entire future. Remember that you gain so many useful skills while studying, whether it’s working with other people or doing outreach work, which might seem a waste of time but are great for your CV.
This article is based on the 9 April 2025 episode of Physics World Live, which you can watch on demand here.
The engines in everyday devices such as cars, vacuum cleaners and fans rely on a classical understanding of heat, energy and work. In recent years, scientists have designed (and in some cases built) new types of engines that incorporate unique quantum features. In addition to boosting performance, these features allow quantum engines to perform tasks that classical machines cannot.
Vijit Nautiyal from the University of New England in Armidale, New South Wales, Australia, has now proposed a new type of quantum engine that exchanges not only heat, but also particles, with thermal reservoirs. The advantage of Nautiyal’s proposed quantum thermochemical engine, as described in Physical Review E, is that it combines near-maximum efficiency with high power output. “It’s equivalent to driving a Ferrari at the running cost of a Toyota,” Nautiyal explains. “You enjoy the thrill of high power while saving on fuel efficiency.”
Classical and quantum engines
Car engines typically operate in a four-stroke (Otto) cycle. In the intake stroke, the piston moves downwards, drawing air and fuel into a cylinder. In the compression stroke, the piston moves upwards, compressing the mixture and raising its temperature and pressure adiabatically (that is, without losing or gaining heat). Next comes the expansion (power) stroke: a spark ignites the mixture, adding heat, and the hot gas expands adiabatically, performing work on the piston. Finally, during the exhaust stroke, the piston moves up again, expelling the spent gases from the cylinder.
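For a sense of scale, the efficiency of an idealized (air-standard) Otto cycle depends only on the compression ratio and the heat-capacity ratio of the gas. The short sketch below uses this textbook formula purely as a classical benchmark; it is not taken from Nautiyal’s paper.

```python
# Ideal (air-standard) Otto-cycle efficiency: eta = 1 - r**(1 - gamma),
# where r is the compression ratio and gamma is the heat-capacity ratio.
# Illustrative classical benchmark only; not from Nautiyal's analysis.

def otto_efficiency(compression_ratio: float, gamma: float = 1.4) -> float:
    return 1.0 - compression_ratio ** (1.0 - gamma)

for r in (6, 8, 10, 12):
    print(f"compression ratio {r:>2}: efficiency = {otto_efficiency(r):.2f}")
```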
Nautiyal’s proposed quantum engine replaces the fuel in a car engine with a weakly interacting one-dimensional Bose-Einstein condensate, or Bose gas, in a harmonic trap. Here, the ignition and exhaust (thermalization) strokes are equivalent to coupling the Bose gas to a surrounding cloud of thermal atoms that serves as a hot or cold reservoir. Because the Bose gas (the working fluid) can exchange both heat and particles with this reservoir, the setup can be considered an open quantum system. During the two work strokes (compression and expansion), the gas is instead treated as an isolated quantum many-body system.
The piston in this quantum engine is the strength of inter-atomic interactions in the gas. To move the piston, Nautiyal’s scheme calls for abruptly increasing this interaction strength during the compression stroke and abruptly decreasing it during the expansion stroke.
Engine operations
When Nautiyal’s system exchanges only heat with the hot and cold reservoirs, it cannot operate as an engine because its useful output work is less than the input work. However, if it also exchanges particles with the reservoirs, it operates as a thermochemical engine whose output work exceeds the input, compensating for any quantum friction experienced during the process.
Like the classical Otto engine cycle, Nautiyal’s quantum engine experiences a trade-off between power and efficiency. In classical engines, operating the cycle at a faster speed increases engine power; however, it also typically decreases efficiency because dissipative effects such as heat and friction increase irreversible losses. Similarly, in quantum engines, driving the system faster during the work stroke produces losses in the form of non-adiabatic energy excitations.
These excitations can be suppressed if the work strokes are performed extremely slowly (a quasi-static quench), leading to maximum efficiency. However, this comes at the cost of vanishing power output, because the driving time becomes extremely long. Optimizing this trade-off between power and efficiency is thus one of the main goals of the field of finite-time quantum thermodynamics.
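To see why slower driving buys efficiency at the expense of power, consider a deliberately crude toy model in which the work extracted per stroke approaches its quasi-static value as the stroke time grows, with non-adiabatic losses assumed to fall off as one over the stroke time. The numbers and scalings below are hypothetical and are not drawn from Nautiyal’s analysis.

```python
# Toy power-efficiency trade-off for a finite-time engine stroke.
# Assumptions (hypothetical, not from the paper): work per cycle
# W(tau) = W_qs - loss/tau, with a fixed heat input Q_hot per cycle.

W_qs = 1.0    # quasi-static (maximum) work per cycle, arbitrary units
loss = 0.5    # strength of non-adiabatic, friction-like losses
Q_hot = 2.0   # heat drawn from the hot reservoir per cycle

for tau in (0.5, 1, 2, 5, 20, 100):
    work = W_qs - loss / tau      # fast driving wastes work on excitations
    efficiency = work / Q_hot     # efficiency rises as tau grows...
    power = work / tau            # ...but power vanishes for very long tau
    print(f"tau = {tau:>5}: efficiency = {efficiency:.2f}, power = {power:.3f}")
```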
The upper bound on the work and efficiency produced by Nautiyal’s thermochemical engine is set by an adiabatic quantum thermochemical engine operating at zero temperature. Remarkably, Nautiyal’s engine can operate at near-maximum efficiency while maintaining high power output even in the sudden-quench, out-of-equilibrium regime. This is because, instead of increasing efficiency by extending the cycle time, one can increase it by boosting the flow of particles from the hot reservoir, which raises the internal energy of the working fluid. The additional energy can then be converted into mechanical work during the expansion stroke.
Asked about possible applications of his quantum engine, Nautiyal referred to “quantum steampunk”. This term, which was coined by the physicist Nicole Yunger Halpern at the US National Institute of Standards and Technology and the University of Maryland, encapsulates the idea that as quantum technologies advance, the field of quantum thermodynamics must also advance in order to make such technologies more efficient. A similar principle, Nautiyal explains, applies to smartphones: “The processor can be made more powerful, but the benefits cannot be appreciated without an efficient battery to meet the increased power demands.” Conducting research on quantum engines and quantum thermodynamics is thus a way to optimize quantum technologies.
Dear Physics World readers, I’m going to let you in on a secret. I get anxious every time I see the word “networking” on a meeting or conference agenda. I’m nervous whether anyone will talk to me and – if they do – what I’ll say in reply. Will I end up stuck in a corner fiddling on my phone to make it seem like I want to join in but have something more important to do?
If you feel this way – or even if you don’t – please read on, because I have something important to say to anyone who attends or organizes scientific events.
Now, we all know there are many benefits to networking. It’s a good way to meet like-minded people, tell others about what you’re doing, and build a foundation for collaboration. Networking can also boost your professional and personal development – for example, by identifying new perspectives and challenges, finding a mentor, connecting with other organizations, or developing a tailor-made support system.
However, doing this effectively and efficiently is not necessarily easy. Networking can also soak up valuable time. It can create connections that lead nowhere. It can even be a hugely exploitative and one-sided affair where you find yourself under pressure to share personal and/or professional information that you didn’t intend to.
Top tips
Like most things in life, what you get from networking depends on what you put in. To make the most of such events, try to think about how others are feeling in the same situation. Chances are that they will be a bit nervous and apprehensive about opening the conversation. So there’s no harm in you going first.
A good opening gambit is to briefly introduce yourself, say who you are, where you work and what you do, and seek similar information from the other person. Preparing a short “elevator pitch” about yourself makes it easier to start a conversation and reduces the need to think on the spot. (Fun fact: elevator pitch gets its name from US inventor Elisha Otis, who needed a concise way of explaining his device to catch a plummeting elevator.)
Make an effort to remember other people’s names. I am not brilliant at this and have found that double checking and using people’s names in conversation is a good way to commit them to memory. Some advance preparation also helps. If possible, study the attendee list, so you know who else might be there and where they’re from. Be yourself and try to be an active listener – listen to what others are saying and ask thoughtful questions.
Don’t feel the need to stick with one person or group of people for the whole time. Five minutes or so is polite, and then you can move on and mingle further. Obviously, if you are making a good connection then it’s worth spending a bit more time. And if you are genuinely engaged, making plans to follow up after the event should be straightforward.
Decide the best way to share your contact details. It could be AirDrop on an iPhone, taking a photo of someone’s name badge, sending an e-mail, or swapping business cards (which seems a bit environmentally unfriendly these days). If there are people you want to meet, don’t be afraid to seek them out. It’s always a nice compliment to approach someone and say: “Ah, I was hoping to speak to you today; I’ve heard a lot about you.”
On the flip side, avoid hanging out with your cronies – by which I mean colleagues from the same company or organization, or people you already know well. Set yourself a challenge to meet people you’ve never met before. Remember, few of us like being left out, so try to involve others in a conversation. That’s especially true if someone is listening but not getting the chance to speak; think of a question to bring that person into the discussion.
Of course, if someone you meet doesn’t seem to be relevant to you, don’t be afraid to admit it. I’m sure they won’t be offended if you don’t follow up after the meeting. And to those who are already comfortable with networking, remember not to hog all the limelight and to encourage others to participate.
A message to organizers
Let me end with a message to organizers, which – I’ll be honest – is the main reason I’m writing this article. I have recently attended conferences and events where the music is so loud that people, myself included, have joined the smokers out in the perishing cold simply so we can hear each other speak. Am I getting old, or does this defeat the object of networking? Please, no more loud music!
I also urge event organizers to provide places where people can connect, including tables and seating areas where you can put your plates and drinks down. There’s nothing worse than trying to talk while juggling cutlery to stop a quiche collapsing down the front of your shirt. Buffets are always better than formal sit-down dinners as they give people more opportunity to mix – but remember that long queues for food can arise.
So what has networking ever done for me? Over the years the benefits have changed, but most recently I have met some great peer mentors, people whom I can share cross-industry experience and best practice with. And, if I hadn’t been at a certain Institute of Physics networking event last year and met Matin Durrani, the editor of Physics World, then I wouldn’t be writing this article for you today.
I’ll let you, though, be the judge of whether that was a success. [Editor’s note: it certainly was…]
Evaluating electrocardiogram (ECG) traces using a new deep-learning model known as EchoNext looks set to save lives by flagging patients at high risk of structural heart disease (SHD) who might otherwise be missed.
SHD encompasses a range of conditions affecting millions worldwide, including heart failure and valvular heart disease. It is, however, currently underdiagnosed because the diagnostic test for SHD, an echocardiogram, is relatively expensive and complex and thus not routinely performed. Late diagnosis results in unnecessary deaths, reductions in patient quality-of-life and an additional burden on healthcare services. EchoNext could reduce these problems as it provides a way of determining which patients should be sent for an echocardiogram – ultrasound imaging that shows the valves and chambers and how the heart is beating – by analysing the inexpensive and commonly collected ECG traces that record electrical activity in the heart.
The EchoNext model was developed by researchers at Columbia University and NewYork-Presbyterian Hospital in the US, led by Pierre Elias, assistant professor at Columbia University Vagelos College of Physicians and Surgeons and medical director for artificial intelligence at NewYork-Presbyterian. EchoNext is a convolutional neural network – a model that uses the mathematical operation of convolution to extract features from data and make predictions. EchoNext scans through the ECG data in bite-sized segments, extracting information about each segment and assigning it a numerical “weight”. From these values, the AI model then determines whether a patient is showing markers of SHD and so requires an echocardiogram. EchoNext learns from retrospective data by checking the accuracy of its predictions, and more than 1.2 million ECG traces from 230,000 patients were used in its initial training.
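The core operation is easy to sketch. The toy code below slides a small filter (a set of weights) along a one-dimensional signal, exactly as a 1D convolutional layer does. Everything here – the random “ECG”, the random filter and the crude score – is a stand-in, since EchoNext’s real architecture and trained weights are not described in detail in this article.

```python
# Generic 1D convolution over an ECG-like trace, illustrating how a
# convolutional network processes short segments with learned weights.
# All values here are stand-ins; this is not EchoNext's architecture.

import numpy as np

rng = np.random.default_rng(0)
ecg = rng.normal(size=500)      # stand-in for one ECG lead (500 samples)
kernel = rng.normal(size=16)    # stand-in for a learned filter

def conv1d_valid(signal, kern):
    """Slide the kernel along the signal, one short segment at a time."""
    n = len(signal) - len(kern) + 1
    return np.array([signal[i:i + len(kern)] @ kern for i in range(n)])

feature_map = np.maximum(conv1d_valid(ecg, kernel), 0.0)  # ReLU non-linearity
score = feature_map.mean()      # crude stand-in for a final risk score
print(f"feature-map length: {len(feature_map)}, toy risk score: {score:.3f}")
```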
In their study, reported in Nature, the researchers describe running EchoNext on ECG data from 85,000 patients. The AI model identified 9% of those patients as being in the high-risk category for undiagnosed SHD, 55% of whom subsequently had their first echocardiogram. This resulted in a positive diagnosis in almost three-quarters of cases – double the rate of positivity normally seen in first-time echocardiograms.
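Taken together, those percentages imply a concrete yield, roughly as follows (simple arithmetic on the figures reported above; the exact counts in the paper may differ slightly).

```python
# Rough arithmetic from the reported figures.
patients = 85_000
flagged = patients * 0.09       # 9% flagged as high risk by EchoNext
scanned = flagged * 0.55        # 55% of those had a first echocardiogram
positive = scanned * 0.73       # "almost three-quarters" were positive
print(f"flagged: {flagged:,.0f}, scanned: {scanned:,.0f}, "
      f"new SHD diagnoses: roughly {positive:,.0f}")
```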
EchoNext also outperformed 13 cardiologists in making diagnoses based on 3200 ECGs by correctly flagging 77% of structural heart problems while its human colleagues were only 64% accurate – a result so good that it shocked the researchers.
“The really challenging thing here was that from medical school I was taught that you can’t detect things like heart failure or valvular disease from an electrocardiogram. So we initially asked: would the model actually pick out patients with disease that we were missing? I have read more than 10,000 ECGs in my career and I can’t look at an ECG and see what an AI model is seeing,” enthuses Elias. “It’s able to pick up on different sets of patterns that are not necessarily perceptible to us.”
Elias instigated the EchoNext project after an upsetting incident in which he was unable to save a patient transferred from another hospital with critical valvular heart disease because they had been diagnosed too late. “You can’t take care of the patient you don’t know about. So we said: is there a way that we can do a better job with diagnoses?”
EchoNext is now undergoing a clinical trial, based in eight hospital emergency departments, that ends in 2026. “My number one priority is to produce the right clinical evidence that is necessary to prove this technology is safe and efficacious, can be widely adopted and has value in helping patients,” says Elias.
He stresses that it is still early days for all AI technologies, but that even in these trial phases EchoNext – which was recently designated a breakthrough technology by the US Food and Drug Administration (FDA) – is already improving patient lives.
“It’s a really wonderful thing that every week we get to meet the patients that this helped. Our goal is for this to impact as many patients as possible over the next 12 months,” states Elias, adding that since EchoNext is successfully detecting 13 types of heart disease, a similar system should be useful in other healthcare domains too. “We think these kinds of AI-augmented biomarkers can become something that is routinely ordered and used as part of clinical practice,” he concludes.
Illustration of a polaron: the bright sphere is the electron, which distorts the surrounding lattice, while the wavy lines represent high-order Feynman diagrams for the electron–phonon interaction. (Courtesy: Ella Maru Studio)
Electron–phonon interactions in a material have been modelled by combining billions of Feynman diagrams. Using a modified form of the Monte Carlo method, Marco Bernardi and colleagues at the California Institute of Technology predicted the behaviour of polarons in certain materials without racking up significant computational costs.
Phonons are quantized collective vibrations of the atoms or molecules in a lattice. When an electron moves through certain solids, it can interact with phonons. This electromagnetic interaction creates a particle-like excitation that comprises a propagating electron surrounded by a cloud of phonons. This quasiparticle excitation is called a polaron.
By lowering the electron’s mobility and increasing its effective mass, polarons can have a substantial impact on the electronic properties of a variety of materials – including semiconductors and high-temperature superconductors.
However, physicists have struggled to model polarons, and it would be extremely helpful to represent them using Feynman diagrams – a mainstay of particle physics used to calculate the probabilities of particle interactions. This has been challenging because polarons emerge from a superposition of infinitely many higher-order interactions between electrons and phonons. With each successive order, the complexity of these interactions steadily increases – along with the computational power required to represent them with Feynman diagrams.
Higher-order trouble
Unlike in many other interactions, each higher order remains important for representing the polaron as accurately as possible. As a result, the calculations cannot be simplified using standard perturbation theory, in which only the first few orders of interaction are needed to closely approximate the overall process.
“If you can calculate the lowest order, it’s very likely that you cannot do the second order, and the third order will just be impossible,” Bernardi explains. “The computational cost typically scales prohibitively with interaction order. There are too many diagrams to compute, and the higher-order diagrams are too computationally expensive. It’s basically a nightmare in terms of scaling.”
Bernardi’s team – which also included Yao Luo and Jinsoo Park – approached the problem with the Monte Carlo method. This involves taking repeated random samples within a space of all possible events contributing to a process, then adding them together. It allows researchers to build up a close approximation of the process, without accounting for every possibility.
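The principle is the same as in any Monte Carlo estimate: average over random samples rather than enumerating every term. The sketch below estimates a simple one-dimensional integral this way; the team’s diagrammatic Monte Carlo is far more elaborate, so treat this only as an illustration of the sampling idea.

```python
# Generic Monte Carlo estimate of an integral by averaging random samples,
# illustrating the sampling principle (not the team's diagrammatic algorithm).

import numpy as np

rng = np.random.default_rng(42)

def contribution(x):
    return np.exp(-x**2)          # stand-in for "one contribution"

n = 100_000
samples = rng.uniform(0.0, 1.0, n)        # random points in the domain [0, 1]
values = contribution(samples)
estimate = values.mean()                  # average of sampled contributions
error = values.std() / np.sqrt(n)         # statistical uncertainty
print(f"estimate: {estimate:.4f} +/- {error:.4f} (exact value is about 0.7468)")
```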
The team generated a series of Feynman diagrams spanning the full range of possible electron–phonon interactions. Then, they combined the diagrams to gain precise descriptions of the dynamic and ground-state properties of polarons in real materials.
Statistical noise
One issue with a fully random Monte Carlo approach is the sign problem, a source of statistical noise that emerges as electrons scatter between different energy bands during electron–phonon interactions. Because different bands can contribute positively or negatively to the interaction probabilities represented by the Feynman diagrams, these contributions can cancel each other out when added together, leaving a small signal swamped by noise.
To avoid this, Bernardi’s team adapted the Monte Carlo method to evaluate each band contribution in a structured, non-random way – preventing sign cancellations. In addition, the researchers applied a matrix compression approach. This vastly reduced the size and complexity of the electron–phonon interaction data, without sacrificing accuracy. Altogether, this enabled them to generate billions of diagrams without significant computational costs.
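The paper’s compression scheme is not spelled out in this article, but the general idea of shrinking a large interaction matrix with little loss of accuracy can be illustrated with a standard low-rank (singular-value) truncation, as in the hypothetical example below.

```python
# Generic low-rank compression of a matrix via truncated SVD. This is a
# standard technique shown for illustration; it is not necessarily the
# compression used for the electron-phonon matrices in the paper.

import numpy as np

rng = np.random.default_rng(1)
# A 400 x 400 matrix that is approximately rank 20, plus a little noise.
A = rng.normal(size=(400, 20)) @ rng.normal(size=(20, 400))
A += 1e-3 * rng.normal(size=A.shape)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 20                                         # keep only k singular values
A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]

rel_err = np.linalg.norm(A - A_k) / np.linalg.norm(A)
stored = U[:, :k].size + k + Vt[:k, :].size
print(f"numbers stored: {stored} instead of {A.size}; relative error: {rel_err:.1e}")
```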
“The clever diagram sampling, sign problem removal, and electron–phonon matrix compression are the three key pieces of the puzzle that have enabled this paradigm shift in the polaron problem,” Bernardi explains.
The trio hopes that its technique will help us understand polaron behaviours. “The method we developed could also help study strong interactions between light and matter, or even provide the blueprint to efficiently add up Feynman diagrams in entirely different physical theories,” Bernardi says. In turn, it could help to provide deeper insights into a variety of effects where polarons contribute – including electrical transport, spectroscopy, and superconductivity.
A young gas giant exoplanet appears to be causing its host star to emit energetic outbursts. This finding, which comes from astronomers at the Netherlands Institute for Radio Astronomy (ASTRON) and collaborators in Germany, Sweden and Switzerland, is the first evidence of planets actively influencing their stars, rather than merely orbiting them.
“Until now, we had only seen stars flare on their own, but theorists have long suspected that close-in planets might disturb their stars’ magnetic fields enough to trigger extra flares,” explains Maximilian Günther, a project scientist with the European Space Agency’s Cheops (Characterising ExOPlanet Satellite) mission. “This study now offers the first observational hint that this might indeed be happening.”
Stars with flare(s)
Most stars produce flares at least occasionally. This is because as they spin, they build up magnetic energy – a process that Günther compares to the dynamos on Dutch bicycles. “When their twisted magnetic field lines occasionally snap, they release bursts of radiation,” he explains. “Our own Sun regularly behaves like this, and we experience its bursts of energy as part of space weather on Earth.” The charged particles that follow such flares, he adds, are responsible for the aurorae at our planet’s poles.
The flares the ASTRON team spotted came from a star called HIP 67522. Although classified as a G dwarf star like our own Sun, HIP 67522 is much younger, being 17 million years old rather than 4.5 billion. It is also slightly larger and cooler, and astronomers had previously used data from NASA’s Transiting Exoplanet Survey Satellite (TESS) to identify two planets orbiting it. Denoted HIP 67522 b and HIP 67522 c, both are located near their host, but HIP 67522 b is especially close, completing an orbit in just seven Earth days.
In the latest work, which is detailed in Nature, ASTRON’s Ekaterina Ilin and colleagues used Cheops’ precision targeting to make more detailed observations of the HIP 67522 system. These observations revealed a total of 15 flares, and Ilin notes that almost all of them appeared to be coming towards us as HIP 67522 b transited in front of its host as seen from Earth. This is significant, she says, because it suggests that the flares are being triggered by the planet, rather than by some other process.
“This is the first time we have seen a planet influencing its host star, overturning our previous assumptions that stars behave independently,” she says.
Six times more flaring
The ASTRON team estimate that HIP 67522 b is exposed to around six times as many flares as it would be if it wasn’t triggering some of them itself. This is an unusually high level of radiation, and it may help explain recent observations from the James Webb Space Telescope (JWST) that show HIP 67522 b losing its atmosphere faster than expected.
“The new study estimates that the planet is cutting its own atmosphere’s life short by half,” Günther says. “It might lose its atmosphere in the next 400‒700 million years, compared with the 1 billion years it would otherwise last.”
If such a phenomenon turns out to be common, he adds, “it could help explain why some young planets have inflated atmospheres or evolve into smaller, denser worlds. And it could inform how we see the demography of ‘adult planets’.”
Astrobiology implications
One big unanswered question, Günther says, is whether the slightly more distant planet HIP 67522 c shows similar interactions with its host. “Comparing the two would be incredible, not only doubling the sampling size, but revealing how distance from the star affects magnetic interactions.”
The ASTRON researchers say they also want to understand the magnetic field of HIP 67522 b itself. More broadly, they plan to look for other such systems, hoping to find out how common they really are.
For Günther, who was not directly involved in the present study, even a single example is already important. “I have worked on exoplanets and stellar flares myself for many years, mostly inspired by the astrobiology implications, but this discovery opens a whole new window into how stars and planets can influence each other,” he says. “It is a wake-up call to me that planets are not just passive passengers; they actively shape their environments,” he tells Physics World. “That has big implications for how we think about planetary atmospheres, habitability and the evolution of worlds across the galaxy.”
Many of us will have careers with three distinct eras: education, work and retirement. While the first two tend to be regimented, the third age offers the possibility of pursuing a wide range of interests.
Our guest in this episode of the Physics World Weekly podcast is the retired particle physicist Michael Albrow, who is scientist emeritus at Fermilab in the US. He has just published his book Space Times Matter: One Hundred Short Stories About The Universe, which is a collection of brief essays and poems related to science.
Much of the book comes from a newspaper column that Albrow wrote earlier in his retirement and he has also been involved in collaborations with visual and musical artists. In this podcast he talks about this third age of his career as a physicist and gives some tips for your retirement.
Ever since the 2011 accident at the Fukushima Daiichi nuclear power plant, which discharged radionuclides into the ocean, operators at the Tokyo Electric Power Company (TEPCO) have been implementing measures to reduce groundwater inflow into the damaged reactor buildings. TEPCO has also been pumping water into the reactors since the accident to cool them.
The cooling water is then treated using the Advanced Liquid Processing System (ALPS), which removes all radioactive materials from the water except for tritium – an isotope that is very difficult to remove and has a half-life of 12.32 ± 0.02 years. The treated water has therefore accumulated in storage tanks at the site, where space is limited.
To combat this storage issue, in 2021 the Japanese government adopted a policy of discharging the ALPS-treated water into the ocean through a 1 km-long tunnel. The release of the treated water (containing tritium) began on 24 August 2023 and is planned to continue until 2050. The government set suspension thresholds for tritium of 700 Bq/L in the vicinity of the discharge outlet and 30 Bq/L in the wider ocean; if concentrations exceed these levels, discharging must stop immediately.
Researchers at the University of Tokyo have now collaborated with Fukushima University to investigate the effects of discharging tritium into the local ocean environment, and whether the discharge of this treated water is actually having an adverse impact. The study used an ocean general circulation model known as COCO4.9 to examine how climate conditions – such as long-term global warming – influence the dispersal of tritium from the power plant under various discharge scenarios. The researchers examined multiple scenarios (based on the amount of tritium released) up until 2099.
Previously, no modelling had examined the long-term impacts in the context of the planet’s changing environmental conditions. In a press release from the University of Tokyo, lead author Alexandre Cauqouin states: “In our global ocean simulations, we could investigate how ocean circulation changes due to the global warming and representation of fine-scale ocean eddies influence the temporal and spatial distribution of tritium originating from these treated-water releases”.
It is important to find out how fast and far the tritium discharge spreads because both climate change and eddies in water currents can speed up the movement of tritium through the ocean.
The study revealed that in all but one of the modelled scenarios, the tritium concentration in the ocean remained almost unchanged and very low, the only exception being the release location itself, where concentrations are higher because the treated water has not yet dissipated. This was true for both long- and short-term scenarios, showing that the discharge from the Fukushima Daiichi nuclear power plant has an almost negligible impact on the ocean.
In all but the worst-case scenario, the model showed that the increase in tritium from the treated water is 0.1% or less of the background tritium concentration of 0.03–0.2 Bq/L within 25 km of the discharge site in the Pacific Ocean. This is well below detection limits – so small that the added tritium cannot be measured directly in seawater. The results are also far below the safety standard of 10,000 Bq/L set by the World Health Organization, and are consistent with the physical seawater monitoring being performed today.
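Putting those numbers side by side makes the margin explicit (simple arithmetic on the values quoted above).

```python
# Simple arithmetic on the concentrations quoted above.
background_max = 0.2        # Bq/L, upper end of the tritium background
added_fraction = 0.001      # modelled increase is 0.1% of background or less
who_limit = 10_000          # Bq/L, WHO safety standard for tritium

added_max = added_fraction * background_max
print(f"maximum modelled increase: about {added_max:.4f} Bq/L")
print(f"WHO limit is roughly {who_limit / added_max:,.0f} times higher")
```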
Even in the worst-case scenario, tritium levels remained well below detection limits. The model did find, however, that in such a high-CO2-emission scenario there would be an increased tritium concentration to the south of Japan owing to the Kuroshio current, and that this tritium could theoretically reach the western coast of the US – but at concentrations too low to have any adverse effects anywhere in the Pacific Ocean.
Overall, the study showed that the long-term safety threshold won’t be exceeded under the current treated water release plans. The study could also help with building future models to better understand how tritium moves through both water vapour and ocean water – as tritium could be used in the future as a chemical tracer to track atmospheric and oceanic circulation, precipitation patterns, river catchments, moisture sources and groundwater flow.
Glioblastoma is the most aggressive brain cancer and the hardest to treat, as it spreads and invades healthy brain tissue in a diffuse, microscopic way. Surgical treatment calls for a fine balance between excising all cancerous tissues and removing as little healthy brain tissue as possible. To help neurosurgeons more accurately remove glioblastoma, an international research collaboration has developed an optical imaging probe that identifies microscopic cancer cells in the margins of tumour-resected cavities in the brain.
The imaging probe works by exploiting the significantly increased fatty acid (FA) metabolism exhibited by glioblastoma cells. FA metabolism plays a key role in tumour progression and proliferation and is central to cancer immunity. To enable real-time, non-invasive imaging of FA absorption, the researchers – from Erasmus University Medical Center (Erasmus MC) in The Netherlands and the University of Missouri in the US – covalently linked a long-chain saturated FA with the clinically approved near-infrared (NIR) dye indocyanine green (ICG).
ICG has intrinsic low autofluorescence, enables deep tissue imaging and exhibits a high signal-to-noise ratio compared with visible fluorophores. The team hypothesized that a probe combining ICG with a FA might specifically accumulate in tumours and enable efficient intraoperative visualization of tumour margins. Importantly, the spectral characteristics of ICG make it compatible with many existing intraoperative cameras and surgical microscopes.
The researchers initially investigated the uptake of the FA-ICG probe in living cells, confirming that the dye’s physiological uptake resembles that of natural FAs. They then used fluorescence imaging to assess FA-ICG uptake in mice with implanted glioblastoma, observing high accumulation in the brain tumours.
Comparing the fluorescence signal from mice administered with equivalent doses of FA-ICG and ICG revealed that the average radiance from FA-ICG was approximately 2.2 times higher than that from ICG. At 12 and 24 h post-injection, retention of the probe in the brain was approximately two to three times higher in the tumour-bearing hemisphere than in the non-tumour-bearing one.
Next, lead authors Meedie Ali and Pavlo Khodakivskyi and their colleagues investigated the application of FA-ICG as a preclinical imaging agent in a patient-derived model of glioblastoma. They showed that the probe could successfully image tumour growth at different time points in several mice.
“This finding is of importance for preclinical research since patient-derived xenograft models of glioblastoma are characterized by an unpredictable growth pattern and low tumour implantation rates,” explains principal investigator Elena Goun from the University of Missouri. “Thus, monitoring of tumour status by sensitive, non-invasive in vivo fluorescence imaging would be of high value as the introduction of optical imaging of reporter genes [an alternative monitoring approach] is known to result in tumour phenotypic alterations.”
Fluorescence-guided surgery
The researchers also demonstrated the feasibility of FA-ICG as a contrast agent for NIR image-guided cancer surgery, performing surgery on tumour-bearing mice using a standard NIR camera approved for use in surgical suites. Not only did the FA-ICG probe successfully image glioblastoma in the animals’ brains, but the brains also exhibited a considerably higher fluorescence signal than seen from similar mice injected with an ICG-only dye.
Subsequently, the team employed the probe during surgical resection of veterinarian-diagnosed symptomatic canine mastocytoma (a skin cancer) in a pet dog. Ten hours after injection with FA-ICG, the dog underwent surgery, with image-guided surgery performed successfully using an open-air NIR surgical camera.
If the probe transitions to routine clinical use, it could prove to be of great benefit to neurosurgeons. If they can identify cancer cells – which are microscopic and resemble healthy brain tissue – beyond the surgical margins, follow-up chemotherapy and radiation treatments should be more effective and cancer recurrence may be delayed. The probe also offers practical features: a workable surgical procedure, an appropriate half-life and fluorescence that can be seen under normal operating room lights.
“Our results demonstrate that FA metabolism represents an excellent target for tumour imaging, leading to significantly enhanced uptake of the FA-ICG probe in tumours,” the researchers write. “[The probe] represents a promising candidate for a wide range of applications in the fields of metabolic imaging, drug development and most notably for translation in image-guided surgery.”
The researchers are now planning a Phase I clinical trial to examine the safety and efficacy of the probe. Specifically, they aim to determine how well patients tolerate the probe, what side effects may occur at an effective dose, and how the probe’s performance compares to existing optical imaging surgical tools.
“The upside of fluorescence-guided surgery is that you can make little remnants much more visible using the light emitting properties of these tumour cells when you give them a dye,” says Rutger Balvers, a neurosurgeon at Erasmus MC who is expected to lead the human clinical trials, in a press statement. “And we think that the upside of FA-ICG compared to what we have now is that it’s more select in targeting tumour cells. The visual properties of the probe are better than what we’ve used before.”
As a mother of two, I’ve read a lot of children’s books. While there are some so good that even parents don’t mind reading them again and again, it’s also very easy for them to miss the mark and end up “accidentally” hidden behind other books. They’ve not only got to have an exciting story, but also easy wording, a rhythmic pace, flowing language and captivating pictures.
Great non-fiction kids’ books are especially hard to find as they need to add in yet another ingredient: facts. As a result, they can often struggle to portray educational topics in an accessible and engaging way without being boring. So when I saw that the ever-impressive Jess Wade had published her second children’s book about physics, Light: the Extraordinary Energy That Illuminates Our World, I was intrigued.
And now, with the help of beautiful illustrations by Argentinian artist Ana Sanfelippo, Wade has created a clear, concise explanation of light, how it behaves and how we use it. The book starts by describing where light comes from and why we need it, and goes on to more complex topics like reflection, scattering and dispersion, the electromagnetic spectrum, and technologies that use light.
The language is clear, the sentences are simple, and there is a flow to the narrative that makes up for the lack of a story. Wade makes the science relatable for children by bringing in real-world examples – such as how your shadow changes length during the day, and how apples reflect red light so look red. And throughout, Sanfelippo’s gorgeous illustrations fill the pages with colourful images of a girl and her dog exploring the concepts discussed, keeping the content bright and cheerful.
Cats and secrets
Now, obviously, I am not the target audience for Light. So, as my own children are too young (the listed age range is 7–12 years), I asked my eight-year-old niece, Katie, to take a look.
Instantly, Katie loved the illustrations, which helped keep her engaged with the content as she read – her favourite was one of a cat using a desk lamp to create a shadow. She was intrigued by how fast light is – “you’d have to run seven and a half times around Planet Earth in a single second” – and liked being “let in on a secret” when Wade explains that white light actually contains a rainbow.
But as the book went on, she found some bits confusing, like the section on the electromagnetic spectrum. “It’s definitely a book someone Katie’s age should read with a grown up, and maybe in two sittings, because it’s very information heavy (in a good way),” said her mum, Nicci. Indeed, there are a couple of page spreads that stand out as being particularly busy and wordy, and these dense parts somewhat interrupt the book’s flow. “But overall, she found the topic very interesting, and it provoked a lot of questions,” Nicci continued. “I enjoyed sharing it with her!”
I think it’s safe to say that Wade can add another success to her list of many accomplishments. Light is beautiful and educational and, personally, I wouldn’t hesitate to give it as a gift or keep it at the front of the bookshelf.
Superconductors are materials that, below a certain critical temperature, exhibit zero electrical resistance and completely expel magnetic fields, a phenomenon known as the Meissner effect. They can be categorized into two types.
Type-I superconductors are what we typically think of as conventional superconductors. They expel magnetic fields entirely and abruptly lose their superconducting properties when the applied magnetic field exceeds a certain threshold, known as the critical field, whose value depends on temperature.
In contrast, Type-II superconductors have two critical field values. As the magnetic field increases, the material transitions through different states. At low magnetic fields below the first critical field, magnetic flux is completely excluded. Between the first and second critical fields, some magnetic flux enters the material. Above the second critical field, superconductivity is destroyed.
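In the standard Ginzburg-Landau picture, the distinction between the two types comes down to the ratio of the magnetic penetration depth λ to the coherence length ξ. This is a textbook criterion, included here for reference rather than taken from the study.

```latex
% Textbook Ginzburg-Landau criterion separating the two types of superconductor
\[
  \kappa = \frac{\lambda}{\xi}, \qquad
  \kappa < \frac{1}{\sqrt{2}} \;\Rightarrow\; \text{Type-I}, \qquad
  \kappa > \frac{1}{\sqrt{2}} \;\Rightarrow\; \text{Type-II (vortices can form)}.
\]
```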
In Type-II superconductors, when magnetic flux enters the material, it does so at discrete points, forming quantized vortices. These vortices repel each other and self-organize into a regular pattern known as the Abrikosov lattice. This effect has also been observed in Bose-Einstein condensates (bosons at extremely low temperatures) and chiral magnets (magnetic materials with spirally aligned magnetic moments). Interestingly, similar vortex self-organization is seen in liquid crystals, offering deeper insights into the underlying physics.
In this study, the researchers investigate vortex behaviour within a liquid crystal droplet, revealing a novel phenomenon termed Abrikosov clusters, which parallels the structures seen in Type-II superconductors. They examine the transition from an isotropic liquid phase to a chiral liquid phase upon cooling. Through a combination of experimental observations and theoretical modelling, the study demonstrates how chiral domains – in other words, topological defects – cluster owing to the interplay between vortex repulsion and the spatial confinement imposed by the droplet.
To model this behaviour, the researchers use a mathematical framework originally developed for superconductivity, the Ginzburg-Landau equation, which helps identify how certain vortex patterns emerge by minimizing the system’s energy. An interesting observation is that light passing through the chiral domains of the droplet can itself acquire chirality. This suggests that the research may offer innovative ways to steer and shape light, making it valuable for both data communication and astronomical imaging.
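For reference, the standard superconducting version of the Ginzburg-Landau free energy mentioned above – whose minimization yields the vortex configurations – has the familiar form below; the liquid-crystal analogue used in the study differs in its details.

```latex
% Standard Ginzburg-Landau free energy for a superconductor with order
% parameter psi; vortex patterns emerge from configurations minimizing F.
\[
  F[\psi,\mathbf{A}] = \int \mathrm{d}^3 r \left[
    \alpha|\psi|^2 + \frac{\beta}{2}|\psi|^4
    + \frac{1}{2m^{*}}\left|\left(-i\hbar\nabla - q\mathbf{A}\right)\psi\right|^2
    + \frac{|\mathbf{B}|^2}{2\mu_0}
  \right]
\]
```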
The quantum tangent kernel method is a mathematical approach used to understand how fast and how well quantum neural networks can learn. A quantum neural network is a machine learning model that runs on a quantum computer. Quantum tangent kernels help predict how the model will behave, particularly as it becomes very large – this is known as the infinite-width limit. This allows researchers to assess a model’s potential before training it, helping them design more efficient quantum circuits tailored to specific learning tasks.
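As a concrete (and purely illustrative) toy, the sketch below builds a tangent kernel for a tiny simulated one-qubit model: it computes the gradient of the model’s output with respect to its trainable parameters at several inputs, and forms the kernel from inner products of those gradients. The circuit, parameter values and finite-difference gradients are all hypothetical stand-ins, not the construction analysed in the paper.

```python
# Toy tangent kernel for a tiny simulated one-qubit model (illustrative only).
# K[i, j] is the inner product of the parameter-gradients of the model output
# evaluated at inputs x_i and x_j.

import numpy as np

def ry(angle):
    """Real-valued single-qubit rotation about the Y axis."""
    c, s = np.cos(angle / 2), np.sin(angle / 2)
    return np.array([[c, -s], [s, c]])

def model(x, theta):
    """Expectation value of Z after encoding x between two trainable rotations."""
    state = ry(theta[1]) @ ry(x) @ ry(theta[0]) @ np.array([1.0, 0.0])
    z = np.array([[1.0, 0.0], [0.0, -1.0]])
    return state @ z @ state

def grad(x, theta, eps=1e-6):
    """Finite-difference gradient of the model output w.r.t. the parameters."""
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        up, down = theta.copy(), theta.copy()
        up[i] += eps
        down[i] -= eps
        g[i] = (model(x, up) - model(x, down)) / (2 * eps)
    return g

theta = np.array([0.3, -0.7])                   # hypothetical trained parameters
inputs = np.array([0.0, 0.5, 1.0, 1.5])         # a few example data points
J = np.stack([grad(x, theta) for x in inputs])  # one gradient row per input
K = J @ J.T                                     # the tangent kernel matrix
print(np.round(K, 3))
```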
A major challenge in quantum machine learning is the barren plateau problem, where the optimization landscape becomes flat, hiding the location of the minimum energy state. Imagine hiking in the mountains, searching for the lowest valley, but standing on a huge, flat plain. You wouldn’t know which direction to go. This is similar to trying to find the optimal solution in a quantum model when the learning signal disappears.
To address this, the researchers introduce the concept of quantum expressibility, which describes how well a quantum circuit can explore the space of possible quantum states. In the hiking analogy, quantum expressibility is like the detail level of your map. If expressibility is too low, the map lacks enough detail to guide you. If it’s too high, the map becomes overly complex and confusing.
The researchers investigate how quantum expressibility influences the value concentration of quantum tangent kernels. Value concentration refers to the tendency of kernel values to cluster around zero, which contributes to barren plateaus. Through numerical simulations, the authors validate their theory and show that quantum expressibility can help predict and understand the learning dynamics of quantum models.
In machine learning, loss functions measure the difference between predicted outputs and actual target values. These can relate to a global optimum (the best possible value across the entire system) or a local optimum (the best value within a small region or subset of qubits). The study shows that high expressibility can drastically reduce quantum tangent kernel values for global tasks, though this effect can be partially mitigated for local tasks.
The study establishes the first rigorous analytical link between the expressibility of quantum encodings and the behaviour of quantum neural tangent kernels. It offers valuable insights for improving quantum learning algorithms and supports the design of better quantum models, especially large, powerful quantum circuits, by showing how to balance expressiveness and learnability.