
Shengxi Huang: how defects can boost 2D materials as single-photon emitters

Hidden depths Shengxi Huang (left) with members of her lab at Rice University in the US, where she studies 2D materials as single-photon sources. (Courtesy: Jeff Fitlow)

Everyday life is three dimensional, with even a sheet of paper having a finite thickness. Shengxi Huang from Rice University in the US, however, is drawn to 2D materials, which are usually just one atomic layer thick. Graphene is perhaps the most famous example – a single layer of carbon atoms arranged in a hexagonal lattice. But since graphene was first isolated in 2004, all sorts of other 2D materials, notably boron nitride, have been made.

An electrical engineer by training, Huang did a PhD at the Massachusetts Institute of Technology and postdoctoral research at Stanford University before spending five years as an assistant professor at the Pennsylvania State University. Huang has been at Rice since 2022, where she is now an associate professor in the Department of Electrical and Computer Engineering, the Department of Materials Science and NanoEngineering, and the Department of Bioengineering.

Her group at Rice currently has 12 people, including eight graduate students and four postdocs. Some are physicists, some are engineers, while others have backgrounds in materials science or chemistry. But they all share an interest in understanding the optical and electronic properties of quantum materials and seeing how they can be used, for example, as biochemical sensors. Lab equipment from PicoQuant is vital to that quest, as Huang explains in an interview with Physics World.

Why are you fascinated by 2D materials?

I’m an electrical engineer by training, which is a very broad field. Some electrical engineers focus on things like communication and computing, but others, like myself, are more interested in how we can use fundamental physics to build useful devices, such as semiconductor chips. I’m particularly interested in using 2D materials for optoelectronic devices and as single-photon emitters.

What kinds of 2D materials do you study?

The materials I am particularly interested in are transition metal dichalcogenides, which consist of a layer of transition-metal atoms sandwiched between two layers of chalcogen atoms – sulphur, selenium or tellurium. One of the most common examples is molybdenum disulphide, which in its monolayer form has a layer of sulphur on either side of a layer of molybdenum. In multilayer molybdenum disulphide, the van der Waals forces between adjacent trilayers are relatively weak, which is why the material is widely used as a lubricant – just like graphite, which is a many-layer version of graphene.

Why do you find transition metal dichalcogenides interesting?

Transition metal dichalcogenides have some very useful optoelectronic properties. In particular, they emit light whenever the electron and hole that make up an “exciton” recombine. Because these dichalcogenides are so thin, most of the light they emit can be used. In a 3D material, in contrast, most light is generated deep in the bulk and never escapes through the surface. Such 2D materials are therefore very efficient and, what’s more, can be easily integrated onto chip-based devices such as waveguides and cavities.

Transition metal dichalcogenide materials also have promising electronic applications, particularly as the active material in transistors. Over the years, we’ve seen silicon-based transistors get smaller and smaller as we’ve followed Moore’s law, but we’re rapidly reaching a limit where we can’t shrink them any further, partly because the electrons in very thin layers of silicon move so slowly. In 2D transition metal dichalcogenides, in contrast, the electron mobility can actually be higher than in silicon of the same thickness, making them a promising material for future transistor applications.

What can such sources of single photons be used for?

Single photons are useful for quantum communication and quantum cryptography. Carrying information as zeros and ones, they basically function as qubits, providing a very secure communication channel. Single photons are also interesting for quantum sensing and even quantum computing. But it’s vital that you have a highly pure source of photons. You don’t want them mixed up with “classical photons”, which – like those from the Sun – are emitted in bunches; otherwise the tasks you’re trying to perform cannot be completed.

What approaches are you taking to improve 2D materials as single-photon emitters?

What we do is introduce atomic defects into a 2D material to give it optical properties that are different to what you’d get in the bulk. There are several ways of doing this. One is to irradiate a sample with ions or electrons, which can knock individual atoms out to generate “vacancy defects”. Another option is to use plasmas, whereby atoms in the sample get replaced by atoms from the plasma.

So how do you study the samples?

We can probe defect emission using a technique called photoluminescence, which basically involves shining a laser beam onto the material. The laser excites electrons from the ground state to an excited state, prompting them to emit light. As the laser beam is about 500–1000 nm in diameter, we can see single-photon emission from an individual defect provided the defect density is low enough that only one defect sits within the laser spot.

Beyond the surface Shengxi Huang (second right) uses equipment from PicoQuant to probe 2D materials. (Courtesy: Jeff Fitlow)

What sort of experiments do you do in your lab?

We start by engineering our materials at the atomic level to introduce the correct type of defect. We also try to strain the material, which can increase how many single photons are emitted at a time. Once we’ve confirmed we’ve got the correct defects in the correct location, we check the material is emitting single photons by carrying out optical measurements, such as photoluminescence. Finally, we characterize the purity of our single photons – ideally, they shouldn’t be mixed up with classical photons but in reality, you never have a 100% pure source. As single photons are emitted one at a time, they have different statistical characteristics to classical light. We also check the brightness and lifetime of the source, the efficiency, how stable it is, and whether the photons are polarized. In fact, we have a feedback loop: what improvements can we make at the atomic level to get the properties we’re after?

Is it difficult adding defects to a sample?

It’s pretty challenging. You want to add just one defect to an area that might be just one micron square so you have to control the atomic structure very finely. It’s made harder because 2D materials are atomically thin and very fragile. So if you don’t do the engineering correctly, you may accidentally introduce other types of defects that you don’t want, which will alter the defects’ emission.

What techniques do you use to confirm the defects are in the right place?

Because the defect concentration is so low, we cannot use methods that are typically used to characterize materials, such as X-ray photoemission spectroscopy or scanning electron microscopy. Instead, the best and most practical way is to see if the defects generate the correct type of optical emission predicted by theory. But even that is challenging because our calculations, which we work on with computational groups, might not be completely accurate.

How do your PicoQuant instruments help in that regard?

We have two main pieces of equipment – a MicroTime 100 photoluminescence microscope and a FluoTime 300 spectrometer. These have been customized to form a Hanbury Brown–Twiss interferometer, which measures the purity of a single-photon source. We also use the microscope and spectrometer to characterize the photoluminescence spectrum and lifetime. Essentially, if the material emits light, we can work out how long it takes before the emission dies down.
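To give a sense of what the purity measurement involves, the short Python sketch below estimates a second-order correlation histogram, g²(τ), from photon arrival times recorded on the two arms of a Hanbury Brown–Twiss set-up. It is a toy illustration with simulated timestamps, not PicoQuant software or the group’s actual analysis code; a dip of g²(0) well below 1 is the signature of a high-purity single-photon source.

import numpy as np

def g2_histogram(t1, t2, bin_width=1e-9, max_delay=100e-9):
    """Histogram of delays (t2 - t1) between clicks on the two detectors,
    normalized so that uncorrelated (Poissonian) light gives g2 ~ 1."""
    delays = []
    j = 0
    for t in t1:
        # skip detector-2 clicks that are too early to pair with this click
        while j < len(t2) and t2[j] < t - max_delay:
            j += 1
        # collect all detector-2 clicks within +/- max_delay of this click
        k = j
        while k < len(t2) and t2[k] <= t + max_delay:
            delays.append(t2[k] - t)
            k += 1
    bins = np.arange(-max_delay, max_delay + bin_width, bin_width)
    counts, edges = np.histogram(delays, bins=bins)
    # expected coincidences per bin for two completely uncorrelated click streams
    total_time = max(t1[-1], t2[-1]) - min(t1[0], t2[0])
    expected = len(t1) * len(t2) * bin_width / total_time
    return 0.5 * (edges[:-1] + edges[1:]), counts / expected

# Toy usage: two independent Poissonian click streams give g2 close to 1 at all delays;
# a true single-photon source would instead show g2 near zero at zero delay.
rng = np.random.default_rng(seed=1)
t1 = np.sort(rng.uniform(0.0, 1.0, 200_000))   # arrival times in seconds
t2 = np.sort(rng.uniform(0.0, 1.0, 200_000))
tau, g2 = g2_histogram(t1, t2)
print(f"g2 near zero delay: {g2[len(g2) // 2]:.2f}")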

Did you buy the equipment off-the-shelf?

It’s more of a customized instrument with different components – lasers, microscopes, detectors and so on – connected together so we can do multiple types of measurement. I put in a request to PicoQuant, who discussed my requirements with me to work out how to meet my needs. The equipment has been very important for our studies as we can carry out high-throughput measurements over and over again. We’ve basically tailored it for our own research purposes.

So how good are your samples?

The best single-photon source that we currently work with is boron nitride, which has a single-photon purity of 98.5% at room temperature. In other words, for every 200 photons only three are classical. With transition-metal dichalcogenides, we get a purity of 98.3% at cryogenic temperatures.

What are your next steps?

There’s still lots to explore in terms of making better single-photon emitters and learning how to control them at different wavelengths. We also want to see if these materials can be used as high-quality quantum sensors. In some cases, if we have the right types of atomic defects, we get a high-quality source of single photons, which we can then entangle with the defect’s spin. The emitters can therefore monitor the local magnetic environment with better performance than is possible with classical sensing methods.

The post Shengxi Huang: how defects can boost 2D materials as single-photon emitters appeared first on Physics World.


Richard Bond and George Efstathiou share the 2025 Shaw Prize in Astronomy

The 2025 Shaw Prize in Astronomy has been awarded to Richard Bond and George Efstathiou “for their pioneering research in cosmology, in particular for their studies of fluctuations in the cosmic microwave background”. The prize citation continues, “Their predictions have been verified by an armada of ground-, balloon- and space-based instruments, leading to precise determinations of the age, geometry, and mass–energy content of the universe”.

Efstathiou is professor of astrophysics at the University of Cambridge in the UK. Bond is a professor at the Canadian Institute for Theoretical Astrophysics (CITA) and university professor at the University of Toronto in Canada. They share the $1.2m prize money equally.

The annual award is given by the Shaw Prize Foundation, which was founded in 2002 by the Hong Kong-based filmmaker, television executive and philanthropist Run Run Shaw (1907–2014). It will be presented at a ceremony in Hong Kong on 21 October. There are also Shaw Prizes for life sciences and medicine, and for mathematical sciences.

Bond studied mathematics and physics at Toronto. In 1979 he completed a PhD in theoretical physics at the California Institute of Technology (Caltech). He directed CITA from 1996 to 2006.

Efstathiou studied physics at Oxford before completing a PhD in astronomy at the UK’s Durham University in 1979. He is currently director of the Institute of Astronomy in Cambridge.

The post Richard Bond and George Efstathiou share the 2025 Shaw Prize in Astronomy appeared first on Physics World.


No laughing matter: a comic book about the climate crisis

Comic depicting a parachutist whose chute is on fire and their thought process about not using their backup chute
Blunt message Anti-nuclear thinking is mocked in World Without End by Jean-Marc Jancovici and Christophe Blain. (Published by Particular Books. Illustration © DARGAUD – Jancovici & Blain)

Comics are regarded as an artform in France, where they account for a quarter of all book sales. Nevertheless, the graphic novel World Without End: an Illustrated Guide to the Climate Crisis was a surprise French bestseller when it first came out in 2022. Taking the form of a Socratic dialogue between French climate expert Jean-Marc Jancovici and acclaimed comic artist Christophe Blain, it’s serious, scientific stuff.

Now translated into English by Edward Gauvin, the book follows the conventions of French-language comic strips or bandes dessinées. Jancovici is drawn with a small nose – denoting seriousness – while Blain’s larger nose signals humour. The first half explores energy and consumption, with the rest addressing the climate crisis and possible solutions.

Overall, this is a Trojan horse of a book: what appears to be a playful comic is packed with dense, academic content. Though marketed as a graphic novel, it reads more like illustrated notes from a series of sharp, provocative university lectures. It presents a frightening vision of the future and the humour doesn’t always land.

The book spans a vast array of disciplines – not just science and economics but geography and psychology too. In fact, there’s so much to unpack that, had I Blain’s skills, I might have reviewed it in the form of a comic strip myself. The old adage that “a picture is worth a thousand words” has never rung more true.

Absurd yet powerful visual metaphors feature throughout. We see a parachutist with a flaming main chute that represents our dependence on fossil fuels. The falling man jettisons his reserve chute – nuclear power – and tries to knit an alternative using clean energy, mid-fall. The message is blunt: nuclear may not be ideal, but it works.

World Without End is bold, arresting, provocative and at times polemical.

The book is bold, arresting, provocative and at times polemical. Charts and infographics are presented to simplify complex issues, even if the details invite scrutiny. Explanations are generally clear and concise, though the author’s claim that accidents like Chernobyl and Fukushima couldn’t happen in France smacks of hubris.

Jancovici makes plenty of attention-grabbing statements. Some are sound, such as the notion that fossil fuels spared whales from extinction because we no longer needed their oil. Others are dubious – would a 4 °C temperature rise really leave a third of humanity unable to survive outdoors?

But Jancovici is right to say that the use of fossil fuels makes logical sense. Oil can be easily transported and one barrel delivers the equivalent of five years of human labour. A character called Armor Man (a parody of Iron Man) reminds us that fossil fuels are like having 200 mechanical slaves per person, equivalent to an additional 1.5 trillion people on the planet.

Fossil fuels brought prosperity – but now threaten our survival. For Jancovici, the answer is nuclear power, which is perhaps not surprising as it produces 72% of electricity in the author’s homeland. But he cherry picks data, accepting – for example – the United Nations figure that only about 50 people died from the Chernobyl nuclear accident.

While acknowledging that many people had to move following the disaster, the author downplays the fate of those responsible for “cleaning up” the site, the long-term health effects on the wider population and the staggering economic impact – estimated at €200–500bn. He also sidesteps nuclear-waste disposal and the cost and complexity of building new plants.

While conceding that nuclear is “not the whole answer”, Jancovici dismisses hydrogen and views renewables like wind and solar as too intermittent – they require batteries to ensure electricity is supplied on demand – and diffuse. Imagine blanketing the Earth in wind turbines.

Cartoon of a doctor and patient. The patient has increased their alcohol intake but also added in some healthy orange juice
Humorous point A joke from World Without End by Jean-Marc Jancovici and Christophe Blain. (Published by Particular Books. Illustration © DARGAUD – Jancovici & Blain)

Still, his views on renewables seem increasingly out of step. They now supply nearly 30% of global electricity – 13% from wind and solar, ahead of nuclear at 9%. Renewables also attract 70% of all new investment in electricity generation and (unlike nuclear) continue to fall in price. It’s therefore disingenuous of the author to say that relying on renewables would be like returning to pre-industrial life; today’s wind turbines are far more efficient than anything back then.

Beyond his case for nuclear, Jancovici offers few firm solutions. Weirdly, his main proposal for stabilizing population growth is “educating women” and providing pensions in developing nations, to reduce reliance on large families. He also cites French journalist Sébastien Bohler, who thinks our brains are poorly equipped to deal with long-term threats.

But he says nothing about the need for more investment in nuclear fusion or for “clean” nuclear fission via, say, liquid fluoride thorium reactors (LFTRs), which generate minimal waste, won’t melt down and cannot be weaponized.

Perhaps our survival depends on delaying gratification, resisting the lure of immediate comfort, and adopting a less extravagant but sustainable world. We know what changes are needed – yet we do nothing. The climate crisis is unfolding before our eyes, but we’re paralysed by a global-scale bystander effect, each of us hoping someone else will act first. Jancovici’s call for “energy sobriety” (consuming less) seems idealistic and futile.

Still, World Without End is a remarkable and deeply thought-provoking book that deserves to be widely read. I fear that it will struggle to replicate its success beyond France, though Raymond Briggs’ When the Wind Blows – a Cold War graphic novel about nuclear annihilation – was once a British bestseller. If enough people engaged with the book, it would surely spark discussion and, one day, even lead to meaningful action.

2024 Particular Books £25.00hb 196pp

The post No laughing matter: a comic book about the climate crisis appeared first on Physics World.


The evolution of the metre: How a product of the French Revolution became a mainstay of worldwide scientific collaboration

The 20th of May is World Metrology Day, and this year it was extra special because it was also the 150th anniversary of the treaty that established the metric system as the preferred international measurement standard. Known as the Metre Convention, the treaty was signed in 1875 in Paris, France by representatives of 17 nations and established the Bureau International des Poids et Mesures (BIPM), making it one of the first truly international agreements. Though nations might come and go, the hope was that this treaty would endure “for all times and all peoples”.

To celebrate the treaty’s first century and a half, the BIPM and the United Nations Educational, Scientific and Cultural Organisation (UNESCO) held a joint symposium at the UNESCO headquarters in Paris. The event focused on the achievements of BIPM as well as the international scientific collaborations the Metre Convention enabled. It included talks from the Nobel prize-winning physicist William Phillips of the US National Institute of Standards and Technology (NIST) and the BIPM director Martin Milton, as well as panel discussions on the future of metrology featuring representatives of other national metrology institutes (NMIs) and metrology professionals from around the globe.

A long and revolutionary tradition

The history of metrology dates back to ancient times. As UNESCO’s Hu Shaofeng noted in his opening remarks, the Egyptians recognized the importance of precision measurements as long ago as the 21st century BCE.  Like other early schemes, the Egyptians’ system of measurement used parts of the human body as references, with units such as the fathom (the length of a pair of outstretched arms) and the foot. This was far from ideal since, as Phillips pointed out in his keynote address, people come in various shapes and sizes. These variations led to a profusion of units. By some estimates, pre-revolutionary France had a whopping 250,000 different measures, with differences arising not only between towns but also between professions.

The French Revolutionaries were determined to put an end to this mess. In 1795, just six years after the Revolution, the law of 18 Germinal An III (according to the new calendar of the French Republic) created a preliminary version of the world’s first metric system. The new system tied length and mass to natural standards (the metre was originally one forty-millionth of the Paris meridian, while the kilogram was the mass of a cubic decimetre of water), and it became the standard for all of France in 1799. That same year, the system also became more practical, with units becoming linked, for the first time, to physical artefacts: a platinum metre and kilogram deposited in the French National Archives.

When the Metre Convention adopted this standard internationally 80 years later, it kick-started the construction of new length and mass standards. The new International Prototype of the Metre and International Prototype of the Kilogram were manufactured in 1879 and officially adopted as replacements for the Revolutionaries’ metre and kilogram in 1889, though they continued to be calibrated against the old prototypes held in the National Archives.

A short history of the BIPM

The BIPM itself was originally conceived as a means of reconciling France and Germany after the 1870–1871 Franco–Prussian War. At first, its primary roles were to care for the kilogram and metre prototypes and to calibrate the standards of its member states. In the opening decades of the 20th century, however, it extended its activities to cover other kinds of measurements, including those related to electricity, light and radiation. Then, from the 1960s onwards, it became increasingly interested in improving the definition of length, thanks to new interferometer technology that made it possible to measure distance at a precision rivalling that of the physical metre prototype.

Photo of William Phillips on stage at the Metre Convention symposium, backed by a large slide that reads "The Revolutionary Dream: A tous les temps, a tous les peuples, For all times, for all peoples". The slide also contains two large symbolic medallions, one showing a female figure dressed in Classical garments holding out a metre ruler under the logo "A tous les temps, a tous les peuples" and another showing a winged figure measuring the Earth with an instrument.
Metre man: William Phillips giving the keynote address at the Metre Convention’s 150th anniversary symposium. (Courtesy: Isabelle Dumé)

It was around this time that the BIPM decided to replace its expanded metric system with a framework encompassing the entire field of metrology. This new framework consisted of six base units – the metre, kilogram, second, ampere, degree Kelvin (later simply the kelvin) and candela – plus a set of “derived” units (the newton, hertz, joule and watt among them) built from the base units; a seventh base unit, the mole, was added in 1971. Thus was born the International System of Units, or SI after the French initials for Système International d’unités.

The next major step – a “brilliant choice”, in Phillips’ words – came in 1983, when the BIPM decided to redefine the metre in terms of the speed of light. From then on, the Bureau decreed, the metre would officially be the length travelled by light in vacuum during a time interval of 1/299,792,458 of a second.

This decision set the stage for defining the rest of the seven base units in terms of natural fundamental constants. The most recent unit to join the club was the kilogram, which was defined in terms of the Planck constant, h, in 2019. In fact, the only base unit currently not defined in terms of a fundamental constant is the second, which is instead determined by the frequency of the transition between the two hyperfine levels of the ground state of caesium-133. The international metrology community is, however, working to remedy this, with meetings being held on the subject in Versailles this month.
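In concrete terms, the current definitions work by fixing exact numerical values for a few constants and letting the units follow from them. The relations below are a compact restatement of the standard SI definitions, not something presented at the symposium:

\[
\Delta\nu_{\mathrm{Cs}} = 9\,192\,631\,770\ \mathrm{Hz}, \qquad
c = 299\,792\,458\ \mathrm{m\,s^{-1}}, \qquad
h = 6.626\,070\,15\times 10^{-34}\ \mathrm{J\,s}\ \text{(all exact)},
\]

so that

\[
1\ \mathrm{s} = \frac{9\,192\,631\,770}{\Delta\nu_{\mathrm{Cs}}}, \qquad
1\ \mathrm{m} = \frac{c}{299\,792\,458}\times 1\ \mathrm{s}, \qquad
1\ \mathrm{kg} = \frac{h}{6.626\,070\,15\times 10^{-34}\ \mathrm{m^{2}\,s^{-1}}}.
\]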

Measurement affects every aspect of our daily lives, and as the speakers at last week’s celebrations repeatedly reminded the audience, a unified system of measurement has long acted as a means of building trust across international and disciplinary borders. The Metre Convention’s survival for 150 years is proof that peaceful collaboration can triumph, and it has allowed humankind to advance in ways that would not have been possible without such unity. A lesson indeed for today’s troubled world.

The post The evolution of the metre: How a product of the French Revolution became a mainstay of worldwide scientific collaboration appeared first on Physics World.


The Physics Chanteuse: when science hits a high note

What do pulsars, nuclear politics and hypothetical love particles have in common? They’ve all inspired songs by Lynda Williams – physicist, performer and self-styled “Physics Chanteuse”.

In this month’s Physics World Stories podcast, host Andrew Glester is in conversation with Williams, whose unique approach to science communication blends physics with cabaret and satire. You’ll be treated to a selection of her songs, including a toe-tapping tribute to Jocelyn Bell Burnell, the Northern Irish physicist who discovered pulsars.

Williams discusses her writing process, which includes a full-blooded commitment to getting the science right. She describes how her shows evolve throughout the course of a tour, how she balances life on the road with other life commitments, and how Kip Thorne once arranged for her to perform at a birthday celebration for Stephen Hawking. (Yes, really.)

Her latest show, Atomic Cabaret, dives into the existential risks of the nuclear age, marking 80 years since Hiroshima and Nagasaki. The one-woman musical kicks off in Belfast on 18 June and heads to the Edinburgh Festival in August.

If you like your physics with a side of showbiz and social activism, this episode hits all the right notes. Find out more at Lynda’s website.

The post The Physics Chanteuse: when science hits a high note appeared first on Physics World.


The quantum eraser doesn’t rewrite the past – it rewrites observers

“Welcome to this special issue of Physics World, marking the 200th anniversary of quantum mechanics. In this double-quantum edition, the letters in this text are stored using qubits. As you read, you project the letters into a fixed state, and that information gets copied into your mind as the article that you are reading. This text is actually in a superposition of many different articles, but only one of them gets copied into your memory. We hope you enjoy the one that you are reading.”

That’s how I imagine the opening of the 2125 Physics World quantum special issue, when fully functional quantum computers are commonplace, and we have even figured out how to control individual qubits on display screens. If you are lucky enough to experience reading such a magazine, you might be disappointed that you can read only one of the articles into which the text gets projected. The problem is that by reading the superposition of articles, you made them decohere, because you copied the information about each letter into your memory. Can you figure out a way to read the others too? After all, more Physics World articles is always better.

A possible solution would be to restore the coherence of the text by erasing your memory of the particular article you read. Once you no longer have information identifying which article your magazine was projected into, there is no fundamental reason for it to remain decohered into a single state. You could then reread it to enjoy a different article.

While this thought experiment may sound fantastical, the concept is closely connected to a mind-bending twist on the famous double-slit experiment, known as the delayed-choice quantum eraser. It is often claimed to exhibit a radical phenomenon: measurements made in the present altering events that occurred in the past. But is such a paradoxical suggestion real, even in the notoriously strange quantum realm?

A double twist on the double slit

In a standard double-slit experiment, photons are sent one by one through two slits to create an interference pattern on a screen, illustrating the wave-like behaviour of light. But if we add a detector that can spot which of the two slits the photon goes through, the interference disappears and we see only two distinct clumps on the screen, signifying particle-like behaviour. Crucially, gaining information about which path the photon took changes the photon’s quantum state, from the wave-like interference pattern to the particle-like clumps.

The first twist on this thought experiment is attributed to proposals from physicist John Wheeler in 1978, and a later collaboration with Wojciech Zurek in 1983. Wheeler’s idea was to delay the measurement of which slit the photon goes through. Instead of measuring the photon as it passes through the double slit, the measurement could be delayed until just before the photon hits the screen. Interestingly, the delayed detection of which slit the photon goes through still determines whether it displays wave-like or particle-like behaviour. In other words, even a detection made long after the photon has passed through the slits determines whether that photon is measured to have interfered with itself.

If that’s not strange enough, the delayed-choice quantum eraser is a further modification of this idea. First proposed by American physicists Marlan Scully and Kai Drühl in 1982 (Phys. Rev. A 25 2208), it was later experimentally implemented by Yoon-Ho Kim and collaborators using photons in 2000 (Phys. Rev. Lett. 84 1). This variation adds a second twist: if recording which slit the photon passes through causes it to decohere, then what happens if we were to erase that information? Imagine shrinking the detector to a single qubit that becomes entangled with the photon: “left” slit might correlate to the qubit being 0, “right” slit to 1. Instead of measuring whether the qubit is a 0 or 1 (revealing the path), we could measure it in a complementary way, randomising the 0s and 1s (erasing the path information).

1 Delayed detections, path revelations and complementary measurements

Detailed illustration explaining the quantum eraser effect
(Courtesy: Mayank Shreshtha)

This illustration depicts how the quantum eraser restores the wave-like behaviour of photons in a double-slit experiment, using 3D-glasses as an analogy.

The top left box shows the set-up for the standard double-slit experiment. As there are no detectors at the slits measuring which pathway a photon takes, an interference pattern emerges on the screen. In box 1, detectors are present at each slit; measuring which slit the photon passed through destroys the interference pattern. Boxes 2 and 3 show that by erasing the “which-slit” information, the interference patterns are restored. This is done by separating out the photons using the eraser, represented here by the red and blue filters of the 3D glasses. The final box 4 shows that the overall pattern with the eraser has no interference, identical to the pattern seen in box 1.

In boxes 2, 3 and 4, a detector qubit measures the “which-slit” information, with states |0> for left and |1> for right. These are points on the z-axis of the “Bloch sphere”, an abstract representation of the qubit. The eraser then measures the detector qubit in a complementary way, along the x-axis of the Bloch sphere. This destroys the “which-slit” information, but reveals the red and blue lens information used to filter the outcomes, as depicted in the image of the 3D glasses.

Strikingly, while the screen still shows particle-like clumps overall, these complementary measurements of the single-qubit detector can actually be used to extract a wave-like interference pattern. This works through a sorting process: the two possible outcomes of the complementary measurements are used to separate out the photon detections on the screen. The separated patterns then each individually show bright and dark fringes.

I like to visualize this using a pair of 3D glasses, with one blue and one red lens. Each colour lens reveals a different individual image, like the two separate interference patterns. Without the 3D glasses, you see only the overall sum of the images. In the quantum eraser experiment, this sum of the images is a fully decohered pattern, with no trace of interference. Having access to the complementary measurements of the detector is like getting access to the 3D glasses: you now get an extra tool to filter out the two separate interference patterns.
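The sorting step is easy to reproduce numerically. The following Python sketch is a toy model (idealized point slits, arbitrary phase parameters), not the analysis from the Kim experiment: it shows that conditioning on the two complementary detector outcomes produces two complementary fringe patterns that always add back up to the featureless overall distribution.

import numpy as np

# Screen coordinate and an illustrative phase of the "right" path relative to the "left"
# (the numbers are arbitrary; only the relative phase matters for the shape of the pattern).
x = np.linspace(-1.0, 1.0, 1001)
delta_phi = 20 * np.pi * x

# Joint amplitudes of the entangled state (|L>|0> + |R>|1>)/sqrt(2) at each screen position
amp_L = np.ones_like(x) / np.sqrt(2)           # left path, detector qubit in |0>
amp_R = np.exp(1j * delta_phi) / np.sqrt(2)    # right path, detector qubit in |1>

# Keep the which-slit record (measure the detector along z): incoherent sum, no fringes
p_total = np.abs(amp_L) ** 2 + np.abs(amp_R) ** 2

# Erase it (measure the detector along x) and sort the screen hits by the |+> or |-> outcome
p_plus = np.abs((amp_L + amp_R) / np.sqrt(2)) ** 2    # fringes
p_minus = np.abs((amp_L - amp_R) / np.sqrt(2)) ** 2   # complementary anti-fringes

# The two sorted patterns always sum back to the featureless overall pattern
assert np.allclose(p_plus + p_minus, p_total)
visibility = (p_plus.max() - p_plus.min()) / (p_plus.max() + p_plus.min())
print(f"fringe visibility of one sorted pattern: {visibility:.2f}")   # ~1.00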

Rewriting the past – or not?

If erasing the information at the detector lets us extract wave-like patterns, it may seem like we’ve restored wave-like behaviour to an already particle-like photon. That seems truly head-scratching. However, Jonte Hance, a quantum physicist at Newcastle University in the UK, highlights a different conclusion, focused on how the individual interference patterns add up to show the usual decohered pattern. “They all feel like they shouldn’t be able to fit together,” Hance explains. “It’s really showing that the correlations you get through entanglement have to be able to fit every possible way you could measure a system.” The results therefore reveal an intriguing aspect of quantum theory – the rich, counterintuitive structure of quantum correlations from entanglement – rather than past influences.

Even Wheeler himself did not believe the thought experiment implies backward-in-time influence, as explained by Lorenzo Catani, a researcher at the International Iberian Nanotechnology Laboratory (INL) in Portugal. Commenting on the history of the thought experiment, Catani notes that “Wheeler concluded that one must abandon a certain type of realism – namely, the idea that the past exists independently of its recording in the present. As far as I know, only a minority of researchers have interpreted the experiment as evidence for retrocausality.”

Eraser vs Bell: a battle of the bizarre

One physicist who is attempting to unpack this problem is Johannes Fankhauser at the University of Innsbruck, Austria. “I’d heard about the quantum eraser, and it had puzzled me a lot because of all these bizarre claims of backwards-in-time influence”, he explains. “I see something that sounds counterintuitive and puzzling and bizarre and then I want to understand it, and by understanding it, it gets a bit demystified.”

Fankhauser realized that the quantum eraser set-up can be translated into a very standard Bell experiment. These experiments are based on entangling a pair of qubits, the idea being to rule out local “hidden-variable” models of quantum theory. This led him to see that there is no need to explain the eraser using backwards-in-time influence, since the related Bell experiments can be understood without it, as explained in his 2017 paper (Quanta 8 44). Fankhauser then further analysed the thought experiment using the de Broglie–Bohm interpretation of quantum theory, which gives a physical model for the quantum wavefunction (as particles are guided by a “pilot” wave). Using this, he showed explicitly that the outcomes of the eraser experiment can be fully explained without requiring backwards-in-time influences.

So does that mean that the eraser doesn’t tell us anything else beyond what Bell experiments already tell us? Not quite. “It turns different knobs than the Bell experiment,” explains Fankhauser. “I would say it asks the question ‘what do measurements signify?’, and ‘when can I talk about the system having a property?’. That’s an interesting question and I would say we don’t have a full answer to this.”

In particular, the eraser demonstrates the importance that the very act of observation has on outcomes, with the detector playing the role of an observer. “You measure some of its properties, you change another property,” says Fankhauser. “So the next time you measure it, the new property was created through the observation. And I’m trying to formalize this now more concretely. I’m trying to come up with a new approach and framework to study these questions.”

Meanwhile, Catani found an intriguing contrast between Bell experiments and the eraser in his research. “The implications of Bell’s theorem are far more profound,” says Catani. In the 2023 paper (Quantum 7 1119) he co-authored, Catani considers a model for classical physics, with an extra condition: there is a restriction on what you can know about the underlying physical states. Applying this model to the quantum eraser, he finds that its results can be reproduced by such a classical theory. By contrast, the classical model cannot reproduce the statistical violations of a Bell experiment. This shows that having incomplete knowledge of the physical state is not, by itself, enough to explain the strange results of the Bell experiment, which therefore demonstrates a more powerful deviation from classical physics than the eraser. Catani also contrasts the mathematical rigour of the two cases. While Bell experiments are based on explicitly formulated assumptions, claims about backwards-in-time influence in the quantum eraser rely on a particular narrative – one that gives rise to the apparent paradox.

The eraser as a brainteaser

Physicists therefore broadly agree that the mathematics of the quantum eraser thought experiment fits well within standard quantum theory. Even so, Hance argues that formal results alone are not the entire story: “This is something we need to pick apart, not just in terms of mathematical assumptions, but also in terms of building intuitions for us to be able to actually play around with what quantumness is.” Hance has been analysing the physical implications of different assumptions in the thought experiment, with some options discussed in his 2021 preprint (arXiv:2111.09347) with collaborators on the quantum eraser paradox.

The eraser therefore provides a tool for understanding how quantum correlations match up in a way that is not described by classical physics. “It’s a great thinking aid – partly brainteaser, partly demonstration of the nature of this weirdness.”

Information, observers and quantum computers

Every quantum physicist takes something different from the quantum eraser, whether it is a spotlight on the open problems surrounding the properties of measured systems; a lesson from history in mathematical rigour; or a counterintuitive puzzle to make sense of. For a minority that deviate from standard approaches to quantum theory, it may even be some form of backwards-in-time influence.

For myself, as explained in my video on YouTube and my 2023 paper (IEEE International Conference on Quantum Computing and Engineering 10.1109/QCE57702.2023.20325) on quantum thought experiments, the most dramatic implication of the quantum eraser is explaining the role of observers in the double-slit experiment. The quantum eraser emphasizes that even a single entanglement between qubits will cause decoherence, whether or not it is measured afterwards – meaning that no mysterious macroscopic observer is required. This also explains why building a quantum computer is so challenging, as unwanted entanglement with even one particle can cause the whole computation to collapse into a random state.

The quantum eraser emphasizes that even a single entanglement between qubits will cause decoherence, whether or not it is measured afterwards – meaning that no mysterious macroscopic observer is required
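That claim can be checked with a few lines of linear algebra. The Python sketch below is a minimal example of my own (not code from the paper or video): it entangles a system qubit, prepared in a superposition, with a single “environment” qubit via a CNOT gate, then traces the environment out. The off-diagonal coherences of the system vanish, with no macroscopic observer anywhere in the calculation.

import numpy as np

plus = np.array([1.0, 1.0]) / np.sqrt(2)   # system qubit in |+>, a coherent superposition
env0 = np.array([1.0, 0.0])                # environment qubit in |0>

# Joint state before any interaction: |+> (x) |0>
psi = np.kron(plus, env0)

# A CNOT with the system as control copies the which-path information into the environment
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
psi = cnot @ psi                           # now (|00> + |11>)/sqrt(2), an entangled state

# Density matrix of the joint system, then partial trace over the environment qubit
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)   # indices: s, e, s', e'
rho_system = np.einsum('ikjk->ij', rho)

print(np.round(rho_system, 3))
# [[0.5 0. ]
#  [0.  0.5]]  -> the coherences are gone: the unmeasured system already behaves like a mixture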

Where does this leave the futuristic readers of our 200-year double-quantum special issue of Physics World? Simply erasing their memories is not enough to restore the quantum behaviour of the article. It is too late to change which article was selected. Though, following an eraser-type protocol, our futurists can do one better than those sneaky magazine writers: they can use the outcomes of complementary measurements on their memory, to sort the article into two individual smaller articles, each displaying their own quantum entanglement structure that was otherwise hidden. So even if you can’t use the quantum eraser to rewrite the past, perhaps it can rewrite what you read in the future.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

The post The quantum eraser doesn’t rewrite the past – it rewrites observers appeared first on Physics World.


Has bismuth been masquerading as a topological material?

Bismuth has puzzled scientists for nearly 20 years. Notably, the question of whether it is topological – that is, whether electrons behave differently on its surface than they do inside it – gets different answers depending on whether you ask a theorist or an experimentalist. Researchers in Japan now say they have found a way to resolve this conflict. A mechanism called surface relaxation, they report, may have masked or “blocked” bismuth’s true topological nature.

The classic way of describing topology is to compare objects that have a hole, such as a doughnut or a coffee mug, with objects that don’t, such as a muffin. Although we usually think of doughnuts as having more in common with muffins than with mugs – you can’t eat a mug – the fact that they have the same number of holes means the mug and doughnut share topological features that the muffin does not.

While no-one has ever wondered whether they can eat an electron, scientists have long been curious about whether materials conduct electricity. As it turns out, topology is one way of answering that question.

“Previously, people classified materials as metallic or insulating,” says Yuki Fuseya, a quantum solid state physicist at Kobe University. Beginning in the 2000s, however, Fuseya says scientists started focusing more on the topology of the electrons’ complex wavefunctions. This enriched our understanding of how materials behave, because wavefunctions with apparently different shapes can share important topological features.

For example, if the topology of certain wavefunctions on a material’s surface corresponds to that of apparently different wavefunctions within its bulk, the material may be insulating in its bulk, yet still able to conduct electricity on its surface. Materials with this property are known as topological insulators, and they have garnered a huge amount of interest due to the possibility of exploiting them in quantum computing, spintronics and magnetic devices.

Topological or not topological

While it’s not possible to measure the topology of wavefunctions directly, it is generally possible to detect whether a material supports certain surface states. This information can then be used to infer something about its bulk using the so-called bulk-edge state correspondence.

In bismuth, the existence of these surface states ought to indicate that the bulk material is topologically trivial. However, experiments have delivered conflicting information.

Fuseya was intrigued. “If you look at the history of solid-state physics, many physical phenomena were found firstly in bismuth,” he tells Physics World. Examples include diamagnetism, the Seebeck effect and the Shubnikov-de Haas effect, as well as phenomena related to the giant spin Hall effect and the potential for Turing patterns that Fuseya discovered himself. “That’s one of the reasons why I am so interested in bismuth,” he says.

Fuseya’s interest attracted colleagues with different specialisms. Using density functional theory, Rikako Yaguchi of the University of Electro-Communications in Tokyo calculated that layers of bismuth’s crystal lattice expand, or relax, by 3-6% towards the surface. According to Fuseya, this might not have seemed noteworthy. However, since the team was already looking at bismuth’s topological properties, another colleague, Kazuki Koie, went ahead and calculated how this lattice expansion changed the material’s surface wavefunction.

These calculations showed that the expansion is, in fact, significant. This is because bismuth is close to the topological transition point, where a change in parameters can flip the shape of the wavefunction and give topological properties to a material that was once topologically trivial. Consequently, the reason it is not possible to observe surface states indicating that bulk bismuth is topologically trivial is that the material is effectively different – and topologically non-trivial – on its surface.

Topological blocking

Although “very surprised” at first, Fuseya says that after examining the physics in more detail, they found the result “quite reasonable”. They are now looking for evidence of similar “topological blocking” in other materials near the transition point, such as lead telluride and tin telluride.

“It is remarkable that there are still big puzzles when trying to match data to the theoretical predictions,” says Titus Mangham Neupert, a theoretical physicist at the University of Zurich, Switzerland, who was not directly involved in the research. Since “so many compounds that made the headlines in topological physics” contain bismuth, Neupert says it will be interesting to re-evaluate existing experiments and conceive new ones. “In particular, the implication for higher-order topology could be tested,” he says.

Fuseya’s team is already studying how lattice relaxation might affect hinges where two surfaces come together. In doing so, they hope to understand why angle resolved photoemission spectroscopy (ARPES), which probes surfaces, yields results that contradict those from scanning tunnelling microscopy experiments, which probe hinges. “Maybe we can find a way to explain every experiment consistently,” Fuseya says. The insights they gain, he adds, might also be useful for topological engineering: by bending a material, scientists could alter its lattice constants, and thereby tailor its topological properties.

This aspect also interests Zeila Zanolli and Matthieu Verstraete of Utrecht University in the Netherlands. Though not involved in the current study, they had previously shown that free-standing two-dimensional bismuth (bismuthene) can take on several geometric structures in-plane – not all of which are topological – depending on the material’s strain, bonding coordination and directionality. The new work, they say, “opens the way to (computational) design of topological materials, playing with symmetries, strain and the substrate interface”.

The research is published in Physical Review B.

The post Has bismuth been masquerading as a topological material? appeared first on Physics World.


Proton arc therapy eliminates hard-to-treat cancer with minimal side effects

Head-and-neck cancers are difficult to treat with radiation therapy because they are often located close to organs that are vital for patients to maintain a high quality-of-life. Radiation therapy can also alter a person’s shape, through weight loss or swelling, making it essential to monitor such changes throughout the treatment to ensure effective tumour targeting.

Researchers from Corewell Health William Beaumont University Hospital have now used a new proton therapy technique called step-and-shoot proton arc therapy (a spot-scanning proton arc method) to treat head-and-neck cancer in a human patient – the first person in the US to receive this highly accurate treatment.

“We envisioned that this technology could significantly improve the quality of treatment plans for patients and the treatment efficiency compared with the current state-of-the-art technique of intensity-modulated proton therapy (IMPT),” states senior author Xuanfeng Ding.

Progression towards dynamic proton arc therapy

“The first paper on spot-scanning proton arc therapy was published in 2016 and the first prototype for it was built in 2018,” says Ding. However, step-and-shoot proton arc therapy is an interim solution towards a more advanced technique known as dynamic proton arc therapy – which delivered its first pre-clinical treatment in 2024. Dynamic proton arc therapy is still undergoing development and regulatory approval clearance, so researchers have chosen to use step-and-shoot proton arc therapy clinically in the meantime.

Other proton therapies are more manual in nature and require a lot of monitoring, but the step-and-shoot technology delivers radiation directly to a tumour in a more continuous and automated fashion, with less lag time between radiation dosages. “Step-and-shoot proton arc therapy uses more beam angles per plan compared to the current clinical practice using IMPT and optimizes the spot and energy layers sparsity level,” explains Ding.

The extra beam angles provide greater freedom to optimize the treatment plan, delivering better dose conformity, robustness and linear energy transfer (LET, the energy deposited by ionizing radiation per unit path length) through a more automated approach. During treatment delivery, the gantry rotates to each beam angle and stops to deliver the treatment irradiation.

In the dynamic proton arc technique that is also being developed, the gantry rotates continuously while irradiating the proton spot or switching energy layer. The step-and-shoot proton arc therapy therefore acts as an interim stage that is allowing more clinical data to be acquired to help dynamic proton arc therapy become clinically approved. The pinpointing ability of these proton therapies enables tumours to be targeted more precisely without damaging surrounding healthy tissue and organs.

The first clinical treatment

The team trialled the new technique on a patient with adenoid cystic carcinoma in her salivary gland – a rare and highly invasive cancer that’s difficult to treat as it targets the nerves in the body. This tendency to target nerves also means that fighting such tumours typically causes a lot of side effects. Using the new step-and-shoot proton arc therapy, however, the patient experienced minimal side effects and no radiation toxicity to other areas of her body (including the brain) after 33 treatments. Since finishing her treatment in August 2024, she continues to be cancer-free.

Tiffiney Beard and Rohan Deraniyagala
First US patient Tiffiney Beard, who underwent step-and-shoot proton arc therapy to treat her rare head-and-neck cancer, at a follow-up appointment with Rohan Deraniyagala. (Courtesy: Emily Rose Bennett, Corewell Health)

“Radiation to the head-and-neck typically results in dryness of the mouth, pain and difficulty swallowing, abnormal taste, fatigue and difficulty with concentration,” says Rohan Deraniyagala, a Corewell Health radiation oncologist involved with this research. “Our patient had minor skin irritation but did not have any issues with eating or performing at her job during treatment and for the last year since she was diagnosed.”

Describing the therapeutic process, Ding tells Physics World that “we developed an in-house planning optimization algorithm to select spot and energy per beam angle so the treatment irradiation time could be reduced to four minutes. However, because the gantry still needs to stop at each beam angle, the total treatment time is about 16 minutes per fraction.”

On monitoring the progression of the tumour over time and developing treatment plans, Ding confirms that the team “implemented a machine-learning-based synthetic CT platform which allows us to track the daily dosage of radiation using cone-beam computed tomography (CBCT) so that we can schedule an adaptive treatment plan for the patient.”

On the back of this research, Ding says that the next step is to help further develop the dynamic proton arc technique – known as DynamicARC – in collaboration with industry partner IBA.

The research was published in the International Journal of Particle Therapy.

The post Proton arc therapy eliminates hard-to-treat cancer with minimal side effects appeared first on Physics World.


Superconducting microwires detect high-energy particles

Arrays of superconducting wires have been used to detect beams of high-energy charged particles. Much thinner wires are already used to detect single photons, but this latest incarnation uses thicker wires that can absorb the large amounts of energy carried by fast-moving protons, electrons, and pions. The new detector was created by an international team led by Cristián Peña at Fermilab.

In a single-photon detector, an array of superconducting nanowires is operated below the critical temperature for superconductivity – with current flowing freely through the nanowires. When a nanowire absorbs a photon it creates a hotspot that temporarily destroys superconductivity and boosts the electrical resistance. This creates a voltage spike across the nanowire, allowing the location and time of the photon detection to be determined very precisely.

“These detectors have emerged as the most advanced time-resolved single-photon sensors in a wide range of wavelengths,” Peña explains. “Applications of these photon detectors include quantum networking and computing, space-to-ground communication, exoplanet exploration and fundamental probes for new physics such as dark matter.”

A similar hotspot is created when a superconducting wire is impacted by a high-energy charged particle. In principle, this could be used to create particle detectors that could be used in experiments at labs such as Fermilab and CERN.

New detection paradigm

“As with photons, the ability to detect charged particles with high spatial and temporal precision, beyond what traditional sensing technologies can offer, has the potential to propel the field of high-energy physics towards a new detection paradigm,” Peña explains.

However, the nanowire single-photon detector design is not appropriate for detecting charged particles. Unlike photons, charged particles do not deposit all of their energy at a single point in a wire. Instead, the energy can be spread out along a track, which becomes longer as particle energy increases. Also, at the relativistic energies reached at particle accelerators, the nanowires used in single-photon detectors are too thin to collect the energy required to trigger a particle detection.

To create their new particle detector, Peña’s team used the latest advances in superconductor fabrication. On a thin film of tungsten silicide, they deposited an 8×8, 2 mm² array of micron-thick superconducting wires.

Tested at Fermilab

To test out their superconducting microwire single-photon detector (SMSPD), they used it to detect high-energy particle beams generated at the Fermilab Test Beam Facility. These included a 12 GeV beam of protons and 8 GeV beams of electrons and pions.

“Our study shows for the first time that SMSPDs are sensitive to protons, electrons, and pions,” Peña explains. “In fact, they behave very similarly when exposed to different particle types. We measured almost the same detection efficiency, as well as spatial and temporal properties.”

The team now aims to develop a deeper understanding of the physics that unfolds as a charged particle passes through a superconducting microwire. “That will allow us to begin optimizing and engineering the properties of the superconducting material and sensor geometry to boost the detection efficiency, the position and timing precision, as well as optimize for the operating temperature of the sensor,” Peña says. With further improvements, SMSPDs could become an integral part of high-energy physics experiments – perhaps paving the way for a deeper understanding of fundamental physics.

The research is described in the Journal of Instrumentation.

The post Superconducting microwires detect high-energy particles appeared first on Physics World.


What is meant by neuromorphic computing – a webinar debate

AI circuit board
(Courtesy: Shutterstock/metamorworks)

There are two main approaches to what we consider neuromorphic computing. The first emulates biological neural processing systems through the physics of computational substrates that have similar properties and constraints to real neural systems, with the potential for denser structures and advantages in energy cost. The other simulates neural processing systems on scalable architectures that allow the simulation of large neural networks, with a higher degree of abstraction, arbitrary precision, high resolution, and no constraints imposed by the physics of the computing medium.

Both may be required to advance the field, but is either approach ‘better’? Hosted by Neuromorphic Computing and Engineering, this webinar will see teams of leading experts in the field of neuromorphic computing argue the case for either approach, overseen by an impartial moderator.

Speakers image. Left to right: Elisa Donati, Jennifer Hasler, Catherine (Katie) Schuman, Emre Neftci, Giulia D’Angelo
Left to right: Elisa Donati, Jennifer Hasler, Catherine (Katie) Schuman, Emre Neftci, Giulia D’Angelo

Team emulation:
Elisa Donati. Elisa’s research interests aim at designing neuromorphic circuits that are ideally suited for interfacing with the nervous system and show how they can be used to build closed-loop hybrid artificial and biological neural processing systems.  She is also involved in the development of neuromorphic hardware and software systems able to mimic the functions of biological brains to apply for medical and robotics applications.

Jennifer Hasler received her BSE and MS degrees in electrical engineering from Arizona State University in August 1991. She received her PhD in computation and neural systems from California Institute of Technology in February 1997. Jennifer is a professor at the Georgia Institute of Technology in the School of Electrical and Computer Engineering; Atlanta is the coldest climate in which she has lived. Jennifer founded the Integrated Computational Electronics (ICE) laboratory at Georgia Tech, a laboratory affiliated with the Laboratories for Neural Engineering. She is a member of Tau Beta Pi, Eta Kappa Nu, and the IEEE.

Team simulation:
Catherine (Katie) Schuman is an assistant professor in the Department of Electrical Engineering and Computer Science at the University of Tennessee (UT). She received her PhD in computer science from UT in 2015, where she completed her dissertation on the use of evolutionary algorithms to train spiking neural networks for neuromorphic systems. Katie previously served as a research scientist at Oak Ridge National Laboratory, where her research focused on algorithms and applications of neuromorphic systems. Katie co-leads the TENNLab Neuromorphic Computing Research Group at UT. She has authored more than 70 publications and holds seven patents in the field of neuromorphic computing. She received the Department of Energy Early Career Award in 2019. Katie is a senior member of the Association for Computing Machinery and the IEEE.

Emre Neftci received his MSc degree in physics from EPFL in Switzerland, and his PhD in 2010 at the Institute of Neuroinformatics at the University of Zurich and ETH Zurich. He is currently an institute director at the Jülich Research Centre and professor at RWTH Aachen. His current research explores the bridges between neuroscience and machine learning, with a focus on the theoretical and computational modelling of learning algorithms that are best suited to neuromorphic hardware and non-von Neumann computing architectures.

Discussion chair:
Giulia D’Angelo is currently a Marie Skłodowska-Curie postdoctoral fellow at the Czech Technical University in Prague, where she focuses on neuromorphic algorithms for active vision. She obtained a bachelor’s degree in biomedical engineering from the University of Genoa and a master’s degree in neuroengineering with honours. During her master’s, she developed a neuromorphic system for the egocentric representation of peripersonal visual space at King’s College London. She earned her PhD in neuromorphic algorithms at the University of Manchester, receiving the President’s Doctoral Scholar Award, in collaboration with the Event-Driven Perception for Robotics Laboratory at the Italian Institute of Technology. There, she proposed a biologically plausible model for event-driven, saliency-based visual attention. She was recently awarded the Marie Skłodowska-Curie Fellowship to explore sensorimotor contingency theories in the context of neuromorphic active vision algorithms.

About this journal
Neuromorphic Computing and Engineering journal cover

Neuromorphic Computing and Engineering is a multidisciplinary, open access journal publishing cutting-edge research on the design, development and application of artificial neural networks and systems from both a hardware and computational perspective.

Editor-in-chief: Giacomo Indiveri, University of Zurich, Switzerland

 

The post What is meant by neuromorphic computing – a webinar debate appeared first on Physics World.

  •  

A Martian aurora, how the universe fades away, Heisenberg on holiday, physics of fake coins

In this episode of the Physics World Weekly podcast I look at what’s new in the world of physics with the help of my colleagues Margaret Harris and Matin Durrani.

We begin on Mars, where NASA’s Perseverance Rover has made the first observation of an aurora from the surface of the Red Planet. Next, we look deep into the future of the universe and ponder the physics that will govern how the last stars will fade away.

Then, we run time in reverse and go back to the German island of Helgoland, where in 1925 Werner Heisenberg laid the foundations of modern quantum mechanics. The island will soon host an event celebrating the centenary and Physics World will be there.

Finally, we explore how neutrons are being used to differentiate between real and fake antique coins and chat about the Physics World Quantum Briefing 2025.

The post A Martian aurora, how the universe fades away, Heisenberg on holiday, physics of fake coins appeared first on Physics World.

  •  

Ultrasound-activated structures clear biofilms from medical implants

When implanted medical devices like urinary stents and catheters get clogged with biofilms, the usual solution is to take them out and replace them with new ones. Now, however, researchers at the University of Bern and ETH Zurich, Switzerland have developed an alternative. By incorporating ultrasound-activated moving structures into their prototype “stent-on-a-chip” device, they showed it is possible to remove biofilms without removing the device itself. If translated into clinical practice, the technology could increase the safe lifespan of implants, saving money and avoiding operations that are uncomfortable and sometimes hazardous for patients.

Biofilms are communities of bacterial cells that adhere to natural surfaces in the body as well as artificial structures such as catheters, stents and other implants. Because they are encapsulated by a protective, self-produced extracellular matrix made from polymeric substances, they are mechanically robust and resistant to standard antibacterial measures. If not removed, they can cause infections, obstructions and other complications.

Intense, steady flows push away impurities

The new technology, which was co-developed by Cornel Dillinger, Pedro Amado and other members of Francesco Clavica and Daniel Ahmed’s research teams, takes advantage of recent advances in the fields of robotics and microfluidics. Its main feature is a coating made from microscopic hair-like structures known as cilia. Under the influence of an acoustic field, which is applied externally via a piezoelectric transducer, these cilia begin to move. This movement produces intense, steady fluid flows with velocities of up to 10 mm/s – enough to break apart encrusted deposits (made from calcium carbonate, for example) and flush away biofilms from the inner and outer surfaces of implanted urological devices.

Microscope image showing square and diamond shapes in various shades of grey
All fouled up: Typical examples of crystals known as encrustations that develop on the surfaces of urinary stents and catheters. (Courtesy: Pedro Amado and Shaokai Zheng)

“This is a major advance compared to existing stents and catheters, which require regular replacements to avoid obstruction and infections,” Clavica says.

The technology is also an improvement on previous efforts to clear implants by mechanical means, Ahmed adds. “Our polymeric cilia in fact amplify the effects of ultrasound by allowing for an effect known as acoustic streaming at frequencies of 20 to 100 kHz,” he explains. “This frequency is lower than that possible with previous microresonator devices developed to work in a similar way that had to operate in the MHz-frequency range.”

The lower frequency achieves the desired therapeutic effects while prioritizing patient safety and minimizing the risk of tissue damage, he adds.

Wider applications

In creating their technology, the researchers were inspired by biological cilia, which are a natural feature of physiological systems such as the reproductive and respiratory tracts and the central nervous system. Future versions, they say, could apply the ultrasound probe directly to a patient’s skin, much as handheld probes of ultrasound scanners are currently used for imaging. “This technology has potential applications beyond urology, including fields like visceral surgery and veterinary medicine, where keeping implanted medical devices clean is also essential,” Clavica says.

The researchers now plan to test new coatings that would reduce contact reactions (such as inflammation) in the body. They will also explore ways of improving the device’s responsiveness to ultrasound – for example by depositing thin metal layers. “These modifications could not only improve acoustic streaming performance but could also provide additional antibacterial benefits,” Clavica tells Physics World.

In the longer term, the team hope to translate their technology into clinical applications. Initial tests that used a custom-built ultrasonic probe coupled to artificial tissue have already demonstrated promising results in generating cilia-induced acoustic streaming, Clavica notes. “In vivo animal studies will then be critical to validate safety and efficacy prior to clinical adoption,” he says.

The present study is detailed in PNAS.

The post Ultrasound-activated structures clear biofilms from medical implants appeared first on Physics World.

  •  

Former IOP president Cyril Hilsum celebrates 100th birthday

Cyril Hilsum, a former president of the Institute of Physics (IOP), celebrated his 100th birthday last week at a special event held at the Royal Society of Chemistry.

Born on 17 May 1925, Hilsum completed a degree in physics at University College London in 1945. During his career he worked at the Services Electronics Research Laboratory and the Royal Radar Establishment and in 1983 was appointed chief scientist of GEC Hirst Research Centre, where he later became research director before retiring aged 70.

Hilsum helped develop commercial applications for the semiconductor gallium arsenide and is responsible for creating the UK’s first semiconductor laser as well as developments that led to modern liquid crystal display technologies.

Between 1988 and 1990 he was president of the IOP, which publishes Physics World, and in 1990 was appointed a Commander of the Order of the British Empire (CBE) for “services to the electrical and electronics industry”.

Hilsum was honoured with many prizes during his career, including IOP awards such as the Max Born Prize in 1987, the Faraday Medal in 1988 and the Richard Glazebrook Medal and Prize in 1998. In 2007 he was awarded the Royal Society’s Royal Medal “for his many outstanding contributions and for continuing to use his prodigious talents on behalf of industry, government and academe to this day”.

Cyril Hilsum at an event to mark his 100th birthday
Looking back: Hilsum examines photographs that form an exhibition charting his life. (Courtesy: Lindey Hilsum)

Despite now being a centenarian, Hilsum still works part-time as chief science officer for Infi-tex Ltd, which produces force sensors for use in textiles.

“My birthday event was an amazing opportunity for me to greet old colleagues and friends,” Hilsum told Physics World. “Many had not seen each other since they had worked together in the distant past. It gave me a rare opportunity to acknowledge the immense contributions they had made to my career.”

Hilsum says that while the IOP gives much support to applied physics, there is still a great need for physicists “to give critical contributions to the lives of society as a whole”.

“As scientists, we may welcome progress in the subject, but all can get pleasure in seeing the results in their home, on their iPhone, or especially in their hospital!” he adds.

The post Former IOP president Cyril Hilsum celebrates 100th birthday appeared first on Physics World.

  •  

Bacteria-killing paint could dramatically improve hospital hygiene

Antimicrobial efficacy of chlorhexidine epoxy resin
Antimicrobial efficacy SEM images of steel surfaces inoculated with bacteria show a large bacterial concentration on surfaces painted with control epoxy resin (left) and little to no bacteria on those painted with chlorhexidine epoxy resin. (Courtesy: University of Nottingham)

Scientists have created a novel antimicrobial coating that, when mixed with paint, can be applied to a range of surfaces to destroy bacteria and viruses – including particularly persistent and difficult to kill strains like MRSA, flu virus and SARS-CoV-2. The development potentially paves the way for substantial improvements in scientific, commercial and clinical hygiene.

The University of Nottingham-led team made the material by combining chlorhexidine digluconate (CHX) – a disinfectant commonly used by dentists to treat mouth infections and by clinicians for cleaning before surgery – with everyday paint-on epoxy resin. Using this material, the team worked with staff at Birmingham-based specialist coating company Indestructible Paint to create a prototype antimicrobial paint. They found that, when dried, the coating can kill a wide range of pathogens.

The findings of the study, which was funded by the Royal Academy of Engineering Industrial Fellowship Scheme, were published in Scientific Reports.

Persistent antimicrobial protection

As part of the project, the researchers painted the antimicrobial coating onto a surface and used a range of scientific techniques to analyse the distribution of the biocide in the paint, to confirm that it remained uniformly distributed at a molecular level.

According to project leader Felicity de Cogan, the new paint can be used to provide antimicrobial protection on a wide array of plastic and hard non-porous surfaces. Crucially, it could be effective in a range of clinical environments, where surfaces like hospital beds and toilet seats can act as a breeding ground for bacteria for extended periods of time – even after the introduction of stringent cleaning regimes.

The team, based at the University’s School of Pharmacy, is also investigating the material’s use in the transport and aerospace industries, especially on frequently touched surfaces in public spaces such as aeroplane seats and tray tables.

“The antimicrobial in the paint is chlorhexidine – a biocide commonly used in products like mouthwash. Once it is added, the paint works in exactly the same way as all other paint and the addition of the antimicrobial doesn’t affect its application or durability on the surface,” says de Cogan.

Madeline Berrow from the University of Nottingham
In the lab Co-first author Madeline Berrow, who performed the laboratory work for the study. (Courtesy: University of Nottingham)

The researchers also note that adding CHX to the epoxy resin did not affect its optical transparency.

According to de Cogan, the novel concoction has a range of potential scientific, clinical and commercial applications.

“We have shown that it is highly effective against a range of different pathogens like E. coli and MRSA. We have also shown that it is effective against bacteria even when they are already resistant to antibiotics and biocides,” she says. “This means the technology could be a useful tool to circumvent the global problem of antimicrobial resistance.”

In de Cogan’s view, there are also a number of major advantages to using the new coating to tackle bacterial infection – especially when compared to existing approaches – further boosting the prospects of future applications.

The key advantage of the technology is that the paint is “self-cleaning” – meaning that it would no longer be necessary to carry out the arduous task of repeatedly cleaning a surface to remove harmful microbes. Instead, after a single application, the simple presence of the paint on the surface would actively and continuously kill bacteria and viruses whenever they come into contact with it.

“This means that you can be sure a surface won’t pass on infections when you touch it,” says de Cogan.

“We are looking at more extensive testing in harsher environments and long-term durability testing over months and years. This work is ongoing and we will be following up with another publication shortly,” she adds.

The post Bacteria-killing paint could dramatically improve hospital hygiene appeared first on Physics World.

  •  

Why I stopped submitting my work to for-profit publishers

Peer review is a cornerstone of academic publishing. It is how we ensure that published science is valid. Peer review, by which researchers judge the quality of papers submitted to journals, stops pseudoscience from being peddled as equivalent to rigorous research. At the same time, the peer-review system is under considerable strain as the number of journal articles published each year increases, jumping from 1.9 million in 2016 to 2.8 million in 2022, according to Scopus and Web of Science.

All these articles require experienced peer reviewers, with papers typically taking months to go through peer review. This cannot be blamed solely on the time taken to post manuscripts and reviews back and forth between editors and reviewers; rather, it is a result of high workloads and, fundamentally, how busy everyone is. Given that peer reviewers need to be experts in their field, the pool of potential reviewers is inherently limited. A bottleneck is emerging as the number of papers grows more quickly than the number of researchers in academia.

Scientific publishers have long been central to managing the process of peer review. For anyone outside academia, the concept of peer review may seem illogical, given that researchers spend their time on it without much acknowledgement. While initiatives are in place to change this, such as outstanding-reviewer awards and the Web of Science recording reviewer data, there is no promise that such recognition will be considered when researchers look for permanent positions or apply for promotion.

The impact of open access

Why, then, do we agree to review? As an active researcher myself in quantum physics, I peer-reviewed more than 40 papers last year and I’ve always viewed it as a duty. It’s a necessary time-sink to make our academic system function, to ensure that published research is valid and to challenge questionable claims. However, like anything people do out of a sense of duty, inevitably there are those who will seek to exploit it for profit.

Many journals today are open access, in which fees, known as article-processing charges, are levied to make the published work freely available online. It makes sense that costs need to be imposed – staff working at publishing companies need paying; articles need editing and typesetting; servers need to be maintained and web-hosting fees have to be paid. Recently, publishers have invested heavily in digital technology and developed new ways to disseminate research to a wider audience.

Open access, however, has encouraged some publishers to boost revenues by simply publishing as many papers as possible. At the same time, there has been an increase in retractions, especially of fabricated or manipulated manuscripts sold by “paper mills”. The rise of retractions isn’t directly linked to the emergence of open access, but it’s not a good sign, especially when the academic publishing industry reports profit margins of roughly 40% – higher than many other industries. Elsevier, for instance, publishes nearly 3000 journals and in 2023 its parent company, Relx, recorded a profit of £1.79bn. This is all money that was either paid in open-access fees or by libraries (or private users) for journal subscriptions but ends up going to shareholders rather than science.

It’s important to add that not all academic publishers are for-profit. Some, like the American Physical Society (APS), IOP Publishing, Optica, AIP Publishing and the American Association for the Advancement of Science – as well as university presses – are wings of academic societies and universities. Any profit they make is reinvested into research, education or the academic community. Indeed, IOP Publishing, AIP Publishing and the APS have formed a new “purpose-led publishing” coalition, in which the three publishers confirm that they will continue to reinvest the funds generated from publishing back into research and “never” have shareholders that result in putting “profit above purpose”.

But many of the largest publishers – the likes of Springer Nature, Elsevier, Taylor and Francis, MDPI and Wiley – are for-profit companies and are making massive sums for their shareholders. Should we just accept that this is how the system is? If not, what can we do about it and what impact can we as individuals have on a multi-billion-dollar industry? I have decided that I will no longer review for, nor submit my articles (when corresponding author) to, any for-profit publishers.

I’m lucky in my field that I have many good alternatives such as the arXiv overlay journal Quantum, IOP Publishing’s Quantum Science and Technology, APS’s Physical Review X Quantum and Optica Quantum. If your field doesn’t, then why not push for them to be created? We may not be able to dismantle the entire for-profit publishing industry, but we can stop contributing to it (especially those who have a permanent job in academia and are not as tied down by the need to publish in high impact factor journals). Such actions may seem small, but together can have an effect and push to make academia the environment we want to be contributing to. It may sound radical to take change into your own hands, but it’s worth a try. You never know, but it could help more money make its way back into science.

The post Why I stopped submitting my work to for-profit publishers appeared first on Physics World.

  •  

Visual assistance system helps blind people navigate

Structure and workflow of a wearable visual assistance system
Visual assistance system The wearable system uses intuitive multimodal feedback to assist visually impaired people with daily life tasks. (Courtesy: J Tang et al. Nature Machine Intelligence 10.1038/s42256-025-01018-6, 2025, Springer Nature)

Researchers from four universities in Shanghai, China, are developing a practical visual assistance system to help blind and partially sighted people navigate. The prototype system combines lightweight camera headgear, rapid-response AI-facilitated software and artificial “skins” worn on the wrist and fingers that provide near-distance alerts through haptic feedback. Functionality testing suggests that integrating visual, audio and haptic senses can create a wearable navigation system that overcomes the adoptability and usability concerns of current designs.

Worldwide, 43 million people are blind, according to 2021 estimates by the International Agency for the Prevention of Blindness. Millions more are so severely visually impaired that they require the use of a cane to navigate.

Visual assistance systems offer huge potential as navigation tools, but current designs have many drawbacks and challenges for potential users. These include limited functionality with respect to the size and weight of headgear, battery life and charging issues, slow real-time processing speeds, audio command overload, high system latency that can create safety concerns, and extensive and sometimes complex learning requirements.

Innovations in miniaturized computer hardware, battery charge longevity, AI-trained software to decrease latency in auditory commands, and the addition of lightweight wearable sensory augmentation material providing near-real-time haptic feedback are expected to make visual navigation assistance viable.

The team’s prototype visual assistance system, described in Nature Machine Intelligence, incorporates an RGB-D (red, green, blue, depth) camera mounted on a 3D-printed glasses frame, ultrathin artificial skins, a commercial lithium-ion battery, a wireless bone-conducting earphone and a virtual reality training platform interfaced via triboelectric smart insoles. The camera is connected to a microcontroller via USB, enabling all computations to be performed locally without the need for a remote server.

When a user sets a target using a voice command, AI algorithms process the RGB-D data to estimate the target’s orientation and determine an obstacle-free direction in real time. As the user begins to walk to the target, bone conduction earphones deliver spatialized cues to guide them, and the system updates the 3D scene in real time.

The system’s real-time visual recognition incorporates changes in distance and perspective, and can compensate for low ambient light and motion blur. To provide robust obstacle avoidance, it combines a global threshold method with a ground interval approach to accurately detect overhead hanging, ground-level and sunken obstacles, as well as sloping or irregular ground surfaces.
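
As a rough illustration of this kind of depth-thresholding logic (and not the authors’ published algorithm), the Python sketch below picks an obstacle-free heading from a single depth frame. The field of view, clearance threshold and number of angular sectors are arbitrary assumptions.

```python
import numpy as np

def obstacle_free_heading(depth, fov_deg=90.0, min_clearance=1.5, n_sectors=9):
    """Pick the most open heading (degrees, 0 = straight ahead) from a depth map.

    depth: 2D array of distances in metres (one RGB-D frame).
    A sector is treated as blocked if any of its columns contains a pixel
    closer than `min_clearance`. This is a toy threshold rule, not the
    published method.
    """
    # Nearest obstacle in each image column (ignore invalid zero-depth pixels)
    col_min = np.where(depth > 0, depth, np.inf).min(axis=0)
    sectors = np.array_split(col_min, n_sectors)
    clearances = np.array([s.min() for s in sectors])
    open_sectors = clearances >= min_clearance
    if not open_sectors.any():
        return None  # no safe heading: tell the user to stop
    # Prefer the open sector closest to straight ahead
    centre = (n_sectors - 1) / 2
    best = min(np.flatnonzero(open_sectors), key=lambda i: abs(i - centre))
    return (best - centre) / n_sectors * fov_deg

# Example: synthetic 60x80 depth frame with a box-shaped obstacle on the left
frame = np.full((60, 80), 4.0)
frame[20:60, :30] = 0.8
print(obstacle_free_heading(frame))   # 0.0, i.e. straight ahead is clear
```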

First author Jian Tang of Shanghai Jiao Tong University and colleagues tested three audio feedback approaches: spatialized cues, 3D sounds and verbal instructions. They determined that spatialized cues are the quickest to convey and understand, and that they provide precise direction perception.

Real-world testing A visually impaired person navigates through a cluttered conference room. (Courtesy: Tang et al. Nature Machine Intelligence)

To complement the audio feedback, the researchers developed stretchable artificial skin – an integrated sensory-motor device that provides near-distance alerting. The core component is a compact time-of-flight sensor that vibrates to stimulate the skin when the distance to an obstacle or object is smaller than a predefined threshold. The actuator is designed as a slim, lightweight polyethylene terephthalate cantilever. A gap between the driving circuit and the skin promotes air circulation to improve skin comfort, breathability and long-term wearability, as well as facilitating actuator vibration.

Users wear the sensor on the back of an index or middle finger, while the actuator and driving circuit are worn on the wrist. When the artificial skin detects a lateral obstacle, it provides haptic feedback in just 18 ms.

The researchers tested the trained system in virtual and real-world environments, with both humanoid robots and 20 visually impaired individuals who had no prior experience of using visual assistance systems. Testing scenarios included walking to a target while avoiding a variety of obstacles and navigating through a maze. Participants’ navigation speed increased with training and proved comparable to walking with a cane. Users were also able to turn more smoothly and were more efficient at pathfinding when using the navigation system than when using a cane.

“The proficient completion of tasks mirroring real-world challenges underscores the system’s effectiveness in meeting real-life challenges,” the researchers write. “Overall, the system stands as a promising research prototype, setting the stage for the future advancement of wearable visual assistance.”

The post Visual assistance system helps blind people navigate appeared first on Physics World.

  •  

Universe may end much sooner than predicted, say theorists

The universe’s maximum lifespan may be considerably shorter than was previously thought, but don’t worry: there’s still plenty of time to finish streaming your favourite TV series.

According to new calculations by black hole expert Heino Falcke, quantum physicist Michael Wondrak, and mathematician Walter van Suijlekom of Radboud University in the Netherlands, the most persistent stellar objects in the universe – white dwarf stars – will decay away to nothingness in around 10⁷⁸ years. This, Falcke admits, is “a very long time”, but it’s a far cry from previous predictions, which suggested that white dwarfs could persist for at least 10¹¹⁰⁰ years. “The ultimate end of the universe comes much sooner than expected,” he says.

Writing in the Journal of Cosmology and Astroparticle Physics, Falcke and colleagues explain that the discrepancy stems from different assumptions about how white dwarfs decay. Previous calculations of their lifetime assumed that, in the absence of proton decay (which has never been observed experimentally), their main decay process would be something called pyconuclear fusion. This form of fusion occurs when nuclei in a crystalline lattice essentially vibrate their way into becoming fused with their nearest neighbours.

If that sounds a little unlikely, that’s because it is. However, in the dense, cold cores of white dwarf stars, and over stupendously long time periods, pyconuclear fusion happens often enough to gradually (very, very gradually) turn the white dwarf’s carbon into nickel, which then transmutes into iron by emitting a positron. The resulting iron-cored stars are known as black dwarfs, and some theories predict that they will eventually (very, very eventually) collapse into black holes. Depending on how massive they were to start with, the whole process takes between 10¹¹⁰⁰ and 10³²⁰⁰⁰ years.

An alternative mechanism

Those estimates, however, do not take into account an alternative decay mechanism known as Hawking radiation. First proposed in the early 1970s by Stephen Hawking and Jacob Bekenstein, Hawking radiation arises from fluctuations in the vacuum of spacetime. These fluctuations allow particle-antiparticle pairs to pop into existence by essentially “borrowing” energy from the vacuum for brief periods before the pairs recombine and annihilate.

If this pair production happens in the vicinity of a black hole, one particle in the pair may stray over the black hole’s event horizon before it can recombine. This leaves its partner free to carry away some of the “borrowed” energy as Hawking radiation. After an exceptionally long time – but, crucially, not as long as the time required to disappear a white dwarf via pyconuclear fusion – Hawking radiation will therefore cause black holes to dissipate.

The fate of life, the universe and everything?

But what about objects other than black holes? Well, in a previous work published in 2023, Falcke, Wondrak and van Suijlekom showed that a similar process can occur for any object that curves spacetime with its gravitational field, not just objects that have an event horizon. This means that white dwarfs, neutron stars, the Moon and even human beings can, in principle, evaporate away into nothingness via Hawking radiation – assuming that what the trio delicately call “other astrophysical evolution and decay channels” don’t get there first.

Based on this tongue-in-cheek assumption, the trio calculated that white dwarfs will dissipate in around 10⁷⁸ years, while denser objects such as black holes and neutron stars will vanish in no more than 10⁶⁷ years. Less dense objects such as humans, meanwhile, could persist for as long as 10⁹⁰ years – albeit only in a vast, near-featureless spacetime devoid of anything that would make life worth living, or indeed possible.
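
To see where numbers of this order come from, the standard textbook expression for the Hawking evaporation time of a Schwarzschild black hole, t = 5120πG²M³/(ħc⁴), already gives roughly 10⁶⁷ years for a solar-mass object. The short Python check below evaluates only that black-hole formula; it is not the trio’s generalized calculation for white dwarfs and other bodies.

```python
import math

# Standard order-of-magnitude Hawking evaporation time for a Schwarzschild
# black hole: t = 5120 * pi * G^2 * M^3 / (hbar * c^4).
# This is the textbook black-hole formula, not the generalized calculation
# the Radboud team applied to white dwarfs and other objects.
G = 6.674e-11        # m^3 kg^-1 s^-2
hbar = 1.055e-34     # J s
c = 2.998e8          # m s^-1
M_sun = 1.989e30     # kg
year = 3.156e7       # s

t_evap = 5120 * math.pi * G**2 * M_sun**3 / (hbar * c**4)
print(f"Solar-mass black hole: ~1e{math.log10(t_evap / year):.0f} years")
```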

While that might sound unrealistic as well as morbid, the trio’s calculations do have a somewhat practical goal. “By asking these kinds of questions and looking at extreme cases, we want to better understand the theory,” van Suijlekom says. “Perhaps one day, we [will] unravel the mystery of Hawking radiation.”

The post Universe may end much sooner than predicted, say theorists appeared first on Physics World.

  •  

Subtle quantum effects dictate how some nuclei break apart

Subtle quantum effects within atomic nuclei can dramatically affect how some nuclei break apart. By studying 100 isotopes with masses below that of lead, an international team of physicists uncovered a previously unknown region in the nuclear landscape where fragments of fission split in an unexpected way. This is driven not by the usual forces, but by shell effects rooted in quantum mechanics.

“When a nucleus splits apart into two fragments, the mass and charge distribution of these fission fragments exhibits the signature of the underlying nuclear structure effect in the fission process,” explains Pierre Morfouace of Université Paris-Saclay, who led the study. “In the exotic region of the nuclear chart that we studied, where nuclei do not have many neutrons, a symmetric split was previously expected. However, the asymmetric fission means that a new quantum effect is at stake.”

This unexpected discovery not only sheds light on the fine details of how nuclei break apart but also has far-reaching implications. These range from the development of safer nuclear energy to understanding how heavy elements are created during cataclysmic astrophysical events like stellar explosions.

Quantum puzzle

Fission is the process by which a heavy atomic nucleus splits into smaller fragments. It is governed by a complex interplay of forces. The strong nuclear force, which binds protons and neutrons together, competes with the electromagnetic repulsion between positively charged protons. The result is that certain nuclei are unstable, and this interplay typically leads to a symmetric fission.

But there’s another, subtler phenomenon at play: quantum shell effects. These arise because protons and neutrons inside the nucleus tend to arrange themselves into discrete energy levels or “shells,” much like electrons do in atoms.

“Quantum shell effects [in atomic electrons] play a major role in chemistry, where they are responsible for the properties of noble gases,” says Cedric Simenel of the Australian National University, who was not involved in the study. “In nuclear physics, they provide extra stability to spherical nuclei with so-called ‘magic’ numbers of protons or neutrons. Such shell effects drive heavy nuclei to often fission asymmetrically.”

In the case of very heavy nuclei, such as uranium or plutonium, this asymmetry is well documented. But in lighter, neutron-deficient nuclei – those with fewer neutrons than their stable counterparts – researchers had long expected symmetric fission, where the nucleus breaks into two roughly equal parts. This new study challenges that view.

New fission landscape

To investigate fission in this less-explored part of the nuclear chart, scientists from the R3B-SOFIA collaboration carried out experiments at the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt, Germany. They focused on nuclei ranging from iridium to thorium, many of which had never been studied before. The nuclei were fired at high energies into a lead target to induce fission.

The fragments produced in each fission event were carefully analysed using a suite of high-resolution detectors. A double ionization chamber captured the number of protons in each product, while a superconducting magnet and time-of-flight detectors tracked their momentum, enabling a detailed reconstruction of how the split occurred.

Using this method, the researchers found that the lightest fission fragments were frequently formed with 36 protons, which is the atomic number of krypton. This pattern suggests the presence of a stabilizing shell effect at that specific proton number.

“Our data reveal the stabilizing effect of proton shells at Z=36,” explains Morfouace. “This marks the identification of a new ‘island’ of asymmetric fission, one driven by the light fragment, unlike the well-known behaviour in heavier actinides. It expands our understanding of how nuclear structure influences fission outcomes.”

Future prospects

“Experimentally, what makes this work unique is that they provide the distribution of protons in the fragments, while earlier measurements in sub-lead nuclei were essentially focused on the total number of nucleons,” comments Simenel.

Since quantum shell effects are tied to specific numbers of protons or neutrons, not just the overall mass, these new measurements offer direct evidence of how proton shell structure shapes the outcome of fission in lighter nuclei. This makes the results particularly valuable for testing and refining theoretical models of fission dynamics.

“This work will undoubtedly lead to further experimental studies, in particular with more exotic light nuclei,” Simenel adds. “However, to me, the ball is now in the camp of theorists who need to improve their modelling of nuclear fission to achieve the predictive power required to study the role of fission in regions of the nuclear chart not accessible experimentally, as in nuclei formed in the astrophysical processes.”

The research is described in Nature.

The post Subtle quantum effects dictate how some nuclei break apart appeared first on Physics World.

  •  

New coronagraph pushes exoplanet discovery to the quantum limit

Diagram of the new coronagraph
How it works Diagram showing simulated light from an exoplanet and its companion star (far left) moving through the new coronagraph. (Courtesy: Nico Deshler/University of Arizona)

A new type of coronagraph that could capture images of dim exoplanets that are extremely close to bright stars has been developed by a team led by Nico Deshler at the University of Arizona in the US. As well as boosting the direct detection of exoplanets, the new instrument could support advances in areas including communications, quantum sensing, and medical imaging.

Astronomers have confirmed the existence of nearly 6000 exoplanets, which are planets that orbit stars other than the Sun. The majority of these were discovered based on their effects on their companion stars, rather than being observed directly. This is because most exoplanets are too dim and too close to their companion stars for the exoplanet light to be differentiated from starlight. That is where a coronagraph can help.

A coronagraph is an astronomical instrument that blocks light from an extremely bright source to allow the observation of dimmer objects in the nearby sky. Coronagraphs were first developed a century ago to allow astronomers to observe the outer atmosphere (corona) of the Sun, which would otherwise be drowned out by light from the much brighter photosphere.

At the heart of a coronagraph is a mask that blocks the light from a star, while allowing light from nearby objects into a telescope. However, the mask (and the telescope aperture) will cause the light to interfere and create diffraction patterns that blur tiny features. This prevents the observation of dim objects that are closer to the star than the instrument’s inherent diffraction limit.

Off limits

Most exoplanets lie within the diffraction limit of today’s coronagraphs and Deshler’s team addressed this problem using two spatial mode sorters. The first device uses a sequence of optical elements to separate starlight from light originating from the immediate vicinity of the star. The starlight is then blocked by a mask while the rest of the light is sent through a second spatial mode sorter, which reconstructs an image of the region surrounding the star.

As well as offering spatial resolution below the diffraction limit, the technique approaches the fundamental limit on resolution that is imposed by quantum mechanics.

“Our coronagraph directly captures an image of the surrounding object, as opposed to measuring only the quantity of light it emits without any spatial orientation,” Deshler describes. “Compared to other coronagraph designs, ours promises to supply more information about objects in the sub-diffraction regime – which lie below the resolution limits of the detection instrument.”

To test their approach, Deshler and colleagues simulated an exoplanet orbiting at a sub-diffraction distance from a host star some 1000 times brighter. After passing the light through the spatial mode sorters, they could resolve the exoplanet’s position – which would have been impossible with any other coronagraph.
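
The general idea can be illustrated with a deliberately simplified one-dimensional toy model (not the team’s actual optical design): expand the incoming field in Hermite-Gauss modes, discard the fundamental mode that carries the on-axis starlight, and check that a roughly 1000-times-fainter companion at a sub-diffraction offset still leaves a measurable imprint in the higher-order modes. The Python sketch below also treats the star and planet as mutually coherent purely for simplicity.

```python
import numpy as np
from numpy.polynomial.hermite import hermval

# 1D toy of mode sorting: expand the field in Hermite-Gauss modes, block the
# fundamental mode (which carries the on-axis starlight), then rebuild the
# field from what remains. Positions are in units of the beam waist.
x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]

def hg_mode(n, x):
    """Hermite-Gauss mode of order n, normalized numerically on the grid."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    u = hermval(x, coeffs) * np.exp(-x**2 / 2)
    return u / np.sqrt(np.sum(u**2) * dx)

modes = [hg_mode(n, x) for n in range(12)]

star = hg_mode(0, x)                        # on-axis star
planet = 0.032 * hg_mode(0, x - 0.3)        # ~1000x fainter, 0.3 waists off-axis
field = star + planet                       # coherent sum: a toy simplification

amps = np.array([np.sum(m * field) * dx for m in modes])   # first mode sorter
amps[0] = 0.0                                              # mask the star's mode
residual = sum(a * m for a, m in zip(amps, modes))         # second sorter rebuilds

# The off-axis planet leaks into odd modes, so its signature survives even
# though essentially all of the starlight has been removed.
print("surviving |a1|, |a2|:", round(abs(amps[1]), 4), round(abs(amps[2]), 4))
print("peak of residual intensity at x =", round(x[np.argmax(residual**2)], 2))
```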

Context and composition

The team believe that their technique will improve astronomical images. “These images can provide context and composition information that could be used to determine exoplanet orbits and identify other objects that scatter light from a star, such as exozodiacal dust clouds,” Deshler says.

The team’s coronagraph could also have applications beyond astronomy. With the ability to detect extremely faint signals close to the quantum limit, it could help to improve the resolution of quantum sensors. This could lead to new methods for detecting tiny variations in magnetic or gravitational fields.

Elsewhere, the coronagraph could help to improve non-invasive techniques for imaging living tissue on the cellular scale – with promising implications in medical applications such as early cancer detection and the imaging of neural circuits. Another potential use is in multiplexing for optical communications, where the coronagraph would be used to differentiate between overlapping signals, potentially boosting the rate at which data can be transferred between satellites and ground-based receivers.

The research is described in Optica.

The post New coronagraph pushes exoplanet discovery to the quantum limit appeared first on Physics World.

  •  

Miniaturized pixel detector characterizes radiation quality in clinical proton fields

Experimental setups for phantom measurements
Experimental setup Top: schematic and photo of the setup for measurements behind a homogeneous phantom. Bottom: IMPT treatment plan for the head phantom (left); the detector sensor position (middle, sensor thickness not to scale); and the setup for measurements behind the phantom (right). (Courtesy: Phys. Med. Biol. 10.1088/1361-6560/adcaf9)

Proton therapy is a highly effective and conformal cancer treatment. Proton beams deposit most of their energy at a specific depth – the Bragg peak – and then stop, enabling proton treatments to destroy tumour cells while sparing surrounding normal tissue. To further optimize the clinical treatment planning process, there’s recently been increased interest in considering the radiation quality, quantified by the proton linear energy transfer (LET).

LET – defined as the mean energy deposited by a charged particle over a given distance – increases towards the end of the proton range. Incorporating LET as an optimization parameter could better exploit the radiobiological properties of protons, by reducing LET in healthy tissue, while maintaining or increasing it within the target volume. This approach, however, requires a method for experimental verification of proton LET distributions and patient-specific quality assurance in terms of proton LET.

To meet this need, researchers at the Institute of Nuclear Physics, Polish Academy of Sciences have used the miniaturized semiconductor pixel detector Timepix3 to perform LET characterization of intensity-modulated proton therapy (IMPT) plans in homogeneous and heterogeneous phantoms. They report their findings in Physics in Medicine & Biology.

Experimental validation

First author Paulina Stasica-Dudek and colleagues performed a series of experiments in a gantry treatment room at the Cyclotron Centre Bronowice (CCB), a proton therapy facility equipped with a proton cyclotron accelerator and pencil-beam scanning system that provides IMPT for up to 50 cancer patients per day.

The MiniPIX Timepix3 is a radiation imaging pixel detector based on the Timepix3 chip developed at CERN within the Medipix collaboration (provided commercially by Advacam). It provides quasi-continuous single particle tracking, allowing particle type recognition and spectral information in a wide range of radiation environments.

For this study, the team used a Timepix3 detector with a 300 µm-thick silicon sensor operated as a miniaturized online radiation camera. To overcome the problem of detector saturation in the relatively high clinical beam currents, the team developed a pencil-beam scanning method with the beam current reduced to the picoampere (pA) level.

The researchers used Timepix3 to measure the deposited energy and LET spectra for spread-out Bragg peak (SOBP) and IMPT plans delivered to a homogeneous water-equivalent slab phantom, with each plan energy layer irradiated and measured separately. They also performed measurements on an IMPT plan delivered to a heterogeneous head phantom. For each scenario, they used a Monte Carlo (MC) code to simulate the corresponding spectra of deposited energy and LET for comparison.
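
As a minimal illustration of the quantities involved (and not the paper’s analysis pipeline), the Python sketch below converts per-track energy deposits and path lengths into event-by-event LET values and a dose-averaged LET, using the standard definition in which each event’s LET is weighted by the energy it deposits. Detector calibration and the conversion from LET in silicon to LET in water are ignored, and the numbers are made up for the example.

```python
import numpy as np

def dose_averaged_let(energy_keV, path_um):
    """Event-by-event LET (keV/µm) and dose-averaged LET.

    energy_keV: energy deposited by each particle track in the sensor (keV)
    path_um:    corresponding track path length through the sensor (µm)

    LET_i = E_i / l_i, and the dose-averaged LET weights each event's LET
    by the energy it deposits: LET_d = sum(E_i * LET_i) / sum(E_i).
    """
    energy = np.asarray(energy_keV, dtype=float)
    path = np.asarray(path_um, dtype=float)
    let = energy / path
    let_d = np.sum(energy * let) / np.sum(energy)
    return let, let_d

# Toy example: three tracks crossing a 300 µm silicon sensor at various angles
let, let_d = dose_averaged_let(energy_keV=[350.0, 420.0, 900.0],
                               path_um=[300.0, 320.0, 450.0])
print("per-event LET:", np.round(let, 2), "keV/µm   LET_d:", round(let_d, 2))
```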

The team first performed a series of experiments using a homogeneous phantom irradiated with various fields, mimicking patient-specific quality assurance procedures. The measured and simulated dose-averaged LET (LETd) and LET spectra agreed to within a few percent, demonstrating proper calibration of the measurement methodology.

The researchers also performed an end-to-end test in a heterogeneous CIRS head phantom, delivering a single field of an IMPT plan to a central 4 cm-diameter target volume in 13 energy layers (96.57–140.31 MeV) and 315 spots.

Energy deposition and LET spectra for an IMPT plan delivered to a head phantom
End-to-end testing Energy deposition (left) and LET in water (right) spectra for an IMPT plan measured in the CIRS head phantom obtained based on measurements (blue) and MC simulations (orange). The vertical lines indicate LETd values. (Courtesy: Phys. Med. Biol. 10.1088/1361-6560/adcaf9)

For head phantom measurements, the peak positions for deposited energy and LET spectra obtained based on experiment and simulation agreed within the error bars, with LETd values of about 1.47 and 1.46 keV/µm, respectively. The mean LETd values derived from MC simulation and measurement differed on average by 5.1% for individual energy layers.

Clinical translation

The researchers report that implementing the proposed LET measurement scheme using Timepix3 in a clinical setting requires irradiating IMPT plans with a reduced beam current (at the pA level). While they successfully conducted LET measurements at low beam currents in the accelerator’s research mode, pencil-beam scanning at pA-level currents is not currently available in the commercial clinical or quality assurance modes. Therefore, they note that translating the proposed approach into clinical practice would require vendors to upgrade the beam delivery system to enable beam monitoring at low beam currents.

“The presented results demonstrate the feasibility of the Timepix3 detector to validate LET computations in IMPT fields and perform patient-specific quality assurance in terms of LET. This will support the implementation of LET in treatment planning, which will ultimately increase the effectiveness of the treatment,” Stasica-Dudek and colleagues write. “Given the compact design and commercial availability of the Timepix3 detector, it holds promise for broad application across proton therapy centres.”

The post Miniaturized pixel detector characterizes radiation quality in clinical proton fields appeared first on Physics World.

  •  

Protons take to the road

Physicists at CERN have completed a “test run” for taking antimatter out of the laboratory and transporting it across the site of the European particle-physics facility. Although the test was carried out with ordinary protons, the team that performed it says that antiprotons could soon get the same treatment. The goal, they add, is to study antimatter in places other than the labs that create it, as this would enable more precise measurements of the differences between matter and antimatter. It could even help solve one of the biggest mysteries in physics: why does our universe appear to be made up almost entirely of matter, with only tiny amounts of antimatter?

According to the Standard Model of particle physics, each of the matter particles we see around us – from baryons like protons to leptons such as electrons – should have a corresponding antiparticle that is identical in every way apart from its charge and magnetic properties (which are reversed). This might sound straightforward, but it leads to a peculiar prediction. Under the Standard Model, the Big Bang that formed our universe nearly 14 billion years ago should have generated equal amounts of antimatter and matter. But if that were the case, there shouldn’t be any matter left, because whenever pairs of antimatter and matter particles collide, they annihilate each other in a burst of energy.

Physicists therefore suspect that there are other, more subtle differences between matter particles and their antimatter counterparts – differences that could explain why the former prevailed while the latter all but disappeared. By searching for these differences, they hope to shed more light on antimatter-matter asymmetry – and perhaps even reveal physics beyond the Standard Model.

Extremely precise measurements

At CERN’s Baryon-Antibaryon Symmetry Experiment (BASE), the search for matter-antimatter differences focuses on measuring the magnetic moments and charge-to-mass ratios of protons and antiprotons. These measurements need to be extremely precise, but this is difficult at CERN’s “Antimatter Factory” (AMF), which manufactures the necessary low-energy antiprotons in profusion. This is because essential nearby equipment – including the Antiproton Decelerator and ELENA, which reduce the energy of incoming antiprotons from GeV to MeV – produces magnetic field fluctuations that blur the signal.

To carry out more precise measurements, the team therefore needs a way of transporting the antiprotons to other, better-shielded, laboratories. This is easier said than done, because antimatter needs to be carefully isolated from its environment to prevent it from annihilating with the walls of its container or with ambient gas molecules.

The BASE team’s solution was to develop a device that can transport trapped antiprotons on a truck for substantial distances. It is this device, known as BASE-STEP (for Symmetry Tests in Experiments with Portable Antiprotons), that has now been field-tested for the first time.

Protons on the go

During the test, the team successfully transported a cloud of about 10⁵ trapped protons out of the AMF and across CERN’s Meyrin campus over a period of four hours. Although protons are not the same as antiprotons, BASE-STEP team leader Christian Smorra says they are just as sensitive to disturbances in their environment caused by, say, driving them around. “They are therefore ideal stand-ins for initial tests, because if we can transport protons, we should also be able to transport antiprotons,” he says.

Photo of the BASE-STEP system sitting on a bright yellow trolley after being unloaded from the transport crane, which is visible above it. A woman in a hard hat and head scarf watches from the ground, while a man in a hard hat stands above her on a set of steps, also watching.
The next step: BASE-STEP on a transfer trolley, watched over by BASE team members Fatma Abbass and Christian Smorra. (Photo: BASE/Maria Latacz)

The BASE-STEP device is mounted on an aluminium frame and measures 1.95 m x 0.85 m x 1.65 m. At 850‒900 kg, it is light enough to be transported using standard forklifts and cranes.

Like BASE, it traps particles in a Penning trap composed of gold-plated cylindrical electrode stacks made from oxygen-free copper. To further confine the protons and prevent them from colliding with the trap’s walls, this trap is surrounded by a superconducting magnet bore operated at cryogenic temperatures. The second electrode stack is also kept at ultralow pressures of 10⁻¹⁹ bar, which Smorra says is low enough to keep antiparticles from annihilating with residual gas molecules. To transport antiprotons instead of protons, Smorra adds, they would just need to switch the polarity of the electrodes.
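
For context, a Penning-trap measurement of the charge-to-mass ratio ultimately comes down to determining the particle’s cyclotron frequency, f_c = qB/(2πm). The short Python calculation below uses an illustrative 2 T field, which is an assumption for the example rather than the BASE-STEP specification.

```python
import math

# In a Penning trap, the (anti)proton's free cyclotron frequency
# f_c = q*B / (2*pi*m) ties the measured frequency to the charge-to-mass
# ratio. The 2 T field below is an illustrative value, not the BASE-STEP spec.
q = 1.602176634e-19      # elementary charge, C
m_p = 1.67262192e-27     # (anti)proton mass, kg
B = 2.0                  # assumed trap field, tesla

f_c = q * B / (2 * math.pi * m_p)
print(f"Cyclotron frequency in a {B} T trap: {f_c / 1e6:.1f} MHz")
```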

The transportable trap system, which is detailed in Nature, is designed to remain operational on the road. It uses a carbon-steel vacuum chamber to shield the particles from stray magnetic fields, and its frame can handle accelerations of up to 1g (9.81 m/s²) in all directions over and above the usual (vertical) force of gravity. This means it can travel up and down slopes with a gradient of up to 10%, or approximately 6°.
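
As a quick check of that last figure, a 10% gradient corresponds to an angle of arctan(0.1), which is about 5.7°:

```python
import math

# A 10% gradient is 0.10 m of rise per metre travelled horizontally
print(f"{math.degrees(math.atan(0.10)):.1f} degrees")   # ≈ 5.7
```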

Once the BASE-STEP device is re-configured to transport antiprotons, the first destination on the team’s list is a new Penning-trap system currently being constructed at the Heinrich Heine University in Düsseldorf, Germany. Here, physicists hope to search for charge-parity-time (CPT) violations in protons and antiprotons with a precision at least 100 times higher than is possible at CERN’s AMF.

“At BASE, we are currently performing measurements with a precision of 16 parts in a trillion,” explains BASE spokesperson Stefan Ulmer, an experimental physicist at Heinrich Heine and a researcher at CERN and Japan’s RIKEN laboratory. “These experiments are the most precise tests of matter/antimatter symmetry in the baryon sector to date, but to make these experiments better, we have no choice but to transport the particles out of CERN’s antimatter factory,” he tells Physics World.

The post Protons take to the road appeared first on Physics World.

  •  

Quantum computing for artists, musicians and game designers

Many creative industries rely on cutting-edge digital technologies, so it is not surprising that this sector could easily become an early adopter of quantum computing.

In this episode of the Physics World Weekly podcast I am in conversation with James Wootton, who is chief scientific officer at Moth Quantum. Based in the UK and Switzerland, the company is developing quantum-software tools for the creative industries – focusing on artists, musicians and game developers.

Wootton joined Moth Quantum in September 2024 after working on quantum error correction at IBM. He also has a long-standing interest in quantum gaming and in creating tools that make quantum computing more accessible. If you enjoyed this interview with Wootton, check out this article that he wrote for Physics World in 2018: “Playing games with quantum computers”.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

 

The post Quantum computing for artists, musicians and game designers appeared first on Physics World.

  •  

Five-body recombination could cause significant loss from atom traps

Five-body recombination, in which five identical atoms form a tetramer molecule and a single free atom, could be the largest contributor to loss from ultracold atom traps at specific “Efimov resonances”, according to calculations done by physicists in the US. The process, which is less well understood than three- and four-body recombination, could be useful for building molecules, and potentially for modelling nuclear fusion.

A collision involving trapped atoms can be either elastic – in which the internal states of the atoms and their total kinetic energy remain unchanged – or inelastic, in which there is an interchange between the kinetic energy of the system and the internal energy states of the colliding atoms.

Most collisions in a dilute quantum gas involve only two atoms, and when physicists were first studying Bose-Einstein condensates (the ultralow-temperature state of some atomic gases), they suppressed inelastic two-body collisions, keeping the atoms in the desired state and preserving the condensate. A relatively small number of collisions, however, involve three or more bodies colliding simultaneously.

“They couldn’t turn off three body [inelastic collisions], and that turned out to be the main reason atoms leaked out of the condensate,” says theoretical physicist Chris Greene of Purdue University in the US.

Something remarkable

While attempting to understand inelastic three-body collisions, Greene and colleagues made the connection to work done in the 1970s by the Soviet theoretician Vitaly Efimov. He showed that at specific “resonances” of the scattering length, quantum mechanics allowed two colliding particles that could otherwise not form a bound state to do so in the presence of a third particle. While Efimov first considered the scattering of nucleons (protons and neutrons) or alpha particles, the effect applies to atoms and other quantum particles.

In the case of trapped atoms, the bound dimer and free atom are then ejected from the trap by the energy released from the binding event. “There were signatures of this famous Efimov effect that had never been seen experimentally,” Greene says. This was confirmed in 2005 by experiments from Rudolf Grimm’s group at the University of Innsbruck in Austria.

Hundreds of scientific papers have now been written about three-body recombination. Greene and colleagues subsequently predicted resonances at which four-body Efimov recombination could occur, producing a trimer. These were observed almost immediately by Grimm and colleagues. “Five was just too hard for us to do at the time, and only now are we able to go that next step,” says Greene.

Principal loss channel

In the new work, Greene and colleague Michael Higgins modelled collisions between identical caesium atoms in an optical trap. At specific resonances, five-body recombination – in which five colliding atoms produce a four-atom tetramer and a single free atom – is not only enhanced but becomes the principal loss channel. The researchers believe these resonances should be experimentally observable using today’s laser box traps, which hold atomic gases in a square-well potential.

“For most ultracold experiments, researchers will be avoiding loss as much as possible – they would stay away from these resonances,” says Greene. “But for those of us in the few-body community interested in how atoms bind and resonate and how to describe complicated rearrangement, it’s really interesting to look at these points where the loss becomes resonant and very strong.” This is one technique that can be used to create new molecules, for example.

In future, Greene hopes to apply the model to nucleons themselves. “There have been very few people in the few-body theory community willing to tackle a five-particle collision – the Schrödinger equation has so many dimensions,” he says.

Fusion reactions

He hopes it may be possible to apply the researchers’ toolkit to nuclear reactions. “The famous one is the deuterium/tritium fusion reaction. When they collide they can form an alpha particle and a neutron and release a ton of energy, and that’s the basis of fusion reactors… There’s only one theory in the world from the nuclear community, and it’s such an important reaction I think it needs to be checked,” he says.

The researchers also wish to study the possibility of even larger bound states. However, they foresee a problem because the scattering length of the ground state resonance gets shorter and shorter with each additional particle. “Eventually the scattering length will no longer be the dominant length scale in the problem, and we think between five and six is about where that border line occurs,” Greene says. Nevertheless, higher-lying, more loosely-bound six-body Efimov resonances could potentially be visible at longer scattering lengths.

The research is described in Proceedings of the National Academy of Sciences.

Theoretical physicist Ravi Rau of Louisiana State University in the US is impressed by Greene and Higgins’ work. “For quite some time Chris Greene and a succession of his students and post-docs have been extending the three-body work that they did, using the same techniques, to four and now five particles,” he says. “Each step is much more complicated, and that he could use this technique to extend it to five bosons is what I see as significant.” Rau says, however, that “there is a vast gulf” between five atoms and the number treated by statistical mechanics, so new theoretical approaches may be required to bridge the gap.

The post Five-body recombination could cause significant loss from atom traps appeared first on Physics World.

  •  

This is what an aurora looks like on Mars

The Mars rover Perseverance has captured the first image of an aurora as seen from the surface of another planet. The visible-light image, which was taken during a solar storm on 18 March 2024, is not as detailed or as colourful as the high-resolution photos of green swirls, blue shadows and pink whorls familiar to aurora aficionados on Earth. Nevertheless, it shows the Martian sky with a distinctly greenish tinge, and the scientists who obtained it say that similar aurorae would likely be visible to future human explorers.

“Kind of like with aurora here on Earth, we need a good solar storm to induce a bright green colour, otherwise our eyes mostly pick up on a faint grey-ish light,” explains Elise Wright Knutsen, a postdoctoral researcher in the Centre for Space Sensors and Systems at the University of Oslo, Norway. The storm Knutsen and her colleagues captured was, she adds, “rather moderate”, and the aurora it produced was probably too faint to see with the naked eye. “But with a camera, or if the event had been more intense, the aurora will appear as a soft green glow covering more or less the whole sky.”

The role of planetary magnetic fields

Aurorae happen when charged particles from the Sun – the solar wind – interact with the magnetic field around a planet. On Earth, this magnetic field is the product of an internal, planetary-scale magnetic dynamo. Mars, however, lost its dynamo (and, with it, its oceans and its thick protective atmosphere) around four billion years ago, so its magnetic field is much weaker. Nevertheless, it retains some residual magnetization in its southern highlands, and its conductive ionosphere affects the shape of the nearby interplanetary magnetic field. Together, these two phenomena give Mars a hybrid magnetosphere too feeble to protect its surface from cosmic rays, but strong enough to generate an aurora.

Scientists had previously identified various types of aurorae on Mars (and every other planet with an atmosphere in our solar system) in data from orbiting spacecraft. However, no Mars rover had ever observed an aurora before, and all the orbital aurora observations, from Mars and elsewhere, were at ultraviolet wavelengths.

An artist's impression of what the aurora would have looked like. The image shows uneven terrain silhouetted against a greenish sky with several visible stars. The Perseverance rover is in the foreground.
Awesome sight: An artist’s impression of the aurora and the Perseverance rover. (Courtesy: Alex McDougall-Page)

How to spot an aurora on Mars

According to Knutsen, the lack of visible-light, surface-based aurora observations has several causes. First, the visible-wavelength instruments on Mars rovers are generally designed to observe the planet’s bright “dayside”, not to detect faint emissions on its nightside. Second, rover missions focus primarily on geology, not astronomy. Finally, aurorae are fleeting, and there is too much demand for Perseverance’s instruments to leave them pointing at the sky just in case something interesting happens up there.

“We’ve spent a significant amount of time and effort improving our aurora forecasting abilities,” Knutsen says.

Getting the timing of observations right was the most challenging part, she adds. The clock started whenever solar satellites detected events called coronal mass ejections (CMEs) that create unusually strong pulses of solar wind. Next, researchers at the NASA Community Coordinated Modeling Center simulated how these pulses would propagate through the solar system. Once they posted the simulation results online, Knutsen and her colleagues – an international consortium of scientists in Belgium, France, Germany, the Netherlands, Spain, the UK and the US as well as Norway – had a decision to make. Was this CME likely to trigger an aurora bright enough for Perseverance to detect?

If the answer was “yes”, their next step was to request observation time on Perseverance’s SuperCam and Mastcam-Z instruments. Then they had to wait, knowing that although CMEs typically take three days to reach Mars, the simulations are only accurate to within a few hours and the forecast could change at any moment. Even if they got the timing right, the CME might be too weak to trigger an aurora.

“We have to pick the exact time to observe, the whole observation only lasts a few minutes, and we only get one chance to get it right per solar storm,” Knutsen says. “It took three unsuccessful attempts before we got everything right, but when we did, it appeared exactly as we had imagined it: as a diffuse green haze, uniform in all directions.”

Future observations

Writing in Science Advances, Knutsen and colleagues say it should now be possible to investigate how Martian aurorae vary in time and space – information which, they note, is “not easily obtained from orbit with current instrumentation”. They also point out that the visible-light instruments they used tend to be simpler and cheaper than UV ones.

“This discovery will open up new avenues for studying processes of particle transport and magnetosphere dynamics,” Knutsen tells Physics World. “So far we have only reported our very first detection of this green emission, but observations of aurora can tell us a lot about how the Sun’s particles are interacting with Mars’s magnetosphere and upper atmosphere.”

The post This is what an aurora looks like on Mars appeared first on Physics World.

  •  

Robert P Crease: ‘I’m yet another victim of the Trump administration’s incompetence’

Late on Friday 18 April, the provost of Stony Brook University, where I teach, received a standard letter from the National Science Foundation (NSF), the body that funds much academic research in the US. “Termination of certain awards is necessary,” the e-mail ran, “because they are not in alignment with current NSF priorities”. The e-mail mentioned “NSF Award Id 2318247”. Mine.

The termination notice, forwarded to me a few minutes later, was the same one that 400 other researchers all over the US received the same day, in which the agency, following a directive from the Trump administration, grabbed back $233m in grant money. According to the NSF website, projects terminated were “including but not limited to those on diversity, equity, and inclusion (DEI) and misinformation/disinformation”.

Losing grant money is disastrous for research and for the faculty, postdocs, graduate students and support staff who depend on that support. A friend of mine tried to console me by saying that I had earned a badge of honour for being among the 400 people who threatened the Trump Administration so much that it set out to stop their work. Still, I was baffled. Did I really deserve the axe?

My award, entitled “Social and political dynamics of lab-community relations”, was small potatoes. As the sole principal investigator, I’d hired no postdocs or grad students. I’d also finished most of the research and been given a “no-cost extension” to write it up that was due to expire in a few months. In fact, I’d spent all but $21,432 of the $263,266 of cash.

That may sound like a lot for a humanities researcher, but it barely covered a year of my salary and included indirect costs (to which my grant was subject like any other), along with travel and so on. What’s more, my project’s stated aim was to “enhance the effectiveness of national scientific facilities”, which was clearly within the NSF’s mission.

Such facilities, I had pointed out in my official proposal, are vital if the US is to fulfil its national scientific, technological, medical and educational goals. But friction between a facility and the surrounding community can hamper its work, particularly if the lab’s research is seen as threatening – for example, involving chemical, radiological or biological hazards. Some labs, in fact, have had important, yet perfectly safe, facilities permanently closed out of such fear.

“In an age of Big Science,” I argued, “understanding the dynamics of lab-community interaction is crucial to advancing national, scientific, and public interests.” What’s so contentious about that?

“New bad words”

Maybe I had been careless. After all, Ted Cruz, who chairs the Senate’s commerce committee, had claimed in February that 3400 NSF awards worth over $2 billion made during the Biden–Harris administration had promoted DEI and advanced “neo-Marxist class warfare propaganda”. I wondered if I might have inadvertently used some trigger word that outed me as an enemy of the state.

I knew, for instance, that the Trump Administration had marked for deletion photos of the Enola Gay aircraft, which had dropped an atomic bomb on Hiroshima, in a Defense Department database because officials had not realized that “Gay” was part of the name of the pilot’s mother. Administration officials had made similar misinterpretations in scientific proposals that included the words “biodiversity” and “transgenic”.

Had I used one of those “new bad words”? I ran a search on my proposal. Did it mention “equity”? No. “Inclusion”? Also no. The word “diversity” appeared only once, in the subtitle of an article in the bibliography about radiation fallout. “Neo-Marxist”? Again, no. Sure, I’d read Marx’s original texts during my graduate training in philosophy, but my NSF documents hadn’t tapped him or his followers as essential to my project.

Then I remembered a sentence in my proposal. “Well-established scientific findings,” I wrote, “have been rejected by activists and politicians, distorted by lurid headlines, and fuelled partisan agendas.” These lead in turn to “conspiracy theories, fake facts, science denial and charges of corruption”.

Was that it, I wondered? Had the NSF officials thought that I had meant to refer to the administration’s attacks on climate change science, vaccines, green energy and other issues? If so, that was outrageous! There was not a shred of truth to it – no truth at all!

Ructions and retractions

On 23 April – five days after the NSF termination notice – two researchers at Harvard University put together an online “Terminated NSF grant tracker”, which contained information based on what they found in the NSF database. Curious, I scrolled down to SUNY at Stony Brook and found mine: “Social and political dynamics of lab-community relations”.

I was shocked to discover that almost everything about it in the NSF database was wrong, including the abstract. The abstract given for my grant was apparently that of another NSF award, for a study that touched on DEI themes – a legitimate and useful thing to study under any normal regime, but not this one. At last, I had the reason for my grant termination: an NSF error.

The next day, 24 April, I managed to speak to the beleaguered NSF programme director, who was kind and understanding and said there’d been a mistake in the database. When I asked her if it could be fixed she said, “I don’t know”. When I asked her if the termination can be reversed, she said, “I don’t know”. I alerted Stony Brook’s grants-management office, which began to press the NSF to reverse its decision. A few hours later I learned that NSF director Sethuraman Panchanathan had resigned.

I briefly wondered if Panchanathan had been fired because my grant had been bungled. No such luck; he was probably disgusted with the administration’s treatment of the agency. But while the mistake over my abstract evidently wasn’t deliberate, the malice behind my grant’s termination certainly was. Further, doesn’t one routinely double-check before taking such an unprecedented and monumental step as terminating a grant by a major scientific agency?

I then felt guilty about my anger; who was I to complain? After all, some US agencies have been shockingly incompetent lately. A man was mistakenly sent by the Department of Homeland Security to a dangerous prison in El Salvador and they couldn’t (or wouldn’t) get him back. The Department of Health and Human Services has downplayed the value of vaccines, fuelling a measles epidemic in Texas, while defence secretary Pete Hegseth used the Signal messaging app to release classified military secrets regarding a war in progress to a journalist.

How narcissistic of me to become livid only when personally affected by termination of an award that’s almost over anyway.

A few days later, on 28 April, Stony Brook’s provost received another e-mail about my grant from the NSF. Forwarded to me, it said: “the termination notice is retracted; NSF terminated this project in error”. Since then, the online documents at the NSF, and the information about my grant in the tracker, have thankfully been corrected.

The critical point

In a few years’ time, I’ll put together another proposal to study the difference between the way that US government handles science and the needs of its citizens. I’ll certainly have a lot more material to draw on. Meanwhile, I’ll reluctantly wear my badge of honour. For I deserve it – though not, as I initially thought, because I had threatened the Trump Administration enough that they tried to halt my research.

I got it simply because I’m yet another victim of the Trump Administration’s incompetence.

The post Robert P Crease: ‘I’m yet another victim of the Trump administration’s incompetence’ appeared first on Physics World.

  •  

Plasma physics sets upper limit on the strength of ‘dark electromagnetism’

Physicists have set a new upper bound on the interaction strength of dark matter by simulating the collision of two clouds of interstellar plasma. The result, from researchers at Ruhr University Bochum in Germany, CINECA in Italy and the Instituto Superior Tecnico in Portugal, could force a rethink on theories describing this mysterious substance, which is thought to make up more than 85% of the mass in the universe.

Since dark matter has only ever been observed through its effect on gravity, we know very little about what it’s made of. Indeed, various theories predict that dark matter particles could have masses ranging from around 10⁻²² eV to around 10¹⁹ GeV — a staggering 50 orders of magnitude.
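
That span is easy to verify: converting both ends of the quoted mass range to the same unit and counting powers of ten recovers the figure of 50. The short Python sketch below does just that, using only the numbers quoted above.

import math

# Quoted range of possible dark-matter particle masses, converted to eV
m_low_eV = 1e-22           # lightest candidates: ~1e-22 eV
m_high_eV = 1e19 * 1e9     # heaviest candidates: ~1e19 GeV = 1e28 eV

span = math.log10(m_high_eV / m_low_eV)
print(round(span))         # 50 orders of magnitude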

Another major unknown about dark matter is whether it interacts via forces other than gravity, either with itself or with other particles. Some physicists have hypothesized that dark matter particles might possess positive and negative “dark charges” that interact with each other via “dark electromagnetic forces”. According to this supposition, dark matter could behave like a cold plasma of self-interacting particles.

Bullet Cluster experiment

In the new study, the team searched for evidence of dark interactions in a cluster of galaxies located several billion light years from Earth. This galactic grouping is known as the Bullet Cluster, and it contains a subcluster that is moving away from the main body after passing through it at high speed.

Since the most basic model of dark-matter interactions relies on the same equations as ordinary electromagnetism, the researchers chose to simulate these interactions in the Bullet Cluster system using the same computational tools they would use to describe electromagnetic interactions in a standard plasma. They then compared their results with real observations of the Bullet Cluster.

A graph of the dark electromagnetic coupling constant 𝛼𝐷 as a function of the dark matter mass 𝑚𝐷. There is a blue triangle in the upper left corner of the graph, a wide green region below it running from the bottom left to the top right, and a thin red strip below that. A white triangle at the bottom right of the graph represents a region not disallowed by the measurements.
Interaction strength: Constraints on the dark electromagnetic coupling constant 𝛼𝐷 based on observations from the Bullet Cluster. 𝛼𝐷 must lie below the blue, green and red regions. Dashed lines show the reference value used for the mass of 1 TeV. (Courtesy: K Schoefler et al., “Can plasma physics establish a significant bound on long-range dark matter interactions?” Phys Rev D 111 L071701, https://doi.org/10.1103/PhysRevD.111.L071701)

The new work builds on a previous study in which members of the same team simulated the collision of two clouds of standard plasma passing through one another. This study found that as the clouds merged, electromagnetic instabilities developed. These instabilities had the effect of redistributing energy from the opposing flows of the clouds, slowing them down while also broadening the temperature range within them.

Ruling out many of the simplest dark matter theories

The latest study showed that, as expected, the plasma components of the subcluster and main body slowed down thanks to ordinary electromagnetic interactions. That, however, appeared to be all that happened, as the data contained no sign of additional dark interactions. While the team’s finding doesn’t rule out dark electromagnetic interactions entirely, team member Kevin Schoeffler explains that it does mean that these interactions, which are characterized by a parameter known as 𝛼𝐷, must be far weaker than their ordinary-matter counterpart. “We can thus calculate an upper limit for the strength of this interaction,” he says.

This limit, which the team calculated as 𝛼𝐷 < 4 × 10⁻²⁵ for a dark matter particle with a mass of 1 TeV, rules out many of the simplest dark matter theories and will require them to be rethought, Schoeffler says. “The calculations were made possible thanks to detailed discussions with scientists working outside of our speciality of physics, namely plasma physicists,” he tells Physics World. “Throughout this work, we had to overcome the challenge of connecting with very different fields and interacting with communities that speak an entirely different language to ours.”
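
To get a feel for how stringent that bound is, it can be set against the familiar fine-structure constant of ordinary electromagnetism, roughly 1/137. The comparison below is our own arithmetic, using only the bound quoted above.

# Compare the reported upper bound on the dark coupling with the ordinary
# electromagnetic coupling (the fine-structure constant, ~1/137).
alpha_D_bound = 4e-25      # upper limit quoted for a 1 TeV dark matter particle
alpha_em = 1 / 137.036     # ordinary fine-structure constant

ratio = alpha_D_bound / alpha_em
print(f"alpha_D is at most ~{ratio:.1e} times the electromagnetic coupling")
# roughly 5e-23, i.e. more than 22 orders of magnitude weaker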

As for future work, the physicists plan to compare the results of their simulations with other astronomical observations, with the aim of constraining the upper limit of the dark electromagnetic interaction even further. More advanced calculations, such as those that include finer details of the cloud models, would also help refine the limit. “These more realistic setups would include other plasma-like electromagnetic scenarios and ‘slowdown’ mechanisms, leading to potentially stronger limits,” Schoeffler says.

The present study is detailed in Physical Review D.

The post Plasma physics sets upper limit on the strength of ‘dark electromagnetism’ appeared first on Physics World.

  •  

Quantum effect could tame noisy nanoparticles by rendering them invisible

In the quantum world, observing a particle is not a passive act. If you shine light on a quantum object to measure its position, photons scatter off it and disturb its motion. This disturbance is known as quantum backaction noise, and it limits how precisely physicists can observe or control delicate quantum systems.

Physicists at Swansea University have now proposed a technique that could eliminate quantum backaction noise in optical traps, allowing a particle to remain suspended in space undisturbed. This would bring substantial benefits for quantum sensors, as the amount of noise in a system determines how precisely a sensor can measure forces such as gravity; detect as-yet-unseen interactions between gravity and quantum mechanics; and perhaps even search for evidence of dark matter.

There’s just one catch: for the technique to work, the particle needs to become invisible.

Levitating nanoparticles

Backaction noise is a particular challenge in the field of levitated optomechanics, where physicists seek to trap nanoparticles using light from lasers. “When you levitate an object, the whole thing moves in space and there’s no bending or stress, and the motion is very pure,” explains James Millen, a quantum physicist who studies levitated nanoparticles at King’s College London, UK. “That’s why we are using them to detect crazy stuff like dark matter.”

While some noise is generally unavoidable, Millen adds that there is a “sweet spot” called the Heisenberg limit. “This is where you have exactly the right amount of measurement power to measure the position optimally while causing the least noise,” he explains.

The problem is that laser beams powerful enough to suspend a nanoparticle tend to push the system away from the Heisenberg limit, producing an increase in backaction noise.
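
In the standard language of continuous position measurement (a general result of quantum measurement theory, not something specific to the Swansea proposal), this trade-off is a bound on the product of the imprecision noise and the backaction force noise,

\sqrt{S_x^{\mathrm{imp}}\, S_F^{\mathrm{ba}}} \;\ge\; \frac{\hbar}{2},

where S_x^imp and S_F^ba are the corresponding noise spectral densities. Increasing the laser power lowers the imprecision but raises the backaction, and the Heisenberg limit described above corresponds to operating where this product is as small as quantum mechanics allows.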

Blocking information flow

The Swansea team’s method avoids this problem by, in effect, blocking the flow of information from the trapped nanoparticle. Its proposed setup uses a standing-wave laser to trap a nanoparticle in space with a hemispherical mirror placed around it. When the mirror has a specific radius, the scattered light from the particle and its reflection interfere so that the outgoing field no longer encodes any information about the particle’s position.

At this point, the particle is effectively invisible to the observer, with an interesting consequence: because the scattered light carries no usable information about the particle’s location, quantum backaction disappears. “I was initially convinced that we wanted to suppress the scatter,” team leader James Bateman tells Physics World. “After rigorous calculation, we arrived at the correct and surprising answer: we need to enhance the scatter.”

In fact, when scattering radiation is at its highest, the team calculated that the noise should disappear entirely. “Even though the particle shines brighter than it would in free space, we cannot tell in which direction it moves,” says Rafał Gajewski, a postdoctoral researcher at Swansea and Bateman’s co-author on a paper in Physical Review Research describing the technique.

Gajewski and Bateman’s result flips a core principle of quantum mechanics on its head. While it’s well known that measuring a quantum system disturbs it, the reverse is also true: if no information can be extracted, then no disturbance occurs, even when photons continuously bombard the particle. If physicists do need to gain information about the trapped nanoparticle, they can use a different, lower-energy laser to make their measurements, allowing experiments to be conducted at the Heisenberg limit with minimal noise.

Putting it into practice

For the method to work experimentally, the team say the mirror needs a high-quality surface and a radius that is stable with temperature changes. “Both requirements are challenging, but this level of control has been demonstrated and is achievable,” Gajewski says.

Positioning the particle precisely at the centre of the hemisphere will be a further challenge, he adds, while the “disappearing” effect depends on the mirror’s reflectivity at the laser wavelength. The team is currently investigating potential solutions to both issues.

If demonstrated experimentally, the team says the technique could pave the way for quieter, more precise experiments and unlock a new generation of ultra-sensitive quantum sensors. Millen, who was not involved in the work, agrees. “I think the method used in this paper could possibly preserve quantum states in these particles, which would be very interesting,” he says.

Because nanoparticles are far more massive than atoms, Millen adds, they interact more strongly with gravity, making them ideal candidates for testing whether gravity follows the strange rules of quantum theory.  “Quantum gravity – that’s like the holy grail in physics!” he says.

The post Quantum effect could tame noisy nanoparticles by rendering them invisible appeared first on Physics World.

  •  

Delta.g wins IOP’s qBIG prize for its gravity sensors

The UK-based company Delta.g has bagged the 2025 qBIG prize, which is awarded by the Institute of Physics (IOP). Initiated in 2023, qBIG celebrates and promotes the innovation and commercialization of quantum technologies in the UK and Ireland.

Based in Birmingham, Delta.g makes quantum sensors that measure the local gravity gradient. This is done using atom interferometry, whereby laser pulses are fired at a cloud of cold atoms that is freefalling under gravity.

On the Earth’s surface, this gradient is sensitive to the presence of buildings and underground voids such as tunnels. The technology was developed by physicists at the University of Birmingham and in 2022 they showed how it could be used to map out a tunnel below a road on campus. The system has also been deployed in a cave and on a ship to test its suitability for use in navigation.
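
The underlying measurement can be pictured with the textbook light-pulse atom interferometer relation (generic symbols, not Delta.g’s published operating parameters): a freefalling cloud interrogated by laser pulses separated by a time T accumulates a phase

\Delta\phi = k_{\mathrm{eff}}\, g\, T^{2},

where k_eff is the effective wavevector of the interferometry beams. A gradiometer interrogates two vertically separated clouds with the same beams, so common-mode vibrations largely cancel and the gravity gradient follows from the phase difference, roughly (\Delta\phi_1 - \Delta\phi_2)/(k_{\mathrm{eff}} T^{2} \Delta z) for clouds separated by a distance \Delta z.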

Challenging to measure

“Gravity is a fundamental force, yet its full potential remains largely untapped because it is so challenging to measure,” explains Andrew Lamb who is co-founder and chief technology officer at Delta.g. “As the first to take quantum technology gravity gradiometry from the lab to the field, we have set a new benchmark for high-integrity, noise-resistant data transforming how we understand and navigate the subsurface.”

Awarded by the IOP, the qBIG prize is sponsored by Quantum Exponential, which is the UK’s first enterprise venture capital fund focused on quantum technology. The winner was announced today at the Economist’s Commercialising Quantum Global 2025 event in London. Delta.g receives a £10,000 unrestricted cash prize; 10 months of mentoring from Quantum Exponential; and business support from the IOP.

Louis Barson, the IOP’s director of science, innovation and skills says, “The IOP’s role as UK and Ireland coordinator of the International Year of Quantum 2025 gives us a unique opportunity to showcase the exciting developments in the quantum sector. Huge congratulations must go to the Delta.g team, whose incredible work stood out in a diverse and fast-moving field.”

Two runners-up were commended by the IOP. One is Glasgow-based Neuranics, which makes quantum sensors that detect tiny magnetic signals from the human body. The other is Southampton-based Smith Optical, which makes an augmented-reality display based on quantum technology.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

 

The post Delta.g wins IOP’s qBIG prize for its gravity sensors appeared first on Physics World.

  •  

Electrolysis workstation incorporates mass spectrometry to accelerate carbon-dioxide reduction research

The electrochemical reduction of carbon dioxide is used to produce a range of chemical and energy feedstocks including syngas (hydrogen and carbon monoxide), formic acid, methane and ethylene. As well as being an important industrial process, the large-scale reduction of carbon dioxide by electrolysis offers a practical way to capture and utilize carbon dioxide.

As a result, developing new and improved electrochemical processes for carbon-dioxide reduction is an important R&D activity. This work involves identifying which catalyst and electrolyte materials are optimal for efficient production. And when a promising electrochemical system is identified in the lab, the work is not over because the design must then be scaled up to create an efficient and practical industrial process.

Such R&D activities must overcome several challenges in operating and characterizing potential electrochemical systems. These include maintaining the correct humidification of carbon-dioxide gas during the electrolysis process and minimizing the production of carbonates – which can clog membranes and disrupt electrolysis.

While these challenges can be daunting, they can be overcome using the 670 Electrolysis Workstation from US-based Scribner. This is a general-purpose electrolysis system designed to test the materials used in the conversion of electrical energy to fuels and chemical feedstocks – and it is ideal for developing systems for carbon-dioxide reduction.

Turn-key and customizable

The workstation is a flexible system that is both turn-key and customizable. Liquid and gas reactants can be used on one or both of the workstation’s electrodes. Scribner has equipped the 670 Electrolysis Workstation with cells that feature gas diffusion electrodes and membranes from US-based Dioxide Materials. The company specializes in the development of technologies for converting carbon dioxide into fuels and chemicals, and it was chosen by Scribner because Dioxide Materials’ products are well documented in the scientific literature.

The gas diffusion electrodes are porous graphite cathodes through which carbon-dioxide gas flows between input and output ports. The gas can migrate from the graphite into a layer containing a metal catalyst. Membranes are used in electrolysis cells to ensure that only the desired ions are able to migrate across the cell, while blocking the movement of gases.

Two men in a lab
Fully integrated Scribner’s Jarrett Mansergh (left) and Luke Levin-Pompetzki of Hiden Analytical in Scribner’s lab after integrating the electrolysis and mass-spectrometry systems. (Courtesy: Scribner)

The system employs a multi-range ±20 A and 5 V potentiostat for high-accuracy operation over a wide range of reaction rates and cell sizes. The workstation is controlled by Scribner’s FlowCell™ software, which provides full control and monitoring of test cells and comes pre-loaded with a wide range of experimental protocols. This includes electrochemical impedance spectroscopy (EIS) capabilities up to 20 kHz and cyclic voltammetry protocols – both of which are used to characterize the health and performance of electrochemical systems. FlowCell™ also allows users to set up long-duration experiments while providing safety monitoring with alarm settings for the purging of gases.

Humidified gas

The 670 Electrolysis Workstation features a gas handling unit that can supply humidified gas to test cells. Adding water vapour to the carbon-dioxide reactant is crucial because the water provides the protons that are needed to convert carbon dioxide to products such as methane and syngas. Humidifying gas is very difficult and getting it wrong leads to unwanted condensation in the system. The 670 Electrolysis Workstation uses temperature control to minimize condensation. The same degree of control can be difficult to achieve in homemade systems, leading to failure.

The workstation offers electrochemical cells with 5 cm2 and 25 cm2 active areas. These can be used to build carbon-dioxide reduction cells using a range of materials, catalysts and membranes – allowing the performance of these prototype cells to be thoroughly evaluated. By studying cells at these two different sizes, researchers can scale up their electrochemical systems from a preliminary experiment to something that is closer in size to an industrial system. This makes the 670 Electrolysis Workstation ideal for use across university labs, start-up companies and corporate R&D labs.

The workstation can handle acids, bases and organic solutions. For carbon-dioxide reduction, the cell is operated with a liquid electrolyte on the positive electrode (anode) and gaseous carbon dioxide at the negative electrode (cathode). An electric potential is applied across the electrodes and the product gas comes off the cathode side.

The specific product is largely dependent on the catalyst used at the cathode. If a silver catalyst is used, for example, the cell is likely to produce syngas. If a tin catalyst is used, the product is more likely to be formic acid.

Mass spectrometry

The best way to ensure that the desired products are being made in the cell is to connect the gas output to a mass spectrometer. As a result, Scribner has joined forces with Hiden Analytical to integrate the UK-based company’s HPR-20 mass spectrometer for gas analysis. The Hiden system is specifically configured to perform continuous analysis of evolved gases and vapours from the 670 Electrolysis Workstation.

CO2 reduction cell feature
The Scribner CO2 Reduction Cell Fixture (Courtesy: Scribner)

If a cell is designed to create syngas, for example, the mass spectrometer will determine exactly how much carbon monoxide is being produced and how much hydrogen is being produced. At the same time, researchers can monitor the electrochemical properties of the cell. This allows researchers to study relationships between a system’s electrical performance and the chemical species that it produces.
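
One standard way to tie the two data streams together is the Faradaic efficiency of each product, the fraction of the cell current that ends up in a given gas. The sketch below is a generic illustration of that bookkeeping; the function and the example numbers are ours, not part of Scribner’s FlowCell software, and it assumes the mass spectrometer yields a molar production rate for each gas.

# Generic Faradaic-efficiency bookkeeping: what fraction of the cell current
# went into making each product. Illustrative only; the variable names and
# example rates are hypothetical, not outputs of the Scribner/Hiden system.
F = 96485.0  # Faraday constant, C/mol

def faradaic_efficiency(rate_mol_per_s, electrons_per_molecule, current_A):
    """Fraction of the total current consumed by one product stream."""
    return electrons_per_molecule * F * rate_mol_per_s / current_A

# Example: a syngas-producing cell running at 1.0 A. CO and H2 are both
# two-electron products (of CO2 reduction and water reduction respectively).
i_cell = 1.0                                     # total cell current, A
fe_co = faradaic_efficiency(4.0e-6, 2, i_cell)   # hypothetical CO rate, mol/s
fe_h2 = faradaic_efficiency(1.0e-6, 2, i_cell)   # hypothetical H2 rate, mol/s
print(f"FE(CO) = {fe_co:.1%}, FE(H2) = {fe_h2:.1%}")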

Monitoring gas output is crucial for optimizing electrochemical processes that minimize negative effects such as the production of carbonates, which is a significant problem when doing carbon dioxide reduction.

In electrochemical cells, carbon dioxide is dissolved in a basic solution. This results in the precipitation of carbonate salts that clog up the membranes in cells, greatly reducing performance. This is a significant problem when scaling up cell designs for industrial use because commercial cells must be very long-lived.

Pulsed-mode operation

One strategy for dealing with carbonates is to operate electrochemical cells in pulsed mode, rather than in a steady state. The off time allows the carbonates to migrate away from electrodes, which minimizes clogging. The 670 Electrolysis Workstation allows users to explore the use of short, second-scale pulses. Another option that researchers can explore is the use of pulses of fresh water to flush carbonates away from the cathode area. These and other options are available in a set of pre-programmed experiments that allow users to explore the mitigation of salt formation in their electrochemical cells.

The gaseous products of these carbonate-mitigation modes can be monitored in real time using Hiden’s mass spectrometer. This allows researchers to identify any changes in cell performance that are related to pulsed operation. Currently, electrochemical and product characteristics can be observed on time scales as short as 100 ms. This allows researchers to fine-tune how pulses are applied to minimize carbonate production and maximize the production of desired gases.

Real-time monitoring of product gases is also important when using EIS to observe the degradation of the electrochemical performance of a cell over time. This provides researchers with a fuller picture of what is happening in a cell as it ages.

The integration of Hiden’s mass spectrometer with the 670 Electrolysis Workstation is the latest innovation from Scribner. Now, the company is working on improving the time resolution of the system so that even shorter pulse durations can be studied by users. The company is also working on boosting the maximum current of the 670 to 100 A.

The post Electrolysis workstation incorporates mass spectrometry to accelerate carbon-dioxide reduction research appeared first on Physics World.

  •  

‘We must prioritize continuity and stability to maintain momentum’: Mauro Paternostro on how to ensure that quantum tech continues to thrive

As we celebrate the International Year of Quantum Science and Technology, the quantum technology landscape is a swiftly evolving place. From developments in error correction and progress in hybrid classical-quantum architectures all the way to the commercialization of quantum sensors, there is much to celebrate.

An expert in quantum information processing and quantum technology, physicist Mauro Paternostro is based at the University of Palermo and Queen’s University Belfast. He is also editor-in-chief of the IOP Publishing journal Quantum Science and Technology, which celebrates its 10th anniversary this year. Paternostro talks to Tushna Commissariat about the most exciting recent developments in the field, his call for a Quantum Erasmus programme and his plans for the future of the journal.

What’s been the most interesting development in quantum technologies over the last year or so?

I have a straightforward answer as well as a more controversial one. First, the simpler point: the advances in quantum error correction for large-scale quantum registers are genuinely exciting. I’m specifically referring to the work conducted by Mikhail Lukin, Dolev Bluvstein and colleagues at Harvard University, and at the Massachusetts Institute of Technology and QuEra Computing, who built a quantum processor with 48 logical qubits that can execute algorithms while correcting errors in real time. In my opinion, this marks a significant step forward in developing computational platforms with embedded robustness. Error correction plays a vital role in the development of practical quantum computers, and Lukin and colleagues won Physics World’s 2024 Breakthrough of the Year award for their work.

Quantum error correction
Logical minds Dolev Bluvstein (left) and Mikhail Lukin with their quantum processor. (Courtesy: Jon Chase/Harvard University)

You can listen to Mikhail Lukin and Dolev Bluvstein explain how they used trapped atoms to create 48 logical qubits on the Physics World Weekly podcast.

Now, for the more complex perspective. Aside from ongoing debate about whether Microsoft’s much-discussed eight-qubit topological quantum processor – Majorana 1 – is genuinely using topological qubits, I believe the device will help to catalyze progress in integrated quantum chips. While it may not qualify as a genuine breakthrough in the long run, this moment could be the pivotal turning-point in the evolution of quantum computational platforms. All the major players will likely feel compelled to accelerate their efforts toward the unequivocal demonstration of “quantum chip” capabilities, and such a competitive drive is just what both industry and government need right now.

Majorana 1
Technical turning-point? Microsoft has unveiled a quantum processor called Majorana 1 that boasts a “topological core”. (Courtesy: John Brecher/Microsoft)

How do you think quantum technologies will scale up as they emerge from the lab and into real-world applications?

I am optimistic in this regard. In fact, progress is already underway, with quantum-sensing devices and atomic quantum clocks achieving the levels of technological readiness necessary for practical, real-world applications. In the future, hybrid quantum-high-performance computing (HPC) architectures will play crucial roles in bridging classical data analysis with whatever the field evolves into, once quantum computers can offer genuine “quantum advantage” over classical machines.

Regarding communication, the substantial push toward networked, large-scale communication structures is noteworthy. The availability of the first operating system for programmable quantum networks opens “highways” toward constructing a large-scale “quantum internet”. This development promises to transform the landscape of communication, enabling new possibilities that we are just beginning to explore.

What needs to be done to ensure that the quantum sector can deliver on its promises in Europe and the rest of the world?

We must prioritize continuity and stability to maintain momentum. The national and supranational funding programmes that have supported developments and achievements over the past few years should not only continue, but be enhanced. I am concerned, however, that the current geopolitical climate, which is undoubtedly challenging, may divert attention and funding away from quantum technologies. Additionally, I worry that some researchers might feel compelled to shift their focus toward areas that align more closely with present priorities, such as military applications. While such shifts are understandable, they may not help us keep pace with the remarkable progress the field has made since governments in Europe and beyond began to invest substantially.

On a related note, we must take education seriously. It would be fantastic to establish a Quantum Erasmus programme that allows bachelor’s, master’s and PhD students in quantum technology to move freely across Europe so that they can acquire knowledge and expertise. We need coordinated national and supranational initiatives to build a pipeline of specialists in this field. Such efforts would provide the significant boost that quantum technology needs to continue thriving.

How can the overlap between quantum technology and artificial intelligence (AI) help each other develop?

The intersection and overlap between AI, high-performance computing, and quantum technologies are significant, and their interplay is, in my opinion, one of the most promising areas of exploration. While we are still in the early stages, we have only just started to tap into the potential of AI-based tools for tackling quantum tasks. We are already witnessing the emergence of the first quantum experiments supported by this hybrid approach to information processing.

The convergence of AI, HPC, and quantum computing would revolutionize how we conceive data processing, analysis, forecasting and many other such tasks. As we continue to explore and refine these technologies, the possibilities for innovation and advancement are vast, paving the way for transformations in various fields.

What do you hope the International Year of Quantum Science and Technology (IYQ) will have achieved, going forward?

The IYQ represents a global acknowledgment, at the highest levels, of the immense potential within this field. It presents a genuine opportunity to raise awareness worldwide about what a quantum paradigm for technological development can mean for humankind. It serves as a keyhole into the future, and IYQ could enable an unprecedented number of individuals – governments, leaders and policymakers alike – to peek through it and glimpse this potential.

All stakeholders in the field should contribute to making this a memorable year. With IYQ, 2025 might even be considered as “year zero” of the quantum technology era.

As we mark its 10th anniversary, how have you enjoyed your time over the last year as editor-in-chief of the journal Quantum Science and Technology (QST)?

Time flies when you have fun, and this is a good time for me to reflect on the past year. Firstly, I want to express my heartfelt gratitude to Rob Thew, the founding editor-in-chief of QST, for his remarkable leadership during the journal’s early years. With unwavering dedication, he and the rest of the editorial board have established QST as an authoritative and selective reference point for the community engaged in the broad field of quantum science and technology. The journal is now firmly recognized as a leading platform for timely and significant research outcomes. A 94% increase in submissions since our fifth anniversary has led to an impressive 747 submissions from 62 countries in 2024 alone, revealing the growing recognition and popularity of QST among scholars. Our acceptance rate of 27% further demonstrates our commitment to publishing only the highest-calibre research.

QST has, over the last 10 years, sought to feature research covering the breadth of the field within curated focus issues on topics such as Quantum optomechanics; Quantum photonics: chips and dots; Quantum software; Perspectives on societal aspects and impacts of quantum technologies; and Cold atoms in space.

As we celebrate IYQ, QST will lead the way with several exciting editorial initiatives aimed at disseminating the latest achievements in addressing the essential “pillars” of quantum technologies – computing, communication, sensing and simulation – while also providing authoritative perspectives and visions for the future. Our current focus collections invite research for “Quantum technologies for quantum gravity” and “Focus on perspectives on the future of variational quantum computing”.

What are your goals with QST, looking ahead?

As quantum technologies advance into an inter- and multi-disciplinary realm, merging fundamental quantum-science with technological applications, QST is evolving as well. We have an increasing number of submissions addressing the burgeoning area of machine learning-enhanced quantum information processing, alongside pioneering studies exploring the application of quantum computing in fields such as chemistry, materials science and quantitative finance. All of this illustrates how QST is proactive in seizing opportunities to advance knowledge from our community of scholars and authors.

This dynamic growth is a fantastic way to celebrate the journal’s 10th anniversary, especially with the added significant milestone of IYQ. Finally, I want to highlight a matter that is very close to my heart, reflecting a much-needed “duty of care” for our readership. As editor-in-chief, I am honoured to support a journal that is part of the ‘Purpose-Led Publishing’ initiative. I view this as a significant commitment to integrity, ethics, high standards, and transparency, which should be the foundation of any scientific endeavour.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

The post ‘We must prioritize continuity and stability to maintain momentum’: Mauro Paternostro on how to ensure that quantum tech continues to thrive appeared first on Physics World.

  •  

Evidence for a superconducting gap emerges in hydrogen sulphides

Researchers in Germany report that they have directly measured a superconducting gap in a hydrogen sulphide material for the first time. The new finding represents “smoking gun” evidence for superconductivity in these materials, while also confirming that the electron pairing that causes it is mediated by phonons.

Superconductors are materials that conduct electricity without resistance. Many materials behave this way when cooled below a certain transition temperature Tc, but in most cases this temperature is very low. For example, solid mercury, the first superconductor to be discovered, has a Tc of 4.2 K. Superconductors that operate at higher temperatures – perhaps even at room temperature – are thus highly desirable, as an ambient-temperature superconductor would dramatically increase the efficiency of electrical generators and transmission lines.

The rise of the superhydrides

The 1980s and 1990s saw considerable progress towards this goal thanks to the discovery of high-temperature copper oxide superconductors, which have Tcs of 30–133 K. Then, in 2015, the maximum known critical temperature rose even higher thanks to the discovery that a sulphide material, H3S, has a Tc of 203 K when compressed to pressures of 150 GPa.

This result sparked a flurry of interest in solid materials containing hydrogen atoms bonded to other elements. In 2019, the record was broken again, this time by lanthanum decahydride (LaH10), which was found to have a Tc of 250–260 K, again at very high pressures.

A further advance occurred in 2021 with the discovery of high-temperature superconductivity in cerium hydrides. These novel phases of CeH9 and another newly-synthesized material, CeH10, are remarkable in that they are stable and display high-temperature superconductivity at lower pressures (about 80 GPa, or 0.8 million atmospheres) than the other so-called “superhydrides”.

But how does it work?

One question left unanswered amid these advances concerned the mechanism for superhydride superconductivity. According to the Bardeen–Cooper–Schrieffer (BCS) theory of “conventional” superconductivity, superconductivity occurs when electrons overcome their mutual electrical repulsion to form pairs. These electron pairs, which are known as Cooper pairs, can then travel unhindered through the material as a supercurrent without scattering off phonons (quasiparticles arising from vibrations of the material’s crystal lattice) or other impurities.

Cooper pairing is characterized by a tell-tale energy gap near what’s known as the Fermi level, which is the highest energy level that electrons can occupy in a solid at a temperature of absolute zero. This gap is equivalent to the maximum energy required to break up a Cooper pair of electrons, and spotting it is regarded as unambiguous proof of that material’s superconducting nature.

For the superhydrides, however, this is easier said than done, because measuring such a gap requires instruments that can withstand the extremely high pressures required for superhydrides to exist and behave as superconductors. Traditional techniques such as scanning tunnelling spectroscopy or angle-resolved photoemission spectroscopy do not work, and there was little consensus on what might take their place.

Planar electron tunnelling spectroscopy

A team led by researchers at Germany’s Max Planck Institute for Chemistry has now stepped in by developing a form of spectroscopy that can operate under extreme pressures. The technique, known as planar electron tunnelling spectroscopy, required the researchers to synthesize highly pure planar tunnel junctions of H3S and its deuterated equivalent D3S under pressures of over 100 GPa. Using a technique called laser heating, they created junctions with three parts: a metal, tantalum; a barrier made of tantalum pentoxide, Ta2O5; and the H3S or D3S superconductors. By measuring the differential conductance across the junctions, they determined the density of electron states in H3S and D3S near the Fermi level.

These tunnelling spectra revealed that both H3S and D3S have fully open superconducting gaps of 60 meV and 44 meV respectively. According to team member Feng Du, the smaller gap in D3S confirms that the superconductivity in H3S comes about thanks to interactions between electrons and phonons – a finding that backs up long-standing predictions.
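
As a rough consistency check (our own back-of-the-envelope arithmetic, not a result from the paper), the reported H3S gap can be set against the weak-coupling BCS relation 2Δ ≈ 3.53 k_B T_c for T_c = 203 K. Read as the gap parameter Δ, 60 meV gives a ratio near 6.9, well into the strong-coupling regime; read as the full gap 2Δ, it gives about 3.4, close to the weak-coupling value.

# Back-of-the-envelope comparison with weak-coupling BCS theory; our own
# arithmetic, using only the Tc and gap values quoted in this article.
k_B = 8.617e-5      # Boltzmann constant, eV/K
T_c = 203.0         # critical temperature of H3S, K
gap = 60e-3         # reported gap for H3S, eV

kTc = k_B * T_c                                     # ~17.5 meV
print(f"k_B*T_c = {kTc * 1e3:.1f} meV")
print(f"2*gap / (k_B*T_c) = {2 * gap / kTc:.2f}")   # ~6.9 if 60 meV is Delta
print(f"  gap / (k_B*T_c) = {gap / kTc:.2f}")       # ~3.4 if 60 meV is 2*Delta
# Weak-coupling BCS predicts 2*Delta/(k_B*Tc) of about 3.53.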

The researchers hope their work, which they report on in Nature, will inspire more detailed studies of superhydrides. They now plan to measure the superconducting gap of other metal superhydrides and compare them with the covalent superhydrides they studied in this work. “The results from such experiments could help us understand the origin of the high Tc in these superconductors,” Du tells Physics World.

The post Evidence for a superconducting gap emerges in hydrogen sulphides appeared first on Physics World.

  •  

Smartphone sensors and antihydrogen could soon put relativity to the test

Researchers on the AEgIS collaboration at CERN have designed an experiment that could soon boost our understanding of how antimatter falls under gravity. Created by a team led by Francesco Guatieri at the Technical University of Munich, the scheme uses modified smartphone camera sensors to improve the spatial resolution of measurements of antimatter annihilations. This approach could be used in rigorous tests of the weak equivalence principle (WEP).

The WEP is a key concept of Albert Einstein’s general theory of relativity, which underpins our understanding of gravity. It suggests that within a gravitational field, all objects should be accelerated at the same rate, regardless of their mass or whether they are matter or antimatter. Therefore, if matter and antimatter were found to accelerate at different rates in freefall, it would reveal serious problems with the WEP.

In 2023 the ALPHA-g experiment at CERN was the first to observe how antimatter responds to gravity. The researchers found that it falls down, with the tantalizing possibility that antimatter’s gravitational response is weaker than matter’s. Today, several experiments are seeking to improve on this observation.

Falling beam

AEgIS’ approach is to create a horizontal beam of cold antihydrogen atoms and observe how the atoms fall under gravity. The drop will be measured by a moiré deflectometer in which a beam passes through two successive and aligned grids of horizontal slits before striking a position-sensitive detector. As the beam falls under gravity between the grids, the effect is similar to a slight horizontal misalignment of the grids. This creates a moiré pattern – or superlattice – that results in the particles making a distinctive pattern on the detector. By detecting a difference in the measured moiré pattern and that predicted by WEP, the AEgIS collaboration hopes to reveal a discrepancy with general relativity.
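
In the standard description of a moiré deflectometer (textbook relations with generic symbols, not AEgIS’s published numbers), a uniform acceleration g displaces the fringe pattern at the detector by

\delta y = g\,T^{2}, \qquad \varphi = \frac{2\pi\, g\, T^{2}}{d},

where T is the time of flight between the equally spaced gratings, d is the grating period and \varphi is the equivalent phase shift of the pattern. Locating that pattern precisely therefore yields g for antihydrogen, which is why the spatial resolution of the detector is so critical.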

However, as Guatieri explains, a number of innovations are required for this to work. “For AEgIS to work, we need a detector with incredibly high spatial resolution. Previously, photographic plates were the only option, but they lacked real-time capabilities.”

AEgIS physicists are addressing this by developing a new vertexing detector. Instead of focussing on the antiparticles directly, their approach detects the secondary particles produced when the antimatter annihilates on contact with the detector. Tracing the trajectories of these particles back to their vertex gives the precise location of the annihilation.
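
The geometry behind this step can be illustrated with a toy calculation: given straight-line tracks for the secondary particles, the annihilation vertex is approximately the point of closest approach to all of them. The least-squares sketch below is our own illustration of that idea, not the AEgIS reconstruction code.

import numpy as np

def fit_vertex(points, directions):
    """Least-squares point closest to a set of straight tracks.

    points:     (N, 3) array, one point on each reconstructed track
    directions: (N, 3) array, direction of each track
    Returns the 3D position minimising the summed squared distance to all
    tracks, a toy stand-in for vertex reconstruction.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(points, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the track
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Toy example: three mutually perpendicular tracks that all pass through
# the origin, so the fitted vertex should come out at (0, 0, 0).
pts = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
dirs = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
print(fit_vertex(pts, dirs))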

Vertexing detector

Borrowing from industry, the team has created its vertexing detector using an array of modified mobile-phone camera sensors (see figure). Guatieri had already used this approach to measure the real-time positions of low-energy positrons (anti-electrons) with unprecedented precision.

“Mobile camera sensors have pixels smaller than 1 micron,” Guatieri describes. “We had to strip away the first layers of the sensors, which are made to deal with the advanced integrated electronics of mobile phones. This required high-level electronic design and micro-engineering.”

With these modifications in place, the team measured the positions of antiproton annihilations to within just 0.62 micron, making their detector some 35 times more precise than previous designs.

Many benefits

“Our solution, demonstrated for antiprotons and directly applicable to antihydrogen, combines photographic-plate-level resolution, real-time diagnostics, self-calibration and a good particle collection surface, all in one device,” Guatieri says.

With some further improvements, the AEgIS team is confident that its vertexing detector will boost the resolution of measurements of the freefall of horizontal antihydrogen beams – allowing rigorous tests of the WEP.

AEgIS team member Ruggero Caravita of Italy’s University of Trento adds, “This game-changing technology could also find broader applications in experiments where high position resolution is crucial, or to develop high-resolution trackers”. He says, “Its extraordinary resolution enables us to distinguish between different annihilation fragments, paving the way for new research on low-energy antiparticle annihilation in materials”.

The research is described in Science Advances.

The post Smartphone sensors and antihydrogen could soon put relativity to the test appeared first on Physics World.


Ray Dolby Centre opens at the University of Cambridge

A ceremony has been held today to officially open the Ray Dolby Centre at the University of Cambridge. Named after the Cambridge physicist and sound pioneer Ray Dolby, who died in 2013, the facility is the new home of the Cavendish Laboratory and will feature 173 labs as well as lecture halls, workshops, cleanrooms and offices.

Designed by the architecture and interior design practice Jestico + Whiles (who also designed the UK’s £61m National Graphene Institute) and constructed by Bouygues UK, the centre has been funded by £85m from Dolby’s estate as well as £75m from the UK’s Engineering and Physical Sciences Research Council (EPSRC).

Spanning 33 000 m² across five floors, the new centre will house 1100 staff members and students.

The basement will feature microscopy and laser labs containing vibration-sensitive equipment as well as 2500 m² of clean rooms.

The Dolby centre will also serve as a national hub for physics, hosting the Collaborative R&D Environment – an EPSRC National Facility – that will foster collaboration between industry and university researchers and enhance public access to new research.

Parts of the centre will be open to the public, including a café as well as outreach and exhibition spaces that are organised around six courtyards.

The centre also provides a new home for the Cavendish Museum, which includes the model of DNA created by James Watson and Francis Crick as well as the cathode ray tube that was used to discover the electron.

The ceremony today was attended by Dagmar Dolby, president of the Ray and Dagmar Dolby Family Fund, Deborah Prentice, vice-chancellor of the University of Cambridge and physicist Mete Atatüre, who is head of the Cavendish Laboratory.

“The greatest impacts on society – including the Cavendish’s biggest discoveries – have happened because of that combination of technological capability and human ingenuity,” notes Atatüre. “Science is getting more complex and technically demanding with progress, but now we have the facilities we need for our scientists to ask those questions, in the pursuit of discovering creative paths to the answers – that’s what we hope to create with the Ray Dolby Centre.”

The post Ray Dolby Centre opens at the University of Cambridge appeared first on Physics World.


Neutron Airy beams make their debut

Physicists have succeeded in making neutrons travel in a curved parabolic waveform known as an Airy beam. This behaviour, which had previously been observed in photons and electrons but never in a non-elementary particle, could be exploited in fundamental quantum science research and in advanced imaging techniques for materials characterization and development.

In free space, beams of light propagate in straight lines. When they pass through an aperture, they diffract, becoming wider and less intense. Airy beams, however, are different. Named after the 19th-century British scientist George Biddell Airy, who developed the mathematics behind them while studying rainbows, they follow a parabola-shaped path – a property known as self-acceleration – and do not spread out as they travel. Airy beams are also “self-healing”, meaning that they reconstruct themselves after passing through an obstacle that blocked part of the beam.

Scientists have been especially interested in Airy beams since 1979, when theoretical work by the physicist Michael Berry suggested several possible applications for them, says Dmitry Pushin, a physicist at the Institute for Quantum Computing (IQC) and the University of Waterloo, Canada. Researchers created the first Airy beams from light in 2007, followed by an electron Airy beam in 2013.

“Inspired by the unusual properties of these beams in optics and electron experiments, we wondered whether similar effects could be harnessed for neutrons,” Pushin says.

Making such beams out of neutrons turned out to be challenging, however. Because neutrons have no charge, they cannot be shaped by electric fields. Also, lenses that focus neutron beams do not exist.

A holographic approach

A team led by Pushin and Dusan Sarenac of the University at Buffalo’s Department of Physics in the US has now overcome these difficulties using a holographic approach based on a custom-microfabricated silicon diffraction grating. The team made this grating from an array of 6 250 000 micron-sized cubic phase patterns etched onto a silicon slab. “The grating modulates incoming neutrons into an Airy form and the resulting beam follows a curved trajectory, exhibiting the characteristics of a two-dimensional Airy profile at a neutron detector,” Sarenac explains.
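The principle at work can be illustrated numerically: a cubic phase imposed on an incoming plane wave produces an Airy-shaped intensity profile in the far field. The one-dimensional sketch below uses arbitrary illustrative parameters and is not a model of the team’s actual grating design.

```python
import numpy as np

# A cubic phase profile applied to a plane wave yields an Airy-shaped
# far-field pattern (illustrative 1D sketch with arbitrary parameters)
N = 4096
x = np.linspace(-1.0, 1.0, N)   # transverse coordinate across the mask (arb. units)
a = 200.0                        # cubic-phase strength (arb. units)
field_at_mask = np.exp(1j * a * x**3)

# Far-field (Fraunhofer) pattern is proportional to the Fourier transform
far_field = np.fft.fftshift(np.fft.fft(field_at_mask))
intensity = np.abs(far_field) ** 2

# The profile has one dominant lobe with decaying side lobes on a single side,
# the hallmark of an Airy beam cross-section
peak = np.argmax(intensity)
print(f"main lobe at sample {peak} of {N}; "
      f"peak/median intensity ratio ~ {intensity.max() / np.median(intensity):.0f}")
```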

According to Pushin, it took years of work to figure out the correct dimensions for the array. Once the design was optimized, however, fabricating it took just 48 hours at the IQC’s nanofabrication facility. “Developing a precise wave phase modulation method using holography and silicon microfabrication allowed us to overcome the difficulties in manipulating neutrons,” he says.

The researchers say the self-acceleration and self-healing properties of Airy beams could improve existing neutron imaging techniques (including neutron scattering and diffraction), potentially delivering sharper and more detailed images. The new beams might even allow for new types of neutron optics and could be particularly useful, for example, when targeting specific regions of a sample or navigating around structures.

Creating the neutron Airy beams required access to international neutron science facilities such as the US National Institute of Standards and Technology’s Center for Neutron Research; the US Department of Energy’s Oak Ridge National Laboratory; and the Paul Scherrer Institute in Villigen, Switzerland. To continue their studies, the researchers plan to use the UK’s ISIS Neutron and Muon Source to explore ways of combining neutron Airy beams with other structured neutron beams (such as helical waves of neutrons or neutron vortices). This could make it possible to investigate complex properties such as the chirality, or handedness, of materials. Such work could be useful in drug development and materials science. Since a material’s chirality affects how its electrons spin, it could be important for spintronics and quantum computing, too.

“We also aim to further optimize beam shaping for specific applications,” Sarenac tells Physics World. “Ultimately, we hope to establish a toolkit for advanced neutron optics that can be tailored for a wide range of scientific and industrial uses.”

The present work is detailed in Physical Review Letters.

The post Neutron Airy beams make their debut appeared first on Physics World.


‘Chatty’ artificial intelligence could improve student enthusiasm for physics and maths, finds study

Chatbots could boost students’ interest in maths and physics and make learning more enjoyable. So say researchers in Germany, who compared the emotional responses of students who used artificial intelligence (AI) generated texts to learn physics with those of students who only read traditional textbooks. The team, however, found no difference in test performance between the two groups.

The study was led by Julia Lademann, a physics-education researcher from the University of Cologne, who wanted to see if AI could boost students’ interest in physics. Her team did this by creating a customized chatbot using OpenAI’s ChatGPT model, with a tone and language considered accessible to second-year high-school students in Germany.

After testing the chatbot for factual accuracy and for its use of motivating language, the researchers prompted it to generate explanatory text on proportional relationships in physics and mathematics. They then split 214 students, who had an average age of 11.7, into two groups. One was given textbook material on the topic along with the chatbot text, while the control group got the textbook material only.

The researchers first surveyed the students’ interest in mathematics and physics and then gave them 15 minutes to review the learning material. Their interest was assessed again afterwards along with the students’ emotional state and “cognitive load” – the mental effort required to do the work – through a series of questionnaires.

Higher confidence

The chatbot was found to significantly enhance students’ positive emotions – including pleasure and satisfaction, interest in the learning material and self-belief in their understanding of the subject — compared with those who only used textbook text. “The text of the chatbot is more human-like, more conversational than texts you will find in a textbook,” explains Lademann. “It is more chatty.”

Chatbot text was also found to reduce cognitive load. “The group that used the chatbot explanation experienced higher positive feelings about the subject [and] they also had a higher confidence in their learning comprehension,” adds Lademann.

Tests taken within 30 minutes of the “learning phase” of the experiment, however, found no difference in performance between students that received the AI-generated explanatory text and the control group, despite the former receiving more information. Lademann says this could be due to the short study time of 15 minutes.

The researchers say that while their findings suggest that AI could provide a superior learning experience for students, further research is needed to assess its impact on learning performance and long-term outcomes. “It is also important that this improved interest manifests in improved learning performance,” Lademann adds.

Lademann would now like to see “longer term studies with a lot of participants and with children actually using the chatbot”. Such research would explore the key potential strength of chatbots: their ability to respond in real time to students’ queries and to adapt the level of the material to each individual student.

The post ‘Chatty’ artificial intelligence could improve student enthusiasm for physics and maths, finds study appeared first on Physics World.


Loop quantum cosmology may explain smoothness of cosmic microwave background

First light The cosmic microwave background, as imaged by the European Space Agency’s Planck mission. (Courtesy: ESA and the Planck Collaboration)

In classical physics, gravity is universally attractive. At the quantum level, however, this may not always be the case. If vast quantities of matter are present within an infinitesimally small volume – at the centre of a black hole, for example, or during the very earliest moments of the universe – space–time becomes curved at scales that approach the Planck length. This is the fundamental quantum unit of distance, and is around 10²⁰ times smaller than a proton.
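As a quick back-of-the-envelope check of that ratio, using round values of about 1.6 × 10⁻³⁵ m for the Planck length and about 0.8 × 10⁻¹⁵ m for the proton radius:

```latex
\[
  \frac{r_\mathrm{proton}}{\ell_\mathrm{Planck}}
  \approx \frac{0.8\times10^{-15}~\mathrm{m}}{1.6\times10^{-35}~\mathrm{m}}
  = 5\times10^{19} \sim 10^{20}
\]
```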

In these extremely curved regions, the classical theory of gravity – Einstein’s general theory of relativity – breaks down. However, research on loop quantum cosmology offers a possible solution. It suggests that gravity, in effect, becomes repulsive. Consequently, loop quantum cosmology predicts that our present universe began in a so-called “cosmic bounce”, rather than the Big Bang singularity predicted by general relativity.

In a recent paper published in EPL, Edward Wilson-Ewing, a mathematical physicist at the University of New Brunswick, Canada, explores the interplay between loop quantum cosmology and a phenomenon sometimes described as “the echo of the Big Bang”: the cosmic microwave background (CMB). This background radiation pervades the entire visible universe, and it stems from the moment the universe became cool enough for neutral atoms to form. At this point, light was suddenly able to travel through space without being continually scattered by the plasma of electrons and light nuclei that existed before. It is this freshly liberated light that makes up the CMB, so studying it offers clues to what the early universe was like.

Edward Wilson-Ewing
Cosmologist Edward Wilson-Ewing uses loop quantum gravity to study quantum effects in the very early universe. (Courtesy: University of New Brunswick)

What was the motivation for your research?

Observations of the CMB show that the early universe (that is, the universe as it was when the CMB formed) was extremely homogeneous, with relative anisotropies of the order of one part in 10⁴. Classical general relativity has trouble explaining this homogeneity on its own, because a purely attractive version of gravity tends to drive things in the opposite direction. This is because if a region has a higher density than the surrounding area, then according to general relativity, that region will become even denser; there is more mass in that region and therefore particles surrounding it will be attracted to it. Indeed, this is how the small inhomogeneities we do see in the CMB grew over time to form stars and galaxies today.

The main way this gets resolved in classical general relativity is to suggest that the universe experienced an episode of super-rapid growth in its earliest moments. This super-rapid growth is known as inflation, and it can suffice to generate homogeneous regions. However, in general, this requires a very large amount of inflation (much more than is typically considered in most models).

Alternately, if for some reason there happens to be a region that is moderately homogeneous when inflation starts, this region will increase exponentially in size while also becoming further homogenized. This second possibility requires a little more than a minimal amount of inflation, but not much more.

My goal in this work was to explore whether, if gravity becomes repulsive in the deep quantum regime (as is the case in loop quantum cosmology), this will tend to dilute regions of higher density, leading to inhomogeneities being smoothed out. In other words, one of the main objectives of this work was to find out whether quantum gravity could be the source of the high degree of homogeneity observed in the CMB.

What did you do in the paper?

In this paper, I studied spherically symmetric space–times coupled to dust (a simple model for matter) in loop quantum cosmology.  These space–times are known as Lemaître–Tolman–Bondi space–times, and they allow arbitrarily large inhomogeneities in the radial direction. They therefore provide an ideal arena to explore whether homogenization can occur: they are simple enough to be mathematically tractable, while still allowing for large inhomogeneities (which, in general, are very hard to handle).

Loop quantum cosmology predicts several leading-order quantum effects. One of these effects is that space–time, at the quantum level, is discrete: there are quanta of geometry just as there are quanta of matter.  This has implications for the equations of motion, which relate the geometry of space–time to the matter in it: if we take into account the discrete nature of quantum geometry, we have to modify the equations of motion.

These modifications are captured by so-called effective equations, and in the paper I solved these equations numerically for a wide range of initial conditions. From this, I found that while homogenization doesn’t occur everywhere, it always occurs in some regions. These homogenized regions can then be blown up to cosmological scales by inflation (and inflation will further homogenize them).  Therefore, this quantum gravity homogenization process could indeed explain the homogeneity observed in the CMB.

What do you plan to do next?

It is important to extend this work in several directions to check the robustness of the homogenization effect in loop quantum cosmology.  The restriction to spherical symmetry should be relaxed, although this will be challenging from a mathematical perspective. It will also be important to go beyond dust as a description of matter. The simplicity of dust makes calculations easier, but it is not particularly realistic.

Other relevant forms of matter include radiation and the so-called inflaton field, which is a type of matter that can cause inflation to occur. That said, in cosmology, the physics is to some extent independent of the universe’s matter content, at least at a qualitative level. This is because while different types of matter content may dilute more rapidly than others in an expanding universe, and the universe may expand at different rates depending on its matter content, the main properties of the cosmological dynamics (for example, the expanding universe, the occurrence of an initial singularity and so on) within general relativity are independent of the specific matter being considered.

I therefore think it is reasonable to expect that the quantitative predictions will depend on the matter content, but the qualitative features (in particular, that small regions are homogenized by quantum gravity) will remain the same. Still, further research is needed to test this expectation.

The post Loop quantum cosmology may explain smoothness of cosmic microwave background appeared first on Physics World.


Molecular engineering and battery recycling: developing new technologies in quantum, medicine and energy

This episode of the Physics World Weekly podcast comes from the Chicago metropolitan area – a scientific powerhouse that is home to two US national labs and some of the country’s leading universities.

Physics World’s Margaret Harris was there recently and met Nadya Mason. She is dean of the Pritzker School of Molecular Engineering at the University of Chicago, which focuses on quantum engineering; materials for sustainability; and immunoengineering. Mason explains how molecular-level science is making breakthroughs in these fields and she talks about her own research on the electronic properties of nanoscale and correlated systems.

Harris also spoke to Jeffrey Spangenberger who leads the Materials Recycling Group at Argonne National Laboratory, which is on the outskirts of Chicago. Spangenberger talks about the challenges of recycling batteries and how we could make it easier to recover materials from batteries of the future. Spangenberger leads the ReCell Center, a national collaboration of industry, academia and national laboratories that is advancing recycling technologies along the entire battery life-cycle.

On 13–14 May, The Economist is hosting Commercialising Quantum Global 2025 in London. The event is supported by the Institute of Physics – which brings you Physics World. Participants will join global leaders from business, science and policy for two days of real-world insights into quantum’s future. In London you will explore breakthroughs in quantum computing, communications and sensing, and discover how these technologies are shaping industries, economies and global regulation. Register now.

The post Molecular engineering and battery recycling: developing new technologies in quantum, medicine and energy appeared first on Physics World.


European centre celebrates 50 years at the forefront of weather forecasting

What is the main role of the European Centre for Medium-Range Weather Forecasts (ECMWF)?

Making weather forecasts more accurate is at the heart of what we do at the ECMWF, working in close collaboration with our member states and their national meteorological services (see box below). That means enhanced forecasting for the weeks and months ahead as well as seasonal and annual predictions. We also have a remit to monitor the atmosphere and the environment – globally and regionally – within the context of a changing climate.

How does the ECMWF produce its weather forecasts?

Our task is to get the best representation, in a 3D sense, of the current state of the atmosphere in terms of key metrics like wind, temperature, humidity and cloud cover. We do this via a process of reanalysis and data assimilation: combining the previous short-range weather forecast, and its component data, with the latest atmospheric observations – from satellites, ground stations, radars, weather balloons and aircraft. Unsurprisingly, using all this observational data is a huge challenge, with the exploitation of satellite measurements a significant driver of improved forecasting over the past decade.
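The core of that blending step can be sketched in one dimension: weight the previous forecast (the “background”) and the new observation by their respective error variances. This is a toy scalar illustration of the general idea, with made-up numbers; it is not ECMWF’s operational assimilation system.

```python
def analysis_update(background, obs, var_background, var_obs):
    """Blend a background forecast with an observation, weighting by error variance."""
    gain = var_background / (var_background + var_obs)  # Kalman-style gain
    analysis = background + gain * (obs - background)
    var_analysis = (1.0 - gain) * var_background
    return analysis, var_analysis

# Hypothetical numbers: a short-range forecast says 12.0 C, a station reports 13.5 C
temperature, variance = analysis_update(background=12.0, obs=13.5,
                                        var_background=1.0, var_obs=0.5)
print(f"analysis temperature: {temperature:.2f} C (variance {variance:.2f})")
```

Operational systems apply the same logic to millions of variables and observations at once, using far more sophisticated variational methods rather than a single scalar gain.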

In what ways do satellite measurements help?

Consider the EarthCARE satellite that was launched in May 2024 by the European Space Agency (ESA) and is helping ECMWF to improve its modelling of clouds, aerosols and precipitation. EarthCARE has a unique combination of scientific instruments – a cloud-profiling radar, an atmospheric lidar, a multispectral imager and a broadband radiometer – to infer the properties of clouds and how they interact with solar radiation as well as thermal-infrared radiation emitted by different layers of the atmosphere.

How are you combining such data with modelling?

The ECMWF team is learning how to interpret and exploit the EarthCARE data to directly initialize and improve our models. Put simply, that means mathematical models that better represent clouds and, in turn, yield more accurate forecasts. Indirectly, EarthCARE is also revealing a clearer picture of the fundamental physics governing cloud formation, distribution and behaviour. This is just one example of numerous developments taking advantage of new satellite data. We are looking forward, in particular, to fully exploiting next-generation satellite programmes from the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) – including the EPS-SG polar-orbiting system and the Meteosat Third Generation geostationary satellite for continuous monitoring over Europe, Africa and the Indian Ocean.

ECMWF high-performance computing centre
Big data, big opportunities: the ECMWF’s high-performance computing facility in Bologna, Italy, is the engine-room of the organization’s weather and climate modelling efforts. (Courtesy: ECMWF)

What other factors help improve forecast accuracy?

We talk of “a day, a decade” improvement in weather forecasting, such that a five-day forecast now is as good as a three-day forecast 20 years ago. A richer and broader mix of observational data underpins that improvement, with diverse data streams feeding into bigger supercomputers that can run higher-resolution models and better algorithms. Equally important is ECMWF’s team of multidisciplinary scientists, whose understanding of the atmosphere and climate helps to optimize our models and data assimilation methods. A case study in this regard is Destination Earth, an ambitious European Union initiative to create a series of “digital twins” – interactive computer simulations – of our planet by 2030. Working with ESA and EUMETSAT, the ECMWF is building the software and data environment for Destination Earth as well as developing the first two digital twins.

What are these two twins?

Our Digital Twin on Weather-Induced and Geophysical Extremes will assess and predict environmental extremes to support risk assessment and management. Meanwhile, in collaboration with others, the Digital Twin on Climate Change Adaptation complements and extends existing capabilities for the analysis and testing of “what if” scenarios – supporting sustainable development and climate adaptation and mitigation policy-making over multidecadal timescales.

Progress in machine learning and AI has been dramatic over the past couple of years

What kind of resolution will these models have?

Both digital twins integrate the sea, atmosphere, land, hydrology and sea ice – and the deep connections between them – at a resolution that is currently impossible to reach. Right now, for example, the ECMWF’s operational forecasts cover the whole globe on a 9 km grid – effectively a localized forecast every 9 km. With Destination Earth, we’re experimenting with 4 km, 2 km and even 1 km grids.
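To get a feel for those numbers, the short sketch below makes a rough count of horizontal grid columns at each spacing. It is a simple surface-area estimate that ignores the layout of the actual model grid:

```python
# Rough count of horizontal grid columns at different grid spacings
# (simple surface-area estimate; ignores the real model grid layout)
EARTH_SURFACE_KM2 = 5.1e8  # Earth's surface area, ~5.1 x 10^8 km^2

for spacing_km in (9, 4, 2, 1):
    columns = EARTH_SURFACE_KM2 / spacing_km**2
    print(f"{spacing_km} km grid: ~{columns:.1e} columns")
```

Going from 9 km to 1 km therefore means roughly 80 times more columns to compute, before even counting vertical levels and shorter time steps.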

In February, the ECMWF unveiled a 10-year strategy to accelerate the use of machine learning and AI. How will this be implemented?

The new strategy prioritizes growing exploitation of data-driven methods anchored on established physics-based modelling – rapidly scaling up our previous deployment of machine learning and AI. There are also a variety of hybrid approaches combining data-driven and physics-based modelling.

What will this help you achieve?

On the one hand, data assimilation and observations will help us to directly improve as well as initialize our physics-based forecasting models – for example, by optimizing uncertain parameters or learning correction terms. We are also investigating the potential of applying machine-learning techniques directly to observations – in effect, going a step beyond the current state of the art to produce forecasts without the need for reanalysis or data assimilation.

How is machine learning deployed at the moment?

Progress in machine learning and AI has been dramatic over the past couple of years – so much so that we launched our Artificial Intelligence Forecasting System (AIFS) back in February. Trained on many years of reanalysis and using traditional data assimilation, AIFS is already an important addition to our suite of forecasts, though still working off the coat-tails of our physics-based predictive models. Another notable innovation is our Probability of Fire machine-learning model, which incorporates multiple data sources beyond weather prediction to identify regional and localized hot-spots at risk of ignition. Those additional parameters – among them human presence, lightning activity as well as vegetation abundance and its dryness – help to pinpoint areas of targeted fire risk, improving the model’s predictive skill by up to 30%.
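As a toy illustration of the general approach (synthetic data, hypothetical features and a standard off-the-shelf classifier, not ECMWF’s actual Probability of Fire model), combining weather and non-weather predictors might look like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.uniform(0, 40, n),   # near-surface temperature (C)
    rng.uniform(0, 100, n),  # vegetation dryness index
    rng.integers(0, 2, n),   # lightning activity flag
    rng.uniform(0, 1, n),    # human-presence proxy
])
# Synthetic labels: ignition is more likely when it is hot and dry with lightning
logit = 0.08 * X[:, 0] + 0.04 * X[:, 1] + 1.5 * X[:, 2] - 6.0
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
hot_dry_cell = [[38.0, 90.0, 1.0, 0.2]]
print("fire probability for a hot, dry, lightning-struck cell:",
      round(model.predict_proba(hot_dry_cell)[0, 1], 2))
```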

What do you like most about working at the ECMWF?

Every day, the ECMWF addresses cutting-edge scientific problems – as challenging as anything you’ll encounter in an academic setting – by applying its expertise in atmospheric physics, mathematical modelling, environmental science, big data and other disciplines. What’s especially motivating, however, is that the ECMWF is a mission-driven endeavour with a straight line from our research outcomes to wider societal and economic benefits.

ECMWF at 50: new frontiers in weather and climate prediction

The European Centre for Medium-Range Weather Forecasts (ECMWF) is an independent intergovernmental organization supported by 35 states – 23 member states and 12 co-operating states. Established in 1975, the centre employs around 500 staff from more than 30 countries at its headquarters in Reading, UK, and sites in Bologna, Italy, and Bonn, Germany. As a research institute and 24/7 operational service, the ECMWF produces global numerical weather predictions four times per day and other data for its member/cooperating states and the broader meteorological community.

The ECMWF processes data from around 90 satellite instruments as part of its daily activities (yielding 60 million quality-controlled observations each day for use in its Integrated Forecasting System). The centre is a key player in Copernicus – the Earth observation component of the EU’s space programme – by contributing information on climate change for the Copernicus Climate Change Service; atmospheric composition to the Copernicus Atmosphere Monitoring Service; as well as flooding and fire danger for the Copernicus Emergency Management Service. This year, the ECMWF is celebrating its 50th anniversary and has a series of celebratory events scheduled in Bologna (15–19 September) and Reading (1–5 December).

The post European centre celebrates 50 years at the forefront of weather forecasting appeared first on Physics World.


MR QA from radiotherapy perspective


During this webinar, the key steps of integrating an MRI scanner and an MR-Linac into a radiotherapy department will be presented, focusing especially on the quality assurance required for the use of the MRI images. The use of phantoms, and how they complement one another across a multi-vendor facility, will also be discussed.

Akos Gulyban

Akos Gulyban is a medical physicist with a PhD in Physics (in Medicine), renowned for his expertise in MRI-guided radiotherapy (MRgRT). Currently based at Institut Jules Bordet in Brussels, he plays a pivotal role in advancing MRgRT technologies, particularly through the integration of the Elekta Unity MR-Linac system along with the implementation of dedicated MRI simulation for radiotherapy.

In addition to his clinical research, Gulyban has been involved in developing quality assurance protocols for MRI-linear accelerator (MR-Linac) systems, contributing to guidelines that ensure safe and effective implementation of MRI-guided radiotherapy.

More broadly, Gulyban works to integrate advanced imaging technologies into radiotherapy, striving to enhance treatment outcomes for cancer patients.

The post MR QA from radiotherapy perspective appeared first on Physics World.


Neutrons differentiate between real and fake antique coins

Illustration of neutron tomography
Finding fakes Illustration of how neutrons can pass easily through the metallic regions of an old coin, but are blocked by hydrogen-bearing compounds formed by corrosion. (Courtesy: S Kelley/NIST)

The presence of hydrogen in a sample is usually a bad thing in neutron scattering experiments, but now researchers in the US have turned the tables on the lightest element and used it to spot fake antique coins.

The scattering of relatively slow-moving neutrons from materials provides a wide range of structural information. This is because these “cold” neutrons have wavelengths on a par with the separations of atoms in a material. However, materials that contain large amounts of hydrogen-1 nuclei (protons) can be difficult to study because hydrogen is very good at scattering neutrons in random directions – creating a noisy background signal. Indeed, biological samples containing lots of hydrogen are usually “deuterated” – replacing hydrogen with deuterium – before they are placed in a neutron beam.

However, there are some special cases where this incoherent scattering of hydrogen can be useful – measuring the water content of samples, for example.

Surfeit of hydrogen

Now, researchers in the US and South Korea have used a neutron beam to differentiate between genuine antique coins and fakes. The technique relies on the fact that the genuine coins have suffered corrosion that has resulted in the inclusion of hydrogen-bearing compounds within the coins.

Led by Youngju Kim and Daniel Hussey at the National Institute of Standards and Technology (NIST) in Colorado, the team fired a parallel beam of neutrons through individual coins (see figure). The neutrons travel with ease through a coin’s original metal, but tend to be scattered by the hydrogen-rich corrosion inclusions. This creates a 2D pattern of high- and low-intensity regions on a neutron-sensitive screen behind the coin. The coin can be rotated and a series of images taken; the researchers then use computed tomography to create a 3D image showing the corroded regions of the coin.
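The reconstruction step itself is standard computed tomography. The sketch below simulates projections of a synthetic “coin” slice with a hydrogen-rich inclusion and reconstructs it by filtered back-projection using scikit-image; it is illustrative only, not the NIST pipeline.

```python
import numpy as np
from skimage.transform import radon, iradon

# Synthetic attenuation map of a coin cross-section: weakly attenuating metal
# with one strongly attenuating (hydrogen-rich) corrosion pocket
size = 128
yy, xx = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
coin = (xx**2 + yy**2 < 0.8**2).astype(float) * 0.2
coin[(xx - 0.4)**2 + (yy + 0.3)**2 < 0.15**2] = 1.0

# Take projections at many rotation angles, then reconstruct the slice
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(coin, theta=angles)
reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")

# The corrosion pocket stands out as a high-attenuation region in the slice
print("RMS reconstruction error:",
      round(float(np.sqrt(np.mean((reconstruction - coin) ** 2))), 3))
```

Stacking many such reconstructed slices, or reconstructing directly in 3D, gives the volume image in which corroded regions can be mapped.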

The team used this neutron tomography technique to examine an authentic 19th-century coin that was recovered from a shipwreck, and a coin that is known to be a replica. Although both coins had surface corrosion, the corrosion extended much deeper into the bulk of the authentic coin than it did in the replica.

The researchers also used a separate technique called neutron grating interferometry to characterize the pores in the surfaces of the coins. Pores are common on the surfaces of coins that have been buried or submerged. Authentic antique coins are often found buried or submerged, whereas replica coins are deliberately buried or submerged to make them look more authentic.

Small-angle scattering

Neutron grating interferometry looks at the small-angle scattering of neutrons from a sample and focuses on structures that range in size from about 1 nm to 1 micron.

The team found that the authentic coin had many more tiny pores than the replica coin, which was dominated by much larger (millimetre scale) pores.

This observation was expected because when a coin is buried or submerged, chemical reactions cause metals to leach out of its surface, creating millimetre-sized pores. As time progresses, however, further chemical reactions cause corrosion by-products such as copper carbonates to fill in the pores. The result is that the pores in the older authentic coin are smaller than the pores in the newer replica coin.

The team now plans to expand its study to include more Korean coins and other metallic artefacts. The techniques could also be used to pinpoint corrosion damage in antique coins, allowing these areas to be protected using coatings.

As well as being important to coin collectors and dealers, the ability to verify the age of coins is of interest to historians and economists – who use the presence of coins as evidence in their research.

The study was done using neutrons from NIST’s research reactor in Maryland. That facility is scheduled to restart in 2026, so in the meantime the team plans to continue its investigation using a neutron source in South Korea.

The research is described in Scientific Reports.

The post Neutrons differentiate between real and fake antique coins appeared first on Physics World.
