Scientists quantify behaviour of micro- and nanoplastics in city environments

Measuring atmospheric plastics: abundance and composition of microplastics (MP) and nanoplastics (NP) in aerosols, and estimated fluxes across atmospheric compartments, in semiarid (Xi’an) and humid subtropical (Guangzhou) urban environments. (TSP: total suspended particles. Courtesy: Institute of Earth Environment, CAS)

Plastic has become a global pollution concern over the last couple of decades: it is widespread in society, often not disposed of effectively, and breaks down into both microplastics (1 µm to 5 mm in size) and nanoplastics (smaller than 1 µm), which have infiltrated many ecosystems – and have even been found inside humans and animals.

Over time, bulk plastics break down into micro- and nanoplastics through fragmentation mechanisms that create much smaller particles with a range of shapes and sizes. Their small size is a problem because these particles increasingly find their way into waterways, cities and other urban environments, polluting ecosystems, and are now even being transported to remote polar and high-altitude regions.

This poses potential health risks around the world. Although the behaviour of micro- and nanoplastics in the atmosphere is poorly understood, they are thought to be transported by transcontinental and transoceanic winds, spreading plastic through the global carbon cycle.

However, the lack of data on the emission, distribution and deposition of atmospheric micro- and nanoplastic particles makes it difficult to definitively say how they are transported around the world. It is also challenging to quantify their behaviour, because plastic particles can have a range of densities, sizes and shapes that undergo physical changes in clouds, all of which affect how they travel.

A global team of researchers has developed a new semi-automated microanalytical method that can quantify atmospheric plastic particles present in air dustfall, rain, snow and dust resuspension. The research was performed across two Chinese megacities, Guangzhou and Xi’an.

“As atmospheric scientists, we noticed that microplastics in the atmosphere have been the least reported among all environmental compartments in the Earth system due to limitations in detection methods, because atmospheric particles are smaller and more complex to analyse,” explains Yu Huang, from the Institute of Earth Environment of the Chinese Academy of Sciences (IEECAS) and one of the paper’s lead authors. “We therefore set out to develop a reliable detection technique to determine whether microplastics are present in the atmosphere, and if so, in what quantities.”

Quantitative detection

For this new approach, the researchers employed a computer-controlled scanning electron microscopy (CCSEM) system equipped with energy-dispersive X-ray spectroscopy to reduce human bias in the measurements (which is an issue in manual inspections). They located and measured individual micro- and nanoplastic particles – enabling their concentration and physicochemical characteristics to be determined – in aerosols, dry and wet depositions, and resuspended road dust.

“We believe the key contribution of this work lies in the development of a semi‑automated method that identifies the atmosphere as a significant reservoir of microplastics. By avoiding the human bias inherent in visual inspection, our approach provides robust quantitative data,” says Huang. “Importantly, we found that these microplastics often coexist with other atmospheric particles, such as mineral dust and soot – a mixing state that could enhance their potential impacts on climate and the environment.”

The method could detect and quantify plastic particles as small as 200 nm, and revealed airborne concentrations of 1.8 × 10⁵ microplastics/m³ and 4.2 × 10⁴ nanoplastics/m³ in Guangzhou, and 1.4 × 10⁵ microplastics/m³ and 3.0 × 10⁴ nanoplastics/m³ in Xi’an. For both microplastics and nanoplastics, these values are two to six orders of magnitude higher than fluxes reported previously via visual methods.

The team also found that the deposition samples were more heterogeneously mixed with other particle types (such as dust and other pollution particles) than aerosols and resuspension samples, which showed that particles tend to aggregate in the atmosphere before being removed during atmospheric transport.

The study revealed transport insights that could be beneficial for investigating the climate, ecosystem and human health impacts of plastic particles at all levels. The researchers are now advancing their method in two key directions.

“First, we are refining sampling and CCSEM‑based analytical strategies to detect mixed states between microplastics and biological or water‑soluble components, which remain invisible with current techniques. Understanding these interactions is essential for accurately assessing microplastics’ climate and health effects,” Huang tells Physics World. “Second, we are integrating CCSEM with Raman analysis to not only quantify abundance but also identify polymer types. This dual approach will generate vital evidence to support environmental policy decisions.”

The research was published in Science Advances.


  •  

The Future Circular Collider is unduly risky – CERN needs a ‘Plan B’

Last November I visited the CERN particle-physics lab near Geneva to attend the 4th International Symposium on the History of Particle Physics, which focused on advances in particle physics during the 1980s and 1990s. As usual, it was a refreshing, intellectually invigorating visit. I’m always inspired by the great diversity of scientists at CERN – complemented this time by historians, philosophers and other scholars of science.

As noted by historian John Krige in his opening keynote address, “CERN is a European laboratory with a global footprint. Yet for all its success it now faces a turning point.” During the period under examination at the symposium, CERN essentially achieved the “world laboratory” status that various leaders of particle physics had dreamt of for decades.

By building the Large Electron Positron (LEP) collider and then the Large Hadron Collider (LHC), the latter with contributions from Canada, China, India, Japan, Russia, the US and other non-European nations, CERN has attracted researchers from six continents. And as the Cold War ended in 1989–1991, two prescient CERN staff members developed the World Wide Web, helping knit this sprawling international scientific community together and enable extensive global collaboration.

The LHC was funded and built during a unique period of growing globalization and democratization that emerged in the wake of the Cold War’s end. After the US terminated the Superconducting Super Collider in 1993, CERN was the only game in town if one wanted to pursue particle physics at the multi-TeV energy frontier. And many particle physicists wanted to be involved in the search for the Higgs boson, which by the mid-1990s looked as if it should show up at accessible LHC energies.

Having discovered this long-sought particle at the LHC in 2012, CERN is now contemplating an ambitious construction project, the Future Circular Collider (FCC). Over three times larger than the LHC, it would study this all-important, mass-generating boson in greater detail using an electron–positron collider dubbed FCC-ee, estimated to cost $18bn and start operations by 2050.

Later in the century, the FCC-hh, a proton–proton collider, would go in the same tunnel to see what, if anything, may lie at much higher energies. That collider, the cost of which is currently educated guesswork, would not come online until the mid 2070s.

But the steadily worsening geopolitics of a fragmenting world order could make funding and building these colliders dicey affairs. After Russia’s expulsion from CERN, little in the way of its contributions can be expected. Chinese physicists had hoped to build an equivalent collider, but those plans seem to have been put on the backburner for now.

And the “America First” political stance of the current US administration is hardly conducive to the multibillion-dollar contribution likely required from what is today the world’s richest (albeit debt-laden) nation. The ongoing collapse of the rules-based world order was recently put into stark relief by the US invasion of Venezuela and abduction of its president Nicolás Maduro, followed by Donald Trump’s menacing rhetoric over Greenland.

While these shocking events have immediate significance for international relations, they also suggest how difficult it may become to fund gargantuan international scientific projects such as the FCC. Under such circumstances, it is very difficult to imagine non-European nations being able to contribute a hoped-for third of the FCC’s total costs.

But Europe’s ascendant populist right-wing parties are no great friends of physics either, nor of international scientific endeavours. And Europeans face the not-insignificant costs of military rearmament in the face of Russian aggression and a likely US withdrawal from Europe.

So the other two thirds of the FCC’s many billions in costs cannot be taken for granted – especially not during the decades needed to construct its 91 km tunnel, 350 GeV electron–positron collider, the subsequent 100 TeV proton collider, and the massive detectors both machines require.

According to former CERN director-general Chris Llewellyn Smith in his symposium lecture, “The political history of the LHC”, just under 12% of the material project costs of the LHC eventually came from non-member nations. It therefore stretches the imagination to believe that a third of the much greater costs of the FCC can come from non-member nations in the current “Wild West” geopolitical climate.

But particle physics desperately needs a Higgs factory. After the 1983 Z boson discovery at the CERN SPS Collider, it took just six years before we had not one but two Z factories – LEP and the Stanford Linear Collider – which proved very productive machines. It’s now been more than 13 years since the Higgs boson discovery. Must we wait another 20 years?

Other options

CERN therefore needs a more modest, realistic, productive new scientific facility – a “Plan B” – to cope with the geopolitical uncertainties of an imperfect, unpredictable world. And I was encouraged to learn that several possible ideas are under consideration, according to outgoing CERN director-general Fabiola Gianotti in her symposium lecture, “CERN today and tomorrow”.

Three of these ideas reflect the European Strategy for Particle Physics, which states that “an electron–positron Higgs factory is the highest-priority next CERN collider”. Two linear electron–positron colliders would require just 11–34 km of tunnelling and could begin construction in the mid-2030s, but would involve a fair amount of technical risk and cost roughly €10bn.

The least costly and risky option, dubbed LEP3, involves installing superconducting radio-frequency cavities in the existing LHC tunnel once the high-luminosity proton run ends. Essentially an upgrade of the 200 GeV LEP2, this approach is based on well-understood technologies and would cost less than €5bn, but could reach at most 240 GeV. The linear colliders could attain over twice that energy, enabling research on the Higgs boson’s coupling to top quarks and on the triple-Higgs self-interaction.

Other proposed projects involving the LHC tunnel could produce Higgs bosons with relatively small backgrounds, though not in the numbers that would merit the name “Higgs factory”. One of these, dubbed the LHeC, could only produce a few thousand Higgs bosons annually, but would also allow other important research on proton structure functions. Another idea is the proposed Gamma Factory, in which laser beams would be backscattered from LHC beams of partially stripped ions. If sufficient photon energies and intensities can be achieved, it would allow research on the γγ → H interaction. These alternatives would cost at most a few billion euros.

As Krige stressed in his keynote address, CERN was meant to be more than a scientific laboratory at which European physicists could compete with their US and Soviet counterparts. As many of its founders intended, he said, it was “a cultural weapon against all forms of bigoted nationalism and anti-science populism that defied Enlightenment values of critical reasoning”. The same logic holds true today.

In planning the next phase in CERN’s estimable history, it is crucial to preserve this cultural vitality, while of course providing unparalleled opportunities to do world-class science – lacking which, the best scientists will turn elsewhere.

I therefore urge CERN planners to be daring but cognizant of financial and political reality in the fracturing world order. Don’t for a nanosecond assume that the future will be a smooth extrapolation from the past. Be fairly certain that whatever new facility you decide to build, there is a solid financial pathway to achieving it in a reasonable time frame.

The future of CERN – and the bracing spirit of CERN – rests in your hands.


  •  

The power of a poster

Most researchers know the disappointment of submitting an abstract to give a conference lecture, only to find that it has been accepted as a poster presentation instead. If this has been your experience, I’m here to tell you that you need to rethink the value of a good poster.

For years, I pestered my university to erect a notice board outside my office so that I could showcase my group’s recent research posters. Each time, for reasons of cost, my request was unsuccessful. At the same time, I would see similar boards placed outside the offices of more senior and better-funded researchers in my university. I voiced my frustrations to a mentor whose advice was, “It’s better to seek forgiveness than permission.” So, since I couldn’t afford to buy a notice board, I simply used drawing pins to mount some unauthorized posters on the wall beside my office door.

Some weeks later, I rounded the corner to my office corridor to find the head porter standing with a group of visitors gathered around my posters. He was telling them all about my research using solar energy to disinfect contaminated drinking water in disadvantaged communities in Sub-Saharan Africa. Unintentionally, my illegal posters had been subsumed into the head porter’s official tour that he frequently gave to visitors.

The group moved on but one man stayed behind, examining the poster very closely. I asked him if he had any questions. “No, thanks,” he said, “I’m not actually with the tour, I’m just waiting to visit someone further up the corridor and they’re not ready for me yet. Your research in Africa is very interesting.” We chatted for a while about the challenges of working in resource-poor environments. He seemed quite knowledgeable on the topic but soon left for his meeting.

A few days later while clearing my e-mail junk folder I spotted an e-mail from an Asian “philanthropist” offering me €20,000 towards my research. To collect the money, all I had to do was send him my bank account details. I paused for a moment to admire the novelty and elegance of this new e-mail scam before deleting it. Two days later I received a second e-mail from the same source asking why I hadn’t responded to their first generous offer. While admiring their persistence, I resisted the urge to respond by asking them to stop wasting their time and mine, and instead just deleted it.

So, you can imagine my surprise when the following Monday morning I received a phone call from the university deputy vice-chancellor inviting me to pop up for a quick chat. On arrival, he wasted no time before asking why I had been so foolish as to ignore repeated offers of research funding from one of the college’s most generous benefactors. And that is how I learned that those e-mails from the Asian philanthropist weren’t bogus.

The gentleman that I’d chatted with outside my office was indeed a wealthy philanthropic funder who had been visiting our university. Having retrieved the e-mails from my deleted items folder, I re-engaged with him and subsequently received €20,000 to install 10,000-litre harvested-rainwater tanks in as many primary schools in rural Uganda as the money would stretch to.

Secret to success: Kevin McGuigan discovered that one research poster can lead to generous funding contributions. (Courtesy: Antonio Jaen Osuna)

About six months later, I presented the benefactor with a full report accounting for the funding expenditure, replete with photos of harvested-rainwater tanks installed in 10 primary schools, with their very happy new owners standing in the foreground. Since you miss 100% of the chances you don’t take, I decided I should push my luck and added a “wish list” of other research items that the philanthropist might consider funding.

The list started small and grew steadily more ambitious. I asked for funds for more tanks in other schools, a travel bursary, PhD registration fees, student stipends and so on. All told, the list came to a total of several hundred thousand euros, but I emphasized that he had been very generous, so I would be delighted to receive funding for any one of the listed items and, even if nothing was funded, I was still very grateful for everything he had already done. The following week my generous patron deposited a six-figure-euro sum into my university research account with instructions that it be used as I saw fit for my research purposes, “under the supervision of your university finance office”.

In my career I have co-ordinated several large-budget, multi-partner, interdisciplinary, international research projects. In each case, that money was hard-earned, needing at least six months and many sleepless nights to prepare the grant submission. It still amuses me that I garnered such a large sum on the back of one research poster, one 10-minute chat and fewer than six e-mails.

So, if you have learned nothing else from this story, please don’t underestimate the power of a strategically placed and impactful poster describing your research. You never know with whom it may resonate and down which road it might lead you.


  •  

String-theory concept boosts understanding of biological networks

Many biological networks – including blood vessels and plant roots – are not organized to minimize total length, as long assumed. Instead, their geometry follows a principle of surface minimization, following a rule that is also prevalent in string theory. That is the conclusion of physicists in the US, who have created a unifying framework that explains structural features long seen in real networks but poorly captured by traditional mathematical models.

Biological transport and communication networks have fascinated scientists for decades. Neurons branch to form synapses, blood vessels split to supply tissues, and plant roots spread through soil. Since the mid-20th century, many researchers believed that evolution favours networks that minimize total length or volume.

“There is a longstanding hypothesis, going back to Cecil Murray from the 1940s, that many biological networks are optimized for their length and volume,” Albert-László Barabási of Northeastern University explains. “That is, biological networks, like the brain and the vascular systems, are built to achieve their goals with the minimal material needs.” Until recently, however, it had been difficult to characterize the complicated nature of biological networks.

Now, advances in imaging have given Barabási and colleagues a detailed 3D picture of real physical networks, from individual neurons to entire vascular systems. With these new data in hand, the researchers found that previous theories are unable to describe real networks in quantitative terms.

From graphs to surfaces

To remedy this, the team defined the problem in terms of physical networks, systems whose nodes and links have finite thickness and occupy space. Rather than treating them as abstract graphs made of idealized edges, the team models them as geometrical objects embedded in 3D space.

To do this, the researchers turned to an unexpected mathematical tool. “Our work relies on the framework of covariant closed string field theory, developed by Barton Zwiebach and others in the 1980s,” says team member Xiangyi Meng at Rensselaer Polytechnic Institute. This framework provides a correspondence between network-like graphs and smooth surfaces.

Unlike string theory, their approach is entirely classical. “These surfaces, obtained in the absence of quantum fluctuations, are precisely the minimal surfaces we seek,” Meng says. No quantum mechanics, supersymmetry, or exotic string-theory ingredients are required. “Those aspects were introduced mainly to make string theory quantum and thus do not apply to our current context.”

Using this framework, the team analysed a wide range of biological systems. “We studied human and fruit fly neurons, blood vessels, trees, corals, and plants like Arabidopsis,” says Meng. Across all these cases, a consistent pattern emerged: the geometry of the networks is better predicted by minimizing surface area rather than total length.
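
To make the contrast concrete, the two principles can be written schematically as below. The notation is our own illustration – each network link e is approximated as a thin tube of length ℓ_e and radius r_e – and is not the string-field-theory formalism used in the paper.

```latex
% Schematic contrast between the two optimization principles
% (illustrative notation only; links treated as thin tubes).
\begin{align*}
  \text{length-based principle:}  \quad &\min \sum_{e} \ell_e \\[2pt]
  \text{surface-based principle:} \quad &\min \sum_{e} A_e
      \;\approx\; \min \sum_{e} 2\pi r_e \ell_e
\end{align*}
```

In this toy form the radius of each link matters, which is why surface-based optimization can favour geometries – such as the higher-order junctions described below – that a pure length criterion cannot distinguish.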

Complex junctions

One of the most striking outcomes of the surface-minimization framework is its ability to explain structural features that previous models cannot. Traditional length-based theories typically predict simple Y-shaped bifurcations, where one branch splits into two. Real networks, however, often display far richer geometries.

“While traditional models are limited to simple bifurcations, our framework predicts the existence of higher-order junctions and ‘orthogonal sprouts’,” explains Meng.

These include three- or four-way splits and perpendicular, dead-end offshoots. Under a surface-based principle, such features arise naturally and allow neurons to form synapses using less membrane material overall and enable plant roots to probe their environment more effectively.

Ginestra Bianconi of the UK’s Queen Mary University of London says that the key result of the new study is the demonstration that “physical networks such as the brain or vascular networks are not wired according to a principle of minimization of edge length, but rather that their geometry follows a principle of surface minimization.”

Bianconi, who was not involved in the study, also highlights the interdisciplinary leap of invoking ideas from string theory: “This is a beautiful demonstration of how basic research works.”

Interdisciplinary leap

The team emphasizes that their work is not immediately technological. “This is fundamental research, but we know that such research may one day lead to practical applications,” Barabási says. In the near term, he expects the strongest impact in neuroscience and vascular biology, where understanding wiring and morphology is essential.

Bianconi agrees that important questions remain. “The next step would be to understand whether this new principle can help us understand brain function or have an impact on our understanding of brain diseases,” she says. Surface optimization could, for example, offer new ways to interpret structural changes observed in neurological disorders.

Looking further ahead, the framework may influence the design of engineered systems. “Physical networks are also relevant for new materials systems, like metamaterials, who are also aiming to achieve functions at minimal cost,” Barabási notes. Meng points to network materials as a particularly promising area, where surface-based optimization could inspire new architectures with tailored mechanical or transport properties.

The research is described in Nature.


  •  

Is our embrace of AI naïve and could it lead to an environmental disaster?

According to today’s leading experts in artificial intelligence (AI), this new technology is a danger to civilization. A statement on AI risk published in 2023 by the US non-profit Center for AI Safety warned that mitigating the risk of extinction from AI must now be “a global priority”, comparing it to other societal-scale dangers such as pandemics and nuclear war. It was signed by more than 600 people, including Geoffrey Hinton, co-recipient of the 2024 Nobel Prize for Physics and the so-called “Godfather of AI”. In a speech at the Nobel banquet after being awarded the prize, Hinton noted that AI may be used “to create terrible new viruses and horrendous lethal weapons that decide by themselves who to kill or maim”.

Despite signing the statement, Sam Altman of OpenAI, the firm behind ChatGPT, has stated that the company’s explicit ambition is to create artificial general intelligence (AGI) within the next few years, to “win the AI-race”. AGI is predicted to surpass human cognitive capabilities for almost all tasks, but the real danger is if or when AGI is used to generate more powerful versions of itself. Sometimes called “superintelligence”, this would be impossible to control. Companies do not want any regulation of AI and their business model is for AGI to replace most employees at all levels. This is how firms are expected to benefit from AI, since wages are most companies’ biggest expense.

AI, to me, is not about saving the world, but about a handful of people wanting to make enormous amounts of money from it. No-one knows what internal mechanism makes even today’s AI work – just as one cannot find out what you think from how the neurons in your brain are firing. If we don’t even understand today’s AI models, how are we going to understand – and control – the more powerful models that already exist or are planned in the near future?

AI has some practical benefits but too often is put to mostly meaningless, sometimes downright harmful, uses such as cheating your way through school or creating disinformation and fake videos online. What’s more, an online search with the help of AI requires at least 10 times as much energy as a search without AI. It already uses 5% of all electricity in the US and by 2028 this figure is expected to be 15%, which will be over a quarter of all US households’ electricity consumption. AI data servers are more than 50% as carbon intensive as the rest of the US’s electricity supply.

Those energy needs are why some tech companies are building AI data centres – often under confidential, opaque agreements – very quickly for fear of losing market share. Indeed, the vast majority of those centres are powered by fossil-fuel energy sources – completely contrary to the Paris Agreement to limit global warming. We must wisely allocate Earth’s strictly limited resources, with what is wasted on AI instead going towards vital things.

To solve the climate crisis, there is definitely no need for AI. All the solutions have already been known for decades: phasing out fossil fuels, reversing deforestation, reducing energy and resource consumption, regulating global trade, reforming the economic system away from its dependence on growth. The problem is that the solutions are not implemented because of short-term selfish profiteering, which AI only exacerbates.

Playing with fire

AI, like all other technologies, is not a magic wand and, as Hinton says, potentially has many negative consequences. It is not, as the enthusiasts seem to think, a magical free resource that provides output without input (and waste). I believe we must rethink our naïve, uncritical, overly fast, total embrace of AI. Universities are known for wise reflection, but worryingly they seem to be hurrying to jump on the AI bandwagon. The problem is that the bandwagon may be going in the wrong direction or crash and burn entirely.

Why then should universities and organizations send their precious money to greedy, reckless and almost totalitarian tech billionaires? If we are going to use AI, shouldn’t we create our own AI tools that we can hopefully control better? Today, more money and power is transferred to a few AI companies that transcend national borders, which is also a threat to democracy. Democracy only works if citizens are well educated, committed, knowledgeable and have influence.

AI is like using a hammer to crack a nut. Sometimes a hammer may be needed but most of the time it is not and is instead downright harmful. Happy-go-lucky people at universities, companies and throughout society are playing with fire without knowing about the true consequences now, let alone in 10 years’ time. Our mapped-out path towards AGI is like a zebra on the savannah creating an artificial lion that begins to self-replicate, becoming bigger, stronger, more dangerous and more unpredictable with each generation.

Wise reflection today on our relationship with AI is more important than ever.


  •  

Encrypted qubits can be cloned and stored in multiple locations

Encrypted qubits can be cloned and stored in multiple locations without violating the no-cloning theorem of quantum mechanics, researchers in Canada have shown. Their work could potentially allow quantum-secure cloud storage, in which data can be stored on multiple servers, thereby allowing for redundancy without compromising security. The research also has implications for quantum fundamentals.

Heisenberg’s uncertainty principle – which states that it is impossible to measure conjugate variables of a quantum object with less than a combined minimum uncertainty – is one of the central tenets of quantum mechanics. The no-cloning theorem – that it is impossible to create identical clones of unknown quantum states – flows directly from this. Achim Kempf of the University of Waterloo explains, “If you had [clones] you could take half your copies and perform one type of measurement, and the other half of your copies and perform an incompatible measurement, and then you could beat the uncertainty principle.”

No-cloning poses a challenge to those trying to create a quantum internet. On today’s internet, storage of information on remote servers is common, and multiple copies of this information are usually stored in different locations to preserve data in case of disruption. Users of a quantum cloud server would presumably desire the same degree of information security, but the no-cloning theorem would apparently forbid this.

Signal and noise

In the new work, Kempf and his colleague Koji Yamaguchi, now at Japan’s Kyushu University, show that this is not the case. Their encryption protocol begins with the generation of a set of pairs of entangled qubits. When a qubit, called A, is encrypted, it interacts with one qubit (called a signal qubit) from each pair in turn. In the process of interaction, the signal qubits record information about the state of A, which has been altered by previous interactions. As each signal qubit is entangled with a noise qubit, the state of the noise qubits is also changed.

Another central tenet of quantum mechanics, however, is that quantum entanglement does not allow for information exchange. “The noise qubits don’t know anything about the state of A either classically or quantum mechanically,” says Kempf. “The noise qubits’ role is to serve as a record of noise…We use the noise that is in the signal qubit to encrypt the clone of A. You drown the information in noise, but the noise qubit has a record of exactly what noise has been added because [the signal qubits and noise qubits] are maximally entangled.”

Therefore, a user with all of the noise qubits knows nothing about the signal, but knows all of the noise that was added to it. Possession of just one of the signal qubits, therefore, allows them to recover the unencrypted qubit. This does not violate the uncertainty principle, however, because decrypting one copy of A involves making a measurement of the noise qubits: “At the end of [the measurement], the noise qubits are no longer what they were before, and they can no longer be used for the decryption of another encrypted clone,” explains Kempf.
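
The logic of “drowning the information in noise” that can later be undone is reminiscent of the textbook quantum one-time pad, in which a uniformly random Pauli operation scrambles a qubit and the record of which Pauli was applied serves as the decryption key. The NumPy sketch below is only that simplified analogue – a classical key record stands in for the entangled noise qubits of the actual protocol – but it shows why the encrypted copy alone reveals nothing about the state.

```python
import numpy as np

# Quantum one-time pad sketch: a classical record of the random Pauli plays the
# role that the entangled "noise" qubits play in the protocol described above.
# This is an illustrative analogy, not the authors' entanglement-based scheme.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [I2, X, Y, Z]
rng = np.random.default_rng(seed=7)

def encrypt(psi):
    """Scramble |psi> with a uniformly random Pauli; return ciphertext and key."""
    k = int(rng.integers(4))
    return PAULIS[k] @ psi, k

def decrypt(cipher, k):
    """Each Pauli is its own inverse, so reapplying it restores the state."""
    return PAULIS[k] @ cipher

psi = np.array([np.cos(0.3), np.exp(0.7j) * np.sin(0.3)])  # arbitrary qubit state
cipher, key = encrypt(psi)

# Without the key, the ciphertext averaged over all possible Paulis is the
# maximally mixed state: it carries no information about |psi>.
rho = np.outer(psi, psi.conj())
print(np.allclose(sum(P @ rho @ P.conj().T for P in PAULIS) / 4, I2 / 2))  # True

# With the key, the original state is recovered exactly.
print(np.allclose(decrypt(cipher, key), psi))  # True
```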

Cloning clones

Kempf says that, working with IBM, the team has successfully demonstrated hundreds of steps of iterative quantum cloning (quantum cloning of quantum clones) on a Heron 2 processor, and has shown that it can even clone entangled qubits and recover the entanglement after decryption. “We’ll put that on the arXiv this month,” he says.

The research is described in Physical Review Letters. Barry Sanders at Canada’s University of Calgary is impressed by both the elegance and the generality of the result, noting it could have significance for topics as distant as information loss from black holes. “It’s not a flash in the pan,” he says. “If I’m doing something that is related to no-cloning, I would look back and say ‘Gee, how do I interpret what I’m doing in this context?’ It’s a paper I won’t forget.”

Seth Lloyd of MIT agrees: “It turns out that there’s still low-hanging fruit out there in the theory of quantum information, which hasn’t been around long,” he says. “It turns out nobody ever thought to look at this before: Achim is a very imaginative guy and it’s no surprise that he did.” Both Lloyd and Sanders agree that quantum cloud storage remains hypothetical, but Lloyd says “I think it’s a very cool and unexpected result and, while it’s unclear what the implications are towards practical uses, I suspect that people will find some very nice applications in the near future.”


  •  

Fuel cell catalyst requirements for heavy-duty vehicle applications

Heavy-duty vehicles (HDVs) powered by hydrogen-based proton-exchange membrane (PEM) fuel cells offer a cleaner alternative to diesel-powered internal combustion engines for decarbonizing long-haul transportation sectors. The development path of sub-components for HDV fuel-cell applications is guided by the total cost of ownership (TCO) analysis of the truck.

TCO analysis suggests that the cost of the hydrogen fuel consumed over the lifetime of the HDV dominates over the fuel-cell stack capital expense (CapEx), because trucks typically operate over very high mileages (around a million miles). Commercial HDV applications consume more hydrogen and demand higher durability, meaning that TCO is largely determined by fuel-cell efficiency and catalyst durability.
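
A back-of-the-envelope comparison illustrates why fuel dominates. All of the numbers below are illustrative assumptions (fuel economy, hydrogen price and stack cost are not quoted in this article); only the roughly million-mile lifetime comes from the text.

```python
# Rough TCO comparison for a hydrogen fuel-cell HDV (illustrative numbers only).
lifetime_miles = 1_000_000   # ~a million miles of operation (from the text)
fuel_economy = 8.0           # assumed miles per kg of hydrogen
h2_price = 6.0               # assumed delivered hydrogen price, $/kg
stack_capex = 30_000.0       # assumed fuel-cell stack capital cost, $

lifetime_fuel_cost = lifetime_miles / fuel_economy * h2_price
print(f"fuel ≈ ${lifetime_fuel_cost:,.0f}  vs  stack CapEx ≈ ${stack_capex:,.0f}")
# With these assumptions, lifetime fuel spending (~$750,000) exceeds the stack
# CapEx by more than an order of magnitude, so catalyst efficiency and
# durability gains that cut hydrogen consumption dominate the TCO.
```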

This article is written to bridge the gap between the industrial requirements and academic activity for advanced cathode catalysts with an emphasis on durability. From a materials perspective, the underlying nature of the carbon support, Pt-alloy crystal structure, stability of the alloying element, cathode ionomer volume fraction, and catalyst–ionomer interface play a critical role in improving performance and durability.

We provide our perspective on four major approaches currently being pursued, namely mesoporous carbon supports, ordered PtCo intermetallic alloys, thrifting of the ionomer volume fraction, and shell-protection strategies. While each approach has its merits and demerits, we highlight the key developmental needs for each going forward.

Nagappan Ramaswamy

Nagappan Ramaswamy joined the Department of Chemical Engineering at IIT Bombay as a faculty member in January 2025. He earned his PhD in 2011 from Northeastern University, Boston, specialising in fuel-cell electrocatalysis.

He then spent 13 years working in industrial R&D – two years at Nissan North America in Michigan, USA, focusing on lithium-ion batteries, followed by 11 years at General Motors in Michigan focusing on low-temperature fuel cells and electrolyser technologies. While at GM, he led two multi-million-dollar research projects funded by the US Department of Energy focused on the development of proton-exchange membrane fuel cells for automotive applications.

At IIT Bombay, his primary research interests include low-temperature electrochemical energy-conversion and storage devices such as fuel cells, electrolysers and redox-flow batteries involving materials development, stack design and diagnostics.


  •  

India turns to small modular nuclear reactors to meet climate targets

India has been involved in nuclear energy and power for decades, but the country is now turning to small modular nuclear reactors (SMRs) as part of a new, long-term push towards nuclear and renewable energy. In December 2025 the country’s parliament passed a bill that for the first time allows private companies to participate in India’s nuclear programme, which could see them involved in generating power, operating plants and making equipment.

Some commentators are unconvinced that the move will be enough to help meet India’s climate pledge of achieving 500 GW of non-fossil-fuel-based energy generation by 2030. Interestingly, however, India has now joined other nations, such as Russia and China, in taking an interest in SMRs. They could help stem the overall decline in nuclear power, which now accounts for just 9% of electricity generated around the world – down from 17.5% in 1996.

Last year India’s finance minister Nirmala Sitharaman announced a nuclear energy mission funded with 200 billion Indian rupees ($2.2bn) to develop at least five indigenously designed and operational SMRs by 2033. Unlike huge, conventional nuclear plants, such as pressurized heavy-water reactors (PHWRs), most or all components of an SMR are manufactured in factories before being assembled at the reactor site.

SMRs typically generate less than 300 MW of electrical power, but – being modular – additional capacity can be brought online quickly and easily thanks to their lower capital costs, shorter construction times, ability to work with lower-capacity grids and lower carbon emissions. Despite their promise, there are only two fully operating SMRs in the world – both in Russia – although two further high-temperature gas-cooled SMRs are currently being built in China. In June 2025 Rolls-Royce SMR was selected as the preferred bidder by Great British Nuclear to build the UK’s first fleet of SMRs, with plans to provide 470 MW of low-carbon electricity.

Cost benefit analysis

An official at the Department of Atomic Energy told Physics World that part of that mix of five new SMRs in India could be the 200 MW Bharat small modular reactor, which is based on pressurized-water reactor technology and uses slightly enriched uranium as fuel. Other options include 55 MW small modular reactors, and the Indian government also plans to partner with the private sector to deploy 220 MW Bharat small reactors.

Despite such moves, some are unconvinced that small nuclear reactors could help India scale its nuclear ambitions. “SMRs are still to demonstrate that they can supply electricity at scale,” says Karthik Ganesan, a fellow and director of partnerships at the Council on Energy, Environment and Water (CEEW), a non-profit policy research think-tank based in New Delhi. “SMRs are a great option for captive consumption, where large investment that will take time to start generating is at a premium.”

Ganesan, however, says it is too early to comment on the commercial viability of SMRs as cost reductions from SMRs depend on how much of the technology is produced in a factory and in what quantities. “We are yet to get to that point and any test reactors deployed would certainly not be the ones to benchmark their long-term competitiveness,” he says. “[But] even at a higher tariff, SMRs will still have a use case for industrial consumers who want certainty in long-term tariffs and reliable continuous supply in a world where carbon dioxide emissions will be much smaller than what we see from the power sector today.”

M V Ramana from the University of British Columbia, Vancouver, who works in international security and energy supply, is concerned over the cost efficiency of SMRs compared to their traditional counterparts. “Larger reactors are cheaper on a per-megawatt basis because their material and work requirements do not scale linearly with power capacity,” says Ramana.  This, according to Ramana, means that the electricity SMRs produce will be more expensive than nuclear energy from large reactors, which are already far more expensive than renewables such as solar and wind energy.

Clean or unclean?

Even if SMRs take over from PHWRs, there is still the question of what to do with their nuclear waste. As Ramana points out, all activities linked to the nuclear fuel chain – from mining uranium to dealing with the radioactive wastes produced – have significant health and environmental impacts. “The nuclear fuel chain is polluting, albeit in a different way from that of fossil fuels,” he says, adding that those pollutants remain hazardous for hundreds of thousands of years. “There is no demonstrated solution to managing these radioactive wastes – nor can there be, given the challenge of trying to ensure that these materials do not come into contact with living beings,” says Ramana.

Ganesan, however, thinks that nuclear energy is still clean as it produces electricity with a much lower environmental footprint, especially when it comes to so-called “criteria pollutants”: ozone; particulate matter; carbon monoxide; lead; sulphur dioxide; and nitrogen dioxide. While nuclear waste still needs to be managed, Ganesan says the associated costs are already included in the price of setting up a reactor. “In due course, with technological development, the burn-up will be significantly higher and the waste generated a lot lesser.”


  •  

Photonics West explores the future of optical technologies

The 2026 SPIE Photonics West meeting takes place in San Francisco, California, from 17 to 22 January. The premier event for photonics research and technology, Photonics West incorporates more than 100 technical conferences covering topics including lasers, biomedical optics, optoelectronics, quantum technologies and more.

As well as the conferences, Photonics West also offers 60 technical courses and a new Career Hub with a co-located job fair. There are also five world-class exhibitions featuring over 1500 companies and incorporating industry-focused presentations, product launches and live demonstrations. The first of these is the BiOS Expo, which begins on 17 January and examines the latest breakthroughs in biomedical optics and biophotonics technologies.

Then starting on 20 January, the main Photonics West Exhibition will host more than 1200 companies and showcase the latest innovative optics and photonics devices, components, systems and services. Alongside, the Quantum West Expo features the best in quantum-enabling technology advances, the AR | VR | MR Expo brings together leading companies in XR hardware and systems and – new for 2026 – the Vision Tech Expo highlights cutting-edge vision, sensing and imaging technologies.

Here are some of the product innovations on show at this year’s event.

Enabling high-performance photonics assembly with SmarAct

As photonics applications increasingly require systems with high complexity and integration density, manufacturers face a common challenge: how to assemble, align and test optical components with nanometre precision – quickly, reliably and at scale. At Photonics West, SmarAct presents a comprehensive technology portfolio addressing exactly these demands, spanning optical assembly, fast photonics alignment, precision motion and advanced metrology.

Rapid and reliable: SmarAct’s technology portfolio enables assembly, alignment and testing of optical components with nanometre precision. (Courtesy: SmarAct)

A central highlight is SmarAct’s Optical Assembly Solution, presented together with a preview of a powerful new software platform planned for release in late-Q1 2026. This software tool is designed to provide exceptional flexibility for implementing automation routines and process workflows into user-specific control applications, laying the foundation for scalable and future-proof photonics solutions.

For high-throughput applications, SmarAct showcases its Fast Photonics Alignment capabilities. By combining high-dynamic motion systems with real-time feedback and controller-based algorithms, SmarAct enables rapid scanning and active alignment of photonic integrated circuits (PICs) and optical components such as fibres, fibre array units, lenses, beam splitters and more. These solutions significantly reduce alignment time while maintaining sub-micrometre accuracy, making them ideal for demanding photonics packaging and assembly tasks.

Both the Optical Assembly Solution and Fast Photonics Alignment are powered by SmarAct’s electromagnetic (EM) positioning axes, which form the dynamic backbone of these systems. The direct-drive EM axes combine high speed, high force and exceptional long-term durability, enabling fast scanning, smooth motion and stable positioning even under demanding duty cycles. Their vibration-free operation and robustness make them ideally suited for high-throughput optical assembly and alignment tasks in both laboratory and industrial environments.

Precision feedback is provided by SmarAct’s advanced METIRIO optical encoder family, designed to deliver high-resolution position feedback for demanding photonics and semiconductor applications. The METIRIO stands out by offering sub-nanometre position feedback in an exceptionally compact and easy-to-integrate form factor. Compatible with linear, rotary and goniometric motion systems – and available in vacuum-compatible designs – the METIRIO is ideally suited for space-constrained photonics setups, semiconductor manufacturing, nanopositioning and scientific instrumentation.

For applications requiring ultimate measurement performance, SmarAct presents the PICOSCALE Interferometer and Vibrometer. These systems provide picometre-level displacement and vibration measurements directly at the point of interest, enabling precise motion tracking, dynamic alignment, and detailed characterization of optical and optoelectronic components. When combined with SmarAct’s precision stages, they form a powerful closed-loop solution for high-yield photonics testing and inspection.

Together, SmarAct’s motion, metrology and automation solutions form a unified platform for next-generation photonics assembly and alignment.

  • Visit SmarAct at booth #3438 at Photonics West and booth #8438 at BiOS to discover how these technologies can accelerate your photonics workflows.

Avantes previews AvaSoftX software platform and new broadband light source

Photonics West 2026 will see Avantes present the first live demonstration of its completely redesigned software platform, AvaSoftX, together with a sneak peek of its new broadband light source, the AvaLight-DH-BAL. The company will also run a series of application-focused live demonstrations, highlighting recent developments in laser-induced breakdown spectroscopy (LIBS), thin-film characterization and biomedical spectroscopy.

AvaSoftX is developed to streamline the path from raw spectra to usable results. The new software platform offers preloaded applications tailored to specific measurement techniques and types, such as irradiance, LIBS, chemometry and Raman. Each application presents the controls and visualizations needed for that workflow, reducing time and the risk of user error.

Streamlined solution: the new AvaSoftX software platform offers next-generation control and data handling. (Courtesy: Avantes)

Smart wizards guide users step-by-step through the setup of a measurement – from instrument configuration and referencing to data acquisition and evaluation. For more advanced users, AvaSoftX supports customization with scripting and user-defined libraries, enabling the creation of reusable methods and application-specific data handling. The platform also includes integrated instruction videos and online manuals to support the users directly on the platform.

The software features an accessible dark interface optimized for extended use in laboratory and production environments. Improved LIBS functionality will be highlighted through a live demonstration that combines AvaSoftX with the latest Avantes spectrometers and light sources.

Also making its public debut is the AvaLight-DH-BAL, a new and improved deuterium–halogen broadband light source designed to replace the current DH product line. The system delivers continuous broadband output from 215 to 2500 nm and combines a more powerful halogen lamp with a reworked deuterium section for improved optical performance and stability.

A switchable deuterium and halogen optical path is combined with deuterium peak suppression to improve dynamic range and spectral balance. The source is built into a newly developed, more robust housing to improve mechanical and thermal stability. Updated electronics support adjustable halogen output, a built-in filter holder, and both front-panel and remote-controlled shutter operation.

The AvaLight-DH-BAL is intended for applications requiring stable, high-output broadband illumination, including UV–VIS–NIR absorbance spectroscopy, materials research and thin-film analysis. The official launch date for the light source, as well as the software, will be shared in the near future.

Avantes will also run a series of live application demonstrations. These include a LIBS setup for rapid elemental analysis, a thin-film measurement system for optical coating characterization, and a biomedical spectroscopy demonstration focusing on real-time measurement and analysis. Each demo will be operated using the latest Avantes hardware and controlled through AvaSoftX, allowing visitors to assess overall system performance and workflow integration. Avantes’ engineering team will be available throughout the event.

  • For product previews, live demonstrations and more, meet Avantes at booth #1157.

HydraHarp 500: high-performance time tagger redefines precision and scalability

One year after its successful market introduction, the HydraHarp 500 continues to be a standout highlight at PicoQuant’s booth at Photonics West. Designed to meet the growing demands of advanced photonics and quantum optics, the HydraHarp 500 sets benchmarks in timing performance, scalability and flexible interfacing.

At its core, the HydraHarp 500 delivers exceptional timing precision combined with ultrashort jitter and dead time, enabling reliable photon timing measurements even at very high count rates. With support for up to 16 fully independent input channels plus a common sync channel, the system allows true simultaneous multichannel data acquisition without cross-channel dead time, making it ideal for complex correlation experiments and high-throughput applications.

At the forefront of photon timing: the high-resolution multichannel time tagger HydraHarp 500 offers picosecond timing precision. It combines versatile trigger methods with multiple interfaces, making it ideally suited for demanding applications that require many input channels and high data throughput. (Courtesy: PicoQuant)

A key strength of the HydraHarp 500 is its high flexibility in detector integration. Multiple trigger methods support a wide range of detector technologies, from single-photon avalanche diodes (SPADs) to superconducting nanowire single-photon detectors (SNSPDs). Versatile interfaces, including USB 3.0 and a dedicated FPGA interface, ensure seamless data transfer and easy integration into existing experimental setups. For distributed and synchronized systems, White Rabbit compatibility enables precise cross-device timing coordination.

Engineered for speed and efficiency, the HydraHarp 500 combines ultrashort per-channel dead time with industry-leading timing performance, ensuring complete datasets and excellent statistical accuracy even under demanding experimental conditions.

Looking ahead, PicoQuant is preparing to expand the HydraHarp family with the upcoming HydraHarp 500 L. This new variant will set new standards for data throughput and scalability. With outstanding timing resolution, excellent timing precision and up to 64 flexible channels, the HydraHarp 500 L is engineered for highest-throughput applications powered – for the first time – by USB 3.2 Gen 2×2, making it ideal for rapid, large-volume data acquisition.

With the HydraHarp 500 and the forthcoming HydraHarp 500 L, PicoQuant continues to redefine what is possible in photon timing, delivering precision, scalability and flexibility for today’s and tomorrow’s photonics research. For more information, visit www.picoquant.com or contact us at info@picoquant.com.

  • Meet PicoQuant at BiOS booth #8511 and Photonics West booth #3511.

 


  •  

Mission to Mars: from biological barriers to ethical impediments

“It’s hard to say when exactly sending people to Mars became a goal for humanity,” ponders author Scott Solomon in his new book Becoming Martian: How Living in Space Will Change Our Bodies and Minds – and I think we’d all agree. Ten years ago, I’m not sure any of us thought even returning to the Moon was seriously on the cards. Yet here we are, suddenly living in a second space age, where the first people to purchase one-way tickets to the Red Planet have likely already been born.

The technology required to ship humans to Mars, and the infrastructure required to keep them alive, is well constrained, at least in theory. One could write thousands of words discussing the technical details of reusable rocket boosters and underground architectures. However, Becoming Martian is not that book. Instead, it deals with the effect Martian life will have on the human body – both in the short term across a single lifetime; and in the long term, on evolutionary timescales.

This book’s strength lies in its authorship: it is not written by a physicist enthralled by the engineering challenge of Mars, nor by an astronomer predisposed to romanticizing space exploration. Instead, Solomon is a research biologist who teaches ecology, evolutionary biology and scientific communication at Rice University in Houston, Texas.

Becoming Martian starts with a whirlwind, stripped-down tour of Mars across mythology, astronomy, culture and modern exploration. This effectively sets out the core issue: Mars is fundamentally different from Earth, and life there is going to be very difficult. Solomon goes on to describe the effects of space travel and microgravity on humans that we know of so far: anaemia, muscle wastage, bone density loss and increased radiation exposure, to name just a few.

Where the book really excels, though, is when Solomon uses his understanding of evolutionary processes to extend these findings and conclude how Martian life would be different. For example, childbirth becomes a very risky business on a planet with about one-third of Earth’s gravity. The loss of bone density translates into increased pelvic fractures, and the muscle wastage into an inability for the uterus to contract strongly enough. The result? All Martian births will likely need to be C-sections.

Solomon applies his expertise to the whole human body, including our “entourage” of micro-organisms. The indoor life of a Martian is likely to affect the immune system to the degree that contact with an Earthling would be immensely risky. “More than any other factor, the risk of disease transmission may be the wedge that drives the separation between people on the two planets,” he writes. “It will, perhaps inevitably, cause the people on Mars to truly become Martians.” Since many diseases are harboured or spread by animals, there is a compelling argument that Martians would be vegan and – a dealbreaker for some I imagine – unable to have any pets. So no dogs, no cats, no steak and chips on Mars.

Let’s get physical

The most fascinating part of the book for me is how Solomon repeatedly links the biological and psychological research with the more technical aspects of designing a mission to Mars. For example, the first exploratory teams should have odd numbers, to make decisions easier and us-versus-them rifts less likely. The first colonies will also need to number between 10,000 and 11,000 individuals to ensure enough genetic diversity to protect against evolutionary hazards such as genetic drift and population crashes.

Amusingly, the one part of human activity most important for a sustainable colony – procreation – is the most understudied. When a NASA scientist suggested that a colony would need private spaces with soundproof walls, the backlash was so severe that NASA had to reassure Congress that taxpayer dollars were not being “wasted” encouraging sexual activity among astronauts.

Solomon’s writing is concise yet extraordinarily thorough – there is always just enough for you to feel you can understand the importance and nuance of topics ranging from Apollo-era health studies to evolution, and from AI to genetic engineering. The book is impeccably researched, and he presents conflicting ethical viewpoints so deftly, and without apparent judgement, that you are left plenty of space to imprint your own opinions. So much so that when Solomon shares his own stance on the colonization of Mars in the epilogue, it comes as a bit of a surprise.

In essence, this book lays out a convincing argument that it might be our biology, not our technology, that limits humanity’s expansion to Mars. And if we are able to overcome those limitations, either with purposeful genetic engineering or passive evolutionary change, this could mean we have shed our humanity.

Becoming Martian is one of the best popular-science books I have read within the field, and it is an uplifting read, despite dealing with some of the heaviest ethical questions in space sciences. Whether you’re planning your future as a Martian or just wondering if humans can have sex in space, this book should be on your wish list.

  • February 2026 MIT Press 264pp £27hb

The post Mission to Mars: from biological barriers to ethical impediments appeared first on Physics World.


Solar storms could be forecast by monitoring cosmic rays

Using incidental data collected by the BepiColombo mission, an international research team has made the first detailed measurements of how coronal mass ejections (CMEs) reduce cosmic-ray intensity at varying distances from the Sun. Led by Gaku Kinoshita at the University of Tokyo, the team hopes that their approach could help improve the accuracy of space weather forecasts following CMEs.

CMEs are dramatic bursts of plasma originating from the Sun’s outer atmosphere. In particularly violent events, this plasma can travel through interplanetary space, sometimes interacting with Earth’s magnetic field to produce powerful geomagnetic storms. These storms result in vivid aurorae in Earth’s polar regions and can also damage electronics on satellites and spacecraft. Extreme storms can even affect electrical grids on Earth.

To prevent such damage, astronomers aim to predict the path and intensity of CME plasma as accurately as possible – allowing endangered systems to be temporarily shut down with minimal disruption. According to Kinoshita’s team, one source of information has so far been largely unexplored.

Pushing back cosmic rays

Within interplanetary space, CME plasma interacts with cosmic rays, which are energetic charged particles of extrasolar origin that permeate the solar system with a roughly steady flux. When an interplanetary CME (ICME) passes by, it temporarily pushes back these cosmic rays, creating a local decrease in their intensity.

“This phenomenon is known as the Forbush decrease effect,” Kinoshita explains. “It can be detected even with relatively simple particle detectors, and reflects the properties and structure of the passing ICME.”

In principle, cosmic-ray observations can provide detailed insights into the physical profile of a passing ICME. But despite their relative ease of detection, Forbush decreases had not yet been observed simultaneously by detectors at multiple distances from the Sun, leaving astronomers unclear on how propagation distance affects their severity.
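A Forbush decrease shows up as a temporary dip in an otherwise steady cosmic-ray count rate, so the basic quantity of interest is how deep the dip is relative to the quiet-time baseline. The sketch below illustrates that calculation in Python with entirely synthetic numbers; it is not the team’s BepiColombo pipeline, and the onset index, baseline window and dip shape are all assumptions for illustration.

```python
# Minimal sketch: estimating the depth of a Forbush decrease from a cosmic-ray
# count-rate time series. All numbers are synthetic and for illustration only.
import numpy as np

def forbush_depth(counts, onset_index, baseline_window=24):
    """Fractional decrease relative to the pre-event baseline.

    counts          : 1D array of (e.g. hourly) cosmic-ray count rates
    onset_index     : sample at which the ICME is assumed to arrive
    baseline_window : number of samples before onset used as the quiet-time baseline
    """
    baseline = np.mean(counts[max(0, onset_index - baseline_window):onset_index])
    minimum = np.min(counts[onset_index:])
    return (baseline - minimum) / baseline

# Synthetic example: a steady rate of ~100 counts per hour that dips by about 8%
rng = np.random.default_rng(seed=1)
quiet = 100 + rng.normal(0, 1, 48)
dip = 100 * (1 - 0.08 * np.exp(-np.arange(72) / 30)) + rng.normal(0, 1, 72)
series = np.concatenate([quiet, dip])

print(f"Estimated Forbush decrease depth: {forbush_depth(series, onset_index=48):.1%}")
```

Comparing such depths measured simultaneously by detectors at different distances from the Sun is what allows the distance dependence to be mapped out.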

Now, Kinoshita’s team have explored this spatial relationship using BepiColombo, a European and Japanese mission that will begin orbiting Mercury in November 2026. While the mission focuses on Mercury’s surface, interior, and magnetosphere, it also carries non-scientific equipment capable of monitoring cosmic rays and solar plasma in its surrounding environment.

“Such radiation monitoring instruments are commonly installed on many spacecraft for engineering purposes,” Kinoshita explains. “We developed a method to observe Forbush decreases using a non-scientific radiation monitor onboard BepiColombo.”

Multiple missions

The team combined these measurements with data from specialized radiation-monitoring missions, including ESA’s Solar Orbiter, which is currently probing the inner heliosphere from inside Mercury’s orbit, as well as a network of near-Earth spacecraft. Together, these instruments allowed the researchers to build a detailed, distance-dependent profile of a week-long ICME that occurred in March 2022.

Just as predicted, the measurements revealed a clear relationship between the Forbush decrease effect and distance from the Sun.

“As the ICME evolved, the depth and gradient of its associated cosmic-ray decrease changed accordingly,” Kinoshita says.

With this method now established, the team hopes it can be applied to non-scientific radiation monitors on other missions throughout the solar system, enabling a more complete picture of the distance dependence of ICME effects.

“An improved understanding of ICME propagation processes could contribute to better forecasting of disturbances such as geomagnetic storms, leading to further advances in space weather prediction,” Kinoshita says. In particular, this approach could help astronomers model the paths and intensities of solar plasma as soon as a CME erupts, improving preparedness for potentially damaging events.

The research is described in The Astrophysical Journal.

The post Solar storms could be forecast by monitoring cosmic rays appeared first on Physics World.


CERN team solves decades-old mystery of light nuclei formation

When particle colliders smash particles into each other, the resulting debris cloud sometimes contains a puzzling ingredient: light atomic nuclei. Such nuclei have relatively low binding energies, and they would normally break down at temperatures far below those found in high-energy collisions. Somehow, though, their signature remains. This mystery has stumped physicists for decades, but researchers in the ALICE collaboration at CERN have now figured it out. Their experiments showed that light nuclei form via a process called resonance-decay formation – a result that could pave the way towards searches for physics beyond the Standard Model.

Baryon resonance

The ALICE team studied deuterons (a bound proton and neutron) and antideuterons (a bound antiproton and antineutron) that form in experiments at CERN’s Large Hadron Collider. Both deuterons and antideuterons are fragile, and their binding energies of 2.2 MeV would seemingly make it hard for them to form in collisions with energies that can exceed 100 MeV – 100 000 times hotter than the centre of the Sun.
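The comparison with the Sun’s core follows from converting the collision energy scale into a temperature using the Boltzmann constant. A rough, rounded-numbers check of that figure:

```python
# Back-of-the-envelope check of the "100,000 times hotter than the Sun's core" comparison,
# converting an energy scale E into a temperature via T = E / k_B. Values are rounded.
k_B = 8.617e-5        # Boltzmann constant in eV per kelvin
E_collision = 100e6   # 100 MeV expressed in eV
T_collision = E_collision / k_B   # about 1.2e12 K

T_sun_core = 1.5e7    # approximate temperature of the Sun's core in kelvin

print(f"Collision temperature: {T_collision:.1e} K")
print(f"Ratio to solar core:   {T_collision / T_sun_core:.0e}")   # of order 1e5
```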

The collaboration found that roughly 90% of the deuterons seen after such collisions form in a three-phase process. In the first phase, an initial collision creates a so-called baryon resonance, which is an excited state of a particle made of three quarks (such as a proton or neutron). This particle is called a Δ baryon and is highly unstable, so it rapidly decays into a pion and a nucleon (a proton or a neutron) during the second phase of the process. Then, in the third (and, crucially, much later) phase, the nucleon cools down to a point where its energy properties allow it to bind with another nucleon to form a deuteron.

Smoking gun

Measuring such a complex process is not easy, especially as everything happens on a length scale of femtometres (10⁻¹⁵ m). To tease out the details, the collaboration performed precision measurements to correlate the momenta of the pions and deuterons. When they analysed the momentum difference between these particle pairs, they observed a peak in the data corresponding to the mass of the Δ baryon. This peak shows that the pion and the deuteron are kinematically linked because they share a common ancestor: the pion came from the same Δ decay that provided one of the deuteron’s nucleons.

Panos Christakoglou, a member of the ALICE collaboration based at the Netherlands’ Maastricht University, says the experiment is special because, unlike most previous attempts in which results were interpreted in light of models or phenomenological assumptions, this technique is model-independent. He adds that the results of this study could be used to improve models of high-energy proton–proton collisions in which light nuclei (and maybe hadrons more generally) are formed. Other possibilities include improving our interpretations of cosmic-ray studies that measure the fluxes of (anti)nuclei in the galaxy – a useful probe for astrophysical processes.

The hunt is on

Intriguingly, Christakoglou suggests that the team’s technique could also be used to search for indirect signs of dark matter. Many models predict that dark-matter candidates such as Weakly Interacting Massive Particles (WIMPs) will decay or annihilate in processes that also produce Standard Model particles, including (anti)deuterons. “If for example one measures the flux of (anti)nuclei in cosmic rays being above the ‘Standard Model based’ astrophysical background, then this excess could be attributed to new physics which might be connected to dark matter,” Christakoglou tells Physics World.

Michael Kachelriess, a physicist at the Norwegian University of Science and Technology in Trondheim, Norway, who was not involved in this research, says the debate over the correct formation mechanism for light nuclei (and antinuclei) has divided particle physicists for a long time. In his view, the data collected by the ALICE collaboration decisively resolves this debate by showing that light nuclei form in the late stages of a collision via the coalescence of nucleons. Kachelriess calls this a “great achievement” in itself, and adds that similar approaches could make it possible to address other questions, such as whether thermal plasmas form in proton-proton collisions as well as in collisions between heavy ions.

The post CERN team solves decades-old mystery of light nuclei formation appeared first on Physics World.


Anyon physics could explain coexistence of superconductivity and magnetism

New calculations by physicists in the US provide deeper insights into an exotic material in which superconductivity and magnetism can coexist. Using a specialized effective field theory, Zhengyan Shi and Todadri Senthil at the Massachusetts Institute of Technology show how this coexistence can emerge from the collective states of mobile anyons in certain 2D materials.

An anyon is a quasiparticle with statistical properties that lie somewhere between those of bosons and fermions. First observed in 2D electron gases in strong magnetic fields, anyons are known for their fractional electrical charge and fractional exchange statistics, which alter the quantum state of two identical anyons when they are exchanged.

Unlike ordinary electrons, anyons produced in these early experiments could not move freely, preventing them from forming complex collective states. Yet in 2023, experiments with a twisted bilayer of molybdenum ditelluride provided the first evidence for mobile anyons through observations of fractional quantum anomalous Hall (FQAH) insulators. This effect appears as fractionally quantized electrical resistance in 2D electron systems at zero applied magnetic field.

Remarkably, these experiments revealed that molybdenum ditelluride can exhibit superconductivity and magnetism at the same time. Since superconductivity usually relies on electron pairing that can be disrupted by magnetism, this coexistence was previously thought impossible.

Anyonic quantum matter

“This then raises a new set of theoretical questions,” explains Shi. “What happens when a large number of mobile anyons are assembled together? What kind of novel ‘anyonic quantum matter’ can emerge?”

In their study, Shi and Senthil explored these questions using a new effective field theory for an FQAH insulator. Effective field theories are widely used in physics to approximate complex phenomena without modelling every microscopic detail. In this case, the duo’s model captured the competition between anyon mobility, interactions, and fractional exchange statistics in a many-body system of mobile anyons.

To test their model, the researchers considered the doping of an FQAH insulator – adding mobile anyons beyond the plateau in Hall resistance, where the existing anyons were effectively locked in place. This allowed the quasiparticles to move freely and form new collective phases.

“Crucially, we recognized that the fate of the doped state depends on the energetic hierarchy of different types of anyons,” Shi explains. “This observation allowed us to develop a powerful heuristic for predicting whether the doped state becomes a superconductor without any detailed calculations.”

In their model, Shi and Senthil focused on a specific FQAH insulator called a Jain state, which hosts two types of anyon excitations: one carries an electrical charge one-third that of an electron, the other two-thirds. In a perfectly clean system, doping the insulator with 2/3-charge anyons produced a chiral topological superconductor, a phase that is robust against disorder and features edge currents flowing in only one direction. In contrast, doping with 1/3-charge anyons produced a metal with broken translation symmetry – still conducting, but with non-uniform patterns in its electron density.

Anomalous vortex glass

“In the presence of impurities, we showed that the chiral superconductor near the superconductor–insulator transition is a novel phase of matter dubbed the ‘anomalous vortex glass’, in which patches of swirling supercurrents are sprinkled randomly across the sample,” Shi describes. “Observing this vortex glass phase would be smoking-gun evidence for the anyonic mechanism for superconductivity.”

The results suggest that even when adding the simplest kind of anyons – like those in the Jain state – the collective behaviour of these quasiparticles can enable the coexistence of magnetism and superconductivity. In future studies, the duo hopes that more advanced methods for introducing mobile anyons could reveal even more exotic phases.

“Remarkably, our theory provides a qualitative account of the phase diagram of a particular 2D material (twisted molybdenum ditelluride), although many more tests are needed to rule out other possible explanations,” Shi says. “Overall, these findings highlight the vast potential of anyonic quantum matter, suggesting a fertile ground for future discoveries.”

The research is described in PNAS.

The post Anyon physics could explain coexistence of superconductivity and magnetism appeared first on Physics World.


Shapiro steps spotted in ultracold bosonic and fermionic gases

Shapiro steps – a series of abrupt jumps in the voltage–current characteristic of a Josephson junction that is exposed to microwave radiation – have been observed for the first time in ultracold gases by groups in Germany and Italy. Their work on atomic Josephson junctions provides new insights into the phenomenon, and could lead to a standard for chemical potential.

In 1962 Brian Josephson of the University of Cambridge calculated that, if two superconductors were separated by a thin insulating barrier, the phase difference between the superconducting wavefunctions on either side should drive a tunnelling supercurrent across the barrier, even at zero potential difference.

A year later, Sidney Shapiro and colleagues at the consultants Arthur D. Little showed that inducing an alternating electric current using a microwave field causes the phase of the wavefunction on either side of a Josephson junction to evolve at different rates, leading to quantized increases in potential difference across the junction. The height of these “Shapiro steps” depends only on the applied frequency of the field and the electrical charge. This is now used as a reference standard for the volt.
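In a superconducting junction the nth step sits at a voltage V_n = nhf/2e, where f is the microwave drive frequency, h is the Planck constant and 2e is the charge of a Cooper pair – so the step height is fixed by the frequency and fundamental constants alone. A minimal sketch of the numbers involved (the 10 GHz drive frequency is just an illustrative choice):

```python
# Shapiro step voltages in a superconducting Josephson junction: V_n = n * h * f / (2e).
# The factor 2e reflects the Cooper pairs carrying the supercurrent.
h = 6.62607015e-34     # Planck constant, J s
e = 1.602176634e-19    # elementary charge, C

def shapiro_voltage(n, frequency_hz):
    """Voltage of the nth Shapiro step for a microwave drive at frequency_hz."""
    return n * h * frequency_hz / (2 * e)

f_drive = 10e9         # an illustrative 10 GHz microwave drive
for n in range(1, 4):
    print(f"Step {n}: {shapiro_voltage(n, f_drive) * 1e6:.2f} microvolts")
```

Because the result depends only on f, h and e, measuring the step voltages at a known drive frequency pins down the volt.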

Researchers have subsequently developed analogues of Josephson junctions in other systems such as liquid helium and ultracold atomic gases. In the new work, two groups have independently observed Shapiro steps in ultracold quantum gases. Instead of placing a fixed insulator in the centre and driving the system with a field, the researchers used focused laser beams to create potential barriers that divided the traps into two. Then they moved the positions of the barriers to alter the potentials of the atoms on either side.

Current emulation

“If we move the atoms with a constant velocity, that means there’s a constant velocity of atoms through the barrier,” says Herwig Ott of RPTU University Kaiserslautern-Landau in Germany, who led one of the groups. “This is how we emulate a DC current. Now for the Shapiro protocol you have to apply an AC current, and the AC current you simply get by modulating your barrier in time.”

Ott and colleagues in Kaiserslautern, in collaboration with researchers in Hamburg and the United Arab Emirates (UAE), used a Bose–Einstein condensate (BEC) of rubidium-87 atoms. Meanwhile in Italy, Giulia Del Pace of the European Laboratory for Nonlinear Spectroscopy at the University of Florence and colleagues (including the same UAE collaborators) studied ultracold lithium-6 atoms, which are fermions.

Both groups observed the theoretically-predicted Shapiro steps, but Ott and Del Pace explain that these observations do not simply confirm predictions. “The message is that no matter what your microscopic mechanism is, the phenomenon of Shapiro steps is universal,” says Ott. In superconductors, the Shapiro steps are caused by the breaking of Cooper pairs; in ultracold atomic gases, vortex rings are created. Nevertheless, the same mathematics applies. “This is really quite remarkable,” says Ott.

Del Pace says it was unclear whether Shapiro steps would be seen in strongly-interacting fermions, which are “way more interacting than the electrons in superconductors”. She asks, “Is it a limitation to have strong interactions or is it something that actually helps the dynamics to happen? It turns out it’s the latter.”

Magnetic tuning

Del Pace’s group applied a variable magnetic field to tune their system between a BEC of molecules, a system dominated by Cooper pairs, and a unitary Fermi gas in which the particles interact as strongly as quantum mechanics permits. The size of the Shapiro steps depended on the strength of the interparticle interaction.

Ott and Del Pace both suggest that this effect could be used to create a reference standard for chemical potential – a measure of the strength of the atomic interaction (or equation of state) in a system.

“This equation of state is very well known for a BEC or for a strongly interacting Fermi gas…but there is a range of interaction strengths where the equation of state is completely unknown, so one can imagine taking inspiration from the way Josephson junctions are used in superconductors and using atomic Josephson junctions to study the equation of state in systems where the equation of state is not known,” explains Del Pace.

The two papers are published side by side in Science: Del Pace and Ott.

Rocío Jáuregui Renaud of the National Autonomous University of Mexico is impressed, especially by the demonstration in both bosons and fermions. “The two papers are important, and they are congruent in their results, but the platform is different,” she says. “At this point, the idea is not to give more information directly about superconductivity, but to learn more about phenomena that sometimes you are not able to see in electronic systems but you would probably see in neutral atoms.”

The post Shapiro steps spotted in ultracold bosonic and fermionic gases appeared first on Physics World.


Organic LED can electrically switch the handedness of emitted light

Circularly polarized (CP) light is encoded with information through its photon spin and can be utilized in applications such as low-power displays, encrypted communications and quantum technologies. Organic light emitting diodes (OLEDs) produce CP light with a left or right “handedness”, depending on the chirality of the light-emitting molecules used to create the device.

While OLEDs usually only emit either left- or right-handed CP light, researchers have now developed OLEDs that can electrically switch between emitting left- or right-handed CP light – without needing different molecules for each handedness.

“We had recently identified an alternative mechanism for the emission of circularly polarized light in OLEDs, using our chiral polymer materials, which we called anomalous circularly polarized electroluminescence,” says lead author Matthew Fuchter from the University of Oxford. “We set about trying to better understand the interplay between this new mechanism and the generally established mechanism for circularly polarized emission in the same chiral materials”.

Light handedness controlled by molecular chirality

The CP light handedness of an organic emissive molecule is controlled by its chirality. A chiral molecule is one that exists as two mirror-image structural forms that cannot be superimposed on each other. Each of these non-superimposable forms is called an enantiomer, and will absorb, emit and refract CP light with a defined spin angular momentum. Each enantiomer produces CP light with a different handedness, through an optical mechanism called normal circularly polarized electroluminescence (NCPE).

OLED designs typically require access to both enantiomers, but most chemical synthesis processes will produce racemic mixtures (equal amounts of the two enantiomers) that are difficult to separate. Extracting each enantiomer so that they can be used individually is complex and expensive, but the research at Oxford has simplified this process by using a molecule that can switch between emitting left- and right-handed CP light.

The molecule in question is a helical molecule called (P)-aza[6]helicene, which is the right-handed enantiomer. Even though this is just a one-handed form, the researchers found a way to control the handedness of the OLED’s emission, enabling it to switch between left- and right-handed CP light.

Switching handedness without changing the structure

The researchers designed the helicene molecules so that the handedness of the light could be switched electrically, without needing to change the structure of the material itself. “Our work shows that either handedness can be accessed from a single-handed chiral material without changing the composition or thickness of the emissive layer,” says Fuchter. “From a practical standpoint, this approach could have advantages in future circularly polarized OLED technologies.”

Instead of making a structural change, the researchers changed the way that electric charges recombine in the device, using interlayers to alter the recombination position and charge-carrier mobility. Depending on where the recombination zone sits, charge transport is either balanced or unbalanced, and this determines the handedness of the CP light the device emits.

When the recombination zone is located in the centre of the emissive layer, the charge transport is balanced and emission proceeds via the NCPE mechanism. In this situation, the helicene emits with its normal (right) handedness.

However, when the recombination zone is located close to one of the transport layers, it creates an unbalanced charge transport mechanism called anomalous circularly polarized electroluminescence (ACPE). The ACPE overrides the NCPE mechanism and inverts the handedness of the device to left handedness by altering the balance of induced orbital angular momentum in electrons versus holes. The presence of these two electroluminescence mechanisms in the device enables it to be controlled electrically by tuning the charge carrier mobility and the recombination zone position.

The research allows the creation of OLEDs with controllable spin angular momentum information using a single emissive enantiomer, while probing the fundamental physics of chiral optoelectronics. “This work contributes to the growing body of evidence suggesting further rich physics at the intersection of chirality, charge and spin. We have many ongoing projects to try and understand and exploit such interplay,” Fuchter concludes.

The researchers describe their findings in Nature Photonics.

The post Organic LED can electrically switch the handedness of emitted light appeared first on Physics World.


Francis Crick: a life of twists and turns

Physicist, molecular biologist, neuroscientist: Francis Crick’s scientific career took many turns. And now, he is the subject of zoologist Matthew Cobb’s new book, Crick: a Mind in Motion – from DNA to the Brain.

Born in 1916, Crick studied physics at University College London in the mid-1930s, before working for the Admiralty Research Laboratory during the Second World War. But after reading physicist Erwin Schrödinger’s 1944 book What Is Life? The Physical Aspect of the Living Cell, and a 1946 article on the structure of biological molecules by chemist Linus Pauling, Crick left his career in physics and switched to molecular biology in 1947.

Six years later, while working at the University of Cambridge, he played a key role in decoding the double-helix structure of DNA, working in collaboration with biologist James Watson, biophysicist Maurice Wilkins and other researchers including chemist and X-ray crystallographer Rosalind Franklin. Crick, alongside Watson and Wilkins, went on to receive the 1962 Nobel Prize in Physiology or Medicine for the discovery.

Finally, Crick’s career took one more turn in the mid-1970s. After experiencing a mental health crisis, Crick left Britain and moved to California. He took up neuroscience in an attempt to understand the roots of human consciousness, as discussed in his 1994 book, The Astonishing Hypothesis: the Scientific Search for the Soul.

Parallel lives

When he died in 2004, Crick’s office wall at the Salk Institute in La Jolla, US, carried portraits of Charles Darwin and Albert Einstein, as Cobb notes on the final page of his deeply researched and intellectually fascinating biography. But curiously, there is not a single other reference to Einstein in Cobb’s massive book. Furthermore, there is no reference at all to Einstein in the equally large 2009 biography of Crick, Francis Crick: Hunter of Life’s Secrets, by historian of science Robert Olby, who – unlike Cobb – knew Crick personally.

Nevertheless, a comparison of Crick and Einstein is illuminating. Crick’s family background (in the shoe industry), and his childhood and youth are in some ways reminiscent of Einstein’s. Both physicists came from provincial business families of limited financial success, with some interest in science yet little intellectual distinction. Both did moderately well at school and college, but were not academic stars. And both were exposed to established religion, but rejected it in their teens; they had little intrinsic respect for authority, without being open rebels until later in life.

The similarities continue into adulthood, with the two men following unconventional early scientific careers. Both of them were extroverts who loved to debate ideas with fellow scientists (at times devastatingly), although they were equally capable of long, solitary periods of concentration throughout their careers. In middle age, they migrated from their home countries – Germany (Einstein) and Britain (Crick) – to take up academic positions in the US, where they were much admired and inspiring to other scientists, but failed to match their earlier scientific achievements.

In their personal lives, both Crick and Einstein had a complicated history with women. Having divorced their first wives, they had a variety of extramarital affairs – as discussed by Cobb without revealing the names of these women – while remaining married to their second wives. Interestingly, Crick’s second wife, Odile Crick (whom he was married to for 55 years) was an artist, and drew the famous schematic drawing of the double helix published in Nature in 1953.

Stories of friendships

Although Cobb misses this fascinating comparison with Einstein, many other vivid stories light up his book. For example, he recounts Watson’s claim that just after their success with DNA in 1953, “Francis winged into the Eagle [their local pub in Cambridge] to tell everyone within hearing distance that we had found the secret of life” – a story that later appeared on a plaque outside the pub.

“Francis always denied he said anything of the sort,” notes Cobb, “and in 2016, at a celebration of the centenary of Crick’s birth, Watson publicly admitted that he had made it up for dramatic effect (a few years earlier, he had confessed as much to Kindra Crick, Francis’s granddaughter).” No wonder Watson’s much-read 1968 book The Double Helix caused a furious reaction from Crick and a temporary breakdown in their friendship, as Cobb dissects in excoriating detail.

Watson’s deprecatory comments on Franklin helped to provoke the current widespread belief that Crick and Watson succeeded by stealing Franklin’s data. After an extensive analysis of the available evidence, however, Cobb argues that the data was willingly shared with them by Franklin, but that they should have formally asked her permission to use it in their published work – “Ambition, or thoughtlessness, stayed their hand.”

In fact, it seems Crick and Franklin were friends in 1953, and remained so – with Franklin asking Crick for his advice on her draft scientific papers – until her premature death from ovarian cancer in 1958. Indeed, after her first surgery in 1956, Franklin went to stay with Crick and his wife at their house in Cambridge, and then returned to them after her second operation. There certainly appears to be no breakdown in trust between the two. When Crick was nominated for the Nobel prize in 1961, he openly stated, “The data which really helped us obtain the structure was mainly obtained by Rosalind Franklin.”

As for Crick’s later study of consciousness, Cobb comments, “It would be easy to dismiss Crick’s switch to studying the brain as the quixotic project of an ageing scientist who did not know his limits. After all, he did not make any decisive breakthrough in understanding the brain – nothing like the double helix… But then again, nobody else did, in Crick’s lifetime or since.” One is perhaps reminded once again of Einstein, and his preoccupation during later life with his unified field theory, which remains an open line of research today.

  • 2025 Profile Books £30.00hb 595pp

The post Francis Crick: a life of twists and turns appeared first on Physics World.


A theoretical physicist’s journey through the food and drink industry

Rob Farr is a theorist and computer modeller whose career has taken him down an unconventional path. He studied physics at the University of Cambridge, UK, from 1991 to 1994, staying on to do a PhD in statistical physics. But while many of his contemporaries then went into traditional research fields – such as quantum science, high-energy physics and photonic technologies – Farr got a taste for the food and drink manufacturing industry. It’s a multidisciplinary field in which Farr has worked for more than 25 years.

After he left academia in 1998, his first stop was Unilever’s €13bn foods division. For two decades, latterly as a senior scientist, Farr guided R&D teams working across diverse lines of enquiry – “doing the science, doing the modelling”, as he puts it. Along the way, Farr worked on all manner of consumer products including ice-cream, margarine and non-dairy spreads, as well as “dry” goods such as bouillon cubes. There was also the occasional foray into cosmetics, skin creams and other non-food products.

As a theoretical physicist working in industrial-scale food production, Farr’s focus has always been on the materials science of the end-product and how it gets processed. “Put simply,” says Farr, “that means making production as efficient as possible – regarding both energy and materials use – while developing ‘new customer experiences’ in terms of food taste, texture and appearance.” 

Ice-cream physics

One tasty multiphysics problem that preoccupied Farr for a good chunk of his time at Unilever is ice cream. It is a hugely complex material that Farr likens to a high-temperature ceramic, in the sense that the crystalline part of it is stored very near to the melting point of ice. “Equally, the non-ice phase contains fats,” he says, “so there’s all sorts of emulsion physics and surface science to take into consideration.”

Ice cream also has polymers in the mix, so theoretical modelling needs to incorporate the complex physics of polymer–polymer phase separation as well as polymer flow, or “rheology”, which contributes to the product’s texture and material properties. “Air is another significant component of ice cream,” adds Farr, “which means it’s a foam as well as an emulsion.”

As well as trying to understand how all these subcomponents interact, there’s also the thorny issue of storage. After it’s produced, ice cream is typically kept at low temperatures of about –25 °C – first in the factory, then in transit and finally in a supermarket freezer. But once that tub of salted-caramel or mint choc chip reaches a consumer’s home, it’s likely to be popped in the ice compartment of a fridge freezer at a much milder –6 or –7 °C.

Manufacturers therefore need to control how those temperature transitions affect the recrystallization of ice. This unwanted outcome can lead to phenomena like “sintering” (which makes a harder product) and “ripening” (which can lead to big ice crystals that can be detected in the mouth and detract from the creamy texture).

“Basically, the whole panoply of soft-matter physics comes into play across the production, transport and storage of ice cream,” says Farr. “Figuring out what sort of materials systems will lead to better storage stability or a more consistent product texture are non-trivial questions given that the global market for ice cream is worth in excess of €100bn annually.”

A shot of coffee?

After almost 20 years working at Unilever, in 2017 Farr took up a role as coffee science expert at JDE Peet’s, the Dutch multinational coffee and tea company. Switching from the chilly depths of ice cream science to the dark arts of coffee production and brewing might seem like a steep career phase change, but the physics of the former provides a solid bridge to the latter.

The overlap is evident, for example, in how instant coffee gets freeze-dried – a low-temperature dehydration process that manufacturers use to extend the shelf-life of perishable materials and make them easier to transport. In the case of coffee, freeze drying (or lyophilization, as it’s commonly known) also helps to retain flavour and aromas.

If you want to study a parameter space that’s not been explored before, the only way to do that is to simulate the core processes using fundamental physics

After roasting and grinding the raw coffee beans, manufacturers extract a coffee concentrate using high pressure and water. This extract is then frozen, ground up and placed in a vacuum well below 0 °C. A small amount of heat is applied to sublime the ice away and remove the remaining water from the non-ice phase.

The quality of the resulting freeze-dried instant coffee is better than ordinary instant coffee. However, freeze-drying is also a complex and expensive process, which manufacturers seek to fine-tune by implementing statistical methods to optimize, for example, the amount of energy consumed during production.

Such approaches involve interpolating the gaps between existing experimental data sets, which is where a physics mind-set comes in. “If you want to study a parameter space that’s not been explored before,” says Farr, “the only way to do that is to simulate the core processes using fundamental physics.”
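As a generic illustration of that “fill the gaps” step – and emphatically not JDE Peet’s actual workflow – the sketch below interpolates between a handful of hypothetical freeze-drying experiments to estimate energy use at an untried operating point. The settings, units and energy values are all invented.

```python
# Illustrative only: interpolating between a few (shelf temperature, chamber pressure)
# freeze-drying experiments to estimate energy use at untried settings. All data are invented.
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical measurements: columns are shelf temperature (degC) and chamber pressure (Pa)
settings = np.array([[-30.0, 10.0], [-30.0, 50.0], [-20.0, 10.0], [-20.0, 50.0], [-10.0, 30.0]])
energy_kwh = np.array([12.0, 11.1, 10.3, 9.6, 8.9])   # energy per batch at each setting

model = RBFInterpolator(settings, energy_kwh)

# Query an untested operating point inside the measured range
untested = np.array([[-25.0, 30.0]])
print(f"Estimated energy use at -25 degC, 30 Pa: {model(untested)[0]:.2f} kWh")
```

Where the question lies outside the range of existing data, as Farr notes, interpolation is no longer enough and the underlying physics has to be simulated directly.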

Beyond the production line, Farr has also sought to make coffee more stable when it’s stored at home. Sustainability is the big driver here: JDE Peet’s has committed to make all its packaging compostable, recyclable or reusable by 2030. “Shelf-life prediction has been a big part of this R&D initiative,” he explains. “The work entails using materials science and the physics of mass transfer to develop next-generation packaging and container systems.”

Line of sight

After eight years unpacking the secrets of coffee physics at JDE Peet’s, Farr was given the option to relocate to the Netherlands in mid-2025 as part of a wider reorganization of the manufacturer’s corporate R&D function. However, he decided to stay put in Oxford and is now deciding between another role in the food manufacturing sector, or moving into a new area of research, such as nuclear energy, or even education.

Rob Farr stood in front of a blackboard
Cool science “The whole panoply of soft-matter physics comes into play across the production, transport and storage of ice-cream,” says industrial physicist Rob Farr. (Courtesy: London Institute for Mathematical Sciences)

Farr believes he gained a lot from his time at JDE Peet’s. As well as studying a wide range of physics problems, he also benefited from the company’s rigorous approach to R&D, whereby projects are regularly assessed for profitability and quickly killed off if they don’t make the cut. Such prioritization avoids wasted effort and investment, but it also demands agility from staff scientists, who have to build long-term research strategies against a project landscape in constant flux.

A senior scientist needs to be someone who colleagues come to informally to discuss their technical challenges

To thrive in that setting, Farr says collaboration and an open mind are essential. “A senior scientist needs to be someone who colleagues come to informally to discuss their technical challenges,” he says. “You can then find the scientific question which underpins seemingly disparate problems and work with colleagues to deliver commercially useful solutions.” For Farr, it’s a self-reinforcing dynamic. “As more people come to you, the more helpful you become – and I love that way of working.”

What Farr calls “line-of-sight” is another unique feature of industrial R&D in food materials. “Maybe you’re only building one span of a really long bridge,” he notes, “but when you can see the process end-to-end, as well as your part in it, that is a fantastic motivator.” Indeed, Farr believes that for physicists who want a job doing something useful, the physics of food materials makes a great career. “There are,” he concludes, “no end of intriguing and challenging research questions.”

The post A theoretical physicist’s journey through the food and drink industry appeared first on Physics World.


Band-aid like wearable sensor continuously monitors foetal movement

Pressure and strain sensors on a clinical trial volunteer
Multimodal monitoring Pressure and strain sensors on a clinical trial volunteer undergoing an ultrasound scan (left). Snapshot image of the ultrasound video recording (right). (Courtesy: Yap et al., Sci. Adv. 11 eady2661)

The ability to continuously monitor and interpret foetal movement patterns in the third trimester of a pregnancy could help detect any potential complications and improve foetal wellbeing. Currently, however, such assessment of foetal movement is performed only periodically, with an ultrasound exam at a hospital or clinic.

A lightweight, easily wearable, adhesive patch-based sensor developed by engineers and obstetricians at Monash University in Australia may change this. The patches, two of which are worn on the abdomen, can detect foetal movements such as kicking, waving, hiccups, breathing, twitching, and head and trunk motion.

Reduced foetal movement can be associated with potential impairment in the central nervous system and musculoskeletal system, and is a common feature observed in pregnancies that end in foetal death and stillbirth. A foetus compromised in utero may reduce movements as a compensatory strategy to lower oxygen consumption and conserve energy.

To help identify foetuses at risk of complications, the Monash team developed an artificial intelligence (AI)-powered wearable pressure–strain combo sensor system that continuously and accurately detects foetal movement-induced motion in the mother’s abdominal skin. As reported in Science Advances, the “band-aid”-like sensors can discriminate between foetal and non-foetal movement with over 90% accuracy.

The system comprises two soft, thin and flexible patches designed to conform to the abdomen of a pregnant woman. One patch incorporates an octagonal gold nanowire-based strain sensor (the “Octa” sensor), while the other is an interdigitated electrode-based pressure sensor.

Pressure and strain combo sensor system
Pressure and strain combo Photograph of the sensors on a pregnant mother (A). Exploded illustration of the foetal kicks strain sensor (B) and the pressure sensor (C). Dimensions of the strain (D) and pressure (E) sensors. (Courtesy: Yap et al., Sci. Adv. 11 eady2661)

The patches feature a soft polyimide-based flexible printed circuit (FPC) that integrates a thin lithium polymer battery and various integrated circuit chips, including a Bluetooth radiofrequency system for reading the sensor’s electrical resistance, storing data and communicating with a smartphone app. Each patch is encapsulated with kinesiology tape and sticks to the abdomen using a medical double-sided silicone adhesive.

The Octa sensor is attached to a separate FPC connector on the primary device, enabling easy replacement after each study. The pressure sensor is mounted on the silicone adhesive, to connect with the interdigitated electrode beneath the primary device. The Octa and pressure sensor patches are lightweight (about 3 g) and compact, measuring 63 x 30 x 4 mm and 62 x 28 x 2 mm, respectively.

Trialling the device

The researchers validated their foetal movement monitoring system via comparison with simultaneous ultrasound exams, examining 59 healthy pregnant women at Monash Health. Each participant had the pressure sensor attached to the area of their abdomen where they felt the most vigorous foetal movements, typically in the lower quadrant, while the strain sensor was attached to the region closest to foetal limbs. An accelerometer placed on the participant’s chest captured non-foetal movement data for signal denoising and training the machine-learning model.

Principal investigator Wenlong Cheng, now at the University of Sydney, and colleagues report that “the wearable strain sensor featured isotropic omnidirectional sensitivity, enabling detection of maternal abdominal [motion] over a large area, whereas the wearable pressure sensor offered high sensitivity with a small domain, advantageous for accurate localized foetal movement detection”.

The researchers note that the pressure sensor demonstrated higher sensitivity to movements directly beneath it compared with motion farther away, while the Octa sensor performed consistently across a wider sensing area. “The combination of both sensor types resulted in a substantial performance enhancement, yielding an overall AUROC [area under the receiver operating characteristic curve] accuracy of 92.18% in binary detection of foetal movement, illustrating the potential of combining diverse sensing modalities to achieve more accurate and reliable monitoring outcomes,” they write.
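As a rough illustration of how such an AUROC figure is computed – not the Monash team’s actual model or data – the sketch below trains a simple binary classifier on synthetic features standing in for the strain and pressure channels and scores it on held-out windows.

```python
# Minimal sketch of the evaluation quoted above: train a binary foetal-movement classifier
# on features from two sensor channels and report the AUROC. Features, labels and the model
# choice are synthetic stand-ins, not the published pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_windows = 1000                         # short time windows of abdominal-signal data
labels = rng.integers(0, 2, n_windows)   # 1 = foetal movement present, 0 = absent

# Pretend features: one summary statistic per channel, each weakly informative on its own
strain_feature = labels + rng.normal(0, 1.0, n_windows)
pressure_feature = labels + rng.normal(0, 1.2, n_windows)
X = np.column_stack([strain_feature, pressure_feature])

X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]
print(f"AUROC on held-out windows: {roc_auc_score(y_test, scores):.3f}")
```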

In a press statement, co-author Fae Marzbanrad explains that the device’s strength lies in a combination of soft sensing materials, intelligent signal processing and AI. “Different foetal movements create distinct strain patterns on the abdominal surface, and these are captured by the two sensors,” she says. “The machine-learning system uses the signals to detect when movement occurs while cancelling maternal movements.”

The lightweight and flexible device can be worn by pregnant women for long periods without disrupting daily life. “By integrating sensor data with AI, the system automatically captures a wider range of foetal movements than existing wearable concepts while staying compact and comfortable,” Marzbanrad adds.

The next steps towards commercialization of the sensors will include large-scale clinical studies in out-of-hospital settings, to evaluate foetal movements and investigate the relationship between movement patterns and pregnancy complications.

The post Band-aid like wearable sensor continuously monitors foetal movement appeared first on Physics World.


Unlocking novel radiation beams for cancer treatment with upright patient positioning

Since the beginning of radiation therapy, almost all treatments have been delivered with the patient lying on a table while the beam rotates around them. But a resurgence in upright patient positioning is changing that paradigm. The accelerators behind novel treatments such as proton therapy, very high energy electron (VHEE) therapy and FLASH therapy are often too large to rotate around the patient, limiting access to these beams. By rotating the patient instead, these previously hard-to-access beams could become mainstream in the future.

Join leading clinicians and experts as they discuss how this shift in patient positioning is enabling exploration of new treatment geometries and supporting the development of advanced future cancer therapies.

L-R Serdar Charyyev, Eric Deutsch, Bill Loo, Rock Mackie

Novel beams covered and their representative speaker

Serdar Charyyev – Proton Therapy – Clinical Assistant Professor at Stanford University School of Medicine
Eric Deutsch – VHEE FLASH – Head of Radiotherapy at Gustave Roussy
Bill Loo – FLASH Photons – Professor of Radiation Oncology at Stanford Medicine
Rock Mackie – Emeritus Professor at University of Wisconsin and Co-Founder and Chairman of Leo Cancer Care

The post Unlocking novel radiation beams for cancer treatment with upright patient positioning appeared first on Physics World.


The environmental and climate cost of war

Despite not being close to the frontline of Russia’s military assault on Ukraine, life at the Ivano-Frankivsk National Technical University of Oil and Gas is far from peaceful. “While we continue teaching and research, we operate under constant uncertainty – air raid alerts, electricity outages – and the emotional toll on staff and students,” says Lidiia Davybida, an associate professor of geodesy and land management.

Last year, the university became the target of a Russian missile strike, causing extensive damage to buildings that has still not been fully repaired – although, fortunately, no casualties were reported. The university also continues to lose staff and students to the war effort – some of whom will tragically never return – while new student numbers dwindle as many school graduates leave Ukraine to study abroad.

Despite these major challenges, Davybida and her colleagues remain resolute. “We adapt – moving lectures online when needed, adjusting schedules, and finding ways to keep research going despite limited opportunities and reduced funding,” she says.

Resolute research

Davybida’s research focuses on environmental monitoring using geographic information systems (GIS), geospatial analysis and remote sensing. She has been using these techniques to monitor the devastating impact that the war is having on the environment and its significant contribution to climate change.

In 2023 she published results from using Sentinel-5P satellite data and Google Earth Engine to monitor the air quality impacts of war on Ukraine (IOP Conf. Ser.: Earth Environ. Sci. 1254 012112). As with the COVID-19 lockdowns worldwide, her results reveal that levels of common pollutants such as carbon monoxide, nitrogen dioxide and sulphur dioxide were, on average, down from pre-invasion levels. This reflects the temporary disruption to economic activity that war has brought on the country.
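The general workflow behind such satellite-based air-quality studies – building an image collection, averaging it over a period and reducing it over a region – can be sketched with the Google Earth Engine Python API. The dataset ID, band name, dates and bounding box below are assumptions for illustration (they should be checked against the Earth Engine catalogue), and this is not the published analysis.

```python
# Sketch of the Sentinel-5P / Google Earth Engine pattern described above.
# Dataset ID, band name, dates and region are illustrative assumptions only.
import ee

ee.Initialize()

region = ee.Geometry.Rectangle([22.0, 44.0, 40.5, 52.5])   # rough bounding box around Ukraine

no2 = (
    ee.ImageCollection('COPERNICUS/S5P/OFFL/L3_NO2')
    .select('tropospheric_NO2_column_number_density')
    .filterDate('2022-03-01', '2022-06-01')
    .mean()
)

stats = no2.reduceRegion(
    reducer=ee.Reducer.mean(),
    geometry=region,
    scale=10000,        # 10 km pixels are sufficient for a country-scale average
    maxPixels=1e9,
)
print(stats.getInfo())  # mean tropospheric NO2 column over the region and period
```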

Rescue workers lift an elder person on a stretcher out of flood water
Wider consequences Ukrainian military, emergency services and volunteers work together to rescue people from a large flooded area in Kherson on 8 June 2023. Two days earlier, the Russian army blew up the dam of the Kakhovka hydroelectric power station, meaning about 80 settlements in the flood zone had to be evacuated. (Courtesy: Sergei Chuzavkov/SOPPA Images/Shutterstock)

More worrying, from an environment and climate perspective, were the huge concentrations of aerosols, smoke and dust in the atmosphere. “High ozone concentrations damage sensitive vegetation and crops,” Davybida explains. “Aerosols generated by explosions and fires may carry harmful substances such as heavy metals and toxic chemicals, further increasing environmental contamination.” She adds that these pollutants can alter sunlight absorption and scattering, potentially disrupting local climate and weather patterns, and contributing to long-term ecological imbalances.

A significant toll has been wrought by individual military events too. A prime example is Russia’s destruction of the Kakhovka Dam in southern Ukraine in June 2023. An international team – including Ukrainian researchers – recently attempted to quantify this damage by combining on-the-ground field surveys, remote-sensing data and hydrodynamic modelling – a tool used to predict water flow and pollutant dispersion.

The results of this work are sobering (Science 387 1181). Though 80% of the ecosystem is expected to re-establish itself within five years, the dam’s destruction released as much as 1.7 cubic kilometres of sediment contaminated by a host of persistent pollutants, including nitrogen, phosphorus and 83,000 tonnes of heavy metals. Discharging this toxic sludge across the land and waterways will have unknown long-term environmental consequences for the region, as the contaminants could be spread by future floods, the researchers concluded (figure 1).

1 Dam destruction

Map of Ukraine with a large area of coastline highlighted in orange and smaller inland areas highlighted green
(Reused with permission from Science 387 1181 10.1126/science.adn8655)

This map shows areas of Ukraine affected or threatened by dam destruction in military operations. Arabic numbers 1 to 6 indicate rivers: Irpen, Oskil, Inhulets, Dnipro, Dnipro-Bug Estuary and Dniester, respectively. Roman numbers I to VII indicate large reservoir facilities: Kyiv, Kaniv, Kremenchuk, Kaminske, Dnipro, Kakhovka and Dniester, respectively. Letters A to C indicate nuclear power plants: Chornobyl, Zaporizhzhia and South Ukraine, respectively.

Dangerous data

A large part of the reason for the researchers’ uncertainty, and indeed more general uncertainty in environmental and climate impacts of war, stems from data scarcity. It is near-impossible for scientists to enter an active warzone to collect samples and conduct surveys and experiments. Environmental monitoring stations also get damaged and destroyed during conflict, explains Davybida – a wrong she is attempting to right in her current work. Many efforts to monitor, measure and hopefully mitigate the environmental and climate impact of the war in Ukraine are therefore less direct.

In 2022, for example, climate-policy researcher Mathijs Harmsen from the PBL Netherlands Environmental Assessment Agency and international collaborators decided to study the global energy crisis (which was sparked by Russia’s invasion of Ukraine) to look at how the war will alter climate policy (Environ. Res. Lett. 19 124088).

They did this by plugging in the most recent energy price, trade and policy data (up to May 2023) into an integrated assessment model that simulates the environmental consequences of human activities worldwide. They then imposed different potential scenarios and outcomes and let it run to 2030 and 2050. Surprisingly, all scenarios led to a global reduction of 1–5% of carbon dioxide emissions by 2030, largely due to trade barriers increasing fossil fuel prices, which in turn would lead to increased uptake of renewables.

But even though the sophisticated model represents the global energy system in detail, some factors are hard to incorporate and some actions can transform the picture completely, argues Harmsen. “Despite our results, I think the net effect of this whole war is a negative one, because it doesn’t really build trust or add to any global collaboration, which is what we need to move to a more renewable world,” he says. “Also, the recent intensification of Ukraine’s ‘kinetic sanctions’ [attacks on refineries and other fossil fuel infrastructure] will likely have a larger effect than anything we explored in our paper.”

Elsewhere, Toru Kobayakawa was, until recently, working for the Japan International Cooperation Agency (JICA), leading the Ukraine support team. Kobayakawa used a non-standard method to more realistically estimate the carbon footprint of reconstructing Ukraine when the war ends (Environ. Res.: Infrastruct. Sustain. 5 015015). The Intergovernmental Panel on Climate Change (IPCC) and other international bodies only account for carbon emissions generated within a country’s own territory. “The consumption-based model I use accounts for the concealed carbon dioxide from the production of construction materials like concrete and steel imported from outside of the country,” he says.

Using Eora26, an open-source database that tracks financial flows between countries’ major economic sectors in simple input–output tables, Kobayakawa calculated that Ukraine’s post-war reconstruction will generate 741 million tonnes of carbon dioxide equivalent over 10 years. This is 4.1 times Ukraine’s pre-war annual carbon-dioxide emissions, or the combined annual emissions of Germany and Austria.
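Consumption-based accounting of this kind rests on the standard Leontief input–output relation x = (I − A)⁻¹y, where A holds the technical coefficients, y is final demand and the resulting output vector x is multiplied by sector emission intensities to attribute emissions – including those embodied in imports – to final consumption. A toy two-sector version with invented numbers (Eora26 works on the same principle across many countries and sectors):

```python
# Toy consumption-based emissions calculation using the Leontief relation x = (I - A)^-1 y.
# The technical coefficients, final demand and emission intensities are invented.
import numpy as np

A = np.array([[0.10, 0.30],    # inputs needed per unit of output
              [0.20, 0.05]])   # sectors: 0 = construction materials, 1 = construction
y = np.array([0.0, 100.0])     # final demand: 100 units of reconstruction work

x = np.linalg.solve(np.eye(2) - A, y)   # total output required across both sectors

f = np.array([2.0, 0.5])       # emission intensity per unit output (tCO2e per unit, invented)
total_emissions = f @ x

print(f"Total output by sector: {x.round(1)}")
print(f"Consumption-based emissions: {total_emissions:.1f} tCO2e")
```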

However, as with most war-related findings, these figures come with a caveat. “Our input–output model doesn’t take into account the current situation,” notes Kobayakawa. “It is the worst-case scenario.” Nevertheless, the research has provided useful insights, such as that the Ukrainian construction industry will account for 77% of total emissions.

“Their construction industry is notorious for inefficiency, needing frequent rework, which incurs additional costs, as well as additional carbon-dioxide emissions,” he says. “So, if they can improve efficiency by modernizing construction processes and implementing large-scale recycling of construction materials, that will contribute to reducing emissions during the reconstruction phase and ensure that they build back better.”

Military emissions gap

As the experiences of Davybida, Harmsen and Kobayakawa show, cobbling together relevant and reliable data in the midst of war is a significant challenge, from which only limited conclusions can be drawn. Researchers and policymakers need a fuller view of the environmental and climate cost of war if they are to improve matters once a conflict ends.

That’s certainly the view of Benjamin Neimark, who studies geopolitical ecology at Queen Mary University of London. He has been trying for some time to tackle the fact that the biggest data gap preventing accurate estimates of the climate and environmental cost of war is military emissions. During the 2021 United Nations Climate Change Conference (COP26), for example, he and colleagues partnered with the Conflict and Environment Observatory (CEOBS) to launch The Military Emissions Gap, a website to track and trace what a country accounts for as its military emissions to the United Nations Framework Convention on Climate Change (UNFCCC).

At present, reporting military emissions is voluntary, so data are often absent or incomplete – but gathering such data is vital. According to a 2022 estimate extrapolated from the small number of nations that do share their data, the total military carbon footprint is approximately 5.5% of global emissions. This would make the world’s militaries the fourth biggest carbon emitter if they were a nation.

The website is an attempt to fill this gap. “We hope that the UNFCCC picks up on this and mandates transparent and visible reporting of military emissions,” Neimark says (figure 2).

2 Closing the data gap

Five sets of icons indicating categories of military and conflict-related carbon emissions
(Reused with permission from Neimark et al. 2025 War on the Climate: A Multitemporal Study of Greenhouse Gas Emissions of the Israel–Gaza Conflict. Available at SSRN)

Current United Nations Framework Convention on Climate Change (UNFCCC) greenhouse-gas emissions reporting obligations do not cover all the possible types of conflict emissions, and there is no commonly agreed methodology or scope for how different countries collect emissions data. In a recent publication, War on the Climate: a Multitemporal Study of Greenhouse Gas Emissions of the Israel–Gaza Conflict, Benjamin Neimark et al. devised this framework using the UNFCCC’s existing protocols. The reporting categories cover militaries and armed conflicts, and aim to highlight previously “hidden” emissions.

Measuring the destruction

Beyond plugging the military emissions gap, Neimark is also involved in developing and testing methods that he and other researchers can use to estimate the overall climate impact of war. Building on foundational work from his collaborator, Dutch climate specialist Lennard de Klerk – who developed a methodology for identifying, classifying and providing ways of estimating the various sources of emissions associated with the Russia–Ukraine war – Neimark and colleagues are trying to estimate the greenhouse-gas emissions from the Israel–Gaza conflict.

Their studies encompass pre-conflict preparation, the conflict itself and post-conflict reconstruction. “We were working with colleagues who were doing similar work in Ukraine, but every war is different,” says Neimark. “In Ukraine, they don’t have large tunnel networks, or they didn’t, and they don’t have this intensive, incessant onslaught of air strikes from carbon-intensive F16 fighter aircraft.” Some of these factors, like the carbon impact of Hamas’ underground maze of tunnels under Gaza, seem unquantifiable, but Neimark has found a way.

“There’s some pretty good data for how big these are in terms of height, the amount of concrete, how far down they’re dug and how thick they are,” says Neimark. “It’s just the length we had to work out based on reported documentation.” Finding the total amount of concrete and steel used in these tunnels involved triangulating open-source information with media reports to finalize an estimate of the dimensions of these structures. Standard emission factors could then be applied to obtain the total carbon emissions. According to data from Neimark’s Confronting Military Greenhouse Gas Emissions report, the carbon emissions from construction of concrete infrastructure by both Israel and Hamas were more than the annual emissions of 33 individual countries and territories (figure 3).
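The final step Neimark describes – turning estimated quantities of concrete and steel into carbon emissions – is a multiplication by standard emission factors. The sketch below shows the shape of that calculation only; the tunnel dimensions and the factors themselves are placeholder values for illustration, not the figures used in the Confronting Military Greenhouse Gas Emissions report.

# Illustrative conversion of material estimates into embodied carbon
# (all inputs are placeholders, not the values from Neimark's report)
tunnel_length_km = 500       # assumed total network length
concrete_per_km_t = 6000     # assumed tonnes of concrete per km of tunnel
steel_per_km_t = 300         # assumed tonnes of reinforcing steel per km

EF_CONCRETE = 0.15   # tCO2e per tonne of concrete (typical published factor)
EF_STEEL = 1.9       # tCO2e per tonne of steel (typical published factor)

emissions_t = (tunnel_length_km * concrete_per_km_t * EF_CONCRETE
               + tunnel_length_km * steel_per_km_t * EF_STEEL)
print(f"Embodied emissions ≈ {emissions_t/1e6:.2f} million tonnes CO2e")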

3 Climate change and the Gaza war

Three lists of headline facts and figures about carbon emissions from the Israel–Gaza war, split into direct military actions, large war-related infrastructure, and future rebuilding
(Reused with permission from Neimark et al. 2024 Confronting Military Greenhouse Gas Emissions, Interactive Policy Brief, London, UK. Available from QMUL.)

Data from Benjamin Neimark, Patrick Bigger, Frederick Otu-Larbi and Reuben Larbi’s Confronting Military Greenhouse Gas Emissions report estimates the carbon emissions of the war in Gaza for three distinct periods: direct war activities; large-scale war infrastructure; and future reconstruction.

The impact of Hamas’ tunnels and Israel’s “iron wall” border fence are just two of many pre-war activities that must be factored in to estimate the Israel–Gaza conflict’s climate impact. Then, the huge carbon cost of the conflict itself must be calculated, including, for example, bombing raids, reconnaissance flights, tanks and other vehicles, cargo flights and munitions production.

Gaza’s eventual reconstruction must also be included, and it makes up a big proportion of the total impact of the war, as Kobayakawa’s Ukraine calculations showed. The United Nations Environment Programme (UNEP) has been systematically studying and reporting on “Sustainable debris management in Gaza”, tracking debris from damaged buildings and infrastructure since the outbreak of the conflict in October 2023. Alongside estimating the amounts of debris, UNEP also models different management scenarios – ranging from disposal to recycling – to evaluate the time, resource needs and environmental impacts of each option.

Visa restrictions and the security situation have so far prevented UNEP staff from entering the Gaza Strip to undertake environmental field assessments. “While remote sensing can provide a valuable overview of the situation … findings should be verified on the ground for greater accuracy, particularly for designing and implementing remedial interventions,” says a UNEP spokesperson. They add that when it comes to the issue of contamination, UNEP needs “confirmation through field sampling and laboratory analysis” and that UNEP “intends to undertake such field assessments once conditions allow”.

The main risk from hazardous debris – which is likely to make up about 10–20% of the total debris – arises when it is mixed with and contaminates the rest of the debris stock. “This underlines the importance of preventing such mixing and ensuring debris is systematically sorted at source,” adds the UNEP spokesperson.

The ultimate cost

With all these estimates, and adopting a Monte Carlo analysis to account for uncertainties, Neimark and colleagues concluded that, from the first 15 months of the Israel–Gaza conflict, total carbon emissions were 32 million tonnes, which is huge given that the territory has a total area of just 365 km². The number also continues to rise.
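A Monte Carlo treatment of uncertainty here simply means drawing each uncertain input – fuel burned, material quantities, emission factors – from a plausible range many times over and reporting the spread of the resulting totals rather than a single number. A minimal sketch of the idea, using invented figures rather than the study’s actual inputs, might look like this:

import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # number of Monte Carlo draws

# Invented uncertain contributions (million tonnes CO2e), each with a rough spread
flights = rng.normal(loc=5.0, scale=1.0, size=N)    # sorties and cargo flights
ground  = rng.normal(loc=2.0, scale=0.5, size=N)    # vehicles, generators, munitions
rebuild = rng.normal(loc=25.0, scale=6.0, size=N)   # future reconstruction

total = flights + ground + rebuild
lo, mid, hi = np.percentile(total, [5, 50, 95])
print(f"Total: {mid:.1f} MtCO2e (90% interval {lo:.1f}–{hi:.1f})")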

Khan Younis in ruins
Rubble and ruins Khan Younis in the Gaza Strip on 11 February 2025, showing the widespread damage to buildings and infrastructure. (Courtesy: Shutterstock/Anas Mohammed)

Why does this number matter? When lives are being lost in Gaza, Ukraine, and across Sudan, Myanmar and other regions of the world, calculating the environmental and climate cost of war might seem like something only worth bothering about when the fighting stops.

But doing so even while conflicts are taking place can help protect important infrastructure and land, avoid environmentally disastrous events, and ensure that the long rebuild, wherever the conflict may be happening, is informed by science. The UNEP spokesperson says that it is important to “systematically integrate environmental considerations into humanitarian and early recovery planning from the outset” rather than treating the environment as an afterthought. They highlight that governments should “embed it within response plans – particularly in areas where it can directly impact life-saving activities, such as debris clearance and management”.

With Ukraine still in the midst of war, it seems right to leave the final word to Davybida. “Armed conflicts cause profound and often overlooked environmental damage that persists long after the fighting stops,” she says. “Recognizing and monitoring these impacts is vital to guide practical recovery efforts, protect public health, prevent irreversible harm to ecosystems and ensure a sustainable future.”

The post The environmental and climate cost of war appeared first on Physics World.

Exploring the icy moons of the solar system

Our blue planet is a Goldilocks world. We’re at just the right distance from the Sun that Earth – like Baby Bear’s porridge – is not too hot or too cold, allowing our planet to be bathed in oceans of liquid water. But further out in our solar system are icy moons that eschew the Goldilocks principle, maintaining oceans and possibly even life far from the Sun.

We call them icy moons because their surface, and part of their interior, is made of solid water-ice. There are over 400 icy moons in the solar system – most are teeny moonlets just a few kilometres across, but a handful are quite sizeable, from hundreds to thousands of kilometres in diameter. Of the big ones, the best known are Jupiter’s moons, Europa, Ganymede and Callisto, and Saturn’s Titan and Enceladus.

Yet these moons are more than just ice. Deep beneath their frozen shells – with surfaces at around –160 to –200 °C and bathed in radiation – lie oceans of water, kept liquid thanks to tidal heating as their interiors flex in the strong gravitational grip of their parent planets. With water being a prerequisite for life as we know it, these frigid systems are our best chance of finding life beyond Earth.

The first hints that these icy moons could harbour oceans of liquid water came when NASA’s Voyager 1 and 2 missions flew past Jupiter in 1979. On Europa they saw a broken and geologically youthful-looking surface, just millions of years old, featuring dark cracks that seemed to have slushy material welling up from below. Those hints turned into certainty when NASA’s Galileo mission visited Jupiter between 1995 and 2003. Gravity and magnetometer experiments proved that not only does Europa contain a liquid layer, but so do Ganymede and Callisto.

Meanwhile at Saturn, NASA’s Cassini spacecraft (which arrived in 2004) encountered disturbances in the ringed planet’s magnetic field. They turned out to be caused by plumes of water vapour erupting out of giant fractures splitting the surface of Enceladus, and it is believed that this vapour originates from an ocean beneath the moon’s ice shell. Evidence for an ocean on Titan is a little less certain, but gravity and radio measurements performed by Cassini and its European-built lander Huygens point towards the possibility of some liquid or slushy water beneath the surface.

Water, ice and JUICE

“All of these ocean worlds are going to be different, and we have to go to all of them to understand the whole spectrum of icy moons,” says Amanda Hendrix, director of the Planetary Science Institute in Arizona, US. “Understanding what their oceans are like can tell us about habitability in the solar system and where life can take hold and evolve.”

To that end, an armada of spacecraft will soon be on their way to the icy moons of the outer planets, building on the successes of their predecessors Voyager, Galileo and Cassini–Huygens. Leading the charge is NASA’s Europa Clipper, which is already heading to Jupiter. Clipper will reach its destination in 2030, with the Jupiter Icy Moons Explorer (JUICE) from the European Space Agency (ESA) just a year behind it. Europa is scientists’ primary target because, in the view of Olivier Witasse, JUICE project scientist at ESA, it is possibly Jupiter’s most interesting moon thanks to its “astrobiological potential”. That’s why Europa Clipper will perform nearly 50 fly-bys of the icy moon, some as low as 25 km above the surface, and why JUICE will also visit Europa twice on its tour of the Jovian system.

The challenge at Europa is that it’s close enough to Jupiter to be deep inside the giant planet’s magnetosphere, which is loaded with high-energy charged particles that bathe the moon’s surface in radiation. That’s why Clipper and JUICE are limited to fly-bys; the radiation dose in orbit around Europa would be too great to linger. Clipper’s looping orbit will take it back out to safety each time. Meanwhile, JUICE will focus more on Callisto and Ganymede – which are both farther out from Jupiter than Europa is – and will eventually go into orbit around Ganymede.

“Ganymede is a super-interesting moon,” says Witasse. For one thing, at 5262 km across it is larger than Mercury, a planet. It also has its own intrinsic magnetic field – one of only three solid bodies in the solar system to do so (the others being Mercury and Earth).

Beneath the icy exterior

It’s the interiors of these moons that are of the most interest to JUICE and Clipper. That’s where the oceans are, hidden beneath many kilometres of ice. While the missions won’t be landing on the Jovian moons, these internal structures aren’t as inaccessible as we might at first think. In fact, there are three independent methods for probing them.

A cross section of Europa
Many layers A cross section of Jupiter’s moon Europa, showing its internal layering: a rocky core and ocean floor (possibly with hydrothermal vents), the ocean itself and the ice shell above. (Courtesy: NASA/JPL–Caltech)

If a moon’s ocean contains salts or other electrically conductive contaminants, interesting things happen as the moon passes through its parent planet’s varying magnetic field. “The liquid is a conductive layer within a varying magnetic field and that induces a magnetic field in the ocean that we can measure with a magnetometer using Faraday’s law,” says Witasse. Both the amount of salty contaminants and the depth of the ocean influence the magnetometer readings.
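A rough feel for why a salty ocean responds so strongly comes from the electromagnetic skin depth – how far a time-varying field penetrates a conductor before being screened out. The sketch below evaluates it for an Earth-seawater-like conductivity driven at roughly the period of Jupiter’s rotating, tilted field as seen from Europa; both numbers are assumed values for illustration, not mission results.

import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def skin_depth(conductivity, period):
    """Skin depth sqrt(2 / (mu0 * sigma * omega)), for conductivity in S/m and period in s."""
    omega = 2 * np.pi / period
    return np.sqrt(2.0 / (MU0 * conductivity * omega))

sigma = 2.75           # S/m, typical of Earth seawater (assumed)
period = 11.2 * 3600   # s, roughly Jupiter's synodic rotation period at Europa (assumed)

print(f"Skin depth ≈ {skin_depth(sigma, period)/1e3:.0f} km")

A skin depth of a few tens of kilometres means an ocean of comparable depth responds strongly to the driving field, producing the induced signal a magnetometer can pick up.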

Then there’s radio science – using the radio link between a spacecraft and Earth to sense how an icy moon’s mass tugs on the spacecraft, through tiny Doppler shifts in the signal. By making multiple fly-bys with different trajectories at different points in a moon’s orbit around its planet, scientists can map the moon’s gravity field. Once that field is known in exacting detail, it can be fed into models of the moon’s internal structure.

Perhaps the most remarkable method, however, is using a laser altimeter to search for a tidal bulge in the surface of a moon. This is exactly what JUICE will be doing when in orbit around Ganymede. Its laser altimeter will map the topography of the surface – features such as hills and crevasses – but gravitational tidal forces from Jupiter are also expected to raise a bulge, deforming the surface by 1–10 m. How large the bulge is depends on how deep the ocean is.

“If the surface ice is sitting above a liquid layer then the tide will be much bigger because if you sit on liquid, you are not attached to the rest of the moon,” says Witasse. “Whereas if Ganymede were solid the tide would be quite small because it is difficult to move one big, solid body.”
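To see why the bulge is diagnostic, scale the eccentricity-driven tide by the Love number h2, which is of order one when the ice shell floats on an ocean and much smaller for a fully solid body. The sketch below is an order-of-magnitude illustration only: the formula is the standard leading-order approximation for the radial tide at the sub-Jupiter point, and the two h2 values are assumed, not JUICE measurements.

M_JUP = 1.898e27   # kg, Jupiter
M_GAN = 1.482e23   # kg, Ganymede
R_GAN = 2.634e6    # m, Ganymede's radius
A_ORB = 1.070e9    # m, orbital semi-major axis
ECC   = 0.0013     # orbital eccentricity

def tidal_bulge(h2):
    """Radial tidal variation over an orbit, roughly 3*e*h2*(M_J/M_G)*(R/a)^3*R."""
    return 3 * ECC * h2 * (M_JUP / M_GAN) * (R_GAN / A_ORB) ** 3 * R_GAN

for h2 in (0.1, 1.3):  # assumed values: roughly solid body vs ice shell over an ocean
    print(f"h2 = {h2}: bulge ≈ {tidal_bulge(h2):.1f} m")

The contrast – tens of centimetres for a solid Ganymede versus metres with an ocean – is what the laser altimeter is designed to pick out.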

As for what’s below the oceans, those same gravity and radio-science experiments during previous missions have given us a general idea about the inner structures of Jupiter’s Europa, Ganymede and Callisto. All three have a rocky core. Inside Europa, the ocean surrounds the core, with a ceiling of ice above it. The rock–ocean interface potentially provides a source of chemical energy and nutrients for the ocean and any life there.

Ganymede’s interior structure is more complex. Separating the 3400 km-wide rocky core and the ocean is a layer, or perhaps several layers, of high-pressure ice, and there is another ice layer above the ocean. Without that rock–ocean interface, Ganymede is less interesting from an astrobiological perspective.

Meanwhile, Callisto, being the farthest of the three from Jupiter, receives the least tidal heating. This is reflected in its lack of evolution: its interior has not differentiated into layers as distinct as those of Europa and Ganymede. “Callisto looks very old,” says Witasse. “We’re seeing it more or less as it was at the beginning of the solar system.”

Crazy cryovolcanism

Tidal forces don’t just keep the interiors of the icy moons warm. They can also drive dramatic activity, such as cryovolcanoes – icy eruptions that spew out gases and volatile materials like liquid water (which quickly freezes in space), ammonia and hydrocarbons. The most obvious example of this is found on Saturn’s Enceladus, where giant water plumes squirt out through “tiger stripe” cracks at the moon’s south pole.

But there’s also growing evidence of cryovolcanism on Europa. In 2012 the Hubble Space Telescope caught sight of what looked like a water plume jetting out 200 km from the moon. But the discovery is controversial despite more data from Hubble and even supporting evidence found in archive data from the Galileo mission. What’s missing is cast-iron proof for Europa’s plumes. That’s where Clipper comes in.

Three of Jupiter’s moons
By Jove Three of Jupiter’s largest moons have solid water-ice. (Left) Europa, imaged by the JunoCam on NASA’s Juno mission to Jupiter. The surface sports myriad fractures and dark markings. (Middle) Ganymede, also imaged by the Juno mission, is the largest moon in our solar system. (Right) Our best image of ancient Callisto was taken by NASA’s Galileo spacecraft in 2001. The arrival of JUICE in the Jovian system in 2031 will place Callisto under much-needed scrutiny. (CC BY 3.0 NASA/JPL–Caltech/SwRI/MSS/ image processing by Björn Jónsson; CC BY 3.0 NASA/JPL–Caltech/SwRI/MSS/ image processing by Kalleheikki Kannisto; NASA/JPL/DLR)

“We need to find out if the plumes are real,” says Hendrix. “What we do know is if there is plume activity happening on Europa then it’s not as consistent or ongoing as is clearly happening at Enceladus.”

At Enceladus, the plumes are driven by tidal forces from Saturn, which squeeze and flex the 500 km-wide moon’s innards, forcing out water from an underground ocean through the tiger stripes. If there are plumes at Europa then they would be produced the same way, and would provide access to material from an ocean that’s dozens of kilometres below the icy crust. “I think we have a lot of evidence that something is happening at Europa,” says Hendrix.

These plumes could therefore be the key to characterizing the hidden oceans. One instrument on Clipper that will play an important role in investigating the plumes at Europa is an ultraviolet spectrometer – an instrument of a kind that proved very useful on the Cassini mission.

Because Enceladus’ plumes were not known until Cassini discovered them, the spacecraft’s instruments had not been designed to study them. However, scientists were able to use the mission’s ultraviolet imaging spectrometer to analyse the vapour when it was between Cassini and the Sun. The resulting absorption lines in the spectrum showed the plumes to be mostly pure water, ejected into space at a rate of 200 kg per second.

Black and white image of liquid eruptions from a moon's surface
Ocean spray Geysers of water vapour loaded with salts and organic molecules spray out from the tiger stripes on Enceladus. (Courtesy: NASA/JPL/Space Science Institute)

The erupted vapour freezes as it reaches space and some of it snows back down onto the surface. Cassini’s ultraviolet spectrometer was again used, this time to detect solar ultraviolet light reflected and scattered off these icy particles in the uppermost layers of Enceladus’ surface. Scientists found that any freshly deposited snow from the plumes has a different chemistry from older surface material that has been weathered and chemically altered by micrometeoroids and radiation, and therefore a different ultraviolet spectrum.

Icy moon landing

Another two instruments that Cassini’s scientists adapted to study the plumes were the cosmic dust analyser and the ion and neutral mass spectrometer. When Cassini flew through the fresh plumes and Saturn’s E-ring, which is formed from older plume ejections, it could “taste” the material by sampling it directly. Recent findings from these data indicate that the plumes are rich in salt as well as organic molecules, including aliphatic and cyclic esters and ethers (oxygen-bearing organic compounds – esters, for example, are what fatty acids form when they combine with alcohols) (Nature Astron. 9 1662). Scientists also found nitrogen- and oxygen-bearing compounds that play a role in basic biochemistry and which could therefore potentially be building blocks of prebiotic molecules or even life in Enceladus’ ocean.

Direct image of Enceladus showing blue stripes
Blue moon Enceladus, as seen by Cassini in 2006. The tiger stripes are the blue fractures towards the south. (Courtesy: NASA/JPL/Space Science Institute)

While Cassini could only observe Enceladus’ plumes and fresh snow from orbit, astronomers are planning a lander that could let them directly inspect the surface snow. Currently in the technology development phase, it would be launched by ESA sometime in the 2040s to arrive at the moon in 2054, when winter at Enceladus’ southern, tiger stripe-adorned pole turns to spring and daylight returns.

“What makes the mission so exciting to me is that although it looks like every large icy moon has an ocean, Enceladus is one where there is a very high chance of actually sampling ocean water,” says Jörn Helbert, head of the solar system section at ESA, and the science lead on the prospective mission.

The planned spacecraft will fly through the plumes with more sophisticated instruments than Cassini’s, designed specifically to sample the vapour (like Clipper will do at Europa). Yet adding a lander could get us even closer to the plume material. By landing close to the edge of a tiger stripe, a lander would dramatically increase the mission’s ability to analyse the material from the ocean in the form of fresh snow. In particular, it would look for biosignatures – evidence of the ocean being habitable, or perhaps even inhabited by microbes.

However, new research urges caution in drawing hasty conclusions about organic molecules present in the plumes and snow. While not as powerful as Jupiter’s, Saturn also has a magnetosphere filled with high-energy ions that bombard Enceladus. A recent laboratory study, led by Grace Richards of the Istituto Nazionale di Astrofisica e Planetologia Spaziale (IAPS-INAF) in Rome, found that when these ions hit surface-ice they trigger chemical reactions that produce organic molecules, including some that are precursors to amino acids, similar to what Cassini tasted in the plumes.

So how can we be sure that the organics in Enceladus’ plumes originate from the ocean, and not from radiation-driven chemistry on the surface? It is the same quandary for dark patches around cracks on the surface of Europa, which seem to be rich with organic molecules that could either originate via upwelling from the ocean below, or just from radiation triggering organic chemistry. A lander on Enceladus might solve not just the mystery of that particular moon, but provide important pointers to explain what we’re seeing on Europa too.

More icy companions

Enceladus is not Saturn’s only icy moon; there’s Titan too. As the ringed planet’s largest moon, at 5150 km across, Titan (like Ganymede) is larger than Mercury. But unlike any other moon in the solar system, Titan has a thick atmosphere rich in nitrogen and methane. That atmosphere is opaque, hiding the surface from orbiting spacecraft except at infrared and radar wavelengths, which means that getting below the smog is a must.

ESA did this in 2005 with the Huygens lander, which, as it parachuted down to Titan’s frozen surface, revealed it to be a land of hills and dune plains with river channels, lakes and seas of flowing liquid hydrocarbons. These organic molecules originate from the methane in its atmosphere reacting with solar ultraviolet.

Until recently, it was thought that Titan has a core of rock, surrounded by a shell of high-pressure ice, above which sits a layer of salty liquid water and then an outer crust of water ice. However, new evidence from a re-analysis of Cassini data suggests that rather than an ocean of liquid water, Titan has “slush” below its frozen exterior, with pockets of liquid water (Nature 648 556). The team, led by Flavio Petricca from NASA’s Jet Propulsion Laboratory, looked at how Titan’s shape morphs as it orbits Saturn. There is a several-hour lag between the moon passing the peak of Saturn’s gravitational pull and its shape shifting. This implies that while there must be some non-solid material below Titan’s surface to allow for the deformation, more energy is being dissipated than would be the case if that material were liquid water. Instead, the researchers found that a layer of high-pressure ice close to its melting point – or slush – better fits the data.

Titan's atmosphere
Hello halo Titan is different to other icy moons in that it has a thick atmosphere, seen here with the moon in silhouette. (Courtesy: NASA/JPL/Space Science Institute)

To find out more about Titan, NASA is planning to follow in Huygens’ footsteps with the Dragonfly mission – but in an excitingly different way. Set to launch in 2028, Dragonfly should arrive at Titan in 2034, where the rotorcraft will fly over the moon’s surface, beneath the smog, occasionally touching down to take readings. Scientists intend to use Dragonfly to sample surface material with a mass spectrometer to identify organic compounds and therefore better assess Titan’s biological potential. It will also perform atmospheric and geological measurements, even listening for seismic tremors while landed, which could provide further clues about Titan’s interior.

Jupiter and Saturn are also not the only planets to possess icy moons. We find them around Uranus and Neptune too. Even the dwarf planet Pluto and its largest moon Charon have strong similarities to icy moons. Whether any of these bodies, so far out from the Sun, can maintain an ocean is unclear, however.

Recent findings point to an ocean inside Uranus’ moon Ariel that may once have been 170 km deep, kept warm by tidal heating (Icarus 444 116822). But over time Ariel’s orbit around Uranus has become increasingly circular, weakening the tidal forces acting on it, and the ocean has partly frozen. Another of Uranus’ moons, Miranda, has a chaotic surface that appears to have melted and refrozen, and the pattern of cracks on its surface strongly suggests that this moon also contains an ocean, or at least did 150 million years ago. A new mission to Uranus is a top priority in the US’s most recent planetary science decadal survey.

It’s becoming clear that icy ocean moons could far outnumber more traditional habitable planets like Earth, not just in our solar system but across the galaxy (although no icy exomoons have yet been confirmed). Understanding the internal structures of the icy moons in our solar system, and characterizing their oceans, is vital if we are to expand the search for life beyond Earth.

The post Exploring the icy moons of the solar system appeared first on Physics World.

Check your physics knowledge with our bumper end-of-year quiz

How well have you been following events in physics? There are 20 questions in total – see how you score against the scale below.

16–20 Top quark – congratulations, you’ve hit Einstein level

11–15 Strong force – good but not quite Nobel standard

6–10 Weak force – better interaction needed

0–5 Bottom quark – not even wrong

The post Check your physics knowledge with our bumper end-of-year quiz appeared first on Physics World.

ZAP-X radiosurgery and ZAP-Axon SRS planning: technology overview, workflow and complex case insights from a leading SRS centre

ZAP-X is a next-generation, cobalt-free, vault-free stereotactic radiosurgery system purpose-built for the brain. It delivers highly precise, non-invasive treatments with exceptionally low whole-brain and whole-body dose; its gyroscopic beam delivery, refined beam geometry and fully integrated workflow enable state-of-the-art SRS without the burdens of radioactive sources or traditional radiation bunkers.

Theresa Hofman headshot
Theresa Hofman

Theresa Hofman is deputy head of medical physics at the European Radiosurgery Center Munich (ERCM), specializing in stereotactic radiosurgery with the CyberKnife and ZAP-X systems. She has been part of the ERCM team since 2018 and has extensive clinical experience with ZAP-X; ERCM was one of the first centres worldwide to implement the technology, in 2021. Since then, the team has treated more than 900 patients with ZAP-X, and she is deeply involved in both the clinical use and the evaluation of its planning software.

She holds a master’s degree in physics from Ludwig Maximilian University of Munich, where she authored two first‑author publications on range verification in carbon‑ion therapy. At ERCM, she has published additional first‑author studies on CyberKnife kidney‑treatment accuracy and on comparative planning between ZAP‑X and CyberKnife. She is currently conducting further research on the latest ZAP‑X planning software. Her work is driven by the goal of advancing high‑quality radiosurgery and ensuring the best possible treatment for every patient.

The post ZAP-X radiosurgery and ZAP-Axon SRS planning: technology overview, workflow and complex case insights from a leading SRS centre appeared first on Physics World.

Physics-based battery model parameterization from impedance data

Electrochemical impedance spectroscopy (EIS) provides valuable insights into the physical processes within batteries – but how can these measurements directly inform physics-based models? In this webinar, we present recent work showing how impedance data can be used to extract grouped parameters for physics-based models such as the Doyle–Fuller–Newman (DFN) model or the reduced-order single-particle model with electrolyte (SPMe).

We will introduce PyBaMM (Python Battery Mathematical Modelling), an open-source framework for flexible and efficient battery simulation, and show how our extension, PyBaMM-EIS, enables fast numerical impedance computation for any implemented model at any operating point. We also demonstrate how PyBOP, another open-source tool, performs automated parameter fitting of models using measured impedance data across multiple states of charge.
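For readers unfamiliar with the framework, the minimal sketch below shows the core PyBaMM workflow – building one of the implemented lithium-ion models and simulating it with the default parameter set. It uses only PyBaMM’s basic public API; the impedance computation (PyBaMM-EIS) and parameter fitting (PyBOP) discussed in the webinar are not shown here.

import pybamm

# Build the reduced-order single-particle model with electrolyte (SPMe);
# pybamm.lithium_ion.DFN() would give the full Doyle-Fuller-Newman model instead
model = pybamm.lithium_ion.SPMe()

# Simulate a one-hour discharge with the default parameter values
sim = pybamm.Simulation(model)
sim.solve([0, 3600])

# Plot terminal voltage, concentrations and other standard outputs
sim.plot()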

Battery modelling is challenging, and obtaining accurate fits can be difficult. Our technique offers a flexible way to update model equations and parameterize models using impedance data.

Join us to see how our tools create a smooth path from measurement to model to simulation.

An interactive Q&A session follows the presentation.

Noël Hallemans headshot
Noël Hallemans

Noël Hallemans is a postdoctoral research assistant in engineering science at the University of Oxford, where he previously lectured in mathematics at St Hugh’s College. He earned his PhD in 2023 from the Vrije Universiteit Brussel and the University of Warwick, focusing on frequency-domain, data-driven modelling of electrochemical systems.

His research at the Battery Intelligence Lab, led by Professor David Howey, integrates electrochemical impedance spectroscopy (EIS) with physics-based modelling to improve understanding and prediction of battery behaviour. He also develops multisine EIS techniques for battery characterisation during operation (for example, charging or relaxation).

 

The Electrochemical Society, Gamry Instruments, BioLogic, EL-Cell logos

The post Physics-based battery model parameterization from impedance data appeared first on Physics World.
