
Conflicting measurements of helium’s charge radius may be reconciled by new calculations

20 June 2025 at 17:50

Independent measurements of the charge radius of the helium-3 nucleus using two different methods have yielded significantly different results – prompting a re-evaluation of underlying theory to reconcile them. The international CREMA Collaboration used muonic helium-3 ions to determine the radius, whereas a team in the Netherlands used a quantum-degenerate gas of helium-3 atoms.

The charge radius is a statistical measure of how far the electric charge of a particle extends into space. Both groups were mystified by the discrepancy in the values – which hints at physics beyond the Standard Model of particle physics. However, new theoretical calculations inspired by the results may have already resolved the discrepancy.

Both groups studied the difference between the charge radii of the helium-3 and helium-4 nuclei. CREMA used muonic helium ions, in which the remaining electrons are replaced by muons. Muons are much more massive than electrons, so they spend more time near the nucleus – and are therefore more sensitive to the charge radius.

Shorter wavelengths

Muonic atoms have spectra at much shorter wavelengths than normal atoms. This affects quantities such as the Lamb shift – the energy difference between the 2S1/2 and 2P1/2 atomic states, which are split by interactions with virtual photons and by vacuum polarization, an effect that is most intense near the nucleus. More importantly, a muon in an S orbital is far more sensitive to the finite size of the nucleus than an electron would be.
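
To see why, it helps to recall the textbook leading-order expression for the finite-nuclear-size shift of an nS energy level in a hydrogen-like system. This is a standard result quoted here purely for illustration; it is not taken from the papers discussed in this article.

```latex
\Delta E_{\mathrm{fs}} \;=\; \frac{2}{3\,n^{3}}\,(Z\alpha)^{4}\,\frac{m_r^{3}c^{4}}{\hbar^{2}}\,\langle r_{\mathrm{ch}}^{2}\rangle
```

Only S states, which overlap the nucleus, are shifted appreciably, so the 2S–2P splitting carries the charge-radius information ⟨r²⟩. Because the shift scales as the cube of the reduced mass m_r, swapping the electron for a muon roughly 200 times heavier enhances the effect by a factor of order 10⁷.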

In 2010, CREMA used muonic hydrogen to conclude that the charge radius of the proton is significantly smaller than the accepted value at the time. The same procedure was then used with muonic helium-4 ions. Now, CREMA has used pulsed laser spectroscopy of muonic helium-3 ions to extract several key parameters including the Lamb shift and used them to calculate the charge radius of the helium-3 nucleus. They then calculated the difference between this and the charge radius of the helium-4 nucleus. The value they obtained was 15 times more accurate than any previously reported.

Meanwhile, at the Free University of Amsterdam in the Netherlands, researchers were taking a different approach, using conventional helium-3 atoms. This approach has significant challenges, because the effect of the nucleus on the electrons is much smaller. However, it also means that an electron perturbs the nucleus it probes far less than a muon does, which mitigates a source of theoretical uncertainty.

The Amsterdam team utilized the fact that the 2S triplet state in helium is extremely long-lived. “If you manage to get the atom up there, it’s like a new ground state, and that means you can do laser cooling on it and it allows very efficient detection of the atoms,” explains Kjeld Eikema, who became one of the team’s leaders after its initial leader, Wim Vassen, died in 2019. In 2018, the Amsterdam group created an ultracold Bose–Einstein condensate (BEC) of helium-4 atoms in the 2S triplet state in an optical dipole trap before using laser spectroscopy to measure the ultra-narrow transition between the 2S triplet state and the higher 2S singlet state.

Degenerate Fermi gas

In the new work, the researchers turned to helium-3, which does not form a BEC but instead forms a degenerate Fermi gas. Interpreting the spectra of this gas required new advances in itself. “Current theoretical models are insufficiently accurate to determine the charge radii from measurements on two-electron atoms,” Eikema explains. However, “the nice thing is that if you measure the transition directly in one isotope and then look at the difference with the other isotope, then most complications from the two electrons are common mode and drop out,” he says. This can be used to determine the difference in the charge radii.
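
Schematically, the isotope-shift analysis works as follows (an illustrative textbook decomposition, not a formula taken from the Amsterdam paper):

```latex
\delta\nu^{3,4} \;=\; \delta\nu_{\mathrm{mass+QED}}^{3,4} \;+\; F\,\delta\langle r^{2}\rangle^{3,4}
\qquad\Longrightarrow\qquad
\delta\langle r^{2}\rangle^{3,4} \;=\; \frac{\delta\nu_{\mathrm{exp}}^{3,4}-\delta\nu_{\mathrm{mass+QED}}^{3,4}}{F}
```

Here δν³,⁴ is the measured frequency difference of the same transition in helium-3 and helium-4, the first term collects the mass-dependent and QED contributions supplied by theory, and F is the field-shift constant. The troublesome electron–electron terms are common to both isotopes and largely cancel in the difference, leaving the difference in mean-square charge radii δ⟨r²⟩.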

The researchers obtained a value that was even more precise than CREMA’s and larger by 3.6σ. The groups could find no obvious explanation for the discrepancy. “The scope of the physics involved in doing and interpreting these experiments is quite massive,” says Eikema; “a comparison is so interesting, because you can say ‘Well, is all this physics correct then? Are electrons and muons the same aside from their mass? Did we do the quantum electrodynamics correct for both normal atoms and muonic atoms? Did we do the nuclear polarization correctly?’” The results of both teams are described in Science (CREMA, Amsterdam).

While these papers were undergoing peer review, the work attracted the attention of two groups of theoretical physicists – one led by Xiao-Qiu Qi of the Wuhan Institute of Physics and Mathematics in China, and the other by Krzysztof Pachucki of the University of Warsaw in Poland. Both revised the calculation of the hyperfine structure of helium-3, finding that incorporating previously neglected higher orders into the calculation produced an unexpectedly large shift.

“Suddenly, by plugging this new value into our experiment – ping! – our determination comes within 1.2σ of theirs,” says Eikema; “which is a triumph for all the physics involved, and it shows how, by showing there’s a difference, other people think, ‘Maybe we should go and check our calculations,’ and it has improved the calculation of the hyperfine effect.” In this manner, ever-improving experiments and theoretical calculations continue to probe the limits of the Standard Model.

Xiao-Qiu Qi and colleagues describe their calculations in Physical Review Research, while Pachucki’s team have published in Physical Review A.

Eikema adds: “Personally I would have adjusted the value in our paper according to these new calculations, but Science preferred to keep the paper as it was at the time of submission and peer review, with an added final paragraph to explain the latest developments.”

Theoretical physicist Marko Horbatsch at Canada’s York University is impressed by the experimental results and bemused by the presentation. “I would say that their final answer is a great success,” he concludes. “There is validity in having the CREMA and Eikema work published side-by-side in a high-impact journal. It’s just that the fact that they agree should not be confined to a final sentence at the end of the paper.”

The post Conflicting measurements of helium’s charge radius may be reconciled by new calculations appeared first on Physics World.

Simulation of capsule implosions during laser fusion wins Plasma Physics and Controlled Fusion Outstanding Paper Prize

20 June 2025 at 17:00

Computational physicist Jose Milovich of the Lawrence Livermore National Laboratory (LLNL) and colleagues have been awarded the 2025 Plasma Physics and Controlled Fusion (PPCF) Outstanding Paper Prize for their computational research on capsule implosions during laser fusion.

The work – Understanding asymmetries using integrated simulations of capsule implosions in low gas-fill hohlraums at the National Ignition Facility – is an important part of understanding the physics at the heart of inertial confinement fusion (ICF).

Fusion is usually pursued via one of two types of plasma confinement. Magnetic confinement uses magnetic fields to hold a deuterium–tritium (D-T) plasma stable, while inertial confinement uses rapid compression, usually by lasers, to create a confined plasma for a short period of time.

The award-winning work was based on experiments carried out at the National Ignition Facility (NIF) based in California, which is one of the leading fusion centres in the world.

During NIF’s ICF experiments, a slight imbalance of the laser can induce motion of the hot central core of an ignition capsule, which contains the D-T fuel. This effect results in reduced performance.

Experiments at NIF in 2018 found that laser imbalances alone, however, could not account for the motion of the capsule. The simulations carried out by Milovich and colleagues demonstrated that other factors were at play such as non-concentricity of the layers of the material surrounding the D-T fuel as well as “drive perturbations” induced by diagnostic windows on the implosion.

Computational physicist Jose Milovich
Computational physicist Jose Milovich of the Lawrence Livermore National Laboratory. (Courtesy: LLNL)

Changes made following the team’s findings then helped towards the recent demonstration of “energy breakeven” at NIF in December 2022.

Awarded each year, the PPCF prize aims to highlight work of the highest quality and impact published in the journal. The award was judged on originality, scientific quality and impact, and was based on community nominations and publication metrics. The prize will be presented at the 51st European Physical Society Conference on Plasma Physics in Vilnius, Lithuania, on 7–11 July.

The journal is now seeking nominations for next year’s prize, which will focus on papers in magnetic confinement fusion.

Below, Milovich talks to Physics World about the prize, the future of fusion and what advice he has for early-career researchers.

What does winning the 2025 PPCF Outstanding Paper Prize mean to you and for your work?

The award is an incredible honour to me and my collaborators as a recognition of the detailed work required to make inertial fusion in the laboratory a reality and the dream of commercial fusion energy a possibility. The paper presented numerical confirmation of how seemingly small effects can significantly impact the performance of fusion targets. This study led to target modifications and revised manufacturing specifications for improved performance. My collaborators and I would like to deeply thank PPCF for granting us this award.

What excites you about fusion?

Nuclear fusion is the process that powers the stars, and achieving those conditions in the laboratory is exciting in many ways.  It is an interesting scientific problem in its own right and it is an incredibly challenging engineering problem to handle the extreme conditions required for successful energy production. This is an exciting time since the possibility of realizing this energy source became tangibly closer two years ago when NIF successfully demonstrated that more energy can be released from D-T fusion than the laser energy delivered to the target.

What are your thoughts on the future direction of ICF and NIF?

While the challenges ahead to make ICF commercially feasible are daunting, we are well positioned to address them by developing new technologies and innovative target configurations. Applications of artificial intelligence to reactor plant designs, optimized operations, and improvements on plasma confinement could potentially lead to improved designs at a fraction of the cost. The challenges are many but the potential for providing a clean and inexhaustible source of energy for the benefit of mankind is invigorating.

What advice would you give to people thinking about embarking on a career in fusion?

This is an exciting time to get involved in fusion. The latest achievements at NIF have shown that fusion is possible. There are countless difficulties to overcome, making it an ideal time to devote one’s career to this area. My advice is to get involved now since, at this early stage, any contribution will have a major and lasting impact on mankind’s future energy needs.

The post Simulation of capsule implosions during laser fusion wins Plasma Physics and Controlled Fusion Outstanding Paper Prize appeared first on Physics World.

AI algorithms in radiology: how to identify and prevent inadvertent bias

20 June 2025 at 09:30

Artificial intelligence (AI) has the potential to generate a sea change in the practice of radiology, much like the introduction of radiology information system (RIS) and picture archiving and communication system (PACS) technology did in the late 1990s and 2000s. However, AI-driven software must be accurate, safe and trustworthy, factors that may not be easy to assess.

Machine learning software is trained on databases of radiology images. But these databases might lack the metadata or curation procedures needed to prevent algorithmic bias. Such algorithmic bias can cause clinical errors and performance disparities that affect a subset of the analyses that the AI performs, unintentionally disadvantaging certain groups of patients.

A multinational team of radiology informaticists, biomedical engineers and computer scientists has identified potential pitfalls in the evaluation and measurement of algorithmic bias in AI radiology models. Describing their findings in Radiology, the researchers also suggest best practices and future directions to mitigate bias in three key areas: medical image datasets; demographic definitions; and statistical evaluations of bias.

Medical imaging datasets

The medical image datasets used for training and evaluation of AI algorithms are reflective of the population from which they are acquired. It is natural that a dataset acquired in a country in Asia will not be representative of the population in a Nordic country, for example. But if there’s no information available about the image acquisition location, how might this potential source of bias be determined?

Paul Yi
Team leader Paul Yi. (Courtesy: RSNA)

Lead author Paul Yi, of St. Jude Children’s Research Hospital in Memphis, TN, and coauthors advise that many existing medical imaging databases lack a comprehensive set of demographic characteristics, such as age, sex, gender, race and ethnicity. Additional potential confounding factors include the scanner brand and model, the radiology protocols used for image acquisition, radiographic views acquired, the hospital location and disease prevalence. In addition to incorporating these data, the authors recommend that raw image data are collected and shared without institution-specific post-processing.
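
As a minimal illustration of this kind of metadata audit – a sketch using invented column names and values rather than any real dataset schema – one could simply tally how complete the demographic and acquisition fields of a dataset are before using it to train or evaluate a model:

```python
import pandas as pd

# Hypothetical metadata for an imaging dataset; the fields and values below
# are invented for illustration and do not describe any real dataset.
records = pd.DataFrame([
    {"patient_id": 1, "age": 54,   "sex": "F",  "race": None,    "scanner_model": "A1", "site": "Hospital-1"},
    {"patient_id": 2, "age": None, "sex": "M",  "race": "Asian", "scanner_model": None, "site": "Hospital-2"},
    {"patient_id": 3, "age": 47,   "sex": None, "race": "White", "scanner_model": "B2", "site": None},
])

# Fraction of non-missing values per field: a first, crude check on whether
# the demographic and confounding factors discussed above can be reported
# (and therefore audited for bias) at all.
completeness = 1.0 - records.drop(columns="patient_id").isna().mean()
print(completeness.sort_values())
```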

The team advise that generative AI, a set of machine learning techniques that generate new data, provides the potential to create synthetic imaging datasets with more balanced representation of both demographic and confounding variables. This technology is still in development, but might provide a solution to overcome pitfalls related to measurement of AI biases in imperfect datasets.

Defining demographics

Radiology researchers lack consensus with respect to how demographic variables should be defined. Observing that demographic categories such as gender and race are self-identified characteristics informed by many factors, including society and lived experiences, the authors advise that concepts of race and ethnicity do not necessarily translate outside of a specific society and that biracial individuals reflect additional complexity and ambiguity.

They emphasize that ensuring accurate measurements of race- and/or ethnicity-based biases in AI models is important to enable accurate comparison of bias evaluations. This not only has clinical implications, but is also essential to prevent health policies being established on the basis of erroneous AI-derived findings, which could potentially perpetuate pre-existing inequities.

Statistical evaluations of bias

The researchers define bias in the context of demographic fairness, where it reflects differences in performance metrics between demographic groups. However, establishing consensus on the definition of bias is complex, because bias can have different clinical and technical meanings. They point out that in statistics, bias refers to a discrepancy between the expected value of an estimated parameter and its true value.

As such, the radiology speciality needs to establish a standard notion of bias, as well as tackle the incompatibility of fairness metrics, the tools that measure whether a machine learning model treats certain demographic groups differently. Currently there is no universal fairness metric that can be applied to all cases and problems, and the authors do not think there ever will be one.

The different operating points (decision thresholds) of predictive AI models can yield different performance and therefore potentially different demographic biases. These operating points need to be documented, and the thresholds used should be reported both in research studies and by commercial AI software vendors.
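
As a minimal sketch of this kind of group-wise evaluation – not the authors’ framework, and using invented toy data – the snippet below computes the sensitivity (true-positive rate) for each demographic group at several operating thresholds, so that gaps between groups can be compared across operating points:

```python
import numpy as np

def per_group_tpr(y_true, scores, groups, threshold=0.5):
    """Sensitivity (true-positive rate) per demographic group at one
    operating point; the gap between groups is one simple fairness measure."""
    y_pred = scores >= threshold
    tpr = {}
    for g in np.unique(groups):
        positives = (groups == g) & (y_true == 1)   # positive cases in this group
        tpr[g] = y_pred[positives].mean() if positives.any() else float("nan")
    return tpr

# Toy labels, model scores and group membership (all invented)
y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])
scores = np.array([0.9, 0.4, 0.2, 0.8, 0.6, 0.3, 0.7, 0.1, 0.95, 0.5])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

for threshold in (0.3, 0.5, 0.7):                   # different operating points
    print(threshold, per_group_tpr(y_true, scores, groups, threshold))
```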

Key recommendations

The authors suggest some key courses of action to mitigate demographic biases in AI in radiology:

  • Improve reporting of demographics by establishing a consensus panel to define and update reporting standards.
  • Improve dataset reporting of non-demographic factors, such as imaging scanner vendor and model.
  • Develop a standard lexicon of terminology for fairness and AI bias concepts in radiology.
  • Develop standardized statistical analysis frameworks for evaluating the demographic bias of AI algorithms, based on clinical contexts.
  • Require greater demographic detail to evaluate algorithmic fairness in scientific manuscripts relating to AI models.

Yi and co-lead collaborator Jeremias Sulam, of the Department of Biomedical Engineering at Johns Hopkins University’s Whiting School of Engineering, tell Physics World that their assessment of pitfalls and recommendations to mitigate demographic biases reflect years of multidisciplinary discussion. “While both the clinical and computer science literature had been discussing algorithmic bias with great enthusiasm, we learned quickly that the statistical notions of algorithmic bias and fairness were often quite different between the two fields,” says Yi.

“We noticed that progress to minimize demographic biases in AI models is often hindered by a lack of effective communication between the computer science and statistics communities and the clinical world, radiology in particular,” adds Sulam.

A collective effort to address the challenges posed by bias and fairness is important, notes Melissa Davis of Yale School of Medicine, in an accompanying editorial in Radiology. “By fostering collaboration between clinicians, researchers, regulators and industry stakeholders, the healthcare community can develop robust frameworks that prioritize patient safety and equitable outcomes,” she writes.

The post AI algorithms in radiology: how to identify and prevent inadvertent bias appeared first on Physics World.

Helgoland: leading scientists reflect on 100 years of quantum physics and look to the future

19 June 2025 at 14:59

Last week, Physics World’s Matin Durrani boarded a ferry in Hamburg that was bound for Helgoland – an archipelago in the North Sea about 70 km off the north-west coast of Germany.

It was a century ago in Helgoland that the physicist Werner Heisenberg devised the mathematical framework that underpins our understanding of quantum physics.

Matin was there with some of the world’s leading quantum physicists for the conference Helgoland 2025: 100 Years of Quantum Mechanics – which celebrated Heisenberg’s brief stay in Helgoland.

He caught up with three eminent physicists and asked them to reflect on Heisenberg’s contributions to quantum mechanics and look forward to the next 100 years of quantum science and technology. They are Tracy Northup at the University of Vienna; Michelle Simmons of the University of New South Wales, Sydney; and Peter Zoller of the University of Innsbruck.

  • Don’t miss the 2025 Physics World Quantum Briefing, which is free to read.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

The post Helgoland: leading scientists reflect on 100 years of quantum physics and look to the future appeared first on Physics World.

Laser World of Photonics showcases cutting-edge optical innovation

19 June 2025 at 10:45

Laser World of Photonics, the leading trade show for the laser and photonics industry, takes place in Munich from 24 to 27 June. Attracting visitors and exhibitors from around the world, the event features 11 exhibition areas covering the entire spectrum of photonic technologies – including illumination and energy, biophotonics, data transmission, integrated photonics, laser systems, optoelectronics, sensors and much more.

Running parallel and co-located with Laser World of Photonics is World of Quantum, the world’s largest trade fair for quantum technologies. Showcasing all aspects of quantum technologies – from quantum sensors and quantum computers to quantum communications and cryptography – the event provides a platform to present innovative quantum-based products and discuss potential applications.

Finally, the World of Photonics Congress (running from 22 to 27 June) features seven specialist conferences, over 3000 lectures and around 6700 experts from scientific and industrial research.

The event is expected to attract around 40,000 visitors from 70 countries, with the trade shows incorporating 1300 exhibitors from 40 countries. Here are some of the companies and product innovations to look out for on the show floor.

HOLOEYE unveils compact 4K resolution spatial light modulator

HOLOEYE Photonics AG, a leading provider of spatial light modulator (SLM) devices, announces the release of the GAEA-C spatial light modulator, a compact version of the company’s high-resolution SLM series. The GAEA-C will be officially launched at Laser World of Photonics, showcasing its advanced capabilities and cost-effective design.

The GAEA-C spatial light modulator
Compact and cost-effective The GAEA-C spatial light modulator is ideal for a variety of applications requiring precise light modulation. (Courtesy: HOLOEYE)

The GAEA-C is a phase-only SLM with a 4K resolution of 4094 x 2400 pixels, with an exceptionally small pixel pitch of 3.74 µm. This compact model is equipped with a newly developed driver solution that not only reduces costs but also enhances phase stability, making it ideal for a variety of applications requiring precise light modulation.

The GAEA-C SLM features a reflective liquid crystal on silicon (LCOS) display (phase only). Other parameters include a fill factor of 90%, an input frame rate of 30 Hz and a maximum spatial resolution of 133.5 lp/mm.
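
As a quick arithmetic cross-check (our own, not from HOLOEYE’s specification sheet), the quoted maximum spatial resolution follows directly from the pixel pitch via the Nyquist limit of the pixel grid:

```python
pixel_pitch_mm = 3.74e-3                   # 3.74 µm pixel pitch, in millimetres
nyquist_lp_per_mm = 1 / (2 * pixel_pitch_mm)
print(f"{nyquist_lp_per_mm:.1f} lp/mm")    # ≈ 133.7 lp/mm, consistent with the quoted 133.5 lp/mm
```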

The GAEA-C is available in three versions, each optimized for a different wavelength range: a VIS version (420–650 nm), a NIR version (650–1100 nm) and a version tailored for the telecommunications waveband around 1550 nm. This versatility ensures that the GAEA-C can meet the diverse needs of industries ranging from telecoms to scientific research.

HOLOEYE continues to lead the market with its innovative SLM solutions, providing unparalleled resolution and performance. The introduction of the GAEA-C underscores HOLOEYE’s commitment to delivering cutting-edge technology that meets the evolving demands of its customers.

  • For more information about the GAEA-C and other SLM products, visit HOLOEYE at booth #225 in Hall A2.

Avantes launches NIR Enhanced spectrometers

At this year’s Laser World of Photonics, Avantes unveils its newest generation of spectrometers: the NEXOS NIR Enhanced and VARIUS NIR Enhanced. Both instruments mark a significant leap in near-infrared (NIR) spectroscopy, offering up to 2x improved sensitivity and unprecedented data quality for integration into both research and industry applications.

NEXOS NIR Enhanced spectrometer
Solving spectroscopy challenges Visit Avantes at booth 218, Hall A3, for hands-on demonstrations of its newest generation of spectrometers. (Courtesy: Avantes)

Compact, robust and highly modular, the NEXOS NIR Enhanced spectrometer redefines performance in a small form factor. It features enhanced NIR quantum efficiency in the 700–1100 nm range, with up to 2x increased sensitivity, fast data transfer and improved signal-to-noise ratio. The USB-powered spectrometer is designed with a minimal footprint of just 105 x 80 x 20 mm and built using AvaMation production for top-tier reproducibility and scalability. It also offers seamless integration with third-party software platforms.

The NEXOS NIR Enhanced is ideal for food sorting, Raman applications and VCSEL/laser system integration, providing research-grade performance in a compact housing. See the NEXOS NIR Enhanced product page for further information.

Designed for flexibility and demanding industrial environments, the VARIUS NIR Enhanced spectrometer introduces a patented optical bench for supreme accuracy, with replaceable slits for versatile configurations. The spectrometer offers a dual interface – USB 3.0 and Gigabit Ethernet – plus superior stray light suppression, high dynamic range and enhanced NIR sensitivity in the 700–1100 nm region.

With its rugged form factor (183 x 130 x 45.2 mm) and semi-automated production process, the VARIUS NIR is optimized for real-time applications, ensuring fast data throughput and exceptional reliability across industries. For further information, see the VARIUS NIR Enhanced product page.

Avantes invites visitors to experience both systems live at Laser World of Photonics 2025. Meet the team for hands-on demonstrations, product insights and expert consultations. Avantes offers free feasibility studies and tailored advice to help you identify the optimal solution for your spectroscopy challenges.

  • For more information, visit www.avantes.com or meet Avantes at booth #218 in Hall A3.

HydraHarp 500: a new era in time-correlated single-photon counting

Laser World of Photonics sees PicoQuant introduce its newest generation of event timer and time-correlated single-photon counting (TCSPC) unit – the HydraHarp 500. Setting a new standard in speed, precision and flexibility, the TCSPC unit is freely scalable with up to 16 independent channels and a common sync channel, which can also serve as an additional detection channel if no sync is required.

HydraHarp 500
Redefining what’s possible PicoQuant presents HydraHarp 500, a next-generation TCSPC unit that maximizes precision, flexibility and efficiency. (Courtesy: PicoQuant)

At the core of the HydraHarp 500 is its outstanding timing precision and accuracy, enabling precise photon timing measurements at exceptionally high data rates, even in demanding applications.

In addition to the scalable channel configuration, the HydraHarp 500 offers flexible trigger options to support a wide range of detectors, from single-photon avalanche diodes to superconducting nanowire single-photon detectors. Seamless integration is ensured through versatile interfaces such as USB 3.0 or an external FPGA interface for data transfer, while White Rabbit synchronization allows precise cross-device coordination for distributed setups.

The HydraHarp 500 is engineered for high-throughput applications, making it ideal for rapid, large-volume data acquisition. It offers 16+1 fully independent channels for true simultaneous multi-channel data recording and efficient data transfer via USB or the dedicated FPGA interface. Additionally, the HydraHarp 500 boasts industry-leading, extremely low dead-time per channel and no dead-time across channels, ensuring comprehensive datasets for precise statistical analysis.

The HydraHarp 500 is fully compatible with UniHarp, a sleek, powerful and intuitive graphical user interface. UniHarp revolutionizes the interaction with PicoQuant’s TCSPC and time tagging electronics, offering seamless access to advanced measurement modes like time trace, histogram, unfold, raw and correlation (including FCS and g²).

Step into the future of photonics and quantum research with the HydraHarp 500. Whether it’s achieving precise photon correlation measurements, ensuring reproducible results or integrating advanced setups, the HydraHarp 500 redefines what’s possible – offering precision, flexibility and efficiency combined with reliability and seamless integration to achieve breakthrough results.

For more information, visit www.picoquant.com or contact us at info@picoquant.com.

  • Meet PicoQuant at booth #216 in Hall B2.

SmarAct showcases integrated, high-precision technologies

With a strong focus on turnkey, application-specific solutions, SmarAct offers nanometre-precise motion systems, measurement equipment and scalable micro-assembly platforms for photonics, quantum technologies, semiconductor manufacturing and materials research – whether in research laboratories or high-throughput production environments.

SmarAct’s high-precision technologies
State-of-the-art solutions The SmarAct Group returns to Laser World of Photonics in 2025 with a comprehensive showcase of integrated, high-precision technologies. (Courtesy: SmarAct)

At Laser World of Photonics, SmarAct presents a new modular multi-axis positioning system for quantum computing applications and photonic integrated circuit (PIC) testing. The compact system is made entirely from titanium and features a central XY stage with integrated rotation, flanked by two XYZ modules – one equipped with a tip-tilt goniometer.

For cryogenic applications, the system can be equipped with cold plates and copper braids to provide a highly stable temperature environment, even at millikelvin levels. Thanks to its modularity, the platform can be reconfigured for tasks such as low-temperature scanning or NV centre characterization. When combined with SmarAct’s interferometric sensors, the system delivers unmatched accuracy and long-term stability under extreme conditions.

Also debuting is the SGF series of flexure-based goniometers – compact, zero-backlash rotation stages developed in collaboration with the University of Twente. Constructed entirely from non-ferromagnetic materials, the goniometers are ideal for quantum optics, electron and ion beam systems. Their precision has been validated in a research paper presented at EUSPEN 2023.

Targeting the evolving semiconductor and photonics markets, SmarAct’s optical assembly platforms enable nanometre-accurate alignment and integration of optical components. At their core is a modular high-performance toolkit for application-specific configurations, with the new SmarAct robot control software serving as the digital backbone. Key components include SMARPOD parallel kinematic platforms, long-travel SMARSHIFT electromagnetic linear stages and ultraprecise microgrippers – all seamlessly integrated to perform complex optical alignment tasks with maximum efficiency.

Highlights at Laser World of Photonics include a gantry-based assembly system developed for the active alignment of beam splitters and ferrules, and a compact, fully automated fibre array assembly system designed for multicore and polarization-maintaining fibres. Also on display are modular probing systems for fast, accurate and reliable alignment of fibres and optical elements – providing the positioning precision required for chip- and wafer-level testing of PICs prior to packaging. Finally, the microassembly platform P50 from SmarAct Automation offers a turnkey solution for automating critical micro-assembly tasks such as handling, alignment and joining of tiny components.

Whether you’re working on photonic chip packaging, quantum instrumentation, miniaturized medical systems or advanced semiconductor metrology, SmarAct invites researchers, engineers and decision-makers to experience next-generation positioning, automation and metrology solutions live in Munich.

  • Visit SmarAct at booth #107 in Hall B2.

 

The post Laser World of Photonics showcases cutting-edge optical innovation appeared first on Physics World.

Liquid carbon reveals its secrets

19 June 2025 at 10:00

Thanks to new experiments using the DIPOLE 100-X high-performance laser at the European X-ray Free Electron Laser (XFEL), an international collaboration of physicists has obtained the first detailed view of the microstructure of carbon in its liquid state. The work will help refine models of liquid carbon, enabling important insights into the role that it plays in the interior of ice giant planets like Uranus and Neptune, where liquid carbon exists in abundance. It could also inform the choice of ablator materials in future technologies such as nuclear fusion.

Carbon is one of the most abundant elements on Earth and indeed in the universe, but we still know very little about how it behaves in its liquid state. This is because producing liquid carbon is extremely difficult: at ambient pressure it sublimes rather than melts, and the liquid phase requires pressures of at least several hundred atmospheres to form. What is more, carbon boasts the highest melting temperature (roughly 4500 °C) of all known materials under these high-pressure conditions, which means that there is no substance that can contain it for long enough to be studied and characterized.

In situ probing laser compression technique

There is an alternative, though, which involves using X-ray free electron laser pulses – such as those produced at the European XFEL – to transform solid carbon into a liquid for a few nanoseconds. The next challenge is to make measurements during this very short period of time. But this is exactly what a team led by Dominik Kraus of the University of Rostock and the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) has succeeded in doing.

In their work, Kraus and colleagues transiently created liquid carbon by driving strong compression waves into solid carbon samples using the pulsed high-energy laser DIPOLE 100-X, which is a new experimental platform at the European XFEL. In this way, the researchers were able to achieve pressures exceeding one million atmospheres, with the compression waves simultaneously heating the samples to around 7000 K to form liquid carbon. They then obtained in situ snapshots of the structure using ultrabright X-ray pulses at the European XFEL that lasted just 25 fs – that is, about 100,000 times shorter than the already very short lifetime of the liquid carbon samples.

Relevance to planetary interiors and inertial fusion

Studying liquid carbon is important for modelling the interior of planets such as the ice giants Neptune and Uranus, as well as the atmosphere of white dwarfs, in which it also exists, explains Kraus. The insights gleaned from the team’s experiments will help to clarify the role that liquid carbon plays in the ice giants and perhaps even comparable carbon-rich exoplanets.

Liquid carbon also forms as a transient state during some technical processes, like in the synthesis of carbon-based materials such as carbon nanotubes, nanodiamonds or “Q-carbon”, and may be key for the synthesis of new carbon materials, such as the long sought after (but still only predicted) “BC-8” structure. The team’s findings could also help inform the choice of materials for inertial fusion implosions aiming for clean and reliable energy production, where carbon is used as an ablator material.

“Because of its relevance in these areas, I had already tried to study liquid carbon during my doctoral work more than 10 years ago,” Kraus says. “Without an XFEL for characterization, I could only obtain a tiny hint of the liquid structure of carbon (and with large error bars) and was barely able to refine any existing models.”

Until now, however, this work was considered as being the best attempt to characterize the structure of liquid carbon at Mbar pressures, he tells Physics World. “Using the XFEL as a characterization tool and the subsequent analysis was incredibly simple in comparison to all the previous work and, in the end, the most important challenge was to get the European XFEL facility ready – something that I had already discussed more than 10 years ago too when the first plans were being made for studying matter under extreme conditions at such an installation.”

The results of the new study, which is detailed in Nature, prove that simple models cannot describe the liquid state of carbon very well, and that sophisticated atomistic simulations are required for predicting processes involving this material, he says.

Looking forward, the Rostock University and HZDR researchers now plan to extend their methodology to the liquid states of various other materials. “In particular, we will study mixtures of light elements that may exist in planetary interiors and the resulting chemistry at extreme conditions,” reveals Kraus. “This work may also be interesting for forming doped nanodiamonds or other phases with potential technological applications.”

The post Liquid carbon reveals its secrets appeared first on Physics World.

Tiny laser delivers high-quality, narrowband light for metrology

18 June 2025 at 18:00

A new solid-state laser can make a vast number of precise optical measurements each second, while sweeping across a broad range of optical wavelengths. Created by a team led by Qiang Lin at the University of Rochester in the US, the device can be fully integrated onto a single chip.

Optical metrology is a highly versatile technique that uses light to gather information about the physical properties of target objects. It involves illuminating a sample and measuring the results with great precision – using techniques such as interferometry and spectroscopy. In the 1960s, the introduction of lasers and the coherent light they emit boosted the technique to an unprecedented level of precision. This paved the way for advances ranging from optical clocks, to the detection of gravitational waves.

Yet despite the indispensable role they have played so far, lasers have also created a difficult challenge. To ensure the best possible precision, experimentalists must achieve very tight control over the wavelength, phase, polarization and other properties of the laser light. This is very difficult to do within the tiny solid-state laser diodes that are very useful in metrology.

Currently, the light from laser diodes is improved externally using optical modules. This added infrastructure is inherently bulky and it remains difficult to integrate the entire setup onto chip-scale components – which limits the development of small, fast lasers for metrology.

Two innovations

Lin and colleagues addressed this challenge by designing a new laser with two key components. One is a laser cavity that comprises a thin film of lithium niobate. Thanks to the Pockels effect, this material’s refractive index can vary depending on the strength of an applied electric field. This provides control over the wavelength of the light amplified by the cavity.
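
For context, the tuning mechanism can be summarized by the textbook expression for the Pockels-induced index change in lithium niobate when the field is applied along the crystal’s optic axis (a standard relation included here for illustration, not taken from the paper):

```latex
\Delta n_{e} \;\approx\; -\tfrac{1}{2}\, n_{e}^{3}\, r_{33}\, E
```

Because the index change is linear in the applied field E, the cavity resonance can be shifted rapidly and continuously by the integrated electrodes described below.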

The other component is a distributed Bragg reflector (DBR), which is a structure containing periodic grooves that create alternating regions of refractive index. With the right spacing of these grooves, a DBR can strongly reflect light at a single, narrow linewidth, while scattering all other wavelengths. In previous studies, lasers were created by etching a DBR directly onto a lithium niobate film – but due to the material’s optical properties, this resulted in a broad linewidth.
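
For reference, a DBR reflects most strongly at the Bragg wavelength set by the grating period Λ and the effective refractive index n_eff of the guided mode (again a textbook relation, quoted for context):

```latex
\lambda_{\mathrm{B}} \;=\; 2\, n_{\mathrm{eff}}\, \Lambda
```

The reflection band narrows as the grating is made weaker, which is why moving the grating into a low-scattering silica cladding, as described next, helps achieve narrow-linewidth operation.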

“Instead, we developed an ‘extended DBR’ structure, where the Bragg grating is defined in a silica cladding,” explains team member Mingxiao Li at the University of California Santa Barbara. “This allowed for flexible control over the grating strength, via the thickness and etch depth of the cladding. It also leverages silica’s superior etchability to achieve low scattering strength, which is essential for narrow linewidth operation.”

Using a system of integrated electrodes, Lin’s team can adjust the strength of the electric field they applied to the lithium niobate film. This allows them to rapidly tune the wavelengths amplified by the cavity via the Pockels effect. In addition, they used a specially designed waveguide to control the phase of light passing into the cavity. This design enabled them to tune their laser over a broad range of wavelengths, without needing external correction modules to achieve narrow linewidths.

Narrowband performance

Altogether, the laser demonstrated an outstanding performance on a single chip – producing a clean, single wavelength with very little noise. Most importantly, the light had a linewidth of just 167 Hz – the narrowest achieved to date for a single-chip lithium niobate laser. This exceptional performance enabled the laser to rapidly sweep across a bandwidth of over 10 GHz – equivalent to a sweep rate of quintillions of hertz per second.

“These capabilities translated directly into successful applications,” Li describes. “The laser served as the core light source in a high-speed LIDAR system, measuring the velocity of a target 0.4 m away with better than 2 cm distance resolution. The system supports a velocity measurement as high as Earth’s orbital velocity – around 7.91 km/s – at 1 m.” Furthermore, Lin’s team were able to lock their laser’s frequency with a reference gas cell, integrated directly onto the same chip.
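
A rough back-of-the-envelope check, assuming a telecom-band wavelength of about 1550 nm (an assumption on our part, not a figure quoted above): the two-way Doppler shift for a target moving at Earth’s orbital velocity is of the same order as the >10 GHz sweep range mentioned earlier.

```python
wavelength = 1550e-9                  # assumed laser wavelength in metres (not stated in the article)
v = 7.91e3                            # Earth's orbital velocity quoted above, m/s

doppler_shift = 2 * v / wavelength    # two-way Doppler shift of the reflected beam, Hz
print(f"{doppler_shift / 1e9:.1f} GHz")   # ≈ 10.2 GHz
```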

By eliminating the need for bulky control modules, the team’s design could now pave the way for the full miniaturization of optical metrology – with immediate benefits for technologies including optical clocks, quantum computers, self-driving vehicles, and many others.

“Beyond these, the laser’s core advantages – exceptional coherence, multifunctional control, and scalable fabrication – position it as a versatile platform for transformative advances in high-speed communications, ultra-precise frequency generation, and microwave photonics,” Lin says.

The new laser is described in Light: Science & Applications.

The post Tiny laser delivers high-quality, narrowband light for metrology appeared first on Physics World.

Astronomers capture spectacular ‘thousand colour’ image of the Sculptor Galaxy

18 June 2025 at 14:01

Astronomers at the European Southern Observatory’s Very Large Telescope (VLT) have created a thousand colour image of the nearby Sculptor Galaxy.

First discovered by Caroline Herschel in 1783, the spiral galaxy lies 11 million light-years away and is one of the brightest galaxies in the sky.

While conventional images contain only a handful of colours, this new map contains thousands, which helps astronomers to understand the age, composition and motion of the stars, gas and dust within it.

To create the image, researchers observed the galaxy for over 50 hours with the Multi Unit Spectroscopic Explorer (MUSE) instrument on the VLT, which is based at the Paranal Observatory in Chile’s Atacama Desert.

The team then stitched together over 100 exposures to cover an area of the galaxy about 65 000 light-years wide.

The image revealed around 500 planetary nebulae – regions of gas and dust cast off from dying Sun-like stars – that can be used as distance markers to their host galaxies.

“Galaxies are incredibly complex systems that we are still struggling to understand,” notes astronomer Enrico Congiu, lead author of the study. “The Sculptor Galaxy is in a sweet spot – it is close enough that we can resolve its internal structure and study its building blocks with incredible detail, but at the same time, big enough that we can still see it as a whole system.”

Future work will involve understanding how gas flows, changes its composition, and forms stars in the galaxy. “How such small processes can have such a big impact on a galaxy whose entire size is thousands of times bigger is still a mystery,” adds Congiu.

The post Astronomers capture spectacular ‘thousand colour’ image of the Sculptor Galaxy appeared first on Physics World.

Delving into the scientific mind, astronomy’s happy accidents, lit science experiments at home, the art of NASA: micro reviews of recent books

18 June 2025 at 12:00

The Shape of Wonder: How Scientists Think, Work and Live
By Alan Lightman and Martin Rees

In their delightful new book, cosmologist Martin Rees and physicist and science writer Alan Lightman seek to provide “an honest picture of scientists as people and how they work and think”. The Shape of Wonder does this by exploring the nature of science, examining the role of critical thinking, and looking at how scientific theories are created and revised as new evidence emerges. It also includes profiles of individual scientists, ranging from historical Nobel-prize winners such as physicist Werner Heisenberg and biologist Barbara McClintock, to rising stars like CERN theorist Dorota Grabowska. Matin Durrani

  • 2025 Pantheon Books

Our Accidental Universe: Stories of Discovery from Asteroids to Aliens
By Chris Lintott

TV presenter and physics professor Chris Lintott brings all his charm and wit to his new book Our Accidental Universe. He looks at astronomy through the lens of the human errors and accidents that lead to new knowledge. It’s a loose theme that allows him to skip from the search for alien life to pulsars and the Hubble Space Telescope. Lintott has visited many of the facilities he discusses, and spoken to many people working in these areas, adding a personal touch to his stated aim of elucidating how science really gets done. Kate Gardner

  • 2024 Penguin

Science is Lit: Awesome Electricity and Mad Magnets
By Big Manny (Emanuel Wallace)

Want to feed your child’s curiosity about how things work (and don’t mind creating a mini lab in your house)? Take a look at Awesome Electricity and Mad Magnets, the second in the Science is Lit series by Emanuel Wallace – aka TikTok star “Big Manny”. Wallace introduces four key concepts of physics – force, sound, light and electricity – in an enthusiastic and fun way that’s accessible for 8–12 year olds. With instructions for experiments kids can do at home, and a clear explanation of the scientific process, your child can really experience what it’s like to be a scientist. Sarah Tesh

  • 2025 Puffin
Painting of a grey-white lunar landscape featuring several astronauts and dozens of scientific apparatus
NASA art This concept painting by Robert McCall shows a telescope in a hypothetical lunar observatory, sheltered from the Sun to protect its lens. (Courtesy: Robert McCall)

Space Posters & Paintings: Art About NASA
By Bill Schwartz

Astronomy is the most visually gifted of all the sciences, with endless stunning photographs of our cosmos. But perhaps what sets NASA apart from other space agencies is its art programme, which has existed since 1962. In Space Posters and Paintings: Art about NASA, documentary filmmaker Bill Schwartz has curated a striking collection of nostalgic artworks that paint the history of NASA and its various missions across the solar system and beyond. Particularly captivating are pioneering artist Robert McCall’s paintings of the Gemini and Apollo missions. This large-format coffee-table book is a perfect purchase for any astronomy buff. Tushna Commissariat

  • 2024 ACC Art Books

The post Delving into the scientific mind, astronomy’s happy accidents, lit science experiments at home, the art of NASA: micro reviews of recent books appeared first on Physics World.

US astronomy facing ‘extinction level’ event following Trump’s 2026 budget request

17 June 2025 at 16:01

The administration of US president Donald Trump has proposed drastic cuts to science that would have severe consequences for physics and astronomy if passed by the US Congress. The proposal could involve the cancellation of one of the twin US-based gravitational-wave detectors as well as the axing of a proposed next-generation ground-based telescope and a suite of planned NASA missions. Scientific societies, groups of scientists and individuals have expressed their shock over the scale of the reductions.

In the budget request, which represents the start of the budgeting procedure for the year from 1 October, the National Science Foundation (NSF) would see its funding plummet from $9bn to just $3.9bn – imperilling several significant projects. While the NSF had hoped to support both next-generation ground-based telescopes planned by the agency – the Giant Magellan Telescope (GMT) and the Thirty Meter Telescope (TMT) – the new budget would only allow one to be supported.

On 12 June the GMT, which is already 40% completed thanks to private funds, received NSF approval confirming that the observatory will advance into its “major facilities final design phase”, one of the final steps before becoming eligible for federal construction funding. The TMT, meanwhile, which is set to be built in Hawaii, has been hit with delays following protests over adding more telescopes to Mauna Kea. In a statement from the TMT International Observatory, it said it was “disappointed that the NSF’s current budget proposal does not include TMT”.

It is also possible that one of the twin Laser Interferometer Gravitational-Wave Observatory (LIGO) facilities – one in Hanford, Washington and the other in Livingston, Louisiana – would have to close down, given that the budget proposes a 39.6% cut to LIGO operations. Operating only one LIGO facility would significantly reduce the observatory’s ability to identify and localize events that produce gravitational waves.

“This level of cut, if enacted, would drastically reduce the science coming out of LIGO and have long-term negative consequences for gravitational-wave astrophysics,” notes LIGO executive director David Reitze. LIGO officials told Physics World that the cuts would be “extremely punishing to US gravitational wave science” and would mean “layoffs to staff, reduced scientific output, and the loss of scientific leadership in a field that made first detections just under 10 years ago”.

NASA’s science funding, meanwhile, would fall by 47% year on year, and the agency as a whole would see more than 5500 staff lose their jobs as its workforce is slashed from 17 391 to just 11 853. NASA would also lose planned missions to Venus, Mars, Jupiter and the asteroid Apophis, which will pass close to Earth in 2029. Several scientific missions focusing on planet Earth would also be axed.

The American Astronomical Society expressed “grave concern” that the cuts to NASA and the NSF “would result in an historic decline of American investment in basic scientific research”. The Planetary Society called the proposed NASA budget “an extinction-level event for the space agency’s most productive, successful and broadly supported activity”. Before the cuts were announced, the Trump administration pulled its nomination of billionaire industrialist Jared Isaacman for NASA administrator after his supporter Elon Musk left his post as head of the “Department of Government Efficiency.”

‘The elephant in the room’

The Department of Energy, meanwhile, will see its defence-related budget stay nearly flat, at a proposed $33.8bn for next year compared with the current $34.0bn. But its non-defence budget will fall by 26% from $16.83bn to $12.48bn. Michael Kratsios, Trump’s science adviser and head of the White House Office of Science and Technology Policy, sought to justify the administration’s planned cuts in a meeting at the National Academy of Sciences (NAS) on 19 May.

“Spending more money on the wrong things is far worse than spending less money on the right things,” Kratsios noted, adding that the country had received “diminishing returns” on its investments in science over the past four decades and that it now requires “new methods and approaches to supporting research”. He also suggested that research now undertaken at US universities falls short of what he called “gold standard science”, citing “political biases [that] have displaced the vital search for truth”. Universities, he stated, have lost public trust because they have “promoted diversity, equity and inclusion”.

The US science community, however, is unconvinced. “The elephant in the room right now is whether the drastic reductions in research budgets and new research policies across the federal agencies will allow us to remain a research and development powerhouse,” says Marcia McNutt, president of the National Academy of Sciences. “Thus, we are embarking on a radical new experiment in what conditions promote science leadership – with the US being the ‘treatment’ group, and China as the control.”

Former presidential science adviser Neal Lane, now at Rice University, told Physics World that while the US administration appears to value some aspects of scientific research such as AI, quantum, nuclear and biotechnologies, it “doesn’t seem to understand or acknowledge that technological advances and innovation often come from basic research in unlikely fields of science”. He expects the science community to “continue to push back” by writing to and visiting members of Congress, many of whom support science, and “by speaking out to the public and encouraging various organizations to do the same”.

Indeed, an open letter by the group Stand Up for Science dated 26 May calls the administration’s stated commitment to “gold standard science” an approach “that will actually undermine scientific rigor and the transparent progress of science”. It would “introduce stifling limits on intellectual freedom in our nation’s laboratories and federal funding agencies”, the letter adds.

As of 13 June, the letter had more than 9250 signatures. Another letter, sent to Jay Bhattacharya, director of the National Institutes of Health (NIH), by some 350 NIH members, almost 100 of whom identified themselves, asserted that they “remain pressured to implement harmful measures” such as halting clinical trials midstream. In the budget request, the NIH would lose about 40% of its funding, leaving it with $27.5bn next year. The administration also plans to consolidate the NIH’s 27 institutes into just eight.

A political divide

On the day that the budget was announced, 16 states run by Democratic governors called on a federal court to block cuts in programmes and funding for the NSF. They point out that universities in their states could lose significant income if the cuts go ahead. In fact, the administration’s budget proposal is just that: a proposal. Congress will almost certainly make changes to it before presenting it to Trump for his signature. And while Republicans in the Senate and House of Representatives find it difficult to oppose the administration, science has historically enjoyed support by both Democrats and Republicans.

Despite that, scientists are gearing up for a difficult summer of speculation about financial support. “We are gaming matters at the moment because we are looking at the next budget cycle,” says Peter Littlewood, chair of the University of Chicago’s physics department. “The principal issues now are to bridge postdocs and graduating PhD students, who are in limbo because offers are drying up.” Littlewood says that, while alternative sources of funding such as philanthropic contributions can help, if the proposed government cuts are approved then philanthropy can’t replace federal support. “I’m less worried about whether this or that piece of research gets done than in stabilizing the pipeline, so all our discussions centre around that,” adds Littlewood.

Lane fears the cuts will put people off from careers in science, even in the unlikely event that all the cuts get reversed. “The combination of statements by the president and other administrative officials do considerable harm by discouraging young people born in the US and other parts of the world from pursuing their education and careers in [science] in America,” he says. “That’s a loss for all Americans.”

The post US astronomy facing ‘extinction level’ event following Trump’s 2026 budget request appeared first on Physics World.

Short-lived eclipsing binary pulsar spotted in Milky Way

17 June 2025 at 14:00

Astronomers in China have observed a pulsar that becomes partially eclipsed by an orbiting companion star every few hours. This type of observation is very rare and could shed new light on how binary star systems evolve.

While most stars in our galaxy exist in pairs, the way these binary systems form and evolve is still little understood. According to current theories, when two stars orbit each other, one of them may expand so much that its atmosphere becomes large enough to encompass the other. During this “envelope” phase, mass can be transferred from one star to the other, causing the stars’ orbit to shrink over a period of around 1000 years. After this, the stars either merge or the envelope is ejected.

In the special case where one star in the pair is a neutron star, the envelope-ejection scenario should, in theory, produce a helium star that has been “stripped” of much of its material and a “recycled” millisecond pulsar – that is, a rapidly spinning neutron star that flashes radio pulses hundreds of times per second. In this type of binary system, the helium star can periodically eclipse the pulsar as it orbits around it, blocking its radio pulses and preventing us from detecting them here on Earth. Only a few examples of such a binary system have ever been observed, however, and all previous ones were in nearby dwarf galaxies called the Magellanic Clouds, rather than our own Milky Way.

A special pulsar

Astronomers led by Jinlin Han from the National Astronomical Observatories of China say they have now identified the first system of this type in the Milky Way. The pulsar in the binary, denoted PSR J1928+1815, had been previously identified using the Five-hundred-meter Aperture Spherical radio Telescope (FAST) during the FAST Galactic Plane Pulsar Snapshot survey. These observations showed that PSR J1928+1815 has a spin period of 10.55 ms, which is relatively short for a pulsar of this type and suggests it had recently sped up by accreting mass from a companion.

The researchers used FAST to observe this suspected binary system at radio frequencies ranging from 1.0 to 1.5 GHz over a period of four and a half years. They fitted the times that the radio pulses arrived at the telescope with a binary orbit model to show that the system has an eccentricity of less than 3 × 10−5. This suggests that the pulsar and its companion star are in a nearly circular orbit. The diameter of this orbit, Han points out, is smaller than that of our own Sun, and its period – that is, the time it takes the two stars to circle each other – is correspondingly short, at 3.6 hours. For a sixth of this time, the companion star blocks the pulsar’s radio signals.
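
The numbers quoted above can be roughly cross-checked with Kepler’s third law, which links the orbital period to the separation of the two stars. The sketch below is illustrative only: the masses assumed for the pulsar (1.4 solar masses) and its helium-star companion (1 solar mass) are typical values, not figures reported in the study.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
R_SUN = 6.96e8         # solar radius, m

# Assumed, illustrative masses: a canonical neutron star plus a ~1 solar-mass helium star
m_pulsar = 1.4 * M_SUN
m_companion = 1.0 * M_SUN

P = 3.6 * 3600.0       # orbital period reported in the study, in seconds

# Kepler's third law: a^3 = G * (m1 + m2) * P^2 / (4 * pi^2)
a = (G * (m_pulsar + m_companion) * P**2 / (4 * math.pi**2)) ** (1 / 3)

print(f"Orbital separation: {a/1e9:.2f} million km")
print(f"...or about {a/(2*R_SUN):.2f} solar diameters")
```

With these assumed masses the separation comes out at roughly 1.1 million kilometres – about 80% of the Sun’s diameter – consistent with the compact orbit described above.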

The team also found that the rate at which the pulsar’s spin period is changing (the so-called spin period derivative) is unusually high for a millisecond pulsar, at 3.63 × 10−18 s s−1. This shows that energy is rapidly being lost from the system as the pulsar spins down.
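
To see why such a period derivative implies rapid energy loss, one can apply the standard spin-down luminosity formula, Ė = 4π²IṖ/P³. The short sketch below assumes a canonical neutron-star moment of inertia of 10^38 kg m² – a textbook value, not one taken from the study.

```python
import math

I = 1e38          # assumed moment of inertia of a canonical neutron star, kg m^2
P = 10.55e-3      # spin period, s
P_dot = 3.63e-18  # spin period derivative, dimensionless (seconds per second)

# Rotational energy is E = 2 * pi^2 * I / P^2, so its loss rate is
# E_dot = 4 * pi^2 * I * P_dot / P^3
E_dot = 4 * math.pi**2 * I * P_dot / P**3

print(f"Spin-down luminosity: {E_dot:.2e} W")
```

Under that assumption the pulsar is shedding rotational energy at roughly 10^28 W – some 30 times the Sun’s total luminosity.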

“We knew that PSR J1928+1815 was special from November 2021 onwards,” says Han. “Once we’d accumulated data with FAST, one of my students, ZongLin Yang, studied the evolution of such binaries in general and completed the timing calculations from the data we had obtained for this system. His results suggested the existence of the helium star companion and everything then fell into place.”

Short-lived phenomenon

This is the first time a short-lived (around 10 million years) binary consisting of a neutron star and a helium star has ever been detected, Han tells Physics World. “It is a product of the common envelope evolution that lasted for only 1000 years and that we couldn’t observe directly,” he says.

“Our new observation is the smoking gun for long-standing binary star evolution theories, such as those that describe how stars exchange mass and shrink their orbits, how the neutron star spins up by accreting matter from its companion and how the shared hydrogen envelope is ejected.”

The system could help astronomers study how neutron stars accrete matter and then cool down, he adds. “The binary detected in this work will evolve to become a system of two compact stars that will eventually merge and become a future source of gravitational waves.”

Full details of the study are reported in Science.

The post Short-lived eclipsing binary pulsar spotted in Milky Way appeared first on Physics World.

How quantum sensors could improve human health and wellbeing

17 juin 2025 à 12:00

As the world celebrates the 2025 International Year of Quantum Science and Technology, it’s natural that we should focus on the exciting applications of quantum physics in computing, communication and cryptography. But quantum physics is also set to have a huge impact on medicine and healthcare. Quantum sensors, in particular, can help us to study the human body and improve medical diagnosis – in fact, several systems are close to being commercialized.

Quantum computers, meanwhile, could one day help us to discover new drugs by providing representations of atomic structures with greater accuracy and by speeding up calculations to identify potential drug reactions. But what other technologies and projects are out there? How can we forge new applications of quantum physics in healthcare and how can we help discover new potential use cases for the technology?

Those are some of the questions tackled in a recent report, published by Innovate UK in October 2024, on which this Physics World article is based. Entitled Quantum for Life, the report aims to kickstart new collaborations by raising awareness of what quantum physics can do for the healthcare sector. While the report says quite a bit about quantum computing and quantum networking, this article will focus on quantum sensors, which are closer to being deployed.

Sense about sensors

The importance of quantum science to healthcare isn’t new. In fact, when a group of academics and government representatives gathered at Chicheley Hall back in 2013 to hatch plans for the UK’s National Quantum Technologies Programme, healthcare was one of the main applications they identified. The resulting £1bn programme, which co-ordinated the UK’s quantum-research efforts, was recently renewed for another decade and – once again – healthcare is a key part of the remit.

As it happens, most major hospitals already use quantum sensors in the form of magnetic resonance imaging (MRI) machines. Pioneered in the 1970s, these devices manipulate the quantum spin states of hydrogen atoms using magnetic fields and radio waves. By measuring how long those states take to relax, MRI can image soft tissues, such as the brain, and is now a vital part of the modern medicine toolkit.
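
For readers who want the quantitative picture behind “how long those states take to relax”: after a radio-frequency pulse tips the nuclear spins, the magnetization returns to equilibrium exponentially, and it is the characteristic times of that return that MRI maps. In standard notation,

```latex
\[
M_z(t) = M_0\left(1 - e^{-t/T_1}\right), \qquad
M_{xy}(t) = M_{xy}(0)\, e^{-t/T_2},
\]
```

where T1 and T2 are the longitudinal and transverse relaxation times. Differences in these times between tissue types are what give MRI its soft-tissue contrast.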

While an MRI machine measures the quantum properties of atoms, the sensor itself is classical, essentially consisting of electromagnetic coils that detect the magnetic flux produced when atomic spins change direction. More recently, though, we’ve seen a new generation of nanoscale quantum sensors that are sensitive enough to detect magnetic fields emitted by a target biological system. Others, meanwhile, consist of just a single atom and can monitor small changes in the environment.

As the Quantum for Life report shows, there are lots of different quantum-based companies and institutions working in the healthcare sector. There are also many promising types of quantum sensors, which use photons, electrons or spin defects within a material, typically diamond. But ultimately what matters is what quantum sensors can achieve in a medical environment.

Quantum diagnosis

While compiling the report, it became clear that quantum-sensor technologies for healthcare come in five broad categories. The first is what the report labels “lab diagnostics”, in which trained staff use quantum sensors to observe what is going on inside the human body. By monitoring everything from our internal temperature to the composition of cells, the sensors can help to identify diseases such as cancer.

Currently, the only way to definitively diagnose cancer is to take a sample of cells – a biopsy – and examine them under a microscope in a laboratory. Biopsies are often examined with visible light, but that can damage a sample, making diagnosis tricky. Another option is to use infrared radiation. By monitoring the specific wavelengths the cells absorb, the compounds in a sample can be identified, allowing molecular changes linked with cancer to be tracked.

Unfortunately, it can be hard to differentiate these signals from background noise. What’s more, infrared cameras are much more expensive than those operating in the visible region. One possible solution is being explored by Digistain, a company that was spun out of Imperial College London in 2019. It is developing a product called EntangleCam that uses two entangled photons – one infrared and one visible (figure 1).

1 Entangled thoughts

a One way in which quantum physics is benefiting healthcare is through entangled photons created by passing laser light through a nonlinear crystal. Each laser photon gets converted into two lower-energy photons – one visible, one infrared – in a process called spontaneous parametric down-conversion. In technology pioneered by the UK company Digistain, the infrared photon can be sent through a sample, with the visible photon picked up by a detector. As the photons are entangled, the visible photon gives information about the infrared photon and hence the presence of, say, cancer cells. (Adapted from Quantum for Life: How UK Life Sciences and Healthcare Can Benefit from Quantum Technologies by IOP Publishing)

b Cells seen with a traditional stained biopsy (left) and with Digistain’s method (right). (Courtesy: Digistain)

If the infrared photon is absorbed by, say, a breast cancer cell, that immediately affects the visible photon with which it is entangled. So by measuring the visible light, which can be done with a cheap, efficient detector, you can get information about the infrared photon – and hence the presence of a potential cancer cell (Phys. Rev. 108 032613). The technique could therefore allow cancer to be quickly diagnosed before a tumour has built up, although an oncologist would still be needed to identify the area for the technique to be applied.

Point of care

The second promising application of quantum sensors lies in “point-of-care” diagnostics. We all became familiar with the concept during the COVID-19 pandemic when lateral-flow tests proved to be a vital part of the worldwide response to the virus. The tests could be taken anywhere and were quick, simple, reliable and relatively cheap. Something that had originally been designed to be used in a lab was now available to most people at home.

Quantum technology could let us miniaturize such tests further and make them more accurate, such that they could be used at hospitals, doctors’ surgeries or even at home. At the moment, biological indicators of disease tend to be measured by tagging molecules with fluorescent markers and measuring where, when and how much light they emit. But because some molecules are naturally fluorescent, those measurements have to be processed to eliminate the background noise.

One emerging quantum-based alternative is to characterize biological samples by measuring their tiny magnetic fields. This can be done, for example, using diamond specially engineered with nitrogen-vacancy (NV) defects. Each defect is made by removing two carbon atoms from the lattice and implanting a nitrogen atom in one of the gaps, leaving a vacancy in the other. Each defect behaves like an atom with discrete energy levels, and its spin state is influenced by the local magnetic field and can be “read out” from the way it fluoresces.
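
The numbers behind this sensitivity are easy to sketch: the NV centre’s electron spin shifts its resonance by roughly 28 GHz for every tesla of applied field, so even nanotesla-scale biological fields translate into measurable frequency shifts. The snippet below is a generic back-of-the-envelope illustration, using the standard zero-field splitting and gyromagnetic ratio of the NV ground state, not a model of any particular commercial device.

```python
GAMMA_NV = 28.0e9   # electron-spin gyromagnetic ratio, Hz per tesla (approximate)
D_ZFS = 2.87e9      # NV ground-state zero-field splitting, Hz

def odmr_frequencies(B_parallel):
    """Return the two spin-resonance frequencies (Hz) of an NV centre for a
    magnetic field B_parallel (tesla) along the NV axis, ignoring strain."""
    return D_ZFS - GAMMA_NV * B_parallel, D_ZFS + GAMMA_NV * B_parallel

for B in (1e-9, 1e-6):  # a nanotesla and a microtesla
    f_minus, f_plus = odmr_frequencies(B)
    print(f"B = {B:.0e} T -> line splitting = {f_plus - f_minus:.1f} Hz")
```

A nanotesla field splits the two resonance lines by just a few tens of hertz, which is why careful averaging – and the shielding provided by the surrounding carbon lattice – matters so much in biological measurements.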

One UK company working in this area is Element Six. It has joined forces with the US-based firm QDTI to make a single-crystal diamond-based device that can quickly identify biomarkers in blood plasma, cerebrospinal fluid and other samples extracted from the body. The device detects magnetic fields produced by specific proteins, which can help identify diseases in their early stages, including various cancers and neurodegenerative conditions like Alzheimer’s. Another firm using single-crystal diamond to detect cancer cells is Germany-based Quantum Total Analysis Systems (QTAS).

Matthew Markham, a physicist who is head of quantum technologies at Element Six, thinks that healthcare has been “a real turning point” for the company. “A few years ago, this work was mostly focused on academic problems,” he says. “But now we are seeing this technology being applied to real-world use cases and that it is transitioning into industry with devices being tested in the field.”

An alternative approach involves using tiny nanometre-sized diamond particles with NV centres, which have the advantage of being highly biocompatible. QT Sense of the Netherlands, for example, is using these nanodiamonds to build nano-MRI scanners that can measure the concentration of molecules that have an intrinsic magnetic field. This equipment has already been used by biomedical researchers to investigate single cells (figure 2).

2 Centre of attention

A nitrogen-vacancy defect in diamond – known as an NV centre – is made by removing two carbon atoms from the lattice and implanting a nitrogen atom in one of the gaps, leaving a vacancy in the other. Using a pulse of green laser light, NV centres can be sent from their ground state to an excited state. When the laser is switched off, the defects return to their ground state, emitting visible photons that can be detected. However, the rate at which this fluorescence fades once the laser is off depends on the local magnetic field. As companies like Element Six and QT Sense are discovering, NV centres in diamond are a great way of measuring magnetic fields in the human body, especially as the surrounding lattice of carbon atoms shields the NV centre from noise. (Courtesy: Element Six)

Australian firm FeBI Technologies, meanwhile, is developing a device that uses nanodiamonds to measure the magnetic properties of ferritin – a protein that stores iron in the body. The company claims its technology is nine orders of magnitude more sensitive than traditional MRI and will allow patients to monitor the amount of iron in their blood using a device that is accurate and cheap.

Wearable healthcare

The third area in which quantum technologies are benefiting healthcare is what’s billed in the Quantum for Life report as “consumer medical monitoring and wearable healthcare”. In other words, we’re talking about devices that allow people to monitor their health in daily life on an ongoing basis. Such technologies are particularly useful for people who have a diagnosed medical condition, such as diabetes or high blood pressure.

NIQS Tech, for example, was spun off from the University of Leeds in 2022 and is developing a highly accurate, non-invasive sensor for measuring glucose levels. Traditional glucose-monitoring devices are painful and invasive because they basically involve sticking a needle in the body. While newer devices use light-based spectroscopic measurements, they tend to be less effective for patients with darker skin tones.

The sensor from NIQS Tech instead uses a doped silica platform, which enables quantum interference effects. When placed in contact with the skin and illuminated with laser light, the device fluoresces, with the lifetime of the fluorescence depending on the amount of glucose in the user’s blood, regardless of skin tone. NIQS has already demonstrated proof of concept with lab-based testing and now wants to shrink the technology to create a wearable device that monitors glucose levels continuously.
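
Lifetime-based sensing of this kind ultimately comes down to fitting an exponential decay to the detected fluorescence. The sketch below illustrates the general idea with simulated data; the lifetime used, and any link between lifetime and glucose concentration, are purely illustrative and are not NIQS Tech’s actual calibration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated fluorescence decay: I(t) = A * exp(-t / tau), plus detector noise
tau_true = 45e-9                   # illustrative lifetime of 45 ns
t = np.linspace(0, 300e-9, 200)    # time axis, 0-300 ns
signal = 1000.0 * np.exp(-t / tau_true) + rng.normal(0.0, 5.0, t.size)

# Estimate the lifetime from a straight-line fit to log(intensity)
mask = signal > 50                 # keep points well above the noise floor
slope, _ = np.polyfit(t[mask], np.log(signal[mask]), 1)
tau_fit = -1.0 / slope

print(f"Recovered lifetime: {tau_fit*1e9:.1f} ns (true value {tau_true*1e9:.0f} ns)")
```

In a real device, a calibration curve specific to the sensor would then convert the recovered lifetime into a glucose reading.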

Body imaging

The fourth application of quantum tech lies in body scanning, which allows patients to be diagnosed without needing a biopsy. One company leading in this area is Cerca Magnetics, which was spun off from the University of Nottingham. In 2023 it won the inaugural qBIG prize for quantum innovation from the Institute of Physics, which publishes Physics World, for developing wearable optically pumped magnetometers for magnetoencephalography (MEG), which measure magnetic fields generated by neuronal firings in the brain. Its devices can be used to scan patients’ brains in a comfortable seated position and even while they are moving.

Quantum-based scanning techniques could also help diagnose breast cancer, which is usually done by exposing a patient’s breast tissue to low doses of X-rays. The trouble with such mammograms is that all breasts contain a mix of low-density fatty and other, higher-density tissue. The latter creates a “white blizzard” effect against the dark background, making it challenging to differentiate between healthy tissue and potential malignancies.

That’s a particular problem for the roughly 40% of women who have a higher concentration of higher-density tissue. One alternative is to use molecular breast imaging (MBI), which involves imaging the distribution of a radioactive tracer that has been intravenously injected into a patient. This tracer, however, exposes patients to a higher (albeit still safe) dose of radiation than with a mammogram, which means that patients have to be imaged for a long time to get enough signal.

A solution could lie with the UK-based firm Kromek, which is using cadmium zinc telluride (CZT) semiconductors that produce a measurable voltage pulse from just a single gamma-ray photon. As well as being very efficient over a broad range of X-ray and gamma-ray photon energies, CZTs can be integrated onto small chips operating at room temperature. Preliminary results with Kromek’s ultralow-dose and ultrafast detectors show they work with barely one-eighth of the amount of tracer used in traditional MBI techniques.

Faster and better Breast cancer is often detected with X-rays using mammography but it can be tricky to spot tumours in areas where the breast tissue is dense. One alternative is molecular breast imaging (MBI), which uses a radioactive tracer to “light up” areas of cancer in the breast and works even in dense breast tissue. However, MBI currently exposes patients to more radiation than with mammography, which is where cadmium zinc telluride (CZT) semiconductors, developed by the UK firm Kromek, could help. They produce a measurable voltage pulse from just a single gamma-ray photon, opening the door for “ultralow-dose MBI” – where much clearer images are created with barely one-eighth of the radiation. (Courtesy: Kromek)

“Our prototypes have shown promising results,” says Alexander Cherlin, who is principal physicist at Kromek. The company is now designing and building a full-size prototype of the camera as part of Innovate UK’s £2.5m “ultralow-dose” MBI project, which runs until the end of 2025. It involves Kromek working with hospitals in Newcastle along with researchers at University College London and the University of Newcastle.

Microscopy matters

The final application of quantum sensors to medicine lies in microscopy, which these days no longer just means visible light but everything from Raman and two-photon microscopy to fluorescence lifetime imaging and multiphoton microscopy. These techniques allow samples to be imaged at different scales and speeds, but they are all reaching various technological limits.

Quantum technologies can help us break those limits. Researchers at the University of Glasgow, for example, are among those to have used pairs of entangled photons to enhance microscopy through “ghost imaging”. One photon in each pair interacts with a sample, with the image built up by detecting the effect on its entangled counterpart. The technique avoids the noise created when imaging with low levels of light (Sci. Adv. 6 eaay2652).

Researchers at the University of Strathclyde, meanwhile, have used nanodiamonds to get around the problem that dyes added to biological samples eventually stop fluorescing. Known as photobleaching, the effect prevents samples from being studied after a certain time (Roy. Soc. Op. Sci. 6 190589). In the work, samples could be continually imaged and viewed using two-photon excitation microscopy with a 10-fold increase in resolution.

Looking to the future

But despite the great potential of quantum sensors in medicine, there are still big challenges before the technology can be deployed in real, clinical settings. Scalability – making devices reliably, cheaply and in sufficient numbers – is a particular problem. Fortunately, things are moving fast. Even since the Quantum for Life report came out late in 2024, we’ve seen new companies being founded to address these problems.

One such firm is Bristol-based RobQuant, which is developing solid-state semiconductor quantum sensors for non-invasive magnetic scanning of the brain. Such sensors, which can be built with the standard processing techniques used in consumer electronics, allow for scans on different parts of the body. RobQuant claims its sensors are robust and operate at ambient temperatures without requiring any heating or cooling.

Agnethe Seim Olsen, the company’s co-founder and chief technologist, believes that making quantum sensors robust and scalable is vital if they are to be widely adopted in healthcare. She thinks the UK is leading the way in the commercialization of such sensors and will benefit from the latest phase of the country’s quantum hubs. Bringing academia and businesses together, they include the £24m Q-BIOMED biomedical-sensing hub led by University College London and the £27.5m QuSIT hub in imaging and timing led by the University of Birmingham.

Q-BIOMED is, for example, planning to use both single-crystal diamond and nanodiamonds to develop and commercialize sensors that can diagnose and treat diseases such as cancer and Alzheimer’s at much earlier stages of their development. “These healthcare ambitions are not restricted to academia, with many startups around the globe developing diamond-based quantum technology,” says Markham at Element Six.

As with the previous phases of the hubs, supporting further research encourages start-ups – researchers from the forerunner of the QuSIT hub, for example, went on to set up Cerca Magnetics. The growing maturity of some of these quantum sensors will undoubtedly attract existing medical-technology companies. The next five years will be a busy and exciting time for the burgeoning use of quantum sensors in healthcare.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

The post How quantum sensors could improve human health and wellbeing appeared first on Physics World.

Handheld device captures airborne signs of disease

16 juin 2025 à 16:00

A sensitive new portable device can detect gas molecules associated with certain diseases by condensing dilute airborne biomarkers into concentrated liquid droplets. According to its developers at the University of Chicago in the US, the device could be used to detect airborne viruses or bacteria in hospitals and other public places, improve neonatal care, and even allow diabetic patients to read glucose levels in their breath, to list just three examples.

Many disease biomarkers are only found in breath or ambient air at levels of a few parts per trillion. This makes them very difficult to detect compared with biomarkers in biofluids such as blood, saliva or mucus, where they are much more concentrated. Traditionally, reaching a high enough sensitivity required bulky and expensive equipment such as mass spectrometers, which are impractical for everyday environments.

Rapid and sensitive identification

Researchers led by biophysicist and materials chemist Bozhi Tian have now developed a highly portable alternative. Their new Airborne Biomarker Localization Engine (ABLE) can detect both non-volatile and volatile molecules in air in around 15 minutes.

This handheld device comprises a cooled condenser surface, an air pump and microfluidic enrichment modules, and it works in the following way. First, air that (potentially) contains biomarkers flows into a cooled chamber. Within this chamber, Tian explains, the supersaturated moisture condenses onto nanostructured superhydrophobic surfaces and forms droplets. Any particles in the air thus become suspended inside the droplets, which means they can be analysed using conventional liquid-phase biosensors such as colorimetric test strips or electrochemical probes. This allows them to be identified rapidly with high sensitivity.

Tiny babies and a big idea

Tian says the inspiration for this study, which is detailed in Nature Chemical Engineering, came from a visit he made to a neonatal intensive care unit (NICU) in 2021. “Here, I observed the vulnerability and fragility of preterm infants and realized how important non-invasive monitoring is for them,” Tian explains.

“My colleagues and I envisioned a contact-free system capable of detecting disease-related molecules in air. Our biggest challenge was sensitivity and initial trials failed to detect key chemicals,” he remembers. “We overcame this problem by developing a new enrichment strategy using nanostructured condensation and molecular sieves while also exploiting evaporation physics to stabilize and concentrate the captured biomarkers.”

The technology opens new avenues for non-contact, point-of-care diagnostics, he tells Physics World. Possible near-term applications include the early detection of ailments such as inflammatory bowel disease (IBD), which can lead to markers of inflammation appearing in patients’ breath. Respiratory disorders and neurodevelopmental conditions in babies could be detected in a similar way. Tian suggests the device could even be used for mental health monitoring via volatile stress biomarkers (again found in breath) and for monitoring air quality in public spaces such as schools and hospitals.

“Thanks to its high sensitivity and low cost (of around $200), ABLE could democratize biomarker sensing, moving diagnostics beyond the laboratory and into homes, clinics and underserved areas, allowing for a new paradigm in preventative and personalized medicine,” he says.

Widespread applications driven by novel physics

The University of Chicago scientists’ next goal is to further miniaturize and optimize the ABLE device. They are especially interested in enhancing its sensitivity and energy efficiency, as well as exploring the possibility of real-time feedback through closed-loop integration with wearable sensors. “We also plan to extend its applications to infectious disease surveillance and food spoilage detection,” Tian reveals.

The researchers are currently collaborating with health professionals to test ABLE in real-world settings such as NICUs and outpatient clinics. In the future, though, they also hope to explore novel physical processes that might improve the efficiency at which devices like these can capture hydrophobic or nonpolar airborne molecules.

According to Tian, the work has unveiled “unexpected evaporation physics” in dilute droplets with multiple components. Notably, they have seen evidence that such droplets defy the limit set by Henry’s law, which states that at constant temperature, the amount of a gas that dissolves in a liquid of a given type and volume is directly proportional to the partial pressure of the gas in equilibrium with the liquid. “This opens a new physical framework for such condensation-driven sensing and lays the foundation for widespread applications in the non-contact diagnostics, environmental monitoring and public health applications mentioned,” Tian says.
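
For reference, Henry’s law in its usual form says that the equilibrium concentration of a dissolved gas is proportional to its partial pressure above the liquid:

```latex
\[
C = k_{\mathrm{H}}\, p ,
\]
```

where C is the dissolved concentration, p the partial pressure and k_H the temperature-dependent Henry’s law constant for that gas–solvent pair. The Chicago team’s evidence suggests their multi-component microdroplets capture airborne molecules beyond what this simple proportionality would allow.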

The post Handheld device captures airborne signs of disease appeared first on Physics World.

‘Can’t get you out of my head’: using earworms to teach physics

16 juin 2025 à 12:00

When I’m sitting in my armchair, eating chocolate and finding it hard to motivate myself to exercise, a little voice in my head starts singing “You’ve got to move it, move it” to the tune of will.i.am’s “I like to move it”. The positive reinforcement and joy of this song as it plays on a loop in my mind propels me out of my seat and onto the tennis court.

Songs like this are earworms – catchy pieces of music that play on repeat in your head long after you’ve heard them. Some tunes are more likely to become earworms than others, and there are a few reasons for this.

To truly hook you in, the music must be repetitive so that the brain can easily finish it. Generally, it is also simple, and has a rising and falling pitch shape. While you need to hear a song several times for it to stick, once it’s wormed its way into your head, some lyrics become impossible to escape – “I just can’t get you out of my head”, as Kylie would say.

In his book Musicophilia, neurologist Oliver Sacks describes these internal music loops as “the brainworms that arrive unbidden and leave only on their own time”. They can fade away, but they tend to lie in wait, dormant until an association sets them off again – like when I need to exercise. But for me as a physics teacher for 16–18 year olds, this fact is more than just of passing interest: I use it in the classroom.

There are some common mistakes students make in physics, so I play songs in class that are linked (sometimes tenuously) to the syllabus to remind them to check their work. Before I continue, I should add that I’m not advocating rote learning without understanding – the explanation of the concept must always come first. But I have found the right earworm can be a great memory aid.

I’ve been a physics teacher for a while, and I’ll admit to a slight bias towards the music of the 1980s and 1990s. I play David Bowie’s “Changes” (which the students associate with the movie Shrek) when I ask the class to draw a graph, to remind them to check if they need to process – or change – the data before plotting. The catchy “Ch…ch…ch…changes” is now the irritating tune they hear when I look over their shoulders to check if they have found, for example, the sine values for Snell’s law, or the square root of tension if looking at the frequency of a stretched wire.

When describing how to verify the law of conservation of momentum, students frequently leave out the mechanism that makes the two trollies stick together after the collision. Naturally, this is an opportunity for me to play Roxy Music’s “Let’s stick together”.

Meanwhile, “Ice ice baby” by Vanilla Ice is obviously the perfect earworm for calculating the specific latent heat of fusion of ice, which is when students often drop parts of the equations because they forget that the ice both melts and changes temperature.

In the experiment where you charge a gold leaf electroscope by induction, pupils often fail to do the four steps in the correct order. I therefore play Shirley Bassey’s “Goldfinger” to remind pupils to earth the disc with their finger. Meanwhile, Spandau Ballet’s bold and dramatic “Gold” is reserved for Rutherford’s gold foil experiment.

“Pump up the volume” by M|A|R|R|S or Ireland’s 1990 football song “Put ‘em under pressure” are obvious candidates for investigating Boyle’s law. I use “Jump around” by House of Pain when causing a current-carrying conductor in a magnetic field to experience a force.

Some people may think that linking musical lyrics and physics in this way is a waste of time. However, it also introduces some light-hearted humour into the classroom – and I find teenagers learn better with laughter. The students enjoy mocking my taste in music and coming up with suitable (more modern) songs, and we laugh together about the tenuous links I’ve made between lyrics and physics.

More importantly, this is how my memory works. I link phrases or lyrics to the important things I need to remember. Auditory information functions as a strong mnemonic. I am not saying that this works for everyone, but I have heard my students sing the lyrics to each other while studying in pairs or groups. I smile to myself as I circulate the room when I hear them saying phrases like, “No you forgot mass × specific latent heat – remember it’s ‘Ice, ice baby!’ ”.

On their last day of school – after two years of playing these tunes in class – I hold a quiz where I play a song and the students have to link it to the physics. It turns into a bit of a sing-along, with chocolate for prizes, and there are usually a few surprises in there too. Have a go yourself with the quiz below.

Earworms quiz

Can you match the following eight physics laws or experiments with the right song? If you can’t remember the songs, we’ve provided links – but beware, they are earworms!

Law or experiment

  1. Demonstrating resonance with Barton’s pendulums
  2. Joule’s law
  3. The latent heat of vaporization of water
  4. Measuring acceleration due to gravity
  5. The movement caused when a current is applied to a coil in a magnetic field
  6. Measuring the pascal
  7. How nuclear fission releases sustainable amounts of energy
  8. Plotting current versus voltage for a diode in forward bias

Artist and song

Answers will be revealed next month – just come back to this article to find out whether you got them all right.

The post ‘Can’t get you out of my head’: using earworms to teach physics appeared first on Physics World.

Yale researcher says levitated spheres could spot neutrinos ‘within months’

14 juin 2025 à 02:18

The Helgoland 2025 meeting, marking 100 years of quantum mechanics, has featured a lot of mind-bending fundamental physics, quite a bit of which has left me scratching my head.

So it was great to hear a brilliant talk by David Moore of Yale University about some amazing practical experiments using levitated, trapped microspheres as quantum sensors to detect what he calls the “invisible” universe.

If the work sounds familiar to you, that’s because Moore’s team won a Physics World Top 10 Breakthrough of the Year award in 2024 for using their technique to detect the alpha decay of individual lead-212 atoms.

Speaking in the Nordseehalle on the island of Helgoland, Moore explained the next stage of the experiment, which could see it detect neutrinos “in a couple of months” at the earliest – and “at least within a year” at the latest.

Of course, physicists have already detected neutrinos, but it’s a complicated business, generally involving huge devices in deep underground locations where background signals are minimized. Yale’s set-up is much cheaper, smaller and more convenient, involving no more than a couple of lab benches.

As Moore explained, he and his colleagues first trap silica spheres at low pressure, before removing excess electrons to electrically neutralize them. They then stabilize the spheres’ rotation before cooling them to microkelvin temperatures.

In the work that won the Physics World award last year, the team used samples of radon-220, which decays first into polonium-216 and then lead-212. These nuclei embed themselves in the silica spheres, which recoil when nuclei in the lead-212 decay chain release alpha particles (Phys. Rev. Lett. 133 023602).

Moore’s team is able to measure the tiny recoil by watching how light scatters off the spheres. “We can see the force imparted by a subatomic particle on a heavier object,” he told the audience at Helgoland. “We can see single nuclear decays.”
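
A quick order-of-magnitude estimate shows just how small that recoil is. The sphere size, density and alpha-particle energy in the sketch below are typical values chosen for illustration, not the exact parameters of the Yale experiment.

```python
import math

# Illustrative parameters (assumptions, not the experiment's exact values)
E_alpha = 6.0e6 * 1.602e-19   # alpha-particle kinetic energy: ~6 MeV, in joules
m_alpha = 6.64e-27            # alpha-particle mass, kg
diameter = 3e-6               # silica microsphere diameter, m
rho_silica = 2200.0           # density of silica, kg/m^3

# Momentum carried away by the alpha particle (non-relativistic)
p_alpha = math.sqrt(2 * m_alpha * E_alpha)

# Sphere mass and recoil speed from momentum conservation
m_sphere = rho_silica * (4 / 3) * math.pi * (diameter / 2) ** 3
v_recoil = p_alpha / m_sphere

print(f"Sphere mass: {m_sphere:.2e} kg")
print(f"Recoil speed: {v_recoil * 1e6:.1f} micrometres per second")
```

Under these assumptions the sphere recoils at only a few micrometres per second – tiny, but resolvable with the optical read-out of a levitated, cooled sphere.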

Now the plan is to extend the experiment to detect neutrinos. These won’t (at least initially) be the neutrinos that stream through the Earth from the Sun or even those from a nuclear reactor.

Instead, the idea will be to embed the spheres with nuclei that undergo beta decay, releasing a much lighter neutrino in the process. Moore says the team will do this within a year and, one day, potentially even use it to spot dark matter.

“We are reaching the quantum measurement regime,” he said. It’s a simple concept, even if the name – “Search for new Interactions in a Microsphere Precision Levitation Experiment” (SIMPLE) – isn’t.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

The post Yale researcher says levitated spheres could spot neutrinos ‘within months’ appeared first on Physics World.

Worm slime could inspire recyclable polymer design

13 juin 2025 à 09:53

The animal world – including some of its ickiest parts – never ceases to amaze. According to researchers in Canada and Singapore, velvet worm slime contains an ingredient that could revolutionize the design of high-performance polymers, making them far more sustainable than current versions.

“We have been investigating velvet worm slime as a model system for inspiring new adhesives and recyclable plastics because of its ability to reversibly form strong fibres,” explains Matthew Harrington, the McGill University chemist who co-led the research with Ali Miserez of Nanyang Technological University (NTU). “We needed to understand the mechanism that drives this reversible fibre formation, and we discovered a hitherto unknown feature of the proteins in the slime that might provide a very important clue in this context.”

The velvet worm (phylum Onychophora) is a small, caterpillar-like creature that lives in humid forests. Although several organisms, including spiders and mussels, produce protein-based slimy material outside their bodies, the slime of the velvet worm is unique. Produced from specialized papillae on each side of the worm’s head, and squirted out in jets whenever the worm needs to capture prey or defend itself, it quickly transforms from a sticky, viscoelastic gel into stiff, glassy fibres as strong as nylon.

When dissolved in water, these stiff fibres return to their biomolecular precursors. Remarkably, new fibres can then be drawn from the solution – implying that the instructions for fibre self-assembly are “encoded” within the precursors themselves, Harrington says.

High-molecular-weight protein identified

Previously, the molecular mechanisms behind this reversibility were little understood. In the present study, however, the researchers used protein sequencing and the AI-guided protein structure prediction algorithm AlphaFold to identify a specific high-molecular-weight protein in the slime. Known as a leucine-rich repeat, this protein has a structure similar to that of a cell surface receptor protein called a Toll-like receptor (TLR).

In biology, Miserez explains, this type of receptor is involved in immune system response. It also plays a role in embryonic or neural development. In the worm slime, however, that’s not the case.

“We have now unveiled a very different role for TLR proteins,” says Miserez, who works in NTU’s materials science and engineering department. “They play a structural, mechanical role and can be seen as a kind of ‘glue protein’ at the molecular level that brings together many other slime proteins to form the macroscopic fibres.”

Miserez adds that the team found this same protein in different species of velvet worms that diverged from a common ancestor nearly 400 million years ago. “This means that this different biological function is very ancient from an evolutionary perspective,” he explains.

“It was very unusual to find such a protein in the context of a biological material,” Harrington adds. “By predicting the protein’s structure and its ability to bind to other slime proteins, we were able to hypothesize its important role in the reversible fibre formation behaviour of the slime.”

The team’s hypothesis is that the reversibility of fibre formation is based on receptor-ligand interactions between several slime proteins. While Harrington acknowledges that much work remains to be done to verify this, he notes that such binding is a well-described principle in many groups of organisms, including bacteria, plants and animals. It is also crucial for cell adhesion, development and innate immunity. “If we can confirm this, it could provide inspiration for making high-performance non-toxic (bio)polymeric materials that are also recyclable,” he tells Physics World.

The study, which is detailed in PNAS, was mainly based on computational modelling and protein structure prediction. The next step, say the McGill researchers, is to purify or recombinantly express the proteins of interest and test their interactions in vitro.

The post Worm slime could inspire recyclable polymer design appeared first on Physics World.

Helgoland researchers seek microplastics and microfibres in the sea

12 juin 2025 à 23:05

I’ve been immersed in quantum physics this week at the Helgoland 2025 meeting, which is being held to mark Werner Heisenberg’s seminal development of quantum mechanics on the island 100 years ago.

But when it comes to science, Helgoland isn’t only about quantum physics. It’s also home to an outpost of the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research, which is named after the German scientist who was the brains behind continental drift.

Dating back to 1892, the Biological Institute Helgoland (BAH) has about 80 permanent staff. They include Sebastian Primpke, a polymer scientist who studies the growing danger that microplastics and microfibres pose to the oceans.

Microplastics, which are any kind of small plastic materials, generally range in size from one micron to about 5 mm. They are a big danger for fish and other forms of marine life, as Marric Stephens reported in this recent feature.

Primpke studies microplastics using biofilms attached to a grid immersed in a tank containing water piped continuously in from the North Sea. The tank is covered with a lid to keep samples in the dark, mimicking underwater conditions.

Deep-sea spying A researcher looks at electron micrographs to spot microfibres in seawater samples. (Courtesy: Matin Durrani)

He and his team periodically take samples of the films out of the tank, studying them in the lab using infrared and Raman microscopes. They’re able to obtain information such as the length, width, area and perimeter of individual microplastic particles, as well as how convex or concave they are.

Other researchers at the Helgoland lab study microfibres, which can come from cellulose and artificial plastics, using electron microscopy. You can find out more information about the lab’s work here.

Primpke, who is a part-time firefighter, has lived and worked on Helgoland for a decade. He says it’s a small community, where everyone knows everyone else, which has its good and bad sides.

With only 1500 residents on the island, which lies 50 km from the mainland, finding good accommodation can be tricky. But with so many tourists, there are more amenities than you’d expect of somewhere of that size.

The post Helgoland researchers seek microplastics and microfibres in the sea appeared first on Physics World.

Exploring careers in healthcare for physicists and engineers

12 juin 2025 à 15:55

In this episode of the Physics World Weekly podcast we explore the career opportunities open to physicists and engineers looking to work within healthcare – as medical physicists or clinical engineers.

Physics World’s Tami Freeman is in conversation with two early-career physicists working in the UK’s National Health Service (NHS). They are Rachel Allcock, a trainee clinical scientist at University Hospitals Coventry and Warwickshire NHS Trust, and George Bruce, a clinical scientist at NHS Greater Glasgow and Clyde. We also hear from Chris Watt, head of communications and public affairs at IPEM, about the new IPEM careers guide.

This episode is supported by Radformation, which is redefining automation in radiation oncology with a full suite of tools designed to streamline clinical workflows and boost efficiency. At the centre of it all is AutoContour, a powerful AI-driven autocontouring solution trusted by centres worldwide.

The post Exploring careers in healthcare for physicists and engineers appeared first on Physics World.

Quantum island: why Helgoland is a great spot for fundamental thinking

12 juin 2025 à 02:00

Jack Harris, a quantum physicist at Yale University in the US, has a fascination with islands. He grew up on Martha’s Vineyard, an island just south of Cape Cod on the east coast of America, and believes that islands shape a person’s thinking. “Your world view has a border – you’re on or you’re off,” Harris said on a recent episode of the Physics World Stories podcast.

It’s perhaps not surprising, then, that Harris is one of the main organizers of a five-day conference taking place this week on Helgoland, where Werner Heisenberg discovered quantum mechanics exactly a century ago. Heisenberg had come to the tiny, windy, pollen-free island, which lies 50 km off the coast of Germany, in June 1925, to seek respite from the hay fever he was suffering from in Göttingen.

According to Heisenberg’s 1971 book Physics and Beyond, he supposedly made his breakthrough early one morning that month. Unable to sleep, Heisenberg left his guest house just before daybreak and climbed a tower at the top of the island’s southern headland. As the Sun rose, Heisenberg pieced together the curious observations of frequencies of light that materials had been seen to absorb and emit.

Where it all began This memorial stone and plaque sits at the spot of Werner Heisenberg’s achievements 100 years ago. (Courtesy: Matin Durrani)

While admitting that the real history of the episode isn’t as simple as Heisenberg made out, Harris believes it’s nevertheless a “very compelling” story. “It has a place and a time: an actual, clearly defined, quantized discrete place – an island,” Harris says. “This is a cool story to have as part of the fabric of [the physics] community.” Hardly surprising, then, that more than 300 physicists, myself included, have travelled from across the world to the Helgoland 2025 meeting.

Much time has been spent so far at the event discussing the fundamentals of quantum mechanics, which might seem a touch self-indulgent and esoteric given the burgeoning  (and financially lucrative) applications of the subject. Do we really need to concern ourselves with, say, non-locality, the meaning of measurement, or the nature of particles, information and randomness?

Why did we need to hear from Juan Maldacena from the Institute for Advanced Study in Princeton getting so excited talking about information loss and black holes? (Fun fact: a “white” black hole the size of a bacterium would, he claimed, be as hot as the Sun and emit so much light we could see it with the naked eye.)

But the fundamental questions are fascinating in their own right. What’s more, if we want to build, say, a quantum computer, it’s not just a technical and engineering endeavour. “To make it work you have to absorb a lot of the foundational topics of quantum mechanics,” says Harris, pointing to challenges such as knowing what kinds of information alter how a system behaves. “We’re at a point where real-word practical things like quantum computing, code breaking and signal detection hinge on our ability to understand the foundational questions of quantum mechanics.”

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

The post Quantum island: why Helgoland is a great spot for fundamental thinking appeared first on Physics World.

‘The Trump uncertainty principle’ is destroying the position and momentum of US science

11 juin 2025 à 12:00

The Heisenberg uncertainty principle holds things together. Articulated by the German physicist Werner Heisenberg almost a century ago, it remains the foundation of the physical world. Its name suggests the rule of the vague and temporary. But the principle is quantitative. A high uncertainty about the position of, say, an electron is compensated by a low uncertainty in its momentum. The principle is vital in helping us to understand chemical bonding, which is what holds matter together.
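
For the record, the quantitative statement is the familiar inequality

```latex
\[
\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2},
\]
```

so the product of the two uncertainties can never fall below half the reduced Planck constant: squeeze one and the other must grow.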

The Trump uncertainty principle, which I hereby coin, does the opposite; it tears things apart. Having taken effect on the US president’s inauguration day back in January, it almost immediately began damaging scientific culture. Researchers can no longer be sure if their grants will be delayed or axed – or if new proposals are even in the ballpark of the potentially fundable. Work is being stalled, erased or doomed, especially in the medical and environmental sciences.

The Trump uncertainty principle, or TUP for short, is implemented in several ways. One is through new policies at funding agencies like the National Science Foundation (NSF) and the National Institutes of Health (NIH). Those new policies, the administration claims, are designed to promote “science, national health, prosperity, and defense”. Despite being exactly the same as the old policies, they’ve been used to justify the cancellation of 400 grants at the NSF alone and to hollow out the NSF, NIH and other key US science funding agencies.

The Trump administration has sought to terminate billions of dollars worth of grants at Harvard University alone. It wants to ban US universities from recruiting international students and has even been cancelling the visas of current students, many of whom are enrolled in the sciences. It also wants to vet what prospective students have posted on social media, despite Trump’s supposed support for free speech. Harvard is already suing the Administration over these actions.

Back in March the Office for Civil Rights of the US Department of Education sent letters to Harvard and 59 other universities, including Columbia, Cornell, Princeton, Stanford and Yale, accusing them of what it considers “discrimination and harassment”. The office threatened “potential enforcement actions if institutions do not fulfil their obligations under Title VI of the Civil Rights Act”, which “prohibits discrimination against or otherwise excluding individuals on the basis of race, color, or national origin”.

“Saddening, traumatic and unnecessary”

But the impact of the Trump uncertainty principle reaches far beyond these 60 institutions because it is destroying the bonding of these institutions through its impact on the labs, institutions and companies that collaborate with them. It is also badly damaging the hiring of postdocs, the ability to attract undergraduates, the retention of skilled support staff, and laboratory maintenance. Most disruptively of all, the Trump uncertainty principle provides no explanation for why or where it shows up, or what it is going to be applied to.

Stony Brook University, where I teach, is a research incubator not on the list of 60 institutions of higher learning threatened by the Department of Education. But many of my colleagues have had their NIH, NSF or Department of Energy funding paused, left unrenewed, or suspended without explanation, and nobody could tell them whether or when it might be restored or why it was stopped in the first place.

Support for 11 graduate students at Stony Brook was terminated. Though it was later restored after months of uncertainty, nobody knows if it might happen again. I, too, had a grant stopped, though it was due to a crude error and the money started up again. Everyone in the sciences I’ve spoken to – faculty, staff and students – is affected in one way or another by the Trump uncertainty principle even if they haven’t lost funding or jobs.

It is easy to sound hyperbolic. It is possible that Trump’s draconian cuts may be reversed, that the threats won’t be implemented, that they won’t stand up in court, and that the Trump administration will actually respect the court decisions. But that’s not the point. You can’t plan ahead if you are unsure how much money you have, or even why you may be in the administration’s cross-hairs. That’s what is most destructive to US science. It’s also saddening, traumatic and unnecessary.

Maintaining any culture, including an academic research one, requires supporting an active and ongoing dynamic between past, present and future. It consists of an inherited array of resources, a set of ideas about how to go forward, and existing habits and practices about how best to move from one to the other. The Trump administration targets all three. It has slashed budgets and staff of long-standing scientific institutions and redirected future-directed scientific programmes at its whim. The Trump uncertainty principle also comes into play by damaging the existing habits and practices in the present.

The critical point

In his 2016 book The Invention of Science, David Wootton – a historian at the University of York in the UK – defined scientific culture as being “innovative, combative, competitive, but at the same time obsessed with accuracy”. Science isn’t the only kind of culture, he admitted, but it’s “a practical and effective one if your goal is the acquisition of new knowledge”. It seeks to produce knowledge about the world that can withstand criticism – “bomb-proof”, as Wootton put it.

Bomb-proof knowledge is what Trump fears the most, and he is undermining it by injecting uncertainty into the culture that produces it. The administration says that the Trump uncertainty principle is grounded in the fight against financial waste, fraud and discrimination. But proof of the principle is missing.

How do you save money by ending, say, a programme aimed at diagnosing tuberculosis? Why does a study of maternal health promote discrimination? What does research into Alzheimer’s disease have to do with diversity? Has ending scientific study of climate change got anything to do with any of this?

The justifications are not credible, and their lack of credibility is a leading factor in damaging scientific culture. Quite simply, the Trump uncertainty principle is destroying the position and momentum of US science.

The post ‘The Trump uncertainty principle’ is destroying the position and momentum of US science appeared first on Physics World.

Sound waves control droplet movement in microfluidic processor

11 juin 2025 à 10:00

Thanks to a new sound-based control system, a microfluidic processor can precisely manipulate droplets with an exceptionally broad range of volumes. The minimalist device is compatible with many substrates, including metals, polymers and glass. It is also biocompatible, and its developers at the Hong Kong Polytechnic University say it could be a transformative tool for applications in biology, chemistry and lab-on-a-chip systems.

Nano- and microfluidic systems use the principles of micro- and nanotechnology, biochemistry, engineering and physics to manipulate the behaviour of liquids on a small scale. Over the past few decades, they have revolutionized fluid processing, enabling researchers in a host of fields to perform tasks on chips that would previously have required painstaking test-tube-based work. The benefits include real-time, high-throughput testing for point-of care diagnostics using tiny sample sizes.

Microfluidics also play a role in several everyday technologies, including inkjet printer heads, pregnancy tests and, as the world recently discovered, tests for viruses like SARS-CoV-2, which causes COVID-19. Indeed, the latter example involves a whole series of fluidic operations, as viral RNA is extracted from swabs, amplified and quantified using the polymerase chain reaction (PCR).

In each of these operations, it is vital to avoid contaminating the sample with other fluids. Researchers have therefore been striving to develop contactless techniques – for instance, those that rely on light, heat or magnetic and electric fields to move the fluids around. However, such approaches often require strong fields or high temperatures that can damage delicate chemical or biological samples.

In recent years, scientists have experimented with using acoustic fields instead. However, this method was previously found to work only for certain types of fluids, and with a limited volume range from hundreds of nanolitres (nL) to tens of microlitres (μL).

Versatile, residue-free fluid control

The new sound-controlled fluidic processor (SFP) developed by Liqiu Wang and colleagues is not bound by this limit. Thanks to an ultrasonic transducer and a liquid-infused slippery surface that minimizes adhesion of the samples, it can manipulate droplets with volumes between 1 nL and 3000 μL. “By adjusting the sound source’s position, we can shape acoustic pressure fields to push, pull, mix or even split droplets on demand,” explains Wang. “This method ensures versatile, residue-free fluid control.”

The technique’s non-invasive nature and precision make it ideal for point-of-care diagnostics, drug screening and automated biochemical assays, Wang adds. “It could also help streamline reagent delivery in high-throughput systems,” he tells Physics World.

A further use, Wang suggests, would be fundamental biological applications such as organoid research. Indeed, the Hong Kong researchers demonstrated this by culturing mouse primary liver organoids and screening for molecules like verapamil, a drug that can protect the liver by preventing harmful calcium buildup.

Wang and colleagues, who report their work in Science Advances, say they now plan to integrate their sound-controlled fluidic processor into fully automated, programmable lab-on-a-chip systems. “Future steps include miniaturization and incorporating multiple acoustic sources for parallel operations, paving the way for next-generation diagnostics and chemical processing,” Wang reveals.

The post Sound waves control droplet movement in microfluidic processor appeared first on Physics World.

Quartet of Nobel laureates sign Helgoland’s ‘gold book’

11 juin 2025 à 00:04

The first session at the Helgoland 2025 meeting marking the centenary of quantum mechanics began with the four Nobel-prize-winning physicists in attendance being invited on stage to sign the island’s memorial “gold book” and add a short statement to it.

Anton Zeilinger and Alain Aspect, who shared the 2022 Nobel prize with John Clauser for their work on entanglement and quantum information science, were first up on stage. They were followed by Serge Haroche and David Wineland, who shared the 2012 prize for their work on measuring and manipulating quantum systems.

During the coffee break, the book was placed on display for participants to view and add their own signatures if they wished. Naturally, being the nosey person I am, I was keen to see what the Nobel laureates had written.

Photo of four Nobel laureates on stage at Helgoland 2025.
Signing ceremony (From left to right) Anton Zeilinger, Alain Aspect, Serge Haroche and David Wineland troop on stage to sign the Helgoland book. (Courtesy: Matin Durrani)

Here, for the record, are their comments.

“Great sailing. Great people.” Anton Zeilinger

“C’est une émotion de se trouver à l’endroit où a commencé la mécanique quantique.” Alain Aspect [It’s an emotional feeling to find yourself in the place where quantum mechanics started.]

“Thank you for your warm welcome in Helgoland, an island which is known by all quantum physicists.” Serge Haroche

“An honor to be here.” David Wineland

All the comments made sense to me apart from that of Zeilinger, so after the evening’s panel debate on the foundations of quantum mechanics, in which he had taken part, I asked him what the reference to sailing was all about.

It turns out that Zeilinger, like Albert Einstein before him, is a keen sailor in his spare time. He and his wife had come to Helgoland three days before the conference began to see the final stages of a North Sea regatta that takes place in late spring every year.

In fact, Zeilinger explained that the Helgoland meeting had to start on a Tuesday as the day before the venue was host to the regatta’s awards ceremony.

As for the flag, it is that of Helgoland, with the green representing the land, the red for the island’s cliffs and the white for the sand on the beaches.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

The post Quartet of Nobel laureates sign Helgoland’s ‘gold book’ appeared first on Physics World.

Conference marking 100 years of quantum mechanics starts in Hamburg

10 juin 2025 à 15:04

“This is a birthday party! Happy 100th birthday quantum mechanics,” said Jack Harris from Yale University in the US to whoops and cheers in the banqueting suite of the Hotel Atlantic in Hamburg, Germany.

Harris was addressing the 300 or so physicists attending the Helgoland 2025 conference, which is taking place from 9–14 June to mark Werner Heisenberg’s seminal work on quantum mechanics on the island of Helgoland in the North Sea exactly 100 years ago.

Photo of delegates at Helgoland 2025
Time to celebrate Participants gather ahead of the conference buffet dinner. (Courtesy: Matin Durrani)

“Heisenberg travelled to Helgoland to escape terrible allergies,” Harris told delegates, reminding them of how the 23-year-old had taken leave of absence from his postdoc supervisor Max Born in Göttingen for the fresh air of the treeless island. “His two weeks there was one of the watershed events in the discovery of quantum mechanics.”

Harris admitted, though, that it’s open to debate if Heisenberg’s fortnight on the island was as significant as is often made out, joking that – like quantum mechanics itself – “there are many interpretations that one can apply to this occasion”.

In one interpretation I hadn’t considered before, Harris pointed out that what might be regarded as an impediment or a disability – Heisenberg’s severe hayfever – turned out to be a positive force for science. “It actually brought him to Helgoland in the first place.”

Harris also took the opportunity to remind the audience of the importance of mentoring and helping each other in science. “How we treat others is as important as what we accomplish”, he said. “Another high standard to keep in mind is that science needs to be international and science needs to be inclusive. I am preaching to the choir but this is important to say out loud.”

Photo of Philip Ball at a conference
Destination Helgoland Science writer Philip Ball addresses delegates on the early years of quantum mechanics. (Courtesy: Matin Durrani)

Harris’s opening remarks were followed by a series of three talks. First was Douglas Stone from Yale University, who discussed the historical development of quantum science.

Next up was philosopher of science Elise Crull from the City University of New York, who looked into some of the early debates about the philosophical implications of quantum physics – including the pioneering contributions of Grete Hermann, who Sidney Perkowitz discussed in his recent feature for Physics World.

The final after-dinner speaker was science journalist Philip Ball, who explained how quantum theory developed in 1924–25 in the run-up to Helgoland. He focused, as he did in his recent feature for Physics World, on work carried out by Niels Bohr and others that turned out to be wrong but showed the intense turmoil in physics on the brink of quantum mechanics.

Helgoland 2025 features a packed five days of talks, poster sessions and debates – on the island of Helgoland itself – covering the past, present and future of quantum physics, with five Nobel laureates in attendance. In fact, Harris and his fellow scientific co-organizers – Časlav Brukner, Steven Girvin and Florian Marquardt – had so much to squeeze in that they could easily have “filled two or three solid programmes with people from whom we would have loved to hear”.

I’ll see over the next few days on Helgoland if they made the right speaker choices, but things have certainly got off to a good start.

• Elise Crull is appearing on the next episode of Physics World Live on Tuesday 17 June. You can register for free at this link.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

 

The post Conference marking 100 years of quantum mechanics starts in Hamburg appeared first on Physics World.

Beyond the classroom: a high-school student’s week at the Institute of Physics

10 juin 2025 à 11:28

Year 12 students (aged 16 or 17) often do work experience while studying for their A-levels. It can provide valuable insights into what the working world is like and showcase what potential career routes are available. And that’s exactly why I requested to do my week of work experience at the Institute of Physics (IOP).

I’m studying maths, chemistry and physics, with a particular interest in the latter. I’m hoping to study physics or chemical physics at university so was keen to find out how the subject can be applied to business, and get a better understanding of what I want to do in the future. The IOP was therefore a perfect placement for me and here are a few highlights of what I did.

Monday

My week at the IOP’s headquarters in London began with a brief introduction to the Institute with the head of science and innovation, Anne Crean, and Katherine Platt, manager for the International Year of Quantum Science and Technology (IYQ). Platt, who planned and supervised my week of activities, then gave me a tour of the building and explained more about the IOP’s work, including how it aims to nurture upcoming physics innovation and projects, and give businesses and physicists resources and support.

My first task was working with Jenny Lovell, project manager in the science and innovation team. While helping her organize the latest round of the IOP’s medals and awards, she explained why the IOP honours the physics community in this way and described the different degrees of achievement that it recognizes.

Next I got to meet the IOP’s chief executive officer, Tom Grinyer, and, unexpectedly, the president-elect, Michele Dougherty, who is a space physicist at Imperial College London. They are both inspiring people, who gave me some great advice about how I might go about my future in physics. They talked about the exciting opportunities available as a woman in physics, and how no matter where I start, I can go into many different sectors as the subject is so applicable.

Michele Dougherty, Naeya Mistry and Tom Grinyer at the Institute of Physics, London
Top people Naeya Mistry (centre) got some valuable advice from the chief executive officer of the Institute of Physics, Tom Grinyer (right), and the president-elect, Michele Dougherty (left). (Courtesy: IOP)

To round off the day, I sat in a meeting about how the science and innovation team can increase engagement, before starting on a presentation I was due to make on Thursday about quantum physics and young people.

Tuesday

My second day began with a series of meetings. First up was the science and innovation team’s weekly stand-up meeting. I then attended a larger staff meeting with most of IOP’s employees, which proved informative and gave me a chance to see how different teams interact with each other. Next was the science and innovation managers’ meeting, where I took the minutes as they spoke.

I then met the data science lead, Robert Cocking, who went through his work on data insights. He talked about IOP membership statistics in the UK and Ireland, as well as age and gender splits, and how he can do similar breakdowns for the different areas of special interest (such as quantum physics or astronomy). I found the statistics around the representation of girls in the physics community, specifically at A-level, particularly fascinating as it applies to me. Notably, although a lower percentage of girls take A-level physics compared to boys, a higher proportion of those girls go on to study it at university.

The day ended with some time to work on my presentation and research different universities and pathways I could take once I have finished my A-levels.

Wednesday

It was a steady start to Wednesday as I continued with my presentation and research with Platt’s help. Later in the morning, I attended a meeting with the public engagement team about Mimi’s Tiny Adventure, a children’s book written by Toby Shannon-Smith, public programmes manager at IOP, and illustrated by Pauline Gregory. The book, which is the third in the Mimi’s Adventures series, is part of the IOP’s Limit Less campaign to engage young people in physics, and will be published later this year to coincide with the IYQ. It was interesting to see how the IOP advertises physics to a younger audience and makes it more engaging for them.

Platt and I then had a video call with the Physics World team at IOP Publishing in Bristol, joining for their daily news meeting before having an in-depth chat with the editor-in-chief, Matin Durrani, and feature editors Tushna Commissariat and Sarah Tesh. After they had given me a brief introduction to the magazine, website and team structure, we discussed physics careers. It was good to hear the editors’ insights as they cover a broad range of jobs in Physics World and all have a background in physics. It was particularly good to hear from Durrani as he studied chemical physics, which combines my three subjects and my passions.

Thursday

On Thursday I met David Curry, founder of Quantum Base Alpha – a start-up using quantum-inspired algorithms to solve issues facing humanity. We talked about physics in a business context, what he and his company do, and what he hopes for the future of quantum.

I then gave my presentation on “Why should young people care about quantum?”. I detailed the importance of quantum physics, the major things happening in the field and what it can become, as well as the careers quantum will offer in the future. I also discussed diversity and representation in the physics community, and how that is translated to what I see in everyday life, such as in my school and class. As a woman of colour going into science, technology, engineering and mathematics (STEM), I think it is important for me to have conversations around diversity of both gender and race, and the combination of the two. After my presentation, Curry gave me some feedback, and we discussed what I am aiming to do at university and beyond.

Friday

For my final day, I visited the University of Sussex, where I toured the campus with Curry’s daughter Kitty, an undergraduate student studying social sciences. I then met up again with Curry, who introduced me to Thomas Clarke, a PhD student in Sussex’s ion quantum technologies group. We went to the physics and maths building, where he explained the basics of quantum computing to me, and the challenges of implementing it on a larger scale.

Clarke then gave us a tour of the lab that he shares with other PhD students, and showed us his experiments, which consisted of multiple lasers that made up their trapped ion quantum computing platform. As we read off his oscilloscope attached to the laser system, it was interesting to hear that a lot of his work involved trial and error, and the visit helped me realize that I am probably more interested in the experimental side of physics rather than pure theory.

My work experience week at the IOP has been vital in helping me to understand how physics can be applied in both business and academia. Thanks to the IOP’s involvement in the IYQ, I now have a deeper understanding of quantum science and how it might one day be applied to almost every aspect of physics – including chemical physics – as the sector grows in interest and funding. It’s been an eye-opening week, and I’ve returned to school excited and better informed about my potential next career steps.

The post Beyond the classroom: a high-school student’s week at the Institute of Physics appeared first on Physics World.

Generative AI speeds medical image analysis without impacting accuracy

10 juin 2025 à 10:05

Artificial intelligence (AI) holds great potential for a range of data-intensive healthcare tasks: detecting cancer in diagnostic images, segmenting images for adaptive radiotherapy and perhaps one day even fully automating the radiation therapy workflow.

Now, for the first time, a team at Northwestern Medicine in Illinois has integrated a generative AI tool into a live clinical workflow to draft radiology reports on X-ray images. In routine use, the AI model increased documentation efficiency by an average of 15.5%, while maintaining diagnostic accuracy.

Medical images such as X-ray scans play a central role in diagnosing and staging disease. To interpret an X-ray, a patient’s imaging data are typically input into the hospital’s PACS (picture archiving and communication system) and sent to radiology reporting software. The radiologist then reviews and interprets the imaging and clinical data and creates a report to help guide treatment decisions.

To speed up this process, Mozziyar Etemadi and colleagues proposed that generative AI could create a draft report that radiologists could then check and edit, saving them from having to start from scratch. To enable this, the researchers built a generative AI model specifically for radiology at Northwestern, based on historical data from the 12-hospital Northwestern Medicine network.

They then integrated this AI model into the existing radiology clinical workflow, enabling it to receive data from the PACS and generate a draft AI report. Within seconds of image acquisition, this report is available within the reporting software, enabling radiologists to create a final report from the AI-generated draft.

“Radiology is a great fit [for generative AI] because the practice of radiology is inherently generative – radiologists are looking very carefully at images and then generating text to summarize what is in the image,” Etemadi tells Physics World. “This is similar, if not identical, to what generative models like ChatGPT do today. Our [AI model] is unique in that it is far more accurate than ChatGPT for this task, was developed years earlier and is thousands of times less costly.”

Clinical application

The researchers tested their AI model on radiographs obtained at Northwestern hospitals over a five-month period, reporting their findings in JAMA Network Open. They first examined the AI model’s impact on documentation efficiency for 23 960 radiographs. Unlike previous AI investigations that only used chest X-rays, this work covered all anatomies, with 18.3% of radiographs from non-chest sites (including the abdomen, pelvis, spine, and upper and lower extremities).

Use of the AI model increased report completion efficiency by 15.5% on average – reducing mean documentation time from 189.2 s to 159.8 s – with some radiologists achieving gains as high as 40%. The researchers note that this corresponds to a time saving of more than 63 h over the five months, representing a reduction from roughly 79 to 67 radiologist shifts.
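For the record, the headline efficiency figure follows directly from the two mean documentation times quoted in the study; a quick check (written in Python purely for convenience) is shown below.

```python
# Check the quoted efficiency gain from the reported mean documentation times.
t_without_ai = 189.2   # seconds per report, drafted from scratch
t_with_ai = 159.8      # seconds per report, edited from the AI draft

saving = t_without_ai - t_with_ai
print(f"time saved per report: {saving:.1f} s")            # 29.4 s
print(f"efficiency gain: {saving / t_without_ai:.1%}")     # ~15.5%
```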

To assess the quality of the AI-based documentation, they investigated the rate at which addenda (used to rectify reporting errors) were made to the final reports. Addenda were required in 17 model-assisted reports and 16 non-model reports, suggesting that use of AI did not impact the quality of radiograph interpretation.

To further verify this, the team also conducted a peer review analysis – in which a second radiologist rates a report according to how well they agree with its findings and text quality – in 400 chest and 400 non-chest studies, split evenly between AI-assisted and non-assisted reports. The peer review revealed no differences in clinical accuracy or text quality between AI-assisted and non-assisted interpretations, reinforcing the radiologist’s ability to create high-quality documentation using the AI.

Rapid warning system

Finally, the researchers applied the model to flag unexpected life-threatening pathologies, such as pneumothorax (collapsed lung), using an automated prioritization system that monitors the AI-generated reports. The system exhibited a sensitivity of 72.7% and specificity of 99.9% for detecting unexpected pneumothorax. Importantly, these priority flags were generated between 21 and 45 s after study completion, compared with a median of 24.5 min for radiologist notifications.
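For readers unfamiliar with these metrics, sensitivity and specificity are computed from the usual confusion-matrix counts. The sketch below uses hypothetical numbers chosen only to reproduce the quoted percentages; they are not the study’s data.

```python
# Illustrative only: the counts below are hypothetical, not the study's data.
def sensitivity(true_pos, false_neg):
    """Fraction of genuine pneumothorax cases that were flagged."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of pneumothorax-free cases correctly left unflagged."""
    return true_neg / (true_neg + false_pos)

tp, fn, tn, fp = 8, 3, 9990, 10   # placeholder counts
print(f"sensitivity: {sensitivity(tp, fn):.1%}")   # 72.7% with these counts
print(f"specificity: {specificity(tn, fp):.1%}")   # 99.9% with these counts
```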

Etemadi notes that previous AI systems were designed to detect specific findings and output a “yes” or “no” for each disease type. The team’s new model, on the other hand, creates a full text draft containing detailed comments.

“This precise language can then be searched to make more precise and actionable alerts,” he explains. “For example, we don’t need to know if a patient has a pneumothorax if we already know they have one and it is getting better. This cannot be done with existing systems that just provide a simple yes/no response.”

The team is now working to increase the accuracy of the AI tool, to enable more subtle and rare findings, as well as expand beyond X-ray images. “We currently have CT working and are looking to expand to MRI, ultrasound, mammography, PET and more, as well as modalities beyond radiology like ophthalmology and dermatology,” says Etemadi.

The researchers conclude that their generative AI tool could help alleviate radiologist shortages, with radiologists and AI collaborating to improve clinical care delivery. They emphasize, though, that the technology won’t replace humans. “You still need a radiologist as the gold standard,” says co-author Samir Abboud in a press statement. “Our role becomes ensuring every interpretation is right for the patient.”

The post Generative AI speeds medical image analysis without impacting accuracy appeared first on Physics World.

There’s an elephant in the room at the Royal Society – and for once, it’s not (just) Elon Musk

9 juin 2025 à 16:00

Just over a week ago, US President Donald Trump released a budget proposal that would, if enacted, eviscerate science research across the country. Among other cuts, it proposes a 57% drop (relative to 2024) in funding for the National Science Foundation (NSF), which provides the lion’s share of government support for basic science. Within this, the NSF’s physics and mathematics directorate stands to lose more than a billion dollars, or 67% of its funding. And despite the past closeness between Trump and SpaceX boss Elon Musk, NASA faces cuts of 24%, including 50% of its science budget.

Of course, the US is not the only nation that funds scientific research, any more than NASA is the only agency that sends spacecraft to explore the cosmos. Still, both are big enough players (and big enough partners for the UK) that I expected these developments to feature at least briefly at last Tuesday’s Royal Society conference on the future of the UK space sector.

During the conference’s opening session, it occasionally seemed like David Parker, a former chief executive of the UK Space Agency (UKSA) who now works for the European Space Agency (ESA), might say a few words on the subject. His opening remarks focused on lessons the UK could learn from the world’s other space agencies, including NASA under the first Trump administration. At one point, he joked that all aircraft have four dimensions: span, length, height and politics. But as for the politics that threaten NASA in Trump’s second administration, Parker was silent.

Let’s talk about something else

This silence continued throughout the morning. All told, 19 speakers filed on and off the stage at the Royal Society’s London headquarters without so much as mentioning what the Nobel-Prize-winning astrophysicist Adam Riess called an “almost extinction level” event for research in their field.

The most surreal omission was in a talk by Sheila Rowan, a University of Glasgow astrophysicist and past president of the Institute of Physics (which publishes Physics World). Rowan was instrumental in the 2015 detection of gravitational waves at the US-based Laser Interferometer Gravitational-Wave Observatory (LIGO), and her talk focused on gravitational-wave research. Despite this, she did not mention that Trump’s budget would eliminate funding for one of the two LIGO detectors, drastically reducing the research LIGO can do.

When I contacted Rowan to ask why this was, she replied that she had prepared her talk before the budget was announced. The conference, she added, was “a great example of how fantastic science benefits not just the UK, but society more broadly, and globally, and that is a message we must never stop explaining”.

What’s at stake

Rowan ended her talk on a similarly positive note, with hopeful words about the future. “The things that will fly in 2075, we are thinking about now,” she said.

In some cases, that may be true. However, if Trump’s budget passes both houses of the US Congress (the House of Representatives has already passed a bill that would enact most of the administration’s wishes), the harsh reality is that many things space scientists are thinking about will never fly at all.

Over at Astrobites, a site where PhD students write about astronomy and astrophysics for undergraduates, Arizona State University student Skylar Grayson compiled a depressingly long list of threatened missions. Like other graphics that have circulated on social media since the budget announcement, Grayson’s places red X’s – indicating missions that are “fully dead” under the new budget – over dozens of projects. Affected missions range from well-known workhorses like Mars Orbiter and New Horizons to planning-stage efforts like the next-generation Earth-observing satellite Landsat Next. According to Landsat Next’s live-at-the-time-of-writing NASA webpage, it is expected to launch no earlier than 2031. What does its future look like now?

And NASA’s own missions are just the start. Several missions led by other agencies – including high-profile ones like ESA’s Rosalind Franklin Mars rover – are under threat. This is because the new NASA budget would eliminate the US’s share of their funding, forcing partners to pick up the tab or see their investments go to waste. Did that possibility not deserve some mention at a conference on the future of the UK space sector?

The elephant in the room

Midway through the conference, satellite industry executive Andrew Stanniland declared that he was about to mention the “elephant in the room”. At last, I thought. Someone’s going to say something. However, Stanniland’s “elephant” was not the proposed gutting of NASA science. Instead, he wanted to discuss the apparently taboo topic of the Starlink network of communications satellites.

Like SpaceX, Tesla and, until recently, Trump’s budget-slashing “department of government efficiency”, Starlink is a Musk project. Musk is a Fellow of the Royal Society, and he remains so after the society’s leadership rejected a grassroots effort to remove him for, inter alia, calling for the overthrow of the UK government. Could it be that speakers were avoiding Musk, Trump and the new US science budget to spare the Royal Society’s blushes?

Exasperated, I submitted a question to the event’s online Q&A portal. “The second Trump administration has just proposed a budget for NASA that would gut its science funding,” I wrote. “How is this likely to affect the future of the space sector?” Alas, the moderator didn’t choose my question – though in fairness, five others also went unanswered, and Rowan, for the record, says that she could “of course” talk about whatever she wanted to.

Finally, in the event’s second-to-last session, the elephant broke through. During a panel discussion on international collaboration, an audience member asked, “Can we really operate [collaboratively] when we have an administration that’s causing irreparable harm to one of our biggest collaborators on the space science stage?”

In response, panellist Gillian Wright, a senior scientist at the UK Astronomy Technology Centre in Edinburgh, called it “an incredibly complicated question given the landscape is still shifting”. Nevertheless, she said, “My fear is that what goes won’t come back easily, so we do need to think hard about how we keep those scientific connections alive for the future, and I don’t know the answer.” The global focus of space science, Wright added, may be shifting away from the US and towards Europe and the global south.

And that was it.

A question of leadership

I logged out of the conference feeling depressed – and puzzled. Why had none of these distinguished speakers (partially excepting Wright) addressed one of the biggest threats to the future of space science? One possible answer, suggested to me on social media by the astrophysicist Elizabeth Tasker, is that individuals might hesitate to say anything that could be taken as an official statement, especially if their organization needs to maintain a relationship with the US. “I think it needs to be an agency-released statement first,” said Tasker, who works at (but was not speaking for) the Japan Aerospace Exploration Agency (JAXA). “I totally agree that silence is problematic for the community, and I think that’s where official statements come in – but those may need more time.”

Official statements from agencies and other institutions would doubtless be welcomed by members of the US science workforce whose careers and scientific dreams are at risk from the proposed budget. The initial signs, however, are not encouraging.

On the same day as the Royal Society event, the US’s National Academies of Sciences (NAS) hosted their annual “State of the Science” event in Washington, DC. According to reporting by John Timmer at Ars Technica, many speakers at this event were, if anything, even keener than the Royal Society speakers to avoid acknowledging the scale of the (real and potential) damage. A few oblique comments from NAS president Marcia McNutt; a few forthright ones from a Republican former congresswoman, Heather Wilson; but overall, a pronounced tendency to ignore the present in favour of a future that may never come.

Frankly, the scientific community on both sides of the Atlantic deserves better.

The post There’s an elephant in the room at the Royal Society – and for once, it’s not (just) Elon Musk appeared first on Physics World.

Quantum physics guides proton motion in biological systems

9 juin 2025 à 13:00

If you dig deep enough, you’ll find that most biochemical and physiological processes rely on shuttling hydrogen ions – protons – around living systems. Until recently, this proton transfer process was thought to occur when protons jump from water molecule to water molecule and between chains of amino acids. In 2023, however, researchers suggested that protons might, in fact, transfer at the same time as electrons. Scientists in Israel have now confirmed this is indeed the case, while also showing that proton movement is linked to the electrons’ spin, or magnetic moment. Since the properties of electron spin are defined by quantum mechanics, the new findings imply that essential life processes are intrinsically quantum in nature.

The scientists obtained this result by placing crystals of lysozyme – an enzyme commonly found in living organisms – on a magnetic substrate. Depending on the direction of the substrate’s magnetization, the spin of the electrons ejected from this substrate may be up or down. Once the electrons are ejected from the substrate, they enter the lysozymes. There, they become coupled to phonons, or vibrations of the crystal lattice.

Crucially, this coupling is not random. Instead, the chirality, or “handedness”, of the phonons determines which electron spin they will couple with – a property known as chiral induced spin selectivity.

Excited chiral phonons mediate electron spin coupling

When the scientists turned their attention to proton transfer through the lysozymes, they discovered that the protons moved much more slowly with one magnetization direction than they did with the opposite. This connection between proton transfer and spin-selective electron transfer did not surprise Yossi Paltiel, who co-led the study with his Hebrew University of Jerusalem (HUJI) colleagues Naama Goren, Nir Keren and Oded Livnah in collaboration with Nurit Ashkenazy of Ben Gurion University and Ron Naaman of the Weizmann Institute.

“Proton transfer in living organisms occurs in a chiral environment and is an essential process,” Paltiel says. “Since protons also have spin, it was logical for us to try to relate proton transfer to electron spin in this work.”

The finding could shed light on proton hopping in biological environments, Paltiel tells Physics World. “It may ultimately help us understand how information and energy are transferred inside living cells, and perhaps even allow us to control this transfer in the future.”

“The results also emphasize the role of chirality in biological processes,” he adds, “and show how quantum physics and biochemistry are fundamentally related.”

The HUJI team now plans to study how the coupling between the proton transfer process and the transfer of spin polarized electrons depends on specific biological environments. “We also want to find out to what extent the coupling affects the activity of cells,” Paltiel says.

Their present study is detailed in PNAS.

The post Quantum physics guides proton motion in biological systems appeared first on Physics World.

Development and application of a 3-electrode setup for the operando detection of side reactions in Li-Ion batteries

9 juin 2025 à 11:13


Join us to learn about the development and application of a 3-electrode setup for the operando detection of side reactions in Li-ion batteries.

Detecting parasitic side reactions originating both from the cathode active materials (CAMs) and the electrolyte is paramount for developing more stable cell chemistries for Li-ion batteries. This talk will present a method for the qualitative analysis of oxidative electrolyte decomposition, as well as the quantification of released lattice oxygen and transition metal ions (TM ions) from the CAM. It is based on a 3-electrode cell design employing a Vulcan carbon-based sense electrode (SE) that is held at a controlled voltage against a partially delithiated lithium iron phosphate (LFP) counter electrode (CE). At this SE, reductive currents can be measured while polarizing a CAM or carbon working electrode (WE) against the same LFP CE. In voltammetric scans, we show how the SE potential can be selected to specifically detect a given side reaction during CAM charge/discharge, allowing, for example, discrimination between lattice oxygen, protons and dissolved TMs. Furthermore, it is shown via on-line electrochemical mass spectrometry (OEMS) that O2 reduction in the here-used LP47 electrolyte consumes ~2.3 electrons/O2. Using this value, the lattice oxygen release deduced from the 3-electrode setup upon charging of the NCA WE is in good agreement with OEMS measurements up to NCA potentials >4.65 VLi. At higher potentials, the contributions from the reduction of TM ions can be quantified by comparing the integrated SE current with the O2 evolution from OEMS.
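As a rough illustration of the quantification step described above, the conversion from an integrated reductive charge to an amount of O2 follows Faraday’s law with the ~2.3 electrons-per-O2 figure quoted in the abstract. The sketch below is not the speakers’ analysis code, and the charge value is a placeholder.

```python
# Convert an integrated reductive charge at the sense electrode into an
# amount of O2, using Faraday's law and ~2.3 electrons per O2 (from the talk).
F = 96485.0    # Faraday constant, C/mol
z = 2.3        # electrons consumed per O2 in the LP47 electrolyte

def o2_from_charge(q_coulomb):
    """Moles of O2 corresponding to an integrated SE reduction charge."""
    return q_coulomb / (z * F)

q_measured = 0.05   # C, placeholder value for an integrated SE current
print(f"{o2_from_charge(q_measured) * 1e6:.2f} micromol O2")
```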

Lennart Reuter headshot
Lennart Reuter

Lennart Reuter is a PhD student in the group of Prof Hubert A Gasteiger at the Chair of Technical Electrochemistry at TUM. His research focused on the interfacial processes in lithium-ion batteries that govern calendar life, cycle stability, and rate capability. He advanced the on-line electrochemical mass spectrometry (OEMS) technique to investigate gas evolution mechanisms from interfacial side reactions at the cathode and anode. His work also explored how SEI formation and graphite structural changes affect Li⁺ transport, using impedance spectroscopy and complementary analysis techniques.

 

Leonhard J Reinschluessel headshot
Leonhard J Reinschluessel

Leonhard J Reinschluessel is currently a PhD candidate at the Chair of Technical Electrochemistry in the Gasteiger research group at the Technical University of Munich (TUM). His current work encompasses an in-depth understanding of the complex interplay of cathode and electrolyte degradation mechanisms in lithium-ion batteries using operando lab-based and synchrotron techniques. He received his MSc in chemistry from TUM, where he investigated the mitigation of aging of FeNC-based cathode catalyst layers in PEMFCs in his thesis with the Gasteiger group.

The post Development and application of a 3-electrode setup for the operando detection of side reactions in Li-Ion batteries appeared first on Physics World.

People benefit from medicine, but machines need healthcare too

9 juin 2025 à 10:17

I began my career in the 1990s at a university spin-out company, working for a business that developed vibration sensors to monitor the condition of helicopter powertrains and rotating machinery. It was a job that led to a career developing technologies and techniques for checking the “health” of machines, such as planes, trains and trucks.

What a difference three decades has made. When I started out, we would deploy bespoke systems that generated limited amounts of data. These days, everything has gone digital and there’s almost more information than we can handle. We’re also seeing a growing use of machine learning and artificial intelligence (AI) to track how machines operate.

In fact, with AI being increasingly used in medical science – for example to predict a patient’s risk of heart attacks – I’ve noticed intriguing similarities between how we monitor the health of machines and the health of human bodies. Jet engines and hearts are very different objects, but in both cases monitoring devices give us a set of digitized physical measurements.

A healthy perspective

Sensors installed on a machine provide various basic physical parameters, such as its temperature, pressure, flow rate or speed. More sophisticated devices can yield information about, say, its vibration, acoustic behaviour, or (for an engine) oil debris or quality. Bespoke sensors might even be added if an important or otherwise unchecked aspect of a machine’s performance needs to be monitored – provided the benefits of doing so outweigh the cost.

Generally speaking, the sensors you use in a particular situation depend on what’s worked before and whether you can exploit other measurements, such as those controlling the machine. But whatever sensors are used, the raw data then have to be processed and manipulated to extract particular features and characteristics.

Once you’ve done all that, you can then determine the health of the machine, rather like in medicine. Is it performing normally? Does it seem to be developing a fault? If the machine appears to be going wrong, can you try to diagnose what the problem might be?

Generally, we do this by tracking a range of parameters to look for consistent behaviour, such as a steady increase, or by seeing if a parameter exceeds a pre-defined threshold. With further analysis, we can also try to predict the future state of the machine, work out what its remaining useful life might be, or decide if any maintenance needs scheduling.
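In code, those two checks are very simple. The following is a minimal sketch of the idea only; the thresholds, window length and data are invented for illustration and do not come from any real monitoring system.

```python
import numpy as np

# Minimal sketch of the two checks described above: has a parameter crossed a
# pre-defined alarm limit, and is it drifting steadily upwards?
def exceeds_threshold(samples, limit):
    """True if the latest reading is above the alarm limit."""
    return samples[-1] > limit

def steady_increase(samples, min_slope):
    """True if a least-squares fit over the window shows a consistent rise."""
    slope = np.polyfit(np.arange(len(samples)), samples, 1)[0]
    return slope > min_slope

vibration = np.array([2.1, 2.2, 2.4, 2.7, 3.1, 3.6])   # e.g. bearing vibration, mm/s
if exceeds_threshold(vibration, limit=3.5) or steady_increase(vibration, min_slope=0.2):
    print("flag for inspection and schedule maintenance")
```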

A diagnosis typically involves linking various anomalous physical parameters (or symptoms) to a probable cause. As machines obey the laws of physics, a diagnosis can either be based on engineering knowledge or be driven by data – or sometimes the two together. If a concrete diagnosis can’t be made, you can still get a sense of where a problem might lie before carrying out further investigation or doing a detailed inspection.

One way of doing this is to use a “borescope” – essentially a long, flexible cable with a camera on the end. Rather like an endoscope in medicine, it allows you to look down narrow or difficult-to-reach cavities. But unlike medical imaging, which generally takes place in the controlled environment of a lab or clinic, machine data are typically acquired “in the field”. The resulting images can be tricky to interpret because the light is poor, the measurements are inconsistent, or the equipment hasn’t been used in the most effective way.

Even though it can be hard to work out what you’re seeing, in-situ visual inspections are vital as they provide evidence of a known condition, which can be directly linked to physical sensor measurements. It’s a kind of health status calibration. But if you want to get more robust results, it’s worth turning to advanced modelling techniques, such as deep neural networks.

One way to predict the wear and tear of a machine’s constituent parts is to use what’s known as a “digital twin”. Essentially a virtual replica of a physical object, a digital twin is created by building a detailed model and then feeding in real-time information from sensors and inspections. The twin basically mirrors the behaviour, characteristics and performance of the real object.
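A toy version of the idea, assuming nothing about any real product, might look like the sketch below: a simple wear model is advanced in time and continually corrected by sensor or inspection data, so that the virtual copy tracks the real machine’s condition.

```python
# Toy digital twin (illustrative only): a physics-based wear estimate that is
# nudged towards whatever the sensors or inspections report.
class DigitalTwin:
    def __init__(self, wear_rate_per_hour, correction_gain=0.3):
        self.wear = 0.0                        # modelled wear, arbitrary units
        self.rate = wear_rate_per_hour         # prior model of wear accumulation
        self.gain = correction_gain            # how strongly data corrects the model

    def step(self, hours, measured_wear=None):
        self.wear += self.rate * hours         # advance the model
        if measured_wear is not None:          # blend in a measurement
            self.wear += self.gain * (measured_wear - self.wear)
        return self.wear

twin = DigitalTwin(wear_rate_per_hour=0.01)
twin.step(100)                                  # 100 operating hours, no inspection
print(twin.step(50, measured_wear=1.8))         # inspection finds more wear than modelled
```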

Real-time monitoring

Real-time health data are great because they allow machines to be serviced as and when required, rather than following a rigid maintenance schedule. For example, if a machine has been deployed heavily in a difficult environment, it can be serviced sooner, potentially preventing an unexpected failure. Conversely, if it’s been used relatively lightly and not shown any problems, then maintenance could be postponed or reduced in scope. This saves time and money because the equipment will be out of action less than anticipated.

Having information about a machine’s condition at any point in time not only allows this kind of “intelligent maintenance” but also lets us use associated resources wisely. For example, we can work out which parts will need repairing or replacing, when the maintenance will be required and who will do it. Spare parts can therefore be ordered only when required, saving money and optimizing supply chains.

Real-time health-monitoring data are particularly useful for companies owning many machines of one kind, such as airlines with a fleet of planes or haulage companies with a lot of trucks. This gives them a better understanding not just of how machines behave individually, but also of how they behave collectively, providing a “fleet-wide” view. Noticing and diagnosing failures from data becomes an iterative process, helping manufacturers create new or improved machine designs.

This all sounds great, but in some respects, it’s harder to understand a machine than a human. People can be taken to hospitals or clinics for a medical scan, but a wind turbine or jet engine, say, can’t be readily accessed, switched off or sent for treatment. Machines also can’t tell us exactly how they feel.

However, even humans don’t always know when there’s something wrong. That’s why it’s worth us taking a leaf from industry’s book and consider getting regular health monitoring and checks. There are lots of brilliant apps out there to monitor and track your heart rate, blood pressure, physical activity and sugar levels.

Just as with a machine, you can avoid unexpected failure, reduce your maintenance costs, and make yourself more efficient and reliable. You could, potentially, even live longer too.

The post People benefit from medicine, but machines need healthcare too appeared first on Physics World.

Japan’s ispace suffers second lunar landing failure

6 juin 2025 à 15:04

The Japanese firm ispace has suffered another setback after its second attempt to land on the Moon ended in failure yesterday. The Hakuto-R Mission 2, also known as Resilience, failed to touch down near the centre of Mare Frigoris (sea of cold) in the far north of the Moon after a sensor malfunctioned during descent.

Launched on 15 January from the Kennedy Space Center, Florida, aboard a SpaceX Falcon 9 rocket, the craft spent four months travelling to the Moon before it entered lunar orbit on 7 May. It then spent the past month completing several lunar orbital manoeuvres.

During the descent phase, the 2.3 m-high lander began a landing sequence that involved firing its main propulsion system to gradually decelerate and adjust its attitude. ispace says that the lander was confirmed to be nearly vertical but then the company lost communication with the craft.

The firm concludes that the laser rangefinder experienced delays while attempting to measure the distance to the lunar surface during descent, meaning that the lander was unable to decelerate sufficiently to carry out a soft landing.

“Given that there is currently no prospect of a successful lunar landing, our top priority is to swiftly analyse the telemetry data we have obtained thus far and work diligently to identify the cause,” noted ispace founder and chief executive officer Takeshi Hakamada in a statement. “We strive to restore trust by providing a report of the findings.”

The mission had been planned to operate for about two weeks. Resilience featured several commercial payloads, worth $16m, including a food-production experiment and a deep-space radiation probe. It also carried a rover, dubbed Tenacious, which was about the size of a microwave oven and would have collected and analysed lunar regolith.

The rover would have also delivered a Swedish artwork called The Moonhouse – a small red cottage with white corners – and placed it at a “symbolically meaningful” site on the Moon.

Lunar losses

The company’s first attempt to land on the Moon also ended in failure in 2023 when the Hakuto-R Mission 1 crash landed despite being in a vertical position as it carried out the final approach to the lunar surface.

The issue was put down to a software problem that incorrectly assessed the craft’s altitude during descent.

Had the latest attempt been a success, ispace would have joined the US firms Intuitive Machines and Firefly Aerospace, which successfully landed on the Moon last year and in March of this year, respectively.

The second lunar loss casts doubt on ispace’s plans for further lunar landings and its grand aim of establishing a lunar colony of 1000 inhabitants by the 2040s.

The post Japan’s ispace suffers second lunar landing failure appeared first on Physics World.

Richard Bond and George Efstathiou: meet the astrophysicists who are shaping our understanding of the early universe

5 juin 2025 à 17:00

This episode of the Physics World Weekly podcast features George Efstathiou and Richard Bond, who share the 2025 Shaw Prize in Astronomy, “for their pioneering research in cosmology, in particular for their studies of fluctuations in the cosmic microwave background (CMB). Their predictions have been verified by an armada of ground-, balloon- and space-based instruments, leading to precise determinations of the age, geometry, and mass-energy content of the universe.”

Bond and Efstathiou talk about how the CMB emerged when the universe was just 380,000 years old and explain how the CMB is observed today. They explain why studying fluctuations in today’s CMB provides a window into the nature of the universe as it existed long ago, and how future studies could help physicists understand the nature of dark matter – which is one of the greatest mysteries in physics.

Efstathiou is emeritus professor of astrophysics at the University of Cambridge in the UK – and Richard Bond is a professor at the Canadian Institute for Theoretical Astrophysics (CITA) and university professor at the University of Toronto in Canada. Bond and Efstathiou share the 2025 Shaw Prize in Astronomy and its $1.2m prize money equally.

This podcast is sponsored by The Shaw Prize Foundation.

The post Richard Bond and George Efstathiou: meet the astrophysicists who are shaping our understanding of the early universe appeared first on Physics World.

Superconducting innovation: SQMS shapes up for scalable success in quantum computing

5 juin 2025 à 16:00

Developing quantum computing systems with high operational fidelity, enhanced processing capabilities plus inherent (and rapid) scalability is high on the list of fundamental problems preoccupying researchers within the quantum science community. One promising R&D pathway in this regard is being pursued by the Superconducting Quantum Materials and Systems (SQMS) National Quantum Information Science Research Center at the US Department of Energy’s Fermi National Accelerator Laboratory, the pre-eminent US particle physics facility on the outskirts of Chicago, Illinois.

The SQMS approach involves placing a superconducting qubit chip (held at temperatures as low as 10–20 mK) inside a three-dimensional superconducting radiofrequency (3D SRF) cavity – a workhorse technology for particle accelerators employed in high-energy physics (HEP), nuclear physics and materials science. In this set-up, it becomes possible to preserve and manipulate quantum states by encoding them in microwave photons (modes) stored within the SRF cavity (which is also cooled to the millikelvin regime).

Put another way: by pairing superconducting circuits and SRF cavities at cryogenic temperatures, SQMS researchers create environments where microwave photons can have long lifetimes and be protected from external perturbations – conditions that, in turn, make it possible to generate quantum states, manipulate them and read them out. The endgame is clear: reproducible and scalable realization of such highly coherent superconducting qubits opens the way to more complex and scalable quantum computing operations – capabilities that, over time, will be used within Fermilab’s core research programme in particle physics and fundamental physics more generally.

Fermilab is in a unique position to turn this quantum technology vision into reality, given its decadal expertise in developing high-coherence SRF cavities. In 2020, for example, Fermilab researchers demonstrated record coherence lifetimes (of up to two seconds) for quantum states stored in an SRF cavity.

“It’s no accident that Fermilab is a pioneer of SRF cavity technology for accelerator science,” explains Sir Peter Knight, senior research investigator in physics at Imperial College London and an SQMS advisory board member. “The laboratory is home to a world-leading team of RF engineers whose niobium superconducting cavities routinely achieve very high quality factors (Q) from 10¹⁰ to above 10¹¹ – figures of merit that can lead to dramatic increases in coherence time.”

Moreover, Fermilab offers plenty of intriguing HEP use-cases where quantum computing platforms could yield significant research dividends. In theoretical studies, for example, the main opportunities relate to the evolution of quantum states, lattice-gauge theory, neutrino oscillations and quantum field theories in general. On the experimental side, quantum computing efforts are being lined up for jet and track reconstruction during high-energy particle collisions; also for the extraction of rare signals and for exploring exotic physics beyond the Standard Model.

SQMS associate scientists Yao Lu and Tanay Roy
Collaborate to accumulate SQMS associate scientists Yao Lu (left) and Tanay Roy (right) worked with PhD student Taeyoon Kim (centre) to develop a two-qudit superconducting QPU with a record coherence lifetime (>20 ms). (Courtesy: Hannah Brumbaugh, Fermilab)

Cavities and qubits

SQMS has already notched up some notable breakthroughs on its quantum computing roadmap, not least the demonstration of chip-based transmon qubits (a type of charge qubit circuit exhibiting decreased sensitivity to noise) showing systematic and reproducible improvements in coherence, record-breaking lifetimes of over a millisecond, and reductions in performance variation.

Key to success here is an extensive collaborative effort in materials science and the development of novel chip fabrication processes, with the resulting transmon qubit ancillas shaping up as the “nerve centre” of the 3D SRF cavity-based quantum computing platform championed by SQMS. What’s in the works is essentially a unique quantum analogue of a classical computing architecture: the transmon chip providing a central logic-capable quantum information processor and microwave photons (modes) in the 3D SRF cavity acting as the random-access quantum memory.

As for the underlying physics, the coupling between the transmon qubit and discrete photon modes in the SRF cavity allows for the exchange of coherent quantum information, as well as enabling quantum entanglement between the two. “The pay-off is scalability,” says Alexander Romanenko, a senior scientist at Fermilab who leads the SQMS quantum technology thrust. “A single logic-capable processor qubit, such as the transmon, can couple to many cavity modes acting as memory qubits.”
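In textbook terms, this coupling is captured by the multimode Jaynes–Cummings Hamiltonian of circuit quantum electrodynamics. The form below is the generic one – not necessarily the exact working model used by SQMS – with ω_q the transmon frequency, ω_m the cavity-mode frequencies and g_m the transmon–mode coupling rates:

```latex
% Generic multimode Jaynes-Cummings Hamiltonian (textbook circuit QED form)
H/\hbar = \frac{\omega_q}{2}\,\sigma_z
        + \sum_{m=1}^{N} \omega_m\, a_m^{\dagger} a_m
        + \sum_{m=1}^{N} g_m \left( a_m^{\dagger}\,\sigma_- + a_m\,\sigma_+ \right)
```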

In principle, a single transmon chip could manipulate more than 10 qubits encoded inside a single-cell SRF cavity, substantially streamlining the number of microwave channels required for system control and manipulation as the number of qubits increases. “What’s more,” adds Romanenko, “instead of using quantum states in the transmon [coherence times just crossed into milliseconds], we can use quantum states in the SRF cavities, which have higher quality factors and longer coherence times [up to two seconds].”

In terms of next steps, continuous improvement of the ancilla transmon coherence times will be critical to ensure high-fidelity operation of the combined system – with materials breakthroughs likely to be a key rate-determining step. “One of the unique differentiators of the SQMS programme is this ‘all-in’ effort to understand and get to grips with the fundamental materials properties that lead to losses and noise in superconducting qubits,” notes Knight. “There are no short-cuts: wide-ranging experimental and theoretical investigations of materials physics – per the programme implemented by SQMS – are mandatory for scaling superconducting qubits into industrial and scientifically useful quantum computing architectures.”

Laying down a marker, SQMS researchers recently achieved a major milestone in superconducting quantum technology by developing the longest-lived multimode superconducting quantum processor unit (QPU) ever built (coherence lifetime >20 ms). Their processor is based on a two-cell SRF cavity and leverages its exceptionally high quality factor (~10¹⁰) to preserve quantum information far longer than conventional superconducting platforms (typically 1 or 2 ms for rival best-in-class implementations).

Coupled with a superconducting transmon, the two-cell SRF module enables precise manipulation of cavity quantum states (photons) using ultrafast control/readout schemes (allowing for approximately 10⁴ high-fidelity operations within the qubit lifetime). “This represents a significant achievement for SQMS,” claims Yao Lu, an associate scientist at Fermilab and co-lead for QPU connectivity and transduction in SQMS. “We have demonstrated the creation of high-fidelity [>95%] quantum states with large photon numbers [20 photons] and achieved ultra-high-fidelity single-photon entangling operations between modes [>99.9%]. It’s work that will ultimately pave the way to scalable, error-resilient quantum computing.”

The SQMS multiqudit QPU prototype
Scalable thinking The SQMS multiqudit QPU prototype (above) exploits 3D SRF cavities held at millikelvin temperatures. (Courtesy: Ryan Postel, Fermilab)

Fast scaling with qudits

There’s no shortage of momentum either, with these latest breakthroughs laying the foundations for SQMS “qudit-based” quantum computing and communication architectures. A qudit is a multilevel quantum unit that can occupy more than two states and, in turn, hold a larger information density – i.e. instead of working with a large number of qubits to scale information-processing capability, it may be more efficient to maintain a smaller number of qudits (with each holding a greater range of values for optimized computations).
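The information-density argument is easy to make concrete: n qudits of dimension d span a Hilbert space of dimension d^n, the equivalent of n log2(d) qubits. A two-line check (illustrative only):

```python
import math

# n d-level qudits span a state space of dimension d**n,
# i.e. the equivalent of n * log2(d) qubits.
def qubit_equivalent(n_qudits, d):
    return n_qudits * math.log2(d)

print(qubit_equivalent(10, 2))   # 10 ordinary qubits -> 10.0
print(qubit_equivalent(10, 8))   # 10 eight-level qudits -> 30.0 qubits' worth
```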

Scale-up to a multiqudit QPU system is already underway at SQMS via several parallel routes (and all with a modular computing architecture in mind). In one approach, coupler elements and low-loss interconnects integrate a nine-cell multimode SRF cavity (the memory) to a two-cell SRF cavity quantum processor. Another iteration uses only two-cell modules, while yet another option exploits custom-designed multimodal cavities (10+ modes) as building blocks.

One thing is clear: with the first QPU prototypes now being tested, verified and optimized, SQMS will soon move to a phase in which many of these modules will be assembled and put together in operation. By extension, the SQMS effort also encompasses crucial developments in control systems and microwave equipment, where many devices must be synchronized optimally to encode and analyse quantum information in the QPUs.

Along a related coordinate, qudit-based encodings can reduce the number of gates and the circuit depth that complex algorithms require. What’s more, for many simulation problems in HEP and other fields, multilevel systems (qudits) – rather than qubits – provide a more natural representation of the physics in play, making simulation tasks significantly more tractable. Work on encoding several such problems into qudits – including lattice-gauge-theory calculations – is ongoing within SQMS.

Taken together, this massive R&D undertaking – spanning quantum hardware and quantum algorithms – can only succeed with a “co-design” approach across strategy and implementation: from identifying applications of interest to the wider HEP community to full deployment of QPU prototypes. Co-design is especially suited to these efforts as it demands sustained alignment of scientific goals with technological implementation to drive innovation and societal impact.

In addition to their quantum computing promise, these cavity-based quantum systems will play a central role as both “adapters” and low-loss channels operating at elevated temperatures for interconnecting chip-based or cavity-based QPUs hosted in different refrigerators. These interconnects will provide an essential building block for the efficient scale-up of superconducting quantum processors into larger quantum data centres.

Researchers in the control room of the SQMS Quantum Garage facility
Quantum insights Researchers in the control room of the SQMS Quantum Garage facility, developing architectures and gates for SQMS hardware tailored toward HEP quantum simulations. From left to right: Nick Bornman, Hank Lamm, Doga Kurkcuoglu, Silvia Zorzetti, Julian Delgado, Hans Johnson (Courtesy: Hannah Brumbaugh)

 “The SQMS collaboration is ploughing its own furrow – in a way that nobody else in the quantum sector really is,” says Knight. “Crucially, the SQMS partners can build stuff at scale by tapping into the phenomenal engineering strengths of the National Laboratory system. Designing, commissioning and implementing big machines has been part of the ‘day job’ at Fermilab for decades. In contrast, many quantum computing start-ups must scale their R&D infrastructure and engineering capability from a far-less-developed baseline.”

The last word, however, goes to Romanenko. “Watch this space,” he concludes, “because SQMS is on a roll. We don’t know which quantum computing architecture will ultimately win out, but we will ensure that our cavity-based quantum systems will play an enabling role.”

Scaling up: from qubits to qudits

Conceptual illustration of the SQMS Center’s superconducting TESLA cavity coupled to a transmon ancilla qubit
Left: conceptual illustration of the SQMS Center’s superconducting TESLA cavity coupled to a transmon ancilla qubit (AI-generated). Right: an ancilla qubit with two energy levels – ground ∣g⟩ and excited ∣e⟩ – is used to control a high-coherence (d+1) dimensional qudit encoded in a cavity resonator. The ancilla enables state preparation, control and measurement of the qudit. (Courtesy: Fermilab)

The post Superconducting innovation: SQMS shapes up for scalable success in quantum computing appeared first on Physics World.

Black-hole scattering calculations could shed light on gravitational waves

4 juin 2025 à 17:00

By adapting mathematical techniques used in particle physics, researchers in Germany have developed an approach that could boost our understanding of the gravitational waves that are emitted when black holes collide. Led by Jan Plefka at The Humboldt University of Berlin, the team’s results could prove vital to the success of future gravitational-wave detectors.

Nearly a decade on from the first direct observations of gravitational waves, physicists are hopeful that the next generation of ground- and space-based observatories will soon allow us to study these ripples in space–time with unprecedented precision. But to ensure the success of upcoming projects like the LISA space mission, the increased sensitivity offered by these detectors will need to be accompanied by a deeper theoretical understanding of how gravitational waves are generated through the merging of two black holes.

In particular, they will need to predict more accurately the physical properties of gravitational waves produced by any given colliding pair and account for factors including their respective masses and orbital velocities. For this to happen, physicists will need to develop more precise solutions to the relativistic two-body problem. This problem is a key application of the Einstein field equations, which relate the geometry of space–time to the distribution of matter within it.

No exact solution

“Unlike its Newtonian counterpart, which is solved by Kepler’s Laws, the relativistic two-body problem cannot be solved exactly,” Plefka explains. “There is an ongoing international effort to apply quantum field theory (QFT) – the mathematical language of particle physics – to describe the classical two-body problem.”

In their study, Plefka’s team started from state-of-the-art techniques used in particle physics for modelling the scattering of colliding elementary particles, while accounting for their relativistic properties. When viewed from far away, each black hole can be approximated as a single point which, much like an elementary particle, carries a single mass, charge, and spin.

Taking advantage of this approximation, the researchers modified existing techniques in particle physics to create a framework called worldline quantum field theory (WQFT). “The advantage of WQFT is a clean separation between classical and quantum physics effects, allowing us to precisely target the classical physics effects relevant for the vast distances involved in astrophysical observables,” says Plefka.

Ordinarily, doing calculations with such an approach would involve solving millions of integrals that sum up every single contribution to the black hole pair’s properties across all possible ways that the interaction between them could occur. To simplify the problem, Plefka’s team used a new algorithm that identified relationships between the integrals. This reduced the problem to just 250 “master integrals”, making the calculation vastly more manageable.

With these master integrals, the team could finally produce expressions for three key physical properties of black hole binaries within WQFT. These include the changes in momentum during the gravity-mediated scattering of two black holes and the total energy radiated by both bodies over the course of the scattering.
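For orientation, the Newtonian limit of this scattering problem already has a simple closed form relating the deflection angle to the masses, the impact parameter b and the asymptotic relative speed; the team’s WQFT results extend this to high post-Minkowskian orders, which are not reproduced here:

```latex
% Newtonian hyperbolic-orbit deflection -- a limiting case only,
% not the team's full relativistic WQFT result
\tan\frac{\theta}{2} = \frac{G\,(m_{1}+m_{2})}{b\,v_{\infty}^{2}}
```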

Genuine physical process

Altogether, the team’s WQFT framework produced the most accurate solution to the Einstein field equations to date. “In particular, the radiated energy we found contains a new class of mathematical functions known as ‘Calabi–Yau periods’,” Plefka explains. “While these functions are well-known in algebraic geometry and string theory, this marks the first time they have been shown to describe a genuine physical process.”

With its unprecedented insights into the structure of the relativistic two-body problem, the team’s approach could now be used to build more precise models of gravitational-wave formation, which could prove invaluable for the next generation of gravitational-wave detectors.

More broadly, however, Plefka predicts that the appearance of Calabi–Yau periods in their calculations could lead to an entirely new class of mathematical functions applicable to many areas beyond gravitational waves.

“We expect these periods to show up in other branches of physics, including collider physics, and the mathematical techniques we employed to calculate the relevant integrals will no doubt also apply there,” he says.

The research is described in Nature.

The post Black-hole scattering calculations could shed light on gravitational waves appeared first on Physics World.

Harmonious connections: bridging the gap between music and science

4 juin 2025 à 12:00

CP Snow’s classic The Two Cultures lecture, published in book form in 1959, is the usual go-to reference when exploring the divide between the sciences and humanities. It is a culture war that was raging long before the term became social-media shorthand for today’s tribal battles over identity, values and truth.

While Snow eloquently lamented the lack of mutual understanding between scientific and literary elites, the 21st-century version of the two-cultures debate often plays out with a little less decorum and a lot more profanity. Hip hop duo Insane Clown Posse certainly didn’t hold back in their widely memed 2010 track “Miracles”, which included the lyric “And I don’t wanna talk to a scientist / Y’all motherfuckers lying and getting me pissed”. An extreme example to be sure, but it hammers home the point: Snow’s two-culture concerns continue to resonate strongly almost 70 years after his influential lecture and writings.

A Perfect Harmony: Music, Mathematics and Science by David Darling is the latest addition to a growing genre that seeks to bridge that cultural rift. Like Peter Pesic’s Music and the Making of Modern Science, Susan Rogers and Ogi Ogas’ This Is What It Sounds Like, and Philip Ball’s The Music Instinct, Darling’s book adds to the canon that examines the interplay between musical creativity and the analytical frameworks of science (including neuroscience) and mathematics.

I’ve also contributed, in a nanoscopically small way, to this music-meets-science corpus with an analysis of the deep and fundamental links between quantum physics and heavy metal (When The Uncertainty Principle Goes To 11), and have a long-standing interest in music composed from maths and physics principles and constants (see my Lateral Thoughts articles from September 2023 and July 2024). Darling’s book, therefore, struck a chord with me.

Darling is not only a talented science writer with an expansive back-catalogue to his name but he is also an accomplished musician (check out his album Songs Of The Cosmos ), and his enthusiasm for all things musical spills off the page. Furthermore, he is a physicist, with a PhD in astronomy from the University of Manchester. So if there’s a writer who can genuinely and credibly inhabit both sides of the arts–science cultural divide, it’s Darling.

But is A Perfect Harmony in tune with the rest of the literary ensemble, or marching to a different beat? In other words, is this a fresh new take on the music-meets-maths (meets pop sci) genre or, like too many bands I won’t mention, does it sound suspiciously like something you’ve heard many times before? Well, much like an old-school vinyl album, Darling’s work has the feel of two distinct sides. (And I’ll try to make that my final spin on groan-worthy musical metaphors. Promise.)

Not quite perfect pitch

Although the subtitle for A Perfect Harmony is “Music, Mathematics and Science”, the first half of the book is more of a history of the development and evolution of music and musical instruments in various cultures, rather than a new exploration of the underpinning mathematical and scientific principles. Engaging and entertaining though this is – and all credit to Darling for working in a reference to Van Halen in the opening lines of chapter 1 – it’s well-worn ground: Pythagorean tuning, the circle of fifths, equal temperament, Music of the Spheres (not the Coldplay album, mercifully), resonance, harmonics, etc. I found myself wishing, at times, for a take that felt a little more off the beaten track.

One case in point is Darling’s brief discussion of the theremin. If anything earns the title of “The Physicist’s Instrument”, it’s the theremin – a remarkable device that exploits the innate electrical capacitance of the human body to load a resonant circuit and thus produce an ethereal, haunting tone whose pitch can be varied, remarkably, without any physical contact.

While I give kudos to Darling for highlighting the theremin, the brevity of the description is arguably a lost opportunity when put in the broader context of the book’s aim to explain the deeper connections between music, maths and science. This could have been a novel and fascinating take on the links between electrical and musical resonance that went well beyond the familiar territory mapped out in standard physics-of-music texts.

Using the music of the eclectic Australian band King Gizzard and the Lizard Wizard to explain microtonality is nothing short of inspired

As the book progresses, however, Darling moves into more distinctive territory, choosing a variety of inventive examples that are often fascinating and never short of thought-provoking. I particularly enjoyed his description of orbital resonance in the system of seven planets orbiting the red dwarf TRAPPIST-1, 41 light-years from Earth. The ratios of successive orbital periods, when mapped to musical intervals, correspond to a minor sixth, a major sixth, two perfect fifths, a perfect fourth and another perfect fifth. And it’s got to be said that using the music of the eclectic Australian band King Gizzard and the Lizard Wizard to explain microtonality is nothing short of inspired.
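As a rough illustration of that mapping (using approximate published orbital periods for TRAPPIST-1 b–h, which are my assumed values rather than numbers quoted in the book), the ratio of each adjacent pair of periods can be compared with just-intonation intervals:

```python
# Approximate TRAPPIST-1 orbital periods in days (planets b through h); illustrative values only
periods = [1.51, 2.42, 4.05, 6.10, 9.21, 12.35, 18.77]

# Just-intonation ratios for a few musical intervals
intervals = {"perfect fourth": 4/3, "perfect fifth": 3/2,
             "minor sixth": 8/5, "major sixth": 5/3}

for inner, outer in zip(periods, periods[1:]):
    ratio = outer / inner
    # Pick the interval whose ratio lies closest to the period ratio
    name = min(intervals, key=lambda k: abs(intervals[k] - ratio))
    print(f"{outer:6.2f} / {inner:6.2f} = {ratio:.2f}  ->  ~{name}")
```

Run as written, this reproduces the sequence quoted above: minor sixth, major sixth, two perfect fifths, a perfect fourth and another perfect fifth.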

A Perfect Harmony doesn’t entirely close the cultural gap highlighted by Snow all those years ago, but it does hum along pleasantly in the space between. Though the subject matter occasionally echoes well-trodden themes, Darling’s perspective and enthusiasm lend it freshness. There’s plenty here to enjoy, especially for physicists inclined to tune into the harmonies of the universe.

  • 2025 Oneworld Publications 288pp £10.99pb/£6.99ebook

The post Harmonious connections: bridging the gap between music and science appeared first on Physics World.

New analysis of M67 cluster helps decode the sound of stars

3 juin 2025 à 16:00

Stars are cosmic musical instruments: they vibrate with complex patterns that echo through their interiors. These vibrations, known as pressure waves, ripple through the star much like the seismic waves from earthquakes that shake our planet. The frequencies of these waves hold information about the star’s mass, age and internal structure.

In a study led by researchers at UNSW Sydney, Australia, astronomer Claudia Reyes and colleagues “listened” to the sound from stars in the M67 cluster and discovered a surprising feature: a plateau in their frequency pattern. This plateau appears during the subgiant and red giant phases, when stars expand and evolve after exhausting the hydrogen fuel in their cores. This feature, reported in Nature, reveals how deep the outer layers of the star have pushed into the interior and offers a new diagnostic to improve mass and age estimates of stars beyond the main sequence (the core-hydrogen-burning phase).

How do stars create sound?

Beneath the surface of stars, hot gases are constantly rising, cooling and sinking back down, much like hot bubbles in boiling water. This constant churning is called convection. As these rising and sinking gas blobs collide or burst at the stellar surface, they generate pressure waves. These are essentially acoustic waves, bouncing within the stellar interior to create standing wave patterns.

Stars do not vibrate at just one frequency; they oscillate simultaneously at multiple frequencies, producing a spectrum of sounds. These acoustic oscillations cannot be heard in space directly, but are observed as tiny fluctuations in the star’s brightness over time.

M67 cluster as stellar laboratory

Star clusters offer an ideal environment in which to study stellar evolution as all stars in a cluster form from the same gas cloud at about the same time with the same chemical compositions but with different masses. The researchers investigated stars from the open cluster M67, as this cluster has a rich population of evolved stars including subgiants and red giants with a chemical composition similar to the Sun’s. They measured acoustic oscillations in 27 stars using data from NASA’s Kepler/K2 mission.

Stars oscillate across a range of tones, and in this study the researchers focused on two key features in this oscillation spectrum: large and small frequency separations. The large frequency separation, which probes stellar density, is the frequency difference between oscillations of the same angular degree (ℓ) but different radial orders (n). The small frequency separation refers to the frequency differences between modes of degrees ℓ and ℓ + 2 of consecutive radial orders n. For main sequence stars, small separations are reliable age indicators because their changes during hydrogen burning are well understood. In later stages of stellar evolution, however, their relationship to the stellar interior remained unclear.
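In standard asteroseismic notation (a conventional definition rather than anything specific to this study), the two separations are written as:

```latex
% Large separation (a density probe) and the small separation between l = 0 and l = 2 modes
\Delta\nu_{\ell} = \nu_{n+1,\ell} - \nu_{n,\ell},
\qquad
\delta\nu_{02} = \nu_{n,0} - \nu_{n-1,2}
```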

In 27 stars, Reyes and colleagues investigated the small separation between modes of degrees 0 and 2. Plotting a graph of small versus large frequency separations for each star, called a C–D diagram, they uncovered a surprising plateau in small frequency separations.

C–D diagrams for two M67 stars
A surprising feature C–D diagram showing different evolutionary stages of stars of mass 1 (left) and 1.7 solar masses (right) made from stellar models. Each point represents a specific stage in stellar evolution from the main sequence (A) to the red giant (F). The plateau seen from points F to E during the post-main-sequence phase reveals a transition in the stellar interior. (Courtesy: CC BY 4.0/C Reyes et al. Nature 10.1038/s41586-025-08760-2)

The researchers traced this plateau to the evolution of the lower boundary of the star’s convective envelope. As the envelope expands and cools, this boundary sinks deeper into the interior. Along this boundary, the density and sound speed change rapidly due to the difference in chemical composition on either side. These steep changes cause acoustic glitches that disturb how the pressure waves move through the star and temporarily stall the evolution of the small frequency separations, observed as a plateau in the frequency pattern.

This stalling occurs at a specific stage in stellar evolution – when the convective envelope deepens enough to encompass nearly 80% of the star’s mass. To confirm this connection, the researchers varied the amount of convective boundary mixing in their stellar models. They found that the depth of the envelope directly influenced both the timing and shape of the plateau in the small separations.

A new window on galactic history

This plateau serves as a new diagnostic tool to identify a specific evolutionary stage in red giant stars and improve estimates of their mass and age.

“The discovery of the ‘plateau’ frequencies is significant because it represents one more corroboration of the accuracy of our stellar models, as it shows how the turbulent regions at the bottom of a star’s envelope affect the sound speed,” explains Reyes, who is now at the Australian National University in Canberra. “They also have great potential to help determine with ease and great accuracy the mass and age of a star, which is of great interest for galactic archaeology, the study of the history of our galaxy.”

The sounds of starquakes offer a new window to study the evolution of stars and, in turn, recreate the history of our galaxy. Clusters like M67 serve as benchmarks to study and test stellar models and understand the future evolution of stars like our Sun.

“We plan to look for stars in the field which have very well-determined masses and which are in their ‘plateau’ phase,” says Reyes. “We will use these stars to benchmark the diagnostic potential of the plateau frequencies as a tool, so it can later be applied to stars all over the galaxy.”

The post New analysis of M67 cluster helps decode the sound of stars appeared first on Physics World.

Bury it, don’t burn it: turning biomass waste into a carbon solution

3 juin 2025 à 12:00

If a tree fell in a forest almost 4000 years ago, did it make a sound? Well, in the case of an Eastern red cedar in what is now Quebec, Canada, it’s certainly still making noise today.

That’s because in 2013, a team of scientists were digging a trench when they came across the 3775-year-old log. Despite being buried for nearly four millennia, the wood wasn’t rotten and useless. In fact, recent analysis unearthed an entirely different story.

The team, led by atmospheric scientist Ning Zeng of the University of Maryland in the US, found that the wood had only lost 5% of its carbon compared with a freshly cut Eastern red cedar log. “The wood is nice and solid – you could probably make a piece of furniture out of it,” says Zeng. The log had been preserved in such remarkable shape because the clay soil it was buried in was highly impermeable. That limited the amount of oxygen and water reaching the wood, suppressing the activity of micro-organisms that would otherwise have made it decompose.

Asian man in an office holding an ancient wooden log
Fortified and ancient Ning Zeng and colleagues discovered this 3775-year-old preserved log while conducting a biomass burial pilot project in Quebec, Canada. (Courtesy: Mark Sherwood)

This ancient log is a compelling example of “biomass burial”. When plants decompose or are burnt, they release the carbon dioxide (CO2) they had absorbed from the atmosphere. One idea to prevent this CO2 being released back into the atmosphere is to bury the waste biomass under conditions that prevent or slow decomposition, thereby trapping the carbon underground for centuries.

In fact, Zeng and his colleagues discovered the cedar log while they were digging a huge trench to bury 35 tonnes of wood to test this very idea. Nine years later, when they dug up some samples, they found that the wood had barely decomposed. Further analysis suggested that if the logs had been left buried for a century, they would still hold 97% of the carbon that was present when they were felled.

Digging holes

To combat climate change, there is often much discussion about how to remove carbon from the atmosphere. As well as conventional techniques like restoring peatland and replanting forests, there are a variety of more technical methods being developed (figure 1). These include direct air capture (DAC) and ocean alkalinity enhancement, which involves tweaking the chemistry of oceans so that they absorb more CO2. But some scientists – like Sinéad Crotty, a managing director at the Carbon Containment Lab in Connecticut, US – think that biomass burial could be a simpler and cheaper way to sequester carbon.

1 Ready or not

Diagram showing a list of 15 methods of carbon removal
(Adapted from Smith et al. (2024) State of Carbon Dioxide Removal – Edition 2. DOI:10.17605/OSF.IO/F85QJ)

There are multiple methods being developed for capturing, converting and storing carbon dioxide (CO2), each at different stages of readiness for deployment, with varying removal capabilities and storage durability timescales.

This figure – adapted from the State of Carbon Dioxide Removal report – shows methods that are already deployed or analysed in research literature. They are categorized as either “conventional”, processes that are widely established and deployed at scale; or “novel”, those that are at a lower level of readiness and therefore only used on smaller scales. The figure also rates their Technology Readiness Level (TRL), maximum mitigation potential (how many gigatonnes (10^9 tonnes) of CO2 can be sequestered per year), and storage timescale.

The report defines each technique as follows:

  • Afforestation – Conversion to forest of land that was previously not forest.
  • Reforestation – Conversion to forest of land that was previously deforested.
  • Agroforestry – Growing trees on agricultural land while maintaining agricultural production.
  • Forest management – Stewardship and use of existing forests. To count as carbon dioxide removal (CDR), forest management practices must enhance the long-term average carbon stock in the forest system.
  • Peatland and coastal wetland restoration – Assisted recovery of inland ecosystems that are permanently or seasonally flooded or saturated by water (such as peatlands) and of coastal ecosystems (such as tidal marshes, mangroves and seagrass meadows). To count as CDR, this recovery must lead to a durable increase in the carbon content of these systems.
  • Durable wood products – Wood products which meet a given threshold of durability, typically used in construction. These can include sawn wood, wood panels and composite beams, but exclude less durable products such as paper.
  • Biochar – Relatively stable, carbon-rich material produced by heating biomass in an oxygen-limited environment. Assumed to be applied as a soil amendment unless otherwise stated.
  • Mineral products – Production of solid carbonate materials for use in products such as aggregates, asphalt, cement and concrete, using CO2 captured from the atmosphere.
  • Enhanced rock weathering – Increasing the natural rate of removal of CO2 from the atmosphere by applying crushed rocks, rich in calcium and magnesium, to soil or beaches.
  • Biomass burial – Burial of biomass in land sites such as soils or exhausted mines. Excludes storage in the typical geological formations associated with carbon capture and storage (CCS).
  • Bio-oil storage – Oil made by biomass conversion and placed into geological storage.
  • Bioenergy with carbon capture and storage – Process by which biogenic CO2 is captured from a bioenergy facility, with subsequent geological storage.
  • Direct air carbon capture and storage – Chemical process by which CO2 is captured from the ambient air, with subsequent geological storage.
  • Ocean fertilization – Enhancement of nutrient supply to the near-surface ocean with the aim of sequestering additional CO2 from the atmosphere stimulated through biological production. Methods include direct addition of micro-nutrients or macro-nutrients. To count as CDR, the biomass must reach the deep ocean where the carbon has the potential to be sequestered durably.
  • Ocean alkalinity enhancement – Spreading of alkaline materials on the ocean surface to increase the alkalinity of the water and thus increase ocean CO2 uptake.
  • Biomass sinking – Sinking of terrestrial (e.g. straw) or marine (e.g. macroalgae) biomass in the marine environment. To count as CDR, the biomass must reach the deep ocean where the carbon has the potential to be sequestered durably.
  • Direct ocean carbon capture and storage – Chemical process by which CO2 is captured directly from seawater, with subsequent geological storage. To count as CDR, this capture must lead to increased ocean CO2 uptake.

The 3775-year-old log shows that carbon can be stored for centuries underground, but the wood has to be buried under specific conditions. “People tend to think, ‘Who doesn’t know how to dig a hole and bury some wood?’” Zeng says. “But think about how many wooden coffins were buried in human history. How many of them survived? For a timescale of hundreds or thousands of years, we need the right conditions.”

The key for scientists seeking to test biomass burial is to create dry, low-oxygen environments, similar to those in the Quebec clay soil. Last year, for example, Crotty and her colleagues dug more than 100 pits at a site in Colorado, in the US, filled them with woody material and then covered them up again. In five years’ time they plan to dig the biomass back out of the pits to see how much it has decomposed.

The pits vary in depth, and have been refilled and packed in different ways, to test how their build impacts carbon storage. The researchers will also be calculating the carbon emissions of processes such as transporting and burying the biomass – including the amount of carbon released from the soil when the pits are dug. “What we are trying to do here is build an understanding of what works and what doesn’t, but also how we can measure, report and verify that what we are doing is truly carbon negative,” Crotty says.

Over the next five years the team will continuously measure surface CO2 and methane fluxes from several of the pits, while every pit will have its CO2 and methane emissions measured monthly. There are also moisture sensors and oxygen probes buried in the pits, plus a full weather station on the site.

Crotty says that all this data will allow them to assess how different depths, packing styles and the local environment alter conditions in the chambers. When the samples are excavated in five years, the researchers will also explore what types of decomposition the burial did and did not suppress. This will include tests to identify different fungal and bacterial signatures, to uncover the micro-organisms involved in any decay.

The big questions

Experiments like Crotty’s will help answer one of the key concerns about terrestrial storage of biomass: how long can the carbon be stored?

In 2023 a team led by Lawrence Livermore National Laboratory (LLNL) did a large-scale analysis of the potential for CO2 removal in the US. The resulting Road to Removal report outlined how CO2 removal could be used to help the US achieve its net zero goals (these have since been revoked by the Trump administration), focusing on techniques like direct air capture (DAC), increasing carbon uptake in forests and agricultural lands, and converting waste biomass into fuels and CO2.

The report did not, however, look at biomass burial. One of the report authors, Sarah Baker – an expert in decarbonization and CO2 removal at LLNL – told Physics World that this was because of a lack of evidence around the durability of the carbon stored. The report’s minimum requirement for carbon storage was at least 100 years, and there were not enough data available to show how much carbon stored in biomass would remain after that period, Baker explains.

The US Department of Energy is also working to address this question. It has funded a set of projects, which Baker is involved with, to bridge some of the knowledge gaps on carbon-removal pathways. This includes one led by the National Renewable Energy Lab, measuring how long carbon in buried biomass remains stored under different conditions.

Bury the problem

Crotty’s Colorado experiment is also addressing another question: are all forms of biomass equally appropriate for burial? To test this, Crotty’s team filled the pits with a range of woody materials, including different types of wood and wood chip as well as compressed wood, and “slash” – small branches, leaves, bark and other debris created by logging and other forestry work.

Indeed, Crotty and her colleagues see biomass storage as crucial for those managing our forests. The western US states, in particular, have seen an increased risk of wildfires through a mix of climate change and aggressive fire-suppression policies that do not allow smaller fires to burn and thereby produce overgrown forests. “This has led to a build-up of fuels across the landscape,” Crotty says. “So, in a forest that would typically have a high number of low-severity fires, it’s changed the fire regime into a very high-intensity one.”

These concerns led the US Forest Service to announce a 10-year wildfire crisis plan in 2022 that seeks to reduce the risk of fires by thinning and clearing 50 million acres of forest land, in addition to 20 million acres already slated for treatment. But this creates a new problem.

“There are currently very few markets for the types of residues that need to come out of these forests – it is usually small-diameter, low-value timber,” explains Crotty. “They typically can’t pay their way out of the forests, so business as usual in many areas is to simply put them in a pile and burn them.”

Large pile of wood burning in snowy landscape at edge of forest
Cheap but costly Typically, waste biomass from forest management is burnt, like this pile of slash at the edge of Coconino National Forest in Arizona – but doing so releases carbon dioxide. (Courtesy: Josh Goldstein/Coconino National Forest)

A recent study Crotty co-authored suggests that every year “pile burning” in US National Forests emits greenhouse gases equivalent to almost two million tonnes of CO2, and more than 11 million tonnes of fine particulate matter – air pollution that is linked to a range of health problems. Conservative estimates by the Carbon Containment Lab indicate that the material scheduled for clearance under the Forest Service’s 10-year crisis plan will contain around two gigatonnes (Gt) of CO2 equivalents. This is around 5% of current annual global CO2 emissions.

There are also cost implications. Crotty’s recent analysis found that piling and burning forest residue costs around $700 to $1300 per acre. By adding value to the carbon in the forest residues and keeping it out of the atmosphere, biomass storage may offer a solution to these issues, Crotty says.

As an incentive to remove carbon from the atmosphere, trading mechanisms exist whereby individuals, companies and governments can buy and sell carbon emissions. In essence, carbon has a price attached to it, meaning that someone who has emitted too much, say, can pay someone else to capture and store the equivalent amount of emissions, with an often-touted figure being $100 per tonne of CO2 stored. For a long time, this has been seen as the price at which carbon capture becomes affordable, enabling scale up to the volumes needed to tackle climate change.

“There is only so much capital that we will ever deploy towards [carbon removal] and thus the cheaper the solution, the more credits we’ll be able to generate, the more carbon we will be able to remove from the atmosphere,” explains Justin Freiberg, a managing director of the Carbon Containment Lab. “$100 is relatively arbitrary, but it is important to have a target and aim low on pricing for high quality credits.”

DAC has not managed to reach this magical price point. Indeed, the Swiss firm Climeworks – which is one of the biggest DAC companies – has stated that its costs might be around $300 per tonne by 2030.

A tomb in a mine

Another carbon-removal company, however, claims it has hit this benchmark using biomass burial. “We’re selling our first credits at $100 per tonne,” says Hannah Murnen, chief technology officer at Graphyte – a US firm backed by Bill Gates.

Graphyte is confident that there is significant potential in biomass burial. Based in Pine Bluff, Arkansas, the firm dries and compresses waste biomass into blocks before storage. “We dry it to below a level at which life can exist,” says Murnen, which effectively halts decomposition.

The company claims that it will soon be storing 50,000 tonnes of CO2 per year and is aiming for five million tonnes per year by 2030. Murnen acknowledges that these are “really significant figures”, particularly compared with what has been achieved in carbon capture so far. Nevertheless, she adds, if you look at the targets around carbon capture “this is the type of scale we need to get to”.

The need for carbon capture

The Intergovernmental Panel on Climate Change says that carbon capture is essential to limit global warming to 1.5 °C above pre-industrial levels.

To stay within the Paris Agreement’s climate targets, the 2024 State of Carbon Dioxide Removal report estimated that 7–9 gigatonnes (Gt) of CO2 removal will be needed annually by 2050. According to the report – which was put together by multiple institutions, led by the University of Oxford – currently two billion tonnes of CO2 are being removed per year, mostly through “conventional” methods like tree planting and wetland restoration. “Novel” methods – such as direct air capture (DAC), bioenergy with carbon capture, and ocean alkalinity enhancement – contribute 1.3 million tonnes of CO2 removal per year, less than 0.1% of the total.

Graphyte is currently working with sawmill residue and rice hulls, but in the future Murnen says it plans to accept all sorts of biomass waste. “One of the great things about biomass for the purpose of carbon removal is that, because we are not doing any sort of chemical transformation on the biomass, we’re very flexible to the type of biomass,” Murnen adds.

And there appears to be plenty available. Estimates by researchers in the UK and India (NPJ Climate and Atmospheric Science 2 35) suggest that every year around 140 Gt of biomass waste is generated globally from forestry and agriculture. Around two-thirds of the agricultural residues are from cereals, like wheat, rice, barley and oats, while sugarcane stems and leaves are the second largest contributors. The rest is made up of things like leaves, roots, peels and shells from other crops. Like forest residues, much of this waste ends up being burnt or left to rot, releasing its carbon.

Currently, Graphyte has one storage site about 30 km from Pine Bluff, where its compressed biomass blocks are stored underground, enclosed in an impermeable layer that prevents water ingress. “We took what used to be an old gravel mine – so basically a big hole in the ground – and we’ve created a lined storage tomb where we are placing the biomass and then sealing it closed,” says Murnen.

Large quarry-like area with hundreds of black blocks stacked in rows and large plant machinery moving more blocks around
Big hole in the ground Graphyte is using an old gravel mine 30 km from Pine Bluff in Arkansas to store its compressed biomass bricks. (Courtesy: Graphyte)

Once sealed, Graphyte monitors the CO2 and methane concentrations in the headspace of the vaults, to check for any decomposition of the biomass. The company also analyses biomass as it enters the facility, to track how much carbon it is storing. Wood residues, like sawmill waste, are generally around 50% carbon, says Murnen, but rice hulls are closer to 35% carbon.
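Those carbon fractions translate into CO2-equivalent storage via the molar-mass ratio of CO2 to carbon (44/12 ≈ 3.67). The snippet below is a simple illustration of that arithmetic, not a calculation published by Graphyte:

```python
# Convert stored carbon mass to CO2-equivalent using the molar-mass ratio 44/12
CO2_PER_C = 44.01 / 12.01  # ~3.67 tonnes of CO2 per tonne of carbon

def co2_equivalent(dry_biomass_tonnes, carbon_fraction):
    """CO2-equivalent (tonnes) locked up in a given mass of dry biomass."""
    return dry_biomass_tonnes * carbon_fraction * CO2_PER_C

print(co2_equivalent(1.0, 0.50))  # sawmill residue, ~50% carbon -> roughly 1.8 t CO2e
print(co2_equivalent(1.0, 0.35))  # rice hulls, ~35% carbon      -> roughly 1.3 t CO2e
```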

Graphyte is confident that its storage is physically robust and could avoid any re-emission for what Murnen calls “a very long period of time”. However, it is also exploring how to prevent accidental disturbance of the biomass in the future – possibly long after the company ceases to exist. One option is to add a conservation easement to the site, a well-established US legal mechanism for adding long-term protection to land.

“We feel pretty strongly that the way we are approaching [carbon removal] is one of the most scalable ways,” Murnen says. “In as far as impediments or barriers to scale, we have a much easier permitting pathway, we don’t need pipelines, we are pretty flexible on the type of land that we can use for our storage sites, and we have a huge variety of feedstocks that we can take into the process.”

A simple solution

Back at LLNL, Baker says that although she hasn’t “run the numbers”, and there are a lot of caveats, she suspects that biomass burial is “true carbon removal because it is so simple”.

Once associated upstream and downstream emissions are taken into account, many techniques that people call carbon removal are probably not, she says, because they emit more fossil CO2 than they store.

Biomass burial is also cheap. As the Road to Removal analysis found, “thermal chemical” techniques, like pyrolysis, have great potential for removing and storing carbon while converting biomass into hydrogen and sustainable aviation fuel. But they require huge investment, with larger facilities potentially costing hundreds of millions of dollars. Biomass burial could even act as temporary storage until facilities are ready to convert the carbon into sustainable fuels. “Buy ourselves time and then use it later,” says Baker.

Either way, biomass burial has great potential for the future of carbon storage, and therefore our environment. “The sooner we can start doing these things the greater the climate impact,” Baker says.

We just need to know that the storage is durable – and if that 3775-year-old log is any indication, there’s the potential to store biomass for hundreds, maybe thousands of years.

The post Bury it, don’t burn it: turning biomass waste into a carbon solution appeared first on Physics World.

Wireless e-tattoos help manage mental workload

3 juin 2025 à 10:00

Managing one’s mental workload is a tricky balancing act that can affect cognitive performance and decision-making abilities. Too little engagement with an ongoing task can lead to boredom and mistakes; too much can leave a person overwhelmed.

For those performing safety-critical tasks, such as air traffic controllers or truck drivers, monitoring how hard their brain is working is even more important – lapses in focus could have serious consequences. But how can a person’s mental workload be assessed? A team at the University of Texas at Austin proposes the use of temporary face tattoos that can track when a person’s brain is working too hard.

“Technology is developing faster than human evolution. Our brain capacity cannot keep up and can easily get overloaded,” says lead author Nanshu Lu in a press statement. “There is an optimal mental workload for optimal performance, which differs from person to person.”

The traditional approach for monitoring mental workload is electroencephalography (EEG), which analyses the brain’s electrical activity. But EEG devices are wired, bulky and uncomfortable, making them impractical for real-world situations. Measurements of eye movements using electrooculography (EOG) are another option for assessing mental workload.

Lu and colleagues have developed an ultrathin wireless e-tattoo that records high-fidelity EEG and EOG signals from the forehead. The e-tattoo combines a disposable sticker-like electrode layer and a reusable battery-powered flexible printed circuit (FPC) for data acquisition and wireless transmission.

The serpentine-shaped electrodes and interconnects are made from low-cost, conductive graphite-deposited polyurethane, coated with an adhesive polymer composite to reduce contact impedance and improve skin attachment. The e-tattoo stretches and conforms to the skin, providing reliable signal acquisition, even during dynamic activities such as walking and running.

To assess the e-tattoo’s ability to record basic neural activities, the team used it to measure alpha brainwaves as a volunteer opened and closed their eyes. The e-tattoo captured equivalent neural spectra to that recorded by a commercial gel electrode-based EEG system with comparable signal fidelity.

The researchers next tested the e-tattoo on six participants while they performed a visuospatial memory task that gradually increased in difficulty. They analysed the signals collected by the e-tattoo during the tasks, extracting EEG band powers for delta, theta, alpha, beta and gamma brainwaves, plus various EOG features.

As the task got more difficult, the participants showed higher activity in the theta and delta bands, a feature associated with increased cognitive demand. Meanwhile, activity in the alpha and beta bands decreased, indicating mental fatigue.

The researchers built a machine learning model to predict the level of mental workload experienced during the tasks, training it on forehead EEG and EOG features recorded by the e-tattoo. The model could reliably estimate mental workload in each of the six subjects, demonstrating the feasibility of real-time cognitive state decoding.
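A minimal sketch of that kind of pipeline is shown below, assuming generic sampled EEG data, a placeholder sampling rate and band edges, synthetic signals and a logistic-regression classifier; the actual device firmware, features and model in the study are not reproduced here:

```python
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

FS = 250  # assumed sampling rate in Hz (placeholder)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(eeg_segment):
    """Integrated power in each canonical EEG band for one channel segment."""
    freqs, psd = welch(eeg_segment, fs=FS, nperseg=FS * 2)
    powers = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        powers.append(np.trapz(psd[mask], freqs[mask]))
    return powers

# Toy example: random segments stand in for recorded EEG epochs
rng = np.random.default_rng(0)
X = np.array([band_powers(rng.standard_normal(FS * 10)) for _ in range(40)])
y = rng.integers(0, 2, size=40)  # 0 = low workload, 1 = high workload (placeholder labels)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(X[:5]))
```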

“Our key innovation lies in the successful decoding of mental workload using a wireless, low-power, low-noise and ultrathin EEG/EOG e-tattoo device,” the researchers write. “It addresses the unique challenges of monitoring forehead EEG and EOG, where wearability, non-obstructiveness and signal stability are critical to assessing mental workload in the real world.”

They suggest that future applications could include real-time cognitive load monitoring in pilots, operators and healthcare professionals. “We’ve long monitored workers’ physical health, tracking injuries and muscle strain,” says co-author Luis Sentis. “Now we have the ability to monitor mental strain, which hasn’t been tracked. This could fundamentally change how organizations ensure the overall well-being of their workforce.”

The e-tattoo is described in Device.

The post Wireless e-tattoos help manage mental workload appeared first on Physics World.

Andromeda galaxy may not collide with the Milky Way after all

2 juin 2025 à 17:00

Since 1912, we’ve known that the Andromeda galaxy is racing towards our own Milky Way at about 110 kilometres per second. A century later, in 2012, astrophysicists at the Space Telescope Science Institute (STScI) in Maryland, US came to a striking conclusion. In four billion years, they predicted, a collision between the two galaxies was a sure thing.
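A crude constant-velocity estimate – taking the well-known distance of roughly 2.5 million light-years and ignoring both the gravitational acceleration that speeds up the approach and any transverse motion – shows why detailed modelling is needed to arrive at a figure like four billion years:

```python
# Naive time-to-encounter at constant radial speed; gravity in fact shortens this considerably
LY_KM = 9.461e12               # kilometres per light-year
distance_km = 2.5e6 * LY_KM    # ~2.5 million light-years to Andromeda (approximate)
speed_km_s = 110               # radial approach speed quoted above

years = (distance_km / speed_km_s) / 3.156e7
print(f"~{years / 1e9:.1f} billion years at constant speed")  # ~6.8 billion years
```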

Now, it’s not looking so sure.

Using the latest data from the European Space Agency’s Gaia astrometric mission, astrophysicists led by Till Sawala of the University of Helsinki, Finland re-modelled the impending crash, and found that it’s 50/50 as to whether a collision happens or not.

This new result differs from the 2012 one because it considers the gravitational effect of an additional galaxy, the Large Magellanic Cloud (LMC), alongside the Milky Way, Andromeda and the nearby Triangulum spiral galaxy, M33. While M33’s gravity, in effect, adds to Andromeda’s motion towards us, Sawala and colleagues found that the LMC’s gravity tends to pull the Milky Way out of Andromeda’s path.

“We’re not predicting that the merger is not going to happen within 10 billion years, we’re just saying that from the data we have now, we can’t be certain of it,” Sawala tells Physics World.

“A step in the right direction”

While the LMC contains only around 10% of the Milky Way’s mass, Sawala and colleagues’ work indicates that it may nevertheless be massive enough to turn a head-on collision into a near-miss. Incorporating its gravitational effects into simulations is therefore “a step in the right direction”, says Sangmo Tony Sohn, a support scientist at the STScI and a co-author of the 2012 paper that predicted a collision.

Even with more detailed simulations, though, uncertainties in the motion and masses of the galaxies leave room for a range of possible outcomes. According to Sawala, the uncertainty with the greatest effect on merger probability lies in the so-called “proper motion” of Andromeda – its apparent motion across our night sky. This proper motion traces the transverse component of the two galaxies’ relative velocity, the part perpendicular to Andromeda’s well-measured radial motion towards the centre of the Milky Way.

If the combined transverse motion is large enough, Andromeda will pass the Milky Way at a distance greater than 200 kiloparsecs (652,000 light years). This would avert a collision in the next 10 billion years, because even when the two galaxies loop back on each other, their next pass would still be too distant, according to the models.

Conversely, a smaller transverse motion would limit the distance at closest approach to less than 200 kiloparsecs. If that happens, Sawala says the two galaxies are “almost certain to merge” because of the dynamical friction effect, which arises from the diffuse halo of old stars and dark matter around galaxies. When two galaxies get close enough, these haloes begin interacting with each other, generating tidal and frictional heating that robs the galaxies of orbital energy and makes them fall ever closer.

The LMC itself is an excellent example of how this works. “The LMC is already so close to the Milky Way that it is losing its orbital energy, and unlike [Andromeda], it is guaranteed to merge with the Milky Way,” Sawala says, adding that, similarly, M33 stands a good chance of merging with Andromeda.

“A very delicate task”

Because Andromeda is 2.5 million light years away, its proper motion is very hard to measure. Indeed, no-one had ever done it until the STScI team spent 10 years monitoring the galaxy, which is also known as M31, with the Hubble Space Telescope – something Sohn describes as “a very delicate task” that continues to this day.

Another area where there is some ambiguity is in the mass estimate of the LMC. “If the LMC is a little more massive [than we think], then it pulls the Milky Way off the collision course with M31 a little more strongly, reducing the possibility of a merger between the Milky Way and M31,” Sawala explains.

The good news is that these ambiguities won’t be around forever. Sohn and his team are currently analysing new Hubble data to provide fresh constraints on the Milky Way’s orbital trajectory, and he says their results have been consistent with the Gaia analyses so far. Sawala agrees that new data will help reduce uncertainties. “There’s a good chance that we’ll know more about what is going to happen fairly soon, within five years,” he says.

Even if the Milky Way and Andromeda don’t collide in the next 10 billion years, though, that won’t be the end of the story. “I would expect that there is a very high probability that they will eventually merge, but that could take tens of billions of years,” Sawala says.

The research is published in Nature Astronomy.

The post Andromeda galaxy may not collide with the Milky Way after all appeared first on Physics World.

Ask me anything: Tom Woodroof – ‘Curiosity, self-education and carefully-chosen guidance can get you surprisingly far’

2 juin 2025 à 09:50

What skills do you use every day in your job?

I co-founded Mutual Credit Services in 2020 to help small businesses thrive independently of the banking sector. As a financial technology start-up, we’re essentially trying to create a “commons” economy, where power lies in the hands of people, not big institutions, thereby making us more resilient to the climate crisis.

Those goals are probably as insanely ambitious as they sound, which is why my day-to-day work is a mix of complexity economics, monetary theory and economic anthropology. I spend a lot of time thinking hard about how these ideas fit together, before building new tech platforms, apps and services, which requires analytical and design thinking.

There are still many open questions about business, finance and economics that I’d like to do research on, and ultimately develop into new services. I’m constantly learning through trial projects and building a pipeline of ideas for future exploration.

Developing the business involves a lot of decision-making, project management and team-building. In fact, I’m spending more and more of my time on commercialization – working out how to bring new services to market, nurturing partnerships and talking to potential early adopters. It’s vital that I can explain novel financial ideas to small businesses in a way they can understand and have confidence in. So I’m always looking for simpler and more compelling ways to describe what we do.

What do you like best and least about your job?

What I like best is the variety and creativity. I’m a generalist by nature, and love using insights from a variety of disciplines. The practical application of these ideas to create a better economy feels profoundly meaningful, and something that I’d be unlikely to get in any other job. I also love the autonomy of running a business. With a small but hugely talented and enthusiastic team, we’ve so far managed to avoid the company becoming rigid and institutionalized. It’s great to work with people on our team and beyond who are excited by what we’re doing, and want to be involved.

The hardest thing is facing the omnicrisis of climate breakdown and likely societal collapse that makes this work necessary in the first place. As with all start-ups, the risk of failure is huge, no matter how good the ideas are, and it’s frustrating to spend so much time on tasks that just keep things afloat, rather than move the mission forward. I work long hours and the job can be stressful.

What do you know today, that you wish you knew when you were starting out in your career?

I spent a lot of time during my PhD at Liverpool worrying that I’d get trapped in one narrow field, or drift into one of the many default career options. I wish I’d known how many opportunities there are to do original, meaningful and self-directed work – especially if you’re open to unconventional paths, such as the one I’ve followed, and can find the right people to do it with.

It’s also easy to assume that certain skills or fields are out of reach, whereas I’ve found again and again that a mix of curiosity, self-education and carefully-chosen guidance can get you surprisingly far. Many things that once seemed intimidating now feel totally manageable. That said, I’ve also learned that everything takes at least three times longer than expected – especially when you’re building something new. Progress often looks like small compounding steps, rather than a handful of breakthroughs.

The post Ask me anything: Tom Woodroof – ‘Curiosity, self-education and carefully-chosen guidance can get you surprisingly far’ appeared first on Physics World.

Thinking of switching research fields? Beware the citation ‘pivot penalty’ revealed by new study

2 juin 2025 à 14:00

Scientists who switch research fields suffer a drop in the impact of their new work – a so-called “pivot penalty”. That is according to a new analysis of scientific papers and patents, which finds that the pivot penalty increases the further away a researcher shifts from their previous topic of research.

The analysis has been carried out by a team led by Dashun Wang and Benjamin Jones of Northwestern University in Illinois. They analysed more than 25 million scientific papers published between 1970 and 2015 across 154 fields as well as 1.7 million US patents across 127 technology classes granted between 1985 and 2020.

To identify pivots and quantify how far a scientist moves from their existing work, the team looked at the scientific journals referenced in a paper and compared them with those cited by previous work. The more the set of journals referenced in the main work diverged from those usually cited, the larger the pivot. For patents, the researchers used “technological field codes” to measure pivots.
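The team’s published metric is based on comparing reference-journal profiles; as a loose, illustrative proxy only (not the measure used in the paper), one could score a pivot as one minus the Jaccard overlap between the journals cited in a new paper and those cited in an author’s previous work:

```python
def pivot_score(new_refs, past_refs):
    """Illustrative proxy: 1 - Jaccard overlap of cited-journal sets (0 = no pivot, 1 = complete pivot)."""
    new_set, past_set = set(new_refs), set(past_refs)
    overlap = len(new_set & past_set) / len(new_set | past_set)
    return 1 - overlap

# Hypothetical example journals, chosen only to show the calculation
past = ["Phys. Rev. B", "Nature Materials", "Appl. Phys. Lett."]
new = ["Nature Materials", "J. Clin. Oncol.", "The Lancet"]
print(pivot_score(new, past))  # 0.8 -> a large pivot under this toy measure
```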

Larger pivots are associated with fewer citations and a lower propensity for high-impact papers, defined as those in the top 5% of citations received in their field and publication year. Low-pivot work – moving only slightly away from the typical field of research – led to a high-impact paper 7.4% of the time, yet the highest-pivot shift resulted in a high-impact paper only 2.2% of the time. A similar trend was seen for patents.

When looking at the output of an individual researcher, low-pivot work was 2.1% more likely to have a high-impact paper while high-pivot work was 1.8% less likely to do so. The study found the pivot penalty to be almost universal across scientific fields and it persists regardless of a scientist’s career stage, productivity and collaborations.

COVID impact

The researchers also studied the impact of COVID-19, when many researchers pivoted to research linked to the pandemic. After analysing 83,000 COVID-19 papers and 2.63 million non-COVID papers published in 2020, they found that COVID-19 research was not immune to the pivot penalty. Such research had a higher impact than average, but the further a scientist shifted from their previous work to study COVID-19, the less impact the research had.

“Shifting research directions appears both difficult and costly, at least initially, for individual researchers,” Wang told Physics World. He thinks, however, that researchers should not avoid change but rather “approach it strategically”. Researchers should, for example, try anchoring their new work in the conventions of their prior field or the one they are entering.

To help researchers pivot, Wang says research institutions should “acknowledge the friction” and not “assume that a promising researcher will thrive automatically after a pivot”. Instead, he says, institutions need to design support systems, such as funding or protected time to explore new ideas, or pairing researchers with established scholars in the new field.

The post Thinking of switching research fields? Beware the citation ‘pivot penalty’ revealed by new study appeared first on Physics World.

Laboratory-scale three-dimensional X-ray diffraction makes its debut

2 juin 2025 à 10:00

Trips to synchrotron facilities could become a thing of the past for some researchers thanks to a new laboratory-scale three-dimensional X-ray diffraction microscope designed by a team from the University of Michigan, US. The device, which is the first of its kind, uses a liquid-metal-jet anode to produce high-energy X-rays and can probe almost everything a traditional synchrotron can. It could therefore give a wider community of academic and industrial researchers access to synchrotron-style capabilities.

Synchrotrons are high-energy particle accelerators that produce bright, high-quality beams of coherent electromagnetic radiation at wavelengths ranging from the infrared to hard X-rays. To do this, they use powerful magnets to accelerate electrons in a storage ring, taking advantage of the fact that accelerated electrons emit electromagnetic radiation.

One application for this synchrotron radiation is a technique called three-dimensional X-ray diffraction (3DXRD) microscopy. This powerful technique enables scientists to study the mechanical behaviour of polycrystalline materials, and it works by constructing three-dimensional images of a sample from X-ray images taken at multiple angles, much as a CT scan images the human body. Instead of the imaging device rotating around a patient, however, it is the sample that rotates in the focus of the powerful X-ray beam.

At present, 3DXRD can only be performed at synchrotrons. These are national and international facilities, and scientists must apply for beamtime months or even years in advance. If successful, they receive a block of time lasting six days at the most, during which they must complete all their experiments.

A liquid-metal-jet anode

Previous attempts to make 3DXRD more accessible by downscaling it have largely been unsuccessful. In particular, efforts to produce high-energy X-rays using electrical anodes have foundered because these anodes are traditionally made of solid metal, which cannot withstand the extremely high electron-beam power needed to produce X-rays.

The new lab-scale device developed by mechanical engineer Ashley Bucsek and colleagues overcomes this problem thanks to a liquid-metal-jet anode that can absorb more power and therefore produce a greater number of X-ray photons per unit of anode surface area. The sample volume is illuminated by a monochromatic box- or line-focused X-ray beam while diffraction patterns are serially recorded as the sample rotates through a full circle. “The technique is capable of measuring the volume, position, orientation and strain of thousands of polycrystalline grains simultaneously,” Bucsek says.

When members of the Michigan team tested the device by imaging titanium alloy samples, they found it was as accurate as synchrotron-based 3DXRD, making it a practical alternative. “I conducted my PhD doing 3DXRD experiments at synchrotron user facilities, so having full-time access to a personal 3DXRD microscope was always a dream,” Bucsek says. “My colleagues and I hope that the adaptation of this technology from the synchrotron to the laboratory scale will make it more accessible.”

The design for the device, which is described in Nature Communications, was developed in collaboration with a US-based instrumentation firm, PROTO Manufacturing. Bucsek says she is excited by the possibility that commercialization will make 3DXRD more “turn-key” and thus reduce the need for specialized knowledge in the field.

The Michigan researchers now hope to use their instrument to perform experiments that must be carried out over long periods of time. “Conducting such prolonged experiments at synchrotron user facilities would be difficult, if not impossible, due to the high demand, so lab-3DXRD can fill a critical capability gap in this respect,” Bucsek tells Physics World.

The post Laboratory-scale three-dimensional X-ray diffraction makes its debut appeared first on Physics World.

Majorana bound states spotted in system of three quantum dots

31 May 2025 at 12:46

Firm evidence of Majorana bound states in quantum dots has been reported by researchers in the Netherlands. Majorana modes appeared at both edges of a quantum dot chain when an energy gap suppressed them in the centre, and the experiment could allow researchers to investigate the unique properties of these particles in hitherto unprecedented detail. This could bring topologically protected quantum bits (qubits) for quantum computing one step closer.

Majorana fermions were first proposed in 1937 by the Italian physicist Ettore Majorana. They were imagined as elementary particles that would be their own antiparticles. However, such elementary particles have never been definitively observed. Instead, physicists have worked to create Majorana quasiparticles (particle-like collective excitations) in condensed matter systems.

In 2001 the theoretical physicist Alexei Kitaev, then at Microsoft Research, proposed that “Majorana bound states” could be produced in nanowires made of topological superconductors. The Majorana quasiparticle would exist as a single nonlocal mode, with weight at both ends of the wire and none in the centre. Both ends would be constrained by the laws of physics to remain identical despite being spatially separated. This phenomenon could produce “topological qubits” robust to local disturbance.

Microsoft and others continue to research Majorana modes using this platform to this day.  Multiple groups claim to have observed them, but this remains controversial. “It’s still a matter of debate in these extended 1D systems: have people seen them? Have they not seen them?”, says Srijit Goswami of QuTech in Delft.

Controlling disorder

In 2012, theoretical physicists Jay Sau, then of Harvard University, and Sankar Das Sarma of the University of Maryland proposed looking for Majorana bound states in quantum dots. “We looked at [the nanowires] and thought ‘OK, this is going to be a while given the amount of disorder that system has – what are the ways this disorder could be controlled?’ and this is exactly one of the ways we thought it could work,” explains Sau. The research was not taken seriously at the time, however, Sau says, partly because people underestimated the problem of disorder.

Goswami and others have previously observed “poor man’s Majoranas” (PMMs) in two quantum dots. While they share some properties with Majorana modes, PMMs lack topological protection. Last year the group coupled two spin-polarized quantum dots via a semiconductor–superconductor hybrid material. At specific points, the researchers found zero-bias conductance peaks.

“Kitaev says that if you tune things exactly right you have one Majorana on one dot and another Majorana on another dot,” says Sau. “But if you’re slightly off then they’re talking to each other. So it’s an uncomfortable notion that they’re spatially separated if you just have two dots next to each other.”

Recently, a group that included Goswami’s colleagues at QuTech found that the introduction of a third quantum dot stabilized the Majorana modes. However, they were unable to measure the energy levels in the quantum dots.

Zero energy

In new work, Goswami’s team used systems of three electrostatically-gated, spin-polarized quantum dots in a 2D electron gas joined by hybrid semiconductor–superconductor regions. The quantum dots had to be tuned to zero energy. The dots exchanged charge in two ways: by standard electron hopping through the semiconductor and by Cooper-pair mediated coupling through the superconductor.

“You have to change the energy level of the superconductor–semiconductor hybrid region so that these two processes have equal probability,” explains Goswami. “Once you satisfy these conditions, then you get Majoranas at the ends.”
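
A toy calculation makes this tuning condition concrete. The sketch below is a minimal three-site Kitaev chain with made-up parameters, not the device’s full Hamiltonian: “t” stands for electron hopping through the semiconductor, “delta” for the Cooper-pair coupling through the superconductor and “mu” for the dot energy. When the two couplings are equal and the dots sit at zero energy, two zero-energy Majorana modes appear in the spectrum; they split once the system is detuned.

import numpy as np

def kitaev_bdg(n, t, delta, mu):
    # Bogoliubov-de Gennes matrix of an n-site Kitaev chain (toy model)
    h = -mu * np.eye(n)                           # dot energies
    d = np.zeros((n, n))
    for j in range(n - 1):
        h[j, j + 1] = h[j + 1, j] = -t            # electron hopping
        d[j + 1, j], d[j, j + 1] = delta, -delta  # antisymmetric pairing
    return np.block([[h, d], [-d, -h]])

E_sweet = np.linalg.eigvalsh(kitaev_bdg(3, t=1.0, delta=1.0, mu=0.0))
E_detuned = np.linalg.eigvalsh(kitaev_bdg(3, t=1.0, delta=1.0, mu=1.0))
print(np.round(E_sweet, 3))    # two zero-energy (Majorana) modes at the sweet spot
print(np.round(E_detuned, 3))  # dots pushed off zero energy: the zero modes split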

Besides increased topological protection, the addition of a third quantum dot provided the team with crucial physical insight. “Topology is actually a property of a bulk system,” he explains; “Something special happens in the bulk which gives rise to things happening at the edges. Majoranas are something that emerge on the edges because of something happening in the bulk.” With three quantum dots, there is a well-defined bulk and edge that can be probed separately: “We see that when you have what is called a gap in the bulk your Majoranas are protected, but if you don’t have that gap your Majoranas are not protected,” Goswami says.

To produce a qubit will require more work to achieve the controllable coupling of four Majorana bound states and the integration of a readout circuit to detect this coupling. In the near-term, the researchers are investigating other phenomena, such as the potential to swap Majorana bound states.

Sau is now at the University of Maryland and says that an important benefit of the experimental platform is that it can be determined unambiguously whether or not Majorana bound states have been observed. “You can literally put a theory simulation next to the experiment and they look very similar.”

The research is published in Nature.

The post Majorana bound states spotted in system of three quantum dots appeared first on Physics World.

Leinweber Foundation ploughs $90m into US theoretical physics

30 May 2025 at 19:30

The Leinweber Foundation has awarded five US institutions $90m to create their own theoretical research institutes. The investment, which the foundation says is the largest ever for theoretical physics research, will be used to fund graduate students and postdocs at each institute as well as several Leinweber Physics Fellows.

The Leinweber Foundation was founded in 2015 by the software entrepreneur Larry Leinweber. In 1982 Leinweber founded the software company New World Systems Corporation, which provided software to the emergency services. In 2015 he sold the company to Tyler Technologies for $670m.

Based in Michigan, the Leinweber Foundation supports research, education and community endeavours, and has provided Leinweber Software Scholarships to undergraduates at universities in Michigan.

A Leinweber Institute for Theoretical Physics (LITP) will now be created at the University of California, Berkeley; the University of Chicago; the University of Michigan; and the Massachusetts Institute of Technology (MIT). At the Institute for Advanced Study (IAS) in Princeton, the equivalent body will instead be named the Leinweber Forum for Theoretical and Quantum Physics.

The MIT LITP, initially led by Washington Taylor before physicist Tracy Slatyer takes over later this year, will receive $20m from the foundation and will provide support for six postdocs and six graduate students, as well as visitors, seminars and “other scholarly activities”.

“This landmark endowment from the Leinweber Foundation will enable us to support the best graduate students and postdoctoral researchers to develop their own independent research programmes and to connect with other researchers in the Leinweber Institute network,” says Taylor.

Spearing innovation

UC Berkeley, meanwhile, will receive $14.4m from the foundation, with the existing Berkeley Center for Theoretical Physics (BCTP) being renamed the LITP at Berkeley and led by physicist Yasunori Nomura.

The money will be used for four postdoc positions to join the existing 15 at the centre, as well as to support graduate students and visitors. “This is transformative,” notes Nomura. “The gift will really have a huge impact on a wide range of research at Berkeley, including particle physics, quantum gravity, quantum information, condensed matter physics and cosmology.”

Chicago will receive $18.4m, with the existing Kadanoff Center for Theoretical Physics being merged into a new LITP at the University of Chicago led by physicist Dam Thanh Son.

The remaining $37.2m will be split between the Leinweber Forum for Theoretical and Quantum Physics at the IAS and the University of Michigan, where the existing Leinweber Center for Theoretical Physics will expand and become an institute.

“Theoretical physics may seem abstract to many, but it is the tip of the spear for innovation. It fuels our understanding of how the world works and opens the door to new technologies that can shape society for generations,” says Leinweber in a statement. “As someone who has had a lifelong fascination with theoretical physics, I hope this investment not only strengthens U.S. leadership in basic science, but also inspires curiosity, creativity, and groundbreaking discoveries for generations to come.”

The post Leinweber Foundation ploughs $90m into US theoretical physics appeared first on Physics World.

China launches Tianwen-2 asteroid sample-return mission

30 May 2025 at 16:56

China has launched its first mission to retrieve samples from an asteroid. The Tianwen-2 mission launched at 01:31 a.m. local time on 28 May from the Xichang satellite launch centre, south-west China, aboard a Long March 3B rocket.

Tianwen-2’s target is a small near-Earth asteroid called 469219 Kamoʻoalewa, which is 15–39 million km away and is known as a “quasi-satellite” of Earth.

The mission is set to reach the body, which is 40–100 m wide, in July 2026. It will first study it up close using a suite of 11 instruments including cameras, spectrometers and radar, before aiming to collect about 100 g of material.

Sampling will be attempted via one of three possible methods. One is hovering close to the asteroid; another is using a robotic arm to collect samples from the body; while a third, dubbed “touch and go”, involves gently landing on the asteroid and using drills at the end of each leg to retrieve material.

The collected samples will then be stored in a module that is released and returned to Earth in November 2027. If successful, it will make China the third nation to retrieve asteroid material behind the US and Japan.

Next steps

The second part of the 10-year mission involves using Earth for a gravitational swing-by to spend six years travelling to another target – 311P/PanSTARRS. The body lies in the main asteroid belt between Mars and Jupiter and at its closest distance is about 140 million km away from Earth.

The 480 m-wide object, which was discovered in 2013, has six dust tails and has characteristics of both asteroids and comets. Tianwen-2 will not land on 311P/PanSTARRS but instead use its instruments to study the “active asteroid” from a distance.

Tianwen-2’s predecessor, Tianwen-1, was China’s first mission to Mars, successfully landing on Utopia Planitia – a largely flat impact basin that is scientifically interesting for the water-ice potentially lying beneath it – following a six-month journey.

China’s third interplanetary mission, Tianwen-3, will aim to retrieve samples from Mars and could launch as soon as 2028. If successful, it would make China the first country to achieve the feat.

The post China launches Tianwen-2 asteroid sample-return mission appeared first on Physics World.

Ancient woodworking technique inspires improved memristor

30 May 2025 at 16:00

Researchers in China have adapted the interlocking structure of mortise-and-tenon joints – as used by woodworkers around the world since ancient times – to the design of nanoscale devices known as memristors. The new devices are far more uniform than previous such structures, and the researchers say they could be ideal for scientific computing applications.

The memory-resistor, or “memristor”, was described theoretically at the beginning of the 1970s, but the first practical version was not built until 2008. Unlike standard resistors, the resistance of a memristor changes depending on the current previously applied to it, hence the “memory” in its name. This means that a desired resistance can be programmed into the device and subsequently stored. Importantly, the remembered value of the resistive state persists even when the power is switched off.

Thanks to numerous technical advances since 2008, memristors can now be integrated onto chips in large numbers. They are also capable of processing large amounts of data in parallel, meaning they could be ideal for emerging “in-memory” computing technologies that require calculations known as large-scale matrix-vector multiplications (MVMs). Many such calculations involve solving partial differential equations (PDEs), which are used to model complex behaviour in fields such as weather forecasting, fluid dynamics and astrophysics, to name but a few.
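
To see why the matrix-vector product is the key operation, consider the minimal sketch below – a generic Jacobi solver for a one-dimensional Poisson equation, not the Nanjing group’s solver. Each iteration is dominated by a matrix-vector multiplication, which is the operation a memristor crossbar can perform in memory in a single analogue step.

import numpy as np

# Solve -u'' = f on a 1D grid with zero boundary values by Jacobi iteration.
n = 20
h = 1.0 / (n + 1)
f = np.ones(n)                                                 # source term
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2  # discrete Laplacian
D_inv = np.diag(1.0 / np.diag(A))                              # inverse of the diagonal part
R = A - np.diag(np.diag(A))                                    # off-diagonal remainder

u = np.zeros(n)
for _ in range(2000):
    u = D_inv @ (f - R @ u)  # "R @ u" is the MVM a memristor crossbar would accelerate

print(np.abs(A @ u - f).max() < 1e-6)  # True: the iteration has essentially converged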

One remaining hurdle, however, is that it is hard to make memristors with uniform characteristics. The electronic properties of devices containing multiple memristors can therefore vary considerably, which adversely affects the computational accuracy of large-scale arrays.

Inspiration from an ancient technique

Physicists co-led by Shi-Jun Liang and Feng Miao of Nanjing University’s School of Physics say they have now overcome this problem by designing a memristor that uses a mortise-tenon-shaped (MTS) architecture. Humans have been using these strong and stable structures in wooden furniture for thousands of years, with one of the earliest examples dating back to the Hemudu culture in China 7000 years ago.

Liang, Miao and colleagues created the mortise part of their structure by using plasma etching to create a hole within a nanoscale layer of hexagonal boron nitride (h-BN). They then constructed a tenon in a top electrode made of tantalum (Ta) that precisely matches the mortise. This ensures that this electrode directly contacts the device’s switching layer (which is made from HfO₂) only in the designated region. A bottom electrode completes the device.

The new architecture ensures highly uniform switching within the designated mortise-and-tenon region, resulting in a localized path for electronic conduction. “The result is a memristor with exceptional fundamental properties across three key metrics,” Miao tells Physics World. “These are: high endurance (more than 10⁹ cycles); long-term, stable memory retention (over 10⁴ s); and a fast switching speed of around 4.2 ns.”

The cycle-to-cycle variation of the low-resistance state (LRS) can also be reduced from 30.3% for a traditional memristor to 2.5% for the MTS architecture, and that of the high-resistance state (HRS) from 62.4% to 27.2%.

To test their device, the researchers built a PDE solver with it. They found that their new MTS memristor could solve the Poisson equation five times faster than a conventional memristor based on HfO₂ without h-BN.

The new technique, which is detailed in Science Advances, is a promising strategy for developing high-uniformity memristors, and could pave the way for high-accuracy, energy-efficient scientific computing platforms, Liang claims. “We are now looking to develop large-scale integration of our MTS device and make a prototype system,” he says.

The post Ancient woodworking technique inspires improved memristor appeared first on Physics World.

New contact lenses allow wearers to see in the near-infrared

30 May 2025 at 13:00

A new contact lens enables humans to see near-infrared light without night vision goggles or other bulky equipment. The lens, which incorporates metallic nanoparticles that “upconvert” normally-invisible wavelengths into visible ones, could have applications for rescue workers and others who would benefit from enhanced vision in conditions with poor visibility.

The infrared (IR) part of the electromagnetic spectrum encompasses light with wavelengths between 700 nm and 1 mm. Human eyes cannot normally detect these wavelengths because opsins, the light-sensitive protein molecules that allow us to see, do not have the required thermodynamic properties. This means we see only a small fraction of the electromagnetic spectrum, typically between 400‒700 nm.

While devices such as night vision goggles and infrared-visible converters can extend this range, they require external power sources. They also cannot distinguish between different wavelengths of IR light.

Photoreceptor-binding nanoparticles

In a previous work, researchers led by neuroscientist Tian Xue of the University of Science and Technology of China (USTC) injected photoreceptor-binding nanoparticles into the retinas of mice. While this technique was effective, it is too invasive and risky for human volunteers. In the new study, therefore, Xue and colleagues integrated the nanoparticles into biocompatible polymeric materials similar to those used in standard soft contact lenses.

The nanoparticles in the lenses are made from Au/NaGdF₄:Yb³⁺,Er³⁺ and have a diameter of approximately 45 nm each. They work by capturing photons with lower energies (longer wavelengths) and re-emitting them as photons with higher energies (shorter wavelengths). This process is known as upconversion and the emitted light is said to be anti-Stokes shifted.

When the researchers tested the new upconverting contact lenses (UCLs) on mice, the rodents’ behaviour suggested they could sense IR wavelengths. For example, when given a choice between a dark box and an IR-illuminated one, the lens-wearing mice scurried into the dark box. In contrast, a control group of mice not wearing lenses showed no preference for one box over the other. The pupils of the lens-wearing mice also constricted when exposed to IR light, and brain imaging revealed that processing centres in their visual cortex were activated.

Flickering seen even with eyes closed

The team then moved on to human volunteers. “In humans, the near-infrared UCLs enabled participants to accurately detect flashing Morse code-like signals and perceive the incoming direction of near-infrared (NIR) light,” Xue says, referring to light at wavelengths between 800‒1600 nm. Counterintuitively, the flashing images appeared even clearer when the volunteers closed their eyes – probably because IR light is better than visible light at penetrating biological tissue such as eyelids. Importantly, Xue notes that wearing the lenses did not affect participants’ normal vision.

The team also developed a wearable system with built-in flat UCLs. This system allowed volunteers to distinguish between patterns such as horizontal and vertical lines; S and O shapes; and triangles and squares.

But Xue and colleagues did not stop there. By replacing the upconverting nanoparticles with trichromatic orthogonal ones, they succeeded in converting NIR light into three different spectral bands. For example, they converted infrared wavelengths of 808, 980 and 1532 nm into 540, 450 and 650 nm respectively – wavelengths that humans perceive as green, blue and red.
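
A simple energy-conservation estimate (a back-of-the-envelope bound, not the paper’s detailed photophysics) shows why such upconversion must pool several NIR photons for every visible photon emitted. Since a photon’s energy is E = hc/λ, producing one photon at a shorter wavelength from n pump photons requires

n \, \frac{hc}{\lambda_{\mathrm{NIR}}} \;\ge\; \frac{hc}{\lambda_{\mathrm{vis}}} \quad\Longrightarrow\quad n \;\ge\; \frac{\lambda_{\mathrm{NIR}}}{\lambda_{\mathrm{vis}}},

so the 808 → 540 nm conversion needs at least two pump photons (ratio ≈ 1.5), while the 980 → 450 nm and 1532 → 650 nm conversions each need at least three (ratios ≈ 2.2 and 2.4).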

“As well as allowing wearers to garner more detail within the infrared spectrum, this technology could also help colour-blind individuals see wavelengths they would otherwise be unable to detect by appropriately adjusting the absorption spectrum,” Xue tells Physics World.

According to the USTC researchers, who report their work in Cell, the devices could have several other applications. Apart from providing humans with night vision and offering an adaptation for colour blindness, the lenses could also give wearers better vision in foggy or dusty conditions.

At present, the devices only work with relatively bright IR emissions (the study used LEDs). However, the researchers hope to increase the photosensitivity of the nanoparticles so that lower levels of light can trigger the upconversion process.

The post New contact lenses allow wearers to see in the near-infrared appeared first on Physics World.

How magnetar flares give birth to gold and platinum

30 May 2025 at 10:21

Powerful flares on highly-magnetic neutron stars called magnetars could produce up to 10% of the universe’s gold, silver and platinum, according to a new study. What is more, astronomers may have already observed this cosmic alchemy in action.

Gold, silver, platinum and a host of other rare heavy nuclei are known as rapid-process (r-process) elements. This is because astronomers believe that these elements are produced by the rapid capture of neutrons by lighter nuclei. Neutrons can only exist outside of an atomic nucleus for about 15 min before decaying (except in the most extreme environments). This means that the r-process must be fast and take place in environments rich in free neutrons.

In August 2017, an explosion resulting from the merger of two neutron stars was witnessed by telescopes operating across the electromagnetic spectrum and by gravitational-wave detectors. Dubbed a kilonova, the explosion produced approximately 16,000 Earth-masses worth of r-process elements, including about ten Earth masses of gold and platinum.

While the observations seem to answer the question of where precious metals came from, there remains a suspicion that neutron-star mergers cannot explain the entire abundance of r-process elements in the universe.

Giant flares

Now researchers led by Anirudh Patel, who is a PhD student at New York’s Columbia University, have created a model that describes how flares on the surface of magnetars can create r-process elements.

Patel tells Physics World that “The rate of giant flares is significantly greater than mergers.” However, given that one merger “produces roughly 10,000 times more r-process mass than a single magnetar flare”, neutron-star mergers are still the dominant factory of rare heavy elements.

A magnetar is an extreme type of neutron star with a magnetic field strength of up to a thousand trillion gauss. This makes magnetars the most magnetic objects in the universe. Indeed, if a magnetar were as close to Earth as the Moon, its magnetic field would wipe your credit card.

Astrophysicists believe that when a magnetar’s powerful magnetic fields are pulled taut, the magnetic tension will inevitably snap. This would result in a flare, which is an energetic ejection of neutron-rich material from the magnetar’s surface.

Mysterious mechanism

However, the physics isn’t entirely understood, according to Jakub Cehula of Charles University in the Czech Republic, who is a member of Patel’s team. “While the source of energy for a magnetar’s giant flares is generally agreed to be the magnetic field, the exact mechanism by which this energy is released is not fully understood,” he explains.

One possible mechanism is magnetic reconnection, which creates flares on the Sun. Flares could also be produced by energy released during starquakes following a build-up of magnetic stress. However, neither satisfactorily explains the giant flares, of which only nine have thus far been detected.

In 2024 Cehula led research that attempted to explain the flares by combining starquakes with magnetic reconnection. “We assumed that giant flares are powered by a sudden and total dissipation of the magnetic field right above a magnetar’s surface,” says Cehula.

This sudden release of energy drives a shockwave into the magnetar’s neutron-rich crust, blasting a portion of it into space at velocities greater than a tenth of the speed of light, where in theory heavy elements are formed via the r-process.

Gamma-ray burst

Remarkably, astronomers may have already witnessed this in 2004, when a giant magnetar flare was spotted as a half-second gamma-ray burst that released more energy than the Sun does in a million years. What happened next remained unexplained until now. Ten minutes after the initial burst, the European Space Agency’s INTEGRAL satellite detected a second, weaker signal that was not understood.

Now, Patel and colleagues have shown that the r-process in this flare created unstable isotopes that quickly decayed into stable heavy elements – creating the gamma-ray signal.

Patel calculates that the 2004 flare resulted in the creation of two million billion billion kilograms of r-process elements, equivalent to about the mass of Mars.

Extrapolating, Patel calculates that giant flares on magnetars contribute between 1% and 10% of all the r-process elements in the universe.

Lots of magnetars

“This estimate accounts for the fact that these giant flares are rare,” he says, “But it’s also important to note that magnetars have lifetimes of 1000 to 10,000 years, so while there may only be a couple of dozen magnetars known to us today, there have been many more magnetars that have lived and died over the course of the 13 billion-year history of our galaxy.”

Magnetars would have been produced early in the universe by the supernovae of massive stars, whereas it can take a billion years or longer for two neutron stars to merge. Hence, magnetars would have been a more dominant source of r-process elements in the early universe. However, they may not have been the only source.

“If I had to bet, I would say there are other environments in which r-process elements can be produced, for example in certain rare types of core-collapse supernovae,” says Patel.

Either way, it means that some of the gold and silver in your jewellery was forged in the violence of immense magnetic fields snapping on a dead star.

The research is described in Astrophysical Journal Letters.

The post How magnetar flares give birth to gold and platinum appeared first on Physics World.

Teaching quantum physics to everyone: pictures offer a new way of understanding

29 May 2025 at 16:49

Quantum science is enjoying a renaissance as nascent quantum computers emerge from the lab and quantum sensors are being used for practical applications.

As the technologies we use become more quantum in nature, it follows that everyone should have a basic understanding of quantum physics. To explore how quantum physics can be taught to the masses, I am joined by Arjan Dhawan, Aleks Kissinger and Bob Coecke – who are all based in the UK.

Coecke is chief scientist at Quantinuum – which develops quantum computing hardware and software. Kissinger is associate professor of quantum computing at the University of Oxford; and Dhawan is studying mathematics at the University of Durham.

Kissinger and Coecke have developed a way of teaching quantum physics using diagrams. In 2023, Oxford and Quantinuum joined forces to use the method in a pilot summer programme for 15 to 17 year-olds. Dhawan was one of their students.

Physics World is brought to you by IOP Publishing, which also publishes scholarly journals, conference proceedings and ebooks.

You can download the book The Ringed Planet: Second Edition free of charge for a limited time only. By Joshua Colwell, the book is a must read on Saturn and the Cassini mission. An updated and expanded third edition is also hot off the press.

Browse all ebooks here and remember that you can always read the first chapters of all IOPP ebooks for free.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

 

The post Teaching quantum physics to everyone: pictures offer a new way of understanding appeared first on Physics World.

Secondary dose checks ensure safe and accurate delivery of adaptive radiotherapy

29 May 2025 at 13:00

Adaptive radiotherapy, an advanced cancer treatment in which each fraction is tailored to the patient’s daily anatomy, offers the potential to maximize target conformality and minimize dose to surrounding healthy tissue. Based on daily scans – such as MR images recorded by an MR-Linac, for example – treatment plans are adjusted each day to account for anatomical changes in the tumour and surrounding healthy tissue.

Creating a new plan for every treatment fraction, however, increases the potential for errors, making fast and effective quality assurance (QA) procedures more important than ever. To meet this need, the physics team at Hospital Almater in Mexicali, Mexico, is using Elekta ONE | QA, powered by ThinkQA Secondary Dose Check* (ThinkQA SDC) software to ensure that each adaptive plan is safe and accurate before it is delivered to the patient.

Radiotherapy requires a series of QA checks prior to treatment delivery, starting with patient-specific QA, where the dose calculated by the treatment planning system is delivered to a phantom. This procedure ensures that the delivered dose distribution matches the prescribed plan. Alongside, secondary dose checks can be performed, in which an independent algorithm verifies that the calculated dose distribution corresponds with that delivered to the actual patient anatomy.

“The secondary dose check is an independent dose calculation that uses a different algorithm to the one in the treatment planning system,” explains Alexis Cabrera Santiago, a medical physicist at Hospital Almater. “ThinkQA SDC software calculates the dose based on the patient anatomy, which is actually more realistic than using a rigid phantom, so we can compare both results and catch any differences before treatment.”

Pre-treatment verification ThinkQA SDC’s unique dose calculation method has been specifically designed for Elekta Unity. (Courtesy: Elekta)

For adaptive radiotherapy in particular, this second check is invaluable. Performing phantom-based QA following each daily imaging session is often impractical; in many cases, ThinkQA SDC can be used in its place.

“Secondary dose calculation is necessary in adaptive treatments, for example using the MR-Linac, because you are changing the treatment plan for each session,” says José Alejandro Rojas‑López, who commissioned and validated ThinkQA SDC at Hospital Almater. “You are not able to shift the patient to realise patient-specific QA, so this secondary dose check is needed to analyse each treatment session.”

ThinkQA SDC’s ability to achieve patient-specific QA without shifting the patient is extremely valuable, allowing time savings while upholding the highest level of QA safety. “The AAPM TG 219 report recognises secondary dose verification as a validated alternative to patient-specific QA, especially when there is no time for traditional phantom checks in adaptive fractions,” adds Cabrera Santiago.

The optimal choice

At Hospital Almater, all external-beam radiation treatments are performed using an Elekta Unity MR-Linac (with brachytherapy employed for gynaecological cancers). This enables the hospital to offer adaptive radiotherapy for all cases, including head-and-neck, breast, prostate, rectal and lung cancers.

To ensure efficient workflow and high-quality treatments, the team turned to the ThinkQA SDC software. ThinkQA SDC received FDA 510(k) clearance in early 2024 for use with both the Unity MR-Linac and conventional Elekta linacs.

Rojas‑López (who now works at Hospital Angeles Puebla) says that the team chose ThinkQA SDC because of its user-friendly interface, ease of integration into the clinical workflow and common integrated QA platform for both CT and MR-Linac systems. The software also offers the ability to perform 3D evaluation of the entire planning treatment volume (PTV) and the organs-at-risk, making the gamma evaluation more robust.

Physics team Alexis Cabrera Santiago and José Alejandro Rojas‑López. (Courtesy: José Alejandro Rojas‑López/Hospital Almater)

Commissioning of ThinkQA SDC was fast and straightforward, Rojas‑López notes, requiring minimal data input into the software. For absolute dose calibration, the only data needed are the cryostat dose attenuation response, the output dose geometry and the CT calibration.

“This makes a difference compared with other commercial solutions where you have to introduce more information, such as MLC [multileaf collimator] leakage and MLC dosimetric leaf gap, for example,” he explains. “If you have to introduce more data for commissioning, this delays the clinical introduction of the software.”

Cabrera Santiago is now using ThinkQA SDC to provide secondary dose calculations for all radiotherapy treatments at Hospital Almater. The team has established a protocol with a 3%/2 mm gamma criterion, a tolerance limit of 95% and an action limit of 90%. He emphasizes that the software has proved robust and flexible, and provides confidence in the delivered treatment.
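
As a rough illustration of what such a criterion means in practice, the sketch below computes a global 3%/2 mm gamma passing rate for a made-up one-dimensional dose profile. It is a toy calculation for illustration, not the ThinkQA SDC algorithm; the resulting percentage is the kind of figure that would be compared against the 95% tolerance and 90% action limits.

import numpy as np

def gamma_pass_rate(ref, ev, spacing_mm, dose_crit=0.03, dta_mm=2.0, low_dose_cut=0.1):
    # Brute-force global gamma index on a 1D dose profile.
    x = np.arange(len(ref)) * spacing_mm
    dd = dose_crit * ref.max()                # absolute dose tolerance: 3% of the max reference dose
    passed = counted = 0
    for i, d_ref in enumerate(ref):
        if d_ref < low_dose_cut * ref.max():  # ignore very-low-dose points
            continue
        counted += 1
        gamma_sq = ((ev - d_ref) / dd) ** 2 + ((x - x[i]) / dta_mm) ** 2
        if gamma_sq.min() <= 1.0:             # the point passes if gamma <= 1 for some nearby point
            passed += 1
    return 100.0 * passed / counted

grid = np.arange(100) * 1.0                   # 1 mm spacing
ref = np.exp(-((grid - 50) / 15) ** 2)        # reference (planned) dose profile
ev = 1.01 * np.exp(-((grid - 51) / 15) ** 2)  # recalculated profile: 1% scaling, 1 mm shift
print(f"gamma passing rate: {gamma_pass_rate(ref, ev, 1.0):.1f}%")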

“ThinkQA SDC lets us work with more confidence, reduces risk and saves time without losing control over the patient’s safety,” he says. “It checks that the plan is correct, catches issues before treatment and helps us find any problems like set-up errors, contouring mistakes and planning issues.”

The software integrates smoothly into the Elekta ONE adaptive workflow, providing reliable results without slowing down the clinical workflow. “In our institution, we set up ThinkQA SDC so that it automatically receives the new plan, runs the check, compares it with the original plan and creates a report – all in around two minutes,” says Cabrera Santiago. “This saves us a lot of time and removes the need to do everything manually.”

A case in point

As an example of ThinkQA SDC’s power to ease the treatment workflow, Rojas‑López describes a paediatric brain tumour case at Hospital Almater. The young patient needed sedation during their treatment, requiring the physics team to optimize the treatment time for the entire adaptive radiotherapy workflow. “ThinkQA SDC served to analyse, in a fast mode, the treatment plan QA for each session. The measurements were reliable, enabling us to deliver all of the treatment sessions without any delay,” he explains.

Indeed, the ability to use secondary dose checks for each treatment fraction provides time advantages for the entire clinical workflow over phantom-based pre-treatment QA. “Time in the bunker is very expensive,” Rojas‑López points out. “If you reduce the time required for QA, you can use the bunker for patient treatments instead and treat more patients during the clinical time. Secondary dose check can optimize the workflow in the entire department.”

Importantly, in a recent study comparing patient-specific QA measurements using Sun Nuclear’s ArcCheck with ThinkQA SDC calculations, Rojas‑López and colleagues confirmed that the two techniques provided comparable results, with very similar gamma passing rates. As such, they are working to reduce phantom measurements and, in most cases, replace them with a secondary dose check using ThinkQA SDC.

The team at Hospital Almater concur that ThinkQA SDC provides a reliable tool to evaluate radiation treatments, including the first fraction and all of the adaptive sessions, says Rojas‑López. “You can use it for all anatomical sites, with reliable and confident results,” he notes. “And you can reduce the need for measurements using another patient-specific QA tool.”

“I think that any centre doing adaptive radiotherapy should seriously consider using a tool like ThinkQA SDC,” adds Cabrera Santiago.

*ThinkQA is manufactured by DOSIsoft S.A. and distributed by Elekta.

The post Secondary dose checks ensure safe and accurate delivery of adaptive radiotherapy appeared first on Physics World.

‘Zombie’ volcano reveals its secrets

29 May 2025 at 10:30

The first high-resolution images of Bolivia’s Uturuncu volcano have yielded unprecedented insights into whether this volcanic “zombie” is likely to erupt in the near future. The images were taken using a technique that combines seismology, rock physics and petrological analyses, and the scientists who developed it say it could apply to other volcanoes, too.

Volcanic eruptions occur when bubbles of gases such as SO₂ and CO₂ rise to the Earth’s surface through dikes and sills in the planet’s crust, bringing hot, molten rock known as magma with them. To evaluate the chances of this happening, researchers need to understand how much gas and melted rock have accumulated in the volcano’s shallow upper crust, or crater. This is not easy, however, as the structures that convey gas and magma to the surface are complex and mapping them is challenging with current technologies.

A zombie volcano

In the new work, a team led by Mike Kendall of the University of Oxford, UK and Haijiang Zhang from the University of Science and Technology of China (USTC) employed a combination of seismological and petrophysical analyses to create such a map for Uturuncu. Located in the Central Andes, this volcano formed in the Pleistocene epoch (around 2.58 million to 11,700 years ago) as the oceanic Nazca plate was forced beneath the South American continental plate. It is made up of around 50 km³ of homogeneous, porphyritic dacite lava flows that are between 62% and 67% silicon dioxide (SiO₂) by weight, and it sits atop the Altiplano–Puna magma body, which is the world’s largest body of partially-melted silicic rock.

Although Uturuncu has not erupted for nearly 250,000 years, it is not extinct. It regularly emits plumes of gas, and earthquakes are a frequent occurrence in the shallow crust beneath and around it. Previous geodetic studies also detected a 150-km-wide deformed region of rock centred around 3 km south-west of its summit. These signs of activity, coupled with Uturuncu’s lack of a geologically recent eruption, have led some scientists to describe it as a “zombie”.

Movement of liquid and gas explains Uturuncu’s unrest

To tease out the reasons for Uturuncu’s semi-alive behaviour, the team turned to seismic tomography – a technique Kendall compares to medical imaging of a human body. The idea is to detect the seismic waves produced by earthquakes travelling through the Earth’s crust, analyse their arrival times, and use this information to create three-dimensional images of what lies beneath the surface of the structure being studied.

Writing in PNAS, Kendall and colleagues explain that they used seismic tomography to analyse signals from more than 1700 earthquakes in the region around Uturuncu. They performed this analysis in two ways. First, they assumed that seismic waves travel through the crust at the same speed regardless of their direction of propagation. This isotropic form of tomography gave them a first image of the region’s structure. In their second analysis, they took the directional dependence of the seismic waves’ speed into account. This anisotropic tomography gave them complementary information about the structure.
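
The underlying inverse problem can be sketched in a few lines. The toy example below illustrates the principle only, with made-up numbers rather than the team’s data: a region is divided into four cells of unknown “slowness” (the reciprocal of seismic velocity), each ray’s travel time is the path length it spends in each cell multiplied by that cell’s slowness, and inverting the resulting linear system recovers the slow anomaly.

import numpy as np

L = 10.0                                     # cell size, km
true_s = np.array([0.25, 0.25, 0.25, 0.40])  # slowness in s/km; cell 3 is anomalously slow

# Path-length matrix: one row per ray, one column per cell of a 2 x 2 grid
G = np.array([
    [L, L, 0, 0],                            # ray along the top row of cells
    [0, 0, L, L],                            # ray along the bottom row
    [L, 0, L, 0],                            # ray down the left column
    [0, L, 0, L],                            # ray down the right column
    [L * np.sqrt(2), 0, 0, L * np.sqrt(2)],  # diagonal ray
])
t_obs = G @ true_s                           # synthetic arrival times

s_est, *_ = np.linalg.lstsq(G, t_obs, rcond=None)
print(np.round(s_est, 3))                    # least squares recovers the slow cell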

The researchers then combined their tomographic measurements with previous geophysical imaging results to construct rock physics models. These models contain information about the paths that hot fluids and gases take as they migrate to the surface. In Uturuncu’s case, the models showed fluids and gases accumulating in shallow magma reservoirs directly below the volcano’s crater and down to a depth of around 5 km. This movement of liquid and gas explains Uturuncu’s unrest, the team say, but the good news is that it has a low probability of producing eruptions any time soon.

According to Kendall, the team’s methods should be applicable to more than 1400 other potentially active volcanoes around the world. “It could also be applied to identifying potential geothermal energy sites and for critical metal recovery in volcanic fluids,” he tells Physics World.

The post ‘Zombie’ volcano reveals its secrets appeared first on Physics World.

Shengxi Huang: how defects can boost 2D materials as single-photon emitters

28 May 2025 at 17:01
Hidden depths Shengxi Huang (left) with members of her lab at Rice University in the US, where she studies 2D materials as single-photon sources. (Courtesy: Jeff Fitlow)

Everyday life is three dimensional, with even a sheet of paper having a finite thickness. Shengxi Huang from Rice University in the US, however, is attracted by 2D materials, which are usually just one atomic layer thick. Graphene is perhaps the most famous example — a single layer of carbon atoms arranged in a hexagonal lattice. But since it was first created in 2004, all sorts of other 2D materials, notably boron nitride, have been created.

An electrical engineer by training, Huang did a PhD at the Massachusetts Institute of Technology and postdoctoral research at Stanford University before spending five years as an assistant professor at the Pennsylvania State University. Huang has been at Rice since 2022, where she is now an associate professor in the Department of Electrical and Computer Engineering, the Department of Material Science and NanoEngineering, and the Department of Bioengineering.

Her group at Rice currently has 12 people, including eight graduate students and four postdocs. Some are physicists, some are engineers, while others have backgrounds in material science or chemistry. But they all share an interest in understanding the optical and electronic properties of quantum materials and seeing how they can be used, for example, as biochemical sensors. Lab equipment from PicoQuant is vital in that quest, as Huang explains in an interview with Physics World.

Why are you fascinated by 2D materials?

I’m an electrical engineer by training, which is a very broad field. Some electrical engineers focus on things like communication and computing, but others, like myself, are more interested in how we can use fundamental physics to build useful devices, such as semiconductor chips. I’m particularly interested in using 2D materials for optoelectronic devices and as single-photon emitters.

What kinds of 2D materials do you study?

The materials I am particularly interested in are transition metal dichalcogenides, which consist of a layer of transition-metal atoms sandwiched between two layers of chalcogen atoms – sulphur, selenium or tellurium. One of the most common examples is molybdenum disulphide, which in its monolayer form has a layer of sulphur on either side of a layer of molybdenum. In multi-layer molybdenum disulphide, the van der Waals forces between the tri-layers are relatively weak, meaning that the material is widely used as a lubricant – just like graphite, which is a many-layer version of graphene.

Why do you find transition metal dichalcogenides interesting?

Transition metal dichalcogenides have some very useful optoelectronic properties. In particular, they emit light whenever the electron and hole that make up an “exciton” recombine. Now because these dichalcogenides are so thin, most of the light they emit can be used. In a 3D material, in contrast, most light is generated deep in the bulk of the material and doesn’t penetrate beyond the surface. Such 2D materials are therefore very efficient and, what’s more, can be easily integrated onto chip-based devices such as waveguides and cavities.

Transition metal dichalcogenide materials also have promising electronic applications, particularly as the active material in transistors. Over the years, we’ve seen silicon-based transistors get smaller and smaller as we’ve followed Moore’s law, but we’re rapidly reaching a limit where we can’t shrink them any further, partly because the electrons in very thin layers of silicon move so slowly. In 2D transition metal dichalcogenides, in contrast, the electron mobility can actually be higher than in silicon of the same thickness, making them a promising material for future transistor applications.

What can such sources of single photons be used for?

Single photons are useful for quantum communication and quantum cryptography. Carrying information as zero and one, they basically function as a qubit, providing a very secure communication channel. Single photons are also interesting for quantum sensing and even quantum computing. But it’s vital that you have a highly pure source of photons. You don’t want them mixed up with “classical photons”, which — like those from the Sun — are emitted in bunches; otherwise the tasks you’re trying to perform cannot be completed.

What approaches are you taking to improve 2D materials as single-photon emitters?

What we do is introduce atomic defects into a 2D material to give it optical properties that are different to what you’d get in the bulk. There are several ways of doing this. One is to irradiate a sample with ions or electrons, which can bombard individual atoms out to generate “vacancy defects”. Another option is to use plasmas, whereby atoms in the sample get replaced by atoms from the plasma.

So how do you study the samples?

We can probe defect emission using a technique called photoluminescence, which basically involves shining a laser beam onto the material. The laser excites electrons from the ground state to an excited state, prompting them to emit light. As the laser beam is about 500-1000 nm in diameter, we can see single photon emission from an individual defect if the defect density is suitable.

Beyond the surface Shengxi Huang (second right) uses equipment from PicoQuant to probe 2D materials. (Courtesy: Jeff Fitlow)

What sort of experiments do you do in your lab?

We start by engineering our materials at the atomic level to introduce the correct type of defect. We also try to strain the material, which can increase how many single photons are emitted at a time. Once we’ve confirmed we’ve got the correct defects in the correct location, we check the material is emitting single photons by carrying out optical measurements, such as photoluminescence. Finally, we characterize the purity of our single photons – ideally, they shouldn’t be mixed up with classical photons but in reality, you never have a 100% pure source. As single photons are emitted one at a time, they have different statistical characteristics to classical light. We also check the brightness and lifetime of the source, the efficiency, how stable it is, and if the photons are polarized. In fact, we have a feedback loop: what improvements can we do at the atomic level to get the properties we’re after?

Is it difficult adding defects to a sample?

It’s pretty challenging. You want to add just one defect to an area that might be just one micron square so you have to control the atomic structure very finely. It’s made harder because 2D materials are atomically thin and very fragile. So if you don’t do the engineering correctly, you may accidentally introduce other types of defects that you don’t want, which will alter the defects’ emission.

What techniques do you use to confirm the defects are in the right place?

Because the defect concentration is so low, we cannot use methods that are typically used to characterise materials, such as X-ray photo-emission spectroscopy or scanning electron microscopy. Instead, the best and most practical way is to see if the defects generate the correct type of optical emission predicted by theory. But even that is challenging because our calculations, which we work on with computational groups, might not be completely accurate.

How do your PicoQuant instruments help in that regard?

We have two main pieces of equipment – a MicroTime 100 photoluminescence microscope and a FluoTime 300 spectrometer. These have been customized to form a Hanbury Brown Twiss interferometer, which measures the purity of a single photon source. We also use the microscope and spectrometer to characterise photoluminescence spectrum and lifetime. Essentially, if the material emits light, we can then work out how long it takes before the emission dies down.

Did you buy the equipment off-the-shelf?

It’s more of a customised instrument with different components – lasers, microscopes, detectors and so on – connected together so we can do multiple types of measurement. I put in a request to PicoQuant, who discussed my requirements with me to work out how to meet my needs. The equipment has been very important for our studies as we can carry out high-throughput measurements over and over again. We’ve tailored it for our own research purposes basically.

So how good are your samples?

The best single-photon source that we currently work with is boron nitride, which has a single-photon purity of 98.5% at room temperature. In other words, for every 200 photons only three are classical. With transition-metal dichalcogenides, we get a purity of 98.3% at cryogenic temperatures.
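
One common convention (assumed here, since it is not spelled out in the interview) quotes single-photon purity as 1 − g⁽²⁾(0), where g⁽²⁾(0) is the second-order correlation at zero delay measured with the Hanbury Brown–Twiss setup described above. On that reading, a purity of 98.5% corresponds to g⁽²⁾(0) ≈ 1 − 0.985 = 0.015 ≈ 3/200, consistent with the three-in-200 figure quoted.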

What are your next steps?

There’s still lots to explore in terms of making better single-photon emitters and learning how to control them at different wavelengths. We also want to see if these materials can be used as high-quality quantum sensors. In some cases, if we have the right types of atomic defects, we get a high-quality source of single photons, which we can then entangle with their spin. The emitters can therefore monitor the local magnetic environment with better performance than is possible with classical sensing methods.

The post Shengxi Huang: how defects can boost 2D materials as single-photon emitters appeared first on Physics World.

Richard Bond and George Efstathiou share the 2025 Shaw Prize in Astronomy

28 May 2025 at 14:00

The 2025 Shaw Prize in Astronomy has been awarded to Richard Bond and George Efstathiou “for their pioneering research in cosmology, in particular for their studies of fluctuations in the cosmic microwave background”. The prize citation continues, “Their predictions have been verified by an armada of ground-, balloon- and space-based instruments, leading to precise determinations of the age, geometry, and mass–energy content of the universe”.

Efstathiou is professor of astrophysics at the University of Cambridge in the UK. Bond is a professor at the Canadian Institute for Theoretical Astrophysics (CITA) and university professor at the University of Toronto in Canada. They share the $1.2m prize money equally.

The annual award is given by the Shaw Prize Foundation, which was founded in 2002 by the Hong Kong-based filmmaker, television executive and philanthropist Run Run Shaw (1907–2014). It will be presented at a ceremony in Hong Kong on 21 October. There are also Shaw Prizes for life sciences and medicine; and mathematical sciences.

Bond studied mathematics and physics at Toronto. In 1979 he completed a PhD in theoretical physics at the California Institute of Technology (Caltech). He directed CITA from 1996 to 2006.

Efstathiou studied physics at Oxford before completing a PhD in astronomy at the UK’s Durham University in 1979. He is currently director of the Institute of Astronomy in Cambridge.

The post Richard Bond and George Efstathiou share the 2025 Shaw Prize in Astronomy appeared first on Physics World.

No laughing matter: a comic book about the climate crisis

28 May 2025 at 12:00
Blunt message Anti-nuclear thinking is mocked in World Without End by Jean-Marc Jancovici and Christophe Blain. (Published by Particular Books. Illustration © DARGAUD — Jancovici & Blain)

Comics are regarded as an artform in France, where they account for a quarter of all book sales. Nevertheless, the graphic novel World Without End: an Illustrated Guide to the Climate Crisis was a surprise French bestseller when it first came out in 2022. Taking the form of a Socratic dialogue between French climate expert Jean-Marc Jancovici and acclaimed comic artist Christophe Blain, it’s serious, scientific stuff.

Now translated into English by Edward Gauvin, the book follows the conventions of French-language comic strips or bandes dessinées. Jancovici is drawn with a small nose – denoting seriousness – while Blain’s larger nose signals humour. The first half explores energy and consumption, with the rest addressing the climate crisis and possible solutions.

Overall, this is a Trojan horse of a book: what appears to be a playful comic is packed with dense, academic content. Though marketed as a graphic novel, it reads more like illustrated notes from a series of sharp, provocative university lectures. It presents a frightening vision of the future and the humour doesn’t always land.

The book spans a vast array of disciplines – not just science and economics but geography and psychology too. In fact, there’s so much to unpack that, had I Blain’s skills, I might have reviewed it in the form of a comic strip myself. The old adage that “a picture is worth a thousand words” has never rung more true.

Absurd yet powerful visual metaphors feature throughout. We see a parachutist with a flaming main chute that represents our dependence on fossil fuels. The falling man jettisons his reserve chute – nuclear power – and tries to knit an alternative using clean energy, mid-fall. The message is blunt: nuclear may not be ideal, but it works.


The book is bold, arresting, provocative and at times polemical. Charts and infographics are presented to simplify complex issues, even if the details invite scrutiny. Explanations are generally clear and concise, though the author’s claim that accidents like Chernobyl and Fukushima couldn’t happen in France smacks of hubris.

Jancovici makes plenty of attention-grabbing statements. Some are sound, such as the notion that fossil fuels spared whales from extinction as we didn’t need this animal’s oil any more. Others are dubious – would a 4 °C temperature rise really leave a third of humanity unable to survive outdoors?

But Jancovici is right to say that the use of fossil fuels makes logical sense. Oil can be easily transported and one barrel delivers the equivalent of five years of human labour. A character called Armor Man (a parody of Iron Man) reminds us that fossil fuels are like having 200 mechanical slaves per person, equivalent to an additional 1.5 trillion people on the planet.

Fossil fuels brought prosperity – but now threaten our survival. For Jancovici, the answer is nuclear power, which is perhaps not surprising as it produces 72% of electricity in the author’s homeland. But he cherry picks data, accepting – for example – the United Nations figure that only about 50 people died from the Chernobyl nuclear accident.

While acknowledging that many people had to move following the disaster, the author downplays the fate of those responsible for “cleaning up” the site, the long-term health effects on the wider population and the staggering economic impact – estimated at €200–500bn. He also sidesteps nuclear-waste disposal and the cost and complexity of building new plants.

While conceding that nuclear is “not the whole answer”, Jancovici dismisses hydrogen and views renewables like wind and solar as too intermittent – they require batteries to ensure electricity is supplied on demand – and diffuse. Imagine blanketing the Earth in wind turbines.

Cartoon of a doctor and patient. The patient has increased their alcohol intake but also added in some healthy orange juice
Humorous point A joke from World Without End by Jean-Marc Jancovici and Christophe Blain. (Published by Particular Books. Illustration © DARGAUD — Jancovici & Blain)

Still, his views on renewables seem increasingly out of step. They now supply nearly 30% of global electricity – 13% from wind and solar, ahead of nuclear at 9%. Renewables also attract 70% of all new investment in electricity generation and (unlike nuclear) continue to fall in price. It’s therefore disingenuous of the author to say that relying on renewables would be like returning to pre-industrial life; today’s wind turbines are far more efficient than anything back then.

Beyond his case for nuclear, Jancovici offers few firm solutions. Weirdly, his main suggestions for stabilizing population growth are “educating women” and providing pensions in developing nations, so that families rely less on having many children. He also cites French journalist Sébastien Bohler, who thinks our brains are poorly equipped to deal with long-term threats.

But he says nothing about the need for more investment in nuclear fusion or for “clean” nuclear fission via, say, liquid fluoride thorium reactors (LFTRs), which generate minimal waste, won’t melt down and cannot be weaponized.

Perhaps our survival depends on delaying gratification, resisting the lure of immediate comfort, and adopting a less extravagant but sustainable world. We know what changes are needed – yet we do nothing. The climate crisis is unfolding before our eyes, but we’re paralysed by a global-scale bystander effect, each of us hoping someone else will act first. Jancovici’s call for “energy sobriety” (consuming less) seems idealistic and futile.

Still, World Without End is a remarkable and deeply thought-provoking book that deserves to be widely read. I fear that it will struggle to replicate its success beyond France, though Raymond Briggs’ When the Wind Blows – a Cold War graphic novel about nuclear annihilation – was once a British bestseller. If enough people engaged with the book, it would surely spark discussion and, one day, even lead to meaningful action.

  • 2024 Particular Books £25.00hb 196pp

The post No laughing matter: a comic book about the climate crisis appeared first on Physics World.

The evolution of the metre: How a product of the French Revolution became a mainstay of worldwide scientific collaboration

28 mai 2025 à 10:00

The 20th of May is World Metrology Day, and this year it was extra special because it was also the 150th anniversary of the treaty that established the metric system as the preferred international measurement standard. Known as the Metre Convention, the treaty was signed in 1875 in Paris, France by representatives of 17 nations and established the Bureau International des Poids et Mesures (BIPM), making it one of the first truly international agreements. Though nations might come and go, the hope was that this treaty would endure “for all times and all peoples”.

To celebrate the treaty’s first century and a half, the BIPM and the United Nations Educational, Scientific and Cultural Organisation (UNESCO) held a joint symposium at the UNESCO headquarters in Paris. The event focused on the achievements of BIPM as well as the international scientific collaborations the Metre Convention enabled. It included talks from the Nobel prize-winning physicist William Phillips of the US National Institute of Standards and Technology (NIST) and the BIPM director Martin Milton, as well as panel discussions on the future of metrology featuring representatives of other national metrology institutes (NMIs) and metrology professionals from around the globe.

A long and revolutionary tradition

The history of metrology dates back to ancient times. As UNESCO’s Hu Shaofeng noted in his opening remarks, the Egyptians recognized the importance of precision measurements as long ago as the 21st century BCE.  Like other early schemes, the Egyptians’ system of measurement used parts of the human body as references, with units such as the fathom (the length of a pair of outstretched arms) and the foot. This was far from ideal since, as Phillips pointed out in his keynote address, people come in various shapes and sizes. These variations led to a profusion of units. By some estimates, pre-revolutionary France had a whopping 250,000 different measures, with differences arising not only between towns but also between professions.

The French Revolutionaries were determined to put an end to this mess. In 1795, just six years after the Revolution, the law of 18 Germinal An III (according to the new calendar of the French Republic) created a preliminary version of the world’s first metric system. The new system tied length and mass to natural standards (the metre was originally one-forty-millionth of the Paris meridian, while the kilogram was the mass of a cubic decimetre of water), and it became the standard for all of France in 1799. That same year, the system also became more practical, with units becoming linked, for the first time, to physical artefacts: a platinum metre and kilogram deposited in the French National Archives.
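
Put as a simple relation, the Revolutionary definition tied the unit to the size of the Earth itself: if the metre is one forty-millionth of the Paris meridian, then the full meridian measures 40,000 km by construction (and the pole-to-equator arc exactly 10,000 km):

$$
1~\text{m} \equiv \frac{L_{\text{meridian}}}{40\,000\,000}
\quad\Longrightarrow\quad
L_{\text{meridian}} = 4\times10^{7}~\text{m} = 40\,000~\text{km}.
$$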

When the Metre Convention adopted this standard internationally 80 years later, it kick-started the construction of new length and mass standards. The new International Prototype of the Metre and International Prototype of the Kilogram were manufactured in 1879 and officially adopted as replacements for the Revolutionaries’ metre and kilogram in 1889, though they continued to be calibrated against the old prototypes held in the National Archives.

A short history of the BIPM

The BIPM itself was originally conceived as a means of reconciling France and Germany after the 1870–1871 Franco–Prussian War. At first, its primary roles were to care for the kilogram and metre prototypes and to calibrate the standards of its member states. In the opening decades of the 20th century, however, it extended its activities to cover other kinds of measurements, including those related to electricity, light and radiation. Then, from the 1960s onwards, it became increasingly interested in improving the definition of length, thanks to new interferometer technology that made it possible to measure distance at a precision rivalling that of the physical metre prototype.

Photo of William Phillips on stage at the Metre Convention symposium, backed by a large slide that reads "The Revolutionary Dream: A tous les temps, a tous les peuples, For all times, for all peoples". The slide also contains two large symbolic medallions, one showing a female figure dressed in Classical garments holding out a metre ruler under the logo "A tous les temps, a tous les peuples" and another showing a winged figure measuring the Earth with an instrument.
Metre man: William Phillips giving the keynote address at the Metre Convention’s 150th anniversary symposium. (Courtesy: Isabelle Dumé)

It was around this time that the BIPM decided to replace its expanded metric system with a framework encompassing the entire field of metrology. This new framework consisted of six base units – the metre, kilogram, second, ampere, degree Kelvin (later simply the kelvin) and candela, with the mole added as a seventh in 1971 – plus a set of “derived” units (the newton, hertz, joule and watt) built from the base ones. Thus was born the International System of Units, or SI after the French initials for Système International d’unités.

The next major step – a “brilliant choice”, in Phillips’ words – came in 1983, when the BIPM decided to redefine the metre in terms of the speed of light. From then on, the Bureau decreed, the metre would officially be the length travelled by light in vacuum during a time interval of 1/299,792,458 of a second.

This decision set the stage for defining the rest of the seven base units in terms of natural fundamental constants. The most recent unit to join the club was the kilogram, which was defined in terms of the Planck constant, h, in 2019. In fact, the only base unit currently not defined in terms of a fundamental constant is the second, which is instead determined by the frequency of the transition between the two hyperfine levels of the ground state of caesium-133. The international metrology community is, however, working to remedy this, with meetings being held on the subject in Versailles this month.
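
For reference, the defining values mentioned above can be written compactly: the speed of light c (fixed in 1983), the Planck constant h (fixed in 2019) and the caesium-133 hyperfine transition frequency that defines the second. Each is an exact, defined number rather than a measured one:

$$
c = 299\,792\,458~\text{m s}^{-1},\qquad
h = 6.626\,070\,15\times10^{-34}~\text{J s},\qquad
\Delta\nu_{\text{Cs}} = 9\,192\,631\,770~\text{Hz}.
$$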

Measurement affects every aspect of our daily lives, and as the speakers at last week’s celebrations repeatedly reminded the audience, a unified system of measurement has long acted as a means of building trust across international and disciplinary borders. The Metre Convention’s survival for 150 years is proof that peaceful collaboration can triumph, and it has allowed humankind to advance in ways that would not have been possible without such unity. A lesson indeed for today’s troubled world.

The post The evolution of the metre: How a product of the French Revolution became a mainstay of worldwide scientific collaboration appeared first on Physics World.

The Physics Chanteuse: when science hits a high note

27 mai 2025 à 17:00

What do pulsars, nuclear politics and hypothetical love particles have in common? They’ve all inspired songs by Lynda Williams – physicist, performer and self-styled “Physics Chanteuse”.

In this month’s Physics World Stories podcast, host Andrew Glester is in conversation with Williams, whose unique approach to science communication blends physics with cabaret and satire. You’ll be treated to a selection of her songs, including a toe-tapping tribute to Jocelyn Bell Burnell, the Northern Irish physicist who discovered pulsars.

Williams discusses her writing process, which includes a full-blooded commitment to getting the science right. She describes how her shows evolve throughout the course of a tour, how she balances life on the road with other life commitments, and how Kip Thorne once arranged for her to perform at a birthday celebration for Stephen Hawking. (Yes, really.)

Her latest show, Atomic Cabaret, dives into the existential risks of the nuclear age, marking 80 years since Hiroshima and Nagasaki. The one-woman musical kicks off in Belfast on 18 June and heads to the Edinburgh Festival in August.

If you like your physics with a side of showbiz and social activism, this episode hits all the right notes. Find out more at Lynda’s website.

The post The Physics Chanteuse: when science hits a high note appeared first on Physics World.


The quantum eraser doesn’t rewrite the past – it rewrites observers

27 mai 2025 à 15:00

“Welcome to this special issue of Physics World, marking the 200th anniversary of quantum mechanics. In this double-quantum edition, the letters in this text are stored using qubits. As you read, you project the letters into a fixed state, and that information gets copied into your mind as the article that you are reading. This text is actually in a superposition of many different articles, but only one of them gets copied into your memory. We hope you enjoy the one that you are reading.”

That’s how I imagine the opening of the 2125 Physics World quantum special issue, when fully functional quantum computers are commonplace, and we have even figured out how to control individual qubits on display screens. If you are lucky enough to experience reading such a magazine, you might be disappointed that you can read only one of the articles into which the text gets projected. The problem is that by reading the superposition of articles, you made them decohere, because you copied the information about each letter into your memory. Can you figure out a way to read the others too? After all, more Physics World articles is always better.

A possible solution may be if you could restore the coherence of the text by just erasing your memory of the particular article you read. Once you no longer have information identifying which article your magazine was projected into, there is then no fundamental reason for it to remain decohered into a single state. You could then reread it to enjoy a different article.

While this thought experiment may sound fantastical, the concept is closely connected to a mind-bending twist on the famous double-slit experiment, known as the delayed-choice quantum eraser. It is often claimed to exhibit a radical phenomenon: where measurements made in the present alter events that occurred in the past. But is such a paradoxical suggestion real, even in the notoriously strange quantum realm?

A double twist on the double slit

In a standard double-slit experiment, photons are sent one by one through two slits to create an interference pattern on a screen, illustrating the wave-like behaviour of light. But if we add a detector that can spot which of the two slits the photon goes through, the interference disappears and we see only two distinct clumps on the screen, signifying particle-like behaviour. Crucially, gaining information about which path the photon took changes the photon’s quantum state, from the wave-like interference pattern to the particle-like clumps.

The first twist on this thought experiment is attributed to proposals from physicist John Wheeler in 1978, and a later collaboration with Wojciech Zurek in 1983. Wheeler’s idea was to delay the measurement of which slit the photon goes through. Instead of measuring the photon as it passes through the double-slit, the measurement could be delayed until just before the photon hits the screen. Interestingly, the delayed detection of which slit the photon goes through still determines whether or not it displays the wave-like or particle-like behaviour. In other words, even a detection done long after the photon has gone through the slit determines whether or not that photon is measured to have interfered with itself.

If that’s not strange enough, the delayed-choice quantum eraser is a further modification of this idea. First proposed by American physicists Marlan Scully and Kai Drühl in 1982 (Phys. Rev. A 25 2208), it was later experimentally implemented by Yoon-Ho Kim and collaborators using photons in 2000 (Phys. Rev. Lett. 84 1). This variation adds a second twist: if recording which slit the photon passes through causes it to decohere, then what happens if we were to erase that information? Imagine shrinking the detector to a single qubit that becomes entangled with the photon: “left” slit might correlate to the qubit being 0, “right” slit to 1. Instead of measuring whether the qubit is a 0 or 1 (revealing the path), we could measure it in a complementary way, randomising the 0s and 1s (erasing the path information).

1 Delayed detections, path revelations and complementary measurements

Detailed illustration explaining the quantum eraser effect
(Courtesy: Mayank Shreshtha)

This illustration depicts how the quantum eraser restores the wave-like behaviour of photons in a double-slit experiment, using 3D-glasses as an analogy.

The top left box shows the set-up for the standard double-slit experiment. As there are no detectors at the slits measuring which pathway a photon takes, an interference pattern emerges on the screen. In box 1, detectors are present at each slit; because they measure which slit the photon passed through, the interference pattern is destroyed. Boxes 2 and 3 show that erasing the “which-slit” information restores the interference patterns. This is done by separating out the photons using the eraser, represented here by the red and blue filters of the 3D glasses. Box 4 shows that the overall pattern with the eraser has no interference, identical to the pattern seen in box 1.

In boxes 2, 3 and 4, a detector qubit measures “which-slit” information, with states |0> for left and |1> for right. These are points on the z-axis of the “Bloch sphere”, an abstract representation of the qubit. Then the eraser measures the detector qubit in a complementary way, along the x-axis of the Bloch sphere. This destroys the “which-slit information”, but reveals the red and blue lens information used to filter the outcomes, as depicted in the image of the 3D glasses.

Strikingly, while the screen still shows particle-like clumps overall, these complementary measurements of the single-qubit detector can actually be used to extract a wave-like interference pattern. This works through a sorting process: the two possible outcomes of the complementary measurements are used to separate out the photon detections on the screen. The separated patterns then each individually show bright and dark fringes.
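
How the sorting works can be seen in a minimal numerical sketch (a toy model rather than a simulation of the actual experiment): each slit contributes a Gaussian amplitude with an opposite phase gradient at the screen, and a single detector qubit records the path. The widths and phase gradients below are arbitrary illustrative choices.

```python
import numpy as np

# Toy far-field model: screen positions and one amplitude per slit.
x = np.linspace(-10, 10, 2001)          # screen position, arbitrary units

def slit_amplitude(x, phase_gradient, width=4.0):
    """Gaussian envelope with a position-dependent phase from one slit."""
    return np.exp(-x**2 / (2 * width**2)) * np.exp(1j * phase_gradient * x)

psi_L = slit_amplitude(x, +1.0)         # path entangled with detector |0>
psi_R = slit_amplitude(x, -1.0)         # path entangled with detector |1>

# Joint state (|L>|0> + |R>|1>)/sqrt(2).
# Reading the detector in the 0/1 (path) basis gives the incoherent sum:
P_clumps = 0.5 * (np.abs(psi_L)**2 + np.abs(psi_R)**2)

# "Erasing": measure the detector along the x-axis of the Bloch sphere instead.
# Conditioning the screen hits on the |+> or |-> outcome sorts them into
# fringes and anti-fringes (the two "lenses" of the 3D-glasses analogy).
P_plus  = np.abs(psi_L + psi_R)**2 / 4
P_minus = np.abs(psi_L - psi_R)**2 / 4

# The sorted patterns always add back up to the fringe-free total,
# so nothing already recorded on the screen is rewritten.
assert np.allclose(P_plus + P_minus, P_clumps)

centre = np.abs(x) < 2                  # compare fringe contrast near x = 0
visibility = lambda P: (P[centre].max() - P[centre].min()) / (P[centre].max() + P[centre].min())
print(f"visibility: sorted {visibility(P_plus):.2f}, unsorted {visibility(P_clumps):.2f}")
```

The final two lines make the point numerically: each sorted pattern shows high-contrast fringes, while their sum, which is all that is visible without the detector outcomes, shows essentially none.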

I like to visualize this using a pair of 3D glasses, with one blue and one red lens. Each colour lens reveals a different individual image, like the two separate interference patterns. Without the 3D glasses, you see only the overall sum of the images. In the quantum eraser experiment, this sum of the images is a fully decohered pattern, with no trace of interference. Having access to the complementary measurements of the detector is like getting access to the 3D glasses: you now get an extra tool to filter out the two separate interference patterns.

Rewriting the past – or not?

If erasing the information at the detector lets us extract wave-like patterns, it may seem like we’ve restored wave-like behaviour to an already particle-like photon. That seems truly head-scratching. However, Jonte Hance, a quantum physicist at Newcastle University in the UK, highlights a different conclusion, focused on how the individual interference patterns add up to show the usual decohered pattern. “They all feel like they shouldn’t be able to fit together,” Hance explains. “It’s really showing that the correlations you get through entanglement have to be able to fit every possible way you could measure a system.” The results therefore reveal an intriguing aspect of quantum theory – the rich, counterintuitive structure of quantum correlations from entanglement – rather than past influences.

Even Wheeler himself did not believe the thought experiment implies backward-in-time influence, as explained by Lorenzo Catani, a researcher at the International Iberian Nanotechnology Laboratory (INL) in Portugal. Commenting on the history of the thought experiment, Catani notes that “Wheeler concluded that one must abandon a certain type of realism – namely, the idea that the past exists independently of its recording in the present. As far as I know, only a minority of researchers have interpreted the experiment as evidence for retrocausality.”

Eraser vs Bell: a battle of the bizarre

One physicist who is attempting to unpack this problem is Johannes Fankhauser at the University of Innsbruck, Austria. “I’d heard about the quantum eraser, and it had puzzled me a lot because of all these bizarre claims of backwards-in-time influence”, he explains. “I see something that sounds counterintuitive and puzzling and bizarre and then I want to understand it, and by understanding it, it gets a bit demystified.”

Fankhauser realized that the quantum eraser set-up can be translated into a very standard Bell experiment. These experiments are based on entangling a pair of qubits, the idea being to rule out local “hidden-variable” models of quantum theory. This led him to see that there is no need to explain the eraser using backwards-in-time influence, since the related Bell experiments can be understood without it, as explained in his 2017 paper (Quanta 8 44). Fankhauser then further analysed the thought experiment using the de Broglie–Bohm interpretation of quantum theory, which gives a physical model for the quantum wavefunction (as particles are guided by a “pilot” wave). Using this, he showed explicitly that the outcomes of the eraser experiment can be fully explained without requiring backwards-in-time influences.

So does that mean that the eraser doesn’t tell us anything else beyond what Bell experiments already tell us? Not quite. “It turns different knobs than the Bell experiment,” explains Fankhauser. “I would say it asks the question ‘what do measurements signify?’, and ‘when can I talk about the system having a property?’. That’s an interesting question and I would say we don’t have a full answer to this.”

In particular, the eraser demonstrates the importance that the very act of observation has on outcomes, with the detector playing the role of an observer. “You measure some of its properties, you change another property,” says Fankhauser. “So the next time you measure it, the new property was created through the observation. And I’m trying to formalize this now more concretely. I’m trying to come up with a new approach and framework to study these questions.”

Meanwhile, Catani found an intriguing contrast between Bell experiments and the eraser in his research. “The implications of Bell’s theorem are far more profound,” says Catani. In the 2023 paper (Quantum 7 1119) he co-authored, Catani considers a model for classical physics, with an extra condition: there is a restriction on what you can know about the underlying physical states. Applying this model to the quantum eraser, he finds that its results can be reproduced by such a classical theory. By contrast, the classical model cannot reproduce the statistical violations of a Bell experiment. This shows that having incomplete knowledge of the physical state is not, by itself, enough to explain the strange results of the Bell experiment. Bell experiments therefore demonstrate a more powerful deviation from classical physics than the eraser does. Catani also contrasts the mathematical rigour of the two cases. While Bell experiments are based on explicitly formulated assumptions, claims about backwards-in-time influence in the quantum eraser rely on a particular narrative – one that gives rise to the apparent paradox.

The eraser as a brainteaser

Physicists therefore broadly agree that the mathematics of the quantum eraser thought experiment fits well within standard quantum theory. Even so, Hance argues that formal results alone are not the entire story: “This is something we need to pick apart, not just in terms of mathematical assumptions, but also in terms of building intuitions for us to be able to actually play around with what quantumness is.” Hance has been analysing the physical implications of different assumptions in the thought experiment, with some options discussed in his 2021 preprint (arXiv:2111.09347) with collaborators on the quantum eraser paradox.

The eraser therefore provides a tool for understanding how quantum correlations match up in a way that is not described by classical physics. “It’s a great thinking aid – partly brainteaser, partly demonstration of the nature of this weirdness.”

Information, observers and quantum computers

Every quantum physicist takes something different from the quantum eraser, whether it is a spotlight on the open problems surrounding the properties of measured systems; a lesson from history in mathematical rigour; or a counterintuitive puzzle to make sense of. For a minority that deviate from standard approaches to quantum theory, it may even be some form of backwards-in-time influence.

For myself, as explained in my video on YouTube and my 2023 paper (IEEE International Conference on Quantum Computing and Engineering 10.1109/QCE57702.2023.20325) on quantum thought experiments, the most dramatic implication of the quantum eraser is explaining the role of observers in the double-slit experiment. The quantum eraser emphasizes that even a single entanglement between qubits will cause decoherence, whether or not it is measured afterwards – meaning that no mysterious macroscopic observer is required. This also explains why building a quantum computer is so challenging, as unwanted entanglement with even one particle can cause the whole computation to collapse into a random state.

The quantum eraser emphasizes that even a single entanglement between qubits will cause decoherence, whether or not it is measured afterwards – meaning that no mysterious macroscopic observer is required
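
The point can be made concrete with a few lines of linear algebra, in a minimal sketch where a single “environment” qubit stands in for the detector and is never measured at all:

```python
import numpy as np

# A qubit in |+> = (|0> + |1>)/sqrt(2) has full off-diagonal coherence.
ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)
rho_before = np.outer(plus, plus)                  # density matrix of |+>

# Entangle it with one environment qubit (initially |0>) via a CNOT.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
joint = CNOT @ np.kron(plus, ket0)                 # (|00> + |11>)/sqrt(2)
rho_joint = np.outer(joint, joint)

# Ignore (trace out) the environment qubit: the system's coherence is gone.
rho_after = rho_joint.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(rho_before)   # [[0.5, 0.5], [0.5, 0.5]]  -- coherent superposition
print(rho_after)    # [[0.5, 0. ], [0. , 0.5]]  -- decohered, no measurement needed
```

Whether or not anyone ever reads out the second qubit makes no difference to the first qubit’s reduced state, which is exactly the sense in which no macroscopic observer is needed.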

Where does this leave the futuristic readers of our 200-year double-quantum special issue of Physics World? Simply erasing their memories is not enough to restore the quantum behaviour of the article, and it is too late to change which article was selected. However, by following an eraser-type protocol, our futurists can do one better than those sneaky magazine writers: they can use the outcomes of complementary measurements on their memory to sort the article into two individual smaller articles, each displaying its own quantum entanglement structure that was otherwise hidden. So even if you can’t use the quantum eraser to rewrite the past, perhaps it can rewrite what you read in the future.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

The post The quantum eraser doesn’t rewrite the past – it rewrites observers appeared first on Physics World.

Has bismuth been masquerading as a topological material?

27 mai 2025 à 14:00

Bismuth has puzzled scientists for nearly 20 years. Notably, the question of whether it is topological – that is, whether electrons behave differently on its surface than they do inside it – gets different answers depending on whether you ask a theorist or an experimentalist. Researchers in Japan now say they have found a way to resolve this conflict. A mechanism called surface relaxation, they report, may have masked or “blocked” bismuth’s true topological nature.

The classic way of describing topology is to compare objects that have a hole, such as a doughnut or a coffee mug, with objects that don’t, such as a muffin. Although we usually think of doughnuts as having more in common with muffins than with mugs – you can’t eat a mug – the fact that they have the same number of holes means the mug and doughnut share topological features that the muffin does not.

While no-one has ever wondered whether they can eat an electron, scientists have long been curious about whether materials conduct electricity. As it turns out, topology is one way of answering that question.

“Previously, people classified materials as metallic or insulating,” says Yuki Fuseya, a quantum solid state physicist at Kobe University. Beginning in the 2000s, however, Fuseya says scientists started focusing more on the topology of the electrons’ complex wavefunctions. This enriched our understanding of how materials behave, because wavefunctions with apparently different shapes can share important topological features.

For example, if the topology of certain wavefunctions on a material’s surface corresponds to that of apparently different wavefunctions within its bulk, the material may be insulating in its bulk, yet still able to conduct electricity on its surface. Materials with this property are known as topological insulators, and they have garnered a huge amount of interest due to the possibility of exploiting them in quantum computing, spintronics and magnetic devices.

Topological or not topological

While it’s not possible to measure the topology of wavefunctions directly, it is generally possible to detect whether a material supports certain surface states. This information can then be used to infer something about its bulk using the so-called bulk-edge state correspondence.

In bismuth, the existence of these surface states ought to indicate that the bulk material is topologically trivial. However, experiments have delivered conflicting information.

Fuseya was intrigued. “If you look at the history of solid-state physics, many physical phenomena were found firstly in bismuth,” he tells Physics World. Examples include diamagnetism, the Seebeck effect and the Shubnikov-de Haas effect, as well as phenomena related to the giant spin Hall effect and the potential for Turing patterns that Fuseya discovered himself. “That’s one of the reasons why I am so interested in bismuth,” he says.

Fuseya’s interest attracted colleagues with different specialisms. Using density functional theory, Rikako Yaguchi of the University of Electro-Communications in Tokyo calculated that layers of bismuth’s crystal lattice expand, or relax, by 3-6% towards the surface. According to Fuseya, this might not have seemed noteworthy. However, since the team was already looking at bismuth’s topological properties, another colleague, Kazuki Koie, went ahead and calculated how this lattice expansion changed the material’s surface wavefunction.

These calculations showed that the expansion is, in fact, significant. This is because bismuth is close to the topological transition point, where a change in parameters can flip the shape of the wavefunction and give topological properties to a material that was once topologically trivial. Consequently, the reason it is not possible to observe surface states indicating that bulk bismuth is topologically trivial is that the material is effectively different – and topologically non-trivial – on its surface.

Topological blocking

Although “very surprised” at first, Fuseya says that after examining the physics in more detail, they found the result “quite reasonable”. They are now looking for evidence of similar “topological blocking” in other materials near the transition point, such as lead telluride and tin telluride.

“It is remarkable that there are still big puzzles when trying to match data to the theoretical predictions,” says Titus Mangham Neupert, a theoretical physicist at the University of Zurich, Switzerland, who was not directly involved in the research. Since “so many compounds that made the headlines in topological physics” contain bismuth, Neupert says it will be interesting to re-evaluate existing experiments and conceive new ones. “In particular, the implication for higher-order topology could be tested,” he says.

Fuseya’s team is already studying how lattice relaxation might affect hinges where two surfaces come together. In doing so, they hope to understand why angle resolved photoemission spectroscopy (ARPES), which probes surfaces, yields results that contradict those from scanning tunnelling microscopy experiments, which probe hinges. “Maybe we can find a way to explain every experiment consistently,” Fuseya says. The insights they gain, he adds, might also be useful for topological engineering: by bending a material, scientists could alter its lattice constants, and thereby tailor its topological properties.

This aspect also interests Zeila Zanolli and Matthieu Verstraete of Utrecht University in the Netherlands. Though not involved in the current study, they had previously shown that free-standing two-dimensional bismuth (bismuthene) can take on several geometric structures in-plane – not all of which are topological – depending on the material’s strain, bonding coordination and directionality. The new work, they say, “opens the way to (computational) design of topological materials, playing with symmetries, strain and the substrate interface”.

The research is published in Physical Review B.

The post Has bismuth been masquerading as a topological material? appeared first on Physics World.

Proton arc therapy eliminates hard-to-treat cancer with minimal side effects

27 mai 2025 à 09:30

Head-and-neck cancers are difficult to treat with radiation therapy because they are often located close to organs that are vital for patients to maintain a high quality-of-life. Radiation therapy can also alter a person’s shape, through weight loss or swelling, making it essential to monitor such changes throughout the treatment to ensure effective tumour targeting.

Researchers from Corewell Health William Beaumont University Hospital have now used a new proton therapy technique called step-and-shoot proton arc therapy (a spot-scanning proton arc method) to treat head-and-neck cancer in a human patient – the first person in the US to receive this highly accurate treatment.

“We envisioned that this technology could significantly improve the quality of treatment plans for patients and the treatment efficiency compared with the current state-of-the-art technique of intensity-modulated proton therapy (IMPT),” states senior author Xuanfeng Ding.

Progression towards dynamic proton arc therapy

“The first paper on spot-scanning proton arc therapy was published in 2016 and the first prototype for it was built in 2018,” says Ding. However, step-and-shoot proton arc therapy is an interim solution towards a more advanced technique known as dynamic proton arc therapy – which delivered its first pre-clinical treatment in 2024. Dynamic proton arc therapy is still undergoing development and regulatory approval clearance, so researchers have chosen to use step-and-shoot proton arc therapy clinically in the meantime.

Other proton therapies are more manual in nature and require a lot of monitoring, but the step-and-shoot technology delivers radiation directly to a tumour in a more continuous and automated fashion, with less lag time between radiation dosages. “Step-and-shoot proton arc therapy uses more beam angles per plan compared to the current clinical practice using IMPT and optimizes the spot and energy layers sparsity level,” explains Ding.

The extra beam angles provide a greater degree-of-freedom to optimize the treatment plan and provide a better dose conformity, robustness and linear energy transfer (LET, the energy deposited by ionizing radiation) through a more automated approach. During treatment delivery, the gantry rotates to each beam angle and stops to deliver the treatment irradiation.

In the dynamic proton arc technique that is also being developed, the gantry rotates continuously while irradiating the proton spot or switching energy layer. The step-and-shoot proton arc therapy therefore acts as an interim stage that is allowing more clinical data to be acquired to help dynamic proton arc therapy become clinically approved. The pinpointing ability of these proton therapies enables tumours to be targeted more precisely without damaging surrounding healthy tissue and organs.

The first clinical treatment

The team trialled the new technique on a patient with adenoid cystic carcinoma in her salivary gland – a rare and highly invasive cancer that’s difficult to treat as it targets the nerves in the body. This tendency to target nerves also means that fighting such tumours typically causes a lot of side effects. Using the new step-and-shoot proton arc therapy, however, the patient experienced minimal side effects and no radiation toxicity to other areas of her body (including the brain) after 33 treatments. Since finishing her treatment in August 2024, she continues to be cancer-free.

Tiffiney Beard and Rohan Deraniyagala
First US patient Tiffiney Beard, who underwent step-and-shoot proton arc therapy to treat her rare head-and-neck cancer, at a follow-up appointment with Rohan Deraniyagala. (Courtesy: Emily Rose Bennett, Corewell Health)

“Radiation to the head-and-neck typically results in dryness of the mouth, pain and difficulty swallowing, abnormal taste, fatigue and difficulty with concentration,” says Rohan Deraniyagala, a Corewell Health radiation oncologist involved with this research. “Our patient had minor skin irritation but did not have any issues with eating or performing at her job during treatment and for the last year since she was diagnosed.”

Describing the therapeutic process, Ding tells Physics World that “we developed an in-house planning optimization algorithm to select spot and energy per beam angle so the treatment irradiation time could be reduced to four minutes. However, because the gantry still needs to stop at each beam angle, the total treatment time is about 16 minutes per fraction.”

On monitoring the progression of the tumour over time and developing treatment plans, Ding confirms that the team “implemented a machine-learning-based synthetic CT platform which allows us to track the daily dosage of radiation using cone-beam computed tomography (CBCT) so that we can schedule an adaptive treatment plan for the patient.”

On the back of this research, Ding says that the next step is to help further develop the dynamic proton arc technique – known as DynamicARC – in collaboration with industry partner IBA.

The research was published in the International Journal of Particle Therapy.

The post Proton arc therapy eliminates hard-to-treat cancer with minimal side effects appeared first on Physics World.

Superconducting microwires detect high-energy particles

23 mai 2025 à 10:10

Arrays of superconducting wires have been used to detect beams of high-energy charged particles. Much thinner wires are already used to detect single photons, but this latest incarnation uses thicker wires that can absorb the large amounts of energy carried by fast-moving protons, electrons, and pions. The new detector was created by an international team led by Cristián Peña at Fermilab.

In a single-photon detector, an array of superconducting nanowires is operated below the critical temperature for superconductivity – with current flowing freely through the nanowires. When a nanowire absorbs a photon it creates a hotspot that temporarily destroys superconductivity and boosts the electrical resistance. This creates a voltage spike across the nanowire, allowing the location and time of the photon detection to be determined very precisely.
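
As a rough illustration of how such a voltage spike is turned into a timestamp, here is a toy readout sketch; the pulse shape, noise level, sampling rate and threshold are invented for illustration rather than taken from the detectors described here.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

dt = 0.01e-9                          # 10 ps sampling interval
t = np.arange(0.0, 200e-9, dt)        # a 200 ns readout trace
t_hit = 60e-9                         # "true" absorption time

# The hotspot briefly makes the wire resistive: model the resulting voltage
# pulse as a fast rise followed by an exponential recovery (arbitrary units).
rise, fall = 0.5e-9, 20e-9
pulse = np.where(t >= t_hit,
                 np.exp(-(t - t_hit) / fall) - np.exp(-(t - t_hit) / rise),
                 0.0)
trace = pulse + rng.normal(0.0, 0.02, t.size)      # add readout noise

# Time-tag the event with a simple leading-edge discriminator.
threshold = 0.3
first_crossing = np.argmax(trace > threshold)
print(f"tagged at {t[first_crossing] * 1e9:.2f} ns (true hit at {t_hit * 1e9:.1f} ns)")
```

In a real device, a pulse like this provides the arrival time, while the identity of the wire that fired provides the position of the hit.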

“These detectors have emerged as the most advanced time-resolved single-photon sensors in a wide range of wavelengths,” Peña explains. “Applications of these photon detectors include quantum networking and computing, space-to-ground communication, exoplanet exploration and fundamental probes for new physics such as dark matter.”

A similar hotspot is created when a high-energy charged particle strikes a superconducting wire. In principle, this effect could form the basis of particle detectors for experiments at labs such as Fermilab and CERN.

New detection paradigm

“As with photons, the ability to detect charged particles with high spatial and temporal precision, beyond what traditional sensing technologies can offer, has the potential to propel the field of high-energy physics towards a new detection paradigm,” Peña explains.

However, the nanowire single-photon detector design is not appropriate for detecting charged particles. Unlike photons, charged particles do not deposit all of their energy at a single point in a wire. Instead, the energy can be spread out along a track, which becomes longer as particle energy increases. Also, at the relativistic energies reached at particle accelerators, the nanowires used in single-photon detectors are too thin to collect the energy required to trigger a particle detection.

To create their new particle detector, Peña’s team used the latest advances in superconductor fabrication. On a thin film of tungsten silicide, they deposited an 8×8, 2 mm² array of micron-thick superconducting wires.

Tested at Fermilab

To test out their superconducting microwire single-photon detector (SMSPD), they used it to detect high-energy particle beams generated at the Fermilab Test Beam Facility. These included a 12 GeV beam of protons and 8 GeV beams of electrons and pions.

“Our study shows for the first time that SMSPDs are sensitive to protons, electrons, and pions,” Peña explains. “In fact, they behave very similarly when exposed to different particle types. We measured almost the same detection efficiency, as well as spatial and temporal properties.”

The team now aims to develop a deeper understanding of the physics that unfolds as a charged particle passes through a superconducting microwire. “That will allow us to begin optimizing and engineering the properties of the superconducting material and sensor geometry to boost the detection efficiency, the position and timing precision, as well as optimize for the operating temperature of the sensor,” Peña says. With further improvements, SMSPDs could become an integral part of high-energy physics experiments – perhaps paving the way for a deeper understanding of fundamental physics.

The research is described in the Journal of Instrumentation.

The post Superconducting microwires detect high-energy particles appeared first on Physics World.

What is meant by neuromorphic computing – a webinar debate

23 mai 2025 à 10:08
AI circuit board
(Courtesy: Shutterstock/metamorworks)

There are two main approaches to what we consider neuromorphic computing. The first involves emulating biological neural processing systems using computational substrates whose physics gives them similar properties and constraints to real neural systems, with the potential for denser structures and lower energy costs. The other simulates neural processing systems on scalable architectures that allow large neural networks to be simulated with a higher degree of abstraction, arbitrary precision, high resolution, and no constraints imposed by the physics of the computing medium.

Both may be required to advance the field, but is either approach ‘better’? Hosted by Neuromorphic Computing and Engineering, this webinar will see teams of leading experts in the field of neuromorphic computing argue the case for either approach, overseen by an impartial moderator.

Speakers image. Left to right: Elisa Donati, Jennifer Hasler, Catherine (Katie) Schuman, Emre Neftci, Giulia D’Angelo
Left to right: Elisa Donati, Jennifer Hasler, Catherine (Katie) Schuman, Emre Neftci, Giulia D’Angelo

Team emulation:
Elisa Donati. Elisa’s research interests aim at designing neuromorphic circuits that are ideally suited for interfacing with the nervous system and show how they can be used to build closed-loop hybrid artificial and biological neural processing systems.  She is also involved in the development of neuromorphic hardware and software systems able to mimic the functions of biological brains to apply for medical and robotics applications.

Jennifer Hasler received her BSE and MS degrees in electrical engineering from Arizona State University in August 1991. She received her PhD in computation and neural systems from the California Institute of Technology in February 1997. Jennifer is a professor at the Georgia Institute of Technology in the School of Electrical and Computer Engineering; Atlanta is the coldest climate in which she has lived. Jennifer founded the Integrated Computational Electronics (ICE) laboratory at Georgia Tech, a laboratory affiliated with the Laboratories for Neural Engineering. She is a member of Tau Beta Pi, Eta Kappa Nu, and the IEEE.

Team simulation:
Catherine (Katie) Schuman is an assistant professor in the Department of Electrical Engineering and Computer Science at the University of Tennessee (UT). She received her PhD in computer science from UT in 2015, where she completed her dissertation on the use of evolutionary algorithms to train spiking neural networks for neuromorphic systems. Katie previously served as a research scientist at Oak Ridge National Laboratory, where her research focused on algorithms and applications of neuromorphic systems. Katie co-leads the TENNLab Neuromorphic Computing Research Group at UT. She has authored more than 70 publications and holds seven patents in the field of neuromorphic computing. She received the Department of Energy Early Career Award in 2019. Katie is a senior member of the Association for Computing Machinery and the IEEE.

Emre Neftci received his MSc degree in physics from EPFL in Switzerland, and his PhD in 2010 at the Institute of Neuroinformatics at the University of Zurich and ETH Zurich. He is currently an institute director at the Jülich Research Centre and professor at RWTH Aachen. His current research explores the bridges between neuroscience and machine learning, with a focus on the theoretical and computational modelling of learning algorithms that are best suited to neuromorphic hardware and non-von Neumann computing architectures.

Discussion chair:
Giulia D’Angelo is currently a Marie Skłodowska-Curie postdoctoral fellow at the Czech Technical University in Prague, where she focuses on neuromorphic algorithms for active vision. She obtained a bachelor’s degree in biomedical engineering from the University of Genoa and a master’s degree in neuroengineering with honours. During her master’s, she developed a neuromorphic system for the egocentric representation of peripersonal visual space at King’s College London. She earned her PhD in neuromorphic algorithms at the University of Manchester, receiving the President’s Doctoral Scholar Award, in collaboration with the Event-Driven Perception for Robotics Laboratory at the Italian Institute of Technology. There, she proposed a biologically plausible model for event-driven, saliency-based visual attention. She was recently awarded the Marie Skłodowska-Curie Fellowship to explore sensorimotor contingency theories in the context of neuromorphic active vision algorithms.

About this journal
Neuromorphic Computing and Engineering journal cover

Neuromorphic Computing and Engineering is a multidisciplinary, open access journal publishing cutting-edge research on the design, development and application of artificial neural networks and systems from both a hardware and computational perspective.

Editor-in-chief: Giacomo Indiveri, University of Zurich, Switzerland

 

The post What is meant by neuromorphic computing – a webinar debate appeared first on Physics World.

A Martian aurora, how the universe fades away, Heisenberg on holiday, physics of fake coins

22 mai 2025 à 17:53

In this episode of the Physics World Weekly podcast I look at what’s new in the world of physics with the help of my colleagues Margaret Harris and Matin Durrani.

We begin on Mars, where NASA’s Perseverance Rover has made the first observation of an aurora from the surface of the Red Planet. Next, we look deep into the future of the universe and ponder the physics that will govern how the last stars will fade away.

Then, we run time in reverse and go back to the German island of Helgoland, where in 1925 Werner Heisenberg laid the foundations of modern quantum mechanics. The island will soon host an event celebrating the centenary and Physics World will be there.

Finally, we explore how neutrons are being used to differentiate between real and fake antique coins and chat about the Physics World Quantum Briefing 2025.

The post A Martian aurora, how the universe fades away, Heisenberg on holiday, physics of fake coins appeared first on Physics World.

Ultrasound-activated structures clear biofilms from medical implants

22 mai 2025 à 16:20

When implanted medical devices like urinary stents and catheters get clogged with biofilms, the usual solution is to take them out and replace them with new ones. Now, however, researchers at the University of Bern and ETH Zurich, Switzerland have developed an alternative. By incorporating ultrasound-activated moving structures into their prototype “stent-on-a-chip” device, they showed it is possible to remove biofilms without removing the device itself. If translated into clinical practice, the technology could increase the safe lifespan of implants, saving money and avoiding operations that are uncomfortable and sometimes hazardous for patients.

Biofilms are communities of bacterial cells that adhere to natural surfaces in the body as well as artificial structures such as catheters, stents and other implants. Because they are encapsulated by a protective, self-produced extracellular matrix made from polymeric substances, they are mechanically robust and resistant to standard antibacterial measures. If not removed, they can cause infections, obstructions and other complications.

Intense, steady flows push away impurities

The new technology, which was co-developed by Cornel Dillinger, Pedro Amado and other members of Francesco Clavica and Daniel Ahmed’s research teams, takes advantage of recent advances in the fields of robotics and microfluidics. Its main feature is a coating made from microscopic hair-like structures known as cilia. Under the influence of an acoustic field, which is applied externally via a piezoelectric transducer, these cilia begin to move. This movement produces intense, steady fluid flows with velocities of up to 10 mm/s – enough to break apart encrusted deposits (made from calcium carbonate, for example) and flush away biofilms from the inner and outer surfaces of implanted urological devices.

Microscope image showing square and diamond shapes in various shades of grey
All fouled up: Typical examples of crystals known as encrustations that develop on the surfaces of urinary stents and catheters. (Courtesy: Pedro Amado and Shaokai Zheng)

“This is a major advance compared to existing stents and catheters, which require regular replacements to avoid obstruction and infections,” Clavica says.

The technology is also an improvement on previous efforts to clear implants by mechanical means, Ahmed adds. “Our polymeric cilia in fact amplify the effects of ultrasound by allowing for an effect known as acoustic streaming at frequencies of 20 to 100 kHz,” he explains. “This frequency is lower than that possible with previous microresonator devices developed to work in a similar way that had to operate in the MHz-frequency range.”

The lower frequency achieves the desired therapeutic effects while prioritizing patient safety and minimizing the risk of tissue damage, he adds.

Wider applications

In creating their technology, the researchers were inspired by biological cilia, which are a natural feature of physiological systems such as the reproductive and respiratory tracts and the central nervous system. Future versions, they say, could apply the ultrasound probe directly to a patient’s skin, much as handheld probes of ultrasound scanners are currently used for imaging. “This technology has potential applications beyond urology, including fields like visceral surgery and veterinary medicine, where keeping implanted medical devices clean is also essential,” Clavica says.

The researchers now plan to test new coatings that would reduce contact reactions (such as inflammation) in the body. They will also explore ways of improving the device’s responsiveness to ultrasound – for example by depositing thin metal layers. “These modifications could not only improve acoustic streaming performance but could also provide additional antibacterial benefits,” Clavica tells Physics World.

In the longer term, the team hope to translate their technology into clinical applications. Initial tests that used a custom-built ultrasonic probe coupled to artificial tissue have already demonstrated promising results in generating cilia-induced acoustic streaming, Clavica notes. “In vivo animal studies will then be critical to validate safety and efficacy prior to clinical adoption,” he says.

The present study is detailed in PNAS.

The post Ultrasound-activated structures clear biofilms from medical implants appeared first on Physics World.

Former IOP president Cyril Hilsum celebrates 100th birthday

22 mai 2025 à 13:33

Cyril Hilsum, a former president of the Institute of Physics (IOP), celebrated his 100th birthday last week at a special event held at the Royal Society of Chemistry.

Born on 17 May 1925, Hilsum completed a degree in physics at University College London in 1945. During his career he worked at the Services Electronics Research Laboratory and the Royal Radar Establishment and in 1983 was appointed chief scientist of GEC Hirst Research Centre, where he later became research director before retiring aged 70.

Hilsum helped develop commercial applications for the semiconductor gallium arsenide and is responsible for creating the UK’s first semiconductor laser as well as developments that led to modern liquid crystal display technologies.

Between 1988 and 1990 he was president of the IOP, which publishes Physics World, and in 1990 was appointed a Commander of the Order of the British Empire (CBE) for “services to the electrical and electronics industry”.

Hilsum was honoured by many prizes during his career including IOP awards such as the Max Born Prize in 1987, the Faraday Medal in 1988 as well as the Richard Glazebrook Medal and Prize in 1998. In 2007 he was awarded the Royal Society’s Royal Medal “for his many outstanding contributions and for continuing to use his prodigious talents on behalf of industry, government and academe to this day”.

Cyril Hilsum at an event to mark his 100th birthday
Looking back: Hilsum examines photographs that form an exhibition charting his life. (Courtesy: Lindsey Hilsum)

Despite now being a centenarian, Hilsum still works part-time as chief science officer for Infi-tex Ltd, which produces force sensors for use in textiles.

“My birthday event was an amazing opportunity for me to greet old colleagues and friends,” Hilsum told Physics World. “Many had not seen each other since they had worked together in the distant past. It gave me a rare opportunity to acknowledge the immense contributions they had made to my career.”

Hilsum says that while the IOP gives much support to applied physics, there is still a great need for physicists “to give critical contributions to the lives of society as a whole”.

“As scientists, we may welcome progress in the subject, but all can get pleasure in seeing the results in their home, on their iPhone, or especially in their hospital!” he adds.

The post Former IOP president Cyril Hilsum celebrates 100th birthday appeared first on Physics World.

Bacteria-killing paint could dramatically improve hospital hygiene

21 mai 2025 à 17:20
Antimicrobial efficacy of chlorhexidine epoxy resin
Antimicrobial efficacy SEM images of steel surfaces inoculated with bacteria show a large bacterial concentration on surfaces painted with control epoxy resin (left) and little to no bacteria on those painted with chlorhexidine epoxy resin. (Courtesy: University of Nottingham)

Scientists have created a novel antimicrobial coating that, when mixed with paint, can be applied to a range of surfaces to destroy bacteria and viruses – including particularly persistent and difficult to kill strains like MRSA, flu virus and SARS-CoV-2. The development potentially paves the way for substantial improvements in scientific, commercial and clinical hygiene.

The University of Nottingham-led team made the material by combining chlorhexidine digluconate (CHX) – a disinfectant commonly used by dentists to treat mouth infections and by clinicians for cleaning before surgery – with everyday paint-on epoxy resin. Using this material, the team worked with staff at Birmingham-based specialist coating company Indestructible Paint to create a prototype antimicrobial paint. They found that, when dried, the coating can kill a wide range of pathogens.

The findings of the study, which was funded by the Royal Academy of Engineering Industrial Fellowship Scheme, were published in Scientific Reports.

Persistent antimicrobial protection

As part of the project, the researchers painted the antimicrobial coating onto a surface and used a range of scientific techniques to analyse the distribution of the biocide in the paint, to confirm that it remained uniformly distributed at a molecular level.

According to project leader Felicity de Cogan, the new paint can be used to provide antimicrobial protection on a wide array of plastic and hard non-porous surfaces. Crucially, it could be effective in a range of clinical environments, where surfaces like hospital beds and toilet seats can act as a breeding ground for bacteria for extended periods of time – even after the introduction of stringent cleaning regimes.

The team, based at the University’s School of Pharmacy, is also investigating the material’s use in the transport and aerospace industries, especially on frequently touched surfaces in public spaces such as aeroplane seats and tray tables.

“The antimicrobial in the paint is chlorhexidine – a biocide commonly used in products like mouthwash. Once it is added, the paint works in exactly the same way as all other paint and the addition of the antimicrobial doesn’t affect its application or durability on the surface,” says de Cogan.

In the lab Co-first author Madeline Berrow, who performed the laboratory work for the study. (Courtesy: University of Nottingham)

The researchers also note that adding CHX to the epoxy resin did not affect its optical transparency.

According to de Cogan, the novel concoction has a range of potential scientific, clinical and commercial applications.

“We have shown that it is highly effective against a range of different pathogens like E. coli and MRSA. We have also shown that it is effective against bacteria even when they are already resistant to antibiotics and biocides,” she says. “This means the technology could be a useful tool to circumvent the global problem of antimicrobial resistance.”

In de Cogan’s view, there are also a number of major advantages to using the new coating to tackle bacterial infection – especially when compared with existing approaches – further boosting the prospects of future applications.

The key advantage of the technology is that the paint is “self-cleaning” – meaning that it would no longer be necessary to carry out the arduous task of repeatedly cleaning a surface to remove harmful microbes. Instead, after a single application, the simple presence of the paint on the surface would actively and continuously kill bacteria and viruses whenever they come into contact with it.

“This means that you can be sure a surface won’t pass on infections when you touch it,” says de Cogan.

“We are looking at more extensive testing in harsher environments and long-term durability testing over months and years. This work is ongoing and we will be following up with another publication shortly,” she adds.

The post Bacteria-killing paint could dramatically improve hospital hygiene appeared first on Physics World.

Why I stopped submitting my work to for-profit publishers

21 mai 2025 à 12:00

Peer review is a cornerstone of academic publishing. It is how we ensure that published science is valid. Peer review, by which researchers judge the quality of papers submitted to journals, stops pseudoscience from being peddled as equivalent to rigorous research. At the same time, the peer-review system is under considerable strain as the number of journal articles published each year increases, jumping from 1.9 million in 2016 to 2.8 million in 2022, according to Scopus and Web of Science.

All these articles require experienced peer reviewers, and papers typically take months to go through peer review. This cannot be blamed solely on the time taken to pass manuscripts and reviews back and forth between editors and reviewers; it is largely a result of high workloads and, fundamentally, of how busy everyone is. Given that peer reviewers need to be experts in their field, the pool of potential reviewers is inherently limited. A bottleneck is emerging as the number of papers grows more quickly than the number of researchers in academia.

Scientific publishers have long been central to managing the process of peer review. For anyone outside academia, the arrangement may seem illogical, given that researchers spend their time on it with little acknowledgement. While initiatives are in place to change this, such as outstanding-reviewer awards and the Web of Science recording reviewer data, there is no guarantee that such recognition will be taken into account when researchers seek permanent positions or apply for promotion.

The impact of open access

Why, then, do we agree to review? As an active researcher myself in quantum physics, I peer-reviewed more than 40 papers last year and I’ve always viewed it as a duty. It’s a necessary time-sink to make our academic system function, to ensure that published research is valid and to challenge questionable claims. However, like anything people do out of a sense of duty, inevitably there are those who will seek to exploit it for profit.

Many journals today are open access, in which fees, known as article-processing charges, are levied to make the published work freely available online. It makes sense that costs need to be imposed – staff working at publishing companies need paying; articles need editing and typesetting; servers need to be maintained and web-hosting fees have to be paid. Recently, publishers have invested heavily in digital technology and developed new ways to disseminate research to a wider audience.

Open access, however, has encouraged some publishers to boost revenues by simply publishing as many papers as possible. At the same time, there has been an increase in retractions, especially of fabricated or manipulated manuscripts sold by “paper mills”. The rise of retractions isn’t directly linked to the emergence of open access, but it’s not a good sign, especially when the academic publishing industry reports profit margins of roughly 40% – higher than many other industries. Elsevier, for instance, publishes nearly 3000 journals and in 2023 its parent company, Relx, recorded a profit of £1.79bn. This is all money that was either paid in open-access fees or by libraries (or private users) for journal subscriptions but ends up going to shareholders rather than science.

It’s important to add that not all academic publishers are for-profit. Some, like the American Physical Society (APS), IOP Publishing, Optica, AIP Publishing and the American Association for the Advancement of Science – as well as university presses – are wings of academic societies and universities. Any profit they make is reinvested into research, education or the academic community. Indeed, IOP Publishing, AIP Publishing and the APS have formed a new “purpose-led publishing” coalition, in which the three publishers confirm that they will continue to reinvest the funds generated from publishing back into research and “never” have shareholders that result in putting “profit above purpose”.

But many of the largest publishers – the likes of Springer Nature, Elsevier, Taylor and Francis, MDPI and Wiley – are for-profit companies and are making massive sums for their shareholders. Should we just accept that this is how the system is? If not, what can we do about it and what impact can we as individuals have on a multi-billion-dollar industry? I have decided that I will no longer review for, nor submit my articles (when corresponding author) to, any for-profit publishers.

I’m lucky in my field that I have many good alternatives such as the arXiv overlay journal Quantum, IOP Publishing’s Quantum Science and Technology, APS’s Physical Review X Quantum and Optica Quantum. If your field doesn’t, then why not push for them to be created? We may not be able to dismantle the entire for-profit publishing industry, but we can stop contributing to it (especially those who have a permanent job in academia and are not as tied down by the need to publish in high impact factor journals). Such actions may seem small, but together can have an effect and push to make academia the environment we want to be contributing to. It may sound radical to take change into your own hands, but it’s worth a try. You never know, but it could help more money make its way back into science.

The post Why I stopped submitting my work to for-profit publishers appeared first on Physics World.

Visual assistance system helps blind people navigate

21 mai 2025 à 10:00
Visual assistance system The wearable system uses intuitive multimodal feedback to assist visually impaired people with daily life tasks. (Courtesy: J Tang et al. Nature Machine Intelligence 10.1038/s42256-025-01018-6, 2025, Springer Nature)

Researchers from four universities in Shanghai, China, are developing a practical visual assistance system to help blind and partially sighted people navigate. The prototype system combines lightweight camera headgear, rapid-response AI-facilitated software and artificial “skins” worn on the wrist and finger that provide distance sensing and haptic feedback. Functionality testing suggests that the integration of visual, audio and haptic senses can create a wearable navigation system that overcomes the adoptability and usability concerns of current designs.

Worldwide, 43 million people are blind, according to 2021 estimates by the International Agency for the Prevention of Blindness. Millions more are so severely visually impaired that they require the use of a cane to navigate.

Visual assistance systems offer huge potential as navigation tools, but current designs have many drawbacks and challenges for potential users. These include limited functionality with respect to the size and weight of headgear, battery life and charging issues, slow real-time processing speeds, audio command overload, high system latency that can create safety concerns, and extensive and sometimes complex learning requirements.

Innovations in miniaturized computer hardware, battery charge longevity, AI-trained software to decrease latency in auditory commands, and the addition of lightweight wearable sensory augmentation material providing near-real-time haptic feedback are expected to make visual navigation assistance viable.

The team’s prototype visual assistance system, described in Nature Machine Intelligence, incorporates an RGB-D (red, green, blue, depth) camera mounted on a 3D-printed glasses frame, ultrathin artificial skins, a commercial lithium-ion battery, a wireless bone-conducting earphone and a virtual reality training platform interfaced via triboelectric smart insoles. The camera is connected to a microcontroller via USB, enabling all computations to be performed locally without the need for a remote server.

When a user sets a target using a voice command, AI algorithms process the RGB-D data to estimate the target’s orientation and determine an obstacle-free direction in real time. As the user begins to walk to the target, bone conduction earphones deliver spatialized cues to guide them, and the system updates the 3D scene in real time.

The system’s real-time visual recognition incorporates changes in distance and perspective, and can compensate for low ambient light and motion blur. To provide robust obstacle avoidance, it combines a global threshold method with a ground interval approach to accurately detect overhead hanging, ground-level and sunken obstacles, as well as sloping or irregular ground surfaces.

First author Jian Tang of Shanghai Jiao Tong University and colleagues tested three audio feedback approaches: spatialized cues, 3D sounds and verbal instructions. They determined that spatialized cues are conveyed and understood most rapidly, while also providing precise perception of direction.

Real-world testing A visually impaired person navigates through a cluttered conference room. (Courtesy: Tang et al. Nature Machine Intelligence)

To complement the audio feedback, the researchers developed stretchable artificial skin – an integrated sensory-motor device that provides near-distance alerting. The core component is a compact time-of-flight sensor that vibrates to stimulate the skin when the distance to an obstacle or object is smaller than a predefined threshold. The actuator is designed as a slim, lightweight polyethylene terephthalate cantilever. A gap between the driving circuit and the skin promotes air circulation to improve skin comfort, breathability and long-term wearability, as well as facilitating actuator vibration.

Users wear the sensor on the back of an index or middle finger, while the actuator and driving circuit are worn on the wrist. When the artificial skin detects a lateral obstacle, it provides haptic feedback in just 18 ms.
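
The alert logic itself is simple distance gating. The sketch below is purely illustrative – the threshold value, polling rate and function names are assumptions rather than details taken from the paper – but it captures the idea of vibrating the wrist actuator whenever the fingertip time-of-flight sensor reports an obstacle closer than a set limit:

```python
import time

ALERT_THRESHOLD_M = 0.5   # assumed near-distance limit; the study's actual value may differ
POLL_INTERVAL_S = 0.01    # assumed ~100 Hz polling of the time-of-flight sensor


def read_tof_distance_m() -> float:
    """Placeholder for a distance reading from the fingertip time-of-flight sensor."""
    raise NotImplementedError


def drive_wrist_actuator(on: bool) -> None:
    """Placeholder for switching the cantilever vibration actuator on or off."""
    raise NotImplementedError


def haptic_alert_loop() -> None:
    # Vibrate whenever an obstacle is closer than the threshold; stop otherwise.
    while True:
        distance = read_tof_distance_m()
        drive_wrist_actuator(distance < ALERT_THRESHOLD_M)
        time.sleep(POLL_INTERVAL_S)
```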

The researchers tested the trained system in virtual and real-world environments, with both humanoid robots and 20 visually impaired individuals who had no prior experience of using visual assistance systems. Testing scenarios included walking to a target while avoiding a variety of obstacles and navigating through a maze. Participants’ navigation speed increased with training and proved comparable to walking with a cane. Users were also able to turn more smoothly and were more efficient at pathfinding when using the navigation system than when using a cane.

“The proficient completion of tasks mirroring real-world challenges underscores the system’s effectiveness in meeting real-life challenges,” the researchers write. “Overall, the system stands as a promising research prototype, setting the stage for the future advancement of wearable visual assistance.”

The post Visual assistance system helps blind people navigate appeared first on Physics World.

Universe may end much sooner than predicted, say theorists

20 mai 2025 à 18:28

The universe’s maximum lifespan may be considerably shorter than was previously thought, but don’t worry: there’s still plenty of time to finish streaming your favourite TV series.

According to new calculations by black hole expert Heino Falcke, quantum physicist Michael Wondrak, and mathematician Walter van Suijlekom of Radboud University in the Netherlands, the most persistent stellar objects in the universe – white dwarf stars – will decay away to nothingness in around 10^78 years. This, Falcke admits, is “a very long time”, but it’s a far cry from previous predictions, which suggested that white dwarfs could persist for at least 10^1100 years. “The ultimate end of the universe comes much sooner than expected,” he says.

Writing in the Journal of Cosmology and Astroparticle Physics, Falcke and colleagues explain that the discrepancy stems from different assumptions about how white dwarfs decay. Previous calculations of their lifetime assumed that, in the absence of proton decay (which has never been observed experimentally), their main decay process would be something called pycnonuclear fusion. This form of fusion occurs when nuclei in a crystalline lattice essentially vibrate their way into becoming fused with their nearest neighbours.

If that sounds a little unlikely, that’s because it is. However, in the dense, cold cores of white dwarf stars, and over stupendously long time periods, pycnonuclear fusion happens often enough to gradually (very, very gradually) turn the white dwarf’s carbon into nickel, which then transmutes into iron by emitting a positron. The resulting iron-cored stars are known as black dwarfs, and some theories predict that they will eventually (very, very eventually) collapse into black holes. Depending on how massive they were to start with, the whole process takes between 10^1100 and 10^32,000 years.

An alternative mechanism

Those estimates, however, do not take into account an alternative decay mechanism known as Hawking radiation. First proposed in the early 1970s by Stephen Hawking and Jacob Bekenstein, Hawking radiation arises from fluctuations in the vacuum of spacetime. These fluctuations allow particle-antiparticle pairs to pop into existence by essentially “borrowing” energy from the vacuum for brief periods before the pairs recombine and annihilate.

If this pair production happens in the vicinity of a black hole, one particle in the pair may stray over the black hole’s event horizon before it can recombine. This leaves its partner free to carry away some of the “borrowed” energy as Hawking radiation. After an exceptionally long time – but, crucially, not as long as the time required to disappear a white dwarf via pycnonuclear fusion – Hawking radiation will therefore cause black holes to dissipate.

The fate of life, the universe and everything?

But what about objects other than black holes? Well, in a previous work published in 2023, Falcke, Wondrak and van Suijlekom showed that a similar process can occur for any object that curves spacetime with its gravitational field, not just objects that have an event horizon. This means that white dwarfs, neutron stars, the Moon and even human beings can, in principle, evaporate away into nothingness via Hawking radiation – assuming that what the trio delicately call “other astrophysical evolution and decay channels” don’t get there first.

Based on this tongue-in-cheek assumption, the trio calculated that white dwarfs will dissipate in around 10^78 years, while denser objects such as black holes and neutron stars will vanish in no more than 10^67 years. Less dense objects such as humans, meanwhile, could persist for as long as 10^90 years – albeit only in a vast, near-featureless spacetime devoid of anything that would make life worth living, or indeed possible.
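
For a rough sense of where the black hole figure comes from, the textbook Hawking evaporation time for a Schwarzschild black hole, t ≈ 5120πG²M³/(ħc⁴), already gives about 10^67 years for a solar-mass object. The quick check below uses that standard formula only – it is not the trio’s more general calculation for gravitating bodies without horizons:

```python
import math

# Physical constants and a solar mass, in SI units
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34     # reduced Planck constant, J s
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
YEAR_S = 3.156e7     # seconds per year


def hawking_evaporation_time_years(mass_kg: float) -> float:
    """Evaporation time of a Schwarzschild black hole of the given mass."""
    t_seconds = 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)
    return t_seconds / YEAR_S


t = hawking_evaporation_time_years(M_SUN)
print(f"Solar-mass black hole evaporates in ~10^{math.log10(t):.0f} years")  # ~10^67
```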

While that might sound unrealistic as well as morbid, the trio’s calculations do have a somewhat practical goal. “By asking these kinds of questions and looking at extreme cases, we want to better understand the theory,” van Suijlekom says. “Perhaps one day, we [will] unravel the mystery of Hawking radiation.”

The post Universe may end much sooner than predicted, say theorists appeared first on Physics World.

Subtle quantum effects dictate how some nuclei break apart

20 mai 2025 à 14:46

Subtle quantum effects within atomic nuclei can dramatically affect how some nuclei break apart. By studying 100 isotopes with masses below that of lead, an international team of physicists uncovered a previously unknown region in the nuclear landscape where fragments of fission split in an unexpected way. This is driven not by the usual forces, but by shell effects rooted in quantum mechanics.

“When a nucleus splits apart into two fragments, the mass and charge distribution of these fission fragments exhibits the signature of the underlying nuclear structure effect in the fission process,” explains Pierre Morfouace of Université Paris-Saclay, who led the study. “In the exotic region of the nuclear chart that we studied, where nuclei do not have many neutrons, a symmetric split was previously expected. However, the asymmetric fission means that a new quantum effect is at stake.”

This unexpected discovery not only sheds light on the fine details of how nuclei break apart but also has far-reaching implications. These range from the development of safer nuclear energy to understanding how heavy elements are created during cataclysmic astrophysical events like stellar explosions.

Quantum puzzle

Fission is the process by which a heavy atomic nucleus splits into smaller fragments. It is governed by a complex interplay of forces. The strong nuclear force, which binds protons and neutrons together, competes with the electromagnetic repulsion between positively charged protons. The result is that certain nuclei are unstable, and this interplay, on its own, typically leads to symmetric fission.

But there’s another, subtler phenomenon at play: quantum shell effects. These arise because protons and neutrons inside the nucleus tend to arrange themselves into discrete energy levels or “shells,” much like electrons do in atoms.

“Quantum shell effects [in atomic electrons] play a major role in chemistry, where they are responsible for the properties of noble gases,” says Cedric Simenel of the Australian National University, who was not involved in the study. “In nuclear physics, they provide extra stability to spherical nuclei with so-called ‘magic’ numbers of protons or neutrons. Such shell effects drive heavy nuclei to often fission asymmetrically.”

In the case of very heavy nuclei, such as uranium or plutonium, this asymmetry is well documented. But in lighter, neutron-deficient nuclei – those with fewer neutrons than their stable counterparts – researchers had long expected symmetric fission, where the nucleus breaks into two roughly equal parts. This new study challenges that view.

New fission landscape

To investigate fission in this less-explored part of the nuclear chart, scientists from the R3B-SOFIA collaboration carried out experiments at the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt, Germany. They focused on nuclei ranging from iridium to thorium, many of which had never been studied before. The nuclei were fired at high energies into a lead target to induce fission.

The fragments produced in each fission event were carefully analysed using a suite of high-resolution detectors. A double ionization chamber captured the number of protons in each product, while a superconducting magnet and time-of-flight detectors tracked their momentum, enabling a detailed reconstruction of how the split occurred.

Using this method, the researchers found that the lightest fission fragments were frequently formed with 36 protons, which is the atomic number of krypton. This pattern suggests the presence of a stabilizing shell effect at that specific proton number.

“Our data reveal the stabilizing effect of proton shells at Z=36,” explains Morfouace. “This marks the identification of a new ‘island’ of asymmetric fission, one driven by the light fragment, unlike the well-known behaviour in heavier actinides. It expands our understanding of how nuclear structure influences fission outcomes.”

Future prospects

“Experimentally, what makes this work unique is that they provide the distribution of protons in the fragments, while earlier measurements in sub-lead nuclei were essentially focused on the total number of nucleons,” comments Simenel.

Since quantum shell effects are tied to specific numbers of protons or neutrons, not just the overall mass, these new measurements offer direct evidence of how proton shell structure shapes the outcome of fission in lighter nuclei. This makes the results particularly valuable for testing and refining theoretical models of fission dynamics.

“This work will undoubtedly lead to further experimental studies, in particular with more exotic light nuclei,” Simenel adds. “However, to me, the ball is now in the camp of theorists who need to improve their modelling of nuclear fission to achieve the predictive power required to study the role of fission in regions of the nuclear chart not accessible experimentally, as in nuclei formed in the astrophysical processes.”

The research is described in Nature.

The post Subtle quantum effects dictate how some nuclei break apart appeared first on Physics World.

New coronagraph pushes exoplanet discovery to the quantum limit

19 mai 2025 à 18:21
How it works Diagram showing simulated light from an exoplanet and its companion star (far left) moving through the new coronagraph. (Courtesy: Nico Deshler/University of Arizona)

A new type of coronagraph that could capture images of dim exoplanets that are extremely close to bright stars has been developed by a team led by Nico Deshler at the University of Arizona in the US. As well as boosting the direct detection of exoplanets, the new instrument could support advances in areas including communications, quantum sensing, and medical imaging.

Astronomers have confirmed the existence of nearly 6000 exoplanets, which are planets that orbit stars other than the Sun. The majority of these were discovered based on their effects on their companion stars, rather than being observed directly. This is because most exoplanets are too dim and too close to their companion stars for the exoplanet light to be differentiated from starlight. That is where a coronagraph can help.

A coronagraph is an astronomical instrument that blocks light from an extremely bright source to allow the observation of dimmer objects in the nearby sky. Coronagraphs were first developed a century ago to allow astronomers to observe the outer atmosphere (corona) of the Sun, which would otherwise be drowned out by light from the much brighter photosphere.

At the heart of a coronagraph is a mask that blocks the light from a star, while allowing light from nearby objects into a telescope. However, the mask (and the telescope aperture) will cause the light to interfere and create diffraction patterns that blur tiny features. This prevents the observation of dim objects that are closer to the star than the instrument’s inherent diffraction limit.
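
To put a number on that limit, the Rayleigh criterion for a circular aperture gives an angular resolution of roughly θ ≈ 1.22λ/D. The illustrative calculation below (the 550 nm wavelength and 2.4 m aperture are assumptions chosen for scale, not parameters from the study) shows why most known exoplanets sit inside the diffraction limit of conventional coronagraphs:

```python
import math


def rayleigh_limit_arcsec(wavelength_m: float, aperture_m: float) -> float:
    """Rayleigh diffraction limit of a circular aperture, in arcseconds."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600


# Visible light (550 nm) through a 2.4 m aperture, assumed purely for illustration
print(f"{rayleigh_limit_arcsec(550e-9, 2.4):.3f} arcsec")  # ~0.058 arcsec

# For comparison, a planet orbiting 1 au from a star 10 parsecs away is separated
# from it by only ~0.1 arcsec, so tighter orbits or more distant systems fall
# below this limit.
```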

Off limits

Most exoplanets lie within the diffraction limit of today’s coronagraphs and Deshler’s team addressed this problem using two spatial mode sorters. The first device uses a sequence of optical elements to separate starlight from light originating from the immediate vicinity of the star. The starlight is then blocked by a mask while the rest of the light is sent through a second spatial mode sorter, which reconstructs an image of the region surrounding the star.

As well as offering spatial resolution below the diffraction limit, the technique approaches the fundamental limit on resolution that is imposed by quantum mechanics.

“Our coronagraph directly captures an image of the surrounding object, as opposed to measuring only the quantity of light it emits without any spatial orientation,” Deshler describes. “Compared to other coronagraph designs, ours promises to supply more information about objects in the sub-diffraction regime – which lie below the resolution limits of the detection instrument.”

To test their approach, Deshler and colleagues simulated an exoplanet orbiting at a sub-diffraction distance from a host star some 1000 times brighter. After passing the light through the spatial mode sorters, they could resolve the exoplanet’s position – which would have been impossible with any other coronagraph.

Context and composition

The team believe that their technique will improve astronomical images. “These images can provide context and composition information that could be used to determine exoplanet orbits and identify other objects that scatter light from a star, such as exozodiacal dust clouds,” Deshler says.

The team’s coronagraph could also have applications beyond astronomy. With the ability to detect extremely faint signals close to the quantum limit, it could help to improve the resolution of quantum sensors. This could lead to new methods for detecting tiny variations in magnetic or gravitational fields.

Elsewhere, the coronagraph could help to improve non-invasive techniques for imaging living tissue on the cellular scale – with promising implications for medical applications such as early cancer detection and the imaging of neural circuits. Another potential use is in multiplexing techniques for optical communications, where the coronagraph could differentiate between overlapping signals – potentially boosting the rate at which data can be transferred between satellites and ground-based receivers.

The research is described in Optica.

The post New coronagraph pushes exoplanet discovery to the quantum limit appeared first on Physics World.

Miniaturized pixel detector characterizes radiation quality in clinical proton fields

19 mai 2025 à 15:07
Experimental setup Top: schematic and photo of the setup for measurements behind a homogeneous phantom. Bottom: IMPT treatment plan for the head phantom (left); the detector sensor position (middle, sensor thickness not to scale); and the setup for measurements behind the phantom (right). (Courtesy: Phys. Med. Biol. 10.1088/1361-6560/adcaf9)

Proton therapy is a highly effective and conformal cancer treatment. Proton beams deposit most of their energy at a specific depth – the Bragg peak – and then stop, enabling proton treatments to destroy tumour cells while sparing surrounding normal tissue. To further optimize the clinical treatment planning process, there’s recently been increased interest in considering the radiation quality, quantified by the proton linear energy transfer (LET).

LET – defined as the mean energy deposited by a charged particle over a given distance – increases towards the end of the proton range. Incorporating LET as an optimization parameter could better exploit the radiobiological properties of protons, by reducing LET in healthy tissue, while maintaining or increasing it within the target volume. This approach, however, requires a method for experimental verification of proton LET distributions and patient-specific quality assurance in terms of proton LET.

To meet this need, researchers at the Institute of Nuclear Physics, Polish Academy of Sciences have used the miniaturized semiconductor pixel detector Timepix3 to perform LET characterization of intensity-modulated proton therapy (IMPT) plans in homogeneous and heterogeneous phantoms. They report their findings in Physics in Medicine & Biology.

Experimental validation

First author Paulina Stasica-Dudek and colleagues performed a series of experiments in a gantry treatment room at the Cyclotron Centre Bronowice (CCB), a proton therapy facility equipped with a proton cyclotron accelerator and pencil-beam scanning system that provides IMPT for up to 50 cancer patients per day.

The MiniPIX Timepix3 is a radiation imaging pixel detector based on the Timepix3 chip developed at CERN within the Medipix collaboration (provided commercially by Advacam). It provides quasi-continuous single particle tracking, allowing particle type recognition and spectral information in a wide range of radiation environments.

For this study, the team used a Timepix3 detector with a 300 µm-thick silicon sensor operated as a miniaturized online radiation camera. To overcome the problem of detector saturation in the relatively high clinical beam currents, the team developed a pencil-beam scanning method with the beam current reduced to the picoampere (pA) level.

The researchers used Timepix3 to measure the deposited energy and LET spectra for spread-out Bragg peak (SOBP) and IMPT plans delivered to a homogeneous water-equivalent slab phantom, with each plan energy layer irradiated and measured separately. They also performed measurements on an IMPT plan delivered to a heterogeneous head phantom. For each scenario, they used a Monte Carlo (MC) code to simulate the corresponding spectra of deposited energy and LET for comparison.

The team first performed a series of experiments using a homogeneous phantom irradiated with various fields, mimicking patient-specific quality assurance procedures. The measured and simulated dose-averaged LET (LETd) and LET spectra agreed to within a few percent, demonstrating proper calibration of the measurement methodology.
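
Dose-averaged LET is simply a dose-weighted mean over the LET spectrum, LET_d = Σ d_i L_i / Σ d_i. The sketch below shows how such a value could be estimated from per-event data of the kind a Timepix3-style tracker provides (deposited energy and track length per particle); the variable names, the energy weighting used as a proxy for dose and the omission of the silicon-to-water conversion reported in the paper are all simplifying assumptions, not the authors’ analysis code:

```python
def dose_averaged_let(deposited_energy_keV, track_length_um):
    """Dose-averaged LET (keV/µm) from per-event deposited energies and path lengths.

    Each event's LET is its deposited energy divided by its path length in the
    sensor; the average weights each event by its deposited energy as a proxy
    for the dose it contributes.
    """
    lets = [e / l for e, l in zip(deposited_energy_keV, track_length_um)]
    total_energy = sum(deposited_energy_keV)
    return sum(e * let for e, let in zip(deposited_energy_keV, lets)) / total_energy


# Toy example with three events (not measured data)
print(dose_averaged_let([100.0, 250.0, 80.0], [70.0, 160.0, 50.0]))  # ~1.5 keV/µm
```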

The researchers also performed an end-to-end test in a heterogeneous CIRS head phantom, delivering a single field of an IMPT plan to a central 4 cm-diameter target volume in 13 energy layers (96.57–140.31 MeV) and 315 spots.

End-to-end testing Energy deposition (left) and LET in water (right) spectra for an IMPT plan measured in the CIRS head phantom obtained based on measurements (blue) and MC simulations (orange). The vertical lines indicate LETd values. (Courtesy: Phys. Med. Biol. 10.1088/1361-6560/adcaf9)

For head phantom measurements, the peak positions for deposited energy and LET spectra obtained based on experiment and simulation agreed within the error bars, with LETd values of about 1.47 and 1.46 keV/µm, respectively. The mean LETd values derived from MC simulation and measurement differed on average by 5.1% for individual energy layers.

Clinical translation

The researchers report that implementing the proposed LET measurement scheme using Timepix3 in a clinical setting requires irradiating IMPT plans with a reduced beam current (at the pA level). While they successfully conducted LET measurements at low beam currents in the accelerator’s research mode, pencil-beam scanning at pA-level currents is not currently available in the commercial clinical or quality assurance modes. Therefore, they note that translating the proposed approach into clinical practice would require vendors to upgrade the beam delivery system to enable beam monitoring at low beam currents.

“The presented results demonstrate the feasibility of the Timepix3 detector to validate LET computations in IMPT fields and perform patient-specific quality assurance in terms of LET. This will support the implementation of LET in treatment planning, which will ultimately increase the effectiveness of the treatment,” Stasica-Dudek and colleagues write. “Given the compact design and commercial availability of the Timepix3 detector, it holds promise for broad application across proton therapy centres.”

The post Miniaturized pixel detector characterizes radiation quality in clinical proton fields appeared first on Physics World.

Protons take to the road

16 mai 2025 à 18:33

Physicists at CERN have completed a “test run” for taking antimatter out of the laboratory and transporting it across the site of the European particle-physics facility. Although the test was carried out with ordinary protons, the team that performed it says that antiprotons could soon get the same treatment. The goal, they add, is to study antimatter in places other than the labs that create it, as this would enable more precise measurements of the differences between matter and antimatter. It could even help solve one of the biggest mysteries in physics: why does our universe appear to be made up almost entirely of matter, with only tiny amounts of antimatter?

According to the Standard Model of particle physics, each of the matter particles we see around us – from baryons like protons to leptons such as electrons – should have a corresponding antiparticle that is identical in every way apart from its charge and magnetic properties (which are reversed). This might sound straightforward, but it leads to a peculiar prediction. Under the Standard Model, the Big Bang that formed our universe nearly 14 billion years ago should have generated equal amounts of antimatter and matter. But if that were the case, there shouldn’t be any matter left, because whenever pairs of antimatter and matter particles collide, they annihilate each other in a burst of energy.

Physicists therefore suspect that there are other, more subtle differences between matter particles and their antimatter counterparts – differences that could explain why the former prevailed while the latter all but disappeared. By searching for these differences, they hope to shed more light on antimatter-matter asymmetry – and perhaps even reveal physics beyond the Standard Model.

Extremely precise measurements

At CERN’s Baryon-Antibaryon Symmetry Experiment (BASE), the search for matter-antimatter differences focuses on measuring the magnetic moments and charge-to-mass ratios of protons and antiprotons. These measurements need to be extremely precise, but this is difficult at CERN’s “Antimatter Factory” (AMF), which manufactures the necessary low-energy antiprotons in profusion. This is because essential nearby equipment – including the Antiproton Decelerator and ELENA, which reduce the energy of incoming antiprotons from GeV to MeV – produces magnetic field fluctuations that blur the signal.

To carry out more precise measurements, the team therefore needs a way of transporting the antiprotons to other, better-shielded, laboratories. This is easier said than done, because antimatter needs to be carefully isolated from its environment to prevent it from annihilating with the walls of its container or with ambient gas molecules.

The BASE team’s solution was to develop a device that can transport trapped antiprotons on a truck for substantial distances. It is this device, known as BASE-STEP (for Symmetry Tests in Experiments with Portable Antiprotons), that has now been field-tested for the first time.

Protons on the go

During the test, the team successfully transported a cloud of about 10^5 trapped protons out of the AMF and across CERN’s Meyrin campus over a period of four hours. Although protons are not the same as antiprotons, BASE-STEP team leader Christian Smorra says they are just as sensitive to disturbances in their environment caused by, say, driving them around. “They are therefore ideal stand-ins for initial tests, because if we can transport protons, we should also be able to transport antiprotons,” he says.

The next step: BASE-STEP on a transfer trolley, watched over by BASE team members Fatma Abbass and Christian Smorra. (Photo: BASE/Maria Latacz)

The BASE-STEP device is mounted on an aluminium frame and measures 1.95 m x 0.85 m x 1.65 m. At 850‒900 kg, it is light enough to be transported using standard forklifts and cranes.

Like BASE, it traps particles in a Penning trap composed of gold-plated cylindrical electrode stacks made from oxygen-free copper. To further confine the protons and prevent them from colliding with the trap’s walls, this trap is surrounded by a superconducting magnet bore operated at cryogenic temperatures. The second electrode stack is also kept at ultralow pressures of 10^-19 bar, which Smorra says is low enough to keep antiparticles from annihilating with residual gas molecules. To transport antiprotons instead of protons, Smorra adds, they would just need to switch the polarity of the electrodes.

The transportable trap system, which is detailed in Nature, is designed to remain operational on the road. It uses a carbon-steel vacuum chamber to shield the particles from stray magnetic fields, and its frame can handle accelerations of up to 1g (9.81 m/s²) in all directions over and above the usual (vertical) force of gravity. This means it can travel up and down slopes with a gradient of up to 10%, or approximately 6°.
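
That last conversion is just the arctangent of the gradient, as a quick check shows:

```python
import math

# A 10% gradient corresponds to 0.10 m of rise per metre of horizontal travel
print(f"{math.degrees(math.atan(0.10)):.1f} degrees")  # 5.7, i.e. roughly 6°
```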

Once the BASE-STEP device is re-configured to transport antiprotons, the first destination on the team’s list is a new Penning-trap system currently being constructed at the Heinrich Heine University in Düsseldorf, Germany. Here, physicists hope to search for charge-parity-time (CPT) violations in protons and antiprotons with a precision at least 100 times higher than is possible at CERN’s AMF.

“At BASE, we are currently performing measurements with a precision of 16 parts in a trillion,” explains BASE spokesperson Stefan Ulmer, an experimental physicist at Heinrich Heine and a researcher at CERN and Japan’s RIKEN laboratory. “These experiments are the most precise tests of matter/antimatter symmetry in the baryon sector to date, but to make these experiments better, we have no choice but to transport the particles out of CERN’s antimatter factory,” he tells Physics World.

The post Protons take to the road appeared first on Physics World.

Quantum computing for artists, musicians and game designers

15 mai 2025 à 15:55

Many creative industries rely on cutting-edge digital technologies, so it is not surprising that this sector could easily become an early adopter of quantum computing.

In this episode of the Physics World Weekly podcast I am in conversation with James Wootton, who is chief scientific officer at Moth Quantum. Based in the UK and Switzerland, the company is developing quantum-software tools for the creative industries – focusing on artists, musicians and game developers.

Wootton joined Moth Quantum in September 2024 after working on quantum error correction at IBM. He also has long-standing interest in quantum gaming and creating tools that make quantum computing more accessible. If you enjoyed this interview with Wootton, check out this article that he wrote for Physics World in 2018: “Playing games with quantum computers“.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

 

The post Quantum computing for artists, musicians and game designers appeared first on Physics World.

Five-body recombination could cause significant loss from atom traps

15 mai 2025 à 10:05

Five-body recombination, in which five identical atoms form a tetramer molecule and a single free atom, could be the largest contributor to loss from ultracold atom traps at specific “Efimov resonances”, according to calculations done by physicists in the US. The process, which is less well understood than three- and four-body recombination, could be useful for building molecules, and potentially for modelling nuclear fusion.

A collision involving trapped atoms can be either elastic – in which the internal states of the atoms and their total kinetic energy remain unchanged – or inelastic, in which there is an interchange between the kinetic energy of the system and the internal energy states of the colliding atoms.

Most collisions in a dilute quantum gas involve only two atoms, and when physicists were first studying Bose-Einstein condensates (the ultralow-temperature state of some atomic gases), they suppressed inelastic two-body collisions, keeping the atoms in the desired state and preserving the condensate. A relatively small number of collisions, however, involve three or more bodies colliding simultaneously.

“They couldn’t turn off three body [inelastic collisions], and that turned out to be the main reason atoms leaked out of the condensate,” says theoretical physicist Chris Greene of Purdue University in the US.

Something remarkable

While attempting to understand inelastic three-body collisions, Greene and colleagues made the connection to work done in the 1970s by the Soviet theoretician Vitaly Efimov. He showed that at specific “resonances” of the scattering length, quantum mechanics allowed two colliding particles that could otherwise not form a bound state to do so in the presence of a third particle. While Efimov first considered the scattering of nucleons (protons and neutrons) or alpha particles, the effect applies to atoms and other quantum particles.

In the case of trapped atoms, the bound dimer and free atom are then ejected from the trap by the energy released from the binding event. “There were signatures of this famous Efimov effect that had never been seen experimentally,” Greene says. This was confirmed in 2005 by experiments from Rudolf Grimm’s group at the University of Innsbruck in Austria.

Hundreds of scientific papers have now been written about three-body recombination. Greene and colleagues subsequently predicted resonances at which four-body Efimov recombination could occur, producing a trimer. These were observed almost immediately by Grimm and colleagues. “Five was just too hard for us to do at the time, and only now are we able to go that next step,” says Greene.

Principal loss channel

In the new work, Greene and colleague Michael Higgins modelled collisions between identical caesium atoms in an optical trap. At specific resonances, five-body recombination – in which five colliding atoms produce a tetramer and a single free atom – is not only enhanced but becomes the principal loss channel. The researchers believe these resonances should be experimentally observable using today’s laser box traps, which hold atomic gases in a square-well potential.

“For most ultracold experiments, researchers will be avoiding loss as much as possible – they would stay away from these resonances,” says Greene; “But for those of us in the few-body community interested in how atoms bind and resonate and how to describe complicated rearrangement, it’s really interesting to look at these points where the loss becomes resonant and very strong.” This is one technique that can be used to create new molecules, for example.

In future, Greene hopes to apply the model to nucleons themselves. “There have been very few people in the few-body theory community willing to tackle a five-particle collision – the Schrödinger equation has so many dimensions,” he says.

Fusion reactions

He hopes it may be possible to apply the researchers’ toolkit to nuclear reactions. “The famous one is the deuterium/tritium fusion reaction. When they collide they can form an alpha particle and a neutron and release a ton of energy, and that’s the basis of fusion reactors…There’s only one theory in the world from the nuclear community, and it’s such an important reaction I think it needs to be checked,” he says.

The researchers also wish to study the possibility of even larger bound states. However, they foresee a problem because the scattering length of the ground state resonance gets shorter and shorter with each additional particle. “Eventually the scattering length will no longer be the dominant length scale in the problem, and we think between five and six is about where that border line occurs,” Greene says. Nevertheless, higher-lying, more loosely-bound six-body Efimov resonances could potentially be visible at longer scattering lengths.

The research is described in Proceedings of the National Academy of Sciences.

Theoretical physicist Ravi Rau of Louisiana State University in the US is impressed by Greene and Higgins’ work. “For quite some time Chris Greene and a succession of his students and post-docs have been extending the three-body work that they did, using the same techniques, to four and now five particles,” he says. “Each step is much more complicated, and that he could use this technique to extend it to five bosons is what I see as significant.” Rau says, however, that “there is a vast gulf” between five atoms and the number treated by statistical mechanics, so new theoretical approaches may be required to bridge the gap.

The post Five-body recombination could cause significant loss from atom traps appeared first on Physics World.

This is what an aurora looks like on Mars

14 mai 2025 à 20:01

The Mars rover Perseverance has captured the first image of an aurora as seen from the surface of another planet. The visible-light image, which was taken during a solar storm on 18 March 2024, is not as detailed or as colourful as the high-resolution photos of green swirls, blue shadows and pink whorls familiar to aurora aficionados on Earth. Nevertheless, it shows the Martian sky with a distinctly greenish tinge, and the scientists who obtained it say that similar aurorae would likely be visible to future human explorers.

“Kind of like with aurora here on Earth, we need a good solar storm to induce a bright green colour, otherwise our eyes mostly pick up on a faint grey-ish light,” explains Elise Wright Knutsen, a postdoctoral researcher in the Centre for Space Sensors and Systems at the University of Oslo, Norway. The storm Knutsen and her colleagues captured was, she adds, “rather moderate”, and the aurora it produced was probably too faint to see with the naked eye. “But with a camera, or if the event had been more intense, the aurora will appear as a soft green glow covering more or less the whole sky.”

The role of planetary magnetic fields

Aurorae happen when charged particles from the Sun – the solar wind – interact with the magnetic field around a planet. On Earth, this magnetic field is the product of an internal, planetary-scale magnetic dynamo. Mars, however, lost its dynamo (and, with it, its oceans and its thick protective atmosphere) around four billion years ago, so its magnetic field is much weaker. Nevertheless, it retains some residual magnetization in its southern highlands, and its conductive ionosphere affects the shape of the nearby interplanetary magnetic field. Together, these two phenomena give Mars a hybrid magnetosphere too feeble to protect its surface from cosmic rays, but strong enough to generate an aurora.

Scientists had previously identified various types of aurorae on Mars (and every other planet with an atmosphere in our solar system) in data from orbiting spacecraft. However, no Mars rover had ever observed an aurora before, and all the orbital aurora observations, from Mars and elsewhere, were at ultraviolet wavelengths.

Awesome sight: An artist’s impression of the aurora and the Perseverance rover. (Courtesy: Alex McDougall-Page)

How to spot an aurora on Mars

According to Knutsen, the lack of visible-light, surface-based aurora observations has several causes. First, the visible-wavelength instruments on Mars rovers are generally designed to observe the planet’s bright “dayside”, not to detect faint emissions on its nightside. Second, rover missions focus primarily on geology, not astronomy. Finally, aurorae are fleeting, and there is too much demand for Perseverance’s instruments to leave them pointing at the sky just in case something interesting happens up there.

“We’ve spent a significant amount of time and effort improving our aurora forecasting abilities,” Knutsen says.

Getting the timing of observations right was the most challenging part, she adds. The clock started whenever solar satellites detected events called coronal mass ejections (CMEs) that create unusually strong pulses of solar wind. Next, researchers at the NASA Community Coordinated Modeling Center simulated how these pulses would propagate through the solar system. Once they posted the simulation results online, Knutsen and her colleagues – an international consortium of scientists in Belgium, France, Germany, the Netherlands, Spain, the UK and the US as well as Norway – had a decision to make. Was this CME likely to trigger an aurora bright enough for Perseverance to detect?

If the answer was “yes”, their next step was to request observation time on Perseverance’s SuperCam and Mastcam-Z instruments. Then they had to wait, knowing that although CMEs typically take three days to reach Mars, the simulations are only accurate to within a few hours and the forecast could change at any moment. Even if they got the timing right, the CME might be too weak to trigger an aurora.

“We have to pick the exact time to observe, the whole observation only lasts a few minutes, and we only get one chance to get it right per solar storm,” Knutsen says. “It took three unsuccessful attempts before we got everything right, but when we did, it appeared exactly as we had imagined it: as a diffuse green haze, uniform in all directions.”

Future observations

Writing in Science Advances, Knutsen and colleagues say it should now be possible to investigate how Martian aurorae vary in time and space – information which, they note, is “not easily obtained from orbit with current instrumentation”. They also point out that the visible-light instruments they used tend to be simpler and cheaper than UV ones.

“This discovery will open up new avenues for studying processes of particle transport and magnetosphere dynamics,” Knutsen tells Physics World. “So far we have only reported our very first detection of this green emission, but observations of aurora can tell us a lot about how the Sun’s particles are interacting with Mars’s magnetosphere and upper atmosphere.”

The post This is what an aurora looks like on Mars appeared first on Physics World.

Robert P Crease: ‘I’m yet another victim of the Trump administration’s incompetence’

14 mai 2025 à 16:00

Late on Friday 18 April, the provost of Stony Brook University, where I teach, received a standard letter from the National Science Foundation (NSF), the body that funds much academic research in the US. “Termination of certain awards is necessary,” the e-mail ran, “because they are not in alignment with current NSF priorities”. The e-mail mentioned “NSF Award Id 2318247”. Mine.

The termination notice, forwarded to me a few minutes later, was the same one that 400 other researchers all over the US received the same day, in which the agency, following a directive from the Trump administration, grabbed back $233m in grant money. According to the NSF website, projects terminated were “including but not limited to those on diversity, equity, and inclusion (DEI) and misinformation/disinformation”.

Losing grant money is disastrous for research and for the faculty, postdocs, graduate students and support staff who depend on that support. A friend of mine tried to console me by saying that I had earned a badge of honour for being among the 400 people who threatened the Trump Administration so much that it set out to stop their work. Still, I was baffled. Did I really deserve the axe?

My award, entitled “Social and political dynamics of lab-community relations”, was small potatoes. As the sole principal investigator, I’d hired no postdocs or grad students. I’d also finished most of the research and been given a “no-cost extension” to write it up that was due to expire in a few months. In fact, I’d spent all but $21,432 of the $263,266 of cash.

That may sound like a lot for a humanities researcher, but it barely covered a year of my salary and included indirect costs (to which my grant was subject like any other), along with travel and so on. What’s more, my project’s stated aim was to “enhance the effectiveness of national scientific facilities”, which was clearly within the NSF’s mission.

Such facilities, I had pointed out in my official proposal, are vital if the US is to fulfil its national scientific, technological, medical and educational goals. But friction between a facility and the surrounding community can hamper its work, particularly if the lab’s research is seen as threatening – for example, involving chemical, radiological or biological hazards. Some labs, in fact, have had important, yet perfectly safe, facilities permanently closed out of such fear.

“In an age of Big Science,” I argued, “understanding the dynamics of lab-community interaction is crucial to advancing national, scientific, and public interests.” What’s so contentious about that?

“New bad words”

Maybe I had been careless. After all, Ted Cruz, who chairs the Senate’s commerce committee, had claimed in February that 3400 NSF awards worth over $2 billion made during the Biden–Harris administration had promoted DEI and advanced “neo-Marxist class warfare propaganda”. I wondered if I might have inadvertently used some trigger word that outed me as an enemy of the state.

I knew, for instance, that the Trump Administration had marked for deletion photos of the Enola Gay aircraft, which had dropped an atomic bomb on Hiroshima, in a Defense Department database because officials had not realized that “Gay” was part of the name of the pilot’s mother. Administration officials had made similar misinterpretations in scientific proposals that included the words “biodiversity” and “transgenic”.

Had I used one of those “new bad words”? I ran a search on my proposal. Did it mention “equity”? No. “Inclusion”? Also no. The word “diversity” appeared only once, in the subtitle of an article in the bibliography about radiation fallout. “Neo-Marxist”? Again, no. Sure, I’d read Marx’s original texts during my graduate training in philosophy, but my NSF documents hadn’t tapped him or his followers as essential to my project.

Then I remembered a sentence in my proposal. “Well-established scientific findings,” I wrote, “have been rejected by activists and politicians, distorted by lurid headlines, and fuelled partisan agendas.” These lead in turn to “conspiracy theories, fake facts, science denial and charges of corruption”.

Was that it, I wondered? Had the NSF officials thought that I had meant to refer to the administration’s attacks on climate change science, vaccines, green energy and other issues? If so, that was outrageous! There was not a shred of truth to it – no truth at all!

Ructions and retractions

On 23 April – five days after the NSF termination notice – two researchers at Harvard University put together an online “Terminated NSF grant tracker”, which contained information based on what they found in the NSF database. Curious, I scrolled down to SUNY at Stony Brook and found mine: “Social and political dynamics of lab-community relations”.

I was shocked to discover that almost everything about it in the NSF database was wrong, including the abstract. The abstract given for my grant was apparently that of another NSF award, for a study that touched on DEI themes – a legitimate and useful thing to study under any normal regime, but not this one. At last, I had the reason for my grant termination: an NSF error.

The next day, 24 April, I managed to speak to the beleaguered NSF programme director, who was kind and understanding and said there’d been a mistake in the database. When I asked her if it could be fixed, she said, “I don’t know”. When I asked her if the termination could be reversed, she said, “I don’t know”. I alerted Stony Brook’s grants-management office, which began to press the NSF to reverse its decision. A few hours later I learned that NSF director Sethuraman Panchanathan had resigned.

I briefly wondered if Panchanathan had been fired because my grant had been bungled. No such luck; he was probably disgusted with the administration’s treatment of the agency. But while the mistake over my abstract evidently wasn’t deliberate, the malice behind my grant’s termination certainly was. Besides, wouldn’t you expect a major scientific agency to double-check before taking a step as unprecedented and monumental as terminating a grant?

I then felt guilty about my anger; who was I to complain? After all, some US agencies have been shockingly incompetent lately. A man was mistakenly sent by the Department of Homeland Security to a dangerous prison in El Salvador and they couldn’t (or wouldn’t) get him back. The Department of Health and Human Services has downplayed the value of vaccines, fuelling a measles epidemic in Texas, while defence secretary Pete Hegseth used the Signal messaging app to release classified military secrets regarding a war in progress to a journalist.

How narcissistic of me to become livid only when personally affected by the termination of an award that’s almost over anyway.

A few days later, on 28 April, Stony Brook’s provost received another e-mail about my grant from the NSF. Forwarded to me, it said: “the termination notice is retracted; NSF terminated this project in error”. Since then, the online documents at the NSF, and the information about my grant in the tracker, have thankfully been corrected.

The critical point

In a few years’ time, I’ll put together another proposal to study the difference between the way the US government handles science and the needs of its citizens. I’ll certainly have a lot more material to draw on. Meanwhile, I’ll reluctantly wear my badge of honour. For I deserve it – though not, as I initially thought, because I had threatened the Trump Administration enough that they tried to halt my research.

I got it simply because I’m yet another victim of the Trump Administration’s incompetence.

The post Robert P Crease: ‘I’m yet another victim of the Trump administration’s incompetence’ appeared first on Physics World.

Plasma physics sets upper limit on the strength of ‘dark electromagnetism’

14 May 2025 at 15:00

Physicists have set a new upper bound on the interaction strength of dark matter by simulating the collision of two clouds of interstellar plasma. The result, from researchers at Ruhr University Bochum in Germany, CINECA in Italy and the Instituto Superior Tecnico in Portugal, could force a rethink on theories describing this mysterious substance, which is thought to make up more than 85% of the mass in the universe.

Since dark matter has only ever been observed through its effect on gravity, we know very little about what it’s made of. Indeed, various theories predict that dark matter particles could have masses ranging from around 10⁻²² eV to around 10¹⁹ GeV – a staggering 50 orders of magnitude.
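That range is easy to sanity-check with a quick order-of-magnitude calculation: the sketch below (in Python, using only the two bounds quoted above) converts both extremes to the same unit and counts the decades between them.

import math

# Quoted extremes of the candidate dark-matter mass range, both expressed in eV
m_low_eV = 1e-22          # ultralight candidates, ~1e-22 eV
m_high_eV = 1e19 * 1e9    # heavy candidates, ~1e19 GeV converted to eV

# Number of orders of magnitude spanned by the two bounds
print(math.log10(m_high_eV / m_low_eV))   # -> 50.0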

Another major unknown about dark matter is whether it interacts via forces other than gravity, either with itself or with other particles. Some physicists have hypothesized that dark matter particles might possess positive and negative “dark charges” that interact with each other via “dark electromagnetic forces”. According to this supposition, dark matter could behave like a cold plasma of self-interacting particles.

Bullet Cluster experiment

In the new study, the team searched for evidence of dark interactions in a cluster of galaxies located several billion light years from Earth. This galactic grouping is known as the Bullet Cluster, and it contains a subcluster that is moving away from the main body after passing through it at high speed.

Since the most basic model of dark-matter interactions relies on the same equations as ordinary electromagnetism, the researchers chose to simulate these interactions in the Bullet Cluster system using the same computational tools they would use to describe electromagnetic interactions in a standard plasma. They then compared their results with real observations of the Bullet Cluster.

Interaction strength: Constraints on the dark electromagnetic coupling constant 𝛼𝐷 based on observations from the Bullet Cluster. 𝛼𝐷 must lie below the blue, green and red regions. Dashed lines show the reference value used for the mass of 1 TeV. (Courtesy: K Schoeffler et al., “Can plasma physics establish a significant bound on long-range dark matter interactions?” Phys Rev D 111 L071701, https://doi.org/10.1103/PhysRevD.111.L071701)

The new work builds on a previous study in which members of the same team simulated the collision of two clouds of standard plasma passing through one another. This study found that as the clouds merged, electromagnetic instabilities developed. These instabilities had the effect of redistributing energy from the opposing flows of the clouds, slowing them down while also broadening the temperature range within them.

Ruling out many of the simplest dark matter theories

The latest study showed that, as expected, the plasma components of the subcluster and main body slowed down thanks to ordinary electromagnetic interactions. That, however, appeared to be all that happened, as the data contained no sign of additional dark interactions. While the team’s finding doesn’t rule out dark electromagnetic interactions entirely, team member Kevin Schoeffler explains that it does mean that these interactions, which are characterized by a parameter known as 𝛼𝐷, must be far weaker than their ordinary-matter counterpart. “We can thus calculate an upper limit for the strength of this interaction,” he says.

This limit, which the team calculated as 𝛼𝐷 < 4 × 10⁻²⁵ for a dark matter particle with a mass of 1 TeV, rules out many of the simplest dark matter theories and will require them to be rethought, Schoeffler says. “The calculations were made possible thanks to detailed discussions with scientists working outside of our speciality of physics, namely plasma physicists,” he tells Physics World. “Throughout this work, we had to overcome the challenge of connecting with very different fields and interacting with communities that speak an entirely different language to ours.”
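To get a feel for how small that number is, it can be set against the ordinary fine-structure constant, α ≈ 1/137. The comparison below is a rough illustration using only the limit quoted above plus the textbook value of α; it is not a calculation from the paper itself.

# Comparing the quoted bound on the dark coupling with ordinary electromagnetism
alpha_em = 1 / 137.036        # fine-structure constant of ordinary electromagnetism
alpha_D_max = 4e-25           # quoted upper limit for a 1 TeV dark matter particle
print(alpha_D_max / alpha_em) # ~5.5e-23, i.e. roughly 22 orders of magnitude weaker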

As for future work, the physicists plan to compare the results of their simulations with other astronomical observations, with the aim of constraining the upper limit of the dark electromagnetic interaction even further. More advanced calculations, such as those that include finer details of the cloud models, would also help refine the limit. “These more realistic setups would include other plasma-like electromagnetic scenarios and ‘slowdown’ mechanisms, leading to potentially stronger limits,” Schoeffler says.

The present study is detailed in Physical Review D.

The post Plasma physics sets upper limit on the strength of ‘dark electromagnetism’ appeared first on Physics World.

Quantum effect could tame noisy nanoparticles by rendering them invisible

14 May 2025 at 10:00

In the quantum world, observing a particle is not a passive act. If you shine light on a quantum object to measure its position, photons scatter off it and disturb its motion. This disturbance is known as quantum backaction noise, and it limits how precisely physicists can observe or control delicate quantum systems.

Physicists at Swansea University have now proposed a technique that could eliminate quantum backaction noise in optical traps, allowing a particle to remain suspended in space undisturbed. This would bring substantial benefits for quantum sensors, as the amount of noise in a system determines how precisely a sensor can measure forces such as gravity; detect as-yet-unseen interactions between gravity and quantum mechanics; and perhaps even search for evidence of dark matter.

There’s just one catch: for the technique to work, the particle needs to become invisible.

Levitating nanoparticles

Backaction noise is a particular challenge in the field of levitated optomechanics, where physicists seek to trap nanoparticles using light from lasers. “When you levitate an object, the whole thing moves in space and there’s no bending or stress, and the motion is very pure,” explains James Millen, a quantum physicist who studies levitated nanoparticles at King’s College London, UK. “That’s why we are using them to detect crazy stuff like dark matter.”

While some noise is generally unavoidable, Millen adds that there is a “sweet spot” called the Heisenberg limit. “This is where you have exactly the right amount of measurement power to measure the position optimally while causing the least noise,” he explains.

The problem is that laser beams powerful enough to suspend a nanoparticle tend to push the system away from the Heisenberg limit, producing an increase in backaction noise.
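For context, the trade-off behind this “sweet spot” is usually written as the standard imprecision–backaction inequality for continuous position measurement (a textbook relation, not a result from the Swansea paper):

S_x^imp(ω) × S_F^ba(ω) ≥ ħ²/4

where S_x^imp is the position-imprecision noise added by the read-out and S_F^ba is the backaction force noise. Operating at the Heisenberg limit means saturating this inequality; an over-powered trapping beam drives the backaction term, and hence the product, above that minimum.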

Blocking information flow

The Swansea team’s method avoids this problem by, in effect, blocking the flow of information from the trapped nanoparticle. Its proposed setup uses a standing-wave laser to trap a nanoparticle in space with a hemispherical mirror placed around it. When the mirror has a specific radius, the scattered light from the particle and its reflection interfere so that the outgoing field no longer encodes any information about the particle’s position.

At this point, the particle is effectively invisible to the observer, with an interesting consequence: because the scattered light carries no usable information about the particle’s location, quantum backaction disappears. “I was initially convinced that we wanted to suppress the scatter,” team leader James Bateman tells Physics World. “After rigorous calculation, we arrived at the correct and surprising answer: we need to enhance the scatter.”

In fact, when scattering radiation is at its highest, the team calculated that the noise should disappear entirely. “Even though the particle shines brighter than it would in free space, we cannot tell in which direction it moves,” says Rafał Gajewski, a postdoctoral researcher at Swansea and Bateman’s co-author on a paper in Physical Review Research describing the technique.

Gajewski and Bateman’s result flips a core principle of quantum mechanics on its head. While it’s well known that measuring a quantum system disturbs it, the reverse is also true: if no information can be extracted, then no disturbance occurs, even when photons continuously bombard the particle. If physicists do need to gain information about the trapped nanoparticle, they can use a different, lower-energy laser to make their measurements, allowing experiments to be conducted at the Heisenberg limit with minimal noise.

Putting it into practice

For the method to work experimentally, the team say the mirror needs a high-quality surface and a radius that is stable with temperature changes. “Both requirements are challenging, but this level of control has been demonstrated and is achievable,” Gajewski says.

Positioning the particle precisely at the center of the hemisphere will be a further challenge, he adds, while the “disappearing” effect depends on the mirror’s reflectivity at the laser wavelength. The team is currently investigating potential solutions to both issues.

If demonstrated experimentally, the team says the technique could pave the way for quieter, more precise experiments and unlock a new generation of ultra-sensitive quantum sensors. Millen, who was not involved in the work, agrees. “I think the method used in this paper could possibly preserve quantum states in these particles, which would be very interesting,” he says.

Because nanoparticles are far more massive than atoms, Millen adds, they interact more strongly with gravity, making them ideal candidates for testing whether gravity follows the strange rules of quantum theory.  “Quantum gravity – that’s like the holy grail in physics!” he says.

The post Quantum effect could tame noisy nanoparticles by rendering them invisible appeared first on Physics World.

Delta.g wins IOP’s qBIG prize for its gravity sensors

13 May 2025 at 19:00

The UK-based company Delta.g has bagged the 2025 qBIG prize, which is awarded by the Institute of Physics (IOP). Initiated in 2023, qBIG celebrates and promotes the innovation and commercialization of quantum technologies in the UK and Ireland.

Based in Birmingham, Delta.g makes quantum sensors that measure the local gravity gradient. This is done using atom interferometry, whereby laser pulses are fired at a cloud of cold atoms that is freefalling under gravity.

On the Earth’s surface, this gradient is sensitive to the presence of buildings and underground voids such as tunnels. The technology was developed by physicists at the University of Birmingham and in 2022 they showed how it could be used to map out a tunnel below a road on campus. The system has also been deployed in a cave and on a ship to test its suitability for use in navigation.
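To get a sense of the signal sizes involved, the sketch below applies the textbook atom-interferometer phase relation Δφ = k_eff g T² to two vertically separated atom clouds. The wavelength, pulse timing and baseline are assumed rubidium-style values chosen for illustration; they are not Delta.g specifications.

import math

# Illustrative two-cloud gravity-gradiometer signal (all parameters assumed)
k_eff = 2 * (2 * math.pi / 780e-9)  # two-photon effective wavevector for a Rb-type transition, rad/m
T = 0.1                             # free-fall time between interferometer pulses, s
baseline = 1.0                      # vertical separation of the two atom clouds, m
gradient = 3.1e-6                   # Earth's free-air vertical gravity gradient, s^-2

# Differential phase between the two interferometers: k_eff * T^2 * (g_upper - g_lower)
dphi = k_eff * T**2 * gradient * baseline
print(dphi)   # ~0.5 rad for these numbers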

Challenging to measure

“Gravity is a fundamental force, yet its full potential remains largely untapped because it is so challenging to measure,” explains Andrew Lamb, who is co-founder and chief technology officer at Delta.g. “As the first to take quantum technology gravity gradiometry from the lab to the field, we have set a new benchmark for high-integrity, noise-resistant data, transforming how we understand and navigate the subsurface.”

Awarded by the IOP, the qBIG prize is sponsored by Quantum Exponential, which is the UK’s first enterprise venture capital fund focused on quantum technology. The winner was announced today at the Economist’s Commercialising Quantum Global 2025 event in London. Delta.g receives a £10,000 unrestricted cash prize; 10 months of mentoring from Quantum Exponential; and business support from the IOP.

Louis Barson, the IOP’s director of science, innovation and skills, says: “The IOP’s role as UK and Ireland coordinator of the International Year of Quantum 2025 gives us a unique opportunity to showcase the exciting developments in the quantum sector. Huge congratulations must go to the Delta.g team, whose incredible work stood out in a diverse and fast-moving field.”

Two runners-up were commended by the IOP. One is Glasgow-based Neuranics, which makes quantum sensors that detect tiny magnetic signals from the human body. The other is Southampton’s Smith Optical, which makes an augmented-reality display based on quantum technology.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

 

The post Delta.g wins IOP’s qBIG prize for its gravity sensors appeared first on Physics World.

Electrolysis workstation incorporates mass spectrometry to accelerate carbon-dioxide reduction research

13 May 2025 at 15:59

The electrochemical reduction of carbon dioxide is used to produce a range of chemical and energy feedstocks including syngas (hydrogen and carbon monoxide), formic acid, methane and ethylene. As well as being an important industrial process, the large-scale reduction of carbon dioxide by electrolysis offers a practical way to capture and utilize carbon dioxide.

As a result, developing new and improved electrochemical processes for carbon-dioxide reduction is an important R&D activity. This work involves identifying which catalyst and electrolyte materials are optimal for efficient production. And when a promising electrochemical system is identified in the lab, the work is not over, because the design must then be scaled up to create an efficient and practical industrial process.

Such R&D activities must overcome several challenges in operating and characterizing potential electrochemical systems. These include maintaining the correct humidification of carbon-dioxide gas during the electrolysis process and minimizing the production of carbonates – which can clog membranes and disrupt electrolysis.

While these challenges can be daunting, they can be overcome using the 670 Electrolysis Workstation from US-based Scribner. This is a general-purpose electrolysis system designed to test the materials used in the conversion of electrical energy to fuels and chemical feedstocks – and it is ideal for developing systems for carbon-dioxide reduction.

Turn-key and customizable

The workstation is a flexible system that is both turn-key and customizable. Liquid and gas reactants can be used on one or both of the workstation’s electrodes. Scribner has equipped the 670 Electrolysis Workstation with cells that feature gas diffusion electrodes and membranes from US-based Dioxide Materials. The company specializes in the development of technologies for converting carbon dioxide into fuels and chemicals, and it was chosen by Scribner because Dioxide Materials’ products are well documented in the scientific literature.

The gas diffusion electrodes are porous graphite cathodes through which carbon-dioxide gas flows between input and output ports. The gas can migrate from the graphite into a layer containing a metal catalyst. Membranes are used in electrolysis cells to ensure that only the desired ions are able to migrate across the cell, while blocking the movement of gases.

Fully integrated: Scribner’s Jarrett Mansergh (left) and Luke Levin-Pompetzki of Hiden Analytical in Scribner’s lab after integrating the electrolysis and mass-spectrometry systems. (Courtesy: Scribner)

The system employs a multi-range ±20 A and 5 V potentiostat for high-accuracy operation over a wide range of reaction rates and cell sizes. The workstation is controlled by Scribner’s FlowCell™ software, which provides full control and monitoring of test cells and comes pre-loaded with a wide range of experimental protocols. This includes electrochemical impedance spectroscopy (EIS) capabilities up to 20 kHz and cyclic voltammetry protocols – both of which are used to characterize the health and performance of electrochemical systems. FlowCell™ also allows users to set up long-duration experiments while providing safety monitoring with alarm settings for the purging of gases.
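As a simple picture of the kind of spectrum an EIS sweep explores, the sketch below evaluates a generic Randles-type equivalent circuit (series resistance plus a charge-transfer resistance in parallel with a double-layer capacitance) across the instrument’s frequency range. The circuit values are invented for illustration and do not model any particular Scribner cell.

import numpy as np

# Generic Randles equivalent circuit with illustrative component values
R_s, R_ct, C_dl = 1.0, 10.0, 1e-3          # series resistance (ohm), charge-transfer resistance (ohm), capacitance (F)
f = np.logspace(-1, np.log10(20e3), 200)   # 0.1 Hz up to the 20 kHz limit quoted above
omega = 2 * np.pi * f
Z = R_s + R_ct / (1 + 1j * omega * R_ct * C_dl)

print(Z[0])    # low-frequency limit, approaches R_s + R_ct
print(Z[-1])   # high-frequency limit, approaches R_s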

Humidified gas

The 670 Electrolysis Workstation features a gas handling unit that can supply humidified gas to test cells. Adding water vapour to the carbon-dioxide reactant is crucial because the water provides the protons that are needed to convert carbon dioxide to products such as methane and syngas. Humidifying gas is very difficult and getting it wrong leads to unwanted condensation in the system. The 670 Electrolysis Workstation uses temperature control to minimize condensation. The same degree of control can be difficult to achieve in homemade systems, leading to failure.

The workstation offers electrochemical cells with 5 cm2 and 25 cm2 active areas. These can be used to build carbon-dioxide reduction cells using a range of materials, catalysts and membranes – allowing the performance of these prototype cells to be thoroughly evaluated. By studying cells at these two different sizes, researchers can scale up their electrochemical systems from a preliminary experiment to something that is closer in size to an industrial system. This makes the 670 Electrolysis Workstation ideal for use across university labs, start-up companies and corporate R&D labs.

The workstation can handle acids, bases and organic solutions. For carbon-dioxide reduction, the cell is operated with a liquid electrolyte on the positive electrode (anode) and gaseous carbon dioxide at the negative electrode (cathode). An electric potential is applied across the electrodes and the product gas comes off the cathode side.

The specific product is largely dependent on the catalyst used at the cathode. If a silver catalyst is used, for example, the cell is likely to produce syngas. If a tin catalyst is used, the product is more likely to be formic acid.

Mass spectrometry

The best way to ensure that the desired products are being made in the cell is to connect the gas output to a mass spectrometer. As a result, Scribner has joined forces with Hiden Analytical to integrate the UK-based company’s HPR-20 mass spectrometer for gas analysis. The Hiden system is specifically configured to perform continuous analysis of evolved gases and vapours from the 670 Electrolysis Workstation.

The Scribner CO2 Reduction Cell Fixture (Courtesy: Scribner)

If a cell is designed to create syngas, for example, the mass spectrometer will determine exactly how much carbon monoxide is being produced and how much hydrogen is being produced. At the same time, researchers can monitor the electrochemical properties of the cell. This allows researchers to study relationships between a system’s electrical performance and the chemical species that it produces.
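One common way to combine the two data streams is to convert the measured gas output into a Faradaic efficiency, the fraction of the charge passed that ends up in a given product. The sketch below uses the standard relation FE = z·n·F/Q with made-up numbers; it is not output from the Scribner or Hiden instruments.

# Faradaic efficiency from measured gas output and cell current (illustrative values)
F = 96485.0      # Faraday constant, C/mol
z_CO = 2         # electrons transferred per CO molecule in CO2 reduction
n_CO = 1.2e-3    # moles of CO detected over the measurement window (assumed)
I_avg = 0.5      # average cell current, A (assumed)
t = 600.0        # length of the measurement window, s

Q = I_avg * t                  # total charge passed, C
FE_CO = z_CO * n_CO * F / Q    # fraction of the charge that went into making CO
print(f"Faradaic efficiency for CO: {FE_CO:.1%}")   # ~77% for these numbers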

Monitoring gas output is crucial for optimizing electrochemical processes that minimize negative effects such as the production of carbonates, which is a significant problem in carbon-dioxide reduction.

In electrochemical cells, carbon dioxide is dissolved in a basic solution. This results in the precipitation of carbonate salts that clog up the membranes in cells, greatly reducing performance. This is a significant problem when scaling up cell designs for industrial use because commercial cells must be very long-lived.

Pulsed-mode operation

One strategy for dealing with carbonates is to operate electrochemical cells in pulsed mode, rather than in a steady state. The off time allows the carbonates to migrate away from electrodes, which minimizes clogging. The 670 Electrolysis Workstation allows users to explore the use of short, second-scale pulses. Another option that researchers can explore is the use of pulses of fresh water to flush carbonates away from the cathode area. These and other options are available in a set of pre-programmed experiments that allow users to explore the mitigation of salt formation in their electrochemical cells.
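A pulsed schedule of this kind is easy to express as a current set-point profile. The sketch below builds a second-scale on/off waveform with made-up timings, purely to illustrate the idea rather than to suggest recommended settings.

import numpy as np

# Second-scale pulsed current profile (illustrative timings and amplitude)
dt = 0.01                        # time resolution, s
t = np.arange(0.0, 60.0, dt)     # one minute of operation
t_on, t_off = 2.0, 1.0           # 2 s of electrolysis followed by 1 s off
period = t_on + t_off

I_set = np.where((t % period) < t_on, 0.5, 0.0)   # 0.5 A during the on phase, 0 A during the off phase
print(I_set.mean())              # average delivered current, ~0.33 A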

The gaseous products of these carbonate-mitigation modes can be monitored in real time using Hiden’s mass spectrometer. This allows researchers to identify any changes in cell performance that are related to pulsed operation. Currently, electrochemical and product characteristics can be observed on time scales as short as 100 ms. This allows researchers to fine-tune how pulses are applied to minimize carbonate production and maximize the production of desired gases.

Real-time monitoring of product gases is also important when using EIS to observe the degradation of the electrochemical performance of a cell over time. This provides researchers with a fuller picture of what is happening in a cell as it ages.

The integration of Hiden’s mass spectrometer with the 670 Electrolysis Workstation is the latest innovation from Scribner. Now, the company is working on improving the time resolution of the system so that even shorter pulse durations can be studied by users. The company is also working on boosting the maximum current of the 670 to 100 A.

The post Electrolysis workstation incorporates mass spectrometry to accelerate carbon-dioxide reduction research appeared first on Physics World.

‘We must prioritize continuity and stability to maintain momentum’: Mauro Paternostro on how to ensure that quantum tech continues to thrive

13 May 2025 at 15:05

As we celebrate the International Year of Quantum Science and Technology, the quantum technology landscape is a swiftly evolving place. From developments in error correction and progress in hybrid classical-quantum architectures all the way to the commercialization of quantum sensors, there is much to celebrate.

An expert in quantum information processing and quantum technology, physicist Mauro Paternostro is based at the University of Palermo and Queen’s University Belfast. He is also editor-in-chief of the IOP Publishing journal Quantum Science and Technology, which celebrates its 10th anniversary this year. Paternostro talks to Tushna Commissariat about the most exciting recent developments in the field, his call for a Quantum Erasmus programme and his plans for the future of the journal.

What’s been the most interesting development in quantum technologies over the last year or so?

I have a straightforward answer as well as a more controversial one. First, the simpler point: the advances in quantum error correction for large-scale quantum registers are genuinely exciting. I’m specifically referring to the work conducted by Mikhail Lukin, Dolev Bluvstein and colleagues at Harvard University, and at the Massachusetts Institute of Technology and QuEra Computing, who built a quantum processor with 48 logical qubits that can execute algorithms while correcting errors in real time. In my opinion, this marks a significant step forward in developing computational platforms with embedded robustness. Error correction plays a vital role in the development of practical quantum computers, and Lukin and colleagues won Physics World’s 2024 Breakthrough of the Year award for their work.

Logical minds: Dolev Bluvstein (left) and Mikhail Lukin with their quantum processor. (Courtesy: Jon Chase/Harvard University)

You can listen to Mikhail Lukin and Dolev Bluvstein explain how they used trapped atoms to create 48 logical qubits on the Physics World Weekly podcast.

Now, for the more complex perspective. Aside from the ongoing debate about whether Microsoft’s much-discussed eight-qubit topological quantum processor – Majorana 1 – is genuinely using topological qubits, I believe the device will help to catalyze progress in integrated quantum chips. While it may not qualify as a genuine breakthrough in the long run, this moment could be the pivotal turning point in the evolution of quantum computational platforms. All the major players will likely feel compelled to accelerate their efforts toward the unequivocal demonstration of “quantum chip” capabilities, and such a competitive drive is just what both industry and government need right now.

Technical turning point? Microsoft has unveiled a quantum processor called Majorana 1 that boasts a “topological core”. (Courtesy: John Brecher/Microsoft)

How do you think quantum technologies will scale up as they emerge from the lab and into real-world applications?

I am optimistic in this regard. In fact, progress is already underway, with quantum-sensing devices and atomic quantum clocks achieving the levels of technological readiness necessary for practical, real-world applications. In the future, hybrid quantum–high-performance computing (HPC) architectures will play crucial roles in bridging classical data analysis with whatever the field evolves into, once quantum computers can offer genuine “quantum advantage” over classical machines.

Regarding communication, the substantial push toward networked, large-scale communication structures is noteworthy. The availability of the first operating system for programmable quantum networks opens “highways” toward constructing a large-scale “quantum internet”. This development promises to transform the landscape of communication, enabling new possibilities that we are just beginning to explore.

What needs to be done to ensure that the quantum sector can deliver on its promises in Europe and the rest of the world?

We must prioritize continuity and stability to maintain momentum. The national and supranational funding programmes that have supported developments and achievements over the past few years should not only continue, but be enhanced. I am concerned, however, that the current geopolitical climate, which is undoubtedly challenging, may divert attention and funding away from quantum technologies. Additionally, I worry that some researchers might feel compelled to shift their focus toward areas that align more closely with present priorities, such as military applications. While such shifts are understandable, they may not help us keep pace with the remarkable progress the field has made since governments in Europe and beyond began to invest substantially.

On a related note, we must take education seriously. It would be fantastic to establish a Quantum Erasmus programme that allows bachelor’s, master’s and PhD students in quantum technology to move freely across Europe so that they can acquire knowledge and expertise. We need coordinated national and supranational initiatives to build a pipeline of specialists in this field. Such efforts would provide the significant boost that quantum technology needs to continue thriving.

How can the overlap between quantum technology and artificial intelligence (AI) help each other develop?

The intersection and overlap between AI, high-performance computing, and quantum technologies are significant, and their interplay is, in my opinion, one of the most promising areas of exploration. While we are still in the early stages, we have only just started to tap into the potential of AI-based tools for tackling quantum tasks. We are already witnessing the emergence of the first quantum experiments supported by this hybrid approach to information processing.

The convergence of AI, HPC, and quantum computing would revolutionize how we conceive data processing, analysis, forecasting and many other such tasks. As we continue to explore and refine these technologies, the possibilities for innovation and advancement are vast, paving the way for transformations in various fields.

What do you hope the International Year of Quantum Science and Technology (IYQ) will have achieved, going forward?

The IYQ represents a global acknowledgment, at the highest levels, of the immense potential within this field. It presents a genuine opportunity to raise awareness worldwide about what a quantum paradigm for technological development can mean for humankind. It serves as a keyhole into the future, and IYQ could enable an unprecedented number of individuals – governments, leaders and policymakers alike – to peek through it and glimpse this potential.

All stakeholders in the field should contribute to making this a memorable year. With IYQ, 2025 might even be considered as “year zero” of the quantum technology era.

As we mark its 10th anniversary, how have you enjoyed your time over the last year as editor-in-chief of the journal Quantum Science and Technology (QST)?

Time flies when you have fun, and this is a good time for me to reflect on the past year. Firstly, I want to express my heartfelt gratitude to Rob Thew, the founding editor-in-chief of QST, for his remarkable leadership during the journal’s early years. With unwavering dedication, he and the rest of the editorial board have established QST as an authoritative and selective reference point for the community engaged in the broad field of quantum science and technology. The journal is now firmly recognized as a leading platform for timely and significant research outcomes. A 94% increase in submissions since our fifth anniversary has led to an impressive 747 submissions from 62 countries in 2024 alone, revealing the growing recognition and popularity of QST among scholars. Our acceptance rate of 27% further demonstrates our commitment to publishing only the highest-calibre research.

QST has, over the last 10 years, sought to feature research covering the breadth of the field within our curated focus issues, on topics such as Quantum optomechanics; Quantum photonics: chips and dots; Quantum software; Perspectives on societal aspects and impacts of quantum technologies; and Cold atoms in space.

As we celebrate IYQ, QST will lead the way with several exciting editorial initiatives aimed at disseminating the latest achievements in addressing the essential “pillars” of quantum technologies – computing, communication, sensing, and simulation – while also providing authoritative perspectives and visions for the future. Our current focus collections seek research within Quantum technologies for quantum gravity and Focus on perspectives on the future of variational quantum computing.

What are your goals with QST, looking ahead?

As quantum technologies advance into an inter- and multi-disciplinary realm, merging fundamental quantum science with technological applications, QST is evolving as well. We have an increasing number of submissions addressing the burgeoning area of machine-learning-enhanced quantum information processing, alongside pioneering studies exploring the application of quantum computing in fields such as chemistry, materials science and quantitative finance. All of this illustrates how QST is proactive in seizing opportunities to advance knowledge from our community of scholars and authors.

This dynamic growth is a fantastic way to celebrate the journal’s 10th anniversary, especially with the added significant milestone of IYQ. Finally, I want to highlight a matter that is very close to my heart, reflecting a much-needed “duty of care” for our readership. As editor-in-chief, I am honoured to support a journal that is part of the ‘Purpose-Led Publishing’ initiative. I view this as a significant commitment to integrity, ethics, high standards, and transparency, which should be the foundation of any scientific endeavour.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

The post ‘We must prioritize continuity and stability to maintain momentum’: Mauro Paternostro on how to ensure that quantum tech continues to thrive appeared first on Physics World.
