

Performance metrics and benchmarks point the way to practical quantum advantage

23 October 2025 at 17:35
Quantum connections: Measurement scientists are seeking to understand and quantify the relative performance of quantum computers from different manufacturers as well as across the myriad platform technologies. (Courtesy: iStock/Bartlomiej Wroblewski)

From quantum utility today to quantum advantage tomorrow: incumbent technology companies – among them Google, Amazon, IBM and Microsoft – and a wave of ambitious start-ups are on a mission to transform quantum computing from applied research endeavour to mainstream commercial opportunity. The end-game: quantum computers that can be deployed at scale to perform computations significantly faster than classical machines while addressing scientific, industrial and commercial problems beyond the reach of today’s high-performance computing systems.

Meanwhile, as technology translation gathers pace across the quantum supply chain, government laboratories and academic scientists must maintain their focus on the “hard yards” of precompetitive research. That means prioritizing foundational quantum hardware and software technologies, underpinned by theoretical understanding, experimental systems, device design and fabrication – and pushing out along all these R&D pathways simultaneously.

Bringing order to disorder

Equally important is the requirement to understand and quantify the relative performance of quantum computers from different manufacturers as well as across the myriad platform technologies – among them superconducting circuits, trapped ions, neutral atoms as well as photonic and semiconductor processors. A case study in this regard is a broad-scope UK research collaboration that, for the past four years, has been reviewing, collecting and organizing a holistic taxonomy of metrics and benchmarks to evaluate the performance of quantum computers against their classical counterparts as well as the relative performance of competing quantum platforms.

Funded by the National Quantum Computing Centre (NQCC), which is part of the UK National Quantum Technologies Programme (NQTP), and led by scientists at the National Physical Laboratory (NPL), the UK’s National Measurement Institute, the cross-disciplinary consortium has taken on an endeavour that is as sprawling as it is complex. The challenge lies in the diversity of quantum hardware platforms in the mix, as well as the emergence of two different approaches to quantum computing – one a gate-based framework for universal quantum computation, the other an analogue approach tailored to outperforming classical computers on specific tasks.

“Given the ambition of this undertaking, we tapped into a deep pool of specialist domain knowledge and expertise provided by university colleagues at Edinburgh, Durham, Warwick and several other centres-of-excellence in quantum,” explains Ivan Rungger, a principal scientist at NPL, professor in computer science at Royal Holloway, University of London, and lead scientist on the quantum benchmarking project. That core group consulted widely within the research community and with quantum technology companies across the nascent supply chain. “The resulting study,” adds Rungger, “positions transparent and objective benchmarking as a critical enabler for trust, comparability and commercial adoption of quantum technologies, aligning closely with NPL’s mission in quantum metrology and standards.”

Not all metrics are equal – or mature

Made to measure: NPL’s Institute for Quantum Standards and Technology (above) is the UK’s national metrology institute for quantum science. (Courtesy: NPL)

For context, a number of performance metrics used to benchmark classical computers can also be applied directly to quantum computers, such as the speed of operations, the number of processing units and the probability of errors occurring in the computation. That only goes so far, though, with all manner of dedicated metrics emerging in the past decade to benchmark the performance of quantum computers – ranging from their individual hardware components to entire applications.

Complexity reigns, it seems, and navigating the extensive literature can prove overwhelming, while the levels of maturity of different metrics vary significantly. Objective comparisons aren’t straightforward either – not least because variations of the same metric are commonly deployed, and the data disclosed alongside a reported metric value is often insufficient to reproduce the results.

“Many of the approaches provide similar overall qualitative performance values,” Rungger notes, “but the divergence in the technical implementation makes quantitative comparisons difficult and, by extension, slows progress of the field towards quantum advantage.”

The task then is to rationalize the metrics used to evaluate the performance of a given quantum hardware platform into a minimal yet representative set agreed across manufacturers, algorithm developers and end-users. These benchmarks also need to follow agreed common approaches to fairly and objectively evaluate quantum computers from different equipment vendors.

With these objectives in mind, Rungger and colleagues conducted a deep-dive review that has yielded a comprehensive collection of metrics and benchmarks to allow holistic comparisons of quantum computers, assessing the quality of hardware components all the way to system-level performance and application-level metrics.

Drill down further and there’s a consistent format for each metric that includes its definition, a description of the methodology, the main assumptions and limitations, and a linked open-source software package implementing the methodology. The software transparently demonstrates the methodology and can also be used in practical, reproducible evaluations of all metrics.
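To make that format concrete, here is a minimal sketch of how one entry in such a collection might be represented in code. It is purely illustrative: the class, field names, example values and URL are hypothetical and are not the schema used in the NPL-led repository.

from dataclasses import dataclass, field

@dataclass
class MetricEntry:
    """One entry in a hypothetical collection of quantum-computing metrics."""
    name: str                  # e.g. "readout fidelity"
    definition: str            # formal definition of the quantity being reported
    methodology: str           # how the benchmark is run and analysed
    assumptions: list[str] = field(default_factory=list)   # main assumptions
    limitations: list[str] = field(default_factory=list)   # known limitations
    software_url: str = ""     # link to an open-source reference implementation

example = MetricEntry(
    name="readout fidelity",
    definition="Probability of correctly identifying the prepared basis state",
    methodology="Prepare |0> and |1> repeatedly, measure and compare outcomes",
    assumptions=["state-preparation errors are negligible"],
    limitations=["does not separate preparation and measurement errors"],
    software_url="https://example.org/metric-implementation",  # placeholder URL
)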

“As research on metrics and benchmarks progresses, our collection of metrics and the associated software for performance evaluation are expected to evolve,” says Rungger. “Ultimately, the repository we have put together will provide a ‘living’ online resource, updated at regular intervals to account for community-driven developments in the field.”

From benchmarking to standards

Innovation being what it is, those developments are well under way. For starters, the importance of objective and relevant performance benchmarks for quantum computers has led several international standards bodies to initiate work on specific areas that are ready for standardization – work that, in turn, will give manufacturers, end-users and investors an informed evaluation of the performance of a range of quantum computing components, subsystems and full-stack platforms.

What’s evident is that the UK’s voice on metrics and benchmarking is already informing the collective conversation around standards development. “The quantum computing community and international standardization bodies are adopting a number of concepts from our approach to benchmarking standards,” notes Deep Lall, a quantum scientist in Rungger’s team at NPL and lead author of the study. “I was invited to present our work to a number of international standardization meetings and scientific workshops, opening up widespread international engagement with our research and discussions with colleagues across the benchmarking community.”

He continues: “We want the UK effort on benchmarking and metrics to shape the broader international effort. The hope is that the collection of metrics we have pulled together, along with the associated open-source software provided to evaluate them, will guide the development of standardized benchmarks for quantum computers and speed up the progress of the field towards practical quantum advantage.”

That’s a view echoed – and amplified – by Cyrus Larijani, NPL’s head of quantum programme. “As we move into the next phase of NPL’s quantum strategy, the importance of evidence-based decision making becomes ever-more critical,” he concludes. “By grounding our strategic choices in robust measurement science and real-world data, we ensure that our innovations not only push the boundaries of quantum technology but also deliver meaningful impact across industry and society.”

Further reading

Deep Lall et al. 2025 A review and collection of metrics and benchmarks for quantum computers: definitions, methodologies and software, arXiv:2502.06717 (https://arxiv.org/abs/2502.06717)

The headline take from NQCC

Quantum computing technology has reached the stage where a number of methods for performance characterization are backed by a large body of real-world implementation and use, as well as by theoretical proofs. These mature benchmarking methods will benefit from commonly agreed-upon approaches that are the only way to fairly, unambiguously and objectively benchmark quantum computers from different manufacturers.

“Performance benchmarks are a fundamental enabler of technology innovation in quantum computing,” explains Konstantinos Georgopoulos, who heads up the NQCC’s quantum applications team and is responsible for the centre’s liaison with the NPL benchmarking consortium. “How do we understand performance? How do we compare capabilities? And, of course, what are the metrics that help us to do that? These are the leading questions we addressed through the course of this study.”

If the importance of benchmarking is a given, so too are collaboration and the need to bring research and industry stakeholders together from across the quantum ecosystem. “I think that’s what we achieved here,” says Georgopoulos. “The long list of institutions and experts who contributed their perspectives on quantum computing was crucial to the success of this project. What we’ve ended up with are better metrics, better benchmarks, and a better collective understanding to push forward with technology translation that aligns with end-user requirements across diverse industry settings.”

End note: NPL retains copyright on this article.


Quantum computing and AI join forces for particle physics

23 October 2025 at 15:57

This episode of the Physics World Weekly podcast explores how quantum computing and artificial intelligence can be combined to help physicists search for rare interactions in data from an upgraded Large Hadron Collider.

My guest is Javier Toledo-Marín, and we spoke at the Perimeter Institute in Waterloo, Canada. As well as having an appointment at Perimeter, Toledo-Marín is also associated with the TRIUMF accelerator centre in Vancouver.

Toledo-Marín and colleagues have recently published a paper called “Conditioned quantum-assisted deep generative surrogate for particle–calorimeter interactions”.


This podcast is supported by Delft Circuits.

As gate-based quantum computing continues to scale, Delft Circuits provides the I/O solutions that make it possible.


Master’s programme takes microelectronics in new directions

23 October 2025 at 10:28
Professor Zhao Jiong, who leads a Master’s programme in microelectronics technology and materials, has been recognized for his pioneering research in 2D ferroelectrics (Courtesy: PolyU)

The microelectronics sector is known for its relentless drive for innovation, continually delivering performance and efficiency gains within ever more compact form factors. Anyone aspiring to build a career in this fast-moving field needs not just a thorough grounding in current tools and techniques, but also an understanding of the next-generation materials and structures that will propel future progress.

That’s the premise behind a Master’s programme in microelectronics technology and materials at the Hong Kong Polytechnic University (PolyU). Delivered by the Department of Applied Physics – globally recognized for its pioneering research in technologies such as two-dimensional materials, nanoelectronics and artificial intelligence – the programme aims to provide students with both the fundamental knowledge and practical skills they need to kickstart their professional future, whether they choose to pursue further research or to find a job in industry.

“The programme provides students with all the key skills they need to work in microelectronics, such as circuit design, materials processing and failure analysis,” says programme leader Professor Zhao Jiong, whose research focuses on 2D ferroelectrics. “But they also have direct access to more than 20 faculty members who are actively investigating novel materials and structures that go beyond silicon-based technologies.”

The course is also unusual in providing a combined focus on electronics engineering and materials science, giving students a thorough understanding of the underlying semiconductors and device structures as well as their use in mass-produced integrated circuits. That fundamental knowledge is reinforced through regular experimental work, providing the students with hands-on experience of fabricating and testing electronic devices. “Our cleanroom laboratory is equipped with many different instruments for microfabrication, including thin-film deposition, etching and photolithography, as well as advanced characterization tools for understanding their operating mechanisms and evaluating their performance,” adds Zhao.

In a module focusing on thin-film materials, for example, students gain valuable experience from practical sessions that enable them to operate the equipment for different growth techniques, such as sputtering, molecular beam epitaxy, and both physical and chemical vapour deposition. In another module on materials analysis and characterization, the students are tasked with analysing the layered structure of a standard computer chip by making cross-sections that can be studied with a scanning electron microscope.

During the programme, students have access to a cleanroom laboratory that gives them hands-on experience of using advanced tools for fabricating and characterizing electronic materials and structures (Courtesy: PolyU)

That practical experience extends to circuit design, with students learning how to use state-of-the-art software tools for configuring, simulating and analysing complex electronic layouts. “Through this experimental work students gain the technical skills they need to design and fabricate integrated circuits, and to optimize their performance and reliability through techniques like failure analysis,” says Professor Dai Jiyan, PolyU Associate Dean of Students, who also teaches the module on thin-film materials. “This hands-on experience helps to prepare them for working in a manufacturing facility or for continuing their studies at the PhD level.”

Also integrated into the teaching programme is the use of artificial intelligence to assist key tasks, such as defect analysis, materials selection and image processing. Indeed, PolyU has established a joint laboratory with Huawei to investigate possible applications of AI tools in electronic design, providing the students with early exposure to emerging computational methods that are likely to shape the future of the microelectronics industry. “One of our key characteristics is that we embed AI into our teaching and laboratory work,” says Dai. “Two of the modules are directly related to AI, while the joint lab with Huawei helps students to experiment with using AI in circuit design.”

Now in its third year, the Master’s programme was designed in collaboration with Hong Kong’s Applied Science and Technology Research Institute (ASTRI), established in 2000 to enhance the competitiveness of the region through the use of advanced technologies. Researchers at PolyU already pursue joint projects with ASTRI in areas like chip design, microfabrication and failure analysis. As part of the programme, these collaborators are often invited to give guest lectures or to guide the laboratory work. “Sometimes they even provide some specialized instruments for the students to use in their experiments,” says Zhao. “We really benefit from this collaboration.”

Once primed with the knowledge and experience from the taught modules, the students have the opportunity to work alongside one of the faculty members on a short research project. They can choose whether to focus on a topic that is relevant to present-day manufacturing, such as materials processing or advanced packaging technologies, or to explore the potential of emerging materials and devices across applications ranging from solar cells and microfluidics to next-generation memories and neuromorphic computing.

“It’s very interesting for the students to get involved in these projects,” says Zhao. “They learn more about the research process, which can make them more confident to take their studies to the next level. All of our faculty members are engaged in important work, and we can guide the students towards a future research field if that’s what they are interested in.”

There are also plenty of progression opportunities for those who are more interested in pursuing a career in industry. As well as providing support and advice through its joint lab in AI, Huawei arranges visits to its manufacturing facilities and offers some internships to interested students. PolyU also organizes visits to Hong Kong’s Science Park, home to multinational companies such as Infineon as well as a large number of start-up companies in the microelectronics sector. Some of these might support a student’s research project, or offer an internship in areas such as circuit design or microfabrication.

The international outlook offered by PolyU has made the Master’s programme particularly appealing to students from mainland China, but Zhao and Dai believe that the forward-looking ethos of the course should make it an appealing option for graduates across Asia and beyond. “Through the programme, the students gain knowledge about all aspects of the microelectronics industry, and how it is likely to evolve in the future,” says Dai. “The knowledge and technical skills gained by the students offer them a competitive edge for building their future career, whether they want to find a job in industry or to continue their research studies.”


Resonant laser ablation selectively destroys pancreatic tumours

23 October 2025 at 10:00

Pancreatic ductal adenocarcinoma (PDAC), the most common type of pancreatic cancer, is an aggressive tumour with a poor prognosis. Surgery remains the only potential cure, but is feasible in just 10–15% of cases. A team headed up at Sichuan University in China has now developed a selective laser ablation technique designed to target PDAC while leaving healthy pancreatic tissue intact.

Thermal ablation techniques, such as radiofrequency, microwave or laser ablation, could provide a treatment option for patients with locally advanced PDAC, but existing methods risk damaging surrounding blood vessels and healthy pancreatic tissues. The new approach, described in Optica, uses the molecular fingerprint of pancreatic tumours to enable selective ablation.

The technique exploits the fact that PDAC tissue contains a large amount of collagen compared with healthy pancreatic tissue. The amide-I band of collagen fibres exhibits a strong absorption peak at 6.1 µm, so the researchers surmised that tuning the treatment laser to this resonant wavelength could enable efficient tumour ablation with minimal collateral thermal damage. As such, they designed a femtosecond pulsed laser that can deliver 6.1 µm pulses with a power of more than 1 W.

Resonant wavelength: Fourier-transform infrared spectra of PDAC (blue) and the laser (red). (Courtesy: Houkun Liang, Sichuan University)

“We developed a mid-infrared femtosecond laser system for the selective tissue ablation experiment,” says team leader Houkun Liang. “The system is tunable in the wavelength range of 5 to 11 µm, aligning with various molecular fingerprint absorption peaks such as amide proteins, cholesteryl ester, hydroxyapatite and so on.”

Liang and colleagues first examined the ablation efficiency of three different laser wavelengths on two types of pancreatic cancer cells. Compared with non-resonant wavelengths of 1 and 3 µm, the collagen-resonant 6.1 µm laser was far more effective in killing pancreatic cancer cells, reducing cell viability to ranges of 0.27–0.32 and 0.37–0.38, at 0 and 24 h, respectively.

The team observed similar results in experiments on ectopic PDAC tumours cultured on the backs of mice. Irradiation at 6.1 µm led to five to 10 times deeper tumour ablation than seen for the non-resonant wavelengths (despite using a laser power of 5 W for 1 µm ablation and just 500 mW for 6.1 and 3 µm), indicating that 6.1 µm is the optimal wavelength for PDAC ablation surgery.

To validate the feasibility and safety of 6.1 µm laser irradiation, the team used the technique to treat PDAC tumours on live mice. Nine days after ablation, the tumour growth rate in treated mice was significantly suppressed, with an average tumour volume of 35.3 mm3. In contrast, tumour volume in a control group of untreated mice reached an average of 292.7 mm3, roughly eight times the size of the ablated tumours. No adverse symptoms were observed following the treatment.

Clinical potential

The researchers also used 6.1 µm laser irradiation to ablate pancreatic tissue samples (including normal tissue and PDAC) from 13 patients undergoing surgical resection. They used a laser power of 1 W and four scanning speeds (0.5, 1, 2 and 3 mm/s) with 10 ablation passes, examining 20 to 40 samples for each parameter.

At the slower scanning speeds, excessive energy accumulation resulted in comparable ablation depths in tumour and normal tissue. At speeds of 2 or 3 mm/s, however, the average ablation depths in PDAC samples were 2.30 and 2.57 times greater than in normal pancreatic tissue, respectively, demonstrating the sought-after selective ablation. At 3 mm/s, for example, the ablation depth in tumour was 1659.09±405.97 µm, compared with 702.5±298.32 µm in normal pancreas.

The findings show that by carefully controlling the laser power, scanning speed and number of passes, near-complete ablation of PDACs can be achieved, with minimal damage to surrounding healthy tissues.

To further investigate the clinical potential of this technique, the researchers developed an anti-resonant hollow-core fibre (AR-HCF) that can deliver high-power 6.1 µm laser pulses deep inside the human body. The fibre has a core diameter of approximately 113 µm and low bending losses at radii under 10 cm. The researchers used the AR-HCF to perform 6.1 µm laser ablation of PDAC and normal pancreas samples. The ablation depth in PDAC was greater than in normal pancreas, confirming the selective ablation properties.

“We are working together with a company to make a medical-grade fibre system to deliver the mid-infrared femtosecond laser. It consists of AR-HCF to transmit mid-infrared femtosecond pulses, a puncture needle and a fibre lens to focus the light and prevent liquid tissue getting into the fibre,” explains Liang. “We are also making efforts to integrate an imaging unit into the fibre delivery system, which will enable real-time monitoring and precise surgical guidance.”

Next, the researchers aim to further optimize the laser parameters and delivery systems to improve ablation efficiency and stability. They also plan to explore the applicability of selective laser ablation to other tumour types with distinct molecular signatures, and to conduct larger-scale animal studies to verify long-term safety and therapeutic outcomes.

“Before this technology can be used for clinical applications, highly comprehensive biological safety assessments are necessary,” Liang emphasizes. “Designing well-structured clinical trials to assess efficacy and risks, as well as navigating regulatory and ethical approvals, will be critical steps toward translation. There is a long way to go.”



Doorway states spotted in graphene-based materials

22 October 2025 at 15:51

Low-energy electrons escape from some materials via distinct “doorway” states, according to a study done by physicists at Austria’s Vienna Institute of Technology. The team studied graphene-based materials and found that the nature of the doorway states depended on the number of graphene layers in the sample.

Low-energy electron (LEE) emission from solids is used across a range of materials analysis and processing applications including scanning electron microscopy and electron-beam induced deposition. However, the precise physics of the emission process is not well understood.

Electrons are ejected from a material when a beam of electrons is fired at its surface. Some of these incident electrons will impart energy to electrons residing in the material, causing some resident electrons to be emitted from the surface. In the simplest model, the minimum energy needed for this LEE emission is the electron binding energy of the material.

Frog in a box

In this new study, however, researchers have shown that exceeding the binding energy is not enough for LEE emission from graphene-based materials. Not only does the electron need this minimum energy, it must also be in a specific doorway state or it is unlikely to escape. The team compare this phenomenon to the predicament of a frog in a cardboard box with a window. Not only must the frog hop a certain height to escape the box, it must also begin its hop from a position that will result in it travelling through the hole (see figure).

For most materials, the energy spectrum of LEE electrons is featureless. However, it was known that graphite’s spectrum has an “X state” at about 3.3 eV, where emission is enhanced. This state could be related to doorway states.

To search for doorway states, the Vienna team studied LEE emission from graphite as well as from single-layer and bi-layer graphene. Graphene is a sheet of carbon just one atom thick. Sheets can stick together via the relatively weak van der Waals force to create multilayer graphene – and ultimately graphite, which comprises a large number of layers.

Because electrons are mostly confined within the graphene layers, the electronic states of single-layer, bi-layer and multi-layer graphene are broadly similar. As a result, it was expected that these materials would have similar LEE emission spectra. However, the Vienna team found a surprising difference.

Emission and reflection

The team made their discovery by firing a beam of relatively low-energy electrons (173 eV) at the surface of single-layer and bi-layer graphene, as well as graphite, at an angle of incidence of 60°. The scattered electrons are then detected at the same angle of reflection, while a second detector pointed normal to the surface captures any emitted electrons. In quantum mechanics electrons are indistinguishable, so the modifiers “scattered” and “emitted” are illustrative rather than precise.

The team looked for coincident signals in both detectors and plotted their results as a function of energy in 2D “heat maps”. These plots revealed that bi-layer graphene and graphite each had doorway states – but at different energies. However, single-layer graphene did not appear to have any doorway states. By combining experiments with calculations, the team showed that doorway states emerge above a certain number of layers, and that graphite’s X state can be attributed in part to a doorway state that appears at about five layers of graphene.
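For illustration only – the arrays and energy ranges below are invented, not the team’s data – the kind of coincidence “heat map” described above can be built by binning pairs of coincident detector energies into a two-dimensional histogram:

import numpy as np

rng = np.random.default_rng(0)
# Hypothetical coincident events: energy registered by the detector normal to
# the surface (emitted electrons) and by the detector at the specular angle
# (scattered electrons), both in eV.
e_emitted = rng.uniform(0, 30, size=10_000)
e_scattered = rng.uniform(100, 173, size=10_000)

# Bin the coincidences on a 2D grid; enhanced regions in such a map are what
# the researchers interpret as signatures of doorway states.
heat_map, emitted_edges, scattered_edges = np.histogram2d(
    e_emitted, e_scattered, bins=(60, 60))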

“For the first time, we’ve shown that the shape of the electron spectrum depends not only on the material itself, but crucially on whether and where such resonant doorway states exist,” explains Anna Niggas at the Vienna Institute of Technology.

As well as providing important insights into how the electronic properties of graphene morph into those of graphite, the team says that their research could also shed light on the properties of other layered materials.

The research is described in Physical Review Letters.


NASA’s Jet Propulsion Lab lays off a further 10% of staff

22 October 2025 at 14:02

NASA’s Jet Propulsion Laboratory (JPL) is to lay off some 550 employees as part of a restructuring that began in July. The action affects about 11% of JPL’s employees and represents the lab’s third downsizing in the past 20 months. When the layoffs are complete by the end of the year, the lab will have roughly 4500 employees, down from about 6500 at the start of 2024. A further 4000 employees have already left NASA during the past six months via sacking, retirement or voluntary buyouts.

Managed by the California Institute of Technology in Pasadena, JPL oversees scientific missions such as the Psyche asteroid probe, the Europa Clipper and the Perseverance rover on Mars. The lab also operates the Deep Space Network that keeps Earth in communication with unmanned space missions. JPL bosses already laid off about 530 staff – and 140 contractors – in February last year followed by another 325 people in November 2024.

JPL director Dave Gallagher insists, however, that the new layoffs are not related to the current US government shutdown that began on 1 October. “[They are] essential to securing JPL’s future by creating a leaner infrastructure, focusing on our core technical capabilities, maintaining fiscal discipline, and positioning us to compete in the evolving space ecosystem,” he says in a message to employees.

Judy Chu, Democratic Congresswoman for the constituency that includes JPL, is less optimistic. “Every layoff devastates the highly skilled and uniquely talented workforce that has made these accomplishments possible,” she says. “Together with last year’s layoffs, this will result in an untold loss of scientific knowledge and expertise that threatens the very future of American leadership in space exploration and scientific discovery.”

John Logsdon, professor emeritus at George Washington University and founder of the university’s Space Policy Institute, says that the cuts are a direct result of the Trump administration’s approach to science and technology. “The administration gives low priority to robotic science and exploration, and has made draconic cuts to the science budget; that budget supports JPL’s work,” he told Physics World. “With these cuts, there is not enough money to support a JPL workforce sized for more ambitious activities. Ergo, staff cuts.”


How to solve the ‘future of physics’ problem

22 October 2025 at 12:00

I hugely enjoyed physics when I was a youngster. I had the opportunity both at home and school to create my own projects, which saw me make electronic circuits, crazy flying models like delta-wings and autogiros, and even a gas chromatograph with a home-made chart recorder. Eventually, this experience made me good enough to repair TV sets, and work in an R&D lab in the holidays devising new electronic flow controls.

That enjoyment continued beyond school. I ended up doing a physics degree at the University of Oxford before working on the discovery of the gluon at the DESY lab in Hamburg for my PhD. Since then I have used physics in industry – first with British Oxygen/Linde and later with Air Products & Chemicals – to solve all sorts of different problems, build innovative devices and file patents.

While some students have a similarly positive school experience and subsequent career path, not enough do. Quite simply, physics at school is the key to so many important, useful developments, both within and beyond physics. But we have a physics education problem, or to put it another way – a “future of physics” problem.

There are just not enough school students enjoying and learning physics. On top of that there are not enough teachers enjoying physics and not enough students doing practical physics. The education problem is bad for physics and for many other subjects that draw on physics. Alas, it’s not a new problem but one that has been developing for years.

Problem solving

Many good points about the future of physics learning were made by the Institute of Physics in its 2024 report Fundamentals of 11 to 19 Physics. The report called for more physics lessons to have a practical element and encouraged more 16-year-old students in England, Wales and Northern Ireland to take AS-level physics at 17 so that they carry their GCSE learning at least one step further.

Doing so would furnish students who are aiming to study another science or a technical subject with the necessary skills and give them the option to take physics A-level. Another recommendation is to link physics more closely to T-levels – two-year vocational courses in England for 16–19 year olds that are equivalent to A-levels – so that students following that path get a background in key aspects of physics, for example in engineering, construction, design and health.

But do all these suggestions solve the problem? I don’t think they are enough and we need to go further. The key change to fix the problem, I believe, is to have student groups invent, build and test their own projects. Ideally this should happen before GCSE level so that students have the enthusiasm and background knowledge to carry them happily forward into A-level physics. They will benefit from “pull learning” – pulling in knowledge and active learning that they will remember for life. And they will acquire wider life skills too.

Developing skillsets

During my time in industry, I did outreach work with schools every few weeks and gave talks with demonstrations at the Royal Institution and the Franklin Institute. For many years I also ran a Saturday Science club in Guildford, Surrey, for pupils aged 8–15.

Based on this, I wrote four Saturday Science books about the many playful and original demonstrations and projects that came out of it. Then at the University of Surrey, as a visiting professor, I had small teams of final-year students who devised extraordinary engineering – designing superguns for space launches, 3D printers for full-size buildings and volcanic power plants inter alia. A bonus was that other staff working with the students got more adventurous too.

But that was working with students already committed to a scientific path. So lately I’ve been working with teachers to get students to devise and build their own innovative projects. We’ve had 14–15-year-old state-school students in groups of three or four, brainstorming projects, sketching possible designs, and gathering background information. We help them, and we get A-level students to help too, who gain teaching experience in the process. Students not only learn physics better but also pick up important life skills like brainstorming, team-working, practical work, analysis and presentations.

We’ve seen lots of ingenuity and some great projects such as an ultrasonic scanner to sense wetness of cloth; a system to teach guitar by lighting up LEDs along the guitar neck; and measuring breathing using light passing through a band of Lycra around the patient below the ribs. We’ve seen the value of failure, both mistakes and genuine technical problems.

Best of all, we’ve also noticed what might be dubbed the “combination bonus” – students having to think about how they combine their knowledge of one area of physics with another. A project involving a sensor, for example, will often involve electronics as well as the physics of the sensor, and so student knowledge of both areas is enhanced.

Some teachers may question how you mark such projects. The answer is don’t mark them! Project work and especially group work is difficult to mark fairly and accurately, and the enthusiasm and increased learning by students working on innovative projects will feed through into standard school exam results.

Not trying to grade such projects will mean more students go on to study physics further, potentially to do a physics-related extended project qualification – equivalent to half an A-level where students research a topic to university level – and do it well. Long term, more students will take physics with them into the world of work, from physics to engineering or medicine, from research to design or teaching.

Such projects are often fun for students and teachers. Teachers are often intrigued and amazed by students’ ideas and ingenuity. So, let’s choose to do student-invented project work at school and let’s finally solve the future of physics problem.


A recipe for quantum chaos

22 October 2025 at 11:44

The control of large, strongly coupled, multi-component quantum systems with complex dynamics is a challenging task.

It is, however, an essential prerequisite for the design of quantum computing platforms and for the benchmarking of quantum simulators.

A key concept here is that of quantum ergodicity. This is because quantum ergodic dynamics can be harnessed to generate highly entangled quantum states.

In classical statistical mechanics, an ergodic system evolving over time will explore all possible microstates uniformly. Mathematically, this means that a sufficiently large collection of random samples from an ergodic process can represent the average statistical properties of the entire process.
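Stated more formally – a standard textbook expression rather than anything taken from the new paper – ergodicity for a classical observable f means that the long-time average along a trajectory x(t) equals the average over the invariant measure \mu:

\lim_{T\to\infty} \frac{1}{T}\int_0^T f\bigl(x(t)\bigr)\,\mathrm{d}t \;=\; \int f(x)\,\mathrm{d}\mu(x)

for almost every initial condition.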

Quantum ergodicity is simply the extension of this concept to the quantum realm.

Closely related to this is the idea of chaos. A chaotic system is one that is very sensitive to its initial conditions. Small changes can be amplified over time, causing large changes in the future.

The ideas of chaos and ergodicity are intrinsically linked as chaotic dynamics often enable ergodicity.

Until now, it has been very challenging to predict which experimentally preparable initial states will trigger quantum chaos and ergodic dynamics over a reasonable time scale.

In a new paper published in Reports on Progress in Physics, a team of researchers have proposed an ingenious solution to this problem using the Bose–Hubbard Hamiltonian.
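For reference, the Bose–Hubbard Hamiltonian in its simplest form (written here from its standard textbook definition rather than the paper’s own notation) is

H = -J \sum_{\langle i,j \rangle} \bigl( \hat{b}_i^{\dagger}\hat{b}_j + \hat{b}_j^{\dagger}\hat{b}_i \bigr) + \frac{U}{2} \sum_i \hat{n}_i (\hat{n}_i - 1),

where J is the tunnelling amplitude between neighbouring lattice sites, U is the on-site interaction energy and \hat{n}_i = \hat{b}_i^{\dagger}\hat{b}_i counts the bosons on site i; the interplay between J, U and the chosen initial state governs how the dynamics unfold.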

They took as an example ultracold atoms in an optical lattice (a typical choice for experiments in this field) to benchmark their method.

The results show that there are certain tangible threshold values which must be crossed in order to ensure the onset of quantum chaos.

These results will be invaluable for experimentalists working across a wide range of quantum sciences.


Neural simulation-based inference techniques at the LHC

22 October 2025 at 11:44

Precision measurements of theoretical parameters are a core element of the scientific program of experiments at the Large Hadron Collider (LHC) as well as other particle colliders. 

These are often performed using statistical techniques such as the method of maximum likelihood. However, given the size of datasets generated, reduction techniques, such as grouping data into bins, are often necessary. 
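Schematically – this is the generic textbook form, not the specific ATLAS likelihood – a binned analysis replaces the full event-by-event information with observed counts n_i and fits a parameter of interest \mu via

\mathcal{L}(\mu) = \prod_{i \,\in\, \mathrm{bins}} \mathrm{Pois}\bigl(n_i \mid \mu\, s_i + b_i\bigr),

where s_i and b_i are the expected signal and background yields in bin i; any information about how events are distributed within a bin is discarded.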

These can lead to a loss of sensitivity, particularly in non-linear cases like off-shell Higgs boson production and effective field theory measurements. The non-linearity in these cases comes from quantum interference, and traditional methods are unable to optimally distinguish the signal from background.

In this paper, the ATLAS collaboration pioneered the use of a neural network based technique called neural simulation-based inference (NSBI) to combat these issues. 

A neural network is a machine learning model originally inspired by how the human brain works. It’s made up of layers of interconnected units called neurons, which process information and learn patterns from data. Each neuron receives input, performs a simple calculation, and passes the result to other neurons. 

NSBI uses these neural networks to analyse each particle collision event individually, preserving more information and improving accuracy.
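One widely used construction in simulation-based inference – described here generically; the ATLAS paper should be consulted for the collaboration’s exact implementation – is the “likelihood-ratio trick”. A classifier s(x) trained on balanced samples to separate events simulated under a hypothesis \theta from those of a reference hypothesis approximates the per-event likelihood ratio,

r(x;\theta) = \frac{p(x \mid \theta)}{p(x \mid \theta_{\mathrm{ref}})} \approx \frac{s(x;\theta)}{1 - s(x;\theta)},

and summing \ln r(x;\theta) over the observed events gives an unbinned log-likelihood ratio without ever grouping the data into bins.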

The framework developed here can handle many sources of uncertainty and includes tools to measure how confident scientists can be in their results.

The researchers benchmarked their method by using it to calculate the Higgs boson signal strength and compared it with previous methods, with impressive results.

The greatly improved sensitivity gained from using this method will be invaluable in the search for physics beyond the Standard Model in future experiments at ATLAS and beyond.

Read the full article

An implementation of neural simulation-based inference for parameter estimation in ATLAS

The ATLAS Collaboration, 2025 Rep. Prog. Phys. 88 067801


‘Science needs all perspectives – male, female and everything in-between’: Brazilian astronomer Thaisa Storchi Bergmann

21 October 2025 at 15:30

As a teenager in her native Rio Grande do Sul, a state in Southern Brazil, Thaisa Storchi Bergmann enjoyed experimenting in an improvised laboratory her parents built in their attic. They didn’t come from a science background – her father was an accountant, her mother a primary school teacher – but they encouraged her to do what she enjoyed. With a friend from school, Storchi Bergmann spent hours looking at insects with a microscope and running experiments from a chemistry toy kit. “We christened the lab Thasi-Cruz after a combination of our names,” she chuckles.

At the time, Storchi Bergmann could not have imagined that one day this path would lead to cosmic discoveries and international recognition at the frontiers of astrophysics. “I always had the curiosity inside me,” she recalls. “It was something I carried since adolescence.”

That curiosity almost got lost to another discipline. By the time Storchi Bergmann was about to enter university, she was swayed by a cousin living with her family who was passionate about architecture. In 1974 she began studying architecture at the Federal University of Rio Grande do Sul (UFRGS). “But I didn’t really like technical drawing. My favourite part of the course were physics classes,” she says. Within a semester, she switched to physics.

There she met Edemundo da Rocha Vieira, the first astrophysicist UFRGS ever hired – who later went on to structure the university’s astronomy department. He nurtured Storchi Bergmann’s growing fascination with the universe and introduced her to research.

In 1977, newly married after graduation, Storchi Bergmann followed her husband to Rio de Janeiro, where she did a master’s degree and worked with William Kunkel, an American astronomer who was in Rio to help establish Brazil’s National Astrophysics Laboratory. She began working on data from a photometric system to measure star radiation. “But Kunkel said galaxies were a lot more interesting to study, and that stuck in my head,” she says.

Three years after moving to Rio, she returned to Porto Alegre, in Rio Grande do Sul, to start her doctoral research and teach at UFRGS. Vital to her career was her decision to join the group of Miriani Pastoriza, one of the pioneers of extragalactic astrophysics in Latin America. “She came from Argentina, where [in the late 1970s and early 1980s] scientists were being strongly persecuted [by the country’s military dictatorship] at the time,” she recalls. Pastoriza studied galaxies with “peculiar nuclei” – objects later known to harbour supermassive black holes. Under Pastoriza’s guidance, she moved from stars to galaxies, laying the foundation for her career.

Between 1986 and 1987, Storchi Bergmann often travelled to Chile to make observations and gather data for her PhD, using some of the largest telescopes available at the time. Then came a transformative period – a postdoc fellowship in Maryland, US, just as the Hubble Space Telescope was launched in 1990. “Each Thursday, I would drive to Baltimore for informal bag-lunch talks at the Space Telescope Science Institute, absorbing new results on active galactic nuclei (AGN) and supermassive black holes,” Storchi Bergmann recalls.

Discoveries and insights

In 1991, during an observing campaign, she and a collaborator saw something extraordinary in the galaxy NGC 1097: gas moving at immense speeds, captured by the galaxy’s central black hole. The work, published in 1993, became one of the earliest documented cases of what are now called “tidal disruption events”, in which a star or cloud gets too close to a black hole and is torn apart.

Her research also contributed to one of the defining insights of the Hubble era: that every massive galaxy hosts a central black hole. “At first, we didn’t know if they were rare,” she explains. “But gradually it became clear: these objects are fundamental to galaxy evolution.”

Another collaboration brought her into contact with Daniela Calzetti, whose work on the effects of interstellar dust led to the formulation of the widely used “Calzetti law”. These and other contributions placed Storchi Bergmann among the most cited scientists worldwide, recognition of which came in 2015 when she received the L’Oréal-UNESCO Award for Women in Science.

Her scientific achievements, however, unfolded against personal and structural obstacles. As a young mother, she often brought her baby to observatories and conferences so she could breastfeed. Many women in science are no strangers to this kind of juggling.

“It was never easy,” Storchi Bergmann reflects. “I was always running, trying to do 20 things at once.” The lack of childcare infrastructure in universities compounded the challenge. She recalls colleagues who succeeded by giving up on family life altogether. “That is not sustainable,” she insists. “Science needs all perspectives – male, female and everything in-between. Otherwise, we lose richness in our vision of the universe.”

When she attended conferences early in her career, she was often the only woman in the room. Today, she says, the situation has greatly improved, even if true equality remains distant.

Now a tenured professor at UFRGS and a member of the Brazilian Academy of Sciences, Storchi Bergmann continues to push at the cosmic frontier. Her current focus is the Legacy Survey of Space and Time (LSST), about to begin at the Vera Rubin Observatory in Chile.

Her group is part of the AGN science collaboration, developing methods to analyse the characteristic flickering of accreting black holes. With students, she is experimenting with automated pipelines and artificial intelligence to make sense of and manage the massive amounts of data.

Challenges ahead

Yet this frontier science is not guaranteed. Storchi Bergmann is frustrated by the recent collapse in research scholarships. Historically, her postgraduate programme enjoyed a strong balance of grants from both of Brazil’s federal research funding agencies, CNPq (from the Ministry of Science) and CAPES (from the Ministry of Education). But cuts at CNPq, she says, have left students without support, and CAPES has not filled the gap.

“The result is heartbreaking,” she says. “I have brilliant students ready to start, including one from Piauí (a state in north-eastern Brazil), but without a grant, they simply cannot continue. Others are forced to work elsewhere to support themselves, leaving no time for research.”

She is especially critical of the policy of redistributing scarce funds away from top-rated programmes to newer ones without expanding the overall budget. “You cannot build excellence by dismantling what already exists,” she argues.

For her, the consequences go beyond personal frustration. They risk undermining decades of investment that placed Brazil on the international astrophysics map. Despite these challenges, Storchi Bergmann remains driven and continues to mentor master’s and PhD students, determined to prepare them for the LSST era.

At the heart of her research is a question as grand as any in cosmology: which came first – the galaxy or its central black hole? The answer, she believes, will reshape our understanding of how the universe came to be. And it will carry with it the fingerprint of her work: the persistence of a Brazilian scientist who followed her curiosity from a home-made lab to the centres of galaxies, overcoming obstacles along the way.


Chip-integrated nanoantenna efficiently harvests light from diamond defects

22 October 2025 at 10:00

When diamond defects emit light, how much of that light can be captured and used for quantum technology applications? According to researchers at the Hebrew University of Jerusalem, Israel and Humboldt Universität of Berlin, Germany, the answer is “nearly all of it”. Their technique, which relies on positioning a nanoscale diamond at an optimal location within a chip-integrated nanoantenna, could lead to improvements in quantum communication and quantum sensing.

Guided light: Illustration showing photon emission from a nanodiamond and light directed by a bullseye antenna. (Courtesy: Boaz Lubotzky)

Nitrogen-vacancy (NV) centres are point defects that occur when one carbon atom in diamond’s lattice structure is replaced by a nitrogen atom next to an empty lattice site (a vacancy). Together, this nitrogen atom and its adjacent vacancy behave like a negatively charged entity with an intrinsic quantum spin.

When excited with laser light, an electron in an NV centre can be promoted into an excited state. As the electron decays back to the ground state, it emits light. The exact absorption-and-emission process is complicated by the fact that both the ground state and the excited state of the NV centre have three sublevels (spin triplet states). However, by exciting an individual NV centre repeatedly and collecting the photons it emits, it is possible to determine the spin state of the centre.

The problem, explains Boaz Lubotzky, who co-led this research effort together with his colleague Ronen Rapaport, is that NV centres radiate over a wide range of angles. Hence, without an efficient collection interface, much of the light they emit is lost.

Standard optics capture around 80% of the light

Lubotzky and colleagues say they have now solved this problem thanks to a hybrid nanostructure made from a PMMA dielectric layer above a silver grating. This grating is arranged in a precise bullseye pattern that accurately guides light in a well-defined direction thanks to constructive interference. Using a nanometre-accurate positioning technique, the researchers placed the nanodiamond containing the NV centres exactly at the optimal location for light collection: right at the centre of the bullseye.

For standard optics with a numerical aperture (NA) of about 0.5, the team found that the system captures around 80% of the light emitted from the NV centres. When NA > 0.7, this value exceeds 90%, while for NA > 0.8, Lubotzky says it approaches unity.
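To put those numbers in context with a back-of-the-envelope estimate (an idealized comparison, not a figure from the study): for an isotropic point emitter in air, a lens with numerical aperture NA = \sin\theta collects only the solid-angle fraction

\frac{\Omega}{4\pi} = \frac{1 - \cos\theta}{2},

which is roughly 7% at NA = 0.5 – underlining how strongly the bullseye antenna redirects the emission towards the collection optics.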

“The device provides a chip-based, room-temperature interface that makes NV emission far more directional, so a larger fraction of photons can be captured by standard lenses or coupled into fibres and photonic chips,” he tells Physics World. “Collecting more photons translates into faster measurements, higher sensitivity and lower power, thereby turning NV centres into compact precision sensors and also into brighter, easier-to-use single-photon sources for secure quantum communication.”

The researchers say their next priority is to transition their prototype into a plug-and-play, room-temperature module – one that is fully packaged and directly coupled to fibres or photonic chips – with wafer-level deterministic placement for arrays. “In parallel, we will be leveraging the enhanced collection for NV-based magnetometry, aiming for faster, lower-power measurements with improved readout fidelity,” says Lubotzky. “This is important because it will allow us to avoid repeated averaging and enable fast, reliable operation in quantum sensors and processors.”

They detail their present work in APL Quantum.


Illuminating quantum worlds: a Diwali conversation with Rupamanjari Ghosh

21 October 2025 at 18:25

Homes and cities around the world are this week celebrating Diwali or Deepavali – the Indian “festival of lights”. For Indian physicist Rupamanjari Ghosh, who is the former vice chancellor of Shiv Nadar University Delhi-NCR, this festival sheds light on the quantum world. Known for her work on nonlinear optics and entangled photons, Ghosh finds a deep resonance between the symbolism of Diwali and the ongoing revolution in quantum science.

“Diwali comes from Deepavali, meaning a ‘row of lights’. It marks the triumph of light over dark; good over evil; and knowledge over ignorance,” Ghosh explains. “In science too, every discovery is a Diwali –  a victory of knowledge over ignorance.”

With 2025 marked as the International Year of Quantum Science and Technology, a victory of knowledge over ignorance couldn’t ring truer. “It has taken us a hundred years since the birth of quantum mechanics to arrive at this point, where quantum technologies are poised to transform our lives,” says Ghosh.

Ghosh has another reason to celebrate, having been named as this year’s Institute of Physics (IOP) Homi Bhabha lecturer. The IOP and the Indian Physical Association (IPA) jointly host the Homi Bhabha and Cockcroft Walton bilateral exchange of lecturers. Running since 1998, these international programmes aim to promote dialogue on global challenges through physics and provide physicists with invaluable opportunities for global exposure and professional growth. Ghosh’s online lecture, entitled “Illuminating quantum frontiers: from photons to emerging technologies”, will be aired at 3 p.m. GMT on Wednesday 22 October.

From quantum twins to quantum networks

Ghosh’s career in physics took off in the mid-1980s, when she and American physicist Leonard Mandel – who is often referred to as one of the founding fathers of quantum optics – demonstrated a new quantum source of twin photons through spontaneous parametric down-conversion: a process where a high-energy photon splits into two lower-energy, correlated photons (Phys. Rev. A 34 3962).
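The tightness of those correlations follows from conservation laws: in spontaneous parametric down-conversion the pump (p), signal (s) and idler (i) photons must satisfy energy conservation and, via phase matching, approximate momentum conservation,

\omega_p = \omega_s + \omega_i, \qquad \mathbf{k}_p \approx \mathbf{k}_s + \mathbf{k}_i,

so detecting one photon of a pair strongly constrains the frequency and direction of its twin.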

“Before that,” she recalls, “no-one was looking for quantum effects in this nonlinear optical process. The correlations between the photons defied classical explanation. It was an elegant early verification of quantum nonlocality.”

Those entangled photon pairs are now the building blocks of quantum communication and computation. “We’re living through another Diwali of light,” she says, “where theoretical understanding and experimental innovation illuminate each other.”

Entangled light

During Diwali, lamps unite households in a shimmering network of connection, and so too does the entanglement of photons. “Quantum entanglement reminds us that connection transcends locality,” Ghosh says. “In the same way, the lights of Diwali connect us across borders and cultures through shared histories.”

Her own research extends that metaphor further. Ghosh’s team has worked on mapping quantum states of light onto collective atomic excitations. These “slow-light” techniques –  using electromagnetically induced transparency or Raman interactions –  allow photons to be stored and retrieved, forming the backbone of long-distance quantum communication (Opt. Lett. 36 1551).

“Symbolically,” she adds, “it’s like passing the flame from one diya (lamp) to another. We’re not just spreading light –  we’re preserving, encoding and transmitting it. Success comes through connection and collaboration.”

Rupamanjari Ghosh
Beyond the shadows: Ghosh calls for the bright light of inclusivity in science. (Courtesy: Rupamanjari Ghosh)

The dark side of light

Ghosh is quick to note that in quantum physics, “darkness” is far from empty. “In quantum optics, even the vacuum is rich –  with fluctuations that are essential to our understanding of the universe.”

Her group studies the transition from quantum to classical systems, using techniques such as error correction, shielding and coherence-preserving materials. “Decoherence –  the loss of quantum behaviour through environmental interaction –  is a constant threat. To build reliable quantum technologies, we must engineer around this fragility,” Ghosh explains.

There are also human-engineered shadows: some weaknesses in quantum communication devices aren’t due to the science itself – they come from mistakes or flaws in how humans built them. Hackers can exploit these “side channels” to get around security. “Security,” she warns, “is only as strong as the weakest engineering link.”

Beyond the lab, Ghosh finds poetic meaning in these challenges. “Decoherence isn’t just a technical problem –  it helps us understand the arrows of time, why the universe evolves irreversibly. The dark side has its own lessons.”

Lighting every corner

For Ghosh, Diwali’s illumination is also a call for inclusivity in science. “No corner should remain dark,” she says. “Science thrives on diversity. Diverse teams ask broader questions and imagine richer answers. It’s not just morally right – it’s good for science.”

She argues that equity is not sameness but recognition of uniqueness. “Innovation doesn’t come from conformity. Gender diversity, for example, brings varied cognitive and collaborative styles – essential in a field like quantum science, where intuition is constantly stretched.”

The shadows she worries most about are not in the lab, but in academia itself. “Unconscious biases in mentorship or gatekeeping in opportunity can accumulate to limit visibility. Institutions must name and dismantle these hidden shadows through structural and cultural change.”

Her vision of inclusion extends beyond gender. “We shouldn’t think of work and life as opposing realms to ‘balance’,” she says. “It’s about creating harmony among all dimensions of life – work, family, learning, rejuvenation. That’s where true brilliance comes from.”

As the rows of diyas are lit this Diwali, Ghosh’s reflections remind us that light –  whether classical or quantum –  is both a physical and moral force: it connects, illuminates and endures. “Each advance in quantum science,” she concludes, “is another step in the age-old journey from darkness to light.”

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ.

Find out more on our quantum channel.

The post Illuminating quantum worlds: a Diwali conversation with Rupamanjari Ghosh appeared first on Physics World.

Influential theoretical physicist and Nobel laureate Chen-Ning Yang dies aged 103

21 octobre 2025 à 15:31

The Chinese particle physicist Chen-Ning Yang died on 18 October at the age of 103. Yang shared half of the 1957 Nobel Prize for Physics with Tsung-Dao Lee for their theoretical work that overturned the notion that parity is conserved in the weak force – one of the four fundamental forces of nature.

Born on 22 September 1922 in Hefei, China, Yang completed a BSc at the National Southwest Associated University in Kunming in 1942. After finishing an MSc in statistical physics at Tsinghua University two years later, in 1945 he moved to the University of Chicago in the US as part of a government-sponsored programme. He received his PhD in physics in 1948 working under the guidance of Edward Teller.

In 1949 Yang moved to the Institute for Advanced Study in Princeton, where he made pioneering contributions to quantum field theory, working together with Robert Mills. In 1953 they proposed the Yang-Mills theory, which became a cornerstone of the Standard Model of particle physics.

The ‘Wu experiment’

It was also at Princeton where Yang began a fruitful collaboration with Lee, who died last year aged 97. Their work on parity – a property of elementary particles that expresses their behaviour upon reflection in a mirror – led to the duo winning the Nobel prize.

In the early 1950s, physicists had been puzzled by the decays of two subatomic particles, known as tau and theta, which are identical except that the tau decays into three pions with a net parity of -1, while a theta particle decays into two pions with a net parity of +1.

There were two possible explanations: either the tau and theta are different particles, or parity is not conserved in the weak interaction. Yang and Lee proposed various ways to test the latter possibility (Phys. Rev. 104 254).

This “parity violation” was later proved experimentally by, among others, Chien-Shiung Wu at Columbia University. She carried out an experiment based on the radioactive decay of unstable cobalt-60 nuclei into nickel-60 – what became known as the “Wu experiment”. For their work, Yang, who was 35 at the time, shared the 1957 Nobel Prize for Physics with Lee.

Influential physicist

In 1965 Yang moved to Stony Brook University, becoming the first director of the newly founded Institute for Theoretical Physics, which is now known as the C N Yang Institute for Theoretical Physics. During this time he also contributed to advancing science and education in China, setting up the Committee on Educational Exchange with China – a programme that has sponsored some 100 Chinese scholars to study in the US.

In 1997, Yang returned to Beijing where he became an honorary director of the Centre for Advanced Study at Tsinghua University. He then retired from Stony Brook in 1999, becoming a professor at Tsinghua University. During his time in the US, Yang obtained US citizenship, but renounced it in 2015.

More recently, Yang was involved in debates over whether China should build the Circular Electron Positron Collider (CEPC) – a huge 100 km circumference underground collider that would study the Higgs boson in unprecedented detail and be a successor to CERN’s Large Hadron Collider. Yang took a sceptical view, calling it “inappropriate” for a developing country that is still struggling with “more acute issues like economic development and environment protection”.

Yang also expressed concern that the science performed on the CEPC would be just “guess” work, without guaranteed results. “I am not against the future of high-energy physics, but the timing is really bad for China to build such a super collider,” he noted in 2016. “Even if they see something with the machine, it’s not going to benefit the life of Chinese people any sooner.”

Lasting legacy

As well as the Nobel prize, Yang won many other awards such as the US National Medal of Science in 1986, the Einstein Medal in 1995, which is presented by the Albert Einstein Society in Bern, and the American Physical Society’s Lars Onsager Prize in 1990.

“The world has lost one of the most influential physicists of the modern era,” noted Stony Brook president Andrea Goldsmith in a statement. “His legacy will continue through his transformational impact on the field of physics and through the many colleagues and students influenced by his teaching, scholarship and mentorship.”

The post Influential theoretical physicist and Nobel laureate Chen-Ning Yang dies aged 103 appeared first on Physics World.

Precision sensing experiment manipulates Heisenberg’s uncertainty principle

21 octobre 2025 à 10:00

Physicists in Australia and the UK have found a new way to manipulate Heisenberg’s uncertainty principle in experiments on the vibrational mode of a trapped ion. Although still at the laboratory stage, the work, which uses tools developed for error correction in quantum computing, could lead to improvements in ultra-precise sensor technologies like those used in navigation, medicine and even astronomy.

“Heisenberg’s principle says that if two operators – for example, position x and momentum, p – do not commute, then one cannot simultaneously measure both of them to absolute precision,” explains team leader Ting Rei Tan of the University of Sydney’s Nano Institute. “Our result shows that one can instead construct new operators – namely ‘modular position’ x̂ and ‘modular momentum’ p̂. These operators can be made to commute, meaning that we can circumvent the usual limitation imposed by the uncertainty principle.”

The modular measurements, he says, give the true values of the displacements in position and momentum of the particle, provided the displacement is smaller than a specific length l, known as the modular length. In the new work, they measured x̂ = x mod lx and p̂ = p mod lp, where lx and lp are the modular lengths in position and momentum.

“Since the two modular operators x̂ and p̂ commute, this means that they are now bounded by an uncertainty principle where the product is larger than or equal to 0 (instead of the usual ℏ/2),” adds team member Christophe Valahu. “This is how we can use them to sense position and momentum below the standard quantum limit. The catch, however, is that this scheme only works if the signal being measured is within the sensing range defined by the modular lengths.”
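In the standard modular-variable (grid-state) picture – sketched here with conventions that may differ from those used in the paper – the measured quantities and the condition that lets them commute can be written as

$$\hat{x} \equiv x \bmod l_x, \qquad \hat{p} \equiv p \bmod l_p, \qquad \bigl[\,e^{\,i p\, l_x/\hbar},\; e^{\,i x\, l_p/\hbar}\bigr] = 0 \quad \text{when} \quad l_x\, l_p = 2\pi\hbar .$$

Because the two displacement operators commute under this condition, the modular position and momentum can be determined simultaneously; the unavoidable Heisenberg uncertainty is pushed into which lx × lp “cell” the particle occupies, which does not matter so long as the signal stays within a single cell.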

The researchers stress that Heisenberg’s uncertainty principle is in no way “broken” by this approach, but it does mean that when observables associated with these new operators are measured, the precision of these measurements is not limited by this principle. “What we did was to simply push the uncertainty to a sensing range that is relatively unimportant for our measurement to obtain a better precision at finer details,” Valahu tells Physics World.

This concept, Tan explains, is related to an older method known as quantum squeezing that also works by shifting uncertainties around. The difference is that in squeezing, one reshapes the probability, reducing the spread in position at the cost of enlarging the spread of momentum, or vice versa. “In our scheme, we instead redistribute the probability, reducing the uncertainties of position and momentum within a defined sensing range, at the cost of an increased uncertainty if the signal is not guaranteed to lie within this range,” Tan explains. “We effectively push the unavoidable quantum uncertainty to places we don’t care about (that is, big, coarse jumps in position and momentum) so the fine details we do care about can be measured more precisely.

“Thus, as long as we know the signal is small (which is almost always the case for precision measurements), modular measurements give us the correct answer.”

Repurposed ideas and techniques

The particle being measured in Tan and colleagues’ experiment was a 171Yb+ ion trapped in a so-called grid state, which is a subclass of error-correctable logical state for quantum bits, or qubits. The researchers then used a quantum phase estimation protocol to measure the signal they imprinted onto this state, which acts as a sensor.

This measurement scheme is similar to one that is commonly used to measure small errors in the logical qubit state of a quantum computer. “The difference is that in this case, the ‘error’ corresponds to a signal that we want to estimate, which displaces the ion in position and momentum,” says Tan. “This idea was first proposed in a theoretical study.”

Towards ultra-precise quantum sensors

The Sydney researchers hope their result will motivate the development of next-generation precision quantum sensors. Being able to detect extremely small changes is important for many applications of quantum sensing, including navigating environments where GPS isn’t effective (such as on submarines, underground or in space). It could also be useful for biological and medical imaging, materials analysis and gravitational systems.

Their immediate goal, however, is to further improve the sensitivity of their sensor, which is currently about 14 × 10⁻²⁴ N/√Hz, and calculate its limit. “It would be interesting if we could push that to the 10⁻²⁷ N level (which, admittedly, will not be easy) since this level of sensitivity could be relevant in areas like the search for dark matter,” Tan says.

Another direction for future research, he adds, is to extend the scheme to other pairs of observables. “Indeed, we have already taken some steps towards this: in the latter part of our present study, which is published in Science Advances, we constructed a modular number operator and a modular phase operator to demonstrate that the strategy can be extended beyond position and momentum.”

The post Precision sensing experiment manipulates Heisenberg’s uncertainty principle appeared first on Physics World.

Eye implant restores vision to patients with incurable sight loss

20 octobre 2025 à 17:59

A tiny wireless implant inserted under the retina can restore central vision to patients with sight loss due to age-related macular degeneration (AMD). In an international clinical trial, the PRIMA (photovoltaic retina implant microarray) system restored the ability to read in 27 of 32 participants followed up after a year.

AMD is the most common cause of incurable blindness in older adults. In its advanced stage, known as geographic atrophy, AMD can cause progressive, irreversible death of light-sensitive photoreceptors in the centre of the retina. This loss of photoreceptors means that light is not transduced into electrical signals, causing profound vision loss.

The PRIMA system works by replacing these lost photoreceptors. The two-part system includes the implant itself: a 2 × 2 mm array of 378 photovoltaic pixels, plus PRIMA glasses containing a video camera that captures images and, after processing, projects them onto the implant using near-infrared light. The pixels in the implant convert this light into electrical pulses, restoring the flow of visual information to the brain. Patients can use the glasses to focus and zoom the image that they see.

The clinical study, led by Frank Holz of the University of Bonn in Germany, enrolled 38 participants at 17 hospital sites in five European countries. All participants had geographic atrophy due to AMD in both eyes, as well as loss of central sight in the study eye over a region larger than the implant (more than 2.4 mm in diameter), leaving only limited peripheral vision.

Around one month after surgical insertion of the 30 μm-thick PRIMA array into one eye, the patients began using the glasses. All underwent training to learn to interpret the visual signals from the implant, with their vision improving over months of training.

Eye images before and after array implantation
The PRIMA implant Representative fundus and OCT images obtained before and after implantation of the array in a patient’s eye. (Courtesy: Science Corporation)

After one year, 27 of the 32 patients who completed the trial could read letters and words (with some able to read pages in a book) and 26 demonstrated clinically meaningful improvement in visual acuity (the ability to read at least two extra lines on a standard eye chart). On average, participants could read an extra five lines, with one person able to read an additional 12 lines.

Nineteen of the participants experienced side-effects from the surgical procedure, with 95% of adverse events resolving within two months. Importantly, their peripheral vision was not impacted by PRIMA implantation. The researchers note that the infrared light used by the implant is not visible to remaining photoreceptors outside the affected region, allowing patients to combine their natural peripheral vision with the prosthetic central vision.

“Before receiving the implant, it was like having two black discs in my eyes, with the outside distorted,” Sheila Irvine, a trial patient treated at Moorfields Eye Hospital in the UK, says in a press statement. “I was an avid bookworm, and I wanted that back. There was no pain during the operation, but you’re still aware of what’s happening. It’s a new way of looking through your eyes, and it was dead exciting when I began seeing a letter. It’s not simple, learning to read again, but the more hours I put in, the more I pick up. It’s made a big difference.”

The PRIMA system – originally designed by Daniel Palanker at Stanford University – is being developed and manufactured by Science Corporation. Based on these latest results, reported in the New England Journal of Medicine, the company has applied for clinical use authorization in Europe and the United States.

The post Eye implant restores vision to patients with incurable sight loss appeared first on Physics World.

Single-phonon coupler brings different quantum technologies together

20 octobre 2025 à 09:51

Researchers in the Netherlands have demonstrated the first chip-based device capable of splitting phonons, which are quanta of mechanical vibrations. Known as a single-phonon directional coupler, or more simply as a phonon splitter, the new device could make it easier for different types of quantum technologies to “talk” to each other. For example, it could be used to transfer quantum information from spins, which offer advantages for data storage, to superconducting circuits, which may be better for data processing.

“One of the main advantages of phonons over photons is they interact with a lot of different things,” explains team leader Simon Gröblacher of the Kavli Institute of Nanoscience at Delft University of Technology. “So it’s very easy to make them interface with systems.”

There are, however, a few elements still missing from the phononic circuitry developer’s toolkit. One such element is a reversible beam splitter that can either combine two phonon channels (which might be carrying quantum information transferred from different media) or split one channel into two, depending on its orientation.

While several research groups have already investigated designs for such phonon splitters, these works largely focused on surface acoustic waves. This approach has some advantages, as waves of this type have already been widely explored and exploited commercially. Mobile phones, for example, use surface acoustic waves as filters for microwave signals. The problem is that these unconfined mechanical excitations are prone to substantial losses as phonons leak into the rest of the chip.

Mimicking photonic beam splitters

Gröblacher and his collaborators chose instead to mimic the design of beam splitters used in photonic chips. They used a strip of thin silicon to fashion a waveguide for phonons that confined them in all dimensions but one, giving additional control and reducing loss. They then brought two waveguides into contact with each other so that one waveguide could “feel” the mechanical excitations in the other. This allowed phonon modes to be coupled between the waveguides – something the team demonstrated down to the single-phonon level. The researchers also showed they could tune the coupling between the two waveguides by altering the contact length.
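The tuning behaviour mirrors textbook coupled-mode theory for optical directional couplers. As a generic sketch (the device’s actual coupling rate and lengths are not quoted here), the mode amplitudes a₁ and a₂ in two phase-matched waveguides evolve along the contact region as

$$\frac{\mathrm{d}a_1}{\mathrm{d}z} = -i\kappa\, a_2, \qquad \frac{\mathrm{d}a_2}{\mathrm{d}z} = -i\kappa\, a_1,$$

so a phonon launched into waveguide 1 emerges with probabilities $P_1(L)=\cos^2(\kappa L)$ and $P_2(L)=\sin^2(\kappa L)$ after a contact length $L$. Choosing $\kappa L = \pi/4$ gives a 50:50 splitter, while longer or shorter contact lengths shift the splitting ratio.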

Although this is the first demonstration of single-mode phonon coupling in this kind of waveguide, the finite element method simulations Gröblacher and his colleagues ran beforehand made him pretty confident it would work from the outset. “I’m not surprised that it worked. I’m always surprised how hard it is to get it to work,” he tells Physics World. “Making it to look and do exactly what you design it to do – that’s the really hard part.”

Prospects for integrated quantum phononics

According to A T Charlie Johnson, a physicist at the University of Pennsylvania, US whose research focuses on this area, that hard work paid off. “These very exciting new results further advance the prospects for phonon-based qubits in quantum technology,” says Johnson, who was not directly involved in the demonstration. “Integrated quantum phononics is one significant step closer.”

As well as switching between different quantum media, the new single-phonon coupler could also be useful for frequency shifting. For instance, microwave frequencies are close to the frequencies of ambient heat, which makes signals at these frequencies much more prone to thermal noise. Gröblacher already has a company working on transducers to transform quantum information from microwave to optical frequencies with this challenge in mind, and he says a single-phonon coupler could be handy.

One remaining challenge to overcome is dispersion, which occurs when phonon modes couple to other unwanted modes. This is usually due to imperfections in the nanofabricated device, which are hard to avoid. However, Gröblacher also has other aspirations. “I think the one component that’s missing for us to have the similar level of control over phonons as people have with photons is a phonon phase shifter,” he tells Physics World. This, he says, would allow on-chip interferometry to route phonons to different parts of a chip, and perform advanced quantum experiments with phonons.

The study is reported in Optica.

The post Single-phonon coupler brings different quantum technologies together appeared first on Physics World.

This jumping roundworm uses static electricity to attach to flying insects

17 octobre 2025 à 16:30

Researchers in the US have discovered that a tiny jumping worm uses static electricity to increase the chances of attaching to its unsuspecting prey.

The parasitic roundworm Steinernema carpocapsae, which lives in soil, is already known to leap some 25 times its body length into the air. It does this by curling into a loop and springing upwards, rotating hundreds of times a second.

If the nematode lands successfully, it releases bacteria that kill the insect within a couple of days, after which the worm feasts on the carcass and lays its eggs. If it fails to attach to a host, however, it faces death itself.

While static electricity plays a role in how some non-parasitic nematodes detach from large insects, little is known about whether static electricity helps their parasitic counterparts attach to an insect.

To investigate, researchers at Emory University and the University of California, Berkeley conducted a series of experiments in which they used high-speed microscopy to film the worms as they leapt onto a fruit fly.

They did this by tethering a fly with a copper wire that was connected to a high-voltage power supply.

They found that a potential of a few hundred volts – similar to that generated in the wild by an insect’s wings rubbing against ions in the air – induces a negative charge on the worm, creating an attractive force between it and the positively charged fly.

Carrying out simulations of the worm jumps, they found that without any electrostatics, only 1 in 19 worm trajectories successfully reached their target. The greater the voltage, however, the greater the chance of landing. For 880 V, for example, the probability was 80%.
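The flavour of such trajectory simulations can be captured with a deliberately simplified Monte Carlo sketch – illustrative only, not the authors’ model, with every parameter value a placeholder – in which a ballistic “worm” either is or is not pulled towards an oppositely charged target:

```python
# Illustrative Monte Carlo sketch only -- not the authors' simulation.
# All parameter values below are placeholders chosen to make the effect visible.
import numpy as np

rng = np.random.default_rng(1)

K_COULOMB = 8.99e9       # Coulomb constant, N m^2 C^-2
G = 9.81                 # gravitational acceleration, m s^-2
MASS = 1e-9              # "worm" mass in kg (placeholder)
TARGET = np.array([5e-3, 2e-3])   # "fly" position in m (placeholder)
CAPTURE_RADIUS = 0.5e-3  # a pass within this distance counts as attachment


def hit_fraction(q_worm, q_fly, n_trials=200, dt=5e-5, max_steps=4000):
    """Fraction of random jumps that pass within CAPTURE_RADIUS of the target."""
    hits = 0
    for _ in range(n_trials):
        angle = rng.uniform(np.radians(30), np.radians(80))
        speed = rng.normal(0.9, 0.1)   # launch speed in m/s (placeholder)
        pos = np.zeros(2)
        vel = speed * np.array([np.cos(angle), np.sin(angle)])
        for _ in range(max_steps):
            r = TARGET - pos
            dist = np.linalg.norm(r)
            if dist < CAPTURE_RADIUS:
                hits += 1
                break
            # gravity plus Coulomb attraction towards the oppositely charged fly
            f_el = K_COULOMB * abs(q_worm * q_fly) / dist**2 * (r / dist)
            vel += (np.array([0.0, -G]) + f_el / MASS) * dt
            pos += vel * dt
            if pos[1] < -1e-4:         # fell back to the ground: a miss
                break
    return hits / n_trials


print("uncharged:", hit_fraction(q_worm=0.0, q_fly=0.0))
print("charged  :", hit_fraction(q_worm=-1e-12, q_fly=2e-10))
```

With the attraction switched on, trajectories that would otherwise narrowly miss the target are bent towards it, so the hit fraction rises – the same qualitative trend the researchers report, though their simulations used measured charges and worm kinematics rather than these toy values.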

The team also carried out experiments using a wind tunnel, finding that the presence of wind helped the nematodes drift and this also increased their chances of attaching to the insect.

“Using physics, we learned something new and interesting about an adaptive strategy in an organism,” notes Emory physicist Ranjiangshang Ran. “We’re helping to pioneer the emerging field of electrostatic ecology.”

The post This jumping roundworm uses static electricity to attach to flying insects appeared first on Physics World.

Wearable UVA sensor warns about overexposure to sunlight

17 octobre 2025 à 10:09
Illustration showing the operation of the UVA detector
Transparent healthcare Illustration of the fully transparent sensor that reacts to sunlight and allows real-time monitoring of UVA exposure on the skin. The device could be integrated into wearable items, such as glasses or patches. (Courtesy: Jnnovation Studio)

A flexible and wearable sensor that allows the user to monitor their exposure to ultraviolet (UV) radiation has been unveiled by researchers in South Korea. Based on a heterostructure of four different oxide semiconductors, the sensor’s flexible, transparent design could vastly improve the real-time monitoring of skin health.

UV light in the A band has wavelengths of 315–400 nm and comprises about 95% of the UV radiation that reaches the Earth’s surface. Because of its relatively long wavelength, UVA can penetrate deep into the skin. There it can alter biological molecules, damaging tissue and even causing cancer.

While covering up with clothing and using sunscreen are effective at reducing UVA exposure, researchers are keen on developing wearable sensors that can monitor UVA levels in real time. These can alert users when their UVA exposure reaches a certain level. So far, the most promising advances towards these designs have come from oxide semiconductors.

Many challenges

“For the past two decades, these materials have been widely explored for displays and thin-film transistors because of their high mobility and optical transparency,” explains Seong Jun Kang at Soongsil University, who led the research. “However, their application to transparent ultraviolet photodetectors has been limited by high persistent photocurrent, poor UV–visible discrimination, and instability under sunlight.”

While these problems can be avoided in more traditional UV sensor materials, such as gallium nitride and zinc oxide, those materials are opaque and rigid – making them unsuitable for use in wearable sensors.

In their study, Kang’s team addressed these challenges by introducing a multi-junction heterostructure, made by stacking multiple ultrathin layers of different oxide semiconductors. The four semiconductors they selected each had wide bandgaps, which made them more transparent in the visible spectrum but responsive to UV light.

The structure included zinc and tin oxide layers as n-type semiconductors (doped with electron-donating atoms) and cobalt and hafnium oxide layers as p-type semiconductors (doped with electron-accepting atoms) – creating positively charged holes. Within the heterostructure, this selection created three types of interface: p–n junctions between hafnium and tin oxide; n–n junctions between tin and zinc oxide; and p–p junctions between cobalt and hafnium oxide.

Efficient transport

When the team illuminated their heterostructure with UVA photons, the electron–hole charge separation was enhanced by the p–n junction, while the n–n and p–p junctions allowed for more efficient transport of electrons and holes respectively, improving the design’s response speed. When the illumination was removed, the electron–hole pairs could quickly decay, avoiding any false detections.

To test their design’s performance, the researchers integrated their heterostructure into a wearable detector. “In collaboration with UVision Lab, we developed an integrated Bluetooth circuit and smartphone application, enabling real-time display of UVA intensity and warning alerts when an individual’s exposure reaches the skin-type-specific minimal erythema dose (MED),” Kang describes. “When connected to the Bluetooth circuit and smartphone application, it successfully tracked real-time UVA variations and issued alerts corresponding to MED limits for various skin types.”
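As a rough indication of what such an alert layer might look like on the phone side – a minimal sketch with hypothetical threshold values, not UVision Lab’s actual application – the cumulative UVA dose can be integrated from the sensor readings and compared against a skin-type-specific limit:

```python
# Hypothetical sketch (not the actual app): integrate UVA irradiance readings
# into a cumulative dose and alert at a skin-type-specific threshold.
# The MED values below are illustrative placeholders, not clinical guidance.
from dataclasses import dataclass

MED_UVA_J_PER_M2 = {   # placeholder thresholds per Fitzpatrick skin type
    "I": 2.0e4, "II": 2.5e4, "III": 3.0e4, "IV": 4.5e4, "V": 6.0e4, "VI": 8.0e4,
}

@dataclass
class UVADoseTracker:
    skin_type: str
    dose_j_per_m2: float = 0.0

    def add_reading(self, irradiance_w_per_m2: float, seconds: float) -> bool:
        """Accumulate dose from one sensor reading; return True once the
        skin-type-specific MED has been reached."""
        self.dose_j_per_m2 += irradiance_w_per_m2 * seconds
        return self.dose_j_per_m2 >= MED_UVA_J_PER_M2[self.skin_type]

tracker = UVADoseTracker(skin_type="II")
# e.g. a 30 W/m^2 UVA reading arriving every 60 s over the Bluetooth link
for reading in [30.0] * 20:
    if tracker.add_reading(reading, seconds=60.0):
        print("UVA alert: MED reached for skin type", tracker.skin_type)
        break
```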

As well as maintaining over 80% transparency, the sensor proved highly stable and responsive, even in direct outdoor sunlight and across repeated exposure cycles. Based on this performance, the team is now confident that their design could push the capabilities of oxide semiconductors beyond their typical use in displays and into the fast-growing field of smart personal health monitoring.

“The proposed architecture establishes a design principle for high-performance transparent optoelectronics, and the integrated UVA-alert system paves the way for next-generation wearable and Internet-of-things-based environmental sensors,” Kang predicts.

The research is described in Science Advances.

The post Wearable UVA sensor warns about overexposure to sunlight appeared first on Physics World.

Astronauts could soon benefit from dissolvable eye insert

16 octobre 2025 à 15:41

Spending time in space has a big impact on the human body and can cause a range of health issues. Many astronauts develop vision problems because microgravity causes body fluids to redistribute towards the head. This can lead to swelling in the eye and compression of the optic nerve.

While eye conditions can generally be treated with medication, delivering drugs in space is not a straightforward task. Eye drops simply don’t work without gravity, for example. To address this problem, researchers in Hungary are developing a tiny dissolvable eye insert that could deliver medication directly to the eye. The size of a grain of rice, the insert has now been tested by an astronaut on the International Space Station.

This episode of the Physics World Weekly podcast features two of those researchers – Diána Balogh-Weiser of Budapest University of Technology and Economics and Zoltán Nagy of Semmelweis University – who talk about their work with Physics World’s Tami Freeman.

The post Astronauts could soon benefit from dissolvable eye insert appeared first on Physics World.

Scientists obtain detailed maps of earthquake-triggering high-pressure subsurface fluids

16 octobre 2025 à 13:00

Researchers in Japan and Taiwan have captured three-dimensional images of an entire geothermal system deep in the Earth’s crust for the first time. By mapping the underground distribution of phenomena such as fracture zones and phase transitions associated with seismic activity, they say their work could lead to improvements in earthquake early warning models. It could also help researchers develop next-generation versions of geothermal power – a technology that study leader Takeshi Tsuji of the University of Tokyo says has enormous potential for clean, large-scale energy production.

“With a clear three-dimensional image of where supercritical fluids are located and how they move, we can identify promising drilling targets and design safer and more efficient development plans,” Tsuji says. “This could have direct implications for expanding geothermal power generation, reducing dependence on fossil fuels, and contributing to carbon neutrality and energy security in Japan and globally.”

In their study, Tsuji and colleagues focused on a region known as the brittle-ductile transition zone, which is where rocks go from being seismically active to mostly inactive. This zone is important for understanding volcanic activity and geothermal processes because it lies near an impermeable sealing band that allows fluids such as water to accumulate in a high-pressure, supercritical state. When these fluids undergo phase transitions, earthquakes may follow. However, such fluids could also produce more geothermal energy than conventional systems. Identifying their location is therefore important for this reason, too.

A high-resolution “digital map”

Many previous electromagnetic and magnetotelluric surveys suffered from low spatial resolution and were limited to regions relatively close to the Earth’s surface. In contrast, the techniques used in the latest study enabled Tsuji and colleagues to create a clear high-resolution “digital map” of deep geothermal reservoirs – something that has never been achieved before.

To make their map, the researchers used three-dimensional multichannel seismic surveys to image geothermal structures in the Kuju volcanic group, which is located on the Japanese island of Kyushu. They then analysed these images using a method they developed known as extended Common Reflection Surface (CRS) stacking. This allowed them to visualize deeper underground features such as magma-related structures, fracture-controlled fluid pathways and rock layers that “seal in” supercritical fluids.

“In addition to this, we applied advanced seismic tomography and machine-learning based analyses to determine the seismic velocity of specific structures and earthquake mechanisms with high accuracy,” explains Tsuji. “It was this integrated approach that allowed us to image a deep geothermal system in unprecedented detail.” He adds that the new technique is also better suited to mountainous geothermal regions where limited road access makes it hard to deploy the seismic sources and receivers used in conventional surveys.

A promising site for future supercritical geothermal energy production

Tsuji and colleagues chose to study the Kuju area because it is home to several volcanoes that were active roughly 1600 years ago and have erupted intermittently in recent years. The region also hosts two major geothermal power plants, Hatchobaru and Otake. The former has a capacity of 110 MW and is the largest geothermal facility in Japan.

The heat source for both plants is thought to be located beneath Mt Kuroiwa and Mt Sensui, and the region is considered a promising site for supercritical geothermal energy production. Its geothermal reservoir appears to consist of water that initially fell as precipitation (so-called meteoric water) and was heated underground before migrating westward through the fault system. Until now, though, no detailed images of the magmatic structures and fluid pathways had been obtained.

Tsuji says he has long wondered why geothermal power is not more widely used in Japan, despite the country’s abundant volcanic and thermal resources. “Our results now provide the scientific and technical foundation for next-generation supercritical geothermal power,” he tells Physics World.

The researchers now plan to try out their technique using portable seismic sources and sensors deployed in mountainous areas (not just along roads) to image the shallower parts of geothermal systems in greater detail as well. “We also plan to extend our surveys to other geothermal fields to test the general applicability of our method,” Tsuji says. “Ultimately, our goal is to provide a reliable scientific basis for the large-scale deployment of supercritical geothermal power as a sustainable energy source.”

The present work is detailed in Communications Earth & Environment.

The post Scientists obtain detailed maps of earthquake-triggering high-pressure subsurface fluids appeared first on Physics World.

Researchers visualize blood flow in pulsating artificial heart

16 octobre 2025 à 10:00

A research team in Sweden has used real-time imaging technology to visualize the way that blood pumps around a pulsating artificial heart – moving medicine one step closer to the safe use of such devices in people waiting for donor transplants.

The Linköping University (LiU) team used 4D flow MRI to examine the internal processes of a mechanical heart prototype created by Västerås-based technology company Scandinavian Real Heart. The researchers evaluated blood flow patterns and compared them with similar measurements taken in a native human heart, outlining their results in Scientific Reports.

“As the pulsatile total artificial heart contains metal parts, like the motor, we used 3D printing [to replace most metal parts] and a physiological flow loop so we could run it in the MRI scanner under representable conditions,” says first author Twan Bakker, a PhD student at the Center for Medical Image Science and Visualization at LiU.

No elevated risk

According to Bakker, this is the first time that a 3D-printed MRI-compatible artificial heart has been built and successfully evaluated using 4D flow MRI. The team was pleased to discover that the results corroborate the findings of previous computational fluid dynamics simulations indicating “low shear stress and low stagnation”. Overall flow patterns also suggest there is no elevated risk for blood complications compared with hearts in healthy humans and those suffering from valvular disease.

“[The] patterns of low blood flow, a risk for thrombosis, were in the same range as for healthy native human hearts. Patterns of turbulent flow, a risk for activation of blood platelets, which can contribute to thrombosis, were lower than those found in patients with valvular disease,” says Bakker.

“4D flow MRI allows us to measure the flow field without altering the function of the total artificial heart, which is therefore a valuable tool to complement computer simulations and blood testing during the development of the device. Our measurements provided valuable information to the design team that could improve the artificial heart prototype further,” he adds.

Improved diagnostics

A key advantage of 4D flow MRI over alternative measurement techniques – such as particle image velocimetry and laser Doppler anemometry – is that it doesn’t require the creation of a fully transparent model. This is an important distinction for Bakker, since some components in the artificial heart are made with materials possessing unique mechanical properties, meaning that replication in a see-through version would be extremely challenging.

Visualizing blood flow The central image shows a representation of the full cardiac cycle in the artificial heart, with circulating flow patterns in various locations highlighted at specified time points. (Courtesy: CC BY 4.0/Sci. Rep. 10.1038/s41598-025-18422-y)

“With 4D flow MRI we had to move the motor away from the scanner bore, but the material in contact with the blood and the motion of the device remained as the original design,” says Bakker.

According to Bakker, the velocity measurements can also be used for the visualization and analysis of hemodynamic parameters – such as turbulent kinetic energy and wall shear stress – both in the heart and in the larger vessels of the body.
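For context, these two parameters are typically defined as follows in the 4D flow MRI literature (standard definitions, not specific to this study):

$$\mathrm{TKE} = \frac{\rho}{2}\sum_{i=1}^{3}\sigma_i^{2}, \qquad \tau_{\rm w} = \mu \left.\frac{\partial u_{\parallel}}{\partial n}\right|_{\rm wall},$$

where ρ is the blood density, σᵢ are the standard deviations of the fluctuating velocity components along the three encoding directions, μ is the dynamic viscosity and ∂u∥/∂n is the gradient of the wall-parallel velocity along the wall normal.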

“By studying the flow dynamics in patients and healthy subjects, we can better understand its role in health and disease, which can then support improved diagnostics, interventions and surgical therapies,” he explains.

Moving forward, Bakker says that the research team will continue to evaluate the improved heart design, which was recently granted designation as a Humanitarian Use Device (HUD) by the US Food and Drug Administration (FDA).

“This makes it possible to apply for designation as a Humanitarian Device Exemption (HDE) – which may grant the device limited marketing rights and paves the way for the pre-clinical and clinical studies,” he says.

“In addition, we are currently developing tools to compute blood flow using simulations. This may provide us with a deeper understanding of the mechanisms that cause the formation of thrombosis and haemolysis,” he tells Physics World.

The post Researchers visualize blood flow in pulsating artificial heart appeared first on Physics World.

Evo CT-Linac eases access to online adaptive radiation therapy

15 octobre 2025 à 14:15

Adaptive radiation therapy (ART) is a personalized cancer treatment in which a patient’s treatment plan can be updated throughout their radiotherapy course to account for any anatomical variations – either between fractions (offline ART) or immediately prior to dose delivery (online ART). Using high-fidelity images to enable precision tumour targeting, ART improves outcomes while reducing side effects by minimizing healthy tissue dose.

Elekta, the company behind the Unity MR-Linac, believes that in time, all radiation treatments will incorporate ART as standard. Towards this goal, it brings its broad knowledge base from the MR-Linac to the new Elekta Evo, a next-generation CT-Linac designed to improve access to ART. Evo incorporates AI-enhanced cone-beam CT (CBCT), known as Iris, to provide high-definition imaging, while its Elekta ONE Online software automates the entire workflow, including auto-contouring, plan adaptation and end-to-end quality assurance.

A world first

In February of this year, Matthias Lampe and his team at the private centre DTZ Radiotherapy in Berlin, Germany became the first in the world to treat patients with online ART (delivering daily plan updates while the patient is on the treatment couch) using Evo. “To provide proper tumour control you must be sure to hit the target – for that, you need online ART,” Lampe tells Physics World.

The team at DTZ Radiotherapy
Initiating online ART The team at DTZ Radiotherapy in Berlin treated the first patient in the world using Evo. (Courtesy: Elekta)

The ability to visualize and adapt to daily anatomy enables reduction of the planning target volume, increasing safety for nearby organs-at-risk (OARs). “It is highly beneficial for all treatments in the abdomen and pelvis,” says Lampe. “My patients with prostate cancer report hardly any side effects.”

Lampe selected Evo to exploit the full flexibility of its C-arm design. He notes that for the increasingly prevalent hypofractionated treatments, a C-arm configuration is essential. “CT-based treatment planning and AI contouring opened up a new world for radiation oncologists,” he explains. “When Elekta designed Evo, they enabled this in an achievable way with an extremely reliable machine. The C-arm linac is the primary workhorse in radiotherapy, so you have the best of everything.”

Time considerations

While online ART can take longer than conventional treatments, Evo’s use of automation and AI limits the additional time requirement to just five minutes – increasing the overall workflow from 12 to 17 minutes and remaining within the clinic’s standard time slots.

Patient being set up on an Elekta treatment system
Elekta Evo Evo is a next-generation CT-Linac designed to improve access to adaptive radiotherapy. (Courtesy: Elekta)

The workflow begins with patient positioning and CBCT imaging, with Evo’s AI-enhanced Iris imaging significantly improving image quality, crucial when performing ART. The radiation therapist then matches the cone-beam and planning CTs and performs any necessary couch shift.

Simultaneously, Elekta ONE Online performs AI auto-contouring of OARs, which are reviewed by the physician, and the target volume is copied in. The physicist then simulates the dose distribution on the new contours, followed by a plan review. “Then you can decide whether to adapt or not,” says Lampe. “This is an outstanding feature.” The final stage is treatment delivery and online dosimetry.

When DTZ Berlin first began clinical treatments with Evo, some of Lampe’s colleagues were apprehensive as they were attached to the conventional workflow. “But now, with CBCT providing the chance to see what will be treated, every doctor on my team has embraced the shift and wouldn’t go back,” he says.

The first treatments were for prostate cancer, a common indication that’s relatively easy to treat. “I also thought that if the Elekta ONE workflow struggled, I could contour this on my own in a minute,” says Lampe. “But this was never necessary, the process is very solid. Now we also treat prostate cancer patients with lymph node metastases and those with relapse after radiotherapy. It’s a real success story.”

Lampe says that older and frailer patients may benefit the most from online ART, pointing out that while published studies often include relatively young, healthy patients, “our patients are old, they have chronic heart disease, they’re short of breath”.

For prostate cancer, for example, patients are instructed to arrive with a full bladder and an empty rectum. “But if a patient is in his eighties, he may not be able to do this and the volumes will be different every day,” Lampe explains. “With online adaptive, you can tell patients: ‘if this is not possible, we will handle it, don’t stress yourself’. They are very thankful.”

Making ART available to all

At UMC Utrecht in the Netherlands, the radiotherapy team has also added CT-Linac online adaptive to its clinical toolkit.

UMC Utrecht is renowned for its development of MR-guided radiotherapy, with physicists Bas Raaymakers and Jan Lagendijk pioneering the development of a hybrid MR-Linac. “We come from the world of MR-guidance, so we know that ART makes sense,” says Raaymakers. “But if we only offer MR-guided radiotherapy, we miss out on a lot of patients. We wanted to bring it to the wider community.”

The radiotherapy team at UMC Utrecht
ART for all The radiotherapy team at UMC Utrecht in the Netherlands has added CT-Linac online adaptive to its clinical toolkit. (Courtesy: UMC Utrecht)

At the time of speaking to Physics World, the team was treating its second patient with CBCT-guided ART, and had delivered about 30 fractions. Both patients were treated for bladder cancer, with future indications to explore including prostate, lung and breast cancers and bone metastases.

“We believe in ART for all patients,” says medical physicist Anette Houweling. “If you have MR and CT, you should be able to choose the optimal treatment modality based on image quality. For below the diaphragm, this is probably MR, while for the thorax, CT might be better.”

Ten minute target for OART

Houweling says that ART delivery has taken 19 minutes on average. “We record the CBCT, perform image fusion and then the table is moved, that’s all standard,” she explains. “Then the adaptive part comes in: delineation on the CBCT and creating a new plan with Elekta ONE Planning as part of Elekta One Online.”

The plan adaptation, when selected to perform, takes roughly four minutes to create a clinical-grade volumetric-modulated arc therapy (VMAT) plan. With the soon to be installed next-generation optimizer, it is expected to take less than one minute to generate a VMAT plan.

“As you start with the regular workflow, you can still decide not to choose adaptive treatment, and do a simple couch shift, up until the last second,” says Raaymakers. “It’s very close to the existing workflow, which makes adoption easier. Also, the treatment slots are comparable to standard slots. Now with CBCT it takes 19 minutes and we believe we can get towards 10. That’s one of the drivers for cone-beam adaptive.”

Shorter treatment times will impact the decision as to which patients receive ART. If fully automated adaptive treatment is deliverable in a 10-minute time slot, it could be available to all patients. “From the physics side, our goal is to have no technological limitations to delivering ART. Then it’s up to the radiation oncologists to decide which patients might benefit,” Raaymakers explains.

Future gazing

Looking to the future, Raaymakers predicts that simulation-free radiotherapy will be adopted for certain standard treatments. “Why do you need days of preparation if you can condense the whole process to the moment when the patient is on the table,” he says. “That would be very much helped by online ART.”

“Scroll forward a few years and I expect that ART will be automated and fast such that the user will just sign off the autocontours and plan in one, maybe tune a little, and then go ahead,” adds Houweling. “That will be the ultimate goal of ART. Then there’s no reason to perform radiotherapy the traditional way.”

The post Evo CT-Linac eases access to online adaptive radiation therapy appeared first on Physics World.

Jesper Grimstrup’s The Ant Mill: could his anti-string-theory rant do string theorists a favour?

15 octobre 2025 à 12:00

Imagine you had a bad breakup in college. Your ex-partner is furious and self-publishes a book that names you in its title. You’re so humiliated that you only dimly remember this ex, though the book’s details and anecdotes ring true.

According to the book, you used to be inventive, perceptive and dashing. Then you started hanging out with the wrong crowd, and became competitive, self-involved and incapable of true friendship. Your ex struggles to turn you around; failing, they leave. The book, though, is so over-the-top that by the end you stop cringing and find it a hoot.

That’s how I think most Physics World readers will react to The Ant Mill: How Theoretical High-energy Physics Descended into Groupthink, Tribalism and Mass Production of Research. Its author and self-publisher is the Danish mathematician-physicist Jesper Grimstrup, whose previous book was Shell Beach: the Search for the Final Theory.

After receiving his PhD in theoretical physics at the Technical University of Vienna in 2002, Grimstrup writes, he was “one of the young rebels” embarking on “a completely unexplored area” of theoretical physics, combining elements of loop quantum gravity and noncommutative geometry. But there followed a decade of rejected articles and lack of opportunities.

Grimstrup became “disillusioned, disheartened, and indignant” and in 2012 left the field, selling his flat in Copenhagen to finance his work. Grimstrup says he is now a “self-employed researcher and writer” who lives somewhere near the Danish capital. You can support him either through Ko-fi or Paypal.

Fomenting fear

The Ant Mill opens with a copy of the first page of the letter that Grimstrup’s fellow Dane Niels Bohr sent in 1917 to the University of Copenhagen successfully requesting a four-storey building for his physics institute. Grimstrup juxtaposes this incident with the rejection of his funding request, almost a century later, by the Danish Council for Independent Research.

Today, he writes, theoretical physics faces a situation “like the one it faced at the time of Niels Bohr”, but structural and cultural factors have severely hampered it, making it impossible to pursue promising new ideas. These include Grimstrup’s own “quantum holonomy theory, which is a candidate for a fundamental theory”. The Ant Mill is his diagnosis of how this came about.

The Standard Model of particle physics, according to Grimstrup, is dominated by influential groups that squeeze out other approaches.

A major culprit, in Grimstrup’s eyes, was the Standard Model of particle physics. That completed a structure for which theorists were trained to be architects and should have led to the flourishing of a new crop of theoretical ideas. But it had the opposite effect. The field, according to Grimstrup, is now dominated by influential groups that squeeze out other approaches.

The biggest and most powerful is string theory, with loop quantum gravity its chief rival. Neither member of the coterie can make testable predictions, yet because they control jobs, publications and grants they intimidate young researchers and create what Grimstrup calls an “undercurrent of fear”. (I leave assessment of this claim to young theorists.)

Half the chapters begin with an anecdote in which Grimstrup describes an instance of rejection by a colleague, editor or funding agency. In the book’s longest chapter Grimstrup talks about his various rejections – by the Carlsberg Foundation, The European Physics Journal C, International Journal of Modern Physics A, Classical and Quantum Gravity, Reports on Mathematical Physics, Journal of Geometry and Physics, and the Journal of Noncommutative Geometry.

Grimstrup says that the reviewers and editors of these journals told him that his papers variously lacked concrete physical results, were exercises in mathematics, seemed the same as other papers, or lacked “relevance and significance”. Grimstrup sees this as the coterie’s handiwork, for such journals are full of string theory papers open to the same criticism.

“Science is many things,” Grimstrup writes at the end. “[S]imultaneously boring and scary, it is both Indiana Jones and anonymous bureaucrats, and it is precisely this diversity that is missing in the modern version of science”. What the field needs is “courage…hunger…ambition…unwillingness to compromise…anarchy”.

Grimstrup hopes that his book will have an impact, helping to inspire young researchers to revolt, and to make all the scientific bureaucrats and apparatchiks and bookkeepers and accountants “wake up and remember who they truly are”.

The critical point

The Ant Mill is an example of what I have called “rant literature” or rant-lit. Evangelical, convinced that exposing truth will make sinners come to their senses and change their evil ways, rant-lit can be fun to read, for it is passionate and full of florid metaphors.

Theoretical physicists, Grimstrup writes, have become “obedient idiots” and “technicians”. He slams theoretical physics for becoming a “kingdom”, a “cult”, a “hamster wheel” and an “ant mill”, in which the ants march around in a pre-programmed “death spiral”.

Grimstrup hammers away at theories lacking falsifiability, but his vehemence invites you to ask: “Is falsifiability really the sole criterion for deciding whether to accept or fail to pursue a theory?”

An attentive reader, however, may come away with a different lesson. Grimstrup calls falsifiability the “crown jewel of the natural sciences” and hammers away at theories lacking it. But his vehemence invites you to ask: “Is falsifiability really the sole criterion for deciding whether to accept or fail to pursue a theory?”

In his 2013 book String Theory and the Scientific Method, for instance, the Stockholm University philosopher of science Richard Dawid suggested rescuing the scientific status of string theory by adding such non-empirical criteria to evaluating theories as clarity, coherence and lack of alternatives. It’s an approach that both rescues the formalistic approach to the scientific method and undermines it.

Dawid, you see, is making the formalism follow the practice rather than the other way around. In other words, he is able to reformulate how we make theories because he already knows how theorizing works – not because he only truly knows what it is to theorize after he gets the formalism right.

Grimstrup’s rant, too, might remind you of the birth of the Yang–Mills theory in 1954. Developed by Chen Ning Yang and Robert Mills, it was a theory of nuclear binding that integrated much of what was known about elementary particle theory but implied the existence of massless force-carrying particles that then were known not to exist. In fact, at one seminar Wolfgang Pauli unleashed a tirade against Yang for proposing so obviously flawed a theory.

The theory, however, became central to theoretical physics two decades later, after theorists learned more about the structure of the world. The Yang-Mills story, in other words, reveals that theory-making does not always conform to formal strictures and does not always require a testable prediction. Sometimes it just articulates the best way to make sense of the world apart from proof or evidence.

The lesson I draw is that becoming the target of a rant might not always make you feel repentant and ashamed. It might inspire you into deep reflection on who you are in a way that is insightful and vindicating. It might even make you more rather than less confident about why you’re doing what you’re doing.

Your ex, of course, would be horrified.

The post Jesper Grimstrup’s <em>The Ant Mill</em>: could his anti-string-theory rant do string theorists a favour? appeared first on Physics World.

Further evidence for evolving dark energy?

15 octobre 2025 à 11:34

Dark energy – a term first used in 1998 – is a proposed form of energy that affects the universe on the largest scales. Its primary effect is to drive the accelerating expansion of the universe – an observation that was awarded the 2011 Nobel Prize in Physics.

Dark energy is now a well established concept and forms a key part of the standard model of Big Bang cosmology, the Lambda-CDM model.

The trouble is, we’ve never really been able to explain exactly what dark energy is, or why it has the value that it does.

Even worse, new data acquired by cutting-edge telescopes have suggested that dark energy might not even exist as we had imagined it.

This is where the new work by Mukherjee and Sen comes in. They combined two of these datasets, while making as few assumptions as possible, to understand what’s going on.

The first of these datasets came from baryon acoustic oscillations. These are patterns in the distribution of matter in the universe, created by sound waves in the early universe.

The second dataset is based on a survey of supernova data from the last five years. Both sets of data can be used to track the expansion history of the universe by measuring distances at different snapshots in time.
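To see how such distance measurements probe dark energy, it helps to write down the standard expansion history – shown here with the commonly used CPL parametrization of an evolving equation of state, which is a benchmark rather than the specific minimal-assumption reconstruction used by Mukherjee and Sen:

$$H^{2}(z) = H_0^{2}\!\left[\Omega_{\rm m}(1+z)^{3} + \Omega_{\rm DE}\exp\!\left(3\!\int_0^{z}\frac{1+w(z')}{1+z'}\,\mathrm{d}z'\right)\right], \qquad w(z) = w_0 + w_a\,\frac{z}{1+z},$$

with the luminosity distance $d_L(z) = (1+z)\,c\int_0^z \mathrm{d}z'/H(z')$ in a flat universe. The cosmological constant of Lambda-CDM corresponds to $w_0=-1$, $w_a=0$; departures from those values at low redshift are what “evolving dark energy” refers to.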

The team’s results are in tension with the Lambda-CDM model at low redshifts. Put simply, the results disagree with the current model at recent times. This provides further evidence for the idea that dark energy, previously considered to have a constant value, is evolving over time.

Evolving dark energy
The tension in the expansion rate is most evident at low redshifts (Courtesy: P. Mukherjee)

This is far from the end of the story for dark energy. New observational data and new analyses such as this one are urgently required to provide a clearer picture.

However, where there’s uncertainty, there’s opportunity. Understanding dark energy could hold the key to understanding quantum gravity, the Big Bang and the ultimate fate of the universe.

The post Further evidence for evolving dark energy? appeared first on Physics World.
