
There’s an elephant in the room at the Royal Society – and for once, it’s not (just) Elon Musk

Just over a week ago, US President Donald Trump released a budget proposal that would, if enacted, eviscerate science research across the country. Among other cuts, it proposes a 57% drop (relative to 2024) in funding for the National Science Foundation (NSF), which provides the lion’s share of government support for basic science. Within this, the NSF’s physics and mathematics directorate stands to lose more than a billion dollars, or 67% of its funding. And despite the past closeness between Trump and SpaceX boss Elon Musk, NASA faces cuts of 24%, including 50% of its science budget.

Of course, the US is not the only nation that funds scientific research, any more than NASA is the only agency that sends spacecraft to explore the cosmos. Still, both are big enough players (and big enough partners for the UK) that I expected these developments to feature at least briefly at last Tuesday’s Royal Society conference on the future of the UK space sector.

During the conference’s opening session, it occasionally seemed like David Parker, a former chief executive of the UK Space Agency (UKSA) who now works for the European Space Agency (ESA), might say a few words on the subject. His opening remarks focused on lessons the UK could learn from the world’s other space agencies, including NASA under the first Trump administration. At one point, he joked that all aircraft have four dimensions: span, length, height and politics. But as for the politics that threaten NASA in Trump’s second administration, Parker was silent.

Let’s talk about something else

This silence continued throughout the morning. All told, 19 speakers filed on and off the stage at the Royal Society’s London headquarters without so much as mentioning what the Nobel-Prize-winning astrophysicist Adam Riess called an “almost extinction level” event for research in their field.

The most surreal omission was in a talk by Sheila Rowan, a University of Glasgow astrophysicist and past president of the Institute of Physics (which publishes Physics World). Rowan was instrumental in the 2015 detection of gravitational waves at the US-based Laser Interferometer Gravitational-Wave Observatory (LIGO), and her talk focused on gravitational-wave research. Despite this, she did not mention that Trump’s budget would eliminate funding for one of the two LIGO detectors, drastically reducing the research LIGO can do.

When I contacted Rowan to ask why this was, she replied that she had prepared her talk before the budget was announced. The conference, she added, was “a great example of how fantastic science benefits not just the UK, but society more broadly, and globally, and that is a message we must never stop explaining”.

What’s at stake

Rowan ended her talk on a similarly positive note, with hopeful words about the future. “The things that will fly in 2075, we are thinking about now,” she said.

In some cases, that may be true. However, if Trump’s budget passes both houses of the US Congress (the House of Representatives has already passed a bill that would enact most of the administration’s wishes), the harsh reality is that many things space scientists are thinking about will never fly at all.

Over at Astrobites, a site where PhD students write about astronomy and astrophysics for undergraduates, Arizona State University student Skylar Grayson compiled a depressingly long list of threatened missions. Like other graphics that have circulated on social media since the budget announcement, Grayson’s graphic places red X’s – indicating missions that are “fully dead” under the new budget – over dozens of projects. Affected missions range from well-known workhorses like Mars Orbiter and New Horizons to planning-stage efforts like the next-generation Earth-observing satellite Landsat Next. According to Landsat Next’s live-at-the-time-of-writing NASA webpage, it is expected to launch no earlier than 2031. What does its future look like now?

And NASA’s own missions are just the start. Several missions led by other agencies – including high-profile ones like ESA’s Rosalind Franklin Mars rover – are under threat. This is because the new NASA budget would eliminate the US’s share of their funding, forcing partners to pick up the tab or see their investments go to waste. Did that possibility not deserve some mention at a conference on the future of the UK space sector?

The elephant in the room

Midway through the conference, satellite industry executive Andrew Stanniland declared that he was about to mention the “elephant in the room”. At last, I thought. Someone’s going to say something. However, Stanniland’s “elephant” was not the proposed gutting of NASA science. Instead, he wanted to discuss the apparently taboo topic of the Starlink network of communications satellites.

Like SpaceX, Tesla and, until recently, Trump’s budget-slashing “department of government efficiency”, Starlink is a Musk project. Musk is a Fellow of the Royal Society, and he remains so after the society’s leadership rejected a grassroots effort to remove him for, inter alia, calling for the overthrow of the UK government. Could it be that speakers were avoiding Musk, Trump and the new US science budget to spare the Royal Society’s blushes?

Exasperated, I submitted a question to the event’s online Q&A portal. “The second Trump administration has just proposed a budget for NASA that would gut its science funding,” I wrote. “How is this likely to affect the future of the space sector?” Alas, the moderator didn’t choose my question – though in fairness, five others also went unanswered, and Rowan, for the record, says that she could “of course” talk about whatever she wanted to.

Finally, in the event’s second-to-last session, the elephant broke through. During a panel discussion on international collaboration, an audience member asked, “Can we really operate [collaboratively] when we have an administration that’s causing irreparable harm to one of our biggest collaborators on the space science stage?”

In response, panellist Gillian Wright, a senior scientist at the UK Astronomy Technology Centre in Edinburgh, called it “an incredibly complicated question given the landscape is still shifting”. Nevertheless, she said, “My fear is that what goes won’t come back easily, so we do need to think hard about how we keep those scientific connections alive for the future, and I don’t know the answer.” The global focus of space science, Wright added, may be shifting away from the US and towards Europe and the global south.

And that was it.

A question of leadership

I logged out of the conference feeling depressed – and puzzled. Why had none of these distinguished speakers (partially excepting Wright) addressed one of the biggest threats to the future of space science? One possible answer, suggested to me on social media by the astrophysicist Elizabeth Tasker, is that individuals might hesitate to say anything that could be taken as an official statement, especially if their organization needs to maintain a relationship with the US. “I think it needs to be an agency-released statement first,” said Tasker, who works at (but was not speaking for) the Japan Aerospace Exploration Agency (JAXA). “I totally agree that silence is problematic for the community, and I think that’s where official statements come in – but those may need more time.”

Official statements from agencies and other institutions would doubtless be welcomed by members of the US science workforce whose careers and scientific dreams are at risk from the proposed budget. The initial signs, however, are not encouraging.

On the same day as the Royal Society event, the US National Academy of Sciences (NAS) hosted its annual “State of the Science” event in Washington, DC. According to reporting by John Timmer at Ars Technica, many speakers at this event were, if anything, even keener than the Royal Society speakers to avoid acknowledging the scale of the (real and potential) damage. There were a few oblique comments from NAS president Marcia McNutt and a few forthright ones from a Republican former congresswoman, Heather Wilson, but overall a pronounced tendency to ignore the present in favour of a future that may never come.

Frankly, the scientific community on both sides of the Atlantic deserves better.

The post There’s an elephant in the room at the Royal Society – and for once, it’s not (just) Elon Musk appeared first on Physics World.

  •  

Quantum physics guides proton motion in biological systems

If you dig deep enough, you’ll find that most biochemical and physiological processes rely on shuttling hydrogen ions – protons – around living systems. Until recently, this proton transfer process was thought to occur when protons jump from water molecule to water molecule and between chains of amino acids. In 2023, however, researchers suggested that protons might, in fact, transfer at the same time as electrons. Scientists in Israel have now confirmed this is indeed the case, while also showing that proton movement is linked to the electrons’ spin, or magnetic moment. Since the properties of electron spin are defined by quantum mechanics, the new findings imply that essential life processes are intrinsically quantum in nature.

The scientists obtained this result by placing crystals of lysozyme – an enzyme commonly found in living organisms – on a magnetic substrate. Depending on the direction of the substrate’s magnetization, the spin of the electrons ejected from this substrate may be up or down. Once the electrons are ejected from the substrate, they enter the lysozymes. There, they become coupled to phonons, or vibrations of the crystal lattice.

Crucially, this coupling is not random. Instead, the chirality, or “handedness”, of the phonons determines which electron spin they will couple with – a property known as chiral-induced spin selectivity.

Excited chiral phonons mediate electron spin coupling

When the scientists turned their attention to proton transfer through the lysozymes, they discovered that the protons moved much more slowly with one magnetization direction than they did with the opposite. This connection between proton transfer and spin-selective electron transfer did not surprise Yossi Paltiel, who co-led the study with his Hebrew University of Jerusalem (HUJI) colleagues Naama Goren, Nir Keren and Oded Livnah in collaboration with Nurit Ashkenazy of Ben Gurion University and Ron Naaman of the Weizmann Institute.

“Proton transfer in living organisms occurs in a chiral environment and is an essential process,” Paltiel says. “Since protons also have spin, it was logical for us to try to relate proton transfer to electron spin in this work.”

The finding could shed light on proton hopping in biological environments, Paltiel tells Physics World. “It may ultimately help us understand how information and energy are transferred inside living cells, and perhaps even allow us to control this transfer in the future.

“The results also emphasize the role of chirality in biological processes,” he adds, “and show how quantum physics and biochemistry are fundamentally related.”

The HUJI team now plans to study how the coupling between the proton transfer process and the transfer of spin polarized electrons depends on specific biological environments. “We also want to find out to what extent the coupling affects the activity of cells,” Paltiel says.

Their present study is detailed in PNAS.

The post Quantum physics guides proton motion in biological systems appeared first on Physics World.

  •  

Development and application of a 3-electrode setup for the operando detection of side reactions in Li-Ion batteries


Join us to learn about the development and application of a 3-electrode setup for the operando detection of side reactions in Li-ion batteries.

Detecting parasitic side reactions originating both from the cathode active materials (CAMs) and the electrolyte is paramount for developing more stable cell chemistries for Li-ion batteries. This talk will present a method for the qualitative analysis of electrolyte oxidation, as well as the quantification of released lattice oxygen and transition metal ions (TM ions) from the CAM. It is based on a 3-electrode cell design employing a Vulcan carbon-based sense electrode (SE) that is held at a controlled voltage against a partially delithiated lithium iron phosphate (LFP) counter electrode (CE). At this SE, reductive currents can be measured while polarizing a CAM or carbon working electrode (WE) against the same LFP CE. In voltammetric scans, we show how the SE potential can be selected to specifically detect a given side reaction during CAM charge/discharge, allowing one, for example, to discriminate between lattice oxygen, protons, and dissolved TMs. Furthermore, it is shown via on-line electrochemical mass spectrometry (OEMS) that O2 reduction in the LP47 electrolyte used here consumes ~2.3 electrons/O2. Using this value, the lattice oxygen release deduced from the 3-electrode setup upon charging of the NCA WE is in good agreement with OEMS measurements up to NCA potentials >4.65 V_Li. At higher potentials, the contributions from the reduction of TM ions can be quantified by comparing the integrated SE current with the O2 evolution from OEMS.
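To make the Faraday-law bookkeeping in that last step concrete, here is a minimal Python sketch (an illustration only, not code from the webinar): it converts an integrated reductive sense-electrode charge into an equivalent amount of O2 using the roughly 2.3 electrons per O2 quoted above for the LP47 electrolyte. The 50 mC example charge is an arbitrary placeholder, not a measured value.

# Minimal sketch (not from the webinar): convert the reductive charge collected
# at the sense electrode (SE) into an equivalent amount of O2, using the
# ~2.3 electrons per O2 quoted for the LP47 electrolyte. The example charge is
# an arbitrary placeholder, not a measured value.

F = 96485.3              # Faraday constant, C per mole of electrons
ELECTRONS_PER_O2 = 2.3   # from the OEMS calibration quoted in the abstract

def o2_from_charge(q_coulomb, electrons_per_o2=ELECTRONS_PER_O2):
    """Moles of O2 implied by a reductive charge q (in coulombs) at the SE."""
    return q_coulomb / (electrons_per_o2 * F)

q = 50e-3  # 50 mC of integrated reductive SE current during a charging step
print(f"{o2_from_charge(q) * 1e9:.0f} nmol O2 inferred from {q * 1e3:.0f} mC of SE charge")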

Lennart Reuter

Lennart Reuter is a PhD student in the group of Prof Hubert A Gasteiger at the Chair of Technical Electrochemistry at TUM. His research focused on the interfacial processes in lithium-ion batteries that govern calendar life, cycle stability, and rate capability. He advanced the on-line electrochemical mass spectrometry (OEMS) technique to investigate gas evolution mechanisms from interfacial side reactions at the cathode and anode. His work also explored how SEI formation and graphite structural changes affect Li⁺ transport, using impedance spectroscopy and complementary analysis techniques.

 

Leonhard J Reinschluessel

Leonhard J Reinschluessel is currently a PhD candidate at the Chair of Technical Electrochemistry in the Gasteiger research group at the Technical University of Munich (TUM). His current work encompasses an in-depth understanding of the complex interplay of cathode and electrolyte degradation mechanisms in lithium-ion batteries using operando lab-based and synchrotron techniques. He received his MSc in chemistry from TUM, where he investigated the mitigation of aging of FeNC-based cathode catalyst layers in PEMFCs in his thesis at the Gasteiger group.

The post Development and application of a 3-electrode setup for the operando detection of side reactions in Li-Ion batteries appeared first on Physics World.

  •  

People benefit from medicine, but machines need healthcare too

I began my career in the 1990s at a university spin-out company that developed vibration sensors to monitor the condition of helicopter powertrains and other rotating machinery. It was a job that led to a career developing technologies and techniques for checking the “health” of machines, such as planes, trains and trucks.

What a difference three decades has made. When I started out, we would deploy bespoke systems that generated limited amounts of data. These days, everything has gone digital and there’s almost more information than we can handle. We’re also seeing a growing use of machine learning and artificial intelligence (AI) to track how machines operate.

In fact, with AI being increasingly used in medical science – for example to predict a patient’s risk of heart attacks – I’ve noticed intriguing similarities between how we monitor the health of machines and the health of human bodies. Jet engines and hearts are very different objects, but in both cases monitoring devices give us a set of digitized physical measurements.

A healthy perspective

Sensors installed on a machine provide various basic physical parameters, such as its temperature, pressure, flow rate or speed. More sophisticated devices can yield information about, say, its vibration, acoustic behaviour, or (for an engine) oil debris or quality. Bespoke sensors might even be added if an important or otherwise unchecked aspect of a machine’s performance needs to be monitored – provided the benefits of doing so outweigh the cost.

Generally speaking, the sensors you use in a particular situation depend on what’s worked before and whether you can exploit other measurements, such as those controlling the machine. But whatever sensors are used, the raw data then have to be processed and manipulated to extract particular features and characteristics.


Once you’ve done all that, you can then determine the health of the machine, rather like in medicine. Is it performing normally? Does it seem to be developing a fault? If the machine appears to be going wrong, can you try to diagnose what the problem might be?

Generally, we do this by tracking a range of parameters to look for consistent behaviour, such as a steady increase, or by seeing if a parameter exceeds a pre-defined threshold. With further analysis, we can also try to predict the future state of the machine, work out what its remaining useful life might be, or decide if any maintenance needs scheduling.
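As a toy illustration of that kind of tracking (a generic sketch, not any particular monitoring product), the Python below flags a sensor channel that either exceeds a pre-defined threshold or shows a consistent upward trend over a rolling window; the threshold, window length and slope limit are all arbitrary assumptions.

import numpy as np

# Toy health-monitoring check (illustrative only): flag a channel if it exceeds
# a fixed alarm threshold, or if a least-squares fit over the most recent
# window of samples shows a sustained upward drift.

def check_channel(samples, threshold=80.0, window=20, max_slope=0.5):
    """Return warnings for one sensor channel (e.g. a bearing temperature)."""
    samples = np.asarray(samples, dtype=float)
    warnings = []
    if samples.max() > threshold:
        warnings.append(f"threshold exceeded: peak {samples.max():.1f} > {threshold}")
    if len(samples) >= window:
        slope = np.polyfit(np.arange(window), samples[-window:], 1)[0]
        if slope > max_slope:
            warnings.append(f"steady increase: slope {slope:.2f} per sample")
    return warnings

# Example: a slowly rising, noisy temperature trace
rng = np.random.default_rng(0)
trace = 60 + 0.7 * np.arange(40) + rng.normal(0, 1, 40)
print(check_channel(trace))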

A diagnosis typically involves linking various anomalous physical parameters (or symptoms) to a probable cause. As machines obey the laws of physics, a diagnosis can either be based on engineering knowledge or be driven by data – or sometimes the two together. If a concrete diagnosis can’t be made, you can still get a sense of where a problem might lie before carrying out further investigation or doing a detailed inspection.

One way of doing this is to use a “borescope” – essentially a long, flexible cable with a camera on the end. Rather like an endoscope in medicine, it allows you to look down narrow or difficult-to-reach cavities. But unlike medical imaging, which generally takes place in the controlled environment of a lab or clinic, machine data are typically acquired “in the field”. The resulting images can be tricky to interpret because the light is poor, the measurements are inconsistent, or the equipment hasn’t been used in the most effective way.

Even though it can be hard to work out what you’re seeing, in-situ visual inspections are vital as they provide evidence of a known condition, which can be directly linked to physical sensor measurements. It’s a kind of health status calibration. But if you want to get more robust results, it’s worth turning to advanced modelling techniques, such as deep neural networks.

One way to predict the wear and tear of a machine’s constituent parts is to use what’s known as a “digital twin”. Essentially a virtual replica of a physical object, a digital twin is created by building a detailed model and then feeding in real-time information from sensors and inspections. The twin basically mirrors the behaviour, characteristics and performance of the real object.
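A real digital twin can be arbitrarily detailed, but the basic loop of “model plus live measurements” is simple to sketch. The toy Python class below (a simplified, assumed example, not any vendor’s implementation) advances a modelled wear estimate with usage and then nudges it towards each new inspection or sensor reading.

# Toy "digital twin" of a single wear parameter (illustrative sketch only).
# A simple usage model predicts how wear accumulates; each incoming sensor or
# inspection value pulls the estimate back towards what was actually observed.

class WearTwin:
    def __init__(self, wear=0.0, wear_rate=0.01, trust_in_sensor=0.3):
        self.wear = wear              # current estimated wear (arbitrary units)
        self.wear_rate = wear_rate    # modelled wear per operating hour
        self.trust = trust_in_sensor  # 0 = ignore measurements, 1 = ignore model

    def advance(self, hours, load_factor=1.0):
        """Model step: heavier use (load_factor > 1) wears the part faster."""
        self.wear += self.wear_rate * hours * load_factor

    def assimilate(self, measured_wear):
        """Blend the model estimate with a sensor reading or inspection result."""
        self.wear += self.trust * (measured_wear - self.wear)

    def needs_service(self, limit=1.0):
        return self.wear >= limit

twin = WearTwin()
twin.advance(hours=50, load_factor=1.4)   # a hard week of operation
twin.assimilate(measured_wear=0.9)        # borescope inspection: worse than modelled
print(round(twin.wear, 2), twin.needs_service())   # 0.76 False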

Real-time monitoring

Real-time health data are great because they allow machines to be serviced as and when required, rather than following a rigid maintenance schedule. For example, if a machine has been deployed heavily in a difficult environment, it can be serviced sooner, potentially preventing an unexpected failure. Conversely, if it’s been used relatively lightly and not shown any problems, then maintenance could be postponed or reduced in scope. This saves time and money because the equipment will be out of action for less time than anticipated.


Having information about a machine’s condition at any point in time not only allows this kind of “intelligent maintenance” but also lets us use associated resources wisely. For example, we can work out which parts will need repairing or replacing, when the maintenance will be required and who will do it. Spare parts can therefore be ordered only when required, saving money and optimizing supply chains.

Real-time health-monitoring data are particularly useful for companies owning many machines of one kind, such as airlines with a fleet of planes or haulage companies with a lot of trucks. Such data give them a better understanding of how machines behave not just individually but also collectively, providing a “fleet-wide” view. Noticing and diagnosing failures from data becomes an iterative process, helping manufacturers create new or improved machine designs.

This all sounds great, but in some respects, it’s harder to understand a machine than a human. People can be taken to hospitals or clinics for a medical scan, but a wind turbine or jet engine, say, can’t be readily accessed, switched off or sent for treatment. Machines also can’t tell us exactly how they feel.

However, even humans don’t always know when there’s something wrong. That’s why it’s worth us taking a leaf from industry’s book and considering regular health monitoring and checks. There are lots of brilliant apps out there to monitor and track your heart rate, blood pressure, physical activity and sugar levels.

Just as with a machine, you can avoid unexpected failure, reduce your maintenance costs, and make yourself more efficient and reliable. You could, potentially, even live longer too.

The post People benefit from medicine, but machines need healthcare too appeared first on Physics World.

  •  

Japan’s ispace suffers second lunar landing failure

The Japanese firm ispace has suffered another setback after its second attempt to land on the Moon ended in failure yesterday. The Hakuto-R Mission 2, also known as Resilience, failed to touch down near the centre of Mare Frigoris (sea of cold) in the far north of the Moon after a sensor malfunctioned during descent.

Launched on 15 January from the Kennedy Space Center, Florida, aboard a SpaceX Falcon 9 rocket, the craft spent four months travelling to the Moon before it entered lunar orbit on 7 May. It then spent the past month completing several lunar orbital manoeuvres.

During the descent phase, the 2.3 m-high lander began a landing sequence that involved firing its main propulsion system to gradually decelerate and adjust its attitude. ispace says that the lander was confirmed to be nearly vertical but then the company lost communication with the craft.

The firm concludes that the laser rangefinder experienced delays when attempting to measure the distance to the lunar surface during descent, meaning that the lander was unable to decelerate sufficiently to carry out a soft landing.

“Given that there is currently no prospect of a successful lunar landing, our top priority is to swiftly analyze the telemetry data we have obtained thus far and work diligently to identify the cause,” noted ispace founder and chief executive officer Takeshi Hakamada in a statement. “We strive to restore trust by providing a report of the findings.”

The mission was planned to have operated for about two weeks. Resilience featured several commercial payloads, worth $16m, including a food-production experiment and a deep-space radiation probe. It also carried a rover, dubbed Tenacious, which was about the size of a microwave oven and would have collected and analyzed lunar regolith.

The rover would have also delivered a Swedish artwork called The Moonhouse – a small red cottage with white corners – and placed it at a “symbolically meaningful” site on the Moon.

Lunar losses

The company’s first attempt to land on the Moon also ended in failure in 2023 when the Hakuto-R Mission 1 crash-landed despite being in a vertical position as it carried out the final approach to the lunar surface.

The issue was put down to a software problem that incorrectly assessed the craft’s altitude during descent.

Had the latest attempt succeeded, ispace would have joined the US firms Intuitive Machines and Firefly Aerospace, which successfully landed on the Moon last year and in March this year, respectively.

The second lunar loss also casts doubt on ispace’s plans for further lunar landings with the grand aim of establishing a lunar colony of 1000 inhabitants by the 2040s.

The post Japan’s ispace suffers second lunar landing failure appeared first on Physics World.

  •  

Richard Bond and George Efstathiou: meet the astrophysicists who are shaping our understanding of the early universe

This episode of the Physics World Weekly podcast features George Efstathiou and Richard Bond, who share the 2025 Shaw Prize in Astronomy, “for their pioneering research in cosmology, in particular for their studies of fluctuations in the cosmic microwave background (CMB). Their predictions have been verified by an armada of ground-, balloon- and space-based instruments, leading to precise determinations of the age, geometry, and mass-energy content of the universe.”

Bond and Efstathiou talk about how the CMB emerged when the universe was just 380,000 years old and explain how the CMB is observed today. They explain why studying fluctuations in today’s CMB provides a window into the nature of the universe as it existed long ago, and how future studies could help physicists understand the nature of dark matter – which is one of the greatest mysteries in physics.

Efstathiou is emeritus professor of astrophysics at the University of Cambridge in the UK, while Bond is a professor at the Canadian Institute for Theoretical Astrophysics (CITA) and university professor at the University of Toronto in Canada. The two share the 2025 Shaw Prize in Astronomy and its $1.2m prize money equally.

This podcast is sponsored by The Shaw Prize Foundation.

The post Richard Bond and George Efstathiou: meet the astrophysicists who are shaping our understanding of the early universe appeared first on Physics World.

  •  

Superconducting innovation: SQMS shapes up for scalable success in quantum computing

Developing quantum computing systems with high operational fidelity, enhanced processing capabilities plus inherent (and rapid) scalability is high on the list of fundamental problems preoccupying researchers within the quantum science community. One promising R&D pathway in this regard is being pursued by the Superconducting Quantum Materials and Systems (SQMS) National Quantum Information Science Research Center at the US Department of Energy’s Fermi National Accelerator Laboratory, the pre-eminent US particle physics facility on the outskirts of Chicago, Illinois.

The SQMS approach involves placing a superconducting qubit chip (held at temperatures as low as 10–20 mK) inside a three-dimensional superconducting radiofrequency (3D SRF) cavity – a workhorse technology for particle accelerators employed in high-energy physics (HEP), nuclear physics and materials science. In this set-up, it becomes possible to preserve and manipulate quantum states by encoding them in microwave photons (modes) stored within the SRF cavity (which is also cooled to the millikelvin regime).

Put another way: by pairing superconducting circuits and SRF cavities at cryogenic temperatures, SQMS researchers create environments where microwave photons can have long lifetimes and be protected from external perturbations – conditions that, in turn, make it possible to generate quantum states, manipulate them and read them out. The endgame is clear: reproducible and scalable realization of such highly coherent superconducting qubits opens the way to more complex and scalable quantum computing operations – capabilities that, over time, will be used within Fermilab’s core research programme in particle physics and fundamental physics more generally.

Fermilab is in a unique position to turn this quantum technology vision into reality, given its decades of expertise in developing high-coherence SRF cavities. In 2020, for example, Fermilab researchers demonstrated record coherence lifetimes (of up to two seconds) for quantum states stored in an SRF cavity.

“It’s no accident that Fermilab is a pioneer of SRF cavity technology for accelerator science,” explains Sir Peter Knight, senior research investigator in physics at Imperial College London and an SQMS advisory board member. “The laboratory is home to a world-leading team of RF engineers whose niobium superconducting cavities routinely achieve very high quality factors (Q) from 10¹⁰ to above 10¹¹ – figures of merit that can lead to dramatic increases in coherence time.”

Moreover, Fermilab offers plenty of intriguing HEP use-cases where quantum computing platforms could yield significant research dividends. In theoretical studies, for example, the main opportunities relate to the evolution of quantum states, lattice-gauge theory, neutrino oscillations and quantum field theories in general. On the experimental side, quantum computing efforts are being lined up for jet and track reconstruction during high-energy particle collisions; also for the extraction of rare signals and for exploring exotic physics beyond the Standard Model.

Collaborate to accumulate: SQMS associate scientists Yao Lu (left) and Tanay Roy (right) worked with PhD student Taeyoon Kim (centre) to develop a two-qudit superconducting QPU with a record coherence lifetime (>20 ms). (Courtesy: Hannah Brumbaugh, Fermilab)

Cavities and qubits

SQMS has already notched up some notable breakthroughs on its quantum computing roadmap, not least the demonstration of chip-based transmon qubits (a type of charge qubit circuit exhibiting decreased sensitivity to noise) showing systematic and reproducible improvements in coherence, record-breaking lifetimes of over a millisecond, and reductions in performance variation.

Key to success here is an extensive collaborative effort in materials science and the development of novel chip fabrication processes, with the resulting transmon qubit ancillas shaping up as the “nerve centre” of the 3D SRF cavity-based quantum computing platform championed by SQMS. What’s in the works is essentially a unique quantum analogue of a classical computing architecture: the transmon chip providing a central logic-capable quantum information processor and microwave photons (modes) in the 3D SRF cavity acting as the random-access quantum memory.

As for the underlying physics, the coupling between the transmon qubit and discrete photon modes in the SRF cavity allows for the exchange of coherent quantum information, as well as enabling quantum entanglement between the two. “The pay-off is scalability,” says Alexander Romanenko, a senior scientist at Fermilab who leads the SQMS quantum technology thrust. “A single logic-capable processor qubit, such as the transmon, can couple to many cavity modes acting as memory qubits.”

In principle, a single transmon chip could manipulate more than 10 qubits encoded inside a single-cell SRF cavity, substantially streamlining the number of microwave channels required for system control and manipulation as the number of qubits increases. “What’s more,” adds Romanenko, “instead of using quantum states in the transmon [coherence times just crossed into milliseconds], we can use quantum states in the SRF cavities, which have higher quality factors and longer coherence times [up to two seconds].”
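At its crudest, the physics being described (a two-level transmon exchanging single photons with a long-lived cavity mode) is the Jaynes–Cummings model. The NumPy/SciPy sketch below is an illustrative toy, not SQMS code, and every parameter in it is made up; it simply shows one excitation swapping from the “transmon” into the cavity over a quarter of a vacuum Rabi period.

import numpy as np
from scipy.linalg import expm

# Toy Jaynes-Cummings model of a transmon ancilla coupled to one cavity mode
# (illustrative sketch only; frequencies and coupling are made-up values, in GHz).
N = 10                                    # cavity Fock-space truncation
wq, wc, g = 5.0, 5.0, 0.05                # qubit frequency, cavity frequency, coupling

a = np.diag(np.sqrt(np.arange(1, N)), 1)  # cavity annihilation operator
sm = np.array([[0.0, 1.0], [0.0, 0.0]])   # qubit lowering operator
I_c, I_q = np.eye(N), np.eye(2)

n_c = np.kron(a.T @ a, I_q)               # cavity photon number
n_q = np.kron(I_c, sm.T @ sm)             # qubit excitation number
H = wc * n_c + wq * n_q + g * (np.kron(a, sm.T) + np.kron(a.T, sm))

psi0 = np.zeros(2 * N, dtype=complex)
psi0[1] = 1.0                             # start in |cavity vacuum>|qubit excited>

for t in np.linspace(0.0, 1.0 / (4.0 * g), 6):    # quarter vacuum-Rabi period, in ns
    psi = expm(-2j * np.pi * H * t) @ psi0
    p_qubit = np.real(psi.conj() @ n_q @ psi)
    print(f"t = {t:3.1f} ns   P(excitation still in qubit) = {p_qubit:.3f}")

In this toy the excitation is fully transferred into the cavity mode after 1/(4g) = 5 ns on resonance; it is exactly this kind of controllable exchange that lets a transmon processor write to and read from long-lived cavity “memory” modes.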

In terms of next steps, continuous improvement of the ancilla transmon coherence times will be critical to ensure high-fidelity operation of the combined system – with materials breakthroughs likely to be a key rate-determining step. “One of the unique differentiators of the SQMS programme is this ‘all-in’ effort to understand and get to grips with the fundamental materials properties that lead to losses and noise in superconducting qubits,” notes Knight. “There are no short-cuts: wide-ranging experimental and theoretical investigations of materials physics – per the programme implemented by SQMS – are mandatory for scaling superconducting qubits into industrial and scientifically useful quantum computing architectures.”

Laying down a marker, SQMS researchers recently achieved a major milestone in superconducting quantum technology by developing the longest-lived multimode superconducting quantum processor unit (QPU) ever built (coherence lifetime >20 ms). Their processor is based on a two-cell SRF cavity and leverages its exceptionally high quality factor (~10¹⁰) to preserve quantum information far longer than conventional superconducting platforms (typically 1 or 2 ms for rival best-in-class implementations).

Coupled with a superconducting transmon, the two-cell SRF module enables precise manipulation of cavity quantum states (photons) using ultrafast control/readout schemes (allowing for approximately 10⁴ high-fidelity operations within the qubit lifetime). “This represents a significant achievement for SQMS,” claims Yao Lu, an associate scientist at Fermilab and co-lead for QPU connectivity and transduction in SQMS. “We have demonstrated the creation of high-fidelity [>95%] quantum states with large photon numbers [20 photons] and achieved ultra-high-fidelity single-photon entangling operations between modes [>99.9%]. It’s work that will ultimately pave the way to scalable, error-resilient quantum computing.”

Scalable thinking: the SQMS multiqudit QPU prototype (above) exploits 3D SRF cavities held at millikelvin temperatures. (Courtesy: Ryan Postel, Fermilab)

Fast scaling with qudits

There’s no shortage of momentum either, with these latest breakthroughs laying the foundations for SQMS “qudit-based” quantum computing and communication architectures. A qudit is a multilevel quantum unit that can occupy more than two states and, in turn, hold a larger information density – i.e. instead of working with a large number of qubits to scale information processing capability, it may be more efficient to maintain a smaller number of qudits (with each holding a greater range of values for optimized computations).
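A quick counting argument (a generic estimate, not an SQMS result) makes the density advantage explicit: N qudits of dimension d span d^N states, so matching the 2^M states of M qubits needs only N = M/log2(d) qudits. The short Python check below uses arbitrary example numbers.

import math

# Generic counting argument: how many d-level qudits match the state space of
# M two-level qubits? d**N >= 2**M implies N >= M / log2(d). Illustrative only.

def qudits_needed(n_qubits, d):
    return math.ceil(n_qubits / math.log2(d))

for d in (2, 4, 10, 20):
    print(f"d = {d:2d}: {qudits_needed(100, d):3d} qudits match 100 qubits")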

Scale-up to a multiqudit QPU system is already underway at SQMS via several parallel routes (and all with a modular computing architecture in mind). In one approach, coupler elements and low-loss interconnects integrate a nine-cell multimode SRF cavity (the memory) to a two-cell SRF cavity quantum processor. Another iteration uses only two-cell modules, while yet another option exploits custom-designed multimodal cavities (10+ modes) as building blocks.

One thing is clear: with the first QPU prototypes now being tested, verified and optimized, SQMS will soon move to a phase in which many of these modules will be assembled and put together in operation. By extension, the SQMS effort also encompasses crucial developments in control systems and microwave equipment, where many devices must be synchronized optimally to encode and analyse quantum information in the QPUs.

Along a related coordinate, complex algorithms can benefit from fewer required gates and reduced circuit depth. What’s more, for many simulation problems in HEP and other fields, it’s evident that multilevel systems (qudits) – rather than qubits – provide a more natural representation of the physics in play, making simulation tasks significantly more accessible. The work of encoding several such problems into qudits – including lattice-gauge-theory calculations and others – is similarly ongoing within SQMS.

Taken together, this massive R&D undertaking – spanning quantum hardware and quantum algorithms – can only succeed with a “co-design” approach across strategy and implementation: from identifying applications of interest to the wider HEP community to full deployment of QPU prototypes. Co-design is especially suited to these efforts as it demands sustained alignment of scientific goals with technological implementation to drive innovation and societal impact.

In addition to their quantum computing promise, these cavity-based quantum systems will play a central role as “adapters” and low-loss channels at elevated temperatures for interconnecting chip- or cavity-based QPUs hosted in different refrigerators. These interconnects will provide an essential building block for the efficient scale-up of superconducting quantum processors into larger quantum data centres.

Quantum insights: researchers in the control room of the SQMS Quantum Garage facility, developing architectures and gates for SQMS hardware tailored toward HEP quantum simulations. From left to right: Nick Bornman, Hank Lamm, Doga Kurkcuoglu, Silvia Zorzetti, Julian Delgado and Hans Johnson. (Courtesy: Hannah Brumbaugh)

“The SQMS collaboration is ploughing its own furrow – in a way that nobody else in the quantum sector really is,” says Knight. “Crucially, the SQMS partners can build stuff at scale by tapping into the phenomenal engineering strengths of the National Laboratory system. Designing, commissioning and implementing big machines has been part of the ‘day job’ at Fermilab for decades. In contrast, many quantum computing start-ups must scale their R&D infrastructure and engineering capability from a far-less-developed baseline.”

The last word, however, goes to Romanenko. “Watch this space,” he concludes, “because SQMS is on a roll. We don’t know which quantum computing architecture will ultimately win out, but we will ensure that our cavity-based quantum systems will play an enabling role.”

Scaling up: from qubits to qudits

Left: conceptual illustration of the SQMS Center’s superconducting TESLA cavity coupled to a transmon ancilla qubit (AI-generated). Right: an ancilla qubit with two energy levels – ground ∣g⟩ and excited ∣e⟩ – is used to control a high-coherence (d+1) dimensional qudit encoded in a cavity resonator. The ancilla enables state preparation, control and measurement of the qudit. (Courtesy: Fermilab)

The post Superconducting innovation: SQMS shapes up for scalable success in quantum computing appeared first on Physics World.

  •  

Black-hole scattering calculations could shed light on gravitational waves

By adapting mathematical techniques used in particle physics, researchers in Germany have developed an approach that could boost our understanding of the gravitational waves that are emitted when black holes collide. Led by Jan Plefka at The Humboldt University of Berlin, the team’s results could prove vital to the success of future gravitational-wave detectors.

Nearly a decade on from the first direct observations of gravitational waves, physicists are hopeful that the next generation of ground- and space-based observatories will soon allow us to study these ripples in space–time with unprecedented precision. But to ensure the success of upcoming projects like the LISA space mission, the increased sensitivity offered by these detectors will need to be accompanied with a deeper theoretical understanding of how gravitational waves are generated through the merging of two black holes.

In particular, they will need to predict more accurately the physical properties of gravitational waves produced by any given colliding pair and account for factors including their respective masses and orbital velocities. For this to happen, physicists will need to develop more precise solutions to the relativistic two-body problem. This problem is a key application of the Einstein field equations, which relate the geometry of space–time to the distribution of matter within it.

No exact solution

“Unlike its Newtonian counterpart, which is solved by Kepler’s Laws, the relativistic two-body problem cannot be solved exactly,” Plefka explains. “There is an ongoing international effort to apply quantum field theory (QFT) – the mathematical language of particle physics – to describe the classical two-body problem.”

In their study, Plefka’s team started from state-of-the-art techniques used in particle physics for modelling the scattering of colliding elementary particles, while accounting for their relativistic properties. When viewed from far away, each black hole can be approximated as a single point which, much like an elementary particle, carries a single mass, charge, and spin.

Taking advantage of this approximation, the researchers modified existing techniques in particle physics to create a framework called worldline quantum field theory (WQFT). “The advantage of WQFT is a clean separation between classical and quantum physics effects, allowing us to precisely target the classical physics effects relevant for the vast distances involved in astrophysical observables,” says Plefka.

Ordinarily, doing calculations with such an approach would involve solving millions of integrals that sum up every single contribution to the black hole pair’s properties across all possible ways that the interaction between them could occur. To simplify the problem, Plefka’s team used a new algorithm that identified relationships between the integrals. This reduced the problem to just 250 “master integrals”, making the calculation vastly more manageable.
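The reduction step can be pictured schematically. Identities of the integration-by-parts type supply linear relations among the integrals; row-reducing those relations reveals which integrals are independent “masters” and expresses the rest in terms of them. The SymPy toy below is a made-up illustration, not the team’s algorithm or the actual WQFT relations.

from sympy import Matrix

# Schematic illustration (not the actual WQFT relations): treat five unknown
# integrals I1..I5 as a vector and impose three made-up linear relations of
# the kind produced by integration-by-parts identities.
relations = Matrix([
    [1, -2,  0, 1, 0],   # I1 - 2*I2 + I4 = 0
    [0,  1, -1, 0, 2],   # I2 - I3 + 2*I5 = 0
    [1, -1, -1, 1, 2],   # the sum of the two above, i.e. redundant
])

rref, pivots = relations.rref()
print("independent relations:", len(pivots))       # 2
print("master integrals left:", 5 - len(pivots))   # 3 of the original 5
print(rref)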

With these master integrals, the team could finally produce expressions for three key physical properties of black hole binaries within WQFT. These include the changes in momentum during the gravity-mediated scattering of two black holes and the total energy radiated by both bodies over the course of the scattering.

Genuine physical process

Altogether, the team’s WQFT framework produced the most accurate solution to the Einstein field equations ever achieved to date. “In particular, the radiated energy we found contains a new class of mathematical functions known as ‘Calabi–Yau periods’,” Plefka explains. “While these functions are well-known in algebraic geometry and string theory, this marks the first time they have been shown to describe a genuine physical process.”

With its unprecedented insights into the structure of the relativistic two-body problem, the team’s approach could now be used to build more precise models of gravitational-wave formation, which could prove invaluable for the next generation of gravitational-wave detectors.

More broadly, however, Plefka predicts that the appearance of Calabi–Yau periods in their calculations could lead to an entirely new class of mathematical functions applicable to many areas beyond gravitational waves.

“We expect these periods to show up in other branches of physics, including collider physics, and the mathematical techniques we employed to calculate the relevant integrals will no doubt also apply there,” he says.

The research is described in Nature.

The post Black-hole scattering calculations could shed light on gravitational waves appeared first on Physics World.

  •  

Harmonious connections: bridging the gap between music and science

CP Snow’s classic The Two Cultures lecture, published in book form in 1959, is the usual go-to reference when exploring the divide between the sciences and humanities. It is a culture war that was raging long before the term became social-media shorthand for today’s tribal battles over identity, values and truth.

While Snow eloquently lamented the lack of mutual understanding between scientific and literary elites, the 21st-century version of the two-cultures debate often plays out with a little less decorum and a lot more profanity. Hip hop duo Insane Clown Posse certainly didn’t hold back in their widely memed 2010 track “Miracles”, which included the lyric “And I don’t wanna talk to a scientist / Y’all motherfuckers lying and getting me pissed”. An extreme example to be sure, but it hammers home the point: Snow’s two-culture concerns continue to resonate strongly almost 70 years after his influential lecture and writings.

A Perfect Harmony: Music, Mathematics and Science by David Darling is the latest addition to a growing genre that seeks to bridge that cultural rift. Like Peter Pesic’s Music and the Making of Modern Science, Susan Rogers and Ogi Ogas’ This Is What It Sounds Like, and Philip Ball’s The Music Instinct, Darling’s book adds to the canon that examines the interplay between musical creativity and the analytical frameworks of science (including neuroscience) and mathematics.

I’ve also contributed, in a nanoscopically small way, to this music-meets-science corpus with an analysis of the deep and fundamental links between quantum physics and heavy metal (When The Uncertainty Principle Goes To 11), and have a long-standing interest in music composed from maths and physics principles and constants (see my Lateral Thoughts articles from September 2023 and July 2024). Darling’s book, therefore, struck a chord with me.

Darling is not only a talented science writer with an expansive back-catalogue to his name, but he is also an accomplished musician (check out his album Songs Of The Cosmos), and his enthusiasm for all things musical spills off the page. Furthermore, he is a physicist, with a PhD in astronomy from the University of Manchester. So if there’s a writer who can genuinely and credibly inhabit both sides of the arts–science cultural divide, it’s Darling.

But is A Perfect Harmony in tune with the rest of the literary ensemble, or marching to a different beat? In other words, is this a fresh new take on the music-meets-maths (meets pop sci) genre or, like too many bands I won’t mention, does it sound suspiciously like something you’ve heard many times before? Well, much like an old-school vinyl album, Darling’s work has the feel of two distinct sides. (And I’ll try to make that my final spin on groan-worthy musical metaphors. Promise.)

Not quite perfect pitch

Although the subtitle for A Perfect Harmony is “Music, Mathematics and Science”, the first half of the book is more of a history of the development and evolution of music and musical instruments in various cultures, rather than a new exploration of the underpinning mathematical and scientific principles. Engaging and entertaining though this is – and all credit to Darling for working in a reference to Van Halen in the opening lines of chapter 1 – it’s well-worn ground: Pythagorean tuning, the circle of fifths, equal temperament, Music of the Spheres (not the Coldplay album, mercifully), resonance, harmonics, etc. I found myself wishing, at times, for a take that felt a little more off the beaten track.

One case in point is Darling’s brief discussion of the theremin. If anything earns the title of “The Physicist’s Instrument”, it’s the theremin – a remarkable device that exploits the innate electrical capacitance of the human body to load a resonant circuit and thus produce an ethereal, haunting tone whose pitch can be varied, without, remarkably, any physical contact.

While I give kudos to Darling for highlighting the theremin, the brevity of the description is arguably a lost opportunity when put in the broader context of the book’s aim to explain the deeper connections between music, maths and science. This could have been a novel and fascinating take on the links between electrical and musical resonance that went well beyond the familiar territory mapped out in standard physics-of-music texts.
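For readers who want the detail the book skips, the theremin’s pitch mechanism fits in one formula: the player’s hand adds a small capacitance ΔC to an LC tank, shifting its resonant frequency f = 1/(2π√(LC)), and the audible note is the beat between this variable oscillator and a fixed reference oscillator. The Python below works through that arithmetic with made-up component values (an illustration, not any real instrument’s specification).

import math

# Toy theremin pitch calculation (component values are illustrative guesses).
# Two radio-frequency oscillators: one fixed, one whose tank capacitance is
# loaded by the player's hand. The audible pitch is their beat frequency.

L = 1e-3        # tank inductance, henries (assumed)
C0 = 100e-12    # tank capacitance with no hand nearby, farads (assumed)

def resonant_freq(L, C):
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

f_fixed = resonant_freq(L, C0)
for dC in (0.0, 0.5e-12, 1.0e-12, 2.0e-12):    # the hand adds of order a picofarad
    beat = abs(f_fixed - resonant_freq(L, C0 + dC))
    print(f"extra capacitance {dC * 1e12:3.1f} pF -> pitch ~ {beat:6.0f} Hz")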


As the book progresses, however, Darling moves into more distinctive territory, choosing a variety of inventive examples that are often fascinating and never short of thought-provoking. I particularly enjoyed his description of orbital resonance in the system of seven planets orbiting the red dwarf TRAPPIST-1, 41 light-years from Earth. The orbital periods have ratios, which, when mapped to musical intervals, correspond to a minor sixth, a major sixth, two perfect fifths, a perfect fourth and another perfect fifth. And it’s got to be said that using the music of the eclectic Australian band King Gizzard and the Lizard Wizard to explain microtonality is nothing short of inspired.
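The TRAPPIST-1 claim is easy to check for yourself. Using approximate published orbital periods (quoted here from memory, so treat them as indicative), the short Python script below takes each adjacent-planet period ratio and finds the nearest just-intonation interval; the output reproduces the sequence Darling describes.

# Map adjacent TRAPPIST-1 period ratios to the nearest just-intonation interval.
# Orbital periods (in days) are approximate literature values, for illustration.
periods = [1.51, 2.42, 4.05, 6.10, 9.21, 12.35, 18.77]   # planets b to h

intervals = {
    "perfect fourth": 4 / 3,
    "perfect fifth": 3 / 2,
    "minor sixth": 8 / 5,
    "major sixth": 5 / 3,
    "octave": 2 / 1,
}

for inner, outer in zip(periods, periods[1:]):
    ratio = outer / inner
    name, just = min(intervals.items(), key=lambda kv: abs(kv[1] - ratio))
    print(f"{ratio:.3f}  ->  {name} ({just:.3f})")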

A Perfect Harmony doesn’t entirely close the cultural gap highlighted by Snow all those years ago, but it does hum along pleasantly in the space between. Though the subject matter occasionally echoes well-trodden themes, Darling’s perspective and enthusiasm lend it freshness. There’s plenty here to enjoy, especially for physicists inclined to tune into the harmonies of the universe.

  • 2025 Oneworld Publications 288pp £10.99pb/£6.99ebook

The post Harmonious connections: bridging the gap between music and science appeared first on Physics World.

  •