People benefit from medicine, but machines need healthcare too

9 June 2025 at 10:17

I began my career in the 1990s at a university spin-out company, working for a business that developed vibration sensors to monitor the condition of helicopter powertrains and rotating machinery. It was a job that led to a career developing technologies and techniques for checking the “health” of machines, such as planes, trains and trucks.

What a difference three decades has made. When I started out, we would deploy bespoke systems that generated limited amounts of data. These days, everything has gone digital and there’s almost more information than we can handle. We’re also seeing a growing use of machine learning and artificial intelligence (AI) to track how machines operate.

In fact, with AI being increasingly used in medical science – for example to predict a patient’s risk of heart attacks – I’ve noticed intriguing similarities between how we monitor the health of machines and the health of human bodies. Jet engines and hearts are very different objects, but in both cases monitoring devices give us a set of digitized physical measurements.

A healthy perspective

Sensors installed on a machine provide various basic physical parameters, such as its temperature, pressure, flow rate or speed. More sophisticated devices can yield information about, say, its vibration, acoustic behaviour, or (for an engine) oil debris or quality. Bespoke sensors might even be added if an important or otherwise unchecked aspect of a machine’s performance needs to be monitored – provided the benefits of doing so outweigh the cost.

Generally speaking, the sensors you use in a particular situation depend on what’s worked before and whether you can exploit other measurements, such as those already used to control the machine. But whatever sensors are used, the raw data then have to be processed and manipulated to extract particular features and characteristics.
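
To make the idea concrete, here is a minimal sketch of that feature-extraction step – written in Python, with a synthetic vibration trace and an illustrative set of condition indicators rather than anything drawn from a real monitoring system:

```python
import numpy as np

def extract_features(signal: np.ndarray, sample_rate: float) -> dict:
    """Reduce a raw vibration trace to a handful of condition indicators."""
    rms = float(np.sqrt(np.mean(signal ** 2)))          # overall vibration energy
    peak = float(np.max(np.abs(signal)))                # worst transient
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return {
        "rms": rms,
        "peak": peak,
        "crest_factor": peak / rms,                     # spikiness: high for impacts
        "dominant_freq_hz": float(freqs[np.argmax(spectrum[1:]) + 1]),  # skip DC bin
    }

# Example: a 50 Hz shaft tone buried in noise, sampled at 10 kHz for 1 s
t = np.arange(0, 1.0, 1e-4)
trace = np.sin(2 * np.pi * 50 * t) + 0.3 * np.random.randn(t.size)
print(extract_features(trace, sample_rate=10_000))
```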

Once you’ve done all that, you can then determine the health of the machine, rather like in medicine. Is it performing normally? Does it seem to be developing a fault? If the machine appears to be going wrong, can you try to diagnose what the problem might be?

Generally, we do this by tracking a range of parameters to look for consistent behaviour, such as a steady increase, or by seeing if a parameter exceeds a pre-defined threshold. With further analysis, we can also try to predict the future state of the machine, work out what its remaining useful life might be, or decide if any maintenance needs scheduling.
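
Both checks are easy to express in code. The sketch below is a toy Python version – the bearing-temperature example, window and limits are invented for illustration:

```python
import numpy as np

def check_health(history: np.ndarray, threshold: float, trend_limit: float) -> list:
    """Flag a parameter that crosses a fixed threshold or drifts steadily upward."""
    alerts = []
    if history[-1] > threshold:
        alerts.append(f"threshold exceeded: {history[-1]:.1f} > {threshold}")
    # Least-squares slope over the recent window as a crude trend estimate
    slope = np.polyfit(np.arange(len(history)), history, deg=1)[0]
    if slope > trend_limit:
        alerts.append(f"steady increase: {slope:.2f} per sample")
    return alerts

# A bearing temperature creeping up over the last 20 readings
temps = np.linspace(70, 88, 20) + 0.5 * np.random.randn(20)
print(check_health(temps, threshold=85.0, trend_limit=0.5))
```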

A diagnosis typically involves linking various anomalous physical parameters (or symptoms) to a probable cause. As machines obey the laws of physics, a diagnosis can either be based on engineering knowledge or be driven by data – or sometimes the two together. If a concrete diagnosis can’t be made, you can still get a sense of where a problem might lie before carrying out further investigation or doing a detailed inspection.
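
At its simplest, a knowledge-driven diagnosis is a lookup from symptom patterns to causes. The Python sketch below is purely illustrative – the symptoms, rules and causes are hypothetical, and a real system would weigh evidence probabilistically or learn the mapping from data:

```python
# Hypothetical rule table linking symptom combinations to probable causes
RULES = {
    frozenset({"high_vibration", "oil_debris"}): "bearing wear",
    frozenset({"high_vibration", "temperature_rise"}): "misalignment or imbalance",
    frozenset({"pressure_drop", "temperature_rise"}): "leak or blockage",
}

def diagnose(symptoms: set) -> str:
    """Return the cause whose rule matches the most observed symptoms."""
    matches = [(len(rule), cause) for rule, cause in RULES.items()
               if rule <= symptoms]                     # rule fully covered?
    if not matches:
        return "no concrete diagnosis; schedule a detailed inspection"
    return max(matches)[1]                              # prefer the most specific rule

print(diagnose({"high_vibration", "oil_debris", "temperature_rise"}))
```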

One way of carrying out such an inspection is to use a “borescope” – essentially a long, flexible cable with a camera on the end. Rather like an endoscope in medicine, it allows you to look down narrow or difficult-to-reach cavities. But unlike medical imaging, which generally takes place in the controlled environment of a lab or clinic, machine data are typically acquired “in the field”. The resulting images can be tricky to interpret because the light is poor, the measurements are inconsistent, or the equipment hasn’t been used in the most effective way.

Even though it can be hard to work out what you’re seeing, in-situ visual inspections are vital as they provide evidence of a known condition, which can be directly linked to physical sensor measurements. It’s a kind of health status calibration. But if you want to get more robust results, it’s worth turning to advanced modelling techniques, such as deep neural networks.

One way to predict the wear and tear of a machine’s constituent parts is to use what’s known as a “digital twin”. Essentially a virtual replica of a physical object, a digital twin is created by building a detailed model and then feeding in real-time information from sensors and inspections. The twin basically mirrors the behaviour, characteristics and performance of the real object.
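
Stripped to its essentials, the twin’s update loop alternates between a physics-based prediction and a correction from measurement. The Python sketch below assumes a deliberately crude linear wear law – the class, parameters and numbers are illustrative, not a description of any production digital twin:

```python
class DigitalTwin:
    """Toy twin of a rotating part: a wear model corrected by live measurements."""

    def __init__(self, wear_rate_per_hour: float):
        self.wear_rate = wear_rate_per_hour
        self.estimated_wear = 0.0                  # 0 = new, 1 = end of life

    def advance(self, hours: float, load_factor: float = 1.0) -> None:
        # Prediction step: wear grows with run time, faster under heavy load
        self.estimated_wear += self.wear_rate * hours * load_factor

    def assimilate(self, measured_wear: float, gain: float = 0.3) -> None:
        # Correction step: nudge the model towards sensor/inspection evidence
        self.estimated_wear += gain * (measured_wear - self.estimated_wear)

    def remaining_life_hours(self) -> float:
        return max(0.0, (1.0 - self.estimated_wear) / self.wear_rate)

twin = DigitalTwin(wear_rate_per_hour=1e-4)
twin.advance(hours=500, load_factor=1.2)   # predicted evolution in service
twin.assimilate(measured_wear=0.08)        # evidence from an inspection
print(f"remaining useful life ~ {twin.remaining_life_hours():.0f} h")
```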

Real-time monitoring

Real-time health data are great because they allow machines to be serviced as and when required, rather than following a rigid maintenance schedule. For example, if a machine has been deployed heavily in a difficult environment, it can be serviced sooner, potentially preventing an unexpected failure. Conversely, if it’s been used relatively lightly and not shown any problems, then maintenance could be postponed or reduced in scope. This saves time and money because the equipment will be out of action for less time than anticipated.

Having information about a machine’s condition at any point in time not only allows this kind of “intelligent maintenance” but also lets us use associated resources wisely. For example, we can work out which parts will need repairing or replacing, when the maintenance will be required and who will do it. Spare parts can therefore be ordered only when required, saving money and optimizing supply chains.

Real-time health-monitoring data are particularly useful for companies owning many machines of one kind, such as airlines with a fleet of planes or haulage companies with a lot of trucks. Such data give them a better understanding of how machines behave not just individually but also collectively, providing a “fleet-wide” view. Noticing and diagnosing failures from data becomes an iterative process, helping manufacturers create new or improved machine designs.

This all sounds great, but in some respects, it’s harder to understand a machine than a human. People can be taken to hospitals or clinics for a medical scan, but a wind turbine or jet engine, say, can’t be readily accessed, switched off or sent for treatment. Machines also can’t tell us exactly how they feel.

However, even humans don’t always know when there’s something wrong. That’s why it’s worth taking a leaf out of industry’s book and considering regular health monitoring and check-ups. There are lots of brilliant apps out there to monitor and track your heart rate, blood pressure, physical activity and sugar levels.

Just as with a machine, you can avoid unexpected failure, reduce your maintenance costs, and make yourself more efficient and reliable. You could, potentially, even live longer too.

Japan’s ispace suffers second lunar landing failure

6 June 2025 at 15:04

The Japanese firm ispace has suffered another setback after its second attempt to land on the Moon ended in failure yesterday. The Hakuto-R Mission 2 lander, also known as Resilience, failed to touch down near the centre of Mare Frigoris (Sea of Cold) in the far north of the Moon after a sensor malfunctioned during descent.

Launched on 15 January from the Kennedy Space Center, Florida, aboard a SpaceX Falcon 9 rocket, the craft spent four months travelling to the Moon before it entered lunar orbit on 7 May. It then spent the past month completing several lunar orbital manoeuvres.

During the descent phase, the 2.3 m-high lander began a landing sequence that involved firing its main propulsion system to gradually decelerate and adjust its attitude. ispace says that the lander was confirmed to be nearly vertical but then the company lost communication with the craft.

The firm concludes that the laser rangefinder experienced delays when attempting to measure the distance to the lunar surface during descent, meaning that the craft was unable to decelerate sufficiently to carry out a soft landing.

“Given that there is currently no prospect of a successful lunar landing, our top priority is to swiftly analyze the telemetry data we have obtained thus far and work diligently to identify the cause,” noted ispace founder and chief executive officer Takeshi Hakamada in a statement. “We strive to restore trust by providing a report of the findings.”

The mission had been planned to operate for about two weeks. Resilience featured several commercial payloads, worth $16m, including a food-production experiment and a deep-space radiation probe. It also carried a rover, dubbed Tenacious, which was about the size of a microwave oven and would have collected and analysed lunar regolith.

The rover would have also delivered a Swedish artwork called The Moonhouse – a small red cottage with white corners – and placed it at a “symbolically meaningful” site on the Moon.

Lunar losses

The company’s first attempt to land on the Moon also ended in failure in 2023 when the Hakuto-R Mission 1 crash-landed despite being in a vertical position as it carried out the final approach to the lunar surface.

The issue was put down to a software problem that incorrectly assessed the craft’s altitude during descent.

Had the latest attempt succeeded, ispace would have joined the US firms Intuitive Machines and Firefly Aerospace, which successfully landed on the Moon last year and in March, respectively.

The second lunar loss also casts doubt on ispace’s plans for further lunar landings with the grand aim of establishing a lunar colony of 1000 inhabitants by the 2040s.

Richard Bond and George Efstathiou: meet the astrophysicists who are shaping our understanding of the early universe

5 June 2025 at 17:00

This episode of the Physics World Weekly podcast features George Efstathiou and Richard Bond, who share the 2025 Shaw Prize in Astronomy, “for their pioneering research in cosmology, in particular for their studies of fluctuations in the cosmic microwave background (CMB). Their predictions have been verified by an armada of ground-, balloon- and space-based instruments, leading to precise determinations of the age, geometry, and mass-energy content of the universe.”

Bond and Efstathiou talk about how the CMB emerged when the universe was just 380,000 years old and explain how the CMB is observed today. They explain why studying fluctuations in today’s CMB provides a window into the nature of the universe as it existed long ago, and how future studies could help physicists understand the nature of dark matter – which is one of the greatest mysteries in physics.

Efstathiou is emeritus professor of astrophysics at the University of Cambridge in the UK, while Bond is a professor at the Canadian Institute for Theoretical Astrophysics (CITA) and university professor at the University of Toronto in Canada. They share the prize’s $1.2m award equally.

This podcast is sponsored by The Shaw Prize Foundation.

Superconducting innovation: SQMS shapes up for scalable success in quantum computing

5 June 2025 at 16:00

Developing quantum computing systems with high operational fidelity, enhanced processing capabilities plus inherent (and rapid) scalability is high on the list of fundamental problems preoccupying researchers within the quantum science community. One promising R&D pathway in this regard is being pursued by the Superconducting Quantum Materials and Systems (SQMS) National Quantum Information Science Research Center at the US Department of Energy’s Fermi National Accelerator Laboratory, the pre-eminent US particle physics facility on the outskirts of Chicago, Illinois.

The SQMS approach involves placing a superconducting qubit chip (held at temperatures as low as 10–20 mK) inside a three-dimensional superconducting radiofrequency (3D SRF) cavity – a workhorse technology for particle accelerators employed in high-energy physics (HEP), nuclear physics and materials science. In this set-up, it becomes possible to preserve and manipulate quantum states by encoding them in microwave photons (modes) stored within the SRF cavity (which is also cooled to the millikelvin regime).

Put another way: by pairing superconducting circuits and SRF cavities at cryogenic temperatures, SQMS researchers create environments where microwave photons can have long lifetimes and be protected from external perturbations – conditions that, in turn, make it possible to generate quantum states, manipulate them and read them out. The endgame is clear: reproducible and scalable realization of such highly coherent superconducting qubits opens the way to more complex and scalable quantum computing operations – capabilities that, over time, will be used within Fermilab’s core research programme in particle physics and fundamental physics more generally.

Fermilab is in a unique position to turn this quantum technology vision into reality, given its decades of expertise in developing high-coherence SRF cavities. In 2020, for example, Fermilab researchers demonstrated record coherence lifetimes (of up to two seconds) for quantum states stored in an SRF cavity.

“It’s no accident that Fermilab is a pioneer of SRF cavity technology for accelerator science,” explains Sir Peter Knight, senior research investigator in physics at Imperial College London and an SQMS advisory board member. “The laboratory is home to a world-leading team of RF engineers whose niobium superconducting cavities routinely achieve very high quality factors (Q) from 10¹⁰ to above 10¹¹ – figures of merit that can lead to dramatic increases in coherence time.”
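
Those quality factors map directly onto storage times. As a back-of-the-envelope check – assuming a TESLA-style cavity resonating at about 1.3 GHz, a typical accelerator-cavity frequency rather than a figure quoted here – the photon lifetime is:

```latex
% Photon lifetime from the quality factor: tau = Q / omega = Q / (2 pi f)
\tau = \frac{Q}{2\pi f} \approx \frac{2\times10^{10}}{2\pi \times 1.3\times10^{9}\ \mathrm{Hz}} \approx 2.4\ \mathrm{s}
```

which is consistent with the two-second coherence lifetimes Fermilab reported in 2020.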

Moreover, Fermilab offers plenty of intriguing HEP use-cases where quantum computing platforms could yield significant research dividends. In theoretical studies, for example, the main opportunities relate to the evolution of quantum states, lattice-gauge theory, neutrino oscillations and quantum field theories in general. On the experimental side, quantum computing efforts are being lined up for jet and track reconstruction during high-energy particle collisions; also for the extraction of rare signals and for exploring exotic physics beyond the Standard Model.

Collaborate to accumulate: SQMS associate scientists Yao Lu (left) and Tanay Roy (right) worked with PhD student Taeyoon Kim (centre) to develop a two-qudit superconducting QPU with a record coherence lifetime (>20 ms). (Courtesy: Hannah Brumbaugh, Fermilab)

Cavities and qubits

SQMS has already notched up some notable breakthroughs on its quantum computing roadmap, not least the demonstration of chip-based transmon qubits (a type of charge qubit circuit exhibiting decreased sensitivity to noise) showing systematic and reproducible improvements in coherence, record-breaking lifetimes of over a millisecond, and reductions in performance variation.

Key to success here is an extensive collaborative effort in materials science and the development of novel chip fabrication processes, with the resulting transmon qubit ancillas shaping up as the “nerve centre” of the 3D SRF cavity-based quantum computing platform championed by SQMS. What’s in the works is essentially a unique quantum analogue of a classical computing architecture: the transmon chip providing a central logic-capable quantum information processor and microwave photons (modes) in the 3D SRF cavity acting as the random-access quantum memory.

As for the underlying physics, the coupling between the transmon qubit and discrete photon modes in the SRF cavity allows for the exchange of coherent quantum information, as well as enabling quantum entanglement between the two. “The pay-off is scalability,” says Alexander Romanenko, a senior scientist at Fermilab who leads the SQMS quantum technology thrust. “A single logic-capable processor qubit, such as the transmon, can couple to many cavity modes acting as memory qubits.”
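
This qubit–mode exchange is textbook cavity quantum electrodynamics. As a sketch (the standard Jaynes–Cummings form, not the SQMS team’s specific system Hamiltonian, which the article does not give), a transmon coupled to one cavity mode obeys:

```latex
% omega_c: cavity mode frequency; omega_q: transmon frequency; g: coupling rate.
% The g term swaps excitations between qubit and photon, enabling state
% transfer into the long-lived cavity memory and qubit-photon entanglement.
H/\hbar = \omega_c\, a^{\dagger}a
        + \frac{\omega_q}{2}\,\sigma_z
        + g\left(a^{\dagger}\sigma_- + a\,\sigma_+\right)
```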

In principle, a single transmon chip could manipulate more than 10 qubits encoded inside a single-cell SRF cavity, substantially reducing the number of microwave channels required for system control and manipulation as the number of qubits increases. “What’s more,” adds Romanenko, “instead of using quantum states in the transmon [coherence times just crossed into milliseconds], we can use quantum states in the SRF cavities, which have higher quality factors and longer coherence times [up to two seconds].”

In terms of next steps, continuous improvement of the ancilla transmon coherence times will be critical to ensure high-fidelity operation of the combined system – with materials breakthroughs likely to be a key rate-determining step. “One of the unique differentiators of the SQMS programme is this ‘all-in’ effort to understand and get to grips with the fundamental materials properties that lead to losses and noise in superconducting qubits,” notes Knight. “There are no short-cuts: wide-ranging experimental and theoretical investigations of materials physics – per the programme implemented by SQMS – are mandatory for scaling superconducting qubits into industrial and scientifically useful quantum computing architectures.”

Laying down a marker, SQMS researchers recently achieved a major milestone in superconducting quantum technology by developing the longest-lived multimode superconducting quantum processor unit (QPU) ever built (coherence lifetime >20 ms). Their processor is based on a two-cell SRF cavity and leverages its exceptionally high quality factor (~10¹⁰) to preserve quantum information far longer than conventional superconducting platforms (typically 1 or 2 ms for rival best-in-class implementations).

Coupled with a superconducting transmon, the two-cell SRF module enables precise manipulation of cavity quantum states (photons) using ultrafast control/readout schemes (allowing for approximately 10⁴ high-fidelity operations within the qubit lifetime). “This represents a significant achievement for SQMS,” claims Yao Lu, an associate scientist at Fermilab and co-lead for QPU connectivity and transduction in SQMS. “We have demonstrated the creation of high-fidelity [>95%] quantum states with large photon numbers [20 photons] and achieved ultra-high-fidelity single-photon entangling operations between modes [>99.9%]. It’s work that will ultimately pave the way to scalable, error-resilient quantum computing.”
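
That operation count squares with a simple ratio of timescales. Taking gate times of order a couple of microseconds – an illustrative value, not one quoted by SQMS – gives:

```latex
% Order-of-magnitude check; the gate time is an assumed, typical value
N_{\mathrm{ops}} \sim \frac{T_{\mathrm{coh}}}{t_{\mathrm{gate}}}
             \approx \frac{20\ \mathrm{ms}}{2\ \mathrm{\mu s}} = 10^{4}
```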

Scalable thinking: the SQMS multiqudit QPU prototype exploits 3D SRF cavities held at millikelvin temperatures. (Courtesy: Ryan Postel, Fermilab)

Fast scaling with qudits

There’s no shortage of momentum either, with these latest breakthroughs laying the foundations for SQMS “qudit-based” quantum computing and communication architectures. A qudit is a multilevel quantum unit that can occupy more than two states and, in turn, hold a larger information density – i.e. instead of working with a large number of qubits to scale information-processing capability, it may be more efficient to maintain a smaller number of qudits (with each holding a greater range of values for optimized computations).
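
The pay-off is easy to quantify: N qudits of dimension d span a Hilbert space of dimension d^N, so each qudit carries log₂d qubits’ worth of state space:

```latex
\dim \mathcal{H} = d^{N} = 2^{N \log_2 d},
\qquad \text{e.g. } 8^{10} = 2^{30}
% ten d = 8 qudits span the same state space as thirty qubits
```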

Scale-up to a multiqudit QPU system is already underway at SQMS via several parallel routes (and all with a modular computing architecture in mind). In one approach, coupler elements and low-loss interconnects integrate a nine-cell multimode SRF cavity (the memory) to a two-cell SRF cavity quantum processor. Another iteration uses only two-cell modules, while yet another option exploits custom-designed multimodal cavities (10+ modes) as building blocks.

One thing is clear: with the first QPU prototypes now being tested, verified and optimized, SQMS will soon move to a phase in which many of these modules are assembled and operated together. By extension, the SQMS effort also encompasses crucial developments in control systems and microwave equipment, where many devices must be synchronized optimally to encode and analyse quantum information in the QPUs.

Along a related coordinate, complex algorithms running on qudits can benefit from fewer required gates and reduced circuit depth. What’s more, for many simulation problems in HEP and other fields, it’s evident that multilevel systems (qudits) – rather than qubits – provide a more natural representation of the physics in play, making simulation tasks significantly more accessible. The work of encoding several such problems into qudits – including lattice-gauge-theory calculations and others – is similarly ongoing within SQMS.

Taken together, this massive R&D undertaking – spanning quantum hardware and quantum algorithms – can only succeed with a “co-design” approach across strategy and implementation: from identifying applications of interest to the wider HEP community to full deployment of QPU prototypes. Co-design is especially suited to these efforts as it demands sustained alignment of scientific goals with technological implementation to drive innovation and societal impact.

In addition to their quantum computing promise, these cavity-based quantum systems will also serve as the “adapters” and low-loss channels, at elevated temperatures, for interconnecting chip- or cavity-based QPUs hosted in different refrigerators. These interconnects will provide an essential building block for the efficient scale-up of superconducting quantum processors into larger quantum data centres.

Quantum insights: researchers in the control room of the SQMS Quantum Garage facility, developing architectures and gates for SQMS hardware tailored toward HEP quantum simulations. From left to right: Nick Bornman, Hank Lamm, Doga Kurkcuoglu, Silvia Zorzetti, Julian Delgado and Hans Johnson. (Courtesy: Hannah Brumbaugh)

 “The SQMS collaboration is ploughing its own furrow – in a way that nobody else in the quantum sector really is,” says Knight. “Crucially, the SQMS partners can build stuff at scale by tapping into the phenomenal engineering strengths of the National Laboratory system. Designing, commissioning and implementing big machines has been part of the ‘day job’ at Fermilab for decades. In contrast, many quantum computing start-ups must scale their R&D infrastructure and engineering capability from a far-less-developed baseline.”

The last word, however, goes to Romanenko. “Watch this space,” he concludes, “because SQMS is on a roll. We don’t know which quantum computing architecture will ultimately win out, but we will ensure that our cavity-based quantum systems will play an enabling role.”

Scaling up: from qubits to qudits

Left: conceptual illustration of the SQMS Center’s superconducting TESLA cavity coupled to a transmon ancilla qubit (AI-generated). Right: an ancilla qubit with two energy levels – ground ∣g⟩ and excited ∣e⟩ – is used to control a high-coherence (d+1)-dimensional qudit encoded in a cavity resonator. The ancilla enables state preparation, control and measurement of the qudit. (Courtesy: Fermilab)
