CERN accepts $1bn in private cash towards Future Circular Collider

19 January 2026 at 14:00

The CERN particle-physics lab near Geneva has received $1bn from private donors towards the construction of the Future Circular Collider (FCC). The cash marks the first time in the lab’s 72-year history that individuals and philanthropic foundations have agreed to support a major CERN project. If built, the FCC would be the successor to the Large Hadron Collider (LHC), where the Higgs boson was discovered.

CERN originally released a four-volume conceptual design report for the FCC in early 2019, with more detail included in a three-volume feasibility study that came out last year. It calls for a giant tunnel some 90.7 km in circumference – roughly three times as long as the LHC – that would be built about 200 m underground on average.

The FCC has been recommended as the preferred option for the next flagship collider at CERN in the ongoing process to update the European Strategy for Particle Physics, which will be handed over to the CERN Council in May 2026. If the plans are given the green light by the CERN Council in 2028, construction on the FCC electron–positron machine, dubbed FCC-ee, would begin in 2030. It would start operations in 2047, a few years after the High Luminosity LHC (HL-LHC) closes down, and run for about 15 years until the early 2060s.

The FCC-ee would focus on creating a million Higgs particles in total, allowing physicists to study the particle’s properties with an accuracy an order of magnitude better than is possible with the LHC. The FCC feasibility study then calls for a hadron machine, dubbed FCC-hh, to replace the FCC-ee in the same 91 km tunnel. It would be a “discovery machine”, smashing together protons at high energy – about 85 TeV – with the aim of creating new particles. If built, the FCC-hh would begin operation in 2073 and run to the end of the century.

The funding model for the FCC-ee, which is expected to have a price tag of about $18bn, is still a work in progress. But it is estimated that at least two-thirds of the construction costs will come from CERN’s 24 member states, with the rest needing to be found elsewhere. One option to plug that gap is private donations, and in late December CERN received a significant boost from several organizations including the Breakthrough Prize Foundation, the Eric and Wendy Schmidt Fund for Strategic Innovation, and the entrepreneurs John Elkann and Xavier Niel. Together, they pledged a total of $1bn towards the FCC-ee.

Costas Fountas, president of the CERN Council, says CERN is “extremely grateful” for the interest. “This once again demonstrates CERN’s relevance and positive impact on society, and the strong interest in CERN’s future that exists well beyond our own particle physics community,” he notes.

Eric Schmidt, the former chief executive of Google, says that he and Wendy Schmidt were “inspired by the ambition of this project and by what it could mean for the future of humanity”. The FCC, he believes, is an instrument that “could push the boundaries of human knowledge and deepen our understanding of the fundamental laws of the Universe” and could lead to technologies that benefit society “in profound ways”, from medicine to computing to sustainable energy.

The cash promised has been welcomed by outgoing CERN director-general Fabiola Gianotti. “It’s the first time in history that private donors wish to partner with CERN to build an extraordinary research instrument that will allow humanity to take major steps forward in our understanding of fundamental physics and the universe,” she said. “I am profoundly grateful to them for their generosity, vision, and unwavering commitment to knowledge and exploration.”

Further boost

The cash comes a few months after the Circular Electron–Positron Collider (CEPC) – a rival to the FCC-ee that also involves building a huge 100 km tunnel to study the Higgs in unprecedented detail – was not considered for inclusion in China’s next five-year plan, which runs from 2026 to 2030. There has been much discussion in China about whether the CEPC is the right project for the country, with the collider facing criticism from the particle physicist and Nobel laureate Chen-Ning Yang before his death last year.

Wang Yifang of the Institute of High Energy Physics (IHEP) in Beijing says the institute will submit the CEPC for consideration again in 2030 unless the FCC is officially approved before then. But for particle theorist John Ellis from King’s College London, China’s decision to effectively put the CEPC on the back burner “certainly simplifies the FCC discussion”. “However, an opportunity for growing the world particle physics community has been lost, or at least deferred [by the decision],” Ellis told Physics World.

Ellis adds, however, that he would welcome China’s participation in the FCC. “Their accelerator and detector [technical design reviews] show that they could bring a lot to the table, if the political obstacles can be overcome,” he says.

However, if the FCC-ee goes ahead, China could perhaps make significant “in-kind” contributions, rather like those that occur with the ITER experimental fusion reactor currently being built in France. In this case, instead of cash payments, countries provide components, equipment and other materials.

Those considerations and more will now fall to the British physicist Mark Thomson, who took over from Gianotti as CERN director-general on 1 January for a five-year term. As well as working on funding requirements for the FCC-ee, top of his in-tray will be shutting down the LHC in June to make way for further work on the HL-LHC, which involves installing powerful new superconducting magnets and improving the detectors.

About 90% of the 27 km LHC accelerator will be affected by the upgrade, a major part of which is replacing the magnets in the final-focus systems of the two large experiments, ATLAS and CMS. These magnets will take the incoming beams and focus them down to less than 10 microns in cross section. The upgrade includes the installation of state-of-the-art niobium–tin (Nb3Sn) superconducting focusing magnets.

The HL-LHC will probably not turn on until 2030, by which time Thomson’s term will nearly be over, but that doesn’t deter him from leading the world’s foremost particle-physics lab. “It’s an incredibly exciting project,” Thomson told the Guardian. “It’s more interesting than just sitting here with the machine hammering away.”

Polarization-sensitive photoacoustic microscopy reveals heart tissue health

19 January 2026 at 10:30
Imaging tissue fibrosis (a) Mid-infrared dichroism-sensitive photoacoustic microscopy (MIR-DS-PAM) images of cell-induced fibrosis (CIF) and normal control (NC) tissue; (c) MIR-DS-PAM images of drug-induced fibrosis (DIF) and NC tissue; (b) and (d) show the corresponding confocal fluorescence microscopy (CFM) images. Scale bars: 500 µm. (Courtesy: CC-BY 4.0/Light Sci. Appl. 10.1038/s41377-025-02117-0)

Many of the tissues in the human body rely upon highly organized microstructures to function effectively. If the collagen fibres in heart muscle become disordered, for instance, this can lead to or reflect disorders such as fibrosis and cancer. To image and analyse such structural changes, researchers at Pohang University of Science and Technology (POSTECH) in Korea have developed a new label-free microscopy technique and demonstrated its use in engineered heart tissue.

The ability to assess the alignment of microstructures such as protein fibres within tissue’s extracellular matrix provides a valuable tool for diagnosing disease, monitoring therapy response and evaluating tissue engineering models. Currently, however, this is achieved using histological imaging methods based on immunofluorescent staining, which can be labour-intensive and sensitive to the imaging conditions and antibodies used.

Instead, a team headed up by Chulhong Kim and Jinah Jang is investigating photoacoustic microscopy (PAM), a label-free imaging modality that relies on light absorption by endogenous tissue chromophores to reveal structural and functional information. In particular, PAM with mid-infrared (MIR) incident light provides bond-selective, high-contrast imaging of proteins, lipids and carbohydrates. The researchers also incorporated dichroism-sensitive (DS) functionality, resulting in a technique referred to as MIR-DS-PAM.

“Dichroism-sensitivity enables the quantitative assessment of fibre alignment by detecting the polarization-dependent absorption of anisotropic materials like collagen,” explains first author Eunwoo Park. “This adds a new contrast mechanism to conventional photoacoustic imaging, allowing simultaneous visualization of molecular content and microstructural organization without any labelling.”

Park and colleagues constructed a MIR-DS-PAM system using a pulsed quantum cascade laser as the light source. They tuned the laser to a centre wavelength of 6.0 µm to correspond with an absorption peak from the C=O stretching vibration in proteins. The laser beam was linearly polarized, modulated by a half-wave plate and used to illuminate the target tissue.

Tissue analysis

To validate the functionality of their MIR-DS-PAM technique, the researchers used it to image a formalin-fixed section of engineered heart tissue (EHT). They obtained images at four incident angles and used the acquired photoacoustic data to calculate the photoacoustic amplitude, which visualizes the protein content, as well as the degree of linear dichroism (DoLD) and the orientation angle of linear dichroism (AoLD), which reveal the extracellular matrix alignment.
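
The way such polarization-resolved measurements are typically reduced to these two quantities can be illustrated with a short sketch. This assumes a generic Stokes-like combination of amplitudes recorded at polarization angles of 0°, 45°, 90° and 135°; the exact definitions and normalization used in the POSTECH study may differ.

```python
import numpy as np

def dichroism_from_four_angles(a0, a45, a90, a135):
    """Estimate the degree (DoLD) and orientation angle (AoLD) of linear dichroism
    from photoacoustic amplitudes measured at four incident polarization angles.
    Generic Stokes-like reduction -- the definitions in the actual study may differ."""
    s0 = 0.5 * (a0 + a45 + a90 + a135)           # isotropic (mean) amplitude
    s1 = a0 - a90                                 # 0/90 degree difference
    s2 = a45 - a135                               # 45/135 degree difference
    dold = np.sqrt(s1**2 + s2**2) / s0            # degree of linear dichroism
    aold = 0.5 * np.degrees(np.arctan2(s2, s1))   # orientation angle in degrees
    return dold, aold

# Example: a pixel that absorbs most strongly for light polarized near 30 degrees
print(dichroism_from_four_angles(a0=0.8, a45=1.0, a90=0.4, a135=0.2))
```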

“Cardiac tissue features highly aligned extracellular matrix with complex fibre orientation and layered architecture, which are critical to its mechanical and electrical function,” Park explains. “These properties make it an ideal model for demonstrating the ability of MIR-DS-PAM to detect physiologically relevant histostructural and fibrosis-related changes.”

The researchers also used MIR-DS-PAM to quantify the structural integrity of EHT during development, using specimens cultured for one to five days before fixing. Analysis of the label-free images revealed that as the tissue matured, the DoLD gradually increased, while the standard deviation of the AoLD decreased – indicating increased protein accumulation with more uniform fibre alignment over time. They note that these results agree with those from immunofluorescence-stained confocal fluorescence microscopy.

Next, they examined diseased EHT with two types of fibrosis: cell-induced fibrosis (CIF) and drug-induced fibrosis (DIF). In the CIF sample, the average photoacoustic amplitude and AoLD uniformity were both lower than in normal EHT, indicating reduced protein density and disrupted fibre alignment. The DIF sample exhibited a higher photoacoustic amplitude and lower AoLD uniformity than normal EHT, suggesting extensive extracellular matrix accumulation with disorganized orientation.

Both CIF and DIF showed a slight reduction in DoLD, again signifying a disorganized tissue structure, a common hallmark of fibrosis. The two fibrosis types, however, exhibited diverse biochemical profiles and different levels of mechanical dysfunction. The findings demonstrate the ability of MIR-DS-PAM to distinguish diseased from healthy tissue and identify different types of fibrosis. The researchers also imaged a tissue assembly containing both normal and fibrotic EHT to show that MIR-DS-PAM can capture features in a composite sample.

They conclude that MIR-DS-PAM enables label-free monitoring of both tissue development and fibrotic remodelling. As such, the technique shows potential for use within tissue engineering research, as well as providing a diagnostic tool for assessing tissue fibrosis or remodelling in biopsied samples. “Its ability to visualize both biochemical composition and structural alignment could aid in identifying pathological changes in cardiological, musculoskeletal or ocular tissues,” says Park.

“We are currently expanding the application of MIR-DS-PAM to disease contexts where extracellular matrix remodelling plays a central role,” he adds. “Our goal is to identify label-free histological biomarkers that capture both molecular and structural signatures of fibrosis and degeneration, enabling multiparametric analysis in pathological conditions.”

 

Astronomer Daniel Jaffe named president of the Giant Magellan Telescope project

16 January 2026 at 16:30

Astronomer Daniel Jaffe has been appointed the next president of the Giant Magellan Telescope Corporation – the international consortium building the $2.5bn Giant Magellan Telescope (GMT). He succeeds Robert Shelton, who announced his retirement last year after eight years in the role.

Head of astronomy at the University of Texas at Austin from 2011 to 2015, Jaffe served as the university’s vice president for research from 2016 to 2025 and was also its interim provost from 2020 to 2021.

Jaffe has sat on the board of directors of the Association of Universities for Research in Astronomy and the Gemini Observatory and played a role in establishing the University of Texas at Austin’s partnership in the GMT.

Under construction in Chile and expected to be completed in the 2030s, the GMT will combine seven mirrors into a single 25.4 m telescope. From the ground it will produce images 4–16 times sharper than those of the James Webb Space Telescope and will investigate the origins of the chemical elements and search for signs of life on distant planets.
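
The lower end of that range follows from the diffraction limit alone. As a rough sketch, assuming the two telescopes observe at the same wavelength, the achievable angular resolution scales inversely with aperture diameter D:

$$
\theta_{\min} \approx 1.22\,\frac{\lambda}{D}, \qquad
\frac{\theta_{\mathrm{JWST}}}{\theta_{\mathrm{GMT}}} \approx \frac{D_{\mathrm{GMT}}}{D_{\mathrm{JWST}}} \approx \frac{25.4~\mathrm{m}}{6.5~\mathrm{m}} \approx 3.9
$$

The larger factors in the quoted range presumably correspond to comparisons at shorter observing wavelengths, where adaptive optics lets the ground-based telescope exploit its full aperture.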

“I am honoured to lead the GMT at this exciting stage,” notes Jaffe. “[It] represents a profound leap in our ability to explore the universe and employ a host of new technologies to make fundamental discoveries.”

“[Jaffe] brings decades of leadership in research, astronomy instrumentation, public-private partnerships, and academia,” noted Taft Armandroff, board chair of the GMTO Corporation. “His deep understanding of the Giant Magellan Telescope, combined with his experience leading large research enterprises and cultivating a collaborative environment, make him exceptionally well suited to lead the observatory through its next phase of construction and toward operations.”

Jaffe joins the GMT at a pivotal time, as it aims to secure the funding necessary to complete the telescope, with just over $1bn from private funds having been pledged so far. The collaboration recently added Northwestern University and the Massachusetts Institute of Technology to its international consortium, taking the number of members to 16 universities and research institutions.

In June 2025 the GMT, which is already 40% complete, received approval from the US National Science Foundation confirming that the observatory will advance into its “major facilities final design phase”, one of the final steps before becoming eligible for federal construction funding.

Yet it faces competition from another next-generation telescope – the Thirty Meter Telescope (TMT), which will have a 30 m-diameter primary mirror made up of 492 segments of zero-expansion glass.

The TMT team chose Hawaii’s Mauna Kea peak as its location. However, protests by indigenous Hawaiians, who regard the site as sacred, have delayed the start of construction, and in 2019 officials identified the island of La Palma in Spain’s Canary Islands as an alternative site.

India turns to small modular nuclear reactors to meet climate targets

16 January 2026 at 13:30

India has been involved in nuclear power for decades, but the country is now turning to small modular nuclear reactors (SMRs) as part of a new, long-term push towards nuclear and renewable energy. In December 2025 the country’s parliament passed a bill that for the first time allows private companies to participate in India’s nuclear programme, which could see them involved in generating power, operating plants and making equipment.

Some commentators are unconvinced that the move will be enough to help meet India’s climate pledge of achieving 500 GW of non-fossil-fuel-based generating capacity by 2030. Interestingly, however, India has now joined other nations, such as Russia and China, in taking an interest in SMRs. They could help stem the overall decline in nuclear power, which now accounts for just 9% of electricity generated around the world – down from 17.5% in 1996.

Last year India’s finance minister Nirmala Sitharaman announced a nuclear energy mission, funded with 200 billion Indian rupees ($2.2bn), to develop at least five indigenously designed and operational SMRs by 2033. Unlike huge conventional nuclear plants, such as pressurized heavy-water reactors (PHWRs), SMRs have most or all of their components manufactured in factories before being assembled at the reactor site.

SMRs typically generate less than 300 MW of electrical power, but – being modular – additional capacity can be brought online quickly and easily. Their advantages include lower capital costs, shorter construction times, the ability to work with lower-capacity grids and lower carbon emissions. Despite their promise, there are only two fully operating SMRs in the world – both in Russia – although two further high-temperature gas-cooled SMRs are currently being built in China. In June 2025 Rolls-Royce SMR was selected as the preferred bidder by Great British Nuclear to build the UK’s first fleet of SMRs, with plans to provide 470 MW of low-carbon electricity.

Cost benefit analysis

An official at the Department of Atomic Energy told Physics World that part of that mix of five new SMRs in India could be the 200 MW Bharat small modular reactor, which is based on pressurized water reactor technology and uses slightly enriched uranium as fuel. Other options are 55 MW small modular reactors, and the Indian government also plans to partner with the private sector to deploy 220 MW Bharat small reactors.

Despite such moves, some are unconvinced that small nuclear reactors could help India scale its nuclear ambitions. “SMRs are still to demonstrate that they can supply electricity at scale,” says Karthik Ganesan, a fellow and director of partnerships at the Council on Energy, Environment and Water (CEEW), a non-profit policy research think-tank based in New Delhi. “SMRs are a great option for captive consumption, where large investment that will take time to start generating is at a premium.”

Ganesan, however, says it is too early to comment on the commercial viability of SMRs as cost reductions from SMRs depend on how much of the technology is produced in a factory and in what quantities. “We are yet to get to that point and any test reactors deployed would certainly not be the ones to benchmark their long-term competitiveness,” he says. “[But] even at a higher tariff, SMRs will still have a use case for industrial consumers who want certainty in long-term tariffs and reliable continuous supply in a world where carbon dioxide emissions will be much smaller than what we see from the power sector today.”

M V Ramana from the University of British Columbia, Vancouver, who works in international security and energy supply, is concerned over the cost efficiency of SMRs compared to their traditional counterparts. “Larger reactors are cheaper on a per-megawatt basis because their material and work requirements do not scale linearly with power capacity,” says Ramana. This, according to Ramana, means that the electricity SMRs produce will be more expensive than nuclear energy from large reactors, which are already far more expensive than renewables such as solar and wind energy.

Clean or unclean?

Even if SMRs take over from PHWRs, there is still the question of what to do with their nuclear waste. As Ramana points out, all activities linked to the nuclear fuel chain – from mining uranium to dealing with the radioactive wastes produced – have significant health and environmental impacts. “The nuclear fuel chain is polluting, albeit in a different way from that of fossil fuels,” he says, adding that those pollutants remain hazardous for hundreds of thousands of years. “There is no demonstrated solution to managing these radioactive wastes – nor can there be, given the challenge of trying to ensure that these materials do not come into contact with living beings,” says Ramana.

Ganesan, however, thinks that nuclear energy is still clean as it produces electricity with a much lower environmental footprint, especially when it comes to so-called “criteria pollutants”: ozone; particulate matter; carbon monoxide; lead; sulphur dioxide; and nitrogen dioxide. While nuclear waste still needs to be managed, Ganesan says the associated costs are already included in the price of setting up a reactor. “In due course, with technological development, the burn up will [be] significantly higher and waste generated a lot lesser.”

Gravitational lensing sheds new light on Hubble constant controversy

16 January 2026 at 11:00

By studying how light from eight distant quasars is gravitationally lensed as it propagates towards Earth, astronomers have calculated a new value for the Hubble constant – a parameter that describes the rate at which the universe is expanding. The result agrees more closely with previous “late-universe” probes of this constant than it does with calculations based on observations of the cosmic microwave background (CMB) in the early universe, strengthening the notion that we may be misunderstanding something fundamental about how the universe works.

The universe has been expanding ever since the Big Bang nearly 14 billion years ago. We know this, in part, because of observations made in the 1920s by the American astronomer Edwin Hubble. By measuring the redshift of various galaxies, Hubble discovered that galaxies further away from Earth are moving away faster than galaxies that are closer to us. The relationship between this speed and the galaxies’ distance is known as the Hubble constant, H0.
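
In its simplest form this relationship – the Hubble–Lemaître law – says that a galaxy’s recession velocity v is proportional to its distance d. With illustrative numbers (H0 of about 70 km/s/Mpc and a galaxy 100 Mpc away):

$$
v = H_0\,d \approx 70~\tfrac{\mathrm{km/s}}{\mathrm{Mpc}} \times 100~\mathrm{Mpc} = 7000~\mathrm{km/s},
\qquad z \approx \frac{v}{c} \approx 0.023
$$

so measuring both a redshift and an independent distance for the same galaxy fixes H0.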

Astronomers have developed several techniques for measuring H0. The problem is that different techniques deliver different values. According to measurements of the CMB radiation “left over” from the Big Bang, made by the European Space Agency’s Planck satellite, the value of H0 is about 67 kilometres per second per megaparsec (km/s/Mpc), where one Mpc is 3.3 million light-years. In contrast, “distance-ladder” measurements, such as those made by the SH0ES collaboration involving observations of type Ia supernovae, yield a value of about 73 km/s/Mpc. This discrepancy is known as the Hubble tension.

Time-delay cosmography

In the latest work, the TDCOSMO collaboration, which includes astronomers Kenneth Wong and Eric Paic of the University of Tokyo, Japan, measured H0 using a technique called time-delay cosmography. This well-established method dates back to 1964 and uses the fact that massive galaxies can act as lenses, deflecting the light from objects behind them so that from our perspective, these objects appear distorted.

“This is called gravitational lensing, and if the circumstances are right, we’ll actually see multiple distorted images, each of which will have taken a slightly different pathway to get to us, taking different amounts of time,” Wong explains.

By looking for changes in these images that are identical but slightly out of sync, astronomers can measure the differences in the time required for the light from the objects to reach Earth. Then, by combining these data with estimates of how the mass of the lensing galaxy is distributed, they can calculate H0.
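
In the standard formalism, the delay between two lensed images i and j is set by a “time-delay distance” that scales inversely with H0 – a sketch of the key relation, with the lens-model details bundled into the Fermat-potential difference Δφ:

$$
\Delta t_{ij} = \frac{D_{\Delta t}}{c}\,\Delta\phi_{ij},
\qquad D_{\Delta t} \equiv (1+z_{\mathrm{l}})\,\frac{D_{\mathrm{l}}\,D_{\mathrm{s}}}{D_{\mathrm{ls}}} \propto \frac{1}{H_0}
$$

Here z_l is the lens redshift and D_l, D_s and D_ls are angular-diameter distances to the lens, to the source and between the two; measuring Δt for a known Δφ therefore pins down D_Δt and hence H0.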

A real tension, not a measurement artefact

Wong and colleagues measured the light from eight strongly lensed quasars using various telescopes, including the James Webb Space Telescope (JWST), the Keck telescopes and the Very Large Telescope (VLT). They also made use of observations of the Sloan Lens ACS (SLACS) sample taken with Keck, as well as the Strong Lensing Legacy Survey (SL2S) sample.

Based on these measurements, they obtained an H0 value of roughly 71.6 km/s/Mpc, which is more consistent with present-day observations (such as those from SH0ES) than early-universe ones (such as those from Planck). Wong explains that this supports the idea that the Hubble tension arises from real physics, not just some unknown error in the various methods. “Our measurement is completely independent of other methods, both early- and late-universe, so if there are any systematic uncertainties in those, we should not be affected by them,” he says.

The astronomers say that the SLACS and SL2S sample data are in excellent agreement with the new TDCOSMO-2025 sample, while the new measurements improve the precision of H0 to 4.6%. However, Paic notes that nailing down the value of H0 to a level that would “definitely confirm” the Hubble tension will require a precision of 1–2%. “This could be possible by increasing the number of objects observed as well as ruling out any systematic errors as yet unaccounted for,” he says.
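
As a rough guide – assuming the error budget is dominated by independent per-system uncertainties rather than by shared systematics – the precision improves with the number of lensed systems N as 1/√N, so reaching the 1–2% level would require roughly an order of magnitude more systems:

$$
\sigma_N \approx \frac{\sigma_1}{\sqrt{N}}
\;\;\Rightarrow\;\;
N_{1.5\%} \approx \left(\frac{4.6}{1.5}\right)^{2} N_{4.6\%} \approx 9\,N_{4.6\%}
$$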

Wong adds that while the TDCOSMO-2025 dataset contains its own uncertainties, multiple independent measurements should, in principle, strengthen the result. “One of the largest sources of uncertainty is the fact that we don’t know exactly how the mass in the lens galaxies is distributed,” he explains. “It is usually assumed that the mass follows some simple profile that is consistent with observations, but it is hard to be sure and this uncertainty can directly influence the values we calculate.”

The biggest hurdle, Wong adds, will “probably be addressing potential sources of systematic uncertainty, making sure we have thought of all the possible ways that our result could be wrong or biased and figuring out how to handle those uncertainties.”

The study is detailed in Astronomy & Astrophysics.

RFID-tagged drug capsule lets doctors know when it has been swallowed

15 January 2026 at 10:15

Taking medication as and when prescribed is crucial for it to have the desired effect. But nearly half of people with chronic conditions don’t adhere to their medication regimes, a serious problem that leads to preventable deaths, drug resistance and increased healthcare costs. So how can medical professionals ensure that patients are taking their medicine as prescribed?

A team at Massachusetts Institute of Technology (MIT) has come up with a solution: a drug capsule containing an RFID tag that uses radiofrequency (RF) signals to communicate that it has been swallowed, and then bioresorbs into the body.

“Medication non-adherence remains a major cause of preventable morbidity and cost, but existing ingestible tracking systems rely on non-degradable electronics,” explains project leader Giovanni Traverso. “Our motivation was to create a passive, battery-free adherence sensor that confirms ingestion while fully biodegrading, avoiding long-term safety and environmental concerns associated with persistent electronic devices.”

The device – named SAFARI (smart adherence via Faraday cage and resorbable ingestible) – incorporates an RFID tag with a zinc foil RF antenna and an RF chip, as well as the drug payload, inside an ingestible gelatin capsule. The capsule is coated with a mixture of cellulose and molybdenum particles, which blocks RF signals from passing through it.

SAFARI capsules Photos of the capsules with (left) and without (right) the RF-blocking coating. (Courtesy: Mehmet Say)

Once swallowed, however, this shielding layer breaks down in the stomach. The RFID tag (which can be preprogrammed with information such as dose metadata, manufacturing details and unique ID) can then be wirelessly queried by an external reader and return a signal from inside the body confirming that the medication has been ingested.

The capsule itself dissolves upon exposure to digestive fluids, releasing the desired medication; the metal antenna components also dissolve completely in the stomach. The use of biodegradable materials is key as it eliminates the need for device retrieval and minimizes the risk of gastrointestinal (GI) blockage. The tiny (0.16 mm²) RFID chip remains intact and should safely leave the body through the GI tract.

Traverso suggests that the first clinical applications for the SAFARI capsule will likely be high-risk settings in which objective ingestion confirmation is particularly valuable. “[This includes] tuberculosis, HIV, transplant immunosuppression or cardiovascular therapies, where missed doses can have serious clinical consequences,” he tells Physics World.

In vivo demonstration

To assess the degradation of the SAFARI capsule and its components in vitro, Traverso and colleagues placed the capsule into simulated gastric fluid at physiological temperature (37 °C). The RF shielding coating dissolved in 10–20 min, while the capsule and the zinc layer in the RFID tag disintegrated into pieces after one day.

Next, the team endoscopically delivered the SAFARI capsules into the stomachs of sedated pigs, chosen because they have a similarly sized GI tract to humans. Once in contact with gastric fluid in the stomach, the capsule coating swelled and then partially dissolved (as seen in endoscopic images), exposing the RFID tag. The researchers found that, in general, the tag and capsule parts disintegrated in the stomach within 24 h.

A panel antenna positioned 10 cm from the animal captured the tag data. Even with the RFID tags immersed in gastric fluid, the external receiver could effectively record signals in the frequency range of 900–925 MHz, with RSSI (received signal strength indicator) values ranging from 65 to 78 dB – demonstrating that the tag could effectively transmit RF signals from inside the stomach.

The researchers conclude that this successful use of SAFARI in swine indicates the potential for translation to clinical research. They note that the device should be safe for human ingestion as its composite materials meet established dietary and biomedical exposure limits, with levels of zinc and molybdenum orders of magnitude below those associated with toxicity.

“We have demonstrated robust performance and safety in large-animal models, which is an important translational milestone,” explains first author Mehmet Girayhan Say. “Before human studies, further work is needed on chronic exposure with characterization of any material accumulation upon repeated dosing, as well as user-centred integration of external readers to support real-world clinical workflows.”

Quantum state teleported between quantum dots at telecoms wavelengths

14 January 2026 at 17:00

Physicists at the University of Stuttgart, Germany, have teleported a quantum state between photons generated by two different semiconductor quantum dot light sources located several metres apart. Though the distance involved in this proof-of-principle “quantum repeater” experiment is small, members of the team describe the feat as a prerequisite for future long-distance quantum communications networks.

“Our result is particularly exciting because such a quantum Internet will encompass these types of distant quantum nodes and will require quantum states that are transmitted among these different nodes,” explains Tim Strobel, a PhD student at Stuttgart’s Institute of Semiconductor Optics and Functional Interfaces (IHFG) and the lead author of a paper describing the research. “It is therefore an important step in showing that remote sources can be effectively interfaced in this way in quantum teleportation experiments.”

In the Stuttgart study, one of the quantum dots generates a single photon while the other produces a pair of photons that are entangled – meaning that the quantum state of one photon is closely linked to the state of the other, no matter how far apart they are. One of the photons in the entangled pair then travels to the other quantum dot and interferes with the photon there. This process produces a superposition that allows the information encapsulated in the single photon to be transferred to the distant “partner” photon from the pair.
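
A textbook sketch of the protocol makes this concrete. Writing the single photon’s polarization state as α|H⟩ + β|V⟩ and the entangled pair (photons 2 and 3) in the Bell state |Φ⁺⟩, the joint state can be regrouped in the Bell basis of photons 1 and 2 (this is the generic teleportation algebra, not the specific encoding or detection scheme used in the Stuttgart experiment):

$$
\big(\alpha|H\rangle + \beta|V\rangle\big)_1 \otimes |\Phi^{+}\rangle_{23}
= \tfrac{1}{2}\Big[\,|\Phi^{+}\rangle_{12}\big(\alpha|H\rangle+\beta|V\rangle\big)_3
+ |\Phi^{-}\rangle_{12}\big(\alpha|H\rangle-\beta|V\rangle\big)_3
+ |\Psi^{+}\rangle_{12}\big(\alpha|V\rangle+\beta|H\rangle\big)_3
+ |\Psi^{-}\rangle_{12}\big(\alpha|V\rangle-\beta|H\rangle\big)_3\Big]
$$

A joint Bell-state measurement on photons 1 and 2 then projects photon 3 into the input state, up to a known polarization rotation – which is why interference between the two photons, and hence their indistinguishability, is essential.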

Quantum frequency converters

Strobel says the most challenging part of the experiment was making photons from two remote quantum dots interfere with each other. Such interference is only possible if the two particles are indistinguishable, meaning they must be similar in every regard, be it in their temporal shape, spatial shape or wavelength. In contrast, each quantum dot is unique, especially in terms of its spectral properties, and each one emits photons at slightly different wavelengths.

To close the gap, the team used devices called quantum frequency converters to precisely tune the wavelength of the photons and match them spectrally. The researchers also used the converters to shift the original wavelengths of the photons emitted from the quantum dots (around 780 nm) to a wavelength commonly used in telecommunications (1515 nm) without altering the quantum state of the photons. This offers further advantages: “Being at telecommunication wavelengths makes the technology compatible with the existing global optical fibre network, an important step towards real-life applications,” Strobel tells Physics World.

Proof-of-principle experiment

In this work, the quantum dots were separated by an optical fibre just 10 m in length. However, the researchers aim to push this to considerably greater distances in the future. Strobel notes that the Stuttgart study was published in Nature Communications back-to-back with an independent study carried out by researchers led by Rinaldo Trotta of Sapienza University in Rome, Italy. The Rome-based group demonstrated quantum state teleportation across the Sapienza University campus at shorter wavelengths, enabled by the brightness of their quantum-dot source.

“These two papers that we published independently strengthen the measurement outcomes, demonstrating the maturity of quantum dot light sources in this domain,” Strobel says. Semiconducting quantum dots are particularly attractive for this application, he adds, because as well as producing both single and entangled photons on demand, they are also compatible with other semiconductor technologies.

Fundamental research pays off

Simone Luca Portalupi, who leads the quantum optics group at IHFG, notes that “several years of fundamental research and semiconductor technology are converging into these quantum teleportation experiments”. For Peter Michler, who led the study team, the next step is to leverage these advances to bring quantum-dot-based teleportation technology out of a controlled laboratory environment and into the real world.

Strobel points out that there is already some precedent for this, as one of the group’s previous studies showed that they could maintain photon entanglement across a 36-km fibre link deployed across the city of Stuttgart. “The natural next step would be to show that we can teleport the state of a photon across this deployed fibre link,” he says. “Our results will stimulate us to improve each building block of the experiment, from the sample to the setup.”

Quantum metrology at NPL: we explore the challenges and opportunities

14 January 2026 at 15:02

This episode of the Physics World Weekly podcast features a conversation with Tim Prior and John Devaney of the National Physical Laboratory (NPL), which is the UK’s national metrology institute.

Prior is NPL’s quantum programme manager and Devaney is its quantum standards manager. They talk about NPL’s central role in the recent launch of NMI-Q, which brings together some of the world’s leading national metrology institutes to accelerate the development and adoption of quantum technologies.

Prior and Devaney describe the challenges and opportunities of developing metrology and standards for rapidly evolving technologies including quantum sensors, quantum computing and quantum cryptography. They talk about the importance of NPL’s collaborations with industry and academia and explore the diverse career opportunities for physicists at NPL. Prior and Devaney also talk about their own careers and share their enthusiasm for working in the cutting-edge and fast-paced field of quantum metrology.

This podcast is sponsored by the National Physical Laboratory.

Further reading

Why quantum metrology is the driving force for best practice in quantum standardization

Performance metrics and benchmarks point the way to practical quantum advantage

End note: NPL retains copyright on this article.

Mapping electron phases in nanotube arrays

14 January 2026 at 13:56

The carbon nanotube arrays studied in this work are designed to investigate the behaviour of electrons in low‑dimensional systems. By arranging well‑aligned 1D nanotubes into a 2D film, the researchers create a coupled‑wire structure that allows them to study how electrons move and interact as the system transitions between different dimensionalities. Using a gate electrode positioned on top of the array, the researchers were able to tune both the carrier density (the number of electrons and holes per unit area) and the strength of electron–electron interactions, enabling controlled access to three distinct regimes. The nanotubes can behave as weakly coupled 1D channels, in which electrons move along each nanotube; as a 2D Fermi liquid, in which electrons move between nanotubes and the film behaves like a conventional metal; or, at low carrier densities, as a set of quantum‑dot‑like islands showing Coulomb blockade, in which sections of the nanotubes become isolated.

The dimensional transitions are set by two key temperatures: T₂D, where electrons begin to hop between neighbouring nanotubes, and T₁D, where the system behaves as a Luttinger liquid – a 1D state in which electrons cannot easily pass each other and therefore move in a strongly correlated, collective way. Changing the number of holes in the nanotubes changes how strongly the tubes interact with each other. This controls when the system stops acting like separate 1D wires and when strong interactions make parts of the film break up into isolated regions that show Coulomb blockade.

The researchers built a phase diagram by looking at how the conductance changes with temperature and voltage, and by checking how well it follows power‑law behaviour at different energy ranges. This approach allows them to identify the boundaries between Tomonaga–Luttinger liquid, Fermi liquid and Coulomb blockade phases across a wide range of gate voltages and temperatures.
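
The power-law check itself is conceptually simple: in the Tomonaga–Luttinger liquid regime the conductance is expected to follow G ∝ T^α, so a straight-line fit in log–log space over each temperature window yields the exponent and a measure of how well the power law holds. The sketch below is a generic illustration of that idea using synthetic data, not the group’s actual analysis code.

```python
import numpy as np

def power_law_exponent(T, G):
    """Fit G ~ T**alpha over a temperature window by linear regression in log-log space.
    Returns the exponent alpha and the goodness of fit (R^2)."""
    x, y = np.log(T), np.log(G)
    alpha, intercept = np.polyfit(x, y, 1)
    y_fit = alpha * x + intercept
    r2 = 1 - np.sum((y - y_fit)**2) / np.sum((y - np.mean(y))**2)
    return alpha, r2

# Synthetic example: conductance following G ~ T^0.4 (Luttinger-liquid-like) with noise
rng = np.random.default_rng(0)
T = np.linspace(2, 30, 40)                          # temperature in kelvin
G = 1e-6 * T**0.4 * rng.normal(1.0, 0.02, T.size)   # conductance in siemens
print(power_law_exponent(T, G))                     # alpha close to 0.4, R^2 close to 1
```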

Overall, the work demonstrates a continuous crossover between 2D, 1D and 0D electronic behaviour in a controllable nanotube array. This provides an experimentally accessible platform for studying correlated low‑dimensional physics and offers insights relevant to the development of nanoscale electronic devices and future carbon nanotube technologies.

Read the full article

Dimensionality and correlation effects in coupled carbon nanotube arrays

Xiaosong Deng et al 2025 Rep. Prog. Phys. 88 088001

Do you want to learn more about this topic?

Structural approach to charge density waves in low-dimensional systems: electronic instability and chemical bonding Jean-Paul Pouget and Enric Canadell (2024)

CMS spots hints of a new form of top‑quark matter

14 January 2026 at 13:54

The CMS Collaboration investigated in detail events in which a top quark and an anti‑top quark are produced together in high‑energy proton–proton collisions at √s = 13 TeV, using the full 138 fb⁻¹ dataset collected between 2016 and 2018. The top quark is the heaviest fundamental particle and decays almost immediately after being produced in high-energy collisions. As a consequence, the formation of a bound top–antitop state was long considered highly unlikely and had never been observed. The anti-top quark has the same mass and lifetime as the top quark but opposite charges. When a top quark and an anti-top quark are produced together, they form a top–antitop pair (tt̄).

Focusing on events with two charged leptons (two electrons, two muons, or one electron and one muon, produced in the decays of the top and anti-top quarks) and multiple jets (sprays of particles associated with top-quark decay), the analysis examines the invariant mass of the top–antitop system along with two angular observables that directly probe how the spins of the top and anti‑top quarks are correlated. These measurements allow the team to compare the data with the prediction for non-resonant tt̄ production based on fixed-order perturbative quantum chromodynamics (QCD), which is what physicists normally use to calculate how quarks behave according to the Standard Model of particle physics.

Near the kinematic threshold where the top–antitop pair is produced, CMS observes a significant excess of events relative to the QCD prediction. The number of extra events can be translated into a production rate: using a simplified model based on non‑relativistic QCD, the team estimates that this excess corresponds to a cross section of about 8.8 picobarns, with an uncertainty of roughly +1.2/–1.4 picobarns. The pattern of the excess, including its spin‑correlation features, is consistent with the production of a colour-singlet pseudoscalar (a top–antitop pair in the 1S₀ state, i.e. the simplest, lowest-energy configuration), and therefore with the prediction of non-relativistic QCD near the tt̄ threshold. The statistical significance of the excess exceeds five standard deviations, indicating that the effect is unlikely to be a statistical fluctuation. Researchers want to find a toponium‑like state because it would reveal how the strongest force in nature behaves at the highest energies, test key theories of heavy‑quark physics, and potentially expose new physics beyond the Standard Model.
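
For context, a five-standard-deviation excess corresponds to a one-sided probability of roughly

$$
p = 1 - \Phi(5) \approx 2.9\times10^{-7}
$$

that the background alone would fluctuate up this far (Φ being the standard normal cumulative distribution) – the conventional threshold for claiming an observation in particle physics.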

The researchers emphasise that modelling the tt̄ threshold region is theoretically challenging, and that alternative explanations remain possible. Nonetheless, the result aligns with long‑standing predictions from non‑relativistic QCD that heavy quarks could form short‑lived bound states near threshold. The analysis also showcases spin correlation as an effective means to discover and characterise such effects, which were previously considered to be beyond the reach of experimental capabilities. Starting with the confirmation by the ATLAS Collaboration last July, this observation has sparked and continues to inspire follow-up theoretical and experimental work, opening up a new field of study involving bound states of heavy quarks and providing new insight into the behaviour of the strong force at high energies.

Read the full article

Observation of a pseudoscalar excess at the top quark pair production threshold

The CMS Collaboration 2025 Rep. Prog. Phys. 88 087801

Do you want to learn more about this topic?

The sea of quarks and antiquarks in the nucleon D F Geesaman and P E Reimer (2019)

Photonics West explores the future of optical technologies

14 January 2026 at 13:00

The 2026 SPIE Photonics West meeting takes place in San Francisco, California, from 17 to 22 January. The premier event for photonics research and technology, Photonics West incorporates more than 100 technical conferences covering topics including lasers, biomedical optics, optoelectronics, quantum technologies and more.

As well as the conferences, Photonics West also offers 60 technical courses and a new Career Hub with a co-located job fair. There are also five world-class exhibitions featuring over 1500 companies and incorporating industry-focused presentations, product launches and live demonstrations. The first of these is the BiOS Expo, which begins on 17 January and examines the latest breakthroughs in biomedical optics and biophotonics technologies.

Then starting on 20 January, the main Photonics West Exhibition will host more than 1200 companies and showcase the latest innovative optics and photonics devices, components, systems and services. Alongside, the Quantum West Expo features the best in quantum-enabling technology advances, the AR | VR | MR Expo brings together leading companies in XR hardware and systems, and – new for 2026 – the Vision Tech Expo highlights cutting-edge vision, sensing and imaging technologies.

Here are some of the product innovations on show at this year’s event.

Enabling high-performance photonics assembly with SmarAct

As photonics applications increasingly require systems with high complexity and integration density, manufacturers face a common challenge: how to assemble, align and test optical components with nanometre precision – quickly, reliably and at scale. At Photonics West, SmarAct presents a comprehensive technology portfolio addressing exactly these demands, spanning optical assembly, fast photonics alignment, precision motion and advanced metrology.

Rapid and reliable SmarAct’s technology portfolio enables assembly, alignment and testing of optical components with nanometre precision. (Courtesy: SmarAct)

A central highlight is SmarAct’s Optical Assembly Solution, presented together with a preview of a powerful new software platform planned for release in late-Q1 2026. This software tool is designed to provide exceptional flexibility for implementing automation routines and process workflows into user-specific control applications, laying the foundation for scalable and future-proof photonics solutions.

For high-throughput applications, SmarAct showcases its Fast Photonics Alignment capabilities. By combining high-dynamic motion systems with real-time feedback and controller-based algorithms, SmarAct enables rapid scanning and active alignment of photonic integrated circuits (PICs) and optical components such as fibres, fibre array units, lenses, beam splitters and more. These solutions significantly reduce alignment time while maintaining sub-micrometre accuracy, making them ideal for demanding photonics packaging and assembly tasks.

Both the Optical Assembly Solution and Fast Photonics Alignment are powered by SmarAct’s electromagnetic (EM) positioning axes, which form the dynamic backbone of these systems. The direct-drive EM axes combine high speed, high force and exceptional long-term durability, enabling fast scanning, smooth motion and stable positioning even under demanding duty cycles. Their vibration-free operation and robustness make them ideally suited for high-throughput optical assembly and alignment tasks in both laboratory and industrial environments.

Precision feedback is provided by SmarAct’s advanced METIRIO optical encoder family, designed to deliver high-resolution position feedback for demanding photonics and semiconductor applications. The METIRIO stands out by offering sub-nanometre position feedback in an exceptionally compact and easy-to-integrate form factor. Compatible with linear, rotary and goniometric motion systems – and available in vacuum-compatible designs – the METIRIO is ideally suited for space-constrained photonics setups, semiconductor manufacturing, nanopositioning and scientific instrumentation.

For applications requiring ultimate measurement performance, SmarAct presents the PICOSCALE Interferometer and Vibrometer. These systems provide picometre-level displacement and vibration measurements directly at the point of interest, enabling precise motion tracking, dynamic alignment, and detailed characterization of optical and optoelectronic components. When combined with SmarAct’s precision stages, they form a powerful closed-loop solution for high-yield photonics testing and inspection.

Together, SmarAct’s motion, metrology and automation solutions form a unified platform for next-generation photonics assembly and alignment.

  • Visit SmarAct at booth #3438 at Photonics West and booth #8438 at BiOS to discover how these technologies can accelerate your photonics workflows.

Avantes previews AvaSoftX software platform and new broadband light source

Photonics West 2026 will see Avantes present the first live demonstration of its completely redesigned software platform, AvaSoftX, together with a sneak peek of its new broadband light source, the AvaLight-DH-BAL. The company will also run a series of application-focused live demonstrations, highlighting recent developments in laser-induced breakdown spectroscopy (LIBS), thin-film characterization and biomedical spectroscopy.

AvaSoftX is developed to streamline the path from raw spectra to usable results. The new software platform offers preloaded applications tailored to specific measurement techniques and types, such as irradiance, LIBS, chemometry and Raman. Each application presents the controls and visualizations needed for that workflow, reducing time and the risk of user error.

Streamlined solution The new AvaSoftX software platform offers next-generation control and data handling. (Courtesy: Avantes)

Smart wizards guide users step-by-step through the setup of a measurement – from instrument configuration and referencing to data acquisition and evaluation. For more advanced users, AvaSoftX supports customization with scripting and user-defined libraries, enabling the creation of reusable methods and application-specific data handling. The platform also includes integrated instruction videos and online manuals to support the users directly on the platform.

The software features an accessible dark interface optimized for extended use in laboratory and production environments. Improved LIBS functionality will be highlighted through a live demonstration that combines AvaSoftX with the latest Avantes spectrometers and light sources.

Also making its public debut is the AvaLight-DH-BAL, a new and improved deuterium–halogen broadband light source designed to replace the current DH product line. The system delivers continuous broadband output from 215 to 2500 nm and combines a more powerful halogen lamp with a reworked deuterium section for improved optical performance and stability.

A switchable deuterium and halogen optical path is combined with deuterium peak suppression to improve dynamic range and spectral balance. The source is built into a newly developed, more robust housing to improve mechanical and thermal stability. Updated electronics support adjustable halogen output, a built-in filter holder, and both front-panel and remote-controlled shutter operation.

The AvaLight-DH-BAL is intended for applications requiring stable, high-output broadband illumination, including UV–VIS–NIR absorbance spectroscopy, materials research and thin-film analysis. The official launch date for the light source, as well as the software, will be shared in the near future.

Avantes will also run a series of live application demonstrations. These include a LIBS setup for rapid elemental analysis, a thin-film measurement system for optical coating characterization, and a biomedical spectroscopy demonstration focusing on real-time measurement and analysis. Each demo will be operated using the latest Avantes hardware and controlled through AvaSoftX, allowing visitors to assess overall system performance and workflow integration. Avantes’ engineering team will be available throughout the event.

  • For product previews, live demonstrations and more, meet Avantes at booth #1157.

HydraHarp 500: high-performance time tagger redefines precision and scalability

One year after its successful market introduction, the HydraHarp 500 continues to be a standout highlight at PicoQuant’s booth at Photonics West. Designed to meet the growing demands of advanced photonics and quantum optics, the HydraHarp 500 sets benchmarks in timing performance, scalability and flexible interfacing.

At its core, the HydraHarp 500 delivers exceptional timing precision combined with ultrashort jitter and dead time, enabling reliable photon timing measurements even at very high count rates. With support for up to 16 fully independent input channels plus a common sync channel, the system allows true simultaneous multichannel data acquisition without cross-channel dead time, making it ideal for complex correlation experiments and high-throughput applications.

At the forefront of photon timing The high-resolution multichannel time tagger HydraHarp 500 offers picosecond timing precision. It combines versatile trigger methods with multiple interfaces, making it ideally suited for demanding applications that require many input channels and high data throughput. (Courtesy: PicoQuant)

A key strength of the HydraHarp 500 is its high flexibility in detector integration. Multiple trigger methods support a wide range of detector technologies, from single-photon avalanche diodes (SPADs) to superconducting nanowire single-photon detectors (SNSPDs). Versatile interfaces, including USB 3.0 and a dedicated FPGA interface, ensure seamless data transfer and easy integration into existing experimental setups. For distributed and synchronized systems, White Rabbit compatibility enables precise cross-device timing coordination.

Engineered for speed and efficiency, the HydraHarp 500 combines ultrashort per-channel dead time with industry-leading timing performance, ensuring complete datasets and excellent statistical accuracy even under demanding experimental conditions.

Looking ahead, PicoQuant is preparing to expand the HydraHarp family with the upcoming HydraHarp 500 L. This new variant will set new standards for data throughput and scalability. With outstanding timing resolution, excellent timing precision and up to 64 flexible channels, the HydraHarp 500 L is engineered for highest-throughput applications powered – for the first time – by USB 3.2 Gen 2×2, making it ideal for rapid, large-volume data acquisition.

With the HydraHarp 500 and the forthcoming HydraHarp 500 L, PicoQuant continues to redefine what is possible in photon timing, delivering precision, scalability and flexibility for today’s and tomorrow’s photonics research. For more information, visit www.picoquant.com or contact us at info@picoquant.com.

  • Meet PicoQuant at BiOS booth #8511 and Photonics West booth #3511.

 

Mission to Mars: from biological barriers to ethical impediments

14 January 2026 at 12:00

“It’s hard to say when exactly sending people to Mars became a goal for humanity,” ponders author Scott Solomon in his new book Becoming Martian: How Living in Space Will Change Our Bodies and Minds – and I think we’d all agree. Ten years ago, I’m not sure any of us thought even returning to the Moon was seriously on the cards. Yet here we are, suddenly living in a second space age, where the first people to purchase one-way tickets to the Red Planet have likely already been born.

The technology required to ship humans to Mars, and the infrastructure required to keep them alive, is well constrained, at least in theory. One could write thousands of words discussing the technical details of reusable rocket boosters and underground architectures. However, Becoming Martian is not that book. Instead, it deals with the effect Martian life will have on the human body – both in the short term across a single lifetime; and in the long term, on evolutionary timescales.

This book’s strength lies in its authorship: it is not written by a physicist enthralled by the engineering challenge of Mars, nor by an astronomer predisposed to romanticizing space exploration. Instead, Solomon is a research biologist who teaches ecology, evolutionary biology and scientific communication at Rice University in Houston, Texas.

Becoming Martian starts with a whirlwind, stripped-down tour of Mars across mythology, astronomy, culture and modern exploration. This effectively sets out the core issue: Mars is fundamentally different from Earth, and life there is going to be very difficult. Solomon goes on to describe the effects of space travel and microgravity on humans that we know of so far: anaemia, muscle wastage, bone density loss and increased radiation exposure, to name just a few.

Where the book really excels, though, is when Solomon uses his understanding of evolutionary processes to extend these findings and conclude how Martian life would be different. For example, childbirth becomes a very risky business on a planet with about one-third of Earth’s gravity. The loss of bone density translates into increased pelvic fractures, and the muscle wastage into an inability for the uterus to contract strongly enough. The result? All Martian births will likely need to be C-sections.

Solomon applies his expertise to the whole human body, including our “entourage” of micro-organisms. The indoor life of a Martian is likely to affect the immune system to the degree that contact with an Earthling would be immensely risky. “More than any other factor, the risk of disease transmission may be the wedge that drives the separation between people on the two planets,” he writes. “It will, perhaps inevitably, cause the people on Mars to truly become Martians.” Since many diseases are harboured or spread by animals, there is a compelling argument that Martians would be vegan and – a dealbreaker for some I imagine – unable to have any pets. So no dogs, no cats, no steak and chips on Mars.

Let’s get physical

The most fascinating part of the book for me is how Solomon repeatedly links the biological and psychological research with the more technical aspects of designing a mission to Mars. For example, the first exploratory teams should have odd numbers, to make decisions easier and us-versus-them rifts less likely. The first colonies will also need to number between 10,000 and 11,000 individuals to ensure enough genetic diversity to guard against evolutionary hazards such as genetic drift and population crashes.

Amusingly, the one part of human activity most important for a sustainable colony – procreation – is the most understudied. When a NASA scientist suggested that a colony would need private spaces with soundproof walls, the backlash was so severe that NASA had to reassure Congress that taxpayer dollars were not being “wasted” encouraging sexual activity among astronauts.

Solomon’s writing is concise yet extraordinarily thorough – there is always just enough for you to feel you can understand the importance and nuance of topics ranging from Apollo-era health studies to evolution, and from AI to genetic engineering. The book is impeccably researched, and he presents conflicting ethical viewpoints so deftly, and without apparent judgement, that you are left plenty of space to imprint your own opinions. So much so that when Solomon shares his own stance on the colonization of Mars in the epilogue, it comes as a bit of a surprise.

In essence, this book lays out a convincing argument that it might be our biology, not our technology, that limits humanity’s expansion to Mars. And if we are able to overcome those limitations, either with purposeful genetic engineering or passive evolutionary change, this could mean we have shed our humanity.

Becoming Martian is one of the best popular-science books I have read within the field, and it is an uplifting read, despite dealing with some of the heaviest ethical questions in space sciences. Whether you’re planning your future as a Martian or just wondering if humans can have sex in space, this book should be on your wish list.

  • February 2026 MIT Press 264pp £27hb

The post Mission to Mars: from biological barriers to ethical impediments appeared first on Physics World.

Solar storms could be forecast by monitoring cosmic rays

14 janvier 2026 à 09:33

Using incidental data collected by the BepiColombo mission, an international research team has made the first detailed measurements of how coronal mass ejections (CMEs) reduce cosmic-ray intensity at varying distances from the Sun. Led by Gaku Kinoshita at the University of Tokyo, the team hopes that their approach could help improve the accuracy of space weather forecasts following CMEs.

CMEs are dramatic bursts of plasma originating from the Sun’s outer atmosphere. In particularly violent events, this plasma can travel through interplanetary space, sometimes interacting with Earth’s magnetic field to produce powerful geomagnetic storms. These storms result in vivid aurorae in Earth’s polar regions and can also damage electronics on satellites and spacecraft. Extreme storms can even affect electrical grids on Earth.

To prevent such damage, astronomers aim to predict the path and intensity of CME plasma as accurately as possible – allowing endangered systems to be temporarily shut down with minimal disruption. According to Kinoshita’s team, one source of information has so far been largely unexplored.

Pushing back cosmic rays

Within interplanetary space, CME plasma interacts with cosmic rays, which are energetic charged particles of extrasolar origin that permeate the solar system with a roughly steady flux. When an interplanetary CME (ICME) passes by, it temporarily pushes back these cosmic rays, creating a local decrease in their intensity.

“This phenomenon is known as the Forbush decrease effect,” Kinoshita explains. “It can be detected even with relatively simple particle detectors, and reflects the properties and structure of the passing ICME.”

In principle, cosmic-ray observations can provide detailed insights into the physical profile of a passing ICME. But despite their relative ease of detection, Forbush decreases had not yet been observed simultaneously by detectors at multiple distances from the Sun, leaving astronomers unclear on how propagation distance affects their severity.

Now, Kinoshita’s team have explored this spatial relationship using BepiColombo, a European and Japanese mission that will begin orbiting Mercury in November 2026. While the mission focuses on Mercury’s surface, interior, and magnetosphere, it also carries non-scientific equipment capable of monitoring cosmic rays and solar plasma in its surrounding environment.

“Such radiation monitoring instruments are commonly installed on many spacecraft for engineering purposes,” Kinoshita explains. “We developed a method to observe Forbush decreases using a non-scientific radiation monitor onboard BepiColombo.”

Multiple missions

The team combined these measurements with data from specialized radiation-monitoring missions, including ESA’s Solar Orbiter, which is currently probing the inner heliosphere from inside Mercury’s orbit, as well as a network of near-Earth spacecraft. Together, these instruments allowed the researchers to build a detailed, distance-dependent profile of a week-long ICME that occurred in March 2022.

Just as predicted, the measurements revealed a clear relationship between the Forbush decrease effect and distance from the Sun.

“As the ICME evolved, the depth and gradient of its associated cosmic-ray decrease changed accordingly,” Kinoshita says.

With this method now established, the team hopes it can be applied to non-scientific radiation monitors on other missions throughout the solar system, enabling a more complete picture of the distance dependence of ICME effects.

“An improved understanding of ICME propagation processes could contribute to better forecasting of disturbances such as geomagnetic storms, leading to further advances in space weather prediction,” Kinoshita says. In particular, this approach could help astronomers model the paths and intensities of solar plasma as soon as a CME erupts, improving preparedness for potentially damaging events.

The research is described in The Astrophysical Journal.

The post Solar storms could be forecast by monitoring cosmic rays appeared first on Physics World.

CERN team solves decades-old mystery of light nuclei formation

13 janvier 2026 à 15:00

When particle colliders smash particles into each other, the resulting debris cloud sometimes contains a puzzling ingredient: light atomic nuclei. Such nuclei have relatively low binding energies, and they would normally break down at temperatures far below those found in high-energy collisions. Somehow, though, their signature remains. This mystery has stumped physicists for decades, but researchers in the ALICE collaboration at CERN have now figured it out. Their experiments showed that light nuclei form via a process called resonance-decay formation – a result that could pave the way towards searches for physics beyond the Standard Model.

Baryon resonance

The ALICE team studied deuterons (a bound proton and neutron) and antideuterons (a bound antiproton and antineutron) that form in experiments at CERN’s Large Hadron Collider. Both deuterons and antideuterons are fragile, and their binding energies of 2.2 MeV would seemingly make it hard for them to form in collisions with energies that can exceed 100 MeV – 100 000 times hotter than the centre of the Sun.

The collaboration found that roughly 90% of the deuterons seen after such collisions form in a three-phase process. In the first phase, an initial collision creates a so-called baryon resonance, which is an excited state of a particle made of three quarks (such as a proton or neutron). This particle is called a Δ baryon and is highly unstable, so it rapidly decays into a pion and a nucleon (a proton or a neutron) during the second phase of the process. Then, in the third (and, crucially, much later) phase, the nucleon cools down to a point where its energy properties allow it to bind with another nucleon to form a deuteron.

Smoking gun

Measuring such a complex process is not easy, especially as everything happens on a length scale of femtometres (10⁻¹⁵ m). To tease out the details, the collaboration performed precision measurements to correlate the momenta of the pions and deuterons. When they analysed the momentum difference between these particle pairs, they observed a peak in the data corresponding to the mass of the Δ baryon. This peak shows that the pion and the deuteron are kinematically linked because they share a common ancestor: the pion came from the same Δ decay that provided one of the deuteron’s nucleons.

Panos Christakoglou, a member of the ALICE collaboration based at Maastricht University in the Netherlands, says the experiment is special because, in contrast to most previous attempts – where results were interpreted in the light of models or phenomenological assumptions – this technique is model-independent. He adds that the results of this study could be used to improve models of high-energy proton–proton collisions in which light nuclei (and perhaps hadrons more generally) are formed. Other possibilities include improving our interpretations of cosmic-ray studies that measure the fluxes of (anti)nuclei in the galaxy – a useful probe of astrophysical processes.

The hunt is on

Intriguingly, Christakoglou suggests that the team’s technique could also be used to search for indirect signs of dark matter. Many models predict that dark-matter candidates such as Weakly Interacting Massive Particles (WIMPs) will decay or annihilate in processes that also produce Standard Model particles, including (anti)deuterons. “If for example one measures the flux of (anti)nuclei in cosmic rays being above the ‘Standard Model based’ astrophysical background, then this excess could be attributed to new physics which might be connected to dark matter,” Christakoglou tells Physics World.

Michael Kachelriess, a physicist at the Norwegian University of Science and Technology in Trondheim, Norway, who was not involved in this research, says the debate over the correct formation mechanism for light nuclei (and antinuclei) has divided particle physicists for a long time. In his view, the data collected by the ALICE collaboration decisively resolves this debate by showing that light nuclei form in the late stages of a collision via the coalescence of nucleons. Kachelriess calls this a “great achievement” in itself, and adds that similar approaches could make it possible to address other questions, such as whether thermal plasmas form in proton-proton collisions as well as in collisions between heavy ions.

The post CERN team solves decades-old mystery of light nuclei formation appeared first on Physics World.

Anyon physics could explain coexistence of superconductivity and magnetism

13 janvier 2026 à 09:45

New calculations by physicists in the US provide deeper insights into an exotic material in which superconductivity and magnetism can coexist. Using a specialized effective field theory, Zhengyan Shi and Todadri Senthil at the Massachusetts Institute of Technology show how this coexistence can emerge from the collective states of mobile anyons in certain 2D materials.

An anyon is a quasiparticle with statistical properties that lie somewhere between those of bosons and fermions. First observed in 2D electron gases in strong magnetic fields, anyons are known for their fractional electrical charge and fractional exchange statistics, which alter the quantum state of a pair of identical anyons when the two are exchanged.

Unlike ordinary electrons, anyons produced in these early experiments could not move freely, preventing them from forming complex collective states. Yet in 2023, experiments with a twisted bilayer of molybdenum ditelluride provided the first evidence for mobile anyons through observations of fractional quantum anomalous Hall (FQAH) insulators. This effect appears as fractionally quantized electrical resistance in 2D electron systems at zero applied magnetic field.

Remarkably, these experiments revealed that molybdenum ditelluride can exhibit superconductivity and magnetism at the same time. Since superconductivity usually relies on electron pairing that can be disrupted by magnetism, this coexistence was previously thought impossible.

Anyonic quantum matter

“This then raises a new set of theoretical questions,” explains Shi. “What happens when a large number of mobile anyons are assembled together? What kind of novel ‘anyonic quantum matter’ can emerge?”

In their study, Shi and Senthil explored these questions using a new effective field theory for an FQAH insulator. Effective field theories are widely used in physics to approximate complex phenomena without modelling every microscopic detail. In this case, the duo’s model captured the competition between anyon mobility, interactions, and fractional exchange statistics in a many-body system of mobile anyons.

To test their model, the researchers considered the doping of an FQAH insulator – adding mobile anyons beyond the plateau in Hall resistance, where the existing anyons were effectively locked in place. This allowed the quasiparticles to move freely and form new collective phases.

“Crucially, we recognized that the fate of the doped state depends on the energetic hierarchy of different types of anyons,” Shi explains. “This observation allowed us to develop a powerful heuristic for predicting whether the doped state becomes a superconductor without any detailed calculations.”

In their model, Shi and Senthil focused on a specific FQAH insulator called a Jain state, which hosts two types of anyon excitations. One type carries an electrical charge of 1/3 of the electron charge, the other 2/3. In a perfectly clean system, doping the insulator with 2/3-charge anyons produced a chiral topological superconductor, a phase that is robust against disorder and features edge currents flowing in only one direction. In contrast, doping with 1/3-charge anyons produced a metal with broken translation symmetry – still conducting, but with non-uniform patterns in its electron density.

Anomalous vortex glass

“In the presence of impurities, we showed that the chiral superconductor near the superconductor–insulator transition is a novel phase of matter dubbed the ‘anomalous vortex glass’, in which patches of swirling supercurrents are sprinkled randomly across the sample,” Shi describes. “Observing this vortex glass phase would be smoking-gun evidence for the anyonic mechanism for superconductivity.”

The results suggest that even when adding the simplest kind of anyons – like those in the Jain state – the collective behaviour of these quasiparticles can enable the coexistence of magnetism and superconductivity. In future studies, the duo hopes that more advanced methods for introducing mobile anyons could reveal even more exotic phases.

“Remarkably, our theory provides a qualitative account of the phase diagram of a particular 2D material (twisted molybdenum ditelluride), although many more tests are needed to rule out other possible explanations,” Shi says. “Overall, these findings highlight the vast potential of anyonic quantum matter, suggesting a fertile ground for future discoveries.”

The research is described in PNAS.

The post Anyon physics could explain coexistence of superconductivity and magnetism appeared first on Physics World.

Can entrepreneurship be taught? An engineer’s viewpoint

12 janvier 2026 à 12:00

I am intrigued by entrepreneurship. Is it something we all innately possess – or can entrepreneurship be taught to anyone (myself included) for whom it doesn’t come naturally? Could we all – with enough time, training and support – become the next Jeff Bezos, Richard Branson or Martha Lane Fox?

In my professional life as an engineer in industry, we often talk about the importance of invention and innovation. Without them, products will become dated and firms will lose their competitive edge. However, inventions don’t necessarily sell themselves, which is where entrepreneurs have a key influence.

So what’s the difference between inventors, innovators and entrepreneurs? An inventor, to me, is someone who creates a new process, application or machine. An innovator is a person who introduces something new or does something for the first time. An entrepreneur, however, is someone who sets up a business or takes on a venture, embracing financial risks with the aim of profit.

Scientists and engineers are naturally good inventors and innovators. We like to solve problems, improve how we do things, and make the world more ordered and efficient. In fact, many of the greatest inventors and innovators of all time were scientists and engineers – think James Watt, George Stephenson and Frank Whittle.

But entrepreneurship requires different, additional qualities. Many entrepreneurs come from a variety of backgrounds – not just science and engineering – and tend to have finance in their blood. They embrace risk and have unlimited amounts of courage and business acumen – skills I’d need to pick up if I wanted to be an entrepreneur myself.

Risk and reward

Engineers are encouraged to take risks, exploring new technologies and designs; in fact, it’s critical for companies seeking to stay competitive. But we take risks in a calculated and professional manner that prioritizes safety, quality, regulations and ethics, and project success. We balance risk taking with risk management, spotting and assessing potential risks – and mitigating or removing them if they’re big.

Courage is not something I’ve always had professionally. Over time, I have learned to speak up if I feel I have something to say that’s important to the situation or contributes to our overall understanding. Still, there’s always a fear of saying something silly in front of other people or being unable to articulate a view adequately. But entrepreneurs have courage in their DNA.

So can entrepreneurship be taught? Specifically, can it be taught to people like me with a technical background – and, if so, how? Some of the most famous innovators, like Henry Ford, Thomas Edison, Steve Jobs, James Dyson and Benjamin Franklin, had scientific or engineering backgrounds, so is there a formula for making more people like them?

Skill sets and gaps

Let’s start by listing the skills that most engineers have that could be beneficial for entrepreneurship. In no particular order, these include:

  • problem-solving ability: essential for designing effective solutions and identifying market gaps;
  • innovative mindset: critical for building a successful business venture;
  • analytical thinking: engineers make decisions based on data and logic, which is vital for business planning and decision making;
  • persistence: a pre-requisite for delivering engineering projects and needed to overcome the challenges of starting a business;
  • technical expertise: a significant competitive advantage that lends credibility, especially relevant for tech start-ups.

However, there are mindset differences between engineers and entrepreneurs that any training would need to overcome. These include:

  • risk tolerance: engineers typically focus on improving reliability and reducing risk, whilst entrepreneurs are more comfortable with embracing greater uncertainty;
  • focus: engineers concentrate on delivering to requirements, whilst entrepreneurs focus on consumer needs and speed to market;
  • business acumen: a typical engineering education doesn’t cover essential business skills such as marketing, sales and finance, all of which are vital for running a company.

Such skills may not always come naturally to engineers and scientists, but they can be incorporated into our teaching and learning. Some great examples of how to do this were covered in Physics World last year. In addition, there is a growing number of UK universities offering science and engineering degrees combined with entrepreneurship.

The message is that whilst some scientists and engineers become entrepreneurs, not all do. Simply having a science or engineering background is no guarantee of becoming an entrepreneur, nor is it a requirement. Nevertheless, the problem-solving and technical skills developed by scientists and engineers are powerful assets that, when combined with business acumen and entrepreneurial drive, can lead to business success.

Of course, entrepreneurship may not suit everybody – and that’s perfectly fine. No-one should be forced to become an entrepreneur if they don’t want to. We all need to play to our core strengths and interests and build well-rounded teams with complementary skillsets – something that every successful business needs. But surely there’s a way of teaching entrepreneurship too?

The post Can entrepreneurship be taught? An engineer’s viewpoint appeared first on Physics World.

Shapiro steps spotted in ultracold bosonic and fermionic gases

12 janvier 2026 à 09:00

Shapiro steps – a series of abrupt jumps in the voltage–current characteristic of a Josephson junction that is exposed to microwave radiation – have been observed for the first time in ultracold gases by groups in Germany and Italy. Their work on atomic Josephson junctions provides new insights into the phenomenon, and could lead to a standard for chemical potential.

In 1962 Brian Josephson of the University of Cambridge calculated that, if two superconductors were separated by a thin insulating barrier, the phase difference between the wavefunctions on either side should induce quantum tunneling, leading to a current at zero potential difference.

A year later, Sidney Shapiro and colleagues at the consultants Arthur D. Little showed that inducing an alternating electric current using a microwave field causes the phase of the wavefunction on either side of a Josephson junction to evolve at different rates, leading to quantized increases in potential difference across the junction. The height of these “Shapiro steps” depends only on the applied frequency of the field and the electrical charge. This is now used as a reference standard for the volt.
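To put a number on that relation: the height of each step is ΔV = hf/2e, where h is the Planck constant, f is the microwave drive frequency and e is the elementary charge. The short Python sketch below evaluates this using the exact SI values of the constants; the 10 GHz drive frequency is just an illustrative choice.

# Height of one Shapiro step, Delta_V = h*f/(2e) – the relation behind the
# Josephson volt standard. Constants are the exact values fixed by the 2019 SI.
h = 6.62607015e-34   # Planck constant, J s
e = 1.602176634e-19  # elementary charge, C

def shapiro_step_height(frequency_hz):
    """Return the voltage height (in volts) of one Shapiro step."""
    return h * frequency_hz / (2 * e)

# An illustrative 10 GHz microwave drive gives steps of roughly 20.7 microvolts
print(f"{shapiro_step_height(10e9) * 1e6:.2f} uV per step")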

Researchers have subsequently developed analogues of Josephson junctions in other systems such as liquid helium and ultracold atomic gases. In the new work, two groups have independently observed Shapiro steps in ultracold quantum gases. Instead of placing a fixed insulator in the centre and driving the system with a field, the researchers used focused laser beams to create potential barriers that divided the traps into two. Then they moved the positions of the barriers to alter the potentials of the atoms on either side.

Current emulation

“If we move the atoms with a constant velocity, that means there’s a constant velocity of atoms through the barrier,” says Herwig Ott of RPTU University Kaiserslautern-Landau in Germany, who led one of the groups. “This is how we emulate a DC current. Now for the Shapiro protocol you have to apply an AC current, and the AC current you simply get by modulating your barrier in time.”

Ott and colleagues in Kaiserslautern, in collaboration with researchers in Hamburg and the United Arab Emirates (UAE), used a Bose–Einstein condensate (BEC) of rubidium-87 atoms. Meanwhile in Italy, Giulia Del Pace of the European Laboratory for Nonlinear Spectroscopy at the University of Florence and colleagues (including the same UAE collaborators) studied ultracold lithium-6 atoms, which are fermions.

Both groups observed the theoretically-predicted Shapiro steps, but Ott and Del Pace explain that these observations do not simply confirm predictions. “The message is that no matter what your microscopic mechanism is, the phenomenon of Shapiro steps is universal,” says Ott. In superconductors, the Shapiro steps are caused by the breaking of Cooper pairs; in ultracold atomic gases, vortex rings are created. Nevertheless, the same mathematics applies. “This is really quite remarkable,” says Ott.

Del Pace says it was unclear whether Shapiro steps would be seen in strongly-interacting fermions, which are “way more interacting than the electrons in superconductors”. She asks, “Is it a limitation to have strong interactions or is it something that actually helps the dynamics to happen? It turns out it’s the latter.”

Magnetic tuning

Del Pace’s group applied a variable magnetic field to tune their system between a BEC of molecules, a system dominated by Cooper pairs and a unitary Fermi gas in which the particles were as strongly interacting as permitted by quantum mechanics. The size of the Shapiro steps was dependent on the strength of the interparticle interaction.

Ott and Del Pace both suggest that this effect could be used to create a reference standard for chemical potential – a measure of the strength of the atomic interaction (or equation of state) in a system.

“This equation of state is very well known for a BEC or for a strongly interacting Fermi gas…but there is a range of interaction strengths where the equation of state is completely unknown, so one can imagine taking inspiration from the way Josephson junctions are used in superconductors and using atomic Josephson junctions to study the equation of state in systems where the equation of state is not known,” explains Del Pace.

The two papers are published side by side in Science: Del Pace and Ott.

Rocío Jáuregui Renaud of the Autonomous University of Mexico is impressed, especially by the demonstration in both bosons and fermions.  “The two papers are important, and they are congruent in their results, but the platform is different,” she says. “At this point, the idea is not to give more information directly about superconductivity, but to learn more about phenomena that sometimes you are not able to see in electronic systems but you would probably see in neutral atoms.”

The post Shapiro steps spotted in ultracold bosonic and fermionic gases appeared first on Physics World.

Watching how grasshoppers glide inspires new flying robot design

9 janvier 2026 à 15:43

While much insight has been gleaned from how grasshoppers hop, their gliding prowess has mostly been overlooked. Now researchers at Princeton University have studied how these gangly insects deploy and retract their wings to inspire a new approach to flying robots.

Insect-inspired robot designs are often based on bees and flies. They feature constant flapping motion, but that requires a lot of power, so the robots either carry heavy batteries or are tethered to a power supply.

Grasshoppers, however, can jump and glide as well as flap their wings. And while they are not the best gliders among insects, they have another trick: they can retract and unfurl their wings.

Grasshoppers have two sets of wings, the forewings and hindwings. The front wing is mainly used for protection and camouflage while the hindwing is used for flight. The hindwing is corrugated, which allows it to fold in neatly like an accordion.

A team of engineers, biologists and entomologists analysed the wings of the American grasshopper – also known as the bird grasshopper thanks to its superior flying skills. The researchers took CT scans of the insects and used the findings to 3D-print model wings. They attached these wings to small frames to create grasshopper-inspired gliders, finding that their performance was on a par with that of actual grasshoppers.

The team also tweaked certain wing features such as the shape, camber and corrugation, finding that a smooth wing produced gliding that was more efficient and repeatable than one with corrugations. “This showed us that these corrugations might have evolved for other reasons,” notes Princeton engineer Aimy Wissa, who adds that “very little” is known about how grasshoppers deploy their wings.

The researchers say that further work could result in new ways to extend the flight time for insect-sized robots without the need for heavy batteries or tethering. “This grasshopper research opens up new possibilities not only for flight, but also for multimodal locomotion,” adds Lee. “By combining biology with engineering, we’re able to build and ideate on something completely new.”

The post Watching how grasshoppers glide inspires new flying robot design appeared first on Physics World.

Cracking the limits of clocks: a new uncertainty relation for time itself

9 janvier 2026 à 13:00

What if a chemical reaction, ocean waves or even your heartbeat could all be used as clocks? That’s the starting point of a new study by Kacper Prech, Gabriel Landi and collaborators, who uncovered a fundamental, universal limit to how precisely time can be measured in noisy, fluctuating systems. Their discovery – the clock uncertainty relation (CUR) – doesn’t just refine existing theory, it reframes timekeeping as an information problem embedded in the dynamics of physical processes, from nanoscale biology to engineered devices.

The foundation of this work contains a simple but powerful reframing: anything that “clicks” regularly is a clock. In the research paper’s opening analogy, a castaway tries to cook a fish without a wristwatch. They could count bird calls, ocean waves, or heartbeats – each a potential timekeeper with different cadence and regularity. But questions remain: given real-world fluctuations, what’s the best way to estimate time, and what are the inescapable limits?

The authors answer both. They show for a huge class of systems – those described by classical, Markovian jump processes (systems where the future depends only on the present state, not the past history – a standard model across statistical physics and biophysics) – there is a tight achievable bound on timekeeping precision. The bound is controlled not by how often the system jumps on average (the traditional “dynamical activity”), but by a subtler quantity: the mean residual time, or the average time you’d wait for the next event if you start observing at a random moment. That distinction matters.

The inspection paradox
The graphic illustrates the mean residual time used in the CUR and how it connects to the so-called inspection paradox – a counterintuitive bias where randomly arriving observers are more likely to land in longer gaps between events. Buses arrive in clusters (gaps of 5 min) separated by long intervals (15 min), so while the average time between buses might seem moderate, a randomly arriving passenger (represented by the coloured figures) is statistically more likely to land in one of the long 15-min gaps than in a short 5-min one. The mean residual time is the average time a passenger waits for their bus if they arrive at the bus stop at a random time. Counterintuitively, this can be much longer than the average time between buses. The visual also demonstrates why the mean residual time captures more information than the simple average interval, since it accounts for the uneven distribution of gaps that biases your real waiting experience. (Courtesy: IOP Publishing)

The study introduces CUR, a universal, tight bound on timekeeping precision that – unlike earlier bounds – can be saturated and the researchers identify the exact observables that achieve this limit. Surprisingly, the optimal strategy for estimating time from a noisy process is remarkably simple: sum the expected waiting times of each observed state along the trajectory, rather than relying on complex fitting methods. The work also reveals that the true limiting factor for precision isn’t the traditional dynamical activity, but rather the inverse of the mean residual time. This makes the CUR provably tighter than the earlier kinetic uncertainty relation, especially in systems far from equilibrium.
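To see why the mean residual time, rather than the average interval, is the quantity that matters, consider the bus-stop example from the caption above. The short Python simulation below is a minimal sketch with illustrative numbers – a 50/50 mix of 5- and 15-minute gaps, not taken from the paper – comparing the naive expected wait with the wait actually experienced by a passenger arriving at a random time.

import random

random.seed(0)
gaps = [random.choice([5.0, 15.0]) for _ in range(100_000)]  # inter-arrival times, minutes
mean_gap = sum(gaps) / len(gaps)

# A passenger arriving at a random moment lands in a given gap with probability
# proportional to its length (length-biased sampling), then waits a uniform
# fraction of that gap – this is the inspection paradox in action.
sampled = random.choices(gaps, weights=gaps, k=100_000)
waits = [random.uniform(0.0, g) for g in sampled]
mean_residual = sum(waits) / len(waits)

print(f"mean gap between buses : {mean_gap:.2f} min")       # ~10 min
print(f"naive expected wait    : {mean_gap / 2:.2f} min")   # ~5 min
print(f"mean residual time     : {mean_residual:.2f} min")  # ~6.25 min

With these numbers the long gaps soak up a disproportionate share of random arrival times, so the measured mean residual time (about 6.25 minutes) exceeds the naive estimate of half the average gap (5 minutes) – exactly the bias that the CUR builds into its bound.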

The team also connects precision to two practical clock metrics: resolution (how often a clock ticks) and accuracy (how many ticks it makes before drifting by one tick). In other words, achieving steadier ticks comes at the cost of accepting fewer of them per unit of time.

This framework offers practical tools across several domains. It can serve as a diagnostic for detecting hidden states in complex biological or chemical systems: if measured event statistics violate the CUR, that signals the presence of hidden transitions or memory effects. For nanoscale and molecular clocks – like biomolecular oscillators (cellular circuits that produce rhythmic chemical signals) and molecular motors (protein machines that walk along cellular tracks) – the CUR sets fundamental performance limits and guides the design of optimal estimators. Finally, while this work focuses on classical systems, it establishes a benchmark for quantum clocks, pointing toward potential quantum advantages and opening new questions about what trade-offs emerge in the quantum regime.

Landi, an associate professor of theoretical quantum physics at the University of Rochester, emphasizes the conceptual shift: that clocks aren’t just pendulums and quartz crystals. “Anything is a clock,” he notes. The team’s framework “gives the recipe for constructing the best possible clock from whatever fluctuations you have,” and tells you “what the best noise-to-signal ratio” can be. In everyday terms, the Sun is accurate but low-resolution for cooking; ocean waves are higher resolution but noisier. The CUR puts that intuition on firm mathematical ground.

Looking forward, the group is exploring quantum generalizations and leveraging CUR-violations to infer hidden structure in biological data. A tantalizing foundational question lingers: can robust biological timekeeping emerge from many bad, noisy clocks, synchronizing into a good one?

Ultimately, this research doesn’t just sharpen a bound; it reframes timekeeping as a universal inference task grounded in the flow of events. Whether you’re a cell sensing a chemical signal, a molecular motor stepping along a track or an engineer building a nanoscale device, the message is clear: to tell time well, count cleverly – and respect the gaps.

The research is detailed in Physical Review X.

The post Cracking the limits of clocks: a new uncertainty relation for time itself appeared first on Physics World.

Bidirectional scattering microscope detects micro- and nanoscale structures simultaneously

9 janvier 2026 à 11:00

A new microscope that can simultaneously measure both forward- and backward-scattered light from a sample could allow researchers to image both micro- and nanoscale objects at the same time. The device could be used to observe structures as small as individual proteins, as well as the environment in which they move, say the researchers at the University of Tokyo who developed it.

“Our technique could help us link cell structures with the motion of tiny particles inside and outside cells,” explains Kohki Horie of the University of Tokyo’s department of physics, who led this research effort. “Because it is label-free, it is gentler on cells and better for long observations. In the future, it could help quantify cell states, holding potential for drug testing and quality checks in the biotechnology and pharmaceutical industries.”

Detecting forward and backward scattered light at the same time

The new device combines two powerful imaging techniques routinely employed in biomedical applications: quantitative phase microscopy (QPM) and interferometric scattering (iSCAT).

QPM measures forward-scattered (FS) light – that is, light waves that travel in the same direction as before they were scattered. This technique is excellent at imaging structures in the Mie scattering region (greater than 100 nm, referred to as microscale in this study). This makes it ideal for visualizing complex structures such as biological cells. It falls short, however, when it comes to imaging structures in the Rayleigh scattering region (smaller than 100 nm, referred to as nanoscale in this study).

The second technique, iSCAT, detects backward-scattered (BS) light. This is light that’s reflected back towards the direction from which it came and which predominantly contains Rayleigh scattering. As such, iSCAT exhibits high sensitivity for detecting nanoscale objects. Indeed, the technique has recently been used to image single proteins, intracellular vesicles and viruses. It cannot, however, image microscale structures because of its limited ability to detect in the Mie scattering region.

The team’s new bidirectional quantitative scattering microscope (BiQSM) is able to detect both FS and BS light at the same time, thereby overcoming these previous limitations.

Cleanly separating the signals from FS and BS

The BiQSM system illuminates a sample through an objective lens from two opposite directions and detects both the FS and BS light using a single image sensor. The researchers use the spatial-frequency multiplexing method of off-axis digital holography to capture both images simultaneously. The biggest challenge, says Horie, was to cleanly separate the signals from FS and BS light in the images while keeping noise low and avoiding mixing between them.

Horie and colleagues, Keiichiro Toda, Takuma Nakamura and team leader Takuro Ideguchi, tested their technique by imaging live cells. They were able to visualize micron-sized cell structures, including the nucleus, nucleoli and lipid droplets, as well as nanoscale particles. They compared the FS and BS results using the scattering-field amplitude (SA), defined as the amplitude ratios between the scattered wave and the incident illumination wave.

“SA characterizes the light scattered in both the forward and backward directions within a unified framework,” says Horie, “so allowing for a direct comparison between FS and BS light images.”

Spurred on by their findings, which are detailed in Nature Communications, the researchers say they now plan to study even smaller particles such as exosomes and viruses.

The post Bidirectional scattering microscope detects micro- and nanoscale structures simultaneously appeared first on Physics World.

Quantum information theory sheds light on quantum gravity

8 janvier 2026 à 15:34

This episode of the Physics World Weekly podcast features Alex May, whose research explores the intersection of quantum gravity and quantum information theory. Based at Canada’s Perimeter Institute for Theoretical Physics, May explains how ideas being developed in the burgeoning field of quantum information theory could help solve one of the most enduring mysteries in physics – how to reconcile quantum mechanics with Einstein’s general theory of relativity, creating a viable theory of quantum gravity.

This interview was recorded in autumn 2025 when I had the pleasure of visiting the Perimeter Institute and speaking to four physicists about their research. This is the last of those conversations to appear on the podcast.

The first interview in this series from the Perimeter Institute was with Javier Toledo-Marín, “Quantum computing and AI join forces for particle physics”; the second was with Bianca Dittrich, “Quantum gravity: we explore spin foams and other potential solutions to this enduring challenge“; and the third was with Tim Hsieh, “Building a quantum future using topological phases of matter and error correction”.


This episode is supported by the APS Global Physics Summit, which takes place on 15–20 March 2026 in Denver, Colorado, and online.

The post Quantum information theory sheds light on quantum gravity appeared first on Physics World.

Chess960 still results in white having an advantage, finds study

8 janvier 2026 à 14:43

Chess is a seemingly simple game, but one that hides incredible complexity. In the standard game, the starting positions of the pieces are fixed so top players rely on memorizing a plethora of opening moves, which can sometimes result in boring, predictable games. It’s also the case that playing as white, and therefore going first, offers an advantage.

In the 1990s, former chess world champion Bobby Fischer proposed another way to play chess to encourage more creative play.

This form of the game – dubbed Chess960 – keeps the pawns in the same position but randomizes where the pieces at the back of the board – the knights, bishops, rooks, king and queen – are placed at the start while keeping the rest of the rules the same. It is named after the 960 starting positions that result from mixing it up at the back.

It was thought that Chess960’s extra permutations could make the game fairer for both players. Yet research by physicist Marc Barthelemy at Paris-Saclay University suggests it’s not as simple as this.

Initial advantage

He used the open-source chess program called Stockfish to analyse each of the 960 starting positions and developed a statistical method to measure decision-making complexity by calculating how much “information” a player needs to identify the best moves.

He found that the standard game can be unfair, as players with black pieces who go second have to keep up with the moves from the player with white.

Yet regardless of the starting positions at the back, Barthelemy discovered that white still has an advantage in almost all – 99.6% – of the 960 positions. He also found that the standard set-up – rook, knight, bishop, queen, king, bishop, knight, rook – is nothing special and is presumably a historical accident, perhaps surviving because the starting positions are visually symmetrical and easy to remember.

“Standard chess, despite centuries of cultural evolution, does not occupy an exceptional location in this landscape: it exhibits a typical initial advantage and moderate total complexity, while displaying above-average asymmetry in decision difficulty,” writes Barthelemy.

For a fairer and more balanced match, Barthelemy suggests playing position #198, in which the back-rank pieces start as queen, knight, bishop, rook, king, bishop, knight and rook.
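For readers who want to explore this themselves, the Python sketch below shows the kind of engine-based probing involved – it is not Barthelemy’s actual analysis pipeline, and it only reports the engine’s evaluation of the opening position rather than his information-based complexity measure. It assumes the python-chess package and a local Stockfish binary are installed; the search depth and the handful of positions examined are arbitrary choices.

# Ask Stockfish for its evaluation (from white's point of view) of a few
# Chess960 starting positions, identified by their Scharnagl numbers (0-959).
import chess
import chess.engine

def initial_eval(scharnagl_number, depth=20):
    """Return Stockfish's evaluation of the given Chess960 start, in pawns for white."""
    board = chess.Board.from_chess960_pos(scharnagl_number)
    with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
        info = engine.analyse(board, chess.engine.Limit(depth=depth))
        return info["score"].white().score(mate_score=10000) / 100.0

if __name__ == "__main__":
    # Position 518 is the standard chess set-up; #198 is Barthelemy's suggested fair start.
    for n in (0, 198, 518, 959):
        print(f"Position #{n}: {initial_eval(n):+.2f} pawns for white")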

The post Chess960 still results in white having an advantage, finds study appeared first on Physics World.

Tetraquark measurements could shed more light on the strong nuclear force

8 janvier 2026 à 11:00

The Compact Muon Solenoid (CMS) Collaboration has made the first measurements of the quantum properties of a family of three “all-charm” tetraquarks that was recently discovered at the Large Hadron Collider (LHC) at CERN. The findings could help shed more light on the properties of the strong nuclear force, which holds protons and neutrons together in nuclei. The result could help us better understand how ordinary matter forms.

In recent years, the LHC has discovered tens of massive particles called hadrons, which are made of quarks bound together by the strong force. Quarks come in six types: up, down, charm, strange, top and bottom. Most observed hadrons comprise two or three quarks (called mesons and baryons, respectively). Physicists have also observed exotic hadrons that comprise four or five quarks. These are the tetraquarks and pentaquarks respectively. Those seen so far usually contain a charm quark and its antimatter counterpart (a charm antiquark), with the remaining two or three quarks being up, down or strange quarks, or their antiquarks.

Identifying and studying tetraquarks and pentaquarks helps physicists to better understand how the strong force binds quarks together. This force also binds protons and neutrons in atomic nuclei.

Physicists are still divided as to the nature of these exotic hadrons. Some models suggest that their quarks are tightly bound via the strong force, so making these hadrons compact objects. Others say that the quarks are only loosely bound. To confuse things further, there is evidence that in some exotic hadrons, the quarks might be both tightly and loosely bound at the same time.

Now, new findings from the CMS Collaboration suggest that tetraquarks are tightly bound, but they do not completely rule out other models.

Measuring quantum numbers

In their work, which is detailed in Nature, CMS physicists studied all-charm tetraquarks. These comprise two charm quarks and two charm antiquarks and were produced by colliding protons at high energies at the LHC. Three states of this tetraquark have been identified at the LHC: X(6900), X(6600) and X(7100), where the numbers denote their approximate masses in millions of electron volts. The team measured the fundamental properties of these tetraquarks, including their quantum numbers: parity (P), charge conjugation (C) and total angular momentum, or spin (J). P determines whether a particle has the same properties as its spatial mirror image; C whether it has the same properties as its antiparticle; and J is the total angular momentum of the hadron. These numbers provide information on the internal structure of a tetraquark.

The researchers used a version of a well-known technique called angular analysis, which is similar to the technique used to characterize the Higgs boson. This approach focuses on the angles at which the decay products of the all-charm tetraquarks are scattered.

“We call this technique quantum state tomography,” explains CMS team member Chiara Mariotti of INFN Torino in Italy. “Here, we deduce the quantum state of an exotic state X from the analysis of its decay products. In particular, the angular distributions in the decay X → J/ψJ/ψ, followed by J/ψ decays into two muons, serve as analysers of polarization of two J/ψ particles,” she explains.

The researchers analysed all-charm tetraquarks produced at the CMS experiment between 2016 and 2018. They calculated that J is likely to be 2 and that P and C are both +1. This combination of properties is expressed as 2++.

Result favours tightly-bound quarks

“This result favours models in which all four quarks are tightly bound,” says particle physicist Timothy Gershon of the UK’s University of Warwick, who was not involved in this study. “However, the question is not completely put to bed. The sample size in the CMS analysis is not sufficient to exclude fully other possibilities, and additionally certain assumptions are made that will require further testing in future.”

Gershon adds, “These include assumptions that all three states have the same quantum numbers, and that all correspond to tetraquark decays to two J/ψ mesons with no additional particles not included in the reconstruction (for example there could be missing photons that have been radiated in the decay).”

Further studies with larger data samples are warranted, he adds. “Fortunately, CMS as well as both the LHCb and the ATLAS collaborations [at CERN] already have larger samples in hand, so we should not have to wait too long for updates.”

Indeed, the CMS Collaboration is now gathering more data and exploring additional decay modes of these exotic tetraquarks. “This will ultimately improve our understanding of how this matter forms, which, in turn, could help refine our theories of how ordinary matter comes into being,” Mariotti tells Physics World.

The post Tetraquark measurements could shed more light on the strong nuclear force appeared first on Physics World.

Reinforcement learning could help airborne wind energy take off

7 janvier 2026 à 17:00

When people think of wind energy, they usually think of windmill-like turbines dotted among hills or lined up on offshore platforms. But there is also another kind of wind energy, one that replaces stationary, earthbound generators with tethered kites that harvest energy as they soar through the sky.

This airborne form of wind energy, or AWE, is not as well-developed as the terrestrial version, but in principle it has several advantages. Power-generating kites are much less massive than ground-based turbines, which reduces both their production costs and their impact on the landscape. They are also far easier to install in areas that lack well-developed road infrastructure. Finally, and perhaps most importantly, wind speeds are many times greater at high altitudes than they are near the ground, significantly enhancing the power densities available for kites to harvest.

There is, however, one major technical challenge for AWE, and it can be summed up in a single word: control. AWE technology is operationally more complex than conventional turbines, and the traditional method of controlling kites (known as model-predictive control) struggles to adapt to turbulent wind conditions. At best, this reduces the efficiency of energy generation. At worst, it makes it challenging to keep devices safe, stable and airborne.

In a paper published in EPL, Antonio Celani and his colleagues Lorenzo Basile and Maria Grazia Berni of the University of Trieste, Italy, and the Abdus Salam International Centre for Theoretical Physics (ICTP) propose an alternative control method based on reinforcement learning. In this form of machine learning, an agent learns to make decisions by interacting with its environment and receiving feedback in the form of “rewards” for good performance. This form of control, they say, should be better at adapting to the variable and uncertain conditions that power-generating kites encounter while airborne.
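To make the reinforcement-learning loop concrete, here is a toy tabular Q-learning sketch in Python. It is purely illustrative: the states, actions, dynamics and rewards are invented for this example and have nothing to do with the kite controller in the EPL paper, but it shows the basic recipe of acting, receiving a reward and updating a value table by trial and error.

import random

random.seed(1)
N_STATES = 5                        # e.g. coarse bins of some flight variable (hypothetical)
ACTIONS = [-1, 0, +1]               # steer one way, hold, steer the other way
TARGET = N_STATES // 2              # the made-up "power zone" bin the agent should reach
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.95, 0.1  # learning rate, discount factor, exploration rate

def step(state, action):
    """Hypothetical dynamics: the action shifts the state by one bin, clipped to the grid."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == TARGET else -0.1
    return next_state, reward

state = 0
for _ in range(20_000):
    # epsilon-greedy: mostly exploit the current value table, occasionally explore
    if random.random() < eps:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    next_state, reward = step(state, action)
    # Q-learning update: nudge Q towards the reward plus the discounted best future value
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state

# The greedy policy should now point each state towards the power zone
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})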

What was your motivation for doing this work?

Our interest originated from some previous work where we studied a fascinating bird behaviour called thermal soaring. Many birds, from the humble seagull to birds of prey and frigatebirds, exploit atmospheric currents to rise in the sky without flapping their wings, and then glide or swoop down. They then repeat this cycle of ascent and descent for hours, or even for weeks if they are migratory birds. They’re able to do this because birds are very effective at extracting energy from the atmosphere to turn it into potential energy, even though the atmospheric flow is turbulent, hence very dynamic and unpredictable.

Antonio Celani. (Courtesy: Antonio Celani)

In those works, we showed that we could use reinforcement learning to train virtual birds and also real toy gliders to soar. That got us wondering whether this same approach could be exported to AWE.

When we started looking at the literature, we saw that in most cases, the goal was to control the kite to follow a predetermined path, irrespective of the changing wind conditions. These cases typically used only simple models of atmospheric flow, and almost invariably ignored turbulence.

This is very different from what we see in birds, which adapt their trajectories on the fly depending on the strength and direction of the fluctuating wind they experience. This led us to ask: can a reinforcement learning (RL) algorithm discover efficient, adaptive ways of controlling a kite in a turbulent environment to extract energy for human consumption?

What is the most important advance in the paper?

We offer a proof of principle that it is indeed possible to do this using a minimal set of sensor inputs and control variables, plus an appropriately designed reward/punishment structure that guides trial-and-error learning. The algorithm we deploy finds a way to manoeuvre the kite such that it generates net energy over one cycle of operation. Most importantly, this strategy autonomously adapts to the ever-fluctuating conditions induced by turbulence.

Lorenzo Basile. (Courtesy: Lorenzo Basile)

The main point of RL is that it can learn to control a system just by interacting with the environment, without requiring any a priori knowledge of the dynamical laws that rule its behaviour. This is extremely useful when the systems are very complex, like the turbulent atmosphere and the aerodynamics of a kite.

What are the barriers to implementing RL in real AWE kites, and how might these barriers be overcome?

The virtual environment that we use in our paper to train the kite controller is very simplified, and in general the gap between simulations and reality is wide. We therefore regard the present work mostly as a stimulus for the AWE community to look deeper into alternatives to model-predictive control, like RL.

On the physics side, we found that some phases of an AWE generating cycle are very difficult for our system to learn, and they require a painful fine-tuning of the reward structure. This is especially true when the kite is close to the ground, where winds are weaker and errors are the most punishing. In those cases, it might be a wise choice to use other heuristic, hard-wired control strategies rather than RL.

Finally, in a virtual environment like the one we used to do the RL training in this work, it is possible to perform many trials. In real power kites, this approach is not feasible – it would take too long. However, techniques like offline RL might resolve this issue by interleaving a few field experiments where data are collected with extensive off-line optimization of the strategy. We successfully used this approach in our previous work to train real gliders for soaring.

What do you plan to do next?

We would like to explore the use of offline RL to optimize energy production for a small, real AWE system. In our opinion, the application to low-power systems is particularly relevant in contexts where access to the power grid is limited or uncertain. A lightweight, easily portable device that can produce even small amounts of energy might make a big difference in the everyday life of remote, rural communities, and more generally in the global south.

The post Reinforcement learning could help airborne wind energy take off appeared first on Physics World.
