Magnetically launched atoms sense motion

Researchers in France have devised a new technique in quantum sensing that uses trapped ultracold atoms to detect acceleration and rotation. They then combined their quantum sensor with a conventional, classical inertial sensor to create a hybrid system that was used to measure acceleration due to Earth’s gravity and the rotational frequency of the Earth. With further development, the hybrid sensor could be deployed in the field for applications such as inertial navigation and geophysical mapping.

Measuring inertial quantities such as acceleration and rotation is at the heart of inertial navigation systems, which operate without information from satellites or other external sources. Instead, such systems rely on precise, continuously updated knowledge of the position and orientation of the navigation device. Inertial sensors based on classical physics have been available for some time, but quantum devices are showing great promise. On one hand, classical sensors using quartz in micro-electro-mechanical systems (MEMS) have gained widespread use due to their robustness and speed. However, they suffer from drift – a gradual loss of accuracy over time caused by factors such as temperature sensitivity and material ageing. On the other hand, quantum sensors using ultracold atoms achieve better stability over long operation times. While such sensors are already commercially available, the technology is still being developed to match the robustness and speed of classical sensors.
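To see why sensor drift matters so much for navigation, the toy calculation below (not taken from the article) integrates a constant, uncorrected accelerometer bias twice, as a dead-reckoning system effectively does; the resulting position error grows quadratically with time. The bias value is an assumption chosen purely for illustration.

```python
# Toy dead-reckoning example: a constant accelerometer bias b that is never
# corrected produces a position error of 0.5 * b * t**2 after time t.
bias = 1e-3  # m/s^2, assumed uncorrected bias (illustrative value only)

for minutes in (1, 10, 60):
    t = minutes * 60.0                  # elapsed time in seconds
    position_error = 0.5 * bias * t**2  # metres
    print(f"{minutes:3d} min -> {position_error:8.1f} m of position error")
```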

Now, the Cold Atom Group of the French Aerospace Lab (ONERA) has devised a new method in atom interferometry that uses ultracold atoms to measure inertial quantities. By launching the atoms using a magnetic field gradient, the researchers demonstrated stabilities below 1 µm/s² for acceleration and 1 µrad/s for rotation over 24 hours. This was done by continuously repeating a 4 s interferometer sequence on the atoms for around 20 min to extract the inertial quantities. That is equivalent to driving a car for 20 min straight and knowing its acceleration and rotation to the µm/s² and µrad/s level.

Cold-atom accelerometer–gyroscope

They built their cold-atom accelerometer–gyroscope using rubidium-87 atoms. Holding the atoms in a magneto-optical trap, the researchers cool them to 2 µK, giving them good control over the atoms for further manipulation. When the atoms are released from the trap, they fall freely under gravity, which allows the researchers to measure their free-fall acceleration using atom interferometry. In their protocol, a series of three light pulses coherently splits an atomic cloud into two paths, then redirects and recombines them, allowing the cloud to interfere with itself. The inertial quantities can be deduced from the phase shift of the resulting interference pattern.
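As a rough illustration of how the interference phase encodes acceleration, the sketch below inverts the textbook three-pulse (Mach–Zehnder) relation Δφ = k_eff·a·T². The wavelength and pulse separation are assumed values typical of rubidium Raman interferometers, not the parameters used by the ONERA team.

```python
import numpy as np

# Illustrative numbers only: counter-propagating Raman beams near the
# 87Rb D2 line at ~780 nm, and an assumed pulse separation T.
wavelength = 780e-9                   # m
k_eff = 2 * (2 * np.pi / wavelength)  # rad/m, effective two-photon wavevector
T = 0.1                               # s, time between interferometer pulses

def acceleration_from_phase(delta_phi):
    """Invert delta_phi = k_eff * a * T**2 for a constant acceleration a."""
    return delta_phi / (k_eff * T**2)

# A 1 rad phase shift corresponds to roughly 6 µm/s^2 with these numbers
print(acceleration_from_phase(1.0))
```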

Measuring rotation rates, however, requires the atoms to have an initial velocity in the horizontal direction. This is achieved by applying a horizontal magnetic field gradient, which exerts a horizontal force on atoms with magnetic moments. The rubidium atoms are prepared in one of the magnetic states known as Zeeman sublevels, and the researchers then use a pair of coils in the horizontal plane – which they call the “launching coils” – to create the magnetic field gradient that gives the atoms a horizontal velocity. The atoms are then transferred back to the non-magnetic ground state using a microwave pulse before the interferometry is performed, avoiding any additional magnetic forces that could affect the interferometer’s outcome.

By analysing the launch velocity using laser pulses with tuned frequencies, the researchers can distinguish velocity imparted by the magnetic launching scheme from velocity arising from other effects. They observe two dominant, symmetric peaks associated with the velocity of the atoms due to the magnetic launch, as well as a third, smaller peak in between. This peak is due to an unwanted effect of the laser beams, which transfer additional velocity to the atoms. Further improvements in the stability of the laser beams’ polarization – the orientation of the oscillating electric field with respect to the propagation axis – as well as in the current noise of the launching coils, will allow more atoms to be launched.

Using their new launch technique, the researchers operated their cold-atom dual accelerometer–gyroscope for two days straight, averaging down their results to reach stabilities of 7×10⁻⁷ m/s² in acceleration and 4×10⁻⁷ rad/s in rotation rate, limited by residual ground-vibration noise.
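A minimal sketch of what “averaging down” means here, assuming a white-noise-limited single-shot sensitivity (an invented number) that integrates down as 1/√N until it reaches the vibration-limited floor quoted in the article:

```python
import numpy as np

# The cycle time and noise floor are the figures quoted in the article;
# the single-shot sensitivity is an illustrative assumption.
sigma_single_shot = 5e-5  # m/s^2 per shot (assumed)
cycle_time = 4.0          # s per interferometer cycle
floor = 7e-7              # m/s^2, vibration-limited floor

def stability_after(duration_s):
    """White-noise averaging: sensitivity improves as 1/sqrt(N) until the floor."""
    n_shots = duration_s / cycle_time
    return max(sigma_single_shot / np.sqrt(n_shots), floor)

print(stability_after(2 * 24 * 3600))  # ~two days of continuous operation
```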

Best of both worlds

While classical sensors suffer from long-term drifts, they operate continuously, whereas a quantum sensor requires preparation of the atomic sample and an interferometry process that takes around half a second. For this reason, a classical–quantum hybrid sensor benefits from the long-term stability of the quantum sensor and the fast repetition rate of the classical one. By attaching a commercial classical accelerometer and a commercial classical gyroscope to the atom interferometer, the researchers implemented a feedback loop on the classical sensors’ outputs. They demonstrated 100-fold and three-fold improvements in the acceleration and rotation-rate stabilities, respectively, of the classical sensors compared with operating them alone.
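The sketch below illustrates the general idea of such a hybrid loop rather than the authors’ actual implementation: a fast classical reading with a slowly drifting bias is periodically steered by a drift-free but slow atomic measurement. All numerical values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

dt = 0.01                      # classical sample period (s)
steps_per_cycle = 400          # classical samples per 4 s atom-interferometer cycle
true_a = 9.81                  # m/s^2, signal to be tracked
bias, drift_rate = 1e-4, 1e-6  # classical bias and its drift (assumed values)
gain = 0.2                     # feedback gain on the bias estimate (assumed)

bias_estimate = 0.0
for step in range(6000):                         # one minute of operation
    bias += drift_rate * dt                      # classical bias slowly drifts
    classical = true_a + bias + 1e-5 * rng.standard_normal()
    hybrid = classical - bias_estimate           # corrected high-rate output
    if step % steps_per_cycle == 0:              # once per atom-interferometer cycle
        atom = true_a + 1e-6 * rng.standard_normal()  # slow but drift-free
        bias_estimate += gain * (hybrid - atom)       # steer the bias estimate

print(bias_estimate, bias)  # the estimate tracks the slowly drifting bias
```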

Operating this hybrid sensor continuously and using their magnetic launch technique, the researchers report a measurement of the local acceleration due to gravity in their laboratory of 980,881.397 mGal (the milligal is a standard unit of gravimetry). They measured Earth’s rotation rate to be 4.82 × 10⁻⁵ rad/s. Cross-checking with another atomic gravimeter, they find that their acceleration value deviates by 2.3 mGal, which they attribute to misalignment of the vertical interferometer beams. Their rotation measurement has a significant error of about 25%, which the team attributes to wave-front distortions of the Raman beams used in their interferometer.

Yannick Bidel, a researcher working on the project, explains that such an inertial quantum sensor still has room for improvement. Large momentum transfer – a technique to increase the arm separation of the interferometer – is one way to go. He adds that once they reach bias stabilities of 10⁻⁹ to 10⁻¹⁰ rad/s in a compact atom interferometer, such a sensor could become transportable and ready for in-field measurement campaigns.

The research is described in Science Advances.

The post Magnetically launched atoms sense motion appeared first on Physics World.


Are you better than AI? Try our quiz to find out

Two images: a black hole and a 1950s computer
(Courtesy: EHT Collaboration; Los Alamos National Laboratory)

1 When the Event Horizon Telescope imaged a black hole in 2019, what was the total mass of all the hard drives needed to store the data?
A 1 kg
B 50 kg
C 500 kg
D 2000 kg

2 In 1956 MANIAC I became the first computer to defeat a human being in chess, but because of its limited memory and power, the pawns and which other pieces had to be removed from the game?
A Bishops
B Knights
C Queens
D Rooks

Two images: cartoon of the Monty Hall problem and data storage racks
(Courtesy: IOP Publishing; CERN)

3 The logic behind the Monty Hall problem, which involves a car and two goats behind different doors, is one of the cornerstones of machine learning. On which TV game show is it based?
A Deal or No Deal
B Family Fortunes
C Let’s Make a Deal
D Wheel of Fortune

4 In 2023 CERN broke which barrier for the amount of data stored on devices at the lab?
A 10 petabytes (10¹⁶ bytes)
B 100 petabytes (10¹⁷ bytes)
C 1 exabyte (10¹⁸ bytes)
D 10 exabytes (10¹⁹ bytes)

5 What was the world’s first electronic computer?
A Atanasoff–Berry Computer (ABC)
B Electronic Discrete Variable Automatic Computer (EDVAC)
C Electronic Numerical Integrator and Computer (ENIAC)
D Small-Scale Experimental Machine (SSEM)

6 What was the outcome of the chess match between astronaut Frank Poole and the HAL 9000 computer in the movie 2001: A Space Odyssey?
A Draw
B HAL wins
C Poole wins
D Match abandoned

7 Which of the following physics breakthroughs used traditional machine learning methods?
A Discovery of the Higgs boson (2012)
B Discovery of gravitational waves (2016)
C Multimessenger observation of a neutron-star collision (2017)
D Imaging of a black hole (2019)

8 The physicist John Hopfield shared the 2024 Nobel Prize for Physics with Geoffrey Hinton for their work underpinning machine learning and artificial neural networks – but what did Hinton originally study?
A Biology
B Chemistry
C Mathematics
D Psychology

9 Put the following data-driven discoveries in chronological order.
A Johann Balmer’s discovery of a formula computing wavelength from Anders Ångström’s measurements of the hydrogen lines
B Johannes Kepler’s laws of planetary motion based on Tycho Brahe’s astronomical observations
C Henrietta Swan Leavitt’s discovery of the period-luminosity relationship for Cepheid variables
D Ole Rømer’s estimation of the speed of light from observations of the eclipses of Jupiter’s moon Io

10 Inspired by Alan Turing’s “Imitation Game” – in which an interrogator tries to distinguish between a human and machine – when did Joseph Weizenbaum develop ELIZA, the world’s first “chatbot”?
A 1964
B 1984
C 2004
D 2024

11 What does the CERN particle-physics lab use to store data from the Large Hadron Collider?
A Compact discs
B Hard-disk drives
C Magnetic tape
D Solid-state drives

12 In preparation for the High Luminosity Large Hadron Collider, CERN tested a data link to the Nikhef lab in Amsterdam in 2024 that ran at what speed?
A 80 Mbps
B 8 Gbps
C 80 Gbps
D 800 Gbps

13 When complete, the Square Kilometre Array telescope will be the world’s largest radio telescope. How many petabytes of data is it expected to archive per year?
A 15
B 50
C 350
D 700

  • This quiz is for fun and there are no prizes. Answers will be published in April.

The post Are you better than AI? Try our quiz to find out appeared first on Physics World.


How would an asteroid strike affect life on Earth?

How would the climate and the environment on our planet change if an asteroid struck? Researchers at the IBS Center for Climate Physics (ICCP) at Pusan National University in South Korea have now tried to answer this question by running several impact simulations with a state-of-the-art Earth system model on their in-house supercomputer. The results show that the climate, atmospheric chemistry and even global photosynthesis would be dramatically disrupted in the three to four years following the event, due to the huge amounts of dust produced by the impact.

Beyond immediate effects such as scorching heat, earthquakes and tsunamis, an asteroid impact would have long-lasting effects on the climate because of the large quantities of aerosols and gases ejected into the atmosphere. Indeed, previous studies of the Chicxulub impact – a 10 km asteroid that struck around 66 million years ago – revealed that dust, soot and sulphur led to a global “impact winter” that was very likely responsible for the extinction of the dinosaurs at the Cretaceous–Paleogene boundary.

“This winter is characterized by reduced sunlight, because of the dust filtering it out, cold temperatures and decreased precipitation at the surface,” says Axel Timmermann, director of the ICCP and leader of this new study. “Severe ozone depletion would occur in the stratosphere too because of strong warming caused by the dust particles absorbing solar radiation there.”

These unfavourable climate conditions would inhibit plant growth via a decline in photosynthesis both on land and in the sea and would thus affect food productivity, Timmermann adds.

Something surprising and potentially positive would also happen though, he says: plankton in the ocean would recover within just six months and its abundance could even increase afterwards. Indeed, diatoms (silicate-rich algae) would be more plentiful than before the collision. This might be because the dust created by the asteroid is rich in iron, which would trigger plankton growth as it sinks into the ocean. These phytoplankton “blooms” could help alleviate emerging food crises triggered by the reduction in terrestrial productivity, at least for several years after the impact, explains Timmermann.

The effect of a “Bennu”-sized asteroid impact

In this latest study, published in Science Advances, the researchers simulated the effect of a “Bennu”-sized asteroid impact. Bennu is a so-called medium-sized asteroid with a diameter of around 500 m. Asteroids of this size are more likely to strike Earth than the larger “planet killer” asteroids, but they have been studied far less.

There is an estimated 0.037% chance of such an asteroid colliding with Earth in September 2182. While this probability is small, such an impact would be very serious, says Timmermann, and would lead to climate conditions similar to those observed after some of the largest volcanic eruptions in the last 100 000 years. “It is therefore important to assess the risk, which is the product of the probability and the damage that would be caused, rather than just the probability by itself,” he tells Physics World. “Our results can serve as useful benchmarks to estimate the range of environmental effects from future medium-sized asteroid collisions.”

The team ran the simulations on the IBS’ supercomputer Aleph using the Community Earth System Model Version 2 (CESM2) and the Whole Atmosphere Community Climate Model Version 6 (WACCM6). The simulations injected up to 400 million tonnes of dust into the stratosphere.

The climate effects of impact-dust aerosols mainly depend on their abundance in the atmosphere and how they evolve there. The simulations revealed that global mean temperatures would drop by 4 °C, a value comparable with the cooling estimated to have resulted from the eruption of the Toba volcano around 74 000 years ago (which emitted 2000 Tg (2×10¹⁵ g) of sulphur dioxide). Precipitation also decreased by 15% worldwide and ozone dropped by a dramatic 32% in the first year following the asteroid impact.

Asteroid impacts may have shaped early human evolution

“On average, medium-sized asteroids collide with Earth about every 100 000 to 200 000 years,” says Timmermann. “This means that our early human ancestors may have experienced some of these medium-sized events. These may have impacted human evolution and even affected our species’ genetic makeup.”

The researchers admit that their model has some inherent limitations. For one, CESM2/WACCM6, like other modern climate models, is not designed and optimized to simulate the effects of massive amounts of aerosol injected into the atmosphere. Second, the researchers only focused on the asteroid colliding with the Earth’s land surface. This is obviously less likely than an impact on the ocean, because roughly 70% of Earth’s surface is covered by water, they say. “An impact in the ocean would inject large amounts of water vapour rather than climate-active aerosols such as dust, soot and sulphur into the atmosphere and this vapour needs to be better modelled – for example, for the effect it has on ozone loss,” they say.

The effect of the impact on specific regions on the planet also needs to be better simulated, the researchers add. Whether the asteroid impacts during winter or summer also needs to be accounted for since this can affect the extent of the climate changes that would occur.

Finally, as well as the dust nanoparticles investigated in this study, future work should also look at soot emissions from wildfires ignited by impact “spherules”, and sulphur and CO2 released from target evaporites, say Timmermann and colleagues. “The ‘impact winter’ would be intensified and prolonged if other aerosols such as soot and sulphur were taken into account.”

The post How would an asteroid strike affect life on Earth? appeared first on Physics World.


Ionizing radiation: its biological impacts and how it is used to treat disease

This episode of the Physics World Weekly podcast features Ileana Silvestre Patallo, a medical physicist at the UK’s National Physical Laboratory, and Ruth McLauchlan, consultant radiotherapy physicist at Imperial College Healthcare NHS Trust.

In a wide-ranging conversation with Physics World’s Tami Freeman, Patallo and McLauchlan explain how ionizing radiation such as X-rays and proton beams interact with our bodies and how radiation is being used to treat diseases including cancer.

The post Ionizing radiation: its biological impacts and how it is used to treat disease appeared first on Physics World.


Earth’s core could contain lots of primordial helium, experiments suggest

Helium deep within the Earth could bond with iron to form stable compounds – according to experiments done by scientists in Japan and Taiwan. The work was done by Haruki Takezawa and Kei Hirose at the University of Tokyo and colleagues, who suggest that Earth’s core could host a vast reservoir of primordial helium-3 – reshaping our understanding of the planet’s interior.

Noble gases including helium are normally chemically inert. But under extreme pressures, heavier members of the group (including xenon and krypton) can form a variety of compounds with other elements. To date, however, less is known about compounds containing helium – the lightest noble gas.

Beyond the synthesis of disodium helide (Na2He) in 2016, and a handful of molecules in which helium forms weak van der Waals bonds with other atoms, the existence of other helium compounds has remained purely theoretical.

As a result, the conventional view is that any primordial helium-3 present when our planet first formed would have quickly diffused through Earth’s interior, before escaping into the atmosphere and then into space.

Tantalizing clues

However, there are tantalizing clues that helium compounds could exist in some volcanic rocks on Earth’s surface. These rocks contain unusually high isotopic ratios of helium-3 to helium-4. “Unlike helium-4, which is produced through radioactivity, helium-3 is primordial and not produced in planetary interiors,” explains Hirose. “Based on volcanic rock measurements, helium-3 is known to be enriched in hot magma, which originally derives from hot plumes coming from deep within Earth’s mantle.” The mantle is the region between Earth’s core and crust.

The fact that the isotope can still be found in rock and magma suggests that it must have somehow become trapped in the Earth. “This argument suggests that helium-3 was incorporated into the iron-rich core during Earth’s formation, some of which leaked from the core to the mantle,” Hirose explains.

It could be that the extreme pressures present in Earth’s iron-rich core enabled primordial helium-3 to bond with iron to form stable molecular lattices. To date, however, this possibility has never been explored experimentally.

Now, Takezawa, Hirose and colleagues have triggered reactions between iron and helium within a laser-heated diamond-anvil cell. Such cells crush small samples to extreme pressures – in this case as high as 54 GPa. While this is less than the pressure in the core (about 350 GPa), the reactions created molecular lattices of iron and helium. These structures remained stable even when the diamond-anvil’s extreme pressure was released.

To determine the molecular structures of the compounds, the researchers did X-ray diffraction experiments at Japan’s SPring-8 synchrotron. The team also used secondary ion mass spectrometry to determine the concentration of helium within their samples.

Synchrotron and mass spectrometer

“We also performed first-principles calculations to support experimental findings,” Hirose adds. “Our calculations also revealed a dynamically stable crystal structure, supporting our experimental findings.” Altogether, this combination of experiments and calculations showed that the reaction could form two distinct lattices (face-centred cubic and distorted hexagonal close packed), each with differing ratios of iron to helium atoms.

These results suggest that similar reactions between helium and iron may have occurred within Earth’s core shortly after its formation, trapping much of the primordial helium-3 in the material that coalesced to form Earth. This would have created a vast reservoir of helium in the core, which is gradually making its way to the surface.

However, further experiments are needed to confirm this thesis. “For the next step, we need to see the partitioning of helium between iron in the core and silicate in the mantle under high temperatures and pressures,” Hirose explains.

Observing this partitioning would help rule out the lingering possibility that unbonded helium-3 could be more abundant than expected within the mantle – where it could be trapped by some other mechanism. Either way, further studies would improve our understanding of Earth’s interior composition – and could even tell us more about the gases present when the solar system formed.

The research is described in Physical Review Letters.

The post Earth’s core could contain lots of primordial helium, experiments suggest appeared first on Physics World.


US science rues ongoing demotion of research under President Trump

Two months into Donald Trump’s second presidency and many parts of US science – across government, academia, and industry – continue to be hit hard by the new administration’s policies. Science-related government agencies are seeing budgets and staff cut, especially in programmes linked to climate change and diversity, equity and inclusion (DEI). Elon Musk’s Department of Government Efficiency (DOGE) is also causing havoc as it seeks to slash spending.

In mid-February, DOGE fired more than 300 employees at the National Nuclear Security Administration, which is part of the US Department of Energy, many of whom were responsible for reassembling nuclear warheads at the Pantex plant in Texas. A day later, the agency was forced to rescind all but 28 of the sackings amid concerns that their absence could jeopardise national security.

A judge has also reinstated workers who were laid off at the National Science Foundation (NSF) as well as at the Centers for Disease Control and Prevention. The judge said the government’s Office of Personnel Management, which sacked the staff, did not have the authority to do so. However, the NSF rehiring applies mainly to military veterans and staff with disabilities, with the overall workforce down by about 140 people – or roughly 10%.

The NSF has also announced a reduction, the size of which is unknown, in its Research Experiences for Undergraduates programme. Over the last 38 years, the initiative has given thousands of college students – many with backgrounds that are underrepresented in science – the opportunity to carry out original research at institutions during the summer holidays. NSF staff are also reviewing thousands of grants containing such words as “women” and “diversity”.

NASA, meanwhile, is to shut its office of technology, policy and strategy, along with its chief-scientist office, and the DEI and accessibility branch of its diversity and equal opportunity office. “I know this news is difficult and may affect us all differently,” admitted acting administrator Janet Petro in an all-staff e-mail. Affecting about 20 staff, the move is on top of plans to reduce NASA’s overall workforce. Reports also suggest that NASA’s science budget could be slashed by as much as 50%.

Hundreds of “probationary employees” have also been sacked by the National Oceanic and Atmospheric Administration (NOAA), which provides weather forecasts that are vital for farmers and people in areas threatened by tornadoes and hurricanes. “If there were to be large staffing reductions at NOAA there will be people who die in extreme weather events and weather-related disasters who would not have otherwise,” warns climate scientist Daniel Swain from the University of California, Los Angeles.

Climate concerns

In his first cabinet meeting on 26 February, Trump suggested that officials “use scalpels” when trimming their departments’ spending and personnel – rather than Musk’s figurative chainsaw. But bosses at the Environmental Protection Agency (EPA) still plan to cut its budget by about two-thirds. “[W]e fear that such cuts would render the agency incapable of protecting Americans from grave threats in our air, water, and land,” wrote former EPA administrators William Reilly, Christine Todd Whitman and Gina McCarthy in the New York Times.

The White House’s attack on climate science goes beyond just the EPA. In January, the US Department of Agriculture removed almost all data on climate change from its website. The action resulted in a lawsuit in March from the Northeast Organic Farming Association of New York and two non-profit organizations – the Natural Resources Defense Council and the Environmental Working Group. They say that the removal hinders research and “agricultural decisions”.

The Trump administration has also barred NASA’s now former chief scientist Katherine Calvin and members of the State Department from travelling to China for a planning meeting of the Intergovernmental Panel on Climate Change. Meanwhile, in a speech to African energy ministers in Washington on 7 March, US energy secretary Chris Wright claimed that coal has “transformed our world and made it better”, adding that climate change, while real, is not on his list of the world’s top 10 problems. “We’ve had years of Western countries shamelessly saying ‘don’t develop coal’,” he said. “That’s just nonsense.”

At the National Institutes of Health (NIH), staff are being told to cancel hundreds of research grants that involve DEI and transgender issues. The Trump administration also wants to cut the allowance for indirect costs of NIH’s and other agencies’ research grants to 15% of research contracts, although a district court judge has put that move on hold pending further legal arguments. On 8 March, the Trump administration also threatened to cancel $400m in funding to Columbia University, purportedly due to its failure to tackle antisemitism on campus.

A Trump policy of removing “undocumented aliens” continues to alarm universities that have overseas students. Some institutions have already advised overseas students against travelling abroad during holidays, in case immigration officers do not let them back in when they return. Others warn that their international students should carry their immigration documents with them at all times. Universities have also started to rein in spending with Harvard and the Massachusetts Institute of Technology, for example, implementing a hiring freeze.

Falling behind

Amid the turmoil, the US scientific community is beginning to fight back. Individual scientists have supported court cases that have overturned sackings at government agencies, while a letter to Congress signed by the Union of Concerned Scientists and 48 scientific societies asserts that the administration has “already caused significant harm to American science”. On 7 March, more than 30 US cities also hosted “Stand Up for Science” rallies attended by thousands of demonstrators.

Elsewhere, a group of government, academic and industry leaders – known collectively as Vision for American Science and Technology – has released a report warning that the US could fall behind China and other competitors in science and technology. Entitled Unleashing American Potential, it calls for increased public and private investment in science to maintain US leadership. “The more dollars we put in from the feds, the more investment comes in from industry, and we get job growth, we get economic success, and we get national security out of it,” notes Sudip Parikh, chief executive of the American Association for the Advancement of Science, who was involved in the report.

Marcia McNutt, president of the National Academy of Sciences, meanwhile, has called on the community to continue to highlight the benefit of science. “We need to underscore the fact that stable federal funding of research is the main mode by which radical new discoveries have come to light – discoveries that have enabled the age of quantum computing and AI and new materials science,” she said. “These are areas that I am sure are very important to this administration as well.”

The post US science rues ongoing demotion of research under President Trump appeared first on Physics World.


Joint APS meeting brings together the physics community

New for 2025, the American Physical Society (APS) is combining its March Meeting and April Meeting into a joint event known as the APS Global Physics Summit. The largest physics research conference in the world, the Global Physics Summit brings together 14,000 attendees across all disciplines of physics. The meeting takes place in Anaheim, California (as well as virtually) from 16 to 21 March.

Uniting all disciplines of physics in one joint event reflects the increasingly interdisciplinary nature of scientific research and enables everybody to participate in any session. The meeting includes cross-disciplinary sessions and collaborative events, where attendees can meet to connect with others, discuss new ideas and discover groundbreaking physics research.

The meeting will take place in three adjacent venues. The Anaheim Convention Center will host March Meeting sessions, while the April Meeting sessions will be held at the Anaheim Marriott. The Hilton Anaheim will host SPLASHY (soft, polymeric, living, active, statistical, heterogeneous and yielding) matter and medical physics sessions. Cross-disciplinary sessions and networking events will take place at all sites and in the connecting outdoor plaza.

With programming aligned with the 2025 International Year of Quantum Science and Technology, the meeting also celebrates all things quantum with a dedicated Quantum Festival. Designed to “inspire and educate”, the festival incorporates events at the intersection of art, science and fun – with multimedia performances, science demonstrations, circus performers, and talks by Nobel laureates and a NASA astronaut.

Finally, there’s the exhibit hall, where more than 200 exhibitors will showcase products and services for the physics community. Here, delegates can also attend poster sessions, a career fair and a graduate school fair. Read on to find out about some of the innovative product offerings on show at the technical exhibition.

Precision motion drives innovative instruments for physics applications

For over 25 years Mad City Labs has provided precision instrumentation for research and industry, including nanopositioning systems, micropositioners, microscope stages and platforms, single-molecule microscopes and atomic force microscopes (AFMs).

This product portfolio, coupled with the company’s expertise in custom design and manufacturing, enables Mad City Labs to provide solutions for nanoscale motion for diverse applications such as astronomy, biophysics, materials science, photonics and quantum sensing.

Mad City Labs’ piezo nanopositioners feature the company’s proprietary PicoQ sensors, which provide ultralow noise and excellent stability to yield sub-nanometre resolution and motion control down to the single picometre level. The performance of the nanopositioners is central to the company’s instrumentation solutions, as well as the diverse applications that it can serve.

Within the scanning probe microscopy solutions, the nanopositioning systems provide true decoupled motion with virtually undetectable out-of-plane movement, while their precision and stability yields high positioning performance and control. Uniquely, Mad City Labs offers both optical deflection AFMs and resonant probe AFM models.

Product portfolio Mad City Labs provides precision instrumentation for applications ranging from astronomy and biophysics, to materials science, photonics and quantum sensing. (Courtesy: Mad City Labs)

The MadAFM is a sample scanning AFM in a compact, tabletop design. Designed for simple user-led installation, the MadAFM is a multimodal optical deflection AFM and includes software. The resonant probe AFM products include the AFM controllers MadPLL and QS-PLL, which enable users to build their own flexibly configured AFMs using Mad City Labs micro- and nanopositioners. All AFM instruments are ideal for material characterization, but resonant probe AFMs are uniquely well suited for quantum sensing and nano-magnetometry applications.

Stop by the Mad City Labs booth and ask about the new do-it-yourself quantum scanning microscope based on the company’s AFM products.

Mad City Labs also offers standalone micropositioning products such as optical microscope stages, compact positioners and the Mad-Deck XYZ stage platform. These products employ proprietary intelligent control to optimize stability and precision. These micropositioning products are compatible with the high-resolution nanopositioning systems, enabling motion control across micro–picometre length scales.

The new MMP-UHV50 micropositioning system offers 50 mm travel with 190 nm step size and maximum vertical payload of 2 kg, and is constructed entirely from UHV-compatible materials and carefully designed to eliminate sources of virtual leaks. Uniquely, the MMP-UHV50 incorporates a zero power feature when not in motion to minimize heating and drift. Safety features include limit switches and overheat protection, a critical item when operating in vacuum environments.

For advanced microscopy techniques for biophysics, the RM21 single-molecule microscope, featuring the unique MicroMirror TIRF system, offers multicolour total internal-reflection fluorescence microscopy with an excellent signal-to-noise ratio and efficient data collection, along with an array of options to support multiple single-molecule techniques. Finally, new motorized micromirrors enable easier alignment and stored setpoints.

  • Visit Mad City Labs at the APS Global Summit, at booth #401

New lasers target quantum, Raman spectroscopy and life sciences

HÜBNER Photonics, manufacturer of high-performance lasers for advanced imaging, detection and analysis, is highlighting a large range of exciting new laser products at this year’s APS event. With these new lasers, the company responds to market trends specifically within the areas of quantum research and Raman spectroscopy, as well as fluorescence imaging and analysis for life sciences.

Dedicated to the quantum research field, a new series of CW ultralow-noise single-frequency fibre amplifier products – the Ampheia Series lasers – offer output powers of up to 50 W at 1064 nm and 5 W at 532 nm, with an industry-leading low relative intensity noise. The Ampheia Series lasers ensure unmatched stability and accuracy, empowering researchers and engineers to push the boundaries of what’s possible. The lasers are specifically suited for quantum technology research applications such as atom trapping, semiconductor inspection and laser pumping.

Ultralow-noise operation The Ampheia Series lasers are particularly suitable for quantum technology research applications. (Courtesy: HÜBNER Photonics)

In addition to the Ampheia Series, the new Cobolt Qu-T Series of single-frequency, tunable lasers addresses atom cooling. With wavelengths of 707, 780 and 813 nm, coarse tunability of greater than 4 nm, narrow mode-hop-free tuning of below 5 GHz, a linewidth of below 50 kHz and powers of 500 mW, the Cobolt Qu-T Series is perfect for atom cooling of rubidium, strontium and other atoms used in quantum applications.

For the Raman spectroscopy market, HÜBNER Photonics announces the new Cobolt Disco single-frequency laser with available power of up to 500 mW at 785 nm, in a perfect TEM00 beam. This new wavelength is an extension of the Cobolt 05-01 Series platform, which with excellent wavelength stability, a linewidth of less than 100 kHz and spectral purity better than 70 dB, provides the performance needed for high-resolution, ultralow-frequency Raman spectroscopy measurements.

For life science applications, a number of new wavelengths and higher power levels are available, including 553 nm with 100 mW and 594 nm with 150 mW. These new wavelengths and power levels are available on the Cobolt 06-01 Series of modulated lasers, which offer versatile and advanced modulation performance with perfect linear optical response, true OFF states and stable illumination from the first pulse – for any duty cycles and power levels across all wavelengths.

The company’s unique multi-line laser, Cobolt Skyra, is now available with laser lines covering the full green–orange spectral range, including 594 nm, with up to 100 mW per line. This makes this multi-line laser highly attractive as a compact and convenient illumination source in most bioimaging applications, and now also specifically suitable for excitation of AF594, mCherry, mKate2 and other red fluorescent proteins.

In addition, with the Cobolt Kizomba laser, the company is introducing a new UV wavelength that specifically addresses the flow cytometry market. The Cobolt Kizomba laser offers 349 nm output at 50 mW with the renowned performance and reliability of the Cobolt 05-01 Series lasers.

  • Visit HÜBNER Photonics at the APS Global Summit, at booth #359.

 

The post Joint APS meeting brings together the physics community appeared first on Physics World.


Lost in the mirror: as AI development gathers momentum, will it reflect humanity’s best or worst attributes?

Are we at risk of losing ourselves in the midst of technological advancement? Could the tools we build to reflect our intelligence start distorting our very sense of self? Artificial intelligence (AI) is a technological advancement with huge ethical implications, and in The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking, Shannon Vallor offers a philosopher’s perspective on this vital question.

Vallor, who is based at the University of Edinburgh in the UK, argues that artificial intelligence is not just reshaping society but is also subtly rewriting our relationship with knowledge and autonomy. She even goes as far as to say, “Today’s AI mirrors tell us what it is to be human – what we prioritize, find good, beautiful or worth our attention.”

Vallor employs the metaphor of AI as a mirror – a device that reflects human intelligence but lacks independent creativity. According to her, AI systems, which rely on curated sets of training data, cannot truly innovate or solve new challenges. Instead, they mirror our collective past, reflecting entrenched biases and limiting our ability to address unprecedented global problems like climate change. Therefore, unless we carefully consider how we build and use AI, it risks stalling human progress by locking us into patterns of the past.

The book explores how humanity’s evolving relationship with technology – from mechanical automata and steam engines to robotics and cloud computing – has shaped the development of AI. Vallor grounds readers in what AI is and, crucially, what it is not. As she explains, while AI systems appear to “think”, they are fundamentally tools designed to process and mimic human-generated data.

The book’s philosophical underpinnings are enriched by Vallor’s background in the humanities and her ethical expertise. She draws on myths, such as the story of Narcissus, who met a tragic end after being captivated by his reflection, to illustrate the dangers of AI. She gives as an example the effect that AI social-media filters have on the propagation and domination of Western beauty standards.

Vallor also explores the long history of literature grappling with artificial intelligence, self-awareness and what it truly means to be human. These fictional works, which include Do Androids Dream of Electric Sheep? by Philip K Dick, are used not just as examples but as tools to explore the complex relationship between humanity and AI. The emphasis on the ties between AI and popular culture results in writing that is both accessible and profound, deftly weaving complex ideas into a narrative that engages readers from all backgrounds.

One area where I find Vallor’s conclusions contentious is her vision for AI in augmenting science communication and learning. She argues that our current strategies for science communication are inadequate and that improving public and student access to reliable information is critical. In her words: “Training new armies of science communicators is an option, but a less prudent use of scarce public funds than conducting vital research itself. This is one area where AI mirrors will be useful in the future.”

In my opinion, this statement warrants significant scrutiny. Science communication and teaching are about more than simply summarising papers or presenting data; they require human connection to contextualize findings and make them accessible to broad audiences. While public distrust of experts is a legitimate issue, delegating science communication to AI risks exacerbating the problem.

AI’s lack of genuine understanding, combined with its susceptibility to bias and detachment from human nuance, could further erode trust and deepen the disconnect between science and society. Vallor’s optimism in this context feels misplaced. AI, as it currently stands, is ill-suited to bridge the gaps that good science communication seeks to address.

Despite its generally critical tone, The AI Mirror is far from a technophobic manifesto. Vallor’s insights are ultimately hopeful, offering a blueprint for reclaiming technology as a tool for human advancement. She advocates for transparency, accountability, and a profound shift in economic and social priorities. Rather than building AI systems to mimic human behaviour, she argues, we should design them to amplify our best qualities – creativity, empathy and moral reasoning – while acknowledging the risk that this technology will devalue these talents as well as amplify them.

The AI Mirror is essential reading for anyone concerned about the future of artificial intelligence and its impact on humanity. Vallor’s arguments are rigorous yet accessible, drawing from philosophy, history and contemporary AI research. She challenges readers to see AI not as a technological inevitability but as a cultural force that we must actively shape.

Her emphasis on the need for a “new language of virtue” for the AI age warrants consideration, particularly in her call to resist the seductive pull of efficiency and automation at the expense of humanity. Vallor argues that as AI systems increasingly influence decision-making in society, we must cultivate a vocabulary of ethical engagement that goes beyond simplistic notions of utility and optimization. As she puts it: “We face a stark choice in building AI technologies. We can use them to strengthen our humane virtues, sustaining and extending our collective capabilities to live wisely and well. By this path, we can still salvage a shared future for human flourishing.”

Vallor’s final call to action is clear: we must stop passively gazing into the AI mirror and start reshaping it to serve humanity’s highest virtues, rather than its worst instincts. If AI is a mirror, then we must decide what kind of reflection we want to see.

The post Lost in the mirror: as AI development gathers momentum, will it reflect humanity’s best or worst attributes? appeared first on Physics World.


NASA launches $488m megaphone-shaped SPHEREx observatory to map the universe

NASA has launched a $488m infrared mission to map the distribution of galaxies and study cosmic inflation. The Spectro-Photometer for the History of the Universe, Epoch of Reionization and Ices Explorer (SPHEREx) mission was launched yesterday from Vandenberg Space Force Base in California by a SpaceX Falcon-9 rocket.

Set to operate for two years in a polar orbit about 650 km from the Earth’s surface, SPHEREx will collect data from 450 million galaxies as well as more than 100 million stars to create a 3D map of the cosmos.

It will use this data to gain insight into cosmic inflation – the rapid expansion of the universe following the Big Bang.

It will also search the Milky Way for hidden reservoirs of water, carbon dioxide and other ingredients critical for life as well as study the cosmic glow of light from the space between galaxies.

The craft features three concentric shields that surround the telescope to protect it from light and heat. Three mirrors, including a 20 cm primary mirror, collect light before feeding it into filters and detectors. The set-up allows the telescope to resolve 102 different wavelengths of light.

Packing a punch

SPHEREx has been launched together with another NASA mission dubbed Polarimeter to Unify the Corona and Heliosphere (PUNCH). Via a constellation of four satellites in a low-Earth orbit, PUNCH will make 3D observations of the Sun’s corona to learn how its mass and energy become the solar wind. It will also explore the formation and evolution of space weather events such as coronal mass ejections, which can create storms of energetic particle radiation that can damage spacecraft.

PUNCH will now undergo a three-month commissioning period in which the four satellites will enter the correct orbital formation and their instruments will be calibrated to operate as a single “virtual instrument” before the mission begins studying the solar wind.

“Everything in NASA science is interconnected, and sending both SPHEREx and PUNCH up on a single rocket doubles the opportunities to do incredible science in space,” noted Nicky Fox, associate administrator for NASA’s science mission directorate. “Congratulations to both mission teams as they explore the cosmos from far-out galaxies to our neighbourhood star. I am excited to see the data returned in the years to come.”

The post NASA launches $488m megaphone-shaped SPHEREx observatory to map the universe appeared first on Physics World.


Perovskite solar cells can be completely recycled

A research team headed up at Linköping University in Sweden and Cornell University in the US has succeeded in recycling almost all of the components of perovskite solar cells using simple, non-toxic, water-based solvents. What’s more, the researchers were able to use the recycled components to make new perovskite solar cells with almost the same power conversion efficiency as those created from new materials. This work could pave the way to a sustainable perovskite solar economy, they say.

While solar energy is considered an environmentally friendly source of energy, most of the solar panels available today are based on silicon, which is difficult to recycle. This has led to the first generation of silicon solar panels, which are reaching the end of their life cycles, ending up in landfills, says Xun Xiao, one of the team members at Linköping University.

“When developing emerging solar cell technologies, we therefore need to take recycling into consideration,” adds one of the leaders of the new study, Feng Gao, also at Linköping. “If we don’t know how to recycle them, maybe we shouldn’t put them on the market at all.”

To this end, many countries around the world are imposing legal requirements on photovoltaic manufacturers, to ensure that they collect and recycle any solar cell waste they produce. These initiatives include the WEEE directive 2012/19/EU in the European Union and equivalent legislation in Asia and the US.

Perovskites are one of the most promising materials for making next-generation solar cells. Not only are they relatively inexpensive, they are also easy to fabricate, lightweight, flexible and transparent. This allows them to be placed on top of a variety of surfaces, unlike their silicon counterparts. And since they boast a power conversion efficiency (PCE) of more than 25%, this makes them comparable to existing photovoltaics on the market.

A shorter lifespan

One of their downsides, however, is that perovskite solar cells have a shorter lifespan than silicon solar cells. This means that recycling is even more critical for these materials. Today, perovskite solar cells are disassembled using dangerous solvents such as dimethylformamide, but Gao and colleagues have now developed a technique in which water can be used as the solvent.

Perovskites are crystalline materials with an ABX3 structure, where A is caesium, methylammonium (MA) or formamidinium (FA); B is lead or tin; and X is chlorine, bromine or iodine. Solar cells made of these materials are composed of different layers: the hole/electron transport layers; the perovskite layer; indium tin oxide substrates; and cover glasses.

In their work, which they detail in Nature, the researchers succeeded in delaminating end-of-life devices layer by layer, using water containing three low-cost additives: sodium acetate, sodium iodide and hypophosphorous acid. Despite being able to dissolve organic iodide salts such as methylammonium iodide and formamidinium iodide, water only marginally dissolves lead iodide (about 0.044 g per 100 ml at 20 °C). The researchers therefore developed a way to increase the amount of lead iodide that dissolves in water by introducing acetate ions into the mix. These ions readily coordinate with lead ions, forming highly soluble lead acetate (about 44.31 g per 100 ml at 20 °C).
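A quick back-of-envelope comparison of the two solubility figures quoted above shows why the acetate additive matters; the snippet simply divides the article’s numbers.

```python
# Solubilities quoted in the article, in g per 100 ml of water at 20 °C
lead_iodide_solubility = 0.044
lead_acetate_solubility = 44.31

gain = lead_acetate_solubility / lead_iodide_solubility
print(f"solubility gain: ~{gain:.0f}x")  # roughly a thousand-fold increase
```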

Once the degraded perovskites had dissolved in the aqueous solution, the researchers set about recovering pure, high-quality perovskite crystals from it. They did this by providing extra iodide ions to coordinate with the lead. This resulted in [PbI]⁺ transitioning to [PbI2]⁰ and eventually to [PbI3]⁻, and the formation of the perovskite framework.

To remove the indium tin oxide substrates, the researchers sonicated these layers in a solution of water/ethanol (50%/50% volume ratio) for 15 min. Finally, they delaminated the cover glasses by placing the degraded solar cells on a hotplate preheated to 150 °C for 3 min.

They were able to apply their technology to recycle both MAPbI3 and FAPbI3 perovskites.

New devices made from the recycled perovskites had an average power conversion efficiency of 21.9 ± 1.1%, with the best samples clocking in at 23.4%. This represents an efficiency recovery of more than 99% compared with those prepared using fresh materials (which have a PCE of 22.1 ± 0.9%).
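As a quick check of the quoted figure, the recovery can be computed directly from the two average efficiencies reported above (uncertainties ignored):

```python
# Average power conversion efficiencies reported in the article (in %)
pce_recycled = 21.9  # devices made from recycled perovskite
pce_fresh = 22.1     # devices made from fresh material

recovery = pce_recycled / pce_fresh * 100
print(f"{recovery:.1f}%")  # ≈ 99.1%, consistent with “more than 99%”
```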

Looking forward, Gao and colleagues say they would now like to demonstrate that their technique works on a larger scale. “Our life-cycle assessment and techno-economic analysis has already confirmed that our strategy not only preserves raw materials, but also appreciably lowers overall manufacturing costs of solar cells made from perovskites,” says co-team leader Fengqi You, who works at Cornell University. “In particular, reclaiming the valuable layers in these devices drives down expenses and helps reduce the ‘levelized cost’ of electricity they produce, making the technology potentially more competitive and sustainable at scale,” he tells Physics World.

The post Perovskite solar cells can be completely recycled appeared first on Physics World.


Preparing the next generation of US physicists for a quantum future

Quantum technologies are flourishing the world over, with advances across the board researching practical applications such as quantum computing, communication, cryptography and sensors. Indeed, the quantum industry is booming – an estimated $42bn was invested in the sector in 2023, and this amount is projected to rise to $106bn by 2040.

With academia, industry and government all looking for professionals to join the future quantum workforce, it’s crucial to have people with the right skills, and from all educational levels. With this in mind, efforts are being made across the US to focus on quantum education and training, with educators working to introduce quantum concepts from the elementary-school level all the way to tailored programmes at PhD and postgraduate level that meet the needs of potential employers in the area. Work is also under way to ensure that graduates and early-career physicists are aware of the many roles available in the quantum sphere.

“There are a lot of layers to what has to be done in quantum education,” says Emily Edwards, an electrical and computer engineer at Duke University and co-leader of the National Q-12 Education Partnership. “I like to think of quantum education along different dimensions. One way is to think about what most learners may need in terms of foundational public literacy or student literacy in the space. Towards the top, we have people who are very specialized. Essentially, we have to think about many different learners at different stages – they might need specific tools or might need different barriers removed for them. And so different parts of the economy – from government to industry to academia and professional institutions – will play a role in how to address the needs of a certain group.”

Engaging young minds

To ensure that the US remains a key global player in quantum information science and technology (QIST), the National Q-12 Education Partnership – launched by the White House Office of Science and Technology Policy and the National Science Foundation (NSF) – is focused on ways to engage young minds in quantum, building the necessary tools and strategies to help improve early (K-12) education and outreach.

To achieve this, Q-12 is focusing on outreach and education in middle and high schools, introducing QIST concepts and providing access to learning materials that can inspire the next generation of quantum leaders. Over the next decade, Q-12 also aims to provide quantum-related curricula – developed by professionals in the field – beyond university labs and classrooms, to community colleges and online courses.

Edwards explains that while Q-12 mainly focuses on the K-12 level, there is also an overlap with early undergraduate and two-year colleges – meaning that there is a wide range of requirements, issues and unique challenges to contend with. Such a big space also means that different companies and institutions have varying levels of funding and interest in quantum education research and development.

“Academic organizations, for example, tend to work on educational research or to provide professional development, especially because it’s nascent,” says Edwards. “There is a lot of activity in the academic space, within professional societies. We also work with a number of private companies, some of which are developing curricula, or providing free access to different tools and simulations for learning experiences.”

The role of the APS

The American Physical Society (APS) is strongly involved in quantum education – by making sure that teachers have access to tools and resources for quantum education as well as connecting quantum professionals with K-12 classrooms to discuss careers in quantum. “The APS has been really active in engaging with teachers and connecting them with the vast network of APS members, stakeholders and professionals, to talk about careers,” says Edwards. APS and Q-12 have a number of initiatives – such as Quantum To-Go and QuanTime – that help connect quantum professionals with classrooms and provide teachers with ready-to-use quantum activities.

Role model The Quantum To-Go programme matches scientists, engineers and professionals in quantum information science and technology with classrooms across the US to inspire students to enter the quantum workforce. (Courtesy: APS)

Claudia Fracchiolla, who is the APS’s head of public engagement, points out that while there is growing interest in quantum education, there is a lack of explicit support for high-school teachers who need to be having conversations about a possible career in quantum with students that will soon be choosing a major.

“We know from our research that while teachers might want to engage in this professional development, they don’t always have the necessary support from their institution and it is not regulated,” explains Fracchiolla. She adds that while there are a “few stellar people in the field who are creating materials for teachers”, there is not a clear standard on how they can be used, or what can be taught at a school level.

Quantum To-Go

To help tackle these issues, the APS and Q-12 launched the Quantum To-Go programme, which pairs educators with quantum-science professionals, who speak to students about quantum concepts and careers. The programme covers students from the first year of school through to undergraduate level, with scientists visiting in person or virtually.

It’s a really great way for quantum professionals in different sectors to visit classrooms and talk about their experiences

Emily Edwards

“I think it’s a really great way for quantum professionals in different sectors to visit classrooms and talk about their experiences,” says Edwards. She adds that this kind of collaboration can be especially useful “because we know that students – particularly young women, or students of colour or those from any marginalized background – self-select out of these areas while they’re still in the K-12 environment.”

Edwards puts this down to a lack of role models in the workplace. “Not only do they not hear about quantum in the classroom or in their curriculum, but they also can’t see themselves working in the field,” she says. “So there’s no hope of achieving a diverse workforce if you don’t connect a diverse set of professionals with the classroom. So we are really proud to be a part of Quantum To-Go.”

Quantum resources

With 2025 being celebrated as the International Year of Quantum Science and Technology (IYQ), both Q-12 and the APS hope to see and host many community-driven activities and events focused on young learners and their families. An example of this is Q-12’s QuanTime initiative, which seeks to help teachers curate informal quantum activities across the US all year round. “Education is local in the US, and so it’s most successful if we can work with locals to help develop their own community resources,” explains Edwards.

A key event in the APS’s annual calendar of activities celebrating IYQ is the Quantum Education and Policy Summit, held in partnership with the Q-SEnSE institute. It aims to bring together key experts in physics education, policymakers and quantum industry leaders, to develop quantum educational resources and policies.

Quantum influencers Testifying before the US House Science Committee on 7 June 2023 were (from left to right) National Quantum Coordination Office director Charles Tahan, former Department of Energy under secretary for science Paul Dabbar, NASA quantum scientist Eleanor Rieffel, Quantum Economic Development Consortium executive director Celia Merzbacher, and University of Illinois quantum scientist Emily Edwards (now at Duke University). (Courtesy: House Science Committee)

Another popular resource produced by the APS is its PhysicsQuest kits, which are aimed at middle-school students to help them explore specific physics topics. “We engaged with different APS members who work in quantum to design activities for middle-school students,” says Fracchiolla. “We then worked with some teachers to pilot and test those activities, before finalizing our kits, which are freely available to teachers. Normally, each year we do four activities, but thanks to IYQ, we decided to double that to eight activities that are all related to topics in quantum science and technology.”

To help distribute these kits to teachers, as well as provide them with guidance on how to use all the included materials, the APS is hosting workshops for teachers during the Teachers’ Days at the APS Global Physics Summit in March 2025. Workshops will also be held at the APS Division of Atomic, Molecular and Optical Physics (DAMOP) annual meeting in June. 

“A key part of IYQ is creating an awareness of what quantum science and technology entails, because it is also about the people that work in the field,” says Fracchiolla. “Something that was really important when we were writing the proposal to send to the UN for the IYQ was to demonstrate how quantum technologies will support the UN’s sustainable development goals. I hope this also inspires students to pursue careers in quantum, as they realize that it goes beyond quantum computing.”

If we are focusing on quantum technologies to address sustainable development goals, we need to make sure that they are accessible to everyone

Claudia Fracchiolla

Fracchiolla also underlines that having a diverse range of people in the quantum workforce will ensure that these technologies will help to tackle societal and environmental issues, and vice versa. “If we are focusing on quantum technologies to address sustainable development goals, we need to make sure that they are accessible to everyone. And that’s not going to happen if diverse minds are not involved in the process of developing these technologies,” she says, while acknowledging that this is currently not the case.

It is Fracchiolla’s ultimate hope that the IYQ and the APS’s activities, taken together, will help all students feel empowered that there is a place for them in the field. “Quantum is still a nascent field and we have the opportunity to not repeat the errors of the past that have made many areas of science exclusive. We need to make the field diverse from the get-go.”

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

The post Preparing the next generation of US physicists for a quantum future appeared first on Physics World.

  •  

Demonstrators march for science in New York City

The Stand Up for Science demonstration at Washington Square Park in New York City on Friday 7 March 2025 had the most qualified speakers, angriest participants and wickedest signs of any protest I can remember.

Raucous, diverse and loud, it was held in the shadow of looming massive cuts to key US scientific agencies including the National Institutes of Health (NIH), the National Science Foundation (NSF), and the National Oceanic and Atmospheric Administration (NOAA).

Other anti-science actions have included the appointment of a vaccine opponent as head of the US Department of Health and Human Services and the cancellation of $400m in grants and contracts to Columbia University.

I arrived at the venue half an hour beforehand. Despite the chillingly cold and breezy weather, the park’s usual characters were there, including chess players, tap dancers, people advertising “Revolution Books” and evangelists who handed me a “spiritual credit card”.

But I had come for a more real-world cause that is affecting many of my research colleagues right here, right now. Among the Stand Up For Science demonstrators was Srishti Bose, a fourth-year graduate student in neuroscience at Queens College, who met me underneath the arch at the north of the park, the traditional site of demonstrations.

She had organized the rally together with two other women – a graduate student at Stony Brook University and a postdoc at the Albert Einstein College of Medicine. They had heard that there would be a Stand Up for Science rally on the same day in Washington, DC, and thought that New York City should have one too. In fact, there were 32 across the US in total.

The trio didn’t have much time, and none of them had ever planned a political protest before. “We spent 10 days frantically e-mailing everyone we could think of,” Bose said of having to arrange the permits, equipment, insurance, medical and security personnel – and speakers.

Speaking out Two of the protestors in Washington Square in Greenwich Village, New York. (Courtesy: Robert P Crease)

I was astounded at what they accomplished. The first speaker was Harold Varmus, who won the 1989 Nobel Prize for Physiology or Medicine and spent seven years as director of the NIH under President Barack Obama. “People think medicine falls from the sky,” he told protestors, “rather than from academics supported by science funding.”

Another Nobel-prize-winner who spoke was Martin Chalfie from Columbia University, who won the 2008 Nobel Prize for Chemistry.

Speaker after speaker – faculty, foundation directors, lab heads, postdocs, graduate students, New York State politicians – ticked off what was being lost by the budget cuts targeting science.

It included money for research into motor neurone disease, Alzheimer’s, cancer, polio, measles and heart disease, as well as climate science, and funding that supports stipends and salaries for postdocs, grad students, university labs and departments.

Lisa Randall, a theoretical physicist at Harvard University, began with a joke: “How many government officials does it take to screw in a light bulb? None: Trump says the job’s done and they stay in the dark.”

Randall continued by enumerating programme and funding cuts that will turn the lights out on important research. “Let’s keep the values that Make America Great – Again,” she concluded.

The crowd of 2000 or so demonstrators were diverse and multi-generational, as is typical for such events in my New York City. I heard at least five different languages being spoken. Everyone was fired up and roared “Boo!” whenever the names of certain politicians were mentioned.

I told Bose about the criticism I had heard that Stand Up for Science was making science look like a special-interest group rather than being carried out in the public interest.

She would have none of it. “They made us an interest group,” Bose insisted. “We grew up thinking that everyone accepted and supported science. This is the first time we’ve had a direct attack on what we do. I can’t think of a single lab that doesn’t have an NSF or NIH grant.”

Seriously funny Many of the demonstrators held messages aloft. (Courtesy: Robert P Crease)

Lots of signs were on display, many fabulously aggressive and angry, ranging from hand-drawn lettering on cardboard to carefully produced placards – some of which I won’t reproduce in a family magazine.

“I shouldn’t have to make a sign saying that ‘Defunding science is wrong’…but here we are” said one. “Go fact yourself!” and “Science keeps you assholes alive”, said others.

Two female breast-cancer researchers had made a sign that, they told me, put their message in a way that they thought the current US leaders would get: “Science saves boobs.”

I saw others that bitterly mocked the current US president’s apparent ignorance of the distinction between “transgenic” and “transgender”.

“Girls just wanna have funding” said another witty sign. “Executive orders are not peer reviewed”; “Science: because I’d rather not make shit up”; “Science is significant *p<0.05” said others.

The rally ended with 20 minutes of call-and-response chants. Everyone knew the words, thanks to a QR code.

“We will fight?”

“Every day!”

“When science is under attack?”

“Stand up, fight back!”

“What do we want?”

“Answers”

“When do we want it?”

“After peer review!”

After the spirited chanting, the rally was officially over, but many people stayed, sharing stories, collecting information and seeking ideas for the next moves.

“Obviously,” Bose said, “it’s not going to end here.”

The post Demonstrators march for science in New York City appeared first on Physics World.

  •  

Why nothing beats the buzz of being in a small hi-tech business

A few months ago, I attended a presentation and reception at the Houses of Parliament in London for companies that had won Business Awards from the Institute of Physics in 2024. What excited me most at the event was hearing about the smaller start-up companies and their innovations. They are developing everything from metamaterials for sound proofing to instruments that can non-invasively measure pressure in the human brain.

The event also reminded me of my own experience working in the small-business sector. After completing my PhD in high-speed aerodynamics at the University of Southampton, I spent a short spell working for what was then the Defence Evaluation and Research Agency (DERA) in Farnborough. But wanting to stay in Southampton, I decided working permanently at DERA wasn’t right for me, so I started looking for a suitable role closer to home.

I soon found myself working as a development engineer at a small engineering company called Stewart Hughes Limited. It was founded in 1980 by Ron Stewart and Tony Hughes, who had been researchers at the Institute of Sound and Vibration Research (ISVR) at Southampton University. Through numerous research contracts, the pair had spent almost a decade developing techniques for monitoring the condition of mechanical machinery from their vibrations.

By attaching accelerometers or vibration sensors to the machines, they discovered that the resulting signals can be processed to determine the physical condition of the devices. Their particular innovation was to find a way to both capture and process the accelerometer signals in near real time to produce indicators relating to the health of the equipment being monitored. It required a combination of hardware and software that was cutting edge at the time.
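
The article does not spell out Stewart Hughes’s proprietary algorithms, but the general principle – reducing a raw accelerometer trace to a handful of scalar “health indicators” – can be sketched in a few lines of Python. Everything below is illustrative: the function name, the ±5 Hz band and the gear-mesh frequency are assumptions made for the example, not the company’s method.

```python
import numpy as np

def condition_indicators(signal, sample_rate, mesh_freq):
    """Toy vibration health indicators: overall RMS level and the
    spectral amplitude near an assumed gear-mesh frequency."""
    # Overall vibration level: root-mean-square of the time-domain signal
    rms = np.sqrt(np.mean(signal ** 2))

    # Single-sided amplitude spectrum via the FFT
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)

    # Peak amplitude in a narrow band around the gear-mesh tone;
    # changes here (or in its sidebands) often accompany gear damage
    band = (freqs > mesh_freq - 5.0) & (freqs < mesh_freq + 5.0)
    mesh_amplitude = spectrum[band].max()

    return rms, mesh_amplitude

# Synthetic example: a 1 kHz gear-mesh tone buried in broadband noise
fs = 25_000                                    # sampling rate in Hz
t = np.arange(0, 1.0, 1.0 / fs)
x = 0.5 * np.sin(2 * np.pi * 1000 * t) + 0.1 * np.random.randn(len(t))
print(condition_indicators(x, fs, mesh_freq=1000))
```

In practice, indicators like these would be trended over many flights and compared against fleet baselines rather than read off a single record.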

Exciting times

Although I did not join the firm until early 1994, it still had all the feel of a start-up. We were located in a single office building (in reality it was a repurposed warehouse) with 50 or so staff, about 40 of whom were electronics, software and mechanical engineers. There was a strong emphasis on “systems engineering” – in other words, integrating different disciplines to design and build an overarching solution to a problem.

In its early years, Stewart Hughes had developed a variety of applications for its vibration health monitoring technique. It was used in all sorts of areas, ranging from conveyor belts carrying coal and Royal Navy ships at sea to supersized trucks working on mines. But when I joined, the company was focused on helicopter drivetrains.

In particular, the company had developed a product called Health and Usage Monitoring System (HUMS). The UK’s Civil Aviation Authority required this kind of device to be fitted on all helicopters transporting passengers to and from oil platforms in the North Sea to improve operational safety. Our equipment (and that of rival suppliers – we did not have a monopoly) was used to monitor mechanical parts such as gears, bearings, shafts and rotors.

For someone straight out of university, it was an exciting time. There were lots of technical challenges to be solved, including designing effective ways to process signals in noisy environments and extracting information about critical drivetrain components. We then had to convert the data into indicators that could be monitored to detect and diagnose mechanical issues.

As a physicist, I found myself working closely with the engineers but tended to approach things from a more fundamental angle, helping to explain why certain approaches worked and others didn’t. Don’t forget that the technology developed by Stewart Hughes wasn’t used in the comfort of a physics lab but on a real-life working helicopter. That meant capturing and processing data on the airborne helicopter itself using bespoke electronics to manage high onboard data rates.

After the data were downloaded, they had to be sent on floppy disks or other portable storage devices to ground stations. There the results would be presented in a form to allow customers and our own staff to interpret and diagnose any mechanical problems. We also had to develop ways to monitor an entire fleet of helicopters, continuously learning and developing from experience.

Stewart Hughes’s innovative and successful HUMS technology, which was the first of its kind to be flown on a North Sea helicopter, saw the company win Queen’s Awards on two separate occasions. The first was in 1993 for “export achievement” and the second was in 1998 for “technological achievement”. By the end of 1998 the company was bought by Smiths Industries, which in turn was acquired by General Electric in 2007.

Stormy days

If it all sounds as if working in a small business is plain sailing, well it rarely is. A few years before I joined, Stewart Hughes had ridden out at least one major storm when it was forced to significantly reduce the workforce because anticipated contracts did not materialize. “Black Friday”, as it became known, made the board of directors nervous about taking on additional employees, often relying on existing staff to work overtime instead.

This arrangement actually suited many of the early-career employees, who were keen to quickly expand their work experience and their pay packet. But when I arrived, we were once again up against cash-flow challenges, which is the bane of any small business. Back then there were no digital electronic documents and web portals, which led to some hairy situations.

I can recall several occasions when the company had to book a despatch rider for 2 p.m. on a Friday afternoon to dash a report up the motorway to the Ministry of Defence in London. If we hadn’t got an approval signature and contractual payment before the close of business on the same day, the company literally wouldn’t have been able to open its doors on Monday morning.

Being part of a small company was undoubtedly a formative part of my early career experience

At some stage, however, the company’s bank lost patience with this hand-to-mouth existence and the board of directors was told to put the firm on a more solid financial footing. This edict led to the company structure becoming more formal and the directors being less accessible, with a seasoned professional brought in to help run the business. The resulting change in strategic trajectory eventually led to its sale.

Being part of a small company was undoubtedly a formative part of my early career experience. It was an exciting time and the fact all employees were – literally – under one roof meant that we knew and worked with the decision makers. We always had the opportunity to speak up and influence the future. We got to work on unexpected new projects because there was external funding available. We could be flexible when it came to trying out new software or hardware as part of our product development.

The flip side was that we sometimes had to flex too much, which at times made it hard to stick to a cohesive strategy. We also struggled to find cash to try out blue-sky or speculative approaches – although there were plenty of good ideas. Such advantages come only with being part of a larger corporation, with its bigger budgets and greater overall stability.

That said, I appreciate the diverse and dynamic learning curve I experienced at Stewart Hughes. The founders were innovators whose vision and products have stood the test of time, still being widely used today. The company benefited many people – not just the staff, who went on to successful careers, but also the pilots and passengers on helicopters whose lives may have been saved.

Working in a large corporation is undoubtedly a smoother ride than in a small business. But it’s rarely seat-of-the-pants stuff and I learned so much from my own days at Stewart Hughes. Attending the IOP’s business awards reminded me of the buzz of being in a small firm. It might not be to everyone’s taste, but if you get the chance to work in that environment, do give it serious thought.

The post Why nothing beats the buzz of being in a small hi-tech business appeared first on Physics World.

  •  

Cat qubits open a faster track to fault-tolerant quantum computing

Researchers from the Amazon Web Services (AWS) Center for Quantum Computing have announced what they describe as a “breakthrough” in quantum error correction. Their method uses so-called cat qubits to reduce the total number of qubits required to build a large-scale, fault-tolerant quantum computer, and they claim it could shorten the time required to develop such machines by up to five years.

Quantum computers are promising candidates for solving complex problems that today’s classical computers cannot handle. Their main drawback is the tendency for errors to crop up in the quantum bits, or qubits, they use to perform computations. Just like classical bits, the states of qubits can erroneously flip from 0 to 1, which is known as a bit-flip error. In addition, qubits can suffer from inadvertent changes to their phase, which is a parameter that characterizes their quantum superposition (phase-flip errors). A further complication is that whereas classical bits can be copied in order to detect and correct errors, the quantum nature of qubits makes copying impossible. Hence, errors need to be dealt with in other ways.

One error-correction scheme involves surrounding each “data” qubit with several “measurement” qubits. The job of the measurement qubits is to detect phase-flip or bit-flip errors in the data qubits without destroying their quantum nature. In 2024, a team at Google Quantum AI showed that this approach is scalable in a system of a few dozen qubits. However, a truly powerful quantum computer would require around a million data qubits and an even larger number of measurement qubits.

Cat qubits to the rescue

The AWS researchers showed that it is possible to reduce this total number of qubits. They did this by using a special type of qubit called a cat qubit. Named after the Schrödinger’s cat thought experiment that illustrates the concept of quantum superposition, cat qubits use the superposition of coherent states to encode information in a way that resists bit flips. Doing so may increase the number of phase-flip errors, but special error-correction algorithms can deal with these efficiently.
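
The article does not give the mathematics, but in the standard formulation the two cat-qubit basis states are even and odd superpositions of a resonator’s coherent states $|\alpha\rangle$ and $|-\alpha\rangle$:

$$ |\mathcal{C}_\alpha^{\pm}\rangle \;=\; \frac{|\alpha\rangle \pm |{-\alpha}\rangle}{\sqrt{2\,\bigl(1 \pm e^{-2|\alpha|^{2}}\bigr)}} . $$

Because $\langle \alpha | -\alpha \rangle = e^{-2|\alpha|^{2}}$, the two coherent components become effectively distinguishable even at modest photon numbers, which is why bit flips are exponentially suppressed as $|\alpha|^{2}$ grows while phase flips remain the dominant error left for the outer code to correct.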

The AWS team got this result by building a microchip containing an array of five cat qubits. These are connected to four transmon qubits, which are a type of superconducting qubit with a reduced sensitivity to charge noise (a major source of errors in quantum computations). Here, the cat qubits serve as data qubits, while the transmon qubits measure and correct phase-flip errors. The cat qubits were further stabilized by connecting each of them to a buffer mode that uses a non-linear process called two-photon dissipation to ensure that their noise bias is maintained over time.

According to Harry Putterman, a senior research scientist at AWS, the team’s foremost challenge (and innovation) was to ensure that the system did not introduce too many bit-flip errors. This was important because the system uses a classical repetition code as its “outer layer” of error correction, which left it with no redundancy against residual bit flips. With this aspect under control, the researchers demonstrated that their superconducting quantum circuit suppressed errors from 1.75% per cycle for a three-cat qubit array to 1.65% per cycle for a five-cat qubit array. Achieving this degree of error suppression with larger error-correcting codes previously required tens of additional qubits.

On a scalable path

AWS’s director of quantum hardware, Oskar Painter, says the result will reduce the development time for a full-scale quantum computer by 3-5 years. This is, he says, a direct outcome of the system’s simple architecture as well as its 90% reduction in the “overhead” required for quantum error correction. The team does, however, need to reduce the error rates of the error-corrected logical qubits. “The two most important next steps towards building a fault-tolerant quantum computer at scale is that we need to scale up to several logical qubits and begin to perform and study logical operations at the logical qubit level,” Painter tells Physics World.

According to David Schlegel, a research scientist at the French quantum computing firm Alice & Bob, which specializes in cat qubits, this work marks the beginning of a shift from noisy, classically simulable quantum devices to fully error-corrected quantum chips. He says the AWS team’s most notable achievement is its clever hybrid arrangement of cat qubits for quantum information storage and traditional transmon qubits for error readout.

However, while Schlegel calls the research “innovative”, he says it is not without limitations. Because the AWS chip incorporates transmons, it still needs to address both bit-flip and phase-flip errors. “Other cat qubit approaches focus on completely eliminating bit flips, further reducing the qubit count by more than a factor of 10,” Schlegel says. “But it remains to be seen which approach will prove more effective and hardware-efficient for large-scale error-corrected quantum devices in the long run.”

The research is published in Nature.

The post Cat qubits open a faster track to fault-tolerant quantum computing appeared first on Physics World.

  •  

Physicists in Serbia begin strike action in support of student protests

Physicists in Serbia have begun strike action today in response to what they say is government corruption and social injustice. The one-day strike, called by the country’s official union for researchers, is expected to result in thousands of scientists joining students who have already been demonstrating for months over conditions in the country.

The student protests, which began in November, were triggered by a railway station canopy collapse that killed 15 people. Since then, the movement has grown into an ongoing mass protest seen by many as indirectly seeking to change the government, currently led by president Aleksandar Vučić.

The Serbian government, however, claims it has met all student demands such as transparent publication of all documents related to the accident and the prosecution of individuals who have disrupted the protests. The government has also accepted the resignation of prime minister Miloš Vučević as well as transport minister Goran Vesić and trade minister Tomislav Momirović, who previously held the transport role during the station’s reconstruction.

“The students are championing noble causes that resonate with all citizens,” says Igor Stanković, a statistical physicist at the Institute of Physics (IPB) in Belgrade, who is joining today’s walkout. In January, around 100 employees from the IPB in Belgrade signed a letter in support of the students, one of many from various research institutions since December.

Stanković believes that the corruption and lack of accountability that students are protesting against “stem from systemic societal and political problems, including entrenched patronage networks and a lack of transparency”.

“I believe there is no turning back now,” adds Stanković. “The students have gained support from people across the academic spectrum – including those I personally agree with and others I believe bear responsibility for the current state of affairs. That, in my view, is their strength: standing firmly behind principles, not political affiliations.”

Meanwhile, Miloš Stojaković, a mathematician at the University of Novi Sad, says that the faculty at the university have backed the students from the start, especially given that they are making “a concerted effort to minimize disruptions to our scientific work”.

Many university faculties in Serbia have been blockaded by protesting students, who have been using them as a base for their demonstrations. “The situation will have a temporary negative impact on research activities,” admits Dejan Vukobratović, an electrical engineer from the University of Novi Sad. However, most researchers are “finding their way through this situation”, he adds, with “most teams keeping their project partners and funders informed about the situation, anticipating possible risks”.

Missed exams

Amidst the continuing disruptions, the Serbian national science foundation has twice delayed a deadline for the award of €24m of research grants, citing “circumstances that adversely affect the collection of project documentation”. The foundation adds that 96% of its survey participants requested an extension. The researchers’ union has also called on the government to freeze the work status of PhD students employed as research assistants or interns, to accommodate the months-long pause to their work. The government has promised to look into it.

Meanwhile, universities are setting up expert groups to figure out how to deal with the delays to studies and missed exams. Physics World approached Serbia’s government for comment, but did not receive a reply.

The post Physicists in Serbia begin strike action in support of student protests appeared first on Physics World.

  •  

Nanosensor predicts risk of complications in early pregnancy

Researchers in Australia have developed a nanosensor that can detect the onset of gestational diabetes with 95% accuracy. Demonstrated by a team led by Carlos Salomon at the University of Queensland, the superparamagnetic “nanoflower” sensor could enable doctors to detect a variety of complications in the early stages of pregnancy.

Many complications in pregnancy can have profound and lasting effects on both the mother and the developing foetus. Today, these conditions are detected using methods such as blood tests, ultrasound screening and blood pressure monitoring. In many cases, however, their sensitivity is severely limited in the earliest stages of pregnancy.

“Currently, most pregnancy complications cannot be identified until the second or third trimester, which means it can sometimes be too late for effective intervention,” Salomon explains.

To tackle this challenge, Salomon and his colleagues are investigating the use of specially engineered nanoparticles to isolate and detect biomarkers in the blood associated with complications in early pregnancy. Specifically, they aim to detect the protein molecules carried by extracellular vesicles (EVs) – tiny, membrane-bound particles released by the placenta, which play a crucial role in cell signalling.

In their previous research, the team pioneered the development of superparamagnetic nanostructures that selectively bind to specific EV biomarkers. Superparamagnetism occurs specifically in small, ferromagnetic nanoparticles, causing their magnetization to randomly flip direction under the influence of temperature. When proteins are bound to the surfaces of these nanostructures, their magnetic responses are altered detectably, providing the team with a reliable EV sensor.

“This technology has been developed using nanomaterials to detect biomarkers at low concentrations,” explains co-author Mostafa Masud. “This is what makes our technology more sensitive than current testing methods, and why it can pick up potential pregnancy complications much earlier.”

Previous versions of the sensor used porous nanocubes that efficiently captured EVs carrying a key placental protein named PLAP. By detecting unusual levels of PLAP in the blood of pregnant women, this approach enabled the researchers to detect complications far more easily than with existing techniques. However, the method generally required detection times lasting several hours, making it unsuitable for on-site screening.

In their latest study, reported in Science Advances, Salomon’s team started with a deeper analysis of the EV proteins carried by these blood samples. Through advanced computer modelling, they discovered that complications can be linked to changes in the relative abundance of PLAP and another placental protein, CD9.

Based on these findings, they developed a new superparamagnetic nanosensor capable of detecting both biomarkers simultaneously. Their design features flower-shaped nanostructures made of nickel ferrite, which were embedded into specialized testing strips to boost their sensitivity even further.

The researchers used this sensor to analyse blood samples collected from 201 pregnant women at 11 to 13 weeks’ gestation. “We detected possible complications, such as preterm birth, gestational diabetes and preeclampsia, which is high blood pressure during pregnancy,” Salomon describes. For gestational diabetes, the sensor demonstrated 95% sensitivity in identifying at-risk cases, and 100% specificity in ruling out healthy cases.
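
For readers unfamiliar with these screening metrics, sensitivity and specificity are defined from the counts of true and false positives and negatives ($TP$, $FP$, $TN$, $FN$):

$$ \text{sensitivity} = \frac{TP}{TP + FN}, \qquad \text{specificity} = \frac{TN}{TN + FP} . $$

So 95% sensitivity means that 95 out of every 100 genuinely at-risk pregnancies in the cohort were flagged, while 100% specificity means that no healthy pregnancy was wrongly flagged.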

Based on these results, the researchers are hopeful that further refinements to their nanoflower sensor could lead to a new generation of EV protein detectors, enabling the early diagnosis of a wide range of pregnancy complications.

“With this technology, pregnant women will be able to seek medical intervention much earlier,” Salomon says. “This has the potential to revolutionize risk assessment and improve clinical decision-making in obstetric care.”

The post Nanosensor predicts risk of complications in early pregnancy appeared first on Physics World.

  •  

New materials for quantum technology, how ultrasound can help detect breast cancer

In this episode of the Physics World Weekly podcast, we explore how computational physics is being used to develop new quantum materials; and we look at how ultrasound can help detect breast cancer.

Our first guest is Bhaskaran Muralidharan, who leads the Computational Nanoelectronics & Quantum Transport Group at the Indian Institute of Technology Bombay. In a conversation with Physics World’s Hamish Johnston, he explains how computational physics is being used to develop new materials and devices for quantum science and technology. He also shares his personal perspective on quantum physics in this International Year of Quantum Science and Technology.

Our second guest is Daniel Sarno of the UK’s National Physical Laboratory, who is an expert in the medical uses of ultrasound. In a conversation with Physics World’s Tami Freeman, Sarno explains why conventional mammography can struggle to detect cancer in patients with higher density breast tissue. This is a particular problem because women with such tissue are at higher risk of developing the disease. To address this problem, Sarno and colleagues have developed an ultrasound technique for measuring tissue density and are commercializing it via a company called sona.

  • Bhaskaran Muralidharan is an editorial board member on Materials for Quantum Technology. The journal is produced by IOP Publishing, which also brings you Physics World

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

The post New materials for quantum technology, how ultrasound can help detect breast cancer appeared first on Physics World.

  •  

Curious consequence of special relativity observed for the first time in the lab

A counterintuitive result from Einstein’s special theory of relativity has finally been verified more than 65 years after it was predicted. The prediction states that objects moving near the speed of light will appear rotated to an external observer, and physicists in Austria have now observed this experimentally using a laser and an ultrafast stop-motion camera.

A central postulate of special relativity is that the speed of light is the same in all reference frames. An observer who sees an object travelling close to the speed of light and makes simultaneous measurements of its front and back (in the direction of travel) will therefore find that, because photons coming from each end of the object both travel at the speed of light, the object is measurably shorter than it would be for an observer in the object’s reference frame. This is the long-established phenomenon of Lorentz contraction.

In 1959, however, two physicists, James Terrell and the future Nobel laureate Roger Penrose, independently noted something else. If the object has any significant optical depth relative to its length – in other words, if its extension parallel to the observer’s line of sight is comparable to its extension perpendicular to this line of sight, as is the case for a cube or a sphere – then photons from the far side of the object (from the observer’s perspective) will take longer to reach the observer than photons from its near side. Hence, if a camera takes an instantaneous snapshot of the moving object, it will collect photons from the far side that were emitted earlier at the same time as it collects photons from the near side that were emitted later.

This time difference stretches the image out, making the object appear longer even as Lorentz contraction makes its measurements shorter. Because the stretching and the contraction cancel out, the photographed object will not appear to change length at all.

But that isn’t the whole story. For the cancellation to work, the photons reaching the observer from the part of the object facing its direction of travel must have been emitted later than the photons that come from its trailing edge. This is because photons from the far and back sides come from parts of the object that would normally be obscured by the front and near sides. However, because the object moves in the time it takes photons to propagate, it creates a clear passage for trailing-edge photons to reach the camera.

The cumulative effect, Terrell and Penrose showed, is that instead of appearing to contract – as one would naïvely expect – a three-dimensional object photographed travelling at nearly the speed of light will appear rotated.
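
The size of the effect follows from the standard textbook treatment (a sketch of the Terrell–Penrose result for a distant observer at closest approach, not the Vienna team’s analysis). For a cube of side $L$ moving at speed $v = \beta c$ across the line of sight, the face towards the camera appears Lorentz-contracted to width $L\sqrt{1-\beta^{2}}$, while the normally hidden trailing face appears with width $\beta L$. These are exactly the projections of a stationary cube rotated through an angle $\theta$:

$$ L\sin\theta = \beta L, \qquad L\cos\theta = L\sqrt{1-\beta^{2}} \;\;\Longrightarrow\;\; \theta = \arcsin\beta . $$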

The Terrell effect in the lab

While multiple computer models have been constructed to illustrate this “Terrell effect” rotation, it has largely remained a thought experiment. In the new work, however, Peter Schattschneider of the Technical University of Vienna and colleagues realized it in an experimental setup. To do this, they shone pulsed laser light onto one of two moving objects: a sphere or a cube. The laser pulses were synchronized to a picosecond camera that collected light scattered off the object.

The researchers programmed the camera to produce a series of images at each position of the moving object. They then allowed the object to move to the next position and, when the laser pulsed again, recorded another series of ultrafast images with the camera. By linking together images recorded from the camera in response to different laser pulses, the researchers were able to, in effect, reduce the speed of light to less than 2 m/s.

When they did so, they observed that the object rotated rather than contracted, just as Terrell and Penrose predicted. While their results did deviate somewhat from theoretical predictions, this was unsurprising given that the predictions rest on certain assumptions. One of these is that incoming rays of light should be parallel to the observer, which is only true if the distance from object to observer is infinite. Another is that each image should be recorded instantaneously, whereas the shutter speed of real cameras is inevitably finite.

Because their research is awaiting publication by a journal with an embargo policy, Schattschneider and colleagues were unavailable for comment. However, the Harvard University astrophysicist Avi Loeb, who suggested in 2017 that the Terrell effect could have applications for measuring exoplanet masses, is impressed: “What [the researchers] did here is a very clever experiment where they used very short pulses of light from an object, then moved the object, and then looked again at the object and then put these snapshots together into a movie – and because it involves different parts of the body reflecting light at different times, they were able to get exactly the effect that Terrell and Penrose envisioned,” he says. Though Loeb notes that there’s “nothing fundamentally new” in the work, he nevertheless calls it “a nice experimental confirmation”.

The research is available on the arXiv pre-print server.

The post Curious consequence of special relativity observed for the first time in the lab appeared first on Physics World.

  •  

Seen a paper changed without notification? Study reveals the growing trend of ‘stealth corrections’

The integrity of science could be threatened by publishers changing scientific papers after they have been published – but without making any formal public notification. That’s the verdict of a new study by an international team of researchers, who coin such changes “stealth corrections”. They want publishers to publicly log all changes that are made to published scientific research (Learned Publishing 38 e1660).

When corrections are made to a paper after publication, it is standard practice for a notice to be added to the article explaining what has been changed and why. This transparent record keeping is designed to retain trust in the scientific record. But last year, René Aquarius, a neurosurgery researcher at Radboud University Medical Center in the Netherlands, noticed this does not always happen.

After spotting an issue with an image in a published paper, he raised concerns with the authors, who acknowledged the concerns and stated that they were “checking the original data to figure out the problem” and would keep him updated. However, Aquarius was surprised to see that the figure had been updated a month later, but without a correction notice stating that the paper had been changed.

Teaming up with colleagues from Belgium, France, the UK and the US, Aquarius began to identify and document similar stealth corrections. They did so by recording instances that they and other “science sleuths” had already found and by searching online for terms such as “no erratum”, “no corrigendum” and “stealth” on PubPeer – an online platform where users discuss and review scientific publications.

Sustained vigilance

The researchers define a stealth correction as at least one post-publication change being made to a scientific article that does not provide a correction note or any other indicator that the publication has been temporarily or permanently altered. The researchers identified 131 stealth corrections spread across 10 scientific publishers and in different fields of research. In 92 of the cases, the stealth correction involved a change in the content of the article, such as to figures, data or text.

The remaining unrecorded changes covered three categories: “author information” such as the addition of authors or changes in affiliation; “additional information”, including edits to ethics and conflict of interest statements; and “the record of editorial process”, for instance alterations to editor details and publication dates. “For most cases, we think that the issue was big enough to have a correction notice that informs the readers what was happening,” Aquarius says.

After the authors began drawing attention to the stealth corrections, five of the papers received an official correction notice, nine were given expressions of concern, 17 reverted to the original version and 11 were retracted. Aquarius says he believes it is “important” that the reader knows what has happened to a paper “so they can make up their own mind whether they want to trust [it] or not”.

The researchers would now like to see publishers implementing online correction logs that make it impossible to change anything in a published article without it being transparently reported, however small the edit. They also say that clearer definitions and guidelines are required concerning what constitutes a correction and needs a correction notice.

“We need to have sustained vigilance in the scientific community to spot these stealth corrections and also register them publicly, for example on PubPeer,” Aquarius says.

The post Seen a paper changed without notification? Study reveals the growing trend of ‘stealth corrections’ appeared first on Physics World.

  •  

How physics raised the roof: the people and places that drove the science of acoustics

Sometimes an attention-grabbing title is the best thing about a book, but not in this case. Pistols in St Paul’s: Science, Music and Architecture in the Twentieth Century, by historian Fiona Smyth, is an intriguing journey charting the development of acoustics in architecture during the first half of the 20th century.

The story begins with the startling event that gives the book its unusual moniker: the firing of a Colt revolver in the famous London cathedral in 1951. A similar experiment was also performed in the Royal Festival Hall in the same year (see above photo). Fortunately, this was simply a demonstration for journalists of an experiment to understand and improve the listening experience in a space notorious for its echo and other problematic acoustic features.

St Paul’s was completed in 1711 and Smyth, a historian of architecture, science and construction at the University of Cambridge in the UK, explains that until the turn of the last century, the only way to evaluate the quality of sound in such a building was by ear. The book then reveals how this changed. Over five decades of innovative experiments, scientists and architects built a quantitative understanding of how a building’s shape, size and interior furnishings determine the quality of speech and music through reflection and absorption of sound waves.

The evolution of architectural acoustics as a scientific field was driven by a small group of dedicated researchers

We are first taken back to the dawn of the 20th century and shown how the evolution of architectural acoustics as a scientific field was driven by a small group of dedicated researchers. This includes architect and pioneering acoustician Hope Bagenal, along with several physicists, notably Harvard-based US physicist Wallace Clement Sabine.

Details of Sabine’s career, alongside those of Bagenal, whose personal story forms the backbone for much of the book, deftly put a human face on the research that transformed these public spaces. Perhaps Sabine’s most significant contribution was the derivation of a formula to predict the time taken for sound to fade away in a room. Known as the “reverberation time”, this became a foundation of architectural acoustics, and his mathematical work still forms the basis for the field today.
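
For reference, Sabine’s formula in its usual metric form relates the reverberation time $T_{60}$ – the time for sound to decay by 60 dB – to the room volume $V$ and the total absorption $A = \sum_i S_i \alpha_i$, where the $S_i$ are the surface areas and the $\alpha_i$ their absorption coefficients:

$$ T_{60} \approx \frac{0.161\,V}{\sum_i S_i \alpha_i} \quad \text{(with } V \text{ in m}^3 \text{ and } S_i \text{ in m}^2\text{)}. $$

A large hall lined with hard, reflective surfaces therefore rings on for seconds, while adding absorbent materials – the rugs, panelling, plaster and tiles discussed below – brings the reverberation time down.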

The presence of people, objects and reflective or absorbing surfaces all affect a room’s acoustics. Smyth describes how materials ranging from rugs and timber panelling to specially developed acoustic plaster and tiles have all been investigated for their acoustic properties. She also vividly details the venues where acoustics interventions were added – such as the reflective teak flooring and vast murals painted on absorbent felt in the Henry Jarvis Memorial Hall of the Royal Institute of British Architects in London.

Other locations featured include the Royal Albert Hall, Abbey Road Studios, White Rock Pavilion at Hastings, and the Assembly Chamber of the Legislative Building in New Delhi, India. Temporary structures and spaces for musical performance are highlighted too. These include the National Gallery while it was cleared of paintings during the Second World War and the triumph of acoustic design that was the Glasgow Empire Exhibition concert hall – built for the 1938 event and sadly dismantled that same year.

Unsurprisingly, much of this acoustic work was either punctuated or heavily influenced by the two world wars. While in the trenches during the First World War, Bagenal wrote a journal paper on cathedral acoustics that detailed his pre-war work at St Paul’s Cathedral, Westminster Cathedral and Westminster Abbey. His paper discussed timbre, resonant frequency “and the effects of interference and delay on clarity and harmony”.

In 1916, back in England recovering from a shellfire injury, Bagenal started what would become a long-standing research collaboration with the commandant of the hospital where he was recuperating – who happened to be Alex Wood, a physics lecturer at Cambridge. Equally fascinating is hearing about the push in the wake of the First World War for good speech acoustics in public spaces used for legislative and diplomatic purposes.

Smyth also relates tales of the wrangling that sometimes took place over funding for acoustic experiments on public buildings, and how, as the 20th century progressed, companies specializing in acoustic materials sprang up – and in some cases made dubious claims about the merits of their products. Meanwhile, new technologies such as tape recorders and microphones helped bring a more scientific approach to architectural acoustics research.

The author concludes by describing how the acoustic research from the preceding decades influenced the auditorium design of the Royal Festival Hall on the South Bank in London, which, as Smyth states, was “the first building to have been designed from the outset as a manifestation of acoustic science”.

As evidenced by the copious notes, the wealth of contemporary quotes, and the captivating historical photos and excerpts from archive documents, this book is well-researched. But while I enjoyed the pace and found myself hooked into the story, I found the text repetitive in places, and felt that more details about the physics of acoustics would have enhanced the narrative.

But these are minor grumbles. Overall Smyth paints an evocative picture, transporting us into these legendary auditoria. I have always found it a rather magical experience attending concerts at the Royal Albert Hall. Now, thanks to this book, the next time I have that pleasure I will do so with a far greater understanding of the role physics and physicists played in shaping the music I hear. For me at least, listening will never be quite the same again.

  • 2024 Manchester University Press 328pp £25.00/$36.95

The post How physics raised the roof: the people and places that drove the science of acoustics appeared first on Physics World.

  •  

The complex and spatially heterogeneous nature of degradation in heavily cycled Li-ion cells

As service lifetimes of electric vehicle (EV) and grid storage batteries continually improve, it has become increasingly important to understand how Li-ion batteries perform after extensive cycling. Using a combination of spatially resolved synchrotron x-ray diffraction and computed tomography, the complex kinetics and spatially heterogeneous behavior of extensively cycled cells can be mapped and characterized under both near-equilibrium and non-equilibrium conditions.

This webinar shows examples of commercial cells with thousands (even tens of thousands) of cycles over many years. The behaviour of such cells can be surprisingly complex and spatially heterogeneous, requiring a different approach to analysis and modelling than what is typically used in the literature. Using this approach, we investigate the long-term behavior of Ni-rich NMC cells and examine ways to prevent degradation. This work also showcases the incredible durability of single-crystal cathodes, which show very little evidence of mechanical or kinetic degradation after more than 20,000 cycles – the equivalent to driving an EV for 8 million km!

Toby Bond

Toby Bond is a senior scientist in the Industrial Science group at the Canadian Light Source (CLS), Canada’s national synchrotron facility. He specializes in x-ray imaging and diffraction, with a focus on in-situ and operando analysis of batteries and fuel cells for industry clients of the CLS. Bond is an electrochemist by training, who completed his MSc and PhD in Jeff Dahn’s laboratory at Dalhousie University, where he developed methods and instrumentation to characterize long-term degradation in Li-ion batteries.

 

 

The post The complex and spatially heterogeneous nature of degradation in heavily cycled Li-ion cells appeared first on Physics World.

  •  

Fermilab’s Anna Grassellino: eyeing the prize of quantum advantage

The Superconducting Quantum Materials and Systems (SQMS) Center, led by Fermi National Accelerator Laboratory (near Chicago, Illinois), is on a mission “to develop beyond-the-state-of-the-art quantum computers and sensors applying technologies developed for the world’s most advanced particle accelerators”. SQMS director Anna Grassellino talks to Physics World about the evolution of a unique multidisciplinary research hub for quantum science, technology and applications.

What’s the headline take on SQMS?

Established as part of the US National Quantum Initiative (NQI) Act of 2018, SQMS is one of the five National Quantum Information Science Research Centers run by the US Department of Energy (DOE). With funding of $115m through its initial five-year funding cycle (2020-25), SQMS represents a coordinated, at-scale effort – comprising 35 partner institutions – to address pressing scientific and technological challenges for the realization of practical quantum computers and sensors, as well as exploring how novel quantum tools can advance fundamental physics.

Our mission is to tackle one of the biggest cross-cutting challenges in quantum information science: the lifetime of superconducting quantum states – also known as the coherence time (the length of time that a qubit can effectively store and process information). Understanding and mitigating the physical processes that cause decoherence – and, by extension, limit the performance of superconducting qubits – is critical to the realization of practical and useful quantum computers and quantum sensors.

How is the centre delivering versus the vision laid out in the NQI?

SQMS has brought together an outstanding group of researchers who, collectively, have utilized a suite of enabling technologies from Fermilab’s accelerator science programme – and from our network of partners – to realize breakthroughs in qubit chip materials and fabrication processes; design and development of novel quantum devices and architectures; as well as the scale-up of complex quantum systems. Central to this endeavour are superconducting materials, superconducting radiofrequency (SRF) cavities and cryogenic systems – all workhorse technologies for particle accelerators employed in high-energy physics, nuclear physics and materials science.

Collective endeavour At the core of SQMS success are top-level scientists and engineers leading the centre’s cutting-edge quantum research programmes. From left to right: Alexander Romanenko, Silvia Zorzetti, Tanay Roy, Yao Lu, Anna Grassellino, Akshay Murthy, Roni Harnik, Hank Lamm, Bianca Giaccone, Mustafa Bal, Sam Posen. (Courtesy: Hannah Brumbaugh/Fermilab)

Take our research on decoherence channels in quantum devices. SQMS has made significant progress in the fundamental science and mitigation of losses in the oxides, interfaces, substrates and metals that underpin high-coherence qubits and quantum processors. These advances – the result of wide-ranging experimental and theoretical investigations by SQMS materials scientists and engineers – led, for example, to the demonstration of transmon qubits (a type of charge qubit exhibiting reduced sensitivity to noise) with systematic improvements in coherence, record-breaking lifetimes of over a millisecond, and reductions in performance variation.

How are you building on these breakthroughs?

First of all, we have worked on technology transfer. By developing novel chip fabrication processes together with quantum computing companies, we have contributed to our industry partners’ results of up to 2.5x improvement in error performance in their superconducting chip-based quantum processors.

We have combined these qubit advances with Fermilab’s ultrahigh-coherence 3D SRF cavities: advancing our efforts to build a cavity-based quantum processor and, in turn, demonstrating the longest-lived superconducting multimode quantum processor unit ever built (coherence times in excess of 20 ms). These systems open the path to a more powerful qudit-based quantum computing approach. (A qudit is a multilevel quantum unit that can occupy more than two states.) What’s more, SQMS has already put these novel systems to use as quantum sensors within Fermilab’s particle physics programme – probing for the existence of dark-matter candidates, for example, as well as enabling precision measurements and fundamental tests of quantum mechanics.

Elsewhere, we have been pushing early-stage societal impacts of quantum technologies and applications – including the use of quantum computing methods to enhance data analysis in magnetic resonance imaging (MRI). Here, SQMS scientists are working alongside clinical experts at New York University Langone Health to apply quantum techniques to quantitative MRI, an emerging diagnostic modality that could one day provide doctors with a powerful tool for evaluating tissue damage and disease.

What technologies pursued by SQMS will be critical to the scale-up of quantum systems?

There are several important examples, but I will highlight two of specific note. For starters, there’s our R&D effort to efficiently scale millikelvin-regime cryogenic systems. SQMS teams are currently developing technologies for larger and higher-cooling-power dilution refrigerators. We have designed and prototyped novel systems allowing over 20x higher cooling power, a necessary step to enable the scale-up to thousands of superconducting qubits per dilution refrigerator.

Materials insights The SQMS collaboration is studying the origins of decoherence in state-of-the-art qubits (above) using a raft of advanced materials characterization techniques – among them time-of-flight secondary-ion mass spectrometry, cryo electron microscopy and scanning probe microscopy. With a parallel effort in materials modelling, the centre is building a hierarchy of loss mechanisms that is informing how to fabricate the next generation of high-coherence qubits and quantum processors. (Courtesy: Dan Svoboda/Fermilab)

Also, we are working to optimize microwave interconnects with very low energy loss, taking advantage of SQMS expertise in low-loss superconducting resonators and materials in the quantum regime. (Quantum interconnects are critical components for linking devices together to enable scaling to large quantum processors and systems.)

How important are partnerships to the SQMS mission?

Partnerships are foundational to the success of SQMS. The DOE National Quantum Information Science Research Centers were conceived and built as mini-Manhattan projects, bringing together the power of multidisciplinary and multi-institutional groups of experts. SQMS is a leading example of building bridges across the “quantum ecosystem” – with other national and federal laboratories, with academia and industry, and across agency and international boundaries.

In this way, we have scaled up unique capabilities – multidisciplinary know-how, infrastructure and a network of R&D collaborations – to tackle the decoherence challenge and to harvest the power of quantum technologies. A case study in this regard is Ames National Laboratory, a specialist DOE centre for materials science and engineering on the campus of Iowa State University.

Ames is a key player in a coalition of materials science experts – coordinated by SQMS – seeking to unlock fundamental insights about qubit decoherence at the nanoscale. Through Ames, SQMS and its partners get access to powerful analytical tools – modalities like terahertz spectroscopy and cryo transmission electron microscopy – that aren’t routinely found in academia or industry.

How extensive is the SQMS partner network?

All told, SQMS quantum platforms and experiments involve the collective efforts of more than 500 experts from 35 partner organizations, among them the National Institute of Standards and Technology (NIST), NASA Ames Research Center and Northwestern University, as well as leading companies in the quantum tech industry like IBM and Rigetti Computing. Our network extends internationally and includes flagship tie-ins with the UK’s National Physical Laboratory (NPL), the National Institute for Nuclear Physics (INFN) in Italy, and the Institute for Quantum Computing (University of Waterloo, Canada).

What are the drivers for your engagement with the quantum technology industry?

The SQMS strategy for industry engagement is clear: to work hand-in-hand to solve technological challenges utilizing complementary facilities and expertise; to abate critical performance barriers; and to bring bidirectional value. I believe that even large companies do not have the ability to achieve practical quantum computing systems working exclusively on their own. The challenges at hand are vast and often require R&D partnerships among experts across diverse and highly specialized disciplines.

I also believe that DOE National Laboratories – given their depth of expertise and ability to build large-scale and complex scientific instruments – are, and will continue to be, key players in the development and deployment of the first useful and practical quantum computers. This means not only as end-users, but as technology developers. Our vision at SQMS is to lay the foundations of how we are going to build these extraordinary machines in partnership with industry. It’s about learning to work together and leveraging our mutual strengths.

How do Rigetti and IBM, for example, benefit from their engagement with SQMS?

Our collaboration with Rigetti Computing, a Silicon Valley company that’s building quantum computers, has been exemplary throughout: a two-way partnership that leverages the unique enabling technologies within SQMS to boost the performance of Rigetti’s superconducting quantum processors.

The partnership with IBM, although more recent, is equally significant. Together with IBM researchers, we are interested in developing quantum interconnects – including the development of high-Q cables to make them less lossy – for the high-fidelity connection and scale-up of quantum processors into large and useful quantum computing systems.

At the same time, SQMS scientists are exploring simulations of problems in high-energy physics and condensed-matter physics using quantum computing cloud services from Rigetti and IBM.

Presumably, similar benefits accrue to suppliers of ancillary equipment to the SQMS quantum R&D programme?

Correct. We challenge our suppliers of advanced materials and fabrication equipment to go above and beyond, working closely with them on continuous improvement and new product innovation. In this way, for example, our suppliers of silicon and sapphire substrates and nanofabrication platforms – key technologies for advanced quantum circuits – benefit from SQMS materials characterization tools and fundamental physics insights that would simply not be available in isolation. These technologies are still at a stage where we need fundamental science to help define the ideal materials specifications and standards.

We are also working with companies developing quantum control boards and software, collaborating on custom solutions to unique hardware architectures such as the cavity-based qudit platforms in development at Fermilab.

How is your team building capacity to support quantum R&D and technology innovation?

We’ve pursued a twin-track approach to the scaling of SQMS infrastructure. On the one hand, we have augmented – very successfully – a network of pre-existing facilities at Fermilab and at SQMS partners, spanning accelerator technologies, materials science and cryogenic engineering. In aggregate, this covers hundreds of millions of dollars’ worth of infrastructure that we have re-employed or upgraded for studying quantum devices, including access to a host of leading-edge facilities via our R&D partners – for example, microkelvin-regime quantum platforms at Royal Holloway, University of London, and underground quantum testbeds at INFN’s Gran Sasso Laboratory.

Thinking big in quantum The SQMS Quantum Garage (above) houses a suite of R&D testbeds to support granular studies of superconducting qubits, quantum processors, high-coherence quantum sensors and quantum interconnects. (Courtesy: Ryan Postel/Fermilab)

In parallel, we have invested in new and dedicated infrastructure to accelerate our quantum R&D programme. The Quantum Garage here at Fermilab is the centrepiece of this effort: a 560 square-metre laboratory with a fleet of six additional dilution refrigerators for cryogenic cooling of SQMS experiments as well as test, measurement and characterization of superconducting qubits, quantum processors, high-coherence quantum sensors and quantum interconnects.

What is the vision for the future of SQMS?

SQMS is putting together an exciting proposal in response to a DOE call for the next five years of research. Our efforts on coherence will remain paramount. We have come a long way, but the field still needs to make substantial advances in terms of noise reduction of superconducting quantum devices. There’s great momentum and we will continue to build on the discoveries made so far.

We have also demonstrated significant progress regarding our 3D SRF cavity-based quantum computing platform. So much so that we now have a clear vision of how to implement a mid-scale prototype quantum computer with over 50 qudits in the coming years. To get us there, we will be laying out an exciting SQMS quantum computing roadmap by the end of 2025.

It’s equally imperative to address the scalability of quantum systems. Together with industry, we will work to demonstrate practical and economically feasible approaches to be able to scale up to large quantum computing data centres with millions of qubits.

Finally, SQMS scientists will work on exploring early-stage applications of quantum computers, sensors and networks. Technology will drive the science, science will push the technology – a continuous virtuous cycle that I’m certain will lead to plenty more ground-breaking discoveries.

How SQMS is bridging the quantum skills gap

Education, education, education SQMS hosted the inaugural US Quantum Information Science (USQIS) School in summer 2023. Held annually, the USQIS is organized in conjunction with other DOE National Laboratories, academia and industry. (Courtesy: Dan Svoboda/Fermilab)

As with its efforts in infrastructure and capacity-building, SQMS is addressing quantum workforce development on multiple fronts.

Across the centre, Grassellino and her management team have recruited upwards of 150 technical staff and early-career researchers over the past five years to accelerate the SQMS R&D effort. “These ‘boots on the ground’ are a mix of PhD students, postdoctoral researchers plus senior research and engineering managers,” she explains.

Another significant initiative was launched in summer 2023, when SQMS hosted nearly 150 delegates at Fermilab for the inaugural US Quantum Information Science (USQIS) School – now an annual event organized in conjunction with other National Laboratories, academia and industry. The long-term goal is to develop the next generation of quantum scientists, engineers and technicians by sharing SQMS know-how and experimental skills in a systematic way.

“The prioritization of quantum education and training is key to sustainable workforce development,” notes Grassellino. With this in mind, she is currently in talks with academic and industry partners about an SQMS-developed master’s degree in quantum engineering. Such a programme would reinforce the centre’s already diverse internship initiatives, with graduate students benefiting from dedicated placements at SQMS and its network partners.

“Wherever possible, we aim to assign our interns with co-supervisors – one from a National Laboratory, say, another from industry,” adds Grassellino. “This ensures the learning experience shapes informed decision-making about future career pathways in quantum science and technology.”

The post Fermilab’s Anna Grassellino: eyeing the prize of quantum advantage appeared first on Physics World.

  •  

‘Phononic shield’ protects mantis shrimp from its own shock waves

When a mantis shrimp uses shock waves to strike and kill its prey, how does it prevent those shock waves from damaging its own tissues? Researchers at Northwestern University in the US have answered this question by identifying a structure within the shrimp that filters out harmful frequencies. Their findings, which they obtained by using ultrasonic techniques to investigate surface and bulk wave propagation in the shrimp’s dactyl club, could lead to novel advanced protective materials for military and civilian applications.

Dactyl clubs are hammer-like structures located on each side of a mantis shrimp’s body. They store energy in elastic structures similar to springs that are latched in place by tendons. When the shrimp contracts its muscles, the latch releases, freeing the stored energy and propelling the club forward with a peak force of up to 1500 N.

This huge force (relative to the animal’s size) creates stress waves in both the shrimp’s target – typically a hard-shelled animal such as a crab or mollusc – and the dactyl club itself, explains biomechanical engineer Horacio Dante Espinosa, who led the Northwestern research effort. The club’s punch also creates bubbles that rapidly collapse to produce shockwaves in the megahertz range. “The collapse of these bubbles (a process known as cavitation collapse), which takes place in just nanoseconds, releases intense bursts of energy that travel through the target and shrimp’s club,” he explains. “This secondary shockwave effect makes the shrimp’s strike even more devastating.”

Protective phononic armour

So how do the shrimp’s own soft tissues escape damage? To answer this question, Espinosa and colleagues studied the animal’s armour using transient grating spectroscopy (TGS) and asynchronous optical sampling (ASOPS). These ultrasonic techniques respectively analyse how stress waves propagate through a material and characterize the material’s microstructure. In this work, Espinosa and colleagues used them to provide high-resolution, frequency-dependent wave propagation characteristics that previous studies had not investigated experimentally.

The team identified three distinct regions in the shrimp’s dactyl club. The outermost layer consists of a hard hydroxyapatite coating approximately 70 μm thick, which is durable and resists damage. Beneath this, an approximately 500 μm-thick layer of mineralized chitin fibres arranged in a herringbone pattern enhances the club’s fracture resistance. Deeper still, Espinosa explains, is a region that features twisted fibre bundles organized in a corkscrew-like arrangement known as a Bouligand structure. Within this structure, each successive layer is rotated relative to its neighbours, giving it a unique and crucial role in controlling how stress waves propagate through the shrimp.

“Our key finding was the existence of phononic bandgaps (through which waves within a specific frequency range cannot travel) in the Bouligand structure,” Espinosa explains. “These bandgaps filter out harmful stress waves so that they do not propagate back into the shrimp’s club and body. They thus preserve the club’s integrity and protect soft tissue in the animal’s appendage.”

The team also employed finite element simulations incorporating so-called Bloch-Floquet analyses and graded mechanical properties to understand the phononic bandgap effects. The most surprising result, Espinosa tells Physics World, was the formation of a flat branch in the 450 to 480 MHz range, which corresponds to the frequencies produced by bubble collapse during club impact.
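
The bandgap idea itself can be illustrated with a much simpler one-dimensional toy model. For a periodic stack of two alternating layers, the Bloch-Floquet condition cos(qa) = Tr(M)/2 has no real solution for the Bloch wavenumber q wherever |Tr(M)/2| > 1, and waves at those frequencies cannot propagate. The Python sketch below implements this with a standard acoustic transfer-matrix calculation; the layer densities, sound speeds and thicknesses are placeholders, not measured properties of the shrimp’s club.

    import numpy as np

    def layer_matrix(freq, rho, c, d):
        """Acoustic transfer matrix of one layer (pressure-velocity formulation)."""
        k = 2 * np.pi * freq / c              # wavenumber in the layer
        Z = rho * c                           # acoustic impedance
        return np.array([[np.cos(k * d),          1j * Z * np.sin(k * d)],
                         [1j * np.sin(k * d) / Z, np.cos(k * d)]])

    # Hypothetical bilayer unit cell loosely inspired by stiff/compliant chitin layers
    rho1, c1, d1 = 1800.0, 3500.0, 2e-6       # density (kg/m^3), speed (m/s), thickness (m)
    rho2, c2, d2 = 1200.0, 1800.0, 2e-6

    freqs = np.linspace(1e6, 1000e6, 4000)    # 1 MHz to 1 GHz
    in_gap = []
    for f in freqs:
        M = layer_matrix(f, rho1, c1, d1) @ layer_matrix(f, rho2, c2, d2)
        # |Tr(M)/2| > 1 means no real Bloch wavenumber exists: a bandgap frequency
        in_gap.append(abs(np.real(np.trace(M)) / 2) > 1)

    gap_freqs = freqs[np.array(in_gap)]
    if gap_freqs.size:
        print(f"First bandgap starts near {gap_freqs[0] / 1e6:.0f} MHz")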

Evolution and its applications

For Espinosa and his colleagues, a key goal of their research is to understand how evolution leads to natural composite materials with unique photonic, mechanical and thermal properties. In particular, they seek to uncover how hierarchical structures in natural materials and the chemistry of their constituents produce emergent mechanical properties. “The mantis shrimp’s dactyl club is an example of how evolution leads to materials capable of resisting extreme conditions,” Espinosa says. “In this case, it is the violent impacts the animal uses for predation or protection.”

The properties of the natural “phononic shield” unearthed in this work might inspire advanced protective materials for both military and civilian applications, he says. Examples could include the design of helmets, personnel armour, and packaging for electronics and other sensitive devices.

In this study, which is described in Science, the researchers analysed two-dimensional simulations of wave behaviour. Future research, they say, should focus on more complex three-dimensional simulations to fully capture how the club’s structure interacts with shock waves. “Designing aquatic experiments with state-of-the-art instrumentation would also allow us to investigate how phononic properties function in submerged underwater conditions,” says Espinosa.

The team would also like to use biomimetics to make synthetic metamaterials based on the insights gleaned from this work.

The post ‘Phononic shield’ protects mantis shrimp from its own shock waves appeared first on Physics World.

  •  

Thirty years of the Square Kilometre Array: here’s what the world’s largest radio telescope project has achieved so far

From its sites in South Africa and Australia, the Square Kilometre Array (SKA) Observatory last year achieved “first light” – producing its first-ever images.  When its planned 197 dishes and 131,072 antennas are fully operational, the SKA will be the largest and most sensitive radio telescope in the world.

Under the umbrella of a single observatory, the telescopes at the two sites will work together to survey the cosmos. The Australian side, known as SKA-Low, will focus on low frequencies, while South Africa’s SKA-Mid will observe middle-range frequencies. The £1bn telescopes, which are projected to begin making science observations in 2028, were built to shed light on some of the most intractable problems in astronomy, such as how galaxies form, the nature of dark matter, and whether life exists on other planets.

Three decades in the making, the SKA will stand on the shoulders of many smaller experiments and telescopes – a suite of so-called “precursors” and “pathfinders” that have trialled new technologies and shaped the instrument’s trajectory. The 15 pathfinder experiments dotted around the planet are exploring different aspects of SKA science.

Meanwhile, on the SKA sites in Australia and South Africa, there are four precursor telescopes – MeerKAT and HERA in South Africa, and the Australian SKA Pathfinder (ASKAP) and the Murchison Widefield Array (MWA) in Australia. These precursors are weathering the arid local conditions and are already broadening scientists’ understanding of the universe.

“The SKA was the big, ambitious end game that was going to take decades,” says Steven Tingay, director of the MWA based in Bentley, Australia. “Underneath that umbrella, a huge number of already fantastic things have been done with the precursors, and they’ve all been investments that have been motivated by the path to the SKA.”

Even as technology and science testbeds, “they have far surpassed what anyone reasonably expected of them”, adds Emma Chapman, a radio astronomer at the University of Nottingham, UK.

MeerKAT: glimpsing the heart of the Milky Way

In 2018, radio astronomers in South Africa were scrambling to pull together an image for the inauguration of the 64-dish MeerKAT radio telescope. MeerKAT will eventually form the heart of SKA-Mid, picking up frequencies between 350 megahertz and 15.4 gigahertz, and the researchers wanted to show what it was capable of.

As you’ve never seen it before A radio image of the centre of the Milky Way taken by the MeerKAT telescope. The elongated radio filaments visible emanating from the heart of the galaxy are 10 times more numerous than in any previous image. (Courtesy: I. Heywood, SARAO)

Like all the SKA precursors, MeerKAT is an interferometer, with many dishes acting like a single giant instrument. MeerKAT’s dishes stand about three storeys high with a diameter of 13.5 m, and the largest distance between dishes is about 8 km. This is part of what gives the interferometer its power: longer baselines between dishes give the telescope finer angular resolution.

Additional dishes will be integrated into the interferometer to form SKA-Mid. The new dishes will be larger (with diameters of 15 m) and further apart (with baselines of up to 150 km), making it much more sensitive than MeerKAT on its own. Nevertheless, using just the provisional data from MeerKAT, the researchers were able to mark the unveiling of the telescope with the clearest radio image yet of our galactic centre.
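
To get a feel for what those baselines buy, the diffraction limit says that an interferometer’s angular resolution is roughly the observing wavelength divided by its longest baseline. Here is a quick sketch using an illustrative 1.4 GHz observing frequency together with the baselines quoted above:

    import math

    C = 299_792_458.0                          # speed of light (m/s)

    def resolution_arcsec(freq_hz, baseline_m):
        """Diffraction-limited angular resolution, theta ~ lambda / B, in arcseconds."""
        theta_rad = (C / freq_hz) / baseline_m
        return math.degrees(theta_rad) * 3600

    print(f"8 km baseline (MeerKAT-like):   {resolution_arcsec(1.4e9, 8e3):.1f} arcsec")
    print(f"150 km baseline (SKA-Mid-like): {resolution_arcsec(1.4e9, 150e3):.2f} arcsec")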

Now, we finally see the big picture – a panoramic view filled with an abundance of filaments…. This is a watershed in furthering our understanding of these structures

Farhad Yusef-Zadeh

Four years later, an international team used the MeerKAT data to produce an even more detailed image of the centre of the Milky Way (ApJL 949 L31). The image (above) shows radio-emitting filaments up to 150 light-years in length unspooling from the heart of the galaxy. These structures, whose origin remains unknown, were first observed in 1984, but the new image revealed 10 times more than had ever been seen before.

“We have studied individual filaments for a long time with a myopic view,” Farhad Yusef-Zadeh, an astronomer at Northwestern University in the US and an author on the image paper, said at the time. “Now, we finally see the big picture – a panoramic view filled with an abundance of filaments. This is a watershed in furthering our understanding of these structures.”

The image resembles a “glorious artwork, conveying how bright black holes are in radio waves, but with the busyness of the galaxy going on around it”, says Chapman. “Runaway pulsars, supernovae remnant bubbles, magnetic field lines – it has it all.”

In a different area of astronomy, MeerKAT “has been a surprising new contender in the field of pulsar timing”, says Natasha Hurley-Walker, an astronomer at the Curtin University node of the International Centre for Radio Astronomy Research in Bentley. Pulsars are rotating neutron stars that produce periodic pulses of radiation hundreds of times a second. MeerKAT’s sensitivity, combined with its precise time-stamping, allows it to accurately map these powerful radio sources.

An experiment called the MeerKAT Pulsar Timing Array has been observing a group of 80 pulsars once a fortnight since 2019 and is using them as “cosmic clocks” to create a map of gravitational-wave sources. “If we see pulsars in the same direction in the sky lose time in a connected way, we start suspecting that it is not the pulsars that are acting funny but rather a gravitational wave background that has interfered,” says Marisa Geyer, an astronomer at the University of Cape Town and a co-author on several papers about the array published last year.
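
The “connected” behaviour Geyer describes has a definite expected shape: for an isotropic background of gravitational waves, the correlation between the timing deviations of two pulsars depends only on the angle between them, following the Hellings-Downs curve. Below is a minimal sketch of that textbook curve in one common normalization – it is not the MeerKAT analysis pipeline.

    import numpy as np

    def hellings_downs(theta_rad):
        """Expected correlation of timing residuals for two pulsars separated by an
        angle theta, assuming an isotropic gravitational-wave background."""
        x = (1.0 - np.cos(theta_rad)) / 2.0
        if x == 0:
            return 0.5                  # co-located pulsars, excluding self-noise
        return 1.5 * x * np.log(x) - x / 4.0 + 0.5

    for deg in (10, 50, 90, 130, 180):
        print(f"{deg:3d} deg apart -> correlation {hellings_downs(np.radians(deg)):+.3f}")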

HERA: the first stars and galaxies

When astronomers dreamed up the idea for the SKA about 30 years ago, they wanted an instrument that could not only capture a wide view of the universe but was also sensitive enough to look far back in time. In the first billion years after the Big Bang, the universe cooled enough for hydrogen and helium to form, eventually clumping into stars and galaxies.

When these early stars began to shine, their light stripped electrons from the primordial hydrogen that still populated most of the cosmos – a period of cosmic history known as the Epoch of Reionization. The neutral hydrogen gave off a faint radio signal that faded away as the gas was ionized, and catching glimpses of this ancient radiation remains one of the major science goals of the SKA.

Developing methods to identify these primordial hydrogen signals is the job of the Hydrogen Epoch of Reionization Array (HERA) – a collection of hundreds of 14 m dishes, packed closely together as they watch the sky, like bowls made of wire mesh (see image below). They have been specifically designed to observe fluctuations in primordial hydrogen in the low-frequency range of 100 MHz to 200 MHz.

Echoes of the early universe The HERA telescope is listening for the faint signals from the first primordial hydrogen that formed after the Big Bang. (Courtesy: South African Radio Astronomy Observatory (SARAO))

Understanding this mysterious epoch sheds light on how young cosmic objects influenced the formation of larger ones and later seeded other objects in the universe. Scientists using HERA data have already reported the most sensitive power limits on the reionization signal (ApJ 945 124), bringing us closer to pinning down what the early universe looked like and how it evolved, and will eventually guide SKA observations. “It always helps to be able to target things better before you begin to build and operate a telescope,” explains HERA project manager David de Boer, an astronomer at the University of California, Berkeley in the US.

MWA: “unexpected” new objects

Over in Australia, meanwhile, the MWA’s 4096 antennas crouch on the red desert sand like spiders (see image below). This interferometer has a particularly wide-field view because, unlike its mid-frequency precursor cousins, it has no moving parts, allowing it to view large parts of the sky at the same time. Each antenna also contains a low-noise amplifier in its centre, boosting the relatively weak low-frequency signals from space. “In a single observation, you cover an enormous fraction of the sky”, says Tingay. “That’s when you can start to pick up rare events and rare objects.”

Sharp eyes With its wide field of view and low-noise signal amplifiers, the MWA telescope in Australia is poised to spot brief and rare cosmic events, and it has already discovered a new class of mysterious radio transients. (Courtesy: Marianne Annereau, 2015 Murchison Widefield Array (MWA))

Hurley-Walker and colleagues discovered one such object a few years ago – repeated, powerful blasts of radio waves that occurred every 18 minutes and lasted about a minute. These signals were an example of a “radio transient” – astrophysical phenomena that last for milliseconds to years, and may repeat or occur just once. Radio transients have been attributed to many sources including pulsars, but the period of this event was much longer than had ever been observed before.

New transients are challenging our current models of stellar evolution

Cathryn Trott, Curtin Institute of Radio Astronomy in Bentley, Australia

After the researchers first noticed this signal, they followed up with other telescopes and searched archival data from other observatories going back 30 years to confirm the peculiar time scale. “This has spurred observers around the world to look through their archival data in a new way, and now many new similar sources are being discovered,” Hurley-Walker says.

The discovery of new transients, including this one, are “challenging our current models of stellar evolution”, according to Cathryn Trott, a radio astronomer at the Curtin Institute of Radio Astronomy in Bentley, Australia. “No one knows what they are, how they are powered, how they generate radio waves, or even whether they are all the same type of object,” she adds.

This is something that the SKA – both SKA-Mid and SKA-Low – will investigate. The Australian SKA-Low antennas detect frequencies between 50 MHz and 350 MHz. They build on some of the techniques trialled by the MWA, such as using low-frequency antennas and combining their received signals into a digital beam. SKA-Low, with its similarly wide field of view, will offer a powerful new perspective on this developing area of astronomy.
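
Combining antenna signals into a digital beam boils down to giving each antenna a phase offset and summing, so that signals from a chosen direction add up coherently. The sketch below does this for a simple line of antennas; the element count, spacing and observing frequency are illustrative rather than the MWA’s or SKA-Low’s actual parameters.

    import numpy as np

    C = 299_792_458.0                          # speed of light (m/s)

    def steering_weights(n_antennas, spacing_m, freq_hz, steer_deg):
        """Phase weights that point a uniform line of antennas towards steer_deg."""
        wavelength = C / freq_hz
        positions = np.arange(n_antennas) * spacing_m
        phase = 2 * np.pi * positions * np.sin(np.radians(steer_deg)) / wavelength
        return np.exp(-1j * phase)

    def beam_power(weights, spacing_m, freq_hz, look_deg):
        """Normalized power response of the beamformed array in direction look_deg."""
        arrival = steering_weights(len(weights), spacing_m, freq_hz, look_deg)
        return abs(np.vdot(weights, arrival)) ** 2 / len(weights) ** 2

    w = steering_weights(n_antennas=16, spacing_m=1.1, freq_hz=150e6, steer_deg=20)
    for look in (0, 10, 20, 30):
        print(f"response at {look:2d} deg: {beam_power(w, 1.1, 150e6, look):.3f}")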

ASKAP: giant sky surveys

The 36-dish ASKAP saw first light in 2012, the same year it was decided to split the SKA between Australia and South Africa. ASKAP was part of Australia’s efforts to prove that it could host the massive telescope, but it has since become an important instrument in its own right. These dishes use a technology called a phased array feed which allows the telescope to view different parts of the sky simultaneously.

Each dish contains one of these phased array feeds, which consists of 188 receivers arranged like a chessboard. With this technology, ASKAP can produce 36 concurrent beams that together cover about 30 square degrees of sky. This means it has a wide field of view, says de Boer, who was ASKAP’s inaugural director in 2010. In its first large-area survey, published in 2020, astronomers stitched together 903 images and identified more than 3 million sources of radio emission in the southern sky, many of which were new (PASA 37 e048).

CSIRO’s ASKAP antennas at the Murchison Radioastronomy Observatory in Western Australia
Down under The ASKAP telescope array in Australia was used to demonstrate Australia’s capability to host the SKA. Able to rapidly take wide surveys of the sky, it is also a valuable scientific instrument in its own right, and has made significant discoveries in the study of fast radio bursts. (Courtesy: CSIRO)

Because it can quickly survey large areas of the sky, the telescope has shown itself to be particularly adept at identifying and studying new fast radio bursts (FRBs). Discovered in 2007, FRBs are another kind of radio transient. They have been observed in many galaxies, and though some repeat, most are detected only once.

This work is also helping scientists to understand one of the universe’s biggest mysteries. For decades, researchers have puzzled over the fact that the detectable normal matter in the universe adds up to only about half the amount that we know existed after the Big Bang. The dispersion of FRB signals by this “missing matter” – lower radio frequencies arrive slightly later, by an amount proportional to the column of ionized gas the burst has travelled through – allows researchers to weigh all of the normal matter between us and the distant galaxies hosting the bursts.
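
The weighing relies on a simple relation: the extra arrival-time delay between two observing frequencies scales with the dispersion measure, the column density of free electrons along the line of sight. Here is a short sketch using the standard cold-plasma delay formula, with an illustrative dispersion measure and frequency pair rather than values from a specific ASKAP burst:

    def dispersion_delay_ms(dm_pc_cm3, f_low_ghz, f_high_ghz):
        """Arrival-time delay between two frequencies for a given dispersion measure,
        using the standard cold-plasma dispersion constant (~4.15 ms GHz^2 cm^3/pc)."""
        return 4.15 * dm_pc_cm3 * (f_low_ghz ** -2 - f_high_ghz ** -2)

    # Illustrative burst with DM = 500 pc/cm^3 observed between 1.2 and 1.4 GHz
    print(f"delay across the band: {dispersion_delay_ms(500, 1.2, 1.4):.0f} ms")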

By combing through ASKAP data, researchers in 2020 also discovered a new class of radio sources, which they dubbed “odd radio circles” (PASA 38 e003). These are giant rings of radiation that are observed only in radio waves.  Five years later their origins remain a mystery, but some scientists maintain they are flashes from ancient star formation.

The precursors are so important. They’ve given us new questions. And it’s incredibly exciting

Philippa Hartley, SKAO, Manchester

While SKA has many concrete goals, it is these unexpected discoveries that Philippa Hartley, a scientist at the SKAO, based near Manchester, is most excited about. “We’ve got so many huge questions that we’re going to use the SKA to try and answer, but then you switch on these new telescopes, you’re like, ‘Whoa! We didn’t expect that.’” That is why the precursors are so important. “They’ve given us new questions. And it’s incredibly exciting,” she adds.

Trouble on the horizon

As well as pushing the boundaries of astronomy and shaping the design of the SKA, the precursors have made a discovery much closer to home – one that could be a significant issue for the telescope. In a development that the SKA’s founders would not have foreseen, the race to fill the skies with constellations of satellites is a problem both for the precursors and for the SKA itself.

Large corporations, including SpaceX in Hawthorne, California, OneWeb in London, UK, and Amazon’s Project Kuiper in Seattle, Washington, have launched more than 6000 communications satellites into space. Many others are also planned, including more than 12,000 from the Shanghai Spacecom Satellite Technology’s G60 Starlink based in Shanghai. These satellites, as well as global positioning satellites, are “photobombing” astronomy observatories and affecting observations across the electromagnetic spectrum.

The wild, wild west Satellite constellations are causing interference with ground-based observatories. (Courtesy: iStock/yucelyilmaz)

ASKAP,  MeerKAT and the MWA have all flagged the impact of satellites on their observations. “The likelihood of a beam of a satellite being within the beam of our telescopes is vanishingly small and is easily avoided,” says Robert Braun, SKAO director of science. However, because they are everywhere, these satellites still introduce background radio interference that contaminates observations, he says.

In 2022, the International Astronomical Union (IAU) launched its Centre for the Protection of the Dark and Quiet Sky from Satellite Constellation Interference. The SKA Observatory and the US National Science Foundation’s centre for ground-based optical astronomy NOIRLab co-host the facility, which aims to reduce the impact of these satellite constellations.

Although the SKA Observatory is engaging with individual companies to devise engineering solutions, “we really can’t be in a situation where we have bespoke solutions with all of these companies”, SKAO director-general Phil Diamond told a side event at the IAU general assembly in Cape Town last year. “That’s why we’re pursuing the regulatory and policy approach so that there are systems in place,” he said. “At the moment, it’s a bit like the wild, wild west and we do need a sheriff to stride into town to help put that required protection in place.”

In this, too, SKA precursors are charting a path forward, identifying ways to observe even with mega satellite constellations staring down at them. When the full SKA telescopes finally come online in 2028, the discoveries they make will, in large part, be thanks to the telescopes that came before them.

The post Thirty years of the Square Kilometre Array: here’s what the world’s largest radio telescope project has achieved so far appeared first on Physics World.

  •  

Firefly Aerospace’s Blue Ghost mission achieves perfect lunar landing

The US firm Firefly Aerospace has claimed to be the first commercial company to achieve “a fully successful soft landing on the Moon”. Yesterday, the company’s Blue Ghost lunar lander touched down on the Moon’s surface in an “upright, stable configuration”. It will now operate for 14 days, during which it will drill into the lunar soil and image a total eclipse in which the Earth blocks the Sun as seen from the Moon.

Blue Ghost was launched on 15 January from NASA’s Kennedy Space Center in Florida via a SpaceX Falcon 9 rocket. Following a 45-day trip, the craft landed in Mare Crisium, touching down within its 100 m landing target next to a volcanic feature called Mons Latreille.

The mission is carrying 10 NASA instruments, which includes a lunar subsurface drill, sample collector, X-ray imager and dust-mitigation experiments. “With the hardest part behind us, Firefly looks forward to completing more than 14 days of surface operations, again raising the bar for commercial cislunar capabilities,” notes Shea Ferring, chief technology officer at Firefly Aerospace.

In February 2024 the Houston-based company Intuitive Machines became the first private firm to soft-land on the Moon with its Odysseus mission. Yet it suffered a few hiccups prior to touchdown and, rather than landing vertically, did so at a 30-degree angle, which affected radio-transmission rates.

The Firefly mission is part of NASA’s Commercial Lunar Payload Services initiative, which contracts the private sector to develop missions with the aim of reducing costs.

Firefly’s Blue Ghost Mission 2 is expected to launch next year, where it will aim to land on the far side of the Moon. “With annual lunar missions, Firefly is paving the way for a lasting lunar presence that will help unlock access to the rest of the solar system for our nation, our partners, and the world,” notes Jason Kim, chief executive officer of Firefly Aerospace.

The post Firefly Aerospace’s Blue Ghost mission achieves perfect lunar landing appeared first on Physics World.

  •  

Ask me anything: Artur Ekert – ‘Nature doesn’t know that we divided all phenomena into physics, chemistry and biology’

What skills do you use every day in your job?

Apart from the usual set of mathematical skills ranging from probability theory and linear algebra to aspects of cryptography, the most valuable skill is the ability to think in a critical and dissecting way. Also, one mustn’t be afraid to go in different directions and connect dots. In my particular case, I was lucky enough that I knew the foundations of quantum physics and the problems that cryptographers were facing and I was able to connect the two. So I would say it’s important to have a good understanding of topics outside your narrow field of interest. Nature doesn’t know that we divided all phenomena into physics, chemistry and biology, but we still put ourselves in those silos and don’t communicate with each other.

Artur Ekert flying a small plane
Flying high and low “Physics – not just quantum mechanics, but all its aspects – deeply shapes my passion for aviation and scuba diving,” says Artur Ekert. “Experiencing and understanding the world above and below brings me great joy and often clarifies the fine line between adventure and recklessness.” (Courtesy: Artur Ekert)

What do you like best and least about your job?

Least is easy, all admin aspects of it. Best is meeting wonderful people. That means not only my senior colleagues – I was blessed with wonderful supervisors and mentors – but also the junior colleagues, students and postdocs that I work with. This job is a great excuse to meet interesting people.

What do you know today that you wish you’d known at the start of your career?

That it’s absolutely fine to follow your instincts and your interests without paying too much attention to practicalities. But of course that is a post-factum statement. Maybe you need to pay attention to certain practicalities to get to the comfortable position where you can make the statement I just expressed.

The post Ask me anything: Artur Ekert – ‘Nature doesn’t know that we divided all phenomena into physics, chemistry and biology’ appeared first on Physics World.

  •  

Harvard’s springtail-like jumping robot leaps into action

Globular springtails (Dicyrtomina minuta) are small bugs about five millimetres long that can be seen crawling through leaf litter and garden soil. While they do not have wings and cannot fly, they more than make up for it with their ability to hop relatively large heights and distances.

This jumping feat is thanks to a tail-like appendage on their abdomen called a furcula, which is folded in beneath their body, held under tension.

When released, it snaps against the ground in as little as 20 milliseconds, flipping the springtail up to 6 cm into the air and 10 cm horizontally.

Researchers at the Harvard John A Paulson School of Engineering and Applied Sciences have now created a robot that mimics this jumping ability.

They modified a cockroach-inspired robot to include a latch-mediated spring actuator, in which potential energy is stored in an elastic element – essentially a robotic fork-like furcula.

Via computer simulations and experiments to control the length of the linkages in the furcula as well as the energy stored in them, the team found that the robot could jump some 1.4 m horizontally, or 23 times its body length – the longest of any existing robot relative to body length.
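
For a rough sense of what such a jump demands, treating the robot as a simple projectile gives the launch speed needed to cover 1.4 m, and from that the energy the spring must deliver. The 45-degree launch angle and the 10 g mass below are assumptions for illustration, not figures from the Harvard study, and air drag is ignored.

    import math

    g = 9.81                            # gravitational acceleration (m/s^2)
    jump_range = 1.4                    # reported horizontal jump distance (m)
    launch_angle = math.radians(45)     # assumed launch angle (hypothetical)
    mass = 0.010                        # assumed robot mass of 10 g (hypothetical)

    # Ballistic range R = v^2 * sin(2*theta) / g, solved for the launch speed v
    v = math.sqrt(jump_range * g / math.sin(2 * launch_angle))
    energy = 0.5 * mass * v ** 2        # kinetic energy the spring must supply

    print(f"launch speed ~ {v:.1f} m/s, stored energy ~ {energy * 1000:.0f} mJ")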

The work could help design robots that can traverse places that are hazardous to humans.

“Walking provides a precise and efficient locomotion mode but is limited in terms of obstacle traversal,” notes Harvard’s Robert Wood. “Jumping can get over obstacles but is less controlled. The combination of the two modes can be effective for navigating natural and unstructured environments.”

The post Harvard’s springtail-like jumping robot leaps into action appeared first on Physics World.

  •  

Optical sensors could improve the comfort of indoor temperatures

The internal temperature of a building is important – particularly in offices and work environments – for maximizing comfort and productivity. Managing the temperature is also essential for reducing the energy consumption of a building. In the US, buildings account for around 29% of total end-use energy consumption, with more than 40% of this energy dedicated to managing the internal temperature of a building via heating and cooling.

The human body is sensitive to both radiative and convective heat. The convective part revolves around humidity and air temperature, whereas radiative heat depends upon the surrounding surface temperatures inside the building. Understanding both thermal aspects is key for balancing energy consumption with occupant comfort. However, there are not many practical methods available for measuring the impact of radiative heat inside buildings. Researchers from the University of Minnesota Twin Cities have developed an optical sensor that could help solve this problem.

Limitation of thermostats for radiative heat

Room thermostats are used in almost every building today to regulate the internal temperature and improve the comfort levels for the occupants. However, modern thermostats only measure the local air temperature and don’t account for the effects of radiant heat exchange between surfaces and occupants, resulting in suboptimal comfort levels and inefficient energy use.

Finding a way to measure the mean radiant temperature in real time inside buildings could provide a more efficient way of heating the building – leading to more advanced and efficient thermostat controls. Currently, radiant temperature can be measured using either radiometers or black globe sensors. But radiometers are too expensive for commercial use, and black globe sensors are slow, bulky and error-prone in many internal environments.

In search of a new approach, first author Fatih Evren (now at Pacific Northwest National Laboratory) and colleagues used low-resolution, low-cost infrared sensors to measure the longwave mean radiant temperature inside buildings. These sensors eliminate the pan/tilt mechanism (where sensors rotate periodically to measure the temperature at different points and an algorithm determines the surface temperature distribution) required by many other sensors used to measure radiative heat. The new optical sensor also requires 4.5 times less computation power than pan/tilt approaches with the same resolution.

Integrating optical sensors to improve room comfort

The researchers tested infrared thermal array sensors with 32 x 32 pixels in four real-world environments (three living spaces and an office) with different room sizes and layouts. They examined three sensor configurations: one sensor on each of the room’s four walls; two sensors; and a single-sensor setup. The sensors measured the mean radiant temperature for 290 h at internal temperatures of between 18 and 26.8 °C.

The optical sensors capture raw 2D thermal data containing temperature information for adjacent walls, floor and ceiling. To determine surface temperature distributions from these raw data, the researchers used projective homographic transformations – transformations between two geometric planes. The surfaces of the room were segmented into a homography matrix by marking the corners of the room. Applying the transformations to this matrix provides the temperature distribution on each of the surfaces. The surface temperatures can then be used to calculate the mean radiant temperature.
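
The final step can be sketched simply: once each surface has a temperature, the mean radiant temperature experienced by an occupant is a view-factor-weighted average of the surface temperatures, weighted by the fourth power because radiative exchange follows the Stefan-Boltzmann law. The room temperatures and view factors below are invented for illustration and are not data from the paper.

    import numpy as np

    def mean_radiant_temperature(surface_temps_c, view_factors):
        """Mean radiant temperature (deg C) from surface temperatures weighted by the
        occupant's view factor to each surface (view factors must sum to 1)."""
        t_kelvin = np.asarray(surface_temps_c) + 273.15
        f = np.asarray(view_factors)
        assert np.isclose(f.sum(), 1.0)
        return float(np.sum(f * t_kelvin ** 4) ** 0.25 - 273.15)

    # Hypothetical room: four walls, ceiling and floor with uneven temperatures
    temps = [21.0, 19.5, 24.0, 20.0, 22.0, 18.5]
    views = [0.18, 0.18, 0.18, 0.18, 0.14, 0.14]
    print(f"mean radiant temperature ~ {mean_radiant_temperature(temps, views):.1f} deg C")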

The team compared the temperatures measured by their sensors against ground truth measurements obtained via the net-radiometer method. The optical sensor was found to be repeatable and reliable for different room sizes, layouts and temperature sensing scenarios, with most approaches agreeing within ±0.5 °C of the ground truth measurement, and a maximum error (arising from a single-sensor configuration) of only ±0.96 °C. The optical sensors were also more accurate than the black globe sensor method, which tends to have higher errors due to under/overestimating solar effects.

The researchers conclude that the sensors are repeatable, scalable and predictable, and that they could be integrated into room thermostats to improve human comfort and energy efficiency – especially for controlling the radiant heating and cooling systems now commonly used in high-performance buildings. They also note that a future direction could be to integrate machine learning and other advanced algorithms to improve the calibration of the sensors.

This research was published in Nature Communications.

The post Optical sensors could improve the comfort of indoor temperatures appeared first on Physics World.

  •  

Black hole’s shadow changes from one year to the next

New statistical analyses of the supermassive black hole M87* may explain changes observed since it was first imaged. The findings, from the same Event Horizon Telescope (EHT) that produced the iconic first image of a black hole’s shadow, confirm that M87*’s rotational axis points away from Earth. The analyses also indicate that turbulence within the rotating envelope of gas that surrounds the black hole – the accretion disc – plays a role in changing its appearance.

The first image of M87*’s shadow was based on observations made in 2017, though the image itself was not released until 2019. It resembles a fiery doughnut, with the shadow appearing as a dark region around three times the diameter of the black hole’s event horizon (the point beyond which even light cannot escape its gravitational pull) and the accretion disc forming a bright ring around it.

Because the shadow is caused by the gravitational bending and capture of light at the event horizon, its size and shape can be used to infer the black hole’s mass. The larger the shadow, the higher the mass. In 2019, the EHT team calculated that M87* has a mass of about 6.5 billion times that of our Sun, in line with previous theoretical predictions. Team members also determined that the radius of the event horizon is 3.8 micro-arcseconds; that the black hole is rotating in a clockwise direction; and that its spin points away from us.
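
Those numbers hang together in a back-of-the-envelope calculation: general relativity predicts a shadow diameter of roughly 2√27 gravitational radii (about 10 GM/c²) for a non-rotating black hole, so the measured mass, combined with the commonly quoted distance to M87 of about 16.8 Mpc, implies a shadow of around 40 micro-arcseconds – close to what the EHT images show. A Python sketch under those simplifying assumptions:

    import math

    G = 6.674e-11                       # gravitational constant (m^3 kg^-1 s^-2)
    C = 2.998e8                         # speed of light (m/s)
    M_SUN = 1.989e30                    # solar mass (kg)
    PC = 3.086e16                       # parsec (m)

    mass = 6.5e9 * M_SUN                # EHT mass estimate for M87*
    distance = 16.8e6 * PC              # commonly quoted distance to M87 (~16.8 Mpc)

    r_g = G * mass / C ** 2                      # gravitational radius GM/c^2
    shadow_diameter = 2 * math.sqrt(27) * r_g    # ~10.4 r_g, non-rotating case
    theta_rad = shadow_diameter / distance
    theta_uas = math.degrees(theta_rad) * 3600 * 1e6

    print(f"predicted shadow diameter ~ {theta_uas:.0f} micro-arcseconds")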

Hot and violent region

The latest analysis focuses less on the shadow and more on the bright ring outside it. As matter accelerates, it produces huge amounts of light. In the vicinity of the black hole, this acceleration occurs as matter is sucked into the black hole, but it also arises when matter is blasted out in jets. The way these jets form is still not fully understood, but some astrophysicists think magnetic fields could be responsible. Indeed, in 2021, when researchers working on the EHT analysed the polarization of light emitted from the bright region, they concluded that only the presence of a strongly magnetized gas could explain their observations.

The team has now combined an analysis of EHT observations made in 2018 with a re-analysis of the 2017 results using a Bayesian approach. This statistical technique, applied for the first time in this context, treats the two sets of observations as independent experiments. This is possible because the event horizon of M87* is about a light-day across, so the accretion disc should present a new version of itself every few days, explains team member Avery Broderick from the Perimeter Institute and the University of Waterloo, both in Canada. In more technical language, the gap between observations exceeds the correlation timescale of the turbulent environment surrounding the black hole.

New result reinforces previous interpretations

The part of the ring that appears brightest to us stems from the relativistic movement of material in a clockwise direction as seen from Earth. In the original 2017 observations, this bright region was further “south” on the image than the EHT team expected. However, when members of the team compared these observations with those from 2018, they found that the region reverted to its mean position. This result corroborated computer simulations of the general relativistic magnetohydrodynamics of the turbulent environment surrounding the black hole.

Even in the 2018 observations, though, the ring remains brightest at the bottom of the image. According to team member Bidisha Bandyopadhyay, a postdoctoral researcher at the Universidad de Concepción in Chile, this finding provides substantial information about the black hole’s spin and reinforces the EHT team’s previous interpretation of its orientation: the black hole’s rotational axis is pointing away from Earth. The analyses also reveal that the turbulence within the accretion disc can help explain the differences observed in the bright region from one year to the next.

Very long baseline interferometry

To observe M87* in detail, the EHT team needed an instrument with an angular resolution comparable to the black hole’s event horizon, which is around tens of micro-arcseconds across. Achieving this resolution with an ordinary telescope would require a dish the size of the Earth, which is clearly not possible. Instead, the EHT uses very long baseline interferometry, which involves detecting radio signals from an astronomical source using a network of individual radio telescopes and telescopic arrays spread across the globe.

The facilities contributing to this work were the Atacama Large Millimeter Array (ALMA) and the Atacama Pathfinder Experiment, both in Chile; the South Pole Telescope (SPT) in Antarctica; the IRAM 30-metre telescope and NOEMA Observatory in Spain; the James Clerk Maxwell Telescope (JCMT) and the Submillimeter Array (SMA) on Mauna Kea, Hawai’i, US; the Large Millimeter Telescope (LMT) in Mexico; the Kitt Peak Telescope in Arizona, US; and the Greenland Telescope (GLT). The distance between these telescopes – the baseline – ranges from 160 m to 10 700 km. Data were correlated at the Max-Planck-Institut für Radioastronomie (MPIfR) in Germany and the MIT Haystack Observatory in the US.

“This work demonstrates the power of multi-epoch analysis at horizon scale, providing a new statistical approach to studying the dynamical behaviour of black hole systems,” says EHT team member Hung-Yi Pu from National Taiwan Normal University. “The methodology we employed opens the door to deeper investigations of black hole accretion and variability, offering a more systematic way to characterize their physical properties over time.”

Looking ahead, the EHT astronomers plan to continue analysing observations made in 2021 and 2022. With these results, they aim to place even tighter constraints on models of black hole accretion environments. “Extending multi-epoch analysis to the polarization properties of M87* will also provide deeper insights into the astrophysics of strong gravity and magnetized plasma near the event horizon,” EHT management team member Rocco Lico tells Physics World.

The analyses are detailed in Astronomy and Astrophysics.

The post Black hole’s shadow changes from one year to the next appeared first on Physics World.

  •  

Frequency-comb detection of gas molecules achieves parts-per-trillion sensitivity

A new technique for using frequency combs to measure trace concentrations of gas molecules has been developed by researchers in the US. The team reports single-digit parts-per-trillion detection sensitivity and extremely broad spectral coverage, spanning more than 1000 cm-1 in wavenumber. This record-level sensing performance could open up a variety of hitherto inaccessible applications in fields such as medicine, environmental chemistry and chemical kinetics.

Each molecular species will absorb light at a specific set of frequencies. So, shining light through a sample of gas and measuring this absorption can reveal the molecular composition of the gas.

Cavity ringdown spectroscopy is an established way to increase the sensitivity of absorption spectroscopy and needs no calibration. A laser is injected between two mirrors, creating an optical standing wave. A sample of gas is then injected into the cavity, so the laser beam passes through it, normally many thousands of times. The absorption of light by the gas is then determined by the rate at which the intracavity light intensity “rings down” – in other words, the rate at which the standing wave decays away.
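
The ring-down rate translates directly into an absorption coefficient: comparing how quickly the light decays with and without the gas in the cavity gives the extra loss per unit length introduced by the sample, via alpha = (1/tau_gas - 1/tau_empty)/c. Below is a minimal sketch with hypothetical ring-down times, not values from the JILA experiment:

    C_CM_PER_S = 2.998e10               # speed of light in cm/s, so alpha comes out in cm^-1

    def absorption_coefficient(tau_empty_s, tau_gas_s):
        """Sample absorption coefficient from empty-cavity and gas-filled ring-down times."""
        return (1.0 / tau_gas_s - 1.0 / tau_empty_s) / C_CM_PER_S

    # Hypothetical ring-down times: 10.0 us for the empty cavity, 9.8 us with gas loaded
    alpha = absorption_coefficient(10.0e-6, 9.8e-6)
    print(f"alpha ~ {alpha:.2e} cm^-1")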

Researchers have used this method with frequency comb lasers to probe the absorption of gas samples at a range of different light frequencies. A frequency comb produces light at a series of very sharp intensity peaks that are equidistant in frequency – resembling the teeth of a comb.

Shifting resonances

However, the more reflective the mirrors become (the higher the cavity finesse), the narrower each cavity resonance becomes. Because the resonance frequencies are not evenly spaced and can be heavily altered by the loaded gas, one normally relies on oscillating the length of the cavity, which shifts all the cavity resonance frequencies so that they modulate around the comb lines. Multiple resonances are sequentially excited and the transient comb intensity dynamics are captured by a camera, following spatial separation by an optical grating.

“That experimental scheme works in the near-infrared, but not in the mid-infrared,” says Qizhong Liang. “Mid-infrared cameras are not fast enough to capture those dynamics yet.” This is a problem because the mid-infrared is where many molecules can be identified by their unique absorption spectra.

Liang is a member of Jun Ye’s group at JILA in Colorado, which has shown that it is possible to measure transient comb dynamics simply with a Michelson interferometer. The spectrometer requires only beam splitters, a delay stage and photodetectors. The researchers worked out that the periodically generated intensity dynamics arising from each tooth of the frequency comb can be detected as a set of Fourier components offset by Doppler frequency shifts. Absorption from the loaded gas can thus be determined.

Dithering the cavity

This process of reading out transient dynamics from “dithering” the cavity by a passive Michelson interferometer is much simpler than previous setups and thus can be used by people with little experience with combs, says Liang. It also places no restrictions on the finesse of the cavity, spectral resolution, or spectral coverage. “If you’re dithering the cavity resonances, then no matter how narrow the cavity resonance is, it’s guaranteed that the comb lines can be deterministically coupled to the cavity resonance twice per cavity round trip modulation,” he explains.

The researchers reported detections of various molecules at concentrations as low as parts-per-billion with parts-per-trillion uncertainty in exhaled air from volunteers. This included biomedically relevant molecules such as acetone, which is a sign of diabetes, and formaldehyde, which is diagnostic of lung cancer. “Detection of molecules in exhaled breath in medicine has been done in the past,” explains Liang. “The more important point here is that, even if you have no prior knowledge about what the gas sample composition is, be it in industrial applications, environmental science applications or whatever, you can still use it.”

Konstantin Vodopyanov of the University of Central Florida in Orlando comments: “This achievement is remarkable, as it integrates two cutting-edge techniques: cavity ringdown spectroscopy, where a high-finesse optical cavity dramatically extends the laser beam’s path to enhance sensitivity in detecting weak molecular resonances, and frequency combs, which serve as a precise frequency ruler composed of ultra-sharp spectral lines. By further refining the spectral resolution to the Doppler broadening limit of less than 100 MHz and referencing the absolute frequency scale to a reliable frequency standard, this technology holds great promise for applications such as trace gas detection and medical breath analysis.”

The spectrometer is described in Nature.

The post Frequency-comb detection of gas molecules achieves parts-per-trillion sensitivity appeared first on Physics World.

  •  

Exploring CERN: Physics World visits the world’s leading particle-physics lab

In this episode of the Physics World Weekly podcast, online editor Margaret Harris chats about her recent trip to CERN. There, she caught up with physicists working on some of the lab’s most exciting experiments and heard from CERN’s current and future leaders.

Founded in Geneva in 1954, today CERN is most famous for the Large Hadron Collider (LHC), which is currently in its winter shutdown. Harris describes her descent 100 m below ground level to visit the huge ATLAS detector and explains why some of its components will soon be updated as part of the LHC’s upcoming high luminosity upgrade.

She explains why new “crab cavities” will boost the number of particle collisions at the LHC. Among other things, this will allow physicists to better study how Higgs bosons interact with each other, which could provide important insights into the early universe.

Harris describes her visit to CERN’s Antimatter Factory, which hosts several experiments that are benefitting from a 2021 upgrade to the lab’s source of antiprotons. These experiments measure properties of antimatter – such as its response to gravity – to see if its behaviour differs from that of normal matter.

Harris also heard about the future of the lab from CERN’s director general Fabiola Gianotti and her successor Mark Thomson, who will take over next year.

The post Exploring CERN: Physics World visits the world’s leading particle-physics lab appeared first on Physics World.

  •  

Radioactive anomaly appears in the deep ocean

Something extraordinary happened on Earth around 10 million years ago, and whatever it was, it left behind a “signature” of radioactive beryllium-10. This finding, which is based on studies of rocks located deep beneath the ocean, could be evidence for a previously-unknown cosmic event or major changes in ocean circulation. With further study, the newly-discovered beryllium anomaly could also become an independent time marker for the geological record.

Most of the beryllium-10 found on Earth originates in the upper atmosphere, where it forms when cosmic rays interact with oxygen and nitrogen molecules. Afterwards, it attaches to aerosols, falls to the ground and is transported into the oceans. Eventually, it reaches the seabed and accumulates, becoming part of what scientists call one of the most pristine geological archives on Earth.

Because beryllium-10 has a half-life of 1.4 million years, it is possible to use its abundance to pin down the dates of geological samples that are more than 10 million years old. This is far beyond the limits of radiocarbon dating, which relies on an isotope (carbon-14) with a half-life of just 5730 years, and can only date samples less than 50 000 years old.
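The arithmetic behind that statement is just the exponential decay law: the age follows from the surviving fraction of the isotope and its half-life. A minimal sketch (with an arbitrarily chosen 1% surviving fraction) shows why beryllium-10 can reach back millions of years while radiocarbon cannot:

import math

def age_from_fraction(remaining_fraction, half_life_years):
    """Radiometric age implied by the surviving fraction of a decaying isotope."""
    return half_life_years * math.log(1.0 / remaining_fraction, 2)

# A sample retaining 1% of its initial isotope content:
print(f"Be-10 (half-life 1.4 Myr): {age_from_fraction(0.01, 1.4e6)/1e6:.1f} million years")
print(f"C-14  (half-life 5730 yr): {age_from_fraction(0.01, 5730):.0f} years")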

Almost twice as much 10Be as expected

In the new work, which is detailed in Nature Communications, physicists in Germany and Australia measured the amount of beryllium-10 in geological samples taken from the Pacific Ocean. The samples are primarily made up of iron and manganese and formed slowly over millions of years. To date them, the team used a technique called accelerator mass spectrometry (AMS) at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR). This method can distinguish beryllium-10 from its decay product, boron-10, which has the same mass, and from other beryllium isotopes.

The researchers found that samples dated to around 10 million years ago, a period known as the late Miocene, contained almost twice as much beryllium-10 as they expected to see. The source of this overabundance is a mystery, says team member Dominik Koll, but he offers three possible explanations. The first is that changes to the ocean circulation near the Antarctic, which scientists recently identified as occurring between 10 and 12 million years ago, could have distributed beryllium-10 unevenly across the Earth. “Beryllium-10 might thus have become particularly concentrated in the Pacific Ocean,” says Koll, a postdoctoral researcher at TU Dresden and an honorary lecturer at the Australian National University.

Another possibility is that a supernova exploded in our galactic neighbourhood 10 million years ago, producing a temporary increase in cosmic radiation. The third option is that the Sun’s magnetic shield, which deflects cosmic rays away from the Earth, became weaker through a collision with an interstellar cloud, making our planet more vulnerable to cosmic rays. Both scenarios would have increased the amount of beryllium-10 that fell to Earth without affecting its geographic distribution.

To distinguish between these competing hypotheses, the researchers now plan to analyse additional samples from different locations on Earth. “If the anomaly were found everywhere, then the astrophysics hypothesis would be supported,” Koll says. “But if it were detected only in specific regions, the explanation involving altered ocean currents would be more plausible.”

Whatever the reason for the anomaly, Koll suggests it could serve as a cosmogenic time marker for periods spanning millions of years, the likes of which do not yet exist. “We hope that other research groups will also investigate their deep-ocean samples in the relevant period to eventually come to a definitive answer on the origin of the anomaly,” he tells Physics World.

The post Radioactive anomaly appears in the deep ocean appeared first on Physics World.

  •  

US-led missions launched to investigate the Moon’s water

  • Update 7 March 2025: In a statement, Intuitive Machines announced that while Athena performed a soft landing on the Moon on 6 March, it came to rest on its side about 250 m from the intended landing spot. Because the lander is unable to recharge its batteries, the firm declared the mission over, with the team now accessing the data it collected.

The private firm Intuitive Machines has launched a lunar lander to test extraction methods for water and volatile gases. The six-legged Moon lander, dubbed Athena, took off yesterday aboard a SpaceX Falcon 9 rocket from NASA’s Kennedy Space Center in Florida. Also aboard the rocket was NASA’s Lunar Trailblazer – a lunar orbiter that will investigate water on the Moon and its geology.

In February 2024, Intuitive Machines’ Odysseus mission became the first US mission to make a soft landing on the Moon since Apollo 17 and the first private craft to do so. After a few hiccups during landing, the mission carried out measurements with an optical and radio telescope before it ended seven days later.

Athena is the second lunar lander by Intuitive Machines in its quest to build infrastructure on the Moon that would be required for long-term lunar exploration.

The mission, standing almost five metres tall, aims to land in the Mons Mouton region, which is about 160 km from the lunar south pole.

It will use a drill to bore one metre into the surface and test the extraction of substances – including volatiles such as carbon dioxide as well as water – that it will then analyse with a mass spectrometer.

Athena also contains a “hopper” dubbed Grace that can travel up to 25 kilometres on the lunar surface. Carrying about 10 kg of payloads, the rocket-propelled drone will aim to take images of the lunar surface and explore nearby craters.

As well as Grace, Athena carries two rovers. MAPP, built by Lunar Outpost, will autonomously navigate the lunar surface while a small, lightweight rover dubbed Yaoki, which has been built by the Japanese firm Dymon, will explore the Moon within 50 metres of the lander.

Athena is part of NASA’s $2.6bn Commercial Lunar Payload Services initiative, which contracts the private sector to develop missions with the aim of reducing costs.

Taking the Moon’s temperature

Lunar Trailblazer, meanwhile, will spend two years orbiting the Moon from a 100 km altitude polar orbit. Weighing 200 kg and about the size of a washing machine, it will map the distribution of water on the Moon’s surface about 12 times a day with a resolution of about 50 metres.

While it is known that water exists on the lunar surface, little is known about its form, abundance, distribution or how it arrived. Various hypotheses range from “wet” asteroids crashing into the Moon to volcanic eruptions producing water vapour from the Moon’s interior.

Water hunter: artist’s impression of NASA’s Lunar Trailblazer, which will spend two years mapping the distribution of water on the surface of the Moon. (Courtesy: Lockheed Martin Space for Lunar Trailblazer)

To help answer that question, the craft will examine water deposits via an imaging spectrometer dubbed the High-resolution Volatiles and Minerals Moon Mapper that has been built by NASA’s Jet Propulsion Laboratory.

A thermal mapper, meanwhile, developed by the University of Oxford, will plot the temperature of the Moon’s surface and help to confirm the presence and location of water.

Lunar Trailblazer was selected in 2019 as part of NASA’s Small Innovative Missions for Planetary Exploration programme.

The post US-led missions launched to investigate the Moon’s water appeared first on Physics World.

  •  

A model stretch: explaining the rheology of developing tissue

While the biology of how an entire organism develops from a single cell has long been a source of fascination, recent research has increasingly highlighted the role of mechanical forces. “If we want to have rigorous predictive models of morphogenesis, of tissues and cells forming organs of an animal,” says Konstantin Doubrovinski at the University of Texas Southwestern Medical Center, “it is absolutely critical that we have a clear understanding of material properties of these tissues.”

Now Doubrovinski and his colleagues report a rheological study explaining why the epithelial tissue of the developing fruit fly (Drosophila melanogaster) stretches as it does over time to allow the embryo to change shape.

Previous studies had shown that under a constant force, tissue extension was proportional to the time the force had been applied to the power of one half. This had puzzled the researchers, since it did not fit a simple model in which epithelial tissues behave like linear springs. In such a model, the extension obeys Hooke’s law and is proportional to the force applied alone, such that the exponent of time in the relation would be zero.

They and other groups had tried to explain this observation of an exponent equal to 0.5 as due to the viscosity of the medium surrounding the cells, which would lead to deformation near the point of pulling that then gradually spreads. However, their subsequent experiments ruled out viscosity as a cause of the non-zero exponent.

Tissue pulling experiments: schematic showing how a ferrofluid droplet positioned inside one cell is used to stretch the epithelium via an external magnetic field. The lower images are snapshots from an in vivo measurement. (Courtesy: Konstantin Doubrovinski/bioRxiv 10.1101/2023.09.12.557407)

For their measurements, the researchers exploited a convenient feature of Drosophila epithelial cells – a small hole through which they could draw a droplet of ferrofluid into a cell using a permanent magnet. Once the droplet was inside, a magnet acting on it could exert forces on the cell to stretch the surrounding tissue.

For the current study, the researchers first tested the observed scaling law over longer periods of time. A power law gives a straight line on a log–log plot, but as Doubrovinski points out, curves also look like straight lines over short sections. However, even when they increased the time scales probed in their experiments to cover three orders of magnitude – from fractions of a second to several minutes – the observed power law still held.
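The check is conceptually simple: the exponent of a power law is the slope of a straight line in log–log coordinates, and the fit is only convincing if it holds across several decades. A generic sketch with synthetic data (not the group’s actual analysis):

import numpy as np

rng = np.random.default_rng(1)
t = np.logspace(-1, 2, 60)                             # three decades in time (arbitrary units)
x = 2.0 * t**0.5 * rng.lognormal(0, 0.05, t.size)      # synthetic extension data with exponent 0.5

slope, intercept = np.polyfit(np.log(t), np.log(x), 1)
print(f"fitted exponent: {slope:.2f}")                 # close to 0.5 if the power law holds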

Understanding the results

One of the postdocs on the team – Mohamad Ibrahim Cheikh – stumbled upon the actual relation giving the power law with an exponent of 0.5 while working on a largely unrelated problem. He had been modelling ellipsoids in a hexagonal meshwork on a surface, in what Doubrovinski describes as a “large” and “relatively complex” simulation. He decided to examine what would happen if he allowed the mesh to relax in its stretched position, which would model the process of actin turnover in cells.

Cheikh’s simulation gave the power law observed in the epithelial cells. “We totally didn’t expect it,” says Doubrovinski. “We pursued it and thought, why are we getting it? What’s going on here?”

Although this simulation yielded the power law with an exponent of 0.5, because the simulation was so complex, it was hard to get a handle on why. “There are all these different physical effects that we took into account that we thought were relevant,” he tells Physics World.

To get a more intuitive understanding of the system, the researchers attempted to simplify the model into a lattice of springs in one dimension, keeping only some of the physical effects from the simulations, until they identified the effects required to give the exponent value of 0.5. They could then scale this simplified one-dimensional model back up to three dimensions and test how it behaved.

According to their model, if they changed the magnitude of various parameters, they should be able to rescale the curves so that they essentially collapse onto a single curve. “This makes our prediction falsifiable,” says Doubrovinski, and in fact the experimental curves could be rescaled in this way.

When the researchers used measured values for the relaxation constant based on the actin turnover rate, along with other known parameters such as the size of the force and the size of the extension, they were able to calculate the force constant of the epithelial cell. This value also agreed with their previous estimates.

Doubrovinski explains how the ferrofluid droplet engages with individual “springs” of the lattice as it moves through the mesh. “The further it moves, the more springs it catches on,” he says. “So the rapid increase of one turns into a slow increase with an exponent of 0.5.” Against this model, all the pieces fit into place.
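A toy calculation captures the flavour of this argument (our own illustrative sketch, not the team’s actual model): pull with a constant force on the end of an overdamped chain of identical springs and the pulled end advances roughly as the square root of time, because the disturbance must spread through ever more springs as it propagates:

import numpy as np

k, gamma, F = 1.0, 1.0, 1.0        # spring constant, drag coefficient and pulling force (arbitrary units)
N, dt, steps = 2000, 0.05, 4000    # chain length, time step, number of steps (total time = 200)
x = np.zeros(N)                    # node displacements; the far end barely moves on this timescale

times, pulled_end = [], []
for n in range(1, steps + 1):
    lap = np.empty(N)
    lap[1:-1] = x[2:] - 2*x[1:-1] + x[:-2]
    lap[0] = x[1] - x[0]           # pulled end has only one neighbour
    lap[-1] = x[-2] - x[-1]
    force = k*lap
    force[0] += F                  # constant external force on the first node
    x += dt*force/gamma            # overdamped (first-order) dynamics
    if n % 50 == 0:
        times.append(n*dt)
        pulled_end.append(x[0])

times, pulled_end = np.array(times), np.array(pulled_end)
late = times > 10                  # skip the early transient
exponent = np.polyfit(np.log(times[late]), np.log(pulled_end[late]), 1)[0]
print(f"pulled-end displacement grows roughly as t^{exponent:.2f}")   # close to 0.5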

“I find it inspiring that the authors, first motivated by in vivo mechanical measurements, could develop a simple theory capturing a new phenomenological law of tissue rheology,” says Pierre-François Lenne, group leader at the Institut de Biologie du Développement de Marseille at Aix-Marseille Université. Lenne specializes in the morphogenesis of multicellular systems but was not involved in the current research.

Next, Doubrovinski and his team are keen to see where else their results might apply, such as other developmental stages and other organisms, including mammals.

The research is reported in Physical Review Letters and bioRxiv.

The post A model stretch: explaining the rheology of developing tissue appeared first on Physics World.

  •  

Quantum-inspired technique simulates turbulence with high speed

Quantum-inspired “tensor networks” can simulate the behaviour of turbulent fluids in just a few hours rather than the several days required for a classical algorithm. The new technique, developed by physicists in the UK, Germany and the US, could advance our understanding of turbulence, which has been called one of the greatest unsolved problems of classical physics.

Turbulence is all around us, found in weather patterns, water flowing from a tap or a river and in many astrophysical phenomena. It is also important for many industrial processes. However, the way in which turbulence arises and then sustains itself is still not understood, despite the seemingly simple and deterministic physical laws governing it.

The reason for this is that turbulence is characterized by large numbers of eddies and swirls of differing shapes and sizes that interact in chaotic and unpredictable ways across a wide range of spatial and temporal scales. Such fluctuations are difficult to simulate accurately, even using powerful supercomputers, because doing so requires solving sets of coupled partial differential equations on very fine grids.

An alternative is to treat turbulence in a probabilistic way. In this case, the properties of the flow are defined as random variables that are distributed according to mathematical relationships called joint Fokker-Planck probability density functions. These functions are neither chaotic nor multiscale, so they are straightforward to derive. However, they are nevertheless challenging to solve because of the high number of dimensions contained in turbulent flows.

For this reason, the probability density function approach was widely considered to be computationally infeasible. In response, researchers turned to indirect Monte Carlo algorithms to perform probabilistic turbulence simulations. However, while this approach has chalked up some notable successes, it can be slow to yield results.

Highly compressed “tensor networks”

To overcome this problem, a team led by Nikita Gourianov of the University of Oxford, UK, decided to encode turbulence probability density functions as highly compressed “tensor networks” rather than simulating the fluctuations themselves. Such networks have already been used to simulate otherwise intractable quantum systems like superconductors, ferromagnets and quantum computers, they say.

These quantum-inspired tensor networks represent the turbulence probability distributions in a hyper-compressed format, which then allows them to be simulated. By simulating the probability distributions directly, the researchers can then extract important parameters, such as lift and drag, that describe turbulent flow.
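The compression idea can be illustrated with a generic tensor-train (matrix-product) factorization built by sequential truncated singular-value decompositions. The sketch below uses a small, artificially constructed joint distribution with nearest-neighbour correlations; it is not the authors’ algorithm:

import numpy as np

d, n = 8, 4                           # eight variables, four grid points each
grid = np.linspace(-2.0, 2.0, n)
X = np.meshgrid(*[grid]*d, indexing="ij")
logp = -0.25*sum(xi**2 for xi in X) - 0.5*sum((X[i] - X[i+1])**2 for i in range(d-1))
P = np.exp(logp)
P /= P.sum()                          # joint probability table with 4**8 = 65,536 entries

# Tensor-train factorization by sequential truncated SVDs
tol, cores, left = 1e-8, [], 1
M = P.reshape(left*n, -1)
for _ in range(d - 1):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    r = max(1, int((s > tol*s[0]).sum()))           # keep only significant singular values
    cores.append(U[:, :r].reshape(left, n, r))
    left = r
    M = (s[:r, None]*Vt[:r]).reshape(left*n, -1)
cores.append(M.reshape(left, n, 1))

# Contract the chain back together and compare with the original table
T = cores[0]
for C in cores[1:]:
    T = np.tensordot(T, C, axes=([-1], [0]))
approx = T.squeeze()

n_params = sum(C.size for C in cores)
print(f"full table: {P.size} numbers, tensor train: {n_params} numbers, "
      f"max error: {np.abs(approx - P).max():.1e}")

For this toy distribution the chain of small tensors stores the 65,536-entry table in a few hundred numbers with negligible error; the paper does something analogous, at vastly larger scale, for the statistics of turbulent flows.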

Importantly, the new technique allows an ordinary single CPU (central processing unit) core to compute a turbulent flow in just a few hours, compared to several days using a classical algorithm on a supercomputer.

This significantly improved way of simulating turbulence could be particularly useful in the area of chemically reactive flows in areas such as combustion, says Gourianov. “Our work also opens up the possibility of probabilistic simulations for all kinds of chaotic systems, including weather or perhaps even the stock markets,” he adds.

The researchers now plan to apply tensor networks to deep learning, a form of machine learning that uses artificial neural networks. “Neural networks are famously over-parameterized and there are several publications showing that they can be compressed by orders of magnitude in size simply by representing their layers as tensor networks,” Gourianov tells Physics World.

The study is detailed in Science Advances.

The post Quantum-inspired technique simulates turbulence with high speed appeared first on Physics World.

  •  

New transfer arm moves heavier samples in vacuum

Vacuum technology is routinely used in both scientific research and industrial processes. In physics, high-quality vacuum systems make it possible to study materials under extremely clean and stable conditions. In industry, vacuum is used to lift, position and move objects precisely and reliably. Without these technologies, a great deal of research and development would simply not happen. But for all its advantages, working under vacuum does come with certain challenges. For example, once something is inside a vacuum system, how do you manipulate it without opening the system up?

Heavy duty: the new transfer arm. (Courtesy: UHV Design)

The UK-based firm UHV Design has been working on this problem for over a quarter of a century, developing and manufacturing vacuum manipulation solutions for new research disciplines as well as emerging industrial applications. Its products, which are based on magnetically coupled linear and rotary probes, are widely used at laboratories around the world, in areas ranging from nanoscience to synchrotron and beamline applications. According to engineering director Jonty Eyres, the firm’s latest innovation – a new sample transfer arm released at the beginning of this year – extends this well-established range into new territory.

“The new product is a magnetically coupled probe that allows you to move a sample from point A to point B in a vacuum system,” Eyres explains. “It was designed to have an order of magnitude improvement in terms of both linear and rotary motion thanks to the magnets in it being arranged in a particular way. It is thus able to move and position objects that are much heavier than was previously possible.”

The new sample arm, Eyres explains, is made up of a vacuum “envelope” comprising a welded flange and tube assembly. This assembly has an outer magnet array that magnetically couples to an inner magnet array attached to an output shaft. The output shaft extends beyond the mounting flange and incorporates a support bearing assembly. “Depending on the model, the shafts can either be in one or more axes: they move samples around either linearly, linear/rotary or incorporating a dual axis to actuate a gripper or equivalent elevating plate,” Eyres says.

Continual development, review and improvement

While similar devices are already on the market, Eyres says that the new product has a significantly larger magnetic coupling strength in terms of its linear thrust and rotary torque. These features were developed in close collaboration with customers who expressed a need for arms that could carry heavier payloads and move them with more precision. In particular, Eyres notes that in the original product, the maximum weight that could be placed on the end of the shaft – a parameter that depends on the stiffness of the shaft as well as the magnetic coupling strength – was too small for these customers’ applications.

“From our point of view, it was not so much the magnetic coupling that needed to be reviewed, but the stiffness of the device in terms of the size of the shaft that extends out to the vacuum system,” Eyres explains. “The new arm deflects much less from its original position even with a heavier load and when moving objects over longer distances.”
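The stiffness argument can be pictured with a textbook cantilever estimate (a generic back-of-the-envelope sketch with made-up dimensions, not UHV Design’s actual shaft geometry): tip deflection grows with the cube of the unsupported length and falls steeply as the shaft cross-section is enlarged, so a longer stroke demands a markedly stiffer shaft to hold the same sag:

import math

def cantilever_deflection(load_N, length_m, outer_d_m, inner_d_m, E_Pa=200e9):
    """End deflection of a uniform tube loaded at its tip (Euler-Bernoulli beam).
    Assumes a stainless-steel-like modulus of roughly 200 GPa by default."""
    I = math.pi * (outer_d_m**4 - inner_d_m**4) / 64.0   # second moment of area of the tube
    return load_N * length_m**3 / (3.0 * E_Pa * I)

# Hypothetical numbers: a 50 N load at the end of a 1.5 m shaft
print(cantilever_deflection(50, 1.5, 0.025, 0.019))   # slimmer tube: tens of millimetres of sag
print(cantilever_deflection(50, 1.5, 0.040, 0.032))   # stouter tube: a few millimetres of sag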

The new product – a scaled-up version of the original – can support a load of up to 50 N (a mass of about 5 kg) over an axial stroke of up to 1.5 m. Eyres notes that it also requires minimal maintenance, which is important for moving higher loads. “It is thus targeted to customers who wish to move larger objects around over longer periods of time without having to worry about intervening too often,” he says.

Moving multiple objects

As well as moving larger, single objects, the new arm’s capabilities make it suitable for moving multiple objects at once. “Rather than having one sample go through at a time, we might want to nest three or four samples onto a large plate, which inevitably increases the size of the overall object,” Eyres explains.

Before they created this product, he continues, he and his UHV Design colleagues were not aware of any magnetic coupled solution on the marketplace that enabled users to do this. “As well as being capable of moving heavy samples, our product can also move lighter samples, but with a lot less shaft deflection over the stroke of the product,” he says. “This could be important for researchers, particularly if they are limited in space or if they wish to avoid adding costly supports in their vacuum system.”

The post New transfer arm moves heavier samples in vacuum appeared first on Physics World.

  •  

Experts weigh in on Microsoft’s topological qubit claim

Researchers at Microsoft in the US claim to have made the first topological quantum bit (qubit) – a potentially transformative device that could make quantum computing robust against the errors that currently restrict what it can achieve. “If the claim stands, it would be a scientific milestone for the field of topological quantum computing and physics beyond,” says Scott Aaronson, a computer scientist at the University of Texas at Austin.

However, the claim is controversial because the evidence supporting it has not yet been presented in a peer-reviewed paper. It is made in a press release from Microsoft accompanying a paper in Nature (638 651) that has been written by more than 160 researchers from the company’s Azure Quantum team. The paper stops short of claiming a topological qubit but instead reports some of the key device characterization underpinning it.

Writing in a peer-review file accompanying the paper, the Nature editorial team says that it sought additional input from two of the article’s reviewers to “establish its technical correctness”, concluding that “the results in this manuscript do not represent evidence for the presence of Majorana zero modes [MZMs] in the reported devices”. An MZM is a quasiparticle (a particle-like collective electronic state) that can act as a topological qubit.

“That’s a big no-no”

“The peer-reviewed publication is quite clear [that it contains] no proof for topological qubits,” says Winfried Hensinger, a physicist at the University of Sussex who works on quantum computing using trapped ions. “But the press release speaks differently. In academia that’s a big no-no: you shouldn’t make claims that are not supported by a peer-reviewed publication” – or that have at least been presented in a preprint.

Chetan Nayak, leader of Microsoft Azure Quantum, which is based in Redmond, Washington, says that the evidence for a topological qubit was obtained in the period between submission of the paper in March 2024 and its publication. He will present those results at a talk at the Global Physics Summit of the American Physical Society in Anaheim in March.

But Hensinger is concerned that “the press release doesn’t make it clear what the paper does and doesn’t contain”. He worries that some might conclude that the strong claim of having made a topological qubit is now supported by a paper in Nature. “We don’t need to make these claims – that is just unhealthy and will really hurt the field,” he says, because it could lead to unrealistic expectations about what quantum computers can do.

As with the qubits used in current quantum computers, such as superconducting components or trapped ions, MZMs would be able to encode superpositions of the two readout states (representing a 1 or 0). By quantum-entangling such qubits, information could be manipulated in ways not possible for classical computers, greatly speeding up certain kinds of computation. In MZMs the two states are distinguished by “parity”: whether the quasiparticles contain even or odd numbers of electrons.

Built-in error protection

As MZMs are “topological” states, their settings cannot easily be flipped by random fluctuations to introduce errors into the calculation. Rather, the states are like a twist in a buckled belt that cannot be smoothed out unless the buckle is undone. Topological qubits would therefore suffer far less from the errors that afflict current quantum computers, and which limit the scale of the computations they can support. Because quantum error correction is one of the most challenging issues for scaling up quantum computers, “we want some built-in level of error protection”, explains Nayak.

It has long been thought that MZMs might be produced at the ends of nanoscale wires made of a superconducting material. Indeed, Microsoft researchers have been trying for several years to fabricate such structures and look for the characteristic signature of MZMs at their tips. But it can be hard to distinguish this signature from those of other electronic states that can form in these structures.

In 2018 researchers at labs in the US and the Netherlands (including the Delft University of Technology and Microsoft) claimed to have evidence of an MZM in such devices. However, they then had to retract the work after others raised problems with the data. “That history is making some experts cautious about the new claim,” says Aaronson.

Now, though, it seems that Nayak and colleagues have cracked the technical challenges. In the Nature paper, they report measurements in a nanowire heterostructure made of superconducting aluminium and semiconducting indium arsenide that are consistent with, but not definitive proof of, MZMs forming at the two ends. The crucial advance is an ability to accurately measure the parity of the electronic states. “The paper shows that we can do these measurements fast and accurately,” says Nayak.

The device is a remarkable achievement from the materials science and fabrication standpoint

Ivar Martin, Argonne National Laboratory

“The device is a remarkable achievement from the materials science and fabrication standpoint,” says Ivar Martin, a materials scientist at Argonne National Laboratory in the US. “They have been working hard on these problems, and seems like they are nearing getting the complexities under control.” In the press release, the Microsoft team claims now to have put eight MZM topological qubits on a chip called Majorana 1, which is designed to house a million of them.

Even if the Microsoft claim stands up, a lot will still need to be done to get from a single MZM to a quantum computer, says Hensinger. Topological quantum computing is “probably 20–30 years behind the other platforms”, he says. Martin agrees. “Even if everything checks out and what they have realized are MZMs, cleaning them up to take full advantage of topological protection will still require significant effort,” he says.

Regardless of the debate about the results and how they have been announced, researchers are supportive of the efforts at Microsoft to produce a topological quantum computer. “As a scientist who likes to see things tried, I’m grateful that at least one player stuck with the topological approach even when it ended up being a long, painful slog,” says Aaronson.

“Most governments won’t fund such work, because it’s way too risky and expensive,” adds Hensinger. “So it’s very nice to see that Microsoft is stepping in there.”

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

The post Experts weigh in on Microsoft’s topological qubit claim appeared first on Physics World.

  •  

How cathode microstructure impacts solid-state batteries

Solid-state batteries are considered next-generation energy storage technology as they promise higher energy density and safety than lithium-ion batteries with a liquid electrolyte. However, major obstacles for commercialization are the requirement of high stack pressures as well as insufficient power density. Both aspects are closely related to limitations of charge transport within the composite cathode.

This webinar presents an introduction to using electrochemical impedance spectroscopy to investigate composite cathode microstructures and identify kinetic bottlenecks. Effective conductivities can be obtained using transmission line models and used to evaluate the main factors limiting electronic and ionic charge transport.

In combination with high-resolution 3D imaging techniques and electrochemical cell cycling, the crucial role of the cathode microstructure can be revealed, the relevant factors influencing cathode performance identified, and optimization strategies for improved cathode performance derived.
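For readers new to the method, the sketch below shows the simplest blocking-electrode transmission line model, whose low-frequency impedance approaches a real-axis offset of R_ion/3 – the quantity from which an effective ionic conductivity can be extracted once the electrode geometry is known. Parameter values are invented and the webinar’s analysis will be considerably more detailed:

import numpy as np

# Transmission line model for a porous, ion-blocking composite cathode:
# an ionic rail with total resistance R_ion and distributed double-layer capacitance C_dl,
# with the electronic rail assumed highly conductive (a common simplification).
def tlm_impedance(freq_Hz, R_ion=50.0, C_dl=1e-3):
    omega = 2*np.pi*freq_Hz
    zc = 1.0/(1j*omega*C_dl)                  # total interfacial (capacitive) impedance
    return np.sqrt(R_ion*zc)/np.tanh(np.sqrt(R_ion/zc))

for f in np.logspace(5, -2, 8):
    Z = tlm_impedance(f)
    print(f"{f:10.3g} Hz   Re(Z) = {Z.real:7.2f} ohm   -Im(Z) = {-Z.imag:10.2f} ohm")
# At low frequency Re(Z) tends to R_ion/3 (about 16.7 ohm here), from which an
# effective ionic conductivity follows once the electrode thickness and area are known.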

Philip Minnmann

Philip Minnmann received his M.Sc. in materials science from RWTH Aachen University. He later joined Prof. Jürgen Janek’s group at JLU Giessen as part of the BMBF Competence Cluster for Solid-State Batteries FestBatt. During his Ph.D., he worked on composite cathode characterization for sulfide-based solid-state batteries, as well as processing scalable, slurry-based solid-state batteries. Since 2023, he has been a project manager for high-throughput battery material research at HTE GmbH.

Johannes Schubert

Johannes Schubert holds an M.Sc. in Material Science from the Justus-Liebig University Giessen, Germany. He is currently a Ph.D. student in the research group of Prof. Jürgen Janek in Giessen, where he is part of the BMBF Competence Cluster for Solid-State Batteries FestBatt. His main research focuses on characterization and optimization of composite cathodes with sulfide-based solid electrolytes.

The post How cathode microstructure impacts solid-state batteries appeared first on Physics World.

  •  

Fusion physicist Ian Chapman to head UK Research and Innovation

The fusion physicist Ian Chapman is to be the next head of UK Research and Innovation (UKRI) – the UK’s biggest public research funder. He will take up the position in June, replacing the geneticist Ottoline Leyser who has held the position since 2020.

With an annual budget of £9bn, UKRI is the umbrella organisation of the UK’s nine research councils, including the Science and Technology Facilities Council and the Engineering and Physical Sciences Research Council.

UK science minister Patrick Vallance notes that Chapman’s “leadership experience, scientific expertise and academic achievements make him an exceptionally strong candidate to lead UKRI”.

UKRI chairman Andrew Mackenzie, meanwhile, states that Chapman “has the skills, experience, leadership and commitment to unlock this opportunity to improve the lives and livelihoods of everyone”.

Hard act to follow

After gaining an MSc in mathematics and physics from Durham University, Chapman completed a PhD at Imperial College London in fusion science, which he partly did at Culham Science Centre in Oxfordshire.

In 2014 he became head of tokamak science at Culham and then became fusion programme manager a year later. In 2016, aged just 34, he was named chief executive of the UK Atomic Energy Authority (UKAEA), which saw him lead the UK’s magnetic confinement fusion research programme at Culham.

In that role he oversaw an upgrade to the lab’s Mega Amp Spherical Tokamak as well as the final operation of the Joint European Torus (JET) – one of the world’s largest nuclear fusion devices – which closed in 2024.

Chapman also played a part in planning a prototype fusion power plant. Known as the Spherical Tokamak for Energy Production (STEP), it was first announced by the UK government in 2019, with operations expected to begin in the 2040s. STEP aims to prove the commercial viability of fusion by demonstrating net energy, fuel self-sufficiency and a viable route to plant maintenance.

Chapman, who currently sits on UKRI’s board, says that he is “excited” to take over as head of UKRI. “Research and innovation must be central to the prosperity of our society and our economy, so UKRI can shape the future of the country,” he notes. “I was tremendously fortunate to represent UKAEA, an organisation at the forefront of global research and innovation of fusion energy, and I look forward to building on those experiences to enable the wider UK research and innovation sector.”

The UKAEA has announced that Tim Bestwick, who is currently UKAEA’s deputy chief executive, will take over as interim UKAEA head until a permanent replacement is found.

Steve Cowley, director of the Princeton Plasma Physics Laboratory in the US and a former chief executive of UKAEA, told Physics World that Chapman is an “astonishing science leader” and that the UKRI is in “excellent hands”. “[Chapman] has set a direction for UK fusion research that is bold and inspired,” adds Cowley. “It will be a hard act to follow but UK fusion development will go ahead with great energy.”

The post Fusion physicist Ian Chapman to head UK Research and Innovation appeared first on Physics World.

  •  

World’s first patient treatments delivered with proton arc therapy

A team at the Trento Proton Therapy Centre in Italy has delivered the first clinical treatments using proton arc therapy (PAT), an emerging proton delivery technique. Following successful dosimetric comparisons with clinically delivered proton plans, the researchers confirmed the feasibility of PAT delivery and used the technique to treat nine cancer patients, reporting their findings in Medical Physics.

Currently, proton therapy is mostly delivered using pencil-beam scanning (PBS), which provides highly conformal dose distributions. But PBS delivery can be compromised by the small number of beam directions deliverable in an acceptable treatment time. PAT overcomes this limitation by moving to an arc trajectory.

“Proton arc treatments are different from any other pencil-beam proton delivery technique because of the large number of beam angles used and the possibility to optimize the number of energies used for each beam direction, which enables optimization of the delivery time,” explains first author Francesco Fracchiolla. “The ability to optimize both the number of energy layers and the spot weights makes these treatments superior to any previous delivery technique.”

Plan comparisons

The Trento researchers – working with colleagues from RaySearch Laboratories – compared the dosimetric parameters of PAT plans with those of state-of-the-art multiple-field optimized (MFO) PBS plans, for 10 patients with head-and-neck cancer. They focused on this site due to the high number of organs-at-risk (OARs) close to the target that may be spared using this new technique.

In future, PAT plans will be delivered with the beam on during gantry motion (dynamic mode). This requires dynamic arc plan delivery with all system settings automatically adjusted as a function of gantry angle – an approach with specific hardware and software requirements that have so far impeded clinical rollout.

Instead, Fracchiolla and colleagues employed an alternative version of static PAT, in which the static arc is converted into a series of PBS beams and delivered using conventional delivery workflows. Using the RayStation treatment planning system, they created MFO plans (using six noncoplanar beam directions) and PAT plans (with 30 beam directions), robustly optimized against setup and range uncertainties.

PAT plans dramatically improved dose conformality compared with MFO treatments. While target coverage was of equal quality for both treatment types, PAT decreased the mean doses to OARs for all patients. The biggest impact was in the brainstem, where PAT reduced maximum and mean doses by 19.6 and 9.5 Gy(RBE), respectively. Dose to other primary OARs did not differ significantly between plans, but PAT achieved an impressive reduction in mean dose to secondary OARs not directly adjacent to the target.

The team also evaluated how these dosimetric differences impact normal tissue complication probability (NTCP). PAT significantly reduced (by 8.5%) the risk of developing dry mouth and slightly lowered other NTCP endpoints (swallowing dysfunction, tube feeding and sticky saliva).

To verify the feasibility of clinical PAT, the researchers delivered MFO and PAT plans for one patient on a clinical gantry. Importantly, delivery times (from the start of the first beam to the end of the last) were similar for both techniques: 36 min for PAT with 30 beam directions and 31 min for MFO. Reducing the number of beam directions to 20 reduced the delivery time to 25 min, while maintaining near-identical dosimetric data.

First patient treatments

The successful findings of the plan comparison and feasibility test prompted the team to begin clinical treatments.

“The final trigger to go live was the fact that the discretized PAT plans maintained pretty much exactly the optimal dosimetric characteristics of the original dynamic (continuous rotation) arc plan from which they derived, so there was no need to wait for full arc to put the potential benefits to clinical use. Pretreatment verification showed excellent dosimetric accuracy and everything could be done in a fully CE-certified environment,” say Frank Lohr and Marco Cianchetti, director and deputy director, respectively, of the Trento Proton Therapy Center. “The only current drawback is that we are not at the treatment speed that we could be with full dynamic arc.”

To date, nine patients have received or are undergoing PAT treatment: five with head-and-neck tumours, three with brain tumours and one with a thoracic tumour. For the first two head-and-neck patients, the team created PAT plans with a half arc (180° to 0°) and 10 beam directions, giving a mean treatment time of 12 min. The next two were treated with a complete arc (360°) and 20 beam directions, with a mean treatment time of 24 min. Patient-specific quality assurance revealed an average gamma passing rate (3%, 3 mm) of 99.6%, and only one patient required replanning.
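For readers unfamiliar with the metric, the gamma index combines a dose-difference tolerance and a distance-to-agreement tolerance into a single pass/fail number per point. The sketch below is a simplified one-dimensional version with made-up dose profiles; clinical patient-specific QA uses 2D or 3D dose distributions and finer interpolation:

import numpy as np

def gamma_pass_rate_1d(x_mm, dose_ref, dose_meas, dose_tol=0.03, dist_tol_mm=3.0):
    """Fraction of reference points with gamma <= 1 under a global 3%/3 mm criterion."""
    d_norm = dose_tol * dose_ref.max()          # global dose criterion
    gammas = []
    for xr, dr in zip(x_mm, dose_ref):
        terms = ((x_mm - xr)/dist_tol_mm)**2 + ((dose_meas - dr)/d_norm)**2
        gammas.append(np.sqrt(terms.min()))     # best agreement over all measured points
    gammas = np.array(gammas)
    return (gammas <= 1.0).mean(), gammas

# Hypothetical profiles: a planned Gaussian-like dose and a slightly shifted, rescaled measurement
x = np.arange(0, 100, 1.0)                      # positions in mm
planned = np.exp(-((x - 50)/15.0)**2)
measured = 1.02*np.exp(-((x - 51)/15.0)**2)     # 2% output difference, 1 mm shift
rate, _ = gamma_pass_rate_1d(x, planned, measured)
print(f"gamma passing rate (3%, 3 mm): {100*rate:.1f}%")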

All PAT treatments were performed using the centre’s IBA ProteusPlus proton therapy unit and the existing clinical workflow. “Our treatment planning system can convert an arc plan into a PBS plan with multiple beams,” Fracchiolla explains. “With this workaround, the entire clinical chain doesn’t change and the plan can be delivered on the existing system. This ability to convert the arc plans into PBS plans means that basically every proton centre can deliver these treatments with the current hardware settings.”

The researchers are now analysing acute toxicity data from the patients, to determine whether PAT reduces toxicity. They are also looking to further reduce the delivery times.

“Hopefully, together with IBA, we will streamline the current workflow between the OIS [oncology information system] and the treatment control system to reduce treatment times, thus being competitive in comparison with conventional approaches, even before full dynamic arc treatments become a clinical reality,” adds Lohr.

The post World’s first patient treatments delivered with proton arc therapy appeared first on Physics World.

  •  

The quest for better fusion reactors is putting a new generation of superconductors to the test

Inside view: private companies like Tokamak Energy in the UK are developing compact tokamaks that, they hope, could bring fusion power to the grid in the 2030s. (Courtesy: Tokamak Energy)

Fusion – the process that powers the Sun – offers a tantalizing opportunity to generate almost unlimited amounts of clean energy. In the Sun’s core, matter is more than 10 times denser than lead and temperatures reach 15 million K. In these conditions, ionized isotopes of hydrogen (deuterium and tritium) can overcome their electrostatic repulsion, fusing into helium nuclei and ejecting high-energy neutrons. The products of this reaction are slightly lighter than the two reacting nuclei, and the excess mass is converted to lots of energy.

The engineering and materials challenges of creating what is essentially a ‘Sun in a freezer’ are formidable

The Sun’s core is kept hot and dense by the enormous gravitational force exerted by its huge mass. To achieve nuclear fusion on Earth, different tactics are needed. Instead of gravity, the most common approach uses strong superconducting magnets operating at ultracold temperatures to confine the intensely hot hydrogen plasma.

The engineering and materials challenges of creating what is essentially a “Sun in a freezer”, and harnessing its power to make electricity, are formidable. This is partly because, over time, high-energy neutrons from the fusion reaction will damage the surrounding materials. Superconductors are incredibly sensitive to this kind of damage, so substantial shielding is needed to maximize the lifetime of the reactor.

The traditional roadmap towards fusion power, led by large international projects, has set its sights on bigger and bigger reactors, at greater and greater expense. However, these are moving at a snail’s pace, with the first power to the grid not anticipated until the 2060s, leading to the common perception that “fusion power is 30 years away, and always will be.”

There is therefore considerable interest in alternative concepts for smaller, simpler reactors to speed up the fusion timeline. Such novel reactors will need a different toolkit of superconductors. Promising materials exist, but because fusion can still only be sustained in brief bursts, we have no way to directly test how these compounds will degrade over decades of use.

Is smaller better?

A leading concept for a nuclear fusion reactor is a machine called a tokamak, in which the plasma is confined to a doughnut-shaped region. In a tokamak, D-shaped electromagnets are arranged in a ring around a central column, producing a circulating (toroidal) magnetic field. This exerts a force (the Lorentz force) on the positively charged hydrogen nuclei, making them trace helical paths that follow the field lines and keep them away from the walls of the vessel.

In 2010, construction began in France on ITER, a tokamak that is designed to demonstrate the viability of nuclear fusion for energy generation. The aim is to produce burning plasma, where more than half of the energy heating the plasma comes from fusion in the plasma itself, and to generate, for short pulses, a tenfold return on the power input.

But despite the project being proposed 40 years ago, ITER’s projected first operation was recently pushed back by another 10 years, to 2034. The project’s budget has also been revised multiple times and it is currently expected to cost tens of billions of euros. One reason ITER is such an ambitious and costly project is its sheer size. ITER’s plasma radius of 6.2 m is twice that of the JT-60SA in Japan, the world’s current largest tokamak. The power generated by a tokamak roughly scales with the cube of the doughnut’s radius, which means that doubling the radius should yield an eight-fold increase in power.

Small but mighty: Tokamak Energy’s ST40 compact tokamak uses copper electromagnets, which would be unsuitable for long-term operation due to overheating. REBCO compounds, which are high-temperature superconductors that can generate very high magnetic fields, are an attractive alternative. (Courtesy: Tokamak Energy)

However, instead of chasing larger and larger tokamaks, some organizations are going in the opposite direction. Private companies like Tokamak Energy in the UK and Commonwealth Fusion Systems in the US are developing compact tokamaks that, they hope, could bring fusion power to the grid in the 2030s. Their approach is to ramp up the magnetic field rather than the size of the tokamak. The fusion power of a tokamak has a stronger dependence on the magnetic field than the radius, scaling with the fourth power.
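Putting the two rough scalings quoted above side by side (power growing as the cube of the radius and the fourth power of the field; a back-of-the-envelope comparison, not a reactor design calculation):

def relative_fusion_power(radius_factor, field_factor):
    """Very rough relative fusion power from the R^3 and B^4 scalings quoted above."""
    return radius_factor**3 * field_factor**4

print(relative_fusion_power(2.0, 1.0))   # double the radius at fixed field: 8x the power
print(relative_fusion_power(0.5, 2.0))   # half the radius but double the field: still 2x the power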

The drawback of smaller tokamaks is that the materials will sustain more damage from neutrons during operation. Of all the materials in the tokamak, the superconducting magnets are most sensitive to this. If the reactor is made more compact, they are also closer to the plasma and there will be less space for shielding. So if compact tokamaks are to succeed commercially, we need to choose superconducting materials that will be functional even after many years of irradiation.

1 Superconductors

Operation window for Nb-Ti, Nb3Sn and REBCO superconductors. (Courtesy: Susie Speller/IOP Publishing)

Superconductors are materials that have zero electrical resistance when they are cooled below a certain critical temperature (Tc).  Superconducting wires can therefore carry electricity much more efficiently than conventional resistive metals like copper.

What’s more, a superconducting wire can carry a much higher current than a copper wire of the same diameter because it has zero resistance and so generates no heat. In contrast, as you pass ever more current through a copper wire, it heats up and its resistance rises further still, until eventually it melts.

This increased current density (current per unit cross-sectional area) enables high-field superconducting magnets to be more compact than resistive ones.

However, there is an upper limit to the strength of the magnetic field that a superconductor can usefully tolerate without losing the ability to carry lossless current.  This is known as the “irreversibility field”, and for a given superconductor its value decreases as temperature is increased, as shown above.

High-performance fusion materials

Superconductors are a class of materials that, when cooled below a characteristic temperature, conduct with no resistance (see box 1, above). Magnets made from superconducting wires can carry high currents without overheating, making them ideal for generating the very high fields required for fusion. Superconductivity is highly sensitive to the arrangement of the atoms; whilst some amorphous superconductors exist, most superconducting compounds only conduct high currents in a specific crystalline state. A few defects will always arise, and can sometimes even improve the material’s performance. But introducing significant disorder to a crystalline superconductor will eventually destroy its ability to superconduct.

The most common material for superconducting magnets is a niobium-titanium (Nb-Ti) alloy, which is used in hospital MRI machines and CERN’s Large Hadron Collider. Nb-Ti superconducting magnets are relatively cheap and easy to manufacture but – like all superconducting materials – the alloy has an upper limit to the magnetic field in which it can superconduct, known as the irreversibility field. In Nb-Ti this value is too low for the material to be used in ITER’s high-field magnets. The ITER tokamak will instead use a niobium-tin (Nb3Sn) superconductor, which has a higher irreversibility field than Nb-Ti, even though it is much more expensive and challenging to work with.

2 REBCO unit cell

The unit cell of a REBCO high-temperature superconductor. The pink atoms are copper, the red atoms are oxygen, the barium atoms are green and the rare-earth element (here yttrium) is blue. (Courtesy: redrawn from Wikimedia Commons/IOP Publishing)

Needing stronger magnetic fields, compact tokamaks require a superconducting material with an even higher irreversibility field. Over the last decade, another class of superconducting materials called “REBCO” have been proposed as an alternative. Short for rare earth barium copper oxide, these are a family of superconductors with the chemical formula REBa2Cu3O7, where RE is a rare-earth element such as yttrium, gadolinium or europium (see Box 2 “REBCO unit cell”).

REBCO compounds  are high-temperature superconductors, which are defined as having transition temperatures above 77 K, meaning they can be cooled with liquid nitrogen rather than the more expensive liquid helium. REBCO compounds also have a much higher irreversibility field than niobium-tin, and so can sustain the high fields necessary for a small fusion reactor.

REBCO wires: Bendy but brittle

REBCO materials have attractive superconducting properties, but it is not easy to manufacture them into flexible wires for electromagnets. REBCO is a brittle ceramic so can’t be made into wires in the same way as ductile materials like copper or Nb-Ti, where the material is drawn through progressively smaller holes.

Instead, REBCO tapes are manufactured by coating metallic ribbons with a series of very thin ceramic layers, one of which is the superconducting REBCO compound. Ideally, the REBCO would be a single crystal, but in practice, it will be comprised of many small grains. The metal gives mechanical stability and flexibility whilst the underlying ceramic “buffer” layers protect the REBCO from chemical reactions with the metal and act as a template for aligning the REBCO grains. This is important because the boundaries between individual grains reduce the maximum current the wire can carry.

Another potential problem is that these compounds are chemically sensitive and are “poisoned” by nearly all the impurities that may be introduced during manufacture. These impurities can produce insulating compounds that block supercurrent flow or degrade the performance of the REBCO compound itself.

Despite these challenges, and thanks to impressive materials engineering from several companies and institutions worldwide, REBCO is now made in kilometre-long, flexible tapes capable of carrying thousands of amps of current. In 2024, more than 10,000 km of this material was manufactured for the burgeoning fusion industry. This is impressive given that only 1000 km was made in  2020. However, a single compact tokamak will require up to 20,000 km of this REBCO-coated conductor for the magnet systems, and because the superconductor is so expensive to manufacture it is estimated that this would account for a considerable fraction of the total cost of a power plant.

Pushing superconductors to the limit

Another problem with REBCO materials is that the temperature below which they superconduct falls steeply once they’ve been irradiated with neutrons. Their lifetime in service will depend on the reactor design and amount of shielding, but research from the Vienna University of Technology in 2018 suggested that REBCO materials can withstand about a thousand times less damage than structural materials like steel before they start to lose performance (Supercond. Sci. Technol. 31 044006).

These experiments are currently being used by the designers of small fusion machines to assess how much shielding will be required, but they don’t tell the whole story. The 2018 study used neutrons from a fission reactor, which have a different spectrum of energies compared to fusion neutrons. They also did not reproduce the environment inside a compact tokamak, where the superconducting tapes will be at cryogenic temperatures, carrying high currents and under considerable strain from Lorentz forces generated in the magnets.

Even if we could get a sample of REBCO inside a working tokamak, the maximum runtime of current machines is measured in minutes, meaning we cannot do enough damage to test how susceptible the superconductor will be in a real fusion environment. The current record for fusion energy produced by a tokamak is 69 megajoules, achieved in a 5-second burst at the Joint European Torus (JET) in the UK.

Given the difficulty of using neutrons from fusion reactors, our team is looking for answers using ions instead. Ion irradiation is much more readily available, quicker to perform, and doesn’t make the samples radioactive. It is also possible to access a wide range of energies and ion species to tune the damage mechanisms in the material. The trouble is that because ions are charged they won’t interact with materials in exactly the same way as neutrons, so it is not clear if these particles cause the same kinds of damage or by the same mechanisms.

To find out, we first tried to directly image the crystalline structure of REBCO after both neutron and ion irradiation using transmission electron microscopy (TEM). When we compared the samples, we saw small amorphous regions in the neutron-irradiated REBCO where the crystal structure was destroyed (J. Microsc. 286 3), which are not observed after light ion irradiation (see Box 3 below).

3 Spot the difference

Irradiated REBCO crystal structure
(Courtesy: R.J. Nicholls, S. Diaz-Moreno, W. Iliffe et al. Communications Materials 3 52)

TEM images of REBCO before (a) and after (b) helium ion irradiation. The image on the right (c) shows only the positions of the copper, barium and rare-earth atoms – the oxygen atoms in the crystal lattice cannot be imaged using this technique. After ion irradiation, REBCO materials exhibit a lower superconducting transition temperature. However, the above images show no corresponding defects in the lattice, indicating that defects caused by oxygen atoms being knocked out of place are responsible for this effect.

We believe these regions to be collision cascades generated initially by a single violent neutron impact that knocks an atom out of its place in the lattice with enough energy that the atom ricochets through the material, knocking other atoms from their positions. However, these amorphous regions are small, and superconducting currents should be able to pass around them, so it was likely that another effect was reducing the superconducting transition temperature.

Searching for clues

The TEM images didn’t show any other defects, so on our hunt to understand the effect of neutron irradiation, we instead thought about what we couldn’t see in the images. The TEM technique we used cannot resolve the oxygen atoms in REBCO because they are too light to scatter the electrons by large angles. Oxygen is also the most mobile atom in a REBCO material, which led us to think that oxygen point defects – single oxygen atoms that have been moved out of place and which are distributed randomly throughout the material – might be responsible for the drop in transition temperature.

In REBCO, the oxygen atoms are all bonded to copper, so the bonding environment of the copper atoms can be used to identify oxygen defects. To test this theory we switched from electrons to photons, using a technique called X-ray absorption spectroscopy. Here the sample is illuminated with X-rays that preferentially excite the copper atoms; the precise energies where absorption is highest indicate specific bonding arrangements, and therefore point to specific defects. We have started to identify the defects that are likely to be present in the irradiated samples, finding spectral changes that are consistent with oxygen atoms moving into unoccupied sites (Communications Materials 3 52).

We see very similar changes to the spectra when we irradiate with helium ions and neutrons, suggesting that similar defects are created in both cases (Supercond. Sci. Technol. 36 10LT01). This work has increased our confidence that light ions are a good proxy for neutron damage in REBCO superconductors, and that this damage is due to changes in the oxygen lattice.

Surrey Ion Beam Centre
The Surrey Ion Beam Centre allows users to carry out a wide variety of research using ion implantation, ion irradiation and ion beam analysis. (Courtesy: Surrey Ion Beam Centre)

Another advantage of ion irradiation is that, compared to neutrons, it is easier to access experimentally relevant cryogenic temperatures. Our experiments are performed at the Surrey Ion Beam Centre, where a cryocooler can be attached to the end of the ion accelerator, enabling us to recreate some of the conditions inside a fusion reactor.

We have shown that when REBCO is irradiated at cryogenic temperatures and then allowed to warm to room temperature, it recovers some of its superconducting properties (Supercond. Sci. Technol. 34 09LT01). We attribute this to annealing, where rearrangements of atoms occur in a material warmed below its melting point, smoothing out defects in the crystal lattice. We have shown that further recovery of a perfect superconducting lattice can be induced using careful heat treatments to avoid loss of oxygen from the samples (MRS Bulletin 48 710).

Lots more experiments are required to fully understand the effect of irradiation temperature on the degradation of REBCO. Our results indicate that room temperature and cryogenic irradiation with helium ions lead to a similar rate of degradation, but similar work by a group at the Massachusetts Institute of Technology (MIT) in the US using proton irradiation has found that the superconductor degrades more rapidly at cryogenic temperatures (Rev. Sci. Instrum. 95 063907).  The effect of other critical parameters like magnetic field and strain also still needs to be explored.

Towards net zero

The remarkable properties of REBCO high-temperature superconductors present new opportunities for designing fusion reactors that are substantially smaller (and cheaper) than traditional tokamaks – machines that private companies ambitiously promise will deliver power to the grid on vastly accelerated timescales. REBCO tape can already be manufactured commercially with the required performance, but more research is needed to understand the effects of the neutron damage that the magnets will be subjected to, so that they achieve the desired service lifetimes.


Scale-up of REBCO tape production is already happening at pace, and it is expected that this will drive down the cost of manufacture. This would open up extensive new applications, not only in fusion but also in power applications such as lossless transmission cables, for which the historically high costs of the superconducting material have proved prohibitive. Superconductors are also being introduced into wind turbine generators, and magnet-based energy storage devices.

This symbiotic relationship between fusion and superconductor research could lead not only to the realization of clean fusion energy but also many other superconducting technologies that will contribute to the achievement of net zero.


  •  

Astronomers create a ‘weather map’ for a gas giant exoplanet

Astronomers have constructed the first “weather map” of the exoplanet WASP-127b, and the forecast there is brutal. Winds roar around its equator at speeds as high as 33 000 km/hr, far exceeding anything found in our own solar system. Its poles are cooler than the rest of its surface, though “cool” is a relative term on a planet where temperatures routinely exceed 1000 °C. And its atmosphere contains water vapour, so rain – albeit not in the form we’re accustomed to on Earth – can’t be ruled out.

Astronomers have been studying WASP-127b since its discovery in 2016. A gas giant exoplanet located over 500 light-years from Earth, it is slightly larger than Jupiter but much less dense, and it orbits its host – a G-type star like our own Sun – in just 4.18 Earth days. To probe its atmosphere, astronomers record the light transmitted when the planet passes in front of its host star along our line of sight. During such passes, or transits, some starlight is filtered through the planet’s upper atmosphere and is “imprinted” with the characteristic pattern of absorption lines of the atoms and molecules present there.

Observing the planet during a transit event

On the night of 24/25 March 2022, astronomers used the CRyogenic InfraRed Echelle Spectrograph (CRIRES+) on the European Southern Observatory’s Very Large Telescope to observe WASP-127b at wavelengths of 1972‒2452 nm during a transit event lasting 6.6 hours. The data they collected show that the planet is home to supersonic winds travelling at speeds nearly six times faster than its own rotation – something that has never been observed before. By comparison, the fastest wind speeds measured in our solar system were on Neptune, where they top out at “just” 1800 km/hr, or 0.5 km/s.
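To see where the “nearly six times” figure comes from, the numbers quoted above can be combined in a few lines of Python. This is only a rough, illustrative cross-check: it assumes the planet is tidally locked, so that its rotation period equals its 4.18-day orbit, and it adopts a radius of about 1.3 Jupiter radii – an assumed value, since the article only describes WASP-127b as slightly larger than Jupiter.

```python
import math

# Rough cross-check of the "winds nearly six times faster than the rotation"
# comparison. The planetary radius below is an assumption for illustration.
R_JUPITER_KM = 69_911            # Jupiter's mean radius in km
radius_km = 1.3 * R_JUPITER_KM   # assumed radius of WASP-127b
rotation_period_hr = 4.18 * 24   # tidally locked: rotation period = orbital period

rotation_speed = 2 * math.pi * radius_km / rotation_period_hr  # equatorial speed, km/hr
wind_speed = 33_000                                            # reported wind speed, km/hr

print(f"equatorial rotation speed ~ {rotation_speed:,.0f} km/hr")
print(f"wind-to-rotation ratio    ~ {wind_speed / rotation_speed:.1f}")
# prints roughly 5,700 km/hr and a ratio of about 5.8, consistent with "nearly six times"
```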

Such strong winds – the fastest ever observed on a planet – would be hellish to experience. But for the astronomers, they were crucial for mapping WASP-127b’s weather.

“The light we measure still looks to us as if it all came from one point in space, because we cannot resolve the planet optically/spatially like we can do for planets in our own solar system,” explains Lisa Nortmann, an astronomer at the University of Göttingen, Germany, and the lead author of an Astronomy & Astrophysics paper describing the measurements. However, Nortmann continues, “the unexpectedly fast velocities measured in this planet’s atmosphere have allowed us to investigate different regions on the planet, as they cause their signals to shift to different parts of the light spectrum. This meant we could reconstruct a rough weather map of the planet, even though we cannot resolve these different regions optically.”
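The shifts Nortmann describes are Doppler shifts: gas moving towards or away from us displaces its absorption lines by a fraction v/c of their wavelength. The sketch below runs that arithmetic with the figures quoted in the article; the 2300 nm line wavelength is simply a representative value inside the observed band, not a specific line from the study.

```python
# Minimal Doppler-shift estimate using the figures quoted in the article.
C_KM_S = 299_792.458        # speed of light in km/s
wind_km_s = 33_000 / 3600   # 33,000 km/hr expressed in km/s (about 9.2 km/s)
wavelength_nm = 2300        # representative wavelength within the 1972-2452 nm band

shift_nm = wavelength_nm * wind_km_s / C_KM_S
print(f"Doppler shift ~ {shift_nm:.3f} nm")
# ~0.07 nm, a fractional shift of ~3e-5 - large enough for a high-resolution
# spectrograph such as CRIRES+ to separate wind-shifted planetary lines
```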

The astronomers also used the transit data to study the composition of WASP-127b’s atmosphere. They detected both water vapour and carbon monoxide. In addition, they found that the temperature was lower at the planet’s poles than elsewhere.

Removing unwanted signals

According to Nortmann, one of the challenges in the study was removing signals from Earth’s atmosphere and WASP-127b’s host star so as to focus on the planet itself. She notes that the work will have implications for researchers working on theoretical models that aim to predict wind patterns on exoplanets.

“They will now have to try to see if their models can recreate the wind speeds we have observed,” she tells Physics World. “The results also really highlight that when we investigate this and other planets, we have to take the 3D structure of winds into account when interpreting our results.”

The astronomers say they are now planning further observations of WASP-127b to find out whether its weather patterns are stable or change over time. “We would also like to investigate molecules on the planet other than H2O and CO,” Nortmann says. “This could possibly allow us to probe the wind at different altitudes in the planet’s atmosphere and understand the conditions there even better.”


  •  

Precision radiosurgery: optimal dose delivery with cobalt-60

Leksell Gamma Knife Esprit

Join us for an insightful webinar that delves into the role of Cobalt-60 in intracranial radiosurgery using Leksell Gamma Knife.

Through detailed discussions and expert insights, attendees will learn how Leksell Gamma Knife, powered by cobalt-60, has revolutionized – and continues to revolutionize – the field of radiosurgery, offering patients a safe and effective treatment option.

Participants will gain a comprehensive understanding of the use of cobalt in medical applications, highlighting its significance, and learn more about the unique properties of cobalt-60. The webinar will explore the benefits of cobalt-60 in intracranial radiosurgery and why it is an ideal choice for treating brain lesions while minimizing damage to surrounding healthy tissue.

Don’t miss this opportunity to enhance your knowledge and stay at the forefront of medical advancements in radiosurgery!

Riccardo Bevilacqua

Riccardo Bevilacqua, a nuclear physicist with a PhD in neutron data for Generation IV nuclear reactors from Uppsala University, has worked as a scientist for the European Commission and at various international research facilities. His career has transitioned from research to radiation safety and back to medical physics, the field that first interested him as a student in Italy. Based in Stockholm, Sweden, he leads global radiation safety initiatives at Elekta. Outside of work, Riccardo is a father, a stepfather, and writes popular science articles on physics and radiation.

 


  •  

LEGO interferometer aims to put quantum science in the spotlight

We’ve had the LEGO Large Hadron Collider, a LEGO-based quantum computer and even a LEGO Kibble balance. But now you can add a LEGO interferometer to that list thanks to researchers from the University of Nottingham.

Working with “student LEGO enthusiasts”, they have developed a fully functional LEGO interferometer kit that consists of lasers, mirrors, beamsplitters and, of course, some LEGO bricks.

The set, designed as a teaching aid for secondary-school pupils and older, is aimed at making quantum science more accessible and engaging as well as demonstrating the basic principles of interferometry such as interference patterns.

“Developing this project made me realise just how incredibly similar my work as a quantum scientist is to the hands-on creativity of building with LEGO,” notes Nottingham quantum physicist Patrik Svancara. “It’s an absolute thrill to show the public that cutting-edge research isn’t just complex equations. It’s so much more about curiosity, problem-solving, and gradually bringing ideas to life, brick by brick!”

A team at Cardiff University will now work on the design and develop materials that can be used to train science teachers with the hope that the sets will eventually be made available throughout the UK.

“We are sharing our experiences, LEGO interferometer blueprints, and instruction manuals across various online platforms to ensure our activities have a lasting impact and reach their full potential,” adds Svancara.

If you want to see the LEGO interferometer in action for yourself then it is being showcased at the Cosmic Titans: Art, Science, and the Quantum Universe exhibition at Nottingham’s Djanogly Art Gallery, which runs until 27 April.


  •  

Test your quantum knowledge in this fun quiz

Two comic-style images labelled 1 and 2. First shows twin girls with the IYQ logo on their clothing. Second shows Alice and Bob on the telephone in Roy Lichtenstein style
(Courtesy: Jorge Cham; IOP Publishing)

1 Can you name the mascot for IYQ 2025?

2 In quantum cryptography, who eavesdrops on Alice and Bob?

Two images labelled 3 and 4. 3: photo of a large wire sculpture on a pier over the Thames. 4: STM image of an oval of bright colours with small peaks all around the outside and one peak in the middle
(Courtesy: Andy Roberts IBM Research/Science Photo Library)

3 Which artist made the Quantum Cloud sculpture in London?

4 IBM used which kind of atoms to create its Quantum Mirage image?

5 When Werner Heisenberg developed quantum mechanics on Helgoland in June 1925, he had travelled to the island to seek respite from what?
A His allergies
B His creditors
C His funders
D His lovers

6 According to the State of Quantum 2024 report, how many countries around the world had government initiatives in quantum technology at the time of writing?
A 6
B 17
C 24
D 33

7 The E91 quantum cryptography protocol was invented in 1991. What does the E stand for?
A Edison
B Ehrenfest
C Einstein
D Ekert

8 British multinational consumer-goods firm Reckitt sells a “Quantum” version of which of its household products?
A Air Wick freshener
B Finish dishwasher tablets
C Harpic toilet cleaner
D Vanish stain remover

9 John Bell’s famous theorem of 1964 provides a mathematical framework for understanding what quantum paradox?
A Einstein–Podolsky–Rosen
B Quantum indefinite causal order
C Schrödinger’s cat
D Wigner’s friend

10 Which celebrated writer popularized the notion of Schrödinger’s cat in the mid-1970s?
A Douglas Adams
B Margaret Atwood
C Arthur C Clarke
D Ursula K le Guin

11 Which of these isn’t an interpretation of quantum mechanics?
A Copenhagen
B Einsteinian
C Many worlds
D Pilot wave

12 Which of these companies is not a real quantum company?
A Qblox
B Qruise
C Qrypt
D Qtips

13 Which celebrity was spotted in the audience at a meeting about quantum computers and music in London in December 2022?
A Peter Andre
B Peter Capaldi
C Peter Gabriel
D Peter Schmeichel

14 Which of the following birds has not yet been chosen by IBM as the name for different versions of its quantum hardware?
A Condor
B Eagle
C Flamingo
D Peregrine

15 When quantum theorist Erwin Schrödinger fled Nazi-controlled Vienna in 1938, where did he hide his Nobel-prize medal?
A In a filing cabinet
B Under a pot plant
C Behind a sofa
D In a desk drawer

16 Which of the following versions of the quantum Hall effect has not been observed so far in the lab?
A Fractional quantum Hall effect
B Anomalous fractional quantum Hall effect
C Anyonic fractional quantum Hall effect
D Excitonic fractional quantum Hall effect

17 What did Quantum Coffee on Front Street West in Toronto call its recently launched pastry, which is a superposition of a croissant and muffin?
A Croissin
B Cruffin
C Muffant
D Muffcro

18 What destroyed the Helgoland guest house where Heisenberg stayed in 1925 while developing quantum mechanics?
A A bomb
B A gas leak
C A rat infestation
D A storm

  • This quiz is for fun and there are no prizes. Answers will be revealed on the Physics World website in April.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.


  •  

US science faces unprecedented difficulties under the Trump administration

As physicists, we like to think that physics and politics are – indeed, ought to be – unconnected. And a lot of the time, that’s true.

Certainly, the value of the magnetic moment of the muon or the behaviour of superconductors in a fusion reactor (look out for our feature article next week) have nothing to do with where anyone sits on the political spectrum. It’s subjects like climate change, evolution and medical research that tend to get caught in the political firing line.

But scientists of all disciplines in the US are now feeling the impact of politics at first hand. The new administration of Donald Trump has ordered the National Institutes of Health to slash the “indirect” costs of its research projects, threatening medical science and putting the universities that support it at risk. The National Science Foundation, which funds much of US physics, is under fire too, with staff sacked and grant funding paused.

Trump has also signed a flurry of executive orders that, among other things, ban federal government initiatives to boost diversity, equity and inclusion (DEI) and instruct government departments to “combat illegal private-sector DEI preferences, mandates, policies, programs and activities”. Some organizations are already abandoning such efforts for fear of these future repercussions.

What’s troubling for physics is that attacks on diversity initiatives fall most heavily on people from under-represented groups, who are more likely to quit physics or not go into it in the first place. That’s bad news for our subject as a whole because we know that a diverse community brings in smart ideas, new approaches and clever thinking.

The speed of changes in the US is bewildering too. Yes, the proportion of federal grant funding that goes to indirect costs might be too high, but making dramatic changes at short notice, with no consultation, is bizarre. There’s also a danger that universities will try to recoup lost money by raising tuition fees, which will hit poorer students the hardest.

US science has long been a beacon of excellence, a top destination especially for researchers from other nations. But many scientists are fearful of speaking out, scared that they or their institutions will pay a price for any opposition.

So far, it’s been left to senior leaders such as James Gates – a theoretical physicist at the University of Maryland – to warn of the dangers in store. “My country,” he said at an event earlier this month, “is in for a 50-year period of a new dark ages.”

I sincerely hope he’s wrong.


  •  

Jim Gates updates his theorist’s bucket list and surveys the damage being done to US science and society

This episode of the Physics World Weekly podcast features an interview with the theoretical physicist Jim Gates who is at the University of Maryland and Brown University – both in the US.

He updates his theorist’s bucket list, which he first shared with Physics World back in 2014. This is a list of breakthroughs in physics that Gates would like to see happen before he dies.

One list item – the observation of gravitational waves – happened in 2015 and Gates explains the importance of the discovery. He also explains why the observation of gravitons, which are central to a theory of quantum gravity, is on his bucket list.

Quantum information

Gates is known for his work on supersymmetry and superstring theory, so it is not surprising that experimental evidence for those phenomena is on the bucket list. Gates also talks about a new item on his list that concerns the connections between quantum physics and information theory.

In this interview with Physics World’s Margaret Harris, Gates also reflects on how the current political upheaval in the US is affecting science and society – and what scientists can do to ensure that the public has faith in science.


  •  

Incoming CERN director-general Mark Thomson outlines his future priorities

How did you get interested in particle physics?

I studied physics at the University of Oxford and I was the first person in my family to go to university. I then completed a DPhil at Oxford in 1991 studying cosmic rays and neutrinos. In 1992 I moved to University College London as a research fellow. That was the first time I went to CERN and two years later I began working on the Large Electron-Positron Collider, which was the predecessor of the Large Hadron Collider. I was fortunate enough to work on some of the really big measurements of the W and Z bosons and electroweak unification, so it was a great time in my life. In 2000 I worked at the University of Cambridge where I set up a neutrino group. It was then that I began working at Fermilab – the US’s premier particle physics lab.

So you flipped from collider physics to neutrino physics?

Over the past 20 years, I have oscillated between them and sometimes have done both in parallel. Probably the biggest step forward was in 2013 when I became spokesperson for the Deep Underground Neutrino Experiment – a really fascinating, challenging and ambitious project. In 2018 I was then appointed executive chair of the Science and Technology Facilities Council (STFC) – one of the main UK funding agencies. The STFC funds particle physics and astronomy in the UK and maintains relationships with organizations such as CERN and the Square Kilometre Array Observatory, as well as operating some of the UK’s biggest national infrastructures such as the Rutherford Appleton Laboratory and the Daresbury Laboratory.

What did that role involve?

It covered strategic funding of particle physics and astronomy in the UK and also involved running a very large scientific organization with about 2800 scientific, technical and engineering staff. It was very good preparation for the role as CERN director-general.

What attracted you to become CERN director-general?

CERN is such an important part of the global particle-physics landscape. But I don’t think there was ever a moment where I just thought “Oh, I must do this”. I’ve spent six years on the CERN Council, so I know the organization well. I realized I had all of the tools to do the job – a combination of the science, knowing the organization and then my experience in previous roles. CERN has been a large part of my life for many years, so it’s a fantastic opportunity for me.

What were your first thoughts when you heard you had got the role?

It was quite a surreal moment. My first thoughts were “Well, OK, that’s fun”, so it didn’t really sink in until the evening. I’m obviously very happy and it was fantastic news but it was almost a feeling of “What happens now?”.

So what does happen now, as CERN director-general designate?

There will be a little bit of shadowing, but you can’t shadow someone for the whole year, that doesn’t make very much sense. So what I really have to do is understand the organization, how it works from the inside and, of course, get to know the fantastic CERN staff, which I’ve already started doing. A lot of my time at the moment is meeting people and understanding how things work.

How might you do things differently?

I don’t think I will do anything too radical. I will have a look at where we can make things work better. But my priority for now is putting in place the team that will work with me from January. That’s quite a big chunk of work.


What do you think your leadership style will be?

I like to put around me a strong leadership team and then delegate and trust the leadership team to deliver. I’m there to set the strategic direction but also to empower them to deliver. That means I can take an outward focus and engage with the member states to promote CERN. I think my leadership style is to put in place a culture where the staff can thrive and operate in a very open and transparent way. That’s very important to me because it builds trust both within the organization and with CERN’s partners. The final thing is that I’m 100% behind CERN being an inclusive organization.

So diversity is an important aspect for you?

I am deeply committed to diversity and CERN is deeply committed to it in all its forms, and that will not change. This is a common value across Europe: our member states absolutely see diversity as being critical, and it means a lot to our scientific communities as well. From a scientific point of view, if we’re not supporting diversity, we’re losing people who are no different from others who come from more privileged backgrounds. Also, diversity at CERN has a special meaning: it means all the normal protected characteristics, but also national diversity. CERN is a community of 24 member states and quite a few associate member states, and ensuring nations are represented is incredibly important. It’s the way you do the best science, ultimately, and it’s the right thing to do.

The LHC is undergoing a £1bn upgrade towards the High Luminosity-LHC (HL-LHC). What will that entail?

The HL-LHC is a big step up in terms of capability and the goal will be to increase the luminosity of the machine. We are also upgrading the detectors to make them even more precise. The HL-LHC will run from about 2030 to the early 2040s. So by the end of LHC operations, we would have only taken about 10% of the overall data set once you add what the HL-LHC is expected to produce.

What physics will that allow?

There’s a very specific measurement that we would like to make around the nature of the Higgs mechanism. There’s something very special about the Higgs boson in that it has a very strange vacuum potential, so it’s always there in the vacuum. With the HL-LHC, we’re going to start to study the structure of that potential. That’s a really exciting and fundamental measurement and it’s a place where we might start to see new physics.

Beyond the HL-LHC, you will also be involved in planning what comes next. What are the options?

We have a decision to make on what comes after the HL-LHC in the mid-2040s. It seems a long way off but these projects need a 20-year lead-in. I think the consensus amongst the scientific community for a number of years has been that the next machine must explore the Higgs boson. The motivation for a Higgs factory is incredibly strong.

Yet there has not been much consensus whether that should be a linear or circular machine?

My personal view is that a circular collider is the way forward. One option is the Future Circular Collider (FCC) – a 91 km circumference collider that would be built at CERN.

What would the benefits of the FCC be?

We know how to build circular colliders and it gives you significantly more capability than a linear machine by producing more Higgs bosons. It is also a piece of research infrastructure that will be there for many years beyond the electron–positron collider. The other aspect is that at some point in the future, we are going to want a high-energy hadron collider to explore the unknown.

But it won’t come cheap, with estimates being about £12–15bn for the electron–positron version, dubbed FCC-ee?

While the price tag for the FCC-ee is significant, that is spread over 24 member states for 15 years and contributions can also come from elsewhere. I’m not saying it’s going to be easy to actually secure that jigsaw puzzle of resource, because money will need to come from outside Europe as well.

China is also considering the Circular Electron Positron Collider (CEPC) that could, if approved, be built by the 2030s. What would happen to the FCC if the CEPC were to go ahead?

I think that will be part of the European Strategy for Particle Physics, which will happen throughout this year, to think about the ifs and buts. Of course, nothing has really been decided in China. It’s a big project and it might not go ahead. I would say it’s quite easy to put down aggressive timescales on paper but actually delivering them is always harder. The big advantage of CERN is that we have the scientific and engineering heritage in building colliders and operating them. There is only one CERN in the world.

What do you make of alternative technologies such as muon colliders that could be built in the existing LHC tunnel and offer high energies?

It’s an interesting concept but technically we don’t know how to do it. There’s a lot of development work but it’s going to take a long time to turn that into a real machine. So looking at a muon collider on the time scale of the mid-2040s is probably unrealistic. What is critical for an organization like CERN and for global particle physics is that when the HL-LHC stops by 2040, there’s not a large gap without a collider project.

Last year CERN celebrated its 70th anniversary, what do you think particle physics might look like in the next 70 years?

If you look back at the big discoveries over the last 30 years we’ve seen neutrino oscillations, the Higgs boson, gravitational waves and dark energy. That’s four massive discoveries. In the coming decade we will know a lot more about the nature of the neutrino and the Higgs boson via the HL-LHC. The big hope is we find something else that we don’t expect.


  •  

‘Sneeze simulator’ could improve predictions of pathogen spread

A new “sneeze simulator” could help scientists understand how respiratory illnesses such as COVID-19 and influenza spread. Built by researchers at the Universitat Rovira i Virgili (URV) in Spain, the simulator is a three-dimensional model that incorporates a representation of the nasal cavity as well as other parts of the human upper respiratory tract. According to the researchers, it should help scientists to improve predictive models for respiratory disease transmission in indoor environments, and could even inform the design of masks and ventilation systems that mitigate the effects of exposure to pathogens.

For many respiratory illnesses, pathogen-laden aerosols expelled when an infected person coughs, sneezes or even breathes are important ways of spreading disease. Our understanding of how these aerosols disperse has advanced in recent years, mainly through studies carried out during and after the COVID-19 pandemic. Some of these studies deployed techniques such as spirometry and particle imaging to characterize the distributions of particle sizes and airflow when we cough and sneeze. Others developed theoretical models that predict how clouds of particles will evolve after they are ejected and how droplet sizes change as a function of atmospheric humidity and composition.

To build on this work, the URV researchers sought to understand how the shape of the nasal cavity affects these processes. They argue that neglecting this factor leads to an incomplete understanding of airflow dynamics and particle dispersion patterns, which in turn affects the accuracy of transmission modelling. As evidence, they point out that studies focused on sneezing (which occurs via the nose) and coughing (which occurs primarily via the mouth) detected differences in how far droplets travelled, the amount of time they stayed in the air and their pathogen-carrying potential – all parameters that feed into transmission models. The nasal cavity also affects the shape of the particle cloud ejected, which has previously been found to influence how pathogens spread.

The challenge they face is that the anatomy of the nasal cavity varies greatly from person to person, making it difficult to model. However, the URV researchers say that their new simulator, which is based on realistic 3D printed models of the upper respiratory tract and nasal cavity, overcomes this limitation, precisely reproducing the way particles are produced when people cough and sneeze.

Reproducing human coughs and sneezes

One of the features that allows the simulator to do this is a variable nostril opening. This enables the researchers to control air flow through the nasal cavity, and thus to replicate different sneeze intensities. The simulator also controls the strength of exhalations, meaning that the team could investigate how this and the size of nasal airways affects aerosol cloud dispersion.

During their experiments, which are detailed in Physics of Fluids, the URV researchers used high-speed cameras and a laser beam to observe how particles disperse following a sneeze. They studied three airflow rates typical of coughs and sneezes and monitored what happened with and without nasal cavity flow. Based on these measurements, they used a well-established model to predict the range of the aerosol cloud produced.

A photo of a man with dark hair, glasses and a beard holding a 3D model of the human upper respiratory tract. A mask is mounted on a metal arm in the background.
Simulator: Team member Nicolás Catalán with the three-dimensional model of the human upper respiratory tract. The mask in the background hides the 3D model to simulate any impact of the facial geometry on the particle dispersion. (Courtesy: Bureau for Communications and Marketing of the URV)

“We found that nasal exhalation disperses aerosols more vertically and less horizontally, unlike mouth exhalation, which projects them toward nearby individuals,” explains team member Salvatore Cito. “While this reduces direct transmission, the weaker, more dispersed plume allows particles to remain suspended longer and become more uniformly distributed, increasing overall exposure risk.”

These findings have several applications, Cito says. For one, the insights gained could be used to improve models used in epidemiology and indoor air quality management.

“Understanding how nasal exhalation influences aerosol dispersion can also inform the design of ventilation systems in public spaces, such as hospitals, classrooms and transportation systems to minimize airborne transmission risks,” he tells Physics World.

The results also suggest that protective measures such as masks should be designed to block both nasal and oral exhalations, he says, adding that full-face coverage is especially important in high-risk settings.

The researchers’ next goal is to study the impact of environmental factors such as humidity and temperature on aerosol dispersion. Until now, such experiments have only been carried out under controlled isothermal conditions, which does not reflect real-world situations. “We also plan to integrate our experimental findings with computational fluid dynamics simulations to further refine protective models for respiratory aerosol dispersion,” Cito reveals.


  •  

Memory of previous contacts affects static electricity on materials

Physicists in Austria have shown that the static electricity acquired by identical material samples can evolve differently over time, based on each sample’s history of contact with other samples. Led by Juan Carlos Sobarzo and Scott Waitukaitis at the Institute of Science and Technology Austria, the team hope that their experimental results could provide new insights into one of the oldest mysteries in physics.

Static electricity – also known as contact electrification or triboelectrification – has been studied for centuries. However, physicists still do not understand some aspects of how it works.

“It’s a seemingly simple effect,” Sobarzo explains. “Take two materials, make them touch and separate them, and they will have exchanged electric charge. Yet, the experiments are plagued by unpredictability.”

This mystery is epitomized by an early experiment carried out by the German-Swedish physicist Johan Wilcke in 1757. When glass was touched to paper, Wilcke found that the glass gained a positive charge, while paper touched to sulphur itself became positively charged.

Triboelectric series

Wilcke concluded that glass will become positively charged when touched to sulphur. This concept formed the basis of the triboelectric series, which ranks materials according to the charge they acquire when touched to another material.

Yet in the intervening centuries, the triboelectric series has proven to be notoriously inconsistent. Despite our vastly improved knowledge of material properties since the time of Wilcke’s experiments, even the latest attempts at ordering materials into triboelectric series have repeatedly failed to hold up to experimental scrutiny.

According to Sobarzo and colleagues, this problem has been confounded by the diverse array of variables associated with a material’s contact electrification. These include its electronic properties, pH, hydrophobicity, and mechanochemistry, to name just a few.

In their new study, the team approached the problem from a new perspective. “In order to reduce the number of variables, we decided to use identical materials,” Sobarzo describes. “Our samples are made of a soft polymer (PDMS) that I fabricate myself in the lab, cut from a single piece of material.”

Starting from scratch

For these identical materials, the team proposed that triboelectric properties could evolve over time as the samples were brought into contact with other, initially identical samples. If this were the case, it would allow the team to build a triboelectric series from scratch.

At first, the results seemed as unpredictable as ever. However, as the same set of samples underwent repeated contacts, the team found that their charging behaviour became more consistent, gradually forming a clear triboelectric series.

Initially, the researchers attempted to uncover correlations between this evolution and variations in the parameters of each sample – with no conclusive results. This led them to consider whether the triboelectric behaviour of each sample was affected by the act of contact itself.

Contact history

“Once we started to keep track of the contact history of our samples – that is, the number of times each sample has been contacted to others – the unpredictability we saw initially started to make sense,” Sobarzo explains. “The more contacts samples would have in their history, the more predictably they would behave. Not only that, but a sample with more contacts in its history will consistently charge negative against a sample with less contacts in its history.”
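That rule is simple enough to express as a few lines of bookkeeping. The sketch below is purely illustrative – a hypothetical helper, not the team’s analysis code – showing how the sign of charging would be predicted from contact counts alone.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    name: str
    contacts: int = 0   # how many prior contacts this sample has experienced

def touch(a: Sample, b: Sample) -> str:
    """Predict the sign of charge exchange for one contact using the reported
    rule: the sample with the longer contact history charges negative.
    Equal histories are left 'unpredictable', echoing the erratic early data."""
    if a.contacts == b.contacts:
        result = f"{a.name} vs {b.name}: unpredictable (equal contact histories)"
    else:
        neg, pos = (a, b) if a.contacts > b.contacts else (b, a)
        result = f"{neg.name} charges negative, {pos.name} charges positive"
    a.contacts += 1
    b.contacts += 1
    return result

A, B, C = Sample("A"), Sample("B"), Sample("C")
print(touch(A, B))   # equal histories: outcome unpredictable
print(touch(A, C))   # A now has the longer history, so A charges negative
```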

To explain the origins of this history-dependent behaviour, the team used a variety of techniques to analyse differences between the surfaces of uncontacted samples, and those which had already been contacted several times. Their measurements revealed just one difference between samples at different positions on the triboelectric series. This was their nanoscale surface roughness, which smoothed out as the samples experienced more contacts.

“I think the main take away is the importance of contact history and how it can subvert the widespread unpredictability observed in tribocharging,” Sobarzo says. “Contact is necessary for the effect to happen, it’s part of the name ‘contact electrification’, and yet it’s been widely overlooked.”

The team is still uncertain of how surface roughness could be affecting their samples’ place within the triboelectric series. However, their results could now provide the first steps towards a comprehensive model that can predict a material’s triboelectric properties based on its contact-induced surface roughness.

Sobarzo and colleagues are hopeful that such a model could enable robust methods for predicting the charges which any given pair of materials will acquire as they touch each other and separate. In turn, it may finally help to provide a solution to one of the most long-standing mysteries in physics.

The research is described in Nature.


  •  

Wireless deep brain stimulation reverses Parkinson’s disease in mice

Nanoparticle-mediated DBS reverses the symptoms of Parkinson’s disease
Nanoparticle-mediated DBS (I) Pulsed NIR irradiation triggers the thermal activation of TRPV1 channels. (II, III) NIR-induced β-syn peptide release into neurons disaggregates α-syn fibrils and thermally activates autophagy to clear the fibrils. This therapy effectively reverses the symptoms of Parkinson’s disease. Created using BioRender.com. (Courtesy: CC BY-NC/Science Advances 10.1126/sciadv.ado4927)

A photothermal, nanoparticle-based deep brain stimulation (DBS) system has successfully reversed the symptoms of Parkinson’s disease in laboratory mice. Under development by researchers in Beijing, China, the injectable, wireless DBS not only reversed neuron degeneration, but also boosted dopamine levels by clearing out the buildup of harmful fibrils around dopamine neurons. Following DBS treatment, diseased mice exhibited near comparable locomotive behaviour to that of healthy control mice.

Parkinson’s disease is a chronic brain disorder characterized by the degeneration of dopamine-producing neurons and the subsequent loss of dopamine in regions of the brain. Current DBS treatments focus on amplifying dopamine signalling and production, and may require permanent implantation of electrodes in the brain. Another approach under investigation is optogenetics, which involves gene modification. Both techniques increase dopamine levels and reduce Parkinsonian motor symptoms, but they do not restore degenerated neurons to stop disease progression.

Chunying Chen
Team leader Chunying Chen from the National Center for Nanoscience and Technology. (Courtesy: Chunying Chen)

The research team, at the National Center for Nanoscience and Technology of the Chinese Academy of Sciences, hypothesized that the heat-sensitive receptor TRPV1, which is highly expressed in dopamine neurons, could serve as a modulatory target to activate dopamine neurons in the substantia nigra of the midbrain. This region contains a large concentration of dopamine neurons and plays a crucial role in how the brain controls bodily movement.

Previous studies have shown that neuron degeneration is mainly driven by α-synuclein (α-syn) fibrils aggregating in the substantia nigra. Successful treatment, therefore, relies on removing this build up, which requires restarting of the intracellular autophagic process (in which a cell breaks down and removes unnecessary or dysfunctional components).

As such, principal investigator Chunying Chen and colleagues aimed to develop a therapeutic system that could reduce α-syn accumulation by simultaneously disaggregating α-syn fibrils and initiating the autophagic process. Their three-component DBS nanosystem, named ATB (Au@TRPV1@β-syn), combines photothermal gold nanoparticles, dopamine neuron-activating TRPV1 antibodies, and β-synuclein (β-syn) peptides that break down α-syn fibrils.

The ATB nanoparticles anchor to dopamine neurons through the TRPV1 receptor then, acting as nanoantennae, convert pulsed near-infrared (NIR) irradiation into heat. This activates the heat-sensitive TRPV1 receptor and restores degenerated dopamine neurons. At the same time, the nanoparticles release β-syn peptides that clear out α-syn fibril buildup and stimulate intracellular autophagy.

The researchers first tested the system in vitro in cellular models of Parkinson’s disease. They verified that under NIR laser irradiation, ATB nanoparticles activate neurons through photothermal stimulation by acting on the TRPV1 receptor, and that the nanoparticles successfully counteracted the α-syn preformed fibril (PFF)-induced death of dopamine neurons. In cell viability assays, neuron death was reduced from 68% to zero following ATB nanoparticle treatment.

Next, Chen and colleagues investigated mice with PFF-induced Parkinson’s disease. The DBS treatment begins with stereotactic injection of the ATB nanoparticles directly into the substantia nigra. They selected this approach over systemic administration because it provides precise targeting, avoids the blood–brain barrier and achieves a high local nanoparticle concentration with a low dose – potentially boosting treatment effectiveness.

Following injection of either nanoparticles or saline, the mice underwent pulsed NIR irradiation once a week for five weeks. The team then performed a series of tests to assess the animals’ motor abilities (after a week of training), comparing the performance of treated and untreated PFF mice, as well as healthy control mice. This included the rotarod test, which measures the time until the animal falls from a rotating rod that accelerates from 5 to 50 rpm over 5 min, and the pole test, which records the time for mice to crawl down a 75 cm-long pole.

Results of motor tests in mice
Motor tests Results of (left to right) rotarod, pole and open field tests, for control mice, mice with PFF-induced Parkinson’s disease, and PFF mice treated with ATB nanoparticles and NIR laser irradiation. (Courtesy: CC BY-NC/Science Advances 10.1126/sciadv.ado4927)

The team also performed an open field test to evaluate locomotive activity and exploratory behaviour. Here, mice are free to move around a 50 x 50 cm area, while their movement paths and the number of times they cross a central square are recorded. In all tests, mice treated with nanoparticles and irradiation significantly outperformed untreated controls, with near comparable performance to that of healthy mice.

Visualizing the dopamine neurons via immunohistochemistry revealed a reduction in neurons in PFF-treated mice compared with controls. This loss was reversed following nanoparticle treatment. Safety assessments determined that the treatment did not cause biochemical toxicity and that the heat generated by the NIR-irradiated ATB nanoparticles did not cause any considerable damage to the dopamine neurons.

Eight weeks after treatment, none of the mice experienced any toxicities. The ATB nanoparticles remained stable in the substantia nigra, with only a few particles migrating to cerebrospinal fluid. The researchers also report that the particles did not migrate to the heart, liver, spleen, lung or kidney and were not found in blood, urine or faeces.

Chen tells Physics World that having discovered the neuroprotective properties of gold clusters in Parkinson’s disease models, the researchers are now investigating therapeutic strategies based on gold clusters. Their current research focuses on engineering multifunctional gold cluster nanocomposites capable of simultaneously targeting α-syn aggregation, mitigating oxidative stress and promoting dopamine neuron regeneration.

The study is reported in Science Advances.


  •  

How should scientists deal with politicians who don’t respect science?

Three decades ago – in May 1995 – the British-born mathematical physicist Freeman Dyson published an article in the New York Review of Books. Entitled “The scientist as rebel”, it described how all scientists have one thing in common. No matter what their background or era, they are rebelling against the restrictions imposed by the culture in which they live.

“For the great Arab mathematician and astronomer Omar Khayyam, science was a rebellion against the intellectual constraints of Islam,” Dyson wrote. Leading Indian physicists in the 20th century, he added, were rebelling against their British colonial rulers and the “fatalistic ethic of Hinduism”. Even Dyson traced his interest in science as an act of rebellion against the drudgery of compulsory Latin and football at school.

“Science is an alliance of free spirits in all cultures rebelling against the local tyranny that each culture imposes,” he wrote. Through those acts of rebellion, scientists expose “oppressive and misguided conceptions of the world”. The discovery of evolution and of DNA changed our sense of what it means to be human, he said, while black holes and Gödel’s theorem gave us new views of the universe and the nature of mathematics.

But Dyson feared that this view of science was being occluded. Writing in the 1990s, which was a time of furious academic debate about the “social construction of science”, he feared that science’s liberating role was becoming hidden by a cabal of sociologists and philosophers who viewed scientists as like any other humans, governed by social, psychological and political motives. Dyson didn’t disagree with that view, but underlined that nature is the ultimate arbiter of what’s important.

Today’s rebels

One wonders what Dyson, who died in 2020, would make of current events were he alive today. It’s no longer just a small band of academics disputing science. Its opponents also include powerful and highly placed politicians, who are tarring scientists and scientific findings for lacking objectivity and being politically motivated. Science, they say, is politics by other means. They then use that charge to justify ignoring or openly rejecting scientific findings when creating regulations and making decisions.

Thousands of researchers, for instance, contribute to efforts by the United Nations Intergovernmental Panel on Climate Change (IPCC) to measure the impact and consequences of the rising amounts of carbon dioxide in the atmosphere. Yet US President Donald Trump – speaking after Hurricane Helene left a trail of destruction across the south-east US last year – called climate change “one of the great scams”. Meanwhile, US chief justice John Roberts once rejected using mathematics to quantify the partisan effects of gerrymandering, calling it “sociological gobbledygook”.


These attitudes are not only anti-science but also undermine democracy by sidelining experts and dissenting voices, curtailing real debate, scapegoating and harming citizens.

A worrying precedent for how things may play out in the Trump administration occurred in 2012 when North Carolina’s legislators passed House Bill 819. By prohibiting the use of models of sea-level rise to protect people living near the coast from flooding, the bill damaged the ability of state officials to protect its coastline, resources and citizens. It also prevented other officials from fulfilling their duty to advise and protect people against threats to life and property.

In the current superheated US political climate, many scientific findings are charged with being agenda-driven rather than the outcomes of checked and peer-reviewed investigations. In the first Trump administration, bills were introduced in the US Congress to stop politicians from using science produced by the Department of Energy in policies to avoid admitting the reality of climate change.

We can expect more anti-scientific efforts, if the first Trump administration is anything to go by. Dyson’s rebel alliance, it seems, now faces not just posturing academics but a Galactic Empire.

The critical point

In his 1995 essay, Dyson described how scientists can be liberators by abstaining from political activity rather than militantly engaging in it. But how might he have seen them meeting this moment? Dyson would surely not see them turning away from their work to become politicians themselves. After all, it’s abstaining from politics that empowers scientists to be “in rebellion against the restrictions” in the first place. But Dyson would also see them as aware that science is not the driving force in creating policies; political implementation of scientific findings ultimately depends on politicians appreciating the authority and independence of these findings.

One of Trump’s most audacious “Presidential Actions”, made in the first week of his presidency, was to define sex. The action makes a female “a person belonging, at conception, to the sex that produces the large reproductive cell” and a male “a person belonging, at conception, to the sex that produces the small reproductive cell”. Trump ordered the government to use this “fundamental and incontrovertible reality” in all regulations.

An editorial in Nature (563 5) said that this “has no basis in science”, while cynics, citing certain biological interpretations that all human zygotes and embryos are initially effectively female, gleefully insisted that the order makes all of us female, including the new US president. For me and other Americans, Trump’s action restructures the world as it has been since Genesis.

Still, I imagine that Dyson would still see his rebels as hopeful, knowing that politicians don’t have the last word on what they are doing. For, while politicians can create legislation, they cannot legislate creation.

Sometimes rebels have to be stoic.


  •  

Scientists discover secret of ice-free polar-bear fur

In the teeth of the Arctic winter, polar-bear fur always remains free of ice – but how? Researchers in Ireland and Norway say they now have the answer, and it could have applications far beyond wildlife biology. Having traced the fur’s ice-shedding properties to a substance produced by glands near the root of each hair, the researchers suggest that chemicals found in this substance could form the basis of environmentally-friendly new anti-icing surfaces and lubricants.

The substance in the bear’s fur is called sebum, and team member Julian Carolan, a PhD candidate at Trinity College Dublin and the AMBER Research Ireland Centre, explains that it contains three major components: cholesterol, diacylglycerols and anteisomethyl-branched fatty acids. These chemicals have a similar ice adsorption profile to that of perfluoroalkyl (PFAS) polymers, which are commonly employed in anti-icing applications.

“While PFAS are very effective, they can be damaging to the environment and have been dubbed ‘forever chemicals’,” explains Carolan, the lead author of a Science Advances paper on the findings. “Our results suggest that we could replace these fluorinated substances with these sebum components.”

With and without sebum

Carolan and colleagues obtained these results by comparing polar bear hairs naturally coated with sebum to hairs where the sebum had been removed using a surfactant found in washing-up liquid. Their experiment involved forming a 2 x 2 x 2 cm block of ice on the samples and placing them in a cold chamber. Once the ice was in place, the team used a force gauge on a track to push it off. By measuring the maximum force needed to remove the ice and dividing this by the area of the sample, they obtained ice adhesion strengths for the washed and unwashed fur.

This experiment showed that the ice adhesion of unwashed polar bear fur is exceptionally low. While the often-accepted threshold for “icephobicity” is around 100 kPa, the unwashed fur measured as little as 50 kPa. In contrast, the ice adhesion of washed (sebum-free) fur is much higher, coming in at least 100 kPa greater than the unwashed fur.
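The arithmetic behind those adhesion values is simply the peak removal force divided by the 2 × 2 cm contact area of the ice block. The short sketch below illustrates the conversion; the 20 N force is back-calculated for illustration and is not a value reported by the team.

```python
def ice_adhesion_kpa(max_force_n: float, side_cm: float = 2.0) -> float:
    """Ice adhesion strength in kPa: peak shear force divided by the
    contact area of a square ice block with sides of side_cm."""
    area_m2 = (side_cm / 100.0) ** 2     # contact area in square metres
    return max_force_n / area_m2 / 1000  # convert Pa to kPa

# A peak force of 20 N over a 2 cm x 2 cm contact corresponds to 50 kPa,
# the adhesion strength quoted for unwashed, sebum-coated fur.
print(ice_adhesion_kpa(20.0))   # 50.0
```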

What is responsible for the low ice adhesion?

Guided by this evidence of sebum’s role in keeping the bears ice-free, the researchers’ next task was to determine its exact composition. They did this using a combination of techniques, including gas chromatography, mass spectrometry, liquid chromatography-mass spectrometry and nuclear magnetic resonance spectroscopy. They then used density functional theory methods to calculate the adsorption energy of the major components of the sebum. “In this way, we were able to identify which elements were responsible for the low ice adhesion we had identified,” Carolan tells Physics World.

This is not the first time that researchers have investigated animals’ anti-icing properties. A team led by Anne-Marie Kietzig at Canada’s McGill University, for example, previously found that penguin feathers also boast an impressively low ice adhesion. Team leader Bodil Holst says that she was inspired to study polar bear fur by a nature documentary that depicted the bears entering and leaving water to hunt, rolling around in the snow and sliding down hills – all while remaining ice-free. She and her colleagues collaborated with Jon Aars and Magnus Andersen of the Norwegian Polar Institute, which carries out a yearly polar bear monitoring campaign in Svalbard, Norway, to collect their samples.

Insights into human technology

As well as solving an ecological mystery and, perhaps, inspiring more sustainable new anti-icing lubricants, Carolan says the team’s work is also yielding insights into technologies developed by humans living in the Arctic. “Inuit people have long used polar bear fur for hunting stools (nikorfautaq) and sandals (tuterissat),” he explains. “It is notable that traditional preparation methods protect the sebum on the fur by not washing the hair-covered side of the skin. This maintains its low ice adhesion property while allowing for quiet movement on the ice – essential for still hunting.”

The researchers now plan to explore whether it is possible to apply the sebum components they identified to surfaces as lubricants. Another potential extension, they say, would be to pursue questions about the ice-free properties of other Arctic mammals such as reindeer, the arctic fox and wolverine. “It would be interesting to discover if these animals share similar anti-icing properties,” Carolan says. “For example, wolverine fur is used in parka ruffs by Canadian Inuit as frost formed on it can easily be brushed off.”


  •  

Inverse design configures magnon-based signal processor

For the first time, inverse design has been used to engineer specific functionalities into a universal spin-wave-based device. It was created by Andrii Chumak and colleagues at Austria’s University of Vienna, who hope that their magnonic device could pave the way for substantial improvements to the energy efficiency of data processing techniques.

Inverse design is a fast-growing technique for developing new materials and devices that are specialized for highly specific uses. Starting from a desired functionality, inverse-design algorithms work backwards to find the best system or structure to achieve that functionality.

“Inverse design has a lot of potential because all we have to do is create a highly reconfigurable medium, and give it control over a computer,” Chumak explains. “It will use algorithms to get any functionality we want with the same device.”

One area where inverse design could be useful is creating systems for encoding and processing data using quantized spin waves called magnons. These quasiparticles are collective excitations that propagate in magnetic materials. Information can be encoded in the amplitude, phase, and frequency of magnons – which interact with radio-frequency (RF) signals.

Collective rotation

A magnon propagates through the collective precession of spins that themselves stay put (no particles are transported), so it offers a highly energy-efficient way to transfer and process information. So far, however, magnonics has been limited by existing approaches to the design of RF devices.

“Usually we use direct design – where we know how the spin waves behave in each component, and put the components together to get a working device,” Chumak explains. “But this sometimes takes years, and only works for one functionality.”

Recently, two theoretical studies considered how inverse design could be used to create magnonic devices. These took the physics of magnetic materials as a starting point to engineer a neural-network device.

Building on these results, Chumak’s team set out to show how that approach could be realized in the lab using a 7×7 array of independently controlled current loops, each generating a small magnetic field.

Thin magnetic film

The team attached the array to a thin magnetic film of yttrium iron garnet. As RF spin waves propagated through the film, differences in the strengths of the magnetic fields generated by the loops induced a variety of effects, including phase shifts, interference and scattering. This in turn created complex patterns that could be tuned in real time by adjusting the current in each individual loop.

To make these adjustments, the researchers developed a pair of feedback-loop algorithms. These took a desired functionality as an input, and iteratively adjusted the current in each loop to optimize the spin wave propagation in the film for specific tasks.

This approach enabled them to engineer two specific signal-processing functionalities in their device. These are a notch filter, which blocks a specific range of frequencies while allowing others to pass through; and a demultiplexer, which separates a combined signal into its distinct component signals. “These RF applications could potentially be used for applications including cellular communications, WiFi, and GPS,” says Chumak.
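
To give a flavour of how such a feedback loop can work in practice, the sketch below runs a greedy search over a 7×7 array of loop currents. It is a minimal illustration rather than the Vienna team’s algorithm: the figure of merit is a synthetic stand-in for a real measurement of the device response, such as the rejection in a notch band or the isolation between demultiplexer ports.

```python
import numpy as np

# Minimal sketch of an inverse-design feedback loop, assuming a greedy
# coordinate search -- this is not the Vienna team's actual algorithm. In the
# real device the figure of merit would come from measuring the spin-wave
# response; here a synthetic stand-in is used so the sketch runs on its own.

rng = np.random.default_rng(0)
target = rng.normal(size=(7, 7))        # synthetic "ideal" current pattern

def figure_of_merit(currents: np.ndarray) -> float:
    """Synthetic stand-in for a measured device response (higher is better)."""
    return -float(np.sum((currents - target) ** 2))

def optimise(currents: np.ndarray, n_iters: int = 5000, step: float = 0.05) -> np.ndarray:
    """Perturb one loop current at a time and keep only improving changes."""
    best = figure_of_merit(currents)
    for _ in range(n_iters):
        i, j = rng.integers(7), rng.integers(7)
        trial = currents.copy()
        trial[i, j] += rng.choice([-step, step])
        score = figure_of_merit(trial)
        if score > best:
            currents, best = trial, score
    return currents

loops = optimise(np.zeros((7, 7)))      # optimized 7x7 current pattern
print(f"remaining error: {-figure_of_merit(loops):.3f}")
```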

While the device is a success in terms of functionality, it has several drawbacks, explains Chumak. “The demonstrator is big and consumes a lot of energy, but it was important to understand whether this idea works or not. And we proved that it did.”

Through their future research, the team will now aim to reduce these energy requirements, and will also explore how inverse design could be applied more universally – perhaps paving the way for ultra-efficient magnonic logic gates.

The research is described in Nature Electronics.

The post Inverse design configures magnon-based signal processor appeared first on Physics World.

  •  

The muon’s magnetic moment exposes a huge hole in the Standard Model – unless it doesn’t

A tense particle-physics showdown will reach new heights in 2025. Over the past 25 years researchers have seen a persistent and growing discrepancy between the theoretical predictions and experimental measurements of an inherent property of the muon – its anomalous magnetic moment. Known as the “muon g-2”, this property serves as a robust test of our understanding of particle physics.

Theoretical predictions of the muon g-2 are based on the Standard Model of particle physics (SM). This is our current best theory of fundamental forces and particles, but it does not agree with everything observed in the universe. While the tensions between g-2 theory and experiment have challenged the foundations of particle physics and potentially offer a tantalizing glimpse of new physics beyond the SM, it turns out that there is more than one way to make SM predictions.

In recent years, a new SM prediction of the muon g-2 has emerged that questions whether the discrepancy exists at all, suggesting that there is no new physics in the muon g-2. For the particle-physics community, the stakes are higher than ever.

Rising to the occasion?

To understand how this discrepancy in the value of the muon g-2 arises, imagine you’re baking some cupcakes. A well-known and trusted recipe tells you that by accurately weighing the ingredients using your kitchen scales you will make enough batter to give you 10 identical cupcakes of a given size. However, to your surprise, after portioning out the batter, you end up with 11 cakes of the expected size instead of 10.

What has happened? Maybe your scales are imprecise. You check and find that you’re confident that your measurements are accurate to 1%. This means each of your 10 cupcakes could be 1% larger than they should be, or you could have enough leftover mixture to make 1/10th of an extra cupcake, but there’s no way you should have a whole extra cupcake.

You repeat the process several times, always with the same outcome. The recipe clearly states that you should have batter for 10 cupcakes, but you always end up with 11. Not only do you now have a worrying number of cupcakes to eat but, thanks to all your repeated experiments, you’re more confident that you are following all the steps and measurements accurately. You start to wonder whether something is missing from the recipe itself.

Before you jump to conclusions, it’s worth checking that there isn’t something systematically wrong with your scales. You ask several friends to follow the same recipe using their own scales. Amazingly, when each friend follows the recipe, they all end up with 11 cupcakes. You are more sure than ever that the cupcake recipe isn’t quite right.

You’re really excited now, as you have corroborating evidence that something is amiss. This is unprecedented, as the recipe is considered sacrosanct. Cupcakes have never been made differently and if this recipe is incomplete there could be other, larger implications. What if all cake recipes are incomplete? These claims are causing a stir, and people are starting to take notice.

Close-up of weighing scale with small cakes on top
Food for thought Just as a trusted cake recipe can be relied on to produce reliable results, so the Standard Model has been incredibly successful at predicting the behaviour of fundamental particles and forces. However, there are instances where the Standard Model breaks down, prompting scientists to hunt for new physics that will explain this mystery. (Courtesy: iStock/Shutter2U)

Then, a new friend comes along and explains that they checked the recipe by simulating baking the cupcakes using a computer. This approach doesn’t need physical scales, but it uses the same recipe. To your shock, the simulation produces 11 cupcakes of the expected size, with a precision as good as when you baked them for real.

There is no explaining this. You were certain that the recipe was missing something crucial, but now a computer simulation is telling you that the recipe has always predicted 11 cupcakes.

Of course, one extra cupcake isn’t going to change the world. But what if instead of cake, the recipe was particle physics’ best and most-tested theory of everything, and the ingredients were the known particles and forces? And what if the number of cupcakes was a measurable outcome of those particles interacting, one hurtling towards a pivotal bake-off between theory and experiment?

What is the muon g-2?

Muons are elementary particles in the SM that have half-integer spin; they are similar to electrons but some 207 times heavier. Muons interact directly with other SM particles via electromagnetism (photons) and the weak force (W and Z bosons, and the Higgs particle). All quarks and leptons – such as electrons and muons – have a magnetic moment due to their intrinsic angular momentum or “spin”. Quantum theory dictates that the magnetic moment is related to the spin by a quantity known as the “g-factor”. Initially, this value was predicted to be exactly g = 2 for both the electron and the muon.

However, these calculations did not take into account the effects of “radiative corrections” – the continuous emission and re-absorption of short-lived “virtual particles” (see box) by the electron or muon – which increase g by about 0.1%. This seemingly minute difference is captured by the anomalous g-factor, aµ = (g – 2)/2. As well as the electromagnetic and weak interactions, the muon’s magnetic moment also receives contributions from the strong force, even though the muon does not itself participate in strong interactions. The strong contributions arise through the muon’s interaction with the photon, which in turn interacts with quarks. The quarks then themselves interact via the strong-force mediator, the gluon.
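
To put a number on that 0.1% shift, the snippet below evaluates Schwinger’s leading-order QED correction, a = α/2π, which dominates the anomaly; the higher-order QED, weak and hadronic contributions discussed in this article are deliberately left out of this toy calculation.

```python
import math

# Worked example of the quantities defined above. Schwinger's leading-order
# QED correction is a = alpha/(2*pi), the same for electron and muon at this
# order; the higher-order QED, weak and hadronic terms that make the muon g-2
# so hard to predict are not included in this toy calculation.

alpha = 1 / 137.035999            # fine-structure constant (approximate)
a_schwinger = alpha / (2 * math.pi)
g = 2 * (1 + a_schwinger)         # from a = (g - 2)/2

print(f"a = alpha/(2*pi) = {a_schwinger:.7f}")   # ~0.0011614, i.e. ~0.1% of g/2
print(f"g = {g:.6f}")                            # ~2.002323
```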

This effect, and any discrepancies, are of particular interest to physicists because the g-factor acts as a probe of the existence of other particles – both known particles such as electrons and photons, and other, as yet undiscovered, particles that are not part of the SM.

“Virtual” particles

Illustration of subatomic particles in the Standard Model
(Courtesy: CERN)

The Standard Model of particle physics (SM) describes the basic building blocks – the particles and forces – of our universe. It includes the elementary particles – quarks and leptons – that make up all known matter as well as the force-carrying particles, or bosons, that influence the quarks and leptons. The SM also explains three of the four fundamental forces that govern the universe – electromagnetism, the strong force and the weak force. Gravity, however, is not adequately explained within the model.

“Virtual” particles arise from the universe’s underlying, non-zero background energy, known as the vacuum energy. Heisenberg’s uncertainty principle, in its energy–time form, allows the energy of empty space to fluctuate: “something” can arise from “nothing”, provided the “something” returns to “nothing” within a very short interval – before it can be observed. Therefore, at every point in space and time, virtual particles are rapidly created and annihilated.

The “g-factor” in muon g-2 represents the total value of the magnetic moment of the muon, including all corrections from the vacuum. If there were no virtual interactions, the muon’s g-factor would be exactly g = 2. The first confirmation of g > 2 came in 1948 when Julian Schwinger calculated the simplest contribution from a virtual photon interacting with an electron (Phys. Rev. 73 416). His famous result explained a measurement from the same year that found the electron’s g-factor to be slightly larger than 2 (Phys. Rev. 74 250). This confirmed the existence of virtual particles and paved the way for the invention of relativistic quantum field theories like the SM.

The muon, the (lighter) electron and the (heavier) tau lepton all have an anomalous magnetic moment.  However, because the muon is heavier than the electron, the impact of heavy new particles on the muon g-2 is amplified. While tau leptons are even heavier than muons, tau leptons are extremely short-lived (muons have a lifetime of 2.2 μs, while the lifetime of tau leptons is 0.29 ns), making measurements impracticable with current technologies. Neither too light nor too heavy, the muon is the perfect tool to search for new physics.

New physics beyond the Standard Model (commonly known as BSM physics) is sorely needed because, despite its many successes, the SM does not provide the answers to all that we observe in the universe, such as the existence of dark matter. “We know there is something beyond the predictions of the Standard Model, we just don’t know where,” says Patrick Koppenburg, a physicist at the Dutch National Institute for Subatomic Physics (Nikhef) in the Netherlands, who works on the LHCb Experiment at CERN and on future collider experiments. “This new physics will provide new particles that we haven’t observed yet. The LHC collider experiments are actively searching for such particles but haven’t found anything to date.”

Testing the Standard Model: experiment vs theory

In 2021 the Muon g-2 experiment at Fermilab in the US captured the world’s attention with the release of its first result (Phys. Rev. Lett. 126 141801). It had directly measured the muon g-2 to an unprecedented precision of 460 parts per billion (ppb). While the LHC experiments attempt to produce and detect BSM particles directly, the Muon g-2 experiment takes a different, complementary approach – it compares precision measurements of particles with SM predictions to expose discrepancies that could be due to new physics. In the Muon g-2 experiment, muons travel round and round a circular ring, confined by a strong magnetic field. In this field, the muons precess like spinning tops (see image at the top of this article). The frequency of this precession is proportional to the anomalous magnetic moment, and it can be extracted by detecting where and when the muons decay.

The Muon g-2 experiment
Magnetic muons The Muon g-2 experiment at the Fermi National Accelerator Laboratory. (Courtesy: Reidar Hahn/Fermilab, US Department of Energy)

Muon g-2 is an awe-inspiring feat of science and engineering, involving more than 200 scientists from 35 institutions in seven countries. Having served as the experiment’s manager and run co-ordinator, I have been involved in both its operation and the analysis of its results. “A lot of my favourite memories from g-2 are ‘firsts’,” says Saskia Charity, a researcher at the University of Liverpool in the UK and a principal analyser of the Muon g-2 experiment’s results. “The first time we powered the magnet; the first time we stored muons and saw particles in the detectors; and the first time we released a result in 2021.”

The Muon g-2 result turned heads because the measured value was significantly higher than the best SM prediction (at that time) of the muon g-2 (Phys. Rep. 887 1). This SM prediction was the culmination of years of collaborative work by the Muon g-2 Theory Initiative, an international consortium of roughly 200 theoretical physicists (myself among them). In 2020 the collaboration published one community-approved number for the muon g-2. This value had a precision comparable to the Fermilab experiment – resulting in a deviation between the two that has a chance of 1 in 40,000 of being a statistical fluke  – making the discrepancy all the more intriguing.
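
For readers who want to relate that “1 in 40,000” figure to the significances usually quoted in standard deviations, the generic sketch below converts between the two using the standard two-sided Gaussian convention; it is not part of the g-2 analysis itself.

```python
import math

# Generic statistics sketch (not part of the g-2 analysis): convert a
# significance quoted in standard deviations into a "1 in N" fluke
# probability, using the standard two-sided Gaussian convention.

def two_sided_p(sigma: float) -> float:
    """Probability of a Gaussian fluctuation at least `sigma` deviations away."""
    return math.erfc(sigma / math.sqrt(2))

for sigma in (3.0, 4.0, 4.2, 5.0):
    p = two_sided_p(sigma)
    print(f"{sigma:.1f} sigma -> about 1 in {1/p:,.0f}")
# 4.2 sigma corresponds to roughly 1 in 40,000 under this convention, while
# 5 sigma -- the usual "discovery" threshold -- comes out near 1 in 1.7 million
# (the often-quoted 1 in 3.5 million figure uses the one-sided convention).
```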

While much of the SM prediction, including contributions from virtual photons and leptons, can be calculated from first principles alone, the strong force contributions involving quarks and gluons are more difficult. However, there is a mathematical link between the strong force contributions to muon g-2 and the probability of experimentally producing hadrons (composite particles made of quarks) from electron–positron annihilation. These so-called “hadronic processes” are something we can observe with existing particle colliders; much like weighing cupcake ingredients, these measurements determine how much each hadronic process contributes to the SM correction to the muon g-2. This is the approach used to calculate the 2020 result, producing what is called a “data-driven” prediction.

Measurements were performed at many experiments, including the BaBar Experiment at the Stanford Linear Accelerator Center (SLAC) in the US, the BESIII Experiment at the Beijing Electron–Positron Collider II in China, the KLOE Experiment at the DAFNE Collider in Italy, and the SND and CMD-2 experiments at the VEPP-2000 electron–positron collider in Russia. These different experiments measured a complete catalogue of hadronic processes in different ways over several decades. Other members of the Muon g-2 Theory Initiative and I combined these findings to produce the data-driven SM prediction of the muon g-2. There was (and still is) strong, corroborating evidence that this SM prediction is reliable.

At the time, this discrepancy appeared to indicate, with a high level of confidence, the existence of new physics. It seemed more likely than ever that BSM physics had finally been detected in a laboratory.

1 Eyes on the prize

Chart of muon g-2 results from 5 different experiments
(Courtesy: Muon g-2 collaboration/IOP Publishing)

Over the last two decades, direct experimental measurements of the muon g-2 have become much more precise. The predecessor to the Fermilab experiment was based at Brookhaven National Laboratory in the US, and when that experiment ended, the magnetic ring in which the muons are confined was transported to its current home at Fermilab.

That was until the release of the first SM prediction of the muon g-2 using an alternative method called lattice QCD (Nature 593 51). Like the data-driven prediction, lattice QCD is a way to tackle the tricky hadronic contributions, but it doesn’t use experimental results as a basis for the calculation. Instead, it treats the universe as a finite box containing a grid of points (a lattice) that represent points in space and time. Virtual quarks and gluons are simulated inside this box, and the results are extrapolated to a universe of infinite size and continuous space and time. This method requires a huge amount of computer power to arrive at an accurate, physical result but it is a powerful tool that directly simulates the strong-force contributions to the muon g-2.

The researchers who published this new result are also part of the Muon g-2 Theory Initiative. Several other groups within the consortium have since published lattice QCD calculations, producing values for g-2 that are in good agreement with each other and the experiment at Fermilab. “Striking agreement, to better than 1%, is seen between results from multiple groups,” says Christine Davies of the University of Glasgow in the UK, a member of the High-precision lattice QCD (HPQCD) collaboration within the Muon g-2 Theory Initiative. “A range of methods have been developed to improve control of uncertainties meaning further, more complete, lattice QCD calculations are now appearing. The aim is for several results with 0.5% uncertainty in the near future.”

If these lattice QCD predictions are the true SM value, there is no muon g-2 discrepancy between experiment and theory. However, this would conflict with the decades of experimental measurements of hadronic processes that were used to produce the data-driven SM prediction.

To make the situation even more confusing, a new experimental measurement of the muon g-2’s dominant hadronic process was released in 2023 by the CMD-3 experiment (Phys. Rev. D 109 112002). This result is significantly larger than all the other, older measurements of the same process, including its own predecessor experiment, CMD-2 (Phys. Lett. B 648 28). With this new value, the data-driven SM prediction of aµ = (g – 2)/2 is in agreement with the Muon g-2 experiment and lattice QCD. Over the last few years, the CMD-3 measurements (and all older measurements) have been scrutinized in great detail, but the source of the difference between the measurements remains unknown.

2 Which Standard Model?

Chart of the Muon g-2 experiment results versus the various Standard Model predictions
(Courtesy: Alex Keshavarzi/IOP Publishing)

Summary of the four values of the anomalous magnetic moment of the muon aμ that have been obtained from different experiments and models. The 2020 and CMD-3 predictions were both obtained using a data-driven approach. The lattice QCD value is a theoretical prediction and the Muon g-2 experiment value was measured at Fermilab in the US. The positions of the points with respect to the y axis have been chosen for clarity only.

Since then, the Muon g-2 experiment at Fermilab has confirmed and improved on that first result to a precision of 200 ppb (Phys. Rev. Lett. 131 161802). “Our second result based on the data from 2019 and 2020 has been the first step in increasing the precision of the magnetic anomaly measurement,” says Peter Winter of Argonne National Laboratory in the US and co-spokesperson for the Muon g-2 experiment.

The new result is in full agreement with the SM predictions from lattice QCD and the data-driven prediction based on CMD-3’s measurement. However, with the increased precision, it now disagrees with the 2020 SM prediction by even more than in 2021.

The community therefore faces a conundrum: the muon g-2 either points to a much-needed discovery of BSM physics or represents a remarkable, multi-method confirmation of the Standard Model.

On your marks, get set, bake!

In 2025 the Muon g-2 experiment at Fermilab will release its final result. “It will be exciting to see our final result for g-2 in 2025 that will lead to the ultimate precision of 140 parts-per-billion,” says Winter. “This measurement of g-2 will be a benchmark result for years to come for any extension to the Standard Model of particle physics.” Assuming this agrees with the previous results, it will further widen the discrepancy with the 2020 data-driven SM prediction.

For the lattice QCD SM prediction, the many groups calculating the muon’s anomalous magnetic moment have since corroborated and improved the precision of the first lattice QCD result. Their next task is to combine the results from the various lattice QCD predictions to arrive at one SM prediction from lattice QCD. While this is not a trivial task, the agreement between the groups means a single lattice QCD result with improved precision is likely within the next year, increasing the tension with the 2020 data-driven SM prediction.

New, robust experimental measurements of the muon g-2’s dominant hadronic processes are also expected over the next couple of years. The previous experiments will update their measurements with more precise results and a newcomer measurement is expected from the Belle-II experiment in Japan. It is hoped that they will confirm either the catalogue of older hadronic measurements or the newer CMD-3 result. Should they confirm the older data, the potential for new physics in the muon g-2 lives on, but the discrepancy with the lattice QCD predictions will still need to be investigated. If the CMD-3 measurement is confirmed, it is likely the older data will be superseded, and the muon g-2 will have once again confirmed the Standard Model as the best and most resilient description of the fundamental nature of our universe.

Large group of people stood holding a banner that says Muon g-2
International consensus The Muon g-2 Theory Initiative pictured at their seventh annual plenary workshop at the KEK Laboratory, Japan in September 2024. (Courtesy: KEK-IPNS)

The task before the Muon g-2 Theory Initiative is to solve these dilemmas and update the 2020 data-driven SM prediction. Two new publications are planned. The first will be released in 2025 (to coincide with the new experimental result from Fermilab). This will describe the current status and ongoing body of work, but a full, updated SM prediction will have to wait for the second paper, likely to be published several years later.

It’s going to be an exciting few years. Being part of both the experiment and the theory means I have been privileged to see the process from both sides. For the SM prediction, much work is still to be done but science with this much at stake cannot be rushed and it will be fascinating work. I’m looking forward to the journey just as much as the outcome.

The post The muon’s magnetic moment exposes a huge hole in the Standard Model – unless it doesn’t appeared first on Physics World.

  •  

Low-temperature plasma halves cancer recurrence in mice

Treatment with low-temperature plasma is emerging as a novel cancer therapy. Previous studies have shown that plasma can deactivate cancer cells in vitro, suppress tumour growth in vivo and potentially induce anti-tumour immunity. Researchers at the University of Tokyo are investigating another promising application – the use of plasma to inhibit tumour recurrence after surgery.

Lead author Ryo Ono and colleagues demonstrated that treating cancer resection sites with streamer discharge – a type of low-temperature atmospheric plasma – significantly reduced the recurrence rate of melanoma tumours in mice.

“We believe that plasma is more effective when used as an adjuvant therapy rather than as a standalone treatment, which led us to focus on post-surgical treatment in this study,” says Ono.

In vivo experiments

To create the streamer discharge, the team applied high-voltage pulses (25 kV amplitude, 20 ns duration, 100 pulses/s) to a 3 mm-diameter rod electrode with a hemispherical tip. The rod was placed in a quartz tube with a 4 mm inner diameter, and the working gas – humid oxygen mixed with ambient air – was flowed through the tube. As electrons in the plasma collide with molecules in the gas, the mixture generates cytotoxic reactive oxygen and nitrogen species.

The researchers performed three experiments on mice with melanoma, a skin cancer with a local recurrence rate of up to 10%. In the first experiment, they injected 11 mice with mouse melanoma cells, resecting the resulting tumours eight days later. They then treated five of the mice with streamer discharge for 10 min, with each mouse placed on a grounded plate and the electrode tip held 10 mm above the resection site.
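
A quick back-of-the-envelope calculation using only the pulse parameters and treatment time quoted above gives a sense of the delivered dose (the energy per pulse is not reported here, so it is not estimated):

```python
# Simple dose arithmetic implied by the parameters quoted above: a 10-minute
# treatment at 100 pulses per second, each pulse lasting 20 ns. The energy
# delivered per pulse is not reported here, so it is not estimated.

pulse_rate = 100            # pulses per second
pulse_width = 20e-9         # pulse duration in seconds (20 ns)
treatment_time = 10 * 60    # treatment duration in seconds (10 min)

n_pulses = pulse_rate * treatment_time
on_time_ms = n_pulses * pulse_width * 1e3

print(f"{n_pulses:,} pulses in total")                            # 60,000 pulses
print(f"{on_time_ms:.1f} ms of cumulative high-voltage on-time")  # 1.2 ms
```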

Experimental setup for plasma generation
Experimental setup Streamer discharge generation and treatment. (Courtesy: J. Phys. D: Appl. Phys. 10.1088/1361-6463/ada98c)

Tumour recurrence occurred in five of the six control mice (no plasma treatment) and two of the five plasma-treated mice, corresponding to recurrence rates of 83% and 40%, respectively. In a second experiment with the same parameters, recurrence rates were 44% in nine control mice and 25% in eight plasma-treated mice.

In a third experiment, the researchers delayed the surgery until 12 days after cell injection, increasing the size of the tumour before resection. This led to a 100% recurrence rate in the control group of five mice. Only one recurrence was seen in five plasma-treated mice, although one mouse that died of unknown causes was counted as a recurrence, resulting in a recurrence rate of 40%.

All of the experiments showed that plasma treatment reduced the recurrence rate by roughly 50%. The researchers note that the plasma treatment did not affect the animals’ overall health.

Cytotoxic mechanisms

To further confirm the cytotoxicity of streamer discharge, Ono and colleagues treated cultured melanoma cells for between 0 and 250 s, at an electrode–surface distance of 10 mm. The cells were then incubated for 3, 6 or 24 h. Following plasma treatments of up to 100 s, most cells were still viable 24 h later. But between 100 and 150 s of treatment, the cell survival rate decreased rapidly.

The experiment also revealed a rapid transition from apoptosis (natural programmed cell death) to late apoptosis/necrosis (cell death due to external toxins) between 3 and 24 h post-treatment. Indeed, 24 h after a 150 s plasma treatment, 95% of the dead cells were in the late stages of apoptosis/necrosis. This finding suggests that the observed cytotoxicity may arise from direct induction of apoptosis and necrosis, combined with inhibition of cell growth at extended time points.

In a previous experiment, the researchers used streamer discharge to treat tumours in mice before resection. This treatment delayed tumour regrowth by at least six days, but all mice still experienced local recurrence. In contrast, in the current study, plasma treatment reduced the recurrence rate.

The difference may be due to different mechanisms by which plasma inhibits tumour recurrence: cytotoxic reactive species killing residual cancer cells at the resection site; or reactive species triggering immunogenic cell death. The team note that either or both of these mechanisms may be occurring in the current study.

“Initially, we considered streamer discharge as the main contributor to the therapeutic effect, as it is the primary source of highly reactive short-lived species,” explains Ono. “However, recent experiments suggest that the discharge within the quartz tube also generates a significant amount of long-lived reactive species (with lifetimes typically exceeding 0.1 s), which may contribute to the therapeutic effect.”

One advantage of the streamer discharge device is that it uses only room air and oxygen, without requiring the noble gases employed in other cold atmospheric plasmas. “Additionally, since different plasma types generate different reactive species, we hypothesized that streamer discharge could produce a unique therapeutic effect,” says Ono. “Conducting in vivo experiments with different plasma sources will be an important direction for future research.”

Looking ahead to use in the clinic, Ono believes that the low cost of the device and its operation should make it feasible to use plasma treatment immediately after tumour resection to reduce recurrence risk. “Currently, we have only obtained preliminary results in mice,” he tells Physics World. “Clinical application remains a long-term goal.”

The study is reported in Journal of Physics D: Applied Physics.

The post Low-temperature plasma halves cancer recurrence in mice appeared first on Physics World.

  •  

Ultra-high-energy neutrino detection opens a new window on the universe

Using an observatory located deep beneath the Mediterranean Sea, an international team has detected an ultra-high-energy cosmic neutrino with an energy greater than 100 PeV, which is well above the previous record. Made by the KM3NeT neutrino observatory, such detections could enhance our understanding of cosmic neutrino sources or reveal new physics.

“We expect neutrinos to originate from very powerful cosmic accelerators that also accelerate other particles, but which have never been clearly identified in the sky. Neutrinos may provide the opportunity to identify these sources,” explains Paul de Jong, a professor at the University of Amsterdam and spokesperson for the KM3NeT collaboration. “Apart from that, the properties of neutrinos themselves have not been studied as well as those of other particles, and further studies of neutrinos could open up possibilities to detect new physics beyond the Standard Model.”

Neutrinos are subatomic particles with masses less than a millionth that of the electron. They are electrically neutral and interact rarely with matter via the weak force. As a result, neutrinos can travel vast cosmic distances without being deflected by magnetic fields or being absorbed by interstellar material. “[This] makes them very good probes for the study of energetic processes far away in our universe,” de Jong explains.

Scientists expect high-energy neutrinos to come from powerful astrophysical accelerators – objects that are also expected to produce high-energy cosmic rays and gamma rays. These objects include active galactic nuclei powered by supermassive black holes, gamma-ray bursts, and other extreme cosmic events. However, pinpointing such accelerators remains challenging because their cosmic rays are deflected by magnetic fields as they travel to Earth, while their gamma rays can be absorbed on their journey. Neutrinos, however, move in straight lines and this makes them unique messengers that could point back to astrophysical accelerators.

Underwater detection

Because they rarely interact, neutrinos are studied using large-volume detectors. The largest observatories use natural environments such as deep water or ice, which are shielded from most background noise including cosmic rays.

The KM3NeT observatory is situated on the Mediterranean seabed, with detectors more than 2000 m below the surface. Occasionally, a high-energy neutrino will collide with a water molecule, producing a secondary charged particle. This particle moves faster than the speed of light in water, creating a faint flash of Cherenkov radiation. The detector’s array of optical sensors captures these flashes, allowing researchers to reconstruct the neutrino’s direction and energy.
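
The geometry of that flash follows from the Cherenkov condition: light is emitted only when the particle’s speed exceeds c/n, on a cone of half-angle θc with cos θc = 1/(nβ). The sketch below evaluates these numbers for an assumed, representative refractive index of n ≈ 1.35 for deep seawater:

```python
import math

# Minimal numerical illustration of the Cherenkov condition described above:
# a charged particle radiates only when its speed exceeds c/n, and the light
# is emitted on a cone of half-angle theta_c with cos(theta_c) = 1/(n*beta).
# n = 1.35 is an assumed, representative refractive index for deep seawater.

n = 1.35
beta_threshold = 1 / n                           # minimum v/c for Cherenkov light
theta_c = math.degrees(math.acos(1 / n))         # cone angle in the beta -> 1 limit

print(f"threshold speed: {beta_threshold:.3f} c")          # ~0.741 c
print(f"Cherenkov angle for beta ~ 1: {theta_c:.1f} deg")  # ~42 deg
```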

KM3NeT has already identified many high-energy neutrinos, but in 2023 it detected a neutrino with an energy far in excess of any previously detected cosmic neutrino. Now, analysis by de Jong and colleagues puts this neutrino’s energy at about 30 times higher than that of the previous record-holder, which was spotted by the IceCube observatory at the South Pole. “It is a surprising and unexpected event,” he says.

Scientists suspect that such a neutrino could originate from the most powerful cosmic accelerators, such as blazars. The neutrino could also be cosmogenic, being produced when ultra-high-energy cosmic rays interact with the cosmic microwave background radiation.

New class of astrophysical messengers

While this single neutrino has not been traced back to a specific source, it opens the possibility of studying ultra-high-energy neutrinos as a new class of astrophysical messengers. “Regardless of what the source is, our event is spectacular: it tells us that either there are cosmic accelerators that result in these extreme energies, or this could be the first cosmogenic neutrino detected,” de Jong noted.

Neutrino experts not associated with KM3NeT agree on the significance of the observation. Elisa Resconi at the Technical University of Munich tells Physics World, “This discovery confirms that cosmic neutrinos extend to unprecedented energies, suggesting that somewhere in the universe, extreme astrophysical processes – or even exotic phenomena like decaying dark matter – could be producing them”.

Francis Halzen at the University of Wisconsin-Madison, who is IceCube’s principal investigator, adds, “Observing neutrinos with a million times the energy of those produced at Fermilab (ten million for the KM3NeT event!) is a great opportunity to reveal the physics beyond the Standard Model associated with neutrino mass.”

With ongoing upgrades to KM3NeT and other neutrino observatories, scientists hope to detect more of these rare but highly informative particles, bringing them closer to answering fundamental questions in astrophysics.

Resconi explains, “With a global network of neutrino telescopes, we will detect more of these ultrahigh-energy neutrinos, map the sky in neutrinos, and identify their sources. Once we do, we will be able to use these cosmic messengers to probe fundamental physics in energy regimes far beyond what is possible on Earth.”

The observation is described in Nature.

The post Ultra-high-energy neutrino detection opens a new window on the universe appeared first on Physics World.

  •  

Threads of fire: uncovering volcanic secrets with Pele’s hair and tears

Volcanoes are awe-inspiring beasts. They spew molten rivers, towering ash plumes, and – in rarer cases – delicate glassy formations known as Pele’s hair and Pele’s tears. These volcanic materials, named after the Hawaiian goddess of volcanoes and fire, are the focus of the latest Physics World Stories podcast, featuring volcanologists Kenna Rubin (University of Rhode Island) and Tamsin Mather (University of Oxford).

Pele’s hair is striking: fine, golden filaments of volcanic glass that shimmer like spider silk in the sunlight. Formed when lava is ejected explosively and rapidly stretched into thin strands, these fragile fibres range from 1 to 300 µm thick – similar to human hair. Meanwhile, Pele’s tears – small, smooth droplets of solidified lava – can preserve tiny bubbles of volcanic gases within themselves, trapped in cavities.

These materials are more than just geological curiosities. By studying their structure and chemistry, researchers can infer crucial details about past eruptions. Understanding these “fossil” samples provides insights into the history of volcanic activity and its role in shaping planetary environments.

Rubin and Mather describe what it’s like working in extreme volcanic landscapes. One day, you might be near the molten slopes of active craters, and then on another trip you could be exploring the murky depths of underwater eruptions via deep-sea research submersibles like Alvin.

For a deeper dive into Pele’s hair and tears, listen to the podcast and explore our recent Physics World feature on the subject.

The post Threads of fire: uncovering volcanic secrets with Pele’s hair and tears appeared first on Physics World.

  •  

Modelling the motion of confined crowds could help prevent crushing incidents

Researchers led by Denis Bartolo, a physicist at the École Normale Supérieure (ENS) of Lyon, France, have constructed a theoretical model that forecasts the movements of confined, densely packed crowds. The study could help predict potentially life-threatening crowd behaviour in confined environments. 

To investigate what makes some confined crowds safe and others dangerous, Bartolo and colleagues – also from the Université Claude Bernard Lyon 1 in France and the Universidad de Navarra in Pamplona, Spain – studied the Chupinazo opening ceremony of the San Fermín Festival in Pamplona in four different years (2019, 2022, 2023 and 2024).

The team analysed high-resolution video captured from two locations above the gathering of around 5000 people as the crowd grew in the 50 × 20 m city plaza: swelling from two to six people per square metre, and ultimately peaking at local densities of nine per square metre. A machine-learning algorithm enabled automated detection of the position of each person’s head, from which the localized crowd density was then calculated.
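
The sketch below illustrates the kind of local-density map that such head tracking makes possible; it is not the authors’ pipeline, and the head positions are synthetic random coordinates used purely for illustration:

```python
import numpy as np

# Sketch of the kind of local-density estimate such head tracking enables --
# not the authors' pipeline. Detected head positions (replaced here by
# synthetic random coordinates) are binned into 1 m x 1 m cells across the
# 50 m x 20 m plaza, giving a density map in people per square metre.

rng = np.random.default_rng(1)
n_people = 5000
heads_x = rng.uniform(0, 50, n_people)     # synthetic x positions (m)
heads_y = rng.uniform(0, 20, n_people)     # synthetic y positions (m)

density, _, _ = np.histogram2d(heads_x, heads_y,
                               bins=[50, 20], range=[[0, 50], [0, 20]])

print(f"mean density: {density.mean():.1f} people/m^2")     # 5.0 for 5000 people
print(f"peak local density: {density.max():.0f} people/m^2")
```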

“The Chupinazo is an ideal experimental platform to study the spontaneous motion of crowds, as it repeats from one year to the next with approximately the same amount of people, and the geometry of the plaza remains the same,” says theoretical physicist Benjamin Guiselin, a study co-author formerly from ENS Lyon and now at the Université de Montpellier.

In a first for crowd studies, the researchers treated the densely packed crowd as a continuum like water, and “constructed a mechanics theory for the crowd movement without making any behavioural assumptions on the motion of individuals,” Guiselin tells Physics World.

Their studies, recently described in Nature, revealed a change in behaviour akin to a phase change when the crowd density passed a critical threshold of four individuals per square metre. Below this density the crowd remained relatively inactive. But above that threshold it started moving, exhibiting localized oscillations that were periodic over about 18 s, and occurred without any external guiding such as corralling.

Unlike a back-and-forth oscillation, this motion – which involves hundreds of people moving over several metres – has an almost circular trajectory that shows chirality (or handedness) and a 50:50 chance of turning to either the right or left. “Our model captures the fact that the chirality is not fixed. Instead it emerges in the dynamics: the crowd spontaneously decides between clockwise or counter-clockwise circular motion,” explains Guiselin, who worked on the mathematical modelling.

“The dynamics is complicated because if the crowd is pushed, then it will react by creating a propulsion force in the direction in which it is pushed: we’ve called this the windsock effect. But the crowd also has a resistance mechanism, a counter-reactive effect, which is a propulsive force opposite to the direction of motion: what we have called the weathercock effect,” continues Guiselin, adding that it is these two competing mechanisms in conjunction with the confined situation that gives rise to the circular oscillations.

The team observed similar oscillations in footage of the 2010 tragedy at the Love Parade music festival in Duisburg, Germany, in which 21 people died and several hundred were injured during a crush.

Early results suggest that the oscillation period for such crowds is proportional to the size of the space they are confined in. But the team want to test their theory at other events, and learn more about both the circular oscillations and the compression waves they observed when people started pushing their way into the already crowded square at the Chupinazo.

If their model is proven to work for all densely packed, confined crowds, it could in principle form the basis for a crowd management protocol. “You could monitor crowd motion with a camera, and as soon as you detect these oscillations emerging try to evacuate the space, because we see these oscillations well before larger amplitude motions set in,” Guiselin explains.

The post Modelling the motion of confined crowds could help prevent crushing incidents appeared first on Physics World.

  •  

Freedom in the Equation exhibition opens at Harvard Science Centre

A new exhibition dedicated to Ukrainian scientists has opened at the Harvard Science Centre in Cambridge, Massachusetts, in the US.

The exhibition – Freedom in the Equation – shares the stories of 10 scientists to highlight Ukraine’s lost scientific potential due to Russia’s aggression towards the country while also shedding light on the contributions of Ukrainian scientists.

Among them are physicists Vasyl Kladko and Lev Shubnikov. Kladko worked on semiconductor physics and was deputy director of the Institute of Semiconductor Physics in Kyiv. He was killed in 2022 at the age of 65 as he tried to help his family flee Russia’s invasion.

Shubnikov, meanwhile, established a cryogenic lab at the Ukrainian Institute of Physics and Technology in Kharkiv (now known as the Kharkiv Institute of Physics and Technology) in the early 1930s. In 1937 Shubnikov was arrested during Stalin’s regime, accused of espionage and executed shortly afterwards.

The scientists were selected by Oleksii Boldyrev, a molecular biologist and founder of the online platform myscience.ua, together with Krystyna Semeryn, a literary scholar and publicist.

The portraits were created by Niklas Elemehed, who is the official artist of the Nobel prize, with the text compiled by Olesia Pavlyshyn, editor-in-chief at the Ukrainian popular-science outlet Kunsht.

The exhibition, which is part of the Science at Risk project, runs until 10 March. “Today, I witness scientists being killed, and preserving their names has become a continuation of my work in historical research and a continuation of resistance against violence toward Ukrainian science,” says Boldyrev.

The post Freedom in the Equation exhibition opens at Harvard Science Centre appeared first on Physics World.

  •  

Schrödinger’s cat states appear in the nuclear spin state of antimony

Physicists at the University of New South Wales (UNSW) are the first to succeed in creating and manipulating quantum superpositions of a single, large nuclear spin. The superposition involves spin states that are very far apart, and it is therefore considered a Schrödinger’s cat state. The work could be important for applications in quantum information processing and quantum error correction.

It was Erwin Schrödinger who, in 1935, devised his famous thought experiment involving a cat that could, worryingly, be both dead and alive at the same time. In his gedanken experiment, the decay of a radioactive atom triggers a mechanism (the breaking of a vial containing a poisonous gas) that kills the cat. However, since the decay of the radioactive atom is a quantum phenomenon,  the atom is in a superposition of being decayed and not decayed. If the cat and poison are hidden in a box, we do not know if the cat is alive or dead. Instead, the state of the feline is  a superposition of dead and alive – known as a Schrödinger’s cat state – until we open the box.

The term “Schrödinger’s cat state” (or just “cat state”) is now used to refer to a superposition of two very different states of a quantum system. Creating cat states in the lab is no easy task, but researchers have managed to do this in recent years using the quantum superposition of coherent states of a laser field with different amplitudes, or phases, of the field. They have also created cat states using a trapped ion (with the vibrational state of the ion in the trap playing the role of the cat) and coherent microwave fields confined to superconducting boxes combined with Rydberg atoms and superconducting quantum bits (qubits).

Antimony atom cat

The cat state in the UNSW study is hosted in an atom of antimony, a heavy element with a large nuclear spin. The high spin value means that, instead of just pointing up or down (that is, in one of two directions), the nuclear spin of antimony can occupy spin states corresponding to eight different directions. This makes it a high-dimensional quantum system that is valuable for quantum information processing and for encoding error-correctable logical qubits. The atom was embedded in a silicon quantum chip that allows for readout and control of the nuclear spin state.

Normally, a qubit is described by just two quantum states, explains Xi Yu, who is lead author of a paper describing the study. For example, an atom with its spin pointing down can be labelled as the “0” state and the spin pointing up, the “1” state. The problem with such a system is that the information contained in these states is fragile and can be easily lost when a 0 switches to a 1, or vice versa. The probability of this logical error occurring is reduced by creating a qubit using a system like the antimony atom. With its eight different spin directions, a single error is not enough to erase the quantum information – there are still seven quantum states left, and it would take seven consecutive errors to turn the 0 into a 1.
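
This robustness can be made concrete with a generic angular-momentum sketch; it is not the UNSW encoding or error-correction scheme, just an illustration that a spin-7/2 nucleus has 2I + 1 = 8 levels and that a single spin flip only connects neighbouring levels:

```python
import numpy as np

# Generic angular-momentum sketch (not the UNSW encoding or error-correction
# scheme): a spin-7/2 nucleus has 2I + 1 = 8 levels, and a single spin flip
# only connects neighbouring m levels, so seven consecutive flips are needed
# to take the m = +7/2 level all the way down to m = -7/2.

I = 3.5                                      # nuclear spin of antimony-123
m = np.arange(I, -I - 1, -1)                 # m = +7/2, +5/2, ..., -7/2 (8 values)

# Lowering operator: <m-1| I_- |m> = sqrt(I(I+1) - m(m-1))
I_minus = np.zeros((8, 8))
for k in range(7):
    I_minus[k + 1, k] = np.sqrt(I * (I + 1) - m[k] * (m[k] - 1))

state = np.zeros(8)
state[0] = 1.0                               # start in the extreme level m = +7/2
for flip in range(1, 8):
    state = I_minus @ state                  # one single-quantum "error"
    state /= np.linalg.norm(state)
    print(f"after {flip} flips, overlap with m = -7/2: {abs(state[-1]):.2f}")
# The overlap with the opposite extreme level stays at zero until the 7th flip.
```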

More room for error

The information is still encoded in binary code (0 and 1), but there is more room for error between the logical codes, says team leader Andrea Morello. “If an error occurs, we detect it straight away, and we can correct it before further errors accumulate.”

The researchers say they were not initially looking to make and manipulate cat states, but started with a project on high-spin nuclei for reasons unrelated to quantum information. They were in fact interested in observing quantum chaos in a single nuclear spin, which had been an experimental “holy grail” for a very long time, says Morello. “Once we began working with this system, we first got derailed by the serendipitous discovery of nuclear electric resonance,” he remembers. “We then became aware of some new theoretical ideas for the use of high-spin systems in quantum information and quantum error-correcting codes.

“We therefore veered towards that research direction, and this is our first big result in that context,” he tells Physics World.

Scalable technology

The main challenge the team had to overcome in their study was to set up seven “clocks” that had to be precisely synchronized, so they could keep track of the quantum state of the eight-level system. Until quite recently, this would have involved cumbersome programming of waveform generators, explains Morello. “The advent of FPGA [field-programmable gate array] generators, tailored for quantum applications, has made this research much easier to conduct now.”

While there have already been a few examples of such physical platforms in which quantum information can be encoded in a (Hilbert) space of dimension larger than two – for example, microwave cavities or trapped ions – these were relatively large in size: bulk microwave cavities are typically the size of a matchbox, he says. “Here, we have reconstructed many of the properties of other high-dimensional systems, but within an atomic-scale object – a nuclear spin. It is very exciting, and quite plausible, to imagine a quantum processor in silicon, containing millions of such Schrödinger cat states.”

The fact that the cat is hosted in a silicon chip means that this technology could be scaled up in the long-term using methods similar to those already employed in the computer chip industry today, he adds.

Looking ahead, the UNSW team now plans to demonstrate quantum error correction in its antimony system. “Beyond that, we are working to integrate the antimony atoms with lithographic quantum dots, to facilitate the scalability of the system and perform quantum logic operations between cat-encoded qubits,” reveals Morello.

The present study is detailed in Nature Physics.

The post Schrödinger’s cat states appear in the nuclear spin state of antimony appeared first on Physics World.

  •  

Quantum superstars gather in Paris for the IYQ 2025 opening ceremony

The United Nations Educational, Scientific and Cultural Organization (UNESCO) has declared 2025 the International Year of Quantum Science and Technology – or IYQ.

UNESCO kicked off IYQ on 4–5 February at a gala opening ceremony in Paris. Physics World’s Matin Durrani was there, and he shares his highlights from the event in this episode of the Physics World Weekly podcast.

No fewer than four physics Nobel laureates took part in the ceremony alongside representatives from governments and industry. While some speakers celebrated the current renaissance in quantum research and the burgeoning quantum-technology sector, others called on the international community to ensure that people in all nations benefit from a potential quantum revolution – not just people in wealthier countries. The dangers of promising too much from quantum computers and other technologies were also discussed – as Durrani explains.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

The post Quantum superstars gather in Paris for the IYQ 2025 opening ceremony appeared first on Physics World.

  •  

US science in chaos as impact of Trump’s executive orders sinks in

Scientists across the US have been left reeling after a spate of executive orders from US President Donald Trump has led to research funding being slashed, staff being told to quit and key programmes being withdrawn. In response to the orders, government departments and external organizations have axed diversity, equity and inclusion (DEI) programmes, scrubbed mentions of climate change from websites, and paused research grants pending tests for compliance with the new administration’s goals.

Since taking up office on 20 January, Trump has signed dozens of executive orders. One ordered the closure of the US Agency for International Development, which has supported medical and other missions worldwide for more than six decades. The administration said it was withdrawing almost all of the agency’s funds and wanted to sack its entire workforce. A federal judge has temporarily blocked the plans, saying they may violate the US’s constitution, which reserves decisions on funding to Congress.

Individual science agencies are under threat too. Politico reported that the Trump administration has asked the National Science Foundation (NSF), which funds much US basic and applied research, to lay off between a quarter and a half of its staff in the next two months. Another report suggests there are plans to cut the agency’s annual budget from roughly $9bn to $3bn. Meanwhile, former officials of the National Oceanic and Atmospheric Administration (NOAA) told CBS News that half its staff could be sacked and its budget slashed by 30%.

Even before they had learnt of plans to cut its staff and budget, officials at the NSF were starting to examine details of thousands of grants it had awarded for references to DEI, climate change and other topics that Trump does not like. The swiftness of the announcements has caused chaos, with recipients of grants suddenly finding themselves unable to access the NSF’s award cash management service, which holds grantees’ funds, including their salaries.

NSF bosses have taken some steps to reassure grantees. “Our top priority is resuming our funding actions and services to the research community and our stakeholders,” NSF spokesperson Mike England told Physics World in late January. In what is a highly fluid situation, there was some respite on 2 February when the NSF announced that access had been restored with the system able to accept payment requests.

“Un-American” actions

Trump’s anti-DEI orders have caused shockwaves throughout US science. According to 404 Media, NASA staff were told on 22 January to “drop everything” to remove mentions of DEI, Indigenous people, environmental justice and women in leadership, from public websites. Another victim has been NASA’s Here to Observe programme, which links undergraduates from under-represented groups with scientists who oversee NASA’s missions. Science reported that contracts for half the scientists involved in the programme had been cancelled by the end of January.

It is still unclear, however, what impact the Trump administration’s DEI rules will have on the make-up of NASA’s astronaut corps. Since choosing its first female astronaut in 1978, NASA has sought to make the corps more representative of US demographics. How exactly the agency should move forward will fall to Jared Isaacman, the space entrepreneur and commercial astronaut who has been nominated as NASA’s next administrator.

Anti-DEI initiatives have hit individual research labs too. Physics World understands that Fermilab – the US’s premier particle-physics lab – suspended its DEI office and its women in engineering group in January. Meanwhile, the Fermilab LGBTQ+ group, called Spectrum, was ordered to cease all activities and its mailing list was deleted. Even the rainbow “Pride” flag was removed from the lab’s iconic Wilson Hall.

Some US learned societies, despite being formally unaffiliated with the government, have also responded to pressure from the new administration. The American Geophysical Union (AGU) removed the word “diversity” from its diversity and inclusion page, although it backtracked after criticism of the move.

There was also some confusion when the American Chemical Society appeared to have removed its webpage on diversity and inclusion; in fact, the society had published a new page and failed to put a redirect in place. “Inclusion and Belonging is a core value of the American Chemical Society, and we remain committed to creating environments where people from diverse backgrounds, cultures, perspectives and experiences thrive,” a spokesperson told Physics World. “We know the broken link caused confusion and some alarm, and we apologize.”

For the time being, the American Physical Society’s page on inclusion remains live, as does that of the American Institute of Physics.

Dismantling all federal DEI programmes and related activities will damage lives and careers of millions of American women and men

Neal Lane, Rice University

Such a response – which some opponents denounce as going beyond what is legally required for fear of repercussions if no action is taken – has left it up to individual leaders to underline the importance of diversity in science. Neal Lane, a former science adviser to President Clinton, told Physics World that “dismantling all federal DEI programmes and related activities will damage lives and careers of millions of American women and men, including scientists, engineers, technical workers – essentially everyone who contributes to advancing America’s global leadership in science and technology”.

Lane, who is now a science and technology policy fellow at Rice University in Texas, thinks that the new administration’s anti-DEI actions “will weaken the US” and believes they should be considered “un-American”. “The purpose of DEI policies, programmes and activities is to ensure all Americans have the opportunity to participate and the country is able to benefit from their participation,” he says.

One senior physicist at a US university, who wishes to remain anonymous, told Physics World that those behind the executive orders are relying on institutions and individuals to “comply in advance” with what they perceive to be the spirit of the orders. “They are relying on people to ignore the fine print, which says that executive orders can’t and don’t overwrite existing law. But it is up to scientists to do the reading — and to follow our consciences. More than universities are on the line: the lives of our students and colleagues are on the line.”

Education turmoil

Another target of the Trump administration is the US Department of Education, which was set up in 1979 to oversee everything from pre-school to postgraduate education. It has already put dozens of its civil servants on leave, ostensibly because their work involves DEI issues. Meanwhile, the withholding of funds has led to the cancellation of scientific meetings, mostly focusing on medicine and life sciences, that were scheduled in the US for late January and early February.

Colleges and universities in the US have also reacted to Trump’s anti-DEI executive order. Academic divisions at Harvard University and the Massachusetts Institute of Technology, for example, have already indicated that they will no longer require applicants for jobs to indicate how they plan to advance the goals of DEI. Northeastern University in Boston has removed the words “diversity” and “inclusion” from a section of its website.

Not all academic organizations have fallen into line, however. Danielle Holly, president of the women-only Mount Holyoke College in South Hadley, Massachusetts, says it will forgo contracts with the federal government if they require abolishing DEI work. “We obviously can’t enter into contracts with people who don’t allow DEI work,” she told the Boston Globe. “So for us, that wouldn’t be an option.”

Climate concerns

The Environmental Protection Agency (EPA) is also under fire from an administration that doubts the reality of climate change and opposes anti-pollution laws. Trump administration representatives were taking action even before the Senate approved Lee Zeldin, a former Republican Congressman from New York who has criticized much environmental legislation, as EPA Administrator. They removed all outside advisers on the EPA’s scientific advisory board and its clean air scientific advisory committee – purportedly to “depoliticize” the boards.

Once the Senate approved Zeldin on 29 January, the EPA sent an e-mail warning more than 1000 probationary employees who had spent less than a year in the agency that their roles could be “terminated” immediately. Then, according to the New York Times, the agency developed plans to demote longer-term employees who have overseen research, enforcement of anti-pollution laws, and clean-ups of hazardous waste. According to Inside Climate News, staff also found their individual pronouns scrubbed from their e-mails and websites without their permission – the result of an order to remove “gender ideology extremism”.

Critics have also questioned the nomination of Neil Jacobs to lead the National Oceanic and Atmospheric Administration (NOAA). He was its acting head during Trump’s first term in office, serving during the 2019 “Sharpiegate” affair when Trump used a Sharpie pen to alter a NOAA weather map to indicate that Hurricane Dorian would affect Alabama. While conceding Jacobs’s experience and credentials, Rachel Cleetus of the Union of Concerned Scientists asserts that Jacobs is “unfit to lead” given that he “fail[ed] to uphold scientific integrity at the agency”.

Spending cuts

Another concern for scientists is the quasi-official team led by “special government employee” and SpaceX founder Elon Musk. The administration has charged Musk and his so-called “department of government efficiency”, or DOGE, with identifying significant cuts to government spending. Though some of DOGE’s activities have been blocked by US courts, agencies have nevertheless been left scrambling for ways to reduce day-to-day costs.

The National Institutes of Health (NIH), for example, has said it will significantly reduce its funding for the “indirect” costs of research projects it supports – the overheads that, for example, cover the cost of maintaining laboratories, administering grants, and paying staff salaries. Under the plans, indirect cost reimbursement for federally funded research would be capped at 15%, a drastic cut from the far higher rates that many institutions currently negotiate.

NIH personnel have tried to put a positive gloss on its actions. “The United States should have the best medical research in the world,” a statement from NIH declared. “It is accordingly vital to ensure that as many funds as possible go towards direct scientific research costs rather than administrative overhead.”

Just because Elon Musk doesn’t understand indirect costs doesn’t mean Americans should have to pay the price with their lives

US senator Patty Murray

Opponents of the Trump administration, however, are unconvinced. They argue that the measure will imperil critical clinical research because many academic recipients of NIH funds do not have the endowments to compensate for the losses. “Just because Elon Musk doesn’t understand indirect costs doesn’t mean Americans should have to pay the price with their lives,” says US senator Patty Murray, a Democrat from Washington state.

Capping the overheads on universities’ grants at 15% could, however, force institutions to make up the lost income by raising tuition fees, which could “go through the roof”, according to the anonymous senior physicist contacted by Physics World. “Far from being a populist policy, these cuts to overheads are an attack on the subsidies that make university education possible for students from a range of socioeconomic backgrounds. The alternative is to essentially shut down the university research apparatus, which would in many ways be the death of American scientific leadership and innovation.”

Musk and colleagues have also gained unprecedented access to government websites related to civil servants and the country’s entire payments system. That access has drawn criticism from several commentators who note that, since Musk is a recipient of significant government support through his SpaceX company, he could use the information for his own advantage.

“Musk has access to all the data on federal research grantees and contractors: social security numbers, tax returns, tax payments, tax rebates, grant disbursements and more,” wrote physicist Michael Lubell from City College of New York. “Anyone who depends on the federal government and doesn’t toe the line might become a target. This is right out of (Hungarian prime minister) Viktor Orbán’s playbook.”

A new ‘dark ages’

As for the long-term impact of these changes, James Gates – a theoretical physicist at the University of Maryland and a past president of the US National Society of Black Physicists – is blunt. “My country is in for a 50-year period of a new dark ages,” he told an audience at the Royal College of Art in London, UK, on 7 February.

My country is in for a 50-year period of a new dark ages

James Gates, University of Maryland

Speaking at an event sponsored by the college’s association for Black students – RCA BLK – and supported by the UK’s organization for Black physicists, the Blackett Lab Family, he pointed out that the US has been through such periods before. As examples, Gates cited the 1950s “Red Scare” and the period after 1876 when the federal government abandoned efforts to enforce the civil rights of Black Americans in southern states and elsewhere.

However, he is not entirely pessimistic. “Nothing is permanent in human behaviour. The question is the timescale,” Gates said. “There will be another dawn, because that’s part of the human spirit.”

  • With additional reporting by Margaret Harris, online editor of Physics World, in London and Michael Banks, news editor of Physics World

Bacterial ‘cables’ form a living gel in mucus

Bacterial cells in solutions of polymers such as mucus grow into long cable-like structures that buckle and twist on each other, forming a “living gel” made of intertwined cells. This behaviour is very different from what happens in polymer-free liquids, and researchers at the California Institute of Technology (Caltech) and Princeton University, both in the US, say that understanding it could lead to new treatments for bacterial infections in patients with cystic fibrosis. It could also help scientists understand how cells organize themselves into polymer-secreting conglomerations of bacteria called biofilms that can foul medical and industrial equipment.

Interactions between bacteria and polymers are ubiquitous in nature. For example, many bacteria live as multicellular colonies in polymeric fluids, including host-secreted mucus, exopolymers in the ocean and the extracellular polymeric substance that encapsulates biofilms. Often, these growing colonies can become infectious, including in cystic fibrosis patients, whose mucus is more concentrated than it is in healthy individuals.

Laboratory studies of bacteria, however, typically focus on cells in polymer-free fluids, explains study leader Sujit Datta, a biophysicist and bioengineer at Caltech. “We wondered whether interactions with extracellular polymers influence proliferating bacterial colonies,” says Datta, “and if so, how?”

Watching bacteria grow in mucus

In their work, which is detailed in Science Advances, the Caltech/Princeton team used a confocal microscope to monitor how different species of bacteria grew in purified samples of mucus. The samples, Datta explains, were provided by colleagues at the Massachusetts Institute of Technology and the Albert Einstein College of Medicine.

Normally, when bacterial cells divide, the resulting “daughter” cells diffuse away from each other. However, in polymeric mucus solutions, Datta and colleagues observed that the cells instead remained stuck together and began to form long cable-like structures. These cables can contain thousands of cells, and eventually they start bending and folding on top of each other to form an entangled network.

“We found that we could quantitatively predict the conditions under which such cables form using concepts from soft-matter physics typically employed to describe non-living gels,” Datta says.

Support for bacterial colonies

The team’s work reveals that polymers, far from being a passive medium, play a pivotal role in supporting bacterial life by shaping how cells grow in colonies. The form of these colonies – their morphology – is known to influence cell-cell interactions and is important for maintaining their genetic diversity. It also helps determine how resilient a colony is to external stressors.

“By revealing this previously-unknown morphology of bacterial colonies in concentrated mucus, our finding could help inform ways to treat bacterial infections in patients with cystic fibrosis, in which the mucus that lines the lungs and gut becomes more concentrated, often causing the bacterial infections that take hold in that mucus to become life-threatening,” Datta tells Physics World.

Friend or foe?

As for why cable formation is important, Datta explains that there are two schools of thought. The first is that by forming large cables, bacteria may become more resilient against the body’s immune system, making them more infectious. The other possibility is that the reverse is true – that cable formation could in fact leave bacteria more exposed to the host’s defence mechanisms. These include “mucociliary clearance”, which is the process by which tiny hairs on the surface of the lungs constantly sweep up mucus and propel it upwards.

“Could it be that when bacteria are all clumped together in these cables, it is actually easier to get rid of them by expelling them out of the body?” Datta asks.

Investigating these hypotheses is an avenue for future research, he adds. “Ours is a fundamental discovery on how bacteria grow in complex environments, more akin to their natural habitats,” Datta says. “We also expect it will motivate further work exploring how cable formation influences the ways in which bacteria interact with hosts, phages, nutrients and antibiotics.”

Sarah Sheldon: how a multidisciplinary mindset can turn quantum utility into quantum advantage

IBM is on a mission to transform quantum computers from applied research endeavour to mainstream commercial opportunity. It wants to go beyond initial demonstrations of “quantum utility”, where these devices outperform classical computers only in a few niche applications, and reach the new frontier of “quantum advantage”. That’ll be where quantum computers routinely deliver significant, practical benefits beyond approximate classical computing methods, calculating solutions that are cheaper, faster and more accurate.

Unlike classical computers, which rely on binary bits that can be either 0 or 1, quantum computers use quantum bits (qubits) that can exist in a superposition of the 0 and 1 states. This superposition, coupled with quantum entanglement (a quantum correlation between two or more qubits), enables quantum computers to perform some types of calculation – such as problems in quantum chemistry and molecular reaction kinetics – significantly faster than classical machines.
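In the standard textbook notation (shown here purely as an illustration, not a detail from the interview), a single qubit’s state is written as a superposition of the two computational basis states,

|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,

where the complex amplitudes α and β set the probabilities of reading out 0 or 1 when the qubit is measured.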

In the vanguard of IBM’s quantum R&D effort is Sarah Sheldon, a principal research scientist and senior manager of quantum theory and capabilities at the IBM Thomas J Watson Research Center in Yorktown Heights, New York. After a double-major undergraduate degree in physics and nuclear science and engineering at Massachusetts Institute of Technology (MIT), Sheldon received her PhD from MIT in 2013 – though she did much of her graduate research in nuclear science and engineering as a visiting scholar at the Institute for Quantum Computing (IQC) at the University of Waterloo, Canada.

At IQC, Sheldon was part of a group studying quantum control techniques, manipulating the spin states of nuclei in nuclear-magnetic-resonance (NMR) experiments. “Although we were using different systems to today’s leading quantum platforms, we were applying a lot of the same kinds of control techniques now widely deployed across the quantum tech sector,” Sheldon explains.

“Upon completion of my PhD, I opted instinctively for a move into industry, seeking to apply all that learning in quantum physics into immediate and practical engineering contributions,” she says. “IBM, as one of only a few industry players back then with an experimental group in quantum computing, was the logical next step.”

Physics insights, engineering solutions

Sheldon currently heads a cross-disciplinary team of scientists and engineers developing techniques for handling noise and optimizing performance in novel experimental demonstrations of quantum computers. It’s ambitious work that ties together diverse lines of enquiry spanning everything from quantum theory and algorithm development to error mitigation, error correction and techniques for characterizing quantum devices.

We’re investigating how to extract the optimum performance from current machines online today as well as from future generations of quantum computers.

Sarah Sheldon, IBM

“From algorithms to applications,” says Sheldon, “we’re investigating what can we do with quantum computers: how to extract the optimum performance from current machines online today as well as from future generations of quantum computers – say, five or 10 years down the line.”

A core priority for Sheldon and colleagues is how to manage the environmental noise that plagues current quantum computing systems. Qubits are all too easily disturbed, for example, by their interactions with environmental fluctuations in temperature, electric and magnetic fields, vibrations, stray radiation and even interference between neighbouring qubits.

The ideal solution – a strategy called error correction – involves storing the same information across multiple qubits, such that errors are detected and corrected when one or more of the qubits are impacted by noise. But the problem with these so-called “fault-tolerant” quantum computers is that they need millions of qubits, which is impossible to implement in today’s small-scale quantum architectures. (For context, IBM’s latest Quantum Development Roadmap outlines a practical path to error-corrected quantum computers by 2029.)

“Ultimately,” Sheldon notes, “we’re working towards large-scale error-corrected systems, though for now we’re exploiting near-term techniques like error mitigation and other ways of managing noise in these systems.” In practical terms, this means getting more out of current quantum architectures without increasing the number of qubits – essentially, integrating them with classical computers and reducing noise by taking more samples on the quantum processor and post-processing the results classically.
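Sheldon does not spell out a specific scheme here, but zero-noise extrapolation is one widely used example of trading extra quantum samples for classical post-processing. The sketch below is a minimal illustration with made-up numbers standing in for repeated hardware runs; it is not IBM’s implementation.

import numpy as np

# Noise-amplification factors: 1.0 is the circuit as run on hardware;
# larger values correspond to deliberately amplified noise
scale_factors = np.array([1.0, 2.0, 3.0])

# Hypothetical expectation values of some observable at each noise level,
# each obtained by averaging many shots on the quantum processor
noisy_expectations = np.array([0.82, 0.68, 0.55])

# Classical post-processing: fit the trend and extrapolate back to zero noise
coeffs = np.polyfit(scale_factors, noisy_expectations, deg=2)
zero_noise_estimate = np.polyval(coeffs, 0.0)

print(f"Error-mitigated estimate: {zero_noise_estimate:.3f}")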

Strength in diversity

For Sheldon, one big selling point of the quantum tech industry is the opportunity to collaborate with people from a wide range of disciplines. “My team covers a broad-scope R&D canvas,” she says. There are mathematicians and computer scientists, for example, working on complexity theory and novel algorithm development; physicists specializing in quantum simulation and incorporating error suppression techniques; as well as quantum chemists working on simulations of molecular systems.

“Quantum is so interdisciplinary – you are constantly learning something new from your co-workers,” she adds. “I started out specializing in quantum control techniques, before moving onto experimental demonstrations of larger multiqubit systems while working ever more closely with theorists.”

A corridor in IBM's quantum lab
Computing reimagined Quantum scientists and engineers at the IBM Thomas J Watson Research Center are working to deliver IBM’s Quantum Development Roadmap and a practical path to error-corrected quantum computers by 2029. (Courtesy: Connie Zhou for IBM)

External research collaborations are also mandatory for Sheldon and her colleagues. Front-and-centre is the IBM Quantum Network, which provides engagement opportunities with more than 250 organizations across the “quantum ecosystem”. These range from top-tier labs – such as CERN, the University of Tokyo and the UK’s National Quantum Computing Centre – to quantum technology start-ups like Q-CTRL and Algorithmiq. It also encompasses established industry players aiming to be early-adopting end-users of quantum technologies (among them Bosch, Boeing and HSBC).

“There’s a lot of innovation happening across the quantum community,” says Sheldon, “so external partnerships are incredibly important for IBM’s quantum R&D programme. While we have a deep and diverse skill set in-house, we can’t be the domain experts across every potential use-case for quantum computing.”

Opportunity knocks

Notwithstanding the pace of innovation, there are troubling clouds on the horizon. In particular, there is a shortage of skilled workers in the quantum workforce, with established technology companies and start-ups alike desperate to attract more physical scientists and engineers. The task is to fill not only specialist roles – be it error-correction scientists or quantum-algorithm developers – but more general positions such as test and measurement engineers, data scientists, cryogenic technicians and circuit designers.

Yet Sheldon remains upbeat about addressing the skills gap. “There are just so many opportunities in the quantum sector,” she notes. “The field has changed beyond all recognition since I finished my PhD.” Perhaps the biggest shift has been the dramatic growth of industry engagement and, with it, all sorts of attractive career pathways for graduate scientists and engineers. Those range from firms developing quantum software or hardware to the end-users of quantum technologies in sectors such as pharmaceuticals, finance or healthcare.

“As for the scientific community,” argues Sheldon, “we’re also seeing the outline take shape for a new class of quantum computational scientist. Make no mistake, students able to integrate quantum computing capabilities into their research projects will be at the leading edge of their fields in the coming decades.”

Ultimately, Sheldon concludes, early-career scientists shouldn’t necessarily over-think things regarding that near-term professional pathway. “Keep it simple and work with people you like on projects that are going to interest you – whether quantum or otherwise.”

Nanoparticles demonstrate new and unexpected mechanism of coronavirus disinfection

The COVID-19 pandemic provided a driving force for researchers to seek out new disinfection methods that could tackle future viral outbreaks. One promising approach relies on the use of nanoparticles, with several metal and metal oxide nanoparticles showing anti-viral activity against SARS-CoV-2, the virus that causes COVID-19. With this in mind, researchers from Sweden and Estonia investigated the effect of such nanoparticles on two different virus types.

Aiming to elucidate the nanoparticles’ mode of action, they discovered a previously unknown antiviral mechanism, reporting their findings in Nanoscale.

The researchers – from the Swedish University of Agricultural Sciences (SLU) and the University of Tartu – examined triethanolamine terminated titania (TATT) nanoparticles, spherical 3.5-nm diameter titanium dioxide (titania) particles that are expected to interact strongly with viral surface proteins.

They tested the antiviral activity of the TATT nanoparticles against two types of virus: swine transmissible gastroenteritis virus (TGEV) – an enveloped coronavirus that’s surrounded by a phospholipid membrane and transmembrane proteins; and the non-enveloped encephalomyocarditis virus (EMCV), which does not have a phospholipid membrane. SARS-CoV-2 has a similar structure to TGEV: an enveloped virus with an outer lipid membrane and three proteins forming the surface.

“We collaborated with the University of Tartu in studies of antiviral materials,” explains lead author Vadim Kessler from SLU. “They had found strong activity from cerium dioxide nanoparticles, which acted as oxidants for membrane destruction. In our own studies, we saw that TATT formed appreciably stable complexes with viral proteins, so we could expect potentially much higher activity at lower concentration.”

In this latest investigation, the team aimed to determine whether one of these potential mechanisms – blocking of surface proteins, or membrane disruption via oxidation by nanoparticle-generated reactive oxygen species – is the likely cause of TATT’s antiviral activity. The first of these effects usually occurs at low (nanomolar to micromolar) nanoparticle concentrations, the second at higher (millimolar) concentrations.

Mode of action

To assess the nanoparticles’ antiviral activity, the researchers exposed viral suspensions to colloidal TATT solutions for 1 h, at room temperature and in the dark (without UV illumination). For comparison, they repeated the process with silicotungstate polyoxometalate (POM) nanoparticles, which are not able to bind strongly to cell membranes.

The nanoparticle-exposed viruses were then used to infect cells and the resulting cell viability served as a measure of the virus infectivity. The team note that the nanoparticles alone showed no cytotoxicity against the host cells.

Measuring viral infectivity after nanoparticle exposure revealed that POM nanoparticles did not exhibit antiviral effects on either virus, even at relatively high concentrations of 1.25 mM. TATT nanoparticles, on the other hand, showed significant antiviral activity against the enveloped TGEV virus at concentrations starting from 0.125 mM, but did not affect the non-enveloped EMCV virus.

Based on previous evidence that TATT nanoparticles interact strongly with proteins in darkness, the researchers expected to see antiviral activity at a nanomolar level. But the finding that TATT activity only occurred at millimolar concentrations, and only affected the enveloped virus, suggests that the antiviral effect is not due to blocking of surface proteins. And as titania is not oxidative in darkness, the team propose that the antiviral effect is actually due to direct complexation of nanoparticles with membrane phospholipids – a mode of antiviral action not previously considered.

“Typical nanoparticle concentrations required for effects on membrane proteins correspond to the protein content on the virus surface. With a 1:1 complex, we would need maximum nanomolar concentrations,” Kessler explains. “We saw an effect at about 1 mM, which is far higher. This was the indication for us that the effect was on the whole of the membrane.”
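To put those scales side by side (a rough order-of-magnitude comparison, not a figure from the paper): blocking of surface proteins would be expected somewhere in the nanomolar-to-micromolar range, whereas the observed threshold is millimolar, i.e.

\frac{10^{-3}\ \text{mol/l (observed)}}{10^{-9}\text{–}10^{-6}\ \text{mol/l (expected for protein blocking)}} \approx 10^{3}\text{–}10^{6},

a gap of three to six orders of magnitude, which is why the team ruled out protein blocking in favour of an effect on the membrane as a whole.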

Verifying the membrane effect

To corroborate their hypothesis, the researchers examined the leakage of dye-labelled RNA from the TGEV coronavirus after 1 h exposure to nanoparticles. The fluorescence signal from the dye showed that TATT-treated TGEV released significantly more RNA than non-exposed virus, attributed to the nanoparticles disrupting the virus’s phospholipid membrane.

Finally, the team studied the interactions between TATT nanoparticles and two model phospholipid compounds. Both molecules formed strong complexes with TATT nanoparticles, while their interaction with POM nanoparticles was weak. This additional verification led the researchers to conclude that the antiviral effect of TATT in dark conditions is due to direct membrane disruption via complexation of titania nanoparticles with phospholipids.

“To the best of our knowledge, [this] proves a new pathway for metal oxide nanoparticles antiviral action,” they write.

Importantly, the nanoparticles are non-toxic, and work at room temperature without requiring UV illumination – enabling simple and low-cost disinfection methods. “While it was known that disinfection with titania could work in UV light, we showed that no special technical measures are necessary,” says Kessler.

Kessler suggests that the nanoparticles could be used to coat surfaces to destroy enveloped viruses, or in cost-effective filters to decontaminate air or water. “[It should be] possible to easily create antiviral surfaces that don’t require any UV activation just by spraying them with a solution of TATT, or possibly other oxide nanoparticles with an affinity to phosphate, including iron and aluminium oxides in particular,” he tells Physics World.

Organic photovoltaic solar cells could withstand harsh space environments

Carbon-based organic photovoltaics (OPVs) may be much better than previously thought at withstanding the high-energy radiation and sub-atomic particle bombardments of space environments. This finding, by researchers at the University of Michigan in the US, challenges a long-standing belief that OPV devices systematically degrade under conditions such as those encountered by spacecraft in low-Earth orbit. If verified in real-world tests, the finding suggests that OPVs could one day rival traditional thin-film photovoltaic technologies based on rigid semiconductors such as gallium arsenide.

Lightweight, robust, radiation-resilient photovoltaics are critical technologies for many aerospace applications. OPV cells are particularly attractive for this sector because they are ultra-lightweight, thermally stable and highly flexible. This last property allows them to be integrated onto curved surfaces as well as flat ones.

Today’s single-junction OPV devices also have a further advantage. Thanks to power conversion efficiencies (PCEs) that now exceed 20%, their specific power – that is, the power generated per unit weight – can be up to 40 W/g. This is significantly higher than traditional photovoltaic technologies, including those based on silicon (1 W/g) and gallium arsenide (3 W/g) on flexible substrates. Devices with such a large specific power could provide energy for small spacecraft heading into low-Earth orbit and beyond.
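As a back-of-the-envelope illustration (the arithmetic is ours; the specific-power figures are those quoted above), the mass of active material needed to supply a power P is P divided by the specific power, so for a 100 W payload:

m_{\text{OPV}} = \frac{100\ \text{W}}{40\ \text{W/g}} = 2.5\ \text{g}, \qquad m_{\text{Si}} = \frac{100\ \text{W}}{1\ \text{W/g}} = 100\ \text{g}, \qquad m_{\text{GaAs}} = \frac{100\ \text{W}}{3\ \text{W/g}} \approx 33\ \text{g}.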

Until now, however, scientists believed that these materials had a fatal flaw for space applications: they weren’t robust to irradiation by the energetic particles (predominantly fluxes of electrons and protons) that spacecraft routinely encounter.

Testing two typical OPV materials

In the new work, researchers led by electrical and computer engineer Yongxi Li and physicist Stephen Forrest analysed how two typical classes of OPV materials behave when exposed to protons with differing energies. They did this by characterizing the materials’ optoelectronic properties before and after irradiation. The first group was made up of small molecules (DBP, DTDCPB and C70) that had been grown using a technique called vacuum thermal evaporation (VTE). The second group consisted of solution-processed small molecules and polymers (PCE-10, PM6, BT-CIC and Y6).

The team’s measurements show that the OPVs grown by VTE retained their initial PV efficiency under proton fluxes of up to 10¹² cm⁻². In contrast, the polymer-based OPVs lost 50% of their original efficiency under the same conditions. This, say the researchers, is because proton irradiation breaks carbon-hydrogen bonds in the polymers’ molecular alkyl side chains. This leads to polymer cross-linking and the generation of charge traps that imprison electrons and prevent them from generating useful current.

The good news, Forrest says, is that many of these defects can be mended by thermally annealing the materials at temperatures of 45 °C or less. After such an annealing, the cell’s PCE returns to nearly 90% of its value before irradiation. This means that Sun-facing solar cells made of these materials could essentially “self-heal”, though Forrest acknowledges that whether this actually happens in deep space is a question that requires further investigation. “It may be more straightforward to design the material so that the electron traps never appear in the first place or by filling them with other atoms, so eliminating this problem,” he says.

According to Li, the new study, which is detailed in Joule, could aid the development of standardized stability tests for how protons interact with OPV devices. Such tests already exist for c-Si and GaAs solar cells, but not for OPVs, he says.

The Michigan researchers say they will now be developing materials that combine high PCEs with strong resilience to proton exposure. “We will then use these materials to fabricate OPV devices that we will then test on CubeSats and spacecraft in real-world environments,” Li tells Physics World.

How international conferences can help bring women in physics together

International conferences are a great way to meet people from all over the world to share the excitement of physics and discuss the latest developments in the subject. But the International Conference on Women in Physics (ICWIP) offers more by allowing us to listen to the experiences of people from many diverse backgrounds and cultures. At the same time, it highlights the many challenges that women in physics still face.

The ICWIP series is organized by the International Union of Pure and Applied Physics (IUPAP) and the week-long event typically features a mixture of plenaries, workshops and talks. Prior to the COVID-19 pandemic, the conferences were held in various locations across the world, but the last two have been held entirely online. The most recent meeting – the 8th ICWIP, run from India in 2023 – saw around 300 delegates from 57 countries attend. I was part of a seven-strong UK contingent – at various stages of our careers – who gave a presentation describing the current situation for women in physics in the UK.

Being held solely online didn’t stop delegates fostering a sense of community or discussing their predicaments and challenges. What became evident during the week was the extent and types of issues that women from across the globe still have to contend with. One is the persistence of implicit and explicit gender bias in their institutions or workplaces. This, along with negative stereotyping of women, produces discrepancies between the numbers of men and women in institutions, particularly at postgraduate level and beyond. Women often end up choosing not to pursue physics later into their careers and being reluctant to take up leadership roles.

Much more needs to be done to ensure women are encouraged in their careers. Indeed, women often face challenging work–life balances, with some expected to play a greater role in family commitments than men, and have little support at their workplaces. One postdoctoral researcher at the 2023 meeting, for example, attempted to discuss her research poster in the virtual conference room while looking after her young children at home – the literal balancing of work and life in action.

A virtual presentation with five speakers' avatars stood in front of a slide showing their names
Open forum The author and co-presenters at the most recent International Conference on Women in Physics. Represented by avatars online, they gave a presentation on women in physics in the UK. (Courtesy: Chethana Setty)

To improve their circumstances, delegates suggested enhancing legislation to combat gender bias and improve institutional culture through education to reduce negative stereotypes. More should also be done to improve networks and professional associations for women in physics. Another factor mentioned at the meeting, meanwhile, is the importance of early education and issues related to equity of teaching, whether delivered face-to-face or online.

But women can also face disadvantages related to factors other than gender, such as socioeconomic status and identity, resulting in a unique set of challenges. This is the principle of intersectionality, which was widely discussed in the context of problems in career progression.

In the UK, change is starting to happen. The Limit Less campaign by the Institute of Physics (IOP), which publishes Physics World, encourages young people to continue studying physics beyond the age of 16. The annual Conference for Undergraduate Women and Non-binary Physicists provides individuals with support and encouragement in their personal and professional development. There are also other initiatives such as the STEM Returner programme and the Daphne Jackson Trust for those wishing to return to a physics career. The WISE Ten Steps framework helps employers build a more positive workplace culture, while Athena SWAN and the IOP’s new Physics Inclusion Award aim to improve women’s prospects.

As we now look forward to the next ICWIP, there is still a lot more to do. We must ensure that women can continue in their physics careers while recognizing that intersectionality will play an increasingly significant role in shaping future equity, diversity and inclusion policies. It is likely that a new team will soon be sought from academia and industry, comprising individuals at various career stages, to represent the UK at the next ICWIP. Please do get involved if you are interested. Participation is not limited to women.

Women are doing physics in a variety of challenging circumstances. Gaining an international outlook of different cultural perspectives, as is possible at an international conference like the ICWIP, helps to put things in context and highlights the many common issues faced by women in physics. Taking the time to listen and learn from each other is critical, a process that can facilitate collaboration on issues that affect us all. Fundamentally, we all share a passion for physics, and endeavour to be catalysts for positive change for future generations.

  • This article was based on discussions with Sally Jordan from the Open University; Holly Campbell, UK Atomic Energy Authority; Josie C, AWE; Wendy Sadler and Nils Rehm, Cardiff University; and Sarah Bakewell and Miriam Dembo, Institute of Physics

Artisan, architect or artist: what kind of person are you at work?

We tend to define ourselves by the subjects we studied, and I am no different. I originally did physics before going on to complete a PhD in aeronautical engineering, which has led to a lifelong career in aerospace.

However, it took me quite a few years before I realized that there is more than one route to an enjoyable and successful career. I used to think that a career began at the “coal face” – doing things you were trained for or had a specialist knowledge of – before managing projects then products or people as you progressed to loftier heights.

Many of us naturally fall into one of three fundamental roles: artisan, architect or artist. So which are you?

At some point, I began to realize that while companies often adopt this linear approach to career paths, not everyone is comfortable with it. In fact, I now think that many of us naturally fall into one of three fundamental roles: artisan, architect or artist. So which are you?

Artisans are people who focus on creating functional, practical and often decorative items using hands-on methods or skills. Their work emphasizes craftsmanship, attention to detail and the quality of the finished product. For scientists and engineers, artisans are highly skilled people who apply their technical knowledge and know-how. Let’s be honest: they are the ones who get the “real work” done. From programmers to machinists and assemblers, these are the people who create detailed designs and make or maintain a high-quality product.

Architects, on the other hand, combine vision with technical knowledge to create functional and effective solutions. Their work involves designing, planning and overseeing. They have a broader view of what’s happening and may be responsible for delivering projects. They need to ensure tasks are appropriately prioritized and keep things on track and within budget.

Architects also help guide best practice and resolve or unblock issues. They are the people responsible for ensuring that the end result meets the needs of users and, where applicable, complies with regulations. Typically, this role involves running a project or team – think principal investigator, project manager, software architect or systems engineer.

As for artists, they are the people who have a big picture view of the world – they will not have eyes for the finer details. They are less constrained by a framework and are comfortable working with minimal formal guidance and definition. They have a vision of what will be needed for the future – whether that’s new products and strategic goals or future skills and technology requirements.

Artists set the targets for how an organization, department or business needs to grow and they define strategies for how a business will develop its competitive edge. Artists are often leaders and chiefs.

Which type are you?

To see how these personas work in practice, imagine working for a power utility provider. If there’s a power outage, the artisans will be the people who get the power back on by locating and fixing damaged power lines, repairing substations and so on. They are practical people who know how to make things work.

The architect will be organizing the repair teams, working out who goes to which location, and what to prioritize, ensuring that customers are kept happy and senior leaders are kept informed of progress. The artist, meanwhile, will be thinking about the future. How, for example, can utilities protect themselves better from storm damage and what new technologies or designs can be introduced to make the supply more resilient and minimize disruption?

Predominantly, artisans are practical, architects are tactical and artists are strategic, but there is an overlap between these qualities. Artisans, architects and artists differ in their goals and methods, yet the boundaries between them are blurred. Based on my gut experience as a physicist in industry, I’d say the breakdown between the different skills is roughly as shown in the figure below.

Pie chart of personal attributes
Varying values Artisans, architects and artists don’t have only one kind of attribute but are practical, tactical and strategic in different proportions. The numbers shown here are based on the author’s gut feeling after working in industry for more than 30 years.

Now this breakdown is not hard and fast. To succeed in your career, you need to be creative, inventive and skilful – whatever your role. While working with your colleagues, you need to engage in common processes such as adhering to relevant standards, regulations and quality requirements to deliver quality solutions and products. But thinking of ourselves as artisans, architects or artists may explain why each of us is suited to a certain role.

Know your strengths

Even though we all have something of the other personas in us, what’s important is to know what your own core strength is. I used to believe that the only route to a successful career was to work through each of these personas by starting out as an artisan, turning into an architect, and then ultimately becoming an artist. And to be fair, this is how many career paths are structured, which is why we’re often encouraged to think this way.

However, I have worked with people who liked “hands-on” work so much that they didn’t want to move to a different role, even though it meant turning down a significant promotion. I also know others who have indeed moved between different personas, only to discover the new type of work did not suit them.

Trouble is, although it’s usually possible to retrace steps, it’s not always straightforward to do so. Quite why that should be the case is not entirely clear. It’s certainly not because people are unwilling to accept a pay cut, but more because changing tack is seen as a retrograde step for both employees and their employers.

To be successful, any team, department or business needs to not only understand the importance of this skills mix but also recognize it’s not a simple pipeline – all three personas are critical to success. So if you don’t know already, I encourage you to think about what you enjoy doing most, using your insights to proactively drive career conversations and decisions. Don’t be afraid to emphasize where your “value add” lies.

If you’re not sure whether a change in persona is right for you, seek advice from mentors and peers or look for a secondment to try it out. The best jobs are the ones where you can spend most of your time doing what you love doing. Whether you’re an artisan, architect or artist – the most impactful employees are the ones who really enjoy what they do.

Thousands of nuclear spins are entangled to create a quantum-dot qubit

A new type of quantum bit (qubit) that stores information in a quantum dot with the help of an ensemble of nuclear spin states has been unveiled by physicists in the UK and Austria. Led by Dorian Gangloff and Mete Atatüre at the University of Cambridge, the team created a collective quantum state that could be used as a quantum register to store and relay information in a quantum communication network of the future.

Quantum communication networks are used to exchange and distribute quantum information between remotely-located quantum computers and other devices. As well as enabling distributed quantum computing, quantum networks can also support secure quantum cryptography. Today, these networks are in the very early stages of development and use the entangled quantum states of photons to transmit information. Network performance is severely limited by decoherence, whereby the quantum information held by photons is degraded as they travel long distances. As a result, effective networks need repeater nodes that receive and then amplify weakened quantum signals.

“To address these limitations, researchers have focused on developing quantum memories capable of reliably storing entangled states to enable quantum repeater operations over extended distances,” Gangloff explains. “Various quantum systems are being explored, with semiconductor quantum dots being the best single-photon generators delivering both photon coherence and brightness.”

Single-photon emission

Quantum dots are widely used for their ability to emit single photons at specific wavelengths. These photons are created by electronic transitions in quantum dots and are ideal for encoding and transmitting quantum information.

However, the electronic spin states of quantum dots are not particularly good at storing quantum information for long enough to be useful as stationary qubits (or nodes) in a quantum network. This is because they contain hundreds or thousands of nuclei with spins that fluctuate. The noise generated by these fluctuations causes the decoherence of qubits based on electronic spin states.

In their previous research, Gangloff and Atatüre’s team showed how this noise could be controlled by sensing how it interacts with the electronic spin states.

Atatüre says, “Building on our previous achievements, we suppressed random fluctuations in the nuclear ensemble using a quantum feedback algorithm. This is already very useful as it dramatically improves the electron spin qubit performance.”

Magnon excitation

Now, working with a gallium arsenide quantum dot, the team has used the feedback algorithm to stabilize 13,000 nuclear spin states in a collective, entangled “dark state”. This is a stable quantum state that cannot absorb or emit photons. By introducing just a single nuclear magnon (spin flip) excitation, shared across all 13,000 nuclei, they could then flip the entire ensemble between two different collective quantum states.
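For intuition, a single magnon shared across an ensemble of N spins is often written as a W-like superposition in which the one flipped spin is delocalized over every site – a textbook form given here only as an illustration; the dark state actually prepared in the experiment differs in its details:

|1_{\text{magnon}}\rangle = \frac{1}{\sqrt{N}} \sum_{k=1}^{N} |\!\downarrow_1 \downarrow_2 \cdots \uparrow_k \cdots \downarrow_N\rangle, \qquad N \approx 13\,000.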

Each of these collective states could respectively be defined as a 0 and a 1 in a binary quantum logic system. The team then showed how quantum information could be exchanged between the nuclear system and the quantum dot’s electronic qubit with a fidelity of about 70%.

“The quantum memory maintained the stored state for approximately 130 µs, validating the effectiveness of our protocol,” Gangloff explains. “We also identified unambiguously the factors limiting the current fidelity and storage time, including crosstalk between nuclear modes and optically induced spin relaxation.”

The researchers are hopeful that their approach could transform one of the biggest limitations to quantum dot-based communication networks into a significant advantage.

“By integrating a multi-qubit register with quantum dots – the brightest and already commercially available single-photon sources – we elevate these devices to a much higher technology readiness level,” Atatüre explains.

With some further improvements to their system’s fidelity, the researchers are now confident that it could be used to strengthen interactions between quantum dot qubits and the photonic states they produce, ultimately leading to longer coherence times in quantum communication networks. Elsewhere, it could even be used to explore new quantum phenomena, and gather new insights into the intricate dynamics of quantum many-body systems.

The research is described in Nature Physics.

Say hi to Quinnie – the official mascot of the International Year of Quantum Science and Technology

Whether it’s the Olympics or the FIFA World Cup, all big global events need a cheeky, fun mascot. So welcome to Quinnie – the official mascot for the International Year of Quantum Science and Technology (IYQ) 2025.

Unveiled at the launch of the IYQ at the headquarters of UNESCO in Paris on 4 February, Quinnie has been drawn by Jorge Cham, the creator of the long-running cartoon strip PHD Comics.

Quinnie was developed for UNESCO in a collaboration between Cham and Physics Magazine, which is published by the American Physical Society (APS) – one of the founding partners of IYQ.

Image of Quinnie, the mascot for the International Year of Quantum Science and Technology
Riding high Quinnie surfing on a quantum wave function. (Courtesy: Jorge Cham)

“Quinnie represents a young generation approaching quantum science with passion, ingenuity, and energy,” says Physics editor Matteo Rini. “We imagine her effortlessly surfing on quantum-mechanical wave functions and playfully engaging with the knottiest quantum ideas, from entanglement to duality.”

Quinnie is set to appear in a series of animated cartoons that the APS will release throughout the year.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

New class of quasiparticle appears in bilayer graphene

A newly-discovered class of quasiparticles known as fractional excitons offers fresh opportunities for condensed-matter research and could reveal unprecedented quantum phases, say physicists at Brown University in the US. The new quasiparticles, which are neither bosons nor fermions and carry no charge, could have applications in quantum computing and sensing, they say.

In our everyday, three-dimensional world, particles are classified as either fermions or bosons. Fermions such as electrons follow the Pauli exclusion principle, which prevents them from occupying the same quantum state. This property underpins phenomena like the structure of atoms and the behaviour of metals and insulators. Bosons, on the other hand, can occupy the same state, allowing for effects like superconductivity and superfluidity.

Fractional excitons defy this traditional classification, says Jia Leo Li, who led the research. Their properties lie somewhere in between those of fermions and bosons, making them more akin to anyons, which are particles that exist only in two-dimensional systems. But that’s only one aspect of their unusual nature, Li adds. “Unlike typical anyons, which carry a fractional charge of an electron, fractional excitons are neutral particles, representing a distinct type of quantum entity,” he says.

The experiment

Li and colleagues created the fractional excitons using two sheets of graphene – a form of carbon just one atom thick – separated by a layer of another two-dimensional material, hexagonal boron nitride. This layered setup allowed them to precisely control the movement of electrons and positively-charged “holes” and thus to generate excitons, which are pairs of electrons and holes that behave like single particles.

The team then applied a 12 T magnetic field to their bilayer structure. This strong field caused the electrons in the graphene to split into fractional charges – a well-known phenomenon that occurs in the fractional quantum Hall effect. “Here, strong magnetic fields create Landau electronic levels that induce particles with fractional charges,” Li explains. “The bilayer structure facilitates pairing between these positive and negative charges, making fractional excitons possible.”
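For context, the regime in which such fractional charges appear is usually characterized by the Landau-level filling factor (a standard definition rather than a detail from the paper),

\nu = \frac{n_e h}{eB},

the ratio of the electron sheet density n_e to the density of magnetic flux quanta eB/h; fractional quantum Hall physics emerges when ν takes particular fractional values.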

“Distinct from any known particles”

The fractional excitons represent a quantum system of neutral particles that obey fractional quantum statistics, interact via dipolar forces and are distinct from any known particles, Li tells Physics World. He adds that his team’s study, which is detailed in Nature, builds on prior works that predicted the existence of excitons in the fractional quantum Hall effect (see, for example, Nature Physics 13, 751 (2017); Nature Physics 15, 898–903 (2019); and Science 375, 205–209 (2022)).

The researchers now plan to explore the properties of fractional excitons further. “Our key objectives include measuring the fractional charge of the constituent particles and confirming their anyonic statistics,” Li explains. Studies of this nature could shed light on how fractional excitons interact and flow, potentially revealing new quantum phases, he adds.

“Such insights could have profound implications for quantum technologies, including ultra-sensitive sensors and robust quantum computing platforms,” Li says. “As research progresses, fractional excitons may redefine the boundaries of condensed-matter physics and applied quantum science.”

European Space Agency’s Euclid mission spots spectacular Einstein ring

The European Space Agency (ESA) has released a spectacular image of an Einstein ring – a circle of light formed around a galaxy by gravitational lensing. Taken by the €1.4bn Euclid mission, the ring is a result of the gravitational effects of a galaxy located around 590 million light-years from Earth.

Euclid was launched in July 2023 and is currently located in a spot in space called Lagrange Point 2 – a gravitational balance point some 1.5 million kilometres beyond the Earth’s orbit around the Sun. Euclid has a 1.2 m-diameter telescope, a camera and a spectrometer that it uses to plot a 3D map of the distribution of more than two billion galaxies. The images it takes are about four times as sharp as those from current ground-based telescopes.

Einstein’s general theory of relativity predicts that light will bend around objects in space, so that they focus the light like a giant lens. This gravitational lensing effect is bigger for more massive objects and means we can sometimes see the light from distant galaxies that would otherwise be hidden.

Yet if the alignment is just right, the light from the distant source galaxy bends to form a spectacular ring around the foreground object. In this case, the mass of galaxy NGC 6505 is bending and magnifying the light from a more distant galaxy, which is about 4.42 billion light-years away, into a ring.
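For a compact lens, the angular radius of such a ring – the Einstein radius – is given by the standard lensing expression (quoted here for context; modelling a real galaxy such as NGC 6505 is more involved):

\theta_E = \sqrt{\frac{4GM}{c^2}\,\frac{D_{LS}}{D_L D_S}},

where M is the lens mass, D_L and D_S are the (angular-diameter) distances to the lens and the source, and D_{LS} is the distance between them.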

Studying such rings can shed light on the expansion of the universe as well as the nature of dark matter.

Euclid’s first science results were released in May 2024, following its first shots of the cosmos in November 2023. Hints of the ring were first spotted in September 2023 when Euclid was being tested, with follow-up measurements now revealing it in exquisite detail.

Quantum simulators deliver surprising insights into magnetic phase transitions

Unexpected behaviour at phase transitions between classical and quantum magnetism has been observed in different quantum simulators operated by two independent groups. One investigation was led by researchers at Harvard University and used Rydberg atoms as quantum bits (qubits). The other study was led by scientists at Google Research and involved superconducting qubits. Both projects revealed deviations from the canonical mechanisms of magnetic freezing, with unexpected oscillations near the phase transition.

A classical magnetic material can be understood as a fluid mixture of magnetic domains that are oriented in opposite directions, with the domain walls in constant motion. As a strengthening magnetic field is applied to the system, the energy associated with a domain wall increases, so the magnetic domains themselves become larger and less mobile. At some point, when the magnetism becomes sufficiently strong, a quantum phase transition occurs, causing the magnetism of the material to become fixed and crystalline: “A good analogy is like water freezing,” says Mikhail Lukin of Harvard University.

The traditional quantitative model for these transitions is the Kibble–Zurek mechanism, which was first formulated to describe cosmological phase transitions in the early universe. It predicts that the dynamics of a system begin to “freeze” when the system gets so close to the transition point that the domains crystallize more quickly than they can come to equilibrium.
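In its textbook form (stated here for context; neither experiment is limited to this idealization), the Kibble–Zurek mechanism predicts that sweeping through the critical point over a quench time τ_Q leaves domains whose typical size grows only as a power of the quench time,

\hat{\xi} \sim \tau_Q^{\,\nu/(1+z\nu)}, \qquad \hat{t} \sim \tau_Q^{\,z\nu/(1+z\nu)},

where ν and z are the equilibrium correlation-length and dynamical critical exponents and \hat{t} is the “freeze-out” time at which the domain pattern effectively stops evolving.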

“There are some very good theories of various types of quantum phase transitions that have been developed,” says Lukin, “but typically these theories make some approximations. In many cases they’re fantastic approximations that allow you to get very good results, but they make some assumptions which may or may not be correct.”

Highly reconfigurable platform

In their work, Lukin and colleagues used a highly reconfigurable platform based on Rydberg-atom qubits. The system was pioneered by Lukin and others in 2016 to study a specific type of magnetic quantum phase transition in detail. They used a laser to simulate the effect of a magnetic field on the Rydberg atoms, and adjusted the laser frequency to tune the field strength.

The researchers found that, rather than simply becoming progressively larger and less mobile as the field strength increased (a phenomenon called coarsening), the domain sizes underwent unexpected oscillations around the phase transition.

“We were really quite puzzled,” says Lukin. “Eventually we figured out that this oscillation is a sign of a special type of excitation mode similar to the Higgs mode in high-energy physics. This is something we did not anticipate…That’s an example where doing quantum simulations on quantum devices really can lead to new discoveries.”

Meanwhile, the Google-led study used a new approach to quantum simulation with superconducting qubits. Such qubits have proved extremely successful and scalable because they use solid-state technology – and they are used in most of the world’s leading commercial quantum computers, such as IBM’s Osprey and Google’s own Willow chips. Much of the previous work using such chips, however, has focused on sequential “digital” quantum logic in which one set of gates is activated only after the previous set has concluded. The long times needed for such calculations allow the effects of noise to accumulate, resulting in computational errors.

Hybrid approach

In the new work, the Google team developed a hybrid analogue–digital approach in which a digital universal quantum gate set was used to prepare well-defined input qubit states. They then switched the processor to analogue mode, using capacitive couplers to tune the interactions between the qubits. In this mode, all the qubits were allowed to operate on each other simultaneously, without the quantum logic being shoehorned into a linear set of gate operations. Finally, the researchers characterized the output by switching back to digital mode.
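
As a rough illustration of this three-stage workflow – digital preparation, simultaneous analogue evolution, digital readout – here is a short Python sketch of a toy three-qubit version using NumPy and SciPy. The transverse-field Ising Hamiltonian, the coupling values and the register size are illustrative assumptions for this sketch only; it is not the team’s code, nor a model of Google’s processor.

import numpy as np
from scipy.linalg import expm

# Single-qubit operators
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H_gate = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def op_on(op, site, n):
    """Embed a single-qubit operator acting on `site` of an n-qubit register."""
    out = np.array([[1.0 + 0.0j]])
    for k in range(n):
        out = np.kron(out, op if k == site else I2)
    return out

n = 3  # toy register size (illustrative)

# Digital stage: prepare |+++> by applying a Hadamard to each qubit of |000>
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0
for q in range(n):
    state = op_on(H_gate, q, n) @ state

# Analogue stage: evolve under a transverse-field Ising Hamiltonian,
# H = -J * sum_i Z_i Z_{i+1} - g * sum_i X_i, with every term acting at once
J, g, t = 1.0, 0.7, 2.0  # illustrative couplings and evolution time
H = np.zeros((2**n, 2**n), dtype=complex)
for i in range(n - 1):
    H -= J * op_on(Z, i, n) @ op_on(Z, i + 1, n)
for i in range(n):
    H -= g * op_on(X, i, n)
state = expm(-1j * H * t) @ state

# Digital stage: read out probabilities in the computational basis
probs = np.abs(state) ** 2
for idx, p in enumerate(probs):
    print(f"|{idx:0{n}b}>: {p:.3f}")

The point of the middle block is that all the couplings act simultaneously through a single matrix exponential, rather than being decomposed into successive layers of discrete gates.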

The researchers used a 69-qubit superconducting system to simulate a similar, but non-identical, magnetic quantum phase transition to that studied by Lukin’s group. They were also puzzled by similar unexpected behaviour in their system. The groups subsequently became aware of each other’s work, as Google Research’s Trond Anderson explains: “It’s very exciting to see consistent observations from the Lukin group. This not only provides supporting evidence, but also demonstrates that the phenomenon appears in several contexts, making it extra important to understand”.

Both groups are now seeking to push their research deeper into the exploration of complex many-body quantum physics. The Google group estimates that simulating the highly entangled quantum states involved, with the same level of fidelity achieved in the experiment, would take the US Department of Energy’s Frontier supercomputer – one of the world’s most powerful – more than a million years. The researchers now want to look at problems that are completely intractable classically, such as magnetic frustration. “The analogue–digital approach really combines the best of both worlds, and we’re very excited about this as a new promising direction towards making discoveries in systems that are too complex for classical computers,” says Anderson.

The Harvard researchers are also looking to push their system to study more and more complex quantum systems. “There are many interesting processes where dynamics – especially across a quantum phase transition – remains poorly understood,” says Lukin. “And it ranges from the science of complex quantum materials to systems in high-energy physics such as lattice gauge theories, which are notorious for being hard to simulate classically to the point where people literally give up…We want to apply these kinds of simulators to real open quantum problems and really use them to study the dynamics of these systems.”

The research is described in side-by-side papers in Nature. The Google paper is here and the Harvard paper here.

The post Quantum simulators deliver surprising insights into magnetic phase transitions appeared first on Physics World.

  •  

Supermassive black hole displays ‘unprecedented’ X-ray outbursts

An international team of researchers has detected a series of significant X-ray oscillations near the innermost orbit of a supermassive black hole – an unprecedented discovery that could indicate the presence of a nearby stellar-mass orbiter such as a white dwarf.

Optical outburst

The Massachusetts Institute of Technology (MIT)-led team began studying the extreme supermassive black hole 1ES 1927+654 – located around 270 million light-years away and about a million times more massive than the Sun – in 2018, when it brightened by a factor of around 100 at optical wavelengths. Shortly after this optical outburst, X-ray monitoring revealed a period of dramatic variability as X-rays dropped rapidly – at first becoming undetectable for about a month, before returning with a vengeance and making 1ES 1927+654 the brightest supermassive black hole in the X-ray sky.

“All of this dramatic variability seemed to be over by 2021, as the source appeared to have returned to its pre-2018 state. However, luckily, we continued to watch this source, having learned the lesson that this supermassive black hole will always surprise us. The discovery of these millihertz oscillations was indeed quite a surprise, but it gives us a direct probe of regions very close to the supermassive black hole,” says Megan Masterson, a fifth-year PhD candidate at the MIT Kavli Institute for Astrophysics and Space Research, who co-led the study with MIT’s Erin Kara – alongside researchers based elsewhere in the US, as well as at institutions in Chile, China, Israel, Italy, Spain and the UK.

“We found that the period of these oscillations rapidly changed – dropping from around 18 minutes in 2022 to around seven minutes in 2024. This period evolution is unprecedented, having never been seen before in the small handful of other supermassive black holes that show similar oscillatory behaviour,” she adds.
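
For reference, converting those periods into frequencies shows why the signal is described as millihertz oscillations: $1/(18\times 60\ \mathrm{s}) \approx 0.9$ mHz in 2022, rising to $1/(7\times 60\ \mathrm{s}) \approx 2.4$ mHz in 2024.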

White dwarf

According to Masterson, one of the key ideas behind the study was that the rapid X-ray period change could be driven by a white dwarf – the compact remnant of a star like our Sun – orbiting around the supermassive black hole close to its event horizon.

“If this white dwarf is driving these oscillations, it should produce a gravitational wave signal that will be detectable with next-generation gravitational wave observatories, like ESA’s Laser Interferometer Space Antenna (LISA),” she says.
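
As a back-of-the-envelope estimate (not a figure from the study), if the X-ray period traces a roughly circular orbit, the dominant gravitational-wave emission would emerge at twice the orbital frequency, $f_{\mathrm{GW}} = 2/P \approx 2$–$5$ mHz for periods of 18 down to 7 minutes – squarely in the millihertz band where LISA is designed to be most sensitive.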

To test their hypothesis, the researchers used X-ray data from ESA’s XMM-Newton observatory to detect the oscillations, which allowed them to track how the X-ray brightness changed over time. The findings were presented in mid-January at the 245th meeting of the American Astronomical Society in National Harbor, Maryland, and subsequently reported in Nature.

According to Masterson, these insights into the behaviour of X-rays near a black hole will have major implications for future efforts to detect multi-messenger signals from supermassive black holes.

“We really don’t understand how common stellar-mass companions around supermassive black holes are, but these findings tell us that it may be possible for stellar-mass objects to survive very close to supermassive black holes and produce gravitational wave signals that will be detected with the next-generation gravitational wave observatories,” she says.

Looking ahead, Masterson confirms that the immediate next step for MIT research in this area is to continue to monitor 1ES 1927+654 – with both existing and future telescopes – in an effort to deepen understanding of the extreme physics at play in and around the innermost environments of black holes.

“We’ve learned from this discovery that we should expect the unexpected with this source,” she adds. “We’re also hoping to find other sources like this one through large time-domain surveys and dedicated X-ray follow-up of interesting transients.”

The post Supermassive black hole displays ‘unprecedented’ X-ray outbursts appeared first on Physics World.

  •  

How the changing environment affects solar-panel efficiency: the Indian perspective

This episode of the Physics World Weekly podcast looks at how climate and environmental change affect the efficiency of solar panels. Our guest is the climate scientist Sushovan Ghosh, who is lead author of a paper exploring how aerosols, rising temperatures and other environmental factors will affect solar-energy output in India in the coming decades.

Today, India ranks fifth amongst nations in terms of installed solar-energy capacity, and boosting this capacity will be crucial to the country’s drive to reduce the emissions intensity of its economy by 45% by 2030, compared with 2005 levels.

While much of India is blessed with abundant sunshine, it is experiencing a persistent decline in incoming solar radiation that is associated with aerosol pollution. What is more, higher temperatures associated with climate change reduce the efficiency of solar cells – and their performance is also impacted in India by other climate-related phenomena.
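
The temperature penalty is usually captured by a simple linear model (a generic textbook relation, quoted here for context and not necessarily the one used in the paper):

$$ \eta(T_{\mathrm{cell}}) \approx \eta_{\mathrm{ref}}\left[1 - \beta\,(T_{\mathrm{cell}} - 25\,^{\circ}\mathrm{C})\right] $$

where $\eta_{\mathrm{ref}}$ is the module efficiency at the 25 °C reference temperature and $\beta$ is the temperature coefficient, typically around 0.4–0.5% per °C for crystalline silicon. A module running 20 °C above the reference therefore loses roughly 8–10% of its output.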

In this podcast, Ghosh explains how changes in the climate and environment affect the generation of solar energy and what can be done to mitigate these effects.

Ghosh co-wrote the paper when at the Centre for Atmospheric Sciences at the Indian Institute of Technology Delhi, and he is now at the Barcelona Supercomputing Center in Spain. His co-authors in Delhi were Dilip Ganguly, Sagnik Dey and Subhojit Ghoshal Chowdhury, and the paper is called “Future photovoltaic potential in India: navigating the interplay between air pollution control and climate change mitigation”. It appears in Environmental Research Letters, which is published by IOP Publishing – the organization that also brings you Physics World.

The post How the changing environment affects solar-panel efficiency: the Indian perspective appeared first on Physics World.

  •  

Two-faced graphene nanoribbons could make the first purely carbon-based ferromagnets

A new graphene nanostructure could become the basis for the first ferromagnets made purely from carbon. Known as an asymmetric or “Janus” graphene nanoribbon after the two-faced god in Roman mythology, the structure has two edges with different properties, one of which takes a zigzag form. Lu Jiong, a researcher at the National University of Singapore (NUS) who co-led the effort to make the structure, explains that it is this zigzag edge that gives rise to the ferromagnetic state, making the structure the first of its kind.

“The work is the first demonstration of the concept of a Janus graphene nanoribbon (JGNR) strand featuring a single ferromagnetic zigzag edge,” Lu says.

Graphene nanostructures with zigzag-shaped edges show much promise for technological applications thanks to their electronic and magnetic properties. Zigzag graphene nanoribbons (ZGNRs) are especially appealing because the behaviour of their electrons can be tuned from metal-like to semiconducting by adjusting the length or width of the ribbons; modifying the structure of their edges; or doping them with non-carbon atoms. The same techniques can also be used to make such materials magnetic. This versatility means they can be used as building blocks for numerous applications, including quantum and spintronics technologies.

Previously, only two types of symmetric ZGNRs had been synthesized via on-surface chemistry: 6-ZGNR and nitrogen-doped 6-ZGNR, where the “6” refers to the number of carbon rows across the nanoribbon’s width. In the latest work, Lu and co-team leaders Hiroshi Sakaguchi of the University of Kyoto, Japan, and Steven Louie of the University of California, Berkeley, US, sought to expand this list.

“It has been a long-sought goal to make other forms of zigzag-edge related GNRs with exotic quantum magnetic states for studying new science and developing new applications,” says team member Song Shaotang, the first author of a paper in Nature about the research.

ZGNRs with asymmetric edges

Building on topological classification theory developed in previous research by Louie and colleagues, theorists in the Singapore-Japan-US collaboration predicted that it should be possible to tune the magnetic properties of these structures by making ZGNRs with asymmetric edges. “These nanoribbons have one pristine zigzag edge and another edge decorated with a pattern of topological defects spaced by a certain number m of missing motifs,” Louie explains. “Our experimental team members, using innovative z-shaped precursor molecules for synthesis, were able to make two kinds of such ZGNRs. Both of these have one edge that supports a benzene motif array with a spacing of m = 2 missing benzene rings in between. The other edge is a conventional zigzag edge.”

Crucially, the theory predicted that the magnetic behaviour – ranging from antiferromagnetism to ferrimagnetism to ferromagnetism – of these JGNRs could be controlled by varying the value of m. In particular, says Louie, the configuration of m = 2 is predicted to show ferromagnetism – that is, all electron spins aligned in the same direction – concentrated entirely on the pristine zigzag edge. This behaviour contrasts sharply with that of symmetric ZGNRs, where spin polarization occurs on both edges and the aligned edge spins are antiferromagnetically coupled across the width of the ribbon.

Precursor design and synthesis

To validate these theoretical predictions, the team synthesized JGNRs on a surface. They then used advanced scanning tunnelling microscope (STM) and atomic force microscope (AFM) measurements to visualize the materials’ exact real-space chemical structure. These measurements also revealed the emergence of exotic magnetic states in the JGNRs synthesized in Lu’s lab at the NUS.

Two sides: An atomic model of the Janus graphene nanoribbons (left) and a corresponding atomic force microscope image (right). (Courtesy: National University of Singapore)

Sakaguchi explains that, in the past, GNRs were mainly synthesized from symmetric precursor molecules, largely because asymmetric ones were so scarce. One of the challenges in this work, he notes, was to design asymmetric polymeric precursors that could undergo the essential fusion (dehydrogenation) process to form JGNRs. These molecules often orient randomly, so the researchers needed to use additional techniques to align them unidirectionally prior to the polymerization reaction. “Addressing this challenge in the future could allow us to produce JGNRs with a broader range of magnetic properties,” Sakaguchi says.

Towards carbon-based ferromagnets

According to Lu, the team’s research shows that JGNRs could become the first carbon-based spin transport channels to show ferromagnetism. They might even lead to the development of carbon-based ferromagnets, capping off a research effort that began in the 1980s.

However, Lu acknowledges that there is much work to do before these structures find real-world applications. For one, they are not currently very robust when exposed to air. “The next goal,” he says, “is to develop chemical modifications that will enhance the stability of these 1D structures so that they can survive under ambient conditions.”

A further goal, he continues, is to synthesize JGNRs with different values of m, as well as other classes of JGNRs with different types of defective edges. “We will also be exploring the 1D spin physics of these structures and [will] investigate their spin dynamics using techniques such as scanning tunnelling microscopy combined with electron spin resonance, paving the way for their potential applications in quantum technologies.”

The post Two-faced graphene nanoribbons could make the first purely carbon-based ferromagnets appeared first on Physics World.

  •  

Asteroid Bennu contains the stuff of life, sample analysis reveals

A sample of asteroid dirt brought back to Earth by NASA’s OSIRIS-REx mission contains amino acids and the nucleobases of RNA and DNA, plus the salty residues of brines that could have facilitated the formation of organic molecules, laboratory analyses including scanning electron microscopy have shown.

The 120 g of material was collected by OSIRIS-REx from the near-Earth asteroid 101955 Bennu in 2020. The findings “bolster the hypothesis that asteroids like Bennu could have delivered the raw ingredients to Earth prior to the emergence of life,” Dan Glavin of NASA’s Goddard Space Flight Center tells Physics World.

Bennu has an interesting history. It is 565 m across at its widest point and was once part of a much larger parent body, possibly 100 km in diameter, that was smashed apart in a collision in the Asteroid Belt between 730 million and 1.55 billion years ago. Bennu coalesced from the debris as a rubble pile that found itself in Earth’s vicinity.

The sample from Bennu was parachuted back to Earth in 2023 and shared among teams of researchers. Now two new papers, published in Nature and Nature Astronomy, reveal some of the findings from those teams.

Saltwater residue

In particular, researchers identified a diverse range of salt minerals, including sodium-bearing phosphates and carbonates, which precipitated from brines as liquid water on Bennu’s parent body either evaporated or froze.

Mineral-rich SEM images of trona (water-bearing sodium carbonate) found in Bennu samples. The needles form a vein through surrounding clay-rich rock, with small pieces of rock resting on top of the needles. (Courtesy: Rob Wardell, Tim Gooding and Tim McCoy, Smithsonian)

The liquid water would have been present on Bennu’s parent body during the dawn of the Solar System, in the first few million years after the planets began to form. Heat generated by the radioactive decay of aluminium-26 would have kept pockets of water liquid deep inside that body. The brines that this liquid water bequeathed would have played a role in kickstarting organic chemistry.

Tim McCoy, of the Smithsonian’s National Museum of Natural History and the lead author of the Nature paper, says that “brines play two important roles”.

One of those roles is producing the minerals that serve as templates for organic molecules. “As an example, brines precipitate phosphates that can serve as a template on which sugars needed for life are formed,” McCoy tells Physics World. The phosphate is like a pegboard with holes, and atoms can use those spaces to arrange themselves into sugar molecules.

The second role that brines can play is to then release the organic molecules that have formed on the minerals back into the brine, where they can combine with other organic molecules to form more complex compounds.

Ambidextrous amino acids

Meanwhile, the study reported in Nature Astronomy, led by Dan Glavin and Jason Dworkin of NASA’s Goddard Space Flight Center, focused on the detection of 14 of the 20 amino acids used by life to build proteins, deepening the mystery of why life only uses “left-handed” amino acids.

Amino acid molecules lack mirror symmetry – think of how, no matter how much you twist or turn your left hand, you will never be able to superimpose it on your right hand. As such, amino acids can randomly be either left- or right-handed, a property known as chirality.

However, for some reason that no one has been able to figure out yet, all life on Earth uses left-handed amino acids.

One hypothesis was that, thanks to some quirk, amino acids formed in space and brought to Earth in impacts had a bias towards left-handedness. This possibility now looks unlikely after Glavin and Dworkin’s team discovered that the amino acids in the Bennu sample are a mix of left- and right-handed forms, with no evidence that one is preferred over the other.

“So far we have not seen any evidence for a preferred chirality,” Glavin says. This goes for both the Bennu sample and a previous sample from the asteroid 162173 Ryugu, collected by Japan’s Hayabusa2 mission, which contained 23 different forms of amino acid. “For now, why life turned left on Earth remains a mystery.”

Taking a closer step to the origin of life

Another mystery is why the organic chemistry on Bennu’s parent body reached a certain point and then stopped. Why didn’t it form more complex organic molecules, or even life?

Near-Earth asteroid A mosaic image of Bennu, as observed by NASA’s OSIRIS-REx spacecraft. (Courtesy: NASA/Goddard/University of Arizona)

Amino acids are the building blocks of proteins. In turn, proteins are among the primary molecules of life, facilitating biological processes within cells. Nucleobases have also been identified in the Bennu sample, and while nucleobases strung along a sugar–phosphate backbone make up RNA and DNA, neither nucleic acid has yet been found in an extraterrestrial sample.

“Although the wet and salty conditions inside Bennu’s parent body provided an ideal environment for the formation of amino acids and nucleobases, it is not clear yet why more complex organic polymers did not evolve,” says Glavin.

Researchers are still looking for that complex chemistry. McCoy cites the 5-carbon sugar ribose, which is a component of RNA, as an essential organic molecule for life that scientists hope to one day find in an asteroid sample.

“But as you might imagine, as organic molecules increase in complexity, they decrease in number,” says McCoy, explaining that we will need to search ever larger amounts of asteroidal material before we might get lucky and find them.

The answers will ultimately help astrobiologists figure out where life began. Could proteins, RNA or even biological cells have formed in the early Solar System within objects such as Bennu’s parent planetesimal? Or did complex biochemistry begin only on Earth once the base materials had been delivered from space?

“What is becoming very clear is that the basic chemical building blocks of life could have been delivered to Earth, where further chemical evolution could have occurred in a habitable environment, including the origin of life itself,” says Glavin.

What’s really needed are more samples. China’s Tianwen-2 spacecraft is blasting off later this year on a mission to capture a 100 g sample from the small near-Earth asteroid 469219 Kamo‘oalewa. The findings are likely to be similar to those of OSIRIS-REx and Hayabusa2, but there’s always the chance that something more complex might be in that sample too. If and when those organic molecules are found, they will have huge repercussions for the origin of life on Earth.

The post Asteroid Bennu contains the stuff of life, sample analysis reveals appeared first on Physics World.

  •  

International quantum year launches in style at UNESCO headquarters in Paris

More than 800 researchers, policy makers and government officials from around the world gathered in Paris this week to attend the official launch of the International Year of Quantum Science and Technology (IYQ). Held at the headquarters of the United Nations Educational, Scientific and Cultural Organisation (UNESCO), the two-day event included contributions from four Nobel prize-winning physicists – Alain Aspect, Serge Haroche, Anne l’Huillier and William Phillips.

Opening remarks came from Cephas Adjej Mensah, a research director in the Ghanaian government, which last year submitted the draft resolution to the United Nations for 2025 to be proclaimed as the IYQ. “Let us commit to making quantum science accessible to all,” Mensah declared, reminding delegates that the IYQ is intended to be a global initiative, spreading the benefits of quantum equitably around the world. “We can unleash the power of quantum science and technology to make an equitable and prosperous future for all.”

The keynote address was given by l’Huillier, a quantum physicist at Lund University in Sweden, who shared the 2023 Nobel Prize for Physics with Pierre Agostini and Ferenc Krausz for their work on attosecond pulses. “Quantum mechanics has been extremely successful,” she said, explaining how it was invented 100 years ago by Werner Heisenberg on the island of Helgoland. “It has led to new science and new technology – and it’s just the beginning.”

Let’s go Stephanie Simmons, chief quantum officer at Photonic and co-chair of Canada’s National Quantum Strategy advisory council, speaking at the IYQ launch in Paris. (Courtesy: Matin Durrani)

Some of that promise was outlined by Phillips in his plenary lecture. The first quantum revolution led to lasers, semiconductors and transistors, he reminded participants, but said that the second quantum revolution promises more by exploiting effects such as quantum entanglement and superposition – even if its potential can be hard to grasp. “It’s not that there’s something deeply wrong with quantum mechanics – it’s that there’s something deeply wrong with our ability to understand it,” Phillips explained.

The benefits of quantum technology to society were echoed by leading Chinese quantum physicist Jian-Wei Pan of the University of Science and Technology of China in Hefei. “The second quantum revolution will likely provide another human leap in human civilization,” said Pan, who was not at the meeting, in a pre-recorded video statement. “Sustainable funding from government and private sector is essential. Intensive and proactive international co-operation and exchange will undoubtedly accelerate the benefit of quantum information to all of humanity.”

Leaders of the burgeoning quantum tech sector were in Paris too. Addressing the challenges and opportunities of scaling quantum technologies to practical use was a panel made up of Quantinuum chief executive Rajeeb Hazra, QuEra president Takuya Kitawawa, IBM’s quantum-algorithms vice president Katie Pizzoalato, ID Quantique boss Grégoire Ribordy and Microsoft technical fellow Krysta Svore. Also present was Alexander Ling from the National University of Singapore, co-founder of two hi-tech start-ups.

“We cannot imagine what weird and wonderful things quantum mechanics will lead to, but you can be sure it’ll be marvellous,” said Celia Merzbacher, executive director of the Quantum Economic Development Consortium (QED-C), who chaired the session. All panellists stressed the importance of having a supply of talented quantum scientists and engineers if the industry is to succeed. Hazra also underlined that new products based on “quantum 2.0” technology had to be developed with – and to serve the needs of – users if they are to turn a profit.

The ethical challenges of quantum advancements were also examined in a special panel, as was the need for responsible quantum innovation to avoid a “digital divide” where quantum technology benefits some parts of society but not others. “Quantum science should elevate human dignity and human potential,” said Diederick Croese, a lawyer and director of the Centre for Quantum and Society at Quantum Delta NL in the Netherlands.

Science in action German artist Robin Baumgarten explains the physics behind his Quantum Jungle art installation. (Courtesy: Matin Durrani)

The cultural impact of quantum science and technology was not forgotten in Paris either. Delegates flocked to an art installation created by Berlin-based artist and game developer Robin Baumgarten. Dubbed Quantum Jungle, it attempts to “visualize quantum physics in a playful yet scientifically accurate manner” by using an array of lights controlled by flickable, bendy metal door stops. Baumgarten claims it is a “mathematically accurate model of a quantum object”, with the brightness of each ring being proportional to the chance of an object being there.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

The post International quantum year launches in style at UNESCO headquarters in Paris appeared first on Physics World.

  •  

Spacewoman: trailblazing astronaut Eileen Collins makes for a compelling and thoughtful documentary subject

“What makes a good astronaut?” asks director Hannah Berryman in the opening scene of Spacewoman. It’s a question few can answer better than Eileen Collins. As the first woman to pilot and command a NASA Space Shuttle, her career was marked by historic milestones, extraordinary challenges and personal sacrifices. Collins looks down the lens of the camera and, as she pauses for thought, we cut to footage of her being suited up in astronaut gear for the third time. “I would say…a person who is not prone to panicking.”

In Spacewoman, Berryman crafts a thoughtful, emotionally resonant documentary that traces Collins’s life from a determined young girl in Elmira, New York, to a spaceflight pioneer.

The film’s strength lies in its compelling balance of personal narrative and technical achievement. Through intimate interviews with Collins, her family and former colleagues, alongside a wealth of archival footage, Spacewoman paints a vivid portrait of a woman whose journey was anything but straightforward. From growing up in a working-class family affected by her parents’ divorce and Hurricane Agnes’s destruction, to excelling in the male-dominated world of aviation and space exploration, Collins’s resilience shines through.

Berryman wisely centres the film on the four key missions that defined Collins’s time at NASA. While this approach necessitates a brisk overview of her early military career, it allows for an in-depth exploration of the stakes, risks and triumphs of spaceflight. Collins’s pioneering 1995 mission, STS-63, saw her pilot the Space Shuttle Discovery in the first rendezvous with the Russian space station Mir, a mission fraught with political and technical challenges. The archival footage from this and subsequent missions provides gripping, edge-of-your-seat moments that demonstrate both the precision and unpredictability of space travel.

Perhaps Spacewoman’s most affecting thread is its examination of how Collins’s career intersected with her family life. Her daughter, Bridget, born shortly after her first mission, offers a poignant perspective on growing up with a mother whose job carried life-threatening risks. In one of the film’s most emotionally charged scenes, Collins recounts explaining the Challenger disaster to a young Bridget. Despite her mother’s assurances that NASA had learned from that tragedy, the Columbia disaster just two weeks after that conversation underscores the constant shadow of danger inherent in space exploration.

These deeply personal reflections elevate Spacewoman beyond a straightforward biographical documentary. Collins’s son Luke, though younger and less directly affected by his mother’s missions, also shares touching memories, offering a fuller picture of a family shaped by space exploration’s highs and lows. Berryman’s thoughtful editing intertwines these recollections with historic footage, making the stakes feel immediate and profoundly human.

The film’s tension peaks during Collins’s final mission, STS-114, the first “return to flight” after Columbia. As the mission teeters on the brink of disaster due to familiar technical issues, Berryman builds a heart-pounding narrative, even for viewers unfamiliar with the complexities of spaceflight. Without getting bogged down in technical jargon, she captures the intense pressure of a mission fraught with tension – for those on Earth, at least.

Berryman’s previous films include Miss World 1970: Beauty Queens and Bedlam, and Banned, the Mary Whitehouse Story. In a recent episode of the Physics World Stories podcast, she told me that she was inspired to make the film after reading Collins’s autobiography Through the Glass Ceiling to the Stars. “It was so personal,” she said, “it took me into space and I thought maybe we could do that with the viewer.” Collins herself joined us for that podcast episode and I found her to be that same calm, centred, thoughtful person we see in the film – and who NASA clearly very carefully chose to command such an important mission.

Spacewoman isn’t just about near-misses and peril. It also celebrates moments of wonder: Collins describing her first sunrise from space or recalling the chocolate shuttles she brought as gifts for the Mir cosmonauts. These light-hearted anecdotes reveal her deep appreciation for the unique experience of being an astronaut. On the podcast, I asked Collins what one lesson she would bring from space to life on Earth. After her customary moment’s pause for thought, she replied “Reading books about science fiction is very important.” She was a fan of science fiction in her younger years, which enabled her to dream of the future that she realized at NASA and in space. But, she told me, these days she also reads about real science of the future (she was deep into a book on artificial intelligence when we spoke) and history too. Looking back at Collins’s history in space certainly holds lessons for us all.

Berryman’s directorial focus ultimately circles back to a profound question: how much risk is acceptable in the pursuit of human progress? Spacewoman suggests that those committed to something greater than themselves are willing to risk everything. Collins’s career embodies this ethos, defined by an unshakeable resolve, even in the face of overwhelming odds.

In the film’s closing moments, we see Collins speaking to a wide-eyed girl at a book signing. The voiceover from interviews talks of the women slated to be instrumental in humanity’s return to the Moon and future missions to Mars. If there’s one thing I would change about the film, it’s that the final word is given to someone other than Collins. The message is a fitting summation of her life and legacy, but I would like to have seen it delivered with the understated confidence of someone who has lived it. It’s a quibble, though, in a compelling film that I would recommend to anyone with an interest in space travel or the human experience here on Earth.

When someone as accomplished as Collins says that you need to work hard and practise, practise, practise, it has a gravitas few others can muster. After all, she spent 10 years practising to fly the Space Shuttle – and got to do it for real twice. We see Collins speak directly to the wide-eyed girl in a flight suit as she signs her book and, as she does so, you can feel the words really hit home precisely because of who says them: “Reach for the stars. Don’t give up. Keep trying because you can do it.”

Spacewoman is more than a tribute to a trailblazer; it’s a testament to human perseverance, curiosity and courage. In Collins’s story, Berryman finds a gripping, deeply personal narrative that will resonate with audiences across the planet.

  • Spacewoman premiered at DOC NYC in November 2024 and is scheduled for theatrical release in 2025. A Haviland Digital Film in association with Tigerlily Productions.

The post Spacewoman: trailblazing astronaut Eileen Collins makes for a compelling and thoughtful documentary subject appeared first on Physics World.

  •