Earth’s core could contain lots of primordial helium, experiments suggest

13 March 2025, 09:29

Helium deep within the Earth could bond with iron to form stable compounds, according to experiments done by scientists in Japan and Taiwan. The work was done by Haruki Takezawa and Kei Hirose at the University of Tokyo and colleagues, who suggest that Earth’s core could host a vast reservoir of primordial helium-3 – reshaping our understanding of the planet’s interior.

Noble gases including helium are normally chemically inert. But under extreme pressures, heavier members of the group (including xenon and krypton) can form a variety of compounds with other elements. To date, however, less is known about compounds containing helium – the lightest noble gas.

Beyond the synthesis of disodium helide (Na2He) in 2016, and a handful of molecules in which helium forms weak van der Waals bonds with other atoms, the existence of other helium compounds has remained purely theoretical.

As a result, the conventional view is that any primordial helium-3 present when our planet first formed would have quickly diffused through Earth’s interior, before escaping into the atmosphere and then into space.

Tantalizing clues

However, there are tantalizing clues that helium compounds could exist in some volcanic rocks on Earth’s surface. These rocks contain unusually high isotopic ratios of helium-3 to helium-4. “Unlike helium-4, which is produced through radioactivity, helium-3 is primordial and not produced in planetary interiors,” explains Hirose. “Based on volcanic rock measurements, helium-3 is known to be enriched in hot magma, which originally derives from hot plumes coming from deep within Earth’s mantle.” The mantle is the region between Earth’s core and crust.

The fact that the isotope can still be found in rock and magma suggests that it must have somehow become trapped in the Earth. “This argument suggests that helium-3 was incorporated into the iron-rich core during Earth’s formation, some of which leaked from the core to the mantle,” Hirose explains.

It could be that the extreme pressures present in Earth’s iron-rich core enabled primordial helium-3 to bond with iron to form stable molecular lattices. To date, however, this possibility has never been explored experimentally.

Now, Takezawa, Hirose and colleagues have triggered reactions between iron and helium within a laser-heated diamond-anvil cell. Such cells crush small samples to extreme pressures – in this case as high as 54 GPa. While this is less than the pressure in the core (about 350 GPa), the reactions created molecular lattices of iron and helium. These structures remained stable even when the diamond-anvil’s extreme pressure was released.

To determine the molecular structures of the compounds, the researchers did X-ray diffraction experiments at Japan’s SPring-8 synchrotron. The team also used secondary ion mass spectrometry to determine the concentration of helium within their samples.

Synchrotron and mass spectrometer

“We also performed first-principles calculations to support experimental findings,” Hirose adds. “Our calculations also revealed a dynamically stable crystal structure, supporting our experimental findings.” Altogether, this combination of experiments and calculations showed that the reaction could form two distinct lattices (face-centred cubic and distorted hexagonal close packed), each with differing ratios of iron to helium atoms.
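
For context (standard crystallography rather than anything taken from the paper itself), powder X-ray diffraction identifies such lattices by matching the measured peak angles to Bragg’s law; for a cubic cell with lattice parameter a, the plane spacing d is fixed by the Miller indices (h, k, l):

$$ \lambda = 2d_{hkl}\sin\theta, \qquad d_{hkl} = \frac{a}{\sqrt{h^2 + k^2 + l^2}} $$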

These results suggest that similar reactions between helium and iron may have occurred within Earth’s core shortly after its formation, trapping much of the primordial helium-3 in the material that coalesced to form Earth. This would have created a vast reservoir of helium in the core, which is gradually making its way to the surface.

However, further experiments are needed to confirm this thesis. “For the next step, we need to see the partitioning of helium between iron in the core and silicate in the mantle under high temperatures and pressures,” Hirose explains.

Observing this partitioning would help rule out the lingering possibility that unbonded helium-3 could be more abundant than expected within the mantle – where it could be trapped by some other mechanism. Either way, further studies would improve our understanding of Earth’s interior composition – and could even tell us more about the gases present when the solar system formed.

The research is described in Physical Review Letters.

US science rues ongoing demotion of research under President Trump

12 March 2025, 15:30

Two months into Donald Trump’s second presidency, many parts of US science – across government, academia, and industry – continue to be hit hard by the new administration’s policies. Science-related government agencies are seeing budgets and staff cut, especially in programmes linked to climate change and diversity, equity and inclusion (DEI). Elon Musk’s Department of Government Efficiency (DOGE) is also causing havoc as it seeks to slash spending.

In mid-February, DOGE fired more than 300 employees at the National Nuclear Security Administration, which is part of the US Department of Energy, many of whom were responsible for reassembling nuclear warheads at the Pantex plant in Texas. A day later, the agency was forced to rescind all but 28 of the sackings amid concerns that their absence could jeopardise national security.

A judge has also reinstated workers who were laid off at the National Science Foundation (NSF) as well as at the Centers for Disease Control and Prevention. The judge said the government’s Office of Personnel Management, which sacked the staff, did not have the authority to do so. However, the NSF rehiring applies mainly to military veterans and staff with disabilities, with the overall workforce down by about 140 people – or roughly 10%.

The NSF has also announced a reduction, the size of which is unknown, in its Research Experiences for Undergraduates programme. Over the last 38 years, the initiative has given thousands of college students – many with backgrounds that are underrepresented in science – the opportunity to carry out original research at institutions during the summer holidays. NSF staff are also reviewing thousands of grants containing such words as “women” and “diversity”.

NASA, meanwhile, is to shut its office of technology, policy and strategy, along with its chief-scientist office, and the DEI and accessibility branch of its diversity and equal opportunity office. “I know this news is difficult and may affect us all differently,” admitted acting administrator Janet Petro in an all-staff e-mail. Affecting about 20 staff, the move is on top of plans to reduce NASA’s overall workforce. Reports also suggest that NASA’s science budget could be slashed by as much as 50%.

Hundreds of “probationary employees” have also been sacked by the National Oceanic and Atmospheric Administration (NOAA), which provides weather forecasts that are vital for farmers and people in areas threatened by tornadoes and hurricanes. “If there were to be large staffing reductions at NOAA there will be people who die in extreme weather events and weather-related disasters who would not have otherwise,” warns climate scientist Daniel Swain from the University of California, Los Angeles.

Climate concerns

In his first cabinet meeting on 26 February, Trump suggested that officials “use scalpels” when trimming their departments’ spending and personnel – rather than Musk’s figurative chainsaw. But bosses at the Environmental Protection Agency (EPA) still plan to cut its budget by about two-thirds. “[W]e fear that such cuts would render the agency incapable of protecting Americans from grave threats in our air, water, and land,” wrote former EPA administrators William Reilly, Christine Todd Whitman and Gina McCarthy in the New York Times.

The White House’s attack on climate science goes beyond just the EPA. In January, the US Department of Agriculture removed almost all data on climate change from its website. The action resulted in a lawsuit in March from the Northeast Organic Farming Association of New York and two non-profit organizations – the Natural Resources Defense Council and the Environmental Working Group. They say that the removal hinders research and “agricultural decisions”.

The Trump administration has also barred NASA’s now former chief scientist Katherine Calvin and members of the State Department from travelling to China for a planning meeting of the Intergovernmental Panel on Climate Change. Meanwhile, in a speech to African energy ministers in Washington on 7 March, US energy secretary Chris Wright claimed that coal has “transformed our world and made it better”, adding that climate change, while real, is not on his list of the world’s top 10 problems. “We’ve had years of Western countries shamelessly saying ‘don’t develop coal’,” he said. “That’s just nonsense.”

At the National Institutes of Health (NIH), staff are being told to cancel hundreds of research grants that involve DEI and transgender issues. The Trump administration also wants to cut the allowance for indirect costs of NIH’s and other agencies’ research grants to 15% of research contracts, although a district court judge has put that move on hold pending further legal arguments. On 8 March, the Trump administration also threatened to cancel $400m in funding to Columbia University, purportedly due to its failure to tackle anti-semitism on campus.

A Trump policy of removing “undocumented aliens” continues to alarm universities that have overseas students. Some institutions have already advised overseas students against travelling abroad during holidays, in case immigration officers do not let them back in when they return. Others warn that their international students should carry their immigration documents with them at all times. Universities have also started to rein in spending with Harvard and the Massachusetts Institute of Technology, for example, implementing a hiring freeze.

Falling behind

Amid the turmoil, the US scientific community is beginning to fight back. Individual scientists have supported court cases that have overturned sackings at government agencies, while a letter to Congress signed by the Union of Concerned Scientists and 48 scientific societies asserts that the administration has “already caused significant harm to American science”. On 7 March, more than 30 US cities also hosted “Stand Up for Science” rallies attended by thousands of demonstrators.

Elsewhere, a group of government, academic and industry leaders – known collectively as Vision for American Science and Technology – has released a report warning that the US could fall behind China and other competitors in science and technology. Entitled Unleashing American Potential, it calls for increased public and private investment in science to maintain US leadership. “The more dollars we put in from the feds, the more investment comes in from industry, and we get job growth, we get economic success, and we get national security out of it,” notes Sudip Parikh, chief executive of the American Association for the Advancement of Science, who was involved in the report.

Marcia McNutt, president of the National Academy of Sciences, meanwhile, has called on the community to continue to highlight the benefit of science. “We need to underscore the fact that stable federal funding of research is the main mode by which radical new discoveries have come to light – discoveries that have enabled the age of quantum computing and AI and new materials science,” she said. “These are areas that I am sure are very important to this administration as well.”

Joint APS meeting brings together the physics community

12 March 2025, 12:05

New for 2025, the American Physical Society (APS) is combining its March Meeting and April Meeting into a joint event known as the APS Global Physics Summit. The largest physics research conference in the world, the Global Physics Summit brings together 14,000 attendees across all disciplines of physics. The meeting takes place in Anaheim, California (as well as virtually) from 16 to 21 March.

Uniting all disciplines of physics in one joint event reflects the increasingly interdisciplinary nature of scientific research and enables everybody to participate in any session. The meeting includes cross-disciplinary sessions and collaborative events, where attendees can meet to connect with others, discuss new ideas and discover groundbreaking physics research.

The meeting will take place in three adjacent venues. The Anaheim Convention Center will host March Meeting sessions, while the April Meeting sessions will be held at the Anaheim Marriott. The Hilton Anaheim will host SPLASHY (soft, polymeric, living, active, statistical, heterogeneous and yielding) matter and medical physics sessions. Cross-disciplinary sessions and networking events will take place at all sites and in the connecting outdoor plaza.

With programming aligned with the 2025 International Year of Quantum Science and Technology, the meeting also celebrates all things quantum with a dedicated Quantum Festival. Designed to “inspire and educate”, the festival incorporates events at the intersection of art, science and fun – with multimedia performances, science demonstrations, circus performers, and talks by Nobel laureates and a NASA astronaut.

Finally, there’s the exhibit hall, where more than 200 exhibitors will showcase products and services for the physics community. Here, delegates can also attend poster sessions, a career fair and a graduate school fair. Read on to find out about some of the innovative product offerings on show at the technical exhibition.

Precision motion drives innovative instruments for physics applications

For over 25 years Mad City Labs has provided precision instrumentation for research and industry, including nanopositioning systems, micropositioners, microscope stages and platforms, single-molecule microscopes and atomic force microscopes (AFMs).

This product portfolio, coupled with the company’s expertise in custom design and manufacturing, enables Mad City Labs to provide solutions for nanoscale motion for diverse applications such as astronomy, biophysics, materials science, photonics and quantum sensing.

Mad City Labs’ piezo nanopositioners feature the company’s proprietary PicoQ sensors, which provide ultralow noise and excellent stability to yield sub-nanometre resolution and motion control down to the single picometre level. The performance of the nanopositioners is central to the company’s instrumentation solutions, as well as the diverse applications that it can serve.

In the company’s scanning probe microscopy solutions, the nanopositioning systems provide true decoupled motion with virtually undetectable out-of-plane movement, while their precision and stability yield high positioning performance and control. Uniquely, Mad City Labs offers both optical deflection AFMs and resonant probe AFM models.

Product portfolio: Mad City Labs provides precision instrumentation for applications ranging from astronomy and biophysics to materials science, photonics and quantum sensing. (Courtesy: Mad City Labs)

The MadAFM is a sample scanning AFM in a compact, tabletop design. Designed for simple user-led installation, the MadAFM is a multimodal optical deflection AFM and includes software. The resonant probe AFM products include the AFM controllers MadPLL and QS-PLL, which enable users to build their own flexibly configured AFMs using Mad City Labs micro- and nanopositioners. All AFM instruments are ideal for material characterization, but resonant probe AFMs are uniquely well suited for quantum sensing and nano-magnetometry applications.

Stop by the Mad City Labs booth and ask about the new do-it-yourself quantum scanning microscope based on the company’s AFM products.

Mad City Labs also offers standalone micropositioning products such as optical microscope stages, compact positioners and the Mad-Deck XYZ stage platform. These products employ proprietary intelligent control to optimize stability and precision. These micropositioning products are compatible with the high-resolution nanopositioning systems, enabling motion control across micro–picometre length scales.

The new MMP-UHV50 micropositioning system offers 50 mm travel with 190 nm step size and maximum vertical payload of 2 kg, and is constructed entirely from UHV-compatible materials and carefully designed to eliminate sources of virtual leaks. Uniquely, the MMP-UHV50 incorporates a zero power feature when not in motion to minimize heating and drift. Safety features include limit switches and overheat protection, a critical item when operating in vacuum environments.

For advanced microscopy techniques in biophysics, the RM21 single-molecule microscope, featuring the unique MicroMirror TIRF system, offers multicolour total internal-reflection fluorescence microscopy with an excellent signal-to-noise ratio and efficient data collection, along with an array of options to support multiple single-molecule techniques. Finally, new motorized micromirrors enable easier alignment and stored setpoints.

  • Visit Mad City Labs at the APS Global Summit, at booth #401

New lasers target quantum, Raman spectroscopy and life sciences

HÜBNER Photonics, manufacturer of high-performance lasers for advanced imaging, detection and analysis, is highlighting a large range of exciting new laser products at this year’s APS event. With these new lasers, the company responds to market trends specifically within the areas of quantum research and Raman spectroscopy, as well as fluorescence imaging and analysis for life sciences.

Dedicated to the quantum research field, a new series of CW ultralow-noise single-frequency fibre amplifier products – the Ampheia Series lasers – offer output powers of up to 50 W at 1064 nm and 5 W at 532 nm, with an industry-leading low relative intensity noise. The Ampheia Series lasers ensure unmatched stability and accuracy, empowering researchers and engineers to push the boundaries of what’s possible. The lasers are specifically suited for quantum technology research applications such as atom trapping, semiconductor inspection and laser pumping.

Ultralow-noise operation: the Ampheia Series lasers are particularly suitable for quantum technology research applications. (Courtesy: HÜBNER Photonics)

In addition to the Ampheia Series, the new Cobolt Qu-T Series of single-frequency tunable lasers addresses atom cooling. With wavelengths of 707, 780 and 813 nm, coarse tunability of greater than 4 nm, narrow mode-hop-free tuning of below 5 GHz, linewidth of below 50 kHz and powers of 500 mW, the Cobolt Qu-T Series is perfect for atom cooling of rubidium, strontium and other atoms used in quantum applications.

For the Raman spectroscopy market, HÜBNER Photonics announces the new Cobolt Disco single-frequency laser with available power of up to 500 mW at 785 nm, in a perfect TEM00 beam. This new wavelength is an extension of the Cobolt 05-01 Series platform, which, with excellent wavelength stability, a linewidth of less than 100 kHz and spectral purity better than 70 dB, provides the performance needed for high-resolution, ultralow-frequency Raman spectroscopy measurements.

For life science applications, a number of new wavelengths and higher power levels are available, including 553 nm with 100 mW and 594 nm with 150 mW. These new wavelengths and power levels are available on the Cobolt 06-01 Series of modulated lasers, which offer versatile and advanced modulation performance with perfect linear optical response, true OFF states and stable illumination from the first pulse – for any duty cycles and power levels across all wavelengths.

The company’s unique multi-line laser, Cobolt Skyra, is now available with laser lines covering the full green–orange spectral range, including 594 nm, with up to 100 mW per line. This makes this multi-line laser highly attractive as a compact and convenient illumination source in most bioimaging applications, and now also specifically suitable for excitation of AF594, mCherry, mKate2 and other red fluorescent proteins.

In addition, with the Cobolt Kizomba laser, the company is introducing a new UV wavelength that specifically addresses the flow cytometry market. The Cobolt Kizomba laser offers 349 nm output at 50 mW with the renowned performance and reliability of the Cobolt 05-01 Series lasers.

  • Visit HÜBNER Photonics at the APS Global Summit, at booth #359.

Cat qubits open a faster track to fault-tolerant quantum computing

10 March 2025, 10:30

Researchers from the Amazon Web Services (AWS) Center for Quantum Computing have announced what they describe as a “breakthrough” in quantum error correction. Their method uses so-called cat qubits to reduce the total number of qubits required to build a large-scale, fault-tolerant quantum computer, and they claim it could shorten the time required to develop such machines by up to five years.

Quantum computers are promising candidates for solving complex problems that today’s classical computers cannot handle. Their main drawback is the tendency for errors to crop up in the quantum bits, or qubits, they use to perform computations. Just like classical bits, the states of qubits can erroneously flip from 0 to 1, which is known as a bit-flip error. In addition, qubits can suffer from inadvertent changes to their phase, which is a parameter that characterizes their quantum superposition (phase-flip errors). A further complication is that whereas classical bits can be copied in order to detect and correct errors, the quantum nature of qubits makes copying impossible. Hence, errors need to be dealt with in other ways.
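
As a minimal illustration of the two error types (a two-level toy model in NumPy, not anything taken from the AWS hardware), a bit flip corresponds to the Pauli X operator and a phase flip to the Pauli Z operator acting on the qubit’s state vector:

```python
import numpy as np

# Pauli X swaps |0> and |1> (a bit flip); Pauli Z flips the sign of |1> (a phase flip)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# An example superposition state |+> = (|0> + |1>)/sqrt(2)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

print("after bit flip  :", X @ plus)  # ~[0.707, 0.707]: |+> is unchanged by a bit flip
print("after phase flip:", Z @ plus)  # ~[0.707, -0.707]: the relative phase has flipped
```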

One error-correction scheme involves building physical or “measurement” qubits around each logical or “data” qubit. The job of the measurement qubits is to detect phase-flip or bit-flip errors in the data qubits without destroying their quantum nature. In 2024, a team at Google Quantum AI showed that this approach is scalable in a system of a few dozen qubits. However, a truly powerful quantum computer would require around a million data qubits and an even larger number of measurement qubits.
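
To make the idea concrete, here is a deliberately simplified, purely classical sketch of a three-bit repetition code (it ignores the quantum subtleties of non-destructive syndrome measurement and is not the scheme used by Google or AWS). Two parity checks between neighbouring data bits play the role of the measurement qubits: their outcomes pinpoint a single bit-flip error without reading the data bits directly.

```python
def syndrome(data):
    # Parity of neighbouring data bits -- the information a measurement qubit would report
    return (data[0] ^ data[1], data[1] ^ data[2])

def correct(data):
    # Map the two parity outcomes to the single bit (if any) that must have flipped
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(data))
    if flip is not None:
        data[flip] ^= 1
    return data

encoded = [1, 1, 1]      # logical "1" stored redundantly in three data bits
encoded[2] ^= 1          # inject a single bit-flip error
print(correct(encoded))  # -> [1, 1, 1]
```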

Cat qubits to the rescue

The AWS researchers showed that it is possible to reduce this total number of qubits. They did this by using a special type of qubit called a cat qubit. Named after the Schrödinger’s cat thought experiment that illustrates the concept of quantum superposition, cat qubits use the superposition of coherent states to encode information in a way that resists bit flips. Doing so may increase the number of phase-flip errors, but special error-correction algorithms can deal with these efficiently.
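
In standard textbook notation (not specific to the AWS chip), a cat qubit stores its logical states in even and odd superpositions of two coherent states |α⟩ and |−α⟩ of a resonator:

$$ |\mathcal{C}_\alpha^{\pm}\rangle \propto |\alpha\rangle \pm |{-\alpha}\rangle $$

Because |α⟩ and |−α⟩ sit far apart in phase space for large |α|, converting one into the other (a bit flip) is exponentially suppressed, which is why the dominant remaining errors are phase flips.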

The AWS team got this result by building a microchip containing an array of five cat qubits. These are connected to four transmon qubits, which are a type of superconducting qubit with a reduced sensitivity to charge noise (a major source of errors in quantum computations). Here, the cat qubits serve as data qubits, while the transmon qubits measure and correct phase-flip errors. The cat qubits were further stabilized by connecting each of them to a buffer mode that uses a non-linear process called two-photon dissipation to ensure that their noise bias is maintained over time.

According to Harry Putterman, a senior research scientist at AWS, the team’s foremost challenge (and innovation) was to ensure that the system did not introduce too many bit-flip errors. This was important because the system uses a classical repetition code as its “outer layer” of error correction, which left it with no redundancy against residual bit flips. With this aspect under control, the researchers demonstrated that their superconducting quantum circuit suppressed errors from 1.75% per cycle for a three-cat qubit array to 1.65% per cycle for a five-cat qubit array. Achieving this degree of error suppression with larger error-correcting codes previously required tens of additional qubits.

On a scalable path

AWS’s director of quantum hardware, Oskar Painter, says the result will reduce the development time for a full-scale quantum computer by 3-5 years. This is, he says, a direct outcome of the system’s simple architecture as well as its 90% reduction in the “overhead” required for quantum error correction. The team does, however, need to reduce the error rates of the error-corrected logical qubits. “The two most important next steps towards building a fault-tolerant quantum computer at scale is that we need to scale up to several logical qubits and begin to perform and study logical operations at the logical qubit level,” Painter tells Physics World.

According to David Schlegel, a research scientist at the French quantum computing firm Alice & Bob, which specializes in cat qubits, this work marks the beginning of a shift from noisy, classically simulable quantum devices to fully error-corrected quantum chips. He says the AWS team’s most notable achievement is its clever hybrid arrangement of cat qubits for quantum information storage and traditional transmon qubits for error readout.

However, while Schlegel calls the research “innovative”, he says it is not without limitations. Because the AWS chip incorporates transmons, it still needs to address both bit-flip and phase-flip errors. “Other cat qubit approaches focus on completely eliminating bit flips, further reducing the qubit count by more than a factor of 10,” Schlegel says. “But it remains to be seen which approach will prove more effective and hardware-efficient for large-scale error-corrected quantum devices in the long run.”

The research is published in Nature.

Physicists in Serbia begin strike action in support of student protests

7 March 2025, 13:00

Physicists in Serbia have begun strike action today in response to what they say is government corruption and social injustice. The one-day strike, called by the country’s official union for researchers, is expected to result in thousands of scientists joining students who have already been demonstrating for months over conditions in the country.

The student protests, which began in November, were triggered by a railway station canopy collapse that killed 15 people. Since then, the movement has grown into an ongoing mass protest seen by many as indirectly seeking to change the government, currently led by president Aleksandar Vučić.

The Serbian government, however, claims it has met all student demands such as transparent publication of all documents related to the accident and the prosecution of individuals who have disrupted the protests. The government has also accepted the resignation of prime minister Miloš Vučević as well as transport minister Goran Vesić and trade minister Tomislav Momirović, who previously held the transport role during the station’s reconstruction.

“The students are championing noble causes that resonate with all citizens,” says Igor Stanković, a statistical physicist at the Institute of Physics (IPB) in Belgrade, who is joining today’s walkout. In January, around 100 employees from the IPB signed a letter in support of the students, one of many from various research institutions since December.

Stanković believes that the corruption and lack of accountability that students are protesting against “stem from systemic societal and political problems, including entrenched patronage networks and a lack of transparency”.

“I believe there is no turning back now,” adds Stanković. “The students have gained support from people across the academic spectrum – including those I personally agree with and others I believe bear responsibility for the current state of affairs. That, in my view, is their strength: standing firmly behind principles, not political affiliations.”

Meanwhile, Miloš Stojaković, a mathematician at the University of Novi Sad, says that the faculty at the university have backed the students from the start especially given that they are making “a concerted effort to minimize disruptions to our scientific work”.

Many university faculties in Serbia have been blockaded by protesting students, who have been using them as a base for their demonstrations. “The situation will have a temporary negative impact on research activities,” admits Dejan Vukobratović, an electrical engineer from the University of Novi Sad. However, most researchers are “finding their way through this situation”, he adds, with “most teams keeping their project partners and funders informed about the situation, anticipating possible risks”.

Missed exams

Amidst the continuing disruptions, the Serbian national science foundation has twice delayed a deadline for the award of €24m of research grants, citing “circumstances that adversely affect the collection of project documentation”. The foundation adds that 96% of its survey participants requested an extension. The researchers’ union has also called on the government to freeze the work status of PhD students employed as research assistants or interns to accommodate the months-long pause to their work. The government has promised to look into it.

Meanwhile, universities are setting up expert groups to figure out how to deal with the delays to studies and missed exams. Physics World approached Serbia’s government for comment, but did not receive a reply.

Nanosensor predicts risk of complications in early pregnancy

7 March 2025, 10:00

Researchers in Australia have developed a nanosensor that can detect the onset of gestational diabetes with 95% accuracy. Demonstrated by a team led by Carlos Salomon at the University of Queensland, the superparamagnetic “nanoflower” sensor could enable doctors to detect a variety of complications in the early stages of pregnancy.

Many complications in pregnancy can have profound and lasting effects on both the mother and the developing foetus. Today, these conditions are detected using methods such as blood tests, ultrasound screening and blood pressure monitoring. In many cases, however, their sensitivity is severely limited in the earliest stages of pregnancy.

“Currently, most pregnancy complications cannot be identified until the second or third trimester, which means it can sometimes be too late for effective intervention,” Salomon explains.

To tackle this challenge, Salomon and his colleagues are investigating the use of specially engineered nanoparticles to isolate and detect biomarkers in the blood associated with complications in early pregnancy. Specifically, they aim to detect the protein molecules carried by extracellular vesicles (EVs) – tiny, membrane-bound particles released by the placenta, which play a crucial role in cell signalling.

In their previous research, the team pioneered the development of superparamagnetic nanostructures that selectively bind to specific EV biomarkers. Superparamagnetism occurs specifically in small, ferromagnetic nanoparticles, causing their magnetization to randomly flip direction under the influence of temperature. When proteins are bound to the surfaces of these nanostructures, their magnetic responses are altered detectably, providing the team with a reliable EV sensor.
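
For background (a standard result for superparamagnetic particles, not a formula quoted in the study), the average time between these thermally driven magnetization flips is given by the Néel–Arrhenius law, where K is the particle’s anisotropy constant, V its volume, k_B Boltzmann’s constant, T the temperature and τ_0 an attempt time of order nanoseconds:

$$ \tau_{\mathrm{N}} = \tau_0 \exp\!\left(\frac{KV}{k_{\mathrm{B}}T}\right) $$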

“This technology has been developed using nanomaterials to detect biomarkers at low concentrations,” explains co-author Mostafa Masud. “This is what makes our technology more sensitive than current testing methods, and why it can pick up potential pregnancy complications much earlier.”

Previous versions of the sensor used porous nanocubes that efficiently captured EVs carrying a key placental protein named PLAP. By detecting unusual levels of PLAP in the blood of pregnant women, this approach enabled the researchers to detect complications far more easily than with existing techniques. However, the method generally required detection times lasting several hours, making it unsuitable for on-site screening.

In their latest study, reported in Science Advances, Salomon’s team started with a deeper analysis of the EV proteins carried by these blood samples. Through advanced computer modelling, they discovered that complications can be linked to changes in the relative abundance of PLAP and another placental protein, CD9.

Based on these findings, they developed a new superparamagnetic nanosensor capable of detecting both biomarkers simultaneously. Their design features flower-shaped nanostructures made of nickel ferrite, which were embedded into specialized testing strips to boost their sensitivity even further.

Using this sensor, the researchers collected blood samples from 201 pregnant women at 11 to 13 weeks’ gestation. “We detected possible complications, such as preterm birth, gestational diabetes and preeclampsia, which is high blood pressure during pregnancy,” Salomon describes. For gestational diabetes, the sensor demonstrated 95% sensitivity in identifying at-risk cases, and 100% specificity in ruling out healthy cases.
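
For readers unfamiliar with those two figures of merit, the short sketch below shows how they follow from a confusion matrix; the counts are invented purely for illustration and are not the study’s actual numbers.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    # Sensitivity: fraction of genuinely at-risk cases that the test flags
    # Specificity: fraction of healthy cases that the test correctly rules out
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: 19 of 20 at-risk pregnancies flagged, all 180 healthy ones cleared
print(sensitivity_specificity(tp=19, fn=1, tn=180, fp=0))  # -> (0.95, 1.0)
```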

Based on these results, the researchers are hopeful that further refinements to their nanoflower sensor could lead to a new generation of EV protein detectors, enabling the early diagnosis of a wide range of pregnancy complications.

“With this technology, pregnant women will be able to seek medical intervention much earlier,” Salomon says. “This has the potential to revolutionize risk assessment and improve clinical decision-making in obstetric care.”

Curious consequence of special relativity observed for the first time in the lab

6 March 2025, 14:28

A counterintuitive result from Einstein’s special theory of relativity has finally been verified more than 65 years after it was predicted. The prediction states that objects moving near the speed of light will appear rotated to an external observer, and physicists in Austria have now observed this experimentally using a laser and an ultrafast stop-motion camera.

A central postulate of special relativity is that the speed of light is the same in all reference frames. An observer who sees an object travelling close to the speed of light and makes simultaneous measurements of its front and back (in the direction of travel) will therefore find that, because photons coming from each end of the object both travel at the speed of light, the object is measurably shorter than it would be for an observer in the object’s reference frame. This is the long-established phenomenon of Lorentz contraction.
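
In the usual notation, an object with rest length L_0 moving at speed v is measured to be shorter by the Lorentz factor γ:

$$ L = \frac{L_0}{\gamma} = L_0\sqrt{1 - \frac{v^2}{c^2}} $$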

In 1959, however, two physicists, James Terrell and the future Nobel laureate Roger Penrose, independently noted something else. If the object has any significant optical depth relative to its length – in other words, if its extension parallel to the observer’s line of sight is comparable to its extension perpendicular to this line of sight, as is the case for a cube or a sphere – then photons from the far side of the object (from the observer’s perspective) will take longer to reach the observer than photons from its near side. Hence, if a camera takes an instantaneous snapshot of the moving object, it will collect photons from the far side that were emitted earlier at the same time as it collects photons from the near side that were emitted later.

This time difference stretches the image out, making the object appear longer even as Lorentz contraction makes its measurements shorter. Because the stretching and the contraction cancel out, the photographed object will not appear to change length at all.

But that isn’t the whole story. For the cancellation to work, the photons reaching the observer from the part of the object facing its direction of travel must have been emitted later than the photons that come from its trailing edge. This is because photons from the far and back sides come from parts of the object that would normally be obscured by the front and near sides. However, because the object moves in the time it takes photons to propagate, it creates a clear passage for trailing-edge photons to reach the camera.

The cumulative effect, Terrell and Penrose showed, is that instead of appearing to contract – as one would naïvely expect – a three-dimensional object photographed travelling at nearly the speed of light will appear rotated.
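
A minimal sketch of the geometry (the standard textbook argument for a cube of side L moving at v = βc perpendicular to the line of sight, not the Austrian team’s analysis): photons from the far edge of the trailing face must set off a time L/c earlier than those from its near edge to reach the camera together, and in that interval the cube advances by βL, so the trailing face appears in the image with width βL. The face pointing at the camera, whose extent along the direction of motion is Lorentz-contracted, appears with width L/γ. These are exactly the projections of a cube rotated by an angle θ satisfying

$$ \sin\theta = \beta, \qquad \cos\theta = \sqrt{1-\beta^2} = \frac{1}{\gamma}, $$

which is why the snapshot looks rotated rather than squashed.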

The Terrell effect in the lab

While multiple computer models have been constructed to illustrate this “Terrell effect” rotation, it has largely remained a thought experiment. In the new work, however, Peter Schattschneider of the Technical University of Vienna and colleagues realized it in an experimental setup. To do this, they shone pulsed laser light onto one of two moving objects: a sphere or a cube. The laser pulses were synchronized to a picosecond camera that collected light scattered off the object.

The researchers programmed the camera to produce a series of images at each position of the moving object. They then allowed the object to move to the next position and, when the laser pulsed again, recorded another series of ultrafast images with the camera. By linking together images recorded from the camera in response to different laser pulses, the researchers were able to, in effect, reduce the speed of light to less than 2 m/s.

When they did so, they observed that the object rotated rather than contracted, just as Terrell and Penrose predicted. While their results did deviate somewhat from theoretical predictions, this was unsurprising given that the predictions rest on certain assumptions. One of these is that incoming rays of light should be parallel to the observer, which is only true if the distance from object to observer is infinite. Another is that each image should be recorded instantaneously, whereas the shutter speed of real cameras is inevitably finite.

Because their research is awaiting publication by a journal with an embargo policy, Schattschneider and colleagues were unavailable for comment. However, the Harvard University astrophysicist Avi Loeb, who suggested in 2017 that the Terrell effect could have applications for measuring exoplanet masses, is impressed: “What [the researchers] did here is a very clever experiment where they used very short pulses of light from an object, then moved the object, and then looked again at the object and then put these snapshots together into a movie – and because it involves different parts of the body reflecting light at different times, they were able to get exactly the effect that Terrell and Penrose envisioned,” he says. Though Loeb notes that there’s “nothing fundamentally new” in the work, he nevertheless calls it “a nice experimental confirmation”.

The research is available on the arXiv pre-print server.

Seen a paper changed without notification? Study reveals the growing trend of ‘stealth corrections’

5 March 2025, 15:21

The integrity of science could be threatened by publishers changing scientific papers after they have been published – but without making any formal public notification. That’s the verdict of a new study by an international team of researchers, who have coined the term “stealth corrections” for such changes. They want publishers to publicly log all changes that are made to published scientific research (Learned Publishing 38 e1660).

When corrections are made to a paper after publication, it is standard practice for a notice to be added to the article explaining what has been changed and why. This transparent record keeping is designed to retain trust in the scientific record. But last year, René Aquarius, a neurosurgery researcher at Radboud University Medical Center in the Netherlands, noticed this does not always happen.

After spotting an issue with an image in a published paper, he raised concerns with the authors, who acknowledged the concerns and stated that they were “checking the original data to figure out the problem” and would keep him updated. However, Aquarius was surprised to see that the figure had been updated a month later, but without a correction notice stating that the paper had been changed.

Teaming up with colleagues from Belgium, France, the UK and the US, Aquarius began to identify and document similar stealth corrections. They did so by recording instances that they and other “science sleuths” had already found and by searching online for terms such as “no erratum”, “no corrigendum” and “stealth” on PubPeer – an online platform where users discuss and review scientific publications.

Sustained vigilance

The researchers define a stealth correction as at least one post-publication change being made to a scientific article that does not provide a correction note or any other indicator that the publication has been temporarily or permanently altered. The researchers identified 131 stealth corrections spread across 10 scientific publishers and in different fields of research. In 92 of the cases, the stealth correction involved a change in the content of the article, such as to figures, data or text.

The remaining unrecorded changes covered three categories: “author information” such as the addition of authors or changes in affiliation; “additional information”, including edits to ethics and conflict of interest statements; and “the record of editorial process”, for instance alterations to editor details and publication dates. “For most cases, we think that the issue was big enough to have a correction notice that informs the readers what was happening,” Aquarius says.

After the authors began drawing attention to the stealth corrections, five of the papers received an official correction notice, nine were given expressions of concern, 17 reverted to the original version and 11 were retracted. Aquarius says he believes it is “important” that the reader knows what has happened to a paper “so they can make up their own mind whether they want to trust [it] or not”.

The researchers would now like to see publishers implementing online correction logs that make it impossible to change anything in a published article without it being transparently reported, however small the edit. They also say that clearer definitions and guidelines are required concerning what constitutes a correction and needs a correction notice.

“We need to have sustained vigilance in the scientific community to spot these stealth corrections and also register them publicly, for example on PubPeer,” Aquarius says.

How physics raised the roof: the people and places that drove the science of acoustics

5 March 2025, 12:00

Sometimes an attention-grabbing title is the best thing about a book, but not in this case. Pistols in St Paul’s: Science, Music and Architecture in the Twentieth Century, by historian Fiona Smyth, is an intriguing journey charting the development of acoustics in architecture during the first half of the 20th century.

The story begins with the startling event that gives the book its unusual moniker: the firing of a Colt revolver in the famous London cathedral in 1951. A similar experiment was also performed in the Royal Festival Hall in the same year (see above photo). Fortunately, this was simply a demonstration for journalists of an experiment to understand and improve the listening experience in a space notorious for its echo and other problematic acoustic features.

St Paul’s was completed in 1711 and Smyth, a historian of architecture, science and construction at the University of Cambridge in the UK, explains that until the turn of the last century, the only way to evaluate the quality of sound in such a building was by ear. The book then reveals how this changed. Over five decades of innovative experiments, scientists and architects built a quantitative understanding of how a building’s shape, size and interior furnishings determine the quality of speech and music through reflection and absorption of sound waves.

We are first taken back to the dawn of the 20th century and shown how the evolution of architectural acoustics as a scientific field was driven by a small group of dedicated researchers. This includes architect and pioneering acoustician Hope Bagenal, along with several physicists, notably Harvard-based US physicist Wallace Clement Sabine.

Details of Sabine’s career, alongside those of Bagenal, whose personal story forms the backbone for much of the book, deftly put a human face on the research that transformed these public spaces. Perhaps Sabine’s most significant contribution was the derivation of a formula to predict the time taken for sound to fade away in a room. Known as the “reverberation time”, this became a foundation of architectural acoustics, and his mathematical work still forms the basis for the field today.
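
The review does not reproduce the formula, but in its familiar metric form Sabine’s reverberation time T_60 (the time for the sound level to decay by 60 dB) depends on the room volume V and the total absorption, summed over each surface area S_i weighted by its absorption coefficient α_i:

$$ T_{60} \approx \frac{0.161\,V}{\sum_i S_i \alpha_i} $$

with V in cubic metres, areas in square metres and T_60 in seconds.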

The presence of people, objects and reflective or absorbing surfaces all affect a room’s acoustics. Smyth describes how materials ranging from rugs and timber panelling to specially developed acoustic plaster and tiles have all been investigated for their acoustic properties. She also vividly details the venues where acoustics interventions were added – such as the reflective teak flooring and vast murals painted on absorbent felt in the Henry Jarvis Memorial Hall of the Royal Institute of British Architects in London.

Other locations featured include the Royal Albert Hall, Abbey Road Studios, White Rock Pavilion at Hastings, and the Assembly Chamber of the Legislative Building in New Delhi, India. Temporary structures and spaces for musical performance are highlighted too. These include the National Gallery while it was cleared of paintings during the Second World War and the triumph of acoustic design that was the Glasgow Empire Exhibition concert hall – built for the 1938 event and sadly dismantled that same year.

Unsurprisingly, much of this acoustic work was either punctuated or heavily influenced by the two world wars. While in the trenches during the First World War, Bagenal wrote a journal paper on cathedral acoustics that detailed his pre-war work at St Paul’s Cathedral, Westminster Cathedral and Westminster Abbey. His paper discussed timbre, resonant frequency “and the effects of interference and delay on clarity and harmony”.

In 1916, back in England recovering from a shellfire injury, Bagenal started what would become a long-standing research collaboration with the commandant of the hospital where he was recuperating – who happened to be Alex Wood, a physics lecturer at Cambridge. Equally fascinating is hearing about the push in the wake of the First World War for good speech acoustics in public spaces used for legislative and diplomatic purposes.

Smyth also relates tales of the wrangling that sometimes took place over funding for acoustic experiments on public buildings, and how, as the 20th century progressed, companies specializing in acoustic materials sprang up – and in some cases made dubious claims about the merits of their products. Meanwhile, new technologies such as tape recorders and microphones helped bring a more scientific approach to architectural acoustics research.

The author concludes by describing how the acoustic research from the preceding decades influenced the auditorium design of the Royal Festival Hall on the South Bank in London, which, as Smyth states, was “the first building to have been designed from the outset as a manifestation of acoustic science”.

As evidenced by the copious notes, the wealth of contemporary quotes, and the captivating historical photos and excerpts from archive documents, this book is well-researched. But while I enjoyed the pace and found myself hooked into the story, I found the text repetitive in places, and felt that more details about the physics of acoustics would have enhanced the narrative.

But these are minor grumbles. Overall Smyth paints an evocative picture, transporting us into these legendary auditoria. I have always found it a rather magical experience attending concerts at the Royal Albert Hall. Now, thanks to this book, the next time I have that pleasure I will do so with a far greater understanding of the role physics and physicists played in shaping the music I hear. For me at least, listening will never be quite the same again.

  • 2024 Manchester University Press 328pp £25.00/$36.95

The complex and spatially heterogeneous nature of degradation in heavily cycled Li-ion cells

5 March 2025, 11:22

As service lifetimes of electric vehicle (EV) and grid storage batteries continually improve, it has become increasingly important to understand how Li-ion batteries perform after extensive cycling. Using a combination of spatially resolved synchrotron x-ray diffraction and computed tomography, the complex kinetics and spatially heterogeneous behavior of extensively cycled cells can be mapped and characterized under both near-equilibrium and non-equilibrium conditions.

This webinar shows examples of commercial cells with thousands (even tens of thousands) of cycles over many years. The behaviour of such cells can be surprisingly complex and spatially heterogeneous, requiring a different approach to analysis and modelling than what is typically used in the literature. Using this approach, we investigate the long-term behavior of Ni-rich NMC cells and examine ways to prevent degradation. This work also showcases the incredible durability of single-crystal cathodes, which show very little evidence of mechanical or kinetic degradation after more than 20,000 cycles – the equivalent to driving an EV for 8 million km!
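
As a rough sanity check of that comparison (the per-cycle range is our assumption, not a figure from the webinar), 20,000 full cycles correspond to 8 million km if each charge–discharge cycle delivers roughly 400 km of driving:

$$ 20\,000~\text{cycles} \times 400~\text{km/cycle} = 8\times10^{6}~\text{km} $$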

Toby Bond

Toby Bond is a senior scientist in the Industrial Science group at the Canadian Light Source (CLS), Canada’s national synchrotron facility. A specialist in X-ray imaging and diffraction, he focuses on in-situ and operando analysis of batteries and fuel cells for industry clients of the CLS. Bond is an electrochemist by training, who completed his MSc and PhD in Jeff Dahn’s laboratory at Dalhousie University with a focus on developing methods and instrumentation to characterize long-term degradation in Li-ion batteries.

Fermilab’s Anna Grassellino: eyeing the prize of quantum advantage

5 March 2025, 11:00

The Superconducting Quantum Materials and Systems (SQMS) Center, led by Fermi National Accelerator Laboratory (Chicago, Illinois), is on a mission “to develop beyond-the-state-of-the-art quantum computers and sensors applying technologies developed for the world’s most advanced particle accelerators”. SQMS director Anna Grassellino talks to Physics World about the evolution of a unique multidisciplinary research hub for quantum science, technology and applications.

What’s the headline take on SQMS?

Established as part of the US National Quantum Initiative (NQI) Act of 2018, SQMS is one of the five National Quantum Information Science Research Centers run by the US Department of Energy (DOE). With funding of $115m through its initial five-year funding cycle (2020-25), SQMS represents a coordinated, at-scale effort – comprising 35 partner institutions – to address pressing scientific and technological challenges for the realization of practical quantum computers and sensors, as well as exploring how novel quantum tools can advance fundamental physics.

Our mission is to tackle one of the biggest cross-cutting challenges in quantum information science: the lifetime of superconducting quantum states – also known as the coherence time (the length of time that a qubit can effectively store and process information). Understanding and mitigating the physical processes that cause decoherence – and, by extension, limit the performance of superconducting qubits – is critical to the realization of practical and useful quantum computers and quantum sensors.
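
In the standard description (textbook notation, not SQMS-specific), decoherence appears as an exponential decay of the off-diagonal element of the qubit’s density matrix, with the coherence time T_2 setting the window in which quantum information remains usable:

$$ \rho_{01}(t) = \rho_{01}(0)\,e^{-t/T_2} $$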

How is the centre delivering versus the vision laid out in the NQI?

SQMS has brought together an outstanding group of researchers who, collectively, have utilized a suite of enabling technologies from Fermilab’s accelerator science programme – and from our network of partners – to realize breakthroughs in qubit chip materials and fabrication processes; design and development of novel quantum devices and architectures; as well as the scale-up of complex quantum systems. Central to this endeavour are superconducting materials, superconducting radiofrequency (SRF) cavities and cryogenic systems – all workhorse technologies for particle accelerators employed in high-energy physics, nuclear physics and materials science.

Collective endeavour: at the core of SQMS success are top-level scientists and engineers leading the centre’s cutting-edge quantum research programmes. From left to right: Alexander Romanenko, Silvia Zorzetti, Tanay Roy, Yao Lu, Anna Grassellino, Akshay Murthy, Roni Harnik, Hank Lamm, Bianca Giaccone, Mustafa Bal, Sam Posen. (Courtesy: Hannah Brumbaugh/Fermilab)

Take our research on decoherence channels in quantum devices. SQMS has made significant progress in the fundamental science and mitigation of losses in the oxides, interfaces, substrates and metals that underpin high-coherence qubits and quantum processors. These advances – the result of wide-ranging experimental and theoretical investigations by SQMS materials scientists and engineers – led, for example, to the demonstration of transmon qubits (a type of charge qubit exhibiting reduced sensitivity to noise) with systematic improvements in coherence, record-breaking lifetimes of over a millisecond, and reductions in performance variation.

How are you building on these breakthroughs?

First of all, we have worked on technology transfer. By developing novel chip fabrication processes together with quantum computing companies, we have contributed to our industry partners’ results of up to 2.5x improvement in error performance in their superconducting chip-based quantum processors.

We have combined these qubit advances with Fermilab’s ultrahigh-coherence 3D SRF cavities: advancing our efforts to build a cavity-based quantum processor and, in turn, demonstrating the longest-lived superconducting multimode quantum processor unit ever built (coherence times in excess of 20 ms). These systems open the path to a more powerful qudit-based quantum computing approach. (A qudit is a multilevel quantum unit that can have more than two states.) What’s more, SQMS has already put these novel systems to use as quantum sensors within Fermilab’s particle physics programme – probing for the existence of dark-matter candidates, for example, as well as enabling precision measurements and fundamental tests of quantum mechanics.
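
As a back-of-the-envelope comparison (ours, not a claim from SQMS), the appeal of qudits is that n of them, each with d accessible levels, span a Hilbert space of dimension d^n, so fewer physical units are needed to reach a given computational space than with two-level qubits:

$$ \dim\mathcal{H} = d^{\,n} \quad\text{compared with}\quad 2^{\,n}\ \text{for } n\ \text{qubits} $$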

Elsewhere, we have been pushing early-stage societal impacts of quantum technologies and applications – including the use of quantum computing methods to enhance data analysis in magnetic resonance imaging (MRI). Here, SQMS scientists are working alongside clinical experts at New York University Langone Health to apply quantum techniques to quantitative MRI, an emerging diagnostic modality that could one day provide doctors with a powerful tool for evaluating tissue damage and disease.

What technologies pursued by SQMS will be critical to the scale-up of quantum systems?

There are several important examples, but I will highlight two of specific note. For starters, there’s our R&D effort to efficiently scale millikelvin-regime cryogenic systems. SQMS teams are currently developing technologies for larger and higher-cooling-power dilution refrigerators. We have designed and prototyped novel systems allowing over 20x higher cooling power, a necessary step to enable the scale-up to thousands of superconducting qubits per dilution refrigerator.

Materials insights The SQMS collaboration is studying the origins of decoherence in state-of-the-art qubits (above) using a raft of advanced materials characterization techniques – among them time-of-flight secondary-ion mass spectrometry, cryo electron microscopy and scanning probe microscopy. With a parallel effort in materials modelling, the centre is building a hierarchy of loss mechanisms that is informing how to fabricate the next generation of high-coherence qubits and quantum processors. (Courtesy: Dan Svoboda/Fermilab)

Also, we are working to optimize microwave interconnects with very low energy loss, taking advantage of SQMS expertise in low-loss superconducting resonators and materials in the quantum regime. (Quantum interconnects are critical components for linking devices together to enable scaling to large quantum processors and systems.)

How important are partnerships to the SQMS mission?

Partnerships are foundational to the success of SQMS. The DOE National Quantum Information Science Research Centers were conceived and built as mini-Manhattan projects, bringing together the power of multidisciplinary and multi-institutional groups of experts. SQMS is a leading example of building bridges across the “quantum ecosystem” – with other national and federal laboratories, with academia and industry, and across agency and international boundaries.

In this way, we have scaled up unique capabilities – multidisciplinary know-how, infrastructure and a network of R&D collaborations – to tackle the decoherence challenge and to harvest the power of quantum technologies. A case study in this regard is Ames National Laboratory, a specialist DOE centre for materials science and engineering on the campus of Iowa State University.

Ames is a key player in a coalition of materials science experts – coordinated by SQMS – seeking to unlock fundamental insights about qubit decoherence at the nanoscale. Through Ames, SQMS and its partners get access to powerful analytical tools – modalities like terahertz spectroscopy and cryo transmission electron microscopy – that aren’t routinely found in academia or industry.

How extensive is the SQMS partner network?

All told, SQMS quantum platforms and experiments involve the collective efforts of more than 500 experts from 35 partner organizations, among them the National Institute of Standards and Technology (NIST), NASA Ames Research Center and Northwestern University, as well as leading companies in the quantum tech industry like IBM and Rigetti Computing. Our network extends internationally and includes flagship tie-ins with the UK’s National Physical Laboratory (NPL), Italy’s National Institute for Nuclear Physics (INFN), and the Institute for Quantum Computing (University of Waterloo, Canada).

What are the drivers for your engagement with the quantum technology industry?

The SQMS strategy for industry engagement is clear: to work hand-in-hand to solve technological challenges utilizing complementary facilities and expertise; to abate critical performance barriers; and to bring bidirectional value. I believe that even large companies do not have the ability to achieve practical quantum computing systems working exclusively on their own. The challenges at hand are vast and often require R&D partnerships among experts across diverse and highly specialized disciplines.

I also believe that DOE National Laboratories – given their depth of expertise and ability to build large-scale and complex scientific instruments – are, and will continue to be, key players in the development and deployment of the first useful and practical quantum computers. This means not only as end-users, but as technology developers. Our vision at SQMS is to lay the foundations of how we are going to build these extraordinary machines in partnership with industry. It’s about learning to work together and leveraging our mutual strengths.

How do Rigetti and IBM, for example, benefit from their engagement with SQMS?

Our collaboration with Rigetti Computing, a Silicon Valley company that’s building quantum computers, has been exemplary throughout: a two-way partnership that leverages the unique enabling technologies within SQMS to boost the performance of Rigetti’s superconducting quantum processors.

The partnership with IBM, although more recent, is equally significant. Together with IBM researchers, we are interested in developing quantum interconnects – including the development of high-Q cables to make them less lossy – for the high-fidelity connection and scale-up of quantum processors into large and useful quantum computing systems.

At the same time, SQMS scientists are exploring simulations of problems in high-energy physics and condensed-matter physics using quantum computing cloud services from Rigetti and IBM.

Presumably, similar benefits accrue to suppliers of ancillary equipment to the SQMS quantum R&D programme?

Correct. We challenge our suppliers of advanced materials and fabrication equipment to go above and beyond, working closely with them on continuous improvement and new product innovation. In this way, for example, our suppliers of silicon and sapphire substrates and nanofabrication platforms – key technologies for advanced quantum circuits – benefit from SQMS materials characterization tools and fundamental physics insights that would simply not be available in isolation. These technologies are still at a stage where we need fundamental science to help define the ideal materials specifications and standards.

We are also working with companies developing quantum control boards and software, collaborating on custom solutions to unique hardware architectures such as the cavity-based qudit platforms in development at Fermilab.

How is your team building capacity to support quantum R&D and technology innovation?

We’ve pursued a twin-track approach to the scaling of SQMS infrastructure. On the one hand, we have augmented – very successfully – a network of pre-existing facilities at Fermilab and at SQMS partners, spanning accelerator technologies, materials science and cryogenic engineering. In aggregate, this covers hundreds of millions of dollars’ worth of infrastructure that we have re-employed or upgraded for studying quantum devices, including access to a host of leading-edge facilities via our R&D partners – for example, microkelvin-regime quantum platforms at Royal Holloway, University of London, and underground quantum testbeds at INFN’s Gran Sasso Laboratory.

Thinking big in quantum The SQMS Quantum Garage (above) houses a suite of R&D testbeds to support granular studies of superconducting qubits, quantum processors, high-coherence quantum sensors and quantum interconnects. (Courtesy: Ryan Postel/Fermilab)

In parallel, we have invested in new and dedicated infrastructure to accelerate our quantum R&D programme. The Quantum Garage here at Fermilab is the centrepiece of this effort: a 560 square-metre laboratory with a fleet of six additional dilution refrigerators for cryogenic cooling of SQMS experiments as well as test, measurement and characterization of superconducting qubits, quantum processors, high-coherence quantum sensors and quantum interconnects.

What is the vision for the future of SQMS?

SQMS is putting together an exciting proposal in response to a DOE call for the next five years of research. Our efforts on coherence will remain paramount. We have come a long way, but the field still needs to make substantial advances in terms of noise reduction of superconducting quantum devices. There’s great momentum and we will continue to build on the discoveries made so far.

We have also demonstrated significant progress regarding our 3D SRF cavity-based quantum computing platform. So much so that we now have a clear vision of how to implement a mid-scale prototype quantum computer with over 50 qudits in the coming years. To get us there, we will be laying out an exciting SQMS quantum computing roadmap by the end of 2025.

It’s equally imperative to address the scalability of quantum systems. Together with industry, we will work to demonstrate practical and economically feasible approaches to be able to scale up to large quantum computing data centres with millions of qubits.

Finally, SQMS scientists will work on exploring early-stage applications of quantum computers, sensors and networks. Technology will drive the science, science will push the technology – a continuous virtuous cycle that I’m certain will lead to plenty more ground-breaking discoveries.

How SQMS is bridging the quantum skills gap

Education, education, education SQMS hosted the inaugural US Quantum Information Science (USQIS) School in summer 2023. Held annually, the USQIS is organized in conjunction with other DOE National Laboratories, academia and industry. (Courtesy: Dan Svoboda/Fermilab)

As with its efforts in infrastructure and capacity-building, SQMS is addressing quantum workforce development on multiple fronts.

Across the centre, Grassellino and her management team have recruited upwards of 150 technical staff and early-career researchers over the past five years to accelerate the SQMS R&D effort. “These ‘boots on the ground’ are a mix of PhD students, postdoctoral researchers plus senior research and engineering managers,” she explains.

Another significant initiative was launched in summer 2023, when SQMS hosted nearly 150 delegates at Fermilab for the inaugural US Quantum Information Science (USQIS) School – now an annual event organized in conjunction with other National Laboratories, academia and industry. The long-term goal is to develop the next generation of quantum scientists, engineers and technicians by sharing SQMS know-how and experimental skills in a systematic way.

“The prioritization of quantum education and training is key to sustainable workforce development,” notes Grassellino. With this in mind, she is currently in talks with academic and industry partners about an SQMS-developed master’s degree in quantum engineering. Such a programme would reinforce the centre’s already diverse internship initiatives, with graduate students benefiting from dedicated placements at SQMS and its network partners.

“Wherever possible, we aim to assign our interns with co-supervisors – one from a National Laboratory, say, another from industry,” adds Grassellino. “This ensures the learning experience shapes informed decision-making about future career pathways in quantum science and technology.”

The post Fermilab’s Anna Grassellino: eyeing the prize of quantum advantage appeared first on Physics World.

Thirty years of the Square Kilometre Array: here’s what the world’s largest radio telescope project has achieved so far

4 mars 2025 à 10:00

From its sites in South Africa and Australia, the Square Kilometre Array (SKA) Observatory last year achieved “first light” – producing its first-ever images.  When its planned 197 dishes and 131,072 antennas are fully operational, the SKA will be the largest and most sensitive radio telescope in the world.

Under the umbrella of a single observatory, the telescopes at the two sites will work together to survey the cosmos. The Australian side, known as SKA-Low, will focus on low frequencies, while South Africa’s SKA-Mid will observe middle-range frequencies. The £1bn telescopes, which are projected to begin making science observations in 2028, were built to shed light on some of the most intractable problems in astronomy, such as how galaxies form, the nature of dark matter, and whether life exists on other planets.

Three decades in the making, the SKA will stand on the shoulders of many smaller experiments and telescopes – a suite of so-called “precursors” and “pathfinders” that have trialled new technologies and shaped the instrument’s trajectory. The 15 pathfinder experiments dotted around the planet are exploring different aspects of SKA science.

Meanwhile, on the SKA sites in Australia and South Africa, there are four precursor telescopes – MeerKAT and HERA in South Africa, and the Australian SKA Pathfinder (ASKAP) and the Murchison Widefield Array (MWA) in Australia. These precursors are weathering the arid local conditions and are already broadening scientists’ understanding of the universe.

“The SKA was the big, ambitious end game that was going to take decades,” says Steven Tingay, director of the MWA based in Bentley, Australia. “Underneath that umbrella, a huge number of already fantastic things have been done with the precursors, and they’ve all been investments that have been motivated by the path to the SKA.”

Even as technology and science testbeds, “they have far surpassed what anyone reasonably expected of them”, adds Emma Chapman, a radio astronomer at the University of Nottingham, UK.

MeerKAT: glimpsing the heart of the Milky Way

In 2018, radio astronomers in South Africa were scrambling to pull together an image for the inauguration of the 64-dish MeerKAT radio telescope. MeerKAT will eventually form the heart of SKA-Mid, picking up frequencies between 350 megahertz and 15.4 gigahertz, and the researchers wanted to show what it was capable of.

As you’ve never seen it before A radio image of the centre of the Milky Way taken by the MeerKAT telescope. The elongated radio filaments visible emanating from the heart of the galaxy are 10 times more numerous than in any previous image. (Courtesy: I. Heywood, SARAO)

Like all the SKA precursors, MeerKAT is an interferometer, with many dishes acting like a single giant instrument. MeerKAT’s dishes stand about three storeys high, with a diameter of 13.5 m, and the largest distance between dishes is about 8 km. This is part of what gives the interferometer its power: longer baselines between dishes increase the telescope’s angular resolution, allowing it to resolve finer detail on the sky.
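
As a rough illustration (not a figure quoted by the MeerKAT team), the diffraction-limited resolution of an interferometer is roughly the observing wavelength divided by its longest baseline. Assuming observations near 1.4 GHz (a wavelength of about 21 cm) and MeerKAT’s 8 km maximum baseline:

```latex
% Approximate angular resolution of an interferometer
\theta \approx \frac{\lambda}{B_{\max}}
       \approx \frac{0.21\ \mathrm{m}}{8000\ \mathrm{m}}
       \approx 2.6\times10^{-5}\ \mathrm{rad}
       \approx 5\ \mathrm{arcsec}
```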

Additional dishes will be integrated into the interferometer to form SKA-Mid. The new dishes will be larger (with diameters of 15 m) and further apart (with baselines of up to 150 km), making it much more sensitive than MeerKAT on its own. Nevertheless, using just the provisional data from MeerKAT, the researchers were able to mark the unveiling of the telescope with the clearest radio image yet of our galactic centre.

Now, we finally see the big picture – a panoramic view filled with an abundance of filaments…. This is a watershed in furthering our understanding of these structures

Farhad Yusef-Zadeh

Four years later, an international team used the MeerKAT data to produce an even more detailed image of the centre of the Milky Way (ApJL 949 L31). The image (above) shows radio-emitting filaments up to 150 light-years long unspooling from the heart of the galaxy. These structures, whose origin remains unknown, were first observed in 1984, but the new image revealed 10 times more than had ever been seen before.

“We have studied individual filaments for a long time with a myopic view,” Farhad Yusef-Zadeh, an astronomer at Northwestern University in the US and an author on the image paper, said at the time. “Now, we finally see the big picture – a panoramic view filled with an abundance of filaments. This is a watershed in furthering our understanding of these structures.”

The image resembles a “glorious artwork, conveying how bright black holes are in radio waves, but with the busyness of the galaxy going on around it”, says Chapman. “Runaway pulsars, supernovae remnant bubbles, magnetic field lines – it has it all.”

In a different area of astronomy, MeerKAT “has been a surprising new contender in the field of pulsar timing”, says Natasha Hurley-Walker, an astronomer at the Curtin University node of the International Centre for Radio Astronomy Research in Bentley. Pulsars are rotating neutron stars that produce periodic pulses of radiation, in some cases hundreds of times a second. MeerKAT’s sensitivity, combined with its precise time-stamping, allows it to accurately map these powerful radio sources.

An experiment called the MeerKAT Pulsar Timing Array has been observing a group of 80 pulsars once a fortnight since 2019 and is using them as “cosmic clocks” to create a map of gravitational-wave sources. “If we see pulsars in the same direction in the sky lose time in a connected way, we start suspecting that it is not the pulsars that are acting funny but rather a gravitational wave background that has interfered,” says Marisa Geyer, an astronomer at the University of Cape Town and a co-author on several papers about the array published last year.

HERA: the first stars and galaxies

When astronomers dreamed up the idea for the SKA about 30 years ago, they wanted an instrument that could not only capture a wide view of the universe but was also sensitive enough to look far back in time. In the first billion years after the Big Bang, the universe cooled enough for hydrogen and helium to form, eventually clumping into stars and galaxies.

When these early stars began to shine, their light stripped electrons from the primordial hydrogen that still populated most of the cosmos – a period of cosmic history known as the Epoch of Reionization. The hydrogen that had yet to be ionized gave off a faint signal, and catching glimpses of this ancient radiation remains one of the major science goals of the SKA.

Developing methods to identify these primordial hydrogen signals is the job of the Hydrogen Epoch of Reionization Array (HERA) – a collection of hundreds of 14 m dishes, packed closely together as they watch the sky like bowls made of wire mesh (see image below). The dishes have been specifically designed to observe fluctuations in primordial hydrogen in the low-frequency range of 100 MHz to 200 MHz.

Echoes of the early universe The HERA telescope is listening for the faint signals from the first primordial hydrogen that formed after the Big Bang. (Courtesy: South African Radio Astronomy Observatory (SARAO))

Understanding this mysterious epoch sheds light on how young cosmic objects influenced the formation of larger ones and later seeded other objects in the universe. Scientists using HERA data have already reported the most sensitive power limits on the reionization signal (ApJ 945 124), bringing us closer to pinning down what the early universe looked like and how it evolved, and will eventually guide SKA observations. “It always helps to be able to target things better before you begin to build and operate a telescope,” explains HERA project manager David de Boer, an astronomer at the University of California, Berkeley in the US.

MWA: “unexpected” new objects

Over in Australia, meanwhile, the MWA’s 4096 antennas crouch on the red desert sand like spiders (see image below). This interferometer has a particularly wide-field view because, unlike its mid-frequency precursor cousins, it has no moving parts, allowing it to view large parts of the sky at the same time. Each antenna also contains a low-noise amplifier in its centre, boosting the relatively weak low-frequency signals from space. “In a single observation, you cover an enormous fraction of the sky”, says Tingay. “That’s when you can start to pick up rare events and rare objects.”

Sharp eyes With its wide field of view and low-noise signal amplifiers, the MWA telescope in Australia is poised to spot brief and rare cosmic events, and it has already discovered a new class of mysterious radio transients. (Courtesy: Marianne Annereau, 2015 Murchison Widefield Array (MWA))

Hurley-Walker and colleagues discovered one such object a few years ago – repeated, powerful blasts of radio waves that occurred every 18 minutes and lasted about a minute. These signals were an example of a “radio transient” – an astrophysical phenomenon that lasts from milliseconds to years, and may repeat or occur just once. Radio transients have been attributed to many sources, including pulsars, but the period of this event was much longer than had ever been observed before.

New transients are challenging our current models of stellar evolution

Cathryn Trott, Curtin Institute of Radio Astronomy in Bentley, Australia

After the researchers first noticed this signal, they followed up with other telescopes and searched archival data from other observatories going back 30 years to confirm the peculiar time scale. “This has spurred observers around the world to look through their archival data in a new way, and now many new similar sources are being discovered,” Hurley-Walker says.

The discovery of new transients, including this one, is “challenging our current models of stellar evolution”, according to Cathryn Trott, a radio astronomer at the Curtin Institute of Radio Astronomy in Bentley, Australia. “No one knows what they are, how they are powered, how they generate radio waves, or even whether they are all the same type of object,” she adds.

This is something that the SKA – both SKA-Mid and SKA-Low – will investigate. The Australian SKA-Low antennas detect frequencies between 50 MHz and 350 MHz. They build on some of the techniques trialled by the MWA, such as the efficacy of using low-frequency antennas and how to combine their received signals into a digital beam. SKA-Low, with its similarly wide field of view, will offer a powerful new perspective on this developing area of astronomy.

ASKAP: giant sky surveys

The 36-dish ASKAP saw first light in 2012, the same year it was decided to split the SKA between Australia and South Africa. ASKAP was part of Australia’s efforts to prove that it could host the massive telescope, but it has since become an important instrument in its own right. These dishes use a technology called a phased array feed which allows the telescope to view different parts of the sky simultaneously.

Each dish contains one of these phased array feeds, which consists of 188 receivers arranged like a chessboard. With this technology, ASKAP can produce 36 concurrent beams that together cover about 30 square degrees of sky. This means it has a wide field of view, says de Boer, who was ASKAP’s inaugural director in 2010. In its first large-area survey, published in 2020, astronomers stitched together 903 images and identified more than 3 million sources of radio emission in the southern sky, many of which were new (PASA 37 e048).

Down under CSIRO’s ASKAP antennas at the Murchison Radioastronomy Observatory in Western Australia were used to demonstrate Australia’s capability to host the SKA. Able to rapidly take wide surveys of the sky, the array is also a valuable scientific instrument in its own right, and has made significant discoveries in the study of fast radio bursts. (Courtesy: CSIRO)

Because it can quickly survey large areas of the sky, the telescope has shown itself to be particularly adept at identifying and studying new fast radio bursts (FRBs). Discovered in 2007, FRBs are another kind of radio transient. They have been observed in many galaxies, and though some have been observed to repeat, most are detected only once.

This work is also helping scientists to understand one of the universe’s biggest mysteries. For decades, researchers have puzzled over the fact that the ordinary matter we can detect adds up to only about half of what we know existed after the Big Bang. The dispersion of FRB signals by this “missing matter” allows astronomers to weigh all of the normal matter between us and the distant galaxies hosting the bursts.
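
The principle can be summarized with the standard dispersion relation used in radio astronomy (a general textbook formula, not a result specific to the ASKAP papers): a burst arrives later at lower frequencies by an amount proportional to the dispersion measure DM, the column density of free electrons – and hence of ionized matter – along the line of sight.

```latex
% Dispersion delay of a radio pulse, relative to infinite frequency
\Delta t \approx 4.15\ \mathrm{ms}
  \left(\frac{\mathrm{DM}}{\mathrm{pc\,cm^{-3}}}\right)
  \left(\frac{\nu}{\mathrm{GHz}}\right)^{-2},
\qquad
\mathrm{DM} = \int n_{e}\,\mathrm{d}l
```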

By combing through ASKAP data, researchers in 2020 also discovered a new class of radio sources, which they dubbed “odd radio circles” (PASA 38 e003). These are giant rings of radiation that are observed only in radio waves.  Five years later their origins remain a mystery, but some scientists maintain they are flashes from ancient star formation.

The precursors are so important. They’ve given us new questions. And it’s incredibly exciting

Philippa Hartley, SKAO, Manchester

While SKA has many concrete goals, it is these unexpected discoveries that Philippa Hartley, a scientist at the SKAO, based near Manchester, is most excited about. “We’ve got so many huge questions that we’re going to use the SKA to try and answer, but then you switch on these new telescopes, you’re like, ‘Whoa! We didn’t expect that.’” That is why the precursors are so important. “They’ve given us new questions. And it’s incredibly exciting,” she adds.

Trouble on the horizon

As well as pushing the boundaries of astronomy and shaping the design of the SKA, the precursors have made a discovery much closer to home – one that could be a significant issue for the telescope. In a development that SKA’s founders will not have foreseen, the race to fill the skies with constellations of satellites is a problem both for the precursors and for the SKA itself.

Large corporations, including SpaceX in Hawthorne, California, OneWeb in London, UK, and Amazon’s Project Kuiper in Seattle, Washington, have launched more than 6000 communications satellites into space. Many others are also planned, including more than 12,000 from the Shanghai Spacecom Satellite Technology’s G60 Starlink based in Shanghai. These satellites, as well as global positioning satellites, are “photobombing” astronomy observatories and affecting observations across the electromagnetic spectrum.

The wild, wild west Satellite constellations are causing interference with ground-based observatories. (Courtesy: iStock/yucelyilmaz)

ASKAP,  MeerKAT and the MWA have all flagged the impact of satellites on their observations. “The likelihood of a beam of a satellite being within the beam of our telescopes is vanishingly small and is easily avoided,” says Robert Braun, SKAO director of science. However, because they are everywhere, these satellites still introduce background radio interference that contaminates observations, he says.

In 2022, the International Astronomical Union (IAU) launched its Centre for the Protection of the Dark and Quiet Sky from Satellite Constellation Interference. The SKA Observatory and the US National Science Foundation’s centre for ground-based optical astronomy NOIRLab co-host the facility, which aims to reduce the impact of these satellite constellations.

Although the SKA Observatory is engaging with individual companies to devise engineering solutions, “we really can’t be in a situation where we have bespoke solutions with all of these companies”, SKAO director-general Phil Diamond told a side event at the IAU general assembly in Cape Town last year. “That’s why we’re pursuing the regulatory and policy approach so that there are systems in place,” he said. “At the moment, it’s a bit like the wild, wild west and we do need a sheriff to stride into town to help put that required protection in place.”

In this, too, SKA precursors are charting a path forward, identifying ways to observe even with mega satellite constellations staring down at them. When the full SKA telescopes finally come online in 2028, the discoveries it makes will, in large part, be thanks to the telescopes that came before it.

The post Thirty years of the Square Kilometre Array: here’s what the world’s largest radio telescope project has achieved so far appeared first on Physics World.

Optical sensors could improve the comfort of indoor temperatures

28 février 2025 à 13:00

The internal temperature of a building is important – particularly in offices and work environments – for maximizing comfort and productivity. Managing the temperature is also essential for reducing the energy consumption of a building. In the US, buildings account for around 29% of total end-use energy consumption, with more than 40% of this energy dedicated to managing the internal temperature of a building via heating and cooling.

The human body is sensitive to both radiative and convective heat. The convective part revolves around humidity and air temperature, whereas radiative heat depends upon the surrounding surface temperatures inside the building. Understanding both thermal aspects is key for balancing energy consumption with occupant comfort. However, there are not many practical methods available for measuring the impact of radiative heat inside buildings. Researchers from the University of Minnesota Twin Cities have developed an optical sensor that could help solve this problem.

Limitation of thermostats for radiative heat

Room thermostats are used in almost every building today to regulate the internal temperature and improve the comfort levels for the occupants. However, modern thermostats only measure the local air temperature and don’t account for the effects of radiant heat exchange between surfaces and occupants, resulting in suboptimal comfort levels and inefficient energy use.

Finding a way to measure the mean radiant temperature in real time inside buildings could provide a more efficient way of heating the building – leading to more advanced and efficient thermostat controls. Currently, radiant temperature can be measured using either radiometers or black globe sensors. But radiometers are too expensive for commercial use, and black globe sensors are slow, bulky and error-prone in many internal environments.

In search of a new approach, first author Fatih Evren (now at Pacific Northwest National Laboratory) and colleagues used low-resolution, low-cost infrared sensors to measure the longwave mean radiant temperature inside buildings. These sensors eliminate the pan/tilt mechanism (where sensors rotate periodically to measure the temperature at different points and an algorithm determines the surface temperature distribution) required by many other sensors used to measure radiative heat. The new optical sensor also requires 4.5 times less computation power than pan/tilt approaches with the same resolution.

Integrating optical sensors to improve room comfort

The researchers tested infrared thermal array sensors with 32 x 32 pixels in four real-world environments (three living spaces and an office) with different room sizes and layouts. They examined three sensor configurations: one sensor on each of the room’s four walls; two sensors; and a single-sensor setup. The sensors measured the mean radiant temperature for 290 h at internal temperatures of between 18 and 26.8 °C.

The optical sensors capture raw 2D thermal data containing temperature information for the adjacent walls, floor and ceiling. To determine surface temperature distributions from these raw data, the researchers used projective homographic transformations – mappings between two geometric planes. Each surface of the room was segmented by marking its corners in the image, defining a homography matrix for that surface. Applying the transformations then yields the temperature distribution on each surface, and these surface temperatures can in turn be used to calculate the mean radiant temperature.
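
To make the geometry concrete, here is a minimal Python sketch of the homography step described above. It is not the authors’ code: the function name, corner ordering and output resolution are illustrative assumptions.

```python
# Rectify one wall of a low-resolution thermal frame using its four marked
# corner pixels, then average the rectified patch to estimate that wall's
# mean surface temperature.
import numpy as np
import cv2

def wall_mean_temperature(thermal_frame, corners_px, out_size=(64, 64)):
    """thermal_frame: 32x32 array of temperatures (degrees C).
    corners_px: four (x, y) pixel coordinates of the wall's corners,
    ordered top-left, top-right, bottom-right, bottom-left (hypothetical)."""
    src = np.float32(corners_px)
    w, h = out_size
    dst = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
    H = cv2.getPerspectiveTransform(src, dst)   # homography for this wall
    rectified = cv2.warpPerspective(thermal_frame.astype(np.float32), H, (w, h))
    return float(rectified.mean())

# The mean radiant temperature would then combine the per-surface means,
# weighted by the view factors between the occupant and each surface.
```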

The team compared the temperatures measured by their sensors against ground truth measurements obtained via the net-radiometer method. The optical sensor was found to be repeatable and reliable for different room sizes, layouts and temperature sensing scenarios, with most approaches agreeing within ±0.5 °C of the ground truth measurement, and a maximum error (arising from a single-sensor configuration) of only ±0.96 °C. The optical sensors were also more accurate than the black globe sensor method, which tends to have higher errors due to under/overestimating solar effects.

The researchers conclude that the sensors are repeatable, scalable and predictable, and that they could be integrated into room thermostats to improve human comfort and energy efficiency – especially for controlling the radiant heating and cooling systems now commonly used in high-performance buildings. They also note that a future direction could be to integrate machine learning and other advanced algorithms to improve the calibration of the sensors.

This research was published in Nature Communications.

The post Optical sensors could improve the comfort of indoor temperatures appeared first on Physics World.

Frequency-comb detection of gas molecules achieves parts-per-trillion sensitivity

27 février 2025 à 17:12

A new technique for using frequency combs to measure trace concentrations of gas molecules has been developed by researchers in the US. The team reports single-digit parts-per-trillion detection sensitivity and extremely broad spectral coverage, spanning more than 1000 wavenumbers (cm−1). This record-level sensing performance could open up a variety of hitherto inaccessible applications in fields such as medicine, environmental chemistry and chemical kinetics.

Each molecular species will absorb light at a specific set of frequencies. So, shining light through a sample of gas and measuring this absorption can reveal the molecular composition of the gas.

Cavity ringdown spectroscopy is an established way to increase the sensitivity of absorption spectroscopy and needs no calibration. A laser is injected between two mirrors, creating an optical standing wave. A sample of gas is then injected into the cavity, so the laser beam passes through it, normally many thousands of times. The absorption of light by the gas is then determined by the rate at which the intracavity light intensity “rings down” – in other words, the rate at which the standing wave decays away.
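
As an illustration of how a ringdown rate translates into absorption, here is a minimal Python sketch with idealized inputs – not the analysis code used in the study. It fits the decay of the transmitted intensity to an exponential, with and without gas in the cavity, and converts the change in decay rate into an absorption coefficient.

```python
import numpy as np
from scipy.optimize import curve_fit

c = 2.998e8  # speed of light (m/s)

def ringdown(t, I0, tau):
    # Intensity of light leaking out of the cavity: I(t) = I0 * exp(-t/tau)
    return I0 * np.exp(-t / tau)

def absorption_coefficient(t, intensity_empty, intensity_gas):
    # Fit the ringdown time of the empty cavity (tau0) and the gas-filled
    # cavity (tau); the extra loss per unit length is (1/tau - 1/tau0)/c.
    (_, tau0), _ = curve_fit(ringdown, t, intensity_empty, p0=(intensity_empty[0], 1e-5))
    (_, tau), _ = curve_fit(ringdown, t, intensity_gas, p0=(intensity_gas[0], 1e-5))
    return (1.0 / tau - 1.0 / tau0) / c  # absorption coefficient (1/m)
```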

Researchers have used this method with frequency comb lasers to probe the absorption of gas samples at a range of different light frequencies. A frequency comb produces light at a series of very sharp intensity peaks that are equidistant in frequency – resembling the teeth of a comb.
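
Each comb tooth sits at a well-defined optical frequency, conventionally written in terms of the comb’s repetition rate and carrier-envelope offset frequency:

```latex
% Frequency of the n-th comb tooth
f_{n} = f_{\mathrm{ceo}} + n\,f_{\mathrm{rep}}
```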

Shifting resonances

However, the more reflective the mirrors become (the higher the cavity finesse), the narrower each cavity resonance becomes. Because the resonance frequencies are not evenly spaced and can be shifted substantially by the loaded gas, the usual approach is to oscillate the length of the cavity so that all of the cavity resonances sweep back and forth across the comb lines. Multiple resonances are sequentially excited and the transient comb intensity dynamics are captured by a camera, after the light is spatially separated by an optical grating.

“That experimental scheme works in the near-infrared, but not in the mid-infrared,” says Qizhong Liang. “Mid-infrared cameras are not fast enough to capture those dynamics yet.” This is a problem because the mid-infrared is where many molecules can be identified by their unique absorption spectra.

Liang is a member of Jun Ye’s group at JILA in Colorado, which has shown that it is possible to measure transient comb dynamics simply with a Michelson interferometer. The spectrometer entails only beam splitters, a delay stage and photodetectors. The researchers worked out that the periodically generated intensity dynamics arising from each tooth of the frequency comb can be detected as a set of Fourier components offset by Doppler frequency shifts, allowing the absorption from the loaded gas to be determined.

Dithering the cavity

This process of reading out transient dynamics from “dithering” the cavity by a passive Michelson interferometer is much simpler than previous setups and thus can be used by people with little experience with combs, says Liang. It also places no restrictions on the finesse of the cavity, spectral resolution, or spectral coverage. “If you’re dithering the cavity resonances, then no matter how narrow the cavity resonance is, it’s guaranteed that the comb lines can be deterministically coupled to the cavity resonance twice per cavity round trip modulation,” he explains.

The researchers reported detections of various molecules at concentrations as low as parts-per-billion with parts-per-trillion uncertainty in exhaled air from volunteers. This included biomedically relevant molecules such as acetone, which is a sign of diabetes, and formaldehyde, which is diagnostic of lung cancer. “Detection of molecules in exhaled breath in medicine has been done in the past,” explains Liang. “The more important point here is that, even if you have no prior knowledge about what the gas sample composition is, be it in industrial applications, environmental science applications or whatever, you can still use it.”

Konstantin Vodopyanov of the University of Central Florida in Orlando comments: “This achievement is remarkable, as it integrates two cutting-edge techniques: cavity ringdown spectroscopy, where a high-finesse optical cavity dramatically extends the laser beam’s path to enhance sensitivity in detecting weak molecular resonances, and frequency combs, which serve as a precise frequency ruler composed of ultra-sharp spectral lines. By further refining the spectral resolution to the Doppler broadening limit of less than 100 MHz and referencing the absolute frequency scale to a reliable frequency standard, this technology holds great promise for applications such as trace gas detection and medical breath analysis.”

The spectrometer is described in Nature.

The post Frequency-comb detection of gas molecules achieves parts-per-trillion sensitivity appeared first on Physics World.

New transfer arm moves heavier samples in vacuum

26 février 2025 à 13:21

Vacuum technology is routinely used in both scientific research and industrial processes. In physics, high-quality vacuum systems make it possible to study materials under extremely clean and stable conditions. In industry, vacuum is used to lift, position and move objects precisely and reliably. Without these technologies, a great deal of research and development would simply not happen. But for all its advantages, working under vacuum does come with certain challenges. For example, once something is inside a vacuum system, how do you manipulate it without opening the system up?

Heavy duty: The new transfer arm. (Courtesy: UHV Design)

The UK-based firm UHV Design has been working on this problem for over a quarter of a century, developing and manufacturing vacuum manipulation solutions for new research disciplines as well as emerging industrial applications. Its products, which are based on magnetically coupled linear and rotary probes, are widely used at laboratories around the world, in areas ranging from nanoscience to synchrotron and beamline applications. According to engineering director Jonty Eyres, the firm’s latest innovation – a new sample transfer arm released at the beginning of this year – extends this well-established range into new territory.

“The new product is a magnetically coupled probe that allows you to move a sample from point A to point B in a vacuum system,” Eyres explains. “It was designed to have an order of magnitude improvement in terms of both linear and rotary motion thanks to the magnets in it being arranged in a particular way. It is thus able to move and position objects that are much heavier than was previously possible.”

The new sample arm, Eyres explains, is made up of a vacuum “envelope” comprising a welded flange and tube assembly. This assembly has an outer magnet array that magnetically couples to an inner magnet array attached to an output shaft. The output shaft extends beyond the mounting flange and incorporates a support bearing assembly. “Depending on the model, the shafts can either be in one or more axes: they move samples around either linearly, linear/rotary or incorporating a dual axis to actuate a gripper or equivalent elevating plate,” Eyres says.

Continual development, review and improvement

While similar devices are already on the market, Eyres says that the new product has a significantly larger magnetic coupling strength in terms of its linear thrust and rotary torque. These features were developed in close collaboration with customers who expressed a need for arms that could carry heavier payloads and move them with more precision. In particular, Eyres notes that in the original product, the maximum weight that could be placed on the end of the shaft – a parameter that depends on the stiffness of the shaft as well as the magnetic coupling strength – was too small for these customers’ applications.

“From our point of view, it was not so much the magnetic coupling that needed to be reviewed, but the stiffness of the device in terms of the size of the shaft that extends out to the vacuum system,” Eyres explains. “The new arm deflects much less from its original position even with a heavier load and when moving objects over longer distances.”

The new product – a scaled-up version of the original – can support a load of up to 50 N (equivalent to a mass of about 5 kg) over an axial stroke of up to 1.5 m. Eyres notes that it also requires minimal maintenance, which is important for moving higher loads. “It is thus targeted to customers who wish to move larger objects around over longer periods of time without having to worry about intervening too often,” he says.

Moving multiple objects

As well as moving larger, single objects, the new arm’s capabilities make it suitable for moving multiple objects at once. “Rather than having one sample go through at a time, we might want to nest three or four samples onto a large plate, which inevitably increases the size of the overall object,” Eyres explains.

Before they created this product, he continues, he and his UHV Design colleagues were not aware of any magnetic coupled solution on the marketplace that enabled users to do this. “As well as being capable of moving heavy samples, our product can also move lighter samples, but with a lot less shaft deflection over the stroke of the product,” he says. “This could be important for researchers, particularly if they are limited in space or if they wish to avoid adding costly supports in their vacuum system.”

The post New transfer arm moves heavier samples in vacuum appeared first on Physics World.

Experts weigh in on Microsoft’s topological qubit claim

25 février 2025 à 18:30

Researchers at Microsoft in the US claim to have made the first topological quantum bit (qubit) – a potentially transformative device that could make quantum computing robust against the errors that currently restrict what it can achieve. “If the claim stands, it would be a scientific milestone for the field of topological quantum computing and physics beyond,” says Scott Aaronson, a computer scientist at the University of Texas at Austin.

However, the claim is controversial because the evidence supporting it has not yet been presented in a peer-reviewed paper. It is made in a press release from Microsoft accompanying a paper in Nature (638 651) that has been written by more than 160 researchers from the company’s Azure Quantum team. The paper stops short of claiming a topological qubit but instead reports some of the key device characterization underpinning it.

Writing in a peer-review file accompanying the paper, the Nature editorial team says that it sought additional input from two of the article’s reviewers to “establish its technical correctness”, concluding that “the results in this manuscript do not represent evidence for the presence of Majorana zero modes [MZMs] in the reported devices”. An MZM is a quasiparticle (a particle-like collective electronic state) that can act as a topological qubit.

“That’s a big no-no”

“The peer-reviewed publication is quite clear [that it contains] no proof for topological qubits,” says Winfried Hensinger, a physicist at the University of Sussex who works on quantum computing using trapped ions. “But the press release speaks differently. In academia that’s a big no-no: you shouldn’t make claims that are not supported by a peer-reviewed publication” – or that have at least been presented in a preprint.

Chetan Nayak, leader of Microsoft Azure Quantum, which is based in Redmond, Washington, says that the evidence for a topological qubit was obtained in the period between submission of the paper in March 2024 and its publication. He will present those results at a talk at the Global Physics Summit of the American Physical Society in Anaheim in March.

But Hensinger is concerned that “the press release doesn’t make it clear what the paper does and doesn’t contain”. He worries that some might conclude that the strong claim of having made a topological qubit is now supported by a paper in Nature. “We don’t need to make these claims – that is just unhealthy and will really hurt the field,” he says, because it could lead to unrealistic expectations about what quantum computers can do.

As with the qubits used in current quantum computers, such as superconducting components or trapped ions, MZMs would be able to encode superpositions of the two readout states (representing a 1 or 0). By quantum-entangling such qubits, information could be manipulated in ways not possible for classical computers, greatly speeding up certain kinds of computation. In MZMs the two states are distinguished by “parity”: whether the quasiparticles contain even or odd numbers of electrons.

Built-in error protection

As MZMs are “topological” states, their settings cannot easily be flipped by random fluctuations to introduce errors into the calculation. Rather, the states are like a twist in a buckled belt that cannot be smoothed out unless the buckle is undone. Topological qubits would therefore suffer far less from the errors that afflict current quantum computers, and which limit the scale of the computations they can support. Because quantum error correction is one of the most challenging issues for scaling up quantum computers, “we want some built-in level of error protection”, explains Nayak.

It has long been thought that MZMs might be produced at the ends of nanoscale wires made of a superconducting material. Indeed, Microsoft researchers have been trying for several years to fabricate such structures and look for the characteristic signature of MZMs at their tips. But it can be hard to distinguish this signature from those of other electronic states that can form in these structures.

In 2018 researchers at labs in the US and the Netherlands (including the Delft University of Technology and Microsoft) claimed to have evidence of an MZM in such devices. However, they then had to retract the work after others raised problems with the data. “That history is making some experts cautious about the new claim,” says Aaronson.

Now, though, it seems that Nayak and colleagues have cracked the technical challenges. In the Nature paper, they report measurements in a nanowire heterostructure made of superconducting aluminium and semiconducting indium arsenide that are consistent with, but not definitive proof of, MZMs forming at the two ends. The crucial advance is an ability to accurately measure the parity of the electronic states. “The paper shows that we can do these measurements fast and accurately,” says Nayak.

The device is a remarkable achievement from the materials science and fabrication standpoint

Ivar Martin, Argonne National Laboratory

“The device is a remarkable achievement from the materials science and fabrication standpoint,” says Ivar Martin, a materials scientist at Argonne National Laboratory in the US. “They have been working hard on these problems, and seems like they are nearing getting the complexities under control.” In the press release, the Microsoft team claims now to have put eight MZM topological qubits on a chip called Majorana 1, which is designed to house a million of them (see figure).

Even if the Microsoft claim stands up, a lot will still need to be done to get from a single MZM to a quantum computer, says Hensinger. Topological quantum computing is “probably 20–30 years behind the other platforms”, he says. Martin agrees. “Even if everything checks out and what they have realized are MZMs, cleaning them up to take full advantage of topological protection will still require significant effort,” he says.

Regardless of the debate about the results and how they have been announced, researchers are supportive of the efforts at Microsoft to produce a topological quantum computer. “As a scientist who likes to see things tried, I’m grateful that at least one player stuck with the topological approach even when it ended up being a long, painful slog,” says Aaronson.

“Most governments won’t fund such work, because it’s way too risky and expensive,” adds Hensinger. “So it’s very nice to see that Microsoft is stepping in there.”

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

The post Experts weigh in on Microsoft’s topological qubit claim appeared first on Physics World.

How cathode microstructure impacts solid-state batteries

25 février 2025 à 17:34

Solid-state batteries are considered next-generation energy storage technology as they promise higher energy density and safety than lithium-ion batteries with a liquid electrolyte. However, major obstacles to commercialization are the need for high stack pressures as well as insufficient power density. Both aspects are closely related to limitations of charge transport within the composite cathode.

This webinar presents an introduction to using electrochemical impedance spectroscopy to investigate composite cathode microstructures and identify kinetic bottlenecks. Effective conductivities can be obtained using transmission line models and used to evaluate the main factors limiting electronic and ionic charge transport.

In combination with high-resolution 3D imaging techniques and electrochemical cell cycling, the crucial role of the cathode microstructure can be revealed, the relevant factors influencing cathode performance identified, and optimization strategies for improved performance developed.
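
To give a flavour of this kind of analysis, below is a minimal Python sketch – a simplified model, not the speakers’ own code – of the impedance of a basic transmission line model for a porous electrode with blocking interfaces, from which an effective ionic conductivity can be extracted. The geometry values are hypothetical.

```python
import numpy as np

def tlm_impedance(freq_hz, R_ion, C):
    """Transmission-line impedance Z(w) = sqrt(R_ion/(j*w*C)) * coth(sqrt(j*w*R_ion*C)),
    where R_ion is the total ionic resistance of the electrolyte network in the cathode
    and C is the total interfacial capacitance. At low frequency, Z -> R_ion/3 + 1/(j*w*C)."""
    w = 2 * np.pi * np.asarray(freq_hz, dtype=float)
    x = np.sqrt(1j * w * R_ion * C)
    return np.sqrt(R_ion / (1j * w * C)) / np.tanh(x)

# Effective ionic conductivity from a fitted R_ion (hypothetical electrode geometry):
thickness, area = 100e-6, 1e-4               # m, m^2
R_ion_fit = 50.0                             # ohm, from a fit to the measured spectrum
sigma_eff = thickness / (R_ion_fit * area)   # S/m
```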

Philip Minnmann

Philip Minnmann received his M.Sc. in Materials Science from RWTH Aachen University. He later joined Prof. Jürgen Janek’s group at JLU Giessen as part of the BMBF Competence Cluster for Solid-State Batteries FestBatt. During his Ph.D., he worked on composite cathode characterization for sulfide-based solid-state batteries, as well as processing scalable, slurry-based solid-state batteries. Since 2023, he has been a project manager for high-throughput battery material research at HTE GmbH.

 

Johannes Schubert

Johannes Schubert holds an M.Sc. in Material Science from the Justus-Liebig University Giessen, Germany. He is currently a Ph.D. student in the research group of Prof. Jürgen Janek in Giessen, where he is part of the BMBF Competence Cluster for Solid-State Batteries FestBatt. His main research focuses on characterization and optimization of composite cathodes with sulfide-based solid electrolytes.

The post How cathode microstructure impacts solid-state batteries appeared first on Physics World.

The quest for better fusion reactors is putting a new generation of superconductors to the test

25 février 2025 à 12:00
Inside view Private companies like Tokamak Energy in the UK are developing compact tokamaks that, they hope, could bring fusion power to the grid in the 2030s. (Courtesy: Tokamak Energy)

Fusion – the process that powers the Sun – offers a tantalizing opportunity to generate almost unlimited amounts of clean energy. In the Sun’s core, matter is more than 10 times denser than lead and temperatures reach 15 million K. In these conditions, ionized isotopes of hydrogen (deuterium and tritium) can overcome their electrostatic repulsion, fusing into helium nuclei and ejecting high-energy neutrons. The products of this reaction are slightly lighter than the two reacting nuclei, and the excess mass is converted to lots of energy.
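
For reference, the reaction described here is the deuterium–tritium (D–T) reaction, with its standard energy partition between the helium nucleus and the neutron:

```latex
% Deuterium-tritium fusion and its energy release
{}^{2}_{1}\mathrm{H} + {}^{3}_{1}\mathrm{H} \;\longrightarrow\;
{}^{4}_{2}\mathrm{He}\,(3.5\ \mathrm{MeV}) + \mathrm{n}\,(14.1\ \mathrm{MeV}),
\qquad Q \approx 17.6\ \mathrm{MeV}
```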

The engineering and materials challenges of creating what is essentially a ‘Sun in a freezer’ are formidable

The Sun’s core is kept hot and dense by the enormous gravitational force exerted by its huge mass. To achieve nuclear fusion on Earth, different tactics are needed. Instead of gravity, the most common approach uses strong superconducting magnets operating at ultracold temperatures to confine the intensely hot hydrogen plasma.

The engineering and materials challenges of creating what is essentially a “Sun in a freezer”, and harnessing its power to make electricity, are formidable. This is partly because, over time, high-energy neutrons from the fusion reaction will damage the surrounding materials. Superconductors are incredibly sensitive to this kind of damage, so substantial shielding is needed to maximize the lifetime of the reactor.

The traditional roadmap towards fusion power, led by large international projects, has set its sights on bigger and bigger reactors, at greater and greater expense. However these are moving at a snail’s pace, with the first power to the grid not anticipated until the 2060s, leading to the common perception that “fusion power is 30 years away, and always will be.”

There is therefore considerable interest in alternative concepts for smaller, simpler reactors to speed up the fusion timeline. Such novel reactors will need a different toolkit of superconductors. Promising materials exist, but because fusion can still only be sustained in brief bursts, we have no way to directly test how these compounds will degrade over decades of use.

Is smaller better?

A leading concept for a nuclear fusion reactor is a machine called a tokamak, in which the plasma is confined to a doughnut-shaped region. In a tokamak, D-shaped electromagnets are arranged in a ring around a central column, producing a circulating (toroidal) magnetic field. This exerts a force (the Lorentz force) on the positively charged hydrogen nuclei, making them trace helical paths that follow the field lines and keep them away from the walls of the vessel.
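
The confinement can be summarized by two textbook formulas – the Lorentz force and the resulting gyro (Larmor) radius of the helical orbit – which show why a stronger field ties the particles more tightly to the field lines:

```latex
% Force on a charged particle and the radius of its helical orbit
\mathbf{F} = q\,\mathbf{v}\times\mathbf{B},
\qquad
r_{\mathrm{L}} = \frac{m v_{\perp}}{|q|\,B}
```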

In 2010, construction began in France on ITER, a tokamak that is designed to demonstrate the viability of nuclear fusion for energy generation. The aim is to produce burning plasma, where more than half of the energy heating the plasma comes from fusion in the plasma itself, and to generate, for short pulses, a tenfold return on the power input.

But despite ITER being proposed 40 years ago, its projected first operation was recently pushed back by another 10 years to 2034. The project’s budget has also been revised multiple times and it is currently expected to cost tens of billions of euros. One reason ITER is such an ambitious and costly project is its sheer size. ITER’s plasma radius of 6.2 m is twice that of the JT-60SA in Japan, the world’s current largest tokamak. The power generated by a tokamak roughly scales with the radius of the doughnut cubed, which means that doubling the radius should yield an eight-fold increase in power.

Tokamak Energy’s ST40 compact tokamak
Small but mighty Tokamak Energy’s ST40 compact tokamak uses copper electromagnets, which would be unsuitable for long-term operation due to overheating. REBCO compounds, which are high-temperature superconductors that can generate very high magnetic fields, are an attractive alternative. (Courtesy: Tokamak Energy)

However, instead of chasing larger and larger tokamaks, some organizations are going in the opposite direction. Private companies like Tokamak Energy in the UK and Commonwealth Fusion Systems in the US are developing compact tokamaks that, they hope, could bring fusion power to the grid in the 2030s. Their approach is to ramp up the magnetic field rather than the size of the tokamak. The fusion power of a tokamak has a stronger dependence on the magnetic field than the radius, scaling with the fourth power.
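
Combining the two scalings quoted above – power rising as the cube of the radius but as the fourth power of the field – gives a rough, indicative rule of thumb for why compact, high-field machines are attractive:

```latex
% Indicative scaling only, using the exponents quoted in the text
P_{\mathrm{fus}} \propto B^{4} R^{3}
\;\;\Rightarrow\;\;
\frac{P'}{P} = \left(\frac{B'}{B}\right)^{4}\left(\frac{R'}{R}\right)^{3}
             = 2^{4}\times\left(\tfrac{1}{2}\right)^{3} = 2
```

In other words, doubling the field while halving the radius would roughly double the fusion power in a machine with one-eighth the plasma volume.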

The drawback of smaller tokamaks is that the materials will sustain more damage from neutrons during operation. Of all the materials in the tokamak, the superconducting magnets are most sensitive to this. If the reactor is made more compact, they are also closer to the plasma and there will be less space for shielding. So if compact tokamaks are to succeed commercially, we need to choose superconducting materials that will be functional even after many years of irradiation.

1 Superconductors

Operation window for Nb-Ti, Nb3Sn and REBCO superconductors. (Courtesy: Susie Speller/IOP Publishing)

Superconductors are materials that have zero electrical resistance when they are cooled below a certain critical temperature (Tc).  Superconducting wires can therefore carry electricity much more efficiently than conventional resistive metals like copper.

What’s more, because no heat is generated, a superconducting wire can carry a much higher current than a copper wire of the same diameter. In contrast, as you pass ever more current through a copper wire, it heats up and its resistance rises even further, until eventually it melts. The superconductor’s higher current density (current per unit cross-sectional area) is what enables high-field superconducting magnets to be more compact than resistive ones.

However, there is an upper limit to the strength of the magnetic field that a superconductor can usefully tolerate without losing the ability to carry lossless current.  This is known as the “irreversibility field”, and for a given superconductor its value decreases as temperature is increased, as shown above.

High-performance fusion materials

Superconductors are a class of materials that, when cooled below a characteristic temperature, conduct with no resistance (see box 1, above). Magnets made from superconducting wires can carry high currents without overheating, making them ideal for generating the very high fields required for fusion. Superconductivity is highly sensitive to the arrangement of the atoms; whilst some amorphous superconductors exist, most superconducting compounds only conduct high currents in a specific crystalline state. A few defects will always arise, and can sometimes even improve the material’s performance. But introducing significant disorder to a crystalline superconductor will eventually destroy its ability to superconduct.

The most common material for superconducting magnets is a niobium-titanium (Nb-Ti) alloy, which is used in MRI machines in hospitals and CERN’s Large Hadron Collider. Nb-Ti superconducting magnets are relatively cheap and easy to manufacture, but – like all superconducting materials – Nb-Ti has an upper limit to the magnetic field in which it can superconduct, known as the irreversibility field. In Nb-Ti this value is too low for the high-field magnets in ITER. The ITER tokamak will instead use a niobium-tin (Nb3Sn) superconductor, which has a higher irreversibility field than Nb-Ti, even though it is much more expensive and challenging to work with.

2 REBCO unit cell

Unit cell of a REBCO
(Courtesy: redrawn from Wikimedia Commons/IOP Publishing)

The unit cell of a REBCO high-temperature superconductor. Here the pink atoms are copper, the red atoms are oxygen and the green atoms are barium; the rare-earth element, shown in blue, is yttrium in this case.

Needing stronger magnetic fields, compact tokamaks require a superconducting material with an even higher irreversibility field. Over the last decade, another class of superconducting materials called “REBCO” has been proposed as an alternative. Short for rare earth barium copper oxide, these are a family of superconductors with the chemical formula REBa2Cu3O7, where RE is a rare-earth element such as yttrium, gadolinium or europium (see Box 2 “REBCO unit cell”).

REBCO compounds  are high-temperature superconductors, which are defined as having transition temperatures above 77 K, meaning they can be cooled with liquid nitrogen rather than the more expensive liquid helium. REBCO compounds also have a much higher irreversibility field than niobium-tin, and so can sustain the high fields necessary for a small fusion reactor.

REBCO wires: Bendy but brittle

REBCO materials have attractive superconducting properties, but it is not easy to manufacture them into flexible wires for electromagnets. REBCO is a brittle ceramic so can’t be made into wires in the same way as ductile materials like copper or Nb-Ti, where the material is drawn through progressively smaller holes.

Instead, REBCO tapes are manufactured by coating metallic ribbons with a series of very thin ceramic layers, one of which is the superconducting REBCO compound. Ideally, the REBCO would be a single crystal, but in practice, it will be composed of many small grains. The metal gives mechanical stability and flexibility whilst the underlying ceramic “buffer” layers protect the REBCO from chemical reactions with the metal and act as a template for aligning the REBCO grains. This is important because the boundaries between individual grains reduce the maximum current the wire can carry.

Another potential problem is that these compounds are chemically sensitive and are “poisoned” by nearly all the impurities that may be introduced during manufacture. These impurities can produce insulating compounds that block supercurrent flow or degrade the performance of the REBCO compound itself.

Despite these challenges, and thanks to impressive materials engineering from several companies and institutions worldwide, REBCO is now made in kilometre-long, flexible tapes capable of carrying thousands of amps of current. In 2024, more than 10,000 km of this material was manufactured for the burgeoning fusion industry. This is impressive given that only 1000 km was made in  2020. However, a single compact tokamak will require up to 20,000 km of this REBCO-coated conductor for the magnet systems, and because the superconductor is so expensive to manufacture it is estimated that this would account for a considerable fraction of the total cost of a power plant.

Pushing superconductors to the limit

Another problem with REBCO materials is that the temperature below which they superconduct falls steeply once they’ve been irradiated with neutrons. Their lifetime in service will depend on the reactor design and amount of shielding, but research from the Vienna University of Technology in 2018 suggested that REBCO materials can withstand about a thousand times less damage than structural materials like steel before they start to lose performance (Supercond. Sci. Technol. 31 044006).

These experiments are currently being used by the designers of small fusion machines to assess how much shielding will be required, but they don’t tell the whole story. The 2018 study used neutrons from a fission reactor, which have a different spectrum of energies compared to fusion neutrons. They also did not reproduce the environment inside a compact tokamak, where the superconducting tapes will be at cryogenic temperatures, carrying high currents and under considerable strain from Lorentz forces generated in the magnets.

Even if we could get a sample of REBCO inside a working tokamak, the maximum runtime of current machines is measured in minutes, meaning we cannot do enough damage to test how susceptible the superconductor will be in a real fusion environment. The current record for fusion energy produced by a tokamak is 69 megajoules, achieved in a 5-second burst at the Joint European Torus (JET) tokamak in the UK.

Given the difficulty of using neutrons from fusion reactors, our team is looking for answers using ions instead. Ion irradiation is much more readily available, quicker to perform, and doesn’t make the samples radioactive. It is also possible to access a wide range of energies and ion species to tune the damage mechanisms in the material. The trouble is that because ions are charged they won’t interact with materials in exactly the same way as neutrons, so it is not clear if these particles cause the same kinds of damage or by the same mechanisms.

To find out, we first tried to directly image the crystalline structure of REBCO after both neutron and ion irradiation using transmission electron microscopy (TEM). When we compared the samples, we saw small amorphous regions in the neutron-irradiated REBCO where the crystal structure was destroyed (J. Microsc. 286 3), which are not observed after light ion irradiation (see Box 3 below).

3 Spot the difference

Irradiated REBCO crystal structure
(Courtesy: R.J. Nicholls, S. Diaz-Moreno, W. Iliffe et al. Communications Materials 3 52)

TEM images of REBCO before (a) and after (b) helium ion irradiation. The image on the right (c) shows only the positions of the copper, barium and rare-earth atoms – the oxygen atoms in the crystal lattice cannot be imaged using this technique. After ion irradiation, REBCO materials exhibit a lower superconducting transition temperature. However, the above images show no corresponding defects among these heavier atoms, indicating that defects caused by oxygen atoms being knocked out of place are responsible for this effect.

We believe these regions to be collision cascades generated initially by a single violent neutron impact that knocks an atom out of its place in the lattice with enough energy that the atom ricochets through the material, knocking other atoms from their positions. However, these amorphous regions are small, and superconducting currents should be able to pass around them, so it was likely that another effect was reducing the superconducting transition temperature.

Searching for clues

The TEM images didn’t show any other defects, so on our hunt to understand the effect of neutron irradiation, we instead thought about what we couldn’t see in the images. The TEM technique we used cannot resolve the oxygen atoms in REBCO because they are too light to scatter the electrons by large angles. Oxygen is also the most mobile atom in a REBCO material, which led us to think that oxygen point defects – single oxygen atoms that have been moved out of place and which are distributed randomly throughout the material – might be responsible for the drop in transition temperature.

In REBCO, the oxygen atoms are all bonded to copper, so the bonding environment of the copper atoms can be used to identify oxygen defects. To test this theory we switched from electrons to photons, using a technique called X-ray absorption spectroscopy. Here the sample is illuminated with X-rays that preferentially excite the copper atoms; the precise energies where absorption is highest indicate specific bonding arrangements, and therefore point to specific defects. We have started to identify the defects that are likely to be present in the irradiated samples, finding spectral changes that are consistent with oxygen atoms moving into unoccupied sites (Communications Materials 3 52).

We see very similar changes to the spectra when we irradiate with helium ions and neutrons, suggesting that similar defects are created in both cases (Supercond. Sci. Technol. 36 10LT01 ). This work has increased our confidence that light ions are a good proxy for neutron damage in REBCO superconductors, and that this damage is due to changes in the oxygen lattice.

Surrey Ion Beam Centre
The Surrey Ion Beam Centre allows users to carry out a wide variety of research using ion implantation, ion irradiation and ion beam analysis. (Courtesy: Surrey Ion Beam Centre)

Another advantage of ion irradiation is that, compared to neutrons, it is easier to access experimentally relevant cryogenic temperatures. Our experiments are performed at the Surrey Ion Beam Centre, where a cryocooler can be attached to the end of the ion accelerator, enabling us to recreate some of the conditions inside a fusion reactor.

We have shown that when REBCO is irradiated at cryogenic temperatures and then allowed to warm to room temperature, it recovers some of its superconducting properties (Supercond. Sci. Technol. 34 09LT01). We attribute this to annealing, where rearrangements of atoms occur in a material warmed below its melting point, smoothing out defects in the crystal lattice. We have shown that further recovery of a perfect superconducting lattice can be induced using careful heat treatments to avoid loss of oxygen from the samples (MRS Bulletin 48 710).

Lots more experiments are required to fully understand the effect of irradiation temperature on the degradation of REBCO. Our results indicate that room temperature and cryogenic irradiation with helium ions lead to a similar rate of degradation, but similar work by a group at the Massachusetts Institute of Technology (MIT) in the US using proton irradiation has found that the superconductor degrades more rapidly at cryogenic temperatures (Rev. Sci. Instrum. 95 063907).  The effect of other critical parameters like magnetic field and strain also still needs to be explored.

Towards net zero

The remarkable properties of REBCO high-temperature superconductors present new opportunities for designing fusion reactors that are substantially smaller (and cheaper) than traditional tokamaks, and which private companies ambitiously promise will enable the delivery of power to the grid on vastly accelerated timescales. REBCO tape can already be manufactured commercially with the required performance, but more research into the neutron damage that the magnets will sustain is needed to ensure they achieve the desired service lifetimes.

Scale-up of REBCO tape production is already happening at pace, and it is expected that this will drive down the cost of manufacture. That would open up extensive new applications, not only in fusion but also in electrical power technologies such as lossless transmission cables, for which the historically high cost of the superconducting material has proved prohibitive. Superconductors are also being introduced into wind turbine generators and magnet-based energy storage devices.

This symbiotic relationship between fusion and superconductor research could lead not only to the realization of clean fusion energy but also many other superconducting technologies that will contribute to the achievement of net zero.

The post The quest for better fusion reactors is putting a new generation of superconductors to the test appeared first on Physics World.

Precision radiosurgery: optimal dose delivery with cobalt-60

24 février 2025 à 16:17
Leksell Gamma Knife Esprit

Join us for an insightful webinar that delves into the role of Cobalt-60 in intracranial radiosurgery using Leksell Gamma Knife.

Through detailed discussions and expert insights, attendees will learn how Leksell Gamma Knife, powered by cobalt-60, has and continues to revolutionize the field of radiosurgery, offering patients a safe and effective treatment option.

Participants will gain a comprehensive understanding of the use of cobalt in medical applications, highlighting its significance, and learn more about the unique properties of cobalt-60. The webinar will explore the benefits of cobalt-60 in intracranial radiosurgery and why it is an ideal choice for treating brain lesions while minimizing damage to surrounding healthy tissue.

Don’t miss this opportunity to enhance your knowledge and stay at the forefront of medical advancements in radiosurgery!

Riccardo Bevilacqua

Riccardo Bevilacqua, a nuclear physicist with a PhD in neutron data for Generation IV nuclear reactors from Uppsala University, has worked as a scientist for the European Commission and at various international research facilities. His career has transitioned from research to radiation safety and back to medical physics, the field that first interested him as a student in Italy. Based in Stockholm, Sweden, he leads global radiation safety initiatives at Elekta. Outside of work, Riccardo is a father, a stepfather, and writes popular science articles on physics and radiation.

 

The post Precision radiosurgery: optimal dose delivery with cobalt-60 appeared first on Physics World.

Memory of previous contacts affects static electricity on materials

19 février 2025 à 17:20

Physicists in Austria have shown that the static electricity acquired by identical material samples can evolve differently over time, based on each sample’s history of contact with other samples. Led by Juan Carlos Sobarzo and Scott Waitukaitis at the Institute of Science and Technology Austria, the team hope that their experimental results could provide new insights into one of the oldest mysteries in physics.

Static electricity – also known as contact electrification or triboelectrification – has been studied for centuries. However, physicists still do not understand some aspects of how it works.

“It’s a seemingly simple effect,” Sobarzo explains. “Take two materials, make them touch and separate them, and they will have exchanged electric charge. Yet, the experiments are plagued by unpredictability.”

This mystery is epitomized by an early experiment carried out by the German-Swedish physicist Johan Wilcke in 1757. When glass was touched to paper, Wilcke found that glass gained a positive charge – while when paper was touched to sulphur, it would itself become positively charged.

Triboelectric series

Wilcke concluded that glass will become positively charged when touched to sulphur. This concept formed the basis of the triboelectric series, which ranks materials according to the charge they acquire when touched to another material.

Yet in the intervening centuries, the triboelectric series has proven to be notoriously inconsistent. Despite our vastly improved knowledge of material properties since the time of Wilcke’s experiments, even the latest attempts at ordering materials into triboelectric series have repeatedly failed to hold up to experimental scrutiny.

According to Sobarzo and colleagues, this problem has been compounded by the diverse array of variables associated with a material’s contact electrification. These include its electronic properties, pH, hydrophobicity, and mechanochemistry, to name just a few.

In their new study, the team approached the problem from a new perspective. “In order to reduce the number of variables, we decided to use identical materials,” Sobarzo describes. “Our samples are made of a soft polymer (PDMS) that I fabricate myself in the lab, cut from a single piece of material.”

Starting from scratch

For these identical materials, the team proposed that triboelectric properties could evolve over time as the samples were brought into contact with other, initially identical samples. If this were the case, it would allow the team to build a triboelectric series from scratch.

At first, the results seemed as unpredictable as ever. However, as the same set of samples underwent repeated contacts, the team found that their charging behaviour became more consistent, gradually forming a clear triboelectric series.

Initially, the researchers attempted to uncover correlations between this evolution and variations in the parameters of each sample – with no conclusive results. This led them to consider whether the triboelectric behaviour of each sample was affected by the act of contact itself.

Contact history

“Once we started to keep track of the contact history of our samples – that is, the number of times each sample has been contacted to others – the unpredictability we saw initially started to make sense,” Sobarzo explains. “The more contacts samples would have in their history, the more predictable they would behave. Not only that, but a sample with more contacts in its history will consistently charge negative against a sample with less contacts in its history.”

To explain the origins of this history-dependent behaviour, the team used a variety of techniques to analyse differences between the surfaces of uncontacted samples, and those which had already been contacted several times. Their measurements revealed just one difference between samples at different positions on the triboelectric series. This was their nanoscale surface roughness, which smoothed out as the samples experienced more contacts.

“I think the main take away is the importance of contact history and how it can subvert the widespread unpredictability observed in tribocharging,” Sobarzo says. “Contact is necessary for the effect to happen, it’s part of the name ‘contact electrification’, and yet it’s been widely overlooked.”

The team is still uncertain of how surface roughness could be affecting their samples’ place within the triboelectric series. However, their results could now provide the first steps towards a comprehensive model that can predict a material’s triboelectric properties based on its contact-induced surface roughness.

Sobarzo and colleagues are hopeful that such a model could enable robust methods for predicting the charges which any given pair of materials will acquire as they touch each other and separate. In turn, it may finally help to provide a solution to one of the most long-standing mysteries in physics.

The research is described in Nature.

The post Memory of previous contacts affects static electricity on materials appeared first on Physics World.

Wireless deep brain stimulation reverses Parkinson’s disease in mice

19 février 2025 à 14:00
Nanoparticle-mediated DBS reverses the symptoms of Parkinson’s disease
Nanoparticle-mediated DBS (I) Pulsed NIR irradiation triggers the thermal activation of TRPV1 channels. (II, III) NIR-induced β-syn peptide release into neurons disaggregates α-syn fibrils and thermally activates autophagy to clear the fibrils. This therapy effectively reverses the symptoms of Parkinson’s disease. Created using BioRender.com. (Courtesy: CC BY-NC/Science Advances 10.1126/sciadv.ado4927)

A photothermal, nanoparticle-based deep brain stimulation (DBS) system has successfully reversed the symptoms of Parkinson’s disease in laboratory mice. Under development by researchers in Beijing, China, the injectable, wireless DBS not only reversed neuron degeneration, but also boosted dopamine levels by clearing out the buildup of harmful fibrils around dopamine neurons. Following DBS treatment, diseased mice exhibited near comparable locomotive behaviour to that of healthy control mice.

Parkinson’s disease is a chronic brain disorder characterized by the degeneration of dopamine-producing neurons and the subsequent loss of dopamine in regions of the brain. Current DBS treatments focus on amplifying dopamine signalling and production, and may require permanent implantation of electrodes in the brain. Another approach under investigation is optogenetics, which involves gene modification. Both techniques increase dopamine levels and reduce Parkinsonian motor symptoms, but they do not restore degenerated neurons to stop disease progression.

Chunying Chen
Team leader Chunying Chen from the National Center for Nanoscience and Technology. (Courtesy: Chunying Chen)

The research team, at the National Center for Nanoscience and Technology of the Chinese Academy of Sciences, hypothesized that the heat-sensitive receptor TRPV1, which is highly expressed in dopamine neurons, could serve as a modulatory target to activate dopamine neurons in the substantia nigra of the midbrain. This region contains a large concentration of dopamine neurons and plays a crucial role in how the brain controls bodily movement.

Previous studies have shown that neuron degeneration is mainly driven by α-synuclein (α-syn) fibrils aggregating in the substantia nigra. Successful treatment, therefore, relies on removing this buildup, which requires restarting the intracellular autophagic process (in which a cell breaks down and removes unnecessary or dysfunctional components).

As such, principal investigator Chunying Chen and colleagues aimed to develop a therapeutic system that could reduce α-syn accumulation by simultaneously disaggregating α-syn fibrils and initiating the autophagic process. Their three-component DBS nanosystem, named ATB (Au@TRPV1@β-syn), combines photothermal gold nanoparticles, dopamine neuron-activating TRPV1 antibodies, and β-synuclein (β-syn) peptides that break down α-syn fibrils.

The ATB nanoparticles anchor to dopamine neurons through the TRPV1 receptor then, acting as nanoantennae, convert pulsed near-infrared (NIR) irradiation into heat. This activates the heat-sensitive TRPV1 receptor and restores degenerated dopamine neurons. At the same time, the nanoparticles release β-syn peptides that clear out α-syn fibril buildup and stimulate intracellular autophagy.

The researchers first tested the system in vitro in cellular models of Parkinson’s disease. They verified that under NIR laser irradiation, ATB nanoparticles activate neurons through photothermal stimulation by acting on the TRPV1 receptor, and that the nanoparticles successfully counteracted the α-syn preformed fibril (PFF)-induced death of dopamine neurons. In cell viability assays, neuron death was reduced from 68% to zero following ATB nanoparticle treatment.

Next, Chen and colleagues investigated mice with PFF-induced Parkinson’s disease. The DBS treatment begins with stereotactic injection of the ATB nanoparticles directly into the substantia nigra. They selected this approach over systemic administration because it provides precise targeting, avoids the blood–brain barrier and achieves a high local nanoparticle concentration with a low dose – potentially boosting treatment effectiveness.

Following injection of either nanoparticles or saline, the mice underwent pulsed NIR irradiation once a week for five weeks. The team then performed a series of tests to assess the animals’ motor abilities (after a week of training), comparing the performance of treated and untreated PFF mice, as well as healthy control mice. This included the rotarod test, which measures the time until the animal falls from a rotating rod that accelerates from 5 to 50 rpm over 5 min, and the pole test, which records the time for mice to crawl down a 75 cm-long pole.

Results of motor tests in mice
Motor tests Results of (left to right) rotarod, pole and open field tests, for control mice, mice with PFF-induced Parkinson’s disease, and PFF mice treated with ATB nanoparticles and NIR laser irradiation. (Courtesy: CC BY-NC/Science Advances 10.1126/sciadv.ado4927)

The team also performed an open field test to evaluate locomotive activity and exploratory behaviour. Here, mice are free to move around a 50 x 50 cm area, while their movement paths and the number of times they cross a central square are recorded. In all tests, mice treated with nanoparticles and irradiation significantly outperformed untreated controls, with near comparable performance to that of healthy mice.

Visualizing the dopamine neurons via immunohistochemistry revealed a reduction in neurons in PFF-treated mice compared with controls. This loss was reversed following nanoparticle treatment. Safety assessments determined that the treatment did not cause biochemical toxicity and that the heat generated by the NIR-irradiated ATB nanoparticles did not cause any considerable damage to the dopamine neurons.

Eight weeks after treatment, none of the mice experienced any toxicities. The ATB nanoparticles remained stable in the substantia nigra, with only a few particles migrating to cerebrospinal fluid. The researchers also report that the particles did not migrate to the heart, liver, spleen, lung or kidney and were not found in blood, urine or faeces.

Chen tells Physics World that having discovered the neuroprotective properties of gold clusters in Parkinson’s disease models, the researchers are now investigating therapeutic strategies based on gold clusters. Their current research focuses on engineering multifunctional gold cluster nanocomposites capable of simultaneously targeting α-syn aggregation, mitigating oxidative stress and promoting dopamine neuron regeneration.

The study is reported in Science Advances.

The post Wireless deep brain stimulation reverses Parkinson’s disease in mice appeared first on Physics World.

Inverse design configures magnon-based signal processor

18 février 2025 à 17:08

For the first time, inverse design has been used to engineer specific functionalities into a universal spin-wave-based device. It was created by Andrii Chumak and colleagues at Austria’s University of Vienna, who hope that their magnonic device could pave the way for substantial improvements to the energy efficiency of data processing techniques.

Inverse design is a fast-growing technique for developing new materials and devices that are specialized for highly specific uses. Starting from a desired functionality, inverse-design algorithms work backwards to find the best system or structure to achieve that functionality.

“Inverse design has a lot of potential because all we have to do is create a highly reconfigurable medium, and give it control over a computer,” Chumak explains. “It will use algorithms to get any functionality we want with the same device.”

One area where inverse design could be useful is creating systems for encoding and processing data using quantized spin waves called magnons. These quasiparticles are collective excitations that propagate in magnetic materials. Information can be encoded in the amplitude, phase, and frequency of magnons – which interact with radio-frequency (RF) signals.

Collective rotation

A magnon propagates through the collective precession of spins whose host atoms stay in place (no particles move), so it offers a highly energy-efficient way to transfer and process information. So far, however, such magnonics has been limited by existing approaches to the design of RF devices.

“Usually we use direct design – where we know how the spin waves behave in each component, and put the components together to get a working device,” Chumak explains. “But this sometimes takes years, and only works for one functionality.”

Recently, two theoretical studies considered how inverse design could be used to create magnonic devices. These took the physics of magnetic materials as a starting point to engineer a neural-network device.

Building on these results, Chumak’s team set out to show how that approach could be realized in the lab using a 7×7 array of independently-controlled current loops, each generating a small magnetic field.

Thin magnetic film

The team attached the array to a thin magnetic film of yttrium iron garnet. As RF spin waves propagated through the film, differences in the strengths of magnetic fields generated by the loops induced a variety of effects: including phase shifts, interference, and scattering. This in turn created complex patterns that could be tuned in real time by adjusting the current in each individual loop.

To make these adjustments, the researchers developed a pair of feedback-loop algorithms. These took a desired functionality as an input, and iteratively adjusted the current in each loop to optimize the spin wave propagation in the film for specific tasks.
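To make the idea concrete, here is a minimal sketch of a gradient-free feedback loop of the kind described above. It is an assumed illustration, not the Vienna group’s actual algorithm: it repeatedly perturbs a 7 × 7 array of loop currents and keeps any change that improves a measured figure of merit (here a toy objective standing in for a real measurement).

```python
# Sketch of an inverse-design feedback loop (illustrative, not the published algorithm).
import numpy as np

rng = np.random.default_rng(1)

def figure_of_merit(currents):
    """Placeholder for a real measurement, e.g. how well a chosen frequency band is suppressed."""
    return -np.sum((currents - 0.3) ** 2)   # toy objective with a known optimum

currents = np.zeros((7, 7))                 # initial loop currents (arbitrary units)
best = figure_of_merit(currents)

for step in range(5000):
    trial = currents + rng.normal(scale=0.05, size=(7, 7))  # small random tweak
    score = figure_of_merit(trial)
    if score > best:                        # keep the tweak only if performance improves
        currents, best = trial, score

print(f"final figure of merit: {best:.4f}")
```

In the real device the figure of merit would come from RF measurements on the magnetic film rather than from a formula.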

This approach enabled them to engineer two specific signal-processing functionalities in their device. These are a notch filter, which blocks a specific range of frequencies while allowing others to pass through; and a demultiplexer, which separates a combined signal into its distinct component signals. “These RF applications could potentially be used for applications including cellular communications, WiFi, and GPS,” says Chumak.

While the device is a success in terms of functionality, it has several drawbacks, explains Chumak. “The demonstrator is big and consumes a lot of energy, but it was important to understand whether this idea works or not. And we proved that it did.”

Through their future research, the team will now aim to reduce these energy requirements, and will also explore how inverse design could be applied more universally – perhaps paving the way for ultra-efficient magnonic logic gates.

The research is described in Nature Electronics.

The post Inverse design configures magnon-based signal processor appeared first on Physics World.

The muon’s magnetic moment exposes a huge hole in the Standard Model – unless it doesn’t

18 février 2025 à 12:00

A tense particle-physics showdown will reach new heights in 2025. Over the past 25 years researchers have seen a persistent and growing discrepancy between the theoretical predictions and experimental measurements of an inherent property of the muon – its anomalous magnetic moment. Known as the “muon g-2”, this property serves as a robust test of our understanding of particle physics.

Theoretical predictions of the muon g-2 are based on the Standard Model of particle physics (SM). This is our current best theory of fundamental forces and particles, but it does not agree with everything observed in the universe. While the tensions between g-2 theory and experiment have challenged the foundations of particle physics and potentially offer a tantalizing glimpse of new physics beyond the SM, it turns out that there is more than one way to make SM predictions.

In recent years, a new SM prediction of the muon g-2 has emerged that questions whether the discrepancy exists at all, suggesting that there is no new physics in the muon g-2. For the particle-physics community, the stakes are higher than ever.

Rising to the occasion?

To understand how this discrepancy in the value of the muon g-2 arises, imagine you’re baking some cupcakes. A well-known and trusted recipe tells you that by accurately weighing the ingredients using your kitchen scales you will make enough batter to give you 10 identical cupcakes of a given size. However, to your surprise, after portioning out the batter, you end up with 11 cakes of the expected size instead of 10.

What has happened? Maybe your scales are imprecise. You check and find that you’re confident that your measurements are accurate to 1%. This means each of your 10 cupcakes could be 1% larger than they should be, or you could have enough leftover mixture to make 1/10th of an extra cupcake, but there’s no way you should have a whole extra cupcake.

You repeat the process several times, always with the same outcome. The recipe clearly states that you should have batter for 10 cupcakes, but you always end up with 11. Not only do you now have a worrying number of cupcakes to eat but, thanks to all your repeated experiments, you’re more confident that you are following all the steps and measurements accurately. You start to wonder whether something is missing from the recipe itself.

Before you jump to conclusions, it’s worth checking that there isn’t something systematically wrong with your scales. You ask several friends to follow the same recipe using their own scales. Amazingly, when each friend follows the recipe, they all end up with 11 cupcakes. You are more sure than ever that the cupcake recipe isn’t quite right.

You’re really excited now, as you have corroborating evidence that something is amiss. This is unprecedented, as the recipe is considered sacrosanct. Cupcakes have never been made differently and if this recipe is incomplete there could be other, larger implications. What if all cake recipes are incomplete? These claims are causing a stir, and people are starting to take notice.

Close-up of weighing scale with small cakes on top
Food for thought Just as a trusted cake recipe can be relied on to produce reliable results, so the Standard Model has been incredibly successful at predicting the behaviour of fundamental particles and forces. However, there are instances where the Standard Model breaks down, prompting scientists to hunt for new physics that will explain this mystery. (Courtesy: iStock/Shutter2U)

Then, a new friend comes along and explains that they checked the recipe by simulating baking the cupcakes using a computer. This approach doesn’t need physical scales, but it uses the same recipe. To your shock, the simulation produces 11 cupcakes of the expected size, with a precision as good as when you baked them for real.

There is no explaining this. You were certain that the recipe was missing something crucial, but now a computer simulation is telling you that the recipe has always predicted 11 cupcakes.

Of course, one extra cupcake isn’t going to change the world. But what if instead of cake, the recipe was particle physics’ best and most-tested theory of everything, and the ingredients were the known particles and forces? And what if the number of cupcakes was a measurable outcome of those particles interacting, one hurtling towards a pivotal bake-off between theory and experiment?

What is the muon g-2?

The muon is an elementary particle in the SM with half-integer spin; it is similar to the electron but some 207 times heavier. Muons interact directly with other SM particles via electromagnetism (photons) and the weak force (W and Z bosons, and the Higgs particle). All quarks and leptons – such as electrons and muons – have a magnetic moment due to their intrinsic angular momentum or “spin”. Quantum theory dictates that the magnetic moment is related to the spin by a quantity known as the “g-factor”. Initially, this value was predicted to be exactly g = 2 for both the electron and the muon.

However, these calculations did not take into account the effects of “radiative corrections” – the continuous emission and re-absorption of short-lived “virtual particles” (see box) by the electron or muon – which increases g by about 0.1%. This seemingly minute difference is referred to as “anomalous g-factor”, aµ = (g – 2)/2. As well as the electromagnetic and weak interactions, the muon’s magnetic moment also receives contributions from the strong force, even though the muon does not itself participate in strong interactions. The strong contributions arise through the muon’s interaction with the photon, which in turn interacts with quarks. The quarks then themselves interact via the strong-force mediator, the gluon.

This effect, and any discrepancies, are of particular interest to physicists because the g-factor acts as a probe of the existence of other particles – both known particles such as electrons and photons, and other, as yet undiscovered, particles that are not part of the SM.

“Virtual” particles

Illustration of subatomic particles in the Standard Model
(Courtesy: CERN)

The Standard Model of particle physics (SM) describes the basic building blocks – the particles and forces – of our universe. It includes the elementary particles – quarks and leptons – that make up all known matter as well as the force-carrying particles, or bosons, that influence the quarks and leptons. The SM also explains three of the four fundamental forces that govern the universe – electromagnetism, the strong force and the weak force. Gravity, however, is not adequately explained within the model.

“Virtual” particles arise from the universe’s underlying, non-zero background energy, known as the vacuum energy. Heisenberg’s uncertainty principle states that it is impossible to simultaneously measure both the position and momentum of a particle. A non-zero energy always exists for “something” to arise from “nothing” if the “something” returns to “nothing” in a very short interval – before it can be observed. Therefore, at every point in space and time, virtual particles are rapidly created and annihilated.

The “g-factor” in muon g-2 represents the total value of the magnetic moment of the muon, including all corrections from the vacuum. If there were no virtual interactions, the muon’s g-factor would be exactly g = 2. The first confirmation of g > 2 came in 1948 when Julian Schwinger calculated the simplest contribution from a virtual photon interacting with an electron (Phys. Rev. 73 416). His famous result explained a measurement from the same year that found the electron’s g-factor to be slightly larger than 2 (Phys. Rev. 74 250). This confirmed the existence of virtual particles and paved the way for the invention of relativistic quantum field theories like the SM.
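Schwinger’s correction has a famously compact form. With the fine-structure constant α ≈ 1/137, it gives (a standard result, quoted here for reference):

```latex
\[
a = \frac{g-2}{2} = \frac{\alpha}{2\pi} \approx 0.00116
\]
```

which is the roughly 0.1% shift of the g-factor mentioned above.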

The muon, the (lighter) electron and the (heavier) tau lepton all have an anomalous magnetic moment.  However, because the muon is heavier than the electron, the impact of heavy new particles on the muon g-2 is amplified. While tau leptons are even heavier than muons, tau leptons are extremely short-lived (muons have a lifetime of 2.2 μs, while the lifetime of tau leptons is 0.29 ns), making measurements impracticable with current technologies. Neither too light nor too heavy, the muon is the perfect tool to search for new physics.

New physics beyond the Standard Model (commonly known as BSM physics) is sorely needed because, despite its many successes, the SM does not provide the answers to all that we observe in the universe, such as the existence of dark matter. “We know there is something beyond the predictions of the Standard Model, we just don’t know where,” says Patrick Koppenburg, a physicist at the Dutch National Institute for Subatomic Physics (Nikhef) in the Netherlands, who works on the LHCb Experiment at CERN and on future collider experiments. “This new physics will provide new particles that we haven’t observed yet. The LHC collider experiments are actively searching for such particles but haven’t found anything to date.”

Testing the Standard Model: experiment vs theory

In 2021 the Muon g-2 experiment at Fermilab in the US captured the world’s attention with the release of its first result (Phys. Rev. Lett. 126 141801). It had directly measured the muon g-2 to an unprecedented precision of 460 parts per billion (ppb). While the LHC experiments attempt to produce and detect BSM particles directly, the Muon g-2 experiment takes a different, complementary approach – it compares precision measurements of particles with SM predictions to expose discrepancies that could be due to new physics. In the Muon g-2 experiment, muons travel round and round a circular ring, confined by a strong magnetic field. In this field, the muons precess like spinning tops (see image at the top of this article). The frequency of this precession is set by the anomalous magnetic moment, and it can be extracted by detecting where and when the muons decay.
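In the simplest picture (ignoring corrections, such as those from electric fields, that the experiment carefully accounts for), the rate at which the muon’s spin pulls ahead of its momentum direction is proportional to the anomaly and the field strength. This textbook relation is given for orientation and is not quoted from the article:

```latex
\[
\omega_{a} = a_{\mu}\,\frac{e B}{m_{\mu}}
\]
```

so a precise measurement of this anomalous precession frequency, together with the field B, yields aµ.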

The Muon g-2 experiment
Magnetic muons The Muon g-2 experiment at the Fermi National Accelerator Laboratory. (Courtesy: Reidar Hahn/Fermilab, US Department of Energy)

Having led the experiment as manager and run co-ordinator, I can attest that Muon g-2 is an awe-inspiring feat of science and engineering, involving more than 200 scientists from 35 institutions in seven countries. I have been involved in both the operation of the experiment and the analysis of results. “A lot of my favourite memories from g-2 are ‘firsts’,” says Saskia Charity, a researcher at the University of Liverpool in the UK and a principal analyser of the Muon g-2 experiment’s results. “The first time we powered the magnet; the first time we stored muons and saw particles in the detectors; and the first time we released a result in 2021.”

The Muon g-2 result turned heads because the measured value was significantly higher than the best SM prediction (at that time) of the muon g-2 (Phys. Rep. 887 1). This SM prediction was the culmination of years of collaborative work by the Muon g-2 Theory Initiative, an international consortium of roughly 200 theoretical physicists (myself among them). In 2020 the collaboration published one community-approved number for the muon g-2. This value had a precision comparable to the Fermilab experiment – resulting in a deviation between the two that has a chance of 1 in 40,000 of being a statistical fluke  – making the discrepancy all the more intriguing.

While much of the SM prediction, including contributions from virtual photons and leptons, can be calculated from first principles alone, the strong force contributions involving quarks and gluons are more difficult. However, there is a mathematical link between the strong force contributions to muon g-2 and the probability of experimentally producing hadrons (composite particles made of quarks) from electron–positron annihilation. These so-called “hadronic processes” are something we can observe with existing particle colliders; much like weighing cupcake ingredients, these measurements determine how much each hadronic process contributes to the SM correction to the muon g-2. This is the approach used to calculate the 2020 result, producing what is called a “data-driven” prediction.
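Schematically, that mathematical link is a dispersion integral over the measured hadronic cross-section. One common way of writing the leading-order hadronic vacuum-polarization contribution is the standard form below (given for orientation, not quoted from the article):

```latex
\[
a_\mu^{\mathrm{HVP,\,LO}}
  = \frac{\alpha^{2}}{3\pi^{2}}
    \int_{m_{\pi}^{2}}^{\infty} \frac{\mathrm{d}s}{s}\, K(s)\, R(s),
\qquad
R(s) = \frac{\sigma(e^{+}e^{-} \to \mathrm{hadrons})}{\sigma(e^{+}e^{-} \to \mu^{+}\mu^{-})}
\]
```

where K(s) is a known kernel that weights the low-energy hadronic processes most heavily, which is why the measurements described below matter so much.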

Measurements were performed at many experiments, including the BaBar Experiment at the Stanford Linear Accelerator Center (SLAC) in the US, the BESIII Experiment at the Beijing Electron–Positron Collider II in China, the KLOE Experiment at DAFNE Collider in Italy, and the SND and CMD-2 experiments at the VEPP-2000 electron–positron collider in Russia. These different experiments measured a complete catalogue of hadronic processes in different ways over several decades. Myself and other members of the Muon g-2 Theory Initiative combined these findings to produce the data-driven SM prediction of the muon g-2. There was (and still is) strong, corroborating evidence that this SM prediction is reliable.

At the time, this discrepancy seemed to indicate, with a very high level of confidence, the existence of new physics. It seemed more likely than ever that BSM physics had finally been detected in a laboratory.

1 Eyes on the prize

Chart of muon g-2 results from 5 different experiments
(Courtesy: Muon g-2 collaboration/IOP Publishing)

Over the last two decades, direct experimental measurements of the muon g-2 have become much more precise. The predecessor to the Fermilab experiment was based at Brookhaven National Laboratory in the US, and when that experiment ended, the magnetic ring in which the muons are confined was transported to its current home at Fermilab.

That was until the release of the first SM prediction of the muon g-2 using an alternative method called lattice QCD (Nature 593 51). Like the data-driven prediction, lattice QCD is a way to tackle the tricky hadronic contributions, but it doesn’t use experimental results as a basis for the calculation. Instead, it treats the universe as a finite box containing a grid of points (a lattice) that represent points in space and time. Virtual quarks and gluons are simulated inside this box, and the results are extrapolated to a universe of infinite size and continuous space and time. This method requires a huge amount of computer power to arrive at an accurate, physical result but it is a powerful tool that directly simulates the strong-force contributions to the muon g-2.
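The very last step of such a calculation, the extrapolation to continuous space-time, can be illustrated with a toy fit. The numbers below are invented for illustration and are not any collaboration’s data: results computed at several lattice spacings a are fitted to y(a) = y₀ + c a² and read off at a = 0.

```python
# Toy continuum extrapolation of a lattice QCD result (invented numbers).
import numpy as np

a = np.array([0.12, 0.09, 0.06, 0.04])      # lattice spacings in fm (illustrative)
y = np.array([698.0, 701.5, 703.8, 704.9])  # toy results, e.g. a_mu^HVP x 1e10

slope, y0 = np.polyfit(a**2, y, 1)          # assume leading discretization errors scale as a^2
print(f"continuum limit (a -> 0): {y0:.1f}")
```

In practice the collaborations also extrapolate to infinite volume and physical quark masses, with a careful error budget at every step.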

The researchers who published this new result are also part of the Muon g-2 Theory Initiative. Several other groups within the consortium have since published lattice QCD calculations, producing values for g-2 that are in good agreement with each other and the experiment at Fermilab. “Striking agreement, to better than 1%, is seen between results from multiple groups,” says Christine Davies of the University of Glasgow in the UK, a member of the High-precision lattice QCD (HPQCD) collaboration within the Muon g-2 Theory Initiative. “A range of methods have been developed to improve control of uncertainties meaning further, more complete, lattice QCD calculations are now appearing. The aim is for several results with 0.5% uncertainty in the near future.”

If these lattice QCD predictions are the true SM value, there is no muon g-2 discrepancy between experiment and theory. However, this would conflict with the decades of experimental measurements of hadronic processes that were used to produce the data-driven SM prediction.

To make the situation even more confusing, a new experimental measurement of the muon g-2’s dominant hadronic process was released in 2023 by the CMD-3 experiment (Phys. Rev. D 109 112002). This result is significantly larger than all the other, older measurements of the same process, including its own predecessor experiment, CMD-2 (Phys. Lett. B 648 28). With this new value, the data-driven SM prediction of aµ = (g – 2)/2 is in agreement with the Muon g-2 experiment and lattice QCD. Over the last few years, the CMD-3 measurements (and all older measurements) have been scrutinized in great detail, but the source of the difference between the measurements remains unknown.

2 Which Standard Model?

Chart of the Muon g-2 experiment results versus the various Standard Model predictions
(Courtesy: Alex Keshavarzi/IOP Publishing)

Summary of the four values of the anomalous magnetic moment of the muon aμ that have been obtained from different experiments and models. The 2020 and CMD-3 predictions were both obtained using a data-driven approach. The lattice QCD value is a theoretical prediction and the Muon g-2 experiment value was measured at Fermilab in the US. The positions of the points with respect to the y axis have been chosen for clarity only.

Since then, the Muon g-2 experiment at Fermilab has confirmed and improved on that first result to a precision of 200 ppb (Phys. Rev. Lett. 131 161802). “Our second result based on the data from 2019 and 2020 has been the first step in increasing the precision of the magnetic anomaly measurement,” says Peter Winter of Argonne National Laboratory in the US and co-spokesperson for the Muon g-2 experiment.

The new result is in full agreement with the SM predictions from lattice QCD and the data-driven prediction based on CMD-3’s measurement. However, with the increased precision, it now disagrees with the 2020 SM prediction by even more than in 2021.

The community therefore faces a conundrum. The muon g-2 either exhibits a much-needed discovery of BSM physics or a remarkable, multi-method confirmation of the Standard Model.

On your marks, get set, bake!

In 2025 the Muon g-2 experiment at Fermilab will release its final result. “It will be exciting to see our final result for g-2 in 2025 that will lead to the ultimate precision of 140 parts-per-billion,” says Winter. “This measurement of g-2 will be a benchmark result for years to come for any extension to the Standard Model of particle physics.” Assuming this agrees with the previous results, it will further widen the discrepancy with the 2020 data-driven SM prediction.

For the lattice QCD SM prediction, the many groups calculating the muon’s anomalous magnetic moment have since corroborated and improved the precision of the first lattice QCD result. Their next task is to combine the results from the various lattice QCD predictions to arrive at one SM prediction from lattice QCD. While this is not a trivial task, the agreement between the groups means a single lattice QCD result with improved precision is likely within the next year, increasing the tension with the 2020 data-driven SM prediction.

New, robust experimental measurements of the muon g-2’s dominant hadronic processes are also expected over the next couple of years. The previous experiments will update their measurements with more precise results and a newcomer measurement is expected from the Belle-II experiment in Japan. It is hoped that they will confirm either the catalogue of older hadronic measurements or the newer CMD-3 result. Should they confirm the older data, the potential for new physics in the muon g-2 lives on, but the discrepancy with the lattice QCD predictions will still need to be investigated. If the CMD-3 measurement is confirmed, it is likely the older data will be superseded, and the muon g-2 will have once again confirmed the Standard Model as the best and most resilient description of the fundamental nature of our universe.

Large group of people stood holding a banner that says Muon g-2
International consensus The Muon g-2 Theory Initiative pictured at their seventh annual plenary workshop at the KEK Laboratory, Japan in September 2024. (Courtesy: KEK-IPNS)

The task before the Muon g-2 Theory Initiative is to solve these dilemmas and update the 2020 data-driven SM prediction. Two new publications are planned. The first will be released in 2025 (to coincide with the new experimental result from Fermilab). This will describe the current status and ongoing body of work, but a full, updated SM prediction will have to wait for the second paper, likely to be published several years later.

It’s going to be an exciting few years. Being part of both the experiment and the theory means I have been privileged to see the process from both sides. For the SM prediction, much work is still to be done but science with this much at stake cannot be rushed and it will be fascinating work. I’m looking forward to the journey just as much as the outcome.

The post The muon’s magnetic moment exposes a huge hole in the Standard Model – unless it doesn’t appeared first on Physics World.

Ultra-high-energy neutrino detection opens a new window on the universe

17 février 2025 à 17:58

Using an observatory located deep beneath the Mediterranean Sea, an international team has detected an ultra-high-energy cosmic neutrino with an energy greater than 100 PeV, which is well above the previous record. Made by the KM3NeT neutrino observatory, such detections could enhance our understanding of cosmic neutrino sources or reveal new physics.

“We expect neutrinos to originate from very powerful cosmic accelerators that also accelerate other particles, but which have never been clearly identified in the sky. Neutrinos may provide the opportunity to identify these sources,” explains Paul de Jong, a professor at the University of Amsterdam and spokesperson for the KM3NeT collaboration. “Apart from that, the properties of neutrinos themselves have not been studied as well as those of other particles, and further studies of neutrinos could open up possibilities to detect new physics beyond the Standard Model.”

Neutrinos are subatomic particles with masses less than a millionth of that of electrons. They are electrically neutral and interact rarely with matter via the weak force. As a result, neutrinos can travel vast cosmic distances without being deflected by magnetic fields or being absorbed by interstellar material. “[This] makes them very good probes for the study of energetic processes far away in our universe,” de Jong explains.

Scientists expect high-energy neutrinos to come from powerful astrophysical accelerators – objects that are also expected to produce high-energy cosmic rays and gamma rays. These objects include active galactic nuclei powered by supermassive black holes, gamma-ray bursts, and other extreme cosmic events. However, pinpointing such accelerators remains challenging because their cosmic rays are deflected by magnetic fields as they travel to Earth, while their gamma rays can be absorbed on their journey. Neutrinos, however, move in straight lines and this makes them unique messengers that could point back to astrophysical accelerators.

Underwater detection

Because they rarely interact, neutrinos are studied using large-volume detectors. The largest observatories use natural environments such as deep water or ice, which are shielded from most background noise including cosmic rays.

The KM3NeT observatory is situated on the Mediterranean seabed, with detectors more than 2000 m below the surface. Occasionally, a high-energy neutrino will collide with a water molecule, producing a secondary charged particle. This particle moves faster than the speed of light in water, creating a faint flash of Cherenkov radiation. The detector’s array of optical sensors capture these flashes, allowing researchers to reconstruct the neutrino’s direction and energy.
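The geometry of that flash is what makes the reconstruction possible: the Cherenkov light is emitted on a cone whose opening angle depends only on the water’s refractive index n and the particle’s speed β = v/c. The numbers below assume seawater and a relativistic particle; they are standard values, not figures from the article.

```latex
\[
\cos\theta_{\mathrm{C}} = \frac{1}{n\beta},
\qquad n \approx 1.35,\ \beta \approx 1
\;\;\Rightarrow\;\;
\theta_{\mathrm{C}} \approx 42^{\circ}
\]
```

Because this angle is fixed, the pattern of hit times across the sensor array pins down the direction of the secondary particle, and hence of the parent neutrino.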

KM3NeT has already identified many high-energy neutrinos, but in 2023 it detected a neutrino with an energy far in excess of any previously detected cosmic neutrino. Now, analysis by de Jong and colleagues puts this neutrino’s energy at about 30 times higher than that of the previous record-holder, which was spotted by the IceCube observatory at the South Pole. “It is a surprising and unexpected event,” he says.

Scientists suspect that such a neutrino could originate from the most powerful cosmic accelerators, such as blazars. The neutrino could also be cosmogenic, being produced when ultra-high-energy cosmic rays interact with the cosmic microwave background radiation.

New class of astrophysical messengers

While this single neutrino has not been traced back to a specific source, it opens the possibility of studying ultra-high-energy neutrinos as a new class of astrophysical messengers. “Regardless of what the source is, our event is spectacular: it tells us that either there are cosmic accelerators that result in these extreme energies, or this could be the first cosmogenic neutrino detected,” de Jong notes.

Neutrino experts not associated with KM3NeT agree on the significance of the observation. Elisa Resconi at the Technical University of Munich tells Physics World, “This discovery confirms that cosmic neutrinos extend to unprecedented energies, suggesting that somewhere in the universe, extreme astrophysical processes – or even exotic phenomena like decaying dark matter – could be producing them”.

Francis Halzen at the University of Wisconsin-Madison, who is IceCube’s principal investigator, adds, “Observing neutrinos with a million times the energy of those produced at Fermilab (ten million for the KM3NeT event!) is a great opportunity to reveal the physics beyond the Standard Model associated with neutrino mass.”

With ongoing upgrades to KM3NeT and other neutrino observatories, scientists hope to detect more of these rare but highly informative particles, bringing them closer to answering fundamental questions in astrophysics.

Resconi explains, “With a global network of neutrino telescopes, we will detect more of these ultrahigh-energy neutrinos, map the sky in neutrinos, and identify their sources. Once we do, we will be able to use these cosmic messengers to probe fundamental physics in energy regimes far beyond what is possible on Earth.”

The observation is described in Nature.

The post Ultra-high-energy neutrino detection opens a new window on the universe appeared first on Physics World.

Modelling the motion of confined crowds could help prevent crushing incidents

17 février 2025 à 12:53

Researchers led by Denis Bartolo, a physicist at the École Normale Supérieure (ENS) of Lyon, France, have constructed a theoretical model that forecasts the movements of confined, densely packed crowds. The study could help predict potentially life-threatening crowd behaviour in confined environments. 

To investigate what makes some confined crowds safe and others dangerous, Bartolo and colleagues – also from the Université Claude Bernard Lyon 1 in France and the Universidad de Navarra in Pamplona, Spain – studied the Chupinazo opening ceremony of the San Fermín Festival in Pamplona in four different years (2019, 2022, 2023 and 2024).

The team analysed high-resolution video captured from two locations above the gathering of around 5000 people as the crowd grew in the 50 × 20 m city plaza, swelling from two to six people per square metre and ultimately peaking at local densities of nine per square metre. A machine-learning algorithm enabled automated detection of the position of each person’s head, from which localized crowd density was then calculated.
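As a rough illustration of that post-processing step (a minimal sketch with placeholder data, not the authors’ actual pipeline), a local density map can be obtained by binning detected head positions onto a grid over the plaza and smoothing:

```python
# An illustrative sketch (not the authors' pipeline) of turning detected
# head positions into a local density map: bin the positions onto a grid
# over the plaza, then smooth with a Gaussian kernel so each cell reports
# people per square metre. Head coordinates here are random placeholders.
import numpy as np
from scipy.ndimage import gaussian_filter

PLAZA_X, PLAZA_Y = 50.0, 20.0      # plaza dimensions in metres
CELL = 0.5                         # grid resolution in metres

rng = np.random.default_rng(0)
heads = rng.uniform([0.0, 0.0], [PLAZA_X, PLAZA_Y], size=(5000, 2))  # detected head positions

nx, ny = int(PLAZA_X / CELL), int(PLAZA_Y / CELL)
counts, _, _ = np.histogram2d(heads[:, 0], heads[:, 1],
                              bins=[nx, ny],
                              range=[[0.0, PLAZA_X], [0.0, PLAZA_Y]])

density = gaussian_filter(counts, sigma=2.0) / CELL**2   # people per square metre
print(f"peak local density: {density.max():.1f} people per m^2")
```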

“The Chupinazo is an ideal experimental platform to study the spontaneous motion of crowds, as it repeats from one year to the next with approximately the same amount of people, and the geometry of the plaza remains the same,” says theoretical physicist Benjamin Guiselin, a study co-author formerly from ENS Lyon and now at the Université de Montpellier.

In a first for crowd studies, the researchers treated the densely packed crowd as a continuum like water, and “constructed a mechanics theory for the crowd movement without making any behavioural assumptions on the motion of individuals,” Guiselin tells Physics World.

Their studies, recently described in Nature, revealed a change in behaviour akin to a phase transition when the crowd density passed a critical threshold of four individuals per square metre. Below this density the crowd remained relatively inactive. But above that threshold it started moving, exhibiting localized oscillations with a period of about 18 s that occurred without any external guidance such as corralling.

Unlike a back-and-forth oscillation, this motion – which involves hundreds of people moving over several metres – has an almost circular trajectory that shows chirality (or handedness) and a 50:50 chance of turning to either the right or left. “Our model captures the fact that the chirality is not fixed. Instead it emerges in the dynamics: the crowd spontaneously decides between clockwise or counter-clockwise circular motion,” explains Guiselin, who worked on the mathematical modelling.

“The dynamics is complicated because if the crowd is pushed, then it will react by creating a propulsion force in the direction in which it is pushed: we’ve called this the windsock effect. But the crowd also has a resistance mechanism, a counter-reactive effect, which is a propulsive force opposite to the direction of motion: what we have called the weathercock effect,” continues Guiselin, adding that it is these two competing mechanisms in conjunction with the confined situation that give rise to the circular oscillations.

The team observed similar oscillations in footage of the 2010 tragedy at the Love Parade music festival in Duisburg, Germany, in which 21 people died and several hundred were injured during a crush.

Early results suggest that the oscillation period for such crowds is proportional to the size of the space they are confined in. But the team want to test their theory at other events, and learn more about both the circular oscillations and the compression waves they observed when people started pushing their way into the already crowded square at the Chupinazo.

If their model is proven to work for all densely packed, confined crowds, it could in principle form the basis for a crowd management protocol. “You could monitor crowd motion with a camera, and as soon as you detect these oscillations emerging try to evacuate the space, because we see these oscillations well before larger amplitude motions set in,” Guiselin explains.
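A minimal sketch of the kind of camera-based check Guiselin describes might look like the following, with a synthetic velocity signal standing in for real tracking data and purely illustrative thresholds:

```python
# A minimal sketch of the kind of monitoring check described above:
# compute the power spectrum of the mean crowd velocity and flag a
# dominant oscillation period in the range reported at the Chupinazo.
# The velocity signal is synthetic and the thresholds are illustrative.
import numpy as np

fs = 5.0                                 # assumed sampling rate of tracking data, in Hz
t = np.arange(0.0, 300.0, 1.0 / fs)      # five minutes of monitoring
rng = np.random.default_rng(1)
velocity = 0.3 * np.sin(2 * np.pi * t / 18.0) + 0.05 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(velocity - velocity.mean()))**2
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
dominant_period = 1.0 / freqs[np.argmax(spectrum[1:]) + 1]   # skip the zero-frequency bin

if 10.0 < dominant_period < 30.0:
    print(f"warning: collective oscillation detected, period = {dominant_period:.1f} s")
```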

The post Modelling the motion of confined crowds could help prevent crushing incidents appeared first on Physics World.

US science in chaos as impact of Trump’s executive orders sinks in

12 février 2025 à 18:14

Scientists across the US have been left reeling after a spate of executive orders from US President Donald Trump has led to research funding being slashed, staff being told to quit and key programmes being withdrawn. In response to the orders, government departments and external organizations have axed diversity, equity and inclusion (DEI) programmes, scrubbed mentions of climate change from websites, and paused research grants pending tests for compliance with the new administration’s goals.

Since taking up office on 20 January, Trump has signed dozens of executive orders. One ordered the closure of the US Agency for International Development, which has supported medical and other missions worldwide for more than six decades. The administration said it was withdrawing almost all of the agency’s funds and wanted to sack its entire workforce. A federal judge has temporarily blocked the plans, saying they may violate the US constitution, which reserves decisions on funding to Congress.

Individual science agencies are under threat too. Politico reported that the Trump administration has asked the National Science Foundation (NSF), which funds much US basic and applied research, to lay off between a quarter and a half of its staff in the next two months. Another report suggests there are plans to cut the agency’s annual budget from roughly $9bn to $3bn. Meanwhile, former officials of the National Oceanic and Atmospheric Administration (NOAA) told CBS News that half its staff could be sacked and its budget slashed by 30%.

Even before they had learnt of plans to cut its staff and budget, officials at the NSF were starting to examine details of thousands of grants it had awarded for references to DEI, climate change and other topics that Trump does not like. The swiftness of the announcements has caused chaos, with recipients of grants suddenly finding themselves unable to access the NSF’s award cash management service, which holds grantees’ funds, including their salaries.

NSF bosses have taken some steps to reassure grantees. “Our top priority is resuming our funding actions and services to the research community and our stakeholders,” NSF spokesperson Mike England told Physics World in late January. In what is a highly fluid situation, there was some respite on 2 February when the NSF announced that access had been restored with the system able to accept payment requests.

“Un-American” actions

Trump’s anti-DEI orders have caused shockwaves throughout US science. According to 404 Media, NASA staff were told on 22 January to “drop everything” to remove mentions of DEI, Indigenous people, environmental justice and women in leadership from public websites. Another victim has been NASA’s Here to Observe programme, which links undergraduates from under-represented groups with scientists who oversee NASA’s missions. Science reported that contracts for half the scientists involved in the programme had been cancelled by the end of January.

It is still unclear, however, what impact the Trump administration’s DEI rules will have on the make-up of NASA’s astronaut corps. Since choosing its first female astronauts in 1978, NASA has sought to make the corps more representative of US demographics. How exactly the agency should move forward will fall to Jared Isaacman, the space entrepreneur and commercial astronaut who has been nominated as NASA’s next administrator.

Anti-DEI initiatives have hit individual research labs too. Physics World understands that Fermilab – the US’s premier particle-physics lab – suspended its DEI office and its women in engineering group in January. Meanwhile, the Fermilab LGBTQ+ group, called Spectrum, was ordered to cease all activities and its mailing list was deleted. Even the rainbow “Pride” flag was removed from the lab’s iconic Wilson Hall.

Some US learned societies, despite being formally unaffiliated with the government, have also responded to pressure from the new administration. The American Geophysical Union (AGU) removed the word “diversity” from its diversity and inclusion page, although it backtracked after criticism of the move.

There was also some confusion when the American Chemical Society appeared to have removed its webpage on diversity and inclusion; in fact, the society had published a new page but failed to put a redirect in place. “Inclusion and Belonging is a core value of the American Chemical Society, and we remain committed to creating environments where people from diverse backgrounds, cultures, perspectives and experiences thrive,” a spokesperson told Physics World. “We know the broken link caused confusion and some alarm, and we apologize.”

For the time being, the American Physical Society’s page on inclusion remains live, as does that of the American Institute of Physics.

Dismantling all federal DEI programmes and related activities will damage lives and careers of millions of American women and men

Neal Lane, Rice University

Such a response – which some opponents denounce as going beyond what is legally required for fear of repercussions if no action is taken – has left it up to individual leaders to underline the importance of diversity in science. Neal Lane, a former science adviser to President Clinton, told Physics World that “dismantling all federal DEI programmes and related activities will damage lives and careers of millions of American women and men, including scientists, engineers, technical workers – essentially everyone who contributes to advancing America’s global leadership in science and technology”.

Lane, who is now a science and technology policy fellow at Rice University in Texas, thinks that the new administration’s anti-DEI actions “will weaken the US” and believes they should be considered “un-American”. “The purpose of DEI policies, programmes and activities is to ensure all Americans have the opportunity to participate and the country is able to benefit from their participation,” he says.

One senior physicist at a US university, who wishes to remain anonymous, told Physics World that those behind the executive orders are relying on institutions and individuals to “comply in advance” with what they perceive to be the spirit of the orders. “They are relying on people to ignore the fine print, which says that executive orders can’t and don’t overwrite existing law. But it is up to scientists to do the reading — and to follow our consciences. More than universities are on the line: the lives of our students and colleagues are on the line.”

Education turmoil

Another target of the Trump administration is the US Department of Education, which was set up in 1979 to oversee everything from pre-school to postgraduate education. It has already put dozens of its civil servants on leave, ostensibly because their work involves DEI issues. Meanwhile, the withholding of funds has led to the cancellation of scientific meetings, mostly focusing on medicine and life sciences, that were scheduled in the US for late January and early February.

Colleges and universities in the US have also reacted to Trump’s anti-DEI executive order. Academic divisions at Harvard University and the Massachusetts Institute of Technology, for example, have already indicated that they will no longer require applicants for jobs to indicate how they plan to advance the goals of DEI. Northeastern University in Boston has removed the words “diversity” and “inclusion” from a section of its website.

Not all academic organizations have fallen into line, however. Danielle Holly, president of the women-only Mount Holyoke College in South Hadley, Massachusetts, says it will forgo contracts with the federal government if they require abolishing DEI. “We obviously can’t enter into contracts with people who don’t allow DEI work,” she told the Boston Globe. “So for us, that wouldn’t be an option.”

Climate concerns

For an administration that doubts the reality of climate change and opposes anti-pollution laws, the Environmental Protection Agency (EPA) is under fire too. Trump administration representatives were taking action even before the Senate approved Lee Zeldin, a former Republican Congressman from New York who has criticized much environmental legislation, as EPA Administrator. They removed all outside advisers on the EPA’s scientific advisory board and its clean air scientific advisory committee – purportedly to “depoliticize” the boards.

Once the Senate approved Zeldin on 29 January, the EPA sent an e-mail warning more than 1000 probationary employees who had spent less than a year in the agency that their roles could be “terminated” immediately. Then, according to the New York Times, the agency developed plans to demote longer-term employees who have overseen research, enforcement of anti-pollution laws, and clean-ups of hazardous waste. According to Inside Climate News, staff also found their individual pronouns scrubbed from their e-mails and websites without their permission – the result of an order to remove “gender ideology extremism”.

Critics have also questioned the nomination of Neil Jacobs to lead the NOAA. He was its acting head during Trump’s first term in office, serving during the 2019 “Sharpiegate” affair when Trump used a Sharpie pen to alter a NOAA weather map to indicate that Hurricane Dorian would affect Alabama. While conceding Jacobs’s experience and credentials, Rachel Cleetus of the Union of Concerned Scientists asserts that Jacobs is “unfit to lead” given that he “fail[ed] to uphold scientific integrity at the agency”.

Spending cuts

Another concern for scientists is the quasi-official team led by “special government employee” and SpaceX founder Elon Musk. The administration has charged Musk and his so-called “department of government efficiency”, or DOGE, with identifying significant cuts to government spending. Though some of DOGE’s activities have been blocked by US courts, agencies have nevertheless been left scrambling for ways to reduce day-to-day costs.

The National Institutes of Health (NIH), for example, has said it will significantly reduce its funding for “indirect” costs of research projects it supports – the overheads that, for example, cover the cost of maintaining laboratories, administering grants, and paying staff salaries. Under the plans, indirect cost reimbursement for federally funded research would be capped at 15%, a drastic cut from its usual range.

NIH personnel have tried to put a positive gloss on its actions. “The United States should have the best medical research in the world,” a statement from NIH declared. “It is accordingly vital to ensure that as many funds as possible go towards direct scientific research costs rather than administrative overhead.”

Just because Elon Musk doesn’t understand indirect costs doesn’t mean Americans should have to pay the price with their lives

US senator Patty Murray

Opponents of the Trump administration, however, are unconvinced. They argue that the measure will imperil critical clinical research because many academic recipients of NIH funds did not have the endowments to compensate for the losses. “Just because Elon Musk doesn’t understand indirect costs doesn’t mean Americans should have to pay the price with their lives,” says US senator Patty Murray, a Democrat from Washington state.

Capping universities’ indirect-cost rates at 15% could, however, force institutions to make up the lost income by raising tuition fees, which could “go through the roof”, according to the anonymous senior physicist contacted by Physics World. “Far from being a populist policy, these cuts to overheads are an attack on the subsidies that make university education possible for students from a range of socioeconomic backgrounds. The alternative is to essentially shut down the university research apparatus, which would in many ways be the death of American scientific leadership and innovation.”

Musk and colleagues have also gained unprecedented access to government websites related to civil servants and the country’s entire payments system. That access has drawn criticism from several commentators who note that, since Musk is a recipient of significant government support through his SpaceX company, he could use the information for his own advantage.

“Musk has access to all the data on federal research grantees and contractors: social security numbers, tax returns, tax payments, tax rebates, grant disbursements and more,” wrote physicist Michael Lubell from City College of New York. “Anyone who depends on the federal government and doesn’t toe the line might become a target. This is right out of (Hungarian prime minister) Viktor Orbán’s playbook.”

A new ‘dark ages’

As for the long-term impact of these changes, James Gates – a theoretical physicist at the University of Maryland and a past president of the US National Society of Black Physicists – is blunt. “My country is in for a 50-year period of a new dark ages,” he told an audience at the Royal College of Art in London, UK, on 7 February.

My country is in for a 50-year period of a new dark ages

James Gates, University of Maryland

Speaking at an event sponsored by the college’s association for Black students – RCA BLK – and supported by the UK’s organization for Black physicists, the Blackett Lab Family, he pointed out that the US has been through such periods before. As examples, Gates cited the 1950s “Red Scare” and the period after 1876 when the federal government abandoned efforts to enforce the civil rights of Black Americans in southern states and elsewhere.

However, he is not entirely pessimistic. “Nothing is permanent in human behaviour. The question is the timescale,” Gates said. “There will be another dawn, because that’s part of the human spirit.”

  • With additional reporting by Margaret Harris, online editor of Physics World, in London and Michael Banks, news editor of Physics World

The post US science in chaos as impact of Trump’s executive orders sinks in appeared first on Physics World.

Sarah Sheldon: how a multidisciplinary mindset can turn quantum utility into quantum advantage

12 février 2025 à 12:00

IBM is on a mission to transform quantum computers from applied research endeavour to mainstream commercial opportunity. It wants to go beyond initial demonstrations of “quantum utility”, where these devices outperform classical computers only in a few niche applications, and reach the new frontier of “quantum advantage”. That’ll be where quantum computers routinely deliver significant, practical benefits beyond approximate classical computing methods, calculating solutions that are cheaper, faster and more accurate.

Unlike classical computers, which rely on binary bits that can be either 0 or 1, quantum computers exploit quantum bits (qubits) that can exist in a superposition of 0 and 1 states. This superposition, coupled with quantum entanglement (a quantum correlation between qubits), enables quantum computers to perform some types of calculation significantly faster than classical machines, such as problems in quantum chemistry and molecular reaction kinetics.
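As a purely illustrative picture of those two ingredients – not IBM code – the following numpy sketch builds a single-qubit superposition with a Hadamard gate and then entangles it with a second qubit via a CNOT to give a Bell state:

```python
# A purely illustrative numpy sketch (not IBM code) of superposition and
# entanglement: a Hadamard gate puts one qubit into (|0> + |1>)/sqrt(2),
# and a CNOT then entangles it with a second qubit to give a Bell state.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)                         # |0>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)    # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)                 # controlled-NOT gate

superposition = H @ ket0                      # (|0> + |1>)/sqrt(2)
bell = CNOT @ np.kron(superposition, ket0)    # (|00> + |11>)/sqrt(2)

print("single-qubit superposition:", superposition.round(3))
print("two-qubit Bell state:      ", bell.round(3))
```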

In the vanguard of IBM’s quantum R&D effort is Sarah Sheldon, a principal research scientist and senior manager of quantum theory and capabilities at the IBM Thomas J Watson Research Center in Yorktown Heights, New York. After a double-major undergraduate degree in physics and nuclear science and engineering at Massachusetts Institute of Technology (MIT), Sheldon received her PhD from MIT in 2013 – though she did much of her graduate research in nuclear science and engineering as a visiting scholar at the Institute for Quantum Computing (IQC) at the University of Waterloo, Canada.

At IQC, Sheldon was part of a group studying quantum control techniques, manipulating the spin states of nuclei in nuclear-magnetic-resonance (NMR) experiments. “Although we were using different systems to today’s leading quantum platforms, we were applying a lot of the same kinds of control techniques now widely deployed across the quantum tech sector,” Sheldon explains.

“Upon completion of my PhD, I opted instinctively for a move into industry, seeking to apply all that learning in quantum physics into immediate and practical engineering contributions,” she says. “IBM, as one of only a few industry players back then with an experimental group in quantum computing, was the logical next step.”

Physics insights, engineering solutions

Sheldon currently heads a cross-disciplinary team of scientists and engineers developing techniques for handling noise and optimizing performance in novel experimental demonstrations of quantum computers. It’s ambitious work that ties together diverse lines of enquiry spanning everything from quantum theory and algorithm development to error mitigation, error correction and techniques for characterizing quantum devices.

We’re investigating how to extract the optimum performance from current machines online today as well as from future generations of quantum computers.

Sarah Sheldon, IBM

“From algorithms to applications,” says Sheldon, “we’re investigating what can we do with quantum computers: how to extract the optimum performance from current machines online today as well as from future generations of quantum computers – say, five or 10 years down the line.”

A core priority for Sheldon and colleagues is how to manage the environmental noise that plagues current quantum computing systems. Qubits are all too easily disturbed, for example, by their interactions with environmental fluctuations in temperature, electric and magnetic fields, vibrations, stray radiation and even interference between neighbouring qubits.

The ideal solution – a strategy called error correction – involves storing the same information across multiple qubits, such that errors are detected and corrected when one or more of the qubits are impacted by noise. But the problem with these so-called “fault-tolerant” quantum computers is that they need millions of qubits, far more than today’s small-scale quantum architectures can provide. (For context, IBM’s latest Quantum Development Roadmap outlines a practical path to error-corrected quantum computers by 2029.)

“Ultimately,” Sheldon notes, “we’re working towards large-scale error-corrected systems, though for now we’re exploiting near-term techniques like error mitigation and other ways of managing noise in these systems.” In practical terms, this means getting more out of existing quantum architectures without increasing the number of qubits – essentially, pairing them with classical computers and suppressing the effect of noise by taking more samples on the quantum processor and post-processing the results classically.
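One widely used example of combining extra quantum-processor samples with classical post-processing is zero-noise extrapolation. The sketch below, with synthetic numbers, illustrates the general idea rather than IBM’s specific techniques:

```python
# A minimal sketch of zero-noise extrapolation (ZNE) with synthetic numbers.
# The idea: run the same circuit several times with the hardware noise
# deliberately amplified, then use classical post-processing to extrapolate
# the measured expectation value back to the zero-noise limit.
# These values are illustrative stand-ins, not data from any IBM device.
import numpy as np

noise_scales = np.array([1.0, 1.5, 2.0, 3.0])   # noise amplification factors
measured = np.array([0.82, 0.74, 0.67, 0.55])   # noisy estimates of <O> (synthetic)

# Fit a low-order polynomial to <O>(scale) and evaluate it at zero noise.
coeffs = np.polyfit(noise_scales, measured, deg=2)
zero_noise_estimate = np.polyval(coeffs, 0.0)

print(f"mitigated estimate of <O>: {zero_noise_estimate:.3f}")
```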

Strength in diversity

For Sheldon, one big selling point of the quantum tech industry is the opportunity to collaborate with people from a wide range of disciplines. “My team covers a broad-scope R&D canvas,” she says. There are mathematicians and computer scientists, for example, working on complexity theory and novel algorithm development; physicists specializing in quantum simulation and incorporating error suppression techniques; as well as quantum chemists working on simulations of molecular systems.

“Quantum is so interdisciplinary – you are constantly learning something new from your co-workers,” she adds. “I started out specializing in quantum control techniques, before moving onto experimental demonstrations of larger multiqubit systems while working ever more closely with theorists.”

A corridor in IBM's quantum lab
Computing reimagined Quantum scientists and engineers at the IBM Thomas J Watson Research Center are working to deliver IBM’s Quantum Development Roadmap and a practical path to error-corrected quantum computers by 2029. (Courtesy: Connie Zhou for IBM)

External research collaborations are also essential for Sheldon and her colleagues. Front-and-centre is the IBM Quantum Network, which provides engagement opportunities with more than 250 organizations across the “quantum ecosystem”. These range from top-tier labs – such as CERN, the University of Tokyo and the UK’s National Quantum Computing Centre – to quantum technology start-ups like Q-CTRL and Algorithmiq. It also encompasses established industry players aiming to be early-adopting end-users of quantum technologies (among them Bosch, Boeing and HSBC).

“There’s a lot of innovation happening across the quantum community,” says Sheldon, “so external partnerships are incredibly important for IBM’s quantum R&D programme. While we have a deep and diverse skill set in-house, we can’t be the domain experts across every potential use-case for quantum computing.”

Opportunity knocks

Notwithstanding the pace of innovation, there are troubling clouds on the horizon. In particular, there is a shortage of skilled workers in the quantum workforce, with established technology companies and start-ups alike desperate to attract more physical scientists and engineers. The task is to fill not only specialist roles – be they error-correction scientists or quantum-algorithm developers – but also more general positions such as test and measurement engineers, data scientists, cryogenic technicians and circuit designers.

Yet Sheldon remains upbeat about addressing the skills gap. “There are just so many opportunities in the quantum sector,” she notes. “The field has changed beyond all recognition since I finished my PhD.” Perhaps the biggest shift has been the dramatic growth of industry engagement and, with it, all sorts of attractive career pathways for graduate scientists and engineers. Those range from firms developing quantum software or hardware to the end-users of quantum technologies in sectors such as pharmaceuticals, finance or healthcare.

“As for the scientific community,” argues Sheldon, “we’re also seeing the outline take shape for a new class of quantum computational scientist. Make no mistake, students able to integrate quantum computing capabilities into their research projects will be at the leading edge of their fields in the coming decades.”

Ultimately, Sheldon concludes, early-career scientists shouldn’t necessarily over-think things regarding that near-term professional pathway. “Keep it simple and work with people you like on projects that are going to interest you – whether quantum or otherwise.”

The post Sarah Sheldon: how a multidisciplinary mindset can turn quantum utility into quantum advantage appeared first on Physics World.

How international conferences can help bring women in physics together

11 février 2025 à 09:00

International conferences are a great way to meet people from all over the world to share the excitement of physics and discuss the latest developments in the subject. But the International Conference on Women in Physics (ICWIP) offers more by allowing us to listen to the experiences of people from many diverse backgrounds and cultures. At the same time, it highlights the many challenges that women in physics still face.

The ICWIP series is organized by the International Union of Pure and Applied Physics (IUPAP) and the week-long event typically features a mixture of plenaries, workshops and talks. Prior to the COVID-19 pandemic, the conferences were held in various locations across the world, but the last two have been held entirely online. The last such meeting – the 8th ICWIP, run from India in 2023 – saw around 300 colleagues from 57 countries attend. I was part of a seven-strong UK contingent – at various stages of our careers – who gave a presentation describing the current situation for women in physics in the UK.

Being held solely online didn’t stop delegates fostering a sense of community or discussing their predicaments and challenges. What became evident during the week was the extent and types of issues that women from across the globe still have to contend with. One is the persistence of implicit and explicit gender bias in their institutions or workplaces. This, along with negative stereotyping of women, produces discrepancies between the numbers of men and women in institutions, particularly at postgraduate level and beyond. As a result, women often choose not to pursue physics later in their careers and are reluctant to take up leadership roles.

Much more needs to be done to ensure women are encouraged in their careers. Indeed, women often face challenging work–life balances, with some expected to play a greater role in family commitments than men, and have little support at their workplaces. One postdoctoral researcher at the 2023 meeting, for example, attempted to discuss her research poster in the virtual conference room while looking after her young children at home – the literal balancing of work and life in action.

A virtual presentation with five speakers' avatars stood in front of a slide showing their names
Open forum The author and co-presenters at the most recent International Conference on Women in Physics. Represented by avatars online, they gave a presentation on women in physics in the UK. (Courtesy: Chethana Setty)

To improve their circumstances, delegates suggested enhancing legislation to combat gender bias and improve institutional culture through education to reduce negative stereotypes. More should also be done to improve networks and professional associations for women in physics. Another factor mentioned at the meeting, meanwhile, is the importance of early education and issues related to equity of teaching, whether delivered face-to-face or online.

But women can also face disadvantages linked to factors other than gender, such as socioeconomic status and identity, resulting in a unique set of challenges. This is the principle of intersectionality, which was widely discussed in the context of problems in career progression.

In the UK, change is starting to happen. The Limit Less campaign by the Institute of Physics (IOP), which publishes Physics World, encourages young people to continue studying physics beyond the age of 16. The annual Conference for Undergraduate Women and Non-binary Physicists provides individuals with support and encouragement in their personal and professional development. There are also other initiatives such as the STEM Returner programme and the Daphne Jackson Trust for those wishing to return to a physics career. The WISE Ten Steps framework helps foster a positive workplace culture, while Athena SWAN and the IOP’s new Physics Inclusion Award aim to improve women’s prospects.

As we now look forward to the next ICWIP there is still a lot more to do. We must ensure that women can continue in their physics careers while recognizing that intersectionality will play an increasingly significant role in shaping future equity, diversity and inclusion policies. It is likely that a new team will soon be sought from academia and industry, comprising individuals at various career stages, to represent the UK at the next ICWIP. Please do get involved if you are interested. Participation is not limited to women.

Women are doing physics in a variety of challenging circumstances. Gaining an international outlook of different cultural perspectives, as is possible at an international conference like the ICWIP, helps to put things in context and highlights the many common issues faced by women in physics. Taking the time to listen and learn from each other is critical, a process that can facilitate collaboration on issues that affect us all. Fundamentally, we all share a passion for physics, and endeavour to be catalysts for positive change for future generations.

  • This article was based on discussions with Sally Jordan from the Open University; Holly Campbell, UK Atomic Energy Authority; Josie C, AWE; Wendy Sadler and Nils Rehm, Cardiff University; and Sarah Bakewell and Miriam Dembo, Institute of Physics

The post How international conferences can help bring women in physics together appeared first on Physics World.

Thousands of nuclear spins are entangled to create a quantum-dot qubit

10 février 2025 à 17:19

A new type of quantum bit (qubit) that stores information in a quantum dot with the help of an ensemble of nuclear spin states has been unveiled by physicists in the UK and Austria. Led by Dorian Gangloff and Mete Atatüre at the University of Cambridge, the team created a collective quantum state that could be used as a quantum register to store and relay information in a quantum communication network of the future.

Quantum communication networks are used to exchange and distribute quantum information between remotely-located quantum computers and other devices. As well as enabling distributed quantum computing, quantum networks can also support secure quantum cryptography. Today, these networks are in the very early stages of development and use the entangled quantum states of photons to transmit information. Network performance is severely limited by decoherence, whereby the quantum information held by photons is degraded as they travel long distances. As a result, effective networks need repeater nodes that receive and then amplify weakened quantum signals.

“To address these limitations, researchers have focused on developing quantum memories capable of reliably storing entangled states to enable quantum repeater operations over extended distances,” Gangloff explains. “Various quantum systems are being explored, with semiconductor quantum dots being the best single-photon generators delivering both photon coherence and brightness.”

Single-photon emission

Quantum dots are widely used for their ability to emit single photons at specific wavelengths. These photons are created by electronic transitions in quantum dots and are ideal for encoding and transmitting quantum information.

However, the electronic spin states of quantum dots are not particularly good at storing quantum information for long enough to be useful as stationary qubits (or nodes) in a quantum network. This is because they contain hundreds or thousands of nuclei with spins that fluctuate. The noise generated by these fluctuations causes the decoherence of qubits based on electronic spin states.

In their previous research, Gangloff and Atatüre’s team showed how this noise could be controlled by sensing how it interacts with the electronic spin states.

Atatüre says, “Building on our previous achievements, we suppressed random fluctuations in the nuclear ensemble using a quantum feedback algorithm. This is already very useful as it dramatically improves the electron spin qubit performance.”

Magnon excitation

Now, working with a gallium arsenide quantum dot, the team has used the feedback algorithm to stabilize 13,000 nuclear spin states in a collective, entangled “dark state”. This is a stable quantum state that cannot absorb or emit photons. By introducing just a single nuclear magnon (spin-flip) excitation, shared across all 13,000 nuclei, they could then flip the entire ensemble between two different collective quantum states.

These two collective states can be defined as the 0 and the 1 of a binary quantum logic system. The team then showed how quantum information could be exchanged between the nuclear ensemble and the quantum dot’s electronic qubit with a fidelity of about 70%.
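Schematically – this is a generic way of writing such collective states, not necessarily the exact states prepared in the experiment – the two logical states can be pictured as the stabilized dark state and a single spin flip (magnon) shared coherently across all N ≈ 13,000 nuclei:

```latex
% schematic nuclear-ensemble qubit: dark state vs a single shared magnon
|0\rangle_{\text{nuc}} \equiv |D\rangle,
\qquad
|1\rangle_{\text{nuc}} \equiv \frac{1}{\sqrt{N}} \sum_{j=1}^{N} \hat{\sigma}_j^{+}\, |D\rangle,
\qquad N \approx 1.3 \times 10^{4} .
```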

“The quantum memory maintained the stored state for approximately 130 µs, validating the effectiveness of our protocol,” Gangloff explains. “We also identified unambiguously the factors limiting the current fidelity and storage time, including crosstalk between nuclear modes and optically induced spin relaxation.”

The researchers are hopeful that their approach could transform one of the biggest limitations to quantum dot-based communication networks into a significant advantage.

“By integrating a multi-qubit register with quantum dots – the brightest and already commercially available single-photon sources – we elevate these devices to a much higher technology readiness level,” Atatüre explains.

With some further improvements to their system’s fidelity, the researchers are now confident that it could be used to strengthen interactions between quantum dot qubits and the photonic states they produce, ultimately leading to longer coherence times in quantum communication networks. Elsewhere, it could even be used to explore new quantum phenomena, and gather new insights into the intricate dynamics of quantum many-body systems.

The research is described in Nature Physics.

The post Thousands of nuclear spins are entangled to create a quantum-dot qubit appeared first on Physics World.

Quantum simulators deliver surprising insights into magnetic phase transitions

7 février 2025 à 15:31

Unexpected behaviour at phase transitions between classical and quantum magnetism has been observed in different quantum simulators operated by two independent groups. One investigation was led by researchers at Harvard University and used Rydberg atoms as quantum bits (qubits). The other study was led by scientists at Google Research and involved superconducting qubits. Both projects revealed deviations from the canonical mechanisms of magnetic freezing, with unexpected oscillations near the phase transition.

A classical magnetic material can be understood as a fluid mixture of magnetic domains that are oriented in opposite directions, with the domain walls in constant motion. As a strengthening magnetic field is applied to the system, the energy associated with a domain wall increases, so the magnetic domains themselves become larger and less mobile. At some point, when the magnetism becomes sufficiently strong, a quantum phase transition occurs, causing the magnetism of the material to become fixed and crystalline: “A good analogy is like water freezing,” says Mikhail Lukin of Harvard University.

The traditional quantitative model for these transitions is the Kibble–Zurek mechanism, which was first formulated to describe cosmological phase transitions in the early universe. It predicts that the dynamics of a system begin to “freeze” when the system gets so close to the transition point that the domains crystallize more quickly than they can come to equilibrium.
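In its standard textbook form (not specific to either experiment), the Kibble–Zurek argument compares the system’s relaxation time with the time remaining before the transition during a linear quench:

```latex
% Kibble-Zurek freeze-out for a linear quench \epsilon(t) = t/\tau_Q
\tau(\epsilon) \sim \tau_0 |\epsilon|^{-\nu z}
\;\;\Rightarrow\;\;
\hat{\xi} \sim \xi_0 \left(\frac{\tau_Q}{\tau_0}\right)^{\nu/(1+\nu z)},
\qquad
n_{\text{defects}} \sim \hat{\xi}^{-d} ,
```

so slower quenches (larger quench time $\tau_Q$) freeze in larger domains and fewer defects – the baseline expectation against which both groups’ oscillations stand out.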

“There are some very good theories of various types of quantum phase transitions that have been developed,” says Lukin, “but typically these theories make some approximations. In many cases they’re fantastic approximations that allow you to get very good results, but they make some assumptions which may or may not be correct.”

Highly reconfigurable platform

In their work, Lukin and colleagues used a highly reconfigurable platform based on Rydberg-atom qubits. The system was pioneered by Lukin and others in 2016 to study a specific type of magnetic quantum phase transition in detail. They used a laser to simulate the effect of a magnetic field on the Rydberg atoms, and adjusted the laser frequency to tune the field strength.

The researchers found that, rather than simply becoming progressively larger and less mobile as the field strength increased (a phenomenon called coarsening), the domain sizes underwent unexpected oscillations around the phase transition.

“We were really quite puzzled,” says Lukin. “Eventually we figured out that this oscillation is a sign of a special type of excitation mode similar to the Higgs mode in high-energy physics. This is something we did not anticipate…That’s an example where doing quantum simulations on quantum devices really can lead to new discoveries.”

Meanwhile, the Google-led study used a new approach to quantum simulation with superconducting qubits. Such qubits have proved extremely successful and scalable because they use solid-state technology – and they are used in most of the world’s leading commercial quantum computers such as IBM’s Osprey and Google’s own Willow chips. Much of the previous work using such chips, however, has focused on sequential “digital” quantum logic in which one set of gates is activated only after the previous set has concluded. The long times needed for such calculations allow the effects of noise to accumulate, resulting in computational errors.

Hybrid approach

In the new work, the Google team developed a hybrid analogue–digital approach in which a digital universal quantum gate set was used to prepare well-defined input qubit states. They then switched the processor to analogue mode, using capacitive couplers to tune the interactions between the qubits. In this mode, all the qubits were allowed to operate on each other simultaneously, without the quantum logic being shoehorned into a linear set of gate operations. Finally, the researchers characterized the output by switching back to digital mode.

The researchers used a 69-qubit superconducting system to simulate a similar, but non-identical, magnetic quantum phase transition to that studied by Lukin’s group. They were also puzzled by similar unexpected behaviour in their system. The groups subsequently became aware of each other’s work, as Google Research’s Trond Anderson explains: “It’s very exciting to see consistent observations from the Lukin group. This not only provides supporting evidence, but also demonstrates that the phenomenon appears in several contexts, making it extra important to understand”.

Both groups are now seeking to push their research deeper into the exploration of complex many-body quantum physics. The Google group estimates that conducting its simulations of the highly entangled quantum states involved, at the same level of fidelity as the experiment, would take the US Department of Energy’s Frontier supercomputer – one of the world’s most powerful – more than a million years. The researchers now want to look at problems that are completely intractable classically, such as magnetic frustration. “The analogue–digital approach really combines the best of both worlds, and we’re very excited about this as a new promising direction towards making discoveries in systems that are too complex for classical computers,” says Anderson.

The Harvard researchers are also looking to push their system to study more and more complex quantum systems. “There are many interesting processes where dynamics – especially across a quantum phase transition – remains poorly understood,” says Lukin. “And it ranges from the science of complex quantum materials to systems in high-energy physics such as lattice gauge theories, which are notorious for being hard to simulate classically to the point where people literally give up…We want to apply these kinds of simulators to real open quantum problems and really use them to study the dynamics of these systems.”

The research is described in side-by-side papers in Nature. The Google paper is here and the Harvard paper here.

The post Quantum simulators deliver surprising insights into magnetic phase transitions appeared first on Physics World.

Supermassive black hole displays ‘unprecedented’ X-ray outbursts

7 février 2025 à 10:15

An international team of researchers has detected a series of significant X-ray oscillations near the innermost orbit of a supermassive black hole – an unprecedented discovery that could indicate the presence of a nearby stellar-mass orbiter such as a white dwarf.

Optical outburst

The Massachusetts Institute of Technology (MIT)-led team began studying the extreme supermassive black hole 1ES 1927+654 – located around 270 million light years away and about a million times more massive than the Sun – in 2018, when it brightened by a factor of around 100 at optical wavelengths. Shortly after this optical outburst, X-ray monitoring revealed a period of dramatic variability as X-rays dropped rapidly – at first becoming undetectable for about a month, before returning with a vengeance and transforming into the brightest supermassive black hole in the X-ray sky.

“All of this dramatic variability seemed to be over by 2021, as the source appeared to have returned to its pre-2018 state. However, luckily, we continued to watch this source, having learned the lesson that this supermassive black hole will always surprise us. The discovery of these millihertz oscillations was indeed quite a surprise, but it gives us a direct probe of regions very close to the supermassive black hole,” says Megan Masterson, a fifth-year PhD candidate at the MIT Kavli Institute for Astrophysics and Space Research, who co-led the study with MIT’s Erin Kara – alongside researchers based elsewhere in the US, as well as at institutions in Chile, China, Israel, Italy, Spain and the UK.

“We found that the period of these oscillations rapidly changed – dropping from around 18 minutes in 2022 to around seven minutes in 2024. This period evolution is unprecedented, having never been seen before in the small handful of other supermassive black holes that show similar oscillatory behaviour,” she adds.

White dwarf

According to Masterson, one of the key ideas behind the study was that the rapid X-ray period change could be driven by a white dwarf – the compact remnant of a star like our Sun – orbiting around the supermassive black hole close to its event horizon.

“If this white dwarf is driving these oscillations, it should produce a gravitational wave signal that will be detectable with next-generation gravitational wave observatories, like ESA’s Laser Interferometer Space Antenna (LISA),” she says.
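A rough back-of-the-envelope check supports that expectation, assuming the roughly seven-minute X-ray period directly traces the orbital period (which is the hypothesis being tested):

```latex
f_{\text{orb}} \approx \frac{1}{420\ \text{s}} \approx 2.4\ \text{mHz},
\qquad
f_{\text{GW}} \approx 2 f_{\text{orb}} \approx 4.8\ \text{mHz},
```

which sits comfortably within LISA’s planned sensitivity band of roughly 0.1 mHz to 1 Hz.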

To test their hypothesis, the researchers used X-ray data from ESA’s XMM-Newton observatory to detect the oscillations, which allowed them to track how the X-ray brightness changed over time. The findings were presented in mid-January at the 245th meeting of the American Astronomical Society in National Harbor, Maryland, and subsequently reported in Nature.

According to Masterson, these insights into the behaviour of X-rays near a black hole will have major implications for future efforts to detect multi-messenger signals from supermassive black holes.

“We really don’t understand how common stellar-mass companions around supermassive black holes are, but these findings tell us that it may be possible for stellar-mass objects to survive very close to supermassive black holes and produce gravitational wave signals that will be detected with the next-generation gravitational wave observatories,” she says.

Looking ahead, Masterson confirms that the immediate next step for MIT research in this area is to continue to monitor 1ES 1927+654 – with both existing and future telescopes – in an effort to deepen understanding of the extreme physics at play in and around the innermost environments of black holes.

“We’ve learned from this discovery that we should expect the unexpected with this source,” she adds. “We’re also hoping to find other sources like this one through large time-domain surveys and dedicated X-ray follow-up of interesting transients.”

The post Supermassive black hole displays ‘unprecedented’ X-ray outbursts appeared first on Physics World.

Asteroid Bennu contains the stuff of life, sample analysis reveals

6 février 2025 à 10:40

A sample of asteroid dirt brought back to Earth by NASA’s OSIRIS-REx mission contains amino acids and the nucleobases of RNA and DNA, plus the residues of brines that could have facilitated the formation of organic molecules, analysis of the material has shown.

The 120 g of material came from the near-Earth asteroid 101955 Bennu, from which OSIRIS-REx collected a sample in 2020. The findings “bolster the hypothesis that asteroids like Bennu could have delivered the raw ingredients to Earth prior to the emergence of life,” Dan Glavin of NASA’s Goddard Space Flight Center tells Physics World.

Bennu has an interesting history. It is 565 m across at its widest point and was once part of a much larger parent body, possibly 100 km in diameter, that was smashed apart in a collision in the Asteroid Belt between 730 million and 1.55 billion years ago. Bennu coalesced from the debris as a rubble pile that found itself in Earth’s vicinity.

The sample from Bennu was parachuted back to Earth in 2023 and shared among teams of researchers. Now two new papers, published in Nature and Nature Astronomy, reveal some of the findings from those teams.

Saltwater residue

In particular, researchers identified a diverse range of salt minerals, including sodium-bearing phosphates and carbonates that formed brines when liquid water on Bennu’s parent body either evaporated or froze.

SEM images of minerals found in Bennu samples
Mineral rich SEM images of trona (water-bearing sodium carbonate) found in Bennu samples. The needles form a vein through surrounding clay-rich rock, with small pieces of rock resting on top of the needles. (Courtesy: Rob Wardell, Tim Gooding and Tim McCoy, Smithsonian)

The liquid water would have been present on Bennu’s parent body during the dawn of the Solar System, in the first few million years after the planets began to form. Heat generated by the radioactive decay of aluminium-26 would have kept pockets of water liquid deep inside Bennu’s parent body. The brines that this liquid water bequeathed would have played a role in kickstarting organic chemistry.

Tim McCoy, of the Smithsonian’s National Museum of Natural History and the lead author of the Nature paper, says that “brines play two important roles”.

One of those roles is producing the minerals that serve as templates for organic molecules. “As an example, brines precipitate phosphates that can serve as a template on which sugars needed for life are formed,” McCoy tells Physics World. The phosphate is like a pegboard with holes, and atoms can use those spaces to arrange themselves into sugar molecules.

The second role that brines can play is to then release the organic molecules that have formed on the minerals back into the brine, where they can combine with other organic molecules to form more complex compounds.

Ambidextrous amino acids

Meanwhile, the study reported in Nature Astronomy, led by Dan Glavin and Jason Dworkin of NASA’s Goddard Space Flight Center, focused on the detection of 14 of the 20 amino acids used by life to build proteins, deepening the mystery of why life only uses “left-handed” amino acids.

Amino acid molecules lack rotational symmetry – think of how, no matter how much you twist or turn your left hand, you will never be able to superimpose it on your right hand. As such, amino acids can randomly be either left- or right-handed, a property known as chirality.

However, for some reason that no one has been able to figure out yet, all life on Earth uses left-handed amino acids.

One hypothesis was that due to some quirk, amino acids formed in space and brought to Earth in impacts had a bias for being left-handed. This possibility now looks unlikely after Glavin and Dworkin’s team discovered that the amino acids in the Bennu sample are a mix of left- and right-handed, with no evidence that one is preferred over the other.

“So far we have not seen any evidence for a preferred chirality,” Glavin says. This goes for both the Bennu sample and a previous sample from the asteroid 162173 Ryugu, collected by Japan’s Hayabusa2 mission, which contained 23 different forms of amino acid. “For now, why life turned left on Earth remains a mystery.”

Taking a closer step to the origin of life

Another mystery is why the organic chemistry on Bennu’s parent body reached a certain point and then stopped. Why didn’t it form more complex organic molecules, or even life?

A mosaic image of Bennu
Near-Earth asteroid A mosaic image of Bennu, as observed by NASA’s OSIRIS-REx spacecraft. (Courtesy: NASA/Goddard/University of Arizona)

Amino acids are the building blocks of proteins. In turn, proteins are one of the primary molecules for life, facilitating biological processes within cells. Nucleobases have also been identified in the Bennu sample, but although chains of nucleobases are the molecular skeleton of RNA and DNA, neither nucleic acid has been found in an extraterrestrial sample yet.

“Although the wet and salty conditions inside Bennu’s parent body provided an ideal environment for the formation of amino acids and nucleobases, it is not clear yet why more complex organic polymers did not evolve,” says Glavin.

Researchers are still looking for that complex chemistry. McCoy cites the 5-carbon sugar ribose, which is a component of RNA, as an essential organic molecule for life that scientists hope to one day find in an asteroid sample.

“But as you might imagine, as organic molecules increase in complexity, they decrease in number,” says McCoy, explaining that we will need to search ever larger amounts of asteroidal material before we might get lucky and find them.

The answers will ultimately help astrobiologists figure out where life began. Could proteins, RNA or even biological cells have formed in the early Solar System within objects such as Bennu’s parent planetesimal? Or did complex biochemistry begin only on Earth once the base materials had been delivered from space?

“What is becoming very clear is that the basic chemical building blocks of life could have been delivered to Earth, where further chemical evolution could have occurred in a habitable environment, including the origin of life itself,” says Glavin.

What’s really needed are more samples. China’s Tianwen-2 mission is blasting off later this year on a mission to capture a 100 g sample from the small near-Earth asteroid 469219 Kamo‘oalewa. The findings are likely to be similar to those of OSIRIS-REx and Hayabusa2, but there’s always the chance that something more complex might be in that sample too. If and when those organic molecules are found, they will have huge repercussions for the origin of life on Earth.

The post Asteroid Bennu contains the stuff of life, sample analysis reveals appeared first on Physics World.

Spacewoman: trailblazing astronaut Eileen Collins makes for a compelling and thoughtful documentary subject

5 février 2025 à 12:00

“What makes a good astronaut?” asks director Hannah Berryman in the opening scene of Spacewoman. It’s a question few can answer better than Eileen Collins. As the first woman to pilot and command a NASA Space Shuttle, her career was marked by historic milestones, extraordinary challenges and personal sacrifices. Collins looks down the lens of the camera and, as she pauses for thought, we cut to footage of her being suited up in astronaut gear for the third time. “I would say…a person who is not prone to panicking.”

In Spacewoman, Berryman crafts a thoughtful, emotionally resonant documentary that traces Collins’s life from a determined young girl in Elmira, New York, to a spaceflight pioneer.

The film’s strength lies in its compelling balance of personal narrative and technical achievement. Through intimate interviews with Collins, her family and former colleagues, alongside a wealth of archival footage, Spacewoman paints a vivid portrait of a woman whose journey was anything but straightforward. From growing up in a working-class family affected by her parents’ divorce and Hurricane Agnes’s destruction, to excelling in the male-dominated world of aviation and space exploration, Collins’s resilience shines through.

Berryman wisely centres the film on the four key missions that defined Collins’s time at NASA. While this approach necessitates a brisk overview of her early military career, it allows for an in-depth exploration of the stakes, risks and triumphs of spaceflight. Collins’s pioneering 1995 mission, STS-63, saw her pilot the Space Shuttle Discovery in the first rendezvous with the Russian space station Mir, a mission fraught with political and technical challenges. The archival footage from this and subsequent missions provides gripping, edge-of-your-seat moments that demonstrate both the precision and unpredictability of space travel.

Perhaps Spacewoman’s most affecting thread is its examination of how Collins’s career intersected with her family life. Her daughter, Bridget, born shortly after her first mission, offers a poignant perspective on growing up with a mother whose job carried life-threatening risks. In one of the film’s most emotionally charged scenes, Collins recounts explaining the Challenger disaster to a young Bridget. Despite her mother’s assurances that NASA had learned from the tragedy, the Columbia disaster, which struck just two weeks after that conversation, underscores the constant shadow of danger inherent in space exploration.

These deeply personal reflections elevate Spacewoman beyond a straightforward biographical documentary. Collins’s son Luke, though younger and less directly affected by his mother’s missions, also shares touching memories, offering a fuller picture of a family shaped by space exploration’s highs and lows. Berryman’s thoughtful editing intertwines these recollections with historic footage, making the stakes feel immediate and profoundly human.

The film’s tension peaks during Collins’s final mission, STS-114, the first “return to flight” after Columbia. As the mission teeters on the brink of disaster due to familiar technical issues, Berryman builds a heart-pounding narrative, even for viewers unfamiliar with the complexities of spaceflight. Without getting bogged down in technical jargon, she captures the intense pressure of a mission fraught with tension – for those on Earth, at least.

Berryman’s previous films include Miss World 1970: Beauty Queens and Bedlam and Banned! The Mary Whitehouse Story. In a recent episode of the Physics World Stories podcast, she told me that she was inspired to make the film after reading Collins’s autobiography Through the Glass Ceiling to the Stars. “It was so personal,” she said, “it took me into space and I thought maybe we could do that with the viewer.” Collins herself joined us for that podcast episode, and I found her to be the same calm, centred, thoughtful person we see in the film – the person NASA clearly chose so carefully to command such an important mission.

Spacewoman isn’t just about near-misses and peril. It also celebrates moments of wonder: Collins describing her first sunrise from space or recalling the chocolate shuttles she brought as gifts for the Mir cosmonauts. These light-hearted anecdotes reveal her deep appreciation for the unique experience of being an astronaut. On the podcast, I asked Collins what one lesson she would bring from space to life on Earth. After her customary moment’s pause for thought, she replied, “Reading books about science fiction is very important.” She was a fan of science fiction in her younger years, which enabled her to dream of the future that she realized at NASA and in space. But, she told me, these days she also reads about the real science of the future (she was deep into a book on artificial intelligence when we spoke) and history too. Looking back at Collins’s history in space certainly holds lessons for us all.

Berryman’s directorial focus ultimately circles back to a profound question: how much risk is acceptable in the pursuit of human progress? Spacewoman suggests that those committed to something greater than themselves are willing to risk everything. Collins’s career embodies this ethos, defined by an unshakeable resolve, even in the face of overwhelming odds.

In the film’s closing moments, we see Collins speaking to a wide-eyed girl at a book signing. The voiceover from interviews talks of the women slated to be instrumental in humanity’s return to the Moon and future missions to Mars. If there’s one thing I would change about the film, it’s that the final word is given to someone other than Collins. The message is a fitting summation of her life and legacy, but I would like to have seen it delivered with the understated confidence of someone who has lived it. It’s a quibble, though, in a compelling film that I would recommend to anyone with an interest in space travel or the human experience here on Earth.

When someone as accomplished as Collins says that you need to work hard and practise, practise, practise, it has a gravitas few others can muster. After all, she spent 10 years practising to fly the Space Shuttle – and got to do it for real twice. We see Collins speak directly to the wide-eyed girl in a flight suit as she signs her book and, as she does so, you can feel the words really hit home precisely because of who says them: “Reach for the stars. Don’t give up. Keep trying because you can do it.”

Spacewoman is more than a tribute to a trailblazer; it’s a testament to human perseverance, curiosity and courage. In Collins’s story, Berryman finds a gripping, deeply personal narrative that will resonate with audiences across the planet.

  • Spacewoman premiered at DOC NYC in November 2024 and is scheduled for theatrical release in 2025. A Haviland Digital Film in association with Tigerlily Productions.

The post <em>Spacewoman</em>: trailblazing astronaut Eileen Collins makes for a compelling and thoughtful documentary subject appeared first on Physics World.

Introducing the Echo-5Q: a collaboration between FormFactor, Tabor Quantum Systems and QuantWare

5 février 2025 à 11:28

Watch this short video filmed at the APS March Meeting in 2024, where Mark Elo, chief marketing officer of Tabor Quantum Solutions, introduces the Echo-5Q, which he explains is an industry collaboration between FormFactor and Tabor Quantum Solutions, using the QuantWare quantum processing unit (QPU).

Elo points out that it is an out-of-the-box solution, allowing customers to order a full-stack system, including the software, refrigeration, control electronics and the actual QPU. The Echo-5Q is delivered and installed so that the customer can start making quantum measurements immediately. He explains that the Echo-5Q is designed at a price and feature point that makes on-site quantum computing more accessible.

Brandon Boiko, senior applications engineer with FormFactor, describes how FormFactor developed the dilution refrigeration technology into which the qubits are installed. Boiko explains that the product has been designed to reduce the cost of entry into the quantum field – made accessible through FormFactor’s test-and-measurement programme, which allows people to bring their samples on site to take measurements.

Alessandro Bruno is founder and CEO of QuantWare, which provides the quantum processor for the Echo-5Q – the part that sits at the millikelvin stage of the dilution refrigerator and hosts five qubits. Bruno hopes that the Echo-5Q will democratize access to quantum devices – for education, academic research and start-ups.

The post Introducing the Echo-5Q: a collaboration between FormFactor, Tabor Quantum Systems and QuantWare appeared first on Physics World.

Reliability science takes centre stage with new interdisciplinary journal

5 février 2025 à 10:08
Journal of Reliability Science and Engineering (Courtesy: IOP Publishing)

As our world becomes ever more dependent on technology, an important question emerges: how much can we truly rely on that technology? To help researchers explore this question, IOP Publishing (which publishes Physics World) is launching a new peer-reviewed, open-access publication called Journal of Reliability Science and Engineering (JRSE). The journal will operate in partnership with the Institute of Systems Engineering (part of the China Academy of Engineering Physics) and will benefit from the editorial and commissioning support of the University of Electronic Science and Technology of China, Hunan University and the Beijing Institute of Structure and Environment Engineering.

“Today’s society relies much on sophisticated engineering systems to manufacture products and deliver services,” says JRSE’s co-editor-in-chief, Mingjian Zuo, a professor of mechanical engineering at the University of Alberta, Canada. “Such systems include power plants, vehicles, transportation and manufacturing. The safe, reliable and economical operation of all these requires the continuing advancement of reliability science and engineering.”

Defining reliability

The reliability of an object is commonly defined as the probability that it will perform its intended function adequately for a specified period of time. “The object in question may be a human being, product, system, or process,” Zuo explains. “Depending on its nature, corresponding sub-disciplines are human-, material-, structural-, equipment-, software- and system reliability.”

Key concepts in reliability science include failure modes, failure rates, reliability functions and coherency, as well as measures such as mean time-to-failure, mean time between failures, availability and maintainability. “Failure modes can be caused by effects like corrosion, cracking, creep, fracture, fatigue, delamination and oxidation,” Zuo explains.
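As a minimal, textbook-style illustration of how some of these quantities relate (an addition to this article, not material from the journal), the sketch below assumes the simplest constant-failure-rate model, where the reliability function is R(t) = exp(−λt) and the mean time-to-failure is 1/λ; the failure rate used is an arbitrary illustrative value.

import math

failure_rate = 1e-4          # lambda: failures per hour (arbitrary illustrative value)
mttf = 1 / failure_rate      # mean time-to-failure for a constant failure rate
t = 5000                     # operating time of interest (hours)

reliability = math.exp(-failure_rate * t)   # R(t): probability of surviving to time t
print(f"MTTF = {mttf:.0f} hours")
print(f"R({t} h) = {reliability:.3f}")      # ~0.607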

To analyse such effects, researchers may use approaches such as fault tree analysis (FTA); failure modes, effects and criticality analysis (FMECA); and binary decomposition, he adds. These and many other techniques lie within the scope of JRSE, which aims to publish high-quality research on all aspects of reliability. This could, for example, include studies of failure modes and damage propagation as well as techniques for managing them and related risks through optimal design and reliability-centred maintenance.

A focus on extreme environments

To give the journal structure, Zuo and his colleagues identified six major topics: reliability theories and methods; physics of failure and degradation; reliability testing and simulation; prognostics and health management; reliability engineering applications; and emerging topics in reliability-related fields.

Mingjian Zuo
JRSE’s co-editor-in-chief, Mingjian Zuo, a professor of mechanical engineering at the University of Alberta, Canada. (Courtesy: IOP Publishing)

As well as regular issues published four times a year, JRSE will also produce special issues. A special issue on system reliability and safety in varying and extreme environments, for example, focuses on reliability and safety methods, physical/mathematical and data-driven models, reliability testing, system lifetime prediction and performance evaluation. Intelligent operation and maintenance of complex systems in varying and extreme environments are also covered.

Interest in extreme environments was one of the factors driving the journal’s development, Zuo says, due to the increasing need for modern engineering systems to operate reliably in highly demanding conditions. As examples, he cites wind farms being built further offshore; faster trains; and autonomous systems such as drones, driverless vehicles and social robots that must respond quickly and safely to ever-changing surroundings in close proximity to humans.

“As a society, we are setting ever higher requirements on critical systems such as the power grid and Internet, water distribution and transport networks,” he says. “All of these demand further advances in reliability science and engineering to develop tools for the design, manufacture and operation as well as the maintenance of today’s sophisticated engineering systems.”

The go-to platform for researchers and industrialists alike

Another factor behind the journal’s launch is that previously, there were no international journals focusing on reliability research by Chinese organizations. Since the discipline’s leaders include several such organizations, Zuo says the lack of international visibility has seriously limited scientific exchange and promotion of reliability research between China and the global community. He hopes the new journal will remedy this. “Notable features of the journal include gold open access (thanks to our partnership with IOP Publishing, a learned-society publisher that does not have shareholders) and a fast review process,” he says.

In general, the number of academic journals focusing on reliability science and engineering is limited, he adds. “JRSE will play a significant role in promoting the advances in reliability research by disseminating cutting-edge scientific discoveries and creative reliability assurance applications in a timely way.

“We are aiming that the journal will become the go-to platform for reliability researchers and industrialists alike.”

The first issue of JRSE will be published in March 2025, and its editors welcome submissions of original research reports as well as review papers co-authored by experts. “There will also be space for perspectives, comments, replies, and news insightful to the reliability community,” says Zuo. In the future, the journal plans to sponsor reliability-related academic forums and international conferences.

With over 100 experts from around the world on its editorial board, Zuo describes JRSE as scientist-led, internationally-focused and highly interdisciplinary. “Reliability is a critical measure of performance of all engineering systems used in every corner of our society,” he says. “This journal will therefore be of interest to disciplines such as mechanical-, electrical-, chemical-, mining- and aerospace engineering as well as the mathematical and life sciences.”

The post Reliability science takes centre stage with new interdisciplinary journal appeared first on Physics World.

Elastic response explains why cordierite has ultra-low thermal expansion

4 février 2025 à 17:49
Hot material The crystal structure of cordierite gives the material its unique thermal properties. (Courtesy: M Dove and L Li/Matter)

The anomalous and ultra-low thermal expansion of cordierite results from the interplay between lattice vibrations and the elastic properties of the material. That is the conclusion of Martin Dove at China’s Sichuan University and Queen Mary University of London in the UK and Li Li at the Civil Aviation Flight University of China. They showed that the material’s unusual behaviour stems from direction-dependent elastic forces in its lattice, which modulate cordierite’s thermal expansion differently along each axis.

Cordierite is a naturally-occurring mineral that can also be synthesized. Thanks to its remarkable thermal properties, it is used in products ranging from pizza stones to catalytic converters. When heated to high temperatures, it undergoes ultra-low thermal expansion along two directions, and it shrinks a tiny amount along the third direction. This makes it incredibly useful as a material that can be heated and cooled without changing size or suffering damage.

Despite its widespread use, scientists lack a fundamental understanding of how cordierite’s anomalous thermal expansion arises from the properties of its crystal lattice. Normally, thermal expansion (positive or negative) is understood in terms of Grüneisen parameters. These describe how vibrational modes (phonons) in the lattice cause it to expand or contract along each axis as the temperature changes.

Negative Grüneisen parameters describe a lattice that shrinks when heated, and are seen as key to understanding the thermal contraction of cordierite. However, the material’s thermal response is not isotropic (it contracts along only one axis when heated to high temperatures), so understanding cordierite in terms of its Grüneisen parameters alone is difficult.

Advanced molecular dynamics

In their study, Dove and Li used advanced molecular dynamics simulations to accurately model the behaviour of atoms in the cordierite lattice. Their simulations closely matched experimental observations of the material’s thermal expansion, providing them with key insights into why the material has a negative thermal expansion in just one direction.

“Our research demonstrates that the anomalous thermal expansion of cordierite originates from a surprising interplay between atomic vibrations and elasticity,” Dove explains. The elasticity is described in the form of an elastic compliance tensor, which predicts how a material will distort in response to a force applied along a specific direction.

At lower temperatures, lattice vibrations occur at lower frequencies. In this case, the simulations predicted negative thermal expansion in all directions – which is in line with observations of the material.

At higher temperatures, the lattice becomes dominated by high-frequency vibrations. In principle, this should result in positive thermal expansion in all three directions. Crucially, however, Dove and Li discovered that this expansion is cancelled out by the material’s elastic properties, as described by its elastic compliance tensor.

What is more, the unique arrangement of the crystal lattice meant that this tensor varied depending on the direction of the applied force, creating an imbalance that amplifies differences between the material’s expansion along each axis.

Cancellation mechanism

“This cancellation mechanism explains why cordierite exhibits small positive expansion in two directions and small negative expansion in the third,” Dove explains. “Initially, I was sceptical of the results. The initial data suggested uniform expansion behaviour at both high and low temperatures, but the final results revealed a delicate balance of forces. It was a moment of scientific serendipity.”

Altogether, Dove and Li’s result clearly shows that cordierite’s anomalous behaviour cannot be understood by focusing solely on the Grüneisen parameters of its three axes. It is crucial to take its elastic compliance tensor into account.

In solving this long-standing mystery, the duo now hope their results could help researchers to better predict how cordierite’s thermal expansion will vary at different temperatures. In turn, they could help to extend the useful applications of the material even further.

“Anisotropic materials like cordierite hold immense potential for developing high-performance materials with unique thermal behaviours,” Dove says. “Our approach can rapidly predict these properties, significantly reducing the reliance on expensive and time-consuming experimental procedures.”

The research is described in Matter.

The post Elastic response explains why cordierite has ultra-low thermal expansion appeared first on Physics World.

Thermometer uses Rydberg atoms to make calibration-free measurements

3 février 2025 à 17:30

A new way to measure the temperatures of objects by studying the effect of their black-body radiation on Rydberg atoms has been demonstrated by researchers at the US National Institute of Standards and Technology (NIST). The system, which provides a direct, calibration-free measure of temperature based on the fact that all atoms of a given species are identical, has a systematic temperature uncertainty of around 1 part in 2000.

The black-body temperature of an object is defined by the spectrum of the photons it emits. In the laboratory and in everyday life, however, temperature is usually measured by comparison to a reference. “Radiation is inherently quantum mechanical,” says NIST’s Noah Schlossberger, “but if you go to the store and buy a temperature sensor that measures the radiation via some sort of photodiode, the rate of photons converted into some value of temperature that you see has to be calibrated. Usually that’s done using some reference surface that’s held at a constant temperature via some sort of contact thermometer, and that contact thermometer has been calibrated to another contact thermometer – which in some indirect way has been tied into some primary standard at NIST or some other facility that offers calibration services.” However, each step introduces potential error.

This latest work offers a much more direct way of determining temperature. It involves measuring the black-body radiation emitted by an object directly, using atoms as a reference standard. Such a sensor does not need calibration because quantum mechanics dictates that every atom of the same type is identical. In Rydberg atoms the electrons are promoted to highly excited states. This makes the atoms much larger, less tightly bound and more sensitive to external perturbations. As part of an ongoing project studying their potential to detect electromagnetic fields, the researchers turned their attention to atom-based thermometry. “These atoms are exquisitely sensitive to black-body radiation,” explains NIST’s Christopher Holloway, who headed the work.

Packet of rubidium atoms

Central to the new apparatus is a magneto-optical trap inside a vacuum chamber containing a pure rubidium vapour. Every 300 ms, the researchers load a new packet of rubidium atoms into the trap, cool them to around 1 mK and excite them from the 5S energy level to the 32S Rydberg state using lasers. They then allow them to absorb black-body radiation from the surroundings for around 100 μs, causing some of the 32S atoms to change state. Finally, they apply a strong, ramped electric field, ionizing the atoms. “The higher energy states get ripped off easier than the lower energy states, so the electrons that were in each state arrive at the detector at a different time. That’s how we get this readout that tells us the population in each of the states,” explains Schlossberger, the work’s first author. The researchers can use this ratio to infer the spectrum of the black-body radiation absorbed by the atoms and, therefore, the temperature of the black body itself.

The researchers calculated the fractional systematic uncertainty of their measurement as 0.006, which corresponds to around 2 K at room temperature. Schlossberger concedes that this sounds relatively unimpressive compared to many commercial thermometers, but he notes that their thermometer measures absolute temperature, not relative temperature. “If I had two skyscrapers next to each other, touching, and they were an inch different in height, you could probably measure that difference to less than a millimetre,” he says, “If I asked you to tell me the total height of the skyscraper, you probably couldn’t.”

One application of their system, the researchers say, could lie in optical clocks, where frequency shifts due to thermal background noise are a key source of uncertainty. At present, researchers have to perform a lot of in situ thermometry to try to infer the black-body radiation experienced by the clock without disturbing the clock itself. Schlossberger says that, in future, one additional laser could potentially allow the creation of Rydberg states in the clock atoms. “It’s sort of designed so that all the hardware is the same as atomic clocks, so without modifying the clock significantly it would tell you the radiation experienced by the same atoms that are used in the clock in the location they’re used.”

The work is described in a paper in Physical Review Research. Atomic physicist Kevin Weatherill of Durham University in the UK says “it’s an interesting paper and I enjoyed reading it”. “The direction of travel is to look for a quantum measurement for temperature – there are a lot of projects going on at NIST and some here in the UK,” he says. He notes, however, that this experiment is highly complex and says “I think at the moment just measuring the width of an atomic transition in a vapour cell [which is broadened by the Doppler effect as atoms move faster] gives you a better bound on temperature than what’s been demonstrated in this paper.”

The post Thermometer uses Rydberg atoms to make calibration-free measurements appeared first on Physics World.

Ask me anything: Sophie Morley – ‘Active listening is the key to establishing productive research collaborations with our scientific end-users’

3 février 2025 à 15:20

What skills do you use every day in your job?

I am one of two co-chairs, along with my colleague Hendrik Ohldag, of the Quantum Materials Research and Discovery Thrust Area at ALS. Among other things, our remit is to advise ALS management on long-term strategy regarding quantum science. We launch and manage beamline development projects to enhance the quantum research capability at ALS and, more broadly, establish collaborations with quantum scientists and engineers in academia and industry.

In terms of specifics, the thrust area addresses problems of condensed-matter physics related to spin and quantum properties – for example, in atomically engineered multilayers, 2D materials and topological insulators with unusual electronic structures. As a beamline scientist, active listening is the key to establishing productive research collaborations with our scientific end-users – helping them to figure out the core questions they’re seeking to answer and, by extension, the appropriate experimental techniques to generate the data they need.

The task, always, is to translate external users’ scientific goals into practical experiments that will run reliably on the ALS beamlines. High-level organizational skills, persistence and exhaustive preparation go a long way: it takes a lot of planning and dialogue to ensure scientific users get high-quality experimental results.

What do you like best and least about your job?

A core part of my remit is to foster the collective conversation between ALS staff scientists and the quantum community, demystifying synchrotron science and the capabilities of the ALS with prospective end-users. The outreach activity is exciting and challenging in equal measure – whether that’s initiating dialogue with quantum experts at scientific conferences or making first contact using Teams or Zoom.

Internally, we also track the latest advances in fundamental quantum science and applied R&D. In-house colloquia are mandatory, with guest speakers from the quantum community engaging directly with ALS staff teams to figure out how our portfolio of synchrotron-based techniques – whether spectroscopy, scattering or imaging – can be put to work by users from research or industry. This learning and development programme, in turn, underpins continuous improvement of the beamline support services we offer to all our quantum end-users.

As for downsides: it’s never ideal when a piece of instrumentation suddenly “breaks” on a Friday afternoon. This sort of troubleshooting is probably the part of the job I like least, though it doesn’t happen often and, in any case, is a hit I’m happy to take given the flexibility inherent to my role.

What do you know today that you wish you knew when you were starting out in your career?

It’s still early days, but I guess the biggest lesson so far is to trust in my own specialist domain knowledge and expertise when it comes to engaging with the diverse research community working on quantum materials. My know-how in photon science – from coherent X-ray scattering and X-ray detector technology to in situ magnetic- and electric-field studies and automated measurement protocols – enables visiting researchers to get the most out of their beamtime at ALS.

The post Ask me anything: Sophie Morley – ‘Active listening is the key to establishing productive research collaborations with our scientific end-users’ appeared first on Physics World.

Fast and predictable: RayStation meets the needs of online adaptive radiotherapy

3 février 2025 à 10:30

Radiation therapy is a targeted cancer treatment that’s typically delivered over several weeks, using a plan that’s optimized on a CT scan taken before treatment begins. But during this time, the geometry of the tumour and the surrounding anatomy can vary, with different patients responding in different ways to the delivered radiation. To optimize treatment quality, such changes must be taken into consideration. And this is where adaptive radiotherapy comes into play.

Adaptive radiotherapy uses patient images taken throughout the course of treatment to update the initial plan and compensate for any anatomical variations. By adjusting the daily plan to match the patient’s daily anatomy, adaptive treatments ensure more precise, personalized and efficient radiotherapy, improving tumour control while reducing toxicity to healthy tissues.

The implementation of adaptive radiotherapy is continuing to expand, as technology developments enable adaptive treatments in additional tumour sites. And as more cancer centres worldwide choose this approach, there’s a need for flexible, innovative software to streamline this increasing clinical uptake.

Designed to meet these needs, RayStation – the treatment planning system from oncology software specialist RaySearch Laboratories – makes adaptive radiotherapy faster and easier to implement in clinical practice. The versatile and holistic RayStation software provides all of the tools required to support adaptive planning, today and into the future.

“We need to be fast, we need to be predictable and we need to be user friendly,” says Anna Lundin, technical product manager at RaySearch Laboratories.

Meeting the need for speed

Typically, adaptive radiotherapy uses the cone-beam CT (CBCT) images acquired for daily patient positioning to perform plan adaptation. To fully reflect daily anatomical changes and fit seamlessly into the clinical workflow, this procedure should be performed “online”, with the patient on the treatment table, as opposed to an “offline” approach in which plan adaptation occurs after the patient has left the treatment session. Such online adaptation, however, requires the ability to analyse patient scans and perform adaptive re-planning as rapidly as possible.

To streamline all types of adaptive requirements, whether online or offline, RayStation incorporates a package of advanced algorithms that perform key tasks – including segmentation, deformable registration, CBCT image enhancement and recontouring – all while taking the previously delivered dose into consideration. By automating all of these steps, RayStation accelerates the replanning process to the speed needed for online adaptation, with the ability to create an adaptive plan in less than a minute.

Anna Lundin
Anna Lundin: “Fast and predictable replanning is crucial to allow us to treat more patients with greater specificity using less clinical resources.” (Courtesy: RaySearch Laboratories)

Central to this process is RayStation’s dose tracking, which uses the daily images to calculate the actual dose delivered to the patient in each fraction. This ability to evaluate treatment progress, both on a daily basis and considering the estimated total dose, enables informed decisions as to whether to replan or not. The software’s flexible workflow allows users to perform daily dose tracking, compare plans with daily anatomical information against the original plans and adapt when needed.

“You can document trigger points for when adaptation is needed,” Lundin explains. “So you can evaluate whether the original plan is still good to go or whether you want to update or adapt the treatment plan to changes that have occurred.”

User friendly

Another challenge when implementing online adaptation is that its time constraints necessitate access to intuitive tools that enable quick decision making. “One of the big challenges with adaptive radiotherapy has been that a lot of the decision making and processes have been done on an ad hoc basis,” says Lundin. “We need to utilize the same protocol-based planning for adaptive as we do for standard treatment planning.”

As such, RaySearch Laboratories has focused on developing software that’s easy to use, efficient and accessible to a large proportion of clinical personnel. RayStation enables clinics to define and validate clinical procedures for a specific patient category in advance, eliminating the need to repeat this each time.

“By doing this, we let the clinicians focus on what they do best – taking responsibility for the clinical decisions – while RayStation focuses on providing all the data that they need to make that possible,” Lundin adds.

Versatile design

Lundin emphasizes that this accelerated adaptive replanning solution is built upon RayStation’s pre-existing comprehensive framework. “It’s not a parallel solution, it’s a progression,” she explains. “That means that all the tools that we have for robust optimization and evaluation, tools to assess biological effects, support for multiple treatment modalities – all that is also available when performing adaptive assessments and adaptive planning.”

This flexibility allows RayStation to support both photon- and ion-based treatments, as well as multiple imaging modalities. “We have built a framework that can be configured for each site and each clinical indication,” says Lundin. “We believe in giving users the freedom to select which techniques and which strategies to employ.”

We let the clinicians focus on what they do best – taking responsibility for the clinical decisions – while RayStation focuses on providing all the data that they need to make that possible

In particular, adaptive radiotherapy is gaining interest among the proton therapy community. For such highly conformal treatments, it’s even more important to regularly assess the actual delivered dose and ensure that the plan is updated to deliver the correct dose each day. “We have the first clinics using RayStation to perform adaptive proton treatments in an online fashion,” Lundin says.

It’s likely that we will also soon see the emergence of biologically adapted radiotherapy, in which treatments are adapted not just to the patient’s anatomy, but to the tumour’s biological characteristics and biological response. Here again, RayStation’s flexible and holistic architecture can support the replanning needs of this advanced treatment approach.

Predictable performance

Lundin points out that the progression towards online adaptation has been valuable for radiotherapy as a whole. “A lot of the improvements required to handle the time-critical procedures of online adaptive are of large benefit to all adaptive assessments,” she explains. “Fast and predictable replanning is crucial to allow us to treat more patients with greater specificity using less clinical resources. I see it as strictly necessary for online adaptive, but good for all.”

Artificial intelligence (AI) is not only a key component in enhancing the speed and consistency of treatment planning (with tools such as deep learning segmentation and planning), but also enables the handling of massive data sets, which in turn allows users to improve the treatment “intents” that they prescribe.

AI plays a central role in RayStation
Key component AI plays a central role in enabling RayStation to deliver predictable and consistent treatment planning, with deep learning segmentation (shown in the image) being an integral part. (Courtesy: RaySearch Laboratories)

Learning more about how the delivered dose correlates with clinical outcome provides important feedback on the performance and effectiveness of current adaptive processes. This will help optimize and personalize future treatments and, ultimately, make the adaptive treatments more predictable and effective as a whole.

Lundin explains that full automation is the only way to generate the large amount of data in the predictable and consistent manner required for such treatment advancements, noting that it is not possible to achieve this manually.

RayStation’s ability to preconfigure and automate all of the steps needed for daily dose assessment enables these larger-scale dose follow-up clinical studies. The treatment data can be combined with patient outcomes, with AI employed to gain insight into how to best design treatments or predict how a tumour will respond to therapy.

“I look forward to seeing more outcome-related studies of adaptive radiotherapy, so we can learn from each other and have more general recommendations, as has been done in the field of standard radiotherapy planning,” says Lundin. “We need to learn and we need to improve. I think that is what adaptive is all about – to adapt each person’s treatment, but also adapt the processes that we use.”

Future evolution

Looking to the future, adaptive radiotherapy is expected to evolve rapidly, bolstered by ongoing advances in imaging techniques and increasing data processing speeds. RayStation’s machine learning-based segmentation and plan optimization algorithms will continue to play a central role in supporting this evolution, with AI making treatment adaptations more precise, personalized and efficient, enhancing the overall effectiveness of cancer treatment.

“RaySearch, with the foundation that we have in optimization and advancing treatment planning and workflows, is very well equipped to take on the challenges of these future developments,” Lundin adds. “We are looking forward to the improvements to come and determined to meet the expectations with our holistic software.”

The post Fast and predictable: RayStation meets the needs of online adaptive radiotherapy appeared first on Physics World.

Enhancing SRS/SBRT accuracy with RTsafe QA solutions: An overall experience

3 février 2025 à 10:21

PRIME SBRT

This webinar will present the overall experience of a radiotherapy department that utilizes RTsafe QA solutions, including the RTsafe Prime and SBRT anthropomorphic phantoms for intracranial stereotactic radiosurgery (SRS) and stereotactic body radiation therapy (SBRT) applications, respectively, as well as the remote dosimetry services offered by RTsafe. The session will explore how these phantoms can be employed for end-to-end QA measurements and dosimetry audits in both conventional linacs and a Unity MR-Linac system. Key features of RTsafe phantoms, such as their compatibility with RTsafe’s remote dosimetry services for point (OSLD, ionization chamber), 2D (films), and 3D (gel) dosimetry, will be discussed. These capabilities enable a comprehensive SRS/SBRT accuracy evaluation across the entire treatment workflow – from imaging and treatment planning to dose delivery.

Christopher W Schneider
Christopher W Schneider

Christopher Schneider is the adaptive radiotherapy technical director at Mary Bird Perkins Cancer Center and serves as an adjunct assistant professor in the Department of Physics and Astronomy at Louisiana State University in Baton Rouge. Under his supervision, Mary Bird’s MR-guided adaptive radiotherapy program has provided treatment to more than 150 patients in its first year alone. Schneider’s research group focuses on radiation dosimetry, late effects of radiation, and the development of radiotherapy workflow and quality-assurance enhancements.

The post Enhancing SRS/SBRT accuracy with RTsafe QA solutions: An overall experience appeared first on Physics World.

PLANCKS physics quiz – the solutions

31 janvier 2025 à 12:00

Question 1: 4D Sun

Imagine you have been transported to another universe with four spatial dimensions. What would the colour of the Sun be in this four-dimensional universe? You may assume that the surface temperature of the Sun is the same as in our universe and is approximately T = 6 × 10³ K. [10 marks]

Boltzmann constant, kB = 1.38 × 10⁻²³ J K⁻¹

Speed of light, c = 3 × 10⁸ m s⁻¹

Solution

Black-body radiation, spectral energy density: ε(ν) dν = E ρ(ν) n(ν) dν

The photon energy, E = hν, where h is Planck’s constant and ν is the photon frequency.

The density of states, ρ(ν) = Aν^{n−1}, where A is a constant independent of the frequency and the frequency term is the scaling of the surface area of an n-dimensional sphere.

The Bose–Einstein distribution,

n(\nu) = \frac{1}{e^{h\nu/kT} - 1}

where k is the Boltzmann constant and T is the temperature.

We let

x = \frac{h\nu}{kT}

and get

\varepsilon(x) \propto \frac{x^n}{e^x - 1}

We do not need the constant of proportionality (which is not simple to calculate in 4D) to find the maximum of ε (x). Working out the constant just tells us how tall the peak is, but we are interested in where the peak is, not the total radiation.

\frac{d\varepsilon}{dx} \propto \frac{n x^{n-1}\left(e^x - 1\right) - x^n e^x}{\left(e^x - 1\right)^2}

We set this equal to zero for the maximum of the distribution,

\frac{x^{n-1} e^x}{\left(e^x - 1\right)^2}\left[n\left(1 - e^{-x}\right) - x\right] = 0

This yields x = n(1 − e⁻ˣ), where

x = \frac{h\nu_{\max}}{kT}

and we can relate

\lambda_{\max} = \frac{c}{\nu_{\max}}

and c being the speed of light.

This equation has the solution x = n + W(−ne⁻ⁿ), where W is the Lambert W function, z = W(y), that solves ze^z = y (although there is a subtlety about which branch of the function). This is kind of useless to do anything with, though. One can numerically solve this equation using bisection/Newton–Raphson/iteration. Alternatively, one could notice that as the number of dimensions increases, e⁻ˣ is small, so to leading approximation x ≈ n. One can do a little better iterating this, x ≈ n − ne⁻ⁿ, which is what we will use. Note the second iteration yields

x \approx n - n e^{-\left(n - n e^{-n}\right)}

Number of dimensions, n | Numerical solution | Approximation
2 | 1.594 | 1.729
3 | 2.821 | 2.851
4 (the one we want) | 3.921 | 3.927
5 | 4.965 | 4.966
6 | 5.985 | 5.985

Using the result above,

\lambda_{\max} = \frac{hc}{kT\,x_{\max}} = \frac{6.63\times10^{-34} \cdot 3\times10^{8}}{1.38\times10^{-23} \cdot 6\times10^{3} \cdot 3.9} = 616\ \text{nm}

616 nm is in the middle of the visible spectrum, so the Sun would look white with a green-blue tint. Note that we have used T = 6000 K for the temperature here, as given in the question.

It would also be valid to look at ε(λ) dλ instead of ε(ν) dν.
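As a cross-check (an addition, not part of the original solution), the short sketch below solves x = n(1 − e⁻ˣ) by fixed-point iteration, compares it with the first-iteration approximation used in the table, and converts the n = 4 root into a peak wavelength using the constants quoted in the question; the function name and iteration count are arbitrary choices.

import math

h = 6.63e-34   # Planck constant (J s), as used in the solution
c = 3e8        # speed of light (m/s), as given in the question
kB = 1.38e-23  # Boltzmann constant (J/K), as given in the question
T = 6e3        # surface temperature of the Sun (K), as given

def peak_x(n, iterations=100):
    """Solve x = n*(1 - exp(-x)), the peak of eps(x) ~ x^n/(e^x - 1)."""
    x = float(n)  # leading-order guess x ~ n
    for _ in range(iterations):
        x = n * (1 - math.exp(-x))
    return x

for n in range(2, 7):
    approx = n - n * math.exp(-n)  # first-iteration approximation, as in the table
    print(f"n = {n}: numerical {peak_x(n):.3f}, approximation {approx:.3f}")

x4 = peak_x(4)
lam = h * c / (kB * T * x4)
print(f"lambda_max for n = 4: {lam*1e9:.0f} nm")  # ~613 nm; the 616 nm above used x rounded to 3.9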

Question 2: Heavy stuff

In a parallel universe, two point masses, each of 1 kg, start at rest a distance of 1 m apart. The only force on them is their mutual gravitational attraction, F = −Gm₁m₂/r². If it takes 26 hours and 42 minutes for the two masses to meet in the middle, calculate the value of the gravitational constant G in this universe. [10 marks]

Solution

First we will set up the equations of motion for our system. We will set one mass to be at position −x and the other to be at x, so the masses are at a distance of 2x from each other. Starting from Newton’s law of gravity:

F = -\frac{Gm^2}{(2x)^2}

we can then use Newton’s second law to rewrite the LHS,

m\ddot{x} = -\frac{Gm^2}{4x^2}

which we can simplify to

\ddot{x} = -\frac{Gm}{4x^2}

It is important that you get the right factor here depending on your choice for the particle coordinates at the start. Note there are other methods of getting to this point, e.g. using the reduced mass.

We can now solve the second order ODE above. We will not show the whole process here but present the starting point and key results. We can write the acceleration in terms of the velocity. The initial velocity is zero and the initial position

x_i = \frac{d}{2}

So,

v\frac{dv}{dx} = -\frac{Gm}{4x^2} \;\;\Longrightarrow\;\; \int_0^v v'\,dv' = -\frac{Gm}{4}\int_{x_i}^{x}\frac{dx'}{x'^2}

and once the integrals are solved we can rearrange for the velocity,

v = \frac{dx}{dt} = -\sqrt{\frac{Gm}{2}\left(\frac{1}{x} - \frac{1}{x_i}\right)}

Now we can form an expression for the total time taken for the masses to meet in the middle,

T = \sqrt{\frac{2}{Gm}}\int_0^{x_i}\frac{dx}{\sqrt{\frac{1}{x} - \frac{1}{x_i}}}

There are quite a few steps involved in solving this integral; for these solutions we shall make use of the following (but do attempt to solve it for yourselves in full).

\int_0^1 \sqrt{\frac{y}{1-y}}\,dy = \sin^{-1}(1) = \frac{\pi}{2}

Hence,

T = \frac{\pi}{2}\sqrt{\frac{2x_i^3}{Gm}} = \frac{\pi}{2}\sqrt{\frac{d^3}{4Gm}}

We can now rearrange for G and substitute in the values given in the question; don’t forget to convert the time into seconds.

G = \frac{\pi^2 d^3}{16\,m T^2} = 6.67\times10^{-11}\ \text{m}^3\,\text{kg}^{-1}\,\text{s}^{-2}

This is the generally accepted value for the gravitational constant of our universe as well.
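As a quick numerical sanity check (an addition, not part of the marking scheme), the snippet below recovers G from the quoted fall time using the rearranged result G = π²d³/(16mT²), and then runs the formula in reverse with the accepted value of G; the variable names are arbitrary.

import math

d = 1.0                   # initial separation (m)
m = 1.0                   # mass of each point mass (kg)
T = 26 * 3600 + 42 * 60   # 26 hours 42 minutes, in seconds

G = math.pi**2 * d**3 / (16 * m * T**2)
print(f"G = {G:.3e} m^3 kg^-1 s^-2")   # ~6.67e-11, as found above

# Reverse check: the fall time implied by the accepted value of G
G_accepted = 6.674e-11
T_check = (math.pi / 2) * math.sqrt(d**3 / (4 * G_accepted * m))
print(f"fall time for G = 6.674e-11: {T_check/3600:.2f} hours")   # ~26.7 h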

Question 3: Just like clockwork

Consider a pendulum clock that is accurate on the Earth’s surface. Figure 1 shows a simplified view of this mechanism.

Simplified schematic of a pendulum clock mechanism
1 Tick tock Simplified schematic of a pendulum clock mechanism. When the pendulum swings one way (a), the escapement releases the gear attached to the hanging mass and allows it to fall. When the pendulum swings the other way (b) the escapement stops the gear attached to the mass moving so the mass stays in place. (Courtesy: Katherine Skipper/IOP Publishing)

A pendulum clock runs on the gravitational potential energy from a hanging mass (1). The other components of the clock mechanism regulate the speed at which the mass falls so that it releases its gravitational potential energy over the course of a day. This is achieved using a swinging pendulum of length l (2), whose period is given by

T = 2\pi\sqrt{\frac{l}{g}}

where g is the acceleration due to gravity.

Each time the pendulum swings, it rocks a mechanism called an “escapement” (3). When the escapement moves, the gear attached to the mass (4) is released. The mass falls freely until the pendulum swings back and the escapement catches the gear again. The motion of the falling mass transfers energy to the escapement, which gives a “kick” to the pendulum that keeps it moving throughout the day.

Radius of the Earth, R = 6.3781 × 10⁶ m

Period of one Earth day, τ0 = 8.64 × 10⁴ s

How slow will the clock be over the course of a day if it is lifted to the hundredth floor of a skyscraper? Assume the height of each storey is 3 m. [4 marks]

Solution

We will write the period of oscillation of the pendulum at the surface of the Earth to be

T_0 = 2\pi\sqrt{\frac{l}{g_0}} .

At a height h above the surface of the Earth the period of oscillation will be

T_h = 2\pi\sqrt{\frac{l}{g_h}} ,

where g0 and gh are the acceleration due to gravity at the surface of the Earth and a height h above it respectively.

We can define τ0 to be the total duration of the day, which is 8.64 × 10⁴ seconds and equal to N complete oscillations of the pendulum at the surface. The lag is then τh, which will equal N times the difference in one period of the two clocks, τh = NΔT, where ΔT = (Th − T0). We can now take a ratio of the lag over the day and the total duration of the day:

\frac{\tau_h}{\tau_0} = \frac{N\left(T_h - T_0\right)}{N T_0} \;\;\Longrightarrow\;\; \tau_h = \tau_0\,\frac{T_h - T_0}{T_0} = \tau_0\left(\frac{T_h}{T_0} - 1\right)

Then by substituting in the expressions we have for the period of a pendulum at the surface and height h we can write this in terms of the gravitational constant,

\tau_h = \tau_0\left(\sqrt{\frac{g_0}{g_h}} - 1\right)

[Award 1 mark for finding the ratio of the lag over the day and the total period of the day.]

The acceleration due to gravity at the Earth’s surface is

g_0 = \frac{GM}{R^2}

where G is the universal gravitational constant, M is the mass of the Earth and R is the radius of the Earth. At an altitude h, it will be

g_h = \frac{GM}{(R+h)^2}

[Award 1 mark for finding the expression for the acceleration due to gravity at height h.]

Substituting into our expression for the lag, we get:

\tau_h = \tau_0\left(\sqrt{\frac{(R+h)^2}{R^2}} - 1\right) = \tau_0\left(\sqrt{1 + \frac{2h}{R} + \frac{h^2}{R^2}} - 1\right) = \tau_0\left(\frac{\sqrt{R^2 + 2hR + h^2}}{R} - 1\right) = \tau_0\left(\frac{R+h}{R} - 1\right)

This simplifies to an expression for the lag over a day. We can then substitute in the given values to find,

\tau_h = \tau_0\,\frac{h}{R} = \frac{8.64\times10^{4}\ \text{s}\,\cdot\,300\ \text{m}}{6.3781\times10^{6}\ \text{m}} = 4.064\ \text{s} \approx 4\ \text{s}

[Award 2 marks for completing the simplification of the ratio and finding the lag to be ≈ 4 s.]
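The final arithmetic can be checked in a few lines (a sketch added for convenience, using the values given in the question); both the full expression τ₀(√(g₀/g_h) − 1) and the simplified τ₀h/R give the same number, since the square root collapses exactly to (R + h)/R.

import math

R = 6.3781e6    # radius of the Earth (m), as given
tau0 = 8.64e4   # period of one Earth day (s), as given
h = 100 * 3     # hundredth floor at 3 m per storey (m)

lag_full = tau0 * (math.sqrt((R + h)**2 / R**2) - 1)   # tau_0*(sqrt(g0/gh) - 1)
lag_simple = tau0 * h / R                              # tau_0*h/R
print(f"lag (full expression): {lag_full:.3f} s")      # ~4.064 s
print(f"lag (simplified form): {lag_simple:.3f} s")    # ~4.064 s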

Question 4: Quantum stick

Imagine an infinitely thin stick of length 1 m and mass 1 kg that is balanced on its end. Classically this is an unstable equilibrium, although the stick will stay there forever if it is perfectly balanced. However, in quantum mechanics there is no such thing as perfectly balanced due to the uncertainty principle – you cannot have the stick perfectly upright and not moving at the same time. One could argue that the quantum mechanical effects of the uncertainty principle on the system are overpowered by others, such as air molecules and photons hitting it or the thermal excitation of the stick. Therefore, to investigate we would need ideal conditions such as a dark vacuum, and cooling to a few millikelvins, so the stick is in its ground state.

Moment of inertia for a rod,

I = \frac{1}{3}ml^2

where m is the mass and l is the length.

Uncertainty principle,

\Delta x\,\Delta p \geq \frac{\hbar}{2}

There are several possible approximations and simplifications you could make in solving this problem, including:

sinθ ≈ θ for small θ

\cosh^{-1}x = \ln\left(x + \sqrt{x^2 - 1}\right)

and

\sinh^{-1}x = \ln\left(x + \sqrt{x^2 + 1}\right)

Calculate the maximum time it would take such a stick to fall over and hit the ground if it is placed in a state compatible with the uncertainty principle. Assume that you are on the Earth’s surface. [10 marks]

Hint: Consider the two possible initial conditions that arise from the uncertainty principle.

Solution

We can imagine this as an inverted pendulum, with gravity acting at the centre of mass, a distance l/2 from the pivot, and the stick at an angle θ from the unstable equilibrium point.

[Award 1 mark for a suitable diagram of the system.]

We must now find the equations of motion of the system. For this we can use Newton’s second law F=ma in its rotational form τ = Iα (torque = moment of inertia × angular acceleration). We have another equation for torque we can use as well

\boldsymbol{\tau} = \mathbf{r}\times\mathbf{F} = rF\sin\theta\,\hat{n}

where r is the distance from the pivot to the centre of mass, l/2, and F is the force, which in this case is gravity, mg. We can then equate these, giving

rF\sin\theta = I\alpha

Substituting in the given moment of inertia of the stick, and noting that the angular acceleration is

\alpha = \frac{d^2\theta}{dt^2} = \ddot{\theta}

We can cancel a few things and rearrange to get a differential equation of the form:

\ddot{\theta} - \frac{3g}{2l}\sin\theta = 0

we then can take the small angle approximation sin θ ≈ θ, resulting in

\ddot{\theta} - \frac{3g}{2l}\theta = 0

[Award 2 marks for finding the equation of motion for the system and using the small angle approximation.]

Solve with an ansatz of θ = Ae^{ωt} + Be^{−ωt}, where we have chosen

\omega^2 = \frac{3g}{2l}

We can clearly see that this will satisfy the differential equation

\dot{\theta} = \omega Ae^{\omega t} - \omega Be^{-\omega t} \quad\text{and}\quad \ddot{\theta} = \omega^2 Ae^{\omega t} + \omega^2 Be^{-\omega t}

Now we can apply initial conditions to find A and B, by looking at the two cases from the uncertainty principle

\Delta x\,\Delta p = \Delta x\, m\,\Delta v \geq \frac{\hbar}{2}

Case 1: The stick is at an angle but not moving

At t = 0, θ = Δθ

θ = Δθ = A + B

At t = 0, θ˙=0

\dot{\theta} = 0 = \omega Ae^{\omega\cdot 0} - \omega Be^{-\omega\cdot 0} \;\;\Longrightarrow\;\; A = B

This implies Δθ = 2A and we can then find

A = \frac{\Delta\theta}{2} = \frac{2\Delta x}{2l} = \frac{\Delta x}{l}

So we can now write

\theta = A\left(e^{\omega t} + e^{-\omega t}\right) = \frac{\Delta x}{l}\left(e^{\omega t} + e^{-\omega t}\right) \quad\text{or}\quad \theta = \frac{2\Delta x}{l}\cosh\omega t

Case 2: The stick is upright but moving

At t = 0, θ = 0

This condition gives us A = −B.

At t = 0, θ̇ = 2Δv/l

This initial condition comes from the relationship between the tangential velocity Δv, the distance from the pivot to the centre of mass, l/2, and the angular velocity θ̇, namely Δv = (l/2)θ̇. Using the above initial condition gives us θ̇(0) = 2ωA, where A = Δv/(ωl).

We can now write

\theta = A\left(e^{\omega t} - e^{-\omega t}\right) = \frac{\Delta v}{\omega l}\left(e^{\omega t} - e^{-\omega t}\right) \quad\text{or}\quad \theta = \frac{2\Delta v}{\omega l}\sinh\omega t

[Award 4 marks for finding the two expressions for θ by using the two cases of the uncertainty principle.]

Now there are a few ways we can finish off this problem; we shall look at three of them. In each case, when the stick has fallen to the ground, θ(t_f) = π/2.

Method 1

Take θ = (2Δx/l) cosh ωt and θ = (2Δv/(ωl)) sinh ωt, use θ(t_f) = π/2, then rearrange for t_f in both cases. We have

t_f = \frac{1}{\omega}\cosh^{-1}\left(\frac{\pi l}{4\Delta x}\right) \quad\text{and}\quad t_f = \frac{1}{\omega}\sinh^{-1}\left(\frac{\pi\omega l}{4\Delta v}\right)

Look at the expressions for cosh⁻¹x and sinh⁻¹x given in the question. They are almost identical, so we can approximate the two arguments to each other, and we find

\Delta x = \frac{\Delta v}{\omega}

we can then substitute in the uncertainty principle ΔxΔp = ℏ/2, i.e. Δv = ℏ/(2mΔx), and then write an expression Δx = √(ℏ/(2mω)), which we can put back into our arccosh expression (or do it for Δv and put into arcsinh).

t_f = \frac{1}{\omega}\cosh^{-1}\left(\frac{\pi l}{4\Delta x}\right)

where Δx = √(ℏ/(2mω)) and ω = √(3g/(2l)).

Method 2

In this next method, when you get to the inverse hyperbolic functions, you can take an expansion of their natural-log forms in the limit of large argument. To first order both functions give ln 2x, so we can then equate the arguments, find Δx or Δv in terms of the other, and use the uncertainty principle. This would give the time taken as

t_f = \frac{1}{\omega}\ln\left(\frac{\pi l}{2\Delta x}\right)

where Δx = √(ℏ/(2mω)) and ω = √(3g/(2l)).

Method 3

Rather than using hyperbolic functions, you could do something like the above and expand the exponentials in the two expressions for t_f, or we could make life even easier and do the following.

Disregard the e^{−ωt} terms as they will be much smaller than the e^{ωt} terms. Equate the two expressions for θ(t_f) = π/2 and then take the natural logs, once again arriving at an expression of

t_f = \frac{1}{\omega}\ln\left(\frac{\pi l}{2\Delta x}\right)

where Δx = √(ℏ/(2mω)) and ω = √(3g/(2l)).

This method efficiently sets B = 0 when applying the initial conditions.

[Award 2 marks for reaching an expression for t using one of the methods above or a suitable alternative that gives the correct units for time.]

Then, by using one of the expressions above for time, substitute in the values and find that t = 10.58 seconds.

[Award 1 mark for finding the correct time value of t = 10.58 seconds.]
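For completeness, here is a short numerical evaluation of the final expressions (an addition to the solution, with standard values assumed for ℏ and g); it reproduces the quoted fall time of roughly 10.6 s from either the logarithmic or the arccosh form.

import math

hbar = 1.055e-34   # reduced Planck constant (J s), assumed standard value
g = 9.81           # acceleration due to gravity (m/s^2), assumed standard value
l = 1.0            # length of the stick (m), as given
m = 1.0            # mass of the stick (kg), as given

omega = math.sqrt(3 * g / (2 * l))
dx = math.sqrt(hbar / (2 * m * omega))

t_log = (1 / omega) * math.log(math.pi * l / (2 * dx))      # Methods 2 and 3
t_acosh = (1 / omega) * math.acosh(math.pi * l / (4 * dx))  # Method 1
print(f"omega = {omega:.3f} s^-1, dx = {dx:.3e} m")
print(f"t_f (log form)     = {t_log:.2f} s")    # ~10.58 s
print(f"t_f (arccosh form) = {t_acosh:.2f} s")  # ~10.58 s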

  • If you’re a student who wants to sign up for the 2025 edition of PLANCKS UK and Ireland, entries are now open at plancks.uk

The post PLANCKS physics quiz – the solutions appeared first on Physics World.

Filter inspired by deep-sea sponge cleans up oil spills

30 janvier 2025 à 14:00

Oil spills can pollute large volumes of surrounding water – thousands of times greater than the spill itself – causing long-term economic, environmental, social and ecological damage. Effective methods for in situ capture of spilled oil are thus essential to minimize contamination from such disasters.

Many oil spill cleanup technologies, however, exhibit poor hydrodynamic stability under complex flow conditions, which leads to poor oil-capture efficiency. To address this shortfall, researchers from Harbin Institute of Technology in China have come up with a new approach to oil cleanup using a vortex-anchored filter (VAF).

“Since the 1979 Atlantic Empress disaster, interception and adsorption have been the primary methods for oil spill recovery, but these are sensitive to water-flow fluctuation,” explains lead author Shijie You. Oil-in-water emulsions from leaking pipelines and offshore industrial discharge are particularly challenging, says You, adding that “these problems inspire us to consider how we can address hydrodynamic stability of oil-capture devices under turbulent conditions”.

Inspired by the natural world

You and colleagues believe that the answers to oil spill challenges could come from nature – arguably the world’s greatest scientist. They found that the deep-sea glass sponge E. aspergillum, which lives at depths of up to 1000 m in the Pacific Ocean, has an excellent ability to filter feed with a high effectiveness, selectivity and robustness, and that its food particles share similarities with oil droplets.

The anatomical structure of E. aspergillum – also known as Venus’ flower basket – provided inspiration for the researchers to design their VAF. By mimicking the skeletal architecture and filter feeding patterns of the sponge, they created a filter that exhibited a high mass transfer and hydrodynamic stability in cleaning up oil spills under turbulent flow.

“The E. aspergillum has a multilayered skeleton–flagellum architecture, which creates 3D streamlines with frequent collision, deflection, convergence and separation,” explains You. “This can dissipate macro-scale turbulent flows into small-scale swirling flow patterns called low-speed vortical flows within the body cavity, which reduces hydrodynamic load and enhances interfacial mass transfer.”

For the sponges, this allows them to maintain a high mechanical stability while absorbing nutrients from the water. The same principles can be applied to synthetic materials for cleaning up oil spills.

Design of the vortex-anchored filter
VAF design Skeletal motif of E. aspergillum and (right column) front and top views of the VAF with a bio-inspired hollow cylinder skeleton and flagellum adsorbent. (Courtesy: Y Yu et al. Nat. Commun. 10.1038/s41467-024-55587-y)

The VAF is a synthetic form of the sponge’s architecture and, according to You, “is capable of transferring kinematic energy from an external water flow into multiple small-scale low-speed vortical flows within the body cavity to enhance hydrodynamic stability and oil capture efficiency”.

The tubular outer skeleton of the VAF comprises a helical ridge and chequerboard lattice. It is this skeleton that creates a slow vortex field inside the cavity and enables mass transfer of oil during the filtering process. Once the oil has been forced into the filter, the internal area – composed of flagellum-shaped adsorbent materials – provides a large interfacial area for oil adsorption.

Using the VAF to clean up oil spills

The researchers used their nature-inspired VAF to clean up oil spills under complex hydrodynamic conditions. You states that “the VAF can retain the external turbulent-flow kinetic energy in the low-speed vortical flows – with a small Kolmogorov microscale (85 µm) [the size of the smallest eddy in a turbulent flow] – inside the cavity of the skeleton, leading to enhanced interfacial mass transfer and residence time”.

“This led to an improvement in the hydrodynamic stability of the filter compared to other approaches by reducing the Reynolds stresses in nearly quiescent wake flows,” You explains. The filter was also highly resistant to bending stresses caused at the boundary of the filter when trying to separate viscous fluids. When put into practice, the VAF was able to capture more than 97% of floating, underwater and emulsified oils, even under strong turbulent flow.

When asked how the researchers plan to improve the filter further, You tells Physics World that they “will integrate the VAF with photothermal, electrothermal and electrochemical modules for environmental remediation and resource recovery”.

“We look forward to applying VAF-based technologies to solve sea pollution problems with a filter that has an outstanding flexibility and adaptability, easy-to-handle operability and scalability, environmental compatibility and life-cycle sustainability,” says You.

The research is published in Nature Communications.

The post Filter inspired by deep-sea sponge cleans up oil spills appeared first on Physics World.

Anomalous Hall crystal is made from twisted graphene

30 janvier 2025 à 10:25

A topological electronic crystal (TEC) in which the quantum Hall effect emerges without the need for an external magnetic field has been unveiled by an international team of physicists. Led by Josh Folk at the University of British Columbia, the group observed the effect in a stack of bilayer and trilayer graphene that is twisted at a specific angle.

In a classical electrical conductor, the Hall voltage and its associated resistance appear perpendicular both to the direction of an applied electrical current and to an applied magnetic field. A similar effect is also seen in 2D electron systems that have been cooled to ultra-low temperatures. But in this case, the Hall resistance becomes quantized in discrete steps.

This quantum Hall effect can emerge in electronic crystals, also known as Wigner crystals. These are arrays of electrons that are held in place by their mutual repulsion. Some researchers have considered the possibility of a similar effect occurring in structures called TECs, but without an applied magnetic field. This is called the “quantum anomalous Hall effect”.
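
For context, the plateaus of the ordinary integer quantum Hall effect sit at R_xy = h/(νe²), where ν is an integer filling factor; the quantum anomalous Hall effect produces the same quantized values without an external field. The snippet below is just a textbook evaluation of those plateau resistances, not code connected to the graphene experiments described here.

```python
# Textbook sketch: quantized Hall resistance plateaus, R_xy = h / (nu * e**2)
h = 6.62607015e-34    # Planck constant, J s
e = 1.602176634e-19   # elementary charge, C

for nu in range(1, 5):                  # first few integer filling factors
    R_xy = h / (nu * e**2)
    print(f"nu = {nu}: R_xy ≈ {R_xy:.1f} Ω")   # nu = 1 gives the von Klitzing value, ~25.8 kΩ
```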

Anomalous Hall crystal

“Several theory groups have speculated that analogues of these structures could emerge in quantized anomalous Hall systems, giving rise to a type of TEC termed an ‘anomalous Hall crystal’,” Folk explains. “This structure would be insulating, due to a frozen-in electronic ordering in its interior, with dissipation-free currents along the boundary.”

For Folk’s team, the possibility of anomalous Hall crystals emerging in real systems was not the original focus of their research. Initially, a team at the University of Washington had aimed to investigate the diverse phenomena that emerge when two or more flakes of graphene are stacked on top of each other and twisted relative to each other at different angles.

While many interesting behaviours emerged from these structures, one particular stack caught the attention of Washington’s Dacen Waters, which inspired his team to get in touch with Folk and his colleagues in British Columbia.

In the vast majority of cases, the twisted structures studied by the team had moiré patterns that were very disordered. Moiré patterns occur when two lattices are overlaid and rotated relative to each other. Yet out of tens of thousands of permutations of twisted graphene stacks, one structure appeared to be different.

Exceptionally low levels of disorder

“One of the stacks seemed to have exceptionally low levels of disorder,” Folk describes. “Waters shared that one with our group to explore in our dilution refrigerator, where we have lots of experience measuring subtle magnetic effects that appear at a small fraction of a degree above absolute zero.”

As they studied this highly ordered structure, the team found that its moiré pattern helped to modulate the system’s electronic properties, allowing a TEC to emerge.

“We observed the first clear example of a TEC, in a device made up of bilayer graphene stacked atop trilayer graphene with a small, 1.5° twist,” Folk explains. “The underlying topology of the electronic system, combined with strong electron-electron interactions, provide the essential ingredients for the crystal formation.”

After decades of theoretical speculation, Folk, Waters and colleagues have identified an anomalous Hall crystal, where the quantum Hall effect emerges from an in-built electronic structure, rather than an applied magnetic field.

Beyond confirming the theoretical possibility of TECs, the researchers are hopeful that their results could lay the groundwork for a variety of novel lines of research.

“One of the most exciting long-term directions this work may lead is that the TEC by itself – or perhaps a TEC coupled to a nearby superconductor – may host new kinds of particles,” Folk says. “These would be built out of the ‘normal’ electrons in the TEC, but totally unlike them in many ways: such as their fractional charge, and properties that would make them promising as topological qubits.”

The research is described in Nature.

The post Anomalous Hall crystal is made from twisted graphene appeared first on Physics World.

What ‘equity’ really means for physics

29 janvier 2025 à 12:09

If you have worked in a university, research institute or business during the past two decades you will be familiar with the term equality, diversity and inclusion (EDI). There is likely to be an EDI strategy that includes measures and targets to nurture a workforce that looks more like the wider population and a culture in which everyone can thrive. You may find a reasoned business case for EDI, which extends beyond the organization’s legal obligations, to reflect and understand the people that you work with.

Look more closely and it is possible that the “E” in EDI is not actually equality, but rather equity. Equity is increasingly being used as a more active commitment, not least by the Institute of Physics, which publishes Physics World.  How, though, is equity different to equality? What is causing this change of language and will it make any difference in practice?

These questions have become more pressing as discussions around equality and equity have become entwined in the culture wars. This is a particularly live issue in the US, where Donald Trump, at the start of his second term as president, has begun to withdraw funding from EDI activities. But it has also influenced science policy in the UK.

The distinction between equality and equity is often illustrated by a cartoon published in 2016 by the UK artist Angus Maguire (above). It shows a fence and people of variable height gaining an equal view of a baseball match thanks to different numbers of crates that they stand on. This has itself, however, resulted in arguments about other factors such as the conditions necessary to watch the game in the stadium, or indeed even join in. That requires consideration about how the teams and the stadium could adapt to the needs of all potential participants, but also how these changes might affect the experience of others involved.

In terms of education, the Organization for Economic Co-operation and Development (OECD) states that equity “does not mean that all students obtain equal education outcomes, but rather that differences in students’ outcomes are unrelated to their background or to economic and social circumstances over which the students have no control”. This is an admirable goal, but there are questions about how to achieve it.

In OECD member countries, freedom of choice and competition yield social inequalities that flow through to education and careers. This means that governments are continually balancing the benefits of inspiring and rewarding individuals alongside concerns about group injustice.

In 2024, we hosted a multidisciplinary workshop about equity in science, and especially physics. Held at the University of Birmingham, it brought together physicists at different career stages with social scientists and people who had worked on science and education in government, charities and learned societies. At the event, social scientists told us that equality is commonly conceived as a basic right to be treated equally and not discriminated against, regardless of personal characteristics. This right provides a platform for “equality of opportunity” whereby barriers are removed so talent and effort can be rewarded.

In the UK, the promotion of equality of opportunity is enshrined within the country’s Equality Act 2010 and underpins current EDI work in physics. This includes measures to promote physics to young people in deprived areas, and to women and ethnic minorities, as well as mentoring and additional academic and financial support through all stages of education and careers.  It extends to re-shaping the content and promotion of physics courses in universities so they are more appealing and responsive to a wider constituency. In many organizations, there is also training for managers to combat discrimination and bias, whether conscious or not.

Actions like these have helped to improve participation and progression across physics education and careers, but there is still significant underrepresentation and marginalization due to gender, ethnicity and social background. This is not unusual in open and competitive societies where the effects of promoting equal opportunities are often outweighed by the resources and connections of people with characteristics that are highly represented. Talent and effort are crucial in “high-performance” sectors such as academia and industry, but they are not the only factors influencing success.

Physicists at the meeting told us that they are motivated by intellectual curiosity, fascination with the natural world and love for their subject. Yet there is also, in physics, a culture of “genius” and competition, in which confidence is crucial. Facilities and working conditions, which often involve short-term contracts and international mobility, are difficult to balance alongside other life commitments. Although inequalities and exclusions are recognized, they are often ascribed to broader social factors or the inherent requirements of research. As a result, physicists tend not to accept responsibility for inequities within the discipline.

Physics has a culture of “hyper-meritocracy” where being correct counts more than respecting others

Many physicists want merit to be a reflection of talent and effort. But we identified that physics has a culture of “hyper-meritocracy” where being correct counts more than respecting others. Across the community, some believe in positive action beyond the removal of discrimination, but others can be actively hostile to any measure associated with EDI. This is a challenging environment for any young researcher and we heard distressing stories of isolation from women and colleagues who had hidden disabilities or those who were the first in their family to go to university.

The experience, positive or not, when joining a research group as a postgraduate or postdoctoral researcher is often linked with the personality of leaders. Peer groups and networks have helped many physicists through this period of their career, but it is also where the culture in a research group or department can drive some to the margins and ultimately out of the profession. In environments like this, equal opportunities have proved insufficient to advance diversity, let alone inclusion.

Culture change

Organizations that have replaced equality with equity want to signal a commitment not just to equal treatment, but also more equitable outcomes. However, those who have worked in government told us that some people become disengaged, thinking such efforts can only be achieved by reducing standards and threatening cultures they value. Given that physics needs technical proficiency and associated resources and infrastructure, it is not a discipline where equity can mean an equal distribution of positions and resources.

Physics can, though, counter the influence of wider inequalities by helping colleagues who are under-represented to gain the attributes, experiences and connections that are needed to compete successfully for doctoral studentships, research contracts and academic positions. It can also face up to its cultural problems, so colleagues who are minoritized feel less marginalized and they are ultimately recognized for their efforts and contributions.

This will require physicists giving more prominence to marginalized voices as well as critically and honestly examining their culture and tackling unacceptable behaviour. We believe we can achieve this by collaborating with our social science colleagues. That includes gathering and interpreting qualitative data, so there is shared understanding of problems, as well as designing strategies with people who are most affected, so that everyone has a stake in success.

If this happens, we can look forward to a physics community that genuinely practices equity, rather than espousing equality of opportunity.

The post What ‘equity’ really means for physics appeared first on Physics World.

When Bohr got it wrong: the impact of a little-known paper on the development of quantum theory

28 janvier 2025 à 19:00
Brilliant mind Illustration of the Danish physicist and Nobel laureate Niels Bohr (1885-1962). Bohr made numerous contributions to physics during his career, but it was his work on atomic structure and quantum theory that won him the 1922 Nobel Prize for Physics. (Courtesy: Sam Falconer, Debut Art/Science Photo Library)

One hundred and one years ago, Danish physicist Niels Bohr proposed a radical theory together with two young colleagues – Hendrik Kramers and John Slater – in an attempt to resolve some of the most perplexing issues in fundamental physics at the time. Entitled “The Quantum Theory of Radiation”, and published in the Philosophical Magazine, their hypothesis was quickly proved wrong, and has since become a mere footnote in the history of quantum mechanics.

Despite its swift demise, their theory perfectly illustrates the sense of crisis felt by physicists at that moment, and the radical ideas they were prepared to contemplate to resolve it. For in their 1924 paper Bohr and his colleagues argued that the discovery of the “quantum of action” might require the abandonment of nothing less than the first law of thermodynamics: the conservation of energy.

As we celebrate the centenary of Werner Heisenberg’s 1925 quantum breakthrough with the International Year of Quantum Science and Technology (IYQ) 2025, Bohr’s 1924 paper offers a lens through which to look at how the quantum revolution unfolded. Most physicists at that time felt that if anyone was going to rescue the field from the crisis, it would be Bohr. Indeed, this attempt clearly shows signs of the early rift between Bohr and Albert Einstein about the quantum realm that would turn into a lifelong argument. Remarkably, the paper also drew on an idea that later featured in one of today’s most prominent alternatives to Bohr’s “Copenhagen” interpretation of quantum mechanics.

Genesis of a crisis

The quantum crisis began when German physicist Max Planck proposed the quantization of energy in 1900, as a mathematical trick for calculating the spectrum of radiation from a warm, perfectly absorbing “black body”. Later, in 1905, Einstein suggested taking this idea literally to account for the photoelectric effect, arguing that light consisted of packets or quanta of electromagnetic energy, which we now call photons.

Bohr entered the story in 1912 when, working in the laboratory of Ernest Rutherford in Manchester, he devised a quantum theory of the atom. In Bohr’s picture, the electrons encircling the atomic nucleus (which Rutherford had discovered in 1911) are constrained to specific orbits with quantized energies. The electrons can hop in “quantum jumps” by emitting or absorbing photons with the corresponding energy.

Conflicting views Stalwart physicists Albert Einstein and Niels Bohr had opposing views on quantum fundamentals from early on, which turned into a lifelong scientific argument between the two. (Paul Ehrenfest/Wikimedia Commons)

Bohr had no theoretical justification for this ad hoc assumption, but he showed that, by accepting it, he could predict (more or less) the spectrum of the hydrogen atom. For this work Bohr was awarded the 1922 Nobel Prize for Physics, the same year that Einstein collected the prize for his work on light quanta and the photoelectric effect (he had been awarded it in 1921 but was unable to attend the ceremony).
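
The kind of prediction involved can be shown with a few lines of arithmetic: in Bohr’s model the hydrogen levels are E_n = –13.6 eV/n², and a jump between levels emits a photon of wavelength λ = hc/ΔE. The sketch below reproduces the visible Balmer lines from that rule – a standard textbook calculation, offered here only as an illustration.

```python
# Textbook sketch: hydrogen emission lines from the Bohr model, E_n = -13.6 eV / n^2
h = 6.626e-34     # Planck constant, J s
c = 2.998e8       # speed of light, m/s
eV = 1.602e-19    # joules per electronvolt
E0 = 13.6         # hydrogen ground-state binding energy, eV

def wavelength_nm(n_upper, n_lower):
    """Wavelength of the photon emitted in the jump n_upper -> n_lower."""
    delta_E = E0 * (1 / n_lower**2 - 1 / n_upper**2) * eV   # photon energy, J
    return h * c / delta_E * 1e9                            # metres -> nanometres

for n in range(3, 7):   # Balmer series: jumps down to n = 2 (visible light)
    print(f"{n} -> 2: {wavelength_nm(n, 2):.0f} nm")        # 656, 486, 434, 410 nm
```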

After establishing an institute of theoretical physics (now the Niels Bohr Institute) in Copenhagen in 1917, Bohr’s mission was to find a true theory of the quantum: a mechanics to replace, at the atomic scale, the classical physics of Isaac Newton that worked at larger scales. It was clear that classical physics did not work at the scale of the atom, although Bohr’s correspondence principle asserted that quantum theory should give the same results as classical physics at a large enough scale.

Mathematical mind Dutch physicist Hendrik Kramers spent 10 years as Niels Bohr’s assistant in Copenhagen. (Wikimedia Commons)

Quantum theory was at the forefront of physics at the time, and so was the most exciting topic for any aspiring young physicist. Three groups stood out as the most desirable places to work for anyone seeking a fundamental mathematical theory to replace the makeshift and sometimes contradictory “old” quantum theory that Bohr had cobbled together: that of Arnold Sommerfeld in Munich, of Max Born in Göttingen, and of Bohr in Copenhagen.

Dutch physicist Hendrik Kramers had hoped to work on his doctorate with Born – but in 1916 the First World War ruled that out, and so he opted instead for Copenhagen, in politically neutral Denmark. There he became Bohr’s assistant for ten years: as was the case with several of Bohr’s students, Kramers did the maths (it was never Bohr’s forte) while Bohr supplied the ideas, philosophy and kudos. Kramers ended up working on an impressive range of problems, from chemical physics to pure mathematics.

Reckless and radical

One of the most vexing questions for Bohr and his Copenhagen circle in the early 1920s was how to think about electron orbits in atoms. Try as they might, they couldn’t find a way to make the orbits “fit” with experimental observations of atomic spectra.

Perhaps, in quantum systems like atoms, we have to abandon any attempt to construct a physical picture at all

Bohr and others, including Heisenberg, began to voice a possibility that seemed almost reckless: perhaps, in quantum systems like atoms, we have to abandon any attempt to construct a physical picture at all. Maybe we just can’t think of quantum particles as objects moving along trajectories in space and time.

This struck others, such as Einstein, as desperate, if not crazy. Surely the goal of science had always been to offer a picture of the world in terms of “things happening to objects in space”. What else could there be than that? How could we just give it all up?

But it was worse than that. For one thing, Bohr’s quantum jumps were supposed to happen instantaneously: an electron, say, jumping from one orbit to another in no time at all. In classical physics, everything happens continuously: a particle gets from here to there by moving smoothly across the intervening space, in some finite time. To some – like Austrian physicist Erwin Schrödinger in Vienna – the discontinuities of quantum jumps seemed to border on the obscene.

Worse still was the fact that while the old quantum theory stipulated the energy of quantum jumps, there was nothing to dictate when they would happen – they simply did. In other words, there was no causal kick that instigated a quantum jump: the electron just seemed to make up its own mind about when to jump. As Heisenberg would later proclaim in his 1927 paper on the uncertainty principle (Zeitschrift für Physik 43 172),  quantum theory “establishes the final failure of causality”.

Such notions were not the only source of friction between the Copenhagen team and Einstein. Bohr didn’t like light quanta. While they seemed to explain the photoelectric effect, Bohr was convinced that light had to be fundamentally wave-like, so that photons (to use the anachronistic term) were only a way of speaking, not real entities.

To add to the turmoil in 1924, the French physicist Louis de Broglie had, in his doctoral thesis for the Sorbonne, turned the quantum idea on its head by proposing that particles such as electrons might show wave-like behaviour. Einstein had at first considered this too wild, but soon came round to the idea.

Go where the waves take you

In 1924 these virtually heretical ideas were only beginning to surface, but they were creating such a sense of crisis that it seemed anything was possible. In the early 1970s, science historian Paul Forman suggested that the feverish atmosphere in physics had been part of an even wider cultural current. By rejecting causality and materialism, the German quantum physicists, Forman said, were attempting to align their ideas with a rejection of mechanistic thinking while embracing the irrational – as was the fashion in the philosophical and intellectual circles of the beleaguered Weimar Republic. The idea has been hotly debated by historians and philosophers of science – but it was surely in Copenhagen, not Munich or Göttingen, that the most radical attitudes to quantum theory were developing.

Particle pilot In 1923, US physicist John Clarke Slater moved to Copenhagen, and suggested the concept of a “virtual field” that spread throughout a quantum system. (Emilio Segrè Visual Archives General Collection/MIT News Office)

Then, just before Christmas in 1923, a new student arrived at Copenhagen. John Clarke Slater, who had a PhD in physics from Harvard, turned up at Bohr’s institute with a bold idea. “You know those difficulties about not knowing whether light is old-fashioned waves or Mr Einstein’s light particles”, he wrote to his family during a spell in Cambridge that November. “I had a really hopeful idea… I have both the waves and the particles, and the particles are sort of carried along by the waves, so that the particles go where the waves take them.” The waves were manifested in a “virtual field” of some kind that spread throughout the system, and they acted to “pilot” the particles.

Bohr was mostly not a fan of Slater’s idea, not least because it retained the light particles that he wished to dispose of. But he liked Slater’s notion of a virtual field that could put one part of a quantum system in touch with others. Together with Slater and Kramers, Bohr prepared a paper in a remarkably short time (especially for him) outlining what became known as the Bohr-Kramers-Slater (BKS) theory. They sent it off to the Philosophical Magazine (where Bohr had published his seminal papers on the quantum atom) at the end of January 1924, and it was published in May (47(281) 785). As was increasingly characteristic of Bohr’s style, it was free of any mathematics (beyond Einstein’s quantum relationship E=hν).

In the BKS picture, an excited atom about to emit light can “communicate continually” with the other atoms around it via the virtual field. The transition, with emission of a light quantum, is then not spontaneous but induced by the virtual field. This mechanism could solve the long-standing question of how an atom “knows” which frequency of light to emit in order to reach another energy level: the virtual field effectively puts the atom “in touch” with all the possible energy states of the system.

The problem was that this meant the emitting atom was in instant communication with its environment all around – which violated the law of causality. Well then, so much the worse for causality: BKS abandoned it. The trio’s theory also violated the conservation of energy and momentum – so they had to go too.

Causality and conservation, abandoned

But wait: hadn’t these conservation laws been proved? In 1923 the American physicist Arthur Compton, working at Washington University in St Louis, had shown that when light is scattered by electrons, they exchange energy, and the frequency of the light decreases as it gives up energy to the electrons. The results of Compton’s experiments agreed perfectly with predictions made on the assumptions that light is a stream of quanta (photons) and that their collisions with electrons conserve energy and momentum.
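
The quantitative signature is the Compton shift: treating the light as photons and applying energy and momentum conservation to each collision gives Δλ = (h/m_e c)(1 − cos θ) for scattering through an angle θ. The snippet below evaluates that standard formula for a few angles; it is an illustration only, not anything drawn from the 1920s papers themselves.

```python
# Textbook sketch: Compton shift, delta_lambda = (h / (m_e * c)) * (1 - cos(theta))
import math

h = 6.626e-34     # Planck constant, J s
m_e = 9.109e-31   # electron mass, kg
c = 2.998e8       # speed of light, m/s

compton_wavelength = h / (m_e * c)    # ~2.43 pm

for theta_deg in (30, 90, 180):
    shift = compton_wavelength * (1 - math.cos(math.radians(theta_deg)))
    print(f"theta = {theta_deg:3d}°: shift ≈ {shift * 1e12:.2f} pm")
```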

Ah, said BKS, but that’s only true statistically. The quantities are conserved on average, but not in individual collisions. After all, such statistical outcomes were familiar to physicists: that was the basis of the second law of thermodynamics, which presented the inexorable increase in entropy as a statistical phenomenon that need not constrain processes involving single particles.

The radicalism of the BKS paper got a mixed reception. Einstein, perhaps predictably, was dismissive. “Abandonment of causality as a matter of principle should be permitted only in the most extreme emergency”, he wrote. Wolfgang Pauli, who had worked in Copenhagen in 1922–23, confessed to being “completely negative” about the idea. Born and Schrödinger were more favourable.

But the ultimate arbiter is experiment. Was energy conservation really violated in single-particle interactions? The BKS paper motivated others to find out. In early 1925, German physicists Walther Bothe and Hans Geiger in Berlin looked more closely at Compton’s X-ray scattering by electrons. Having read the BKS paper, Bothe felt that “it was immediately obvious that this question would have to be decided experimentally, before definite progress could be made.”

Experimental arbitrators German physicists Walther Bothe and Hans Geiger (right) conducted an experiment, prompted by the BKS paper, that looked at X-ray scattering from electrons to test the conservation of energy at microscopic scales. (IPP/© Archives of the Max Planck Society)

Geiger agreed, and the duo devised a scheme for detecting both the scattered electron and the scattered photon in separate detectors. If causality and energy conservation were preserved, the detections should be simultaneous; any delay between them could indicate a violation. As Bothe would later recall, “The ‘question to Nature’ which the experiment was designed to answer could therefore be formulated as follows: is it exactly a scatter quantum and a recoil electron that are simultaneously emitted in the elementary process, or is there merely a statistical relationship between the two?” It was incredibly painstaking work to seek such coincident detections using the resources then available. But in April 1925 Geiger and Bothe reported simultaneity within a millisecond – close enough to make a strong case that Compton’s treatment, which assumed energy conservation, was correct. Compton himself, working with Alfred Simon using a cloud chamber, confirmed that energy and momentum were conserved for individual events (Phys. Rev. 26 289).

Revolutionary defeat… singularly important

Bothe was awarded the 1954 Nobel Prize for Physics for the work. He shared it with Born for his work on quantum theory, and Geiger would surely have been a third recipient, if he had not died in 1945. In his Nobel speech, Bothe definitively stated that “the strict validity of the law of the conservation of energy even in the elementary process had been demonstrated, and the ingenious way out of the wave-particle problem discussed by Bohr, Kramers, and Slater was shown to be a blind alley.”

Bohr was gracious in his defeat, writing to a colleague in April 1925 that “It seems… there is nothing else to do than to give our revolutionary efforts as honourable a funeral as possible.” Yet he was soon to have no need of that particular revolution, for just a few months later Heisenberg, who had returned to Göttingen after working with Bohr in Copenhagen for six months, came up with the first proper theory of quantum mechanics, later called matrix mechanics.

“In spite of its short lifetime, the BKS theory was singularly important,” says historian of science Helge Kragh, now emeritus professor at the Niels Bohr Institute. “Its radically new approach paved the way for a greater understanding, that methods and concepts of classical physics could not be carried over in a future quantum mechanics.”

The Bothe-Geiger experiment that [the paper] inspired was not just an important milestone in early particle physics. It was also a crucial factor in Heisenberg’s argument [about] the probabilistic character of his matrix mechanics

The BKS paper was thus in a sense merely a mistaken curtain-raiser for the main event. But the Bothe-Geiger experiment that it inspired was not just an important milestone in early particle physics. It was also a crucial factor in Heisenberg’s argument that the probabilistic character of his matrix mechanics (and also of Schrödinger’s 1926 version of quantum mechanics, called wave mechanics) couldn’t be explained away as a statistical expression of our ignorance about the details, as it is in classical statistical mechanics.

Radical approach Despite its swift defeat, the BKS proposal showed how classical concepts could not apply to a quantum reality. (Courtesy: Shutterstock/Vink Fan)

Rather, the probabilities that emerged from Heisenberg’s and Schrödinger’s theories applied to individual events: they were, Heisenberg said, fundamental to the way single particles behave. Schrödinger was never happy with that idea, but today it seems inescapable.

Over the next few years, Bohr and Heisenberg argued that the new quantum mechanics indeed smashed causality and shattered the conventional picture of reality as an objective world of objects moving in space–time with fixed properties. Assisted by Born, Wolfgang Pauli and others, they articulated the “Copenhagen interpretation”, which became the predominant vision of the quantum world for the rest of the century.

Failed connections

Slater wasn’t at all pleased with what became of the idea he took to Copenhagen. Bohr and Kramers had pressured him into accepting their take on it, “without the little lump carried along on the waves”, as he put it in mid-January. “I am willing to let them have their way”, he wrote at the time, but in retrospect he felt very unhappy about his time in Denmark. After the BKS theory was disproved, Bohr wrote to Slater saying “I have a bad conscience in persuading you to our views”.

Slater replied that there was no need for that. But in later life – after he had made a name for himself in solid-state physics – Slater admitted to a great deal of resentment. “I completely failed to make any connection with Bohr”, he said in a 1963 interview with the historian of science Thomas Kuhn. “I fought with them [Bohr and Kramers] so seriously that I’ve never had any respect for those people since. I had a horrible time in Copenhagen.” While most of Bohr’s colleagues and students expressed adulation, Slater’s was a rare dissenting voice.

But Slater might have reasonably felt more aggrieved at what became of his “pilot-wave” idea. Today, that interpretation of quantum theory is generally attributed to de Broglie – who intimated a similar notion in his 1924 thesis, before presenting the theory in more detail at the famous 1927 Solvay Conference – and to American physicist David Bohm, who revitalized the idea in the 1950s. Initially dismissed on both occasions, the de Broglie-Bohm theory has gained advocates in recent years, not least because it can be applied to a classical hydrodynamic analogue, in which oil droplets are steered by waves on an oil surface.

Whether or not it is the right way to think about quantum mechanics, the pilot-wave theory touches on the deep philosophical problems of the field. Can we rescue an objective reality of concrete particles with properties described by hidden variables, as Einstein had advocated, from the fuzzy veil that Bohr and Heisenberg seemed to draw over the quantum world? Perhaps Slater would at least be gratified to know that Bohr has not yet had the last word.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

The post When Bohr got it wrong: the impact of a little-known paper on the development of quantum theory appeared first on Physics World.

Theorists propose a completely new class of quantum particles

28 janvier 2025 à 14:04

In a ground-breaking theoretical study, two physicists have identified a new class of quasiparticle called the paraparticle. Their calculations suggest that paraparticles exhibit quantum properties that are fundamentally different from those of familiar bosons and fermions, such as photons and electrons respectively.

Using advanced mathematical techniques, Kaden Hazzard at Rice University in the US and his former graduate student Zhiyuan Wang, now at the Max Planck Institute of Quantum Optics in Germany, have meticulously analysed the mathematical properties of paraparticles and proposed a real physical system that could exhibit paraparticle behaviour.

“Our main finding is that it is possible for particles to have exchange statistics different from those of fermions or bosons, while still satisfying the important physical principles of locality and causality,” Hazzard explains.

Particle exchange

In quantum mechanics, the behaviour of particles (and quasiparticles) is probabilistic in nature and is described by mathematical entities known as wavefunctions. These govern the likelihood of finding a particle in a particular state, as defined by properties like position, velocity, and spin. The exchange statistics of a specific type of particle dictates how its wavefunction behaves when two identical particles swap places.

For bosons such as photons, the wavefunction remains unchanged when particles are exchanged. This means that many bosons can occupy the same quantum state, enabling phenomena like lasers and superfluidity. In contrast, when fermions such as electrons are exchanged, the sign of the wavefunction flips from positive to negative or vice versa. This antisymmetric property prevents fermions from occupying the same quantum state. This underpins the Pauli exclusion principle and results in the electronic structure of atoms and the nature of the periodic table.
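
The distinction is easy to make concrete with two-particle wavefunctions: the bosonic combination ψ_a(x1)ψ_b(x2) + ψ_a(x2)ψ_b(x1) is unchanged when the particles are swapped, while the fermionic combination with a minus sign flips sign and vanishes altogether when both particles are put in the same state. The toy sketch below illustrates this; it is a generic textbook construction, not the authors’ paraparticle formalism.

```python
# Toy sketch: symmetric (bosonic) vs antisymmetric (fermionic) two-particle amplitudes.
# Generic textbook construction, not the paraparticle formalism from the paper.
import numpy as np

def psi(centre, x):
    """An arbitrary (unnormalized) single-particle wavefunction, for illustration."""
    return np.exp(-(x - centre) ** 2)

def two_particle(x1, x2, a, b, sign):
    """sign = +1 gives the bosonic combination, sign = -1 the fermionic one."""
    return psi(a, x1) * psi(b, x2) + sign * psi(a, x2) * psi(b, x1)

x1, x2 = 0.3, 1.1      # positions of the two particles
a, b = 0.0, 1.0        # labels of the two single-particle states

for sign, label in ((+1, "boson"), (-1, "fermion")):
    print(f"{label}: amplitude = {two_particle(x1, x2, a, b, sign):+.4f}, "
          f"after swapping particles = {two_particle(x2, x1, a, b, sign):+.4f}")

# Putting both particles in the same state kills the fermionic amplitude entirely:
print("fermion, same state:", two_particle(x1, x2, a, a, -1))   # Pauli exclusion
```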

Until now, physicists believed that these two types of particle statistics – bosonic and fermionic – were the only possibilities in 3D space. This is the result of fundamental principles like locality, which states that events occurring at one point in space cannot instantaneously influence events at a distant location.

Breaking boundaries

Hazzard and Wang’s research overturns the notion that 3D systems are limited to bosons and fermions and shows that new types of particle statistics, called parastatistics, can exist without violating locality.

The key insight in their theory lies in the concept of hidden internal characteristics. Beyond the familiar properties like position and spin, paraparticles require additional internal parameters that enable more complex wavefunction behaviour. This hidden information allows paraparticles to exhibit exchange statistics that go beyond the binary distinction of bosons and fermions.

Paraparticles exhibit phenomena that resemble – but are distinct from – fermionic and bosonic behaviours. For example, while fermions cannot occupy the same quantum state, up to two paraparticles could be allowed to coexist at the same point in space. This behaviour strikes a balance between the exclusivity of fermions and the clustering tendency of bosons.

Bringing paraparticles to life

While no elementary particles are known to exhibit paraparticle behaviour, the researchers believe that paraparticles might manifest as quasiparticles in engineered quantum systems or certain materials. A quasiparticle is a particle-like collective excitation of a system. A familiar example is the hole, which is created in a semiconductor when a valence-band electron is excited to the conduction band. The vacancy (or hole) left in the valence band behaves as a positively charged particle that can travel through the semiconductor lattice.

Experimental systems of ultracold atoms created by collaborators of the duo could be one place to look for the exotic particles. “We are working with them to see if we can detect paraparticles there,” explains Wang.

In ultracold atom experiments, lasers and magnetic fields are used to trap and manipulate atoms at temperatures near absolute zero. Under these conditions, atoms can mimic the behaviour of more exotic particles. The team hopes that similar setups could be used to observe paraparticle-like behaviour in higher-dimensional systems, such as 3D space. However, further theoretical advances are needed before such experiments can be designed.

Far-reaching implications

The discovery of paraparticles could have far-reaching implications for physics and technology. Fermionic and bosonic statistics have already shaped our understanding of phenomena ranging from the stability of neutron stars to the behaviour of superconductors. Paraparticles could similarly unlock new insights into the quantum world.

“Fermionic statistics underlie why some systems are metals and others are insulators, as well as the structure of the periodic table,” Hazzard explains. “Bose-Einstein condensation [of bosons] is responsible for phenomena such as superfluidity. We can expect a similar variety of phenomena from paraparticles, and it will be exciting to see what these are.”

As research into paraparticles continues, it could open the door to new quantum technologies, novel materials, and deeper insights into the fundamental workings of the universe. This theoretical breakthrough marks a bold step forward, pushing the boundaries of what we thought possible in quantum mechanics.

The paraparticles are described in Nature.

The post Theorists propose a completely new class of quantum particles appeared first on Physics World.

The secret to academic success? Publish a top paper as a postdoc, study finds

28 janvier 2025 à 11:53

If you’re a postdoc who wants to nail down that permanent faculty position, it’s wise to publish a highly cited paper after your PhD. That’s the conclusion of a study by an international team of researchers, which finds that publication rates and performance during the postdoc period are key to academic retention and early-career success. Their analysis also reveals that more than four in 10 postdocs drop out of academia.

A postdoc is usually a temporary appointment that is seen as preparation for an academic career. Many researchers, however, end up doing several postdocs in a row as they hunt for a permanent faculty job. “There are many more postdocs than there are faculty positions, so it is a kind of systemic bottleneck,” says Petter Holme, a computer scientist at Aalto University in Finland, who led the study.

Previous research into academic career success has tended to overlook the role of the postdoc, focusing instead on, say, the impact of where researchers did their PhD. To tease out the effect of a postdoc, Holme and colleagues combined information on academics’ career stages from LinkedIn with their publication history obtained from Microsoft Academic Graph. The resulting global dataset covered 45,572 careers spanning 25 years across all academic disciplines.

Overall, they found, 41% of postdocs left academia. But researchers who publish a highly cited paper as a postdoc are much more likely to pursue a faculty career – whether or not they also published a highly cited paper during their PhD. Publication rate is also vital: researchers who publish less as postdocs than they did during their PhDs are more likely to drop out of academia. Conversely, as productivity increased, so did the likelihood of a postdoc gaining a faculty position.
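
The kind of association reported here is the sort of thing that can be probed with a simple logistic regression of retention against postdoc publication metrics. The sketch below runs such a regression on entirely synthetic data with made-up effect sizes; it is only meant to illustrate the style of analysis, not to reproduce the team’s methods or numbers.

```python
# Illustrative sketch only: logistic regression of "stayed in academia" against
# postdoc publication metrics, using synthetic data (not the study's dataset).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

top_paper = rng.integers(0, 2, n)        # 1 = highly cited postdoc paper (synthetic)
productivity = rng.normal(0.0, 1.0, n)   # change in publication rate vs PhD (synthetic)

# Synthetic "ground truth": both effects raise the odds of staying in academia
logit = -0.5 + 1.2 * top_paper + 0.6 * productivity
stayed = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([top_paper, productivity])
model = LogisticRegression().fit(X, stayed)
print("fitted coefficients:", model.coef_[0])   # should recover roughly [1.2, 0.6]
print("baseline retention:", stayed.mean())
```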

Expanding horizons

Holme says their results suggest that a researcher only has a few years “to get on the positive feedback loop, where one success leads to another”. In fact, the team found that a “moderate” change in research topic when moving from PhD to postdoc could improve future success. “It is a good thing to change your research focus, but not too much,” says Holme, because it widens a researcher’s perspective without requiring them to learn an entirely new research topic from scratch.

Likewise, shifting perspective by moving abroad can also benefit postdocs. The analysis shows that a researcher moving abroad for a postdoc boosts their citations, but a move to a different institution in the same country has a negligible impact.

The post The secret to academic success? Publish a top paper as a postdoc, study finds appeared first on Physics World.

Nanocrystals measure tiny forces on tiny length scales

22 janvier 2025 à 18:14

Two independent teams in the US have demonstrated the potential of using the optical properties of nanocrystals to create remote sensors that measure tiny forces on tiny length scales. One team is based at Stanford University and used nanocrystals to measure the micronewton-scale forces exerted by a worm as it chewed bacteria. The other team is based at several institutes and used the photon avalanche effect in nanocrystals to measure sub-nanonewton to micronewton forces. The latter technique could potentially be used to study forces involved in processes such as stem cell differentiation.

Remote sensing of forces at small scales is challenging, especially inside living organisms. Optical tweezers cannot make remote measurements inside the body, while fluorophores – molecules that absorb and re-emit light – can measure forces in organisms, but have limited range, problematic stability or, in the case of quantum dots, toxicity. Nanocrystals with optical properties that change when subjected to external forces offer a way forward.

At Stanford, materials scientist Jennifer Dionne led a team that used nanocrystals doped with ytterbium and erbium. When two ytterbium atoms absorb near-infrared photons, they can then transfer energy to a nearby erbium atom. In this excited state, the erbium can either decay directly to its lowest energy state by emitting red light, or become excited to an even higher-energy state that decays by emitting green light. These processes are called upconversion.

Colour change

The ratio of green to red emission depends on the separation between the ytterbium and erbium atoms, and the separation between the erbium atoms – explains Dionne’s PhD student Jason Casar, who is lead author of a paper describing the Stanford research. Forces on the nanocrystal can change these separations and therefore affect that ratio.
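
Because the readout is a ratio of two emission bands, the force is inferred by comparing the measured green-to-red ratio with a calibration curve. The sketch below shows the idea with a made-up linear calibration; both the numbers and the functional form are assumptions for illustration, not the Stanford team’s calibration.

```python
# Illustrative sketch: converting a green/red emission ratio into a force using a
# calibration curve. The linear calibration below is an assumption, not the
# calibration reported by the Stanford group.
import numpy as np

# Hypothetical calibration: ratio = r0 + slope * force (force in micronewtons)
r0, slope = 1.00, -0.04      # assumed values

def force_from_ratio(ratio):
    """Invert the assumed linear calibration to recover the force in uN."""
    return (ratio - r0) / slope

measured_ratios = np.array([0.96, 0.88, 0.80])
for r in measured_ratios:
    print(f"green/red = {r:.2f}  ->  force ≈ {force_from_ratio(r):.1f} µN")
```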

The researchers encased their nanocrystals in polystyrene vessels approximately the size of an E. coli bacterium. They then mixed the encased nanoparticles with E. coli bacteria, which were fed to tiny nematode worms. To extract the nutrients, the worm’s pharynx needs to break open the bacterial cell wall. “The biological question we set out to answer is how much force is the bacterium generating to achieve that breakage?” explains Stanford’s Miriam Goodman.

The researchers shone near-infrared light on the worms, allowing them to monitor the flow of the nanocrystals. By measuring the colour of the emitted light when the particles reached the pharynx, they determined the force it exerted with micronewton-scale precision.

Meanwhile, a collaboration of scientists at Columbia University, Lawrence Berkeley National Laboratory and elsewhere has shown that a process called photon avalanche can be used to measure even smaller forces on nanocrystals. The team’s avalanching nanoparticles (ANPs) are sodium yttrium fluoride nanocrystals doped with thulium – and were discovered by the team in 2021.

The fun starts here

The sensing process uses a laser tuned off-resonance from any transition from the ground state of the ANP. “We’re bathing our particles in 1064 nm light,” explains James Schuck of Columbia University, whose group led the research. “If the intensity is low, that all just blows by. But if, for some reason, you do eventually get some absorption – maybe a non-resonant absorption in which you give up a few phonons…then the fun starts. Our laser is resonant with an excited state transition, so you can absorb another photon.”

This creates a doubly excited state that can decay radiatively directly to the ground state, producing an upconverted photon. Or its energy can be transferred to a nearby thulium atom, which becomes resonant with the excited state transition and can excite more thulium atoms into resonance with the laser. “That’s the avalanche,” says Schuck. “We find on average you get 30 or 40 of these events – it’s analogous to a chain reaction in nuclear fission.”
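
That chain-reaction analogy can be captured with a simple branching-process toy model, in which each excitation triggers a random number of further excitations with a mean just below one, and the total count per seed photon is the avalanche gain. The Monte Carlo sketch below uses assumed parameters chosen to give an average of a few tens of events, in the spirit of the numbers Schuck quotes; it is not a model of the real thulium photophysics.

```python
# Toy branching-process sketch of photon-avalanche gain. The offspring mean is an
# assumed parameter, not a property of the real thulium-doped nanoparticles.
import numpy as np

rng = np.random.default_rng(1)

def avalanche_size(mean_offspring=0.97, max_events=100_000):
    """Total number of excitation events triggered by a single seed absorption."""
    active, total = 1, 0
    while active and total < max_events:
        total += 1
        active -= 1
        active += rng.poisson(mean_offspring)   # secondary excitations
    return total

sizes = [avalanche_size() for _ in range(2000)]
print(f"mean avalanche size ≈ {np.mean(sizes):.1f} events")   # ~1/(1 - 0.97) ≈ 33
```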

Now, Schuck and colleagues have shown that the exact number of photons produced in each avalanche decreases when the nanoparticle experiences compressive force. One reason is that the phonon frequencies are raised as the lattice is compressed, making non-radiative decay energetically more favourable.

The thulium-doped nanoparticles decay by emitting either red or near infrared photons. As the force increases, the red dims more quickly, causing a change in the colour of the emitted light. These effects allowed the researchers to measure forces from the sub-nanonewton to the micronewton range – at which point the light output from the nanoparticles became too low to detect.

Not just for forces

Schuck and colleagues are now seeking practical applications of their discovery, and not just for measuring forces.

“We’re discovering that this avalanching process is sensitive to a lot of things,” says Schuck. “If we put these particles in a cell and we’re trying to measure a cellular force gradient, but the cell also happened to change its temperature, that would also affect the brightness of our particles, and we would like to be able to differentiate between those things. We think we know how to do that.”

If the technique could be made to work in a living cell, it could be used to measure tiny forces such as those involved in the extra-cellular matrix that dictate stem cell differentiation.

Andries Meijerink of Utrecht University in the Netherlands believes both teams have done important work that is impressive in different ways: Schuck and colleagues for unveiling a fundamentally new force-sensing technique, and Dionne’s team for demonstrating a remarkable practical application.

However, Meijerink is sceptical that photon avalanching will be useful for sensing in the short term. “It’s a very intricate process,” he says, adding, “There’s a really tricky balance between this first absorption step, which has to be slow and weak, and this resonant absorption”. Nevertheless, he says that researchers are discovering other systems that can avalanche. “I’m convinced that many more systems will be found,” he says.

Both studies are described in Nature. Dionne and colleagues report their results here, and Schuck and colleagues here.

The post Nanocrystals measure tiny forces on tiny length scales appeared first on Physics World.

IOP president Keith Burnett outlines a ‘pivotal’ year ahead for UK physics

22 janvier 2025 à 15:37

Last year was the year of elections and 2025 is going to be the year of decisions.

After many countries, including the UK, Ireland and the US, went to the polls in 2024, the start of 2025 will see governments at the beginning of new terms, forced to respond swiftly to mounting economic, social, security, environmental and technological challenges.

These issues would be difficult to address at any given time, but today they come amid a turbulent geopolitical context. Governments are often judged against short milestones – the first 100 days or a first budget – but urgency should not come at the cost of thinking long-term, because the decisions over the next few months will shape outcomes for years, perhaps decades, to come. This is no less true for science than it is for health and social care, education or international relations.

In the UK, the first half of the year will be dominated by the government’s spending review. Due in late spring, it could be one of the toughest political tests for UK science, as the implications of the tight spending plans announced in the October budget become clear. Decisions about departmental spending will have important implications for physics funding, from research to infrastructure, facilities and teaching.

One of the UK government’s commitments is to establish 10-year funding cycles for key R&D activities – a policy that could be a positive improvement. Physics discoveries often take time to realise in full, but their transformational nature is indisputable. From fibre-optic communications to magnetic resonance imaging, physics has been indispensable to many of the world’s most impactful and successful innovations.

Emerging technologies, enabled by physicists’ breakthroughs in fields such as materials science and quantum physics, promise to transform the way we live and work, and create new business opportunities and open up new markets. A clear, comprehensive and long-term vision for R&D would instil confidence among researchers and innovators, and long-term and sustainable R&D funding would enable people and disruptive ideas to flourish and drive tomorrow’s breakthroughs.

Alongside the spending review, we are also expecting the publication of the government’s industrial strategy. The focus of the green paper published last year was an indication of how the strategy will place significance on science and technology in positioning the UK for economic growth.

If we don’t recognise the need to fund more physicists, we will miss so many of the opportunities that lie ahead

Physics-based industries are a foundation stone for the UK economy and are highly productive, as highlighted by research commissioned by the Institute of Physics, which publishes Physics World. Across the UK, the physics sector generates £229bn gross value added, or 11% of total UK gross domestic product. It creates a collective turnover of £643bn, or £1380bn when indirect and induced turnover is included.

Labour productivity in physics-based businesses is also strong at £84,300 per worker per year. So, if physics is not at the heart of this effort, then the government’s mission of economic revival is in danger of failing to get off the launch pad.

A pivotal year

Another of the new government’s policy priorities is the strategic defence review, which is expected to be published later this year. It could have huge implications for physics given its core role in many of the technologies that contribute to the UK’s defence capabilities. The changing geopolitical landscape, and potential for strained relations between global powers, may well bring research security to the front of the national mind.

Intellectual property, and scientific innovation, are some of the UK’s greatest strengths and it is right to secure them. But physics discoveries in particular can be hampered by overzealous security measures. So much of the important work in our discipline comes from years of collaboration between researchers across the globe. Decisions about research security need to protect, not hamper, the future of UK physics research.

This year could also be pivotal for UK universities, as securing their financial stability and future will be one of the major challenges. Last year, the pressures faced by higher education institutions became apparent, with announcements of course closures, redundancies and restructures as a way of saving money. The rise in tuition fees has far from solved the problem, so we need to be prepared for more turbulence coming for the higher education sector.

These things matter enormously. We have heard that universities are facing a tough situation, and it’s getting harder for physics departments to exist. But if we don’t recognise the need to fund more physicists, we will miss so many of the opportunities that lie ahead.

As we celebrate the International Year of Quantum Science and Technology that marks the centenary of the initial development of quantum mechanics by Werner Heisenberg, 2025 is a reminder of how the benefits of physics span over decades.

We need to enhance all the vital and exciting developments that are happening in physics departments. The country wants and needs a stronger scientific workforce – just think about all those individuals who studied physics and now work in industries that are defending the country – and that workforce will be strongly dependent on physics skills. So our priority is to make sure that physics departments keep doing world-leading research and preparing the next generation of physicists that they do so well.

The post IOP president Keith Burnett outlines a ‘pivotal’ year ahead for UK physics appeared first on Physics World.

Wrinkles in space–time could remember the secrets of exploding stars

20 janvier 2025 à 18:45

Permanent distortions in space–time caused by the passage of gravitational waves could be detectable from Earth. Known as “gravitational memory”, such distortions are predicted to occur most prominently when the core of a supernova collapses. Observing them could therefore provide a window into the death of massive stars and the creation of black holes, but there’s a catch: the supernova might have to happen in our own galaxy.

Physicists have been detecting gravitational waves from colliding stellar-mass black holes and neutron stars for almost a decade now, and theory predicts that core-collapse supernovae should also produce them. The difference is that unlike collisions, supernovae tend to be lopsided – they don’t explode outwards equally in all directions. It is this asymmetry – in both the emission of neutrinos from the collapsing core and the motion of the blast wave itself – that produces the gravitational-wave memory effect.

“The memory is the result of the lowest frequency aspects of these motions,” explains Colter Richardson, a PhD student at the University of Tennessee in Knoxville, US and co-lead author (with Haakon Andresen of Sweden’s Oskar Klein Centre) of a Physical Review Letters paper describing how gravitational-wave memory detection might work on Earth.

Filtering out seismic noise

Previously, many physicists assumed it wouldn’t be possible to detect the memory effect from Earth. This is because it manifests at frequencies below 10 Hz, where noise from seismic events tends to swamp detectors. Indeed, Harvard astrophysicist Kiranjyot Gill argues that detecting gravitational memory “would require exceptional sensitivity in the millihertz range to separate it from background noise and other astrophysical signals” – a sensitivity that she says Earth-based detectors simply don’t have.

Anthony Mezzacappa, Richardson’s supervisor at Tennessee, counters this by saying that while the memory signal itself cannot be detected, the ramp-up to it can. “The signal ramp-up corresponds to a frequency of 20–30 Hz, which is well above 10 Hz, below which the detector response needs to be better characterized for what we can detect on Earth, before dropping down to virtually 0 Hz where the final memory amplitude is achieved,” he tells Physics World.

The key, Mezzacappa explains, is a “matched filter” technique in which templates of what the ramp-up should look like are matched to the signal to pick it out from low-frequency background noise. Using this technique, the team’s simulations show that it should be possible for Earth-based gravitational-wave detectors such as LIGO to detect the ramp-up even though the actual deformation effect would be tiny – around 10⁻¹⁶ cm “scaled to the size of a LIGO detector arm”, Richardson says.
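
In essence, matched filtering means cross-correlating the noisy detector output with a template of the expected waveform and looking for a peak in the correlation. The snippet below is a minimal, generic illustration of that idea, using a toy ramp-shaped template injected into Gaussian noise; it is not the collaboration’s actual pipeline or template bank.

```python
# Minimal matched-filter sketch: cross-correlate noisy data with a waveform template
# and look for a peak. Toy ramp-shaped template and Gaussian noise only.
import numpy as np

rng = np.random.default_rng(42)

# Toy template: a smooth ramp that settles at a constant offset (the "memory")
template = np.concatenate([np.zeros(64), np.linspace(0.0, 1.0, 128), np.ones(64)])

# Synthetic detector record: Gaussian noise plus one injected copy of the template
n, injected_at, amplitude = 4096, 1500, 2.0
data = rng.normal(0.0, 1.0, n)
data[injected_at:injected_at + template.size] += amplitude * template

# Matched filter: correlate the data against a mean-subtracted, unit-norm template
kernel = template - template.mean()
kernel /= np.linalg.norm(kernel)
correlation = np.correlate(data, kernel, mode="valid")

print(f"peak found at index {np.argmax(correlation)} (signal injected at {injected_at})")
```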

The snag is that for the ramp-up to be detectable, the simulations suggest the supernova would need to be close – probably within 10 kiloparsecs (32,615 light-years) of Earth. That would place it within our own galaxy, and galactic supernovae are not exactly common. The last to be observed in real time was spotted by Johannes Kepler in 1604; though there have been others since, we’ve only identified their remnants after the fact.

Going to the Moon

Mezzacappa and colleagues are optimistic that multimessenger astronomy techniques such as gravitational-wave and neutrino detectors will help astronomers identify future Milky Way supernovae as they happen, even if cosmic dust (for example) hides their light for optical observers.

Gill, however, prefers to look towards the future. In a paper under revision at Astrophysical Journal Letters, and currently available as a preprint, she cites two proposals for detectors on the Moon that could transform gravitational-wave physics and extend the range at which gravitational memory signals can be detected.

The first, called the Lunar Gravitational Wave Antenna, would use inertial sensors to detect the Moon shaking as gravitational waves ripple through it. The other, known as the Laser Interferometer Lunar Antenna, would be like a giant, triangular version of LIGO with arms spanning tens of kilometres open to space. Both are distinct from the European Space Agency’s Laser Interferometer Space Antenna, which is due for launch in the 2030s, but is optimized to detect gravitational waves from supermassive black holes rather than supernovae.

“Lunar-based detectors or future space-based observatories beyond LISA would overcome the terrestrial limitations,” Gill argues. Such detectors, she adds, could register a memory effect from supernovae tens or even hundreds of millions of light-years away. This huge volume of space would encompass many galaxies, making the detection of gravitational waves from core-collapse supernovae almost routine.

The memory of something far away

In response, Richardson points out that his team’s filtering method could also work at longer ranges – up to approximately 10 million light-years, encompassing our own Local Group of galaxies and several others – in certain circumstances. If a massive star is spinning very quickly, or it has an exceptionally strong magnetic field, its eventual supernova explosion will be highly collimated and almost jet-like, boosting the amplitude of the memory effect. “If the amplitude is significantly larger, then the detection distance is also significantly larger,” he says.

Whatever technologies are involved, both groups agree that detecting gravitational-wave memory is important. It might, for example, tell us whether a supernova has left behind a neutron star or a black hole, which would be valuable because the reasons one forms and not the other remain a source of debate among astrophysicists.

“By complementing other multimessenger observations in the electromagnetic spectrum and neutrinos, gravitational-wave memory detection would provide unparalleled insights into the complex interplay of forces in core-collapse supernovae,” Gill says.

Richardson agrees that a detection would be huge and hopes that his work and that of others “motivates new investigations into the low-frequency region of gravitational-wave astronomy”.

The post Wrinkles in space–time could remember the secrets of exploding stars appeared first on Physics World.

‘Why do we have to learn this?’ A physics educator’s response to every teacher’s least favourite question

20 janvier 2025 à 14:52

Several years ago I was sitting at the back of a classroom supporting a newly qualified science teacher. The lesson was going well, a pretty standard class on Hooke’s law, when a student leaned over to me and asked “Why are we doing this? What’s the point?”.

Having taught myself, this was a question I had been asked many times before. I suspect that when I was a teacher, I went for the knee-jerk “it’s useful if you want to be an engineer” response, or something similar. This isn’t a very satisfying answer, but I never really had the time to formulate a real justification for studying Hooke’s law, or physics in general for that matter.

Who is the physics curriculum designed for? Should it be designed for the small number of students who will pursue the subject, or subjects allied to it, at the post-16 and post-18 level? Or should we be reflecting on the needs of the overwhelming majority who will never use most of the curriculum content again? Only about 10% of students pursue physics or physics-rich subjects post-16 in England, and at degree level, only around 4000 students graduate with physics degrees in the UK each year.

One argument often levelled at me is that learning this is “useful”, to which I retort – in a similar vein to the student from the first paragraph – “In what way?” In the 40 years or so since first learning Hooke’s law, I can’t remember ever explicitly using it in my everyday life, despite being a physicist. Whenever I give a talk on this subject, someone often pipes up with a tenuous example, but I suspect they are in the minority. An audience member once said they consider the elastic behaviour of wire when hanging pictures, but I suspect that many thousands of pictures have been successfully hung with no recourse to F = –kx.

Hooke’s law is incredibly important in engineering but, again, most students will not become engineers or rely on a knowledge of the properties of springs, unless they get themselves a job in a mattress factory.

From a personal perspective, Hooke’s law fascinates me. I find it remarkable that we can see the macroscopic properties of materials being governed by microscopic interactions and that this can be expressed in a simple linear form. There is no utilitarianism in this, simply awe, wonder and aesthetics. I would always share this “joy of physics” with my students, and it was incredibly rewarding when this was reciprocated. But for many, if not most, my personal perspective was largely irrelevant, and they knew that the curriculum content would not directly support them in their future careers.

At this point, I should declare my position – I don’t think we should take Hooke’s law, or physics, off the curriculum, but my reason is not the one often given to students.

A series of lessons on Hooke’s law is likely to include: experimental design; setting up and using equipment; collecting numerical data using a range of devices; recording and presenting data, including graphs; interpreting data; modelling data and testing theories; devising evidence-based explanations; communicating ideas; evaluating procedures; critically appraising data; collaborating with others; and working safely.

Science education must be about preparing young people to be active and critical members of a democracy, equipped with the skills and confidence to engage with complex arguments that will shape their lives. For most students, this is the most valuable lesson they will take away from Hooke’s law. We should encourage students to find our subject fascinating and relevant, and in doing so make them receptive to the acquisition of scientific knowledge throughout their lives.

At a time when pressures on the education system are greater than ever, we must be able to articulate and justify our position within a crowded curriculum. I don’t believe that students should simply accept that they should learn something because it is on a specification. But they do deserve a coherent reason that relates to their lives and their careers. As science educators, we owe it to our students to have an authentic justification for what we are asking them to do. As physicists, even those who don’t have to field tricky questions from bored teenagers, I think it’s worthwhile for all of us to ask ourselves how we would answer the question “What is the point of this?”.

The post ‘Why do we have to learn this?’ A physics educator’s response to every teacher’s least favourite question appeared first on Physics World.

New Journal of Physics seeks to expand its horizons

20 janvier 2025 à 12:35

The New Journal of Physics (NJP) has long been a flagship journal for IOP Publishing. The journal published its first volume in 1998 and was an early pioneer of open-access publishing. Co-owned by the Institute of Physics, which publishes Physics World, and the Deutsche Physikalische Gesellschaft (DPG), after some 25 years the journal is now seeking to establish itself further as a journal that represents the entire range of physics disciplines.

A journal for all physics: the New Journal of Physics publishes research in a broad range of disciplines including quantum optics and quantum information, condensed-matter physics as well as high-energy physics. (Courtesy: IOP Publishing)

NJP publishes articles in pure, applied, theoretical and experimental research, as well as interdisciplinary topics. Research areas include optics, condensed-matter physics, quantum science and statistical physics, and the journal publishes a range of article types such as papers, topical reviews, fast-track communications, perspectives and special issues.

While NJP has been seen as a leading journal for quantum information, optics and condensed-matter physics, the journal is currently undergoing a significant transformation to broaden its scope to attract a wider array of physics disciplines. This shift aims to enhance the journal’s relevance, foster a broader audience and maintain NJP’s position as a leading publication in the global scientific community.

While quantum physics in general, and quantum optics and quantum information in particular, will remain crucial areas for the journal, researchers in other fields such as gravitational-wave research, condensed- and soft-matter physics, polymer physics, theoretical chemistry, statistical and mathematical physics are being encouraged to submit their articles to the journal. “It’s a reminder to the community that NJP is a journal for all kinds of physics and not just a select few,” says quantum physicist Andreas Buchleitner from the Albert-Ludwigs-Universität Freiburg who is NJP’s editor-in-chief.

Historically, NJP has had a strong focus on theoretical physics, particularly in quantum information. Another significant aspect of NJP’s new strategy is therefore the inclusion of more experimental research. Attracting high-quality experimental papers will balance the journal’s content, enhance its reputation as a comprehensive physics publication and allow it to compete with other leading physics journals. Part of this shift will also involve building a reliable and loyal group of authors who regularly publish their best work in NJP.

A broader scope

To aid this move, NJP has recently grown its editorial board to add expertise in subjects such as gravitational-wave physics. This diversity of capabilities is crucial to evaluate submissions from different areas of physics and maintain high standards of quality during the peer-review process. That point is particularly relevant for Buchleitner, who sees the expansion of the editorial board as helping to improve the journal’s handling of submissions to ensure that authors feel their work is being evaluated fairly and by knowledgeable and engaged individuals. “Increasing the editorial board was quite an important concept in terms of helping the journal expand,” adds Buchleitner. “What is important to me is that scientists who contact the journal feel that they are talking to people and not to artificial intelligence substitutes.”

While citation metrics such as impact factors are often debated in terms of their scientific value, they remain essential for a journal’s visibility and reputation. In the competitive landscape of scientific publishing, they can set a journal apart from its competitors. With that in mind, NJP, which has an impact factor of 2.8, is also focusing on improving its citation indices to compete with top-tier journals.

Yet the focus is not just on the impact factor but also on other metrics that reflect efficient and constructive handling of submissions and encourage researchers to publish with the journal again. To set NJP apart from competitors, the median time to first decision before peer review, for example, is just six days, while the median time to first decision after peer review is 50 days.

Society benefits

While NJP pioneered the open-access model of scientific publishing, that position is no longer unique given the huge increase in open-access journals over the past decade. Yet the publishing model continues to be an important aspect of the journal’s identity to ensure that the research it publishes is freely available to all. Another crucial factor to attract authors and set it apart from commercial entities is that NJP is published by learned societies – the IOP and DPG.

NJP has often been thought of as a “European journal”. Indeed, NJP’s role is significant in the context of the UK leaving the European Union, in that it serves as a bridge between the UK and mainland European research communities. “That’s one of the reasons why I like the journal,” says Buchleitner, who adds that with a wider scope NJP will not only publish the best research from around the world but also strengthen its identity as a leading European journal.

The post New Journal of Physics seeks to expand its horizons appeared first on Physics World.

World’s darkest skies threatened by industrial megaproject in Chile, astronomers warn

17 janvier 2025 à 12:59

The darkest, clearest skies anywhere in the world could suffer “irreparable damage” by a proposed industrial megaproject. That is the warning from the European Southern Observatory (ESO) in response to plans by AES Andes, a subsidiary of the US power company AES Corporation, to develop a green hydrogen project just a few kilometres from ESO’s flagship Paranal Observatory in Chile’s Atacama Desert.

The Atacama Desert is considered one of the most important astronomical research sites in the world due to its stable atmosphere and lack of light pollution. Sitting 2635 m above sea level, on Cerro Paranal, the Paranal Observatory is home to key astronomical instruments including the Very Large Telescope. The Extremely Large Telescope (ELT) – the largest visible and infrared light telescope in the world – is also being constructed at the observatory on Cerro Armazones with first light expected in 2028.

AES Andes submitted an Environmental Impact Assessment in Chile for an industrial-scale green hydrogen project at the end of December. The complex is expected to cover more than 3000 hectares – similar in size to 1200 football pitches. According to AES, the project is in the early stages of development, but could include green hydrogen and ammonia production plants, solar and wind farms as well as battery storage facilities.

ESO is calling for the development to be relocated to preserve “one of Earth’s last truly pristine dark skies” and “safeguard the future” of astronomy. “The proximity of the AES Andes industrial megaproject to Paranal poses a critical risk to the most pristine night skies on the planet,” says ESO director general Xavier Barcons. “Dust emissions during construction, increased atmospheric turbulence, and especially light pollution will irreparably impact the capabilities for astronomical observation.”

In a statement sent to Physics World, an AES spokesperson says they “understand there are concerns raised by ESO regarding the development of renewable energy projects in the area”. The spokesperson adds that the project would be in an area “designated for renewable energy development”. They also claim that the company is “dedicated to complying with all regulatory guidelines and rules” and “supporting local economic development while maintaining the highest environmental and safety standards”.

According to the statement, the proposal “incorporates the highest standards in lighting” to comply with Chilean regulatory requirements designed “to prevent light pollution, and protect the astronomical quality of the night skies”.

Yet Romano Corradi, director of the Gran Telescopio Canarias, which is located at the Roque de los Muchachos Observatory, La Palma, Spain, notes that it is “obvious” that the light pollution from such a large complex will negatively affect observations. “There are not many places left in the world with the dark and other climatic conditions necessary to do cutting-edge science in the field of observational astrophysics,” adds Corradi. “Light pollution is a global effect and it is therefore essential to protect sites as important as Paranal.”

The post World’s darkest skies threatened by industrial megaproject in Chile, astronomers warn appeared first on Physics World.

Could bubble-like microrobots be the drug delivery vehicles of the future?

17 janvier 2025 à 10:30

Biomedical microrobots could revolutionize future cancer treatments, reliably delivering targeted doses of toxic cancer-fighting drugs to destroy malignant tumours while sparing healthy bodily tissues. Development of such drug-delivering microrobots is at the forefront of biomedical engineering research. However, there are many challenges to overcome before this minimally invasive technology moves from research lab to clinical use.

Microrobots must be capable of rapid, steady and reliable propulsion through various biological materials, while generating enhanced image contrast to enable visualization through thick body tissue. They require an accurate guidance system to precisely target diseased tissue. They also need to support sizable payloads of drugs, maintain their structure long enough to release this cargo, and then efficiently biodegrade – all without causing any harm to the body.

Aiming to meet this tall order, researchers at the California Institute of Technology (Caltech) and the University of Southern California have designed a hydrogel-based, image-guided, bioresorbable acoustic microrobot (BAM) with these characteristics and capabilities. Reporting their findings in Science Robotics, they demonstrated that the BAMs could successfully deliver drugs that decreased the size of bladder tumours in mice.

Microrobot design

The team, led by Caltech’s Wei Gao, fabricated the hydrogel-based BAMs using high-resolution two-photon polymerization. The microrobots are hollow spheres with an outer diameter of 30 µm and an 18 µm-diameter internal cavity to trap a tiny air bubble inside.

The BAMs have a hydrophobic inner surface to prolong microbubble retention within biofluids and a hydrophilic outer layer that prevents microrobot clustering and promotes degradation. Magnetic nanoparticles and therapeutic agents integrated into the hydrogel matrix enable wireless magnetic steering and drug delivery, respectively.

The entrapped microbubbles are key as they provide propulsion for the BAMs. When stimulated by focused ultrasound (FUS), the bubbles oscillate at their resonant frequencies. This vibration creates microstreaming vortices around the BAM, generating a propulsive force in the opposite direction of the flow. The microbubbles inside the BAMs also act as ultrasound contrast agents, enabling real-time, deep-tissue visualization.
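As a rough plausibility check – not a calculation from the paper – the resonance frequency of the trapped air bubble can be estimated with the textbook Minnaert formula. The 9 µm radius below is simply half of the 18 µm cavity diameter quoted above, and the estimate ignores surface tension and the surrounding hydrogel shell.

```python
import math

# Back-of-envelope Minnaert resonance estimate for the trapped air bubble.
# Illustrative only: ignores surface tension and the hydrogel shell.
gamma = 1.4        # adiabatic index of air
p0 = 1.013e5       # ambient pressure (Pa)
rho = 1000.0       # density of the surrounding fluid (kg/m^3)
radius = 9e-6      # bubble radius (m): half the 18 µm cavity diameter

f0 = math.sqrt(3 * gamma * p0 / rho) / (2 * math.pi * radius)
print(f"Minnaert resonance ~ {f0 / 1e3:.0f} kHz")   # roughly 365 kHz
```

That lands in the same few-hundred-kilohertz band as the 480 kHz focused-ultrasound drive mentioned later in the article, which is why a microbubble of this size responds so strongly to the applied field.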

The researchers designed the microrobots with two cylinder-like openings, a geometry they found achieves faster propulsion than spheres with a single or triple opening. They attribute this to propulsive forces that run parallel to the sphere’s boundary, which improve both the speed and the stability of movement when the BAMs are activated by FUS.

Numerical simulations: flow patterns generated by a BAM vibrating at its resonant frequency. The microrobot’s two openings are clearly visible. Scale bar, 15 µm. (Courtesy: Hong Han)

They also discovered that offsetting the centre of the microbubble from the centre of the sphere generated propulsion speeds more than twice those achieved by BAMs with a symmetric design.

To perform simultaneous imaging of BAM location and acoustic propulsion within soft tissue, the team employed a dual-probe design. An ultrasound imaging probe enabled real-time imaging of the bubbles, while the acoustic field generated by a FUS probe (at an excitation frequency of 480 kHz and an applied acoustic pressure of 626 kPa peak-to-peak) provided effective propulsion.

In vitro and in vivo testing 

The team performed real-time imaging of the propulsion of BAMs in vitro, using an agarose chamber to simulate an artificial bladder. When exposed to an ultrasound field generated by the FUS probe, the BAMs demonstrated highly efficient motion, as observed in the ultrasound imaging scans. The propulsion direction of BAMs could be precisely controlled by an external magnetic field.

The researchers also conducted in vivo testing, using laboratory mice with bladder cancer and the anti-cancer drug 5-fluorouracil (5-FU). They treated groups of mice with either phosphate-buffered saline, free drug, passive BAMs or active (acoustically actuated and magnetically guided) BAMs, at three-day intervals over four sessions. They then monitored the tumour progression for 21 days, using bioluminescence signals emitted by cancer cells.

The active BAM group exhibited a 93% decrease in bioluminescence by the 14th day, indicating large tumour shrinkage. Histological examination of excised bladders revealed that mice receiving this treatment had considerably reduced tumour sizes compared with the other groups.

“Embedding the anticancer drug 5-FU into the hydrogel matrix of BAMs substantially improved the therapeutic efficiency compared with 5-FU alone,” the authors write. “These BAMs used a controlled-release mechanism that prolonged the bioavailability of the loaded drug, leading to sustained therapeutic activity and better outcomes.”

Mice treated with active BAMs experienced no weight changes, and no adverse effects on the heart, liver, spleen, lung or kidney compared with the control group. The researchers also evaluated in vivo degradability by measuring BAM bioreabsorption rates following subcutaneous implantation into both flanks of a mouse. Within six weeks, they observed complete breakdown of the microrobots.

Gao tells Physics World that the team has subsequently expanded the scope of its work to optimize the design and performance of the microbubble robots for broader biomedical applications.

“We are also investigating the use of advanced surface engineering techniques to further enhance targeting efficiency and drug loading capacity,” he says. “Planned follow-up studies include preclinical trials to evaluate the therapeutic potential of these robots in other tumour models, as well as exploring their application in non-cancerous diseases requiring precise drug delivery and tissue penetration.”

The post Could bubble-like microrobots be the drug delivery vehicles of the future? appeared first on Physics World.

Sustainability spotlight: PFAS unveiled

17 janvier 2025 à 10:06

So-called “forever chemicals”, or per- and polyfluoroalkyl substances (PFAS), are widely used in consumer, commercial and industrial products, and have subsequently made their way into humans, animals, water, air and soil. Despite this ubiquity, there are still many unknowns regarding the potential human health and environmental risks that PFAS pose.

Join us for an in-depth exploration of PFAS with four leading experts who will shed light on the scientific advances and future challenges in this rapidly evolving research area.

Our panel will guide you through a discussion of PFAS classification and sources, the journey of PFAS through ecosystems, strategies for PFAS risk mitigation and remediation, and advances in the latest biotechnological innovations to address their effects.

Sponsored by Sustainability Science and Technology, a new journal from IOP Publishing that provides a platform for researchers, policymakers, and industry professionals to publish their research on current and emerging sustainability challenges and solutions.

Left to right: Jonas Baltrusaitis, Linda S. Lee, Clinton Williams, Sara Lupton, Jude Maul

Jonas Baltrusaitis, inaugural editor-in-chief of Sustainability Science and Technology, has co-authored more than 300 research publications on innovative materials. His work includes nutrient recovery from waste, their formulation and delivery, and renewable energy-assisted catalysis for energy carrier and commodity chemical synthesis and transformations.

Linda S Lee is a distinguished professor at Purdue University with joint appointments in the Colleges of Agriculture (COA) and Engineering, program head of the Ecological Sciences & Engineering Interdisciplinary Graduate Program and COA assistant dean of graduate education and research. She joined Purdue in 1993 with degrees in chemistry (BS), environmental engineering (MS) and soil chemistry/contaminant hydrology (PhD) from the University of Florida. Her research includes chemical fate, analytical tools, waste reuse, bioaccumulation, and contaminant remediation and management strategies with PFAS challenges driving much of her research for the last two decades. Her research is supported by a diverse funding portfolio. She has published more than 150 papers with most in top-tier environmental journals.

Clinton Williams is the research leader of Plant and Irrigation and Water Quality Research units at US Arid Land Agricultural Research Center. He has been actively engaged in environmental research focusing on water quality and quantity for more than 20 years. Clinton looks for ways to increase water supplies through the safe use of reclaimed waters. His current research is related to the environmental and human health impacts of biologically active contaminants (e.g. PFAS, pharmaceuticals, hormones and trace organics) found in reclaimed municipal wastewater and the associated impacts on soil, biota, and natural waters in contact with wastewater. His research is also looking for ways to characterize the environmental loading patterns of these compounds while finding low-cost treatment alternatives to reduce their environmental concentration using byproducts capable of removing the compounds from water supplies.

Sara Lupton has been a research chemist with the Food Animal Metabolism Research Unit at the Edward T Schafer Agricultural Research Center in Fargo, ND within the USDA-Agricultural Research Service since 2010. Sara’s background is in environmental analytical chemistry. She is the ARS lead scientist for the USDA’s Dioxin Survey and other research includes the fate of animal drugs and environmental contaminants in food animals and investigation of environmental contaminant sources (feed, water, housing, etc.) that contribute to chemical residue levels in food animals. Sara has conducted research on bioavailability, accumulation, distribution, excretion, and remediation of PFAS compounds in food animals for more than 10 years.

Jude Maul received a master’s degree in plant biochemistry from the University of Kentucky and a PhD in horticulture and biogeochemistry from Cornell University in 2008. Since then he has been with the USDA-ARS as a research ecologist in the Sustainable Agriculture System Laboratory. Jude’s research focuses on molecular ecology at the plant/soil/water interface in the context of plant health, nutrient acquisition and productivity. Taking a systems approach to agroecosystem research, Jude leads the USDA-ARS-LTAR Soils Working Group, which is creating a national soils data repository; his research results contribute to national soil-health management recommendations.

About this journal

Sustainability Science and Technology is an interdisciplinary, open access journal dedicated to advances in science, technology, and engineering that can contribute to a more sustainable planet. It focuses on breakthroughs in all science and engineering disciplines that address one or more of the three sustainability pillars: environmental, social and/or economic.
Editor-in-chief: Jonas Baltrusaitis, Lehigh University, USA


The post Sustainability spotlight: PFAS unveiled appeared first on Physics World.

String theory may be inevitable as a unified theory of physics, calculations suggest

16 janvier 2025 à 17:33

Striking evidence that string theory could be the sole viable “theory of everything” has emerged in a new theoretical study of particle scattering that was done by a trio of physicists in the US. By unifying all fundamental forces of nature, including gravity, string theory could provide the long-sought quantum description of gravity that has eluded scientists for decades.

The research was done by Caltech’s Clifford Cheung and Aaron Hillman along with Grant Remmen at New York University. They have delved into the intricate mathematics of scattering amplitudes, which are quantities that encapsulate the probabilities of particles interacting when they collide.

Through a novel application of the bootstrap approach, the trio demonstrated that imposing general principles of quantum mechanics uniquely determines the scattering amplitudes of particles at the smallest scales. Remarkably, the results match the string scattering amplitudes derived in earlier works. This suggests that string theory may indeed be an inevitable description of the universe, even as direct experimental verification remains out of reach.

“A bootstrap is a mathematical construction in which insight into the physical properties of a system can be obtained without having to know its underlying fundamental dynamics,” explains Remmen. “Instead, the bootstrap uses properties like symmetries or other mathematical criteria to construct the physics from the bottom up, ‘effectively pulling itself up by its bootstraps’. In our study, we bootstrapped scattering amplitudes, which describe the quantum probabilities for the interactions of particles or strings.”

Why strings?

String theory posits that the elementary building blocks of the universe are not point-like particles but instead tiny, vibrating strings. The different vibrational modes of these strings give rise to the various particles observed in nature, such as electrons and quarks. This elegant framework resolves many of the mathematical inconsistencies that plague attempts to formulate a quantum description of gravity. Moreover, it unifies gravity with the other fundamental forces: electromagnetic, weak, and strong interactions.

However, a major hurdle remains. The characteristic size of these strings is estimated to be around 10⁻³⁵ m, which is roughly 15 orders of magnitude smaller than the resolution of today’s particle accelerators, including the Large Hadron Collider. This makes experimental verification of string theory extraordinarily challenging, if not impossible, for the foreseeable future.

Faced with the experimental inaccessibility of strings, physicists have turned to theoretical methods like the bootstrap to test whether string theory aligns with fundamental principles. By focusing on the mathematical consistency of scattering amplitudes, the researchers imposed constraints based on basic quantum mechanical requirements on the scattering amplitudes such as locality and unitarity.

“Locality means that forces take time to propagate: particles and fields in one place don’t instantaneously affect another location, since that would violate the rules of cause-and-effect,” says Remmen. “Unitarity is conservation of probability in quantum mechanics: the probability for all possible outcomes must always add up to 100%, and all probabilities are positive. This basic requirement also constrains scattering amplitudes in important ways.”

In addition to these principles, the team introduced further general conditions, such as the existence of an infinite spectrum of fundamental particles and specific high-energy behaviour of the amplitudes. These criteria have long been considered essential for any theory that incorporates quantum gravity.

Unique solution

Their result is a unique solution to the bootstrap equations, which turned out to be the Veneziano amplitude – the formula written down by Gabriele Veneziano in 1968 to describe hadron scattering and later recognized as the scattering amplitude of open strings. This discovery strongly indicates that string theory meets the most essential criteria for a quantum theory of gravity. However, the definitive answer to whether string theory is truly the “theory of everything” must ultimately come from experimental evidence.
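For reference – and not reproduced from the paper itself – the textbook form of the Veneziano amplitude for the scattering of two particles into two can be written in terms of gamma functions and a linear Regge trajectory:

```latex
A(s,t) \;=\; \frac{\Gamma\!\bigl(-\alpha(s)\bigr)\,\Gamma\!\bigl(-\alpha(t)\bigr)}
                  {\Gamma\!\bigl(-\alpha(s)-\alpha(t)\bigr)},
\qquad \alpha(x) \;=\; \alpha(0) + \alpha' x .
```

Here s and t are the Mandelstam invariants of the collision and the slope α′ is set by the string tension; the poles of the gamma functions encode the infinite tower of excitations referred to above.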

Cheung explains, “Our work asks: what is the precise math problem whose solution is the scattering amplitude of strings? And is it the unique solution?”. He adds, “This work can’t verify the validity of string theory, which like all questions about nature is a question for experiment to resolve. But it can help illuminate whether the hypothesis that the world is described by vibrating strings is actually logically equivalent to a smaller, perhaps more conservative set of bottom up assumptions that define this math problem.”

The trio’s study opens up several avenues for further exploration. One immediate goal for the researchers is to generalize their analysis to more complex scenarios. For instance, the current work focuses on the scattering of two particles into two others. Future studies will aim to extend the bootstrap approach to processes involving multiple incoming and outgoing particles.

Another direction involves incorporating closed strings, which are loops that are distinct from the open strings analysed in this study. Closed strings are particularly important in string theory because they naturally describe gravitons, the hypothetical particles responsible for mediating gravity. While closed string amplitudes are more mathematically intricate, demonstrating that they too arise uniquely from the bootstrap equations would further bolster the case for string theory.

The research is described in Physical Review Letters.

The post String theory may be inevitable as a unified theory of physics, calculations suggest appeared first on Physics World.

Photonics West shines a light on optical innovation

15 janvier 2025 à 16:00

SPIE Photonics West, the world’s largest photonics technologies event, takes place in San Francisco, California, from 25 to 30 January. Showcasing cutting-edge research in lasers, biomedical optics, biophotonics, quantum technologies, optoelectronics and more, Photonics West features leaders in the field discussing the industry’s challenges and breakthroughs, and sharing their research and visions of the future.

As well as 100 technical conferences with over 5000 presentations, the event brings together several world-class exhibitions, kicking off on 25 January with the BiOS Expo, the world’s largest biomedical optics and biophotonics exhibition.

The main Photonics West Exhibition starts on 28 January. Hosting more than 1200 companies, the event highlights the latest developments in laser technologies, optoelectronics, photonic components, materials and devices, and system support. The newest and fastest growing expo, Quantum West, showcases photonics as an enabling technology for a quantum future. Finally, the co-located AR | VR | MR exhibition features the latest extended reality hardware and systems. Here are some of the innovative products on show at this year’s event.

HydraHarp 500: a new era in time-correlated single-photon counting

Photonics West sees PicoQuant introduce its newest generation of event timer and time-correlated single-photon counting (TCSPC) unit – the HydraHarp 500. Setting a new standard in speed, precision and flexibility, the TCSPC unit is freely scalable with up to 16 independent channels and a common sync channel, which can also serve as an additional detection channel if no sync is required.

Redefining what’s possible: PicoQuant presents the HydraHarp 500, a next-generation TCSPC unit that maximizes precision, flexibility and efficiency. (Courtesy: PicoQuant)

At the core of the HydraHarp 500 is its outstanding timing precision and accuracy, enabling precise photon timing measurements at exceptionally high data rates, even in demanding applications.

In addition to the scalable channel configuration, the HydraHarp 500 offers flexible trigger options to support a wide range of detectors, from single-photon avalanche diodes to superconducting nanowire single-photon detectors. Seamless integration is ensured through versatile interfaces such as USB 3.0 or an external FPGA interface for data transfer, while White Rabbit synchronization allows precise cross-device coordination for distributed setups.

The HydraHarp 500 is engineered for high-throughput applications, making it ideal for rapid, large-volume data acquisition. It offers 16+1 fully independent channels for true simultaneous multi-channel data recording and efficient data transfer via USB or the dedicated FPGA interface. Additionally, the HydraHarp 500 boasts industry-leading, extremely low dead-time per channel and no dead-time across channels, ensuring comprehensive datasets for precise statistical analysis.
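For readers less familiar with the technique, the principle behind any TCSPC measurement is simple: record each photon’s arrival time relative to the most recent sync pulse and histogram those delays. The sketch below illustrates that idea with synthetic data only – it has nothing to do with PicoQuant’s actual software interface, and the 20 MHz sync rate, 3 ns lifetime and 50 ps bin width are arbitrary illustrative values.

```python
import numpy as np

# Conceptual TCSPC sketch with synthetic data (not PicoQuant's API).
rng = np.random.default_rng(1)

sync_period = 50e-9          # 20 MHz sync -> 50 ns between pulses
lifetime = 3e-9              # toy fluorescence lifetime (3 ns)
n_photons = 100_000

# Each detected photon arrives an exponentially distributed delay after
# its sync pulse; the modulo mimics photons spilling into the next cycle.
delays = rng.exponential(lifetime, n_photons) % sync_period

bins = np.arange(0.0, sync_period, 50e-12)   # 50 ps histogram bins
hist, edges = np.histogram(delays, bins=bins)

print(f"histogram bins: {hist.size}, counts in first bin: {hist[0]}")
```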

Step into the future of photonics and quantum research with the HydraHarp 500. Whether it’s achieving precise photon correlation measurements, ensuring reproducible results or integrating advanced setups, the HydraHarp 500 redefines what’s possible – offering precision, flexibility and efficiency combined with reliability and seamless integration to achieve breakthrough results.

For more information, visit www.picoquant.com or contact us at info@picoquant.com.

  • Meet PicoQuant at BiOS booth #8511 and Photonics West booth #3511.

SmarAct: shaping the future of precision

SmarAct is set to make waves at the upcoming SPIE Photonics West, the world’s leading exhibition for photonics, biomedical optics and laser technologies, and the parallel BiOS trade fair. SmarAct will showcase a portfolio of cutting-edge solutions designed to redefine precision and performance across a wide range of applications.

At Photonics West, SmarAct will unveil its latest innovations, as well as its well-established and appreciated iris diaphragms and optomechanical systems. All of the highlighted technologies exemplify SmarAct’s commitment to enabling superior control in optical setups, a critical requirement for research and industrial environments.

Attendees can also experience the unparalleled capabilities of electromagnetic positioners and SmarPod systems. With their hexapod-like design, these systems offer nanometre-scale precision and flexibility, making them indispensable tools for complex alignment tasks in photonics and beyond.

Ensuring optimal performance: SmarAct’s advanced positioning systems provide the precision and stability required for the alignment and microassembly of intricate optical components. (Courtesy: SmarAct)

One major highlight is SmarAct’s debut of a 3D pick-and-place system designed for handling optical fibres. This state-of-the-art solution integrates precision and flexibility, offering a glimpse into the future of fibre alignment and assembly. Complementing this is a sophisticated gantry system for microassembly of optical components. Designed to handle large travel ranges with remarkable accuracy, this system meets the growing demand for precision in the assembly of intricate optical technologies. It combines the best of SmarAct’s drive technologies, such as fast (up to 1 m/s) and durable electromagnetic positioners and scanner stages based on piezo-driven mechanical flexures with maximum scanning speed and minimum scanning error.

Simultaneously, at the BiOS trade fair SmarAct will spotlight its new electromagnetic microscopy stage, a breakthrough specifically tailored for life sciences applications. This advanced stage delivers exceptional stability and adaptability, enabling researchers to push the boundaries of imaging and experimental precision. This innovation underscores SmarAct’s dedication to addressing the unique challenges faced by the biomedical and life sciences sectors, as well as bioprinting and tissue engineering companies.

Throughout the event, SmarAct’s experts will demonstrate these solutions in action, offering visitors an interactive and hands-on understanding of how these technologies can meet their specific needs. Visit SmarAct’s booths to engage with experts and discover how SmarAct solutions can empower your projects.

Whether you’re advancing research in semiconductors, developing next-generation photonic devices or pioneering breakthroughs in life sciences, SmarAct’s solutions are tailored to help you achieve your goals with unmatched precision and reliability.

Precision positioning systems enable diverse applications 

For 25 years Mad City Labs has provided precision instrumentation for research and industry – including nanopositioning systems, micropositioners, microscope stages and platforms, single-molecule microscopes, atomic-force microscopes (AFMs) and customized solutions.

The company’s newest micropositioning system – the MMP-UHV50 – is a modular, linear micropositioner designed for ultrahigh-vacuum (UHV) environments. Constructed entirely from UHV-compatible materials and carefully designed to eliminate sources of virtual leaks, the MMP-UHV50 offers 50 mm travel range with 190 nm step size and a maximum vertical payload of 2 kg.

UHV compatible: the new MMP-UHV50 micropositioning system is designed for ultrahigh-vacuum environments. (Courtesy: Mad City Labs)

Uniquely, the MMP-UHV50 incorporates a zero-power feature when not in motion, to minimize heating and drift. Safety features include limit switches and overheat protection – critical features when operating in vacuum environments. The system includes the Micro-Drive-UHV digital electronic controller, supplied with LabVIEW-based software and compatible with user-written software via the supplied DLL file (for example, Python, Matlab or C++).

Other products from Mad City Labs include piezo nanopositioners featuring the company’s proprietary PicoQ sensors, which provide ultralow noise and excellent stability to yield sub-nanometre resolution. These high-performance sensors enable motion control down to the single picometre level.

For scanning probe microscopy, Mad City Labs’ nanopositioning systems provide true decoupled motion with virtually undetectable out-of-plane movement, while their precision and stability yield high positioning performance and control. The company offers both an optical deflection AFM – the MadAFM, a multimodal sample-scanning AFM in a compact, tabletop design built for simple installation – and resonant probe AFM models.

The resonant probe products include the company’s AFM controllers, MadPLL and QS-PLL, which enable users to build their own flexibly configured AFMs using Mad City Labs’ micro- and nanopositioners.  All AFM instruments are ideal for material characterization, but the resonant probe AFMs are uniquely suitable for quantum sensing and nano-magnetometry applications.

Mad City Labs also offers standalone micropositioning products, including optical microscope stages, compact positioners for photonics and the Mad-Deck XYZ stage platform, all of which employ proprietary intelligent control to optimize stability and precision. They are also compatible with the high-resolution nanopositioning systems, enabling motion control across micro-to-picometre length scales.

Finally, for high-end microscopy applications, the RM21 single-molecule microscope, featuring the unique MicroMirror TIRF system, offers multi-colour total internal-reflection fluorescence microscopy with an excellent signal-to-noise ratio and efficient data collection, along with an array of options to support multiple single-molecule techniques.

Our product portfolio, coupled with our expertise in custom design and manufacturing, ensures that we are able to provide solutions for nanoscale motion for diverse applications such as astronomy, photonics, metrology and quantum sensing.

  • Learn more at BiOS booth #8525 and Photonics West booth #3525.


The post Photonics West shines a light on optical innovation appeared first on Physics World.

Trump nominates AI experts for key science positions

15 janvier 2025 à 12:30

Incoming US President Donald Trump has selected Silicon Valley executive Michael Kratsios as director of the Office of Science and Technology Policy (OSTP). Kratsios will also serve as Trump’s science advisor, a position that, unlike the OSTP directorship, does not require approval by the US Senate. Meanwhile, computer scientist Lynne Parker from the University of Tennessee, Knoxville, has been appointed to a new position – executive director of the President’s Council of Advisors on Science and Technology. Parker, who is a former member of OSTP, will also act as counsellor to the OSTP director.

Kratsios, with a BA in politics from Princeton University, was previously chief of staff to Silicon Valley venture capitalist Peter Thiel before becoming the White House’s chief technology officer in 2017 at the start of Trump’s first stint as US president. In addition to his technology remit, Kratsios was effectively Trump’s science advisor until meteorologist Kelvin Droegemeier took that position in January 2019. Kratsios then became the Department of Defense’s acting undersecretary of research and engineering. After the 2020 presidential election, Kratsios left government to run the San Francisco-based company Scale AI.

Parker has an MS from the University of Tennessee and a PhD from the Massachusetts Institute of Technology, both in computer science. She was founding director of the University of Tennessee’s AI Tennessee Initiative before spending four years as a member of OSTP, bridging the first Trump and Biden administrations. There, she served as deputy chief technology officer and was the inaugural director of OSTP’s National Artificial Intelligence Initiative Office.

Unlike some other Trump nominations, the appointments have been positively received by the science community. “APLU is enthusiastic that President-elect Trump has selected two individuals who recognize the importance of science to national competitiveness, health, and economic growth,” noted the Association of Public and Land-grant Universities – a membership organization of public research universities – in a statement. Analysts expect the nominations to reflect the returning president’s interest in pursuing AI, which could indicate a move towards technology over scientific research in the coming four years.

  • Bill Nelson – NASA’s departing administrator – has handed over a decision about when to retrieve samples from Mars to potential successor Jared Isaacman. In the wake of huge cost increases and long delays in the schedule for bringing back samples collected by the rover Perseverance, NASA had said last year that it would develop a fresh plan for the “Mars Sample Return” mission. Nelson now says the agency had two lower-cost plans in mind – but that a choice will not be made until mid-2026. One plan would use a sky crane system resembling that which delivered Perseverance to the Martian surface, while the other would require a commercially produced “heavy lift lander” to pick up samples. Each option could cost up to $7.5 bn – much less than the rejected plan’s $11 bn.

The post Trump nominates AI experts for key science positions appeared first on Physics World.

Fermilab seeks new boss after Lia Merminga resigns as director

14 janvier 2025 à 14:30

Lia Merminga has resigned as director of Fermilab – the US’s premier particle-physics lab. She stepped down yesterday after a turbulent year that saw staff layoffs, a change in the lab’s management contractor and accusations of a toxic atmosphere. Merminga is being replaced by Young-Kee Kim from the University of Chicago, who will serve as interim director until a permanent successor is found. Kim was previously Fermilab’s deputy director between 2006 and 2013.

Tracy Marc, a spokesperson for Fermilab, says that the search for Merminga’s successor has already begun, although without a specific schedule. “Input from Fermilab employees is highly valued and we expect to have Fermilab employee representatives as advisory members on the search committee, just as has been done in the past,” Marc told Physics World. “The search committee will keep the Fermilab community informed about the progress of this search.”

The departure of Merminga, who became Fermilab director in August 2022, was announced by Paul Alivisatos, president of the University of Chicago. The university jointly manages the lab with Universities Research Association (URA), a consortium of research universities, as well as the industrial firms Amentum Environment & Energy, Inc. and Longenecker & Associates.

“Her dedication and passion for high-energy physics and Fermilab’s mission have been deeply appreciated,” Alivisatos said in a statement. “This leadership change will bring fresh perspectives and expertise to the Fermilab leadership team.”

Turbulent times

The reasons for Merminga’s resignation are unclear but Fermilab has experienced a difficult last two years with questions raised about its internal management and external oversight. Last August, a group of anonymous self-styled whistleblowers published a 113-page “white paper” on the arXiv preprint server, asserting that the lab was “doomed without a management overhaul”.

The document highlighted issues such as management cover-ups of dangerous behaviour, including guns being brought onto Fermilab’s campus and a male employee’s attack on a female colleague. In addition, key experiments such as the Deep Underground Neutrino Experiment suffered notable delays. Cost overruns also led to a “limited operations period” with most staff on leave in late August.

In October, the US Department of Energy, which oversees Fermilab, announced a new organization – Fermi Forward Discovery Group – to manage the lab. Yet that decision came under scrutiny because the new group is dominated by the University of Chicago and URA, which had already been part of the lab’s management since 2007. Then, a month later, almost 2.5% of Fermilab’s employees were laid off, adding to the picture of an institution in crisis.

The whistleblowers, who told Physics World that they still stand by their analysis of the lab’s issues, say that the layoffs “undermined Fermilab’s scientific mission” and claim that it sidelined “some of its most accomplished” researchers at the lab. “Meanwhile, executive managers, insulated by high salaries and direct oversight responsibilities, remained unaffected,” they allege.

Born in Greece, Merminga, 65, earned a BSc in physics from the University of Athens before moving to the University of Michigan where she completed an MS and PhD in physics. Before taking on Fermilab’s directorship, she held leadership posts in governmental physics-related institutions in the US and Canada.

The post Fermilab seeks new boss after Lia Merminga resigns as director appeared first on Physics World.

Antimatter partner of hyperhelium-4 is spotted at CERN

14 janvier 2025 à 10:41

CERN’s ALICE Collaboration has found the first evidence for antihyperhelium-4, which is an antimatter hypernucleus that is a heavier version of antihelium-4. It contains two antiprotons, an antineutron and an antilambda baryon. The latter contains three antiquarks (up, down and strange – making it an antihyperon), and is electrically neutral like a neutron. The antihyperhelium-4 was created by smashing lead nuclei together at the Large Hadron Collider (LHC) in Switzerland and the observation  has a statistical significance of 3.5σ. While this is below the 5σ level that is generally accepted as a discovery in particle physics, the observation is in line with the Standard Model of particle physics. The detection therefore helps constrain theories beyond the Standard Model that try to explain why the universe contains much more matter than antimatter.
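To put those significance figures in context, particle physicists usually quote the one-sided Gaussian tail probability of the observed excess arising from a statistical fluctuation. The short calculation below (an editorial illustration, not taken from the ALICE analysis) shows what 3.5σ and 5σ correspond to.

```python
import math

def one_sided_p(z_sigma: float) -> float:
    """One-sided Gaussian tail probability of a fluctuation at least z sigma."""
    return 0.5 * math.erfc(z_sigma / math.sqrt(2))

for z in (3.5, 5.0):
    print(f"{z:.1f} sigma -> p = {one_sided_p(z):.1e}")
# 3.5 sigma -> p = 2.3e-04
# 5.0 sigma -> p = 2.9e-07
```

In other words, a 3.5σ excess would arise by chance roughly once in 4000 identical experiments, whereas the 5σ discovery threshold corresponds to about one in 3.5 million.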

Hypernuclei are rare, short-lived atomic nuclei made up of protons, neutrons, and at least one hyperon. Hypernuclei and their antimatter counterparts can be formed within a quark–gluon plasma (QGP), which is created when heavy ions such as lead collide at high energies. A QGP is an extreme state of matter that also existed in the first millionth of a second following the Big Bang.

Exotic antinuclei

Just a few hundred picoseconds after being formed in collisions, antihypernuclei will decay via the weak force – creating two or more distinctive decay products that can be detected. The first antihypernucleus to be observed was a form of antihyperhydrogen called antihypertriton, which contains an antiproton, an antineutron, and an antilambda hyperon. It was discovered in 2010 by the STAR Collaboration, who smashed together gold nuclei at Brookhaven National Laboratory’s Relativistic Heavy Ion Collider (RHIC).

Then in 2024, the STAR Collaboration reported the first observations of the decay products of antihyperhydrogen-4, which contains one more antineutron than antihypertriton.

Now, ALICE physicists have delved deeper into the world of antihypernuclei by doing a fresh analysis of data taken at the LHC in 2018 – where lead ions were collided at 5 TeV.

Using a machine learning technique to analyse the decay products of the nuclei produced in these collisions, the ALICE team identified the same signature of antihyperhydrogen-4 detected by the STAR Collaboration. This is the first time an antimatter hypernucleus has been detected at the LHC.

Rapid decay

But that is not all. The team also found evidence for another, slightly lighter antihypernucleus, called antihyperhelium-4. This contains two antiprotons, an antineutron, and an antihyperon. It decays almost instantly into an antihelium-3 nucleus, an antiproton, and a charged pion. The latter is a meson comprising a quark–antiquark pair.

Physicists describe production of hypernuclei in a QGP using the statistical hadronization model (SHM). For both antihyperhydrogen-4 and antihyperhelium-4, the masses and production yields measured by the ALICE team closely matched the predictions of the SHM – assuming that the particles were produced in a certain mixture of their excited and ground states.

The team’s result further confirms that the SHM can accurately describe the production of hypernuclei and antihypernuclei from a QGP. The researchers also found that equal numbers of hypernuclei and antihypernuclei are produced in the collisions, within experimental uncertainty. While this provides no explanation as to why there is much more matter than antimatter in the observable universe, the research allows physicists to put further constraints on theories that reach beyond the Standard Model of particle physics to try to explain this asymmetry.

The research could also pave the way for further studies into how hyperons within hypernuclei interact with their neighbouring protons and neutrons. With a deeper knowledge of these interactions, astronomers could gain new insights into the mysterious interior properties of neutron stars.

The observation is described in a paper that has been submitted to Physical Review Letters.

The post Antimatter partner of hyperhelium-4 is spotted at CERN appeared first on Physics World.

How publishing in Electrochemical Society journals fosters a sense of community

14 janvier 2025 à 10:06

The Electrochemical Society (ECS) is an international non-profit scholarly organization that promotes research, education and technological innovation in electrochemistry, solid-state science and related fields.

Founded in 1902, the ECS brings together scientists and engineers to share knowledge and advance electrochemical technologies.

As part of that mission, the society publishes several journals including the flagship Journal of the Electrochemical Society (JES), which is over 120 years old and covers a wide range of topics in electrochemical science and engineering.

Someone who has seen their involvement with the ECS and ECS journals increase over their career is chemist Trisha Andrew from the University of Massachusetts Amherst. She directs the wearable electronics lab, a multi-disciplinary research team that produces garment-integrated technologies using reactive vapor deposition.

Trisha Andrew from the University of Massachusetts Amherst. (Courtesy: Trisha Andrew)

Her involvement with the ECS began when she was invited by the editor-in-chief of ECS Sensors Plus to act as a referee for the journal. Andrew found the depth and practical application of the papers she reviewed interesting and of high quality. This resulted in her submitting her own work to ECS journals and she later became an associate editor for both ECS Sensors Plus and JES.

Professional opportunities

Physical chemist Weiran Zheng from the Guangdong Technion–Israel Institute of Technology in China, meanwhile, says that due to the reputation of ECS journals, they have been his “go-to” place to publish since graduate school.

Physical chemist Weiran Zheng from the Guangdong Technion–Israel Institute of Technology in China. (Courtesy: Weiran Zheng)

One of his papers, entitled “Python for electrochemistry: a free and all-in-one toolset” (ECS Adv. 2 040502), has been downloaded over 8000 times and is currently the most-read ECS Advances article. This led to an invitation to deliver an ECS webinar – Introducing Python for Electrochemistry Research. “I never expected such an impact when the paper was accepted, and none of this would be possible without the platform offered by ECS journals,” adds Zheng.

Publishing in ECS journals has helped Zheng’s career advance through new connections and greater involvement in ECS activities. This has boosted not only his research but also his professional network, and given these benefits Zheng plans to continue publishing his latest findings in ECS journals.

Highly cited papers

Battery researcher Thierry Brousse from Nantes University in France came to electrochemistry later in his career, having first carried out a PhD on high-temperature superconducting thin films at the University of Caen Normandy.

Thierry Brousse
Battery researcher Thierry Brousse from Nantes University in France. (Courtesy: Thierry Brousse)

When he began working in the field, he collaborated with the chemist Donald Schleich from Polytech Nantes, who was an ECS member. It was then that he began to read JES, finding it a prestigious platform for his research on supercapacitors and microdevices for energy storage. “Most of the inspiring scientific papers I was reading at that time were from JES,” notes Brousse. “Naturally, my first papers were then submitted to this journal.”

Brousse says that publishing in ECS journals has provided him with new collaborations as well as invitations to speak at major conferences. He emphasizes the importance of innovative work and the positive impact of publishing in ECS journals, where some of his most highly cited work has appeared.

Brousse, who is an associate editor for JES, adds that he particularly values how publishing with ECS journals fosters a quick integration into specific research communities. This, he says, has been instrumental in advancing his career.

Long-standing relationships

Robert Savinell’s relationship with the ECS and ECS journals began during his PhD research in electrochemistry, which he carried out at the University of Pittsburgh. Now at Case Western Reserve University in Cleveland, Ohio, he focuses on developing a flow battery for low-cost, long-duration energy storage based primarily on iron and water. It is designed to improve the efficiency of the power grid and accelerate the addition of solar and wind power supplies.

Robert F Savinell
Robert Savinell at Case Western Reserve University in Cleveland, Ohio. (Courtesy: Robert Savinell)

Savinell also leads an Energy Frontier Research Center on Breakthrough Electrolytes for Energy Storage, funded by the Department of Energy, which focuses on fundamental research on nano- to meso-scale structured electrolytes for energy storage.

ECS journals have been a cornerstone of his professional career, providing a platform for his research and fostering valuable professional connections. “Some of my research published in JES many years ago are still cited today,” says Savinell.

Savinell’s contributions to the ECS community have been recognized through various roles: he has been elected a fellow of the ECS and has previously served as chair of the society’s electrolytic and electrochemical engineering division. He was editor-in-chief of JES for the past decade and was most recently elected third vice president of the ECS.

Savinell says that the connections he has made through ECS have been significant, ranging from funding programme managers to personal friends. “My whole professional career has been focused around ECS,” he says, adding that he aims to continue to publish in ECS journals and hopes that his work will inspire solutions to some of society’s biggest problems.

Personal touch

For many researchers in the field, publishing in ECS journals brings several benefits. These include the high level of engagement and the personal touch within the ECS community, as well as the promotional support that ECS provides for published work.

The ECS journals’ broad portfolio also ensures that researchers’ work reaches the right audience, and such visibility and engagement are significant factors when it comes to advancing the careers of scientists. “The difference between ECS journals is the amount of engagement, views and reception that you receive,” says Andrew. “That’s what I found to be the most unique.”

The post How publishing in Electrochemical Society journals fosters a sense of community appeared first on Physics World.

Higher-order brain function revealed by new analysis of fMRI data

10 janvier 2025 à 16:34

An international team of researchers has developed new analytical techniques that consider interactions between three or more regions of the brain – providing a more in-depth understanding of human brain activity than conventional analysis. Led by Andrea Santoro at the Neuro-X Institute in Geneva and Enrico Amico at the UK’s University of Birmingham, the team hopes its results could help neurologists identify a vast array of new patterns in human brain data.

To study the structure and function of the brain, researchers often rely on network models. In these, nodes represent specific groups of neurons in the brain, and edges represent the connections between them, typically inferred from statistical correlations in their activity.

Within these models, brain activity has often been represented as pairwise interactions between two specific regions. Yet as the latest advances in neurology have clearly shown, the real picture is far more complex.

“To better analyse how our brains work, we need to look at how several areas interact at the same time,” Santoro explains. “Just as multiple weather factors – like temperature, humidity, and atmospheric pressure – combine to create complex patterns, looking at how groups of brain regions work together can reveal a richer picture of brain function.”

Higher-order interactions

Yet with the mathematical techniques applied in previous studies, researchers have not confirmed whether network models incorporating these higher-order interactions between three or more brain regions could really be more accurate than simpler models, which only account for pairwise interactions.

To shed new light on this question, Santoro’s team built upon their previous analysis of functional MRI (fMRI) data, which identify brain activity by measuring changes in blood flow.

Their approach combined two powerful tools. One is topological data analysis. This identifies patterns within complex datasets like fMRI, where each data point depends on a large number of interconnected variables. The other is time series analysis, which is used to identify patterns in brain activity which emerge over time. Together, these tools allowed the researchers to identify complex patterns of activity occurring across three or more brain regions simultaneously.
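
As a rough illustration of what a higher-order measure can look like in practice, the Python sketch below computes a simple triplet co-fluctuation score from synthetic fMRI-like time series. This is a toy calculation for intuition only, not the topological pipeline used by Santoro and colleagues, and the region count, time-series length and random data are invented.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# Synthetic "fMRI" signals: 5 brain regions, 200 time points (made-up sizes)
n_regions, n_timepoints = 5, 200
signals = rng.standard_normal((n_regions, n_timepoints))

# z-score each region's time series
z = (signals - signals.mean(axis=1, keepdims=True)) / signals.std(axis=1, keepdims=True)

def pair_score(i, j):
    # Pairwise interaction: mean product of two z-scored series (Pearson correlation)
    return float(np.mean(z[i] * z[j]))

def triplet_score(i, j, k):
    # Triplet "co-fluctuation": mean product of three z-scored series. Non-zero values
    # flag moments when all three regions fluctuate together, which no single
    # pairwise correlation captures on its own.
    return float(np.mean(z[i] * z[j] * z[k]))

print("pair (0, 1):", round(pair_score(0, 1), 3))
for triplet in combinations(range(n_regions), 3):
    print("triplet", triplet, round(triplet_score(*triplet), 3))
```

In a real analysis, scores like these would be computed over time windows and fed into topological tools such as persistent homology, but the basic idea of looking beyond pairs carries over.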

To test their approach, the team applied it to fMRI data taken from 100 healthy participants in the Human Connectome Project. “By applying these tools to brain scan data, we were able to detect when multiple regions of the brain were interacting at the same time, rather than only looking at pairs of brain regions,” Santoro explains. “This approach let us uncover patterns that might otherwise stay hidden, giving us a clearer view of how the brain’s complex network operates as a whole.”

Just as they hoped, this analysis of higher-order interactions provided far deeper insights into the participants’ brain activity compared with traditional pairwise methods. “Specifically, we were better able to figure out what type of task a person was performing, and even uniquely identify them based on the patterns of their brain activity,” Santoro continues.

Distinguishing between tasks

With its combination of topological and time series analysis, the team’s method could distinguish between a wide variety of tasks performed by the participants, including expressing emotion, using language and engaging in social interactions.

By building further on their approach, Santoro and colleagues are hopeful it could eventually be used to uncover a vast space of as-yet unexplored patterns within human brain data.

If the approach can be tailored to the brains of individual patients, it could ultimately enable researchers to draw direct links between brain activity and physical actions.

“Down the road, the same approach might help us detect subtle brain changes that occur in conditions like Alzheimer’s disease – possibly before symptoms become obvious – and could guide better therapies and earlier interventions,” Santoro predicts.

The research is described in Nature Communications.

The post Higher-order brain function revealed by new analysis of fMRI data appeared first on Physics World.

Start-stop operation and the degradation impact in electrolysis

10 janvier 2025 à 12:00

start-stop graph

This webinar will detail recent efforts to understand degradation in proton exchange membrane-based low-temperature electrolysis, focusing on losses due to simulated start-stop operation and anode catalyst layer redox transitions. Ex situ testing indicated that repeated redox cycling accelerates catalyst dissolution, owing to near-surface reduction and the higher dissolution kinetics of metals when cycling to high potentials. Similar results occurred in situ, where a large decrease in cell kinetics was found, along with iridium migration from the anode catalyst layer into the membrane. Additional processes were also observed, including changes in catalyst oxidation, the formation of thinner and denser catalyst layers, and platinum migration from the transport layer coating. Complicating factors, including the loss of water flow and temperature control, were evaluated as well, and a higher rate of interfacial tearing and delamination was found. Current efforts are focused on bridging these studies to more relevant field tests, including evaluating possible differences between catalyst reduction through an electrochemical process and through hydrogen exposure, either direct or via crossover. These studies seek to identify degradation mechanisms and sources of accelerated voltage loss, and to demonstrate the impact of operational stops on electrolyzer lifetime.

An interactive Q&A session follows the presentation.

Shaun Alia

Shaun Alia has worked in several areas related to electrochemical energy conversion and storage, including proton and anion exchange membrane-based electrolyzers and fuel cells, direct methanol fuel cells, capacitors, and batteries. His current research involves understanding electrochemical and degradation processes, component development, and materials integration and optimization. Within HydroGEN, part of the U.S. Department of Energy’s Energy Materials Network, Alia has been involved in low-temperature electrolysis through NREL capabilities in materials development and ex situ and in situ characterization. He is also active in in situ durability, diagnostics, and accelerated stress test development for H2@Scale and H2NEW.

 

 

The post Start-stop operation and the degradation impact in electrolysis appeared first on Physics World.

NMR technology shows promise in landmine clearance field trials

9 janvier 2025 à 11:49

Novel landmine detectors based on nuclear magnetic resonance (NMR) have passed their first field-trial tests. Built by the Sydney-based company mRead, the devices could speed up the removal of explosives in former war zones. The company tested its prototype detectors in Angola late last year, finding that they could reliably sense explosives buried up to 15 cm underground — the typical depth of a deployed landmine.

Landmines are a problem in many countries recovering from armed conflict. According to NATO, some 110 million landmines are located in 70 countries worldwide, including Cambodia and Bosnia, despite conflict having ended in both nations decades ago. Ukraine is currently the world’s most mine-infested country, with vast swathes of its agricultural land potentially rendered unusable for decades.

Such landmines also continue to kill innocent civilians. According to the Landmine and Cluster Munition Monitor, nearly 2000 people died from landmine incidents in 2023 – double the number compared to 2022 – and a further 3660 were injured. Over 80% of the casualties were civilians, with children accounting for 37% of deaths.

Humanitarian “deminers”, who are trying to remove these explosives, currently inspect suspected minefields with hand-held metal detectors. These devices use magnetic induction coils that respond to the metal components present in landmines. Unfortunately, they react to every random piece of metal and shrapnel in the soil, leading to high rates of false positives.

“It’s not unreasonable with a metal detector to see 100 false alarms for every mine that you clear,” says Matthew Abercrombie, research and development officer at the HALO Trust, a de-mining charity. “Each of these false alarms, you still have to investigate as if it were a mine.” But for every mine excavated, about 50 hours is wasted on excavating false positives, meaning that clearing a single minefield could take months or years.

“Landmines make time stand still,” adds HALO Trust research officer Ronan Shenhav. “They can lie silent and invisible in the ground for decades. Once disturbed they kill and maim civilians, as well as valuable livestock, preventing access to schools, roads, and prime agricultural land.”

Hope for the future

One alternative landmine-detection technology is NMR, which is already widely used to look for underground mineral resources and to scan for drugs at airports. In NMR, nuclei emit a weak electromagnetic signal when exposed to a strong constant magnetic field and a weak oscillating field. Because the frequency of the signal depends on the molecule’s structure, every chemical compound has a specific electromagnetic fingerprint.

The problem with using it to sniff out landmines is pervasive environmental radio noise: the electromagnetic signal emitted by the excited molecules is 16 orders of magnitude weaker than the one used to trigger the effect. Digital radio transmission, electricity generators and industrial infrastructure all produce noise at the same frequencies the detectors are listening for. Even thunderstorms produce a radio hum that can spread across vast distances.

mRead scanner
The handheld detectors developed by mRead emit radio pulses at frequencies between 0.5 and 5 MHz. (Courtesy: mRead)

“It’s easier to listen to the Big Bang at the edge of the Universe,” says Nick Cutmore, chief technology officer at mRead. “Because the signal is so small, every interference stops you. That stopped a lot of practical applications of this technique in the past.” Cutmore is part of a team that has been trying to cut the effects of noise since the early 2000s, eventually finding a way to filter out this persistent crackle through a proprietary sensor design.

mRead’s handheld detectors emit radio pulses at frequencies between 0.5 and 5 MHz, which are much higher than the kilohertz-range frequencies used by conventional metal detectors. The signal elicits a magnetic resonance response in atoms of sodium, potassium and chlorine, which are commonly found in explosives. A sensor inside the detector “listens out” for the particular fingerprint signal, locating a forgotten mine more precisely than is possible with conventional metal detectors.
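
The underlying signal-processing challenge, recovering a known resonance “fingerprint” from noise that dwarfs it, can be illustrated with a toy pulse-averaging and matched-filter sketch in Python. The frequency, decay time, amplitudes and pulse count below are invented for illustration (and nowhere near the real 16-orders-of-magnitude gap); this is not a description of mRead’s proprietary electronics.

```python
import numpy as np

rng = np.random.default_rng(1)

fs = 20e6                      # sample rate in Hz (illustrative)
f0 = 3.4e6                     # assumed "fingerprint" frequency in Hz (hypothetical)
t = np.arange(0, 1e-3, 1 / fs)

# Expected response: an exponentially decaying oscillation at the fingerprint frequency
template = np.exp(-t / 2e-4) * np.sin(2 * np.pi * f0 * t)
signal = 3e-3 * template       # true response, far weaker than the noise below

def one_pulse(with_signal):
    # Each pulse measurement is the weak response buried in broadband noise
    noise = rng.standard_normal(t.size)
    return signal + noise if with_signal else noise

# Averaging repeated pulses suppresses incoherent noise roughly as 1/sqrt(N)
n_pulses = 2000
avg_signal = sum(one_pulse(True) for _ in range(n_pulses)) / n_pulses
avg_noise = sum(one_pulse(False) for _ in range(n_pulses)) / n_pulses

# Matched filter: correlate the averaged trace against the expected template
def score(trace):
    return float(np.dot(trace, template) / np.linalg.norm(template))

print(f"with mine   : {score(avg_signal):+.4f}")
print(f"without mine: {score(avg_noise):+.4f}")
```

Run repeatedly, the “with mine” score sits well away from zero while the control hovers near it, which is the essence of pulling a coherent resonance out of incoherent background noise.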


Given that the detected signal is so small, it has to be amplified, but amplification adds noise of its own. The company says it has found a way to make sure the electronics in the detector do not exacerbate the problem. “Our current handheld system only consumes 40 to 50 W when operating,” says Cutmore. “Previous systems have sometimes operated at a few kilowatts, making them power-hungry and bulky.”

Having tested the prototype detectors in a simulated minefield in Australia in August 2024, mRead engineers have now deployed them in minefields in Angola in cooperation with the HALO Trust. Because the detectors respond directly to the explosive substance, they almost completely eliminated false positives, allowing deminers to double-check locations flagged by metal detectors before time-consuming digging took place.

During the three-week trial, the researchers also detected mines with a low metal content, which are difficult to spot with metal detectors. “Instead of doing 1000 metal detections and finding one mine, we can isolate those detections very quickly before people start digging,” says Cutmore.

Researchers at mRead plan to return to Angola later this year for further tests. They also want to fine-tune their prototypes and begin working on devices that could be produced commercially. “I am tremendously excited by the results of these trials,” says James Cowan, chief executive officer of the HALO Trust. “With over two million landmines laid in Ukraine since 2022, landmine clearance needs to be faster, safer, and smarter.”

The post NMR technology shows promise in landmine clearance field trials appeared first on Physics World.

Solid-state nuclear clocks brought closer by physical vapour deposition

8 janvier 2025 à 17:36
Solid-state clock Illustration of how thorium atoms are vaporized (bottom) and then deposited in a thin film on the substrate’s surface (middle). This film could form the basis for a nuclear clock (top). (Courtesy: Steven Burrows/Ye group)

Physicists in the US have taken an important step towards a practical nuclear clock by showing that the physical vapour deposition (PVD) of thorium-229 could reduce the amount of this expensive and radioactive isotope needed to make a timekeeper. The research could usher in an era of robust and extremely accurate solid-state clocks that could be used in a wide range of commercial and scientific applications.

Today, the world’s most precise atomic clocks are the strontium optical lattice clocks created by Jun Ye’s group at JILA in Boulder, Colorado. These are accurate to within a second in the age of the universe. However, because these clocks use an atomic transition between electron energy levels, they can easily be disrupted by external electromagnetic fields. This means that the clocks must be operated in isolation in a stable lab environment. While other types of atomic clock are much more robust – some are deployed on satellites – they are nowhere near as accurate as optical lattice clocks.

Some physicists believe that transitions between energy levels in atomic nuclei could offer a way to make robust, portable clocks that deliver very high accuracy. As well as being very small and governed by the strong force, nuclei are shielded from external electromagnetic fields by their own electrons. And unlike optical atomic clocks, which use a very small number of delicately-trapped atoms or ions, many more nuclei can be embedded in a crystal without significantly affecting the clock transition. Such a crystal could be integrated on-chip to create highly robust and highly accurate solid-state timekeepers.

Sensitive to new physics

Nuclear clocks would also be much more sensitive to new physics beyond the Standard Model – allowing physicists to explore hypothetical concepts such as dark matter. “The nuclear energy scale is millions of electron volts; the atomic energy scale is electron volts; so the effects of new physics are also much stronger,” explains Victor Flambaum of Australia’s University of New South Wales.

Normally, a nuclear clock would require a laser that produces coherent gamma rays – something that does not exist. By exquisite good fortune, however, there is a single transition between the ground and excited states of one nucleus in which the potential energy changes due to the strong nuclear force and the electromagnetic interaction almost exactly cancel, leaving an energy difference of just 8.4 eV. This corresponds to vacuum ultraviolet light, which can be created by a laser.
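
As a quick back-of-the-envelope check on why an 8.4 eV transition sits in the vacuum ultraviolet, the snippet below converts the transition energy into a photon wavelength using λ = hc/E. The constants are rounded and the ~148 nm output is just this arithmetic, not a measured value.

```python
# Convert the ~8.4 eV thorium-229 clock transition energy to a photon wavelength
h = 6.626e-34       # Planck constant, J s
c = 2.998e8         # speed of light, m/s
eV = 1.602e-19      # joules per electronvolt

E = 8.4 * eV                # transition energy in joules
wavelength = h * c / E      # lambda = hc / E
print(f"{wavelength * 1e9:.0f} nm")  # ~148 nm, well inside the vacuum ultraviolet
```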

That nucleus is thorium-229, but as Ye’s postgraduate student Chuankun Zhang explains, it is very expensive. “We bought about 700 µg for $85,000, and as I understand it the price has been going up”.

In September, Zhang and colleagues at JILA measured the frequency of the thorium-229 transition with unprecedented precision using their strontium-87 clock as a reference. They used thorium-doped calcium fluoride crystals. “Doping thorium into a different crystal creates a kind of defect in the crystal,” says Zhang. “The defects’ orientations are sort of random, which may introduce unwanted quenching or limit our ability to pick out specific atoms using, say, polarization of the light.”

Layers of thorium fluoride

In the new work, the researchers collaborated with colleagues in Eric Hudson’s group at the University of California, Los Angeles and others to form layers of thorium fluoride between 30 nm and 100 nm thick on crystalline substrates such as magnesium fluoride. They used PVD, a well-established technique in which material is evaporated from a hot crucible before condensing onto a substrate. The resulting samples contained three orders of magnitude less thorium-229 than the crystals used in the September experiment, but had a comparable number of thorium atoms per unit area.

The JILA team sent the samples to Hudson’s lab for interrogation by a custom-built vacuum ultraviolet laser. Researchers led by Hudson’s student Richard Elwell observed clear signatures of the nuclear transition and found the lifetime of the excited state to be about four times shorter than observed in the crystal. While the discrepancy is not understood, the researchers say this might not be problematic in a clock.

More significant challenges lie in the surprisingly small fraction of thorium nuclei participating in the clock operation – with the measured signal about 1% of the expected value, according to Zhang. “There could be many reasons. One possibility is because the vapour deposition process isn’t controlled super well such that we have a lot of defect states that quench away the excited states.” Beyond this, he says, designing a mobile clock will entail miniaturizing the laser.

Flambaum, who was not involved in the research, says that it marks “a very significant technical advance” in the quest to build a solid-state nuclear clock – something that he believes could be useful for sensing everything from oil to variations in the fine structure constant. “As a standard of frequency a solid state clock is not very good because it’s affected by the environment,” he says. “As soon as we know the frequency very accurately we will do it with [trapped] ions, but that has not been done yet.”

The research is described in Nature.

The post Solid-state nuclear clocks brought closer by physical vapour deposition appeared first on Physics World.

Moonstruck: art and science collide in stunning collection of lunar maps and essays

8 janvier 2025 à 10:43

As I write this [and don’t tell the Physics World editors, please] I’m half-watching out of the corner of my eye the quirky French-made, video-game spin-off series Rabbids Invasion. The mad and moronic bunnies (or, in a nod to the original French, Les Lapins Crétins) are currently making another attempt to reach the Moon – a recurring yet never-explained motif in the cartoon – by stacking up a vast pile of junk; charming chaos ensues.

As explained in LUNAR: a History of the Moon in Myths, Maps + Matter – the exquisite new Thames & Hudson book that presents the stunning Apollo-era Lunar Atlas alongside a collection of charming essays – madness has long been associated with the Moon. One suspects there was a good kind of mania behind the drawing up of the Lunar Atlas, a series of geological maps plotting the rock formations on the Moon’s surface that are as much art as they are a visualization of data. And having drooled over LUNAR, truly the crème de la crème of coffee-table books, one cannot help but become a little mad for the Moon too.

Many faces of the Moon

As well as an exploration of the Moon’s connections (both etymologically and philosophically) to lunacy by science writer Kate Golembiewski, the varied and captivating essays of 20 authors collected in LUNAR cover the gamut from the Moon’s role in ancient times (did you know that the Greeks believed that the souls of the dead gather around the Moon?) through to natural philosophy, eclipses, the space race and the Artemis Programme. My favourite essays were the more off-beat ones: the Moon in silent cinema, for example, or its fascinating influence on “cartes de visite”, the short-lived 19th-century miniature images whose popularity was boosted by Queen Victoria and Prince Albert. (I, for one, am now quite resolved to have my portrait taken with a giant, stylized, crescent moon prop.)

The pulse of LUNAR, however, is the set of breathtaking reproductions of all 44 of the exquisitely hand-drawn 1:1,000,000 scale maps – or “quadrangles” – that make up the US Geological Survey (USGS)/NASA Lunar Atlas (see header image).

Drawn up between 1962 and 1974 by a team of 24 cartographers, illustrators, geographers and geologists, the astonishing Lunar Atlas captures the entirety of the Moon’s near side: every crater and lava-filled mare (“sea”), every terra (highland) and volcanic dome. The work began as a way to guide the robotic and human exploration of the Moon’s surface and was soon augmented with images and rock samples from the missions themselves.

One would be hard-pushed to sum it up better than the American science writer Dava Sobel, who pens the book’s foreword: “I’ve been to the Moon, of course. Everyone has, at least vicariously, visited its stark landscapes, driven over its unmarked roads. Even so, I’ve never seen the Moon quite the way it appears here – a black-and-white world rendered in a riot of gorgeous colours.”

Many moons ago

Having been trained in geology, the sections of the book covering the history of the Lunar Atlas piqued my particular interest. The Lunar Atlas was not the first attempt to map the surface of the Moon; one of the reproductions in the book shows an earlier effort from 1961 drawn up by USGS geologists Robert Hackman and Eugene Shoemaker.

Hackman and Shoemaker’s map shows the Moon’s Copernicus region, named after its central crater, which in turn honours the Renaissance-era Polish polymath Nicolaus Copernicus. It served as the first demonstration that the geological principles of stratigraphy (the study of rock layers) as developed on the Earth could also be applied to other bodies. The duo started with the law of superposition; this is the principle that where one finds multiple layers of rock, unless they have been substantially deformed, the oldest layer will be at the bottom and the youngest at the top.

“The chronology of the Moon’s geologic history is one of violent alteration,” explains science historian Matthew Shindell in LUNAR’s second essay. “What [Hackman and Shoemaker] saw around Copernicus were multiple overlapping layers, including the lava plains of the maria […], craters displaying varying degrees of degradations, and materials and features related to the explosive impacts that had created the craters.”

From these the pair developed a basic geological timeline, unpicking the recent history of the Moon one overlapping feature at a time. They identified five eras, with the Copernican, named after the crater and beginning 1.1 billion years ago, being the most recent.

Considering it was based on observations of just one small region of the Moon, their timescale was remarkably accurate, Shindell explains, although subsequent observations have redefined its stratigraphic units – for example by adding the Pre-Nectarian as the earliest era (predating the formation of Nectaris, the oldest basin), whose rocks can still be found broken up and mixed into the lunar highlands.

Accordingly, the different quadrants of the atlas very much represent an evolving work, developing as lunar exploration progressed. Later maps tended to be more detailed, reflecting a more nuanced understanding of the Moon’s geological history.

New moon

Parts of the Lunar Atlas have recently found new life in the development of the first-ever complete map of the lunar surface, the “Unified Geologic Map of the Moon”. The new digital map combines the Apollo-era data with that from more recent satellite missions, including the Japan Aerospace Exploration Agency (JAXA)’s SELENE orbiter.

As former USGS Director and NASA astronaut Jim Reilly said when the unified map was first published back in 2020: “People have always been fascinated by the Moon and when we might return. So, it’s wonderful to see USGS create a resource that can help NASA with their planning for future missions.”

I might not be planning a Moon mission (whether by rocket or teetering tower of clutter), but I am planning to give the stunning LUNAR pride of place on my coffee table next time I have guests over – that’s how much it’s left me, ahem, “over the Moon”.

  • 2024 Thames & Hudson 256pp £50.00

The post Moonstruck: art and science collide in stunning collection of lunar maps and essays appeared first on Physics World.

Entanglement entropy in protons affects high-energy collisions, calculations reveal

7 janvier 2025 à 09:50

An international team of physicists has used the principle of entanglement entropy to examine how particles are produced in high-energy electron–proton collisions. Led by Kong Tu at Brookhaven National Laboratory in the US, the researchers showed that quarks and gluons in protons are deeply entangled and approach a state of maximum entanglement when they take part in high-energy collisions.

While particle physicists have made significant progress in understanding the inner structures of protons, neutrons, and other hadrons, there is still much to learn. Quantum chromodynamics (QCD) says that the proton and other hadrons comprise quarks, which are tightly bound together via exchanges of gluons – mediators of the strong force. However, using QCD to calculate the properties of hadrons is notoriously difficult except under certain special circumstances.

Calculations can be simplified by describing the quarks and gluons as partons in a model that was developed in the late 1960s by James Bjorken, Richard Feynman, Vladimir Gribov and others. “Here, all the partons within a proton appear ‘frozen’ when the proton is moving very fast relative to an observer, such as in high-energy particle colliders,” explains Tu.

Dynamic and deeply complex interactions

While the parton model is useful for interpreting the results of particle collisions, it cannot fully capture the dynamic and deeply complex interactions between quarks and gluons within protons and other hadrons. These interactions are quantum in nature and therefore involve entanglement. This is a purely quantum phenomenon whereby a group of particles can be more highly correlated than is possible in classical physics.

“To analyse this concept of entanglement, we utilize a tool from quantum information science named entanglement entropy, which quantifies the degree of entanglement within a system,” Tu explains.

In physics, entropy is used to quantify the degree of randomness and disorder in a system. However, it can also be used in information theory to measure the degree of uncertainty within a set of possible outcomes.

“In terms of information theory, entropy measures the minimum amount of information required to describe a system,” Tu says. “The higher the entropy, the more information is needed to describe the system, meaning there is more uncertainty in the system. This provides a dynamic picture of a complex proton structure at high energy.”

Deeply entangled

In this context, particles in a system with high entanglement entropy will be deeply entangled – whereas those in a system with low entanglement entropy will be mostly uncorrelated.
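
For readers unfamiliar with the quantity, the short Python sketch below computes the entanglement entropy of a two-qubit toy state by tracing out one qubit and taking the von Neumann entropy of what remains. It is a textbook illustration of the definition only, not the QCD calculation performed by Tu and colleagues.

```python
import numpy as np

def entanglement_entropy(state):
    """Von Neumann entropy (in bits) of one half of a two-qubit pure state."""
    psi = np.asarray(state, dtype=complex).reshape(2, 2)  # indices: (qubit A, qubit B)
    rho_a = psi @ psi.conj().T                            # reduced density matrix of qubit A
    evals = np.linalg.eigvalsh(rho_a).real
    evals = evals[evals > 1e-12]                          # drop numerical zeros
    entropy = -np.sum(evals * np.log2(evals))
    return float(max(entropy, 0.0))  # clip the -0.0 that exact product states produce

product = np.kron([1.0, 0.0], [1.0, 0.0])            # |00>: no correlations
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)

print(entanglement_entropy(product))  # 0.0 -> unentangled
print(entanglement_entropy(bell))     # 1.0 -> maximally entangled two-qubit state
```

The proton analysis deals with far larger effective systems of quarks and gluons, but the same logic applies: the closer the entropy gets to its maximum possible value, the more strongly the constituents are entangled.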

In recent studies, entanglement entropy has been used to describe how hadrons are produced through deep inelastic scattering interactions – such as when an electron or neutrino collides with a hadron at high energy. However, the evolution with energy of the entanglement entropy within protons had gone largely unexplored. “Before we did this work, no one had looked at entanglement inside of a proton in experimental high-energy collision data,” says Tu.

Now, Tu’s team has investigated how entanglement entropy varies with the speed of the proton – and how this relationship relates to the hadrons created during inelastic collisions.

Matching experimental data

Their study revealed that the equations of QCD can accurately predict the evolution of entanglement entropy – with their results closely matching experimental collision data. Perhaps most strikingly, they found that as the entanglement entropy grows at high energies, it may approach a state of maximum entanglement under certain conditions. This high degree of entropy is evident in the large number of particles produced in electron–proton collisions.

The researchers are now confident that their approach could lead to further insights about QCD. “This method serves as a powerful tool for studying not only the structure of the proton, but also those of the nucleons within atomic nuclei,” Tu explains. “It is particularly useful for investigating the underlying mechanisms by which nucleons are modified in the nuclear environment.”

In the future, Tu and colleagues hope that their model could boost our understanding of processes such as the formation and fragmentation of hadrons within the high-energy jets created in particle collisions, and the resulting shift in parton distributions within atomic nuclei. Ultimately, this could lead to a fresh new perspective on the inner workings of QCD.

The research is described in Reports on Progress in Physics.

The post Entanglement entropy in protons affects high-energy collisions, calculations reveal appeared first on Physics World.

PLANCKS physics quiz – how do you measure up against the brightest physics students in the UK and Ireland?

24 décembre 2024 à 10:00

Each year, the International Association of Physics Students organizes a physics competition for bachelor’s and master’s students from across the world. Known as the Physics League Across Numerous Countries for Kick-ass Students (PLANCKS), it’s a three-day event where teams of three to four students compete to answer challenging physics questions.

In the UK and Ireland, teams compete in a preliminary competition to be sent to the final. Here are some fiendish questions from past PLANCKS UK and Ireland preliminaries and the 2024 final in Dublin, written by Anthony Quinlan and Sam Carr, for you to try this holiday season.

Question 1: 4D Sun

Imagine you have been transported to another universe with four spatial dimensions. What would the colour of the Sun be in this four-dimensional universe? You may assume that the surface temperature of the Sun is the same as in our universe and is approximately T = 6 × 10³ K. [10 marks]

Boltzmann constant, kB = 1.38 × 10⁻²³ J K⁻¹

Speed of light, c = 3 × 10⁸ m s⁻¹

Question 2: Heavy stuff

In a parallel universe, two point masses, each of 1 kg, start at rest a distance of 1 m apart. The only force on them is their mutual gravitational attraction, F = −Gm₁m₂/r². If it takes 26 hours and 42 minutes for the two masses to meet in the middle, calculate the value of the gravitational constant G in this universe. [10 marks]

Question 3: Just like clockwork

Consider a pendulum clock that is accurate on the Earth’s surface. Figure 1 shows a simplified view of this mechanism.

Simplified schematic of a pendulum clock mechanism
1 Tick tock Simplified schematic of a pendulum clock mechanism. When the pendulum swings one way (a), the escapement releases the gear attached to the hanging mass and allows it to fall. When the pendulum swings the other way (b) the escapement stops the gear attached to the mass moving so the mass stays in place. (Courtesy: Katherine Skipper/IOP Publishing)

A pendulum clock runs on the gravitational potential energy from a hanging mass (1). The other components of the clock mechanism regulate the speed at which the mass falls so that it releases its gravitational potential energy over the course of a day. This is achieved using a swinging pendulum of length l (2), whose period is given by

T = 2π√(l/g)

where g is the acceleration due to gravity.

Each time the pendulum swings, it rocks a mechanism called an “escapement” (3). When the escapement moves, the gear attached to the mass (4) is released. The mass falls freely until the pendulum swings back and the escapement catches the gear again. The motion of the falling mass transfers energy to the escapement, which gives a “kick” to the pendulum that keeps it moving throughout the day.

Radius of the Earth, R = 6.3781 × 10⁶ m

Period of one Earth day, τ₀ = 8.64 × 10⁴ s

How slow will the clock be over the course of a day if it is lifted to the hundredth floor of a skyscraper? Assume the height of each storey is 3 m. [4 marks]

Question 4: Quantum stick

Imagine an infinitely thin stick of length 1 m and mass 1 kg that is balanced on its end. Classically this is an unstable equilibrium, although the stick will stay there forever if it is perfectly balanced. However, in quantum mechanics there is no such thing as perfectly balanced due to the uncertainty principle – you cannot have the stick perfectly upright and not moving at the same time. One could argue that the quantum mechanical effects of the uncertainty principle on the system are overpowered by others, such as air molecules and photons hitting it or the thermal excitation of the stick. Therefore, to investigate we would need ideal conditions such as a dark vacuum, and cooling to a few millikelvins, so the stick is in its ground state.

Moment of inertia for a rod,

I = (1/3)ml²

where m is the mass and l is the length.

Uncertainty principle,

Δx Δp ≥ ℏ/2

There are several possible approximations and simplifications you could make in solving this problem, including:

sinθ ≈ θ for small θ

cosh⁻¹(x) = ln(x + √(x² − 1))

and

sinh⁻¹(x) = ln(x + √(x² + 1))

Calculate the maximum time it would take such a stick to fall over and hit the ground if it is placed in a state compatible with the uncertainty principle. Assume that you are on the Earth’s surface. [10 marks]

Hint: Consider the two possible initial conditions that arise from the uncertainty principle.

  • Answers will be posted here on the Physics World website next month. There are no prizes.
  • If you’re a student who wants to sign up for the 2025 edition of PLANCKS UK and Ireland, entries are now open at plancks.uk

The post PLANCKS physics quiz – how do you measure up against the brightest physics students in the UK and Ireland? appeared first on Physics World.

How the operating window of LFP/Graphite cells affects their lifetime

23 décembre 2024 à 12:42

 

Lithium iron phosphate (LFP) battery cells are ubiquitous in electric vehicles and stationary energy storage because they are cheap and have a long lifetime. This webinar will present our studies comparing 240 mAh LFP/graphite pouch cells undergoing charge-discharge cycles over five state-of-charge (SOC) windows (0%–25%, 0%–60%, 0%–80%, 0%–100%, and 75%–100%). To accelerate the degradation, elevated temperatures of 40°C and 55°C were used; at more realistic operating temperatures, LFP cells are expected to perform better, with longer lifetimes. In this study, we found that cycling LFP cells across a lower average SOC results in less capacity fade than cycling across a higher average SOC, regardless of depth of discharge. The primary capacity-fade mechanism is lithium inventory loss, caused by the reactivity of lithiated graphite with the electrolyte, which increases incrementally with SOC, and by lithium alkoxide species that drive iron dissolution and deposition on the negative electrode at high SOC, further accelerating lithium inventory loss. Our results show that even low-voltage LFP systems (3.65 V) face a trade-off between average SOC and lifetime. Operating LFP cells at a lower average SOC could extend their lifetime substantially in both EV and grid-storage applications.
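
To make the distinction between average SOC and depth of discharge concrete, the short sketch below simply tabulates both quantities for the five cycling windows listed above; it is included purely to clarify the terminology, not to reproduce any of the study’s measurements.

```python
# State-of-charge windows from the webinar abstract: (lower %, upper %)
windows = [(0, 25), (0, 60), (0, 80), (0, 100), (75, 100)]

print(f"{'window':>10} {'avg SOC':>8} {'DOD':>5}")
for low, high in windows:
    avg_soc = (low + high) / 2   # average state of charge over the cycle
    dod = high - low             # depth of discharge
    print(f"{f'{low}-{high}%':>10} {avg_soc:>7.1f}% {dod:>4d}%")
```

Note that the 0%–25% and 75%–100% windows share the same depth of discharge (25%) but have very different average SOCs, which is exactly the comparison that separates the two effects.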

Eniko Zsoldos

Eniko Zsoldos is a 5th year PhD candidate in chemistry at Dalhousie University in the Jeff Dahn research group. Her current research focuses on understanding degradation mechanisms in a variety of lithium-ion cell chemistries (NMC, LFP, LMO) using techniques such as isothermal microcalorimetry and electrolyte analysis. Eniko received her undergraduate degree in nanotechnology engineering from the University of Waterloo. During her undergrad, she was a member of the Waterloo Formula Electric team, building an electric race car for FSAE student competitions. She has completed internships at Sila Nanotechnologies working on silicon-based anodes for batteries, and at Tesla working on dry electrode processing in Fremont, CA.

 

The Electrochemical Society

 

The post How the operating window of LFP/Graphite cells affects their lifetime appeared first on Physics World.
