Physicists in Serbia have begun strike action today in response to what they say is government corruption and social injustice. The one-day strike, called by the country’s official union for researchers, is expected to result in thousands of scientists joining students who have already been demonstrating for months over conditions in the country.
The student protests, which began in November, were triggered by a railway station canopy collapse that killed 15 people. Since then, the movement has grown into an ongoing mass protest seen by many as indirectly seeking to change the government, currently led by president Aleksandar Vučić.
The Serbian government, however, claims it has met all student demands, such as the transparent publication of all documents related to the accident and the prosecution of individuals who have disrupted the protests. The government has also accepted the resignation of prime minister Miloš Vučević as well as transport minister Goran Vesić and trade minister Tomislav Momirović, who previously held the transport role during the station’s reconstruction.
“The students are championing noble causes that resonate with all citizens,” says Igor Stanković, a statistical physicist at the Institute of Physics (IPB) in Belgrade, who is joining today’s walkout. In January, around 100 IPB employees signed a letter in support of the students, one of many from various research institutions since December.
Stanković believes that the corruption and lack of accountability that students are protesting against “stem from systemic societal and political problems, including entrenched patronage networks and a lack of transparency”.
“I believe there is no turning back now,” adds Stanković. “The students have gained support from people across the academic spectrum – including those I personally agree with and others I believe bear responsibility for the current state of affairs. That, in my view, is their strength: standing firmly behind principles, not political affiliations.”
Meanwhile, Miloš Stojaković, a mathematician at the University of Novi Sad, says that faculty at the university have backed the students from the start, especially given that they are making “a concerted effort to minimize disruptions to our scientific work”.
Many university faculties in Serbia have been blockaded by protesting students, who have been using them as a base for their demonstrations. “The situation will have a temporary negative impact on research activities,” admits Dejan Vukobratović, an electrical engineer from the University of Novi Sad. However, most researchers are “finding their way through this situation”, he adds, with “most teams keeping their project partners and funders informed about the situation, anticipating possible risks”.
Missed exams
Amidst the continuing disruptions, the Serbian national science foundation has twice delayed a deadline for the award of €24m of research grants, citing “circumstances that adversely affect the collection of project documentation”. The foundation adds that 96% of its survey participants requested an extension. The researchers’ union has also called on the government to freeze the work status of PhD students employed as research assistants or interns to accommodate the months-long pause to their work. The government has promised to look into it.
Meanwhile, universities are setting up expert groups to figure out how to deal with the delays to studies and missed exams. Physics World approached Serbia’s government for comment, but did not receive a reply.
Researchers in Australia have developed a nanosensor that can detect the onset of gestational diabetes with 95% accuracy. Demonstrated by a team led by Carlos Salomon at the University of Queensland, the superparamagnetic “nanoflower” sensor could enable doctors to detect a variety of complications in the early stages of pregnancy.
Many complications in pregnancy can have profound and lasting effects on both the mother and the developing foetus. Today, these conditions are detected using methods such as blood tests, ultrasound screening and blood pressure monitoring. In many cases, however, their sensitivity is severely limited in the earliest stages of pregnancy.
“Currently, most pregnancy complications cannot be identified until the second or third trimester, which means it can sometimes be too late for effective intervention,” Salomon explains.
To tackle this challenge, Salomon and his colleagues are investigating the use of specially engineered nanoparticles to isolate and detect biomarkers in the blood associated with complications in early pregnancy. Specifically, they aim to detect the protein molecules carried by extracellular vesicles (EVs) – tiny, membrane-bound particles released by the placenta, which play a crucial role in cell signalling.
In their previous research, the team pioneered the development of superparamagnetic nanostructures that selectively bind to specific EV biomarkers. Superparamagnetism occurs in sufficiently small ferromagnetic nanoparticles, whose magnetization randomly flips direction under thermal fluctuations. When proteins bind to the surfaces of these nanostructures, their magnetic response changes detectably, providing the team with a reliable EV sensor.
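The mechanism is described only qualitatively here, but the size dependence of this flipping is captured by the standard Néel relaxation time for a single-domain particle (a textbook relation, not a figure from the study):

$$\tau_N = \tau_0 \exp\!\left(\frac{KV}{k_B T}\right)$$

where $\tau_0 \sim 10^{-9}$ s is an attempt time, $K$ is the magnetic anisotropy constant, $V$ is the particle volume and $k_B T$ is the thermal energy. When $KV$ is comparable to $k_B T$, the magnetization flips many times during a measurement and the particle shows no net moment in zero field – the defining signature of superparamagnetism.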
“This technology has been developed using nanomaterials to detect biomarkers at low concentrations,” explains co-author Mostafa Masud. “This is what makes our technology more sensitive than current testing methods, and why it can pick up potential pregnancy complications much earlier.”
Previous versions of the sensor used porous nanocubes that efficiently captured EVs carrying a key placental protein named PLAP. By detecting unusual levels of PLAP in the blood of pregnant women, this approach enabled the researchers to detect complications far more easily than with existing techniques. However, the method generally required detection times lasting several hours, making it unsuitable for on-site screening.
In their latest study, reported in Science Advances, Salomon’s team started with a deeper analysis of the proteins carried by EVs in these blood samples. Through advanced computer modelling, they discovered that complications can be linked to changes in the relative abundance of PLAP and another placental protein, CD9.
Based on these findings, they developed a new superparamagnetic nanosensor capable of detecting both biomarkers simultaneously. Their design features flower-shaped nanostructures made of nickel ferrite, which were embedded into specialized testing strips to boost their sensitivity even further.
The researchers used the sensor to analyse blood samples collected from 201 pregnant women at 11 to 13 weeks’ gestation. “We detected possible complications, such as preterm birth, gestational diabetes and preeclampsia, which is high blood pressure during pregnancy,” Salomon describes. For gestational diabetes, the sensor demonstrated 95% sensitivity in identifying at-risk cases, and 100% specificity in ruling out healthy cases.
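For readers unfamiliar with the terminology, the two figures of merit have their usual definitions (these are definitions, not additional data from the study):

$$\text{sensitivity} = \frac{TP}{TP + FN}, \qquad \text{specificity} = \frac{TN}{TN + FP}$$

where $TP$, $FN$, $TN$ and $FP$ are the numbers of true positives, false negatives, true negatives and false positives. In other words, 95% sensitivity means the sensor flagged 95% of the women who went on to develop gestational diabetes, while 100% specificity means it raised no false alarms among those who did not.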
Based on these results, the researchers are hopeful that further refinements to their nanoflower sensor could lead to a new generation of EV protein detectors, enabling the early diagnosis of a wide range of pregnancy complications.
“With this technology, pregnant women will be able to seek medical intervention much earlier,” Salomon says. “This has the potential to revolutionize risk assessment and improve clinical decision-making in obstetric care.”
In this episode of the Physics World Weekly podcast, we explore how computational physics is being used to develop new quantum materials; and we look at how ultrasound can help detect breast cancer.
Our first guest is Bhaskaran Muralidharan, who leads the Computational Nanoelectronics & Quantum Transport Group at the Indian Institute of Technology Bombay. In a conversation with Physics World’s Hamish Johnston, he explains how computational physics is being used to develop new materials and devices for quantum science and technology. He also shares his personal perspective on quantum physics in this International Year of Quantum Science and Technology.
Our second guest is Daniel Sarno of the UK’s National Physical Laboratory, who is an expert in the medical uses of ultrasound. In a conversation with Physics World’s Tami Freeman, Sarno explains why conventional mammography can struggle to detect cancer in patients with higher density breast tissue. This is a particular problem because women with such tissue are at higher risk of developing the disease. To address this problem, Sarno and colleagues have developed an ultrasound technique for measuring tissue density and are commercializing it via a company called sona.
Bhaskaran Muralidharan is an editorial board member on Materials for Quantum Technology. The journal is produced by IOP Publishing, which also brings you Physics World.
A counterintuitive result from Einstein’s special theory of relativity has finally been verified more than 65 years after it was predicted. The prediction states that objects moving near the speed of light will appear rotated to an external observer, and physicists in Austria have now observed this experimentally using a laser and an ultrafast stop-motion camera.
A central postulate of special relativity is that the speed of light is the same in all reference frames. An observer who sees an object travelling close to the speed of light and makes simultaneous measurements of its front and back (in the direction of travel) will therefore find that, because photons coming from each end of the object both travel at the speed of light, the object is measurably shorter than it would be for an observer in the object’s reference frame. This is the long-established phenomenon of Lorentz contraction.
In 1959, however, two physicists, James Terrell and the future Nobel laureate Roger Penrose, independently noted something else. If the object has any significant optical depth relative to its length – in other words, if its extension parallel to the observer’s line of sight is comparable to its extension perpendicular to this line of sight, as is the case for a cube or a sphere – then photons from the far side of the object (from the observer’s perspective) will take longer to reach the observer than photons from its near side. Hence, if a camera takes an instantaneous snapshot of the moving object, it will collect photons from the far side that were emitted earlier at the same time as it collects photons from the near side that were emitted later.
This time difference stretches the image out, making the object appear longer even as Lorentz contraction makes its measured length shorter. Because the stretching and the contraction cancel out, the photographed object will not appear to change length at all.
But that isn’t the whole story. For the cancellation to work, the photons reaching the observer from the part of the object facing its direction of travel must have been emitted later than the photons that come from its trailing edge. This is because photons from the far and back sides come from parts of the object that would normally be obscured by the front and near sides. However, because the object moves in the time it takes photons to propagate, it creates a clear passage for trailing-edge photons to reach the camera.
The cumulative effect, Terrell and Penrose showed, is that instead of appearing to contract – as one would naïvely expect – a three-dimensional object photographed travelling at nearly the speed of light will appear rotated.
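The cancellation can be made explicit with the textbook back-of-the-envelope argument for a cube of side $\ell$ viewed from far away at closest approach (a standard illustration, not a calculation from the Vienna paper). Moving transversely at speed $v = \beta c$, the cube’s near face appears Lorentz contracted, while its normally hidden trailing face – seen via photons emitted a time $\ell/c$ earlier – appears with width $\beta\ell$:

$$\ell\beta + \ell\sqrt{1-\beta^{2}} = \ell\,(\sin\theta + \cos\theta), \qquad \sin\theta = \beta$$

This is exactly the silhouette of a stationary cube rotated by $\theta = \arcsin(v/c)$ – the Terrell–Penrose rotation.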
The Terrell effect in the lab
While multiple computer models have been constructed to illustrate this “Terrell effect” rotation, it has largely remained a thought experiment. In the new work, however, Peter Schattschneider of the Technical University of Vienna and colleagues realized it in an experimental setup. To do this, they shone pulsed laser light onto one of two moving objects: a sphere or a cube. The laser pulses were synchronized to a picosecond camera that collected light scattered off the object.
The researchers programmed the camera to produce a series of images at each position of the moving object. They then allowed the object to move to the next position and, when the laser pulsed again, recorded another series of ultrafast images. By stitching together images recorded by the camera in response to different laser pulses, the researchers were able to, in effect, reduce the speed of light to less than 2 m/s.
When they did so, they observed that the object rotated rather than contracted, just as Terrell and Penrose predicted. While their results did deviate somewhat from theoretical predictions, this was unsurprising given that the predictions rest on certain assumptions. One of these is that the incoming rays of light should be parallel when they reach the observer, which is only true if the distance from object to observer is infinite. Another is that each image should be recorded instantaneously, whereas the shutter speed of real cameras is inevitably finite.
Because their research is awaiting publication by a journal with an embargo policy, Schattschneider and colleagues were unavailable for comment. However, the Harvard University astrophysicist Avi Loeb, who suggested in 2017 that the Terrell effect could have applications for measuring exoplanet masses, is impressed: “What [the researchers] did here is a very clever experiment where they used very short pulses of light from an object, then moved the object, and then looked again at the object and then put these snapshots together into a movie – and because it involves different parts of the body reflecting light at different times, they were able to get exactly the effect that Terrell and Penrose envisioned,” he says. Though Loeb notes that there’s “nothing fundamentally new” in the work, he nevertheless calls it “a nice experimental confirmation”.
The research is available on the arXiv pre-print server.
The integrity of science could be threatened by publishers changing scientific papers after they have been published – but without making any formal public notification. That’s the verdict of a new study by an international team of researchers, who dub such changes “stealth corrections”. They want publishers to publicly log all changes that are made to published scientific research (Learned Publishing 38 e1660).
When corrections are made to a paper after publication, it is standard practice for a notice to be added to the article explaining what has been changed and why. This transparent record keeping is designed to retain trust in the scientific record. But last year, René Aquarius, a neurosurgery researcher at Radboud University Medical Center in the Netherlands, noticed this does not always happen.
After spotting an issue with an image in a published paper, he raised concerns with the authors, who acknowledged them and stated that they were “checking the original data to figure out the problem” and would keep him updated. A month later, however, Aquarius was surprised to see that the figure had been updated without a correction notice stating that the paper had been changed.
Teaming up with colleagues from Belgium, France, the UK and the US, Aquarius began to identify and document similar stealth corrections. They did so by recording instances that they and other “science sleuths” had already found and by searching online for terms such as “no erratum”, “no corrigendum” and “stealth” on PubPeer – an online platform where users discuss and review scientific publications.
Sustained vigilance
The researchers define a stealth correction as at least one post-publication change made to a scientific article without a correction notice or any other indication that the publication has been temporarily or permanently altered. The researchers identified 131 stealth corrections spread across 10 scientific publishers and in different fields of research. In 92 of the cases, the stealth correction involved a change in the content of the article, such as to figures, data or text.
The remaining unrecorded changes covered three categories: “author information” such as the addition of authors or changes in affiliation; “additional information”, including edits to ethics and conflict of interest statements; and “the record of editorial process”, for instance alterations to editor details and publication dates. “For most cases, we think that the issue was big enough to have a correction notice that informs the readers what was happening,” Aquarius says.
After the authors began drawing attention to the stealth corrections, five of the papers received an official correction notice, nine were given expressions of concern, 17 reverted to the original version and 11 were retracted. Aquarius says he believes it is “important” that readers know what has happened to a paper “so they can make up their own mind whether they want to trust [it] or not”.
The researchers would now like to see publishers implementing online correction logs that make it impossible to change anything in a published article without it being transparently reported, however small the edit. They also say that clearer definitions and guidelines are required concerning what constitutes a correction and needs a correction notice.
“We need to have sustained vigilance in the scientific community to spot these stealth corrections and also register them publicly, for example on PubPeer,” Aquarius says.
The story begins with the startling event that gives the book its unusual moniker: the firing of a Colt revolver in the famous London cathedral in 1951. A similar experiment was also performed in the Royal Festival Hall in the same year (see above photo). Fortunately, this was simply a demonstration for journalists of an experiment to understand and improve the listening experience in a space notorious for its echo and other problematic acoustic features.
St Paul’s was completed in 1711 and Smyth, a historian of architecture, science and construction at the University of Cambridge in the UK, explains that until the turn of the last century, the only way to evaluate the quality of sound in such a building was by ear. The book then reveals how this changed. Over five decades of innovative experiments, scientists and architects built a quantitative understanding of how a building’s shape, size and interior furnishings determine the quality of speech and music through reflection and absorption of sound waves.
We are first taken back to the dawn of the 20th century and shown how the evolution of architectural acoustics as a scientific field was driven by a small group of dedicated researchers. These include the architect and pioneering acoustician Hope Bagenal, along with several physicists, notably the Harvard-based US physicist Wallace Clement Sabine.
Details of Sabine’s career, alongside those of Bagenal, whose personal story forms the backbone for much of the book, deftly put a human face on the research that transformed these public spaces. Perhaps Sabine’s most significant contribution was the derivation of a formula to predict the time taken for sound to fade away in a room. Known as the “reverberation time”, this became a foundation of architectural acoustics, and his mathematical work still forms the basis for the field today.
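The review does not quote the formula, but Sabine’s result – still the starting point for room acoustics – can be stated in one line (the standard form, not a passage from the book):

$$T_{60} \approx \frac{0.161\,V}{\sum_i S_i\,\alpha_i}\ \text{seconds}$$

where $V$ is the room volume in m³ and each surface of area $S_i$ (in m²) has absorption coefficient $\alpha_i$; $T_{60}$ is the time for sound to decay by 60 dB. A 10,000 m³ concert hall with 1000 m² of effective absorption, for example, would ring on for roughly 1.6 s.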
The presence of people, objects and reflective or absorbing surfaces all affect a room’s acoustics. Smyth describes how materials ranging from rugs and timber panelling to specially developed acoustic plaster and tiles have all been investigated for their acoustic properties. She also vividly details the venues where acoustics interventions were added – such as the reflective teak flooring and vast murals painted on absorbent felt in the Henry Jarvis Memorial Hall of the Royal Institute of British Architects in London.
Other locations featured include the Royal Albert Hall, Abbey Road Studios, White Rock Pavilion at Hastings, and the Assembly Chamber of the Legislative Building in New Delhi, India. Temporary structures and spaces for musical performance are highlighted too. These include the National Gallery while it was cleared of paintings during the Second World War and the triumph of acoustic design that was the Glasgow Empire Exhibition concert hall – built for the 1938 event and sadly dismantled that same year.
Unsurprisingly, much of this acoustic work was either punctuated or heavily influenced by the two world wars. While in the trenches during the First World War, Bagenal wrote a journal paper on cathedral acoustics that detailed his pre-war work at St Paul’s Cathedral, Westminster Cathedral and Westminster Abbey. His paper discussed timbre, resonant frequency “and the effects of interference and delay on clarity and harmony”.
In 1916, back in England recovering from a shellfire injury, Bagenal started what would become a long-standing research collaboration with the commandant of the hospital where he was recuperating – who happened to be Alex Wood, a physics lecturer at Cambridge. Equally fascinating is hearing about the push in the wake of the First World War for good speech acoustics in public spaces used for legislative and diplomatic purposes.
Smyth also relates tales of the wrangling that sometimes took place over funding for acoustic experiments on public buildings, and how, as the 20th century progressed, companies specializing in acoustic materials sprang up – and in some cases made dubious claims about the merits of their products. Meanwhile, new technologies such as tape recorders and microphones helped bring a more scientific approach to architectural acoustics research.
The author concludes by describing how the acoustic research from the preceding decades influenced the auditorium design of the Royal Festival Hall on the South Bank in London, which, as Smyth states, was “the first building to have been designed from the outset as a manifestation of acoustic science”.
As evidenced by the copious notes, the wealth of contemporary quotes, and the captivating historical photos and excerpts from archive documents, this book is well-researched. But while I enjoyed the pace and found myself hooked into the story, I found the text repetitive in places, and felt that more details about the physics of acoustics would have enhanced the narrative.
But these are minor grumbles. Overall Smyth paints an evocative picture, transporting us into these legendary auditoria. I have always found it a rather magical experience attending concerts at the Royal Albert Hall. Now, thanks to this book, the next time I have that pleasure I will do so with a far greater understanding of the role physics and physicists played in shaping the music I hear. For me at least, listening will never be quite the same again.
2024 Manchester University Press 328pp £25.00/$36.95
As service lifetimes of electric vehicle (EV) and grid storage batteries continually improve, it has become increasingly important to understand how Li-ion batteries perform after extensive cycling. Using a combination of spatially resolved synchrotron x-ray diffraction and computed tomography, the complex kinetics and spatially heterogeneous behavior of extensively cycled cells can be mapped and characterized under both near-equilibrium and non-equilibrium conditions.
This webinar shows examples of commercial cells with thousands (even tens of thousands) of cycles over many years. The behaviour of such cells can be surprisingly complex and spatially heterogeneous, requiring a different approach to analysis and modelling than what is typically used in the literature. Using this approach, we investigate the long-term behaviour of Ni-rich NMC cells and examine ways to prevent degradation. This work also showcases the incredible durability of single-crystal cathodes, which show very little evidence of mechanical or kinetic degradation after more than 20,000 cycles – the equivalent of driving an EV for 8 million km!
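As a rough sanity check of that headline figure (the abstract does not state the assumption, so the per-cycle range here is ours), the equivalence follows from assuming roughly 400 km of driving per full charge–discharge cycle:

$$20\,000\ \text{cycles} \times 400\ \text{km/cycle} = 8\times 10^{6}\ \text{km}$$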
Toby Bond
Toby Bond is a senior scientist in the Industrial Science group at the Canadian Light Source (CLS), Canada’s national synchrotron facility. A specialist in x-ray imaging and diffraction, he focuses on in-situ and operando analysis of batteries and fuel cells for industry clients of the CLS. Bond is an electrochemist by training, who completed his MSc and PhD in Jeff Dahn’s laboratory at Dalhousie University with a focus on developing methods and instrumentation to characterize long-term degradation in Li-ion batteries.
The Superconducting Quantum Materials and Systems (SQMS) Center, led by Fermi National Accelerator Laboratory (Chicago, Illinois), is on a mission “to develop beyond-the-state-of-the-art quantum computers and sensors applying technologies developed for the world’s most advanced particle accelerators”. SQMS director Anna Grassellino talks to Physics World about the evolution of a unique multidisciplinary research hub for quantum science, technology and applications.
What’s the headline take on SQMS?
Established as part of the US National Quantum Initiative (NQI) Act of 2018, SQMS is one of the five National Quantum Information Science Research Centers run by the US Department of Energy (DOE). With funding of $115m through its initial five-year funding cycle (2020-25), SQMS represents a coordinated, at-scale effort – comprising 35 partner institutions – to address pressing scientific and technological challenges for the realization of practical quantum computers and sensors, as well as exploring how novel quantum tools can advance fundamental physics.
Our mission is to tackle one of the biggest cross-cutting challenges in quantum information science: the lifetime of superconducting quantum states – also known as the coherence time (the length of time that a qubit can effectively store and process information). Understanding and mitigating the physical processes that cause decoherence – and, by extension, limit the performance of superconducting qubits – is critical to the realization of practical and useful quantum computers and quantum sensors.
How is the centre delivering versus the vision laid out in the NQI?
SQMS has brought together an outstanding group of researchers who, collectively, have utilized a suite of enabling technologies from Fermilab’s accelerator science programme – and from our network of partners – to realize breakthroughs in qubit chip materials and fabrication processes; design and development of novel quantum devices and architectures; as well as the scale-up of complex quantum systems. Central to this endeavour are superconducting materials, superconducting radiofrequency (SRF) cavities and cryogenic systems – all workhorse technologies for particle accelerators employed in high-energy physics, nuclear physics and materials science.
Collective endeavour At the core of SQMS success are top-level scientists and engineers leading the centre’s cutting-edge quantum research programmes. From left to right: Alexander Romanenko, Silvia Zorzetti, Tanay Roy, Yao Lu, Anna Grassellino, Akshay Murthy, Roni Harnik, Hank Lamm, Bianca Giaccone, Mustafa Bal, Sam Posen. (Courtesy: Hannah Brumbaugh/Fermilab)
Take our research on decoherence channels in quantum devices. SQMS has made significant progress in the fundamental science and mitigation of losses in the oxides, interfaces, substrates and metals that underpin high-coherence qubits and quantum processors. These advances – the result of wide-ranging experimental and theoretical investigations by SQMS materials scientists and engineers – led, for example, to the demonstration of transmon qubits (a type of charge qubit exhibiting reduced sensitivity to noise) with systematic improvements in coherence, record-breaking lifetimes of over a millisecond, and reductions in performance variation.
How are you building on these breakthroughs?
First of all, we have worked on technology transfer. By developing novel chip fabrication processes together with quantum computing companies, we have helped our industry partners achieve up to a 2.5x improvement in the error performance of their superconducting chip-based quantum processors.
We have combined these qubit advances with Fermilab’s ultrahigh-coherence 3D SRF cavities: advancing our efforts to build a cavity-based quantum processor and, in turn, demonstrating the longest-lived superconducting multimode quantum processor unit ever built (coherence times in excess of 20 ms). These systems open the path to a more powerful qudit-based quantum computing approach. (A qudit is a multilevel quantum unit that can occupy more than two states.) What’s more, SQMS has already put these novel systems to use as quantum sensors within Fermilab’s particle physics programme – probing for the existence of dark-matter candidates, for example, as well as enabling precision measurements and fundamental tests of quantum mechanics.
Elsewhere, we have been pushing early-stage societal impacts of quantum technologies and applications – including the use of quantum computing methods to enhance data analysis in magnetic resonance imaging (MRI). Here, SQMS scientists are working alongside clinical experts at New York University Langone Health to apply quantum techniques to quantitative MRI, an emerging diagnostic modality that could one day provide doctors with a powerful tool for evaluating tissue damage and disease.
What technologies pursued by SQMS will be critical to the scale-up of quantum systems?
There are several important examples, but I will highlight two of specific note. For starters, there’s our R&D effort to efficiently scale millikelvin-regime cryogenic systems. SQMS teams are currently developing technologies for larger and higher-cooling-power dilution refrigerators. We have designed and prototyped novel systems allowing over 20x higher cooling power, a necessary step to enable the scale-up to thousands of superconducting qubits per dilution refrigerator.
Materials insights The SQMS collaboration is studying the origins of decoherence in state-of-the-art qubits (above) using a raft of advanced materials characterization techniques – among them time-of-flight secondary-ion mass spectrometry, cryo electron microscopy and scanning probe microscopy. With a parallel effort in materials modelling, the centre is building a hierarchy of loss mechanisms that is informing how to fabricate the next generation of high-coherence qubits and quantum processors. (Courtesy: Dan Svoboda/Fermilab)
Also, we are working to optimize microwave interconnects with very low energy loss, taking advantage of SQMS expertise in low-loss superconducting resonators and materials in the quantum regime. (Quantum interconnects are critical components for linking devices together to enable scaling to large quantum processors and systems.)
How important are partnerships to the SQMS mission?
Partnerships are foundational to the success of SQMS. The DOE National Quantum Information Science Research Centers were conceived and built as mini-Manhattan projects, bringing together the power of multidisciplinary and multi-institutional groups of experts. SQMS is a leading example of building bridges across the “quantum ecosystem” – with other national and federal laboratories, with academia and industry, and across agency and international boundaries.
In this way, we have scaled up unique capabilities – multidisciplinary know-how, infrastructure and a network of R&D collaborations – to tackle the decoherence challenge and to harvest the power of quantum technologies. A case study in this regard is Ames National Laboratory, a specialist DOE centre for materials science and engineering on the campus of Iowa State University.
Ames is a key player in a coalition of materials science experts – coordinated by SQMS – seeking to unlock fundamental insights about qubit decoherence at the nanoscale. Through Ames, SQMS and its partners get access to powerful analytical tools – modalities like terahertz spectroscopy and cryo transmission electron microscopy – that aren’t routinely found in academia or industry.
What are the drivers for your engagement with the quantum technology industry?
The SQMS strategy for industry engagement is clear: to work hand-in-hand to solve technological challenges utilizing complementary facilities and expertise; to abate critical performance barriers; and to bring bidirectional value. I believe that even large companies do not have the ability to achieve practical quantum computing systems working exclusively on their own. The challenges at hand are vast and often require R&D partnerships among experts across diverse and highly specialized disciplines.
I also believe that DOE National Laboratories – given their depth of expertise and ability to build large-scale and complex scientific instruments – are, and will continue to be, key players in the development and deployment of the first useful and practical quantum computers. This means not only as end-users, but as technology developers. Our vision at SQMS is to lay the foundations of how we are going to build these extraordinary machines in partnership with industry. It’s about learning to work together and leveraging our mutual strengths.
How do Rigetti and IBM, for example, benefit from their engagement with SQMS?
The partnership with IBM, although more recent, is equally significant. Together with IBM researchers, we are interested in developing quantum interconnects – including the development of high-Q cables to make them less lossy – for the high-fidelity connection and scale-up of quantum processors into large and useful quantum computing systems.
At the same time, SQMS scientists are exploring simulations of problems in high-energy physics and condensed-matter physics using quantum computing cloud services from Rigetti and IBM.
Presumably, similar benefits accrue to suppliers of ancillary equipment to the SQMS quantum R&D programme?
Correct. We challenge our suppliers of advanced materials and fabrication equipment to go above and beyond, working closely with them on continuous improvement and new product innovation. In this way, for example, our suppliers of silicon and sapphire substrates and nanofabrication platforms – key technologies for advanced quantum circuits – benefit from SQMS materials characterization tools and fundamental physics insights that would simply not be available in isolation. These technologies are still at a stage where we need fundamental science to help define the ideal materials specifications and standards.
We are also working with companies developing quantum control boards and software, collaborating on custom solutions to unique hardware architectures such as the cavity-based qudit platforms in development at Fermilab.
How is your team building capacity to support quantum R&D and technology innovation?
We’ve pursued a twin-track approach to the scaling of SQMS infrastructure. On the one hand, we have augmented – very successfully – a network of pre-existing facilities at Fermilab and at SQMS partners, spanning accelerator technologies, materials science and cryogenic engineering. In aggregate, this covers hundreds of millions of dollars’ worth of infrastructure that we have re-employed or upgraded for studying quantum devices, including access to a host of leading-edge facilities via our R&D partners – for example, microkelvin-regime quantum platforms at Royal Holloway, University of London, and underground quantum testbeds at INFN’s Gran Sasso Laboratory.
Thinking big in quantum The SQMS Quantum Garage (above) houses a suite of R&D testbeds to support granular studies of superconducting qubits, quantum processors, high-coherence quantum sensors and quantum interconnects. (Courtesy: Ryan Postel/Fermilab)
In parallel, we have invested in new and dedicated infrastructure to accelerate our quantum R&D programme. The Quantum Garage here at Fermilab is the centrepiece of this effort: a 560 square-metre laboratory with a fleet of six additional dilution refrigerators for cryogenic cooling of SQMS experiments as well as test, measurement and characterization of superconducting qubits, quantum processors, high-coherence quantum sensors and quantum interconnects.
What is the vision for the future of SQMS?
SQMS is putting together an exciting proposal in response to a DOE call for the next five years of research. Our efforts on coherence will remain paramount. We have come a long way, but the field still needs to make substantial advances in terms of noise reduction of superconducting quantum devices. There’s great momentum and we will continue to build on the discoveries made so far.
We have also demonstrated significant progress regarding our 3D SRF cavity-based quantum computing platform. So much so that we now have a clear vision of how to implement a mid-scale prototype quantum computer with over 50 qudits in the coming years. To get us there, we will be laying out an exciting SQMS quantum computing roadmap by the end of 2025.
It’s equally imperative to address the scalability of quantum systems. Together with industry, we will work to demonstrate practical and economically feasible approaches to be able to scale up to large quantum computing data centres with millions of qubits.
Finally, SQMS scientists will work on exploring early-stage applications of quantum computers, sensors and networks. Technology will drive the science, science will push the technology – a continuous virtuous cycle that I’m certain will lead to plenty more ground-breaking discoveries.
How SQMS is bridging the quantum skills gap
Education, education, education SQMS hosted the inaugural US Quantum Information Science (USQIS) School in summer 2023. Held annually, the USQIS is organized in conjunction with other DOE National Laboratories, academia and industry. (Courtesy: Dan Svoboda/Fermilab)
As with its efforts in infrastructure and capacity-building, SQMS is addressing quantum workforce development on multiple fronts.
Across the centre, Grassellino and her management team have recruited upwards of 150 technical staff and early-career researchers over the past five years to accelerate the SQMS R&D effort. “These ‘boots on the ground’ are a mix of PhD students, postdoctoral researchers plus senior research and engineering managers,” she explains.
Another significant initiative was launched in summer 2023, when SQMS hosted nearly 150 delegates at Fermilab for the inaugural US Quantum Information Science (USQIS) School – now an annual event organized in conjunction with other National Laboratories, academia and industry. The long-term goal is to develop the next generation of quantum scientists, engineers and technicians by sharing SQMS know-how and experimental skills in a systematic way.
“The prioritization of quantum education and training is key to sustainable workforce development,” notes Grassellino. With this in mind, she is currently in talks with academic and industry partners about an SQMS-developed master’s degree in quantum engineering. Such a programme would reinforce the centre’s already diverse internship initiatives, with graduate students benefiting from dedicated placements at SQMS and its network partners.
“Wherever possible, we aim to assign our interns with co-supervisors – one from a National Laboratory, say, another from industry,” adds Grassellino. “This ensures the learning experience shapes informed decision-making about future career pathways in quantum science and technology.”
When a mantis shrimp uses shock waves to strike and kill its prey, how does it prevent those shock waves from damaging its own tissues? Researchers at Northwestern University in the US have answered this question by identifying a structure within the shrimp that filters out harmful frequencies. Their findings, which they obtained by using ultrasonic techniques to investigate surface and bulk wave propagation in the shrimp’s dactyl club, could lead to novel advanced protective materials for military and civilian applications.
Dactyl clubs are hammer-like structures located on each side of a mantis shrimp’s body. They store energy in elastic structures similar to springs that are latched in place by tendons. When the shrimp contracts its muscles, the latch releases, freeing the stored energy and propelling the club forward with a peak force of up to 1500 N.
This huge force (relative to the animal’s size) creates stress waves in both the shrimp’s target – typically a hard-shelled animal such as a crab or mollusc – and the dactyl club itself, explains biomechanical engineer Horacio Dante Espinosa, who led the Northwestern research effort. The club’s punch also creates bubbles that rapidly collapse to produce shockwaves in the megahertz range. “The collapse of these bubbles (a process known as cavitation collapse), which takes place in just nanoseconds, releases intense bursts of energy that travel through the target and shrimp’s club,” he explains. “This secondary shockwave effect makes the shrimp’s strike even more devastating.”
Protective phononic armour
So how do the shrimp’s own soft tissues escape damage? To answer this question, Espinosa and colleagues studied the animal’s armour using transient grating spectroscopy (TGS) and asynchronous optical sampling (ASOPS). These ultrasonic techniques respectively analyse how stress waves propagate through a material and characterize the material’s microstructure. In this work, Espinosa and colleagues used them to provide high-resolution, frequency-dependent wave propagation characteristics that previous studies had not investigated experimentally.
The team identified three distinct regions in the shrimp’s dactyl club. The outermost layer consists of a hard hydroxyapatite coating approximately 70 μm thick, which is durable and resists damage. Beneath this, an approximately 500 μm-thick layer of mineralized chitin fibres arranged in a herringbone pattern enhances the club’s fracture resistance. Deeper still, Espinosa explains, is a region that features twisted fibre bundles organized in a corkscrew-like arrangement known as a Bouligand structure. Within this structure, each successive layer is rotated relative to its neighbours, giving it a unique and crucial role in controlling how stress waves propagate through the shrimp.
“Our key finding was the existence of phononic bandgaps (through which waves within a specific frequency range cannot travel) in the Bouligand structure,” Espinosa explains. “These bandgaps filter out harmful stress waves so that they do not propagate back into the shrimp’s club and body. They thus preserve the club’s integrity and protect soft tissue in the animal’s appendage.”
The team also employed finite element simulations incorporating so-called Bloch-Floquet analyses and graded mechanical properties to understand the phononic bandgap effects. The most surprising result, Espinosa tells Physics World, was the formation of a flat branch around the 450 to 480 MHz range, which corresponds to the frequencies generated by cavitation-bubble collapse during the club’s impact.
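The team’s model is a graded, three-dimensional finite element one, but the origin of a phononic bandgap can be illustrated with the textbook one-dimensional analogue of a periodically layered medium (an illustration, not the authors’ calculation). For alternating layers of thicknesses $d_1$ and $d_2$, sound speeds $c_i$ and acoustic impedances $Z_i = \rho_i c_i$, Bloch–Floquet periodicity gives the dispersion relation

$$\cos(qd) = \cos(k_1 d_1)\cos(k_2 d_2) - \tfrac{1}{2}\!\left(\frac{Z_1}{Z_2} + \frac{Z_2}{Z_1}\right)\sin(k_1 d_1)\sin(k_2 d_2)$$

where $d = d_1 + d_2$, $k_i = \omega/c_i$ and $q$ is the Bloch wavevector. Whenever the right-hand side exceeds 1 in magnitude there is no real solution for $q$, so waves at that frequency cannot propagate – a bandgap. In the dactyl club, the rotating layers of the Bouligand structure play the role of the periodic stack.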
Evolution and its applications
For Espinosa and his colleagues, a key goal of their research is to understand how evolution leads to natural composite materials with unique photonic, mechanical and thermal properties. In particular, they seek to uncover how hierarchical structures in natural materials and the chemistry of their constituents produce emergent mechanical properties. “The mantis shrimp’s dactyl club is an example of how evolution leads to materials capable of resisting extreme conditions,” Espinosa says. “In this case, it is the violent impacts the animal uses for predation or protection.”
The properties of the natural “phononic shield” unearthed in this work might inspire advanced protective materials for both military and civilian applications, he says. Examples could include the design of helmets, personnel armour, and packaging for electronics and other sensitive devices.
In this study, which is described in Science, the researchers analysed two-dimensional simulations of wave behaviour. Future research, they say, should focus on more complex three-dimensional simulations to fully capture how the club’s structure interacts with shock waves. “Designing aquatic experiments with state-of-the-art instrumentation would also allow us to investigate how phononic properties function in submerged underwater conditions,” says Espinosa.
The team would also like to use biomimetics to make synthetic metamaterials based on the insights gleaned from this work.
From its sites in South Africa and Australia, the Square Kilometre Array (SKA) Observatory last year achieved “first light” – producing its first-ever images. When its planned 197 dishes and 131,072 antennas are fully operational, the SKA will be the largest and most sensitive radio telescope in the world.
Under the umbrella of a single observatory, the telescopes at the two sites will work together to survey the cosmos. The Australian side, known as SKA-Low, will focus on low frequencies, while South Africa’s SKA-Mid will observe middle-range frequencies. The £1bn telescopes, which are projected to begin making science observations in 2028, were built to shed light on some of the most intractable problems in astronomy, such as how galaxies form, the nature of dark matter, and whether life exists on other planets.
Three decades in the making, the SKA will stand on the shoulders of many smaller experiments and telescopes – a suite of so-called “precursors” and “pathfinders” that have trialled new technologies and shaped the instrument’s trajectory. The 15 pathfinder experiments dotted around the planet are exploring different aspects of SKA science.
Meanwhile, on the SKA sites in Australia and South Africa, there are four precursor telescopes – MeerKAT and HERA in South Africa, and the Australian SKA Pathfinder (ASKAP) and the Murchison Widefield Array (MWA) in Australia. These precursors are weathering the arid local conditions and are already broadening scientists’ understanding of the universe.
“The SKA was the big, ambitious end game that was going to take decades,” says Steven Tingay, director of the MWA based in Bentley, Australia. “Underneath that umbrella, a huge number of already fantastic things have been done with the precursors, and they’ve all been investments that have been motivated by the path to the SKA.”
Even as technology and science testbeds, “they have far surpassed what anyone reasonably expected of them”, adds Emma Chapman, a radio astronomer at the University of Nottingham, UK.
MeerKAT: glimpsing the heart of the Milky Way
In 2018, radio astronomers in South Africa were scrambling to pull together an image for the inauguration of the 64-dish MeerKAT radio telescope. MeerKAT will eventually form the heart of SKA-Mid, picking up frequencies between 350 megahertz and 15.4 gigahertz, and the researchers wanted to show what it was capable of.
As you’ve never seen it before A radio image of the centre of the Milky Way taken by the MeerKAT telescope. The elongated radio filaments visible emanating from the heart of the galaxy are 10 times more numerous than in any previous image. (Courtesy: I. Heywood, SARAO)
Like all the SKA precursors, MeerKAT is an interferometer, with many dishes acting like a single giant instrument. MeerKAT’s dishes stand about three storeys high, with a diameter of 13.5 m, and the largest distance between dishes is about 8 km. This is part of what gives the interferometer its power: longer baselines between dishes increase the telescope’s angular resolution, allowing it to pick out finer detail on the sky.
Additional dishes will be integrated into the interferometer to form SKA-Mid. The new dishes will be larger (with diameters of 15 m) and further apart (with baselines of up to 150 km), making it much more sensitive than MeerKAT on its own. Nevertheless, using just the provisional data from MeerKAT, the researchers were able to mark the unveiling of the telescope with the clearest radio image yet of our galactic centre.
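For a sense of scale, an interferometer’s diffraction-limited resolution is roughly the observing wavelength divided by its longest baseline (a standard rule of thumb, with illustrative numbers rather than official specifications):

$$\theta \approx \frac{\lambda}{B}$$

At the 21 cm hydrogen line ($\lambda \approx 0.21$ m), MeerKAT’s roughly 8 km baselines give $\theta \approx 2.6\times10^{-5}$ rad, or about 5 arcseconds, while SKA-Mid’s 150 km baselines sharpen this to roughly 0.3 arcseconds. Sensitivity to faint emission, by contrast, is set mainly by the total collecting area of the dishes.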
Four years later, an international team used the MeerKAT data to produce an even more detailed image of the centre of the Milky Way (ApJL 949 L31). The image (above) shows radio-emitting filaments up to 150 light-years long unspooling from the heart of the galaxy. These structures, whose origin remains unknown, were first observed in 1984, but the new image revealed 10 times more than had ever been seen before.
“We have studied individual filaments for a long time with a myopic view,” Farhad Yusef-Zadeh, an astronomer at Northwestern University in the US and an author on the image paper, said at the time. “Now, we finally see the big picture – a panoramic view filled with an abundance of filaments. This is a watershed in furthering our understanding of these structures.”
The image resembles a “glorious artwork, conveying how bright black holes are in radio waves, but with the busyness of the galaxy going on around it”, says Chapman. “Runaway pulsars, supernovae remnant bubbles, magnetic field lines – it has it all.”
In a different area of astronomy, MeerKAT “has been a surprising new contender in the field of pulsar timing”, says Natasha Hurley-Walker, an astronomer at the Curtin University node of the International Centre for Radio Astronomy Research in Bentley. Pulsars are rotating neutron stars that produce periodic pulses of radiation, in some cases hundreds of times a second. MeerKAT’s sensitivity, combined with its precise time-stamping, allows it to accurately map these powerful radio sources.
An experiment called the MeerKAT Pulsar Timing Array has been observing a group of 80 pulsars once a fortnight since 2019 and is using them as “cosmic clocks” to create a map of gravitational-wave sources. “If we see pulsars in the same direction in the sky lose time in a connected way, we start suspecting that it is not the pulsars that are acting funny but rather a gravitational wave background that has interfered,” says Marisa Geyer, an astronomer at the University of Cape Town and a co-author on several papers about the array published last year.
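The “connected way” Geyer describes has a specific expected shape: for an isotropic gravitational-wave background, the correlation between the timing residuals of two pulsars separated by an angle $\zeta$ on the sky follows the Hellings–Downs curve (the standard result used by all pulsar timing arrays, quoted here for context rather than taken from the MeerKAT papers):

$$\Gamma(\zeta) = \frac{1}{2} + \frac{3x}{2}\ln x - \frac{x}{4}, \qquad x = \frac{1-\cos\zeta}{2}$$

Pairs of pulsars close together on the sky should show positively correlated residuals, while pairs roughly 90° apart should be slightly anticorrelated – a pattern that no local noise source can mimic.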
HERA: the first stars and galaxies
When astronomers dreamed up the idea for the SKA about 30 years ago, they wanted an instrument that could not only capture a wide view of the universe but was also sensitive enough to look far back in time. In the first billion years after the Big Bang, the universe cooled enough for hydrogen and helium to form, eventually clumping into stars and galaxies.
When these early stars began to shine, their light stripped electrons from the primordial hydrogen that still populated most of the cosmos – a period of cosmic history known as the Epoch of Reionization. The hydrogen that remained neutral gave off a faint signal – the 21 cm line – and catching glimpses of this ancient radiation, and of its gradual disappearance as reionization proceeded, remains one of the major science goals of the SKA.
Developing methods to identify this primordial hydrogen signal is the job of the Hydrogen Epoch of Reionization Array (HERA) – a collection of hundreds of 14 m dishes, packed closely together as they watch the sky, like bowls made of wire mesh (see image below). They have been specifically designed to observe fluctuations in primordial hydrogen in the low-frequency range of 100 MHz to 200 MHz.
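That frequency band follows directly from the redshift of the 21 cm line, which is emitted at a rest frequency of about 1420 MHz (a standard conversion, not a detail from the HERA papers):

$$\nu_{\rm obs} = \frac{1420\ \text{MHz}}{1+z}$$

so observing at 100–200 MHz probes redshifts of roughly $z \approx 6$–13, when the universe was between a few hundred million and about a billion years old.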
Echoes of the early universe The HERA telescope is listening for the faint signals from the first primordial hydrogen that formed after the Big Bang. (Courtesy: South African Radio Astronomy Observatory (SARAO))
Understanding this mysterious epoch sheds light on how young cosmic objects influenced the formation of larger ones and later seeded other objects in the universe. Scientists using HERA data have already reported the most sensitive power limits on the reionization signal (ApJ 945 124), bringing us closer to pinning down what the early universe looked like and how it evolved, and will eventually guide SKA observations. “It always helps to be able to target things better before you begin to build and operate a telescope,” explains HERA project manager David de Boer, an astronomer at the University of California, Berkeley in the US.
MWA: “unexpected” new objects
Over in Australia, meanwhile, the MWA’s 4096 antennas crouch on the red desert sand like spiders (see image below). This interferometer has a particularly wide-field view because, unlike its mid-frequency precursor cousins, it has no moving parts, allowing it to view large parts of the sky at the same time. Each antenna also contains a low-noise amplifier in its centre, boosting the relatively weak low-frequency signals from space. “In a single observation, you cover an enormous fraction of the sky”, says Tingay. “That’s when you can start to pick up rare events and rare objects.”
Sharp eyes With its wide field of view and low-noise signal amplifiers, the MWA telescope in Australia is poised to spot brief and rare cosmic events, and it has already discovered a new class of mysterious radio transients. (Courtesy: Marianne Annereau, 2015 Murchison Widefield Array (MWA))
Hurley-Walker and colleagues discovered one such object a few years ago – repeated, powerful blasts of radio waves that occurred every 18 minutes and lasted about a minute. These signals were an example of a “radio transient” – astrophysical phenomena that last from milliseconds to years and may repeat or occur just once. Radio transients have been attributed to many sources including pulsars, but the period of this event was much longer than had ever been observed before.
After the researchers first noticed this signal, they followed up with other telescopes and searched archival data from other observatories going back 30 years to confirm the peculiar time scale. “This has spurred observers around the world to look through their archival data in a new way, and now many new similar sources are being discovered,” Hurley-Walker says.
The discovery of new transients, including this one, are “challenging our current models of stellar evolution”, according to Cathryn Trott, a radio astronomer at the Curtin Institute of Radio Astronomy in Bentley, Australia. “No one knows what they are, how they are powered, how they generate radio waves, or even whether they are all the same type of object,” she adds.
This is something that the SKA – both SKA-Mid and SKA-Low – will investigate. The Australian SKA-Low antennas detect frequencies between 50 MHz and 350 MHz. They build on some of the techniques trialled by the MWA, such as the efficacy of using low-frequency antennas and how to combine their received signals into a digital beam. SKA-Low, with its similarly wide field of view, will offer a powerful new perspective on this developing area of astronomy.
ASKAP: giant sky surveys
The 36-dish ASKAP saw first light in 2012, the same year it was decided to split the SKA between Australia and South Africa. ASKAP was part of Australia’s efforts to prove that it could host the massive telescope, but it has since become an important instrument in its own right. Its dishes use a technology called a phased array feed, which allows the telescope to view different parts of the sky simultaneously.
Each dish contains one of these phased array feeds, which consists of 188 receivers arranged like a chessboard. With this technology, ASKAP can produce 36 concurrent beams covering about 30 square degrees of sky. This means it has a wide field of view, says de Boer, who was ASKAP’s inaugural director in 2010. In its first large-area survey, published in 2020, astronomers stitched together 903 images and identified more than 3 million sources of radio emission in the southern sky, many of which were new (PASA 37 e048).
Down under The ASKAP telescope array in Australia was used to demonstrate Australia’s capability to host the SKA. Able to rapidly take wide surveys of the sky, it is also a valuable scientific instrument in its own right, and has made significant discoveries in the study of fast radio bursts. (Courtesy: CSIRO)
Because it can quickly survey large areas of the sky, the telescope has shown itself to be particularly adept at identifying and studying new fast radio bursts (FRBs). Discovered in 2007, FRBs are another kind of radio transient. They have been observed in many galaxies, and though some have been observed to repeat, most are detected only once.
This work is also helping scientists to understand one of the universe’s biggest mysteries. For decades, researchers have puzzled over the fact that the ordinary matter we can detect in the universe amounts to only about half of what we know existed after the Big Bang. Because FRB signals are dispersed by this “missing matter”, they can be used to weigh all of the normal matter between us and the distant galaxies hosting the bursts.
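The relation behind this technique is standard radio astronomy rather than anything specific to the ASKAP analysis: the dispersion measure (DM) is the column density of free electrons along the line of sight, and it delays a burst’s arrival more strongly at lower frequencies, so the measured frequency-dependent delay weighs the intervening ionized matter.

$$\mathrm{DM} = \int_0^{d} n_e\,\mathrm{d}l, \qquad \Delta t \approx 4.15\ \mathrm{ms}\,\left(\frac{\mathrm{DM}}{\mathrm{pc\ cm^{-3}}}\right)\left(\frac{\nu}{\mathrm{GHz}}\right)^{-2}$$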
By combing through ASKAP data, researchers in 2020 also discovered a new class of radio sources, which they dubbed “odd radio circles” (PASA 38 e003). These are giant rings of radiation that are observed only in radio waves. Five years later their origins remain a mystery, but some scientists maintain they are flashes from ancient star formation.
The precursors are so important. They’ve given us new questions. And it’s incredibly exciting
Philippa Hartley, SKAO, Manchester
While SKA has many concrete goals, it is these unexpected discoveries that Philippa Hartley, a scientist at the SKAO, based near Manchester, is most excited about. “We’ve got so many huge questions that we’re going to use the SKA to try and answer, but then you switch on these new telescopes, you’re like, ‘Whoa! We didn’t expect that.’” That is why the precursors are so important. “They’ve given us new questions. And it’s incredibly exciting,” she adds.
Trouble on the horizon
As well as pushing the boundaries of astronomy and shaping the design of the SKA, the precursors have made a discovery much closer to home – one that could be a significant issue for the telescope. In a development that SKA’s founders would not have foreseen, the race to fill the skies with constellations of satellites is a problem both for the precursors and for the SKA itself.
Large corporations, including SpaceX in Hawthorne, California, OneWeb in London, UK, and Amazon’s Project Kuiper in Seattle, Washington, have launched more than 6000 communications satellites into space. Many more are planned, including over 12,000 from Shanghai Spacecom Satellite Technology’s Shanghai-based G60 Starlink constellation. These satellites, as well as global positioning satellites, are “photobombing” astronomy observatories and affecting observations across the electromagnetic spectrum.
The wild, wild west Satellite constellations are causing interference with ground-based observatories. (Courtesy: iStock/yucelyilmaz)
ASKAP, MeerKAT and the MWA have all flagged the impact of satellites on their observations. “The likelihood of a beam of a satellite being within the beam of our telescopes is vanishingly small and is easily avoided,” says Robert Braun, SKAO director of science. However, because they are everywhere, these satellites still introduce background radio interference that contaminates observations, he says.
Although the SKA Observatory is engaging with individual companies to devise engineering solutions, “we really can’t be in a situation where we have bespoke solutions with all of these companies”, SKAO director-general Phil Diamond told a side event at the IAU general assembly in Cape Town last year. “That’s why we’re pursuing the regulatory and policy approach so that there are systems in place,” he said. “At the moment, it’s a bit like the wild, wild west and we do need a sheriff to stride into town to help put that required protection in place.”
In this, too, the SKA precursors are charting a path forward, identifying ways to observe even with mega satellite constellations staring down at them. When the full SKA telescopes finally come online in 2028, the discoveries they make will, in large part, be thanks to the telescopes that came before them.
The US firm Firefly Aerospace has claimed to be the first commercial company to achieve “a fully successful soft landing on the Moon”. Yesterday, the company’s Blue Ghost lunar lander touched down on the Moon’s surface in an “upright, stable configuration”. It will now operate for 14 days, during which it will drill into the lunar soil and image a total eclipse from the Moon, in which the Earth blocks the Sun.
Blue Ghost was launched on 15 January from NASA’s Kennedy Space Center in Florida via a SpaceX Falcon 9 rocket. Following a 45-day trip, the craft landed in Mare Crisium, touching down within its 100 m landing target next to a volcanic feature called Mons Latreille.
The mission is carrying 10 NASA instruments, which include a lunar subsurface drill, a sample collector, an X-ray imager and dust-mitigation experiments. “With the hardest part behind us, Firefly looks forward to completing more than 14 days of surface operations, again raising the bar for commercial cislunar capabilities,” notes Shea Ferring, chief technology officer at Firefly Aerospace.
In February 2024 the Houston-based company Intuitive Machines became the first private firm to soft land on the Moon with its Odysseus mission. However, it suffered a few hiccups prior to touchdown and, rather than landing vertically, came to rest at a 30-degree angle, which affected radio-transmission rates.
The Firefly mission is part of NASA’s Commercial Lunar Payload Services initiative, which contracts the private sector to develop missions with the aim of reducing costs.
Firefly’s Blue Ghost Mission 2 is expected to launch next year, when it will aim to land on the far side of the Moon. “With annual lunar missions, Firefly is paving the way for a lasting lunar presence that will help unlock access to the rest of the solar system for our nation, our partners, and the world,” notes Jason Kim, chief executive officer of Firefly Aerospace.
Apart from the usual set of mathematical skills ranging from probability theory and linear algebra to aspects of cryptography, the most valuable skill is the ability to think in a critical and dissecting way. Also, one mustn’t be afraid to go in different directions and connect dots. In my particular case, I was lucky enough that I knew the foundations of quantum physics and the problems that cryptographers were facing and I was able to connect the two. So I would say it’s important to have a good understanding of topics outside your narrow field of interest. Nature doesn’t know that we divided all phenomena into physics, chemistry and biology, but we still put ourselves in those silos and don’t communicate with each other.
Flying high and low “Physics – not just quantum mechanics, but all its aspects – deeply shapes my passion for aviation and scuba diving,” says Artur Ekert. “Experiencing and understanding the world above and below brings me great joy and often clarifies the fine line between adventure and recklessness.” (Courtesy: Artur Ekert)
What do you like best and least about your job?
Least is easy: all the admin aspects of it. Best is meeting wonderful people. That means not only my senior colleagues – I was blessed with wonderful supervisors and mentors – but also the junior colleagues, students and postdocs that I work with. This job is a great excuse to meet interesting people.
What do you know today that you wish you’d known at the start of your career?
That it’s absolutely fine to follow your instincts and your interests without paying too much attention to practicalities. But of course that is a post-factum statement. Maybe you need to pay attention to certain practicalities to get to the comfortable position where you can make the statement I just expressed.
Globular springtails (Dicyrtomina minuta) are small bugs about five millimetres long that can be seen crawling through leaf litter and garden soil. While they do not have wings and cannot fly, they more than make up for it with their ability to hop relatively large heights and distances.
This jumping feat is thanks to a tail-like appendage on their abdomen called a furcula, which is folded in beneath their body, held under tension.
When released, it snaps against the ground in as little as 20 milliseconds, flipping the springtail up to 6 cm into the air and 10 cm horizontally.
Inspired by this ability, researchers modified a cockroach-inspired robot to include a latch-mediated spring actuator, in which potential energy is stored in an elastic element – essentially a robotic fork-like furcula.
Via computer simulations and experiments to control the length of the linkages in the furcula as well as the energy stored in them, the team found that the robot could jump some 1.4 m horizontally, or 23 times its body length – the longest of any existing robot relative to body length.
The work could help design robots that can traverse places that are hazardous to humans.
“Walking provides a precise and efficient locomotion mode but is limited in terms of obstacle traversal,” notes Harvard’s Robert Wood. “Jumping can get over obstacles but is less controlled. The combination of the two modes can be effective for navigating natural and unstructured environments.”
The internal temperature of a building is important – particularly in offices and work environments – for maximizing comfort and productivity. Managing the temperature is also essential for reducing the energy consumption of a building. In the US, buildings account for around 29% of total end-use energy consumption, with more than 40% of this energy dedicated to managing the internal temperature of a building via heating and cooling.
The human body is sensitive to both radiative and convective heat. The convective part revolves around humidity and air temperature, whereas radiative heat depends upon the surrounding surface temperatures inside the building. Understanding both thermal aspects is key for balancing energy consumption with occupant comfort. However, there are not many practical methods available for measuring the impact of radiative heat inside buildings. Researchers from the University of Minnesota Twin Cities have developed an optical sensor that could help solve this problem.
Limitations of thermostats for radiative heat
Room thermostats are used in almost every building today to regulate the internal temperature and improve the comfort levels for the occupants. However, modern thermostats only measure the local air temperature and don’t account for the effects of radiant heat exchange between surfaces and occupants, resulting in suboptimal comfort levels and inefficient energy use.
Finding a way to measure the mean radiant temperature in real time inside buildings could provide a more efficient way of heating the building – leading to more advanced and efficient thermostat controls. Currently, radiant temperature can be measured using either radiometers or black globe sensors. But radiometers are too expensive for commercial use, while black globe sensors are slow, bulky and error-prone in many internal environments.
In search of a new approach, first author Fatih Evren (now at Pacific Northwest National Laboratory) and colleagues used low-resolution, low-cost infrared sensors to measure the longwave mean radiant temperature inside buildings. These sensors eliminate the pan/tilt mechanism (where sensors rotate periodically to measure the temperature at different points and an algorithm determines the surface temperature distribution) required by many other sensors used to measure radiative heat. The new optical sensor also requires 4.5 times less computation power than pan/tilt approaches with the same resolution.
Integrating optical sensors to improve room comfort
The researchers tested infrared thermal array sensors with 32 x 32 pixels in four real-world environments (three living spaces and an office) with different room sizes and layouts. They examined three sensor configurations: one sensor on each of the room’s four walls; two sensors; and a single-sensor setup. The sensors measured the mean radiant temperature for 290 h at internal temperatures of between 18 and 26.8 °C.
The optical sensors capture raw 2D thermal data containing temperature information for adjacent walls, floor and ceiling. To determine surface temperature distributions from these raw data, the researchers used projective homographic transformations – mappings between two different geometric planes. The surfaces of the room were segmented by marking the corners of the room, defining a homography matrix for each surface. Applying the transformations to the raw data then yields the temperature distribution on each surface. These surface temperatures can in turn be used to calculate the mean radiant temperature.
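As a rough illustration of these two steps – not the researchers’ actual pipeline – the sketch below applies a projective homography to map sensor pixels onto a wall plane and then combines surface temperatures into a mean radiant temperature using the standard view-factor-weighted formula. The homography matrix, pixel grid and view factors are illustrative assumptions.

```python
# Minimal sketch (not the researchers' code) of homography mapping and
# mean radiant temperature (MRT) calculation.
import numpy as np

def apply_homography(H, pixels):
    """Map Nx2 pixel coordinates to plane coordinates with a 3x3 homography."""
    pts = np.hstack([pixels, np.ones((len(pixels), 1))])   # homogeneous coords
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]                  # divide by w

def mean_radiant_temperature(surface_temps_c, view_factors):
    """View-factor-weighted MRT (inputs in deg C, output in deg C)."""
    T = np.asarray(surface_temps_c) + 273.15               # to kelvin
    F = np.asarray(view_factors, dtype=float)
    F = F / F.sum()                                        # view factors sum to 1
    return (np.sum(F * T**4)) ** 0.25 - 273.15

# Toy example: corners of a 32 x 32 thermal image mapped to a ~4 m x 3 m wall
H = np.array([[0.125, 0.0, 0.0],
              [0.0, 0.09375, 0.0],
              [0.0, 0.0, 1.0]])                            # simple scaling homography
corners = np.array([[0, 0], [31, 0], [31, 31], [0, 31]], dtype=float)
print(apply_homography(H, corners))

# Toy example: four walls, floor and ceiling with equal view factors
print(mean_radiant_temperature([21.0, 22.5, 20.0, 21.5, 19.5, 23.0], [1] * 6))
```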
The team compared the temperatures measured by their sensors against ground truth measurements obtained via the net-radiometer method. The optical sensor was found to be repeatable and reliable for different room sizes, layouts and temperature sensing scenarios, with most approaches agreeing within ±0.5 °C of the ground truth measurement, and a maximum error (arising from a single-sensor configuration) of only ±0.96 °C. The optical sensors were also more accurate than the black globe sensor method, which tends to have higher errors due to under/overestimating solar effects.
The researchers conclude that the sensors are repeatable, scalable and predictable, and that they could be integrated into room thermostats to improve human comfort and energy efficiency – especially for controlling the radiant heating and cooling systems now commonly used in high-performance buildings. They also note that a future direction could be to integrate machine learning and other advanced algorithms to improve the calibration of the sensors.
New statistical analyses of the supermassive black hole M87* may explain changes observed since it was first imaged. The findings, from the same Event Horizon Telescope (EHT) that produced the iconic first image of a black hole’s shadow, confirm that M87*’s rotational axis points away from Earth. The analyses also indicate that turbulence within the rotating envelope of gas that surrounds the black hole – the accretion disc – plays a role in changing its appearance.
The first image of M87*’s shadow was based on observations made in 2017, though the image itself was not released until 2019. It resembles a fiery doughnut, with the shadow appearing as a dark region around three times the diameter of the black hole’s event horizon (the point beyond which even light cannot escape its gravitational pull) and the accretion disc forming a bright ring around it.
Because the shadow is caused by the gravitational bending and capture of light at the event horizon, its size and shape can be used to infer the black hole’s mass. The larger the shadow, the higher the mass. In 2019, the EHT team calculated that M87* has a mass of about 6.5 billion times that of our Sun, in line with previous theoretical predictions. Team members also determined that the radius of the event horizon is 3.8 micro-arcseconds; that the black hole is rotating in a clockwise direction; and that its spin points away from us.
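As a rough guide – a simplified non-rotating (Schwarzschild) estimate rather than the EHT team’s full general-relativistic analysis, and using M87*’s commonly quoted distance of roughly 16.8 Mpc, which is not stated above – the shadow’s angular diameter scales with mass over distance:

$$\theta_{\rm shadow} \approx \frac{2\sqrt{27}\,GM}{c^{2}D} \approx 40\ \mu\mathrm{as} \quad \text{for } M \approx 6.5\times10^{9}\,M_{\odot},\ D \approx 16.8\ \mathrm{Mpc},$$

which is consistent with the ring size seen in the EHT images.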
Hot and violent region
The latest analysis focuses less on the shadow and more on the bright ring outside it. As matter accelerates, it produces huge amounts of light. In the vicinity of the black hole, this acceleration occurs as matter is sucked into the black hole, but it also arises when matter is blasted out in jets. The way these jets form is still not fully understood, but some astrophysicists think magnetic fields could be responsible. Indeed, in 2021, when researchers working on the EHT analysed the polarization of light emitted from the bright region, they concluded that only the presence of a strongly magnetized gas could explain their observations.
The team has now combined an analysis of EHT observations made in 2018 with a re-analysis of the 2017 results using a Bayesian approach. This statistical technique, applied for the first time in this context, treats the two sets of observations as independent experiments. This is possible because the event horizon of M87* is about a light-day across, so the accretion disc should present a new version of itself every few days, explains team member Avery Broderick from the Perimeter Institute and the University of Waterloo, both in Canada. In more technical language, the gap between observations exceeds the correlation timescale of the turbulent environment surrounding the black hole.
New result reinforces previous interpretations
The part of the ring that appears brightest to us stems from the relativistic movement of material in a clockwise direction as seen from Earth. In the original 2017 observations, this bright region was further “south” on the image than the EHT team expected. However, when members of the team compared these observations with those from 2018, they found that the region reverted to its mean position. This result corroborated computer simulations of the general relativistic magnetohydrodynamics of the turbulent environment surrounding the black hole.
Even in the 2018 observations, though, the ring remains brightest at the bottom of the image. According to team member Bidisha Bandyopadhyay, a postdoctoral researcher at the Universidad de Concepción in Chile, this finding provides substantial information about the black hole’s spin and reinforces the EHT team’s previous interpretation of its orientation: the black hole’s rotational axis is pointing away from Earth. The analyses also reveal that the turbulence within the accretion disc can help explain the differences observed in the bright region from one year to the next.
Very long baseline interferometry
To observe M87* in detail, the EHT team needed an instrument with an angular resolution comparable to the black hole’s event horizon, which is around tens of micro-arcseconds across. Achieving this resolution with an ordinary telescope would require a dish the size of the Earth, which is clearly not possible. Instead, the EHT uses very long baseline interferometry, which involves detecting radio signals from an astronomical source using a network of individual radio telescopes and telescopic arrays spread across the globe.
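The need for an Earth-sized baseline follows from the diffraction limit. Assuming the EHT’s observing wavelength of about 1.3 mm (a detail not given above), an aperture the diameter of the Earth gives roughly

$$\theta \approx \frac{\lambda}{D} \approx \frac{1.3\times10^{-3}\ \mathrm{m}}{1.27\times10^{7}\ \mathrm{m}} \approx 1\times10^{-10}\ \mathrm{rad} \approx 20\ \mu\mathrm{as},$$

comparable to the apparent size of M87*’s event horizon.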
“This work demonstrates the power of multi-epoch analysis at horizon scale, providing a new statistical approach to studying the dynamical behaviour of black hole systems,” says EHT team member Hung-Yi Pu from National Taiwan Normal University. “The methodology we employed opens the door to deeper investigations of black hole accretion and variability, offering a more systematic way to characterize their physical properties over time.”
Looking ahead, the EHT astronomers plan to continue analysing observations made in 2021 and 2022. With these results, they aim to place even tighter constraints on models of black hole accretion environments. “Extending multi-epoch analysis to the polarization properties of M87* will also provide deeper insights into the astrophysics of strong gravity and magnetized plasma near the event horizon,” EHT management team member Rocco Lico tells Physics World.
A new technique for using frequency combs to measure trace concentrations of gas molecules has been developed by researchers in the US. The team reports single-digit parts-per-trillion detection sensitivity and extremely broadband spectral coverage spanning more than 1000 cm⁻¹. This record-level sensing performance could open up a variety of hitherto inaccessible applications in fields such as medicine, environmental chemistry and chemical kinetics.
Each molecular species will absorb light at a specific set of frequencies. So, shining light through a sample of gas and measuring this absorption can reveal the molecular composition of the gas.
Cavity ringdown spectroscopy is an established way to increase the sensitivity of absorption spectroscopy, and it needs no calibration. Laser light is injected into a cavity formed by two highly reflective mirrors, creating an optical standing wave. A sample of gas is then introduced into the cavity so that the light passes through it, normally many thousands of times. The absorption of light by the gas is then determined by the rate at which the intracavity light intensity “rings down” – in other words, the rate at which the standing wave decays away.
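In its simplest textbook form – a sketch of the general principle rather than the specific analysis described below – the sample’s absorption coefficient follows from comparing the ring-down times measured with and without the gas present:

$$\frac{1}{\tau(\nu)} = \frac{1}{\tau_0(\nu)} + c\,\alpha(\nu) \quad\Longrightarrow\quad \alpha(\nu) = \frac{1}{c}\left(\frac{1}{\tau(\nu)} - \frac{1}{\tau_0(\nu)}\right),$$

where τ₀ is the decay time of the empty cavity, τ is the decay time with the sample loaded and c is the speed of light.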
Researchers have used this method with frequency comb lasers to probe the absorption of gas samples at a range of different light frequencies. A frequency comb produces light at a series of very sharp intensity peaks that are equidistant in frequency – resembling the teeth of a comb.
Shifting resonances
However, the more reflective the mirrors (and hence the higher the cavity finesse), the narrower each cavity resonance becomes. Because the cavity resonances are not evenly spaced in frequency and can be shifted substantially by the loaded gas, they cannot all be kept aligned with the comb lines at once. Instead, the length of the cavity is normally oscillated, sweeping every resonance back and forth across its nearest comb line. Multiple resonances are thereby sequentially excited, and the transient comb intensity dynamics are captured by a camera after the light has been spatially separated by an optical grating.
“That experimental scheme works in the near-infrared, but not in the mid-infrared,” says Qizhong Liang. “Mid-infrared cameras are not fast enough to capture those dynamics yet.” This is a problem because the mid-infrared is where many molecules can be identified by their unique absorption spectra.
Liang is a member of Jun Ye’s group at JILA in Boulder, Colorado, which has shown that it is possible to measure transient comb dynamics simply with a Michelson interferometer. The spectrometer requires only beam splitters, a delay stage and photodetectors. The researchers worked out that the periodically generated intensity dynamics arising from each tooth of the frequency comb can be detected as a set of Fourier components offset by Doppler frequency shifts. Absorption by the loaded gas can thus be determined.
Dithering the cavity
This process of reading out transient dynamics from “dithering” the cavity by a passive Michelson interferometer is much simpler than previous setups and thus can be used by people with little experience with combs, says Liang. It also places no restrictions on the finesse of the cavity, spectral resolution, or spectral coverage. “If you’re dithering the cavity resonances, then no matter how narrow the cavity resonance is, it’s guaranteed that the comb lines can be deterministically coupled to the cavity resonance twice per cavity round trip modulation,” he explains.
The researchers reported detections of various molecules at concentrations as low as parts-per-billion, with parts-per-trillion uncertainty, in air exhaled by volunteers. This included biomedically relevant molecules such as acetone, which is a sign of diabetes, and formaldehyde, which is diagnostic of lung cancer. “Detection of molecules in exhaled breath in medicine has been done in the past,” explains Liang. “The more important point here is that, even if you have no prior knowledge about what the gas sample composition is – be it in industrial applications, environmental science applications or whatever – you can still use it.”
Konstantin Vodopyanov of the University of Central Florida in Orlando comments: “This achievement is remarkable, as it integrates two cutting-edge techniques: cavity ringdown spectroscopy, where a high-finesse optical cavity dramatically extends the laser beam’s path to enhance sensitivity in detecting weak molecular resonances, and frequency combs, which serve as a precise frequency ruler composed of ultra-sharp spectral lines. By further refining the spectral resolution to the Doppler broadening limit of less than 100 MHz and referencing the absolute frequency scale to a reliable frequency standard, this technology holds great promise for applications such as trace gas detection and medical breath analysis.”
In this episode of the Physics World Weekly podcast, online editor Margaret Harris chats about her recent trip to CERN. There, she caught up with physicists working on some of the lab’s most exciting experiments and heard from CERN’s current and future leaders.
Founded in Geneva in 1954, today CERN is most famous for the Large Hadron Collider (LHC), which is currently in its winter shutdown. Harris describes her descent 100 m below ground level to visit the huge ATLAS detector and explains why some of its components will soon be updated as part of the LHC’s upcoming high luminosity upgrade.
She explains why new “crab cavities” will boost the number of particle collisions at the LHC. Among other things, this will allow physicists to better study how Higgs bosons interact with each other, which could provide important insights into the early universe.
Harris describes her visit to CERN’s Antimatter Factory, which hosts several experiments that are benefitting from a 2021 upgrade to the lab’s source of antiprotons. These experiments measure properties of antimatter – such as its response to gravity – to see if its behaviour differs from that of normal matter.
Harris also heard about the future of the lab from CERN’s director general Fabiola Gianotti and her successor Mark Thomson, who will take over next year.
Something extraordinary happened on Earth around 10 million years ago, and whatever it was, it left behind a “signature” of radioactive beryllium-10. This finding, which is based on studies of rocks located deep beneath the ocean, could be evidence for a previously-unknown cosmic event or major changes in ocean circulation. With further study, the newly-discovered beryllium anomaly could also become an independent time marker for the geological record.
Most of the beryllium-10 found on Earth originates in the upper atmosphere, where it forms when cosmic rays interact with oxygen and nitrogen molecules. Afterwards, it attaches to aerosols, falls to the ground and is transported into the oceans. Eventually, it reaches the seabed and accumulates, becoming part of what scientists call one of the most pristine geological archives on Earth.
Because beryllium-10 has a half-life of 1.4 million years, it is possible to use its abundance to pin down the dates of geological samples that are more than 10 million years old. This is far beyond the limits of radiocarbon dating, which relies on an isotope (carbon-14) with a half-life of just 5730 years, and can only date samples less than 50 000 years old.
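As a simple illustration of how a half-life translates into an age – a generic sketch, not the dating procedure used in the study, and with a made-up abundance ratio – the decay law can be inverted directly:

```python
# Minimal sketch of radiometric dating from a half-life (illustrative only).
import math

T_HALF_BE10 = 1.4e6                      # half-life of beryllium-10, years
DECAY_CONST = math.log(2) / T_HALF_BE10  # decay constant lambda

def age_from_fraction(remaining_fraction):
    """Age in years from N(t)/N0 = exp(-lambda * t)."""
    return -math.log(remaining_fraction) / DECAY_CONST

# A sample retaining ~0.7% of its original 10Be is roughly 10 million years old
print(f"{age_from_fraction(0.007):.2e} years")
```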
Almost twice as much 10Be as expected
In the new work, which is detailed in Nature Communications, physicists in Germany and Australia measured the amount of beryllium-10 in geological samples taken from the Pacific Ocean. The samples are primarily made up of iron and manganese and formed slowly over millions of years. To date them, the team used a technique called accelerator mass spectrometry (AMS) at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR). This method can distinguish beryllium-10 from its decay product, boron-10, which has the same mass, and from other beryllium isotopes.
The researchers found that samples dated to around 10 million years ago, a period known as the late Miocene, contained almost twice as much beryllium-10 as they expected to see. The source of this overabundance is a mystery, says team member Dominik Koll, but he offers three possible explanations. The first is that changes to the ocean circulation near the Antarctic, which scientists recently identified as occurring between 10 and 12 million years ago, could have distributed beryllium-10 unevenly across the Earth. “Beryllium-10 might thus have become particularly concentrated in the Pacific Ocean,” says Koll, a postdoctoral researcher at TU Dresden and an honorary lecturer at the Australian National University.
Another possibility is that a supernova exploded in our galactic neighbourhood 10 million years ago, producing a temporary increase in cosmic radiation. The third option is that the Sun’s magnetic shield, which deflects cosmic rays away from the Earth, became weaker through a collision with an interstellar cloud, making our planet more vulnerable to cosmic rays. Both scenarios would have increased the amount of beryllium-10 that fell to Earth without affecting its geographic distribution.
To distinguish between these competing hypotheses, the researchers now plan to analyse additional samples from different locations on Earth. “If the anomaly were found everywhere, then the astrophysics hypothesis would be supported,” Koll says. “But if it were detected only in specific regions, the explanation involving altered ocean currents would be more plausible.”
Whatever the reason for the anomaly, Koll suggests it could serve as a cosmogenic time marker for periods spanning millions of years, the likes of which do not yet exist. “We hope that other research groups will also investigate their deep-ocean samples in the relevant period to eventually come to a definitive answer on the origin of the anomaly,” he tells Physics World.
Update 7 March 2025: In a statement, Intuitive Machines announced that while Athena performed a soft landing on the Moon on 6 March, it landed on its side about 250 m away from the intended landing spot. Given that the lander is unable to recharge its batteries, the firm declared the mission over, with the team now accessing the data that has been collected.
The private firm Intuitive Machines has launched a lunar lander to test extraction methods for water and volatile gases. The six-legged Moon lander, dubbed Athena, took off yesterday aboard a SpaceX Falcon 9 rocket from NASA’s Kennedy Space Center in Florida. Also aboard the rocket was NASA’s Lunar Trailblazer – a lunar orbiter that will investigate water on the Moon and its geology.
In February 2024, Intuitive Machines’ Odysseus mission became the first US mission to make a soft landing on the Moon since Apollo 17 and the first private craft to do so. After a few hiccups during landing, the mission carried out measurements with an optical and radio telescope before it ended seven days later.
Athena is the second lunar lander by Intuitive Machines in its quest to build infrastructure on the Moon that would be required for long-term lunar exploration.
The lander, which stands almost five meters tall, aims to land in the Mons Mouton region, about 160 km from the lunar south pole.
It will use a drill to bore one meter into the surface and test the extraction of substances – including volatiles such as carbon dioxide as well as water – that it will then analyse with a mass spectrometer.
Athena also contains a “hopper” dubbed Grace that can travel up to 25 kilometres on the lunar surface. Carrying about 10 kg of payloads, the rocket-propelled drone will aim to take images of the lunar surface and explore nearby craters.
As well as Grace, Athena carries two rovers. MAPP, built by Lunar Outpost, will autonomously navigate the lunar surface while a small, lightweight rover dubbed Yaoki, which has been built by the Japanese firm Dymon, will explore the Moon within 50 meters of the lander.
Athena is part of NASA’s $2.6bn Commercial Lunar Payload Services initiative, which contracts the private sector to develop missions with the aim of reducing costs.
Taking the Moon’s temperature
Lunar Trailblazer, meanwhile, will spend two years orbiting the Moon from a 100 km altitude polar orbit. Weighing 200 kg and about the size of a washing machine, it will map the distribution of water on the Moon’s surface about 12 times a day with a resolution of about 50 meters.
While it is known that water exists on the lunar surface, little is known about its form, abundance, distribution or how it arrived. Hypotheses range from “wet” asteroids crashing into the Moon to volcanic eruptions producing water vapour from the Moon’s interior.
Water hunter: NASA’s Lunar Trailblazer will spend two years mapping the distribution of water on the surface of the Moon. (Courtesy: Lockheed Martin Space for Lunar Trailblazer)
To help answer that question, the craft will examine water deposits via an imaging spectrometer dubbed the High-resolution Volatiles and Minerals Moon Mapper that has been built by NASA’s Jet Propulsion Laboratory.
A thermal mapper developed by the University of Oxford, meanwhile, will plot the temperature of the Moon’s surface and help to confirm the presence and location of water.
Lunar Trailblazer was selected in 2019 as part of NASA’s Small Innovative Missions for Planetary Exploration programme.
While the biology of how an entire organism develops from a single cell has long been a source of fascination, recent research has increasingly highlighted the role of mechanical forces. “If we want to have rigorous predictive models of morphogenesis, of tissues and cells forming organs of an animal,” says Konstantin Doubrovinski at the University of Texas Southwestern Medical Center, “it is absolutely critical that we have a clear understanding of material properties of these tissues.”
Now Doubrovinski and his colleagues report a rheological study explaining why the developing fruit fly (Drosophila melanogaster) epithelial tissue stretches as it does over time to allow the embryo to change shape.
Previous studies had shown that, under a constant force, tissue extension was proportional to the square root of the time for which the force had been applied. This had puzzled the researchers, since it did not fit a simple model in which epithelial tissues behave like linear springs. In such a model, the extension obeys Hooke’s law and is proportional to the applied force alone, so the exponent of time in the relation would be zero.
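In symbols, with proportionality constants omitted, the observed behaviour and the naive linear-spring expectation are

$$\delta(t) \propto F\,t^{1/2} \quad \text{(observed)}, \qquad \delta = \frac{F}{k} \propto F\,t^{0} \quad \text{(Hookean spring)},$$

where δ is the extension, F the applied force and k the spring constant.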
They and other groups had tried to explain this observation of an exponent equal to 0.5 as due to the viscosity of the medium surrounding the cells, which would lead to deformation near the point of pulling that then gradually spreads. However, their subsequent experiments ruled out viscosity as a cause of the non-zero exponent.
Tissue pulling experiments Schematic showing how a ferrofluid droplet positioned inside one cell is used to stretch the epithelium via an external magnetic field. The lower images are snapshots from an in vivo measurement. (Courtesy: Konstantin Doubrovinski/bioRxiv 10.1101/2023.09.12.557407)
For their measurements, the researchers had exploited a convenient feature of Drosophila epithelial cells – a small hole through which they could introduce a droplet of ferrofluid using a permanent magnet. Once the droplet was inside the cell, a magnet acting on it could exert forces on the cell and stretch the surrounding tissue.
For the current study, the researchers first tested the observed scaling law over longer periods of time. A power law gives a straight line on a log–log plot but as Doubrovinski points out, curves also look like straight lines over short sections. However, even when they increased the time scales probed in their experiments to cover three orders of magnitude – from fractions of a second to several minutes – the observed power law still held.
Understanding the results
One of the postdocs on the team – Mohamad Ibrahim Cheikh – stumbled upon the actual relation giving the power law with an exponent of 0.5 while working on a largely unrelated problem. He had been modelling ellipsoids in a hexagonal meshwork on a surface, in what Doubrovinski describes as a “large” and “relatively complex” simulation. He decided to examine what would happen if he allowed the mesh to relax in its stretched position, which would model the process of actin turnover in cells.
Cheikh’s simulation gave the power law observed in the epithelial cells. “We totally didn’t expect it,” says Doubrovinski. “We pursued it and thought, why are we getting it? What’s going on here?”
Although this simulation yielded the power law with an exponent of 0.5, because the simulation was so complex, it was hard to get a handle on why. “There are all these different physical effects that we took into account that we thought were relevant,” he tells Physics World.
To get a more intuitive understanding of the system, the researchers attempted to simplify the model into a lattice of springs in one dimension, keeping only some of the physical effects from the simulations, until they identified the effects required to give the exponent value of 0.5. They could then scale this simplified one-dimensional model back up to three dimensions and test how it behaved.
According to their model, if they changed the magnitude of various parameters, they should be able to rescale the curves so that they essentially collapse onto a single curve. “This makes our prediction falsifiable,” says Doubrovinski, and in fact the experimental curves could be rescaled in this way.
When the researchers used measured values for the relaxation constant based on the actin turnover rate, along with other known parameters such as the size of the force and the size of the extension, they were able to calculate the force constant of the epithelial cell. This value also agreed with their previous estimates.
Doubrovinski explains how the ferrofluid droplet engages with individual “springs” of the lattice as it moves through the mesh. “The further it moves, the more springs it catches on,” he says. “So the rapid increase of one turns into a slow increase with an exponent of 0.5.” With this model, all the pieces fall into place.
“I find it inspiring that the authors, first motivated by in vivo mechanical measurements, could develop a simple theory capturing a new phenomenological law of tissue rheology,” says Pierre-François Lenne, group leader at the Institut de Biologie du Développement de Marseille at Aix-Marseille University. Lenne specializes in the morphogenesis of multicellular systems but was not involved in the current research.
Next, Doubrovinski and his team are keen to see where else their results might apply, such as other developmental stages and other types of organism, including mammals.
Quantum-inspired “tensor networks” can simulate the behaviour of turbulent fluids in just a few hours rather than the several days required for a classical algorithm. The new technique, developed by physicists in the UK, Germany and the US, could advance our understanding of turbulence, which has been called one of the greatest unsolved problems of classical physics.
Turbulence is all around us, found in weather patterns, water flowing from a tap or a river and in many astrophysical phenomena. It is also important for many industrial processes. However, the way in which turbulence arises and then sustains itself is still not understood, despite the seemingly simple and deterministic physical laws governing it.
The reason for this is that turbulence is characterized by large numbers of eddies and swirls of differing shapes and sizes that interact in chaotic and unpredictable ways across a wide range of spatial and temporal scales. Such fluctuations are difficult to simulate accurately, even using powerful supercomputers, because doing so requires solving sets of coupled partial differential equations on very fine grids.
An alternative is to treat turbulence in a probabilistic way. In this case, the properties of the flow are defined as random variables that are distributed according to mathematical relationships called joint Fokker-Planck probability density functions. These functions are neither chaotic nor multiscale, so they are straightforward to derive. However, they are nevertheless challenging to solve because of the high number of dimensions contained in turbulent flows.
For this reason, the probability density function approach was widely considered to be computationally infeasible. In response, researchers turned to indirect Monte Carlo algorithms to perform probabilistic turbulence simulations. However, while this approach has chalked up some notable successes, it can be slow to yield results.
Highly compressed “tensor networks”
To overcome this problem, a team led by Nikita Gourianov of the University of Oxford, UK, decided to encode turbulence probability density functions as highly compressed “tensor networks” rather than simulating the fluctuations themselves. Such networks have already been used to simulate otherwise intractable quantum systems like superconductors, ferromagnets and quantum computers, they say.
These quantum-inspired tensor networks represent the turbulence probability distributions in a hyper-compressed format, which then allows them to be simulated. By simulating the probability distributions directly, the researchers can then extract important parameters, such as lift and drag, that describe turbulent flow.
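To give a flavour of the idea – a generic tensor-train compression in NumPy, not the authors’ actual solver, with a separable test function chosen purely for illustration – a high-dimensional array can be broken into a chain of small “cores” by sequential truncated singular-value decompositions:

```python
# Minimal sketch of tensor-train compression of a multidimensional array
# (illustrative only; not the published turbulence algorithm).
import numpy as np

def tensor_train(tensor, tol=1e-8):
    """Decompose an N-dimensional array into a list of TT cores via truncated SVDs."""
    dims = tensor.shape
    cores = []
    rank = 1
    mat = tensor
    for d in dims[:-1]:
        mat = mat.reshape(rank * d, -1)
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        keep = max(1, int(np.sum(s > tol * s[0])))   # truncate small singular values
        cores.append(u[:, :keep].reshape(rank, d, keep))
        mat = s[:keep, None] * vt[:keep, :]
        rank = keep
    cores.append(mat.reshape(rank, dims[-1], 1))
    return cores

# Example: a smooth six-dimensional "probability density" on an 8^6 grid
grid = np.linspace(-1.0, 1.0, 8)
axes = np.meshgrid(*[grid] * 6, indexing="ij")
pdf = np.exp(-sum(x**2 for x in axes))               # separable, so TT ranks stay at 1
cores = tensor_train(pdf)
print("full entries:", pdf.size, "TT entries:", sum(c.size for c in cores))
```

For smooth, weakly correlated distributions the retained ranks stay small, which is what makes the representation so compact.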
Importantly, the new technique allows an ordinary single CPU (central processing unit) core to compute a turbulent flow in just a few hours, compared to several days using a classical algorithm on a supercomputer.
This significantly improved way of simulating turbulence could be particularly useful in the area of chemically reactive flows in areas such as combustion, says Gourianov. “Our work also opens up the possibility of probabilistic simulations for all kinds of chaotic systems, including weather or perhaps even the stock markets,” he adds.
The researchers now plan to apply tensor networks to deep learning, a form of machine learning that uses artificial neural networks. “Neural networks are famously over-parameterized and there are several publications showing that they can be compressed by orders of magnitude in size simply by representing their layers as tensor networks,” Gourianov tells Physics World.
Vacuum technology is routinely used in both scientific research and industrial processes. In physics, high-quality vacuum systems make it possible to study materials under extremely clean and stable conditions. In industry, vacuum is used to lift, position and move objects precisely and reliably. Without these technologies, a great deal of research and development would simply not happen. But for all its advantages, working under vacuum does come with certain challenges. For example, once something is inside a vacuum system, how do you manipulate it without opening the system up?
Heavy duty: The new transfer arm. (Courtesy: UHV Design)
The UK-based firm UHV Design has been working on this problem for over a quarter of a century, developing and manufacturing vacuum manipulation solutions for new research disciplines as well as emerging industrial applications. Its products, which are based on magnetically coupled linear and rotary probes, are widely used at laboratories around the world, in areas ranging from nanoscience to synchrotron and beamline applications. According to engineering director Jonty Eyres, the firm’s latest innovation – a new sample transfer arm released at the beginning of this year – extends this well-established range into new territory.
“The new product is a magnetically coupled probe that allows you to move a sample from point A to point B in a vacuum system,” Eyres explains. “It was designed to have an order of magnitude improvement in terms of both linear and rotary motion thanks to the magnets in it being arranged in a particular way. It is thus able to move and position objects that are much heavier than was previously possible.”
The new sample arm, Eyres explains, is made up of a vacuum “envelope” comprising a welded flange and tube assembly. This assembly has an outer magnet array that magnetically couples to an inner magnet array attached to an output shaft. The output shaft extends beyond the mounting flange and incorporates a support bearing assembly. “Depending on the model, the shafts can either be in one or more axes: they move samples around either linearly, linear/rotary or incorporating a dual axis to actuate a gripper or equivalent elevating plate,” Eyres says.
Continual development, review and improvement
While similar devices are already on the market, Eyres says that the new product has a significantly larger magnetic coupling strength in terms of its linear thrust and rotary torque. These features were developed in close collaboration with customers who expressed a need for arms that could carry heavier payloads and move them with more precision. In particular, Eyres notes that in the original product, the maximum weight that could be placed on the end of the shaft – a parameter that depends on the stiffness of the shaft as well as the magnetic coupling strength – was too small for these customers’ applications.
“From our point of view, it was not so much the magnetic coupling that needed to be reviewed, but the stiffness of the device in terms of the size of the shaft that extends out to the vacuum system,” Eyres explains. “The new arm deflects much less from its original position even with a heavier load and when moving objects over longer distances.”
The new product – a scaled-up version of the original – can move an object weighing up to 50 N (about 5 kg) over an axial stroke of up to 1.5 m. Eyres notes that it also requires minimal maintenance, which is important when moving higher loads. “It is thus targeted to customers who wish to move larger objects around over longer periods of time without having to worry about intervening too often,” he says.
Moving multiple objects
As well as moving larger, single objects, the new arm’s capabilities make it suitable for moving multiple objects at once. “Rather than having one sample go through at a time, we might want to nest three or four samples onto a large plate, which inevitably increases the size of the overall object,” Eyres explains.
Before they created this product, he continues, he and his UHV Design colleagues were not aware of any magnetic coupled solution on the marketplace that enabled users to do this. “As well as being capable of moving heavy samples, our product can also move lighter samples, but with a lot less shaft deflection over the stroke of the product,” he says. “This could be important for researchers, particularly if they are limited in space or if they wish to avoid adding costly supports in their vacuum system.”
Researchers at Microsoft in the US claim to have made the first topological quantum bit (qubit) – a potentially transformative device that could make quantum computing robust against the errors that currently restrict what it can achieve. “If the claim stands, it would be a scientific milestone for the field of topological quantum computing and physics beyond,” says Scott Aaronson, a computer scientist at the University of Texas at Austin.
However, the claim is controversial because the evidence supporting it has not yet been presented in a peer-reviewed paper. It is made in a press release from Microsoft accompanying a paper in Nature (638 651) that has been written by more than 160 researchers from the company’s Azure Quantum team. The paper stops short of claiming a topological qubit but instead reports some of the key device characterization underpinning it.
Writing in a peer-review file accompanying the paper, the Nature editorial team says that it sought additional input from two of the article’s reviewers to “establish its technical correctness”, concluding that “the results in this manuscript do not represent evidence for the presence of Majorana zero modes [MZMs] in the reported devices”. An MZM is a quasiparticle (a particle-like collective electronic state) that can act as a topological qubit.
“That’s a big no-no”
“The peer-reviewed publication is quite clear [that it contains] no proof for topological qubits,” says Winfried Hensinger, a physicist at the University of Sussex who works on quantum computing using trapped ions. “But the press release speaks differently. In academia that’s a big no-no: you shouldn’t make claims that are not supported by a peer-reviewed publication” – or that have at least been presented in a preprint.
Chetan Nayak, leader of Microsoft Azure Quantum, which is based in Redmond, Washington, says that the evidence for a topological qubit was obtained in the period between submission of the paper in March 2024 and its publication. He will present those results at a talk at the Global Physics Summit of the American Physical Society in Anaheim in March.
But Hensinger is concerned that “the press release doesn’t make it clear what the paper does and doesn’t contain”. He worries that some might conclude that the strong claim of having made a topological qubit is now supported by a paper in Nature. “We don’t need to make these claims – that is just unhealthy and will really hurt the field,” he says, because it could lead to unrealistic expectations about what quantum computers can do.
As with the qubits used in current quantum computers, such as superconducting components or trapped ions, MZMs would be able to encode superpositions of the two readout states (representing a 1 or 0). By quantum-entangling such qubits, information could be manipulated in ways not possible for classical computers, greatly speeding up certain kinds of computation. In MZMs the two states are distinguished by “parity”: whether the quasiparticles contain even or odd numbers of electrons.
Built-in error protection
As MZMs are “topological” states, their settings cannot easily be flipped by random fluctuations to introduce errors into the calculation. Rather, the states are like a twist in a buckled belt that cannot be smoothed out unless the buckle is undone. Topological qubits would therefore suffer far less from the errors that afflict current quantum computers, and which limit the scale of the computations they can support. Because quantum error correction is one of the most challenging issues for scaling up quantum computers, “we want some built-in level of error protection”, explains Nayak.
It has long been thought that MZMs might be produced at the ends of nanoscale wires made of a superconducting material. Indeed, Microsoft researchers have been trying for several years to fabricate such structures and look for the characteristic signature of MZMs at their tips. But it can be hard to distinguish this signature from those of other electronic states that can form in these structures.
In 2018 researchers at labs in the US and the Netherlands (including the Delft University of Technology and Microsoft) claimed to have evidence of an MZM in such devices. However, they then had to retract the work after others raised problems with the data. “That history is making some experts cautious about the new claim,” says Aaronson.
Now, though, it seems that Nayak and colleagues have cracked the technical challenges. In the Nature paper, they report measurements in a nanowire heterostructure made of superconducting aluminium and semiconducting indium arsenide that are consistent with, but not definitive proof of, MZMs forming at the two ends. The crucial advance is an ability to accurately measure the parity of the electronic states. “The paper shows that we can do these measurements fast and accurately,” says Nayak.
The device is a remarkable achievement from the materials science and fabrication standpoint
Ivar Martin, Argonne National Laboratory
“The device is a remarkable achievement from the materials science and fabrication standpoint,” says Ivar Martin, a materials scientist at Argonne National Laboratory in the US. “They have been working hard on these problems, and seems like they are nearing getting the complexities under control.” In the press release, the Microsoft team claims now to have put eight MZM topological qubits on a chip called Majorana 1, which is designed to house a million of them (see figure).
Even if the Microsoft claim stands up, a lot will still need to be done to get from a single MZM to a quantum computer, says Hensinger. Topological quantum computing is “probably 20–30 years behind the other platforms”, he says. Martin agrees. “Even if everything checks out and what they have realized are MZMs, cleaning them up to take full advantage of topological protection will still require significant effort,” he says.
Regardless of the debate about the results and how they have been announced, researchers are supportive of the efforts at Microsoft to produce a topological quantum computer. “As a scientist who likes to see things tried, I’m grateful that at least one player stuck with the topological approach even when it ended up being a long, painful slog,” says Aaronson.
“Most governments won’t fund such work, because it’s way too risky and expensive,” adds Hensinger. “So it’s very nice to see that Microsoft is stepping in there.”
Solid-state batteries are considered a next-generation energy storage technology as they promise higher energy density and safety than lithium-ion batteries with a liquid electrolyte. However, major obstacles to commercialization are the requirement for high stack pressures as well as insufficient power density. Both aspects are closely related to limitations of charge transport within the composite cathode.
This webinar presents an introduction to using electrochemical impedance spectroscopy to investigate composite cathode microstructures and identify kinetic bottlenecks. Effective conductivities can be obtained using transmission line models and used to evaluate the main factors limiting electronic and ionic charge transport.
In combination with high-resolution 3D imaging techniques and electrochemical cell cycling, the crucial role of the cathode microstructure can be revealed, the relevant factors influencing cathode performance identified, and optimization strategies for improved cathode performance derived.
Philip Minnmann
Philip Minnmann received his M.Sc. in material science from RWTH Aachen University. He later joined Prof. Jürgen Janek’s group at JLU Giessen as part of the BMBF Cluster of Competence for Solid-State Batteries FestBatt. During his Ph.D., he worked on composite cathode characterization for sulfide-based solid-state batteries, as well as processing scalable, slurry-based solid-state batteries. Since 2023, he has been a project manager for high-throughput battery material research at HTE GmbH.
Johannes Schubert
Johannes Schubert holds an M.Sc. in Material Science from the Justus-Liebig University Giessen, Germany. He is currently a Ph.D. student in the research group of Prof. Jürgen Janek in Giessen, where he is part of the BMBF Competence Cluster for Solid-State Batteries FestBatt. His main research focuses on characterization and optimization of composite cathodes with sulfide-based solid electrolytes.
The fusion physicist Ian Chapman is to be the next head of UK Research and Innovation (UKRI) – the UK’s biggest public research funder. He will take up the position in June, replacing the geneticist Ottoline Leyser, who has held the position since 2020.
UK science minister Patrick Vallance notes that Chapman’s “leadership experience, scientific expertise and academic achievements make him an exceptionally strong candidate to lead UKRI”.
UKRI chairman Andrew Mackenzie, meanwhile, states that Chapman “has the skills, experience, leadership and commitment to unlock this opportunity to improve the lives and livelihoods of everyone”.
Hard act to follow
After gaining an MSc in mathematics and physics from Durham University, Chapman completed a PhD at Imperial College London in fusion science, which he partly did at Culham Science Centre in Oxfordshire.
In 2014 he became head of tokamak science at Culham and then became fusion programme manager a year later. In 2016, aged just 34, he was named chief executive of the UK Atomic Energy Authority (UKAEA), which saw him lead the UK’s magnetic confinement fusion research programme at Culham.
In that role he oversaw an upgrade to the lab’s Mega Amp Spherical Tokamak as well as the final operation of the Joint European Torus (JET) – one of the world’s largest nuclear fusion devices – that closed in 2024.
Chapman also played a part in planning a prototype fusion power plant. Known as the Spherical Tokamak for Energy Production (STEP), it was first announced by the UK government in 2019, with operations expected to begin in the 2040s. STEP aims to prove the commercial viability of fusion by demonstrating net energy, fuel self-sufficiency and a viable route to plant maintenance.
Chapman, who currently sits on UKRI’s board, says that he is “excited” to take over as head of UKRI. “Research and innovation must be central to the prosperity of our society and our economy, so UKRI can shape the future of the country,” he notes. “I was tremendously fortunate to represent UKAEA, an organisation at the forefront of global research and innovation of fusion energy, and I look forward to building on those experiences to enable the wider UK research and innovation sector.”
The UKAEA has announced that Tim Bestwick, who is currently UKAEA’s deputy chief executive, will take over as interim UKAEA head until a permanent replacement is found.
Steve Cowley, director of the Princeton Plasma Physics Laboratory in the US and a former chief executive of UKAEA, told Physics World that Chapman is an “astonishing science leader” and that the UKRI is in “excellent hands”. “[Chapman] has set a direction for UK fusion research that is bold and inspired,” adds Cowley. “It will be a hard act to follow but UK fusion development will go ahead with great energy.”
A team at the Trento Proton Therapy Centre in Italy has delivered the first clinical treatments using proton arc therapy (PAT), an emerging proton delivery technique. Following successful dosimetric comparisons with clinically delivered proton plans, the researchers confirmed the feasibility of PAT delivery and used PAT to treat nine cancer patients, reporting their findings in Medical Physics.
Currently, proton therapy is mostly delivered using pencil-beam scanning (PBS), which provides highly conformal dose distributions. But PBS delivery can be compromised by the small number of beam directions deliverable in an acceptable treatment time. PAT overcomes this limitation by moving to an arc trajectory.
“Proton arc treatments are different from any other pencil-beam proton delivery technique because of the large number of beam angles used and the possibility to optimize the number of energies used for each beam direction, which enables optimization of the delivery time,” explains first author Francesco Fracchiolla. “The ability to optimize both the number of energy layers and the spot weights makes these treatments superior to any previous delivery technique.”
Plan comparisons
The Trento researchers – working with colleagues from RaySearch Laboratories – compared the dosimetric parameters of PAT plans with those of state-of-the-art multiple-field optimized (MFO) PBS plans, for 10 patients with head-and-neck cancer. They focused on this site due to the high number of organs-at-risk (OARs) close to the target that may be spared using this new technique.
In future, PAT plans will be delivered with the beam on during gantry motion (dynamic mode). This requires dynamic arc plan delivery with all system settings automatically adjusted as a function of gantry angle – an approach with specific hardware and software requirements that have so far impeded clinical rollout.
Instead, Fracchiolla and colleagues employed an alternative – static PAT – in which the static arc is converted into a series of PBS beams and delivered using conventional delivery workflows. Using the RayStation treatment planning system, they created MFO plans (using six noncoplanar beam directions) and PAT plans (with 30 beam directions), robustly optimized against setup and range uncertainties.
PAT plans dramatically improved dose conformality compared with MFO treatments. While target coverage was of equal quality for both treatment types, PAT decreased the mean doses to OARs for all patients. The biggest impact was in the brainstem, where PAT reduced maximum and mean doses by 19.6 and 9.5 Gy(RBE), respectively. Dose to other primary OARs did not differ significantly between plans, but PAT achieved an impressive reduction in mean dose to secondary OARs not directly adjacent to the target.
The team also evaluated how these dosimetric differences impact normal tissue complication probability (NTCP). PAT significantly reduced (by 8.5%) the risk of developing dry mouth and slightly lowered other NTCP endpoints (swallowing dysfunction, tube feeding and sticky saliva).
To verify the feasibility of clinical PAT, the researchers delivered MFO and PAT plans for one patient on a clinical gantry. Importantly, delivery times (from the start of the first beam to the end of the last) were similar for both techniques: 36 min for PAT with 30 beam directions and 31 min for MFO. Reducing the number of beam directions to 20 reduced the delivery time to 25 min, while maintaining near-identical dosimetric data.
First patient treatments
The successful findings of the plan comparison and feasibility test prompted the team to begin clinical treatments.
“The final trigger to go live was the fact that the discretized PAT plans maintained pretty much exactly the optimal dosimetric characteristics of the original dynamic (continuous rotation) arc plan from which they derived, so there was no need to wait for full arc to put the potential benefits to clinical use. Pretreatment verification showed excellent dosimetric accuracy and everything could be done in a fully CE-certified environment,” say Frank Lohr and Marco Cianchetti, director and deputy director, respectively, of the Trento Proton Therapy Center. “The only current drawback is that we are not at the treatment speed that we could be with full dynamic arc.”
To date, nine patients have received or are undergoing PAT treatment: five with head-and-neck tumours, three with brain tumours and one with a thoracic tumour. For the first two head-and-neck patients, the team created PAT plans with a half arc (180° to 0°) with 10 beam directions and a mean treatment time of 12 min. The next two were treated with a complete arc (360°) with 20 beam directions. Here, the mean treatment time was 24 min. Patient-specific quality assurance revealed an average gamma passing rate (3%, 3 mm) of 99.6% and only one patient required replanning.
All PAT treatments were performed using the centre’s IBA ProteusPlus proton therapy unit and the existing clinical workflow. “Our treatment planning system can convert an arc plan into a PBS plan with multiple beams,” Fracchiolla explains. “With this workaround, the entire clinical chain doesn’t change and the plan can be delivered on the existing system. This ability to convert the arc plans into PBS plans means that basically every proton centre can deliver these treatments with the current hardware settings.”
The researchers are now analysing acute toxicity data from the patients, to determine whether PAT reduces toxicity. They are also looking to further reduce the delivery times.
“Hopefully, together with IBA, we will streamline the current workflow between the OIS [oncology information system] and the treatment control system to reduce treatment times, thus being competitive in comparison with conventional approaches, even before full dynamic arc treatments become a clinical reality,” adds Lohr.
Inside view: Private companies like Tokamak Energy in the UK are developing compact tokamaks that, they hope, could bring fusion power to the grid in the 2030s. (Courtesy: Tokamak Energy)
Fusion – the process that powers the Sun – offers a tantalizing opportunity to generate almost unlimited amounts of clean energy. In the Sun’s core, matter is more than 10 times denser than lead and temperatures reach 15 million K. In these conditions, ionized isotopes of hydrogen (deuterium and tritium) can overcome their electrostatic repulsion, fusing into helium nuclei and ejecting high-energy neutrons. The products of this reaction are slightly lighter than the two reacting nuclei, and the excess mass is converted to lots of energy.
The engineering and materials challenges of creating what is essentially a ‘Sun in a freezer’ are formidable
The Sun’s core is kept hot and dense by the enormous gravitational force exerted by its huge mass. To achieve nuclear fusion on Earth, different tactics are needed. Instead of gravity, the most common approach uses strong superconducting magnets operating at ultracold temperatures to confine the intensely hot hydrogen plasma.
The engineering and materials challenges of creating what is essentially a “Sun in a freezer”, and harnessing its power to make electricity, are formidable. This is partly because, over time, high-energy neutrons from the fusion reaction will damage the surrounding materials. Superconductors are incredibly sensitive to this kind of damage, so substantial shielding is needed to maximize the lifetime of the reactor.
The traditional roadmap towards fusion power, led by large international projects, has set its sights on bigger and bigger reactors, at greater and greater expense. However, these are moving at a snail’s pace, with the first power to the grid not anticipated until the 2060s, leading to the common perception that “fusion power is 30 years away, and always will be.”
There is therefore considerable interest in alternative concepts for smaller, simpler reactors to speed up the fusion timeline. Such novel reactors will need a different toolkit of superconductors. Promising materials exist, but because fusion can still only be sustained in brief bursts, we have no way to directly test how these compounds will degrade over decades of use.
Is smaller better?
A leading concept for a nuclear fusion reactor is a machine called a tokamak, in which the plasma is confined to a doughnut-shaped region. In a tokamak, D-shaped electromagnets are arranged in a ring around a central column, producing a circulating (toroidal) magnetic field. This exerts a force (the Lorentz force) on the positively charged hydrogen nuclei, making them trace helical paths that follow the field lines and keep them away from the walls of the vessel.
In 2010, construction began in France on ITER, a tokamak that is designed to demonstrate the viability of nuclear fusion for energy generation. The aim is to produce burning plasma, where more than half of the energy heating the plasma comes from fusion in the plasma itself, and to generate, for short pulses, a tenfold return on the power input.
But despite being proposed 40 years ago, ITER’s projected first operation was recently pushed back by another 10 years to 2034. The project’s budget has also been revised multiple times and it is currently expected to cost tens of billions of euros. One reason ITER is such an ambitious and costly project is its sheer size. ITER’s plasma radius of 6.2 m is twice that of the JT-60SA in Japan, the world’s current largest tokamak. The power generated by a tokamak roughly scales with the radius of the doughnut cubed, which means that doubling the radius should yield an eight-fold increase in power.
Small but mighty: Tokamak Energy’s ST40 compact tokamak uses copper electromagnets, which would be unsuitable for long-term operation due to overheating. REBCO compounds, which are high-temperature superconductors that can generate very high magnetic fields, are an attractive alternative. (Courtesy: Tokamak Energy)
However, instead of chasing larger and larger tokamaks, some organizations are going in the opposite direction. Private companies like Tokamak Energy in the UK and Commonwealth Fusion Systems in the US are developing compact tokamaks that, they hope, could bring fusion power to the grid in the 2030s. Their approach is to ramp up the magnetic field rather than the size of the tokamak. The fusion power of a tokamak has a stronger dependence on the magnetic field than the radius, scaling with the fourth power.
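Taking the two rough scalings quoted above at face value – fusion power rising as the cube of the plasma radius R but as the fourth power of the magnetic field B – gives a back-of-the-envelope illustration of the compact, high-field strategy (an order-of-magnitude sketch only, not a detailed reactor model):

\[
\frac{P_2}{P_1} = \left(\frac{R_2}{R_1}\right)^{3}\left(\frac{B_2}{B_1}\right)^{4} = \left(\tfrac{1}{2}\right)^{3}\times 2^{4} = 2
\]

In other words, a machine with half the radius but twice the field would, on these scalings, still deliver roughly twice the fusion power of the larger, lower-field design.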
The drawback of smaller tokamaks is that the materials will sustain more damage from neutrons during operation. Of all the materials in the tokamak, the superconducting magnets are most sensitive to this. If the reactor is made more compact, they are also closer to the plasma and there will be less space for shielding. So if compact tokamaks are to succeed commercially, we need to choose superconducting materials that will be functional even after many years of irradiation.
1 Superconductors
Operation window for Nb-Ti, Nb3Sn and REBCO superconductors. (Courtesy: Susie Speller/IOP Publishing)
Superconductors are materials that have zero electrical resistance when they are cooled below a certain critical temperature (Tc). Superconducting wires can therefore carry electricity much more efficiently than conventional resistive metals like copper.
What’s more, because it has zero resistance and so generates no heat, a superconducting wire can carry a much higher current than a copper wire of the same diameter. In contrast, as you pass ever more current through a copper wire, it heats up and its resistance rises even further, until eventually it melts.
This increased current density (current per unit cross-sectional area) enables high-field superconducting magnets to be more compact than resistive ones.
However, there is an upper limit to the strength of the magnetic field that a superconductor can usefully tolerate without losing the ability to carry lossless current. This is known as the “irreversibility field”, and for a given superconductor its value decreases as temperature is increased, as shown above.
High-performance fusion materials
Superconductors are a class of materials that, when cooled below a characteristic temperature, conduct with no resistance (see box 1, above). Magnets made from superconducting wires can carry high currents without overheating, making them ideal for generating the very high fields required for fusion. Superconductivity is highly sensitive to the arrangement of the atoms; whilst some amorphous superconductors exist, most superconducting compounds only conduct high currents in a specific crystalline state. A few defects will always arise, and can sometimes even improve the material’s performance. But introducing significant disorder to a crystalline superconductor will eventually destroy its ability to superconduct.
The most common material for superconducting magnets is a niobium-titanium (Nb-Ti) alloy, which is used in MRI machines in hospitals and CERN’s Large Hadron Collider. Nb-Ti superconducting magnets are relatively cheap and easy to manufacture, but – like all superconducting materials – Nb-Ti has an upper limit to the magnetic field in which it can superconduct, known as the irreversibility field. In Nb-Ti this value is too low for the material to be used for the high-field magnets in ITER. The ITER tokamak will instead use a niobium-tin (Nb3Sn) superconductor, which has a higher irreversibility field than Nb-Ti, even though it is much more expensive and challenging to work with.
2 REBCO unit cell
(Courtesy: redrawn from Wikimedia Commons/IOP Publishing)
The unit cell of a REBCO high-temperature superconductor. The pink atoms are copper, the red atoms are oxygen and the green atoms are barium, while the rare-earth element – in this case yttrium – is shown in blue.
Needing stronger magnetic fields, compact tokamaks require a superconducting material with an even higher irreversibility field. Over the last decade, another class of superconducting materials called “REBCO” has been proposed as an alternative. Short for rare earth barium copper oxide, these are a family of superconductors with the chemical formula REBa2Cu3O7, where RE is a rare-earth element such as yttrium, gadolinium or europium (see Box 2 “REBCO unit cell”).
REBCO compounds are high-temperature superconductors, which are defined as having transition temperatures above 77 K, meaning they can be cooled with liquid nitrogen rather than the more expensive liquid helium. REBCO compounds also have a much higher irreversibility field than niobium-tin, and so can sustain the high fields necessary for a small fusion reactor.
REBCO wires: Bendy but brittle
REBCO materials have attractive superconducting properties, but it is not easy to manufacture them into flexible wires for electromagnets. REBCO is a brittle ceramic so can’t be made into wires in the same way as ductile materials like copper or Nb-Ti, where the material is drawn through progressively smaller holes.
Instead, REBCO tapes are manufactured by coating metallic ribbons with a series of very thin ceramic layers, one of which is the superconducting REBCO compound. Ideally, the REBCO would be a single crystal, but in practice it is composed of many small grains. The metal gives mechanical stability and flexibility whilst the underlying ceramic “buffer” layers protect the REBCO from chemical reactions with the metal and act as a template for aligning the REBCO grains. This is important because the boundaries between individual grains reduce the maximum current the wire can carry.
Another potential problem is that these compounds are chemically sensitive and are “poisoned” by nearly all the impurities that may be introduced during manufacture. These impurities can produce insulating compounds that block supercurrent flow or degrade the performance of the REBCO compound itself.
Despite these challenges, and thanks to impressive materials engineering from several companies and institutions worldwide, REBCO is now made in kilometre-long, flexible tapes capable of carrying thousands of amps of current. In 2024, more than 10,000 km of this material was manufactured for the burgeoning fusion industry. This is impressive given that only 1000 km was made in 2020. However, a single compact tokamak will require up to 20,000 km of this REBCO-coated conductor for the magnet systems, and because the superconductor is so expensive to manufacture it is estimated that this would account for a considerable fraction of the total cost of a power plant.
Pushing superconductors to the limit
Another problem with REBCO materials is that the temperature below which they superconduct falls steeply once they’ve been irradiated with neutrons. Their lifetime in service will depend on the reactor design and amount of shielding, but research from the Vienna University of Technology in 2018 suggested that REBCO materials can withstand about a thousand times less damage than structural materials like steel before they start to lose performance (Supercond. Sci. Technol. 31 044006).
These experiments are currently being used by the designers of small fusion machines to assess how much shielding will be required, but they don’t tell the whole story. The 2018 study used neutrons from a fission reactor, which have a different spectrum of energies compared to fusion neutrons. They also did not reproduce the environment inside a compact tokamak, where the superconducting tapes will be at cryogenic temperatures, carrying high currents and under considerable strain from Lorentz forces generated in the magnets.
Even if we could get a sample of REBCO inside a working tokamak, the maximum runtime of current machines is measured in minutes, meaning we cannot do enough damage to test how susceptible the superconductor will be in a real fusion environment. The current record for fusion energy produced by a tokamak is 69 megajoules, achieved in a 5-second burst at the Joint European Torus (JET) tokamak in the UK.
Given the difficulty of using neutrons from fusion reactors, our team is looking for answers using ions instead. Ion irradiation is much more readily available, quicker to perform, and doesn’t make the samples radioactive. It is also possible to access a wide range of energies and ion species to tune the damage mechanisms in the material. The trouble is that because ions are charged they won’t interact with materials in exactly the same way as neutrons, so it is not clear if these particles cause the same kinds of damage or by the same mechanisms.
To find out, we first tried to directly image the crystalline structure of REBCO after both neutron and ion irradiation using transmission electron microscopy (TEM). When we compared the samples, we saw small amorphous regions in the neutron-irradiated REBCO where the crystal structure was destroyed (J. Microsc. 286 3), which are not observed after light ion irradiation (see Box 3 below).
TEM images of REBCO before (a) and after (b) helium ion irradiation. The image on the right (c) shows only the positions of the copper, barium and rare-earth atoms – the oxygen atoms in the crystal lattice cannot be imaged using this technique. After ion irradiation, REBCO materials exhibit a lower superconducting transition temperature. However, the above images show no corresponding defects in the lattice, indicating that defects caused by oxygen atoms being knocked out of place are responsible for this effect.
We believe these regions to be collision cascades generated initially by a single violent neutron impact that knocks an atom out of its place in the lattice with enough energy that the atom ricochets through the material, knocking other atoms from their positions. However, these amorphous regions are small, and superconducting currents should be able to pass around them, so it was likely that another effect was reducing the superconducting transition temperature.
Searching for clues
The TEM images didn’t show any other defects, so on our hunt to understand the effect of neutron irradiation, we instead thought about what we couldn’t see in the images. The TEM technique we used cannot resolve the oxygen atoms in REBCO because they are too light to scatter the electrons by large angles. Oxygen is also the most mobile atom in a REBCO material, which led us to think that oxygen point defects – single oxygen atoms that have been moved out of place and which are distributed randomly throughout the material – might be responsible for the drop in transition temperature.
In REBCO, the oxygen atoms are all bonded to copper, so the bonding environment of the copper atoms can be used to identify oxygen defects. To test this theory we switched from electrons to photons, using a technique called X-ray absorption spectroscopy. Here the sample is illuminated with X-rays that preferentially excite the copper atoms; the precise energies where absorption is highest indicate specific bonding arrangements, and therefore point to specific defects. We have started to identify the defects that are likely to be present in the irradiated samples, finding spectral changes that are consistent with oxygen atoms moving into unoccupied sites (Communications Materials 3 52).
We see very similar changes to the spectra when we irradiate with helium ions and neutrons, suggesting that similar defects are created in both cases (Supercond. Sci. Technol. 36 10LT01). This work has increased our confidence that light ions are a good proxy for neutron damage in REBCO superconductors, and that this damage is due to changes in the oxygen lattice.
The Surrey Ion Beam Centre allows users to carry out a wide variety of research using ion implantation, ion irradiation and ion beam analysis. (Courtesy: Surrey Ion Beam Centre)
Another advantage of ion irradiation is that, compared to neutrons, it is easier to access experimentally relevant cryogenic temperatures. Our experiments are performed at the Surrey Ion Beam Centre, where a cryocooler can be attached to the end of the ion accelerator, enabling us to recreate some of the conditions inside a fusion reactor.
We have shown that when REBCO is irradiated at cryogenic temperatures and then allowed to warm to room temperature, it recovers some of its superconducting properties (Supercond. Sci. Technol. 34 09LT01). We attribute this to annealing, where rearrangements of atoms occur in a material warmed below its melting point, smoothing out defects in the crystal lattice. We have shown that further recovery of a perfect superconducting lattice can be induced using careful heat treatments to avoid loss of oxygen from the samples (MRS Bulletin 48 710).
Lots more experiments are required to fully understand the effect of irradiation temperature on the degradation of REBCO. Our results indicate that room temperature and cryogenic irradiation with helium ions lead to a similar rate of degradation, but similar work by a group at the Massachusetts Institute of Technology (MIT) in the US using proton irradiation has found that the superconductor degrades more rapidly at cryogenic temperatures (Rev. Sci. Instrum. 95 063907). The effect of other critical parameters like magnetic field and strain also still needs to be explored.
Towards net zero
The remarkable properties of REBCO high-temperature superconductors present new opportunities for designing fusion reactors that are substantially smaller (and cheaper) than traditional tokamaks, and which private companies ambitiously promise will enable the delivery of power to the grid on vastly accelerated timescales. REBCO tape can already be manufactured commercially with the required performance, but more research is needed to understand the effects of the neutron damage that the magnets will be subjected to, so that they achieve the desired service lifetimes.
This would open up extensive new applications, such as lossless transmission cables, wind turbine generators and magnet-based energy storage devices
Scale-up of REBCO tape production is already happening at pace, and it is expected that this will drive down the cost of manufacture. This would open up extensive new applications, not only in fusion but also in power applications such as lossless transmission cables, for which the historically high costs of the superconducting material have proved prohibitive. Superconductors are also being introduced into wind turbine generators, and magnet-based energy storage devices.
This symbiotic relationship between fusion and superconductor research could lead not only to the realization of clean fusion energy but also many other superconducting technologies that will contribute to the achievement of net zero.
Astronomers have constructed the first “weather map” of the exoplanet WASP-127b, and the forecast there is brutal. Winds roar around its equator at speeds as high as 33 000 km/hr, far exceeding anything found in our own solar system. Its poles are cooler than the rest of its surface, though “cool” is a relative term on a planet where temperatures routinely exceed 1000 °C. And its atmosphere contains water vapour, so rain – albeit not in the form we’re accustomed to on Earth – can’t be ruled out.
Astronomers have been studying WASP-127b since its discovery in 2016. A gas giant exoplanet located over 500 light-years from Earth, it is slightly larger than Jupiter but much less dense, and it orbits its host – a G-type star like our own Sun – in just 4.18 Earth days. To probe its atmosphere, astronomers record the light transmitted as the planet passes in front of its host star along our line of sight. During such passes, or transits, some starlight gets filtered through the planet’s upper atmosphere and is “imprinted” with the characteristic pattern of absorption lines of the atoms and molecules present there.
Observing the planet during a transit event
On the night of 24/25 March 2022, astronomers used the CRyogenic InfraRed Echelle Spectrograph (CRIRES+) on the European Southern Observatory’s Very Large Telescope to observe WASP-127b at wavelengths of 1972‒2452 nm during a transit event lasting 6.6 hours. The data they collected show that the planet is home to supersonic winds travelling at speeds nearly six times faster than its own rotation – something that has never been observed before. By comparison, the fastest wind speeds measured in our solar system were on Neptune, where they top out at “just” 1800 km/hr, or 0.5 km/s.
Such strong winds – the fastest ever observed on a planet – would be hellish to experience. But for the astronomers, they were crucial for mapping WASP-127b’s weather.
“The light we measure still looks to us as if it all came from one point in space, because we cannot resolve the planet optically/spatially like we can do for planets in our own solar system,” explains Lisa Nortmann, an astronomer at the University of Göttingen, Germany and the lead author of an Astronomy and Astrophysics paper describing the measurements. However, Nortmann continues, “the unexpectedly fast velocities measured in this planet’s atmosphere have allowed us to investigate different regions on the planet, as it causes their signals to shift to different parts of the light spectrum. This meant we could reconstruct a rough weather map of the planet, even though we cannot resolve these different regions optically.”
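As a rough, illustrative check (estimated here from the figures quoted in this article, not taken from the paper itself), the size of that spectral shift follows from the Doppler relation. A wind speed of 33 000 km/hr corresponds to roughly 9 km/s, so

\[
\frac{\Delta\lambda}{\lambda} = \frac{v}{c} \approx \frac{9\ \mathrm{km/s}}{3\times 10^{5}\ \mathrm{km/s}} \approx 3\times 10^{-5},
\]

which at the roughly 2000 nm wavelengths probed by CRIRES+ amounts to a line shift of about 0.06 nm – small, but resolvable with a high-resolution infrared spectrograph, and enough to separate the signals coming from different regions of the planet.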
The astronomers also used the transit data to study the composition of WASP-127b’s atmosphere. They detected both water vapour and carbon monoxide. In addition, they found that the temperature was lower at the planet’s poles than elsewhere.
Removing unwanted signals
According to Nortmann, one of the challenges in the study was removing signals from Earth’s atmosphere and WASP-127b’s host star so as to focus on the planet itself. She notes that the work will have implications for researchers working on theoretical models that aim to predict wind patterns on exoplanets.
“They will now have to try to see if their models can recreate the winds speeds we have observed,” she tells Physics World. “The results also really highlight that when we investigate this and other planets, we have to take the 3D structure of winds into account when interpreting our results.”
The astronomers say they are now planning further observations of WASP-127b to find out whether its weather patterns are stable or change over time. “We would also like to investigate molecules on the planet other than H2O and CO,” Nortmann says. “This could possibly allow us to probe the wind at different altitudes in the planet’s atmosphere and understand the conditions there even better.”
Join us for an insightful webinar that delves into the role of Cobalt-60 in intracranial radiosurgery using Leksell Gamma Knife.
Through detailed discussions and expert insights, attendees will learn how Leksell Gamma Knife, powered by cobalt-60, has revolutionized – and continues to revolutionize – the field of radiosurgery, offering patients a safe and effective treatment option.
Participants will gain a comprehensive understanding of the use of cobalt in medical applications, highlighting its significance, and learn more about the unique properties of cobalt-60. The webinar will explore the benefits of cobalt-60 in intracranial radiosurgery and why it is an ideal choice for treating brain lesions while minimizing damage to surrounding healthy tissue.
Don’t miss this opportunity to enhance your knowledge and stay at the forefront of medical advancements in radiosurgery!
Riccardo Bevilacqua
Riccardo Bevilacqua, a nuclear physicist with a PhD in neutron data for Generation IV nuclear reactors from Uppsala University, has worked as a scientist for the European Commission and at various international research facilities. His career has transitioned from research to radiation safety and back to medical physics, the field that first interested him as a student in Italy. Based in Stockholm, Sweden, he leads global radiation safety initiatives at Elekta. Outside of work, Riccardo is a father, a stepfather, and writes popular science articles on physics and radiation.
Working with “student LEGO enthusiasts”, physicists at the University of Nottingham have developed a fully functional LEGO interferometer kit that consists of lasers, mirrors, beamsplitters and, of course, some LEGO bricks.
The set, designed as a teaching aid for secondary-school pupils and older, is aimed at making quantum science more accessible and engaging as well as demonstrating the basic principles of interferometry such as interference patterns.
“Developing this project made me realise just how incredibly similar my work as a quantum scientist is to the hands-on creativity of building with LEGO,” notes Nottingham quantum physicist Patrik Svancara. “It’s an absolute thrill to show the public that cutting-edge research isn’t just complex equations. It’s so much more about curiosity, problem-solving, and gradually bringing ideas to life, brick by brick!”
A team at Cardiff University will now work on the design and develop materials that can be used to train science teachers, with the hope that the sets will eventually be made available throughout the UK.
“We are sharing our experiences, LEGO interferometer blueprints, and instruction manuals across various online platforms to ensure our activities have a lasting impact and reach their full potential,” adds Svancara.
If you want to see the LEGO interferometer in action for yourself then it is being showcased at the Cosmic Titans: Art, Science, and the Quantum Universe exhibition at Nottingham’s Djanogly Art Gallery, which runs until 27 April.
2 In quantum cryptography, who eavesdrops on Alice and Bob?
(Courtesy: Andy Roberts IBM Research/Science Photo Library)
3 Which artist made the Quantum Cloud sculpture in London?
4 IBM used which kind of atoms to create its Quantum Mirage image?
5 When Werner Heisenberg developed quantum mechanics on Helgoland in June 1925, he had travelled to the island to seek respite from what? A His allergies B His creditors C His funders D His lovers
6 According to the State of Quantum 2024 report, how many countries around the world had government initiatives in quantum technology at the time of writing? A 6 B 17 C 24 D 33
7 The E91 quantum cryptography protocol was invented in 1991. What does the E stand for? A Edison B Ehrenfest C Einstein D Ekert
8 British multinational consumer-goods firm Reckitt sells a “Quantum” version of which of its household products? A Air Wick freshener B Finish dishwasher tablets C Harpic toilet cleaner D Vanish stain remover
9 John Bell’s famous theorem of 1964 provides a mathematical framework for understanding what quantum paradox? A Einstein–Podolsky–Rosen B Quantum indefinite causal order C Schrödinger’s cat D Wigner’s friend
10 Which celebrated writer popularized the notion of Schrödinger’s cat in the mid-1970s? A Douglas Adams B Margaret Atwood C Arthur C Clarke D Ursula K Le Guin
11 Which of these isn’t an interpretation of quantum mechanics? A Copenhagen B Einsteinian C Many worlds D Pilot wave
12 Which of these companies is not a real quantum company? A Qblox B Qruise C Qrypt D Qtips
13 Which celebrity was spotted in the audience at a meeting about quantum computers and music in London in December 2022? A Peter Andre B Peter Capaldi C Peter Gabriel D Peter Schmeichel
14 Which of the following birds has not yet been chosen by IBM as the name for different versions of its quantum hardware? A Condor B Eagle C Flamingo D Peregrine
15 When quantum theorist Erwin Schrödinger fled Nazi-controlled Vienna in 1938, where did he hide his Nobel-prize medal? A In a filing cabinet B Under a pot plant C Behind a sofa D In a desk drawer
16 Which of the following versions of the quantum Hall effect has not been observed so far in the lab? A Fractional quantum Hall effect B Anomalous fractional quantum Hall effect C Anyonic fractional quantum Hall effect D Excitonic fractional quantum Hall effect
17 What did Quantum Coffee on Front Street West in Toronto call its recently launched pastry, which is a superposition of a croissant and muffin? A Croissin B Cruffin C Muffant D Muffcro
18 What destroyed the Helgoland guest house where Heisenberg stayed in 1925 while developing quantum mechanics? A A bomb B A gas leak C A rat infestation D A storm
This quiz is for fun and there are no prizes. Answers will be revealed on the Physics World website in April.
This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.
Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.
As physicists, we like to think that physics and politics are – indeed, ought to be – unconnected. And a lot of the time, that’s true.
Certainly, the value of the magnetic moment of the muon or the behaviour of superconductors in a fusion reactor (look out for our feature article next week) has nothing to do with where anyone sits on the political spectrum. It’s subjects like climate change, evolution and medical research that tend to get caught in the political firing line.
But scientists of all disciplines in the US are now feeling the impact of politics at first hand. The new administration of Donald Trump has ordered the National Institutes of Health to slash the “indirect” costs of its research projects, threatening medical science and putting the universities that support it at risk. The National Science Foundation, which funds much of US physics, is under fire too, with staff sacked and grant funding paused.
Trump has also signed a flurry of executive orders that, among other things, ban federal government initiatives to boost diversity, equity and inclusion (DEI) and instruct government departments to “combat illegal private-sector DEI preferences, mandates, policies, programs and activities”. Some organizations are already abandoning such efforts for fear of these future repercussions.
What’s troubling for physics is that attacks on diversity initiatives fall most heavily on people from under-represented groups, who are more likely to quit physics or not go into it in the first place. That’s bad news for our subject as a whole because we know that a diverse community brings in smart ideas, new approaches and clever thinking.
The speed of changes in the US is bewildering too. Yes, the proportion of federal grants going to indirect costs might be too high, but making dramatic changes at short notice, with no consultation, is bizarre. There’s also a danger that universities will try to recoup lost money by raising tuition fees, which will hit poorer students the hardest.
So far, it’s been left to senior leaders such as James Gates – a theoretical physicist at the University of Maryland – to warn of the dangers in store. “My country,” he said at an event earlier this month, “is in for a 50-year period of a new dark ages.”
This episode of the Physics World Weekly podcast features an interview with the theoretical physicist Jim Gates who is at the University of Maryland and Brown University – both in the US.
He updates his theorist’s bucket list, which he first shared with Physics World back in 2014. This is a list of breakthroughs in physics that Gates would like to see happen before he dies.
One list item – the observation of gravitational waves – happened in 2015 and Gates explains the importance of the discovery. He also explains why the observation of gravitons, which are central to a theory of quantum gravity, is on his bucket list.
Quantum information
Gates is known for his work on supersymmetry and superstring theory, so it is not surprising that experimental evidence for those phenomena is on the bucket list. Gates also talks about a new item on his list that concerns the connections between quantum physics and information theory.
In this interview with Physics World’s Margaret Harris, Gates also reflects on how the current political upheaval in the US is affecting science and society – and what scientists can do to ensure that the public has faith in science.
I studied physics at the University of Oxford and I was the first person in my family to go to university. I then completed a DPhil at Oxford in 1991 studying cosmic rays and neutrinos. In 1992 I moved to University College London as a research fellow. That was the first time I went to CERN and two years later I began working on the Large Electron-Positron Collider, which was the predecessor of the Large Hadron Collider. I was fortunate enough to work on some of the really big measurements of the W and Z bosons and electroweak unification, so it was a great time in my life. In 2000 I worked at the University of Cambridge where I set up a neutrino group. It was then that I began working at Fermilab – the US’s premier particle physics lab.
So you flipped from collider physics to neutrino physics?
Over the past 20 years, I have oscillated between them and sometimes have done both in parallel. Probably the biggest step forward was in 2013 when I became spokesperson for the Deep Underground Neutrino Experiment – a really fascinating, challenging and ambitious project. In 2018 I was then appointed executive chair of the Science and Technology Facilities Council (STFC) – one of the main UK funding agencies. The STFC funds particle physics and astronomy in the UK and maintains relationships with organizations such as CERN and the Square Kilometre Array Observatory, as well as operating some of the UK’s biggest national infrastructures such as the Rutherford Appleton Laboratory and the Daresbury Laboratory.
What did that role involve?
It covered strategic funding of particle physics and astronomy in the UK and also involved running a very large scientific organization with about 2800 scientific, technical and engineering staff. It was very good preparation for the role as CERN director-general.
What attracted you to become CERN director-general?
CERN is such an important part of the global particle-physics landscape. But I don’t think there was ever a moment where I just thought “Oh, I must do this”. I’ve spent six years on the CERN Council, so I know the organization well. I realized I had all of the tools to do the job – a combination of the science, knowing the organization and then my experience in previous roles. CERN has been a large part of my life for many years, so it’s a fantastic opportunity for me.
It was quite a surreal moment. My first thoughts were “Well, OK, that’s fun”, so it didn’t really sink in until the evening. I’m obviously very happy and it was fantastic news but it was almost a feeling of “What happens now?”.
So what does happen now as CERN director-general designate?
There will be a little bit of shadowing, but you can’t shadow someone for the whole year, that doesn’t make very much sense. So what I really have to do is understand the organization, how it works from the inside and, of course, get to know the fantastic CERN staff, which I’ve already started doing. A lot of my time at the moment is meeting people and understanding how things work.
How might you do things differently?
I don’t think I will do anything too radical. I will have a look at where we can make things work better. But my priority for now is putting in place the team that will work with me from January. That’s quite a big chunk of work.
We have a decision to make on what comes after the High Luminosity-LHC in the mid-2040s
What do you think your leadership style will be?
I like to put around me a strong leadership team and then delegate and trust the leadership team to deliver. I’m there to set the strategic direction but also to empower them to deliver. That means I can take an outward focus and engage with the member states to promote CERN. I think my leadership style is to put in place a culture where the staff can thrive and operate in a very open and transparent way. That’s very important to me because it builds trust both within the organization and with CERN’s partners. The final thing is that I’m 100% behind CERN being an inclusive organization.
So diversity is an important aspect for you?
I am deeply committed to diversity and CERN is deeply committed to it in all its forms, and that will not change. This is a common value across Europe: our member states absolutely see diversity as being critical, and it means a lot to our scientific communities as well. From a scientific point of view, if we’re not supporting diversity, we’re losing people who are no different from others who come from more privileged backgrounds. Also, diversity at CERN has a special meaning: it means all the normal protected characteristics, but also national diversity. CERN is a community of 24 member states and quite a few associate member states, and ensuring nations are represented is incredibly important. It’s the way you do the best science, ultimately, and it’s the right thing to do.
The LHC is undergoing a £1bn upgrade towards a High Luminosity-LHC (HL-LHC) – what will that entail?
The HL-LHC is a big step up in terms of capability and the goal will be to increase the luminosity of the machine. We are also upgrading the detectors to make them even more precise. The HL-LHC will run from about 2030 to the early 2040s. So by the end of LHC operations, we would have only taken about 10% of the overall data set once you add what the HL-LHC is expected to produce.
What physics will that allow?
There’s a very specific measurement that we would like to make around the nature of the Higgs mechanism. There’s something very special about the Higgs boson: it has a very strange vacuum potential, so it’s always there in the vacuum. With the HL-LHC, we’re going to start to study the structure of that potential. That’s a really exciting and fundamental measurement and it’s a place where we might start to see new physics.
Beyond the HL-LHC, you will also be involved in planning what comes next. What are the options?
We have a decision to make on what comes after the HL-LHC in the mid-2040s. It seems a long way off but these projects need a 20-year lead-in. I think the consensus amongst the scientific community for a number of years has been that the next machine must explore the Higgs boson. The motivation for a Higgs factory is incredibly strong.
Yet there has not been much consensus whether that should be a linear or circular machine?
My personal view is that a circular collider is the way forward. One option is the Future Circular Collider (FCC) – a 91 km circumference collider that would be built at CERN.
What would the benefits of the FCC be?
We know how to build circular colliders and it gives you significantly more capability than a linear machine by producing more Higgs bosons. It is also a piece of research infrastructure that will be there for many years beyond the electron–positron collider. The other aspect is that at some point in the future, we are going to want a high-energy hadron collider to explore the unknown.
But it won’t come cheap, with estimates being about £12–15bn for the electron–positron version, dubbed FCC-ee?
While the price tag for the FCC-ee is significant, that is spread over 24 member states for 15 years and contributions can also come from elsewhere. I’m not saying it’s going to be easy to actually secure that jigsaw puzzle of resource, because money will need to come from outside Europe as well.
China is also considering the Circular Electron Positron Collider (CEPC) that could, if approved, be built by the 2030s. What would happen to the FCC if the CEPC were to go ahead?
I think that will be part of the European Strategy for Particle Physics, which will happen throughout this year, to think about the ifs and buts. Of course, nothing has really been decided in China. It’s a big project and it might not go ahead. I would say it’s quite easy to put down aggressive timescales on paper but actually delivering them is always harder. The big advantage of CERN is that we have the scientific and engineering heritage in building colliders and operating them. There is only one CERN in the world.
What do you make of alternative technologies such as muon colliders that could be built in the existing LHC tunnel and offer high energies?
It’s an interesting concept but technically we don’t know how to do it. There’s a lot of development work but it’s going to take a long time to turn that into a real machine. So looking at a muon collider on the time scale of the mid-2040s is probably unrealistic. What is critical for an organization like CERN and for global particle physics is that when the HL-LHC stops by 2040, there’s not a large gap without a collider project.
Last year CERN celebrated its 70th anniversary, what do you think particle physics might look like in the next 70 years?
If you look back at the big discoveries over the last 30 years we’ve seen neutrino oscillations, the Higgs boson, gravitational waves and dark energy. That’s four massive discoveries. In the coming decade we will know a lot more about the nature of the neutrino and the Higgs boson via the HL-LHC. The big hope is we find something else that we don’t expect.
A new “sneeze simulator” could help scientists understand how respiratory illnesses such as COVID-19 and influenza spread. Built by researchers at the Universitat Rovira i Virgili (URV) in Spain, the simulator is a three-dimensional model that incorporates a representation of the nasal cavity as well as other parts of the human upper respiratory tract. According to the researchers, it should help scientists to improve predictive models for respiratory disease transmission in indoor environments, and could even inform the design of masks and ventilation systems that mitigate the effects of exposure to pathogens.
For many respiratory illnesses, pathogen-laden aerosols expelled when an infected person coughs, sneezes or even breathes are important ways of spreading disease. Our understanding of how these aerosols disperse has advanced in recent years, mainly through studies carried out during and after the COVID-19 pandemic. Some of these studies deployed techniques such as spirometry and particle imaging to characterize the distributions of particle sizes and airflow when we cough and sneeze. Others developed theoretical models that predict how clouds of particles will evolve after they are ejected and how droplet sizes change as a function of atmospheric humidity and composition.
To build on this work, the URV researchers sought to understand how the shape of the nasal cavity affects these processes. They argue that neglecting this factor leads to an incomplete understanding of airflow dynamics and particle dispersion patterns, which in turn affects the accuracy of transmission modelling. As evidence, they point out that studies focused on sneezing (which occurs via the nose) and coughing (which occurs primarily via the mouth) detected differences in how far droplets travelled, the amount of time they stayed in the air and their pathogen-carrying potential – all parameters that feed into transmission models. The nasal cavity also affects the shape of the particle cloud ejected, which has previously been found to influence how pathogens spread.
The challenge they face is that the anatomy of the nasal cavity varies greatly from person to person, making it difficult to model. However, the URV researchers say that their new simulator, which is based on realistic 3D printed models of the upper respiratory tract and nasal cavity, overcomes this limitation, precisely reproducing the way particles are produced when people cough and sneeze.
Reproducing human coughs and sneezes
One of the features that allows the simulator to do this is a variable nostril opening. This enables the researchers to control air flow through the nasal cavity, and thus to replicate different sneeze intensities. The simulator also controls the strength of exhalations, meaning that the team could investigate how this and the size of nasal airways affects aerosol cloud dispersion.
During their experiments, which are detailed in Physics of Fluids, the URV researchers used high-speed cameras and a laser beam to observe how particles disperse following a sneeze. They studied three airflow rates typical of coughs and sneezes and monitored what happened with and without nasal cavity flow. Based on these measurements, they used a well-established model to predict the range of the aerosol cloud produced.
Simulator: Team member Nicolás Catalán with the three-dimensional model of the human upper respiratory tract. The mask in the background hides the 3D model to simulate any impact of the facial geometry on the particle dispersion. (Courtesy: Bureau for Communications and Marketing of the URV)
“We found that nasal exhalation disperses aerosols more vertically and less horizontally, unlike mouth exhalation, which projects them toward nearby individuals,” explains team member Salvatore Cito. “While this reduces direct transmission, the weaker, more dispersed plume allows particles to remain suspended longer and become more uniformly distributed, increasing overall exposure risk.”
These findings have several applications, Cito says. For one, the insights gained could be used to improve models used in epidemiology and indoor air quality management.
“Understanding how nasal exhalation influences aerosol dispersion can also inform the design of ventilation systems in public spaces, such as hospitals, classrooms and transportation systems to minimize airborne transmission risks,” he tells Physics World.
The results also suggest that protective measures such as masks should be designed to block both nasal and oral exhalations, he says, adding that full-face coverage is especially important in high-risk settings.
The researchers’ next goal is to study the impact of environmental factors such as humidity and temperature on aerosol dispersion. Until now, such experiments have only been carried out under controlled isothermal conditions, which does not reflect real-world situations. “We also plan to integrate our experimental findings with computational fluid dynamics simulations to further refine protective models for respiratory aerosol dispersion,” Cito reveals.
Physicists in Austria have shown that the static electricity acquired by identical material samples can evolve differently over time, based on each sample’s history of contact with other samples. Led by Juan Carlos Sobarzo and Scott Waitukaitis at the Institute of Science and Technology Austria, the team hope that their experimental results could provide new insights into one of the oldest mysteries in physics.
Static electricity – also known as contact electrification or triboelectrification – has been studied for centuries. However, physicists still do not understand some aspects of how it works.
“It’s a seemingly simple effect,” Sobarzo explains. “Take two materials, make them touch and separate them, and they will have exchanged electric charge. Yet, the experiments are plagued by unpredictability.”
This mystery is epitomized by an early experiment carried out by the German-Swedish physicist Johan Wilcke in 1757. When glass was touched to paper, Wilcke found that the glass gained a positive charge; when paper was touched to sulphur, the paper itself became positively charged.
Triboelectric series
Wilcke concluded that glass will become positively charged when touched to sulphur. This concept formed the basis of the triboelectric series, which ranks materials according to the charge they acquire when touched to another material.
Yet in the intervening centuries, the triboelectric series has proven to be notoriously inconsistent. Despite our vastly improved knowledge of material properties since the time of Wilcke’s experiments, even the latest attempts at ordering materials into triboelectric series have repeatedly failed to hold up to experimental scrutiny.
According to Sobarzo and colleagues, this problem has been confounded by the diverse array of variables associated with a material’s contact electrification. These include its electronic properties, pH, hydrophobicity and mechanochemistry, to name just a few.
In their new study, the team approached the problem from a new perspective. “In order to reduce the number of variables, we decided to use identical materials,” Sobarzo describes. “Our samples are made of a soft polymer (PDMS) that I fabricate myself in the lab, cut from a single piece of material.”
Starting from scratch
For these identical materials, the team proposed that triboelectric properties could evolve over time as the samples were brought into contact with other, initially identical samples. If this were the case, it would allow the team to build a triboelectric series from scratch.
At first, the results seemed as unpredictable as ever. However, as the same set of samples underwent repeated contacts, the team found that their charging behaviour became more consistent, gradually forming a clear triboelectric series.
Initially, the researchers attempted to uncover correlations between this evolution and variations in the parameters of each sample – with no conclusive results. This led them to consider whether the triboelectric behaviour of each sample was affected by the act of contact itself.
Contact history
“Once we started to keep track of the contact history of our samples – that is, the number of times each sample has been contacted to others – the unpredictability we saw initially started to make sense,” Sobarzo explains. “The more contacts samples had in their history, the more predictably they would behave. Not only that, but a sample with more contacts in its history will consistently charge negative against a sample with fewer contacts in its history.”
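This rule lends itself to a simple illustration. The short Python sketch below is a toy model written for this article – the sample names, contact counts and helper function are hypothetical, and it is not the team’s analysis code. It simply orders a set of nominally identical samples by their contact history and predicts which member of a contacting pair should charge negative.

# Toy illustration of the history-dependent rule reported by Sobarzo and colleagues:
# when two otherwise identical samples touch, the one with more prior contacts in
# its history is predicted to charge negative. All names and numbers are invented.

def predict_signs(history_a: int, history_b: int) -> tuple[str, str]:
    """Predict the sign of charge acquired by samples A and B after one contact."""
    if history_a == history_b:
        return ("unpredictable", "unpredictable")  # the rule gives no preference
    return ("-", "+") if history_a > history_b else ("+", "-")

# Hypothetical contact histories (number of previous contacts) for four samples
history = {"S1": 0, "S2": 3, "S3": 7, "S4": 12}

# A triboelectric series built purely from contact history:
# most positive (fewest contacts) to most negative (most contacts)
series = sorted(history, key=history.get)
print("Predicted series (+ to -):", series)

# Predict the outcome of touching S2 to S4, then record the new contacts
sign_a, sign_b = predict_signs(history["S2"], history["S4"])
print(f"S2 charges {sign_a}, S4 charges {sign_b}")
for sample in ("S2", "S4"):
    history[sample] += 1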
To explain the origins of this history-dependent behaviour, the team used a variety of techniques to analyse differences between the surfaces of uncontacted samples, and those which had already been contacted several times. Their measurements revealed just one difference between samples at different positions on the triboelectric series. This was their nanoscale surface roughness, which smoothed out as the samples experienced more contacts.
“I think the main take away is the importance of contact history and how it can subvert the widespread unpredictability observed in tribocharging,” Sobarzo says. “Contact is necessary for the effect to happen, it’s part of the name ‘contact electrification’, and yet it’s been widely overlooked.”
The team is still uncertain how surface roughness could be affecting their samples’ place within the triboelectric series. However, their results could now provide the first steps towards a comprehensive model that can predict a material’s triboelectric properties based on its contact-induced surface roughness.
Sobarzo and colleagues are hopeful that such a model could enable robust methods for predicting the charges that any given pair of materials will acquire as they touch and separate. In turn, it may finally help solve one of the longest-standing mysteries in physics.
Nanoparticle-mediated DBS (I) Pulsed NIR irradiation triggers the thermal activation of TRPV1 channels. (II, III) NIR-induced β-syn peptide release into neurons disaggregates α-syn fibrils and thermally activates autophagy to clear the fibrils. This therapy effectively reverses the symptoms of Parkinson’s disease. Created using BioRender.com. (Courtesy: CC BY-NC/Science Advances 10.1126/sciadv.ado4927)
A photothermal, nanoparticle-based deep brain stimulation (DBS) system has successfully reversed the symptoms of Parkinson’s disease in laboratory mice. Under development by researchers in Beijing, China, the injectable, wireless DBS not only reversed neuron degeneration, but also boosted dopamine levels by clearing out the buildup of harmful fibrils around dopamine neurons. Following DBS treatment, diseased mice exhibited locomotive behaviour nearly comparable to that of healthy control mice.
Parkinson’s disease is a chronic brain disorder characterized by the degeneration of dopamine-producing neurons and the subsequent loss of dopamine in regions of the brain. Current DBS treatments focus on amplifying dopamine signalling and production, and may require permanent implantation of electrodes in the brain. Another approach under investigation is optogenetics, which involves gene modification. Both techniques increase dopamine levels and reduce Parkinsonian motor symptoms, but they do not restore degenerated neurons to stop disease progression.
Team leader Chunying Chen from the National Center for Nanoscience and Technology. (Courtesy: Chunying Chen)
The research team, at the National Center for Nanoscience and Technology of the Chinese Academy of Sciences, hypothesized that the heat-sensitive receptor TRPV1, which is highly expressed in dopamine neurons, could serve as a modulatory target to activate dopamine neurons in the substantia nigra of the midbrain. This region contains a large concentration of dopamine neurons and plays a crucial role in how the brain controls bodily movement.
Previous studies have shown that neuron degeneration is mainly driven by α-synuclein (α-syn) fibrils aggregating in the substantia nigra. Successful treatment, therefore, relies on removing this buildup, which requires restarting the intracellular autophagic process (in which a cell breaks down and removes unnecessary or dysfunctional components).
As such, principal investigator Chunying Chen and colleagues aimed to develop a therapeutic system that could reduce α-syn accumulation by simultaneously disaggregating α-syn fibrils and initiating the autophagic process. Their three-component DBS nanosystem, named ATB (Au@TRPV1@β-syn), combines photothermal gold nanoparticles, dopamine neuron-activating TRPV1 antibodies, and β-synuclein (β-syn) peptides that break down α-syn fibrils.
The ATB nanoparticles anchor to dopamine neurons through the TRPV1 receptor and then, acting as nanoantennae, convert pulsed near-infrared (NIR) irradiation into heat. This activates the heat-sensitive TRPV1 receptor and restores degenerated dopamine neurons. At the same time, the nanoparticles release β-syn peptides that clear out α-syn fibril buildup and stimulate intracellular autophagy.
The researchers first tested the system in vitro in cellular models of Parkinson’s disease. They verified that under NIR laser irradiation, ATB nanoparticles activate neurons through photothermal stimulation by acting on the TRPV1 receptor, and that the nanoparticles successfully counteracted the α-syn preformed fibril (PFF)-induced death of dopamine neurons. In cell viability assays, neuron death was reduced from 68% to zero following ATB nanoparticle treatment.
Next, Chen and colleagues investigated mice with PFF-induced Parkinson’s disease. The DBS treatment begins with stereotactic injection of the ATB nanoparticles directly into the substantia nigra. They selected this approach over systemic administration because it provides precise targeting, avoids the blood–brain barrier and achieves a high local nanoparticle concentration with a low dose – potentially boosting treatment effectiveness.
Following injection of either nanoparticles or saline, the mice underwent pulsed NIR irradiation once a week for five weeks. The team then performed a series of tests to assess the animals’ motor abilities (after a week of training), comparing the performance of treated and untreated PFF mice, as well as healthy control mice. This included the rotarod test, which measures the time until the animal falls from a rotating rod that accelerates from 5 to 50 rpm over 5 min, and the pole test, which records the time for mice to crawl down a 75 cm-long pole.
Motor tests Results of (left to right) rotarod, pole and open field tests, for control mice, mice with PFF-induced Parkinson’s disease, and PFF mice treated with ATB nanoparticles and NIR laser irradiation. (Courtesy: CC BY-NC/Science Advances 10.1126/sciadv.ado4927)
The team also performed an open field test to evaluate locomotive activity and exploratory behaviour. Here, mice are free to move around a 50 x 50 cm area, while their movement paths and the number of times they cross a central square are recorded. In all tests, mice treated with nanoparticles and irradiation significantly outperformed untreated controls, with near comparable performance to that of healthy mice.
Visualizing the dopamine neurons via immunohistochemistry revealed a reduction in neurons in PFF-treated mice compared with controls. This loss was reversed following nanoparticle treatment. Safety assessments determined that the treatment did not cause biochemical toxicity and that the heat generated by the NIR-irradiated ATB nanoparticles did not cause any considerable damage to the dopamine neurons.
Eight weeks after treatment, none of the mice experienced any toxicities. The ATB nanoparticles remained stable in the substantia nigra, with only a few particles migrating to cerebrospinal fluid. The researchers also report that the particles did not migrate to the heart, liver, spleen, lung or kidney and were not found in blood, urine or faeces.
Chen tells Physics World that having discovered the neuroprotective properties of gold clusters in Parkinson’s disease models, the researchers are now investigating therapeutic strategies based on gold clusters. Their current research focuses on engineering multifunctional gold cluster nanocomposites capable of simultaneously targeting α-syn aggregation, mitigating oxidative stress and promoting dopamine neuron regeneration.
Three decades ago – in May 1995 – the British-born mathematical physicist Freeman Dyson published an article in the New York Review of Books. Entitled “The scientist as rebel”, it described how all scientists have one thing in common: no matter what their background or era, they are rebelling against the restrictions imposed by the culture in which they live.
“For the great Arab mathematician and astronomer Omar Khayyam, science was a rebellion against the intellectual constraints of Islam,” Dyson wrote. Leading Indian physicists in the 20th century, he added, were rebelling against their British colonial rulers and the “fatalistic ethic of Hinduism”. Dyson himself traced his interest in science to an act of rebellion against the drudgery of compulsory Latin and football at school.
“Science is an alliance of free spirits in all cultures rebelling against the local tyranny that each culture imposes,” he wrote. Through those acts of rebellion, scientists expose “oppressive and misguided conceptions of the world”. The discovery of evolution and of DNA changed our sense of what it means to be human, he said, while black holes and Gödel’s theorem gave us new views of the universe and the nature of mathematics.
But Dyson feared that this view of science was being occluded. Writing in the 1990s, a time of furious academic debate about the “social construction of science”, he worried that science’s liberating role was being obscured by a cabal of sociologists and philosophers who viewed scientists as humans like any others, governed by social, psychological and political motives. Dyson didn’t disagree with that view, but underlined that nature is the ultimate arbiter of what’s important.
Today’s rebels
One wonders what Dyson, who died in 2020, would make of current events. It’s no longer just a small band of academics disputing science. Its opponents now include powerful and highly placed politicians, who attack scientists and scientific findings as lacking objectivity and being politically motivated. Science, they say, is politics by other means. They then use that charge to justify ignoring or openly rejecting scientific findings when creating regulations and making decisions.
Thousands of researchers, for instance, contribute to efforts by the United Nations Intergovernmental Panel on Climate Change (IPCC) to measure the impact and consequences of the rising amounts of carbon dioxide in the atmosphere. Yet US President Donald Trump – speaking after Hurricane Helene left a trail of destruction across the south-east US last year – called climate change “one of the great scams”. Meanwhile, US chief justice John Roberts once rejected using mathematics to quantify the partisan effects of gerrymandering, calling it “sociological gobbledygook”.
These attitudes are not only anti-science; they also undermine democracy by sidelining experts and dissenting voices, curtailing real debate, and scapegoating and harming citizens.
A worrying precedent for how things may play out in the Trump administration occurred in 2012 when North Carolina’s legislators passed House Bill 819. By prohibiting the use of models of sea-level rise to protect people living near the coast from flooding, the bill damaged the ability of state officials to protect its coastline, resources and citizens. It also prevented other officials from fulfilling their duty to advise and protect people against threats to life and property.
In the current superheated US political climate, many scientific findings are charged with being agenda-driven rather than the outcomes of checked and peer-reviewed investigations. During the first Trump administration, bills were introduced in the US Congress to block the use of science produced by the Department of Energy in policymaking, so as to avoid admitting the reality of climate change.
We can expect more anti-scientific efforts, if the first Trump administration is anything to go by. Dyson’s rebel alliance, it seems, now faces not just posturing academics but a Galactic Empire.
The critical point
In his 1995 essay, Dyson described how scientists can be liberators by abstaining from political activity rather than militantly engaging in it. But how might he have seen them meeting this moment? Dyson would surely not see them turning away from their work to become politicians themselves. After all, it’s abstaining from politics that empowers scientists to be “in rebellion against the restrictions” in the first place. But Dyson would also see them as aware that science is not the driving force in creating policies; political implementation of scientific findings ultimately depends on politicians appreciating the authority and independence of these findings.
One of Trump’s most audacious “Presidential Actions”, made in the first week of his presidency, was to define sex. The action makes a female “a person belonging, at conception, to the sex that produces the large reproductive cell” and a male “a person belonging, at conception, to the sex that produces the small reproductive cell”. Trump ordered the government to use this “fundamental and incontrovertible reality” in all regulations.
An editorial in Nature (563 5) said that this “has no basis in science”, while cynics, citing certain biological interpretations that all human zygotes and embryos are initially effectively female, gleefully insisted that the order makes all of us female, including the new US president. For me and other Americans, Trump’s action restructures the world as it has been since Genesis.
Still, I imagine that Dyson would see his rebels as hopeful, knowing that politicians don’t have the last word on what they are doing. For, while politicians can create legislation, they cannot legislate creation.
In the teeth of the Arctic winter, polar-bear fur always remains free of ice – but how? Researchers in Ireland and Norway say they now have the answer, and it could have applications far beyond wildlife biology. Having traced the fur’s ice-shedding properties to a substance produced by glands near the root of each hair, the researchers suggest that chemicals found in this substance could form the basis of environmentally-friendly new anti-icing surfaces and lubricants.
The substance in the bear’s fur is called sebum, and team member Julian Carolan, a PhD candidate at Trinity College Dublin and the AMBER Research Ireland Centre, explains that it contains three major components: cholesterol, diacylglycerols and anteiso-methyl-branched fatty acids. These chemicals have an ice adsorption profile similar to that of perfluoroalkyl substances (PFAS), which are commonly employed in anti-icing applications.
“While PFAS are very effective, they can be damaging to the environment and have been dubbed ‘forever chemicals’,” explains Carolan, the lead author of a Science Advances paper on the findings. “Our results suggest that we could replace these fluorinated substances with these sebum components.”
With and without sebum
Carolan and colleagues obtained these results by comparing polar bear hairs naturally coated with sebum to hairs where the sebum had been removed using a surfactant found in washing-up liquid. Their experiment involved forming a 2 x 2 x 2 cm block of ice on the samples and placing them in a cold chamber. Once the ice was in place, the team used a force gauge on a track to push it off. By measuring the maximum force needed to remove the ice and dividing this by the area of the sample, they obtained ice adhesion strengths for the washed and unwashed fur.
This experiment showed that the ice adhesion of unwashed polar bear fur is exceptionally low. While the often-accepted threshold for “icephobicity” is around 100 kPa, the unwashed fur measured as little as 50 kPa. In contrast, the ice adhesion of washed (sebum-free) fur is much higher, coming in at least 100 kPa greater than the unwashed fur.
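The adhesion figures quoted here follow from the peak-force-over-area definition described above. The sketch below reproduces that arithmetic; the 20 N gauge reading is a hypothetical value chosen to yield 50 kPa, not a number reported in the study.

```python
# Sketch of the adhesion arithmetic described above. The contact area follows
# from the 2 x 2 x 2 cm ice block; the 20 N peak force is a hypothetical gauge
# reading chosen to give 50 kPa, not a value reported in the study.

ICEPHOBIC_THRESHOLD_PA = 100e3  # often-quoted icephobicity threshold (~100 kPa)

def ice_adhesion_strength(max_force_n: float, contact_area_m2: float) -> float:
    """Ice adhesion strength (Pa) = peak removal force / ice-sample contact area."""
    return max_force_n / contact_area_m2

contact_area = 0.02 * 0.02  # 2 cm x 2 cm face of the ice block, in m^2
adhesion = ice_adhesion_strength(20.0, contact_area)
print(f"{adhesion / 1e3:.0f} kPa, icephobic: {adhesion < ICEPHOBIC_THRESHOLD_PA}")
# -> 50 kPa, icephobic: True
```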
What is responsible for the low ice adhesion?
Guided by this evidence of sebum’s role in keeping the bears ice-free, the researchers’ next task was to determine its exact composition. They did this using a combination of techniques, including gas chromatography, mass spectrometry, liquid chromatography-mass spectrometry and nuclear magnetic resonance spectroscopy. They then used density functional theory methods to calculate the adsorption energy of the major components of the sebum. “In this way, we were able to identify which elements were responsible for the low ice adhesion we had identified,” Carolan tells Physics World.
This is not the first time that researchers have investigated animals’ anti-icing properties. A team led by Anne-Marie Kietzig at Canada’s McGill University, for example, previously found that penguin feathers also boast an impressively low ice adhesion. Team leader Bodil Holst says that she was inspired to study polar bear fur by a nature documentary that depicted the bears entering and leaving water to hunt, rolling around in the snow and sliding down hills – all while remaining ice-free. She and her colleagues collaborated with Jon Aars and Magnus Andersen of the Norwegian Polar Institute, which carries out a yearly polar bear monitoring campaign in Svalbard, Norway, to collect their samples.
Insights into human technology
As well as solving an ecological mystery and, perhaps, inspiring more sustainable new anti-icing lubricants, Carolan says the team’s work is also yielding insights into technologies developed by humans living in the Arctic. “Inuit people have long used polar bear fur for hunting stools (nikorfautaq) and sandals (tuterissat),” he explains. “It is notable that traditional preparation methods protect the sebum on the fur by not washing the hair-covered side of the skin. This maintains its low ice adhesion property while allowing for quiet movement on the ice – essential for still hunting.”
The researchers now plan to explore whether it is possible to apply the sebum components they identified to surfaces as lubricants. Another potential extension, they say, would be to investigate the ice-free properties of other Arctic mammals such as reindeer, the Arctic fox and the wolverine. “It would be interesting to discover if these animals share similar anti-icing properties,” Carolan says. “For example, wolverine fur is used in parka ruffs by Canadian Inuit as frost formed on it can easily be brushed off.”
For the first time, inverse design has been used to engineer specific functionalities into a universal spin-wave-based device. It was created by Andrii Chumak and colleagues at Austria’s University of Vienna, who hope that their magnonic device could pave the way for substantial improvements to the energy efficiency of data processing techniques.
Inverse design is a fast-growing technique for developing new materials and devices that are specialized for highly specific uses. Starting from a desired functionality, inverse-design algorithms work backwards to find the best system or structure to achieve that functionality.
“Inverse design has a lot of potential because all we have to do is create a highly reconfigurable medium, and give it control over a computer,” Chumak explains. “It will use algorithms to get any functionality we want with the same device.”
One area where inverse design could be useful is creating systems for encoding and processing data using quantized spin waves called magnons. These quasiparticles are collective excitations that propagate in magnetic materials. Information can be encoded in the amplitude, phase, and frequency of magnons – which interact with radio-frequency (RF) signals.
Collective rotation
A magnon propagates via the collective rotation of stationary spins (no particles move), so it offers a highly energy-efficient way to transfer and process information. So far, however, magnonics has been limited by existing approaches to the design of RF devices.
“Usually we use direct design – where we know how the spin waves behave in each component, and put the components together to get a working device,” Chumak explains. “But this sometimes takes years, and only works for one functionality.”
Recently, two theoretical studies considered how inverse design could be used to create magnonic devices. These took the physics of magnetic materials as a starting point to engineer a neural-network device.
Building on these results, Chumak’s team set out to show how that approach could be realized in the lab using a 7×7 array of independently-controlled current loops, each generating a small magnetic field.
Thin magnetic film
The team attached the array to a thin magnetic film of yttrium iron garnet. As RF spin waves propagated through the film, differences in the strengths of the magnetic fields generated by the loops induced a variety of effects, including phase shifts, interference and scattering. This in turn created complex patterns that could be tuned in real time by adjusting the current in each individual loop.
To make these adjustments, the researchers developed a pair of feedback-loop algorithms. These took a desired functionality as an input, and iteratively adjusted the current in each loop to optimize the spin wave propagation in the film for specific tasks.
This approach enabled them to engineer two specific signal-processing functionalities in their device. These are a notch filter, which blocks a specific range of frequencies while allowing others to pass through; and a demultiplexer, which separates a combined signal into its distinct component signals. “These RF applications could potentially be used for applications including cellular communications, WiFi, and GPS,” says Chumak.
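To give a schematic sense of how such a feedback loop might operate, the sketch below greedily nudges the 49 loop currents towards a notch-filter target. It is not the team’s algorithm, and the forward model is a stand-in black box rather than the measured spin-wave transmission through the real device.

```python
import numpy as np

# Schematic sketch of an inverse-design feedback loop -- not the team's
# algorithm. The forward model below is a stand-in black box; in the real
# device the "response" is the measured spin-wave transmission through the
# yttrium iron garnet film under the 7 x 7 array of current loops.

rng = np.random.default_rng(0)
freqs = np.linspace(0.0, 1.0, 64)  # normalized frequency axis

def device_response(currents: np.ndarray) -> np.ndarray:
    """Placeholder transmission spectrum as a function of the 49 loop currents."""
    return 0.5 + 0.5 * np.cos(2 * np.pi * freqs * (1 + currents.sum()) + currents.mean())

# Target functionality: a notch filter that suppresses a band around mid-frequency.
target = np.where(np.abs(freqs - 0.5) < 0.05, 0.0, 1.0)

currents = np.zeros((7, 7))  # one drive current per loop
best_error = np.mean((device_response(currents) - target) ** 2)

# Greedy feedback loop: perturb one loop current at a time, keep improvements.
for _ in range(5000):
    trial = currents.copy()
    i, j = rng.integers(0, 7, size=2)
    trial[i, j] += rng.normal(scale=0.05)
    error = np.mean((device_response(trial) - target) ** 2)
    if error < best_error:
        currents, best_error = trial, error

print(f"final mean-squared error: {best_error:.3f}")
```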
While the device is a success in terms of functionality, it has several drawbacks, explains Chumak. “The demonstrator is big and consumes a lot of energy, but it was important to understand whether this idea works or not. And we proved that it did.”
Through their future research, the team will now aim to reduce these energy requirements, and will also explore how inverse design could be applied more universally – perhaps paving the way for ultra-efficient magnonic logic gates.