Agrivoltaics is an interdisciplinary research area that lies at the intersection of photovoltaics (PVs) and agriculture. Traditional PV systems used in agricultural settings are made from silicon and are opaque, blocking sunlight from reaching plants and hindering their growth. There is therefore a need for advanced semi-transparent solar cells that can provide sufficient power while still letting enough light through for plants to grow, rather than casting them into shade.
In a recent study headed up at the Institute for Microelectronics and Microsystems (IMM) in Italy, Alessandra Alberti and colleagues investigated the potential of semi-transparent perovskite solar cells as coatings on the roof of a greenhouse housing radicchio seedlings.
Solar cell shading an issue for plant growth
Opaque solar cells are known to induce shade avoidance syndrome in plants. This can cause morphological adaptations, including changes in chlorophyll content and an increased leaf area, as well as a change in the plant’s metabolite profile. Lower UV exposure can also reduce the content of polyphenols – antioxidant and anti-inflammatory molecules that humans obtain from plants.
Addressing these issues requires the development of semi-transparent PV panels with high enough efficiencies to be commercially feasible. Some common panels that can be made thin enough to be semi-transparent include organic and dye-sensitized solar cells (DSSCs). While these have been used to provide power while growing tomatoes and lettuces, they typically only have a power conversion efficiency (PCE) of a few percent – a more efficient energy harvester is still required.
A semi-transparent perovskite solar cell greenhouse
Perovskite PVs are seen as the future of the solar cell industry and show a lot of promise in terms of PCE, even if they are not yet up to the level of silicon. Importantly for greenhouse applications, perovskite PVs can also be made semi-transparent.
Experimental set-up The laboratory-scale greenhouse. (Courtesy: CNR-IMM)
In this latest study, the researchers designed a laboratory-scale greenhouse using a semi-transparent europium (Eu)-enriched CsPbI3 perovskite-coated rooftop and investigated how radicchio seeds grew in the greenhouse for 15 days. They chose this Eu-enriched perovskite composition because CsPbI3 has superior thermal stability compared with other perovskites, making it ideal for long exposures to the Sun’s rays. The addition of Eu into the CsPbI3 structure improved the perovskite stability by minimizing the number of intrinsic defects and increasing the surface-to-volume ratio of perovskite grains.
Alongside this stability, this perovskite has no volatile components that could effuse under high surface temperatures. It also typically possesses a high PCE – the record for this composition is 21.15%, which is significantly higher, and far closer to commercial feasibility, than has been achieved with organic PVs and DSSCs. This perovskite therefore offers a good trade-off between the PCE that can be achieved and the amount of light transmitted to allow the seedlings to grow.
Low light conditions promote seedling growth
Even though the seedlings were exposed to lower light levels than under natural light, the team found that they grew more quickly, and with bigger leaves, than those under glass panels. This is attributed to the perovskite acting as a filter that allows only red light to pass through. Red light is known to improve the photosynthetic efficiency and light absorption capabilities of plants, as well as increase the levels of sucrose and hexose within the plant.
The researchers also found that seedlings grown under these conditions had different gene expression patterns compared with those grown under glass. These expression patterns were associated with environmental stress responses, growth regulation, metabolism and light perception, suggesting that the seedlings naturally adapted to different light conditions – although further research is needed to see whether these adaptations will improve the crop yield.
Overall, the use of perovskite PVs strikes a good balance between providing enough power to cover the annual energy needs for irrigation, lighting and air conditioning, and still allowing the seedlings to grow – indeed, to grow more quickly. The team suggest that the perovskite solar cells could help with indoor food production in the agricultural sector as a potentially affordable solution, although more work is now needed at much larger scales to test the technology’s commercial feasibility.
The first results from the Dark Energy Spectroscopic Instrument (DESI) are a cosmological bombshell, suggesting that the strength of dark energy has not remained constant throughout history. Instead, it appears to be weakening at the moment, and in the past it seems to have existed in an extreme form known as “phantom” dark energy.
The new findings have the potential to change everything we thought we knew about dark energy, a hypothetical entity that is used to explain the accelerating expansion of the universe.
“The subject needed a bit of a shake-up, and we’re now right on the boundary of seeing a whole new paradigm,” says Ofer Lahav, a cosmologist from University College London and a member of the DESI team.
DESI is mounted on the Nicholas U Mayall four-metre telescope at Kitt Peak National Observatory in Arizona, and has the primary goal of shedding light on the “dark universe”. The term dark universe reflects our ignorance of the nature of about 95% of the mass–energy of the cosmos.
Intrinsic energy density
Today’s favoured Standard Model of cosmology is the lambda–cold dark matter (CDM) model. Lambda refers to a cosmological constant, which was first introduced by Albert Einstein in 1917 to keep the universe in a steady state by counteracting the effect of gravity. We now know that the universe is expanding at an accelerating rate, so lambda is instead used to quantify this acceleration. It can be interpreted as an intrinsic energy density that drives the expansion. Now, DESI’s findings imply that this energy density is erratic and even more mysterious than previously thought.
DESI is creating a humungous 3D map of the universe. Its first full data release comprises 270 terabytes of data and was made public in March. The data include distance and spectral information about 18.7 million objects, including 12.1 million galaxies and 1.6 million quasars. Spectral details of about four million nearby stars are also included.
This is the largest 3D map of the universe ever made, bigger even than all the previous spectroscopic surveys combined. DESI scientists are already working with even more data that will be part of a second public release.
DESI can observe patterns in the cosmos called baryonic acoustic oscillations (BAOs). These were created after the Big Bang, when the universe was filled with a hot plasma of atomic nuclei and electrons. Density waves associated with quantum fluctuations in the Big Bang rippled through this plasma until about 379,000 years after the Big Bang, when the temperature dropped sufficiently to allow the atomic nuclei to sweep up all the electrons. This froze the plasma density waves into regions of high mass density (where galaxies formed) and low density (intergalactic space). These density fluctuations are the BAOs, and they can be mapped by doing statistical analyses of the separation between pairs of galaxies and quasars.
The BAOs grow as the universe expands, and therefore they provide a “standard ruler” that allows cosmologists to study the expansion of the universe. DESI has observed galaxies and quasars going back 11 billion years in cosmic history.
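In practice, the pair separations are summarized by a two-point correlation function. A generic sketch of the statistic (not DESI’s specific estimator) is

$$ \mathrm{d}P = \bar{n}^{2}\,[\,1 + \xi(r)\,]\,\mathrm{d}V_{1}\,\mathrm{d}V_{2}, $$

where $\bar{n}$ is the mean galaxy number density and $\xi(r)$ measures the excess probability of finding two galaxies separated by a comoving distance $r$. The BAOs show up as a bump in $\xi(r)$ at a characteristic scale of roughly 150 megaparsecs – the sound horizon frozen in when the plasma waves stalled – and tracking how that apparent scale changes with redshift is what turns the BAOs into a standard ruler.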
Density fluctuations DESI observations showing nearby bright galaxies (yellow), luminous red galaxies (orange), emission-line galaxies (blue), and quasars (green). The inset shows the large-scale structure of a small portion of the universe. (Courtesy: Claire Lamman/DESI collaboration)
“What DESI has measured is that the distance [between pairs of galaxies] is smaller than what is predicted,” says team member Willem Elbers of the UK’s University of Durham. “We’re finding that dark energy is weakening, so the acceleration of the expansion of the universe is decreasing.”
As co-chair of DESI’s Cosmological Parameter Estimation Working Group, it is Elbers’ job to test different models of cosmology against the data. The results point to a bizarre form of “phantom” dark energy that boosted the expansion acceleration in the past, but is not present today.
The puzzle is related to dark energy’s equation of state, which describes the ratio of the pressure of the universe to its energy density. In a universe with an accelerating expansion, the equation of state has a value less than about –1/3. A value of exactly –1 characterizes the lambda–CDM model.
However, some alternative cosmological models allow the equation of state to be lower than –1. This means that the universe would expand faster than the cosmological constant would have it do. This points to a “phantom” dark energy that grew in strength as the universe expanded, but then petered out.
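In symbols (a generic sketch of the definitions, not the collaboration’s own parametrization), the equation-of-state parameter is

$$ w = \frac{p}{\rho c^{2}}, $$

with $w < -\tfrac{1}{3}$ required for accelerating expansion, $w = -1$ corresponding to the cosmological constant of lambda–CDM, and $w < -1$ defining the “phantom” regime.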
“It seems that dark energy was ‘phantom’ in the past, but it’s no longer phantom today,” says Elbers. “And that’s interesting because the simplest theories about what dark energy could be do not allow for that kind of behaviour.”
Dark energy takes over
The universe began expanding because of the energy of the Big Bang. We already know that for the first few billion years of cosmic history this expansion was slowing, because the universe was smaller and the gravity of all the matter it contains was strong enough to put the brakes on the expansion. As the universe expanded and its density dropped, gravity’s influence waned and dark energy was able to take over. What DESI is telling us is that at the point when dark energy became more influential than matter, it was in its phantom guise.
“This is really weird,” says Lahav; and it gets weirder. The energy density of dark energy reached a peak at a redshift of 0.4, which equates to about 4.5 billion years ago. At that point, dark energy ceased its phantom behaviour and since then the strength of dark energy has been decreasing. The expansion of the universe is still accelerating, but not as rapidly. “Creating a universe that does that, which gets to a peak density and then declines, well, someone’s going to have to work out that model,” says Lahav.
Scalar quantum field
Unlike the unchanging dark-energy density described by the cosmological constant, an alternative concept called quintessence describes dark energy as a scalar quantum field that can have different values at different times and locations.
However, Elbers explains that a single field such as quintessence is incompatible with phantom dark energy. Instead, he says that “there might be multiple fields interacting, which on their own are not phantom but together produce this phantom equation of state,” adding that “the data seem to suggest that it is something more complicated.”
Before cosmology is overturned, however, more data are needed. On its own, the DESI data’s departure from the Standard Model of cosmology has a statistical significance of 1.7σ, which is well below the 5σ that is considered a discovery in cosmology. However, when combined with independent observations of the cosmic microwave background and type Ia supernovae, the significance jumps to 4.2σ.
“Big rip” avoided
Confirmation of a phantom era and a current weakening would mean that dark energy is far more complex than previously thought – deepening the mystery surrounding the expansion of the universe. Indeed, had dark energy continued on its phantom course, it would have caused a “big rip” in which cosmic expansion is so extreme that space itself is torn apart.
“Even if dark energy is weakening, the universe will probably keep expanding, but not at an accelerated rate,” says Elbers. “Or it could settle down in a quiescent state, or if it continues to weaken in the future we could get a collapse” into a big crunch. With a form of dark energy that seems to do what it wants as its equation of state changes with time, it’s impossible to say what it will do in the future until cosmologists have more data.
Lahav, however, will wait until 5σ before changing his views on dark energy. “Some of my colleagues have already sold their shares in lambda,” he says. “But I’m not selling them just yet. I’m too cautious.”
The observations are reported in a series of papers on the arXiv preprint server.
Core physics This apple tree at Woolsthorpe Manor is believed to have been the inspiration for Isaac Newton. (Courtesy: Bs0u10e01/CC BY-SA 4.0)
Physicists in the UK have drawn up plans for an International Year of Classical Physics (IYC) in 2027 – exactly three centuries after the death of Isaac Newton. Following successful international years devoted to astronomy (2009), light (2015) and quantum science (2025), they want more recognition for a branch of physics that underpins much of everyday life.
A bright green Flower of Kent apple has now been picked as the official IYC logo in tribute to Newton, who is seen as the “father of classical physics”. Newton, who died in 1727, famously developed our understanding of gravity – one of the fundamental forces of nature – after watching an apple fall from a tree of that variety in his home town of Woolsthorpe, Lincolnshire, in 1666.
“Gravity is central to classical physics and contributes an estimated $270bn to the global economy,” says Crispin McIntosh-Smith, chief classical physicist at the University of Lincoln. “Whether it’s rockets escaping Earth’s pull or skiing down a mountain slope, gravity is loads more important than quantum physics.”
McIntosh-Smith, who also works in cosmology having developed the Cosmic Crisp theory of the universe during his PhD, will now be leading attempts to get endorsement for IYC from the United Nations. He is set to take a 10-strong delegation from Bramley, Surrey, to Paris later this month.
An official gala launch ceremony is being pencilled in for the Travelodge in Grantham, which is the closest hotel to Newton’s birthplace. A parallel scientific workshop will take place in the grounds of Woolsthorpe Manor, with a plenary lecture from TV physicist Brian Cox. Evening entertainment will feature a jazz band.
Numerous outreach events are planned for the year, including the world’s largest demonstration of a wooden block on a ramp balanced by a crate on a pulley. It will involve schoolchildren pouring Golden Delicious apples into the crate to illustrate Newton’s laws of motion. Physicists will also be attempting to break the record for the tallest tower of stacked Braeburn apples.
But there is envy from those behind the 2025 International Year of Quantum Science and Technology. “Of course, classical physics is important but we fear this year will peel attention away from the game-changing impact of quantum physics,” says Anne Oyd from the start-up firm Qrunch, who insists she will only play a cameo role in events. “I believe the impact of classical physics is over-hyped.”
FLASH irradiation, an emerging cancer treatment that delivers radiation at ultrahigh dose rates, has been shown to significantly reduce acute skin toxicity in laboratory mice compared with conventional radiotherapy. Having demonstrated this effect using proton-based FLASH treatments, researchers from Aarhus University in Denmark have now repeated their investigations using electron-based FLASH (eFLASH).
Reporting their findings in Radiotherapy and Oncology, the researchers note a “remarkable similarity” between eFLASH and proton FLASH with respect to acute skin sparing.
Principal investigator Brita Singers Sørensen and colleagues quantified the dose–response modification of eFLASH irradiation for acute skin toxicity and late fibrotic toxicity in mice, using similar experimental designs to those previously employed for their proton FLASH study. This enabled the researchers to make direct quantitative comparisons of acute skin response between electrons and protons. They also compared the effectiveness of the two modalities to determine whether radiobiological differences were observed.
Over four months, the team examined 197 female mice across five irradiation experiments. After being weighed, earmarked and given an ID number, each mouse was randomized to receive either eFLASH irradiation (average dose rate of 233 Gy/s) or conventional electron radiotherapy (average dose rate of 0.162 Gy/s) at various doses.
For the treatment, two unanaesthetized mice (one from each group) were restrained in a jig with their right legs placed in a water bath and irradiated by a horizontal 16 MeV electron beam. The animals were placed on opposite sides of the field centre and irradiated simultaneously, with their legs at a 3.2 cm water-equivalent depth, corresponding to the dose maximum.
The researchers used a diamond detector to measure the absolute dose at the target position in the water bath and assumed that the mouse foot target received the same dose. The resulting foot doses were 19.2–57.6 Gy for eFLASH treatments and 19.4–43.7 Gy for conventional radiotherapy, chosen to cover the entire range of acute skin response.
FLASH confers skin protection
To evaluate the animals’ response to irradiation, the researchers assessed acute skin damage daily from seven to 28 days post-irradiation using an established assay. They weighed the mice weekly, and one of three observers blinded to previous grades and treatment regimens assessed skin toxicity. Photographs were taken whenever possible. Skin damage was also graded using an automated deep-learning model, generating a dose–response curve independent of observer assessments.
The researchers also assessed radiation-induced fibrosis in the leg joint, biweekly from weeks nine to 52 post-irradiation. They defined radiation-induced fibrosis as a permanent reduction of leg extensibility by 75% or more in the irradiated leg compared with the untreated left leg.
To assess the tissue-sparing effect of eFLASH, the researchers used dose–response curves to derive TD50 – the toxic dose eliciting a skin response in 50% of mice. They then determined a dose modification factor (DMF), defined as the ratio of eFLASH TD50 to conventional TD50. A DMF larger than one suggests that eFLASH reduces toxicity.
The eFLASH treatments had a DMF of 1.45–1.54 – in other words, a 45–54% higher dose was needed to cause comparable skin toxicity to that caused by conventional radiotherapy. “The DMF indicated a considerable acute skin sparing effect of eFLASH irradiation,” the team explain. Radiation-induced fibrosis was also reduced using eFLASH, with a DMF of 1.15.
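As a worked example with hypothetical round numbers (not the study’s actual TD50 values): if conventional irradiation produced the threshold skin response in half the mice at 30 Gy, a DMF of 1.5 would mean the same level of toxicity under eFLASH only at

$$ \mathrm{TD}_{50}^{\mathrm{eFLASH}} = \mathrm{DMF} \times \mathrm{TD}_{50}^{\mathrm{conv}} = 1.5 \times 30\ \mathrm{Gy} = 45\ \mathrm{Gy}. $$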
Reducing skin damage Dose-response curves for acute skin toxicity (left) and fibrotic toxicity (right) for conventional electron radiotherapy and electron FLASH treatments. (Courtesy: CC BY 4.0/adapted from Radiother. Oncol. 10.1016/j.radonc.2025.110796)
For DMF-based equivalent doses, the development of skin toxicity over time was similar for eFLASH and conventional treatments throughout the dose groups. This supports the hypothesis that eFLASH modifies the dose–response rather than changing the underlying biological mechanism. The team also notes that the difference in DMF between the fibrotic response and the acute skin damage suggests that FLASH sparing depends on tissue type and may differ between acute- and late-responding tissues.
Similar skin damage between electrons and protons
Sørensen and colleagues compared their findings to previous studies of normal-tissue damage from proton irradiation, both in the entrance plateau and using the spread-out Bragg peak (SOBP). DMF values for electrons (1.45–1.54) were similar to those of transmission protons (1.44–1.50) and slightly higher than for SOBP protons (1.35–1.40). “Despite dose rate and pulse structure differences, the response to electron irradiation showed substantial similarity to transmission and SOBP damage,” they write.
Although the average eFLASH dose rate (233 Gy/s) was higher than that of the proton studies (80 and 60 Gy/s), it did not appear to influence the biological response. This supports the hypothesis that beyond a certain dose rate threshold, the tissue-sparing effect of FLASH does not increase notably.
The researchers point out that previous studies also found biological similarities in the FLASH effect for electrons and protons, with this latest work adding data on similar comparable and quantifiable effects. They add, however, that “based on the data of this study alone, we cannot say that the biological response is identical, nor that the electron and proton irradiation elicit the same biological mechanisms for DNA damage and repair. This data only suggests a similar biological response in the skin.”
Last year the UK government placed a new cap of £9535 on annual tuition fees, a figure that will likely rise in the coming years as universities tackle a funding crisis. Indeed, shortfalls are already affecting institutions, with some saying they will run out of money in the next few years. The past couple of months alone have seen several universities announce plans to shed academic staff and even shut departments.
Whether you agree with tuition fees or not, the fact is that students will continue to pay a significant sum for a university education. Value for money is part of the university proposition and lecturers can play a role by conveying the excitement of their chosen field. But what are the key requirements to help do so? In the late 1990s we carried out a study aimed at improving the long-term performance of students who initially struggled with university-level physics.
With funding from the Higher Education Funding Council for Wales, the study involved structured interviews with 28 students and 17 staff. An internal report – The Rough Guide to Lecturing – was written which, while not published, informed the teaching strategy of Cardiff University’s physics department for the next quarter of a century.
From the findings we concluded that lecture courses can be significantly enhanced by simply focusing on three principles, which we dub the three “E”s. The first “E” is enthusiasm. If a lecturer appears bored with the subject – perhaps they have given the same course for many years – why should their students be interested? This might sound obvious, but a bit of reading, or examining the latest research, can do wonders to freshen up a lecture that has been given many times before.
For both old and new courses it is usually possible to highlight at least one current research paper in a semester’s lectures. Students are not going to understand all of the paper, but that is not the point – it is the sharing in contemporary progress that will elicit excitement. Commenting on a nifty experiment in the work, or the elegance of the theory, can help to inspire both teacher and student.
As well as freshening up the lecture course’s content, another tip is to set the subject in its wider context, perhaps through its history or possible exciting applications. Be inventive – we have evidence of a lecturer “live” translating parts of Louis de Broglie’s classic 1925 paper “La relation du quantum et la relativité” during a lecture. It may seem unlikely, but the students responded rather well to that.
Supporting students
The second “E” is engagement. The role of the lecturer as a guide is obvious, but it should also be emphasized that the learner’s desire is to share the lecturer’s passion for, and mastery of, a subject. Styles of lecturing and visual aids can vary greatly between people, but the important thing is to keep students thinking.
Don’t succumb to the apocryphal definition of a lecture as merely a means of transferring the lecturer’s notes to the student’s pad without passing through the minds of either person. In our study, when the students were asked “What do you expect from a lecture?”, they responded simply that they wanted to learn something new – but we might extend this to a desire to learn how to do something new.
Simple demonstrations can be effective for engagement. Large foam dice, for example, can illustrate the non-commutation of 3D rotations. Fidget-spinners in the hands of students can help explain the vector nature of angular momentum. Lecturers should also ask rhetorical questions that make students think, but do not expect or demand answers, particularly in large classes.
More importantly, if a student asks a question, never insult them – there is no such thing as a “stupid” question. After all, what may seem a trivial point could eliminate a major conceptual block for them. If you cannot answer a technical query, admit it and say you will find out for next time – but make sure you do. Indeed, seeing that the lecturer has to work at the subject too can be very encouraging for students.
The final “E” is enablement. Make sure that students have access to supporting material. This could be additional notes; a carefully curated reading list of papers and books; or sets of suitable interesting problems with hints for solutions, worked examples they can follow, and previous exam papers. Explain what amount of self-study will be needed if they are going to benefit from the course.
Have clear and accessible statements concerning the course content and learning outcomes – in particular, what students will be expected to be able to do as a result of their learning. In our study, the general feeling was that a limited amount of continuous assessment (10–20% of the total lecture course mark) encourages both participation and overall achievement, provided students are given good feedback to help them improve.
Next time you are planning to teach a new course, or looking through those decades-old notes, remember enthusiasm, engagement and enablement. It’s not rocket science, but it will certainly help the students learn it.
Researchers in China have unveiled a 105-qubit quantum processor that can solve in minutes a quantum computation problem that would take billions of years using the world’s most powerful classical supercomputers. The result sets a new benchmark for claims of so-called “quantum advantage”, though some previous claims have faded after classical algorithms improved.
The fundamental promise of quantum computation is that it will reduce the computational resources required to solve certain problems. More precisely, it promises to reduce the rate at which resource requirements grow as problems become more complex. Evidence that a quantum computer can solve a problem faster than a classical computer – quantum advantage – is therefore a key measure of success.
The first claim of quantum advantage came in 2019, when researchers at Google reported that their 53-qubit Sycamore processor had solved a problem known as random circuit sampling (RCS) in just 200 seconds. Xiaobo Zhu, a physicist at the University of Science and Technology of China (USTC) in Hefei who co-led the latest work, describes RCS as follows: “First, you initialize all the qubits, then you run them in single-qubit and two-qubit gates and finally you read them out,” he says. “Since this process includes every key element of quantum computing, such as initializing the gate operations and readout, unless you have really good fidelity at each step you cannot demonstrate quantum advantage.”
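To make the three steps Zhu describes concrete, below is a minimal toy version of random circuit sampling, simulated with a plain NumPy statevector. It is purely illustrative: the gate set, circuit depth and qubit count are arbitrary assumptions rather than the Sycamore or Zuchongzhi protocol, and a real experiment samples bitstrings from hardware instead of computing the full distribution.

```python
# Toy random circuit sampling (RCS): initialize qubits, apply layers of random
# single-qubit gates plus entangling CZ gates, then sample measurement outcomes.
# Illustrative sketch only -- gate set, depth and qubit count are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
n = 4                        # number of qubits (real RCS experiments use 50-105)
dim = 2 ** n
state = np.zeros(dim, dtype=complex)
state[0] = 1.0               # step 1: initialize all qubits in |0...0>

def apply_single(state, u, q):
    """Apply a 2x2 unitary u to qubit q of the n-qubit statevector."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, q, 0)
    psi = np.tensordot(u, psi, axes=([1], [0]))
    psi = np.moveaxis(psi, 0, q)
    return psi.reshape(dim)

def apply_cz(state, q1, q2):
    """Apply a controlled-Z gate: flip the sign where both qubits are 1."""
    psi = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1], idx[q2] = 1, 1
    psi[tuple(idx)] *= -1.0
    return psi.reshape(dim)

def random_rotation(rng):
    """A random single-qubit rotation built from Rz and Ry gates."""
    a, b, c = rng.uniform(0, 2 * np.pi, 3)
    rz = lambda t: np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])
    ry = np.array([[np.cos(b / 2), -np.sin(b / 2)],
                   [np.sin(b / 2),  np.cos(b / 2)]])
    return rz(a) @ ry @ rz(c)

depth = 8
for layer in range(depth):                   # step 2: layers of random gates
    for q in range(n):
        state = apply_single(state, random_rotation(rng), q)
    for q in range(layer % 2, n - 1, 2):     # brick-wall pattern of CZ gates
        state = apply_cz(state, q, q + 1)

probs = np.abs(state) ** 2                   # step 3: "read out" the qubits
probs /= probs.sum()
samples = rng.choice(dim, size=10, p=probs)
print([format(int(s), f"0{n}b") for s in samples])
```

The catch, and the reason RCS is used as a benchmark, is that this brute-force statevector approach scales as 2^n: it is trivial for the four qubits above but hopeless long before 105 qubits, unless cleverer classical algorithms can exploit structure or noise in the circuit.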
At the time, the Google team claimed that the best supercomputers would take 10,000 years to solve this problem. However, subsequent improvements to classical algorithms reduced this to less than 15 seconds. This pattern has continued ever since, with experimentalists pushing quantum computing forward even as information theorists make quantum advantage harder to achieve by improving techniques used to simulate quantum algorithms on classical computers.
Recent claims of quantum advantage
In October 2024, Google researchers announced that their 67-qubit Sycamore processor had solved an RCS problem that would take an estimated 3600 years for the Frontier supercomputer at the US’s Oak Ridge National Laboratory to complete. In the latest work, published in Physical Review Letters, Jian-Wei Pan, Zhu and colleagues set the bar even higher. They show that their new Zuchongzhi 3.0 processor can complete in minutes an RCS calculation that they estimate would take Frontier billions of years using the best classical algorithms currently available.
To achieve this, they redesigned the readout circuit of their earlier Zuchongzhi processor to improve its efficiency, modified the structures of the qubits to increase their coherence times and increased the total number of superconducting qubits to 105. “We really upgraded every aspect and some parts of it were redesigned,” Zhu says.
Google’s latest processor, Willow, also uses 105 superconducting qubits, and in December 2024 researchers there announced that they had used it to demonstrate quantum error correction. This achievement, together with complementary advances in Rydberg atom qubits from Harvard University’s Mikhail Lukin and colleagues, was named Physics World’s Breakthrough of the Year in 2024. However, Zhu notes that Google has not yet produced any peer-reviewed research on using Willow for RCS, making it hard to compare the two systems directly.
The USTC team now plans to demonstrate quantum error correction on Zuchongzhi 3.0. This will involve using an error correction code such as the surface code to combine multiple physical qubits into a single “logical qubit” that is robust to errors. “The requirements for error-correction readout are much more difficult than for RCS,” Zhu notes. “RCS only needs one readout, whereas error-correction needs readout many times with very short readout times…Nevertheless, RCS can be a benchmark to show we have the tools to run the surface code. I hope that, in my lab, within a few months we can demonstrate a good-quality error correction code.”
“How progress gets made”
Quantum information theorist Bill Fefferman of the University of Chicago in the US praises the USTC team’s work, describing it as “how progress gets made”. However, he offers two caveats. The first is that recent demonstrations of quantum advantage do not have efficient classical verification schemes – meaning, in effect, that classical computers cannot check the quantum computer’s work. While the USTC researchers simulated a smaller problem on both classical and quantum computers and checked that the answers matched, Fefferman doesn’t think this is sufficient. “With the current experiments, at the moment you can’t simulate it efficiently, the verification doesn’t work anymore,” he says.
The second caveat is that the rigorous hardness arguments proving that the classical computational power needed to solve an RCS problem grows exponentially with the problem’s complexity apply only to situations with no noise. This is far from the case in today’s quantum computers, and Fefferman says this loophole has been exploited in many past quantum advantage experiments.
Still, he is upbeat about the field’s prospects. “The fact that the original estimates the experimentalists gave did not match some future algorithm’s performance is not a failure: I see that as progress on all fronts,” he says. “The theorists are learning more and more about how these systems work and improving their simulation algorithms and, based on that, the experimentalists are making their systems better and better.”
Sometimes, you just have to follow your instincts and let serendipity take care of the rest.
North Ronaldsay, a remote island north of mainland Orkney, has a population of about 50 and a lot of sheep. In the early 19th century, it thrived on the kelp ash industry, producing sodium carbonate (soda ash), potassium salts and iodine for soap and glass making.
But when cheaper alternatives became available, the island turned to its unique breed of seaweed-eating sheep. In 1832 islanders built a 12-mile-long dry stone wall around the island to keep the sheep on the shore, preserving inland pasture for crops.
My connection with North Ronaldsay began last summer when my partner, Sue Bowler, and I volunteered for the island’s Sheep Festival, where teams of like-minded people rebuild sections of the crumbling wall. That experience made us all the more excited when we learned that North Ronaldsay also had a science festival.
This year’s event took place on 14–16 March and getting there was no small undertaking. From our base in Leeds, the journey involved a 500-mile drive to a ferry, a crossing to the Orkney mainland and, finally, a flight in a light aircraft. With the island home to just 50 inhabitants, we had no idea how many people would turn up, but instinct told us it was worth the trip.
Sue, who works for the Royal Astronomical Society (RAS), presented Back to the Moon, while together we ran hands-on maker activities, a geology walk and a trip to the lighthouse, where we explored light beams and Fresnel lenses.
The Yorkshire Branch of the Institute of Physics (IOP) provided laser-cut hoist kits to demonstrate levers and concepts like mechanical advantage, while the RAS shared Connecting the Dots – a modern LED circuit version of a Victorian after-dinner card set illustrating constellations.
Hands-on science Participants get stuck into maker activities at the festival. (Courtesy: @Lazy.Photon on Instagram)
Despite the island’s small size, the festival drew attendees from neighbouring islands, with 56 people participating in person and another 41 joining online. Across multiple events, the total accumulated attendance reached 314.
One thing I’ve always believed in science communication is to listen to your audience and never make assumptions. Orkney has a rich history of radio and maritime communications, shaped in part by the strategic importance of Scapa Flow during the Second World War.
Stars in their eyes Making a constellation board at the North Ronaldsay Science Festival. (Courtesy: @Lazy.Photon on Instagram)
The Orkney Wireless Museum is a testament to this legacy, and one of our festival guests had even reconstructed a working 1930s Baird television receiver for the museum.
Leaving North Ronaldsay was hard. The festival sparked fascinating conversations, and I hope we inspired a few young minds to explore physics and astronomy.
The author would like to thank Alexandra Wright (festival organizer), Lucinda Offer (education, outreach and events officer at the RAS) and Sue Bowler (editor of Astronomy & Geophysics).
Cell separation Illustration of the fabricated optimal acousto-microfluidic chip. (Courtesy: Afshin Kouhkord and Naser Naserifar)
Analysing circulating tumour cells (CTCs) in the blood could help scientists detect cancer in the body. But separating CTCs from blood is a difficult, laborious process and requires large sample volumes.
Researchers at the K N Toosi University of Technology (KNTU) in Tehran, Iran, believe that ultrasonic waves could separate CTCs from red blood cells accurately, energy-efficiently and in real time. They report their study in the journal Physics of Fluids.
“In a broader sense, we asked: ‘How can we design a microfluidic, lab-on-a-chip device powered by SAWs [standing acoustic waves] that remains simple enough for medical experts to use easily, while still delivering precise and efficient cell separation?’,” says senior author Naser Naserifar, an assistant professor in mechanical engineering at KNTU. “We became interested in acoustofluidics because it offers strong, biocompatible forces that effectively handle cells with minimal damage.”
Acoustic waves can deliver enough force to move cells over small distances without damaging them. The researchers used dual pressure acoustic fields at critical positions in a microchannel to separate CTCs from other cells. The CTCs are gathered at an outlet for further analyses, cultures and laboratory procedures.
In the process of designing the chip, the researchers integrated computational modelling, experimental analysis and artificial intelligence (AI) algorithms to analyse acoustofluidic phenomena and generate datasets that predict CTC migration in the body.
“We introduced an acoustofluidic microchannel with two optimized acoustic zones, enabling fast, accurate separation of CTCs from RBCs [red blood cells],” explains Afshin Kouhkord, who performed the work while a master’s student in the Advance Research in Micro And Nano Systems Lab at KNTU. “Despite the added complexity under the hood, the resulting chip is designed for simple operation in a clinical environment.”
So far, the researchers have evaluated the device with numerical simulations and tested it using a physical prototype. Simulations modelled fluid flow, acoustic pressure fields and particle trajectories. The physical prototype was made of lithium niobate, with polystyrene microspheres used as surrogates for red blood cells and CTCs. Results from the prototype agreed with numerical simulations to within 3.5%.
“This innovative approach in laboratory-on-chip technology paves the way for personalized medicine, real-time molecular analysis and point-of-care diagnostics,” Kouhkord and Naserifar write.
The researchers are now refining their design, aiming for a portable device that could be operated with a small battery pack in resource-limited and remote environments.
D-Wave Systems has used quantum annealing to do simulations of quantum magnetic phase transitions. The company claims that some of their calculations would be beyond the capabilities of the most powerful conventional (classical) computers – an achievement referred to as quantum advantage. This would mark the first time quantum computers had achieved such a feat for a practical physics problem.
However, the claim has been challenged by two independent groups of researchers in Switzerland and the US, who have published papers on the arXiv preprint server that report that similar calculations could be done using classical computers. D-Wave’s experts believe these classical results fall well short of the company’s own accomplishments, and some independent experts agree with D-Wave.
While most companies trying to build practical quantum computers are developing “universal” or “gate model” quantum systems, US-based D-Wave has principally focused on quantum annealing devices. While such systems are less programmable than gate-model systems, the approach has allowed D-Wave to build machines with many more quantum bits (qubits) than any of its competitors. Whereas researchers at Google Quantum AI and researchers in China have, independently, recently unveiled 105-qubit universal quantum processors, some of D-Wave’s machines have more than 5000 qubits. Moreover, D-Wave’s systems are already in practical use, with hardware owned by the Japanese mobile phone company NTT Docomo being used to optimize cell tower operations. Systems are also being used for network optimization at motor companies, food producers and elsewhere.
Trevor Lanting, the chief development officer at D-Wave, explains the central principles behind quantum-annealing computation: “You have a network of qubits with programmable couplings and weights between those devices and then you program in a certain configuration – a certain bias on all of the connections in the annealing processor,” he says. The quantum annealing algorithm places the system in a superposition of all possible states of the system. When the couplings are slowly switched off, the system settles into its most energetically favoured state – which is the desired solution.
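Written down in the textbook way, the energy function being annealed is a transverse-field Ising Hamiltonian (a generic sketch, where the $h_{i}$ and $J_{ij}$ stand for the programmable weights and couplings Lanting describes, not values from this work):

$$ H(s) = -\frac{A(s)}{2}\sum_{i}\sigma^{x}_{i} + \frac{B(s)}{2}\left(\sum_{i} h_{i}\,\sigma^{z}_{i} + \sum_{i<j} J_{ij}\,\sigma^{z}_{i}\sigma^{z}_{j}\right). $$

At the start of the anneal the transverse-field term $A(s)$ dominates, placing the qubits in a superposition of all classical configurations; as $A(s)$ is ramped down and $B(s)$ up, the system ideally settles into the lowest-energy configuration of the programmed problem.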
Quantum hiking
Lanting compares this to a hiker in the mountains searching for the lowest point on a landscape: “As a classical hiker all you can really do is start going downhill until you get to a minimum,” he explains. “The problem is that, because you’re not doing a global search, you could get stuck in a local valley that isn’t at the minimum elevation.” By starting out in a quantum superposition of all possible states (or locations in the mountains), however, quantum annealing is able to find the global potential minimum.
In the new work, researchers at D-Wave and elsewhere set out to show that their machines could use quantum annealing to solve practical physics problems beyond the reach of classical computers. The researchers used two different 1200-qubit processors to model magnetic quantum phase transitions. This is a similar problem to one studied in gate-model systems by researchers at Google and Harvard University in independent work announced in February.
“When water freezes into ice, you can sometimes see patterns in the ice crystal, and this is a result of the dynamics of the phase transition,” explains Andrew King, who is senior distinguished scientist at D-Wave and the lead author of a paper describing the work. “The experiments that we’re demonstrating shed light on a quantum analogue of this phenomenon taking place in a magnetic material that has been programmed into our quantum processors and a phase transition driven by a magnetic field.” Understanding such phase transitions is important in the discovery and design of new magnetic materials.
Quantum versus classical
The researchers studied multiple configurations, comprising ever-more spins arranged in ever-more complex lattice structures. The company says that its system performed the most complex simulation in minutes. They also ascertained how long it would take to do the simulations using several leading classical computation techniques, including neural network methods, and how the time to achieve a solution grew with the complexity of the problem. Based on this, they extrapolated that the most complex lattices would require almost a million years on Frontier, which is one of the world’s most powerful supercomputers.
However, two independent groups – one at EPFL in Switzerland and one at the Flatiron Institute in the US – have posted papers on the arXiv preprint server claiming to have done some of the less complex calculations using classical computers. They argue that their results should scale simply to larger sizes; the implication being that classical computers could solve the more complicated problems addressed by D-Wave.
King has a simple response: “You don’t just need to do the easy simulations, you need to do the hard ones as well, and nobody has demonstrated that.” Lanting adds that “I see this as a healthy back and forth between quantum and classical methods, but I really think that, with these results, we’re pulling ahead of classical methods on the biggest scales we can calculate”.
Very interesting work
Frank Verstraete of the University of Cambridge is unsurprised by some scientists’ scepticism. “D-Wave have historically been the absolute champions at overselling what they did,” he says. “But now it seems they’re doing something nobody else can reproduce, and in that sense it’s very interesting.” He does note, however, that the specific problem chosen is not, in his view, an interesting one from a physics perspective, and has been chosen purely to be difficult for a classical computer.
Daniel Lidar of the University of Southern California, who has previously collaborated with D-Wave on similar problems but was not involved in the current work, says “I do think this is quite the breakthrough…The ability to anneal very fast on the timescales of the coherence times of the qubits has now become possible, and that’s really a game changer here.” He concludes that “the arms race is destined to continue between quantum and classical simulations, and because, in all likelihood, these are problems that are extremely hard classically, I think the quantum win is going to become more and more indisputable.”
Scientists who have been publicly accused of sexual misconduct see a significant and immediate decrease in the rate at which their work is cited, according to a study by behavioural scientists in the US. However, researchers who are publicly accused of scientific misconduct do not suffer the same drop in citations (PLOS One 20 e0317736). Despite their flaws, citation rates are often seen as a marker of impact and quality.
The study was carried out by a team led by Giulia Maimone from the University of California, Los Angeles, who collected data from the Web of Science covering 31,941 scientific publications across 18 disciplines. They then analysed the citation rates for 5888 papers authored by 30 researchers accused of either sexual or scientific misconduct, the latter including data fabrication, falsification and plagiarism.
Maimone told Physics World that they used strict selection criteria to ensure that the two groups of academics were comparable and that the accusations against them were public. This meant her team only used scholars whose misconduct allegations had been reported in the media and had “detailed accounts of the allegations online”.
Maimone’s team concluded that papers by scientists accused of sexual misconduct experienced a significant drop in citations in the three years after the allegations became public, compared with a “control” group of academics of a similar professional standing. Those accused of scientific fraud, meanwhile, saw no statistically significant change in the citation rates of their papers.
Further work
To further explore attitudes towards sexual and scientific misconduct, the researchers surveyed 231 non-academics and 240 academics. The non-academics considered sexual misconduct more reprehensible than scientific misconduct and more deserving of punishment, while the academics said they would be more likely to keep citing researchers accused of sexual misconduct than those accused of scientific misconduct. “Exactly the opposite of what we observe in the real data,” adds Maimone.
According to the researchers, there are two possible explanations for this discrepancy. One is that academics, according to Maimone, “overestimate their ability to disentangle the scientists from the science”. Another is that scientists are aware that they would not cite sexual harassers, but they are unwilling to admit it because they feel they should take a harsher professional approach towards scientific misconduct.
Maimone says they would now like to explore the longer-term consequences of misconduct as well as the psychological mechanisms behind the citation drop for those accused of sexual misconduct. “Do [academics] simply want to distance themselves from these allegations or are they actively trying to punish these scholars?” she asks.
Researchers have demonstrated that they can remotely detect radioactive material from 10 m away using short-pulse CO2 lasers – a distance over ten times farther than achieved via previous methods.
Conventional radiation detectors, such as Geiger counters, detect particles that are emitted by the radioactive material, typically limiting their operational range to the material’s direct vicinity. The new method, developed by a research team headed up at the University of Maryland, instead leverages the ionization in the surrounding air, enabling detection from much greater distances.
The study may one day lead to remote sensing technologies that could be used in nuclear disaster response and nuclear security.
Using atmospheric ionization
Radioactive materials emit particles – such as alpha, beta or gamma particles – that can ionize air molecules, creating free electrons and negative ions. These charged particles are typically present at very low concentrations, making them difficult to detect.
Senior author Howard Milchberg and colleagues – also from Brookhaven National Laboratory, Los Alamos National Laboratory and Lawrence Livermore National Laboratory – demonstrated that CO2 lasers could accelerate these charged particles, causing them to collide with neutral gas molecules, in turn creating further ionization. These additional free charges would then undergo the same laser-induced accelerations and collisions, leading to a cascade of charged particles.
This effect, known as “electron avalanche breakdown”, can create microplasmas that scatter laser light. By measuring the profile of the backscattered light, researchers can detect the presence of radioactive material.
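The “avalanche” label reflects the fact that, once the laser field is driving the collisions, the free-electron density grows roughly exponentially – schematically $n_{e}(t) \approx n_{0}\,e^{\nu t}$, where $\nu$ is an effective net ionization rate (an illustrative relation, not a figure from the study) – so even the handful of seed electrons produced near a radioactive source can be amplified into a microplasma dense enough to scatter the probe light.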
The team tested their technique using a 3.6-mCi polonium-210 alpha particle source at a standoff distance of 10 m, significantly longer than previous experiments that used different types of lasers and electromagnetic radiation sources.
“The researchers successfully demonstrated 10-m standoff detection of radioactive material, significantly surpassing the previous range of approximately 1 m,” says Choi.
Milchberg and collaborators had previously used a mid-infrared laser in a similar experiment in 2019. Changing to a long-wavelength (9.2 μm) CO2 laser brought significant advantages, he says.
“You can’t use any laser to do this cascading breakdown process,” Milchberg explains. The CO2 laser’s wavelength was able to enhance the avalanche process, while being low energy enough to not create its own ionization sources. “CO2 is sort of the limit for long wavelengths on powerful lasers and it turns out CO2 lasers are very, very efficient as well,” he says. “So this is like a sweet spot.”
Imaging microplasmas
The team also used a CMOS camera to capture visible-light emissions from the microplasmas. Milchberg says that this fluorescence around radioactive sources resembled balls of plasma, indicating the localized regions where electron avalanche breakdowns had occurred.
By counting these “plasma balls” and calibrating them against the backscattered laser signal, the researchers could link fluorescence intensity to the density of ionization in the air, and use that to determine the type of radiation source.
The CMOS imagers, however, had to be placed close to the measured radiation source, reducing their applicability to remote sensing. “Although fluorescence imaging is not practical for field deployment due to the need for close-range cameras, it provides a valuable calibration tool,” Milchberg says.
Scaling to longer distances
The researchers believe their method can be extended to standoff distances exceeding 100 m. The primary limitation is the laser’s focusing geometry, which would affect the regions in which it could trigger an avalanche breakdown. A longer focal length would require a larger laser aperture but could enable kilometre-scale detection.
Choi points out, however, that deploying a CO2 laser may be difficult in real-world applications. “A CO₂ laser is a bulky system, making it challenging to deploy in a portable manner in the field,” she says, adding that mounting the laser for long-range detection may be a solution.
Milchberg says that the next steps will be to continue developing a technique that can differentiate between different types of radioactive sources completely remotely. Choi agrees, noting that accurately quantifying both the amount and type of radioactive material continues to be a significant hurdle to realising remote sensing technologies in the field.
“There’s also the question of environmental conditions,” says Milchberg, explaining that it is critical to ensure that detection techniques are robust against the noise introduced by aerosols or air turbulence.
The Square Kilometre Array (SKA) Observatory has released the first images from its partially built low-frequency telescope in Australia, known as SKA-Low.
The new SKA-Low image was created using 1024 two-metre-high antennas. It shows an area of the sky that would be obscured by a person’s clenched fist held at arm’s length.
Observed at 150 MHz to 175 MHz, the image contains 85 of the brightest known galaxies in that region, each with a black hole at its centre.
“We are demonstrating that the system as a whole is working,” notes SKA Observatory director-general Phil Diamond. “As the telescopes grow, and more stations and dishes come online, we’ll see the images improve in leaps and bounds and start to realise the full power of the SKAO.”
SKA-Low will ultimately have 131 072 two-metre-high antennas that will be clumped together in arrays to act as a single instrument.
These arrays collect the relatively quiet signals from space and combine them to produce radio images of the sky, with the aim of answering some of cosmology’s most enigmatic questions, including what dark matter is, how galaxies form and whether there is other life in the universe.
When the full SKA-Low gazes at the same portion of sky as captured in the image released yesterday, it will be able to observe more than 600,000 galaxies.
“The bright galaxies we can see in this image are just the tip of the iceberg,” says George Heald, lead commissioning scientist for SKA-Low. “With the full telescope we will have the sensitivity to reveal the faintest and most distant galaxies, back to the early universe when the first stars and galaxies started to form.”
‘Milestone’ achieved
SKA-Low is one of two telescopes under construction by the observatory. The other, SKA-Mid, which observes in the mid-frequency range, will include 197 three-storey dishes and is being built in South Africa.
The telescopes, with a combined price tag of £1bn, are projected to begin making science observations in 2028. They are being funded through a consortium of member states, including China, Germany and the UK.
University of Cambridge astrophysicist Eloy de Lera Acedo, who is principal investigator at his institution for the observatory’s science data processor, says the first image from SKA-Low is an “important milestone” for the project.
“It is worth remembering that these images now require a lot of work, and a lot more data to be captured with the telescope as it builds up, to reach the science quality level we all expect and hope for,” he adds.
Rob Fender, an astrophysicist at the University of Oxford, who is not directly involved in the SKA Observatory, says that the first image “hints at the enormous potential” for the array that will eventually “provide humanity’s deepest ever view of the universe at wavelengths longer than a metre”.
A new study probing quantum phenomena in neurons as they transmit messages in the brain could provide fresh insight into how our brains function.
In this project, described in the Computational and Structural Biotechnology Journal, theoretical physicist Partha Ghose from the Tagore Centre for Natural Sciences and Philosophy in India, together with theoretical neuroscientist Dimitris Pinotsis from City St George’s, University of London and the MillerLab of MIT, proved that established equations describing the classical physics of brain responses are mathematically equivalent to equations describing quantum mechanics. Ghose and Pinotsis then derived a Schrödinger-like equation specifically for neurons.
Our brains process information via a vast network containing many millions of neurons, which can each send and receive chemical and electrical signals. Information is transmitted by nerve impulses that pass from one neuron to the next, thanks to a flow of ions across the neuron’s cell membrane. This results in an experimentally detectable change in electrical potential difference across the membrane known as the “action potential” or “spike”.
When this potential passes a threshold value, the impulse is passed on. But below the threshold for a spike, a neuron’s action potential randomly fluctuates in a similar way to classical Brownian motion – the continuous random motion of tiny particles suspended in a fluid – due to interactions with its surroundings. This creates the so-called “neuronal noise” that the researchers investigated in this study.
Previously, “both physicists and neuroscientists have largely dismissed the relevance of standard quantum mechanics to neuronal processes, as quantum effects are thought to disappear at the large scale of neurons,” says Pinotsis. But some researchers studying quantum cognition hold an alternative to this prevailing view, explains Ghose.
“They have argued that quantum probability theory better explains certain cognitive effects observed in the social sciences than classical probability theory,” Ghose tells Physics World. “[But] most researchers in this field treat quantum formalism [the mathematical framework describing quantum behaviour] as a purely mathematical tool, without assuming any physical basis in quantum mechanics. I found this perspective rather perplexing and unsatisfactory, prompting me to explore a more rigorous foundation for quantum cognition – one that might be physically grounded.”
As such, Ghose and Pinotsis began their work by taking ideas from American mathematician Edward Nelson, who in 1966 derived the Schrödinger equation – which predicts the position and motion of particles in terms of a probability wave known as a wavefunction – using classical Brownian motion.
Firstly, they proved that the variables in the classical equations for Brownian motion that describe the random neuronal noise seen in brain activity also obey quantum mechanical equations, deriving a Schrödinger-like equation for a single neuron. This equation describes neuronal noise by revealing the probability of a neuron having a particular value of membrane potential at a specific instant. Next, the researchers showed how the FitzHugh-Nagumo equations, which are widely used for modelling neuronal dynamics, could be re-written as a Schrödinger equation. Finally, they introduced a neuronal constant in these Schrödinger-like equations that is analogous to Planck’s constant (which relates the energy of a quantum to its frequency).
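For orientation, the textbook form of the FitzHugh-Nagumo equations and a generic Schrödinger-type equation are sketched below. The variable names follow the standard convention (v is the membrane potential, w a recovery variable, I_ext an applied current) and are not necessarily those used by Ghose and Pinotsis, whose exact equations and neuronal constant are defined in the paper.

```latex
% Standard FitzHugh--Nagumo equations (textbook form)
\begin{align}
  \frac{\mathrm{d}v}{\mathrm{d}t} &= v - \frac{v^{3}}{3} - w + I_{\mathrm{ext}},\\
  \frac{\mathrm{d}w}{\mathrm{d}t} &= \varepsilon\,(v + a - b\,w).
\end{align}
% Generic Schrodinger-type equation, with Planck's constant replaced by a
% "neuronal constant" \tilde{\hbar} in the spirit of the study
\begin{equation}
  i\tilde{\hbar}\,\frac{\partial\psi}{\partial t}
    = -\frac{\tilde{\hbar}^{2}}{2m}\,\frac{\partial^{2}\psi}{\partial x^{2}} + V(x)\,\psi .
\end{equation}
```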
“I got excited when the mathematical proof showed that the FitzHugh-Nagumo equations are connected to quantum mechanics and the Schrödinger equation,” enthuses Pinotsis. “This suggested that quantum phenomena, including quantum entanglement, might survive at larger scales.”
“Penrose and Hameroff have suggested that quantum entanglement might be related to lack of consciousness, so this study could shed light on how anaesthetics work,” he explains, adding that their work might also connect oscillations seen in recordings of brain activity to quantum phenomena. “This is important because oscillations are considered to be markers of diseases: the brain oscillates differently in patients and controls and by measuring these oscillations we can tell whether a person is sick or not.”
Going forward, Ghose hopes that “neuroscientists will get interested in our work and help us design critical neuroscience experiments to test our theory”. Measuring the energy levels for neurons predicted in this study, and ultimately confirming the existence of a neuronal constant along with quantum effects including entanglement would, he says, “represent a big step forward in our understanding of brain function”.
(Courtesy: EHT Collaboration; Los Alamos National Laboratory)
1 When the Event Horizon Telescope imaged a black hole in 2019, what was the total mass of all the hard drives needed to store the data? A 1 kg B 50 kg C 500 kg D 2000 kg
2 In 1956 MANIAC I became the first computer to defeat a human being in chess, but because of its limited memory and power, the pawns and which other pieces had to be removed from the game? A Bishops B Knights C Queens D Rooks
(Courtesy: IOP Publishing; CERN)
3 The logic behind the Monty Hall problem, which involves a car and two goats behind different doors, is one of the cornerstones of machine learning. On which TV game show is it based? A Deal or No Deal B Family Fortunes C Let’s Make a Deal D Wheel of Fortune
4 In 2023 CERN broke which barrier for the amount of data stored on devices at the lab? A 10 petabytes (10¹⁶ bytes) B 100 petabytes (10¹⁷ bytes) C 1 exabyte (10¹⁸ bytes) D 10 exabytes (10¹⁹ bytes)
5 What was the world’s first electronic computer? A Atanasoff–Berry Computer (ABC) B Electronic Discrete Variable Automatic Computer (EDVAC) C Electronic Numerical Integrator and Computer (ENIAC) D Small-Scale Experimental Machine (SSEM)
6 What was the outcome of the chess match between astronaut Frank Poole and the HAL 9000 computer in the movie 2001: A Space Odyssey? A Draw B HAL wins C Poole wins D Match abandoned
7 Which of the following physics breakthroughs used traditional machine learning methods? A Discovery of the Higgs boson (2012) B Discovery of gravitational waves (2016) C Multimessenger observation of a neutron-star collision (2017) D Imaging of a black hole (2019)
8 The physicist John Hopfield shared the 2024 Nobel Prize for Physics with Geoffrey Hinton for their work underpinning machine learning and artificial neural networks – but what did Hinton originally study? A Biology B Chemistry C Mathematics D Psychology
9 Put the following data-driven discoveries in chronological order. A Johann Balmer’s discovery of a formula computing wavelength from Anders Ångström’s measurements of the hydrogen lines B Johannes Kepler’s laws of planetary motion based on Tycho Brahe’s astronomical observations C Henrietta Swan Leavitt’s discovery of the period-luminosity relationship for Cepheid variables D Ole Rømer’s estimation of the speed of light from observations of the eclipses of Jupiter’s moon Io
10 Inspired by Alan Turing’s “Imitation Game” – in which an interrogator tries to distinguish between a human and machine – when did Joseph Weizenbaum develop ELIZA, the world’s first “chatbot”? A 1964 B 1984 C 2004 D 2024
11 What does the CERN particle-physics lab use to store data from the Large Hadron Collider? A Compact discs B Hard-disk drives C Magnetic tape D Solid-state drives
12 In preparation for the High Luminosity Large Hadron Collider, CERN tested a data link to the Nikhef lab in Amsterdam in 2024 that ran at what speed? A 80 Mbps B 8 Gbps C 80 Gbps D 800 Gbps
13 When complete, the Square Kilometre Array telescope will be the world’s largest radio telescope. How many petabytes of data is it expected to archive per year? A 15 B 50 C 350 D 700
This quiz is for fun and there are no prizes. Answers will be published in April.
Helium deep within the Earth could bond with iron to form stable compounds – according to experiments done by scientists in Japan and Taiwan. The work was done by Haruki Takezawa and Kei Hirose at the University of Tokyo and colleagues, who suggest that Earth’s core could host a vast reservoir of primordial helium-3 – reshaping our understanding of the planet’s interior.
Noble gases including helium are normally chemically inert. But under extreme pressures, heavier members of the group (including xenon and krypton) can form a variety of compounds with other elements. To date, however, less is known about compounds containing helium – the lightest noble gas.
Beyond the synthesis of disodium helide (Na2He) in 2016, and a handful of molecules in which helium forms weak van der Waals bonds with other atoms, the existence of other helium compounds has remained purely theoretical.
As a result, the conventional view is that any primordial helium-3 present when our planet first formed would have quickly diffused through Earth’s interior, before escaping into the atmosphere and then into space.
Tantalizing clues
However, there are tantalizing clues that helium compounds could exist in some volcanic rocks on Earth’s surface. These rocks contain unusually high isotopic ratios of helium-3 to helium-4. “Unlike helium-4, which is produced through radioactivity, helium-3 is primordial and not produced in planetary interiors,” explains Hirose. “Based on volcanic rock measurements, helium-3 is known to be enriched in hot magma, which originally derives from hot plumes coming from deep within Earth’s mantle.” The mantle is the region between Earth’s core and crust.
The fact that the isotope can still be found in rock and magma suggests that it must have somehow become trapped in the Earth. “This argument suggests that helium-3 was incorporated into the iron-rich core during Earth’s formation, some of which leaked from the core to the mantle,” Hirose explains.
It could be that the extreme pressures present in Earth’s iron-rich core enabled primordial helium-3 to bond with iron to form stable molecular lattices. To date, however, this possibility has never been explored experimentally.
Now, Takezawa, Hirose and colleagues have triggered reactions between iron and helium within a laser-heated diamond-anvil cell. Such cells crush small samples to extreme pressures – in this case as high as 54 GPa. While this is less than the pressure in the core (about 350 GPa), the reactions created molecular lattices of iron and helium. These structures remained stable even when the diamond-anvil’s extreme pressure was released.
To determine the molecular structures of the compounds, the researchers did X-ray diffraction experiments at Japan’s SPring-8 synchrotron. The team also used secondary ion mass spectrometry to determine the concentration of helium within their samples.
Synchrotron and mass spectrometer
“We also performed first-principles calculations to support experimental findings,” Hirose adds. “Our calculations also revealed a dynamically stable crystal structure, supporting our experimental findings.” Altogether, this combination of experiments and calculations showed that the reaction could form two distinct lattices (face-centred cubic and distorted hexagonal close packed), each with differing ratios of iron to helium atoms.
These results suggest that similar reactions between helium and iron may have occurred within Earth’s core shortly after its formation, trapping much of the primordial helium-3 in the material that coalesced to form Earth. This would have created a vast reservoir of helium in the core, which is gradually making its way to the surface.
However, further experiments are needed to confirm this thesis. “For the next step, we need to see the partitioning of helium between iron in the core and silicate in the mantle under high temperatures and pressures,” Hirose explains.
Observing this partitioning would help rule out the lingering possibility that unbonded helium-3 could be more abundant than expected within the mantle – where it could be trapped by some other mechanism. Either way, further studies would improve our understanding of Earth’s interior composition – and could even tell us more about the gases present when the solar system formed.
Two months into Donald Trump’s second presidency and many parts of US science – across government, academia, and industry – continue to be hit hard by the new administration’s policies. Science-related government agencies are seeing budgets and staff cut, especially in programmes linked to climate change and diversity, equity and inclusion (DEI). Elon Musk’s Department of Government Efficiency (DOGE) is also causing havoc as it seeks to slash spending.
In mid-February, DOGE fired more than 300 employees at the National Nuclear Security Administration, which is part of the US Department of Energy, many of whom were responsible for reassembling nuclear warheads at the Pantex plant in Texas. A day later, the agency was forced to rescind all but 28 of the sackings amid concerns that their absence could jeopardise national security.
A judge has also reinstated workers who were laid off at the National Science Foundation (NSF) as well as at the Centers for Disease Control and Prevention. The judge said the government’s Office of Personnel Management, which sacked the staff, did not have the authority to do so. However, the NSF rehiring applies mainly to military veterans and staff with disabilities, with the overall workforce down by about 140 people – or roughly 10%.
The NSF has also announced a reduction, the size of which is unknown, in its Research Experiences for Undergraduates programme. Over the last 38 years, the initiative has given thousands of college students – many with backgrounds that are underrepresented in science – the opportunity to carry out original research at institutions during the summer holidays. NSF staff are also reviewing thousands of grants containing such words as “women” and “diversity”.
NASA, meanwhile, is to shut its office of technology, policy and strategy, along with its chief-scientist office, and the DEI and accessibility branch of its diversity and equal opportunity office. “I know this news is difficult and may affect us all differently,” admitted acting administrator Janet Petro in an all-staff e-mail. Affecting about 20 staff, the move is on top of plans to reduce NASA’s overall workforce. Reports also suggest that NASA’s science budget could be slashed by as much as 50%.
Hundreds of “probationary employees” have also been sacked by the National Oceanic and Atmospheric Administration (NOAA), which provides weather forecasts that are vital for farmers and people in areas threatened by tornadoes and hurricanes. “If there were to be large staffing reductions at NOAA there will be people who die in extreme weather events and weather-related disasters who would not have otherwise,” warns climate scientist Daniel Swain from the University of California, Los Angeles.
Climate concerns
In his first cabinet meeting on 26 February, Trump suggested that officials “use scalpels” when trimming their departments’ spending and personnel – rather than Musk’s figurative chainsaw. But bosses at the Environmental Protection Agency (EPA) still plan to cut its budget by about two-thirds. “[W]e fear that such cuts would render the agency incapable of protecting Americans from grave threats in our air, water, and land,” wrote former EPA administrators William Reilly, Christine Todd Whitman and Gina McCarthy in the New York Times.
The White House’s attack on climate science goes beyond just the EPA. In January, the US Department of Agriculture removed almost all data on climate change from its website. The action resulted in a lawsuit in March from the Northeast Organic Farming Association of New York and two non-profit organizations – the Natural Resources Defense Council and the Environmental Working Group. They say that the removal hinders research and “agricultural decisions”.
The Trump administration has also barred NASA’s now former chief scientist Katherine Calvin and members of the State Department from travelling to China for a planning meeting of the Intergovernmental Panel on Climate Change. Meanwhile, in a speech to African energy ministers in Washington on 7 March, US energy secretary Chris Wright claimed that coal has “transformed our world and made it better”, adding that climate change, while real, is not on his list of the world’s top 10 problems. “We’ve had years of Western countries shamelessly saying ‘don’t develop coal’,” he said. “That’s just nonsense.”
At the National Institutes of Health (NIH), staff are being told to cancel hundreds of research grants that involve DEI and transgender issues. The Trump administration also wants to cut the allowance for indirect costs of NIH’s and other agencies’ research grants to 15% of research contracts, although a district court judge has put that move on hold pending further legal arguments. On 8 March, the Trump administration also threatened to cancel $400m in funding to Columbia University, purportedly due to its failure to tackle anti-semitism on campus.
A Trump policy of removing “undocumented aliens” continues to alarm universities that have overseas students. Some institutions have already advised overseas students against travelling abroad during holidays, in case immigration officers do not let them back in when they return. Others warn that their international students should carry their immigration documents with them at all times. Universities have also started to rein in spending with Harvard and the Massachusetts Institute of Technology, for example, implementing a hiring freeze.
Falling behind
Amid the turmoil, the US scientific community is beginning to fight back. Individual scientists have supported court cases that have overturned sackings at government agencies, while a letter to Congress signed by the Union of Concerned Scientists and 48 scientific societies asserts that the administration has “already caused significant harm to American science”. On 7 March, more than 30 US cities also hosted “Stand Up for Science” rallies attended by thousands of demonstrators.
Elsewhere, a group of government, academic and industry leaders – known collectively as Vision for American Science and Technology – has released a report warning that the US could fall behind China and other competitors in science and technology. Entitled Unleashing American Potential, it calls for increased public and private investment in science to maintain US leadership. “The more dollars we put in from the feds, the more investment comes in from industry, and we get job growth, we get economic success, and we get national security out of it,” notes Sudip Parikh, chief executive of the American Association for the Advancement of Science, who was involved in the report.
Marcia McNutt, president of the National Academy of Sciences, meanwhile, has called on the community to continue to highlight the benefit of science. “We need to underscore the fact that stable federal funding of research is the main mode by which radical new discoveries have come to light – discoveries that have enabled the age of quantum computing and AI and new materials science,” she said. “These are areas that I am sure are very important to this administration as well.”
New for 2025, the American Physical Society (APS) is combining its March Meeting and April Meeting into a joint event known as the APS Global Physics Summit. The largest physics research conference in the world, the Global Physics Summit brings together 14,000 attendees across all disciplines of physics. The meeting takes place in Anaheim, California (as well as virtually) from 16 to 21 March.
Uniting all disciplines of physics in one joint event reflects the increasingly interdisciplinary nature of scientific research and enables everybody to participate in any session. The meeting includes cross-disciplinary sessions and collaborative events, where attendees can meet to connect with others, discuss new ideas and discover groundbreaking physics research.
The meeting will take place in three adjacent venues. The Anaheim Convention Center will host March Meeting sessions, while the April Meeting sessions will be held at the Anaheim Marriott. The Hilton Anaheim will host SPLASHY (soft, polymeric, living, active, statistical, heterogeneous and yielding) matter and medical physics sessions. Cross-disciplinary sessions and networking events will take place at all sites and in the connecting outdoor plaza.
With programming aligned with the 2025 International Year of Quantum Science and Technology, the meeting also celebrates all things quantum with a dedicated Quantum Festival. Designed to “inspire and educate”, the festival incorporates events at the intersection of art, science and fun – with multimedia performances, science demonstrations, circus performers, and talks by Nobel laureates and a NASA astronaut.
Finally, there’s the exhibit hall, where more than 200 exhibitors will showcase products and services for the physics community. Here, delegates can also attend poster sessions, a career fair and a graduate school fair. Read on to find out about some of the innovative product offerings on show at the technical exhibition.
Precision motion drives innovative instruments for physics applications
For over 25 years Mad City Labs has provided precision instrumentation for research and industry, including nanopositioning systems, micropositioners, microscope stages and platforms, single-molecule microscopes and atomic force microscopes (AFMs).
This product portfolio, coupled with the company’s expertise in custom design and manufacturing, enables Mad City Labs to provide solutions for nanoscale motion for diverse applications such as astronomy, biophysics, materials science, photonics and quantum sensing.
Mad City Labs’ piezo nanopositioners feature the company’s proprietary PicoQ sensors, which provide ultralow noise and excellent stability to yield sub-nanometre resolution and motion control down to the single picometre level. The performance of the nanopositioners is central to the company’s instrumentation solutions, as well as the diverse applications that it can serve.
Within the scanning probe microscopy solutions, the nanopositioning systems provide true decoupled motion with virtually undetectable out-of-plane movement, while their precision and stability yields high positioning performance and control. Uniquely, Mad City Labs offers both optical deflection AFMs and resonant probe AFM models.
Product portfolio Mad City Labs provides precision instrumentation for applications ranging from astronomy and biophysics, to materials science, photonics and quantum sensing. (Courtesy: Mad City Labs)
The MadAFM is a sample scanning AFM in a compact, tabletop design. Designed for simple user-led installation, the MadAFM is a multimodal optical deflection AFM and includes software. The resonant probe AFM products include the AFM controllers MadPLL and QS-PLL, which enable users to build their own flexibly configured AFMs using Mad City Labs micro- and nanopositioners. All AFM instruments are ideal for material characterization, but resonant probe AFMs are uniquely well suited for quantum sensing and nano-magnetometry applications.
Stop by the Mad City Labs booth and ask about the new do-it-yourself quantum scanning microscope based on the company’s AFM products.
Mad City Labs also offers standalone micropositioning products such as optical microscope stages, compact positioners and the Mad-Deck XYZ stage platform. These products employ proprietary intelligent control to optimize stability and precision. These micropositioning products are compatible with the high-resolution nanopositioning systems, enabling motion control across micro–picometre length scales.
The new MMP-UHV50 micropositioning system offers 50 mm travel with 190 nm step size and maximum vertical payload of 2 kg, and is constructed entirely from UHV-compatible materials and carefully designed to eliminate sources of virtual leaks. Uniquely, the MMP-UHV50 incorporates a zero power feature when not in motion to minimize heating and drift. Safety features include limit switches and overheat protection, a critical item when operating in vacuum environments.
For advanced microscopy techniques for biophysics, the RM21 single-molecule microscope, featuring the unique MicroMirror TIRF system, offers multicolour total internal-reflection fluorescence microscopy with an excellent signal-to-noise ratio and efficient data collection, along with an array of options to support multiple single-molecule techniques. Finally, new motorized micromirrors enable easier alignment and stored setpoints.
Visit Mad City Labs at the APS Global Summit, at booth #401
New lasers target quantum, Raman spectroscopy and life sciences
HÜBNER Photonics, manufacturer of high-performance lasers for advanced imaging, detection and analysis, is highlighting a large range of exciting new laser products at this year’s APS event. With these new lasers, the company responds to market trends specifically within the areas of quantum research and Raman spectroscopy, as well as fluorescence imaging and analysis for life sciences.
Dedicated to the quantum research field, a new series of CW ultralow-noise single-frequency fibre amplifier products – the Ampheia Series lasers – offer output powers of up to 50 W at 1064 nm and 5 W at 532 nm, with an industry-leading low relative intensity noise. The Ampheia Series lasers ensure unmatched stability and accuracy, empowering researchers and engineers to push the boundaries of what’s possible. The lasers are specifically suited for quantum technology research applications such as atom trapping, semiconductor inspection and laser pumping.
Ultralow-noise operation The Ampheia Series lasers are particularly suitable for quantum technology research applications. (Courtesy: HÜBNER Photonics)
In addition to the Ampheia Series, the new Cobolt Qu-T Series of single-frequency, tunable lasers addresses atom cooling. With wavelengths of 707, 780 and 813 nm, coarse tunability of greater than 4 nm, narrow mode-hop-free tuning of below 5 GHz, linewidth of below 50 kHz and powers of 500 mW, the Cobolt Qu-T Series is perfect for atom cooling of rubidium, strontium and other atoms used in quantum applications.
For the Raman spectroscopy market, HÜBNER Photonics announces the new Cobolt Disco single-frequency laser with available power of up to 500 mW at 785 nm, in a perfect TEM00 beam. This new wavelength is an extension of the Cobolt 05-01 Series platform, which with excellent wavelength stability, a linewidth of less than 100 kHz and spectral purity better than 70 dB, provides the performance needed for high-resolution, ultralow-frequency Raman spectroscopy measurements.
For life science applications, a number of new wavelengths and higher power levels are available, including 553 nm with 100 mW and 594 nm with 150 mW. These new wavelengths and power levels are available on the Cobolt 06-01 Series of modulated lasers, which offer versatile and advanced modulation performance with perfect linear optical response, true OFF states and stable illumination from the first pulse – for any duty cycles and power levels across all wavelengths.
The company’s unique multi-line laser, Cobolt Skyra, is now available with laser lines covering the full green–orange spectral range, including 594 nm, with up to 100 mW per line. This makes this multi-line laser highly attractive as a compact and convenient illumination source in most bioimaging applications, and now also specifically suitable for excitation of AF594, mCherry, mKate2 and other red fluorescent proteins.
In addition, with the Cobolt Kizomba laser, the company is introducing a new UV wavelength that specifically addresses the flow cytometry market. The Cobolt Kizomba laser offers 349 nm output at 50 mW with the renowned performance and reliability of the Cobolt 05-01 Series lasers.
Visit HÜBNER Photonics at the APS Global Summit, at booth #359.
Researchers from the Amazon Web Services (AWS) Center for Quantum Computing have announced what they describe as a “breakthrough” in quantum error correction. Their method uses so-called cat qubits to reduce the total number of qubits required to build a large-scale, fault-tolerant quantum computer, and they claim it could shorten the time required to develop such machines by up to five years.
Quantum computers are promising candidates for solving complex problems that today’s classical computers cannot handle. Their main drawback is the tendency for errors to crop up in the quantum bits, or qubits, they use to perform computations. Just like classical bits, the states of qubits can erroneously flip from 0 to 1, which is known as a bit-flip error. In addition, qubits can suffer from inadvertent changes to their phase – a parameter that characterizes their quantum superposition – known as phase-flip errors. A further complication is that whereas classical bits can be copied in order to detect and correct errors, the quantum nature of qubits makes copying impossible. Hence, errors need to be dealt with in other ways.
One error-correction scheme involves building physical or “measurement” qubits around each logical or “data” qubit. The job of the measurement qubits is to detect phase-flip or bit-flip errors in the data qubits without destroying their quantum nature. In 2024, a team at Google Quantum AI showed that this approach is scalable in a system of a few dozen qubits. However, a truly powerful quantum computer would require around a million data qubits and an even larger number of measurement qubits.
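The repetition idea at the heart of this scheme can be illustrated classically. The minimal Python sketch below encodes a logical bit into several physical bits and recovers it by majority vote; it is only an analogy, since real quantum error correction infers errors from parity measurements made by ancilla (measurement) qubits rather than by copying the data qubits.

```python
import random

def encode(bit, n=3):
    """Repetition code: represent one logical bit by n physical bits."""
    return [bit] * n

def noisy_channel(bits, p_flip=0.1):
    """Flip each physical bit independently with probability p_flip."""
    return [b ^ 1 if random.random() < p_flip else b for b in bits]

def decode(bits):
    """Majority vote: correct as long as fewer than half the bits flipped."""
    return int(sum(bits) > len(bits) / 2)

# The logical error rate falls as more physical bits are used per logical bit
trials = 100_000
for n in (1, 3, 5, 7):
    errors = sum(decode(noisy_channel(encode(0, n))) != 0 for _ in range(trials))
    print(f"{n} physical bits per logical bit: logical error rate ~ {errors/trials:.4f}")
```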
Cat qubits to the rescue
The AWS researchers showed that it is possible to reduce this total number of qubits. They did this by using a special type of qubit called a cat qubit. Named after the Schrödinger’s cat thought experiment that illustrates the concept of quantum superposition, cat qubits use the superposition of coherent states to encode information in a way that resists bit flips. Doing so may increase the number of phase-flip errors, but special error-correction algorithms can deal with these efficiently.
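In the standard cat-qubit encoding (a general textbook construction, not a description of the specific AWS device), the two computational basis states are even and odd superpositions of coherent states of opposite phase:

```latex
|C^{\pm}_{\alpha}\rangle = N_{\pm}\left(|\alpha\rangle \pm |{-\alpha}\rangle\right),
\qquad
N_{\pm} = \frac{1}{\sqrt{2\,\bigl(1 \pm e^{-2|\alpha|^{2}}\bigr)}} .
```

Because the coherent states |α⟩ and |−α⟩ become nearly orthogonal as the mean photon number |α|² grows, bit-flip errors are suppressed exponentially in |α|², at the cost of a phase-flip rate that grows only modestly – which is why the remaining error correction can concentrate on phase flips.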
The AWS team got this result by building a microchip containing an array of five cat qubits. These are connected to four transmon qubits, which are a type of superconducting qubit with a reduced sensitivity to charge noise (a major source of errors in quantum computations). Here, the cat qubits serve as data qubits, while the transmon qubits measure and correct phase-flip errors. The cat qubits were further stabilized by connecting each of them to a buffer mode that uses a non-linear process called two-photon dissipation to ensure that their noise bias is maintained over time.
According to Harry Putterman, a senior research scientist at AWS, the team’s foremost challenge (and innovation) was to ensure that the system did not introduce too many bit-flip errors. This was important because the system uses a classical repetition code as its “outer layer” of error correction, which left it with no redundancy against residual bit flips. With this aspect under control, the researchers demonstrated that their superconducting quantum circuit suppressed errors from 1.75% per cycle for a three-cat qubit array to 1.65% per cycle for a five-cat qubit array. Achieving this degree of error suppression with larger error-correcting codes previously required tens of additional qubits.
On a scalable path
AWS’s director of quantum hardware, Oskar Painter, says the result will reduce the development time for a full-scale quantum computer by 3-5 years. This is, he says, a direct outcome of the system’s simple architecture as well as its 90% reduction in the “overhead” required for quantum error correction. The team does, however, need to reduce the error rates of the error-corrected logical qubits. “The two most important next steps towards building a fault-tolerant quantum computer at scale is that we need to scale up to several logical qubits and begin to perform and study logical operations at the logical qubit level,” Painter tells Physics World.
According to David Schlegel, a research scientist at the French quantum computing firm Alice & Bob, which specializes in cat qubits, this work marks the beginning of a shift from noisy, classically simulable quantum devices to fully error-corrected quantum chips. He says the AWS team’s most notable achievement is its clever hybrid arrangement of cat qubits for quantum information storage and traditional transmon qubits for error readout.
However, while Schlegel calls the research “innovative”, he says it is not without limitations. Because the AWS chip incorporates transmons, it still needs to address both bit-flip and phase-flip errors. “Other cat qubit approaches focus on completely eliminating bit flips, further reducing the qubit count by more than a factor of 10,” Schlegel says. “But it remains to be seen which approach will prove more effective and hardware-efficient for large-scale error-corrected quantum devices in the long run.”
Physicists in Serbia have begun strike action today in response to what they say is government corruption and social injustice. The one-day strike, called by the country’s official union for researchers, is expected to result in thousands of scientists joining students who have already been demonstrating for months over conditions in the country.
The student protests, which began in November, were triggered by a railway station canopy collapse that killed 15 people. Since then, the protests have grown into an ongoing mass movement seen by many as indirectly seeking to change the government, currently led by president Aleksandar Vučić.
The Serbian government, however, claims it has met all student demands such as transparent publication of all documents related to the accident and the prosecution of individuals who have disrupted the protests. The government has also accepted the resignation of prime minister Miloš Vučević as well as transport minister Goran Vesić and trade minister Tomislav Momirović, who previously held the transport role during the station’s reconstruction.
“The students are championing noble causes that resonate with all citizens,” says Igor Stanković, a statistical physicist at the Institute of Physics (IPB) in Belgrade, who is joining today’s walkout. In January, around 100 employees from the IPB in Belgrade signed a letter in support of the students, one of many from various research institutions since December.
Stanković believes that the corruption and lack of accountability that students are protesting against “stem from systemic societal and political problems, including entrenched patronage networks and a lack of transparency”.
“I believe there is no turning back now,” adds Stanković. “The students have gained support from people across the academic spectrum – including those I personally agree with and others I believe bear responsibility for the current state of affairs. That, in my view, is their strength: standing firmly behind principles, not political affiliations.”
Meanwhile, Miloš Stojaković, a mathematician at the University of Novi Sad, says that the faculty at the university have backed the students from the start especially given that they are making “a concerted effort to minimize disruptions to our scientific work”.
Many university faculties in Serbia have been blockaded by protesting students, who have been using them as a base for their demonstrations. “The situation will have a temporary negative impact on research activities,” admits Dejan Vukobratović, an electrical engineer from the University of Novi Sad. However, most researchers are “finding their way through this situation”, he adds, with “most teams keeping their project partners and funders informed about the situation, anticipating possible risks”.
Missed exams
Amidst the continuing disruptions, the Serbian national science foundation has twice delayed a deadline for the award of €24m of research grants, citing “circumstances that adversely affect the collection of project documentation”. The foundation adds that 96% of its survey participants requested an extension. The researchers’ union has also called on the government to freeze the work status of PhD students employed as research assistants or interns to accommodate the months-long pause to their work. The government has promised to look into it.
Meanwhile, universities are setting up expert groups to figure out how to deal with the delays to studies and missed exams. Physics World approached Serbia’s government for comment, but did not receive a reply.
Researchers in Australia have developed a nanosensor that can detect the onset of gestational diabetes with 95% accuracy. Demonstrated by a team led by Carlos Salomon at the University of Queensland, the superparamagnetic “nanoflower” sensor could enable doctors to detect a variety of complications in the early stages of pregnancy.
Many complications in pregnancy can have profound and lasting effects on both the mother and the developing foetus. Today, these conditions are detected using methods such as blood tests, ultrasound screening and blood pressure monitoring. In many cases, however, their sensitivity is severely limited in the earliest stages of pregnancy.
“Currently, most pregnancy complications cannot be identified until the second or third trimester, which means it can sometimes be too late for effective intervention,” Salomon explains.
To tackle this challenge, Salomon and his colleagues are investigating the use of specially engineered nanoparticles to isolate and detect biomarkers in the blood associated with complications in early pregnancy. Specifically, they aim to detect the protein molecules carried by extracellular vesicles (EVs) – tiny, membrane-bound particles released by the placenta, which play a crucial role in cell signalling.
In their previous research, the team pioneered the development of superparamagnetic nanostructures that selectively bind to specific EV biomarkers. Superparamagnetism occurs specifically in small, ferromagnetic nanoparticles, causing their magnetization to randomly flip direction under the influence of temperature. When proteins are bound to the surfaces of these nanostructures, their magnetic responses are altered detectably, providing the team with a reliable EV sensor.
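The temperature-driven flipping described here is usually characterized by the Néel–Arrhenius relaxation time, a standard result for single-domain magnetic nanoparticles (quoted for context; the study itself may parameterize the sensor response differently):

```latex
\tau_{\mathrm{N}} = \tau_{0}\,\exp\!\left(\frac{K V}{k_{\mathrm{B}} T}\right),
```

where K is the magnetic anisotropy constant, V the particle volume, T the temperature and τ₀ is typically 10⁻⁹–10⁻¹⁰ s. A nanoparticle behaves superparamagnetically when τ_N is short compared with the measurement time, which is consistent with the article’s description that binding proteins to the particle surface detectably alters its magnetic response.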
“This technology has been developed using nanomaterials to detect biomarkers at low concentrations,” explains co-author Mostafa Masud. “This is what makes our technology more sensitive than current testing methods, and why it can pick up potential pregnancy complications much earlier.”
Previous versions of the sensor used porous nanocubes that efficiently captured EVs carrying a key placental protein named PLAP. By detecting unusual levels of PLAP in the blood of pregnant women, this approach enabled the researchers to detect complications far more easily than with existing techniques. However, the method generally required detection times lasting several hours, making it unsuitable for on-site screening.
In their latest study, reported in Science Advances, Salomon’s team started with a deeper analysis of the EV proteins carried by these blood samples. Through advanced computer modelling, they discovered that complications can be linked to changes in the relative abundance of PLAP and another placental protein, CD9.
Based on these findings, they developed a new superparamagnetic nanosensor capable of detecting both biomarkers simultaneously. Their design features flower-shaped nanostructures made of nickel ferrite, which were embedded into specialized testing strips to boost their sensitivity even further.
Using this sensor, the researchers collected blood samples from 201 pregnant women at 11 to 13 weeks’ gestation. “We detected possible complications, such as preterm birth, gestational diabetes and preeclampsia, which is high blood pressure during pregnancy,” Salomon describes. For gestational diabetes, the sensor demonstrated 95% sensitivity in identifying at-risk cases, and 100% specificity in ruling out healthy cases.
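For readers unfamiliar with the terminology, sensitivity and specificity have their standard definitions in terms of true/false positives (TP, FP) and true/false negatives (TN, FN):

```latex
\mathrm{sensitivity} = \frac{TP}{TP + FN},
\qquad
\mathrm{specificity} = \frac{TN}{TN + FP} .
```

So 95% sensitivity means the sensor flagged 95% of the pregnancies that went on to develop gestational diabetes, while 100% specificity means it raised no false alarms among the healthy pregnancies in this cohort.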
Based on these results, the researchers are hopeful that further refinements to their nanoflower sensor could lead to a new generation of EV protein detectors, enabling the early diagnosis of a wide range of pregnancy complications.
“With this technology, pregnant women will be able to seek medical intervention much earlier,” Salomon says. “This has the potential to revolutionize risk assessment and improve clinical decision-making in obstetric care.”
A counterintuitive result from Einstein’s special theory of relativity has finally been verified more than 65 years after it was predicted. The prediction states that objects moving near the speed of light will appear rotated to an external observer, and physicists in Austria have now observed this experimentally using a laser and an ultrafast stop-motion camera.
A central postulate of special relativity is that the speed of light is the same in all reference frames. An observer who sees an object travelling close to the speed of light and makes simultaneous measurements of its front and back (in the direction of travel) will therefore find that, because photons coming from each end of the object both travel at the speed of light, the object is measurably shorter than it would be for an observer in the object’s reference frame. This is the long-established phenomenon of Lorentz contraction.
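Quantitatively, Lorentz contraction shortens the measured length along the direction of motion according to the familiar formula:

```latex
L = L_{0}\,\sqrt{1 - \frac{v^{2}}{c^{2}}},
```

where L₀ is the object’s length in its own rest frame and v is its speed relative to the observer.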
In 1959, however, two physicists, James Terrell and the future Nobel laureate Roger Penrose, independently noted something else. If the object has any significant optical depth relative to its length – in other words, if its extension parallel to the observer’s line of sight is comparable to its extension perpendicular to this line of sight, as is the case for a cube or a sphere – then photons from the far side of the object (from the observer’s perspective) will take longer to reach the observer than photons from its near side. Hence, if a camera takes an instantaneous snapshot of the moving object, it will collect photons from the far side that were emitted earlier at the same time as it collects photons from the near side that were emitted later.
This time difference stretches the image out, making the object appear longer even as Lorentz contraction makes its measurements shorter. Because the stretching and the contraction cancel out, the photographed object will not appear to change length at all.
But that isn’t the whole story. For the cancellation to work, the photons reaching the observer from the part of the object facing its direction of travel must have been emitted later than the photons that come from its trailing edge. This is because photons from the far and back sides come from parts of the object that would normally be obscured by the front and near sides. However, because the object moves in the time it takes photons to propagate, it creates a clear passage for trailing-edge photons to reach the camera.
The cumulative effect, Terrell and Penrose showed, is that instead of appearing to contract – as one would naïvely expect – a three-dimensional object photographed travelling at nearly the speed of light will appear rotated.
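The classic way to see this – a textbook illustration rather than a description of the new experiment – is a cube of rest side L photographed from far away, perpendicular to its motion at speed v = βc. The snapshot shows the face turned towards the camera Lorentz-contracted, while the normally hidden trailing face is exposed:

```latex
w_{\text{facing}} = L\sqrt{1-\beta^{2}},
\qquad
w_{\text{trailing}} = \beta L,
```

which are exactly the projected widths of a stationary cube rotated through an angle α = arcsin β – hence the apparent rotation rather than a squashed image.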
The Terrell effect in the lab
While multiple computer models have been constructed to illustrate this “Terrell effect” rotation, it has largely remained a thought experiment. In the new work, however, Peter Schattschneider of the Technical University of Vienna and colleagues realized it in an experimental setup. To do this, they shone pulsed laser light onto one of two moving objects: a sphere or a cube. The laser pulses were synchronized to a picosecond camera that collected light scattered off the object.
The researchers programmed the camera to produce a series of images at each position of the moving object. They then allowed the object to move to the next position and, when the laser pulsed again, recorded another series of ultrafast images with the camera. By linking together images recorded from the camera in response to different laser pulses, the researchers were able to, in effect, reduce the speed of light to less than 2 m/s.
When they did so, they observed that the object rotated rather than contracted, just as Terrell and Penrose predicted. While their results did deviate somewhat from theoretical predictions, this was unsurprising given that the predictions rest on certain assumptions. One of these is that incoming rays of light should be parallel to the observer, which is only true if the distance from object to observer is infinite. Another is that each image should be recorded instantaneously, whereas the shutter speed of real cameras is inevitably finite.
Because their research is awaiting publication by a journal with an embargo policy, Schattschneider and colleagues were unavailable for comment. However, the Harvard University astrophysicist Avi Loeb, who suggested in 2017 that the Terrell effect could have applications for measuring exoplanet masses, is impressed: “What [the researchers] did here is a very clever experiment where they used very short pulses of light from an object, then moved the object, and then looked again at the object and then put these snapshots together into a movie – and because it involves different parts of the body reflecting light at different times, they were able to get exactly the effect that Terrell and Penrose envisioned,” he says. Though Loeb notes that there’s “nothing fundamentally new” in the work, he nevertheless calls it “a nice experimental confirmation”.
The research is available on the arXiv pre-print server.
The integrity of science could be threatened by publishers changing scientific papers after they have been published – but without making any formal public notification. That’s the verdict of a new study by an international team of researchers, who coin such changes “stealth corrections”. They want publishers to publicly log all changes that are made to published scientific research (Learned Publishing 38 e1660).
When corrections are made to a paper after publication, it is standard practice for a notice to be added to the article explaining what has been changed and why. This transparent record keeping is designed to retain trust in the scientific record. But last year, René Aquarius, a neurosurgery researcher at Radboud University Medical Center in the Netherlands, noticed this does not always happen.
After spotting an issue with an image in a published paper, he raised concerns with the authors, who acknowledged the concerns and stated that they were “checking the original data to figure out the problem” and would keep him updated. However, Aquarius was surprised to see that the figure had been updated a month later, but without a correction notice stating that the paper had been changed.
Teaming up with colleagues from Belgium, France, the UK and the US, Aquarius began to identify and document similar stealth corrections. They did so by recording instances that they and other “science sleuths” had already found and by searching online for terms such as “no erratum”, “no corrigendum” and “stealth” on PubPeer – an online platform where users discuss and review scientific publications.
Sustained vigilance
The researchers define a stealth correction as at least one post-publication change being made to a scientific article that does not provide a correction note or any other indicator that the publication has been temporarily or permanently altered. The researchers identified 131 stealth corrections spread across 10 scientific publishers and in different fields of research. In 92 of the cases, the stealth correction involved a change in the content of the article, such as to figures, data or text.
The remaining unrecorded changes covered three categories: “author information” such as the addition of authors or changes in affiliation; “additional information”, including edits to ethics and conflict of interest statements; and “the record of editorial process”, for instance alterations to editor details and publication dates. “For most cases, we think that the issue was big enough to have a correction notice that informs the readers what was happening,” Aquarius says.
After the authors began drawing attention to the stealth corrections, five of the papers received an official correction notice, nine were given expressions of concern, 17 reverted to the original version and 11 were retracted. Aquarius says he believes it is “important” that readers know what has happened to a paper “so they can make up their own mind whether they want to trust [it] or not”.
The researchers would now like to see publishers implementing online correction logs that make it impossible to change anything in a published article without it being transparently reported, however small the edit. They also say that clearer definitions and guidelines are required concerning what constitutes a correction and needs a correction notice.
“We need to have sustained vigilance in the scientific community to spot these stealth corrections and also register them publicly, for example on PubPeer,” Aquarius says.
The story begins with the startling event that gives the book its unusual moniker: the firing of a Colt revolver in the famous London cathedral in 1951. A similar experiment was also performed in the Royal Festival Hall in the same year (see above photo). Fortunately, this was simply a demonstration for journalists of an experiment to understand and improve the listening experience in a space notorious for its echo and other problematic acoustic features.
St Paul’s was completed in 1711 and Smyth, a historian of architecture, science and construction at the University of Cambridge in the UK, explains that until the turn of the last century, the only way to evaluate the quality of sound in such a building was by ear. The book then reveals how this changed. Over five decades of innovative experiments, scientists and architects built a quantitative understanding of how a building’s shape, size and interior furnishings determine the quality of speech and music through reflection and absorption of sound waves.
The evolution of architectural acoustics as a scientific field was driven by a small group of dedicated researchers
We are first taken back to the dawn of the 20th century and shown how the evolution of architectural acoustics as a scientific field was driven by a small group of dedicated researchers. This includes architect and pioneering acoustician Hope Bagenal, along with several physicists, notably Harvard-based US physicist Wallace Clement Sabine.
Details of Sabine’s career, alongside those of Bagenal, whose personal story forms the backbone for much of the book, deftly put a human face on the research that transformed these public spaces. Perhaps Sabine’s most significant contribution was the derivation of a formula to predict the time taken for sound to fade away in a room. Known as the “reverberation time”, this became a foundation of architectural acoustics, and his mathematical work still forms the basis for the field today.
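In its modern form (quoted here for context; Smyth’s book recounts the historical derivation), Sabine’s reverberation-time formula reads:

```latex
T_{60} \approx \frac{0.161\,V}{\sum_{i} S_{i}\,\alpha_{i}},
```

where T₆₀ is the time in seconds for the sound level to fall by 60 dB, V is the room volume in cubic metres, and the sum is the total absorption: each surface area S_i (in square metres) weighted by its absorption coefficient α_i.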
The presence of people, objects and reflective or absorbing surfaces all affect a room’s acoustics. Smyth describes how materials ranging from rugs and timber panelling to specially developed acoustic plaster and tiles have all been investigated for their acoustic properties. She also vividly details the venues where acoustics interventions were added – such as the reflective teak flooring and vast murals painted on absorbent felt in the Henry Jarvis Memorial Hall of the Royal Institute of British Architects in London.
Other locations featured include the Royal Albert Hall, Abbey Road Studios, White Rock Pavilion at Hastings, and the Assembly Chamber of the Legislative Building in New Delhi, India. Temporary structures and spaces for musical performance are highlighted too. These include the National Gallery while it was cleared of paintings during the Second World War and the triumph of acoustic design that was the Glasgow Empire Exhibition concert hall – built for the 1938 event and sadly dismantled that same year.
Unsurprisingly, much of this acoustic work was either punctuated or heavily influenced by the two world wars. While in the trenches during the First World War, Bagenal wrote a journal paper on cathedral acoustics that detailed his pre-war work at St Paul’s Cathedral, Westminster Cathedral and Westminster Abbey. His paper discussed timbre, resonant frequency “and the effects of interference and delay on clarity and harmony”.
In 1916, back in England recovering from a shellfire injury, Bagenal started what would become a long-standing research collaboration with the commandant of the hospital where he was recuperating – who happened to be Alex Wood, a physics lecturer at Cambridge. Equally fascinating is hearing about the push in the wake of the First World War for good speech acoustics in public spaces used for legislative and diplomatic purposes.
Smyth also relates tales of the wrangling that sometimes took place over funding for acoustic experiments on public buildings, and how, as the 20th century progressed, companies specializing in acoustic materials sprang up – and in some cases made dubious claims about the merits of their products. Meanwhile, new technologies such as tape recorders and microphones helped bring a more scientific approach to architectural acoustics research.
The author concludes by describing how the acoustic research from the preceding decades influenced the auditorium design of the Royal Festival Hall on the South Bank in London, which, as Smyth states, was “the first building to have been designed from the outset as a manifestation of acoustic science”.
As evidenced by the copious notes, the wealth of contemporary quotes, and the captivating historical photos and excerpts from archive documents, this book is well-researched. But while I enjoyed the pace and found myself hooked into the story, I found the text repetitive in places, and felt that more details about the physics of acoustics would have enhanced the narrative.
But these are minor grumbles. Overall Smyth paints an evocative picture, transporting us into these legendary auditoria. I have always found it a rather magical experience attending concerts at the Royal Albert Hall. Now, thanks to this book, the next time I have that pleasure I will do so with a far greater understanding of the role physics and physicists played in shaping the music I hear. For me at least, listening will never be quite the same again.
2024 Manchester University Press 328pp £25.00/$36.95
As service lifetimes of electric vehicle (EV) and grid storage batteries continually improve, it has become increasingly important to understand how Li-ion batteries perform after extensive cycling. Using a combination of spatially resolved synchrotron x-ray diffraction and computed tomography, the complex kinetics and spatially heterogeneous behaviour of extensively cycled cells can be mapped and characterized under both near-equilibrium and non-equilibrium conditions.
This webinar shows examples of commercial cells with thousands (even tens of thousands) of cycles over many years. The behaviour of such cells can be surprisingly complex and spatially heterogeneous, requiring a different approach to analysis and modelling than what is typically used in the literature. Using this approach, we investigate the long-term behaviour of Ni-rich NMC cells and examine ways to prevent degradation. This work also showcases the incredible durability of single-crystal cathodes, which show very little evidence of mechanical or kinetic degradation after more than 20,000 cycles – the equivalent of driving an EV for 8 million km!
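As a rough sanity check of that headline figure (assuming, purely for illustration, a driving range of about 400 km per full charge–discharge cycle – a number not given in the webinar description):

```latex
20\,000\ \text{cycles} \times \sim\!400\ \text{km per cycle} \approx 8\times10^{6}\ \text{km}.
```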
Toby Bond
Toby Bond is a senior scientist in the Industrial Science group at the Canadian Light Source (CLS), Canada’s national synchrotron facility. A specialist in x-ray imaging and diffraction, he focuses on in-situ and operando analysis of batteries and fuel cells for industry clients of the CLS. Bond is an electrochemist by training, who completed his MSc and PhD in Jeff Dahn’s laboratory at Dalhousie University with a focus on developing methods and instrumentation to characterize long-term degradation in Li-ion batteries.
The Superconducting Quantum Materials and Systems (SQMS) Center, led by Fermi National Accelerator Laboratory (Chicago, Illinois), is on a mission “to develop beyond-the-state-of-the-art quantum computers and sensors applying technologies developed for the world’s most advanced particle accelerators”. SQMS director Anna Grassellino talks to Physics World about the evolution of a unique multidisciplinary research hub for quantum science, technology and applications.
What’s the headline take on SQMS?
Established as part of the US National Quantum Initiative (NQI) Act of 2018, SQMS is one of the five National Quantum Information Science Research Centers run by the US Department of Energy (DOE). With funding of $115m through its initial five-year funding cycle (2020-25), SQMS represents a coordinated, at-scale effort – comprising 35 partner institutions – to address pressing scientific and technological challenges for the realization of practical quantum computers and sensors, as well as exploring how novel quantum tools can advance fundamental physics.
Our mission is to tackle one of the biggest cross-cutting challenges in quantum information science: the lifetime of superconducting quantum states – also known as the coherence time (the length of time that a qubit can effectively store and process information). Understanding and mitigating the physical processes that cause decoherence – and, by extension, limit the performance of superconducting qubits – is critical to the realization of practical and useful quantum computers and quantum sensors.
How is the centre delivering versus the vision laid out in the NQI?
SQMS has brought together an outstanding group of researchers who, collectively, have utilized a suite of enabling technologies from Fermilab’s accelerator science programme – and from our network of partners – to realize breakthroughs in qubit chip materials and fabrication processes; design and development of novel quantum devices and architectures; as well as the scale-up of complex quantum systems. Central to this endeavour are superconducting materials, superconducting radiofrequency (SRF) cavities and cryogenic systems – all workhorse technologies for particle accelerators employed in high-energy physics, nuclear physics and materials science.
Collective endeavour At the core of SQMS success are top-level scientists and engineers leading the centre’s cutting-edge quantum research programmes. From left to right: Alexander Romanenko, Silvia Zorzetti, Tanay Roy, Yao Lu, Anna Grassellino, Akshay Murthy, Roni Harnik, Hank Lamm, Bianca Giaccone, Mustafa Bal, Sam Posen. (Courtesy: Hannah Brumbaugh/Fermilab)
Take our research on decoherence channels in quantum devices. SQMS has made significant progress in the fundamental science and mitigation of losses in the oxides, interfaces, substrates and metals that underpin high-coherence qubits and quantum processors. These advances – the result of wide-ranging experimental and theoretical investigations by SQMS materials scientists and engineers – led, for example, to the demonstration of transmon qubits (a type of charge qubit exhibiting reduced sensitivity to noise) with systematic improvements in coherence, record-breaking lifetimes of over a millisecond, and reductions in performance variation.
How are you building on these breakthroughs?
First of all, we have worked on technology transfer. By developing novel chip fabrication processes together with quantum computing companies, we have helped our industry partners achieve up to a 2.5x improvement in the error performance of their superconducting chip-based quantum processors.
We have combined these qubit advances with Fermilab’s ultrahigh-coherence 3D SRF cavities: advancing our efforts to build a cavity-based quantum processor and, in turn, demonstrating the longest-lived superconducting multimode quantum processor unit ever built (coherence times in excess of 20 ms). These systems open the path to a more powerful qudit-based quantum computing approach. (A qudit is a multilevel quantum unit that can occupy more than two states.) What’s more, SQMS has already put these novel systems to use as quantum sensors within Fermilab’s particle physics programme – probing for the existence of dark-matter candidates, for example, as well as enabling precision measurements and fundamental tests of quantum mechanics.
Elsewhere, we have been pushing early-stage societal impacts of quantum technologies and applications – including the use of quantum computing methods to enhance data analysis in magnetic resonance imaging (MRI). Here, SQMS scientists are working alongside clinical experts at New York University Langone Health to apply quantum techniques to quantitative MRI, an emerging diagnostic modality that could one day provide doctors with a powerful tool for evaluating tissue damage and disease.
What technologies pursued by SQMS will be critical to the scale-up of quantum systems?
There are several important examples, but I will highlight two of specific note. For starters, there’s our R&D effort to efficiently scale millikelvin-regime cryogenic systems. SQMS teams are currently developing technologies for larger and higher-cooling-power dilution refrigerators. We have designed and prototyped novel systems allowing over 20x higher cooling power, a necessary step to enable the scale-up to thousands of superconducting qubits per dilution refrigerator.
Materials insights The SQMS collaboration is studying the origins of decoherence in state-of-the-art qubits (above) using a raft of advanced materials characterization techniques – among them time-of-flight secondary-ion mass spectrometry, cryo electron microscopy and scanning probe microscopy. With a parallel effort in materials modelling, the centre is building a hierarchy of loss mechanisms that is informing how to fabricate the next generation of high-coherence qubits and quantum processors. (Courtesy: Dan Svoboda/Fermilab)
Also, we are working to optimize microwave interconnects with very low energy loss, taking advantage of SQMS expertise in low-loss superconducting resonators and materials in the quantum regime. (Quantum interconnects are critical components for linking devices together to enable scaling to large quantum processors and systems.)
How important are partnerships to the SQMS mission?
Partnerships are foundational to the success of SQMS. The DOE National Quantum Information Science Research Centers were conceived and built as mini-Manhattan projects, bringing together the power of multidisciplinary and multi-institutional groups of experts. SQMS is a leading example of building bridges across the “quantum ecosystem” – with other national and federal laboratories, with academia and industry, and across agency and international boundaries.
In this way, we have scaled up unique capabilities – multidisciplinary know-how, infrastructure and a network of R&D collaborations – to tackle the decoherence challenge and to harvest the power of quantum technologies. A case study in this regard is Ames National Laboratory, a specialist DOE centre for materials science and engineering on the campus of Iowa State University.
Ames is a key player in a coalition of materials science experts – coordinated by SQMS – seeking to unlock fundamental insights about qubit decoherence at the nanoscale. Through Ames, SQMS and its partners get access to powerful analytical tools – modalities like terahertz spectroscopy and cryo transmission electron microscopy – that aren’t routinely found in academia or industry.
What are the drivers for your engagement with the quantum technology industry?
The SQMS strategy for industry engagement is clear: to work hand-in-hand to solve technological challenges utilizing complementary facilities and expertise; to abate critical performance barriers; and to bring bidirectional value. I believe that even large companies do not have the ability to achieve practical quantum computing systems working exclusively on their own. The challenges at hand are vast and often require R&D partnerships among experts across diverse and highly specialized disciplines.
I also believe that DOE National Laboratories – given their depth of expertise and ability to build large-scale and complex scientific instruments – are, and will continue to be, key players in the development and deployment of the first useful and practical quantum computers. This means not only as end-users, but as technology developers. Our vision at SQMS is to lay the foundations of how we are going to build these extraordinary machines in partnership with industry. It’s about learning to work together and leveraging our mutual strengths.
How do Rigetti and IBM, for example, benefit from their engagement with SQMS?
The partnership with IBM, although more recent, is equally significant. Together with IBM researchers, we are interested in developing quantum interconnects – including the development of high-Q cables to make them less lossy – for the high-fidelity connection and scale-up of quantum processors into large and useful quantum computing systems.
At the same time, SQMS scientists are exploring simulations of problems in high-energy physics and condensed-matter physics using quantum computing cloud services from Rigetti and IBM.
Presumably, similar benefits accrue to suppliers of ancillary equipment to the SQMS quantum R&D programme?
Correct. We challenge our suppliers of advanced materials and fabrication equipment to go above and beyond, working closely with them on continuous improvement and new product innovation. In this way, for example, our suppliers of silicon and sapphire substrates and nanofabrication platforms – key technologies for advanced quantum circuits – benefit from SQMS materials characterization tools and fundamental physics insights that would simply not be available in isolation. These technologies are still at a stage where we need fundamental science to help define the ideal materials specifications and standards.
We are also working with companies developing quantum control boards and software, collaborating on custom solutions to unique hardware architectures such as the cavity-based qudit platforms in development at Fermilab.
How is your team building capacity to support quantum R&D and technology innovation?
We’ve pursued a twin-track approach to the scaling of SQMS infrastructure. On the one hand, we have augmented – very successfully – a network of pre-existing facilities at Fermilab and at SQMS partners, spanning accelerator technologies, materials science and cryogenic engineering. In aggregate, this covers hundreds of millions of dollars’ worth of infrastructure that we have re-employed or upgraded for studying quantum devices, including access to a host of leading-edge facilities via our R&D partners – for example, microkelvin-regime quantum platforms at Royal Holloway, University of London, and underground quantum testbeds at INFN’s Gran Sasso Laboratory.
Thinking big in quantum The SQMS Quantum Garage (above) houses a suite of R&D testbeds to support granular studies of superconducting qubits, quantum processors, high-coherence quantum sensors and quantum interconnects. (Courtesy: Ryan Postel/Fermilab)
In parallel, we have invested in new and dedicated infrastructure to accelerate our quantum R&D programme. The Quantum Garage here at Fermilab is the centrepiece of this effort: a 560 square-metre laboratory with a fleet of six additional dilution refrigerators for cryogenic cooling of SQMS experiments as well as test, measurement and characterization of superconducting qubits, quantum processors, high-coherence quantum sensors and quantum interconnects.
What is the vision for the future of SQMS?
SQMS is putting together an exciting proposal in response to a DOE call for the next five years of research. Our efforts on coherence will remain paramount. We have come a long way, but the field still needs to make substantial advances in terms of noise reduction of superconducting quantum devices. There’s great momentum and we will continue to build on the discoveries made so far.
We have also demonstrated significant progress regarding our 3D SRF cavity-based quantum computing platform. So much so that we now have a clear vision of how to implement a mid-scale prototype quantum computer with over 50 qudits in the coming years. To get us there, we will be laying out an exciting SQMS quantum computing roadmap by the end of 2025.
It’s equally imperative to address the scalability of quantum systems. Together with industry, we will work to demonstrate practical and economically feasible approaches to be able to scale up to large quantum computing data centres with millions of qubits.
Finally, SQMS scientists will work on exploring early-stage applications of quantum computers, sensors and networks. Technology will drive the science, science will push the technology – a continuous virtuous cycle that I’m certain will lead to plenty more ground-breaking discoveries.
How SQMS is bridging the quantum skills gap
Education, education, education SQMS hosted the inaugural US Quantum Information Science (USQIS) School in summer 2023. Held annually, the USQIS is organized in conjunction with other DOE National Laboratories, academia and industry. (Courtesy: Dan Svoboda/Fermilab)
As with its efforts in infrastructure and capacity-building, SQMS is addressing quantum workforce development on multiple fronts.
Across the centre, Grassellino and her management team have recruited upwards of 150 technical staff and early-career researchers over the past five years to accelerate the SQMS R&D effort. “These ‘boots on the ground’ are a mix of PhD students, postdoctoral researchers plus senior research and engineering managers,” she explains.
Another significant initiative was launched in summer 2023, when SQMS hosted nearly 150 delegates at Fermilab for the inaugural US Quantum Information Science (USQIS) School – now an annual event organized in conjunction with other National Laboratories, academia and industry. The long-term goal is to develop the next generation of quantum scientists, engineers and technicians by sharing SQMS know-how and experimental skills in a systematic way.
“The prioritization of quantum education and training is key to sustainable workforce development,” notes Grassellino. With this in mind, she is currently in talks with academic and industry partners about an SQMS-developed master’s degree in quantum engineering. Such a programme would reinforce the centre’s already diverse internship initiatives, with graduate students benefiting from dedicated placements at SQMS and its network partners.
“Wherever possible, we aim to assign our interns with co-supervisors – one from a National Laboratory, say, another from industry,” adds Grassellino. “This ensures the learning experience shapes informed decision-making about future career pathways in quantum science and technology.”
From its sites in South Africa and Australia, the Square Kilometre Array (SKA) Observatory last year achieved “first light” – producing its first-ever images. When its planned 197 dishes and 131,072 antennas are fully operational, the SKA will be the largest and most sensitive radio telescope in the world.
Under the umbrella of a single observatory, the telescopes at the two sites will work together to survey the cosmos. The Australian side, known as SKA-Low, will focus on low frequencies, while South Africa’s SKA-Mid will observe mid-range frequencies. The £1bn telescopes, which are projected to begin making science observations in 2028, were built to shed light on some of the most intractable problems in astronomy, such as how galaxies form, the nature of dark matter, and whether life exists on other planets.
Three decades in the making, the SKA will stand on the shoulders of many smaller experiments and telescopes – a suite of so-called “precursors” and “pathfinders” that have trialled new technologies and shaped the instrument’s trajectory. The 15 pathfinder experiments dotted around the planet are exploring different aspects of SKA science.
Meanwhile, on the SKA sites in Australia and South Africa, there are four precursor telescopes – MeerKAT and HERA in South Africa, and the Australian SKA Pathfinder (ASKAP) and the Murchison Widefield Array (MWA) in Australia. These precursors are weathering the arid local conditions and are already broadening scientists’ understanding of the universe.
“The SKA was the big, ambitious end game that was going to take decades,” says Steven Tingay, director of the MWA based in Bentley, Australia. “Underneath that umbrella, a huge number of already fantastic things have been done with the precursors, and they’ve all been investments that have been motivated by the path to the SKA.”
Even as technology and science testbeds, “they have far surpassed what anyone reasonably expected of them”, adds Emma Chapman, a radio astronomer at the University of Nottingham, UK.
MeerKAT: glimpsing the heart of the Milky Way
In 2018, radio astronomers in South Africa were scrambling to pull together an image for the inauguration of the 64-dish MeerKAT radio telescope. MeerKAT will eventually form the heart of SKA-Mid, picking up frequencies between 350 megahertz and 15.4 gigahertz, and the researchers wanted to show what it was capable of.
As you’ve never seen it before A radio image of the centre of the Milky Way taken by the MeerKAT telescope. The elongated radio filaments visible emanating from the heart of the galaxy are 10 times more numerous than in any previous image. (Courtesy: I. Heywood, SARAO)
Like all the SKA precursors, MeerKAT is an interferometer, with many dishes acting like a single giant instrument. MeerKAT’s dishes stand about three storeys high, with a diameter of 13.5 m, and the largest distance between dishes is about 8 km. This layout is key to the interferometer’s performance: longer baselines between dishes increase the telescope’s angular resolution, and hence the detail it can capture.
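As a rough illustration (a back-of-envelope estimate, not a figure quoted here), an interferometer’s angular resolution is roughly the observing wavelength divided by its longest baseline:

$$\theta \;\approx\; \frac{\lambda}{B_{\max}} \;=\; \frac{0.21\ \text{m}}{8000\ \text{m}} \;\approx\; 2.6\times10^{-5}\ \text{rad} \;\approx\; 5\ \text{arcsec},$$

taking MeerKAT’s roughly 8 km maximum baseline at the 1.4 GHz (21 cm) hydrogen line. Stretching the baselines to 150 km, as planned for SKA-Mid, sharpens this by a factor of about 20.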
Additional dishes will be integrated into the interferometer to form SKA-Mid. The new dishes will be larger (with diameters of 15 m) and further apart (with baselines of up to 150 km), making it much more sensitive than MeerKAT on its own. Nevertheless, using just the provisional data from MeerKAT, the researchers were able to mark the unveiling of the telescope with the clearest radio image yet of our galactic centre.
Four years later, an international team used the MeerKAT data to produce an even more detailed image of the centre of the Milky Way (ApJL 949 L31). The image (above) shows radio-emitting filaments up to 150 light-years long unspooling from the heart of the galaxy. These structures, whose origin remains unknown, were first observed in 1984, but the new image revealed 10 times more of them than had ever been seen before.
“We have studied individual filaments for a long time with a myopic view,” Farhad Yusef-Zadeh, an astronomer at Northwestern University in the US and an author on the image paper, said at the time. “Now, we finally see the big picture – a panoramic view filled with an abundance of filaments. This is a watershed in furthering our understanding of these structures.”
The image resembles a “glorious artwork, conveying how bright black holes are in radio waves, but with the busyness of the galaxy going on around it”, says Chapman. “Runaway pulsars, supernovae remnant bubbles, magnetic field lines – it has it all.”
In a different area of astronomy, MeerKAT “has been a surprising new contender in the field of pulsar timing”, says Natasha Hurley-Walker, an astronomer at the Curtin University node of the International Centre for Radio Astronomy Research in Bentley. Pulsars are rotating neutron stars that produce periodic pulses of radiation, in some cases hundreds of times a second. MeerKAT’s sensitivity, combined with its precise time-stamping, allows it to accurately map these powerful radio sources.
An experiment called the MeerKAT Pulsar Timing Array has been observing a group of 80 pulsars once a fortnight since 2019 and is using them as “cosmic clocks” to create a map of gravitational-wave sources. “If we see pulsars in the same direction in the sky lose time in a connected way, we start suspecting that it is not the pulsars that are acting funny but rather a gravitational wave background that has interfered,” says Marisa Geyer, an astronomer at the University of Cape Town and a co-author on several papers about the array published last year.
HERA: the first stars and galaxies
When astronomers dreamed up the idea for the SKA about 30 years ago, they wanted an instrument that could not only capture a wide view of the universe but was also sensitive enough to look far back in time. In the first billion years after the Big Bang, the universe cooled enough for hydrogen and helium to form, eventually clumping into stars and galaxies.
When these early stars began to shine, their light stripped electrons from the primordial hydrogen that still populated most of the cosmos – a period of cosmic history known as the Epoch of Reionization. The hydrogen’s faint radio signal carries the imprint of this transition, and catching glimpses of this ancient radiation remains one of the major science goals of the SKA.
Developing methods to identify these primordial hydrogen signals is the job of the Hydrogen Epoch of Reionization Array (HERA) – a collection of hundreds of 14 m dishes, packed closely together as they watch the sky, like bowls made of wire mesh (see image below). They have been specifically designed to observe fluctuations in primordial hydrogen in the low-frequency range of 100 MHz to 200 MHz.
Echoes of the early universe The HERA telescope is listening for the faint signals from the first primordial hydrogen that formed after the Big Bang. (Courtesy: South African Radio Astronomy Observatory (SARAO))
Understanding this mysterious epoch sheds light on how young cosmic objects influenced the formation of larger ones and later seeded other objects in the universe. Scientists using HERA data have already reported the most sensitive power limits on the reionization signal (ApJ 945 124), bringing us closer to pinning down what the early universe looked like and how it evolved, and will eventually guide SKA observations. “It always helps to be able to target things better before you begin to build and operate a telescope,” explains HERA project manager David de Boer, an astronomer at the University of California, Berkeley in the US.
MWA: “unexpected” new objects
Over in Australia, meanwhile, the MWA’s 4096 antennas crouch on the red desert sand like spiders (see image below). This interferometer has a particularly wide-field view because, unlike its mid-frequency precursor cousins, it has no moving parts, allowing it to view large parts of the sky at the same time. Each antenna also contains a low-noise amplifier in its centre, boosting the relatively weak low-frequency signals from space. “In a single observation, you cover an enormous fraction of the sky”, says Tingay. “That’s when you can start to pick up rare events and rare objects.”
Sharp eyes With its wide field of view and low-noise signal amplifiers, the MWA telescope in Australia is poised to spot brief and rare cosmic events, and it has already discovered a new class of mysterious radio transients. (Courtesy: Marianne Annereau, 2015 Murchison Widefield Array (MWA))
Hurley-Walker and colleagues discovered one such object a few years ago – repeated, powerful blasts of radio waves that occurred every 18 minutes and lasted about a minute. These signals were an example of a “radio transient” – astrophysical phenomena that last from milliseconds to years and may repeat or occur just once. Radio transients have been attributed to many sources, including pulsars, but the period of this event was much longer than had ever been observed before.
After the researchers first noticed this signal, they followed up with other telescopes and searched archival data from other observatories going back 30 years to confirm the peculiar time scale. “This has spurred observers around the world to look through their archival data in a new way, and now many new similar sources are being discovered,” Hurley-Walker says.
The discovery of new transients, including this one, is “challenging our current models of stellar evolution”, according to Cathryn Trott, a radio astronomer at the Curtin Institute of Radio Astronomy in Bentley, Australia. “No one knows what they are, how they are powered, how they generate radio waves, or even whether they are all the same type of object,” she adds.
This is something that the SKA – both SKA-Mid and SKA-Low – will investigate. The Australian SKA-Low antennas detect frequencies between 50 MHz and 350 MHz. They build on some of the techniques trialled by the MWA, such as the efficacy of using low-frequency antennas and how to combine their received signals into a digital beam. SKA-Low, with its similarly wide field of view, will offer a powerful new perspective on this developing area of astronomy.
ASKAP: giant sky surveys
The 36-dish ASKAP saw first light in 2012, the same year it was decided to split the SKA between Australia and South Africa. ASKAP was part of Australia’s efforts to prove that it could host the massive telescope, but it has since become an important instrument in its own right. Its dishes use a technology called a phased array feed, which allows the telescope to view different parts of the sky simultaneously.
Each dish contains one of these phased array feeds, which consists of 188 receivers arranged like a chessboard. With this technology, ASKAP can produce 36 concurrent beams covering about 30 square degrees of sky. This means it has a wide field of view, says de Boer, who was ASKAP’s inaugural director in 2010. In its first large-area survey, published in 2020, astronomers stitched together 903 images and identified more than 3 million sources of radio emission in the southern sky, many of which were new (PASA 37 e048).
Down under The ASKAP telescope array in Australia was used to demonstrate Australia’s capability to host the SKA. Able to rapidly take wide surveys of the sky, it is also a valuable scientific instrument in its own right, and has made significant discoveries in the study of Fast Radio Bursts. (Courtesy: CSIRO)
Because it can quickly survey large areas of the sky, the telescope has shown itself to be particularly adept at identifying and studying new fast radio bursts (FRBs). Discovered in 2007, FRBs are another kind of radio transient. They have been observed in many galaxies, and though some have been observed to repeat, most are detected only once.
This work is also helping scientists to understand one of the universe’s biggest mysteries. For decades, researchers have puzzled over the fact that the detectable normal matter in the universe amounts to only about half of the mass we know existed after the Big Bang. The dispersion of FRBs by this “missing matter” allows astronomers to weigh all of the normal matter between us and the distant galaxies hosting the bursts.
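For context, the underlying relation is a textbook one (not spelled out in the article): the arrival delay of a burst between two observing frequencies scales with the dispersion measure, the column density of free electrons along the line of sight,

$$\Delta t \;\approx\; 4.15\ \text{ms}\times \mathrm{DM}\left[\left(\frac{\nu_1}{\text{GHz}}\right)^{-2}-\left(\frac{\nu_2}{\text{GHz}}\right)^{-2}\right],\qquad \mathrm{DM}=\int n_e\,\mathrm{d}l\ \ \left(\text{pc cm}^{-3}\right),$$

so measuring how much an FRB is smeared out across the observing band reveals how much ionized matter it has travelled through.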
By combing through ASKAP data, researchers in 2020 also discovered a new class of radio sources, which they dubbed “odd radio circles” (PASA 38 e003). These are giant rings of radiation that are observed only in radio waves. Five years later their origins remain a mystery, but some scientists maintain they are flashes from ancient star formation.
While SKA has many concrete goals, it is these unexpected discoveries that Philippa Hartley, a scientist at the SKAO, based near Manchester, is most excited about. “We’ve got so many huge questions that we’re going to use the SKA to try and answer, but then you switch on these new telescopes, you’re like, ‘Whoa! We didn’t expect that.’” That is why the precursors are so important. “They’ve given us new questions. And it’s incredibly exciting,” she adds.
Trouble on the horizon
As well as pushing the boundaries of astronomy and shaping the design of the SKA, the precursors have made a discovery much closer to home – one that could be a significant issue for the telescope. In a development that the SKA’s founders will not have foreseen, the race to fill the skies with constellations of satellites is a problem both for the precursors and for the SKA itself.
Large corporations, including SpaceX in Hawthorne, California, OneWeb in London, UK, and Amazon’s Project Kuiper in Seattle, Washington, have launched more than 6000 communications satellites into space. Many more are planned, including over 12,000 from Shanghai Spacecom Satellite Technology’s G60 Starlink constellation. These satellites, as well as global positioning satellites, are “photobombing” astronomy observatories and affecting observations across the electromagnetic spectrum.
The wild, wild west Satellite constellations are causing interference with ground-based observatories. (Courtesy: iStock/yucelyilmaz)
ASKAP, MeerKAT and the MWA have all flagged the impact of satellites on their observations. “The likelihood of a beam of a satellite being within the beam of our telescopes is vanishingly small and is easily avoided,” says Robert Braun, SKAO director of science. However, because they are everywhere, these satellites still introduce background radio interference that contaminates observations, he says.
Although the SKA Observatory is engaging with individual companies to devise engineering solutions, “we really can’t be in a situation where we have bespoke solutions with all of these companies”, SKAO director-general Phil Diamond told a side event at the IAU general assembly in Cape Town last year. “That’s why we’re pursuing the regulatory and policy approach so that there are systems in place,” he said. “At the moment, it’s a bit like the wild, wild west and we do need a sheriff to stride into town to help put that required protection in place.”
In this, too, SKA precursors are charting a path forward, identifying ways to observe even with mega satellite constellations staring down at them. When the full SKA telescopes finally come online in 2028, the discoveries it makes will, in large part, be thanks to the telescopes that came before it.
The internal temperature of a building is important – particularly in offices and work environments – for maximizing comfort and productivity. Managing the temperature is also essential for reducing the energy consumption of a building. In the US, buildings account for around 29% of total end-use energy consumption, with more than 40% of this energy dedicated to managing the internal temperature of a building via heating and cooling.
The human body is sensitive to both radiative and convective heat. The convective part revolves around humidity and air temperature, whereas radiative heat depends upon the surrounding surface temperatures inside the building. Understanding both thermal aspects is key for balancing energy consumption with occupant comfort. However, there are not many practical methods available for measuring the impact of radiative heat inside buildings. Researchers from the University of Minnesota Twin Cities have developed an optical sensor that could help solve this problem.
Limitation of thermostats for radiative heat
Room thermostats are used in almost every building today to regulate the internal temperature and improve the comfort levels for the occupants. However, modern thermostats only measure the local air temperature and don’t account for the effects of radiant heat exchange between surfaces and occupants, resulting in suboptimal comfort levels and inefficient energy use.
Finding a way to measure the mean radiant temperature in real time inside buildings could provide a more efficient way of heating the building – leading to more advanced and efficient thermostat controls. Currently, radiant temperature can be measured using either radiometers or black globe sensors. But radiometers are too expensive for commercial use, and black globe sensors are slow, bulky and error-prone in many indoor environments.
In search of a new approach, first author Fatih Evren (now at Pacific Northwest National Laboratory) and colleagues used low-resolution, low-cost infrared sensors to measure the longwave mean radiant temperature inside buildings. These sensors eliminate the pan/tilt mechanism (where sensors rotate periodically to measure the temperature at different points and an algorithm determines the surface temperature distribution) required by many other sensors used to measure radiative heat. The new optical sensor also requires 4.5 times less computation power than pan/tilt approaches with the same resolution.
Integrating optical sensors to improve room comfort
The researchers tested infrared thermal array sensors with 32 x 32 pixels in four real-world environments (three living spaces and an office) with different room sizes and layouts. They examined three sensor configurations: one sensor on each of the room’s four walls; two sensors; and a single-sensor setup. The sensors measured the mean radiant temperature for 290 h at internal temperatures of between 18 and 26.8 °C.
The optical sensors capture raw 2D thermal data containing temperature information for adjacent walls, floor and ceiling. To determine surface temperature distributions from these raw data, the researchers used projective homographic transformations – mappings between two different geometric planes. The surfaces of the room were segmented by marking its corners, defining a homography matrix for each surface. Applying the transformations to the raw data then gives the temperature distribution on each surface, and these surface temperatures can in turn be used to calculate the mean radiant temperature.
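To make the pipeline concrete, here is a minimal sketch of the general idea (not the authors’ code: the corner coordinates, view factor and simulated 32 x 32 frame are placeholders, and OpenCV’s perspective transform stands in for the paper’s homography step):

```python
# Minimal sketch: map one wall from a low-resolution thermal frame onto a
# rectified grid with a homography, then estimate the mean radiant temperature
# (MRT) as a view-factor-weighted fourth-power average of surface temperatures.
import numpy as np
import cv2

frame = np.random.uniform(18.0, 27.0, (32, 32)).astype(np.float32)  # stand-in for a 32 x 32 sensor frame, deg C

# Pixel coordinates of the wall's four corners in the thermal image (assumed to
# be known from a one-off calibration), and the corners of the rectified wall map.
corners_px = np.float32([[3, 4], [28, 2], [30, 29], [2, 30]])
wall_w, wall_h = 64, 48  # arbitrary resolution chosen for the rectified wall map
target_px = np.float32([[0, 0], [wall_w - 1, 0], [wall_w - 1, wall_h - 1], [0, wall_h - 1]])

H = cv2.getPerspectiveTransform(corners_px, target_px)        # homography for this surface
wall_temps = cv2.warpPerspective(frame, H, (wall_w, wall_h))  # temperature map of the wall, deg C

# MRT at a point is a view-factor-weighted average of surface temperatures to the
# fourth power (Stefan-Boltzmann weighting). A single wall with a made-up view
# factor stands in here for the full set of walls, floor and ceiling.
surfaces = [(wall_temps, 0.25)]  # (temperature map in deg C, view factor to the occupant)
weighted = sum(F * np.mean((T + 273.15) ** 4) for T, F in surfaces)
mrt_kelvin = (weighted / sum(F for _, F in surfaces)) ** 0.25
print(f"Estimated mean radiant temperature: {mrt_kelvin - 273.15:.1f} deg C")
```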
The team compared the temperatures measured by their sensors against ground truth measurements obtained via the net-radiometer method. The optical sensor was found to be repeatable and reliable for different room sizes, layouts and temperature sensing scenarios, with most approaches agreeing within ±0.5 °C of the ground truth measurement, and a maximum error (arising from a single-sensor configuration) of only ±0.96 °C. The optical sensors were also more accurate than the black globe sensor method, which tends to have higher errors due to under/overestimating solar effects.
The researchers conclude that the sensors are repeatable, scalable and predictable, and that they could be integrated into room thermostats to improve human comfort and energy efficiency – especially for controlling the radiant heating and cooling systems now commonly used in high-performance buildings. They also note that a future direction could be to integrate machine learning and other advanced algorithms to improve the calibration of the sensors.
A new technique for using frequency combs to measure trace concentrations of gas molecules has been developed by researchers in the US. The team reports single-digit parts-per-trillion detection sensitivity and extremely broadband spectral coverage of over 1000 cm⁻¹. This record-level sensing performance could open up a variety of hitherto inaccessible applications in fields such as medicine, environmental chemistry and chemical kinetics.
Each molecular species will absorb light at a specific set of frequencies. So, shining light through a sample of gas and measuring this absorption can reveal the molecular composition of the gas.
Cavity ringdown spectroscopy is an established way to increase the sensitivity of absorption spectroscopy, and it needs no calibration. Laser light is injected into a cavity formed by two highly reflective mirrors, creating an optical standing wave. A sample of gas is introduced into the cavity so that the light passes through it, normally many thousands of times. The absorption of light by the gas is then determined from the rate at which the intracavity light intensity “rings down” – in other words, the rate at which the standing wave decays away.
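As a toy illustration of the principle (a sketch with made-up ring-down times and noise levels, not the group’s analysis code), the decay times of an empty and a gas-filled cavity can be fitted and converted into an absorption coefficient via the standard ring-down relation:

```python
# Minimal sketch: fit exponential ring-down traces for an empty and a gas-filled
# cavity, then convert the change in decay rate into an absorption coefficient
# using the standard relation alpha = (1/c) * (1/tau_gas - 1/tau_empty).
import numpy as np
from scipy.optimize import curve_fit

c = 2.998e8  # speed of light, m/s

def ringdown(t, amplitude, tau):
    return amplitude * np.exp(-t / tau)

t = np.linspace(0, 200e-6, 400)                                      # 200 microseconds of decay
trace_empty = ringdown(t, 1.0, 30e-6) + np.random.normal(0, 0.005, t.size)  # assumed tau = 30 us
trace_gas   = ringdown(t, 1.0, 25e-6) + np.random.normal(0, 0.005, t.size)  # assumed tau = 25 us

popt_empty, _ = curve_fit(ringdown, t, trace_empty, p0=[1.0, 20e-6])
popt_gas,   _ = curve_fit(ringdown, t, trace_gas,   p0=[1.0, 20e-6])
tau_empty, tau_gas = popt_empty[1], popt_gas[1]

alpha = (1.0 / c) * (1.0 / tau_gas - 1.0 / tau_empty)  # absorption coefficient, 1/m
print(f"tau_empty = {tau_empty*1e6:.1f} us, tau_gas = {tau_gas*1e6:.1f} us")
print(f"absorption coefficient ~ {alpha:.2e} per metre")
```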
Researchers have used this method with frequency comb lasers to probe the absorption of gas samples at a range of different light frequencies. A frequency comb produces light at a series of very sharp intensity peaks that are equidistant in frequency – resembling the teeth of a comb.
Shifting resonances
However, the more reflective the mirrors become (the higher the cavity finesse), the narrower each cavity resonance becomes. Because the resonances are not evenly spaced in frequency, and can be shifted substantially by the loaded gas, the comb teeth cannot all be matched to them at once. Instead, one normally oscillates the length of the cavity, which sweeps every cavity resonance back and forth across the nearby comb lines. Multiple resonances are sequentially excited, and the transient comb intensity dynamics are captured by a camera after the light has been spatially separated by an optical grating.
“That experimental scheme works in the near-infrared, but not in the mid-infrared,” says Qizhong Liang. “Mid-infrared cameras are not fast enough to capture those dynamics yet.” This is a problem because the mid-infrared is where many molecules can be identified by their unique absorption spectra.
Liang is a member of Jun Ye’s group at JILA in Colorado, which has shown that it is possible to measure transient comb dynamics simply with a Michelson interferometer. The spectrometer comprises only beam splitters, a delay stage and photodetectors. The researchers worked out that the periodically generated intensity dynamics arising from each tooth of the frequency comb can be detected as a set of Fourier components offset by Doppler frequency shifts. The absorption from the loaded gas can thus be determined.
Dithering the cavity
This process of reading out transient dynamics from “dithering” the cavity by a passive Michelson interferometer is much simpler than previous setups and thus can be used by people with little experience with combs, says Liang. It also places no restrictions on the finesse of the cavity, spectral resolution, or spectral coverage. “If you’re dithering the cavity resonances, then no matter how narrow the cavity resonance is, it’s guaranteed that the comb lines can be deterministically coupled to the cavity resonance twice per cavity round trip modulation,” he explains.
The researchers reported detecting various molecules in exhaled air from volunteers at concentrations as low as parts-per-billion, with parts-per-trillion uncertainty. These included biomedically relevant molecules such as acetone, which is a sign of diabetes, and formaldehyde, which is diagnostic of lung cancer. “Detection of molecules in exhaled breath in medicine has been done in the past,” explains Liang. “The more important point here is that, even if you have no prior knowledge about what the gas sample composition is – be it in industrial applications, environmental science applications or whatever – you can still use it.”
Konstantin Vodopyanov of the University of Central Florida in Orlando comments: “This achievement is remarkable, as it integrates two cutting-edge techniques: cavity ringdown spectroscopy, where a high-finesse optical cavity dramatically extends the laser beam’s path to enhance sensitivity in detecting weak molecular resonances, and frequency combs, which serve as a precise frequency ruler composed of ultra-sharp spectral lines. By further refining the spectral resolution to the Doppler broadening limit of less than 100 MHz and referencing the absolute frequency scale to a reliable frequency standard, this technology holds great promise for applications such as trace gas detection and medical breath analysis.”
Vacuum technology is routinely used in both scientific research and industrial processes. In physics, high-quality vacuum systems make it possible to study materials under extremely clean and stable conditions. In industry, vacuum is used to lift, position and move objects precisely and reliably. Without these technologies, a great deal of research and development would simply not happen. But for all its advantages, working under vacuum does come with certain challenges. For example, once something is inside a vacuum system, how do you manipulate it without opening the system up?
Heavy duty: The new transfer arm. (Courtesy: UHV Design)
The UK-based firm UHV Design has been working on this problem for over a quarter of a century, developing and manufacturing vacuum manipulation solutions for new research disciplines as well as emerging industrial applications. Its products, which are based on magnetically coupled linear and rotary probes, are widely used at laboratories around the world, in areas ranging from nanoscience to synchrotron and beamline applications. According to engineering director Jonty Eyres, the firm’s latest innovation – a new sample transfer arm released at the beginning of this year – extends this well-established range into new territory.
“The new product is a magnetically coupled probe that allows you to move a sample from point A to point B in a vacuum system,” Eyres explains. “It was designed to have an order of magnitude improvement in terms of both linear and rotary motion thanks to the magnets in it being arranged in a particular way. It is thus able to move and position objects that are much heavier than was previously possible.”
The new sample arm, Eyres explains, is made up of a vacuum “envelope” comprising a welded flange and tube assembly. This assembly has an outer magnet array that magnetically couples to an inner magnet array attached to an output shaft. The output shaft extends beyond the mounting flange and incorporates a support bearing assembly. “Depending on the model, the shafts can either be in one or more axes: they move samples around either linearly, linear/rotary or incorporating a dual axis to actuate a gripper or equivalent elevating plate,” Eyres says.
Continual development, review and improvement
While similar devices are already on the market, Eyres says that the new product has a significantly larger magnetic coupling strength in terms of its linear thrust and rotary torque. These features were developed in close collaboration with customers who expressed a need for arms that could carry heavier payloads and move them with more precision. In particular, Eyres notes that in the original product, the maximum weight that could be placed on the end of the shaft – a parameter that depends on the stiffness of the shaft as well as the magnetic coupling strength – was too small for these customers’ applications.
“From our point of view, it was not so much the magnetic coupling that needed to be reviewed, but the stiffness of the device in terms of the size of the shaft that extends out to the vacuum system,” Eyres explains. “The new arm deflects much less from its original position even with a heavier load and when moving objects over longer distances.”
The new product – a scaled-up version of the original – can move objects weighing up to 5 kg (a load of roughly 50 N) over an axial stroke of up to 1.5 m. Eyres notes that it also requires minimal maintenance, which is important for moving higher loads. “It is thus targeted to customers who wish to move larger objects around over longer periods of time without having to worry about intervening too often,” he says.
Moving multiple objects
As well as moving larger, single objects, the new arm’s capabilities make it suitable for moving multiple objects at once. “Rather than having one sample go through at a time, we might want to nest three or four samples onto a large plate, which inevitably increases the size of the overall object,” Eyres explains.
Before they created this product, he continues, he and his UHV Design colleagues were not aware of any magnetic coupled solution on the marketplace that enabled users to do this. “As well as being capable of moving heavy samples, our product can also move lighter samples, but with a lot less shaft deflection over the stroke of the product,” he says. “This could be important for researchers, particularly if they are limited in space or if they wish to avoid adding costly supports in their vacuum system.”
Researchers at Microsoft in the US claim to have made the first topological quantum bit (qubit) – a potentially transformative device that could make quantum computing robust against the errors that currently restrict what it can achieve. “If the claim stands, it would be a scientific milestone for the field of topological quantum computing and physics beyond,” says Scott Aaronson, a computer scientist at the University of Texas at Austin.
However, the claim is controversial because the evidence supporting it has not yet been presented in a peer-reviewed paper. It is made in a press release from Microsoft accompanying a paper in Nature (638 651) that has been written by more than 160 researchers from the company’s Azure Quantum team. The paper stops short of claiming a topological qubit but instead reports some of the key device characterization underpinning it.
Writing in a peer-review file accompanying the paper, the Nature editorial team says that it sought additional input from two of the article’s reviewers to “establish its technical correctness”, concluding that “the results in this manuscript do not represent evidence for the presence of Majorana zero modes [MZMs] in the reported devices”. An MZM is a quasiparticle (a particle-like collective electronic state) that can act as a topological qubit.
“That’s a big no-no”
“The peer-reviewed publication is quite clear [that it contains] no proof for topological qubits,” says Winfried Hensinger, a physicist at the University of Sussex who works on quantum computing using trapped ions. “But the press release speaks differently. In academia that’s a big no-no: you shouldn’t make claims that are not supported by a peer-reviewed publication” – or that have at least been presented in a preprint.
Chetan Nayak, leader of Microsoft Azure Quantum, which is based in Redmond, Washington, says that the evidence for a topological qubit was obtained in the period between submission of the paper in March 2024 and its publication. He will present those results at a talk at the Global Physics Summit of the American Physical Society in Anaheim in March.
But Hensinger is concerned that “the press release doesn’t make it clear what the paper does and doesn’t contain”. He worries that some might conclude that the strong claim of having made a topological qubit is now supported by a paper in Nature. “We don’t need to make these claims – that is just unhealthy and will really hurt the field,” he says, because it could lead to unrealistic expectations about what quantum computers can do.
As with the qubits used in current quantum computers, such as superconducting components or trapped ions, MZMs would be able to encode superpositions of the two readout states (representing a 1 or 0). By quantum-entangling such qubits, information could be manipulated in ways not possible for classical computers, greatly speeding up certain kinds of computation. In MZMs the two states are distinguished by “parity”: whether the quasiparticles contain even or odd numbers of electrons.
Built-in error protection
As MZMs are “topological” states, their settings cannot easily be flipped by random fluctuations to introduce errors into the calculation. Rather, the states are like a twist in a buckled belt that cannot be smoothed out unless the buckle is undone. Topological qubits would therefore suffer far less from the errors that afflict current quantum computers, and which limit the scale of the computations they can support. Because quantum error correction is one of the most challenging issues for scaling up quantum computers, “we want some built-in level of error protection”, explains Nayak.
It has long been thought that MZMs might be produced at the ends of nanoscale wires made of a superconducting material. Indeed, Microsoft researchers have been trying for several years to fabricate such structures and look for the characteristic signature of MZMs at their tips. But it can be hard to distinguish this signature from those of other electronic states that can form in these structures.
In 2018 researchers at labs in the US and the Netherlands (including the Delft University of Technology and Microsoft) claimed to have evidence of an MZM in such devices. However, they then had to retract the work after others raised problems with the data. “That history is making some experts cautious about the new claim,” says Aaronson.
Now, though, it seems that Nayak and colleagues have cracked the technical challenges. In the Nature paper, they report measurements in a nanowire heterostructure made of superconducting aluminium and semiconducting indium arsenide that are consistent with, but not definitive proof of, MZMs forming at the two ends. The crucial advance is an ability to accurately measure the parity of the electronic states. “The paper shows that we can do these measurements fast and accurately,” says Nayak.
“The device is a remarkable achievement from the materials science and fabrication standpoint,” says Ivar Martin, a materials scientist at Argonne National Laboratory in the US. “They have been working hard on these problems, and seems like they are nearing getting the complexities under control.” In the press release, the Microsoft team claims now to have put eight MZM topological qubits on a chip called Majorana 1, which is designed to house a million of them (see figure).
Even if the Microsoft claim stands up, a lot will still need to be done to get from a single MZM to a quantum computer, says Hensinger. Topological quantum computing is “probably 20–30 years behind the other platforms”, he says. Martin agrees. “Even if everything checks out and what they have realized are MZMs, cleaning them up to take full advantage of topological protection will still require significant effort,” he says.
Regardless of the debate about the results and how they have been announced, researchers are supportive of the efforts at Microsoft to produce a topological quantum computer. “As a scientist who likes to see things tried, I’m grateful that at least one player stuck with the topological approach even when it ended up being a long, painful slog,” says Aaronson.
“Most governments won’t fund such work, because it’s way too risky and expensive,” adds Hensinger. “So it’s very nice to see that Microsoft is stepping in there.”
Solid-state batteries are considered a next-generation energy storage technology as they promise higher energy density and safety than lithium-ion batteries with a liquid electrolyte. However, major obstacles to commercialization are the requirement for high stack pressures and insufficient power density. Both aspects are closely related to limitations of charge transport within the composite cathode.
This webinar presents an introduction to using electrochemical impedance spectroscopy to investigate composite cathode microstructures and identify kinetic bottlenecks. Effective conductivities can be obtained using transmission line models and then used to evaluate the main factors limiting electronic and ionic charge transport.
In combination with high-resolution 3D imaging techniques and electrochemical cell cycling, the crucial role of the cathode microstructure can be revealed, the relevant factors influencing cathode performance identified, and optimization strategies for improved cathode performance derived.
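As a flavour of the kind of model involved, the sketch below computes the impedance of a simple single-rail transmission-line (de Levie) model of a porous electrode and checks its low-frequency limit against the familiar R_ion/3 result; the parameter values are placeholders, and the webinar’s models for composite cathodes are considerably more detailed:

```python
# Minimal sketch: impedance of a single-rail transmission-line (de Levie) model
# for a blocked porous electrode. r_ion is the ionic resistance per unit length,
# c_int the interfacial capacitance per unit length, L the electrode thickness.
# All parameter values are illustrative placeholders.
import numpy as np

r_ion = 1.0e4    # ohm / m (assumed)
c_int = 2.0e-2   # F / m (assumed)
L     = 100e-6   # m, electrode thickness (assumed)

def tlm_impedance(freq_hz):
    """Impedance (ohm) of the transmission line at frequency freq_hz."""
    s = 1j * 2 * np.pi * freq_hz
    k = np.sqrt(s * r_ion * c_int)                 # propagation constant
    return np.sqrt(r_ion / (s * c_int)) / np.tanh(k * L)

for f in np.logspace(-2, 5, 8):
    Z = tlm_impedance(f)
    print(f"{f:10.2e} Hz   Re(Z) = {Z.real:10.3e} ohm   -Im(Z) = {-Z.imag:10.3e} ohm")

# At low frequency Re(Z) approaches r_ion * L / 3, the distributed ionic
# resistance that feeds into effective-conductivity analyses.
print("r_ion * L / 3 =", r_ion * L / 3, "ohm")
```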
Philip Minnmann
Philip Minnmann received his M.Sc. in Material Science from RWTH Aachen University. He later joined Prof. Jürgen Janek’s group at JLU Giessen as part of the BMBF Cluster of Competence for Solid-State Batteries FestBatt. During his Ph.D., he worked on composite cathode characterization for sulfide-based solid-state batteries, as well as on processing scalable, slurry-based solid-state batteries. Since 2023, he has been a project manager for high-throughput battery material research at HTE GmbH.
Johannes Schubert
Johannes Schubert holds an M.Sc. in Material Science from the Justus-Liebig University Giessen, Germany. He is currently a Ph.D. student in the research group of Prof. Jürgen Janek in Giessen, where he is part of the BMBF Competence Cluster for Solid-State Batteries FestBatt. His main research focuses on characterization and optimization of composite cathodes with sulfide-based solid electrolytes.
Inside view Private companies like Tokamak Energy in the UK are developing compact tokamaks that, they hope, could bring fusion power to the grid in the 2030s. (Courtesy: Tokamak Energy)
Fusion – the process that powers the Sun – offers a tantalizing opportunity to generate almost unlimited amounts of clean energy. In the Sun’s core, matter is more than 10 times denser than lead and temperatures reach 15 million K. In these conditions, ionized isotopes of hydrogen (deuterium and tritium) can overcome their electrostatic repulsion, fusing into helium nuclei and ejecting high-energy neutrons. The products of this reaction are slightly lighter than the two reacting nuclei, and the excess mass is converted to lots of energy.
The Sun’s core is kept hot and dense by the enormous gravitational force exerted by its huge mass. To achieve nuclear fusion on Earth, different tactics are needed. Instead of gravity, the most common approach uses strong superconducting magnets operating at ultracold temperatures to confine the intensely hot hydrogen plasma.
The engineering and materials challenges of creating what is essentially a “Sun in a freezer”, and harnessing its power to make electricity, are formidable. This is partly because, over time, high-energy neutrons from the fusion reaction will damage the surrounding materials. Superconductors are incredibly sensitive to this kind of damage, so substantial shielding is needed to maximize the lifetime of the reactor.
The traditional roadmap towards fusion power, led by large international projects, has set its sights on bigger and bigger reactors, at greater and greater expense. However, these are moving at a snail’s pace, with the first power to the grid not anticipated until the 2060s – leading to the common perception that “fusion power is 30 years away, and always will be”.
There is therefore considerable interest in alternative concepts for smaller, simpler reactors to speed up the fusion timeline. Such novel reactors will need a different toolkit of superconductors. Promising materials exist, but because fusion can still only be sustained in brief bursts, we have no way to directly test how these compounds will degrade over decades of use.
Is smaller better?
A leading concept for a nuclear fusion reactor is a machine called a tokamak, in which the plasma is confined to a doughnut-shaped region. In a tokamak, D-shaped electromagnets are arranged in a ring around a central column, producing a circulating (toroidal) magnetic field. This exerts a force (the Lorentz force) on the positively charged hydrogen nuclei, making them trace helical paths that follow the field lines and keep them away from the walls of the vessel.
In 2010, construction began in France on ITER, a tokamak that is designed to demonstrate the viability of nuclear fusion for energy generation. The aim is to produce burning plasma, where more than half of the energy heating the plasma comes from fusion in the plasma itself, and to generate, for short pulses, a tenfold return on the power input.
But despite the project being proposed 40 years ago, ITER’s projected first operation was recently pushed back by another 10 years, to 2034. The project’s budget has also been revised multiple times and it is currently expected to cost tens of billions of euros. One reason ITER is such an ambitious and costly project is its sheer size. ITER’s plasma radius of 6.2 m is twice that of the JT-60SA in Japan, the world’s current largest tokamak. The power generated by a tokamak roughly scales with the cube of the plasma radius, which means that doubling the radius should yield an eight-fold increase in power.
Small but mighty Tokamak Energy’s ST40 compact tokamak uses copper electromagnets, which would be unsuitable for long-term operation due to overheating. REBCO compounds, which are high-temperature superconductors that can generate very high magnetic fields, are an attractive alternative. (Courtesy: Tokamak Energy)
However, instead of chasing larger and larger tokamaks, some organizations are going in the opposite direction. Private companies like Tokamak Energy in the UK and Commonwealth Fusion Systems in the US are developing compact tokamaks that, they hope, could bring fusion power to the grid in the 2030s. Their approach is to ramp up the magnetic field rather than the size of the tokamak. The fusion power of a tokamak depends even more strongly on the magnetic field than on the radius, scaling with the fourth power of the field.
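Putting the two scalings quoted above together gives a rough rule of thumb (an illustrative combination, not a precise design formula):

$$P_{\text{fus}} \;\propto\; B^{4}R^{3} \quad\Rightarrow\quad \frac{P(2B,\,R/2)}{P(B,\,R)} \;=\; 2^{4}\times\left(\tfrac{1}{2}\right)^{3} \;=\; 2,$$

so a tokamak half the size could, in principle, match or exceed the fusion power of a larger machine if its magnets can sustain roughly twice the field.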
The drawback of smaller tokamaks is that the materials will sustain more damage from neutrons during operation. Of all the materials in the tokamak, the superconducting magnets are most sensitive to this. If the reactor is made more compact, they are also closer to the plasma and there will be less space for shielding. So if compact tokamaks are to succeed commercially, we need to choose superconducting materials that will be functional even after many years of irradiation.
1 Superconductors
Operation window for Nb-Ti, Nb3Sn and REBCO superconductors. (Courtesy: Susie Speller/IOP Publishing)
Superconductors are materials that have zero electrical resistance when they are cooled below a certain critical temperature (Tc). Superconducting wires can therefore carry electricity much more efficiently than conventional resistive metals like copper.
What’s more, a superconducting wire can carry a much higher current than a copper wire of the same diameter because it has zero resistance and so generates no heat. In contrast, as you pass ever more current through a copper wire, it heats up and its resistance rises even further, until eventually it melts. This higher current density (current per unit cross-sectional area) enables high-field superconducting magnets to be more compact than resistive ones.
However, there is an upper limit to the strength of the magnetic field that a superconductor can usefully tolerate without losing the ability to carry lossless current. This is known as the “irreversibility field”, and for a given superconductor its value decreases as temperature is increased, as shown above.
High-performance fusion materials
Superconductors are a class of materials that, when cooled below a characteristic temperature, conduct with no resistance (see box 1, above). Magnets made from superconducting wires can carry high currents without overheating, making them ideal for generating the very high fields required for fusion. Superconductivity is highly sensitive to the arrangement of the atoms; whilst some amorphous superconductors exist, most superconducting compounds only conduct high currents in a specific crystalline state. A few defects will always arise, and can sometimes even improve the material’s performance. But introducing significant disorder to a crystalline superconductor will eventually destroy its ability to superconduct.
The most common material for superconducting magnets is a niobium-titanium (Nb-Ti) alloy, which is used in hospital MRI machines and CERN’s Large Hadron Collider. Nb-Ti superconducting magnets are relatively cheap and easy to manufacture, but – like all superconducting materials – the alloy has an upper limit to the magnetic field in which it can superconduct, known as the irreversibility field. In Nb-Ti this value is too low for the material to be used for the high-field magnets in ITER. The ITER tokamak will instead use a niobium-tin (Nb3Sn) superconductor, which has a higher irreversibility field than Nb-Ti, even though it is much more expensive and challenging to work with.
2 REBCO unit cell
(Courtesy: redrawn from Wikimedia Commons/IOP Publishing)
The unit cell of a REBCO high-temperature superconductor. The pink atoms are copper, the red atoms are oxygen, the barium atoms are green and the rare-earth element – in this case yttrium – is blue.
Because they need stronger magnetic fields, compact tokamaks require a superconducting material with an even higher irreversibility field. Over the last decade, another class of superconducting materials, known as “REBCO”, has been proposed as an alternative. Short for rare earth barium copper oxide, these are a family of superconductors with the chemical formula REBa2Cu3O7, where RE is a rare-earth element such as yttrium, gadolinium or europium (see Box 2 “REBCO unit cell”).
REBCO compounds are high-temperature superconductors, which are defined as having transition temperatures above 77 K, meaning they can be cooled with liquid nitrogen rather than the more expensive liquid helium. REBCO compounds also have a much higher irreversibility field than niobium-tin, and so can sustain the high fields necessary for a small fusion reactor.
REBCO wires: Bendy but brittle
REBCO materials have attractive superconducting properties, but it is not easy to manufacture them into flexible wires for electromagnets. REBCO is a brittle ceramic so can’t be made into wires in the same way as ductile materials like copper or Nb-Ti, where the material is drawn through progressively smaller holes.
Instead, REBCO tapes are manufactured by coating metallic ribbons with a series of very thin ceramic layers, one of which is the superconducting REBCO compound. Ideally the REBCO would be a single crystal, but in practice it is made up of many small grains. The metal gives mechanical stability and flexibility, whilst the underlying ceramic “buffer” layers protect the REBCO from chemical reactions with the metal and act as a template for aligning the REBCO grains. This alignment is important because the boundaries between individual grains reduce the maximum current the wire can carry.
Another potential problem is that these compounds are chemically sensitive and are “poisoned” by nearly all the impurities that may be introduced during manufacture. These impurities can produce insulating compounds that block supercurrent flow or degrade the performance of the REBCO compound itself.
Despite these challenges, and thanks to impressive materials engineering from several companies and institutions worldwide, REBCO is now made in kilometre-long, flexible tapes capable of carrying thousands of amps of current. In 2024, more than 10,000 km of this material was manufactured for the burgeoning fusion industry. This is impressive given that only 1000 km was made in 2020. However, a single compact tokamak will require up to 20,000 km of this REBCO-coated conductor for the magnet systems, and because the superconductor is so expensive to manufacture it is estimated that this would account for a considerable fraction of the total cost of a power plant.
Pushing superconductors to the limit
Another problem with REBCO materials is that the temperature below which they superconduct falls steeply once they’ve been irradiated with neutrons. Their lifetime in service will depend on the reactor design and amount of shielding, but research from the Vienna University of Technology in 2018 suggested that REBCO materials can withstand about a thousand times less damage than structural materials like steel before they start to lose performance (Supercond. Sci. Technol. 31 044006).
These experiments are currently being used by the designers of small fusion machines to assess how much shielding will be required, but they don’t tell the whole story. The 2018 study used neutrons from a fission reactor, which have a different spectrum of energies compared to fusion neutrons. They also did not reproduce the environment inside a compact tokamak, where the superconducting tapes will be at cryogenic temperatures, carrying high currents and under considerable strain from Lorentz forces generated in the magnets.
Even if we could get a sample of REBCO inside a working tokamak, the maximum runtime of current machines is measured in minutes, meaning we cannot accumulate enough damage to test how susceptible the superconductor will be in a real fusion environment. The current record for fusion energy released in a single pulse is 69 megajoules, achieved in a 5-second burst at the Joint European Torus (JET) tokamak in the UK.
Given the difficulty of using neutrons from fusion reactors, our team is looking for answers using ions instead. Ion irradiation is much more readily available, quicker to perform, and doesn’t make the samples radioactive. It is also possible to access a wide range of energies and ion species to tune the damage mechanisms in the material. The trouble is that because ions are charged they won’t interact with materials in exactly the same way as neutrons, so it is not clear if these particles cause the same kinds of damage or by the same mechanisms.
To find out, we first tried to directly image the crystalline structure of REBCO after both neutron and ion irradiation using transmission electron microscopy (TEM). When we compared the samples, we saw small amorphous regions in the neutron-irradiated REBCO where the crystal structure was destroyed (J. Microsc. 286 3), which are not observed after light ion irradiation (see Box 3 below).
TEM images of REBCO before (a) and after (b) helium ion irradiation. The image on the right (c) shows only the positions of the copper, barium and rare-earth atoms – the oxygen atoms in the crystal lattice cannot be imaged using this technique. After ion irradiation, REBCO materials exhibit a lower superconducting transition temperature, yet the images show no corresponding defects in the lattice, indicating that defects involving oxygen atoms knocked out of place are responsible for this effect.
We believe these regions to be collision cascades generated initially by a single violent neutron impact that knocks an atom out of its place in the lattice with enough energy that the atom ricochets through the material, knocking other atoms from their positions. However, these amorphous regions are small, and superconducting currents should be able to pass around them, so it was likely that another effect was reducing the superconducting transition temperature.
Searching for clues
The TEM images didn’t show any other defects, so on our hunt to understand the effect of neutron irradiation, we instead thought about what we couldn’t see in the images. The TEM technique we used cannot resolve the oxygen atoms in REBCO because they are too light to scatter the electrons by large angles. Oxygen is also the most mobile atom in a REBCO material, which led us to think that oxygen point defects – single oxygen atoms that have been moved out of place and which are distributed randomly throughout the material – might be responsible for the drop in transition temperature.
In REBCO, the oxygen atoms are all bonded to copper, so the bonding environment of the copper atoms can be used to identify oxygen defects. To test this theory we switched from electrons to photons, using a technique called X-ray absorption spectroscopy. Here the sample is illuminated with X-rays that preferentially excite the copper atoms; the precise energies where absorption is highest indicate specific bonding arrangements, and therefore point to specific defects. We have started to identify the defects that are likely to be present in the irradiated samples, finding spectral changes that are consistent with oxygen atoms moving into unoccupied sites (Communications Materials 3 52).
We see very similar changes to the spectra when we irradiate with helium ions and neutrons, suggesting that similar defects are created in both cases (Supercond. Sci. Technol. 36 10LT01). This work has increased our confidence that light ions are a good proxy for neutron damage in REBCO superconductors, and that this damage is due to changes in the oxygen lattice.
The Surrey Ion Beam Centre allows users to carry out a wide variety of research using ion implantation, ion irradiation and ion beam analysis. (Courtesy: Surrey Ion Beam Centre)
Another advantage of ion irradiation is that, compared to neutrons, it is easier to access experimentally relevant cryogenic temperatures. Our experiments are performed at the Surrey Ion Beam Centre, where a cryocooler can be attached to the end of the ion accelerator, enabling us to recreate some of the conditions inside a fusion reactor.
We have shown that when REBCO is irradiated at cryogenic temperatures and then allowed to warm to room temperature, it recovers some of its superconducting properties (Supercond. Sci. Technol. 34 09LT01). We attribute this to annealing, where rearrangements of atoms occur in a material warmed below its melting point, smoothing out defects in the crystal lattice. We have shown that further recovery of a perfect superconducting lattice can be induced using careful heat treatments to avoid loss of oxygen from the samples (MRS Bulletin 48 710).
Lots more experiments are required to fully understand the effect of irradiation temperature on the degradation of REBCO. Our results indicate that room temperature and cryogenic irradiation with helium ions lead to a similar rate of degradation, but similar work by a group at the Massachusetts Institute of Technology (MIT) in the US using proton irradiation has found that the superconductor degrades more rapidly at cryogenic temperatures (Rev. Sci. Instrum. 95 063907). The effect of other critical parameters like magnetic field and strain also still needs to be explored.
Towards net zero
The remarkable properties of REBCO high-temperature superconductors present new opportunities for designing fusion reactors that are substantially smaller (and cheaper) than traditional tokamaks, and which private companies ambitiously promise will enable the delivery of power to the grid on vastly accelerated timescales. REBCO tape can already be manufactured commercially with the required performance, but more research is needed to understand the neutron damage the magnets will be subjected to, so that they achieve the desired service lifetimes.
Scale-up of REBCO tape production is already happening at pace, and it is expected that this will drive down the cost of manufacture. This would open up extensive new applications, not only in fusion but also in power applications such as lossless transmission cables, for which the historically high costs of the superconducting material have proved prohibitive. Superconductors are also being introduced into wind turbine generators, and magnet-based energy storage devices.
This symbiotic relationship between fusion and superconductor research could lead not only to the realization of clean fusion energy but also many other superconducting technologies that will contribute to the achievement of net zero.
Join us for an insightful webinar that delves into the role of Cobalt-60 in intracranial radiosurgery using Leksell Gamma Knife.
Through detailed discussions and expert insights, attendees will learn how Leksell Gamma Knife, powered by cobalt-60, has revolutionized – and continues to revolutionize – the field of radiosurgery, offering patients a safe and effective treatment option.
Participants will gain a comprehensive understanding of the use of cobalt in medical applications, highlighting its significance, and learn more about the unique properties of cobalt-60. The webinar will explore the benefits of cobalt-60 in intracranial radiosurgery and why it is an ideal choice for treating brain lesions while minimizing damage to surrounding healthy tissue.
Don’t miss this opportunity to enhance your knowledge and stay at the forefront of medical advancements in radiosurgery!
Riccardo Bevilacqua
Riccardo Bevilacqua, a nuclear physicist with a PhD in neutron data for Generation IV nuclear reactors from Uppsala University, has worked as a scientist for the European Commission and at various international research facilities. His career has transitioned from research to radiation safety and back to medical physics, the field that first interested him as a student in Italy. Based in Stockholm, Sweden, he leads global radiation safety initiatives at Elekta. Outside of work, Riccardo is a father, a stepfather, and writes popular science articles on physics and radiation.
Physicists in Austria have shown that the static electricity acquired by identical material samples can evolve differently over time, based on each sample’s history of contact with other samples. Led by Juan Carlos Sobarzo and Scott Waitukaitis at the Institute of Science and Technology Austria, the team hope that their experimental results could provide new insights into one of the oldest mysteries in physics.
Static electricity – also known as contact electrification or triboelectrification – has been studied for centuries. However, physicists still do not understand some aspects of how it works.
“It’s a seemingly simple effect,” Sobarzo explains. “Take two materials, make them touch and separate them, and they will have exchanged electric charge. Yet, the experiments are plagued by unpredictability.”
This mystery is epitomized by an early experiment carried out by the German-Swedish physicist Johan Wilcke in 1757. When glass was touched to paper, Wilcke found that the glass gained a positive charge – while when paper was touched to sulphur, the paper became positively charged.
Triboelectric series
Wilcke concluded that glass will become positively charged when touched to sulphur. This concept formed the basis of the triboelectric series, which ranks materials according to the charge they acquire when touched to another material.
Yet in the intervening centuries, the triboelectric series has proven to be notoriously inconsistent. Despite our vastly improved knowledge of material properties since the time of Wilcke’s experiments, even the latest attempts at ordering materials into triboelectric series have repeatedly failed to hold up to experimental scrutiny.
According to Sobarzo and colleagues, this problem has been confounded by the diverse array of variables associated with a material’s contact electrification. These include its electronic properties, pH, hydrophobicity and mechanochemistry, to name just a few.
In their new study, the team approached the problem from a new perspective. “In order to reduce the number of variables, we decided to use identical materials,” Sobarzo describes. “Our samples are made of a soft polymer (PDMS) that I fabricate myself in the lab, cut from a single piece of material.”
Starting from scratch
For these identical materials, the team proposed that triboelectric properties could evolve over time as the samples were brought into contact with other, initially identical samples. If this were the case, it would allow the team to build a triboelectric series from scratch.
At first, the results seemed as unpredictable as ever. However, as the same set of samples underwent repeated contacts, the team found that their charging behaviour became more consistent, gradually forming a clear triboelectric series.
Initially, the researchers attempted to uncover correlations between this evolution and variations in the parameters of each sample – with no conclusive results. This led them to consider whether the triboelectric behaviour of each sample was affected by the act of contact itself.
Contact history
“Once we started to keep track of the contact history of our samples – that is, the number of times each sample has been contacted to others – the unpredictability we saw initially started to make sense,” Sobarzo explains. “The more contacts samples would have in their history, the more predictable they would behave. Not only that, but a sample with more contacts in its history will consistently charge negative against a sample with less contacts in its history.”
To explain the origins of this history-dependent behaviour, the team used a variety of techniques to analyse differences between the surfaces of uncontacted samples, and those which had already been contacted several times. Their measurements revealed just one difference between samples at different positions on the triboelectric series. This was their nanoscale surface roughness, which smoothed out as the samples experienced more contacts.
“I think the main take away is the importance of contact history and how it can subvert the widespread unpredictability observed in tribocharging,” Sobarzo says. “Contact is necessary for the effect to happen, it’s part of the name ‘contact electrification’, and yet it’s been widely overlooked.”
The team is still uncertain of how surface roughness could be affecting their samples’ place within the triboelectric series. However, their results could now provide the first steps towards a comprehensive model that can predict a material’s triboelectric properties based on its contact-induced surface roughness.
Sobarzo and colleagues are hopeful that such a model could enable robust methods for predicting the charges which any given pair of materials will acquire as they touch each other and separate. In turn, it may finally help to provide a solution to one of the most long-standing mysteries in physics.
Nanoparticle-mediated DBS (I) Pulsed NIR irradiation triggers the thermal activation of TRPV1 channels. (II, III) NIR-induced β-syn peptide release into neurons disaggregates α-syn fibrils and thermally activates autophagy to clear the fibrils. This therapy effectively reverses the symptoms of Parkinson’s disease. Created using BioRender.com. (Courtesy: CC BY-NC/Science Advances 10.1126/sciadv.ado4927)
A photothermal, nanoparticle-based deep brain stimulation (DBS) system has successfully reversed the symptoms of Parkinson’s disease in laboratory mice. Under development by researchers in Beijing, China, the injectable, wireless DBS not only reversed neuron degeneration, but also boosted dopamine levels by clearing out the buildup of harmful fibrils around dopamine neurons. Following DBS treatment, diseased mice exhibited near comparable locomotive behaviour to that of healthy control mice.
Parkinson’s disease is a chronic brain disorder characterized by the degeneration of dopamine-producing neurons and the subsequent loss of dopamine in regions of the brain. Current DBS treatments focus on amplifying dopamine signalling and production, and may require permanent implantation of electrodes in the brain. Another approach under investigation is optogenetics, which involves gene modification. Both techniques increase dopamine levels and reduce Parkinsonian motor symptoms, but they do not restore degenerated neurons to stop disease progression.
Team leader Chunying Chen from the National Center for Nanoscience and Technology. (Courtesy: Chunying Chen)
The research team, at the National Center for Nanoscience and Technology of the Chinese Academy of Sciences, hypothesized that the heat-sensitive receptor TRPV1, which is highly expressed in dopamine neurons, could serve as a modulatory target to activate dopamine neurons in the substantia nigra of the midbrain. This region contains a large concentration of dopamine neurons and plays a crucial role in how the brain controls bodily movement.
Previous studies have shown that neuron degeneration is mainly driven by α-synuclein (α-syn) fibrils aggregating in the substantia nigra. Successful treatment, therefore, relies on removing this buildup, which requires restarting the intracellular autophagic process (in which a cell breaks down and removes unnecessary or dysfunctional components).
As such, principal investigator Chunying Chen and colleagues aimed to develop a therapeutic system that could reduce α-syn accumulation by simultaneously disaggregating α-syn fibrils and initiating the autophagic process. Their three-component DBS nanosystem, named ATB (Au@TRPV1@β-syn), combines photothermal gold nanoparticles, dopamine neuron-activating TRPV1 antibodies, and β-synuclein (β-syn) peptides that break down α-syn fibrils.
The ATB nanoparticles anchor to dopamine neurons through the TRPV1 receptor then, acting as nanoantennae, convert pulsed near-infrared (NIR) irradiation into heat. This activates the heat-sensitive TRPV1 receptor and restores degenerated dopamine neurons. At the same time, the nanoparticles release β-syn peptides that clear out α-syn fibril buildup and stimulate intracellular autophagy.
The researchers first tested the system in vitro in cellular models of Parkinson’s disease. They verified that under NIR laser irradiation, ATB nanoparticles activate neurons through photothermal stimulation by acting on the TRPV1 receptor, and that the nanoparticles successfully counteracted the α-syn preformed fibril (PFF)-induced death of dopamine neurons. In cell viability assays, neuron death was reduced from 68% to zero following ATB nanoparticle treatment.
Next, Chen and colleagues investigated mice with PFF-induced Parkinson’s disease. The DBS treatment begins with stereotactic injection of the ATB nanoparticles directly into the substantia nigra. They selected this approach over systemic administration because it provides precise targeting, avoids the blood–brain barrier and achieves a high local nanoparticle concentration with a low dose – potentially boosting treatment effectiveness.
Following injection of either nanoparticles or saline, the mice underwent pulsed NIR irradiation once a week for five weeks. The team then performed a series of tests to assess the animals’ motor abilities (after a week of training), comparing the performance of treated and untreated PFF mice, as well as healthy control mice. This included the rotarod test, which measures the time until the animal falls from a rotating rod that accelerates from 5 to 50 rpm over 5 min, and the pole test, which records the time for mice to crawl down a 75 cm-long pole.
Motor tests Results of (left to right) rotarod, pole and open field tests, for control mice, mice with PFF-induced Parkinson’s disease, and PFF mice treated with ATB nanoparticles and NIR laser irradiation. (Courtesy: CC BY-NC/Science Advances 10.1126/sciadv.ado4927)
The team also performed an open field test to evaluate locomotive activity and exploratory behaviour. Here, mice are free to move around a 50 x 50 cm area, while their movement paths and the number of times they cross a central square are recorded. In all tests, mice treated with nanoparticles and irradiation significantly outperformed untreated controls, with near comparable performance to that of healthy mice.
Visualizing the dopamine neurons via immunohistochemistry revealed a reduction in neurons in PFF-treated mice compared with controls. This loss was reversed following nanoparticle treatment. Safety assessments determined that the treatment did not cause biochemical toxicity and that the heat generated by the NIR-irradiated ATB nanoparticles did not cause any considerable damage to the dopamine neurons.
Eight weeks after treatment, none of the mice experienced any toxicities. The ATB nanoparticles remained stable in the substantia nigra, with only a few particles migrating to cerebrospinal fluid. The researchers also report that the particles did not migrate to the heart, liver, spleen, lung or kidney and were not found in blood, urine or faeces.
Chen tells Physics World that having discovered the neuroprotective properties of gold clusters in Parkinson’s disease models, the researchers are now investigating therapeutic strategies based on gold clusters. Their current research focuses on engineering multifunctional gold cluster nanocomposites capable of simultaneously targeting α-syn aggregation, mitigating oxidative stress and promoting dopamine neuron regeneration.
For the first time, inverse design has been used to engineer specific functionalities into a universal spin-wave-based device. It was created by Andrii Chumak and colleagues at Austria’s University of Vienna, who hope that their magnonic device could pave the way for substantial improvements to the energy efficiency of data processing techniques.
Inverse design is a fast-growing technique for developing new materials and devices that are specialized for highly specific uses. Starting from a desired functionality, inverse-design algorithms work backwards to find the best system or structure to achieve that functionality.
“Inverse design has a lot of potential because all we have to do is create a highly reconfigurable medium, and give it control over a computer,” Chumak explains. “It will use algorithms to get any functionality we want with the same device.”
One area where inverse design could be useful is creating systems for encoding and processing data using quantized spin waves called magnons. These quasiparticles are collective excitations that propagate in magnetic materials. Information can be encoded in the amplitude, phase, and frequency of magnons – which interact with radio-frequency (RF) signals.
Collective rotation
A magnon propagates by the collective rotation of stationary spins (no particles move) so it offers a highly energy-efficient way to transfer and process information. So far, however, such magnonics has been limited by existing approaches to the design of RF devices.
“Usually we use direct design – where we know how the spin waves behave in each component, and put the components together to get a working device,” Chumak explains. “But this sometimes takes years, and only works for one functionality.”
Recently, two theoretical studies considered how inverse design could be used to create magnonic devices. These took the physics of magnetic materials as a starting point to engineer a neural-network device.
Building on these results, Chumak’s team set out to show how that approach could be realized in the lab using a 7×7 array of independently-controlled current loops, each generating a small magnetic field.
Thin magnetic film
The team attached the array to a thin magnetic film of yttrium iron garnet. As RF spin waves propagated through the film, differences in the strengths of the magnetic fields generated by the loops induced a variety of effects, including phase shifts, interference and scattering. This in turn created complex patterns that could be tuned in real time by adjusting the current in each individual loop.
To make these adjustments, the researchers developed a pair of feedback-loop algorithms. These took a desired functionality as an input, and iteratively adjusted the current in each loop to optimize the spin wave propagation in the film for specific tasks.
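The team’s algorithms are not spelled out as code in the paper, but the basic idea of a feedback loop that nudges the 49 loop currents towards a target response can be sketched as follows. Everything in this snippet is a stand-in: the device_response function is a toy model rather than the real spin-wave physics, and the random-search update is just one simple way such an optimization could be performed.

```python
import numpy as np

rng = np.random.default_rng(0)
freqs = np.linspace(0.0, 1.0, 64)     # normalized frequency axis
weights = rng.normal(size=49)         # fixed, arbitrary coupling of each loop to the film

def device_response(currents):
    """Toy stand-in for the measured spin-wave transmission spectrum.
    In the experiment this would come from the 7x7 current-loop array and the
    yttrium iron garnet film; here it is just a smooth function of the currents."""
    phase = weights @ currents
    return 0.5 + 0.5 * np.cos(2 * np.pi * (3 * freqs + 0.05 * phase * freqs))

def cost(currents, target):
    """Mismatch between the simulated transmission and the desired functionality."""
    return np.mean((device_response(currents) - target) ** 2)

# Desired functionality: a crude notch filter that blocks a band of frequencies
target = np.ones(64)
target[24:40] = 0.0

currents = np.zeros(49)               # one current per loop in the 7x7 array
best = cost(currents, target)

# Feedback loop: perturb one loop current at a time and keep the change
# whenever the response moves closer to the target.
for step in range(20000):
    trial = currents.copy()
    i = rng.integers(49)
    trial[i] += rng.normal(scale=0.1)
    c = cost(trial, target)
    if c < best:
        currents, best = trial, c

print(f"final mismatch: {best:.4f}")
```

In the experiment, the equivalent of the cost evaluation is an actual microwave measurement on the device, with the algorithm updating the real loop currents between measurements.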
This approach enabled them to engineer two specific signal-processing functionalities in their device. These are a notch filter, which blocks a specific range of frequencies while allowing others to pass through; and a demultiplexer, which separates a combined signal into its distinct component signals. “These RF applications could potentially be used for applications including cellular communications, WiFi, and GPS,” says Chumak.
While the device is a success in terms of functionality, it has several drawbacks, explains Chumak. “The demonstrator is big and consumes a lot of energy, but it was important to understand whether this idea works or not. And we proved that it did.”
Through their future research, the team will now aim to reduce these energy requirements, and will also explore how inverse design could be applied more universally – perhaps paving the way for ultra-efficient magnonic logic gates.
A tense particle-physics showdown will reach new heights in 2025. Over the past 25 years researchers have seen a persistent and growing discrepancy between the theoretical predictions and experimental measurements of an inherent property of the muon – its anomalous magnetic moment. Known as the “muon g-2”, this property serves as a robust test of our understanding of particle physics.
Theoretical predictions of the muon g-2 are based on the Standard Model of particle physics (SM). This is our current best theory of fundamental forces and particles, but it does not agree with everything observed in the universe. While the tensions between g-2 theory and experiment have challenged the foundations of particle physics and potentially offer a tantalizing glimpse of new physics beyond the SM, it turns out that there is more than one way to make SM predictions.
In recent years, a new SM prediction of the muon g-2 has emerged that questions whether the discrepancy exists at all, suggesting that there is no new physics in the muon g-2. For the particle-physics community, the stakes are higher than ever.
Rising to the occasion?
To understand how this discrepancy in the value of the muon g-2 arises, imagine you’re baking some cupcakes. A well-known and trusted recipe tells you that by accurately weighing the ingredients using your kitchen scales you will make enough batter to give you 10 identical cupcakes of a given size. However, to your surprise, after portioning out the batter, you end up with 11 cakes of the expected size instead of 10.
What has happened? Maybe your scales are imprecise. You check and find that you’re confident that your measurements are accurate to 1%. This means each of your 10 cupcakes could be 1% larger than they should be, or you could have enough leftover mixture to make 1/10th of an extra cupcake, but there’s no way you should have a whole extra cupcake.
You repeat the process several times, always with the same outcome. The recipe clearly states that you should have batter for 10 cupcakes, but you always end up with 11. Not only do you now have a worrying number of cupcakes to eat but, thanks to all your repeated experiments, you’re more confident that you are following all the steps and measurements accurately. You start to wonder whether something is missing from the recipe itself.
Before you jump to conclusions, it’s worth checking that there isn’t something systematically wrong with your scales. You ask several friends to follow the same recipe using their own scales. Amazingly, when each friend follows the recipe, they all end up with 11 cupcakes. You are more sure than ever that the cupcake recipe isn’t quite right.
You’re really excited now, as you have corroborating evidence that something is amiss. This is unprecedented, as the recipe is considered sacrosanct. Cupcakes have never been made differently and if this recipe is incomplete there could be other, larger implications. What if all cake recipes are incomplete? These claims are causing a stir, and people are starting to take notice.
Food for thought Just as a trusted cake recipe can be relied on to produce reliable results, so the Standard Model has been incredibly successful at predicting the behaviour of fundamental particles and forces. However, there are instances where the Standard Model breaks down, prompting scientists to hunt for new physics that will explain this mystery. (Courtesy: iStock/Shutter2U)
Then, a new friend comes along and explains that they checked the recipe by simulating baking the cupcakes using a computer. This approach doesn’t need physical scales, but it uses the same recipe. To your shock, the simulation produces 11 cupcakes of the expected size, with a precision as good as when you baked them for real.
There is no explaining this. You were certain that the recipe was missing something crucial, but now a computer simulation is telling you that the recipe has always predicted 11 cupcakes.
Of course, one extra cupcake isn’t going to change the world. But what if instead of cake, the recipe was particle physics’ best and most-tested theory of everything, and the ingredients were the known particles and forces? And what if the number of cupcakes was a measurable outcome of those particles interacting, one hurtling towards a pivotal bake-off between theory and experiment?
What is the muon g-2?
Muons are elementary particles in the SM with half-integer spin; they are similar to electrons but some 207 times heavier. Muons interact directly with other SM particles via electromagnetism (photons), the weak force (W and Z bosons) and the Higgs boson. All quarks and leptons – such as electrons and muons – have a magnetic moment due to their intrinsic angular momentum or “spin”. Quantum theory dictates that the magnetic moment is related to the spin by a quantity known as the “g-factor”. Initially, this value was predicted to be g = 2 for both the electron and the muon.
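Written out explicitly (this is standard quantum mechanics rather than anything specific to the muon), the magnetic moment µ of a particle of charge q, mass m and spin S is µ = g(q/2m)S; the Dirac equation predicts g = 2 exactly for a point-like spin-1/2 particle.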
However, these calculations did not take into account the effects of “radiative corrections” – the continuous emission and re-absorption of short-lived “virtual particles” (see box) by the electron or muon – which increases g by about 0.1%. This seemingly minute difference is referred to as “anomalous g-factor”, aµ = (g – 2)/2. As well as the electromagnetic and weak interactions, the muon’s magnetic moment also receives contributions from the strong force, even though the muon does not itself participate in strong interactions. The strong contributions arise through the muon’s interaction with the photon, which in turn interacts with quarks. The quarks then themselves interact via the strong-force mediator, the gluon.
This effect, and any discrepancies, are of particular interest to physicists because the g-factor acts as a probe of the existence of other particles – both known particles such as electrons and photons, and other, as yet undiscovered, particles that are not part of the SM.
“Virtual” particles
(Courtesy: CERN)
The Standard Model of particle physics (SM) describes the basic building blocks – the particles and forces – of our universe. It includes the elementary particles – quarks and leptons – that make up all known matter as well as the force-carrying particles, or bosons, that influence the quarks and leptons. The SM also explains three of the four fundamental forces that govern the universe – electromagnetism, the strong force and the weak force. Gravity, however, is not adequately explained within the model.
“Virtual” particles arise from the universe’s underlying, non-zero background energy, known as the vacuum energy. Heisenberg’s uncertainty principle – in its energy–time form – means that a non-zero amount of energy can briefly be “borrowed”, allowing “something” to arise from “nothing”, provided that the “something” returns to “nothing” in a very short interval, before it can be observed. Therefore, at every point in space and time, virtual particles are rapidly created and annihilated.
The “g-factor” in muon g-2 represents the total value of the magnetic moment of the muon, including all corrections from the vacuum. If there were no virtual interactions, the muon’s g-factor would be exactly g = 2. The first confirmation of g > 2 came in 1948 when Julian Schwinger calculated the simplest contribution from a virtual photon interacting with an electron (Phys. Rev. 73 416). His famous result explained a measurement from the same year that found the electron’s g-factor to be slightly larger than 2 (Phys. Rev. 74 250). This confirmed the existence of virtual particles and paved the way for the invention of relativistic quantum field theories like the SM.
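For reference, Schwinger’s leading-order correction can be written in a single line: a = (g – 2)/2 = α/2π ≈ 0.00116, where α ≈ 1/137 is the fine-structure constant. This is the roughly 0.1% shift in g mentioned above; the full SM prediction adds many further electromagnetic, weak and strong-force contributions to this term.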
The muon, the (lighter) electron and the (heavier) tau lepton all have an anomalous magnetic moment. However, because the muon is heavier than the electron, the impact of heavy new particles on the muon g-2 is amplified. While tau leptons are even heavier than muons, tau leptons are extremely short-lived (muons have a lifetime of 2.2 μs, while the lifetime of tau leptons is 0.29 ns), making measurements impracticable with current technologies. Neither too light nor too heavy, the muon is the perfect tool to search for new physics.
New physics beyond the Standard Model (commonly known as BSM physics) is sorely needed because, despite its many successes, the SM does not provide the answers to all that we observe in the universe, such as the existence of dark matter. “We know there is something beyond the predictions of the Standard Model, we just don’t know where,” says Patrick Koppenburg, a physicist at the Dutch National Institute for Subatomic Physics (Nikhef) in the Netherlands, who works on the LHCb Experiment at CERN and on future collider experiments. “This new physics will provide new particles that we haven’t observed yet. The LHC collider experiments are actively searching for such particles but haven’t found anything to date.”
Testing the Standard Model: experiment vs theory
In 2021 the Muon g-2 experiment at Fermilab in the US captured the world’s attention with the release of its first result (Phys. Rev. Lett. 126 141801). It had directly measured the muon g-2 to an unprecedented precision of 460 parts per billion (ppb). While the LHC experiments attempt to produce and detect BSM particles directly, the Muon g-2 experiment takes a different, complementary approach – it compares precision measurements of particles with SM predictions to expose discrepancies that could be due to new physics. In the Muon g-2 experiment, muons travel round and round a circular ring, confined by a strong magnetic field. In this field, the muons precess like spinning tops (see image at the top of this article). The frequency of this precession is proportional to the anomalous magnetic moment, which can be extracted by detecting where and when the muons decay.
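In its simplest form (neglecting corrections from electric fields and the muons’ vertical motion, which the experiment also accounts for), the measured quantity is the difference between the spin-precession and cyclotron frequencies, ωa = aµ(eB/mµ), so precise measurements of ωa and of the magnetic field B yield aµ directly.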
Magnetic muons The Muon g-2 experiment at the Fermi National Accelerator Laboratory. (Courtesy: Reidar Hahn/Fermilab, US Department of Energy)
Muon g-2 is an awe-inspiring feat of science and engineering, involving more than 200 scientists from 35 institutions in seven countries. Having served as a manager and run co-ordinator, I have been involved in both the operation of the experiment and the analysis of its results. “A lot of my favourite memories from g-2 are ‘firsts’,” says Saskia Charity, a researcher at the University of Liverpool in the UK and a principal analyser of the Muon g-2 experiment’s results. “The first time we powered the magnet; the first time we stored muons and saw particles in the detectors; and the first time we released a result in 2021.”
The Muon g-2 result turned heads because the measured value was significantly higher than the best SM prediction (at that time) of the muon g-2 (Phys. Rep. 887 1). This SM prediction was the culmination of years of collaborative work by the Muon g-2 Theory Initiative, an international consortium of roughly 200 theoretical physicists (myself among them). In 2020 the collaboration published one community-approved number for the muon g-2. This value had a precision comparable to the Fermilab experiment – resulting in a deviation between the two that has a chance of 1 in 40,000 of being a statistical fluke – making the discrepancy all the more intriguing.
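To translate that 1-in-40,000 figure into the more familiar “sigma” language, one can invert a Gaussian tail probability; the short Python sketch below (assuming a two-sided convention) reproduces the roughly 4.2σ tension quoted in 2021.

```python
from scipy.stats import norm

p_fluke = 1 / 40_000           # quoted chance of the deviation being a statistical fluke
sigma = norm.isf(p_fluke / 2)  # equivalent number of standard deviations (two-sided)
print(f"tension: {sigma:.1f} sigma")  # prints roughly 4.2 sigma
```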
While much of the SM prediction, including contributions from virtual photons and leptons, can be calculated from first principles alone, the strong force contributions involving quarks and gluons are more difficult. However, there is a mathematical link between the strong force contributions to muon g-2 and the probability of experimentally producing hadrons (composite particles made of quarks) from electron–positron annihilation. These so-called “hadronic processes” are something we can observe with existing particle colliders; much like weighing cupcake ingredients, these measurements determine how much each hadronic process contributes to the SM correction to the muon g-2. This is the approach used to calculate the 2020 result, producing what is called a “data-driven” prediction.
Measurements were performed at many experiments, including the BaBar Experiment at the Stanford Linear Accelerator Center (SLAC) in the US, the BESIII Experiment at the Beijing Electron–Positron Collider II in China, the KLOE Experiment at DAFNE Collider in Italy, and the SND and CMD-2 experiments at the VEPP-2000 electron–positron collider in Russia. These different experiments measured a complete catalogue of hadronic processes in different ways over several decades. Myself and other members of the Muon g-2 Theory Initiative combined these findings to produce the data-driven SM prediction of the muon g-2. There was (and still is) strong, corroborating evidence that this SM prediction is reliable.
At the time, this discrepancy appeared to indicate, with a very high level of confidence, the existence of new physics. It seemed more likely than ever that BSM physics had finally been detected in a laboratory.
1 Eyes on the prize
(Courtesy: Muon g-2 collaboration/IOP Publishing)
Over the last two decades, direct experimental measurements of the muon g-2 have become much more precise. The predecessor to the Fermilab experiment was based at Brookhaven National Laboratory in the US, and when that experiment ended, the magnetic ring in which the muons are confined was transported to its current home at Fermilab.
That was until the release of the first SM prediction of the muon g-2 using an alternative method called lattice QCD (Nature 593 51). Like the data-driven prediction, lattice QCD is a way to tackle the tricky hadronic contributions, but it doesn’t use experimental results as a basis for the calculation. Instead, it treats the universe as a finite box containing a grid of points (a lattice) that represent points in space and time. Virtual quarks and gluons are simulated inside this box, and the results are extrapolated to a universe of infinite size and continuous space and time. This method requires a huge amount of computer power to arrive at an accurate, physical result but it is a powerful tool that directly simulates the strong-force contributions to the muon g-2.
The researchers who published this new result are also part of the Muon g-2 Theory Initiative. Several other groups within the consortium have since published lattice QCD calculations, producing values for g-2 that are in good agreement with each other and the experiment at Fermilab. “Striking agreement, to better than 1%, is seen between results from multiple groups,” says Christine Davies of the University of Glasgow in the UK, a member of the High-precision lattice QCD (HPQCD) collaboration within the Muon g-2 Theory Initiative. “A range of methods have been developed to improve control of uncertainties meaning further, more complete, lattice QCD calculations are now appearing. The aim is for several results with 0.5% uncertainty in the near future.”
If these lattice QCD predictions are the true SM value, there is no muon g-2 discrepancy between experiment and theory. However, this would conflict with the decades of experimental measurements of hadronic processes that were used to produce the data-driven SM prediction.
To make the situation even more confusing, a new experimental measurement of the muon g-2’s dominant hadronic process was released in 2023 by the CMD-3 experiment (Phys. Rev. D 109 112002). This result is significantly larger than all the other, older measurements of the same process, including its own predecessor experiment, CMD-2 (Phys. Lett. B 648 28). With this new value, the data-driven SM prediction of aµ = (g – 2)/2 is in agreement with the Muon g-2 experiment and lattice QCD. Over the last few years, the CMD-3 measurements (and all older measurements) have been scrutinized in great detail, but the source of the difference between the measurements remains unknown.
2 Which Standard Model?
(Courtesy: Alex Keshavarzi/IOP Publishing)
Summary of the four values of the anomalous magnetic moment of the muon aμ that have been obtained from different experiments and models. The 2020 and CMD-3 predictions were both obtained using a data-driven approach. The lattice QCD value is a theoretical prediction and the Muon g-2 experiment value was measured at Fermilab in the US. The positions of the points with respect to the y axis have been chosen for clarity only.
Since then, the Muon g-2 experiment at Fermilab has confirmed and improved on that first result to a precision of 200 ppb (Phys. Rev. Lett. 131 161802). “Our second result based on the data from 2019 and 2020 has been the first step in increasing the precision of the magnetic anomaly measurement,” says Peter Winter of Argonne National Laboratory in the US and co-spokesperson for the Muon g-2 experiment.
The new result is in full agreement with the SM predictions from lattice QCD and the data-driven prediction based on CMD-3’s measurement. However, with the increased precision, it now disagrees with the 2020 SM prediction by even more than in 2021.
The community therefore faces a conundrum. The muon g-2 either exhibits a much-needed discovery of BSM physics or a remarkable, multi-method confirmation of the Standard Model.
On your marks, get set, bake!
In 2025 the Muon g-2 experiment at Fermilab will release its final result. “It will be exciting to see our final result for g-2 in 2025 that will lead to the ultimate precision of 140 parts-per-billion,” says Winter. “This measurement of g-2 will be a benchmark result for years to come for any extension to the Standard Model of particle physics.” Assuming this agrees with the previous results, it will further widen the discrepancy with the 2020 data-driven SM prediction.
For the lattice QCD SM prediction, the many groups calculating the muon’s anomalous magnetic moment have since corroborated and improved the precision of the first lattice QCD result. Their next task is to combine the results from the various lattice QCD predictions to arrive at one SM prediction from lattice QCD. While this is not a trivial task, the agreement between the groups means a single lattice QCD result with improved precision is likely within the next year, increasing the tension with the 2020 data-driven SM prediction.
New, robust experimental measurements of the muon g-2’s dominant hadronic processes are also expected over the next couple of years. The previous experiments will update their measurements with more precise results and a newcomer measurement is expected from the Belle-II experiment in Japan. It is hoped that they will confirm either the catalogue of older hadronic measurements or the newer CMD-3 result. Should they confirm the older data, the potential for new physics in the muon g-2 lives on, but the discrepancy with the lattice QCD predictions will still need to be investigated. If the CMD-3 measurement is confirmed, it is likely the older data will be superseded, and the muon g-2 will have once again confirmed the Standard Model as the best and most resilient description of the fundamental nature of our universe.
International consensus The Muon g-2 Theory Initiative pictured at their seventh annual plenary workshop at the KEK Laboratory, Japan in September 2024. (Courtesy: KEK-IPNS)
The task before the Muon g-2 Theory Initiative is to solve these dilemmas and update the 2020 data-driven SM prediction. Two new publications are planned. The first will be released in 2025 (to coincide with the new experimental result from Fermilab). This will describe the current status and ongoing body of work, but a full, updated SM prediction will have to wait for the second paper, likely to be published several years later.
It’s going to be an exciting few years. Being part of both the experiment and the theory means I have been privileged to see the process from both sides. For the SM prediction, much work is still to be done but science with this much at stake cannot be rushed and it will be fascinating work. I’m looking forward to the journey just as much as the outcome.
Using an observatory located deep beneath the Mediterranean Sea, an international team has detected an ultrahigh-energy cosmic neutrino with an energy greater than 100 PeV, which is well above the previous record. Made by the KM3NeT neutrino observatory, such detections could enhance our understanding of cosmic neutrino sources or reveal new physics.
“We expect neutrinos to originate from very powerful cosmic accelerators that also accelerate other particles, but which have never been clearly identified in the sky. Neutrinos may provide the opportunity to identify these sources,” explains Paul de Jong, a professor at the University of Amsterdam and spokesperson for the KM3NeT collaboration. “Apart from that, the properties of neutrinos themselves have not been studied as well as those of other particles, and further studies of neutrinos could open up possibilities to detect new physics beyond the Standard Model.”
Neutrinos are subatomic particles with masses less than a millionth that of the electron. They are electrically neutral and interact rarely with matter via the weak force. As a result, neutrinos can travel vast cosmic distances without being deflected by magnetic fields or being absorbed by interstellar material. “[This] makes them very good probes for the study of energetic processes far away in our universe,” de Jong explains.
Scientists expect high-energy neutrinos to come from powerful astrophysical accelerators – objects that are also expected to produce high-energy cosmic rays and gamma rays. These objects include active galactic nuclei powered by supermassive black holes, gamma-ray bursts, and other extreme cosmic events. However, pinpointing such accelerators remains challenging because their cosmic rays are deflected by magnetic fields as they travel to Earth, while their gamma rays can be absorbed on their journey. Neutrinos, however, move in straight lines and this makes them unique messengers that could point back to astrophysical accelerators.
Underwater detection
Because they rarely interact, neutrinos are studied using large-volume detectors. The largest observatories use natural environments such as deep water or ice, which are shielded from most background noise including cosmic rays.
The KM3NeT observatory is situated on the Mediterranean seabed, with detectors more than 2000 m below the surface. Occasionally, a high-energy neutrino will collide with a water molecule, producing a secondary charged particle. This particle moves faster than the speed of light in water, creating a faint flash of Cherenkov radiation. The detector’s array of optical sensors capture these flashes, allowing researchers to reconstruct the neutrino’s direction and energy.
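The physics behind those flashes is textbook Cherenkov radiation (not specific to KM3NeT): a charged particle radiates only if its speed exceeds the phase velocity of light in the medium, c/n, and the light is emitted on a cone at an angle θ given by cos θ = 1/(nβ), where β is the particle’s speed as a fraction of c. In sea water, with a refractive index of roughly n ≈ 1.34, the threshold is β > 0.75 or so and relativistic particles emit at about 42°; the arrival times of these cone-shaped wavefronts at the optical sensors are what make the reconstruction possible.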
KM3NeT has already identified many high-energy neutrinos, but in 2023 it detected a neutrino with an energy far in excess of any previously detected cosmic neutrino. Now, analysis by de Jong and colleagues puts this neutrino’s energy at about 30 times higher than that of the previous record-holder, which was spotted by the IceCube observatory at the South Pole. “It is a surprising and unexpected event,” he says.
Scientists suspect that such a neutrino could originate from the most powerful cosmic accelerators, such as blazars. The neutrino could also be cosmogenic, being produced when ultra-high-energy cosmic rays interact with the cosmic microwave background radiation.
New class of astrophysical messengers
While this single neutrino has not been traced back to a specific source, it opens the possibility of studying ultrahigh-energy neutrinos as a new class of astrophysical messengers. “Regardless of what the source is, our event is spectacular: it tells us that either there are cosmic accelerators that result in these extreme energies, or this could be the first cosmogenic neutrino detected,” de Jong noted.
Neutrino experts not associated with KM3NeT agree on the significance of the observation. Elisa Resconi at the Technical University of Munich tells Physics World, “This discovery confirms that cosmic neutrinos extend to unprecedented energies, suggesting that somewhere in the universe, extreme astrophysical processes – or even exotic phenomena like decaying dark matter – could be producing them.”
Francis Halzen at the University of Wisconsin-Madison, who is IceCube’s principal investigator, adds, “Observing neutrinos with a million times the energy of those produced at Fermilab (ten million for the KM3NeT event!) is a great opportunity to reveal the physics beyond the Standard Model associated with neutrino mass.”
With ongoing upgrades to KM3NeT and other neutrino observatories, scientists hope to detect more of these rare but highly informative particles, bringing them closer to answering fundamental questions in astrophysics.
Resconi explains, “With a global network of neutrino telescopes, we will detect more of these ultrahigh-energy neutrinos, map the sky in neutrinos, and identify their sources. Once we do, we will be able to use these cosmic messengers to probe fundamental physics in energy regimes far beyond what is possible on Earth.”
Researchers led by Denis Bartolo, a physicist at the École Normale Supérieure (ENS) of Lyon, France, have constructed a theoretical model that forecasts the movements of confined, densely packed crowds. The study could help predict potentially life-threatening crowd behaviour in confined environments.
To investigate what makes some confined crowds safe and others dangerous, Bartolo and colleagues – also from the Université Claude Bernard Lyon 1 in France and the Universidad de Navarra in Pamplona, Spain – studied the Chupinazo opening ceremony of the San Fermín Festival in Pamplona in four different years (2019, 2022, 2023 and 2024).
The team analysed high-resolution video captured from two locations above the gathering of around 5000 people as the crowd grew in the 50 x 20 m city plaza: swelling from two to six people per square metre, and ultimately peaking at local densities of nine per square metre. A machine-learning algorithm enabled automated detection of the position of each person’s head, from which the localized crowd density was then calculated.
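The paper’s full pipeline is not reproduced here, but the step from detected head positions to a local density map can be sketched with a simple grid count. In the Python snippet below, the head coordinates are randomly generated stand-ins and the 1 m cell size is an arbitrary choice, not a value taken from the study.

```python
import numpy as np

# Hypothetical head positions (in metres) in the 50 x 20 m plaza; the real
# coordinates would come from the machine-learning detection step described above.
rng = np.random.default_rng(1)
heads = np.column_stack([rng.uniform(0, 50, 5000), rng.uniform(0, 20, 5000)])

# Count heads in 1 m x 1 m cells to obtain a local density map (people per m^2)
cell = 1.0
counts, _, _ = np.histogram2d(
    heads[:, 0], heads[:, 1],
    bins=[int(50 / cell), int(20 / cell)],
    range=[[0, 50], [0, 20]],
)
density = counts / cell**2

print(f"mean density: {density.mean():.1f} per m^2, local peak: {density.max():.0f} per m^2")
```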
“The Chupinazo is an ideal experimental platform to study the spontaneous motion of crowds, as it repeats from one year to the next with approximately the same amount of people, and the geometry of the plaza remains the same,” says theoretical physicist Benjamin Guiselin, a study co-author formerly from ENS Lyon and now at the Université de Montpellier.
In a first for crowd studies, the researchers treated the densely packed crowd as a continuum like water, and “constructed a mechanics theory for the crowd movement without making any behavioural assumptions on the motion of individuals,” Guiselin tells Physics World.
Their studies, recently described in Nature, revealed a change in behaviour akin to a phase change when the crowd density passed a critical threshold of four individuals per square metre. Below this density the crowd remained relatively inactive. But above that threshold it started moving, exhibiting localized oscillations that were periodic over about 18 s, and occurred without any external guiding such as corralling.
Unlike a back-and-forth oscillation, this motion – which involves hundreds of people moving over several metres – has an almost circular trajectory that shows chirality (or handedness) and a 50:50 chance of turning to either the right or left. “Our model captures the fact that the chirality is not fixed. Instead it emerges in the dynamics: the crowd spontaneously decides between clockwise or counter-clockwise circular motion,” explains Guiselin, who worked on the mathematical modelling.
“The dynamics is complicated because if the crowd is pushed, then it will react by creating a propulsion force in the direction in which it is pushed: we’ve called this the windsock effect. But the crowd also has a resistance mechanism, a counter-reactive effect, which is a propulsive force opposite to the direction of motion: what we have called the weathercock effect,” continues Guiselin, adding that it is these two competing mechanisms in conjunction with the confined situation that gives rise to the circular oscillations.
The team observed similar oscillations in footage of the 2010 tragedy at the Love Parade music festival in Duisburg, Germany, in which 21 people died and several hundred were injured during a crush.
Early results suggest that the oscillation period for such crowds is proportional to the size of the space they are confined in. But the team want to test their theory at other events, and learn more about both the circular oscillations and the compression waves they observed when people started pushing their way into the already crowded square at the Chupinazo.
If their model is proven to work for all densely packed, confined crowds, it could in principle form the basis for a crowd management protocol. “You could monitor crowd motion with a camera, and as soon as you detect these oscillations emerging try to evacuate the space, because we see these oscillations well before larger amplitude motions set in,” Guiselin explains.
Scientists across the US have been left reeling after a spate of executive orders from US President Donald Trump has led to research funding being slashed, staff being told to quit and key programmes being withdrawn. In response to the orders, government departments and external organizations have axed diversity, equity and inclusion (DEI) programmes, scrubbed mentions of climate change from websites, and paused research grants pending tests for compliance with the new administration’s goals.
Since taking office on 20 January, Trump has signed dozens of executive orders. One ordered the closure of the US Agency for International Development, which has supported medical and other missions worldwide for more than six decades. The administration said it was withdrawing almost all of the agency’s funds and wanted to sack its entire workforce. A federal judge has temporarily blocked the plans, saying they may violate the US constitution, which reserves decisions on funding to Congress.
Individual science agencies are under threat too. Politico reported that the Trump administration has asked the National Science Foundation (NSF), which funds much US basic and applied research, to lay off between a quarter and a half of its staff in the next two months. Another report suggests there are plans to cut the agency’s annual budget from roughly $9bn to $3bn. Meanwhile, former officials of the National Oceanic and Atmospheric Administration (NOAA) told CBS News that half its staff could be sacked and its budget slashed by 30%.
Even before they had learnt of plans to cut its staff and budget, officials at the NSF were starting to examine details of thousands of grants it had awarded for references to DEI, climate change and other topics that Trump does not like. The swiftness of the announcements has caused chaos, with recipients of grants suddenly finding themselves unable to access the NSF’s award cash management service, which holds grantees’ funds, including their salaries.
NSF bosses have taken some steps to reassure grantees. “Our top priority is resuming our funding actions and services to the research community and our stakeholders,” NSF spokesperson Mike England told Physics World in late January. In what is a highly fluid situation, there was some respite on 2 February when the NSF announced that access had been restored, with the system able to accept payment requests.
“Un-American” actions
Trump’s anti-DEI orders have caused shockwaves throughout US science. According to 404 Media, NASA staff were told on 22 January to “drop everything” to remove mentions of DEI, Indigenous people, environmental justice and women in leadership from public websites. Another victim has been NASA’s Here to Observe programme, which links undergraduates from under-represented groups with scientists who oversee NASA’s missions. Science reported that contracts for half the scientists involved in the programme had been cancelled by the end of January.
It is still unclear, however, what impact the Trump administration’s DEI rules will have on the make-up of NASA’s astronaut corps. Since choosing its first female astronaut in 1978, NASA has sought to make the corps more representative of US demographics. How exactly the agency should move forward will fall to Jared Isaacman, the space entrepreneur and commercial astronaut who has been nominated as NASA’s next administrator.
Anti-DEI initiatives have hit individual research labs too. Physics World understands that Fermilab – the US’s premier particle-physics lab – suspended its DEI office and its women in engineering group in January. Meanwhile, the Fermilab LGBTQ+ group, called Spectrum, was ordered to cease all activities and its mailing list was deleted. Even the rainbow “Pride” flag was removed from the lab’s iconic Wilson Hall.
There was also some confusion when the American Chemical Society appeared to have removed its webpage on diversity and inclusion; in fact, the society had published a new page and failed to put a redirect in place. “Inclusion and Belonging is a core value of the American Chemical Society, and we remain committed to creating environments where people from diverse backgrounds, cultures, perspectives and experiences thrive,” a spokesperson told Physics World. “We know the broken link caused confusion and some alarm, and we apologize.”
Dismantling all federal DEI programmes and related activities will damage lives and careers of millions of American women and men
Neal Lane, Rice University
Such a response – which some opponents denounce as going beyond what is legally required for fear of repercussions if no action is taken – has left it up to individual leaders to underline the importance of diversity in science. Neal Lane, a former science adviser to President Clinton, told Physics World that “dismantling all federal DEI programmes and related activities will damage lives and careers of millions of American women and men, including scientists, engineers, technical workers – essentially everyone who contributes to advancing America’s global leadership in science and technology”.
Lane, who is now a science and technology policy fellow at Rice University in Texas, thinks that the new administration’s anti-DEI actions “will weaken the US” and believes they should be considered “un-American”. “The purpose of DEI policies, programmes and activities is to ensure all Americans have the opportunity to participate and the country is able to benefit from their participation,” he says.
One senior physicist at a US university, who wishes to remain anonymous, told Physics World that those behind the executive orders are relying on institutions and individuals to “comply in advance” with what they perceive to be the spirit of the orders. “They are relying on people to ignore the fine print, which says that executive orders can’t and don’t overwrite existing law. But it is up to scientists to do the reading — and to follow our consciences. More than universities are on the line: the lives of our students and colleagues are on the line.”
Education turmoil
Another target of the Trump administration is the US Department of Education, which was set up in 1979 to oversee everything from pre-school to postgraduate education. It has already put dozens of its civil servants on leave, ostensibly because their work involves DEI issues. Meanwhile, the withholding of funds has led to the cancellation of scientific meetings, mostly focusing on medicine and life sciences, that were scheduled in the US for late January and early February.
Colleges and universities in the US have also reacted to Trump’s anti-DEI executive order. Academic divisions at Harvard University and the Massachusetts Institute of Technology, for example, have already indicated that they will no longer require applicants for jobs to indicate how they plan to advance the goals of DEI. Northeastern University in Boston has removed the words “diversity” and “inclusion” from a section of its website.
Not all academic organizations have fallen into line, however. Danielle Holly, president of the women-only Mount Holyoke College in South Hadley, Massachusetts, says it will forgo contracts with the federal government if they require it to abolish DEI work. “We obviously can’t enter into contracts with people who don’t allow DEI work,” she told the Boston Globe. “So for us, that wouldn’t be an option.”
Climate concerns
The Environmental Protection Agency (EPA) is also under fire from an administration that doubts the reality of climate change and opposes anti-pollution laws. Trump administration representatives were taking action even before the Senate approved Lee Zeldin, a former Republican Congressman from New York who has criticized much environmental legislation, as EPA Administrator. They removed all outside advisers on the EPA’s scientific advisory board and its clean air scientific advisory committee – purportedly to “depoliticize” the boards.
Once the Senate approved Zeldin on 29 January, the EPA sent an e-mail warning more than 1000 probationary employees who had spent less than a year in the agency that their roles could be “terminated” immediately. Then, according to the New York Times, the agency developed plans to demote longer-term employees who have overseen research, enforcement of anti-pollution laws, and clean-ups of hazardous waste. According to Inside Climate News, staff also found their individual pronouns scrubbed from their e-mails and websites without their permission – the result of an order to remove “gender ideology extremism”.
Critics have also questioned the nomination of Neil Jacobs to lead NOAA. He was its acting head during Trump’s first term in office, serving during the 2019 “Sharpiegate” affair, when Trump used a Sharpie pen to alter a NOAA weather map to indicate that Hurricane Dorian would affect Alabama. While conceding Jacobs’s experience and credentials, Rachel Cleetus of the Union of Concerned Scientists asserts that Jacobs is “unfit to lead” given that he “fail[ed] to uphold scientific integrity at the agency”.
Spending cuts
Another concern for scientists is the quasi-official team led by “special government employee” and SpaceX founder Elon Musk. The administration has charged Musk and his so-called “department of government efficiency”, or DOGE, with identifying significant cuts to government spending. Though some of DOGE’s activities have been blocked by US courts, agencies have nevertheless been left scrambling for ways to reduce day-to-day costs.
The National Institutes of Health (NIH), for example, has said it will significantly reduce its funding for “indirect” costs of research projects it supported – the overheads that, for example, cover the cost of maintaining laboratories, administering grants, and paying staff salaries. Under the plans, indirect cost reimbursement for federally funded research would be capped at 15%, a drastic cut from its usual range.
NIH personnel have tried to put a positive gloss on its actions. “The United States should have the best medical research in the world,” a statement from NIH declared. “It is accordingly vital to ensure that as many funds as possible go towards direct scientific research costs rather than administrative overhead.”
Just because Elon Musk doesn’t understand indirect costs doesn’t mean Americans should have to pay the price with their lives
US senator Patty Murray
Opponents of the Trump administration, however, are unconvinced. They argue that the measure will imperil critical clinical research because many academic recipients of NIH funds do not have the endowments to compensate for the losses. “Just because Elon Musk doesn’t understand indirect costs doesn’t mean Americans should have to pay the price with their lives,” says US senator Patty Murray, a Democrat from Washington state.
Slashing universities’ share of grants to below 15% could, however, force institutions to make up the lost income by raising tuition fees, which could “go through the roof”, according to the anonymous senior physicist contacted by Physics World. “Far from being a populist policy, these cuts to overheads are an attack on the subsidies that make university education possible for students from a range of socioeconomic backgrounds. The alternative is to essentially shut down the university research apparatus, which would in many ways be the death of American scientific leadership and innovation.”
Musk and colleagues have also gained unprecedented access to government websites related to civil servants and the country’s entire payments system. That access has drawn criticism from several commentators who note that, since Musk is a recipient of significant government support through his SpaceX company, he could use the information for his own advantage.
“Musk has access to all the data on federal research grantees and contractors: social security numbers, tax returns, tax payments, tax rebates, grant disbursements and more,” wrote physicist Michael Lubell from City College of New York. “Anyone who depends on the federal government and doesn’t toe the line might become a target. This is right out of (Hungarian prime minister) Viktor Orbán’s playbook.”
A new ‘dark ages’
As for the long-term impact of these changes, James Gates – a theoretical physicist at the University of Maryland and a past president of the US National Society of Black Physicists – is blunt. “My country is in for a 50-year period of a new dark ages,” he told an audience at the Royal College of Art in London, UK, on 7 February.
My country is in for a 50-year period of a new dark ages
James Gates, University of Maryland
Speaking at an event sponsored by the college’s association for Black students – RCA BLK – and supported by the UK’s organization for Black physicists, the Blackett Lab Family, he pointed out that the US has been through such periods before. As examples, Gates cited the 1950s “Red Scare” and the period after 1876 when the federal government abandoned efforts to enforce the civil rights of Black Americans in southern states and elsewhere.
However, he is not entirely pessimistic. “Nothing is permanent in human behaviour. The question is the timescale,” Gates said. “There will be another dawn, because that’s part of the human spirit.”
With additional reporting by Margaret Harris, online editor of Physics World, in London and Michael Banks, news editor of Physics World