America has a once-in-a-generation opportunity to open the space frontier by initiating a sustained program of human exploration of Mars. Elon Musk’s SpaceX Starship launch system may soon be operational, offering payload delivery capability comparable to a Saturn V moon rocket at about 5% the cost. President Trump has announced that he plans on sending […]
SALT LAKE CITY — French propulsion startup ThrustMe has high expectations for an iodine-fueled Hall-effect thruster set to launch in 2026 on a Marble Imaging satellite built by Reflex Aerospace. “We expect to revolutionize the market,” Dmytro Rafalskyi, ThrustMe chief technology officer, told SpaceNews. Since ThrustMe announced plans in July to conduct an in-orbit demonstration […]
SALT LAKE CITY — Telespazio Germany announced plans Aug. 12 to enhance its EASE-Rise mission management platform with Digantara space situational awareness (SSA) services and Intella artificial intelligence tools. “Through these new partnerships, we are adding more capability for our users: intelligence and advanced flight dynamics and collision avoidance,” Stewart Hall, Telespazio Germany sales director, […]
The space scientist Michele Dougherty from Imperial College London has been appointed the next Astronomer Royal – the first woman to hold the position. She will succeed the University of Cambridge cosmologist Martin Rees, who has held the role for the past three decades.
The title of Astronomer Royal dates back to the creation of the Royal Observatory in Greenwich in 1675, when it mostly involved advising Charles II on using the stars to improve navigation at sea. John Flamsteed from Derby was the first Astronomer Royal and since then 15 people have held the role.
Dougherty will now act as the official adviser to King Charles III on astronomical matters. She will hold the role alongside her Imperial job as well as being executive chair of the Science and Technology Facilities Council and the next president of the Institute of Physics (IOP), a two-year position she will take up in October.
After gaining a PhD in 1988 from the University of Natal in South Africa, Dougherty moved to Imperial in 1991, where she was head of physics from 2018 until 2024. She has been principal investigator of the magnetometer on the Cassini-Huygens mission to Saturn and its moons and also for the magnetometer for the JUICE craft, which is currently travelling to Jupiter to study its three icy moons.
She was made Commander of the Order of the British Empire in the 2018 New Year Honours for “services to UK Physical Science Research”. A fellow of the Royal Society, Dougherty won its Hughes Medal in 2008 for studying Saturn’s moons and held a Royal Society Research Professorship from 2014 to 2019.
“I am absolutely delighted to be taking on the important role of Astronomer Royal,” says Dougherty. “As a young child I never thought I’d end up working on planetary spacecraft missions and science, so I can’t quite believe I’m actually taking on this position. I look forward to engaging the general public in how exciting astronomy is, and how important it and its outcomes are to our everyday life.”
Tom Grinyer, IOP group chief executive officer, offered his “warmest congratulations” to Dougherty. “As incoming president of the IOP and the first woman to hold this historic role [of Astronomer Royal], Dougherty is an inspirational ambassador for science and a role model for every young person who has gazed up at the stars and imagined a future in physics or astronomy.”
The clash between dark matter and modified Newtonian dynamics (MOND) can get a little heated at times. On one side is the vast majority of astronomers who vigorously support the concept of dark matter and its foundational place in cosmology’s standard model. On the other side is the minority – a group of rebels convinced that tweaking the laws of gravity rather than introducing a new particle is the answer to explaining the composition of our universe.
Both sides argue passionately and persuasively, pointing out evidence that supports their view while discrediting the other side. Often it seems to come down to a matter of perspective – both sides use the same results as evidence for their cause. For the rest of us, how can we tell who is correct?
As long as we still haven’t identified what dark matter is made of, there will remain some ambiguity, leaving a door ajar for MOND. However, it’s a door that dark-matter researchers hope will be slammed shut in the not-too-distant future.
Crunch time for WIMPs
In part two of this series, where I looked at the latest proposals from dark-matter scientists, we met University College London’s Chamkaur Ghag, who is the spokesperson for Lux-ZEPLIN. This experiment is searching for “weakly interacting massive particles” or WIMPs – the leading dark-matter candidate – down a former gold mine in South Dakota, US. A huge seven-tonne tank of liquid xenon, surrounded by an array of photomultiplier tubes, watches patiently for the flashes of light that may occur when a passing WIMP interacts with a xenon atom.
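To get a sense of why these signals are so feeble, it is worth running the numbers on a single collision. The short calculation below is only a rough sketch – it assumes a benchmark 100 GeV/c² WIMP travelling at a typical galactic speed of about 230 km/s, illustrative values rather than anything measured by Lux-ZEPLIN – and uses the standard elastic-scattering result that the maximum energy transferred to a nucleus of mass mN is 2μ²v²/mN, where μ is the WIMP–nucleus reduced mass.

```python
# Rough estimate of the maximum recoil energy a xenon nucleus picks up when a
# WIMP scatters off it elastically. Benchmark values only: a 100 GeV/c^2 WIMP
# at ~230 km/s is a common illustrative choice, not a Lux-ZEPLIN measurement.

C = 299_792_458.0        # speed of light (m/s)

m_wimp = 100e9           # assumed WIMP mass, in eV/c^2
m_xe = 122e9             # xenon nucleus mass, in eV/c^2 (A ~ 131)
v = 230e3                # assumed WIMP speed relative to the detector (m/s)

mu = m_wimp * m_xe / (m_wimp + m_xe)    # WIMP-nucleus reduced mass (eV/c^2)
beta_sq = (v / C) ** 2

# Maximum recoil energy for elastic scattering: E_R,max = 2 * mu^2 * v^2 / m_N
e_recoil_max_ev = 2 * mu**2 * beta_sq / m_xe

print(f"Maximum xenon recoil energy: {e_recoil_max_ev / 1e3:.0f} keV")
# -> roughly 30 keV: a tiny deposit, detectable only as a faint flash of light
```

A recoil of a few tens of kiloelectronvolts, shared between scintillation light and ionization, is all the experiment has to work with – hence the photomultiplier tubes watching for the faintest of flashes.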
Running since 2021, the experiment just released the results of its most recent search through 280 days of data, which uncovered no evidence of WIMPs above a mass of 9 GeV/c² (Phys. Rev. Lett. 135 011802). These results help to narrow the range of possible dark-matter theories, as the new limits impose constraints on WIMP parameters that are almost five times more rigorous than the previous best. Another experiment at the INFN Laboratori Nazionali del Gran Sasso in Italy, called XENONnT, is also hoping to spot the elusive WIMPs – in its case by looking for rare nuclear recoil interactions in a liquid xenon target chamber.
Deep underground The XENON Dark Matter Project is hosted by the INFN Gran Sasso National Laboratory in Italy. The latest detector in this programme is the XENONnT (pictured), which uses liquid xenon to search for dark-matter particles. (Courtesy: XENON Collaboration)
Lux-ZEPLIN and XENONnT will cover half the parameter space of masses and energies that WIMPs could in theory have, but Ghag is more excited about a forthcoming, next-generation xenon-based WIMP detector dubbed XLZD that might settle the matter. XLZD brings together both the Lux-ZEPLIN and XENONnT collaborations, to design and build a single, common multi-tonne experiment that will hopefully leave WIMPs with no place to hide. “XLZD will probably be the final experiment of this type,” says Ghag. “It’s designed to be much larger and more sensitive, and is effectively the definitive experiment.”
I think none of us are ever going to fully believe it completely until we’ve found a WIMP and can reproduce it in a lab
Richard Massey
If WIMPs do exist, then this detector will find them, and it could happen on UK shores. Several locations around the world are in the running to host the experiment, including Boulby Mine Underground Laboratory near Whitby Bay on the north-east coast of England. If everything goes to plan, XLZD – which will contain between 40 and 100 tonnes of xenon – will be up and running and providing answers by the 2030s. It will be a huge moment for dark matter, and a nervous one for its researchers.
“I think none of us are ever going to fully believe it completely until we’ve found [a WIMP] and can reproduce it in a lab and show that it’s not just some abstract stuff that we call dark matter, but that it is a particular particle that we can identify,” says astronomer Richard Massey of the University of Durham, UK.
But if WIMPs are in fact a dead-end, then it’s not a complete death-blow for dark matter – there are other dark-matter candidates and other dark-matter experiments. For example, the Forward Search Experiment (FASER) at CERN’s Large Hadron Collider is looking for less massive dark-matter particles such as axions (read more about them in part 2). However, WIMPs have been a mainstay of dark-matter models since the 1980s. If the xenon-based experiments turn up empty-handed it will be a huge blow, and the door will creak open just a little bit more for MOND.
Galactic frontier
MOND’s battleground isn’t in particle detectors – it’s in the outskirts of galaxies and galaxy clusters, and its proof lies in the history of how our universe formed. This is dark matter’s playground too, with the popular models for how galaxies grow being based on a universe in which dark matter forms 85% of all matter. So it’s out in the depths of space where the two models clash.
The current standard model of cosmology describes how the growth of the large-scale structure of the universe, over the 13.8 billion years of cosmic history since the Big Bang, has been shaped by a combination of dark matter and dark energy (the latter responsible for the accelerated expansion of the universe). Essentially, density fluctuations in the cosmic microwave background (CMB) radiation reflect the clumping of dark matter in the very early universe. As the cosmos aged, these clumps grew and stretched into the cosmic web of matter – a universe-spanning network of dark-matter filaments, along which most matter lies, separated by voids that are far less densely packed than the filaments. Galaxies can form inside “dark matter haloes”, and at the densest points in the dark-matter filaments, galaxy clusters coalesce.
Simulations in this paradigm – known as lambda cold dark matter (ΛCDM) – suggest that galaxy and galaxy-cluster formation should be a slow process, with small galaxies forming first and gradually merging over billions of years to build up into the more massive galaxies that we see in the universe today. And it works – kind of. Recently, the James Webb Space Telescope (JWST) peered back in time to between just 300 and 400 million years after the Big Bang and found the universe to be populated by tiny galaxies perhaps just a thousand or so light-years across (ApJ 970 31). This is as expected, and over time they would grow and merge into larger galaxies.
1 Step back in time
a (Courtesy: NASA/ESA/CSA/STScI/ Brant Robertson, UC Santa Cruz/ Ben Johnson, CfA/ Sandro Tacchella, University of Cambridge/ Phill Cargile, CfA)
b (Courtesy: NASA/ESA/CSA/ Joseph Olmsted, STScI/ S Carniani, Scuola Normale Superiore/ JADES Collaboration)
Data from the James Webb Space Telescope (JWST) form the basis of the JWST Advanced Deep Extragalactic Survey (JADES). (a) This infrared image from the JWST’s NIRCam highlights galaxy JADES-GS-z14-0. (b) The JWST’s NIRSpec (Near-Infrared Spectrograph) obtained this spectrum of JADES-GS-z14-0. A galaxy’s redshift can be determined from the location of a critical wavelength known as the Lyman-alpha break. For JADES-GS-z14-0 the redshift value is 14.32 (+0.08/–0.20), making it the second most distant galaxy known, seen less than 300 million years after the Big Bang. The current record holder, as of August 2025, is MoM-z14, which has a redshift of 14.4 (+0.02/–0.02), placing it less than 280 million years after the Big Bang (arXiv:2505.11263). Both galaxies belong to an era referred to as the “cosmic dawn”, when the first stars and galaxies began to shine, before the epoch of reionization made the universe transparent to ultraviolet light. JADES-GS-z14-0 is particularly interesting to researchers not just because of its distance, but also because it is very bright. Indeed, it is much more intrinsically luminous and massive than expected for a galaxy that formed so soon after the Big Bang, raising more questions on the evolution of stars and galaxies in the early universe.
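The redshift quoted above follows almost directly from where the Lyman-alpha break lands in the observed spectrum: the rest-frame break at 121.6 nm is stretched by a factor (1 + z). The snippet below is a minimal illustration of that bookkeeping – the rest wavelength is a standard value and the redshifts are simply those quoted in the caption, not a re-analysis of the JADES data.

```python
# How a galaxy's redshift follows from the observed position of the
# Lyman-alpha break: lambda_obs = (1 + z) * lambda_rest. The rest wavelength
# is a standard value; the redshifts are those quoted for the two galaxies.

LYMAN_ALPHA_REST_NM = 121.567    # rest-frame Lyman-alpha wavelength (nm)

def observed_break_nm(z: float) -> float:
    """Observed wavelength (nm) of the Lyman-alpha break at redshift z."""
    return (1.0 + z) * LYMAN_ALPHA_REST_NM

def redshift_from_break(observed_nm: float) -> float:
    """Invert the relation: z = lambda_obs / lambda_rest - 1."""
    return observed_nm / LYMAN_ALPHA_REST_NM - 1.0

for name, z in (("JADES-GS-z14-0", 14.32), ("MoM-z14", 14.4)):
    lam = observed_break_nm(z)
    print(f"{name}: z = {z} -> break observed near {lam / 1000:.2f} um "
          f"(recovering z = {redshift_from_break(lam):.2f})")
# Both breaks land near 1.9 um, squarely in the infrared range covered by NIRSpec.
```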
Yet the deeper we push into the universe, the more we observe challenges to the ΛCDM model – challenges that ultimately threaten the very existence of dark matter. For example, those early galaxies that the JWST has observed, while being quite small, are also surprisingly bright – more so than ΛCDM predicts. This has been attributed to an initial mass function (IMF – the distribution of masses with which stars form) that skews more towards higher-mass, and therefore more luminous, stars than today’s. It sounds reasonable, except that astronomers still don’t understand why the IMF is what it is today (favouring the smallest stars; massive stars are rare), never mind what it might have been over 13 billion years ago.
Not everyone is convinced, and the doubts are compounded by slightly later galaxies, seen around a billion years after the Big Bang, which continue the trend of being more luminous and more massive than expected. Indeed, some of these galaxies sport truly enormous black holes, hundreds of times more massive than the black hole at the heart of our Milky Way. Just a couple of billion years later, substantial galaxy clusters are already present – earlier than ΛCDM would lead one to expect.
The fall of ΛCDM?
Astrophysicist and MOND advocate Pavel Kroupa, from the University of Bonn in Germany, highlights giant elliptical galaxies in the early universe as an example of what he sees as a divergence from ΛCDM.
“We know from observations that the massive elliptical galaxies formed on shorter timescales than the less massive ellipticals,” he explains. This phenomenon has been referred to as “downsizing”, and Kroupa declares it is “a big problem for ΛCDM” because the model says that “the big galaxies take longer to form, but what we see is exactly the opposite”.
To quantify this problem, a 2020 study (MNRAS 498 5581) by Australian astronomer Sabine Bellstedt and colleagues showed that half the mass in present-day elliptical galaxies was in place 11 billion years ago, compared with other galaxy types that only accrued half their mass on average about 6 billion years ago. The smallest galaxies only accrued that mass as recently as 4 billion years ago, in apparent contravention of ΛCDM.
Observations (ApJ 905 40) of a giant elliptical galaxy catalogued as C1-23152, which we see as it existed 12 billion years ago, show that it formed 200 billion solar masses worth of stars in just 450 million years – a huge firestorm of star formation that ΛCDM simulations just can’t explain. Perhaps it is an outlier – we’ve only sampled a few parts of the sky, not conducted a comprehensive census yet. But as astronomers probe these cosmic depths more extensively, such explanations begin to wear thin.
Kroupa argues that by replacing dark matter with MOND, such giant early elliptical galaxies suddenly make sense. Working with Robin Eappen, a PhD student at Charles University in Prague, he modelled a giant gas cloud in the very early universe collapsing under gravity according to MOND, rather than in the presence of dark matter.
“It is just stunning that the time [of formation of such a large elliptical] comes out exactly right,” says Kroupa. “The more massive cloud collapses faster on exactly the correct timescale, compared to the less massive cloud that collapses slower. So when we look at an elliptical galaxy, we know that thing formed from MOND and nothing else.”
Elliptical galaxies are not the only things with a size problem. In 2021 Alexia Lopez, a PhD student at the University of Central Lancashire, UK, discovered a “Giant Arc” of galaxies spanning 3.3 billion light-years, some 9.2 billion light-years away. And in 2023 Lopez spotted another gigantic structure, a “Big Ring” (shaped more like a coil) of galaxies 1.3 billion light-years in diameter but with a circumference of about 4 billion light-years. At the opposite extreme are the massive under-dense voids that take up space between the filaments of the cosmic web. The KBC Void (sometimes called the “Local Hole”), for example, is about two billion light-years across, and the Milky Way, along with a host of other galaxies, sits inside it. The trouble is, simulations in ΛCDM, with dark matter at its heart, cannot replicate structures and voids this big.
“We live in this huge under-density; we’re not at the centre of it but we are within it and such an under-density is completely impossible in ΛCDM,” says Kroupa, before declaring, “Honestly, it’s not worthwhile to talk about the ΛCDM model anymore.”
A bohemian model
Such fighting talk is dismissed by dark-matter astronomers because although there are obviously deficiencies in the ΛCDM model, it does such a good job of explaining so many other things. If we’re to kill ΛCDM because it cannot explain a few large ellipticals or some overly large galaxy groups or voids, then there needs to be a new model that can explain not only these anomalies, but also everything else that ΛCDM does explain.
“Ultimately we need to explain all the observations, and some of those MOND does better and some of those ΛCDM does better, so it’s how you weigh those different baskets,” says Stacy McGaugh, a MOND researcher from Case Western Reserve University in the US.
As it happens, Kroupa and his Bonn colleague Jan Pflamm-Altenburg are working on a new model that they think has what it takes to overthrow dark matter and the broader ΛCDM paradigm. Calling it the Bohemian model (the name has a double meaning – Kroupa is originally from Czechia), it incorporates MOND as its main pillar and Kroupa describes the results they are getting from their simulations in this paradigm as “stunning” (A&A 698 A167).
A lot of experts at Ivy League universities will say it’s all completely impossible. But I know that part of the community is just itching to have a completely different model
Pavel Kroupa
But Kroupa admits that not everybody will be happy to see it published. “If it’s published, a lot of experts at Ivy League universities will say it’s all completely impossible,” he says. “But I know for a fact that there is part of the community, the ‘bright part’ as I call them, which is just itching to have a completely different model.”
Kroupa is staying tight-lipped on the precise details of his new model, but says that according to simulations the puzzle of large-scale structure forming earlier than expected, and growing larger faster than expected, is answered by the Bohemian model. “These structures [such as the Giant Arc and the KBC Void] are so radical that they are not possible in the ΛCDM model,” he says. “However, they pop right out of this Bohemian model.”
Binary battle
Whether you believe Kroupa’s promises of a better model or whether you see it all as bluster, the fact remains that a dark-matter-dominated universe still has some problems. Maybe they’re not serious, and all it will take is a few tweaks to make those problems go away. But maybe they’ll persist, and require new physics of some kind, and it’s this possibility that continues to leave the door open for MOND. For the rest of us, we’re still grasping for a definitive statement one way or another.
For MOND, perhaps that definitive statement could still turn out to be binary stars, as discussed in the first article in this series. Researchers have been particularly interested in so-called “wide binaries” – pairs of stars that are more than 500 AU apart. Thanks to the vast distance between them, the gravitational impact of each star on the other is weak, making it a perfect test for MOND. Indranil Banik, of the University of St Andrews, UK, controversially concluded that there was no evidence for MOND operating on the smaller scales of binary-star systems. However, other researchers such as Kyu-Hyun Chae of Sejong University in South Korea argue that they have found evidence for MOND in binary systems, and have hit out at Banik’s findings.
Indeed, after the first part of this series was published, Chae reached out to me, arguing that Banik had analysed the data incorrectly. Specifically, Chae points out that the fraction of wide binaries with an unseen close stellar companion to one or both of the stars (a factor designated fmulti) must be calibrated for when performing the MOND calculations. Often when two stars are extremely close together, their angular separation is so small that we cannot resolve them and don’t realize they form a binary, he explains. So we might mistake a triple system – two stars so close together that we can’t distinguish them, plus a third star on a wider circumbinary orbit – for a simple wide binary.
“I initially believed Banik’s claim, but because what’s at stake is too big and I started feeling suspicious, I chose to do my own investigation,” says Chae (ApJ 952 128). “I came to realize the necessity of calibrating fmulti due to the intrinsic degeneracy between mass and gravity (one cannot simultaneously determine the gravity boost factor and the amount of hidden mass).”
The probability of a wide binary having an unseen extra stellar companion is the same as for closer binaries (those that we can still resolve). But for closer binaries the gravitational acceleration is high enough that they obey regular Newtonian gravity – MOND only comes into the picture at wider separations. The mass uncertainty in wide binaries in the MOND regime can therefore be calibrated using those shorter-period systems, and Chae argues that Banik did not do this. “I’m absolutely confident that if the Banik et al. analysis is properly carried out, it will reveal MOND’s low-acceleration gravitational anomaly to some degree.”
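For readers wondering what a “gravity boost factor” actually looks like in MOND, the sketch below evaluates one commonly used textbook prescription – the so-called “simple” interpolating function, in which the true acceleration g is related to the Newtonian value gN by g = ν(y) gN, with y = gN/a0, ν(y) = 1/2 + √(1/4 + 1/y) and a0 ≈ 1.2 × 10⁻¹⁰ m/s². It is shown purely for orientation; it is not necessarily the exact function used in Chae’s or Banik’s analyses.

```python
# The MOND "gravity boost factor" according to the so-called simple
# interpolating function - a generic textbook prescription, shown here for
# orientation only, not necessarily the form used in the wide-binary papers.
import math

A0 = 1.2e-10    # MOND acceleration scale a0 (m/s^2)

def mond_boost(g_newton: float) -> float:
    """Ratio g_MOND / g_Newton for the 'simple' interpolating function.

    g = nu(y) * g_N with y = g_N / a0 and nu(y) = 1/2 + sqrt(1/4 + 1/y).
    At high accelerations (y >> 1) nu -> 1 and Newtonian gravity is recovered;
    deep in the MOND regime (y << 1) the boost grows as 1/sqrt(y),
    i.e. g -> sqrt(g_N * a0).
    """
    y = g_newton / A0
    return 0.5 + math.sqrt(0.25 + 1.0 / y)

for g_n in (1e-7, 1e-9, 1.2e-10, 1e-11):
    print(f"g_N = {g_n:.1e} m/s^2 -> boost factor {mond_boost(g_n):.2f}")
# The boost is negligible at Solar System accelerations but becomes significant
# once g_N drops towards a0 - the low-acceleration regime probed by the widest binaries.
```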
So perhaps there is hope for MOND in binary systems. Given that dark matter shouldn’t be present on the scale of binary systems, any anomalous gravitational effect could only be explained by MOND. A detection would be pretty definitive, if only everyone could agree upon it.
Bullet time and mass This spectacular new image of the Bullet Cluster was created using NASA’s James Webb Space Telescope and Chandra X-ray Observatory. The new data allow for an improved measurement of the thousands of galaxies in the Bullet Cluster. This means astronomers can more accurately “weigh” both the visible and invisible mass in these galaxy clusters. Astronomers also now have an improved idea of how that mass is distributed. (X-ray: NASA/CXC/SAO; near-infrared: NASA/ESA/CSA/STScI; processing: NASA/STScI/ J DePasquale)
But let’s not kid ourselves – MOND still has a lot of catching up to do on dark matter, which has become a multi-billion-dollar industry with thousands of researchers working on it and space missions such as the European Space Agency’s Euclid space telescope. Dark matter is still in pole position, and its own definitive answers might not be too far away.
“Finding dark matter is definitely not too much to hope for, and that’s why I’m doing it,” says Richard Massey. He highlights not only Euclid, but also the work of the James Webb Space Telescope in imaging gravitational lensing on smaller scales, and the Nancy Grace Roman Space Telescope, which will launch later this decade on a mission to study weak gravitational lensing – the way in which small clumps of matter, such as individual dark-matter haloes around galaxies, subtly warp space.
“These three particular telescopes give us the opportunity over the next 10 years to catch dark matter doing something, and to be able to observe it when it does,” says Massey. That “something” could be dark-matter particles interacting, perhaps in a cluster merger in deep space, or in a xenon tank here on Earth.
“That’s why I work on dark matter rather than anything else,” concludes Massey. “Because I am optimistic.”
In the first instalment of this three-part series, Keith Cooper explored the struggles and successes of modified gravity in explaining phenomena at varying galactic scales
In the second part of the series, Keith Cooper explored competing theories of dark matter
A study of plastic bottles washed up on the Pacific coast of Latin America has identified a double problem—a mass of local waste combined with long-traveling bottles from Asia.
A new method for generating high-energy proton beams could one day improve the precision of proton therapy for treating cancer. Developed by an international research collaboration headed up at the National University of Singapore, the technique involves accelerating H₂⁺ ions and then using a novel two-dimensional carbon membrane to split the high-energy ion beam into beams of protons.
One obstacle when accelerating large numbers of protons together is that they all carry the same positive charge and thus naturally repel each other. This so-called space–charge effect makes it difficult to keep the beam tight and focused.
“By accelerating H₂⁺ ions instead of single protons, the particles don’t repel each other as strongly,” says project leader Jiong Lu. “This enables delivery of proton beam currents up to an order of magnitude higher than those from existing cyclotrons.”
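One rough way to put a number on that advantage is the generalized perveance, K = qI/(2πε₀mc³β³γ³), which gauges how strongly a beam’s own space charge defocuses it. The sketch below compares a proton beam and an H₂⁺ beam delivering the same number of protons per second at the same velocity – a deliberately simplified, illustrative comparison, not figures from the Singapore team’s machine. Because each H₂⁺ ion carries two protons for a single unit of charge, the electrical current halves and the mass per particle doubles, cutting the space-charge term by roughly a factor of four.

```python
# Back-of-the-envelope comparison of space-charge strength for a proton beam
# versus an H2+ beam delivering the SAME number of protons per second at the
# SAME velocity. Uses the generalized perveance
#   K = q*I / (2*pi*eps0*m*c^3 * beta^3 * gamma^3)
# as the figure of merit. Illustrative values, not figures from the experiment.
import math

EPS0 = 8.854e-12      # vacuum permittivity (F/m)
C = 2.998e8           # speed of light (m/s)
E = 1.602e-19         # elementary charge (C)
M_P = 1.673e-27       # proton mass (kg)

def perveance(charge: float, mass: float, current: float,
              beta: float, gamma: float = 1.0) -> float:
    """Generalized perveance: dimensionless measure of space-charge defocusing."""
    return charge * current / (2 * math.pi * EPS0 * mass * C**3 * beta**3 * gamma**3)

protons_per_second = 1e15   # assumed delivery rate (illustrative)
beta = 0.1                  # assumed common beam velocity, v = 0.1c (illustrative)

# Proton beam: one charge and one proton per particle.
k_proton = perveance(E, M_P, protons_per_second * E, beta)

# H2+ beam: one charge but two protons per ion, so half the electrical current
# for the same proton delivery rate, and twice the mass per particle.
k_h2plus = perveance(E, 2 * M_P, protons_per_second * E / 2, beta)

print(f"K(H2+) / K(proton) = {k_h2plus / k_proton:.2f}")   # -> 0.25
```

The factor of four is only indicative – a real accelerator’s beam dynamics are far more involved – but it captures why packing two protons per unit of charge eases the space-charge problem that Lu describes.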
Lu explains that a high-current proton beam can deliver more protons in a shorter time, making proton treatments quicker and more precise, and allowing tumours to be targeted more effectively. Such a proton beam could also be employed in FLASH therapy, an emerging treatment that delivers therapeutic radiation at ultrahigh dose rates to reduce normal tissue toxicity while preserving anti-tumour activity.
Industry-compatible fabrication
The key to this technique lies in the choice of an optimal membrane with which to split the H₂⁺ ions. For this task, Lu and colleagues developed a new material – ultraclean monolayer amorphous carbon (UC-MAC). MAC is similar in structure to graphene, but instead of an ordered honeycomb structure of hexagonal rings, it contains a disordered mix of five-, six-, seven- and eight-membered carbon rings. This disorder creates angstrom-scale pores in the films, which can be used to split the H₂⁺ ions into protons as they pass through.
Pentagons, hexagons, heptagons, octagons Illustration of disorder-to-disorder synthesis (left); scanning transmission electron microscopy image of UC-MAC (right). (Courtesy: National University of Singapore)
Scaling the manufacture of ultrathin MAC films, however, has previously proved challenging, with no industrial synthesis method available. To address this problem, the researchers proposed a new fabrication approach in which the emergence of long-range order in the material is suppressed, not by the conventional approach of low-temperature growth, but by a novel disorder-to-disorder (DTD) strategy.
DTD synthesis uses plasma-enhanced chemical vapor deposition (CVD) to create a MAC film on a copper substrate containing numerous nanoscale crystalline grains. This disordered substrate induces high levels of randomized nucleation in the carbon layer and disrupts long-range order. The approach enabled wafer-scale (8-inch) production of UC-MAC films within just 3 s – an order of magnitude faster than conventional CVD methods.
Disorder creates precision
To assess the ability of UC-MAC to split H₂⁺ ions into protons, the researchers generated a high-energy H₂⁺ nanobeam and focused it onto a freestanding two-dimensional UC-MAC crystal. This resulted in the ion beam splitting to create high-precision proton beams. For comparison they repeated the experiment (with beam current stabilities controlled within 10%) using single-crystal graphene, non-clean MAC with metal impurities and commercial carbon thin films (8 nm).
Measuring double-proton events – in which two proton signals are detected from a single H₂⁺ ion splitting – as an indicator for proton scattering revealed that the UC-MAC membrane produced far fewer unwanted scattered protons than the other films. Ion splitting using UC-MAC resulted in about 47 double-proton events over a 20 s collection time, while the graphene film exhibited roughly twice this number and the non-clean MAC slightly more. The carbon thin film generated around 46 times more scattering events.
The researchers point out that the reduced double-proton events in UC-MAC “demonstrate its superior ability to minimize proton scattering compared with commercial materials”. They note that as well as UC-MAC creating a superior quality proton beam, the technique provides control over the splitting rate, with yields ranging from 88.8 to 296.0 proton events per second per detector.
“Using UC-MAC to split H₂⁺ produces a highly sharpened, high-energy proton beam with minimal scattering and high spatial precision,” says Lu. “This allows more precise targeting in proton therapy – particularly for tumours in delicate or critical organs.”
“Building on our achievement of producing proton beams with greatly reduced scattering, our team is now developing single molecule ion reaction platforms based on two-dimensional amorphous materials using high-energy ion nanobeam systems,” he tells Physics World. “Our goal is to make proton beams for cancer therapy even more precise, more affordable and easier to use in clinical settings.”
In this episode of Space Minds, host Mike Gruss speaks with Professor Sir Martin Sweeting, Executive Chairman of Surrey Satellite Technology Limited and a professor at the Surrey Space Institute of the University of Surrey.
Belgian startup Edgx has raised seed funding to develop Sterna, an artificial intelligence computer designed to run complex algorithms onboard satellites to speed decisions and use limited bandwidth more efficiently.
Learn more about how elephants use non-verbal gestures to communicate with humans – the first time this complex form of communication has been documented in non-primates.
SALT LAKE CITY – Ascending Node Technologies has added a constellation-design tool to its Spaceline mission-planning software. “Not only is this useful for new constellations, but also for satellite operators adding instruments or increasing the size of their constellations,” John Kidd, ANT chief aerospace engineer, told SpaceNews. In addition, ANT is “testing optimization algorithms to […]
08/11/2025, Salt Lake City, UT – The cooperation, announced during the Small Satellite Conference in Utah, addresses the growing demand for advanced onboard applications among SmallSat developers. The cooperation brings […]
Geopolitical shifts are unlocking new business for European smallsat firms, but they’re also creating a new set of headaches – especially around tariffs. European governments are planning to sharply increase their defense budgets in the next several years as part of a pledge made by NATO members at a June summit meeting to go from […]
Evidence of the coherent elastic scattering of reactor antineutrinos from atomic nuclei has been reported by the German-Swiss Coherent Neutrino Nucleus Scattering (CONUS) collaboration. This interaction has a higher cross section (probability) than the processes currently used to detect neutrinos, and could therefore lead to smaller detectors. It also involves lower-energy neutrinos, which could offer new ways to look for new physics beyond the Standard Model.
Antineutrinos only occasionally interact with matter, which makes them very difficult to detect. They can be observed using inverse beta decay, which involves the capture of electron antineutrinos by protons, producing neutrons and positrons. An alternative method involves observing the scattering of antineutrinos from electrons. Both these reactions have small cross sections, so huge detectors are required to capture just a few events. Moreover, inverse beta decay can only detect antineutrinos if they have energies above about 1.8 MeV, which precludes searches for low-energy physics beyond the Standard Model.
It is also possible to detect neutrinos by the tiny kick a nucleus receives when a neutrino scatters off it. “It’s very hard to detect experimentally because the recoil energy of the nucleus is so low, but on the other hand the interaction probability is a factor of 100–1000 higher than these typical reactions that are otherwise used,” says Christian Buck of the Max Planck Institute for Nuclear Physics in Heidelberg. This enables measurements with kilogram-scale detectors.
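Just how low is that recoil energy? For elastic scattering of a neutrino of energy E off a nucleus of rest energy Mc², the maximum energy transfer is roughly 2E²/(Mc²). The estimate below plugs in typical reactor-antineutrino energies for a germanium nucleus – illustrative numbers, not CONUS data.

```python
# Maximum nuclear recoil energy from coherent elastic neutrino-nucleus
# scattering: E_R,max ~ 2*E_nu^2 / (M*c^2) when E_nu << M*c^2.
# Reactor-antineutrino energies below are illustrative, not CONUS data.

M_GE_C2_EV = 67.6e9    # germanium nucleus rest energy (eV), A ~ 72.6

def max_recoil_ev(e_nu_ev: float) -> float:
    """Maximum recoil energy (eV) imparted by a neutrino of energy e_nu_ev."""
    return 2.0 * e_nu_ev**2 / M_GE_C2_EV

for e_nu_mev in (1.0, 2.0, 4.0):
    print(f"E_nu = {e_nu_mev:.0f} MeV -> max Ge recoil ~ {max_recoil_ev(e_nu_mev * 1e6):.0f} eV")
# Tens to a few hundred eV for typical reactor antineutrinos - which is why
# CONUS+ needed germanium detectors with thresholds as low as ~160 eV.
```

Recoils this small are why coherent elastic scattering, despite its enhanced cross section, took so long to observe.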
This was first observed in 2017 by the COHERENT collaboration using a 14.6 kg caesium iodide crystal to detect neutrinos from the Spallation Neutron Source at the Oak Ridge National Laboratory in the US. These neutrinos have a maximum energy of 55 MeV, making them ideal for the interaction. Moreover, the neutrinos come in pulses, allowing the signal to be distinguished from background radiation.
Reactor search
Multiple groups have subsequently looked for signals from nuclear reactors, which produce lower-energy neutrinos. These include the CONUS collaboration, which operated at the Brokdorf nuclear reactor in Germany until 2022. However, the only group to report a strong hint of a signal is one that includes Juan Collar of the University of Chicago. In 2022 it published results suggesting a stronger-than-expected signal at the Dresden-2 power reactor in the US.
Now, Buck and his CONUS colleagues present data from the CONUS+ experiment conducted at the Leibstadt reactor in Switzerland. They used three 1 kg germanium diodes sensitive to energies as low as 160 eV. They extracted the neutrino signal from background radiation by comparing data taken when the reactor was running with data taken when it was not. Writing in Nature, the team concludes that 395±106 neutrinos were detected during 119 days of operation – a signal 3.7σ above zero (395/106 ≈ 3.7) that is consistent with the Standard Model prediction. The experiment is currently in its second run, with the detector masses increased to 2.4 kg to provide better statistics and potentially a lower threshold energy.
Collar, however, is sceptical of the result. “[The researchers] seem to have an interest in dismissing the limitations of these detectors – limitations that affect us too,” he says. “The main difference between our approach and theirs is that we have made a best effort to demonstrate that our data are not contaminated by residual sources of low-energy noise dominant in this type of device prior to a careful analysis.” His group will soon release data taken at the Vandellòs reactor in Spain. “When we release these, we will take the time to point out the issues visible in their present paper,” he says. “It is a long list.”
Buck accepts that, if the previous measurements by Collar’s group are correct, the CONUS+ researchers should have detected at least 10 times more neutrinos than they actually did. “I would say the control of backgrounds at our site in Leibstadt is better because we do not have such a strong neutron background. We have clearly demonstrated that the noise Collar has in mind is not dominant in the energy region of interest in our case.”
Patrick Huber at Virginia Tech in the US says, “Let’s see what Collar’s new result is going to be. I think this is a good example of the scientific method at work. Science doesn’t care who’s first – scientists care, but for us, what matters is that we get it right. But with the data that we have in hand, most experts, myself included, think that the current result is essentially the result we have been looking for.”