States that received funding under an EPA program aimed at expanding solar energy for low-income communities
Nearly two dozen states are suing the Trump administration over its cancellation of a $7bn grant program aimed at expanding solar energy in low-income communities, according to court papers.
In a statement on Thursday, California’s attorney general, Rob Bonta, announced two lawsuits by a group of states that received grants under the Environmental Protection Agency’s Solar for All program. The EPA’s administrator, Lee Zeldin, announced the termination of the program in August. The agency said in an email that it would not comment on pending litigation.
Government ministers met representatives from the fossil fuel industry more than 500 times during their first year in power – equivalent to twice every working day, according to research.
The analysis found that fossil fuel lobbyists were present at 48% more ministerial meetings during Labour’s first year in power than under the Conservatives in 2023.
Ministers at the Department for Energy Security and Net Zero (DESNZ) met fossil fuel lobbyists 274 times, with industry figures present at almost a quarter of meetings.
During the same period DESNZ ministers met trade union representatives 61 times.
Ed Miliband, the energy security and net zero secretary, met fossil fuel lobbyists 91 times – with a third of all his meetings attended by industry figures.
Three fossil fuel companies – BP, Shell and Equinor – met ministers 100 times between them.
Fossil fuel lobbyists attended almost every government meeting about the energy profits levy, a temporary windfall tax on the “extraordinary profits” of North Sea oil and gas companies.
This article was amended on 16 October 2025. Owing to an error in supplied information, an earlier version said Ed Miliband met fossil fuel lobbyists 250 times during Labour’s first year in power; this should have said 91 meetings.
Climate advisers warn that current plans to protect against extreme weather are inadequate
Britain must prepare for global heating far in excess of the level scientists have pegged as the limit of safety, the government’s climate advisers have warned, as current plans to protect against extreme weather are inadequate.
Heatwaves will occur in at least four of every five years in England by 2050, and time spent in drought will double. The number of days of peak wildfire conditions in July will nearly treble for the UK, while floods will increase in frequency throughout the year, with some peak river flows increasing by 40%.
Nuclear power in the UK is on the rise – and so too are the job opportunities for physicists. Whether it’s planning and designing new reactors, operating existing plants safely and reliably, or dealing with waste management and decommissioning, physicists play a key role in the burgeoning nuclear industry.
While many see fusion as the future of nuclear power, it is still in the research and development stages, so fission remains where most job opportunities lie. Although eight of the current fleet of nuclear reactors are to be retired by the end of this decade, the first of the next generation are already under construction. At Hinkley Point C in Somerset, two new reactors are being built with costs estimated to reach £46bn; and in July 2025, Sizewell C in Suffolk got the final go-ahead.
Rolls-Royce, meanwhile, has just won a government-funded bid to develop small modular reactors (SMRs) in the UK. Although currently an unproven technology, the hope is that SMRs will be cheaper and quicker to build than traditional plants, with proponents saying that each reactor could produce enough affordable emission-free energy to power about 600,000 homes for at least 60 years.
Supported by an investment of £763m by 2030 from the UK government and industry, the Nuclear Skills Plan’s objectives include quadrupling the number of PhDs in nuclear fission and doubling the number of graduates entering the workforce. It also aims to provide opportunities for people to “upskill” and join the sector mid-career. The overall hope is to fill 40,000 new jobs by the end of the decade.
Having a degree in physics can open the door to any part of the nuclear-energy industry, from designing, operating or decommissioning a reactor, to training staff, overseeing safety or working as a consultant. We talk to six nuclear experts who all studied physics at university but now work across the sector, for a range of companies – including EDF Energy and Great British Energy–Nuclear. They give a quick snapshot of their “nuclear journeys”, and offer advice to those thinking of following in their footsteps.
My interest in nuclear power started when I did a project on energy at secondary school. I learnt that there were significant challenges around the world’s future energy demands, resource security, and need for clean generation. Although at the time these were not topics commonly talked about, I could see they were vital to work on, and thought nuclear would play an important role.
I went on to study physics at the University of Surrey, with a year at Michigan State University in the US and another at CERN. After working for a couple of years, I returned to Surrey to do a part-time masters in radiation detection and instrumentation, followed a few years later by a PhD in radiation-hard semiconductor neutron detectors.
Up until recently, my professional work has mainly been in the supply chain for nuclear applications, working for Thermo Fisher Scientific, Centronic and Exosens. Nuclear power isn’t made by one company; it’s a combination of thousands of suppliers and sub-suppliers, the majority of which are small to medium-sized enterprises that need to operate across multiple industries. My role was primarily that of a technical design authority for manufacturers of radiation detectors and instruments, used in applications such as reactor power monitoring, health physics, industrial controls and laboratory equipment, to name but a few. Now I work at Rolls-Royce SMR as a lead engineer for the control and instrumentation team. This role involves selecting and qualifying the thousands of different detectors and control instruments that will support the operation of small modular reactors.
Logical, evidence-based problem solving is the cornerstone of science and a powerful tool in any work setting
Beyond the technical knowledge I’ve gained throughout my education, studying physics has also given me two important skills. Firstly, learning how to learn – this is critical in academia but it also helps you step into any professional role. The second skill is the logical, evidence-based problem solving that is the cornerstone of science, which is a powerful tool in any work setting.
A career in nuclear energy can take many forms. The industry comprises a range of sectors and thousands of organizations that together form a complex support structure. My advice for any role is that knowledge is important, but experience is critical. While studying, try to look for opportunities to gain professional experience – this may be industry placements, research projects, or even volunteering. And it doesn’t have to be in your specific area of interest – cross-disciplinary experience breeds novel thinking. Utilizing these opportunities can guide your professional interests, set your CV apart from your peers, and bring pragmatism to your future roles.
I studied physics at the University of Leicester simply because it was a subject I enjoyed – at the time I had no idea what I wanted to do for a career. I first became interested in nuclear energy when I was looking for graduate jobs. The British Energy (now EDF) graduate scheme caught my eye because it offered a good balance of training and on-the-job experience. I was able to spend time in multiple different departments at different power stations before I decided which career path was right for me.
At the end of my graduate scheme, I worked in nuclear safety for several years. This involved reactor physics testing and advising on safety issues concerning the core and fuel. It was during that time I became interested in the operational response to faults. I therefore applied for the company’s reactor operator training programme – a two-year course that was a mixture of classroom and simulator training. I really enjoyed being a reactor operator, particularly during outages when the plant would be shut down, cooled, depressurised and disassembled for refuelling before reversing the process to start up again. But after almost 10 years in the control room, I wanted a new challenge.
Now I develop and deliver the training for the control-room teams. My job, which includes simulator and classroom training, covers everything from operator fundamentals (such as reactor physics and thermodynamics) and normal operations (e.g. start up and shutdown), through to accident scenarios.
My background in physics gives me a solid foundation for understanding the reactor physics and thermodynamics of the plant. However, there are also a lot of softer skills essential for my role. Teaching others requires the ability to present and explain technical material; to facilitate a constructive debrief after a simulator scenario; and to deliver effective coaching and feedback. The training focuses as much on human performance as it does technical knowledge, highlighting the importance of effective teamwork, error prevention and clear communications.
A graduate training scheme is an excellent way to get an overview of the business, and gain experience across many different departments and disciplines
With Hinkley Point C construction progressing well and the recent final investment decision for Sizewell C, now is an exciting time to join the nuclear industry. A graduate training scheme is an excellent way to get an overview of the business, and gain experience across many different departments and disciplines, before making the decision about which area is right for you.
I’d been generally interested in nuclear science throughout my undergraduate physics degree at the University of Manchester, but this really accelerated after studying modules in applied nuclear and reactor physics. The topic was engaging, and the nuclear industry offered a way to explore real-world implementation of physics concepts. This led me to do a masters in nuclear science and technology, also at Manchester (under the Nuclear Technology Education Consortium), to develop the skills the UK nuclear sector required.
My first job was as a graduate nuclear safety engineer at Atkins (now AtkinsRealis), an engineering consultancy. It opened my eyes to the breadth of physics-related opportunities in the industry. I worked on new and operational power station projects for Hitachi-GE and EDF, as well as a variety of defence new-build projects. I primarily worked in hazard analysis, using modelling and simulation tools to generate evidence on topics like fire, blast and flooding to support safety case claims and inform reactor designs. I was also able to gain experience in project management, business development, and other energy projects, such as offshore wind farms. The analytical and problem solving skills I had developed during my physics studies really helped me to adapt to all of these roles.
Currently I work as a principal nuclear safety inspector at the Office for Nuclear Regulation. My role is quite varied. Day to day I might be assessing safety case submissions from a prospective reactor vendor; planning and delivering inspections at fuel and waste sites; or managing fire research projects as part of an international programme. A physics background helps me to understand complex safety arguments and how they link to technical evidence; and to make reasoned and logical regulatory judgements as a result.
Physics skills and experience are valued across the nuclear industry, from hazards and fault assessment to security, safeguards, project management and more
It’s a great time to join the nuclear industry with a huge amount of activity and investment across the nuclear lifecycle. I’d advise early-career professionals to cast the net wide when looking for roles. There are some obvious physics-related areas such as health physics, fuel and core design, and criticality safety, but physics skills and experience are valued across the nuclear industry, from hazards and fault assessment to security, safeguards, project management and more. Don’t be limited by the physicist label.
My interest in a career in nuclear energy sparked mid-way through my degree in physics and mathematics at the University of Sheffield, when I was researching “safer nuclear power” for an essay. Several rabbit holes later, I had discovered a myriad of opportunities in the sector that would allow me to use the skills and knowledge I’d gained through my degree in an industrial setting.
My first job in the field was as a technical support advisor on a graduate training scheme, where I supported plant operations on a nuclear licensed site. Next, I did a stint working in strategy development and delivery across the back end of the fuel cycle, before moving into consultancy. I now work as a principal consultant for Galson Sciences Ltd, part of the Egis group. Egis is an international multi-disciplinary consulting and engineering firm, within which Galson Sciences provides specialist nuclear decommissioning and waste management consultancy services to nuclear sector clients worldwide.
Ultimately, my role boils down to providing strategic and technical support to help clients make decisions. My focus these days tends to be around radioactive waste management, which can mean anything from analysing radioactive waste inventories to assessing the environmental safety of disposal facilities.
In terms of technical skills needed for the role, data analysis and the ability to provide high-quality reports on time and within budget are at the top of the list. Physics-wise, an understanding of radioactive decay, criticality mechanisms and the physico-chemical properties of different isotopes is a fairly fundamental requirement. Meanwhile, as a consultant, some of the most important soft skills are being able to lead, teach and mentor less experienced colleagues; develop and maintain strong client relationships; and look after the well-being and deployment of my staff.
Whichever part of the nuclear fuel cycle you end up in, the work you do makes a difference
My advice to anyone looking to go into nuclear energy is to go for it. There are lots of really interesting things happening right now across the industry, all the way from building new reactors and operating the current fleet, to decommissioning, site remediation and waste management activities. Whichever part of the nuclear fuel cycle you end up in, the work you do makes a difference, whether that’s by cleaning up the legacy of years gone by or by helping to meet the UK’s energy demands. Don’t be afraid to say “yes” to opportunities even if they’re outside your comfort zone, keep learning, and keep being curious about the world around you.
As a child, I remember going to the visitors’ centre at the Sellafield nuclear site – a large nuclear facility in the north-west of England that’s now the subject of a major clean-up and decommissioning operation. At the centre, there was a show about splitting the atom that really sparked my interest in physics and nuclear energy.
I went on to study physics at Durham University, and did two summer placements at Sellafield, working with radiometric instruments. I feel these placements helped me get a place on the Rolls-Royce nuclear engineering graduate scheme after university. From there I joined Urenco, an international supplier of uranium enrichment services and fuel cycle products for the civil nuclear industry.
While at Urenco, I have undertaken a range of interesting roles in nuclear safety and radiation physics, including criticality safety assessment and safety case management. Highlights have included being the licensing manager for a project looking to deploy a high-temperature gas-cooled reactor design, and presenting a paper at a nuclear industry conference in Japan. These roles have allowed me to directly apply my physics background – such as using Monte Carlo radiation transport codes to model nuclear systems and radiation sources – as well as develop broader knowledge and skills in safety, engineering and project management.
My current role is nuclear licensing manager at the Capenhurst site in Cheshire, where we operate a number of nuclear facilities including three uranium enrichment plants, a uranium chemical deconversion facility, and waste management facilities. I lead a team who ensure the site complies with regulations, and achieves the required approvals for our programme of activities. Key skills for this role include building relationships with internal and external stakeholders; being able to understand and explain complex technical issues to a range of audiences; and planning programmes of work.
I would always recommend anyone interested in working in nuclear energy to look for work experience
Some form of relevant experience is always advantageous, so I would always recommend anyone interested in working in nuclear energy to look for work experience visits, summer placements or degree schemes that include working with industry.
During my physics degree at the University of Bristol, my interest in energy led me to write a dissertation on nuclear power. This inspired me to do a masters in nuclear science and technology at the University of Manchester under the Nuclear Technology Education Consortium. The course opened doors for me, such as a summer placement with the UK National Nuclear Laboratory, and my first role as a junior safety consultant with Orano.
I worked in nuclear safety for roughly 10 years, progressing to principal consultant with Abbott Risk Consulting, but decided that this wasn’t where my strengths and passions lay. During my career, I volunteered for the Nuclear Institute (NI), and worked with the society’s young members group – the Young Generation Network (YGN). I ended up becoming chair of the YGN and a trustee of the NI, which involved supporting skills initiatives including those feeding into the Nuclear Skills Plan. Having a strategic view of the sector and helping to solve its skills challenges energized me in a new way, so I chose to change career paths and moved to Great British Energy – Nuclear (GBE-N) as skills lead. In this role I plan for what skills the business and wider sector will need for a nuclear new build programme, as well as develop interventions to address skills gaps.
GBE-N’s current remit is to deliver Europe’s first fleet of small modular reactors, but there is relatively limited experience of building this technology. Problem-solving skills from my background in physics have been essential to understanding what assumptions we can put in place at this early stage, learning from other nuclear new builds and major infrastructure projects, to help set us up for the future.
The UK’s nuclear sector is seeing significant government commitment, but there is a major skills gap
To anyone interested in nuclear energy, my advice is to get involved now. The UK’s nuclear sector is seeing significant government commitment, but there is a major skills gap. Nuclear offers a lifelong career with challenging, complex projects – ideal for physicists who enjoy solving problems and making a difference.
Silicon-based lithium-ion batteries exhibit severe time-based degradation, resulting in poor calendar lives. In this webinar, we will discuss how calendar aging is measured, why the traditional measurement approaches are time intensive, and why new approaches are needed to optimize materials for next-generation silicon-based systems. Using this new approach, we also screen multiple new electrolyte systems that could improve the calendar life of silicon-containing batteries.
An interactive Q&A session follows the presentation.
Ankit Verma
Ankit Verma’s expertise is in physics-based and data-driven modeling of lithium-ion and next-generation lithium-metal batteries. His interests lie in unraveling the coupled reaction-transport-mechanics behavior in these electrochemical systems with experiment-driven validation to provide predictive insights for practical advancements. Predominantly, he is working on improving the energy density and calendar life of silicon anodes as part of the Silicon Consortium Project, and on understanding solid-state battery limitations and the upcycling of end-of-life electrodes as part of the ReCell Center.
Verma’s past work includes the optimization of lithium-ion battery anodes and cathodes for high-power and fast-charge applications and understanding electrodeposition stability in metal anodes.
The world is changing rapidly – economically, geopolitically, technologically, militarily and environmentally. But when it comes to the environment, many people feel the world is on the cusp of catastrophe. That’s especially true for anyone directly affected by endemic environmental disasters, such as drought or flooding, where mass outmigration is the only possible option.
The challenges are considerable and the crisis is urgent. But we know that physics has already contributed enormously to society – and I believe that environmental physics can make a huge difference by identifying, addressing and alleviating the problems at stake. However, physicists will only be able to make a difference if we put environmental physics at the centre of our university teaching.
Grounded in physics
Environmental physics is defined as the response of living organisms to their environment within the framework of physical principles and processes. It examines the interactions within and between the biosphere, the hydrosphere, the cryosphere, the lithosphere, the geosphere and the atmosphere. Stretching from geophysics, meteorology and climate change to renewable energy and remote sensing, it also covers soils and vegetation, the urban and built environment, and the survival of humans and animals in extreme environments.
Environmental physics was pioneered in the UK in the 1950s by the physicists Howard Penman and John Monteith, who were based at the Rothamsted Experimental Station, which is one of the oldest agricultural research institutions in the world. In recent decades, environmental physics has become more prevalent in universities across the world.
Some UK universities either teach environmental physics in their undergraduate physics degrees or have elements of it within environmental science degrees. That’s the approach taken, for example, by University College London as well as the universities of Cambridge, Leicester, Manchester, Oxford, Reading, Strathclyde and Warwick.
When it comes to master’s degrees in environmental physics, there are 17 related courses in the UK, including nuclear and environmental physics at Glasgow and radiation and environmental protection at Surrey. Even the London School of Economics has elements of environmental physics in some of its business, geography and economics degrees via a “physics of climate” course.
But we need to do more. The interdisciplinary nature of environmental physics means it overlaps with not just physics and maths but agriculture, biology, chemistry, computing, engineering, geology and health science too.
Indeed, recent developments in machine learning, digital technology and artificial intelligence (AI) have had an impact on environmental physics – for example, through the use of drones in environmental monitoring and simulations – while AI algorithms can catalyse modelling and weather forecasting. AI could also in future be used to predict natural disasters, such as earthquakes, tsunamis, hurricanes and volcanic eruptions, and to assess the health implications of environmental pollution.
Environmental physics is exciting and challenging, and has solid foundations in mathematics and the sciences via experiments both in the lab and in the field. Environmental measurements are a great way to learn about the use of uncertainties, monitoring and modelling, while providing scope for project and teamwork. A grounding in environmental physics can also open the door to lots of exciting career opportunities, with continuing environmental change meaning plenty of environmental research will be vital.
Solving major regional and global environmental problems is a key part of sociopolitics and so environmental physics has a special role to play in the public arena. It gives students the chance to develop presentational and interpersonal skills that can be used to influence decision makers at local and national government level.
Taken together, I believe a module on environmental physics should be a component of every undergraduate degree as a minimum, ideally having the same weight as quantum or statistical physics or optics. Students of environmental physics have the potential to be enabled, engaged and, ultimately, to be empowered to meet the demands that the future holds.
An unconventional approach to solving the dark energy problem called the cosmologically coupled black hole (CCBH) hypothesis appears to be compatible with the observed masses of neutrinos. This new finding from researchers working at the DESI collaboration suggests that black holes may represent little Big Bangs played in reverse and could be used as a laboratory to study the birth and infancy of our universe. The study also confirms that the strength of dark energy has increased along with the formation rate of stars.
The Dark Energy Spectroscopic Instrument (DESI) is located on the Nicholas U Mayall four-metre Telescope at Kitt Peak National Observatory in Arizona. Its raison d’être is to shed more light on the “dark universe” – the 95% of the mass and energy in the universe that we know very little about. Dark energy is a hypothetical entity invoked to explain why the rate of expansion of the universe is (mysteriously) increasing – something that was discovered at the end of the last century.
According to standard theories of cosmology, matter is thought to comprise cold dark matter (CDM) and normal matter (mostly baryons and neutrinos). DESI can observe fluctuations in the matter density of the universe known as baryon acoustic oscillations (BAOs) – density fluctuations created after the Big Bang in the hot plasma of baryons and electrons that prevailed then. BAOs expand with the growth of the universe and represent a sort of “standard ruler” that allows cosmologists to map the universe’s expansion by statistically analysing the distance that separates pairs of galaxies and quasars.
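As a rough illustration of how this “standard ruler” works (a textbook sketch rather than DESI’s full analysis pipeline): if $r_{\rm d}$ is the sound horizon at the drag epoch – roughly 150 Mpc in standard cosmology – then the BAO feature appears at an angular scale and a redshift interval of

\[ \theta_{\rm BAO}(z) \simeq \frac{r_{\rm d}}{D_{\rm M}(z)}, \qquad \Delta z_{\rm BAO}(z) \simeq \frac{r_{\rm d}\,H(z)}{c}, \]

so measuring the preferred separation of galaxy pairs across and along the line of sight at different redshifts pins down the comoving distance $D_{\rm M}(z)$ and the expansion rate $H(z)$, and hence the expansion history of the universe.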
Largest 3D map
DESI has produced the largest such 3D map of the universe ever and it recently published the first set of BAO measurements determined from observations of over 14 million extragalactic targets going back 11 billion years in time.
In the new study, the DESI researchers combined measurements from these new data with cosmic microwave background (CMB) datasets (which measure the density of dark matter and baryons from a time when the universe was less than 400,000 years old) to search for evidence of matter converting into dark energy. They did this by focusing on a new hypothesis known as the cosmologically coupled black hole (CCBH), which was put forward five years ago by DESI team member Kevin Croker, who works at Arizona State University (ASU), and his colleague Duncan Farrah at the University of Hawaii. This physical model builds on a mathematical description of black holes as bubbles of dark energy in space that was introduced over 50 years ago. CCBH describes a scenario in which massive stars exhaust their nuclear fuel and collapse to produce black holes filled with dark energy that then grows as the universe expands. The rate of dark energy production is therefore determined by the rate at which stars form.
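In the CCBH literature this coupling is usually parametrized by a strength $k$, with each black hole’s mass growing with the cosmic scale factor $a$ – a minimal sketch of the idea rather than the collaboration’s full model:

\[ M_{\rm BH}(a) = M_{\rm BH}(a_i)\left(\frac{a}{a_i}\right)^{k}. \]

For $k \approx 3$ the growth of individual masses offsets the $a^{-3}$ dilution in their number density, so the energy density of the black-hole population stays roughly constant as space expands – exactly the behaviour expected of dark energy, sourced at the rate at which stars collapse into black holes.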
Neutrino contribution
Previous analyses by DESI scientists suggested that there is less matter in the universe today compared to when it was much younger. When they then added the additional, known, matter source from neutrinos, there appeared to be no “room” and the masses of these particles therefore appeared negative in their calculations. Not only is this unphysical, explains team member Rogier Windhorst of the ASU’s School of Earth and Space Exploration, it also goes against experimental measurements made so far on neutrinos that give them a greater-than-zero mass.
When the researchers re-interpreted the new set of data with the CCBH model, they were able to resolve this issue. Since stars are made of baryons and black holes convert exhausted matter from stars into dark energy, the number of baryons today has decreased in comparison to the CMB measurements. This means that neutrinos can indeed contribute to the universe’s mass, slowing down the expansion of the universe as the dark energy produced sped it up.
“The new data are the most precise measurements of the rate of expansion of the universe going back more than 10 billion years,” says team member Gregory Tarlé at the University of Michigan, “and it results from the hard work of the entire DESI collaboration over more than a decade. We undertook this new study to confront the CCBH hypothesis with these data.”
Black holes as a laboratory
“We found that the standard assumptions currently employed for cosmological analyses simply did not work and we had to carefully revisit and rewrite massive amounts of cosmological computer code,” adds Croker.
“If dark energy is being sourced by black holes, these structures may be used as a laboratory to study the birth and infancy of our own universe,” he tells Physics World. “The formation of black holes may represent little Big Bangs played in reverse, and to make a biological analogy, they may be the ‘offspring’ of our universe.”
The researchers say they studied the CCBH scenario in its simplest form in this work, and found that it performs very well. “The next big observational test will involve a new layer of complexity, where consistency with the large-scale features of the Big Bang relic radiation, or CMB, and the statistical properties of the distribution of galaxies in space will make or break the model,” says Tarlé.
A new transition-metal oxide crystal that reversibly and repeatedly absorbs and releases oxygen could be ideal for use in fuel cells and as the active medium in clean energy technologies such as thermal transistors, smart windows and new types of batteries. The “breathing” crystal, discovered by scientists at Pusan National University in Korea and Hokkaido University in Japan, is made from strontium, cobalt and iron and contains oxygen vacancies.
Transition-metal oxides boast a huge range of electrical properties that can be tuned all the way from insulating to superconducting. This means they can find applications in areas as diverse as energy storage, catalysis and electronic devices.
Among the different material parameters that can be tuned are the oxygen vacancies. Indeed, ordering these vacancies can produce new structural phases that show much promise for oxygen-driven programmable devices.
Element-specific behaviours
In the new work, a team of researchers led by physicist Hyoungjeen Jeen of Pusan and materials scientist Hiromichi Ohta in Hokkaido studied SrFe0.5Co0.5Ox. The researchers focused on this material, they say, since it belongs to the family of topotactic oxides, which are the main oxides being studied today in solid-state ionics. “However, previous work had not discussed which ion in this compound was catalytically active,” explains Jeen. “What is more, the cobalt-containing topotactic oxides studied so far were fragile and easily fractured during chemical reactions.”
The team succeeded in creating a unique platform from a solid solution of epitaxial SrFe0.5Co0.5O2.5 in which the cobalt and iron ions are bathed in the same chemical environment. “In this way, we were able to test which ion was better for reduction reactions and whether or not it sustained its structural integrity,” Jeen tells Physics World. “We found that our material showed element-specific reduction behaviours and reversible redox reactions.”
The researchers made their material using a pulsed laser deposition technique, which is ideal for the epitaxial synthesis of multi-element oxides and allowed them to grow SrFe0.5Co0.5O2.5 crystals in which the iron and cobalt ions were randomly located. This random arrangement was key to the material’s ability to repeatedly release and absorb oxygen, they say.
“It’s like giving the crystal ‘lungs’ so that it can inhale and exhale oxygen on command,” says Jeen.
Stable and repeatable
This simple breathing picture comes from the difference in the catalytic activity of cobalt and iron in the compound, he explains. Cobalt ions prefer to lose and gain oxygen and these ions are the main sites for the redox activity. However, since iron ions prefer not to lose oxygen during the reduction reaction, they serve as pillars in this architecture. This allows for stable and repeatable oxygen release and uptake.
Until now, most materials that absorb and release oxygen in such a controlled fashion were either too fragile or only functioned at extremely high temperatures. The new material works under more ambient conditions and is stable. “This finding is striking in two ways: only cobalt ions are reduced, and the process leads to the formation of an entirely new and stable crystal structure,” explains Jeen.
The researchers also showed that the material could return to its original form when oxygen was reintroduced, so proving that the process is fully reversible. “This is a major step towards the realization of smart materials that can adjust themselves in real time,” says Ohta. “The potential applications include developing a cathode for intermediate solid oxide fuel cells, an active medium for thermal transistors (devices that can direct heat like electrical switches), smart windows that adjust their heat flow depending on the weather and even new types of batteries.”
Looking ahead, Jeen, Ohta and colleagues aim to investigate the material’s potential for practical applications.
Dark matter could be accumulating inside planets close to the galactic centre, potentially even forming black holes that might consume the afflicted planets from the inside out, new research has predicted.
According to the standard model of cosmology, all galaxies including the Milky Way sit inside huge haloes of dark matter, with the greatest density at the centre. This dark matter interacts primarily through gravity, although some popular models such as weakly interacting massive particles (WIMPs) do imply that dark-matter particles may occasionally scatter off normal matter.
This has led PhD student Mehrdad Phoroutan Mehr and Tara Fetherolf of the University of California, Riverside, to make an extraordinary proposal: that dark matter could elastically scatter off molecules inside planets, lose energy and become trapped inside those planets, and then grow so dense that they collapse to form a black hole. In some cases, a black hole could be produced in just ten months, according to Mehr and Fetherolf’s calculations, reported in Physical Review D.
Even more remarkable is that while many planets would be consumed by their parasitic black hole, it is feasible that some planets could actually survive with a black hole inside them, while in others the black hole might evaporate, Mehr tells Physics World.
“Whether a black hole inside a planet survives or not depends on how massive it is when it first forms,” he says.
This leads to a trade-off between how quickly the black hole can grow and how soon the black hole can evaporate via Hawking radiation – the quantum effect that sees a black hole’s mass radiated away as energy.
The mass of a dark-matter particle remains unknown, but the less massive it is, and the more massive a planet is, then the greater the chance a planet has of capturing dark matter, and the more massive a black hole it can form. If the black hole starts out relatively massive, then the planet is in big trouble, but if it starts out very small then it can evaporate before it becomes dangerous. Of course, if it evaporates, another black hole could replace it in the future.
“Interestingly,” adds Mehr, “there is also a special in-between mass where these two effects balance each other out. In that case, the black hole neither grows nor evaporates – it could remain stable inside the planet for a long time.”
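A back-of-the-envelope way to see this trade-off – using the textbook Hawking formulas and a Bondi-like accretion rate that scales as $M^{2}$, rather than the paper’s detailed treatment – is that an isolated black hole of mass $M$ radiates with power $P = \hbar c^{6}/(15360\,\pi G^{2} M^{2})$ and evaporates on a timescale

\[ t_{\rm evap} \simeq \frac{5120\,\pi\,G^{2} M^{3}}{\hbar c^{4}}, \]

so its mass-loss rate falls as $1/M^{2}$ while its feeding rate rises as $M^{2}$. A heavy seed therefore grows faster than it evaporates and eventually consumes its host; a very light seed evaporates first; and in between lies the balancing mass Mehr describes.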
Keeping planets warm
It’s not the first time that dark matter has been postulated to accumulate inside planets. In 2011 Dan Hooper and Jason Steffen of Fermilab proposed that dark matter could become trapped inside planets and that the energy released through dark-matter particles annihilating could keep a planet outside the habitable zone warm enough for liquid water to exist on its surface.
Mehr and Fetherolf’s new hypothesis “is worth looking into more carefully”, says Hooper.
That said, Hooper cautions that the ability of dark matter to accumulate inside a planet and form a black hole should not be a general expectation for all models of dark matter. Rather, “it seems to me that there could be a small window of dark-matter models where such particles could be captured in stars at a rate that is high enough to lead to black hole formation,” he says.
Currently there remains a large parameter space for the possible properties of dark matter. Experiments and observations continue to chip away at this parameter space, but there remains a very wide range of possibilities. The ability of dark matter to self-annihilate is just one of those properties – not all models of dark matter allow for this.
If dark-matter particles do annihilate at a sufficiently high rate when they come into contact, then it is unlikely that the mass of dark matter inside a planet would ever grow large enough to form a black hole. But if they don’t self-annihilate, or at least not at an appreciable rate, then a black hole formed of dark matter could still keep a planet warm with its Hawking radiation.
Searching for planets with black holes inside
The temperature anomaly that this would create could provide a means of detecting planets with black holes inside them. It would be challenging – the planets that we expect to contain the most dark matter would be near the centre of the galaxy 26,000 light years away, where the dark-matter concentration in the halo is densest.
Even if the James Webb Space Telescope (JWST) could detect anomalous thermal radiation from such a distant planet, Mehr says that it would not necessarily be a smoking gun.
“If JWST were to observe that a planet is hotter than expected, there could be many possible explanations, we would not immediately attribute this to dark matter or a black hole,” says Mehr. “Rather, our point is that if detailed studies reveal temperatures that cannot be explained by ordinary processes, then dark matter could be considered as one possible – though still controversial – explanation.”
Another problem is that black holes cannot be distinguished from planets purely through their gravity. A Jupiter-mass planet has the same gravitational pull as a Jupiter-mass black hole that has just eaten a Jupiter-mass planet. This means that planetary detection methods that rely on gravity, from radial velocity Doppler shift measurements to astrometry and gravitational microlensing events, could not tell a planet and a black hole apart.
The planets in our own Solar System are also unlikely to contain much dark matter, says Mehr. “We assume that the dark matter density primarily depends on the distance from the centre of the galaxy,” he explains.
Where we are, the density of dark matter is too low for the planets to capture much of it, since the dark-matter halo is concentrated in the galactic centre. Therefore, we needn’t worry about Jupiter or Saturn, or even Earth, turning into a black hole.
The clash between dark matter and modified Newtonian dynamics (MOND) can get a little heated at times. On one side is the vast majority of astronomers who vigorously support the concept of dark matter and its foundational place in cosmology’s standard model. On the other side is the minority – a group of rebels convinced that tweaking the laws of gravity rather than introducing a new particle is the answer to explaining the composition of our universe.
Both sides argue passionately and persuasively, pointing out evidence that supports their view while discrediting the other side. Often it seems to come down to a matter of perspective – both sides use the same results as evidence for their cause. For the rest of us, how can we tell who is correct?
As long as we still haven’t identified what dark matter is made of, there will remain some ambiguity, leaving a door ajar for MOND. However, it’s a door that dark-matter researchers hope will be slammed shut in the not-too-distant future.
Crunch time for WIMPs
In part two of this series, where I looked at the latest proposals from dark-matter scientists, we met University College London’s Chamkaur Ghag, who is the spokesperson for Lux-ZEPLIN. This experiment is searching for “weakly interacting massive particles” or WIMPs – the leading dark-matter candidate – down a former gold mine in South Dakota, US. A huge seven-tonne tank of liquid xenon, surrounded by an array of photomultiplier tubes, watches patiently for the flashes of light that may occur when a passing WIMP interacts with a xenon atom.
Running since 2021, the experiment just released the results of its most recent search through 280 days of data, which uncovered no evidence of WIMPs above a mass of 9 GeV/c² (Phys. Rev. Lett. 135 011802). These results help to narrow the range of possible dark-matter theories, as the new limits impose constraints on WIMP parameters that are almost five times more rigorous than the previous best. Another experiment at the INFN Laboratori Nazionali del Gran Sasso in Italy, called XENONnT, is also hoping to spot the elusive WIMPs – in its case by looking for rare nuclear recoil interactions in a liquid xenon target chamber.
Deep underground The XENON Dark Matter Project is hosted by the INFN Gran Sasso National Laboratory in Italy. The latest detector in this programme is the XENONnT (pictured) which uses liquid xenon to search for dark-matter particles. (Courtesy: XENON Collaboration)
Lux-ZEPLIN and XENONnT will cover half the parameter space of masses and energies that WIMPs could in theory have, but Ghag is more excited about a forthcoming, next-generation xenon-based WIMP detector dubbed XLZD that might settle the matter. XLZD brings together both the Lux-ZEPLIN and XENONnT collaborations, to design and build a single, common multi-tonne experiment that will hopefully leave WIMPs with no place to hide. “XLZD will probably be the final experiment of this type,” says Ghag. “It’s designed to be much larger and more sensitive, and is effectively the definitive experiment.”
I think none of us are ever going to fully believe it completely until we’ve found a WIMP and can reproduce it in a lab
Richard Massey
If WIMPs do exist, then this detector will find them, and it could happen on UK shores. Several locations around the world are in the running to host the experiment, including Boulby Mine Underground Laboratory near Whitby Bay on the north-east coast of England. If everything goes to plan, XLZD – which will contain between 40 and 100 tonnes of xenon – will be up and running and providing answers by the 2030s. It will be a huge moment for dark matter, and a nervous one for its researchers.
“I think none of us are ever going to fully believe it completely until we’ve found [a WIMP] and can reproduce it in a lab and show that it’s not just some abstract stuff that we call dark matter, but that it is a particular particle that we can identify,” says astronomer Richard Massey of the University of Durham, UK.
But if WIMPs are in fact a dead-end, then it’s not a complete death-blow for dark matter – there are other dark-matter candidates and other dark-matter experiments. For example, the Forward Search Experiment (FASER) at CERN’s Large Hadron Collider is looking for less massive dark-matter particles such as axions (read more about them in part 2). However, WIMPs have been a mainstay of dark-matter models since the 1980s. If the xenon-based experiments turn up empty-handed it will be a huge blow, and the door will creak open just a little bit more for MOND.
Galactic frontier
MOND’s battleground isn’t in particle detectors – it’s in the outskirts of galaxies and galaxy clusters, and its proof lies in the history of how our universe formed. This is dark matter’s playground too, with the popular models for how galaxies grow being based on a universe in which dark matter forms 85% of all matter. So it’s out in the depths of space where the two models clash.
The current standard model of cosmology describes how the growth of the large-scale structure of the universe, over the past 13.8 billion years of cosmic history since the Big Bang, is influenced by a combination of dark matter and dark energy (responsible for the accelerated expansion of the universe). Essentially, density fluctuations in the cosmic microwave background (CMB) radiation reflect the clumping of dark matter in the very early universe. As the cosmos aged, these clumps thinned out into the cosmic web of matter. This web is a universe-spanning network of dark-matter filaments, where all the matter lies, between which are voids that are comparatively less densely packed with matter than the filaments. Galaxies can form inside “dark matter haloes”, and at the densest points in the dark-matter filaments, galaxy clusters coalesce.
Simulations in this paradigm – known as lambda cold dark matter (ΛCDM) – suggest that galaxy and galaxy-cluster formation should be a slow process, with small galaxies forming first and gradually merging over billions of years to build up into the more massive galaxies that we see in the universe today. And it works – kind of. Recently, the James Webb Space Telescope (JWST) peered back in time to between just 300 and 400 million years after the Big Bang and found the universe to be populated by tiny galaxies perhaps just a thousand or so light-years across (ApJ 970 31). This is as expected, and over time they would grow and merge into larger galaxies.
1 Step back in time
a (Courtesy: NASA/ESA/CSA/STScI/ Brant Robertson, UC Santa Cruz/ Ben Johnson, CfA/ Sandro Tacchella, University of Cambridge/ Phill Cargile, CfA)
b (Courtesy: NASA/ESA/CSA/ Joseph Olmsted, STScI/ S Carniani, Scuola Normale Superiore/ JADES Collaboration)
Data from the James Webb Space Telescope (JWST) form the basis of the JWST Advanced Deep Extragalactic Survey (JADES). (a) This infrared image from the JWST’s NIRCam highlights galaxy JADES-GS-z14-0. (b) The JWST’s NIRSpec (Near-Infrared Spectrograph) obtained this spectrum of JADES-GS-z14-0. A galaxy’s redshift can be determined from the location of a critical wavelength known as the Lyman-alpha break. For JADES-GS-z14-0 the redshift value is 14.32 (+0.08/–0.20), making it the second most distant galaxy known at less than 300 million years after the Big Bang. The current record holder, as of August 2025, is MoM-z14, which has a redshift of 14.4 (+0.02/–0.02), placing it less than 280 million years after the Big Bang (arXiv:2505.11263). Both galaxies belong to an era referred to as the “cosmic dawn”, following the epoch of reionization, when the universe became transparent to light. JADES-GS-z14-0 is particularly interesting to researchers not just because of its distance, but also because it is very bright. Indeed, it is much more intrinsically luminous and massive than expected for a galaxy that formed so soon after the Big Bang, raising more questions on the evolution of stars and galaxies in the early universe.
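For readers who want to check the numbers: the Lyman-alpha break sits at a rest-frame wavelength of 121.6 nm, and cosmological redshift stretches it to

\[ \lambda_{\rm obs} = (1+z)\,\lambda_{\rm rest} \approx 15.32 \times 121.6\ {\rm nm} \approx 1.86\ \mu{\rm m} \]

for $z = 14.32$ – squarely within the near-infrared range that NIRCam and NIRSpec were designed to cover.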
Yet the deeper we push into the universe, the more we observe challenges to the ΛCDM model, which ultimately threatens the very existence of dark matter. For example, those early galaxies that the JWST has observed, while being quite small, are also surprisingly bright – more so than ΛCDM predicts. This has been attributed to an initial mass function (IMF – the property that determines the average mass of stars that form) that skews more towards higher-mass, and therefore more luminous, stars than today. It does sound reasonable, except that astronomers still don’t understand why the IMF is what it is today (favouring the smallest stars; massive stars are rare), never mind what it might have been over 13 billion years ago.
Not everyone is convinced, and this is compounded by slightly later galaxies, seen around a billion years after the Big Bang, which continue the trend of being more luminous and more massive than expected. Indeed, some of these galaxies sport truly enormous black holes hundreds of times more massive than the black hole at the heart of our Milky Way. Just a couple of billion years later, substantial galaxy clusters are already present – earlier than one would have surmised with ΛCDM.
The fall of ΛCDM?
Astrophysicist and MOND advocate Pavel Kroupa, from the University of Bonn in Germany, highlights giant elliptical galaxies in the early universe as an example of what he sees as a divergence from ΛCDM.
“We know from observations that the massive elliptical galaxies formed on shorter timescales than the less massive ellipticals,” he explains. This phenomenon has been referred to as “downsizing”, and Kroupa declares it is “a big problem for ΛCDM” because the model says that “the big galaxies take longer to form, but what we see is exactly the opposite”.
To quantify this problem, a 2020 study (MNRAS 498 5581) by Australian astronomer Sabine Bellstedt and colleagues showed that half the mass in present-day elliptical galaxies was in place 11 billion years ago, compared with other galaxy types that only accrued half their mass on average about 6 billion years ago. The smallest galaxies only accrued that mass as recently as 4 billion years ago, in apparent contravention of ΛCDM.
Observations (ApJ 905 40) of a giant elliptical galaxy catalogued as C1-23152, which we see as it existed 12 billion years ago, show that it formed 200 billion solar masses’ worth of stars in just 450 million years – a huge firestorm of star formation that ΛCDM simulations just can’t explain. Perhaps it is an outlier – we’ve only sampled a few parts of the sky, not conducted a comprehensive census yet. But as astronomers probe these cosmic depths more extensively, such explanations begin to wear thin.
Kroupa argues that by replacing dark matter with MOND, such giant early elliptical galaxies suddenly make sense. Working with Robin Eappen, who is a PhD student at Charles University in Prague, they modelled a giant gas cloud in the very early universe collapsing under gravity according to MOND, rather than if there were dark matter present.
“It is just stunning that the time [of formation of such a large elliptical] comes out exactly right,” says Kroupa. “The more massive cloud collapses faster on exactly the correct timescale, compared to the less massive cloud that collapses slower. So when we look at an elliptical galaxy, we know that thing formed from MOND and nothing else.”
Elliptical galaxies are not the only thing with a size problem. In 2021 Alexia Lopez, a PhD student at the University of Central Lancashire, UK, discovered a “Giant Arc” of galaxies spanning 3.3 billion light-years, some 9.2 billion light-years away. And in 2023 Lopez spotted another gigantic structure, a “Big Ring” (shaped more like a coil) of galaxies 1.3 billion light-years in diameter, but with a circumference of about 4 billion light-years. The opposite of these giant structures are the massive under-dense voids that take up space between the filaments of the cosmic web. The KBC Void (sometimes called the “Local Hole”), for example, is about two billion light-years across, and the Milky Way, among a host of other galaxies, sits inside it. The trouble is, simulations in ΛCDM, with dark matter at the heart of it, cannot replicate structures and voids this big.
“We live in this huge under-density; we’re not at the centre of it but we are within it and such an under-density is completely impossible in ΛCDM,” says Kroupa, before declaring, “Honestly, it’s not worthwhile to talk about the ΛCDM model anymore.”
A bohemian model
Such fighting talk is dismissed by dark-matter astronomers because although there are obviously deficiencies in the ΛCDM model, it does such a good job of explaining so many other things. If we’re to kill ΛCDM because it cannot explain a few large ellipticals or some overly large galaxy groups or voids, then there needs to be a new model that can explain not only these anomalies, but also everything else that ΛCDM does explain.
“Ultimately we need to explain all the observations, and some of those MOND does better and some of those ΛCDM does better, so it’s how you weigh those different baskets,” says Stacy McGaugh, a MOND researcher from Case Western Reserve University in the US.
As it happens, Kroupa and his Bonn colleague Jan Pflamm-Altenburg are working on a new model that they think has what it takes to overthrow dark matter and the broader ΛCDM paradigm. They call it the Bohemian model (the name has a double meaning – Kroupa is originally from Czechia); it incorporates MOND as its main pillar, and Kroupa describes the results they are getting from their simulations in this paradigm as “stunning” (A&A 698 A167).
A lot of experts at Ivy League universities will say it’s all completely impossible. But I know that part of the community is just itching to have a completely different model
Pavel Kroupa
But Kroupa admits that not everybody will be happy to see it published. “If it’s published, a lot of experts at Ivy League universities will say it’s all completely impossible,” he says. “But I know for a fact that there is part of the community, the ‘bright part’ as I call them, which is just itching to have a completely different model.”
Kroupa is staying tight-lipped on the precise details of his new model, but says that according to simulations the puzzle of large-scale structure forming earlier than expected, and growing larger faster than expected, is answered by the Bohemian model. “These structures [such as the Giant Arc and the KBC Void] are so radical that they are not possible in the ΛCDM model,” he says. “However, they pop right out of this Bohemian model.”
Binary battle
Whether you believe Kroupa’s promises of a better model or whether you see it all as bluster, the fact remains that a dark-matter-dominated universe still has some problems. Maybe they’re not serious, and all it will take is a few tweaks to make those problems go away. But maybe they’ll persist, and require new physics of some kind, and it’s this possibility that continues to leave the door open for MOND. For the rest of us, we’re still grasping for a definitive statement one way or another.
For MOND, perhaps that definitive statement could still turn out to be binary stars, as discussed in the first article in this series. Researchers have been particularly interested in so-called “wide binaries” – pairs of stars that are more than 500 AU apart. Thanks to the vast distance between them, the gravitational impact of each star on the other is weak, making it a perfect test for MOND. Indranil Banik, of the University of St Andrews, UK, controversially concluded that there was no evidence for MOND operating on the smaller scales of binary-star systems. However, other researchers such as Kyu-Hyun Chae of Sejong University in South Korea argue that they have found evidence for MOND in binary systems, and have hit out at Banik’s findings.
Indeed, after the first part of this series was published, Chae reached out to me, arguing that Banik had analysed the data incorrectly. Chae specifically points out that the fraction of wide binaries with an extra unseen close stellar companion (a factor designated fmulti) to one or both of the binary stars must be calibrated for when performing the MOND calculations. Often when two stars are extremely close together, their angular separation is so small that we can’t resolve them and don’t realize that they are binary, he explains. So we might mistake a triple system, with two stars so close together that we can’t distinguish them and a third star on a wider circumbinary orbit, for just a wide binary.
“I initially believed Banik’s claim, but because what’s at stake is too big and I started feeling suspicious, I chose to do my own investigation,” says Chae (ApJ 952 128). “I came to realize the necessity of calibrating fmulti due to the intrinsic degeneracy between mass and gravity (one cannot simultaneously determine the gravity boost factor and the amount of hidden mass).”
The probability of a wide binary having an unseen extra stellar companion is the same as for shorter binaries (those that we can resolve). But for shorter binaries the gravitational acceleration is high enough that they obey regular Newtonian gravity – MOND only comes into the picture at wider separations. Therefore, the mass uncertainty in the study of wide binaries in a MOND regime can be calibrated for using those shorter-period binaries. Chae argues that Banik did not do this. “I’m absolutely confident that if the Banik et al. analysis is properly carried out, it will reveal MOND’s low-acceleration gravitational anomaly to some degree.”
So perhaps there is hope for MOND in binary systems. Given that dark matter shouldn’t be present on the scale of binary systems, any anomalous gravitational effect could only be explained by MOND. A detection would be pretty definitive, if only everyone could agree upon it.
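A simple Newtonian estimate shows why such systems are so clean a test (ignoring the external-field effect that full MOND analyses must also account for): a binary of total mass $M_{\rm tot}$ and separation $s$ has an internal acceleration of

\[ a_{\rm int} \simeq \frac{G\,M_{\rm tot}}{s^{2}}, \]

and MOND predicts departures from Newtonian dynamics only once accelerations fall towards $a_{0} \approx 1.2 \times 10^{-10}\ {\rm m\,s^{-2}}$. For solar-mass pairs that happens at separations of several thousand astronomical units, which is why these statistical analyses lean on the very widest systems.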
Bullet time and mass This spectacular new image of the Bullet Cluster was created using NASA’s James Webb Space Telescope and Chandra X-ray Observatory. The new data allow for an improved measurement of the thousands of galaxies in the Bullet Cluster. This means astronomers can more accurately “weigh” both the visible and invisible mass in these galaxy clusters. Astronomers also now have an improved idea of how that mass is distributed. (X-ray: NASA/CXC/SAO; near-infrared: NASA/ESA/CSA/STScI; processing: NASA/STScI/ J DePasquale)
But let’s not kid ourselves – MOND still has a lot of catching up to do on dark matter, which has become a multi-billion-dollar industry with thousands of researchers working on it and space missions such as the European Space Agency’s Euclid space telescope. Dark matter is still in pole position, and its own definitive answers might not be too far away.
“Finding dark matter is definitely not too much to hope for, and that’s why I’m doing it,” says Richard Massey. He highlights not only Euclid, but also the work of the James Webb Space Telescope in imaging gravitational lensing on smaller scales and the Nancy Grace Roman Space Telescope, which will launch later this decade on a mission to study weak gravitational lensing – the way in which small clumps of matter, such as individual dark matter haloes around galaxies, subtly warp space.
“These three particular telescopes give us the opportunity over the next 10 years to catch dark matter doing something, and to be able to observe it when it does,” says Massey. That “something” could be dark-matter particles interacting, perhaps in a cluster merger in deep space, or in a xenon tank here on Earth.
“That’s why I work on dark matter rather than anything else,” concludes Massey. “Because I am optimistic.”
In the first instalment of this three-part series, Keith Cooper explored the struggles and successes of modified gravity in explaining phenomena at varying galactic scales
In the second part of the series, Keith Cooper explored competing theories of dark matter