What’s in Trump’s Tax Bill? Medicaid Reform, Food Stamp Cuts and More
© Paul Ratje for The New York Times
Analysis comes as energy agency predicts systems will need as much energy by end of decade as Japan uses today
Artificial intelligence systems could account for nearly half of datacentre power consumption by the end of this year, analysis has revealed.
The estimates by Alex de Vries-Gao, the founder of the Digiconomist tech sustainability website, came as the International Energy Agency forecast that AI would require almost as much energy by the end of this decade as Japan uses today.
© Photograph: Sameer Al-Doumy/AFP/Getty Images
Exclusive: expert raises concerns over quantities allowed to be discharged from nuclear fuel factory near Preston
The Environment Agency has allowed a firm to dump three tonnes of uranium into one of England’s most protected sites over the past nine years, it can be revealed, and experts are sounding the alarm over the potential environmental impact of the discharges.
Documents obtained by the Guardian and the Ends Report through freedom of information requests show that a nuclear fuel factory near Preston discharged large quantities of uranium – legally, under its environmental permit conditions – into the River Ribble between 2015 and 2024. The discharges peaked in 2015, when 703 kg of uranium entered the river, according to the documents.
© Photograph: Paul Melling/Alamy
Australia should also make moves to address the climate impact of its fossil fuel production and exports
Watching from the western Pacific, we saw many describe Australia’s recent election as a decisive moment for climate and energy policy. If that was the case, the people of Australia have spoken loud and clear.
As many of us in the Pacific had hoped, most Australians wanted to throw off the shackles of the last decade’s “climate wars” and usher in a new era of responsible climate and energy policy – one that harnesses Australia’s limitless potential as a renewable energy superpower and helps lead the Pacific region and the world to a safer and more prosperous climate future.
© Photograph: Matt Rand/AFP/Getty Images
Developer shrinks five-year investment plans by £3bn blaming policy and planning delays
One of the UK’s biggest energy developers will cut its planned spending on new renewables projects in a blow to the government’s 2030 clean power targets.
SSE warned that it would be unlikely to meet its own renewable energy goals for the end of the decade after shrinking its five-year spending plans by £3bn to £17.5bn.
© Photograph: Dave Donaldson/Alamy
China has continued an uptick in launch activity with a Long March 7A mission to geosynchronous orbit and sea launch of a Ceres-1 solid rocket.
The post China launches classified comms satellite, conducts commercial sea launch appeared first on SpaceNews.
Physicists have set a new upper bound on the interaction strength of dark matter by simulating the collision of two clouds of interstellar plasma. The result, from researchers at Ruhr University Bochum in Germany, CINECA in Italy and the Instituto Superior Tecnico in Portugal, could force a rethink on theories describing this mysterious substance, which is thought to make up more than 85% of the mass in the universe.
Since dark matter has only ever been observed through its effect on gravity, we know very little about what it’s made of. Indeed, various theories predict that dark matter particles could have masses ranging from around 10⁻²² eV to around 10¹⁹ GeV – a staggering 50 orders of magnitude.
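That 50-orders-of-magnitude figure is easy to verify by expressing both quoted mass limits in the same unit; a quick sketch (my arithmetic, not the researchers’):

```python
import math

# Candidate dark-matter particle masses quoted in the text,
# both converted to eV so they can be compared directly
m_low_eV = 1e-22          # ultralight end of the theoretical range
m_high_eV = 1e19 * 1e9    # 1e19 GeV expressed in eV

span = math.log10(m_high_eV / m_low_eV)
print(f"mass range spans {span:.0f} orders of magnitude")  # prints 50
```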
Another major unknown about dark matter is whether it interacts via forces other than gravity, either with itself or with other particles. Some physicists have hypothesized that dark matter particles might possess positive and negative “dark charges” that interact with each other via “dark electromagnetic forces”. According to this supposition, dark matter could behave like a cold plasma of self-interacting particles.
In the new study, the team searched for evidence of dark interactions in a cluster of galaxies located several billion light years from Earth. This galactic grouping is known as the Bullet Cluster, and it contains a subcluster that is moving away from the main body after passing through it at high speed.
Since the most basic model of dark-matter interactions relies on the same equations as ordinary electromagnetism, the researchers chose to simulate these interactions in the Bullet Cluster system using the same computational tools they would use to describe electromagnetic interactions in a standard plasma. They then compared their results with real observations of the Bullet Cluster.
The new work builds on a previous study in which members of the same team simulated the collision of two clouds of standard plasma passing through one another. This study found that as the clouds merged, electromagnetic instabilities developed. These instabilities had the effect of redistributing energy from the opposing flows of the clouds, slowing them down while also broadening the temperature range within them.
The latest study showed that, as expected, the plasma components of the subcluster and main body slowed down thanks to ordinary electromagnetic interactions. That, however, appeared to be all that happened, as the data contained no sign of additional dark interactions. While the team’s finding doesn’t rule out dark electromagnetic interactions entirely, team member Kevin Schoeffler explains that it does mean that these interactions, which are characterized by a parameter known as α_D, must be far weaker than their ordinary-matter counterpart. “We can thus calculate an upper limit for the strength of this interaction,” he says.
This limit, which the team calculated as α_D < 4 × 10⁻²⁵ for a dark matter particle with a mass of 1 TeV, rules out many of the simplest dark matter theories and will require them to be rethought, Schoeffler says. “The calculations were made possible thanks to detailed discussions with scientists working outside of our speciality of physics, namely plasma physicists,” he tells Physics World. “Throughout this work, we had to overcome the challenge of connecting with very different fields and interacting with communities that speak an entirely different language to ours.”
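To put that limit in context, it can be set against the ordinary fine-structure constant α ≈ 1/137 (a comparison of my own, using the figures quoted above):

```python
import math

alpha_em = 1 / 137.036   # ordinary electromagnetic fine-structure constant
alpha_D_max = 4e-25      # reported upper limit on the dark coupling (1 TeV particle)

orders = math.log10(alpha_em / alpha_D_max)
print(f"dark coupling is at least {orders:.0f} orders of magnitude weaker")  # ~22
```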
As for future work, the physicists plan to compare the results of their simulations with other astronomical observations, with the aim of constraining the upper limit of the dark electromagnetic interaction even further. More advanced calculations, such as those that include finer details of the cloud models, would also help refine the limit. “These more realistic setups would include other plasma-like electromagnetic scenarios and ‘slowdown’ mechanisms, leading to potentially stronger limits,” Schoeffler says.
The present study is detailed in Physical Review D.
The post Plasma physics sets upper limit on the strength of ‘dark electromagnetism’ appeared first on Physics World.
Making weather forecasts more accurate is at the heart of what we do at the ECMWF, working in close collaboration with our member states and their national meteorological services (see box below). That means enhanced forecasting for the weeks and months ahead as well as seasonal and annual predictions. We also have a remit to monitor the atmosphere and the environment – globally and regionally – within the context of a changing climate.
Our task is to get the best representation, in a 3D sense, of the current state of the atmosphere in terms of key variables such as wind, temperature, humidity and cloud cover. We do this via a process of reanalysis and data assimilation: combining the previous short-range weather forecast, and its component data, with the latest atmospheric observations – from satellites, ground stations, radars, weather balloons and aircraft. Unsurprisingly, using all this observational data is a huge challenge, with the exploitation of satellite measurements a significant driver of improved forecasting over the past decade.
Consider the EarthCARE satellite that was launched in May 2024 by the European Space Agency (ESA) and is helping ECMWF to improve its modelling of clouds, aerosols and precipitation. EarthCARE has a unique combination of scientific instruments – a cloud-profiling radar, an atmospheric lidar, a multispectral imager and a broadband radiometer – to infer the properties of clouds and how they interact with solar radiation as well as thermal-infrared radiation emitted by different layers of the atmosphere.
The ECMWF team is learning how to interpret and exploit the EarthCARE data to directly initialize our models. Put simply, mathematical models that better represent clouds will, in turn, yield more accurate forecasts. Indirectly, EarthCARE is also revealing a clearer picture of the fundamental physics governing cloud formation, distribution and behaviour. This is just one example of numerous developments taking advantage of new satellite data. We are looking forward, in particular, to fully exploiting next-generation satellite programmes from the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) – including the EPS-SG polar-orbiting system and the Meteosat Third Generation geostationary satellite for continuous monitoring over Europe, Africa and the Indian Ocean.
We talk of “a day, a decade” improvement in weather forecasting, such that a five-day forecast now is as good as a three-day forecast 20 years ago. A richer and broader mix of observational data underpins that improvement, with diverse data streams feeding into bigger supercomputers that can run higher-resolution models and better algorithms. Equally important is ECMWF’s team of multidisciplinary scientists, whose understanding of the atmosphere and climate helps to optimize our models and data assimilation methods. A case study in this regard is Destination Earth, an ambitious European Union initiative to create a series of “digital twins” – interactive computer simulations – of our planet by 2030. Working with ESA and EUMETSAT, the ECMWF is building the software and data environment for Destination Earth as well as developing the first two digital twins.
Our Digital Twin on Weather-Induced and Geophysical Extremes will assess and predict environmental extremes to support risk assessment and management. Meanwhile, in collaboration with others, the Digital Twin on Climate Change Adaptation complements and extends existing capabilities for the analysis and testing of “what if” scenarios – supporting sustainable development and climate adaptation and mitigation policy-making over multidecadal timescales.
Progress in machine learning and AI has been dramatic over the past couple of years
Both digital twins integrate sea, atmosphere, land, hydrology and sea ice – and the deep connections between them – at a resolution that is currently impossible to reach otherwise. Right now, for example, the ECMWF’s operational forecasts cover the whole globe in a 9 km grid – effectively a localized forecast every 9 km. With Destination Earth, we’re experimenting with 4 km, 2 km and even 1 km grids.
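A crude back-of-envelope estimate (assuming a uniform grid over the sphere, which simplifies the reduced grids ECMWF actually uses) shows how quickly the number of grid columns grows as the spacing shrinks:

```python
import math

R_EARTH_KM = 6371.0                          # mean Earth radius
surface_km2 = 4 * math.pi * R_EARTH_KM**2    # ~5.1e8 km^2

for spacing_km in (9, 4, 2, 1):
    cells = surface_km2 / spacing_km**2      # rough count of grid columns
    print(f"{spacing_km} km grid: ~{cells:.2e} columns")
```

Halving the spacing quadruples the column count (and higher resolution also forces shorter model time steps), which is why kilometre-scale twins need so much more computing power.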
The new strategy prioritizes growing exploitation of data-driven methods anchored on established physics-based modelling – rapidly scaling up our previous deployment of machine learning and AI. There are also a variety of hybrid approaches combining data-driven and physics-based modelling.
Data assimilation and observations will help us to directly improve as well as initialize our physics-based forecasting models – for example, by optimizing uncertain parameters or learning correction terms. We are also investigating the potential of applying machine-learning techniques directly to observations – in effect, taking another step beyond the current state of the art to produce forecasts without the need for reanalysis or data assimilation.
Progress in machine learning and AI has been dramatic over the past couple of years – so much so that we launched our Artificial Intelligence Forecasting System (AIFS) back in February. Trained on many years of reanalysis and using traditional data assimilation, AIFS is already an important addition to our suite of forecasts, though still working off the coat-tails of our physics-based predictive models. Another notable innovation is our Probability of Fire machine-learning model, which incorporates multiple data sources beyond weather prediction to identify regional and localized hot-spots at risk of ignition. Those additional parameters – among them human presence, lightning activity as well as vegetation abundance and its dryness – help to pinpoint areas of targeted fire risk, improving the model’s predictive skill by up to 30%.
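As a purely illustrative sketch of how such predictors might be combined (the weights and logistic form here are invented for illustration; this is not ECMWF’s actual Probability of Fire model):

```python
import math

def fire_risk_score(dryness, lightning, human_presence, vegetation):
    """Toy logistic score over the predictor types mentioned in the text.
    All inputs are normalized to 0-1; the weights are made up for illustration."""
    z = (2.0 * dryness + 1.5 * lightning
         + 1.0 * human_presence + 1.2 * vegetation - 3.0)
    return 1.0 / (1.0 + math.exp(-z))   # squash to a 0-1 risk probability

# A dry, lightning-prone cell scores higher than a wet, quiet one
print(fire_risk_score(0.9, 0.8, 0.5, 0.7))
print(fire_risk_score(0.1, 0.0, 0.5, 0.7))
```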
Every day, the ECMWF addresses cutting-edge scientific problems – as challenging as anything you’ll encounter in an academic setting – by applying its expertise in atmospheric physics, mathematical modelling, environmental science, big data and other disciplines. What’s especially motivating, however, is that the ECMWF is a mission-driven endeavour with a straight line from our research outcomes to wider societal and economic benefits.
The European Centre for Medium-Range Weather Forecasts (ECMWF) is an independent intergovernmental organization supported by 35 states – 23 member states and 12 co-operating states. Established in 1975, the centre employs around 500 staff from more than 30 countries at its headquarters in Reading, UK, and sites in Bologna, Italy, and Bonn, Germany. As a research institute and 24/7 operational service, the ECMWF produces global numerical weather predictions four times per day and other data for its member/cooperating states and the broader meteorological community.
The ECMWF processes data from around 90 satellite instruments as part of its daily activities (yielding 60 million quality-controlled observations each day for use in its Integrated Forecasting System). The centre is a key player in Copernicus – the Earth observation component of the EU’s space programme – by contributing information on climate change for the Copernicus Climate Change Service; atmospheric composition to the Copernicus Atmosphere Monitoring Service; as well as flooding and fire danger for the Copernicus Emergency Management Service. This year, the ECMWF is celebrating its 50th anniversary and has a series of celebratory events scheduled in Bologna (15–19 September) and Reading (1–5 December).
The post European centre celebrates 50 years at the forefront of weather forecasting appeared first on Physics World.
Physicists have observed axion quasiparticles for the first time in a two-dimensional quantum material. As well as having applications in materials science, the discovery could aid the search for fundamental axions, which are a promising (but so far hypothetical) candidate for the unseen dark matter pervading our universe.
Theorists first proposed axions in the 1970s as a way of solving a puzzle involving the strong nuclear force and charge-parity (CP) symmetry. In systems that obey this symmetry, the laws of physics are the same for a particle and the spatial mirror image of its oppositely charged antiparticle. Weak interactions are known to violate CP symmetry, and the theory of quantum chromodynamics (QCD) allows strong interactions to do so, too. However, no-one has ever seen evidence of this happening, and the so-called “strong CP problem” remains unresolved.
More recently, the axion has attracted attention as a potential constituent of dark matter – the mysterious substance that appears to make up more than 85% of matter in the universe. Axions are an attractive dark matter candidate because theory predicts that the Big Bang should have generated them in large numbers, and while they do have mass, they are much less massive than electrons and carry no charge. This combination means that axions interact only very weakly with matter and electromagnetic radiation – exactly the behaviour we expect to see from dark matter.
Despite many searches, though, axions have never been detected directly. Now, however, a team of physicists led by Jianxiang Qiu of Harvard University has proposed a new detection strategy based on quasiparticles that are axions’ condensed-matter analogue. According to Qiu and colleagues, these quasiparticle axions, as they are known, could serve as axion “simulators”, and might offer a route to detecting dark matter in quantum materials.
To detect axion quasiparticles, the Harvard team constructed gated electronic devices made from several two-dimensional layers of manganese bismuth telluride (MnBi₂Te₄). This material is a rare example of a topological antiferromagnet – that is, a material that is insulating in its bulk while conducting electricity on its surface, and that has magnetic moments that point in opposite directions. These properties allow quasiparticles known as magnons (collective oscillations of spin magnetic moments) to appear in and travel through the MnBi₂Te₄. Two types of magnon mode are possible: one in which the spins oscillate in sync; and another in which they are out of phase.
Qiu and colleagues applied a static magnetic field across the plane of their MnBi₂Te₄ sheets and bombarded the devices with sub-picosecond light pulses from a laser. This technique, known as ultrafast pump-probe spectroscopy, allowed them to observe the 44 GHz coherent oscillation of the so-called condensed-matter field. This field is the CP-violating term in QCD, and it is proportional to a material’s magnetoelectric coupling constant. “This is uniquely enabled by the out-of-phase magnon in this topological material,” explains Qiu. “Such coherent oscillations are the smoking-gun evidence for the axion quasiparticle and it is the combination of topology and magnetism in MnBi₂Te₄ that gives rise to it.”
Now that they have detected axion quasiparticles, Qiu and colleagues say their next step will be to do experiments that involve hybridizing them with particles such as photons. Such experiments would create a new type of “axion-polariton” that would couple to a magnetic field in a unique way – something that could be useful for applications in ultrafast antiferromagnetic spintronics, in which spin-polarized currents can be controlled with an electric field.
The axion quasiparticle could also be used to build an axion dark matter detector. According to the team’s estimates, the detection frequency for the quasiparticle is in the milli-electronvolt (meV) range. While several theories for the axion predict that it could have a mass in this range, most existing laboratory detectors and astrophysical observations search for masses outside this window.
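As a consistency check of my own, converting the 44 GHz oscillation reported above into an energy via E = hf lands squarely in that milli-electronvolt window:

```python
H_EV_S = 4.135667696e-15   # Planck constant in eV*s (CODATA value)

f_hz = 44e9                # observed axion-quasiparticle oscillation frequency
energy_meV = H_EV_S * f_hz * 1e3
print(f"{energy_meV:.2f} meV")   # ~0.18 meV, i.e. in the milli-eV range
```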
“The main technical barrier to building such a detector would be growing high-quality, large crystals of MnBi₂Te₄ to maximize sensitivity,” Qiu tells Physics World. “In contrast to other high-energy experiments, such a detector would not require expensive accelerators or giant magnets, but it will require extensive materials engineering.”
The research is described in Nature.
The post Axion quasiparticle appears in a topological antiferromagnet appeared first on Physics World.
Researchers at Linköping University in Sweden have developed a new fluid electrode and used it to make a soft, malleable battery that can recharge and discharge over 500 cycles while maintaining its high performance. The device, which continues to function even when stretched to twice its length, might be used in next-generation wearable electronics.
Futuristic wearables such as e-skin patches, e-textiles and even internal e-implants on the organs or nerves will need to conform far more closely to the contours of the human body than today’s devices can. To fulfil this requirement of being soft and stretchable as well as flexible, such devices will need to be made from mechanically pliant components powered by soft, supple batteries. Today’s batteries, however, are mostly rigid. They also tend to be bulky, because long-term operation and power-hungry functions such as wireless data transfer, continuous sensing and complex processing demand plenty of stored energy.
To overcome these barriers, researchers led by the Linköping chemist Aiman Rahmanudin decided to rethink the very concept of battery electrode design. Instead of engineering softness and stretchability into a solid electrode, as was the case in most previous efforts, they made the electrode out of a fluid. “Bulky batteries compromise the mechanical compliance of wearable devices, but since fluids can be easily shaped into any configuration, this limitation is removed, opening up new design possibilities for next-generation wearables,” Rahmanudin says.
Designing a stretchable battery requires a holistic approach, he adds, as all the device’s components need to be soft and stretchy. For example, they used a modified version of the wood-based biopolymer lignin as the cathode and the conjugated polymer poly(1-amino-5-chloroanthraquinone) (PACA) as the anode. They made these electrodes fluid by dispersing them separately, with conductive carbon fillers, in an aqueous electrolyte medium consisting of 0.1 M HClO₄.
To integrate these electrodes into a complete cell, they had to design a stretchable current collector and an ion-selective membrane to prevent the cathodic and anodic fluids from crossing over. They also encapsulated the fluids in a robust, low-permeability elastomer to prevent them from drying up.
Previous flexible, high-performance electrode work by the Linköping team focused on engineering the mechanical properties of solid battery electrodes by varying their Young’s modulus. “For example, think of a rubber composite that can be stretched and bent,” explains Rahmanudin. “The thicker the rubber, however, the higher the force required to stretch it, which affects mechanical compliancy.
“Learning from our past experience and work on electrofluids (which are conductive particles dispersed in a liquid medium employed as stretchable conductors), we figured that mixing redox particles with conductive particles and suspending them in an electrolyte could potentially work as battery electrodes. And we found that it did.”
Rahmanudin tells Physics World that fluid-based electrodes could lead to entirely new battery designs, including batteries that could be moulded into textiles, embedded in skin-worn patches or integrated into soft robotics.
After reporting their work in Science Advances, the researchers are now working on increasing the voltage output of their battery, which currently stands at 0.9 V. “We are also looking into using Earth-abundant and sustainable materials like zinc and manganese oxide for future versions of our device and aim at replacing the acidic electrolyte we used with a safer pH neutral and biocompatible equivalent,” Rahmanudin says.
Another exciting direction, he adds, will be to exploit the fluid nature of such materials to build batteries with more complex three-dimensional shapes, such as spirals or lattices, that are tailored for specific applications. “Since the electrodes can be poured, moulded or reconfigured, we envisage a lot of creative potential here,” Rahmanudin says.
The post Fluid electrodes make soft, stretchable batteries appeared first on Physics World.
Water molecules on the surface of an electrode flip just before they give up electrons to form oxygen – a feat of nanoscale gymnastics that explains why the reaction takes more energy than it theoretically should. After observing this flipping in individual water molecules for the first time, scientists at Northwestern University in the US say that the next step is to find ways of controlling it. Doing so could improve the efficiency of the reaction, making it easier to produce both oxygen and hydrogen fuel from water.
The water splitting process takes place in an electrochemical cell containing water and a metallic electrode. When a voltage is applied to the electrode, the water splits into oxygen and hydrogen via two separate half-reactions.
The problem is that the half-reaction that produces oxygen, known as the oxygen evolution reaction (OER), is difficult and inefficient and takes more energy than predicted by theory. “It should require 1.23 V,” says Franz Geiger, the Northwestern physical chemist who led the new study, “but in reality, it requires more like 1.5 or 1.8 V.” This extra energy cost is one of the reasons why water splitting has not been implemented on a large scale, he explains.
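A back-of-envelope calculation (my arithmetic, using the voltages Geiger quotes) shows how much energy that overpotential wastes per mole of hydrogen produced:

```python
F = 96485.0        # Faraday constant, C per mol of electrons
E_IDEAL = 1.23     # thermodynamic water-splitting voltage, V
N_ELECTRONS = 2    # electrons transferred per H2 molecule produced

for e_real in (1.5, 1.8):
    extra_kj = N_ELECTRONS * F * (e_real - E_IDEAL) / 1000
    print(f"{e_real} V: ~{extra_kj:.0f} kJ wasted per mol of H2")
```

At 1.5 V the excess is roughly 52 kJ per mole of hydrogen, and at 1.8 V about 110 kJ, which is why closing even part of this gap matters for large-scale electrolysis.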
In the new work, Geiger and colleagues wanted to test whether the orientation of the water’s oxygen atoms affects the kinetics of the OER. To do this, they directed an 80-femtosecond pulse of infrared (1034 nm) laser light onto the surface of the electrode, which was in this case made of nickel. They then measured the intensity of the reflected light at half the incident wavelength.
This method, which is known as second harmonic and vibrational sum-frequency generation spectroscopy, revealed that the water molecules’ alignment on the surface of the electrode depended on the applied voltage. By analysing the amplitude and phase of the signal photons as this voltage was cycled, the researchers were able to pin down how the water molecules arranged themselves.
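The “half the incident wavelength” detection scheme can be sanity-checked against the numbers given (my arithmetic): a 1034 nm fundamental yields a 517 nm second harmonic carrying exactly twice the photon energy.

```python
lambda_in_nm = 1034.0             # incident laser wavelength from the pump pulse
lambda_shg_nm = lambda_in_nm / 2  # second harmonic: half the wavelength
e_in_eV = 1239.84 / lambda_in_nm    # photon energy via hc ~ 1239.84 eV*nm
e_shg_eV = 1239.84 / lambda_shg_nm  # twice the incident photon energy
print(lambda_shg_nm, e_in_eV, e_shg_eV)
```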
They found that before the voltage was applied, the water molecules were randomly oriented. At a specific applied voltage, however, they began to reorient. “We also detected water dipole flipping just before cleavage and electron transfer,” Geiger adds. “This allowed us to distinguish flipping from subsequent reaction steps.”
The researchers’ explanation for this flipping is that at high pH levels, the surface of the electrode is negatively charged due to the presence of nickel hydroxide groups that have lost their protons. The water molecules therefore align with their most positively charged ends facing the electrode. However, this means that the ends containing the electrons needed for the OER (which reside in the oxygen atoms) are pointing away from the electrode. “We hypothesized that water molecules must flip to align their oxygen atoms with electrochemically active nickel oxo species at high applied potential,” Geiger says.
This idea had not been explored until now, he says, because water absorbs strongly in the infrared range, making it appear opaque at the relevant frequencies. The electrodes typically employed are also too thick for infrared light to pass through. “We overcame these challenges by making the electrode thin enough for near-infrared transmission and by using wavelengths where water’s absorbance is low (the so-called ‘water window’),” he says.
Other challenges for the team included designing a spectrometer that could measure the second harmonic generation amplitude and phase and developing an optical model to extract the number of net-aligned water molecules and their flipping energy. “The full process – from concept to publication – took three years,” Geiger tells Physics World.
The team’s findings, which are detailed in Science Advances, suggest that controlling the orientation of water at the interface with the electrode could improve OER catalyst performance. For example, surfaces engineered to pre-align water molecules might lower the kinetic barriers to water splitting. “The results could also refine electrochemical models by incorporating structural water energetics,” Geiger says. “And beyond the OER, water alignment may also influence other reactions such as the hydrogen evolution reaction and CO₂ reduction to liquid fuels, potentially impacting multiple energy-related technologies.”
The researchers are now exploring alternative electrode materials, including NiFe and multi-element catalysts. Some of the latter can outperform iridium, which has traditionally been the best-performing electrocatalyst, but is very rare (it comes from meteorites) and therefore expensive. “We have also shown in a related publication (in press) that water flipping occurs on an earth-abundant semiconductor, suggesting broader applicability beyond metals,” Geiger reveals.
The post Splitting water takes more energy than theory predicts – and now scientists know why appeared first on Physics World.
If a water droplet flowing over a surface gets stuck, and then unsticks itself, it generates an electric charge. The discoverers of this so-called depinning phenomenon are researchers at RMIT University and the University of Melbourne, both in Australia, and they say that boosting it could make energy-harvesting devices more efficient.
The newly observed charging mechanism is conceptually similar to slide electrification, which occurs when a liquid leaves a surface – that is, when the surface goes from wet to dry. However, the idea that the opposite process can also generate a charge is new, says Peter Sherrell, who co-led the study. “We have found that going from dry to wet matters as well and may even be (in some cases) more important,” says Sherrell, an interdisciplinary research fellow at RMIT. “Our results show how something as simple as water moving on a surface still shows basic phenomena that have not been understood yet.”
Co-team leader Joe Berry, a fluid dynamics expert at Melbourne, notes that the charging mechanism only occurs when the water droplet gets temporarily stuck on the surface. “This suggests that we could design surfaces with specific structure and/or chemistry to control this charging,” he says. “We could reduce this charge for applications where it is a problem – for example in fuel handling – or, conversely, enhance it for applications where it is a benefit. These include increasing the speed of chemical reactions on catalyst surfaces to make next-generation batteries more efficient.”
To observe depinning, the researchers built an experimental apparatus that enabled them to control the sticking and slipping motion of a water droplet on a Teflon surface while measuring the corresponding change in electrical charge. They also controlled the size of the droplet, making it big enough to wet the surface all at once, or smaller to de-wet it. This allowed them to distinguish between multiple mechanisms at play as they sequentially wetted and dried the same region of the surface.
Their study, which is published in Physical Review Letters, is based on more than 500 wetting and de-wetting experiments performed by PhD student Shuaijia Chen, Sherrell says. These experiments showed that the largest change in charge – from 0 to 4.1 nanocoulombs (nC) – occurred the first time the water contacted the surface. The amount of charge then oscillated between about 3.2 and 4.1 nC as the system alternated between wet and dry phases. “Importantly, this charge does not disappear,” Sherrell says. “It is likely generated at the interface and probably retained in the droplet as it moves over the surface.”
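For scale (my estimate from the reported figures), the 4.1 nC peak corresponds to tens of billions of elementary charges:

```python
E_CHARGE = 1.602176634e-19    # elementary charge, C (exact SI value)

q_nC = 4.1                    # peak charge reported in the experiments
n_electrons = q_nC * 1e-9 / E_CHARGE
print(f"~{n_electrons:.1e} elementary charges")   # ~2.6e+10
```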
The motivation for the experiment came when Berry asked Sherrell a deceptively simple question: was it possible to harvest electricity from raindrops? To find out, they decided to supervise a semester-long research project for a master’s student in the chemical engineering degree programme at Melbourne. “The project grew from there, first with two more research project students [before] Chen then took over to build the final experimental platform and take the measurements,” Berry recalls.
The main challenge, he adds, was that they did not initially understand the phenomenon they were measuring. “Another obstacle was to design the exact protocol required to repeatedly produce the charging effect we observed,” he says.
Understanding how and why electric charge is generated as liquids flow over surfaces is important, Berry says, especially with new, flammable types of renewable fuels such as hydrogen and ammonia seen as part of the transition to net zero. “At present, with existing fuels, charge build-up is reduced by restricting flow using additives or other measures, which may not be effective in newer fuels,” he explains. “This knowledge may help us to engineer coatings that could mitigate charge in new fuels.”
The RMIT/Melbourne researchers now plan to investigate the stick-slip phenomenon with other types of liquids and surfaces and are keen to partner with industries to target applications that can make a real-world impact. “At this stage, we have simply reported that this phenomenon occurs,” Sherrell says. “We now want to show that we can control when and where these charging events happen – either to maximize them or eliminate them. We are still a long way off from using our discovery for chemical and energy applications – but it’s a big step in the right direction.”
The post Sliding droplets generate electrical charge as they stick and unstick appeared first on Physics World.
Scientists in the US have developed a new type of photovoltaic battery that runs on the energy given off by nuclear waste. The battery uses a scintillator crystal to transform the intense gamma rays from radioisotopes into electricity and can produce more than a microwatt of power. According to its developers at Ohio State University and the University of Toledo, it could be used to power microelectronic devices such as microchips.
The idea of a nuclear waste battery is not new. Indeed, Raymond Cao, the Ohio State nuclear engineer who led the new research effort, points out that the first experiments in this field date back to the early 1950s. These studies, he explains, used a 50 millicurie 90Sr-90Y source to produce electricity via the electron-voltaic effect in p-n junction devices.
However, the maximum power output of these devices was just 0.8 μW, and their power conversion efficiency (PCE) was an abysmal 0.4%. Since then, the PCE of nuclear voltaic batteries has remained low, typically in the 1–3% range, and even the most promising devices have produced, at best, a few hundred nanowatts of power.
Cao is confident that his team’s work will change this. “Our yet-to-be-optimized battery has already produced 1.5 μW,” he says, “and there is much room for improvement.”
To achieve this benchmark, Cao and colleagues focused on a different physical process called the nuclear photovoltaic effect. This effect captures the energy from highly-penetrating gamma rays indirectly, by coupling a photovoltaic solar cell to a scintillator crystal that emits visible light when it absorbs radiation. This radiation can come from several possible sources, including nuclear power plants, storage facilities for spent nuclear fuel, space- and submarine-based nuclear reactors or, really, anyplace that happens to have large amounts of gamma ray-producing radioisotopes on hand.
The scintillator crystal Cao and colleagues used is gadolinium aluminium garnet (GAGG), and they attached it to a solar cell made from polycrystalline CdTe. The resulting device measures around 2 × 2 × 1 cm, and they tested it using intense gamma rays emitted by two different radioactive sources, 137Cs and 60Co, that produced 1.5 kRad/h and 10 kRad/h, respectively. 137Cs is the most common fission product found in spent nuclear fuel, while 60Co is an activation product.
The Ohio-Toledo team found that the maximum power output of their battery was around 288 nW with the 137Cs source. Using the 60Co irradiator boosted this to 1.5 μW. “The greater the radiation intensity, the more light is produced, resulting in increased electricity generation,” Cao explains.
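A rough consistency check on these figures, using only the numbers quoted above (the near-linear reading is our inference, not a claim by the authors):

```python
# Back-of-envelope check of dose rate vs power output for the two sources.
# All input figures are taken from the article.
dose_cs137 = 1.5       # kRad/h from the 137Cs source
dose_co60 = 10.0       # kRad/h from the 60Co irradiator
power_cs137 = 288e-9   # W (288 nW with 137Cs)
power_co60 = 1.5e-6    # W (1.5 uW with 60Co)

dose_ratio = dose_co60 / dose_cs137      # ~6.7x more radiation
power_ratio = power_co60 / power_cs137   # ~5.2x more electrical power

print(f"dose ratio:  {dose_ratio:.1f}x")
print(f"power ratio: {power_ratio:.1f}x")
```

The two ratios are of the same order, consistent with Cao's remark that more intense radiation produces correspondingly more light and electricity.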
The higher figure is already enough to power a microsensor, he says, and he and his colleagues aim to scale the system up to milliwatts in future efforts. However, they acknowledge that doing so presents several challenges. Scaling up the technology will be expensive, and gamma radiation gradually damages both the scintillator and the solar cell. To overcome the latter problem, Cao says they will need to replace the materials in their battery with new ones. “We are interested in finding alternative scintillator and solar cell materials that are more radiation-hard,” he tells Physics World.
The researchers are optimistic, though, arguing that optimized nuclear photovoltaic batteries could be a viable option for harvesting ambient radiation that would otherwise be wasted. They report their work in Optical Materials X.
The post Photovoltaic battery runs on nuclear waste appeared first on Physics World.
The first results from the Dark Energy Spectroscopic Instrument (DESI) are a cosmological bombshell, suggesting that the strength of dark energy has not remained constant throughout history. Instead, it appears to be weakening at the moment, and in the past it seems to have existed in an extreme form known as “phantom” dark energy.
The new findings have the potential to change everything we thought we knew about dark energy, a hypothetical entity that is used to explain the accelerating expansion of the universe.
“The subject needed a bit of a shake-up, and we’re now right on the boundary of seeing a whole new paradigm,” says Ofer Lahav, a cosmologist from University College London and a member of the DESI team.
DESI is mounted on the Nicholas U Mayall four-metre telescope at Kitt Peak National Observatory in Arizona, and has the primary goal of shedding light on the “dark universe”. The term dark universe reflects our ignorance of the nature of about 95% of the mass–energy of the cosmos.
Today’s favoured Standard Model of cosmology is the lambda–cold dark matter (CDM) model. Lambda refers to a cosmological constant, which was first introduced by Albert Einstein in 1917 to keep the universe in a steady state by counteracting the effect of gravity. We now know that the universe is expanding at an accelerating rate, so lambda is used to quantify this acceleration. It can be interpreted as an intrinsic energy density that is driving expansion. Now, DESI’s findings imply that this energy density is erratic and even more mysterious than previously thought.
DESI is creating a humongous 3D map of the universe. Its first full data release comprises 270 terabytes of data and was made public in March. The data include distance and spectral information about 18.7 million objects, including 12.1 million galaxies and 1.6 million quasars. The spectral details of about four million nearby stars are also included.
This is the largest 3D map of the universe ever made, bigger even than all the previous spectroscopic surveys combined. DESI scientists are already working with even more data that will be part of a second public release.
DESI can observe patterns in the cosmos called baryonic acoustic oscillations (BAOs). These were created after the Big Bang, when the universe was filled with a hot plasma of atomic nuclei and electrons. Density waves associated with quantum fluctuations in the Big Bang rippled through this plasma, until about 379,000 years after the Big Bang. Then, the temperature dropped sufficiently to allow the atomic nuclei to sweep up all the electrons. This froze the plasma density waves into regions of high mass density (where galaxies formed) and low density (intergalactic space). These density fluctuations are the BAOs; and they can be mapped by doing statistical analyses of the separation between pairs of galaxies and quasars.
The BAOs grow as the universe expands, and therefore they provide a “standard ruler” that allows cosmologists to study the expansion of the universe. DESI has observed galaxies and quasars going back 11 billion years in cosmic history.
“What DESI has measured is that the distance [between pairs of galaxies] is smaller than what is predicted,” says team member Willem Elbers of the UK’s University of Durham. “We’re finding that dark energy is weakening, so the acceleration of the expansion of the universe is decreasing.”
As co-chair of DESI’s Cosmological Parameter Estimation Working Group, it is Elbers’ job to test different models of cosmology against the data. The results point to a bizarre form of “phantom” dark energy that boosted the expansion acceleration in the past, but is not present today.
The puzzle is related to dark energy’s equation of state, which describes the ratio of the pressure of the universe to its energy density. In a universe with an accelerating expansion, the equation of state will have a value that is less than about –1/3. A value of –1 characterizes the lambda–CDM model.
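The –1/3 threshold quoted here follows from the second Friedmann (acceleration) equation; a brief sketch, using standard cosmological symbols not defined in the article:

```latex
% Acceleration equation for a universe with energy density \rho and pressure p:
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right)
% Writing the equation of state as w \equiv p/(\rho c^{2}), this becomes
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\,\rho\,(1 + 3w)
% so accelerated expansion (\ddot{a} > 0) requires w < -1/3,
% while w = -1 (a cosmological constant) keeps \rho constant as the universe expands.
```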
However, some alternative cosmological models allow the equation of state to be lower than –1. This means that the universe would expand faster than a cosmological constant would allow. This points to a “phantom” dark energy that grew in strength as the universe expanded, but then petered out.
“It seems that dark energy was ‘phantom’ in the past, but it’s no longer phantom today,” says Elbers. “And that’s interesting because the simplest theories about what dark energy could be do not allow for that kind of behaviour.”
The universe began expanding because of the energy of the Big Bang. We already know that for the first few billion years of cosmic history this expansion was slowing because the universe was smaller, meaning that the gravity of all the matter it contains was strong enough to put the brakes on the expansion. As the density decreased as the universe expanded, gravity’s influence waned and dark energy was able to take over. What DESI is telling us is that at the point that dark energy became more influential than matter, it was in its phantom guise.
“This is really weird,” says Lahav; and it gets weirder. The energy density of dark energy reached a peak at a redshift of 0.4, which equates to about 4.5 billion years ago. At that point, dark energy ceased its phantom behaviour and since then the strength of dark energy has been decreasing. The expansion of the universe is still accelerating, but not as rapidly. “Creating a universe that does that, which gets to a peak density and then declines, well, someone’s going to have to work out that model,” says Lahav.
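The quoted lookback time can be checked with a short numerical integration, assuming standard flat lambda–CDM parameters (H0 ≈ 67.4 km/s/Mpc and Ωm ≈ 0.315 are assumed here; they are not given in the article):

```python
import math

# Lookback time to redshift z in a flat lambda-CDM universe,
# t_L = integral from 0 to z of dz' / ((1+z') H(z')).
H0 = 67.4            # Hubble constant, km/s/Mpc (assumed Planck-like value)
OMEGA_M = 0.315      # matter density parameter (assumed)
OMEGA_L = 1.0 - OMEGA_M
KM_PER_MPC = 3.0857e19
SEC_PER_GYR = 3.156e16

def lookback_time_gyr(z, steps=10000):
    """Trapezoid-rule integration of dz / ((1+z) E(z)), scaled by 1/H0."""
    h0_per_gyr = H0 / KM_PER_MPC * SEC_PER_GYR  # H0 in 1/Gyr
    def integrand(zp):
        e = math.sqrt(OMEGA_M * (1 + zp) ** 3 + OMEGA_L)
        return 1.0 / ((1 + zp) * e)
    dz = z / steps
    total = 0.5 * (integrand(0.0) + integrand(z))
    for i in range(1, steps):
        total += integrand(i * dz)
    return total * dz / h0_per_gyr

# Should land near the ~4.5 billion years quoted for redshift 0.4.
print(f"lookback time to z = 0.4: {lookback_time_gyr(0.4):.1f} Gyr")
```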
Unlike the unchanging dark-energy density described by the cosmological constant, an alternative concept called quintessence describes dark energy as a scalar quantum field that can have different values at different times and locations.
However, Elbers explains that a single field such as quintessence is incompatible with phantom dark energy. Instead, he says that “there might be multiple fields interacting, which on their own are not phantom but together produce this phantom equation of state,” adding that “the data seem to suggest that it is something more complicated.”
Before cosmology is overturned, however, more data are needed. On its own, the DESI data’s departure from the Standard Model of cosmology has a statistical significance of 1.7σ. This is well below the 5σ that is considered a discovery in cosmology. However, when combined with independent observations of the cosmic microwave background and type Ia supernovae, the significance jumps to 4.2σ.
Confirmation of a phantom era and a current weakening would mean that dark energy is far more complex than previously thought – deepening the mystery surrounding the expansion of the universe. Indeed, had dark energy continued on its phantom course, it would have caused a “big rip” in which cosmic expansion is so extreme that space itself is torn apart.
“Even if dark energy is weakening, the universe will probably keep expanding, but not at an accelerated rate,” says Elbers. “Or it could settle down in a quiescent state, or if it continues to weaken in the future we could get a collapse,” into a big crunch. With a form of dark energy that seems to do what it wants as its equation of state changes with time, it’s impossible to say what it will do in the future until cosmologists have more data.
Lahav, however, will wait until 5σ before changing his views on dark energy. “Some of my colleagues have already sold their shares in lambda,” he says. “But I’m not selling them just yet. I’m too cautious.”
The observations are reported in a series of papers on the arXiv server. Links to the papers can be found here.
The post DESI delivers a cosmological bombshell appeared first on Physics World.
Physicists in Germany have found an alternative explanation for an anomaly that had previously been interpreted as potential evidence for a mysterious “dark force”. Originally spotted in ytterbium atoms, the anomaly turns out to have a more mundane cause. However, the investigation, which involved high-precision measurements of shifts in ytterbium’s energy levels and the mass ratios of its isotopes, could help us better understand the structure of heavy atomic nuclei and the physics of neutron stars.
Isotopes are forms of an element that have the same number of protons and electrons, but different numbers of neutrons. These different numbers of neutrons produce shifts in the atom’s electronic energy levels. Measuring these so-called isotope shifts is therefore a way of probing the interactions between electrons and neutrons.
In 2020, a team of physicists at the Massachusetts Institute of Technology (MIT) in the US observed an unexpected deviation in the isotope shift of ytterbium. One possible explanation for this deviation was the existence of a new “dark force” that would interact with both ordinary, visible matter and dark matter via hypothetical new force-carrying particles (bosons).
Although dark matter is thought to make up about 85 percent of the universe’s total matter, and its presence can be inferred from the way light bends as it travels towards us from distant galaxies, it has never been detected directly. Evidence for a new, fifth force (in addition to the known strong, weak, electromagnetic and gravitational forces) that acts between ordinary and dark matter would therefore be very exciting.
A team led by Tanja Mehlstäubler from the Physikalisch-Technische Bundesanstalt (PTB) in Braunschweig and Klaus Blaum from the Max Planck Institute for Nuclear Physics (MPIK) in Heidelberg has now confirmed that the anomaly is real. However, the PTB-MPIK researchers say it does not stem from a dark force. Instead, it arises from the way the nuclear structure of ytterbium isotopes deforms as more neutrons are added.
Mehlstäubler, Blaum and colleagues came to this conclusion after measuring shifts in the atomic energy levels of five different ytterbium isotopes: 168,170,172,174,176Yb. They did this by trapping ions of these isotopes in an ion trap at the PTB and then using an ultrastable laser to drive certain electronic transitions. This allowed them to pin down the frequencies of specific transitions (2S1/2 → 2D5/2 and 2S1/2 → 2F7/2) with a precision of 4 × 10⁻⁹, the highest to date.
They also measured the atomic masses of the ytterbium isotopes by trapping individual highly charged Yb42+ ytterbium ions in the cryogenic PENTATRAP Penning trap mass spectrometer at the MPIK. In the strong magnetic field of this trap, team member and study lead author Menno Door explains, the ions are bound to follow a circular orbit. “We measure the rotational frequency of this orbit by amplifying the minuscule induced current in surrounding electrodes,” he says. “The measured frequencies allowed us to very precisely determine the related mass ratios of the various isotopes with a precision of 4 × 10⁻¹².”
From these data, the researchers were able to extract new parameters that describe how the ytterbium nucleus deforms. To back up their findings, a group at TU Darmstadt led by Achim Schwenk simulated the ytterbium nuclei on large supercomputers, calculating their structure from first principles based on our current understanding of the strong and electromagnetic interactions. “These calculations confirmed that the leading signal we measured was due to the evolving nuclear structure of ytterbium isotopes, not a new fifth force,” says team member Matthias Heinz.
“Our work complements a growing body of research that aims to place constraints on a possible new interaction between electrons and neutrons,” team member Chih-Han Yeh tells Physics World. “In our work, the unprecedented precision of our experiments refined existing constraints.”
The researchers say they would now like to measure other isotopes of ytterbium, including rare isotopes with high or low neutron numbers. “Doing this would allow us to control for uncertain ‘higher-order’ nuclear structure effects and further improve the constraints on possible new physics,” says team member Fiona Kirk.
Door adds that isotope chains of other elements such as calcium, tin and strontium would also be worth investigating. “These studies would allow us to further test our understanding of nuclear structure and neutron-rich matter, and with this understanding allow us to probe for possible new physics again,” he says.
The work is detailed in Physical Review Letters.
The post Atomic anomaly explained without recourse to hypothetical ‘dark force’ appeared first on Physics World.
A research team headed up at Linköping University in Sweden and Cornell University in the US has succeeded in recycling almost all of the components of perovskite solar cells using simple, non-toxic, water-based solvents. What’s more, the researchers were able to use the recycled components to make new perovskite solar cells with almost the same power conversion efficiency as those created from new materials. This work could pave the way to a sustainable perovskite solar economy, they say.
While solar energy is considered an environmentally friendly source of energy, most of the solar panels available today are based on silicon, which is difficult to recycle. As a result, the first generation of silicon solar panels, now reaching the end of their life cycle, is ending up in landfills, says Xun Xiao, one of the team members at Linköping University.
“When developing emerging solar cell technologies, we therefore need to take recycling into consideration,” adds one of the leaders of the new study, Feng Gao, also at Linköping. “If we don’t know how to recycle them, maybe we shouldn’t put them on the market at all.”
To this end, many countries around the world are imposing legal requirements on photovoltaic manufacturers to ensure that they collect and recycle any solar cell waste they produce. These initiatives include the WEEE directive 2012/19/EU in the European Union and equivalent legislation in Asia and the US.
Perovskites are one of the most promising materials for making next-generation solar cells. Not only are they relatively inexpensive, they are also easy to fabricate, lightweight, flexible and transparent. This allows them to be placed on top of a variety of surfaces, unlike their silicon counterparts. And since they boast a power conversion efficiency (PCE) of more than 25%, this makes them comparable to existing photovoltaics on the market.
One of their downsides, however, is that perovskite solar cells have a shorter lifespan than silicon solar cells. This means that recycling is even more critical for these materials. Today, perovskite solar cells are disassembled using dangerous solvents such as dimethylformamide, but Gao and colleagues have now developed a technique in which water can be used as the solvent.
Perovskites are crystalline materials with an ABX3 structure, where A is caesium, methylammonium (MA) or formamidinium (FA); B is lead or tin; and X is chlorine, bromine or iodine. Solar cells made of these materials are composed of different layers: the hole/electron transport layers; the perovskite layer; indium tin oxide substrates; and cover glasses.
In their work, which they detail in Nature, the researchers succeeded in delaminating end-of-life devices layer by layer, using water containing three low-cost additives: sodium acetate, sodium iodide and hypophosphorous acid. Despite being able to dissolve organic iodide salts such as methylammonium iodide and formamidinium iodide, water only marginally dissolves lead iodide (about 0.044 g per 100 ml at 20 °C). The researchers therefore developed a way to increase the amount of lead iodide that dissolves in water by introducing acetate ions into the mix. These ions readily coordinate with lead ions, forming highly soluble lead acetate (about 44.31 g per 100 ml at 20 °C).
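The scale of the improvement is easy to quantify from the two solubility figures quoted above; a minimal check:

```python
# Solubility figures quoted in the article (g per 100 ml of water at 20 degC).
pbi2_solubility = 0.044        # lead iodide: only marginally soluble
pb_acetate_solubility = 44.31  # lead acetate: highly soluble

# Coordinating lead with acetate ions boosts its solubility ~1000-fold.
enhancement = pb_acetate_solubility / pbi2_solubility
print(f"solubility enhancement: ~{enhancement:.0f}x")
```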
Once the degraded perovskites had dissolved in the aqueous solution, the researchers set about recovering pure and high-quality perovskite crystals from the solution. They did this by providing extra iodide ions to coordinate with lead. This resulted in [PbI]+ transitioning to [PbI2]0 and eventually to [PbI3]− and the formation of the perovskite framework.
To remove the indium tin oxide substrates, the researchers sonicated these layers in a solution of water/ethanol (50%/50% volume ratio) for 15 min. Finally, they delaminated the cover glasses by placing the degraded solar cells on a hotplate preheated to 150 °C for 3 min.
They were able to apply their technology to recycle both MAPbI3 and FAPbI3 perovskites.
New devices made from the recycled perovskites had an average power conversion efficiency of 21.9 ± 1.1%, with the best samples clocking in at 23.4%. This represents an efficiency recovery of more than 99% compared with those prepared using fresh materials (which have a PCE of 22.1 ± 0.9%).
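The efficiency-recovery figure follows directly from the two average PCE values; a one-line check:

```python
# Average power conversion efficiencies quoted in the article (in %).
pce_recycled = 21.9  # devices made from recycled perovskite
pce_fresh = 22.1     # devices made from fresh material

# Recovery is the recycled-device efficiency relative to fresh-material devices.
recovery = pce_recycled / pce_fresh * 100
print(f"efficiency recovery: {recovery:.1f}%")
```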
Looking forward, Gao and colleagues say they would now like to demonstrate that their technique works on a larger scale. “Our life-cycle assessment and techno-economic analysis has already confirmed that our strategy not only preserves raw materials, but also appreciably lowers overall manufacturing costs of solar cells made from perovskites,” says co-team leader Fengqi You, who works at Cornell University. “In particular, reclaiming the valuable layers in these devices drives down expenses and helps reduce the ‘levelized cost’ of electricity they produce, making the technology potentially more competitive and sustainable at scale,” he tells Physics World.
The post Perovskite solar cells can be completely recycled appeared first on Physics World.
As service lifetimes of electric vehicle (EV) and grid storage batteries continually improve, it has become increasingly important to understand how Li-ion batteries perform after extensive cycling. Using a combination of spatially resolved synchrotron x-ray diffraction and computed tomography, the complex kinetics and spatially heterogeneous behavior of extensively cycled cells can be mapped and characterized under both near-equilibrium and non-equilibrium conditions.
This webinar shows examples of commercial cells with thousands (even tens of thousands) of cycles over many years. The behavior of such cells can be surprisingly complex and spatially heterogeneous, requiring a different approach to analysis and modeling than what is typically used in the literature. Using this approach, we investigate the long-term behavior of Ni-rich NMC cells and examine ways to prevent degradation. This work also showcases the incredible durability of single-crystal cathodes, which show very little evidence of mechanical or kinetic degradation after more than 20,000 cycles – equivalent to driving an EV for 8 million km!
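The mileage equivalence implies a per-cycle driving range, which serves as a quick sanity check on the numbers (the 8 million km and 20,000 cycle figures are from the text; the per-cycle range is derived from them):

```python
# Figures quoted in the webinar description.
total_km = 8_000_000  # claimed EV-driving equivalent
cycles = 20_000       # full charge-discharge cycles

# Each full cycle corresponds to one full charge of driving range.
km_per_cycle = total_km / cycles
print(f"implied driving range per cycle: {km_per_cycle:.0f} km")
```

A 400 km range per full charge is in line with typical modern EVs, so the equivalence is internally consistent.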
Toby Bond is a senior scientist in the Industrial Science group at the Canadian Light Source (CLS), Canada’s national synchrotron facility. A specialist in x-ray imaging and diffraction, he focuses on in-situ and operando analysis of batteries and fuel cells for industry clients of the CLS. Bond is an electrochemist by training, who completed his MSc and PhD in Jeff Dahn’s laboratory at Dalhousie University with a focus on developing methods and instrumentation to characterize long-term degradation in Li-ion batteries.
The post The complex and spatially heterogeneous nature of degradation in heavily cycled Li-ion cells appeared first on Physics World.