
Imaging reveals how microplastics may harm the brain

By Tami Freeman

Pollution from microplastics – small plastic particles less than 5 mm in size – poses an ongoing threat to human health. Independent studies have found microplastics in human tissues and within the bloodstream. And as blood circulates throughout the body and through vital organs, these microplastics can reach critical regions and lead to tissue dysfunction and disease. Microplastics can also cause functional irregularities in the brain, but exactly how they exert neurotoxic effects remains unclear.

A research collaboration headed up at the Chinese Research Academy of Environmental Sciences and Peking University has shed light on this conundrum. In a series of cerebral imaging studies reported in Science Advances, the researchers tracked the progression of fluorescent microplastics through the brains of mice. They found that microplastics entering the bloodstream become engulfed by immune cells, which then obstruct blood vessels in the brain and cause neurobehavioral abnormalities.

“Understanding the presence and the state of microplastics in the blood is crucial. Therefore, it is essential to develop methods for detecting microplastics within the bloodstream,” explains principal investigator Haipeng Huang from Peking University. “We focused on the brain due to its critical importance: if microplastics induce lesions in this region, it could have a profound impact on the entire body. Our experimental technology enables us to observe the blood vessels within the brain and detect microplastics present in these vessels.”

In vivo imaging

Huang and colleagues developed a microplastics imaging system by integrating a two-photon microscopy system with fluorescent plastic particles and demonstrated that it could image brain blood vessels in awake mice. They then fed five mice with water containing 5-µm diameter fluorescent microplastics. After a couple of hours, fluorescence images revealed microplastics within the animals’ cerebral vessels.

Lightning bolt The “MP-flash” observed as two plastic particles rapidly fly through the cerebral blood vessels. (Courtesy: Haipeng Huang)

As they move through rapidly flowing blood, the microplastics generate a fluorescence signal resembling a lightning bolt, which the researchers call a “microplastic flash” (MP-flash). This MP-flash was observed in four of the mice, with the entire MP-flash trajectory captured in a single imaging frame of less than 208 ms.

Three hours after administering the microplastics, the researchers observed fluorescent cells in the bloodstream. The signals from these cells were of comparable intensity to the MP-flash signal, suggesting that the cells had engulfed microplastics in the blood to create microplastic-labelled cells (MPL-cells). The team note that the microplastics did not directly attach to the vessel wall or cross into brain tissue.

To test this idea further, the researchers injected microplastics directly into the bloodstream of the mice. Within minutes, they saw the MP-flash signal in the brain’s blood vessels, and roughly 6 min later MPL-cells appeared. No fluorescent cells were seen in non-treated mice. Flow cytometry of mouse blood after microplastics injection revealed that the MPL-cells, which were around 21 µm in diameter, were immune cells, mostly neutrophils and macrophages.

Tracking these MPL-cells revealed that they sometimes became trapped within a blood vessel. Some cells exited the imaging field following a period of obstruction while others remained in cerebral vessels for extended durations, in some instances for nearly 2.5 h of imaging. The team also found that one week after injection, the MPL-cells had still not cleared, although the density of blockages was much reduced.

“[While] most MPL-cells flow rapidly with the bloodstream, a small fraction become trapped within the blood vessels,” Huang tells Physics World. “We provide an example where an MPL-cell is trapped at a microvascular turn and, after some time, is fortunate enough to escape. Many obstructed cells are less fortunate, as the blockage may persist for several weeks. Obstructed cells can also trigger a crash-like chain reaction, resulting in several MPL-cells colliding in a single location and posing significant risks.”

The MPL-cell blockages also impeded blood flow in the mouse brain. Using laser speckle contrast imaging to monitor blood flow, the researchers saw reduced perfusion in the cerebral cortical vessels, notably at 30 min after microplastics injection and particularly affecting smaller vessels.

Reduced blood flow These laser speckle contrast images show blood flow in the mouse brain at various times after microplastics injection. The images indicate that blockages of microplastic-labelled cells inhibit perfusion in the cerebral cortical vessels. (Courtesy: Huang et al. Sci. Adv. 11 eadr8243 (2025))

Changing behaviour

Lastly, Huang and colleagues investigated whether the reduced blood supply to the brain caused by cell blockages led to behavioural changes in the mice. In an open-field experiment (used to assess rodents’ exploratory behaviour), mice injected with microplastics travelled shorter distances at lower speeds than mice in the control group.

The Y-maze test for assessing memory also showed that microplastics-treated mice travelled smaller total distances than control animals, with a significant reduction in spatial memory. Tests to evaluate motor coordination and endurance revealed that microplastics additionally inhibited motor abilities. By day 28 after injection, these behavioural impairments had resolved, corresponding with the observed recovery from MPL-cell obstruction in the cerebral vasculature.

The researchers conclude that their study demonstrates that microplastics harm the brain indirectly – via cell obstruction and disruption of blood circulation – rather than by directly penetrating tissue. They emphasize, however, that this mechanism may not necessarily apply to humans, who have roughly 1200 times the circulating blood volume of mice and significantly different vascular diameters.

“In the future, we plan to collaborate with clinicians,” says Huang. “We will enhance our imaging techniques for the detection of microplastics in human blood vessels, and investigate whether the ‘MPL-cell-car-crash’ happens in humans. We anticipate that this research will lead to exciting new discoveries.”

Huang emphasizes how the use of fluorescent microplastic imaging technology has fundamentally transformed research in this field over the past five years. “In the future, advancements in real-time imaging of depth and the enhanced tracking ability of microplastic particles in vivo may further drive innovation in this area of study,” he says.

The post Imaging reveals how microplastics may harm the brain appeared first on Physics World.

Alternative building materials could store massive amounts of carbon dioxide

Replacing conventional building materials with alternatives that sequester carbon dioxide could allow the world to lock away up to half the CO2 generated by humans each year – about 16 billion tonnes. This is the finding of researchers at the University of California Davis and Stanford University, both in the US, who studied the sequestration potential of materials such as carbonate-based aggregates and biomass fibre in brick.

Despite efforts to reduce greenhouse gas emissions by decarbonizing industry and switching to renewable sources of energy, it is likely that humans will continue to produce significant amounts of CO2 beyond the target “net zero” date of 2050. Carbon storage and sequestration – either at source or directly from the atmosphere – are therefore worth exploring as an additional route towards this goal. Researchers have proposed several possible ways of doing this, including injecting carbon underground or deep under the ocean. However, all these scenarios are challenging to implement practically and pose their own environmental risks.

Modifying common building materials

In the present work, a team of civil engineers and earth systems scientists led by Elisabeth van Roijen (then a PhD student at UC Davis) calculated how much carbon could be stored in modified versions of several common building materials. These include concrete (cement) and asphalt containing carbonate-based aggregates; bio-based plastics; wood; biomass-fibre bricks (from waste biomass); and biochar filler in cement.

The researchers obtained the “16 billion tonnes of CO2” figure by assuming that all aggregates currently employed in concrete would be replaced with carbonate-based versions. They also supplemented 15% of cement with biochar and the remainder with carbonatable cements; increased the amount of wood used in all new construction by 20%; and supplemented 15% of bricks with biomass and the remainder with carbonatable calcium hydroxide. A final element in their calculation was to replace all plastics used in construction today with bio-based plastics and all bitumen with bio-oil in asphalt.

“We calculated the carbon storage potential of each material based on the mass ratio of carbon in each material,” explains van Roijen. “These values were then scaled up based on 2016 consumption values for each material.”
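As a rough back-of-the-envelope illustration of that scaling – not the authors’ actual model – the calculation multiplies each material’s annual consumption by its carbon mass fraction, then converts carbon mass to CO2 mass using the molar-mass ratio 44/12. Every number in the sketch below is an invented placeholder, not a value from the study:

```python
# Sketch of the scaling described above: for each material,
# stored CO2 = annual consumption x carbon mass fraction x (44/12),
# where 44/12 converts a mass of carbon to the equivalent mass of CO2.
# All figures below are illustrative placeholders, NOT values from the study.

CO2_PER_C = 44.0 / 12.0  # molar-mass ratio of CO2 to elemental carbon

# material name -> (annual consumption in billion tonnes, carbon mass fraction)
materials = {
    "carbonate aggregate": (20.0, 0.12),
    "biomass-fibre brick": (2.0, 0.25),
    "bio-based plastic":   (0.3, 0.60),
}

def stored_co2(consumption_gt: float, carbon_fraction: float) -> float:
    """Annual CO2 locked away (billion tonnes) for one material."""
    return consumption_gt * carbon_fraction * CO2_PER_C

total = sum(stored_co2(c, f) for c, f in materials.values())
for name, (c, f) in materials.items():
    print(f"{name}: {stored_co2(c, f):.2f} Gt CO2/yr")
print(f"total: {total:.2f} Gt CO2/yr")
```

Swapping in real consumption data and measured carbon fractions for each modified material would reproduce the kind of aggregate estimate the study reports.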

“The sheer magnitude of carbon storage is pretty impressive”

While the production of some replacement materials would need to increase to meet the resulting demand, van Roijen and colleagues found that resources readily available today – for example, mineral-rich waste streams – would already let us replace 10% of conventional aggregates with carbonate-based ones. “These alone could store 1 billion tonnes of CO2,” she says. “The sheer magnitude of carbon storage is pretty impressive, especially when you put it in context of the level of carbon dioxide removal needed to stay below the 1.5 and 2 °C targets set by The Intergovernmental Panel on Climate Change (IPCC).”

Indeed, even if the world doesn’t implement these technologies until 2075, we could still store enough carbon between 2075 and 2100 to stay below these targets, she tells Physics World. “This is assuming, of course, that all other decarbonization efforts outlined in the IPCC reports are also implemented to achieve net-zero emissions,” she says.

Building materials are a good option for carbon storage

The motivation for the study, she explains, came from the urgent need – as expressed by the IPCC – not only to reduce new carbon emissions through rapid and significant decarbonization, but also to remove large amounts of CO2 already present in the atmosphere. “Rather than burying it in geological, terrestrial or ocean reservoirs, we wanted to look into the possibility of leveraging existing technology – namely conventional building materials – as a way to store CO2. Building materials are a good option for carbon storage given the massive quantity (30 billion tonnes) produced each year, not to mention their durability.”

Van Roijen, who is now a postdoctoral researcher at the US Department of Energy’s National Renewable Energy Laboratory, hopes that this work, which is detailed in Science, will go beyond the reach of the research lab and attract the attention of policymakers and industrialists. While some of the technologies outlined in this study are new and require further research, others, such as bio-based plastics, are well established and simply need some economic and political support, she says. “That said, conventional building materials such as concrete and plastics are pretty cheap, so there will need to be some incentive for industries to make the switch over to these low-carbon materials.”

The post Alternative building materials could store massive amounts of carbon dioxide appeared first on Physics World.

Sustainability spotlight: PFAS unveiled


So-called “forever chemicals”, or per- and polyfluoroalkyl substances (PFAS), are widely used in consumer, commercial and industrial products, and have subsequently made their way into humans, animals, water, air and soil. Despite this ubiquity, there are still many unknowns regarding the potential human health and environmental risks that PFAS pose.

Join us for an in-depth exploration of PFAS with four leading experts who will shed light on the scientific advances and future challenges in this rapidly evolving research area.

Our panel will guide you through a discussion of PFAS classification and sources, the journey of PFAS through ecosystems, strategies for PFAS risk mitigation and remediation, and advances in the latest biotechnological innovations to address their effects.

Sponsored by Sustainability Science and Technology, a new journal from IOP Publishing that provides a platform for researchers, policymakers, and industry professionals to publish their research on current and emerging sustainability challenges and solutions.

Left to right: Jonas Baltrusaitis, Linda S. Lee, Clinton Williams, Sara Lupton, Jude Maul

Jonas Baltrusaitis, inaugural editor-in-chief of Sustainability Science and Technology, has co-authored more than 300 research publications on innovative materials. His work includes nutrient recovery from waste, their formulation and delivery, and renewable energy-assisted catalysis for energy carrier and commodity chemical synthesis and transformations.

Linda S Lee is a distinguished professor at Purdue University with joint appointments in the Colleges of Agriculture (COA) and Engineering, program head of the Ecological Sciences & Engineering Interdisciplinary Graduate Program and COA assistant dean of graduate education and research. She joined Purdue in 1993 with degrees in chemistry (BS), environmental engineering (MS) and soil chemistry/contaminant hydrology (PhD) from the University of Florida. Her research includes chemical fate, analytical tools, waste reuse, bioaccumulation, and contaminant remediation and management strategies with PFAS challenges driving much of her research for the last two decades. Her research is supported by a diverse funding portfolio. She has published more than 150 papers with most in top-tier environmental journals.

Clinton Williams is the research leader of the Plant and Irrigation and Water Quality Research units at the US Arid Land Agricultural Research Center. He has been actively engaged in environmental research focusing on water quality and quantity for more than 20 years. Clinton looks for ways to increase water supplies through the safe use of reclaimed waters. His current research is related to the environmental and human health impacts of biologically active contaminants (e.g. PFAS, pharmaceuticals, hormones and trace organics) found in reclaimed municipal wastewater and the associated impacts on soil, biota and natural waters in contact with wastewater. His research also looks for ways to characterize the environmental loading patterns of these compounds while finding low-cost treatment alternatives to reduce their environmental concentration, using byproducts capable of removing the compounds from water supplies.

Sara Lupton has been a research chemist with the Food Animal Metabolism Research Unit at the Edward T Schafer Agricultural Research Center in Fargo, ND within the USDA-Agricultural Research Service since 2010. Sara’s background is in environmental analytical chemistry. She is the ARS lead scientist for the USDA’s Dioxin Survey and other research includes the fate of animal drugs and environmental contaminants in food animals and investigation of environmental contaminant sources (feed, water, housing, etc.) that contribute to chemical residue levels in food animals. Sara has conducted research on bioavailability, accumulation, distribution, excretion, and remediation of PFAS compounds in food animals for more than 10 years.

Jude Maul received a master’s degree in plant biochemistry from the University of Kentucky and a PhD in horticulture and biogeochemistry from Cornell University in 2008. Since then he has been with the USDA-ARS as a research ecologist in the Sustainable Agriculture System Laboratory. Jude’s research focuses on molecular ecology at the plant/soil/water interface in the context of plant health, nutrient acquisition and productivity. Taking a systems approach to agroecosystem research, Jude leads the USDA-ARS-LTAR Soils Working Group, which is creating a national soils data repository; his research results contribute to national soil health management recommendations.

About this journal

Sustainability Science and Technology is an interdisciplinary, open access journal dedicated to advances in science, technology, and engineering that can contribute to a more sustainable planet. It focuses on breakthroughs in all science and engineering disciplines that address one or more of the three sustainability pillars: environmental, social and/or economic.
Editor-in-chief: Jonas Baltrusaitis, Lehigh University, USA


The post Sustainability spotlight: PFAS unveiled appeared first on Physics World.

Start-stop operation and the degradation impact in electrolysis



This webinar will detail recent efforts in proton exchange membrane-based low-temperature electrolysis degradation, focused on losses due to simulated start-stop operation and anode catalyst layer redox transitions. Ex situ testing indicated that repeated redox cycling accelerates catalyst dissolution, due to near-surface reduction and the higher dissolution kinetics of metals when cycling to high potentials. Similar results occurred in situ, where a large decrease in cell kinetics was found, along with iridium migrating from the anode catalyst layer into the membrane. Additional processes were observed, however, including changes in catalyst oxidation, the formation of thinner and denser catalyst layers, and platinum migration from the transport layer coating. Complicating factors, including the loss of water flow and temperature control, were evaluated; a higher rate of interfacial tearing and delamination was found. Current efforts are focused on bridging these studies into a more relevant field test, and include evaluating possible differences in catalyst reduction through an electrochemical process versus hydrogen exposure, either direct or through crossover. These studies seek to identify degradation mechanisms and voltage loss acceleration, and to demonstrate the impact of operational stops on electrolyzer lifetime.

An interactive Q&A session follows the presentation.

Shaun Alia

Shaun Alia has worked in several areas related to electrochemical energy conversion and storage, including proton and anion exchange membrane-based electrolyzers and fuel cells, direct methanol fuel cells, capacitors, and batteries. His current research involves understanding electrochemical and degradation processes, component development, and materials integration and optimization. Within HydroGEN, part of the U.S. Department of Energy’s Energy Materials Network, Alia has been involved in low-temperature electrolysis through NREL capabilities in materials development and ex situ and in situ characterization. He is also active in in situ durability, diagnostics, and accelerated stress test development for H2@Scale and H2NEW.


The post Start-stop operation and the degradation impact in electrolysis appeared first on Physics World.

How the operating window of LFP/Graphite cells affects their lifetime



Lithium iron phosphate (LFP) battery cells are ubiquitous in electric vehicles and stationary energy storage because they are cheap and have a long lifetime. This webinar will show our studies comparing 240 mAh LFP/graphite pouch cells undergoing charge-discharge cycles over five state of charge (SOC) windows (0%–25%, 0%–60%, 0%–80%, 0%–100%, and 75%–100%). To accelerate the degradation, elevated temperatures of 40 °C and 55 °C were used. At more realistic operating temperatures, LFP cells are expected to perform better, with longer lifetimes. In this study, we found that cycling LFP cells across a lower average SOC results in less capacity fade than cycling across a higher average SOC, regardless of depth of discharge. The primary capacity fade mechanism is lithium inventory loss, due both to lithiated graphite reacting with the electrolyte, which increases incrementally with SOC, and to lithium alkoxide species causing iron dissolution and deposition on the negative electrode at high SOC, which further accelerates lithium inventory loss. Our results show that even low-voltage LFP systems (3.65 V) have a trade-off between average SOC and lifetime. Operating LFP cells at lower average SOC could extend their lifetime substantially in both EV and grid storage applications.

Eniko Zsoldos

Eniko Zsoldos is a 5th year PhD candidate in chemistry at Dalhousie University in the Jeff Dahn research group. Her current research focuses on understanding degradation mechanisms in a variety of lithium-ion cell chemistries (NMC, LFP, LMO) using techniques such as isothermal microcalorimetry and electrolyte analysis. Eniko received her undergraduate degree in nanotechnology engineering from the University of Waterloo. During her undergrad, she was a member of the Waterloo Formula Electric team, building an electric race car for FSAE student competitions. She has completed internships at Sila Nanotechnologies working on silicon-based anodes for batteries, and at Tesla working on dry electrode processing in Fremont, CA.


The Electrochemical Society


The post How the operating window of LFP/Graphite cells affects their lifetime appeared first on Physics World.

Generative AI has an electronic waste problem, researchers warn

The rising popularity of generative artificial intelligence (GAI), and in particular large language models such as ChatGPT, could produce a significant surge in electronic waste, according to new analyses by researchers in Israel and China. Without mitigation measures, the researchers warn that this stream of e-waste could reach 2.5 million tons (2.2 billion kg) annually by 2030, and potentially even more.

“Geopolitical factors, such as restrictions on semiconductor imports, and the trend for rapid server turnover for operational cost saving, could further exacerbate e-waste generation,” says study team member Asaf Tzachor, who studies existential risks at Reichman University in Herzliya, Israel.

GAI, or Gen AI, is a form of artificial intelligence that creates new content – such as text, images, music or videos – using patterns it has learned from existing data. Some of the principles that make this pattern-based learning possible were developed by the physicist John Hopfield, who shared the 2024 Nobel Prize for Physics with computer scientist and AI pioneer Geoffrey Hinton. Perhaps the best-known example of Gen AI is ChatGPT (the “GPT” stands for “generative pre-trained transformer”), which is an example of a large language model (LLM).

While the potential benefits of LLMs are significant, they come at a price. Notably, they require so much energy to train and operate that some major players in the field, including Google and ChatGPT developer OpenAI, are exploring the possibility of building new nuclear reactors for this purpose.

Quantifying and evaluating Gen AI’s e-waste problem

Energy use is not the only environmental challenge associated with Gen AI, however. The amount of e-waste it produces – including printed circuit boards and batteries that can contain toxic materials such as lead and chromium – is also a potential issue. “While the benefits of AI are well-documented, the sustainability aspects, and particularly e-waste generation, have been largely overlooked,” Tzachor says.

Tzachor and his colleagues decided to address what they describe as a “significant knowledge gap” regarding how GAI contributes to e-waste. Led by sustainability scientist Peng Wang at the Institute of Urban Environment, Chinese Academy of Sciences, they developed a computational power-driven material flow analysis (CP-MFA) framework to quantify and evaluate the e-waste it produces. This involved modelling the computational resources required for training and deploying LLMs, explains Tzachor, and translating these resources into material flows and e-waste projections.
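To make that pipeline concrete, here is a deliberately simplified caricature of a computational-power-driven material flow projection: compute demand is converted to a server fleet, and the fleet retired each year becomes e-waste mass. The structure loosely mirrors the description above, but every parameter and the growth scenario are invented placeholders, not values from the study:

```python
# Toy caricature of a CP-MFA-style projection: compute demand -> servers
# -> retired mass per year. All parameters are invented placeholders,
# NOT values from the Nature Computational Science study.

SERVER_MASS_KG = 30.0       # assumed mass of one AI server
SERVER_LIFETIME_YRS = 3     # assumed service life before retirement
FLOPS_PER_SERVER = 1e15     # assumed sustained compute per server

def servers_needed(compute_demand_flops: float) -> float:
    """Fleet size required to meet a given sustained compute demand."""
    return compute_demand_flops / FLOPS_PER_SERVER

def ewaste_kg_per_year(compute_demand_flops: float) -> float:
    """Steady-state e-waste: the fraction of the fleet retired each year."""
    fleet = servers_needed(compute_demand_flops)
    return fleet / SERVER_LIFETIME_YRS * SERVER_MASS_KG

# Hypothetical aggressive-growth scenario: demand grows 50% per year.
demand = 1e20
for year in range(2023, 2031):
    print(year, f"{ewaste_kg_per_year(demand):.3e} kg")
    demand *= 1.5
```

The real framework layers in scenario ranges, geopolitical restrictions and turnover rates, but the unit conversion from compute to mass is the core idea.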

“We considered various future scenarios of GAI development, ranging from the most aggressive to the most conservative growth,” he tells Physics World. “We also incorporated factors such as geopolitical restrictions and server lifecycle turnover.”

Using this CP-MFA framework, the researchers estimate that the total amount of Gen AI-related e-waste produced between 2023 and 2030 could reach the level of 5 million tons in a “worst-case” scenario where AI finds the most widespread applications.

A range of mitigation measures

That worst-case scenario is far from inevitable, however. Writing in Nature Computational Science, the researchers also modelled the effectiveness of different e-waste management strategies. Among the strategies they studied were increasing the lifespan of existing computing infrastructures through regular maintenance and upgrades; reusing or remanufacturing key components; and improving recycling processes to recover valuable materials in a so-called “circular economy”.

Taken together, these strategies could reduce e-waste generation by up to 86%, according to the team’s calculations. Investing in more energy-efficient technologies and optimizing AI algorithms could also significantly reduce the computational demands of LLMs, Tzachor adds, and would reduce the need to update hardware so frequently.
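The combined effect of the levers in the paragraph above can be illustrated with a toy calculation; the individual percentages below are invented placeholders (only the headline figure of up to 86% comes from the study):

```python
# Toy illustration of combining mitigation levers multiplicatively.
# All lever values are invented placeholders, NOT the study's figures.

def mitigated_ewaste(base_kg: float,
                     lifetime_extension: float = 1.5,  # 1.5x longer service
                     reuse_fraction: float = 0.3,      # components reused
                     recycle_fraction: float = 0.5) -> float:
    """Annual e-waste remaining after applying the three levers."""
    after_lifetime = base_kg / lifetime_extension   # fewer retirements/yr
    after_reuse = after_lifetime * (1 - reuse_fraction)
    return after_reuse * (1 - recycle_fraction)

base = 1_000_000.0  # kg/yr, arbitrary baseline
print(f"reduction: {1 - mitigated_ewaste(base) / base:.0%}")
```

Because the levers compound, several moderate measures can together approach the large reductions the team reports.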

Another mitigation strategy would be to design AI infrastructure in a way that uses modular components, which Tzachor says are easier to upgrade and recycle. “Encouraging policies that promote sustainable manufacturing practices, responsible e-waste disposal and extended producer responsibility programmes can also play a key role in reducing e-waste,” he explains.

As well as helping policymakers create regulations that support sustainable AI development and effective e-waste management, the study should also encourage AI developers and hardware manufacturers to adopt circular economy principles, says Tzachor. “On the academic side, it could serve as a foundation for future research aimed at exploring the environmental impacts of AI applications other than LLMs and developing more comprehensive sustainability frameworks in general.”

The post Generative AI has an electronic waste problem, researchers warn appeared first on Physics World.

Conditioning prepares aluminium-ion batteries for real-world use

Imagine a smartphone that charges faster, lasts longer and is more eco-friendly – all at a lower cost. Aluminium-ion batteries (AIBs) could make this dream a reality, and scientists are working to unlock their potential as a more abundant, affordable and sustainable alternative to the lithium-ion batteries currently used in mobile devices, electric cars and large-scale energy storage. As part of this effort, Dmitrii A Rakov and colleagues at the University of Queensland, Australia, recently overcame a technical hurdle with an AIB component called the solid-electrolyte interphase. Their insights could help AIBs match, or even surpass, the performance of their lithium-ion counterparts.

Like lithium-ion batteries, AIBs contain an anode, a cathode and an electrolyte. The electrolyte carries aluminium ions, which flow between the negatively charged anode and the positively charged cathode. During discharge, these ions move from the anode to the cathode, generating energy. Charging the battery reverses the process, with ions returning to the anode to store energy.

The promise and the problem

Sounds simple, right? But when it comes to making AIBs work effectively, this process is far from straightforward.

Aluminium is a promising anode material – it is lightweight and stores a lot of energy for its size, giving it a high energy density. The problem is that AIBs are prone to instabilities as they cycle between charging and discharging. During this cycling, aluminium can deposit unevenly on the anode, forming tree-like structures called dendrites that cause short circuits, leading to battery failure or even safety risks.

Researchers have been tackling these issues for years, trying to figure out how to get aluminium to deposit more evenly and stop dendrites from forming. An emerging focus of this work is something called the solid-electrolyte interphase (SEI).  This thin layer of organic and inorganic components forms on the anode as the battery charges, and like the protective seal on a jar of jam, it keeps everything inside fresh and functioning well.

In AIBs, though, the SEI sometimes forms unevenly or breaks, like a seal on a jar that doesn’t close properly. When that happens, the aluminium inside can misbehave, leading to performance issues. To complicate things further, the type of “jam” in the jar – different electrolytes, like chloroaluminate ionic liquids – affects how well this seal forms. Some electrolytes help create a better seal, while others make it harder to keep the aluminium deposits stable.

Cracking the code of aluminium deposition

In their study, which is published in ACS Nano, the Queensland scientists, together with colleagues at the University of Southern Queensland and Oak Ridge National Laboratory in the US, focused on how the aluminium anode interacts with the liquid electrolyte.  They found that the formation of the SEI layer is highly dependent on the current running through the battery and the type of counter electrode (the “partner” to the aluminium anode). Some currents and conditions allow the battery to work well for more cycles. But under other conditions, aluminium can build up in uneven, dendritic structures that ultimately cause the battery to fail.

Work in progress: Assembling a cell for testing. (Courtesy: Dmitrii Rakov)

To understand how this happens, the researchers investigated how different electrolytes and cycling conditions affect the SEI layer. They discovered that in some cases, when the SEI isn’t forming evenly, aluminium oxide (Al2O3) – which is normally a protective layer – can actually aggravate the problem by causing the aluminium to deposit unevenly. They also found that low currents can deplete some materials in the electrolyte, leading to parasitic reactions that further reduce the battery’s efficiency.

To solve these issues, the scientists recommend exploring different aluminium-alloy chemistries. They also suggest that specific conditioning protocols could smooth out the SEI layer and improve the cycling performance. One example of such a conditioning protocol is pre-cycling, which is a process where the battery is charged and discharged in a controlled way before regular use to condition it for better long-term performance.

“Our research demonstrates that, like in lithium-ion batteries, aluminium-ion batteries also need pre-cycling to maximize their lifetime,” Rakov tells Physics World. “This is important knowledge for aluminium-ion battery developers, who are rapidly emerging as start-ups around the world.”

By understanding the unique pre-cycling needs of aluminium-ion batteries, developers can work to design batteries that last longer and perform more reliably, bringing them closer to real-world applications.

How far are we from having an aluminium-ion battery in our mobile phones?

As for when those applications might become a reality, Rakov highlights that AIBs are still in the early stages of development, and many studies test them under conditions that aren’t realistic for everyday use. Often, these tests use very small amounts of active materials and extra electrolyte, which can make the batteries seem more durable than they might be in real life.

In this study, Rakov and colleagues focused on understanding how aluminium-ion batteries might degrade when handling higher energy loads and stronger currents, similar to what they would face in practical use. “We found that different types of positive electrode materials lead to different types of battery failure, but by using special pre-cycling steps, we were able to reduce these issues,” Rakov says.

The post Conditioning prepares aluminium-ion batteries for real-world use appeared first on Physics World.

Delayed Big Bang for dark matter could be detected in gravitational waves

New constraints on a theory that says dark matter was created just after the Big Bang – rather than at the Big Bang – have been determined by Richard Casey and Cosmin Ilie at Colgate University in the US. The duo calculated the full range of parameters in which a “Dark Big Bang” could fit into the observed history of the universe. They say that evidence of this delayed creation could be found in gravitational waves.

Dark matter is a hypothetical substance that is believed to play an important role in the structure and dynamics of the universe. It appears to account for about 27% of the mass–energy in the cosmos and is part of the Standard Model of cosmology. However, dark matter particles have never been observed directly.

The Standard Model also says that the entire contents of the universe emerged nearly 14 billion years ago in the Big Bang. Yet in 2023, Katherine Freese and Martin Winkler at the University of Texas at Austin introduced a captivating new theory, which suggests that the universe’s dark matter may have been created after the Big Bang.

Evidence comes later on

Freese and Winkler pointed out that the presence of photons and normal matter (mostly protons and neutrons) can be inferred from almost immediately after the Big Bang. However, the earliest evidence for dark matter comes from later on, when it began to exert its gravitational influence on normal matter. As a result, the duo proposed that dark matter may have appeared in a second event called the Dark Big Bang.

“In Freese and Winkler’s model, dark matter particles can be produced as late as one month after the birth of our universe,” Ilie explains. “Moreover, dark matter particles produced via a Dark Big Bang do not interact with regular matter except via gravity. Thus, this model could explain why all attempts at detecting dark matter – either directly, indirectly, or via particle production – have failed.”

According to this theory, dark matter particles are generated by a certain type of scalar field. This is an energy field that has a single value at every point in space and time (a familiar example is the field describing gravitational potential energy). Initially, each point of this scalar field would have occupied a local minimum in its energy potential. However, these points could have then transitioned to lower-energy minima via quantum tunnelling. During this transition, the energy difference between the two minima would be released, producing particles of dark matter.

Consistent with observations

Building on this idea, Casey and Ilie looked at how predictions of the Dark Big Bang model could be consistent with astronomers’ observations of the early universe.

“By focusing on the tunnelling potentials that lead to the Dark Big Bang, we were able to exhaust the parameter space of possible cases while still allowing for many different types of dark matter candidates to be produced from this transition,” Casey explains. “Aside from some very generous mass limits, the only major constraint on dark matter in the Dark Big Bang model is that it interacts with everyday particles through gravity alone.” This is encouraging because this limited interaction is what physicists expect of dark matter.

For now, the duo’s results suggest that the Dark Big Bang is far less constrained by past observations than Freese and Winkler originally anticipated. As Ilie explains, their constraints could soon be put to the test.

“We examined two Dark Big Bang scenarios in this newly found parameter space that produce gravitational wave signals in the sensitivity ranges of existing and upcoming surveys,” he says. “In combination with those considered in Freese and Winkler’s paper, these cases could form a benchmark for gravitational wave researchers as they search for evidence of a Dark Big Bang in the early universe.”

Subtle imprint on space–time

If a Dark Big Bang happened, then the gravitational waves it produced would have left a subtle imprint on the fabric of space–time. With this clearer outline of the Dark Big Bang’s parameter space, several soon-to-be active observational programmes will be well equipped to search for these characteristic imprints.

“For certain benchmark scenarios, we show that those gravitational waves could be detected by ongoing or upcoming experiments such as the International Pulsar Timing Array (IPTA) or the Square Kilometre Array Observatory (SKAO). In fact, the evidence of background gravitational waves reported in 2023 by the NANOGrav experiment – part of the IPTA – could be attributed to a Dark Big Bang realization,” Casey says.

If these studies find conclusive evidence for Freese and Winkler’s original theory, Casey and Ilie’s analysis could ultimately bring us a step closer to a breakthrough in our understanding of the ever-elusive origins of dark matter.

The research is described in Physical Review D.

The post Delayed Big Bang for dark matter could be detected in gravitational waves appeared first on Physics World.

Venkat Srinivasan: ‘Batteries are largely bipartisan’

Which battery technologies are you focusing on at Argonne?

We work on everything. We work on lead-acid batteries, a technology that’s 100 years old, because the research community is saying, “If only we could solve this problem with cycle life in lead-acid batteries, we could use them for energy storage to add resilience to the electrical grid.” That’s an attractive prospect because lead-acid batteries are extremely cheap, and you can recycle them easily.

We work a lot on lithium-ion batteries, which is what you find in your electric car and your cell phone. The big challenge there is that lithium-ion batteries use nickel and cobalt, and while you can get nickel from a few places, most of the cobalt comes from the Democratic Republic of Congo, where there are safety and environmental concerns about exactly how that cobalt is being mined, and who is doing the mining. Then there’s lithium itself. The supply chain for lithium is concentrated in China, and we saw during COVID the problems that can cause. You have one disruption somewhere and the whole supply chain collapses.

We’re also looking at technologies beyond lithium-ion batteries. If you want to start using batteries for aviation, you need long range, and for that you have to increase energy density. So we work on things like solid-state batteries.

Finally, we are working on what I would consider really “out there” technologies, where it might be 20 years before we see them used. Examples might be lithium-oxygen or lithium-sulphur batteries, but there’s also a move to go beyond lithium because of the supply chain issues I mentioned. One alternative might be to switch to sodium-based batteries. There’s a big supply of soda ash in the US, which is the raw material for sodium, and sodium batteries would allow us to eliminate cobalt while using very little nickel. If we can do that, the US can be completely reliant on its own domestic minerals and materials for batteries.

What are the challenges associated with these different technologies?

Frankly, every chemistry has its challenges, but I can give you an example.

If you look at the periodic table, the most electropositive element is lithium, while the most electronegative is fluorine. So you might think the ultimate battery would be lithium-fluorine. But in practice, nobody should be using fluorine – it’s super dangerous. The next best option is lithium-oxygen, which is nice because you can get oxygen from the air, although you have to purify it first. The energy density of a lithium-oxygen battery is comparable to that of gasoline, and that is why people have been trying to make solid-state lithium-metal batteries since before I was born.

Photo of Arturo Gutierrez and Venkat Srinivasan. Gutierrez is wearing safety glasses and a white lab coat and has his arms inside a glovebox while Srinivasan looks on
Building batteries: Venkat Srinivasan (right) discusses battery research with materials scientist Arturo Gutierrez in one of the energy storage discovery labs at Argonne National Laboratory. (Courtesy: Argonne National Laboratory)

The problem is that when you charge a battery with a lithium metal anode, lithium deposits on the metal surface, and unfortunately it doesn’t create a thin, planar layer. Instead, it forms these needle-like structures called dendrites, which grow through the battery’s separator and short the cell. Battery shorting is never a good thing.

Now, if you put a mechanically hard material next to the lithium metal, you can stop the dendrites from growing through. It’s like putting in a concrete wall next to the roots of a tree to stop the roots growing into the other side. But if you have a crack in your concrete wall, the roots will find a way – they will actually crack the concrete – and exactly the same thing happens with dendrites.

So the question becomes, “Can we make a defect-free electrolyte that will stop the dendrites?” Companies have taken a shot at this, and on the small scale, things look great: if you’re making one or two devices, you can have incredible control. But in a large-format manufacturing setup where you’re trying to make hundreds of devices per second, even a single defect can come back to bite you. Going from the lab scale to the manufacturing scale is such a challenge.

What are the major goals in battery research right now?

It depends on the application. For electric cars, we still have to get the cost down, and my sense is that we’ll ultimately need batteries that charge in five minutes because that’s how long it takes to refuel a gasoline-powered car. I worry about safety, too, and of course there’s the supply-chain issue I mentioned.

But if you forget about supply chains for a second, I think if we can get fast charging with incredibly safe batteries while reducing the cost by a factor of two, we are golden. We’ll be able to do all sorts of things.

A researcher holding a plug kneels next to an electric car. The car has a sign on the front door that reads "Argonne research vehicle"
Charging up: Developing better batteries for electric vehicles is a major goal of research in Argonne’s ACCESS collaboration. (Courtesy: Argonne National Laboratory)

For aviation, it’s a different story. We think the targets are anywhere from increasing energy density by a factor of two for the air taxi market, all the way to a factor of six if you want an electric 737 that can fly from Chicago to Washington, DC with 75 passengers. That’s kind of hard. It may be impossible. You can go for a hybrid design, in which case you will not need as much energy density, but you need a lot of power density because even when you’re landing, you still have to defy gravity. That means you need power even when the vehicle is in its lowest state of charge.

The political landscape in the US is shifting as the Biden administration, which has been very focused on clean energy, makes way for a second presidential term for Donald Trump, who is not interested in reducing carbon emissions. How do you see that impacting battery research?

If you look at this question historically, ReCell, which is Argonne’s R&D centre for battery recycling, got established during the first Trump administration. Around the same time, we got the Federal Consortium for Advanced Batteries, which brought together the Department of Energy, the Department of Defense, the intelligence community, the State Department and the Department of Commerce. The reason all those groups were interested in batteries is that there’s a growing feeling that we need to have energy independence in the US when it comes to supply chains for batteries. It’s an important technology, there are lots of innovations, and we need to find a way to move them to market.

So that came about during the Trump administration, and then the Biden administration doubled down on it. What that tells me is that batteries are largely bipartisan, and I think that’s at least partly because you can have different motivations for buying them. Many of my neighbours aren’t particularly thinking about carbon emissions when they buy an electric vehicle (EV). They just want to go from zero to 60 in three seconds. They love the experience. Similarly, people love to be off-grid, because they feel like they’re controlling their own stuff. I suspect that because of this, there will continue to be largely bipartisan support for EVs. I remain hopeful that that’s what will happen.

  • Venkat Srinivasan will appear alongside William Mustain and Martin Freer at a Physics World Live panel discussion on battery technologies on 21 November 2024. Sign up here.

The post Venkat Srinivasan: ‘Batteries are largely bipartisan’ appeared first on Physics World.

UK plans £22bn splurge on carbon capture and storage

Further details have emerged over the UK government’s pledge to spend almost £22bn on carbon capture and storage (CCS) in the next 25 years. While some climate scientists feel the money is vital to decarbonise heavy industry, others have raised concerns about the technology itself, including its feasibility at scale and potential to extend fossil fuel use rather than expanding renewable energy and other low-carbon technologies.

In 2023 the UK emitted about 380 million tonnes of carbon dioxide equivalent and the government claims that CCS could remove more than 8.5 million tonnes each year as part of its effort to be net-zero by 2050. Although there are currently no commercial CCS facilities in the UK, last year the previous Conservative government announced funding for two industrial clusters: HyNet in Merseyside and the East Coast Cluster in Teesside.
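A quick back-of-envelope check puts that projected removal in context (a minimal sketch: the two input figures are the article's, the comparison is ours):

```python
# Share of 2023-level UK emissions covered by the projected CCS removal.
uk_emissions_mt = 380   # UK 2023 emissions, million tonnes CO2 equivalent
ccs_removal_mt = 8.5    # projected annual CCS removal, million tonnes
share_pct = ccs_removal_mt / uk_emissions_mt * 100
print(f"Projected CCS removal is ~{share_pct:.1f}% of 2023-level emissions")
```

That works out to a little over 2% of current annual emissions, which underlines why CCS is pitched at hard-to-abate industry rather than as a whole-economy fix.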

Projects at both clusters will capture carbon dioxide from various industrial sites, including hydrogen plants, a waste incinerator, a gas-fired power station and a cement works. The gas will then be transported down pipes to offshore storage sites, such as depleted oil and gas fields. According to the new Labour government, the plans will create 4000 jobs, with the wider CCS industry potentially supporting 50,000 roles.

Government ministers claim the strategy will make the UK a global leader in CCS and hydrogen production, and expect it to attract £8bn in private investment. Rachel Reeves, the chancellor, said in September that CCS is a “game-changing technology” that will “ignite growth”. The Conservatives’ strategy also included plans to set up two other clusters, but no progress has been made on these yet.

The new investment in CCS comes after advice from the independent Climate Change Committee, which said it is necessary for decarbonising the UK’s heavy industry and for the UK to reach its net-zero target. The International Energy Agency (IEA) and the Intergovernmental Panel on Climate Change have also endorsed CCS as critical for decarbonisation, particularly in heavy industry.

“The world is going to generate more carbon dioxide from burning fossil fuels than we can afford to dump into the atmosphere,” says Myles Allen, a climatologist at the University of Oxford. “It is utterly unrealistic to pretend otherwise. So, we need to scale up a massive global carbon dioxide disposal industry.” Allen adds, however, that discussions are needed about how CCS is funded. “It doesn’t make sense for private companies to make massive profits selling fossil fuels while taxpayers pay to clean up the mess.”

Out of options

Globally there are around 45 commercial facilities that capture about 50 million tonnes of carbon annually, roughly 0.14% of global emissions. According to the IEA, up to 435 million tonnes of carbon could be captured every year by 2030, depending on the progress of more than 700 announced CCS projects.
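The 0.14% figure is easy to verify, assuming global CO2 emissions of roughly 37 billion tonnes a year (that baseline is our assumption; the article quotes only the captured amount):

```python
# Captured CO2 as a share of assumed global annual emissions (~37 Gt/year).
captured_mt = 50              # million tonnes captured per year (from the article)
global_emissions_mt = 37_000  # assumed global emissions, million tonnes per year
print(f"{captured_mt / global_emissions_mt * 100:.2f}% of global emissions")
```
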

One key part of the UK government’s plans is to use CCS to produce so-called “blue” hydrogen. Most hydrogen is currently made by reacting methane from natural gas with steam over a catalyst, producing carbon monoxide and carbon dioxide as by-products. Blue hydrogen involves capturing and storing those by-products, thereby cutting carbon emissions.

But critics warn that blue hydrogen continues our reliance on fossil fuels and risks leaks along the natural gas supply chain. There are also concerns about its commercial feasibility. The Norwegian energy firm Equinor, which is set to build several UK-based hydrogen plants, has recently abandoned plans to pipe blue hydrogen to Germany, citing cost and lack of demand.

“The hydrogen pipeline hasn’t proved to be viable,” Equinor spokesperson Magnus Frantzen Eidsvold told Reuters, adding that its plans to produce hydrogen had been “put aside”. Shell has also scrapped plans for a blue hydrogen plant in Norway, saying that the market for the fuel had failed to materialise.

To meet our climate targets, we do face difficult choices. There is no easy way to get there

Jessica Jewell

According to the Institute for Energy Economics and Financial Analysis (IEEFA), CCS “is costly, complex and risky with a history of underperformance and delays”. It believes that money earmarked for CCS would be better spent on proven decarbonisation technologies such as building insulation, renewable power, heat pumps and electric vehicles. It says the UK’s plans will make it “more reliant on fossil gas imports” and send “the wrong signal internationally about the need to stop expanding fossil fuel infrastructure”.

After delays to several CCS projects in the EU, there are also questions around progress on its target to store 50 million tonnes of carbon by 2030. Press reports have recently revealed, for example, that a pipeline connecting Germany’s Rhine-Ruhr industrial heartland to a Dutch undersea carbon storage project will not come online until at least 2032.

Jessica Jewell, an energy expert at Chalmers University in Sweden, and colleagues have also found that CCS plants have a failure rate of about 90%, largely because of poor investment prospects (Nature Climate Change 14 1047). “If we want CCS to expand and be taken more seriously, we have to make projects more profitable and make the financial picture work for investors,” Jewell told Physics World.

Subsidies like the UK plan could do so, she says, pointing out that wind power, for example, initially benefited from government support to bring costs down. Jewell’s research suggests that by cutting failure rates and enabling CCS to grow at the pace wind power did in the 2000s, it could capture a “not insignificant” 600 gigatonnes of carbon dioxide by 2100, which could help decarbonise heavy industry.

That view is echoed by Marcelle McManus, director of the Centre for Sustainable Energy Systems at the University of Bath, who says that decarbonising major industries such as cement, steel and chemicals is challenging and will benefit from CCS. “We are in a crisis and need all of the options available,” she says. “We don’t currently have enough renewable electricity to meet our needs, and some industrial processes are very hard to electrify.”

Although McManus admits we need “some storage of carbon”, she says it is vital to “create the pathways and technologies for a defossilised future”. CCS alone is not the answer, and that, says Jewell, means rapidly expanding low-carbon technologies such as wind, solar and electric vehicles. “To meet our climate targets, we do face difficult choices. There is no easy way to get there.”

The post UK plans £22bn splurge on carbon capture and storage appeared first on Physics World.
