
Sustainability spotlight: PFAS unveiled

So-called “forever chemicals”, or per- and polyfluoroalkyl substances (PFAS), are widely used in consumer, commercial and industrial products, and have subsequently made their way into humans, animals, water, air and soil. Despite this ubiquity, there are still many unknowns regarding the potential human health and environmental risks that PFAS pose.

Join us for an in-depth exploration of PFAS with four leading experts who will shed light on the scientific advances and future challenges in this rapidly evolving research area.

Our panel will guide you through a discussion of PFAS classification and sources, the journey of PFAS through ecosystems, strategies for PFAS risk mitigation and remediation, and advances in the latest biotechnological innovations to address their effects.

Sponsored by Sustainability Science and Technology, a new journal from IOP Publishing that provides a platform for researchers, policymakers, and industry professionals to publish their research on current and emerging sustainability challenges and solutions.

Left to right: Jonas Baltrusaitis, Linda S. Lee, Clinton Williams, Sara Lupton, Jude Maul

Jonas Baltrusaitis, inaugural editor-in-chief of Sustainability Science and Technology, has co-authored more than 300 research publications on innovative materials. His work includes nutrient recovery from waste, the formulation and delivery of those recovered nutrients, and renewable energy-assisted catalysis for energy carrier and commodity chemical synthesis and transformations.

Linda S Lee is a distinguished professor at Purdue University with joint appointments in the Colleges of Agriculture (COA) and Engineering, program head of the Ecological Sciences & Engineering Interdisciplinary Graduate Program and COA assistant dean of graduate education and research. She joined Purdue in 1993 with degrees in chemistry (BS), environmental engineering (MS) and soil chemistry/contaminant hydrology (PhD) from the University of Florida. Her research spans chemical fate, analytical tools, waste reuse, bioaccumulation, and contaminant remediation and management strategies, with PFAS challenges driving much of her work for the last two decades. Her research is supported by a diverse funding portfolio, and she has published more than 150 papers, most in top-tier environmental journals.

Clinton Williams is the research leader of the Plant and Irrigation and Water Quality Research units at the US Arid Land Agricultural Research Center. He has been actively engaged in environmental research focusing on water quality and quantity for more than 20 years. Clinton looks for ways to increase water supplies through the safe use of reclaimed waters. His current research concerns the environmental and human health impacts of biologically active contaminants (e.g. PFAS, pharmaceuticals, hormones and trace organics) found in reclaimed municipal wastewater, and the associated impacts on soil, biota and natural waters in contact with that wastewater. He is also looking for ways to characterize the environmental loading patterns of these compounds while finding low-cost treatment alternatives that reduce their environmental concentrations, using byproducts capable of removing the compounds from water supplies.

Sara Lupton has been a research chemist with the Food Animal Metabolism Research Unit at the Edward T Schafer Agricultural Research Center in Fargo, North Dakota, part of the USDA Agricultural Research Service (ARS), since 2010. Sara’s background is in environmental analytical chemistry. She is the ARS lead scientist for the USDA’s Dioxin Survey, and her other research includes the fate of animal drugs and environmental contaminants in food animals and the investigation of environmental contaminant sources (feed, water, housing, etc.) that contribute to chemical residue levels in food animals. Sara has conducted research on the bioavailability, accumulation, distribution, excretion and remediation of PFAS compounds in food animals for more than 10 years.

Jude Maul received a master’s degree in plant biochemistry from the University of Kentucky and a PhD in horticulture and biogeochemistry from Cornell University in 2008. Since then he has been with the USDA-ARS as a research ecologist in the Sustainable Agriculture System Laboratory. Jude’s research focuses on molecular ecology at the plant/soil/water interface in the context of plant health, nutrient acquisition and productivity. Taking a systems approach to agroecosystem research, Jude leads the USDA-ARS-LTAR Soils Working Group, which is creating a national soils data repository, while his research results contribute to national soil health management recommendations.

About this journal

Sustainability Science and Technology is an interdisciplinary, open access journal dedicated to advances in science, technology, and engineering that can contribute to a more sustainable planet. It focuses on breakthroughs in all science and engineering disciplines that address one or more of the three sustainability pillars: environmental, social and/or economic.
Editor-in-chief: Jonas Baltrusaitis, Lehigh University, USA

The post Sustainability spotlight: PFAS unveiled appeared first on Physics World.

Start-stop operation and the degradation impact in electrolysis

This webinar will detail recent efforts in proton exchange membrane-based low-temperature electrolysis degradation, focusing on losses due to simulated start-stop operation and anode catalyst layer redox transitions. Ex situ testing indicated that repeated redox cycling accelerates catalyst dissolution, due to near-surface reduction and the higher dissolution kinetics of metals when cycling to high potentials. Similar results occurred in situ, where a large decrease in cell kinetics was found, along with iridium migrating from the anode catalyst layer into the membrane. Additional processes were also observed, including changes in catalyst oxidation, the formation of thinner and denser catalyst layers, and platinum migration from the transport layer coating. Complicating factors, including the loss of water flow and temperature control, were evaluated and found to produce higher rates of interfacial tearing and delamination. Current efforts focus on bridging these studies into a more relevant field test and include evaluating possible differences between catalyst reduction through an electrochemical process and through hydrogen exposure, either direct or via crossover. These studies seek to identify degradation mechanisms and the acceleration of voltage loss, and to demonstrate the impact of operational stops on electrolyzer lifetime.
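
As a rough illustration of what “simulated start-stop operation” means in practice, the sketch below generates a square-wave potential profile that repeatedly steps an anode between a low shutdown potential and a high operating potential. The potential limits, hold times and cycle count are placeholder assumptions chosen for illustration, not the protocol used in the work presented here.

```python
import numpy as np

# Illustrative square-wave potential profile for a simulated start-stop
# accelerated stress test. The potential limits, hold times and cycle count
# are placeholder assumptions, not the protocol from the presented work.
def start_stop_profile(n_cycles=500, v_low=0.0, v_high=1.8,
                       hold_low_s=30.0, hold_high_s=30.0, dt_s=1.0):
    """Return time (s) and potential (V) arrays for n_cycles of start-stop cycling."""
    low = np.full(int(hold_low_s / dt_s), v_low)     # simulated shutdown hold
    high = np.full(int(hold_high_s / dt_s), v_high)  # simulated operating hold
    potential = np.tile(np.concatenate([low, high]), n_cycles)
    time = np.arange(potential.size) * dt_s
    return time, potential

t, v = start_stop_profile()
print(f"{v.size} points over {t[-1] / 3600:.1f} h, "
      f"stepping between {v.min()} V and {v.max()} V")
```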

An interactive Q&A session follows the presentation.

Shaun Alia

Shaun Alia has worked in several areas related to electrochemical energy conversion and storage, including proton and anion exchange membrane-based electrolyzers and fuel cells, direct methanol fuel cells, capacitors, and batteries. His current research involves understanding electrochemical and degradation processes, component development, and materials integration and optimization. Within HydroGEN, a part of the U.S. Department of Energy’s Energy Materials Network, Alia has been involved in low-temperature electrolysis through NREL capabilities in materials development and ex situ and in situ characterization. He is also active in in situ durability, diagnostics, and accelerated stress test development for H2@Scale and H2NEW.

The post Start-stop operation and the degradation impact in electrolysis appeared first on Physics World.

How the operating window of LFP/Graphite cells affects their lifetime

Lithium iron phosphate (LFP) battery cells are ubiquitous in electric vehicles and stationary energy storage because they are cheap and have a long lifetime. This webinar will present our studies comparing 240 mAh LFP/graphite pouch cells undergoing charge-discharge cycles over five state of charge (SOC) windows (0%–25%, 0%–60%, 0%–80%, 0%–100%, and 75%–100%). To accelerate the degradation, elevated temperatures of 40°C and 55°C were used; at more realistic operating temperatures, LFP cells are expected to perform better, with longer lifetimes. In this study, we found that cycling LFP cells across a lower average SOC results in less capacity fade than cycling across a higher average SOC, regardless of depth of discharge. The primary capacity fade mechanism is loss of lithium inventory due to two processes: the reactivity of lithiated graphite with the electrolyte, which increases incrementally with SOC; and lithium alkoxide species causing iron dissolution and deposition on the negative electrode at high SOC, which further accelerates lithium inventory loss. Our results show that even low-voltage LFP systems (3.65 V) face a trade-off between average SOC and lifetime. Operating LFP cells at a lower average SOC could extend their lifetime substantially in both EV and grid storage applications.
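
To make the comparison concrete, here is a minimal sketch (not the presenters’ code) that tabulates the average SOC and depth of discharge for the five cycling windows listed above; it simply shows why windows with very different depths of discharge can still differ sharply in average SOC, the quantity the study links to capacity fade.

```python
# Minimal sketch (not the study's code): average state of charge (SOC) and
# depth of discharge (DOD) for the five cycling windows described above.
soc_windows = [(0, 25), (0, 60), (0, 80), (0, 100), (75, 100)]

for low, high in soc_windows:
    avg_soc = (low + high) / 2   # average SOC over the window, in %
    dod = high - low             # depth of discharge, in %
    print(f"{low:3d}-{high:3d}% window -> average SOC {avg_soc:5.1f}%, DOD {dod:3d}%")

# The 0-100% and 75-100% windows have very different DODs (100% vs 25%), but the
# 75-100% window has the higher average SOC (87.5% vs 50%), which is the quantity
# the study links to faster capacity fade.
```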

Eniko Zsoldos

Eniko Zsoldos is a 5th year PhD candidate in chemistry at Dalhousie University in the Jeff Dahn research group. Her current research focuses on understanding degradation mechanisms in a variety of lithium-ion cell chemistries (NMC, LFP, LMO) using techniques such as isothermal microcalorimetry and electrolyte analysis. Eniko received her undergraduate degree in nanotechnology engineering from the University of Waterloo. During her undergrad, she was a member of the Waterloo Formula Electric team, building an electric race car for FSAE student competitions. She has completed internships at Sila Nanotechnologies working on silicon-based anodes for batteries, and at Tesla working on dry electrode processing in Fremont, CA.

The Electrochemical Society

The post How the operating window of LFP/Graphite cells affects their lifetime appeared first on Physics World.

Generative AI has an electronic waste problem, researchers warn

The rising popularity of generative artificial intelligence (GAI), and in particular large language models such as ChatGPT, could produce a significant surge in electronic waste, according to new analyses by researchers in Israel and China. Without mitigation measures, the researchers warn that this stream of e-waste could reach 2.5 million tons (2.2 billion kg) annually by 2030, and potentially even more.

“Geopolitical factors, such as restrictions on semiconductor imports, and the trend for rapid server turnover for operational cost saving, could further exacerbate e-waste generation,” says study team member Asaf Tzachor, who studies existential risks at Reichman University in Herzliya, Israel.

GAI or Gen AI is a form of artificial intelligence that creates new content, such as text, images, music, or videos using patterns it has learned from existing data. Some of the principles that make this pattern-based learning possible were developed by the physicist John Hopfield, who shared the 2024 Nobel Prize for Physics with computer scientist and AI pioneer Geoffrey Hinton. Perhaps the best-known example of Gen AI is ChatGPT (the “GPT” stands for “generative pre-trained transformer”), which is an example of a Large Language Model (LLM).

While the potential benefits of LLMs are significant, they come at a price. Notably, they require so much energy to train and operate that some major players in the field, including Google and ChatGPT developer OpenAI, are exploring the possibility of building new nuclear reactors for this purpose.

Quantifying and evaluating Gen AI’s e-waste problem

Energy use is not the only environmental challenge associated with Gen AI, however. The amount of e-waste it produces – including printed circuit boards and batteries that can contain toxic materials such as lead and chromium – is also a potential issue. “While the benefits of AI are well-documented, the sustainability aspects, and particularly e-waste generation, have been largely overlooked,” Tzachor says.

Tzachor and his colleagues decided to address what they describe as a “significant knowledge gap” regarding how GAI contributes to e-waste. Led by sustainability scientist Peng Wang at the Institute of Urban Environment, Chinese Academy of Sciences, they developed a computational power-driven material flow analysis (CP-MFA) framework to quantify and evaluate the e-waste it produces. This involved modelling the computational resources required for training and deploying LLMs, explains Tzachor, and translating these resources into material flows and e-waste projections.
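
The CP-MFA framework itself is not reproduced here, but the basic accounting step it describes – turning a projected compute base into hardware and, eventually, retired mass – can be illustrated with a deliberately simplified sketch. Every number below (accelerators per server, server mass, lifespan, the growth scenario) is a placeholder assumption for illustration, not a value from the study, and a full analysis would also count storage, networking and cooling hardware.

```python
# Deliberately simplified sketch of the accounting idea behind a computational
# power-driven material flow analysis (CP-MFA): translate an installed base of
# AI accelerators into servers, then into hardware mass retired each year.
# All numbers are placeholder assumptions, not values from the study.

def annual_ewaste_mt(accelerators_in_service, accelerators_per_server=8,
                     server_mass_kg=130.0, server_lifespan_yr=4):
    """Hardware mass (million tonnes) retired per year for a given compute base."""
    servers = accelerators_in_service / accelerators_per_server
    retired_kg_per_year = servers * server_mass_kg / server_lifespan_yr
    return retired_kg_per_year / 1e9            # kg -> million tonnes

# Hypothetical growth of the installed accelerator base
for year, accelerators in {2024: 5e6, 2027: 30e6, 2030: 120e6}.items():
    print(f"{year}: ~{annual_ewaste_mt(accelerators):.2f} Mt of server e-waste per year")
```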

“We considered various future scenarios of GAI development, ranging from the most aggressive to the most conservative growth,” he tells Physics World. “We also incorporated factors such as geopolitical restrictions and server lifecycle turnover.”

Using this CP-MFA framework, the researchers estimate that the total amount of Gen AI-related e-waste produced between 2023 and 2030 could reach 5 million tons in a “worst-case” scenario in which AI finds the most widespread applications.

A range of mitigation measures

That worst-case scenario is far from inevitable, however. Writing in Nature Computational Science, the researchers also modelled the effectiveness of different e-waste management strategies. Among the strategies they studied were increasing the lifespan of existing computing infrastructures through regular maintenance and upgrades; reusing or remanufacturing key components; and improving recycling processes to recover valuable materials in a so-called “circular economy”.

Taken together, these strategies could reduce e-waste generation by up to 86%, according to the team’s calculations. Investing in more energy-efficient technologies and optimizing AI algorithms could also significantly reduce the computational demands of LLMs, Tzachor adds, and would reduce the need to update hardware so frequently.

Another mitigation strategy would be to design AI infrastructure in a way that uses modular components, which Tzachor says are easier to upgrade and recycle. “Encouraging policies that promote sustainable manufacturing practices, responsible e-waste disposal and extended producer responsibility programmes can also play a key role in reducing e-waste,” he explains.

As well as helping policymakers create regulations that support sustainable AI development and effective e-waste management, the study should also encourage AI developers and hardware manufacturers to adopt circular economy principles, says Tzachor. “On the academic side, it could serve as a foundation for future research aimed at exploring the environmental impacts of AI applications other than LLMs and developing more comprehensive sustainability frameworks in general.”

The post Generative AI has an electronic waste problem, researchers warn appeared first on Physics World.

UK plans £22bn splurge on carbon capture and storage

Further details have emerged over the UK government’s pledge to spend almost £22bn on carbon capture and storage (CCS) in the next 25 years. While some climate scientists feel the money is vital to decarbonise heavy industry, others have raised concerns about the technology itself, including its feasibility at scale and potential to extend fossil fuel use rather than expanding renewable energy and other low-carbon technologies.

In 2023 the UK emitted about 380 million tonnes of carbon dioxide equivalent and the government claims that CCS could remove more than 8.5 million tonnes each year as part of its effort to be net-zero by 2050. Although there are currently no commercial CCS facilities in the UK, last year the previous Conservative government announced funding for two industrial clusters: HyNet in Merseyside and the East Coast Cluster in Teesside.

Projects at both clusters will capture carbon dioxide from various industrial sites, including hydrogen plants, a waste incinerator, a gas-fired power station and a cement works. The gas will then be transported down pipes to offshore storage sites, such as depleted oil and gas fields. According to the new Labour government, the plans will create 4000 jobs, with the wider CCS industry potentially supporting 50,000 roles.

Government ministers claim the strategy will make the UK a global leader in CCS and hydrogen production and is expected to attract £8bn in private investment. Rachel Reeves, the chancellor, said in September that CCS is a “game-changing technology” that will “ignite growth”. The Conservatives’ strategy also included plans to set up two other clusters, but no progress has been made on these yet.

The new investment in CCS comes after advice from the independent Climate Change Committee, which said it is necessary for decarbonising the UK’s heavy industry and for the UK to reach its net-zero target. The International Energy Agency (IEA) and the Intergovernmental Panel on Climate Change have also endorsed CCS as critical for decarbonisation, particularly in heavy industry.

“The world is going to generate more carbon dioxide from burning fossil fuels than we can afford to dump into the atmosphere,” says Myles Allen, a climatologist at the University of Oxford. “It is utterly unrealistic to pretend otherwise. So, we need to scale up a massive global carbon dioxide disposal industry.” Allen adds, however, that discussions are needed about how CCS is funded. “It doesn’t make sense for private companies to make massive profits selling fossil fuels while taxpayers pay to clean up the mess.”

Out of options

Globally there are around 45 commercial facilities that capture about 50 million tonnes of carbon annually, roughly 0.14% of global emissions. According to the IEA, up to 435 million tonnes of carbon could be captured every year by 2030, depending on the progress of more than 700 announced CCS projects.
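
A quick back-of-envelope check of those shares, assuming global emissions of roughly 37 billion tonnes of CO2 a year (an illustrative assumption, not a figure from this article):

```python
# Back-of-envelope check of the capture shares quoted above.
global_emissions_mt = 37_000   # million tonnes CO2 per year (assumed for illustration)
captured_now_mt = 50           # current annual capture, from the article
captured_2030_mt = 435         # potential annual capture by 2030, from the article

print(f"Today: {captured_now_mt / global_emissions_mt:.2%} of emissions")
print(f"2030:  {captured_2030_mt / global_emissions_mt:.2%} of emissions")
# ~0.14% today, rising to a little over 1% even if all announced projects deliver.
```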

One key part of the UK government’s plans is to use CCS to produce so-called “blue” hydrogen. Most hydrogen is currently made by reacting methane from natural gas with steam over a catalyst, producing carbon monoxide and carbon dioxide as by-products. Blue hydrogen involves capturing and storing those by-products, thereby cutting carbon emissions.

But critics warn that blue hydrogen continues our reliance on fossil fuels and risks leaks along the natural gas supply chain. There are also concerns about its commercial feasibility. The Norwegian energy firm Equinor, which is set to build several UK-based hydrogen plants, has recently abandoned plans to pipe blue hydrogen to Germany, citing cost and lack of demand.

“The hydrogen pipeline hasn’t proved to be viable,” Equinor spokesperson Magnus Frantzen Eidsvold told Reuters, adding that its plans to produce hydrogen had been “put aside”. Shell has also scrapped plans for a blue hydrogen plant in Norway, saying that the market for the fuel had failed to materialise.

To meet our climate targets, we do face difficult choices. There is no easy way to get there

Jessica Jewell

According to the Institute for Energy Economics and Financial Analysis (IEEFA), CCS “is costly, complex and risky with a history of underperformance and delays”. It believes that money earmarked for CCS would be better spent on proven decarbonisation technologies such as buildings insulation, renewable power, heat pumps and electric vehicles. It says the UK’s plans will make it “more reliant on fossil gas imports” and send “the wrong signal internationally about the need to stop expanding fossil fuel infrastructure”.

After delays to several CCS projects in the EU, there are also questions around progress on its target to store 50 million tonnes of carbon by 2030. Press reports have recently revealed, for example, that a pipeline connecting Germany’s Rhine-Ruhr industrial heartland to a Dutch undersea carbon storage project will not come online until at least 2032.

Jessica Jewell, an energy expert at Chalmers University in Sweden, and colleagues have also found that CCS plants have a failure rate of about 90%, largely because of poor investment prospects (Nature Climate Change 14 1047). “If we want CCS to expand and be taken more seriously, we have to make projects more profitable and make the financial picture work for investors,” Jewell told Physics World.

Subsidies like the UK plan could do so, she says, pointing out that wind power, for example, initially benefited from government support to bring costs down. Jewell’s research suggests that by cutting failure rates and enabling CCS to grow at the pace wind power did in the 2000s, it could capture a “not insignificant” 600 gigatonnes of carbon dioxide by 2100, which could help decarbonise heavy industry.

That view is echoed by Marcelle McManus, director of the Centre for Sustainable Energy Systems at the University of Bath, who says that decarbonising major industries such as cement, steel and chemicals is challenging and will benefit from CCS. “We are in a crisis and need all of the options available,” she says. “We don’t currently have enough renewable electricity to meet our needs, and some industrial processes are very hard to electrify.”

Although McManus admits we need “some storage of carbon”, she says it is vital to “create the pathways and technologies for a defossilised future”. CCS alone is not the answer and that, says Jewell, means rapidly expanding low carbon technologies like wind, solar and electric vehicles. “To meet our climate targets, we do face difficult choices. There is no easy way to get there.”

The post UK plans £22bn splurge on carbon capture and storage appeared first on Physics World.

Space-based solar power: ‘We have nothing to lose and everything to gain’

The most important and pressing issue of our times is the transition to clean energy while meeting rising global demand. Cheap, abundant and reliable energy underpins the quality of life for all – and one potentially exciting way to do this is space-based solar power (SBSP). It would involve capturing sunlight in space and beaming it as microwaves down to Earth, where it would be converted into electricity to power the grid.

For proponents of SBSP such as myself, it’s a hugely promising technology. Others, though, are more sceptical. Earlier this year, for example, NASA published a report from its Office of Technology, Policy and Strategy that questioned the cost and practicality of SBSP. Henri Barde, a retired engineer who used to work for the European Space Agency (ESA) in Noordwijk, the Netherlands, has also examined the technical challenges in a report for the IEEE.

Some of these sceptical positions on SBSP were addressed in a recent Physics World article by James McKenzie. Conventional solar power is cheap, he argued, so why bother putting large solar power satellites in space? After all, the biggest barriers to building more solar plants here on Earth aren’t technical, but mostly come in the form of belligerent planning officials and local residents who don’t want their views ruined.

However, in my view we need to take a whole-energy-system perspective to see why innovation is essential for the energy transition. Wind, solar and batteries are “low-density” renewables, requiring many tonnes of minerals to be mined and refined for each megawatt-hour of energy. How can this be sustainable and give us energy security, especially when so much of our supply of these minerals depends on production in China?

Low-density renewables also require a Herculean expansion in electricity grid transmission pylons and cables to connect them to users. Other drawbacks of wind and solar are that they depend on the weather and require suitable storage – which currently does not exist at the capacity or cost needed. These forms of energy also need duplicated back-up, which is expensive, and other sources of baseload power for times when it’s cloudy or there’s no wind.

Look to the skies

With no night or weather in space, however, a solar panel in space generates 13 times as much energy as the same panel on Earth. SBSP, if built, would generate power continuously, transmitted as microwaves through the atmosphere with almost no loss. It could therefore deliver baseload power 24 hours a day, irrespective of local weather conditions on Earth.
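
The 13-times figure can be rationalized with a rough capacity-factor comparison. The inputs below are illustrative assumptions (the solar constant in orbit with near-continuous illumination, versus a 1 kW/m² peak and a ~11% capacity factor for a ground installation at UK-like latitudes), not numbers from this article.

```python
# Rough illustration of why a panel in orbit can yield on the order of 13 times
# more energy than the same panel on the ground. All inputs are assumptions.
orbital_irradiance = 1361      # W/m^2, solar constant above the atmosphere
orbital_availability = 1.0     # near-continuous illumination in a suitable orbit

ground_peak_irradiance = 1000  # W/m^2, standard test condition at the surface
ground_capacity_factor = 0.11  # assumed capacity factor for UK-like latitudes

ratio = (orbital_irradiance * orbital_availability) / (
    ground_peak_irradiance * ground_capacity_factor)
print(f"Space-to-ground energy ratio: ~{ratio:.0f}x")   # ~12x with these assumptions
```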

Another advantage of SBSP is that it could easily produce more or less power as needed, effectively smoothing out the unpredictable and varying output from wind and solar. We currently do this using fossil-fuel-powered gas-fired “peaker” plants, which could therefore be put out to pasture. SBSP is also scalable, allowing the energy it produces to be easily exported to other nations without expensive cables, giving it a truly global impact.

A recent whole-energy-system study by researchers at Imperial College London concluded that introducing just 8 GW of SBSP into the UK’s energy mix would deliver system savings of over £4bn every year. In my view, which is shared by others too, the utility of SBSP is likely to be even greater when considering whole continents or global alliances. It can give us affordable and reliable clean energy.

My firm, Space Solar, has designed a solar-power satellite called CASSIOPeiA, which is more than twice as powerful – based on the key metric of power per unit mass – as ESA’s design. So far, we have built and successfully demonstrated our power beaming technology, and following £5m of engineering design work, we have arguably the most technically mature design in the world.

If all goes to plan, we’ll have our first commercial product by 2029. Offering 30 MW of power, it could be launched by a single Starship rocket, and scale to gigawatt systems from there. Sure, there are engineering challenges, but these are mostly based on ensuring that the economics remain competitive. Space Solar is also lucky in having world-class experts working in spacecraft engineering, advanced photovoltaics, power beaming and in-space robotics.

Brighter and better

But why then was NASA’s study so sceptical of SBSP? I think it was because the report made absurdly conservative assumptions about the economics. NASA assumed an operating life of only 10 years, so to run for 30 years the whole solar power satellite would have to be built and launched three times. Yet satellites today generally last for more than 25 years, with most baselined for a minimum 15-year life.

The NASA report also assumed that launch costs on Starship would remain at around $1500/kg. However, other independent analyses, such as “Space: the dawn of a new age” produced in 2022 by Citi Group, have forecast that it will be an order of magnitude less – at just $100/kg – by 2040. I could go on, as there are plenty more examples of risk-averse thinking in the NASA report.
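
To see why those two assumptions dominate the economics, here is a toy sensitivity sketch that simply multiplies launch mass, launch price and the number of rebuilds implied by the assumed satellite lifetime. The 2000-tonne satellite mass and both scenarios are illustrative assumptions, not figures from either report.

```python
# Toy sensitivity check: how the assumed launch price and satellite lifetime
# change the launch bill for a 30-year programme. The satellite mass and the
# scenario values are illustrative assumptions, not figures from either report.
def launch_cost_bn(sat_mass_t, price_per_kg, sat_life_yr, programme_yr=30):
    rebuilds = -(-programme_yr // sat_life_yr)   # ceiling division: builds needed
    return sat_mass_t * 1000 * price_per_kg * rebuilds / 1e9

scenarios = {
    "Conservative (10-year life, $1500/kg)": (2000, 1500, 10),
    "Optimistic (25-year life, $100/kg)":    (2000, 100, 25),
}
for name, (mass_t, price, life) in scenarios.items():
    print(f"{name}: ~${launch_cost_bn(mass_t, price, life):.1f}bn in launch costs")
```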

Buried in the report, however, the study also looked at more reasonable scenarios than the “baseline” and concluded that “these conditions would make SBSP systems highly competitive with any assessed terrestrial renewable electricity production technology’s 2050 cost projections”. Curiously, these findings did not make it into the executive summary.

The NASA study has been widely criticized, including by former NASA physicist John Mankins, who invented another approach to space solar dubbed SPS Alpha. Speaking on a recent episode of the DownLink podcast, he suspected NASA’s gloomy stance may in part be because it focuses on space tech and space exploration rather than energy for Earth. NASA bosses might fear that if they were directed by Congress to pursue SBSP, money for other priorities might be at risk.

I also question Barde’s sceptical opinion of the technology of SBSP, which he expressed in an article for IEEE Spectrum. Barde appeared not to understand many of the design features that make SBSP technically feasible. He wrote, for example, about “gigawatts of power coursing through microwave systems” of the solar panels on the satellite, which sounds ominous and challenging to achieve.

In reality, the gigawatts of sunlight are reflected onto a large area of photovoltaics containing a billion or so solar cells. Each cell, which includes an antenna and electronic components to convert the sunlight into microwaves, is arranged in a sandwich module just a few millimetres thick, handling just 2 W of power. So although the satellite delivers gigawatts overall, the figure is much lower at the component level. What’s more, each cell can be made using tried and tested radio-frequency components.
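
The per-cell figure follows from simple division; a two-line check, taking “gigawatts” as 2 GW and “a billion or so” as 10^9 cells purely for illustration:

```python
# Sanity check on the per-cell power figure quoted above (illustrative inputs).
total_rf_power_w = 2e9    # ~2 GW of microwave power across the whole array (assumed)
number_of_cells = 1e9     # "a billion or so" cells, taken as 1e9 for illustration
print(f"Power handled per cell: ~{total_rf_power_w / number_of_cells:.0f} W")
```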

As for Barde’s fears about thermal management – in other words, how we can stop the satellite from overheating – that has already been analysed in detail. The plan is to use passive radiative cooling without active systems. Barde also warns of temperature swings as the satellites pass through eclipse during the spring and autumn equinox. But this problem is common to all satellites and has, in any case, been analysed as part of our engineering work. In essence, Barde’s claim of “insurmountable technical difficulties” is simply his opinion.

Until the first solar power satellite is commissioned, there will always be sceptics of what we are doing. However, that was also true of reusable rockets and cubesats, both of which are now mainstream technology. SBSP is a “no-regrets” investment that will see huge environmental and economic benefits, with spin-off technologies in wireless power beaming, in-space assembly and photovoltaics.

It is the ultimate blend of space technology and societal benefit, which will inspire the next generation of students into physics and engineering. Currently, the UK has a leadership position in SBSP, and if we have the vision and ambition, there is nothing to lose and everything to gain from backing this. We just need to get on with the job.

The post Space-based solar power: ‘We have nothing to lose and everything to gain’ appeared first on Physics World.

Bursts of embers play outsized role in wildfire spread, say physicists

New field experiments carried out by physicists in California’s Sierra Nevada mountains suggest that intermittent bursts of embers play an unexpectedly large role in the spread of wildfires, calling into question some aspects of previous fire models. While this is not the first study to highlight the importance of embers, it does indicate that standard modelling tools used to predict wildfire spread may need to be modified to account for these rare but high-impact events.

Embers form during a wildfire due to a combination of heat, wind and flames. Once lofted into the air, they can travel long distances and may trigger new “spot fires” when they land. Understanding ember behaviour is therefore important for predicting how a wildfire will spread and helping emergency services limit infrastructure damage and prevent loss of life.

Watching it burn

In their field experiments, Tirtha Banerjee and colleagues at the University of California Irvine built a “pile fire” – essentially a bonfire fuelled by a representative mixture of needles, branches, pinecones and pieces of wood from ponderosa pine and Douglas fir trees – in the foothills of the Sierra Nevada mountains. A high-speed (120 frames per second) camera recorded the fire’s behaviour for 20 minutes, and the researchers placed aluminium baking trays around it to collect the embers it ejected.

After they extinguished the pile fire, the researchers brought the ember samples back to the laboratory and measured their size, shape and density. Footage from the camera enabled them to estimate the fire’s intensity based on its height. They also used a technique called particle tracking velocimetry to follow firebrands and calculate their trajectories, velocities and accelerations.
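
Particle tracking velocimetry ultimately comes down to linking the same ember across successive frames and differencing its positions. The sketch below shows only that final step, assuming a track has already been linked and converted from pixels to metres; the sample trajectory is invented for illustration and the 120 frames per second matches the camera rate quoted above.

```python
import numpy as np

# Minimal sketch of the velocity/acceleration step in particle tracking
# velocimetry: finite-difference one tracked ember's positions. The trajectory
# below is invented for illustration.
FPS = 120.0                # camera frame rate quoted in the article
dt = 1.0 / FPS

# (x, y) position of one tracked ember in metres, one row per frame (illustrative)
track = np.array([[0.00, 1.00],
                  [0.02, 1.05],
                  [0.05, 1.11],
                  [0.09, 1.18],
                  [0.14, 1.26]])

velocity = np.gradient(track, dt, axis=0)         # m/s, central differences
acceleration = np.gradient(velocity, dt, axis=0)  # m/s^2
print("Speed per frame (m/s):", np.linalg.norm(velocity, axis=1).round(2))
```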

Highly intermittent ember generation

Based on the footage, the team concluded that ember generation is highly intermittent, with occasional bursts containing orders of magnitude more embers than were ejected at baseline. Existing models do not capture such behaviour well, says Alec Petersen, an experimental fluid dynamicist at UC Irvine and lead author of a Physics of Fluids paper on the experiment. In particular, he explains that models with a low computational cost often make simplifications in characterizing embers, especially with regard to fire plumes and ember shapes. This means that while they can predict how far an average firebrand with a certain size and shape will travel, the accuracy of those predictions is poor.
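
One simple way to quantify that kind of intermittency – offered here as an illustration rather than the authors’ actual analysis – is to count embers detected per frame and flag the frames that sit far above the typical count:

```python
import numpy as np

# Illustrative burst detection (not the authors' analysis): count embers per
# frame and flag frames far above the typical count. The synthetic record
# mimics a mostly quiet ejection history punctuated by a few extreme bursts.
rng = np.random.default_rng(0)
counts = rng.poisson(2, size=2000)                        # baseline embers per frame
counts[rng.choice(2000, size=5, replace=False)] += 80     # a few extreme bursts

threshold = counts.mean() + 5 * counts.std()
burst_frames = np.flatnonzero(counts > threshold)
share = counts[burst_frames].sum() / counts.sum()
print(f"{burst_frames.size} burst frames out of {counts.size}, "
      f"carrying {share:.0%} of all embers")
```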

“Although we care about the average behaviour, we also want to know more about outliers,” he says. “It only takes a single ember to ignite a spot fire.”

As an example of such an outlier, Petersen notes that sometimes a strong updraft from a fire plume coincides with the fire emitting a large number of embers. Similar phenomena occur in many types of turbulent flows, including atmospheric winds as well as buoyant fire plumes, and they are characterized by statistically infrequent but extreme fluctuations in velocity. While these fluctuations are rare, they could partially explain why the team observed large (>1 mm) firebrands travelling further than models predict, he tells Physics World.

This is important, Petersen adds, because large embers are precisely the ones with enough thermal energy to start spot fires. “Given enough chances, even statistically unlikely events can become probable, and we need to take such events into account,” he says.

New models, fresh measurements

The researchers now hope to reformulate operational models to do just this, but they acknowledge that this will be challenging. “Predicting spot fire risk is difficult and we’re only just scratching the surface of what needs to be included for accurate and useful predictions that can help first responders,” Petersen says.

They also plan to do more experiments in conjunction with a consortium of fire researchers that Banerjee set up. Beginning in November, when temperatures in California are cooler and the wildfire risk is lower, members of the new iFirenet consortium plan to collaborate on a large-scale field campaign at the UC Berkeley Research Forests. “We’ll have tonnes of research groups out there, measuring all sorts of parameters for our various projects,” Petersen says. “We’ll be trying to refine our firebrand tracking experiments too, using multiple cameras to track them in 3D, hopefully supplemented with a thermal camera to measure their temperatures.

“My background is in measuring and describing the complex dynamics of particles carried by turbulent flows,” Petersen continues. “I don’t have the same deep expertise studying fires that I do in experimental fluid dynamics, so it’s always a challenge to learn the best practices of a new field and to familiarize yourself with the great research folks have done in the past and are doing now. But that’s what makes studying fluid dynamics so satisfying – it touches so many corners of our society and world, there’s always something new to learn.”

The post Bursts of embers play outsized role in wildfire spread, say physicists appeared first on Physics World.

Reanimating the ‘living Earth’ concept for a more cynical world

By James Dacey

Tie-dye, geopolitical tension and a digitized Abba back on stage. Our appetite for revisiting the 1970s shows no signs of waning. Science writer Ferris Jabr has now reanimated another idea that captured the era’s zeitgeist: the concept of a “living Earth”. In Becoming Earth: How Our Planet Came to Life Jabr makes the case that our planet is far more than a lump of rock that passively hosts complex life. Instead, he argues that the Earth and life have co-evolved over geological time and that appreciating these synchronies can help us to steer away from environmental breakdown.

“We, and all living things, are more than inhabitants of Earth – we are Earth, an outgrowth of its structure and an engine of its evolution.” If that sounds like something you might hear in the early hours at a stone circle gathering, don’t worry. Jabr fleshes out his case with the latest science and journalistic flair in what is an impressive debut from the Oregon-based writer.

Becoming Earth is a reappraisal of the Gaia hypothesis, proposed in 1972 by British scientist James Lovelock and co-developed over several decades by US microbiologist Lynn Margulis. This idea of the Earth functioning as a self-regulating living organism has faced scepticism over the years, with many feeling it is untestable and strays into the realm of pseudoscience. In a 1988 essay, the biologist and science historian Stephen Jay Gould called Gaia “a metaphor, not a mechanism”.

Though undoubtedly a prodigious intellect, Lovelock was not your typical academic. He worked independently across fields including medical research, inventing the electron capture detector and consulting for petrochemical giant Shell. Add that to Gaia’s hippyish name – evoking the Greek goddess of Earth – and it’s easy to see why the theory faced a branding issue within mainstream science. Lovelock himself acknowledged errors in the theory’s original wording, which implied the biosphere acted with intention.

Though he makes due reference to the Gaia hypothesis, Jabr’s book is a standalone work, and in revisiting the concept in 2024, he has one significant advantage: we now have a tonne of scientific evidence for tight coupling between life and the environment. For instance, microbiologists increasingly speak of soil as a living organism because of the interconnections between micro-organisms and soil’s structure and function. Physicists meanwhile happily speak of “complex systems” where collective behaviour emerges from interactions of numerous components – climate being the obvious example.

To simplify this sprawling topic, Becoming Earth is structured into three parts: Rock, Water and Air. Accessible scientific discussions are interspersed with reportage, based on Jabr’s visits to various research sites. We kick off at the Sanford Underground Research Facility in South Dakota (also home to neutrino experiments) as Jabr descends 1500 m in search of iron-loving microbes. We learn that perhaps 90% of all microbes live deep underground and they transform Earth wherever they appear, carving vast caverns and regulating the global cycling of carbon and nutrients. Crucially, microbes also created the conditions for complex life by oxygenating the atmosphere.

In the Air section, Jabr scales the 1500 narrow steps of the Amazon Tall Tower Observatory to observe the forest making its own rain. Plants are constantly releasing water into the air through their leaves, and this drives more than half of the 20 billion tonnes of rain that fall on its canopy daily – more than the volume discharged by the Amazon river. “It’s not that Earth is a single living organism in exactly the same way as a bird or bacterium, or even a superorganism akin to an ant colony,” explains Jabr. “Rather that the planet is the largest known living system – the confluence of all other ecosystems – with structures, rhythms, and self-regulating processes that resemble those of its smaller constituent life forms. Life rhymes at every scale.”

When it comes to life’s capacity to alter its environment, not all creatures are born equal. Humans are having a supersized influence on these planetary rhythms despite appearing in recent geological history. Jabr suggests the Anthropocene – a proposed epoch defined by humanity’s influence on the planet – may have started between 50,000 and 10,000 years ago. At that time, our ancestors hunted mammoths and other megafauna into extinction, altering grassland habitats that had preserved a relatively cool climate.

Some of the most powerful passages in Becoming Earth concern our relationship with hydrocarbons. “Fossil fuel is essentially an ecosystem in an urn,” writes Jabr to illustrate why coal and oil store such vast amounts of energy. Elsewhere, on a beach in Hawaii an earth scientist and artist scoop up “plastiglomerates” – rocks formed from the eroded remains of plastic pollution fused with natural sediments. Humans have “forged a material that had never existed before”.

A criticism of the original Gaia hypothesis is that its association with a self-regulating planet may have fuelled a type of climate denialism. Science historian Leah Aronowsky argued that Gaia created the conditions for people to deny humans’ unique capacity to tip the system.

Jabr doesn’t see it that way and is deeply concerned that we are hastening the end of a stable period for life on Earth. But he also suggests we have the tools to mitigate the worst impacts, though this will likely require far more than just cutting emissions. He visits the Orca project in Iceland, the world’s first and largest plant for removing carbon from the atmosphere and storing it over long periods – in this case injecting it into basalt deep below the surface.

In an epilogue, we finally meet a 100-year-old James Lovelock at his Dorset home three years before his death in 2022. Still cheerful and articulate, Lovelock thrived on humour and tackling the big questions. As pointed out by Jabr, Lovelock was also prone to contradiction and the occasional alarmist statement. For instance, in his 2006 book The Revenge of Gaia he claimed that the few remaining breeding humans at the end of the century would be confined to the Arctic. Fingers crossed he’s wrong on that one!

Perhaps Lovelock was prone to the same phenomenon we see in quantum physics where even the sharpest scientific minds can end up shrouding the research in hype and woo. Once you strip away the new-ageyness, we may find that the idea of Gaia was never as “out there” as the cultural noise that surrounded it. Thanks to Jabr’s earnest approach, the living Earth concept is alive and kicking in 2024.

The post Reanimating the ‘living Earth’ concept for a more cynical world appeared first on Physics World.
