Received today — 2 September 2025

Desert dust helps freeze clouds in the northern hemisphere

2 September 2025, 10:07

Micron-sized dust particles in the atmosphere could trigger the formation of ice in certain types of clouds in the Northern Hemisphere. This is the finding of researchers in Switzerland and Germany, who used 35 years of satellite data to show that nanoscale defects on the surface of these aerosol particles are responsible for the effect. Their results, which agree with laboratory experiments on droplet freezing, could be used to improve climate models and to advance studies of cloud seeding for geoengineering.

In the study, which was led by environmental scientist Diego Villanueva of ETH Zürich, the researchers focused on clouds in the so-called mixed-phase regime, which form at temperatures between −39 °C and 0 °C and are commonly found at mid and high latitudes, particularly over the North Atlantic, Siberia and Canada. These mixed-phase regime clouds (MPRCs) are often topped by a liquid or ice layer, and their makeup affects how much sunlight they reflect back into space and how much water they can release as rain or snow. Understanding them is therefore important for forecasting weather and making projections of future climate.

Researchers have known for a while that MPRCs are extremely sensitive to the presence of ice-nucleating particles in their environment. Such particles mainly come from mineral dust aerosols (such as K-feldspar, quartz, albite and plagioclase) that get swept up into the upper atmosphere from deserts. The Sahara Desert in northern Africa, for example, is a prime source of such dust in the Northern Hemisphere.

More dust leads to more ice clouds

Using 35 years of satellite data collected as part of the Cloud_cci project and MERRA-2 aerosol reanalyses, Villanueva and colleagues looked for correlations between dust levels and the formation of ice-topped clouds. They found that at temperatures between −30 °C and −15 °C, the more dust there was, the more frequently ice-topped clouds occurred. What is more, their calculated increase in ice-topped clouds with increasing dust loading agrees well with previous laboratory experiments that predicted how dust triggers droplet freezing.
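
The kind of analysis described above can be sketched in a few lines of Python. The example below is purely illustrative and runs on synthetic data; the variable names, the toy ice-probability model and the tercile split are assumptions for illustration, not values or code from the study. It shows how the frequency of ice-topped clouds can be compared at low and high dust loading within cloud-top-temperature bins.

# Illustrative sketch only: synthetic data, not the Cloud_cci/MERRA-2 analysis.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observations": cloud-top temperature (degC), a dust-loading proxy
# (e.g. from an aerosol reanalysis), and whether the cloud top was ice.
n = 100_000
ctt = rng.uniform(-35.0, -5.0, n)                    # cloud-top temperature
dust = rng.lognormal(mean=0.0, sigma=1.0, size=n)    # dust loading proxy
# Toy model (assumption): ice probability grows with dust and with colder tops.
p_ice = 1.0 / (1.0 + np.exp(-(0.15 * (-ctt - 15.0) + 0.5 * np.log(dust))))
is_ice = rng.random(n) < p_ice

# Bin by temperature over the range where the effect is reported, then split
# each bin into low- and high-dust terciles and compare ice-cloud frequency.
t_edges = np.arange(-30.0, -14.0, 5.0)
for lo, hi in zip(t_edges[:-1], t_edges[1:]):
    sel = (ctt >= lo) & (ctt < hi)
    d, ice = dust[sel], is_ice[sel]
    q1, q3 = np.quantile(d, [1 / 3, 2 / 3])
    f_low = ice[d <= q1].mean()    # ice-cloud frequency, low dust
    f_high = ice[d >= q3].mean()   # ice-cloud frequency, high dust
    print(f"{lo:4.0f} to {hi:4.0f} C: ice fraction {f_low:.2f} (low dust) "
          f"-> {f_high:.2f} (high dust)")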

The new study, which is detailed in Science, shows that there is a connection between aerosols in the micrometre-size range and cloud ice observed over distances of several kilometres, Villanueva says. “We found that it is the nanoscale defects on the surface of dust aerosols that trigger ice clouds, so the process of ice glaciation spans more than 15 orders of magnitude in length,” he explains.

Thanks to this finding, Villanueva tells Physics World that climate modellers can use the team’s dataset to better constrain aerosol–cloud processes, potentially helping them to construct better estimates of cloud feedback and global temperature projections.

The result also shows how sensitive clouds are to varying aerosol concentrations, he adds. “This could help bring forward the field of cloud seeding and include this in climate geoengineering efforts.”

The researchers say they have successfully replicated their results using a climate model and are now drafting a new manuscript to further explore the implications of dust-driven cloud glaciation for climate, especially for the Arctic.

The post Desert dust helps freeze clouds in the northern hemisphere appeared first on Physics World.

Radioactive ion beams enable simultaneous treatment and imaging in particle therapy

2 September 2025, 10:00

Researchers in Germany have demonstrated the first cancer treatment using a radioactive carbon ion beam (11C) on a mouse with a bone tumour close to the spine. Performing particle therapy with radioactive ion beams enables simultaneous treatment and visualization of the beam within the body.

Particle therapy using beams of protons or heavy ions is a highly effective cancer treatment, with the favourable depth–dose deposition – the Bragg peak – providing extremely conformal tumour targeting. This conformality, however, makes particle therapy particularly sensitive to range uncertainties, which can impact the Bragg peak position.

One way to reduce such uncertainties is to use positron emission tomography (PET) to map the isotopes generated as the treatment beam interacts with tissues in the patient. For therapy with carbon (12C) ions, currently performed at 17 centres worldwide, this involves detecting the beta decay of 10C and 11C projectile fragments. Unfortunately, such fragments generate a small PET signal, while their lower mass shifts the measured activity peak away from the Bragg peak.

The researchers – working within the ERC-funded BARB (Biomedical Applications of Radioactive ion Beams) project – propose that treatment with positron-emitting ions such as 11C could overcome these obstacles. Radioactive ion beams have the same biological effectiveness as their corresponding stable ion beams, but generate an order of magnitude larger PET signal. They also reduce the shift between the activity and dose peaks, enabling precise localization of the ion beam in vivo.

“Range uncertainty remains the main problem of particle therapy, as we do not know exactly where the Bragg peak is,” explains Marco Durante, head of biophysics at the GSI Helmholtz Centre for Heavy Ion Research and principal investigator of the BARB project. “If we ‘aim-and-shoot’ using a radioactive beam and PET imaging, we can see where the beam is and can then correct it. By doing this, we can reduce the margins around the target that spoil the precision of particle therapy.”

In vivo experiments

To test this premise, Durante and colleagues performed in vivo experiments at the GSI/FAIR accelerator facility in Darmstadt. For online range verification, they used a portable small-animal in-beam PET scanner built by Katia Parodi and her team at LMU Munich. The scanner, initially designed for the ERC project SIRMIO (Small-animal proton irradiator for research in molecular image-guided radiation-oncology), contains 56 depth-of-interaction detectors – based on scintillator blocks of pixelated LYSO crystals – arranged spherically with an inner diameter of 72 mm.

Members of the LMU team involved in the BARB project (left to right: Peter Thirolf, Giulio Lovatti, Angelica Noto, Francesco Evangelista, Munetaka Nitta and Katia Parodi) with the small-animal PET scanner. (Courtesy: Katia Parodi/Francesco Evangelista, LMU)

“Not only does our spherical in-beam PET scanner offer unprecedented sensitivity and spatial resolution, but it also enables on-the-fly monitoring of the activity implantation for direct feedback during irradiation,” says Parodi, co-principal investigator of the BARB project.

The researchers used a radioactive 11C-ion beam – produced at the GSI fragment separator – to treat 32 mice with an osteosarcoma tumour implanted in the neck near the spinal cord. To encompass the full target volume, they employed a range modulator to produce a spread-out Bragg peak (SOBP) and a plastic compensator collar, which also served to position and immobilize the mice. The anaesthetized animals were placed vertically inside the PET scanner and treated with either 20 or 5 Gy at a dose rate of around 1 Gy/min.
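
As a rough illustration of how a range modulator produces an SOBP, the sketch below superimposes toy pristine Bragg peaks of different ranges and solves for non-negative weights that flatten the summed dose across the target region. The peak shape, the depths and the use of scipy’s nnls routine are assumptions for illustration, not the beamline configuration used in the experiment.

# Toy SOBP construction: not the GSI/FAIR setup, just the general principle.
import numpy as np
from scipy.optimize import nnls

z = np.linspace(0.0, 40.0, 400)                  # depth (mm)

def pristine_peak(z, r):
    # Toy pristine Bragg curve with range r: modest entrance dose plus a sharp peak.
    return 0.3 * (z < r) + np.exp(-0.5 * ((z - r) / 0.8) ** 2)

ranges = np.linspace(25.0, 35.0, 11)             # pullbacks from a range modulator
peaks = np.stack([pristine_peak(z, r) for r in ranges], axis=1)

# Choose non-negative weights so the summed dose is flat across the target region.
in_target = (z >= 25.0) & (z <= 35.0)
weights, _ = nnls(peaks[in_target], np.ones(in_target.sum()))
sobp = peaks @ weights

inner = (z >= 26.0) & (z <= 34.0)
print("relative dose ripple in target: %.3f" % (sobp[inner].std() / sobp[inner].mean()))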

For each irradiation, the team compared the measured activity with Monte Carlo-simulated activity based on pre-treatment microCT scans. The activity distributions were shifted by about 1 mm, attributed to anatomical changes between the scans (with mice positioned horizontally) and irradiation (vertical positioning). After accounting for this anatomical shift, the simulation accurately matched the measured activity. “Our findings reinforce the necessity of vertical CT planning and highlight the potential of online PET as a valuable tool for upright particle therapy,” the researchers write.
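
One common way to quantify such a longitudinal shift between measured and simulated activity profiles is to locate the maximum of their cross-correlation. The snippet below sketches that idea on toy one-dimensional depth profiles; it illustrates the general method under stated assumptions, not the analysis code used by the BARB team.

# Minimal sketch: estimate the shift that best aligns two 1D activity profiles.
import numpy as np

def profile_shift_mm(measured, simulated, bin_mm):
    """Estimate the longitudinal shift (mm) of measured relative to simulated."""
    m = (measured - measured.mean()) / measured.std()
    s = (simulated - simulated.mean()) / simulated.std()
    corr = np.correlate(m, s, mode="full")
    lag = np.argmax(corr) - (len(s) - 1)      # lag in bins; positive = deeper
    return lag * bin_mm

# Toy example: a Gaussian-like activity peak shifted by 1 mm between
# "simulation" and "measurement", sampled on a 0.5 mm grid.
z = np.arange(0.0, 40.0, 0.5)                 # depth (mm)
sim = np.exp(-0.5 * ((z - 20.0) / 3.0) ** 2)
meas = np.exp(-0.5 * ((z - 21.0) / 3.0) ** 2)
print(f"estimated shift: {profile_shift_mm(meas, sim, bin_mm=0.5):.1f} mm")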

With the tumour so close to the spine, even small range uncertainties risk damage to the spinal cord, so the team used the online PET images generated during the irradiation to check that the SOBP did not cover the spine. This did not occur in any of the animals, but Durante notes that if it had, the beam could have been shifted to enable “truly adaptive” particle therapy. Assessing the mice for signs of radiation-induced myelopathy (which can lead to motor deficits and paralysis) revealed no severe toxicity, further demonstrating that the spine was not exposed to high doses.

PET imaging in a mouse: (a) simulation showing the expected 11C-ion dose distribution in the pre-treatment microCT scan; (b) the corresponding simulated PET activity; (c) online PET image of the activity during 11C irradiation, overlaid on the same microCT used for the simulations. The target is outlined in black, the spine in red. (Courtesy: CC BY 4.0/Nat. Phys. 10.1038/s41567-025-02993-8)

Following treatment, tumour measurements revealed complete tumour control after 20 Gy irradiation and prolonged tumour growth delay after 5 Gy, suggesting complete target coverage in all animals.

The researchers also assessed the washout of the signal from the tumour, which includes a slow activity decrease due to the decay of 11C (which has a half-life of 20.34 min), plus a faster decrease as blood flow removes the radioactive isotopes from the tumour. The results showed that the biological washout was dose-dependent, with the fast component visible at 5 Gy but disappearing at 20 Gy.
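
This behaviour can be pictured with a simple two-component model in which the physical decay of 11C multiplies a biological term containing a fast washout fraction. The fit below, to synthetic data, is a hedged illustration of such a model; the parameter values and the use of scipy’s curve_fit are assumptions for illustration, not the authors’ analysis.

# Illustrative two-component washout model fitted to synthetic data.
import numpy as np
from scipy.optimize import curve_fit

T_HALF_11C_MIN = 20.34
LAMBDA_PHYS = np.log(2.0) / T_HALF_11C_MIN     # physical decay constant (1/min)

def washout(t, a0, frac_fast, lambda_fast):
    """Activity vs time: physical 11C decay times a biological washout term."""
    bio = (1.0 - frac_fast) + frac_fast * np.exp(-lambda_fast * t)
    return a0 * np.exp(-LAMBDA_PHYS * t) * bio

# Toy data mimicking a low-dose case with a visible fast component (assumed values).
t = np.linspace(0.0, 30.0, 61)                 # minutes after irradiation
rng = np.random.default_rng(1)
truth = washout(t, a0=1.0, frac_fast=0.3, lambda_fast=0.5)
data = truth * (1.0 + 0.02 * rng.standard_normal(t.size))

popt, _ = curve_fit(washout, t, data, p0=[1.0, 0.2, 0.3],
                    bounds=([0.0, 0.0, 0.0], [np.inf, 1.0, 5.0]))
print("fitted a0, fast fraction, fast rate (1/min):", np.round(popt, 3))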

“We propose that this finding is due to damage to the blood vessel feeding the tumour,” says Durante. “If this is true, high-dose radiotherapy may work in a completely different way from conventional radiotherapy: rather than killing all the cancer stem cells, we just starve the tumour by damaging the blood vessels.”

Future plans

Next, the team intends to investigate the use of 10C or 15O treatment beams, which should provide stronger signals and increased temporal resolution. A new Super-FRS fragment separator at the FAIR accelerator facility will provide the high-intensity beams required for studies with 10C.

Looking further ahead, clinical translation will require a realistic and relatively cheap design, says Durante. “CERN has proposed a design [the MEDICIS-Promed project] based on ISOL [isotope separation online] that can be used as a source of radioactive beams in current accelerators,” he tells Physics World. “At GSI we are also working on a possible in-flight device for medical accelerators.”

The findings are reported in Nature Physics.

The post Radioactive ion beams enable simultaneous treatment and imaging in particle therapy appeared first on Physics World.

European customer leases SI Imaging Services’ SpaceEye-T

2 September 2025, 06:36

SAN FRANCISCO — South Korea’s SI Imaging Services announced Sept. 2 a contract worth more than 10 million euros ($11.7 million) to lease the capacity of its Earth-observation satellite SpaceEye-T to a European customer. SpaceEye-T, an optical satellite offering native resolution of 25 centimeters per pixel, reached orbit in March […]

The post European customer leases SI Imaging Services’ SpaceEye-T appeared first on SpaceNews.

Received yesterday — 1 September 2025

Golden Dome for NATO is better than one for America

1 September 2025, 15:00
NATO Secretary General Mark Rutte and the NATO Heads of State and Government in The Hague on 25 June 2025. Credit: Emmi Syrjäniemi/Office of the President of the Republic of Finland

President Trump should invite NATO allies to join the Golden Dome Initiative, transforming the proposed Golden Dome for America into a Golden Dome for NATO. Such a shift would better match today’s security realities and send the clear message to potential adversaries that we are united in deterring and defending against nuclear and conventional ballistic, […]

The post Golden Dome for NATO is better than one for America appeared first on SpaceNews.

Scaling smallsats: A conversation with Muon Space President Gregory Smirin

1 September 2025, 13:00
Muon Space President Gregory Smirin. Credit: Muon Space

Muon Space is racing to expand production capabilities following a $90 million funding boost, targeting the growing appetite for increasingly capable satellites in the 100–500+ kilogram range. The California-based company recently introduced its largest small satellite platform yet, the 500-kilogram-class MuSat XL, to support more demanding missions at the upper end of the category. The […]

The post Scaling smallsats: A conversation with Muon Space President Gregory Smirin appeared first on SpaceNews.

Garbage in, garbage out: why the success of AI depends on good data

1 September 2025, 14:00

Artificial intelligence (AI) is fast becoming the new “Marmite”. Like the salty spread that polarizes taste-buds, you either love AI or you hate it. To some, AI is miraculous, to others it’s threatening or scary. But one thing is for sure – AI is here to stay, so we had better get used to it.

In many respects, AI is very similar to other data-analytics solutions: how well it works depends on two things. One is the quality of the input data. The other is the integrity of the user in ensuring that the outputs are fit for purpose.

Previously a niche tool for specialists, AI is now widely available for general-purpose use, in particular through generative AI (GenAI) tools. Also known as large language models (LLMs), these are accessible through, for example, OpenAI’s ChatGPT, Microsoft Copilot, Anthropic’s Claude, Adobe Firefly and Google Gemini.

GenAI has become possible thanks to the availability of vast quantities of digitized data and significant advances in computing power. The underlying neural-network models are so large that they would have been impossible without these two ingredients.

GenAI is incredibly powerful when it comes to searching and summarizing large volumes of unstructured text. It exploits unfathomable amounts of data and is getting better all the time, offering users significant benefits in terms of efficiency and labour saving.

Many people now use it routinely for writing meeting minutes, composing letters and e-mails, and summarizing the content of multiple documents. AI can also tackle complex problems that would be difficult for humans to solve, such as climate modelling, drug discovery and protein-structure prediction.

I’d also like to give a shout out to tools such as Microsoft Live Captions and Google Translate, which help people from different locations and cultures to communicate. But like all shiny new things, AI comes with caveats, which we should bear in mind when using such tools.

User beware

LLMs, by their very nature, have been trained on historical data. They can’t therefore tell you exactly what may happen in the future, or indeed what may have happened since the model was originally trained. Models can also be constrained in their answers.

Take the Chinese AI app DeepSeek. When the BBC asked it what had happened at Tiananmen Square in Beijing on 4 June 1989 – when Chinese troops cracked down on protestors – the chatbot’s answer was suppressed. Now, this is a very obvious piece of information control, but subtler instances of censorship will be harder to spot.

We also need to be conscious of model bias. At least some of the training data will probably come from social media and public chat forums such as X, Facebook and Reddit. Trouble is, we can’t know all the nuances of the data that models have been trained on – or the inherent biases that may arise from this.

One example of unfair gender bias was when Amazon developed an AI recruiting tool. Based on 10 years’ worth of CVs – mostly from men – the tool was found to favour men. Thankfully, Amazon ditched it. But then there was Apple’s gender-biased credit-card algorithm, which led to men being given higher credit limits than women with similar credit ratings.

Another problem with AI is that it sometimes acts as a black box, making it hard for us to understand how, why or on what grounds it arrived at a certain decision. Think about those online CAPTCHA tests we have to take when accessing online accounts. They often present us with a street scene and ask us to select those parts of the image containing a traffic light.

The tests are designed to distinguish between humans and computers or bots – the expectation being that AI can’t consistently recognize traffic lights. However, AI-based advanced driver-assistance systems (ADAS) presumably perform this function seamlessly on our roads. If not, surely drivers are being put at risk?

A colleague of mine, who drives an electric car that happens to share its name with a well-known physicist, confided that the ADAS in his car becomes unresponsive, especially at traffic lights with filter arrows or multiple sets of traffic lights. So what exactly is going on with ADAS? Does anyone know?

Caution needed

My message when it comes to AI is simple: be careful what you ask for. Many GenAI applications will store user prompts and conversation histories, and will likely use this data for training future models. Once you enter your data, there’s no guarantee it’ll ever be deleted. So think carefully before sharing any personal data, such as medical or financial information. It also pays to keep prompts non-specific (avoiding your name or date of birth, for example) so that they cannot be traced directly to you.

Democratization of AI is a great enabler and it’s easy for people to apply it without an in-depth understanding of what’s going on under the hood. But we should be checking AI-generated output before we use it to make important decisions and we should be careful of the personal information we divulge.

It’s easy to become complacent when we are not doing all the legwork. We are reminded under the terms of use that “AI can make mistakes”, but I wonder what will happen if models start consuming AI-generated erroneous data. Just as with other data-analytics problems, AI suffers from the old adage of “garbage in, garbage out”.

But sometimes I fear it’s even worse than that. We’ll need collective vigilance to avoid AI being turned into “garbage in, garbage squared”.

The post Garbage in, garbage out: why the success of AI depends on good data appeared first on Physics World.
