
Imaging reveals how microplastics may harm the brain

By Tami Freeman

Pollution from microplastics – small plastic particles less than 5 mm in size – poses an ongoing threat to human health. Independent studies have found microplastics in human tissues and within the bloodstream. And as blood circulates throughout the body and through vital organs, these microplastics can reach critical regions and lead to tissue dysfunction and disease. Microplastics can also cause functional irregularities in the brain, but exactly how they exert neurotoxic effects remains unclear.

A research collaboration headed up at the Chinese Research Academy of Environmental Sciences and Peking University has shed light on this conundrum. In a series of cerebral imaging studies reported in Science Advances, the researchers tracked the progression of fluorescent microplastics through the brains of mice. They found that microplastics entering the bloodstream become engulfed by immune cells, which then obstruct blood vessels in the brain and cause neurobehavioral abnormalities.

“Understanding the presence and the state of microplastics in the blood is crucial. Therefore, it is essential to develop methods for detecting microplastics within the bloodstream,” explains principal investigator Haipeng Huang from Peking University. “We focused on the brain due to its critical importance: if microplastics induce lesions in this region, it could have a profound impact on the entire body. Our experimental technology enables us to observe the blood vessels within the brain and detect microplastics present in these vessels.”

In vivo imaging

Huang and colleagues developed a microplastics imaging system by integrating a two-photon microscopy system with fluorescent plastic particles and demonstrated that it could image brain blood vessels in awake mice. They then fed five mice with water containing 5-µm diameter fluorescent microplastics. After a couple of hours, fluorescence images revealed microplastics within the animals’ cerebral vessels.

Lightning bolt The “MP-flash” observed as two plastic particles rapidly fly through the cerebral blood vessels. (Courtesy: Haipeng Huang)

As they move through rapidly flowing blood, the microplastics generate a fluorescence signal resembling a lightning bolt, which the researchers call a “microplastic flash” (MP-flash). This MP-flash was observed in four of the mice, with the entire MP-flash trajectory captured in a single imaging frame of less than 208 ms.

Three hours after administering the microplastics, the researchers observed fluorescent cells in the bloodstream. The signals from these cells were of comparable intensity to the MP-flash signal, suggesting that the cells had engulfed microplastics in the blood to create microplastic-labelled cells (MPL-cells). The team note that the microplastics did not directly attach to the vessel wall or cross into brain tissue.

To test this idea further, the researchers injected microplastics directly into the bloodstream of the mice. Within minutes, they saw the MP-flash signal in the brain’s blood vessels, and roughly 6 min later MPL-cells appeared. No fluorescent cells were seen in non-treated mice. Flow cytometry of mouse blood after microplastics injection revealed that the MPL-cells, which were around 21 µm in diameter, were immune cells, mostly neutrophils and macrophages.

Tracking these MPL-cells revealed that they sometimes became trapped within a blood vessel. Some cells exited the imaging field following a period of obstruction while others remained in cerebral vessels for extended durations, in some instances for nearly 2.5 h of imaging. The team also found that one week after injection, the MPL-cells had still not cleared, although the density of blockages was much reduced.

“[While] most MPL-cells flow rapidly with the bloodstream, a small fraction become trapped within the blood vessels,” Huang tells Physics World. “We provide an example where an MPL-cell is trapped at a microvascular turn and, after some time, is fortunate enough to escape. Many obstructed cells are less fortunate, as the blockage may persist for several weeks. Obstructed cells can also trigger a crash-like chain reaction, resulting in several MPL-cells colliding in a single location and posing significant risks.”

The MPL-cell blockages also impeded blood flow in the mouse brain. Using laser speckle contrast imaging to monitor blood flow, the researchers saw reduced perfusion in the cerebral cortical vessels, notably at 30 min after microplastics injection and particularly affecting smaller vessels.

Reduced blood flow These laser speckle contrast images show blood flow in the mouse brain at various times after microplastics injection. The images indicate that blockages of microplastic-labelled cells inhibit perfusion in the cerebral cortical vessels. (Courtesy: Huang et al. Sci. Adv. 11 eadr8243 (2025))

Changing behaviour

Lastly, Huang and colleagues investigated whether the reduced blood supply to the brain caused by cell blockages caused behavioural changes in the mice. In an open-field experiment (used to assess rodents’ exploratory behaviour) mice injected with microplastics travelled shorter distances at lower speeds than mice in the control group.

The Y-maze test for assessing memory also showed that microplastics-treated mice travelled smaller total distances than control animals, with a significant reduction in spatial memory. Tests to evaluate motor coordination and endurance revealed that microplastics additionally inhibited motor abilities. By day 28 after injection, these behavioural impairments had resolved, corresponding with the observed recovery from MPL-cell obstruction in the cerebral vasculature at 28 days.

The researchers conclude that their study demonstrates that microplastics harm the brain indirectly – via cell obstruction and disruption of blood circulation – rather than by directly penetrating tissue. They emphasize, however, that this mechanism may not necessarily apply to humans, who have roughly 1200 times the circulating blood volume of mice and significantly different vascular diameters.

“In the future, we plan to collaborate with clinicians,” says Huang. “We will enhance our imaging techniques for the detection of microplastics in human blood vessels, and investigate whether ‘MPL-cell-car-crash’ happens in humans. We anticipate that this research will lead to exciting new discoveries.”

Huang emphasizes how the use of fluorescent microplastic imaging technology has fundamentally transformed research in this field over the past five years. “In the future, advancements in real-time imaging of depth and the enhanced tracking ability of microplastic particles in vivo may further drive innovation in this area of study,” he says.


Theorists propose a completely new class of quantum particles


In a ground-breaking theoretical study, two physicists have identified a new class of quasiparticle called the paraparticle. Their calculations suggest that paraparticles exhibit quantum properties that are fundamentally different from those of familiar bosons and fermions, such as photons and electrons respectively.

Using advanced mathematical techniques, Kaden Hazzard at Rice University in the US and his former graduate student Zhiyuan Wang, now at the Max Planck Institute of Quantum Optics in Germany, have meticulously analysed the mathematical properties of paraparticles and proposed a real physical system that could exhibit paraparticle behaviour.

“Our main finding is that it is possible for particles to have exchange statistics different from those of fermions or bosons, while still satisfying the important physical principles of locality and causality,” Hazzard explains.

Particle exchange

In quantum mechanics, the behaviour of particles (and quasiparticles) is probabilistic in nature and is described by mathematical entities known as wavefunctions. These govern the likelihood of finding a particle in a particular state, as defined by properties like position, velocity, and spin. The exchange statistics of a specific type of particle dictates how its wavefunction behaves when two identical particles swap places.

For bosons such as photons, the wavefunction remains unchanged when particles are exchanged. This means that many bosons can occupy the same quantum state, enabling phenomena like lasers and superfluidity. In contrast, when fermions such as electrons are exchanged, the sign of the wavefunction flips from positive to negative or vice versa. This antisymmetric property prevents fermions from occupying the same quantum state. This underpins the Pauli exclusion principle and results in the electronic structure of atoms and the nature of the periodic table.

Until now, physicists believed that these two types of particle statistics – bosonic and fermionic – were the only possibilities in 3D space. This is the result of fundamental principles like locality, which states that events occurring at one point in space cannot instantaneously influence events at a distant location.

Breaking boundaries

Hazzard and Wang’s research overturns the notion that 3D systems are limited to bosons and fermions and shows that new types of particle statistics, called parastatistics, can exist without violating locality.

The key insight in their theory lies in the concept of hidden internal characteristics. Beyond the familiar properties like position and spin, paraparticles require additional internal parameters that enable more complex wavefunction behaviour. This hidden information allows paraparticles to exhibit exchange statistics that go beyond the binary distinction of bosons and fermions.

Paraparticles exhibit phenomena that resemble – but are distinct from – fermionic and bosonic behaviours. For example, while fermions cannot occupy the same quantum state, up to two paraparticles can coexist at the same point in space. This behaviour strikes a balance between the exclusivity of fermions and the clustering tendency of bosons.
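
Schematically – in generic notation, not that of Hazzard and Wang’s paper – the three possibilities can be written as follows; what distinguishes paraparticles is the matrix R acting on the hidden internal index:

```latex
% Exchanging two identical particles at positions x_1 and x_2:
\psi(x_2, x_1) = +\,\psi(x_1, x_2) \qquad \text{(bosons)}
\psi(x_2, x_1) = -\,\psi(x_1, x_2) \qquad \text{(fermions)}
% Paraparticles: the wavefunction carries a hidden internal index a,
% and exchange mixes these components with a matrix R rather than
% multiplying by a simple sign:
\psi_a(x_2, x_1) = \sum_b R_{ab}\,\psi_b(x_1, x_2)
```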

Bringing paraparticles to life

While no elementary particles are known to exhibit paraparticle behaviour, the researchers believe that paraparticles might manifest as quasiparticles in engineered quantum systems or certain materials. A quasiparticle is a particle-like collective excitation of a system. A familiar example is the hole, which is created in a semiconductor when a valence-band electron is excited to the conduction band. The vacancy (or hole) left in the valence band behaves as a positively charged particle that can travel through the semiconductor lattice.

Experimental systems of ultracold atoms created by collaborators of the duo could be one place to look for the exotic particles. “We are working with them to see if we can detect paraparticles there,” explains Wang.

In ultracold atom experiments, lasers and magnetic fields are used to trap and manipulate atoms at temperatures near absolute zero. Under these conditions, atoms can mimic the behaviour of more exotic particles. The team hopes that similar setups could be used to observe paraparticle-like behaviour in higher-dimensional systems, such as 3D space. However, further theoretical advances are needed before such experiments can be designed.

Far-reaching implications

The discovery of paraparticles could have far-reaching implications for physics and technology. Fermionic and bosonic statistics have already shaped our understanding of phenomena ranging from the stability of neutron stars to the behaviour of superconductors. Paraparticles could similarly unlock new insights into the quantum world.

“Fermionic statistics underlie why some systems are metals and others are insulators, as well as the structure of the periodic table,” Hazzard explains. “Bose-Einstein condensation [of bosons] is responsible for phenomena such as superfluidity. We can expect a similar variety of phenomena from paraparticles, and it will be exciting to see what these are.”

As research into paraparticles continues, it could open the door to new quantum technologies, novel materials, and deeper insights into the fundamental workings of the universe. This theoretical breakthrough marks a bold step forward, pushing the boundaries of what we thought possible in quantum mechanics.

The paraparticles are described in Nature.


Fast radio burst came from a neutron star’s magnetosphere, say astronomers

The exact origins of cosmic phenomena known as fast radio bursts (FRBs) are not fully understood, but scientists at the Massachusetts Institute of Technology (MIT) in the US have identified a fresh clue: at least one of these puzzling cosmic discharges got its start very close to the object that emitted it. This result, which is based on measurements of a fast radio burst called FRB 20221022A, puts to rest a long-standing debate about whether FRBs can escape their emitters’ immediate surroundings. The conclusion: they can.

“Competing theories argued that FRBs might instead be generated much farther away in shock waves that propagate far from the central emitting object,” explains astronomer Kenzie Nimmo of MIT’s Kavli Institute for Astrophysics and Space Research. “Our findings show that, at least for this FRB, the emission can escape the intense plasma near a compact object and still be detected on Earth.”

As their name implies, FRBs are brief, intense bursts of radio waves. The first was detected in 2007, and since then astronomers have spotted thousands of others, including some within our own galaxy. They are believed to originate from cataclysmic processes involving compact celestial objects such as neutron stars, and they typically last a few milliseconds. However, astronomers have recently found evidence for bursts a thousand times shorter, further complicating the question of where they come from.

Nimmo and colleagues say they have now conclusively demonstrated that FRB 20221022A, which was detected by the Canadian Hydrogen Intensity Mapping Experiment (CHIME) in 2022, comes from a region only 10 000 km in size. This, they claim, means it must have originated in the highly magnetized region that surrounds a star: the magnetosphere.

“Fairly intuitive” concept

The researchers obtained their result by measuring the FRB’s scintillation, which Nimmo explains is conceptually similar to the twinkling of stars in the night sky. Stars twinkle because, being so far away, they appear to us as point sources. This means that their apparent brightness is more affected by the Earth’s atmosphere than is the case for planets and other objects that are closer to us and appear larger.

“We applied this same principle to FRBs using plasma in their host galaxy as the ‘scintillation screen’, analogous to Earth’s atmosphere,” Nimmo tells Physics World. “If the plasma causing the scintillation is close to the FRB source, we can use this to infer the apparent size of the FRB emission region.”

According to Nimmo, different models of FRB origins predict very different sizes for this region. “Emissions originating within the magnetized environments of compact objects (for example, magnetospheres) would produce a much smaller apparent size compared to emission generated in distant shocks propagating far from the central object,” she explains. “By constraining the emission region size through scintillation, we can determine which physical model is more likely to explain the observed FRB.”

Challenge to existing models

The idea for the new study, Nimmo says, stemmed from a conversation with another astronomer, Pawan Kumar of the University of Texas at Austin, early last year. “He shared a theoretical result showing how scintillation could be used as a ‘probe’ to constrain the size of the FRB emission region, and, by extension, the FRB emission mechanism,” Nimmo says. “This sparked our interest and we began exploring the FRBs discovered by CHIME to search for observational evidence for this phenomenon.”

The researchers say that their study, which is detailed in Nature, shows that at least some FRBs originate from magnetospheric processes near compact objects such as neutron stars. This finding is a challenge for models of conditions in these extreme environments, they say, because if FRB signals can escape the dense plasma expected to exist near such objects, the plasma may be less opaque than previously assumed. Alternatively, unknown factors may be influencing FRB propagation through these regions.

A diagnostic tool

One advantage of studying FRB 20221022A is that it is relatively conventional in terms of its brightness and the duration of its signal (around 2 milliseconds). It does have one special property, however, as discovered by Nimmo’s colleagues at McGill University in Canada: its light is highly polarized. What is more, the pattern of its polarization implies that its emitter must be rotating in a way that is reminiscent of pulsars, which are highly magnetized, rotating neutron stars. This result is reported in a separate paper in Nature.

In Nimmo’s view, the MIT team’s study of this (mostly) conventional FRB establishes scintillation as a “powerful diagnostic tool” for probing FRB emission mechanisms. “By applying this method to a larger sample of FRBs, which we now plan to investigate, future studies could refine our understanding of their underlying physical processes and the diverse environments they occupy.”


String theory may be inevitable as a unified theory of physics, calculations suggest


Striking evidence that string theory could be the sole viable “theory of everything” has emerged in a new theoretical study of particle scattering that was done by a trio of physicists in the US. By unifying all fundamental forces of nature, including gravity, string theory could provide the long-sought quantum description of gravity that has eluded scientists for decades.

The research was done by Caltech’s Clifford Cheung and Aaron Hillman along with Grant Remmen at New York University. They have delved into the intricate mathematics of scattering amplitudes, which are quantities that encapsulate the probabilities of particles interacting when they collide.

Through a novel application of the bootstrap approach, the trio demonstrated that imposing general principles of quantum mechanics uniquely determines the scattering amplitudes of particles at the smallest scales. Remarkably, the results match the string scattering amplitudes derived in earlier works. This suggests that string theory may indeed be an inevitable description of the universe, even as direct experimental verification remains out of reach.

“A bootstrap is a mathematical construction in which insight into the physical properties of a system can be obtained without having to know its underlying fundamental dynamics,” explains Remmen. “Instead, the bootstrap uses properties like symmetries or other mathematical criteria to construct the physics from the bottom up, ‘effectively pulling itself up by its bootstraps’. In our study, we bootstrapped scattering amplitudes, which describe the quantum probabilities for the interactions of particles or strings.”

Why strings?

String theory posits that the elementary building blocks of the universe are not point-like particles but instead tiny, vibrating strings. The different vibrational modes of these strings give rise to the various particles observed in nature, such as electrons and quarks. This elegant framework resolves many of the mathematical inconsistencies that plague attempts to formulate a quantum description of gravity. Moreover, it unifies gravity with the other fundamental forces: electromagnetic, weak, and strong interactions.

However, a major hurdle remains. The characteristic size of these strings is estimated to be around 10⁻³⁵ m, which is roughly 15 orders of magnitude smaller than the resolution of today’s particle accelerators, including the Large Hadron Collider. This makes experimental verification of string theory extraordinarily challenging, if not impossible, for the foreseeable future.
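
To see where those 15 orders of magnitude come from, a rough estimate (our arithmetic, not taken from the paper) equates a collider’s resolving power with the scale ħc/E at its collision energy, about 13 TeV for the LHC:

```latex
\lambda \;\sim\; \frac{\hbar c}{E}
\;\approx\; \frac{1.97\times10^{-16}\ \mathrm{GeV\,m}}{1.3\times10^{4}\ \mathrm{GeV}}
\;\approx\; 1.5\times10^{-20}\ \mathrm{m}
```

Comparing this with the 10⁻³⁵ m string scale gives the quoted 15 orders of magnitude.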

Faced with the experimental inaccessibility of strings, physicists have turned to theoretical methods like the bootstrap to test whether string theory aligns with fundamental principles. Focusing on the mathematical consistency of scattering amplitudes, the researchers imposed constraints on the amplitudes derived from basic quantum mechanical requirements such as locality and unitarity.

“Locality means that forces take time to propagate: particles and fields in one place don’t instantaneously affect another location, since that would violate the rules of cause-and-effect,” says Remmen. “Unitarity is conservation of probability in quantum mechanics: the probability for all possible outcomes must always add up to 100%, and all probabilities are positive. This basic requirement also constrains scattering amplitudes in important ways.”

In addition to these principles, the team introduced further general conditions, such as the existence of an infinite spectrum of fundamental particles and specific high-energy behaviour of the amplitudes. These criteria have long been considered essential for any theory that incorporates quantum gravity.

Unique solution

Their result is a unique solution to the bootstrap equations, which turned out to be the Veneziano amplitude — a formula originally derived to describe string scattering. This discovery strongly indicates that string theory meets the most essential criteria for a quantum theory of gravity. However, the definitive answer to whether string theory is truly the “theory of everything” must ultimately come from experimental evidence.
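
For reference, the Veneziano amplitude for four-point scattering takes a remarkably compact form in standard textbook notation (not reproduced from the paper):

```latex
A(s,t) \;=\; \frac{\Gamma\!\left(-\alpha(s)\right)\,\Gamma\!\left(-\alpha(t)\right)}{\Gamma\!\left(-\alpha(s)-\alpha(t)\right)},
\qquad \alpha(x) \;=\; \alpha(0) + \alpha' x
```

Here s and t are the Mandelstam invariants of the two-to-two scattering process and the slope α′ sets the string scale.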

Cheung explains, “Our work asks: what is the precise math problem whose solution is the scattering amplitude of strings? And is it the unique solution?”. He adds, “This work can’t verify the validity of string theory, which like all questions about nature is a question for experiment to resolve. But it can help illuminate whether the hypothesis that the world is described by vibrating strings is actually logically equivalent to a smaller, perhaps more conservative set of bottom up assumptions that define this math problem.”

The trio’s study opens up several avenues for further exploration. One immediate goal for the researchers is to generalize their analysis to more complex scenarios. For instance, the current work focuses on the scattering of two particles into two others. Future studies will aim to extend the bootstrap approach to processes involving multiple incoming and outgoing particles.

Another direction involves incorporating closed strings, which are loops that are distinct from the open strings analysed in this study. Closed strings are particularly important in string theory because they naturally describe gravitons, the hypothetical particles responsible for mediating gravity. While closed string amplitudes are more mathematically intricate, demonstrating that they too arise uniquely from the bootstrap equations would further bolster the case for string theory.

The research is described in Physical Review Letters.


Antimatter partner of hyperhelium-4 is spotted at CERN


CERN’s ALICE Collaboration has found the first evidence for antihyperhelium-4, an antimatter hypernucleus that is a heavier version of antihelium-4. It contains two antiprotons, an antineutron and an antilambda baryon. The latter contains three antiquarks (up, down and strange – making it an antihyperon) and is electrically neutral like a neutron. The antihyperhelium-4 was created by smashing lead nuclei together at the Large Hadron Collider (LHC) in Switzerland, and the observation has a statistical significance of 3.5σ. While this is below the 5σ level that is generally accepted as a discovery in particle physics, the observation is in line with the Standard Model of particle physics. The detection therefore helps constrain theories beyond the Standard Model that try to explain why the universe contains much more matter than antimatter.

Hypernuclei are rare, short-lived atomic nuclei made up of protons, neutrons, and at least one hyperon. Hypernuclei and their antimatter counterparts can be formed within a quark–gluon plasma (QGP), which is created when heavy ions such as lead collide at high energies. A QGP is an extreme state of matter that also existed in the first millionth of a second following the Big Bang.

Exotic antinuclei

Just a few hundred picoseconds after being formed in collisions, antihypernuclei will decay via the weak force – creating two or more distinctive decay products that can be detected. The first antihypernucleus to be observed was a form of antihyperhydrogen called antihypertriton, which contains an antiproton, an antineutron and an antilambda hyperon. It was discovered in 2010 by the STAR Collaboration, who smashed together gold nuclei at Brookhaven National Laboratory’s Relativistic Heavy Ion Collider (RHIC).

Then in 2024 the STAR Collaboration reported the first observations of the decay products of antihyperhydrogen-4, which contains one more antineutron than antihypertriton.

Now, ALICE physicists have delved deeper into the world of antihypernuclei by doing a fresh analysis of data taken at the LHC in 2018 – where lead ions were collided at 5 TeV.

Using a machine learning technique to analyse the decay products of the nuclei produced in these collisions, the ALICE team identified the same signature of antihyperhydrogen-4 detected by the STAR Collaboration. This is the first time an antimatter hypernucleus has been detected at the LHC.

Rapid decay

But that is not all. The team also found evidence for another, slightly lighter antihypernucleus, called antihyperhelium-4. This contains two antiprotons, an antineutron, and an antihyperon. It decays almost instantly into an antihelium-3 nucleus, an antiproton, and a charged pion. The latter is a meson comprising a quark–antiquark pair.
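
Written out in standard nuclear notation (ours, not quoted from the paper), the decay channel described above is:

```latex
{}^{4}_{\bar{\Lambda}}\overline{\mathrm{He}}
\;\longrightarrow\;
{}^{3}\overline{\mathrm{He}} \;+\; \bar{p} \;+\; \pi^{+}
```

The electric charges balance: the antihyperhelium-4 on the left carries charge −2 (from its two antiprotons), matched on the right by −2 for the antihelium-3, −1 for the antiproton and +1 for the pion.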

Physicists describe production of hypernuclei in a QGP using the statistical hadronization model (SHM). For both antihyperhydrogen-4 and antihyperhelium-4, the masses and production yields measured by the ALICE team closely matched the predictions of the SHM – assuming that the particles were produced in a certain mixture of their excited and ground states.

The team’s result further confirms that the SHM can accurately describe the production of hypernuclei and antihypernuclei from a QGP. The researchers also found that equal numbers of hypernuclei and antihypernuclei are produced in the collisions, within experimental uncertainty. While this provides no explanation as to why there is much more matter than antimatter in the observable universe, the research allows physicists to put further constraints on theories that reach beyond the Standard Model of particle physics to try to explain this asymmetry.

The research could also pave the way for further studies into how hyperons within hypernuclei interact with their neighbouring protons and neutrons. With a deeper knowledge of these interactions, astronomers could gain new insights into the mysterious interior properties of neutron stars.

The observation is described in a paper that has been submitted to Physical Review Letters.


How publishing in Electrochemical Society journals fosters a sense of community


The Electrochemical Society (ECS) is an international non-profit scholarly organization that promotes research, education and technological innovation in electrochemistry, solid-state science and related fields.

Founded in 1902, the ECS brings together scientists and engineers to share knowledge and advance electrochemical technologies.

As part of that mission, the society publishes several journals including the flagship Journal of the Electrochemical Society (JES), which is over 120 years old and covers a wide range of topics in electrochemical science and engineering.

Someone who has seen their involvement with the ECS and ECS journals increase over their career is chemist Trisha Andrew from the University of Massachusetts Amherst. She directs the wearable electronics lab, a multi-disciplinary research team that produces garment-integrated technologies using reactive vapor deposition.

Trisha Andrew from the University of Massachusetts Amherst. (Courtesy: Trisha Andrew)

Her involvement with the ECS began when she was invited by the editor-in-chief of ECS Sensors Plus to act as a referee for the journal. Andrew found the depth and practical application of the papers she reviewed interesting and of high quality. This resulted in her submitting her own work to ECS journals and she later became an associate editor for both ECS Sensors Plus and JES.

Professional opportunities

Physical chemist Weiran Zheng from the Guangdong Technion–Israel Institute of Technology in China, meanwhile, says that due to the reputation of ECS journals, they have been his “go-to” place to publish since graduate school.

Physical chemist Weiran Zheng from the Guangdong Technion–Israel Institute of Technology in China. (Courtesy: Weiran Zheng)

One of his papers, entitled “Python for electrochemistry: a free and all-in-one toolset” (ECS Adv. 2 040502), has been downloaded over 8000 times and is currently the most-read ECS Advances article. This led to an invitation to deliver an ECS webinar — Introducing Python for Electrochemistry Research. “I never expected such an impact when the paper was accepted, and none of this would be possible without the platform offered by ECS journals,” adds Zheng.

Publishing in ECS journals has helped Zheng’s career advance through new connections and deeper involvement with ECS activities. This has boosted not only his research but also his professional network, and given these benefits, Zheng plans to continue to publish his latest findings in ECS journals.

Highly cited papers

Battery researcher Thierry Brousse from Nantes University in France came to electrochemistry later in his career, having first carried out a PhD in high-temperature superconducting thin films at the University of Caen Normandy.

Battery researcher Thierry Brousse from Nantes University in France. (Courtesy: Thierry Brousse)

When he began working in the field, he collaborated with the chemist Donald Schleich from Polytech Nantes, who was an ECS member. It was then that he began to read JES, finding it a prestigious platform for his research in supercapacitors and microdevices for energy storage. “Most of the inspiring scientific papers I was reading at that time were from JES,” notes Brousse. “Naturally, my first papers were then submitted to this journal.”

Brousse says that publishing in ECS journals has provided him with new collaborations as well as invitations to speak at major conferences. He emphasizes the importance of innovative work and the positive impact of publishing in ECS journals where some of his most cited work has been published.

Brousse, who is an associate editor for JES, adds that he particularly values how publishing with ECS journals fosters a quick integration into specific research communities. This, he says, has been instrumental in advancing his career.

Long-standing relationships

Robert Savinell’s relationship with the ECS and ECS journals began during his PhD research in electrochemistry, which he carried out at the University of Pittsburgh. Now at Case Western Reserve University in Cleveland, Ohio, he focuses on developing a flow battery for low-cost, long-duration energy storage, primarily using iron and water. The battery is designed to improve the efficiency of the power grid and accelerate the addition of solar and wind power supplies.

Robert Savinell at Case Western Reserve University in Cleveland, Ohio. (Courtesy: Robert Savinell)

Savinell also leads a Department of Energy-funded Energy Frontier Research Center on Breakthrough Electrolytes for Energy Storage. The center focuses on fundamental research on nano- to meso-scale structured electrolytes for energy storage.

ECS journals have been a cornerstone of his professional career, providing a platform for his research and fostering valuable professional connections. “Some of my research published in JES many years ago are still cited today,” says Savinell.

Savinell’s contributions to the ECS community have been recognized through various roles, including being elected a fellow of the ECS and he has previously served as chair of the ECS’s electrolytic and electrochemical engineering division. He was editor-in-chief of JES for the past decade and most recently was elected third vice president of the ECS.

Savinell says that the connections he has made through ECS have been significant, ranging from funding programme managers to personal friends. “My whole professional career has been focused around ECS,” he says, adding that he aims to continue to publish in ECS journals and hopes that his work will inspire solutions to some of society’s biggest problems.

Personal touch

For many researchers in the field, publishing in ECS journals has brought several benefits. These include the high level of engagement and the personal touch within the ECS community, as well as the promotional support that ECS provides for published work.

The ECS journals’ broad portfolio also ensures that researchers’ work reaches the right audience, and such visibility and engagement are significant factors when it comes to advancing the careers of scientists. “The difference between ECS journals is the amount of engagement, views and reception that you receive,” says Andrew. “That’s what I found to be the most unique.”


Higher-order brain function revealed by new analysis of fMRI data


An international team of researchers has developed new analytical techniques that consider interactions between three or more regions of the brain – providing a more in-depth understanding of human brain activity than conventional analysis. Led by Andrea Santoro at the Neuro-X Institute in Geneva and Enrico Amico at the UK’s University of Birmingham, the team hopes its results could help neurologists identify a vast array of new patterns in human brain data.

To study the structure and function of the brain, researchers often rely on network models. In these, nodes represent specific groups of neurons in the brain, and edges represent the functional connections between them, inferred from statistical correlations in their activity.

Within these models, brain activity has often been represented as pairwise interactions between two specific regions. Yet as the latest advances in neurology have clearly shown, the real picture is far more complex.

“To better analyse how our brains work, we need to look at how several areas interact at the same time,” Santoro explains. “Just as multiple weather factors – like temperature, humidity, and atmospheric pressure – combine to create complex patterns, looking at how groups of brain regions work together can reveal a richer picture of brain function.”

Higher-order interactions

Yet with the mathematical techniques applied in previous studies, researchers have not confirmed whether network models incorporating these higher-order interactions between three or more brain regions could really be more accurate than simpler models, which only account for pairwise interactions.

To shed new light on this question, Santoro’s team built upon their previous analysis of functional MRI (fMRI) data, which identify brain activity by measuring changes in blood flow.

Their approach combined two powerful tools. One is topological data analysis. This identifies patterns within complex datasets like fMRI, where each data point depends on a large number of interconnected variables. The other is time series analysis, which is used to identify patterns in brain activity which emerge over time. Together, these tools allowed the researchers to identify complex patterns of activity occurring across three or more brain regions simultaneously.
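
As a toy illustration of the difference between pairwise and higher-order analysis – a simplified sketch of the general idea, not the authors’ actual pipeline – one can z-score each region’s time series and compare the usual pairwise product with an instantaneous triple product:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "fMRI" time series for three brain regions (T time points each).
# In a real analysis these would come from parcellated fMRI recordings.
T = 500
shared = rng.standard_normal(T)            # common driving signal
x = shared + 0.5 * rng.standard_normal(T)
y = shared + 0.5 * rng.standard_normal(T)
z = shared + 0.5 * rng.standard_normal(T)

def zscore(s):
    return (s - s.mean()) / s.std()

zx, zy, zz = zscore(x), zscore(y), zscore(z)

# Pairwise "edge" co-fluctuation: its time average is just the Pearson
# correlation used in conventional functional-connectivity analysis.
edge_xy = zx * zy
print("pairwise correlation of x and y:", edge_xy.mean())

# Higher-order "triangle" co-fluctuation: the instantaneous product over
# all three regions. Peaks in |zx*zy*zz| flag moments when the whole
# triplet fluctuates together -- information a purely pairwise analysis
# cannot resolve.
tri = zx * zy * zz
strongest = np.argsort(-np.abs(tri))[:5]
print("strongest three-way co-fluctuations at t =", sorted(strongest.tolist()))
```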

To test their approach, the team applied it to fMRI data taken from 100 healthy participants in the Human Connectome Project. “By applying these tools to brain scan data, we were able to detect when multiple regions of the brain were interacting at the same time, rather than only looking at pairs of brain regions,” Santoro explains. “This approach let us uncover patterns that might otherwise stay hidden, giving us a clearer view of how the brain’s complex network operates as a whole.”

Just as they hoped, this analysis of higher-order interactions provided far deeper insights into the participants’ brain activity compared with traditional pairwise methods. “Specifically, we were better able to figure out what type of task a person was performing, and even uniquely identify them based on the patterns of their brain activity,” Santoro continues.

Distinguishing between tasks

With its combination of topological and time series analysis, the team’s method could distinguish between a wide variety of tasks performed by the participants, including their expression of emotion, use of language and social interactions.

By building further on their approach, Santoro and colleagues are hopeful it could eventually be used to uncover a vast space of as-yet unexplored patterns within human brain data.

By tailoring the approach to the brains of individual patients, this could ultimately enable researchers to draw direct links between brain activity and physical actions.

“Down the road, the same approach might help us detect subtle brain changes that occur in conditions like Alzheimer’s disease – possibly before symptoms become obvious – and could guide better therapies and earlier interventions,” Santoro predicts.

The research is described in Nature Communications.


Humanitarian engineering can improve cancer treatment in low- and middle-income countries

This episode of the Physics World Weekly podcast explores how the concept of humanitarian engineering can be used to provide high quality cancer care to people in low- and middle-income countries (LMICs). This is an important challenge because today only 5% of global radiotherapy resources are located in LMICs, which are home to the majority of the world’s population.

Our guests are two medical physicists at the University of Washington in the US who have contributed to the ebook Humanitarian Engineering for Global Oncology. They are Eric Ford, who edited the ebook, and Afua Yorke, who along with Ford wrote the chapter “Cost-effective radiation treatment delivery systems for low- and middle-income countries”.

They are in conversation with Physics World’s Tami Freeman.


Sun-like stars produce ‘superflares’ about once a century

Stars like our own Sun produce “superflares” around once every 100 years, surprising astronomers who had previously estimated that such events occurred only every 3000 to 6000 years. The result, from a team of astronomers in Europe, the US and Japan, could be important not only for fundamental stellar physics but also for forecasting space weather.

The Sun regularly produces solar flares, which are energetic outbursts of electromagnetic radiation. Sometimes, these flares are accompanied by plasma in events known as coronal mass ejections. Both activities can trigger powerful solar storms when they interact with the Earth’s upper atmosphere, posing a danger to spacecraft and satellites as well as electrical grids and radio communications on the ground.

Despite their power, though, these events are much weaker than the “superflares” recently observed by NASA’s Kepler and TESS missions at other Sun-like stars in our galaxy. The most intense superflares release energies of about 10²⁵ J, which show up as short, sharp peaks in the stars’ visible light spectrum.

Observations from the Kepler space telescope

In the new study, which is detailed in Science, astronomers sought to find out whether our Sun is also capable of producing superflares, and if so, how often they happen. This question can be approached in two different ways, explains study first author Valeriy Vasilyev, a postdoctoral researcher at the Max Planck Institute for Solar System Research, Germany. “One option is to observe the Sun directly and record events, but it would take a very long time to gather enough data,” Vasilyev says. “The other approach is to study a large number of stars with characteristics similar to those of the Sun and extrapolate their flare activity to our Sun.”

The researchers chose the second option. Using a new method they developed, they analysed Kepler space telescope data on the fluctuations of more than 56,000 Sun-like stars between 2009 and 2013. This dataset, which is much larger and more representative than previous ones because it is based on recent advances in our understanding of Sun-like stars, corresponds to around 220,000 years of solar observations.

The new technique can detect superflares and precisely localize them on the telescope images with sub-pixel resolution, Vasilyev says. It also accounts for how light propagates through the telescope’s optics as well as instrumental effects that could “contaminate” the data.

The team – which also includes researchers from the University of Graz, Austria; the University of Oulu, Finland; the National Astronomical Observatory of Japan; the University of Colorado Boulder in the US; and the Commissariat of Atomic and Alternative Energies of Paris-Saclay and the University of Paris-Cité, both in France – carefully analysed the detected flares. They checked for potential sources of error, such as those originating from unresolved binary stars, flaring M- and K-dwarf stars and fast-rotating active stars that might have been wrongly classified. Thanks to these robust statistical evaluations, they identified almost 3000 bright stellar flares in the population they observed – a detection rate that implies that superflares occur roughly once per century, per star.
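
The headline rate follows from straightforward arithmetic on the figures quoted above (rounded values, so the result is approximate):

```python
# Back-of-envelope check of the "once per century" rate, using the
# rounded figures quoted in this article (not the paper's exact counts).
n_stars = 56_000        # Sun-like stars in the Kepler sample
years_observed = 4      # observations spanned 2009 to 2013
star_years = n_stars * years_observed
print(star_years)       # 224000, i.e. "around 220,000 years" of observations

n_flares = 3000         # "almost 3000 bright stellar flares"
interval = star_years / n_flares
print(f"one superflare per star every ~{interval:.0f} years")  # ~75 yr: roughly once a century
```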

Sun should also be capable of producing superflares

According to Vasilyev, the team’s results also suggest that solar flares and stellar superflares are generated by the same physical mechanisms. This is important because reconstructions of past solar activity, which are based on the concentrations of cosmogenic isotopes in terrestrial archives such as tree rings, tell us that our Sun occasionally experiences periods of higher or lower solar activity lasting several decades.

One example is the Maunder Minimum, a decades-long period during the 17th century when very few sunspots were recorded. At the other extreme, solar activity was comparatively higher during the Modern Maximum that occurred around the mid-20th century. Based on the team’s analysis, Vasilyev says that “so-called grand minima and grand maxima are not regular but tend to cluster in time. This means that centuries could pass by without extreme solar flares followed by several such events occurring over just a few years or decades.”

It is possible, he adds, that a superflare occurred in the past century but went unnoticed. “While we have no evidence of such an event, excluding it with certainty would require continuous and systematic monitoring of the Sun,” he tells Physics World. The most intense solar flare in recorded history, the so-called “Carrington event” of September 1859, was documented essentially by chance: “By the time he [the English astronomer Richard Carrington] called someone to show them the bright glow he observed (which lasted only a few minutes), the brightness had already faded.”

Between 1996 and 2002, when instruments provided direct measurements of total solar brightness with sufficient accuracy and temporal resolution, 12 flares with Carrington-like energies were detected. Had these flares been aimed at Earth, it is possible that they would have had similar effects, he says.

The researchers now plan to investigate the conditions required to produce superflares. “We will be extending our research by analysing data from next-generation telescopes, such as the European mission PLATO, which I am actively involved in developing,” Vasilyev says. “PLATO’s launch is due for the end of 2026 and will provide valuable information with which we can refine our understanding of stellar activity and even the impact of superflares on exoplanets.”


Solid-state nuclear clocks brought closer by physical vapour deposition

Solid-state clock Illustration of how thorium atoms are vaporized (bottom) and then deposited in a thin film on the substrate’s surface (middle). This film could form the basis for a nuclear clock (top). (Courtesy: Steven Burrows/Ye group)

Physicists in the US have taken an important step towards a practical nuclear clock by showing that the physical vapour deposition (PVD) of thorium-229 could reduce the amount of this expensive and radioactive isotope needed to make a timekeeper. The research could usher in an era of robust and extremely accurate solid-state clocks that could be used in a wide range of commercial and scientific applications.

Today, the world’s most precise atomic clocks are the strontium optical lattice clocks created by Jun Ye’s group at JILA in Boulder, Colorado. These are accurate to within a second in the age of the universe. However, because these clocks use an atomic transition between electron energy levels, they can easily be disrupted by external electromagnetic fields. This means that the clocks must be operated in isolation in a stable lab environment. While other types of atomic clock are much more robust – some are deployed on satellites – they are nowhere near as accurate as optical lattice clocks.
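
A quick conversion (our arithmetic) shows what “a second in the age of the universe” means as a fractional frequency uncertainty:

```latex
\frac{\Delta t}{t}
\;\approx\; \frac{1\ \mathrm{s}}{13.8\times10^{9}\ \mathrm{yr}\times 3.15\times10^{7}\ \mathrm{s\,yr^{-1}}}
\;\approx\; \frac{1\ \mathrm{s}}{4.4\times10^{17}\ \mathrm{s}}
\;\approx\; 2\times10^{-18}
```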

Some physicists believe that transitions between energy levels in atomic nuclei could offer a way to make robust, portable clocks that deliver very high accuracy. As well as being very small and governed by the strong force, nuclei are shielded from external electromagnetic fields by their own electrons. And unlike optical atomic clocks, which use a very small number of delicately-trapped atoms or ions, many more nuclei can be embedded in a crystal without significantly affecting the clock transition. Such a crystal could be integrated on-chip to create highly robust and highly accurate solid-state timekeepers.

Sensitive to new physics

Nuclear clocks would also be much more sensitive to new physics beyond the Standard Model – allowing physicists to explore hypothetical concepts such as dark matter. “The nuclear energy scale is millions of electron volts; the atomic energy scale is electron volts; so the effects of new physics are also much stronger,” explains Victor Flambaum of Australia’s University of New South Wales.

Normally, a nuclear clock would require a laser that produces coherent gamma rays – something that does not exist. By exquisite good fortune, however, there is a single transition between the ground and excited states of one nucleus in which the potential energy changes due to the strong nuclear force and the electromagnetic interaction almost exactly cancel, leaving an energy difference of just 8.4 eV. This corresponds to vacuum ultraviolet light, which can be created by a laser.
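
Converting that 8.4 eV transition energy to a wavelength, via the standard photon-energy relation λ = hc/E, shows why it lands in the vacuum ultraviolet:

```latex
\lambda \;=\; \frac{hc}{E}
\;\approx\; \frac{1240\ \mathrm{eV\,nm}}{8.4\ \mathrm{eV}}
\;\approx\; 148\ \mathrm{nm}
```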

That nucleus is thorium-229, but as Ye’s postgraduate student Chuankun Zhang explains, it is very expensive. “We bought about 700 µg for $85,000, and as I understand it the price has been going up”.

In September, Zhang and colleagues at JILA measured the frequency of the thorium-229 transition with unprecedented precision using their strontium-87 clock as a reference. They used thorium-doped calcium fluoride crystals. “Doping thorium into a different crystal creates a kind of defect in the crystal,” says Zhang. “The defects’ orientations are sort of random, which may introduce unwanted quenching or limit our ability to pick out specific atoms using, say, polarization of the light.”

Layers of thorium fluoride

In the new work, the researchers collaborated with colleagues in Eric Hudson’s group at the University of California, Los Angeles and others to form layers of thorium fluoride between 30 nm and 100 nm thick on crystalline substrates such as magnesium fluoride. They used PVD, a well-established technique in which a material is evaporated from a hot crucible before condensing onto a substrate. The resulting samples contained three orders of magnitude less thorium-229 than the crystals used in the September experiment, but had a comparable number of thorium atoms per unit area.

The JILA team sent the samples to Hudson’s lab for interrogation by a custom-built vacuum ultraviolet laser. Researchers led by Hudson’s student Richard Elwell observed clear signatures of the nuclear transition and found the lifetime of the excited state to be about four times shorter than observed in the crystal. While the discrepancy is not understood, the researchers say this might not be problematic in a clock.

More significant challenges lie in the surprisingly small fraction of thorium nuclei participating in the clock operation – with the measured signal about 1% of the expected value, according to Zhang. “There could be many reasons. One possibility is because the vapour deposition process isn’t controlled super well such that we have a lot of defect states that quench away the excited states.” Beyond this, he says, designing a mobile clock will entail miniaturizing the laser.

Flambaum, who was not involved in the research, says that it marks “a very significant technical advance” in the quest to build a solid-state nuclear clock – something that he believes could be useful for sensing everything from oil to variations in the fine structure constant. “As a standard of frequency a solid state clock is not very good because it’s affected by the environment,” he says. “As soon as we know the frequency very accurately we will do it with [trapped] ions, but that has not been done yet.”

The research is described in Nature.


Medical physics and biotechnology: highlights of 2024

By Tami Freeman

From tumour-killing quantum dots to proton therapy firsts, this year has seen the traditional plethora of exciting advances in physics-based therapeutic and diagnostic imaging techniques, plus all manner of innovative bio-devices and biotechnologies for improving healthcare. Indeed, the Physics World Top 10 Breakthroughs for 2024 included a computational model designed to improve radiotherapy outcomes for patients with lung cancer by modelling the interaction of radiation with lung cells, as well as a method to make the skin of live mice temporarily transparent to enable optical imaging studies. Here are just a few more of the research highlights that caught our eye.

Marvellous MRI machines

This year we reported on some important developments in the field of magnetic resonance imaging (MRI) technology, not least of which was the introduction of a 0.05 T whole-body MRI scanner that can produce diagnostic quality images. The ultralow-field scanner, invented at the University of Hong Kong’s BISP Lab, operates from a standard wall power outlet and does not require shielding cages. The simplified design makes it easier to operate and significantly lower in cost than current clinical MRI systems. As such, the BISP Lab researchers hope that their scanner could help close the global gap in MRI availability.

Moving from ultralow- to ultrahigh-field instrumentation, a team headed up by David Feinberg at UC Berkeley created an ultrahigh-resolution 7 T MRI scanner for imaging the human brain. The system can generate functional brain images with 10 times better spatial resolution than current 7 T scanners, revealing features as small as 0.35 mm, as well as offering higher spatial resolution in diffusion, physiological and structural MR imaging. The researchers plan to use their new NexGen 7 T scanner to study underlying changes in brain circuitry in degenerative diseases, schizophrenia and disorders such as autism.

Meanwhile, researchers at Massachusetts Institute of Technology and Harvard University developed a portable magnetic resonance-based sensor for imaging at the bedside. The low-field single-sided MR sensor is designed for point-of-care evaluation of skeletal muscle tissue, removing the need to transport patients to a centralized MRI facility. The portable sensor, which weighs just 11 kg, uses a permanent magnet array and surface RF coil to provide low operational power and minimal shielding requirements.

Proton therapy progress

Alongside advances in diagnostic imaging, 2024 also saw a couple of firsts in the field of proton therapy. At the start of the year, OncoRay – the National Center for Radiation Research in Oncology in Dresden – launched the world’s first whole-body MRI-guided proton therapy system. The prototype device combines a horizontal proton beamline with a whole-body MRI scanner that rotates around the patient, a geometry that enables treatments both with patients lying down or in an upright position. Ultimately, the system could enable real-time MRI monitoring of patients during cancer treatments and significantly improve the targeting accuracy of proton therapy.

OncoRay’s research prototype The proton therapy beamline (left) and the opened MRI-guided proton therapy system, showing the in-beam MRI (centre) and patient couch (right). (Courtesy: UKD/Kirsten Lassig)

Also aiming to enhance proton therapy outcomes, a team at the PSI Center for Proton Therapy performed the first clinical implementation of an online daily adaptive proton therapy (DAPT) workflow. Online plan adaptation, where the patient remains on the couch throughout the replanning process, could help address uncertainties arising from anatomical changes during treatments. In five adults with tumours in rigid body regions treated using DAPT, the daily adapted plans provided target coverage to within 1.1% of the planned dose and, in over 90% of treatments, improved dose metrics to the targets and/or organs-at-risk. Importantly, the adaptive approach took just a few minutes longer than a non-adaptive treatment, remaining within the 30-min time slot allocated for a proton therapy session.

Bots and dots

Last but certainly not least, this year saw several research teams demonstrate the use of tiny devices for cancer treatment. In a study conducted at the Institute for Bioengineering of Catalonia, for instance, researchers used self-propelling nanoparticles containing radioactive iodine to shrink bladder tumours.

Cell death by dots Schematic illustration showing the role of graphene quantum dots as nanozymes for tumour catalytic therapy. (Courtesy: FHIPS)

Upon injection into the body, these “nanobots” search for and accumulate inside cancerous tissue, delivering radionuclide therapy directly to the target. Mice receiving a single dose of the nanobots experienced a 90% reduction in the size of bladder tumours compared with untreated animals.

At the Chinese Academy of Sciences’ Hefei Institutes of Physical Science, a team pioneered the use of metal-free graphene quantum dots for chemodynamic therapy. Studies in cancer cells and tumour-bearing mice showed that the quantum dots caused cell death and inhibition of tumour growth, respectively, with no off-target toxicity in the animals.

Finally, scientists at Huazhong University of Science and Technology developed novel magnetic coiling “microfibrebots” and used them to stem arterial bleeding in a rabbit – paving the way for a range of controllable and less invasive treatments for aneurysms and brain tumours.

The post Medical physics and biotechnology: highlights of 2024 appeared first on Physics World.

Laser beam casts a shadow in a ruby crystal

Particles of light – photons – are massless, so they normally pass right through each other. This generally means they can’t cast a shadow. In a new work, however, physicist Jeff Lundeen of the University of Ottawa, Canada, and colleagues found that this counterintuitive behaviour can, in fact, happen when a laser beam is illuminated by another light source as it passes through a highly nonlinear medium. As well as being important for basic science, the work could have applications in laser fabrication and imaging.

The light-shadow experiment began when physicists led by Raphael Akel Abrahao sent a high-power beam of green laser light through a cube-shaped ruby crystal. They then illuminated this beam from the side with blue light and observed that the beam cast a shadow on a piece of white paper. This shadow extended through an entire face of the crystal. Writing in Optica, they note that “under ordinary circumstances, photons do not interact with each other, much less block each other as needed for a shadow.” What was going on?

Photon-photon interactions

The answer, they explain, boils down to some unusual photon-photon interactions that take place in media that absorb light in a highly nonlinear way. While several materials fit this basic description, most become saturated at high laser intensities. This means they become more transparent in the presence of a strong laser field, producing an “anti-shadow” that is even brighter than the background – the opposite of what the team was looking for.

What they needed, instead, was a material that absorbs more light at higher optical intensities. Such behaviour is known as “reverse saturation of absorption” or “saturable transmission”, and it only occurs if four conditions are met. Firstly, the light-absorbing system needs to have two electronic energy levels: a ground state and an excited state. Secondly, the transition from the ground to the excited state must be less strong (technically, it must have a smaller cross-section) than the transition from the first excited state to a higher excited state. Thirdly, after the material absorbs light, neither the first nor the second excited state should decay back to other levels when the light is re-emitted. Finally, the incident light should only saturate the first transition.
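To see how these conditions produce a shadow rather than an anti-shadow, consider a minimal numerical sketch – not taken from the paper, with all parameters purely illustrative – comparing ordinary saturable absorption, which bleaches at high intensity, with reverse saturable absorption, where a larger excited-state cross-section makes the medium more opaque as the pump gets stronger:

import numpy as np

# Illustrative toy model, not the authors' analysis: compare how the
# transmission of a thin absorber changes with pump intensity for
# ordinary saturable absorption versus reverse saturable absorption.

alpha0 = 2.0                  # hypothetical low-intensity absorption coefficient (1/cm)
L = 1.0                       # hypothetical sample thickness (cm)
I = np.logspace(-2, 2, 201)   # pump intensity in units of the saturation intensity

# Saturable absorber: the ground state depletes, so absorption falls
# and the medium bleaches (the "anti-shadow" case).
alpha_sat = alpha0 / (1 + I)

# Reverse saturable absorber: the excited-state cross-section exceeds
# the ground-state one (ratio > 1), so absorption grows with intensity.
ratio = 3.0                   # hypothetical sigma_excited / sigma_ground
alpha_rsa = alpha0 * (1 + ratio * I) / (1 + I)

# Beer-Lambert transmission through the sample
T_sat = np.exp(-alpha_sat * L)
T_rsa = np.exp(-alpha_rsa * L)

print(T_sat[::50])   # rises towards 1: brighter than the background
print(T_rsa[::50])   # falls with intensity: a darker region, i.e. a shadow

In the reverse-saturable case the medium transmits less of the illuminating light wherever the pump beam is present, which is exactly the darker region that registers as a shadow.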

Diagram showing how the green laser increases the optical absorption of the blue illuminating laser beam, alongside a photo of the setup
Shadow experiment: A high-power green laser beam is directed through a ruby cube and illuminated with a blue laser beam from the side. The green laser beam increases the optical absorption of the blue illuminating laser beam, creating a matching region in the illuminating light and creating a darker area that appears as a shadow of the green laser beam. (Courtesy: R. A. Abrahao, H. P. N. Morin, J. T. R. Pagé, A. Safari, R. W. Boyd, J. S. Lundeen)

That might sound like a tall order, but it turns out that ruby fits the bill. Ruby is an aluminium oxide crystal that contains impurities of chromium atoms. These impurities distort its crystal lattice and give it its familiar red colour. When green laser light (532 nm) is applied to ruby, it drives an electronic transition from the ground state (denoted ⁴A₂) to an excited state ⁴T₂. This excited state then decays rapidly via phonons (vibrations of the crystal lattice) to the ²E state.

At this point, the electrons absorb blue light (450 nm) and transition from ²E to a different excited state, denoted ²T₁. While electrons in the ⁴A₂ state could, in principle, absorb blue light directly, without any intermediate step, the absorption cross-section of the transition from ²E to ²T₁ is larger, Abrahao explains.

The result is that in the presence of the green laser beam, the ruby absorbs more of the illuminating blue light. This leaves behind a lower-optical-intensity region of blue illumination within the ruby – in other words, the green laser beam’s shadow.

Shadow behaves like an ordinary shadow

This laser shadow behaves like an ordinary shadow in many respects. It follows the shape of the object (the green laser beam) and conforms to the contours of the surfaces it falls on. The team also developed a theoretical model that predicts that the darkness of the shadow will increase as a function of the power of the green laser beam. In their experiment, the maximum contrast was 22% – a figure that Abrahao says is similar to a typical shadow on a sunny day. He adds that it could be increased in the future.
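A standard definition of optical contrast – and, we assume, the convention behind the 22% figure, as the article does not spell it out – compares the intensity inside the shadow with the surrounding background:

$$C = \frac{I_{\text{background}} - I_{\text{shadow}}}{I_{\text{background}}}$$

so a contrast of 22% corresponds to a shadow region that is about 78% as bright as the illuminated area around it.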

Lundeen offers another way of looking at the team’s experiment. “Fundamentally, a light wave is actually composed of a hybrid particle made up of light and matter, called a polariton,” he explains. “When light travels in a glass or crystal, both aspects of the polariton are important and, for example, explain why the wave travels more slowly in these media than in vacuum. In the absence of either part of the polariton, either the photon or atom, there would be no shadow.”

Strictly speaking, it is therefore not massless light that is creating the shadow, but the material component of the polariton, which has mass, adds Abrahao, who is now a postdoctoral researcher at Brookhaven National Laboratory in the US.

As well as helping us to better understand light-matter interactions, Abrahao tells Physics World that the experiment “could also come in useful in any device in which we need to control the transmission of a laser beam with another laser beam”. The team now plans to search for other materials and combinations of wavelengths that might produce a similar “laser shadow” effect.

The post Laser beam casts a shadow in a ruby crystal appeared first on Physics World.

Elevating brachytherapy QA with RadCalc

By: No Author

Join us for an engaging webinar exploring how RadCalc supports advanced brachytherapy quality assurance, enabling accurate and efficient dose calculations. Brachytherapy plays a critical role in cancer treatment, with modalities like HDR, LDR, and permanent seed implants requiring precise dose verification to ensure optimal patient outcomes.

The increasing complexity of modern brachytherapy plans has heightened the demand for streamlined QA processes. Traditional methods, while effective, often involve time-consuming experimental workflows. With RadCalc’s 3D dose calculation system based on the TG-43 protocol, users can achieve fast and reliable QA, supported by seamless integration with treatment planning systems and automation through RadCalcAIR.
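For readers unfamiliar with the reference, TG-43 is the AAPM formalism for brachytherapy dose calculation. Whatever the details of RadCalc’s particular implementation, the standard 2D form of the TG-43 dose-rate equation is

$$\dot{D}(r,\theta) = S_K \, \Lambda \, \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)} \, g_L(r) \, F(r,\theta)$$

where $S_K$ is the air-kerma strength of the source, $\Lambda$ the dose-rate constant, $G_L$ the line-source geometry function, $g_L(r)$ the radial dose function and $F(r,\theta)$ the 2D anisotropy function, all evaluated relative to the reference point $r_0 = 1$ cm, $\theta_0 = \pi/2$.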

The webinar will showcase the implementation of independent RadCalc QA.

Don’t miss the opportunity to listen to two RadCalc clinical users!

A Q&A session follows the presentation.

Michal Poltorak, Oskar Sobotka, Lucy Wolfsberger, Carlos Bohorquez (left to right)

Michal Poltorak, MSc, is the head of the department of Medical Physics at the National Institute of Medicine, Ministry of the Interior and Administration, in Warsaw, Poland. With expertise in medical physics, he oversees research and clinical applications in radiation therapy and patient safety. His professional focus lies in integrating innovative technologies.

Oskar Sobotka, MSc.Eng, is a medical physicist at the Radiotherapy Center in Gorzów Wielkopolski, specializing in treatment planning and dosimetry. With a Master’s degree from Adam Mickiewicz University and experience in nuclear medicine and radiotherapy, he ensures precision and safety in patient care.

Lucy Wolfsberger, MS, LAP, is an application specialist for RadCalc at LifeLine Software Inc., a part of the LAP Group. She is dedicated to enhancing safety and accuracy in radiotherapy by supporting clinicians with a patient-centric, independent quality assurance platform. Lucy combines her expertise in medical physics and clinical workflows to help healthcare providers achieve efficient, reliable, and comprehensive QA.

Carlos Bohorquez, MS, DABR, is the product manager for RadCalc at LifeLine Software Inc., a part of the LAP Group. An experienced board-certified clinical physicist with a proven history of working in the clinic and the medical device industry, Carlos brings his passion for clinical quality assurance to the ongoing research and development of RadCalc.

The post Elevating brachytherapy QA with RadCalc appeared first on Physics World.

Automated checks build confidence in treatment verification

ChartCheck
Streamlined solution ChartCheck automates a comprehensive suite of clinical checks to monitor the progress of ongoing treatments. (Courtesy: Radformation)

Busy radiation therapy clinics need smart solutions that streamline processes while also enhancing the quality of patient care. That’s the premise behind ChartCheck, a tool developed by Radformation to facilitate the weekly checks that medical physicists perform for each patient who is undergoing a course of radiotherapy. By introducing automation into what is often a manual and repetitive process, ChartCheck can save time and effort while also enabling medical physicists to identify and investigate potential risks as the treatment progresses.

“To ensure that a patient is receiving the proper treatment, a qualified medical physicist must check a patient’s chart after every five fractions of radiation have been delivered,” explains Ryan Manger, lead medical physicist at the Encinitas Treatment Center, one of four clinics operated by UC San Diego in the US. “The current best practice is to check 36 separate items for each patient, which can take a lot of time when each physicist needs to verify 30 or 40 charts every week.”

Ryan Manger
Improving workflows Ryan Manger, lead medical physicist at one of the treatment centres operated by UC San Diego, believes that ChartCheck has helped him and his colleagues to save time and focus their attention where it matters most. (Courtesy: R Manger/UC San Diego)

Before introducing ChartCheck into the workflow at UC San Diego, Manger says that around 70% of the checks had to be done manually. “The weekly checks are really important for patient safety, but they become a big time sink when each task takes five or ten minutes,” he says. “It’s easy to get fatigued when you’re looking at the same things over and over again, and we have found that introducing automation into the process can have a positive impact on everything else we do in the clinic.”

ChartCheck monitors the progress of ongoing treatments by automatically performing a comprehensive suite of clinical checks, raising an alert if any issue is detected. As an example, after each treatment the tool verifies that the delivered dose matches the parameters defined in the clinical plan, while it also monitors real-time changes such as any movement of the couch during treatment. It also collates all the necessary safety documentation, allows comments or notes to be added, and highlights any scheduling changes when a patient decides to take a treatment break, for instance, or the physician adds a boost to the clinical plan.

As well as consolidating all the information on a single platform, ChartCheck allows physicists to analyse the treatment data to identify and understand any underlying issues that might affect patient safety. “It has given us a lot more vision of what’s happening across all our treatments, which is typically around 300 per week,” says Manger. “Within just three months it has illuminated areas that we were unaware of before, but that might have carried some risk.”

What’s more, the physicists at UC San Diego have found that automating many of the routine tasks has enabled them to focus their attention where it is needed most. “We have implemented the tool as a first-pass filter to flag any charts that might need further attention, which is typically around 10–15% of the total,” says Manger. “We can then use our expertise to investigate those charts in more detail and to understand what the risk factors might be. The result is that we do a better check where it’s needed, rather than just looking at the same things over and over.”

Jennifer Scharff
Building confidence Jennifer Scharff, lead physicist at the John Stoddard Cancer Center in Des Moines, Iowa, says that ChartCheck has helped her to ensure that all the necessary safety checks are being done in a consistent way. (Courtesy: J Scharff/UnityPoint Health)

Jennifer Scharff, lead physicist at the John Stoddard Cancer Center in Des Moines, Iowa, also values the extra insights that ChartCheck offers. One major advantage, she says, is how easy it is to check whether the couch might have moved between treatment fields. “It’s not ideal when the couch moves, but sometimes it happens if a patient coughs or sneezes during the treatment and the therapist needs to adjust the position slightly when they get back into their breath hold,” she says. “In ChartCheck it’s really easy to see those positional shifts on a daily basis, and to identify any trends or issues that we might need to address.”

ChartCheck offers full integration with ARIA, the oncology information system from Varian, making it easy to implement and operate within existing clinical workflows. Although ARIA already offers a tool for treatment verification, Scharff says that ChartCheck offers a more comprehensive and efficient solution. “It checks more than ARIA does, and it’s much faster and more efficient to do a weekly physics check,” she says. “As an example, it’s really easy to see the journal notes that our therapists make when something isn’t quite right, and it helps us to identify patients who need a final chart check when they want to pause or stop their treatment.”

The automated tool also guarantees consistency between the chart checks undertaken by different physicists, with Scharff finding the standardized approach particularly useful when locums are brought into the team. “It’s easy for them to see all the information we can see, we can be sure that they are making the same checks as we do, and the same documents are always sent for approval,” she says. “The system makes it really easy to catch things, and it calls out the same thing for everyone.”

With the medical physicists at UC San Diego working across four different treatment centres, Manger has also been impressed by the ability of ChartCheck to improve consistency between physicists working in different locations. “The human factor always introduces some variations, even between physicists who are fully trained,” he says. “Minimizing the impact of those variations has been a huge benefit that I hadn’t considered when we first decided to introduce the software, but it has allowed us to ensure that all the correct policies and procedures are being followed across all of our treatment centres.”

Overall, the experience of physicists like Manger and Scharff is that ChartCheck can streamline processes while also providing them with the reassurance that their patients are always being treated correctly and safely. “It has had a huge positive impact for us,” says Scharff. “It saves a lot of time and gives us more confidence that everything is being done as it should be.”

The post Automated checks build confidence in treatment verification appeared first on Physics World.

Patient-specific quality assurance (PSQA) based on independent 3D dose calculation

By: No Author


In this webinar, we will discuss why patient-specific quality assurance (PSQA) is an essential component of the radiation treatment process. This check ensures that the planned dose will actually be delivered to the patient. The growing number of patients with indications for modulated treatments requiring PSQA has significantly increased the workload of medical physics departments, creating a need for more efficient ways to perform it.

Measurement systems have evolved considerably in recent years, but the experimental process they involve places a limit on the achievable time savings. Independent 3D dose calculation systems are presented as a solution to this problem, reducing the time needed to start treatments.

The use of 3D dose calculation systems, as set out in international recommendations (AAPM TG-219), requires a process of commissioning and adjustment of dose calculation parameters.

This presentation will show the implementation of PSQA based on independent 3D dose calculation for VMAT treatments in breast cancer, using DICOM plan information and machine log files. Comparative results with measurement-based PSQA systems will also be presented.

An interactive Q&A session follows the presentation.

Daniel Venencia

Dr Daniel Venencia is the chief of the medical physics department at Instituto Zunino – Fundación Marie Curie in Cordoba, Argentina. He holds a BSc in physics and a PhD from the Universidad Nacional de Córdoba (UNC), and has completed postgraduate studies in radiotherapy and nuclear medicine. With extensive experience in the field, Daniel has directed more than 20 MSc and BSc theses and three doctoral theses. He has delivered more than 400 presentations at national and international congresses and has published in prestigious journals, including the Journal of Applied Clinical Medical Physics and the International Journal of Radiation Oncology, Biology and Physics. His work continues to make significant contributions to the advancement of medical physics.

Carlos Bohorquez

Carlos Bohorquez, MS, DABR, is the product manager for RadCalc at LifeLine Software Inc., a part of the LAP Group. An experienced board-certified clinical physicist with a proven history of working in the clinic and the medical device industry, Carlos brings his passion for clinical quality assurance to the ongoing research and development of RadCalc.


The post Patient-specific quality assurance (PSQA) based on independent 3D dose calculation appeared first on Physics World.

Squishy silicone rings shine a spotlight on fluid-solid transition

By: Anna Demming

People working in industry, biology and geology are all keen to understand when particles will switch from flowing like fluids to jamming like solids. With rigid particles, and even for foams and emulsions, scientists know what determines this crunch point: it’s related to the number of contact points between particles. But for squishy particles – those that deform by more than 10% of their size – that’s not necessarily the case.

“You can have a particle that’s completely trapped between only two particles,” explains Samuel Poincloux, who studies the statistical and mechanical response of soft assemblies at Aoyama Gakuin University, Japan.

Factoring that level of deformability into existing theories would be fiendishly difficult. But with real-world scenarios – particularly in mechanobiology – coming to light that hinge on the flow or jamming of highly deformable particles, the lack of explanation was beginning to hurt. Poincloux and his University of Tokyo colleague Kazumasa Takeuchi therefore tried a different approach. Their “easy-to-do experiment” sheds fresh light on how squishy particles respond to external forces, leading to a new model that explains how such particles flow – and at what point they don’t.

Pinning down the differences

To demonstrate how things can change when particles can deform a lot, Takeuchi holds up a case containing hundreds of rigid photoelastic rings. When these rings are under stress, the polarization of light passing through them changes. “This shows how the force is propagating,” he says.

As he presses on the rings with a flat-ended rod, a pattern of radial lines centred at the bottom of the rod lights up. With rigid particles, he explains, chains of forces transmitted by these contact points conspire to fix the particles in place. The fewer the contact points, the fewer the chains of forces keeping them from moving. However, when particles can deform a lot, the contact areas are no longer points. Instead, they extend over a larger region of the ring’s surface. “We can already expect that something will be very different then,” he says.

The main ingredient in Takeuchi and Poincloux’s experimental study of these differences was a layer of deformable silicone rings 10 mm high, 1.5 mm thick and with a radius of 3.3 mm, laid out between two parallel surfaces. The choice of ring material and dimensions was key to ensuring the model reproduced relevant aspects of behaviour while remaining easy to manipulate and observe. To that end, they added an acrylic plate on top to stop the rings popping out under compression. “There’s a lot of elastic energy inside them,” says Poincloux, nodding wryly. “They go everywhere.”

By pressing on one of the parallel surfaces, the researchers compressed the rings (thereby adjusting their density) and added an oscillating shear force. To monitor the rings’ response, they used image analysis to note the position, shape, neighbours and contact lengths for each ring. As they reduced the shear force amplitude or increased the density, they observed a transition to solid-like behaviour in which the rings’ displacement under the shear force became reversible. This transition was also reflected in collective properties such as calculated loss and storage moduli.
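For readers unfamiliar with these quantities: under an imposed oscillatory shear strain, the storage modulus measures the in-phase (elastic, solid-like) part of the stress response and the loss modulus the out-of-phase (viscous, fluid-like) part. In the usual rheological convention,

$$\gamma(t) = \gamma_0 \sin(\omega t), \qquad \sigma(t) = \gamma_0 \left[ G'(\omega)\sin(\omega t) + G''(\omega)\cos(\omega t) \right]$$

so the assembly behaves as a solid when $G' > G''$ and flows like a fluid when $G'' > G'$.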

Unexpectedly simple

Perhaps counterintuitively, regular patterns – crystallinity – emerged in the arrangement of the rings while the system was in a fluid phase but not in the solid phase. This and other surprising behaviours make the system hard to model analytically. However, Takeuchi emphasises that the theoretical criterion for switching between solid-like and fluid-like behaviour turned out to be quite simple. “This is something we really didn’t expect,” he says.

  • The top row in the video depicts the fluid-like behaviour of the rings at low density. The bottom row depicts the solid-like behaviour of the rings at a higher density. (Courtesy: Poincloux and Takeuchi 2024)

The researchers’ experiments showed that for squishy particles, the number of contacts no longer matters much. Instead, it’s the size of the contact that’s important. “If you have very extended contact, then [squishy particles] can basically remain solid via the extension of contact, and that is possible only because of friction,” says Poincloux. “Without friction, they will almost always rearrange and lose their rigidity.”

Jonathan Bares, who studies granular matter at CNRS in the Université de Montpellier, France, but was not involved in this work, describes the model experiment as “remarkably elegant”. This kind of jamming state is, he says, “challenging to analyse both analytically and numerically, as it requires accounting for the intricate properties of the materials that make up the particles.” It is, he adds, “encouraging to see squishy grains gaining increasing attention in the study of granular materials”.

As for the likely impact of the result, biophysicist Christopher Chen, whose work at Boston University in the US focuses on adhesive, mechanical and biochemical contributions in tissue microfabrication, says the study “provides more evidence that the way in which soft particles interact may dominate how biological tissues control transitions in rigidity”.  These transitions, he adds, “are important for many shape-changing processes during tissue assembly and formation”.

Full details of the experiment are reported in PNAS.

The post Squishy silicone rings shine a spotlight on fluid-solid transition appeared first on Physics World.

The heart of the matter: how advances in medical physics impact cardiology

By: Tami Freeman

Medical physics techniques play a key role in all areas of cardiac medicine – from the use of advanced imaging methods and computational modelling to visualize and understand heart disease, to the development and introduction of novel pacing technologies.  At a recent meeting organised by the Institute of Physics’ Medical Physics Group, experts in the field discussed some of the latest developments in cardiac imaging and therapeutics, with a focus on transitioning technologies from the benchtop to the clinic.

Monitoring metabolism

The first speaker, Damian Tyler from the University of Oxford described how hyperpolarized MRI can provide “a new window on the reactions of life”. He discussed how MRI – most commonly employed to look at the heart’s structure and function – can also be used to characterize cardiac metabolism, with metabolic MR studies helping us understand cardiovascular disease, assess drug mechanisms and guide therapeutic interventions.

In particular, Tyler is studying pyruvate, a compound that plays a central role in the body’s metabolism of glucose. He explained that 13C MR spectroscopy is ideal for studying pyruvate metabolism, but its inherent low signal-to-noise ratio makes it unsuitable for rapid in vivo imaging. To overcome this limitation, Tyler uses hyperpolarized MR, which increases the sensitivity to 13C-enriched tracers by more than 10,000 times and enables real-time visualization of normal and abnormal metabolism.

As an example, Tyler described a study using hyperpolarized 13C MR spectroscopy to examine cardiac metabolism in diabetes, which is associated with an increased risk of heart disease. Tyler and his team examined the downstream metabolites of 13C-pyruvate (such as 13C-bicarbonate and 13C-lactate) in subjects with and without type 2 diabetes. They found reduced bicarbonate levels in diabetes and increased lactate, noting that the bicarbonate to lactate ratio could provide a diagnostic marker.

Among other potential clinical applications, hyperpolarized MR could be used to detect inflammation following a heart attack, elucidate the mechanism of drugs and accelerate new drug discovery, and provide an indication of whether a patient is likely to develop cardiotoxicity from chemotherapy. It can also be employed to guide therapeutic interventions by imaging tissue ischaemia and assessing cardiac perfusion after a heart attack.

“Hyperpolarized MRI offers a safe and non-invasive way to assess cardiac metabolism,” Tyler concluded. “There are a raft of potential clinical applications for this emerging technology.”

Changing the pace

Alongside the introduction of new and improved diagnostic approaches, researchers are also developing and refining treatments for cardiac disorders. One goal is to create an effective treatment for heart failure, an incurable progressive condition in which the heart can’t pump enough blood to meet the body’s needs. Current therapies can manage symptoms, but cannot treat the underlying disease or prevent progression. Ashok Chauhan from Ceryx Medical told delegates how the company’s bio-inspired pacemaker aims to address this shortfall.

In healthy hearts, Chauhan explained, the heart rate changes in response to breathing, in a mechanism called respiratory sinus arrhythmia (RSA). This natural synchronization is frequently lost in patients with heart failure. Ceryx has developed a pacing technology that aims to treat heart failure by resynchronizing the heart and lungs and restoring RSA.

Ashok Chauhan from Ceryx Medical
Heart–lung synchronization Ashok Chauhan explained how Ceryx Medical’s bio-inspired pacemaker aims to improve cardiac function in patients with heart failure.

The device works by monitoring the cardiorespiratory system and using RSA inputs to generate stimulation signals in real time. Early trials in large animals demonstrated that RSA pacing increased cardiac output and ejection fraction compared with monotonic (constant) pacing. Last month, Ceryx began the first in-human trials of its pacing technology, using an external pacemaker to assess the safety of the device.

Eliminating sex bias

Later in the day, Hannah Smith from the University of Oxford presented a fascinating talk entitled “Women’s hearts are superior and it’s killing them”.

Smith told a disturbing tale of an elderly man with chest pain, who calls an ambulance and undergoes electrocardiography (ECG) that shows he is having a heart attack. He is rushed to hospital to unblock his artery and restore cardiac function. His elderly wife also feels unwell, but her ECG only shows slight abnormality. She is sent for blood tests that eventually reveal she was also having a severe heart attack – but the delay in diagnosis led to permanent cardiac damage.

The fact is that women having heart attacks are more likely to be misdiagnosed and receive less aggressive treatment than men, Smith explained. This is due to variations in the size of the heart and differences in the distances and angles between the heart and the torso surface, which affect the ECG readings used to diagnose heart attack.

To understand the problem in more depth, Smith developed a computational tool that automatically reconstructs torso ventricular anatomy from standard clinical MR images. Her goal was to identify anatomical differences between males and females, and examine their impact on ECG measurements.

Using clinical data from the UK Biobank (around 1000 healthy men and women, and 84 women and 341 men post-heart attack), Smith modelled anatomies and correlated these with the respective ECG data. She found that the QRS complex (the signal for the heart to start contracting) was about 6 ms longer in healthy males than healthy females, attributed to the smaller heart volume in females. This is significant because it implies that the mean QRS duration would have to increase by a larger percentage in women than in men to be classified as abnormally prolonged.

She also studied the ST segment in the ECG trace, elevation of which is a key feature used to diagnose heart attack. The ST amplitude was lower in healthy females than healthy males, due to their smaller ventricles and the more superior position of the heart. The calculations revealed that overweight women would need a 63% larger increase in ST amplitude than normal-weight men for it to be classified as elevated.
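The logic behind such percentages is straightforward: diagnostic criteria use fixed absolute thresholds, so a smaller baseline signal demands a larger relative change. With purely illustrative round numbers (not Smith’s values), a fixed 0.1 mV elevation criterion applied to baseline ST amplitudes of 0.2 mV and 0.3 mV corresponds to required relative increases of 50% and about 33% respectively – a 50% larger relative increase for the smaller signal.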

Smith concluded that heart attacks are harder to see on a woman’s ECG than on a man’s, with differences in ventricular size, position and orientation impacting the ECG before, during and after heart attacks. Importantly, if these relationships can be elucidated and corrected for in diagnostic tools, these sex biases can be reduced, paving the way towards personalised ECG interpretation.

Prize presentations

The meeting also included a presentation from the winner of the 2023 Medical Physics Group PhD prize: Joshua Astley from the University of Sheffield, for his thesis “The role of deep learning in structural and functional lung imaging”.

Joshua Astley from the University of Sheffield
Prize presentation Joshua Astley from the University of Sheffield is the winner of the 2023 Medical Physics Group PhD prize.

Shifting the focus from the heart to the lungs, Astley discussed how hyperpolarized gas MRI, using inhaled contrast agents such as 3He and 129Xe, can visualize regional lung ventilation. To improve the accuracy and speed of such lung MRI studies, he designed a deep learning system that rapidly performs MRI segmentation and automates the calculation of ventilation defect percentage via lung cavity estimates. He noted that the tool is already being used to improve workflow in clinical hyperpolarized gas MRI scans.

Astley also described the use of CT ventilation imaging as a potentially lower-cost approach to visualize lung ventilation. Combining the benefits of computational modelling with deep learning, Astley and colleagues have developed a hybrid framework that generates synthetic ventilation scans from non-contrast CT images.

Quoting some “lessons learnt from my thesis”, Astley concluded that artificial intelligence (AI)-based workflows enable faster computation of clinical biomarkers and better integration of functional lung MRI, and that non-contrast functional lung surrogates can reduce the cost and expand the use of functional lung imaging. He also emphasized that quantifying the uncertainty in AI approaches can improve clinicians’ trust in such algorithms, and that making code open and available is key to increasing its impact.

The day rounded off with awards for the meeting’s best talk in the submitted abstracts section and the best poster presentation. The former was won by Sam Barnes from Lancaster University for his presentation on the use of electroencephalography (EEG) for diagnosis of autism spectrum disorder. The poster prize was awarded to Suchit Kumar from University College London, for his work on a graphene-based electrophysiology probe for concurrent EEG and functional MRI.

The post The heart of the matter: how advances in medical physics impact cardiology appeared first on Physics World.

Extended cosmic-ray electron spectrum has a break but no other features

By: No Author

A new observation of electron and positron cosmic rays has confirmed the existence of what could be a “cooling break” in the energy spectrum at around 1 TeV, beyond which the particle flux decreases more rapidly. Aside from this break, however, the spectrum is featureless, showing no evidence of an apparent anomaly previously associated with a dark matter signal.

Cosmic ray is a generic term for an energetic charged particle that enters Earth’s atmosphere from space. Most cosmic rays are protons, some are heavier nuclei, and a small number (orders of magnitude fewer than protons) are electrons and their antiparticles (positrons).

“Because the electron’s mass is small, they radiate much more effectively than protons,” explains high-energy astrophysicist Felix Aharonian of the Max Planck Institute for Nuclear Physics in Heidelberg, Germany. “It makes the electrons very fragile, so the electrons we detect cannot be very old. That means the sources that produce them cannot be very far away.” Cosmic-ray electrons and positrons can therefore provide important information about our local cosmic environment.

Today, however, the origins of these electrons and positrons are hotly debated. They could be produced by nearby pulsars or supernova remnants. Some astrophysicists favour a secondary production model in which other cosmic rays interact locally with interstellar gas to create high-energy electrons and positrons.

Unexplained features

Previous measurements of the energy spectra of these cosmic rays revealed several unexplained features. In general, the particle flux decreases with increasing energy. At energies below about 1 TeV, the flux falls as a steady power law. But at about 1 TeV, the decline steepens to a larger power-law index at a curious kink or break point.

Later observations by the Dark Matter Particle Explorer (DAMPE) collaboration confirmed this kink, but also appeared to show peaks at higher energies. Some theoreticians have suggested these inhomogeneities could arise from local sources such as pulsars, whereas others have advanced more exotic explanations, such as signals from dark matter.

In the new work, members of the High Energy Stereoscopic System (HESS) collaboration looked for evidence of cosmic electrons and positrons in 12 years of data from the HESS observatory in Namibia. HESS’s primary mission is to observe high-energy cosmic gamma rays. These gamma rays interact with the atmosphere, creating showers of energetic charged particles. These showers create Cherenkov light, which is detected by HESS.

Similar but not identical

The observatory can also detect atmospheric showers created by cosmic rays such as protons and electrons. However, discerning between showers created by protons and those created by electrons or positrons is a significant challenge (HESS cannot differentiate between electrons and positrons). “The hadronic showers produced by protons and electronic showers are extremely similar but not identical,” says Aharonian. “Now we want to use this tiny difference to distinguish between electron-produced showers and proton-produced showers. The task is very difficult because we need to reject proton showers by four orders of magnitude and still keep a reasonable fraction of electrons.”

Fortunately, the large data sample from HESS meant that the team could identify weak signals associated with electrons and positrons. The researchers were therefore able to extend the flux measurements out to much higher energies.  Whereas previous surveys could not look higher than about 5 TeV, the HESS researchers probed the 0.3–40 TeV range – although Aharonian concedes that the error bars are “huge” at higher energies.

The study confirms that, up until about 1 TeV, the spectrum falls as a power law, with an index the team measured to be about 3.25. At about 1 TeV, a sharp downward kink was also observed – with the index steepening to about 4.5 at higher energies. However, there is no sign of any bumps or peaks in the data.

This kink can be naturally explained, says Aharonian, as a “cooling break”, in which the low-energy electrons are produced by background processes, whereas the high-energy electrons are produced locally. “Teraelectronvolt electrons can only come from local sources,” he says. In theoretical models, both fluxes would fall as power laws, and the difference between their spectral indices would be 1 – close to that measured here. Aharonian believes that further information about this phenomenon could come from techniques such as machine learning or muon detection to distinguish between high-energy proton showers and electron showers.
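In outline – this is the textbook argument, not a calculation from the paper – the break arises because radiative losses through synchrotron emission and inverse-Compton scattering scale as the square of the electron energy, which steepens an injected power law by one unit of spectral index above the break energy $E_{\rm b}$:

$$\Phi(E) \propto \begin{cases} E^{-\Gamma}, & E \ll E_{\rm b} \\ E^{-(\Gamma+1)}, & E \gg E_{\rm b} \end{cases}$$

With the HESS value $\Gamma \approx 3.25$ below the break, this picture predicts an index of about 4.25 above it, close to the measured 4.5.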

“This is a unique measurement: it gives you a value of the electron–positron flux up to extremely high energies,” says Andrei Kounine of Massachusetts Institute of Technology, who works on the Alpha Magnetic Spectrometer (AMS-02) detector on the International Space Station. While he expresses some concerns about possible uncharacterized systematic errors at very high energies, he says they do not meaningfully diminish the value of the HESS team’s work. He notes that there are a variety of unexplained anomalies in the energy spectra of various cosmic-ray particles. “What we are missing at the moment,” he says, “is a comprehensive theory that considers all possible effects and tries to predict from fundamental measurements such as the proton spectrum the fluxes of all other elements.”

The research is described in Physical Review Letters.

The post Extended cosmic-ray electron spectrum has a break but no other features appeared first on Physics World.

Mathematical model sheds light on how exercise suppresses tumour growth

By: Tami Freeman

Physical exercise plays an important role in controlling disease, including cancer, due to its effect on the human body’s immune system. A research team from the USA and India has now developed a mathematical model to quantitatively investigate the complex relationship between exercise, immune function and cancer.

Exercise is thought to suppress tumour growth by activating the body’s natural killer (NK) cells. In particular, skeletal muscle contractions drive the release of interleukin-6 (IL-6), which causes NK cells to shift from an inactive to an active state. The activated NK cells can then infiltrate and kill tumour cells. To investigate this process in more depth, the team developed a mathematical model describing the transition of a NK cell from its inactive to active state, at a rate driven by exercise-induced IL-6 levels.

“We developed this model to study how the interplay of exercise intensity and exercise duration can lead to tumour suppression and how the parameters associated with these exercise features can be tuned to get optimal suppression,” explains senior author Niraj Kumar from the University of Massachusetts Boston.

Impact of exercise intensity and duration

The model, reported in Physical Biology, is constructed from three ordinary differential equations that describe the temporal evolution of the number of inactive NK cells, active NK cells and tumour cells, as functions of the growth rates, death rates, switching rates (for NK cells) and the rate of tumour cell kill by activated NK cells.
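The paper’s exact equations are not reproduced here, but a minimal sketch of a system with this structure – every functional form and parameter value below is an illustrative assumption, not a coefficient from the study – can be integrated in a few lines of Python:

import numpy as np
from scipy.integrate import solve_ivp

# Toy NK-cell/tumour model with the structure described above.
# All rates and initial conditions are hypothetical placeholders.

def il6(t, alpha0=1.0, tau=5.0):
    # Exercise-induced IL-6 level: scale set by intensity alpha0,
    # decaying after the bout on a time scale tau (days).
    return alpha0 * np.exp(-t / tau)

def rhs(t, y, g=0.3, K=1e6, d_i=0.05, d_a=0.1, k=1e-6, supply=1e3):
    Ni, Na, T = y                     # inactive NK, active NK, tumour cells
    s = il6(t)                        # activation rate follows IL-6
    dNi = supply - s * Ni - d_i * Ni  # supply, activation and death of inactive NK
    dNa = s * Ni - d_a * Na           # activated NK cells die at rate d_a
    dT = g * T * (1 - T / K) - k * Na * T   # logistic growth minus NK-mediated kill
    return [dNi, dNa, dT]

sol = solve_ivp(rhs, (0, 20), [1e4, 0.0, 1e4], dense_output=True)
t = np.linspace(0, 20, 5)
print(sol.sol(t)[2])   # tumour population sampled over the 20-day window

Raising the hypothetical intensity parameter alpha0, or the decay time tau, deepens and prolongs the dip in the tumour curve – the same qualitative behaviour the authors report for exercise intensity and duration.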

Kumar and collaborators – Jay Taylor at Northeastern University and T Bagarti at Tata Steel’s Graphene Center – first investigated how exercise intensity impacts tumour suppression. They used their model to determine the evolution over time of tumour cells for different values of α0 – a parameter that correlates with the maximum level of IL-6 and increases with increased exercise intensity.

Temporal evolution of tumour cells
Modelling suppression Temporal evolution of tumour cells for different values of α0 (left) and exercise time scale τ (right). (Courtesy: J Taylor et al Phys. Biol. 10.1088/1478-3975/ad899d)

Simulating tumour growth over 20 days showed that the tumour population evolved non-monotonically, falling to a minimum (maximum tumour suppression) at a certain critical time before increasing again and reaching a steady-state value in the long term. At all time points, the largest tumour population was seen for the no-exercise case, confirming the premise that exercise helps suppress tumour growth.

The model revealed that as the intensity of the exercise increased, the level of tumour suppression increased alongside, due to the larger number of active NK cells. In addition, greater exercise intensity sustained tumour suppression for a longer time. The researchers also observed that if the initial tumour population was closer to the steady state, the effect of exercise on tumour suppression was reduced.

Next, the team examined the effect of exercise duration, by calculating tumour evolution over time for varying exercise time scales. Again, the tumour population showed non-monotonic growth with a minimum population at a certain critical time and a maximum population in the no-exercise case.  The maximum level of tumour suppression increased with increasing exercise duration.

Finally, the researchers analysed how multiple bouts of exercise impact tumour suppression, modelling a series of alternating exercise and rest periods. The model revealed that the effect of exercise on maximum tumour suppression exhibits a threshold response with exercise frequency. Up to a critical frequency, which varies with exercise intensity, the maximum tumour suppression doesn’t change. However, if the exercise frequency exceeds the critical frequency, it leads to a corresponding increase in maximum tumour suppression.

Clinical potential

Overall, the model demonstrated that increasing the intensity or duration of exercise leads to greater and sustained tumour suppression. It also showed that manipulating exercise frequency and intensity within multiple exercise bouts had a pronounced effect on tumour evolution.

These results highlight the model’s potential to guide the integration of exercise into a patient’s cancer treatment programme. While still at the early development stage, the model offers valuable insight into how exercise can influence immune responses. And as Taylor points out, as more experimental data become available, the model has potential for further extension.

“In the future, the model could be adapted for clinical use by testing its predictions in human trials,” he explains. “For now, it provides a foundation for designing exercise regimens that could optimize immune function and tumour suppression in cancer patients, based on the exercise intensity and duration.”

Next, the researchers plan to extend the model to incorporate both exercise and chemotherapy dosing. They will also explore how heterogeneity in the tumour population can influence tumour suppression.

The post Mathematical model sheds light on how exercise suppresses tumour growth appeared first on Physics World.

Nuclear shape transitions visualized for the first time

Diagram showing a xenon atom changing shape from spherical to prolate to triaxial to oblate during a collision at the LHC
Shape shifter: The nucleus of the xenon atom can assume different shapes depending on the balance of internal forces at play. When two xenon atoms collide at the LHC, simulations indicate that the extremely hot conditions will trigger changes in these shapes. (Courtesy: You Zhou, NBI)

Xenon nuclei change shape as they collide, transforming from soft, oval-shaped particles to rigid, spherical ones. This finding, which is based on simulations of experiments at CERN’s Large Hadron Collider (LHC), provides a first look at how the shapes of atomic nuclei respond to extreme conditions. While the technique is still at the theoretical stage, physicists at the Niels Bohr Institute (NBI) in Denmark and Peking University in China say that ultra-relativistic nuclear collisions at the LHC could allow for the first experimental observations of these so-called nuclear shape phase transitions.

The nucleus of an atom is made up of protons and neutrons, which are collectively known as nucleons. Like electrons, nucleons exist in different energy levels, or shells. To minimize the energy of the system, these shells take different shapes, with possibilities including pear, spherical, oval or peanut-shell-like formations. These shapes affect many properties of the atomic nucleus as well as nuclear processes such as the strong interactions between protons and neutrons. Being able to identify them is thus very useful for predicting how nuclei will behave.

Colliding pairs of 129Xe atoms at the LHC

In the new work, a team led by You Zhou at the NBI and Huichao Song at Peking University studied xenon-129 (129Xe). This isotope has 54 protons and 75 neutrons and is considered a relatively large atom, making its nuclear shape easier, in principle, to study than that of smaller atoms.

Usually, the nucleus of xenon-129 is oval-shaped (technically, it is a 𝛾-soft rotor). However, low-energy nuclear theory predicts that it can transition to a spherical, prolate or oblate shape under certain conditions. “We propose that to probe this change (called a shape phase transition), we could collide pairs of 129Xe atoms at the LHC and use the information we obtain to extract the geometry and shape of the initial colliding nuclei,” Zhou explains. “Probing these initial conditions would then reveal the shape of the 129Xe atoms after they had collided.”

A quark-gluon plasma

To test the viability of such experiments, the researchers simulated accelerating atoms to near relativistic speeds, equivalent to the energies involved in a typical particle-physics experiment at the LHC. At these energies, when nuclei collide with each other, their constituent protons and neutrons break down into smaller particles. These smaller particles are mainly quarks and gluons, and together they form a quark-gluon plasma, which is a liquid with virtually no viscosity.

Zhou, Song and colleagues modelled the properties of this “almost perfect” liquid using an advanced hydrodynamic model they developed called IBBE-VISHNU. According to these analyses, the Xe nuclei go from being soft and oval-shaped to rigid and spherical as they collide.

Studying shape transitions was not initially part of the researchers’ plan. The original aim of their work was to study conditions that prevailed in the first 10⁻⁶ seconds after the Big Bang, when the very early universe is thought to have been filled with a quark-gluon plasma of the type produced at the LHC. But after they realized that their simulations could shed light on a different topic, they shifted course.

“Our new study was initiated to address the open question of how nuclear shape transitions manifest in high-energy collisions,” Zhou explains, “and we also wanted to provide experimental insights into existing theoretical nuclear structure predictions.”

One of the team’s greatest difficulties lay in developing the complex models required to account for nuclear deformation and probe the structure of xenon and its fluctuations, Zhou tells Physics World. “There was also a need for compelling new observables that allow for a direct probe of the shape of the colliding nuclei,” he says.

Applications in both high- and low-energy nuclear and structure physics

The work could advance our understanding of fundamental nuclear properties and the operation of the theory of quantum chromodynamics (QCD) under extreme conditions, Zhou adds. “The insights gleaned from this work could guide future nuclear collision experiments and influence our understanding of nuclear phase transitions, with applications extending to both high-energy nuclear physics and low-energy nuclear structure physics,” he says.

The NBI/Peking University researchers say that future experiments could validate the nuclear shape phase transitions they observed in their simulations. Expanding the study to other nuclei that could be collided at the LHC is also on the cards, says Zhou. “This could deepen our understanding of nuclear structure at ultra-short timescales of 10⁻²⁴ seconds.”

The research is published in Physical Review Letters.

The post Nuclear shape transitions visualized for the first time appeared first on Physics World.

Physicists in cancer radiotherapy

By: No Author

The programme focuses on the cancer radiation therapy patient pathway, with the aim of equipping students with the skills to progress onto careers in clinical, academic research or commercial medical physics opportunities.

Alan McWilliam, programme director of the new course, is also a reader in translational radiotherapy physics. He explains: “Radiotherapy is a mainstay of cancer treatment, used in around 50% of all treatments, and can be used together with surgery or systemic treatments like chemotherapy or immunotherapy. With a heritage dating back over 100 years, radiotherapy is now highly technical, allowing the radiation to be delivered with pin-point accuracy and is increasingly interdisciplinary to ensure a high-quality, curative delivery of radiation to every patient.”

“This new course builds on the research expertise at Manchester and benefits from being part of one of the largest university cancer departments in Europe, covering all aspects of cancer research. We believe this master’s reflects the modern field of medical physics, spanning the multidisciplinary nature of the field.”

Cancer pioneers

Manchester has a long history of developing solutions to drive improvements in healthcare, patients’ lives and the wellbeing of individuals. This new course draws on scientific research and innovation to equip those interested in a career in medical physics or cancer research with specialist skills spanning a breadth of knowledge. Indeed, the course units bring together expertise from academics who have pioneered, amongst other work, the use of image-guided radiotherapy, big data analysis using real-world radiotherapy data, novel MR imaging for tracking oxygenation of tumours during radiotherapy, and proton research beam lines. Students will benefit directly from this network of research groups by being able to join research seminars throughout the course.

Working with clinical scientists

The master’s course is taught together with clinical physicists from The Christie NHS Foundation Trust, one of the largest single-site cancer hospitals in Europe and the only UK cancer hospital connected directly to a research institute. The radiotherapy department currently has 16 linear accelerators across four sites, an MR-guided radiotherapy service and one of the two NHS high-energy proton beam services. The Christie is one of only two cancer centres in the world with access to both proton beam therapy and an MR-guided linear accelerator. For students, this partnership provides the opportunity to work with people at the forefront of cancer treatment developments.

To reflect the current state of radiotherapy, the University of Manchester has worked with The Christie to ensure students gain the skills necessary for a successful, modern, medical physics career. Units have a strong clinical focus, with access to technology that allows students to experience and learn from clinical workflows.

Students will learn the fundamentals of how radiotherapy works, from interactions of X-rays and matter, through X-ray beam generation control and measurement, and to how treatments are planned. Complementary to X-ray therapy, students will learn about the concepts of proton beam therapy, how the delivery of protons is different from X-rays, and the potential clinical benefits and unique difficulties of protons due to greater uncertainties from how protons interact with matter.

Delivering radiation with pin-point accuracy

The course will provide an in-depth understanding of how imaging can be used throughout the patient pathway to aid treatment decisions and guide the delivery of radiation.

The utility of CT, MRI and PET scanners across clinical pathways is explored, and the area of radiation delivery is complemented by material on radiobiology – how cells and tissues respond to radiation.

The balance between the response of tumours and that of normal tissue to radiation is known as the therapeutic ratio. The radiobiology teaching will focus on how to maximize this ratio, essentially how to improve cure whilst minimising the risk of side-effects due to irradiation of nearby normal tissues. Students will also explore how this ratio could be enhanced or modified to improve the efficacy of all forms of radiotherapy.

Research and technology

A core strength of the research groups in Manchester is the use of routinely collected data in the evaluation of improvements in treatment delivery or the clinical translation of research findings. Many such improvements do not qualify for a full randomized clinical trial; however, a range of pragmatic methods can be used to evaluate clinical benefit. The course explores these concepts through the study of clinical workflows and translation, along with how to maximise results from all available data.

Modern medical physicists need an appreciation of artificial intelligence (AI). AI is emerging as an automation tool throughout the radiation therapy workflow; for example, segmentation of tissues, radiotherapy planning and quality assurance. This course delves into the fundamentals of AI and machine learning, giving students the opportunity to implement their own solution for image classification or image segmentation. For those with leadership aspirations, guest lecturers from various academic, clinical or commercial backgrounds will detail career routes and how to develop knowledge in this area.

Pioneering new learning and assessments

Programme director Alan McWilliam talks us through the design of the course and how students are evaluated:

“An aspect of the teaching we are particularly proud of is the design of the assessments throughout the units. Gone are written exams, with assessments allowing students to apply their new knowledge to real medical physics problems. Students will perform dosimetric calculations and Monte Carlo simulations of proton depositions, as well as build an image registration pipeline and pitch for funding in a dragon’s den (or shark tank) scenario. This form of assessment will allow students to demonstrate skills directly useful for future career pathways.”

“The final part of the course is the research project, to take place after the taught elements are complete. Students will choose from projects which will embed them with one of the academic or clinical groups. Examples for the current cohort include training an AI segmentation model for muscle in CT images and associating this with treatment outcomes; simulating prompt gamma rays from proton deliveries for dose verification; and assisting with commissioning MR-guided workflows for ultra-central lung treatments.”

Develop your specialist skills

The Medical Physics in Cancer Radiation Therapy MSc is a one-year full-time (two-year part-time) programme at the University of Manchester.

Applications are now open for the next academic year, and it is recommended to apply early, as applications may close if the course is full.

Find out more and apply: https://uom.link/medphyscancer 

The post Physicists in cancer radiotherapy appeared first on Physics World.
