Sustainability spotlight: PFAS unveiled
So-called “forever chemicals”, or per- and polyfluoroalkyl substances (PFAS), are widely used in consumer, commercial and industrial products, and have subsequently made their way into humans, animals, water, air and soil. Despite this ubiquity, there are still many unknowns regarding the potential human health and environmental risks that PFAS pose.
Join us for an in-depth exploration of PFAS with four leading experts who will shed light on the scientific advances and future challenges in this rapidly evolving research area.
Our panel will guide you through a discussion of PFAS classification and sources, the journey of PFAS through ecosystems, strategies for PFAS risk mitigation and remediation, and advances in the latest biotechnological innovations to address their effects.
Sponsored by Sustainability Science and Technology, a new journal from IOP Publishing that provides a platform for researchers, policymakers, and industry professionals to publish their research on current and emerging sustainability challenges and solutions.
Jonas Baltrusaitis, inaugural editor-in-chief of Sustainability Science and Technology, has co-authored more than 300 research publications on innovative materials. His work includes nutrient recovery from waste, their formulation and delivery, and renewable energy-assisted catalysis for energy carrier and commodity chemical synthesis and transformations.
Linda S Lee is a distinguished professor at Purdue University with joint appointments in the Colleges of Agriculture (COA) and Engineering, program head of the Ecological Sciences & Engineering Interdisciplinary Graduate Program and COA assistant dean of graduate education and research. She joined Purdue in 1993 with degrees in chemistry (BS), environmental engineering (MS) and soil chemistry/contaminant hydrology (PhD) from the University of Florida. Her research includes chemical fate, analytical tools, waste reuse, bioaccumulation, and contaminant remediation and management strategies, with PFAS challenges driving much of her work for the past two decades. Her research is supported by a diverse funding portfolio, and she has published more than 150 papers, most in top-tier environmental journals.
Clinton Williams is the research leader of the Plant and Irrigation and Water Quality Research units at the US Arid Land Agricultural Research Center. He has been actively engaged in environmental research focusing on water quality and quantity for more than 20 years, and looks for ways to increase water supplies through the safe use of reclaimed waters. His current research concerns the environmental and human health impacts of biologically active contaminants (e.g. PFAS, pharmaceuticals, hormones and trace organics) found in reclaimed municipal wastewater, and the associated impacts on soil, biota and natural waters in contact with wastewater. He is also looking for ways to characterize the environmental loading patterns of these compounds while finding low-cost treatment alternatives that reduce their environmental concentrations using byproducts capable of removing the compounds from water supplies.
Sara Lupton has been a research chemist with the Food Animal Metabolism Research Unit at the Edward T Schafer Agricultural Research Center in Fargo, ND within the USDA-Agricultural Research Service since 2010. Sara’s background is in environmental analytical chemistry. She is the ARS lead scientist for the USDA’s Dioxin Survey and other research includes the fate of animal drugs and environmental contaminants in food animals and investigation of environmental contaminant sources (feed, water, housing, etc.) that contribute to chemical residue levels in food animals. Sara has conducted research on bioavailability, accumulation, distribution, excretion, and remediation of PFAS compounds in food animals for more than 10 years.
Jude Maul received a master’s degree in plant biochemistry from the University of Kentucky and a PhD in horticulture and biogeochemistry from Cornell University in 2008. Since then he has been with the USDA-ARS as a research ecologist in the Sustainable Agriculture System Laboratory. Jude’s research focuses on molecular ecology at the plant/soil/water interface in the context of plant health, nutrient acquisition and productivity. Taking a systems approach to agroecosystem research, Jude leads the USDA-ARS-LTAR Soils Working Group, which is creating a national soils data repository, and his research results contribute to national soil health management recommendations.
Sustainability Science and Technology is an interdisciplinary, open access journal dedicated to advances in science, technology, and engineering that can contribute to a more sustainable planet. It focuses on breakthroughs in all science and engineering disciplines that address one or more of the three sustainability pillars: environmental, social and/or economic.
Editor-in-chief: Jonas Baltrusaitis, Lehigh University, USA
String theory may be inevitable as a unified theory of physics, calculations suggest
Striking evidence that string theory could be the sole viable “theory of everything” has emerged in a new theoretical study of particle scattering that was done by a trio of physicists in the US. By unifying all fundamental forces of nature, including gravity, string theory could provide the long-sought quantum description of gravity that has eluded scientists for decades.
The research was done by Caltech’s Clifford Cheung and Aaron Hillman along with Grant Remmen at New York University. They have delved into the intricate mathematics of scattering amplitudes, which are quantities that encapsulate the probabilities of particles interacting when they collide.
Through a novel application of the bootstrap approach, the trio demonstrated that imposing general principles of quantum mechanics uniquely determines the scattering amplitudes of particles at the smallest scales. Remarkably, the results match the string scattering amplitudes derived in earlier works. This suggests that string theory may indeed be an inevitable description of the universe, even as direct experimental verification remains out of reach.
“A bootstrap is a mathematical construction in which insight into the physical properties of a system can be obtained without having to know its underlying fundamental dynamics,” explains Remmen. “Instead, the bootstrap uses properties like symmetries or other mathematical criteria to construct the physics from the bottom up, ‘effectively pulling itself up by its bootstraps’. In our study, we bootstrapped scattering amplitudes, which describe the quantum probabilities for the interactions of particles or strings.”
Why strings?
String theory posits that the elementary building blocks of the universe are not point-like particles but instead tiny, vibrating strings. The different vibrational modes of these strings give rise to the various particles observed in nature, such as electrons and quarks. This elegant framework resolves many of the mathematical inconsistencies that plague attempts to formulate a quantum description of gravity. Moreover, it unifies gravity with the other fundamental forces: electromagnetic, weak, and strong interactions.
However, a major hurdle remains. The characteristic size of these strings is estimated to be around 10⁻³⁵ m, which is roughly 15 orders of magnitude smaller than the resolution of today’s particle accelerators, including the Large Hadron Collider. This makes experimental verification of string theory extraordinarily challenging, if not impossible, for the foreseeable future.
Faced with the experimental inaccessibility of strings, physicists have turned to theoretical methods like the bootstrap to test whether string theory aligns with fundamental principles. Focusing on the mathematical consistency of scattering amplitudes, the researchers imposed constraints on the amplitudes based on basic quantum mechanical requirements such as locality and unitarity.
“Locality means that forces take time to propagate: particles and fields in one place don’t instantaneously affect another location, since that would violate the rules of cause-and-effect,” says Remmen. “Unitarity is conservation of probability in quantum mechanics: the probability for all possible outcomes must always add up to 100%, and all probabilities are positive. This basic requirement also constrains scattering amplitudes in important ways.”
In addition to these principles, the team introduced further general conditions, such as the existence of an infinite spectrum of fundamental particles and specific high-energy behaviour of the amplitudes. These criteria have long been considered essential for any theory that incorporates quantum gravity.
Unique solution
Their result is a unique solution to the bootstrap equations, which turned out to be the Veneziano amplitude — a formula originally derived to describe string scattering. This discovery strongly indicates that string theory meets the most essential criteria for a quantum theory of gravity. However, the definitive answer to whether string theory is truly the “theory of everything” must ultimately come from experimental evidence.
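For readers who want to see what such an amplitude looks like, the snippet below evaluates the textbook four-point Veneziano formula, A(s, t) = Γ(−α(s))Γ(−α(t))/Γ(−α(s)−α(t)), with a linear Regge trajectory α(x) = α(0) + α′x. This is purely illustrative background: the intercept and slope used are the standard bosonic-string textbook values, and the code is not the bootstrap calculation carried out in the paper.

```python
# Illustrative evaluation of the classic Veneziano amplitude (not the paper's
# bootstrap calculation). Poles appear whenever alpha(s) or alpha(t) reaches a
# non-negative integer, reflecting the infinite tower of string states.
from scipy.special import gamma

def veneziano(s, t, a0=1.0, ap=1.0):
    """Four-point Veneziano amplitude for Mandelstam variables s and t.

    a0 and ap are the Regge intercept and slope; the defaults are the
    textbook bosonic-string values, used here only for illustration.
    """
    alpha_s = a0 + ap * s
    alpha_t = a0 + ap * t
    return gamma(-alpha_s) * gamma(-alpha_t) / gamma(-alpha_s - alpha_t)

# Sample the amplitude at a few pole-free test points
for s, t in [(-0.5, -0.3), (-1.5, -0.3), (-2.5, -0.3)]:
    print(f"A({s}, {t}) = {veneziano(s, t):.4f}")
```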
Cheung explains, “Our work asks: what is the precise math problem whose solution is the scattering amplitude of strings? And is it the unique solution?”. He adds, “This work can’t verify the validity of string theory, which like all questions about nature is a question for experiment to resolve. But it can help illuminate whether the hypothesis that the world is described by vibrating strings is actually logically equivalent to a smaller, perhaps more conservative set of bottom up assumptions that define this math problem.”
The trio’s study opens up several avenues for further exploration. One immediate goal for the researchers is to generalize their analysis to more complex scenarios. For instance, the current work focuses on the scattering of two particles into two others. Future studies will aim to extend the bootstrap approach to processes involving multiple incoming and outgoing particles.
Another direction involves incorporating closed strings, which are loops that are distinct from the open strings analysed in this study. Closed strings are particularly important in string theory because they naturally describe gravitons, the hypothetical particles responsible for mediating gravity. While closed string amplitudes are more mathematically intricate, demonstrating that they too arise uniquely from the bootstrap equations would further bolster the case for string theory.
The research is described in Physical Review Letters.
Antimatter partner of hyperhelium-4 is spotted at CERN
CERN’s ALICE Collaboration has found the first evidence for antihyperhelium-4, which is an antimatter hypernucleus that is a heavier version of antihelium-4. It contains two antiprotons, an antineutron and an antilambda baryon. The latter contains three antiquarks (up, down and strange – making it an antihyperon), and is electrically neutral like a neutron. The antihyperhelium-4 was created by smashing lead nuclei together at the Large Hadron Collider (LHC) in Switzerland and the observation has a statistical significance of 3.5σ. While this is below the 5σ level that is generally accepted as a discovery in particle physics, the observation is in line with the Standard Model of particle physics. The detection therefore helps constrain theories beyond the Standard Model that try to explain why the universe contains much more matter than antimatter.
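For context, the sigma values quoted here can be translated into tail probabilities using the standard one-sided Gaussian convention of particle physics. The short calculation below is generic statistics, not part of the ALICE analysis itself.

```python
# Convert a quoted significance (in sigma) to a one-sided Gaussian p-value.
# Generic back-of-the-envelope statistics, not the ALICE collaboration's code.
from scipy.stats import norm

for n_sigma in (3.5, 5.0):
    p = norm.sf(n_sigma)  # one-sided upper-tail probability
    print(f"{n_sigma} sigma -> p ~ {p:.1e}")
# 3.5 sigma corresponds to p ~ 2.3e-04 (strong evidence), while the 5 sigma
# "discovery" threshold corresponds to p ~ 2.9e-07.
```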
Hypernuclei are rare, short-lived atomic nuclei made up of protons, neutrons, and at least one hyperon. Hypernuclei and their antimatter counterparts can be formed within a quark–gluon plasma (QGP), which is created when heavy ions such as lead collide at high energies. A QGP is an extreme state of matter that also existed in the first millionth of a second following the Big Bang.
Exotic antinuclei
Just a few hundred picoseconds after being formed in collisions, antihypernuclei decay via the weak force – creating two or more distinctive decay products that can be detected. The first antihypernucleus to be observed was a form of antihyperhydrogen called the antihypertriton, which contains an antiproton, an antineutron and an antilambda hyperon. It was discovered in 2010 by the STAR Collaboration, which smashed together gold nuclei at Brookhaven National Laboratory’s Relativistic Heavy Ion Collider (RHIC).
Then in 2024 the STAR Collaboration reported the first observations of the decay products of antihyperhydrogen-4, which contains one more antineutron than the antihypertriton.
Now, ALICE physicists have delved deeper into the world of antihypernuclei with a fresh analysis of data taken at the LHC in 2018, when lead ions were collided at 5 TeV.
Using a machine learning technique to analyse the decay products of the nuclei produced in these collisions, the ALICE team identified the same signature of antihyperhydrogen-4 detected by the STAR Collaboration. This is the first time an antimatter hypernucleus has been detected at the LHC.
Rapid decay
But that is not all. The team also found evidence for another, slightly lighter antihypernucleus, called antihyperhelium-4. This contains two antiprotons, an antineutron, and an antihyperon. It decays almost instantly into an antihelium-3 nucleus, an antiproton, and a charged pion. The latter is a meson comprising a quark–antiquark pair.
Physicists describe production of hypernuclei in a QGP using the statistical hadronization model (SHM). For both antihyperhydrogen-4 and antihyperhelium-4, the masses and production yields measured by the ALICE team closely matched the predictions of the SHM – assuming that the particles were produced in a certain mixture of their excited and ground states.
The team’s result further confirms that the SHM can accurately describe the production of hypernuclei and antihypernuclei from a QGP. The researchers also found that equal numbers of hypernuclei and antihypernuclei are produced in the collisions, within experimental uncertainty. While this provides no explanation as to why there is much more matter than antimatter in the observable universe, the research allows physicists to put further constraints on theories that reach beyond the Standard Model of particle physics to try to explain this asymmetry.
The research could also pave the way for further studies into how hyperons within hypernuclei interact with their neighbouring protons and neutrons. With a deeper knowledge of these interactions, astronomers could gain new insights into the mysterious interior properties of neutron stars.
The observation is described in a paper that has been submitted to Physical Review Letters.
How publishing in Electrochemical Society journals fosters a sense of community
The Electrochemical Society (ECS) is an international non-profit scholarly organization that promotes research, education and technological innovation in electrochemistry, solid-state science and related fields.
Founded in 1902, the ECS brings together scientists and engineers to share knowledge and advance electrochemical technologies.
As part of that mission, the society publishes several journals including the flagship Journal of the Electrochemical Society (JES), which is over 120 years old and covers a wide range of topics in electrochemical science and engineering.
Someone who has seen their involvement with the ECS and ECS journals increase over their career is chemist Trisha Andrew from the University of Massachusetts Amherst. She directs the wearable electronics lab, a multi-disciplinary research team that produces garment-integrated technologies using reactive vapor deposition.
Her involvement with the ECS began when she was invited by the editor-in-chief of ECS Sensors Plus to act as a referee for the journal. Andrew found the depth and practical application of the papers she reviewed interesting and of high quality. This resulted in her submitting her own work to ECS journals and she later became an associate editor for both ECS Sensors Plus and JES.
Professional opportunities
Physical chemist Weiran Zheng from the Guangdong Technion–Israel Institute of Technology in China, meanwhile, says that due to the reputation of ECS journals, they have been his “go-to” place to publish since graduate school.
One of his papers, entitled “Python for electrochemistry: a free and all-in-one toolset” (ECS Adv. 2 040502), has been downloaded more than 8000 times and is currently the most-read ECS Advances article. This led to an invitation to deliver an ECS webinar, Introducing Python for Electrochemistry Research. “I never expected such an impact when the paper was accepted, and none of this would be possible without the platform offered by ECS journals,” adds Zheng.
Publishing in ECS journals has helped Zheng’s career advance through new connections and deeper involvement with ECS activities. This has boosted not only his research but also his professional network, and given these benefits, Zheng plans to continue publishing his latest findings in ECS journals.
Highly cited papers
Battery researcher Thierry Brousse from Nantes University in France came to electrochemistry later in his career, having first carried out a PhD on high-temperature superconducting thin films at the University of Caen Normandy.
When he began working in the field he collaborated with the chemist Donald Schleich from Polytech Nantes, who was an ECS member. It was then that he began to read JES, finding it a prestigious platform for his research on supercapacitors and microdevices for energy storage. “Most of the inspiring scientific papers I was reading at that time were from JES,” notes Brousse. “Naturally, my first papers were then submitted to this journal.”
Brousse says that publishing in ECS journals has provided him with new collaborations as well as invitations to speak at major conferences. He emphasizes the importance of innovative work and the positive impact of publishing in ECS journals where some of his most cited work has been published.
Brousse, who is an associate editor for JES, adds that he particularly values how publishing with ECS journals fosters a quick integration into specific research communities. This, he says, has been instrumental in advancing his career.
Long-standing relationships
Robert Savinell’s relationship with the ECS and ECS journals began during his PhD research in electrochemistry, which he carried out at the University of Pittsburgh. Now at Case Western Reserve University in Cleveland, Ohio, his research focuses on developing a flow battery for low-cost, long-duration energy storage based primarily on iron and water. It is designed to improve the efficiency of the power grid and accelerate the addition of solar and wind power supplies.
Savinell also leads a Department of Energy-funded Energy Frontier Research Center on Breakthrough Electrolytes for Energy Storage, which focuses on fundamental research into nano- to meso-scale structured electrolytes for energy storage.
ECS journals have been a cornerstone of his professional career, providing a platform for his research and fostering valuable professional connections. “Some of my research published in JES many years ago are still cited today,” says Savinell.
Savinell’s contributions to the ECS community have been recognized through various roles: he has been elected a fellow of the ECS and has previously served as chair of the ECS’s electrolytic and electrochemical engineering division. He served as editor-in-chief of JES for the past decade and was most recently elected third vice president of the ECS.
Savinell says that the connections he has made through ECS have been significant, ranging from funding programme managers to personal friends. “My whole professional career has been focused around ECS,” he says, adding that he aims to continue to publish in ECS journals and hopes that his work will inspire solutions to some of society’s biggest problems.
Personal touch
For many researchers in the field, publishing in ECS journals has brought several benefits. These include the high level of engagement and the personal touch within the ECS community, as well as the promotional support that ECS provides for published work.
The ECS journals’ broad portfolio also ensures that researchers’ work reaches the right audience, and such visibility and engagement are significant factors when it comes to advancing the careers of scientists. “The difference between ECS journals is the amount of engagement, views and reception that you receive,” says Andrew. “That’s what I found to be the most unique.”
Higher-order brain function revealed by new analysis of fMRI data
An international team of researchers has developed new analytical techniques that consider interactions between three or more regions of the brain – providing a more in-depth understanding of human brain activity than conventional analysis. Led by Andrea Santoro at the Neuro-X Institute in Geneva and Enrico Amico at the UK’s University of Birmingham, the team hopes its results could help neurologists identify a vast array of new patterns in human brain data.
To study the structure and function of the brain, researchers often rely on network models. In these, nodes represent specific groups of neurons in the brain, and edges represent the connections between them, which are inferred using statistical correlations in their activity.
Within these models, brain activity has often been represented as pairwise interactions between two specific regions. Yet as the latest advances in neurology have clearly shown, the real picture is far more complex.
“To better analyse how our brains work, we need to look at how several areas interact at the same time,” Santoro explains. “Just as multiple weather factors – like temperature, humidity, and atmospheric pressure – combine to create complex patterns, looking at how groups of brain regions work together can reveal a richer picture of brain function.”
Higher-order interactions
Yet with the mathematical techniques applied in previous studies, researchers have not confirmed whether network models incorporating these higher-order interactions between three or more brain regions could really be more accurate than simpler models, which only account for pairwise interactions.
To shed new light on this question, Santoro’s team built upon their previous analysis of functional MRI (fMRI) data, which identify brain activity by measuring changes in blood flow.
Their approach combined two powerful tools. One is topological data analysis. This identifies patterns within complex datasets like fMRI, where each data point depends on a large number of interconnected variables. The other is time series analysis, which is used to identify patterns in brain activity which emerge over time. Together, these tools allowed the researchers to identify complex patterns of activity occurring across three or more brain regions simultaneously.
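To give a flavour of what “beyond pairwise” means in practice, the toy example below compares ordinary pairwise correlations with a simple third-order co-moment computed across three synthetic signals. It is emphatically not the topological data analysis pipeline used in the study – just a minimal, self-contained illustration of a statistic that involves three signals at once.

```python
# Toy comparison of pairwise correlations with a simple three-way statistic.
# Synthetic data only; this is not the study's topological analysis pipeline.
import numpy as np

rng = np.random.default_rng(0)
T = 5000
bursts = (rng.random(T) < 0.05).astype(float)          # shared "co-activation" events
signals = bursts + 0.5 * rng.standard_normal((3, T))   # three synthetic regions

# z-score each signal, then compute standard pairwise correlations
zs = (signals - signals.mean(axis=1, keepdims=True)) / signals.std(axis=1, keepdims=True)
print(np.round(np.corrcoef(zs), 2))

# A third-order co-moment: non-zero only when all three regions tend to
# fluctuate together -- information that no single pairwise value contains.
triple = np.mean(zs[0] * zs[1] * zs[2])
print(f"three-way interaction term: {triple:.2f}")
```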
To test their approach, the team applied it to fMRI data taken from 100 healthy participants in the Human Connectome Project. “By applying these tools to brain scan data, we were able to detect when multiple regions of the brain were interacting at the same time, rather than only looking at pairs of brain regions,” Santoro explains. “This approach let us uncover patterns that might otherwise stay hidden, giving us a clearer view of how the brain’s complex network operates as a whole.”
Just as they hoped, this analysis of higher-order interactions provided far deeper insights into the participants’ brain activity compared with traditional pairwise methods. “Specifically, we were better able to figure out what type of task a person was performing, and even uniquely identify them based on the patterns of their brain activity,” Santoro continues.
Distinguishing between tasks
With its combination of topological and time series analysis, the team’s method could distinguish between a wide variety of tasks in the participants: including their expression of emotion, use of language, and social interactions.
By building further on their approach, Santoro and colleagues are hopeful it could eventually be used to uncover a vast space of as-yet unexplored patterns within human brain data.
By tailoring the approach to the brains of individual patients, this could ultimately enable researchers to draw direct links between brain activity and physical actions.
“Down the road, the same approach might help us detect subtle brain changes that occur in conditions like Alzheimer’s disease – possibly before symptoms become obvious – and could guide better therapies and earlier interventions,” Santoro predicts.
The research is described in Nature Communications.
Humanitarian engineering can improve cancer treatment in low- and middle-income countries
This episode of the Physics World Weekly podcast explores how the concept of humanitarian engineering can be used to provide high quality cancer care to people in low- and middle-income countries (LMICs). This is an important challenge because today only 5% of global radiotherapy resources are located in LMICs, which are home to the majority of the world’s population.
Our guests are two medical physicists at the University of Washington in the US who have contributed to the ebook Humanitarian Engineering for Global Oncology. They are Eric Ford, who edited the ebook, and Afua Yorke, who along with Ford wrote the chapter “Cost-effective radiation treatment delivery systems for low- and middle-income countries”.
They are in conversation with Physics World’s Tami Freeman.
Sun-like stars produce ‘superflares’ about once a century
Stars like our own Sun produce “superflares” around once every 100 years, surprising astronomers who had previously estimated that such events occurred only every 3000 to 6000 years. The result, from a team of astronomers in Europe, the US and Japan, could be important not only for fundamental stellar physics but also for forecasting space weather.
The Sun regularly produces solar flares, which are energetic outbursts of electromagnetic radiation. Sometimes, these flares are accompanied by plasma in events known as coronal mass ejections. Both activities can trigger powerful solar storms when they interact with the Earth’s upper atmosphere, posing a danger to spacecraft and satellites as well as electrical grids and radio communications on the ground.
Despite their power, though, these events are much weaker than the “superflares” recently observed by NASA’s Kepler and TESS missions at other Sun-like stars in our galaxy. The most intense superflares release energies of about 10²⁵ J, which show up as short, sharp peaks in the stars’ visible light spectrum.
Observations from the Kepler space telescope
In the new study, which is detailed in Science, astronomers sought to find out whether our Sun is also capable of producing superflares, and if so, how often they happen. This question can be approached in two different ways, explains study first author Valeriy Vasilyev, a postdoctoral researcher at the Max Planck Institute for Solar System Research, Germany. “One option is to observe the Sun directly and record events, but it would take a very long time to gather enough data,” Vasilyev says. “The other approach is to study a large number of stars with characteristics similar to those of the Sun and extrapolate their flare activity to our Sun.”
The researchers chose the second option. Using a new method they developed, they analysed Kepler space telescope data on the brightness fluctuations of more than 56,000 Sun-like stars recorded between 2009 and 2013. This dataset, which is much larger and more representative than previous ones because it is based on recent advances in our understanding of Sun-like stars, corresponds to around 220,000 years of solar observations.
The new technique can detect superflares and precisely localize them on the telescope images with sub-pixel resolution, Vasilyev says. It also accounts for how light propagates through the telescope’s optics as well as instrumental effects that could “contaminate” the data.
The team, which also includes researchers from the University of Graz, Austria; the University of Oulu, Finland; the National Astronomical Observatory of Japan; the University of Colorado Boulder in the US; and the Commissariat of Atomic and Alternative Energies of Paris-Saclay and the University of Paris-Cité, both in France, carefully analysed the detected flares. They checked for potential sources of error, such as those originating from unresolved binary stars, flaring M- and K-dwarf stars and fast-rotating active stars that might have been wrongly classified. Thanks to these robust statistical evaluations, they identified almost 3000 bright stellar flares in the population they observed – a detection rate that implies that superflares occur roughly once per century, per star.
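The headline rate follows from simple arithmetic on the round numbers quoted above, as the short check below shows (the figures in the paper differ slightly; this is only an order-of-magnitude sanity check).

```python
# Back-of-the-envelope check of the superflare rate using the article's round
# numbers; the published analysis is more careful than this.
n_flares = 3000        # approximate number of superflares detected
star_years = 220_000   # approximate cumulative observation time, in star-years

rate = n_flares / star_years            # flares per star per year
print(f"about one superflare every {1 / rate:.0f} years per star")
# -> about one superflare every 73 years per star, i.e. roughly once a century
```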
Sun should also be capable of producing superflares
According to Vasilyev, the team’s results also suggest that solar flares and stellar superflares are generated by the same physical mechanisms. This is important because reconstructions of past solar activity, which are based on the concentrations of cosmogenic isotopes in terrestrial archives such as tree rings, tell us that our Sun occasionally experiences periods of higher or lower solar activity lasting several decades.
One example is the Maunder Minimum, a decades-long period during the 17th century when very few sunspots were recorded. At the other extreme, solar activity was comparatively higher during the Modern Maximum that occurred around the mid-20th century. Based on the team’s analysis, Vasilyev says that “so-called grand minima and grand maxima are not regular but tend to cluster in time. This means that centuries could pass by without extreme solar flares followed by several such events occurring over just a few years or decades.”
It is possible, he adds, that a superflare occurred in the past century but went unnoticed. “While we have no evidence of such an event, excluding it with certainty would require continuous and systematic monitoring of the Sun,” he tells Physics World. The most intense solar flare in recorded history, the so-called “Carrington event” of September 1859, was documented essentially by chance: “By the time he [the English astronomer Richard Carrington] called someone to show them the bright glow he observed (which lasted only a few minutes), the brightness had already faded.”
Between 1996 and 2002, when instruments provided direct measurements of total solar brightness with sufficient accuracy and temporal resolution, 12 flares with Carrington-like energies were detected. Had these flares been aimed at Earth, it is possible that they would have had similar effects, he says.
The researchers now plan to investigate the conditions required to produce superflares. “We will be extending our research by analysing data from next-generation telescopes, such as the European mission PLATO, which I am actively involved in developing,” Vasilyev says. “PLATO’s launch is due for the end of 2026 and will provide valuable information with which we can refine our understanding of stellar activity and even the impact of superflares on exoplanets.”
Solid-state nuclear clocks brought closer by physical vapour deposition
Physicists in the US have taken an important step towards a practical nuclear clock by showing that the physical vapour deposition (PVD) of thorium-229 could reduce the amount of this expensive and radioactive isotope needed to make a timekeeper. The research could usher in an era of robust and extremely accurate solid-state clocks that could be used in a wide range of commercial and scientific applications.
Today, the world’s most precise atomic clocks are the strontium optical lattice clocks created by Jun Ye’s group at JILA in Boulder, Colorado. These are accurate to within a second in the age of the universe. However, because these clocks use an atomic transition between electron energy levels, they can easily be disrupted by external electromagnetic fields. This means that the clocks must be operated in isolation in a stable lab environment. While other types of atomic clock are much more robust – some are deployed on satellites – they are nowhere near as accurate as optical lattice clocks.
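To get a rough sense of scale (plain arithmetic, not a quoted specification of the JILA clock), “a second in the age of the universe” corresponds to a fractional accuracy of a few parts in 10¹⁸:

```python
# "One second in the age of the universe" expressed as a fractional accuracy.
# Plain arithmetic for scale; not a quoted specification of any clock.
age_of_universe_s = 13.8e9 * 3.156e7   # ~13.8 billion years, in seconds
print(f"fractional accuracy ~ {1.0 / age_of_universe_s:.1e}")   # ~2.3e-18
```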
Some physicists believe that transitions between energy levels in atomic nuclei could offer a way to make robust, portable clocks that deliver very high accuracy. As well as being very small and governed by the strong force, nuclei are shielded from external electromagnetic fields by their own electrons. And unlike optical atomic clocks, which use a very small number of delicately-trapped atoms or ions, many more nuclei can be embedded in a crystal without significantly affecting the clock transition. Such a crystal could be integrated on-chip to create highly robust and highly accurate solid-state timekeepers.
Sensitive to new physics
Nuclear clocks would also be much more sensitive to new physics beyond the Standard Model – allowing physicists to explore hypothetical concepts such as dark matter. “The nuclear energy scale is millions of electron volts; the atomic energy scale is electron volts; so the effects of new physics are also much stronger,” explains Victor Flambaum of Australia’s University of New South Wales.
Normally, a nuclear clock would require a laser that produces coherent gamma rays – something that does not exist. By exquisite good fortune, however, there is a single transition between the ground and excited states of one nucleus in which the potential energy changes due to the strong nuclear force and the electromagnetic interaction almost exactly cancel, leaving an energy difference of just 8.4 eV. This corresponds to vacuum ultraviolet light, which can be created by a laser.
That nucleus is thorium-229, but as Ye’s postgraduate student Chuankun Zhang explains, it is very expensive. “We bought about 700 µg for $85,000, and as I understand it the price has been going up”.
In September, Zhang and colleagues at JILA measured the frequency of the thorium-229 transition with unprecedented precision using their strontium-87 clock as a reference. They used thorium-doped calcium fluoride crystals. “Doping thorium into a different crystal creates a kind of defect in the crystal,” says Zhang. “The defects’ orientations are sort of random, which may introduce unwanted quenching or limit our ability to pick out specific atoms using, say, polarization of the light.”
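A quick unit conversion (generic physics, not taken from the paper) confirms that the 8.4 eV transition mentioned above sits in the vacuum ultraviolet, at a wavelength of roughly 148 nm:

```python
# Convert the ~8.4 eV thorium-229 transition energy to a wavelength:
# lambda = h*c / E. Generic unit conversion for context.
h = 6.626e-34    # Planck constant, J s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electronvolt

E = 8.4 * eV
print(f"wavelength ~ {h * c / E * 1e9:.0f} nm")   # ~148 nm, in the vacuum UV
```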
Layers of thorium fluoride
In the new work, the researchers collaborated with colleagues in Eric Hudson’s group at the University of California, Los Angeles and others to form layers of thorium fluoride between 30 nm and 100 nm thick on crystalline substrates such as magnesium fluoride. They used PVD, a well-established technique in which a material is evaporated from a hot crucible before condensing onto a substrate. The resulting samples contained three orders of magnitude less thorium-229 than the crystals used in the September experiment, but had a comparable number of thorium atoms per unit area.
The JILA team sent the samples to Hudson’s lab for interrogation by a custom-built vacuum ultraviolet laser. Researchers led by Hudson’s student Richard Elwell observed clear signatures of the nuclear transition and found the lifetime of the excited state to be about four times shorter than observed in the crystal. While the discrepancy is not understood, the researchers say this might not be problematic in a clock.
More significant challenges lie in the surprisingly small fraction of thorium nuclei participating in the clock operation – with the measured signal about 1% of the expected value, according to Zhang. “There could be many reasons. One possibility is because the vapour deposition process isn’t controlled super well such that we have a lot of defect states that quench away the excited states.” Beyond this, he says, designing a mobile clock will entail miniaturizing the laser.
Flambaum, who was not involved in the research, says that it marks “a very significant technical advance” in the quest to build a solid-state nuclear clock – something that he believes could be useful for sensing everything from oil to variations in the fine-structure constant. “As a standard of frequency a solid-state clock is not very good because it’s affected by the environment,” he says. “As soon as we know the frequency very accurately we will do it with [trapped] ions, but that has not been done yet.”
The research is described in Nature.
Medical physics and biotechnology: highlights of 2024
From tumour-killing quantum dots to proton therapy firsts, this year has seen the traditional plethora of exciting advances in physics-based therapeutic and diagnostic imaging techniques, plus all manner of innovative bio-devices and biotechnologies for improving healthcare. Indeed, the Physics World Top 10 Breakthroughs for 2024 included a computational model designed to improve radiotherapy outcomes for patients with lung cancer by modelling the interaction of radiation with lung cells, as well as a method to make the skin of live mice temporarily transparent to enable optical imaging studies. Here are just a few more of the research highlights that caught our eye.
Marvellous MRI machines
This year we reported on some important developments in the field of magnetic resonance imaging (MRI) technology, not least of which was the introduction of a 0.05 T whole-body MRI scanner that can produce diagnostic quality images. The ultralow-field scanner, invented at the University of Hong Kong’s BISP Lab, operates from a standard wall power outlet and does not require shielding cages. The simplified design makes it easier to operate and significantly lower in cost than current clinical MRI systems. As such, the BISP Lab researchers hope that their scanner could help close the global gap in MRI availability.
Moving from ultralow- to ultrahigh-field instrumentation, a team headed up by David Feinberg at UC Berkeley created an ultrahigh-resolution 7 T MRI scanner for imaging the human brain. The system can generate functional brain images with 10 times better spatial resolution than current 7 T scanners, revealing features as small as 0.35 mm, as well as offering higher spatial resolution in diffusion, physiological and structural MR imaging. The researchers plan to use their new NexGen 7 T scanner to study underlying changes in brain circuitry in degenerative diseases, schizophrenia and disorders such as autism.
Meanwhile, researchers at Massachusetts Institute of Technology and Harvard University developed a portable magnetic resonance-based sensor for imaging at the bedside. The low-field single-sided MR sensor is designed for point-of-care evaluation of skeletal muscle tissue, removing the need to transport patients to a centralized MRI facility. The portable sensor, which weighs just 11 kg, uses a permanent magnet array and surface RF coil to provide low operational power and minimal shielding requirements.
Proton therapy progress
Alongside advances in diagnostic imaging, 2024 also saw a couple of firsts in the field of proton therapy. At the start of the year, OncoRay – the National Center for Radiation Research in Oncology in Dresden – launched the world’s first whole-body MRI-guided proton therapy system. The prototype device combines a horizontal proton beamline with a whole-body MRI scanner that rotates around the patient, a geometry that enables treatments both with patients lying down or in an upright position. Ultimately, the system could enable real-time MRI monitoring of patients during cancer treatments and significantly improve the targeting accuracy of proton therapy.
Also aiming to enhance proton therapy outcomes, a team at the PSI Center for Proton Therapy performed the first clinical implementation of an online daily adaptive proton therapy (DAPT) workflow. Online plan adaptation, where the patient remains on the couch throughout the replanning process, could help address uncertainties arising from anatomical changes during treatments. In five adults with tumours in rigid body regions treated using DAPT, the daily adapted plans provided target coverage to within 1.1% of the planned dose and, in over 90% of treatments, improved dose metrics to the targets and/or organs-at-risk. Importantly, the adaptive approach took just a few minutes longer than a non-adaptive treatment, remaining within the 30-min time slot allocated for a proton therapy session.
Bots and dots
Last but certainly not least, this year saw several research teams demonstrate the use of tiny devices for cancer treatment. In a study conducted at the Institute for Bioengineering of Catalonia, for instance, researchers used self-propelling nanoparticles containing radioactive iodine to shrink bladder tumours.
Upon injection into the body, these “nanobots” search for and accumulate inside cancerous tissue, delivering radionuclide therapy directly to the target. Mice receiving a single dose of the nanobots experienced a 90% reduction in the size of bladder tumours compared with untreated animals.
At the Chinese Academy of Sciences’ Hefei Institutes of Physical Science, a team pioneered the use of metal-free graphene quantum dots for chemodynamic therapy. Studies in cancer cells and tumour-bearing mice showed that the quantum dots caused cell death and inhibition of tumour growth, respectively, with no off-target toxicity in the animals.
Finally, scientists at Huazhong University of Science and Technology developed novel magnetic coiling “microfibrebots” and used them to stem arterial bleeding in a rabbit – paving the way for a range of controllable and less invasive treatments for aneurysms and brain tumours.
Laser beam casts a shadow in a ruby crystal
Particles of light – photons – are massless, so they normally pass right through each other. This generally means they can’t cast a shadow. In a new work, however, physicist Jeff Lundeen of the University of Ottawa, Canada and colleagues found that this counterintuitive behaviour can, in fact, happen when a laser beam is illuminated by another light source as it passes through a highly nonlinear medium. As well as being important for basic science, the work could have applications in laser fabrication and imaging.
The light-shadow experiment began when physicists led by Raphael Akel Abrahao sent a high-power beam of green laser light through a cube-shaped ruby crystal. They then illuminated this beam from the side with blue light and observed that the beam cast a shadow on a piece of white paper. This shadow extended through an entire face of the crystal. Writing in Optica, they note that “under ordinary circumstances, photons do not interact with each other, much less block each other as needed for a shadow.” What was going on?
Photon-photon interactions
The answer, they explain, boils down to some unusual photon-photon interactions that take place in media that absorb light in a highly nonlinear way. While several materials fit this basic description, most become saturated at high laser intensities. This means they become more transparent in the presence of a strong laser field, producing an “anti-shadow” that is even brighter than the background – the opposite of what the team was looking for.
What they needed, instead, was a material that absorbs more light at higher optical intensities. Such behaviour is known as “reverse saturation of absorption” or “saturable transmission”, and it only occurs if four conditions are met. Firstly, the light-absorbing system needs to have two electronic energy levels: a ground state and an excited state. Secondly, the transition from the ground to the excited state must be less strong (technically, it must have a smaller cross-section) than the transition from the first excited state to a higher excited state. Thirdly, after the material absorbs light, neither the first nor the second excited states should decay back to other levels when the light is re-emitted. Finally, the incident light should only saturate the first transition.
That might sound like a tall order, but it turns out that ruby fits the bill. Ruby is an aluminium oxide crystal that contains impurities of chromium atoms. These impurities distort its crystal lattice and give it its familiar red colour. When green laser light (532 nm) is applied to ruby, it drives an electronic transition from the ground state (denoted ⁴A₂) to an excited state ⁴T₂. This excited state then decays rapidly via phonons (vibrations of the crystal lattice) to the ²E state.
At this point, the electrons absorb blue light (450 nm) and transition from ²E to a different excited state, denoted ²T₁. While electrons in the ⁴A₂ state could, in principle, absorb blue light directly, without any intermediate step, the absorption cross-section of the transition from ²E to ²T₁ is larger, Abrahao explains.
The result is that in the presence of the green laser beam, the ruby absorbs more of the illuminating blue light. This leaves behind a lower-optical-intensity region of blue illumination within the ruby – in other words, the green laser beam’s shadow.
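The overall logic can be captured in a deliberately simple toy model, shown below with entirely made-up parameters, in which the green pump sets the population of the intermediate ²E state and the blue transmission falls as that population grows. It is a sketch of the general mechanism of reverse saturation of absorption, not a quantitative model of the experiment.

```python
# Toy model of reverse saturation of absorption: the stronger the green pump,
# the larger the intermediate-state population, whose cross-section for the
# blue probe exceeds that of the ground state, so blue transmission drops
# where the green beam passes. All numbers are illustrative, not measured.
import numpy as np

sigma_ground = 1.0    # relative blue cross-section of the ground state
sigma_excited = 4.0   # relative blue cross-section of the intermediate state
I_sat = 1.0           # green saturation intensity (arbitrary units)
optical_depth = 0.5   # overall absorption scaling

for I_green in (0.0, 0.5, 2.0, 10.0):
    f = I_green / (I_green + I_sat)                     # fraction pumped
    alpha = (1 - f) * sigma_ground + f * sigma_excited  # effective absorption
    T_blue = np.exp(-optical_depth * alpha)             # blue transmission
    print(f"green intensity {I_green:4.1f} -> blue transmission {T_blue:.2f}")
# Transmission falls as the green intensity rises: the beam casts a shadow.
```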
Shadow behaves like an ordinary shadow
This laser shadow behaves like an ordinary shadow in many respects. It follows the shape of the object (the green laser beam) and conforms to the contours of the surfaces it falls on. The team also developed a theoretical model that predicts that the darkness of the shadow will increase as a function of the power of the green laser beam. In their experiment, the maximum contrast was 22% – a figure that Abrahao says is similar to a typical shadow on a sunny day. He adds that it could be increased in the future.
Lundeen offers another way of looking at the team’s experiment. “Fundamentally, a light wave is actually composed of a hybrid particle made up of light and matter, called a polariton,” he explains. “When light travels in a glass or crystal, both aspects of the polariton are important and, for example, explain why the wave travels more slowly in these media than in vacuum. In the absence of either part of the polariton, either the photon or atom, there would be no shadow.”
Strictly speaking, it is therefore not massless light that is creating the shadow, but the material component of the polariton, which has mass, adds Abrahao, who is now a postdoctoral researcher at Brookhaven National Laboratory in the US.
As well as helping us to better understand light-matter interactions, Abrahao tells Physics World that the experiment “could also come in useful in any device in which we need to control the transmission of a laser beam with another laser beam”. The team now plans to search for other materials and combinations of wavelengths that might produce a similar “laser shadow” effect.
Elevating brachytherapy QA with RadCalc
Join us for an engaging webinar exploring how RadCalc supports advanced brachytherapy quality assurance, enabling accurate and efficient dose calculations. Brachytherapy plays a critical role in cancer treatment, with modalities like HDR, LDR and permanent seed implants requiring precise dose verification to ensure optimal patient outcomes.
The increasing complexity of modern brachytherapy plans has heightened the demand for streamlined QA processes. Traditional methods, while effective, often involve time-consuming experimental workflows. With RadCalc’s 3D dose calculation system based on the TG-43 protocol, users can achieve fast and reliable QA, supported by seamless integration with treatment planning systems and automation through RadCalcAIR.
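For readers unfamiliar with TG-43, the sketch below shows the general shape of a point-source TG-43 dose-rate calculation. The source strength, dose-rate constant and fitted functions are placeholder example values, not consensus data for any real source, and the code is not RadCalc’s implementation.

```python
# Minimal sketch of a point-source TG-43 dose-rate calculation:
#   D_dot(r) = S_K * Lambda * (r0 / r)**2 * g(r) * phi_an(r)
# All numerical values and fitted functions are placeholders for illustration;
# this is not RadCalc's implementation or published consensus source data.
import numpy as np

S_K = 40_000.0   # air-kerma strength in U (1 U = 1 cGy cm^2 / h) -- example value
LAMBDA = 1.11    # dose-rate constant in cGy / (h U) -- example value
R0 = 1.0         # TG-43 reference distance, cm

def g_radial(r):
    """Placeholder radial dose function, normalised to 1 at r = 1 cm."""
    return np.exp(-0.12 * (r - R0))

def phi_an(r):
    """Placeholder 1D anisotropy factor, taken as constant here."""
    return 0.98

def dose_rate(r):
    """Point-source TG-43 dose rate at distance r (cm), in cGy/h."""
    return S_K * LAMBDA * (R0 / r) ** 2 * g_radial(r) * phi_an(r)

for r in (0.5, 1.0, 2.0, 5.0):
    print(f"r = {r:3.1f} cm -> dose rate ~ {dose_rate(r):9.1f} cGy/h")
```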
The webinar will showcase the implementation of independent RadCalc QA.
Don’t miss the opportunity to listen to two RadCalc clinical users!
A Q&A session follows the presentation.
Michal Poltorak, MSc, is the head of the department of Medical Physics at the National Institute of Medicine, Ministry of the Interior and Administration, in Warsaw, Poland. With expertise in medical physics, he oversees research and clinical applications in radiation therapy and patient safety. His professional focus lies in integrating innovative technologies.
Oskar Sobotka, MSc.Eng, is a medical physicist at the Radiotherapy Center in Gorzów Wielkopolski, specializing in treatment planning and dosimetry. With a Master’s degree from Adam Mickiewicz University and experience in nuclear medicine and radiotherapy, he ensures precision and safety in patient care.
Lucy Wolfsberger, MS, LAP, is an application specialist for RadCalc at LifeLine Software Inc., a part of the LAP Group. She is dedicated to enhancing safety and accuracy in radiotherapy by supporting clinicians with a patient-centric, independent quality assurance platform. Lucy combines her expertise in medical physics and clinical workflows to help healthcare providers achieve efficient, reliable, and comprehensive QA.
Carlos Bohorquez, MS, DABR, is the product manager for RadCalc at LifeLine Software Inc., a part of the LAP Group. An experienced board-certified clinical physicist with a proven history of working in the clinic and medical device industry, Carlos’ passion for clinical quality assurance is demonstrated in the research and development of RadCalc into the future.
Automated checks build confidence in treatment verification
Busy radiation therapy clinics need smart solutions that streamline processes while also enhancing the quality of patient care. That’s the premise behind ChartCheck, a tool developed by Radformation to facilitate the weekly checks that medical physicists perform for each patient who is undergoing a course of radiotherapy. By introducing automation into what is often a manual and repetitive process, ChartCheck can save time and effort while also enabling medical physicists to identify and investigate potential risks as the treatment progresses.
“To ensure that a patient is receiving the proper treatment a qualified medical physicist must check a patient’s chart after every five fractions of radiation has been delivered,” explains Ryan Manger, lead medical physicist at the Encinitas Treatment Center, one of four clinics operated by UC San Diego in the US. “The current best practice is to check 36 separate items for each patient, which can take a lot of time when each physicist needs to verify 30 or 40 charts every week.”
Before introducing ChartCheck into the workflow at UC San Diego, Manger says that around 70% of the checks had to be done manually. “The weekly checks are really important for patient safety, but they become a big time sink when each task takes five or ten minutes,” he says. “It’s easy to get fatigued when you’re looking at the same things over and over again, and we have found that introducing automation into the process can have a positive impact on everything else we do in the clinic.”
ChartCheck monitors the progress of ongoing treatments by automatically performing a comprehensive suite of clinical checks, raising an alert if any issue is detected. For example, after each treatment the tool verifies that the delivered dose matches the parameters defined in the clinical plan, and it monitors real-time changes such as any movement of the couch during treatment. It also collates all the necessary safety documentation, allows comments or notes to be added, and highlights any scheduling changes – when a patient decides to take a treatment break, for instance, or the physician adds a boost to the clinical plan.
As well as consolidating all the information on a single platform, ChartCheck allows physicists to analyse the treatment data to identify and understand any underlying issues that might affect patient safety. “It has given us a lot more vision of what’s happening across all our treatments, which is typically around 300 per week,” says Manger. “Within just three months it has illuminated areas that we were unaware of before, but that might have carried some risk.”
What’s more, the physicists at UC San Diego have found that automating many of the routine tasks has enabled them to focus their attention where it is needed most. “We have implemented the tool as a first-pass filter to flag any charts that might need further attention, which is typically around 10–15% of the total,” says Manger. “We can then use our expertise to investigate those charts in more detail and to understand what the risk factors might be. The result is that we do a better check where it’s needed, rather than just looking at the same things over and over.”
Jennifer Scharff, lead physicist at the John Stoddard Cancer Center in Des Moines, Iowa, also values the extra insights that ChartCheck offers. One major advantage, she says, is how easy it is to check whether the couch might have moved between treatment fields. “It’s not ideal when the couch moves, but sometimes it happens if a patient coughs or sneezes during the treatment and the therapist needs to adjust the position slightly when they get back into their breath hold,” she says. “In ChartCheck it’s really easy to see those positional shifts on a daily basis, and to identify any trends or issues that we might need to address.”
ChartCheck offers full integration with ARIA, the oncology information system from Varian, making it easy to implement and operate within existing clinical workflows. Although ARIA already offers a tool for treatment verification, Scharff says that ChartCheck offers a more comprehensive and efficient solution. “It checks more than ARIA does, and it’s much faster and more efficient to do a weekly physics check,” she says. “As an example, it’s really easy to see the journal notes that our therapists make when something isn’t quite right, and it helps us to identify patients who need a final chart check when they want to pause or stop their treatment.”
The automated tool also guarantees consistency between the chart checks undertaken by different physicists, with Scharff finding the standardized approach particularly useful when locums are brought into the team. “It’s easy for them to see all the information we can see, we can be sure that they are making the same checks as we do, and the same documents are always sent for approval,” she says. “The system makes it really easy to catch things, and it calls out the same thing for everyone.”
With the medical physicists at UC San Diego working across four different treatment centres, Manger has also been impressed by the ability of ChartCheck to improve consistency between physicists working in different locations. “The human factor always introduces some variations, even between physicists who are fully trained,” he says. “Minimizing the impact of those variations has been a huge benefit that I hadn’t considered when we first decided to introduce the software, but it has allowed us to ensure that all the correct policies and procedures are being followed across all of our treatment centres.”
Overall, the experience of physicists like Manger and Scharff is that ChartCheck can streamline processes while also providing them with the reassurance that their patients are always being treated correctly and safely. “It has had a huge positive impact for us,” says Scharff. “It saves a lot of time and gives us more confidence that everything is being done as it should be.”
- To find out more, visit the Radformation website to watch a recent ChartCheck webinar.
The post Automated checks build confidence in treatment verification appeared first on Physics World.
Patient-specific quality assurance (PSQA) based on independent 3D dose calculation
In this webinar, we will discuss patient-specific quality assurance (PSQA), an essential component of the radiation treatment process. This check ensures that the planned dose will be delivered to the patient as intended. The growing number of patients with indications for modulated treatments requiring PSQA has significantly increased the workload of medical physics departments, creating a need for more efficient ways to perform it.
Measurement systems have evolved considerably in recent years, but the experimental process they involve limits the achievable time savings. Independent 3D dose calculation systems offer a solution to this problem, reducing the time needed to start treatments.
As stated in international recommendations (TG-219), the use of 3D dose calculation systems requires a commissioning process and adjustment of the dose calculation parameters.
This presentation will show the implementation of PSQA based on independent 3D dose calculation for VMAT treatments in breast cancer using DICOM information from the plan and LOG files. Comparative results with measurement-based PSQA systems will also be presented.
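Comparisons of this kind, whether against measurements or an independent recalculation, are typically summarized as a gamma pass rate. The webinar does not specify its comparison metric, so the snippet below is only a minimal brute-force sketch of a global 2D gamma analysis: the 3%/2 mm criteria, the grid spacing and the 10% dose cutoff are illustrative assumptions, not recommended values.

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, spacing_mm=2.5,
                    dd_percent=3.0, dta_mm=2.0, cutoff_percent=10.0):
    """Brute-force global 2D gamma analysis between two dose planes of the
    same shape and pixel spacing. Returns the fraction of evaluated points
    (above the low-dose cutoff) with gamma <= 1."""
    d_max = dose_ref.max()
    dd = dd_percent / 100.0 * d_max              # global dose-difference criterion
    ny, nx = dose_eval.shape
    yy, xx = np.mgrid[0:ny, 0:nx]                # reference pixel index grids
    gammas = []
    for iy in range(ny):
        for ix in range(nx):
            if dose_eval[iy, ix] < cutoff_percent / 100.0 * d_max:
                continue                         # skip the low-dose region
            dist2 = ((yy - iy) ** 2 + (xx - ix) ** 2) * spacing_mm ** 2 / dta_mm ** 2
            diff2 = (dose_ref - dose_eval[iy, ix]) ** 2 / dd ** 2
            gammas.append(np.sqrt((dist2 + diff2).min()))
    return float(np.mean(np.asarray(gammas) <= 1.0))

# Two synthetic 40 x 40 dose planes that agree to within about 1%
rng = np.random.default_rng(0)
ref = np.full((40, 40), 2.0)                     # e.g. 2 Gy per fraction
ev = ref * (1 + 0.01 * rng.standard_normal(ref.shape))
print(f"gamma pass rate: {gamma_pass_rate(ref, ev):.1%}")
```

Clinical systems use optimized search strategies and full 3D data, but the pass-rate idea is the same.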
An interactive Q&A session follows the presentation.
Dr Daniel Venencia is the chief of the medical physics department at Instituto Zunino – Fundación Marie Curie in Cordoba, Argentina. He holds a BSc in physics and a PhD from the Universidad Nacional de Córdoba (UNC), and has completed postgraduate studies in radiotherapy and nuclear medicine. With extensive experience in the field, Daniel has directed more than 20 MSc and BSc theses and three doctoral theses. He has delivered more than 400 presentations at national and international congresses. He has published in prestigious journals, including the Journal of Applied Clinical Medical Physics and the International Journal of Radiation Oncology, Biology and Physics. His work continues to make significant contributions to the advancement of medical physics.
Carlos Bohorquez, MS, DABR, is the product manager for RadCalc at LifeLine Software Inc., a part of the LAP Group. An experienced board-certified clinical physicist with a proven history of working in the clinic and the medical device industry, Carlos demonstrates his passion for clinical quality assurance through the ongoing research and development of RadCalc.
The post Patient-specific quality assurance (PSQA) based on independent 3D dose calculation appeared first on Physics World.
Squishy silicone rings shine a spotlight on fluid-solid transition
People working in industry, biology and geology are all keen to understand when particles will switch from flowing like fluids to jamming like solids. With rigid particles, and even for foams and emulsions, scientists know what determines this crunch point: it’s related to the number of contact points between particles. But for squishy particles – those that deform by more than 10% of their size – that’s not necessarily the case.
“You can have a particle that’s completely trapped between only two particles,” explains Samuel Poincloux, who studies the statistical and mechanical response of soft assemblies at Aoyama Gakuin University, Japan.
Factoring that level of deformability into existing theories would be fiendishly difficult. But with real-world scenarios – particularly in mechanobiology – coming to light that hinge on the flow or jamming of highly deformable particles, the lack of explanation was beginning to hurt. Poincloux and his University of Tokyo colleague Kazumasa Takeuchi therefore tried a different approach. Their “easy-to-do experiment” sheds fresh light on how squishy particles respond to external forces, leading to a new model that explains how such particles flow – and at what point they don’t.
Pinning down the differences
To demonstrate how things can change when particles can deform a lot, Takeuchi holds up a case containing hundreds of rigid photoelastic rings. When these rings are under stress, the polarization of light passing through them changes. “This shows how the force is propagating,” he says.
As he presses on the rings with a flat-ended rod, a pattern of radial lines centred at the bottom of the rod lights up. With rigid particles, he explains, chains of forces transmitted by these contact points conspire to fix the particles in place. The fewer the contact points, the fewer the chains of forces keeping them from moving. However, when particles can deform a lot, the contact areas are no longer points. Instead, they extend over a larger region of the ring’s surface. “We can already expect that something will be very different then,” he says.
The main ingredient in Takeuchi and Poincloux’s experimental study of these differences was a layer of deformable silicone rings 10 mm high, 1.5 mm thick and with a radius of 3.3 mm, laid out between two parallel surfaces. The choice of ring material and dimensions was key to ensuring the model reproduced relevant aspects of behaviour while remaining easy to manipulate and observe. To that end, they added an acrylic plate on top to stop the rings popping out under compression. “There’s a lot of elastic energy inside them,” says Poincloux, nodding wryly. “They go everywhere.”
By pressing on one of the parallel surfaces, the researchers compressed the rings (thereby adjusting their density) and added an oscillating shear force. To monitor the rings’ response, they used image analysis to note the position, shape, neighbours and contact lengths for each ring. As they reduced the shear force amplitude or increased the density, they observed a transition to solid-like behaviour in which the rings’ displacement under the shear force became reversible. This transition was also reflected in collective properties such as calculated loss and storage moduli.
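The storage and loss moduli mentioned above can be estimated directly from the measured oscillatory strain and stress. The sketch below, a minimal version assuming the record spans an integer number of drive periods, projects both signals onto the drive frequency; the variable names and synthetic test are illustrative, not taken from the paper.

```python
import numpy as np

def storage_loss_moduli(strain, stress, dt, drive_freq_hz):
    """Estimate the storage (G') and loss (G'') moduli from oscillatory shear
    time series by projecting both signals onto the drive frequency."""
    t = np.arange(len(strain)) * dt
    w = 2.0 * np.pi * drive_freq_hz
    basis = np.exp(-1j * w * t)
    strain_amp = 2.0 * np.mean(strain * basis)   # complex strain amplitude
    stress_amp = 2.0 * np.mean(stress * basis)   # complex stress amplitude
    G = stress_amp / strain_amp                  # complex modulus G* = G' + iG''
    return G.real, G.imag

# Synthetic check: stress leads strain by a known phase angle delta
dt, f, delta = 1e-3, 0.5, 0.3
t = np.arange(0, 10.0, dt)                       # five full periods at 0.5 Hz
strain = 0.02 * np.cos(2 * np.pi * f * t)
stress = 50.0 * np.cos(2 * np.pi * f * t + delta)
print(storage_loss_moduli(strain, stress, dt, f))
# expect roughly (2500*cos(0.3), 2500*sin(0.3)), i.e. about (2388, 739)
```

A storage modulus exceeding the loss modulus is then the usual signature of the solid-like regime.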
Unexpectedly simple
Perhaps counterintuitively, regular patterns – crystallinity – emerged in the arrangement of the rings while the system was in a fluid phase but not in the solid phase. This and other surprising behaviours make the system hard to model analytically. However, Takeuchi emphasises that the theoretical criterion for switching between solid-like and fluid-like behaviour turned out to be quite simple. “This is something we really didn’t expect,” he says.
- The top row in the video depicts the fluid-like behaviour of the rings at low density. The bottom row depicts the solid-like behaviour of the rings at a higher density. (Courtesy: Poincloux and Takeuchi 2024)
The researchers’ experiments showed that for squishy particles, the number of contacts no longer matters much. Instead, it’s the size of the contact that’s important. “If you have very extended contact, then [squishy particles] can basically remain solid via the extension of contact, and that is possible only because of friction,” says Poincloux. “Without friction, they will almost always rearrange and lose their rigidity.”
Jonathan Bares, who studies granular matter at CNRS in the Université de Montpellier, France, but was not involved in this work, describes the model experiment as “remarkably elegant”. This kind of jamming state is, he says, “challenging to analyse both analytically and numerically, as it requires accounting for the intricate properties of the materials that make up the particles.” It is, he adds, “encouraging to see squishy grains gaining increasing attention in the study of granular materials”.
As for the likely impact of the result, biophysicist Christopher Chen, whose work at Boston University in the US focuses on adhesive, mechanical and biochemical contributions in tissue microfabrication, says the study “provides more evidence that the way in which soft particles interact may dominate how biological tissues control transitions in rigidity”. These transitions, he adds, “are important for many shape-changing processes during tissue assembly and formation”.
Full details of the experiment are reported in PNAS.
The post Squishy silicone rings shine a spotlight on fluid-solid transition appeared first on Physics World.
The heart of the matter: how advances in medical physics impact cardiology
Medical physics techniques play a key role in all areas of cardiac medicine – from the use of advanced imaging methods and computational modelling to visualize and understand heart disease, to the development and introduction of novel pacing technologies. At a recent meeting organised by the Institute of Physics’ Medical Physics Group, experts in the field discussed some of the latest developments in cardiac imaging and therapeutics, with a focus on transitioning technologies from the benchtop to the clinic.
Monitoring metabolism
The first speaker, Damian Tyler from the University of Oxford, described how hyperpolarized MRI can provide “a new window on the reactions of life”. He discussed how MRI – most commonly employed to look at the heart’s structure and function – can also be used to characterize cardiac metabolism, with metabolic MR studies helping us understand cardiovascular disease, assess drug mechanisms and guide therapeutic interventions.
In particular, Tyler is studying pyruvate, a compound that plays a central role in the body’s metabolism of glucose. He explained that 13C MR spectroscopy is ideal for studying pyruvate metabolism, but its inherent low signal-to-noise ratio makes it unsuitable for rapid in vivo imaging. To overcome this limitation, Tyler uses hyperpolarized MR, which increases the sensitivity to 13C-enriched tracers by more than 10,000 times and enables real-time visualization of normal and abnormal metabolism.
As an example, Tyler described a study using hyperpolarized 13C MR spectroscopy to examine cardiac metabolism in diabetes, which is associated with an increased risk of heart disease. Tyler and his team examined the downstream metabolites of 13C-pyruvate (such as 13C-bicarbonate and 13C-lactate) in subjects with and without type 2 diabetes. They found reduced bicarbonate levels in diabetes and increased lactate, noting that the bicarbonate to lactate ratio could provide a diagnostic marker.
Among other potential clinical applications, hyperpolarized MR could be used to detect inflammation following a heart attack, elucidate the mechanism of drugs and accelerate new drug discovery, and provide an indication of whether a patient is likely to develop cardiotoxicity from chemotherapy. It can also be employed to guide therapeutic interventions by imaging ischaemia in tissue and assess cardiac perfusion after heart attack.
“Hyperpolarized MRI offers a safe and non-invasive way to assess cardiac metabolism,” Tyler concluded. “There are a raft of potential clinical applications for this emerging technology.”
Changing the pace
Alongside the introduction of new and improved diagnostic approaches, researchers are also developing and refining treatments for cardiac disorders. One goal is to create an effective treatment for heart failure, an incurable progressive condition in which the heart can’t pump enough blood to meet the body’s needs. Current therapies can manage symptoms, but cannot treat the underlying disease or prevent progression. Ashok Chauhan from Ceryx Medical told delegates how the company’s bio-inspired pacemaker aims to address this shortfall.
In healthy hearts, Chauhan explained, the heart rate changes in response to breathing, in a mechanism called respiratory sinus arrhythmia (RSA). This natural synchronization is frequently lost in patients with heart failure. Ceryx has developed a pacing technology that aims to treat heart failure by resynchronizing the heart and lungs and restoring RSA.
The device works by monitoring the cardiorespiratory system and using these inputs to generate RSA-synchronized stimulation signals in real time. Early trials in large animals demonstrated that RSA pacing increased cardiac output and ejection fraction compared with monotonic (constant) pacing. Last month, Ceryx began the first in-human trials of its pacing technology, using an external pacemaker to assess the safety of the device.
Eliminating sex bias
Later in the day, Hannah Smith from the University of Oxford presented a fascinating talk entitled “Women’s hearts are superior and it’s killing them”.
Smith told a disturbing tale of an elderly man with chest pain, who calls an ambulance and undergoes electrocardiography (ECG) that shows he is having a heart attack. He is rushed to hospital to unblock his artery and restore cardiac function. His elderly wife also feels unwell, but her ECG only shows slight abnormality. She is sent for blood tests that eventually reveal she was also having a severe heart attack – but the delay in diagnosis led to permanent cardiac damage.
The fact is that women having heart attacks are more likely to be misdiagnosed and receive less aggressive treatment than men, Smith explained. This is due to variations in the size of the heart and differences in the distances and angles between the heart and the torso surface, which affect the ECG readings used to diagnose heart attack.
To understand the problem in more depth, Smith developed a computational tool that automatically reconstructs torso ventricular anatomy from standard clinical MR images. Her goal was to identify anatomical differences between males and females, and examine their impact on ECG measurements.
Using clinical data from the UK Biobank (around 1000 healthy men and women, and 84 women and 341 men post-heart attack), Smith modelled anatomies and correlated these with the respective ECG data. She found that the QRS complex (the signal for the heart to start contracting) was about 6 ms longer in healthy males than healthy females, attributed to the smaller heart volume in females. This is significant because, with fixed diagnostic thresholds, a woman’s QRS duration must increase by a larger percentage than a man’s before it is classified as abnormally prolonged.
She also studied the ST segment in the ECG trace, elevation of which is a key feature used to diagnose heart attack. The ST amplitude was lower in healthy females than healthy males, due to their smaller ventricles and the more superior position of the heart. The calculations revealed that overweight women would need a 63% larger increase in ST amplitude than normal-weight men for the segment to be classified as elevated.
Smith concluded that heart attacks are harder to see on a woman’s ECG than on a man’s, with differences in ventricular size, position and orientation affecting the ECG before, during and after a heart attack. Importantly, if these relationships can be elucidated and corrected for in diagnostic tools, these sex biases can be reduced, paving the way towards personalised ECG interpretation.
Prize presentations
The meeting also included a presentation from the winner of the 2023 Medical Physics Group PhD prize: Joshua Astley from the University of Sheffield, for his thesis “The role of deep learning in structural and functional lung imaging”.
Shifting the focus from the heart to the lungs, Astley discussed how hyperpolarized gas MRI, using inhaled contrast agents such as 3He and 129Xe, can visualize regional lung ventilation. To improve the accuracy and speed of such lung MRI studies, he designed a deep learning system that rapidly performs MRI segmentation and automates the calculation of ventilation defect percentage via lung cavity estimates. He noted that the tool is already being used to improve workflow in clinical hyperpolarized gas MRI scans.
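Once the ventilated region and the lung cavity have been segmented, the ventilation defect percentage itself is a simple voxel-counting ratio. The sketch below shows the idea; the mask names are assumptions for illustration rather than part of Astley’s pipeline.

```python
import numpy as np

def ventilation_defect_percentage(ventilated_mask, lung_cavity_mask):
    """Ventilation defect percentage (VDP): the share of the lung cavity that
    shows no ventilation signal. Both inputs are boolean arrays of equal shape."""
    cavity_voxels = lung_cavity_mask.sum()
    defect_voxels = np.logical_and(lung_cavity_mask, ~ventilated_mask).sum()
    return 100.0 * defect_voxels / cavity_voxels

# Toy example: a 10 x 10 x 10 cavity in which 5% of voxels are unventilated
cavity = np.ones((10, 10, 10), dtype=bool)
ventilated = cavity.copy()
ventilated.flat[:50] = False                 # 50 of 1000 voxels unventilated
print(ventilation_defect_percentage(ventilated, cavity))   # prints 5.0
```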
Astley also described the use of CT ventilation imaging as a potentially lower-cost approach to visualize lung ventilation. Combining the benefits of computational modelling with deep learning, Astley and colleagues have developed a hybrid framework that generates synthetic ventilation scans from non-contrast CT images.
Quoting some “lessons learnt from my thesis”, Astley concluded that artificial intelligence (AI)-based workflows enable faster computation of clinical biomarkers and better integration of functional lung MRI, and that non-contrast functional lung surrogates can reduce the cost and expand the use of functional lung imaging. He also emphasized that quantifying the uncertainty in AI approaches can improve clinicians’ trust in such algorithms, and that making code open and available is key to increasing its impact.
The day rounded off with awards for the meeting’s best talk in the submitted abstracts section and the best poster presentation. The former was won by Sam Barnes from Lancaster University for his presentation on the use of electroencephalography (EEG) for diagnosis of autism spectrum disorder. The poster prize was awarded to Suchit Kumar from University College London, for his work on a graphene-based electrophysiology probe for concurrent EEG and functional MRI.
The post The heart of the matter: how advances in medical physics impact cardiology appeared first on Physics World.
Extended cosmic-ray electron spectrum has a break but no other features
A new observation of electron and positron cosmic rays has confirmed the existence of what could be a “cooling break” in the energy spectrum at around 1 TeV, beyond which the particle flux decreases more rapidly. Aside from this break, however, the spectrum is featureless, showing no evidence of an apparent anomaly previously associated with a dark matter signal.
Cosmic ray is a generic term for an energetic charged particle that enters Earth’s atmosphere from space. Most cosmic rays are protons, some are heavier nuclei, and a small number (orders of magnitude fewer than protons) are electrons and their antiparticles (positrons).
“Because the electron’s mass is small, they radiate much more effectively than protons,” explains high-energy astrophysicist Felix Aharonian of the Max Planck Institute for Nuclear Physics in Heidelberg, Germany. “It makes the electrons very fragile, so the electrons we detect cannot be very old. That means the sources that produce them cannot be very far away.” Cosmic-ray electrons and positrons can therefore provide important information about our local cosmic environment.
Today, however, the origins of these electrons and positrons are hotly debated. They could be produced by nearby pulsars or supernova remnants. Some astrophysicists favour a secondary production model in which other cosmic rays interact locally with interstellar gas to create high-energy electrons and positrons.
Unexplained features
Previous measurements of the energy spectra of these cosmic rays revealed several unexplained features. In general, the particle flux decreases with increasing energy. At energies below about 1 TeV, the flux falls off as a steady power law. But at about 1 TeV there is a curious kink, or break point, beyond which the decline steepens.
Later observations by the Dark Matter Particle Explorer (DAMPE) collaboration confirmed this kink, but also appeared to show peaks at higher energies. Some theoreticians have suggested these inhomogeneities could arise from local sources such as pulsars, whereas others have advanced more exotic explanations, such as signals from dark matter.
In the new work, members of the High Energy Stereoscopic System (HESS) collaboration looked for evidence of cosmic electrons and positrons in 12 years of data from the HESS observatory in Namibia. HESS’s primary mission is to observe high-energy cosmic gamma rays. These gamma rays interact with the atmosphere creating showers of energetic charged particles. These showers create Cherenkov light, which is detected by HESS.
Similar but not identical
The observatory can also detect atmospheric showers created by cosmic rays such as protons and electrons. However, discerning between showers created by protons and electrons/positrons is a significant challenge (HESS cannot differentiate between electrons and positrons). “The hadronic showers produced by protons and electronic showers are extremely similar but not identical,” says Aharonian. “Now we want to use this tiny difference to distinguish between electron-produced showers and proton-produced showers. The task is very difficult because we need to reject proton showers by four orders of magnitude and still keep a reasonable fraction of electrons.”
Fortunately, the large data sample from HESS meant that the team could identify weak signals associated with electrons and positrons. The researchers were therefore able to extend the flux measurements out to much higher energies. Whereas previous surveys could not look higher than about 5 TeV, the HESS researchers probed the 0.3–40 TeV range – although Aharonian concedes that the error bars are “huge” at higher energies.
The study confirms that, up until about 1 TeV, the spectrum falls off as a power law with a spectral index of about 3.25. At about 1 TeV, a sharp downward kink was observed, with the index steepening to about 4.5 at higher energies. However, there is no sign of any bumps or peaks in the data.
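Written out, the measured spectrum is a broken power law. The snippet below encodes it schematically using the indices quoted above; the normalization and the perfectly sharp break are illustrative simplifications rather than the collaboration’s fitted function.

```python
import numpy as np

def electron_flux(energy_tev, norm=1.0, e_break=1.0, idx_low=3.25, idx_high=4.5):
    """Schematic broken power law for the cosmic-ray electron spectrum:
    dN/dE ~ E^-3.25 below ~1 TeV, steepening to ~E^-4.5 above the break."""
    E = np.asarray(energy_tev, dtype=float)
    idx = np.where(E < e_break, idx_low, idx_high)
    return norm * (E / e_break) ** (-idx)    # continuous at the break energy

print(electron_flux([0.5, 1.0, 2.0, 10.0]))
```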
This kink can be naturally explained, says Aharonian, as a “cooling break”, in which the low-energy electrons are produced by background processes, whereas the high-energy electrons are produced locally. “Teraelectronvolt electrons can only come from local sources,” he says. In theoretical models, both components follow power laws, and the difference between their spectral indices would be 1 – close to that measured here. Aharonian believes that further information about this phenomenon could come from techniques such as machine learning or muon detection to distinguish between high-energy proton showers and electron showers.
“This is a unique measurement: it gives you a value of the electron–positron flux up to extremely high energies,” says Andrei Kounine of Massachusetts Institute of Technology, who works on the Alpha Magnetic Spectrometer (AMS-02) detector on the International Space Station. While he expresses some concerns about possible uncharacterized systematic errors at very high energies, he says they do not meaningfully diminish the value of the HESS team’s work. He notes that there are a variety of unexplained anomalies in the energy spectra of various cosmic-ray particles. “What we are missing at the moment,” he says, “is a comprehensive theory that considers all possible effects and tries to predict from fundamental measurements such as the proton spectrum the fluxes of all other elements.”
The research is described in Physical Review Letters.
The post Extended cosmic-ray electron spectrum has a break but no other features appeared first on Physics World.
Mathematical model sheds light on how exercise suppresses tumour growth
Physical exercise plays an important role in controlling disease, including cancer, due to its effect on the human body’s immune system. A research team from the USA and India has now developed a mathematical model to quantitatively investigate the complex relationship between exercise, immune function and cancer.
Exercise is thought to suppress tumour growth by activating the body’s natural killer (NK) cells. In particular, skeletal muscle contractions drive the release of interleukin-6 (IL-6), which causes NK cells to shift from an inactive to an active state. The activated NK cells can then infiltrate and kill tumour cells. To investigate this process in more depth, the team developed a mathematical model describing the transition of a NK cell from its inactive to active state, at a rate driven by exercise-induced IL-6 levels.
“We developed this model to study how the interplay of exercise intensity and exercise duration can lead to tumour suppression and how the parameters associated with these exercise features can be tuned to get optimal suppression,” explains senior author Niraj Kumar from the University of Massachusetts Boston.
Impact of exercise intensity and duration
The model, reported in Physical Biology, is constructed from three ordinary differential equations that describe the temporal evolution of the number of inactive NK cells, active NK cells and tumour cells, as functions of the growth rates, death rates, switching rates (for NK cells) and the rate of tumour cell kill by activated NK cells.
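The article does not reproduce the paper’s equations, but a minimal sketch of such a three-compartment system can be written down in a few lines of Python. The functional forms below (logistic tumour growth, a constant supply of inactive NK cells and a square IL-6 pulse during the exercise bout) and all parameter values are illustrative assumptions, not the authors’ model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters: alpha0 plays the role of exercise intensity and
# t_ex the exercise duration (days); none of these values come from the paper.
p = dict(g_n=0.1, d_i=0.05, d_a=0.08, g_t=0.2, K=1e6, kill=1e-4,
         alpha0=0.5, t_ex=1.0)

def il6(t):
    """Exercise-induced IL-6: on during the exercise bout, off afterwards."""
    return p["alpha0"] if t < p["t_ex"] else 0.0

def rhs(t, y):
    Ni, Na, T = y                                # inactive NK, active NK, tumour
    activation = il6(t) * Ni                     # IL-6-driven NK activation
    dNi = p["g_n"] - activation - p["d_i"] * Ni
    dNa = activation - p["d_a"] * Na
    dT = p["g_t"] * T * (1 - T / p["K"]) - p["kill"] * Na * T
    return [dNi, dNa, dT]

sol = solve_ivp(rhs, (0, 20), [1e4, 0.0, 1e4], max_step=0.1)  # 20-day window
print(f"tumour cells after 20 days: {sol.y[2, -1]:.3g}")
```

Running this with and without the exercise pulse produces a qualitative dip-and-regrowth pattern similar to that described in the results below.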
Kumar and collaborators – Jay Taylor at Northeastern University and T Bagarti at Tata Steel’s Graphene Center – first investigated how exercise intensity impacts tumour suppression. They used their model to determine the evolution over time of tumour cells for different values of α0 – a parameter that correlates with the maximum level of IL-6 and increases with increased exercise intensity.
Simulating tumour growth over 20 days showed that the tumour population evolved non-monotonically, reaching a minimum (maximum tumour suppression) at a certain critical time before increasing and then settling to a steady-state value in the long term. At all time points, the largest tumour population was seen for the no-exercise case, confirming the premise that exercise helps suppress tumour growth.
The model revealed that as the intensity of the exercise increased, the level of tumour suppression increased alongside, due to the larger number of active NK cells. In addition, greater exercise intensity sustained tumour suppression for a longer time. The researchers also observed that if the initial tumour population was closer to the steady state, the effect of exercise on tumour suppression was reduced.
Next, the team examined the effect of exercise duration, by calculating tumour evolution over time for varying exercise time scales. Again, the tumour population showed non-monotonic growth with a minimum population at a certain critical time and a maximum population in the no-exercise case. The maximum level of tumour suppression increased with increasing exercise duration.
Finally, the researchers analysed how multiple bouts of exercise impact tumour suppression, modelling a series of alternating exercise and rest periods. The model revealed that the effect of exercise on maximum tumour suppression exhibits a threshold response with exercise frequency. Up to a critical frequency, which varies with exercise intensity, the maximum tumour suppression doesn’t change. However, if the exercise frequency exceeds the critical frequency, it leads to a corresponding increase in maximum tumour suppression.
Clinical potential
Overall, the model demonstrated that increasing the intensity or duration of exercise leads to greater and sustained tumour suppression. It also showed that manipulating exercise frequency and intensity within multiple exercise bouts had a pronounced effect on tumour evolution.
These results highlight the model’s potential to guide the integration of exercise into a patient’s cancer treatment programme. While still at the early development stage, the model offers valuable insight into how exercise can influence immune responses. And as Taylor points out, as more experimental data become available, the model has potential for further extension.
“In the future, the model could be adapted for clinical use by testing its predictions in human trials,” he explains. “For now, it provides a foundation for designing exercise regimens that could optimize immune function and tumour suppression in cancer patients, based on the exercise intensity and duration.”
Next, the researchers plan to extend the model to incorporate both exercise and chemotherapy dosing. They will also explore how heterogeneity in the tumour population can influence tumour suppression.
The post Mathematical model sheds light on how exercise suppresses tumour growth appeared first on Physics World.
Nuclear shape transitions visualized for the first time
Xenon nuclei change shape as they collide, transforming from soft, oval-shaped particles to rigid, spherical ones. This finding, which is based on simulations of experiments at CERN’s Large Hadron Collider (LHC), provides a first look at how the shapes of atomic nuclei respond to extreme conditions. While the technique is still at the theoretical stage, physicists at the Niels Bohr Institute (NBI) in Denmark and Peking University in China say that ultra-relativistic nuclear collisions at the LHC could allow for the first experimental observations of these so-called nuclear shape phase transitions.
The nucleus of an atom is made up of protons and neutrons, which are collectively known as nucleons. Like electrons, nucleons exist in different energy levels, or shells. To minimize the energy of the system, these shells take different shapes, with possibilities including pear, spherical, oval or peanut-shell-like formations. These shapes affect many properties of the atomic nucleus as well as nuclear processes such as the strong interactions between protons and neutrons. Being able to identify them is thus very useful for predicting how nuclei will behave.
Colliding pairs of 129Xe atoms at the LHC
In the new work, a team led by You Zhou at the NBI and Huichao Song at Peking University studied xenon-129 (129Xe). This isotope has 54 protons and 75 neutrons, and its relatively large nucleus makes its shape easier, in principle, to study than that of lighter nuclei.
Usually, the nucleus of xenon-129 is oval-shaped (technically, it is a 𝛾-soft rotor). However, low-energy nuclear theory predicts that it can transition to a spherical, prolate or oblate shape under certain conditions. “We propose that to probe this change (called a shape phase transition), we could collide pairs of 129Xe atoms at the LHC and use the information we obtain to extract the geometry and shape of the initial colliding nuclei,” Zhou explains. “Probing these initial conditions would then reveal the shape of the 129Xe atoms after they had collided.”
A quark-gluon plasma
To test the viability of such experiments, the researchers simulated accelerating atoms to ultra-relativistic speeds, equivalent to the energies involved in a typical particle-physics experiment at the LHC. At these energies, when nuclei collide with each other, their constituent protons and neutrons break down into smaller particles. These smaller particles are mainly quarks and gluons, and together they form a quark-gluon plasma, which is a liquid with virtually no viscosity.
Zhou, Song and colleagues modelled the properties of this “almost perfect” liquid using an advanced hydrodynamic model they developed called IBBE-VISHNU. According to these analyses, the Xe nuclei go from being soft and oval-shaped to rigid and spherical as they collide.
Studying shape transitions was not initially part of the researchers’ plan. The original aim of their work was to study conditions that prevailed in the first 10⁻⁶ seconds after the Big Bang, when the very early universe is thought to have been filled with a quark-gluon plasma of the type produced at the LHC. But after they realized that their simulations could shed light on a different topic, they shifted course.
“Our new study was initiated to address the open question of how nuclear shape transitions manifest in high-energy collisions,” Zhou explains, “and we also wanted to provide experimental insights into existing theoretical nuclear structure predictions.”
One of the team’s greatest difficulties lay in developing the complex models required to account for nuclear deformation and probe the structure of xenon and its fluctuations, Zhou tells Physics World. “There was also a need for compelling new observables that allow for a direct probe of the shape of the colliding nuclei,” he says.
Applications in both high- and low-energy nuclear and structure physics
The work could advance our understanding of fundamental nuclear properties and the operation of the theory of quantum chromodynamics (QCD) under extreme conditions, Zhou adds. “The insights gleaned from this work could guide future nuclear collision experiments and influence our understanding of nuclear phase transitions, with applications extending to both high-energy nuclear physics and low-energy nuclear structure physics,” he says.
The NBI/Peking University researchers say that future experiments could validate the nuclear shape phase transitions they observed in their simulations. Expanding the study to other nuclei that could be collided at the LHC is also on the cards, says Zhou. “This could deepen our understanding of nuclear structure at ultra-short timescales of 10⁻²⁴ seconds.”
The research is published in Physical Review Letters.
The post Nuclear shape transitions visualized for the first time appeared first on Physics World.
Physicists in cancer radiotherapy
This new MSc programme at the University of Manchester focuses on the cancer radiation therapy patient pathway, aiming to equip students with the skills to progress to careers in clinical practice, academic research or commercial medical physics.
Alan McWilliam, programme director of the new course, is also a reader in translational radiotherapy physics. He explains: “Radiotherapy is a mainstay of cancer treatment, used in around 50% of all treatments, and can be used together with surgery or systemic treatments like chemotherapy or immunotherapy. With a heritage dating back over 100 years, radiotherapy is now highly technical, allowing the radiation to be delivered with pin-point accuracy and is increasingly interdisciplinary to ensure a high-quality, curative delivery of radiation to every patient.”
“This new course builds on the research expertise at Manchester and benefits from being part of one of the largest university cancer departments in Europe, covering all aspects of cancer research. We believe this master’s reflects the modern field of medical physics, spanning the multidisciplinary nature of the field.”
Cancer pioneers
Manchester has a long history of developing solutions to drive improvements in healthcare, patients’ lives and the wellbeing of individuals. This new course draws on scientific research and innovation to equip those interested in a career in medical physics or cancer research with specialist skills that draw on a breadth of knowledge. Indeed, the course units bring together expertise from academics who have pioneered, amongst other work, the use of image-guided radiotherapy, big data analysis using real-world radiotherapy data, novel MR imaging for tracking oxygenation of tumours during radiotherapy, and proton research beam lines. Students will benefit directly from this network of research groups by being able to join research seminars throughout the course.
Working with clinical scientists
The master’s course is taught together with clinical physicists from The Christie NHS Foundation Trust, one of the largest single-site cancer hospitals in Europe and the only UK cancer hospital connected directly to a research institute. The radiotherapy department currently has 16 linear accelerators across four sites, an MR-guided radiotherapy service and one of the two NHS high-energy proton beam services. The Christie is currently one of only two cancer centres in the world with access to both proton beam therapy and an MR-guided linear accelerator. For students, this partnership provides the opportunity to work with people at the forefront of cancer treatment developments.
To reflect the current state of radiotherapy, the University of Manchester has worked with The Christie to ensure students gain the skills necessary for a successful, modern, medical physics career. Units have a strong clinical focus, with access to technology that allows students to experience and learn from clinical workflows.
Students will learn the fundamentals of how radiotherapy works, from the interactions of X-rays with matter, through X-ray beam generation, control and measurement, to how treatments are planned. Complementary to X-ray therapy, students will learn about the concepts of proton beam therapy, how the delivery of protons differs from that of X-rays, and the potential clinical benefits and unique difficulties of protons that arise from the greater uncertainties in how they interact with matter.
Delivering radiation with pin-point accuracy
The course will provide an in-depth understanding of how imaging can be used throughout the patient pathway to aid treatment decisions and guide the delivery of radiation.
The utility of CT, MRI and PET scanners across clinical pathways is explored, and the area of radiation delivery is complemented by material on radiobiology – how cells and tissues respond to radiation.
The difference between the response of tumours and normal tissue to radiation is called the therapeutic ratio. The radiobiology teaching will focus on how to maximize this ratio, essentially how to improve cure whilst minimising the risk of side-effects due to irradiation of nearby normal tissues. Students will also explore how this ratio could be enhanced or modified to improve the efficacy of all forms of radiotherapy.
Research and technology
A core strength of the research groups in Manchester is the use of routinely collected data to evaluate improvements in treatment delivery or the clinical translation of research findings. Many such improvements do not qualify for a full randomized clinical trial, but there are many pragmatic methods for evaluating clinical benefit. The course explores these concepts through the study of clinical workflows and translation, along with how to maximise the value of all available data.
Modern medical physicists need an appreciation of artificial intelligence (AI). AI is emerging as an automation tool throughout the radiation therapy workflow; for example, segmentation of tissues, radiotherapy planning and quality assurance. This course delves into the fundamentals of AI and machine learning, giving students the opportunity to implement their own solution for image classification or image segmentation. For those with leadership aspirations, guest lecturers from various academic, clinical or commercial backgrounds will detail career routes and how to develop knowledge in this area.
Pioneering new learning and assessments
Programme director Alan McWilliam talks us through the design of the course and how students are evaluated:
“An aspect of the teaching we are particularly proud of is the design of the assessments throughout the units. Gone are written exams, with assessments allowing students to apply their new knowledge to real medical physics problems. Students will perform dosimetric calculations and Monte Carlo simulations of proton depositions, as well as build an image registration pipeline and pitch for funding in a dragon’s den (or shark tank) scenario. This form of assessment will allow students to demonstrate skills directly useful for future career pathways.”
“The final part of the course is the research project, to take place after the taught elements are complete. Students will choose from projects which will embed them with one of the academic or clinical groups. Examples for the current cohort include training an AI segmentation model for muscle in CT images and associating this with treatment outcomes; simulating prompt gamma rays from proton deliveries for dose verification; and assisting with commissioning MR-guided workflows for ultra-central lung treatments.”
Develop your specialist skills
The Medical Physics in Cancer Radiation Therapy MSc is a one-year full-time (two-year part-time) programme at the University of Manchester.
Applications are now open for the next academic year, and it is recommended to apply early, as applications may close if the course is full.
Find out more and apply: https://uom.link/medphyscancer
The post Physicists in cancer radiotherapy appeared first on Physics World.
‘Buddy star’ could explain Betelgeuse’s varying brightness
An unseen low-mass companion star may be responsible for the recently observed “Great Dimming” of the red supergiant star Betelgeuse. According to this hypothesis, which was put forward by researchers in the US and Hungary, the star’s apparent brightness varies when an orbiting companion – dubbed α Ori B or, less formally, “Betelbuddy” – displaces light-blocking dust, thereby changing how much of Betelgeuse’s light reaches the Earth.
Located about 548 light-years away, in the constellation Orion, Betelgeuse is the 10th brightest star in the night sky. Usually, its brightness varies over a period of 416 days, but in 2019–2020, its output dropped to the lowest level ever recorded.
At the time, some astrophysicists speculated that this “Great Dimming” might mean that the star was reaching the end of its life and would soon explode as a supernova. Over the next three years, however, Betelgeuse’s brightness recovered, and alternative hypotheses gained favour. One such suggestion is that a cooler spot formed on the star and began ejecting material and dust, causing its light to dim as seen from Earth.
Pulsation periods
The latest hypothesis was inspired, in part, by the fact that Betelgeuse experiences another cycle in addition to its fundamental 416-day pulsation period. This second cycle, known as the long secondary period (LSP), lasts 2170 days, and the Great Dimming occurred after its minimum brightness coincided with a minimum in the 416-day cycle.
While astrophysicists are not entirely sure what causes LSPs, one leading theory suggests that they stem from a companion star. As this companion orbits its parent star, it displaces the cosmic dust the star produces and expels, which in turn changes the amount of starlight that reaches us.
Lots of observational data
To understand whether this might be happening with Betelgeuse, a team led by Jared Goldberg at the Flatiron Institute’s Center for Computational Astrophysics; Meridith Joyce at the University of Wyoming; and László Molnár of the Konkoly Observatory, HUN-REN CSFK, Budapest; analysed a wealth of observational data from the American Association of Variable Star Observers. “This association has been collecting data from both professional and amateur astronomers, so we had access to decades worth of data,” explains Molnár. “We also looked at data from the space-based SMEI instrument and spectroscopic observations collected by the STELLA robotic telescope.”
The researchers combined these direct-observation data with advanced computer models that simulate Betelgeuse’s activity. When they studied how the star’s brightness and its velocity varied relative to each other, they realized that the brightest phase must correspond to a companion being in front of it. “This is the opposite of what others have proposed,” Molnár notes. “For example, one popular hypothesis postulates that companions are enveloped in dense dust clouds, obscuring the giant star when they pass in front of them. But in this case, the companion must remove dust from its vicinity.”
As for how the companion does this, Molnár says they are not sure whether it evaporates the dust away or shepherds it to the opposite side of Betelgeuse with its gravitational pull. Both are possible, and Goldberg adds that other processes may also contribute. “Our new hypothesis complements the previous one involving the formation of a cooler spot on the star that ejects material and dust,” he says. “The dust ejection could occur because the companion star was out of the way, behind Betelgeuse rather than along the line of sight.”
The least absurd of all hypotheses?
The prospect of a connection between an LSP and the activity of a companion star is a longstanding one, Goldberg tells Physics World. “We know that Betelgeuse has an LSP, and if an LSP exists, that means a ‘buddy’ for Betelgeuse,” he says.
The researchers weren’t always so confident, though. Indeed, they initially thought the idea of a companion star for Betelgeuse was absurd, so the hardest part of their work was to prove to themselves that this was, in fact, the least absurd of all hypotheses for what was causing the LSP.
“We’ve been interested in Betelgeuse for a while now, and in a previous paper, led by Meridith, we already provided new size, distance and mass estimates for the star based on our models,” says Molnár. “Our new data started to point in one direction, but first we had to convince ourselves that we were right and that our claims are novel.”
The findings could have more far-reaching implications, he adds. While around one third of all red giants and supergiants have LSPs, the relationships between LSPs and brightness vary. “There are therefore a host of targets out there and potentially a need for more detailed models on how companions and dust clouds may interact,” Molnár says.
The researchers are now applying for observing time on space telescopes in hopes of finding direct evidence that the companion exists. One challenge they face is that because Betelgeuse is so bright – indeed, too bright for many sensitive instruments – a “Betelbuddy”, as Goldberg has nicknamed it, may be simpler to explain than it is to observe. “We’re throwing everything we can at it to actually find it,” Molnár says. “We have some ideas on how to detect its radiation in a way that can be separated from the absolute deluge of light Betelgeuse is producing, but we have to collect and analyse our data first.”
The study is published in The Astrophysical Journal.
The post ‘Buddy star’ could explain Betelgeuse’s varying brightness appeared first on Physics World.
Physicists propose new solution to the neutron lifetime puzzle
Neutrons inside the atomic nucleus are incredibly stable, but free neutrons decay within 15 minutes – give or take a few seconds. The reason we don’t know this figure more precisely is that the two main techniques used to measure it produce conflicting results. This so-called neutron lifetime problem has perplexed scientists for decades, but now physicists at TU Wien in Austria have come up with a possible explanation. The difference in lifetimes, they say, could stem from the neutron being in not-yet-discovered excited states that have different lifetimes as well as different energies.
According to the Standard Model of particle physics, free neutrons undergo a process called beta decay that transforms a neutron into a proton, an electron and an antineutrino. To measure the neutrons’ average lifetime, physicists employ two techniques. The first, known as the bottle technique, involves housing neutrons within a container and then counting how many of them remain after a certain amount of time. The second approach, known as the beam technique, is to fire a neutron beam with a known intensity through an electromagnetic trap and measure how many protons exit the trap within a fixed interval.
Researchers have been performing these experiments for nearly 30 years but they always encounter the same problem: the bottle technique yields an average neutron survival time of 880 s, while the beam method produces a lifetime of 888 s. Importantly, this eight-second difference is larger than the uncertainties of the measurements, meaning that known sources of error cannot explain it.
A mix of different neutron states?
A team led by Benjamin Koch and Felix Hummel of TU Wien’s Institute of Theoretical Physics is now suggesting that the discrepancy could be caused by nuclear decay producing free neutrons in a mix of different states. Some neutrons might be in the ground state, for example, while others could be in a higher-energy excited state. This would alter the neutrons’ lifetimes, they say, because elements in the so-called transition matrix that describes how neutrons decay into protons would be different for neutrons in excited states and neutrons in ground states.
As for how this would translate into different beam and bottle lifetime measurements, the team say that neutron beams would naturally contain several different neutron states. Neutrons in a bottle, in contrast, would almost all be in the ground state – simply because they would have had time to cool down before being measured in the container.
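To get a feel for how a small admixture of a longer-lived state could open a gap of several seconds between the two methods, here is a deliberately crude numerical illustration. The excited-state lifetime and the beam fraction are invented numbers used only to show the mechanism; they are not values from the TU Wien paper.

```python
# Hypothetical illustration of a two-component neutron beam.
tau_ground = 880.0    # s, bottle-like lifetime assumed for ground-state neutrons
tau_excited = 2000.0  # s, assumed longer apparent lifetime for an excited state
f_excited = 0.01      # assumed fraction of excited neutrons in the beam

# A beam experiment infers a lifetime from the instantaneous decay rate,
# tau_eff = N / (dN/dt); for the mixture at t = 0 this gives:
rate = (1 - f_excited) / tau_ground + f_excited / tau_excited
print(f"apparent beam lifetime: {1.0 / rate:.1f} s")   # about 885 s here
```

Even a 1% admixture shifts the inferred lifetime by a few seconds, comparable to the size of the discrepancy between the two methods.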
Towards experimental tests
Could these different states be detected? The researchers say it’s possible, but they caution that experiments will be needed to prove it. They also note that theirs is not the first hypothesis put forward to explain the neutron lifetime discrepancy. Perhaps the simplest explanation is that the gap stems from unknown systematic errors in either the beam experiment, the bottle experiment, or both. Other, more theoretical approaches have also been proposed, but Koch says they do not align with existing experimental data.
“Personally, I find hypotheses that require fewer and smaller new assumptions – and that are experimentally testable – more appealing,” Koch says. As an example, he cites a 2020 study showing that a phenomenon called the inverse quantum Zeno effect could speed up the decay of bottle-confined neutrons, calling it “an interesting idea”. Another possible explanation of the puzzle, which he says he finds “very intriguing”, has just been published and describes the admixture of novel bound electron-proton states in the final state of a weak decay, known as “Second Flavor Hydrogen Atoms”.
As someone with a background in quantum gravity and theoretical physics beyond the Standard Model, Koch is no stranger to predictions that are hard (and sometimes impossible, at least in the near term) to test. “Contributing to the understanding of a longstanding problem in physics with a hypothesis that could be experimentally tested soon is therefore particularly exciting for me,” he tells Physics World. “If our hypothesis of excited neutron states is confirmed by future experiments, it would shed a completely new light on the structure of neutral nuclear matter.”
The researchers now plan to collaborate with colleagues from the Institute for Atomic and Subatomic Physics at TU Wien to re-evaluate existing experimental data and explore various theoretical models. “We’re also hopeful about designing experiments specifically aimed at testing our hypothesis,” Koch reveals.
The present study is detailed in Physical Review D.
The post Physicists propose new solution to the neutron lifetime puzzle appeared first on Physics World.
Universe’s lifespan too short for monkeys to type out Shakespeare’s works, finds study
According to the well-known thought experiment, the infinite monkeys theorem, a monkey randomly pressing keys on a typewriter for an infinite amount of time would eventually type out the complete works of William Shakespeare purely by chance.
Yet a new analysis by two mathematicians in Australia finds that even a troop of monkeys would not have the time to do so within the expected lifespan of the universe.
To find out, the duo created a model that includes 30 keys – all the letters of the English alphabet plus common punctuation marks. They assumed a constant chimpanzee population of 200,000 could be enlisted for the task, each typing one key per second until the end of the universe in about 10¹⁰⁰ years.
“We decided to look at the probability of a given string of letters being typed by a finite number of monkeys within a finite time period consistent with estimates for the lifespan of our universe,” notes mathematician Stephen Woodcock from the University of Technology Sydney.
The mathematicians found that there is only a 5% chance a single monkey would type “bananas” within its own lifetime of just over 30 years. Yet even with all the chimps feverishly typing away, they would not be able to produce Shakespeare’s entire works (coming in at over 850,000 words) before the universe ends. They would, however, be able to type “I chimp, therefore I am”.
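The single-monkey “bananas” figure can be roughly reproduced with a back-of-the-envelope calculation, sketched below. It assumes 30 equally likely keys, one keystroke per second, a working lifetime of about 30 years and independent overlapping seven-letter windows; the result lands close to the quoted 5%.

```python
import math

keys = 30
word_len = len("bananas")                  # 7 characters
keystrokes = 30 * 365.25 * 24 * 3600       # about 9.5e8 keystrokes in 30 years
p_hit = keys ** -word_len                  # chance of "bananas" starting at a given keystroke
expected = keystrokes * p_hit              # about 0.04 expected occurrences
print(f"P(at least one 'bananas') = {1 - math.exp(-expected):.1%}")   # roughly 4%
```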
“It is not plausible that, even with improved typing speeds or an increase in chimpanzee populations, monkey labour will ever be a viable tool for developing non-trivial written works,” the authors conclude, adding that while the infinite monkeys theorem is true, it is also “somewhat misleading”, or rather it’s “not to be” in reality.
The post Universe’s lifespan too short for monkeys to type out Shakespeare’s works, finds study appeared first on Physics World.