Two independent teams in the US have demonstrated the potential of using the optical properties of nanocrystals to create remote sensors that measure tiny forces on tiny length scales. One team is based at Stanford University and used nanocrystals to measure the micronewton-scale forces exerted by a worm as it chewed bacteria. The other team is based at several institutes and used the photon avalanche effect in nanocrystals to measure sub-nanonewton to micronewton forces. The latter technique could potentially be used to study forces involved in processes such as stem cell differentiation.
Remote sensing of forces at small scales is challenging, especially inside living organisms. Optical tweezers cannot make remote measurements inside the body, while fluorophores – molecules that absorb and re-emit light – can measure forces in organisms, but have limited range, problematic stability or, in the case of quantum dots, toxicity. Nanocrystals with optical properties that change when subjected to external forces offer a way forward.
At Stanford, materials scientist Jennifer Dionne led a team that used nanocrystals doped with ytterbium and erbium. When two ytterbium atoms absorb near-infrared photons, they can then transfer energy to a nearby erbium atom. In this excited state, the erbium can either decay directly to its lowest energy state by emitting red light, or become excited to an even higher-energy state that decays by emitting green light. These processes are called upconversion.
Colour change
The ratio of green to red emission depends on the separation between the ytterbium and erbium atoms, and on the separation between the erbium atoms themselves, explains Dionne’s PhD student Jason Casar, lead author of a paper describing the Stanford research. Forces on the nanocrystal can change these separations and therefore alter that ratio.
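In practice, a readout like this comes down to converting a measured green-to-red intensity ratio into a force via a calibration curve. The sketch below illustrates that step in Python; the calibration points, the function name and the measured ratio are all hypothetical and are not taken from the Stanford study.

```python
# Minimal sketch of ratiometric force read-out (hypothetical calibration data,
# not values from the Stanford study).
import numpy as np

# Hypothetical calibration: green/red emission ratio measured at known forces
calib_force_uN = np.array([0.0, 0.5, 1.0, 2.0, 5.0])      # applied force (micronewtons)
calib_ratio = np.array([1.00, 0.92, 0.85, 0.74, 0.55])    # green/red intensity ratio

def force_from_ratio(ratio):
    """Interpolate the calibration curve to estimate force from a measured ratio."""
    # np.interp needs increasing x-values, so interpolate the reversed arrays
    return np.interp(ratio, calib_ratio[::-1], calib_force_uN[::-1])

measured_ratio = 0.80   # e.g. the colour measured as nanocrystals reach the pharynx
print(f"Estimated force: {force_from_ratio(measured_ratio):.2f} µN")
```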
The researchers encased their nanocrystals in polystyrene vessels approximately the size of an E. coli bacterium. They then mixed the encased nanocrystals with E. coli bacteria, which were fed to tiny nematode worms. To extract the nutrients, a worm’s pharynx needs to break open the bacterial cell wall. “The biological question we set out to answer is how much force is the bacterium generating to achieve that breakage?” explains Stanford’s Miriam Goodman.
The researchers shone near-infrared light on the worms, allowing them to monitor the flow of the nanocrystals. By measuring the colour of the emitted light when the particles reached the pharynx, they determined the force it exerted with micronewton-scale precision.
Meanwhile, a collaboration of scientists at Columbia University, Lawrence Berkeley National Laboratory and elsewhere has shown that a process called photon avalanche can be used to measure even smaller forces on nanocrystals. The team’s avalanching nanoparticles (ANPs) are sodium yttrium fluoride nanocrystals doped with thulium, which the team first reported in 2021.
The fun starts here
The sensing process uses a laser tuned away from any transition out of the ANP’s ground state. “We’re bathing our particles in 1064 nm light,” explains James Schuck of Columbia University, whose group led the research. “If the intensity is low, that all just blows by. But if, for some reason, you do eventually get some absorption – maybe a non-resonant absorption in which you give up a few phonons…then the fun starts. Our laser is resonant with an excited state transition, so you can absorb another photon.”
This creates a doubly excited state that can decay radiatively to the ground state, producing an upconverted photon. Or its energy can be transferred to a nearby thulium atom, which then becomes resonant with the excited-state transition and can bring more thulium atoms into resonance with the laser. “That’s the avalanche,” says Schuck. “We find on average you get 30 or 40 of these events – it’s analogous to a chain reaction in nuclear fission.”
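The chain-reaction analogy can be caricatured as a near-critical chain process, in which one rare seed absorption triggers a run of further excitation events. The toy simulation below uses an entirely made-up recruitment probability chosen to give roughly 30 events per avalanche; it is a cartoon of the avalanche statistics only, not the team’s rate-equation model.

```python
# Toy chain-reaction cartoon of a photon avalanche (illustrative numbers only,
# not the authors' rate-equation model).
import random

def avalanche_size(p_recruit=0.97, rng=random):
    """Count excitation events triggered by one seed absorption.
    Each excited ion recruits another with probability p_recruit (near-critical chain)."""
    events = 1
    while rng.random() < p_recruit:
        events += 1
    return events

random.seed(1)
sizes = [avalanche_size() for _ in range(100_000)]
# Mean chain length is ~ 1/(1 - p_recruit), i.e. about 33 events here
print(f"mean events per avalanche ~ {sum(sizes) / len(sizes):.1f}")
```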
Now, Schuck and colleagues have shown that the number of photons produced in each avalanche decreases when the nanoparticle experiences a compressive force. One reason is that the phonon frequencies rise as the lattice is compressed, making non-radiative decay energetically more favourable.
The thulium-doped nanoparticles decay by emitting either red or near-infrared photons. As the force increases, the red emission dims more quickly, causing a change in the colour of the emitted light. These effects allowed the researchers to measure forces from the sub-nanonewton to the micronewton range – at which point the light output from the nanoparticles became too low to detect.
Not just for forces
Schuck and colleagues are now seeking practical applications of their discovery, and not just for measuring forces.
“We’re discovering that this avalanching process is sensitive to a lot of things,” says Schuck. “If we put these particles in a cell and we’re trying to measure a cellular force gradient, but the cell also happened to change its temperature, that would also affect the brightness of our particles, and we would like to be able to differentiate between those things. We think we know how to do that.”
If the technique could be made to work in a living cell, it could be used to measure tiny forces such as those in the extracellular matrix that help dictate stem cell differentiation.
Andries Meijerink of Utrecht University in the Netherlands believes both teams have done important work that is impressive in different ways: Schuck and colleagues for unveiling a fundamentally new force-sensing technique, and Dionne’s team for demonstrating a remarkable practical application.
However, Meijerink is sceptical that photon avalanching will be useful for sensing in the short term. “It’s a very intricate process,” he says, adding, “There’s a really tricky balance between this first absorption step, which has to be slow and weak, and this resonant absorption”. Nevertheless, he says that researchers are discovering other systems that can avalanche. “I’m convinced that many more systems will be found,” he says.
Both studies are described in Nature.
Last year was the year of elections and 2025 is going to be the year of decisions.
After many countries, including the UK, Ireland and the US, went to the polls in 2024, the start of 2025 will see governments at the beginning of new terms, forced to respond swiftly to mounting economic, social, security, environmental and technological challenges.
These issues would be difficult to address at any given time, but today they come amid a turbulent geopolitical context. Governments are often judged against short milestones – the first 100 days or a first budget – but urgency should not come at the cost of thinking long-term, because the decisions over the next few months will shape outcomes for years, perhaps decades, to come. This is no less true for science than it is for health and social care, education or international relations.
In the UK, the first half of the year will be dominated by the government’s spending review. Due in late spring, it could be one of the toughest political tests for UK science, as the implications of the tight spending plans announced in the October budget become clear. Decisions about departmental spending will have important implications for physics funding, from research to infrastructure, facilities and teaching.
One of the UK government’s commitments is to establish 10-year funding cycles for key R&D activities – a policy that could be a significant improvement. Physics discoveries often take time to reach their full potential, but their transformational nature is indisputable. From fibre-optic communications to magnetic resonance imaging, physics has been indispensable to many of the world’s most impactful and successful innovations.
Emerging technologies, enabled by physicists’ breakthroughs in fields such as materials science and quantum physics, promise to transform the way we live and work, and create new business opportunities and open up new markets. A clear, comprehensive and long-term vision for R&D would instil confidence among researchers and innovators, and long-term and sustainable R&D funding would enable people and disruptive ideas to flourish and drive tomorrow’s breakthroughs.
Alongside the spending review, we are also expecting the publication of the government’s industrial strategy. The green paper published last year indicated that the strategy will place science and technology at the centre of efforts to position the UK for economic growth.
Physics-based industries are a foundation stone for the UK economy and are highly productive, as highlighted by research commissioned by the Institute of Physics, which publishes Physics World. Across the UK, the physics sector generates £229bn gross value added, or 11% of total UK gross domestic product. It creates a collective turnover of £643bn, or £1380bn when indirect and induced turnover is included.
Labour productivity in physics-based businesses is also strong, at £84 300 per worker per year. So if physics is not at the heart of this effort, the government’s mission of economic revival is in danger of failing to get off the launch pad.
A pivotal year
Another of the new government’s policy priorities is the strategic defence review, which is expected to be published later this year. It could have huge implications for physics given its core role in many of the technologies that contribute to the UK’s defence capabilities. The changing geopolitical landscape, and potential for strained relations between global powers, may well bring research security to the front of the national mind.
Intellectual property and scientific innovation are among the UK’s greatest strengths, and it is right to secure them. But physics discoveries in particular can be hampered by overzealous security measures. So much of the important work in our discipline comes from years of collaboration between researchers across the globe. Decisions about research security need to protect, not hamper, the future of UK physics research.
This year could also be pivotal for UK universities, as securing their financial stability and future will be one of the major challenges. Last year the pressures faced by higher-education institutions became apparent, with announcements of course closures, redundancies and restructuring as ways of saving money. The rise in tuition fees has far from solved the problem, so we should be prepared for more turbulence in the higher-education sector.
These things matter enormously. We have heard that universities are facing a tough situation, and it is getting harder for physics departments to survive. But if we don’t recognise the need to fund more physicists, we will miss so many of the opportunities that lie ahead.
As we celebrate the International Year of Quantum Science and Technology, which marks the centenary of Werner Heisenberg’s initial development of quantum mechanics, 2025 is a reminder that the benefits of physics span decades.
We need to build on all the vital and exciting developments happening in physics departments. The country wants and needs a stronger scientific workforce – just think of all those people who studied physics and now work in industries that defend the country – and that workforce will depend strongly on physics skills. So our priority is to make sure that physics departments keep doing what they do so well: world-leading research and preparing the next generation of physicists.
If aliens came to Earth and tried to work out how we earthlings make sense of our world, they’d surely conclude that we take information and slot it into pre-existing stories – some true, some false, some bizarre. Ominously, these aliens would be correct. You don’t need to ask earthling philosophers, just look around.
Many politicians and influencers, for instance, are convinced that scientific evidence does not reveal the truth about autism or AIDS, the state of the atmosphere, the legitimacy of elections, or even aliens. Truth comes to light only when you “know the full story”, which will eventually reveal the scientific data to be deceptive or irrelevant.
To see how this works in practice, suppose you hear someone say that a nearby lab is leaking x picocuries of a radioactive substance, potentially exposing you to y millirems of dose. How do you know if you’re in danger? Well, you’ll instinctively start going through a mental checklist of questions.
Who’s speaking – scientist, politician, reporter or activist? If it’s a scientist, are they from the government, a university, or an environmental or anti-nuclear group? You might then wonder: how trustworthy are the agencies that regulate the substance? Is the lab a good neighbour, or did it cover up past incidents? How much of the substance is truly harmful?
Your answers to all these questions will shape the story you tell yourself. You might conclude: “The lab is a responsible organization and will protect me”. Or perhaps you’ll think: “The lab is a thorn in the side of the community and is probably doing weapons-related work. The leak’s a sign of something far worse.”
Perhaps your story will be: “Those environmentalists are just trying to scare us and the data indicate the leak is harmless”. Or maybe it’ll be: “I knew it! The lab’s sold out, the data are terrifying, and the activists are revealing the real truth”. Such stories determine the meaning of the picocuries and millirems for humans, not the other way around.
Acquiring data
Humans gain a sense of what’s happening in several ways. Three of them, to use philosophical language, are deferential, civic and melodramatic epistemology.
In “deferential epistemology”, citizens habitually take the word of experts and institutions about things like the dangers of x picocuries and exposures of y millirems. In his 1624 book New Atlantis, the philosopher Francis Bacon famously crafted a fictional portrait of an island society where deferential epistemology rules and people instinctively trust the scientific infrastructure.
We may think this is how people ought to behave. But Bacon, who was also a politician, understood that deference to experts is not automatic and requires constantly curating the public face of the scientific infrastructure. Earthlings haven’t seen deferential epistemology in a while.
“Civic epistemology”, meanwhile, is how people acquire knowledge in the absence of that curation. Such people don’t necessarily reject experts but hear their voices alongside many others claiming to know best how to pursue our interests and values. Civic epistemology is when we negotiate daily life not by first consulting scientists but by pursuing our concerns with a mix of habit, trust, experience and friendly advice.
We sometimes don’t, in fact, take scientific advice when it collides with how we already behave; we may smoke or drink, for instance, despite warnings not to. Or we might seek guidance from non-scientists about things like the harms of radiation.
Finally, what I call “melodramatic epistemology” draws on the word “melodrama”, a genre of theatre involving extreme plots, obvious villains, emotional appeal, sensational language, and moral outrage (the 1939 film Gone with the Wind comes to mind).
Melodramas were once considered culturally insignificant, but scholars such as Peter Brooks from Yale University have shown that a melodramatic lens can be a powerful and irresistible way for humans to digest difficult and emotionally charged events. The clarity, certainty and passion provided by a melodramatic reading of a situation tend to displace the complexities, uncertainties and dispassion of scientific evaluation and evidence.
One example from physics occurred at the Lawrence Berkeley Laboratory in the late 1990s, when activists fought, successfully, for the closure of its National Tritium Labeling Facility (NTLF). As I have written before, the NTLF had developed techniques for medical studies while releasing tritium emissions well below federal and state environmental standards.
Activists, however, used melodramatic epistemology to paint the NTLF’s scientists as villains spreading breast cancer throughout the area, and denounced them as making “a terrorist attack on the citizens of Berkeley”. One activist called the scientists “piano players in a nuclear whorehouse.”
The critical point
The aliens studying us would worry most about melodramatic epistemology. Though dangerous, it is nearly impervious to correction, because any contrary data, studies and expert judgment are considered to spring from the villain’s allies and therefore to incite rather than allay fear.
Two US psychologists – William Brady from Northwestern University and Molly Crockett from Princeton University – recently published a study of how and why misinformation spreads (Science 386 991). By analyzing data from Facebook and Twitter and by conducting experiments with participants, they found that sources of misinformation evoke more outrage than trustworthy sources. Worse still, the outrage encourages us to share the misinformation even if we haven’t fully read the original source.
This makes it hard to counter misinformation. As the authors tactfully conclude: “Outrage-evoking misinformation may be difficult to mitigate with interventions that assume users want to share accurate information”.
In my view, the best, and perhaps only, way to challenge melodramatic stories is to write bigger, more encompassing stories that reveal that a different plot is unfolding. Such a story about the NTLF, for instance, would comprise story lines about the benefits of medical techniques, the testing of byproducts, the origin of regulations of toxins, the perils of our natural environment, the nature of fear and its manipulation, and so forth. In such a big story, those who promote melodramatic epistemology show up as an obvious, and dangerous, subplot.
If the aliens see us telling such bigger stories, they might not give up earthlings for lost.
A novel fusion device based at the University of Seville in Spain has achieved its first plasma. The SMall Aspect Ratio Tokamak (SMART) is a spherical tokamak that can operate with “negative triangularity” – the first compact spherical tokamak to do so. Work performed on the machine could be useful when designing compact fusion power plants based on spherical-tokamak technology.
SMART has been constructed by the University of Seville’s Plasma Science and Fusion Technology Laboratory. Its vacuum vessel measures 1.6 × 1.6 m and houses a 30 cm diameter central solenoid and 12 toroidal field coils, while eight poloidal field coils are used to shape the plasma.
Triangularity refers to the shape of the plasma relative to the tokamak. The cross section of the plasma in a tokamak is typically shaped like a “D”. When the straight part of the D faces the centre of the tokamak, it is said to have positive triangularity. When the curved part of the plasma faces the centre, however, the plasma has negative triangularity.
It is thought that negative triangularity configurations can better suppress plasma instabilities that expel particles and energy from the plasma, helping to prevent damage to the tokamak wall.
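For readers who want the geometry spelled out, the boundary of a shaped tokamak plasma is commonly written in the standard textbook parametrization below (this is generic, not specific to SMART), where R0 is the major radius, a the minor radius, κ the elongation and δ the triangularity:

\[ R(\theta) = R_0 + a\cos\!\left(\theta + \delta\sin\theta\right), \qquad Z(\theta) = \kappa\, a\, \sin\theta . \]

With δ > 0 the straight side of the D faces the centre of the tokamak (positive triangularity); with δ < 0 the D is flipped so that the curved side faces the centre – the negative-triangularity configuration SMART is designed to explore.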
Last year, researchers at the University of Seville began to prepare the tokamak’s inner walls for a high pressure plasma by heating argon gas with microwaves. When those tests were successful, engineers then worked toward producing the first plasma.
“This is an important achievement for the entire team as we are now entering the operational phase,” notes SMART principal investigator Manuel García Muñoz. “The SMART approach is a potential game changer with attractive fusion performance and power handling for future compact fusion reactors. We have exciting times ahead.”
Devices like lasers and other semiconductor-based technologies operate on the principles of quantum mechanics, but they only scratch the surface. To fully exploit quantum phenomena, scientists are developing a new generation of quantum-based devices. These devices are advancing rapidly, fuelling what many call the “second quantum revolution”.
One exciting development in this domain is the rise of next-generation energy storage devices known as quantum batteries (QBs). These devices leverage exotic quantum phenomena such as superposition, coherence, correlation and entanglement to store and release energy in ways that conventional batteries cannot. However, practical realization of QBs has its own challenges such as reliance on fragile quantum states and difficulty in operating at room temperature.
A recent theoretical study by Rahul Shastri and colleagues from IIT Gandhinagar, India, in collaboration with researchers at China’s Zhejiang University and the China Academy of Engineering Physics takes significant strides towards understanding how QBs can be charged faster and more efficiently, thereby lowering some of the barriers restricting their use.
How does a QB work?
The difference between charging a QB and charging a mobile phone is that with a QB, both the battery and the charger are quantum systems. Shastri and colleagues focused on two such systems: a harmonic oscillator (HO) and a two-level system. While a two-level system can exist in just two energy states, a harmonic oscillator has an evenly spaced ladder of energy levels. These systems therefore represent two extremes – one with a discrete, bounded energy range and the other with an unbounded energy spectrum approaching a continuous limit – making them ideal for exploring the versatility of QBs.
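Concretely, the two energy spectra being compared are the standard textbook ones (with ħω the level spacing):

\[ E \in \{\,0,\ \hbar\omega\,\} \ \ \text{(two-level system)}, \qquad E_n = \hbar\omega\!\left(n + \tfrac{1}{2}\right),\ \ n = 0, 1, 2, \dots \ \ \text{(harmonic oscillator)}. \]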
In the quantum HO-based setup, a higher-energy HO acts as the charger and a lower-energy one as the battery. When the two are connected, or coupled, energy transfers from the charger to the battery. The two-level system follows the same working principle. Such coupled quantum systems are routinely realized in experiments.
Using decoherence as a tool to improve QB performance
The study’s findings, which are published in npj Quantum Information, are both surprising and promising, illustrating how a phenomenon typically seen as a challenge in quantum systems – decoherence – can become a solution.
The term “decoherence” refers to the process where a quantum system loses its unique quantum properties (such as quantum correlation, coherence and entanglement). The key trigger for decoherence is quantum noise caused by interactions between a quantum system and its environment.
Since no real-world physical system is perfectly isolated, such noise is unavoidable, and even minute amounts of environmental noise can lead to decoherence. Maintaining quantum coherence is thus extremely challenging even in controlled laboratory settings, let alone industrial environments producing large-scale practical devices. For this reason, decoherence represents one of the most significant obstacles in advancing quantum technologies towards practical applications.
Shastri and colleagues, however, discovered a way to turn this foe into a friend. “Instead of trying to eliminate these naturally occurring environmental effects, we ask: why not use them to our advantage?” Shastri says.
The method they developed speeds up the charging process using a technique called controlled dephasing. Dephasing is a form of decoherence that usually involves the gradual loss of quantum coherence, but the researchers found that when managed carefully, it can actually boost the battery’s performance.
To understand how this works, it’s important to note that at low levels of dephasing, the battery undergoes smooth energy oscillations. Too much dephasing, however, freezes these oscillations in what’s known as the quantum Zeno effect, essentially stalling the energy transfer. With just the right amount of dephasing, though, the battery charges faster while maintaining stability. Precisely controlling the dephasing rate therefore strikes a balance that improves charging speed, yields more robust charging and helps overcome the challenges posed by environmental factors.
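One minimal way to explore this behaviour numerically is to couple a two-level “charger” to a two-level “battery”, add a pure-dephasing term to a Lindblad master equation and sweep the dephasing rate. The sketch below does this with QuTiP; the model, parameter values and operator choices are illustrative assumptions in the spirit of the study, not the authors’ actual calculation.

```python
# Toy model (not the authors' calculation): a two-level "charger" exchanging energy
# with a two-level "battery", with pure dephasing applied to the battery.
# Coupling, splitting and dephasing rates are illustrative values.
import numpy as np
from qutip import basis, tensor, qeye, sigmap, sigmam, sigmaz, mesolve

g, omega = 1.0, 1.0                      # exchange coupling and level splitting (arb. units)
tlist = np.linspace(0, 20, 400)

sp_c, sm_c = tensor(sigmap(), qeye(2)), tensor(sigmam(), qeye(2))   # charger ladder operators
sp_b, sm_b = tensor(qeye(2), sigmap()), tensor(qeye(2), sigmam())   # battery ladder operators
sz_c, sz_b = tensor(sigmaz(), qeye(2)), tensor(qeye(2), sigmaz())

# Resonant excitation-exchange Hamiltonian: energy hops between charger and battery
H = 0.5 * omega * (sz_c + sz_b) + g * (sp_c * sm_b + sm_c * sp_b)

psi0 = tensor(basis(2, 0), basis(2, 1))  # charger starts excited, battery starts empty
n_b = sp_b * sm_b                        # stored energy ~ battery excited-state population

for gamma_phi in (0.0, 0.5, 10.0):       # no, moderate and very strong dephasing
    c_ops = [np.sqrt(gamma_phi) * sz_b] if gamma_phi > 0 else []
    result = mesolve(H, psi0, tlist, c_ops, e_ops=[n_b])
    # Strong dephasing (quantum Zeno regime) slows the transfer; a moderate rate
    # damps the coherent back-and-forth oscillation of the stored energy.
    print(f"gamma_phi = {gamma_phi:4.1f} -> battery excitation at t = 20: "
          f"{result.expect[0][-1]:.3f}")
```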
“Our study shows how dissipative effects, traditionally seen as a hindrance, can be harnessed to enhance performance,” Shastri notes. This opens the door to scalable, robust quantum battery designs, which could be extremely useful for energy management in quantum computing and other quantum-enabled applications.
Implications for scalable quantum technologies
The results of this study are encouraging for the quantum-technology industry. According to Shastri, using dephasing to optimize the charging speed and stability of QBs not only advances fundamental understanding but also addresses practical challenges in quantum energy storage.
“Our proposed method could be tested on existing platforms such as superconducting qubits and NMR systems, where dephasing control is already experimentally feasible,” he says. These platforms offer experimentalists a tangible starting point for verifying the study’s predictions and further refining QB performance.
Experimentalists testing this theory will face challenges. Examples include managing additional decoherence mechanisms like amplitude damping and achieving the ideal balance of controlled dephasing in realistic setups. However, Shastri says that these challenges present valuable opportunities to refine and expand the proposed theoretical model for optimizing QB performance under practical conditions. The second quantum revolution is already underway, and QBs might just be the power source that charges our quantum future.
Brain tumours are notoriously difficult to treat, resisting conventional treatments such as radiation therapy, where the deliverable dose is limited by normal tissue tolerance. To better protect healthy tissues, researchers are turning to microbeam radiation therapy (MRT), which uses spatially fractionated beams to spare normal tissue while effectively killing cancer cells.
MRT is delivered using arrays of ultrahigh-dose rate synchrotron X-ray beams tens of microns wide (high-dose peaks) and spaced hundreds of microns apart (low-dose valleys). A research team from the Centre for Medical Radiation Physics at the University of Wollongong in Australia has now demonstrated that combining MRT with targeted radiosensitizers – such as nanoparticles or anti-cancer drugs – can further boost treatment efficacy, reporting their findings in Cancers.
“MRT is famous for its healthy tissue-sparing capabilities with good tumour control, whilst radiosensitizers are known for their ability to deliver targeted dose enhancement to cancer,” explains first author Michael Valceski. “Combining these modalities just made sense, with their synergy providing the potential for the best of both worlds.”
Enhancement effects
Valceski and colleagues combined MRT with thulium oxide nanoparticles, the chemotherapy drug methotrexate and the radiosensitizer iododeoxyuridine (IUdR). They examined the response of monolayers of rodent brain cancer cells to various therapy combinations. They also compared conventional broadbeam orthovoltage X-ray irradiation with synchrotron broadbeam X-rays and synchrotron MRT.
Synchrotron irradiations were performed on the Imaging and Medical Beamline at the ANSTO Australian Synchrotron, using ultrahigh dose rates of 74.1 Gy/s for broadbeam irradiation and 50.3 Gy/s for MRT. The peak-to-valley dose ratio (PVDR, used to characterize an MRT field) of this set-up was measured as 8.9.
Using a clonogenic assay to measure cell survival, the team observed that synchrotron-based irradiation enhanced cell killing compared with conventional irradiation at the same 5 Gy dose (for MRT this is the valley dose; the peaks experience an 8.9 times higher dose), demonstrating the increased cell-killing effect of these ultrahigh-dose rate X-rays.
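For reference, the peak dose implied by those numbers follows directly from the definition of the peak-to-valley dose ratio:

\[ D_\mathrm{peak} = \mathrm{PVDR} \times D_\mathrm{valley} = 8.9 \times 5\ \mathrm{Gy} \approx 44.5\ \mathrm{Gy}. \]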
Adding radiosensitizers further increased the impact of synchrotron broadbeam irradiation, with DNA-localized IUdR killing more cells than cytoplasm-localized nanoparticles. Methotrexate, meanwhile, halved cell survival compared with conventional irradiation.
The team observed that at 5 Gy, MRT showed equivalent cell killing to synchrotron broadbeam irradiation. Valceski explains that this demonstrates MRT’s tissue-sparing potential, by showing how MRT can maintain treatment efficacy while simultaneously protecting healthy cells.
MRT also showed enhanced cell killing when combined with radiosensitizers, with the greatest effect seen for IUdR and IUdR plus methotrexate. This local dose enhancement, attributed to the DNA localization of IUdR, could further improve the tissue-sparing capabilities of MRT by enabling a lower per-fraction dose to reduce patient exposure whilst maintaining tumour control.
Imaging valleys and peaks
To link the biological effects with the physical collimation of MRT, the researchers performed confocal microscopy (at the Fluorescence Analysis Facility in Molecular Horizons, University of Wollongong) to investigate DNA damage following treatment at 0.5 and 5 Gy. Twenty minutes after irradiation, they imaged fixed cells to visualize double-strand DNA breaks (DSBs), as shown by γH2AX foci (representing a nuclear DSB site).
The images verified that the cells’ biological responses corresponded with the MRT beam patterns, with the 400 µm microbeam spacing clearly seen in all treated cells, both with and without radiosensitizers.
In the 0.5 Gy images, the microbeam tracks were consistent in width, while the 5 Gy MRT tracks were wider as DNA damage spread from peaks into the valleys. This radiation roll-off was also seen with IUdR and IUdR plus methotrexate, with numerous bright foci visible in the valleys, demonstrating dose enhancement and improved cancer-killing with these radiosensitizers.
The researchers also analysed the MRT beam profiles using the γH2AX foci intensity across the images. Cells treated with radiosensitizers had broadened peaks, with the largest effect seen with the nanoparticles. As nanoparticles can be designed to target tumours, this broadening (roughly 30%) can be used to increase the radiation dose to cancer cells in nearby valleys.
“Peak broadening adds a novel benefit to radiosensitizer-enhanced MRT. The widening of the peaks in the presence of nanoparticles could potentially ‘engulf’ the entire cancer, and only the cancer, whilst normal tissues without nanoparticles retain the protection of MRT tissue sparing,” Valceski explains. “This opens up the potential for MRT radiosurgery, something our research team has previously investigated.”
Finally, the researchers used γH2AX foci data for each peak and valley to determine a biological PVDR. The biological PVDR values matched the physical PVDR of 8.9, confirming for the first time a direct relationship between the physical dose delivered and the DSBs induced in the cancer cells. They note that adding radiosensitizers generally lowered the biological PVDRs from the physical value, likely due to additional DSBs induced in the valleys.
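Conceptually, a biological PVDR is just the ratio of the DNA-damage signal in the peak regions to that in the valley regions. The Python sketch below illustrates the idea on a synthetic γH2AX intensity profile using the 400 µm microbeam spacing quoted above; the profile, peak width and intensities are invented for illustration and this is not the team’s analysis pipeline.

```python
# Illustrative "biological PVDR" from a synthetic gammaH2AX intensity profile.
# The profile, peak width and intensities below are made up for demonstration.
import numpy as np

pitch_um, peak_width_um = 400.0, 50.0          # microbeam spacing and nominal peak width
x = np.arange(0.0, 4 * pitch_um, 1.0)          # position across the image (um)

# Hypothetical damage profile: bright 50-um peaks every 400 um on a dim valley background
in_peak = (x % pitch_um) < peak_width_um
noise = 0.2 * np.random.default_rng(0).standard_normal(x.size)
foci_intensity = np.where(in_peak, 9.0, 1.0) + noise

biological_pvdr = foci_intensity[in_peak].mean() / foci_intensity[~in_peak].mean()
print(f"biological PVDR ~ {biological_pvdr:.1f}")   # ~9 for this synthetic profile
```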
The next step will be to perform preclinical studies of MRT. “Trials to assess the efficacy of this multimodal therapy in treating aggressive cancers in vivo are key, especially given the theragnostic potential of nanoparticles for image-guided treatment and precision planning, as well as cancer-specific dose enhancement,” senior author Moeava Tehei tells Physics World. “Considering the radiosurgical potential of stereotactic, radiosensitizer-enhanced MRT fractions, we can foresee a revolutionary multimodal technique with curative potential in the near future.”
Permanent distortions in spacetime caused by the passage of gravitational waves could be detectable from Earth. Known as “gravitational memory”, such distortions are predicted to occur most prominently when the core of a supernova collapses. Observing them could therefore provide a window into the death of massive stars and the creation of black holes, but there’s a catch: the supernova might have to happen in our own galaxy.
Physicists have been detecting gravitational waves from colliding stellar-mass black holes and neutron stars for almost a decade now, and theory predicts that core-collapse supernovae should also produce them. The difference is that unlike collisions, supernovae tend to be lopsided – they don’t explode outwards equally in all directions. It is this asymmetry – in both the emission of neutrinos from the collapsing core and the motion of the blast wave itself – that produces the gravitational-wave memory effect.
“The memory is the result of the lowest frequency aspects of these motions,” explains Colter Richardson, a PhD student at the University of Tennessee in Knoxville, US and co-lead author (with Haakon Andresen of Sweden’s Oskar Klein Centre) of a Physical Review Letters paper describing how gravitational-wave memory detection might work on Earth.
Filtering out seismic noise
Previously, many physicists assumed it wouldn’t be possible to detect the memory effect from Earth. This is because it manifests at frequencies below 10 Hz, where noise from seismic events tends to swamp detectors. Indeed, Harvard astrophysicist Kiranjyot Gill argues that detecting gravitational memory “would require exceptional sensitivity in the millihertz range to separate it from background noise and other astrophysical signals” – a sensitivity that she says Earth-based detectors simply don’t have.
Anthony Mezzacappa, Richardson’s supervisor at Tennessee, counters this by saying that while the memory signal itself cannot be detected, the ramp-up to it can. “The signal ramp-up corresponds to a frequency of 20–30 Hz, which is well above 10 Hz, below which the detector response needs to be better characterized for what we can detect on Earth, before dropping down to virtually 0 Hz where the final memory amplitude is achieved,” he tells Physics World.
The key, Mezzacappa explains, is a “matched filter” technique in which templates of what the ramp-up should look like are matched to the signal to pick it out from low-frequency background noise. Using this technique, the team’s simulations show that it should be possible for Earth-based gravitational-wave detectors such as LIGO to detect the ramp-up even though the actual deformation effect would be tiny – around 10⁻¹⁶ cm “scaled to the size of a LIGO detector arm”, Richardson says.
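Matched filtering itself is a standard signal-processing operation: slide a template of the expected waveform along the noisy data stream, cross-correlate, and look for a peak. The snippet below is a generic illustration on synthetic data with an invented ramp-shaped template; it is not the collaboration’s pipeline, templates or detector data.

```python
# Generic matched-filter illustration: recover a known "ramp-up" template buried in
# white noise by cross-correlation. Template shape, noise and sampling are synthetic.
import numpy as np

rng = np.random.default_rng(42)
fs, duration = 1024, 8.0                   # sample rate (Hz) and data length (s)
t = np.arange(0.0, duration, 1.0 / fs)

# Template: a smooth 0.5 s rise from 0 to 1, standing in for the memory ramp-up
t_ramp = np.arange(0.0, 0.5, 1.0 / fs)
template = 0.5 * (1.0 - np.cos(2.0 * np.pi * t_ramp))

data = rng.normal(scale=1.0, size=t.size)  # whitened detector noise (unit variance)
start = int(3.0 * fs)                      # hide the signal starting at t = 3 s
data[start:start + template.size] += template

# Slide the template over the data, normalize by its energy, and find the best match
snr = np.correlate(data, template, mode="valid") / np.sqrt(np.sum(template**2))
best = snr.argmax()
print(f"best match at t = {best / fs:.2f} s (injected at 3.00 s), SNR ~ {snr[best]:.1f}")
```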
The snag is that for the ramp-up to be detectable, the simulations suggest the supernova would need to be close – probably within 10 kiloparsecs (32,615 light years) of Earth. That would place it within our own galaxy, and galactic supernovae are not exactly common. The last to be observed in real time was spotted by Johannes Kepler in 1604; though there have been others since, we’ve only identified their remnants after the fact.
Going to the Moon
Mezzacappa and colleagues are optimistic that multi-messenger astronomy techniques such as gravitational-wave and neutrino detectors will help astronomers identify future Milky Way supernovae as they happen, even if cosmic dust (for example) hides their light for optical observers.
Gill, however, prefers to look towards the future. In a paper under revision at Astrophysical Journal Letters, and currently available as a preprint, she cites two proposals for detectors on the Moon that could transform gravitational-wave physics and extend the range at which gravitational memory signals can be detected.
The first, called the Lunar Gravitational Wave Antenna, would use inertial sensors to detect the Moon shaking as gravitational waves ripple through it. The other, known as the Laser Interferometer Lunar Antenna, would be like a giant, triangular version of LIGO with arms spanning tens of kilometres open to space. Both are distinct from the European Space Agency’s Laser Interferometer Space Antenna, which is due for launch in the 2030s, but is optimized to detect gravitational waves from supermassive black holes rather than supernovae.
“Lunar-based detectors or future space-based observatories beyond LISA would overcome the terrestrial limitations,” Gill argues. Such detectors, she adds, could register a memory effect from supernovae tens or even hundreds of millions of light years away. This huge volume of space would encompass many galaxies, making the detection of gravitational waves from core-collapse supernovae almost routine.
The memory of something far away
In response, Richardson points out that his team’s filtering method could also work at longer ranges – up to approximately 10 million light years, encompassing our own Local Group of galaxies and several others – in certain circumstances. If a massive star is spinning very quickly, or it has an exceptionally strong magnetic field, its eventual supernova explosion will be highly collimated and almost jet-like, boosting the amplitude of the memory effect. “If the amplitude is significantly larger, then the detection distance is also significantly larger,” he says.
Whatever technologies are involved, both groups agree that detecting gravitational-wave memory is important. It might, for example, tell us whether a supernova has left behind a neutron star or a black hole, which would be valuable because the reasons one forms and not the other remain a source of debate among astrophysicists.
“By complementing other multi-messenger observations in the electromagnetic spectrum and neutrinos, gravitational-wave memory detection would provide unparalleled insights into the complex interplay of forces in core-collapse supernovae,” Gill says.
Richardson agrees that a detection would be huge and hopes that his work and that of others “motivates new investigations into the low-frequency region of gravitational-wave astronomy”.
Several years ago I was sitting at the back of a classroom supporting a newly qualified science teacher. The lesson was going well, a pretty standard class on Hooke’s law, when a student leaned over to me and asked “Why are we doing this? What’s the point?”.
Having been a teacher myself, I had been asked this question many times before. I suspect that I went for the knee-jerk “it’s useful if you want to be an engineer” response, or something similar. This isn’t a very satisfying answer, but I never really had the time to formulate a real justification for studying Hooke’s law, or physics in general for that matter.
Who is the physics curriculum designed for? Should it be designed for the small number of students who will pursue the subject, or subjects allied to it, at the post-16 and post-18 level? Or should we be reflecting on the needs of the overwhelming majority who will never use most of the curriculum content again? Only about 10% of students pursue physics or physics-rich subjects post-16 in England, and at degree level, only around 4000 students graduate with physics degrees in the UK each year.
One argument often levelled at me is that learning this is “useful”, to which I retort – in a similar vein to the student from the first paragraph – “In what way?” In the 40 years or so since first learning Hooke’s law, I can’t remember ever explicitly using it in my everyday life, despite being a physicist. Whenever I give a talk on this subject, someone often pipes up with a tenuous example, but I suspect they are in the minority. An audience member once said they consider the elastic behaviour of wire when hanging pictures, but I suspect that many thousands of pictures have been successfully hung with no recourse to F = –kx.
Hooke’s law is incredibly important in engineering but, again, most students will not become engineers or rely on a knowledge of the properties of springs, unless they get themselves a job in a mattress factory.
From a personal perspective, Hooke’s law fascinates me. I find it remarkable that we can see the macroscopic properties of materials being governed by microscopic interactions and that this can be expressed in a simple linear form. There is no utilitarianism in this, simply awe, wonder and aesthetics. I would always share this “joy of physics” with my students, and it was incredibly rewarding when this was reciprocated. But for many, if not most, my personal perspective was largely irrelevant, and they knew that the curriculum content would not directly support them in their future careers.
At this point, I should declare my position – I don’t think we should take Hooke’s law, or physics, off the curriculum, but my reason is not the one often given to students.
A series of lessons on Hooke’s law is likely to include: experimental design; setting up and using equipment; collecting numerical data using a range of devices; recording and presenting data, including graphs; interpreting data; modelling data and testing theories; devising evidence-based explanations; communicating ideas; evaluating procedures; critically appraising data; collaborating with others; and working safely.
Science education must be about preparing young people to be active and critical members of a democracy, equipped with the skills and confidence to engage with complex arguments that will shape their lives. For most students, this is the most valuable lesson they will take away from Hooke’s law. We should encourage students to find our subject fascinating and relevant, and in doing so make them receptive to the acquisition of scientific knowledge throughout their lives.
At a time when pressures on the education system are greater than ever, we must be able to articulate and justify our position within a crowded curriculum. I don’t believe that students should simply accept that they should learn something because it is on a specification. But they do deserve a coherent reason that relates to their lives and their careers. As science educators, we owe it to our students to have an authentic justification for what we are asking them to do. As physicists, even those who don’t have to field tricky questions from bored teenagers, I think it’s worthwhile for all of us to ask ourselves how we would answer the question “What is the point of this?”.
The New Journal of Physics (NJP) has long been a flagship journal for IOP Publishing. The journal published its first volume in 1998 and was an early pioneer of open-access publishing. Co-owned by the Institute of Physics, which publishes Physics World, and the Deutsche Physikalische Gesellschaft (DPG), after some 25 years the journal is now seeking to establish itself further as a journal that represents the entire range of physics disciplines.
NJP publishes articles in pure, applied, theoretical and experimental research, as well as interdisciplinary topics. Research areas include optics, condensed-matter physics, quantum science and statistical physics, and the journal publishes a range of article types such as papers, topical reviews, fast-track communications, perspectives and special issues.
While NJP has been seen as a leading journal for quantum information, optics and condensed-matter physics, the journal is currently undergoing a significant transformation to broaden its scope to attract a wider array of physics disciplines. This shift aims to enhance the journal’s relevance, foster a broader audience and maintain NJP’s position as a leading publication in the global scientific community.
While quantum physics in general, and quantum optics and quantum information in particular, will remain crucial areas for the journal, researchers in other fields such as gravitational-wave research, condensed- and soft-matter physics, polymer physics, theoretical chemistry, statistical and mathematical physics are being encouraged to submit their articles to the journal. “It’s a reminder to the community that NJP is a journal for all kinds of physics and not just a select few,” says quantum physicist Andreas Buchleitner from the Albert-Ludwigs-Universität Freiburg who is NJP’s editor-in-chief.
Historically, NJP has had a strong focus on theoretical physics, particularly in quantum information. Another significant aspect of NJP’s new strategy is therefore the inclusion of more experimental research. Attracting high-quality experimental papers will balance the journal’s content, enhance its reputation as a comprehensive physics journal and allow it to compete with other leading titles. Part of this shift will also involve attracting a reliable and loyal group of authors who regularly publish their best work in NJP.
A broader scope
To aid this move, NJP has recently grown its editorial board to add expertise in subjects such as gravitational-wave physics. This diversity of capabilities is crucial to evaluate submissions from different areas of physics and maintain high standards of quality during the peer-review process. That point is particularly relevant for Buchleitner, who sees the expansion of the editorial board as helping to improve the journal’s handling of submissions to ensure that authors feel their work is being evaluated fairly and by knowledgeable and engaged individuals. “Increasing the editorial board was quite an important concept in terms of helping the journal expand,” adds Buchleitner. “What is important to me is that scientists who contact the journal feel that they are talking to people and not to artificial intelligence substitutes.”
While citation metrics such as impact factors are often debated in terms of their scientific value, they remain essential for a journal’s visibility and reputation. In the competitive landscape of scientific publishing, they can set a journal apart from its competitors. With that in mind, NJP, which has an impact factor of 2.8, is also focusing on improving its citation indices to compete with top-tier journals.
Yet that does not just mean the impact factor: other metrics, which reflect the efficient and constructive handling of submissions, also encourage researchers to publish with the journal again. To set NJP apart from competitors, the time to first decision before peer review, for example, is only six days, while the journal has a median of 50 days to first decision after peer review.
Society benefits
While NJP pioneered the open-access model of scientific publishing, that position is no longer unique given the huge increase in open-access journals over the past decade. Yet the publishing model continues to be an important aspect of the journal’s identity to ensure that the research it publishes is freely available to all. Another crucial factor to attract authors and set it apart from commercial entities is that NJP is published by learned societies – the IOP and DPG.
NJP has often been thought of as a “European journal”. Indeed, NJP’s role is significant in the context of the UK leaving the European Union, in that it serves as a bridge between the UK and mainland European research communities. “That’s one of the reasons why I like the journal,” says Buchleitner, who adds that with a wider scope NJP will not only publish the best research from around the world but also strengthen its identity as a leading European journal.
Assessing lung function is crucial for diagnosing and monitoring respiratory diseases. The most common way to do this is using spirometry, which measures the amount and speed of air that a person can inhale and exhale. Spirometry, however, is insensitive to early disease and cannot detect heterogeneity in lung function. Techniques such as chest radiography or CT provide more detailed spatial information, but are not ideal for long-term monitoring as they expose patients to ionizing radiation.
Now, a team headed up at Newcastle University in the UK has demonstrated a new lung MR imaging technique that provides quantitative and spatially localized assessment of pulmonary ventilation. The researchers also show that the MRI scans – recorded after the patient inhales a safe gas mixture – can track improvements in lung function following medication.
Although conventional MRI of the lungs is challenging, lung function can be assessed by imaging the distribution of an inhaled gas, most commonly hyperpolarized 3He or 129Xe. These gases can be expensive, however, and the magnetic preparation step requires extra equipment and manpower. Instead, project leader Pete Thelwall and colleagues are investigating 19F-MRI of inhaled perfluoropropane – an inert gas that does not require hyperpolarization to be visible in an MRI scan.
“Conventional MRI detects magnetic signals from hydrogen nuclei in water to generate images of water distribution,” Thelwall explains. “Perfluoropropane is interesting to us as we can also get an MRI signal from fluorine nuclei and visualize the distribution of inhaled perfluoropropane. We assess lung ventilation by seeing how well this MRI-visible gas moves into different parts of the lungs when it is inhaled.”
Testing the new technique
The researchers analysed 19F-MRI data from 38 healthy participants, 35 with asthma and 21 with chronic obstructive pulmonary disease (COPD), reporting their findings in Radiology. For the 19F-MRI scans, participants were asked to inhale a 79%/21% mixture of perfluoropropane and oxygen and then hold their breath. All subjects also underwent spirometry and an anatomical 1H-MRI scan, and those with respiratory disease withheld their regular bronchodilator medication before the MRI exams.
After co-registering each subject’s anatomical (1H) and ventilation (19F) images, the researchers used the perfluoropropane distribution in the images to differentiate ventilated and non-ventilated lung regions. They then calculated the ratio of non-ventilated lung to total lung volume, a measure of ventilation dysfunction known as the ventilation defect percentage (VDP).
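In computational terms, the VDP is simply the fraction of lung voxels that show no ventilation signal. Below is a minimal sketch assuming you already have a binary lung mask (from the 1H image) and a binary ventilated mask (from the 19F image); the array names and synthetic masks are hypothetical stand-ins, not the team’s data.

```python
# Minimal VDP calculation from two co-registered binary masks (hypothetical arrays).
import numpy as np

rng = np.random.default_rng(7)
lung_mask = np.ones((64, 64, 32), dtype=bool)     # stand-in for the 1H lung segmentation
ventilated = rng.random(lung_mask.shape) > 0.02   # stand-in for the 19F gas distribution

non_ventilated = lung_mask & ~ventilated
vdp_percent = 100.0 * non_ventilated.sum() / lung_mask.sum()
print(f"VDP = {vdp_percent:.1f}%")                # ~2% for this synthetic example
```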
Healthy subjects had a mean VDP of 1.8%, reflecting an even distribution of inhaled gas throughout their lungs and well-preserved lung function. In comparison, the patient groups showed elevated mean VDP values – 8.3% and 27.2% for those with asthma and COPD, respectively – reflecting substantial ventilation heterogeneity.
In participants with respiratory disease, the team also performed 19F-MRI after treatment with salbutamol, a common inhaler. They found that the MR images revealed changes in regional ventilation in response to this bronchodilator therapy.
Post-treatment images of patients with asthma showed an increase in lung regions containing perfluoropropane, reflecting the reversible nature of this disease. Participants with COPD generally showed less obvious changes following treatment, as expected for this less reversible disease. Bronchodilator therapy reduced the mean VDP by 33% in participants with asthma (from 8.3% to 5.6%) and by 14% in those with COPD (from 27.2% to 23.3%).
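Those percentages are relative (not absolute) reductions in VDP, as a quick check against the quoted values shows:

\[ \frac{8.3 - 5.6}{8.3} \approx 0.33\ (33\%), \qquad \frac{27.2 - 23.3}{27.2} \approx 0.14\ (14\%). \]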
The calculated VDP values were negatively associated with standard spirometry metrics. However, the team note that some participants with asthma exhibited normal spirometry but an elevated mean VDP (6.7%) compared with healthy subjects. This finding suggests that the VDP acquired by 19F-MRI of inhaled perfluoropropane is more sensitive to subclinical disease than conventional spirometry.
Supporting lung transplants
In a separate study reported in JHLT Open, Thelwall and colleagues used dynamic 19F-MRI of inhaled perfluoropropane to visualize the function of transplanted lungs. Approximately half of lung transplant recipients experience organ rejection, known as chronic lung allograft dysfunction (CLAD), within five years of transplantation.
Transplant recipients are monitored frequently using pulmonary function tests and chest X-rays. But by the time CLAD is diagnosed, irreversible lung damage may already have occurred. The team propose that 19F-MRI may find subtle early changes in lung function that could help detect rejection earlier.
The researchers studied 10 lung transplant recipients, six of whom were experiencing chronic rejection. They used a wash-in and washout technique, acquiring breath-hold 19F-MR images while the patient inhaled a perfluoropropane/oxygen mixture (wash-in acquisitions), followed by scans during breathing of room air (washout acquisitions).
The MR images revealed quantifiable differences in regional ventilation in participants with and without CLAD. In those with chronic rejection, scans showed poorer air movement to the edges of the lungs, likely due to damage to the small airways, a typical feature of CLAD. By detecting such changes in lung function, before signs of damage are seen in other tests, it’s possible that this imaging method might help inform patient treatment decisions to better protect the transplanted lungs from further damage.
The studies fall squarely within the field of clinical research, requiring non-standard MRI hardware to detect fluorine nuclei. But Thelwall sees a pathway towards introducing 19F-MRI in hospitals, noting that scanner manufacturers have brought systems to market that can detect nuclei other than 1H in routine diagnostic scans. Removing the requirement for hyperpolarization, combined with the lower relative cost of perfluoropropane inhalation (approximately £50 per study participant), could also help scale this method for use in the clinic.
The team is currently working on a study looking at how MRI assessment of lung function could help reduce the side effects associated with radiotherapy for lung cancer. The idea is to design a radiotherapy plan that minimizes dose to lung regions with good function, whilst maintaining effective cancer treatment.
“We are also looking at how better lung function measurements might help the development of new treatments for lung disease, by being able to see the effects of new treatments earlier and more accurately than current lung function measurements used in clinical trials,” Thelwall tells Physics World.
Exocomets are boulders of rock and ice, at least 1 km in size, that exist outside our solar system. Exocometary belts – regions containing many such icy bodies – are found in at least 20% of planetary systems. When the exocomets within these belts smash together they can also produce small pebbles.
The belts in the latest study orbit 74 nearby stars covering a range of ages – from those that have just formed to those in more mature systems like our own solar system. The belts typically lie tens to hundreds of astronomical units from their central star (one astronomical unit is the distance from the Earth to the Sun).
At that distance, the temperature is between -250 and -150 degrees Celsius, meaning that most compounds on the exocomets are frozen as ice.
While most exocometary belts in the latest study are disks, some are narrow rings. Some even have multiple, eccentric rings or disks, which provides evidence that as-yet-undetected planets are present and that their gravity affects the distribution of the pebbles in these systems.
According to Sebastián Marino from the University of Exeter, the images reveal “a remarkable diversity in the structure” of the belts.
Indeed, Luca Matrà from Trinity College Dublin says that the “power” of such a large survey is to reveal population-wide properties and trends. “[The survey] confirmed that the number of pebbles decreases for older planetary systems as belts run out of larger exocomets smashing together, but showed for the first time that this decrease in pebbles is faster if the belt is closer to the central star,” Matrà adds. “It also indirectly showed – through the belts’ vertical thickness – that unobservable objects as large as 140 km to Moon-size are likely present in these belts.”
The darkest, clearest skies anywhere in the world could suffer “irreparable damage” from a proposed industrial megaproject. That is the warning from the European Southern Observatory (ESO) in response to plans by AES Andes, a subsidiary of the US power company AES Corporation, to develop a green-hydrogen project just a few kilometres from ESO’s flagship Paranal Observatory in Chile’s Atacama Desert.
The Atacama Desert is considered one of the most important astronomical research sites in the world due to its stable atmosphere and lack of light pollution. Sitting 2635 m above sea level, on Cerro Paranal, the Paranal Observatory is home to key astronomical instruments including the Very Large Telescope. The Extremely Large Telescope (ELT) – the largest visible and infrared light telescope in the world – is also being constructed at the observatory on Cerro Armazones with first light expected in 2028.
AES Chile submitted an Environmental Impact Assessment in Chile for an industrial-scale green hydrogen project at the end of December. The complex is expected to cover more than 3000 hectares – similar in size to 1200 football pitches. According to AES, the project is in the early stages of development, but could include green hydrogen and ammonia production plants, solar and wind farms as well as battery storage facilities.
ESO is calling for the development to be relocated to preserve “one of Earth’s last truly pristine dark skies” and “safeguard the future” of astronomy. “The proximity of the AES Andes industrial megaproject to Paranal poses a critical risk to the most pristine night skies on the planet,” says ESO director general Xavier Barcons. “Dust emissions during construction, increased atmospheric turbulence, and especially light pollution will irreparably impact the capabilities for astronomical observation.”
In a statement sent to Physics World, an AES spokesperson says they “understand there are concerns raised by ESO regarding the development of renewable energy projects in the area”. The spokesperson adds that the project would be in an area “designated for renewable energy development”. They also claim that the company is “dedicated to complying with all regulatory guidelines and rules” and “supporting local economic development while maintaining the highest environmental and safety standards”.
According to the statement, the proposal “incorporates the highest standards in lighting” to comply with Chilean regulatory requirements designed “to prevent light pollution, and protect the astronomical quality of the night skies”.
Yet Romano Corradi, director of the Gran Telescopio Canarias, which is located at the Roque de los Muchachos Observatory, La Palma, Spain, notes that it is “obvious” that the light pollution from such a large complex will negatively affect observations. “There are not many places left in the world with the dark and other climatic conditions necessary to do cutting-edge science in the field of observational astrophysics,” adds Corradi. “Light pollution is a global effect and it is therefore essential to protect sites as important as Paranal.”
Biomedical microrobots could revolutionize future cancer treatments, reliably delivering targeted doses of toxic cancer-fighting drugs to destroy malignant tumours while sparing healthy bodily tissues. Development of such drug-delivering microrobots is at the forefront of biomedical engineering research. However, there are many challenges to overcome before this minimally invasive technology moves from research lab to clinical use.
Microrobots must be capable of rapid, steady and reliable propulsion through various biological materials, while generating enhanced image contrast to enable visualization through thick body tissue. They require an accurate guidance system to precisely target diseased tissue. They also need to support sizable payloads of drugs, maintain their structure long enough to release this cargo, and then efficiently biodegrade – all without causing any harm to the body.
Aiming to meet this tall order, researchers at the California Institute of Technology (Caltech) and the University of Southern California have designed a hydrogel-based, image-guided, bioresorbable acoustic microrobot (BAM) with these characteristics and capabilities. Reporting their findings in Science Robotics, they demonstrated that the BAMs could successfully deliver drugs that decreased the size of bladder tumours in mice.
Microrobot design
The team, led by Caltech’s Wei Gao, fabricated the hydrogel-based BAMs using high-resolution two-photon polymerization. The microrobots are hollow spheres with an outer diameter of 30 µm and an 18 µm-diameter internal cavity to trap a tiny air bubble inside.
The BAMs have a hydrophobic inner surface to prolong microbubble retention within biofluids and a hydrophilic outer layer that prevents microrobot clustering and promotes degradation. Magnetic nanoparticles and therapeutic agents integrated into the hydrogel matrix enable wireless magnetic steering and drug delivery, respectively.
The entrapped microbubbles are key as they provide propulsion for the BAMs. When stimulated by focused ultrasound (FUS), the bubbles oscillate at their resonant frequencies. This vibration creates microstreaming vortices around the BAM, generating a propulsive force in the opposite direction of the flow. The microbubbles inside the BAMs also act as ultrasound contrast agents, enabling real-time, deep-tissue visualization.
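The resonance of a trapped microbubble can be estimated with the classical Minnaert formula for a free gas bubble in a liquid. This is a textbook idealization rather than the authors’ model (it ignores the stiffening effect of the hydrogel shell), but it shows why bubbles of this size resonate at hundreds of kilohertz, the regime of the focused ultrasound used here.

```python
import math

def minnaert_frequency_hz(radius_m, gamma=1.4, p0_pa=101_325.0, rho_kg_m3=1000.0):
    """Resonance frequency of a free gas bubble in a liquid (Minnaert model)."""
    return math.sqrt(3.0 * gamma * p0_pa / rho_kg_m3) / (2.0 * math.pi * radius_m)

# The BAM cavity is 18 µm across, i.e. a bubble radius of roughly 9 µm
f0 = minnaert_frequency_hz(9e-6)
print(f"estimated resonance ≈ {f0 / 1e3:.0f} kHz")  # ~365 kHz, the same order as the 480 kHz FUS drive
```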
The researchers designed the microrobots with two cylinder-like openings, which they found achieves faster propulsion speeds than single- or triple-opening spheres. They attribute this to propulsive forces that run parallel to the sphere’s boundary improving both speed and stability of movement when activated by FUS.
They also discovered that placing the microbubble cavity off-centre within the sphere generated propulsion speeds more than twice those achieved by BAMs with a symmetric design.
To perform simultaneous imaging of BAM location and acoustic propulsion within soft tissue, the team employed a dual-probe design. An ultrasound imaging probe enabled real-time imaging of the bubbles, while the acoustic field generated by a FUS probe (at an excitation frequency of 480 kHz and an applied acoustic pressure of 626 kPa peak-to-peak) provided effective propulsion.
In vitro and in vivo testing
The team performed real-time imaging of the propulsion of BAMs in vitro, using an agarose chamber to simulate an artificial bladder. When exposed to an ultrasound field generated by the FUS probe, the BAMs demonstrated highly efficient motion, as observed in the ultrasound imaging scans. The propulsion direction of BAMs could be precisely controlled by an external magnetic field.
The researchers also conducted in vivo testing, using laboratory mice with bladder cancer and the anti-cancer drug 5-fluorouracil (5-FU). They treated groups of mice with either phosphate buffered saline, free drug, passive BAMs or active (acoustically actuated and magnetically guided) BAMs, at three day intervals over four sessions. They then monitored the tumour progression for 21 days, using bioluminescence signals emitted by cancer cells.
The active BAM group exhibited a 93% decrease in bioluminescence by the 14th day, indicating large tumour shrinkage. Histological examination of excised bladders revealed that mice receiving this treatment had considerably reduced tumour sizes compared with the other groups.
“Embedding the anticancer drug 5-FU into the hydrogel matrix of BAMs substantially improved the therapeutic efficiency compared with 5-FU alone,” the authors write. “These BAMs used a controlled-release mechanism that prolonged the bioavailability of the loaded drug, leading to sustained therapeutic activity and better outcomes.”
Mice treated with active BAMs experienced no weight changes, and no adverse effects to the heart, liver, spleen, lung or kidney compared with the control group. The researchers also evaluated in vivo degradability by measuring BAM bioreabsorption rates following subcutaneous implantation into both flanks of a mouse. Within six weeks, they observed complete breakdown of the microrobots.
Gao tells Physics World that the team has subsequently expanded the scope of its work to optimize the design and performance of the microbubble robots for broader biomedical applications.
“We are also investigating the use of advanced surface engineering techniques to further enhance targeting efficiency and drug loading capacity,” he says. “Planned follow-up studies include preclinical trials to evaluate the therapeutic potential of these robots in other tumour models, as well as exploring their application in non-cancerous diseases requiring precise drug delivery and tissue penetration.”
So-called “forever chemicals”, or per- and polyfluoroalkyl substances (PFAS), are widely used in consumer, commercial and industrial products, and have subsequently made their way into humans, animals, water, air and soil. Despite this ubiquity, there are still many unknowns regarding the potential human health and environmental risks that PFAS pose.
Join us for an in-depth exploration of PFAS with four leading experts who will shed light on the scientific advances and future challenges in this rapidly evolving research area.
Our panel will guide you through a discussion of PFAS classification and sources, the journey of PFAS through ecosystems, strategies for PFAS risk mitigation and remediation, and advances in the latest biotechnological innovations to address their effects.
Sponsored by Sustainability Science and Technology, a new journal from IOP Publishing that provides a platform for researchers, policymakers, and industry professionals to publish their research on current and emerging sustainability challenges and solutions.
Jonas Baltrusaitis, inaugural editor-in-chief of Sustainability Science and Technology, has co-authored more than 300 research publications on innovative materials. His work includes the recovery of nutrients from waste, their formulation and delivery, and renewable energy-assisted catalysis for the synthesis and transformation of energy carriers and commodity chemicals.
Linda S Lee is a distinguished professor at Purdue University with joint appointments in the Colleges of Agriculture (COA) and Engineering, program head of the Ecological Sciences & Engineering Interdisciplinary Graduate Program and COA assistant dean of graduate education and research. She joined Purdue in 1993 with degrees in chemistry (BS), environmental engineering (MS) and soil chemistry/contaminant hydrology (PhD) from the University of Florida. Her research includes chemical fate, analytical tools, waste reuse, bioaccumulation, and contaminant remediation and management strategies with PFAS challenges driving much of her research for the last two decades. Her research is supported by a diverse funding portfolio. She has published more than 150 papers with most in top-tier environmental journals.
Clinton Williams is the research leader of Plant and Irrigation and Water Quality Research units at US Arid Land Agricultural Research Center. He has been actively engaged in environmental research focusing on water quality and quantity for more than 20 years. Clinton looks for ways to increase water supplies through the safe use of reclaimed waters. His current research is related to the environmental and human health impacts of biologically active contaminants (e.g. PFAS, pharmaceuticals, hormones and trace organics) found in reclaimed municipal wastewater and the associated impacts on soil, biota, and natural waters in contact with wastewater. His research is also looking for ways to characterize the environmental loading patterns of these compounds while finding low-cost treatment alternatives to reduce their environmental concentration using byproducts capable of removing the compounds from water supplies.
Sara Lupton has been a research chemist with the Food Animal Metabolism Research Unit at the Edward T Schafer Agricultural Research Center in Fargo, ND within the USDA-Agricultural Research Service since 2010. Sara’s background is in environmental analytical chemistry. She is the ARS lead scientist for the USDA’s Dioxin Survey and other research includes the fate of animal drugs and environmental contaminants in food animals and investigation of environmental contaminant sources (feed, water, housing, etc.) that contribute to chemical residue levels in food animals. Sara has conducted research on bioavailability, accumulation, distribution, excretion, and remediation of PFAS compounds in food animals for more than 10 years.
Jude Maul received a master’s degree in plant biochemistry from the University of Kentucky and a PhD in horticulture and biogeochemistry from Cornell University in 2008. Since then he has been with the USDA-ARS as a research ecologist in the Sustainable Agriculture System Laboratory. Jude’s research focuses on molecular ecology at the plant/soil/water interface in the context of plant health, nutrient acquisition and productivity. Taking a systems approach to agroecosystem research, Jude leads the USDA-ARS-LTAR Soils Working Group, which is creating a national soils data repository; this work complements his research results, which contribute to national soil-health management recommendations.
About this journal
Sustainability Science and Technology is an interdisciplinary, open access journal dedicated to advances in science, technology, and engineering that can contribute to a more sustainable planet. It focuses on breakthroughs in all science and engineering disciplines that address one or more of the three sustainability pillars: environmental, social and/or economic. Editor-in-chief: Jonas Baltrusaitis, Lehigh University, USA
Striking evidence that string theory could be the sole viable “theory of everything” has emerged in a new theoretical study of particle scattering that was done by a trio of physicists in the US. By unifying all fundamental forces of nature, including gravity, string theory could provide the long-sought quantum description of gravity that has eluded scientists for decades.
The research was done by Caltech’s Clifford Cheung and Aaron Hillman along with Grant Remmen at New York University. They have delved into the intricate mathematics of scattering amplitudes, which are quantities that encapsulate the probabilities of particles interacting when they collide.
Through a novel application of the bootstrap approach, the trio demonstrated that imposing general principles of quantum mechanics uniquely determines the scattering amplitudes of particles at the smallest scales. Remarkably, the results match the string scattering amplitudes derived in earlier works. This suggests that string theory may indeed be an inevitable description of the universe, even as direct experimental verification remains out of reach.
“A bootstrap is a mathematical construction in which insight into the physical properties of a system can be obtained without having to know its underlying fundamental dynamics,” explains Remmen. “Instead, the bootstrap uses properties like symmetries or other mathematical criteria to construct the physics from the bottom up, ‘effectively pulling itself up by its bootstraps’. In our study, we bootstrapped scattering amplitudes, which describe the quantum probabilities for the interactions of particles or strings.”
Why strings?
String theory posits that the elementary building blocks of the universe are not point-like particles but instead tiny, vibrating strings. The different vibrational modes of these strings give rise to the various particles observed in nature, such as electrons and quarks. This elegant framework resolves many of the mathematical inconsistencies that plague attempts to formulate a quantum description of gravity. Moreover, it unifies gravity with the other fundamental forces: electromagnetic, weak, and strong interactions.
However, a major hurdle remains. The characteristic size of these strings is estimated to be around 10−35 m, which is roughly 15 orders of magnitude smaller than the resolution of today’s particle accelerators, including the Large Hadron Collider. This makes experimental verification of string theory extraordinarily challenging, if not impossible, for the foreseeable future.
Faced with the experimental inaccessibility of strings, physicists have turned to theoretical methods like the bootstrap to test whether string theory aligns with fundamental principles. By focusing on the mathematical consistency of scattering amplitudes, the researchers imposed constraints on the amplitudes based on basic quantum mechanical requirements such as locality and unitarity.
“Locality means that forces take time to propagate: particles and fields in one place don’t instantaneously affect another location, since that would violate the rules of cause-and-effect,” says Remmen. “Unitarity is conservation of probability in quantum mechanics: the probability for all possible outcomes must always add up to 100%, and all probabilities are positive. This basic requirement also constrains scattering amplitudes in important ways.”
In addition to these principles, the team introduced further general conditions, such as the existence of an infinite spectrum of fundamental particles and specific high-energy behaviour of the amplitudes. These criteria have long been considered essential for any theory that incorporates quantum gravity.
Unique solution
Their result is a unique solution to the bootstrap equations, which turned out to be the Veneziano amplitude – a formula first written down in 1968 to describe hadron scattering and later understood to describe the scattering of open strings. This discovery strongly indicates that string theory meets the most essential criteria for a quantum theory of gravity. However, the definitive answer to whether string theory is truly the “theory of everything” must ultimately come from experimental evidence.
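For reference, the four-point Veneziano amplitude has a compact closed form in terms of Euler gamma functions; it is quoted here in its standard textbook form rather than as it appears in the trio’s paper:

$$A(s,t) = \frac{\Gamma\bigl(-\alpha(s)\bigr)\,\Gamma\bigl(-\alpha(t)\bigr)}{\Gamma\bigl(-\alpha(s)-\alpha(t)\bigr)}, \qquad \alpha(x) = \alpha(0) + \alpha' x,$$

where s and t are the Mandelstam invariants of the two-to-two scattering process, α′ sets the string length scale, and the poles of the gamma functions correspond to the infinite spectrum of particles mentioned above.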
Cheung explains, “Our work asks: what is the precise math problem whose solution is the scattering amplitude of strings? And is it the unique solution?” He adds, “This work can’t verify the validity of string theory, which like all questions about nature is a question for experiment to resolve. But it can help illuminate whether the hypothesis that the world is described by vibrating strings is actually logically equivalent to a smaller, perhaps more conservative set of bottom-up assumptions that define this math problem.”
The trio’s study opens up several avenues for further exploration. One immediate goal for the researchers is to generalize their analysis to more complex scenarios. For instance, the current work focuses on the scattering of two particles into two others. Future studies will aim to extend the bootstrap approach to processes involving multiple incoming and outgoing particles.
Another direction involves incorporating closed strings, which are loops that are distinct from the open strings analysed in this study. Closed strings are particularly important in string theory because they naturally describe gravitons, the hypothetical particles responsible for mediating gravity. While closed string amplitudes are more mathematically intricate, demonstrating that they too arise uniquely from the bootstrap equations would further bolster the case for string theory.
Heart failure is a serious condition that occurs when a damaged heart loses its ability to pump blood around the body. It affects as many as 100 million people worldwide, and it is a progressive disease: five years after diagnosis, half of patients with heart failure will have died.
The UK-based company Ceryx Medical has created a new bioelectronic device called Cysoni, which is designed to adjust the pace of the heart as a patient breathes in and out. This mimics a normal physiological process called respiratory sinus arrhythmia, which can be absent in people with heart failure. The company has just begun the first trial of Cysoni on human subjects.
This podcast features the biomedical engineer Stuart Plant and the physicist Ashok Chauhan, who are Ceryx Medical’s CEO and senior scientist respectively. In a wide-ranging conversation with Physics World’s Margaret Harris, they talk about how bioelectronics could be used to treat heart failure and other diseases. Chauhan and Plant also chat about the challenges and rewards of developing medical technologies within a small company.
Increased collaboration between different areas of materials research and development will be needed if the UK is to remain a leader in the field. That is according to the National Materials Innovation Strategy, which claims to be the first document aimed at boosting materials-based innovation in the UK. Failing to adopt a “clear, national strategy” for materials will hamper the UK’s ability to meet its net-zero and other sustainability goals, the strategy says.
Led by the Henry Royce Institute – the UK’s national institute for advanced materials – the strategy included the input of over 2000 experts in materials science, engineering, innovation, policy and industry. It says that some 52 000 people in the UK work in or contribute to the materials industry, adding about £4.4bn to the UK economy each year. Of the 2700 companies in materials innovation in the UK, 70% are registered outside of London and the South East, with 90% being small and medium-sized enterprises.
According to the 160-page strategy, materials innovation touches “almost every strategically important sector in the UK” and points to “six areas of opportunity” where materials can have an impact. They are: energy; healthcare; structural innovations; surface technologies; electronics, telecommunications and sensors; and consumer products, packaging and specialist polymers.
The strategy, which is the first phase of an effort to speed up materials development in the UK, calls for a more collaborative effort between different fields to help spur materials innovation that has “traditionally been siloed across sectors”. It claims that every materials-related job results in 12 additional jobs within “materials innovation business”, adding that “a commitment to materials innovation” by the UK could double the number of materials-specific roles by 2035.
“Advanced materials hold the key to finding and delivering solutions to some of the most pressing national and global challenges of today and directly contribute billions to our national economy,” says materials engineer David Knowles, who is chief executive of the Henry Royce Institute. “But to unlock the full value of materials we must break down traditional long-standing silos within the industry. This strategy has kickstarted that process.”
SPIE Photonics West, the world’s largest photonics technologies event, takes place in San Francisco, California, from 25 to 30 January. Showcasing cutting-edge research in lasers, biomedical optics, biophotonics, quantum technologies, optoelectronics and more, Photonics West features leaders in the field discussing the industry’s challenges and breakthroughs, and sharing their research and visions of the future.
As well as 100 technical conferences with over 5000 presentations, the event brings together several world-class exhibitions, kicking off on 25 January with the BiOS Expo, the world’s largest biomedical optics and biophotonics exhibition.
The main Photonics West Exhibition starts on 28 January. Hosting more than 1200 companies, the event highlights the latest developments in laser technologies, optoelectronics, photonic components, materials and devices, and system support. The newest and fastest growing expo, Quantum West, showcases photonics as an enabling technology for a quantum future. Finally, the co-located AR | VR | MR exhibition features the latest extended reality hardware and systems. Here are some of the innovative products on show at this year’s event.
HydraHarp 500: a new era in time-correlated single-photon counting
Photonics West sees PicoQuant introduce its newest generation of event timer and time-correlated single-photon counting (TCSPC) unit – the HydraHarp 500. Setting a new standard in speed, precision and flexibility, the TCSPC unit is freely scalable with up to 16 independent channels and a common sync channel, which can also serve as an additional detection channel if no sync is required.
At the core of the HydraHarp 500 is its outstanding timing precision and accuracy, enabling precise photon timing measurements at exceptionally high data rates, even in demanding applications.
In addition to the scalable channel configuration, the HydraHarp 500 offers flexible trigger options to support a wide range of detectors, from single-photon avalanche diodes to superconducting nanowire single-photon detectors. Seamless integration is ensured through versatile interfaces such as USB 3.0 or an external FPGA interface for data transfer, while White Rabbit synchronization allows precise cross-device coordination for distributed setups.
The HydraHarp 500 is engineered for high-throughput applications, making it ideal for rapid, large-volume data acquisition. It offers 16+1 fully independent channels for true simultaneous multi-channel data recording and efficient data transfer via USB or the dedicated FPGA interface. Additionally, the HydraHarp 500 boasts industry-leading, extremely low dead-time per channel and no dead-time across channels, ensuring comprehensive datasets for precise statistical analysis.
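For readers less familiar with the method, TCSPC amounts to time-tagging each detected photon relative to a sync pulse and histogramming the delays. The sketch below is a generic NumPy illustration of that bookkeeping with simulated data; it is not an example of PicoQuant’s software interface, and the lifetime and count numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Simulated TCSPC run: photon delays (relative to the laser sync pulse) drawn
# from a single-exponential fluorescence decay, plus a uniform background.
true_lifetime_ns = 3.5
delays_ns = np.concatenate([
    rng.exponential(true_lifetime_ns, size=50_000),  # fluorescence photons
    rng.uniform(0.0, 25.0, size=2_000),              # dark counts / stray light
])

# The instrument's core job: bin every photon's delay into a start-stop histogram
bin_width_ns = 0.1
bins = np.arange(0.0, 25.0 + bin_width_ns, bin_width_ns)
counts, _ = np.histogram(delays_ns, bins=bins)

# Crude lifetime estimate from the early part of the decay, where the
# exponential signal dwarfs the flat background
centres = 0.5 * (bins[:-1] + bins[1:])
fit_region = (centres > 0.5) & (centres < 10.0)
slope, _ = np.polyfit(centres[fit_region], np.log(counts[fit_region]), 1)
print(f"fitted lifetime ≈ {-1.0 / slope:.2f} ns")  # close to the 3.5 ns input
```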
Step into the future of photonics and quantum research with the HydraHarp 500. Whether it’s achieving precise photon correlation measurements, ensuring reproducible results or integrating advanced setups, the HydraHarp 500 redefines what’s possible – offering precision, flexibility and efficiency combined with reliability and seamless integration to achieve breakthrough results.
Meet PicoQuant at BiOS booth #8511 and Photonics West booth #3511.
SmarAct: shaping the future of precision
SmarAct is set to make waves at the upcoming SPIE Photonics West, the world’s leading exhibition for photonics, biomedical optics and laser technologies, and the parallel BiOS trade fair. SmarAct will showcase a portfolio of cutting-edge solutions designed to redefine precision and performance across a wide range of applications.
At Photonics West, SmarAct will unveil its latest innovations, as well as its well-established and appreciated iris diaphragms and optomechanical systems. All of the highlighted technologies exemplify SmarAct’s commitment to enabling superior control in optical setups, a critical requirement for research and industrial environments.
Attendees can also experience the unparalleled capabilities of electromagnetic positioners and SmarPod systems. With their hexapod-like design, these systems offer nanometre-scale precision and flexibility, making them indispensable tools for complex alignment tasks in photonics and beyond.
One major highlight is SmarAct’s debut of a 3D pick-and-place system designed for handling optical fibres. This state-of-the-art solution integrates precision and flexibility, offering a glimpse into the future of fibre alignment and assembly. Complementing this is a sophisticated gantry system for microassembly of optical components. Designed to handle large travel ranges with remarkable accuracy, this system meets the growing demand for precision in the assembly of intricate optical technologies. It combines the best of SmarAct’s drive technologies, such as fast (up to 1 m/s) and durable electromagnetic positioners and scanner stages based on piezo-driven mechanical flexures with maximum scanning speed and minimum scanning error.
Simultaneously, at the BiOS trade fair SmarAct will spotlight its new electromagnetic microscopy stage, a breakthrough specifically tailored for life sciences applications. This advanced stage delivers exceptional stability and adaptability, enabling researchers to push the boundaries of imaging and experimental precision. This innovation underscores SmarAct’s dedication to addressing the unique challenges faced by the biomedical and life sciences sectors, as well as bioprinting and tissue engineering companies.
Throughout the event, SmarAct’s experts will demonstrate these solutions in action, offering visitors an interactive and hands-on understanding of how these technologies can meet their specific needs. Visit SmarAct’s booths to engage with experts and discover how SmarAct solutions can empower your projects.
Whether you’re advancing research in semiconductors, developing next-generation photonic devices or pioneering breakthroughs in life sciences, SmarAct’s solutions are tailored to help you achieve your goals with unmatched precision and reliability.
Precision positioning systems enable diverse applications
For 25 years Mad City Labs has provided precision instrumentation for research and industry – including nanopositioning systems, micropositioners, microscope stages and platforms, single-molecule microscopes, atomic-force microscopes (AFMs) and customized solutions.
The company’s newest micropositioning system – the MMP-UHV50 – is a modular, linear micropositioner designed for ultrahigh-vacuum (UHV) environments. Constructed entirely from UHV-compatible materials and carefully designed to eliminate sources of virtual leaks, the MMP-UHV50 offers 50 mm travel range with 190 nm step size and a maximum vertical payload of 2 kg.
Uniquely, the MMP-UHV50 incorporates a zero-power feature when not in motion, to minimize heating and drift. Safety features include limit switches and overheat protection – critical features when operating in vacuum environments. The system includes the Micro-Drive-UHV digital electronic controller, supplied with LabVIEW-based software and compatible with user-written software via the supplied DLL file (for example, Python, Matlab or C++).
Other products from Mad City Labs include piezo nanopositioners featuring the company’s proprietary PicoQ sensors, which provide ultralow noise and excellent stability to yield sub-nanometre resolution. These high-performance sensors enable motion control down to the single picometre level.
For scanning probe microscopy, Mad City Labs’ nanopositioning systems provide true decoupled motion with virtually undetectable out-of-plane movement, while their precision and stability yield high positioning performance and control. The company offers both an optical deflection AFM – the MadAFM, a multimodal sample-scanning AFM with a compact, tabletop design intended for simple installation – and resonant probe AFM models.
The resonant probe products include the company’s AFM controllers, MadPLL and QS-PLL, which enable users to build their own flexibly configured AFMs using Mad City Labs’ micro- and nanopositioners. All AFM instruments are ideal for material characterization, but the resonant probe AFMs are uniquely suitable for quantum sensing and nano-magnetometry applications.
Mad City Labs also offers standalone micropositioning products, including optical microscope stages, compact positioners for photonics and the Mad-Deck XYZ stage platform, all of which employ proprietary intelligent control to optimize stability and precision. They are also compatible with the high-resolution nanopositioning systems, enabling motion control across micro-to-picometre length scales.
Finally, for high-end microscopy applications, the RM21 single-molecule microscope, featuring the unique MicroMirror TIRF system, offers multi-colour total internal-reflection fluorescence microscopy with an excellent signal-to-noise ratio and efficient data collection, along with an array of options to support multiple single-molecule techniques.
Our product portfolio, coupled with our expertise in custom design and manufacturing, ensures that we are able to provide solutions for nanoscale motion for diverse applications such as astronomy, photonics, metrology and quantum sensing.
Learn more at BiOS booth #8525 and Photonics West booth #3525.
What exactly is ice cream? For most of us, it’s a tasty frozen dessert, but to food scientists like Douglas Goff, it’s also a marvel of physics and chemistry. Ice cream is a complex multiphase material, containing emulsion, foam, crystals, solutes and solvent. Whether made in a domestic kitchen or on a commercial scale, ice cream requires a finely tuned ratio of ingredients and precision control during mixing, churning and freezing.
Goff is a researcher in food science at the University of Guelph in Canada and an expert in the science of ice cream. In addition to his research studying, among other things, structure and ingredient functionality in ice cream, Goff is also the instructor on the University of Guelph’s annual ice-cream course, which, having been taught since 1914, is the longest-running at the university.
In a conversation with Physics World’s Hamish Johnston, Goff explains the science of ice cream, why it’s so hard to make vegan ice cream and how his team performs electron microscopy experiments without their samples melting.
How would you describe the material properties of ice cream to a physicist?
Ice cream is an incredibly complex multi-phase system. It starts as an emulsion, where fat droplets are dispersed in a sugary water-based solution. Then we whip the emulsion to incorporate an air phase into it – this is called foaming (see “Phases in ice cream”). In a frozen tub of ice cream, about half of the volume is air. That air is present in the form of tiny bubbles that are distributed throughout the product.
Then we partially freeze the aqueous phase, turning at least half of the water into microscopically small ice crystals. The remaining unfrozen phase is what makes the ice cream soft, scoopable and chewable. It remains unfrozen because of all the sugar that’s dissolved in it, which depresses the freezing point.
So you end up with fat droplets in the form of an emulsion, air bubbles in the form of a foam, a partially crystalline solvent in the form of ice crystals, and a concentrated sugar solution.
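That freezing-point depression can be made concrete with the ideal dilute-solution relation ΔT_f = K_f·b, where K_f ≈ 1.86 K·kg/mol for water and b is the molality of dissolved solute. The sketch below is an idealized, sucrose-only estimate rather than a figure from Goff; a real mix also contains lactose, milk salts and other solutes, and the depression grows as freezing concentrates the remaining liquid.

```python
# Idealized freezing-point depression for a sucrose-in-water solution
K_F_WATER = 1.86      # K·kg/mol, cryoscopic constant of water
M_SUCROSE = 342.3     # g/mol

def freezing_point_c(grams_sucrose, grams_water):
    molality = (grams_sucrose / M_SUCROSE) / (grams_water / 1000.0)
    return -K_F_WATER * molality

# 25 g of sucrose per 100 g of water freezes a little below -1 °C; as ice forms,
# the leftover liquid becomes more concentrated and its freezing point keeps
# dropping, which is why part of the mix stays unfrozen, soft and scoopable.
print(f"{freezing_point_c(25, 100):.2f} °C")
```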
Phases in ice cream
Emulsion: Some liquids, such as oil and water, will not mix if a droplet of one is added to the other – they are said to be immiscible. If many droplets of one liquid can be stabilized in another without coalescing, the resulting mixture is called an emulsion (left image).
Foam: A foam, like an emulsion, consists of two phases where one is dispersed in the other. In the case of foam, many tiny gas bubbles are trapped in a liquid or solid (right image).
Glass: When a liquid is cooled below a certain temperature, it generally undergoes a first-order phase transition to a solid crystal. However, if a liquid can be cooled below its freezing point without crystallizing (supercooling) – for example, by cooling it very quickly – it may form a glass: an amorphous solid with a disordered, liquid-like structure but solid-like mechanical properties. The temperature at which the glass forms, marked by a rapid increase in the material’s viscosity, is called the glass transition temperature.
What are the length scales of the different phases in the ice cream?
We’ve done a lot of electron microscopy research studying this in my lab. In fact, our research was some of the very first that utilized electron microscopy techniques for the structure of ice cream. The fat droplets are about one micron in diameter and the air bubbles, depending on the equipment that’s used, would be about 20 to 30 microns in diameter. The ice crystals are in the 10 to 20 micron size range.
It really is a beautiful thing to look at under an electron microscope, depending on the technique that you use (see image).
What are the big differences between ice cream that’s made in a commercial setting versus a domestic kitchen?
The freezing and whipping happen at the same time whether it’s an ice cream maker in the kitchen or a commercial operation. The biggest difference between what you do in the kitchen and what they’re going to do in the factory is the structure of the ice cream. Homemade ice cream is fine for maybe a day or two, but it starts to get icy pretty quickly, whereas we want a shelf life of months to a year when ice cream is made commercially.
This is because of the way the ice phase evolves over time – a process called recrystallization. If ice cream warms up it starts to melt. When the temperature is lowered again, water is frozen back into the ice phase, but it doesn’t create new ice crystals, it just grows onto the existing ice crystals.
This means that if ice cream is subject to lots of temperature fluctuation during storage, it’s going to degrade and become icy much quicker than if it was stored at a constant temperature. The warmer the temperature, the faster the rate of recrystallization. Commercial freezing equipment will give you much smaller ice crystal size than homemade ice cream machines. Low and constant temperature storage is what everybody strives for, and so the lower the temperature and the more constant it is, and the smaller the ice crystals are to begin with, the longer your shelf life before changes start occurring.
There’s also another structural element that is important for the long-term storage of ice cream. When that unfrozen sugary solvent phase gets concentrated enough, it can undergo a glass transition (see “Phases in ice cream”). Glass is an amorphous solid, so if this happens, there will be no movement of water or solute within the system and it can remain unchanged for years. For ice cream, the glass transition temperature is around –28 to –32 °C, so if you want long-term storage, you have to get down below that glass transition temperature.
The third thing is the addition of stabilisers. Those are things like locust bean gum, guar gum or cellulose gum and there are some novel ones as well. What those do is increase the viscosity in the unfrozen phase. This slows down the rate of ice recrystallization because it slows down the diffusion of water and the growth of ice.
There are also some other novel agents that can prevent ice from recrystallizing into large crystals. One of these is propylene glycol monostearate, which adsorbs onto the surface of an ice crystal and prevents it from growing as the temperature fluctuates. This is also something we see in nature. Some insect, fish and plant species that live in cold environments have proteins that control the growth of ice in their blood and tissues. A lot of fish, for example, swim around with minute ice crystals in their body, but the proteins prevent the crystals from getting big enough to cause harm.
How does adding flavourings to ice cream change the manufacturing process?
When you think about ice cream around the world, there are hundreds of different flavours. The important question is whether the flavouring will impact the solution or emulsion.
For example, a chocolate chip is inert: it’s not going to interact at all with the rest of the matrix. Strawberries, on the other hand, really impact the system because of the high sugar content in the fruit preparation. We need to add sugar to the fruit to make sure it is softer than the ice cream itself – you don’t want to bite into ice cream and find a hard, frozen berry. The problem is that some of that sugar will diffuse into the unfrozen phase and lower its freezing point. This means that if you don’t do anything to the formulation, strawberry ice cream will be softer than something like vanilla because of the added sugar.
Another example would be alcohol-based flavours, anything from rum to Baileys Irish Cream or Frangelico, or even wine and beer. They’re very popular but the alcohol depresses the freezing point, so if you add enough to give you the flavour intensity that you want, your product won’t freeze. In that case, you might need to add less of the alcohol and a little bit more of a de-alcoholized flavouring.
You can try to make ice cream with just about any flavour, but you certainly have to look at what that flavouring is going to do to the structure and things like shelf life and so on.
Nowadays one can also buy vegan ice creams. How do the preparation and ingredients differ compared to dairy products?
A lot of it will be similar. We’re going to have an emulsified fat source, typically something like coconut oil or palm kernel oil, and then there’s the sugar, stabilisers and so on that you would have in a dairy ice cream.
The difference is the protein. Milk protein is both a very good foaming agent and a very good emulsifying agent. [Emulsifying and foaming agents are molecules that stabilize foams and emulsions. The molecules attach to the surface of the liquid droplets or air bubbles and stop them from coalescing with each other.] Plant proteins aren’t very good at either. If you look at cashew, almond or soy-based products, you’ll find additional ingredients to deliver the functionality that we would otherwise get from the milk protein.
What techniques do you use to study ice cream? And how do you stop the ice cream from melting during an experiment?
The workhorses of instrumentation for research are particle size analysis, electron microscopy and rheology (see “Experimental techniques”).
So first there’s laser light scattering which tells us everything we need to know about the fat globules and fat structure (see “Experimental techniques”). Then we use a lot of optical microscopy. You either need to put the microscope in a freezer or cold box or have a cold stage where you have the ice cream on a slide inside a chamber that’s cooled with liquid nitrogen. On the electron microscopy side (see “Experimental techniques”), we’ve done a lot of cryo-scanning electron microscopy (SEM), with a low-temperature unit.
We’ve also done a lot of transmission electron microscopy (TEM), which generally uses a different approach. Instead of performing the experiment in cold conditions, we use a chemical that “fixes” the structure in place and then we dry it, typically using a technique called “critical point drying” (see “Experimental techniques”). It’s then sliced into thin samples and studied with the TEM.
Experimental techniques
Rheology: Rheology is the study of the flow and deformation of materials. A rheometer is an apparatus used to measure the response of different materials to applied forces.
Dynamic light scattering (DLS): A laser-based technique used to measure the size distribution of dispersed particles. Dispersed particles such as fat globules in ice cream exhibit Brownian motion, with small particles moving faster than larger particles. The interference of laser light scattered from the particles is used to calculate the characteristic timescale of the Brownian motion and the particle size distribution (a rough worked example of the size calculation follows this box).
Electron microscopy: Imaging techniques that use a beam of electrons, rather than photons, to image a sample. Scanning electron microscopy (SEM) and transmission electron microscopy (TEM) are two common examples. SEM uses reflected electrons to study the sample surface, whereas TEM uses electrons travelling through a sample to understand its internal structure.
Critical point drying: When a sample is dried in preparation for microscopy experiments, the effects of surface tension between the water in the sample and the surrounding air can cause damage. At the critical point, the liquid and gas phases are indistinguishable. If the water in the sample is at its critical point during dehydration, there is no boundary between the water and vapour, which protects the structure of the sample.
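As a rough illustration of the dynamic light scattering entry above, the diffusion coefficient extracted from the correlation timescale is converted to a hydrodynamic size with the Stokes–Einstein relation. The numbers below are generic placeholders, not values from Goff’s lab.

```python
import math

K_B = 1.380649e-23    # J/K, Boltzmann constant

def hydrodynamic_diameter_m(diffusion_m2_s, temperature_k=293.15, viscosity_pa_s=1.0e-3):
    """Stokes-Einstein relation for a sphere: d = k_B T / (3 pi eta D)."""
    return K_B * temperature_k / (3.0 * math.pi * viscosity_pa_s * diffusion_m2_s)

# A measured diffusion coefficient of ~4.3e-13 m^2/s in water at 20 °C maps to a
# sphere about 1 µm across, the size quoted for the fat droplets in ice cream.
print(f"{hydrodynamic_diameter_m(4.3e-13) * 1e6:.2f} µm")
```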
After decades of studying ice cream, do you still get excited about it?
Oh, absolutely. I’ve been fortunate enough to have travelled to many, many interesting countries and I always see what the ice cream market looks like when I’m there. It’s not just a professional thing. I also like to know what’s going on around the world so I can share that with people. But of course, how can you go wrong with ice cream? It’s such a fun product to be associated with.
Listen to the full interview with Douglas Goff on the Physics World Weekly podcast.
Incoming US President Donald Trump has selected Silicon Valley executive Michael Kratsios as director of the Office of Science and Technology Policy (OSTP). Kratsios will also serve as Trump’s science advisor, a position that, unlike the OSTP directorship, does not require approval by the US Senate. Meanwhile, computer scientist Lynne Parker from the University of Tennessee, Knoxville, has been appointed to a new position – executive director of the President’s Council of Advisors on Science and Technology. Parker, who is a former member of OSTP, will also act as counsellor to the OSTP director.
Kratsios, with a BA in politics from Princeton University, was previously chief of staff to Silicon Valley venture capitalist Peter Thiel before becoming the White House’s chief technology officer in 2017 at the start of Trump’s first stint as US president. In addition to his technology remit, Kratsios was effectively Trump’s science advisor until meteorologist Kelvin Droegemeier took that position in January 2019. Kratsios then became the Department of Defense’s acting undersecretary of research and engineering. After the 2020 presidential election, Kratsios left government to run the San Francisco-based company Scale AI.
Parker has a MS from the University of Tennessee and a PhD from Massachusetts Institute of Technology, both in computer science. She was founding director of the University of Tennessee’s AI Tennessee Initiative before spending four years as a member of OSTP, bridging the first Trump and Biden administrations. There, she served as deputy chief technology officer and was the inaugural director of OSTP’s National Artificial Intelligence Initiative Office.
Unlike some other Trump nominations, the appointments have been positively received by the science community. “APLU is enthusiastic that President-elect Trump has selected two individuals who recognize the importance of science to national competitiveness, health, and economic growth,” noted the Association of Public and Land-grant Universities – a membership organisation of public research universities – in a statement. Analysts expect the nominations to reflect the returning president’s interest in pursuing AI, which could indicate a move towards technology over scientific research in the coming four years.
Bill Nelson – NASA’s departing administrator – has handed over a decision about when to retrieve samples from Mars to potential successor Jared Isaacman. In the wake of huge cost increases and long delays in the schedule for bringing back samples collected by the rover Perseverance, NASA had said last year that it would develop a fresh plan for the “Mars Sample Return” mission. Nelson now says the agency had two lower-cost plans in mind – but that a choice will not be made until mid-2026. One plan would use a sky crane system resembling that which delivered Perseverance to the Martian surface, while the other would require a commercially produced “heavy lift lander” to pick up samples. Each option could cost up to $7.5 bn – much less than the rejected plan’s $11 bn.
Physicists have developed a new theoretical framework that helps make sense of how quantum processes are limited by the classical space–time in which they are embedded. One of these processes is the quantum indefinite causal order (ICO), which is a puzzling consequence of quantum physics that has attracted a lot of attention and excitement lately. Quantum ICO systems could have applications in quantum technology, so gaining a better understanding of the phenomenon could have practical implications.
In a quantum ICO process, the temporal order of events is not fixed. Instead, the order is a quantum superposition of event A occurring before event B and event B happening before event A. Usually, such a statement is paradoxical: we are used to speaking about event A causing B, or vice-versa, but how can both be true simultaneously?
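The canonical example realized in laboratories, although it is not named in the article, is the “quantum switch”: a control qubit coherently determines whether operation A acts before B or B before A. The NumPy sketch below illustrates the construction for two arbitrary single-qubit unitaries; the specific gates chosen are placeholders.

```python
import numpy as np

# Placeholder single-qubit operations standing in for "event A" and "event B"
A = np.array([[1, 0], [0, 1j]])                       # a phase gate
B = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # a Hadamard gate

ket0 = np.array([1.0, 0.0], dtype=complex)
ket1 = np.array([0.0, 1.0], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)                     # control qubit in superposition

# Quantum switch: control |0> means A acts first then B, control |1> means B
# acts first then A. With the control in |+>, the two causal orders are superposed.
switch = (np.kron(np.outer(ket0, ket0.conj()), B @ A)
          + np.kron(np.outer(ket1, ket1.conj()), A @ B))

psi_out = switch @ np.kron(plus, ket0)                # target qubit starts in |0>
print(np.round(psi_out.reshape(2, 2), 3))             # rows: control = 0, 1
```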
While quantum ICO events have been demonstrated in the laboratory, they appear incompatible with the classical interpretation of space–time causality that governs those experiments and indeed, the world that we live in. As a result, some physicists have cast doubt on whether ICO has actually been observed in the lab.
But now, V Vilasini and Renato Renner at ETH Zurich in Switzerland and the University of Grenoble in France have determined what conditions must hold for ICO processes to be possible in space–time. Their result is cast in the language of a no-go theorem, which is a proof that, under certain assumptions, it is impossible for something to occur.
Bell’s famous no-go theorem
Perhaps the most famous no-go theorem in quantum physics is Bell’s theorem. It was derived in 1964 by the Northern Irish physicist John Bell and concerns the purely quantum phenomenon of entanglement. The theorem establishes that the correlations observed between two entangled particles cannot be the result of any process of classical physics that obeys space–time causality. Many Bell tests have been done in the laboratory using photons and other particles, and the results of these experiments are consistent with the quantum nature of entanglement. The Bell test has also been put to practical use in the E91 quantum cryptography protocol.
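In its most widely tested (CHSH) form, the classical bound can be written as follows; it is reproduced here as a reminder of what a no-go statement looks like, not as part of Vilasini and Renner’s formulation:

$$S = \bigl| E(a,b) - E(a,b') + E(a',b) + E(a',b') \bigr| \le 2,$$

where E(x, y) is the correlation between measurement outcomes for detector settings x and y. Any classical, locally causal model obeys S ≤ 2, whereas measurements on entangled particles can reach S = 2√2, which is what Bell-test experiments observe.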
Bell’s no-go theorem puts a limit on classical processes in space–time causality. In their new work, Vilasini and Renner have created no-go theorems that limit quantum processes such as ICO in space–time causality.
The first no-go result demonstrates that it is possible to embed a quantum ICO in classical space–time, which is the space–time we access in the laboratory – and experience on a day-to-day basis – provided that we do not require the involved systems to be localized in space–time. These systems could, for example, be electrons or photons that are acting as the quantum bits (qubits) of a computation. Locality is the assumption that these particles are fixed at a particular location in space–time, but Vilasini and Renner’s result suggests that in order for the process to play out in classical space–time, it cannot possess this property of locality.
Cyclicity and acyclicity
Central to their work is the notion of cyclicity, and the opposite notion of acyclicity. Acyclic space–time does not contain cycles, meaning that one event cannot occur both before and after another event. ICO processes, on the other hand, are necessarily cyclic.
Their second no-go result says that any quantum ICO process that can be embedded in classical space–time can be realized instead by a process that is acyclic. What this means is that we can achieve the same result of the ICO process by replacing it with a different process that is in fact acyclic. This is something like an unravelling, and is referred to by Vilasini and Renner as “coarse graining”.
Vilasini tells Physics World that there is a nice classical analogy: “the demand and price of a commodity may influence each other forming a cyclic causal structure, but upon a closer look this unravels into an acyclic structure where demand at time 1 influences price at time 2, which is greater than time 1, which in turn influences demand at time 3 and so on”.
Quantum ICO processes are not only important from a theoretical standpoint. They have been shown to be useful in a variety of tasks, ranging from refrigeration to reliable and noiseless communication between people. In particular, using quantum ICOs yields performance superior to that achieved by classical machines.
Future research in this field could improve our understanding of the interaction of ICOs and quantum gravity. While ICO processes have been studied in space–times in which gravity and quantum effects play a role, no-go results akin to those of Vilasini and Renner have yet to be worked out in such settings. This would shed light on the role of causality in quantum gravitational regimes, an area of research of pressing importance.
Magnetic particle imaging (MPI) is an emerging medical imaging modality with the potential for high sensitivity and spatial resolution. Since its introduction back in 2005, researchers have built numerous preclinical MPI systems for small-animal studies. But human-scale MPI remains an unmet challenge. Now, a team headed up at the Athinoula A Martinos Center for Biomedical Imaging has built a proof-of-concept human brain-scale MPI system and demonstrated its potential for functional neuroimaging.
MPI works by visualizing injected superparamagnetic iron oxide nanoparticles (SPIONs). SPIONs exhibit a nonlinear response to an applied magnetic field: at low fields they respond roughly linearly, but at larger field strengths, particle response saturates. MPI exploits this behaviour by creating a magnetic field gradient across the imaging space with a field-free line (FFL) in the centre. Signals are only generated by the unsaturated SPIONs inside the FFL, which can be scanned through the imaging space to map SPION distribution.
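A standard idealization of that nonlinear response, not spelled out in the article, is the Langevin model of superparamagnetism: magnetization grows linearly at low field and saturates at high field, so only the particles sitting near the field-free line produce a changing signal when the drive field is applied.

```python
import math

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x; its small-x limit is x/3."""
    if abs(x) < 1e-6:
        return x / 3.0
    return 1.0 / math.tanh(x) - 1.0 / x

# Normalized SPION magnetization M/Ms versus scaled applied field x (x ∝ H):
# roughly linear near zero field, saturating at large |H|. In MPI, only the
# particles near the field-free line sit in the steep, linear region and
# respond to the drive field; elsewhere the gradient field keeps them saturated.
for x in (0.1, 0.5, 1.0, 3.0, 10.0):
    print(f"x = {x:5.1f}  ->  M/Ms = {langevin(x):.3f}")
```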
First author Eli Mattingly and colleagues propose that MPI could be of particular interest for imaging the dynamics of blood volume in the brain, as it can measure the local distribution of nanoparticles in blood without an interfering background signal.
“In the brain, the tracer stays in the blood so we get an image of blood volume distribution,” Mattingly explains. “This is an important physiological parameter to map since blood is so vital for supporting metabolism. In fact, when a brain area is used by a mental task, the local blood volume swells about 20% in response, allowing us to map functional brain activity by dynamically imaging cerebral blood volume.”
Rescaling the scanner
The researchers began by defining the parameters required to build a human brain-scale MPI system. Such a device should be able to image the head with 6 mm spatial resolution (as used in many MRI-based functional neuroimaging studies) and 5 s temporal resolution for at least 30 min. To achieve this, they rescaled their existing rodent-sized imager.
The resulting scanner uses two opposed permanent magnets to generate the FFL and high-power electromagnet shift coils, comprising inner and outer coils on each side of the head, to sweep the FFL across the head. The magnets create a gradient of 1.13 T/m, sufficient to achieve 5–6 mm resolution with high-performance SPIONs. To create 2D images, a mechanical gantry rotates the magnets and shift coils at 6 RPM, enabling imaging every 5 s.
The MPI system also incorporates a water-cooled 26.3 kHz drive coil, which produces the oscillating magnetic field (of up to 7 mT peak) needed to drive the SPIONs in and out of saturation. A gradiometer-based receive coil fits over the head to record the SPION response.
Mattingly notes that this rescaling was far from straightforward as many parameters scale with the volume of the imaging bore. “With a bore about five times larger, the volume is about 125 times larger,” he says. “This means the power electronics require one to two orders of magnitude more power than rat-sized MPI systems, and the receive coils are simultaneously less sensitive as they become larger.”
Performance assessment
The researchers tested the scanner performance using a series of phantoms. They first evaluated spatial resolution by imaging 2.5 mm-diameter capillary tubes filled with Synomag SPIONs and spaced by between 5 and 9 mm. They reconstructed images using an inverse Radon reconstruction algorithm and a forward-model iterative reconstruction.
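Inverse Radon (filtered back-projection) reconstruction is the same mathematics used in CT. The scikit-image sketch below reconstructs a standard test phantom from simulated projections purely to illustrate the algorithm; it is not the team’s reconstruction code and uses no MPI data.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# Stand-in object: a standard numerical test phantom
image = rescale(shepp_logan_phantom(), 0.5)

# Forward model: projections collected over a set of rotation angles, loosely
# analogous to sweeping and rotating the field-free line around the head
angles = np.linspace(0.0, 180.0, 72, endpoint=False)
sinogram = radon(image, theta=angles)

# Inverse Radon transform (filtered back-projection) recovers the object
reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")
rms_error = np.sqrt(np.mean((reconstruction - image) ** 2))
print(f"RMS reconstruction error: {rms_error:.3f}")
```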
The system demonstrated a spatial resolution of about 7 mm with inverse Radon reconstruction, increasing to 5 mm with iterative reconstruction. The team notes that this resolution should be sufficient to observe changes in cerebral blood volume associated with brain function and following brain injuries.
To determine the practical detection limit, the researchers imaged Synomag samples with concentrations from 6 mg Fe/ml to 15.6 µg Fe/ml, observing a limit of about 1 µg Fe. Based on this result, they predict that MPI should show grey matter with a signal-to-noise ratio (SNR) of roughly five and large blood vessels with an SNR of about 100 in a 5 s image. They also expect to detect changes during brain activation with a contrast-to-noise ratio of above one.
Next, they quantified the scanner’s imaging field-of-view using a G-shaped phantom filled with Synomag at roughly the concentration of blood. The field-of-view was 181 mm in diameter – sufficient to encompass most human brains. Finally, the team monitored the drive current stability over 35 min of continuous imaging. At a drive field of 4.6 mT peak, the current deviated by less than 2%. As this drift was smooth and slow, it should be straightforward to separate it from the larger signal changes expected from brain activation.
The researchers conclude that their scanner – the first human head-sized, mechanically rotating, FFL-based MPI – delivers a suitable spatial resolution, temporal resolution and sensitivity for functional human neuroimaging. And they continue to improve the device. “Currently, the group is developing hardware to enable studies such as application-specific receive coils to prepare for in vivo experiments,” says Mattingly.
At present, the scanner’s sensitivity is limited by background noise from the amplifiers. Mitigating such noise could increase sensitivity 20-fold, the team predicts, potentially providing an order of magnitude improvement over other human neuroimaging methods and enabling visualization of haemodynamic changes following brain activity.
Lia Merminga has resigned as director of Fermilab – the US’s premier particle-physics lab. She stepped down yesterday after a turbulent year that saw staff layoffs, a change in the lab’s management contractor and accusations of a toxic atmosphere. Merminga is being replaced by Young-Kee Kim from the University of Chicago, who will serve as interim director until a permanent successor is found. Kim was previously Fermilab’s deputy director between 2006 and 2013.
Tracy Marc, a spokesperson for Fermilab, says that the search for Merminga’s successor has already begun, although without a specific schedule. “Input from Fermilab employees is highly valued and we expect to have Fermilab employee representatives as advisory members on the search committee, just as has been done in the past,” Marc told Physics World. “The search committee will keep the Fermilab community informed about the progress of this search.”
The departure of Merminga, who became Fermilab director in August 2022, was announced by Paul Alivisatos, president of the University of Chicago. The university jointly manages the lab with Universities Research Association (URA), a consortium of research universities, as well as the industrial firms Amentum Environment & Energy, Inc. and Longenecker & Associates.
“Her dedication and passion for high-energy physics and Fermilab’s mission have been deeply appreciated,” Alivisatos said in a statement. “This leadership change will bring fresh perspectives and expertise to the Fermilab leadership team.”
Turbulent times
The reasons for Merminga’s resignation are unclear, but Fermilab has endured a difficult two years, with questions raised about its internal management and external oversight. Last August, a group of anonymous self-styled whistleblowers published a 113-page “white paper” on the arXiv preprint server, asserting that the lab was “doomed without a management overhaul”.
The document highlighted issues such as management cover-ups of dangerous behaviour, including guns being brought onto Fermilab’s campus and a male employee’s attack on a female colleague. In addition, key experiments such as the Deep Underground Neutrino Experiment suffered notable delays. Cost overruns also led to a “limited operations period” with most staff on leave in late August.
In October, the US Department of Energy, which oversees Fermilab, announced a new organization – Fermi Forward Discovery Group – to manage the lab. Yet that decision came under scrutiny given that the group is dominated by the University of Chicago and URA, which had already been part of the lab’s management since 2007. Then, a month later, almost 2.5% of Fermilab’s employees were laid off, adding to the picture of an institution in crisis.
The whistleblowers, who told Physics World that they still stand by their analysis of the lab’s issues, say that the layoffs “undermined Fermilab’s scientific mission” and sidelined “some of its most accomplished” researchers. “Meanwhile, executive managers, insulated by high salaries and direct oversight responsibilities, remained unaffected,” they allege.
Born in Greece, Merminga, 65, earned a BSc in physics from the University of Athens before moving to the University of Michigan where she completed an MS and PhD in physics. Before taking on Fermilab’s directorship, she held leadership posts in governmental physics-related institutions in the US and Canada.
CERN’s ALICE Collaboration has found the first evidence for antihyperhelium-4, an antimatter hypernucleus that is a heavier version of antihelium-4. It contains two antiprotons, an antineutron and an antilambda baryon. The latter contains three antiquarks (anti-up, anti-down and anti-strange), making it an antihyperon, and is electrically neutral like the neutron. The antihyperhelium-4 was created by smashing lead nuclei together at the Large Hadron Collider (LHC) in Switzerland, and the observation has a statistical significance of 3.5σ. While this falls below the 5σ level generally accepted as a discovery in particle physics, the observation is in line with the Standard Model. The detection therefore helps constrain theories beyond the Standard Model that try to explain why the universe contains much more matter than antimatter.
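For readers less familiar with the σ convention, significance levels translate into one-sided Gaussian tail probabilities, as in the short conversion below (a standard calculation, not specific to this analysis).

```python
# Standard conversion of significance in sigma to a one-sided Gaussian
# tail probability (p-value), as used throughout particle physics.
from scipy.stats import norm

for sigma in (3.5, 5.0):
    p = norm.sf(sigma)       # survival function = one-sided tail probability
    print(f"{sigma} sigma -> p ~ {p:.1e} (about 1 in {1 / p:,.0f})")
```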
Hypernuclei are rare, short-lived atomic nuclei made up of protons, neutrons, and at least one hyperon. Hypernuclei and their antimatter counterparts can be formed within a quark–gluon plasma (QGP), which is created when heavy ions such as lead collide at high energies. A QGP is an extreme state of matter that also existed in the first millionth of a second following the Big Bang.
Exotic antinuclei
Just a few hundred picoseconds after being formed in collisions, antihypernuclei decay via the weak force, creating two or more distinctive decay products that can be detected. The first antihypernucleus to be observed was a form of antihyperhydrogen called antihypertriton, which contains an antiproton, an antineutron and an antilambda. It was discovered in 2010 by the STAR Collaboration, which smashed together gold nuclei at Brookhaven National Laboratory’s Relativistic Heavy Ion Collider (RHIC).
Then, in 2024, the STAR Collaboration reported the first observations of the decay products of antihyperhydrogen-4, which contains one more antineutron than antihypertriton.
Now, ALICE physicists have delved deeper into the world of antihypernuclei by performing a fresh analysis of data taken at the LHC in 2018, when lead ions were collided at 5 TeV.
Using a machine learning technique to analyse the decay products of the nuclei produced in these collisions, the ALICE team identified the same signature of antihyperhydrogen-4 detected by the STAR Collaboration. This is the first time an antimatter hypernucleus has been detected at the LHC.
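The article does not specify which machine learning method was used; in heavy-ion analyses of this kind, a boosted decision tree trained on topological and kinematic features of the candidate decays is a common choice. The sketch below is a generic, hypothetical example of that approach: the feature names and data are invented.

```python
# Generic, hypothetical sketch of ML-based candidate selection; the specific
# ALICE method is not described in the article. Features (decay-vertex
# displacement, pointing angle, dE/dx pull) and the synthetic data are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000
signal = rng.normal([3.0, 0.02, 0.0], [1.0, 0.02, 1.0], size=(n, 3))
background = rng.normal([1.0, 0.20, 0.0], [1.0, 0.15, 2.0], size=(n, 3))
X = np.vstack([signal, background])
y = np.concatenate([np.ones(n), np.zeros(n)])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_train, y_train)
print(f"classification accuracy on the toy sample: {clf.score(X_test, y_test):.2f}")
```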
Rapid decay
But that is not all. The team also found evidence for another, slightly lighter antihypernucleus, called antihyperhelium-4. This contains two antiprotons, an antineutron, and an antihyperon. It decays almost instantly into an antihelium-3 nucleus, an antiproton, and a charged pion. The latter is a meson comprising a quark–antiquark pair.
Physicists describe production of hypernuclei in a QGP using the statistical hadronization model (SHM). For both antihyperhydrogen-4 and antihyperhelium-4, the masses and production yields measured by the ALICE team closely matched the predictions of the SHM – assuming that the particles were produced in a certain mixture of their excited and ground states.
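As a rough illustration of what the SHM implies (and why such nuclei are so rare), the thermal yield of a species of mass m at a freeze-out temperature T scales, in the Boltzmann approximation, as m^2 T K2(m/T). The sketch below uses approximate masses and an assumed freeze-out temperature of about 156 MeV; it is not the ALICE fit.

```python
# Rough illustration of the statistical hadronization model's mass penalty
# (not the ALICE/SHM fit): in the Boltzmann approximation the thermal yield of
# a species of mass m scales as g * m^2 * T * K2(m/T), so each extra GeV of
# mass at T ~ 156 MeV suppresses the yield by a factor of a few hundred.
# Masses are approximate and degeneracy factors are set to 1 for simplicity.
from scipy.special import kn

T = 0.156                                   # assumed freeze-out temperature (GeV)
masses = {
    "proton": 0.938,
    "(anti)hypertriton": 2.99,
    "(anti)hyperhydrogen-4 / (anti)hyperhelium-4": 3.92,
}

def thermal_weight(m, g=1.0):
    # Thermal density up to a common volume factor (Boltzmann statistics)
    return g * m ** 2 * T * kn(2, m / T)

reference = thermal_weight(masses["proton"])
for name, m in masses.items():
    print(f"{name}: yield relative to protons ~ {thermal_weight(m) / reference:.1e}")
```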
The team’s result further confirms that the SHM can accurately describe the production of hypernuclei and antihypernuclei from a QGP. The researchers also found that equal numbers of hypernuclei and antihypernuclei are produced in the collisions, within experimental uncertainty. While this provides no explanation as to why there is much more matter than antimatter in the observable universe, the research allows physicists to put further constraints on theories that reach beyond the Standard Model of particle physics to try to explain this asymmetry.
The research could also pave the way for further studies into how hyperons within hypernuclei interact with their neighbouring protons and neutrons. With a deeper knowledge of these interactions, astronomers could gain new insights into the mysterious interior properties of neutron stars.
The observation is described in a paper that has been submitted to Physical Review Letters.