Grid operators around the world are under intense pressure to expand and modernize their power networks. The International Energy Agency predicts that demand for electricity will rise by 30% in this decade alone, fuelled by global economic growth and the ongoing drive towards net zero. At the same time, electrical transmission systems must be adapted to handle the intermittent nature of renewable energy sources, as well as the extreme and unpredictable weather conditions that are being triggered by climate change.
High-voltage capacitors play a crucial role in these power networks, balancing the electrical load and managing the flow of current around the grid. For more than 40 years, the standard dielectric for storing energy in these capacitors has been a thin film of a polymer material called biaxially oriented polypropylene (BOPP). However, as network operators upgrade their analogue-based infrastructure with digital technologies such as solid-state transformers and high-frequency switches, BOPP struggles to provide the thermal resilience and reliability that are needed to ensure the stability, scalability and security of the grid.
“We’re trying to bring innovation to an area that hasn’t seen it for a very long time,” says Dr Mike Ponting, Chief Scientific Officer of Peak Nano, a US firm specializing in advanced polymer materials. “Grid operators have been using polypropylene materials for a generation, with no improvement in capability or performance. It’s time to realize we can do better.”
Peak Nano has created a new capacitor film technology that addresses the needs of the digital power grid, as well as other demanding energy storage applications such as managing the power supply to data centres, charging solutions for electric cars, and next-generation fusion energy technology. The company’s Peak NanoPlex™ materials are fabricated from multiple thin layers of different polymer materials, and can be engineered to deliver enhanced performance for both electrical and optical applications. The capacitor films typically contain polymer layers anywhere between 32 and 156 nm thick, while the optical materials are fabricated with as many as 4000 layers in films thinner than 300 µm.
“When they are combined together in an ordered, layered structure, the long polymer molecules behave and interact with each other in different ways,” explains Ponting. “By putting the right materials together, and controlling the precise arrangement of the molecules within the layers, we can engineer the film properties to achieve the performance characteristics needed for each application.”
In the case of capacitor films, this process enhances BOPP’s properties by interleaving it with another polymer. Such layered films can be optimized to store four times as much energy as conventional BOPP while achieving extremely fast charge/discharge rates. Alternatively, they can be engineered to deliver longer lifetimes at operating temperatures some 50–60°C higher than existing materials. This improved thermal resilience is useful for applications that experience more heat, such as mining and aerospace, and is also becoming an important priority for grid operators as they introduce new transmission technologies that generate more heat.
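To put those figures in context, the energy stored per unit volume of a capacitor dielectric scales as u = ½ε₀εᵣE², so gains in permittivity and breakdown field multiply together. The sketch below is a minimal back-of-envelope illustration; the permittivity and field values are assumed for illustration, not Peak Nano’s measured figures.

```python
# Illustrative sketch of the energy density of a film-capacitor dielectric,
# u = 0.5 * eps0 * eps_r * E^2. The permittivity and breakdown-field values
# below are assumed, not measured figures for BOPP or NanoPlex.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def energy_density_J_per_cm3(eps_r, e_field_V_per_um):
    e = e_field_V_per_um * 1e6        # convert V/um to V/m
    u = 0.5 * EPS0 * eps_r * e**2     # J/m^3
    return u / 1e6                    # convert J/m^3 to J/cm^3

bopp = energy_density_J_per_cm3(eps_r=2.2, e_field_V_per_um=700)
layered = energy_density_J_per_cm3(eps_r=4.0, e_field_V_per_um=1000)
print(f"BOPP-like film: {bopp:.1f} J/cm^3; layered film: {layered:.1f} J/cm^3 (~{layered/bopp:.1f}x)")
```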
“We talked to the users of the components to find out what they needed, and then adjusted our formulations to meet those needs,” says Ponting. “Some people wanted smaller capacitors that store a lot of energy and can be cycled really fast, while others wanted an upgraded version of BOPP that is more reliable at higher temperatures.”
The multilayered materials now being produced by Peak Nano emerged from research Ponting was involved in as a graduate student at Case Western Reserve University in the 2000s. Plastics containing just a few layers had originally been developed for everyday applications like gift wrap and food packaging, but scientists were starting to explore the novel optical and electronic properties that emerge when the thickness of the polymer layers is reduced to the nanoscale regime.
Small samples of these polymer nanocomposites produced in the lab demonstrated their superior performance, and Peak Nano was formed in 2016 to commercialize the technology and scale up the fabrication process. “There was a lot of iteration and improvement to produce large quantities of the material while still maintaining the precision and repeatability of the nanostructured layers,” says Ponting, who has been developing these multilayered polymer materials and the required processing technology for more than 20 years. “The film properties we want to achieve require the polymer molecules to be well ordered, and it took us a long time to get it right.”
As part of this development process, Peak Nano worked with capacitor manufacturers to create a plug-and-play replacement technology for BOPP that can be used on the same manufacturing systems and capacitor designs as BOPP today. By integrating its specialist layering technology into these existing systems, Peak Nano has been able to leverage established supply chains for materials and equipment rather than needing to develop a bespoke manufacturing process. “That has helped to keep costs down, which means that our layered material is only slightly more expensive than BOPP,” says Ponting.
Ponting also points out that long term, NanoPlex is a more cost-effective option. With improved reliability and resilience, NanoPlex can double or even quadruple the lifetime of a component. “The capacitors don’t need to be replaced as often, which reduces the need for downtime and offsets the slightly higher cost,” he says.
For component manufacturers, meanwhile, the multilayered films can be used in exactly the same way as conventional materials. “Our material can be wound into capacitors using the same process as for polypropylene,” says Ponting. “Our customers don’t need to change their process; they just need to design for higher performance.”
Initial interest in the improved capabilities of NanoPlex came from the defence sector, with Peak Nano benefiting from investment and collaborative research with the US Defense Advanced Research Projects Agency (DARPA) and the Naval Research Laboratory. Optical films produced by the company have been used to fabricate lenses with a graduated refractive index, reducing the size and weight of head-mounted visual equipment while also sharpening the view. Dielectric films with a high breakdown voltage are also a common requirement within the defence community.
The post Nanostructured plastics deliver energy innovation appeared first on Physics World.
Superfluorescence is a collective quantum phenomenon in which many excited particles emit light coherently in a sudden, intense burst. It is usually only observed at cryogenic temperatures, but researchers in the US and France have now determined how and why superfluorescence occurs at room temperature in a lead halide perovskite. The work could help in the development of materials that host exotic coherent quantum states – like superconductivity, superfluidity or superfluorescence – under ambient conditions, they say.
Superfluorescence and other collective quantum phenomena are rapidly destroyed at temperatures higher than cryogenic ones because of thermal vibrations produced in the crystal lattice. In the system studied in this work, the researchers, led by physicist Kenan Gundogdu of North Carolina State University, found that excitons (bound electron–hole pairs) spontaneously form localized, coherence-preserving domains. “These solitons act like quantum islands,” explains Gundogdu. “Excitons inside these islands remain coherent while those outside rapidly dephase.”
The soliton structure acts as a shield, he adds, protecting its content from thermal disturbances – a behaviour that represents a kind of quantum analogue of “soundproofing” – that is, isolation from vibrations. “Here, coherence is maintained not by external cooling but by intrinsic self-organization,” he says.
The team, which also includes researchers from Duke University, Boston University and the Institut Polytechnique de Paris, began their experiment by exciting lead halide perovskite samples with intense femtosecond laser pulses to generate a dense population of excitons in the material. Under normal conditions, these excitons recombine and emit light incoherently, but at high enough densities, as was the case here, the researchers observed intense, time-delayed bursts of coherent emission, which is a signature of superfluorescence.
When they analysed how the emission evolved over time, the researchers observed that it fluctuated. Surprisingly, these fluctuations were not random, explains Gundogdu, but were modulated by a well-defined frequency, corresponding to a specific lattice vibrational mode. “This suggested that the coherent excitons that emit superfluorescence come from a region in the lattice in which the lattice modes themselves oscillate in synchrony.”
So how can coherent lattice oscillations arise in a thermally disordered environment? The answer involves polarons, says Gundogdu. These are groups of excitons that locally deform the lattice. “Above a critical excitation density, these polarons self-organize into a soliton, which concentrates energy into specific vibrational modes while suppressing others. This process filters out incoherent lattice motion, allowing a stable collective oscillation to emerge.”
The new work, which is detailed in Nature, builds on a previous study in which the researchers had observed superfluorescence in perovskites at room temperature – an unexpected result. They suspected that an intrinsic effect was protecting excitons from dephasing – possibly through a quantum analogue of the vibration isolation mentioned above – but the mechanism behind this was unclear.
In this latest experiment, the team determined how polarons can self-organize into soliton states, and revealed an unconventional form of superfluorescence where coherence emerges intrinsically inside solitons. This coherence protection mechanism might be extended to other macroscopic quantum phenomena such as superconductivity and superfluidity.
“These effects are foundational for quantum technologies, yet how coherence survives at high temperatures is still unresolved,” Gundogdu tells Physics World. “Our findings provide a new principle that could help close this knowledge gap and guide the design of more robust, high-temperature quantum systems.”
The post Soliton structure protects superfluorescence appeared first on Physics World.
Light has always played a central role in healthcare, enabling a wide range of tools and techniques for diagnosing and treating disease. Nick Stone from the University of Exeter is a pioneer in this field, working with technologies ranging from laser-based cancer therapies to innovative spectroscopy-based diagnostics. Stone was recently awarded the Institute of Physics’ Rosalind Franklin Medal and Prize for developing novel Raman spectroscopic tools for rapid in vivo cancer diagnosis and monitoring. Physics World’s Tami Freeman spoke with Stone about his latest research.
Think about how we see the sky. It is blue due to elastic (specifically Rayleigh) scattering – when an incident photon scatters off a particle without losing any energy. But in about one in a million events, photons interacting with molecules in the atmosphere will be inelastically scattered. This changes the energy of the photon as some of it is taken by the molecule to make it vibrate.
If you shine laser light on a molecule and cause it to vibrate, the photon that is scattered from that molecule will be shifted in energy by a specific amount relating to the molecule’s vibrational mode. Measuring the wavelength of this inelastically scattered light reveals which molecule it was scattered from. This is Raman spectroscopy.
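As a concrete illustration of that energy bookkeeping, Raman shifts are usually quoted in wavenumbers (cm⁻¹), and the scattered wavelength follows directly from the laser wavelength and the shift. The snippet below is a minimal sketch; the 785 nm excitation and ~1003 cm⁻¹ band (a ring-breathing mode often seen in tissue spectra) are illustrative values, not figures taken from Stone’s work.

```python
# Minimal sketch of the Stokes Raman shift bookkeeping described above.
# The 785 nm excitation and ~1003 cm^-1 shift are assumed illustrative values.
def stokes_wavelength_nm(laser_nm, shift_cm1):
    # Raman shift (cm^-1) = 1e7/laser_nm - 1e7/scattered_nm
    return 1e7 / (1e7 / laser_nm - shift_cm1)

print(stokes_wavelength_nm(785.0, 1003.0))  # ~852 nm scattered photon
```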
Because most of the time we’re working at room or body temperatures, most of what we observe is Stokes Raman scattering, in which the laser photons lose energy to the molecules. But if a molecule is already vibrating in an excited state (at higher temperature), it can give up energy and shift the laser photon to a higher energy. This anti-Stokes spectrum is much weaker, but can be very useful – as I’ll come back to later.
A cell in the body is basically a nucleus (one set of molecules) surrounded by the cytoplasm (another set of molecules). These molecules change subtly depending on the phenotype [set of observable characteristics] of the particular cell. If you have a genetic mutation, which is what drives cancer, the cell tends to change its relative expression of proteins, nucleic acids, glycogen and so on.
We can probe these molecules with light, and therefore determine their molecular composition. Cancer diagnostics involves identifying minute changes between the different compositions. Most of our work has been in tissues, but it can also be done in biofluids such as tears, blood plasma or sweat. You build up a molecular fingerprint of the tissue or cell of interest, and then you can compare those fingerprints to identify the disease.
We tend to perform measurements under a microscope and, because Raman scattering is a relatively weak effect, this requires good optical systems. We’re trying to use a single wavelength of light to probe molecules of interest and look for wavelengths that are shifted from that of the laser illumination. Technology improvements have provided holographic filters that remove the incident laser wavelength readily, and less complex systems that enable rapid measurements.
Absolutely, we’ve developed probes that fit inside an endoscope for diagnosing oesophageal cancer.
Earlier in my career I worked on photodynamic therapy. We would look inside the oesophagus with an endoscope to find disease, then give the patient a phototoxic drug that would target the diseased cells. Shining light on the drug causes it to generate singlet oxygen that kills the cancer cells. But I realized that the light we were using could also be used for diagnosis.
Currently, to find this invisible disease, you have to take many, many biopsies. But our in vivo probes allow us to measure the molecular composition of the oesophageal lining using Raman spectroscopy, and determine where to take biopsies from. Oesophageal cancer has a really bad outcome once it’s diagnosed symptomatically, but if you can find the disease early you can deliver effective treatments. That’s what we’re trying to do.
The very weak Raman signal, however, causes problems. With a microscope, we can use advanced filters to remove the incident laser wavelength. But sending light down an optical fibre generates unwanted signal, and we also need to remove elastically scattered light from the oesophagus. So we had to put a filter on the end of this tiny 2 mm fibre probe. In addition, we don’t want to collect photons that have travelled a long way through the body, so we needed a confocal system. We built a really complex probe, working in collaboration with John Day at the University of Bristol – it took a long time to optimize the optics and the engineering.
Yes, we have also developed a smart needle probe that’s currently in trials. We are using this to detect lymphomas – the primary cancer in lymph nodes – in the head and neck, under the armpit and in the groin.
If somebody comes forward with lumps in these areas, they usually have a swollen lymph node, which shows that something is wrong. Most often it’s following an infection and the node hasn’t gone back down in size.
This situation usually requires surgical removal of the node to decide whether cancer is present or not. Instead, we can just insert our needle probe and send light in. By examining the scattered light and measuring its fingerprint we can identify if it’s lymphoma. Indeed, we can actually see what type of cancer it is and where it has come from.
Currently, the prototype probe is quite bulky because we are trying to make it low in cost. It has to have a disposable tip, so we can use a new needle each time, and the filters and optics are all in the handpiece.
As people don’t particularly want a needle stuck in them, we are now trying to understand where the photons travel if you just illuminate the body. Red and near-infrared light travel a long way through the body, so we can use near-infrared light to probe photons that have travelled many, many centimetres.
We are doing a study looking at calcifications in a very early breast cancer called ductal carcinoma in situ (DCIS) – it’s a Cancer Research UK Grand Challenge called DCIS PRECISION, and we are just moving on to the in vivo phase.
Calcifications aren’t necessarily a sign of breast cancer – they are mostly benign; but in patients with DCIS, the composition of the calcifications can show how their condition will progress. Mammographic screening is incredibly good at picking up breast cancer, but it’s also incredibly good at detecting calcifications that are not necessarily breast cancer yet. The problem is how to treat these patients, so our aim is to determine whether the calcifications are completely fine or if they require biopsy.
We are using Raman spectroscopy to understand the composition of these calcifications, which are different in patients who are likely to progress onto invasive disease. We can do this in biopsies under a microscope and are now trying to see whether it works using transillumination, where we send near-infrared light through the breast. We could use this to significantly reduce the number of biopsies, or monitor individuals with DCIS over many years.
This is an area I’m really excited about. Nanoscale gold can enhance Raman signals by many orders of magnitude – it’s called surface-enhanced Raman spectroscopy. We can also “label” these nanoparticles by adding functional molecules to their surfaces. We’ve used unlabelled gold nanoparticles to enhance signals from the body and labelled gold to find things.
During that process, we also realized that we can use gold to provide heat. If you shine light on gold at its resonant frequency, it will heat the gold up and can cause cell death. You could easily blow holes in people with a big enough laser and lots of nanoparticles – but what we want to do is more subtle. We’re decorating the tiny gold nanoparticles with a label that will tell us their temperature.
By measuring the ratio between Stokes and anti-Stokes scattering signals (which are enhanced by the gold nanoparticles), we can measure the temperature of the gold when it is in the tumour. Then, using light, we can keep the temperature at a suitable level for treatment to optimize the outcome for the patient.
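As a rough illustration of how that ratio encodes temperature, the anti-Stokes-to-Stokes intensity ratio follows a Boltzmann factor in the vibrational energy. The sketch below ignores the frequency-dependent prefactor and any enhancement or detector effects, and the 520 cm⁻¹ mode and 0.10 ratio are assumed values, not measurements from Stone’s group.

```python
# Hedged sketch of Stokes/anti-Stokes ratio thermometry. Ignoring the frequency
# prefactor and detector response, I_aS / I_S ~ exp(-h*c*nu_tilde / (k_B*T)).
import math

H = 6.626e-34    # Planck constant, J s
C = 2.998e10     # speed of light in cm/s, so nu_tilde can stay in cm^-1
KB = 1.381e-23   # Boltzmann constant, J/K

def temperature_from_ratio(ratio_aS_over_S, shift_cm1):
    # Invert the Boltzmann factor to estimate the local temperature
    return H * C * shift_cm1 / (KB * math.log(1.0 / ratio_aS_over_S))

# Assumed example: a 520 cm^-1 mode with an anti-Stokes/Stokes ratio of 0.10
print(temperature_from_ratio(0.10, 520.0))  # roughly 325 K
```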
Ideally, we want to use 100 nm gold particles, but that is not something you can simply excrete through the kidneys. So we’ve spent the last five years trying to create nanoconstructs made from 5 nm gold particles that replicate the properties of 100 nm gold, but can be excreted. We haven’t demonstrated this excretion yet, but that’s the process we’re looking at.
We’ve just completed a five-year programme called Raman Nanotheranostics. The aim is to label our nanoparticles with appropriate antibodies that will help the nanoparticles target different cancer types. This could provide signals that tell us what is or is not present and help decide how to treat the patient.
We have demonstrated the ability to perform treatments in preclinical models, control the temperature and direct the nanoparticles. We haven’t yet achieved a multiplexed approach with all the labels and antibodies that we want. But this is a key step forward and something we’re going to pursue further.
We are also trying to put labels on the gold that will enable us to measure and monitor treatment outcomes. We can use molecules that change in response to pH, or the reactive oxygen species that are present, or other factors. If you want personalized medicine, you need ways to see how the patient reacts to the treatment, how their immune system responds. There’s a whole range of things that will enable us to go beyond just diagnosis and therapy, to actually monitor the treatment and potentially apply a boost if the gold is still there.
Light has always been used for diagnosis: “you look yellow, you’ve got something wrong with your liver”; “you’ve got blue-tinged lips, you must have oxygen depletion”. But it’s getting more and more advanced. I think what’s most encouraging is our ability to measure molecular changes that potentially reveal future outcomes of patients, and individualization of the patient pathway.
But the real breakthrough is what’s on our wrists. We are all walking around with devices that shine light in us – to measure heartbeat, blood oxygenation and so on. There are already Raman spectrometers of that sort of size. They’re not good enough for biological measurements yet, but it doesn’t take much of a technology step forward.
I could one day have a chip implanted in my wrist that could do all the things the gold nanoconstructs might do, and my watch could read it out. And this is just Raman – there are a whole host of approaches, such as photoacoustic imaging or optical coherence tomography. Combining different techniques together could provide greater understanding in a much less invasive way than many traditional medical methods. Light will always play a really important role in healthcare.
The post Harnessing the power of light for healthcare appeared first on Physics World.
Astronomers in China have observed a pulsar that becomes partially eclipsed by an orbiting companion star every few hours. This type of observation is very rare and could shed new light on how binary star systems evolve.
While most stars in our galaxy exist in pairs, the way these binary systems form and evolve is still little understood. According to current theories, when two stars orbit each other, one of them may expand so much that its atmosphere becomes large enough to encompass the other. During this “common envelope” phase, mass can be transferred from one star to the other, causing the stars’ orbit to shrink over a period of around 1000 years. After this, the stars either merge or the envelope is ejected.
In the special case where one star in the pair is a neutron star, the envelope-ejection scenario should, in theory, produce a helium star that has been “stripped” of much of its material and a “recycled” millisecond pulsar – that is, a rapidly spinning neutron star that flashes radio pulses hundreds of times per second. In this type of binary system, the helium star can periodically eclipse the pulsar as it orbits around it, blocking its radio pulses and preventing us from detecting them here on Earth. Only a few examples of such a binary system have ever been observed, however, and all previous ones were in nearby dwarf galaxies called the Magellanic Clouds, rather than our own Milky Way.
Astronomers led by Jinlin Han from the National Astronomical Observatories of China say they have now identified the first system of this type in the Milky Way. The pulsar in the binary, denoted PSR J1928+1815, had been previously identified using the Five-hundred-meter Aperture Spherical radio Telescope (FAST) during the FAST Galactic Plane Pulsar Snapshot survey. These observations showed that PSR J1928+1815 has a spin period of 10.55 ms, which is relatively short for a pulsar of this type and suggests it had recently sped up by accreting mass from a companion.
The researchers used FAST to observe this suspected binary system at radio frequencies ranging from 1.0 to 1.5 GHz over a period of four and a half years. They fitted the times that the radio pulses arrived at the telescope with a binary orbit model to show that the system has an eccentricity of less than 3 × 10⁻⁵. This suggests that the pulsar and its companion star are in a nearly circular orbit. The diameter of this orbit, Han points out, is smaller than that of our own Sun, and its period – that is, the time it takes the two stars to circle each other – is correspondingly short, at 3.6 hours. For a sixth of this time, the companion star blocks the pulsar’s radio signals.
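For a sense of scale, Kepler’s third law links the 3.6-hour period to the separation between the two stars. The back-of-envelope sketch below assumes a 1.4-solar-mass neutron star and a roughly solar-mass helium companion; the article does not quote the component masses, so these numbers are purely illustrative.

```python
# Back-of-envelope check of the orbit size using Kepler's third law,
# a^3 = G*(M1+M2)*P^2 / (4*pi^2). The 1.4 and 1.0 solar-mass values are
# assumed for illustration; the component masses are not quoted above.
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
R_SUN = 6.96e8     # m

P = 3.6 * 3600.0                       # orbital period in seconds
M_total = (1.4 + 1.0) * M_SUN          # assumed neutron star + helium star
a = (G * M_total * P**2 / (4 * math.pi**2)) ** (1.0 / 3.0)

# Separation comes out at roughly a million kilometres, i.e. of order the Sun's size
print(f"star separation: {a/1e9:.2f} million km ({a/R_SUN:.1f} solar radii)")
```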
The team also found that the rate at which the pulsar’s spin period is changing (the so-called spin period derivative) is unusually high for a millisecond pulsar, at 3.63 × 10⁻¹⁸ s s⁻¹. This shows that energy is rapidly being lost from the system as the pulsar spins down.
“We knew that PSR J1928+1815 was special from November 2021 onwards,” says Han. “Once we’d accumulated data with FAST, one of my students, ZongLin Yang, studied the evolution of such binaries in general and completed the timing calculations from the data we had obtained for this system. His results suggested the existence of the helium star companion and everything then fell into place.”
This is the first time a short-lived (about 10⁷ years) binary consisting of a neutron star and a helium star has ever been detected, Han tells Physics World. “It is a product of the common envelope evolution that lasted for only 1000 years and that we couldn’t observe directly,” he says.
“Our new observation is the smoking gun for long-standing binary star evolution theories, such as those that describe how stars exchange mass and shrink their orbits, how the neutron star spins up by accreting matter from its companion and how the shared hydrogen envelope is ejected.”
The system could help astronomers study how neutron stars accrete matter and then cool down, he adds. “The binary detected in this work will evolve to become a system of two compact stars that will eventually merge and become a future source of gravitational waves.”
Full details of the study are reported in Science.
The post Short-lived eclipsing binary pulsar spotted in Milky Way appeared first on Physics World.
As the world celebrates the 2025 International Year of Quantum Science and Technology, it’s natural that we should focus on the exciting applications of quantum physics in computing, communication and cryptography. But quantum physics is also set to have a huge impact on medicine and healthcare. Quantum sensors, in particular, can help us to study the human body and improve medical diagnosis – in fact, several systems are close to being commercialized.
Quantum computers, meanwhile, could one day help us to discover new drugs by providing representations of atomic structures with greater accuracy and by speeding up calculations to identify potential drug reactions. But what other technologies and projects are out there? How can we forge new applications of quantum physics in healthcare and how can we help discover new potential use cases for the technology?
Those are some of the questions tackled in a recent report, published by Innovate UK in October 2024, on which this Physics World article is based. Entitled Quantum for Life, the report aims to kickstart new collaborations by raising awareness of what quantum physics can do for the healthcare sector. While the report says quite a bit about quantum computing and quantum networking, this article will focus on quantum sensors, which are closer to being deployed.
The importance of quantum science to healthcare isn’t new. In fact, when a group of academics and government representatives gathered at Chicheley Hall back in 2013 to hatch plans for the UK’s National Quantum Technologies Programme, healthcare was one of the main applications they identified. The resulting £1bn programme, which co-ordinated the UK’s quantum-research efforts, was recently renewed for another decade and – once again – healthcare is a key part of the remit.
As it happens, most major hospitals already use quantum sensors in the form of magnetic resonance imaging (MRI) machines. Pioneered in the 1970s, these devices manipulate the quantum spin states of hydrogen atoms using magnetic fields and radio waves. By measuring how long those states take to relax, MRI can image soft tissues, such as the brain, and is now a vital part of the modern medicine toolkit.
While an MRI machine measures the quantum properties of atoms, the sensor itself is classical, essentially consisting of electromagnetic coils that detect the magnetic flux produced when atomic spins change direction. More recently, though, we’ve seen a new generation of nanoscale quantum sensors that are sensitive enough to detect magnetic fields emitted by a target biological system. Others, meanwhile, consist of just a single atom and can monitor small changes in the environment.
As the Quantum for Life report shows, there are lots of different quantum-based companies and institutions working in the healthcare sector. There are also many promising types of quantum sensors, which use photons, electrons or spin defects within a material, typically diamond. But ultimately what matters is what quantum sensors can achieve in a medical environment.
While compiling the report, it became clear that quantum-sensor technologies for healthcare come in five broad categories. The first is what the report labels “lab diagnostics”, in which trained staff use quantum sensors to observe what is going on inside the human body. By monitoring everything from our internal temperature to the composition of cells, the sensors can help to identify diseases such as cancer.
Currently, the only way to definitively diagnose cancer is to take a sample of cells – a biopsy – and examine them under a microscope in a laboratory. Biopsies are often done with visual light but that can damage a sample, making diagnosis tricky. Another option is to use infrared radiation. By monitoring the specific wavelengths the cells absorb, the compounds in a sample can be identified, allowing molecular changes linked with cancer to be tracked.
Unfortunately, it can be hard to differentiate these signals from background noise. What’s more, infrared cameras are much more expensive than those operating in the visible region. One possible solution is being explored by Digistain, a company that was spun out of Imperial College, London, in 2019. It is developing a product called EntangleCam that uses two entangled photons – one infrared and one visible (figure 1).
Figure 1: (a) One way in which quantum physics is benefiting healthcare is through entangled photons created by passing laser light through a nonlinear crystal (left). Each laser photon gets converted into two lower-energy photons – one visible, one infrared – in a process called spontaneous parametric down-conversion. In technology pioneered by the UK company Digistain, the infrared photon can be sent through a sample, with the visible photon picked up by a detector. As the photons are entangled, the visible photon gives information about the infrared photon and the presence of, say, cancer cells. (b) Shown here are cells seen with a traditional stained biopsy (left) and with Digistain’s method (right).
If the infrared photon is absorbed by, say, a breast cancer cell, that immediately affects the visible photon with which it is entangled. So by measuring the visible light, which can be done with a cheap, efficient detector, you can get information about the infrared photon – and hence the presence of a potential cancer cell (Phys. Rev. 108 032613). The technique could therefore allow cancer to be quickly diagnosed before a tumour has built up, although an oncologist would still be needed to identify the area for the technique to be applied.
The second promising application of quantum sensors lies in “point-of-care” diagnostics. We all became familiar with the concept during the COVID-19 pandemic when lateral-flow tests proved to be a vital part of the worldwide response to the virus. The tests could be taken anywhere and were quick, simple, reliable and relatively cheap. Something that had originally been designed to be used in a lab was now available to most people at home.
Quantum technology could let us miniaturize such tests further and make them more accurate, such that they could be used at hospitals, doctor’s surgeries or even at home. At the moment, biological indicators of disease tend to be measured by tagging molecules with fluorescent markers and measuring where, when and how much light they emit. But because some molecules are naturally fluorescent, those measurements have to be processed to eliminate the background noise.
One emerging quantum-based alternative is to characterize biological samples by measuring their tiny magnetic fields. This can be done, for example, using diamond specially engineered with nitrogen-vacancy (NV) defects. Each is made by removing two carbon atoms from the lattice and implanting a nitrogen atom in one of the gaps, leaving a vacancy in the other. Behaving like an atom with discrete energy levels, each defect’s spin state is influenced by the local magnetic field and can be “read out” from the way it fluoresces.
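To make the read-out idea concrete, an NV magnetometer is often operated by sweeping a microwave drive and watching the fluorescence dip at two spin resonances; their splitting is proportional to the magnetic field along the NV axis. The sketch below uses the textbook values D ≈ 2.87 GHz and ≈28 GHz/T together with an assumed pair of resonance frequencies; it illustrates the principle rather than any particular company’s device.

```python
# Hedged sketch of converting an NV-centre spin-resonance splitting into a field.
# For the field component along the NV axis the resonances sit near f± = D ± gamma*B,
# with D ~ 2.87 GHz and gamma ~ 28 GHz/T. The example frequencies are assumed values.
D_GHZ = 2.87            # zero-field splitting
GAMMA_GHZ_PER_T = 28.0  # NV gyromagnetic ratio, ~28 GHz per tesla

def field_from_resonances(f_minus_ghz, f_plus_ghz):
    return (f_plus_ghz - f_minus_ghz) / (2.0 * GAMMA_GHZ_PER_T)  # tesla

# e.g. resonances measured at 2.84 and 2.90 GHz -> about 1.1 mT along the NV axis
print(field_from_resonances(2.84, 2.90))
```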
One UK company working in this area is Element Six. It has joined forces with the US-based firm QDTI to make a single-crystal diamond-based device that can quickly identify biomarkers in blood plasma, cerebrospinal fluid and other samples extracted from the body. The device detects magnetic fields produced by specific proteins, which can help identify diseases in their early stages, including various cancers and neurodegenerative conditions like Alzheimer’s. Another firm using single-crystal diamond to detect cancer cells is Germany-based Quantum Total Analysis Systems (QTAS).
Matthew Markham, a physicist who is head of quantum technologies at Element Six, thinks that healthcare has been “a real turning point” for the company. “A few years ago, this work was mostly focused on academic problems,” he says. “But now we are seeing this technology being applied to real-world use cases and that it is transitioning into industry with devices being tested in the field.”
An alternative approach involves using tiny nanometre-sized diamond particles with NV centres, which have the advantage of being highly biocompatible. QT Sense of the Netherlands, for example, is using these nanodiamonds to build nano-MRI scanners that can measure the concentration of molecules that have an intrinsic magnetic field. This equipment has already been used by biomedical researchers to investigate single cells (figure 2).
Figure 2: A nitrogen-vacancy defect in diamond – known as an NV centre – is made by removing two carbon atoms from the lattice and implanting a nitrogen atom in one of the gaps, leaving a vacancy in the other. Using a pulse of green laser light, NV centres can be sent from their ground state to an excited state. If the laser is switched off, the defects return to their ground state, emitting a visible photon that can be detected. However, the rate at which the fluorescent light drops while the laser is off depends on the local magnetic field. As companies like Element Six and QT Sense are discovering, NV centres in diamond are a great way of measuring magnetic fields in the human body, especially as the surrounding lattice of carbon atoms shields the NV centre from noise.
Australian firm FeBI Technologies, meanwhile, is developing a device that uses nanodiamonds to measure the magnetic properties of ferritin – a protein that stores iron in the body. The company claims its technology is nine orders of magnitude more sensitive than traditional MRI and will allow patients to monitor the amount of iron in their blood using a device that is accurate and cheap.
The third area in which quantum technologies are benefiting healthcare is what’s billed in the Quantum for Life report as “consumer medical monitoring and wearable healthcare”. In other words, we’re talking about devices that allow people to monitor their health in daily life on an ongoing basis. Such technologies are particularly useful for people who have a diagnosed medical condition, such as diabetes or high blood pressure.
NIQS Tech, for example, was spun off from the University of Leeds in 2022 and is developing a highly accurate, non-invasive sensor for measuring glucose levels. Traditional glucose-monitoring devices are painful and invasive because they basically involve sticking a needle in the body. While newer devices use light-based spectroscopic measurements, they tend to be less effective for patients with darker skin tones.
The sensor from NIQS Tech instead uses a doped silica platform, which enables quantum interference effects. When placed in contact with the skin and illuminated with laser light, the device fluoresces, with the lifetime of the fluorescence depending on the amount of glucose in the user’s blood, regardless of skin tone. NIQS has already demonstrated proof of concept with lab-based testing and now wants to shrink the technology to create a wearable device that monitors glucose levels continuously.
The fourth application of quantum tech lies in body scanning, which allows patients to be diagnosed without needing a biopsy. One company leading in this area is Cerca Magnetics, which was spun off from the University of Nottingham. In 2023 it won the inaugural qBIG prize for quantum innovation from the Institute of Physics, which publishes Physics World, for developing wearable optically pumped magnetometers for magnetoencephalography (MEG), which measure magnetic fields generated by neuronal firings in the brain. Its devices can be used to scan patients’ brains in a comfortable seated position and even while they are moving.
Quantum-based scanning techniques could also help diagnose breast cancer, which is usually done by exposing a patient’s breast tissue to low doses of X-rays. The trouble with such mammograms is that all breasts contain a mix of low-density fatty and other, higher-density tissue. The latter creates a “white blizzard” effect against the dark background, making it challenging to differentiate between healthy tissue and potential malignancies.
That’s a particular problem for the roughly 40% of women who have a higher concentration of higher-density tissue. One alternative is to use molecular breast imaging (MBI), which involves imaging the distribution of a radioactive tracer that has been intravenously injected into a patient. This tracer, however, exposes patients to a higher (albeit still safe) dose of radiation than with a mammogram, which means that patients have to be imaged for a long time to get enough signal.
A solution could lie with the UK-based firm Kromek, which is using cadmium zinc telluride (CZT) semiconductors that produce a measurable voltage pulse from just a single gamma-ray photon. As well as being very efficient over a broad range of X-ray and gamma-ray photon energies, CZTs can be integrated onto small chips operating at room temperature. Preliminary results with Kromek’s ultralow-dose and ultrafast detectors show they work with barely one-eighth of the amount of tracer as traditional MBI techniques.
“Our prototypes have shown promising results,” says Alexander Cherlin, who is principal physicist at Kromek. The company is now designing and building a full-size prototype of the camera as part of Innovate UK’s £2.5m “ultralow-dose” MBI project, which runs until the end of 2025. It involves Kromek working with hospitals in Newcastle along with researchers at University College London and the University of Newcastle.
The final application of quantum sensors to medicine lies in microscopy, which these days no longer just means visible light but everything from Raman and two-photon microscopy to fluorescence lifetime imaging and multiphoton microscopy. These techniques allow samples to be imaged at different scales and speeds, but they are all reaching various technological limits.
Quantum technologies can help us break those limits. Researchers at the University of Glasgow, for example, are among those to have used pairs of entangled photons to enhance microscopy through “ghost imaging”. One photon in each pair interacts with a sample, with the image built up by detecting the effect on its entangled counterpart. The technique avoids the noise created when imaging with low levels of light (Sci. Adv. 6 eaay2652).
Researchers at the University of Strathclyde, meanwhile, have used nanodiamonds to get around the problem that dyes added to biological samples eventually stop fluorescing. Known as photobleaching, the effect prevents samples from being studied after a certain time (Roy. Soc. Op. Sci. 6 190589). In the work, samples could be continually imaged and viewed using two-photon excitation microscopy with a 10-fold increase in resolution.
But despite the great potential of quantum sensors in medicine, there are still big challenges before the technology can be deployed in real, clinical settings. Scalability – making devices reliably, cheaply and in sufficient numbers – is a particular problem. Fortunately, things are moving fast. Even since the Quantum for Life report came out late in 2024, we’ve seen new companies being founded to address these problems.
One such firm is Bristol-based RobQuant, which is developing solid-state semiconductor quantum sensors for non-invasive magnetic scanning of the brain. Such sensors, which can be built with the standard processing techniques used in consumer electronics, allow for scans on different parts of the body. RobQuant claims its sensors are robust and operate at ambient temperatures without requiring any heating or cooling.
Agnethe Seim Olsen, the company’s co-founder and chief technologist, believes that making quantum sensors robust and scalable is vital if they are to be widely adopted in healthcare. She thinks the UK is leading the way in the commercialization of such sensors and will benefit from the latest phase of the country’s quantum hubs. Bringing academia and businesses together, they include the £24m Q-BIOMED biomedical-sensing hub led by University College London and the £27.5m QuSIT hub in imaging and timing led by the University of Birmingham.
Q-BIOMED is, for example, planning to use both single-crystal diamond and nanodiamonds to develop and commercialize sensors that can diagnose and treat diseases such as cancer and Alzheimer’s at much earlier stages of their development. “These healthcare ambitions are not restricted to academia, with many startups around the globe developing diamond-based quantum technology,” says Markham at Element Six.
As with the previous phases of the hubs, supporting further research encourages start-ups: researchers from the forerunner of the QuSIT hub, for example, set up Cerca Magnetics. The growing maturity of some of these quantum sensors will undoubtedly attract existing medical-technology companies. The next five years will be a busy and exciting time for the burgeoning use of quantum sensors in healthcare.
This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.
Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ.
Find out more on our quantum channel.
The post How quantum sensors could improve human health and wellbeing appeared first on Physics World.
The Helgoland 2025 meeting, marking 100 years of quantum mechanics, has featured a lot of mind-bending fundamental physics, quite a bit of which has left me scratching my head.
So it was great to hear a brilliant talk by David Moore of Yale University about some amazing practical experiments using levitated, trapped microspheres as quantum sensors to detect what he calls the “invisible” universe.
If the work sounds familiar to you, that’s because Moore’s team won a Physics World Top 10 Breakthrough of the Year award in 2024 for using their technique to detect the alpha decay of individual lead-212 atoms.
Speaking in the Nordseehalle on the island of Helgoland, Moore explained the next stage of the experiment, which could see it detect neutrinos “in a couple of months” at the earliest – and “at least within a year” at the latest.
Of course, physicists have already detected neutrinos, but it’s a complicated business, generally involving huge devices in deep underground locations where background signals are minimized. Yale’s setup is much cheaper, smaller and more convenient, involving no more than a couple of lab benches.
As Moore explained, he and his colleagues first trap silica spheres at low pressure, before removing excess electrons to electrically neutralize them. They then stabilize the spheres’ rotation before cooling them to microkelvin temperatures.
In the work that won the Physics World award last year, the team used samples of radon-220, which decays first into polonium-216 and then lead-212. These nuclei embed themselves in the silica spheres, which recoil when the lead-212 decays by releasing an alpha particle (Phys. Rev. Lett. 133 023602).
Moore’s team is able to measure the tiny recoil by watching how light scatters off the spheres. “We can see the force imparted by a subatomic particle on a heavier object,” he told the audience at Helgoland. “We can see single nuclear decays.”
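To get a feel for how small that recoil is, momentum conservation gives the sphere’s kick directly from the alpha particle’s momentum. The sketch below assumes a ~6 MeV alpha particle and a 3 µm silica sphere; these are illustrative values rather than the experiment’s exact parameters.

```python
# Rough momentum-conservation estimate of the recoil measured in this experiment.
# The ~6 MeV alpha energy and 3 micron sphere diameter are assumed illustrative values.
import math

M_ALPHA = 6.64e-27           # alpha particle mass, kg
E_ALPHA = 6.0e6 * 1.602e-19  # assumed 6 MeV in joules
RHO_SILICA = 2200.0          # silica density, kg/m^3
r = 1.5e-6                   # assumed sphere radius, m

m_sphere = RHO_SILICA * (4.0 / 3.0) * math.pi * r**3
p_alpha = math.sqrt(2.0 * M_ALPHA * E_ALPHA)   # non-relativistic alpha momentum
v_recoil = p_alpha / m_sphere                  # sphere recoil speed

print(f"sphere mass ~{m_sphere*1e12:.0f} pg, recoil ~{v_recoil*1e6:.1f} um/s")
```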
Now the plan is to extend the experiment to detect neutrinos. These won’t (at least initially) be the neutrinos that stream through the Earth from the Sun or even those from a nuclear reactor.
Instead, the idea will be to embed the spheres with nuclei that undergo beta decay, releasing a much lighter neutrino in the process. Moore says the team will do this within a year and, one day, potentially even use it to spot dark matter.
“We are reaching the quantum measurement regime,” he said. It’s a simple concept, even if the name – “Search for new Interactions in a Microsphere Precision Levitation Experiment” (SIMPLE) – isn’t.
This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.
Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.
Find out more on our quantum channel.
The post Yale researcher says levitated spheres could spot neutrinos ‘within months’ appeared first on Physics World.
The animal world – including some of its ickiest parts – never ceases to amaze. According to researchers in Canada and Singapore, velvet worm slime contains an ingredient that could revolutionize the design of high-performance polymers, making them far more sustainable than current versions.
“We have been investigating velvet worm slime as a model system for inspiring new adhesives and recyclable plastics because of its ability to reversibly form strong fibres,” explains Matthew Harrington, the McGill University chemist who co-led the research with Ali Miserez of Nanyang Technological University (NTU). “We needed to understand the mechanism that drives this reversible fibre formation, and we discovered a hitherto unknown feature of the proteins in the slime that might provide a very important clue in this context.”
The velvet worm (phylum Onychophora) is a small, caterpillar-like creature that lives in humid forests. Although several organisms, including spiders and mussels, produce protein-based slimy material outside their bodies, the slime of the velvet worm is unique. Produced from specialized papillae on each side of the worm’s head, and squirted out in jets whenever the worm needs to capture prey or defend itself, it quickly transforms from a sticky, viscoelastic gel into stiff, glassy fibres as strong as nylon.
When dissolved in water, these stiff fibres return to their biomolecular precursors. Remarkably, new fibres can then be drawn from the solution – implying that the instructions for fibre self-assembly are “encoded” within the precursors themselves, Harrington says.
Previously, the molecular mechanisms behind this reversibility were little understood. In the present study, however, the researchers used protein sequencing and the AI-guided protein structure prediction algorithm AlphaFold to identify a specific high-molecular-weight protein in the slime. Known as a leucine-rich repeat, this protein has a structure similar to that of a cell surface receptor protein called a Toll-like receptor (TLR).
In biology, Miserez explains, this type of receptor is involved in immune system response. It also plays a role in embryonic or neural development. In the worm slime, however, that’s not the case.
“We have now unveiled a very different role for TLR proteins,” says Miserez, who works in NTU’s materials science and engineering department. “They play a structural, mechanical role and can be seen as a kind of ‘glue protein’ at the molecular level that brings together many other slime proteins to form the macroscopic fibres.”
Miserez adds that the team found this same protein in different species of velvet worms that diverged from a common ancestor nearly 400 million years ago. “This means that this different biological function is very ancient from an evolutionary perspective,” he explains.
“It was very unusual to find such a protein in the context of a biological material,” Harrington adds. “By predicting the protein’s structure and its ability to bind to other slime proteins, we were able to hypothesize its important role in the reversible fibre formation behaviour of the slime.”
The team’s hypothesis is that the reversibility of fibre formation is based on receptor-ligand interactions between several slime proteins. While Harrington acknowledges that much work remains to be done to verify this, he notes that such binding is a well-described principle in many groups of organisms, including bacteria, plants and animals. It is also crucial for cell adhesion, development and innate immunity. “If we can confirm this, it could provide inspiration for making high-performance non-toxic (bio)polymeric materials that are also recyclable,” he tells Physics World.
The study, which is detailed in PNAS, was mainly based on computational modelling and protein structure prediction. The next step, say the McGill researchers, is to purify or recombinantly express the proteins of interest and test their interactions in vitro.
The post Worm slime could inspire recyclable polymer design appeared first on Physics World.
In this episode of the Physics World Weekly podcast we explore the career opportunities open to physicists and engineers looking to work within healthcare – as medical physicists or clinical engineers.
Physics World’s Tami Freeman is in conversation with two early-career physicists working in the UK’s National Health Service (NHS). They are Rachel Allcock, a trainee clinical scientist at University Hospitals Coventry and Warwickshire NHS Trust, and George Bruce, a clinical scientist at NHS Greater Glasgow and Clyde. We also hear from Chris Watt, head of communications and public affairs at IPEM, about the new IPEM careers guide.
This episode is supported by Radformation, which is redefining automation in radiation oncology with a full suite of tools designed to streamline clinical workflows and boost efficiency. At the centre of it all is AutoContour, a powerful AI-driven autocontouring solution trusted by centres worldwide.
The post Exploring careers in healthcare for physicists and engineers appeared first on Physics World.
If you dig deep enough, you’ll find that most biochemical and physiological processes rely on shuttling hydrogen atoms – protons – around living systems. Until recently, this proton transfer process was thought to occur when protons jump from water molecule to water molecule and between chains of amino acids. In 2023, however, researchers suggested that protons might, in fact, transfer at the same time as electrons. Scientists in Israel have now confirmed this is indeed the case, while also showing that proton movement is linked to the electrons’ spin, or magnetic moment. Since the properties of electron spin are defined by quantum mechanics, the new findings imply that essential life processes are intrinsically quantum in nature.
The scientists obtained this result by placing crystals of lysozyme – an enzyme commonly found in living organisms – on a magnetic substrate. Depending on the direction of the substrate’s magnetization, the spin of the electrons ejected from this substrate may be up or down. Once the electrons are ejected from the substrate, they enter the lysozymes. There, they become coupled to phonons, or vibrations of the crystal lattice.
Crucially, this coupling is not random. Instead, the chirality, or “handedness”, of the phonons determines which electron spin they will couple with – a property known as chiral induced spin selectivity.
When the scientists turned their attention to proton transfer through the lysozymes, they discovered that the protons moved much more slowly with one magnetization direction than they did with the opposite. This connection between proton transfer and spin-selective electron transfer did not surprise Yossi Paltiel, who co-led the study with his Hebrew University of Jerusalem (HUJI) colleagues Naama Goren, Nir Keren and Oded Livnah in collaboration with Nurit Ashkenazy of Ben Gurion University and Ron Naaman of the Weizmann Institute.
“Proton transfer in living organisms occurs in a chiral environment and is an essential process,” Paltiel says. “Since protons also have spin, it was logical for us to try to relate proton transfer to electron spin in this work.”
The finding could shed light on proton hopping in biological environments, Paltiel tells Physics World. “It may ultimately help us understand how information and energy are transferred inside living cells, and perhaps even allow us to control this transfer in the future.
“The results also emphasize the role of chirality in biological processes,” he adds, “and show how quantum physics and biochemistry are fundamentally related.”
The HUJI team now plans to study how the coupling between the proton transfer process and the transfer of spin polarized electrons depends on specific biological environments. “We also want to find out to what extent the coupling affects the activity of cells,” Paltiel says.
Their present study is detailed in PNAS.
The post Quantum physics guides proton motion in biological systems appeared first on Physics World.
I began my career in the 1990s at a university spin-out company, working for a business that developed vibration sensors to monitor the condition of helicopter powertrains and rotating machinery. It was a job that led to a career developing technologies and techniques for checking the “health” of machines, such as planes, trains and trucks.
What a difference three decades has made. When I started out, we would deploy bespoke systems that generated limited amounts of data. These days, everything has gone digital and there’s almost more information than we can handle. We’re also seeing a growing use of machine learning and artificial intelligence (AI) to track how machines operate.
In fact, with AI being increasingly used in medical science – for example to predict a patient’s risk of heart attacks – I’ve noticed intriguing similarities between how we monitor the health of machines and the health of human bodies. Jet engines and hearts are very different objects, but in both cases monitoring devices give us a set of digitized physical measurements.
Sensors installed on a machine provide various basic physical parameters, such as its temperature, pressure, flow rate or speed. More sophisticated devices can yield information about, say, its vibration, acoustic behaviour, or (for an engine) oil debris or quality. Bespoke sensors might even be added if an important or otherwise unchecked aspect of a machine’s performance needs to be monitored – provided the benefits of doing so outweigh the cost.
Generally speaking, the sensors you use in a particular situation depend on what’s worked before and whether you can exploit other measurements, such as those controlling the machine. But whatever sensors are used, the raw data then have to be processed and manipulated to extract particular features and characteristics.
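As a minimal sketch of that feature-extraction step, the snippet below condenses a raw vibration record into a handful of commonly used health indicators. The particular features chosen here are illustrative rather than a prescription.

```python
# Hedged sketch of the feature-extraction step: condensing a raw vibration
# record into a few health indicators. The choice of features is illustrative.
import numpy as np

def vibration_features(signal):
    x = np.asarray(signal, dtype=float)
    rms = np.sqrt(np.mean(x**2))                           # overall vibration level
    peak = np.max(np.abs(x))
    kurtosis = np.mean((x - x.mean())**4) / np.var(x)**2   # impulsiveness indicator
    return {"rms": rms, "peak": peak, "crest_factor": peak / rms, "kurtosis": kurtosis}

print(vibration_features(np.sin(np.linspace(0, 100, 5000))))
```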
Once you’ve done all that, you can then determine the health of the machine, rather like in medicine. Is it performing normally? Does it seem to be developing a fault? If the machine appears to be going wrong, can you try to diagnose what the problem might be?
Generally, we do this by tracking a range of parameters to look for consistent behaviour, such as a steady increase, or by seeing if a parameter exceeds a pre-defined threshold. With further analysis, we can also try to predict the future state of the machine, work out what its remaining useful life might be, or decide if any maintenance needs scheduling.
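The threshold-and-trend logic described above is simple enough to sketch in code. The snippet below is a minimal illustration of the idea, not any real monitoring product: the sensor values, threshold and trend limit are all invented for the example.

```python
import numpy as np

def assess_health(readings, threshold, trend_window=100, trend_limit=0.002):
    """Toy health check for one monitored parameter (e.g. vibration amplitude).

    readings     : 1D array of the parameter sampled at regular intervals
    threshold    : pre-defined alarm level the parameter should not exceed
    trend_window : number of recent samples used to estimate a drift
    trend_limit  : slope (units per sample) treated as a steady increase
    """
    readings = np.asarray(readings, dtype=float)

    # Rule 1: has the parameter ever exceeded its pre-defined threshold?
    if np.any(readings > threshold):
        return "alarm: threshold exceeded"

    # Rule 2: is there a consistent upward drift in the recent samples?
    recent = readings[-trend_window:]
    slope = np.polyfit(np.arange(len(recent)), recent, 1)[0]
    if slope > trend_limit:
        return "warning: steady increase detected"

    return "healthy"

# Simulated vibration amplitudes (arbitrary units) for two machines
rng = np.random.default_rng(1)
stable = 1.0 + 0.05 * rng.standard_normal(200)     # steady behaviour
wearing = stable + np.linspace(0.0, 0.6, 200)      # slow upward drift

print(assess_health(stable, threshold=2.0))    # -> healthy
print(assess_health(wearing, threshold=2.0))   # -> warning: steady increase detected
```

Real condition-monitoring systems layer many such rules, and increasingly machine-learning models, on top of one another, but the shape of the decision is the same.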
A diagnosis typically involves linking various anomalous physical parameters (or symptoms) to a probable cause. As machines obey the laws of physics, a diagnosis can either be based on engineering knowledge or be driven by data – or sometimes the two together. If a concrete diagnosis can’t be made, you can still get a sense of where a problem might lie before carrying out further investigation or doing a detailed inspection.
One way of doing this is to use a “borescope” – essentially a long, flexible cable with a camera on the end. Rather like an endoscope in medicine, it allows you to look down narrow or difficult-to-reach cavities. But unlike medical imaging, which generally takes place in the controlled environment of a lab or clinic, machine data are typically acquired “in the field”. The resulting images can be tricky to interpret because the light is poor, the measurements are inconsistent, or the equipment hasn’t been used in the most effective way.
Even though it can be hard to work out what you’re seeing, in-situ visual inspections are vital as they provide evidence of a known condition, which can be directly linked to physical sensor measurements. It’s a kind of health status calibration. But if you want to get more robust results, it’s worth turning to advanced modelling techniques, such as deep neural networks.
One way to predict the wear and tear of a machine’s constituent parts is to use what’s known as a “digital twin”. Essentially a virtual replica of a physical object, a digital twin is created by building a detailed model and then feeding in real-time information from sensors and inspections. The twin basically mirrors the behaviour, characteristics and performance of the real object.
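As a rough illustration of the concept (and not any particular vendor’s implementation), a digital twin can be as simple as a physics-based wear model that is continually nudged by measured data. Everything in the sketch below, from the wear law to the sensor names, is invented for the example.

```python
# Toy "digital twin" of a bearing: a wear model kept in sync with
# (simulated) sensor readings, used to estimate remaining useful life.

class BearingTwin:
    def __init__(self, wear_rate_per_hour=1e-4, failure_wear=1.0):
        self.wear = 0.0                      # dimensionless wear state
        self.wear_rate = wear_rate_per_hour  # nominal wear accumulated per hour
        self.failure_wear = failure_wear     # wear level treated as end of life

    def update(self, hours_run, vibration_rms, baseline_rms=1.0):
        """Advance the model, letting measured vibration scale the wear rate."""
        stress_factor = max(vibration_rms / baseline_rms, 0.0)
        self.wear += self.wear_rate * hours_run * stress_factor

    def remaining_useful_life(self, expected_rms=1.0):
        """Estimate hours left, assuming future running at the expected load."""
        rate = self.wear_rate * max(expected_rms, 1e-9)
        return max(self.failure_wear - self.wear, 0.0) / rate

twin = BearingTwin()
# Feed in a week of hourly sensor snapshots: harder running raises vibration
for hour in range(7 * 24):
    measured_rms = 1.8 if hour % 24 >= 16 else 1.0
    twin.update(hours_run=1, vibration_rms=measured_rms)

print(f"wear accumulated so far: {twin.wear:.3f}")
print(f"estimated hours to maintenance: {twin.remaining_useful_life():.0f}")
```

If the measured vibration stays low, the estimated maintenance date recedes; if the machine is driven hard, it moves closer, which is exactly the behaviour described next.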
Real-time health data are great because they allow machines to be serviced as and when required, rather than following a rigid maintenance schedule. For example, if a machine has been deployed heavily in a difficult environment, it can be serviced sooner, potentially preventing an unexpected failure. Conversely, if it’s been used relatively lightly and not shown any problems, then maintenance could be postponed or reduced in scope. This saves time and money because the equipment will be out of action less than anticipated.
Having information about a machine’s condition at any point in time not only allows this kind of “intelligent maintenance” but also lets us use associated resources wisely. For example, we can work out which parts will need repairing or replacing, when the maintenance will be required and who will do it. Spare parts can therefore be ordered only when required, saving money and optimizing supply chains.
Real-time health-monitoring data are particularly useful for companies that own many machines of one kind, such as airlines with fleets of planes or haulage firms with many trucks. The data give them a better understanding not just of how machines behave individually, but also of how they behave collectively, providing a “fleet-wide” view. Noticing and diagnosing failures from data becomes an iterative process, helping manufacturers create new or improved machine designs.
This all sounds great, but in some respects, it’s harder to understand a machine than a human. People can be taken to hospitals or clinics for a medical scan, but a wind turbine or jet engine, say, can’t be readily accessed, switched off or sent for treatment. Machines also can’t tell us exactly how they feel.
However, even humans don’t always know when there’s something wrong. That’s why it’s worth taking a leaf out of industry’s book and considering regular health monitoring and check-ups. There are lots of brilliant apps out there to monitor and track your heart rate, blood pressure, physical activity and blood-sugar levels.
Just as with a machine, you can avoid unexpected failure, reduce your maintenance costs, and make yourself more efficient and reliable. You could, potentially, even live longer too.
The post People benefit from medicine, but machines need healthcare too appeared first on Physics World.
Stars are cosmic musical instruments: they vibrate with complex patterns that echo through their interiors. These vibrations, known as pressure waves, ripple through the star, similar to the earthquakes that shake our planet. The frequencies of these waves hold information about the star’s mass, age and internal structure.
In a study led by researchers at UNSW Sydney, Australia, astronomer Claudia Reyes and colleagues “listened” to the sound from stars in the M67 cluster and discovered a surprising feature: a plateau in their frequency pattern. This plateau appears during the subgiant and red-giant phases, when stars expand and evolve after exhausting the hydrogen fuel in their cores. This feature, reported in Nature, reveals how deep the outer layers of the star have pushed into the interior and offers a new diagnostic to improve mass and age estimates of stars beyond the main sequence (the core-hydrogen-burning phase).
Beneath the surface of stars, hot gases are constantly rising, cooling and sinking back down, much like hot bubbles in boiling water. This constant churning is called convection. As these rising and sinking gas blobs collide or burst at the stellar surface, they generate pressure waves. These are essentially acoustic waves, bouncing within the stellar interior to create standing wave patterns.
Stars do not vibrate at just one frequency; they oscillate simultaneously at multiple frequencies, producing a spectrum of sounds. These acoustic oscillations cannot be heard in space directly, but are observed as tiny fluctuations in the star’s brightness over time.
Star clusters offer an ideal environment in which to study stellar evolution as all stars in a cluster form from the same gas cloud at about the same time with the same chemical compositions but with different masses. The researchers investigated stars from the open cluster M67, as this cluster has a rich population of evolved stars including subgiants and red giants with a chemical composition similar to the Sun’s. They measured acoustic oscillations in 27 stars using data from NASA’s Kepler/K2 mission.
Stars oscillate across a range of tones, and in this study the researchers focused on two key features in this oscillation spectrum: large and small frequency separations. The large frequency separation, which probes stellar density, is the frequency difference between oscillations of the same angular degree (ℓ) but different radial orders (n). The small frequency separation refers to frequency differences between the modes of degrees ℓ and ℓ + 2, of consecutive orders of n. For main sequence stars, small separations are reliable age indicators because their changes during hydrogen burning are well understood. In later stages of stellar evolution, however, their relationship to the stellar interior remained unclear.
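In symbols, the two separations are conventionally defined as follows (standard asteroseismology notation rather than anything specific to the new paper):

```latex
\[
\Delta\nu = \nu_{n+1,\ell} - \nu_{n,\ell},
\qquad
\delta\nu_{02} = \nu_{n,0} - \nu_{n-1,2},
\]
```

where ν_{n,ℓ} is the frequency of the mode with radial order n and angular degree ℓ. The large separation Δν scales roughly with the square root of the star’s mean density, while the small separation δν₀₂ is the quantity plotted against Δν in the C–D diagram described below.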
In 27 stars, Reyes and colleagues investigated the small separation between modes of degrees 0 and 2. Plotting a graph of small versus large frequency separations for each star, called a C–D diagram, they uncovered a surprising plateau in small frequency separations.
The researchers traced this plateau to the evolution of the lower boundary of the star’s convective envelope. As the envelope expands and cools, this boundary sinks deeper into the interior. Along this boundary, the density and sound speed change rapidly due to the difference in chemical composition on either side. These steep changes cause acoustic glitches that disturb how the pressure waves move through the star and temporarily stall the evolution of the small frequency separations, observed as a plateau in the frequency pattern.
This stalling occurs at a specific stage in stellar evolution – when the convective envelope deepens enough to encompass nearly 80% of the star’s mass. To confirm this connection, the researchers varied the amount of convective boundary mixing in their stellar models. They found that the depth of the envelope directly influenced both the timing and shape of the plateau in the small separations.
This plateau serves as a new diagnostic tool to identify a specific evolutionary stage in red giant stars and improve estimates of their mass and age.
“The discovery of the ‘plateau’ frequencies is significant because it represents one more corroboration of the accuracy of our stellar models, as it shows how the turbulent regions at the bottom of a star’s envelope affect the sound speed,” explains Reyes, who is now at the Australian National University in Canberra. “They also have great potential to help determine with ease and great accuracy the mass and age of a star, which is of great interest for galactic archaeology, the study of the history of our galaxy.”
The sounds of starquakes offer a new window to study the evolution of stars and, in turn, recreate the history of our galaxy. Clusters like M67 serve as benchmarks to study and test stellar models and understand the future evolution of stars like our Sun.
“We plan to look for stars in the field which have very well-determined masses and which are in their ‘plateau’ phase,” says Reyes. “We will use these stars to benchmark the diagnostic potential of the plateau frequencies as a tool, so it can later be applied to stars all over the galaxy.”
The post New analysis of M67 cluster helps decode the sound of stars appeared first on Physics World.
Powerful flares on highly magnetic neutron stars called magnetars could produce up to 10% of the universe’s gold, silver and platinum, according to a new study. What is more, astronomers may have already observed this cosmic alchemy in action.
Gold, silver, platinum and a host of other rare heavy nuclei are known as rapid neutron-capture process (r-process) elements, because astronomers believe they are produced when lighter nuclei rapidly capture neutrons. Neutrons can only exist outside an atomic nucleus for about 15 min before decaying (except in the most extreme environments). This means that the r-process must be fast and must take place in environments rich in free neutrons.
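Schematically, the defining requirement of the r-process is a competition of timescales (a textbook statement rather than a result of the new study): neutron capture must outpace beta decay,

```latex
\[
\tau_{\mathrm{capture}} \sim \frac{1}{n_n \langle\sigma v\rangle} \;\ll\; \tau_{\beta},
\]
```

where n_n is the free-neutron number density, ⟨σv⟩ is the thermally averaged capture cross-section times relative velocity, and τ_β is the beta-decay lifetime of the capturing nucleus. Meeting this condition requires neutron densities far beyond anything found in ordinary stellar burning, which is why the search for r-process sites focuses on such violent events.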
In August 2017, an explosion resulting from the merger of two neutron stars was witnessed by telescopes operating across the electromagnetic spectrum and by gravitational-wave detectors. Dubbed a kilonova, the explosion produced approximately 16,000 Earth-masses worth of r-process elements, including about ten Earth masses of gold and platinum.
While the observations seem to answer the question of where precious metals came from, there remains a suspicion that neutron-star mergers cannot explain the entire abundance of r-process elements in the universe.
Now researchers led by Anirudh Patel, who is a PhD student at New York’s Columbia University, have created a model that describes how flares on the surface of magnetars can create r-process elements.
Patel tells Physics World that “The rate of giant flares is significantly greater than mergers.” However, given that one merger “produces roughly 10,000 times more r-process mass than a single magnetar flare”, neutron-star mergers are still the dominant factory of rare heavy elements.
A magnetar is an extreme type of neutron star with a magnetic field strength of up to a thousand trillion gauss. This makes magnetars the most magnetic objects in the universe. Indeed, if a magnetar were as close to Earth as the Moon, its magnetic field would wipe your credit card.
Astrophysicists believe that when a magnetar’s powerful magnetic fields are pulled taut, the magnetic tension will inevitably snap. This would result in a flare, which is an energetic ejection of neutron-rich material from the magnetar’s surface.
However, the physics isn’t entirely understood, according to Jakub Cehula of Charles University in the Czech Republic, who is a member of Patel’s team. “While the source of energy for a magnetar’s giant flares is generally agreed to be the magnetic field, the exact mechanism by which this energy is released is not fully understood,” he explains.
One possible mechanism is magnetic reconnection, which creates flares on the Sun. Flares could also be produced by energy released during starquakes following a build-up of magnetic stress. However, neither satisfactorily explains the giant flares, of which only nine have thus far been detected.
In 2024 Cehula led research that attempted to explain the flares by combining starquakes with magnetic reconnection. “We assumed that giant flares are powered by a sudden and total dissipation of the magnetic field right above a magnetar’s surface,” says Cehula.
This sudden release of energy drives a shockwave into the magnetar’s neutron-rich crust, blasting a portion of it into space at velocities greater than a tenth of the speed of light, where in theory heavy elements are formed via the r-process.
Remarkably, astronomers may have already witnessed this in 2004, when a giant magnetar flare was spotted as a half-second gamma-ray burst that released more energy than the Sun does in a million years. What happened next remained unexplained until now. Ten minutes after the initial burst, the European Space Agency’s INTEGRAL satellite detected a second, weaker signal that was not understood.
Now, Patel and colleagues have shown that the r-process in this flare created unstable isotopes that quickly decayed into stable heavy elements – creating the gamma-ray signal.
Patel calculates that the 2004 flare resulted in the creation of two million billion billion kilograms of r-process elements, equivalent to about the mass of Mars.
Extrapolating, Patel calculates that giant flares on magnetars contribute between 1% and 10% of all the r-process elements in the universe.
“This estimate accounts for the fact that these giant flares are rare,” he says. “But it’s also important to note that magnetars have lifetimes of 1000 to 10,000 years, so while there may only be a couple of dozen magnetars known to us today, there have been many more magnetars that have lived and died over the course of the 13 billion-year history of our galaxy.”
Magnetars would have been produced early in the universe by the supernovae of massive stars, whereas it can take a billion years or longer for two neutron stars to merge. Hence, magnetars would have been a more dominant source of r-process elements in the early universe. However, they may not have been the only source.
“If I had to bet, I would say there are other environments in which r-process elements can be produced, for example in certain rare types of core-collapse supernovae,” says Patel.
Either way, it means that some of the gold and silver in your jewellery was forged in the violence of immense magnetic fields snapping on a dead star.
The research is described in Astrophysical Journal Letters.
The post How magnetar flares give birth to gold and platinum appeared first on Physics World.
Subtle quantum effects within atomic nuclei can dramatically affect how some nuclei break apart. By studying 100 isotopes with masses below that of lead, an international team of physicists uncovered a previously unknown region in the nuclear landscape where fragments of fission split in an unexpected way. This is driven not by the usual forces, but by shell effects rooted in quantum mechanics.
“When a nucleus splits apart into two fragments, the mass and charge distribution of these fission fragments exhibits the signature of the underlying nuclear structure effect in the fission process,” explains Pierre Morfouace of Université Paris-Saclay, who led the study. “In the exotic region of the nuclear chart that we studied, where nuclei do not have many neutrons, a symmetric split was previously expected. However, the asymmetric fission means that a new quantum effect is at stake.”
This unexpected discovery not only sheds light on the fine details of how nuclei break apart but also has far-reaching implications. These range from the development of safer nuclear energy to understanding how heavy elements are created during cataclysmic astrophysical events like stellar explosions.
Fission is the process by which a heavy atomic nucleus splits into smaller fragments. It is governed by a complex interplay of forces. The strong nuclear force, which binds protons and neutrons together, competes with the electromagnetic repulsion between positively charged protons. The result is that certain nuclei are unstable, and in the absence of other effects this competition typically leads to symmetric fission.
But there’s another, subtler phenomenon at play: quantum shell effects. These arise because protons and neutrons inside the nucleus tend to arrange themselves into discrete energy levels or “shells,” much like electrons do in atoms.
“Quantum shell effects [in atomic electrons] play a major role in chemistry, where they are responsible for the properties of noble gases,” says Cedric Simenel of the Australian National University, who was not involved in the study. “In nuclear physics, they provide extra stability to spherical nuclei with so-called ‘magic’ numbers of protons or neutrons. Such shell effects drive heavy nuclei to often fission asymmetrically.”
In the case of very heavy nuclei, such as uranium or plutonium, this asymmetry is well documented. But in lighter, neutron-deficient nuclei – those with fewer neutrons than their stable counterparts – researchers had long expected symmetric fission, where the nucleus breaks into two roughly equal parts. This new study challenges that view.
To investigate fission in this less-explored part of the nuclear chart, scientists from the R3B-SOFIA collaboration carried out experiments at the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt, Germany. They focused on nuclei ranging from iridium to thorium, many of which had never been studied before. The nuclei were fired at high energies into a lead target to induce fission.
The fragments produced in each fission event were carefully analysed using a suite of high-resolution detectors. A double ionization chamber captured the number of protons in each product, while a superconducting magnet and time-of-flight detectors tracked their momentum, enabling a detailed reconstruction of how the split occurred.
Using this method, the researchers found that the lightest fission fragments were frequently formed with 36 protons, which is the atomic number of krypton. This pattern suggests the presence of a stabilizing shell effect at that specific proton number.
“Our data reveal the stabilizing effect of proton shells at Z=36,” explains Morfouace. “This marks the identification of a new ‘island’ of asymmetric fission, one driven by the light fragment, unlike the well-known behaviour in heavier actinides. It expands our understanding of how nuclear structure influences fission outcomes.”
“Experimentally, what makes this work unique is that they provide the distribution of protons in the fragments, while earlier measurements in sub-lead nuclei were essentially focused on the total number of nucleons,” comments Simenel.
Since quantum shell effects are tied to specific numbers of protons or neutrons, not just the overall mass, these new measurements offer direct evidence of how proton shell structure shapes the outcome of fission in lighter nuclei. This makes the results particularly valuable for testing and refining theoretical models of fission dynamics.
“This work will undoubtedly lead to further experimental studies, in particular with more exotic light nuclei,” Simenel adds. “However, to me, the ball is now in the camp of theorists who need to improve their modelling of nuclear fission to achieve the predictive power required to study the role of fission in regions of the nuclear chart not accessible experimentally, as in nuclei formed in the astrophysical processes.”
The research is described in Nature.
The post Subtle quantum effects dictate how some nuclei break apart appeared first on Physics World.
Five-body recombination, in which five identical atoms form a tetramer molecule and a single free atom, could be the largest contributor to loss from ultracold atom traps at specific “Efimov resonances”, according to calculations done by physicists in the US. The process, which is less well understood than three- and four-body recombination, could be useful for building molecules, and potentially for modelling nuclear fusion.
A collision involving trapped atoms can be either elastic – in which the internal states of the atoms and their total kinetic energy remain unchanged – or inelastic, in which there is an interchange between the kinetic energy of the system and the internal energy states of the colliding atoms.
Most collisions in a dilute quantum gas involve only two atoms, and when physicists were first studying Bose-Einstein condensates (the ultralow-temperature state of some atomic gases), they suppressed inelastic two-body collisions, keeping the atoms in the desired state and preserving the condensate. A relatively small number of collisions, however, involve three or more bodies colliding simultaneously.
“They couldn’t turn off three body [inelastic collisions], and that turned out to be the main reason atoms leaked out of the condensate,” says theoretical physicist Chris Greene of Purdue University in the US.
While attempting to understand inelastic three-body collisions, Greene and colleagues made the connection to work done in the 1970s by the Soviet theoretician Vitaly Efimov. He showed that at specific “resonances” of the scattering length, quantum mechanics allowed two colliding particles that could otherwise not form a bound state to do so in the presence of a third particle. While Efimov first considered the scattering of nucleons (protons and neutrons) or alpha particles, the effect applies to atoms and other quantum particles.
In the case of trapped atoms, the bound dimer and free atom are then ejected from the trap by the energy released from the binding event. “There were signatures of this famous Efimov effect that had never been seen experimentally,” Greene says. This was confirmed in 2005 by experiments from Rudolf Grimm’s group at the University of Innsbruck in Austria.
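For three identical bosons, these Efimov resonances follow a universal geometric pattern (a standard result of Efimov physics, quoted here for context rather than taken from the new paper):

```latex
\[
a_{n+1} \simeq e^{\pi/s_0}\, a_n \approx 22.7\, a_n,
\]
```

where a_n is the scattering length at which the nth Efimov trimer crosses the three-atom threshold and s₀ ≈ 1.006 is Efimov’s universal constant. The four- and five-body resonances discussed below are predicted to appear at scattering lengths tied to these trimer states.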
Hundreds of scientific papers have now been written about three-body recombination. Greene and colleagues subsequently predicted resonances at which four-body Efimov recombination could occur, producing a trimer. These were observed almost immediately by Grimm and colleagues. “Five was just too hard for us to do at the time, and only now are we able to go that next step,” says Greene.
In the new work, Greene and colleague Michael Higgins modelled collisions between identical caesium atoms in an optical trap. At specific resonances, five-body recombination – in which five colliding atoms produce a tetramer and a single free atom – is not only enhanced but becomes the principal loss channel. The researchers believe these resonances should be experimentally observable using today’s laser box traps, which hold atomic gases in a square-well potential.
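One way to see why such a process shows up as trap loss is through the density scaling of few-body recombination. In a generic rate-equation description (a standard textbook form, not the authors’ specific calculation), each K-body channel removes atoms at a rate proportional to the Kth power of the density:

```latex
\[
\frac{dn}{dt} \simeq -L_3\, n^3 - L_4\, n^4 - L_5\, n^5,
\]
```

so a five-body channel can dominate only when its rate coefficient L₅ is resonantly enhanced, as predicted here, or when the gas is very dense.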
“For most ultracold experiments, researchers will be avoiding loss as much as possible – they would stay away from these resonances,” says Greene. “But for those of us in the few-body community interested in how atoms bind and resonate and how to describe complicated rearrangement, it’s really interesting to look at these points where the loss becomes resonant and very strong.” This is one technique that can be used to create new molecules, for example.
In future, Greene hopes to apply the model to nucleons themselves. “There have been very few people in the few-body theory community willing to tackle a five-particle collision – the Schrödinger equation has so many dimensions,” he says.
He hopes it may be possible to apply the researchers’ toolkit to nuclear reactions. “The famous one is the deuterium/tritium fusion reaction. When they collide they can form an alpha particle and a neutron and release a ton of energy, and that’s the basis of fusion reactors…There’s only one theory in the world from the nuclear community, and it’s such an important reaction I think it needs to be checked,” he says.
The researchers also wish to study the possibility of even larger bound states. However, they foresee a problem because the scattering length of the ground state resonance gets shorter and shorter with each additional particle. “Eventually the scattering length will no longer be the dominant length scale in the problem, and we think between five and six is about where that border line occurs,” Greene says. Nevertheless, higher-lying, more loosely-bound six-body Efimov resonances could potentially be visible at longer scattering lengths.
The research is described in Proceedings of the National Academy of Sciences.
Theoretical physicist Ravi Rau of Louisiana State University in the US is impressed by Greene and Higgins’ work. “For quite some time Chris Greene and a succession of his students and post-docs have been extending the three-body work that they did, using the same techniques, to four and now five particles,” he says. “Each step is much more complicated, and that he could use this technique to extend it to five bosons is what I see as significant.” Rau says, however, that “there is a vast gulf” between five atoms and the number treated by statistical mechanics, so new theoretical approaches may be required to bridge the gap.
The post Five-body recombination could cause significant loss from atom traps appeared first on Physics World.
Physicists have set a new upper bound on the interaction strength of dark matter by simulating the collision of two clouds of interstellar plasma. The result, from researchers at Ruhr University Bochum in Germany, CINECA in Italy and the Instituto Superior Tecnico in Portugal, could force a rethink on theories describing this mysterious substance, which is thought to make up more than 85% of the mass in the universe.
Since dark matter has only ever been observed through its effect on gravity, we know very little about what it’s made of. Indeed, various theories predict that dark matter particles could have masses ranging from around 10⁻²² eV to around 10¹⁹ GeV – a staggering 50 orders of magnitude.
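The quoted range really does span 50 decades, as a quick unit conversion shows:

```latex
\[
\frac{10^{19}\,\mathrm{GeV}}{10^{-22}\,\mathrm{eV}}
= \frac{10^{19}\times 10^{9}\,\mathrm{eV}}{10^{-22}\,\mathrm{eV}}
= 10^{50}.
\]
```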
Another major unknown about dark matter is whether it interacts via forces other than gravity, either with itself or with other particles. Some physicists have hypothesized that dark matter particles might possess positive and negative “dark charges” that interact with each other via “dark electromagnetic forces”. According to this supposition, dark matter could behave like a cold plasma of self-interacting particles.
In the new study, the team searched for evidence of dark interactions in a cluster of galaxies located several billion light years from Earth. This galactic grouping is known as the Bullet Cluster, and it contains a subcluster that is moving away from the main body after passing through it at high speed.
Since the most basic model of dark-matter interactions relies on the same equations as ordinary electromagnetism, the researchers chose to simulate these interactions in the Bullet Cluster system using the same computational tools they would use to describe electromagnetic interactions in a standard plasma. They then compared their results with real observations of the Bullet Cluster.
The new work builds on a previous study in which members of the same team simulated the collision of two clouds of standard plasma passing through one another. This study found that as the clouds merged, electromagnetic instabilities developed. These instabilities had the effect of redistributing energy from the opposing flows of the clouds, slowing them down while also broadening the temperature range within them.
The latest study showed that, as expected, the plasma components of the subcluster and main body slowed down thanks to ordinary electromagnetic interactions. That, however, appeared to be all that happened, as the data contained no sign of additional dark interactions. While the team’s finding doesn’t rule out dark electromagnetic interactions entirely, team member Kevin Schoeffler explains that it does mean that these interactions, which are characterized by a parameter known as α_D, must be far weaker than their ordinary-matter counterpart. “We can thus calculate an upper limit for the strength of this interaction,” he says.
This limit, which the team calculated as α_D < 4 × 10⁻²⁵ for a dark matter particle with a mass of 1 TeV, rules out many of the simplest dark matter theories and will require them to be rethought, Schoeffler says. “The calculations were made possible thanks to detailed discussions with scientists working outside of our speciality of physics, namely plasma physicists,” he tells Physics World. “Throughout this work, we had to overcome the challenge of connecting with very different fields and interacting with communities that speak an entirely different language to ours.”
As for future work, the physicists plan to compare the results of their simulations with other astronomical observations, with the aim of constraining the upper limit of the dark electromagnetic interaction even further. More advanced calculations, such as those that include finer details of the cloud models, would also help refine the limit. “These more realistic setups would include other plasma-like electromagnetic scenarios and ‘slowdown’ mechanisms, leading to potentially stronger limits,” Schoeffler says.
The present study is detailed in Physical Review D.
The post Plasma physics sets upper limit on the strength of ‘dark electromagnetism’ appeared first on Physics World.
In the quantum world, observing a particle is not a passive act. If you shine light on a quantum object to measure its position, photons scatter off it and disturb its motion. This disturbance is known as quantum backaction noise, and it limits how precisely physicists can observe or control delicate quantum systems.
Physicists at Swansea University have now proposed a technique that could eliminate quantum backaction noise in optical traps, allowing a particle to remain suspended in space undisturbed. This would bring substantial benefits for quantum sensors, as the amount of noise in a system determines how precisely a sensor can measure forces such as gravity; detect as-yet-unseen interactions between gravity and quantum mechanics; and perhaps even search for evidence of dark matter.
There’s just one catch: for the technique to work, the particle needs to become invisible.
Backaction noise is a particular challenge in the field of levitated optomechanics, where physicists seek to trap nanoparticles using light from lasers. “When you levitate an object, the whole thing moves in space and there’s no bending or stress, and the motion is very pure,” explains James Millen, a quantum physicist who studies levitated nanoparticles at King’s College London, UK. “That’s why we are using them to detect crazy stuff like dark matter.”
While some noise is generally unavoidable, Millen adds that there is a “sweet spot” called the Heisenberg limit. “This is where you have exactly the right amount of measurement power to measure the position optimally while causing the least noise,” he explains.
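That sweet spot can be stated more precisely using the standard imprecision–backaction inequality of continuous quantum measurement (textbook optomechanics rather than a formula from the Swansea paper):

```latex
\[
\sqrt{S_x^{\mathrm{imp}}\, S_F^{\mathrm{ba}}} \;\ge\; \frac{\hbar}{2},
\]
```

where S_x^imp is the spectral density of the position-measurement imprecision noise and S_F^ba is that of the backaction force noise. Operating at equality, with neither contribution dominating, corresponds to the Heisenberg-limited measurement Millen describes.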
The problem is that laser beams powerful enough to suspend a nanoparticle tend to push the system away from the Heisenberg limit, producing an increase in backaction noise.
The Swansea team’s method avoids this problem by, in effect, blocking the flow of information from the trapped nanoparticle. Its proposed setup uses a standing-wave laser to trap a nanoparticle in space with a hemispherical mirror placed around it. When the mirror has a specific radius, the scattered light from the particle and its reflection interfere so that the outgoing field no longer encodes any information about the particle’s position.
At this point, the particle is effectively invisible to the observer, with an interesting consequence: because the scattered light carries no usable information about the particle’s location, quantum backaction disappears. “I was initially convinced that we wanted to suppress the scatter,” team leader James Bateman tells Physics World. “After rigorous calculation, we arrived at the correct and surprising answer: we need to enhance the scatter.”
In fact, when scattering radiation is at its highest, the team calculated that the noise should disappear entirely. “Even though the particle shines brighter than it would in free space, we cannot tell in which direction it moves,” says Rafał Gajewski, a postdoctoral researcher at Swansea and Bateman’s co-author on a paper in Physical Review Research describing the technique.
Gajewski and Bateman’s result flips a core principle of quantum mechanics on its head. While it’s well known that measuring a quantum system disturbs it, the reverse is also true: if no information can be extracted, then no disturbance occurs, even when photons continuously bombard the particle. If physicists do need to gain information about the trapped nanoparticle, they can use a different, lower-energy laser to make their measurements, allowing experiments to be conducted at the Heisenberg limit with minimal noise.
For the method to work experimentally, the team say the mirror needs a high-quality surface and a radius that is stable with temperature changes. “Both requirements are challenging, but this level of control has been demonstrated and is achievable,” Gajewski says.
Positioning the particle precisely at the center of the hemisphere will be a further challenge, he adds, while the “disappearing” effect depends on the mirror’s reflectivity at the laser wavelength. The team is currently investigating potential solutions to both issues.
If demonstrated experimentally, the team says the technique could pave the way for quieter, more precise experiments and unlock a new generation of ultra-sensitive quantum sensors. Millen, who was not involved in the work, agrees. “I think the method used in this paper could possibly preserve quantum states in these particles, which would be very interesting,” he says.
Because nanoparticles are far more massive than atoms, Millen adds, they interact more strongly with gravity, making them ideal candidates for testing whether gravity follows the strange rules of quantum theory. “Quantum gravity – that’s like the holy grail in physics!” he says.
The post Quantum effect could tame noisy nanoparticles by rendering them invisible appeared first on Physics World.
During this webinar, the key steps of integrating an MRI scanner and an MR-Linac into a radiotherapy department will be presented, with a special focus on the quality assurance required for the use of the MRI images. Furthermore, the use of phantoms, and how they complement one another across the multi-vendor facility, will be discussed.
Akos Gulyban is a medical physicist with a PhD in Physics (in Medicine), renowned for his expertise in MRI-guided radiotherapy (MRgRT). Currently based at Institut Jules Bordet in Brussels, he plays a pivotal role in advancing MRgRT technologies, particularly through the integration of the Elekta Unity MR-Linac system alongside the implementation of dedicated MRI simulation for radiotherapy.
In addition to his clinical research, Gulyban has been involved in developing quality assurance protocols for MRI-linear accelerator (MR-Linac) systems, contributing to guidelines that ensure safe and effective implementation of MRI-guided radiotherapy.
Gulyban is playing a pivotal role in integrating advanced imaging technologies into radiotherapy, striving to enhance treatment outcomes for cancer patients.
The post MR QA from radiotherapy perspective appeared first on Physics World.
This episode of the Physics World Weekly podcast features the Nobel laureate Ferenc Krausz. He is director of the Max-Planck Institute of Quantum Optics and a professor at LMU Munich, both in Germany, and CEO and scientific director of the Center for Molecular Fingerprinting in Budapest, Hungary.
In a conversation with Physics World’s Tami Freeman, Krausz talks about his research into using ultrashort-pulsed laser technology to develop a diagnostic tool for early disease detection. He also discusses his collaboration with Semmelweis University to establish the John von Neumann Institute for Data Science, and describes the Science4People initiative, a charity that he and his colleagues founded to provide education for children who have been displaced by the war in Ukraine.
On 13–14 May, The Economist is hosting Commercialising Quantum Global 2025 in London. The event is supported by the Institute of Physics – which brings you Physics World. Participants will join global leaders from business, science and policy for two days of real-world insights into quantum’s future. In London you will explore breakthroughs in quantum computing, communications and sensing, and discover how these technologies are shaping industries, economies and global regulation. Register now and use code QUANTUM20 to receive 20% off. This offer ends on 4 May.
The post Ferenc Krausz explains how ultrashort laser pulses could help detect disease appeared first on Physics World.
In his debut book, Einstein’s Tutor: the Story of Emmy Noether and the Invention of Modern Physics, Lee Phillips champions the life and work of German mathematician Emmy Noether (1882–1935). Despite living a life filled with obstacles, injustices and discrimination as a Jewish mathematician, Noether revolutionized the field and discovered “the single most profound result in all of physics”. Phillips’ book weaves the story of her extraordinary life around the central subject of “Noether’s theorem”, which itself sits at the heart of a fascinating era in the development of modern theoretical physics.
Noether grew up at a time when women had few rights. Unable to officially register as a student, she was instead able to audit courses at the University of Erlangen in Bavaria, with the support of her father, who was a mathematics professor there. At the time, young Noether was one of only two female auditors among the university’s 986 students. Just two years previously, the university faculty had declared that mixed-sex education would “overthrow academic order”. Despite going against this formidable status quo, she was able to graduate in 1903.
Noether continued her pursuit of advanced mathematics, travelling to the “[world’s] centre of mathematics” – the University of Göttingen. Here, she was able to sit in the lectures of some of the brightest mathematical minds of the time – Karl Schwarzschild, Hermann Minkowski, Otto Blumenthal, Felix Klein and David Hilbert. While there, the law finally changed: women were, at last, allowed to enrol as students at university. In 1904 Noether returned to the University of Erlangen to complete her postgraduate dissertation under the supervision of Paul Gordan. At the time, she was the only woman to matriculate alongside 46 men.
Despite being more than qualified, Noether was unable to secure a university position after graduating from her PhD in 1907. Instead, she worked unpaid for almost a decade – teaching her father’s courses and supervising his PhD students. As of 1915, Noether was the only woman in the whole of Europe with a PhD in mathematics. She had worked hard to be recognized as an expert on symmetry and invariant theory, and eventually accepted an invitation from Klein and Hilbert to work alongside them in Göttingen. Here, the three of them would meet Albert Einstein to discuss his latest project – a general theory of relativity.
In Einstein’s Tutor, Phillips paints an especially vivid picture of Noether’s life at Göttingen, among colleagues including Klein, Hilbert and Einstein, who loom large and bring a richness to the story. Indeed, much of the first three chapters are dedicated to these men, setting the scene for Noether’s arrival in Göttingen. Phillips makes it easy to imagine these exceptionally talented and somewhat eccentric individuals working at the forefront of mathematics and theoretical physics together. And it was here, when supporting Einstein with the development of general relativity (GR), that Noether discovered a profound result: for every symmetry in the universe, there is a corresponding conservation law.
Throughout the book, Phillips makes the case that, without Noether, Einstein would never have been able to get to the heart of GR. Einstein himself “expressed wonderment at what happened to his equations in her hands, how he never imagined that things could be expressed with such elegance and generality”. Phillips argues that Einstein should not be credited as the sole architect of GR. Indeed, the contributions of Grossmann, Klein, Besso, Hilbert and, crucially, Noether remain largely unacknowledged – a wrong that Phillips is trying to right with this book.
A key theme running through Einstein’s Tutor is the importance of the support and allyship that Noether received from her male contemporaries. While at Göttingen, there was a battle to allow Noether to receive her habilitation (eligibility for tenure). Many argued in her favour but considered her an exception, and believed that in general, women were not suited as university professors. Hilbert, in contrast, saw her sex as irrelevant (famously declaring “this is not a bath house”) and pointed out that science requires the best people, of which she was one. Einstein also fought for her on the basis of equal rights for women.
Eventually, in 1919 Noether was allowed to habilitate (as an exception to the rule) and was promoted to professor in 1922. However, she was still not paid for her work. In fact, her promotion came with the specific condition that she remained unpaid, making it clear that Noether “would not be granted any form of authority over any male employee”. Hilbert however, managed to secure a contract with a small salary for her from the university administration.
Her allies rose to the cause again in 1933, when Noether was one of the first Jewish academics to be dismissed under the Nazi regime. After her expulsion, German mathematician Helmut Hasse convinced 14 other colleagues to write letters advocating for her importance, asking that she be allowed to continue as a teacher to a small group of advanced students – the government denied this request.
When the time came to leave Germany, many colleagues wrote testimonials in her support for immigration, with one writing “She is one of the 10 or 12 leading mathematicians of the present generation in the entire world.” Rather than being placed at a prestigious university or research institute (Hermann Weyl and Einstein were both placed at “the men’s university”, the Institute for Advanced Study in Princeton), it was recommended she join Bryn Mawr, a women’s college in Pennsylvania, US. Her position there would “compete with no-one… the most distinguished feminine mathematician connected with the most distinguished feminine university”. Phillips makes clear his distaste for the phrasing of this recommendation. However, all accounts show that she was happy at Bryn Mawr and stayed there until her unexpected death in 1935 at the age of 53.
With a PhD in theoretical physics, Phillips has worked for many years in both academia and industry. His background shows itself clearly in some unusual writing choices. While his writing style is relaxed and conversational, it includes the occasional academic turn of phrase (e.g. “In this chapter I will explain…”), which feels out of place in a popular-science book. He also has a habit of piling repetitive and overly sincere praise onto Noether. I personally prefer stories that adopt the “show, don’t tell” approach – her abilities speak for themselves, so it should be easy to let the reader come to their own conclusions.
Phillips has made the ambitious choice to write a popular-science book about complex mathematical concepts such as symmetries and conservation laws that are challenging to explain, especially to general readers. He does his best to describe the mathematics and physics behind some of the key concepts around Noether’s theorem. However, in places, you do need to have some familiarity with university-level physics and maths to properly follow his explanations. The book also includes a 40-page appendix filled with additional physics content, which I found unnecessary.
Einstein’s Tutor does achieve its primary goal of familiarizing the reader with Emmy Noether and the tremendous significance of her work. The final chapter on her legacy breezes quickly through developments in particle physics, astrophysics, quantum computers, economics and XKCD Comics to highlight the range and impact this single theorem has had. Phillips’ goal was to take Noether into the mainstream, and this book is a small step in the right direction. As cosmologist and author Katie Mack summarizes perfectly: “Noether’s theorem is to theoretical physics what natural selection is to biology.”
The post Mathematical genius: celebrating the life and work of Emmy Noether appeared first on Physics World.
Nonlocal correlations that define quantum entanglement could be reconciled with Einstein’s theory of relativity if space–time had two temporal dimensions. That is the implication of new theoretical work that extends nonlocal hidden variable theories of quantum entanglement and proposes a potential experimental test.
Marco Pettini, a theoretical physicist at Aix Marseille University in France, says the idea arose from conversations with the mathematical physicist Roger Penrose – who shared the 2020 Nobel Prize for Physics for showing that the general theory of relativity predicted black holes. “He told me that, from his point of view, quantum entanglement is the greatest mystery that we have in physics,” says Pettini. The puzzle is encapsulated by Bell’s inequality, which was derived in the mid-1960s by the Northern Irish physicist John Bell.
Bell’s breakthrough was inspired by the 1935 Einstein–Podolsky–Rosen paradox, a thought experiment in which entangled particles in quantum superpositions (using the language of modern quantum mechanics) travel to spatially separated observers Alice and Bob. They make measurements of the same observable property of their particles. As they are superposition states, the outcome of neither measurement is certain before it is made. However, as soon as Alice measures the state, the superposition collapses and Bob’s measurement is now fixed.
A sceptic of quantum indeterminacy could hypothetically suggest that the entangled particles carried hidden variables all along, so that when Alice made her measurement, she simply found out the state that Bob would measure rather than actually altering it. If the observers are separated by a distance so great that information about the hidden variable’s state would have to travel faster than light between them, then hidden variable theory violates relativity. Bell derived an inequality showing the maximum degree of correlation between the measurements possible if each particle carried such a “local” hidden variable, and showed it was indeed violated by quantum mechanics.
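The version of Bell’s bound most often tested in the laboratory is the later CHSH form (quoted here for context; it is a refinement of the 1964 inequality rather than the original):

```latex
\[
S = E(a,b) - E(a,b') + E(a',b) + E(a',b'),
\qquad
|S| \le 2,
\]
```

where E(a,b) is the correlation between Alice’s and Bob’s outcomes for detector settings a and b. Any local hidden-variable theory obeys |S| ≤ 2, whereas quantum mechanics allows entangled pairs to reach |S| = 2√2 ≈ 2.83.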
A more sophisticated alternative investigated by the theoretical physicists David Bohm and his student Jeffrey Bub, as well as by Bell himself, is a nonlocal hidden variable. This postulates that the particle – including the hidden variable – is indeed in a superposition and defined by an evolving wavefunction. When Alice makes her measurement, this superposition collapses. Bob’s value then correlates with Alice’s. For decades, researchers believed the wavefunction collapse could travel faster than light without allowing superluminal exchange of information – therefore without violating the special theory of relativity. However, in 2012 researchers showed that any finite-speed collapse propagation would enable superluminal information transmission.
“I met Roger Penrose several times, and while talking with him I asked ‘Well, why couldn’t we exploit an extra time dimension?’,” recalls Pettini. Particles could have five-dimensional wavefunctions (three spatial, two temporal), and the collapse could propagate through the extra time dimension – allowing it to appear instantaneous. Pettini says that the problem Penrose foresaw was that this would enable time travel, and the consequent possibility that one could travel back through the “extra time” to kill one’s ancestors or otherwise violate causality. However, Pettini says he “recently found in the literature a paper which has inspired some relatively standard modifications of the metric of an enlarged space–time in which massive particles are confined with respect to the extra time dimension…Since we are made of massive particles, we don’t see it.”
Pettini believes it might be possible to test this idea experimentally. In a new paper, he proposes a hypothetical experiment (which he describes as a toy model), in which two sources emit pairs of entangled, polarized photons simultaneously. The photons from one source are collected by recipients Alice and Bob, while the photons from the other source are collected by Eve and Tom using identical detectors. Alice and Eve compare the polarizations of the photons they detect. Alice’s photon must, by fundamental quantum mechanics, be entangled with Bob’s photon, and Eve’s with Tom’s, but otherwise simple quantum mechanics gives no reason to expect any entanglement in the system.
Pettini proposes, however, that Alice and Eve should be placed much closer together, and closer to the photon sources, than to the other observers. In this case, he suggests, when Alice’s measurement collapses the wavefunction of her particle and this collapse is communicated to Bob through the extra time dimension (or likewise when Eve’s is communicated to Tom), information would also be transmitted between the much closer, identical particles received by the other woman. This could affect the interference between Alice’s and Eve’s photons and cause a violation of Bell’s inequality. “[Alice and Eve] would influence each other as if they were entangled,” says Pettini. “This would be the smoking gun.”
Bub, now a distinguished professor emeritus at the University of Maryland, College Park, is not holding his breath. “I’m intrigued by [Pettini] exploiting my old hidden variable paper with Bohm to develop his two-time model of entanglement, but to be frank I can’t see this going anywhere,” he says. “I don’t feel the pull to provide a causal explanation of entanglement, and I don’t any more think of the ‘collapse’ of the wave function as a dynamical process.” He says the central premise of Pettini’s work – that adding an extra time dimension could allow the transmission of entanglement between otherwise unrelated photons – is “a big leap”. “Personally, I wouldn’t put any money on it,” he says.
The research is described in Physical Review Research.
The post Could an extra time dimension reconcile quantum entanglement with local causality? appeared first on Physics World.
Worms move faster in an environment riddled with randomly placed obstacles than they do in an empty space. This surprising observation by physicists at the University of Amsterdam in the Netherlands can be explained by modelling the worms as polymer-like “active matter”, and it could come in handy for developers of robots for soil aeration, fertility treatments and other biomedical applications.
When humans move, the presence of obstacles – disordered or otherwise – has a straightforward effect: it slows us down, as anyone who has ever driven through “traffic calming” measures like speed bumps and chicanes can attest. Worms, however, are different, says Antoine Deblais, who co-led the new study with Rosa Sinaasappel and theorist colleagues in Sara Jabbari Farouji’s group. “The arrangement of obstacles fundamentally changes how worms move,” he explains. “In disordered environments, they spread faster as crowding increases, while in ordered environments, more obstacles slow them down.”
The team obtained this result by placing single living worms at the bottom of a water chamber containing a 50 x 50 cm array of cylindrical pillars, each with a radius of 2.5 mm. By tracking the worms’ movement and shape changes with a camera for two hours, the scientists could see how the animals behaved when faced with two distinct pillar arrangements: a periodic (square lattice) structure; and a disordered array. The minimum distance between any two pillars was set to the characteristic width of a worm (around 0.5 mm) to ensure they could always pass through.
“By varying the number and arrangement of the pillars (up to 10 000 placed by hand!), we tested how different environments affect the worm’s movement,” Sinaasappel explains. “We also reduced or increased the worm’s activity by lowering or raising the temperature of the chamber.”
These experiments showed that when the chamber contained a “maze” of obstacles placed at random, the worms moved faster, not slower. The same thing happened when the researchers increased the number of obstacles. More surprisingly still, the worms got through the maze faster when the temperature was lower, even though the cold reduced their activity.
To explain these counterintuitive results, the team developed a statistical model that treats the worms as active polymer-like filaments and accounts for both the worms’ flexibility and the fact that they are self-driven. This analysis revealed that in a space containing disordered pillar arrays, the long-time diffusion coefficient of active polymers with a worm-like degree of flexibility increases significantly as the fraction of the surface occupied by pillars goes up. In regular, square-lattice arrangements, the opposite happens.
The team say that this increased diffusivity comes about because randomly-positioned pillars create narrow tube-like structures between them. These curvilinear gaps guide the worms and allow them to move as if they were straight rods for longer before they reorient. In contrast, ordered pillar arrangements create larger open spaces, or pores, in which worms can coil up. This temporarily traps them and they slow down.
Similarly, the team found that reducing the worm’s activity by lowering ambient temperatures increases a parameter known as its persistence length. This is essentially a measure of how straight the worm is, and straighter worms pass between the pillars more easily.
Identifying the right active polymer model was no easy task, says Jabbari Farouji. One challenge was to incorporate the way worms adjust their flexibility depending on their surroundings. “This self-tuning plays a key role in their surprising motion,” says Jabbari Farouji, who credits this insight to team member Twan Hooijschuur.
Understanding how active, flexible objects move through crowded environments is crucial in physics, biology and biophysics, but the role of environmental geometry in shaping this movement was previously unclear, Jabbari Farouji says. The team’s discovery that movement in active, flexible systems can be controlled simply by adjusting the environment has important implications, adds Deblais.
“Such a capability could be used to sort worms by activity and therefore optimize soil aeration by earthworms or even influence bacterial transport in the body,” he says. “The insights gleaned from this study could also help in fertility treatments – for instance, by sorting sperm cells based on how fast or slow they move.”
Looking ahead, the researchers say they are now expanding their work to study the effects of different obstacle shapes (not just simple pillars), more complex arrangements and even movable obstacles. “Such experiments would better mimic real-world environments,” Deblais says.
The present work is detailed in Physical Review Letters.
The post Speedy worms behave like active polymers in disordered mazes appeared first on Physics World.
This podcast features Alonso Gutierrez, who is chief of medical physics at the Miami Cancer Institute in the US. In a wide-ranging conversation with Physics World’s Tami Freeman, Gutierrez talks about his experience using Elekta’s Leksell Gamma Knife for radiosurgery in a busy radiotherapy department.
This podcast is sponsored by Elekta.
The post Radiosurgery made easy: the role of the Gamma Knife in modern radiotherapy appeared first on Physics World.
Researchers from the Karlsruhe Tritium Neutrino experiment (KATRIN) have announced the most precise upper limit yet on the neutrino’s mass. Thanks to new data and upgraded techniques, the new limit – 0.45 electron volts (eV) at 90% confidence – is half that of the previous tightest constraint, and marks a step toward answering one of particle physics’ longest-standing questions.
Neutrinos are ghostlike particles that barely interact with matter, slipping through the universe almost unnoticed. They come in three types, or flavours: electron, muon, and tau. For decades, physicists assumed all three were massless, but that changed in the late 1990s when experiments revealed that neutrinos can oscillate between flavours as they travel. This flavour-shifting behaviour is only possible if neutrinos have mass.
Although neutrino oscillation experiments confirmed that neutrinos have mass, and showed that the masses of the three flavours are different, they did not divulge the actual scale of these masses. Doing so requires an entirely different approach.
In KATRIN’s case, that means focusing on a process called tritium beta decay, where a tritium nucleus (a proton and two neutrons) decays into a helium-3 nucleus (two protons and one neutron) by releasing an electron and an electron antineutrino. Due to energy conservation, the total energy from the decay is shared between the electron and the antineutrino. The neutrino’s mass determines the balance of the split.
“If the neutrino has even a tiny mass, it slightly lowers the energy that the electron can carry away,” explains Christoph Wiesinger, a physicist at the Technical University of Munich, Germany and a member of the KATRIN collaboration. “By measuring that [electron] spectrum with extreme precision, we can infer how heavy the neutrino is.”
Because the subtle effects of neutrino mass are most visible in decays where the antineutrino carries away very little energy (most of what it does carry being locked up in its rest mass), KATRIN concentrates on measuring electrons that take the lion’s share. From these measurements, physicists can infer the neutrino mass without having to detect the notoriously weakly interacting particles directly.
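In simplified form (ignoring nuclear and atomic corrections), the decay and the shape of the electron energy spectrum near its endpoint E₀ are

\[
{}^{3}\mathrm{H} \;\rightarrow\; {}^{3}\mathrm{He} + e^{-} + \bar{\nu}_{e},
\qquad
\frac{\mathrm{d}\Gamma}{\mathrm{d}E} \;\propto\; (E_{0}-E)\sqrt{(E_{0}-E)^{2} - m_{\nu}^{2}c^{4}}\,,
\]

valid for E ≤ E₀ − mνc². A non-zero effective mass mν both pulls the spectrum’s cut-off below E₀ and distorts its shape in the final few electronvolts, which is precisely the region KATRIN measures.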
The new neutrino mass limit is based on data taken between 2019 and 2021, with 259 days of operations yielding over 36 million electron measurements. “That’s six times more than the previous result,” Wiesinger says.
Other improvements include better temperature control in the tritium source and a new calibration method using a monoenergetic krypton source. “We were able to reduce background noise rates by a factor of two, which really helped the precision,” he adds.
At 0.45 eV, the new limit means the neutrino is at least a million times lighter than the electron. “This is a fundamental number,” Wiesinger says. “It tells us that neutrinos are the lightest known massive particles in the universe, and maybe that their mass has origins beyond the Standard Model.”
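The comparison with the electron is simple arithmetic: with the electron’s rest energy of about 511 keV,

\[
\frac{m_{e}c^{2}}{m_{\nu}c^{2}} \;\gtrsim\; \frac{511\,000\ \mathrm{eV}}{0.45\ \mathrm{eV}} \;\approx\; 1.1\times10^{6}.
\]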
Despite the tighter limit, however, definitive answers about the neutrino’s mass are still some way off. “Neutrino oscillation experiments tell us that the lower bound on the neutrino mass is about 0.05 eV,” says Patrick Huber, a theoretical physicist at Virginia Tech, US, who was not involved in the experiment. “That’s still about 10 times smaller than the new KATRIN limit… For now, this result fits comfortably within what we expect from a Standard Model that includes neutrino mass.”
Though Huber emphasizes that there are “no surprises” in the latest measurement, KATRIN has a key advantage over its rivals. Unlike cosmological methods, which infer neutrino mass based on how it affects the structure and evolution of the universe, KATRIN’s direct measurement is model-independent, relying only on energy and momentum conservation. “That makes it very powerful,” Wiesinger argues. “If another experiment sees a measurement in the future, it will be interesting to check if the observation matches something as clean as ours.”
KATRIN’s own measurements are ongoing, with the collaboration aiming for 1000 days of operations by the end of 2025 and a final sensitivity approaching 0.3 eV. Beyond that, the plan is to repurpose the instrument to search for sterile neutrinos – hypothetical heavier particles that don’t interact via the weak force and could be candidates for dark matter.
“We’re testing things like atomic tritium sources and ultra-precise energy detectors,” Wiesinger says. “There are exciting ideas, but it’s not yet clear what the next-generation experiment after KATRIN will look like.”
The research appears in Science.
The post KATRIN sets tighter limit on neutrino mass appeared first on Physics World.
New measurements by physicists from the University of Surrey in the UK have shed fresh light on where the universe’s heavy elements come from. The measurements, which were made by smashing high-energy protons into a uranium target to generate strontium ions, then accelerating these ions towards a second, helium-filled target, might also help improve nuclear reactors.
The origin of the elements that follow iron in the periodic table is one of the biggest mysteries in nuclear astrophysics. As Surrey’s Matthew Williams explains, the standard picture is that these elements were formed when other elements captured neutrons, then underwent beta decay. The two ways this can happen are known as the rapid (r) and slow (s) processes.
The s-process occurs in the cores of stars and is relatively well understood. The r-process is comparatively mysterious. It occurs during violent astrophysical events such as certain types of supernovae and neutron star mergers that create an abundance of free neutrons. In these neutron-rich environments, atomic nuclei essentially capture neutrons before the neutrons can turn into protons via beta-minus decay, which occurs when a neutron emits an electron and an antineutrino.
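In symbols, the beta-minus decay that neutron capture must outpace is

\[
n \;\rightarrow\; p + e^{-} + \bar{\nu}_{e},
\]

which converts a neutron in the nucleus into a proton and nudges the nuclide one element up the periodic table.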
One way of studying the r-process is to observe older stars. “Studies on heavy element abundance patterns in extremely old stars provide important clues here because these stars formed at times too early for the s-process to have made a significant contribution,” Williams explains. “This means that the heavy element pattern in these old stars may have been preserved from material ejected by prior extreme supernovae or neutron star merger events, in which the r-process is thought to happen.”
Recent observations of this type have revealed that the r-process is not necessarily a single scenario with a single abundance pattern. It may also have a “weak” component that is responsible for making elements with atomic numbers ranging from 37 (rubidium) to 47 (silver), without getting all the way up to the heaviest elements such as gold (atomic number 79) or actinides like thorium (90) and uranium (92).
This weak r-process could occur in a variety of situations, Williams explains. One scenario involves radioactive isotopes (that is, those with a few more neutrons than their stable counterparts) forming in hot neutrino-driven winds streaming from supernovae. This “flow” of nucleosynthesis towards higher neutron numbers is caused by processes known as (alpha,n) reactions, which occur when a radioactive isotope fuses with a helium nucleus and spits out a neutron. “These reactions impact the final abundance pattern before the neutron flux dissipates and the radioactive nuclei decay back to stability,” Williams says. “So, to match predicted patterns to what is observed, we need to know how fast the (alpha,n) reactions are on radioactive isotopes a few neutrons away from stability.”
To obtain this information, Williams and colleagues studied a reaction in which radioactive strontium-94 absorbs an alpha particle (a helium nucleus), then emits a neutron and transforms into zirconium-97. To produce the radioactive 94Sr beam, they fired high-energy protons at a uranium target at TRIUMF, the Canadian national accelerator centre. Using lasers, they selectively ionized and extracted strontium from the resulting debris before filtering out 94Sr ions with a magnetic spectrometer.
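Written out with mass and proton numbers, the reaction studied is

\[
{}^{94}_{38}\mathrm{Sr} + {}^{4}_{2}\mathrm{He} \;\rightarrow\; {}^{97}_{40}\mathrm{Zr} + n,
\]

with mass number (94 + 4 = 97 + 1) and charge (38 + 2 = 40 + 0) both balanced.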
The team then accelerated a beam of these 94Sr ions to energies representative of collisions that would happen when a massive star explodes as a supernova. Finally, they directed the beam onto a nanomaterial target made of a silicon thin film containing billions of small nanobubbles of helium. This target was made by researchers at the Materials Science Institute of Seville (CSIC) in Spain.
“This thin film crams far more helium into a small target foil than previous techniques allowed, thereby enabling the measurement of helium burning reactions with radioactive beams that characterize the weak r-process,” Williams explains.
To identify the 94Sr(alpha,n)97Zr reactions, the researchers used a mass spectrometer to select for 97Zr while simultaneously using an array of gamma-ray detectors around the target to look for the gamma rays it emits. When they saw both a heavy ion with a mass number of 97 and a 97Zr gamma ray, they knew they had identified the reaction of interest. In doing so, Williams says, they were able to measure the probability that this reaction occurs at the energies and temperatures present in supernovae.
Williams thinks that scientists should be able to measure many more weak r-process reactions using this technology. This should help them constrain where the weak r-process comes from. “Does it happen in supernovae winds? Or can it happen in a component of ejected material from neutron star mergers?” he asks.
As well as shedding light on the origins of heavy elements, the team’s findings might also help us better understand how materials respond to the high radiation environments in nuclear reactors. “By updating models of how readily nuclei react, especially radioactive nuclei, we can design components for these reactors that will operate and last longer before needing to be replaced,” Williams says.
The work is detailed in Physical Review Letters.
The post Helium nanobubble measurements shed light on origins of heavy elements in the universe appeared first on Physics World.
With so much turmoil in the world at the moment, it’s always great to meet enthusiastic physicists celebrating all that their subject has to offer. That was certainly the case when I travelled with my colleague Tami Freeman to the 2025 Celebration of Physics at Nottingham Trent University (NTU) on 10 April.
Organized by the Institute of Physics (IOP), which publishes Physics World, the event was aimed at “physicists, creative thinkers and anyone interested in science”. It also featured some of the many people who won IOP awards last year, including Nick Stone from the University of Exeter, who was awarded the 2024 Rosalind Franklin medal and prize.
Stone was honoured for his “pioneering use of light for diagnosis and therapy in healthcare”, including “developing novel Raman spectroscopic tools and techniques for rapid in vivo cancer diagnosis and monitoring”. Speaking in a Physics World Live chat, Stone explained why Raman spectroscopy is such a useful technique for medical imaging.
Nottingham is, of course, a city famous for medical imaging, thanks in particular to the University of Nottingham Nobel laureate Peter Mansfield (1933–2017), who pioneered magnetic resonance imaging (MRI). In an entertaining talk, Rob Morris from NTU explained how MRI is also crucial for imaging foodstuffs, helping the food industry to boost productivity, reduce waste – and make tastier pork pies.
Still on the medical theme, Niall Holmes from Cerca Magnetics, which was spun out from the University of Nottingham, explained how his company has developed wearable magnetoencephalography (MEG) sensors that can measure the magnetic fields generated by neuronal firings in the brain. In 2023 Cerca won one of the IOP’s business and innovation awards.
Richard Friend from the University of Cambridge, who won the IOP’s top Isaac Newton medal and prize, discussed some of the many recent developments that have followed from his seminal 1990 discovery that semiconducting polymers can be used in light-emitting diodes (LEDs).
The event ended with a talk from particle physicist Tara Shears from the University of Liverpool, who outlined some of the findings of the new IOP report Physics and AI, to which she was an adviser. Based on a survey with 700 responses and a workshop with experts from academia and industry, the report concludes that physics doesn’t just benefit from AI but underpins it too.
I’m sure AI will be good for physics overall, but I hope it never removes the need for real-life meetings like the Celebration of Physics.
The post Physicists gather in Nottingham for the IOP’s Celebration of Physics 2025 appeared first on Physics World.