Quantum transducer A niobium microwave LC resonator (silver) is capacitively coupled to two hybridized lithium niobate racetrack resonators in a paperclip geometry (black) to exchange energy between the microwave and optical domains using the electro-optic effect. (Courtesy: Lončar group/Harvard SEAS)
The future of quantum communication and quantum computing technologies may well revolve around superconducting qubits and quantum circuits, which have already been shown to improve processing capabilities over classical supercomputers – even when there is noise within the system. This scenario could be one step closer with the development of a novel quantum transducer by a team headed up at the Harvard John A Paulson School of Engineering and Applied Sciences (SEAS).
Realising this future will rely on systems having hundreds (or more) logical qubits (each built from multiple physical qubits). However, because superconducting qubits require ultralow operating temperatures, large-scale refrigeration is a major challenge – there is no technology available today that can provide the cooling power to realise such large-scale qubit systems.
Superconducting microwave qubits are a promising option for quantum processor nodes, but they currently require bulky microwave components. These components create a lot of heat that can easily disrupt the refrigeration systems cooling the qubits.
One way to combat this cooling conundrum is to use a modular approach, with small-scale quantum processors connected via quantum links, and each processor having its own dilution refrigerator. Superconducting qubits are addressed using microwave photons at frequencies between 3 and 8 GHz, so the quantum links could be used to transmit microwave signals. The downside of this approach is that it would require cryogenically cooled links between each subsystem.
On the other hand, optical signals at telecoms frequency (around 200 THz) can be generated using much smaller form factor components, leading to lower thermal loads and noise, and can be transmitted via low-loss optical fibres. The transduction of information between optical and microwave frequencies is therefore key to controlling superconducting microwave qubits without the high thermal cost.
The large energy gap between microwave and optical photons makes it difficult to control microwave qubits with optical signals and requires a microwave–optical quantum transducer (MOQT). These MOQTs provide a coherent, bidirectional link between microwave and optical frequencies while preserving the quantum states of the qubit. A team led by SEAS researcher Marko Lončar has now created such a device, describing it in Nature Physics.
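To put that energy gap in numbers (taking a representative qubit frequency of 5 GHz as an assumption, since the article quotes a 3–8 GHz range), the photon energies differ by more than four orders of magnitude:

\[
\frac{E_{\mathrm{optical}}}{E_{\mathrm{microwave}}} = \frac{f_{\mathrm{optical}}}{f_{\mathrm{microwave}}} \approx \frac{200\ \mathrm{THz}}{5\ \mathrm{GHz}} = 4\times 10^{4}.
\]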
Lončar and collaborators have developed a thin-film lithium niobate (TFLN) cavity electro-optic (CEO)-based MOQT (clad with silica to aid thermal dissipation and mitigate optical losses) that converts optical frequencies into microwave frequencies with low loss. The team used the CEO-MOQT to facilitate coherent optical driving of a superconducting qubit (controlling the state of the quantum system by manipulating its energy).
The on-chip transducer system contains three resonators: a microwave LC resonator capacitively coupled to two optical resonators using the electro-optic effect. The device creates hybridized optical modes in the transducer that enable a resonance-enhanced exchange of energy between the microwave and optical modes.
The transducer uses a process known as difference frequency generation to create a new frequency output from two input frequencies. The optical modes – an optical pump in a classical red-pumping regime and an optical idler – interact to generate a microwave signal at the qubit frequency, in the form of a shaped, symmetric single microwave photon.
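Schematically, the energy conservation behind difference frequency generation ties the three modes together (the labels below are generic, not notation from the paper):

\[
\omega_{\mathrm{microwave}} = \omega_{\mathrm{pump}} - \omega_{\mathrm{idler}},
\]

so a pump and an idler whose optical frequencies differ by the qubit frequency (a few gigahertz) yield a microwave photon resonant with the qubit.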
This microwave signal is then transmitted from the transducer to a superconducting qubit (in the same refrigerator system) using a coaxial cable. The qubit is coupled to a readout resonator that enables its state to be read by measuring the transmission of a readout pulse.
The MOQT operated with a peak conversion efficiency of 1.18% (in both microwave-to-optical and optical-to-microwave regimes), low microwave noise generation and the ability to drive Rabi oscillations in a superconducting qubit. Because of the low noise, the researchers state that stronger optical-pump fields could be used without affecting qubit performance.
Having effectively demonstrated the ability to control superconducting circuits with optical light, the researchers suggest a number of future improvements that could increase the device's performance by orders of magnitude. For example, microwave and optical coupling losses could be reduced by fabricating a single-ended microwave resonator directly onto the silicon wafer instead of on silica. A flux-tuneable microwave cavity could increase the optical bandwidth of the transducer. Finally, better measurement methods could improve control of the qubits and allow for more intricate gate operations between qubit nodes.
The researchers suggest this type of device could be used for networking superconducting qubits when scaling up quantum systems. The combination of this work with other research on developing optical readouts for superconducting qubit chips “provides a path towards forming all-optical interfaces with superconducting qubits…to enable large scale quantum processors,” they conclude.
Nonlocal correlations that define quantum entanglement could be reconciled with Einstein’s theory of relativity if space–time had two temporal dimensions. That is the implication of new theoretical work that extends nonlocal hidden variable theories of quantum entanglement and proposes a potential experimental test.
Marco Pettini, a theoretical physicist at Aix Marseille University in France, says the idea arose from conversations with the mathematical physicist Roger Penrose – who shared the 2020 Nobel Prize for Physics for showing that the general theory of relativity predicted black holes. “He told me that, from his point of view, quantum entanglement is the greatest mystery that we have in physics,” says Pettini. The puzzle is encapsulated by Bell’s inequality, which was derived in the mid-1960s by the Northern Irish physicist John Bell.
Bell’s breakthrough was inspired by the 1935 Einstein–Podolsky–Rosen paradox, a thought experiment in which entangled particles in quantum superpositions (using the language of modern quantum mechanics) travel to spatially separated observers Alice and Bob. They make measurements of the same observable property of their particles. As they are superposition states, the outcome of neither measurement is certain before it is made. However, as soon as Alice measures the state, the superposition collapses and Bob’s measurement is now fixed.
Quantum scepticism
A sceptic of quantum indeterminacy could hypothetically suggest that the entangled particles carried hidden variables all along, so that when Alice made her measurement, she simply found out the state that Bob would measure rather than actually altering it. If the observers are separated by a distance so great that information about the hidden variable’s state would have to travel faster than light between them, then hidden variable theory violates relativity. Bell derived an inequality showing the maximum degree of correlation between the measurements possible if each particle carried such a “local” hidden variable, and showed it was indeed violated by quantum mechanics.
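For readers who want the quantitative statement, a widely used later form of Bell's result is the CHSH inequality (shown here in its textbook form as an illustration, rather than the exact 1964 expression):

\[
S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2,
\]

where E(a,b) is the correlation between outcomes for measurement settings a (Alice) and b (Bob). Quantum mechanics predicts values of |S| up to 2\sqrt{2} \approx 2.8 for suitably entangled states, violating the local-hidden-variable bound.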
A more sophisticated alternative investigated by the theoretical physicists David Bohm and his student Jeffrey Bub, as well as by Bell himself, is a nonlocal hidden variable. This postulates that the particle – including the hidden variable – is indeed in a superposition and defined by an evolving wavefunction. When Alice makes her measurement, this superposition collapses. Bob’s value then correlates with Alice’s. For decades, researchers believed the wavefunction collapse could travel faster than light without allowing superluminal exchange of information – therefore without violating the special theory of relativity. However, in 2012 researchers showed that any finite-speed collapse propagation would enable superluminal information transmission.
“I met Roger Penrose several times, and while talking with him I asked ‘Well, why couldn’t we exploit an extra time dimension?’,” recalls Pettini. Particles could have five-dimensional wavefunctions (three spatial, two temporal), and the collapse could propagate through the extra time dimension – allowing it to appear instantaneous. Pettini says that the problem Penrose foresaw was that this would enable time travel, and the consequent possibility that one could travel back through the “extra time” to kill one’s ancestors or otherwise violate causality. However, Pettini says he “recently found in the literature a paper which has inspired some relatively standard modifications of the metric of an enlarged space–time in which massive particles are confined with respect to the extra time dimension…Since we are made of massive particles, we don’t see it.”
Toy model
Pettini believes it might be possible to test this idea experimentally. In a new paper, he proposes a hypothetical experiment (which he describes as a toy model), in which two sources emit pairs of entangled, polarized photons simultaneously. The photons from one source are collected by recipients Alice and Bob, while the photons from the other source are collected by Eve and Tom using identical detectors. Alice and Eve compare the polarizations of the photons they detect. Alice’s photon must, by fundamental quantum mechanics, be entangled with Bob’s photon, and Eve’s with Tom’s, but otherwise simple quantum mechanics gives no reason to expect any entanglement in the system.
Pettini proposes, however, that Alice and Eve should be placed much closer together, and closer to the photon sources, than to the other observers. In this case, he suggests, when the wavefunction of Alice’s particle collapses and the entanglement is communicated to Bob through the extra time dimension (or when Eve’s collapse is communicated to Tom), information would also be transmitted between the much closer, identical photons received by Alice and Eve. This could affect the interference between Alice’s and Eve’s photons and cause a violation of Bell’s inequality. “[Alice and Eve] would influence each other as if they were entangled,” says Pettini. “This would be the smoking gun.”
Bub, now a distinguished professor emeritus at the University of Maryland, College Park, is not holding his breath. “I’m intrigued by [Pettini] exploiting my old hidden variable paper with Bohm to develop his two-time model of entanglement, but to be frank I can’t see this going anywhere,” he says. “I don’t feel the pull to provide a causal explanation of entanglement, and I don’t any more think of the ‘collapse’ of the wave function as a dynamical process.” He says the central premise of Pettini’s work – that adding an extra time dimension could allow the transmission of entanglement between otherwise unrelated photons – is “a big leap”. “Personally, I wouldn’t put any money on it,” he says.
A burst of solar wind triggered a planet-wide heatwave in Jupiter’s upper atmosphere, say astronomers at the University of Reading, UK. The hot region, which had a temperature of over 750 K, propagated at thousands of kilometres per hour and stretched halfway around the planet.
“This is the first time we have seen something like a travelling ionospheric disturbance, the likes of which are found on Earth, at a giant planet,” says James O’Donoghue, a Reading planetary scientist and lead author of a study in Geophysical Research Letters on the phenomenon. “Our finding shows that Jupiter’s atmosphere is not as self-contained as we thought, and that the Sun can drive dramatic, global changes, even this far out in the solar system.”
Jupiter’s upper atmosphere begins hundreds of kilometres above its surface and has two components. One is a neutral thermosphere composed mainly of molecular hydrogen. The other is a charged ionosphere comprising electrons and ions. Jupiter also has a protective magnetic shield, or magnetosphere.
When emissions from Jupiter’s volcanic moon, Io, become ionized by extreme ultraviolet radiation from the Sun, the resulting plasma becomes trapped in the magnetosphere. This trapped plasma then generates magnetosphere-ionosphere currents that heat the planet’s polar regions and produce aurorae. Thanks to this heating, the hottest places on Jupiter, at around 900 K, are its poles. From there, temperatures gradually decrease, reaching 600 K at the equator.
Quite a different temperature-gradient pattern
In 2021, however, O’Donoghue and colleagues observed quite a different temperature-gradient pattern in near-infrared spectral data recorded by the 10-metre Keck II telescope in Hawaii, US, during an event in 2017. When they analysed these data, they found an enormous hot region far from Jupiter’s aurorae and stretching across 180° in longitude – half the planet’s circumference.
“At the time, we could not definitively explain this hot feature, which is roughly 150 K hotter than the typical ambient temperature of Jupiter,” says O’Donoghue, “so we re-analysed the Keck data using updated solar wind propagation models.”
Two instruments on NASA’s Juno spacecraft were pivotal in the re-analysis, he explains. The first, called Waves, can measure electron densities locally. Its data showed that these electron densities ramped up as the spacecraft approached Jupiter’s magnetosheath, which is the region between the planet’s magnetic field and the solar wind. The second instrument was Juno’s magnetometer, which recorded measurements that backed up the Waves-based analyses, O’Donoghue says.
A new interpretation
In their latest study, the Reading scientists analysed a burst of fast solar wind that emanated from the Sun in January 2017 and propagated towards Jupiter. They found that a high-speed stream of this wind arrived several hours before the Keck telescope recorded the data that led them to identify the hot region.
“Our analysis of Juno’s magnetometer measurements also showed that this spacecraft exited the magnetosphere of Jupiter early,” says O’Donoghue. “This is a strong sign that strong solar winds probably compressed Jupiter’s magnetic field several hours before the hot region appeared.
“We therefore see the hot region emerging as a response to solar wind compression: the aurorae flared up and heat spilled equatorward.”
The result shows that the Sun can significantly reshape the global energy balance in Jupiter’s upper atmosphere, he tells Physics World. “That changes how we think about energy balance at all giant planets, not just Jupiter, but potentially Saturn, Uranus, Neptune and exoplanets too,” he says. “It also shows that solar wind can trigger complex atmospheric responses far from Earth and it could help us understand space weather in general.”
The Reading researchers say they would now like to hunt for more of these events, especially in the southern hemisphere of Jupiter where they expect a mirrored response. “We are also working on measuring wind speeds and temperatures across more of the planet and at different times to better understand how often this happens and how energy moves around,” O’Donoghue reveals. “Ultimately, we want to build a more complete picture of how space weather shapes Jupiter’s upper atmosphere and drives (or interferes) with global circulation there.”
The world’s smallest pacemaker to date is smaller than a single grain of rice, optically controlled and dissolves after it’s no longer needed. According to researchers involved in the work, the pacemaker could work in human hearts of all sizes that need temporary pacing, including those of newborn babies with congenital heart defects.
“Our major motivation was children,” says Igor Efimov, a professor of medicine and biomedical engineering, in a press release from Northwestern University. Efimov co-led the research with Northwestern bioelectronics pioneer John Rogers.
“About 1% of children are born with congenital heart defects – regardless of whether they live in a low-resource or high-resource country,” Efimov explains. “Now, we can place this tiny pacemaker on a child’s heart and stimulate it with a soft, gentle, wearable device. And no additional surgery is necessary to remove it.”
The current clinical standard-of-care involves sewing pacemaker electrodes directly onto a patient’s heart muscle during surgery. Wires from the electrodes protrude from the patient’s chest and connect to an external pacing box. Placing the pacemakers – and removing them later – does not come without risk. Complications include infection, dislodgment, torn or damaged tissues, bleeding and blood clots.
To minimize these risks, the researchers sought to develop a dissolvable pacemaker, which they introduced in Nature Biotechnology in 2021. By varying the composition and thickness of materials in the devices, Rogers’ lab can control how long the pacemaker functions before dissolving. The dissolvable device also eliminates the need for bulky batteries and wires.
“The heart requires a tiny amount of electrical stimulation,” says Rogers in the Northwestern release. “By minimizing the size, we dramatically simplify the implantation procedures, we reduce trauma and risk to the patient, and, with the dissolvable nature of the device, we eliminate any need for secondary surgical extraction procedures.”
Light-controlled pacing When the wearable device (left) detects an irregular heartbeat, it emits light to activate the pacemaker. (Courtesy: John A Rogers/Northwestern University)
The latest iteration of the device – reported in Nature – advances the technology further. The pacemaker is paired with a small, soft, flexible, wireless device that is mounted onto the patient’s chest. The skin-interfaced device continuously captures electrocardiogram (ECG) data. When it detects an irregular heartbeat, it automatically shines a pulse of infrared light to activate the pacemaker and control the pacing.
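As a very rough illustration of the closed-loop logic described above, the wearable's behaviour amounts to the Python sketch below. This is a hypothetical sketch only: the thresholds, function names and sampling details are assumptions for illustration and are not taken from the device's actual firmware.

# Illustrative sketch of the closed-loop pacing logic described in the article.
# All names and thresholds are assumptions, not the device's real parameters.
MIN_RR_INTERVAL_S = 0.4   # assumed lower bound on a normal beat-to-beat interval
MAX_RR_INTERVAL_S = 1.2   # assumed upper bound

def is_irregular(rr_intervals):
    """Flag the latest R-R interval as irregular if it falls outside the assumed bounds."""
    latest = rr_intervals[-1]
    return not (MIN_RR_INTERVAL_S <= latest <= MAX_RR_INTERVAL_S)

def pacing_loop(read_rr_intervals, emit_infrared_pulse):
    """Monitor ECG-derived R-R intervals from the skin-mounted device and fire an
    infrared pulse - which activates the implanted pacemaker - when a beat is irregular."""
    while True:
        rr_intervals = read_rr_intervals()
        if rr_intervals and is_irregular(rr_intervals):
            emit_infrared_pulse()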
“The new device is self-powered and optically controlled – totally different than our previous devices in those two essential aspects of engineering design,” says Rogers. “We moved away from wireless power transfer to enable operation, and we replaced RF wireless control strategies – both to eliminate the need for an antenna (the size-limiting component of the system) and to avoid the need for external RF power supply.”
Measurements demonstrated that the pacemaker – which is 1.8 mm wide, 3.5 mm long and 1 mm thick – delivers as much stimulation as a full-sized pacemaker. Initial studies in animals and in the human hearts of organ donors suggest that the device could work in human infants and adults. The devices are also versatile, the researchers say, and could be used across different regions of the heart or the body. They could also be integrated with other implantable devices for applications in nerve and bone healing, treating wounds and blocking pain.
The next steps for the research (supported by the Querrey Simpson Institute for Bioelectronics, the Leducq Foundation and the National Institutes of Health) include further engineering improvements to the device. “From the translational standpoint, we have put together a very early-stage startup company to work individually and/or in partnerships with larger companies to begin the process of designing the device for regulatory approval,” Rogers says.
In a conversation with Physics World’s Matin Durrani, Meredith talks about the importance of semiconductors in a hi-tech economy and why it is crucial for the UK to have a homegrown semiconductor industry.
Founded in 2020, CISM moved into a new, state-of-the-art £50m building in 2023 and is now in its first full year of operation. Meredith explains how technological innovation and skills training at CISM are supporting chipmakers in the M4 hi-tech corridor, which begins in Swansea in South Wales and stretches eastward to London.
Harvard University is suing the Trump administration over its plan to block up to $9bn of government research grants to the institution. The suit, filed in a federal court on 21 April, claims that the administration’s “attempt to coerce and control” Harvard violates the academic freedom protected by the first amendment of the US constitution.
The action comes in the wake of the US administration claiming that Harvard and other universities have not protected Jewish students during pro-Gaza campus demonstrations. Columbia University has already agreed to change its teaching policies and clamp down on demonstrations in the hope of regaining some $400m of government grants.
Harvard president Alan Garber also sought negotiations with the administration on ways that it might satisfy its demands. But a letter sent to Garber dated 11 April, signed by three Trump administration officials, asserted that the university had “failed to live up to both the intellectual and civil rights conditions that justify federal investments”.
The letter demanded that Harvard reform and restructure its governance, stop all diversity, equity and inclusion (DEI) programmes and reform how it hires staff and admits students. It also said Harvard must stop recruiting international students who are “hostile to American values” and provide an audit on “viewpoint diversity” in admissions and hiring.
Some administration sources suggested that the letter, which effectively insists on government oversight of Harvard’s affairs, was an internal draft sent to Harvard by mistake. Nevertheless, Garber decided to end negotiations, leading Harvard to instead sue the government over the blocked funds.
We stand for the values that have made American higher education a beacon for the world
Alan Garber
A letter on 14 April from Harvard’s lawyers states that the university is “committed to fighting antisemitism and other forms of bigotry in its community”. It adds that it is “open to dialogue” about what it has done, and is planning to do, to “improve the experience of every member” of its community but concludes that Harvard “is not prepared to agree to demands that go beyond the lawful authority of this or any other administration”.
Writing in an open letter to the community dated 22 April, Garber says that “we stand for the values that have made American higher education a beacon for the world”. The administration has hit back by threatening to withdraw Harvard’s non-profit status, tax its endowment and jeopardise its ability to enrol overseas students, who currently make up more than 27% of its intake.
Budget woes
The Trump administration is also planning swingeing cuts to government science agencies. If its budget request for 2026 is approved by Congress, funding for NASA’s Science Mission Directorate would be almost halved from $7.3bn to $3.9bn. The Nancy Grace Roman Space Telescope, a successor to the Hubble and James Webb space telescopes, would be axed. Two missions to Venus – the DAVINCI atmosphere probe and the VERITAS surface-mapping project – as well as the Mars Sample Return mission would lose their funding too.
“The impacts of these proposed funding cuts would not only be devastating to the astronomical sciences community, but they would also have far-reaching consequences for the nation,” says Dara Norman, president of the American Astronomical Society. “These cuts will derail not only cutting-edge scientific advances, but also the training of the nation’s future STEM workforce.”
The National Oceanic and Atmospheric Administration (NOAA) also stands to lose key programmes, with the budget for its Ocean and Atmospheric Research Office slashed from $485m to just over $170m. Surviving programmes from the office, including research on tornado warning and ocean acidification, would move to the National Weather Service and National Ocean Service.
“This administration’s hostility toward research and rejection of climate science will have the consequence of eviscerating the weather forecasting capabilities that this plan claims to preserve,” says Zoe Lofgren, a senior Democrat who sits on the House of Representatives’ Science, Space, and Technology Committee.
The National Science Foundation (NSF), meanwhile, is unlikely to receive $234m for major building projects this financial year, which could spell the end of the Horizon supercomputer being built at the University of Texas at Austin. The NSF has already halved the number of graduate students in its research fellowship programme, while Science magazine says it is calling back all grant proposals that had been approved but not signed off, apparently to check that awardees conform to Trump’s stance on DEI.
A survey of 292 department chairs at US institutions in early April, carried out by the American Institute of Physics, reveals that almost half of respondents are experiencing or anticipate cuts in federal funding in the coming months. Entitled Impacts of Restrictions on Federal Grant Funding in Physics and Astronomy Graduate Programs, the report also says that the number of first-year graduate students in physics and astronomy is expected to drop by 13% in the next enrolment.
Update: 25/04/2025: Sethuraman Panchanathan has resigned as NSF director five years into his six-year term. Panchanathan took up the position in 2020 during Trump’s first term as US President. “I believe that I have done all I can to advance the mission of the agency and feel that it is time to pass the baton to new leadership,” Panchanathan said in a statement yesterday. “This is a pivotal moment for our nation in terms of global competitiveness. We must not lose our competitive edge.”
A series of spectacular images of the cosmos has been released to celebrate the Hubble Space Telescope‘s 35 years in space. The images include pictures of Mars, planetary nebulae and a spiral galaxy.
Hubble was launched into low-Earth orbit in April 1990, stowed in the payload bay of the space shuttle Discovery. The telescope experienced a difficult start as its 2.4 m primary mirror suffered from spherical aberration – a flaw in the mirror’s curvature that prevented light from being brought to a focus at a single point. This was fixed three years later during a daring servicing mission in which spacewalking astronauts successfully installed the COSTAR instrument.
During Hubble’s operational life, the telescope has made nearly 1.7 million observations, studying approximately 55,000 astronomical targets. Its discoveries have resulted in over 22,000 papers and over 1.3 million citations.
Over its three and a half decades of operation, Hubble has allowed astronomers to see astronomical changes such as seasonal variability on the planets in our solar system, black-hole jets travelling at nearly the speed of light as well as stellar convulsions, asteroid collisions and expanding supernova bubbles.
Despite 35 years in orbit around the Earth, Hubble is still one of the most sought-after observatories, with demand for observing time oversubscribed by 6:1.
“[Hubble’s] stunning imagery inspired people across the globe, and the data behind those images revealed surprises about everything from early galaxies to planets in our own solar system,” notes Shawn Domagal-Goldman, acting director of NASA’s astrophysics division. “The fact that it is still operating today is a testament to the value of our flagship observatories.”
Worms move faster in an environment riddled with randomly-placed obstacles than they do in an empty space. This surprising observation by physicists at the University of Amsterdam in the Netherlands can be explained by modelling the worms as polymer-like “active matter”, and it could come in handy for developers of robots for soil aeration, fertility treatments and other biomedical applications.
When humans move, the presence of obstacles – disordered or otherwise – has a straightforward effect: it slows us down, as anyone who has ever driven through “traffic calming” measures like speed bumps and chicanes can attest. Worms, however, are different, says Antoine Deblais, who co-led the new study with Rosa Sinaasappel and theorist colleagues in Sara Jabbari Farouji’s group. “The arrangement of obstacles fundamentally changes how worms move,” he explains. “In disordered environments, they spread faster as crowding increases, while in ordered environments, more obstacles slow them down.”
A maze of cylindrical pillars
The team obtained this result by placing single living worms at the bottom of a water chamber containing a 50 x 50 cm array of cylindrical pillars, each with a radius of 2.5 mm. By tracking the worms’ movement and shape changes with a camera for two hours, the scientists could see how the animals behaved when faced with two distinct pillar arrangements: a periodic (square lattice) structure; and a disordered array. The minimum distance between any two pillars was set to the characteristic width of a worm (around 0.5 mm) to ensure they could always pass through.
“By varying the number and arrangement of the pillars (up to 10 000 placed by hand!), we tested how different environments affect the worm’s movement,” Sinaasappel explains. “We also reduced or increased the worm’s activity by lowering or raising the temperature of the chamber.”
These experiments showed that when the chamber contained a “maze” of obstacles placed at random, the worms moved faster, not slower. The same thing happened when the researchers increased the number of obstacles. More surprisingly still, the worms got through the maze faster when the temperature was lower, even though the cold reduced their activity.
Active polymer-like filaments
To explain these counterintuitive results, the team developed a statistical model that treats the worms as active polymer-like filaments and accounts for both the worms’ flexibility and the fact that they are self-driven. This analysis revealed that in a space containing disordered pillar arrays, the long-time diffusion coefficient of active polymers with a worm-like degree of flexibility increases significantly as the fraction of the surface occupied by pillars goes up. In regular, square-lattice arrangements, the opposite happens.
The team say that this increased diffusivity comes about because randomly-positioned pillars create narrow tube-like structures between them. These curvilinear gaps guide the worms and allow them to move as if they were straight rods for longer before they reorient. In contrast, ordered pillar arrangements create larger open spaces, or pores, in which worms can coil up. This temporarily traps them and they slow down.
Similarly, the team found that reducing the worm’s activity by lowering ambient temperatures increases a parameter known as its persistence length. This is essentially a measure of how straight the worm is, and straighter worms pass between the pillars more easily.
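A rough way to see the temperature effect is the equilibrium worm-like-chain relation (a textbook simplification quoted here for illustration; real worms actively tune their stiffness, as the researchers note below):

\[
\ell_p = \frac{\kappa}{k_B T},
\]

where κ is the bending stiffness and k_B T the thermal energy. For a fixed stiffness, lowering the temperature increases the persistence length ℓ_p, so the worm stays effectively straighter over longer distances.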
“Self-tuning plays a key role”
Identifying the right active polymer model was no easy task, says Jabbari Farouji. One challenge was to incorporate the way worms adjust their flexibility depending on their surroundings. “This self-tuning plays a key role in their surprising motion,” says Jabbari Farouji, who credits this insight to team member Twan Hooijschuur.
Understanding how active, flexible objects move through crowded environments is crucial in physics, biology and biophysics, but the role of environmental geometry in shaping this movement was previously unclear, Jabbari Farouji says. The team’s discovery that movement in active, flexible systems can be controlled simply by adjusting the environment has important implications, adds Deblais.
“Such a capability could be used to sort worms by activity and therefore optimize soil aeration by earthworms or even influence bacterial transport in the body,” he says. “The insights gleaned from this study could also help in fertility treatments – for instance, by sorting sperm cells based on how fast or slow they move.”
Looking ahead, the researchers say they are now expanding their work to study the effects of different obstacle shapes (not just simple pillars), more complex arrangements and even movable obstacles. “Such experiments would better mimic real-world environments,” Deblais says.
Precise control over the generation of intense, ultrafast changes in magnetic fields called “magnetic steps” has been achieved by researchers in Hamburg, Germany. Using ultrashort laser pulses, Andrea Cavalleri and colleagues at the Max Planck Institute for the Structure and Dynamics of Matter disrupted the currents flowing through a superconducting disc. This alters the superconductor’s local magnetic environment on very short timescales – creating a magnetic step.
Magnetic steps rise to their peak intensity in just a few picoseconds, before decaying more slowly in several nanoseconds. They are useful to scientists because they rise and fall on timescales far shorter than the time it takes for materials to respond to external magnetic fields. As a result, magnetic steps could provide fundamental insights into the non-equilibrium properties of magnetic materials, and could also have practical applications in areas such as magnetic memory storage.
So far, however, progress in this field has been held back by technical difficulties in generating and controlling magnetic steps on ultrashort timescales. Previous strategies have employed technologies including microcoils, specialized antennas and circularly polarized light pulses. However, each of these schemes offers only a limited degree of control over the properties of the magnetic steps it generates.
Quenching supercurrents
Now, Cavalleri’s team has developed a new technique that involves the quenching of currents in a superconductor. Normally, these “supercurrents” will flow indefinitely without losing energy, and will act to expel any external magnetic fields from the superconductor’s interior. However, if these currents are temporarily disrupted on ultrashort timescales, a sudden change will be triggered in the magnetic field close to the superconductor – which could be used to create a magnetic step.
To realize this process, Cavalleri and colleagues applied ultrashort laser pulses to a thin, superconducting disc of yttrium barium copper oxide (YBCO), while also exposing the disc to an external magnetic field.
To detect whether magnetic steps had been generated, they placed a crystal of the semiconductor gallium phosphide in the superconductor’s vicinity. This material exhibits an extremely rapid Faraday response. This involves the rotation of the polarization of light passing through the semiconductor in response to changes in the local magnetic field. Crucially, this rotation can occur on sub-picosecond timescales.
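For small fields the Faraday effect in such a crystal obeys, to a good approximation, the standard textbook relation (not a formula specific to this experiment):

\[
\theta = V B L,
\]

where θ is the polarization rotation angle, V the Verdet constant of gallium phosphide, B the magnetic-field component along the light's propagation direction and L the path length through the crystal. Tracking θ as a function of time therefore tracks the local magnetic field, and hence the magnetic step, with sub-picosecond resolution.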
In their experiments, the researchers monitored changes to the polarization of an ultrashort “probe” laser pulse passing through the semiconductor shortly after they quenched supercurrents in their YBCO disc using a separate ultrashort “pump” laser pulse.
“By abruptly disrupting the material’s supercurrents using ultrashort laser pulses, we could generate ultrafast magnetic field steps with rise times of approximately one picosecond – or one trillionth of a second,” explains team member Gregor Jotzu.
Broadband step
This was used to generate an extremely broadband magnetic step, which contains frequencies ranging from sub-gigahertz to terahertz. In principle, this should make the technique suitable for studying magnetization in a diverse variety of materials.
To demonstrate practical applications, the team used these magnetic steps to control the magnetization of a ferrimagnet. Such a magnet has opposing but unequal magnetic moments on its sublattices, giving it a non-zero spontaneous magnetization in zero applied magnetic field.
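In the simplest two-sublattice picture (a generic illustration, not a description of the specific material used here), the net magnetization is

\[
M = \left| M_A - M_B \right| \neq 0,
\]

because the opposing sublattice magnetizations M_A and M_B are unequal – unlike in an antiferromagnet, where they cancel exactly.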
When they placed a ferrimagnet on top of their superconductor and created a magnetic step, the step field caused the ferrimagnet’s magnetization to rotate.
For now, the magnetic steps generated through this approach do not have the speed or amplitude needed to switch materials like a ferrimagnet between stable states. Yet through further tweaks to the geometry of their setup, the researchers are confident that this ability may not be far out of reach.
“Our goal is to create a universal, ultrafast stimulus that can switch any magnetic sample between stable magnetic states,” Cavalleri says. “With suitable improvements, we envision applications ranging from phase transition control to complete switching of magnetic order parameters.”
Over the years, first as a PhD student and now as a postdoc, I have been approached by many students and early-career academics who have confided their problems to me. Their issues, which they struggled to deal with alone, ranged from anxiety and burnout to personal and professional relationships as well as mental-health concerns. Sadly, such discussions were not one-off incidents but seemed worryingly common in academia, where people are often under pressure to perform, face uncertainty over their careers and need to juggle lots of different tasks simultaneously.
But it can be challenging to even begin to approach someone else with a problem. That first step can take days or weeks of mental preparation, so for those of us who are approached for help, it is our responsibility to listen and act appropriately when someone does finally open up. This is especially so given that a supervisor, mentor, teaching assistant, or anybody in a position of seniority, may be the first point of contact when a difficulty becomes debilitating.
I am fortunate to have had excellent relationships with my PhD and postdoc supervisors – providing great examples to follow. Even then, however, it was difficult to subdue the feeling of nausea when I knocked on their office doors to have a difficult conversation. I was worried about their response and reaction and how they would judge me. While that first conversation is challenging for both parties, fortunately it does get easier from there.
Yet it can also be hard for the person who is trying to offer help, especially if they haven’t done so before. In fact, when colleagues began to confide in me, I’d had no formal preparation or training to support them. But through experience and some research, I found a few things that worked well in such complex situations. The first is to set and maintain boundaries – to be clear about where your personal limits lie. This includes which topics are off limits and to what extent you will engage with somebody. Someone who has recently experienced bereavement, for example, may not want to engage deeply with a student who is enduring the same, and so should make it clear that they can’t offer help. Yet at the same time, that person may feel confident providing support for someone struggling with imposter syndrome – a feeling that you don’t deserve to be there and aren’t good at your work.
Time restrictions can also be used as boundaries. If you are working on a critical experiment, have an article deadline or are about to go on holiday, explain that you can only help them until a certain point, after which you will explore alternative solutions together. Setting boundaries can also help mentors prepare to support someone who is struggling. This could involve taking a mental-health first-aid course to support a person who experiences panic attacks or is relapsing into depression. It could also mean finding contact details for professionals, either on campus or beyond, who could help. While providing such information might sound trivial and unimportant, remember that for a person who is feeling overwhelmed, it can be hugely appreciated.
Following up
Sharing problems takes courage. It also requires trust because if information leaks out, rumours and accusations can spread quickly and worsen situations. It is, however, possible to ask more senior colleagues for advice without identifying anyone or their exact circumstances, perhaps in cases when dealing with less than amicable relationships with collaborators. It is also possible to let colleagues know that a particular person needs more support without explicitly saying why.
There are times, however, when that confidentiality must be broken. In my experience, this should always be first addressed with the person at hand and broken only to somebody who is sure to have a concrete solution. For a student who is struggling with a particular subject, it could, for example, be the lecturer responsible for that course. For a colleague who is not coping with a divorce, say, it could be someone from HR or their supervisor. It could even be a university’s support team or the police for a student who has experienced sexual assault.
Even if the situation has been handed over to someone else, it’s important to follow up with the person struggling, which helps them know they’re being heard and respected
I have broken confidentiality at times and it can be nerve-wracking, but it is essential to provide the best possible support and take a situation that you cannot handle off your hands. Even if the issue has been handed over to someone else, it’s important to follow up with the person struggling, which helps them know they’re being heard and respected. Following up is not always a comfortable conversation, potentially invoking trauma or broaching sensitive topics. But it also allows them to admit that they are still looking for more support or that their situation has worsened.
A follow-up conversation could also be held in a discreet environment, with reassurance that nobody is obliged to go into detail. It may be as simple as asking “How are you feeling today?”. Letting someone express themselves without judgement can help them come to terms with their situation, let them speak or give them the confidence to approach you again.
Regularly reflecting on your boundaries and limits as well as having a good knowledge of possible resources can help you prepare for unexpected circumstances. It gives students and colleagues immediate care and relief at what might be their lowest point. But perhaps the most important aspect when approached by someone is to ask yourself this: “What kind of person would I want to speak to if I were struggling?”. That is the person you want to be.
Until now, researchers have had to choose between thermal and visible imaging: One reveals heat signatures while the other provides structural detail. Recording both and trying to align them manually — or harder still, synchronizing them temporally — can be inconsistent and time-consuming. The result is data that is close but never quite complete. The new FLIR MIX is a game changer, capturing and synchronizing high-speed thermal and visible imagery at up to 1000 fps. Visible and high-performance infrared cameras with FLIR Research Studio software work together to deliver one data set with perfect spatial and temporal alignment — no missed details or second guessing, just a complete picture of fast-moving events.
Jerry Beeney
Jerry Beeney is a seasoned global business development leader with a proven track record of driving product growth and sales performance in the Teledyne FLIR Science and Automation verticals. With more than 20 years at Teledyne FLIR, he has played a pivotal role in launching new thermal imaging solutions, working closely with technical experts, product managers, and customers to align products with market demands and customer needs. Before assuming his current role, Beeney held a variety of technical and sales positions, including senior scientific segment engineer. In these roles, he managed strategic accounts and delivered training and product demonstrations for clients across diverse R&D and scientific research fields. Beeney’s dedication to achieving meaningful results and cultivating lasting client relationships remains a cornerstone of his professional approach.
Researchers at the University of Victoria in Canada are developing a low-cost radiotherapy system for use in low- and middle-income countries and geographically remote rural regions. Initial performance characterization of the proof-of-concept device produced encouraging results, and the design team is now refining the system with the goal of clinical commercialization.
This could be good news for people living in low-resource settings, where access to cancer treatment is an urgent global health concern. The WHO’s International Agency for Research on Cancer estimates that there are at least 20 million new cases of cancer diagnosed annually and 9.7 million annual cancer-related deaths, based on 2022 data. By 2030, approximately 75% of cancer deaths are expected to occur in low- and middle-income countries, due to rising populations, healthcare and financial disparities, and a general lack of personnel and equipment resources compared with high-income countries.
The team’s orthovoltage radiotherapy system, known as KOALA (kilovoltage optimized alternative for adaptive therapy), is designed to create, optimize and deliver radiation treatments in a single session. The device, described in Biomedical Physics & Engineering Express, consists of a dual-robot system with a 225 kVp X-ray tube mounted onto one robotic arm and a flat-panel detector mounted on the other.
The same X-ray tube can be used to acquire cone-beam CT (CBCT) images, as well as to deliver treatment, with a peak tube voltage of 225 kVp and a maximum tube current of 2.65 mA for a 1.2 mm focal spot. Due to its maximum reach of 2.05 m and collision restrictions, the KOALA system has a limited range of motion, achieving 190° arcs for both CBCT acquisition and treatments.
Device testing
To characterize the KOALA system, lead author Olivia Masella and colleagues measured X-ray spectra for tube voltages of 120, 180 and 225 kVp. At 120 and 180 kVp, they observed good agreement with spectra from SpekPy (a Python software toolkit for modelling X-ray tube spectra). For the 225 kVp spectrum, they found a notable overestimation in the higher energies.
The researchers performed dosimetric tests by measuring percent depth dose (PDD) curves for a 120 kVp imaging beam and a 225 kVp therapy beam, using solid water phantom blocks with a Farmer ionization chamber at various depths. They used an open beam with 40° divergence and a source-to-surface distance of 30 cm. They also measured 2D dose profiles with radiochromic film at various depths in the phantom for a collimated 225 kVp therapy beam and a dose of approximately 175 mGy at the surface.
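Percent depth dose is the standard dosimetric quantity here; its textbook definition (not a formula taken from the paper) is

\[
\mathrm{PDD}(d) = 100 \times \frac{D(d)}{D(d_{\max})},
\]

where D(d) is the dose measured at depth d in the phantom and D(d_max) is the dose at the depth of maximum dose, which for kilovoltage beams lies at or very close to the surface.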
The PDD curves showed excellent agreement between experiment and simulations at both 120 and 225 kVp, with dose errors of less than 2%. The 2D profile results were less than optimal. The team aims to correct this by using a more optimal source-to-collimator distance (100 mm) and a custom-built motorized collimator.
Workflow proof-of-concept The team tested the workflow by acquiring a CBCT image of a dosimetry phantom containing radiochromic film, delivering a 190° arc to the phantom, and scanning and analysing the film. The CBCT image was then processed for Monte Carlo dose calculation and compared to the film dose. (Courtesy: CC BY 4.0/Biomed. Phys. Eng. Express 10.1088/2057-1976/adbcb2)
Geometrical evaluation conducted using a coplanar star-shot test showed that the system demonstrated excellent geometrical accuracy, generating a wobble circle with a diameter of just 0.3 mm.
Low costs and clinical practicality
Principal investigator Magdalena Bazalova-Carter describes the rationale behind the KOALA’s development. “I began the computer simulations of this project about 15 years ago, but the idea originated from Michael Weil, a radiation oncologist in Northern California,” she tells Physics World. “He and our industrial partner, Tai-Nang Huang, the president of Linden Technologies, are overseeing the progress of the project. Our university team is diversified, working in medical physics, computer science, and electrical and mechanical engineering. Orimtech, a medical device manufacturer and collaborator, developed the CBCT acquisition and reconstruction software and built the imaging prototype.”
Masella says that the team is keeping costs low in various ways. “Megavoltage X-rays are most commonly used in conventional radiotherapy, but KOALA’s design utilizes low-energy kilovoltage X-rays for treatment. By using a 225 kVp X-ray tube, the X-ray generation alone is significantly cheaper compared to a conventional linac, at a cost of USD $150,000 compared to $3 million,” she explains. “By operating in the kilovoltage instead of megavoltage range, only about 4 mm of lead shielding is required, instead of 6 to 7 feet of high-density concrete, bringing the shielding cost down from $2 million to $50,000. We also have incorporated components that are much lower cost than [those in] a conventional radiotherapy system.”
“Our novel iris collimator leaves are only 1-mm thick due to the lower treatment X-ray beam energy, and its 12 leaves are driven by a single motor,” adds Bazalova-Carter. “Although multileaf collimators with 120 leaves utilized with megavoltage X-ray radiotherapy are able to create complex fields, they are about 8-cm thick and are controlled by 120 separate motors. Given the high cost and mechanical vulnerability of multileaf collimators, our single motor design offers a more robust and reliable alternative.”
The team is currently developing a new motorized collimator, an improved treatment couch and a treatment planning system. They plan to improve CBCT imaging quality with hardware modifications, develop a CBCT-to-synthetic CT machine learning algorithm, refine the auto-contouring tool and integrate all of the software to smooth the workflow.
The researchers are planning to work with veterinarians to test the KOALA system with dogs diagnosed with cancer. They will also develop quality assurance protocols specific to the KOALA device using a dog-head phantom.
“We hope to demonstrate the capabilities of our system by treating beloved pets for whom available cancer treatment might be cost-prohibitive. And while our system could become clinically adopted in veterinary medicine, our hope is that it will be used to treat people in regions where conventional radiotherapy treatment is insufficient to meet demand,” they say.
Contrary to some theorists’ expectations, water does not form hydrogen bonds in its supercritical phase. This finding, which is based on terahertz spectroscopy measurements and simulations by researchers at Ruhr University Bochum, Germany, puts to rest a long-standing controversy and could help us better understand the chemical processes that occur near deep-sea vents.
Water is unusual. Unlike most other materials, it is denser as a liquid than it is as the ice that forms when it freezes. It also expands rather than contracting when it cools towards freezing; becomes less viscous when compressed; and exists in no fewer than 17 different crystalline phases.
Another unusual property is that at high temperatures and pressures – above 374 °C and 221 bars – water mostly exists as a supercritical fluid, meaning it shares some properties with both gases and liquids. Though such extreme conditions are rare on the Earth’s surface (at least outside a laboratory), they are typical for the planet’s crust and mantle. They are also present in so-called black smokers, which are hydrothermal vents that exist on the seabed in certain geologically active locations. Understanding supercritical water is therefore important for understanding the geochemical processes that occur in such conditions, including the genesis of gold ore.
Supercritical water also shows promise as an environmentally friendly solvent for industrial processes such as catalysis, and even as a mediator in nuclear power plants. Before any such applications see the light of day, however, researchers need to better understand the structure of water’s supercritical phase.
Probing the hydrogen bonding between molecules
At ambient conditions, the tetrahedrally-arranged hydrogen bonds (H-bonds) in liquid water produce a three-dimensional H-bonded network. Many of water’s unusual properties stem from this network, but as it approaches its supercritical point, its structure changes.
Previous studies of this change have produced results that were contradictory or unclear at best. While some pointed to the existence of distorted H-bonds, others identified heterogeneous structures involving rigid H-bonded dimers or, more generally, small clusters of tetrahedrally-bonded water surrounded by nonbonded gas-like molecules.
To resolve this mystery, an experimental team led by Gerhard Schwaab and Martina Havenith, together with Philipp Schienbein and Dominik Marx, investigated how water absorbs light in the far-infrared/terahertz (THz) range of the spectrum. They performed their experiments and simulations at temperatures of 20 to 400 °C and pressures from 1 bar up to 240 bars. In this way, they were able to investigate the hydrogen bonding between molecules in samples of water that were entering the supercritical state and samples that were already in it.
Diamond and gold cell
Because supercritical water is highly corrosive, the researchers carried out their experiments in a specially-designed cell made from diamond and gold. By comparing their experimental data with the results of extensive ab initio simulations that probed different parts of water’s high-temperature phase diagram, they obtained a molecular picture of what was happening.
The researchers found that the terahertz spectrum of water in its supercritical phase was practically identical to that of hot gaseous water vapour. This, they say, proves that supercritical water is different from both liquid water at ambient conditions and water in a low-temperature gas phase where clusters of molecules form directional hydrogen bonds. No such molecular clusters appear in supercritical water, they note.
The team’s ab initio molecular dynamics simulations also revealed that two water molecules in the supercritical phase remain close to each other for a very limited time – much shorter than the typical lifetime of hydrogen bonds in liquid water – before distancing themselves. What is more, the bonds between hydrogen and oxygen atoms in supercritical water do not have a preferred orientation. Instead, they are permanently and randomly rotating. “This is completely different to the hydrogen bonds that connect the water molecules in liquid water at ambient conditions, which do have a persisting preferred orientation,” Havenith says.
Now that they have identified a clear spectroscopic fingerprint for supercritical water, the researchers want to study how solutes affect the solvation properties of this substance. They anticipate that the results from this work, which is published in Science Advances, will enable them to characterize the properties of supercritical water for use as a “green” solvent.
Physicists working on the ATLAS experiment on the Large Hadron Collider (LHC) are the first to report the production of top quark–antiquark pairs in collisions involving heavy nuclei. By colliding lead ions, CERN’s LHC creates a fleeting state of matter called the quark–gluon plasma. This is an extremely hot and dense soup of subatomic particles that includes deconfined quarks and gluons. This plasma is believed to have filled the early universe microseconds after the Big Bang.
“Heavy-ion collisions at the LHC recreate the quark–gluon plasma in a laboratory setting,” says Anthony Badea, a postdoctoral researcher at the University of Chicago and one of the lead authors of a paper describing the research. As well as boosting our understanding of the early universe, studying the quark–gluon plasma at the LHC could also provide insights into quantum chromodynamics (QCD), which is the theory of how quarks and gluons interact.
Although the quark–gluon plasma at the LHC vanishes after about 10⁻²³ s, scientists can study it by analysing how other particles produced in collisions move through it. The top quark is the heaviest known elementary particle and its short lifetime and distinct decay pattern offer a unique way to explore the quark–gluon plasma. This is because the top quark decays before the quark–gluon plasma dissipates.
“The top quark decays into lighter particles that subsequently further decay,” explains Stefano Forte at the University of Milan, who was not involved in the research. “The time lag between these subsequent decays is modified if they happen within the quark–gluon plasma, and thus studying them has been suggested as a way to probe [quark–gluon plasma’s] structure. In order for this to be possible, the very first step is to know how many top quarks are produced in the first place, and determining this experimentally is what is done in this [ATLAS] study.”
First observations
The ATLAS team analysed data from lead–lead collisions and searched for events in which a top quark and its antimatter counterpart were produced. These particles can then decay in several different ways and the researchers focused on a less frequent but more easily identifiable mode known as the di-lepton channel. In this scenario, each top quark decays into a bottom quark and a W boson, which is a weak force-carrying particle that then transforms into a detectable lepton and an invisible neutrino.
The results not only confirmed that top quarks are created in this complex environment but also showed that their production rate matches predictions based on our current understanding of the strong nuclear force.
“This is a very important study,” says Juan Rojo, a theoretical physicist at the Free University of Amsterdam who did not take part in the research. “We have studied the production of top quarks, the heaviest known elementary particle, in the relatively simple proton–proton collisions for decades. This work represents the first time that we observe the production of these very heavy particles in a much more complex environment, with two lead nuclei colliding among them.”
As well as confirming QCD’s prediction of heavy-quark production in heavy-nuclei collisions, Rojo explains that “we have a novel probe to resolve the structure of the quark–gluon plasma”. He also says that future studies will enable us “to understand novel phenomena in the strong interactions such as how much gluons in a heavy nucleus differ from gluons within the proton”.
Crucial first step
“This is a first step – a crucial one – but further studies will require larger samples of top quark events to explore more subtle effects,” adds Rojo.
The number of top quarks created in the ATLAS lead–lead collisions agrees with theoretical expectations. In the future, more detailed measurements could help refine our understanding of how quarks and gluons behave inside nuclei. Eventually, physicists hope to use top quarks not just to confirm existing models, but to reveal entirely new features of the quark–gluon plasma.
Rojo says we could “learn about the time structure of the quark–gluon plasma, measurements which are ‘finer’ would be better, but for this we need to wait until more data is collected, in particular during the upcoming high-luminosity run of the LHC”.
Badea agrees that ATLAS’s observation opens the door to deeper explorations. “As we collect more nuclei collision data and improve our understanding of top-quark processes in proton collisions, the future will open up exciting prospects”.
Great mind Grete Hermann, pictured here in 1955, was one of the first scientists to consider the philosophical implications of quantum mechanics. (Photo: Lohrisch-Achilles. Courtesy: Bremen State Archives)
In the early days of quantum mechanics, physicists found its radical nature difficult to accept – even though the theory had successes. In particular Werner Heisenberg developed the first comprehensive formulation of quantum mechanics in 1925, while the following year Erwin Schrödinger was able to predict the spectrum of light emitted by hydrogen using his eponymous equation. Satisfying though these achievements were, there was trouble in store.
Long accustomed to Isaac Newton’s mechanical view of the universe, physicists had assumed that identical systems always evolve with time in exactly the same way, that is to say “deterministically”. But Heisenberg’s uncertainty principle and the probabilistic nature of Schrödinger’s wave function suggested worrying flaws in this notion. Those doubts were famously expressed by Albert Einstein, Boris Podolsky and Nathan Rosen in their “EPR” paper of 1935 (Phys. Rev. 47 777) and in debates between Einstein and Niels Bohr.
But the issues at stake went deeper than just a disagreement among physicists. They also touched on long-standing philosophical questions about whether we inhabit a deterministic universe, the related question of human free will, and the centrality of cause and effect. One person who rigorously addressed the questions raised by quantum theory was the German mathematician and philosopher Grete Hermann (1901–1984).
Hermann stands out in an era when it was rare for women to contribute to physics or philosophy, let alone to both. Writing in The Oxford Handbook of the History of Quantum Interpretations, published in 2022, the City University of New York philosopher of science Elise Crull has called Hermann’s work “one of the first, and finest, philosophical treatments of quantum mechanics”.
Grete Hermann upended the famous ‘proof’, developed by the Hungarian-American mathematician and physicist John von Neumann, that ‘hidden variables’ are impossible in quantum mechanics
What’s more, Hermann upended the famous “proof”, developed by the Hungarian-American mathematician and physicist John von Neumann, that “hidden variables” are impossible in quantum mechanics. But why have Hermann’s successes in studying the roots and meanings of quantum physics been so often overlooked? With 2025 being the International Year of Quantum Science and Technology, it’s time to find out.
Free thinker
Hermann, one of seven children, was born on 2 March 1901 in the north German port city of Bremen. Her mother was deeply religious, while her father was a merchant, a sailor and later an itinerant preacher. According to the 2016 book Grete Hermann: Between Physics and Philosophy by Crull and Guido Bacciagaluppi, she was raised according to her father’s maxim: “I train my children in freedom!” Essentially, he enabled Hermann to develop a wide range of interests and benefit from the best that the educational system could offer a woman at the time.
She was eventually admitted as one of a handful of girls at the Neue Gymnasium – a grammar school in Bremen – where she took a rigorous and broad programme of subjects. In 1921 Hermann earned a certificate to teach high-school pupils – an interest in education that reappeared in her later life – and began studying mathematics, physics and philosophy at the University of Göttingen.
In just four years, Hermann earned a PhD under the exceptional Göttingen mathematician Emmy Noether (1882–1935), famous for her groundbreaking theorem linking symmetry to physical conservation laws. Hermann’s final oral exam in 1925 featured not just mathematics, which was the subject of her PhD, but physics and philosophy too. She had specifically requested to be examined in the latter by the Göttingen philosopher Leonard Nelson, whose “logical sharpness” in lectures had impressed her.
Mutual interconnections Grete Hermann was fascinated by the fundamental overlap between physics and philosophy. (Courtesy: iStock/agsandrew)
By this time, Hermann’s interest in philosophy was starting to dominate her commitment to mathematics. Although Noether had found a mathematics position for her at the University of Freiburg, Hermann instead decided to become Nelson’s assistant, editing his books on philosophy. “She studies mathematics for four years,” Noether declared, “and suddenly she discovers her philosophical heart!”
Hermann found Nelson to be demanding and sometimes overbearing but benefitted from the challenges he set. “I gradually learnt to eke out, step by step,” she later declared, “the courage for truth that is necessary if one is to utterly place one’s trust, also within one’s own thinking, in a method of thought recognized as cogent.” Hermann, it appeared, was searching for a path to the internal discovery of truth, rather like Einstein’s Gedankenexperimente.
After Nelson died in 1927 aged just 45, Hermann stayed in Göttingen, where she continued editing and expanding his philosophical work and related political ideas. Espousing a form of socialism based on ethical reasoning to produce a just society, Nelson had co-founded a political action group and set up the associated Philosophical-Political Academy (PPA) to teach his ideas. Hermann contributed to both and also wrote for the PPA’s anti-Nazi newspaper.
Hermann’s involvement in the organizations Nelson had founded later saw her move to other locations in Germany, including Berlin. But after Hitler came to power in 1933, the Nazis banned the PPA, and Hermann and her socialist associates drew up plans to leave Germany. Initially, she lived at a PPA “school-in-exile” in neighbouring Denmark. As the Nazis began to arrest socialists, Hermann feared that Germany might occupy Denmark (as it indeed later did) and so moved again, first to Paris and then London.
Amid all these disruptions, Hermann continued to bring her dual philosophical and mathematical perspectives to physics, and especially to quantum mechanics
Arriving in Britain in early 1938, Hermann became acquainted with Edward Henry, another socialist, whom she later married. It was, however, merely a marriage of convenience that gave Hermann British citizenship and – when the Second World War started in 1939 – stopped her from being interned as an enemy alien. (The couple divorced after the war.) Amid all these disruptions, Hermann continued to bring her dual philosophical and mathematical perspectives to physics, and especially to quantum mechanics.
Mixing philosophy and physics
A major stimulus for Hermann’s work came from discussions she had in 1934 with Heisenberg and Carl Friedrich von Weizsäcker, who was then his research assistant at the Institute for Theoretical Physics in Leipzig. The previous year Hermann had written an essay entitled “Determinism and quantum mechanics”, which analysed whether the indeterminate nature of quantum mechanics – central to the “Copenhagen interpretation” of quantum behaviour – challenged the concept of causality.
Much cherished by physicists, causality says that every event has a cause, and that a given cause always produces a single specific event. Causality was also a tenet of the 18th-century German philosopher Immanuel Kant, best known for his famous 1781 treatise Critique of Pure Reason. He believed that causality is fundamental for how humans organize their experiences and make sense of the world.
Hermann, like Nelson, was a “neo-Kantian” who believed that Kant’s ideas should be treated with scientific rigour. In her 1933 essay, Hermann examined how the Copenhagen interpretation undermines Kant’s principle of causality. Although the article was not published at the time, she sent copies to Heisenberg, von Weizsäcker, Bohr and also Paul Dirac, who was then at the University of Cambridge in the UK.
In fact, we only know of the essay’s existence because Crull and Bacciagaluppi discovered a copy in Dirac’s archives at Churchill College, Cambridge. They also found a 1933 letter to Hermann from Gustav Heckmann, a physicist who said that Heisenberg, von Weizsäcker and Bohr had all read her essay and took it “absolutely and completely seriously”. Heisenberg added that Hermann was a “fabulously clever woman”.
Heckmann then advised Hermann to discuss her ideas more fully with Heisenberg, who he felt would be more open than Bohr to new ideas from an unexpected source. In 1934 Hermann visited Heisenberg and von Weizsäcker in Leipzig, with Heisenberg later describing his interaction in his 1971 memoir Physics and Beyond: Encounters and Conversations.
In that book, Heisenberg relates how rigorously Hermann wanted to treat philosophical questions. “[She] believed she could prove that the causal law – in the form Kant had given it – was unshakable,” Heisenberg recalled. “Now the new quantum mechanics seemed to be challenging the Kantian conception, and she had accordingly decided to fight the matter out with us.”
Their interaction was no fight, but a spirited discussion, with some sharp questioning from Hermann. When Heisenberg suggested, for instance, that a particular radium atom emitting an electron is an example of an unpredictable random event that has no cause, Hermann countered by saying that just because no cause has been found, it didn’t mean no such cause exists.
Significantly, this was a reference to what we now call “hidden variables” – the idea that quantum mechanics is being steered by additional parameters that we possibly don’t know anything about. Heisenberg then argued that even with such causes, knowing them would lead to complications in other experiments because of the wave nature of electrons.
Forward thinker Grete Hermann was one of the first people to study the notion that quantum mechanics might be steered by mysterious additional parameters – now dubbed “hidden variables” – that we know nothing about. (Courtesy: iStock/pobytov)
Suppose, using a hidden variable, we could predict exactly which direction an electron would move. The electron wave wouldn’t then be able to split and interfere with itself, resulting in an extinction of the electron. But such electron interference effects are experimentally observed, which Heisenberg took as evidence that no additional hidden variables are needed to make quantum mechanics complete. Once again, Hermann pointed out a discrepancy in Heisenberg’s argument.
In the end, neither side fully convinced the other, but inroads were made, with Heisenberg concluding in his 1971 book that “we had all learned a good deal about the relationship between Kant’s philosophy and modern science”. Hermann herself paid tribute to Heisenberg in a 1935 paper “Natural-philosophical foundations of quantum mechanics”, which appeared in a relatively obscure philosophy journal called Abhandlungen der Fries’schen Schule (6 69). In it, she thanked Heisenberg “above all for his willingness to discuss the foundations of quantum mechanics, which was crucial in helping the present investigations”.
Quantum indeterminacy versus causality
In her 1933 paper, Hermann aimed to understand if the indeterminacy of quantum mechanics threatens causality. Her overall finding was that wherever indeterminacy is invoked in quantum mechanics, it is not logically essential to the theory. So without claiming that quantum theory actually supports causality, she left the possibility open that it might.
To illustrate her point, Hermann considered Heisenberg’s uncertainty principle, which says that there’s a limit to the accuracy with which complementary variables, such as position, q, and momentum, p, can be measured, namely ΔqΔp ≥ h where h is Planck’s constant. Does this principle, she wondered, truly indicate quantum indeterminism?
Hermann asserted that this relation can mean only one of two possible things. One is that measuring one variable leaves the value of the other undetermined. Alternatively, the result of measuring the other variable can’t be precisely predicted. Hermann dismissed the first option because its very statement implies that exact values exist, and so it cannot be logically used to argue against determinism. The second choice could be valid, but that does not exclude the possibility of finding new properties – hidden variables – that give an exact prediction.
Hermann used her mathematical training to point out a flaw in von Neumann’s famous 1932 proof, which said that no hidden-variable theory can ever reproduce the features of quantum mechanics
In making her argument about hidden variables, Hermann used her mathematical training to point out a flaw in von Neumann’s famous 1932 proof, which said that no hidden-variable theory can ever reproduce the features of quantum mechanics. Quantum mechanics, according to von Neumann, is complete and no extra deterministic features need to be added.
For decades, his result was cited as “proof” that any deterministic addition to quantum mechanics must be wrong. Indeed, von Neumann had such a well-deserved reputation as a brilliant mathematician that few people had ever bothered to scrutinize his analysis. But in 1964 the Northern Irish theorist John Bell famously showed that a valid hidden-variable theory could indeed exist, though only if it’s “non-local” (Physics 1 195).
Non-locality means that measurements made on entangled particles in widely separated locations can show correlations that no local hidden-variable theory can account for, even though no usable signal passes between them faster than light. Despite being a notion that Einstein never liked, non-locality has been widely confirmed experimentally. In fact, non-locality is a defining feature of quantum physics and one that’s eminently useful in quantum technology.
Then, in 1966 Bell examined von Neumann’s reasoning and found an error that decisively refuted the proof (Rev. Mod. Phys. 38 447). Bell, in other words, showed that quantum mechanics could permit hidden variables after all – a finding that opened the door to alternative interpretations of quantum mechanics. However, Hermann had reported the very same error in her 1933 paper, and again in her 1935 essay, with an especially lucid exposition that almost exactly foresees Bell’s objection.
She had got there first, more than three decades earlier (see box).
Grete Hermann: 30 years ahead of John Bell
(Courtesy: iStock/Chayanan)
According to Grete Hermann, John von Neumann’s 1932 proof that quantum mechanics doesn’t need hidden variables “stands or falls” on his assumption concerning “expectation values” – an expectation value being the sum of all possible outcomes weighted by their respective probabilities. In the case of two quantities, say, r and s, von Neumann supposed that the expectation value of (r + s) is the same as the expectation value of r plus the expectation value of s. In other words, <(r + s)> = <r> + <s>.
This is clearly true in classical physics, Hermann writes, but the truth is more complicated in quantum mechanics. Suppose r and s are conjugate variables in an uncertainty relationship, such as position q and momentum p, which obey ΔqΔp ≥ h. By definition, a precise measurement of q rules out a precise measurement of p, so the two cannot be measured simultaneously and there is no guarantee that the relation <q + p> = <q> + <p> holds.
Further analysis, which Hermann supplied and Bell presented more fully, shows exactly why this invalidates or at least strongly limits the applicability of von Neumann’s proof; but Hermann caught the essence of the error first. Bell did not recognize or cite Hermann’s work, most probably because it was hardly known to the physics community until years after his 1966 paper.
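For readers who want to see why the additivity assumption is so restrictive, here is the standard spin-1/2 counterexample that Bell later made famous; it is shown only as an illustration and is not drawn from Hermann’s 1935 paper.

```latex
% Standard spin-1/2 illustration of the additivity problem (Bell's later
% example, not Hermann's own). For any quantum state the averages do add:
\langle \sigma_x + \sigma_z \rangle \;=\; \langle \sigma_x \rangle + \langle \sigma_z \rangle ,
% but a hypothetical hidden-variable ("dispersion-free") state would have to
% assign each observable one of its eigenvalues:
v(\sigma_x),\, v(\sigma_z) \in \{+1,-1\}, \qquad
v(\sigma_x + \sigma_z) \in \{+\sqrt{2},-\sqrt{2}\},
% since (\sigma_x + \sigma_z)^2 = 2\,\mathbb{1}. No assignment can then satisfy
% v(\sigma_x + \sigma_z) = v(\sigma_x) + v(\sigma_z), so demanding additivity
% for individual hidden-variable states, as von Neumann did, is an unjustified
% extrapolation from quantum-mechanical averages.
```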
A new view of causality
After rebutting von Neumann’s proof in her 1935 essay, Hermann didn’t actually turn to hidden variables. Instead, Hermann went in a different and surprising direction, probably as a result of her discussions with Heisenberg. She accepted that quantum mechanics is a complete theory that makes only statistical predictions, but proposed an alternative view of causality within this interpretation.
We cannot foresee precise causal links in a quantum mechanics that is statistical, she wrote. But once a measurement has been made with a known result, we can work backwards to get a cause that led to that result. In fact, Hermann showed exactly how to do this with various examples. In this way, she maintains, quantum mechanics does not refute the general Kantian category of causality.
Not all philosophers have been satisfied by the idea of retroactive causality. But writing in The Oxford Handbook of the History of Quantum Interpretations, Crull says that Hermann “provides the contours of a neo-Kantian interpretation of quantum mechanics”. “With one foot squarely on Kant’s turf and the other squarely on Bohr’s and Heisenberg’s,” Crull concludes, “[Hermann’s] interpretation truly stands on unique ground.”
Grete Hermann’s 1935 paper shows a deep and subtle grasp of elements of the Copenhagen interpretation.
But Hermann’s 1935 paper did more than just upset von Neumann’s proof. In the article, she shows a deep and subtle grasp of elements of the Copenhagen interpretation such as its correspondence principle, which says that – in the limit of large quantum numbers – answers derived from quantum physics must approach those from classical physics.
The paper also shows that Hermann was fully aware – and indeed extended the meaning – of the implications of Heisenberg’s thought experiment that he used to illustrate the uncertainty principle. Heisenberg envisaged a photon colliding with an electron, but after that contact, she writes, the wave function of the physical system is a linear combination of terms, each being “the product of one wave function describing the electron and one describing the light quantum”.
As she went on to say, “The light quantum and the electron are thus not described each by itself, but only in their relation to each other. Each state of the one is associated with one of the other.” Remarkably, this amounts to an early perception of quantum entanglement, which Schrödinger described and named later in 1935. There is no evidence, however, that Schrödinger knew of Hermann’s insights.
Hermann’s legacy
On the centenary of the birth of a full theory of quantum mechanics, how should we remember Hermann? According to Crull, the early founders of quantum mechanics were “asking philosophical questions about the implications of their theory [but] none of these men were trained in both physics and philosophy”. Hermann, however, was an expert in the two. “[She] composed a brilliant philosophical analysis of quantum mechanics, as only one with her training and insight could have done,” Crull says.
Had Hermann’s 1935 paper been more widely known, it could have altered the early development of quantum mechanics
Sadly for Hermann, few physicists at the time were aware of her 1935 paper even though she had sent copies to some of them. Had it been more widely known, her paper could have altered the early development of quantum mechanics. Reading it today shows how Hermann’s style of incisive logical examination can bring new understanding.
Hermann leaves other legacies too. As the Second World War drew to a close, she started writing about the ethics of science, especially the way in which it was carried out under the Nazis. After the war, she returned to Germany, where she devoted herself to pedagogy and teacher training. She disseminated Nelson’s views as well as her own through the reconstituted PPA, and took on governmental positions where she worked to rebuild the German educational system, apparently to good effect according to contemporary testimony.
Hermann also became active in politics as an adviser to the Social Democratic Party. She continued to have an interest in quantum mechanics, but it is not clear how seriously she pursued it in later life, which saw her move back to Bremen to care for an ill comrade from her early socialist days.
Hermann’s achievements first came to light in 1974 when the physicist and historian Max Jammer revealed her 1935 critique of von Neumann’s proof in his book The Philosophy of Quantum Mechanics. Following Hermann’s death in Bremen on 15 April 1984, interest slowly grew, culminating in Crull and Bacciagaluppi’s 2016 landmark study Grete Hermann: Between Physics and Philosophy.
The life of this deep thinker, who also worked to educate others and to achieve worthy societal goals, remains an inspiration for any scientist or philosopher today.
Synchronization studies: When the experimenters mapped the laser’s breathing frequency intensity in the parameter space of pump current and intracavity loss (left), unusual features appeared. The areas contoured by blue dashed lines correspond to strong intensity, and represent the main synchronization regions. (right) Synchronization regions extracted from this map highlight their leaf-like structure. (Courtesy: DOI: 10.1126/sciadv.ads3660)
Abnormal versions of synchronization patterns known as “Arnold’s tongues” have been observed in a femtosecond fibre laser that generates oscillating light pulses. While these unconventional patterns had been theorized to exist in certain strongly-driven oscillatory systems, the new observations represent the first experimental confirmation.
Scientists have known about synchronization since 1665, when Christiaan Huygens observed that pendulums placed on a table eventually begin to sway in unison, coupled by vibrations within the table. It was not until the mid-20th century, however, that a Russian mathematician, Vladimir Arnold, discovered that plotting certain parameters of such coupled oscillating systems produces a series of tongue-like triangular shapes.
These shapes are now known as Arnold’s tongues, and they are an important indicator of synchronization. When the system’s parameters are in the tongue region, the system is synchronized. Otherwise, it is not.
Arnold’s tongues are found in all real-world synchronized systems, explains Junsong Peng, a physicist at East China Normal University. They have previously been studied in systems such as nanomechanical and biological resonators to which external driving frequencies are applied. More recently, they have been observed in the motion of two bound solitons (wave packets that maintain their shapes and sizes as they propagate) when they are subject to external forces.
Abnormal synchronization regions
In the new work, Peng, Sonia Boscolo of Aston University in the UK, Christophe Finot of the University of Burgundy in France, and colleagues studied Arnold’s tongue patterns in a laser that emits solitons. Lasers of this type possess two natural synchronization frequencies: the repetition frequency of the solitons (determined by the laser’s cavity length) and the frequency at which the energy of the soliton becomes self-modulating, or “breathing”.
In their experiments, which they describe in Science Advances, the researchers found that as they increased the driving force applied to this so-called breathing-soliton laser, the synchronization region first broadened, then narrowed. These changes produced Arnold’s tongues with very peculiar shapes. Instead of being triangle-like, they appeared as two regions shaped like leaves or rays.
Avoiding amplitude death
Although theoretical studies had previously predicted that Arnold’s-tongue patterns would deviate substantially from the norm as the driving force increased, Peng says that demonstrating this in a real system was not easy. The driving force required to access the anomalous regime is so strong that it can destroy fragile coherent pulsing states, leading to “amplitude death” in which all oscillations are completely suppressed.
In the breathing-soliton laser, however, the two frequencies synchronized without amplitude death even though the repetition frequency is about two orders of magnitude higher than the breathing frequency. “These lasers therefore open up a new frontier for studying synchronization phenomena,” Peng says.
To demonstrate the system’s potential, the researchers explored the effects of using an optical attenuator to modulate the laser’s dissipation while changing the laser’s pump current to modulate its gain. Having precise control over both parameters enabled them to identify “holes” within the ray-shaped tongue regions. These holes appear when the driving force exceeds a certain strength, and they represent quasi-periodic (unsynchronized) states inside the larger synchronized regions.
“The manifestation of holes is interesting not only for nonlinear science, it is also important for practical applications,” Peng explains. “This is because these holes, which have not been realized in experiments until now, can destabilize the synchronized system.”
Understanding when and under which conditions these holes appear, Peng adds, could help scientists ensure that oscillating systems operate more stably and reliably.
Extending synchronization to new regimes
The researchers also used simulations to produce a “map” of the synchronization regions. These simulations perfectly reproduced the complex synchronization structures they observed in their experiments, confirming the existence of the “hole” effect.
Despite these successes, however, Peng says it is “still quite challenging” to understand why such patterns appear. “We would like to do more investigations on this issue and get a better understanding of the dynamics at play,” he says.
The current work extends studies of synchronization into a regime where the synchronized region no longer exhibits a linear relationship with the coupling strength (as is the case for normal Arnold’s-tongue patterns), he adds. “This nonlinear relationship can generate even broader synchronization regions compared to the linear regime, making it highly significant for enhancing the stability of oscillating systems in practical applications,” he tells Physics World.
A new retinal stimulation technique called Oz enabled volunteers to see colours that lie beyond the natural range of human vision. Developed by researchers at UC Berkeley, Oz works by stimulating individual cone cells in the retina with targeted microdoses of laser light, while compensating for the eye’s motion.
Colour vision is enabled by cone cells in the retina. Most humans have three types of cone cells, known as L, M and S (long, medium and short), which respond to different wavelengths of visible light. During natural human vision, the spectral distribution of light reaching these cone cells determines the colours that we see.
Spectral sensitivity curves The response function of M cone cells overlaps completely with those of L and S cones. (Courtesy: Ben Rudiak-Gould)
Some colours, however, simply cannot be seen. The spectral sensitivity curves of the three cone types overlap – in particular, there is no wavelength of light that stimulates only the M cone cells without stimulating nearby L (and sometimes also S) cones as well.
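The overlap problem can be illustrated with a toy calculation. The Python sketch below models each cone class as a Gaussian sensitivity curve with assumed peak wavelengths of roughly 440, 535 and 565 nm and an assumed common width; these are not the measured cone fundamentals, but the sketch shows that no wavelength excites the M cones without also exciting their neighbours.

```python
# Toy illustration (not the measured cone fundamentals) of why no single
# wavelength can excite only the M cones.
import math

PEAKS = {"S": 440.0, "M": 535.0, "L": 565.0}   # assumed peak wavelengths, nm
WIDTH = 45.0                                   # assumed common width, nm

def sensitivity(cone, wavelength_nm):
    """Toy Gaussian response of one cone class at a given wavelength."""
    return math.exp(-((wavelength_nm - PEAKS[cone]) / WIDTH) ** 2)

# Scan the visible range for a wavelength where the M response is large
# while both L and S responses are negligible - none exists.
best = None
for wl in range(400, 701):
    m = sensitivity("M", wl)
    others = max(sensitivity("L", wl), sensitivity("S", wl))
    selectivity = m - others
    if best is None or selectivity > best[1]:
        best = (wl, selectivity, m, others)

wl, sel, m, others = best
print(f"Most M-selective wavelength: {wl} nm")
print(f"M response {m:.2f} vs strongest other cone {others:.2f}")
# Even at the best wavelength another cone class still responds appreciably,
# which is why Oz bypasses wavelength altogether and targets cells spatially.
```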
The Oz approach, however, is fundamentally different. Rather than being based on spectral distribution, colour perception is controlled by shaping the spatial distribution of light on the retina.
Describing the technique in Science Advances, Ren Ng and colleagues showed that targeting individual cone cells with a 543 nm laser enabled subjects to see a range of colours in both images and videos. Intriguingly, stimulating only the M cone cells sent a colour signal to the brain that never occurs in natural vision.
The Oz laser system uses a technique called adaptive optics scanning light ophthalmoscopy (AOSLO) to simultaneously image and stimulate the retina with a raster scan of laser light. The device images the retina with infrared light to track eye motion in real time and targets pulses of visible laser light at individual cone cells, at a rate of 10⁵ per second.
In a proof-of-principle experiment, the researchers tested a prototype Oz system on five volunteers. In a preparatory step, they used adaptive optics-based optical coherence tomography (AO-OCT) to classify the LMS spectral type of 1000 to 2000 cone cells in a region of each subject’s retina.
When exclusively targeting M cone cells in these retinal regions, subjects reported seeing a new blue–green colour of unprecedented saturation – which the researchers named “olo”. They could also clearly perceive Oz hues in image and video form, reliably detecting the orientation of a red line and the motion direction of a rotating red dot on olo backgrounds. In colour matching experiments, subjects could only match olo with the closest monochromatic light by desaturating it with white light – demonstrating that olo lies beyond the range of natural vision.
The team also performed control experiments in which the Oz microdoses were intentionally “jittered” by a few microns. With the target locations no longer delivered accurately, the subjects instead perceived the natural colour of the stimulating laser. In the image and video recognition experiments, jittering the microdose target locations reduced the subjects’ accuracy to the level of random guessing.
Ng and colleagues conclude that “Oz represents a new class of experimental platform for vision science and neuroscience [that] will enable diverse new experiments”. They also suggest that the technique could one day help to elicit full colour vision in people with colour blindness.
Oh, balls A record-breaking 34-ball, 12-storey tower with three balls per layer (photo a); a 21-ball, six-storey tower with four balls per layer (photo b); an 11-ball, three-storey tower with five balls per layer (photo c); and why a tower with six balls per layer would be impossible as the “locker” ball just sits in the middle (photo d). (Courtesy: Andria Rogava)
A few years ago, I wrote in Physics World about various bizarre structures I’d built from tennis balls, the most peculiar of which I termed “tennis-ball towers”. They consisted of a series of three-ball layers topped by a single ball (“the locker”) that keeps the whole tower intact. Each tower had (3n + 1) balls, where n is the number of triangular layers. The tallest tower I made was a seven-storey, 19-ball structure (n = 6). Shortly afterwards, I made an even bigger, nine-storey, 25-ball structure (n = 8).
Now, in the latest exciting development, I have built a new, record-breaking tower with 34 balls (n = 11), in which all 30 balls from the second to the eleventh layer are kept in equilibrium by the locker on the top (see photo a). The three balls in the bottom layer aren’t influenced by the locker as they stay in place by virtue of being on the horizontal surface of a table.
I tried going even higher but failed to build a structure that would stay intact without supporting “scaffolds”. Now in case you think I’ve just glued the balls together, watch the video below to see how the incredible 34-ball structure collapses spontaneously, probably due to a slight vibration as I walked around the table.
Even more unexpectedly, I have been able to make tennis-ball towers consisting of layers of four balls (4n + 1) and five balls too (5n + 1). Their equilibria are more delicate and, in the case of four-ball structures, so far I have only managed to build (photo b) a 21-ball, six-storey tower (n = 5). You can also see the tower in the video below.
The (5n + 1) towers are even trickier to make and (photo c) I have only got up to a three-storey structure with 11 balls (n = 2): two lots of five balls with a sixth single ball on top. In case you’re wondering, towers with six balls in each layer are physically impossible to build because they form a regular hexagon. You can’t just use another ball as a locker because it would simply sit between the other six (photo d).
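For anyone keeping score, the ball counts quoted above all follow from a single rule, as this trivial Python check (using the layer sizes described here) confirms.

```python
# Sanity check of the counting rule: a tower with k balls per layer and
# n layers, plus the single "locker" on top, contains k*n + 1 balls.
def tower_balls(balls_per_layer, layers):
    return balls_per_layer * layers + 1

print(tower_balls(3, 11))  # the record 34-ball, 12-storey tower
print(tower_balls(4, 5))   # the 21-ball, six-storey tower
print(tower_balls(5, 2))   # the 11-ball, three-storey tower
```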
This podcast features Alonso Gutierrez, who is chief of medical physics at the Miami Cancer Institute in the US. In a wide-ranging conversation with Physics World’s Tami Freeman, Gutierrez talks about his experience using Elekta’s Leksell Gamma Knife for radiosurgery in a busy radiotherapy department.
A concept from quantum information theory appears to explain at least some of the peculiar behaviour of so-called “strange” metals. The new approach, which was developed by physicists at Rice University in the US, attributes the unusually poor electrical conductivity of these metals to an increase in the quantum entanglement of their electrons. The team say the approach could advance our understanding of certain high-temperature superconductors and other correlated quantum structures.
While electrons can travel through ordinary metals such as gold or copper relatively freely, strange metals resist their flow. Intriguingly, some high-temperature superconductors have a strange metal phase as well as a superconducting one. This strange-metal behaviour cannot be explained by conventional theories that treat electrons as independent particles and ignore the interactions between them.
To unpick these and other puzzling behaviours, a team led by Qimiao Si turned to the concept of quantum Fisher information (QFI). This statistical tool is typically used to measure how correlations between electrons evolve under extreme conditions. In this case, the team focused on a theoretical model known as the Anderson/Kondo lattice that describes how magnetic moments are coupled to electron spins in a material.
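For the mathematically inclined, the quantity at the heart of the analysis has a standard textbook definition, reproduced below; the precise formulation used by the Rice team may differ in its details.

```latex
% Standard definition of the quantum Fisher information (QFI) for a state
% \rho = \sum_i \lambda_i |i><i| probed with an observable A:
F_Q[\rho, A] \;=\; 2 \sum_{\lambda_i + \lambda_j > 0}
  \frac{(\lambda_i - \lambda_j)^2}{\lambda_i + \lambda_j}\,
  \left| \langle i | A | j \rangle \right|^2 ,
% which reduces to four times the variance of A for a pure state. For N
% spin-1/2 moments probed with a collective spin operator, any separable
% (unentangled) state obeys F_Q <= N, so a larger measured value witnesses
% multipartite entanglement of the kind described in the text.
```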
Correlations become strongest when strange metallicity appears
These analyses revealed that electron–electron correlations become strongest at precisely the point at which strange metallicity appears in a material. “In other words, the electrons become maximally entangled at this quantum critical point,” Si explains. “Indeed, the peak signals a dramatic amplification of multipartite electron spin entanglement, leading to a complex web of quantum correlations between many electrons.”
What is striking, he adds, is that this surge of entanglement provides a new and positive characterization of why strange metals are so strange, while also revealing why conventional theory fails. “It’s not just that traditional theory falls short, it is that it overlooks this rich web of quantum correlations, which prevents the survival of individual electrons as the elementary objects in this metallic substance,” he explains.
To test their finding, the researchers, who report their work in Nature Communications, compared their predictions with neutron scattering data from real strange-metal materials. They found that the experimental data was a good match. “Our earlier studies had also led us to suspect that strange metals might host a deeply entangled electron fluid – one whose hidden quantum complexity had yet to be fully understood,” adds Si.
The implications of this work are far-reaching, he tells Physics World. “Strange metals may hold the key to unlocking the next generation of superconductors — materials poised to transform how we transmit energy and, perhaps one day, eliminate power loss from the electric grid altogether.”
The Rice researchers say they now plan to explore how QFI manifests itself in the charge of electrons as well as their spins. “Until now, our focus has only been on the QFI associated with electrons spins, but electrons also of course carry charge,” Si says.
Researchers from the Karlsruhe Tritium Neutrino experiment (KATRIN) have announced the most precise upper limit yet on the neutrino’s mass. Thanks to new data and upgraded techniques, the new limit – 0.45 electron volts (eV) at 90% confidence – is half that of the previous tightest constraint, and marks a step toward answering one of particle physics’ longest-standing questions.
Neutrinos are ghostlike particles that barely interact with matter, slipping through the universe almost unnoticed. They come in three types, or flavours: electron, muon, and tau. For decades, physicists assumed all three were massless, but that changed in the late 1990s when experiments revealed that neutrinos can oscillate between flavours as they travel. This flavour-shifting behaviour is only possible if neutrinos have mass.
Although neutrino oscillation experiments confirmed that neutrinos have mass, and showed that the masses of the three flavours are different, they did not divulge the actual scale of these masses. Doing so requires an entirely different approach.
Looking for clues in electrons
In KATRIN’s case, that means focusing on a process called tritium beta decay, where a tritium nucleus (a proton and two neutrons) decays into a helium-3 nucleus (two protons and one neutron) by releasing an electron and an electron antineutrino. Due to energy conservation, the total energy from the decay is shared between the electron and the antineutrino. The neutrino’s mass determines the balance of the split.
“If the neutrino has even a tiny mass, it slightly lowers the energy that the electron can carry away,” explains Christoph Wiesinger, a physicist at the Technical University of Munich, Germany and a member of the KATRIN collaboration. “By measuring that [electron] spectrum with extreme precision, we can infer how heavy the neutrino is.”
Because the subtle effects of neutrino mass are most visible in decays where the neutrino carries away very little energy (most of it bound up in mass), KATRIN concentrates on measuring electrons that have taken the lion’s share. From these measurements, physicists can calculate neutrino mass without having to detect these notoriously weakly-interacting particles directly.
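For the mathematically inclined, the effect KATRIN looks for can be written schematically: the standard form of the beta spectrum near its endpoint shows how a non-zero neutrino mass reshapes the last few electronvolts of the electron energy distribution.

```latex
% Schematic form of the tritium beta spectrum near its endpoint E_0
% (natural units; F is the Fermi function, E and p are the electron's
% kinetic energy and momentum, \Theta the step function). A non-zero
% neutrino mass m_\nu pulls the endpoint down and distorts the spectrum:
\frac{\mathrm{d}\Gamma}{\mathrm{d}E} \;\propto\;
  F(Z, E)\, p\,(E + m_e)\,(E_0 - E)\,
  \sqrt{(E_0 - E)^2 - m_\nu^2}\;\,
  \Theta(E_0 - E - m_\nu).
```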
Improvements over previous results
The new neutrino mass limit is based on data taken between 2019 and 2021, with 259 days of operations yielding over 36 million electron measurements. “That’s six times more than the previous result,” Wiesinger says.
Other improvements include better temperature control in the tritium source and a new calibration method using a monoenergetic krypton source. “We were able to reduce background noise rates by a factor of two, which really helped the precision,” he adds.
Keeping track: Laser system for the analysis of the tritium gas composition at KATRIN’s Windowless Gaseous Tritium Source. Improvements to temperature control in this source helped raise the precision of the neutrino mass limit. (Courtesy: Tritium Laboratory, KIT)
At 0.45 eV, the new limit means the neutrino is at least a million times lighter than the electron. “This is a fundamental number,” Wiesinger says. “It tells us that neutrinos are the lightest known massive particles in the universe, and maybe that their mass has origins beyond the Standard Model.”
Despite the new tighter limit, however, definitive answers about the neutrino’s mass are still some ways off. “Neutrino oscillation experiments tell us that the lower bound on the neutrino mass is about 0.05 eV,” says Patrick Huber, a theoretical physicist at Virginia Tech, US, who was not involved in the experiment. “That’s still about 10 times smaller than the new KATRIN limit… For now, this result fits comfortably within what we expect from a Standard Model that includes neutrino mass.”
Model independence
Though Huber emphasizes that there are “no surprises” in the latest measurement, KATRIN has a key advantage over its rivals. Unlike cosmological methods, which infer neutrino mass based on how it affects the structure and evolution of the universe, KATRIN’s direct measurement is model-independent, relying only on energy and momentum conservation. “That makes it very powerful,” Wiesinger argues. “If another experiment sees a measurement in the future, it will be interesting to check if the observation matches something as clean as ours.”
KATRIN’s own measurements are ongoing, with the collaboration aiming for 1000 days of operations by the end of 2025 and a final sensitivity approaching 0.3 eV. Beyond that, the plan is to repurpose the instrument to search for sterile neutrinos – hypothetical heavier particles that don’t interact via the weak force and could be candidates for dark matter.
“We’re testing things like atomic tritium sources and ultra-precise energy detectors,” Wiesinger says. “There are exciting ideas, but it’s not yet clear what the next-generation experiment after KATRIN will look like.”
The high-street bank HSBC has worked with the NQCC, hardware provider Rigetti and the Quantum Software Lab to investigate the advantages that quantum computing could offer for detecting the signs of fraud in transactional data. (Courtesy: Shutterstock/Westend61 on Offset)
Rapid technical innovation in quantum computing is expected to yield an array of hardware platforms that can run increasingly sophisticated algorithms. In the real world, however, such technical advances will remain little more than a curiosity if they are not adopted by businesses and the public sector to drive positive change. As a result, one key priority for the UK’s National Quantum Computing Centre (NQCC) has been to help companies and other organizations to gain an early understanding of the value that quantum computing can offer for improving performance and enhancing outcomes.
To meet that objective the NQCC has supported several feasibility studies that enable commercial organizations in the UK to work alongside quantum specialists to investigate specific use cases where quantum computing could have a significant impact within their industry. One prime example is a project involving the high-street bank HSBC, which has been exploring the potential of quantum technologies for spotting the signs of fraud in financial transactions. Such fraudulent activity, which affects millions of people every year, now accounts for about 40% of all criminal offences in the UK and in 2023 generated total losses of more than £2.3 bn across all sectors of the economy.
Banks like HSBC currently exploit classical machine learning to detect fraudulent transactions, but these techniques require a large computational overhead to train the models and deliver accurate results. Quantum specialists at the bank have therefore been working with the NQCC, along with hardware provider Rigetti and the Quantum Software Lab at the University of Edinburgh, to investigate the capabilities of quantum machine learning (QML) for identifying the tell-tale indicators of fraud.
“HSBC’s involvement in this project has brought transactional fraud detection into the realm of cutting-edge technology, demonstrating our commitment to pushing the boundaries of quantum-inspired solutions for near-term benefit,” comments Philip Intallura, Group Head of Quantum Technologies at HSBC. “Our philosophy is to innovate today while preparing for the quantum advantage of tomorrow.”
Another study focused on a key problem in the aviation industry that has a direct impact on fuel consumption and the amount of carbon emissions produced during a flight. In this logistical challenge, the aim was to find the optimal way to load cargo containers onto a commercial aircraft. One motivation was to maximize the amount of cargo that can be carried, the other was to balance the weight of the cargo to reduce drag and improve fuel efficiency.
“Even a small shift in the centre of gravity can have a big effect,” explains Salvatore Sinno of technology solutions company Unisys, who worked on the project along with applications engineers at the NQCC and mathematicians at the University of Newcastle. “On a Boeing 747 a displacement of just 75 cm can increase the carbon emissions on a flight of 10,000 miles by four tonnes, and also increases the fuel costs for the airline company.”
A hybrid quantum–classical solution has been used to optimize the configuration of air freight, which can improve fuel efficiency and lower carbon emissions. (Courtesy: Shutterstock/supakitswn)
With such a large number of possible loading combinations, classical computers cannot find the optimal arrangement of cargo containers in any practical amount of time. In their project the team improved the precision of the solution by combining quantum annealing with high-performance computing, a hybrid approach that Unisys believes can offer immediate value for complex optimization problems. “We have reached the limit of what we can achieve with classical computing, and with this work we have shown the benefit of incorporating an element of quantum processing into our solution,” explains Sinno.
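To see the shape of the underlying optimization problem, here is a deliberately tiny, purely classical toy version in Python. The container weights and slot positions are invented for illustration, and the real project tackled far larger instances with a quantum-annealing/HPC hybrid rather than brute force.

```python
# Toy cargo-balancing problem: assign containers to slots so that the
# loaded centre of gravity stays as close as possible to the target.
from itertools import permutations

weights = [3.2, 2.7, 4.1, 1.8, 2.2]   # container masses, tonnes (invented)
slots = [-7.5, -3.0, 0.0, 3.0, 7.5]   # slot positions fore/aft of target CG, metres (invented)

def cg_offset(assignment):
    """How far the centre of gravity shifts for a given slot order:
    total moment about the target CG divided by total mass."""
    total = sum(weights)
    moment = sum(w * x for w, x in zip(weights, assignment))
    return moment / total

best = min(permutations(slots), key=lambda a: abs(cg_offset(a)))
print("best slot order for the containers:", best)
print(f"residual CG shift: {abs(cg_offset(best)) * 100:.1f} cm")
# With n containers there are n! arrangements, so exhaustive search is
# hopeless at realistic scale - which is what motivates annealing approaches.
```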
The HSBC project team also found that a hybrid quantum–classical solution could provide an immediate performance boost for detecting anomalous transactions. In this case, a quantum simulator running on a classical computer was used to run quantum algorithms for machine learning. “These simulators allow us to execute simple QML programmes, even though they can’t be run to the same level of complexity as we could achieve with a physical quantum processor,” explains Marco Paini, the project lead for Rigetti. “These simulations show the potential of these low-depth QML programmes for fraud detection in the near term.”
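As an illustration of the kind of low-depth QML primitive that can be run on a simulator, the Python sketch below encodes two transaction features into a two-qubit state and uses state overlaps as a quantum kernel to score anomalies. It is a minimal statevector simulation with invented data, not the HSBC/Rigetti pipeline.

```python
# Minimal statevector sketch of a low-depth quantum-kernel anomaly score.
# Purely illustrative: the encoding, data and scoring rule are all assumed.
import numpy as np

def ry(theta):
    """Single-qubit Y-rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

def feature_state(x):
    """Angle-encode two features: |psi(x)> = CNOT (RY(x0) ⊗ RY(x1)) |00>."""
    state = np.kron(ry(x[0]) @ np.array([1.0, 0.0]),
                    ry(x[1]) @ np.array([1.0, 0.0]))
    return CNOT @ state

def kernel(x, y):
    """Quantum kernel: squared overlap of the two encoded states."""
    return abs(np.vdot(feature_state(x), feature_state(y))) ** 2

# Invented "normal" transactions (scaled features) and one outlier.
normal = [np.array([0.2, 0.3]), np.array([0.25, 0.35]), np.array([0.3, 0.25])]
suspect = np.array([2.6, 2.9])

def anomaly_score(x):
    """Low average similarity to the normal set flags a transaction."""
    return 1.0 - np.mean([kernel(x, n) for n in normal])

print(f"score of a typical transaction: {anomaly_score(normal[0]):.3f}")
print(f"score of the outlier:           {anomaly_score(suspect):.3f}")
```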
The team also simulated more complex QML approaches using a similar but smaller-scale problem, demonstrating a further improvement in performance. This outcome suggests that running deeper QML algorithms on a physical quantum processor could deliver an advantage for detecting anomalies in larger datasets, even though the hardware does not yet provide the performance needed to achieve reliable results. “This initiative not only showcases the near-term applicability of advanced fraud models, but it also equips us with the expertise to leverage QML methods as quantum computing scales,” comments Intallura.
Indeed, the results obtained so far have enabled the project partners to develop a roadmap that will guide their ongoing development work as the hardware matures. One key insight, for example, is that even a fault-tolerant quantum computer would struggle to process the huge financial datasets produced by a bank like HSBC, since a finite amount of time is needed to run the quantum calculation for each data point. “From the simulations we found that the hybrid quantum–classical solution produces more false positives than classical methods,” says Paini. “One approach we can explore would be to use the simulations to flag suspicious transactions and then run the deeper algorithms on a quantum processor to analyse the filtered results.”
This particular project also highlighted the need for agreed protocols to navigate the strict rules on data security within the banking sector. For this project the HSBC team was able to run the QML simulations on its existing computing infrastructure, avoiding the need to share sensitive financial data with external partners. In the longer term, however, banks will need reassurance that their customer information can be protected when processed using a quantum computer. Anticipating this need, the NQCC has already started to work with regulators such as the Financial Conduct Authority, which is exploring some of the key considerations around privacy and data security, with that initial work feeding into international initiatives that are starting to consider the regulatory frameworks for using quantum computing within the financial sector.
For the cargo-loading project, meanwhile, Sinno says that an important learning point has been the need to formulate the problem in a way that can be tackled by the current generation of quantum computers. In practical terms that means defining constraints that reduce the complexity of the problem, but that still reflect the requirements of the real-world scenario. “Working with the applications engineers at the NQCC has helped us to understand what is possible with today’s quantum hardware, and how to make the quantum algorithms more viable for our particular problem,” he says. “Participating in these studies is a great way to learn and has allowed us to start using these emerging quantum technologies without taking a huge risk.”
Indeed, one key feature of these feasibility studies is the opportunity they offer for different project partners to learn from each other. Each project includes an end-user organization with a deep knowledge of the problem, quantum specialists who understand the capabilities and limitations of present-day solutions, and academic experts who offer an insight into emerging theoretical approaches as well as methodologies for benchmarking the results. The domain knowledge provided by the end users is particularly important, says Paini, to guide ongoing development work within the quantum sector. “If we only focused on the hardware for the next few years, we might come up with a better technical solution but it might not address the right problem,” he says. “We need to know where quantum computing will be useful, and to find that convergence we need to develop the applications alongside the algorithms and the hardware.”
Another major outcome from these projects has been the ability to make new connections and identify opportunities for future collaborations. As a national facility NQCC has played an important role in providing networking opportunities that bring diverse stakeholders together, creating a community of end users and technology providers, and supporting project partners with an expert and independent view of emerging quantum technologies. The NQCC has also helped the project teams to share their results more widely, generating positive feedback from the wider community that has already sparked new ideas and interactions.
“We have been able to network with start-up companies and larger enterprise firms, and with the NQCC we are already working with them to develop some proof-of-concept projects,” says Sinno. “Having access to that wider network will be really important as we continue to develop our expertise and capability in quantum computing.”
In new experiments, researchers in Switzerland have tested models of how microwaves affect low-temperature chemical reactions between ions and molecules. Using an innovative setup, Valentina Zhelyazkova and colleagues at ETH Zurich showed for the first time how the application of microwave pulses can slow down reaction rates via nonthermal mechanisms.
Physicists have been studying chemical reactions between ions and neutral molecules for some time. At close to room temperature, classical models can closely predict how the electric fields emanating from ions will induce dipoles in nearby neutral molecules, allowing researchers to calculate these reaction rates with impressive accuracy. Yet as temperatures drop close to absolute zero, a wide array of more complex effects come into play, which have gradually been incorporated into the latest theoretical models.
“At low temperatures, models of reactivity must include the effects of the permanent electric dipoles and quadrupole moments of the molecules, the effect of their vibrational and rotational motion,” Zhelyazkova explains. “At extremely low temperatures, even the quantum-mechanical wave nature of the reactants must be considered.”
Rigorous experiments
Although these low-temperature models have steadily improved in recent years, the ability to put them to the test through rigorous experiments has so far been hampered by external factors.
In particular, stray electric fields in the surrounding environment can heat the ions and molecules, so that any important quantum effects are quickly drowned out by noise. “Consequently, it is only in the past few years that experiments have provided information on the rates of ion–molecule reactions at very low temperatures,” Zhelyazkova explains.
In their study, Zhelyazkova’s team improved on these past experiments through an innovative approach to cooling the internal motions of the molecules being heated by stray electric fields. Their experiment involved a reaction between positively-charged helium ions and neutral molecules of carbon monoxide (CO). This creates neutral atoms of helium and oxygen, and a positively-charged carbon atom.
To initiate the reaction, the researchers created separate but parallel supersonic beams of helium and CO that were combined in a reaction cell. “In order to overcome the problem of heating the ions by stray electric fields, we study the reactions within the distant orbit of a highly excited electron, which makes the overall system electrically neutral without affecting the ion–molecule reaction taking place within the electron orbit,” explains ETH’s Frédéric Merkt.
Giant atoms
In such a “Rydberg atom”, the highly excited electron is some distance from the helium nucleus and its other electron. As a result, a Rydberg helium atom can be considered an ion with a “spectator” electron, which has little influence over how the reaction unfolds. To ensure the best possible accuracy, “we use a printed circuit board device with carefully designed surface electrodes to deflect one of the two beams,” explains ETH’s Fernanda Martins. “We then merged this beam with the other, and controlled the relative velocity of the two beams.”
Altogether, this approach enabled the researchers to cool the molecules internally to temperatures below 10 K – where their quantum effects can dominate over externally induced noise. With this setup, Zhelyazkova, Merkt, Martins, and their colleagues could finally put the latest theoretical models to the test.
According to the latest low-temperature models, the rate of the CO–helium ion reaction should be determined by the quantized rotational states of the CO molecule – whose energies lie within the microwave range. In this case, the team used microwave pulses to put the CO into different rotational states, allowing them to directly probe their influence on the overall reaction rate.
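A quick back-of-the-envelope check shows why these rotational states sit in the microwave regime. The Python snippet below uses an approximate literature value for the CO rotational constant (about 57.6 GHz), which is not quoted in the article.

```python
# Rigid-rotor estimate of CO rotational transitions: E_J = h * B * J * (J + 1),
# so the J -> J+1 transition frequency is 2B(J + 1).
B_CO_GHZ = 57.6          # approximate rotational constant of CO, GHz

def rotational_level_ghz(j):
    """Rotational term value E_J / h in GHz for a rigid rotor."""
    return B_CO_GHZ * j * (j + 1)

for j in range(3):
    f_transition = rotational_level_ghz(j + 1) - rotational_level_ghz(j)
    print(f"J={j} -> J={j+1}: {f_transition:6.1f} GHz")
# The ground-to-first-excited-state transition comes out near 115 GHz,
# i.e. in the microwave/millimetre-wave range used to manipulate the
# CO molecules' rotational state.
```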
Three important findings
Altogether, their experiment yielded three important findings. First, it confirmed that the reaction rate varies with the rotational state of the CO molecule. Second, it showed that this reactivity can be modified by using a short microwave pulse to excite the CO molecule from its rotational ground state to its first excited state, which turned out to be less reactive than the ground state.
The third and most counterintuitive finding is that microwaves can slow down the reaction rate, via mechanisms unrelated to the heat they impart on the molecules absorbing them. “In most applications of microwaves in chemical synthesis, the microwaves are used as a way to thermally heat the molecules up, which always makes them more reactive,” Zhelyazkova says.
Building on the success of their experimental approach, the team now hopes to investigate these nonthermal mechanisms in more detail, with the aim of shedding new light on how microwaves can influence chemical reactions via effects other than heating. In turn, their results could ultimately pave the way for new techniques for fine-tuning the rates of reactions between ions and neutral molecules.
A quarter of a century ago, in May 2000, I published an article entitled “Why science thrives on criticism”. The article, which ran to slightly over a page in Physics World magazine, was the first in a series of columns called Critical Point. Periodicals, I said, have art and music critics as well as sports and political commentators, and book and theatre reviewers too. So why shouldn’t Physics World have a science critic?
The implication that I had a clear idea of the “critical point” for this series was not entirely accurate. As the years go by, I have found myself improvising, inspired by politics, books, scientific discoveries, readers’ thoughts, editors’ suggestions and more. If there is one common theme, it’s that science is like a workshop – or a series of loosely related workshops – as I argued in The Workshop and the World, a book that sprang from my columns.
Workshops are controlled environments, inside which researchers can stage and study special things – elementary particles, chemical reactions, plant uptakes of nutrients – that appear rarely or in a form difficult to study in the surrounding world. Science critics do not participate in the workshops themselves or even judge their activities. What they do is evaluate how workshops and worlds interact.
This can happen in three ways.
Critical triangle
First is to explain why what’s going on inside the workshops matters to outsiders. Sometimes, those activities can be relatively simple to describe, which leads to columns concerning all manner of everyday activities. I have written, for example, about the physics of coffee and breadmaking. I’ve also covered toys, tops, kaleidoscopes, glass and other things that all of us – physicists and non-physicists alike – use, value and enjoy.
Physicists often engage in activities that might seem inconsequential to them yet are an intrinsic part of the practice of physics
When viewing science as workshops, a second role is to explain why what’s outside the workshops matters to insiders. That’s because physicists often engage in activities that might seem inconsequential to them – they’re “just what the rest of the world does” – yet are an intrinsic part of the practice of physics. I’ve covered, for example, physicists taking out patents, creating logos, designing lab architecture, taking holidays, organizing dedications, going on retirement and writing memorials for the deceased.
Such activities I term “black elephants”. That’s because they’re a cross between things physicists don’t want to talk about (“elephants in the room”) and things that force them to renounce cherished notions (just as “black swans” disprove that “all swans are white”).
A third role of a science critic is to explain what matters that takes place both inside and outside the workshop. I’m thinking of things like competition, leadership, trust, surprise, workplace training courses, cancel culture and even jokes and funny tales. Interpretations of the meaning of quantum mechanics, such as “QBism”, which I covered both in 2019 and 2022, are an ongoing interest. That’s because they’re relevant both to the structure of physics and to philosophy as they disrupt notions of realism, objectivity, temporality and the scientific method.
Being critical
The term “critic” may suggest someone with a congenitally negative outlook, but that’s wrong. My friend Fred Cohn, a respected opera critic, told me that, in a conversation after a concert, he criticized the performance of the singer Luciano Pavarotti. His remark provoked a woman to shout angrily at him: “Could you do better?” Of course not! It’s the critic’s role to evaluate performances of an activity, not to perform the activity oneself.
Working practices In his first Critical Point column for Physics World, philosopher and historian of science Robert P Crease interrogated the role of the science critic. (Courtesy: iStock/studiostockart)
Having said that, sometimes a critic must be critical to be honest. In particular, I hate it when scientists try to delegitimize the experience of non-scientists by saying, for example, that “time does not exist”. Or when they pretend they don’t see rainbows but wavelengths of light or that they don’t see sunrises or the plane of a Foucault pendulum move but the Earth spinning. Comments like that turn non-scientists off science by making it seem elitist and other-worldly. It’s what I call “scientific gaslighting”.
Most of all, I hate it when scientists pontificate that philosophy is foolish or worthless, especially when it’s the likes of Steven Pinker, who ought to know better. Writing in Nature (518 300), I once criticized the great theoretical physicist Steven Weinberg, who I counted as a friend, for taking a complex and multivalent text, plucking out a single line, and misreading it as if the line were from a physics text.
The text in question was Plato’s Phaedo, where Socrates expresses his disappointment with his fellow philosopher Anaxagoras for giving descriptions of heavenly bodies “in purely physical terms, without regard to what is best”. Weinberg claimed this statement meant that Socrates “was not very interested in natural science”. Nothing could be further from the truth.
At that moment in the Phaedo, Socrates is recounting his intellectual autobiography. He has just come to the point where, as a youth, he was entranced by materialism and was eager to hear Anaxagoras’s opposing position. When Anaxagoras promised to describe the heavens both mechanically and as the product of a wise and divine mind but could do only the former, Socrates says he was disappointed.
Weinberg’s jibe ignores the context. Socrates is describing how he had once embraced Anaxagoras’s view of a universe ruled by a divine mind but later rejected that view. As an adult, Socrates learned to probe hypotheses and other claims by putting them to the test, just as modern-day scientists do. Weinberg was misrepresenting Socrates by describing a position that he later abandoned.
The critical point of the critical point
Ultimately, the “critical point” of my columns over the last 25 years has been to provoke curiosity and excitement about what philosophers, historians and sociologists do for science. I’ve also wanted to raise awareness that these fields are not just fripperies but essential if we are to fully understand and protect scientific activity.
As I have explained several times – especially in the wake of the US shutting its High Flux Beam Reactor and National Tritium Labeling Facility – scientists need to understand and relate to the surrounding world with the insight of humanities scholars. Because if they don’t, they are in danger of losing their workshops altogether.
New measurements by physicists from the University of Surrey in the UK have shed fresh light on where the universe’s heavy elements come from. The measurements, which were made by smashing high-energy protons into a uranium target to generate strontium ions, then accelerating these ions towards a second, helium-filled target, might also help improve nuclear reactors.
The origin of the elements that follow iron in the periodic table is one of the biggest mysteries in nuclear astrophysics. As Surrey’s Matthew Williams explains, the standard picture is that these elements were formed when other elements captured neutrons, then underwent beta decay. The two ways this can happen are known as the rapid (r) and slow (s) processes.
The s-process occurs in the cores of stars and is relatively well understood. The r-process is comparatively mysterious. It occurs during violent astrophysical events such as certain types of supernovae and neutron star mergers that create an abundance of free neutrons. In these neutron-rich environments, atomic nuclei essentially capture neutrons before the neutrons can turn into protons via beta-minus decay, which occurs when a neutron emits an electron and an antineutrino.
From the night sky to the laboratory
One way of studying the r-process is to observe older stars. “Studies on heavy element abundance patterns in extremely old stars provide important clues here because these stars formed at times too early for the s-process to have made a significant contribution,” Williams explains. “This means that the heavy element pattern in these old stars may have been preserved from material ejected by prior extreme supernovae or neutron star merger events, in which the r-process is thought to happen.”
Recent observations of this type have revealed that the r-process is not necessarily a single scenario with a single abundance pattern. It may also have a “weak” component that is responsible for making elements with atomic numbers ranging from 37 (rubidium) to 47 (silver), without getting all the way up to the heaviest elements such as gold (atomic number 79) or actinides like thorium (90) and uranium (92).
This weak r-process could occur in a variety of situations, Williams explains. One scenario involves radioactive isotopes (that is, those with a few more neutrons than their stable counterparts) forming in hot neutrino-driven winds streaming from supernovae. This “flow” of nucleosynthesis towards higher neutron numbers is caused by processes known as (alpha,n) reactions, which occur when a radioactive isotope fuses with a helium nucleus and spits out a neutron. “These reactions impact the final abundance pattern before the neutron flux dissipates and the radioactive nuclei decay back to stability,” Williams says. “So, to match predicted patterns to what is observed, we need to know how fast the (alpha,n) reactions are on radioactive isotopes a few neutrons away from stability.”
The 94Sr(alpha,n)97Zr reaction
To obtain this information, Williams and colleagues studied a reaction in which radioactive strontium-94 absorbs an alpha particle (a helium nucleus), then emits a neutron and transforms into zirconium-97. To produce the radioactive 94Sr beam, they fired high-energy protons at a uranium target at TRIUMF, the Canadian national accelerator centre. Using lasers, they selectively ionized and extracted strontium from the resulting debris before filtering out 94Sr ions with a magnetic spectrometer.
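In standard nuclear notation, and as a quick bookkeeping check on the isotopes quoted above, the reaction reads

$$^{94}_{38}\mathrm{Sr} + {}^{4}_{2}\mathrm{He} \;\rightarrow\; {}^{97}_{40}\mathrm{Zr} + {}^{1}_{0}n,$$

with mass number conserved (94 + 4 = 97 + 1) and proton number conserved (38 + 2 = 40 + 0).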
The team then accelerated a beam of these 94Sr ions to energies representative of collisions that would happen when a massive star explodes as a supernova. Finally, they directed the beam onto a nanomaterial target made of a silicon thin film containing billions of small nanobubbles of helium. This target was made by researchers at the Materials Science Institute of Seville (CSIC) in Spain.
“This thin film crams far more helium into a small target foil than previous techniques allowed, thereby enabling the measurement of helium burning reactions with radioactive beams that characterize the weak r-process,” Williams explains.
To identify the 94Sr(alpha,n)97Zr reactions, the researchers used a mass spectrometer to select for 97Zr while simultaneously using an array of gamma-ray detectors around the target to look for the gamma rays it emits. When they saw both a heavy ion with an atomic mass of 97 and a 97Zr gamma ray, they knew they had identified the reaction of interest. In doing so, Williams says, they were able to measure the probability that this reaction occurs at the energies and temperatures present in supernovae.
Williams thinks that scientists should be able to measure many more weak r-process reactions using this technology. This should help them constrain where the weak r-process comes from. “Does it happen in supernovae winds? Or can it happen in a component of ejected material from neutron star mergers?” he asks.
As well as shedding light on the origins of heavy elements, the team’s findings might also help us better understand how materials respond to the high radiation environments in nuclear reactors. “By updating models of how readily nuclei react, especially radioactive nuclei, we can design components for these reactors that will operate and last longer before needing to be replaced,” Williams says.
Superpositions of quantum states known as Schrödinger cat states can be created in “hot” environments with temperatures up to 1.8 K, say researchers in Austria and Spain. By reducing the restrictions involved in obtaining ultracold temperatures, the work could benefit fields such as quantum computing and quantum sensing.
In 1935, Erwin Schrödinger used a thought experiment now known as “Schrödinger’s cat” to emphasize what he saw as a problem with some interpretations of quantum theory. His gedankenexperiment involved placing a quantum system (a cat in a box with a radioactive sample and a flask of poison) in a state that is a superposition of two states (“alive cat” if the sample has not decayed and “dead cat” if it has). These superposition states are now known as Schrödinger cat states (or simply cat states) and are useful in many fields, including quantum computing, quantum networks and quantum sensing.
Creating a cat state, however, was long thought to require quantum particles to be in their ground state. This, in turn, means cooling them to extremely low temperatures. Even marginally higher temperatures were thought to destroy the fragile nature of these states, rendering them useless for applications. But the need for ultracold temperatures comes with its own challenges, as it restricts the range of possible applications and hinders the development of large-scale systems such as powerful quantum computers.
Cat on a hot tin…microwave cavity?
The new work, which was carried out by researchers at the University of Innsbruck and IQOQI in Austria together with colleagues at the ICFO in Spain, challenges the idea that ultralow temperatures are a must for generating cat states. Instead of starting from the ground state, they used thermally excited states to show that quantum superpositions can exist at temperatures of up to 1.8 K – an environment that might as well be an oven in the quantum world.
Team leader Gerhard Kirchmair, a physicist at the University of Innsbruck and the IQOQI, says the study evolved from one of those “happy accidents” that characterize work in a collaborative environment. During a coffee break with a colleague, he realized he was well-equipped to prove the hypothesis of another colleague, Oriol Romero-Isart, who had shown theoretically that cat states can be generated out of a thermal state.
The experiment involved creating cat states inside a microwave cavity that acts as a quantum harmonic oscillator. This cavity is coupled to a superconducting transmon qubit that behaves as a two-level system where the superposition is generated. While the overall setup is cooled to 30 mK, the cavity mode itself is heated by equilibrating it with amplified Johnson-Nyquist noise from a resistor, making it 60 times hotter than its environment.
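As a quick check that these figures hang together:

$$T_{\mathrm{cavity}} \approx 60 \times 30\ \mathrm{mK} = 1.8\ \mathrm{K},$$

which matches the maximum temperature at which the team reports observing cat states.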
To establish the existence of quantum correlations at this higher temperature, the team directly measured the Wigner functions of the states. Doing so revealed the characteristic interference patterns of Schrödinger cat states.
Benefits for quantum sensing and error correction
According to Kirchmair, being able to realize cat states without ground-state cooling could bring benefits for quantum sensing. The mechanical oscillator systems used to sense acceleration or force, for example, are normally cooled to the ground state to achieve the necessary high sensitivity, but such extreme cooling may not be necessary. He adds that quantum error correction schemes could also benefit, as they rely on being able to create cat states reliably; the team’s work shows that a residual thermal population places fewer limitations on this than previously thought.
“For next steps we will use the system for what it was originally designed, i.e. to mediate interactions between multiple qubits for novel quantum gates,” he tells Physics World.
Yiwen Chu, a quantum physicist from ETH Zürich in Switzerland who was not involved in this research, praises the “creativeness of the idea”. She describes the results as interesting and surprising because they seem to counter the common view that lack of purity in a quantum state degrades quantum features. She also agrees that the work could be important for quantum sensing, adding that many systems – including some more suited for sensing – are difficult to prepare in the ground state.
However, Chu notes that, for reasons stemming from the system’s parameters and the protocols the team used to generate the cat states, it should be possible to cool this particular system very efficiently to the ground state. This, she says, somewhat diminishes the argument that the method will be useful for systems where this isn’t the case. “However, these parameters and the protocols they showed might not be the only way to prepare such states, so on a fundamental level it is still very interesting,” she concludes.
Electron therapy has long played an important role in cancer treatments. Electrons with energies of up to 20 MeV can treat superficial tumours while minimizing delivered dose to underlying tissues; they are also ideal for performing total skin therapy and intraoperative radiotherapy. The limited penetration depth of such low-energy electrons, however, limits the range of tumour sites that they can treat. And as photon-based radiotherapy technology continues to progress, electron therapy has somewhat fallen out of fashion.
That could all be about to change with the introduction of radiation treatments based on very high-energy electrons (VHEEs). Once realised in the clinic, VHEEs – with energies from 50 up to 400 MeV – will deliver highly penetrating, easily steerable, conformal treatment beams with the potential to enable emerging techniques such as FLASH radiotherapy. French medical technology company THERYQ is working to make this opportunity a reality.
Therapeutic electron beams are produced using radio frequency (RF) energy to accelerate electrons within a vacuum cavity. An accelerator of just over 1 m in length can boost electrons to energies of about 25 MeV – corresponding to a tissue penetration depth of a few centimetres. It’s possible to create higher-energy beams by simply daisy-chaining additional vacuum chambers, but such systems soon become too large and impractical for clinical use.
THERYQ is focusing on a totally different approach to generating VHEE beams. “In an ideal case, these accelerators allow you to reach energy transfers of around 100 MeV/m,” explains THERYQ’s Sébastien Curtoni. “The challenge is to create a system that’s as compact as possible, closer to the footprint and cost of current radiotherapy machines.”
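To put rough numbers on this – using only the figures quoted in this article, not THERYQ’s actual design parameters – the length of accelerating structure needed scales inversely with the accelerating gradient:

$$L \approx \frac{E}{G}: \qquad \frac{150\ \mathrm{MeV}}{100\ \mathrm{MeV/m}} = 1.5\ \mathrm{m} \quad \text{versus} \quad \frac{150\ \mathrm{MeV}}{\sim\!25\ \mathrm{MeV/m}} \approx 6\ \mathrm{m}$$

at the roughly 25 MeV per metre implied by a conventional clinical linac of just over a metre in length.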
Working in collaboration with CERN, THERYQ is aiming to modify CERN’s Compact Linear Collider technology for clinical applications. “We are adapting the CERN technology, which was initially produced for particle physics experiments, to radiotherapy,” says Curtoni. “There are definitely things in this design that are very useful for us and other things that are difficult. At the moment, this is still in the design and conception phase; we are not there yet.”
VHEE advantages
The higher energy of VHEE beams provides sufficient penetration to treat deep tumours, with the dose peak region extending up to 20–30 cm in depth for parallel (non-divergent) beams using energy levels of 100–150 MeV (for field sizes of 10 x 10 cm or above). And in contrast to low-energy electrons, which have significant lateral spread, VHEE beams have extremely narrow penumbra with sharp beam edges that help to create highly conformal dose distributions.
“Electrons are extremely light particles and propagate through matter in very straight lines at very high energies,” Curtoni explains. “If you control the initial direction of the beam, you know that the patient will receive a very steep and well defined dose distribution and that, even for depths above 20 cm, the beam will remain sharp and not spread laterally.”
Electrons are also relatively insensitive to tissue inhomogeneities, such as those encountered as the treatment beam passes through different layers of muscle, bone, fat or air. “VHEEs have greater robustness against density variations and anatomical changes,” adds THERYQ’s Costanza Panaino. “This is a big advantage for treatments in locations where there is movement, such as the lung and pelvic areas.”
It’s also possible to manipulate VHEEs via electromagnetic scanning. Electrons have a charge-to-mass ratio roughly 1800 times higher than that of protons, meaning that they can be steered with a much weaker magnetic field than protons require. “As a result, the technology that you are building has a smaller footprint and the possibility of costing less,” Panaino explains. “This is extremely important because the cost of building a proton therapy facility is prohibitive for some countries.”
Enabling FLASH
In addition to expanding the range of clinical indications that can be treated with electrons, VHEE beams can also provide a tool to enable the emerging – and potentially game changing – technique known as FLASH radiotherapy. By delivering therapeutic radiation at ultrahigh dose rates (higher than 100 Gy/s), FLASH vastly reduces normal tissue toxicity while maintaining anti-tumour activity, potentially minimizing harmful side-effects.
The recent interest in the FLASH effect began back in 2014 with the report of a differential response between normal and tumour tissue in mice exposed to high dose-rate, low-energy electrons. Since then, most preclinical FLASH studies have used electron beams, as did the first patient treatment in 2019 – a skin cancer treatment at Lausanne University Hospital (CHUV) in Switzerland, performed with the Oriatron eRT6 prototype from PMB-Alcen, the French company from which THERYQ originated.
FLASH radiotherapy is currently being used in clinical trials with proton beams, as well as with low-energy electrons, where it remains intrinsically limited to superficial treatments. Treating deep-seated tumours with FLASH requires more highly penetrating beams. And while the most obvious option would be to use photons, it’s extremely difficult to produce an X-ray beam with a high enough dose rate to induce the FLASH effect without excessive heat generation destroying the conversion target.
“It’s easier to produce a high dose-rate electron beam for FLASH than trying to [perform FLASH] with X-rays, as you use the electron beam directly to treat the patient,” Curtoni explains. “The possibility to treat deep-seated tumours with high-energy electron beams compensates for the fact that you can’t use X-rays.”
Panaino points out that in addition to high dose rates, FLASH radiotherapy also relies on various interdependent parameters. “Ideally, to induce the FLASH effect, the beam should be pulsed at a frequency of about 100 Hz, the dose-per-pulse should be 1 Gy or above, and the dose rate within the pulse should be higher than 10⁶ Gy/s,” she explains.
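A quick consistency check using the figures Panaino quotes: the time-averaged dose rate is the repetition rate multiplied by the dose per pulse, while the intra-pulse dose rate depends on the pulse length (a microsecond-scale value is assumed here purely for illustration):

$$\bar{D} = f_{\mathrm{rep}} \times D_{\mathrm{pulse}} = 100\ \mathrm{Hz} \times 1\ \mathrm{Gy} = 100\ \mathrm{Gy/s}, \qquad \dot{D}_{\mathrm{pulse}} \approx \frac{1\ \mathrm{Gy}}{1\ \mu\mathrm{s}} = 10^{6}\ \mathrm{Gy/s},$$

consistent with the greater-than-100 Gy/s FLASH threshold and the intra-pulse figure quoted above.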
Into the clinic
THERYQ is using its VHEE expertise to develop a clinical FLASH radiotherapy system called FLASHDEEP, which will use electrons at energies of 100 to 200 MeV to treat tumours at depths of up to 20 cm. The first FLASHDEEP systems will be installed at CHUV (which is part of a consortium with CERN and THERYQ) and at the Gustave Roussy cancer centre in France.
“We are trying to introduce FLASH into the clinic, so we have a prototype FLASHKNiFE machine that allows us to perform low-energy, 6 and 9 MeV, electron therapy,” says Charlotte Robert, head of the medical physics department research group at Gustave Roussy. “The first clinical trials using low-energy electrons are all on skin tumours, aiming to show that we can safely decrease the number of treatment sessions.”
While these initial studies are limited to skin lesions, clinical implementation of the FLASHDEEP system will extend the benefits of FLASH to many more tumour sites. Robert predicts that VHEE-based FLASH will prove most valuable for treating radioresistant cancers that cannot currently be cured. The rationale is that FLASH’s ability to spare normal tissue will allow delivery of higher target doses without increasing toxicity.
“You will not use this technology for diseases that can already be cured, at least initially,” she explains. “The first clinical trial, I’m quite sure, will be either glioblastoma or pancreatic cancers that are not effectively controlled today. If we can show that VHEE FLASH can spare normal tissue more than conventional radiotherapy can, we hope this will have a positive impact on lesion response.”
“There are a lot of technological challenges around this technology and we are trying to tackle them all,” Curtoni concludes. “The ultimate goal is to produce a VHEE accelerator with a very compact beamline that makes this technology and FLASH a reality for a clinical environment.”
Brain–computer interfaces (BCIs) enable the flow of information between the brain and an external device such as a computer, smartphone or robotic limb. Applications range from use in augmented and virtual reality (AR and VR), to restoring function to people with neurological disorders or injuries.
Electroencephalography (EEG)-based BCIs use sensors on the scalp to noninvasively record electrical signals from the brain and decode them to determine the user’s intent. Currently, however, such BCIs require bulky, rigid sensors that prevent use during movement and don’t work well with hair on the scalp, which affects the skin–electrode impedance. A team headed up at Georgia Tech’s WISH Center has overcome these limitations by creating a brain sensor that’s small enough to fit between strands of hair and is stable even while the user is moving.
“This BCI system can find wide applications. For example, we can realize a text spelling interface for people who can’t speak,” says W Hong Yeo, Harris Saunders Jr Professor at Georgia Tech and director of the WISH Center, who co-led the project with Tae June Kang from Inha University in Korea. “For people who have movement issues, this BCI system can offer connectivity with human augmentation devices, a wearable exoskeleton, for example. Then, using their brain signals, we can detect the user’s intentions to control the wearable system.”
A tiny device
The microscale brain sensor comprises a cross-shaped structure of five microneedle electrodes, with sharp tips (less than 30°) that penetrate the skin easily with nearly pain-free insertion. The researchers used UV replica moulding to create the array, followed by femtosecond laser cutting to shape it to the required dimensions – just 850 x 1000 µm – to fit into the space between hair follicles. They then coated the microsensor with a highly conductive polymer (PEDOT:Tos) to enhance its electrical conductivity.
Between the hairs The size and lightweight design of the sensor significantly reduces motion artefacts. (Courtesy: W Hong Yeo)
The microneedles capture electrical signals from the brain and transmit them along ultrathin serpentine wires that connect to a miniaturized electronics system on the back of the neck. The serpentine interconnector stretches as the skin moves, isolating the microsensor from external vibrations and preventing motion artefacts. The miniaturized circuits then wirelessly transmit the recorded signals to an external system (AR glasses, for example) for processing and classification.
Yeo and colleagues tested the performance of the BCI using three microsensors inserted into the scalp over the occipital lobe (the brain’s visual processing centre). The BCI exhibited excellent stability, offering high-quality measurement of neural signals – steady-state visual evoked potentials (SSVEPs) – for up to 12 h, while maintaining a low contact impedance density (0.03 kΩ/cm²).
The team also compared the quality of EEG signals measured using the microsensor-based BCI with those obtained from conventional gold-cup electrodes. Participants wearing both sensor types closed and opened their eyes while standing, walking or running.
With the participant standing still, both electrode types recorded stable EEG signals, with an increased amplitude upon closing the eyes due to the rise in alpha-wave power. During motion, however, the EEG time series recorded with the conventional electrodes showed noticeable fluctuations. The microsensor measurements, on the other hand, exhibited minimal fluctuations while walking and significantly fewer fluctuations than the gold-cup electrodes while running.
Overall, the alpha wave power recorded by the microsensors during eye-closing was higher than that of the conventional electrode, which could not accurately capture EEG signals while the user was running. The microsensors only exhibited minor motion artefacts, with little to no impact on the EEG signals in the alpha band, allowing reliable data extraction even during excessive motion.
Real-world scenario
Next, the team showed how the BCI could be used within everyday activities – such as making calls or controlling external devices – that require a series of decisions. The BCI enables a user to make these decisions using their thoughts, without needing physical input such as a keyboard, mouse or touchscreen. And the new microsensors free the user from environmental and movement constraints.
The researchers demonstrated this approach in six subjects wearing AR glasses and a microsensor-based EEG monitoring system. They performed experiments with the subjects standing, walking or running on a treadmill, with two distinct visual stimuli from the AR system used to induce SSVEP responses. Using a train-free SSVEP classification algorithm, the BCI determined which stimulus the subject was looking at with a classification accuracy of 99.2%, 97.5% and 92.5%, while standing, walking and running, respectively.
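The article does not spell out which train-free algorithm the team used. Canonical correlation analysis (CCA) against sine–cosine templates at each stimulus frequency is a common train-free approach to SSVEP decoding; the sketch below uses a hypothetical sampling rate, stimulus frequencies and channel count, and is illustrative rather than a reconstruction of the team’s code.

import numpy as np
from sklearn.cross_decomposition import CCA

def reference_signals(freq, n_samples, fs, n_harmonics=2):
    """Sine/cosine templates at the stimulus frequency and its harmonics."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)                # shape (n_samples, 2*n_harmonics)

def classify_ssvep(eeg, stim_freqs, fs):
    """Return the stimulus frequency whose templates correlate best with the EEG.
    eeg: array of shape (n_samples, n_channels) from the occipital microsensors."""
    scores = []
    for f in stim_freqs:
        templates = reference_signals(f, eeg.shape[0], fs)
        cca = CCA(n_components=1)
        eeg_c, ref_c = cca.fit_transform(eeg, templates)
        scores.append(abs(np.corrcoef(eeg_c[:, 0], ref_c[:, 0])[0, 1]))
    return stim_freqs[int(np.argmax(scores))]

# Toy example: three channels, 2 s of data at an assumed 250 Hz, two AR stimuli
fs, stim_freqs = 250, [10.0, 12.0]              # hypothetical values
rng = np.random.default_rng(0)
t = np.arange(2 * fs) / fs
eeg = 0.5 * rng.standard_normal((2 * fs, 3))
eeg[:, 0] += np.sin(2 * np.pi * 12.0 * t)       # simulate a 12 Hz SSVEP response
print(classify_ssvep(eeg, stim_freqs, fs))      # should typically print 12.0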
The team also developed an AR-based video call system controlled by EEG, which allows users to manage video calls (rejecting, answering and ending) with their thoughts, demonstrating its use during scenarios such as ascending and descending stairs and navigating hallways.
“By combining BCI and AR, this system advances communication technology, offering a preview of the future of digital interactions,” the researchers write. “Additionally, this system could greatly benefit individuals with mobility or dexterity challenges, allowing them to utilize video calling features without physical manipulation.”
With so much turmoil in the world at the moment, it’s always great to meet enthusiastic physicists celebrating all that their subject has to offer. That was certainly the case when I travelled with my colleague Tami Freeman to the 2025 Celebration of Physics at Nottingham Trent University (NTU) on 10 April.
Organized by the Institute of Physics (IOP), which publishes Physics World, the event was aimed at “physicists, creative thinkers and anyone interested in science”. It also featured some of the many people who won IOP awards last year, including Nick Stone from the University of Exeter, who was awarded the 2024 Rosalind Franklin medal and prize.
Stone was honoured for his “pioneering use of light for diagnosis and therapy in healthcare”, including “developing novel Raman spectroscopic tools and techniques for rapid in vivo cancer diagnosis and monitoring”. Speaking in a Physics World Live chat, Stone explained why Raman spectroscopy is such a useful technique for medical imaging.
Nottingham is, of course, a city famous for medical imaging, thanks in particular to the University of Nottingham Nobel laureate Peter Mansfield (1933–2017), who pioneered magnetic resonance imaging (MRI). In an entertaining talk, Rob Morris from NTU explained how MRI is also crucial for imaging foodstuffs, helping the food industry to boost productivity, reduce waste – and make tastier pork pies.
Still on the medical theme, Niall Holmes from Cerca Magnetics, which was spun out from the University of Nottingham, explained how his company has developed wearable magnetoencephalography (MEG) sensors that can measure the magnetic fields generated by neuronal firing in the brain. In 2023 Cerca won one of the IOP’s business and innovation awards.
Richard Friend from the University of Cambridge, who won the IOP’s top Isaac Newton medal and prize, discussed some of the many recent developments that have followed from his seminal 1990 discovery that semiconducting polymers can be used in light-emitting diodes (LEDs).
The event ended with a talk from particle physicist Tara Shears from the University of Liverpool, who outlined some of the findings of the new IOP report Physics and AI, to which she was an adviser. Based on a survey with 700 responses and a workshop with experts from academia and industry, the report concludes that physics doesn’t only benefit from AI – but underpins it too.
I’m sure AI will be good for physics overall, but I hope it never removes the need for real-life meetings like the Celebration of Physics.
Researchers from the Institute of Physics of the Chinese Academy of Sciences have produced the first two-dimensional (2D) sheets of metal. At just angstroms thick, these metal sheets could be an ideal system for studying the fundamental physics of the quantum Hall effect, 2D superfluidity and superconductivity, topological phase transitions and other phenomena that feature tight quantum confinement. They might also be used to make novel electronic devices such as ultrathin low-power transistors, high-frequency devices and transparent displays.
Since the discovery of graphene – a 2D sheet of carbon just one atom thick – in 2004, hundreds of other 2D materials have been fabricated and studied. In most of these, layers of covalently bonded atoms are separated by gaps. The presence of these gaps means that neighbouring layers are held together only by weak van der Waals (vdW) interactions, making it relatively easy to “shave off” single layers to make 2D sheets.
Making atomically thin metals would expand this class of technologically important structures. However, because each atom in a metal is strongly bonded to surrounding atoms in all directions, thinning metal sheets to this degree has proved difficult. Indeed, many researchers thought it might be impossible.
Melting and squeezing pure metals
The technique developed by Guangyu Zhang, Luojun Du and colleagues involves heating powders of pure metals between two monolayer-MoS2/sapphire vdW anvils. The team used MoS2/sapphire because both materials are atomically flat and lack dangling bonds that could react with the metals. They also have high Young’s moduli, of 430 GPa and 300 GPa respectively, meaning they can withstand extremely high pressures.
Once the metal powders melted into a droplet, the researchers applied a pressure of 200 MPa. They then continued this “vdW squeezing” until the opposite sides of the anvils cooled to room temperature and 2D sheets of metal formed.
The team produced five atomically thin 2D metals using this technique. The thinnest, at around 6.3 Å, was bismuth, followed by tin (~5.8 Å), lead (~7.5 Å), indium (~8.4 Å) and gallium (~9.2 Å).
“Arduous explorations”
Zhang, Du and colleagues started this project around 10 years ago after they decided it would be interesting to work on 2D materials other than graphene and its layered vdW cousins. At first, they had little success. “Since 2015, we tried out a host of techniques, including using a hammer to thin a metal foil – a technique that we borrowed from gold foil production processes – all to no avail,” Du recalls. “We were not even able to make micron-thick foils using these techniques.”
After 10 years of what Du calls “arduous explorations”, the team finally moved a crucial step forward by developing the vdW squeezing method.
Writing in Nature, the researchers say that the five 2D metals they’ve realized so far are just the “tip of the iceberg” for their method. They now intend to increase this number. “In terms of novel properties, there is still a knowledge gap in the emerging electrical, optical, magnetic properties of 2D metals, so it would be nice to see how these materials behave physically as compared to their bulk counterparts thanks to 2D confinement effects,” says Zhang. “We would also like to investigate to what extent such 2D metals could be used for specific applications in various technological fields.”
A proposed experiment that would involve trapping atoms on a two-layered laser grid could be used to study the mechanism behind high-temperature superconductivity. Developed by physicists in Germany and France led by Henning Schlömer, the new technique could revolutionize our understanding of the phenomenon.
Superconductivity is a phenomenon characterized by an abrupt drop to zero of electric resistance when certain materials are cooled below a critical temperature. It has remained in the physics zeitgeist for over a hundred years and continues to puzzle contemporary physicists. While scientists have a good understanding of “conventional” superconductors (which tend to have low critical temperatures), the physics of high-temperature superconductors remains poorly understood. A deeper understanding of the mechanisms responsible for high-temperature superconductivity could unveil the secrets behind macroscopic quantum phenomena in many-body systems.
Mimicking real crystalline materials
Optical lattices have emerged as a powerful tool for studying such many-body quantum systems. Here, two counter-propagating laser beams overlap to create a standing wave. Extending this into two dimensions creates a grid (or lattice) of potential-energy minima where atoms can be trapped. The interactions between these trapped atoms can then be tuned to mimic real crystalline materials, giving researchers an unprecedented ability to study their properties.
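For two orthogonal retro-reflected beams of wavelength λ, the trapping potential takes the textbook form

$$V(x,y) = V_0\left[\sin^2(kx) + \sin^2(ky)\right], \qquad k = \frac{2\pi}{\lambda},$$

so the potential-energy minima form a square grid with lattice spacing λ/2, and the depth V₀ is set by the laser intensity and detuning.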
Superconductivity is characterized by the formation of long-range correlations between electron pairs. While the electronic properties of high-temperature superconductors can be studied in the lab, it can be difficult to test hypotheses because the properties of each superconductor are fixed. In contrast, correlations between atoms in an optical lattice can be tuned, allowing different models and parameters to be explored.
Henning Schlömer (left) and Hannah Lange The Ludwig Maximilian University of Munich PhD students collaborated on the proposal. (Courtesy: Henning Schlömer/Hannah Lange)
This could be done by trapping fermionic atoms (analogous to electrons in a superconducting material) in an optical lattice and enabling them to form pair correlations. However, this has proved to be challenging because these correlations only occur at very low temperatures that are experimentally inaccessible. Measuring these correlations presents an additional challenge of adding or removing atoms at specific sites in the lattice without disturbing the overall lattice state. But now, Schlömer and colleagues propose a new protocol to overcome these challenges.
The proposal
The researchers propose trapping fermionic atoms on a two-layered lattice. By introducing a potential-energy offset between the two layers, they ensure that the atoms can only move within a layer, with no hopping between layers. They enable magnetic interactions between the two layers, allowing the atoms to form spin correlations such as singlets, where paired atoms always have opposing spins. The dynamics of such interlayer correlations give rise to superconducting behaviour.
This system is modelled using a “mixed-dimensional bilayer” (MBD) model. It accounts for three phenomena: the hopping of atoms between lattice sites within a layer; the magnetic (spin) interaction between the atoms of the two layers; and the magnetic interactions within the atoms of a layer.
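Schematically – the authors’ exact notation and couplings may differ – such a mixed-dimensional bilayer Hamiltonian combines the three terms just listed:

$$H = -t \sum_{\langle i,j\rangle,\,\ell,\,\sigma} \left( c^{\dagger}_{i\ell\sigma} c_{j\ell\sigma} + \mathrm{h.c.} \right) \;+\; J_{\perp} \sum_{i} \mathbf{S}_{i,1}\cdot\mathbf{S}_{i,2} \;+\; J_{\parallel} \sum_{\langle i,j\rangle,\,\ell} \mathbf{S}_{i\ell}\cdot\mathbf{S}_{j\ell},$$

where t is the intralayer hopping amplitude, J⊥ the interlayer spin coupling, J∥ the intralayer spin coupling and ℓ = 1, 2 labels the layers; the potential-energy offset between the layers is what suppresses interlayer hopping.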
Numerical simulations of the MBD model suggest the occurrence of superconductor-like behaviour in optical lattices at critical temperatures much higher than those of traditional models. These temperatures are readily accessible in experiments.
To measure the correlations, one needs to track pair formation in the lattice. One way to track pairs is to add or remove atoms from the lattice without disturbing the overall lattice state; however, this is experimentally infeasible. Instead, the researchers propose doping the energetically higher layer with holes – that is, removing atoms to create vacant sites – and doping the energetically lower layer with doublons, which are atom pairs that occupy a single lattice site. The potential offset between the two layers can then be tuned to enable controlled interactions between the doublons and holes. This would allow researchers to study pair formation via this interaction, rather than having to add or remove atoms at specific lattice sites.
Clever mathematical trick
To study superconducting correlations in the doped system, the researchers employ a clever mathematical trick. Using a mathematical transformation, they map the model onto an equivalent one described by only “hole-type” dopants, without changing the underlying physics. This allows them to map superconducting correlations onto density correlations, which can be routinely accessed in existing experiments.
With their proposal, Schlömer and colleagues are able both to prepare the optical lattice in a state where superconducting behaviour occurs at experimentally accessible temperatures, and to study this behaviour by measuring pair formation.
When asked about possible experimental realizations, Schlömer is optimistic: “While certain subtleties remain to be addressed, the technology is already in place – we expect it will become experimental reality in the near future”.
Imagine, if you will, that you are a quantum system. Specifically, you are an unstable quantum system – one that would, if left to its own devices, rapidly decay from one state (let’s call it “awake”) into another (“asleep”). But whenever you start to drift into the “asleep” state, something gets in the way. Maybe it’s a message pinging on your phone. Maybe it’s a curious child peppering you with questions. Whatever it is, it jolts you out of your awake–asleep superposition and projects you back into wakefulness. And because it keeps happening faster than you can fall asleep, you remain awake, diverted from slumber by a stream of interruptions – or, in quantum terms, measurements.
This phenomenon of repeated measurements “freezing” an unstable quantum system into a particular state is known as the quantum Zeno effect (figure 1). Named after a paradox from ancient Greek philosophy, it was hinted at in the 1950s by the scientific polymaths Alan Turing and John von Neumann but only fully articulated in 1977 by the physicists Baidyanath Misra and George Sudarshan (J. Math. Phys. 18 756). Since then, researchers have observed it in dozens of quantum systems, including trapped ions, superconducting flux qubits and atoms in optical cavities. But the apparent ubiquity of the quantum Zeno effect cannot hide the strangeness at its heart. How does the simple act of measuring a quantum system have such a profound effect on its behaviour?
A watched quantum pot
“When you come across it for the first time, you think it’s actually quite amazing because it really shows that the measurement in quantum mechanics influences the system,” says Daniel Burgarth, a physicist at the Friedrich-Alexander-Universität in Erlangen-Nürnberg, Germany, who has done theoretical work on the quantum Zeno effect.
Giovanni Barontini, an experimentalist at the University of Birmingham, UK, who has studied the quantum Zeno effect in cold atoms, agrees. “It doesn’t have a classical analogue,” he says. “I can watch a classical system doing something forever and it will continue doing it. But a quantum system really cares if it’s watched.”
1 A watched quantum pot
(Illustration courtesy: Mayank Shreshtha; Zeno image public domain; Zeno crop CC BY S Perquin)
Applying heat to a normal, classical pot of water will cause it to evolve from state 1 (not boiling) to state 2 (boiling) at the same rate regardless of whether anyone is watching it (even if it doesn’t seem like it). In the quantum world, however, a system that would normally evolve from one state to the other if left unobserved (blindfolded Zeno) can be “frozen” in place by repeated frequent measurements (eyes-open Zeno).
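The standard textbook argument for this freezing is a short-time expansion of the survival probability. For evolution under a Hamiltonian H, the probability that a system prepared in state |ψ⟩ is still found there after a short time falls off quadratically rather than linearly, so interrupting a total evolution time T with N ideal projective measurements gives

$$P_{\mathrm{survive}}(T) \approx \left[1 - \left(\frac{T}{N\,\tau_Z}\right)^{2}\right]^{N} \;\xrightarrow{\;N\to\infty\;}\; 1, \qquad \tau_Z = \frac{\hbar}{\sqrt{\langle\psi|H^{2}|\psi\rangle - \langle\psi|H|\psi\rangle^{2}}},$$

where τ_Z is the so-called Zeno time. The more frequently the system is interrogated, the closer its survival probability stays to one – a general result, not something specific to any one experiment described here.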
For the physicists who laid the foundations of quantum mechanics a century ago, any connection between measurement and outcome was a stumbling block. Several tried to find ways around it, for example by formalizing a role for observers in quantum wavefunction collapse (Niels Bohr and Werner Heisenberg); introducing new “hidden” variables (Louis de Broglie and David Bohm); and even hypothesizing the creation of new universes with each measurement (the “many worlds” theory of Hugh Everett).
But none of these solutions proved fully satisfactory. Indeed, the measurement problem seemed so intractable that most physicists in the next generation avoided it, preferring the approach sometimes described – not always pejoratively – as “shut up and calculate”.
Today’s quantum physicists are different. Rather than treating what Barontini calls “the apotheosis of the measurement effect” as a barrier to overcome or a triviality to ignore, they are doing something few of their forebears could have imagined. They are turning the quantum Zeno effect into something useful.
Noise management
To understand how freezing a quantum system by measuring it could be useful, consider a qubit in a quantum computer. Many quantum algorithms begin by initializing qubits into a desired state and keeping them there until they’re required to perform computations. The problem is that quantum systems seldom stay where they’re put. In fact, they’re famously prone to losing their quantum nature (decohering) at the slightest disturbance (noise) from their environment. “Whenever we build quantum computers, we have to embed them in the real world, unfortunately, and that real world causes nothing but trouble,” Burgarth says.
Quantum scientists have many strategies for dealing with environmental noise. Some of these strategies are passive, such as cooling superconducting qubits with dilution refrigerators and using electric and magnetic fields to suspend ionic and atomic qubits in a vacuum. Others, though, are active. They involve, in effect, tricking qubits into staying in the states they’re meant to be in, and out of the states they’re not.
The quantum Zeno effect is one such trick. “The way it works is that we apply a sequence of kicks to the system, and we are actually rotating the qubit with each kick,” Burgarth explains. “You’re rotating the system, and then effectively the environment wants to rotate it in the other direction.” Over time, he adds, these opposing rotations average out, protecting the system from noise by freezing it in place.
Quantum state engineering
While noise mitigation is useful, it’s not the quantum Zeno application that interests Burgarth and Barontini the most. The real prize, they agree, is something called quantum state engineering, which is much more complex than simply preventing a quantum system from decaying or rotating.
The source of this added complexity is that real quantum systems – much like real people – usually have more than two states available to them. For example, the set of permissible “awake” states for a person – the Hilbert space of wakefulness, let’s call it – might include states such as cooking dinner, washing dishes and cleaning the bathroom. The goal of quantum state engineering is to restrict this state-space so the system can only occupy the state(s) required for a particular application.
As for how the quantum Zeno effect does this, Barontini explains it by referring to Zeno’s original, classical paradox. In the fifth century BCE, the philosopher Zeno of Elea posed a conundrum based on an arrow flying through the air. If you look at this arrow at any possible moment during its flight, you will find that in that instant, it is motionless. Yet somehow, the arrow still moves. How?
In the quantum version, Barontini explains, looking at the arrow freezes it in place. But that isn’t the only thing that happens. “The funniest thing is that if I look somewhere, then the arrow cannot go where I’m looking,” he says. “It will have to go around it. It will have to modify its trajectory to go outside my field of view.”
By shaping this field of view, Barontini continues, physicists can shape the system’s behaviour. As an example, he cites work by Serge Haroche, who shared the 2012 Nobel Prize for Physics with another notable quantum Zeno experimentalist, David Wineland.
In 2014 Haroche and colleagues at the École Normale Supérieure (ENS) in Paris, France, sought to control the dynamics of an electron within a so-called Rydberg atom. In this type of atom, the outermost electron is very weakly bound to the nucleus and can occupy any of several highly excited states.
The researchers used a microwave field to divide 51 of these highly excited Rydberg states into two groups, before applying radio-frequency pulses to the system. Normally, these pulses would cause the electron to hop between states. However, the continual “measurement” supplied by the microwave field meant that although the electron could move within either group of states, it could not jump from one group to the other. It was stuck – or, more precisely, it was in a special type of quantum superposition known as a Schrödinger cat state.
Restricting the behaviour of an electron might not sound very exciting in itself. But in this and other experiments, Haroche and colleagues showed that imposing such restrictions brings forth a slew of unusual quantum states. It’s as if telling the system what it can’t do forces it to do a bunch of other things instead, like a procrastinator who cooks dinner and washes dishes to avoid cleaning the bathroom. “It really enriches your quantum toolbox,” explains Barontini. “You can generate an entangled state that is more entangled or methodologically more useful than other states you could generate with traditional means.”
Just what is a measurement, anyway?
As well as generating interesting quantum states, the quantum Zeno effect is also shedding new light on the nature of quantum measurements. The question of what constitutes a “measurement” for quantum Zeno purposes turns out to be surprisingly broad. This was elegantly demonstrated in 2014, when physicists led by Augusto Smerzi at the Università di Firenze, Italy, showed that simply shining a resonant laser at their quantum system (figure 2) produced the same quantum Zeno dynamics as more elaborate “projective” measurements – which in this case involved applying pairs of laser pulses to the system at frequencies tailored to specific atomic transitions. “It’s fair to say that almost anything causes a Zeno effect,” says Burgarth. “It’s a very universal and easy-to-trigger phenomenon.”
2 Experimental realization of quantum Zeno dynamics
(First published in Nature Commun. 5 3194. Reproduced with permission from Springer Nature)
The energy level structure of a population of ultracold 87Rb atoms, evolving in a five-level Hilbert space given by the five spin orientations of the F=2 hyperfine ground state. An applied RF field (red arrows) couples neighbouring quantum states together and allows atoms to “hop” between states. Normally, atoms initially placed in the |F, mF> = |2,2> state would cycle between this state and the other four F=2 states in a process known as Rabi oscillation. However, by introducing a “measurement” – shown here as a laser beam (green arrow) resonant with the transition between the |1,0> state and the |2,0> state – Smerzi and colleagues drastically changed the system’s dynamics, forcing the atoms to oscillate between just the |2,2> and |2,1> states (represented by up and down arrows on the so-called Bloch sphere at right). An additional laser beam (orange arrow) and the detector D were used to monitor the system’s evolution over time.
Other research has broadened our understanding of what measurement can do. While the quantum Zeno effect uses repeated measurements to freeze a quantum system in place (or at least slow its evolution from one state to another), it is also possible to do the opposite and use measurements to accelerate quantum transitions. This phenomenon is known as the quantum anti-Zeno effect, and it has applications of its own. It could, for example, speed up reactions in quantum chemistry.
Over the past 25 years or so, much work has gone into understanding where the ordinary quantum Zeno effect leaves off and the quantum anti-Zeno effect begins. Some systems can display both Zeno and anti-Zeno dynamics, depending on the frequency of the measurements and various environmental conditions. Others seem to favour one over the other.
But regardless of which version turns out to be the most important, quantum Zeno research is anything but frozen in place. Some 2500 years after Zeno posed his paradox, his intellectual descendants are still puzzling over it.
With increased water scarcity and global warming looming, electrochemical technology offers low-energy mitigation pathways via desalination and carbon capture. This webinar will demonstrate how the less than 5 molar solid-state concentration swings afforded by cation intercalation materials – used originally in rocking-chair batteries – can effect desalination using Faradaic deionization (FDI). We show how the salt depletion/accumulation effect – that plagues Li-ion battery capacity under fast charging conditions – is exploited in a symmetric Na-ion battery to achieve seawater desalination, exceeding by an order of magnitude the limits of capacitive deionization with electric double layers. While initial modeling that introduced such an architecture blazed the trail for the development of new and old intercalation materials in FDI, experimental demonstration of seawater-level desalination using Prussian blue analogs required cell engineering to overcome the performance-degrading processes that are unique to the cycling of intercalation electrodes in the presence of flow, leading to innovative embedded, micro-interdigitated flow fields with broader application toward fuel cells, flow batteries, and other flow-based electrochemical devices. Similar symmetric FDI architectures using proton intercalation materials are also shown to facilitate direct-air capture of carbon dioxide with unprecedentedly low energy input by reversibly shifting pH within aqueous electrolyte.
Kyle Smith
Kyle C Smith joined the faculty of Mechanical Science and Engineering at the University of Illinois Urbana-Champaign (UIUC) in 2014 after completing his PhD in mechanical engineering (Purdue, 2012) and his post-doc in materials science and engineering (MIT, 2014). His group uses understanding of flow, transport, and thermodynamics in electrochemical devices and materials to innovate toward separations, energy storage, and conversion. For his research he was awarded the 2018 ISE-Elsevier Prize in Applied Electrochemistry of the International Society of Electrochemistry and the 2024 Dean’s Award for Early Innovation as an associate professor by UIUC’s Grainger College. Among his 59 journal papers and 14 patents and patents pending, his work that introduced Na-ion battery-based desalination using porous electrode theory [Smith and Dmello, J. Electrochem. Soc., 163, p. A530 (2016)] was among the top ten most downloaded in the Journal of the Electrochemical Society for five months in 2016. His group was also the first to experimentally demonstrate seawater-level salt removal using this approach [Do et al., Energy Environ. Sci., 16, p. 3025 (2023); Rahman et al., Electrochimica Acta, 514, p. 145632 (2025)], introducing flow fields embedded in electrodes to do so.
A model that could help explain how heavy elements are forged within collapsing stars has been unveiled by Matthew Mumpower at Los Alamos National Laboratory and colleagues in the US. The team suggests that energetic photons generated by newly forming black holes or neutron stars transmute protons within ejected stellar material into neutrons, thereby providing ideal conditions for heavy elements to form.
Astrophysicists believe that elements heavier than iron are created in violent processes such as the explosions of massive stars and the mergers of neutron stars. One way that this is thought to occur is the rapid neutron-capture process (r-process), whereby lighter nuclei created in stars capture neutrons in rapid succession. However, exactly where the r-process occurs is not well understood.
As Mumpower explains, the r-process must be occurring in environments where free neutrons are available in abundance. “But there’s a catch,” he says. “Free neutrons are unstable and decay in about 15 min. Only a few places in the universe have the right conditions to create and use these neutrons quickly enough. Identifying those places has been one of the toughest open questions in physics.”
Intense flashes of light
In their study, Mumpower’s team – which included researchers from the Los Alamos and Argonne national laboratories – looked at how lots of neutrons could be created within massive stars that are collapsing to become neutron stars or black holes. Their idea focuses on the intense flashes of light that are known to be emitted from the cores of these objects.
This radiation is emitted at wavelengths across the electromagnetic spectrum – including highly energetic gamma rays. Furthermore, the light is emitted along a pair of narrow jets, which blast outward above each pole of the star’s collapsing core. As they form, these jets plough through the envelope of stellar material surrounding the core, which had been previously ejected by the star. This is believed to create a “cocoon” of hot, dense material surrounding each jet.
In this environment, Mumpower’s team suggest that energetic photons in a jet collide with protons to create a neutron and a pion. Since these neutrons have no electrical charge, many of them could diffuse into the cocoon, providing ideal conditions for the r-process to occur.
To test their hypothesis, the researchers carried out detailed computer simulations to predict the number of free neutrons entering the cocoon due to this process.
Gold and platinum
“We found that this light-based process can create a large number of neutrons,” Mumpower says. “There may be enough neutrons produced this way to build heavy elements, from gold and platinum all the way up to the heaviest elements in the periodic table – and maybe even beyond.”
If their model is correct, it suggests that the origin of some heavy elements involves processes associated with the high-energy particle physics studied at facilities like the Large Hadron Collider.
“This process connects high-energy physics – which usually focuses on particles like quarks – with low-energy astrophysics – which studies stars and galaxies,” Mumpower says. “These are two areas that rarely intersect in the context of forming heavy elements.”
Kilonova explosions
The team’s findings also shed new light on some other astrophysical phenomena. “Our study offers a new explanation for why certain cosmic events, like long gamma-ray bursts, are often followed by kilonova explosions – the glow from the radioactive decay of freshly made heavy elements,” Mumpower continues. “It also helps explain why the pattern of heavy elements in old stars across the galaxy looks surprisingly similar.”
The findings could also improve our understanding of the chemical makeup of deep-sea deposits on Earth. The presence of both iron and plutonium in this material suggests that both elements may have been created in the same type of event, before coalescing into the newly forming Earth.
For now, the team will aim to strengthen their model through further simulations – which could better reproduce the complex, dynamic processes taking place as massive stars collapse.
US universities are in the firing line of the Trump administration, which is seeking to revoke the visas of foreign students, threatening to withdraw grants and demanding control over academic syllabuses. “The voice of science must not be silenced,” the letter writers say. “We all benefit from science, and we all stand to lose if the nation’s research enterprise is destroyed.”
Particularly hard hit are the country’s eight Ivy League universities, which have been accused of downplaying antisemitism exhibited in campus demonstrations in support of Gaza. Columbia University in New York, for example, has been trying to regain $400m in federal funds that the Trump administration threatened to cancel.
Columbia initially reached an agreement with the government on issues such as banning facemasks on its campus and taking control of its department responsible for courses on the Middle East. But on 8 April, according to reports, the National Institutes of Health, under orders from the Department of Health and Human Services, blocked all of its grants to Columbia.
Harvard University, meanwhile, has announced plans to privately borrow $750m after the Trump administration announced that it would review $9bn in the university’s government funding. Brown University in Rhode Island faces a loss of $510m, while the government has suspended several dozen research grants for Princeton University.
The administration also continues to oppose the use of diversity, equity and inclusion (DEI) programmes in universities. The University of Pennsylvania, from which Donald Trump graduated, faces the suspension of $175m in grants for offences against the government’s DEI policy.
Brain drain
Researchers in medical and social sciences are bearing the brunt of government cuts, with physics departments seeing relatively little impact on staffing and recruitment so far. “Of course we are concerned,” Peter Littlewood, chair of the University of Chicago’s physics department, told Physics World. “Nonetheless, we have made a deliberate decision not to halt faculty recruiting and stand by all our PhD offers.”
David Hsieh, executive officer for physics at California Institute of Technology, told Physics World that his department has also not taken any action so far. “I am sure that each institution is preparing in ways that make the most sense for them,” he says. “But I am not aware of any collective response at the moment.”
Yet universities are already bracing themselves for a potential brain drain. “The faculty and postdoc market is international, and the current sentiment makes the US less attractive for reasons beyond just finance,” warns Littlewood at Chicago.
That sentiment is echoed by Maura Healey, governor of Massachusetts, who claims that Europe, the Middle East and China are already recruiting the state’s best and brightest. “[They’re saying] we’ll give you a lab; we’ll give you staff. We’re giving away assets to other countries instead of training them, growing them [and] supporting them here.”
Science agencies remain under pressure too. The Department of Government Efficiency, run by Elon Musk, has already ended $420m in “unneeded” NASA contracts. The administration aims to cut this year’s National Science Foundation (NSF) construction budget, with data indicating that the agency has roughly halved its number of new grants since Trump became president.
Yet a threat to reduce the percentage of ancillary costs related to scientific grants appeared to be on hold, at least for now. “NSF awardees may continue to budget and charge indirect costs using either their federally negotiated indirect cost rate agreement or the ‘de minimis’ rate of 15%, as authorized by the uniform guidance and other Federal regulations,” says an NSF spokesperson.
Researchers at the SLAC National Accelerator Laboratory in the US have produced the world’s most powerful ultrashort electron beam to date, concentrating petawatt-level peak powers into femtosecond-long pulses at an energy of 10 GeV and a current of around 0.1 MA. According to officials at SLAC’s Facility for Advanced Accelerator Experimental Tests (FACET-II), the new beam could be used to study phenomena in materials science, quantum physics and even astrophysics that were not accessible before.
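As a quick sanity check on those headline figures – an order-of-magnitude estimate, not a number taken from the SLAC team – the peak power follows from the beam energy per electron and the current:

\[
P_{\rm peak} \sim \frac{E_{\rm beam}}{e}\,I \approx (10~\text{GV}) \times (0.1~\text{MA}) = 10^{15}~\text{W} = 1~\text{PW},
\]

and packing that power into a pulse lasting on the order of a femtosecond corresponds to only about a joule of energy per bunch.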
High-energy electron beams are routinely employed as powerful probes in several scientific fields. To produce them, accelerator facilities like SLAC use strong electric fields to accelerate, focus and compress bunches of electrons. This is not easy, because as electrons are accelerated and compressed, they emit radiation and lose energy, causing the beam’s quality to deteriorate.
An optimally compressed beam
To create their super-compressed ultrashort beam, researchers led by Claudio Emma at FACET-II used a laser to shape the electron bunch’s profile with millimetre-scale precision in the first 10 metres of the accelerator, when the beam’s energy is lowest. They then took this modulated electron beam and boosted its energy by a factor of 100 in a kilometre-long stretch of downstream accelerating cavities. The last step was to compress the beam by a factor of 1000, using magnets to turn the beam’s millimetre-scale features into a micron-scale current spike.
One of the biggest challenges, Emma says, was to optimise the laser-based modulation of the beam in tandem with the accelerating cavity and magnetic fields of the magnets to obtain the optimally compressed beam at the end of the accelerator. “This was a large parameter space to work in with lots of knobs to turn and it required careful iteration before an optimum was found,” Emma says.
Measuring the ultra-short electron bunches was also a challenge. “These are typically so intense that if you intercept them with, for example, scintillating screens (a typical technique used in accelerators to diagnose properties of the beam like its spot size or bunch length), the beam fields are so strong they can melt these screens,” Emma explains. “To overcome this, we had to use a series of indirect measurements (plasma ionisation and beam-based radiation) along with simulations to diagnose just how strongly compressed and powerful these beams were.”
Beam delivery
According to Emma, generating extremely compressed electron beams is one of the most important challenges facing accelerator and beam physicists today. “It was interesting for us to tackle this challenge at FACET-II, which is a facility designed specifically to do this kind of research on extreme beam manipulation,” he says.
The team has already delivered the new high-current beams to experimenters who work on probing and optimising the dynamics of plasma-based accelerators. Further down the line, they anticipate much wider applications. “In the future we imagine that we will attract interest from users in multiple fields, be they materials scientists, strong-field quantum physicists or astrophysicists, who want to use the beam as a strong relativistic ‘hammer’ to study and probe a variety of natural interactions with the unique tool that we can provide,” Emma tells Physics World.
The researchers’ next step will be to increase the beam’s current by another order of magnitude. “This additional leap will require the use of a different plasma-based compression technique, rather than the current laser-based approach, which we hope to demonstrate at FACET-II in the near future,” Emma reveals.
Classical vs quantum Mpemba: a) In the classical strong Mpemba effect (sME), the overlap with the slowest decay mode (SDM) drops as the temperature increases, reaching zero at Ts, the point at which the Mpemba effect is maximized; the overlap then grows as the temperature increases further. In a weak ME, if the overlap between the SDM and the initial high-temperature state TH is smaller than the overlap with the lower-temperature state Tc, the system can reach equilibrium faster. In a strong ME, the overlap between the Ts state and the SDM is zero, and the system reaches equilibrium exponentially faster. With no ME, the overlap between the TH state and the SDM is at a maximum, so the system reaches equilibrium more slowly. b) Paths taken by the system when reaching the stationary state. While systems decaying from the normal or weak ME states take indirect routes to the stationary state, decay from the sME state follows a straight line to equilibrium, implying this is the fastest path. c) Three-energy-level system used to obtain the sME. d) Distance vs time on a logarithmic scale. States |2> and |0> exhibit similarly slow relaxation rates, whereas the sME state exhibits a fast exponential decay rate when reaching equilibrium. (Courtesy: Hui Jing)
Researchers from China, the UK and Singapore have demonstrated for the first time that choosing the right set of initial conditions can speed up the relaxation process in quantum systems. Their experiments using single trapped ions are a quantum analogue of the classical Mpemba effect, in which hot water can, under certain circumstances, cool faster than cold water. By showing that it is possible to exponentially accelerate the relaxation of a pure state into a stationary state – the hallmark of the so-called strong Mpemba effect – they also provide strategies for designing and analysing open quantum systems such as those used in quantum batteries.
In both the classical and the quantum worlds, the difference between the relaxation process of a system in a strong Mpemba effect (sME) state and any other state is that the decay rate of a sME state is greater than the others. This naturally leads to the conclusion that initial conditions influence the speed at which a system will reach equilibrium. However, the mathematics of the quantum and classical sME are different. While in the classical world an open system is described by the Fokker-Planck equation, with the temperature as the key variable, in the quantum world the Lindblad master equation applies, and the energy of the sME state is what matters.
Paths and overlap
To understand why a quantum system in a particular initial state reaches a steady state faster than any other, we should think about the possible paths that a system can take. One key path is known as the slowest decay mode (SDM), which is the path that takes the system the most time to decay. At the other extreme, the fastest relaxation path is the one taken by a system in the sME initial state. This relaxation path must avoid any overlap with the SDM’s path.
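One standard way to make this picture concrete – a generic textbook decomposition rather than the authors’ specific notation – is to expand the relaxing state in the eigenmodes of the Liouvillian that generates the Lindblad dynamics:

\[
\rho(t) = \rho_{\rm ss} + \sum_{k \ge 1} c_k\, e^{\lambda_k t}\, \rho_k, \qquad 0 > \mathrm{Re}\,\lambda_1 \ge \mathrm{Re}\,\lambda_2 \ge \cdots
\]

Here ρ_ss is the stationary state, ρ_1 is the slowest decay mode and the coefficients c_k measure the overlap of the initial state with each mode. A weak Mpemba effect corresponds to a reduced |c_1|, while the strong effect corresponds to preparing an initial state with c_1 = 0, so that relaxation proceeds at the faster rate set by Re λ_2 and equilibrium is reached exponentially sooner.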
Hui Jing, a physicist at China’s Hunan Normal University who co-led the study, points out that a fundamental characteristic of the quantum sME is that its relaxation path includes the so-called Liouvillian exceptional point (LEP). At this point, an eigenvalue of the dynamical generator, which is the Liouvillian superoperator that describes the time evolution of the open quantum system through the Lindblad master equation, changes from real to complex. When the eigenvalue of the SDM is real, the system is successfully prepared in the sME. When the eigenvalue of the SDM acquires an imaginary part, the overlap between the prepared sME state and the SDM is no longer zero. The LEP therefore signals the transition from the strong to the weak Mpemba effect.
Experimental set-up
To create a pure state with zero overlap with the SDM in an open quantum system, Jing and colleagues trapped a 40Ca+ ion and coupled three of its energy levels through laser interactions. The first laser beam, with a wavelength of 729 nm, coupled the ground state to the two excited states, with coupling strengths characterized by Rabi frequencies Ω1 and Ω2. A second, circularly polarized laser beam at 854 nm controlled the decay between the first excited state and the ground state.
By tuning the Rabi frequencies, the researchers were able to explore different relaxation regimes of the system. When the ratio between the Rabi frequencies was much smaller or much larger than its value at the LEP, they observed the sME and the weak ME, respectively. When the ratio equalled the LEP value, the transition from sME to weak ME took place.
This work, which is described in Nature Communications, marks the first experimental realization of the quantum strong Mpemba effect. According to Jing, the team’s methods offer an experimental alternative to existing ways of increasing the ion cooling rate or enhancing the efficiency of quantum batteries. Now, the group plans to study how the quantum Mpemba effect behaves at the LEP, since this point could lead to faster decay rates.
“Fusion is now within reach” and represents “one of the economic opportunities of the century”. Not the words of an optimistic fusion scientist but from Kerry McCarthy, parliamentary under-secretary of state at the UK’s Department for Energy Security and Net Zero.
She was speaking on Tuesday at the inaugural Fusion Fest by Economist Impact. Held in London, the day-long event featured 400 attendees and more than 60 speakers from around the world.
McCarthy outlined several initiatives to keep the UK at the “forefront of fusion”. These include investing £20m in Starmaker One, a £100m endeavour announced in early April to kick-start a UK fusion investment fund.
The usual cliché is that fusion is always 20 years away, a perception perhaps not helped by large international projects such as the ITER experimental fusion reactor currently being built in Cadarache, France, which has struggled with delays and cost hikes.
Yet many delegates at the meeting were optimistic that significant developments are within reach with private firms racing to demonstrate “breakeven” – generating more power out than needed to fuel the reaction. Some expect “a few” private firms to announce breakeven by 2030.
And these aren’t small ventures. Commonwealth Fusion Systems, based in Massachusetts, US, for example, has 1300 people. Yet large international companies are, for the moment, only dipping their toe into the fusion pool.
While some $8bn has already been spent by private firms on fusion, many expect the funding floodgates to open once breakeven has been achieved in a private lab.
Most speakers stated, however, that a figure of about $50–60bn would be needed to make fusion a real endeavour in terms of delivering power to the grid – something that could happen in the 2040s. But it was reiterated throughout the day that fusion must provide energy at a price that consumers would be willing to pay.
On target
It is not only private firms that are making progress. Many will point out that ITER has laid much of the groundwork in terms of fostering a fusion “ecosystem” – a particular buzzword of the day – as demonstrated, in part, by the significant attendance at the event.
In a recent shot, she said, the US National Ignition Facility (NIF) had produced 7 MJ of fusion energy, with about 2 MJ having been delivered to the small capsule target. This represents a gain of about 3.4 – much more than its previous record of 2.4.
NIF, which is based on inertial confinement fusion rather than magnetic confinement, is currently undergoing refurbishment and upgrades. It is hoped that these will increase the energy input to about 2.6 MJ, but gains of between 10 and 15 will need to be demonstrated if the technique is to go anywhere.
Despite the number of fusion firms ballooning from a handful in the early 2010s to some 30 today, the general feeling at the meeting was that only a few will likely go on to build power plants, with the remainder using fusion for other sectors.
The issue is that no-one knows which technology is most likely to succeed, so it is still all to play for.
This episode of the Physics World Weekly podcast features an interview with Panicos Kyriacou, who is chief scientist at the UK-based start-up Crainio. The company has developed a non-invasive way of using light to measure the pressure inside the skull. Knowing this intracranial pressure is crucial when diagnosing traumatic brain injury, which is a leading cause of death and disability. Today, the only way to assess intracranial pressure is to insert a sensor into the patient’s brain, so Crainio’s non-invasive technique could revolutionize how brain injuries are diagnosed and treated.
Kyriacou tells Physics World’s Tami Freeman why it is important to assess a patient’s intracranial pressure as soon as possible after a head injury. He explains how Crainio’s optical sensor measures blood flow in the brain and then uses machine learning to deduce the intracranial pressure.
Kyriacou is also professor of engineering at City St George’s University of London, where the initial research for the sensor was done. He recalls how Crainio was spun out of the university and how it is currently in a second round of clinical trials.
As well as being non-invasive, Crainio’s technology could reduce the cost of determining intracranial pressure and make it possible to make measurements in the field, shortly after injuries occur.
Andrew Martin, skills policy lead at the UK’s Department for Science, Innovation and Technology (DSIT), flashed up a slide. Speaking at the ninth Careers in Quantum event in Bristol last week, he listed the eight skills that the burgeoning quantum-technology sector wants. Five are various branches of engineering, including electrical and electronics, mechanical, software and systems. A sixth is materials science and chemistry, with a seventh being quality control.
Quantum companies, of course, do also want “quantum specialists”, which was the eighth skill identified by Martin. But it’s a sign of how mature the sector has become that being a hotshot quantum physicist is no longer the only route in. That point was underlined by Carlos Faurby, a hardware integration engineer at Sparrow Quantum in Denmark, which makes single-photon sources for quantum computers. “You don’t need a PhD in physics to work at Sparrow,” Faurby declared.
Quantum tech certainly has a plethora of career options, with the Bristol event featuring a selection of firms from across the quantum ecosystem. Some are making prototype quantum computers (Quantum Motion, Quantinuum, Oxford Ionics) or writing the algorithms to run on quantum computers (Phasecraft). Others are building quantum networks (BT, Toshiba), working on quantum error correction (Riverlane) or developing quantum cryptography (KETS Quantum). Businesses building hardware such as controllers and modems were present too.
With the 2025 International Year of Quantum Science and Technology (IYQ) now in full swing, the event underlined just how thriving the sector is, with lots of career choices for physicists – whether you have a PhD or not. But competition to break in is intense. Phasecraft says it gets 50–100 applicants for each student internship it offers, with Riverlane receiving almost 200 applications for two summer placements.
That’s why it’s vital for physics students to develop their “soft skills” – or “professional skills” as several speakers preferred to call them. Team working, project management, collaboration and communication are all essential for jobs in the quantum industry, as indeed they are for all careers. Sadly, many physicists don’t realize soon enough just how crucial soft skills are.
Reflecting on his time at Light Trace Photonics, which he co-founded in 2021, Dominic Sulway joked in a panel discussion that he’d “enjoyed developing all the skills people told me I’d need for my PhD”. Of course, if you really want to break into the sector, why not follow his lead and start a business yourself? It’s a rewarding experience, I was told, and there doesn’t seem to be any slow-down in the number of quantum firms starting up.
For more information on career options for physicists, check out the free-to-read 2025 Physics World Careers guide
A quantum computer has been used for the first time to generate strings of certifiably random numbers. The protocol for doing this, which was developed by a team that included researchers at JPMorganChase and the quantum computing firm Quantinuum, could have applications in areas ranging from lotteries to cryptography – leading Quantinuum to claim it as quantum computing’s first commercial application, though other firms have made similar assertions. Separately, Quantinuum and its academic collaborators used the same trapped-ion quantum computer to explore problems in quantum magnetism and knot theory.
Genuinely random numbers are important in several fields, but classical computers cannot create them. The best they can do is to generate apparently random or “pseudorandom” numbers. Randomness is inherent in the laws of quantum mechanics, however, so quantum computers are naturally suited to random number generation. In fact, random circuit sampling – in which all qubits are initialized in a given state and allowed to evolve via quantum gates before having their states measured at the output – is often used to benchmark their power.
Of course, not everyone who wants to produce random numbers will have their own quantum computer. However, in 2023 Scott Aaronson of the University of Texas at Austin, US, and his then-PhD student Shi-Han Hung suggested that a client could send a series of pseudorandomly chosen “challenge” circuits to a central server. There, a quantum computer could perform random circuit sampling before sending the readouts to the client.
If these readouts are truly the product of random circuit sampling measurements performed on a quantum computer, they will be truly random numbers. “Certifying the ‘quantumness’ of the output guarantees its randomness,” says Marco Pistoia, JPMorganChase’s head of global technology applied research.
Importantly, this certification is something a classical computer can do. The way this works is that the client samples a subset of the bit strings in the readouts and performs a test called cross-entropy benchmarking. This test measures the probability that the numbers could have come from a non-quantum source. If the client is satisfied with this measurement, they can trust that the samples were genuinely the result of random circuit sampling. Otherwise, they may conclude that the data could have been generated by “spoofing” – that is, using a classical algorithm to mimic a quantum computer. The degree of confidence in this test, and the number of bits they are willing to settle for to achieve this confidence, is up to the client.
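To illustrate the certification step, here is a minimal sketch of linear cross-entropy benchmarking in Python. It is not the JPMorganChase–Quantinuum code: the “ideal” circuit distribution is a stand-in generated at random, and the function and variable names are our own.

import numpy as np

def linear_xeb_fidelity(samples, ideal_probs):
    """Linear cross-entropy benchmarking (XEB) fidelity estimate.

    samples     : measured bitstrings encoded as integers 0 .. D-1
    ideal_probs : length-D numpy array of the challenge circuit's ideal
                  output probabilities (computed classically by the client)
    Returns roughly 1 when the samples follow the ideal distribution and
    roughly 0 for uniform samples, which is what a classical spoofer with
    no knowledge of the circuit would tend to produce.
    """
    dim = len(ideal_probs)
    return dim * np.mean(ideal_probs[np.asarray(samples)]) - 1.0

# Toy demonstration with a made-up 10-qubit "circuit" distribution
rng = np.random.default_rng(1)
dim = 2 ** 10
ideal = rng.dirichlet(np.ones(dim))              # stand-in for the ideal probabilities
honest = rng.choice(dim, size=20000, p=ideal)    # sampler that follows the circuit
spoofed = rng.integers(0, dim, size=20000)       # uniform classical guesser
print(linear_xeb_fidelity(honest, ideal))        # close to 1 (up to statistical noise)
print(linear_xeb_fidelity(spoofed, ideal))       # close to 0

In the real protocol the hard part is computing the ideal probabilities for 56-qubit circuits, which is why the verification step itself required supercomputer-scale classical resources.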
High-fidelity quantum computing
In the new work, Pistoia, Aaronson, Hung and colleagues sent challenge circuits to the 56-qubit Quantinuum H2-1 quantum computer over the Internet. The attraction of the Quantinuum H2-1, Pistoia explains, is its high fidelity: “Somebody could say ‘Well, when it comes to randomness, why would you care about accuracy – it’s random anyway’,” he says. “But we want to measure whether the number that we get from Quantinuum really came from a quantum computer, and a low-fidelity quantum computer makes it more difficult to ascertain that with confidence… That’s why we needed to wait all these years, because a low-fidelity quantum computer wouldn’t have given us the certification part.”
The team then certified the randomness of the bits they got back by performing cross-entropy benchmarking using four of the world’s most powerful supercomputers, including Frontier at the US Department of Energy’s Oak Ridge National Laboratory. The results showed that it would have been impossible for a dishonest adversary with similar classical computing power to spoof a quantum computer – provided the client set a short enough time limit.
One drawback is that at present, the computational cost of verifying that random numbers have not been spoofed is similar to the computational cost of spoofing them. “New work is needed to develop approaches for which the certification process can run on a regular computer,” Pistoia says. “I think this will remain an active area of research in the future.”
A more important difference, Foss-Feig argues, is that whereas the other groups used a partly analogue approach to simulating their quantum magnetic system, with all quantum gates activated simultaneously, Quantinuum’s approach divided time into a series of discrete steps, with operations following in a sequence similar to that of a classical computer. This digitization meant the researchers could perform a discrete gate operation as required, between any of the ionic qubits in their lattice. “This digital architecture is an extremely convenient way to compile a very wide range of physical problems,” Foss-Feig says. “You might think, for example, of simulating not just spins, for example, but also fermions or bosons.”
While the researchers say it would be just possible to reproduce these simulations using classical computers, they plan to study larger models soon. A 96-qubit version of their device, called Helios, is slated for launch later in 2025.
“We’ve gone through a shift”
Quantum information scientist Barry Sanders of the University of Calgary, Canada is impressed by all three works. “The real game changer here is Quantinuum’s really nice 56-qubit quantum computer,” he says. “Instead of just being bigger in its number of qubits, it’s hit multiple important targets.”
In Sanders’ view, the computer’s fully digital architecture is important for scalability, although he notes that many in the field would dispute that. The most important development, he adds, is that the research frames the value of a quantum computer in terms of its accomplishments.
“We’ve gone through a shift: when you buy a normal computer, you want to know what that computer can do for you, not how good is the transistor,” he says. “In the old days, we used to say ‘I made a quantum computer and my components are better than your components – my two-qubit gate is better’… Now we say, ‘I made a quantum computer and I’m going to brag about the problem I solved’.”
The random number generation paper is published in Nature. The others are available on the arXiv pre-print server.
A team of researchers in Switzerland, Germany and the US has observed clear evidence of quantum mechanical interference behaviour in collisions between a methane molecule and a gold surface. As well as extending the boundaries of quantum effects further into the classical world, the team say the work has implications for surface chemistry, which is important for many industrial processes.
The effects of interference in light are generally easy to observe. Whenever a beam of light passes through closely-spaced slits or bounces off an etched grating, an alternating pattern of bright and dark intensity modulations appears, corresponding to locations of constructive and destructive interference, respectively. This was the outcome of Thomas Young’s original double-slit experiment, which was carried out in the 1800s and showed that light behaves like a wave.
For molecules and other massive objects, observing interference is trickier. Though quantum mechanics decrees that these also interfere when they scatter off surfaces, and a 1920s version of Young’s double-slit experiment showed that this was true for electrons, the larger the objects are, the more difficult it is to observe interference effects. Indeed, the disappearance of such effects is a sign that the object’s wavefunction has “decohered” – that is, the object has stopped behaving like a wave and started obeying the laws of classical physics.
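The usual way to quantify why interference is harder to see for heavier objects – a standard textbook relation rather than anything specific to the new experiment – is the de Broglie wavelength:

\[
\lambda_{\rm dB} = \frac{h}{p} = \frac{h}{mv},
\]

which shrinks as the object’s mass and speed grow. A molecule such as methane moving at typical beam velocities therefore has a wavelength many orders of magnitude shorter than that of a slow electron, so its wave-like features are far easier to wash out through decoherence.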
Similar to the double-slit experiment
In the new work, researchers led by Rainer Beck of the EPFL developed a way to observe interference in complex polyatomic molecules. They did this by using an infrared laser to push methane (CH4) molecules into specific rovibrational states before scattering the molecules off an atomically smooth and chemically inert Au(111) surface. They then detected the molecules’ final states using a second laser and an instrument called a bolometer that measures the tiny temperature change as molecules absorb the laser’s energy.
Using this technique, Beck and colleagues identified a pattern in the quantum states of the methane molecules after they collided with the surface. When two states had different symmetries, the quantum mechanical amplitudes for the different pathways taken during the transition between them cancelled out. In states with the same symmetry, however, the pathways reinforced each other, leading to an intense, clearly visible signal.
The researchers say that this effect is similar to the destructive and constructive interference of the double-slit experiment, but not quite the same. The difference is that interference in the double-slit experiment stems from diffraction, whereas the phenomenon Beck and colleagues observed relates to the rotational and vibrational states of the methane molecules.
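Schematically – a generic two-path picture rather than the authors’ full scattering calculation – the probability of a transition between rovibrational states sums the amplitudes of the contributing pathways before squaring:

\[
P_{i \to f} = \left| A_1 + A_2 \right|^2 = |A_1|^2 + |A_2|^2 + 2\,\mathrm{Re}\!\left(A_1^{*} A_2\right),
\]

with the cross term suppressing the signal when the states involved have opposite symmetry (destructive interference) and enhancing it when their symmetries match (constructive interference), as seen in the populations measured by Beck’s team.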
A rule to explain the patterns
The researchers had seen hints of such behaviour in experiments a few years ago, when they scattered methane from a nickel surface. “We saw that some rotational quantum states were somewhat weakly populated by the collisions while other states that were superficially very similar (that is, with the same energy and same angular momentum) were more strongly populated,” explains Christopher Reilly, a postdoctoral researcher at EPFL and the lead author of a paper in Science on the work. “When we moved on to collisions with a gold surface, we discovered that these population imbalances were now very pronounced.”
This discovery spurred them to find an explanation. “We concluded that we might be observing a conservation of the reflection parity of the methane molecule’s wavefunction,” Reilly says. “We then set out to test it for molecules prepared in vibrationally excited states and our results confirmed our hypothesis spectacularly.”
Because the team’s technique for detecting quantum states relies on spectroscopy, Reilly says the “intimidating complexity” of the spectrum of quantum states in a medium-sized molecule like methane was a challenge. “While our narrow-bandwidth lasers allowed us to probe the population in individual quantum states, we still needed to know exactly which wavelength to tune the laser to in order to address a given state,” he explains.
This, in turn, meant knowing the molecule’s energy levels very precisely, as they were trying to compare populations of states with only marginally different energies. “It is only just in the last couple years that modelling of methane’s spectrum has become accurate enough to permit a reliable assignment of the quantum states involved in a given infrared transition,” Reilly says, adding that the HITEMP project of the HITRAN spectroscopic database was a big help.
Rethinking molecule-surface dynamics
According to Reilly, the team’s results show that classical models cannot fully capture molecule-surface dynamics. “This has implications for our general understanding of chemistry at surfaces, which is where in fact the majority of chemistry relevant to industry (think catalysts) and technology (think semiconductors) occurs,” he says. “The first step of any surface reaction is the adsorption of the reactants onto the surface and this step often requires the overcoming of some energetic barrier. Whether an incoming molecule will adsorb depends not only on the molecule’s total energy but on whether this energy can be effectively channelled into overcoming the barrier.
“Our scattering experiments directly probe these dynamics and show that, to really understand the different fundamental steps of surface chemistry, quantum mechanics is needed,” he tells Physics World.
Deterministic entanglement through holonomy: A system of four coupled optical waveguides (A, C, E, W), whose three inter-waveguide coupling coefficients (k_A, k_E, k_W) vary in such a way as to define a closed path γ. (Courtesy: Reprinted with permission from http://dx.doi.org/10.1103/PhysRevLett.134.080201)
Physicists at the Georgia Institute of Technology, US have introduced a novel way to generate entanglement between photons – an essential step in building scalable quantum computers that use photons as quantum bits (qubits). Their research, published in Physical Review Letters, leverages a mathematical concept called non-Abelian quantum holonomy to entangle photons in a deterministic way without relying on strong nonlinear interactions or irrevocably probabilistic quantum measurements.
Entanglement is fundamental to quantum information science, distinguishing quantum mechanics from classical theories and serving as a pivotal resource for quantum technologies. Existing methods for entangling photons often suffer from inefficiencies, however, requiring additional particles such as atoms or quantum dots and additional steps such as post-selection that eliminate all outcomes of a quantum measurement in which a desired event does not occur.
While post-selection is a common strategy for entangling non-interacting quantum particles, protocols for entangled-state preparation that use it are non-deterministic. This is because they rely on measurements, and obtaining a particular state of the system after a measurement occurs only with some probability.
Non-Abelian holonomy
The new approach provides a direct and deterministic alternative. In it, the entangled photons occupy distinguishable spatial modes of optical waveguides, making entanglement more practical for real-world applications. To develop it, Georgia Tech’s Aniruddha Bhattacharya and Chandra Raman took inspiration from a 2023 experiment by physicists at Universität Rostock, Germany, that involved coupled photonic waveguides on a fused silica chip. Both works exploit a property known as non-Abelian holonomy, which is essentially a geometric effect that occurs when a quantum system evolves along a closed path in parameter space (more precisely, it is a matrix-valued generalization of a pure geometric phase).
In Bhattacharya and Raman’s approach, photons evolve in a waveguide system where their quantum states undergo a controlled transformation that leads to entanglement. The pair derive an analytical expression for the holonomic transformation matrix, showing that the entangling operation corresponds to a unitary rotation within an effective pseudo-angular momentum space. Because this process is fully unitary, it does not require measurement or external interventions, making it inherently robust.
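In general terms – this is the standard definition of a non-Abelian holonomy, not the specific waveguide parameters of the paper – the transformation picked up on traversing the closed path γ in parameter space is the path-ordered exponential of a matrix-valued connection A:

\[
U(\gamma) = \mathcal{P}\exp\!\left( \oint_{\gamma} A_\mu \, \mathrm{d}\lambda^\mu \right).
\]

Because the components of A do not commute, U(γ) is a full unitary matrix rather than a simple phase factor, and it is this matrix that implements the entangling rotation in the photons’ mode space.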
Beyond the Hong-Ou-Mandel effect
A classic example of photon entanglement is the Hong–Ou–Mandel (HOM) effect, where two identical photons interfere at a beam splitter, leading to quantum correlations between them. The new method extends such interference effects beyond two photons, allowing deterministic entanglement of multiple photons and even higher-dimensional quantum states known as qudits (d-level systems) instead of qubits (two-level systems). This could significantly improve the efficiency of quantum information protocols.
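For reference, the textbook HOM transformation at a balanced beam splitter – quoted here as background rather than taken from the new paper – sends one photon in each input port into the superposition

\[
|1,1\rangle \;\longrightarrow\; \frac{1}{\sqrt{2}} \left( |2,0\rangle - |0,2\rangle \right),
\]

so the two photons always exit the same port and coincidence counts between the outputs vanish. The holonomic scheme generalizes this kind of multiphoton interference to more photons, more modes and higher-dimensional qudit states.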
Because state preparation and measurement are relatively straightforward in this approach, Bhattacharya and Raman say it is well-suited for quantum computing. Since the method relies on geometric principles, it naturally protects against certain types of noise, making it more robust than traditional approaches. They add that their technique could even be used to construct an almost universal set of near-deterministic entangling gates for quantum computation with light. “This innovative use of non-Abelian holonomy could shift the way we think about photonic quantum computing,” they say.
By providing a deterministic and scalable entanglement mechanism, Bhattacharya and Raman add that their method opens the door to more efficient and reliable photonic quantum technologies. The next steps will be to validate the approach experimentally and explore practical implementations in quantum communication and computation. Further in the future, it will be necessary to find ways of integrating this approach with other quantum systems, such as matter-based qubits, to enable large-scale quantum networks.
Water molecules on the surface of an electrode flip just before they give up electrons to form oxygen – a feat of nanoscale gymnastics that explains why the reaction takes more energy than it theoretically should. After observing this flipping in individual water molecules for the first time, scientists at Northwestern University in the US say that the next step is to find ways of controlling it. Doing so could improve the efficiency of the reaction, making it easier to produce both oxygen and hydrogen fuel from water.
The water splitting process takes place in an electrochemical cell containing water and a metallic electrode. When a voltage is applied to the electrode, the water splits into oxygen and hydrogen via two separate half-reactions.
The problem is that the half-reaction that produces oxygen, known as the oxygen evolution reaction (OER), is difficult and inefficient and takes more energy than predicted by theory. “It should require 1.23 V,” says Franz Geiger, the Northwestern physical chemist who led the new study, “but in reality, it requires more like 1.5 or 1.8 V.” This extra energy cost is one of the reasons why water splitting has not been implemented on a large scale, he explains.
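That shortfall is conventionally expressed as an overpotential – simply the difference between the applied cell voltage and the 1.23 V thermodynamic minimum quoted above:

\[
\eta = V_{\rm cell} - 1.23~\text{V} \approx 0.27\text{–}0.57~\text{V} \quad \text{for } V_{\rm cell} \approx 1.5\text{–}1.8~\text{V}.
\]

The wasted energy per unit of charge scales with this overpotential, which is why shaving even a few hundred millivolts off the OER matters for large-scale hydrogen production.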
Determining how water molecules arrange themselves
In the new work, Geiger and colleagues wanted to test whether the orientation of the water’s oxygen atoms affects the kinetics of the OER. To do this, they directed an 80-femtosecond pulse of infrared (1034 nm) laser light onto the surface of the electrode, which was in this case made of nickel. They then measured the intensity of the reflected light at half the incident wavelength.
This method, which is known as second harmonic and vibrational sum-frequency generation spectroscopy, revealed that the water molecules’ alignment on the surface of the electrode depended on the applied voltage. By analysing the amplitude and phase of the signal photons as this voltage was cycled, the researchers were able to pin down how the water molecules arranged themselves.
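In second harmonic generation, two pump photons combine into one photon at twice the frequency, so the detected wavelength is simply half the incident one – the numbers below just restate those quoted above:

\[
\omega + \omega \to 2\omega, \qquad \lambda_{\rm SHG} = \frac{1034~\text{nm}}{2} = 517~\text{nm}.
\]

Because this second-order response is forbidden in centrosymmetric bulk media such as water, the signal is generated almost entirely at the electrode–water interface, which is what makes the technique surface-specific.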
They found that before the voltage was applied, the water molecules were randomly oriented. At a specific applied voltage, however, they began to reorient. “We also detected water dipole flipping just before cleavage and electron transfer,” Geiger adds. “This allowed us to distinguish flipping from subsequent reaction steps.”
An unexplored idea
The researchers’ explanation for this flipping is that at high pH levels, the surface of the electrode is negatively charged due to the presence of nickel hydroxide groups that have lost their protons. The water molecules therefore align with their most positively charged ends facing the electrode. However, this means that the ends containing the electrons needed for the OER (which reside in the oxygen atoms) are pointing away from the electrode. “We hypothesized that water molecules must flip to align their oxygen atoms with electrochemically active nickel oxo species at high applied potential,” Geiger says.
This idea had not been explored until now, he says, because water absorbs strongly in the infrared range, making it appear opaque at the relevant frequencies. The electrodes typically employed are also too thick for infrared light to pass through. “We overcame these challenges by making the electrode thin enough for near-infrared transmission and by using wavelengths where water’s absorbance is low (the so-called ‘water window’),” he says.
Other challenges for the team included designing a spectrometer that could measure the second harmonic generation amplitude and phase and developing an optical model to extract the number of net-aligned water molecules and their flipping energy. “The full process – from concept to publication – took three years,” Geiger tells Physics World.
The team’s findings, which are detailed in Science Advances, suggest that controlling the orientation of water at the interface with the electrode could improve OER catalyst performance. For example, surfaces engineered to pre-align water molecules might lower the kinetic barriers to water splitting. “The results could also refine electrochemical models by incorporating structural water energetics,” Geiger says. “And beyond the OER, water alignment may also influence other reactions such as the hydrogen evolution reaction and CO₂ reduction to liquid fuels, potentially impacting multiple energy-related technologies.”
The researchers are now exploring alternative electrode materials, including NiFe and multi-element catalysts. Some of the latter can outperform iridium, which has traditionally been the best-performing electrocatalyst, but is very rare (it comes from meteorites) and therefore expensive. “We have also shown in a related publication (in press) that water flipping occurs on an earth-abundant semiconductor, suggesting broader applicability beyond metals,” Geiger reveals.
Reversible switching Schematic illustrating the hard/soft transition of the hydrogel/NAAC composite. (Courtesy: CC BY 4.0/Int. J. Extrem. Manuf. 10.1088/2631-7990/adbd97)
Complex hydrogel structures created using 3D printing are increasingly employed in fields including flexible electronics, soft robotics and regenerative medicine. Currently, however, such hydrogels are often soft and fragile, limiting their practical utility. Researchers at Zhejiang University in China have now fabricated 3D-printed hydrogels that can be easily, and repeatably, switched between soft and hard states, enabling novel applications such as smart medical bandages or information encryption.
“Our primary motivation was to overcome the inherent limitations of 3D-printed hydrogels, particularly their soft, weak and fragile mechanical properties, to broaden their application potential,” says co-senior author Yong He.
The research team created the hard/soft switchable composite by infusing supersaturated salt solution (sodium acetate, NAAC) into 3D-printed polyacrylamide (PAAM)-based hydrogel structures. The hardness switching is enabled by the liquid/solid transition of the salt solution within the hydrogel.
Initially, the salt molecules are arranged randomly within the hydrogel and the PAAM/NAAC composite is soft and flexible. The energy barrier separating the soft and hard states prevents spontaneous crystallization, but can be overcome by artificially seeding a crystal nucleus (via exposure to a salt crystal or contact with a sharp object). This seed promotes a phase transition to a hard state, with numerous rigid, rod-like nanoscale crystals forming within the hydrogel matrix.
Superior mechanical parameters
The researchers created a series of PAAM/NAAC structures, using projection-based 3D printing to print hydrogel shapes and then soaking them in NAAC solution. Upon seeding, the structures rapidly transformed from transparent to opaque as the crystallization spread through the sample at speeds of up to 4.5 mm/s.
The crystallization dramatically changed the material’s mechanical performance. For example, a soft cylinder of PAAM/1.5NAAC (containing 150 wt% salt) could be easily compressed by hand, returning to its original shape after release. After crystallization, four 9 × 9 × 12 mm cylinders could support an adult’s weight without deforming.
For this composite, just 1 min of crystallization dramatically increased the compression Young’s modulus compared with the soft state. And after 24 h, the Young’s modulus grew from 110 kPa to 871.88 MPa. Importantly, the hydrogel could be easily returned to its soft state by heating and then cooling, a process that could be repeated many times.
The team also performed Shore hardness testing on various composites, observing that hardness values increased with increasing NAAC concentration. In PAAM/1.7NAAC composites (170 wt% salt), the Shore D value reached 86.5, comparable to that of hard plastic materials.
The hydrogel’s crosslinking density also impacted its mechanical performance. For PAAM/1.5NAAC composites, increasing the mass percentage of polymer crosslinker from 0.02 to 0.16 wt% increased the compression Young’s modulus to 1.2 GPa and the compression strength to 81.7 MPa. The team note that these parameters far exceed those of any existing 3D-printed hydrogels.
Smart plaster cast
He and colleagues demonstrated how the hard/soft switching and robust mechanical properties of PAAM/NAAC can create medical fixation devices, such as a smart plaster cast. The idea here is that the soft hydrogel can be moulded around the injured bone, and then rapidly frozen in shape by crystallization to support the injury and promote healing.
The researchers tested the smart plaster cast on an injured forearm. After applying a layer of soft cotton padding, they carefully wrapped layers of the smart plaster bandage (packed within a polyethylene film to prevent accidental seeding) around the arm. The flexible hydrogel could conform to the curved surface of limbs and then be induced to crystallize.
Medical fixation device Application of the PAAM/NAAC composite to create a smart plaster cast. (Courtesy: CC BY 4.0/Int. J. Extrem. Manuf. 10.1088/2631-7990/adbd97)
After just 10 min of crystallization, the smart plaster cast reached a yield strength of 8.7 MPa, rapidly providing support for the injured arm. In comparison, a traditional plaster cast (as currently used to treat bone fractures) took about 24 h to fully harden, reaching a maximum yield strength of 3.9 MPa.
To determine the safety of the exothermic crystallization process, the team monitored temperature changes in the plaster cast nearest to the skin. The temperature peaked at 41.5 °C after 25 min of crystallization, below the ISO-recommended maximum safe temperature of 50 °C.
The researchers suggest that the ease of use, portability and fast response of the smart plaster cast could provide a simple and effective solution for emergency and first aid situations. Another benefit is that, in contrast to traditional plaster casts that obstruct X-rays and hinder imaging, X-rays easily penetrate through the smart plaster cast to enable high-quality imaging during the healing process.
While the composites exhibit high strength and Young’s modulus, they are not as tough as ideally desired. “For example, the elongation at break was less than 10% in tensile testing for the PAAM/1.5NAAC and PAAM/1.7NAAC samples, highlighting the challenge of balancing toughness with strength and modulus,” He tells Physics World. “Therefore, our current research focuses on enhancing the toughness of these composite materials without compromising their modulus, with the goal of developing strong, tough and mechanically switchable materials.”
In 2014 the American mathematical physicist S James Gates Jr shared his “theorist’s bucket list” of physics discoveries he would like to see happen before, as he puts it, he “shuffles off this mortal coil”. A decade later, Physics World’s Margaret Harris caught up once more with Gates, who is now at the University of Maryland, US, to see what discoveries he can check off his list; what he would still like to see discovered, proven or explored; and what more he might add to the list, as of 2025.
The first thing on your list 10 years ago was the discovery of the Higgs boson, which had happened. The next thing on your list was gravitational waves.
The initial successful detection of gravity waves [in 2015] was a spectacular day for a lot of us. I had been following the development of that detector [the Laser Interferometer Gravitational-wave Observatory, or LIGO] almost from its birth. The first time I heard about detecting gravity waves was around 1985. I was a new associate professor at Maryland, and a gentleman by the name of [Richard] Rick Isaacson, who was a programme officer at the National Science Foundation (NSF), called me one day into his office to show me a proposal from a Caltech-MIT collaboration to fund a detector. I read it and I said this will never work. Fortunately, Isaacson is a superhero and made this happen because for decades he was the person in the NSF with the faith that this could happen; so when it did, it was just an amazing day.
Why is the discovery of these gravitational waves so exciting for physicists?
Albert Einstein’s final big prediction was that there would be observable gravitational waves in the universe. It’s very funny – if you go back into the literature, he first says yes this is possible, but at some point he changes his mind again. It’s very interesting to think about how human it is to bounce back and forth, and then to have Mother Nature say look, you got it right the first time. So such a sharp confirmation of the theory of general relativity was unlike anything I could imagine happening in my lifetime, quite frankly, even though it was on my bucket list.
The other thing is that our species knows about the heavens mostly because there have been “entities” that are similar to Mercury, the Greek god who carried messages from Mount Olympus. In our version of the story, Mercury is replaced by photons. It’s light that has been telling us for hundreds of thousands of years, maybe a million years, that there’s something out there and this drove the development of science for several hundred years. With the detection of gravitational waves, there’s a new kid on the block to deliver the message, and that’s the graviton. Just like light, it has both particle and wave aspects, so now we have detected gravitational waves, the next big thing is to be able to detect gravitons.
We are not completely clear on exactly how to see gravitons, but once we have that knowledge, we will be able to do something that we’ve never been able to do as a species in this universe. After the initial moments of the Big Bang, there was a period of darkness, when matter was far too hot to form neutral atoms, and light could not travel through the dense plasma. It took 380,000 years for electrons to be trapped in orbits around nuclei, forming the first atoms.
Eventually, the universe had expanded so much that the average temperature and density of particles had dropped enough for light to travel. Now what’s really interesting is if you look at the universe via photons, you can only look so far back up to that point when light was first able to travel through the universe, often referred to as the “first dawn”. We detected this light in the 1960s, and it’s called the cosmic microwave background. If you want to peer further back in time beyond this period, you can’t use light but you can use gravitational waves. We will be able as a species eventually to look maybe all the way back to the Big Bang, and that’s remarkable.
What’s the path to seeing gravitons experimentally?
At the time that gravitational waves were detected by LIGO there were three different detectors, two in the US and one on the border of France and Italy called Virgo. There is a new LIGO site coming online in India now, and so what’s going to happen, provided there continues to be a global consensus on continuing to do this science, is that more sites like this are going to come online, which will give us higher-fidelity pictures. It’s going to be a difference akin to going from black and white TV to colour.
Wish fulfilled Aerial view of the Virgo detector in Italy. This facility became the third to detect gravitational waves, in 2017, after the two LIGO detectors in the US. As more gravitational-wave facilities come online around the world, we increase our chance of detecting gravitons. (CC0 1.0 The Virgo collaboration)
In the universe now, the pathway to detecting gravitons involves two steps. First, you probably want to measure the polarization of gravitons, and Fabry–Pérot interferometers, such as LIGO, have that capacity. If it’s a polarized graviton wave, the bending of space-time has a certain signature, whether it’s left or right-handed. If we are lucky enough we will actually see that polarization, I would guess within the next 10 years.
The second step is quantization, which is going to be a challenge. Back in the 1960s a physicist at the University of Maryland named Joseph Weber developed what are now called Weber bars. They’re big metal bars and the idea was you cool them down and then if a graviton impinges on these bars, it would induce lattice vibrations in the metal, and you would detect those. I suspect there’s going to be a big push in going back and upgrading that technology. One of the most exciting things about that is they might be quantum Weber bars. That’s the road that I could see to actually nailing down the existence of the graviton.
Number three on your bucket list from a decade ago was supersymmetry. How have its prospects developed in the past 10 years?
At the end of the Second World War, in an address to the Japanese people after the atomic bombing of Hiroshima and Nagasaki, the Japanese emperor [known as Showa in Japan, Hirohito in the West] used the phrase “The situation has developed not necessarily to our advantage”, and I believe we can apply that to supersymmetry. In 2006 I published a paper where I said explicitly I did not expect the Large Hadron Collider (LHC) to detect supersymmetry. It was a back-of-the-envelope calculation, where I was looking at the issue of anomalous magnetic moments. Because the magnetic moments can be sensitive to particles you can’t actually detect, by looking at the anomalous magnetic moment and then comparing the measured value to what is predicted by all the particles that you know, you can put lower bounds on the particles that you don’t know, and that’s what I did to come up with this number.
It looked to me like the lightest “superpartner” was probably going to be in the range of 30 TeV. The LHC’s initial operations were at 7 TeV and it’s currently at 14 TeV, so I’m feeling comfortable about this issue. If it’s not found by the time we reach 100 TeV, well, I’m likely going to kick the bucket by the time we get that technology. But I am confident that SUSY is out there in nature for reasons of quantum stability.
Also, observations of particle physics – particularly high-precision observations, magnetic moments, branching ratios, decay rates – are not the only way to think about finding supersymmetry. In particular, one could imagine that within string theory, there might be cosmological implications (arXiv:1907.05829), which are mostly limited to the question of dark matter and dark energy. When it comes to the dark-matter contribution in the universe, if you look at the mathematics of supersymmetry, you can easily find that there are particles that we haven’t observed yet and these might be the lightest supersymmetric particle.
And the final thing in your bucket list, which you’ve touched on, was superstring theory. When we last spoke, you said that you did not expect to see it. How has that changed, if at all?
Unless I’m blessed with a life as long as Methuselah, I don’t expect to see that. I think that for superstring theory to win observational acceptance, it will likely come about not from a single experiment, but from a confluence of observations of the cosmology and astrophysics type, and maybe then the lightest supersymmetric particle will be found. By the way, I don’t expect extra dimensions ever to be found. But if I did have several hundred years to live, those are the kinds of likely expectations I would have.
And have you added anything new to your bucket list over the past 10 years?
Yes, but I don’t quite know how to verbalize it. It has to do with a confluence of things around quantum mechanics and information. In my own research, one of the striking things about the graphs that we developed to understand the representation theory of supersymmetry – we call them “adinkras” – is that error-correcting codes are part of these constructs. In fact, for me this is the proudest piece of research I’ve ever enabled – to discover a kind of physics law, or at least the possibility of a physics law, that includes error-correcting codes. I know of no previous example in history where a law of physics includes error-correcting codes, but we can clearly see it in the mathematics around these graphs (arXiv:1108.4124).
That had a profound impact on the way I think about information theory. In the 1980s, John Wheeler came up with this very interesting way to think about quantum mechanics (“Information, physics, quantum: the search for links” Proc. 3rd Int. Symp. Foundations of Quantum Mechanics, Tokyo, 1989, pp354–368). A shorthand phrase to describe it is “it from bit” – meaning that the information that we see in the universe is somehow connected to bits. As a young person, I thought that was the craziest thing I had ever heard. But in my own research I saw that it’s possible for the laws of physics to contain bits in the form of error-correcting codes, so I had to then rethink my rejection of what I thought was a wild idea.
In fact, now that I’m old, I’ve concluded that if you do theoretical physics long enough, you too can become crazy – because that’s what sort of happened to me! In the mathematics of supersymmetry, there is no way to avoid the presence of error-correcting codes and therefore bits. And because of that my new item for the bucket list is an actual observational demonstration that the laws of quantum mechanics entail the use of information in bits.
In terms of when we might see that, it will be long after I’ve gone. Unless I somehow get another 150 years of life. Intellectually, that’s how long I would estimate it will take as of now, because the hints are so stark, they suggest something is definitely going on.
We’ve talked a little bit about how science has changed in the past 10 years. Of course, science is not unconnected with the rest of the world, and there have been changes elsewhere that impinge on science, particularly recent developments in the US. What’s your take on that?
Unfortunately, it’s been very predictable. Two years ago I wrote an essay called “Expelled from the mountain top?” (Science 380 993). I took that title from a statement by Martin Luther King Jr where he says “I’ve been to the mountaintop”, and the part about being “expelled” refers to closing down opportunities for people of colour. In my essay I talked about the fact that it looked to me like the US was moving in a direction where it would be less likely that people like me – a man of colour, an African American, a scientist – would continue to have access to the kind of educational training that it takes to do this [science].
I’m still of the opinion that the 2023 decision the Supreme Court made [about affirmative action] doesn’t make sense. What it is saying is that diversity has no role in driving innovation. But there’s lots of evidence that that’s not right. How do you think cities came into existence? They are places where innovation occurs because you have diverse people coming to cities.
You add to that the presence of a new medium – the Internet – and the fact that with this new medium, anyone can reach millions of people. Why is this a little bit frightening? Well, fake news. Misinformation.
Still hopeful Jim Gates discusses his career and his lifelong interest in supersymmetry with an audience at the Royal College of Art in London earlier this year. (Courtesy: Margaret Harris)
I ran into a philosopher about a year ago, and he made a statement that I found very profound. He said think of the printing press. It allowed books to disseminate through Western European society in a way that had never happened before, and therefore it drove literacy. How long did it take for literacy levels to increase? 50 to 100 years. Then he said, now let’s think about the Internet. What’s different about it? The difference is that anyone can say anything and reach millions of people. And so the challenge is how long it will take for our species to learn to write the Internet without misinformation or fake news. And if he’s right, that’s 100, 150 years. That’s part of the challenge that the US is facing. It’s not just a challenge for my country, but somehow it seems to be particularly critical in my country.
So what does this have to do with science? In 2005 I was invited to deliver a plenary address to the American Association for the Advancement of Science annual meeting. In that address, I made statements about science being turned off because it was clear to me, even back then in my country, that there were elements in our society that would be perfectly happy to deny evidence brought forth by scientists, and that these elements were becoming stronger.
You put this all together and it’s going to be an extraordinarily important, challenging time for the continuation of science because, certainly at the level of fundamental science, this is something that the public generally has to say “Yes, we want to invest in this”. If you have agencies and agents in society denying vaccines, for example, or denying the scientific evidence around evolution or climate change, if this is going to be something that the public buys into, then science itself potentially can be turned off, and that’s the thing I was warning about in 2005.
What are some practical things that members of the scientific community can do to help prevent that from happening?
First of all, come down from the ivory tower. I’ve been a part of some activities, and they normally are under the rubric of restoring the public’s trust in science, and I think that’s the wrong framing. It’s the public faith in science that’s under attack. So from my perspective, that’s what I’d much rather have people really thinking about.
What would you say the difference is between having trust in science and having faith in science?
In my mind, if I trust something, I will listen. If I have faith in something, I will listen and I will act. To me, this is a sharp distinction.
Personally, even though I expect that it’s going to be really hard going forward, I am hopeful. And I would urge young people never to lose that hope. If you lose hope, there is no hope. It’s just that simple. And so I am hopeful. Even though people may take my comments as “oh, he’s just depressed” – no, I’m not. Because I’m a scientist, I believe that one must, in a clear-eyed, hard-headed manner, look at the evidence that’s in front of us and not sentimentally try to dodge what you see, and that’s who I am. So I am hopeful in spite of all the things that I’ve just said to you.
If a water droplet flowing over a surface gets stuck, and then unsticks itself, it generates an electric charge. The discoverers of this so-called depinning phenomenon are researchers at RMIT University and the University of Melbourne, both in Australia, and they say that boosting it could make energy-harvesting devices more efficient.
The newly observed charging mechanism is conceptually similar to slide electrification, which occurs when a liquid leaves a surface – that is, when the surface goes from wet to dry. However, the idea that the opposite process can also generate a charge is new, says Peter Sherrell, who co-led the study. “We have found that going from dry to wet matters as well and may even be (in some cases) more important,” says Sherrell, an interdisciplinary research fellow at RMIT. “Our results show how something as simple as water moving on a surface still shows basic phenomena that have not been understood yet.”
Co-team leader Joe Berry, a fluid dynamics expert at Melbourne, notes that the charging mechanism only occurs when the water droplet gets temporarily stuck on the surface. “This suggests that we could design surfaces with specific structure and/or chemistry to control this charging,” he says. “We could reduce this charge for applications where it is a problem – for example in fuel handling – or, conversely, enhance it for applications where it is a benefit. These include increasing the speed of chemical reactions on catalyst surfaces to make next-generation batteries more efficient.”
More than 500 experiments
To observe depinning, the researchers built an experimental apparatus that enabled them to control the sticking and slipping motion of a water droplet on a Teflon surface while measuring the corresponding change in electrical charge. They also controlled the size of the droplet, making it big enough to wet the surface all at once, or smaller to de-wet it. This allowed them to distinguish between multiple mechanisms at play as they sequentially wetted and dried the same region of the surface.
Their study, which is published in Physical Review Letters, is based on more than 500 wetting and de-wetting experiments performed by PhD student Shuaijia Chen, Sherrell says. These experiments showed that the largest change in charge – from 0 to 4.1 nanocoulombs (nC) – occurred the first time the water contacted the surface. The amount of charge then oscillated between about 3.2 and 4.1 nC as the system alternated between wet and dry phases. “Importantly, this charge does not disappear,” Sherrell says. “It is likely generated at the interface and probably retained in the droplet as it moves over the surface.”
The motivation for the experiment came when Berry asked Sherrell a deceptively simple question: was it possible to harvest electricity from raindrops? To find out, they decided to supervise a semester-long research project for a master’s student in the chemical engineering degree programme at Melbourne. “The project grew from there, first with two more research project students [before] Chen then took over to build the final experimental platform and take the measurements,” Berry recalls.
The main challenge, he adds, was that they did not initially understand the phenomenon they were measuring. “Another obstacle was to design the exact protocol required to repeatedly produce the charging effect we observed,” he says.
Potential applications
Understanding how and why electric charge is generated as liquids flow over surfaces is important, Berry says, especially with new, flammable types of renewable fuels such as hydrogen and ammonia seen as part of the transition to net zero. “At present, with existing fuels, charge build-up is reduced by restricting flow using additives or other measures, which may not be effective in newer fuels,” he explains. “This knowledge may help us to engineer coatings that could mitigate charge in new fuels.”
The RMIT/Melbourne researchers now plan to investigate the stick-slip phenomenon with other types of liquids and surfaces and are keen to partner with industries to target applications that can make a real-world impact. “At this stage, we have simply reported that this phenomenon occurs,” Sherrell says. “We now want to show that we can control when and where these charging events happen – either to maximize them or eliminate them. We are still a long way off from using our discovery for chemical and energy applications – but it’s a big step in the right direction.”
An international team led by chemists at the University of British Columbia (UBC), Canada, has reported strong experimental evidence for a superfluid phase in molecular hydrogen at 0.4 K. This phase, theoretically predicted in 1972, had only been observed in helium and ultracold atomic gases until now, and never in molecules. The work could give scientists a better understanding of quantum phase transitions and collective phenomena. More speculatively, it could advance the field of hydrogen storage and transportation.
Superfluidity is a quantum mechanical effect that occurs at temperatures near absolute zero. As the temperatures of certain fluids approach this value, they undergo a transition to a zero-viscosity state and begin to flow without resistance – behaviour that is fundamentally different to that of ordinary liquids.
Previously, superfluidity had been observed in helium (3He and 4He) and in clusters of ultracold atoms known as Bose-Einstein condensates. In principle, molecular hydrogen (H2), which is the simplest and lightest of all molecules, should also become superfluid at ultracold temperatures. Like 4He, H2 is a boson, so it is theoretically capable of condensing into a superfluid phase. The problem is that it is only predicted to enter this superfluid state at a temperature between 1 and 2 K, which is lower than its freezing point of 13.8 K.
A new twist on a spinning experiment
To keep their molecular hydrogen liquid below its freezing point, team leader Takamasa Momose and colleagues at UBC confined small clusters of hydrogen molecules inside helium nanodroplets at 0.4 K. They then embedded a methane molecule in the hydrogen cluster and observed its rotation with laser spectroscopy.
Momose describes this set-up as a miniature version of an experiment performed by the Georgian physicist Elephter Andronikashvili in 1946, which showed that disks inside superfluid helium could rotate without resistance. They chose methane as their “disk”, Momose explains, because it rotates quickly and interacts only very weakly with H2, meaning it does not disturb the behaviour of the medium in which it spins.
Onset of superfluidity
In clusters containing fewer than six hydrogen molecules, they observed some evidence of friction affecting the methane’s rotation. As the clusters grew to 10 molecules, this friction began to disappear and the spinning methane molecule rotated faster, without resistance. This implies that most of the hydrogen molecules around it are behaving as a single quantum entity, which is a signature of superfluidity. “For clusters larger than N = 10, the hydrogen acted like a perfect superfluid, confirming that it flows with zero resistance,” Momose tells Physics World.
The researchers, who have been working on this project for nearly 20 years, say they took it on because detecting superfluidity in H2 is “one of the most intriguing unanswered questions in physics – debated for 50 years”. As well as working out how to keep hydrogen in a liquid state at extremely low temperatures, they also had to find a way to detect the onset of superfluidity with high enough precision. “By using methane as a probe, we were finally able to measure how hydrogen affects its motion,” Momose says.
A deeper understanding
The team say the discovery opens new avenues for exploring quantum fluids beyond helium. This could lead scientists to a deeper understanding of quantum phase transitions and collective quantum phenomena, Momose adds.
The researchers now plan to study larger hydrogen clusters (ranging from N = 20 to over a million) to understand how superfluidity evolves with size and whether the clusters eventually freeze or remain fluid. “This will help us explore the boundary between quantum and classical matter,” Momose explains.
They also want to test how superfluid hydrogen responds to external stimuli such as electric and magnetic fields. Such experiments could reveal even more fascinating quantum behaviours and deepen our understanding of molecular superfluidity, Momose says. They could also have practical applications, he adds.
“From a practical standpoint, hydrogen is a crucial element in clean energy technologies, and understanding its quantum properties could inspire new approaches for hydrogen storage and transportation,” he says. “The results from these [experiments] may also provide critical insights into achieving superfluidity in bulk liquid hydrogen – an essential step toward harnessing frictionless flow for more efficient energy transport systems.”
I recently met an old friend from the time when we both worked in the light-emitting diode (LED) industry. We started discussing how every technology has its day – a window of opportunity – but also how hard it is to know which companies will succeed long-term. As we got talking, I was reminded of a very painful product launch by an LED lighting firm that I was running at the time.
The incident occurred in 2014 when my company had a technology division that was developing wirelessly connected lighting. Our plan was to unveil our new Bluetooth control system at a trade show, but everything that could go wrong for us on the day pretty much did. However, the problems with our product weren’t down to any particular flaws in our technology.
Instead, they were triggered by issues with a technology from a rival lighting firm that was also exhibiting at the event. We ended up in a rather ugly confrontation with our competitors, who seemed in denial about their difficulties. I realized there was a fundamental problem with their technology – even if they couldn’t see it – and predicted that, for them, the writing was on the wall.
Bother at the booth
Our plan was to put microprocessors and radios into lighting to make it smart, more energy efficient and with integrated sensors and Bluetooth controls. There were no fundamental physics barriers – it just needed some simple thermal management (the LEDs and particularly the radios had to be kept below 70 °C), some electronics and lots of software.
Back then, the LED lighting industry was following a technology roadmap that envisaged these solid-state devices eventually generating 320 lumens per watt, compared to about 10 lumens per watt from conventional incandescent lamps. As the roadmap progressed, there’d be ever fewer thermal challenges.
With more and more countries phasing out conventional bulbs, LEDs are continuing their march along the roadmap. Almost every bulb sold these days is an LED and the overall global lighting market was worth $140bn in 2023, according to Fortune Business Insights. Lighting accounts for 15–18% of all electricity consumption in the European Union alone.
Back at that 2014 trade show, called LuxLive, all initially seemed to be going well as we set up our display. There was the odd software bug, but we were able to work around that and happily control our LED lights with a smartphone connected via WiFi to a low-cost lighting server (a bit like a Raspberry Pi), with the smart LED fixtures and sensors connected via Bluetooth.
Opportunity knocks Almost all lights now sold are based on light-emitting diodes, but – as with all new technologies – it wasn’t initially clear which firms’ products would succeed. (Courtesy: iStock/KirVKV)
With final preparations over, the trade show opened and our first customer came up to the stand. We started giving them a demo but nothing seemed to be working – to our surprise, we simply could not get our lights to respond. A flurry of behind-the-scenes activity ensued (mostly us switching everything off and on again) but nothing made a difference.
Strangely, my phone call appeared to go dead just as I passed another booth from a rival firm
To try to get to the bottom of things, I stepped a decent distance away from our booth and rang one of our technical team up in London for support. I explained our problem but, strangely, as I walked back towards the booth, I got cut off. My call appeared to have gone dead just as I passed another booth from a rival firm called Ceravision.
It was developing high-efficiency plasma (HEP) lamps that could generate up to 100 lumens per watt – roughly where LEDs were at the time. Its lamps used radio-frequency waves to heat a plasma without needing any electrodes. Designed for sports stadia and warehouses, Ceravision’s bulbs were super bright. In fact, I was almost blinded by its products, which were pointing into the aisle, presumably to attract attention.
Back at base, our technical team frantically tried to figure out why our products weren’t working. But they were stuck and I spent the rest of the day unable to demonstrate our products to potential customers. Then, as if by magic, our system started working again – just as someone noticed we weren’t being blinded by Ceravision’s light any more.
I walked over to Ceravision’s booth as its team was packing up and had a chat with a sales guy, whom I asked how the system worked. He told me it was a microwave waveguide light source but didn’t appear to know much more. So I asked him if he wouldn’t mind turning on the light again to demonstrate its output, which he did.
I glanced back at my team, who a few moments earlier had been all smiles that our system was now working, even if they were confused as to why. Suddenly, as the Ceravision light was turned on, our system broke down again. I requested to speak to one of Ceravision’s technical team but was told they wouldn’t be back until the following day.
Lighting the way
I left for the show’s awards dinner, where my firm won an innovation award for the wireless lighting system we’d been trying – unsuccessfully – to demonstrate all day. Later that night, I started looking into Ceravision in more detail. Based in Milton Keynes, UK, its idea of an electrodeless lamp wasn’t new – Nikola Tesla had filed a patent for such a device back in 1894.
Tesla realized that this type of lamp would benefit from a long life and little discolouration as there are no electrodes to degrade or break. In fact, Tesla knew how to get around the bulbs’ technical drawbacks, which involved constraining the radio waves and minimizing their power. Eventually, in the late 1990s, as radio-frequency sources became available, a US company called Fusion Lighting got this technology to market.
Its lamps consisted of a golf ball-sized fused-quartz bulb containing several milligrams of sulphur powder and argon gas at the end of a thin glass spindle. Enclosed in a microwave-resonant wire-mesh cage, the bulb was bombarded by 2.45 GHz microwaves from a magnetron of the kind you get in a microwave oven. The bulb had a design lifetime of about 60,000 hours and emitted 100 lumens per watt.
Unfortunately, its efficiency was poor as 80–85% of the light generated was trapped inside the opaque ceramic waveguide. Worse still, various satellite companies petitioned the US authorities to force Fusion Lighting to cut its electromagnetic emissions by 99.9%. They feared that otherwise its bulbs would interfere with WiFi, cordless phones and satellite radio services in North America, which also operate at 2.4 GHz.
Tasty stuff LED lights these days are used in all corners of modern life – including to grow plants for food. (Courtesy: iStock/Supersmario)
In 2001 Fusion Lighting agreed to install a perforated metal shield around its lamps to reduce electromagnetic emissions by 95%. However, this decision only reduced the light output, making the bulbs even more inefficient. Ceravision’s solution was an optically clear quartz waveguide and integrated lamp that yielded 100–5000 watts of power without any damage to the lamps.
The company claimed its technology was ideal for growing plants – delivering blue and ultraviolet light missing from other sources – along with everything from sterilizing water to illuminating TV studios. And whereas most magnetrons break down after about 2000 hours, Ceravision’s magnetrons lasted for more than 40,000 hours. It had even signed an agreement with Toshiba to build high-efficiency magnetrons.
What could possibly go wrong?
The truth hurts
Back at the trade show, I arrived the following morning determined to get a resolution with the Ceravision technical team. Casually, I asked one of them, who had turned up early, to come over to our booth to see our system. It was working – until the rest of Ceravision arrived and switched their lights on. Once again, our award-winning system gave up the ghost.
Ceravision refused to accept there was a correlation between their lights going on and our system breaking down
Things then got a little ugly. Ceravision staff refused to accept there was a correlation between their lights going on and our system breaking down. Our product must be rubbish, they said. If so, I asked, how come my mobile phone had stopped working too? Silence. I went to talk to the show organizers – all they could do was tell Ceravision to point its lights down, rather than into the aisle.
This actually worked for us as it seemed the “blocking signal” was reasonably directional. It became obvious from Ceravision’s defensive response that this WiFi blocking problem must have come up before – its staff had some over-elaborate and clearly rehearsed responses. As for us, we simply couldn’t show our product in its best light to customers, who just felt it wasn’t ready for market.
Messaging matters For companies developing new products, it’s vital to listen and act on customer feedback as the market develops. (Courtesy: Shutterstock/magic pictures)
In the intervening years, I’ve talked to several ex-employees of Ceravision, who’ve all told me it trialled lots of different systems for different markets. But its products always proved to be too expensive and too complex – and eventually LEDs caught up with them. When I asked them about lights disrupting WiFi, most either didn’t seem aware of the issue or, if they did, knew it couldn’t be fixed without slashing the light output and efficiency of the bulbs.
That in turn forced Ceravision to look for ever more niche applications with ever-tinier markets. Many staff left, realizing the emperor had no clothes. Eventually, market forces took their toll on the company, which put its lighting business into receivership in 2020. Its parent company with all the patents and intellectual property suffered a similar fate in 2023.
I suspect (but don’t know for sure) that the window of opportunity closed due to high system costs, overly complex manufacturing and low volumes, coupled with LEDs becoming cheaper and more efficient. Looking back on 2014, the writing was really on the wall for the technology, even if no-one wanted to read the warning signs. A product that disrupts your mobile or WiFi signal was simply never going to succeed.
It was a classic case of a company having a small window of opportunity before better solutions came along and it missed the proverbial boat. Of course, hindsight is a wonderful thing. Clearly the staff could see what was wrong, but it took a long time for managers and investors to see or tackle the issues too. Perhaps when you are trying to deliver on promises you end up focusing on the wrong things.
The moral of the story is straightforward: constantly review your customers’ feedback and re-evaluate your products as the market develops. In business, it really is that simple if you want to succeed.
A ground-breaking method to create “audible enclaves” – localized zones where sound is perceptible while remaining completely unheard outside – has been unveiled by researchers at Pennsylvania State University and Lawrence Livermore National Laboratory. Their innovation could transform personal audio experiences in public spaces and improve secure communications.
“One of the biggest challenges in sound engineering is delivering audio to specific listeners without disturbing others,” explains Penn State’s Jiaxin Zhong. “Traditional speakers broadcast sound in all directions, and even directional sound technologies still generate audible sound along their entire path. We aimed to develop a method that allows sound to be generated only at a specific location, without any leakage along the way. This would enable applications such as private speech zones, immersive audio experiences, and spatially controlled sound environments.”
To achieve precise audio targeting, the researchers used a phenomenon known as difference-frequency wave generation. This process involves emitting two ultrasonic beams – sound waves with frequencies beyond the range of human hearing – that intersect at a chosen point. At their intersection, these beams interact to produce a lower-frequency sound wave within the audible range. In their experiments, the team used ultrasonic waves at frequencies of 40 kHz and 39.5 kHz. Where these waves converge, they generate an audible sound at 500 Hz – the difference between the two input frequencies – which falls within the typical human hearing range of approximately 20 Hz–20 kHz.
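For readers who want to see the mixing arithmetic in action, here is a minimal numerical sketch of difference-frequency generation: a quadratic nonlinearity acting on two tones at 40 kHz and 39.5 kHz produces a component at their 500 Hz difference. It is a toy illustration of the principle only, not the team’s acoustic model, and the quadratic mixing term is an assumption made for simplicity.

```python
import numpy as np

# Toy illustration of difference-frequency generation: a quadratic nonlinearity
# acting on two ultrasonic tones produces components at the sum and difference
# of their frequencies. The 40 kHz and 39.5 kHz values follow the article.
fs = 400_000                      # sample rate (Hz), well above both tones
t = np.arange(0, 0.1, 1 / fs)     # 0.1 s of signal -> 10 Hz frequency resolution
f1, f2 = 40_000.0, 39_500.0

p1 = np.sin(2 * np.pi * f1 * t)   # first ultrasonic beam
p2 = np.sin(2 * np.pi * f2 * t)   # second ultrasonic beam
mixed = (p1 + p2) ** 2            # simple quadratic (nonlinear) interaction

# Find the strongest audible-range component of the mixed signal
spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
band = (freqs > 0) & (freqs < 1000)              # ignore DC, look below 1 kHz
peak = freqs[band][np.argmax(spectrum[band])]
print(f"Audible peak at {peak:.0f} Hz (expected {f1 - f2:.0f} Hz)")
```

Varying either input frequency changes the difference tone in the same way, which is how the broader 125 Hz–4 kHz range described below can be reached.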
To prevent obstacles like human bodies from blocking the sound beams, the researchers used self-bending beams that follow curved paths instead of travelling in straight lines. They did this by passing ultrasound waves through specially designed metasurfaces, which redirected the waves along controlled trajectories, allowing them to meet at a specific point where the sound is generated.
Manipulative metasurfaces
“Metasurfaces are engineered materials that manipulate wave behaviour in ways that natural materials cannot,” said Zhong. “In our study, we use metasurfaces to precisely control the phase of ultrasonic waves, shaping them into self-bending beams. This is similar to how an optical lens bends light.”
The researchers began with computer simulations to model how ultrasonic waves would travel around obstacles, such as a human head, to determine the optimal design for the sound sources and metasurfaces. These simulations confirmed the feasibility of creating an audible enclave at the intersection of the curved beams. Subsequently, the team constructed a physical setup in a room-sized environment to validate their findings experimentally. The results closely matched their simulations, demonstrating the practical viability of their approach.
“Our method allows sound to be produced only in an intended area while remaining completely silent everywhere else,” says Zhong. “By using acoustic metasurfaces, we direct ultrasound along curved paths, making it possible to ‘place’ sound behind objects without a direct line of sight. A person standing inside the enclave can hear the sound, but someone just a few centimetres away will hear almost nothing.”
Initially, the team produced a steady 500 Hz sound within the enclave. By allowing the frequencies of the two ultrasonic sources to vary, they generated a broader range of audible sounds, covering frequencies from 125 Hz to 4 kHz. This expanded range includes much of the human auditory spectrum, increasing the potential applications of the technique.
The ability to generate sound in a confined space without any audible leakage opens up many possible applications. Museums and exhibitions could provide visitors with personalized audio experiences without the need for headphones, allowing individuals to hear different information depending on their location. In cars, drivers could receive navigation instructions without disturbing passengers, who could simultaneously listen to music or other content. Virtual and augmented reality applications could benefit from more immersive soundscapes that do not require bulky headsets.
The technology could also enhance secure communications, creating localized zones where sensitive conversations remain private even in shared spaces. In noisy environments, future adaptations of this method might allow for targeted noise cancellation, reducing unwanted sound in specific areas while preserving important auditory information elsewhere.
Future challenges
While their results are promising, the researchers acknowledge several challenges that must be addressed before the technology can be widely implemented. One concern is the intensity of the ultrasonic beams required to generate audible sound at a practical volume. Currently, achieving sufficient sound levels necessitates ultrasonic intensities that may have unknown effects on human health.
Another challenge is ensuring high-quality sound reproduction. The relationship between the ultrasonic beam parameters and the resulting audible sound is complex, making it difficult to produce clear audio across a wide range of frequencies and volumes.
“We are currently working on improving sound quality and efficiency,” Zhong said. “We are exploring deep learning and advanced nonlinear signal processing methods to optimize sound clarity. Another area of development is power efficiency — ensuring that the ultrasound-to-audio conversion is both effective and safe for practical use. In the long run, we hope to collaborate with industry partners to bring this technology to consumer electronics, automotive audio, and immersive media applications.”
Join us for an insightful webinar highlighting cutting-edge research in 2D transition-metal dichalcogenides (TMDs) and their applications in quantum optics.
This session will showcase multimodal imaging techniques, including reflection and time-resolved photoluminescence (TRPL), performed with our high-performance MicroTime 100 microscope. Complementary spectroscopic insights are provided through photoluminescence emission measurements using the FluoTime 300 spectrometer, highlighting the unique characteristics of these advanced materials and their potential in next-generation photonic devices.
Whether you’re a researcher, engineer, or enthusiast in nanophotonics and quantum materials, this webinar will offer valuable insights into the characterization and design of van der Waals materials for quantum optical applications. Don’t miss this opportunity to explore the forefront of 2D material spectroscopy and imaging with a leading expert in the field.
Shengxi Huang
Shengxi Huang is an associate professor in the Department of Electrical and Computer Engineering at Rice University. Huang earned her PhD in electrical engineering and computer science at MIT in 2017, under the supervision of Professors Mildred Dresselhaus and Jing Kong. Following that, she did postdoctoral research at Stanford University with Professors Tony Heinz and Jonathan Fan. She obtained her bachelor’s degree with the highest honors at Tsinghua University, China. Before joining Rice, she was an assistant professor in the Department of Electrical Engineering, Department of Biomedical Engineering, and Materials Research Institute at The Pennsylvania State University.
Huang’s research interests involve light-matter interactions of quantum materials and nanostructures, and the development of new quantum optical platforms and biochemical sensing technologies. In particular, her research focuses on (1) understanding optical and electronic properties of new materials such as 2D materials and Weyl semimetals, (2) developing new biochemical sensing techniques with applications in medical diagnosis, and (3) exploring new quantum optical effects and quantum sensing. She is leading the SCOPE (Sensing, Characterization, and OPtoElectronics) Laboratory.
Multiphoton microscopy is a nonlinear optical imaging technique that enables label-free, damage-free biological imaging. Performed using femtosecond laser pulses to generate two- and three-photon processes, multiphoton imaging techniques could prove invaluable for rapid cancer diagnosis or personalized medicine.
Imaging biological samples with traditional confocal microscopy requires sample slicing and staining to create contrast in the tissue. The nonlinear mechanisms generated by femtosecond laser pulses, however, eliminate the need for labelling or sample preparation, revealing molecular and structural details within tissue and cells while leaving the sample intact.
Looking to bring these benefits to cancer diagnostics, Netherlands-based start-up Flash Pathology is developing a compact, portable multiphoton microscope that creates pathology-quality images in real time, without the need for sample fixation or staining.
Fast and accurate cancer diagnosis with higher harmonic imaging
The inspiration for Flash Pathology came from Marloes Groot of Vrije Universiteit (VU) Amsterdam. While studying multiphoton microscopy of brain tumours, Groot recognized the need for a portable microscope for clinical settings. “This is a really powerful technique,” she says. “I was working on my large laboratory setup and I thought if we start a company, we could transform this into a mobile device.”
Groot teamed up with Frank van Mourik, now Flash Pathology’s CTO, to shrink the imaging device into a compact 60 x 80 x 115 cm system. “Frank made a device that can be transported, in a truck, wheeled through corridors, and still when you plug it in, it’s on and it produces images – for a nonlinear microscope, this is pretty special,” she explains.
Van Mourik has now built several multiphoton microscopes for Flash Pathology, with one of the main achievements the ability to measure samples with extremely low power levels. “When I started in my lab, I used 200 mW of power, but we’ve been able to reduce that to 5 mW,” Groot notes. “We have performed extensive studies to show that our imaging does not affect the tissue.”
Flash Pathology’s multiphoton microscope is designed to provide rapid on-site histologic feedback on excised tissue, such as diagnostic biopsies or tissue from surgical resections. One key application is lung cancer diagnosis, where there is a clinical need for rapid intraoperative feedback on biopsies. The standard histopathological analysis requires extensive sample preparation and can take several days to provide results.
Rapid tissue analysis Multiphoton microscopy images of (left to right) adipose tissue, cartilage and lymphoid tissue (each image is 400 x 400 µm). The adipose tissue image shows large adipocytes (fat cells); the cartilage image shows a hyaline (glass-like) background with chondrocytes (cells); the image of lymphoid tissue shows many small lymphocytes (a type of immune cell). (Courtesy: VU Amsterdam)
“With lung biopsies, it’s challenging to obtain good diagnostic material,” explains Sylvia Spies, a PhD student at VU Amsterdam. “The lesions can be quite small and it can be difficult to get to the right position and take a good sample, so they use several techniques (fluoroscopy/CT or ultrasound) to find the right position and take multiple biopsies from the lesion. Despite these techniques, the diagnostic yield is still around 70%, so 30% of cases still don’t get a diagnosis and patients might have to come back for a repeat biopsy procedure.”
Multiphoton imaging, on the other hand, can rapidly visualize unprocessed tissue samples, enabling diagnosis in situ. A recent study using Flash Pathology’s microscope to analyse lung biopsies demonstrated that it could image a biopsy sample and provide feedback just 6 min after excision, with an accuracy of 87% – enabling immediate decisions as to whether a further biopsy is required.
“Also, many clinical fields are now focusing on a one-stop-shop with diagnosis and treatment in one procedure,” adds Spies. “Here, you really need a technique that can rapidly determine whether a lesion is benign or malignant.”
The microscope’s impressive diagnostic performance is partly due to its ability to generate four nonlinear signals simultaneously using a single ultrafast femtosecond laser: second- and third-harmonic generation plus two- and three-photon fluorescence. The system then uses filters to spectrally separate these signals, which provide complementary diagnostic information. Second-harmonic generation, for example, is sensitive to non-centrosymmetric structures such as collagen, while third-harmonic generation only occurs at interfaces with differing refractive indices, such as cell membranes or boundaries between the nucleus and cytoplasm.
“What I like about this technique is that you can see similar features as in conventional histology,” says Spies. “You can see structures such as collagen fibres, elastin fibres and cellular patterns, but also cellular details such as the cytoplasm, the nucleus (and its size), nucleoli and cilia. All these tiny details are the same features that the pathologists look at in conventional histology.”
Applying femtosecond lasers for 3D-in-depth visualization
The femtosecond laser plays a key role in enabling multiphoton microscopy. To excite two- and three-photon processes, you need to have two or more photons in the same place at exactly the same time. And the likelihood of this happening increases rapidly when using ultrashort laser pulses.
“The shorter the pulses are in the time domain, the higher the probability that you have an overlap of two pulses in a focal point,” explains Oliver Prochnow, CEO of VALO Innovations, a part of HÜBNER Photonics. “Therefore you need to have a very high-intensity, extremely short laser pulse. The shorter the better.”
The VALO Femtosecond Series of ultrafast fibre lasers can deliver pulses as short as 30 fs, which is achieved by exploiting nonlinear mechanisms to broaden the spectral bandwidth to more than 100 nm. As the optical spectrum and pulse duration are inherently related by Fourier transformation, a broadband spectrum will result in a very short pulse. And the shorter the pulse, at the same average power, the higher its peak power – and the higher the probability of producing multiphoton processes.
Laser parameters Left: typical temporal pulse profile highlighting the sub 50 fs pulse duration with very low pulse pedestal; the inset shows the typical beam profile. Right: typical optical spectrum of HÜBNER Photonics’ VALO Femtosecond Series lasers. (Courtesy: HÜBNER Photonics)
“If you decrease the pulse duration by a factor of five, this gives roughly a five times higher signal from two-photon absorption,” says Prochnow. “In contrast, a three-photon process scales with the third power of the intensity and with the inverse of the pulse duration squared. So you have a roughly 25 times higher signal, if you decrease the pulse duration by a factor of five at the same average power.” Crucially, the shorter pulses deliver this high peak power while maintaining a low average power, reducing sample heating and minimizing photobleaching.
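As a quick illustration of that scaling, the short sketch below compares a 150 fs pulse with a 30 fs pulse at the same average power, assuming the rules of thumb quoted above: two-photon signal scaling as the inverse of the pulse duration and three-photon signal as its inverse square. The specific 150 fs reference value is an assumption chosen to match the factor-of-five example in the quote.

```python
# Rule-of-thumb scaling of multiphoton signal with pulse duration at fixed
# average power: two-photon signal ~ 1/tau, three-photon signal ~ 1/tau^2.
# The 150 fs reference duration is an assumed value chosen to give the
# factor-of-five shortening discussed in the quote above.

tau_reference = 150e-15   # reference pulse duration (s)
tau_short = 30e-15        # shorter pulse from a broadband fibre laser (s)

shortening = tau_reference / tau_short      # pulse shortened by this factor
two_photon_gain = shortening                # ~5x stronger two-photon signal
three_photon_gain = shortening ** 2         # ~25x stronger three-photon signal

print(f"Pulse shortened by a factor of {shortening:.0f}")
print(f"Two-photon signal gain:   ~{two_photon_gain:.0f}x")
print(f"Three-photon signal gain: ~{three_photon_gain:.0f}x")
```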
The broadband optical spectrum is particularly important for enabling practical three-photon microscopy. The challenge here is that traditional ytterbium-based lasers with a wavelength of around 1030 nm produce a three-photon signal in the UV range, which is too short to be transmitted through standard optics.
Broadband spectrum Fundamental and third-harmonic generation (THG) spectra of a 30 fs broadband fibre laser (red) compared with standard 150 fs lasers. The solid black line shows the typical transmission characteristics of a standard microscopy objective. Only a THG spectrum generated from wavelengths of above 1080 nm will be transmitted. (Courtesy: HÜBNER Photonics)
The VALO Femtosecond Series overcomes this problem by having a broadband spectrum that extends up to 1140 nm. Frequency tripling then generates a signal with a long enough wavelength to pass through a standard microscope objective, enabling the VALO lasers to excite both two-photon and three-photon processes. “Our lasers provide the opportunity to perform simultaneous three-photon microscopy and two-photon microscopy using a simple fibre laser solution,” says Prochnow.
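A rough back-of-the-envelope check, assuming the third harmonic is emitted at one third of the fundamental wavelength, shows why the extended spectrum matters. The ~360 nm cut-off used here is only indicative, inferred from the statement above that fundamentals longer than about 1080 nm are needed for the third-harmonic light to pass a standard objective.

```python
# Third-harmonic generation emits at one third of the fundamental wavelength.
# The ~360 nm transmission cut-off is an indicative figure inferred from the
# statement that only fundamentals above ~1080 nm give transmissible THG.
cutoff_nm = 1080 / 3   # ~360 nm

for fundamental_nm in (1030, 1080, 1140):
    thg_nm = fundamental_nm / 3
    status = "transmitted" if thg_nm >= cutoff_nm else "blocked (too deep in the UV)"
    print(f"{fundamental_nm} nm fundamental -> THG at {thg_nm:.0f} nm: {status}")
```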
The lasers include an integrated dispersion pre-compensation unit to compensate for the dispersion of a microscope objective and provide the shortest pulses at the sample. Additionally, the lasers do not require water cooling, making them easy to use or integrate.
Towards future clinical applications
Flash Pathology is currently testing its microscope in several hospitals in the Netherlands, including Amsterdam UMC, as well as the Princess Maxima Center for paediatric oncology. “Sylvia performed a study in their pathology department and for a year measured all kinds of tissue samples that came through,” says Groot. “We also recently installed a device at the Queen Elizabeth Hospital in Glasgow, for a study on mesothelioma.”
With prototypes now available for research use, the company also plans to develop a fully certified multiphoton microscopy system. “Our ultimate goal is to sell a certified medical diagnostic device that will take a biopsy and produce images, but also contain artificial intelligence to help to interpret the images and give diagnostic conclusions about the nature of the illness,” says van Mourik.
Once fully realised in the clinic, the multiphoton microscopy system will provide an invaluable tool for rapid, in situ tissue analysis during bronchoscopy procedures or other operations. The unique combination of four nonlinear imaging modalities, made possible with a single compact femtosecond laser, delivers complementary diagnostic information. “This will be the big gain, to be able to provide a diagnosis bedside during a procedure,” van Mourik concludes.
This episode of the Physics World Weekly podcast features William Phillips, who shared the 1997 Nobel Prize for Physics for his work on cooling and trapping atoms using laser light.
In a wide-ranging conversation with Physics World’s Margaret Harris, Phillips talks about his long-time fascination with quantum physics – which began with an undergraduate project on electron spin resonance. Phillips chats about quirky quantum phenomena such as entanglement and superposition and explains how they are exploited in atomic clocks and quantum computing. He also looks to the future of quantum technologies and stresses the importance of curiosity-led research.
Phillips has spent much of his career at the US National Institute of Standards and Technology (NIST) in Maryland, and he is also a professor of physics at the University of Maryland.
This podcast is supported by Atlas Technologies, specialists in custom aluminium and titanium vacuum chambers as well as bonded bimetal flanges and fittings used everywhere from physics labs to semiconductor fabs.
Scientists in the US have developed a new type of photovoltaic battery that runs on the energy given off by nuclear waste. The battery uses a scintillator crystal to transform the intense gamma rays from radioisotopes into electricity and can produce more than a microwatt of power. According to its developers at Ohio State University and the University of Toledo, it could be used to power microelectronic devices such as microchips.
The idea of a nuclear waste battery is not new. Indeed, Raymond Cao, the Ohio State nuclear engineer who led the new research effort, points out that the first experiments in this field date back to the early 1950s. These studies, he explains, used a 50 millicurie 90Sr–90Y source to produce electricity via the electron-voltaic effect in p-n junction devices.
However, the maximum power output of these devices was just 0.8 μW, and their power conversion efficiency (PCE) was an abysmal 0.4%. Since then, the PCE of nuclear voltaic batteries has remained low, typically in the 1–3% range, and even the most promising devices have produced, at best, a few hundred nanowatts of power.
Exploiting the nuclear photovoltaic effect
Cao is confident that his team’s work will change this. “Our yet-to-be-optimized battery has already produced 1.5 μW,” he says, “and there is much room for improvement.”
To achieve this benchmark, Cao and colleagues focused on a different physical process called the nuclear photovoltaic effect. This effect captures the energy from highly-penetrating gamma rays indirectly, by coupling a photovoltaic solar cell to a scintillator crystal that emits visible light when it absorbs radiation. This radiation can come from several possible sources, including nuclear power plants, storage facilities for spent nuclear fuel, space- and submarine-based nuclear reactors or, really, anyplace that happens to have large amounts of gamma ray-producing radioisotopes on hand.
The scintillator crystal Cao and colleagues used is gadolinium aluminium garnet (GAGG), and they attached it to a solar cell made from polycrystalline CdTe. The resulting device measures around 2 x 2 x 1 cm, and they tested it using intense gamma rays emitted by two different radioactive sources, 137Cs and 60Co, which delivered dose rates of 1.5 krad/h and 10 krad/h, respectively. 137Cs is the most common fission product found in spent nuclear fuel, while 60Co is an activation product.
Enough power for a microsensor
The Ohio-Toledo team found that the maximum power output of their battery was around 288 nW with the 137Cs source. Using the 60Co irradiator boosted this to 1.5 μW. “The greater the radiation intensity, the more light is produced, resulting in increased electricity generation,” Cao explains.
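As a rough consistency check on those figures, the reported output power scales approximately in proportion to the dose rate, in line with Cao’s comment. The sketch below simply compares the two quoted operating points; it is an illustration of the published numbers, not a model of the device.

```python
# The two operating points reported for the prototype battery: dose rate from
# the gamma-ray source (krad/h) and the measured electrical output power (W).
points = {
    "137Cs": {"dose_krad_per_h": 1.5, "power_W": 288e-9},
    "60Co":  {"dose_krad_per_h": 10.0, "power_W": 1.5e-6},
}

dose_ratio = points["60Co"]["dose_krad_per_h"] / points["137Cs"]["dose_krad_per_h"]
power_ratio = points["60Co"]["power_W"] / points["137Cs"]["power_W"]

print(f"Dose-rate ratio between sources: {dose_ratio:.1f}x")   # ~6.7x
print(f"Output-power ratio:              {power_ratio:.1f}x")  # ~5.2x, roughly proportional
```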
The higher figure is already enough to power a microsensor, he says, and he and his colleagues aim to scale the system up to milliwatts in future efforts. However, they acknowledge that doing so presents several challenges. Scaling up the technology will be expensive, and gamma radiation gradually damages both the scintillator and the solar cell. To overcome the latter problem, Cao says they will need to replace the materials in their battery with new ones. “We are interested in finding alternative scintillator and solar cell materials that are more radiation-hard,” he tells Physics World.
The researchers are optimistic, though, arguing that optimized nuclear photovoltaic batteries could be a viable option for harvesting ambient radiation that would otherwise be wasted. They report their work in Optical Materials X.
Researchers in the Netherlands, Austria, and France have created what they describe as the first operating system for networking quantum computers. Called QNodeOS, the system was developed by a team led by Stephanie Wehner at Delft University of Technology. The system has been tested using several different types of quantum processor and it could help boost the accessibility of quantum computing for people without an expert knowledge of the field.
In the 1960s, the development of early operating systems such as OS/360 and UNIX represented a major leap forward in computing. By providing a level of abstraction in its user interface, an operating system enables users to program and run applications, without having to worry about how to reconfigure the transistors in the computer processors. This advance laid the groundwork for many of the digital technologies that have revolutionized our lives.
“If you needed to directly program the chip installed in your computer in order to use it, modern information technologies would not exist,” Wehner explains. “As such, the ability to program and run applications without needing to know what the chip even is has been key in making networks like the Internet actually useful.”
Quantum and classical
The users of nascent quantum computers would also benefit from an operating system that allows quantum (and classical) computers to be connected in networks. Not least because most people are not familiar with the intricacies of quantum information processing.
However, quantum computers are fundamentally different from their classical counterparts, and this means a host of new challenges faces those developing network operating systems.
“These include the need to execute hybrid classical–quantum programs, merging high-level classical processing (such as sending messages over a network) with quantum operations (such as executing gates or generating entanglement),” Wehner explains.
Within these hybrid programs, quantum computing resources would only be used when specifically required. Otherwise, routine computations would be offloaded to classical systems, making it significantly easier for developers to program and run their applications.
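To make that hybrid structure more concrete, here is a deliberately simplified sketch of the pattern described above: classical control flow runs locally and the quantum device is invoked only for the steps that need it. All of the function names here (run_classical_preprocessing, request_entanglement, measure_qubit) are hypothetical placeholders for illustration; they are not part of the QNodeOS interface.

```python
# Illustrative (hypothetical) structure of a hybrid classical-quantum program:
# routine work stays on the classical host, and the quantum processor is only
# invoked for operations that genuinely need it. None of these names come from
# QNodeOS; they are placeholders used to sketch the concept.

def run_classical_preprocessing(data):
    """Ordinary classical computation, e.g. preparing parameters or messages."""
    return sorted(data)

def request_entanglement(remote_node):
    """Stand-in for asking the quantum network stack to entangle with a peer."""
    print(f"Requesting entanglement with {remote_node} ...")
    return "qubit_handle"          # placeholder handle to a local entangled qubit

def measure_qubit(handle, basis):
    """Stand-in for a measurement instruction sent to the quantum device."""
    print(f"Measuring {handle} in the {basis} basis")
    return 0                       # placeholder measurement outcome

def hybrid_application(data, remote_node):
    params = run_classical_preprocessing(data)   # classical block
    qubit = request_entanglement(remote_node)    # quantum block, only when needed
    outcome = measure_qubit(qubit, basis="Z")    # quantum block
    return params, outcome                       # classical post-processing follows

print(hybrid_application([3, 1, 2], remote_node="node-B"))
```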
No standardized architecture
In addition, Wehner’s team considered that, unlike the transistor circuits used in classical systems, quantum operations currently lack a standardized architecture – and can be carried out using many different types of qubits.
Wehner’s team addressed these design challenges by creating QNodeOS, a hybrid network operating system. It combines classical and quantum “blocks” that provide users with a platform for performing quantum operations.
“We implemented this architecture in a software system, and demonstrated that it can work with different types of quantum hardware,” Wehner explains. The qubit-types used by the team included the electronic spin states of nitrogen–vacancy defects in diamond and the energy levels of individual trapped ions.
Multi-tasking operation
“We also showed how QNodeOS can perform advanced functions such as multi-tasking. This involved the concurrent execution of several programs at once, including compilers and scheduling algorithms.”
QNodeOS is still a long way from having the same impact as UNIX and other early operating systems. However, Wehner’s team is confident that QNodeOS will accelerate the development of future quantum networks.
“It will allow for easier software development, including the ability to develop new applications for a quantum Internet,” she says. “This could open the door to a new area of quantum computer science research.”
The nervous system is often considered the body’s wiring, sending electrical signals to communicate needs and hazards between different parts of the body. However, researchers at the University of Massachusetts at Amherst have now also measured bioelectronic signals propagating from cultured epithelial cells, as they respond to a critical injury.
“Cells are pretty amazing in terms of how they are making collective decisions, because it seems like there is no centre, like a brain,” says researcher Sunmin Yu, who likens epithelial cells to ants in the way that they gather information and solve problems. Alongside lab leader Steve Granick, Yu reports this latest finding in Proceedings of the National Academy of Sciences, suggesting a means for the communication between cells that enables them to coordinate with each other.
Neurons function by bioelectric signals, and punctuated rhythmic bioelectrical signals allow heart muscle cells to keep the heart pumping blood throughout our body. For other cell types, however, the most common hypothesis for intercellular signalling is the exchange of chemical cues. Yu had noted from previous work by other groups that the process of “extruding” wounded epithelial cells to get rid of them involved increased expression of the relevant proteins at some distance from the wound itself.
“Our thought process was to inquire about the mechanism by which information could be transmitted over the necessary long distance,” says Yu. She realised that common molecular signalling mechanisms, such as extracellular signal-regulated kinase 1/2 (ERK), which has a speed of around 1 mm/s, would be rather slow as a potential conduit.
Epithelial signals measure up
Yu and Granick grew a layer of epithelial cells on a microelectrode array (MEA). While other approaches to measuring electrical activity in cultured cells exist, an MEA has the advantage of combining electrical sensitivity with a long range, enabling the researchers to collect both temporal and spatial information on electrical activity. They then “wounded” the cells by exposing them to an intense focused laser beam.
Following the wound, the researchers observed electrical potential changes with comparable amplitudes and similar shapes to those observed in neurons, but over much longer periods of time. “The signal propagation speed we measured is about 1000 times slower than neurons and 10 times faster than ERK,” says Yu, expressing great interest in whether the “high-pitch speaking” neurons and heart tissue cells communicate with these “low-pitch speaking” epithelial cells, and if so, how.
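Taking the figures quoted here at face value – ERK waves at about 1 mm/s, the epithelial signal roughly ten times faster, and neurons about a thousand times faster again – a few lines of arithmetic give a feel for the timescales involved. The numbers below are order-of-magnitude placeholders drawn from the quotes above, not measured values from the paper.

```python
# Order-of-magnitude comparison of the propagation speeds quoted in the text
erk_speed = 1e-3                        # ERK signalling wave, ~1 mm/s, in m/s
epithelial_speed = 10 * erk_speed       # "10 times faster than ERK" -> ~10 mm/s
neuron_speed = 1000 * epithelial_speed  # "1000 times slower than neurons" -> neurons ~10 m/s

distance = 600e-6  # metres; the ~600 µm range over which the signals were seen to propagate
for name, v in [("ERK", erk_speed), ("epithelial", epithelial_speed), ("neuron", neuron_speed)]:
    print(f"{name:>10}: {v:8.3f} m/s -> crosses 600 µm in {distance / v:.1e} s")
```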
The researchers noted an apparent threshold in the amplitude of the generated signal required for it to propagate. For signals that met this threshold, propagation spanned regions up to 600 µm for as long as measurements could be recorded, which was 5 h. Given the mechanical forces generated during “cell extrusion”, the researchers hypothesized that mechanosensitive proteins likely play a role in generating the signals. Sure enough, inhibiting the mechanosensitive ion channels shut down the generation of electrical signals.
Yu and Granick highlight previous suggestions that electrical potentials in epithelial cells may be important for regulating the coordinated changes that take place during embryogenesis and regeneration, as well as being implicated in cancer. However, this is the first observation of such electrical potentials being generated and propagating across epithelial tissue.
“Yu and Granick have discovered a remarkable new form of electrical signalling emitted by wounded epithelial cells – cells traditionally viewed as electrically passive,” says Seth Fraden, whose lab at Brandeis University in Massachusetts in the US investigates a range of soft matter topics but was not involved in this research.
Fraden adds that it raises an “intriguing” question: “What is the signal’s target? In light of recent findings by Nathan Belliveau and colleagues, identifying the protein Galvanin as a specific electric-field sensor in immune cells, a compelling hypothesis emerges: epithelial cells send these electric signals as distress calls and immune cells – nature’s healers – receive them to rapidly locate and respond to tissue injuries. Such insights may have profound implications for developing novel regenerative therapies and bioelectric devices aimed at accelerating wound healing.”
Adam Ezra Cohen, whose team at Harvard University in the US focuses on innovative technology for probing molecules and cells, and who was not directly involved in this research, also finds the research “intriguing” but raises numerous questions: “What are the underlying membrane voltage dynamics? What are the molecular mechanisms that drive these spikes? Do similar things happen in intact tissues or live animals?” he asks, adding that techniques such as patch clamp electrophysiology and voltage imaging could address these questions.
A new technique could reduce the risk of blood clots associated with medical implants, making them safer for patients. The technique, which was developed by researchers at the University of Sydney, Australia, involves coating the implants with highly hydrophilic molecules known as zwitterions, thereby inhibiting the build-up of clot-triggering proteins.
Proteins in blood can stick to the surfaces of medical implants such as heart valves and vascular stents. When this happens, it produces a cascade effect in which multiple mechanisms lead to the formation of extensive clots and fibrous networks. These clots and networks can impair the function of implanted medical devices so much that invasive surgery may be required to remove or replace the implant.
To prevent this from happening, the surfaces of implants are often treated with polymeric coatings that resist biofouling. Hydrophilic polymeric coatings such as polyethylene glycol are especially useful, as their water-loving nature allows a thin layer of water to form between them and the surface of the implants, held in place via hydrogen and/or electrostatic bonds. This water layer forms a barrier that prevents proteins from sticking, or adsorbing, to the implant.
An extra layer of zwitterions
Recently, researchers discovered that polymers coated with an extra layer of small molecules called zwitterions provided even more protection against protein adsorption. “Zwitter” means “hybrid” in German; hence, zwitterions are molecules that carry both positive and negative charges, making them electrically neutral overall. These molecules are also very hydrophilic and easily form tight bonds with water molecules. The resulting layer of water has a structure that is similar to that of bulk water, which is energetically stable.
A further attraction of zwitterionic coatings for medical implants is that zwitterions are naturally present in our bodies. In fact, they make up the hydrophilic phospholipid heads of mammalian cell membranes, which play a vital role in regulating interactions between biological cells and the extracellular environment.
Plasma functionalization
In the new work, researchers led by Sina Naficy grafted nanometre-thick zwitterionic coatings onto the surfaces of implant materials using a technique called plasma functionalization. They found that the resulting structures reduce the amount of fibrinogen proteins that adsorb onto the implants by roughly nine-fold and decrease blood clot formation (thrombosis) by almost 75%.
Naficy and colleagues achieved their results by optimizing the density, coverage and thickness of the coating. This was critical for realizing the full potential of these materials, they say, because a coating that is not fully optimized would not reduce clotting.
Naficy tells Physics World that the team’s main goal is to enhance the surface properties of medical devices. “These devices when implanted are in contact with blood and can readily cause thrombosis or infection if the surface initiates certain biological cascade reactions,” he explains. “Most such reactions begin when specific proteins adsorb on the surface and activate the next stage of cascade. Optimizing surface properties with the aid of zwitterions can control / inhibit protein adsorption, hence reducing the severity of adverse body reactions.”
The researchers say they will now be evaluating the long-term stability of the zwitterion-polymer coatings and trying to scale up their grafting process. They report their work in Communications Materials and Cell Biomaterials.
The CERN particle-physics lab near Geneva has released plans for the 15bn SwFr (£13bn) Future Circular Collider (FCC) – a huge 91 km circumference machine. The three-volume feasibility study, released on 31 March, calls for the giant accelerator to collide electrons with positrons to study the Higgs boson in unprecedented detail. If built, the FCC would replace the 27 km Large Hadron Collider (LHC), which will come to an end in the early 2040s.
Work on the FCC feasibility study began in 2020 and the report examines the physics objectives, geology, civil engineering, technical infrastructure and territorial and environmental impact. It also looks at the R&D needed for the accelerators and detectors as well as the socioeconomic benefits and cost.
The study, involving some 150 institutes in over 30 countries, took into account some 100 different scenarios for the collider before landing on a ring circumference of 90.7 km that would be built underground at a depth of about 200 m, on average.
The FCC would also contain eight surface sites to access the tunnel – seven in France and one in Switzerland – and four main detectors. “The design is such that there is minimal impact on the surface, but with the best possible physics output,” says FCC study leader Michael Benedikt.
The funding model for the FCC is still a work in progress, but it is estimated that at least two-thirds of the cost of building the first-stage electron–positron machine, dubbed the FCC-ee, will come from CERN’s 24 member states.
Four committees will now review the feasibility study, beginning with CERN’s scientific committee in July. It will then go to a cost-review panel before being reviewed by the CERN council’s scientific and finance committees. In November, the CERN council will then examine the proposal with a decision to go ahead taken in 2028.
If given the green light, construction of the FCC-ee would begin in 2030 and it would start operations in 2047, a few years after the High Luminosity LHC (HL-LHC) closes down, and run for about 15 years. Its main aim would be to study the Higgs boson with much better precision than the LHC.
To the energy frontier: if built, the FCC-hh would begin operation in 2073 and run to the end of the century (courtesy: PIXELRISE)
The FCC feasibility study then calls for a hadron machine, dubbed FCC-hh, to replace the FCC-ee in the same 91 km tunnel. It would be a “discovery machine”, smashing together protons at high energy with the aim of creating new particles. If built, the FCC-hh would begin operation in 2073 and run to the end of the century.
The original design energy for the FCC-hh was to reach 100 TeV but that has now been reduced to 85 TeV. That is mostly due to the uncertainty in magnet technology. The HL-LHC will use 12 T superconducting quadrupole magnets made from niobium-tin (Nb3Sn) to squeeze the beams to boost the luminosity.
CERN engineers think it is possible to increase that to 14 T, and if this were used for the FCC it would result in a collision centre-of-mass energy of about 85 TeV. “It’s a prudent approach at this stage,” noted Fabiola Gianotti, the current CERN director-general, adding that the FCC would be “the most extraordinary instrument ever built.”
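As a rough cross-check of those numbers, the collision energy of a circular hadron collider follows from the textbook bending relation p [GeV/c] ≈ 0.3 × B [T] × ρ [m]. The sketch below uses the 90.7 km circumference and 14 T field quoted in the study, together with an assumed dipole “fill factor” (the fraction of the ring occupied by bending magnets) of about 70% – that fraction is an illustrative assumption, not a figure from the feasibility study.

```python
import math

# Back-of-the-envelope centre-of-mass energy for a circular proton-proton collider
circumference_km = 90.7   # FCC ring circumference (feasibility study)
dipole_field_T = 14.0     # Nb3Sn dipole field discussed in the text
fill_factor = 0.70        # assumed fraction of the ring filled with dipoles (illustrative)

bending_radius_m = fill_factor * circumference_km * 1e3 / (2 * math.pi)
beam_energy_TeV = 0.3 * dipole_field_T * bending_radius_m / 1e3   # p = 0.3*B*rho, GeV -> TeV
print(f"centre-of-mass energy = {2 * beam_energy_TeV:.0f} TeV")   # roughly 85 TeV
```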
The original design called for high-temperature superconducting magnets, such as so-called ReBCO tapes, and CERN is looking into such technology. If it came to fruition on the necessary timescales and were implemented in the FCC-hh, it could push the energy to 120 TeV.
China plans
One potential spanner in the works is China’s plans for a very similar machine called the Circular Electron–Positron Collider (CEPC). A decision on the CEPC could come this year, with construction beginning in 2027.
Yet officials at CERN are not concerned. They point to the fact that many different colliders have been built by CERN, which has the expertise as well as infrastructure to build such a huge collider. “Even if China goes ahead, I hope the decision is to compete,” says CERN council president Costas Fountas. “Just like Europe did with the LHC when the US started to build the [cancelled] Superconducting Super Collider.”
If the CERN council decides, however, not to go ahead with the FCC, then Gianotti says that other designs to replace the LHC are still on the table such as a linear machine or a demonstrator muon collider.
I was unprepared for the Roger Penrose that I met in The Impossible Man. As a PhD student training in relativity and quantum gravity at the Perimeter Institute for Theoretical Physics in Waterloo, Canada, I once got to sit next to Penrose. Unsure of what to say to the man whose ideas about black-hole singularities are arguably why I took an interest in becoming a physicist, I asked him how he had come up with the idea for the space-time diagrams now known as “Penrose diagrams”.
Penrose explained to me that he simply couldn’t make sense of space-time without them, that was all. He spoke in kind terms, something I wasn’t quite used to. I was more familiar with people reacting as if my questions were stupid or impertinent. What I felt from Penrose – who eventually shared the 2020 Nobel Prize for Physics with Reinhard Genzel and Andrea Ghez for his work on singularities – was humility and generosity.
The Penrose of The Impossible Man isn’t so much humble as oblivious and, in my reading, quite spoiled
In hindsight, I wonder if I overread him, or if, having been around too many brusque theoretical physicists, my bar as a PhD student was simply too low. The Penrose of The Impossible Man isn’t so much humble as oblivious and, in my reading, quite spoiled. As a teenager he was especially good at taking care of his sister and her friends, generous with his time and thoughtfulness. But it ends there.
As we learn in this biography – written by the Canadian journalist Patchen Barss – one of those young friends, Judith Daniels, later became the object of Penrose’s affection when he was a distinguished faculty member at the University of Oxford in his 40s. A significant fraction of the book is centred on Penrose’s relationship with Daniels, whom he became reacquainted with in the early 1970s when she was an undergraduate studying mathematics at John Cass College in London.
At the time Penrose was unhappily married to Joan, an American he’d met in 1958 when he was a postdoc at the University of Cambridge. In Barss’s telling, Penrose essentially forces Daniels into the position of muse. He writes her copious letters explaining his intellectual ideas and communicating his inability to get his work done without replies from her, which he expects to contain critical analyses of his scientific proposals.
The letters are numerous and obsessive, even when her replies are thin and distant. Eventually, Penrose also begins to request something more – affection and even love. He wants a relationship with her. Barss never exactly states that this was against Daniels’s will, but he offers readers sufficient details of her letters to Penrose that it’s hard to draw another conclusion.
Unanswered questions
Barss was able to read her letters because they had been returned to Penrose after Daniels’s death in 2005. Penrose, however, never re-examined any of them until Barss interviewed him for this biography. This raises a lot of questions that remain unanswered by the end of the book. In particular, why did Daniels continue to participate in a correspondence that was eventually thousands of pages long on Penrose’s side?
Judith Daniels was a significant figure in Penrose’s life, yet her death and memory seem to have been unremarkable to him for much of his later life
My theory is that Daniels felt she owed it to this great man of science. She also confesses at one point that she had a childhood crush on him. Her affection was real, even if not romantic; it is as if she was trapped in the dynamic. Penrose’s lack of curiosity about the letters after her death is also strange to me. Daniels was a significant figure in his life, yet her death and memory seem to have been unremarkable to him for much of his later life.
By the mid-1970s, when Daniels was finally able to separate herself from what was – on Penrose’s side – an extramarital emotional affair, Penrose went seeking new muses. They were always female students of mathematics and physics.
Just when it seems like we’ve met the worst of Penrose’s treatment of women, we’re told about his “physical aggression” toward his eventual ex-wife Joan and his partial abandonment of the three sons they had together. This is glossed over very quickly. And it turns out there is even more.
Penrose, like many of his contemporaries, primarily trained male students. Eventually he did take on one woman, Vanessa Thomas, who was a PhD student in his group at Oxford’s Mathematical Institute, where he’d moved in 1972.
Thomas never finished her PhD; Penrose pursued her romantically and that was the end of her doctorate. As scandalous as this is, I didn’t find the fact of the romance especially shocking because it is common enough in physics, even if it is increasingly frowned upon and, in my opinion, generally inappropriate. For better or worse, I can think of other examples of men in physics who fell in love with women advisees.
But in all the cases I know of, the woman has gone on to complete her degree either under his or someone else’s supervision. In these same cases, the age difference was usually about a decade. What happened with Thomas – who married Penrose in 1988 – seems like the worst-case scenario: a 40-year age difference and a budding star of mathematics, reduced to serving her husband’s career. Professional boundaries were not just transgressed, but obliterated.
Barss chooses not to offer much in the way of judgement about the impact that Penrose had on the women in science whom he made into muses and objects of romantic affection. The only exception is Ivette Fuentes, who was already a star theoretical physicist in her own right when Penrose first met her in 2012. Interview snippets with Fuentes reveal that the one time Penrose spoke of her as a muse, she rejected him and their friendship until he apologized.
No woman, it seems, had ever been able to hold his feet to the fire before. Fuentes does, however, describe how Penrose gave her an intellectual companion, something she’d previously been denied by the way the physics community is structured around insider “families” and pedigrees. It is interesting to read this in the context of Penrose’s own upbringing as landed gentry.
Gilded childhood
An intellectually precocious child growing up in 1930s England, Penrose is handed every resource for his intellectual potential to blossom. When he notices a specific pattern linking addition and multiplication, an older sibling is on hand to show him there’s a general rule from number theory that explains the pattern. The family at this point, we’re told, has a cook and a maid who doubles as a nanny. Even in a community of people from well-resourced backgrounds, Penrose stands out as an especially privileged example.
When the Second World War starts, his family readily secures safe passage to a comfortable home in Canada – a privilege related to their status as welcomed members of Britain’s upper class and one that was not afforded to many continental European Jewish families at the time (Penrose’s mother and therefore Penrose was Jewish by descent). Indeed, Canada admitted the fewest Jewish refugees of any Allied nation and famously denied entry to the St Louis, which was sent back to Europe, where a third of its 937 Jewish passengers were murdered in the Holocaust.
In Ontario, the Penrose children have a relatively idyllic experience. Throughout the rest of his childhood and his adult life, the path has been continuously smoothed for Penrose, either by his parents (who bought him multiple homes) or by mentors and colleagues who believed in his genius. One is left wondering how many other people might have had such a distinguished career if, from birth, they had been handed everything on a silver platter and never required to take responsibility for anything.
To tell these and later stories, Barss relies heavily on interviews with Penrose. Access to their subject is tricky for any biographer. While it creates a real opportunity for the author, there is also the challenge of having a relationship with someone whose memories you need to question. Barss doesn’t really interrogate Penrose’s memories but seems to take them as gospel.
During the first half of the book, I wondered repeatedly if The Impossible Man is effectively a memoir told in the third person. Eventually, Barss does allow other voices to tell the story. Ultimately, though, this is firmly a book told from Penrose’s point of view. Even the inclusion of Daniels’s story was at least in part at Penrose’s behest.
I found myself wanting to hear more from the women in Penrose’s life. Penrose often saw himself following a current determined by these women. He came, for example, to believe his first wife had essentially trapped him in their relationship by falling for him.
Penrose never takes responsibility for any of his own actions towards the women in his life. So I wondered: how did they see it? What were their lives like? His ex-wife Joan (who died in 2019) and estranged wife Vanessa, who later became a mathematics teacher, both gave interviews for the book. But we learn little about their perspective on the man whose proclivities and career dominated their own lives.
One day there will be another biography of Penrose that will necessarily have distance from its subject because he will no longer be with us. The Impossible Man will be an important document for any future biographer, containing as it does such a close rendering of Penrose’s perspective on his own life.
The cost of genius
When it comes to describing Penrose’s contributions to mathematics and physics, the science writing, especially in the early pages, sings. Barss has a knack for writing up difficult ideas – whether it’s Penrose’s Nobel-prize-winning work on singularities or his attempt at quantum gravity, twistor theory. Overall, the luxurious prose makes the book highly readable.
Sometimes Barss indulges cosmic flourishes in a way that appears to reinforce Penrose’s perspective that the universe is happening to him rather than one over which he has any influence. In the end, I don’t know if we learn the cost of genius, but we certainly learn the cost of not recognizing that we are a part of the universe that has agency.
The final chapter is really Barss writing about himself and Penrose, and the conversations they have together. Penrose now has macular degeneration, so during a visit they both make to Perimeter in 2019, Barss reads some of Penrose’s letters to Judith back to him. Apparently, Penrose becomes quite emotional in a way that it seems no-one had ever seen – he weeps.
After that, he asks Barss to include the story about Judith. So, on some level, he knows he has erred.
The end of The Impossible Man is devastating. Barss describes how he eventually gains access to two of Penrose’s sons (three with Joan and one with Vanessa). In those interviews, he hears from children who have been traumatized by witnessing what they call “physical aggression” toward their mother. Even so, they both say they’d like to improve their relationship with their father.
Barss then asks a 92-year-old Penrose if he wants to patch things up with his family. His reply: “I feel my life is busy enough and if I get involved with them, it just distracts from other things.” As Barss concludes, Penrose is persistently unwilling to accept that in his life, he has been in the driver’s seat. He has had choices and doesn’t want to take responsibility for that. This, as much as Penrose’s intellectual interests and achievements, is the throughline of the text.
Penrose has shown that he doesn’t really care what others think, as long as he gets what he wants scientifically
The Penrose we meet at the end of The Impossible Man has shown that he doesn’t really care what others think, as long as he gets what he wants scientifically. It’s clear that Barss has a real affection for him, which makes his honesty about the Penrose he finds in the archives all the more remarkable. Perhaps motivated by generosity toward Penrose, Barss also lets the reader do a lot of the analysis.
I wonder, though, how many physicists who are steeped in this culture, and don’t concern themselves with gender equity issues, will miss how bad some of Penrose’s behaviour has been, as his colleagues at the time clearly did. The only documented objections to his behaviour seem more about him going off the deep end with his research into consciousness, cyclic theory and attacks on cosmic inflation.
As I worked on this review, I considered whether a different reviewer would have simply complained that the book has lots of stuff about Penrose’s personal messes that we don’t need to know. Maybe, to other readers, Penrose doesn’t come off quite as badly. For me, I prefer the hero I met in person rather than in the pages of this book. The Impossible Man is an important text, but it’s heartbreaking in the end.
Agrivoltaics is an interdisciplinary research area that lies at the intersection of photovoltaics (PVs) and agriculture. Traditional PV systems used in agricultural settings are made from silicon materials and are opaque. The opaque nature of these solar cells can block sunlight reaching plants and hinder their growth. As such, there’s a need for advanced semi-transparent solar cells that can provide sufficient power but still enable plants to grow instead of casting a shadow over them.
In a recent study headed up at the Institute for Microelectronics and Microsystems (IMM) in Italy, Alessandra Alberti and colleagues investigated the potential of semi-transparent perovskite solar cells as coatings on the roof of a greenhouse housing radicchio seedlings.
Solar cell shading an issue for plant growth
Opaque solar cells are known to induce shade avoidance syndrome in plants. This can cause morphological adaptations, including changes in chlorophyll content and an increased leaf area, as well as a change in the metabolite profile of the plant. Lower UV exposure can also reduce the content of polyphenols – antioxidant and anti-inflammatory molecules that humans get from plants.
Addressing these issues requires the development of semi-transparent PV panels with high enough efficiencies to be commercially feasible. Some common panels that can be made thin enough to be semi-transparent include organic and dye-sensitized solar cells (DSSCs). While these have been used to provide power while growing tomatoes and lettuces, they typically only have a power conversion efficiency (PCE) of a few percent – a more efficient energy harvester is still required.
A semi-transparent perovskite solar cell greenhouse
Perovskite PVs are seen as the future of the solar cell industry and show a lot of promise in terms of PCE, even if they are not yet up to the level of silicon. However, perovskite PVs can also be made semi-transparent.
Experimental set-up The laboratory-scale greenhouse. (Courtesy: CNR-IMM)
In this latest study, the researchers designed a laboratory-scale greenhouse using a semi-transparent europium (Eu)-enriched CsPbI3 perovskite-coated rooftop and investigated how radicchio seeds grew in the greenhouse for 15 days. They chose this Eu-enriched perovskite composition because CsPbI3 has superior thermal stability compared with other perovskites, making it ideal for long exposures to the Sun’s rays. The addition of Eu into the CsPbI3 structure improved the perovskite stability by minimizing the number of intrinsic defects and increasing the surface-to-volume ratio of perovskite grains.
Alongside this stability, the perovskite has no volatile components that could effuse under high surface temperatures. It also has a high PCE – the record for this composition is 21.15%, significantly higher, and much closer to commercial feasibility, than has been achieved with organic PVs and DSSCs. The material therefore provides a good trade-off between achievable PCE and transmitting enough light to allow the seedlings to grow.
Low light conditions promote seedling growth
Even though the seedlings received less light than they would under natural illumination, the team found that they grew more quickly, and with bigger leaves, than those under glass panels. This is attributed to the perovskite acting as a filter that transmits mainly red light – and red light is known to improve the photosynthetic efficiency and light absorption capabilities of plants, as well as increase the levels of sucrose and hexose within them.
The researchers also found that seedlings grown under these conditions had different gene expression patterns compared with those grown under glass. These expression patterns were associated with environmental stress responses, growth regulation, metabolism and light perception, suggesting that the seedlings naturally adapted to different light conditions – although further research is needed to see whether these adaptations will improve the crop yield.
Overall, the use of perovskite PVs strikes a good balance between providing enough power to cover the annual energy needs for irrigation, lighting and air conditioning, and still allowing the seedlings to grow – indeed, to grow more quickly. The team suggests that the perovskite solar cells could offer a potentially affordable option for indoor food production in the agricultural sector, although more work now needs to be done at much larger scales to test the technology’s commercial feasibility.
The first results from the Dark Energy Spectroscopic Instrument (DESI) are a cosmological bombshell, suggesting that the strength of dark energy has not remained constant throughout history. Instead, it appears to be weakening at the moment, and in the past it seems to have existed in an extreme form known as “phantom” dark energy.
The new findings have the potential to change everything we thought we knew about dark energy, a hypothetical entity that is used to explain the accelerating expansion of the universe.
“The subject needed a bit of a shake-up, and we’re now right on the boundary of seeing a whole new paradigm,” says Ofer Lahav, a cosmologist from University College London and a member of the DESI team.
DESI is mounted on the Nicholas U Mayall four-metre telescope at Kitt Peak National Observatory in Arizona, and has the primary goal of shedding light on the “dark universe”. The term dark universe reflects our ignorance of the nature of about 95% of the mass–energy of the cosmos.
Intrinsic energy density
Today’s favoured Standard Model of cosmology is the lambda–cold dark matter (CDM) model. Lambda refers to a cosmological constant, which was first introduced by Albert Einstein in 1917 to keep the universe in a steady state by counteracting the effect of gravity. We now know that the universe is expanding at an accelerating rate, so lambda is used to quantify this acceleration. It can be interpreted as an intrinsic energy density that is driving expansion. Now, DESI’s findings imply that this energy density is erratic and even more mysterious than previously thought.
DESI is creating a humongous 3D map of the universe. Its first full data release comprises 270 terabytes of data and was made public in March. The data include distance and spectral information about 18.7 million objects, including 12.1 million galaxies and 1.6 million quasars. Spectral details of about four million nearby stars are also included.
This is the largest 3D map of the universe ever made, bigger even than all the previous spectroscopic surveys combined. DESI scientists are already working with even more data that will be part of a second public release.
DESI can observe patterns in the cosmos called baryonic acoustic oscillations (BAOs). These were created after the Big Bang, when the universe was filled with a hot plasma of atomic nuclei and electrons. Density waves associated with quantum fluctuations in the Big Bang rippled through this plasma, until about 379,000 years after the Big Bang. Then, the temperature dropped sufficiently to allow the atomic nuclei to sweep up all the electrons. This froze the plasma density waves into regions of high mass density (where galaxies formed) and low density (intergalactic space). These density fluctuations are the BAOs; and they can be mapped by doing statistical analyses of the separation between pairs of galaxies and quasars.
The BAOs grow as the universe expands, and therefore they provide a “standard ruler” that allows cosmologists to study the expansion of the universe. DESI has observed galaxies and quasars going back 11 billion years in cosmic history.
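For readers who want the standard-ruler idea in symbols (these are textbook BAO relations, not formulae taken from the DESI papers): the comoving sound horizon at the drag epoch, r_d, sets the preferred separation between galaxy pairs. The angle this separation subtends on the sky at redshift z, and the redshift interval it spans along the line of sight, then give the comoving distance D_M(z) and the expansion rate H(z):

$$ \Delta\theta = \frac{r_d}{D_M(z)}, \qquad \Delta z = \frac{r_d\,H(z)}{c}. $$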
Density fluctuations DESI observations showing nearby bright galaxies (yellow), luminous red galaxies (orange), emission-line galaxies (blue), and quasars (green). The inset shows the large-scale structure of a small portion of the universe. (Courtesy: Claire Lamman/DESI collaboration)
“What DESI has measured is that the distance [between pairs of galaxies] is smaller than what is predicted,” says team member Willem Elbers of the UK’s University of Durham. “We’re finding that dark energy is weakening, so the acceleration of the expansion of the universe is decreasing.”
As co-chair of DESI’s Cosmological Parameter Estimation Working Group, it is Elbers’ job to test different models of cosmology against the data. The results point to a bizarre form of “phantom” dark energy that boosted the expansion acceleration in the past, but is not present today.
The puzzle is related to dark energy’s equation of state, which describes the ratio of the pressure of the universe to its energy density. In a universe with an accelerating expansion, the equation of state has a value less than about –1/3. A value of –1 characterizes the lambda–CDM model.
However, some alternative cosmological models allow the equation of state to be lower than –1. This means that the universe would expand faster than the cosmological constant would have it do. This points to a “phantom” dark energy that grew in strength as the universe expanded, but then petered out.
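In symbols (standard definitions rather than anything specific to the DESI analysis), the equation-of-state parameter w is the pressure-to-energy-density ratio described above, and the regimes map onto it as follows:

$$ w \equiv \frac{p}{\rho c^{2}}, \qquad \text{acceleration: } w < -\tfrac{1}{3}, \qquad \Lambda\text{CDM}: \; w = -1, \qquad \text{phantom: } w < -1. $$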
“It seems that dark energy was ‘phantom’ in the past, but it’s no longer phantom today,” says Elbers. “And that’s interesting because the simplest theories about what dark energy could be do not allow for that kind of behaviour.”
Dark energy takes over
The universe began expanding because of the energy of the Big Bang. We already know that for the first few billion years of cosmic history this expansion was slowing, because the universe was smaller and the gravity of all the matter it contains was strong enough to put the brakes on the expansion. As the universe expanded and its matter density fell, gravity’s influence waned and dark energy was able to take over. What DESI is telling us is that at the point when dark energy became more influential than matter, it was in its phantom guise.
“This is really weird,” says Lahav; and it gets weirder. The energy density of dark energy reached a peak at a redshift of 0.4, which equates to about 4.5 billion years ago. At that point, dark energy ceased its phantom behaviour and since then the strength of dark energy has been decreasing. The expansion of the universe is still accelerating, but not as rapidly. “Creating a universe that does that, which gets to a peak density and then declines, well, someone’s going to have to work out that model,” says Lahav.
Scalar quantum field
Unlike the unchanging dark-energy density described by the cosmological constant, an alternative concept called quintessence describes dark energy as a scalar quantum field that can have different values at different times and locations.
However, Elbers explains that a single field such as quintessence is incompatible with phantom dark energy. Instead, he says that “there might be multiple fields interacting, which on their own are not phantom but together produce this phantom equation of state,” adding that “the data seem to suggest that it is something more complicated.”
Before cosmology is overturned, however, more data are needed. On its own, the DESI data’s departure from the Standard Model of cosmology has a statistical significance of 1.7σ. This is well below 5σ, which is considered a discovery in cosmology. However, when combined with independent observations of the cosmic microwave background and type Ia supernovae, the significance jumps to 4.2σ.
“Big rip” avoided
Confirmation of a phantom era and a current weakening would mean that dark energy is far more complex than previously thought – deepening the mystery surrounding the expansion of the universe. Indeed, had dark energy continued on its phantom course, it would have caused a “big rip” in which cosmic expansion becomes so extreme that space itself is torn apart.
“Even if dark energy is weakening, the universe will probably keep expanding, but not at an accelerated rate,” says Elbers. “Or it could settle down in a quiescent state, or if it continues to weaken in the future we could get a collapse” into a big crunch. With a form of dark energy that seems to do what it wants as its equation of state changes with time, it’s impossible to say what it will do in the future until cosmologists have more data.
Lahav, however, will wait until 5σ before changing his views on dark energy. “Some of my colleagues have already sold their shares in lambda,” he says. “But I’m not selling them just yet. I’m too cautious.”
The observations are reported in a series of papers on the arXiv server.
At a conference in 2014, bioengineer Jeffrey Fredberg of Harvard University presented pictures of asthma cells. To most people, the images would have been indistinguishable – they all showed tightly packed layers of cells from the airways of people with asthma. But as a physicist, Lisa Manning saw something no one else had spotted; she could tell, just by looking, that some of the model tissues were solid and some were fluid.
Animal tissues must be able to rearrange and flow but also switch to a state where they can withstand mechanical stress. However, whereas solid-liquid transitions are generally associated with a density change, many cellular systems, including asthma cells, can change from rigid to fluid-like at a constant packing density.
Many of a tissue’s properties depend on biochemical processes in its constituent cells, but some collective behaviours can be captured by mathematical models, which is the focus of Manning’s research. At the time, she was working with postdoctoral associate Dapeng Bi on a theory that a tissue’s rigidity depends on the shape of its cells, with cells in a rigid tissue adopting more compact shapes – a smaller perimeter for a given area – than those in a fluid-like one. When she saw the pictures of the asthma cells she knew she was right. “That was a very cool moment,” she says.
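The control parameter in this cell-shape theory is a dimensionless shape index – a cell’s perimeter divided by the square root of its area. Vertex-model studies by Bi, Manning and colleagues put the rigidity transition at a shape index of roughly 3.81; that threshold is quoted here from the literature rather than from this article, so treat the exact number as an assumption. A minimal sketch of the bookkeeping:

```python
import math

# Dimensionless shape index p = perimeter / sqrt(area), evaluated for regular polygons
# (the index is independent of the polygon's size). Vertex-model studies report a
# rigidity transition near p ~ 3.81: below it the model tissue is solid-like,
# above it fluid-like. The regular pentagon sits essentially at the threshold.
THRESHOLD = 3.81  # literature value, quoted here as an assumption

def shape_index_regular_polygon(n_sides: int) -> float:
    """Perimeter / sqrt(area) of a regular polygon with n_sides sides."""
    # area of a regular n-gon with side s is n*s**2 / (4*tan(pi/n)); perimeter is n*s
    return math.sqrt(4 * n_sides * math.tan(math.pi / n_sides))

for n, name in [(6, "hexagon"), (5, "pentagon"), (4, "square")]:
    p = shape_index_regular_polygon(n)
    state = "solid-like" if p < THRESHOLD else "fluid-like"
    print(f"{name:>8}: shape index {p:.3f} -> {state}")
```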
Manning – now the William R Kenan, Jr Professor of Physics at Syracuse University in the US – began her research career in theoretical condensed-matter physics, completing a PhD at the University of California, Santa Barbara, in 2008. The thesis was on the mechanical properties of amorphous solids – materials that don’t have long-ranged order like a crystal but are nevertheless rigid. Amorphous solids include many plastics, soils and foods, but towards the end of her graduate studies, Manning started thinking about where else she could apply her work.
I was looking for a project where I could use some of the skills that I had been developing as a graduate student in an orthogonal way
“I was looking for a project where I could use some of the skills that I had been developing as a graduate student in an orthogonal way,” Manning recalls. Inspiration came from a series of talks on tissue dynamics at the Kavli Institute for Theoretical Physics, where she recognized that the theories she had worked on could also apply to biological systems. “I thought it was amazing that you could apply physical principles to those systems,” she says.
The physics of life
Manning has been at Syracuse since completing a postdoc at Princeton University, and although she has many experimental collaborators, she is happy to still be a theorist. Whereas experimentalists in the biological sciences generally specialize in just one or two experimental models, she looks for “commonalities across a wide range of developmental systems”. That principle has led Manning to study everything from cancer to congenital disease and the development of embryos.
“In animal development, pretty universally one of the things that you must do is change from something that’s the shape of a ball of cells into something that is elongated,” says Manning, who is working to understand how this happens. With collaborator Karen Kasza at Columbia University, she has demonstrated that rather than stretching as a solid, it is more energy-efficient for embryos to change shape by undergoing a phase transition to a fluid, and many of their predictions have been confirmed in fruit fly embryo models.
More recently, Manning has been looking at how ideas from AI and machine learning can be applied to embryogenesis. Unlike most condensed-matter systems, tissues continuously tune individual interactions between cells, and it’s these localized forces that drive complex shape changes during embryonic development. Together with Andrea Liu of the University of Pennsylvania, Manning is now developing a framework that treats cell–cell interactions like weights in a neural network that can be adjusted to produce a desired outcome.
“I think you really need almost a new type of statistical physics that we don’t have yet to describe systems where you have these individually tunable degrees of freedom,” she says, “as opposed to systems where you have maybe one control parameter, like a temperature or a pressure.”
Developing the next generation
Manning’s transition to biophysics was spurred by an unexpected encounter with scientists outside her field. Between 2019 and 2023, she was director of the Bio-inspired Institute at Syracuse University, which supported similar opportunities for other researchers, including PhD students and postdocs. “As a graduate student, it’s a little easy to get focused on the one project that you know about, in the corner of the universe that your PhD is in,” she says.
As well as supporting science, one of the first things Manning spearheaded at the institute was a professional development programme for early-career researchers. “During our graduate schools, we’re typically mostly trained on how to do the academic stuff,” she says, “and then later in our careers, we’re expected to do a lot of other types of things like manage groups and manage funding.” To support their wider careers, participants in the programme build non-technical skills in areas such as project management, intellectual property and graphic design.
What I realized is that I did have implicit expectations that were based on my culture and background, and that they were distinct from those of some of my students
Manning’s senior role has also brought opportunities to build her own skills, with the COVID-19 pandemic in particular making her reflect and reevaluate how she approached mentorship. One of the appeals of academia is the freedom to explore independent research, but Manning began to see that her fear of micromanaging her students was sometimes creating confusion.
“What I realized is that I did have implicit expectations that were based on my culture and background, and that they were distinct from those of some of my students,” she says. “Because I didn’t name them, I was actually doing my students a disservice.” If she could give advice to her younger self, it would be that the best way to support early-career researchers as equals is to set clear expectations as soon as possible.
When Manning started at Syracuse, most of her students wanted to pursue research in academia, and she would often encourage them to think about other career options, such as working in industry. However, now she thinks academia is perceived as the poorer choice. “Some students have really started to get this idea that academia is too challenging and it’s really hard and not at all great and not rewarding.”
Manning doesn’t want anyone to be put off pursuing their interests, and she feels a responsibility to be outspoken about why she loves her job. For her, the best thing about being a scientist is encapsulated by the moment with the asthma cells: “The thrill of discovering something is a joy,” she says, “being for just a moment, the only person in the world that understands something new.”
Core physics This apple tree at Woolsthorpe Manor is believed to have been the inspiration for Isaac Newton. (Courtesy: Bs0u10e01/CC BY-SA 4.0)
Physicists in the UK have drawn up plans for an International Year of Classical Physics (IYC) in 2027 – exactly three centuries after the death of Isaac Newton. Following successful international years devoted to astronomy (2009), light (2015) and quantum science (2025), they want more recognition for a branch of physics that underpins much of everyday life.
A bright green Flower of Kent apple has now been picked as the official IYC logo in tribute to Newton, who is seen as the “father of classical physics”. Newton, who died in 1727, famously developed our understanding of gravity – one of the fundamental forces of nature – after watching an apple fall from a tree of that variety in his home town of Woolsthorpe, Lincolnshire, in 1666.
“Gravity is central to classical physics and contributes an estimated $270bn to the global economy,” says Crispin McIntosh-Smith, chief classical physicist at the University of Lincoln. “Whether it’s rockets escaping Earth’s pull or skiing down a mountain slope, gravity is loads more important than quantum physics.”
McIntosh-Smith, who also works in cosmology having developed the Cosmic Crisp theory of the universe during his PhD, will now be leading attempts to get endorsement for IYC from the United Nations. He is set to take a 10-strong delegation from Bramley, Surrey, to Paris later this month.
An official gala launch ceremony is being pencilled in for the Travelodge in Grantham, which is the closest hotel to Newton’s birthplace. A parallel scientific workshop will take place in the grounds of Woolsthorpe Manor, with a plenary lecture from TV physicist Brian Cox. Evening entertainment will feature a jazz band.
Numerous outreach events are planned for the year, including the world’s largest demonstration of a wooden block on a ramp balanced by a crate on a pulley. It will involve schoolchildren pouring Golden Delicious apples into the crate to illustrate Newton’s laws of motion. Physicists will also be attempting to break the record for the tallest tower of stacked Braeburn apples.
But there is envy from those behind the 2025 International Year of Quantum Science and Technology. “Of course, classical physics is important but we fear this year will peel attention away from the game-changing impact of quantum physics,” says Anne Oyd from the start-up firm Qrunch, who insists she will only play a cameo role in events. “I believe the impact of classical physics is over-hyped.”
A new artificial intelligence/machine learning method rapidly and accurately characterizes binary neutron star mergers based on the gravitational wave signature they produce. Though the method has not yet been tested on new mergers happening “live”, it could enable astronomers to make quicker estimates of properties such as the location of mergers and the masses of the neutron stars. This information, in turn, could make it possible for telescopes to target and observe the electromagnetic signals that accompany such mergers.
When massive objects such as black holes and neutron stars collide and merge, they emit ripples in spacetime known as gravitational waves (GWs). In 2015 scientists on Earth began observing these ripples using kilometre-scale interferometers that measure the minuscule expansion and contraction of space–time that occurs when a gravitational wave passes through our planet. These interferometers are located in the US, Italy and Japan and are known collectively as the LVK observatories after their initials: the Laser Interferometer GW Observatory (LIGO), the Virgo GW Interferometer (Virgo) and the Kamioka GW Detector (KAGRA).
When two neutron stars in a binary pair merge, they emit electromagnetic waves as well as GWs. While both types of wave travel at the speed of light, certain poorly understood processes that occur within and around the merging pair cause the electromagnetic signal to be slightly delayed. This means that the LVK observatories can detect the GW signal coming from a binary neutron star (BNS) merger seconds, or even minutes, before its electromagnetic counterpart arrives. Being able to identify GWs quickly and accurately therefore increases the chances of detecting other signals from the same event.
This is no easy task, however. GW signals are long and complex, and the main technique currently used to interpret them, Bayesian inference, is slow. While faster alternatives exist, they often make algorithmic approximations that negatively affect their accuracy.
Trained with millions of GW simulations
Physicists led by Maximilian Dax of the Max Planck Institute for Intelligent Systems in Tübingen, Germany have now developed a machine learning (ML) framework that accurately characterizes and localizes BNS mergers within a second of a GW being detected, without resorting to such approximations. To do this, they trained a deep neural network model with millions of GW simulations.
Once trained, the neural network can take fresh GW data as input and predict corresponding properties of the merging BNSs – for example, their masses, locations and spins – based on its training dataset. Crucially, this neural network output includes a sky map. This map, Dax explains, provides a fast and accurate estimate for where the BNS is located.
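The architecture used by Dax and colleagues is a deep-learning system for simulation-based inference, and its details are in their paper; the toy network below is not that model. It is only a minimal, hypothetical sketch of the input/output structure described here – multi-detector strain in, point estimates of merger parameters plus a coarse sky map out – with every layer size and name invented for illustration.

```python
# A toy stand-in (PyTorch) for a network that maps gravitational-wave strain to
# merger parameters and a sky map. Sizes and layers are illustrative only.
import torch
import torch.nn as nn

N_SAMPLES = 4096    # assumed length of the whitened strain segment per detector
N_DETECTORS = 2     # e.g. two interferometers (illustrative)
N_PARAMS = 5        # e.g. two masses, two spins, distance (illustrative)
N_SKY_PIXELS = 192  # coarse sky grid (illustrative)

class ToyBNSCharacterizer(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(N_DETECTORS, 16, kernel_size=16, stride=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=16, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(32), nn.Flatten(),
        )
        self.params = nn.Linear(32 * 32, N_PARAMS)      # point estimates of parameters
        self.skymap = nn.Linear(32 * 32, N_SKY_PIXELS)  # unnormalized sky-location logits

    def forward(self, strain):
        h = self.features(strain)
        return self.params(h), torch.softmax(self.skymap(h), dim=-1)

# Run on a fake batch of simulated signals (a stand-in for the millions of
# training simulations mentioned in the article)
net = ToyBNSCharacterizer()
fake_strain = torch.randn(8, N_DETECTORS, N_SAMPLES)
theta_hat, sky_prob = net(fake_strain)
print(theta_hat.shape, sky_prob.shape)  # torch.Size([8, 5]) torch.Size([8, 192])
```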
The new work built on the group’s previous studies, which used ML systems to analyse GWs from binary black hole (BBH) mergers. “Fast inference is more important for BNS mergers, however,” Dax says, “to allow for quick searches for the aforementioned electromagnetic counterparts, which are not emitted by BBH mergers.”
The researchers, who report their work in Nature, hope their method will help astronomers to observe electromagnetic counterparts for BNS mergers more often and detect them earlier – that is, closer to when the merger occurs. Being able to do this could reveal important information on the underlying processes that occur during these events. “It could also serve as a blueprint for dealing with the increased GW signal duration that we will encounter in the next generation of GW detectors,” Dax says. “This could help address a critical challenge in future GW data analysis.”
So far, the team has focused on data from current GW detectors (LIGO and Virgo) and has only briefly explored next-generation ones. They now plan to apply their method to these new GW detectors in more depth.
Waseem completed his DPhil in physics at the University of Oxford in the UK, where he worked on applied process-relational philosophy and employed string diagrams to study interpretations of quantum theory, constructor theory, wave-based logic, quantum computing and natural language processing. At Oxford, Waseem continues to teach mathematics and physics at Magdalen College, the Mathematical Institute, and the Department of Computer Science.
Waseem has played a key role in organizing the Lahore Science Mela, the largest annual science festival in Pakistan. He also co-founded Spectra, an online magazine dedicated to training popular-science writers in Pakistan. For his work popularizing science he received the 2021 Diana Award, was highly commended at the 2021 SEPnet Public Engagement Awards, and won an impact award in 2024 from Oxford’s Mathematical, Physical and Life Sciences (MPLS) division.
What skills do you use every day in your job?
I’m a theoretical physicist, so if you’re thinking about what I do every day, I use chalk and a blackboard, and maybe a pen and paper. However, for theoretical physics, I believe the most important skill is creativity, and the ability to dream and imagine.
What do you like best and least about your job?
That’s a difficult one because I’ve only been in this job for a few weeks. What I like about my job is the academic freedom and the opportunity to work on both education and research. My role is divided 50/50, so 50% of the time I’m thinking about the structure of natural languages like English and Urdu, and how to use quantum computers for natural language processing. The other half is spent using our diagrammatic formalism called “quantum picturalism” to make quantum physics accessible to everyone in the world. So, I think that’s the best part. On the other hand, when you have a lot of smart people together in the same room or building, there can be interpersonal issues. So, the worst part of my job is dealing with those conflicts.
What do you know today, that you wish you knew when you were starting out in your career?
It’s a cynical view, but I think scientists are not always very rational or fair in their dealings with other people and their work. If I could go back and give myself one piece of advice, it would be that sometimes even rational and smart people make naive mistakes. It’s good to recognize that, at the end of the day, we are all human.
Disabled people in science must be recognized and given better support to help reverse the trend of such people dropping out of science. That is the conclusion of a new report released today by the National Association of Disabled Staff Networks (NADSN). It also calls on funders to stop supporting institutions that have toxic research cultures and for a change in equality law to recognize the impact of discrimination on disabled people, including neurodivergent people.
About 22% of working-age adults in the UK are disabled. Yet it is estimated that only 6.4% of people in science have a disability, falling to just 4% for senior academic positions. What’s more, barely 1% of research grant applications to UK Research and Innovation – the umbrella organization for the UK’s main funding councils – are from researchers who disclose being disabled. Disabled researchers who do win grants receive less than half the amount awarded to non-disabled researchers.
NADSN is an umbrella organization for disabled staff networks, with a focus on higher education. It includes the STEMM Action Group, which was founded in 2020 and consists of nine people at universities across the UK who work in science and have lived experience of disability, chronic illness or neurodivergence. The group develops recommendations to funding bodies, learned societies and higher-education institutions to address barriers faced by those who are marginalised due to disability.
In 2021 the group published a “problem statement” that identified issues facing disabled people in science. They range from digital problems, such as the need for accessible fonts in reports and presentations, to physical concerns such as needing access ramps for people in wheelchairs or automatic doors to open heavy fire doors. Other issues include the need for adjustable desks in offices and wheelchair accessible labs.
“Many of these physical issues tend to be afterthoughts in the planning process,” says Francesca Doddato, a physicist from Lancaster University, who co-wrote the latest report. “But at that point they are much harder, and more costly, to implement.”
We need to have this big paradigm shift in terms of how we see disability inclusion
Francesca Doddato
Workplace attitudes and cultures can also be a big problem for disabled people in science, some 62% of whom report having been bullied and harassed compared to 43% of all scientists. “Unfortunately, in research and academia there is generally a toxic culture in which you are expected to be hyper productive, move all over the world, and have a focus on quantity over quality in terms of research output,” says Doddato. “This, coupled with society-wide attitudes towards disabilities, means that many disabled people struggle to get promoted and drop out of science.”
The action group spent the past four years compiling their latest report – Towards a fully inclusive environment for disabled people in STEMM – to present solutions to these issues. They hope it will raise awareness of the inequity and discrimination experienced by disabled people in science and highlight the benefits of having an inclusive environment.
The report identifies three main areas that will have to be reformed to make science fully inclusive for disabled scientists: enabling inclusive cultures and practices; enhancing accessible physical and digital environments; and accessible and proactive funding.
In the short term, it calls on people to recognize the challenges and barriers facing disabled researchers and to improve work-based training for managers. “One of the best things is just being willing to listen and ask what can I do to help?” notes Doddato. “Being an ally is vitally important.”
Doddato says that sharing meeting agendas and documents ahead of time, ensuring that documents are presented in accessible formats, and acknowledging that tasks such as getting around campus can take longer are all things that can help. “All of these little things can really go a long way in shifting those attitudes and being an ally, and those things don’t need policies – people just need to be willing to listen and be willing to change.”
Medium- and long-term goals in the report involve holding organisations responsible for their working-practice policies and ceasing to promote and fund toxic research cultures. “We hope the report encourages funding bodies to put pressure on institutions if they are demonstrating toxicity and being discriminatory,” adds Doddato. The report also calls for a change to equality law to recognize the impact of intersectional discrimination, although it admits that this will be a “large undertaking” and will be the subject of a further NADSN report.
Doddato adds that disabled people’s voices need to be heard “loud and clear” as part of any changes. “What we are trying to address with the report is to push universities, research institutions and societies to stop only talking about doing something and actually implement change,” says Doddato. “We need to have a big paradigm shift in terms of how we see disability inclusion. It’s time for change.”
Neutron-activated gold Novel activation imaging technique enables real-time visualization of gold nanoparticles in the body without the use of external tracers. (Courtesy: Nanase Koshikawa from Waseda University)
Gold nanoparticles are promising vehicles for targeted delivery of cancer drugs, offering biocompatibility plus a tendency to accumulate in tumours. To fully exploit their potential, it’s essential to be able to track the movement of these nanoparticles in the body. To date, however, methods for directly visualizing their pharmacokinetics have not yet been established. Aiming to address this shortfall, researchers in Japan are using neutron-activated gold radioisotopes to image nanoparticle distribution in vivo.
The team, headed up by Nanase Koshikawa and Jun Kataoka from Waseda University, are investigating the use of radioactive gold nanoparticles based on 198Au, which they create by irradiating stable gold (197Au) with low-energy neutrons. The radioisotope 198Au has a half-life of 2.7 days and emits 412 keV gamma rays, enabling a technique known as activation imaging.
“Our motivation was to visualize gold nanoparticles without labelling them with tracers,” explains Koshikawa. “Radioactivation allows gold nanoparticles themselves to become detectable from outside the body. We used neutron activation because it does not change the atomic number, ensuring the chemical properties of gold nanoparticles remain unchanged.”
In vivo studies
The researchers – also from Osaka University and Kyoto University – synthesized 198Au-based nanoparticles and injected them into tumours in four mice. They used a hybrid Compton camera (HCC) to detect the emitted 412 keV gamma rays and determine the in vivo nanoparticle distribution, on the day of injection and three and five days later.
The HCC, which incorporates two pixelated scintillators – a scatterer with a central pinhole and an absorber – can detect radiation with energies from tens of keV to nearly 1 MeV. For X-rays and low-energy gamma rays, the scatterer enables pinhole-mode imaging. For gamma rays over 200 keV, the device functions as a Compton camera.
The researchers reconstructed the 412 keV gamma signals into images, using an energy window of 412±30 keV. With the HCC located 5 cm from the animals’ abdomens, the spatial resolution was 7.9 mm, roughly comparable to the tumour size on the day of injection (7.7 x 11 mm).
In vivo distribution Images of 198Au nanoparticles in the bodies of two mice obtained with the HCC on the day of administration. (Courtesy: CC BY 4.0/Appl. Phys. Lett. 10.1063/5.0251048)
Overlaying the images onto photographs of the mice revealed that the nanoparticles accumulated in both the tumour and liver. In mice 1 and 2, high pixel values were observed primarily in the tumour, while mice 3 and 4 also had high pixel values in the liver region.
After imaging, the mice were euthanized and the team used a gamma counter to measure the radioactivity of each organ. The measured activity concentrations were consistent with the imaging results: mice 1 and 2 had higher nanoparticle concentrations in the tumour than the liver, and mice 3 and 4 had higher concentrations in the liver.
Tracking drug distribution
Next, Koshikawa and colleagues used the 198Au nanoparticles to label astatine-211 (211At), a promising alpha-emitting drug. They note that although 211At emits 79 keV X-rays, allowing in vivo visualization, its short half-life of just 7.2 h precludes its use for long-term tracking of drug pharmacokinetics.
The researchers injected the 211At-labelled nanoparticles into three tumour-bearing mice and used the HCC to simultaneously image 211At and 198Au, on the day of injection and one or two days later. Comparing energy spectra recorded just after injection with those two days later showed that the 211At peak at 79 keV significantly decreased in height owing to its decay, while the 412 keV 198Au peak maintained its height.
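To get a feel for the numbers, here is a minimal decay-law sketch – a back-of-the-envelope illustration using only the half-lives quoted above, not the group’s analysis code:

```python
# Fraction of activity remaining after time t: 0.5 ** (t / half_life).
# Half-lives are those quoted in the text: 7.2 h for 211At, 2.7 days for 198Au.
def fraction_remaining(t_hours, half_life_hours):
    return 0.5 ** (t_hours / half_life_hours)

t = 48.0                                  # two days after injection
print(fraction_remaining(t, 7.2))         # 211At: roughly 1% of the activity left
print(fraction_remaining(t, 2.7 * 24))    # 198Au: roughly 60% still remaining
```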
The team reconstructed images using energy windows of 79±10 and 412±30 keV, for pinhole- and Compton-mode reconstruction, respectively. In these experiments, the HCC was placed 10 cm from the mouse, giving a spatial resolution of 16 mm – larger than the initial tumour size and insufficient to clearly distinguish tumours from small organs. Nevertheless, the researchers point out that the rough distribution of the drug was still observable.
On the day of injection, the drug distribution could be visualized using both the 211At and 198Au signals. Two days later, imaging using 211At was no longer possible. In contrast, the distribution of the drug could still be observed via the 412 keV gamma rays.
With further development, the technique may prove suitable for future clinical use. “We assume that the gamma ray exposure dose would be comparable to that of clinical imaging techniques using X-rays or gamma rays, such as SPECT and PET, and that activation imaging is not harmful to humans,” Koshikawa says.
Activation imaging could also be applied to more than just gold nanoparticles. “We are currently working on radioactivation of platinum-based anticancer drugs to enable their visualization from outside the body,” Koshikawa tells Physics World. “Additionally, we are developing new detectors to image radioactive drugs with higher spatial resolution.”
Researchers in Edinburgh filmed ants to capture the sequence of movements they make when picking up seeds and other objects, then used this to build a robot gripper.
The device consists of two aluminium plates that each contain four rows of “hairs” made from thermoplastic polyurethane.
The hairs are 20 mm long and 1 mm in diameter, and protrude in a V-shape. This allows the hairs to surround circular objects, which can be particularly difficult to grasp and hold onto using parallel plates.
In tests picking up 30 different household items including a jam jar and shampoo bottle (see video), adding hairs to the gripper increased the prototype’s grasp success rate from 64% to 90%.
The researchers think that such a device could be used in environmental clean-up as well as in construction and agriculture.
Barbara Webb from the University of Edinburgh, who led the research, says the work is “just the first step”.
“Now we can see how [ants’] antennae, front legs and jaws combine to sense, manipulate, grasp and move objects – for instance, we’ve discovered how much ants rely on their front legs to get objects in position,” she adds. “This will inform further development of our technology.”
Researchers at the EMBL in Germany have dramatically reduced the time required to create images using Brillouin microscopy, making it possible to study the viscoelastic properties of biological samples far more quickly and with less damage than ever before. Their new technique can image samples with a field of view of roughly 10 000 pixels at a speed of 0.1 Hz – a 1000-fold improvement in speed and throughput compared to standard confocal techniques.
Mechanical properties such as the elasticity and viscosity of biological cells are closely tied to their function. These properties also play critical roles in processes such as embryo and tissue development and can even dictate how diseases such as cancer evolve. Measuring these properties is therefore important, but it is not easy since most existing techniques to do so are invasive and thus inherently disruptive to the systems being imaged.
Non-destructive, label- and contact-free
In recent years, Brillouin microscopy has emerged as a non-destructive, label- and contact-free optical spectroscopy method for probing the viscoelastic properties of biological samples with high resolution in three dimensions. It relies on Brillouin scattering, which occurs when light interacts with the phonons (or collective vibrational modes) that are present in all matter. This interaction produces two additional peaks, known as Stokes and anti-Stokes Brillouin peaks, in the spectrum of the scattered light. The position of these peaks (the Brillouin shift) and their linewidth (the Brillouin width) are related to the elastic and viscous properties, respectively, of the sample.
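For reference, the textbook relations behind this – general expressions, not the specific calibration used at EMBL – connect the measured spectrum to mechanics via the speed of sound in the sample:

$$\nu_B = \frac{2 n v}{\lambda}\,\sin\frac{\theta}{2}, \qquad M' = \rho\, v^2,$$

where $\nu_B$ is the Brillouin shift, $n$ the refractive index, $v$ the acoustic velocity, $\lambda$ the laser wavelength, $\theta$ the scattering angle, $\rho$ the density and $M'$ the real (elastic) part of the longitudinal modulus; the Brillouin linewidth probes the imaginary, viscous part.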
The downside is that standard Brillouin microscopy approaches analyse just one point in a sample at a time. Because the scattering signal from a single point is weak, imaging speeds are slow, yielding long light exposure times that can damage photosensitive components within biological cells.
“Light sheet” Brillouin imaging
To overcome this problem, EMBL researchers led by Robert Prevedel began exploring ways to speed up the rate at which Brillouin microscopy can acquire two- and three-dimensional images. In the early days of their project, they were only able to visualize one pixel at a time. With typical measurement times of tens to hundreds of milliseconds for a single data point, it therefore took several minutes, or even hours, to obtain two-dimensional images of 50–250 square pixels.
In 2022, however, they succeeded in expanding the field of view to include an entire spatial line — that is, acquiring image data from more than 100 points in parallel. In their latest work, which they describe in Nature Photonics, they extended the technique further to allow them to view roughly 10 000 pixels in parallel over the full plane of a sample. They then used the new approach to study mechanical changes in live zebrafish larvae.
“This advance enables much faster Brillouin imaging, and in terms of microscopy, allows us to perform ‘light sheet’ Brillouin imaging,” says Prevedel. “In short, we are able to ‘under-sample’ the spectral output, which leads to around 1000 fewer individual measurements than normally needed.”
Towards a more widespread use of Brillouin microscopy
Prevedel and colleagues hope their result will lead to more widespread use of Brillouin microscopy, particularly for photosensitive biological samples. “We wanted to speed-up Brillouin imaging to make it a much more useful technique in the life sciences, yet keep overall light dosages low. We succeeded in both aspects,” he tells Physics World.
Looking ahead, the researchers plan to further optimize the design of their approach and merge it with microscopes that enable more robust and straightforward imaging. “We then want to start applying it to various real-world biological structures and so help shed more light on the role mechanical properties play in biological processes,” Prevedel says.
FLASH irradiation, an emerging cancer treatment that delivers radiation at ultrahigh dose rates, has been shown to significantly reduce acute skin toxicity in laboratory mice compared with conventional radiotherapy. Having demonstrated this effect using proton-based FLASH treatments, researchers from Aarhus University in Denmark have now repeated their investigations using electron-based FLASH (eFLASH).
Reporting their findings in Radiotherapy and Oncology, the researchers note a “remarkable similarity” between eFLASH and proton FLASH with respect to acute skin sparing.
Principal investigator Brita Singers Sørensen and colleagues quantified the dose–response modification of eFLASH irradiation for acute skin toxicity and late fibrotic toxicity in mice, using similar experimental designs to those previously employed for their proton FLASH study. This enabled the researchers to make direct quantitative comparisons of acute skin response between electrons and protons. They also compared the effectiveness of the two modalities to determine whether radiobiological differences were observed.
Over four months, the team examined 197 female mice across five irradiation experiments. After being weighed, earmarked and given an ID number, each mouse was randomized to receive either eFLASH irradiation (average dose rate of 233 Gy/s) or conventional electron radiotherapy (average dose rate of 0.162 Gy/s) at various doses.
For the treatment, two unanaesthetized mice (one from each group) were restrained in a jig with their right legs placed in a water bath and irradiated by a horizontal 16 MeV electron beam. The animals were placed on opposite sides of the field centre and irradiated simultaneously, with their legs at a 3.2 cm water-equivalent depth, corresponding to the dose maximum.
The researchers used a diamond detector to measure the absolute dose at the target position in the water bath and assumed that the mouse foot target received the same dose. The resulting foot doses were 19.2–57.6 Gy for eFLASH treatments and 19.4–43.7 Gy for conventional radiotherapy, chosen to cover the entire range of acute skin response.
FLASH confers skin protection
To evaluate the animals’ response to irradiation, the researchers assessed acute skin damage daily from seven to 28 days post-irradiation using an established assay. They weighed the mice weekly, and one of three observers blinded to previous grades and treatment regimens assessed skin toxicity. Photographs were taken whenever possible. Skin damage was also graded using an automated deep-learning model, generating a dose–response curve independent of observer assessments.
The researchers also assessed radiation-induced fibrosis in the leg joint, biweekly from weeks nine to 52 post-irradiation. They defined radiation-induced fibrosis as a permanent reduction of leg extensibility by 75% or more in the irradiated leg compared with the untreated left leg.
To assess the tissue-sparing effect of eFLASH, the researchers used dose–response curves to derive TD50 – the toxic dose eliciting a skin response in 50% of mice. They then determined a dose modification factor (DMF), defined as the ratio of eFLASH TD50 to conventional TD50. A DMF larger than one suggests that eFLASH reduces toxicity.
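As a concrete illustration of that arithmetic (with made-up TD50 values, not the study’s fitted ones):

```python
# Dose modification factor: ratio of FLASH TD50 to conventional TD50.
# A value above 1 means a higher FLASH dose is needed to produce the same
# toxicity, i.e. FLASH spares the tissue.
def dose_modification_factor(td50_flash_gy, td50_conventional_gy):
    return td50_flash_gy / td50_conventional_gy

# Hypothetical example: conventional TD50 of 30 Gy versus eFLASH TD50 of 45 Gy
print(dose_modification_factor(45.0, 30.0))  # -> 1.5
```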
The eFLASH treatments had a DMF of 1.45–1.54 – in other words, a 45–54% higher dose was needed to cause comparable skin toxicity to that caused by conventional radiotherapy. “The DMF indicated a considerable acute skin sparing effect of eFLASH irradiation,” the team explain. Radiation-induced fibrosis was also reduced using eFLASH, with a DMF of 1.15.
Reducing skin damage Dose-response curves for acute skin toxicity (left) and fibrotic toxicity (right) for conventional electron radiotherapy and electron FLASH treatments. (Courtesy: CC BY 4.0/adapted from Radiother. Oncol. 10.1016/j.radonc.2025.110796)
For DMF-based equivalent doses, the development of skin toxicity over time was similar for eFLASH and conventional treatments throughout the dose groups. This supports the hypothesis that eFLASH modifies the dose–response rather than altering the underlying biological mechanism. The team also notes that the difference in DMF between the fibrotic response and acute skin damage suggests that FLASH sparing depends on tissue type and may differ between acute- and late-responding tissues.
Similar skin damage between electrons and protons
Sørensen and colleagues compared their findings to previous studies of normal-tissue damage from proton irradiation, both in the entrance plateau and using the spread-out Bragg peak (SOBP). DMF values for electrons (1.45–1.54) were similar to those of transmission protons (1.44–1.50) and slightly higher than for SOBP protons (1.35–1.40). “Despite dose rate and pulse structure differences, the response to electron irradiation showed substantial similarity to transmission and SOBP damage,” they write.
Although the average eFLASH dose rate (233 Gy/s) was higher than that of the proton studies (80 and 60 Gy/s), it did not appear to influence the biological response. This supports the hypothesis that beyond a certain dose rate threshold, the tissue-sparing effect of FLASH does not increase notably.
The researchers point out that previous studies also found biological similarities in the FLASH effect for electrons and protons, with this latest work adding data on similar comparable and quantifiable effects. They add, however, that “based on the data of this study alone, we cannot say that the biological response is identical, nor that the electron and proton irradiation elicit the same biological mechanisms for DNA damage and repair. This data only suggests a similar biological response in the skin.”
New data from the NOvA experiment at Fermilab in the US contain no evidence for so-called “sterile” neutrinos, in line with results from most – though not all – other neutrino detectors to date. As well as being consistent with previous experiments, the finding aligns with standard theoretical models of neutrino oscillation, in which three active types, or flavours, of neutrino convert into each other. The result also sets more stringent limits on how much an additional sterile type of neutrino could affect the others.
“The global picture on sterile neutrinos is still very murky, with a number of experiments reporting anomalous results that could be attributed to sterile neutrinos on one hand and a number of null results on the other,” says NOvA team member Adam Lister of the University of Wisconsin, Madison, US. “Generally, these anomalous results imply we should see large amounts of sterile-driven neutrino disappearance at NOvA, but this is not consistent with our observations.”
Neutrinos were first proposed in 1930 by Wolfgang Pauli as a way to account for missing energy and spin in the beta decay of nuclei. They were observed in the laboratory in 1956, and we now know that they come in (at least) three flavours: electron, muon and tau. We also know that these three flavours oscillate, changing from one to another as they travel through space, and that this oscillation means they are not massless (as was initially thought).
Significant discrepancies
Over the past few decades, physicists have used underground detectors to probe neutrino oscillation more deeply. A few of these detectors, including the LSND at Los Alamos National Laboratory, BEST in Russia, and Fermilab’s own MiniBooNE, have observed significant discrepancies between the number of neutrinos they detect and the number that mainstream theories predict.
One possible explanation for this excess, which appears in some extensions of the Standard Model of particle physics, is the existence of a fourth flavour of neutrino. Neutrinos of this “sterile” type do not interact with the other flavours via the weak nuclear force. Instead, they interact only via gravity.
Detecting sterile neutrinos would fundamentally change our understanding of particle physics. Indeed, some physicists think sterile neutrinos could be a candidate for dark matter – the mysterious substance that is thought to make up around 85% of the matter in the universe but has so far only made itself known through the gravitational force it exerts.
Near and far detectors
The NOvA experiment uses two liquid scintillator detectors to monitor a stream of neutrinos created by firing protons at a carbon target. The near detector is located at Fermilab, approximately 1 km from the target, while the far detector is 810 km away in northern Minnesota. In the new study, the team measured how many muon-type neutrinos survive the journey through the Earth’s crust from the near detector to the far one. The idea is that if fewer neutrinos survive than the conventional three-flavour oscillations picture predicts, some of them could have oscillated into sterile neutrinos.
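For context, the disappearance being searched for can be pictured with the standard two-flavour approximation for the muon-neutrino survival probability – an illustrative formula, not the full 3+1 oscillation fit that NOvA performs:

$$P(\nu_\mu \to \nu_\mu) \approx 1 - \sin^2(2\theta)\,\sin^2\!\left(\frac{1.27\,\Delta m^2\,[\mathrm{eV}^2]\;L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right),$$

where $L$ is the baseline and $E$ the neutrino energy. Mixing with a sterile neutrino would introduce an additional mass splitting and mixing angle, producing extra disappearance beyond what the three active flavours can account for.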
The experimenters studied two different interactions between neutrinos and normal matter, says team member V Hewes of the University of Cincinnati, US. “We looked for both charged current muon neutrino and neutral current interactions, as a sterile neutrino would manifest differently in each,” Hewes explains. “We then compared our data across those samples in both detectors to simulations of neutrino oscillation models with and without the presence of a sterile neutrino.”
No excess of neutrinos seen
Writing in Physical Review Letters, the researchers state that they found no evidence of neutrinos oscillating into sterile neutrinos. What is more, introducing a fourth, sterile neutrino did not provide better agreement with the data than sticking with the standard model of three active neutrinos.
This result is in line with several previous experiments that looked for sterile neutrinos, including those performed at T2K, Daya Bay, RENO and MINOS+. However, Lister says it places much stricter constraints on active-sterile neutrino mixing than these earlier results. “We are really tightening the net on where sterile neutrinos could live, if they exist,” he tells Physics World.
The NOvA team now hopes to tighten the net further by reducing systematic uncertainties. “To that end, we are developing new data samples that will help us better understand the rate at which neutrinos interact with our detector and the composition of our beam,” says team member Adam Aurisano, also at the University of Cincinnati. “This will help us better distinguish between the potential imprint of sterile neutrinos and more mundane causes of differences between data and prediction.”
NOvA co-spokesperson Patricia Vahle, a physicist at the College of William & Mary in Virginia, US, sums up the results. “Neutrinos are full of surprises, so it is important to check when anomalies show up,” she says. “So far, we don’t see any signs of sterile neutrinos, but we still have some tricks up our sleeve to extend our reach.”
Last week I had the pleasure of attending the Global Physics Summit (GPS) in Anaheim, California, where I rubbed shoulders with 15,000 fellow physicists. The best part of being there was chatting with lots of different people, and in this podcast I share two of those conversations.
First up is Chetan Nayak, who is a senior researcher at Microsoft’s Station Q quantum computing research centre here in California. In February, Nayak and colleagues claimed a breakthrough in the development of topological quantum bits (qubits) based on Majorana zero modes. In principle, such qubits could enable the development of practical quantum computers, but not all physicists were convinced, and the announcement remains controversial – despite further results presented by Nayak in a packed session at the GPS.
I caught up with Nayak after his talk and asked him about the challenges of achieving Microsoft’s goal of a superconductor-based topological qubit. That conversation is the first segment of today’s podcast.
Distinctive jumping technique
Up next, I chat with Atharva Lele about the physics of manu jumping, which is a competitive aquatic sport that originates from the Māori and Pasifika peoples of New Zealand. Jumpers are judged by the height of their splash when they enter the water, and the best competitors use a very distinctive technique.
Lele is an undergraduate student at the Georgia Institute of Technology in the US, and is part of a team that analysed manu techniques in a series of clever experiments that included plunging robots. He explains how to make a winning manu jump while avoiding the pain of a belly flop.
The first direct evidence for auroras on Neptune has been spotted by the James Webb Space Telescope (JWST) and the Hubble Space Telescope.
Auroras happen when energetic particles from the Sun become trapped in a planet’s magnetic field and eventually strike the upper atmosphere, with the energy released creating a signature glow.
Auroral activity has previously been seen on Jupiter, Saturn and Uranus but not on Neptune despite hints in a flyby of the planet by NASA’s Voyager 2 in 1989.
“Imaging the auroral activity on Neptune was only possible with [the JWST’s] near-infrared sensitivity,” notes Henrik Melin from Northumbria University. “It was so stunning to not just see the auroras, but the detail and clarity of the signature really shocked me.”
The data was taken by JWST’s Near-Infrared Spectrograph as well as Hubble’s Wide Field Camera 3. The cyan patches in the image above represent auroral activity and are shown together with white clouds on a multi-hued blue orb that is Neptune.
While auroras on Earth occur at the poles, on Neptune they happen elsewhere. This is due to the nature of Neptune’s magnetic field, which is tilted by 47 degrees from the planet’s rotational axis.
As well as the visible imagery, the JWST also detected an emission line from trihydrogen cation (H3+), which can be created in auroras.
Physicists in Germany have found an alternative explanation for an anomaly that had previously been interpreted as potential evidence for a mysterious “dark force”. Originally spotted in ytterbium atoms, the anomaly turns out to have a more mundane cause. However, the investigation, which involved high-precision measurements of shifts in ytterbium’s energy levels and the mass ratios of its isotopes, could help us better understand the structure of heavy atomic nuclei and the physics of neutron stars.
Isotopes are forms of an element that have the same number of protons and electrons, but different numbers of neutrons. These different numbers of neutrons produce shifts in the atom’s electronic energy levels. Measuring these so-called isotope shifts is therefore a way of probing the interactions between electrons and neutrons.
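A minimal way to write this down – the standard leading-order factorization used in such analyses, quoted here for orientation rather than taken from the paper – splits each isotope shift into a mass shift and a field shift:

$$\delta\nu_i^{A,A'} = K_i\,\mu^{A,A'} + F_i\,\delta\langle r^2\rangle^{A,A'},$$

where $\mu^{A,A'} = 1/m_A - 1/m_{A'}$, $\delta\langle r^2\rangle$ is the change in the mean-square nuclear charge radius, and $K_i$ and $F_i$ are constants of the electronic transition $i$. Plotting the mass-normalized shifts of two transitions against each other should then give a straight line (a “King plot”); departures from that linearity are what can signal either subtle nuclear-structure effects or a new force.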
In 2020, a team of physicists at the Massachusetts Institute of Technology (MIT) in the US observed an unexpected deviation in the isotope shift of ytterbium. One possible explanation for this deviation was the existence of a new “dark force” that would interact with both ordinary, visible matter and dark matter via hypothetical new force-carrying particles (bosons).
Although dark matter is thought to make up about 85 percent of the universe’s total matter, and its presence can be inferred from the way light bends as it travels towards us from distant galaxies, it has never been detected directly. Evidence for a new, fifth force (in addition to the known strong, weak, electromagnetic and gravitational forces) that acts between ordinary and dark matter would therefore be very exciting.
Measuring ytterbium isotope shifts and atomic masses
Tanja Mehlstäubler, Klaus Blaum and colleagues came to this conclusion after measuring shifts in the atomic energy levels of five different ytterbium isotopes: 168,170,172,174,176Yb. They did this by trapping ions of these isotopes in an ion trap at the Physikalisch-Technische Bundesanstalt (PTB) in Braunschweig and then using an ultrastable laser to drive certain electronic transitions. This allowed them to pin down the frequencies of specific transitions (2S1/2→2D5/2 and 2S1/2→2F7/2) with a precision of 4 × 10−9, the highest to date.
They also measured the atomic masses of the ytterbium isotopes by trapping individual highly charged Yb42+ ions in the cryogenic PENTATRAP Penning-trap mass spectrometer at the Max Planck Institute for Nuclear Physics (MPIK) in Heidelberg. In the strong magnetic field of this trap, team member and study lead author Menno Door explains, the ions are bound to follow a circular orbit. “We measure the rotational frequency of this orbit by amplifying the minuscule induced current in surrounding electrodes,” he says. “The measured frequencies allowed us to very precisely determine the related mass ratios of the various isotopes with a precision of 4 × 10−12.”
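The idealized relation behind this measurement – in practice the experiment combines several trap eigenfrequencies, but the principle is the free cyclotron motion – is

$$\nu_c = \frac{qB}{2\pi m} \quad\Longrightarrow\quad \frac{m_1}{m_2} = \frac{\nu_{c,2}}{\nu_{c,1}}$$

for two ions with the same charge state $q$ in the same magnetic field $B$, which is why measuring frequency ratios yields mass ratios so directly.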
From these data, the researchers were able to extract new parameters that describe how the ytterbium nucleus deforms. To back up their findings, a group at TU Darmstadt led by Achim Schwenk simulated the ytterbium nuclei on large supercomputers, calculating their structure from first principles based on our current understanding of the strong and electromagnetic interactions. “These calculations confirmed that the leading signal we measured was due to the evolving nuclear structure of ytterbium isotopes, not a new fifth force,” says team member Matthias Heinz.
“Our work complements a growing body of research that aims to place constraints on a possible new interaction between electrons and neutrons,” team member Chih-Han Yeh tells Physics World. “In our work, the unprecedented precision of our experiments refined existing constraints.”
The researchers say they would now like to measure other isotopes of ytterbium, including rare isotopes with high or low neutron numbers. “Doing this would allow us to control for uncertain ‘higher-order’ nuclear structure effects and further improve the constraints on possible new physics,” says team member Fiona Kirk.
Door adds that isotope chains of other elements such as calcium, tin and strontium would also be worth investigating. “These studies would allow us to further test our understanding of nuclear structure and neutron-rich matter, and with this understanding allow us to probe for possible new physics again,” he says.
Located about 40 light years from us, the exoplanet Trappist-1 b, orbiting an ultracool dwarf star, has perplexed astronomers with its atmospheric mysteries. Recent observations made by the James Webb Space Telescope (JWST) at two mid-infrared bands (12.8 and 15 µm), suggest that the exoplanet could either be bare, airless rock like Mercury or shrouded by a hazy carbon dioxide (CO2) atmosphere like Titan.
The research, reported in Nature Astronomy, provides the first thermal emission measurements for Trappist-1 b suggesting two plausible yet contradictory scenarios. This paradox challenges our current understanding of atmospheric models and highlights the need for further investigations – both theoretical and observational.
Scenario one: airless rock
An international team of astronomers, co-led by Elsa Ducrot and Pierre-Olivier Lagage from the Commissariat aux Énergies Atomiques (CEA) in Paris, France, obtained mid-infrared observations for Trappist-1 b for 10 secondary eclipse measurements (recorded as the exoplanet moves behind the star) using the JWST Mid-Infrared Instrument (MIRI). They recorded emission data at 12.8 and 15 µm and compared the findings with various surface and atmospheric models.
The thermal emission at 15 µm was consistent with Trappist-1 b being a bare rock with almost zero albedo; however, the emission at 12.8 µm refuted this model. At this wavelength, the exoplanet’s measured flux was most consistent with a surface of ultramafic rock – an igneous rock with low silica content. The model assumes the surface to be geologically unweathered.
Trappist-1 b, the innermost planet in the Trappist-1 system, experiences strong tidal interaction and induction heating from its host star. This could trigger volcanic activity and continuous resurfacing, which could lead to a young surface like that of Jupiter’s volcanic moon Io. The researchers argue that these scenarios support the idea that Trappist-1 b is an airless rocky planet with a young ultramafic surface.
The team next explored atmospheric models for the exoplanet, which unfolded a different story.
Scenario two: haze-rich CO2 atmosphere
Ducrot and colleagues fitted the measured flux data with hazy atmospheric models centred around 15 µm. The results showed that Trappist-1 b could have a thick CO2-rich atmosphere with photochemical haze, but with a twist. For an atmosphere dominated by greenhouse gases such as CO2, which is strongly absorbing, temperature is expected to increase with increasing pressure (at lower levels). Consequently, they anticipated the brightness temperature should be lower at 15 µm (which measures temperature high in the atmosphere) than at 12.8 µm. But the observations showed otherwise. They proposed that this discrepancy could be explained by a thermal inversion, where the upper atmosphere has higher temperature than the layers below.
In our solar system, Titan’s atmosphere also shows thermal inversion due to heating through haze absorption. Haze is an efficient absorber of stellar radiation. Therefore, it could absorb radiation high up in the atmosphere, leading to heating of the upper atmosphere and cooling of the lower atmosphere. Indeed, this model is consistent with the team’s measurements. However, this leads to another plausible question: what forms this haze?
Trappist-1 b’s close proximity to Trappist-1 and the strong X-ray and ultraviolet radiation from the host star could create haze in the exoplanet’s atmosphere via photodissociation. While Titan’s hydrocarbon haze arises from photodissociation of methane, the same is not possible for Trappist-1 b as methane and CO2 cannot coexist photochemically and thermodynamically.
One plausible scenario is that the photochemical haze forms due to the presence of hydrogen sulphide (H2S). The volcanic activity in an oxidized, CO2-dominated atmosphere could supply H2S, but it is unlikely that it could sustain the levels needed for the haze. Additionally, as the innermost planet around an active star, Trappist-1 b is subjected to constant space weathering, raising the question of the sustainability of its atmosphere.
The researchers note that although the modelled atmospheric scenario appears less plausible than the airless bare-rock model, more theoretical and experimental work is needed to create a conclusive model.
What is the true nature of Trappist-1 b?
The two plausible surface and atmospheric models for Trappist-1 b provide an enigma. How could a planet be simultaneously an airless, young ultramafic rock and have a haze-filled CO2-rich atmosphere? The resolution might come not from theoretical models but from additional measurements.
Currently, the available data only capture the dayside thermal flux within two infrared bands, which proved insufficient to decisively distinguish between an airless surface and a CO2-rich atmosphere. To solve this planetary paradox, astronomers advocate for broader spectral coverage and photometric phase curve measurements to help explain heat redistribution patterns essential for atmospheric confirmation.
JWST’s observations of Trappist-1 b demonstrate its power to precisely detect thermal emissions from exoplanets. However, the contradictory interpretations of the data highlight its limitations too and emphasize the need for higher resolution spectroscopy. With only two thermal flux measurements insufficient to give a precise answer, future JWST observations of Trappist-1 b might uncover its true picture.
Co-author Michaël Gillon, an astrophysicist at the University of Liège, emphasizes the importance of the results. “The agreement between our two measurements of the planet’s dayside fluxes at 12.8 and 15 microns and a haze-rich CO2-dominated atmosphere is an important finding,” he tells Physics World. “It shows that dayside flux measurements in one or a couple of broadband filters is not enough to fully discriminate airless versus atmosphere models. Additional phase curve and transit transmission data are necessary, even if for the latter, the interpretation of the measurements is complicated by the inhomogeneity of the stellar surface.”
For now, Trappist-1 b hides its secrets, either standing as an airless, barren world scorched by its star or hidden beneath a thick, hazy CO2 veil.
Last year the UK government placed a new cap of £9535 on annual tuition fees, a figure that will likely rise in the coming years as universities tackle a funding crisis. Indeed, shortfalls are already affecting institutions, with some saying they will run out of money in the next few years. The past couple of months alone have seen several universities announce plans to shed academic staff and even shut departments.
Whether you agree with tuition fees or not, the fact is that students will continue to pay a significant sum for a university education. Value for money is part of the university proposition and lecturers can play a role by conveying the excitement of their chosen field. But what are the key requirements to help do so? In the late 1990s we carried out a study aimed at improving the long-term performance of students who initially struggled with university-level physics.
With funding from the Higher Education Funding Council for Wales, the study involved structured interviews with 28 students and 17 staff. An internal report – The Rough Guide to Lecturing – was written which, while not published, informed the teaching strategy of Cardiff University’s physics department for the next quarter of a century.
From the findings we concluded that lecture courses can be significantly enhanced by simply focusing on three principles, which we dub the three “E”s. The first “E” is enthusiasm. If a lecturer appears bored with the subject – perhaps they have given the same course for many years – why should their students be interested? This might sound obvious, but a bit of reading, or examining the latest research, can do wonders to freshen up a lecture that has been given many times before.
For both old and new courses it is usually possible to highlight at least one current research paper in a semester’s lectures. Students are not going to understand all of the paper, but that is not the point – it is the sharing in contemporary progress that will elicit excitement. Commenting on a nifty experiment in the work, or the elegance of the theory, can help to inspire both teacher and student.
As well as freshening up the lecture course’s content, another tip is to set out the wider context of the subject being taught, perhaps by touching on its history or possible exciting applications. Be inventive – we have evidence of a lecturer “live” translating parts of Louis de Broglie’s classic 1925 paper “La relation du quantum et la relativité” during a lecture. It may seem unlikely, but the students responded rather well to that.
Supporting students
The second “E” is engagement. The role of the lecturer as a guide is obvious, but it should also be emphasized that the learner’s desire is to share the lecturer’s passion for, and mastery of, a subject. Styles of lecturing and visual aids can vary greatly between people, but the important thing is to keep students thinking.
Don’t succumb to the apocryphal definition of a lecture as merely a means of transferring the lecturer’s notes to the student’s pad without passing through the mind of either person. In our study, when students were asked “What do you expect from a lecture?”, they responded simply that they wanted to learn something new – but we might extend this to a desire to learn how to do something new.
Simple demonstrations can be effective for engagement. Large foam dice, for example, can illustrate the non-commutation of 3D rotations. Fidget-spinners in the hands of students can help explain the vector nature of angular momentum. Lecturers should also ask rhetorical questions that make students think, but do not expect or demand answers, particularly in large classes.
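The foam-dice demonstration has a neat numerical counterpart. The sketch below – a generic illustration, not part of the study – shows that applying two 90° rotations in different orders leaves a vector pointing in different directions:

```python
# Finite 3D rotations do not commute: Rx(90) Ry(90) != Ry(90) Rx(90).
import numpy as np

def rot_x(deg):
    t = np.radians(deg)
    return np.array([[1, 0, 0],
                     [0, np.cos(t), -np.sin(t)],
                     [0, np.sin(t),  np.cos(t)]])

def rot_y(deg):
    t = np.radians(deg)
    return np.array([[ np.cos(t), 0, np.sin(t)],
                     [ 0,         1, 0        ],
                     [-np.sin(t), 0, np.cos(t)]])

v = np.array([0.0, 0.0, 1.0])        # a vector along z, like a die face
print(rot_x(90) @ rot_y(90) @ v)     # ends up pointing along +x, roughly (1, 0, 0)
print(rot_y(90) @ rot_x(90) @ v)     # ends up pointing along -y, roughly (0, -1, 0)
```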
More importantly, if a student asks a question, never insult them – there is no such thing as a “stupid” question. After all, what may seem a trivial point could eliminate a major conceptual block for them. If you cannot answer a technical query, admit it and say you will find out for next time – but make sure you do. Indeed, seeing that the lecturer has to work at the subject too can be very encouraging for students.
The final “E” is enablement. Make sure that students have access to supporting material. This could be additional notes; a carefully curated reading list of papers and books; or sets of suitable interesting problems with hints for solutions, worked examples they can follow, and previous exam papers. Explain what amount of self-study will be needed if they are going to benefit from the course.
Have clear and accessible statements concerning the course content and learning outcomes – in particular, what students will be expected to be able to do as a result of their learning. In our study, the general feeling was that a limited amount of continuous assessment (10–20% of the total lecture course mark) encourages both participation and overall achievement, provided students are given good feedback to help them improve.
Next time you are planning to teach a new course, or looking through those decades-old notes, remember enthusiasm, engagement and enablement. It’s not rocket science, but it will certainly help the students learn it.
Orthopaedic implants that bear loads while bones heal, then disappear once they’re no longer needed, could become a reality thanks to a new technique for enhancing the mechanical properties of zinc alloys. Developed by researchers at Monash University in Australia, the technique involves controlling the orientation and size of microscopic grains in these strong yet biodegradable materials.
Implants such as plates and screws provide temporary support for fractured bones until they knit together again. Today, these implants are mainly made from sturdy materials such as stainless steel or titanium that remain in the body permanently. Such materials can, however, cause discomfort and bone loss, and subsequent injuries to the same area risk additional damage if the permanent implants warp or twist.
To address these problems, scientists have developed biodegradable alternatives that dissolve once the bone has healed. These alternatives include screws made from magnesium-based materials such as MgYREZr (trade name MAGNEZIX), MgYZnMn (NOVAMag) and MgCaZn (RESOMET). However, these materials have compressive yield strengths of just 50 to 260 MPa, which is too low to support bones that need to bear a patient’s weight. They also produce hydrogen gas as they degrade, possibly affecting how biological tissues regenerate.
Zinc alloys do not suffer from the hydrogen gas problem. They are biocompatible, dissolving slowly and safely in the body. There is even evidence that Zn2+ ions can help the body heal by stimulating bone formation. But again, their mechanical strength is low: at less than 30 MPa, they are even worse than magnesium in this respect.
Making zinc alloys strong enough for load-bearing orthopaedic implants is not easy. Mechanical strategies such as hot-extruding binary alloys have not helped much. And methods that focus on reducing the materials’ grain size (to hamper effects like dislocation slip) have run up against a discouraging problem: at body temperature (37 °C), ultrafine-grained Zn alloys become mechanically weaker as their so-called “creep resistance” decreases.
Grain size goes bigger
In the new work, a team led by materials scientist and engineer Jian-Feng Nie tried a different approach. By increasing grain size in Zn alloys rather than decreasing it, the Monash team was able to balance the alloys’ strength and creep resistance – something they say could offer a route to stronger zinc alloys for biodegradable implants.
In compression tests of extruded Zn–0.2 wt% Mg alloy samples with grain sizes of 11 μm, 29 μm and 47 μm, the team measured stress-strain curves that show a markedly higher yield strength for coarse-grained samples than for fine-grained ones. What is more, the compressive yield strengths of these coarser-grained zinc alloys are notably higher than those of MAGNEZIX, NOVAMag and RESOMET biodegradable magnesium alloys. At the upper end, they even rival those of high-strength medical-grade stainless steels.
The researchers attribute this increased compressive yield to a phenomenon called the inverse Hall–Petch effect. This effect comes about because larger grains favour metallurgical effects such as intra-granular pyramidal slip as well as a variation of a well-known metal phenomenon called twinning, in which a specific kind of defect forms when part of the material’s crystal structure flips its orientation. Larger grains also make the alloys more flexible, allowing them to better adapt to surrounding biological tissues. This is the opposite of what happens with smaller grains, which facilitate inter-granular grain boundary sliding and make alloys more rigid.
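For orientation, the classical Hall–Petch relation – a textbook expression, not a fit to the Monash data – is

$$\sigma_y = \sigma_0 + k\,d^{-1/2},$$

where $\sigma_y$ is the yield strength, $d$ the average grain size and $\sigma_0$, $k$ material constants: normally, finer grains mean a stronger metal. In the inverse regime exploited here that trend reverses, which is why the coarser-grained zinc alloys come out stronger in compression.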
The new work, which is detailed in Nature, could aid the development of advanced biodegradable implants for orthopaedics, cardiovascular applications and other devices, says Nie. “With improved biocompatibility, these implants could be safer and do away with the need for removal surgeries, lowering patient risk and healthcare costs,” he tells Physics World. “What is more, new alloys and processing techniques could allow for more personalized treatments by tailoring materials to specific medical needs, ultimately improving patient outcomes.”
The Monash team now aims to improve the composition of the alloys and achieve more control over how they degrade. “Further studies on animals and then clinical trials will test their strength, safety and compatibility with the body,” says Nie. “After that, regulatory approvals will ensure that the biodegradable metals meet medical standards for orthopaedic implants.”
The team is also setting up a start-up company with the goal of developing and commercializing the materials, he adds.
Researchers in China have unveiled a 105-qubit quantum processor that can solve in minutes a quantum computation problem that would take billions of years using the world’s most powerful classical supercomputers. The result sets a new benchmark for claims of so-called “quantum advantage”, though some previous claims have faded after classical algorithms improved.
The fundamental promise of quantum computation is that it will reduce the computational resources required to solve certain problems. More precisely, it promises to reduce the rate at which resource requirements grow as problems become more complex. Evidence that a quantum computer can solve a problem faster than a classical computer – quantum advantage – is therefore a key measure of success.
The first claim of quantum advantage came in 2019, when researchers at Google reported that their 53-qubit Sycamore processor had solved a problem known as random circuit sampling (RCS) in just 200 seconds. Xiaobo Zhu, a physicist at the University of Science and Technology of China (USTC) in Hefei who co-led the latest work, describes RCS as follows: “First, you initialize all the qubits, then you run them in single-qubit and two-qubit gates and finally you read them out,” he says. “Since this process includes every key element of quantum computing, such as initializing the gate operations and readout, unless you have really good fidelity at each step you cannot demonstrate quantum advantage.”
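Those three steps can be sketched with a toy state-vector simulation. The code below is an illustration of the RCS recipe only; the qubit count, gate set and depth are arbitrary assumptions, not the Zuchongzhi or Sycamore circuit definitions.

```python
# Toy random-circuit-sampling sketch: initialize, apply random gates, read out.
import numpy as np

rng = np.random.default_rng(0)
n_qubits, depth, shots = 4, 6, 10

def apply_1q(state, gate, q):
    """Apply a 2x2 gate to qubit q of an n-qubit state vector."""
    state = state.reshape([2] * n_qubits)
    state = np.moveaxis(np.tensordot(gate, state, axes=([1], [q])), 0, q)
    return state.reshape(-1)

def apply_cz(state, q1, q2):
    """Apply a controlled-Z gate between qubits q1 and q2."""
    state = state.reshape([2] * n_qubits).copy()
    idx = [slice(None)] * n_qubits
    idx[q1], idx[q2] = 1, 1
    state[tuple(idx)] *= -1          # phase flip when both qubits are |1>
    return state.reshape(-1)

def random_unitary_2x2():
    """Random single-qubit unitary via QR decomposition of a random matrix."""
    m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, _ = np.linalg.qr(m)
    return q

# 1) initialize all qubits in |0...0>
state = np.zeros(2 ** n_qubits, dtype=complex)
state[0] = 1.0

# 2) alternate layers of random single-qubit gates and two-qubit CZ gates
for layer in range(depth):
    for q in range(n_qubits):
        state = apply_1q(state, random_unitary_2x2(), q)
    for q in range(layer % 2, n_qubits - 1, 2):
        state = apply_cz(state, q, q + 1)

# 3) read out: sample bitstrings from the output probability distribution
probs = np.abs(state) ** 2
probs /= probs.sum()
samples = rng.choice(2 ** n_qubits, size=shots, p=probs)
print([format(s, f"0{n_qubits}b") for s in samples])
```

The classical cost of this brute-force simulation grows exponentially with the number of qubits, which is what makes RCS a benchmark for quantum advantage.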
At the time, the Google team claimed that the best supercomputers would take 10,000 years to solve this problem. However, subsequent improvements to classical algorithms reduced this to less than 15 seconds. This pattern has continued ever since, with experimentalists pushing quantum computing forward even as information theorists make quantum advantage harder to achieve by improving techniques used to simulate quantum algorithms on classical computers.
Recent claims of quantum advantage
In October 2024, Google researchers announced that their 67-qubit Sycamore processor had solved an RCS problem that would take an estimated 3600 years for the Frontier supercomputer at the US’s Oak Ridge National Laboratory to complete. In the latest work, published in Physical Review Letters, Jian-Wei Pan, Zhu and colleagues set the bar even higher. They show that their new Zuchongzhi 3.0 processor can complete in minutes an RCS calculation that they estimate would take Frontier billions of years using the best classical algorithms currently available.
To achieve this, they redesigned the readout circuit of their earlier Zuchongzhi processor to improve its efficiency, modified the structures of the qubits to increase their coherence times and increased the total number of superconducting qubits to 105. “We really upgraded every aspect and some parts of it were redesigned,” Zhu says.
Google’s latest processor, Willow, also uses 105 superconducting qubits, and in December 2024 researchers there announced that they had used it to demonstrate quantum error correction. This achievement, together with complementary advances in Rydberg atom qubits from Harvard University’s Mikhail Lukin and colleagues, was named Physics World’s Breakthrough of the Year in 2024. However, Zhu notes that Google has not yet produced any peer-reviewed research on using Willow for RCS, making it hard to compare the two systems directly.
The USTC team now plans to demonstrate quantum error correction on Zuchongzhi 3.0. This will involve using an error correction code such as the surface code to combine multiple physical qubits into a single “logical qubit” that is robust to errors. “The requirements for error-correction readout are much more difficult than for RCS,” Zhu notes. “RCS only needs one readout, whereas error-correction needs readout many times with very short readout times…Nevertheless, RCS can be a benchmark to show we have the tools to run the surface code. I hope that, in my lab, within a few months we can demonstrate a good-quality error correction code.”
“How progress gets made”
Quantum information theorist Bill Fefferman of the University of Chicago in the US praises the USTC team’s work, describing it as “how progress gets made”. However, he offers two caveats. The first is that recent demonstrations of quantum advantage do not have efficient classical verification schemes – meaning, in effect, that classical computers cannot check the quantum computer’s work. While the USTC researchers simulated a smaller problem on both classical and quantum computers and checked that the answers matched, Fefferman doesn’t think this is sufficient. “With the current experiments, at the moment you can’t simulate it efficiently, the verification doesn’t work anymore,” he says.
The second caveat is that the rigorous hardness arguments proving that the classical computational power needed to solve an RCS problem grows exponentially with the problem’s complexity apply only to situations with no noise. This is far from the case in today’s quantum computers, and Fefferman says this loophole has been exploited in many past quantum advantage experiments.
Still, he is upbeat about the field’s prospects. “The fact that the original estimates the experimentalists gave did not match some future algorithm’s performance is not a failure: I see that as progress on all fronts,” he says. “The theorists are learning more and more about how these systems work and improving their simulation algorithms and, based on that, the experimentalists are making their systems better and better.”
Sometimes, you just have to follow your instincts and let serendipity take care of the rest.
North Ronaldsay, a remote island north of mainland Orkney, has a population of about 50 and a lot of sheep. In the early 19th century, it thrived on the kelp ash industry, producing sodium carbonate (soda ash), potassium salts and iodine for soap and glass making.
But when cheaper alternatives became available, the island turned to its unique breed of seaweed-eating sheep. In 1832 islanders built a 12-mile-long dry stone wall around the island to keep the sheep on the shore, preserving inland pasture for crops.
My connection with North Ronaldsay began last summer when my partner, Sue Bowler, and I volunteered for the island’s Sheep Festival, where teams of like-minded people rebuild sections of the crumbling wall. That experience made us all the more excited when we learned that North Ronaldsay also had a science festival.
This year’s event took place on 14–16 March and getting there was no small undertaking. From our base in Leeds, the journey involved a 500-mile drive to a ferry, a crossing to Orkney mainland, and finally, a flight in a light aircraft. With just 50 inhabitants, we had no idea how many people would turn up but instinct told us it was worth the trip.
Sue, who works for the Royal Astronomical Society (RAS), presented Back to the Moon, while together we ran hands-on maker activities, a geology walk and a trip to the lighthouse, where we explored light beams and Fresnel lenses.
The Yorkshire Branch of the Institute of Physics (IOP) provided laser-cut hoist kits to demonstrate levers and concepts like mechanical advantage, while the RAS shared Connecting the Dots – a modern LED circuit version of a Victorian after-dinner card set illustrating constellations.
Hands-on science Participants get stuck into maker activities at the festival. (Courtesy: @Lazy.Photon on Instagram)
Despite the island’s small size, the festival drew attendees from neighbouring islands, with 56 people participating in person and another 41 joining online. Across multiple events, the total accumulated attendance reached 314.
One thing I’ve always believed in science communication is to listen to your audience and never make assumptions. Orkney has a rich history of radio and maritime communications, shaped in part by the strategic importance of Scapa Flow during the Second World War.
Stars in their eyes Making a constellation board at the North Ronaldsay Science Festival. (Courtesy: @Lazy.Photon on Instagram)
The Orkney Wireless Museum is a testament to this legacy, and one of our festival guests had even reconstructed a working 1930s Baird television receiver for the museum.
Leaving North Ronaldsay was hard. The festival sparked fascinating conversations, and I hope we inspired a few young minds to explore physics and astronomy.
The author would like to thank Alexandra Wright (festival organizer), Lucinda Offer (education, outreach and events officer at the RAS) and Sue Bowler (editor of Astronomy & Geophysics).
I’m standing next to Yang Fugui in front of the High Energy Photon Source (HEPS) in Beijing’s Huairou District about 50 km north of the centre of the Chinese capital. The HEPS isn’t just another synchrotron light source. It will, when it opens later this year, be the world’s most advanced facility of its type. Construction of this giant device started in 2019 and for Yang – a physicist who is in charge of designing the machine’s beamlines – we’re at a critical point.
“This machine has many applications, but now is the time to make sure it does new science,” says Yang, who is a research fellow at the Institute of High Energy Physics (IHEP) of the Chinese Academy of Sciences (CAS), which is building the new machine. With the ring completed, optimizing the beamlines will be vital if the facility is to open up new research areas.
From the air – Google will show you photos – the HEPS looks like a giant magnifying glass lying in a grassy field. But I’ve come by land, and from my perspective it resembles a large and gleaming low-walled silver sports stadium, surrounded by well-kept bushes, flowers and fountains.
I was previously in Beijing in 2019, when ground was broken for the HEPS and the site was literally a green field. Back then, I was told, the HEPS would take six-and-a-half years to build. The project remains on schedule and, if all continues to run as planned, the facility will come online in December 2025.
Lighting up the world
There are more than 50 synchrotron radiation sources around the world, producing intense, coherent beams of electromagnetic radiation used for experiments in everything from condensed-matter physics to biology. Three significant hardware breakthroughs, one after the other, have created natural divisions among synchrotron sources, leading them to be classed by their generation.
Along with Max IV in Sweden, SIRIUS in Brazil and the Extremely Brilliant Source at the European Synchrotron Radiation Facility (ESRF) in France, the HEPS is a fourth-generation source. These days such devices are vital and prestigious pieces of scientific infrastructure, but synchrotron radiation began life as an unexpected nuisance (Phys. Perspect. 10 438).
Classical electrodynamics says that charged particles undergoing acceleration – changing their momentum or velocity – radiate energy tangentially to their trajectories. Early accelerator builders assumed they could ignore the resulting energy losses. But in 1947, scientists building electron synchrotrons at the General Electric (GE) Research Laboratory in Schenectady, New York, were dismayed to find the phenomenon was real, sapping the energies of their devices.
Where it all began Synchrotron light is created whenever charged particles are accelerated. It gets its name because it was first observed in 1947 by scientists at the General Electric Research Laboratory in New York, who saw a bright speck of light through their synchrotron accelerator’s glass vacuum chamber – the visible portion of that energy. (Courtesy: AIP Emilio Segrè Visual Archives, John P Blewett Collection)
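A standard rule of thumb for electron machines – a general textbook scaling, not specific to the GE synchrotron – shows why those losses bite so hard: the energy radiated per turn grows as the fourth power of the beam energy,

$$U_0\,[\mathrm{keV}] \approx 88.5\,\frac{E^4\,[\mathrm{GeV}^4]}{\rho\,[\mathrm{m}]},$$

where $\rho$ is the bending radius of the orbit.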
Nuisances of physics, however, have a way of turning into treasured tools. By the early 1950s, scientists were using synchrotron light to study absorption spectra and other phenomena. By the mid-1960s, they were using it to examine the surface structures of materials. But a lot of this work was eclipsed by seemingly much sexier physics.
High-energy particle accelerators, such as CERN’s Proton Synchrotron and Brookhaven’s Alternating Gradient Synchrotron, were regarded as the most exciting, well-funded and biggest instruments in physics. They were the symbols of physics for politicians, press and the public – the machines that studied the fundamental structure of the world.
Researchers who had just discovered the uses of synchrotron light were forced to scrape together parts for their instruments. These “first-generation” synchrotron sources, such as “Tantalus” in Wisconsin, the Stanford Synchrotron Radiation Project in California and the Cambridge Electron Accelerator in Massachusetts, were cobbled together from discarded pieces of high-energy accelerators or grafted onto them. They were known as “parasites”.
Early adopter A drawing of plans for the Stanford Synchrotron Radiation Project in the US, which became one of the “first generation” of synchrotron-light sources when it opened in 1974. (Courtesy: SLAC – Zawojski)
In the 1970s, accelerator physicists realized that synchrotron sources could become more useful by shrinking the angular divergence of the electron beam, thereby improving the “brightness”. Renate Chasman and Kenneth Green devised a magnet array to maximize this property. Dubbed the “Chasman–Green lattice”, it begat a second generation of dedicated light sources – built, not borrowed.
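For reference – the definition isn’t spelled out in the article – the brightness (or brilliance) these lattices improve is the photon flux divided by the source’s size and divergence in both transverse planes:

\[ B = \frac{\text{flux (photons/s per 0.1\% bandwidth)}}{4\pi^2\,\sigma_x\sigma_y\sigma_{x'}\sigma_{y'}}, \]

usually quoted in photons s⁻¹ mm⁻² mrad⁻² per 0.1% bandwidth, where σx and σy are the source sizes and σx′ and σy′ the divergences. Shrinking any of the four terms in the denominator – the Chasman–Green lattice attacked the divergences – raises B.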
Hard on the heels of the Synchrotron Radiation Source at Daresbury, which opened in the UK in 1981, the National Synchrotron Light Source (NSLS I) at Brookhaven was the first second-generation source to use such a lattice. China’s oldest light source, the Beijing Synchrotron Radiation Facility, which opened to users early in 1991, had a Chasman–Green lattice but also had to skim photons off an accelerator; it was a first-generation machine with a second-generation lattice. China’s first fully second-generation machine was the Hefei Light Source, which opened later that year.
By then, instruments called “undulators” were already starting to be incorporated into light sources. They increased brightness hundreds-fold by wiggling the electron beam up and down so that the light emitted at each wiggle adds up coherently. While undulators had been inserted into second-generation sources, the third generation built them in from the start.
Bright thinking Consisting of a periodic array of dipole magnets (red and green blocks), undulators have a static magnetic field that alternates with a wavelength λu. An electron beam passing through the magnets is forced to oscillate, emitting light hundreds of times brighter than would otherwise be possible (orange). Such undulators were added to second-generation synchrotron sources – but third-generation facilities had them built in from the start. (Courtesy: Creative Commons Attribution-Share Alike 3.0 Bastian Holst)
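For the technically minded, the wavelength an undulator emits on axis follows a standard relation (not one quoted by IHEP):

\[ \lambda_n = \frac{\lambda_u}{2n\gamma^2}\left(1 + \frac{K^2}{2} + \gamma^2\theta^2\right), \qquad K = \frac{eB_0\lambda_u}{2\pi m_e c}, \]

where n is the harmonic number, γ the electrons’ Lorentz factor, θ the observation angle and B₀ the peak magnetic field. Changing the magnet gap (and hence B₀ and K) or the beam energy therefore tunes the output wavelength.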
The first of these light sources was the ESRF, which opened to users in 1994. It was followed by the Advanced Photon Source (APS) at Argonne National Laboratory in 1996 and SPring-8 in Japan in 1997. The first third-generation source on the Chinese mainland was the Shanghai Synchrotron Radiation Facility, which opened in 2009.
In the 2010s, “multi-bend achromat” lattices – which replace each long bending magnet with a string of shorter, weaker bends – drastically shrank the emittance of the electron beam, further increasing brilliance. Several third-generation machines, including the APS, have been upgraded with such achromats, turning them into fourth-generation sources. Sirius, which has an energy of 3 GeV, was one of the first fourth-generation machines to be built from scratch.
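The reason many weak bends beat a few strong ones is a textbook scaling rather than anything specific to these machines: the natural horizontal emittance of a storage ring varies roughly as

\[ \varepsilon_x \propto \frac{\gamma^2\theta_b^{\,3}}{J_x} \sim \frac{E^2}{N_d^{\,3}}, \]

where θb is the bending angle per dipole, Nd the number of dipoles and Jx a damping partition number of order one. Splitting each bend into several smaller ones cuts θb, and the cubic dependence delivers the orders-of-magnitude drop in emittance – and hence jump in brilliance – that defines the fourth generation.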
Next in sequence The Advanced Photon Source at the Argonne National Laboratory in the US, which is a third-generation synchrotron-light source. (Courtesy: Argonne National Laboratory)
Set to operate at 6 GeV, the HEPS will be the first high-energy fourth-generation machine built from scratch. It is a step nearer to the “diffraction limit” ultimately imposed by the uncertainty principle, which restricts how tightly a photon beam’s size and divergence can be pinned down at the same time. Once the electron beam is squeezed below that limit, shrinking it further brings no extra brilliance. The limit is still on the horizon, but the HEPS draws it closer.
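As a rule of thumb – again a standard result, not an HEPS specification – a ring is diffraction-limited at wavelength λ once its electron-beam emittance in each plane drops below the photon’s intrinsic emittance,

\[ \varepsilon_{x,y} \lesssim \frac{\lambda}{4\pi}, \]

which for 1 Å (hard) X-rays works out to roughly 8 pm rad.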
The HEPS is being built next to a mountain range north of Beijing, where the bedrock provides a stable platform for the extraordinarily sensitive beams. Next door to the HEPS is a smaller stadium-like building for experimental labs and offices, and a yet smaller building for housing behind that.
Staff at the HEPS successfully stored the machine’s first electron beam in August 2024 and are now optimizing parameters such as the electron beam’s current and lifetime. When it opens at the end of the year, the HEPS will have 14 beamlines, but it is designed eventually to have around 90 experimental stations. “Our task right now is to build more beamlines,” Yang told me.
Looking around
After studying physics at the University of Science and Technology of China in Hefei, Yang took his first job as a beamline designer at the HEPS. On my visit, the machine was still more than a year from being operational and the experimental hall surrounding the ring was open. It is spacious, unlike many of the US light sources I’ve been to, which tend to be cramped after numerous upgrades of the machine and beamlines.
As with any light source, the main feature of the HEPS is its storage ring, which consists of alternating straight sections and bends. At the bends, the electrons shed X-rays like rain off a spinning umbrella. Intense, energetic and finely tunable, the X-rays are carried off down beamlines, where they are made useful for almost everything from materials science to biomedicine.
New science Fourth-generation sources, such as the High Energy Photon Source (HEPS), need to attract academic and business users from home and abroad. But only time will tell what kind of new science might be made possible. (Courtesy: IHEP)
We pass other stations optimized for 2D, 3D and nanoscale structures. Occasionally, a motorized vehicle loaded with equipment whizzes by, or workers pass us on bicycles. Every so often, I see an overhead red banner in Chinese with white lettering. Translating, Yang says the banners promote safety, care and the need for precision in doing high-quality work, signs of the renowned Chinese work ethic.
We then come to what is labelled a “pink” beam. Unlike a “white” beam, which has a broad spread of wavelengths, or a monochromatic beam of a very specific colour such as red, a pink beam has a spread of wavelengths that is neither broad nor narrow. This gives a much higher flux – typically two orders of magnitude more than a monochromatic beam – letting researchers collect diffraction patterns much faster.
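To make that factor concrete with a purely illustrative example: for a flux-limited measurement the exposure time scales inversely with flux, t ∝ 1/Φ, so a pink beam delivering 100 times the photons turns a 10 s exposure into roughly 0.1 s, all else being equal.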
Another beamline, meanwhile, is labelled “tender” because its energy falls between 2 keV (“soft” X-rays) and 10 keV (“hard” X-rays). It’s for materials “somewhere between grilled steak and Jell-O”, one HEPS researcher quips to me, referring to the wobbly American dessert. A tender beam is for purposes that don’t require atomic-scale resolution, such as probing the magnetic behaviour of atoms.
Three beam pipes pass over the experimental hall to end stations that lie outside the building. They will be used, among other things, for applications in nanoscience, with a monochromator throwing out much of the X-ray beam to make it extremely coherent. We also pass a boxy, glass structure that is a clean room for making parts, as well as a straight pipe about 100 m long that will be used to test tiny vibrations in the Earth that might affect the precision of the beam.
Challenging times
I once spoke to a director of the NSLS who would begin each day by walking around the facility, seeing what the experimentalists were up to and asking if they needed help. His trip usually took about 5–10 minutes; my tour with Yang took an hour.
But fourth-generation sources such as the HEPS face two daunting challenges. One is to cultivate a community of global users. Near the HEPS is CAS’s new Yanqi Lake campus, which lies on the other side of the mountains from Beijing and from where I can see the Great Wall meandering through the nearby hills. Faculty and students at CAS will form part of the HEPS’s academic user base, but how will the lab bring in researchers from abroad?
The HEPS will also need to attract users from business, convincing companies of the value of the machine. SPring-8 in Japan has industrial beamlines, including one sponsored by the car giant Toyota, while China’s Shanghai machine has beamlines built by the China Petroleum and Chemical Corporation (Sinopec).
Yang is certainly open to collaboration with business partners. “We welcome industries – if they can make full use of the machine, that would be enough,” he says. “If they contribute to building the beamlines, even better.”
The other big challenge for fourth-generation sources is to discover what new things are made possible by the vastly increased flux and brightness. A new generation of improved machines doesn’t necessarily produce breakthrough science; it’s not as if you can switch on a brighter machine and watch a field of new capabilities unfold before you.
Going fourth The BM18 beamline on the Extremely Brilliant Source (EBS) at the European Synchrotron Radiation Facility (ESRF) in Grenoble, France. The EBS is a dedicated fourth-generation light source, with the BM18 beamline being ideal for monitoring very slowly changing systems. (Courtesy: ESRF)
Instead, what can happen is that techniques that are demonstrations or proof-of-concept research in one generation of synchrotron source get applied in niche areas in the next, then become routine in the generation after that. A good example is speckle spectrometry – an interference-based technique that needs a sufficiently coherent light source – which should become widely used at fourth-generation sources like the HEPS.
For the HEPS, the challenge will be to discover what new research in materials, chemistry, engineering and biomedicine these techniques will make possible. Whenever I ask experimentalists at light sources what kinds of new science the fourth-generation machines will allow, the inevitable answer is something like, “Ask me in 10 years!”
Yang can’t wait that long. “I started my career here,” he says, gesturing excitedly to the machine. “Now is the time – at the beginning – to try to make this machine do new science. If it can, I’ll end my career here!”