
Garbage in, garbage out: why the success of AI depends on good data

1 September 2025 at 14:00

Artificial intelligence (AI) is fast becoming the new “Marmite”. Like the salty spread that polarizes taste-buds, you either love AI or you hate it. To some, AI is miraculous, to others it’s threatening or scary. But one thing is for sure – AI is here to stay, so we had better get used to it.

In many respects, AI is very similar to other data-analytics solutions in that how it works depends on two things. One is the quality of the input data. The other is the integrity of the user to ensure that the outputs are fit for purpose.

Previously a niche tool for specialists, AI is now widely available for general-purpose use, in particular through generative AI (GenAI) tools. Built on large language models (LLMs), these are accessible through, for example, OpenAI’s ChatGPT, Microsoft Copilot, Anthropic’s Claude, Adobe Firefly or Google Gemini.

GenAI has become possible thanks to the availability of vast quantities of digitized data and significant advances in computing power. Neural-network models of this size would in fact have been impossible without these two fundamental ingredients.

GenAI is incredibly powerful when it comes to searching and summarizing large volumes of unstructured text. It exploits unfathomable amounts of data and is getting better all the time, offering users significant benefits in terms of efficiency and labour saving.

Many people now use it routinely for writing meeting minutes, composing letters and e-mails, and summarizing the content of multiple documents. AI can also tackle complex problems that would be difficult for humans to solve, such as climate modelling, drug discovery and protein-structure prediction.

I’d also like to give a shout out to tools such as Microsoft Live Captions and Google Translate, which help people from different locations and cultures to communicate. But like all shiny new things, AI comes with caveats, which we should bear in mind when using such tools.

User beware

LLMs, by their very nature, have been trained on historical data. They can’t therefore tell you exactly what may happen in the future, or indeed what may have happened since the model was originally trained. Models can also be constrained in their answers.

Take the Chinese AI app DeepSeek. When the BBC asked it what had happened at Tiananmen Square in Beijing on 4 June 1989 – when Chinese troops cracked down on protestors – the chatbot’s answer was suppressed. Now, this is a very obvious piece of information control, but subtler instances of censorship will be harder to spot.


We also need to be conscious of model bias. At least some of the training data will probably come from social media and public chat forums such as X, Facebook and Reddit. Trouble is, we can’t know all the nuances of the data that models have been trained on – or the inherent biases that may arise from this.

One example of unfair gender bias arose when Amazon developed an AI recruiting tool. Trained on 10 years’ worth of CVs – mostly from men – the tool was found to favour male candidates. Thankfully, Amazon ditched it. But then there was Apple’s gender-biased credit-card algorithm, which led to men being given higher credit limits than women with similar credit ratings.

Another problem with AI is that it sometimes acts as a black box, making it hard for us to understand how, why or on what grounds it arrived at a certain decision. Think about those online Captcha tests we have to take when accessing online accounts. They often present us with a street scene and ask us to select the parts of the image containing a traffic light.

The tests are designed to distinguish between humans and computers or bots – the expectation being that AI can’t consistently recognize traffic lights. However, AI-based advanced driver-assistance systems (ADAS) presumably perform this function seamlessly on our roads. If not, surely drivers are being put at risk?

A colleague of mine, who drives an electric car that happens to share its name with a well-known physicist, confided that the ADAS in his car becomes unresponsive, especially when at traffic lights with filter arrows or multiple sets of traffic lights. So what exactly is going on with ADAS? Does anyone know?

Caution needed

My message when it comes to AI is simple: be careful what you ask for. Many GenAI applications will store user prompts and conversation histories, and will likely use this data for training future models. Once you enter your data, there’s no guarantee it’ll ever be deleted. So think carefully before sharing any personal data, such as medical or financial information. It also pays to keep prompts non-specific (avoid using your name or date of birth) so that they cannot be traced directly to you.

Democratization of AI is a great enabler and it’s easy for people to apply it without an in-depth understanding of what’s going on under the hood. But we should be checking AI-generated output before we use it to make important decisions and we should be careful of the personal information we divulge.

It’s easy to become complacent when we are not doing all the legwork. We are reminded under the terms of use that “AI can make mistakes”, but I wonder what will happen if models start consuming AI-generated erroneous data. Just as with other data-analytics problems, AI suffers from the old adage of “garbage in, garbage out”.

But sometimes I fear it’s even worse than that. We’ll need a collective vigilance to avoid AI being turned into “garbage in, garbage squared”.



Why foamy heads on Belgian beers last so long

29 August 2025 at 16:44

It’s well documented that a frothy head on a beverage can stop the liquid from sloshing around and onto the floor – it’s one reason why coffee swills around more than beer when you walk around with it.

When it comes to beer, a clear sign of a good brew is a big head of foam at the top of a poured glass.

Beer foam is made of many small bubbles of air, separated from each other by thin films of liquid. These thin films must remain stable, or the bubbles will pop, and the foam will collapse.

What holds these thin films together is not completely understood; likely candidates include protein aggregates, surface viscosity and the presence of surfactants – molecules that reduce surface tension and are found in soaps and detergents.

To find out more, researchers from ETH Zurich and Eindhoven University of Technology (EUT) investigated beer-foam stability for different types of beers at varying stages of the fermentation process.

They found that for single-fermentation beers, the foams are mostly held together by the surface viscosity of the beer. This is influenced by proteins in the beer – the more proteins it contains, the more viscous the film and the more stable the foam will be.

“We can directly visualize what’s happening when two bubbles come into close proximity,” notes EUT material scientist Emmanouil Chatzigiannakis. “We can directly see the bubble’s protein aggregates, their interface, and their structure.”

When it comes to double-fermented beers, however, the proteins in the beer are altered slightly by yeast cells and come together to form a two-dimensional membrane that keeps foam intact longer.

The head was found to be even more stable for triple-fermented beers, which include Belgian Trappist beers. The proteins change further and behave like a surfactant that stabilizes the bubbles.

The team says that understanding how the fermentation process alters the stability of bubbles could lead to more efficient ways of creating foams – or identify ways to control the amount of froth so that everyone can pour a perfect glass of beer every time. Cheers!


Making molecules with superheavy elements could shake up the periodic table

29 August 2025 at 14:00

Nuclear scientists at the Lawrence Berkeley National Laboratory (LBNL) in the US have produced and identified molecules containing nobelium for the first time. This element, which has an atomic number of 102, is the heaviest ever to be observed in a directly identified molecule, and team leader Jennifer Pore says the knowledge gained from such work could lead to a shake-up at the bottom of the periodic table.

“We compared the chemical properties of nobelium side-by-side to simultaneously produced molecules containing actinium (element number 89),” says Pore, a research scientist at LBNL. “The success of these measurements demonstrates the possibility to further improve our understanding of heavy and superheavy-element chemistry and so ensure that these elements are placed correctly on the periodic table.”

The periodic table currently lists 118 elements. As well as vertical “groups” containing elements with similar properties and horizontal “periods” in which the number of protons (atomic number Z) in the nucleus increases from left to right, these elements are arranged in three blocks. The block that contains actinides such as actinium (Ac) and nobelium (No), as well as the slightly lighter lanthanide series, is often shown offset, below the bottom of the main table.

The end of a predictive periodic table?

Arranging the elements this way is helpful because it gives scientists an intuitive feel for the chemical properties of different elements. It has even made it possible to predict the properties of new elements as they are discovered in nature or, more recently, created in the laboratory.

The problem is that the traditional patterns we’ve come to know and love may start to break down for elements at the bottom of the table, putting an end to the predictive periodic table as we know it. The reason, Pore explains, is that these heavy nuclei have a very large number of protons. In the actinides (Z > 88), for example, the intense charge of these “extra” protons exerts such a strong pull on the inner electrons that relativistic effects come into play, potentially changing the elements’ chemical properties.

“As some of the electrons are sucked towards the centre of the atom, they shield some of the outer electrons from the pull,” Pore explains. “The effect is expected to be even stronger in the superheavy elements, and this is why they might potentially not be in the right place on the periodic table.”
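
To get a feel for the size of these relativistic effects, a crude hydrogen-like estimate treats an innermost (1s) electron as moving at a speed of roughly Zαc, where α ≈ 1/137 is the fine-structure constant. The sketch below is a back-of-the-envelope illustration under that assumption; it is not part of the LBNL analysis.

```python
# Rough estimate of relativistic effects on the innermost (1s) electrons.
# Hydrogen-like picture: v/c ~ Z*alpha, gamma = 1/sqrt(1 - (v/c)^2).
# Illustrative only; real many-electron atoms require full relativistic calculations.
import math

ALPHA = 1 / 137.035999  # fine-structure constant

def lorentz_factor(Z):
    beta = Z * ALPHA  # v/c of a 1s electron in a hydrogen-like atom
    return 1 / math.sqrt(1 - beta**2)

for name, Z in [("iron", 26), ("actinium", 89), ("nobelium", 102)]:
    gamma = lorentz_factor(Z)
    # In this simple picture the 1s orbital radius shrinks by roughly a factor 1/gamma
    print(f"{name:9s} Z={Z:3d}  v/c = {Z*ALPHA:.2f}  gamma = {gamma:.2f}  "
          f"1s contraction ~ {100 * (1 - 1/gamma):.0f}%")
```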

Understanding the full impact of these relativistic effects is difficult because elements heavier than fermium (Z = 100) need to be produced and studied atom by atom. This means resorting to complex equipment such as accelerated ion beams and the FIONA (For the Identification Of Nuclide A) device at LBNL’s 88-Inch Cyclotron Facility.

Producing and directly identifying actinide molecules

The team chose to study Ac and No in part because they represent the extremes of the actinide series. As the first in the series, Ac has no electrons in its 5f shell and is so rare that the crystal structure of an actinium-containing molecule was only determined recently. The chemistry of No, which contains a full complement of 14 electrons in its 5f shell and is the heaviest of the actinides, is even less well known.

In the new work, which is described in Nature, Pore and colleagues produced and directly identified molecular species containing Ac and No ions. To do this, they first had to produce Ac and No. They achieved this by accelerating beams of 48Ca with the 88-Inch Cyclotron and directing them onto targets of 169Tm and 208Pb, respectively. They then used the Berkeley Gas-filled Separator to separate the resulting actinide ions from unreacted beam material and reaction by-products.

The next step was to inject the ions into a chamber in the FIONA spectrometer known as a gas catcher. This chamber was filled with high-purity helium, as well as trace amounts of H2O and N2, at a pressure of approximately 150 torr. After interactions with the helium gas reduced the actinide ions to their 2+ charge state, so-called “coordination compounds” were able to form between the 2+ actinide ions and the H2O and N2 impurities. This compound-formation step took place either in the gas catcher itself or as the gas-ion mixture exited the chamber via a 1.3-mm opening and entered a low-pressure (several torr) environment. This transition caused the gas to expand at supersonic speeds, cooling it rapidly and allowing the molecular species to stabilize.

Once the actinide molecules formed, the researchers transferred them to a radio-frequency quadrupole cooler-buncher ion trap. This trap confined the ions for up to 50 ms, during which time they continued to collide with the helium buffer gas, eventually reaching thermal equilibrium. After they had cooled, the molecules were reaccelerated using FIONA’s mass spectrometer and identified according to their mass-to-charge ratio.
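
As a simple illustration of how such a mass-to-charge measurement separates molecular candidates, the arithmetic below compares a few hypothetical 2+ species. The specific molecules and the approximate masses are assumptions chosen for illustration, not the species reported by the team.

```python
# Illustrative mass-to-charge arithmetic for doubly charged "coordination compounds".
# Atomic masses (in u) are approximate and the species are hypothetical examples.
MASS_U = {"254No": 254.09, "227Ac": 227.03, "H2O": 18.01, "N2": 28.01}

def mass_to_charge(ion, n_water=0, n_nitrogen=0, charge=2):
    m = MASS_U[ion] + n_water * MASS_U["H2O"] + n_nitrogen * MASS_U["N2"]
    return m / charge

candidates = {
    "254No2+":        mass_to_charge("254No"),
    "254No(H2O)2+":   mass_to_charge("254No", n_water=1),
    "254No(H2O)2 2+": mass_to_charge("254No", n_water=2),
    "227Ac(H2O)2+":   mass_to_charge("227Ac", n_water=1),
}
for species, mq in candidates.items():
    print(f"{species:15s} m/q = {mq:6.2f} u per unit charge")
```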

A fast and sensitive instrument

FIONA is much faster than previous such instruments and more sensitive. Both properties are important when studying the chemistry of heavy and superheavy elements, which Pore notes are difficult to make, and which decay quickly. “Previous experiments measured the secondary particles made when a molecule with a superheavy element decayed, but they couldn’t identify the exact original chemical species,” she explains. “Most measurements reported a range of possible molecules and were based on assumptions from better-known elements. Our new approach is the first to directly identify the molecules by measuring their masses, removing the need for such assumptions.”

As well as improving our understanding of heavy and superheavy elements, Pore says the new work might also have applications in radioactive isotopes used in medical treatment. For example, the 225Ac isotope shows promise for treating certain metastatic cancers, but it is difficult to make and only available in small quantities, which limits access for clinical trials and treatment. “This means that researchers have had to forgo fundamental chemistry experiments to figure out how to get it into patients,” Pore notes. “But if we could understand such radioactive elements better, we might have an easier time producing the specific molecules needed.”


Super sticky underwater hydrogels designed using data mining and AI

29 August 2025 at 10:00

The way in which new materials are designed is changing, with data becoming ever more important in the discovery and design process. Designing soft materials is a particularly tricky task that requires selection of different “building blocks” (monomers in polymeric materials, for example) and optimization of their arrangement in molecular space.

Soft materials also exhibit many complex behaviours that need to be balanced, and their molecular and structural complexities make it difficult for computational methods to help in the design process – often requiring costly trial and error experimental approaches instead. Now, researchers at Hokkaido University in Japan have combined artificial intelligence (AI) with data mining methods to develop an ultra-sticky hydrogel material suitable for very wet environments – a difficult design challenge because the properties that make materials soft don’t usually promote adhesion. They report their findings in Nature.

Challenges of designing sticky hydrogels

Hydrogels are permeable soft materials composed of interlinked polymer networks that hold water within the network. They are highly versatile, with properties controlled by altering the chemical makeup and structure of the material.

Designing hydrogels computationally to perform a specific function is difficult, however, because the polymers used to build the hydrogel network can contain a plethora of chemical functional groups, complicating the discovery of suitable polymers and the structural makeup of the hydrogel. The properties of hydrogels are also influenced by factors including the molecular arrangement and intermolecular interactions between molecules (such as van der Waals forces and hydrogen bonds). There are further challenges for adhesive hydrogels in wet environments, as hydrogels will swell in the presence of water, which needs to be factored into the material design.

Data driven methods provide breakthrough

To develop a hydrogel with a strong and lasting underwater adhesion, the researchers mined data from the National Center for Biotechnology Information (NCBI) Protein database. This database contains the amino acid sequences responsible for adhesion in underwater biological systems – such as those found in bacteria, viruses, archaea and eukaryotes. The protein sequences were synthetically mimicked and adapted for the polymer strands in hydrogels.

“We were inspired by nature’s adhesive proteins, but we wanted to go beyond mimicking a few examples. By mining the entire protein database, we aimed to systematically explore new design rules and see how far AI could push the boundaries of underwater adhesion,” says co-lead author Hailong Fan.

The researchers used information from the database to initially design and synthesize 180 bioinspired hydrogels, each with a unique polymer network and all of which showed adhesive properties beyond other hydrogels. To improve them further, the team employed machine learning to create hydrogels demonstrating the strongest underwater adhesive properties to date, with instant and repeatable adhesive strengths exceeding 1 MPa – an order-of-magnitude improvement over previous underwater adhesives. In addition, the AI-designed hydrogels were found to be functional across many different surfaces in both fresh and saline water.
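
The article does not detail the team’s machine-learning pipeline, so the sketch below is only a generic illustration of this kind of data-driven design loop: fit a model to measured gels, then rank new candidate compositions by predicted adhesion. The features, synthetic data and choice of model are all assumptions.

```python
# Generic sketch of a data-driven materials design loop (illustrative assumptions only):
# train a regressor on measured hydrogels, then rank unseen candidate compositions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Toy training set: fractions of three monomer classes per gel, plus measured adhesion (MPa)
X_train = rng.dirichlet([1, 1, 1], size=180)  # 180 gels, as in the study, but synthetic data
y_train = 0.4 + 1.2 * X_train[:, 0] * X_train[:, 1] + 0.05 * rng.normal(size=180)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Rank a batch of candidate compositions by predicted underwater adhesion strength
candidates = rng.dirichlet([1, 1, 1], size=1000)
predicted = model.predict(candidates)
best = candidates[np.argmax(predicted)]
print("best candidate composition:", np.round(best, 2),
      "predicted adhesion ~", round(float(predicted.max()), 2), "MPa")
```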

“The key achievement is not just creating a record-breaking underwater adhesive hydrogel but demonstrating a new pathway – moving from biomimetic experience to data-driven, AI-guided material design,” says Fan.

A versatile adhesive

The researchers took the three best-performing hydrogels and tested them in different wet environments to show that they could maintain their adhesive properties for long periods. One hydrogel was used to stick a rubber duck to a rock by the sea, where it remained in place despite continuous wave impacts over many tide cycles. A second was used to patch a 20 mm hole in a water-filled pipe and instantly stopped a high-pressure leak; this hydrogel remained in place for five months without issue. The third was placed under the skin of mice to demonstrate biocompatibility.

The super-strong adhesive properties in wet environments could have wide-ranging applications, from biomedical engineering (prosthetic coatings or wearable biosensors) to deep-sea exploration and marine farming. The researchers also note that this data-driven approach could be adapted for designing other functional soft materials.

When asked about what’s next for this research, Fan says that “our next step is to study the molecular mechanisms behind these adhesives in more depth, and to expand this data-driven design strategy to other soft materials, such as self-healing and biomedical hydrogels”.


From a laser lab to The Economist: physicist Jason Palmer on his move to journalism

28 August 2025 at 15:55

My guest in this episode of the Physics World Weekly podcast is the journalist Jason Palmer, who co-hosts “The Intelligence” podcast at The Economist.

Palmer did a PhD in chemical physics at Imperial College London before turning his hand to science writing with stints at the BBC and New Scientist.

He explains how he made the transition from the laboratory to the newsroom and offers tips for scientists planning to make the same career journey. We also chat about how artificial intelligence is changing how journalists work.


Crainio’s Panicos Kyriacou explains how their light-based instrument can help diagnose brain injury

28 August 2025 at 12:55

Traumatic brain injury (TBI), caused by a sudden impact to the head, is a leading cause of death and disability. After such an injury, the most important indicator of severity is intracranial pressure – the pressure inside the skull. But currently, the only way to assess this is by inserting a pressure sensor into the patient’s brain. UK-based startup Crainio aims to change this by developing a non-invasive method to measure intracranial pressure using a simple optical probe attached to the patient’s forehead.

Can you explain why diagnosing TBI is such an important clinical challenge?

Every three minutes in the UK, someone is admitted to hospital with a head injury – it’s a very common problem. But when someone has a blow to the head, nobody knows how bad it is until they actually reach the hospital. TBI is something that, at the moment, cannot be assessed at the point of injury.

From the time of impact to the time that the patient receives an assessment by a neurosurgical expert is known as the golden hour. And nobody knows what’s happening to the brain during this time – you don’t know how best to manage the patient, whether they have a severe TBI with intracranial pressure rising in the head, or just a concussion or a medium TBI.

Once at the hospital, the neurosurgeons have to assess the patient’s intracranial pressure to determine whether it is above the threshold that classifies the injury as severe. And to do that, they have to drill a hole in the head – literally – and place an electrical probe into the brain. This really is one of the most invasive non-therapeutic procedures, and you obviously can’t do it to every patient who comes in with a blow to the head. It has its risks: there is a risk of haemorrhage or of infection.

Therefore, there’s a need to develop technologies that can measure intracranial pressure more effectively, earlier and in a non-invasive manner. For many years, this was almost like a dream: “How can you access the brain and see if the pressure is rising in the brain, just by placing an optical sensor on the forehead?”

Crainio has now created such a non-invasive sensor; what led to this breakthrough?

The research goes back to 2016, at the Research Centre for Biomedical Engineering at City, University of London (now City St George’s, University of London), when the National Institute for Health Research (NIHR) gave us our first grant to investigate the feasibility of a non-invasive intracranial sensor based on light technologies. We developed a prototype, secured the intellectual property and conducted a feasibility study on TBI patients at the Royal London Hospital, the biggest trauma hospital in the UK.

It was back in 2021, before Crainio was established, that we first discovered that after we shone certain frequencies of light, like near-infrared, into the brain through the forehead, the optical signals coming back – known as the photoplethysmogram, or PPG – contained information about the physiology or the haemodynamics of the brain.

When the pressure in the brain rises, the brain swells up, but it cannot go anywhere because the skull is like concrete. Therefore, the arteries and vessels in the brain are compressed by that pressure. PPG measures changes in blood volume as it pulses through the arteries during the cardiac cycle. If you have a viscoelastic artery that is opening and closing, the volume of blood changes and this is captured by the PPG. Now, if you have an artery that is compromised, pushed down because of pressure in the brain, that viscoelastic property is impacted and that will impact the PPG.

Changes in the PPG signal arising from compression of the vessels in the brain can therefore give us information about the intracranial pressure. We developed algorithms to interrogate this optical signal, along with machine learning models to estimate intracranial pressure.
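
To illustrate the general shape of such a pipeline, the sketch below extracts a few simple morphological features from each PPG pulse and regresses them against reference intracranial pressure readings. The features, synthetic data and model are illustrative assumptions, not Crainio’s actual algorithm.

```python
# Schematic PPG-to-ICP pipeline (illustrative assumptions, not Crainio's algorithm):
# extract simple morphological features per pulse, then regress against reference ICP.
import numpy as np
from sklearn.linear_model import Ridge

def pulse_features(pulse):
    """Simple morphological descriptors of a single PPG pulse."""
    pulse = np.asarray(pulse, dtype=float)
    amplitude = pulse.max() - pulse.min()
    rise_time = np.argmax(pulse) / len(pulse)    # fraction of the pulse before the peak
    mean_height = np.mean(pulse - pulse.min())   # crude measure of pulse area
    return [amplitude, rise_time, mean_height]

# Toy data: synthetic pulses and matching invasive ICP readings (mmHg)
rng = np.random.default_rng(1)
pulses = [np.sin(np.linspace(0, np.pi, 100)) * a + rng.normal(0, 0.01, 100)
          for a in rng.uniform(0.5, 1.5, size=200)]
icp_reference = np.array([25 - 10 * p.max() for p in pulses])  # invented relationship

X = np.array([pulse_features(p) for p in pulses])
model = Ridge().fit(X, icp_reference)
print("estimated ICP for the first pulse:",
      round(float(model.predict(X[:1])[0]), 1), "mmHg")
```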

How did the establishment of Crainio help to progress the sensor technology?

Following our research within the university, Crainio was set up in 2022. It brought together a team of experts in medical devices and optical sensors to lead the further development and commercialization of this device. And this small team worked tirelessly over the last few years to generate funding to progress the development of the optical sensor technology and bring it to a level that is ready for further clinical trials.

Panicos Kyriacou “At Crainio we want to create a technology that could be used widely, because there is a massive need, but also because it’s affordable.” (Courtesy: Crainio)

In 2023, Crainio was successful with an Innovate UK biomedical catalyst grant, which will enable the company to engage in a clinical feasibility study, optimize the probe technology and further develop the algorithms. The company was later awarded another NIHR grant to move into a validation study.

The interest in this project has been overwhelming. We’ve had very positive feedback from the neurocritical care community. But we also see a lot of interest from communities where injury to the brain is significant, such as rugby associations, for example.

Could the device be used in the field, at the site of an accident?

While Crainio’s primary focus is to deliver a technology for use in critical care, the system could also be used in ambulances, in helicopters, during patient transfers and beyond. The device is non-invasive: the sensor is just like a sticking plaster on the forehead and the backend is a small box containing all the electronics. In the past few years, working in a research environment, the technology was connected to a laptop computer. But we are now transferring everything into a graphical interface, with a monitor to see the signals and the intracranial pressure values on a portable device.

Following preliminary tests on patients, Crainio is now starting a new clinical trial. What do you hope to achieve with the next measurements?

The first study, a feasibility study on the sensor technology, was done during the time when the project was within the university. The second round is led by Crainio using a more optimized probe. Learning from the technical challenges we had in the first study, we tried to mitigate them with a new probe design. We’ve also learned more about the challenges associated with the acquisition of signals, the type of patients, how long we should monitor.

We are now at the stage where Crainio has redeveloped the sensor and it looks amazing. The technology has received approval from the MHRA, the UK regulator, for clinical studies, and ethical approvals have been secured. This will be an opportunity to work with the new probe, which has more advanced electronics that enable more detailed acquisition of signals from TBI patients.

We are again partnering with the Royal London Hospital, as well as collaborators from the traumatic brain injury team at Cambridge, and we’re expecting to enter clinical trials soon. These are patients admitted into neurocritical trauma units who all have an invasive intracranial pressure bolt. This will allow us to compare the physiological signal coming from our intracranial pressure sensor with the gold standard.

The signals will be analysed by Crainio’s data science team, with machine learning algorithms used to look at changes in the PPG signal, extract morphological features and build models to develop the technology further. So we’re enriching the study with a more advanced technology, and this should lead to more accurate machine learning models for correctly capturing dynamic changes in intracranial pressure.


This time around, we will also record more information from the patients. We will look at CT scans to see whether scalp density and thickness have an impact. We will also collect data from commercial medical monitors within neurocritical care to see the relation between intracranial pressure and other physiological data acquired from the patients. We aim to expand our knowledge of what happens when a patient’s intracranial pressure rises – what happens to their blood pressure? What happens to other physiological measurements?

How far away is the system from being used as a standard clinical tool?

Crainio is very ambitious. We’re hoping that within the next couple of years we will progress far enough to achieve CE marking and meet all the standards necessary to launch a medical device.

The primary motivation of Crainio is to create solutions for healthcare, developing a technology that can help clinicians to diagnose TBI effectively, faster, accurately and earlier. This can only yield better outcomes and improve patients’ quality-of-life.

Of course, as a company we’re interested in being successful commercially. But the ambition here is, first of all, to keep the cost affordable. We live in a world where medical technologies need to be affordable, not only for Western nations, but for nations that cannot afford state-of-the-art technologies. So this is another of Crainio’s primary aims, to create a technology that could be used widely, because there is a massive need, but also because it’s affordable.


Extremely stripped star reveals heavy elements as it explodes

28 August 2025 at 10:00
Stripped star Artist’s impression of the star that exploded to create SN 2021yfj. Shown are the ejection of silicon (grey), sulphur (yellow) and argon (purple) just before the final explosion. (Courtesy: WM Keck Observatory/Adam Makarenko)

For the first time, astronomers have observed clear evidence for a heavily stripped star that has shed many of its outer layers before its death in a supernova explosion. Led by Steve Schulze at Northwestern University, the team has spotted the spectral signatures of heavier elements that are usually hidden deep within stellar interiors.

Inside a star, atomic nuclei fuse together to form heavier elements in a process called nucleosynthesis. This releases a vast amount of energy that offsets the crushing force of gravity.

As stars age, different elements are consumed and produced. “Observations and models of stars tell us that stars are enormous balls of hydrogen when they are born,” Schulze explains. “The temperature and density at the core are so high that hydrogen is fused into helium. Subsequently, helium fuses into carbon, and this process continues until iron is produced.”

Ageing stars are believed to have an onion-like structure, with a hydrogen outer shell enveloping deeper layers of successively heavier elements. Near the end of a star’s life, inner-shell elements including silicon, sulphur and argon fuse to form a core of iron. Unlike lighter elements, iron does not release energy as it fuses, but instead consumes energy from its surroundings. As a result, the star can no longer withstand its own gravity: it collapses rapidly inward and then explodes in a dramatic supernova.

Hidden elements

Rarely, astronomers can observe an old star that has blown out its outer layers before exploding. When the explosion finally occurs, heavier elements that are usually hidden within deeper shells create absorption lines in the supernova’s light spectrum, allowing astronomers to determine the compositions of these inner layers. So far, inner-layer elements as heavy as carbon and oxygen have been observed, but there has been no direct evidence of elements from deeper layers.

Yet in 2021, a mysterious new observation was made by a programme of the Zwicky Transient Facility headed by Avishay Gal-Yam at the Weizmann Institute of Science in Israel. The team was scanning the sky for signs of infant supernovae at the very earliest stages following their initial explosion.

“On 7 September 2021 it was my duty to look for infant supernovae,” Schulze recounts. “We discovered SN 2021yfj due to its rapid increase in brightness. We immediately contacted Alex Filippenko’s group at the University of California Berkeley to ask whether they could obtain a spectrum of this supernova.”

When the results arrived, the team realised that the absorption lines in the supernova’s spectrum were unlike anything they had encountered previously. “We initially had no idea that most of the features in the spectrum were produced by silicon, sulphur, and argon,” Schulze continues. Gal-Yam took up the challenge of identifying the mysterious features in the spectrum.

Shortly before death

In the meantime, the researchers examined simultaneous observations of SN 2021yfj, made by a variety of ground- and space-based telescopes. When Gal-Yam’s analysis was complete, all of the team’s data confirmed the same result. “We had detected a supernova embedded in a shell of material rich in silicon, sulphur, and argon,” Schulze describes. “These elements are formed only shortly before a star dies, and are often hidden beneath other materials – therefore, they are inaccessible under normal circumstances.”

The result provided clear evidence that the star had been more heavily stripped back towards the end of its life than any other observed previously, shedding many of its outer layers before the final explosion.

“SN 2021yfj demonstrates that stars can die in far more extreme ways than previously imagined,” says Schulze. “It reveals that our understanding of how stars evolve and die is still not complete, despite billions of them having already been studied.” By studying their results, the team now hopes that astronomers can better understand the later stages of stellar evolution, and the processes leading up to these dramatic ends.

The research is described in Nature.


Rainer Weiss: US gravitational-wave pioneer dies aged 92

27 August 2025 at 18:05

Rainer Weiss, who shared the Nobel Prize for Physics in 2017 for the discovery of gravitational waves, died on 25 August at the age of 92. Weiss came up with the idea of detecting gravitational waves by measuring changes in distance as tiny as 10⁻¹⁸ m via an interferometer several kilometres long. His proposal eventually led to the formation of the twin Laser Interferometer Gravitational-Wave Observatory (LIGO), which first detected such waves in 2015.

Weiss was born in Berlin, Germany, on 29 September 1932, shortly before the Nazis rose to power. With a father who was Jewish and an ardent communist, Weiss and his family were forced to flee the country – first to Czechoslovakia and then, in 1939, to the US. Weiss was raised in New York, finishing his school days at the private Columbia Grammar School thanks to a scholarship from a refugee relief organization.

In 1950 Weiss began studying electrical engineering at the Massachusetts Institute of Technology (MIT) before switching to physics, eventually earning a PhD in 1962 for work developing atomic clocks under the supervision of Jerrold Zacharias. He then worked at Tufts University before moving to Princeton University, where he was a research associate with the astronomer and physicist Robert Dicke.

In 1964 Weiss returned to MIT, where he began developing his idea of using a large interferometer to measure gravitational waves. Teaming up with Kip Thorne at the California Institute of Technology (Caltech), Weiss drew up a feasibility study for a kilometre-scale laser interferometer. In 1979 the National Science Foundation funded Caltech and MIT to develop the proposal to build LIGO.

Construction of two LIGO detectors – one in Hanford, Washington and the other at Livingston, Louisiana, each of which featured arms 4 km long – began in 1990, with the facilities opening in 2002. After almost a decade of operation, however, no waves had been detected so in 2011 the two observatories were upgraded to make them 10 times more sensitive than before.
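
To put those numbers in context, the displacement sensitivity of 10⁻¹⁸ m quoted above corresponds to a tiny dimensionless strain h = ΔL/L over a 4 km arm; a one-line calculation shows just how tiny.

```python
# Back-of-the-envelope strain sensitivity: h = delta_L / L
delta_L = 1e-18      # metres, the displacement sensitivity quoted above
arm_length = 4e3     # metres, the length of each LIGO arm
strain = delta_L / arm_length
print(f"strain h = {strain:.1e}")  # ~2.5e-22, comparable to the ~1e-21 peak strain of GW150914
```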

On 14 September 2015 – during the first observation run of what was known as Advanced LIGO, or aLIGO – the interferometer detected gravitational waves from two merging black holes some 1.3 billion light-years from Earth. The discovery was announced by those working on aLIGO in February 2016.

The following year, Weiss was awarded one half of the 2017 Nobel Prize for Physics “for decisive contributions to the LIGO detector and the observation of gravitational waves”. The other half was shared by Thorne and fellow Caltech physicist Barry Barish, who was LIGO project director.

‘An indelible mark’

As well as pioneering the detection of gravitational waves, Weiss also developed atomic clocks and led efforts to measure the spectrum of the cosmic microwave background via weather balloons. He co-founded NASA’s Cosmic Background Explorer project, measurements from which have helped support the Big Bang theory describing the expansion of the universe.

In addition to the Nobel prize, Weiss was awarded the Gruber Prize in Cosmology in 2006, the Einstein Prize from the American Physical Society in 2007 as well as the Shaw Prize and the Kavli Prize in Astrophysics, both in 2016.

MIT’s dean of science Nergis Mavalvala, who worked with Weiss to build an early prototype of a gravitational-wave detector as part of her PhD in the 1990s, says that every gravitational-wave event that is observed “will be a reminder of his legacy”.

“[Weiss] leaves an indelible mark on science and a gaping hole in our lives,” says Mavalvala. “I am heartbroken, but also so grateful for having him in my life, and for the incredible gifts he has given us – of passion for science and discovery, but most of all to always put people first.”


Famous double-slit experiment gets its cleanest test yet

27 August 2025 at 14:00

Scientists at the Massachusetts Institute of Technology (MIT) in the US have achieved the cleanest demonstration yet of the famous double-slit experiment. Using two single atoms as the slits, they inferred the photon’s path by measuring subtle changes in the atoms’ properties after photon scattering. Their results matched the predictions of quantum theory: interference fringes when no path was observed, two bright spots when it was.

First performed in the 1800s by Thomas Young, the double-slit experiment has been revisited many times. Its setup is simple: send light toward a pair of slits in a screen and watch what happens. Its outcome, however, is anything but. If the light passes through the slits unobserved, as it did in Young’s original experiment, an interference pattern of bright and dark fringes appears, like ripples overlapping in a pond. But if you observe which slit the light goes through, as Albert Einstein proposed in a 1920s “thought experiment” and as other physicists have since demonstrated in the laboratory, the fringes vanish in favour of two bright spots. Hence, whether light acts as a wave (fringes) or a particle (spots) depends on whether anyone observes it. Reality itself seems to shift with the act of looking.

The great Einstein–Bohr debate

Einstein disliked the implications of this, and he and Niels Bohr debated them extensively. According to Einstein, observation only has an effect because it introduces noise. If the slits were mounted on springs, he suggested, their recoil would reveal the photon’s path without destroying the fringes.

Bohr countered that measuring the photon’s recoil precisely enough to reveal its path would blur the slits’ positions and erase interference. For him, this was not a flaw of technology but a law of nature – namely, his own principle of complementarity, which states that quantum systems can show wave-like or particle-like behaviour, but never both at once.

Physicists have performed numerous versions of the experiment since, and each time the results have sided with Bohr. Yet the unavoidable noise in real set-ups left room for doubt that this counterintuitive rule was truly fundamental.

Atoms as slits

To celebrate the International Year of Quantum Science and Technology, physicists in Wolfgang Ketterle’s group at MIT performed Einstein’s thought experiment directly. They began by cooling more than 10,000 rubidium atoms to near absolute zero and trapping them in a laser-made lattice such that each one acted as an individual scatterer of light. If a faint beam of light was sent through this lattice, a single photon could scatter off an atom.

Since the beam was so faint, the team could collect very little information per experimental cycle. “This was the most difficult part,” says team member Hanzhen Lin, a PhD student at MIT. “We had to repeat the experiment thousands of times to collect enough data.”

In every such experiment, the key was to control how much photon path information the atoms provided. The team did this by adjusting the laser traps to tune the “fuzziness” of the atoms’ position. Tightly trapped atoms had well-defined positions and so, according to Heisenberg’s uncertainty principle, they could not reveal much about the photon’s path. In these experiments, fringes appeared. Loosely trapped atoms, in contrast, had more position uncertainty and were able to move, meaning an atom struck by a photon could carry a trace of that interaction. This faint record was enough to collapse the interference fringes, leaving only spots. Once again, Bohr was right.
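
One way to make the trade-off concrete is a toy model in which the scattered photon’s momentum kick (of order ħq, with q ≈ 2π/λ) is imprinted on an atom whose position spread is σ, suppressing the fringe contrast by a Debye–Waller-like factor exp(-(qσ)²). The numbers and the simplified relation below are assumptions for illustration, not the MIT team’s actual analysis.

```python
# Toy model: a photon scattering off an atom with position spread sigma leaves behind
# which-path information, reducing two-slit fringe visibility by roughly exp(-(q*sigma)^2).
# Simplified Debye-Waller-style estimate with assumed numbers, not the actual MIT analysis.
import numpy as np

wavelength = 780e-9             # metres, rubidium D2 line
q = 2 * np.pi / wavelength      # order of magnitude of the momentum transfer / hbar

for sigma_nm in [10, 50, 200]:  # r.m.s. position spread of the trapped atom (tight -> loose trap)
    sigma = sigma_nm * 1e-9
    visibility = np.exp(-(q * sigma) ** 2)
    print(f"sigma = {sigma_nm:3d} nm  ->  fringe visibility ~ {visibility:.2f}")
```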

While Lin acknowledges that theirs is not the first experiment to measure scattered light from trapped atoms, he says it is the first to repeat the measurements after the traps were removed, while the atoms floated freely. This went further than Einstein’s spring-mounted slit idea, and (since the results did not change) eliminated the possibility that the traps were interfering with the observation.

“I think this is a beautiful experiment and a testament to how far our experimental control has come,” says Thomas Hird, a physicist who studies atom-light interactions at the University of Birmingham, UK, and was not involved in the research. “This probably far surpasses what Einstein could have imagined possible.”

The MIT team now wants to observe what happens when there are two atoms per site in the lattice instead of one. “The interactions between the atoms at each site may give us interesting results,” Lin says.

The team describes the experiment in Physical Review Letters.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.


Towards quantum PET: harnessing the diagnostic power of positronium imaging

27 August 2025 at 11:00

Positron emission tomography (PET) is a diagnostic imaging technique that uses an injected radioactive tracer to detect early signs of cancer, brain disorders or other diseases. At Jagiellonian University in Poland, a research team headed up by Paweł Moskal is developing a totally new type of PET scanner. The Jagiellonian PET (J-PET) can image the properties of positronium, a positron–electron bound state produced during PET scans, offering potential to increase the specificity of PET diagnoses.

The researchers have now recorded the first ever in vivo positronium image of the human brain. They also used the J-PET to show that annihilation photons generated during PET scans are not completely quantum entangled, opening up the possibility of using the degree of quantum entanglement as a diagnostic indicator. Moskal tells Physics World’s Tami Freeman about these latest breakthroughs and the team’s ongoing project to build the world’s first whole-body quantum PET scanner.

Can you describe how conventional PET images are generated?

PET is based on the annihilation of a positron with an electron to create two photons. The patient is administered a radiopharmaceutical labelled with a positron-emitting radionuclide (for example, fluoro-deoxy-glucose (FDG) labelled with 18F), which localizes in targeted tissues. The 18F emits positrons inside the body, which annihilate with electrons from the body, and the resulting annihilation photons are registered by the PET scanner.

By measuring the locations and times of the photons’ interactions in the scanner, we can reconstruct the density distribution of annihilation points in the body. With 18F-FDG, this image correlates with the density distribution of glucose, which in turn, indicates the rate of glucose metabolism. Thus the PET scanner delivers an image of the radiopharmaceutical’s metabolic rate in the body.
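
As a minimal illustration of how one coincidence event constrains an annihilation point, the sketch below uses time-of-flight localization: the two hit positions define a line of response, and the photons’ arrival-time difference places the annihilation along that line. The geometry and numbers are invented for the example rather than taken from any particular scanner.

```python
# Minimal time-of-flight localization for a single coincidence event (illustrative only):
# the two hit positions define a line of response (LOR); the arrival-time difference
# places the annihilation point along it.
import numpy as np

C = 0.2998  # speed of light in metres per nanosecond

def annihilation_point(r1, t1, r2, t2):
    """Estimate the annihilation position from two photon hits (positions in m, times in ns)."""
    r1, r2 = np.asarray(r1, float), np.asarray(r2, float)
    midpoint = (r1 + r2) / 2
    direction = (r1 - r2) / np.linalg.norm(r1 - r2)
    # If photon 1 arrived earlier, the annihilation happened closer to detector 1
    offset = C * (t2 - t1) / 2
    return midpoint + offset * direction

# Example: detectors 0.8 m apart, photon 1 arrives 0.5 ns before photon 2
print(annihilation_point(r1=[0.4, 0.0, 0.0], t1=10.0, r2=[-0.4, 0.0, 0.0], t2=10.5))
```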

Such an image enables physicians to identify tissues with abnormal metabolism, such as cancers that metabolize glucose up to 10 times more intensively than healthy tissues. Therefore, PET scanners can provide information about alterations in cell function, even before cancer may be visible in anatomical images recorded using CT or MRI.

During annihilation, a short-lived atom called positronium can form. What’s the rationale for imaging this positronium?

It’s amazing that in tissue, positron–electron annihilation proceeds via the formation of positronium in about 40% of cases. Positronium, a bound state of matter and antimatter (an electron and a positron), is short lived because it can undergo self-annihilation into photons. In tissue, however, it can decay via additional processes that further shorten its lifetime. For example, its positron may annihilate by “picking off” an electron from a surrounding atom, or it may convert from the long-lived state (ortho-positronium) to the short-lived state (para-positronium) through interaction with oxygen molecules.

In tissue, therefore, positronium lifetime is an indicator of the intra- and inter-molecular structure and the concentration of oxygen molecules. Both molecular composition and the degree of oxygen concentration differ between healthy and cancerous tissues, with hypoxia (a deficit in tissue oxygenation) a major feature of solid tumours that’s related to the development of metastases and treatment resistance.

As such, imaging positronium lifetime can help in early disease recognition at the stage of molecular alterations. It can also improve diagnosis and the proper choice of anti-cancer therapy. In the case of brain diagnostics, positronium imaging may become an early diagnostic indicator for neurodegenerative disorders such as dementia, Alzheimer’s disease and Parkinson’s disease.

So how does the J-PET detect positronium?

To reconstruct the positronium lifetime we use a radionuclide (44Sc, 82Rb or 124I, for example) that, after emitting a positron, promptly (within a few picoseconds) emits an additional gamma photon. This “prompt gamma” can be used to measure the exact time that the positron was emitted into the tissue and formed positronium.

Multiphoton detection In about 1% of cases, after emitting a positron that annihilates with an electron into photons (blue arrows), 68Ga also emits a prompt gamma (solid arrow). (Courtesy: CC BY/Sci. Adv. 10.1126/sciadv.adp2840)

Current PET scanners are designed to register only two annihilation photons, which makes them incapable of determining positronium lifetime. The J-PET is the first multiphoton PET scanner designed for simultaneous registration of any number of photons.

The registration of annihilation photons enables us to reconstruct the time and location of the positronium decay, while registration of the prompt gamma provides the time of its formation. The positronium lifetime is then calculated as the time difference between annihilation and prompt gamma emission.
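
Schematically, the lifetime analysis amounts to histogramming those per-event time differences and extracting the ortho-positronium component. The sketch below does this for synthetic data with a simplified single-exponential fit; in practice the spectrum contains several components and a full fitting model is used.

```python
# Sketch of a positronium lifetime analysis (synthetic data, simplified single-component fit):
# histogram the per-event differences between annihilation time and prompt-gamma time,
# then fit the exponential tail to estimate the mean ortho-positronium lifetime.
import numpy as np

rng = np.random.default_rng(42)
true_lifetime_ns = 1.8
decay_delays = rng.exponential(true_lifetime_ns, size=50_000)  # positronium decay times (ns)
timing_jitter = rng.normal(0.0, 0.25, size=50_000)             # detector timing resolution (ns)
dt = decay_delays + timing_jitter                               # annihilation minus prompt gamma

counts, edges = np.histogram(dt, bins=200, range=(0, 10))
centres = 0.5 * (edges[:-1] + edges[1:])
tail = (centres > 1.0) & (counts > 0)                           # region where jitter matters little
slope, _ = np.polyfit(centres[tail], np.log(counts[tail]), 1)
print(f"fitted mean lifetime ~ {-1/slope:.2f} ns (true value {true_lifetime_ns} ns)")
```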

Can you describe how your team recorded the first in vivo positronium image?

Last year we presented the world’s first in vivo images of positronium lifetime in a human, reported in Science Advances. For this, we designed and constructed a modular, lightweight and portable J-PET tomograph, consisting of 24 independent detection modules, each weighing only 2 kg. The device uses a multiphoton data acquisition system, invented by us, to simultaneously register prompt gamma and annihilation photons – the first PET scanner in the world to achieve this.

The research was performed at the Medical University of Warsaw, with studies conducted following routine procedures so as not to interfere with routine diagnostics and therapy. If a patient agreed to stay longer on the platform, we had about 10 minutes to install the J-PET tomograph around them and collect data.

In vivo imaging The first imaging of a patient, illustrating the advantages of the J-PET as a portable, lightweight device with an adaptable imaging volume. (Courtesy: Paweł Moskal)

The first patient was a 45-year-old man with glioblastoma (an aggressive brain tumour) undergoing alpha-particle radiotherapy. The primary aim of his therapy was to destroy the tumour using alpha particles emitted by the radionuclide 225Ac. The positronium imaging was made possible by the concurrent theranostic application of the radionuclide 68Ga to monitor the site of cancer lesions using a PET scanner.

The patient was administered a simultaneous intra-tumoural injection of the alpha-particle-emitting radiopharmaceutical (225Ac-DOTA-SP) for therapy and the positron emitting pharmaceutical (68Ga-DOTA-SP) for diagnosis. In about 1% of cases, after emitting a positron that annihilates with an electron, 68Ga also emits a prompt gamma ray.

We determined the annihilation location by measuring the time and position of interaction of the annihilation photons in the scanner. For each image voxel, we also determined a lifetime spectrum as the distribution of differences between the time of annihilation and the time of prompt gamma emission.

Our study found that positronium lifetimes in glioblastoma cells are shorter than in salivary glands and healthy brain tissues. We showed for the first time that the mean lifetime of ortho-positronium in a glioma (1.77±0.58 ns) is shorter than in healthy brain tissue (2.72±0.72 ns). This finding demonstrates that positronium imaging could be used for in vivo diagnosis to differentiate between healthy and cancerous tissues.

Lifetime distributions Positronium images of a patient with glioblastoma, showing the difference in mean ortho-positronium lifetime between glioma and healthy brain. (Courtesy: CC BY/Sci. Adv. 10.1126/sciadv.adp2840)

You recently demonstrated that J-PET can detect quantum entanglement of annihilation photons – how could this impact cancer diagnostics?

For this study, reported earlier this year in Science Advances, we used the laboratory prototype of the J-PET scanner (as employed previously for the first ex vivo positronium imaging). The crucial result was the first ever observation that photons from electron–positron annihilation in matter are not completely quantum entangled. Our study is pioneering in revealing a clear dependence of the degree of photon entanglement on the material in which the annihilation occurs.

These results are totally new compared with all previous investigations of photons from positron–electron annihilations. Up to this point, all experiments had focused on showing that this entanglement is maximal, and for that purpose, were performed in metals. None of the previous studies mentioned or even hypothesized a possible material dependence.

Lab prototype The J-PET scanner used to discover non-maximal entanglement, with (left to right) Deepak Kumar, Sushil Sharma and Pawel Moskal. (Courtesy: Damian Gil and Deepak Kumar)

If the degree of quantum entanglement of annihilation photons depends on the material, it may also differ according to tissue type or the degree of hypoxia. This is a hypothesis that we will test in future studies. I recently received an ERC Advanced Grant, entitled “Can tissue oxidation be sensed by positronium?”, to investigate whether the degree of oxidation in tissue can be sensed by the degree of quantum entanglement of photons originating from positron annihilation.

What causes annihilation photons to be entangled (or not)?

Quantum entanglement is a fascinating phenomenon that cannot be explained by our classical perception of the world. Entangled photons behave as if one instantly knows what is happening with the other, regardless of how far apart they are, so they propagate in space as a single object.

Annihilation photons are entangled if they originate from a pure quantum state. A state is “pure” if we know everything that can be known about it. For example, if the photons originate from the ground state of para-positronium (a pure state), then we expect them to be maximally entangled.

However, if electron–positron annihilation occurs in a mixed state (a statistical mixture of different pure states) where we have incomplete information, then the resulting photons will not be maximally entangled. In our case, this could be the annihilation of a positron from positronium with electrons from the patient’s body. Because these electrons can have different angular momenta with respect to the positron, the annihilation generally occurs from a mixed state.

You have also measured the polarization of the annihilation photons; how is this information used?

In current PET scanners, images are reconstructed based on the position and time of interaction of annihilation photons within the scanner. However, annihilation photons also carry information about their polarization.

Theoretically, annihilation photons are quantum entangled in polarization and exhibit non-local correlations. In the case of electron–positron annihilation into two photons, this means that the amplitude of the distribution of the relative angle between their polarization planes is larger when they are quantum entangled than when they propagate in space as independent objects.

State-of-the-art PET scanners, however, cannot access polarization information. Annihilation photons have energy in the mega-electronvolt range and their polarization cannot be determined using established optical methods, which are designed for optical photons in the electronvolt range. Because these energetic annihilation photons interact with single electrons, their polarization can only be sensed via Compton scattering.

The angular distribution of photons scattered by electrons is not isotropic with respect to the polarization direction. Instead, scattering is most likely to occur in a plane perpendicular to the polarization plane of the photon before scattering. Thus, by determining the scattering plane (containing the primary and scattered photon), one can estimate the direction of polarization as being perpendicular to that plane. Therefore, to practically determine the polarization plane of the photon, you need to know its directions of flight both before and after Compton scattering in the material.
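
The geometric step described above is straightforward to sketch: given the photon’s flight directions before and after Compton scattering, the estimated polarization direction is the normal to the plane the two directions span. The vectors below are arbitrary example values.

```python
# Estimate the pre-scattering polarization direction as the normal to the scattering plane,
# which is spanned by the photon's flight directions before and after Compton scattering.
# Illustrative vector geometry only, with arbitrary example directions.
import numpy as np

def estimated_polarization(k_in, k_out):
    """Unit vector normal to the scattering plane spanned by k_in and k_out."""
    k_in, k_out = np.asarray(k_in, float), np.asarray(k_out, float)
    normal = np.cross(k_in, k_out)
    return normal / np.linalg.norm(normal)

# Example: a photon travelling along +z scatters into the x-z plane,
# so the estimated polarization direction is along +/- y
k_in = [0.0, 0.0, 1.0]
k_out = [0.6, 0.0, 0.8]
print("estimated polarization direction:", estimated_polarization(k_in, k_out))
```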

In plastic scintillators, annihilation photons primarily interact via the Compton effect. As the J-PET is built from plastic scintillators, it’s ideally suited to provide information about the photons’ polarization, which can be determined by registering both the annihilation photon and the scattered photon and then reconstructing the scattering plane.

Using the J-PET scanner, we determined the distribution of the relative angle between the polarization planes of photons from positron–electron annihilation in a porous polymer. The amplitude of the observed distribution is smaller than predicted for maximally quantum-entangled two-photon states, but larger than expected for separable photons.

This result can be explained by assuming that photons from pick-off annihilation are not entangled, while photons from direct and para-positronium annihilations are maximally entangled. Our finding indicates that the degree of entanglement depends on the annihilation mechanism in matter, opening avenues for exploring polarization correlations in PET as a diagnostic indicator.

What further developments are planned for the J-PET scanner?

When creating the J-PET technology, we started with a two-strip prototype, then a 24-strip prototype in 2014, followed by a full-scale 192-strip prototype in 2016. In 2021 we completed the construction of a lightweight (60 kg) J-PET version that is both modular and portable, and which we used to demonstrate the first clinical images.

The next step is the construction of the total-body quantum J-PET scanner. We are now at the stage of collecting all the elements of this scanner and expect to complete construction in 2028. The scanner will be installed at the Center for Theranostics, established by myself and Ewa Stępień, medical head of the J-PET team, at Jagiellonian University.

Schematic of the full-body J-PET scanner
Future developments Schematic cross-section of the full-body J-PET scanner under construction at Jagiellonian University. The diagram shows the patient and several examples of electron–positron annihilation. (Courtesy: Rev. Mod. Phys. 10.1103/RevModPhys.95.021002)

Total-body PET provides the ability to image the metabolism of all tissues in the body at the same time. Additionally, due to the high sensitivity of total-body PET scanners, it is possible to perform dynamic imaging – essentially, creating a movie of how the radiopharmaceutical distributes throughout the body over time.

The total-body J-PET will also be able to register the pharmacokinetics of drugs administered to a patient. However, its true distinction is that it will be the world’s first quantum PET scanner with the ability to image the degree of quantum entanglement of annihilation photons throughout the patient’s body. Additionally, it will be the world’s first total-body multiphoton PET, enabling simultaneous positronium imaging in the entire human body.

How do you see the J-PET’s clinical applications evolving in the future?

We have already performed the first clinical imaging using J-PET at the Medical University of Warsaw and the University Hospital in Kraków. The studies included the diagnosis of patients with neuroendocrine, prostate and glioblastoma tumours. The data collected at these hospitals were used to reconstruct standard PET images as well as positronium lifetime images.

Next, we plan to conduct positronium imaging of phantoms and humans with various radionuclides to explore its clinical applications as a biomarker for tissue pathology and hypoxia. We also intend to explore the J-PET’s multiphoton capabilities for simultaneous double-tracer imaging, as well as study the degree of quantum entanglement as a function of the annihilation mechanism.

Finally, we plan to explore the possibilities of applying quantum entanglement to diagnostics, and we look forward to performing total-body positronium and quantum entanglement imaging with the total-body J-PET in the Center for Theranostics.

  • Paweł Moskal is a panellist in the forthcoming Physics World Live event on 25 September 2025. The event, which also features Miles Padgett from the University of Glasgow and Matt Brookes from the University of Nottingham, will examine how medical physics can make the most of the burgeoning field of quantum science. You can sign up free here.


Enzymes, stock prices and temperature alerts

27 August 2025 at 09:03

Imagine a molecule moving randomly through a cell. Its goal is to bind to an enzyme, a process essential for many chemical reactions in the body.

However, the enzyme isn’t always ready to bind. It switches between two states: a reactive state, where it can bind the molecule, and a non-reactive state, where it cannot.

Even if the molecule reaches the enzyme, binding will only take place if the enzyme happens to be in its reactive state at that exact moment. Simply arriving at the enzyme isn't enough – the gate must also be open.

This scenario is an example of a gated first-passage process. The term refers to situations where an event (like binding) only happens if two conditions are met: the particle must reach a specific target, and the target must be in the right state to allow the event to occur.
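
As a rough illustration of the idea, here is a minimal Monte Carlo sketch – a toy model with arbitrary parameters, not the framework developed by the researchers described below – in which a particle diffuses in a box and a two-state gate at the target flips open and closed at random:

import random

def gated_fpt(x0=1.0, L=2.0, D=1.0, k_open=5.0, k_close=5.0, dt=1e-3, rng=None):
    # One realization of a gated first-passage time: a particle diffuses on
    # [0, L] (reflecting wall at L) and "reacts" only if it reaches the target
    # at x = 0 while a two-state gate happens to be open. All parameter values
    # are illustrative choices, not taken from the paper.
    rng = rng or random.Random()
    x, t = x0, 0.0
    gate_open = rng.random() < k_open / (k_open + k_close)  # start in the gate's equilibrium
    step = (2.0 * D * dt) ** 0.5                             # r.m.s. Brownian displacement per step
    while True:
        t += dt
        x += rng.gauss(0.0, step)                            # Brownian step
        if x > L:                                            # reflect off the outer wall
            x = 2.0 * L - x
        if rng.random() < (k_close if gate_open else k_open) * dt:
            gate_open = not gate_open                        # gate flips with probability rate*dt
        if x <= 0.0:
            if gate_open:
                return t                                     # arrival while the gate is open: reaction
            x = -x                                           # gate closed: bounce back and keep trying

rng = random.Random(42)
times = [gated_fpt(rng=rng) for _ in range(2000)]
print(f"estimated mean gated first-passage time: {sum(times) / len(times):.2f}")

Averaging many such runs gives a crude estimate of the mean gated first-passage time; the diffusion coefficient, box size and switching rates above are arbitrary and chosen only to keep the simulation fast.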

These processes are important in a wide range of fields including chemistry, biophysics, finance, and climate science.

Existing models have offered valuable insights in simple cases, such as point targets, but even those have left some questions unresolved — and the challenges only deepen in more realistic scenarios involving extended targets or thresholds.

To address this problem, a team of researchers from India and Israel set about developing a new mathematical framework for analysing this type of process.

The new approach uses a concept called renewal theory to break down complex processes into simpler, repeatable parts. Renewal theory is a branch of probability that deals with events that happen repeatedly over time, with random intervals between them.

The team showed that their method can solve previously unsolved problems and reveals universal patterns in how long these processes take. They were even able to explain surprising effects that previously posed a major challenge.

Crucially, their method can be applied to many real-world systems, from chemical reactions to intermittent data monitoring.

Read the full article

Continuous gated first-passage processes – IOPscience

Yuval Scher et al. 2024 Rep. Prog. Phys. 87 108101

 


Studying quantum materials under strain on picosecond timescales

27 August 2025 at 09:01

Interesting phenomena in quantum materials are often found near boundaries between different competing ground states.

Understanding the competition between these states is a central problem in condensed matter physics because of the potential applications to quantum computing and superconductivity.

There are many different types of ground states but the one that’s important here is a charge density wave (CDW). This is where the electron density of a material becomes modulated in a periodic pattern.

TbTe₃, or terbium tritelluride, is a quasi-two-dimensional material made up of alternating layers of conducting tellurium (Te) planes and insulating rare-earth terbium (Tb) block layers.

It has attracted a lot of interest recently because it hosts two competing CDW states and provides an excellent platform for studying new quantum phenomena.

Previous experiments have shown that these states can be tuned when the material is put under pressure, even leading to an induced superconducting state.

These experiments all used an isotropic pressure – the same in all directions. However, because this material is quasi-two-dimensional, it would be even more interesting to see how it responds to strain applied in one particular direction.

This is exactly what the team at SLAC have done.

They used ultrafast optical reflectivity to probe the dynamics of the competing CDW states in TbTe₃ at different strains.

They found that these two competing states are incredibly similar in energy and become more stable with increasing strain.

What’s really exciting though is the method they used. Their measurements were recorded in a pump-probe setup on timescales of a couple of picoseconds (trillionths of a second).

Combined with the application of a directional strain, this technique could be used in the future to study many other quantum materials with exciting properties.

Read the full article

Emergent symmetry in TbTe3 revealed by ultrafast reflectivity under anisotropic strain – IOPscience

Soyeun Kim et al 2024 Rep. Prog. Phys. 87 100501


Optical imaging tool could help diagnose and treat sudden hearing loss

26 August 2025 at 15:00

Optical coherence tomography (OCT), a low-cost imaging technology used to diagnose and plan treatment for eye diseases, also shows potential as a diagnostic tool for assessing rapid hearing loss.

Researchers at the Keck School of Medicine of USC have developed an OCT device that can acquire diagnostic quality images of the inner ear during surgery. These images enable accurate measurement of fluids in the inner ear compartments. The team’s proof-of-concept study, described in Science Translational Medicine, revealed that the fluid levels correlated with the severity of a patient’s hearing loss.

An imbalance between the two inner ear fluids, endolymph and perilymph, is associated with sudden, unexplainable hearing loss and acute vertigo, symptoms of ear conditions such as Ménière’s disease, cochlear hydrops and vestibular schwannomas. This altered fluid balance – known as endolymphatic hydrops (ELH) – occurs when the volume of endolymph increases in one compartment and the volume of perilymph decreases in the other.

Because the fluid chambers of the inner ear are so small, there has previously been no effective way to assess endolymph-to-perilymph fluid balance in a living patient. Now, the Keck OCT device enables imaging of inner ear structures in real time during mastoidectomy – a procedure performed during many ear and skull base surgeries, and which provides optical access to the lateral and posterior semicircular canals (SCCs) of the inner ear.

OCT offers a quicker, more accurate and less expensive way to see inner ear fluids, hair cells and other structures compared with the “gold standard” MRI scans. The researchers hope that ultimately, the device will evolve into an outpatient assessment tool for personalized treatments for hearing loss and vertigo. If it can be used outside a surgical suite, OCT technology could also support the development and testing of new treatments, such as gene therapies to regenerate lost hair cells in the inner ear.

Intraoperative OCT

The intraoperative OCT system, developed by senior author John Oghalai and colleagues, comprises an OCT adaptor containing the entire interferometer, which attaches to the surgical microscope, plus a medical cart containing electronic devices including the laser, detector and computer.

The OCT system uses a swept-source laser with a central wavelength of 1307 nm and a bandwidth of 89.84 nm. The scanning beam has a spot size of 28.8 µm and a depth-of-focus of 3.32 mm. The system's axial resolution of 14.0 µm and lateral resolution of 28.8 µm provide an in-plane resolution of 403 µm².
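
As a sanity check on those numbers (my arithmetic, not a statement from the paper), the quoted in-plane figure is consistent with the product of the axial and lateral resolutions:

\[
14.0\ \mu\mathrm{m} \times 28.8\ \mu\mathrm{m} \approx 403\ \mu\mathrm{m}^{2}.
\]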

The laser output is directed into a 90:10 optical fibre fused coupler, with the 10% portion illuminating the interferometer’s reference arm. The other 90% illuminates the sample arm, passes through a fibre-optic circulator, and is combined with a red aiming beam that’s used to visually position the scanning beam on the region-of-interest.

After the OCT and aiming beams are guided onto the sample for scanning, and the interferometric signal needed for OCT imaging is generated, two output ports of the 50:50 fibre optic coupler direct the light signal into a balanced photodetector for conversion into an electronic signal. A low-pass dichroic mirror allows back-reflected visible light to pass through into an eyepiece and a camera. The surgeon can then use the eyepiece and real-time video to ensure correct positioning for the OCT imaging.

Feasibility study

The team performed a feasibility study on 19 patients undergoing surgery at USC to treat Ménière’s disease (an inner-ear disorder), vestibular schwannoma (a benign tumour) or middle-ear infection with normal hearing (the control group). All surgical procedures required a mastoidectomy.

Immediately after performing the mastoidectomy, the surgeon positioned the OCT microscope with the red aiming beam targeted at the SCCs of the inner ear. After acquiring a 3D volume image of the fluid compartments in the inner ear, which took about 2 min, the OCT microscope was removed from the surgical suite and the surgical procedure continued.

The OCT system could clearly distinguish the two fluid chambers within the SCCs. The researchers determined that higher endolymph levels correlated with patients having greater hearing loss. In addition to accurately measuring fluid levels, the system revealed that patients with vestibular schwannoma had higher endolymph-to-perilymph ratios than patients with Ménière’s disease, and that compared with the controls, both groups had increased endolymph and reduced perilymph, indicating ELH.

The success of this feasibility study may help improve current microsurgery techniques, by guiding complex temporal bone surgery that requires drilling close to the inner ear. OCT technology could help reduce surgical damage to delicate ear structures and better distinguish brain tumours from healthy tissue. The OCT system could also be used to monitor the endolymph-to-perilymph ratio in patients with Ménière’s disease undergoing endolymphatic shunting, to verify that the procedure adequately decompresses the endolymphatic space. Efforts to make a smaller, less expensive system for these types of surgical use are underway.

The researchers are currently working to improve the software and image processing techniques in order to obtain images from patients without having to remove the mastoid bone, which would enable use of the OCT system for outpatient diagnosis.

The team also plans to adapt a handheld version of an OCT device currently used to image the tympanic membrane and middle ear to enable imaging of the human cochlea in the clinic. Imaging down the ear canal non-invasively offers many potential benefits when diagnosing and treating patients who do not require surgery. For example, patients determined to have ELH could be diagnosed and treated rapidly, a process that currently takes 30 days or more.

Oghalai and colleagues are optimistic about improvements being made in OCT technology, particularly in penetration depth and tissue contrast. “This will enhance the utility of this imaging modality for the ear, complementing its potential to be completely non-invasive and expanding its indication to a wider range of diseases,” they write.


Why quantum technology is driving quantum fundamentals

26 August 2025 at 12:00
computer graphic of human skull superimposed with colourful representation of quantum physics
(Courtesy: iStock/agsandrew)

Science and technology go hand in hand but it’s not always true that basic research leads to applications. Many early advances in thermodynamics, for example, followed the opposite path, emerging from experiments with equipment developed by James Watt, who was trying to improve the efficiency of steam engines. In a similar way, much progress in optics and photonics only arose after the invention of the laser.

The same is true in quantum physics, where many of the most exciting advances are occurring in companies building quantum computers, developing powerful sensors, or finding ways to send information with complete security. The cutting-edge techniques and equipment developed to make those advances then, in turn, let us understand the basic scientific and philosophical questions of quantum physics.

Quantum entanglement, for example, is no longer an academic curiosity, but a tangible resource that can be exploited in quantum technology. But because businesses are now applying this resource to real-world problems, it’s becoming possible to make progress on basic questions about what entanglement is. It’s a case of technological applications leading to fundamental answers, not the other way round.

In a recent panel event in our Physics World Live series, Elise Crull (a philosopher), Artur Ekert (an academic) and Stephanie Simmons (an industrialist) came together to discuss the complex interplay between quantum technology and quantum foundations. Elise Crull, who trained in physics, is now associate professor of philosophy at the City University of New York. Artur Ekert is a quantum physicist and cryptographer at the University of Oxford, UK, and founding director of the Center for Quantum Technologies in Singapore. Stephanie Simmons is chief quantum officer at Photonic, co-chair of Canada’s Quantum Advisory Council, and associate professor of physics at Simon Fraser University in Vancouver.

Elise Crull, Artur Ekert and Stephanie Simmons
Quantum panellists From left: Elise Crull, Artur Ekert and Stephanie Simmons. (Courtesy: City University of New York; CC BY The Royal Society; CC BY-SA SBoone)

Presented here is an edited extract of their discussion, which you can watch in full online.

Can you describe the interplay between applications of quantum physics and its fundamental scientific and philosophical questions?

Stephanie Simmons: Over the last 20 years, research funding for quantum technology has risen sharply as people have become aware of the exponential speed-ups that lie in store for some applications. That commercial potential has brought a lot more people into the field and made quantum physics much more visible. But in turn, applications have also let us learn more about the fundamental side of the subject.

We’re learning so much at a fundamental level because of technological advances

Stephanie Simmons

They have, for example, forced us to think about what quantum information really means, how it can be treated as a resource, and what constitutes intelligence versus consciousness. We’re learning so much at a fundamental level because of those technological advances. Similarly, understanding those foundational aspects lets us develop technology in a more innovative way.

If you think about conventional, classical supercomputers, we use them in a distributed fashion, with lots of different nodes all linked up. But how can we achieve that kind of “horizontal scalability” for quantum computing? One way to get distributed quantum technology is to use entanglement, which isn’t some kind of afterthought but the core capability.

How do you manage entanglement, create it, distribute it and distil it? Entanglement is central to next-generation quantum technology but, to make progress, you need to break free from previous thinking. Rather than thinking along classical lines with gates, say, an “entanglement-first” perspective will change the game entirely.

Artur Ekert: As someone more interested in the foundations of quantum mechanics, especially the nature of randomness, technology has never really been my concern. However, every single time I’ve tried to do pure research, I’ve failed because I’ve discovered it has interesting links to technology. There’s always someone saying: “You know, it can be applied to this and that.”

Think about some of the classic articles on the foundations of quantum physics, such as the 1935 Einstein–Podolsky–Rosen (EPR) paper suggesting that quantum mechanics is incomplete. If you look at them from the perspective of data security, you realize that some concepts – such as the ability to learn about a physical property without disturbing it – are relevant to cryptography. After all, it offers a way into perfect eavesdropping.

So while I enjoy the applications and working with colleagues on the corporate side, I have something of a love–hate relationship with the technological world.

illustration of quantum entanglement
Fundamental benefits Despite being so weird, quantum entanglement is integral to practical applications of quantum mechanics. (Courtesy: iStock/Jian Fen)

Elise Crull: These days physicists can test things that they couldn’t before – maybe not the really weird stuff like indefinite causal ordering but certainly quantum metrology and the location of the quantum-classical boundary. These are really fascinating areas to think about and I’ve had great fun interacting with physicists, trying to fathom what they mean by fundamental terms like causality.

Was Schrödinger right to say that it’s entanglement that forces our entire departure from classical lines of thought? What counts as non-classical physics and where is the boundary with the quantum world? What kind of behaviour is – and is not – a signature of quantum phenomena? These questions make it a great time to be a philosopher.

Do you have a favourite quantum experiment or quantum technology that’s been developed over the last few decades?

Artur Ekert: I would say the experiments of Alain Aspect in Orsay in the early 1980s, who built on the earlier work of John Clauser, to see if there is a way to violate Bell inequalities. When I was a graduate student in Oxford, I found the experiment absolutely fascinating, and I was surprised it didn’t get as much attention at the time as I thought it should. It was absolutely mind-blowing that nature is inherently random and refutes the notion of local “hidden variables”.

There are, of course, many other beautiful experiments in quantum physics. There are cavity quantum electrodynamic and ion-trap experiments that let physicists go from controlling a bunch of atoms to individual atoms or ions. But to me the Aspect experiment was different because it didn’t confirm something that we’d already experienced. As a student I remember thinking: “I don’t understand this; it just doesn’t make sense. It’s mind-boggling.”

Elise Crull: The Bell-type experiments are how I got interested in the philosophy of quantum mechanics. I wasn’t around when Aspect did his first experiments, but at the recent Helgoland conference marking the centenary of quantum mechanics, he was on stage with Anton Zeilinger debating the meaning of Bell violations. So, it’s an experiment that’s still unsettled almost 50 years later and we have different stories involving causality to explain it.

The game is to go from a single qubit or small quantum systems to many-body quantum systems and to look at the emergent phenomena there

Elise Crull

I’m also interested in how physicists are finding clever ways to shield systems from decoherence, which is letting us see quantum phenomena at higher and higher levels. It seems the game is to go from a single qubit or small quantum systems to many-body quantum systems and to look at the emergent phenomena there. I’m looking forward to seeing further results.

Stephanie Simmons: I’m particularly interested in large quantum systems, which will let us do wonderful things like error correction and offer exponential speed-ups on algorithms and entanglement distribution for large distances. Having those capabilities will unlock new technology and let us probe the measurement problem, which is the core of so many of the unanswered questions in quantum physics.

Figuring out how to get reliable quantum systems out of noisy quantum systems was not at all obvious. It took a good decade for various teams around the world to do that. You’re pushing the edges of performance but it’s a really fast-moving space and I would say quantum-error correction is the technology that I think is most underappreciated.

How large could a quantum object or system be? And if we ever built it, what new fundamental information about quantum mechanics would it tell us?

Artur Ekert: Technology has driven progress in our understanding of the quantum world. We’ve gone from being able to control zillions of atoms in an ensemble to just one but the challenge is now to control more of them – two, three or four. It might seem paradoxical to have gone from many to one and back to many but the difference is that we can now control those quantum states. We can engineer those interactions and look at emerging phenomena. I don’t believe there will be a magic number where quantum will stop working – but who knows? Maybe when we get to 42 atoms the world will be different.

Elise Crull: It depends what you're looking for. To detect gravitational waves, physicists have built resonant-bar detectors – Weber bars, big aluminium cylinders weighing about a tonne – that behave as quantum oscillators. So we already have macroscopic systems that need to be treated quantum mechanically. The question is whether you can sustain entanglement longer and over greater distances.

What are the barriers to scaling up quantum devices so they can be commercially successful?

Stephanie Simmons: To unleash exponential speed-ups in chemistry or cybersecurity, we will need quantum computers with 400 to 2000 application-grade logical qubits. They will need to perform to a certain degree of precision, which means you need error correction. The overheads will be high but we’ve raised a lot of money on the assumption that it all pans out, though there’s no reason to think there’s a limit.

I don’t feel like there’s anything that would bar us from hitting that kind of commercial success. But when you’re building things that have never been built before, there are always “unknown unknowns”, which is kind of fun. There’s always the possibility of seeing some kind of interesting emergent phenomenon when we build very large quantum systems that don’t exist in nature.

cat in a cardboard box
Large potential Having figured out how to control single atoms, quantum physicists now want to control large groups of atoms – but is there a limit to how big quantum objects can be? (Courtesy: Shutterstock/S Castelli)

Artur Ekert: To build a quantum computer, we have to create enough logical qubits and make them interact, which requires an amazing level of precision and degree of control. There’s no reason why we shouldn’t be able to do that, but what would be fascinating is if – in the process of doing so – we discovered there is a fundamental limit.

While I support all efforts to build quantum computers, I’d almost like them to fail because we might then discover something that refutes quantum physics

Artur Ekert

So while I support all efforts to build quantum computers, I’d almost like them to fail because we might then discover something that refutes quantum physics. After all, building a quantum computer is probably the most complicated and sophisticated experiment in quantum physics. It’s more complex than the whole of the Apollo project that sent astronauts to the Moon: the degree of precision of every single component that is required is amazing.

If quantum physics breaks down at some point, chances are it’ll be in this kind of experiment. Of course, I wish all my colleagues investing in quantum computing get a good return for their money, but I have this hidden agenda. Failing to build a quantum computer would be a success for science: it would let us learn something new. In fact, we might even end up with an even more powerful “post-quantum” computer.

Surely the failure of quantum mechanics, driven by those applications, would be a bombshell if it ever happened?

Artur Ekert: People seeking to falsify quantum predictions are generally looking at connections between quantum physics and gravity, so how would you be able to refute quantum physics with a quantum computer? Would it involve observing no speed-up where a speed-up should be seen, or would it be a failure of some other sort?

My gut feeling is: make this quantum experiment as complex and as sophisticated as you want, scale it up to the limits, and see what happens. If it works as we currently understand it should, that's fine – we'll have quantum computers that will be useful for something. But if it doesn't work for some fundamental reason, that's also great – it's a win–win game.

Are we close to the failure of quantum mechanics?

Elise Crull: I think Artur has a very interesting point. But we have lots of orders of magnitude to go before we have a real quantum computer. In the meantime, many people working on quantum gravity – whether string theory or canonical quantum gravity – are driven by their deep commitment to the universality of quantization.

There are, for example, experiments being designed to disprove classical general relativity by entangling space–time geometries. The idea is to rule out certain other theories or find upper and lower bounds on a certain theoretical space. I think we will make a lot of progress not by trying to defeat quantum mechanics but by looking at the "classicality" of other field theories and trying to test those.

How will quantum technology benefit areas other than, say, communication and cryptography?

Stephanie Simmons: History suggests that every time we commercialize a branch of physics, we aren't great at predicting where that platform will go. When people invented the first transistor, they didn't anticipate the billions that you could put onto a chip. The new generation of "quantum native" people will have access to tools and concepts with which they'll quickly become familiar.

You have to remember that people think of quantum mechanics as counterintuitive. But it’s actually the most self-consistent set of physics principles. Imagine if you’re a character in a video game and you jump in midair; that’s not reality, but it’s totally self-consistent. Quantum is exactly the same. It’s weird, but self-consistent. Once you get used to the rules, you can play by them.

I think that there’s a real opportunity to think about chemistry in a much more computational sense. Quantum computing is going to change the way people talk about chemistry. We have the opportunity to rethink the way chemistry is put together, whether it’s catalysts or heavy elements. Chemicals are quantum-mechanical objects – if you had 30 or 50 atoms, with a classical computer it would just take more bits than there are atoms in the universe to work out their electronic structure.

Has industry become more important than academia when it comes to developing new technologies?

Stephanie Simmons: The grand challenge in the quantum world is to build a scaled-up, fault-tolerant, exponentially sped-up quantum system that could simultaneously deliver the repeaters we need to do all the entanglement distribution technologies. And all of that work, or at least a good chunk of it, is in companies. The focus of that development has left academia.

Industry is the most fast-moving place to be in quantum at the moment, and things will emerge that will surprise people

Stephanie Simmons

Sure, there are still contributions from academia, but there is at least 10 times as much going on in industry tackling these ultra-complicated, really complex system-engineering challenges. In fact, in tackling all those unknown unknowns, you actually become a better "quantum engineer". Industry is the most fast-moving place to be in quantum at the moment, and things will emerge that will surprise people.

Detail of a quantum computer
Competitive edge Most efforts to build quantum computers are now in industry, not academia. (Courtesy: Shutterstock/Bartlomiej K Wroblewski)

Artur Ekert: We can learn a lot from colleagues who work in the commercial sector because they ask different kinds of questions. My own first contact was with John Rarity and Paul Tapster at the UK Defence Evaluation and Research Agency, which became QinetiQ after privatization. Those guys were absolutely amazing and much more optimistic than I was about the future of quantum technologies. Paul in particular is an unsung hero of quantum tech. He showed me how you can think not in terms of equations, but devices – blocks you can put together, like quantum LEGO.

Over time, I saw more and more of my colleagues, students and postdocs going into the commercial world. Some even set up their own companies and I have a huge respect for my colleagues who’ve done that. I myself am involved with Speqtral in Singapore, which does satellite quantum communication, and I’m advising a few other firms too.

Most efforts to build quantum devices are now outside academia. In fact, it has to be that way because universities are not designed to build quantum computers, which requires skills and people not found in a typical university. The only way to work out what quantum is good for is through start-up companies. Some will fail; but some will survive – and the survivors will be those that bet on the right applications of quantum theory.

What technological or theoretical breakthrough do you most hope to see that would make the biggest difference?

Elise Crull: I would love someone to design an experiment to entangle space–time geometries, which would be crazy but would definitely kick general relativity off the table. It’s a dream that I’d love to see happen.

Stephanie Simmons: I’m really keen to see distributed logical qubits that are horizontally scalable.

Artur Ekert: On the practical side, I’d like to see real progress in quantum-error-correcting codes and fault-tolerant computing. On the fundamental side, I’d love experiments that provide a better understanding of the nature of randomness and its links with special relativity.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ.

Find out more on our quantum channel.


Highest-resolution images ever taken of a single atom reveal new kind of vibrations

25 August 2025 at 16:00

Researchers in the US have directly imaged a class of extremely low-energy atomic vibrations called moiré phasons for the first time. In doing so, they proved that these vibrations are not just a theoretical concept, but are in fact the main way that atoms vibrate in certain twisted two-dimensional materials. Such vibrations may play a critical role in heat and charge transport and how quantum phases behave in these materials.

“Phasons had only been predicted by theory until now, and no one had ever directly observed them, or even thought that this was possible,” explains Yichao Zhang of the University of Maryland, who co-led the effort with Pinshane Huang of the University of Illinois at Urbana-Champaign. “Our work opens up an entirely new way of understanding lattice vibrations in 2D quantum materials.”

A second class of moiré phonons

When two sheets of a 2D material are placed on top of each other and slightly twisted, their atoms form a moiré pattern, or superlattice. This superlattice contains a quasi-periodic arrangement of rotationally aligned regions (denoted AA or AB) separated by a network of stacking faults called solitons.

Materials of this type are also known to possess distinctive vibrational modes known as moiré phonons, which arise from vibrations of the material’s crystal lattice. These modes vary with the twist angle between layers and can change the physical properties of the materials.

In addition to moiré phonons, two-dimensional moiré materials are also predicted to host a second class of vibrational mode known as phasons. However, these phasons had never been directly observed experimentally until now.

Imaging phasons at the picometre scale

In the new work, which is published in Science, the researchers used a powerful microscopy technique called electron ptychography that enabled them to image samples with spatial resolutions as fine as 15 picometres (1 pm = 10⁻¹² m). At this level of precision, explains Zhang, subtle changes in thermally driven atomic vibrations can be detected by analysing the shape and size of individual atoms. "This meant we could map how atoms vibrate across different stacking regions of the moiré superlattice," she says. "What we found was striking: the vibrations weren't uniform – atoms showed larger amplitudes in AA-stacked regions and highly anisotropic behaviour at soliton boundaries. These patterns align precisely with theoretical predictions for moiré phasons."

Coloured dots showing thermal vibrations in a single atom
Good vibrations: The experiment measured thermal vibrations in a single atom. (Courtesy: Yichao Zhang et al.)

Zhang has been studying phonons using electron microscopy for years, but limitations on imaging resolutions had largely restricted her previous studies to nanometre (10⁻⁹ m) scales. She recently realized that electron ptychography would resolve atomic vibrations with much higher precision, and therefore detect moiré phasons varying across picometre scales.

She and her colleagues chose to study twisted 2D materials because they can support many exotic electronic phenomena, including superconductivity and correlated insulated states. However, the role of lattice dynamics, including the behaviour of phasons in these structures, remains poorly understood. “The problem,” she explains, “is that phasons are both extremely low in energy and spatially non-uniform, making them undetectable by most experimental techniques. To overcome this, we had to push electron ptychography to its limits and validate our observations through careful modelling and simulations.”

This work opens new possibilities for understanding (and eventually controlling) how vibrations behave in complex 2D systems, she tells Physics World. “Phasons can affect how heat flows, how electrons move, and even how new phases of matter emerge. If we can harness these vibrations, we could design materials with programmable thermal and electronic properties, which would be important for future low-power electronics, quantum computing and nanoscale sensors.”

More broadly, electron ptychography provides a powerful new tool for exploring lattice dynamics in a wide range of advanced materials. The team is now using electron ptychography to study how defects, strain and interfaces affect phason behaviour. These imperfections are common in many real-world materials and devices and can cause their performance to deteriorate significantly. “Ultimately, we hope to capture how phasons respond to external stimuli, like how they evolve with change in temperature or applied fields,” Zhang reveals. “That could give us an even deeper understanding of how they interact with electrons, excitons or other collective excitations in quantum materials.”


William Phillips: why quantum physics is so ‘deliciously weird’

25 August 2025 at 12:00
William Phillips
Entranced by quantum William Phillips. (Courtesy: NIST)

William Phillips is a pioneer in the world of quantum physics. After graduating from Juniata College in Pennsylvania in 1970, he did a PhD with Dan Kleppner at the Massachusetts Institute of Technology (MIT), where he measured the magnetic moment of the proton in water. In 1978 Phillips joined the National Bureau of Standards in Gaithersburg, Maryland, now known as the National Institute of Standards and Technology (NIST), where he is still based.

Phillips shared the 1997 Nobel Prize for Physics with Steven Chu and Claude Cohen-Tannoudji for their work on laser cooling. The technique uses light from precisely tuned laser beams to slow atoms down and cool them to just above absolute zero. As well as leading to more accurate atomic clocks, laser cooling proved vital for the creation of Bose–Einstein condensates – a form of matter where all constituent particles are in the same quantum state.

To mark the International Year of Quantum Science and Technology, Physics World online editor Margaret Harris sat down with Phillips in Gaithersburg to talk about his life and career in physics. The following is an edited extract of their conversation, which you can hear in full on the Physics World Weekly podcast.

How did you become interested in quantum physics?

As an undergraduate, I was invited by one of the professors at my college to participate in research he was doing on electron spin resonance. We were using the flipping of unpaired spins in a solid sample to investigate the structure and behaviour of a particular compound. Unlike a spinning top, electrons can spin only in two possible orientations, which is pretty weird and something I found really fascinating. So I was part of the quantum adventure even as an undergraduate.

What did you do after graduating?

I did a semester at Argonne National Laboratory outside Chicago, working on electron spin resonance with two physicists from Argentina. Then I was invited by Dan Kleppner – an amazing physicist – to do a PhD with him at the Massachusetts Institute of Technology. He really taught me how to think like a physicist. It was in his lab that I first encountered tuneable lasers, another wonderful tool for using the quantum properties of matter to explore what’s going on at the atomic level.

A laser-cooling laboratory set-up
Chilling out William Phillips working on laser-cooling experiments in his laboratory circa 1986. (Courtesy: NIST)

Quantum mechanics is often viewed as being weird, counter-intuitive and strange. Is that also how you felt?

I’m the kind of person entranced by everything in the natural world. But even in graduate school, I don’t think I understood just how strange entanglement is. If two particles are entangled in a particular way, and you measure one to be spin “up”, say, then the other particle will necessarily be spin “down” – even though there’s no connection between them. Not even a signal travelling at the speed of light could get from one particle to the other to tell it, “You’d better be ‘down’ because the first one was measured to be ‘up’.” As a graduate student I didn’t understand how deliciously weird nature is because of quantum mechanics.

Is entanglement the most challenging concept in quantum mechanics?

It’s not that hard to understand entanglement in a formal sense. But it’s hard to get your mind wrapped around it because it’s so weird and distinct from the kinds of things that we experience on a day-to-day basis. The thing that it violates – local realism – seems so reasonable. But experiments done first by John Clauser and then Alain Aspect and Anton Zeilinger, who shared the Nobel Prize for Physics in 2022, basically proved that it happens.

What quantum principle has had the biggest impact on your work?

Superposition has enabled the creation of atomic clocks of incredible precision. When I first came to NIST in 1978, when it was still called the National Bureau of Standards, the very best clock in the world was in our labs in Boulder, Colorado. It was good to one part in 10¹³.

Because of Einstein's general relativity, clocks run slower if they're deeper in a gravitational potential. The effect isn't big: Boulder is about 1.5 km above sea level and a clock there would run faster than a sea-level clock by about 1.5 parts in 10¹³. So if you had two such clocks – one at sea level and one in Boulder – you'd barely be able to resolve the difference. Now, at least in part because of the laser cooling and trapping ideas that my group and I have worked on, one can resolve a height difference of less than 1 mm with the clocks that exist today. I just find that so amazing.
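
As a back-of-the-envelope check of those figures (the standard weak-field formula; the arithmetic is mine, not Phillips's):

\[
\frac{\Delta\nu}{\nu} \approx \frac{g\,\Delta h}{c^{2}} \approx \frac{(9.8\ \mathrm{m\,s^{-2}})(1500\ \mathrm{m})}{(3.0\times 10^{8}\ \mathrm{m\,s^{-1}})^{2}} \approx 1.6\times 10^{-13},
\]

in line with the quoted 1.5 parts in 10¹³. By the same formula, a height change of just 1 mm corresponds to a fractional frequency shift of only about 1 × 10⁻¹⁹, which gives a sense of the precision of today's best optical clocks.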

What research are you and your colleagues at NIST currently involved in?

Our laboratory has been a generator of ideas and techniques that could be used by people who make atomic clocks. Jun Ye, for example, is making clocks from atoms trapped in a so-called optical lattice of overlapping laser beams that are better than one part in 10¹⁸ – two orders of magnitude better than the caesium clocks that define the second. These newer types of clocks could help us to redefine the second.

We’re also working on quantum information. Ordinary digital information is stored and processed using bits that represent 0 or 1. But the beauty of qubits is that they can be in a superposition state, which is both 0 and 1. It might sound like a disaster because one of the great strengths of binary information is there’s no uncertainty; it’s one thing or another. But putting quantum bits into superpositions means you can do a problem in a lot fewer operations than using a classical device.

In 1994, for example, Peter Shor devised an algorithm that can factor numbers quantum mechanically much faster, or using far fewer operations, than with an ordinary classical computer. Factoring is a “hard problem”, meaning that the number of operations to solve it grows exponentially with the size of the number. But if you do it quantum mechanically, it doesn’t grow exponentially – it becomes an “easy” problem, which I find absolutely amazing. Changing the hardware on which you do the calculation changes the complexity class of a problem.
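
To put rough numbers on that change of complexity class (standard textbook estimates, not figures given by Phillips): the best known classical factoring algorithm, the general number field sieve, runs in time that grows sub-exponentially but still far faster than any polynomial, whereas Shor's algorithm needs only a polynomial number of operations in the number of digits,

\[
T_{\mathrm{classical}}(N) \sim \exp\!\left[\left(\tfrac{64}{9}\right)^{1/3}(\ln N)^{1/3}(\ln\ln N)^{2/3}\right], \qquad T_{\mathrm{Shor}}(N) \sim O\!\left((\log N)^{3}\right),
\]

so key sizes that would keep classical machines busy for centuries add only a modest overhead for a quantum machine.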

How might that change be useful in practical terms?

Shor’s algorithm is important because of public key encryption, which we use whenever we buy something online with a credit card. A company sends your computer a big integer number that they’ve generated by multiplying two smaller numbers together. That number is used to encrypt your credit card number. Somebody trying to intercept the transmission can’t get any useful information because it would take centuries to factor this big number. But if an evildoer had a quantum computer, they could factor the number, figure out your credit card and use it to buy TVs or whatever evildoers buy.

Now, we don’t have quantum computers that can do this yet – they can’t even do simple problems, let alone factor big numbers. But if somebody did do that, they could decrypt messages that do matter, such as diplomatic or military secrets. Fortunately, quantum mechanics comes to the rescue through something called the no-cloning theorem. These quantum forms of encryption prevent an eavesdropper from intercepting a message, duplicating it and using it – it’s not allowed by the laws of physics.

William Phillips performing a demo
Sharing the excitement William Phillips performing a demo during a lecture at the Sigma Pi Sigma Congress in 2000. (Courtesy: AIP Emilio Segrè Visual Archives)

Quantum processors can be made from different qubits – not just cold atoms but trapped ions, superconducting circuits and others, too. Which do you think will turn out best?

My attitude is that it’s too early to settle on one particular platform. It may well be that the final quantum computer is a hybrid device, where computations are done on one platform and storage is done on another. Superconducting quantum computers are fast, but they can’t store information for long, whereas atoms and ions can store information for a really long time – they’re robust and isolated from the environment, but are slow at computing. So you might use the best features of different platforms in different parts of your quantum computer.

But what do I know? We're a long way from having quantum computers that can solve interesting problems faster than a classical device. Sure, you might have heard somebody say they've used a quantum computer to solve a problem that would take a classical device a septillion years. But they've probably chosen a problem that was easy for a quantum computer and hard for a classical computer – and it was probably a problem nobody cares about.

When do you think we’ll see quantum computers solving practical problems?

People are definitely going to make money from factoring numbers and doing quantum chemistry. Learning how molecules behave could make a big difference to our lives. But none of this has happened yet, and we may still be pretty far away from it. In fact, I have proposed a bet with my colleague Carl Williams, who says that by 2045 we will have a quantum computer that can factor numbers that a classical computer of that time cannot. My view is we won’t. I expect to be dead by then. But I hope the bet will encourage people to solve the problems to make this work, like error correction. We’ll also put up money to fund a scholarship or a prize.

What do you think quantum computers will be most useful for in the nearer term?

What I want is a quantum computer that can tackle problems such as magnetism. Let’s say you have a 1D chain of atoms with spins that can point up or down. Quantum magnetism is a hard problem because with n spins there are 2n possible states and calculating the overall magnetism of a chain of more than a few tens of spins is impossible for a brute-force classical computer. But a quantum computer could do the job.
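
To get a feel for that scaling (my arithmetic, not Phillips's): for a chain of n = 50 spins,

\[
2^{50} \approx 1.1\times 10^{15}\ \text{amplitudes}, \qquad 2^{50}\times 16\ \text{bytes} \approx 1.8\times 10^{16}\ \text{bytes} \approx 18\ \text{petabytes},
\]

so merely storing the state vector as double-precision complex numbers would fill roughly 18 petabytes, before a single operation is performed – and every additional spin doubles the requirement.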

There are quantum computers that already have lots of qubits, but you're not going to get a reliable answer from them. For that you have to do error correction by assembling physical qubits into what's known as a logical qubit. Logical qubits let you determine whether an error has happened and fix it, which is what people are just starting to do. It's just so exciting right now.

What development in quantum physics should we most look out for?

The two main challenges are: how many logical qubits we can entangle with each other; and for how long they can maintain their coherence. I often say we need an “immortal” qubit, one that isn’t killed by the environment and lasts long enough to be used to do an interesting calculation. That’ll determine if you really have a competent quantum computer.

Reflecting on your career so far, what are you most proud of?

Back in around 1988, we were just fooling around in the lab trying to see if laser cooling was working the way it was supposed to. First indications were: everything’s great. But then we discovered that the temperature to which you could laser cool atoms was lower than everybody said was possible based on the theory at that time. This is called sub-Doppler laser cooling, and it was an accidental discovery; we weren’t looking for it.

People got excited and our friends in Paris at the École Normale came up with explanations for what was going on. Steve Chu, who was at that point at Stanford University, was also working on understanding the theory behind it, and that really changed things in an important way. In fact, all of today’s laser-cooled caesium atomic clocks use that feature that the temperature is lower than the original theory of laser cooling said it was.

William Phillips at the IYQ 2025 opening ceremony
Leading light William Phillips spoke at the opening ceremony of the International Year of Quantum Science and Technology (IYQ 2025) at UNESCO headquarters in Paris earlier this year. (© UNESCO/Marie Etchegoyen. Used with permission.)

Another thing that has been particularly important is Bose–Einstein condensation, which is an amazing process that happens because of a purely quantum-mechanical feature that makes atoms of the same kind fundamentally indistinguishable. It goes back to the work of Satyendra Nath Bose, who 100 years ago came up with the idea that photons are indistinguishable and therefore that the statistical mechanics of photons would be different from the usual statistical mechanics of Boltzmann or Maxwell.

Bose–Einstein condensates, where almost all the atoms are in the same quantum state, were facilitated by our discovery that the temperature could be so much lower. To get this state, you’ve got to cool the atoms to a very low temperature – and it helps if the atoms are colder to start with.

Did you make any other accidental discoveries?

We also accidentally discovered optical lattices. In 1968 a Russian physicist named Vladilen Letokhov came up with the idea of trapping atoms in a standing wave of light. This was 10 years before laser cooling arrived and made it possible to do such a thing, but it was a great idea because the atoms are trapped over such a small distance that a phenomenon called Dicke narrowing gets rid of the Doppler shift.

Everybody knew this was a possibility, but we weren’t looking for it. We were trying to measure the temperature of the atoms in the laser-cooling configuration, and the idea we came up with was to look at the Doppler shift of the scattered light. Light comes in, and if it bounces off an atom that’s moving, there’ll be a Doppler shift, and we can measure that Doppler shift and see the distribution of velocities.

So we did that, and the velocity distribution just floored us. It was so odd. Instead of being nice and smooth, there was a big sharp peak right in the middle. We didn’t know what it was. We thought briefly that we might have accidentally made a Bose–Einstein condensate, but then we realized, no, we’re trapping the atoms in an optical lattice so the Doppler shift goes away.

It wasn’t nearly as astounding as sub-Doppler laser cooling because it was expected, but it was certainly interesting, and it is now used for a number of applications, including the next generation of atomic clocks.

How important is serendipity in research?

Learning about things accidentally has been a recurring theme in our laboratory. In fact, I think it’s an important thing for people to understand about the way that science is done. Often, science is done not because people are working towards a particular goal but because they’re fooling around and see something unexpected. If all of our science activity is directed toward specific goals, we’ll miss a lot of really important stuff that allows us to get to those goals. Without this kind of curiosity-driven research, we won’t get where we need to go.

In a nutshell, what does quantum mean to you?

Quantum mechanics was the most important discovery of 20th-century physics. Wave–particle duality, which a lot of people would say was the “ordinary” part of quantum mechanics, has led to a technological revolution that has transformed our daily lives. We all walk around with mobile phones that wouldn’t exist were it not for quantum mechanics. So for me, quantum mechanics is this idea that waves are particles and particles are waves.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ.

Find out more on our quantum channel.


Electrochemical loading boosts deuterium fusion in a palladium target

25 August 2025 at 10:02

Researchers in Canada have used electrochemistry to increase the rate of nuclear fusion within a metal target that is bombarded with high-energy deuterium ions. While the process is unlikely to lead to a new source of energy – it consumes far more energy than it produces – further research could provide new insights into fusion and other areas of science.

Although modern fusion reactors are huge projects sometimes costing billions, the first evidence for an artificial fusion reaction – observed by Mark Oliphant and Ernest Rutherford in 1934 – was a simple experiment in which deuterium nuclei in a solid target were bombarded with deuterium ions.

Palladium is a convenient target for such experiments because the metal’s lattice has the unusual propensity to selectively absorb hydrogen (and deuterium) atoms. In 1989 the chemists Stanley Pons of the University of Utah and Martin Fleischmann of the University of Southampton excited the world by claiming that the electrolysis of heavy water using a palladium cathode caused absorbed deuterium atoms to undergo spontaneous nuclear fusion under ambient conditions (with no ion bombardment). However, this observation of “cold fusion” could not be reproduced by others.

Now, Curtis Berlinguette at the University of British Columbia and colleagues have looked at whether electrochemistry could enhance the rate of fusion triggered by bombarding palladium with high-energy deuterium ions.

Benchtop accelerator

In the new work, the researchers used a palladium foil as the cathode in an electrochemical cell that was used in the electrolysis of heavy water. The other side of the cathode was the target for a custom-made benchtop megaelectronvolt particle accelerator. Kuo-Yi Chen, a postdoc in Berlinguette’s group, developed a microwave plasma thruster that was used to dissociate deuterium into ions. “Then we have a magnetic field that directs the ions into that metal target,” explains Berlinguette. The process, called plasma immersion ion implantation, is sometimes used to dope semiconductors, but has never previously been used to trigger nuclear fusion. Their apparatus is dubbed the Thunderbird Reactor.

The researchers used a neutron detector surrounding the apparatus to count the fusion events occurring. They found that, when they turned on the reactor, they initially detected very few events. However, as the amount of deuterium implanted in the palladium grew, the number of fusion events grew and eventually plateaued. The researchers then switched on the electrochemical cell, driving deuterium into the palladium from the other side using a simple lead-acid battery. They found that the number of fusion events detected increased another 15%.
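
For context, the two deuterium–deuterium fusion branches (standard nuclear data, not taken from the paper) occur with roughly equal probability:

\[
\mathrm{D} + \mathrm{D} \rightarrow {}^{3}\mathrm{He}\,(0.82\ \mathrm{MeV}) + n\,(2.45\ \mathrm{MeV}), \qquad \mathrm{D} + \mathrm{D} \rightarrow \mathrm{T}\,(1.01\ \mathrm{MeV}) + p\,(3.02\ \mathrm{MeV}),
\]

and it is the 2.45 MeV neutrons from the first branch that the surrounding detector counts as a proxy for the fusion rate.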

Currently, the reactor produces less than 10⁻¹⁰ times the amount of energy it consumes. However, the researchers believe it could be used in future research. “We provide the community with an apparatus to study fusion reactions at lower energy conditions than has been done before,” says Berlinguette. “It’s an uncharted experimental space so perhaps there might be some interesting surprises there… What we are really doing is providing the first clear experimental link between electrochemistry and fusion science.”

Berlinguette also notes that, even if the work never finds any productive application in nuclear fusion research, the techniques involved could be useful elsewhere. In high-temperature superconductivity research, for example, researchers often use extreme pressures to create metal hydrides: “Now we’re showing you can do this using electrochemistry instead,” he says. He also points to the potential for deuteration of drugs, which is an active area of research in pharmacology.

The research is described in a paper in Nature, with Chen as lead author.

Jennifer Dionne and her graduate student Amy McKeown-Green at Stanford University in the US are impressed: “In the work back in the 1930s they had a static target,” says McKeown-Green. “This is a really cool example of how you can perturb the system in this low-energy, sub-million Kelvin regime.” She would be interested to see further analysis on exactly what the temperature is and whether other metals show similar behaviours.

“Hydrogen and elements like deuterium tend to sit in the interstitial sites in the palladium lattice and, at room temperature and pressure, about 70% of those will be full,” explains Dionne. “A cool thing about this paper is that they showed how an electrical bias increases the amount of deuteration of the target. It was either completely obvious or completely counter-intuitive depending on how you look at it, and they’ve proved definitively that you can increase the amount of deuteration and then increase the fusion rate.”

The post Electrochemical loading boosts deuterium fusion in a palladium target appeared first on Physics World.

Tenured scientists in the US slow down and produce less impactful work, finds study

23 août 2025 à 16:00

Researchers in the US who receive tenure produce more novel but less impactful work, according to an analysis of the output of more than 12,000 academics across 15 disciplines. The study also finds that publication rates rise steeply and steadily during tenure-track, typically peaking the year before a scientist receives a permanent position. After tenure, their average publication rate settles near the peak value.

Carried out by data scientists led by Giorgio Tripodi from Northwestern University in Illinois, the study examined the publication history of academics five years before tenure and five years after. The researchers say that the observed pattern – a rise before tenure, followed by a peak and then a steady level – is highly reproducible.

“Tenure in the US academic system is a very peculiar contract,” explains Tripodi. “It [features] a relatively long probation period followed by a permanent appointment [which is] a strong incentive to maximize research output and avoid projects that are more likely to fail during the tenure track.”

The study reveals that academics in non-lab-based disciplines, such as mathematics, business, economics, sociology and political science, exhibit a fall in research output after tenure. But for those in the other 10 disciplines, including physics, publication rates are sustained around the pre-tenure peak.

“In lab-based fields, collaborative teams and sustained funding streams may help maintain high productivity post-tenure,” says Tripodi. “In contrast, in more individual-centred disciplines like mathematics or sociology, where research output is less dependent on continuous lab operation, the post-tenure slowdown appears to be more pronounced.”

The team also looked at the proportion of high-impact papers – defined as those in the top 5% of a field – and found that researchers in all 15 disciplines publish more high-impact papers before tenure than after. As for “novelty” – defined as atypical combinations of work – this increases with time, but the most novel papers tend to appear after tenure.

According to Tripodi, once tenure and the job security it brings have been secured, the pressure to publish shifts towards other objectives, a change that explains the plateau or decline seen in the publication data. “Our results show that tenure allows scientists to take more risks, explore novel research directions, and reorganize their research portfolio,” he adds.

The post Tenured scientists in the US slow down and produce less impactful work, finds study appeared first on Physics World.

Starlink satellite emissions interfere with radio astronomy

22 août 2025 à 11:42

The largest-ever survey of low-frequency radio emissions from satellites has detected emissions from the Starlink satellite “mega-constellation” across scientifically important low-frequency bands, including some that are protected for radio astronomy by international regulations. These emissions, which come from onboard electronics and are not intentional transmissions, could mask the weak radio-wave signals that astronomers seek to detect. Beyond documenting the harm to radio astronomy, the researchers at Australia’s Curtin University who conducted the survey say their findings highlight the need for new regulations that cover unintended transmissions, not just deliberate ones.

“It is important to note that Starlink is not violating current regulations, so is doing nothing wrong,” says Steven Tingay, the executive director of the Curtin Institute of Radio Astronomy (CIRA) and a member of the survey team. Discussions with Starlink operator SpaceX on this topic, he adds, have been “constructive”.

The main purpose of Starlink and other mega-constellations is to provide Internet coverage around the world, including in areas that were previously unable to access it. In addition to SpaceX’s Starlink, other mega-constellations include Amazon’s Kuiper (US) and Eutelsat’s OneWeb (UK). This list is likely to expand in the future, with hundreds to tens of thousands of additional satellites planned for launch by China’s Shanghai Spacecom Satellite Technology (operator of the G60 Starlink/Qianfan constellation) and the Russian Federation (operator of the Sfera constellation).

While the effects of mega-constellations on optical astronomy have been widely studied, study leader Dylan Grigg, a PhD student in CIRA’s International Centre for Radio Astronomy Research, says that researchers are just beginning to realize the extent to which they are also adversely affecting radio astronomy. These effects extend to some of the most radio-quiet places on Earth. Indeed, several radio telescopes that were deliberately built in low-radio-noise locations – including the Murchison Widefield Array (MWA) in Western Australia and the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile, as well as Europe’s Low Frequency Array (LOFAR) – have recently detected interfering satellite signals.

Largest survey of satellite effects on radio astronomy data

To understand the scale of the problem, Tingay, Grigg and colleagues turned to a radio telescope called the Engineering Development Array 2 (EDA2). This is a prototype station for the low-frequency half of the Square Kilometre Array (SKA-Low), which will be the world’s largest and most sensitive radio telescope when it comes online later this decade.

Using the EDA2, the researchers imaged the sky every two seconds at the frequencies that SKA-Low will cover. They did this using a software package Grigg developed that autonomously detects and identifies satellites in the images the EDA2 creates.
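
The article does not describe how that software works, but the core idea – cross-matching bright detections in each two-second image against the predicted positions of catalogued satellites at the image timestamp – can be sketched in a few lines of Python. The snippet below is purely illustrative and is not the Curtin pipeline: the site coordinates, the angular tolerance and the helper functions are assumptions, and it relies on the open-source skyfield library to propagate publicly available TLE orbital elements.

# Minimal, illustrative sketch (not the Curtin team's software): match
# (alt, az) detections from an all-sky radio image to satellites whose
# orbits are propagated from public TLEs with the skyfield library.
import numpy as np
from skyfield.api import load, wgs84, EarthSatellite

ts = load.timescale()
# Hypothetical observing site near the Murchison Radio-astronomy Observatory
site = wgs84.latlon(latitude_degrees=-26.7, longitude_degrees=116.7)

def load_satellites(tle_triples):
    """Build EarthSatellite objects from (name, line1, line2) triples."""
    return [EarthSatellite(l1, l2, name, ts) for name, l1, l2 in tle_triples]

def match_detections(detections, satellites, t, tol_deg=1.0):
    """Associate (alt, az) detections, in degrees, with any satellite
    predicted to lie within tol_deg of them at time t."""
    matches = []
    for sat in satellites:
        alt, az, _ = (sat - site).at(t).altaz()
        if alt.degrees < 0:  # satellite below the horizon; skip it
            continue
        for det_alt, det_az in detections:
            sep = np.hypot(alt.degrees - det_alt,
                           (az.degrees - det_az) * np.cos(np.radians(det_alt)))
            if sep < tol_deg:
                matches.append((sat.name, det_alt, det_az))
    return matches

# Example usage with placeholder inputs:
# sats = load_satellites(my_tle_triples)   # TLEs fetched elsewhere
# t = ts.utc(2024, 3, 1, 14, 30, 0)        # timestamp of one 2 s image
# print(match_detections([(45.0, 120.0)], sats, t))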

Although this was not the first time EDA2 has been deployed to analyse the effects of satellites on radio astronomy data, Grigg says it is the most comprehensive. “Ours is the largest survey looking into Starlink emissions at SKA-Low frequencies, with over 76 million of the images analysed,” he explains. “With the real SKA-Low coming online soon, we need as much information as possible to understand the threat satellite interference poses to radio astronomy.”

Emissions at protected frequencies

During the survey period, the researchers say they detected more than 112 000 radio emissions from over 1800 Starlink satellites. At some frequencies, up to 30% of all survey images contained at least one Starlink detection.

“While Starlink is not the only satellite network, it is the most immediate and frequent source of potential interference for radio astronomy,” Grigg says. “Indeed, it launched 477 satellites during this study’s four-month data collection period alone and has the most satellites in orbit – more than 7000 during the time of this study.”

But it is not only the sheer number of satellites that poses a challenge for astronomers. So, too, does the strength and frequency of their emissions. “Some satellites were detected emitting in bands where no signals are supposed to be present at all,” Grigg says. The list of rogue emitters, he adds, included 703 satellites the team identified at 150.8 MHz – a frequency that is meant to be reserved for radio astronomy under International Telecommunication Union regulations. “Since these emissions may come from components like onboard electronics and they’re not part of an intentional signal, astronomers can’t easily predict them or filter them out,” he says.

Potential for new regulations and mitigations

From a regulatory perspective, the widespread detection of unintended emissions, including within protected frequency bands, demonstrates the need for international rules that set limits on such emissions, Grigg tells Physics World. The Curtin team is now working with other radio astronomy research groups around the world with the aim of introducing updated policies that would regulate the impact of satellite constellations on radio astronomy.

In the meantime, Grigg says, “We are in an ongoing dialogue with SpaceX and are hopeful that we can continue to work with them to introduce mitigations to their satellites in the future.”

The survey is described in Astronomy & Astrophysics.

The post Starlink satellite emissions interfere with radio astronomy appeared first on Physics World.

Exoplanets suffering from a plague of dark matter could turn into black holes

21 août 2025 à 17:00

Dark matter could be accumulating inside planets close to the galactic centre, potentially even forming black holes that might consume the afflicted planets from the inside-out, new research has predicted.

According to the standard model of cosmology, all galaxies, including the Milky Way, sit inside huge haloes of dark matter, with the greatest density at the centre. This dark matter interacts primarily through gravity, although in some popular models, such as weakly interacting massive particles (WIMPs), dark-matter particles may occasionally scatter off normal matter.

This has led PhD student Mehrdad Phoroutan Mehr and Tara Fetherolf of the University of California, Riverside, to make an extraordinary proposal: that dark matter could elastically scatter off molecules inside planets, lose energy and become trapped there, and then grow so dense that it collapses to form a black hole. In some cases, a black hole could be produced in just ten months, according to Mehr and Fetherolf’s calculations, reported in Physical Review D.
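
The capture step can be made slightly more concrete with a standard back-of-the-envelope condition (a textbook estimate, not a calculation taken from the paper): a dark-matter particle that scatters inside a planet of mass M_p and radius R_p is gravitationally captured if the collision leaves it moving slower than the local escape velocity,

v_{\rm final} < v_{\rm esc} = \sqrt{\frac{2 G M_{\rm p}}{R_{\rm p}}} .

Captured particles then settle towards the planet’s core, and if enough of them accumulate their density can, in principle, exceed the threshold for gravitational collapse.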

Even more remarkably, while many planets would be consumed by their parasitic black hole, it is feasible that some could survive with a black hole inside them, and in others the black hole might simply evaporate, Mehr tells Physics World.

“Whether a black hole inside a planet survives or not depends on how massive it is when it first forms,” he says.

This leads to a trade-off between how quickly the black hole can grow and how soon the black hole can evaporate via Hawking radiation – the quantum effect that sees a black hole’s mass radiated away as energy.

The mass of a dark-matter particle remains unknown, but the lighter the particle and the heavier the planet, the greater the chance the planet has of capturing dark matter, and the more massive a black hole it can form. A black hole that starts out relatively massive spells big trouble for the planet, whereas one that starts out very small can evaporate before it becomes dangerous. Of course, if it evaporates, another black hole could form to replace it in the future.

“Interestingly,” adds Mehr, “There is also a special in-between mass where these two effects balance each other out. In that case, the black hole neither grows nor evaporates – it could remain stable inside the planet for a long time.”
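
The balance Mehr describes can be illustrated with the textbook Hawking mass-loss rate (the capture rate below is a placeholder symbol, not a number from the study). A black hole of mass M loses mass at

\dot{M}_{\rm evap} = \frac{\hbar c^{4}}{15360\,\pi G^{2} M^{2}},

while swallowing captured dark matter adds mass at roughly the capture rate \dot{M}_{\rm cap}. The two rates are equal at

M_{*} = \left( \frac{\hbar c^{4}}{15360\,\pi G^{2} \dot{M}_{\rm cap}} \right)^{1/2} ;

below M_* evaporation wins, above it growth takes over, and a black hole sitting exactly at M_* can persist inside the planet largely unchanged.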

Keeping planets warm

It’s not the first time that dark matter has been postulated to accumulate inside planets. In 2011 Dan Hooper and Jason Steffen of Fermilab proposed that dark matter could become trapped inside planets and that the energy released through dark-matter particles annihilating could keep a planet outside the habitable zone warm enough for liquid water to exist on its surface.

Mehr and Fetherolf’s new hypothesis “is worth looking into more carefully”, says Hooper.

That said, Hooper cautions that the ability of dark matter to accumulate inside a planet and form a black hole should not be a general expectation for all models of dark matter. Rather, “it seems to me that there could be a small window of dark-matter models where such particles could be captured in stars at a rate that is high enough to lead to black hole formation,” he says.

The parameter space of possible dark-matter properties remains large. Experiments and observations continue to chip away at it, but a very wide range of possibilities remains. The ability of dark matter to self-annihilate is just one of those properties; not all models of dark matter allow for it.

If dark-matter particles do annihilate at a sufficiently high rate when they come into contact, then it is unlikely that the mass of dark matter inside a planet would ever grow large enough to form a black hole. But if they don’t self-annihilate, or at least not at an appreciable rate, then a black hole formed of dark matter could still keep a planet warm with its Hawking radiation.

Searching for planets with black holes inside

The temperature anomaly that this would create could provide a means of detecting planets with black holes inside them. It would be challenging, though: the planets expected to contain the most dark matter lie near the centre of the galaxy, some 26,000 light-years away, where the dark-matter concentration in the halo is densest.

Even if the James Webb Space Telescope (JWST) could detect anomalous thermal radiation from such a distant planet, Mehr says that it would not necessarily be a smoking gun.

“If JWST were to observe that a planet is hotter than expected, there could be many possible explanations, we would not immediately attribute this to dark matter or a black hole,” says Mehr. “Rather, our point is that if detailed studies reveal temperatures that cannot be explained by ordinary processes, then dark matter could be considered as one possible – though still controversial – explanation.”

Another problem is that black holes cannot be distinguished from planets purely through their gravity. A Jupiter-mass planet has the same gravitational pull as a Jupiter-mass black hole that has just eaten a Jupiter-mass planet. This means that planetary detection methods that rely on gravity, from radial velocity Doppler shift measurements to astrometry and gravitational microlensing events, could not tell a planet and a black hole apart.

The planets in our own Solar System are also unlikely to contain much dark matter, says Mehr. “We assume that the dark matter density primarily depends on the distance from the centre of the galaxy,” he explains.

Where we are, the density of dark matter is too low for the planets to capture much of it, since the dark-matter halo is concentrated in the galactic centre. Therefore, we needn’t worry about Jupiter or Saturn, or even Earth, turning into a black hole.

The post Exoplanets suffering from a plague of dark matter could turn into black holes appeared first on Physics World.

Cosmic chemistry: Ewine van Dishoeck shares her zeal for astrochemistry

21 août 2025 à 15:59

This episode features a wide-ranging interview with the astrochemist Ewine van Dishoeck, who is professor emeritus of molecular astrophysics at Leiden Observatory in the Netherlands. In 2018 she was awarded The Kavli Prize in Astrophysics and in this podcast she talks about her passion for astrochemistry and how her research combines astronomy, astrophysics, theoretical chemistry and laboratory experiments.

Van Dishoeck talks about some of the key unanswered questions in astrochemistry, including how complex molecules form on the tiny specks of dust in interstellar space. We chat about the recent growth in our understanding of exoplanets and protoplanetary discs and the prospect of observing signs of life on distant planets or moons.

The Atacama Large Millimeter/submillimeter Array radio telescope and the James Webb Space Telescope are two of the major facilities that Van Dishoeck has been involved with. She talks about the challenges of getting the astronomy community to agree on the parameters of a new observatory and explains how the collaborative nature of these projects ensures that instruments meet the needs of multiple research communities.

Van Dishoeck looks to the future of astrochemistry and what new observatories could bring to the field. The interview ends with a call for the next generation of scientists to pursue careers in astrochemistry.

This podcast is sponsored by The Kavli Prize.

The Kavli Prize honours scientists for basic research breakthroughs in astrophysics, nanoscience and neuroscience – transforming our understanding of the big, the small and the complex. One million dollars is awarded in each of the three fields. The Kavli Prize is a partnership among The Norwegian Academy of Science and Letters, the Norwegian Ministry of Education and Research, and The Kavli Foundation (USA).

The vision for The Kavli Prize comes from Fred Kavli, a Norwegian-American entrepreneur and philanthropist who turned his lifelong fascination with science into a lasting legacy for recognizing scientific breakthroughs and for supporting basic research.

The Kavli Prize follows a two-year cycle, with an open call for nominations between 1 July and 1 October in odd-numbered years, and an announcement and award ceremony during even-numbered years. The next Kavli Prize will be announced in June 2026. Visit kavliprize.org for more information.

The post Cosmic chemistry: Ewine van Dishoeck shares her zeal for astrochemistry appeared first on Physics World.

Nano-engineered flyers could soon explore Earth’s mesosphere

21 août 2025 à 13:00

Small levitating platforms that can stay airborne indefinitely at very high altitudes have been developed by researchers in the US and Brazil. Exploiting photophoresis, the devices could be adapted to carry small payloads in the mesosphere, where flight is notoriously difficult. They could even be used in the atmospheres of moons and other planets.

Photophoresis occurs when light illuminates one side of a particle, heating it slightly more than the other. The resulting temperature difference in the surrounding gas means that molecules rebound with more energy on the warmer side than the cooler side – producing a tiny but measurable push.

For most of the time since its discovery in the 1870s, the effect was little more than a curiosity. But with more recent advances in nanotechnology, researchers have begun to explore how photophoresis could be put to practical use.

“In 2010, my graduate advisor, David Keith, had previously written a paper that described photophoresis as a way of flying microscopic devices in the atmosphere, and we wanted to see if larger devices could carry useful payloads,” explains Ben Schafer at Harvard University, who led the research. “At the same time, [Igor Bargatin’s group at the University of Pennsylvania] was doing fascinating work on larger devices that generated photophoretic forces.”

Carrying payloads

These studies considered a wide variety of designs, from artificial aerosols to thin disks with surfaces engineered to boost the effect. Building on this earlier work, Schafer’s team investigated how lightweight photophoretic devices could be optimized to carry payloads in the mesosphere: the atmospheric layer about 50–80 km above Earth’s surface, where the sparsity of air creates notoriously difficult flight conditions for conventional aircraft and balloons.

“We used these results to fabricate structures that can fly in near-space conditions, namely, under less than the illumination intensity of sunlight and at the same pressures as the mesosphere,” Schafer explains.

The team’s design consists of two alumina membranes, each 100 nm thick and perforated with nanoscale holes. The membranes are positioned a short distance apart and connected by ligaments. The bottom membrane is coated with a light-absorbing chromium layer, so it heats the surrounding air more than the top layer as it absorbs incoming sunlight.

As a result, air molecules move preferentially from the cooler top side toward the warmer bottom side through the membranes’ perforations: a photophoretic process known as thermal transpiration. This one-directional flow creates a pressure imbalance across the device, generating upward thrust. If this force exceeds the device’s weight, it can levitate and even carry a payload. The team also suggests that the devices could be kept aloft at night using the infrared radiation emitted by Earth into space.
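
An order-of-magnitude way to see the levitation condition (a rough sketch, not the authors’ detailed model) starts from the classic Knudsen relation for thermal transpiration. In the free-molecular regime, a perforated membrane separating gas at a cold temperature T_c from gas at a hot temperature T_h can sustain a pressure difference of order

\Delta p \sim p \left( \sqrt{\frac{T_{\rm h}}{T_{\rm c}}} - 1 \right) \approx \frac{p\,\Delta T}{2T},

and the device lifts off once the resulting thrust over its effective area A exceeds its weight,

\Delta p \, A \gtrsim m g .

With ambient pressures of only about 1 Pa near the top of the mesosphere, the structure has to be extraordinarily light for this inequality to hold, which is why the membranes are just 100 nm thick.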

Simulations and experiments

Through a combination of simulations and experiments, Schafer and his colleagues examined how factors such as device size, hole density, and ligament distribution could be tuned to maximize thrust at different mesospheric altitudes – where both pressure and temperature can vary dramatically. They showed that platforms 10 cm in radius could feasibly remain aloft throughout the mesosphere, powered by sunlight at intensities lower than those actually present there.

Based on these results, the team created a feasible design for a photophoretic flyer with a 3 cm radius, capable of carrying a 10 mg payload indefinitely at altitudes of 75 km. With an optimized design, they predict payloads as large as 100 mg could be supported during daylight.

“These payloads could support a lightweight communications payload that could transmit data directly to the ground from the mesosphere,” Schafer explains. “Small structures without payloads could fly for weeks or months without falling out of the mesosphere.”

With this proof of concept, the researchers are now eager to see photophoretic flight tested in real mesospheric conditions. “Because there’s nothing else that can sustainably fly in the mesosphere, we could use these devices to collect ground-breaking atmospheric data to benefit meteorology, perform telecommunications, and predict space weather,” Schafer says.

Requiring no fuel, batteries, or solar panels, the devices would be completely sustainable. And the team’s ambitions go beyond Earth: with the ability to stay aloft in any low-pressure atmosphere with sufficient light, photophoretic flight could also provide a valuable new approach to exploring the atmosphere of Mars.

The research is described in Nature.

The post Nano-engineered flyers could soon explore Earth’s mesosphere appeared first on Physics World.

Deep-blue LEDs get a super-bright, non-toxic boost

21 août 2025 à 10:00

A team led by researchers at Rutgers University in the US has discovered a new semiconductor that emits bright, deep-blue light. The hybrid copper iodide material is stable, non-toxic, can be processed in solution and has already been integrated into a light-emitting diode (LED). According to its developers, it could find applications in solid-state lighting and display technologies.

Creating white light for solid-state lighting and full-colour displays requires bright, pure sources of red, green and blue light. While stable materials that efficiently emit red or green light are relatively easy to produce, those that generate blue light (especially deep-blue light) are much harder to make. Existing blue-light emitters based on organic materials are unstable, meaning they lose their colour quality over time. Alternatives based on lead-halide perovskites or cadmium-containing colloidal quantum dots are more stable, but toxic to humans and the environment.

Hybrid copper-halide-based emitters promise the best of both worlds, being both non-toxic and stable. They are also inexpensive, with tuneable optical properties and a high luminescence efficiency, meaning they are good at converting power into visible light.

Researchers have already used a purely inorganic copper iodide material, Cs3Cu2I5, to make deep-blue LEDs. This material emits light at the ideal wavelength of 445 nm, is robust to heat and moisture, and re-emits 87–95% of the photons it absorbs as luminescence, giving it a high photoluminescence quantum yield (PLQY).

However, the maximum ratio of photon output to electron input (known as the maximum external quantum efficiency, EQEmax) for this material is very low, at just 1.02%.
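
The gap between a near-perfect PLQY and a percent-level EQE is easier to see from the textbook decomposition of an LED’s external quantum efficiency (general definitions, not figures from the paper):

\mathrm{EQE} = \gamma \times \eta_{\rm rad} \times \eta_{\rm out},

where γ is the fraction of injected electrons and holes that meet to form excitons in the emissive layer (the charge balance), η_rad is the fraction of those excitons that decay by emitting a photon (closely related to the PLQY), and η_out is the fraction of emitted photons that escape the device. Even with η_rad close to one, poor charge balance or losses at the interfaces can drag the overall EQE down to the level of a few per cent.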

Strong deep-blue photoluminescence

In the new work, a team led by Rutgers materials chemist Jing Li developed a hybrid copper iodide with the chemical formula 1D-Cu4I8(Hdabco)4 (abbreviated CuI(Hda)), where Hdabco is 1,4-diazabicyclo[2.2.2]octane-1-ium. This material emits strong deep-blue light at 449 nm with a PLQY near unity (99.6%).

Li and colleagues opted to use CuI(Hda) as the sole light-emitting layer and built a thin-film LED out of it using a solution process. The new device has an EQEmax of 12.6% with colour coordinates (0.147, 0.087) and a peak brightness of around 4000 cd m⁻². It is also relatively stable, with an operational half-lifetime (T50) of approximately 204 hours under ambient conditions. These figures mean that its performance rivals the best existing solution-processed deep-blue LEDs, Li says. The team also fabricated a large-area device measuring 4 cm² to demonstrate that the material could be used in real-world applications.

Interfacial hydrogen-bond passivation strategy

The low EQE of previous such devices stems partly from the fact that charge carriers (electrons and holes) in these materials rapidly recombine non-radiatively, typically at surface and bulk defects, or traps. The charge carriers also have a low radiative recombination rate, which is associated with a small exciton (electron-hole pair) binding energy.

Li and colleagues overcame this problem in their new device thanks to a dual interfacial hydrogen-bond passivation (DIHP) strategy, which introduces hydrogen bonds via an ultrathin sheet of poly(methyl methacrylate) (PMMA) and a carbazole-phosphonic acid-based self-assembled monolayer (Ac2PACz) at the two interfaces of the CuI(Hda) emissive layer. This effectively passivates both heterojunctions of the hybrid copper-iodide light-emitting layer and optimizes exciton binding energies. “Such a synergistic surface modification dramatically boosts the performance of the deep-blue LED by a factor of four,” explains Li.

According to Li, the study suggests a promising route for developing blue emitters that are both energy-efficient and environmentally benign, without compromising on performance. “Through the fabrication of blue LEDs using a low cost, stable and nontoxic material capable of delivering efficient deep-blue light, we address major energy and ecological limitations found in other types of solution-processable emitters,” she tells Physics World.

Li adds that the hydrogen-bonding passivation technique is not limited to the material studied in this work. It could also be applied to minimize interfacial energy losses in a wide range of other solution-based, light-emitting optoelectronic systems.

The team is now pursuing strategies for developing other solution-processable, high-performance hybrid copper iodide-based emitter materials similar to CuI(Hda). “Our goal is to further enhance the efficiency and extend the operational lifetime of LEDs utilizing these next-generation materials,” says Li.

The present work is detailed in Nature.

The post Deep-blue LEDs get a super-bright, non-toxic boost appeared first on Physics World.

Physicists discover a new proton magic number

20 août 2025 à 15:00

The first precise mass measurements of an extremely short-lived and proton-rich nucleus, silicon-22, have revealed the “magic” – that is, unusually tightly bound – nature of nuclei containing 14 protons. As well as shedding light on nuclear structure, the discovery could improve our understanding of the strong nuclear force and the mechanisms by which elements form.

At the lighter end of the periodic table, stable nuclei tend to contain similar numbers of neutrons and protons. As the number of protons increases, additional neutrons are needed to balance out the mutual repulsion of the positively-charged protons. As a rule, therefore, an isotope of a given element will be unstable if it contains either too few neutrons or too many.

In 1949, Maria Goeppert Mayer and J Hans D Jensen proposed an explanation for this rule. According to their nuclear shell model, nuclei that contain certain “magic” numbers of nucleons (neutrons and/or protons) are more tightly bound because they have just the right number of nucleons to completely fill their shells. Nuclei that contain magic numbers of both protons and neutrons are more tightly bound still and are said to be “doubly magic”. Subsequent studies showed that for neutrons, these magic numbers are 2, 8, 20, 28, 50, 82 and 126.

While the magic numbers for stable and long-lived nuclei are now well-established, those for exotic, short-lived ones with unusual proton-neutron ratios are comparatively little understood. Do these highly unstable nuclei have the same magic numbers as their more stable counterparts? Or are they different?

In recent years, studies showing that neutron-rich nuclei have magic numbers of 14, 16, 32 and 34 have brought scientists closer to answering this question. But what about protons?

“The hunt for new magic numbers in proton-rich nuclei is just as exciting,” says Yuan-Ming Xing, a physicist at the Institute for Modern Physics (IMP) of the Chinese Academy of Sciences, who led the latest study on silicon-22. “This is because we know much less about the evolution of the shell structure of these nuclei, in which the valence protons are loosely bound.” Protons in these nuclei can even couple to states in the continuum, Xing adds, forming the open quantum systems that have become such a hot topic in quantum research.

Mirror nuclei

After measurements on oxygen-22 (14 neutrons, 8 protons) showed that 14 is a magic number of neutrons for this neutron-rich isotope, the hunt was on for a proton-rich counterpart. An important concept in nuclear physics known as isospin symmetry holds that nuclei with interchanged numbers of protons and neutrons should have closely matching characteristics. The magic numbers for protons and neutrons in these “mirror” nuclei, as they are known, are therefore expected to be the same. “Of all the new neutron-rich doubly-magic nuclei discovered, only one loosely bound mirror nucleus for oxygen-22 exists,” says IMP team member Yuhu Zhang. “This is silicon-22.”

The problem is that silicon-22 (14 protons, 8 neutrons) has a short half-life and is hard to produce in quantities large enough to study. To overcome this, the researchers used an improved version of a technique known as Bρ-defined isochronous mass spectrometry.

Working at the Cooler-Storage Ring of the Heavy Ion Research Facility in Lanzhou, China, Xing, Zhang and an international team of collaborators began by accelerating a primary beam of stable 36Ar15+ ions to around two thirds the speed of light. They then directed this beam onto a 15-mm-thick beryllium target, causing some of the 36Ar ions to fragment into silicon-22 nuclei. After injecting these nuclei into the storage ring, the researchers could measure their velocity and the time it took them to circle the ring. From this, they could determine their mass. This measurement confirmed that the proton number 14 is indeed magic in silicon-22.
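
The mass determination itself rests on the standard relation between a stored ion’s magnetic rigidity and its momentum (quoted here for context rather than taken from the paper):

B\rho = \frac{p}{q} = \frac{\gamma m v}{q} \quad\Longrightarrow\quad \frac{m}{q} = \frac{B\rho}{\gamma v}, \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}} .

Measuring each ion’s velocity directly, together with its revolution time around the ring (which fixes its orbit and hence Bρ), therefore pins down its mass-to-charge ratio, from which the mass of silicon-22 follows.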

A better understanding of nucleon interactions

“Our work offers an excellent opportunity to test the fundamental theories of nuclear physics for a better understanding of nucleon interactions, of how exotic nuclear structures evolve and of the limit of existence of extremely exotic nuclei,” says team member Giacomo de Angelis, a nuclear physicist affiliated with the National Laboratories of Legnaro in Italy as well as the IMP. “It could also help shed more light on the reaction rates for element formation in stars – something that could help astrophysicists to better model cosmic events and understand how our universe works.”

According to de Angelis, this first mass measurement of the silicon-22 nucleus and the discovery of the magic proton number 14 is “a strong invitation not only for us, but also for other nuclear physicists around the world to investigate further”. He notes that researchers at the Facility for Rare Isotope Beams (FRIB) at Michigan State University, US, recently measured the energy of the first excited state of the silicon-22 nucleus. The new High Intensity Heavy-Ion Accelerator Facility (HIAF) in Huizhou, China, which is due to come online soon, should enable even more detailed studies.

“HIAF will be a powerful accelerator, promising us ideal conditions to explore other loosely bound systems, thereby helping theorists to more deeply understand nucleon-nucleon interactions, quantum mechanics of open quantum systems and the origin of elements in the universe,” he says.

The present study is detailed in Physical Review Letters.

The post Physicists discover a new proton magic number appeared first on Physics World.
