Entangled light leads to quantum advantage

Quantum manipulation: The squeezer – an optical parametric oscillator (OPO) that uses a nonlinear crystal inside an optical cavity to manipulate the quantum fluctuations of light – is responsible for the entanglement. (Courtesy: Jonas Schou Neergaard-Nielsen)

Physicists at the Technical University of Denmark have demonstrated what they describe as a “strong and unconditional” quantum advantage in a photonic platform for the first time. Using entangled light, they were able to reduce the number of measurements required to characterize their system by a factor of 10¹¹, with a correspondingly huge saving in time.

“We reduced the time it would take from 20 million years with a conventional scheme to 15 minutes using entanglement,” says Romain Brunel, who co-led the research together with colleagues Zheng-Hao Liu and Ulrik Lund Andersen.

Although the research, which is described in Science, is still at a preliminary stage, Brunel says it shows that major improvements are achievable with current photonic technologies. In his view, this makes it an important step towards practical quantum-based protocols for metrology and machine learning.

From individual to collective measurement

Quantum devices are hard to isolate from their environment and extremely sensitive to external perturbations. That makes it a challenge to learn about their behaviour.

To get around this problem, researchers have tried various “quantum learning” strategies that replace individual measurements with collective, algorithmic ones. These strategies have already been shown to reduce the number of measurements required to characterize certain quantum systems, such as superconducting electronic platforms containing tens of quantum bits (qubits), by as much as a factor of 10⁵.

A photonic platform

In the new study, Brunel, Liu, Andersen and colleagues obtained a quantum advantage in an alternative “continuous-variable” photonic platform. The researchers note that such platforms are far easier to scale up than superconducting qubits, which they say makes them a more natural architecture for quantum information processing. Indeed, photonic platforms have already been crucial to advances in boson sampling, quantum communication, computation and sensing.

The team’s experiment works with conventional, “imperfect” optical components and consists of a channel containing multiple light pulses that share the same pattern, or signature, of noise. The researchers began by performing a procedure known as quantum squeezing on two beams of light in their system. This caused the beams to become entangled – a quantum phenomenon that creates such a strong link between the beams that measuring the properties of one instantly tells you about the properties of the other.

The team then measured the properties of one of the beams (the “probe” beam) in an experiment known as a 100-mode bosonic displacement process. According to Brunel, one can imagine this experiment as being like tweaking the properties of 100 independent light modes, which are packets or beams of light. “A ‘bosonic displacement process’ means you slightly shift the amplitude and phase of each mode, like nudging each one’s brightness and timing,” he explains. “So, you then have 100 separate light modes, and each one is shifted in phase space according to a specific rule or pattern.”

By comparing the probe beam to the second (“reference”) beam in a single joint measurement, Brunel explains that he and his colleagues were able to cancel out much of the uncertainty in these measurements. This meant they could extract more information per trial than they could have by characterizing the probe beam alone. This information boost, in turn, allowed them to significantly reduce the number of measurements – in this case, by a factor of 10¹¹.
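
A toy classical calculation gives a feel for why a joint measurement on two beams that share the same noise is so much more informative than measuring the probe alone. The sketch below is an illustrative Monte Carlo, not the DTU protocol; the noise amplitudes and the size of the displacement are arbitrary assumptions.

```python
import numpy as np

# Toy analogue: estimate a small displacement d from a probe reading whose noise
# is shared with a reference beam. Subtracting the reference cancels the shared
# noise, so each trial carries far more information about d.
rng = np.random.default_rng(0)
d = 0.3                                              # displacement to estimate (arbitrary units)
shared = rng.normal(0, 5.0, 100_000)                 # noise common to probe and reference
probe = d + shared + rng.normal(0, 0.1, 100_000)     # probe alone: dominated by shared noise
reference = shared + rng.normal(0, 0.1, 100_000)     # reference carries the same shared noise

var_alone = np.var(probe)              # per-trial variance when the probe is measured alone
var_joint = np.var(probe - reference)  # per-trial variance for the joint (difference) measurement

# The number of trials needed for a given precision scales with the variance,
# so this ratio is the factor by which fewer measurements are needed.
print(f"variance alone: {var_alone:.2f}, joint: {var_joint:.2f}")
print(f"fewer trials by a factor of ~{var_alone / var_joint:.0f}")
```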

While the DTU researchers acknowledge that they have not yet studied a practical, real-world system, they emphasize that their platform is capable of “doing something that no classical system will ever be able to do”, which is the definition of a quantum advantage. “Our next step will therefore be to study a more practical system in which we can demonstrate a quantum advantage,” Brunel tells Physics World.


New adaptive optics technology boosts the power of gravitational wave detectors

Future versions of the Laser Interferometer Gravitational Wave Observatory (LIGO) will be able to run at much higher laser powers thanks to a sophisticated new system that compensates for temperature changes in optical components. Known as FROSTI (for FROnt Surface Type Irradiator) and developed by physicists at the University of California Riverside, US, the system will enable next-generation machines to detect gravitational waves emitted when the universe was just 0.1% of its current age, before the first stars had even formed.

Gravitational waves are distortions in spacetime that occur when massive astronomical objects accelerate and collide. When these distortions pass through the four-kilometre-long arms of the two LIGO detectors, they create a tiny difference in the (otherwise identical) distance that light travels between the centre of the observatory and the mirrors located at the end of each arm. The problem is that detecting and studying gravitational waves requires these differences in distance to be measured with an accuracy of 10⁻¹⁹ m, which is 1/10,000th the size of a proton.
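
A quick back-of-the-envelope calculation puts these numbers in context. The sketch below is illustrative; the proton radius of roughly 0.84 fm is an assumed value, not a figure from the article.

```python
# Rough check of the scales quoted above.
arm_length = 4.0e3        # m, LIGO arm length
displacement = 1.0e-19    # m, required measurement accuracy
proton_radius = 0.84e-15  # m, approximate proton charge radius (assumed)

print(f"equivalent strain dL/L ~ {displacement / arm_length:.1e}")       # ~2.5e-23
# ~1/8,400 with this radius, the same order as the 1/10,000th quoted above
print(f"fraction of a proton ~ 1/{proton_radius / displacement:,.0f}")
```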

Extending the frequency range

LIGO overcame this barrier 10 years ago when it detected the gravitational waves produced when two black holes located roughly 1.3 billion light-years from Earth merged. Since then, it and two smaller facilities, KAGRA and Virgo, have observed many other gravitational waves at frequencies ranging from 30 to 2000 Hz.

Observing waves at lower and higher frequencies in the gravitational wave spectrum remains challenging, however. At lower frequencies (around 10–30 Hz), the problem stems from vibrational noise in the mirrors. Although these mirrors are hefty objects – each one measures 34 cm across, is 20 cm thick and has a mass of around 40 kg – the incredible precision required to detect gravitational waves at these frequencies means that even the minute amount of energy they absorb from the laser beam is enough to knock them out of whack.

At higher frequencies (150–2000 Hz), measurements are instead limited by quantum shot noise. This is caused by the random arrival time of photons at LIGO’s output photodetectors and is a fundamental consequence of the fact that the laser field is quantized.

A novel adaptive optics device

Jonathan Richardson, the physicist who led this latest study, explains that FROSTI is designed to reduce quantum shot noise by allowing the mirrors to cope with much higher levels of laser power. At its heart is a novel adaptive optics device that is designed to precisely reshape the surfaces of LIGO’s main mirrors under laser powers exceeding 1 megawatt (MW), which is nearly five times the power used at LIGO today.
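
The connection between laser power and shot noise is simple: the shot-noise-limited sensitivity improves as the square root of the circulating power. A minimal sketch, taking the “nearly five times” figure quoted above at face value:

```python
import math

# Shot-noise-limited sensitivity scales as 1/sqrt(laser power), so raising the
# power by a factor of ~5 (as FROSTI is designed to allow) buys roughly a
# factor-of-two reduction in shot noise. The exact power ratio is an assumption.
power_ratio = 5.0
print(f"shot-noise amplitude reduced by ~{math.sqrt(power_ratio):.1f}x")  # ~2.2x
```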

Though its name implies cooling, FROSTI actually uses heat to restore the mirror’s surface to its original shape. It does this by projecting infrared radiation onto test masses in the interferometer to create a custom heat pattern that “smooths out” distortions and so allows for fine-tuned, higher-order corrections.

The single most challenging aspect of FROSTI’s design, and one that Richardson says shaped its entire concept, is the requirement that it cannot introduce even more noise into the LIGO interferometer. “To meet this stringent requirement, we had to use the most intensity-stable radiation source available – that is, an internal blackbody emitter with a long thermal time constant,” he tells Physics World. “Our task, from there, was to develop new non-imaging optics capable of reshaping the blackbody thermal radiation into a complex spatial profile, similar to one that could be created with a laser beam.”

Richardson anticipates that FROSTI will be a critical component for future LIGO upgrades – upgrades that will themselves serve as blueprints for even more sensitive next-generation observatories like the proposed Cosmic Explorer in the US and the Einstein Telescope in Europe. “The current prototype has been tested on a 40-kg LIGO mirror, but the technology is scalable and will eventually be adapted to the 440-kg mirrors envisioned for Cosmic Explorer,” he says.

Jan Harms, a physicist at Italy’s Gran Sasso Science Institute who was not involved in this work, describes FROSTI as “an ingenious concept to apply higher-order corrections to the mirror profile.” Though it still needs to pass the final test of being integrated into the actual LIGO detectors, Harms notes that “the results from the prototype are very promising”.

Richardson and colleagues are continuing to develop extensions to their technology, building on the successful demonstration of their first prototype. “In the future, beyond the next upgrade of LIGO (A+), the FROSTI radiation will need to be shaped into an even more complex spatial profile to enable the highest levels of laser power (1.5 MW) ultimately targeted,” explains Richardson. “We believe this can be achieved by nesting two or more FROSTI actuators together in a single composite, with each targeting a different radial zone of the test mass surfaces. This will allow us to generate extremely finely-matched optical wavefront corrections.”

The present study is detailed in Optica.


Chip-integrated nanoantenna efficiently harvests light from diamond defects

When diamond defects emit light, how much of that light can be captured and used for quantum technology applications? According to researchers at the Hebrew University of Jerusalem, Israel and Humboldt Universität of Berlin, Germany, the answer is “nearly all of it”. Their technique, which relies on positioning a nanoscale diamond at an optimal location within a chip-integrated nanoantenna, could lead to improvements in quantum communication and quantum sensing.

Guided light: Illustration showing photon emission from a nanodiamond and light directed by a bullseye antenna. (Courtesy: Boaz Lubotzky)

Nitrogen-vacancy (NV) centres are point defects that occur when one carbon atom in diamond’s lattice structure is replaced by a nitrogen atom next to an empty lattice site (a vacancy). Together, this nitrogen atom and its adjacent vacancy behave like a negatively charged entity with an intrinsic quantum spin.

When excited with laser light, an electron in an NV centre can be promoted into an excited state. As the electron decays back to the ground state, it emits light. The exact absorption-and-emission process is complicated by the fact that both the ground state and the excited state of the NV centre have three sublevels (spin triplet states). However, by exciting an individual NV centre repeatedly and collecting the photons it emits, it is possible to determine the spin state of the centre.
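
The readout step can be pictured with a toy photon-counting model. The sketch below is purely illustrative: the mean photon numbers per shot are assumed order-of-magnitude values for room-temperature NV readout, not figures from this study.

```python
import numpy as np

# Toy model of spin-dependent fluorescence readout: the ms=0 ("bright") spin state
# yields slightly more photons per excitation cycle than ms=+/-1 ("dark"). Counts
# are Poisson-distributed, so many cycles are averaged before thresholding.
rng = np.random.default_rng(1)
bright_rate, dark_rate = 0.03, 0.02          # assumed mean photons collected per shot
shots = 50_000                               # number of excite-and-collect repetitions

mean_counts = rng.poisson(bright_rate, shots).mean()   # simulate reading out an ms=0 state
threshold = 0.5 * (bright_rate + dark_rate)

print(f"average counts per shot: {mean_counts:.4f}")
print("read as ms=0" if mean_counts > threshold else "read as ms=+/-1")
# Collecting a larger fraction of the emitted photons raises both rates,
# so fewer repetitions are needed to separate the two states.
```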

The problem, explains Boaz Lubotzky, who co-led this research effort together with his colleague Ronen Rapaport, is that NV centres radiate over a wide range of angles. Hence, without an efficient collection interface, much of the light they emit is lost.

Standard optics capture around 80% of the light

Lubotzky and colleagues say they have now solved this problem thanks to a hybrid nanostructure made from a PMMA dielectric layer above a silver grating. This grating is arranged in a precise bullseye pattern that accurately guides light in a well-defined direction thanks to constructive interference. Using a nanometre-accurate positioning technique, the researchers placed the nanodiamond containing the NV centres exactly at the optimal location for light collection: right at the centre of the bullseye.

For standard optics with a numerical aperture (NA) of about 0.5, the team found that the system captures around 80% of the light emitted from the NV centres. When NA > 0.7, this value exceeds 90%, while for NA > 0.8, Lubotzky says it approaches unity.

“The device provides a chip-based, room-temperature interface that makes NV emission far more directional, so a larger fraction of photons can be captured by standard lenses or coupled into fibres and photonic chips,” he tells Physics World. “Collecting more photons translates into faster measurements, higher sensitivity and lower power, thereby turning NV centres into compact precision sensors and also into brighter, easier-to-use single-photon sources for secure quantum communication.”

The researchers say their next priority is to transition their prototype into a plug-and-play, room-temperature module – one that is fully packaged and directly coupled to fibres or photonic chips – with wafer-level deterministic placement for arrays. “In parallel, we will be leveraging the enhanced collection for NV-based magnetometry, aiming for faster, lower-power measurements with improved readout fidelity,” says Lubotzky. “This is important because it will allow us to avoid repeated averaging and enable fast, reliable operation in quantum sensors and processors.”

They detail their present work in APL Quantum.


Precision sensing experiment manipulates Heisenberg’s uncertainty principle

Physicists in Australia and the UK have found a new way to manipulate Heisenberg’s uncertainty principle in experiments on the vibrational mode of a trapped ion. Although still at the laboratory stage, the work, which uses tools developed for error correction in quantum computing, could lead to improvements in ultra-precise sensor technologies like those used in navigation, medicine and even astronomy.

“Heisenberg’s principle says that if two operators – for example, position, x, and momentum, p – do not commute, then one cannot simultaneously measure both of them to absolute precision,” explains team leader Ting Rei Tan of the University of Sydney’s Nano Institute. “Our result shows that one can instead construct new operators – namely ‘modular position’ x̂ and ‘modular momentum’ p̂. These operators can be made to commute, meaning that we can circumvent the usual limitation imposed by the uncertainty principle.”

The modular measurements, he says, give the true value of displacements in position and momentum of the particle if the displacement is less than a specific length l, known as the modular length. In the new work, they measured x̂ = x mod lₓ and p̂ = p mod lₚ, where lₓ and lₚ are the modular lengths in position and momentum.
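
A small numerical sketch illustrates the trade-off (this is a toy calculation, not the team’s analysis): displacements smaller than the modular length are returned exactly, while coarse jumps larger than the window wrap around and are lost.

```python
import numpy as np

def centred_mod(value, modulus):
    """Reduce value into a window of width `modulus` centred on zero."""
    return value - modulus * np.round(value / modulus)

l_x = 1.0                             # modular length (arbitrary units)

small = 0.12                          # displacement inside the sensing window
print(centred_mod(small, l_x))        # 0.12 -> the true value is recovered

large = 3.37                          # a coarse jump of three modular lengths plus 0.37
print(centred_mod(large, l_x))        # 0.37 -> the coarse part is invisible to the modular measurement
```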

“Since the two modular operators x̂ and p̂ commute, this means that they are now bounded by an uncertainty principle where the product is greater than or equal to zero (instead of the usual ℏ/2),” adds team member Christophe Valahu. “This is how we can use them to sense position and momentum below the standard quantum limit. The catch, however, is that this scheme only works if the signal being measured is within the sensing range defined by the modular lengths.”

The researchers stress that Heisenberg’s uncertainty principle is in no way “broken” by this approach, but it does mean that when observables associated with these new operators are measured, the precision of these measurements is not limited by this principle. “What we did was to simply push the uncertainty to a sensing range that is relatively unimportant for our measurement to obtain a better precision at finer details,” Valahu tells Physics World.

This concept, Tan explains, is related to an older method known as quantum squeezing that also works by shifting uncertainties around. The difference is that in squeezing, one reshapes the probability, reducing the spread in position at the cost of enlarging the spread of momentum, or vice versa. “In our scheme, we instead redistribute the probability, reducing the uncertainties of position and momentum within a defined sensing range, at the cost of an increased uncertainty if the signal is not guaranteed to lie within this range,” Tan explains. “We effectively push the unavoidable quantum uncertainty to places we don’t care about (that is, big, coarse jumps in position and momentum) so the fine details we do care about can be measured more precisely.

“Thus, as long as we know the signal is small (which is almost always the case for precision measurements), modular measurements give us the correct answer.”

Repurposed ideas and techniques

The particle being measured in Tan and colleagues’ experiment was a ¹⁷¹Yb⁺ ion whose motion was prepared in a so-called grid state, which is a subclass of error-correctable logical states for quantum bits, or qubits. The researchers then used a quantum phase estimation protocol to measure the signal they imprinted onto this state, which acts as a sensor.

This measurement scheme is similar to one that is commonly used to measure small errors in the logical qubit state of a quantum computer. “The difference is that in this case, the ‘error’ corresponds to a signal that we want to estimate, which displaces the ion in position and momentum,” says Tan. “This idea was first proposed in a theoretical study.”

Towards ultra-precise quantum sensors

The Sydney researchers hope their result will motivate the development of next-generation precision quantum sensors. Being able to detect extremely small changes is important for many applications of quantum sensing, including navigating environments where GPS isn’t effective (such as on submarines, underground or in space). It could also be useful for biological and medical imaging, materials analysis and gravitational systems.

Their immediate goal, however, is to further improve the sensitivity of their sensor, which is currently about 14 × 10⁻²⁴ N/√Hz, and calculate its limit. “It would be interesting if we could push that to the 10⁻²⁷ N level (which, admittedly, will not be easy) since this level of sensitivity could be relevant in areas like the search for dark matter,” Tan says.

Another direction for future research, he adds, is to extend the scheme to other pairs of observables. “Indeed, we have already taken some steps towards this: in the latter part of our present study, which is published in Science Advances, we constructed a modular number operator and a modular phase operator to demonstrate that the strategy can be extended beyond position and momentum.”


Scientists obtain detailed maps of earthquake-triggering high-pressure subsurface fluids

Researchers in Japan and Taiwan have captured three-dimensional images of an entire geothermal system deep in the Earth’s crust for the first time. By mapping the underground distribution of phenomena such as fracture zones and phase transitions associated with seismic activity, they say their work could lead to improvements in earthquake early warning models. It could also help researchers develop next-generation versions of geothermal power – a technology that study leader Takeshi Tsuji of the University of Tokyo says has enormous potential for clean, large-scale energy production.

“With a clear three-dimensional image of where supercritical fluids are located and how they move, we can identify promising drilling targets and design safer and more efficient development plans,” Tsuji says. “This could have direct implications for expanding geothermal power generation, reducing dependence on fossil fuels, and contributing to carbon neutrality and energy security in Japan and globally.”

In their study, Tsuji and colleagues focused on a region known as the brittle-ductile transition zone, which is where rocks go from being seismically active to mostly inactive. This zone is important for understanding volcanic activity and geothermal processes because it lies near an impermeable sealing band that allows fluids such as water to accumulate in a high-pressure, supercritical state. When these fluids undergo phase transitions, earthquakes may follow. However, such fluids could also produce more geothermal energy than conventional systems, so identifying their location is important for this reason, too.

A high-resolution “digital map”

Many previous electromagnetic and magnetotelluric surveys suffered from low spatial resolution and were limited to regions relatively close to the Earth’s surface. In contrast, the techniques used in the latest study enabled Tsuji and colleagues to create a clear high-resolution “digital map” of deep geothermal reservoirs – something that has never been achieved before.

To make their map, the researchers used three-dimensional multichannel seismic surveys to image geothermal structures in the Kuju volcanic group, which is located on the Japanese island of Kyushu. They then analysed these images using a method they developed known as extended Common Reflection Surface (CRS) stacking. This allowed them to visualize deeper underground features such as magma-related structures, fracture-controlled fluid pathways and rock layers that “seal in” supercritical fluids.

“In addition to this, we applied advanced seismic tomography and machine-learning based analyses to determine the seismic velocity of specific structures and earthquake mechanisms with high accuracy,” explains Tsuji. “It was this integrated approach that allowed us to image a deep geothermal system in unprecedented detail.” He adds that the new technique is also better suited to mountainous geothermal regions where limited road access makes it hard to deploy the seismic sources and receivers used in conventional surveys.

A promising site for future supercritical geothermal energy production

Tsuji and colleagues chose to study the Kuju area because it is home to several volcanoes that were active roughly 1600 years ago and have erupted intermittently in recent years. The region also hosts two major geothermal power plants, Hatchobaru and Otake. The former has a capacity of 110 MW and is the largest geothermal facility in Japan.

The heat source for both plants is thought to be located beneath Mt Kuroiwa and Mt Sensui, and the region is considered a promising site for supercritical geothermal energy production. Its geothermal reservoir appears to consist of water that initially fell as precipitation (so-called meteoric water) and was heated underground before migrating westward through the fault system. Until now, though, no detailed images of the magmatic structures and fluid pathways had been obtained.

Tsuji says he has long wondered why geothermal power is not more widely used in Japan, despite the country’s abundant volcanic and thermal resources. “Our results now provide the scientific and technical foundation for next-generation supercritical geothermal power,” he tells Physics World.

The researchers now plan to try out their technique using portable seismic sources and sensors deployed in mountainous areas (not just along roads) to image the shallower parts of geothermal systems in greater detail as well. “We also plan to extend our surveys to other geothermal fields to test the general applicability of our method,” Tsuji says. “Ultimately, our goal is to provide a reliable scientific basis for the large-scale deployment of supercritical geothermal power as a sustainable energy source.”

The present work is detailed in Communications Earth & Environment.


Physicists explain why some fast-moving droplets stick to hydrophobic surfaces

What happens when a microscopic drop of water lands on a water-repelling surface? The answer is important for many everyday situations, including pesticides being sprayed on crops and the spread of disease-causing aerosols. Naively, one might expect it to depend on the droplet’s speed, with faster-moving droplets bouncing off the surface and slower ones sticking to it. However, according to new experiments, theoretical work and simulations by researchers in the UK and the Netherlands, it’s more complicated than that.

“If the droplet moves too slowly, it sticks,” explains Jamie McLauchlan, a PhD student at the University of Bath, UK who led the new research effort with Bath’s Adam Squires and Anton Souslov of the University of Cambridge. “Too fast, and it sticks again. Only in between is bouncing possible, where there is enough momentum to detach from the surface but not so much that it collapses back onto it.”

As well as this new velocity-dependent condition, the researchers also discovered a size effect in which droplets that are too small cannot bounce, no matter what their speed. This size limit, they say, is set by the droplets’ viscosity, which prevents the tiniest droplets from leaving the surface once they land on it.

Smaller-sized, faster-moving droplets

While academic researchers and industrialists have long studied single-droplet impacts, McLauchlan says that much of this earlier work focused on millimetre-sized drops whose impacts take place on millisecond timescales. “We wanted to push this knowledge to smaller sizes of micrometre droplets and faster speeds, where higher surface-to-volume ratios make interfacial effects critical,” he says. “We were motivated even further during the COVID-19 pandemic, when studying how small airborne respiratory droplets interact with surfaces became a significant concern.”

Working at such small sizes and fast timescales is no easy task, however. To record the outcome of each droplet landing, McLauchlan and colleagues needed a high-speed camera that effectively slowed down motion by a factor of 100 000. To produce the droplets, they needed piezoelectric droplet generators capable of dispensing fluid via tiny 30-micron nozzles. “These dispensers are highly temperamental,” McLauchlan notes. “They can become blocked easily by dust and fibres and fail to work if the fluid viscosity is too high, making experiments delicate to plan and run. The generators are also easy to break and expensive.”

Droplet modelled as a tiny spring

The researchers used this experimental set-up to create and image droplets between 30 and 50 µm in diameter as they struck water-repelling surfaces at speeds of 1–10 m/s. They then compared their findings with calculations based on a simple mathematical model that treats a droplet like a tiny spring, taking into account three main parameters in addition to its speed: the stickiness of the surface; the viscosity of the droplet liquid; and the droplet’s surface tension.
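
To get a feel for the regime involved, one can compute the dimensionless groups that are conventionally used to characterize droplet impacts: the Weber number (inertia versus surface tension) and the Ohnesorge number (viscosity versus inertia and surface tension). The sketch below assumes room-temperature water properties and is illustrative only; these are not the parameters fitted in the paper.

```python
# Standard dimensionless groups for a micrometre-scale water droplet impact.
rho = 1000.0      # kg/m^3, density of water (assumed)
sigma = 0.072     # N/m, surface tension of water (assumed)
mu = 1.0e-3       # Pa*s, dynamic viscosity of water (assumed)
D = 40e-6         # m, droplet diameter, mid-range of the 30-50 um studied

ohnesorge = mu / (rho * sigma * D) ** 0.5      # independent of impact speed
print(f"Oh = {ohnesorge:.3f}")

for v in (1.0, 5.0, 10.0):                     # impact speeds spanning the experiments, m/s
    weber = rho * v ** 2 * D / sigma           # inertia relative to surface tension
    print(f"v = {v:4.1f} m/s : We = {weber:5.1f}")
```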

Previous research had shown that on perfectly non-wetting surfaces, bouncing does not depend on velocity. Other studies showed that on very smooth surfaces, droplets can bounce on a thin air layer. “Our work has explored a broader range of hydrophobic surfaces, showing that bouncing occurs due to a delicate balance of kinetic energy, viscous dissipation and interfacial energies,” McLauchlan tells Physics World.

This is exciting, he adds, because it reveals a previously unexplored regime for bounce behaviour: droplets that are too small, or too slow, will always stick, while sufficiently fast droplets can rebound. “This finding provides a general framework that explains bouncing at the micron scale, which is directly relevant for aerosol science,” he says.

A novel framework for engineering microdroplet processes

McLauchlan thinks that by linking bouncing to droplet velocity, size and surface properties, the new framework could make it easier to engineer microdroplets for specific purposes. “In agriculture, for example, understanding how spray velocities interact with plant surfaces with different hydrophobicity could help determine when droplets deposit fully versus when they bounce away, improving the efficiency of crop spraying,” he says.

Such a framework could also be beneficial in the study of airborne diseases, since exhaled droplets frequently bump into surfaces while floating around indoors. While droplets that stick are removed from the air, and can no longer transmit disease via that route, those that bounce are not. Quantifying these processes in typical indoor environments will provide better models of airborne pathogen concentrations and therefore disease spread, McLauchlan says. For example, in healthcare settings, coatings could be designed to inhibit or promote bouncing, ensuring that high-velocity respiratory droplets from sneezes either stick to hospital surfaces or recoil from them, depending on which mode of potential transmission (airborne or contact-based) is being targeted.

The researchers now plan to expand their work on aqueous droplets to droplets with more complex soft-matter properties. “This will include adding surfactants, which introduce time-dependent surface tensions, and polymers, which give droplets viscoelastic properties similar to those found in biological fluids,” McLauchlan reveals. “These studies will present significant experimental challenges, but we hope they broaden the relevance of our findings to an even wider range of fields.”

The present work is detailed in PNAS.


Physicists apply quantum squeezing to a nanoparticle for the first time

Physicists at the University of Tokyo, Japan have performed quantum mechanical squeezing on a nanoparticle for the first time. The feat, which they achieved by levitating the particle and rapidly varying the frequency at which it oscillates, could allow us to better understand how very small particles transition between classical and quantum behaviours. It could also lead to improvements in quantum sensors.

Oscillating objects that are smaller than a few microns in diameter have applications in many areas of quantum technology. These include optical clocks and superconducting devices as well as quantum sensors. Such objects are small enough to be affected by Heisenberg’s uncertainty principle, which places a limit on how precisely we can simultaneously measure the position and momentum of a quantum object. More specifically, the product of the measurement uncertainties in the position and momentum of such an object must be greater than or equal to ħ/2, where ħ is the reduced Planck constant.

In these circumstances, the only way to decrease the uncertainty in one variable – for example, the position – is to boost the uncertainty in the other. This process has no classical equivalent and is called squeezing because reducing uncertainty along one axis of position-momentum space creates a “bulge” in the other, like squeezing a balloon.
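
In numbers, squeezing rescales the two variances in opposite directions while leaving their product at the Heisenberg bound. The sketch below uses natural units and an arbitrary squeezing parameter; the decibel conversion at the end shows what a squeezing level like the one reported below means for the variance.

```python
import numpy as np

hbar = 1.0                              # natural units
var_ground = hbar / 2                   # zero-point variance of x and p in the ground state

r = 0.5                                 # squeezing parameter (illustrative value)
var_x = var_ground * np.exp(-2 * r)     # squeezed quadrature: uncertainty reduced
var_p = var_ground * np.exp(+2 * r)     # anti-squeezed quadrature: uncertainty increased
print(np.isclose(var_x * var_p, (hbar / 2) ** 2))    # True: the product stays at the bound

# A squeezing level quoted in decibels is a ratio of variances:
squeezing_dB = 4.9
print(f"variance reduced to {10 ** (-squeezing_dB / 10):.2f} of the ground-state value")  # ~0.32
```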

A charge-neutral nanoparticle levitated in an optical lattice

In the new work, which is detailed in Science, a team led by Kiyotaka Aikawa studied a single, charge-neutral nanoparticle levitating in a periodic intensity pattern formed by the interference of criss-crossed laser beams. Such patterns are known as optical lattices, and they are ideal for testing the quantum mechanical behaviour of small-scale objects because they can levitate the object. This keeps it isolated from other particles and allows it to sustain its fragile quantum state.

After levitating the particle and cooling it to its motional ground state, the team rapidly varied the intensity of the lattice laser. This had the effect of changing the particle’s oscillation frequency, which in turn changed the uncertainty in its momentum. To measure this change (and prove they had demonstrated quantum squeezing), the researchers then released the nanoparticle from the trap and let it propagate for a short time before measuring its velocity. By repeating these time-of-flight measurements many times, they were able to obtain the particle’s velocity distribution.

The telltale sign of quantum squeezing, the physicists say, is that the velocity distribution they measured for the nanoparticle was narrower than the uncertainty in velocity for the nanoparticle at its lowest energy level. Indeed, the measured velocity variance was narrower than that of the ground state by 4.9 dB, which they say is comparable to the largest mechanical quantum squeezing obtained thus far.

“Our system will enable us to realize further exotic quantum states of motions and to elucidate how quantum mechanics should behave at macroscopic scales and become classical,” Aikawa tells Physics World. “This could allow us to develop new kinds of quantum devices in the future.”


Motion blur brings a counterintuitive advantage for high-resolution imaging

Blur benefit: Images on the left were taken by a camera that was moving during exposure. Images on the right used the researchers’ algorithm to increase their resolution with information captured by the camera’s motion. (Courtesy: Pedro Felzenszwalb/Brown University)

Images captured by moving cameras are usually blurred, but researchers at Brown University in the US have found a way to sharpen them up using a new deconvolution algorithm. The technique could allow ordinary cameras to produce gigapixel-quality photos, with applications in biological imaging and archival/preservation work.

“We were interested in the limits of computational photography,” says team co-leader Rashid Zia, “and we recognized that there should be a way to decode the higher-resolution information that motion encodes onto a camera image.”

Conventional techniques to reconstruct high-resolution images from low-resolution ones involve relating low-res to high-res via a mathematical model of the imaging process. The effectiveness of these techniques is limited, however, as they produce only relatively small increases in resolution. If the initial image is blurred due to camera motion, this also limits the maximum resolution possible.

Exploiting the “tracks” left by small points of light

Together with Pedro Felzenszwalb of Brown’s computer science department, Zia and colleagues overcame these problems, successfully reconstructing a high-resolution image from one or several low-resolution images produced by a moving camera. The algorithm they developed to do this takes the “tracks” left by light sources as the camera moves and uses them to pinpoint precisely where the fine details must have been located. It then reconstructs these details on a finer, sub-pixel grid.
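
The flavour of such a reconstruction can be captured in a one-dimensional sketch. The code below is a generic multi-frame super-resolution example with known sensor shifts, solved by least squares; it illustrates the general principle rather than the Brown team’s algorithm.

```python
import numpy as np

# Each low-resolution frame is modelled as the fine-grid signal shifted by a known
# sub-pixel offset and then block-averaged (the pixel integration that causes blur).
# Several frames with different known shifts give a linear system for the fine grid.
rng = np.random.default_rng(0)
factor, n_fine = 4, 64
x_true = np.zeros(n_fine)
x_true[10], x_true[12] = 1.0, 0.7      # two point sources closer than one coarse pixel

def forward_matrix(shift):
    """Shift the fine grid by `shift` fine pixels, then average blocks of `factor` pixels."""
    S = np.roll(np.eye(n_fine), shift, axis=1)
    D = np.kron(np.eye(n_fine // factor), np.ones((1, factor)) / factor)
    return D @ S

shifts = [0, 1, 2, 3]                                   # known camera offsets, in fine pixels
A = np.vstack([forward_matrix(s) for s in shifts])
y = A @ x_true + rng.normal(0, 1e-3, A.shape[0])        # simulated noisy low-res frames

# lstsq returns the minimum-norm least-squares solution, which is enough to
# separate the two sources even though each individual frame cannot resolve them.
x_rec, *_ = np.linalg.lstsq(A, y, rcond=None)
print("brightest fine-grid pixels:", np.sort(np.argsort(x_rec)[-2:]))   # [10 12]
```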

“There was some prior theoretical work that suggested this shouldn’t be possible,” says Felzenszwalb. “But we show that there were a few assumptions in those earlier theories that turned out not to be true. And so this is a proof of concept that we really can recover more information by using motion.”

Application scenarios

When they tried the algorithm out, they found that it could indeed exploit the camera motion to produce images with much higher resolution than could be obtained without the motion. In one experiment, they used a standard camera to capture a series of images in a grid of high-resolution (sub-pixel) locations. In another, they took one or more images while the sensor was moving. They also simulated recording single images or sequences of pictures while vibrating the sensor and while moving it along a linear path. These scenarios, they note, could be applicable to aerial or satellite imaging. In all cases, they used their algorithm to construct a single high-resolution image from the shots captured by the camera.

“Our results are especially interesting for applications where one wants high resolution over a relatively large field of view,” Zia says. “This is important at many scales from microscopy to satellite imaging. Other areas that could benefit are super-resolution archival photography of artworks or artifacts and photography from moving aircraft.”

The researchers say they are now looking into the mathematical limits of this approach as well as practical demonstrations. “In particular, we hope to soon share results from consumer camera and mobile phone experiments as well as lab-specific setups using scientific-grade CCDs and thermal focal plane arrays,” Zia tells Physics World.

“While there are existing systems that cameras use to take motion blur out of photos, no one has tried to use that to actually increase resolution,” says Felzenszwalb. “We’ve shown that’s something you could definitely do.”

The researchers presented their study at the International Conference on Computational Photography and their work is also available on the arXiv pre-print server.


Optical gyroscope detects Earth’s rotation with the highest precision yet

As the Earth moves through space, it wobbles. Researchers in Germany have now directly observed this wobble with the highest precision yet thanks to a large ring laser gyroscope they developed for this purpose. The instrument, which is located in southern Germany and operates continuously, represents an important advance in the development of super-sensitive rotation sensors. If further improved, such sensors could help us better understand the interior of our planet and test predictions of relativistic effects, including the distortion of space-time due to Earth’s rotation.

The Earth rotates once every day, but there are tiny fluctuations, or wobbles, in its axis of rotation. These fluctuations are caused by several factors, including the gravitational forces of the Moon and Sun and, to a lesser extent, the neighbouring planets in our Solar System. Other, smaller fluctuations stem from the exchange of momentum between the solid Earth and the oceans, atmosphere and ice sheets. The Earth’s shape, which is not a perfect sphere but is flattened at the poles and thickened at the equator, also contributes to the wobble.

These different types of fluctuations produce effects known as precession and nutation that cause the extension of the Earth’s axis to trace a wrinkly circle in the sky. At the moment, this extended axis points almost exactly at the North Star. In the future, it will align with other stars before returning to the North Star again in a cycle that lasts 26,000 years.

Most studies of the Earth’s rotation involve combining data from many sources. These sources include very long baseline radio-astronomy observations of quasars; global satellite navigation systems (GNSS); and GNSS observations combined with satellite laser ranging (SLR) and Doppler orbitography and radiopositioning integrated by satellite (DORIS). These techniques are based on measuring the travel time of light, and because it is difficult to combine them, only one such measurement can be made per day.

An optical interferometer that works using the Sagnac effect

The new gyroscope, which is detailed in Science Advances, is an optical interferometer that operates using the Sagnac effect. At its heart is an optical cavity that guides a light beam around a square path 16 m long. Depending on the rate of rotation it experiences, this cavity selects two different frequencies from the beam to be coherently amplified. “The two frequencies chosen are the only ones that have an integer number of waves around the cavity,” explains team leader Ulrich Schreiber of the Technische Universität München (TUM). “And because of the finite velocity of light, the co-rotating beam ‘sees’ a slightly larger cavity, while the anti-rotating beam ‘sees’ a slightly shorter one.”

The frequency shift in the interference pattern produced by the co-rotating beam is projected onto an external detector and is strictly proportional to the Earth’s rotation rate. Because the accuracy of the measurement depends, in part, on the mechanical stability of the set-up, the researchers constructed their gyroscope from a glass ceramic that does not expand much with temperature. They also set it up horizontally in an underground laboratory, the Geodetic Observatory Wettzell in southern Bavaria, to protect it as much as possible from external vibrations.
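
Plugging the geometry described above into the standard Sagnac formula gives a sense of the raw signal the instrument works with. In the sketch below, the beat frequency is Δf = 4AΩ sin(φ)/(λP); the helium-neon wavelength of 633 nm and the observatory latitude of about 49° N are assumptions for illustration, not figures quoted by the team.

```python
import math

# Sagnac beat frequency of a horizontal square ring laser driven by Earth's rotation.
side = 4.0                        # m, side of the square cavity (16 m round trip, as stated above)
A = side ** 2                     # m^2, enclosed area
P = 4 * side                      # m, perimeter (optical path length)
wavelength = 633e-9               # m, assumed HeNe laser wavelength
omega_earth = 7.292115e-5         # rad/s, Earth's rotation rate
latitude = math.radians(49.1)     # assumed latitude of the Wettzell observatory

# Only the component of Earth's rotation along the ring normal (vertical here) matters.
delta_f = 4 * A * omega_earth * math.sin(latitude) / (wavelength * P)
print(f"Sagnac beat frequency ~ {delta_f:.0f} Hz")    # a few hundred hertz
```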

The instrument can sense the Earth’s rotation to within an accuracy of 48 parts per billion (ppb), which corresponds to a few picoradians per second. “This is about a factor of 100 better than any other rotation sensor,” says Schreiber, “and, importantly, is less than an order of magnitude away from the regime in which relativistic effects can be measured – but we are not quite there yet.”

An increase in the measurement accuracy and stability of the ring laser by a factor of 10 would, Schreiber adds, allow the researchers to measure the space-time distortion caused by the Earth’s rotation. For example, it would permit them to conduct a direct test for the Lense-Thirring effect — that is, the “dragging” of space by the Earth’s rotation – right at the Earth’s surface.

To reach this goal, the researchers say they would need to amend several details of their sensor design. One example is the composition of the thin-film coatings on the mirrors inside their optical interferometer. “This is neither easy nor straightforward,” explains Schreiber, “but we have some ideas to try out and hope to progress here in the near future.

“In the meantime, we are working towards implementing our measurements into a routine evaluation procedure,” he tells Physics World.


Advances in quantum error correction showcased at Q2B25

This year’s Q2B meeting took place at the end of last month in Paris at the Cité des Sciences et de l’Industrie, a science museum in the north-east of the city. The event brought together more than 500 attendees and 70 speakers – world-leading experts from industry, government institutions and academia. All major quantum technologies were highlighted: computing, AI, sensing, communications and security.

Among the quantum computing topics was quantum error correction (QEC) – something that will be essential for building tomorrow’s fault-tolerant machines. Indeed, it could even be the technology’s most important and immediate challenge, according to the speakers on the State of Quantum Error Correction Panel: Paul Hilaire of Telecom Paris/IP Paris, Michael Vasmer of Inria, Quandela’s Boris Bourdoncle, Riverlane’s Joan Camps and Christophe Vuillot from Alice & Bob.

As was clear from the conference talks, quantum computers are undoubtedly advancing in leaps and bounds. One of their most important weak points, however, is that their fundamental building blocks (quantum bits, or qubits) are highly prone to errors. These errors are caused by interactions with the environment – also known as noise – and correcting them will require innovative software and hardware. Today’s machines are only capable of running on average a few hundred operations before an error occurs; but in the future, we will have to develop quantum computers capable of processing a million error-free quantum operations (known as a MegaQuOp) or even a trillion error-free operations (TeraQuOps).

QEC works by distributing one quantum bit of information – called a logical qubit – across several different physical qubits, such as superconducting circuits or trapped atoms. Each physical qubit is noisy, but they work together to preserve the quantum state of the logical qubit – at least for long enough to perform a calculation. It was Peter Shor who first discovered this method of formulating a quantum error correcting code by storing the information of one qubit onto a highly entangled state of nine qubits. A technique known as syndrome decoding is then used to diagnose which error was the likely source of corruption on an encoded state. The error can then be reversed by applying a corrective operation depending on the syndrome.
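
The idea is easiest to see in the simplest possible example: a three-qubit bit-flip repetition code, simulated classically below. This is a minimal sketch, far simpler than Shor’s nine-qubit code or the codes discussed later, but it shows how parity checks (the syndrome) reveal which qubit flipped without ever reading out the encoded bit itself.

```python
# Three-qubit bit-flip repetition code, simulated classically.
def encode(b):
    return [b, b, b]                      # one logical bit stored across three physical bits

def syndrome(q):
    return (q[0] ^ q[1], q[1] ^ q[2])     # parities of neighbouring pairs

# Look-up table: syndrome -> which single qubit (if any) flipped.
CORRECTION = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def decode(q):
    flipped = CORRECTION[syndrome(q)]
    if flipped is not None:
        q[flipped] ^= 1                   # apply the corrective operation
    return max(set(q), key=q.count)       # majority vote recovers the logical bit

codeword = encode(1)
codeword[2] ^= 1                          # a single bit-flip error on the third qubit
print(syndrome(codeword))                 # (0, 1) -> points at qubit 2
print(decode(codeword))                   # 1 -> the logical information survives
```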

Computing advances: A prototype quantum computer from NVIDIA that makes use of seven qubits. (Courtesy: Isabelle Dumé)

While error correction should become more effective as the number of physical qubits in a logical qubit increases, adding more physical qubits to a logical qubit also adds more noise. Much progress has been made in addressing this and other noise issues in recent years, however.

“We can say there’s a ‘fight’ when increasing the length of a code,” explains Hilaire. “Doing so allows us to correct more errors, but we also introduce more sources of errors. The goal is thus being able to correct more errors than we introduce. What I like with this picture is the clear idea of the concept of a fault-tolerant threshold below which fault-tolerant quantum computing becomes feasible.”
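
This threshold behaviour is often summarized with a simple scaling ansatz for the logical error rate, p_L ≈ A(p/p_th)^((d+1)/2), where p is the physical error rate, p_th the threshold and d the code distance. The numbers below (A = 0.1, p_th = 1%) are illustrative values of the kind often used for surface-code estimates, not results presented by the panel.

```python
# Illustrative below- vs above-threshold behaviour (assumed prefactor and threshold).
A, p_th = 0.1, 1e-2

def logical_error_rate(p, d):
    return A * (p / p_th) ** ((d + 1) // 2)

for p in (5e-3, 1.5e-2):                  # physical error rate below, then above, the threshold
    rates = ", ".join(f"d={d}: {logical_error_rate(p, d):.1e}" for d in (3, 5, 7, 9))
    print(f"p = {p:.0e} -> {rates}")
# Below threshold, growing the code suppresses logical errors exponentially;
# above it, adding more qubits only makes things worse.
```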

Developments in QEC theory

Speakers at the Q2B25 meeting shared a comprehensive overview of the most recent advancements in the field – and they are varied. First up: concatenated error-correction codes. Prevalent in the early days of QEC, these fell by the wayside in favour of codes like the surface code, but recent work has shown they are making a return. Concatenated codes can achieve constant encoding rates, and a scheme for a quantum computer operating with linear, nearest-neighbour connectivity was recently put forward. Directional codes, the likes of which are being developed by Riverlane, are also being studied. These leverage native transmon qubit logic gates – for example, iSWAP gates – and could potentially outperform surface codes in some aspects.

The panellists then described bivariate bicycle codes, being developed by IBM, which offer better encoding rates than surface codes. While their decoding can be challenging for real-time applications, IBM’s “relay belief propagation” (relay BP) has made progress here by simplifying decoding strategies that previously involved combining BP with post-processing. The good thing is that this decoder is actually very general and works for all “low-density parity-check codes” – one of the most studied classes of high-performance QEC codes (these also include, for example, surface codes and directional codes).

There is also renewed interest in decoders that can be parallelized and operate locally within a system, they said. These have shown promise for codes like the 1D repetition code, which could revive the concept of self-correcting or autonomous quantum memory. Another possibility is the increased use of the graphical language ZX calculus as a tool for optimizing QEC circuits and understanding spacetime error structures.

Hardware-specific challenges

The panel stressed that to achieve robust and reliable quantum systems, we will need to move beyond so-called hero experiments. For example, the demand for real-time decoding at megahertz frequencies with microsecond latencies is an important and unprecedented challenge. Indeed, breaking down the decoding problem into smaller, manageable pieces has proven difficult so far.

There are also issues with qubit platforms themselves that need to be addressed: trapped ions and neutral atoms allow for high fidelities and long coherence times, but they are roughly 1000 times slower than superconducting and photonic qubits and therefore require algorithmic or hardware speed-ups. And that is not all: solid-state qubits (such as superconducting and spin qubits) suffer from a “yield problem”, with dead qubits on manufactured chips. Improved fabrication methods will thus be crucial, said the panellists.


Collaboration between academia and industry

The discussions then moved towards the subject of collaboration between academia and industry. In the field of QEC, such collaboration is highly productive today, with joint PhD programmes and shared conferences like Q2B, for example. Large companies also now boast substantial R&D departments capable of funding high-risk, high-reward research, blurring the lines between fundamental and application-oriented research. Both sectors also use similar foundational mathematics and physics tools.

At the moment there’s an unprecedented degree of openness and cooperation in the field. This situation might change, however, as commercial competition heats up, noted the panellists. In the future, for example, researchers from both sectors might be less inclined to share experimental chip details.

Last, but certainly not least, the panellists stressed the urgent need for more PhDs trained in quantum mechanics to address the talent deficit in both academia and industry. So, if you were thinking of switching to another field, perhaps now could be the time to jump.


Perovskite detector could improve nuclear medicine imaging

A perovskite semiconductor that can detect and image single gamma-ray photons with both high-spatial and high-energy resolution could be used to create next-generation nuclear medicine scanners that can image faster and provide clearer results. The perovskite is also easier to grow and much cheaper than existing detector materials such as cadmium zinc telluride (CZT), say the researchers at Northwestern University in the US and Soochow University in China who developed it.

Nuclear medicine imaging techniques like single-photon emission computed tomography (SPECT) work by detecting the gamma rays emitted by a short-lived radiotracer delivered to a specific part of a patient’s body. Each gamma ray can be thought of as being a pixel of light, and after millions of these pixels have been collected, a 3D image of the region of interest can be built up by an external detector.

Such detectors are today made from either semiconductors like CZT or scintillators such as NaI:Tl, CsI and LYSO, but CZT detectors are expensive – often costing hundreds of thousands to millions of dollars. CZT crystals are also brittle, making the detectors difficult to manufacture. While NaI is cheaper than CZT, detectors made of this material end up being bulky and generate blurrier images.

High-quality crystals of CsPbBr3

To overcome these problems, researchers led by Mercouri Kanatzidis and Yihui He studied the lead halide perovskite crystal CsPbBr3. They already knew that this was an efficient solar cell material and recently, they discovered that it also showed promise for detecting X-rays and gamma rays.

In the new work, detailed in Nature Communications, the team grew high-quality crystals of CsPbBr3 and fabricated them into detector devices. “When a gamma-ray photon enters the crystal, it interacts with the material and produces electron–hole pairs,” explains Kanatzidis. “These charge carriers are collected as an electrical signal that we can measure to determine both the energy of the photon and its point of interaction.”

The researchers found that their detectors could resolve individual gamma rays at the energies used in SPECT imaging with high resolution. They could also sense extremely weak signals from the medical tracer technetium-99m, which is routinely employed in hospital settings. They were thus able to produce sharp images that could distinguish features as small as 3.2 mm. This fine sensitivity means that patients could undergo shorter scans or receive smaller radiation doses than with NaI or CZT detectors.

Ten years of optimization

“Importantly, a parallel study published in Advanced Materials the same week as our Nature Communications paper directly compared perovskite performance with CZT, the only commercial semiconductor material available today for SPECT, which showed that perovskites can even surpass CZT in certain aspects,” says Kanatzidis.

“The result was possible thanks to our efforts over the last 10 years in optimizing the crystal growth of CsPbBr3, improving the electrode contacts in the detectors and carrier transport and nuclear electronics therein,” adds He. “Since the first demonstration of high spectral resolution by CsPbBr3 in our previous work, it has gradually been recognized as the most promising competitor to CZT.”

Looking forward, the Northwestern–Soochow team is now busy scaling up detector fabrication and improving its long-term stability. “We are also trying to better understand the fundamental physics of how gamma rays interact in perovskites, which could help optimize future materials,” says Kanatzidis. “A few years ago, we established a new company, Actinia, with the goal of commercializing this technology and moving it toward practical use in hospitals and clinics,” he tells Physics World.

“High-quality nuclear medicine shouldn’t be limited to hospitals that can afford the most expensive equipment,” he says. “With perovskites, we can open the door to clearer, faster, safer scans for many more patients around the world. The ultimate goal is better scans, better diagnoses and better care for patients.”


Bayes’ rule goes quantum

How would Bayes’ rule – a technique to calculate probabilities – work in the quantum world? Physicists at the National University of Singapore, Nagoya University in Japan, and the Hong Kong University of Science and Technology in Guangzhou have now put forward a possible explanation. Their work could help improve quantum machine learning and quantum error correction in quantum computing.

Bayes’ rule is named after Thomas Bayes, who first defined it for conditional probabilities in “An Essay Towards Solving a Problem in the Doctrine of Chances”, published in 1763. It describes the probability of an event based on prior knowledge of conditions that might be related to the event. One area in which it is routinely used is to update beliefs based on new evidence (data). In classical statistics, the rule can be derived from the principle of minimum change, meaning that the updated beliefs must be consistent with the new data while only minimally deviating from the previous belief.

In mathematical terms, the principle of minimum change minimizes the distance between the joint probability distributions of the initial and updated belief. Simply put, this is the idea that for any new piece of information, beliefs are updated in the smallest possible way that is compatible with the new facts. For example, when a person tests positive for Covid-19, they may have suspected that they were ill, but the new information confirms this. Bayes’ rule is therefore a way to calculate the probability of having contracted Covid-19 based not only on the test result and the chance of the test yielding a false negative, but also on the patient’s initial suspicions.
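
A short worked example shows how the classical rule combines these ingredients. All of the numbers below are made up for illustration; they are not taken from the paper.

```python
# Classical Bayes' rule for the Covid-19 example above (all values assumed).
prior = 0.20           # initial suspicion of being infected
sensitivity = 0.90     # P(positive | infected) = 1 - false-negative rate
false_positive = 0.05  # P(positive | not infected)

# Law of total probability: overall chance of a positive result.
p_positive = sensitivity * prior + false_positive * (1 - prior)

# Bayes' rule: updated belief after seeing a positive test.
posterior = sensitivity * prior / p_positive
print(f"P(infected | positive test) = {posterior:.2f}")   # ~0.82
```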

Quantum analogue

Quantum versions of Bayes’ rule have been around for decades, but the approach through the minimum change principle had not been tried before. In the new work, a team led by Ge Bai, Francesco Buscemi and Valerio Scarani set out to do just that.

“We found which quantum Bayes’ rule is singled out when one maximizes the fidelity (which is equivalent to minimizing the change) between two processes,” explains Bai. “In many cases, the solution is the ‘Petz recovery map’, which was proposed by Dénes Petz in the 1980s and was already considered one of the best candidates for the quantum Bayes’ rule. It is based on the rules of information processing, crucial not only for human reasoning, but also for machine learning models that update their parameters with new data.”
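
For the curious, the Petz recovery map is straightforward to write down numerically. The sketch below implements the textbook formula R(ρ) = σ^(1/2) E†( E(σ)^(−1/2) ρ E(σ)^(−1/2) ) σ^(1/2) for a single qubit; the depolarizing channel and the prior state are arbitrary choices for illustration, and the code is not from the study.

```python
import numpy as np
from scipy.linalg import sqrtm

def apply_channel(kraus, rho):
    """E(rho) = sum_i K_i rho K_i^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus)

def apply_adjoint(kraus, X):
    """E^dagger(X) = sum_i K_i^dagger X K_i."""
    return sum(K.conj().T @ X @ K for K in kraus)

def petz_recovery(kraus, sigma, rho_out):
    """Petz map with prior sigma, applied to a channel output rho_out."""
    E_sigma_inv_sqrt = np.linalg.inv(sqrtm(apply_channel(kraus, sigma)))
    s = sqrtm(sigma)
    return s @ apply_adjoint(kraus, E_sigma_inv_sqrt @ rho_out @ E_sigma_inv_sqrt) @ s

# Example: a depolarizing channel on one qubit and an arbitrary prior state sigma.
I = np.eye(2); X = np.array([[0, 1], [1, 0]]); Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0])
p = 0.2
kraus = [np.sqrt(1 - 3 * p / 4) * I] + [np.sqrt(p / 4) * P for P in (X, Y, Z)]
sigma = np.array([[0.7, 0.2], [0.2, 0.3]])    # prior belief (a valid density matrix)

# Sanity check: recovering the channel output of the prior returns the prior itself.
recovered = petz_recovery(kraus, sigma, apply_channel(kraus, sigma))
print(np.allclose(recovered, sigma))          # True
```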

Quantum theory is counter-intuitive, and the mathematics is hard, says Bai. “Our work provides a mathematically sound way to update knowledge about a quantum system, rigorously derived from simple principles of reasoning,” he tells Physics World. “It demonstrates that the mathematical description of a quantum system – the density matrix – is not just a predictive tool, but is genuinely useful for representing our understanding of an underlying system. It effectively extends the concept of gaining knowledge, which mathematically corresponds to a change in probabilities, into the quantum realm.”

A conservative stance

The “simple principles of reasoning” encompass the minimum change principle, adds Buscemi. “The idea is that while new data should lead us to update our opinion or belief about something, the change should be as small as possible, given the data received.

“It’s a conservative stance of sorts: I’m willing to change my mind, but only by the amount necessary to accept the hard facts presented to me, no more.”

“This is the simple (yet powerful) principle that Ge mentioned,” he says, “and it guides scientific inference by preventing unwanted biases from entering the reasoning process.”

An axiomatic approach to the Petz recovery map

While several quantum versions of Bayes’ rule have been put forward before now, these were mostly justified by having properties analogous to those of the classical rule, adds Scarani. “Recently, Francesco and one co-author proposed an axiomatic approach to the most frequently used quantum Bayes rule, the one using the Petz recovery map. Our work is the first to derive a quantum Bayes rule from an optimization principle, which works very generally for classical information, but which has been used here for the first time in quantum information.”

The result is very intriguing, he says: “We recover the Petz map in many cases, but not all. If we take our new approach to be the correct way to define a quantum Bayes rule, then previous constructions based on analogies were correct very often, but not quite always; and one or more of the axioms are not to be enforced after all. Our work is therefore a major advance, but it is not the end of the road – and this is nice.”

Indeed, the researchers say they are now busy further refining their quantum Bayes’ rule. They are also looking into applications for it. “Beyond machine learning, this rule could be powerful for inference – not just for predicting the future but also for retrodicting the past,” says Bai. “This is directly applicable to problems in quantum communication, where one must recover encoded messages, and in quantum tomography, where the goal is to infer a system’s internal state from observations.

“We will be using our results to develop new, hopefully more efficient, and mathematically well-founded methods for these tasks,” he concludes.

The present study is detailed in Physical Review Letters.

The post Bayes’ rule goes quantum appeared first on Physics World.

  •  

Antiferromagnets could be better than ferromagnets for some ultrafast, high-density memories

Diagrams showing a memory made from a chiral antiferromagnet
How it works On the left is a schematic illustration of a memory device consisting of a chiral antiferromagnet Mn3Sn / nonmagnetic metal heterostructure. On the right is a scanning electron microscope image of the fabricated device with a Mn3Sn nanodot and a nonmagnetic metal channel. (Courtesy: Yutaro Takeuchi et al)

While antiferromagnets show much promise for spintronics applications, they have proved more difficult to control than ferromagnets. Researchers in Japan have now succeeded in switching an antiferromagnetic manganese–tin nanodot using electric current pulses as short as 0.1 ns. Their work shows that these materials can be used to make efficient high-speed, high-density memories that operate at gigahertz frequencies, outperforming ferromagnets in this range.

In antiferromagnets, spins can flip quickly, potentially reaching frequencies well beyond the gigahertz range. Such rapid spin flips are possible because neighbouring spins in antiferromagnets align antiparallel to each other thanks to strong interactions among the spins. This is different from ferromagnets, in which neighbouring electron spins align parallel to each other.

Another of their advantages is that antiferromagnets display almost no macroscopic magnetization, meaning that bits can be potentially packed densely onto a chip. And that is not all: the values of bits in antiferromagnetic memory devices are generally unaffected by the presence of external magnetic fields. However, this insensitivity can be a disadvantage because it makes the bits difficult to control.

Faster than ferromagnets

In the new work, a team led by Shunsuke Fukami of Tohoku University made a nanoscale dot device from the chiral antiferromagnet Mn3Sn. They were able to rapidly and coherently rotate the antiparallel spins in the material using electric currents with a pulse length of just 0.1 ns at zero magnetic field. This is faster than is possible in any existing ferromagnetic device, they say.

The device is also capable of 1000 error-free switching cycles – a level of reliability not possible in ferromagnets, they add.

This result is possible because, unlike conventional antiferromagnets, Mn3Sn exhibits a large change in electrical resistance thanks to the unique symmetry of its internal spin texture, explains Yutaro Takeuchi, who is lead author of a paper describing the study. “This effect provides us with an easy method for electrically detecting (reading out) the antiferromagnetic state. Doing this is usually difficult because antiferromagnets are externally ‘invisible’ (remember, they have zero net magnetization), which means their spin ordering cannot be easily read out.”

Until now, Mn3Sn had mainly been studied in bulk samples, but in 2019, Fukami’s group succeeded in growing epitaxial thin films of the material. “This allowed us to perform clear-cut experiments using antiferromagnetic thin films and finally answer the question: can antiferromagnets really outperform their ferromagnetic cousins?” says Takeuchi. “Moreover, in this study, we took on the additional challenge of integrating antiferromagnetic thin films into nanoscale devices.”

New types of devices could be possible

Fukami and colleagues have been working on spintronics using ferromagnets for more than 20 years. “Although the fabrication of antiferromagnets was initially difficult, we finally managed to produce high-quality Mn3Sn nanodot devices and demonstrated high-speed and high-efficiency control of the antiferromagnetic state,” Takeuchi tells Physics World. “We would say that our work represents a fusion of our two key strengths: a new method for depositing antiferromagnetic thin films and our conventional core technology in the nanofabrication of magnetic materials.”

As for potential applications, the most likely would be a high-performance non-volatile memory (MRAM), he says. “While MRAM technology is now commercially available, its applications remain limited. By further improving its speed and lowering its power consumption, we anticipate a broader range of markets, including data centres and AI chips.”

The research, which is detailed in Science, has also highlighted some dynamical aspects of antiferromagnets not seen before in ferromagnets. “In particular, we found that the rotation frequency of an antiferromagnet can be modulated by an applied current, thanks to the unique dynamical equation it obeys,” explains Yuta Yamane, who did the theoretical modelling part of the study. “This distinct property may open the door to new types of devices, such as frequency-tuneable oscillators, and emerging concepts like probabilistic computing.”

Looking ahead, the team will now focus on improving the readout performance of antiferromagnets and pursuing new functionalities. “Thanks to their unique transport properties, chiral antiferromagnets allow us to detect spin ordering in experimental settings, but the readout performance has still not reached the level of ferromagnets,” says Takeuchi. “A breakthrough will be required to overcome this gap.”

The post Antiferromagnets could be better than ferromagnets for some ultrafast, high-density memories appeared first on Physics World.

  •  

Schwinger effect appears in a 2D superfluid

Vortices in a film
Accessible system What the vortices look like in a film. (Courtesy: P Stamp)

Vacuum tunnelling – an exotic process by which pairs of particles and antiparticles are pulled out of empty space when an extremely strong electric field is applied to it – has never been observed in an experiment. This is because the field required to produce this “Schwinger effect” in the laboratory is simply too high and is usually only generated during intense astrophysical events. Theoretical physicists at the University of British Columbia (UBC) in Canada are now saying that an analogous effect could occur in a much simpler, tabletop system. In their model, a film of superfluid helium can be substituted for the vacuum, and the superfluid flow of this helium for the enormous field.

The physicist Julian Schwinger was the first to put forward the effect that now bears his name. In 1951, he hypothesised that applying a uniform electric field to a vacuum, which is theoretically devoid of matter, would cause electron–positron pairs to spring into existence there. The problem is that this field needs to be, literally, astronomically high – on the order of 10¹⁸ V/m.
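
That figure comes from Schwinger’s critical field, E_S = m_e^2 c^3/(e\hbar). The following back-of-the-envelope check uses only standard constants and is a sketch for context, not part of the UBC work:

```python
# Estimate of the Schwinger critical field E_S = m_e^2 c^3 / (e * hbar),
# the electric field at which electron-positron pair production becomes significant.
m_e  = 9.109e-31   # electron mass (kg)
c    = 2.998e8     # speed of light (m/s)
e    = 1.602e-19   # elementary charge (C)
hbar = 1.055e-34   # reduced Planck constant (J s)

E_S = m_e**2 * c**3 / (e * hbar)
print(f"Schwinger critical field: {E_S:.2e} V/m")  # roughly 1.3e18 V/m
```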

Pair production can also occur in superfluid helium-4

A team led by Philip Stamp says that a similar type of spontaneous pair production can occur in superfluid helium-4 just a few atomic layers thick and cooled to very low temperatures. In this liquid, which behaves essentially like a perfect, frictionless quantum vacuum state, pairs of quantized vortices and anti-vortices (spinning in opposite directions to each other) should appear in the presence of strong fluid flow. This process would be analogous to the Schwinger mechanism of vacuum tunnelling.

“The helium-4 film provides a nice analogue to several cosmic phenomena, such as the vacuum in deep space, quantum black holes and even the early universe itself (phenomena we can’t ever approach in any direct experimental way),” says Stamp. “However, the real interest of this work may lie less in analogues (which may or may not accurately portray the ‘real thing’) and more in the way it alters our understanding of superfluids and of phase transitions in two-dimensional systems.”

“These are real physical systems in their own right, not just analogues. And we can do experiments on these.”

According to physicist Warwick Bowen of the University of Queensland in Australia, who was not involved in this study, the new work is “very interesting” and “exciting” because it describes a new mechanism to produce vortices. “This description might even tell us more about the microscopic origins of turbulence and represents a new kind of quantum phase transition,” he tells Physics World. “Importantly, the effect appears to be accessible with extensions to existing experimental techniques used to study thin superfluid helium films.”

Physicist Emil Varga of Prague’s Charles University in the Czech Republic, who was not involved in this study either, adds: “The work seems quite rigorous and might help clean up some outstanding discrepancies between theory and experiment. And the possible analogy with the Schwinger effect is, as far as I can tell, new and quite interesting and fits well into the emerging field of using superfluid helium-4 as a model system for high-energy and/or astrophysics.”

Stamp and colleagues say they would now like to better understand the vortex effective mass and look at analogues in full quantum gravity with no “semiclassical approximations”. They will also be focusing on how the effect they propose will lead to phenomena like quantum avalanches – which are different to quantum turbulence – and in particular, how it modifies the so-called “Kosterlitz-Thouless” picture of 2D transitions.

They report their present work in PNAS.

The post Schwinger effect appears in a 2D superfluid appeared first on Physics World.

  •  

Meniscus size and shape affect how liquid waves move through barriers

Even very small changes in the size and shape of a meniscus that forms between an object and the surface of a liquid can dramatically affect how much wave energy passes through this interface. The effect, seen for the very first time in an experiment, could come in useful for a host of practical applications that require fluid control, say the researchers at the University of Mississippi in the US who observed it.

When the upper surface of a liquid comes into contact with the wall of its container or with another object, the liquid surface at the point of contact curves up or down, depending on how well the liquid wets that surface. This well-known capillary feature, produced by surface tension, is known as the meniscus.

In the new study, a team led by Likun Zhang at the National Center of Physical Acoustics and the Department of Physics at the University of Mississippi wanted to find out how the size and the shape of the meniscus affects the way waves move across it. In their experiments, the researchers filled a tank measuring 106 cm × 6.8 cm × 11 cm with distilled water to a height of 9.2 cm. They then placed a thin acrylic sheet 6.8 cm wide on the surface of the water to create the meniscus. Next, they sent surface waves with a frequency of about 15 Hz through the set-up using a paddle wavemaker and measured the ripples on the surface that resulted.

Precise adjustments

By varying the frequency of the surface waves as well as the height and surface properties of the acrylic barrier (which was coated to make it either hydrophobic or hydrophilic), they were able to adjust the meniscus very precisely – in steps of just 0.1 mm.

The researchers found that a slightly curved meniscus allows more wave energy to pass through the barrier. Conversely, if the meniscus curves more steeply, it reduces the energy transported by the fluid.

“This is a counterintuitive result – we expect a barrier to block waves,” explains Zhang. Instead, the researchers observed that certain meniscus shapes can allow waves to pass through more easily. “Indeed, an adjustment of just a few millimetres can change the wave transmission by up to 60%, either going up or down depending on the meniscus shape,” he tells Physics World. “This is exciting because it’s the first time this effect has been observed in an experiment.”

The discovery could open up new ways to control fluids more precisely – just by adjusting the meniscus, he adds. “This could be useful in open fluid channels, where liquids flow with a free surface exposed to air instead of being in a closed pipe. Such channels are common in nature and are also important in engineered systems, for example, in microfluidic devices, thermal control, and even technologies employed in space.”

The researchers, who report their work in Physical Review Letters, say they now plan to develop theoretical models to better explain the effect they have observed. “For example, why do waves transmit less when the meniscus height is tall, but more when it is short?” ponders Zhang. “In the longer term, our goal is to exploit this knowledge to design better ways of controlling fluids for practical applications.”

The post Meniscus size and shape affect how liquid waves move through barriers appeared first on Physics World.

  •  

Compact diamond magnetometer detects metastatic tumours

Researchers at the University of Warwick in the UK have created an ultrasensitive magnetometer based on nitrogen-vacancy centres in diamond that’s small enough to be used for keyhole surgery. The sensor, which currently measures just 1 cm in diameter and could be made even smaller in the future, is designed to detect small cancer metastases via endoscopy or laparoscopy.

“It’s really bad news when tumour cells spread from their original site, and so it’s very important to detect this metastatic cancer as soon as possible,” says physicist Gavin Morley, who led this research effort together with his doctoral student Alex Newman. “The new cancers are often lodged in the lymph nodes and our device could be used to detect these cancers early when they are still small.”

Existing techniques to detect metastatic tumours include MRI and CT, but these technologies can only detect tumours that are at least 2 mm across. While alternatives like sentinel lymph node biopsy can detect tumours with a volume that is 1000 times smaller, this technique typically involves the use of radioactive tracer fluids that require special safety precautions, or blue dyes, which cause an allergic reaction in one in a hundred people.

Tracer travels to the lymph nodes

Medical device company Endomag recently developed a clinical technique that involves the surgeon injecting a magnetic tracer into a breast cancer tumour, explains Morley. “The tracer fluid travels to the lymph nodes and the surgeon can then identify the metastatic cells there and remove them.”

While this approach is efficient for breast cancer, the magnetometers employed today to detect the tracer are too large for use in keyhole surgery or endoscopy, he explains. “We wanted to create a device that can be used to detect the metastatic tumours and so built a version that’s smaller. The surgeons we’ve spoken to say that colorectal cancer could be the best place for us to focus on first for our magnetometer.”

NV magnetic sensor

Morley’s group has been working on magnetic field sensors using diamonds and lasers for ten years now. The diamonds are grown by the company Element Six in Oxford and they contain quantum defects known as nitrogen-vacancy (NV) centres. These are created when a pair of adjacent carbon atoms in the diamond lattice is replaced by a nitrogen atom, leaving one lattice site vacant. An NV centre is basically an isolated spin that is highly sensitive to an external magnetic field, and it emits fluorescent light in a way that depends on the intensity and direction of this field. Measuring this light allows the NV centre to be used as a magnetic sensor.

“Our speciality is using optical fibres to send laser light into the diamond and detect the red light that comes back,” says Morley.

In this work, reported in Physical Review Applied, it was Newman who built the new sensor, Morley tells Physics World. “Alex likes fixing old sports cars and I liked the way he applied that thinking to this new technology. He tries different strategies and has built new types of diamond sensors that no-one has managed to build before.”

The Warwick researchers are now working on a number of applications for their sensors: as well as use within healthcare, they could be employed in space applications and future fusion power plants, says Morley. “Indeed, for Alex’s project, we were working on detecting damage in steel to help the National Nuclear Laboratory who have nuclear waste stored in steel containers. I then met Stuart Robertson, who is a breast cancer surgeon at the University Hospitals Coventry and Warwickshire: he told me how useful the Endomag solution is for breast cancer metastatic cells and I thought we could build a magnetometer that would help.”

Working with several surgeons, Morley, Newman and colleagues are now developing this work as part of the UK Quantum Biomedical Sensing Research Hub (Q-BIOMED). “For example, Jamie Murphy in the Cleveland Clinic in London is an expert on keyhole surgery, with a big interest in colorectal cancer,” says Morley. “And Conor McCann is an expert on gut health at the UCL Great Ormond Street Institute of Child Health. We’re interested in spinning out a company ourselves to take this forward alongside other applications of our diamond sensors.”

The researchers are also busy making the sensor even smaller. “At the moment the probe is 1 cm across, but we think we can get it down to be only 3 mm,” says Morley. “While 1 cm is small enough for keyhole surgery and endoscopy, getting it even smaller would make it useful for even more types of surgeries.”

The post Compact diamond magnetometer detects metastatic tumours appeared first on Physics World.

  •  

Unconventional approach to dark energy problem gives observed neutrino masses

An unconventional approach to solving the dark energy problem called the cosmologically coupled black hole (CCBH) hypothesis appears to be compatible with the observed masses of neutrinos. This new finding from researchers working at the DESI collaboration suggests that black holes may represent little Big Bangs played in reverse and could be used as a laboratory to study the birth and infancy of our universe. The study also confirms that the strength of dark energy has increased along with the formation rate of stars.

The Dark Energy Spectroscopic Instrument (DESI) is located on the Nicholas U Mayall four-metre Telescope at Kitt Peak National Observatory in Arizona. Its raison d’être is to shed more light on the “dark universe” – the 95% of the mass and energy in the universe that we know very little about. Dark energy is a hypothetical entity invoked to explain why the rate of expansion of the universe is (mysteriously) increasing – something that was discovered at the end of the last century.

According to standard theories of cosmology, matter is thought to comprise cold dark matter (CDM) and normal matter (mostly baryons and neutrinos). DESI can observe fluctuations in the matter density of the universe known as baryonic acoustic oscillations (BAOs), which were created after the Big Bang in the hot plasma of baryons and electrons that prevailed then. BAOs expand with the growth of the universe and represent a sort of “standard ruler” that allows cosmologists to map the universe’s expansion by statistically analysing the distance that separates pairs of galaxies and quasars.

Largest 3D map

DESI has produced the largest such 3D map of the universe ever and it recently published the first set of BAO measurements determined from observations of over 14 million extragalactic targets going back 11 billion years in time.

In the new study, the DESI researchers combined measurements from these new data with cosmic microwave background (CMB) datasets (which measure the density of dark matter and baryons from a time when the universe was less than 400,000 years old) to search for evidence of matter converting into dark energy. They did this by focusing on a new hypothesis known as the cosmologically coupled black hole (CCBH), which was put forward five years ago by DESI team member Kevin Croker, who works at Arizona State University (ASU), and his colleague Duncan Farrah at the University of Hawaii. This physical model builds on a mathematical description of black holes as bubbles of dark energy in space that was introduced over 50 years ago. CCBH describes a scenario in which massive stars exhaust their nuclear fuel and collapse to produce black holes filled with dark energy that then grows as the universe expands. The rate of dark energy production is therefore determined by the rate at which stars form.
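
In the CCBH literature this coupling is usually parametrized by letting a black hole’s mass grow with the cosmological scale factor a, roughly as M(a) = M_i (a/a_i)^k. With k = 3, the growth of a fixed population of black holes offsets their dilution as space expands, so their combined energy density stays roughly constant and mimics a cosmological constant. (This parametrization is quoted from earlier CCBH papers for context; it is not a result of the new DESI analysis.)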

Neutrino contribution

Previous analyses by DESI scientists suggested that there is less matter in the universe today compared to when it was much younger. When they then added the additional, known, matter source from neutrinos, there appeared to be no “room” and the masses of these particles therefore appeared negative in their calculations. Not only is this unphysical, explains team member Rogier Windhorst of the ASU’s School of Earth and Space Exploration, it also goes against experimental measurements made so far on neutrinos that give them a greater-than-zero mass.

When the researchers re-interpreted the new set of data with the CCBH model, they were able to resolve this issue. Since stars are made of baryons and black holes convert exhausted matter from stars into dark energy, the number of baryons today has decreased in comparison to the CMB measurements. This means that neutrinos can indeed contribute to the universe’s mass, slowing down the expansion of the universe as the dark energy produced sped it up.

“The new data are the most precise measurements of the rate of expansion of the universe going back more than 10 billion years,” says team member Gregory Tarlé at the University of Michigan, “and it results from the hard work of the entire DESI collaboration over more than a decade. We undertook this new study to confront the CCBH hypothesis with these data.”

Black holes as a laboratory

“We found that the standard assumptions currently employed for cosmological analyses simply did not work, and we had to carefully revisit and rewrite large amounts of cosmological computer code,” adds Croker.

“If dark energy is being sourced by black holes, these structures may be used as a laboratory to study the birth and infancy of our own universe,” he tells Physics World. “The formation of black holes may represent little Big Bangs played in reverse, and to make a biological analogy, they may be the ‘offspring’ of our universe.”

The researchers say they studied the CCBH scenario in its simplest form in this work, and found that it performs very well. “The next big observational test will involve a new layer of complexity, where consistency with the large-scale features of the Big Bang relic radiation, or CMB, and the statistical properties of the distribution of galaxies in space will make or break the model,” says Tarlé.

The research is described in Physical Review Letters.

The post Unconventional approach to dark energy problem gives observed neutrino masses appeared first on Physics World.

  •  

Quantum gas keeps its cool

Adding energy to a system usually heats it up, but physicists at the University of Innsbruck in Austria have now discovered a scenario in which this is not the case. Their new platform – a one-dimensional fluid of strongly interacting atoms cooled to just a few nanokelvin above absolute zero and periodically “kicked” using an external force – could be used to study how objects transition from being quantum and ordered to classical and chaotic.

Our everyday world is chaotic, and chaos plays a crucial and often useful role in many areas of science – from nonlinear complex systems in mathematics, physics and biology to ecology, meteorology and economics. How a chaotic system evolves depends sensitively on its initial conditions, which makes its long-term behaviour effectively impossible to predict.

While we know how chaos emerges in classical systems, how it does so in quantum systems – and how a quantum system crosses over into classical, chaotic behaviour – is still little understood.

The quantum kicked rotor

Researchers have traditionally studied chaotic behaviour in driven systems – that is, rotating objects periodically kicked by an external force. The quantum version of these is the quantum kicked rotor (QKR). Here, quantum coherence effects can prevent the system from absorbing external energy, meaning that, in contrast to its classical counterpart, it doesn’t heat up – even if a lot of energy is applied. This “dynamical localization” effect has already been seen in dilute ultracold atomic gases.
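
The textbook QKR can be simulated in a few lines, and doing so makes dynamical localization easy to see: the kinetic energy stops growing after a modest number of kicks. The sketch below is a generic single-particle illustration with arbitrarily chosen parameters, not the team’s many-body experiment:

```python
import numpy as np

# Generic single-particle quantum kicked rotor (textbook model, not the team's
# many-body system). Dynamical localization appears as a kinetic energy that
# saturates instead of growing linearly with the number of kicks.

M = 2048                                    # angle-grid points / momentum states
theta = 2*np.pi*np.arange(M)/M              # angle grid on [0, 2*pi)
m = np.fft.fftfreq(M, d=1.0/M)              # integer angular-momentum grid in FFT ordering
hbar_eff = 2.085                            # effective Planck constant (away from quantum resonances)
K = 5.0                                     # kick strength in the classically chaotic regime

psi = np.ones(M, dtype=complex)/np.sqrt(M)  # start in the zero-momentum state (uniform in angle)
kick = np.exp(-1j*(K/hbar_eff)*np.cos(theta))  # kick operator, diagonal in angle
free = np.exp(-1j*hbar_eff*m**2/2)             # free rotation between kicks, diagonal in momentum

energy = []
for _ in range(500):
    psi = np.fft.ifft(free*np.fft.fft(kick*psi))  # one period: kick, then free evolution
    phi = np.fft.fft(psi)                         # momentum-space amplitudes
    p2 = np.sum(m**2*np.abs(phi)**2)/np.sum(np.abs(phi)**2)
    energy.append(0.5*hbar_eff**2*p2)

print(f"kinetic energy after  10 kicks: {energy[9]:.2f}")
print(f"kinetic energy after 500 kicks: {energy[499]:.2f}")  # similar values: no heating
```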

The QKR is a highly idealized single-particle model system, explains study lead Hanns-Christoph Nägerl. However, real-world systems contain many particles that interact with each other – something that can destroy dynamical localization. Recent theoretical work has suggested that this localization may persist in some types of interacting, even strongly interacting, many-body quantum systems – for example, in 1D bosonic gases.

In the new work, Nägerl and colleagues made a QKR by subjecting samples of ultracold caesium (Cs) atoms to periodic kicks by means of a “flashed-on lattice potential”. They did this by loading a Bose–Einstein condensate of these atoms into an array of narrow 1D tubes created by a 2D optical lattice formed by laser beams propagating in the xy plane at right angles to each other. They then increased the power of the beams to heat up the Cs atoms.

Many-body dynamical localization

The researchers expected the atoms to collectively absorb energy over the course of the experiment. Instead, when they recorded how their momentum distribution evolved, they found that it actually stopped spreading and that the system’s energy reached a plateau. “Despite being continually kicked and strongly interacting, it no longer absorbed energy,” says Nägerl. “We say that it had localized in momentum space – a phenomenon known as many-body dynamical localization (MBDL).”

In this state, quantum coherence and many-body interactions prevent the system from heating up, he adds. “The momentum distribution essentially freezes and retains whatever structure it has.”

Nägerl and colleagues repeated the experiment by varying the interaction between the atoms – from zero (non-interacting) to strongly interacting. They found that the system always localizes.

Quantum coherence is crucial for preventing thermalization

“We had already found localization for our interacting QKR in earlier work and set out to reproduce these results in this new study,” Nägerl tells Physics World. “We had not previously realised the significance of our findings and thought that perhaps we were doing something wrong, which turned out not to be the case.”

The MBDL is fragile, however – something the researchers proved by introducing randomness into the laser pulses. A small amount of disorder is enough to destroy the localization effect and restore diffusion, explains Nägerl: the momentum distribution smears out and the kinetic energy of the system rises sharply, meaning that it is absorbing energy.

“This test highlights that quantum coherence is crucial for preventing thermalization in such driven many-body systems,” he says.

Simulating such a system on classical computers is only possible for two or three particles, but the one studied in this work, reported in Science, contains 20 or more. “Our new experiments now provide precious data to which we can compare the QKR model system, which is a paradigmatic one in quantum physics,” adds Nägerl.

Looking ahead, the researchers say they would now like to find out how stable MBDL is to various external perturbations. “In our present work, we report on MBDL in 1D, but would it happen in a 2D or a 3D system?” asks Nägerl. “I would like to do an experiment in which we have a 1D + 1D situation, that is, where the 1D is allowed to communicate with just one neighbouring 1D system (via tunnelling; by lowering the barrier to this system in a controlled way).”

Another way of perturbing the system would be to add a local defect – for example a bump in the potential of a different atom, he says. “Generally speaking, we would like to measure the ‘phase diagram’ for MBDL, where the axes of the graph would quantify the strength of the various perturbations we apply.”

The post Quantum gas keeps its cool appeared first on Physics World.

  •  

Protein qubit can be used as a quantum biosensor

A new optically addressable quantum bit (qubit) encoded in a fluorescent protein could be used as a sensor that can be directly produced inside living cells. The device opens up a new era for fluorescence microscopy to monitor biological processes, say the researchers at the University of Chicago Pritzker School of Molecular Engineering who designed the novel qubit.

Quantum technologies use qubits to store and process information. Unlike classical bits, which can exist in only two states, qubits can exist in a superposition of both these states. This means that computers employing these qubits can simultaneously process multiple streams of information, allowing them to solve problems that would take classical computers years to process.

Qubits can be manipulated and measured with high precision, and in quantum sensing applications they act as nanoscale probes whose quantum state can be initialized, coherently controlled and read out. This allows them to detect minute changes in their environment with exquisite sensitivity.

Optically addressable qubit sensors – that is, those that are read out using light pulses from a laser or other light source – are able to measure nanoscale magnetic fields, electric fields and temperature. Such devices are now routinely employed by researchers working in the physical sciences. However, their use in the life sciences is lagging behind, with most applications still at the proof-of-concept stage.

Difficult to position inside living cells

Many of today’s quantum sensors are based on nitrogen-vacancy (NV) centres, which are crystallographic defects in diamond. These centres occur when two neighbouring carbon atoms in diamond are replaced by a nitrogen atom and an empty lattice site and they act like tiny quantum magnets with different spins. When excited with laser pulses, the fluorescent signal that they emit can be used to monitor slight changes in the magnetic properties of a nearby sample of material. This is because the intensity of the emitted NV centre signal changes with the local magnetic field.
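
In its simplest textbook description, the field is read out from the positions of the two NV spin resonances, which are Zeeman-shifted away from the centre’s zero-field splitting of D ≈ 2.87 GHz:

f_\pm \approx D \pm \gamma_{\mathrm{NV}} B_\parallel, \qquad \gamma_{\mathrm{NV}} \approx 28\ \mathrm{GHz\,T^{-1}},

where B_\parallel is the field component along the NV axis. Locating these resonances in the fluorescence spectrum therefore gives the local field directly. (This is the generic NV-sensing relation, included for context; it is not a formula specific to the Warwick device.)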

“The problem is that such sensors are difficult to position at well-defined sites inside living cells,” explains Peter Maurer, who co-led this new study together with David Awschalom. “And the fact that they are typically ten times larger than most proteins further restricts their applicability,” he adds.

“So, rather than taking a conventional quantum sensor and trying to camouflage it to enter a biological system, we therefore wanted to explore the idea of using a biological system itself and developing it into a qubit,” says Awschalom.

Fluorescent proteins, which are just 3 nm in diameter, could come into their own here as they can be genetically encoded, allowing cells to produce these sensors directly at the desired location with atomic precision. Indeed, fluorescent proteins have become the “gold standard” in cell biology thanks to this unique ability, says Maurer. And decades of biochemistry research has allowed researchers to generate a vast library of such fluorescent proteins that can be tagged to thousands of different types of biological targets.

“We recognized that these proteins possess optical and spin properties that are strikingly similar to those of qubits formed by crystallographic defects in diamond – namely that they have a metastable triplet state,” explain Awschalom and Maurer. “Building on this insight, we combined techniques from fluorescence microscopy with methods of quantum control to encode and manipulate protein-based qubits.”

In their work, which is detailed in Nature, the researchers used a near-infrared laser pulse to optically address a yellow fluorescent protein known as EYFP and read out its triplet spin state with up to 20% “spin contrast” – measured using optically detected magnetic resonance (ODMR) spectroscopy.

To test the technique, the team genetically modified the protein so that it was expressed in human embryonic kidney cells and Escherichia coli (E. coli) cells. The measured ODMR signals exhibited a contrast of up to 8%. While this performance is not as good as that of NV quantum sensors, the fluorescent proteins open the door to magnetic resonance measurements directly inside living cells – something that NV centres cannot do, says Maurer. “They could thus transform medical and biochemical studies by probing protein folding, monitoring redox states or detecting drug binding at the molecular scale,” he tells Physics World.

“A new dimension for fluorescence microscopy”

Beyond sensing, the unique quantum resonance “signatures” offer a new dimension for fluorescence microscopy, paving the way for highly multiplexed imaging far beyond today’s colour palette, Awschalom adds. Looking further ahead, using arrays of such protein qubits could even allow researchers to explore many-body quantum effects within biologically assembled structures.

Maurer, Awschalom and colleagues say they are now busy trying to improve the stability and sensitivity of their protein-based qubits through protein engineering via “directed evolution” – similar to the way that fluorescent proteins were optimized for microscopy.

“Another goal is to achieve single-molecule detection, enabling readout of the quantum state of individual protein qubits inside cells,” they reveal. “We also aim to expand the palette of available qubits by exploring new fluorescent proteins with improved spin properties and to develop sensing protocols capable of detecting nuclear magnetic resonance signals from nearby biomolecules, potentially revealing structural changes and biochemical modifications at the nanoscale.”

The post Protein qubit can be used as a quantum biosensor appeared first on Physics World.

  •  

Quantum fluid instability produces eccentric skyrmions

Physicists at Osaka Metropolitan University in Japan and the Korea Advanced Institute of Science and Technology (KAIST) claim to have observed the quantum counterpart of the classic Kelvin-Helmholtz instability (KHI), which is the most basic instability in fluids. The effect, seen in a quantum gas of 7Li atoms, produces a new type of exotic vortex pattern called an eccentric fractional skyrmion. The finding not only advances our understanding of complex topological quantum systems, it could also help in the development of next-generation memory and storage devices.

Topological defects occur when a system rapidly transitions from a disordered to an ordered phase. These defects, which can occur in a wide range of condensed matter systems, from liquid crystals and atomic gases to the rapidly cooling early universe, can produce excitations such as solitons, vortices and skyrmions.

Skyrmions, first discovered in magnetic materials, are swirling vortex-like spin structures that extend across a few nanometres in a material. They can be likened to 2D knots in which the magnetic moments rotate through 360° within a plane.

Eccentric fractional skyrmions contain singularities

Skyrmions are topologically stable, which makes them robust to external perturbations, and are much smaller than the magnetic domains used to encode data in today’s disk drives. That makes them ideal building blocks for future data storage technologies such as “racetrack” memories. Eccentric fractional skyrmions (EFSs), which had only been predicted in theory until now, have a crescent-like shape and contain singularities – points at which the usual spin structure breaks down, creating sharp distortions as it becomes asymmetric.

“To me, the large crescent moon in the upper right corner of Van Gogh’s ‘The Starry Night’ also looks exactly like an EFS,” says Hiromitsu Takeuchi at Osaka, who co-led this new study with Jae-Yoon Choi of KAIST. “EFSs carry half the elementary charge, which means they do not fit into traditional classifications of topological defects.”

The KHI is a classic phenomenon in fluids in which waves and vortices form at the interface between two fluids moving at different speeds. “To observe the KHI in quantum systems, we need a structure containing a thin superfluid interface (a magnetic domain wall), such as in a quantum gas of 7Li atoms,” says Takeuchi. “We also need experimental techniques that can skilfully control the behaviour of this interface. Both of these criteria have recently been met by Choi’s group.”

The researchers began by cooling a gas of 7Li atoms to temperatures near absolute zero to create a multi-component Bose-Einstein condensate – a quantum superfluid containing two streams flowing at different speeds. At the interface of these streams, they observed vortices, which corresponded to the predicted EFSs.

The behaviour of the KHI is universal

“We have shown that the behaviour of the KHI is universal and exists in both the classical and quantum regimes,” says Takeuchi. This finding could not only lead to a better understanding of quantum turbulence and the unification of quantum and classical hydrodynamics, it could also help in the development of technologies such as next-generation storage and memory devices and spintronics, an emerging technology in which magnetic spin is used to store and transfer information using much less energy than existing electronic devices.

“By further refining the experiment, we might be able to verify certain predictions (some of which were made as long ago as the 19th century) about the wavelength and frequency of KHI-driven interface waves in non-viscous quantum fluids, like the one studied in this work,” he adds.

“In addition to the universal finger pattern we observed, we expect structures like zipper and sealskin patterns, which are unique to such multi-component quantum fluids,” Takeuchi tells Physics World. “As well as experiments, it is necessary to develop a theory that more precisely describes the motion of EFSs, the interaction between these skyrmions and their internal structure in the context of quantum hydrodynamics and spontaneous symmetry breaking.”

The study is detailed in Nature Physics.

The post Quantum fluid instability produces eccentric skyrmions appeared first on Physics World.

  •  

‘Breathing’ crystal reversibly releases oxygen

A new transition-metal oxide crystal that reversibly and repeatedly absorbs and releases oxygen could be ideal for use in fuel cells and as the active medium in clean energy technologies such as thermal transistors, smart windows and new types of batteries. The “breathing” crystal, discovered by scientists at Pusan National University in Korea and Hokkaido University in Japan, is made from strontium, cobalt and iron and contains oxygen vacancies.

Transition-metal oxides boast a huge range of electrical properties that can be tuned all the way from insulating to superconducting. This means they can find applications in areas as diverse as energy storage, catalysis and electronic devices.

Among the different material parameters that can be tuned are the oxygen vacancies. Indeed, ordering these vacancies can produce new structural phases that show much promise for oxygen-driven programmable devices.

Element-specific behaviours

In the new work, a team of researchers led by physicist Hyoungjeen Jeen of Pusan and materials scientist Hiromichi Ohta in Hokkaido studied SrFe0.5Co0.5Ox. The researchers focused on this material, they say, since it belongs to the family of topotactic oxides, which are the main oxides being studied today in solid-state ionics. “However, previous work had not discussed which ion in this compound was catalytically active,” explains Jeen. “What is more, the cobalt-containing topotactic oxides studied so far were fragile and easily fractured during chemical reactions.”

The team succeeded in creating a unique platform from a solid solution of epitaxial SrFe0.5Co0.5O2.5 in which the cobalt and iron ions sit in the same chemical environment. “In this way, we were able to test which ion was better for reduction reactions and whether or not it sustained its structural integrity,” Jeen tells Physics World. “We found that our material showed element-specific reduction behaviours and reversible redox reactions.”

The researchers made their material using a pulsed laser deposition technique, ideal for the epitaxial synthesis of multi-element oxides that allowed them to grow SrFe0.5Co0.5O2.5 crystals in which the iron and cobalt ions were randomly located in the crystal. This random arrangement was key to the material’s ability to repeatedly release and absorb oxygen, they say.

“It’s like giving the crystal ‘lungs’ so that it can inhale and exhale oxygen on command,” says Jeen.

Stable and repeatable

This simple breathing picture comes from the difference in the catalytic activity of cobalt and iron in the compound, he explains. Cobalt ions prefer to lose and gain oxygen and these ions are the main sites for the redox activity. However, since iron ions prefer not to lose oxygen during the reduction reaction, they serve as pillars in this architecture. This allows for stable and repeatable oxygen release and uptake.

Until now, most materials that absorb and release oxygen in such a controlled fashion were either too fragile or only functioned at extremely high temperatures. The new material works under more ambient conditions and is stable. “This finding is striking in two ways: only cobalt ions are reduced, and the process leads to the formation of an entirely new and stable crystal structure,” explains Jeen.

The researchers also showed that the material could return to its original form when oxygen was reintroduced, so proving that the process is fully reversible. “This is a major step towards the realization of smart materials that can adjust themselves in real time,” says Ohta. “The potential applications include developing a cathode for intermediate solid oxide fuel cells, an active medium for thermal transistors (devices that can direct heat like electrical switches), smart windows that adjust their heat flow depending on the weather and even new types of batteries.”

Looking ahead, Jeen, Ohta and colleagues aim to investigate the material’s potential for practical applications.

They report their present work in Nature Communications.

The post ‘Breathing’ crystal reversibly releases oxygen appeared first on Physics World.

  •  

Zero-point motion of atoms measured directly for the first time

Physicists in Germany say they have measured the correlated behaviour of atoms in molecules prepared in their lowest quantum energy state for the first time. Using a technique known as Coulomb explosion imaging, they showed that the atoms do not simply vibrate individually. Instead, they move in a coupled fashion that displays fixed patterns.

According to classical physics, molecules with no thermal energy – for example, those held at absolute zero – should not move. However, according to quantum theory, the atoms making up these molecules are never completely “frozen”, so they should exhibit some motion even at this chilly temperature. This motion comes from the atoms’ zero-point energy, which is the minimum energy allowed by quantum mechanics for atoms in their ground state at absolute zero. It is therefore known as zero-point motion.
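
For a single vibrational mode of angular frequency \omega, modelled as a quantum harmonic oscillator, this residual energy is the familiar textbook value

E_0 = \tfrac{1}{2}\hbar\omega,

which is why the associated motion never vanishes, even at absolute zero.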

Reconstructing the molecule’s original structure

To study this motion, a team led by Till Jahnke from the Institute for Nuclear Physics at Goethe University Frankfurt and the Max Planck Institute for Nuclear Physics in Heidelberg used the European XFEL in Hamburg to bombard their sample – an iodopyridine molecule consisting of 11 atoms – with ultrashort, high-intensity X-ray pulses. These high-intensity pulses violently eject electrons out of the iodopyridine, causing its constituent atoms to become positively charged (and thus to repel each other) so rapidly that the molecule essentially explodes.

To image the molecular fragments generated by the explosion, the researchers used a customized version of a COLTRIMS reaction microscope. This approach allowed them to reconstruct the molecule’s original structure.

From this reconstruction, the researchers were able to show that the atoms do not simply vibrate individually, but that they do so in correlated, coordinated patterns. “This is known, of course, from quantum chemistry, but it had so far not been measured in a molecule consisting of so many atoms,” Jahnke explains.

Data challenges

One of the biggest challenges Jahnke and colleagues faced was interpreting what the microscope data was telling them. “The dataset we obtained is super-rich in information and we had already recorded it in 2019 when we began our project,” he says. “It took us more than two years to understand that we were seeing something as subtle (and fundamental) as ground-state fluctuations.”

Since the technique provides detailed information that is hidden to other imaging approaches, such as crystallography, the researchers are now using it to perform further time-resolved studies – for example, of photochemical reactions. Indeed, they performed and published the first measurements of this type at the beginning of 2025, while the current study (which is published in Science) was undergoing peer review.

“We have pushed the boundaries of the current state-of-the-art of this measurement approach,” Jahnke tells Physics World, “and it is nice to have seen a fundamental process directly at work.”

For theoretical condensed matter physicist Asaad Sakhel at Balqa Applied University, Jordan, who was not involved in this study, the new work is “an outstanding achievement”. “Being able to actually ‘see’ zero-point motion allows us to delve deeper into the mysteries of quantum mechanics in our quest to a further understanding of its foundations,” he says.

The post Zero-point motion of atoms measured directly for the first time appeared first on Physics World.

  •  

Quantum sensors reveal ‘smoking gun’ of superconductivity in pressurized bilayer nickelates

Physicists at the Chinese Academy of Sciences (CAS) have used diamond-based quantum sensors to uncover what they say is the first unambiguous experimental evidence for the Meissner effect – a hallmark of superconductivity – in bilayer nickelate materials at high pressures. The discovery could spur the development of highly sensitive quantum detectors that can be operated under high-pressure conditions.

Superconductors are materials that conduct electricity without resistance when cooled to below a certain critical transition temperature Tc. Apart from a sharp drop in electrical resistance, another important sign that a material has crossed this threshold is the appearance of the Meissner effect, in which the material expels a magnetic field from its interior (diamagnetism). This expulsion creates such a strong repulsive force that a magnet placed atop the superconducting material will levitate above it.

In “conventional” superconductors such as solid mercury, the Tc is so low that the materials must be cooled with liquid helium to keep them in the superconducting state. In the late 1980s, however, physicists discovered a new class of superconductors that have a Tc above the boiling point of liquid nitrogen (77 K). These “unconventional” or high-temperature superconductors are derived not from metals but from insulators containing copper oxides (cuprates).

Since then, the search has been on for materials that superconduct at still higher temperatures, and perhaps even at room temperature. Discovering such materials would have massive implications for technologies ranging from magnetic resonance imaging machines to electricity transmission lines.

Enter nickel oxides

In 2019 researchers at Stanford University in the US identified nickel oxides (nickelates) as a new family of unconventional superconductors. This created a flurry of interest in the superconductivity community because these materials appear to superconduct in a way that differs from their copper-oxide cousins.

Among the nickelates studied, La3Ni2O7-δ (where δ can range from 0 to 0.04) is considered particularly promising because in 2023, researchers led by Meng Wang of China’s Sun Yat-Sen University spotted certain signatures of superconductivity at a temperature of around 80 K. However, these signatures only appeared when crystals of the material were placed in a device called a diamond anvil cell (DAC). This device can subject samples of material to extreme pressures of up to around 400 GPa (4 × 10⁶ atmospheres) by squeezing them between the flattened tips of two tiny, gem-grade diamond crystals.

The problem, explains Xiaohui Yu of the CAS’ Institute of Physics, is that it is not easy to spot the Meissner effect under such high pressures. This is because the structure of the DAC limits the available sample volume and hinders the use of highly sensitive magnetic measurement techniques such as SQUID. Another problem is that the sample used in the 2023 study contains several competing phases that could mix and degrade the signal of the La3Ni2O7-δ.

Nitrogen-vacancy centres embedded as in-situ quantum sensors

In the new work, Yu and colleagues used nitrogen-vacancy (NV) centres embedded in the DAC as in-situ quantum sensors to track and image the Meissner effect in pressurized bilayer La3Ni2O7-δ. This newly developed magnetic sensing technique boasts both high sensitivity and high spatial resolution, Yu says. What is more, it fits perfectly into the DAC high-pressure chamber.

Next, they applied a small external magnetic field of around 120 G. Under these conditions, they measured the optically detected magnetic resonance (ODMR) spectra of the NV centres point by point. They could then extract the local magnetic field from the resonance frequencies of these spectra. “We directly mapped the Meissner effect of the bilayer nickelate samples,” Yu says, noting that the team’s image of the magnetic field clearly shows both a diamagnetic region and a region where magnetic flux is concentrated.
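
As a rough illustration of that last step (the real analysis fits full ODMR spectra, and the numbers below are invented), the field along an NV axis follows from the splitting of the two resonances and the NV gyromagnetic ratio of about 2.8 MHz per gauss:

```python
# Illustrative extraction of a local magnetic field from a pair of NV resonance
# frequencies; the numbers are hypothetical, not data from the study.
D_ZFS = 2870.0  # NV zero-field splitting (MHz)
GAMMA = 2.8     # NV gyromagnetic ratio (MHz per gauss)

def field_from_odmr(f_minus_mhz, f_plus_mhz):
    """Field (in gauss) along the NV axis from the two ODMR resonance frequencies."""
    return (f_plus_mhz - f_minus_mhz) / (2.0 * GAMMA)

# Example: resonances split symmetrically about D_ZFS by an applied field of 120 G
B_applied = 120.0
f_minus = D_ZFS - GAMMA * B_applied  # lower resonance (MHz)
f_plus  = D_ZFS + GAMMA * B_applied  # upper resonance (MHz)
print(field_from_odmr(f_minus, f_plus))  # recovers 120.0
```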

Weak demagnetization signal

The researchers began their project in late 2023, shortly after receiving single-crystal samples of La3Ni2O7-δ from Wang. “However, after two months of collecting data, we still had no meaningful results,” Yu recalls. “From these experiments, we learnt that the demagnetization signal in La3Ni2O7-δ crystals was quite weak and that we needed to improve either the nickelate sample or the sensitivity of the quantum sensor.”

To overcome these problems, they switched to using polycrystalline samples, enhancing the quality of the nickelate samples by doping them with praseodymium to make La2PrNi2O7. This produced a sample with an almost pure bilayer structure and thus a much stronger demagnetization signal. They also used shallow NV centres implanted on the DAC culet (the small flattened face at the tip of the diamond anvil).

“Unlike the NV centres in the original experiments, which were randomly distributed in the pressure-transmitting medium and have relatively large ODMR widths, leading to only moderate sensitivity in the measurements, these shallow centres are evenly distributed and well aligned, making it easier for us to perform magnetic imaging with increased sensitivity,” Yu explains.

These improvements enabled the team to obtain a demagnetization signal from the La2PrNi2O7 and La3Ni2O7-δ samples, he tells Physics World. “We found that the diamagnetic signal from the La2PrNi2O7 samples is about five times stronger than that from the La3Ni2O7-δ ones prepared under similar conditions – a result that is consistent with the fact that the Pr-doped samples are of a better quality.”

Physicist Jun Zhao of Fudan University, China, who was not involved in this work, says that Yu and colleagues’ measurement represents “an important step forward” in nickelate research. “Such measurements are technically very challenging, and their success demonstrates both experimental ingenuity and scientific significance,” he says. “More broadly, their result strengthens the case for pressurized nickelates as a new platform to study high-temperature superconductivity beyond the cuprates. It will certainly stimulate further efforts to unravel the microscopic pairing mechanism.”

As well as allowing for the precise sensing of magnetic fields, NV centres can also be used to accurately measure many other physical quantities that are difficult to measure under high pressure, such as strain and temperature distribution. Yu and colleagues say they are therefore looking to further expand the application of these structures for use as quantum sensors in high-pressure sensing.

They report their current work in National Science Review.

The post Quantum sensors reveal ‘smoking gun’ of superconductivity in pressurized bilayer nickelates appeared first on Physics World.

  •  

Desert dust helps freeze clouds in the northern hemisphere

Micron-sized dust particles in the atmosphere could trigger the formation of ice in certain types of clouds in the Northern Hemisphere. This is the finding of researchers in Switzerland and Germany, who used 35 years of satellite data to show that nanoscale defects on the surface of these aerosol particles are responsible for the effect. Their results, which agree with laboratory experiments on droplet freezing, could be used to improve climate models and to advance studies of cloud seeding for geoengineering.

In the study, which was led by environmental scientist Diego Villanueva of ETH Zürich, the researchers focused on clouds in the so-called mixed-phase regime, which form at temperatures between −39 °C and 0 °C and are commonly found in mid- and high latitudes, particularly over the North Atlantic, Siberia and Canada. These mixed-phase regime clouds (MPRCs) are often topped by a liquid or ice layer, and their makeup affects how much sunlight they reflect back into space and how much water they can release as rain or snow. Understanding them is therefore important for forecasting weather and making projections of future climate.

Researchers have known for a while that MPRCs are extremely sensitive to the presence of ice-nucleating particles in their environment. Such particles mainly come from mineral dust aerosols (such as K-feldspar, quartz, albite and plagioclase) that get swept up into the upper atmosphere from deserts. The Sahara Desert in northern Africa, for example, is a prime source of such dust in the Northern Hemisphere.

More dust leads to more ice clouds

Using 35 years of satellite data collected as part of the Cloud_cci project and MERRA-2 aerosol reanalyses, Villanueva and colleagues looked for correlations between dust levels and the formation of ice-topped clouds. They found that at temperatures between −15 °C and −30 °C, the more dust there was, the more frequent the ice clouds were. What is more, their calculated increase in ice-topped clouds with increasing dust loading agrees well with previous laboratory experiments that predicted how dust triggers droplet freezing.

The new study, which is detailed in Science, shows that there is a connection between aerosols in the micrometre-size range and cloud ice observed over distances of several kilometres, Villanueva says. “We found that it is the nanoscale defects on the surface of dust aerosols that trigger ice clouds, so the process of ice glaciation spans more than 15 orders of magnitude in length,” he explains.

Thanks to this finding, Villanueva tells Physics World that climate modellers can use the team’s dataset to better constrain aerosol–cloud processes, potentially helping them to construct better estimates of cloud feedback and global temperature projections.

The result also shows how sensitive clouds are to varying aerosol concentrations, he adds. “This could help bring forward the field of cloud seeding and include this in climate geoengineering efforts.”

The researchers say they have successfully replicated their results using a climate model and are now drafting a new manuscript to further explore the implications of dust-driven cloud glaciation for climate, especially for the Arctic.

The post Desert dust helps freeze clouds in the northern hemisphere appeared first on Physics World.

  •