
Quantum-scale thermodynamics offers a tighter definition of entropy

A new, microscopic formulation of the second law of thermodynamics for coherently driven quantum systems has been proposed by researchers in Switzerland and Germany. The researchers applied their formulation to several canonical quantum systems, such as a three-level maser. They believe the result provides a tighter definition of entropy in such systems, and could form a basis for further exploration.

In any physical process, the first law of thermodynamics says that the total energy must always be conserved, with some converted to useful work and the remainder dissipated as heat. The second law of thermodynamics says that, in any allowed process, the total entropy – a measure of how much energy has been irreversibly degraded into heat – can never decrease.
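Stated compactly – in the standard textbook form, rather than the new microscopic formulation discussed below – the two laws read:

```latex
% First law: the change in a system's internal energy U splits into
% work done on it and heat absorbed by it
\mathrm{d}U = \delta W + \delta Q
% Second law: the total entropy of system plus environment never decreases
\mathrm{d}S_{\mathrm{tot}} \geq 0
```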

“I like to think of work being mediated by degrees of freedom that we control and heat being mediated by degrees of freedom that we cannot control,” explains theoretical physicist Patrick Potts of the University of Basel in Switzerland. “In the macroscopic scenario, for example, work would be performed by some piston – we can move it.” The heat, meanwhile, goes into modes such as phonons generated by friction.

Murky at small scales

This distinction, however, becomes murky at small scales: “Once you go microscopic everything’s microscopic, so it becomes much more difficult to say ‘what is it that you control – where is the work mediated – and what is it that you cannot control?’,” says Potts.

Potts and colleagues in Basel and at RWTH Aachen University in Germany examined the case of optical cavities driven by laser light, systems that can do work: “If you think of a laser as being able to promote a system from a ground state to an excited state, that’s very important to what’s being done in quantum computers, for example,” says Potts. “If you rotate a qubit, you’re doing exactly that.”

The light interacts with the cavity and makes an arbitrary number of bounces before leaking out. This outgoing light is traditionally treated as heat in quantum simulations. However, it can still be partially coherent – if the cavity is empty, it can be just as coherent as the incoming light and can do just as much work.

In 2020, quantum optician Alexia Auffèves of Université Grenoble Alpes in France and colleagues noted that the coherent component of the light exiting a cavity could potentially do work. In the new study, the researchers embedded this in a consistent thermodynamic framework. They studied several examples and formulated physically consistent laws of thermodynamics.

In particular, they looked at the three-level maser, which is a canonical example of a quantum heat engine. However, it has generally been modelled semi-classically by assuming that the cavity contains a macroscopic electromagnetic field.

Work vanishes

“The old description will tell you that you put energy into this macroscopic field and that is work,” says Potts, “But once you describe the cavity quantum mechanically using the old framework then – poof! – the work is gone…Putting energy into the light field is no longer considered work, and whatever leaves the cavity is considered heat.”

The researchers’ new thermodynamic treatment allows them to treat the cavity quantum mechanically and to place a lower bound on the entropy of the radiation that emerges – that is, how much of it must go into uncontrolled degrees of freedom that can do no useful work, and how much can remain coherent.

The researchers are now applying their formalism to study thermodynamic uncertainty relations as an extension of the traditional second law of thermodynamics. “It’s actually a trade-off between three things – not just efficiency and power, but fluctuations also play a role,” says Potts. “So the more fluctuations you allow for, the higher you can get the efficiency and the power at the same time. These three things are very interesting to look at with this new formalism because these thermodynamic uncertainty relations hold for classical systems, but not for quantum systems.”

“This [work] fits very well into a question that has been heavily discussed for a long time in the quantum thermodynamics community, which is how to properly define work and how to properly define useful resources,” says quantum theorist Federico Cerisola of the UK’s University of Exeter. “In particular, they very convincingly argue that, in the particular family of experiments they’re describing, there are resources that have been ignored in the past when using more standard approaches that can still be used for something useful.”

Cerisola says that, in his view, the logical next step is to propose a system – ideally one that can be implemented experimentally – in which radiation that would traditionally have been considered waste actually does useful work.

The research is described in Physical Review Letters.  


Bring gravity back down to Earth: from giraffes and tree snakes to ‘squishy’ space–time

When I was five years old, my family moved into a 1930s semi-detached house with a long strip of garden. At the end of the garden was a miniature orchard of eight apple trees the previous owners had planted – and it was there that I, much like another significantly more famous physicist, learned an important lesson about gravity.

As I read in the shade of the trees, an apple would sometimes fall with a satisfying thunk into the soft grass beside me. Less satisfyingly, they sometimes landed on my legs, or even my head – and the big cooking apples really hurt. I soon took to sitting on old wooden pallets crudely wedged among the higher branches. It was not comfortable, but at least I could return indoors without bruises.

The effects of gravity become common sense so early in life that we rarely stop to think about them past childhood. In his new book Crush: Close Encounters with Gravity, James Riordon has decided to take us back to the basics of this most fundamental of forces. Indeed, he explores an impressively wide range of topics – from why we dream of falling and why giraffes should not exist (but do), to how black holes form and the existence of “Planet 9”.

Riordon, a physicist turned science writer, makes for a deeply engaging author. He is not afraid to put himself into the story, introducing difficult concepts through personal experience and explaining them with the help of everything including the kitchen sink, which in his hands becomes an analogue for a black hole.

Gravity as a subject can easily be both too familiar and too challenging. In Riordon’s words, “Things with mass attract each other. That’s really all there is to Newtonian gravity.” Albert Einstein’s theory of general relativity, by contrast, is so intricate that it takes years of university-level study to truly master. Riordon avoids both pitfalls: he manages to make the simple fascinating again, and the complex understandable.

He provides captivating insights into how gravity has shaped the animal kingdom, a perspective I had never much considered. Did you know that tree snakes have their hearts positioned closer to their heads than their land-based cousins? I certainly didn’t. The higher placement ensures a steady blood flow to the brain, even when the snake is climbing vertically. It is one of many examples that make you look again at the natural world with fresh eyes.

Riordon’s treatment of gravity in Einstein’s abstract space–time is equally impressive, perhaps unsurprisingly, as his previous books include Very Easy Relativity and Relatively Easy Relativity. Riordon takes a careful, patient approach – though I have never before heard general relativity reduced to “space–time is squishy”. But why not? The phrase sticks and gives us a handhold as we scale the complications of the theory. For those who want to extend the challenge, a mathematical background to the theory is provided in an appendix, and every chapter is well referenced and accompanied by suggestions for further reading.

If anything, I found myself wanting more examples of gravity as experienced by humans and animals on Earth, rather than in the astronomical realm. I found these down-to-earth chapters the most fascinating: they formed a bridge between the vast and the local, reminding us that the same force that governs the orbits of galaxies also brings an apple to the ground. This may be a reaction only felt by astronomers like me, who already spend their days looking upward. I can easily see how the balance Riordon chose is necessary for someone without that background, and Einstein’s gravity does require galactic scales to appreciate, after all.

Crush is a generally uncomplicated and pleasurable read. The anecdotes can sometimes be a little long-winded and there are parts of the book that are not without challenge. But it is pitched perfectly for the curious general reader and even for those dipping their toes into popular science for the first time. I can imagine an enthusiastic A-level student devouring it; it is exactly the kind of book I would have loved at that age. Even if some of it would have gone over my head, Riordon’s enthusiasm and gift for storytelling would have kept me more than interested, as I sat up on that pallet in my favourite apple tree.

I left that house, and that tree, a long time ago, but just a few miles down the road from where I live now stands another, far more famous apple tree. In the garden of Woolsthorpe Manor near Grantham, Newton is said to have watched an apple fall. From that small event, he began to ask the questions that reshaped his and our understanding of the universe. Whether or not the story is true hardly matters – Newton was constantly inspired by the natural world, so it isn’t improbable, and that apple tree remains a potent symbol of curiosity and insight.

“[Newton] could tell us that an apple falls, and how quickly it will do it. As for the question of why it falls, that took Einstein to answer,” writes Riordon. Crush is a crisp and fresh tour through a continuum from orchards to observatories, showing that every planetary orbit, pulse of starlight and even every apple fall is part of the same wondrous story.

  • 2025 MIT Press 288pp £27hb


Ice XXI appears in a diamond anvil cell

A new phase of water ice, dubbed ice XXI, has been discovered by researchers working at the European XFEL and PETRA III facilities. The ice, which exists at room temperature and is structurally distinct from all previously observed phases of ice, was produced by rapidly compressing water to high pressures of 2 GPa. The finding could shed light on how different ice phases form at high pressures, including on icy moons and planets.

On Earth, ice can take many forms, and its properties depend strongly on its structure. The main type of naturally-occurring ice is hexagonal ice (Ih), so-called because the water molecules arrange themselves in a hexagonal lattice (this is the reason why snowflakes have six-fold symmetry). However, under certain conditions – usually involving very high pressures and low temperatures – ice can take on other structures. Indeed, 20 different forms of ice have been identified so far, denoted by roman numerals (ice I, II, III and so on up to ice XX).

Pressures of up to 2 GPa allow ice to form even at room temperature

Researchers from the Korea Research Institute of Standards and Science (KRISS) have now produced a 21st form of ice by applying pressures of up to two gigapascals. Such high pressures are roughly 20 000 times higher than normal air pressure at sea level, and they allow ice to form even at room temperature – albeit only within a device known as a dynamic diamond anvil cell (dDAC) that is capable of producing such extremely high pressures.

“In this special pressure cell, samples are squeezed between the tips of two opposing diamond anvils and can be compressed along a predefined pressure pathway,” explains Cornelius Strohm, a member of the DESY HIBEF team that set up the experiment using the High Energy Density (HED) instrument at the European XFEL.

Much more tightly packed molecules

The structure of ice XXI is different from all previously observed phases of ice because its molecules are much more tightly packed. This gives it the largest unit cell volume of all currently known types of ice, says KRISS scientist Geun Woo Lee. It is also metastable, meaning that it can exist even though another form of ice (in this case ice VI) would be more stable under the conditions in the experiment.

“This rapid compression of water allows it to remain liquid up to higher pressures, where it should have already crystallized to ice VI,” explains Lee. “Ice VI is an especially intriguing phase, thought to be present in the interior of icy moons such as Titan and Ganymede. Its highly distorted structure may allow complex transition pathways that lead to metastable ice phases.”

Ice XXI has a body-centred tetragonal crystal structure

To study how the new ice sample formed, the researchers rapidly compressed and decompressed it over 1000 times in the diamond anvil cell while imaging it every microsecond using the European XFEL, which produces X-ray pulses at megahertz repetition rates. They found that the liquid water crystallizes into different structures depending on how supercompressed it is.

The KRISS team then used the P02.2 beamline at PETRA III to determine that the ice XXI has a body-centred tetragonal crystal structure with a large unit cell (a = b = 20.197 Å and c = 7.891 Å) at approximately 1.6 GPa. This unit cell contains 152 water molecules, resulting in a density of 1.413 g cm−3.
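As a back-of-the-envelope consistency check (mine, not part of the study), the quoted density follows directly from the tetragonal cell dimensions and the 152 water molecules the cell contains:

```python
# Check that 152 H2O molecules in the quoted tetragonal unit cell
# (a = b = 20.197 angstrom, c = 7.891 angstrom) give ~1.413 g/cm^3.
N_A = 6.02214e23                    # Avogadro's number, 1/mol
M_H2O = 18.015                      # molar mass of water, g/mol

a, c = 20.197e-8, 7.891e-8          # cell edges in cm (1 angstrom = 1e-8 cm)
volume = a * a * c                  # tetragonal cell volume V = a^2 * c, cm^3
mass = 152 * M_H2O / N_A            # mass of 152 water molecules, g

print(f"density = {mass / volume:.3f} g/cm^3")   # prints ~1.413
```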

The experiments were far from easy, recalls Lee. Upon crystallization, ice XXI grows upwards (that is, in the vertical direction), which makes it difficult to precisely analyse its crystal structure. “The difficulty for us is to keep it stable for a long enough period to make precise structural measurements in a single-crystal diffraction study,” he says.

The multiple pathways of ice crystallization unearthed in this work, which is detailed in Nature Materials, imply that many more ice phases may exist. Lee says it is therefore important to analyse the mechanism behind the formation of these phases. “This could, for example, help us better understand the formation and evolution of these phases on icy moons or planets,” he tells Physics World.


Studying the role of the quantum environment in attosecond science

Attosecond science is undoubtedly one of the fastest growing branches of physics today.

Its popularity was demonstrated by the award of the 2023 Nobel Prize in Physics to Anne L’Huillier, Paul Corkum and Ferenc Krausz for experimental methods that generate attosecond pulses of light for the study of electron dynamics in matter.

One of the most important processes in this field is dephasing. This happens when an electron loses its phase coherence because of interactions with its surroundings.

This loss of coherence can obscure the fine details of electron dynamics, making it harder to capture precise snapshots of these rapid processes.

The most common way to model this process in light-matter interactions is by using the relaxation time approximation. This approach greatly simplifies the picture as it avoids the need to model every single particle in the system.
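In its simplest schematic form – a generic textbook statement, not necessarily the exact equations used in the paper – the relaxation time approximation replaces the full many-body collision term in the equation of motion for the density matrix ρ with a single decay towards equilibrium:

```latex
% Relaxation-time approximation: coherent evolution under the Hamiltonian H,
% plus a phenomenological decay of rho towards its equilibrium value rho_eq
% at a single rate 1/T (the "relaxation time")
\frac{\partial \rho}{\partial t}
  = -\frac{i}{\hbar}\left[\hat{H},\, \rho\right]
  - \frac{\rho - \rho_{\mathrm{eq}}}{T}
```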

Its use is fine for dilute gases, but it doesn’t work as well with intense lasers and denser materials, such as solids, because it greatly overestimates ionisation.

This is a significant problem as ionisation is the first step in many processes such as electron acceleration and high-harmonic generation.

To address this, a team led by researchers from the University of Ottawa has developed a new method that corrects for the overestimate.

By introducing a heat bath into the model they were able to represent the many-body environment that interacts with electrons, without significantly increasing the complexity.

This new approach should enable the identification of new effects in attosecond science or wherever strong electromagnetic fields interact with matter.

Read the full article

Strong field physics in open quantum systems

N. Boroumand et al, 2025 Rep. Prog. Phys. 88 070501

 


Characterising quantum many-body states

Describing the non-classical properties of a complex many-body system – such as its entanglement or coherence – is an important task in quantum technologies.

An ideal tool for this task would work well for large systems and be both easy to compute and easy to measure. Unfortunately, no such universal tool yet exists.

With this goal in mind a team of researchers – Marcin Płodzień and Maciej Lewenstein (ICFO, Barcelona, Spain) and Jan Chwedeńczuk (University of Warsaw, Poland) – began work on a special type of quantum state used in quantum computing – graph states.

These states can be visualised as graphs or networks where each vertex represents a qubit, and each edge represents an interaction between pairs of qubits.
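Formally – this is the standard definition, independent of the new tools developed in the paper – a graph state is built by preparing one qubit per vertex in the |+⟩ state and applying a controlled-Z gate across every edge:

```latex
% Graph state |G> for a graph G = (V, E): each vertex carries a qubit in
% |+> = (|0> + |1>)/sqrt(2), and a controlled-Z gate acts across every edge
|G\rangle = \prod_{(a,b) \in E} CZ_{ab}\, |{+}\rangle^{\otimes |V|}
```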

The team studied four different shapes of graph states using new mathematical tools they developed. They found that one of these in particular, the Turán graph, could be very useful in quantum metrology.

Their method is (relatively) straightforward and does not require many assumptions. This means that it could be applied to any shape of graph beyond the four studied here.

The results will be useful in various quantum technologies wherever precise knowledge of many-body quantum correlations is necessary.

Read the full article

Many-body quantum resources of graph states

M. Płodzień et al, 2025 Rep. Prog. Phys. 88 077601

 


Extra carbon in the atmosphere may disrupt radio communications

Higher levels of carbon dioxide (CO2) in the Earth’s atmosphere could harm radio communications by enhancing a disruptive effect in the ionosphere. According to researchers at Kyushu University, Japan, who modelled the effect numerically for the first time, this little-known consequence of climate change could have significant impacts on shortwave radio systems such as those employed in broadcasting, air traffic control and navigation.

“While increasing CO2 levels in the atmosphere warm the Earth’s surface, they actually cool the ionosphere,” explains study leader Huixin Liu of Kyushu’s Faculty of Science. “This cooling doesn’t mean it is all good: it decreases the air density in the ionosphere and accelerates wind circulation. These changes affect the orbits and lifespan of satellites and space debris and also disrupt radio communications through localized small-scale plasma irregularities.”

The sporadic E-layer

One such irregularity is a dense but transient layer of metal ions that forms between 90‒120 km above the Earth’s surface. This sporadic E-layer (Es), as it is known, is roughly 1‒5 km thick and can stretch from tens to hundreds of kilometres in the horizontal direction. Its density is highest during the day, and it peaks around the time of the summer solstice.

The formation of the Es is hard to predict, and the mechanisms behind it are not fully understood. However, the prevailing “wind shear” theory suggests that vertical shears in horizontal winds, combined with the Earth’s magnetic field, cause metallic ions such as Fe+, Na+, and Ca+ to converge in the ionospheric dynamo region and form thin layers of enhanced ionization. The ions themselves largely come from metals in meteoroids that enter the Earth’s atmosphere and disintegrate at altitudes between around 80‒100 km.

Effects of increasing CO2 concentrations

While previous research has shown that increases in CO2 trigger atmospheric changes on a global scale, relatively little is known about how these increases affect smaller-scale ionospheric phenomena like the Es. In the new work, which is published in Geophysical Research Letters, Liu and colleagues used a whole-atmosphere model to simulate the upper atmosphere at two different CO2 concentrations: 315 ppm and 667 ppm.

“The 315 ppm represents the CO2 concentration in 1958, the year in which recordings started at the Mauna Loa observatory, Hawaii,” Liu explains. “The 667 ppm represents the projected CO2 concentration for the year 2100, based on a conservative assumption that the increase in CO2 is constant at a rate of around 2.5 ppm/year since 1958.”
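A quick arithmetic check (mine, not from the paper) shows that the two concentrations are indeed consistent with the stated rate:

```python
# Implied constant growth rate linking 315 ppm in 1958 to 667 ppm in 2100
c_1958, c_2100 = 315.0, 667.0        # CO2 concentrations, ppm
years = 2100 - 1958                  # 142 years

rate = (c_2100 - c_1958) / years
print(f"implied rate = {rate:.2f} ppm/year")   # ~2.48, i.e. "around 2.5 ppm/year"
```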

The researchers then evaluated how these different CO2 levels influence a phenomenon known as vertical ion convergence (VIC) which, according to the wind shear theory, drives the Es. The simulations revealed that the higher the atmospheric CO2 levels, the greater the VIC at altitudes of 100-120 km. “What is more, this increase is accompanied by the VIC hotspots shifting downwards by approximately 5 km,” says Liu. “The VIC patterns also change dramatically during the day and these diurnal variability patterns continue into the night.”

According to the researchers, the physical mechanism underlying these changes depends on two factors. The first is reduced collisions between metallic ions and the neutral atmosphere as a direct result of cooling in the ionosphere. The second is changes in the zonal wind shear, which are likely caused by long-term trends in atmospheric tides.

“These results are exciting because they show that the impacts of CO2 increase can extend all the way from Earth’s surface to altitudes at which HF and VHF radio waves propagate and communications satellites orbit,” Liu tells Physics World. “This may be good news for ham radio amateurs, as you will likely receive more signals from faraway countries more often. For radio communications, however, especially at HF and VHF frequencies employed for aviation, ships and rescue operations, it means more noise and frequent disruption in communication and hence safety. The telecommunications industry might therefore need to adjust their frequencies or facility design in the future.”


Phase-changing material generates vivid tunable colours

Switchable camouflage A toy gecko featuring a flexible layer of the thermally tunable colour coating appears greenish blue at room temperature (left); upon heating (right), its body changes to a dark magenta colour. (Courtesy: Aritra Biswa)

Structural colours – created using nanostructures that scatter and reflect specific wavelengths of light – offer a non-toxic, fade-resistant and environmentally friendly alternative to chemical dyes. Large-scale production of structural colour-based materials, however, has been hindered by fabrication challenges and a lack of effective tuning mechanisms.

In a step towards commercial viability, a team at the University of Central Florida has used vanadium dioxide (VO2) – a material with temperature-sensitive optical and structural properties – to generate tunable structural colour on both rigid and flexible surfaces, without requiring complex nanofabrication.

Senior author Debashis Chanda and colleagues created their structural colour platform by stacking a thin layer of VO2 on top of a thick, reflective layer of aluminium to form a tunable thin-film cavity. At specific combinations of VO2 grain size and layer thickness this structure strongly absorbs certain frequency bands of visible light, producing the appearance of vivid colours.

The key enabler of this approach is the fact that at a critical transition temperature, VO2 reversibly switches from insulator to metal, accompanied by a change in its crystalline structure. This phase change alters the interference conditions in the thin-film cavity, varying the reflectance spectra and changing the perceived colour. Controlling the thickness of the VO2 layer enables the generation of a wide range of structural colours.
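To make the interference picture concrete, here is a minimal sketch of the two-interface (Airy) reflectance of an absorbing film on a metal mirror at normal incidence; switching the film’s complex refractive index shifts the interference minima and hence the perceived colour. The VO2 and aluminium optical constants and the film thickness below are rough placeholder values for illustration only – not the measured dispersion or geometry used in the study:

```python
import numpy as np

def reflectance(wavelength_nm, d_nm, n_film, n_metal, n_ambient=1.0):
    """Airy reflectance of an ambient / thin film / thick metal stack at normal incidence."""
    r12 = (n_ambient - n_film) / (n_ambient + n_film)   # Fresnel coefficient, top interface
    r23 = (n_film - n_metal) / (n_film + n_metal)       # Fresnel coefficient, film-metal interface
    beta = 2 * np.pi * n_film * d_nm / wavelength_nm    # complex phase accumulated across the film
    phase = np.exp(2j * beta)
    r = (r12 + r23 * phase) / (1 + r12 * r23 * phase)
    return np.abs(r) ** 2

wl = np.linspace(400, 750, 8)        # a handful of visible wavelengths, nm
n_al = 0.96 + 6.7j                   # aluminium near 550 nm (approximate)
placeholder_indices = {
    "insulating VO2": 2.9 + 0.4j,    # illustrative value only
    "metallic VO2": 2.2 + 0.8j,      # illustrative value only
}

for label, n_film in placeholder_indices.items():
    R = reflectance(wl, d_nm=120, n_film=n_film, n_metal=n_al)
    print(label, np.round(R, 2))     # the reflectance dips move, so the colour changes
```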

The bilayer structures are grown via a combination of magnetron sputtering and electron-beam deposition, techniques compatible with large-scale production. By adjusting the growth parameters during fabrication, the researchers could broaden the colour palette and control the temperature at which the phase transition occurs. To expand the available colour range further, they added a third ultrathin layer of high-refractive index titanium dioxide on top of the bilayer.

The researchers describe a range of applications for their flexible coloration platform, including a colour-tunable maple leaf pattern, a thermal sensing label on a coffee cup and tunable structural coloration on flexible fabrics. They also demonstrated its use on complex shapes, such as a toy gecko with a flexible tunable colour coating and an embedded heater.

“These preliminary demonstrations validate the feasibility of developing thermally responsive sensors, reconfigurable displays and dynamic colouration devices, paving the way for innovative solutions across fields such as wearable electronics, cosmetics, smart textiles and defence technologies,” the team concludes.

The research is described in Proceedings of the National Academy of Sciences.


Semiconductor laser pioneer Susumu Noda wins 2026 Rank Prize for Optoelectronics

Susumu Noda of Kyoto University has won the 2026 Rank Prize for Optoelectronics for the development of the photonic-crystal surface-emitting laser (PCSEL). Noda has spent more than 25 years developing this new form of laser, which has potential applications in high-precision manufacturing as well as in LIDAR technologies.

Since the development of the first laser in 1960, optical fibre lasers and semiconductor lasers have in recent decades become competing technologies.

A semiconductor laser works by pumping an electrical current into a region where an n-doped (excess of electrons) and a p-doped (excess of “holes”) semiconductor material meet, causing electrons and holes to combine and release photons.

Semiconductor lasers have several advantages – compactness, high “wallplug” efficiency and ruggedness – but fall short in other areas, such as brightness and functionality.

This means that conventional semiconductor lasers require external optical and mechanical elements to improve their performance, which results in large and impractical systems.

‘A great honour’

In the late 1990s, Noda began working on a new type of semiconductor laser that could challenge the performance of optical fibre lasers. These so-called PCSELs employ a photonic crystal layer between the semiconductor layers. Photonic crystals are nanostructured materials in which a periodic variation of the dielectric constant – formed, for example, by a lattice of holes – creates a photonic band-gap.

Noda and his research group made a series of breakthroughs in the technology, such as demonstrating control of polarization and beam shape by tailoring the photonic crystal structure, and extending operation into blue–violet wavelengths.

The resulting PCSELs emit a high-quality, symmetric beam with narrow divergence and boast high brightness and high functionality while maintaining the benefits of conventional semiconductor lasers. In 2013, 0.2 W PCSELs became available and, a few years later, watt-class devices became operational.

Noda says that it is “a great honour and a surprise” to receive the prize. “I am extremely happy to know that more than 25 years of research on photonic-crystal surface-emitting lasers has been recognized in this way,” he adds. “I do hope to continue to further develop the research and its social implementation.”

Susumu Noda received his BSc and then PhD in electronics from Kyoto University in 1982 and 1991, respectively. From 1984 he also worked at Mitsubishi Electric Corporation, before joining Kyoto University in 1988 where he is currently based.

Founded in 1972 by the British industrialist and philanthropist Lord J Arthur Rank, the Rank Prize is awarded biennially in nutrition and optoelectronics. The 2026 Rank Prize for Optoelectronics, which has a cash award of £100 000, will be awarded formally at an event held in June.


Staying the course with lockdowns could end future pandemics in months

As a theoretical and mathematical physicist at Imperial College London, UK, Bhavin Khatri spent years using statistical physics to understand how organisms evolve. Then the COVID-19 pandemic struck, and like many other scientists, he began searching for ways to apply his skills to the crisis. This led him to realize that the equations he was using to study evolution could be repurposed to model the spread of the virus – and, crucially, to understand how it could be curtailed.

In a paper published in EPL, Khatri models the spread of a SARS-CoV-2-like virus using branching process theory, which he’d previously used to study how advantageous alleles (variations in a genetic sequence) become more prevalent in a population. He then uses this model to assess the duration that interventions such as lockdowns would need to be applied in order to completely eliminate infections, with the strength of the intervention measured in terms of the number of people each infected person goes on to infect (the virus’ effective reproduction number, R).

Tantalizingly, the paper concludes that applying such interventions worldwide in June 2020 could have eliminated the COVID virus by January 2021, several months before the widespread availability of vaccines reduced its impact on healthcare systems and led governments to lift restrictions on social contact. Physics World spoke to Khatri to learn more about his research and its implications for future pandemics.

What are the most important findings in your work?

One important finding is that we can accurately calculate the distribution of times required for a virus to become extinct by making a relatively simple approximation. This approximation amounts to assuming that people have relatively little population-level “herd” immunity to the virus – exactly the situation that many countries, including the UK, faced in March 2020.

Making this approximation meant I could reduce the three coupled differential equations of the well-known SIR model (which models pandemics via the interplay between Susceptible, Infected and Recovered individuals) to a single differential equation for the number of infected individuals in the population. This single equation turned out to be the same one that physics students learn when studying radioactive decay. I then used the discrete stochastic version of exponential decay and standard approaches in branching process theory to calculate the distribution of extinction times.
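In standard SIR notation – a schematic reconstruction of the step described here, not the paper’s full derivation – the reduction works as follows: with negligible immunity the susceptible fraction stays close to one, so the equation for the number of infected individuals decouples and, for R < 1, becomes simple exponential decay:

```latex
% SIR model with transmission rate beta, recovery rate gamma and R = beta/gamma.
% With little herd immunity, S/N ~ 1, so the infected-number equation decouples:
\frac{\mathrm{d}I}{\mathrm{d}t} \approx (\beta - \gamma)\, I = -\gamma\,(1 - R)\, I
% For R < 1 this is exponential decay, the same form as radioactive decay:
I(t) = I(0)\, e^{-\gamma (1 - R)\, t}
```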

Simulation trajectories a) A plot of the decline in the number of infected individuals over time. b) Probability density of extinction times for the same parameters as in a), showing that the most likely extinction times are measured in months. (Courtesy: Bhavin S. Khatri 2025 EPL 152 11003 DOI 10.1209/0295-5075/ae0c31 CC-BY 4.0 https://creativecommons.org/licenses/by/4.0/)

Alongside the formal theory, I also used my experience in population genetic theory to develop an intuitive approach for calculating the mean of this extinction time distribution. In population genetics, when a mutation is sufficiently rare, changes in its number of copies in the population are dominated by randomness. This is true even if the mutation has a large selective advantage: it has to grow by chance to sufficient critical size – on the order of 1/(selection strength) – for selection to take hold.

The same logic works in reverse when applied to a declining number of infections. Initially, they will decline deterministically, but once they go below a threshold number of individuals, changes in infection numbers become random. Using the properties of such random walks, I calculated an expression for the threshold number and the mean duration of the stochastic phase. These agree well with the formal branching process calculation.

In practical terms, the main result of this theoretical work is to show that for sufficiently strong lockdowns (where, on average, only one of every two infected individuals goes on to infect another person, R=0.5), this distribution of extinction times was narrow enough to ensure that the COVID pandemic virus would have gone extinct in a matter of months, or at most a year.
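The scale of that result can be illustrated with a toy Galton–Watson branching-process simulation, in which each infected person infects a Poisson-distributed number of others one generation later. The starting number of infections and the generation interval below are my own assumed values, not the parameters or the analytical machinery of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

R = 0.5            # effective reproduction number under a strong lockdown
I0 = 100_000       # assumed number of infections when the intervention starts
gen_days = 5.0     # assumed mean generation interval, days

def extinction_times(n_trials=200):
    """Monte Carlo of a subcritical branching process: days until zero infections."""
    times = []
    for _ in range(n_trials):
        infected, generations = I0, 0
        while infected > 0:
            infected = rng.poisson(R * infected)   # next generation of infections
            generations += 1
        times.append(generations * gen_days)
    return np.array(times)

t = extinction_times()
print(f"median extinction time ~ {np.median(t) / 30:.1f} months; "
      f"90% of runs finish within {np.percentile(t, 90) / 30:.1f} months")
```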

How realistic is this counterfactual scenario of eliminating SARS-CoV-2 within a year?

Leaving politics and the likelihood of social acceptance aside for the moment, if a sufficiently strong lockdown could have been maintained for a period of roughly six months across the globe, then I am confident that the virus could have been reduced to very low levels, or even made extinct.

The question then is: is this a stable situation? From the perspective of a single nation, if the rest of the world still has infections, then that nation either needs to maintain its lockdown or be prepared to re-impose it if there are new imported cases. From a global perspective, a COVID-free world should be a stable state, unless an animal reservoir of infections causes re-infections in humans.

Modelling the decline of a virus: Theoretical physicist and biologist Bhavin Khatri. (Courtesy: Bhavin Khatri)

As for the practical success of such a strategy, that depends on politics and the willingness of individuals to remain in lockdown. Clearly, this is not in the model. One thing I do discuss, though, is that this strategy becomes far more difficult once more infectious variants of SARS-CoV-2 evolve. However, the problem I was working on before this one (which I eventually published in PNAS) concerned the probability of evolutionary rescue or resistance, and that work suggests that evolution of new COVID variants reduces significantly when there are fewer infections. So an elimination strategy should also be more robust against the evolution of new variants.

What lessons would you like experts (and the public) to take from this work when considering future pandemic scenarios?

I’d like them to conclude that pandemics with similar properties are, in principle, controllable to small levels of infection – or complete extinction – on timescales of months, not years, and that controlling them minimizes the chance of new variants evolving. So, although the question of the political and social will to enact such an elimination strategy is not in the scope of the paper, I think if epidemiologists, policy experts, politicians and the public understood that lockdowns have a finite time horizon, then it is more likely that this strategy could be adopted in the future.

I should also say that my work makes no comment on the social harms of lockdowns, which shouldn’t be minimized and would need to be weighed against the potential benefits.

What do you plan to do next?

I think the most interesting next avenue will be to develop theory that lets us better understand the stability of the extinct state at the national and global level, under various assumptions about declining infections in other countries that adopted different strategies and the role of an animal reservoir.

It would also be interesting to explore the role of “superspreaders”, or infected individuals who infect many other people. There’s evidence that many infections spread primarily through relatively few superspreaders, and heuristic arguments suggest that taking this into account would decrease the time to extinction compared to the estimates in this paper.

I’ve also had a long-term interest in understanding the evolution of viruses through the lens of what are known as genotype–phenotype maps, in which the non-trivial and often redundant mapping from genetic sequences to function means that the role of stochasticity in evolution can be described using statistical-physics analogies. For the evolution of the antibodies that help us avoid virus antigens, this would be a driven system, and theories of non-equilibrium statistical physics could play a role in answering questions about the evolution of new variants.


When is good enough ‘good enough’?

Whether you’re running a business project, carrying out scientific research, or doing a spot of DIY around the house, knowing when something is “good enough” can be a tough question to answer. To me, “good enough” means something that is fit for purpose. It’s about striking a balance between the effort required to achieve perfection and the cost of not moving forward. It’s an essential mindset when perfection is either not needed or – as is often the case – not attainable.

When striving for good enough, the important thing to focus on is that your outcome should meet expectations, but not massively exceed them. Sounds simple, but how often have we heard people say things like they’re “polishing coal”, striving for “gold plated” or “trying to make a silk purse out of a sow’s ear”? It basically means they haven’t understood, defined or even accepted the requirements of the end goal.

Trouble is, as we go through school, college and university, we’re brought up to believe that we should strive for the best in whatever we study. Those with the highest grades, we’re told, will probably get the best opportunities and career openings. Unfortunately, this approach means we think we need to aim for perfection in everything in life, which is not always a good thing.

How to be good enough

So why is aiming for “good enough” a good thing to do? First, there’s the notion of “diminishing returns”. It takes a disproportionate amount of effort to achieve the final, small improvements that most people won’t even notice. Put simply, time can be wasted on unnecessary refinements, as embodied by the 80/20 rule (see box).

The 80/20 rule: the guiding principle of “good enough”

Also known as the Pareto principle – in honour of the Italian economist Vilfredo Pareto who first came up with the idea – the 80/20 rule states that for many outcomes, 80% of consequences or results come from 20% of the causes or effort. The principle helps to identify where to prioritize activities to boost productivity and get better results. It is a guideline, and the ratios can vary, but it can be applied to many things in both our professional and personal lives.

Examples from the world of business include the following:

Business sales: 80% of a company’s revenue might come from 20% of its customers.

Company productivity: 80% of your results may come from 20% of your daily tasks.

Software development: 80% of bugs could be caused by 20% of the code.

Quality control: 20% of defects may cause 80% of customer complaints.

Good enough also helps us to focus efforts. When a consumer or customer doesn’t know exactly what they want, or a product development route is uncertain, it can be better to deliver things in small chunks. Providing something basic but usable lets you solicit feedback that helps clarify requirements, or suggests improvements and additions to incorporate into the next chunk. This is broadly along the lines of a “minimum viable product”.

Not seeking perfection reminds us too that solutions to problems are often uncertain. If it’s not clear how, or even if, something might work, a proof of concept (PoC) can instead be a good way to try something out. Progress can be made by solving a specific technical challenge, whether via a basic experiment, demonstration or short piece of research. A PoC should help avoid committing significant time and resource to something that will never work.

Aiming for “good enough” naturally leads us to the notion of “continuous improvement”. It’s a personal favourite of mine because it allows for things to be improved incrementally as we learn or get feedback, rather than producing something in one go and then forgetting about it. It helps keep things current and relevant and encourages a culture of constantly looking for a better way to do things.

Finally, when searching for good enough, don’t forget the idea of ballpark estimates. Making approximations sounds too simple to be effective, but sometimes a rough estimate is really all you need. If an approximate guess can inform and guide your next steps or determine whether further action will be necessary then go for it. 

The benefits of good enough

Being good enough doesn’t just lead to practical outcomes, it can benefit our personal well-being too. Our time, after all, is a precious commodity and we can’t magically increase this resource. The pursuit of perfection can lead to stagnation, and ultimately burnout, whereas achieving good enough allows us to move on in a timely fashion.

A good-enough approach will even make you less stressed. By getting things done sooner and achieving more, you’ll feel freer and happier about your work even if it means accepting imperfection. Mistakes and errors are inevitable in life, so don’t be afraid to make them; use them as learning opportunities, rather than seeing them as something bad. Remember – the person who never made a mistake never got out of bed.

Recognizing that you’ve done the best you can for now is also crucial for starting new projects and making progress. By accepting good enough you can build momentum, get more things done, and consistently take actions toward achieving your goals.

Finally, good enough is also about shared ownership. By inviting someone else to look at what you’ve done, you can significantly speed up the process. In my own career I’ve often found myself agonising over some obscure detail or feeling something is missing, only to have my quandary solved almost instantly simply by getting someone else involved – making me wish I’d asked them sooner.

Caveats and conclusions

Good enough comes with some caveats. Regulatory or legislative requirements mean there will always be projects that have to reach a minimum standard, and meeting that standard will be your top priority. The precise nature of good enough will also depend on whether you’re making stuff (be it cars or computers) or dealing with intangible commodities such as software or services.

So what’s the conclusion? Well, in the interests of my own time, I’ve decided to apply the 80/20 rule and leave it to you to draw your own conclusion. As far as I’m concerned, I think this article has been good enough, but I’m sure you’ll let me know if it hasn’t. Consider it a minimum viable product that I can update in a future column.


Looking for inconsistencies in the fine structure constant

The core element of the experiment: a crystal containing thorium atoms. (Courtesy: TU Wien)

New high-precision laser spectroscopy measurements on thorium-229 nuclei could shed more light on the fine structure constant, which determines the strength of the electromagnetic interaction, say physicists at TU Wien in Austria.

The electromagnetic interaction is one of the four known fundamental forces in nature, with the others being gravity and the strong and weak nuclear forces. Each of these fundamental forces has an interaction constant that describes its strength in comparison with the others. The fine structure constant, α, has a value of approximately 1/137. If it had any other value, charged particles would behave differently, chemical bonding would manifest in another way and light-matter interactions as we know them would not be the same.
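For reference, the fine structure constant is defined in terms of other fundamental constants as:

```latex
% Fine structure constant: elementary charge e, vacuum permittivity epsilon_0,
% reduced Planck constant hbar and the speed of light c
\alpha = \frac{e^{2}}{4\pi \varepsilon_{0} \hbar c} \approx \frac{1}{137.036}
```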

“As the name ‘constant’ implies, we assume that these forces are universal and have the same values at all times and everywhere in the universe,” explains study leader Thorsten Schumm from the Institute of Atomic and Subatomic Physics at TU Wien. “However, many modern theories, especially those concerning the nature of dark matter, predict small and slow fluctuations in these constants. Demonstrating a non-constant fine-structure constant would shatter our current understanding of nature, but to do this, we need to be able to measure changes in this constant with extreme precision.”

With thorium spectroscopy, he says, we now have a very sensitive tool to search for such variations.

Nucleus becomes slightly more elliptic

The new work builds on a project that led, last year, to the world’s first nuclear clock, and is based on precisely determining how the thorium-229 (229Th) nucleus changes shape when one of its neutrons transitions from the ground state to a higher-energy state. “When excited, the 229Th nucleus becomes slightly more elliptic,” Schumm explains. “Although this shape change is small (at the 2% level), it dramatically shifts the contributions of the Coulomb interactions (the repulsion between protons in the nucleus) to the nuclear quantum states.”

The result is a change in the geometry of the 229Th nucleus’ electric field, to a degree that depends very sensitively on the value of the fine structure constant. By precisely observing this thorium transition, it is therefore possible to measure whether the fine-structure constant is actually a constant or whether it varies slightly.

After making crystals of 229Th doped in a CaF2 matrix at TU Wien, the researchers performed the next phase of the experiment in a JILA laboratory at the University of Colorado, Boulder, US, firing ultrashort laser pulses at the crystals. While they did not measure any changes in the fine structure constant, they did succeed in determining how such changes, if they exist, would translate into modifications to the energy of the first nuclear excited state of 229Th.

“It turns out that this change is huge, a factor 6000 larger than in any atomic or molecular system, thanks to the high energy governing the processes inside nuclei,” Schumm says. “This means that we are by a factor of 6000 more sensitive to fine structure variations than previous measurements.”

Increasing the spectroscopic accuracy of the 229Th transition

Researchers in the field have debated the likelihood of such an “enhancement factor” for decades, and theoretical predictions of its value have varied between zero and 10 000. “Having confirmed such a high enhancement factor will now allow us to trigger a ‘hunt’ for the observation of fine structure variations using our approach,” Schumm says.

Andrea Caputo of CERN’s theoretical physics department, who was not involved in this work, calls the experimental result “truly remarkable”, as it probes nuclear structure with a precision that has never been achieved before. However, he adds that the theoretical framework is still lacking. “In a recent work published shortly before this work, my collaborators and I showed that the nuclear-clock enhancement factor K is still subject to substantial theoretical uncertainties,” Caputo says. “Much progress is therefore still required on the theory side to model the nuclear structure reliably.”

Schumm and colleagues are now working on increasing the spectroscopic accuracy of their 229Th transition measurement by another one to two orders of magnitude. “We will then start hunting for fluctuations in the transition energy,” he reveals, “tracing it over time and – through the Earth’s movement around the Sun – space.”

The present work is detailed in Nature Communications.


Heat engine captures energy as Earth cools at night

A new heat engine driven by the temperature difference between Earth’s surface and outer space has been developed by Tristan Deppe and Jeremy Munday at the University of California Davis. In an outdoor trial, the duo showed how their engine could offer a reliable source of renewable energy at night.

While solar cells do a great job of converting the Sun’s energy into electricity, they have one major drawback, as Munday explains: “Lack of power generation at night means that we either need storage, which is expensive, or other forms of energy, which often come from fossil fuel sources.”

One solution is to exploit the fact that the Earth’s surface absorbs heat from the Sun during the day and then radiates some of that energy into space at night. While space has a temperature of around −270 °C, the average temperature of Earth’s surface is a balmy 15 °C. Together, these two heat reservoirs provide the essential ingredients of a heat engine, which is a device that extracts mechanical work as thermal energy flows from a heat source to a heat sink.

Coupling to space

“At first glance, these two entities appear too far apart to be connected through an engine. However, by radiatively coupling one side of the engine to space, we can achieve the needed temperature difference to drive the engine,” Munday explains.

For the concept to work, the engine must radiate the energy it extracts from the Earth within the atmospheric transparency window. This is a narrow band of infrared wavelengths that pass directly into outer space without being absorbed by the atmosphere.

To demonstrate this concept, Deppe and Munday created a Stirling engine, which operates through the cyclical expansion and contraction of an enclosed gas as it moves between hot and cold ends. In their setup, the ends were aligned vertically, with a pair of plates connecting each end to the corresponding heat reservoir.

For the hot end, an aluminium mount was pressed into soil, transferring the Earth’s ambient heat to the engine’s bottom plate. At the cold end, the researchers attached a black-coated plate that emitted an upward stream of infrared radiation within the transparency window.

Outdoor experiments

In a series of outdoor experiments performed throughout the year, this setup maintained a temperature difference greater than 10 °C between the two plates during most months. This was enough to extract more than 400 mW per square metre of mechanical power throughout the night.
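A rough Carnot-limit estimate using these figures (my own illustration, not a calculation from the paper) shows what such a small temperature difference implies: the conversion efficiency is capped at a few per cent, so delivering 400 mW of work per square metre requires on the order of ten watts per square metre of heat to flow through the engine and be radiated to the sky:

```python
T_hot = 288.0        # K, roughly the 15 degree C average surface temperature
dT = 10.0            # K, temperature difference maintained between the plates
P_work = 0.4         # W/m^2, mechanical power reported at night

eta_carnot = dT / T_hot              # Carnot limit: 1 - T_cold/T_hot = dT/T_hot
Q_needed = P_work / eta_carnot       # minimum heat flux needed at that efficiency

print(f"Carnot limit ~ {100 * eta_carnot:.1f}%")    # ~3.5%
print(f"heat flux    > {Q_needed:.0f} W/m^2")       # ~12 W/m^2
```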

“We were able to generate enough power to run a mechanical fan, which could be used for air circulation in greenhouses or residential buildings,” Munday describes. “We also configured the device to produce both mechanical and electrical power simultaneously, which adds to the flexibility of its operation.”

With this promising early demonstration, the researchers now predict that future improvements could enable the system to extract as much as 6 W per square metre under the same conditions. If rolled out commercially, the heat engine could help reduce the reliance of solar power on night-time energy storage – potentially opening a new route to cutting carbon emissions.

The research is described in Science Advances.


Microscale ‘wave-on-a-chip’ device sheds light on nonlinear hydrodynamics

A new microscale version of the flumes that are commonly used to reproduce wave behaviour in the laboratory will make it far easier to study nonlinear hydrodynamics. The device consists of a layer of superfluid helium just a few atoms thick on a silicon chip, and its developers at the University of Queensland, Australia, say it could help us better understand phenomena ranging from oceans and hurricanes to weather and climate.

“The physics of nonlinear hydrodynamics is extremely hard to model because of instabilities that ultimately grow into turbulence,” explains study leader Warwick Bowen of Queensland’s Quantum Optics Laboratory. “It is also very hard to study in experiments since these often require hundreds-of-metre-long wave flumes.”

While such flumes are good for studying shallow-water dynamics like tsunamis and rogue waves, Bowen notes that they struggle to access many of the complex wave behaviours, such as turbulence, found in nature.

Amplifying the nonlinearities in complex behaviours

The team say that the geometrical structure of the new wave-on-a-chip device can be designed at will using lithographic techniques and built in a matter of days. Superfluid helium placed on its surface can then be controlled optomechanically. Thanks to these innovations, the researchers were able to experimentally measure nonlinear hydrodynamics millions of times faster than would be possible using traditional flumes. They could also “amplify” the nonlinearities of complex behaviours, making them orders of magnitude stronger than is possible in even the largest wave flumes.

“This promises to change the way we do nonlinear hydrodynamics, with the potential to discover new equations that better explain the complex physics behind it,” Bowen says. “Such a technique could be used widely to improve our ability to predict both natural and engineered hydrodynamic behaviours.”

Using the chip, the team has so far measured several effects, including wave steepening, shock fronts and solitary-wave fission. While these nonlinear behaviours had been predicted in superfluids, they had never been directly observed there until now.

Waves can be generated in a very shallow depth

The Quantum Optics Laboratory researchers have been studying superfluid helium for over a decade. A key feature of this quantum liquid is that it flows without resistance, similar to the way electrons move without resistance in a superconductor. “We realized that this behaviour could be exploited in experimental studies of nonlinear hydrodynamics because it allows waves to be generated in a very shallow depth – even down to just a few atoms deep,” Bowen explains.

In conventional fluids, Bowen continues, resistance to motion becomes hugely important at small scales, and ultimately limits the nonlinear strengths accessible in traditional flume-based testing rigs. “Moving from the tens-of-centimetre depths of these flumes to tens-of-nanometres, we realized that superfluid helium could allow us to achieve many orders of magnitude stronger nonlinearities – comparable to the largest flows in the ocean – while also greatly increasing measurement speeds. It was this potential that attracted us to the project.”

The experiments were far from simple, however. To do them, the researchers needed to cryogenically cool the system to near absolute zero temperatures. They also needed to fabricate exceptionally thin superfluid helium films that interact very weakly with light, as well as optical devices with structures smaller than a micron. Combining all these components required what Bowen describes as “something of a hero experiment”, with important contributions coming from the team’s co-leader, Christopher Baker, and Walter Wasserman, who was then a PhD student in the group. The wave dynamics themselves, Bowen adds, were “exceptionally complex” and were analysed by Matthew Reeves, the first author of a Science paper describing the device.

As well as the application areas mentioned earlier, the team say the new work, which is supported by the US Defense Advanced Research Projects Agency’s APAQuS Program, could also advance our understanding of strongly interacting quantum structures that are difficult to model theoretically. “Superfluid helium is a classic example of such a system,” explains Bowen, “and our measurements represent the most precise measurements of wave physics in these. Other applications may be found in quantum technologies, where the flow of superfluid helium could – somewhat speculatively – replace superconducting electron flow in future quantum computing architectures.”

The researchers now plan to use the device and machine learning techniques to search for new hydrodynamics equations.


Electrical charge on objects in optical tweezers can be controlled precisely

An effect first observed decades ago by Nobel laureate Arthur Ashkin has been used to fine tune the electrical charge on objects held in optical tweezers. Developed by an international team led by Scott Waitukaitis of the Institute of Science and Technology Austria, the new technique could improve our understanding of aerosols and clouds.

Optical tweezers use focused laser beams to trap and manipulate small objects about 100 nm to 1 micron in size. Their precision and versatility have made them a staple across fields from quantum optics to biochemistry.

Ashkin shared the 2018 Nobel prize for inventing optical tweezers and in the 1970s he noticed that trapped objects can be electrically charged by the laser light. “However, his paper didn’t get much attention, and the observation has essentially gone ignored,” explains Waitukaitis.

Waitukaitis’ team rediscovered the effect while using optical tweezers to study how charges build up in the ice crystals accumulating inside clouds. In their experiment, micron-sized silica spheres stood in for the ice, but Ashkin’s charging effect got in their way.

Bummed out

“Our goal has always been to study charged particles in air in the context of atmospheric physics – in lightning initiation or aerosols, for example,” Waitukaitis recalls. “We never intended for the laser to charge the particle, and at first we were a bit bummed out that it did so.”

Their next thought was that they had discovered a new and potentially useful phenomenon. “Out of due diligence we of course did a deep dive into the literature to be sure that no one had seen it, and that’s when we found the old paper from Ashkin,” says Waitukaitis.

In 1976, Ashkin described how optically trapped objects become charged through a nonlinear process whereby electrons absorb two photons simultaneously. These electrons can acquire enough energy to escape the object, leaving it with a positive charge.

Yet beyond this insight, Ashkin “wasn’t able to make much sense of the effect,” Waitukaitis explains. “I have the feeling he found it an interesting curiosity and then moved on.”

Shaking and scattering

To study the effect in more detail, the team modified their optical tweezers setup so its two copper lens holders doubled as electrodes, allowing them to apply an electric field along the axis of the confining, opposite-facing laser beams. If the silica sphere became charged, this field would cause it to shake, scattering a portion of the laser light back towards each lens.

The researchers picked off this portion of the scattered light using a beam splitter, then diverted it to a photodiode, allowing them to track the sphere’s position. Finally, they converted the measured amplitude of the shaking particle into a real-time charge measurement. This allowed them to track the relationship between the sphere’s charge and the laser’s tuneable intensity.
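To see how an oscillation amplitude can be turned into a charge, here is a minimal sketch that treats the trapped sphere as a driven, damped harmonic oscillator responding to an oscillating field. The relation is the generic textbook one; all parameter values are hypothetical and the team's actual calibration may differ.

import numpy as np

# Hedged sketch: converting a measured oscillation amplitude into a charge,
# modelling the trapped sphere as a driven, damped harmonic oscillator.
# All parameter values below are illustrative, not taken from the experiment.

m      = 1.0e-15      # kg, mass of a ~1 micron silica sphere (illustrative)
omega0 = 2*np.pi*1e5  # rad/s, trap resonance frequency (illustrative)
gamma  = 2*np.pi*1e3  # rad/s, damping rate (illustrative)
omega  = 2*np.pi*5e4  # rad/s, drive frequency of the applied field (illustrative)
E0     = 1.0e4        # V/m, amplitude of the applied electric field (illustrative)

def charge_from_amplitude(A):
    """Charge (coulombs) inferred from the measured oscillation amplitude A (metres)."""
    response = np.sqrt((omega0**2 - omega**2)**2 + (gamma*omega)**2)
    return A * m * response / E0

A_measured = 5e-9  # m, example amplitude read off the photodiode signal
q = charge_from_amplitude(A_measured)
print(q / 1.602e-19, "elementary charges")

With a known field and trap response, the only quantity that has to be measured in real time is the amplitude, which is why the photodiode signal alone suffices for charge tracking.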

Their measurements confirmed Ashkin’s 1976 hypothesis that electrons on optically-trapped objects undergo two-photon absorption, allowing them to escape. Waitukaitis and colleagues improved on this model and showed how the charge on a trapped object can be controlled precisely by simply adjusting the laser’s intensity.

As for the team’s original research goal, the effect has actually been very useful for studying the behaviour of charged aerosols.

“We can get [an object] so charged that it shoots off little ‘microdischarges’ from its surface due to breakdown of the air around it, involving just a few or tens of electron charges at a time,” Waitukaitis says. “This is going to be really cool for studying electrostatic phenomena in the context of particles in the atmosphere.”

The study is described in Physical Review Letters.

The post Electrical charge on objects in optical tweezers can be controlled precisely appeared first on Physics World.

  •  

Quantum gravity: we explore spin foams and other potential solutions to this enduring challenge

Earlier this autumn I had the pleasure of visiting the Perimeter Institute for Theoretical Physics in Waterloo, Canada, where I interviewed four physicists about their research. This is the second of those conversations to appear on the podcast – and it is with Bianca Dittrich, whose research focuses on quantum gravity.

Albert Einstein’s general theory of relativity does a great job at explaining gravity but it is thought to be incomplete because it is incompatible with quantum mechanics. This is an important shortcoming because quantum mechanics is widely considered to be one of science’s most successful theories.

Developing a theory of quantum gravity is a crucial goal in physics, but it is proving to be extremely difficult. In this episode, Dittrich explains some of the challenges and talks about ways forward – including her current research on spin foams. We also chat about the intersection of quantum gravity and condensed matter physics; and the difficulties of testing theories against observational data.

IOP Publishing’s new Progress In Series: Research Highlights website offers quick, accessible summaries of top papers from leading journals like Reports on Progress in Physics and Progress in Energy. Whether you’re short on time or just want the essentials, these highlights help you expand your knowledge of leading topics.

The post Quantum gravity: we explore spin foams and other potential solutions to this enduring challenge appeared first on Physics World.

  •  

Can fast qubits also be robust?

Qubit central: This work was carried out as part of the National Center of Competence in Research SPIN (NCCR SPIN), which is led by the University of Basel, Switzerland. NCCR SPIN focuses on creating scalable spin qubits in semiconductor nanostructures made of silicon and germanium, with the aim of developing small, fast qubits for a universal quantum computer. (Courtesy: A Efimov)

Qubits – the building blocks of quantum computers – are plagued with a seemingly insurmountable dilemma. If they’re fast, they aren’t robust. And if they’re robust, they aren’t fast. Both qualities are important, because all potentially useful quantum algorithms rely on being able to perform many manipulations on a qubit before its state decays. But whereas faster qubits are typically realized by strongly coupling them to the external environment, enabling them to interact more strongly with the driving field, robust qubits with long coherence times are typically achieved by isolating them from their environment.

These seemingly contradictory requirements made simultaneously fast and robust qubits an unsolved challenge – until now. In an article published in Nature Communications, a team of physicists led by Dominik Zumbühl from the University of Basel, Switzerland, shows that it is, in fact, possible to increase both the coherence time and the operational speed of a qubit, demonstrating a pathway out of this long-standing impasse.

The magic ingredient

The key ingredient driving this discovery is something called the direct Rashba spin-orbit interaction. The best-known example of spin-orbit interaction comes from atomic physics. Consider a hydrogen atom, in which a single electron revolves around a single proton in the nucleus. During this orbital motion, the electron interacts with the static electric field generated by the positively charged nucleus. The electron in turn experiences an effective magnetic field that couples to the electron’s intrinsic magnetic moment, or spin. This coupling of the electron’s orbital motion to its spin is called spin-orbit (SO) interaction.
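For the hydrogen-atom case this coupling has the standard textbook form, included here only as a reminder of what “spin-orbit interaction” means:

H_{\mathrm{SO}} \;=\; \xi(r)\,\mathbf{L}\cdot\mathbf{S},
\qquad
\xi(r) \;=\; \frac{1}{2 m_e^{2} c^{2}}\,\frac{1}{r}\,\frac{\mathrm{d}V(r)}{\mathrm{d}r},

where V(r) is the nuclear Coulomb potential, L the electron’s orbital angular momentum and S its spin. In the nanowire discussed below, a far more complicated, gate-tunable electrostatic potential takes over the role of V(r), which is what makes the “direct Rashba” coupling so much stronger and more controllable.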

Aided by collaborators at the University of Oxford, UK and TU Eindhoven in the Netherlands, Zumbühl and colleagues chose to replace this simple SO interaction with a far more complex landscape of electrostatic potential generated by a 10-nanometer-thick germanium wire coated with a thin silicon shell. By removing a single electron from this wire, they create states known as holes that can be used as qubits, with quantum information being encoded in the hole’s spin.

Importantly, the underlying crystal structure of the silicon-coated germanium wire constrains these holes to discrete energy levels called bands. “If you were to mathematically model a low-level hole residing in one of these bands using perturbation theory – a commonly applied method in which more remote bands are treated as corrections to the ground state – you would find a term that looks structurally similar to the spin–orbit interaction known from atomic physics,” explains Miguel Carballido, who conducted the work during his PhD at Basel, and is now a senior research associate at the University of New South Wales’ School of Electrical Engineering and Telecommunications in Sydney, Australia.

By encoding the quantum states in these energy levels, the spin-orbit interaction can be used to drive the hole-qubit between its two spin states. What makes this interaction special is that it can be tuned using an external electric field. Thus, by applying a stronger electric field, the interaction can be strengthened – resulting in faster qubit manipulation.

Uncompromising performance: Results showing qubit speed plateauing (top panel) and qubit coherence times peaking (bottom) at an applied electric field around 1330 mV, demonstrating that qubit speed and coherence time can be optimized simultaneously. (CC BY ND 4.0 MJ Carballido et al. “Compromise-free scaling of qubit speed and coherence” 2025 Nat. Commun. 16 7616)

Reaching a plateau

This ability to make a qubit faster by tuning an external parameter isn’t new. The difference is that whereas in other approaches, a stronger interaction also means higher sensitivity to fluctuations in the driving field, the Basel researchers found a way around this problem. As they increase the electric field, the spin-orbit interaction increases up to a certain point. Beyond this point, any further increase in the electric field will cause the hole to remain stuck within a low energy band. This restricts the hole’s ability to interact with other bands to change its spin, causing the SO interaction strength to drop.

By tuning the electric field to this peak, they can therefore operate in a “plateau” region where the SO interaction is the strongest, but the sensitivity to noise is the lowest. This leads to high coherence times (see figure), meaning that the qubit remains in the desired quantum state for longer. By reaching this plateau, where the qubit is both fast and robust, the researchers demonstrate the ability to operate their device in the “compromise-free” regime.
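A minimal toy model (our illustration, not the paper’s calculation) of why such a plateau helps: take a Rabi frequency f(E) that rises and then peaks as the field E increases, and assume the dephasing from electric-field noise scales with the slope |df/dE|. At the peak the slope vanishes, so the qubit is fastest exactly where it is least sensitive to noise.

import numpy as np

# Toy illustration: qubit drive speed f(E) with a maximum, and a dephasing rate
# proportional to |df/dE| times the field noise. Near the peak the slope vanishes,
# so the coherence time (1/dephasing) is longest where the qubit is fastest.
# The functional form and all numbers are invented for illustration only.

E = np.linspace(1000, 1600, 601)                 # mV, applied field (arbitrary scale)
f = 40.0 * np.exp(-((E - 1330.0) / 150.0)**2)    # MHz, toy Rabi frequency peaking at 1330 mV

dfdE = np.gradient(f, E)                         # MHz per mV, sensitivity to field fluctuations
noise = 1.0                                      # mV, rms field noise (illustrative)
dephasing = np.abs(dfdE) * noise + 1e-3          # MHz, plus a small residual floor
T2 = 1.0 / dephasing                             # toy coherence time (arbitrary units)

best = np.argmax(T2)
print("field with longest coherence:", E[best], "mV")
print("Rabi frequency there        :", round(f[best], 1), "MHz")

Running the sketch puts the coherence maximum at the same field as the speed maximum, which is the qualitative behaviour shown in the figure above.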

So, is quantum computing now a solved problem? The researchers’ answer is “not yet”, as there are still many challenges to overcome. “A lot of the heavy lifting is being done by the quasi 1D system provided by the nanowire,” remarks Carballido, “but this also limits scalability.” He also notes that the success of the experiment depends on being able to fabricate each qubit device very precisely, and doing this reproducibly remains a challenge.

The post Can fast qubits also be robust? appeared first on Physics World.

  •  

Did cannibal stars and boson stars populate the early universe?

In the early universe, moments after the Big Bang and cosmic inflation, clusters of exotic, massive particles could have collapsed to form bizarre objects called cannibal stars and boson stars. In turn, these could have then collapsed to form primordial black holes – all before the first elements were able to form.

This curious chain of events is predicted by a new model proposed by a trio of scientists at SISSA, the International School for Advanced Studies in Trieste, Italy.

Their proposal involves a hypothetical moment in the early universe called the early matter-dominated (EMD) epoch. This would have lasted only a few seconds after the Big Bang, but could have been dominated by exotic particles, such as the massive, supersymmetric particles predicted by string theory.

“There are no observations that hint at the existence of an EMD epoch – yet!” says SISSA’s Pranjal Ralegankar. “But many cosmologists are hoping that an EMD phase occurred because it is quite natural in many models.”

Some models of the early universe predict the formation of primordial black holes from quantum fluctuations in the inflationary field. Now, Ralegankar and his colleagues Daniele Perri and Takeshi Kobayashi propose a new and more natural pathway for forming primordial black holes via an EMD epoch.

They postulate that in the first second of existence, when the universe was small and incredibly hot, exotic massive particles emerged and clustered in dense haloes. The SISSA physicists propose that the haloes then collapsed into hypothetical objects called cannibal stars and boson stars.

Cannibal stars are powered by particles annihilating each other, which would have allowed the objects to resist further gravitational collapse for a few seconds. However, they would not have produced light like normal stars.

“The particles in a cannibal star can only talk to each other, which is why they are forced to annihilate each other to counter the immense pressure from gravity,” Ralegankar tells Physics World. “They are immensely hot, simply because the particles that we consider are so massive. The temperature of our cannibal stars can range from a few GeV to on the order of 10^10 GeV. For comparison, the Sun is on the order of keV.”
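For readers who prefer kelvin, the conversion is just E = k_B T (a quick aside, not taken from the paper):

# Convert the quoted energy scales to temperatures using E = k_B * T.
k_B = 8.617e-5            # eV per kelvin (Boltzmann constant)

def kelvin(energy_eV):
    return energy_eV / k_B

print(kelvin(1e9))        # 1 GeV     -> about 1.2e13 K
print(kelvin(1e10 * 1e9)) # 1e10 GeV  -> about 1.2e23 K
print(kelvin(1e3))        # 1 keV     -> about 1.2e7 K, roughly the solar-core scale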

Boson stars, meanwhile, would be made from a pure Bose–Einstein condensate – a state of matter in which the individual particles act quantum mechanically as one.

Both the cannibal stars and boson stars would exist within larger haloes that would quickly collapse to form primordial black holes with masses about the same as asteroids (about 10^14–10^19 kg). All of this could have taken place just 10 s after the Big Bang.

Dark matter possibility

Ralegankar, Perri and Kobayashi point out that the total mass of primordial black holes that their model produces matches the amount of dark matter in the universe.

“Current observations rule out black holes to be dark matter, except in the asteroid-mass range,” says Ralegankar. “We showed that our models can produce black holes in that mass range.”

Richard Massey, who is a dark-matter researcher at Durham University in the UK, agrees that microlensing observations by projects such as the Optical Gravitational Lensing Experiment (OGLE) have ruled out a population of black holes with planetary masses, but not asteroid masses. However, Massey is doubtful that these black holes could make up dark matter.

“It would be pretty contrived for them to make up a large fraction of what we call dark matter,” he says. “It’s possible that dark matter could be these primordial black holes, but they’d need to have been created with the same mass no matter where they were and whatever environment they were in, and that mass would have to be tuned to evade current experimental evidence.”

In the coming years, upgrades to OGLE and the launch of NASA’s Roman Space Telescope should finally provide sensitivity to microlensing events produced by objects in the asteroid mass range, allowing researchers to settle the matter.

It is also possible that cannibal and boson stars exist today, produced by collapsing haloes of dark matter. But unlike those proposed for the early universe, modern cannibal and boson stars would be stable and long-lasting.

“Much work has already been done for boson stars from dark matter, and we are simply suggesting that future studies should also think about the possibility of cannibal stars from dark matter,” explains Ralegankar. “Gravitational lensing would be one way to search for them, and depending on models, maybe also gamma rays from dark-matter annihilation.”

The research is described in Physical Review D.

The post Did cannibal stars and boson stars populate the early universe? appeared first on Physics World.

  •  

Academic assassinations are a threat to global science

The deliberate targeting of scientists in recent years has become one of the most disturbing, and overlooked, developments in modern conflict. In particular, Iranian physicists and engineers have been singled out for almost two decades, with sometimes fatal consequences. In 2007 Ardeshir Hosseinpour, a nuclear physicist at Shiraz University, died in mysterious circumstances that were widely attributed to poisoning or radioactive exposure.

Over the following years, at least five more Iranian researchers have been killed. They include particle physicist Masoud Ali-Mohammadi, who was Iran’s representative at the Synchrotron-light for Experimental Science and Applications in the Middle East project. Known as SESAME, it is the only scientific project in the Middle East where Iran and Israel collaborate.

Others to have died include nuclear engineer Majid Shahriari, another Iranian representative at SESAME, and nuclear physicist Mohsen Fakhrizadeh, who were both killed by bombing or gunfire in Tehran. These attacks were never formally acknowledged, nor were they condemned by international scientific institutions. The message, however, was implicit: scientists in politically sensitive fields could be treated as strategic targets, even far from battlefields.

What began as covert killings of individual researchers has now escalated, dangerously, into open military strikes on academic communities. Israeli airstrikes on residential areas in Tehran and Isfahan during the 12-day conflict between the two countries in June led to at least 14 Iranian scientists and engineers, along with members of their families, being killed. The scientists worked in areas such as materials science, aerospace engineering and laser physics. I believe this shift, from covert assassinations to mass casualties, crossed a line. It treats scientists as enemy combatants simply because of their expertise.

The assassinations of scientists are not just isolated tragedies; they are a direct assault on the global commons of knowledge, corroding both international law and international science. Unless the world responds, I believe the precedent being set will endanger scientists everywhere and undermine the principle that knowledge belongs to humanity, not the battlefield.

Drawing a red line

International humanitarian law is clear: civilians, including academics, must be protected. Targeting scientists based solely on their professional expertise undermines the Geneva Conventions and erodes the civilian–military distinction at the heart of international law.

Iran, whatever its politics, remains a member of the Nuclear Non-Proliferation Treaty and the International Atomic Energy Agency. Its scientists are entitled under international law to conduct peaceful research in medicine, energy and industry. Their work is no more inherently criminal than research that other countries carry out in artificial intelligence (AI), quantum technology or genetics.

If we normalize the preemptive assassination of scientists, what stops global rivals from targeting, say, AI researchers in Silicon Valley, quantum physicists in Beijing or geneticists in Berlin? Once knowledge itself becomes a liability, no researcher is safe. Equally troubling is the silence of the international scientific community: organizations such as the UN, UNESCO and the European Research Council, as well as leading national academies, have not condemned these killings, past or present.

Silence is not neutral. It legitimizes the treatment of scientists as military assets. It discourages international collaboration in sensitive but essential research and it creates fear among younger researchers, who may abandon high-impact fields to avoid risk. Science is built on openness and exchange, and when researchers are murdered for their expertise, the very idea of science as a shared human enterprise is undermined.

The assassinations are not solely Iran’s loss. The scientists killed were part of a global community – collaborators and colleagues in the pursuit of knowledge. Their deaths should alarm every nation and every institution that depends on research to confront global challenges, from climate change to pandemics.

I believe that international scientific organizations should act. At a minimum, they should publicly condemn the assassination of scientists and their families; support independent investigations under international law; as well as advocate for explicit protections for scientists and academic facilities in conflict zones.

Importantly, voices within Israel’s own scientific community can play a critical role too. Israeli academics, deeply committed to collaboration and academic freedom, understand the costs of blurring the boundary between science and war. Solidarity cannot be selective.

Recent events are a test case for the future of global science. If the international community tolerates the targeting of scientists, it sets a dangerous precedent that others will follow. What appears today as a regional assault on scientists from the Global South could tomorrow endanger researchers in China, Europe, Russia or the US.

Science without borders can only exist if scientists are recognized and protected as civilians without borders. That principle is now under direct threat and the world must draw a red line – killing scientists for their expertise is unacceptable. To ignore these attacks is to invite a future in which knowledge itself becomes a weapon, and the people who create it expendable. That is a world no-one should accept.

The post Academic assassinations are a threat to global science appeared first on Physics World.

  •  

DNA as a molecular architect

DNA is a fascinating macromolecule that guides protein production and enables cell replication. It has also found applications in nanoscience and materials design.

Colloidal crystals are ordered structures made from tiny particles suspended in fluid that can bond to other particles and add functionalisation to materials. By controlling colloidal particles, we can build advanced nanomaterials using a bottom-up approach. There are several ways to control colloidal particle design, ranging from experimental conditions such as pH and temperature to external controls like light and magnetic fields.

An exciting approach is to use DNA-mediated processes. DNA binds to colloidal surfaces and regulates how the colloids organize, providing molecular-level control. These connections are reversible and can be broken using standard experimental conditions (e.g., temperature), allowing for dynamic and adaptable systems. One important motivation is their good biocompatibility, which has enabled applications in biomedicine such as drug delivery, biosensing, and immunotherapy.

Programmable Atom Equivalents (PAEs) are large colloidal particles whose surfaces are functionalized with single-stranded DNA, while separate, much smaller DNA-coated linkers, called Electron Equivalents (EEs), roam in solution and mediate bonds between PAEs. In typical PAE-EE systems, the EEs carry multiple identical DNA ends that can all bind the same type of PAE, which limits the complexity of the assemblies and makes it harder to program highly specific connections between different PAE types.

In this study, the researchers investigate how EEs with arbitrary valency, carrying many DNA arms, regulate interactions in a binary mixture of two types of PAEs. Each EE has multiple single-stranded DNA ends of two different types, each complementary to the DNA on one of the PAE species. The team develops a statistical mechanical model to predict how EEs distribute between the PAEs and to calculate the effective interaction, a measure of how strongly the PAEs attract each other, which in turn controls the structures that can form.

The team then uses this model to inform Monte Carlo simulations that predict the system’s behaviour under different conditions. The model shows quantitative agreement with the simulation results and reveals an anomalous dependence of the PAE-PAE interaction on EE valency, with interactions converging at high valency. Importantly, the researchers identify an optimal valency that maximizes selectivity between targeted and non-targeted binding pairs. This groundbreaking research provides design principles for programmable self-assembly and offers a framework that can be integrated into DNA nanoscience.

Read the full article

Designed self-assembly of programmable colloidal atom-electron equivalents

Xiuyang Xia et al 2025 Rep. Prog. Phys. 88 078101

Do you want to learn more about this topic?

Assembly of colloidal particles in solution by Kun Zhao and Thomas G Mason (2018)

The post DNA as a molecular architect appeared first on Physics World.

  •  

The link between protein evolution and statistical physics

Proteins are made up of a sequence of building blocks called amino acids. Understanding these sequences is crucial for studying how proteins work, how they interact with other molecules, and how changes (mutations) can lead to diseases.

These mutations happen over vastly different time periods and are not completely random but strongly correlated, both in space (distinct sites along the sequences) and in time (subsequent mutations of the same site).

It turns out that these correlations are very reminiscent of disordered physical systems, notably glasses, emulsions, and foams.

A team of researchers from Italy and France have now used this similarity to build a new statistical model to simulate protein evolution. They went on to study the role of different factors causing these mutations.

They found that the initial (ancestral) protein sequence has a significant influence on the evolution process, especially in the short term. This means that information from the ancestral sequence can be traced back over a certain period and is not completely lost.

The strength of interactions between different amino acids within the protein affects how long this information persists.

Although ultimately the team did find differences between the evolution of physical systems and that of protein sequences, this kind of insight would not have been possible without using the language of statistical physics, i.e. space-time correlations.
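As a caricature of this picture (our toy model, not the authors’ simulation framework), one can evolve a binary “sequence” with pairwise interaction energies under Metropolis dynamics and watch how quickly its overlap with the ancestral sequence decays; stronger couplings keep the ancestral information around for longer.

import numpy as np

# Toy model of sequence evolution: L binary "sites" coupled by random pairwise
# interactions J, evolving by Metropolis single-site mutations. The overlap with
# the ancestral sequence measures how long ancestral information persists.
# This is an illustration of the idea, not the model used in the paper.

rng = np.random.default_rng(0)
L = 60

def evolve(coupling_strength, steps=20000, temperature=1.0):
    J = coupling_strength * rng.normal(size=(L, L)) / np.sqrt(L)
    J = (J + J.T) / 2
    np.fill_diagonal(J, 0.0)
    s = rng.choice([-1, 1], size=L)       # ancestral sequence
    ancestor = s.copy()
    overlaps = []
    for t in range(steps):
        i = rng.integers(L)
        dE = 2 * s[i] * (J[i] @ s)        # energy cost of mutating site i
        if dE <= 0 or rng.random() < np.exp(-dE / temperature):
            s[i] = -s[i]
        if t % 1000 == 0:
            overlaps.append(np.mean(s * ancestor))
    return overlaps

print("weak couplings  :", [round(q, 2) for q in evolve(0.3)])
print("strong couplings:", [round(q, 2) for q in evolve(1.5)])

The weakly coupled sequence forgets its ancestor quickly, while the strongly coupled one retains a sizeable overlap for far longer, which is the qualitative behaviour described above.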

The researchers expect that their results will soon be tested in the lab thanks to upcoming advances in experimental techniques.

Read the full article

Fluctuations and the limit of predictability in protein evolution

S. Rossi et al, 2025 Rep. Prog. Phys. 88 078102

The post The link between protein evolution and statistical physics appeared first on Physics World.

  •  

‘Caustic’ light patterns inspire new glass artwork

UK artist Alison Stott has created a new glass and light artwork – entitled Naturally Focused – that is inspired by the work of theoretical physicist Michael Berry from the University of Bristol.

Stott, who recently completed an MA in glass at Arts University Plymouth, previously spent over two decades working in visual effects for film and television, where she focussed on creating photorealistic imagery.

Her studies touched on how complex phenomena can arise from seemingly simple set-ups, for example in a rotating glass sculpture lit by LEDs.

“My practice inhabits the spaces between art and science, glass and light, craft and experience,” notes Stott. “Working with molten glass lets me embrace chaos, indeterminacy, and materiality, and my work with caustics explores the co-creation of light, matter, and perception.”

The new artwork is based on “caustics” – the curved patterns of concentrated light that form when light is reflected or refracted by curved surfaces or objects.
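To get a feel for how a caustic emerges, the sketch below (a generic textbook example, unrelated to Stott’s specific lens) reflects a bundle of parallel rays off the inside of a circular surface and locates the envelope where neighbouring rays cross – the bright cusp it traces out is the familiar heart-shaped curve seen inside a sunlit coffee cup.

import numpy as np

# Parallel rays travelling in +x hit the inside of a reflective circle of radius R.
# The caustic is the envelope of the reflected rays, found here numerically by
# intersecting each reflected ray with its neighbour. (Illustrative example only.)

R = 1.0
phi = np.linspace(-1.2, 1.2, 241)   # angular positions of the reflection points

points = np.stack([R * np.cos(phi), R * np.sin(phi)], axis=1)          # reflection points
directions = np.stack([-np.cos(2 * phi), -np.sin(2 * phi)], axis=1)    # reflected ray directions

def intersect(p1, d1, p2, d2):
    """Intersection of the lines p1 + t*d1 and p2 + u*d2."""
    A = np.array([d1, -d2]).T
    t, _ = np.linalg.solve(A, p2 - p1)
    return p1 + t * d1

envelope = [intersect(points[i], directions[i], points[i + 1], directions[i + 1])
            for i in range(len(phi) - 1)]

# The cusp of the caustic sits halfway between the centre and the mirror, at (R/2, 0).
print(envelope[len(envelope) // 2])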

The focal point of the artwork is a hand-blown glass lens that was waterjet-cut into a circle and polished so that its internal structure and optical behaviour are clearly visible. The lens is suspended within stainless steel gyroscopic rings and held by a brass support and stainless steel backplate.

The rings can be tilted or rotated to “activate [a] shifting field of caustic projections that ripple across” the artwork. Mathematical equations that describe the “singularities of light” visible on the glass surface are also engraved onto the brass.

The work is inspired by Berry’s research into the relationship between classical and quantum behaviour and how subtle geometric structures govern how waves and particles behave.

Berry recently won the 2025 Isaac Newton Medal and Prize, which is presented by the Institute of Physics, for his “profound contributions across mathematical and theoretical physics in a career spanning over 60 years”.

Stott says that working with Berry has pushed her understanding of caustics. “The more I learn about how these structures emerge and why they matter across physics, the more compelling they become,” notes Stott. “My aim is to let the phenomena speak for themselves, creating conditions where people can directly encounter physical behaviour and perhaps feel the same awe and wonder I do.”

The artwork will go on display at the University of Bristol following a ceremony to be held on 27 November.

The post ‘Caustic’ light patterns inspire new glass artwork appeared first on Physics World.

  •  

Is your WiFi spying on you?

WiFi networks could pose significant privacy risks even to people who aren’t carrying or using WiFi-enabled devices, say researchers at the Karlsruhe Institute of Technology (KIT) in Germany. According to their analysis, the current version of the technology passively records information that is detailed enough to identify individuals moving through networks, prompting them to call for protective measures in the next iteration of WiFi standards.

Although wireless networks are ubiquitous and highly useful, they come with certain privacy and security risks. One such risk stems from a phenomenon known as WiFi sensing, which the researchers at KIT’s Institute of Information Security and Dependability (KASTEL) define as “the inference of information about the networks’ environment from its signal propagation characteristics”.

“As signals propagate through matter, they interfere with it – they are either transmitted, reflected, absorbed, polarized, diffracted, scattered, or refracted,” they write in their study, which is published in the Proceedings of the 2025 ACM SIGSAC Conference on Computer and Communications Security (CCS ’25). “By comparing an expected signal with a received signal, the interference can be estimated and used for error correction of the received data.”

An under-appreciated consequence, they continue, is that this estimation contains information about any humans who may have unwittingly been in the signal’s path. By carefully analysing the signal’s interference with the environment, they say, “certain aspects of the environment can be inferred” – including whether humans are present, what they are doing and even who they are.

“Identity inference attack” is a threat

The KASTEL team terms this an “identity inference attack” and describes it as a threat that is as widespread as it is serious. “This technology turns every router into a potential means for surveillance,” says Julian Todt, who co-led the study with his KIT colleague Thorsten Strufe. “For example, if you regularly pass by a café that operates a WiFi network, you could be identified there without noticing it and be recognized later – for example by public authorities or companies.”

While Todt acknowledges that security services, cybercriminals and others do have much simpler ways of tracking individuals – for example by accessing data from CCTV cameras or video doorbells – he argues that the ubiquity of wireless networks lends itself to being co-opted as a near-permanent surveillance infrastructure. There is, he adds, “one concerning property” about wireless networks: “They are invisible and raise no suspicion.”

Identity of individuals could be extracted using a machine-learning model

Although the possibility of using WiFi networks in this way is not new, most previous WiFi-based security attacks worked by analysing so-called channel state information (CSI). These data indicate how a radio signal changes when it reflects off walls, furniture, people or animals. However, the KASTEL researchers note that the latest WiFi standard, known as WiFi 5 (802.11ac), changes the picture by enabling a new and potentially easier form of attack based on beamforming feedback information (BFI).

While beamforming uses similar information as CSI, Todt explains that it does so on the sender’s side instead of the receiver’s. This means that a BFI-based surveillance method would require nothing more than standard devices connected to the WiFi network. “The BFI could be used to create images from different perspectives that might then serve to identify persons that find themselves in the WiFi signal range,” Todt says. “The identity of individuals passing through these radio waves could then be extracted using a machine-learning model. Once trained, this model would make an identification in just a few seconds.”

In their experiments, Todt and colleagues studied 197 participants as they walked through a WiFi field while being simultaneously recorded with CSI and BFI from four different angles. The participants had five different “walking styles” (such as walking normally and while carrying a backpack) as well as different gaits. The researchers found that they could identify individuals with nearly 100% accuracy, regardless of the recording angle or the individual’s walking style or gait.
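The kind of pipeline involved can be sketched as follows (a generic illustration on synthetic data, not the KASTEL team’s actual model or features): beamforming-feedback snapshots are flattened into feature vectors, one label per person, and a standard classifier is trained to recognize who produced which signature.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for preprocessed beamforming-feedback (BFI) features:
# each "person" gets a characteristic mean feature vector plus noise.
rng = np.random.default_rng(1)
n_people, samples_per_person, n_features = 20, 50, 64

signatures = rng.normal(size=(n_people, n_features))
X = np.vstack([sig + 0.3 * rng.normal(size=(samples_per_person, n_features))
               for sig in signatures])
y = np.repeat(np.arange(n_people), samples_per_person)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("identification accuracy:", accuracy_score(y_test, clf.predict(X_test)))

On such cleanly separated synthetic features the accuracy is close to 100%; the experimental result above shows that real BFI recordings can be almost as distinctive.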

“Risks to our fundamental rights”

“The technology is powerful, but at the same time entails risks to our fundamental rights, especially to privacy,” says Strufe. He warns that authoritarian states could use the technology to track demonstrators and members of opposition groups, prompting him and his colleagues to “urgently call” for protective measures and privacy safeguards to be included in the forthcoming IEEE 802.11bf WiFi standard.

“The literature on all novel sensing solutions highlights their utility for various novel applications,” says Todt, “but the privacy risks that are inherent to such sensing are often overlooked, or worse — these sensors are claimed to be privacy-friendly without any rationale for these claims. As such, we feel it necessary to point out the privacy risks that novel solutions such as WiFi sensing bring with them.”

The researchers say they would like to see approaches developed that can mitigate the risk of identity inference attack. However, they are aware that this will be difficult, since this type of attack exploits the physical properties of the actual WiFi signal. “Ideally, we would influence the WiFi standard to contain privacy-protections in future versions,” says Todt, “but even the impact of this would be limited as there are already millions of WiFi devices out there that are vulnerable to such an attack.”

The post Is your WiFi spying on you? appeared first on Physics World.

  •  

Reversible degradation phenomenon in PEMWE cells


In proton exchange membrane water electrolysis (PEMWE) systems, voltage cycles that drop below a threshold are associated with reversible performance improvements, which remain poorly understood despite being documented in the literature. The distinction between reversible and irreversible performance changes is crucial for accurate degradation assessments. One approach in the literature to explain this behaviour is the oxidation and reduction of iridium: the activity and stability of iridium-based electrocatalysts in PEMWE hinge on their oxidation state, which is influenced by the applied voltage. Yet the dynamic performance of full PEMWE cells remains under-explored, with the focus typically on stability rather than activity. This study systematically investigates reversible performance behaviour in PEMWE cells using Ir-black as an anodic catalyst. Results reveal a recovery effect when the low voltage level drops below 1.5 V, with further enhancements observed as the voltage decreases, even with a holding time as short as 0.1 s. This reversible recovery is primarily driven by improved anode reaction kinetics, likely due to changing iridium oxidation states, and is supported by agreement between the experimental data and a dynamic model that links iridium oxidation/reduction processes to performance metrics. The model makes it possible to distinguish between reversible and irreversible effects and to derive optimized operation schemes that exploit the recovery effect.
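A heavily simplified sketch of the kind of dynamic model described (our toy with invented parameters; the published model is more detailed): let an iridium oxidation-state variable relax towards a voltage-dependent equilibrium, and let the anode overpotential grow with that oxidation state. Dropping the cell voltage briefly below a threshold then reduces the oxide and transiently recovers performance.

import numpy as np

# Toy dynamics: theta is a dimensionless iridium oxidation-state variable in [0, 1]
# that relaxes toward a voltage-dependent equilibrium theta_eq(V); the anode
# overpotential is assumed to increase linearly with theta. Parameters are invented.

def theta_eq(V, V_half=1.5, width=0.05):
    return 1.0 / (1.0 + np.exp(-(V - V_half) / width))   # more oxidized at high voltage

def simulate(voltage_profile, dt=0.01, tau=5.0, theta0=0.9):
    theta = theta0
    overpotential = []
    for V in voltage_profile:
        theta += dt * (theta_eq(V) - theta) / tau
        overpotential.append(0.30 + 0.10 * theta)        # V, toy anode overpotential
    return np.array(overpotential)

# Hold at 1.8 V, dip briefly to 1.3 V (below the ~1.5 V threshold), then return.
profile = np.concatenate([np.full(1000, 1.8), np.full(50, 1.3), np.full(1000, 1.8)])
eta = simulate(profile)
print("before dip    :", round(eta[999], 3), "V")
print("just after dip:", round(eta[1050], 3), "V  (temporarily lower, i.e. recovered)")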

Tobias Krenz

Tobias Krenz is a simulation and modelling engineer at Siemens Energy in the Transformation of Industry business area, focusing on reducing energy consumption and carbon-dioxide emissions in industrial processes. He completed his PhD at Leibniz University Hannover in February 2025. He earned a degree from Berlin University of Applied Sciences in 2017 and an MSc from Technische Universität Darmstadt in 2020.

Alexander Rex

Alexander Rex is a PhD candidate at the Institute of Electric Power Systems at Leibniz University Hannover. He holds a degree in mechanical engineering from Technische Universität Braunschweig, an MEng from Tongji University, and an MSc from Karlsruhe Institute of Technology (KIT). He was a visiting scholar at Berkeley Lab from 2024 to 2025.

The post Reversible degradation phenomenon in PEMWE cells appeared first on Physics World.

  •  

Ramy Shelbaya: the physicist and CEO capitalizing on quantum randomness

Ramy Shelbaya has been hooked on physics ever since he was a 12-year-old living in Egypt and read about the Joint European Torus (JET) fusion experiment in the UK. Biology and chemistry were interesting to him but never quite as “satisfying”, especially as they often seemed to boil down to physics in the end. “So I thought, maybe that’s where I need to go,” Shelbaya recalls.

His instincts seem to have led him in the right direction. Shelbaya is now chief executive of Quantum Dice, an Oxford-based start-up he co-founded in 2020 to develop quantum hardware for exploiting the inherent randomness in quantum mechanics. It closed its first funding round in 2021 with a seven-figure investment from a consortium of European investors, while also securing grant funding on the same scale.

Now providing cybersecurity hardware systems for clients such as BT, Quantum Dice is launching a piece of hardware for probabilistic computing, based on the same core innovation. Full of joy and zeal for his work, Shelbaya admits that his original decision to pursue physics was “scary”. Back then, he didn’t know anyone who had studied the subject and was not sure where it might lead.

The journey to a start-up

Fortunately, Shelbaya’s parents were onboard from the start and their encouragement proved “incredibly helpful”. His teachers also supported him to explore physics in his extracurricular reading, instilling a confidence in the subject that eventually led Shelbaya to do undergraduate and master’s degrees in physics at École normale supérieure PSL in France.

He then moved to the UK to do a PhD in atomic and laser physics at the University of Oxford. Just as he was wrapping up his PhD, Oxford University Innovation (OUI) – which manages its technology transfer and consulting activities – launched a new initiative that proved pivotal to Shelbaya’s career.

From PhD student to CEO: Ramy Shelbaya transformed a research idea into a commercial product after winning a competition for budding entrepreneurs. (Courtesy: Quantum Dice)

OUI had noted that the university generated a lot of IP and research results that could be commercialized, but that the academics producing them often favoured academic work over progressing the technology transfer themselves. On the other hand, lots of students were interested in entering the world of business.

To encourage those who might be business-minded to found their own firms, while also fostering more spin-outs from the university’s patents and research, OUI launched the Student Entrepreneurs’ Programme (StEP). A kind of talent show to match budding entrepreneurs with technology ready for development, StEP invited participants to team up, choose commercially promising research from the university, and pitch for support and mentoring to set up a company.

As part of Oxford’s atomic and laser physics department, Shelbaya was aware that it had been developing a quantum random number generator. So when the competition was launched, he collaborated with other competition participants to pitch the device. “My team won, and this is how Quantum Dice was born.”

Random value

The initial technology was geared towards quantum random number generation, for particular use in cybersecurity. Random numbers are at the heart of all encryption algorithms, but generating truly random numbers has been a stumbling block, with the “pseudorandom” numbers people make do with being prone to prediction and hence security violation.

Quantum mechanics provides a potential solution because there is inherent randomness in the values of certain quantum properties. Although for a long time this randomness was “a bane to quantum physicists”, as Shelbaya puts it, Quantum Dice and other companies producing quantum random number generators are now harnessing it for useful technologies.

Where Quantum Dice sees itself as having an edge over its competitors is in its real-time quality assurance of the true quantum randomness of the device’s output. This means it should be able to spot any corruption to the output due to environmental noise or someone tampering with the device, which is an issue with current technologies.
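As a flavour of what such live health checks look like (a generic statistical test, not Quantum Dice’s proprietary assurance scheme), the snippet below applies the classic “monobit” frequency test to a stream of bits: a healthy generator should produce p-values that are not consistently tiny.

import math
import secrets

# Monobit (frequency) test in the spirit of NIST SP 800-22: checks whether the
# numbers of 0s and 1s in a bitstream are statistically balanced.

def monobit_p_value(bits):
    s = sum(1 if b else -1 for b in bits)
    s_obs = abs(s) / math.sqrt(len(bits))
    return math.erfc(s_obs / math.sqrt(2))

# Example bitstream from the OS randomness source (a stand-in for QRNG output).
stream = [(byte >> i) & 1 for byte in secrets.token_bytes(4096) for i in range(8)]
print("monobit p-value:", monobit_p_value(stream))

Real-time assurance schemes go well beyond a single statistical test, but the principle is the same: monitor the output continuously and flag any drift away from ideal randomness.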

Quantum Dice already offers Quantum Random Number Generator (QRNG) products in a range of form factors that integrate directly within servers, PCs and hardware security systems. Clients can also access the company’s cloud-based solution – Quantum Entropy-as-a-Service – which, powered by its QRNG hardware, integrates into the Internet of Things and cloud infrastructure.

Recently Quantum Dice has also launched a probabilistic computing processor, based on its QRNG, for use in algorithms centred on probabilities. These are often geared towards optimization problems in sectors such as supply chains and logistics, finance, telecommunications and energy. They can also be used to simulate quantum systems and to run Boltzmann machines – a type of energy-based machine-learning model for which Shelbaya says researchers “have long sought efficient hardware”.

Stress testing

Things got trickier after the team won the start-up competition in 2019: Quantum Dice was ready to be incorporated just as the first COVID-19 lockdown began. Shelbaya moved the prototype device into his living room because it was the only place the team could guarantee access to it, but it turned out the real challenges lay elsewhere.

“One of the first things we needed was investments, and really, at that stage of the company, what investors are investing in is you,” explains Shelbaya, highlighting how difficult this is when you cannot meet in person. On the plus side, since all his meetings were remote, he could speak to investors in Asia in the morning, Europe in the afternoon and the US in the evening, all within the same day.

Another challenge was how to present the technology simply enough so that people would understand and trust it, while not making it seem so simple that anyone could be doing it. “There’s that sweet spot in the middle,” says Shelbaya. “That is something that took time, because it’s a muscle that I had never worked.”

Due rewards

The company performed well for its size and sector in terms of securing investments when their first round of funding closed in 2021. Shelbaya is shy of attributing the success to his or even the team’s abilities alone, suggesting this would “underplay a lot of other factors”. These include the rising interest in quantum technologies, and the advantages of securing government grant funding programmes at the same time, which he feels serves as “an additional layer of certification”.

For Shelbaya every day is different and so are the challenges. Quantum Dice is a small new company, where many of the 17 staff are still fresh from university, so nurturing trust among clients, particularly in the high-stakes world of cybersecurity, is no small feat. Managing a group of ambitious, energetic and driven young people can be complicated too.

But there are many rewards, ranging from seeing a piece of hardware finally work as intended and closing a deal with a client, to simply seeing a team “you have been working to develop, working together without you”.

For others hoping to follow a similar career path, Shelbaya’s advice is to do what you enjoy – not just because you will have fun but because you will be good at it too. “Do what you enjoy,” he says, “because you will likely be great at it.”

The post Ramy Shelbaya: the physicist and CEO capitalizing on quantum randomness appeared first on Physics World.

  •