Errors caused by interactions with the environment – noise – are the Achilles heel of every quantum computer, and correcting them has been called a “defining challenge” for the technology. Two teams – one led by Mikhail Lukin and Dolev Bluvstein at Harvard University, the other by Hartmut Neven at Google Quantum AI – working with very different quantum systems, took significant steps towards overcoming this challenge. In doing so, they made it far more likely that quantum computers will become practical problem-solving machines, not just noisy, intermediate-scale tools for scientific research.
Quantum error correction works by distributing one quantum bit of information – called a logical qubit – across several different physical qubits such as superconducting circuits or trapped atoms. While each physical qubit is noisy, they work together to preserve the quantum state of the logical qubit – at least for long enough to do a computation.
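To make the redundancy-plus-syndrome idea concrete, here is a deliberately simplified sketch: a classical Python simulation of the three-qubit bit-flip code, the textbook toy example rather than either winning team’s code. One logical bit is spread across three physical bits, parity checks locate a single flip without reading the data directly, and a majority vote recovers the logical value.

```python
# Classical toy simulation of the three-qubit bit-flip code -- the textbook
# illustration of redundancy plus syndrome measurement. Real quantum codes
# (such as the surface code) must also protect phase information.
import random

def encode(logical_bit):
    """Spread one logical bit across three physical bits."""
    return [logical_bit] * 3

def apply_noise(bits, p_flip=0.1):
    """Each physical bit flips independently with probability p_flip."""
    return [b ^ 1 if random.random() < p_flip else b for b in bits]

def syndrome(bits):
    """Parity checks (bit1 XOR bit2, bit2 XOR bit3) locate a single error
    without revealing the encoded value itself."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    """Flip whichever bit the syndrome singles out, if any."""
    culprit = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(bits))
    if culprit is not None:
        bits[culprit] ^= 1
    return bits

def decode(bits):
    """Majority vote recovers the logical bit (fails only if >= 2 bits flipped)."""
    return int(sum(bits) >= 2)

print(decode(correct(apply_noise(encode(1)))))  # almost always prints 1
```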
Formidable task
Error correction should become more effective as the number of physical qubits in a logical qubit increases. However, integrating large numbers of physical qubits to create a processor with multiple logical qubits is a formidable task. Furthermore, adding more physical qubits to a logical qubit also adds more noise – and it is not clear whether making logical qubits bigger would make them significantly better. This year’s winners of our Breakthrough of the Year have made significant progress in addressing these issues.
The team led by Lukin and Bluvstein created a quantum processor with 48 logical qubits that can execute algorithms while correcting errors in real time. At the heart of their processor are arrays of neutral atoms – grids of ultracold rubidium atoms trapped by optical tweezers. The atoms can be put into highly excited Rydberg states, which allows them to act as physical qubits that can exchange quantum information.
What is more, the atoms can be moved about within an array to entangle them with other atoms. According to Bluvstein, moving groups of atoms around the processor was critical for their success at addressing a major challenge in using logical qubits: how to get logical qubits to interact with each other to perform quantum operations. He describes the system as a “living organism that changes during a computation”.
Their processor used about 300 physical qubits to create up to 48 logical qubits, which were used to perform logical operations. In contrast, similar attempts using superconducting or trapped-ion qubits have only managed to perform logical operations using 1–3 logical qubits.
Willow quantum processor
Meanwhile, the team led by Hartmut Neven made a significant advance in how physical qubits can be combined to create a logical qubit. Using Google’s new Willow quantum processor – which offers up to 105 superconducting physical qubits – they showed that the noise in their logical qubit remained below a maximum threshold as they increased the number of qubits. This means that the logical error rate is suppressed exponentially as the number of physical qubits per logical qubit is increased.
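As a rough numerical illustration of what exponential suppression means – using assumed numbers, not Google’s measured values – suppose each increase in surface-code distance from d to d + 2 divides the logical error rate by a constant factor Λ:

```python
# Illustration only: LAMBDA and EPS_D3 are assumed values, not figures
# reported by Google. Below threshold, every step up in code distance
# (d -> d + 2) divides the logical error rate by a roughly constant factor.
LAMBDA = 2.0     # assumed error-suppression factor per distance step
EPS_D3 = 3e-3    # assumed logical error rate of a distance-3 code

for d in (3, 5, 7, 9, 11):
    eps = EPS_D3 / LAMBDA ** ((d - 3) // 2)   # exponential fall-off with distance
    print(f"distance {d:2d}: logical error rate ~ {eps:.1e}")
```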
Neven told Physics World that the Google system is “the most convincing prototype of a logical qubit built today”. He said that Google is on track to develop a quantum processor with 100 or even 1000 logical qubits by 2030, and that a device with 1000 logical qubits could do useful calculations for the development of new drugs or new materials for batteries.
Bluvstein, Lukin and colleagues are already exploring how their processor could be used to study an effect called quantum scrambling. This could shed light on properties of black holes and even provide important clues about the nature of quantum gravity.
You can listen to Neven talking about his team’s research, and to Bluvstein and Lukin discussing their group’s work, in the two Physics World Weekly podcast episodes described below.
The Breakthrough of the Year was chosen by the Physics World editorial team. We looked back at all the scientific discoveries we have reported on since 1 January and picked the most important. In addition to being reported in Physics World in 2024, the breakthrough must meet the following criteria:
Significant advance in knowledge or understanding
Importance of work for scientific progress and/or development of real-world applications
Of general interest to Physics World readers
Before we picked our winners, we released the Physics World Top 10 Breakthroughs for 2024, which served as our shortlist. The other nine breakthroughs are listed below in no particular order.
To a team of researchers at Stanford University in the US for developing a method to make the skin of live mice temporarily transparent. One of the challenges of imaging biological tissue using optical techniques is that tissue scatters light, which makes it opaque. The team, led by Zihao Ou (now at The University of Texas at Dallas), Mark Brongersma and Guosong Hong, found that the common yellow food dye tartrazine strongly absorbs near-ultraviolet and blue light and can help make biological tissue transparent. Applying the dye to the abdomen, scalp and hindlimbs of live mice enabled the researchers to see internal organs, such as the liver, small intestine and bladder, through the skin without requiring any surgery. They could also visualize blood flow in the rodents’ brains and the fine structure of muscle sarcomere fibres in their hindlimbs. The effect can be reversed by simply rinsing off the dye. This “optical clearing” technique has so far only been demonstrated in animals, but if extended to humans, it could help make some types of invasive biopsies a thing of the past.
To the AEgIS collaboration at CERN, and Kosuke Yoshioka and colleagues at the University of Tokyo, for independently demonstrating laser cooling of positronium. Positronium, an atom-like bound state of an electron and a positron, is created in the lab to allow physicists to study antimatter. Currently, it is created in “warm” clouds in which the atoms have a large distribution of velocities, making precision spectroscopy difficult. Cooling positronium to low temperatures could open up novel ways to study the properties of antimatter. It also enables researchers to produce one to two orders of magnitude more antihydrogen – an antiatom comprising a positron and an antiproton that’s of great interest to physicists. The research also paves the way to use positronium to test current aspects of the Standard Model of particle physics, such as quantum electrodynamics, which predicts specific spectral lines, and to probe the effects of gravity on antimatter.
To Roman Bauer at the University of Surrey, UK, Marco Durante from the GSI Helmholtz Centre for Heavy Ion Research, Germany, and Nicolò Cogno from GSI and Massachusetts General Hospital/Harvard Medical School, US, for creating a computational model that could improve radiotherapy outcomes for patients with lung cancer. Radiotherapy is an effective treatment for lung cancer but can harm healthy tissue. To minimize radiation damage and help personalize treatment, the team combined a model of lung tissue with a Monte Carlo simulator to simulate irradiation of alveoli (the tiny air sacs within the lungs) at microscopic and nanoscopic scales. Based on the radiation dose delivered to each cell and its distribution, the model predicts whether each cell will live or die, and determines the severity of radiation damage hours, days, months or even years after treatment. Importantly, the researchers found that their model delivered results that matched experimental observations from various labs and hospitals, suggesting that it could, in principle, be used within a clinical setting.
To Walter de Heer, Lei Ma and colleagues at Tianjin University and the Georgia Institute of Technology, and independently to Marcelo Lozada-Hidalgo of the University of Manchester and a multinational team of colleagues, for creating a functional semiconductor made from graphene, and for using graphene to make a switch that supports both memory and logic functions, respectively. The Manchester-led team’s achievement was to harness graphene’s ability to conduct both protons and electrons in a device that performs logic operations with a proton current while simultaneously encoding a bit of memory with an electron current. These functions are normally performed by separate circuit elements, which increases data transfer times and power consumption. Conversely, de Heer, Ma and colleagues engineered a form of graphene that does not conduct as easily. Their new “epigraphene” has a bandgap that, like silicon, could allow it to be made into a transistor, but with favourable properties that silicon lacks, such as high thermal conductivity.
To David Moore, Jiaxiang Wang and colleagues at Yale University, US, for detecting the nuclear decay of individual helium nuclei by embedding radioactive lead-212 atoms in a micron-sized silica sphere and measuring the sphere’s recoil as nuclei escape from it. Their technique relies on the conservation of momentum, and it can gauge forces as small as 10⁻²⁰ N and accelerations as tiny as 10⁻⁷ g, where g is the local acceleration due to the Earth’s gravitational pull. The researchers hope that a similar technique may one day be used to detect neutrinos, which are much less massive than helium nuclei but are likewise emitted as decay products in certain nuclear reactions.
To Andrew Denniston at the Massachusetts Institute of Technology in the US, Tomáš Ježo at Germany’s University of Münster and an international team for being the first to unify two distinct descriptions of atomic nuclei. They have combined the particle physics perspective – where nuclei comprise quarks and gluons – with the traditional nuclear physics view that treats nuclei as collections of interacting nucleons (protons and neutrons). The team has provided fresh insights into short-range correlated nucleon pairs – fleeting states in which two nucleons come exceptionally close and engage in strong interactions for mere femtoseconds. The model was tested and refined using experimental data from scattering experiments involving 19 different nuclei with very different masses (from helium-3 to lead-208). The work represents a major step forward in our understanding of nuclear structure and strong interactions.
To Jelena Vučković, Joshua Yang, Kasper Van Gasse, Daniil Lukin, and colleagues at Stanford University in the US for developing a compact, integrated titanium:sapphire laser that needs only a simple green LED as a pump source. They have reduced the cost and footprint of a titanium:sapphire laser by three orders of magnitude and the power consumption by two. Traditional titanium:sapphire lasers have to be pumped with high-powered lasers – and therefore cost in excess of $100,000. In contrast, the team was able to pump its device using a $37 green laser diode. The researchers also achieved two things that had not been possible before with a titanium:sapphire laser. They were able to adjust the wavelength of the laser light and they were able to create a titanium:sapphire laser amplifier. Their device represents a key step towards the democratization of a laser type that plays important roles in scientific research and industry.
To two related teams for their clever use of entangled photons in imaging. Both groups include Chloé Vernière and Hugo Defienne of Sorbonne University in France, who as a duo used quantum entanglement to encode an image into a beam of light. The impressive thing is that the image is only visible to an observer using a single-photon sensitive camera – otherwise the image is hidden from view. The technique could be used to create optical systems with reduced sensitivity to scattering. This could be useful for imaging biological tissues and long-range optical communications. In separate work, Vernière and Defienne teamed up with Patrick Cameron at the UK’s University of Glasgow and others to use entangled photons to enhance adaptive optical imaging. The team showed that the technique can be used to produce higher-resolution images than conventional bright-field microscopy. Looking to the future, this adaptive optics technique could play a major role in the development of quantum microscopes.
To the China National Space Administration for the first-ever retrieval of material from the Moon’s far side, confirming China as one of the world’s leading space nations. Landing on the lunar far side – which always faces away from Earth – is difficult due to its distance and its terrain of giant craters with few flat surfaces. At the same time, scientists are interested in the unexplored far side and why it looks so different from the near side. The Chang’e-6 mission, launched on 3 May, consisted of four parts: an ascender, lander, returner and orbiter. The ascender and lander successfully touched down on 1 June in the Apollo basin, which lies on the north-eastern side of the South Pole–Aitken Basin. The lander used its robotic scoop and drill to obtain about 1.9 kg of material within 48 h. The ascender then lifted off from the top of the lander and docked with the returner–orbiter before the returner headed back to Earth, landing in Inner Mongolia on 25 June. In November, scientists released the first results from the mission, finding that fragments of basalt – a type of volcanic rock – date back 2.8 billion years, indicating that the lunar far side was volcanically active at that time. Further scientific discoveries can be expected in the coming months and years as scientists analyze more fragments.
Physics World’s coverage of the Breakthrough of the Year is supported by Reports on Progress in Physics, which offers unparalleled visibility for your ground-breaking research.
In this episode of the Physics World Weekly podcast, Bluvstein and Lukin explain the crucial role that error correction is playing in the development of practical quantum computers. They also describe how atoms are moved around their quantum processor and why this coordinated motion allowed them to create logical qubits and use those qubits to perform quantum computations.
In this episode of the Physics World Weekly podcast, Neven talks about Google’s new Willow quantum processor, which integrates 105 superconducting physical qubits. He explains how his team used these qubits to create logical qubits with error rates that dropped exponentially as the number of physical qubits increased, and outlines Google’s ambitious plan to create a processor with 100, or even 1000, logical qubits by 2030.
Researchers at Google Quantum AI and collaborators have developed a quantum processor with error rates that get progressively smaller as the number of quantum bits (qubits) grows larger. This achievement is a milestone for quantum error correction, as it could, in principle, lead to an unlimited increase in qubit quality, and ultimately to an unlimited increase in the length and complexity of the algorithms that quantum computers can run.
Noise is an inherent feature of all physical systems, including computers. The bits in classical computers are protected from this noise by redundancy: some of the data is held in more than one place, so if an error occurs, it is easily identified and remedied. However, the no-cloning theorem of quantum mechanics dictates that once a quantum state is measured – a first step towards copying it – it is destroyed. “For a little bit, people were surprised that quantum error correction could exist at all,” observes Michael Newman, a staff research scientist at Google Quantum AI.
Beginning in the mid-1990s, however, information theorists showed that this barrier is not insurmountable, and several codes for correcting qubit errors were developed. The principle underlying all of them is that multiple physical qubits (such as individual atomic energy levels or states in superconducting circuits) can be networked to create a single logical qubit that collectively holds the quantum information. It is then possible to use “measure” qubits to determine whether an error occurred on one of the “data” qubits without affecting the state of the latter.
“In quantum error correction, we basically track the state,” Newman explains. “We say ‘Okay, what errors are happening?’ We figure that out on the fly, and then when we do a measurement of the logical information – which gives us our answer – we can reinterpret our measurement according to our understanding of what errors have happened.”
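Below is a minimal sketch of the bookkeeping Newman describes, assuming a generic frame-tracking decoder rather than Google’s actual software: detected errors are recorded rather than physically undone, and the final logical readout is reinterpreted at the end.

```python
# Generic sketch of error tracking ("we reinterpret our measurement"),
# not Google's decoder. Detected flips of the logical observable are
# recorded in a frame; the raw readout is corrected only at the end.
def interpret_logical_readout(raw_outcome, tracked_flips):
    """raw_outcome: the 0/1 value read from the logical qubit.
    tracked_flips: error events the decoder attributed to the logical
    observable during the run (one entry per detected flip)."""
    frame = sum(tracked_flips) % 2   # an even number of flips cancels out
    return raw_outcome ^ frame

# The hardware reports 0, but the decoder inferred one flip mid-computation,
# so the logical answer is actually 1.
print(interpret_logical_readout(raw_outcome=0, tracked_flips=[1]))
```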
Keeping error rates low
In principle, this procedure makes it possible for infinitely stable qubits to perform indefinitely long calculations – but only if error rates remain low enough. The problem is that each additional physical qubit introduces a fresh source of error. Increasing the number of physical qubits in each logical qubit is therefore a double-edged sword, and the logical qubit’s continued stability depends on several factors. These include the ability of the quantum processor’s (classical) software to detect and interpret errors; the specific error-correction code used; and, importantly, the fidelity of the physical qubits themselves.
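The textbook way of expressing this trade-off – a standard scaling relation for the surface code, not a formula quoted from the Google paper – is that the logical error rate falls exponentially with code distance d only when the physical error rate p sits below a threshold p_th:

```latex
% Standard surface-code scaling, quoted as an illustration: p is the physical
% error rate, p_th the threshold, d the code distance and A a constant of
% order one. Adding qubits helps only when p < p_th.
\[
  p_{\mathrm{logical}}(d) \;\approx\; A \left(\frac{p}{p_{\mathrm{th}}}\right)^{\lceil d/2 \rceil}
\]
```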
In 2023, Newman and colleagues at Google Quantum AI showed that an error-correction code called the surface code (which Newman describes as having “one of the highest error-suppression factors of any quantum code”) made it just about possible to “win” at error correction by adding more physical qubits to the system. Specifically, they showed that a distance-5 array logical qubit made from 49 superconducting transmon qubits had a slightly lower error rate than a distance-3 array qubit made from 17 such qubits. But the margin was slim. “We knew that…this wouldn’t persist,” Newman says.
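The qubit counts quoted here follow from the standard surface-code layout (a textbook relation, not something specific to Google’s chip): a distance-d patch uses d² data qubits plus d² − 1 measure qubits.

```latex
% Standard surface-code qubit count for a distance-d patch:
\[
  n(d) = d^{2} + (d^{2} - 1) = 2d^{2} - 1, \qquad n(3) = 17, \qquad n(5) = 49.
\]
```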
“Convincing, exponential error suppression”
In the latest work, which is published in Nature, a Google Quantum AI team led by Hartmut Neven unveils a new superconducting processor called Willow with several improvements over the previous Sycamore chip. These include gates (the building blocks of logical operations) that retain their “quantumness” five times longer, and a machine-learning algorithm, developed with Google DeepMind, that interprets errors in real time. When the team used these new capabilities to create nine surface-code distance-3 arrays, four distance-5 arrays and one 101-qubit distance-7 array on their 105-qubit processor, the error rate was suppressed by a factor of 2.4 as additional qubits were added.
“This is the first time we have seen convincing, exponential error suppression in the logical qubits as we increase the number of physical qubits,” says Newman. “That’s something people have been trying to do for about 30 years.”
With gates that remain stable for hours on end, quantum computers should be able to run the large, complex algorithms people have always hoped for. “We still have a long way to go, we still need to do this at scale,” Newman acknowledges. “But the first time we pushed the button on this Willow chip and I saw the lattice getting larger and larger and the error rate going down and down, I thought ‘Wow! Quantum error correction is really going to work…Quantum computing is really going to work!’”
Mikhail Lukin, a physicist at Harvard University in the US who also works on quantum error correction, calls the Google Quantum AI result “a very important step forward in the field”. While Lukin’s own group previously demonstrated improved quantum logic operations between multiple error-corrected atomic qubits, he notes that the present work showed better logical qubit performance after multiple cycles of error correction. “In practice, you’d like to see both of these things come together to enable deep, complex quantum circuits,” he says. “It’s very early, there are a lot of challenges remaining, but it’s clear that – in different platforms and moving in different directions – the fundamental principles of error correction have now been demonstrated. It’s very exciting.”
In computing, quantum mechanics is a double-edged sword. While computers that use quantum bits, or qubits, can perform certain operations much faster than their classical counterparts, these qubits only maintain their quantum nature – their superpositions and entanglement – for a limited time. Beyond this so-called coherence time, interactions with the environment, or noise, lead to loss of information and errors. Worse, because quantum states cannot be copied – a consequence of quantum mechanics known as the no-cloning theorem – or directly observed without collapsing the state, correcting these errors requires more sophisticated strategies than the simple duplications used in classical computing.
One such strategy is known as an approximate quantum error correction (AQEC) code. Unlike exact QEC codes, which aim for perfect error correction, AQEC codes help quantum computers return to almost, though not exactly, their intended state. “When we can allow mild degrees of approximation, the code can be much more efficient,” explains Zi-Wen Liu, a theoretical physicist who studies quantum information and computation at China’s Tsinghua University. “This is a very worthwhile trade-off.”
The problem is that the performance and characteristics of AQEC codes are poorly understood. For instance, AQEC conventionally entails the expectation that errors will become negligible as system size increases. This can in fact be achieved trivially: for random local noise, simply appending a series of redundant qubits to the logical state makes the likelihood of the logical information being affected vanishingly small. However, this approach is ultimately unhelpful. This raises the questions: what separates good (that is, non-trivial) codes from bad ones? And is this dividing line universal?
Establishing a new boundary
So far, scientists have not found a general way of differentiating trivial and non-trivial AQEC codes. However, this blurry boundary motivated Liu; Daniel Gottesman of the University of Maryland, US; Jinmin Yi of Canada’s Perimeter Institute for Theoretical Physics; and Weicheng Ye at the University of British Columbia, Canada, to develop a framework for doing so.
To this end, the team established a crucial parameter called subsystem variance. This parameter describes the fluctuation of subsystems of states within the code space, and, as the team discovered, links the effectiveness of AQEC codes to a property known as quantum circuit complexity.
Circuit complexity, an important concept in both computer science and physics, represents the optimal cost of a computational process. This cost can be assessed in many ways, with the most intuitive metrics being the minimum time or the “size” of computation required to prepare a quantum state using local gate operations. For instance, how long does it take to link up the individual qubits to create the desired quantum states or transformations needed to complete a computational task?
The researchers found that if the subsystem variance falls below a certain threshold, any code within this regime is considered a nontrivial AQEC code and subject to a lower bound of circuit complexity. This finding is highly general and does not depend on the specific structures of the system. Hence, by establishing this boundary, the researchers gained a more unified framework for evaluating and using AQEC codes, allowing them to explore broader error correction schemes essential for building reliable quantum computers.
A quantum leap
But that wasn’t all. The researchers also discovered that their new AQEC theory carries implications beyond quantum computing. Notably, they found that the dividing line between trivial and non-trivial AQEC codes also arises as a universal “threshold” in other physical scenarios – suggesting that this boundary is not arbitrary but rooted in elementary laws of nature.
One such scenario is the study of topological order in condensed matter physics. Topologically ordered systems are described by entanglement conditions and their associated code properties. These conditions include long-range entanglement, which is a circuit complexity condition, and topological entanglement entropy, which quantifies the extent of long-range entanglement. The new framework clarifies the connection between these entanglement conditions and topological quantum order, allowing researchers to better understand these exotic phases of matter.
A more surprising connection, though, concerns one of the deepest questions in modern physics: how do we reconcile quantum mechanics with Einstein’s general theory of relativity? While quantum mechanics governs the behavior of particles at the smallest scales, general relativity accounts for gravity and space-time on a cosmic scale. These two pillars of modern physics have some incompatible intersections, creating challenges when applying quantum mechanics to strongly gravitational systems.
In the 1990s, a mathematical framework called the anti-de Sitter/conformal field theory correspondence (AdS/CFT) emerged as a way of using a CFT – which does not itself incorporate gravity – to study quantum gravity. As it turns out, the way quantum information is encoded in CFT has conceptual ties to QEC. Indeed, these ties have driven recent advances in our understanding of quantum gravity.
By studying CFT systems at low energies and identifying connections between code properties and intrinsic CFT features, the researchers discovered that the CFT codes that pass their AQEC threshold might be useful for probing certain symmetries in quantum gravity. New insights from AQEC codes could even lead to new approaches to spacetime and gravity, helping to bridge the divide between quantum mechanics and general relativity.
Some big questions remain unanswered, though. One of these concerns the line between trivial and non-trivial codes. For instance, what happens to codes that live close to the boundary? The researchers plan to investigate scenarios where AQEC codes could outperform exact codes, and to explore ways to make the implications for quantum gravity more rigorous. They hope their study will inspire further explorations of AQEC’s applications to other interesting physical systems.