
Quantum computing on the verge: correcting errors, developing algorithms and building up the user base

31 October 2025 at 15:20

When it comes to building a fully functional “fault-tolerant” quantum computer, companies and government labs all over the world are rushing to be the first over the finish line. But a truly useful universal quantum computer capable of running complex algorithms would have to entangle millions of coherent qubits, which are extremely fragile. Because of environmental factors such as temperature fluctuations, interference from other electronics in the hardware and even measurement errors, today’s devices would fail under an avalanche of errors long before reaching that point.

So the problem of error correction is a key issue for the future of the market. It arises because errors in qubits can’t be corrected simply by keeping multiple copies, as they are in classical computers: quantum rules forbid copying a qubit’s state while it is still entangled with others and thus unknown. To run quantum circuits with millions of gates, we therefore need new tricks to enable quantum error correction (QEC).

Protected states

The general principle of QEC is to spread the information over many qubits so that an error in any one of them doesn’t matter too much. “The essential idea of quantum error correction is that if we want to protect a quantum system from damage then we should encode it in a very highly entangled state,” says John Preskill, director of the Institute for Quantum Information and Matter at the California Institute of Technology in Pasadena.

There is no unique way of achieving that spreading, however. Different error-correcting codes can depend on the connectivity between qubits – whether, say, they are coupled only to their nearest neighbours or to all the others in the device – which tends to be determined by the physical platform being used. However error correction is done, it must be done fast. “The mechanisms for error correction need to be running at a speed that is commensurate with that of the gate operations,” says Michael Cuthbert, founding director of the UK’s National Quantum Computing Centre (NQCC). “There’s no point in doing a gate operation in a nanosecond if it then takes 100 microseconds to do the error correction for the next gate operation.”

At the moment, dealing with errors is largely about compensation rather than correction: patching up the problems of errors in retrospect, for example by using algorithms that can throw out some results that are likely to be unreliable (an approach called “post-selection”). It’s also a matter of making better qubits that are less error-prone in the first place.

1 From many to few

Turning unreliable physical qubits into a logical qubit
(Courtesy: Riverlane)

To protect the information stored in qubits, a multitude of unreliable physical qubits have to be combined in such a way that if one qubit fails and causes an error, the others can help protect the system. Essentially, by combining many physical qubits (shown above on the left), one can build a few “logical” qubits that are strongly resistant to noise.

According to Maria Maragkou, commercial vice-president of quantum software company Riverlane, the goal of full QEC has ramifications for the design of the machines all the way from hardware to workflow planning. “The shift to support error correction has a profound effect on the way quantum processors themselves are built, the way we control and operate them, through a robust software stack on top of which the applications can be run,” she explains. The “stack” includes everything from programming languages to user interfaces and servers.

With genuinely fault-tolerant qubits, errors can be kept under control and prevented from proliferating during a computation. Such qubits might be made in principle by combining many physical qubits into a single “logical qubit” in which errors can be corrected (see figure 1). In practice, though, this creates a large overhead: huge numbers of physical qubits might be needed to make just a few fault-tolerant logical qubits. The question is then whether errors in all those physical qubits can be checked faster than they accumulate (see figure 2).

That overhead has been steadily reduced over the past several years, and at the end of last year researchers at Google announced that their 105-qubit Willow quantum chip passed the break-even threshold at which the error rate gets smaller, rather than larger, as more physical qubits are used to make a logical qubit. This means that in principle such arrays could be scaled up without errors accumulating.
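A useful way to picture what “below threshold” buys you is the standard heuristic scaling of the logical error rate with code distance. The snippet below is a minimal sketch assuming the commonly quoted approximation p_L ≈ A(p/p_th)^((d+1)/2) for a surface code; the threshold, prefactor and qubit counts are illustrative textbook-style values, not Willow’s measured numbers.

```python
# Minimal sketch of the heuristic scaling behind the break-even result: below the
# threshold, the logical error rate per QEC round shrinks as the code distance grows.
# p_L ~ A * (p / p_th)^((d + 1) / 2); constants here are illustrative, not measured values.

def logical_error_rate(p_phys, distance, p_threshold=1e-2, prefactor=0.1):
    """Heuristic logical error rate per round for a distance-d surface code."""
    return prefactor * (p_phys / p_threshold) ** ((distance + 1) / 2)

p_phys = 1e-3  # physical error rate, assumed below threshold
for d in (3, 5, 7, 9, 11):
    n_qubits = 2 * d * d - 1  # data + measure qubits in a rotated surface-code patch
    print(f"d = {d:2d}  ~{n_qubits:3d} physical qubits  "
          f"logical error rate ~ {logical_error_rate(p_phys, d):.1e}")
```

Under these assumptions the logical error rate drops by roughly an order of magnitude each time the distance increases by two, which is why operating below threshold makes scaling up worthwhile despite the qubit overhead.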

2 Error correction in action

Illustration of the error correction cycle
(Courtesy: Riverlane)

The illustration gives an overview of quantum error correction (QEC) in action within a quantum processing unit. UK-based company Riverlane is building its Deltaflow QEC stack that will correct millions of data errors in real time, allowing a quantum computer to go beyond the reach of any classical supercomputer.

Fault-tolerant quantum computing is the ultimate goal, says Jay Gambetta, director of IBM research at the company’s centre in Yorktown Heights, New York. He believes that to perform truly transformative quantum calculations, a system must go beyond demonstrating a few logical qubits: it needs arrays of at least 100 of them that can perform more than 100 million quantum operations (10⁸ QuOps). “The number of operations is the most important thing,” he says.

It sounds like a tall order, but Gambetta is confident that IBM will achieve these figures by 2029. By building on what has been achieved so far with error correction and mitigation, he feels “more confident than I ever did before that we can achieve a fault-tolerant computer.” Jerry Chow, former manager of the Experimental Quantum Computing group at IBM, shares that optimism. “We have a real blueprint for how we can build [such a machine] by 2029,” he says (see figure 3).

Others suspect the breakthrough threshold may be a little lower: Steve Brierly, chief executive of Riverlane, believes that the first error-corrected quantum computer, with around 10 000 physical qubits supporting 100 logical qubits and capable of a million QuOps (a megaQuOp), could come as soon as 2027. Following on, gigaQuOp machines (10⁹ QuOps) should be available by 2030–32, and teraQuOp machines (10¹² QuOps) by 2035–37.
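As a rough sanity check on those QuOp targets, the logical error rate per operation has to sit well below the inverse of the total operation count. The snippet below is a back-of-the-envelope sketch of that arithmetic under a simple independent-failure assumption; it is not a vendor specification.

```python
# Back-of-the-envelope: to run N logical operations with good odds of no logical
# failure, the error rate per logical operation must be well below 1/N.

targets = {
    "MegaQuOp (1e6 ops)": 1e6,
    "GigaQuOp (1e9 ops)": 1e9,
    "TeraQuOp (1e12 ops)": 1e12,
}

for name, n_ops in targets.items():
    # If each operation fails with probability p_L, the chance that all succeed is
    # (1 - p_L)^n_ops ~ exp(-p_L * n_ops). Aiming an order of magnitude below 1/N
    # gives roughly a 90% chance of a fault-free run.
    p_required = 0.1 / n_ops
    print(f"{name}: per-operation logical error rate below ~{p_required:.0e}")
```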

Platform independent

Error mitigation and error correction are just two of the challenges for developers of quantum software. Fundamentally, to develop a truly quantum algorithm involves taking full advantage of the key quantum-mechanical properties such as superposition and entanglement. Often, the best way to do that depends on the hardware used to run the algorithm. But ultimately the goal will be to make software that is not platform-dependent and so doesn’t require the user to think about the physics involved.

“At the moment, a lot of the platforms require you to come right down into the quantum physics, which is a necessity to maximize performance,” says Richard Murray of photonic quantum-computing company Orca. Try to generalize an algorithm by abstracting away from the physics and you’ll usually lower the efficiency with which it runs. “But no user wants to talk about quantum physics when they’re trying to do machine learning or something,” Murray adds. He believes that ultimately it will be possible for quantum software developers to hide those details from users – but Brierly thinks this will require fault-tolerant machines.

“In due time, everything below the logical circuit will be a black box to the app developers,” adds Maragkou. “They will not need to know what kind of error correction is used, what type of qubits are used, and so on.” She stresses that creating truly efficient and useful machines depends on developing the requisite skills. “We need to scale up the workforce to develop better qubits, better error-correction codes and decoders, write the software that can elevate those machines and solve meaningful problems in a way that they can be adopted.” Such skills won’t come only from quantum physicists, she adds: “I would dare say it’s mostly not!”

Yet even now, working on quantum software doesn’t demand a deep expertise in quantum theory. “You can be someone working in quantum computing and solving problems without having a traditional physics training and knowing about the energy levels of the hydrogen atom and so on,” says Ashley Montanaro, who co-founded the quantum software company Phasecraft.

On the other hand, insights can flow in the other direction too: working on quantum algorithms can lead to new physics. “Quantum computing and quantum information are really pushing the boundaries of what we think of as quantum mechanics today,” says Montanaro, adding that QEC “has produced amazing physics breakthroughs.”

Early adopters?

Once we have true error correction, Cuthbert at the UK’s NQCC expects to see “a flow of high-value commercial uses” for quantum computers. What might those be?

In the arena of quantum chemistry and materials science, genuine quantum advantage – calculating something that is impossible using classical methods alone – is more or less here already, says Chow. Crucially, however, quantum methods needn’t be used for the entire simulation but can be added to classical ones to give them a boost for particular parts of the problem.

IBM and RIKEN quantum systems
Joint effort In June 2025 IBM in the US and Japan’s national research laboratory RIKEN unveiled the first IBM Quantum System Two to be used outside the US. It pairs IBM’s 156-qubit Heron quantum processor (left) with RIKEN’s supercomputer Fugaku (right) – one of the most powerful classical systems on Earth. The computers are linked through a high-speed network at the fundamental instruction level to form a proving ground for quantum-centric supercomputing. (Courtesy: IBM and RIKEN)

For example, last year researchers at IBM teamed up with scientists at several RIKEN institutes in Japan to calculate the minimum-energy state of the iron–sulphur cluster (4Fe-4S) at the heart of the bacterial nitrogenase enzyme that fixes nitrogen. This cluster is too big and complex to be simulated accurately using the classical approximations of quantum chemistry. The researchers used a combination of quantum computing (with IBM’s 72-qubit Heron chip) and classical high-performance computing (HPC) on RIKEN’s Fugaku. This idea of “improving classical methods by injecting quantum as a subroutine” is likely to be a more general strategy, says Gambetta. “The future of computing is going to be heterogeneous accelerators [of discovery] that include quantum.”
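To make the “quantum as a subroutine” pattern concrete, the toy sketch below has a sampler (a classical stand-in for the quantum device) propose a small set of basis states, and the classical side then diagonalizes the Hamiltonian restricted to that subspace. This is only a schematic illustration of the pattern, not the actual IBM–RIKEN workflow; the random matrix and the sampler are placeholders.

```python
import numpy as np

# Toy illustration of "quantum as a subroutine": a sampler (here a classical
# stand-in for the quantum device) proposes basis states, and the classical
# side diagonalizes the Hamiltonian projected onto that small subspace.

rng = np.random.default_rng(0)

dim = 64                      # full Hilbert-space dimension (tiny toy problem)
h = rng.normal(size=(dim, dim))
H = (h + h.T) / 2             # a random symmetric "Hamiltonian"

def sample_basis_states(n_samples):
    """Stand-in for the quantum sampler: pick candidate basis states."""
    return sorted(rng.choice(dim, size=n_samples, replace=False))

subspace = sample_basis_states(12)
H_sub = H[np.ix_(subspace, subspace)]   # project H onto the sampled subspace

e_sub = np.linalg.eigvalsh(H_sub)[0]    # cheap classical diagonalization
e_exact = np.linalg.eigvalsh(H)[0]      # exact answer, only feasible for toy sizes

print(f"subspace estimate of ground energy: {e_sub:.3f}")
print(f"exact ground energy (toy check):    {e_exact:.3f}")
```

The point of the pattern is that the expensive exact diagonalization is replaced by a small subspace problem, with the quantum hardware responsible only for proposing which states matter.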

Likewise, Montanaro says that Phasecraft is developing “quantum-enhanced algorithms”, where a quantum computer is used, not to solve the whole problem, but just to help a classical computer in some way. “There are only certain problems where we know quantum computing is going to be useful,” he says. “I think we are going to see quantum computers working in tandem with classical computers in a hybrid approach. I don’t think we’ll ever see workloads that are entirely run using a quantum computer.” Among the first important problems that quantum machines will solve, according to Montanaro, are the simulation of new materials – to develop, for example, clean-energy technologies (see figure 4).

“For a physicist like me,” says Preskill, “what is really exciting about quantum computing is that we have good reason to believe that a quantum computer would be able to efficiently simulate any process that occurs in nature.”

3 Structural insights

Modelling materials using quantum computing
(Courtesy: Phasecraft)

A promising application of quantum computers is simulating novel materials. Researchers from the quantum algorithms firm Phasecraft, for example, have already shown how a quantum computer could help simulate complex materials such as the polycrystalline compound LK-99, which was purported by some researchers in 2023 to be a room-temperature superconductor.

Using a classical/quantum hybrid workflow, together with the firm’s proprietary approach for encoding and compiling materials on quantum hardware, Phasecraft researchers were able to establish a classical model of the LK-99 structure that allowed them to extract an approximate representation of the electrons within the material. The illustration above shows the green and blue electronic structure around red and grey atoms in LK-99.

Montanaro believes another likely near-term goal for useful quantum computing is solving optimization problems – both here and in quantum simulation, “we think genuine value can be delivered already in this NISQ era with hundreds of qubits.” (NISQ, a term coined by Preskill, refers to noisy intermediate-scale quantum computing, with relatively small numbers of rather noisy, error-prone qubits.)

One further potential benefit of quantum computing is that it tends to require less energy than classical high-performance computing, whose energy consumption is notoriously high. If the energy cost could be cut by even a few percent, it would be worth using quantum resources for that reason alone. “Quantum has real potential for an energy advantage,” says Chow. One study in 2020 showed that a particular quantum-mechanical calculation carried out on an HPC system used many orders of magnitude more energy than when it was run on a quantum circuit. Such comparisons are not easy, however, in the absence of an agreed and well-defined metric for energy consumption.

Building the market

Right now, the quantum computing market is in a curious superposition of states itself – it has ample proof of principle, but today’s devices are still some way from being able to perform a computation relevant to a practical problem that could not be done with classical computers. Yet to get to that point, the field needs plenty of investment.

The fact that quantum computers, especially if used with HPC, are already unique scientific tools should establish their value in the immediate term, says Gambetta. “I think this is going to accelerate, and will keep the funding going.” It is why IBM is focusing on utility-scale systems of around 100 qubits or so and more than a thousand gate operations, he says, rather than simply trying to build ever bigger devices.

Montanaro sees a role for governments to boost the growth of the industry “where it’s not the right fit for the private sector”. One role of government is simply as a customer. For example, Phasecraft is working with the UK national grid to develop a quantum algorithm for optimizing the energy network. “Longer-term support for academic research is absolutely critical,” Montanaro adds. “It would be a mistake to think that everything is done in terms of the underpinning science, and governments should continue to support blue-skies research.”

IBM roadmap of quantum development
The road ahead IBM’s current roadmap charts how the company plans on scaling up its devices to achieve a fault-tolerant device by 2029. Alongside hardware development, the firm will also focus on developing new algorithms and software for these devices. (Courtesy: IBM)

It’s not clear, though, whether there will be a big demand for quantum machines that every user will own and run. Before 2010, “there was an expectation that banks and government departments would all want their own machine – the market would look a bit like HPC,” Cuthbert says. But that demand depends in part on what commercial machines end up being like. “If it’s going to need a premises the size of a football field, with a power station next to it, that becomes the kind of infrastructure that you only want to build nationally.” Even for smaller machines, users are likely to try them first on the cloud before committing to installing one in-house.

According to Cuthbert, the real challenge in supply-chain development is that many of today’s technologies were developed for the science community – where, say, achieving millikelvin cooling or using high-power lasers is routine. “How do you go from a specialist scientific clientele to something that starts to look like a washing-machine factory, where you can make them to a certain level of performance,” while also making them much cheaper and easier to use?

But Cuthbert is optimistic about bridging this gap to get to commercially useful machines, encouraged in part by looking back at the classical computing industry of the 1970s. “The architects of those systems could not imagine what we would use our computation resources for today. So I don’t think we should be too discouraged that you can grow an industry when we don’t know what it’ll do in five years’ time.”

Montanaro too sees analogies with those early days of classical computing. “If you think what the computer industry looked like in the 1940s, it’s very different from even 20 years later. But there are some parallels. There are companies that are filling each of the different niches we saw previously, there are some that are specializing in quantum hardware development, there are some that are just doing software.” Cuthbert thinks that the quantum industry is likely to follow a similar pathway, “but more quickly and leading to greater market consolidation more rapidly.”

However, while the classical computing industry was revolutionized by the advent of personal computing in the 1970s and 80s, it seems very unlikely that we will have any need for quantum laptops. Rather, we might increasingly see apps and services appear that use cloud-based quantum resources for particular operations, merging so seamlessly with classical computing that we don’t even notice.

That, perhaps, would be the ultimate sign of success: that quantum computing becomes invisible, no big deal but just a part of how our answers are delivered.

  • In the first instalment of this two-part article, Philip Ball explores the latest developments in the quantum-computing industry

This article forms part of Physics World’s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ.

Find out more on our quantum channel.


Quantum computing and AI join forces for particle physics

23 October 2025 at 15:57

This episode of the Physics World Weekly podcast explores how quantum computing and artificial intelligence can be combined to help physicists search for rare interactions in data from an upgraded Large Hadron Collider.

My guest is Javier Toledo-Marín, and we spoke at the Perimeter Institute in Waterloo, Canada. As well as having an appointment at Perimeter, Toledo-Marín is also associated with the TRIUMF accelerator centre in Vancouver.

Toledo-Marín and colleagues have recently published a paper called “Conditioned quantum-assisted deep generative surrogate for particle–calorimeter interactions”.


This podcast is supported by Delft Circuits.

As gate-based quantum computing continues to scale, Delft Circuits provides the i/o solutions that make it possible.


Quantum Echoes – no more hype, quantum computing finally becomes verifiable!

By Korben
23 October 2025 at 11:48

For 30 years, quantum computing experts have been asking us to take their word for it, along the lines of “my quantum computer is 13,000 times faster than your Windows XP PC…”. The funny thing is that claims like that were impossible to verify. At least, that was true until now, because Google has just announced Quantum Echoes, and thanks to it we are finally going to find out what quantum computing really has under the hood.

Since Google’s famous “quantum supremacy” claim in 2019, we have in fact been stuck in a rather amusing trust paradox. Google told us “look, we solved a problem that would take a supercomputer 10 billion billion years”. Fine, I’m happy to believe them, but how do you check? Well, you couldn’t! It’s a bit like government promises: they only bind the fools who believe them ^^.

Fortunately, thanks to Quantum Echoes, that “just trust us” era is over, because for the first time in the history of quantum computing an algorithm can be verified reproducibly. You run the calculation on Google’s Willow chip and get a result. You run it again and get the same result. Your friend with a similar quantum computer runs the same thing and gets the same result too. That sounds basic, but for quantum computing it’s remarkable!

Willow, Google’s quantum chip

The algorithm in question is called OTOC (out-of-time-order correlator), and it works like an ultra-sophisticated echo. You send a signal into the quantum system, perturb one qubit, then precisely reverse the evolution of the signal and listen to the echo that comes back. That quantum echo is also amplified by constructive interference, a phenomenon in which quantum waves add up and become stronger. The result is a measurement of astonishing precision.
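For the curious, here is a minimal numerical sketch of what an OTOC looks like on a toy five-qubit spin chain, written with NumPy and SciPy. It only illustrates the forward-evolve / perturb / reverse-evolve structure described above; the model, operators and time steps are arbitrary choices, and it has nothing to do with Google’s actual Willow hardware or circuits.

```python
import numpy as np
from scipy.linalg import expm

# Toy out-of-time-order correlator (OTOC) on a small spin chain:
# F(t) = <psi| W(t)^dag V^dag W(t) V |psi>, with W(t) = U(t)^dag W U(t).

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op_on_site(op, site, n):
    """Embed a single-site operator into an n-qubit operator."""
    mats = [I2] * n
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

n = 5  # 2^5 = 32 dimensions: small enough for dense matrices

# Transverse-field Ising Hamiltonian: H = -sum Z_i Z_{i+1} - g sum X_i
g = 1.05
H = sum(-op_on_site(Z, i, n) @ op_on_site(Z, i + 1, n) for i in range(n - 1))
H = H + sum(-g * op_on_site(X, i, n) for i in range(n))

W = op_on_site(Z, 0, n)        # operator measured at one end of the chain
V = op_on_site(Z, n - 1, n)    # "butterfly" perturbation at the other end

psi = np.zeros(2 ** n, dtype=complex)
psi[0] = 1.0                   # start in |00...0>

for t in (0.0, 0.5, 1.0, 2.0, 4.0):
    U = expm(-1j * H * t)
    Wt = U.conj().T @ W @ U    # evolve forward, apply W, evolve back
    otoc = psi.conj() @ (Wt.conj().T @ V.conj().T @ Wt @ V @ psi)
    print(f"t = {t:3.1f}   Re OTOC = {otoc.real:+.3f}")
```

At t = 0 the two operators commute and the OTOC equals 1; as the system scrambles, the echo decays, which is the quantity the experiment tracks.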

In partnership with the University of California, Berkeley, Google tested this on two molecules, one with 15 atoms and another with 28, and the results obtained on its quantum computer matched those from traditional NMR (nuclear magnetic resonance) exactly. Except that Quantum Echoes runs 13,000 times faster than a classical supercomputer for this type of calculation.

Roughly speaking, what would have taken three years on a classical machine takes two hours on Willow.

The speed is impressive, but what really changes the game in this announcement is the notion of verifiability. In short, no more unverifiable claims: the structure of quantum systems, from molecules to magnets to black holes, will now be something you can check and compare.

And the concrete applications are already fairly well identified: drug discovery, to understand how molecules bind to their targets; materials science, to characterize the molecular structure of new polymers or battery components; nuclear fusion… in short, anything that requires modelling quantum phenomena with extreme precision.

Google compares it to a “quantum-scope”, capable of measuring natural phenomena that were previously unobservable, much as the telescope and the microscope gave us access to new invisible worlds. Quantum Echoes gives us access to the quantum world, except that this time we will be able to check that reality matches what the scientists announce.

Source

Advances in quantum error correction showcased at Q2B25

7 October 2025 at 17:00

This year’s Q2B meeting took place at the end of last month in Paris at the Cité des Sciences et de l’Industrie, a science museum in the north-east of the city. The event brought together more than 500 attendees and 70 speakers – world-leading experts from industry, government institutions and academia. All major quantum technologies were highlighted: computing, AI, sensing, communications and security.

Among the quantum computing topics was quantum error correction (QEC) – something that will be essential for building tomorrow’s fault-tolerant machines. Indeed, it could even be the technology’s most important and immediate challenge, according to the speakers on the State of Quantum Error Correction Panel: Paul Hilaire of Telecom Paris/IP Paris, Michael Vasmer of Inria, Quandela’s Boris Bourdoncle, Riverlane’s Joan Camps and Christophe Vuillot from Alice & Bob.

As was clear from the conference talks, quantum computers are undoubtedly advancing in leaps and bounds. One of their most important weak points, however, is that their fundamental building blocks (quantum bits, or qubits) are highly prone to errors. These errors are caused by interactions with the environment – also known as noise – and correcting them will require innovative software and hardware. Today’s machines can typically run only a few hundred operations before an error occurs; in the future, we will need quantum computers capable of processing a million error-free quantum operations (known as a MegaQuOp) or even a trillion error-free operations (a TeraQuOp).

QEC works by distributing one quantum bit of information – called a logical qubit – across several different physical qubits, such as superconducting circuits or trapped atoms. Each physical qubit is noisy, but they work together to preserve the quantum state of the logical qubit – at least for long enough to perform a calculation. It was Peter Shor who first discovered this method of formulating a quantum error-correcting code, by storing the information of one qubit in a highly entangled state of nine qubits. A technique known as syndrome decoding is then used to diagnose which error was the likely source of corruption on an encoded state. The error can then be reversed by applying a corrective operation that depends on the syndrome.
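As a concrete (and heavily simplified) illustration of syndrome decoding, the sketch below simulates the three-qubit bit-flip repetition code, one ingredient of Shor’s nine-qubit code. It tracks only bit-flip errors with ordinary bits, so it is a cartoon of the bookkeeping rather than a quantum simulation; the error probability is an arbitrary choice.

```python
import random

# Syndrome decoding for the three-qubit bit-flip repetition code, simulated
# classically: parity checks on qubit pairs (0,1) and (1,2) form the syndrome,
# which points to the single most likely flipped qubit.

SYNDROME_TO_FLIP = {
    (0, 0): None,  # no error detected
    (1, 0): 0,     # flip qubit 0 back
    (1, 1): 1,     # flip qubit 1 back
    (0, 1): 2,     # flip qubit 2 back
}

def measure_syndrome(bits):
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    flip = SYNDROME_TO_FLIP[measure_syndrome(bits)]
    if flip is not None:
        bits[flip] ^= 1
    return bits

random.seed(1)
p = 0.05                  # probability of a bit flip on each physical qubit
trials, failures = 100_000, 0
for _ in range(trials):
    logical = random.randint(0, 1)
    bits = [logical ^ (random.random() < p) for _ in range(3)]  # encode, add noise
    decoded = correct(bits)
    if (sum(decoded) >= 2) != bool(logical):   # majority vote after correction
        failures += 1

print(f"physical error rate: {p}")
print(f"logical error rate:  {failures / trials:.4f}  (expected ~ 3p^2 = {3 * p * p:.4f})")
```

A single flip is always corrected, so the encoded bit only fails when two or more qubits flip, which is why the logical error rate scales as roughly 3p² rather than p.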

Prototype quantum computer from NVIDIA
Computing advances A prototype quantum computer from NVIDIA that makes use of seven qubits. (Courtesy: Isabelle Dumé)

While error correction should become more effective as the number of physical qubits in a logical qubit increases, adding more physical qubits to a logical qubit also adds more noise. Much progress has been made in addressing this and other noise issues in recent years, however.

“We can say there’s a ‘fight’ when increasing the length of a code,” explains Hilaire. “Doing so allows us to correct more errors, but we also introduce more sources of errors. The goal is thus being able to correct more errors than we introduce. What I like with this picture is the clear idea of the concept of a fault-tolerant threshold below which fault-tolerant quantum computing becomes feasible.”

Developments in QEC theory

Speakers at the Q2B25 meeting shared a comprehensive overview of the most recent advances in the field – and they are varied. First up, concatenated error-correction codes. Prevalent in the early days of QEC, these fell by the wayside in favour of codes like the surface code, but recent work has shown that they are making a return. Concatenated codes can achieve constant encoding rates, and a proposal for a quantum computer operating with linear, nearest-neighbour connectivity was recently put forward. Directional codes, the likes of which are being developed by Riverlane, are also being studied. These leverage native transmon qubit logic gates – for example, iSWAP gates – and could potentially outperform surface codes in some respects.

The panellists then described bivariate bicycle codes, being developed by IBM, which offer better encoding rates than surface codes. While decoding them can be challenging for real-time applications, IBM’s “relay belief propagation” (relay BP) decoder has made progress here by simplifying decoding strategies that previously involved combining BP with post-processing. The good news is that this decoder is very general and works for all “low-density parity-check” codes – one of the most studied classes of high-performance QEC codes, which also includes, for example, surface codes and directional codes.

There is also renewed interest in decoders that can be parallelized and operate locally within a system, they said. These have shown promise for codes like the 1D repetition code, which could revive the concept of self-correcting or autonomous quantum memory. Another possibility is the increased use of the graphical language ZX calculus as a tool for optimizing QEC circuits and understanding spacetime error structures.

Hardware-specific challenges

The panel stressed that to achieve robust and reliable quantum systems, we will need to move beyond so-called hero experiments. For example, the demand for real-time decoding at megahertz frequencies with microsecond latencies is an important and unprecedented challenge. Indeed, breaking the decoding problem down into smaller, manageable pieces has proven difficult so far.

There are also issues with qubit platforms themselves that need to be addressed: trapped ions and neutral atoms allow for high fidelities and long coherence times, but they are roughly 1000 times slower than superconducting and photonic qubits and therefore require algorithmic or hardware speed-ups. And that is not all: solid-state qubits (such as superconducting and spin qubits) suffer from a “yield problem”, with dead qubits on manufactured chips. Improved fabrication methods will thus be crucial, said the panellists.


Collaboration between academia and industry

The discussions then moved towards the subject of collaboration between academia and industry. In the field of QEC, such collaboration is highly productive today, with joint PhD programmes and shared conferences like Q2B, for example. Large companies also now boast substantial R&D departments capable of funding high-risk, high-reward research, blurring the lines between fundamental and application-oriented research. Both sectors also use similar foundational mathematics and physics tools.

At the moment there’s an unprecedented degree of openness and cooperation in the field. This situation might change, however, as commercial competition heats up, noted the panellists. In the future, for example, researchers from both sectors might be less inclined to share experimental chip details.

Last, but certainly not least, the panellists stressed the urgent need for more PhDs trained in quantum mechanics to address the talent deficit in both academia and industry. So if you were thinking of switching fields, perhaps now is the time to jump.


Protein qubit can be used as a quantum biosensor

18 September 2025 at 14:00

A new optically addressable quantum bit (qubit) encoded in a fluorescent protein could be used as a sensor that can be directly produced inside living cells. The device opens up a new era for fluorescence microscopy to monitor biological processes, say the researchers at the University of Chicago Pritzker School of Molecular Engineering who designed the novel qubit.

Quantum technologies use qubits to store and process information. Unlike classical bits, which can exist in only one of two states, qubits can exist in a superposition of both. This means that computers employing qubits can process multiple streams of information simultaneously, allowing them to solve problems that would take classical computers years to work through.

Qubits can be manipulated and measured with high precision, and in quantum sensing applications they act as nanoscale probes whose quantum state can be initialized, coherently controlled and read out. This allows them to detect minute changes in their environment with exquisite sensitivity.

Optically addressable qubit sensors – that is, those that are read out using light pulses from a laser or other light source – are able to measure nanoscale magnetic fields, electric fields and temperature. Such devices are now routinely employed by researchers working in the physical sciences. However, their use in the life sciences is lagging behind, with most applications still at the proof-of-concept stage.

Difficult to position inside living cells

Many of today’s quantum sensors are based on nitrogen-vacancy (NV) centres, which are crystallographic defects in diamond. These centres occur when two neighbouring carbon atoms in diamond are replaced by a nitrogen atom and an empty lattice site, and they act like tiny quantum magnets with different spins. When the centres are excited with laser pulses, the fluorescence they emit can be used to monitor slight changes in the magnetic properties of a nearby sample of material. This is because the intensity of the emitted NV-centre signal changes with the local magnetic field.
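To make the readout principle concrete, here is a minimal numerical sketch of how an ODMR spectrum encodes a magnetic field for a single NV centre with the field along its axis. The zero-field splitting (about 2.87 GHz) and gyromagnetic ratio (about 28 GHz/T) are standard NV values, while the contrast, linewidth and field value are illustrative placeholders.

```python
import numpy as np

# Sketch of an NV-centre ODMR measurement: two Lorentzian fluorescence dips
# split symmetrically about the zero-field splitting, and the splitting gives
# the magnetic field component along the NV axis.

D = 2.870e9          # zero-field splitting (Hz), textbook NV value
GAMMA = 28.0e9       # NV gyromagnetic ratio (Hz per tesla), textbook value
CONTRAST = 0.15      # fractional fluorescence dip on resonance (illustrative)
LINEWIDTH = 5e6      # Lorentzian full width at half maximum (Hz), illustrative

def odmr_spectrum(freqs, B_parallel):
    """Normalized fluorescence vs microwave frequency for a field along the NV axis."""
    signal = np.ones_like(freqs)
    for f0 in (D - GAMMA * B_parallel, D + GAMMA * B_parallel):
        lorentz = (LINEWIDTH / 2) ** 2 / ((freqs - f0) ** 2 + (LINEWIDTH / 2) ** 2)
        signal -= CONTRAST * lorentz
    return signal

freqs = np.linspace(2.82e9, 2.92e9, 4001)
spectrum = odmr_spectrum(freqs, B_parallel=1e-3)   # simulate a 1 mT field

# Recover the field from the splitting of the two dips.
half = len(freqs) // 2
f_minus = freqs[np.argmin(spectrum[:half])]
f_plus = freqs[half + np.argmin(spectrum[half:])]
B_est = (f_plus - f_minus) / (2 * GAMMA)
print(f"estimated field: {B_est * 1e3:.2f} mT (true value 1.00 mT)")
```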

“The problem is that such sensors are difficult to position at well-defined sites inside living cells,” explains Peter Maurer, who co-led this new study together with David Awschalom. “And the fact that they are typically ten times larger than most proteins further restricts their applicability,” he adds.

“So, rather than taking a conventional quantum sensor and trying to camouflage it to enter a biological system, we therefore wanted to explore the idea of using a biological system itself and developing it into a qubit,” says Awschalom.

Fluorescent proteins, which are just 3 nm in diameter, could come into their own here as they can be genetically encoded, allowing cells to produce these sensors directly at the desired location with atomic precision. Indeed, fluorescent proteins have become the “gold standard” in cell biology thanks to this unique ability, says Maurer. And decades of biochemistry research have allowed researchers to generate a vast library of fluorescent proteins that can be tagged to thousands of different types of biological target.

“We recognized that these proteins possess optical and spin properties that are strikingly similar to those of qubits formed by crystallographic defects in diamond – namely that they have a metastable triplet state,” explain Awschalom and Maurer. “Building on this insight, we combined techniques from fluorescence microscopy with methods of quantum control to encode and manipulate protein-based qubits.”

In their work, which is detailed in Nature, the researchers used a near-infrared laser pulse to optically address a yellow fluorescent protein known as EYFP and read out its triplet spin state with up to 20% “spin contrast” – measured using optically detected magnetic resonance (ODMR) spectroscopy.

To test the technique, the team genetically modified the protein so that it was expressed in human embryonic kidney cells and Escherichia coli (E. coli) cells. The measured ODMR signals exhibited a contrast of up to 8%. While this performance is not as good as that of NV quantum sensors, the fluorescent proteins open the door to magnetic resonance measurements directly inside living cells – something that NV centres cannot do, says Maurer. “They could thus transform medical and biochemical studies by probing protein folding, monitoring redox states or detecting drug binding at the molecular scale,” he tells Physics World.

“A new dimension for fluorescence microscopy”

Beyond sensing, the unique quantum resonance “signatures” offer a new dimension for fluorescence microscopy, paving the way for highly multiplexed imaging far beyond today’s colour palette, Awschalom adds. Looking further ahead, using arrays of such protein qubits could even allow researchers to explore many-body quantum effects within biologically assembled structures.

Maurer, Awschalom and colleagues say they are now busy trying to improve the stability and sensitivity of their protein-based qubits through protein engineering via “directed evolution” – similar to the way that fluorescent proteins were optimized for microscopy.

“Another goal is to achieve single-molecule detection, enabling readout of the quantum state of individual protein qubits inside cells,” they reveal. “We also aim to expand the palette of available qubits by exploring new fluorescent proteins with improved spin properties and to develop sensing protocols capable of detecting nuclear magnetic resonance signals from nearby biomolecules, potentially revealing structural changes and biochemical modifications at the nanoscale.”

