Quantum computing: hype or hope?

Unless you’ve been living under a stone, you can’t have failed to notice that 2025 marks the first 100 years of quantum mechanics. A massive milestone, to say the least, about which much has been written in Physics World and elsewhere during the International Year of Quantum Science and Technology (IYQ). However, I’d like to focus on a specific piece of quantum technology, namely quantum computing.

I keep hearing about quantum computers, so people must be using them to do cool things, and surely they will soon be as commonplace as classical computers. But as a physicist-turned-engineer working in the aerospace sector, I struggle to get a clear picture of where things are really at. If I ask friends and colleagues when they expect to see quantum computers routinely used in everyday life, I get answers ranging from “in the next two years” to “maybe in my lifetime” or even “never”.

Before we go any further, it’s worth reminding ourselves that quantum computing relies on several key quantum properties. The first is superposition, which gives rise to the quantum bit, or qubit – the basic building block of a quantum computer. A qubit exists as a combination of the 0 and 1 states at the same time, described by a wave function that assigns a probability amplitude to each. Classical computers, in contrast, use binary digital bits that are either 0 or 1.
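
To make that concrete, here’s a minimal sketch in Python – my own illustrative toy using NumPy, not how real quantum hardware is programmed – of a qubit as a two-component vector of amplitudes, with measurement probabilities given by the squared magnitudes:

```python
# A single qubit as a state vector: a toy illustration with NumPy
import numpy as np

ket0 = np.array([1, 0], dtype=complex)  # the |0> basis state
ket1 = np.array([0, 1], dtype=complex)  # the |1> basis state

# An equal superposition: "both" 0 and 1 until measured
psi = (ket0 + ket1) / np.sqrt(2)

# Born rule: outcome probabilities are the squared amplitude magnitudes
probs = np.abs(psi) ** 2
print(probs)  # [0.5 0.5] -> a 50/50 chance of reading 0 or 1
```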

Also vital for quantum computers is the notion of entanglement, in which two or more qubits become so strongly correlated that they share their quantum information and can no longer be described independently. In such a highly correlated system, a quantum computer can explore many paths simultaneously. This massive parallel processing is how quantum computers may solve certain problems exponentially faster than classical ones.
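
Entanglement can be sketched in the same toy picture. The snippet below – again just illustrative NumPy, not real hardware – builds the textbook Bell state, in which neither qubit has a definite value of its own, yet the two always give correlated results:

```python
# Two entangled qubits (a Bell state): a toy illustration with NumPy
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Two-qubit states live in the tensor product of the one-qubit spaces
ket00 = np.kron(ket0, ket0)  # |00>
ket11 = np.kron(ket1, ket1)  # |11>

# The Bell state (|00> + |11>)/sqrt(2)
bell = (ket00 + ket11) / np.sqrt(2)

print(np.abs(bell) ** 2)  # [0.5 0. 0. 0.5] -> only 00 or 11, never 01 or 10
```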

The other key phenomenon for quantum computers is quantum interference. The wave-like nature of qubits means that when different probability amplitudes are in phase, they combine constructively to increase the likelihood of the right solution. Conversely, when amplitudes are out of phase they interfere destructively and cancel, suppressing that outcome – a well-designed quantum algorithm arranges for the wrong answers to cancel in exactly this way.

Quantum interference is important because it allows quantum algorithms to amplify the probability of correct answers and suppress incorrect ones, making certain calculations much faster. Along with superposition and entanglement, it means that quantum computers could process vast numbers of possibilities at once, outstripping even the best classical supercomputers on particular tasks.
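
A toy example makes the effect visible. In the sketch below (same illustrative NumPy conventions as above), applying the standard Hadamard gate once creates an equal superposition, but applying it twice returns the qubit to |0> because the two amplitudes leading to |1> cancel:

```python
# Quantum interference in miniature: Hadamard applied twice, with NumPy
import numpy as np

H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate

ket0 = np.array([1, 0], dtype=complex)

plus = H @ ket0   # equal superposition of |0> and |1>
back = H @ plus   # amplitudes recombine

# [1. 0.]: the paths to |1> interfere destructively and vanish,
# while the paths to |0> add constructively
print(np.abs(back) ** 2)
```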

Towards real devices

To me, it all sounds exciting, but what have quantum computers ever done for us so far? It’s clear that quantum computers are not ready to be deployed in the real world: significant technological challenges need to be overcome before they become fully realisable. In any case, no-one is expecting quantum computers to displace classical computers “like for like” – the two will be used for different things.

Yet it seems that the very essence of quantum computing is also its Achilles heel. Superposition, entanglement and interference – the quantum properties that promise to make it so powerful – are incredibly difficult to create and maintain. Qubits are extremely sensitive to their surroundings, easily losing their quantum state through interactions with the environment, whether via stray particles, electromagnetic fields or thermal fluctuations. This loss, known as decoherence, makes quantum computers prone to error.

That’s why quantum computers need specialized – and often cryogenically controlled – environments to maintain the quantum states necessary for accurate computation. Building a quantum system with lots of interconnected qubits is therefore a major, expensive engineering challenge, with complex hardware and extreme operating conditions. Developing “fault-tolerant” quantum hardware and robust error-correction techniques will be essential if we want reliable quantum computation.
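
The intuition behind error correction can be seen in a classical toy: the 3-bit repetition code, where redundancy plus a majority vote beats the raw error rate. Quantum error correction is far subtler – unknown qubits can’t simply be copied – but this sketch, my own illustration with a made-up 10% error rate, captures the basic idea:

```python
# The idea behind error correction: a classical 3-bit repetition code
import random
random.seed(1)

def encode(bit):
    return [bit] * 3  # store three redundant copies

def noisy(codeword, p=0.1):
    # flip each bit independently with probability p
    return [b ^ (random.random() < p) for b in codeword]

def decode(codeword):
    return int(sum(codeword) >= 2)  # majority vote

trials = 100_000
failures = sum(decode(noisy(encode(0))) != 0 for _ in range(trials))
print(failures / trials)  # ~0.028, well below the raw 10% error rate
```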

As for the development of software and algorithms for quantum systems, there’s a long way to go, with a lack of mature tools and frameworks. Quantum algorithms require fundamentally different programming paradigms to those used for classical computers. Put simply, that’s why building reliable, real-world deployable quantum computers remains a grand challenge.

What does the future hold?

Despite the huge amount of work that still lies in store, quantum computers have already demonstrated some amazing potential. The US firm D-Wave, for example, claimed earlier this year to have carried out simulations of quantum magnetic phase transitions that wouldn’t be possible with the most powerful classical devices. If true, this was the first time a quantum computer had achieved “quantum advantage” for a practical physics problem (whether the problem was worth solving is another question).

There is also a lot of research and development going on around the world into solving the qubit stability problem. At some stage, there will likely be a breakthrough design for a robust and reliable quantum-computer architecture – and plenty of technical progress is probably happening right now behind closed doors.

The first real-world quantum computers will be akin to the giant classical supercomputers of the past. If you were around in the 1980s, you’ll remember Cray supercomputers: huge, inaccessible beasts owned by large corporations, government agencies and academic institutions, used to perform vast numbers of calculations (provided you had the money).

And, if I believe what I read, quantum computers will not replace classical computers, at least not initially, but work alongside them, as each has its own relative strengths. Quantum computers will be suited for specific and highly demanding computational tasks, such as drug discovery, materials science, financial modelling, complex optimization problems and increasingly large artificial intelligence and machine-learning models.

These are all things beyond the limits of classical computing resources. Classical computers will remain relevant for everyday tasks like web browsing, word processing and managing databases, and they will be essential for handling the data preparation, visualization and error correction required by quantum systems.

And there is one final point to mention, which is cyber security. Quantum computing poses a major threat to existing encryption methods, with the potential to undermine widely used public-key cryptography. There are concerns that hackers are already stockpiling stolen encrypted data in anticipation of future quantum decryption – a tactic dubbed “harvest now, decrypt later”.
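
RSA is the classic example of the public-key cryptography at risk, because its security rests on the difficulty of factoring large numbers – precisely the task that Shor’s algorithm, the canonical quantum attack, would speed up. The toy sketch below uses deliberately tiny primes and is purely illustrative (the modular-inverse call needs Python 3.8+):

```python
# Toy RSA with tiny primes: real keys use primes hundreds of digits long
p, q = 61, 53            # the secret primes
n = p * q                # public modulus (3233)
phi = (p - 1) * (q - 1)  # kept secret; needs the factors to compute
e = 17                   # public exponent
d = pow(e, -1, phi)      # private exponent (modular inverse, Python 3.8+)

message = 42
cipher = pow(message, e, n)  # anyone can encrypt with (e, n)
print(pow(cipher, d, n))     # 42 -- only the holder of d can decrypt

# Whoever can factor n back into p and q can recompute d. Classically
# that is infeasible at scale; Shor's algorithm would make it tractable
# on a large fault-tolerant quantum computer.
```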

Having looked into the topic, I can now see why the timeline for quantum computing is so fuzzy and why I got so many different answers when I asked people when the technology would be mainstream. Quite simply, I still can’t predict how or when the tech stack will pan out. But as IYQ draws to a close, the future for quantum computers is bright.

Is materials science the new alchemy for the 21st century?

For many years, I’ve been a judge for awards and prizes linked to research and innovation in engineering and physics. It’s often said that it’s better to give than to receive, and it’s certainly true in this case. But another highlight of my involvement with awards is learning about cutting-edge innovations I either hadn’t heard of or didn’t know much about.

One area that never fails to fascinate me is the development of new and advanced materials. I’m not a materials scientist – my expertise lies in creating monitoring systems for engineering – so I apologize for any over-simplification in what follows. But I do want to give you a sense of just how impressive, challenging and rewarding the field of materials science is.

It’s all too easy to take advanced materials for granted. We are in constant contact with them in everyday life, whether it’s through applications in healthcare, electronics and computing or energy, transport, construction and process engineering. But what are the most important materials innovations right now – and what kinds of novel materials can we expect in future?

Drivers of innovation

There are several drivers – all equally important – when it comes to materials development. One is the desire to improve the performance of products we’re already familiar with. A second is the need to develop more sustainable materials, whether that means replacing less environmentally friendly solutions or enabling new technology. Third, there’s the drive for novel developments, which is where some of the most ground-breaking work is occurring.

On the environmental front, we know that there are many products with components that could, in principle, be recycled. However, the reality is that many products end up in landfill because of how they’ve been constructed. I was recently reminded of this conundrum when I heard a research presentation about the difficulties of recycling solar panels.

Green problem Solar panels often fail to be recycled at the end of their life, despite containing reusable materials. (Courtesy: iStock/Milos Muller)

Photovoltaic cells become increasingly inefficient with time and most solar panels aren’t expected to last more than about 30 years. Trouble is, solar panels are so robustly built that recycling them requires specialized equipment and processes. More often than not, solar panels just get thrown away despite mostly containing reusable materials such as glass, plastic and metals – including aluminium and silver.

It seems ironic that solar panels, which enable sustainable living, could also contribute significantly to landfill. In fact, the problem could escalate rapidly if left unaddressed. There are already an estimated 1.8 million solar panels in use in the UK, and potentially billions around the world, with a rapidly increasing install base. Making solar panels more sustainable is surely a grand challenge in materials science.

Waste not, want not

Another vital issue concerns our addiction to new tech, which means we rarely hang on to objects until the end of their life; I mean, who hasn’t been tempted by a shiny new smartphone even though the old one is perfectly adequate? That urge for new objects means we need more materials and designs that can be readily re-used or recycled, thereby reducing waste and resource depletion.

As someone who works in the aerospace industry, I know first-hand how companies are trying to make planes more fuel efficient by developing composite materials that are stronger and can survive higher temperatures and pressures – for example carbon fibre and ceramic matrix composites. The industry also uses “additive manufacturing” to enable more intricate component design with less resultant waste.

Plastics are another key area of development. Many products are made from single-type, recyclable materials, such as polyethylene or polypropylene, which benefit from being light, durable and capable of withstanding chemicals and heat. Trouble is, while polyethylene and polypropylene can be recycled, both break down into the tiny “microplastics” that, as we know all too well, are not good news for the environment.

Sustainable challenge Materials scientists will need to find practical bio-based alternatives to conventional plastics to avoid polluting microplastics entering the seas and oceans. (Courtesy: iStock/Dmitriy Sidor)

Bio-based materials are becoming more common for everyday items. Think about polylactic acid (PLA), which is a plant-based polymer derived from renewable resources such as cornstarch or sugar cane. Typically used for food or medical packaging, it’s usually said to be “compostable”, although this is a term we need to view with caution.

Sadly, PLA does not degrade readily in natural environments or landfill. To break it down, you need high-temperature, high-moisture industrial composting facilities. So whilst PLA comes from natural plants, it is not straightforward to recycle, which is why single-use disposable items, such as plastic cutlery, drinking straws and plates, are no longer permitted to be made from it.

Thankfully, we’re also seeing greater use of more sustainable, natural fibre composites, such as flax, hemp and bamboo (have you tried bamboo socks or cutlery?). All of which brings me to an interesting urban myth: that in 1941 Henry Ford, founder of the legendary US car manufacturer, built a car apparently made entirely of a plant-based plastic – dubbed the “soybean” car (see box).

The soybean car: fact or fiction?

Crazy or credible? Soybean car frame patent signed by Henry Ford and Eugene Turenne Gregorie. (Courtesy: Image in public domain)

Henry Ford’s 1941 “soybean” car, said to have been built entirely of a plant-based plastic, was apparently motivated by a need to make vehicles lighter (and therefore more fuel efficient), less reliant on steel (which was in high demand during the Second World War) and safer too. The exact ingredients of the plastic are, however, not known, since no records were kept.

Speculation is that it was a combination of soybeans, wheat, hemp, flax and ramie (a kind of flowering nettle). Lowell Overly, a Ford designer who had major involvement in creating the car, said it was “soybean fibre in a phenolic resin with formaldehyde used in the impregnation”. Despite being a mix of natural and synthetic materials – and not entirely made of soybeans – the car was nonetheless a significant advancement for the automotive industry more than eight decades ago.

Avoiding the “solar-panel trap”

So what technology developments do we need to take materials to the next level? The key will be to avoid what I call the “solar-panel trap” and find materials that are sustainable from cradle to grave. We have to create an environmentally sustainable economic system based on the reuse and regeneration of materials or products – what some dub the “circular economy”.

Sustainable composites will be essential. We’ll need composites that can be easily separated, such as adhesives that dissolve in water or a specific solvent, so that we can cleanly, quickly and cheaply recover valuable materials from complex products. We’ll also need recycled composites, using recycled carbon fibre, or plastic combined with bio-based resins made from renewable sources like plant-based oils, starches and agricultural waste (rather than fossil fuels).

Vital too will be eco-friendly composites that combine sustainable composite materials (such as natural fibres) with bio-based resins. In principle, these could be used to replace traditional composite materials and to reduce waste and environmental impact.

Another important trend is developing novel metals and complex alloys. As well as enhancing traditional applications, these are addressing future requirements for what may become commonplace applications, such as wide-scale hydrogen manufacture, transportation and distribution.

Soft and stretchy

Then there are “soft composites”. These are advanced, often biocompatible materials that combine softer, rubbery polymers with reinforcing fibres or nanoparticles to create flexible, durable and functional materials that can be used for soft robotics, medical implants, prosthetics and wearable sensors. These materials can be engineered for properties like stretchability, self-healing, magnetic actuation and tissue integration, enabling innovative and patient-friendly healthcare solutions.

Medical magic Wearable electronic materials could transform how we monitor human health. (Courtesy: Shutterstock/Guguart)

And have you heard of e-textiles, which integrate electronic components into everyday fabrics? These materials could be game-changing for healthcare applications by offering wearable, non-invasive monitoring of physiological information such as heart rate and respiration.

Further applications could include advanced personal protective equipment (PPE), smart bandages and garments for long-term rehabilitation and remote patient care. Smart textiles could revolutionize medical diagnostics, therapy delivery and treatment by providing personalized digital healthcare solutions.

Towards “new gold”

I realize I have only scratched the surface of materials science – an amazing cauldron of ideas where physics, chemistry and engineering work hand in hand to deliver groundbreaking solutions. It’s a hugely important discipline. With far greater success than the original alchemists, materials scientists are adept at creating the “new gold”.

Their discoveries and inventions are making major contributions to our planet’s sustainable economy through the design, deployment and decommissioning of everyday items, as well as finding novel solutions that will positively impact the way we live today. Surely it’s an area we should celebrate and, as physicists, become more closely involved in.

Garbage in, garbage out: why the success of AI depends on good data

Artificial intelligence (AI) is fast becoming the new “Marmite”. Like the salty spread that polarizes taste-buds, you either love AI or you hate it. To some, AI is miraculous, to others it’s threatening or scary. But one thing is for sure – AI is here to stay, so we had better get used to it.

In many respects, AI is very similar to other data-analytics solutions in that how it works depends on two things. One is the quality of the input data. The other is the integrity of the user to ensure that the outputs are fit for purpose.

Previously a niche tool for specialists, AI is now widely available for general-purpose use, in particular through generative AI (GenAI) tools. Built on large language models (LLMs), these are now accessible through, for example, OpenAI’s ChatGPT, Microsoft Copilot, Anthropic’s Claude, Adobe Firefly or Google Gemini.

GenAI has become possible thanks to the availability of vast quantities of digitized data and significant advances in computing power. Models of this size, based on neural networks, would have been impossible without these two fundamental ingredients.

GenAI is incredibly powerful when it comes to searching and summarizing large volumes of unstructured text. It exploits unfathomable amounts of data and is getting better all the time, offering users significant benefits in terms of efficiency and labour saving.

Many people now use it routinely for writing meeting minutes, composing letters and e-mails, and summarizing the content of multiple documents. AI can also tackle complex problems that would be difficult for humans to solve, such as climate modelling, drug discovery and protein-structure prediction.

I’d also like to give a shout out to tools such as Microsoft Live Captions and Google Translate, which help people from different locations and cultures to communicate. But like all shiny new things, AI comes with caveats, which we should bear in mind when using such tools.

User beware

LLMs, by their very nature, have been trained on historical data. They can’t therefore tell you exactly what may happen in the future, or indeed what may have happened since the model was originally trained. Models can also be constrained in their answers.

Take the Chinese AI app DeepSeek. When the BBC asked it what had happened at Tiananmen Square in Beijing on 4 June 1989 – when Chinese troops cracked down on protestors – the chatbot’s answer was suppressed. Now, this is a very obvious piece of information control, but subtler instances of censorship will be harder to spot.

We also need to be conscious of model bias. At least some of the training data will probably come from social media and public chat forums such as X, Facebook and Reddit. Trouble is, we can’t know all the nuances of the data that models have been trained on – or the inherent biases that may arise from this.

One example of unfair gender bias was when Amazon developed an AI recruiting tool. Based on 10 years’ worth of CVs – mostly from men – the tool was found to favour men. Thankfully, Amazon ditched it. But then there was Apple’s credit-card algorithm, which reportedly gave men higher credit limits than women with similar credit ratings.
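
How easily this happens can be shown with a toy model, using entirely hypothetical data of my own invention – nothing to do with Amazon’s or Apple’s actual systems. A naive “model” that simply learns historical hiring rates reproduces whatever bias the history contains:

```python
# Garbage in, garbage out: a toy model trained on biased history
import random
random.seed(0)

def make_record():
    group = random.choice(["A", "B"])
    skill = random.random()
    # Biased past: group B was hired less often at the same skill level
    hired = skill > (0.5 if group == "A" else 0.7)
    return group, hired

history = [make_record() for _ in range(10_000)]

# A naive "model" that just learns each group's historical hire rate
rate = {g: sum(h for grp, h in history if grp == g)
         / sum(1 for grp, _ in history if grp == g)
        for g in ("A", "B")}
print(rate)  # roughly {'A': 0.5, 'B': 0.3} -- the bias is learned, not questioned
```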

Another problem with AI is that it sometimes acts as a black box, making it hard for us to understand how, why or on what grounds it arrived at a certain decision. Think about those Captcha tests we have to take when accessing online accounts. They often present us with a street scene and ask us to select the parts of the image containing a traffic light.

The tests are designed to distinguish between humans and computers or bots – the expectation being that AI can’t consistently recognize traffic lights. However, AI-based advanced driver assist systems (ADAS) presumably perform this function seamlessly on our roads. If not, surely drivers are being put at risk?

A colleague of mine, who drives an electric car that happens to share its name with a well-known physicist, confided that the ADAS in his car becomes unresponsive, especially when at traffic lights with filter arrows or multiple sets of traffic lights. So what exactly is going on with ADAS? Does anyone know?

Caution needed

My message when it comes to AI is simple: be careful what you ask for. Many GenAI applications will store user prompts and conversation histories, and will likely use this data for training future models. Once you enter your data, there’s no guarantee it’ll ever be deleted. So think carefully before sharing any personal data, such as medical or financial information. It also pays to keep prompts non-specific (avoiding using your name or date of birth) so that they cannot be traced directly to you.
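
As a practical illustration of keeping prompts non-specific, here’s a minimal sketch that scrubs a few obvious personal details before a prompt leaves your machine. The regex patterns are illustrative assumptions only – real redaction needs far more care:

```python
# Scrub obvious personal details from a prompt before sending it off
import re

def redact(prompt: str) -> str:
    prompt = re.sub(r"\b\d{2}/\d{2}/\d{4}\b", "[DATE]", prompt)         # dates, e.g. birthdays
    prompt = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]", prompt)  # e-mail addresses
    prompt = re.sub(r"\b(?:\d[ -]?){13,19}\b", "[CARD]", prompt)        # card-like numbers
    return prompt

print(redact("I was born 01/02/1990; reach me at jo@example.com."))
# -> "I was born [DATE]; reach me at [EMAIL]."
```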

Democratization of AI is a great enabler and it’s easy for people to apply it without an in-depth understanding of what’s going on under the hood. But we should be checking AI-generated output before we use it to make important decisions and we should be careful of the personal information we divulge.

It’s easy to become complacent when we are not doing all the legwork. We are reminded under the terms of use that “AI can make mistakes”, but I wonder what will happen if models start consuming AI-generated erroneous data. Just as with other data-analytics problems, AI suffers from the old adage of “garbage in, garbage out”.

But sometimes I fear it’s even worse than that. We’ll need a collective vigilance to avoid AI being turned into “garbage in, garbage squared”.
