
When is good enough ‘good enough’?

1 December 2025, 12:00

Whether you’re running a business project, carrying out scientific research, or doing a spot of DIY around the house, knowing when something is “good enough” can be a tough question to answer. To me, “good enough” means something that is fit for purpose. It’s about striking a balance between the effort required to achieve perfection and the cost of not moving forward. It’s an essential mindset when perfection is either not needed or – as is often the case – not attainable.

When striving for good enough, the important thing to focus on is that your outcome should meet expectations, but not massively exceed them. Sounds simple, but how often have we heard people say they’re “polishing coal”, delivering something “gold plated” or “trying to make a silk purse out of a sow’s ear”? Such phrases usually mean the requirements of the end goal haven’t been understood, defined or even accepted.

Trouble is, as we go through school, college and university, we’re brought up to believe that we should strive for the best in whatever we study. Those with the highest grades, we’re told, will probably get the best opportunities and career openings. Unfortunately, this approach means we think we need to aim for perfection in everything in life, which is not always a good thing.

How to be good enough

So why is aiming for “good enough” a good thing to do? First, there’s the notion of “diminishing returns”. It takes a disproportionate amount of effort to achieve the final, small improvements that most people won’t even notice. Put simply, time can be wasted on unnecessary refinements, as embodied by the 80/20 rule (see box).

The 80/20 rule: the guiding principle of “good enough”

Also known as the Pareto principle – in honour of the Italian economist Vilfredo Pareto, who first came up with the idea – the 80/20 rule states that for many outcomes, 80% of consequences or results come from 20% of the causes or effort. The principle helps to identify where to prioritize activities to boost productivity and get better results. It is a guideline, and the ratios can vary, but it can be applied to many things in both our professional and personal lives.

Examples from the world of business include the following (a quick numerical sketch follows the list):

Business sales: 80% of a company’s revenue might come from 20% of its customers.

Company productivity: 80% of your results may come from 20% of your daily tasks.

Software development: 80% of bugs could be caused by 20% of the code.

Quality control: 20% of defects may cause 80% of customer complaints.
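To make the arithmetic concrete, here is a minimal sketch in Python, using made-up revenue figures purely for illustration, that finds how few customers account for 80% of a company’s revenue:

```python
# Toy illustration of the 80/20 rule with invented revenue figures.
revenue_per_customer = [200, 120, 30, 12, 10, 8, 7, 6, 4, 3]

total = sum(revenue_per_customer)
running = 0
customers = 0
for revenue in sorted(revenue_per_customer, reverse=True):
    running += revenue
    customers += 1
    if running >= 0.8 * total:  # stop once 80% of revenue is covered
        break

share = 100 * customers / len(revenue_per_customer)
print(f"{customers} of {len(revenue_per_customer)} customers "
      f"({share:.0f}%) generate {100 * running / total:.0f}% of revenue")
# Output: 2 of 10 customers (20%) generate 80% of revenue
```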

Good enough also helps us to focus efforts. When a consumer or customer doesn’t know exactly what they want, or a product development route is uncertain, it can be better to deliver things in small chunks. Providing something basic but usable lets you solicit feedback, which helps clarify requirements and identifies improvements or additions to incorporate into the next chunk. This is broadly along the lines of a “minimum viable product”.

Not seeking perfection reminds us too that solutions to problems are often uncertain. If it’s not clear how, or even if, something might work, a proof of concept (PoC) can be a good way to try something out. Progress can be made by solving a specific technical challenge, whether via a basic experiment, demonstration or short piece of research. A PoC should help avoid committing significant time and resources to something that will never work.

Aiming for “good enough” naturally leads us to the notion of “continuous improvement”. It’s a personal favourite of mine because it allows for things to be improved incrementally as we learn or get feedback, rather than producing something in one go and then forgetting about it. It helps keep things current and relevant and encourages a culture of constantly looking for a better way to do things.

Finally, when searching for good enough, don’t forget the idea of ballpark estimates. Making approximations sounds too simple to be effective, but sometimes a rough estimate is really all you need. If an approximate guess can inform and guide your next steps or determine whether further action will be necessary, then go for it (see the sketch below).
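As a trivial DIY illustration, with all figures invented for the purpose, a back-of-envelope calculation is often enough to decide the next step, such as how much paint to buy:

```python
# Ballpark estimate: is one 5-litre can of paint enough for a room?
# All figures are rough, invented assumptions for illustration.
wall_area = 4 * (5.0 * 2.4)               # four walls, ~5 m wide, ~2.4 m high
coverage = 12.0                           # m^2 per litre, a typical can-label figure
litres_needed = 2 * wall_area / coverage  # two coats

print(f"Roughly {litres_needed:.0f} litres needed")  # ~8 litres: buy two cans
```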

The benefits of good enough

Being good enough doesn’t just lead to practical outcomes; it can benefit our personal well-being too. Our time, after all, is a precious commodity and we can’t magically increase this resource. The pursuit of perfection can lead to stagnation, and ultimately burnout, whereas achieving good enough allows us to move on in a timely fashion.

A good-enough approach will even make you less stressed. By getting things done sooner and achieving more, you’ll feel freer and happier about your work even if it means accepting imperfection. Mistakes and errors are inevitable in life, so don’t be afraid to make them; use them as learning opportunities, rather than seeing them as something bad. Remember – the person who never made a mistake never got out of bed.

Recognizing that you’ve done the best you can for now is also crucial for starting new projects and making progress. By accepting good enough you can build momentum, get more things done, and consistently take actions toward achieving your goals.

Finally, good enough is also about shared ownership. By inviting someone else to look at what you’ve done, you can significantly speed up the process. In my own career I’ve often found myself agonising over some obscure detail or feeling something is missing, only to have my quandary solved almost instantly simply by getting someone else involved – making me wish I’d asked them sooner.

Caveats and conclusions

Good enough comes with some caveats. Regulatory or legislative requirements mean there will always be projects that have to reach a minimum standard, and meeting that standard must be your top priority. The precise nature of good enough will also depend on whether you’re making stuff (be it cars or computers) or dealing with intangible commodities such as software or services.

So what’s the conclusion? Well, in the interests of my own time, I’ve decided to apply the 80/20 rule and leave it to you to draw your own conclusion. As far as I’m concerned, I think this article has been good enough, but I’m sure you’ll let me know if it hasn’t. Consider it a minimum viable product that I can update in a future column.


Quantum computing: hype or hope?

3 November 2025, 15:00

Unless you’ve been living under a stone, you can’t have failed to notice that 2025 marks the first 100 years of quantum mechanics. A massive milestone, to say the least, about which much has been written in Physics World and elsewhere during the International Year of Quantum Science and Technology (IYQ). However, I’d like to focus on a specific piece of quantum technology, namely quantum computing.

I keep hearing about quantum computers, so people must be using them to do cool things, and surely they will soon be as commonplace as classical computers. But as a physicist-turned-engineer working in the aerospace sector, I struggle to get a clear picture of where things are really at. If I ask friends and colleagues when they expect to see quantum computers routinely used in everyday life, I get answers ranging from “in the next two years” to “maybe in my lifetime” or even “never”.

Before we go any further, it’s worth reminding ourselves that quantum computing relies on several key quantum properties, including superposition, which gives rise to the quantum bit, or qubit. The basic building block of a quantum computer, a qubit exists as a combination of the 0 and 1 states at the same time and is described by a probabilistic wave function. Classical computers, in contrast, use binary digital bits that are either 0 or 1.
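In the standard notation (textbook material rather than anything specific to this column), a general qubit state is written as

```latex
% A general single-qubit state: a superposition of |0> and |1>
\[
  \lvert\psi\rangle = \alpha\,\lvert 0\rangle + \beta\,\lvert 1\rangle,
  \qquad \alpha,\beta \in \mathbb{C},
  \qquad |\alpha|^{2} + |\beta|^{2} = 1
\]
% Measurement gives 0 with probability |alpha|^2 and 1 with probability |beta|^2.
```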

Also vital for quantum computers is the notion of entanglement, which is when two or more qubits are co-ordinated, allowing them to share their quantum information. In a highly correlated system, a quantum computer can explore many paths simultaneously. This “massive scale” parallel processing is how quantum computers may solve certain problems exponentially faster than a classical computer.
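The textbook example of entanglement, again standard material rather than anything from this column, is the two-qubit Bell state:

```latex
% The Bell state: the simplest maximally entangled two-qubit state
\[
  \lvert\Phi^{+}\rangle = \frac{1}{\sqrt{2}}\bigl(\lvert 00\rangle + \lvert 11\rangle\bigr)
\]
% Measuring either qubit immediately fixes the other: the pair is always
% found as 00 or 11, each with probability 1/2.
```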

The other key phenomenon for quantum computers is quantum interference. The wave-like nature of qubits means that when different probability amplitudes are in phase, they combine constructively to increase the likelihood of the right solution. Conversely, destructive interference occurs when amplitudes are out of phase, making it less likely to get the wrong answer.

Quantum interference is important in quantum computing because it allows quantum algorithms to amplify the probability of correct answers and suppress incorrect ones, making calculations much faster. Along with superposition and entanglement, it means that quantum computers could process and store vast numbers of probabilities at once, outstripping even the best classical supercomputers.
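A simple way to see interference at work, sketched here with plain NumPy linear algebra rather than any quantum library, is to apply the Hadamard gate twice: the two paths leading to the 1 outcome cancel exactly, returning the qubit to 0 with certainty.

```python
import numpy as np

# Hadamard gate: puts |0> into an equal superposition of |0> and |1>
H = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)

ket0 = np.array([1.0, 0.0])   # the |0> state as an amplitude vector

once = H @ ket0               # amplitudes (1/sqrt(2), 1/sqrt(2))
twice = H @ once              # the two amplitudes interfere

print(np.abs(once) ** 2)      # [0.5 0.5]: both outcomes equally likely
print(np.abs(twice) ** 2)     # [1. 0.]: the |1> paths cancel (destructive),
                              # the |0> paths reinforce (constructive)
```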

Towards real devices

To me, it all sounds exciting, but what have quantum computers ever done for us so far? It’s clear that quantum computers are not ready to be deployed in the real world. Significant technological challenges need to be overcome before they become fully realisable. In any case, no-one is expecting quantum computers to displace classical computers “like for like”: they’ll both be used for different things.

Yet it seems that the very essence of quantum computing is also its Achilles heel. Superposition, entanglement and interference – the quantum properties that will make it so powerful – are also incredibly difficult to create and maintain. Qubits are extremely sensitive to their surroundings and easily lose their quantum state through interactions with the environment, whether via stray particles, electromagnetic fields or thermal fluctuations. This loss, known as decoherence, makes quantum computers prone to error.

That’s why quantum computers need specialized – and often cryogenically controlled – environments to maintain the quantum states necessary for accurate computation. Building a quantum system with lots of interconnected qubits is therefore a major, expensive engineering challenge, with complex hardware and extreme operating conditions. Developing “fault-tolerant” quantum hardware and robust error-correction techniques will be essential if we want reliable quantum computation.

As for the development of software and algorithms for quantum systems, there’s a long way to go, with a lack of mature tools and frameworks. Quantum algorithms require fundamentally different programming paradigms to those used for classical computers. Put simply, that’s why building reliable, real-world deployable quantum computers remains a grand challenge.

What does the future hold?

Despite the huge amount of work that still lies in store, quantum computers have already demonstrated some amazing potential. The US firm D-Wave, for example, claimed earlier this year to have carried out simulations of quantum magnetic phase transitions that wouldn’t be possible with the most powerful classical devices. If true, this was the first time a quantum computer had achieved “quantum advantage” for a practical physics problem (whether the problem was worth solving is another question).

There is also a lot of research and development going on around the world into solving the qubit stability problem. At some stage, there will likely be a breakthrough design for robust and reliable quantum computer architecture. There is probably a lot of technical advancement happening right now behind closed doors.

The first real-world deployments of quantum computers will be akin to the giant classical supercomputers of the past. If you were around in the 1980s, you’ll remember Cray supercomputers: huge, inaccessible beasts owned by large corporations, government agencies and academic institutions, which performed vast numbers of calculations (provided you had the money).

And, if I believe what I read, quantum computers will not replace classical computers, at least not initially, but work alongside them, as each has its own relative strengths. Quantum computers will be suited for specific and highly demanding computational tasks, such as drug discovery, materials science, financial modelling, complex optimization problems and increasingly large artificial intelligence and machine-learning models.

These are all things beyond the limits of classical computing resources. Classical computers will remain relevant for everyday tasks like web browsing, word processing and managing databases, and they will be essential for handling the data preparation, visualization and error correction required by quantum systems.

And there is one final point to mention: cyber security. Quantum computing poses a major threat to existing encryption methods, with the potential to undermine widely used public-key cryptography. There are concerns that hackers are already stockpiling stolen encrypted data in anticipation of future quantum decryption (a strategy sometimes called “harvest now, decrypt later”).

Having looked into the topic, I can now see why the timeline for quantum computing is so fuzzy and why I got so many different answers when I asked people when the technology would be mainstream. Quite simply, I still can’t predict how or when the tech stack will pan out. But as IYQ draws to a close, the future for quantum computers is bright.

