Nobel prizes you’ve never heard of: how a Swedish inventor was honoured for a technology that nearly killed him

2 October 2025 at 10:00
Black-and-white photograph of Nils Gustaf Dalén. He's wearing an old-fashioned high-collared shirt and has a large, bushy moustache.
Nils Gustaf Dalén. (Courtesy: Nobel Foundation archive)

The winner of the 1912 Nobel Prize for Physics was, by some margin, the unlikeliest physics Nobel laureate in history. He wasn’t a physicist, for starters. He wasn’t even a chemist. He was an inventor by the name of Nils Gustaf Dalén, and the invention that won him the prize was closely connected – in more ways than one – to the industrial accident that almost cost him his life.

To understand why members of the Royal Swedish Academy of Sciences plumped for Dalén in 1912 over his more famous contemporaries (including such luminaries as Max Planck and Albert Einstein), it helps to know a bit about the man himself. Like Alfred Nobel, Dalén was Swedish, born in 1869 in the small farming community of Stenstorp. Located around 140 km north-east of Gothenburg, Stenstorp is now home to a museum in Dalén’s honour, but as a young man, he did not seem like museum material. On the contrary, he was incredibly lazy – so lazy, in fact, that he invented a machine to make coffee and turn the light on for him in the mornings.

This ingenious device brought Dalén some local notoriety, but his big break came when Sweden’s most famous inventor at the time, Gustaf de Laval, saw him demonstrate a device for measuring milk fat content. Encouraged by de Laval to attend university, Dalén sold his family’s farm and enrolled at what is now the Chalmers University of Technology. In 1896 he earned his doctorate, and after a year at ETH Zürich in Switzerland, he returned to Sweden to set up his first engineering firm.

A light in the darkness

The engineering challenge that set Dalén on the path to the Nobel was hugely important in a country like Sweden with a long, complex coastline. Years before the advent of GPS, or even reliable radio communications, lighthouses were the main way of warning ships away from danger. However, they were extremely expensive and hard to maintain. As well as needing 24-hour attention from skilled and hardy humans, they required huge amounts of propane fuel, necessitating frequent (and frequently dangerous) resupply trips.

The obvious way of reducing these costs was to make lighthouses burn something else. Acetylene was attractive because it could be manufactured in industrial quantities, and it produced a bright light when burned. Unfortunately, it was also highly explosive, meaning it couldn’t be safely bottled or shipped.

To tame the acetylene dragon, Dalén developed three separate inventions. The first was a combination of asbestos and diatomaceous earth that Dalén called “agamassan” after his company (Aktiebolaget Gasaccumulator) and the Swedish word for compound, massan. By filling a container with agamassan, wetting it with acetone and then forcing acetylene into the container under pressure, Dalén showed that the acetylene would dissolve in the acetone and become trapped within the agamassan like water in a sponge. Under these conditions, it could be shipped, stored and even dropped without exploding.

Having made acetylene safe to use, Dalén turned his hand to making it economical. His second invention was a device that automatically turned the acetylene supply on and off. This saved fuel and enabled the light to flash (distinguishing it from other light sources on the shore) without the need for cumbersome rotation mechanisms.

Photo of a lighthouse on a small rock in a bay with the coastline clearly visible close behind
Let there be light The first light designed to use Gustaf Dalén’s technology is located near Djurgården in Stockholm, Sweden. It has since been converted to run on electricity. (Courtesy: Holger Ellgard, CC BY-SA 3.0)

Dalén’s third invention enabled even greater automation. Rather than relying on human lighthouse-keepers to switch acetylene burners on at night and off in the morning, Dalén developed a valve that could do it automatically. This valve worked by means of a set of metal rods, one of which was blackened while the others were polished. When the blackened rod absorbed enough heat from the Sun, it would expand and close the valve. At dusk, or in foggy conditions, the blackened rod would return to the temperature of the others, contract, and open the valve.

Choosing a laureate

While Dalén was perfecting the use of acetylene gas for lighthouses, the Nobel Committee for Physics was getting on with its usual business of recommending candidates for the prize. In 1909 the committee suggested the radio pioneer Guglielmo Marconi and his academic counterpart Karl Ferdinand Braun. The wider Academy accepted this choice. In 1910 the committee recommended the father of modern molecular science, Johannes Diderik van der Waals, and he also won the Academy’s approval. In 1911 the quantum theorist Wilhelm Wien, whose joint nomination with Max Planck in 1908 provoked such bitter disputes that neither of them got the prize, finally got the nod from both the committee and the Academy (Planck would have to wait for his prize until 1918).

By the early autumn of 1912, there was every indication that the Academy would again accept the committee’s recommendation: Heike Kamerlingh Onnes, who had liquefied helium for the first time in 1908 and subsequently used it to discover superconductivity. Although Dalén had also been nominated, Mats Larsson, a physicist at Stockholm University who served on the committee between 2016 and 2023, says he wasn’t a serious contender.

“It’s clear from the report from the Nobel committee to the Academy that they recognize there is an importance to Dalén’s inventions, but it doesn’t reach the standard for a Nobel prize,” says Larsson. With only a single nomination from a member of the Academy’s technical section, Larsson adds, “Dalén is not even on the shortlist.”

An industrial accident

Then, before the Academy could vote, tragedy struck. On 27 September 1912, during an experiment so risky it was performed in a quarry rather than in Aktiebolaget Gasaccumulator’s Stockholm factory, an explosion left Dalén seriously injured. The next day, Sweden’s national paper of record, Dagens Nyheter, put the accident on its front page, describing Dalén’s face as “unrecognizable” and his right side as “horribly massacred and burned”. Though conscious and talking when taken to hospital, he was not expected to survive.

Gustaf Dalén and his wife Elma arm in arm
A devoted couple Gustaf Dalén and his wife Elma outside their home in 1937. (Courtesy: Wikimedia Commons)

Nobel prizes cannot be awarded posthumously. If Dalén had died of his injuries, it is unlikely that his colleagues would have voted to honour him. But though Dalén’s doctors could not save his eyesight, they did save his life. By the time the Academy convened to vote on the 1912 Nobel prizes, he was recovering in the care of his family and very much on the minds of his sympathetic colleagues.

We don’t know exactly what happened next. “The material [in the Nobel archives] is very meagre,” Larsson explains. “It just says there was a vote and Dalén won the prize.”

Still, it’s easy to imagine that someone in the Academy must have pled Dalén’s cause. “This is our national hero who fought the war against ignorance and against darkness,” agrees Karl Grandin, who directs the Academy’s Center for History of Science. “And he loses his sight in the purpose of bringing light to the world. It was a symbolic thing.”

Dalén’s most enduring invention

Dalén was too unwell to attend the usual Nobel prize celebrations in Stockholm. Instead, he sent his brother, a physician, to accept the prize on his behalf. Eventually, though, he recovered well enough to resume his duties at Aktiebolaget Gasaccumulator. In time, he even returned to inventing. And herein lies the final twist in his story.

During his convalescence, the blind Dalén noticed something that had apparently escaped his attention when he could still see. His wife, Elma, worked very hard around the house, and cooking for him and their four children was especially tiresome. It would be much easier, Dalén decided, if she had a device that could cook several dishes at once, at different temperatures.

In 1922, ten years after losing his sight and winning the Nobel prize, Dalén unveiled the invention that would become his most enduring. Named, like agamassan, after the initials of his company, the AGA cooker is still sold today, bringing warmth to kitchens just as its inventor brought safe, effective and economical illumination to lighthouses. Gustaf Dalén may be the least likely physics Nobel laureate in history, but it would be facile to dismiss him as undeserving. After all, how many other physics laureates can boast of saving hundreds of thousands of lives at sea, while also relieving the drudgery of hundreds of thousands back home?

Nobel prizes you’ve never heard of: how an obscure version of colour photography beat quantum theory to the most prestigious prize in physics

1 October 2025 at 13:00
Black-and-white photo of Gabriel Lippmann. He's dressed formally, in a suit with a bow tie tucked beneath the collar, and he's wearing round spectacles. He has a large moustache with pointy, waxed ends.
Gabriel Lippmann. (Courtesy: Nobel Foundation archive)

By the time Gabriel Lippmann won the Nobel Prize for Physics, his crowning scientific achievement was already obsolete – and he probably knew it. Four days after receiving the 1908 prize “for his method of reproducing colours photographically based on the phenomenon of interference”, Lippmann, a Frenchman with a waxed moustache that would shame a silent film villain, ended his Nobel lecture with the verbal equivalent of a Gallic shrug.

After nearly 20 years of work, he admitted, the minimum exposure time for his method – one minute in full sunlight – was still “too long for the portrait”. Though further improvements were possible, he concluded, “Life is short and progress is slow.”

Why did Lippmann win a Nobel prize for a method that not even he seemed to believe in? It certainly wasn’t for a lack of alternatives. The early 1900s were a heady time for physics discoveries and inventions, and other Nobels of the era reflect this. In 1906 the Royal Swedish Academy of Sciences awarded the physics prize to J J Thomson for discovering the electron. In 1907 its members voted for Albert Michelson of the aether-defying Michelson–Morley experiment. So what made the Academy choose, in 1908, a version of colour photography that wouldn’t even let you take a selfie?

An elegant solution

Let’s start with the method itself. Unlike other imaging processes, Lippmann photography directly records the entire colour spectrum of an object. It does this by using standing waves of light to produce interference fringes in a light-sensitive emulsion backed by a mirrored surface. The longer the wavelength of light given off by the object, the larger the separation between the fringes. It’s an elegant application of classical wave theory. It’s easy to see why Edwardian-era physicists loved it.
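
As a rough guide to the numbers involved (a sketch of the standard standing-wave argument, not a quotation from Lippmann’s lecture), the fringe spacing d is set by the wavelength λ and the refractive index n of the emulsion:

\[
d = \frac{\lambda}{2n}, \qquad \lambda_{\mathrm{reflected}} = 2nd .
\]

For green light with λ ≈ 550 nm in an emulsion with n ≈ 1.5, the developed silver layers end up roughly 180 nm apart, and on re-illumination that layered structure reflects the original wavelength back most strongly.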

A photo of bright red flowers in a vase. The colours are very vivid
Capturing colour A still life taken by Lippmann using his method between 1890 and 1910. By the latter part of this period, the method had fallen out of favour, superseded by the simpler Autochrome process. (Photo in public domain)

Lippmann’s method also has an important practical advantage. Because his photographs don’t require pigments, they retain their colour over time. Consequently, the images Lippmann showed off in his Nobel lecture look as brilliant today as they did in 1908.

The method’s disadvantages, though, are numerous. As well as needing long exposure times, the colours in Lippmann photographs are hard to see. Because they are virtual, like a hologram, they are only accurate when viewed face-on, in perpendicular light. Lippmann’s original method also required highly toxic liquid mercury to make the mirrored back surface of each photographic plate. Though modern versions have eliminated this, it’s not surprising that Lippmann’s method is now largely the domain of hobbyists and artists.

A French connection

If technical merit can’t explain Lippmann’s Nobel, could it perhaps have been due to politics? The easiest way to answer this question is to look in the Nobel archives. Although the names of Nobel prize nominees and the people who nominated them are initially secret, this secrecy is lifted after 50 years. The nomination records for Lippmann’s era are therefore very much available, and they show that he was a popular candidate. Between 1901 and 1908, he received 23 nominations from 12 different people – including previous laureates, foreign members of the Academy, and scientists from prestigious universities invited to make nominations in specific years.

Funnily enough, though, all of them were French.

Faced with this apparent conspiracy to stamp the French tricolour on the Nobel medal, Karl Grandin, who directs the Academy’s Center for History of Science, concedes that such nationalistic campaigns were “quite common in the first years”. However, this doesn’t mean they were successful: “Sometimes when all the members of the French Academy have signed a nomination, it might be impressive at one point, but it might also be working in the opposite way,” he says.

A clash of personalities

Because Nobel Foundation statutes stipulate that discussions and vote numbers from the prize-awarding meeting of the Academy are not recorded, Grandin can’t say exactly how Lippmann came out on top in 1908. He does, however, have access to an illuminating article written in 1981 by a theoretical physicist, Bengt Nagel.

Drawing on the private letters and diaries of Academy members as well as the Nobel archives, Nagel showed that personal biases played a significant role in the awarding of the 1908 prize. It’s a complicated story, but the most important strand of it centres on Svante Arrhenius, the Swedish physical chemist who’d won the Nobel Prize for Chemistry five years earlier.

Today, Arrhenius is best known for predicting that putting carbon dioxide in the Earth’s atmosphere will affect the climate. In his own lifetime, though, Grandin says that Arrhenius was also known for having a long-running personality conflict with a wealthy Swedish mathematician called Gustaf Mittag-Leffler.

“Stockholm at the time was a small place,” Grandin explains. “Everyone knew each other, and it wasn’t big enough to host both Arrhenius and Mittag-Leffler.”

Arrhenius and Mittag-Leffler
Double trouble The feuding between Svante Arrhenius (left) and Gustaf Mittag-Leffler played a major role in Gabriel Lippmann winning the Nobel Prize for Physics. (Images in public domain)

Arrhenius wasn’t the chair of the Nobel physics committee in 1908. That honour fell to Knut Ångström, son of the Ångström after whom the unit is named. Still, Arrhenius’ prestige and outsized personality gave him considerable influence. After much debate, the committee agreed to recommend his preferred choice for the prize, Max Planck, to the full Academy.

This choice, however, was not problem-free. Planck’s theory of the quantization of energy was still relatively new in 1908, and his work was not demonstrably guiding experiments. If anything, it was the other way around. In principle, the committee could have dealt with this by recommending that Planck share the prize with a quantum experimentalist. Unfortunately, no such person had been nominated.

That was awkward, and it gave Mittag-Leffler the ammunition he needed. When the matter went to the Academy for a vote, he used members’ doubts about quantum theory to argue against Arrhenius’ choice. It worked. In Mittag-Leffler’s telling, Planck got only 13 votes. Lippmann, the committee’s second choice, got 46.

A consensus winner

Afterwards, Mittag-Leffler boasted about his victory. “Arrhenius wanted to give it to Planck…but his report, which he had nevertheless managed to have unanimously accepted by the committee, was so stupid that I could easily have crushed it,” he wrote to a French colleague. “Two members even declared that after hearing me, they changed their opinion and voted for Lippmann. I would have had nothing against sharing the prize between [quantum theorist Wilhelm] Wien and Planck,” Mittag-Leffler added, “but to give it to Planck alone would have been to reward ideas that are still very obscure and require verification by mathematics and experimentation.”

A photo of the Matterhorn rising above an Alpine landscape. The colours are a little washed out, but do not appear artificially tinted
True colours A photo of the Matterhorn taken by Gabriel Lippmann between 1891 and 1899, using his then-newly-developed method of colour photography. The colours appear as they did in the original image, and were not added afterwards. (Image in public domain)

Lippmann’s work posed no such difficulties, and that seems to have swung it for him. In a letter to a colleague after the dust had settled, Ångström called Lippmann “obviously a prizeworthy candidate who did not give rise to any objections”. However, Ångström added, he “could not deny that the radiation laws constitute a more important advance in physical science than Lippmann’s colour photography”.

Much has been written about excellent scientists getting overlooked for prizes because of biases against them. The flip side of this – that merely good scientists sometimes win prizes because of biases in their favour – is usually left unacknowledged. Nevertheless, it happens, and in 1908 it happened to Gabriel Lippmann – a good scientist who won a Nobel prize not because he did the most important work, but because his friends clubbed together to support him; because Academy members were wary of his quantum rivals; and above all because a grudge-holding mathematician and an egotistical chemist had a massive beef with each other.

And then, four years later, it happened again, to someone else.

Ask me anything: Scott Bolton – ‘It’s exciting to be part of a team that’s seeing how nature works for the first time’

29 September 2025 at 12:00

What skills do you use every day in your job?

As a planetary scientist, I use mathematics, physics, geology and atmospheric science. But as the principal investigator of Juno, I also have to manage the Juno team, and interface with politicians, people at NASA headquarters and other administrators. In that capacity, I need to be able to talk about topics at various technical levels, because many of the people I’m speaking with are not actively researching planetary science. I need a broad range of skills, but one of the most important is to be able to recognize when I don’t have the right expertise and need to find someone who can help.

The surface of Jupiter
Pretty amazing Hurricane-like spiral wind patterns near Jupiter’s north pole as seen by NASA’s Juno mission, of which Scott Bolton is principal investigator. (Courtesy: NASA/JPL-Caltech/SwRI/MSSS/Gerald Eichstädt/Seán Doran)

What do you like best and least about your job?

I really love being part of a mission that’s discovering new information and new ideas about how the universe works. It’s exciting to be at the edge of something, where you are part of a team that’s seeing an image or an aspect of how nature works for the first time. The discovery element is truly inspirational. I also love seeing how a mixture of scientists with different expertise, skills and backgrounds can come together to understand something new. Watching that process unfold is very exciting to me.

Some tasks I like least are related to budget exercises, administrative tasks and documentation. Some government rules and regulations can be quite taxing and require a lot of time to ensure forms and documents are completed correctly. Occasionally, an urgent action item will appear that requires an immediate response, and I have to drop my current work to fit in the new task. As a result, my normal work gets delayed, and this can be frustrating. I consider it one of my main jobs to shelter the team from these extraneous tasks so they can get their work done.

What do you know today that you wish you’d known at the start of your career?

The most important thing I know now is that if you really believe in something, you should stick to it. You should not give up. You should keep trying, keep working at it, and find people who can collaborate with you to make it happen. Early on, I didn’t realize how important it was to combine forces with people who complemented my skills in order to achieve goals.

The other thing I wish I had known is that taking time to figure out the best way to approach a challenge, question or problem is beneficial to achieving one’s goals.  That was a very valuable lesson to learn. We should resist the temptation to rush into finding the answer – instead, it’s worthwhile to take the time to think about the question and develop an approach.

‘Father of the Internet’ Vint Cerf expresses concern about the longevity of digital information

22 September 2025 at 10:00

A few weeks ago, I experienced a classic annoyance of modern life: one of my computer games stopped working. The cause? An “update” to the emulator that translates old games into programs that today’s machines can execute. In my case, this update broke the translation process, and the tenuous thread of hardware and software connecting my laptop to the game’s 30-year-old code was severed.

For individuals, failures like this are irritating. But for the wider digital ecosystem, they’re a real problem – so much so, in fact, that Vint Cerf, who’s known as one of the “fathers of the Internet”, made them the subject of his talk at last week’s Heidelberg Laureate Forum (HLF) in Heidelberg, Germany.

“My big worry is that all this digital stuff won’t be there when we would like it to be there, or when our descendants would like to have it,” Cerf said.

How it used to work

Historically, the best ways of preserving information involved writing it on durable materials such as clay tablets, high-quality paper, or a form of animal skin known as vellum. These media, Cerf observed, “have one thing in common: they don’t require electricity to be stored and preserved.”

Digital media, in contrast, are much less robust. “Many of them are magnetic, and the magnetic material wears away after a while,” Cerf explained. Consequently, some old tapes are now so fragile that attempting to read them can actually lift the magnetic material off the surface: “You read it once and that’s it. It’s now transparent tape,” he said.

Being able to read data is just the beginning, though. As my broken computer game shows, you also need programs and equipment that can persuade those data to do things. “That’s often the thing that goes first,” Cerf told me in a press conference after his talk. For example, when Cerf recently tried to retrieve data from an old three-and-a-half-inch floppy disk, he discovered that doing so would require three additional components: a drive that could read the disk, a program that could open the files stored on the disk and an old computer that could run the program. “I needed a whole lot of software help and several stages in order to make that digital content useful,” Cerf said.

Creating ‘digital vellum’

As for how to fix this problem and create a digital version of vellum, Cerf, who has been the “Chief Internet Evangelist” at Google since 2005, listed three ideas that he finds interesting. The first involves a New Jersey, US-based company called SPhotonix that does research and development work in the UK and Switzerland. It’s using lasers to write bits of data into chunks of quartz crystal, which is a very long-lasting medium. However, each crystal is roughly the size of a hockey puck, and Cerf thinks that “real work” still needs to be done to organize the information the material holds.

The second idea is partly inspired by the clay tablets that proved so successful at preserving cuneiform writing from ancient Mesopotamia. Cerabyte, a start-up with facilities in Austria, Germany and the US, has developed a ceramic material that its founders claim could “store all data virtually forever”.

The third idea, and the one that seems to appeal most to Cerf, is to write digital information into DNA. That might sound like an inherently fragile medium, but as Cerf pointed out, “It’s actually a very robust molecule – otherwise, life wouldn’t have persisted for several billion years.” Provided you dehydrate the DNA first, he added, it lasts for “quite a long time”.

The question of how to read such information is not an easy one, and Cerf doesn’t have an answer to it. He is, however, hopeful that someone will find one. At the HLF, where he is such a revered figure that even the journalists want to take photos with him, he issued a call to arms for the young researchers in the audience. “I want you to appreciate the scope of the work that is required to preserve digital things,” Cerf told them. Without that work, he added, “recreating a digital environment in 100 years is not going to be a trivial matter.”

The pros and cons of reinforcement learning in physical science

17 September 2025 at 12:30

Today’s artificial intelligence (AI) systems are built on data generated by humans. They’re trained on huge repositories of writing, images and videos, most of which have been scraped from the Internet without the knowledge or consent of their creators. It’s a vast and sometimes ill-gotten treasure trove of information – but for machine-learning pioneer David Silver, it’s nowhere near enough.

“I think if you provide the knowledge that humans already have, it doesn’t really answer the deepest question for AI, which is how it can learn for itself to solve problems,” Silver told an audience at the 12th Heidelberg Laureate Forum (HLF) in Heidelberg, Germany, on Monday.

Silver’s proposed solution is to move from the “era of human data”, in which AI passively ingests information like a student cramming for an exam, into what he calls the “era of experience” in which it learns like a baby exploring its world. In his HLF talk on Monday, Silver played a sped-up video of a baby repeatedly picking up toys, manipulating them and putting them down while crawling and rolling around a room. To murmurs of appreciation from the audience, he declared, “I think that provides a different perspective of how a system might learn.”

Silver, a computer scientist at University College London, UK, has been instrumental in making this experiential learning happen in the virtual worlds of computer science and mathematics. As head of reinforcement learning at Google DeepMind, he led the development of AlphaZero, an AI system that taught itself to play the ancient stones-and-grid game of Go. It did this via a so-called “reward function” that pushed it to improve over many iterations of self-play, without ever being taught human strategy or studying human games.

More recently, Silver coordinated a follow-up project called AlphaProof that treats formal mathematics as a game. In this case, AlphaZero’s reward is based on getting correct proofs. While it isn’t yet outperforming the best human mathematicians, in 2024 it achieved silver-medal standard on problems at the International Mathematical Olympiad.

Learning in the physics playroom

Could a similar experiential learning approach work in the physical sciences? At an HLF panel discussion on Tuesday afternoon, particle physicist Thea Klæboe Årrestad began by outlining one possible application. Whenever CERN’s Large Hadron Collider (LHC) is running, Årrestad explained, she and her colleagues in the CMS experiment must control the magnets that keep protons on the right path as they zoom around the collider. Currently, this task is performed by a person, working in real time.

Four people sitting on a stage with a large screen in the background. Another person stands beside them
Up for discussion: A panel discussion on machine learning in physical sciences at the Heidelberg Laureate Forum. l-r: Moderator George Musser, Kyle Cranmer, Thea Klæboe Årrestad, David Silver and Maia Fraser. (Courtesy: Bernhard Kreutzer/HLFF)

In principle, Årrestad continued, a reinforcement-learning AI could take over that job after learning by experience what works and what doesn’t. There’s just one problem: if it got anything wrong, the protons would smash into a wall and melt the beam pipe. “You don’t really want to do that mistake twice,” Årrestad deadpanned.

For Årrestad’s fellow panellist Kyle Cranmer, a particle physicist who works on data science and machine learning at the University of Wisconsin-Madison, US, this nightmare scenario symbolizes the challenge with using reinforcement learning in physical sciences. In situations where you’re able to do many experiments very quickly and essentially for free – as is the case with AlphaGo and its descendants – you can expect reinforcement learning to work well, Cranmer explained. But once you’re interacting with a real, physical system, even non-destructive experiments require finite amounts of time and money.

Another challenge, Cranmer continued, is that particle physics already has good theories that predict some quantities to multiple decimal places. “It’s not low-hanging fruit for getting an AI to come up with a replacement framework de novo,” Cranmer said. A better option, he suggested, might be to put AI to work on modelling atmospheric fluid dynamics, which are emergent phenomena without first-principles descriptions. “Those are super-exciting places to use ideas from machine learning,” he said.

Not for nuclear arsenals

Silver, who was also on Tuesday’s panel, agreed that reinforcement learning isn’t always the right solution. “We should do this in areas where mistakes are small and it can learn from those small mistakes to avoid making big mistakes,” he said. To general laughter, he added that he would not recommend “letting an AI loose on nuclear arsenals”, either.

Reinforcement learning aside, both Årrestad and Cranmer are highly enthusiastic about AI. For Cranmer, one of the most exciting aspects of the technology is the way it gets scientists from different disciplines talking to each other. The HLF, which aims to connect early-career researchers with senior figures in mathematics and computer science, is itself a good example, with many talks in the weeklong schedule devoted to AI in one form or another.

For Årrestad, though, AI’s most exciting possibility relates to physics itself. Because the LHC produces far more data than humans and present-day algorithms can handle, Årrestad explained, much of it is currently discarded. The idea that, as a result, she and her colleagues could be throwing away major discoveries sometimes keeps her up at night. “Is there new physics below 1 TeV?” Årrestad wondered.

Someday, maybe, an AI might be able to tell us.

Are we heading for a future of superintelligent AI mathematicians?

16 September 2025 at 21:54

When researchers at Microsoft released a list of the 40 jobs most likely to be affected by generative artificial intelligence (gen AI), few outsiders would have expected to see “mathematician” among them. Yet according to speakers at this year’s Heidelberg Laureate Forum (HLF), which connects early-career researchers with distinguished figures in mathematics and computer science, computers are already taking over many tasks formerly performed by human mathematicians – and the humans have mixed feelings about it.

One of those expressing disquiet is Yang-Hui He, a mathematical physicist at the London Institute for Mathematical Sciences. In general, He is extremely keen on AI. He’s written a textbook about the use of AI in mathematics, and he told the audience at an HLF panel discussion that he’s been peddling machine-learning techniques to his mathematical physics colleagues since 2017.

More recently, though, He has developed concerns about gen AI specifically. “It is doing mathematics so well without any understanding of mathematics,” he said, a note of wonder creeping into his voice. Then, more plaintively, he added, “Where is our place?”

AI advantages

Some of the things that make today’s gen AI so good at mathematics are the same as the ones that made Google’s DeepMind so good at the game of Go. As the theoretical computer scientist Sanjeev Arora pointed out in his HLF talk, “The reason it’s better than humans is that it’s basically tireless.” Put another way, if the 20th-century mathematician Alfréd Rényi once described his colleagues as “machines for turning coffee into theorems”, one advantage of 21st-century AI is that it does away with the coffee.

Arora, however, sees even greater benefits. In his view, AI’s ability to use feedback to improve its own performance – a technique known as reinforcement learning – is particularly well-suited to mathematics.

In the standard version of reinforcement learning, Arora explains, the AI model is given a large bank of questions, asked to generate many solutions and told to use the most correct ones (as labelled by humans) to refine itself. But because mathematics is so formalized, with answers that are so verifiably true or false, Arora thinks it will soon be possible to replace human correctness checkers with AI “proof assistants”. Indeed, he and his colleagues at Princeton University in the US are developing one such assistant, built on the Lean theorem prover.
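
As a toy illustration of that loop (a minimal Python sketch of the “generate, verify, keep” cycle; the model, checker and problems here are stand-ins, not DeepMind’s or the Princeton group’s actual systems):

import random

class ToyModel:
    """Stand-in for a generative model that proposes answers to x*x == n."""
    def __init__(self):
        self.learned = {}                       # verified answers it has "memorised"

    def generate(self, problem):
        # Use a remembered answer if one exists, otherwise guess at random
        return self.learned.get(problem, random.randint(1, 12))

    def fine_tune(self, verified_pairs):
        # Reinforce only on solutions that the checker accepted
        for problem, answer in verified_pairs:
            self.learned[problem] = answer

def checker(problem, answer):
    # Stand-in for the proof assistant: each answer is verifiably right or wrong
    return answer * answer == problem

def train(model, problems, n_candidates=16, n_rounds=5):
    for _ in range(n_rounds):
        verified = []
        for p in problems:
            candidates = [model.generate(p) for _ in range(n_candidates)]
            verified += [(p, c) for c in candidates if checker(p, c)]
        model.fine_tune(verified)               # keep only what checked out
    return model

model = train(ToyModel(), problems=[49, 81, 121])
print(model.learned)                            # typically {49: 7, 81: 9, 121: 11}

The structural point is the filtering step: because the checker is formal, no human-labelled answer ever enters the loop, which is exactly the substitution Arora describes.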

Humans in the loop?

But why stop there? Why not use AI to generate mathematical questions as well as producing and checking their solutions? Indeed, why not get it to write a paper, peer review it and publish it for its fellow AI mathematicians – which are, presumably, busy combing the literature for information to help them define new questions?

Arora clearly thinks that’s where things are heading, and many of his colleagues seem to agree, at least in part. His fellow HLF panellist Javier Gómez-Serrano, a mathematician at Brown University in the US, noted that AI is already generating results in a day or two that would previously have taken a human mathematician months. “Progress has been quite quick,” he said.

The panel’s final member, Maia Fraser of the University of Ottawa, Canada, likewise paid tribute to the “incredible things that are possible with AI now”.  But Fraser, who works on mathematical problems related to neuroscience, also sounded a note of caution. “My concern is the speed of the changes,” she told the HLF audience.

The risk, Fraser continued, is that some of these changes may end up happening by default, without first considering whether humans want or need them. While we can’t un-invent AI, “we do have agency” over what we want, she said.

So, do we want a world in which AI mathematicians take humans “out of the loop” entirely? For He, the benefits may outweigh the disadvantages. “I really want to see a proof of the Riemann hypothesis,” he said,  to ripples of laughter. If that means that human mathematicians “become priests to oracles”, He added, so be it.

Juno: the spacecraft that is revolutionizing our understanding of Jupiter

11 September 2025 at 15:55

This episode of the Physics World Weekly podcast features Scott Bolton, who is principal investigator on NASA’s Juno mission to Jupiter. Launched in 2011, the mission has delivered important insights into the nature of the gas-giant planet. In this conversation with Physics World’s Margaret Harris, Bolton explains how Juno continues to change our understanding of Jupiter and other gas giants.

Bolton and Harris chat about the mission’s JunoCam, which has produced some gorgeous images of Jupiter and its moons.

Although the Juno mission was expected to last only a few years, the spacecraft is still going strong despite operating in Jupiter’s intense radiation belts. Bolton explains how the Juno team has rejuvenated radiation-damaged components, which has provided important insights for those designing future missions to space.

However, Juno’s future is uncertain. Despite its great success, the mission is currently scheduled to finish at the end of September, which is something that Bolton also addresses in the conversation.

William Phillips: why quantum physics is so ‘deliciously weird’

25 August 2025 at 12:00
William Phillips
Entranced by quantum William Phillips. (Courtesy: NIST)

William Phillips is a pioneer in the world of quantum physics. After graduating from Juniata College in Pennsylvania in 1970, he did a PhD with Dan Kleppner at the Massachusetts Institute of Technology (MIT), where he measured the magnetic moment of the proton in water. In 1978 Phillips joined the National Bureau of Standards in Gaithersburg, Maryland, now known as the National Institute of Standards and Technology (NIST), where he is still based.

Phillips shared the 1997 Nobel Prize for Physics with Steven Chu and Claude Cohen-Tannoudji for their work on laser cooling. The technique uses light from precisely tuned laser beams to slow atoms down and cool them to just above absolute zero. As well as leading to more accurate atomic clocks, laser cooling proved vital for the creation of Bose–Einstein condensates – a form of matter where all constituent particles are in the same quantum state.

To mark the International Year of Quantum Science and Technology, Physics World online editor Margaret Harris sat down with Phillips in Gaithersburg to talk about his life and career in physics. The following is an edited extract of their conversation, which you can hear in full on the Physics World Weekly podcast.

How did you become interested in quantum physics?

As an undergraduate, I was invited by one of the professors at my college to participate in research he was doing on electron spin resonance. We were using the flipping of unpaired spins in a solid sample to investigate the structure and behaviour of a particular compound. Unlike a spinning top, electrons can spin only in two possible orientations, which is pretty weird and something I found really fascinating. So I was part of the quantum adventure even as an undergraduate.

What did you do after graduating?

I did a semester at Argonne National Laboratory outside Chicago, working on electron spin resonance with two physicists from Argentina. Then I was invited by Dan Kleppner – an amazing physicist – to do a PhD with him at the Massachusetts Institute of Technology. He really taught me how to think like a physicist. It was in his lab that I first encountered tuneable lasers, another wonderful tool for using the quantum properties of matter to explore what’s going on at the atomic level.

A laser-cooling laboratory set-up
Chilling out William Phillips working on laser-cooling experiments in his laboratory circa 1986. (Courtesy: NIST)

Quantum mechanics is often viewed as being weird, counter-intuitive and strange. Is that also how you felt?

I’m the kind of person entranced by everything in the natural world. But even in graduate school, I don’t think I understood just how strange entanglement is. If two particles are entangled in a particular way, and you measure one to be spin “up”, say, then the other particle will necessarily be spin “down” – even though there’s no connection between them. Not even a signal travelling at the speed of light could get from one particle to the other to tell it, “You’d better be ‘down’ because the first one was measured to be ‘up’.” As a graduate student I didn’t understand how deliciously weird nature is because of quantum mechanics.

Is entanglement the most challenging concept in quantum mechanics?

It’s not that hard to understand entanglement in a formal sense. But it’s hard to get your mind wrapped around it because it’s so weird and distinct from the kinds of things that we experience on a day-to-day basis. The thing that it violates – local realism – seems so reasonable. But experiments done first by John Clauser and then Alain Aspect and Anton Zeilinger, who shared the Nobel Prize for Physics in 2022, basically proved that it happens.

What quantum principle has had the biggest impact on your work?

Superposition has enabled the creation of atomic clocks of incredible precision. When I first came to NIST in 1978, when it was still called the National Bureau of Standards, the very best clock in the world was in our labs in Boulder, Colorado. It was good to one part in 10¹³.

Because of Einstein’s general relativity, clocks run slower if they’re deeper in a gravitational potential. The effect isn’t big: Boulder is about 1.5 km above sea level and a clock there would run faster than a sea-level clock by about 1.5 parts in 10¹³. So if you had two such clocks – one at sea level and one in Boulder – you’d barely be able to resolve the difference. Now, at least in part because of the laser cooling and trapping ideas that my group and I have worked on, one can resolve a height difference of less than 1 mm with the clocks that exist today. I just find that so amazing.
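
As a rough check on those numbers (our arithmetic, not Phillips’s), the fractional frequency shift between two clocks separated by a height h near the Earth’s surface is gh/c²:

\[
\frac{\Delta f}{f} \approx \frac{gh}{c^{2}} = \frac{(9.8\ \mathrm{m\,s^{-2}})\times(1500\ \mathrm{m})}{(3.0\times10^{8}\ \mathrm{m\,s^{-1}})^{2}} \approx 1.6\times10^{-13} .
\]

Scaling the same formula down, a height difference of 1 mm corresponds to a shift of roughly one part in 10¹⁹, which gives a sense of how far today’s best optical clocks have come.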

What research are you and your colleagues at NIST currently involved in?

Our laboratory has been a generator of ideas and techniques that could be used by people who make atomic clocks. Jun Ye, for example, is making clocks from atoms trapped in a so-called optical lattice of overlapping laser beams; these clocks are good to better than one part in 10¹⁸ – two orders of magnitude better than the caesium clocks that define the second. These newer types of clocks could help us to redefine the second.

We’re also working on quantum information. Ordinary digital information is stored and processed using bits that represent 0 or 1. But the beauty of qubits is that they can be in a superposition state, which is both 0 and 1. It might sound like a disaster because one of the great strengths of binary information is there’s no uncertainty; it’s one thing or another. But putting quantum bits into superpositions means you can do a problem in a lot fewer operations than using a classical device.

In 1994, for example, Peter Shor devised an algorithm that can factor numbers quantum mechanically much faster, or using far fewer operations, than with an ordinary classical computer. Factoring is a “hard problem”, meaning that the number of operations to solve it grows exponentially with the size of the number. But if you do it quantum mechanically, it doesn’t grow exponentially – it becomes an “easy” problem, which I find absolutely amazing. Changing the hardware on which you do the calculation changes the complexity class of a problem.

How might that change be useful in practical terms?

Shor’s algorithm is important because of public key encryption, which we use whenever we buy something online with a credit card. A company sends your computer a big integer number that they’ve generated by multiplying two smaller numbers together. That number is used to encrypt your credit card number. Somebody trying to intercept the transmission can’t get any useful information because it would take centuries to factor this big number. But if an evildoer had a quantum computer, they could factor the number, figure out your credit card and use it to buy TVs or whatever evildoers buy.
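
As a toy version of the scheme Phillips describes (deliberately tiny numbers; real keys use integers hundreds of digits long, and this sketch ignores padding and other practical details), the whole construction fits in a few lines of Python:

p, q = 61, 53                 # the two secret primes known only to the company
n = p * q                     # 3233: the "big integer" sent to your computer
e = 17                        # public encryption exponent

m = 7                         # stand-in for (part of) your credit card number
ciphertext = pow(m, e, n)     # anyone can encrypt using only n and e

# Decryption needs d, and computing d requires knowing the factors p and q
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)           # modular inverse of e (Python 3.8+)
recovered = pow(ciphertext, d, n)

print(ciphertext, recovered)  # recovered == 7

# An eavesdropper sees only n, e and the ciphertext. Classically, recovering
# p and q from n takes a time that blows up with the size of n; Shor's
# algorithm on a large quantum computer would make that step easy.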

Now, we don’t have quantum computers that can do this yet – they can’t even do simple problems, let alone factor big numbers. But if somebody did build one, they could decrypt messages that do matter, such as diplomatic or military secrets. Fortunately, quantum mechanics comes to the rescue through something called the no-cloning theorem. Quantum forms of encryption built on this theorem prevent an eavesdropper from intercepting a message, duplicating it and using it – it’s not allowed by the laws of physics.

William Phillips performing a demo
Sharing the excitement William Phillips performing a demo during a lecture at the Sigma Pi Sigma Congress in 2000. (Courtesy: AIP Emilio Segrè Visual Archives)

Quantum processors can be made from different qubits – not just cold atoms but trapped ions, superconducting circuits and others, too. Which do you think will turn out best?

My attitude is that it’s too early to settle on one particular platform. It may well be that the final quantum computer is a hybrid device, where computations are done on one platform and storage is done on another. Superconducting quantum computers are fast, but they can’t store information for long, whereas atoms and ions can store information for a really long time – they’re robust and isolated from the environment, but are slow at computing. So you might use the best features of different platforms in different parts of your quantum computer.

But what do I know? We’re a long way from having quantum computers that can do interesting problems faster than a classical device. Sure, you might have heard somebody say they’ve used a quantum computer to solve a problem that would take a classical device a septillion years. But they’ve probably chosen a problem that was easy for a quantum computer and hard for a classical computer – and it was probably a problem nobody cares about.

When do you think we’ll see quantum computers solving practical problems?

People are definitely going to make money from factoring numbers and doing quantum chemistry. Learning how molecules behave could make a big difference to our lives. But none of this has happened yet, and we may still be pretty far away from it. In fact, I have proposed a bet with my colleague Carl Williams, who says that by 2045 we will have a quantum computer that can factor numbers that a classical computer of that time cannot. My view is we won’t. I expect to be dead by then. But I hope the bet will encourage people to solve the problems to make this work, like error correction. We’ll also put up money to fund a scholarship or a prize.

What do you think quantum computers will be most useful for in the nearer term?

What I want is a quantum computer that can tackle problems such as magnetism. Let’s say you have a 1D chain of atoms with spins that can point up or down. Quantum magnetism is a hard problem because with n spins there are 2ⁿ possible states and calculating the overall magnetism of a chain of more than a few tens of spins is impossible for a brute-force classical computer. But a quantum computer could do the job.
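
To see where the 2ⁿ comes from, here is a deliberately naive sketch (our illustration, not Phillips’s): it enumerates every up/down pattern of a short classical Ising chain and picks the lowest-energy one. The quantum problem is harder still – the Hamiltonian becomes a 2ⁿ × 2ⁿ matrix – but the counting is the same.

from itertools import product

def ising_energy(spins, J=1.0):
    # Nearest-neighbour interaction E = -J * sum of s_i * s_(i+1), open chain
    return -J * sum(s1 * s2 for s1, s2 in zip(spins, spins[1:]))

def ground_state(n):
    best_energy, best_config = None, None
    for config in product((+1, -1), repeat=n):      # all 2**n spin patterns
        energy = ising_energy(config)
        if best_energy is None or energy < best_energy:
            best_energy, best_config = energy, config
    return best_energy, best_config

for n in (4, 8, 12, 16):
    energy, _ = ground_state(n)
    print(f"n={n:2d}  states checked={2**n:6d}  ground-state energy={energy:5.1f}")

Doubling the chain length squares the amount of work, which is why brute force gives out at a few tens of spins and why a quantum computer that can represent the chain directly is so attractive.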

There are quantum computers that already have lots of qubits but you’re not going to get a reliable answer from them. For that you have to do error correction by assembling physical qubits into what’s known as a logical qubit. Logical qubits let you determine whether an error has happened and fix it, which is what people are just starting to do. It’s just so exciting right now.

What development in quantum physics should we most look out for?

The two main challenges are: how many logical qubits we can entangle with each other; and for how long they can maintain their coherence. I often say we need an “immortal” qubit, one that isn’t killed by the environment and lasts long enough to be used to do an interesting calculation. That’ll determine if you really have a competent quantum computer.

Reflecting on your career so far, what are you most proud of?

Back in around 1988, we were just fooling around in the lab trying to see if laser cooling was working the way it was supposed to. First indications were: everything’s great. But then we discovered that the temperature to which you could laser cool atoms was lower than everybody said was possible based on the theory at that time. This is called sub-Doppler laser cooling, and it was an accidental discovery; we weren’t looking for it.

People got excited and our friends in Paris at the École Normale came up with explanations for what was going on. Steve Chu, who was at that point at Stanford University, was also working on understanding the theory behind it, and that really changed things in an important way. In fact, all of today’s laser-cooled caesium atomic clocks use that feature that the temperature is lower than the original theory of laser cooling said it was.

William Phillips at the IYQ 2025 opening ceremony
Leading light William Phillips spoke at the opening ceremony of the International Year of Quantum Science and Technology (IYQ 2025) at UNESCO headquarters in Paris earlier this year. (© UNESCO/Marie Etchegoyen. Used with permission.)

Another thing that has been particularly important is Bose–Einstein condensation, which is an amazing process that happens because of a purely quantum-mechanical feature that makes atoms of the same kind fundamentally indistinguishable. It goes back to the work of Satyendra Nath Bose, who 100 years ago came up with the idea that photons are indistinguishable and therefore that the statistical mechanics of photons would be different from the usual statistical mechanics of Boltzmann or Maxwell.

Bose–Einstein condensates, where almost all the atoms are in the same quantum state, were facilitated by our discovery that the temperature could be so much lower. To get this state, you’ve got to cool the atoms to a very low temperature – and it helps if the atoms are colder to start with.

Did you make any other accidental discoveries?

We also accidentally discovered optical lattices. In 1968 a Russian physicist named Vladilen Letokhov came up with the idea of trapping atoms in a standing wave of light. This was 10 years before laser cooling arrived and made it possible to do such a thing, but it was a great idea because the atoms are trapped over such a small distance that a phenomenon called Dicke narrowing gets rid of the Doppler shift.

Everybody knew this was a possibility, but we weren’t looking for it. We were trying to measure the temperature of the atoms in the laser-cooling configuration, and the idea we came up with was to look at the Doppler shift of the scattered light. Light comes in, and if it bounces off an atom that’s moving, there’ll be a Doppler shift, and we can measure that Doppler shift and see the distribution of velocities.

So we did that, and the velocity distribution just floored us. It was so odd. Instead of being nice and smooth, there was a big sharp peak right in the middle. We didn’t know what it was. We thought briefly that we might have accidentally made a Bose–Einstein condensate, but then we realized, no, we’re trapping the atoms in an optical lattice so the Doppler shift goes away.

It wasn’t nearly as astounding as sub-Doppler laser cooling because it was expected, but it was certainly interesting, and it is now used for a number of applications, including the next generation of atomic clocks.

How important is serendipity in research?

Learning about things accidentally has been a recurring theme in our laboratory. In fact, I think it’s an important thing for people to understand about the way that science is done. Often, science is done not because people are working towards a particular goal but because they’re fooling around and see something unexpected. If all of our science activity is directed toward specific goals, we’ll miss a lot of really important stuff that allows us to get to those goals. Without this kind of curiosity-driven research, we won’t get where we need to go.

In a nutshell, what does quantum mean to you?

Quantum mechanics was the most important discovery of 20th-century physics. Wave–particle duality, which a lot of people would say was the “ordinary” part of quantum mechanics, has led to a technological revolution that has transformed our daily lives. We all walk around with mobile phones that wouldn’t exist were it not for quantum mechanics. So for me, quantum mechanics is this idea that waves are particles and particles are waves.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ.

Find out more on our quantum channel.
