
Neural networks discover unstable singularities in fluid systems

14 November 2025 at 09:56

Significant progress towards answering one of the Clay Mathematics Institute’s seven Millennium Prize Problems has been achieved using deep learning. The challenge is to establish whether or not the Navier–Stokes equation of fluid dynamics develops singularities. The work was done by researchers in the US and UK – including some at Google DeepMind. Some team members had already shown that simplified versions of the equation could develop stable singularities, which reliably form. In the new work, the researchers found unstable singularities, which form only under very specific conditions.

The Navier–Stokes partial differential equation was developed in the 19th century by Claude-Louis Navier and George Gabriel Stokes. It has proved its worth for modelling incompressible fluids in scenarios including water flow in pipes; airflow around aeroplanes; blood moving in veins; and magnetohydrodynamics in plasmas.
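For reference, the incompressible Navier–Stokes equations are conventionally written as follows; this is the standard textbook form, not notation taken from the research paper:

```latex
% u: velocity field, p: pressure, rho: fluid density,
% nu: kinematic viscosity, f: body force per unit mass.
% Setting nu = 0 removes viscosity and gives the inviscid Euler equation.
\[
\frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u}\cdot\nabla)\mathbf{u}
  = -\frac{1}{\rho}\nabla p
  + \nu\,\nabla^{2}\mathbf{u}
  + \mathbf{f},
\qquad
\nabla\cdot\mathbf{u} = 0
\]
```

The Millennium Prize question is whether the nonlinear advection term can ever drive the velocity field to infinity in finite time, or whether the viscous term always smooths such blow-ups away.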

No-one has yet proved, however, whether smooth, non-singular solutions to the equation always exist in three dimensions. “In the real world, there is no singularity…there is no energy going to infinity,” says fluid dynamics expert Pedram Hassanzadeh of the University of Chicago. “So if you have an equation that has a singularity, it tells you that there is some physics that is missing.” In 2000, the Clay Mathematics Institute in Denver, Colorado listed this proof as one of seven key unsolved problems in mathematics, offering a reward of $1,000,000 for an answer.

Computational approaches

Researchers have traditionally tackled the problem analytically, but in recent decades high-level computational simulations have been used to assist in the search. In a 2023 paper, mathematician Tristan Buckmaster of New York University and colleagues used a special type of machine learning algorithm called a physics-informed neural network to address the question.

“The main difference is…you represent [the solution] in a highly non-linear way in terms of a neural network,” explains Buckmaster. This allows it to occupy a lower-dimensional space with fewer free parameters, and therefore to be optimized more efficiently. Using this approach, the researchers successfully obtained the first stable singularity in the Euler equation, an analogue of the Navier–Stokes equation that does not include viscosity.

A stable singularity will still occur if the initial conditions of the fluid are changed slightly – although the time it takes to form may be altered. An unstable singularity, however, may never occur if the initial conditions are perturbed even infinitesimally. Some researchers have hypothesized that any singularities in the Navier–Stokes equation must be unstable, but finding unstable singularities in a computer model is extraordinarily difficult.

“Before our result there hadn’t been an unstable singularity for an incompressible fluid equation found numerically,” says geophysicist Ching-Yao Lai of California’s Stanford University.

Physics-informed neural network

In the new work the authors of the original paper and others teamed up with researchers at Google DeepMind to search for unstable singularities in a bounded 3D version of the Euler equation using a physics-informed neural network. “Unlike conventional neural networks that learn from vast datasets, we trained our models to match equations that model the laws of physics,” writes Yongji Wang of New York University and Stanford on DeepMind’s blog. “The network’s output is constantly checked against what the physical equations expect, and it learns by minimizing its ‘residual’, the amount by which its solution fails to satisfy the equations.”
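The residual-minimization idea Wang describes can be illustrated with a deliberately tiny example. The following sketch (illustrative only, not the team’s code) replaces the neural network with a one-parameter trial function u(x) = exp(ax) and trains it against the equation du/dx + u = 0 with u(0) = 1, whose exact solution is exp(−x):

```python
# Illustrative sketch only (not the researchers' code): the "network"
# here is a one-parameter trial function u(x) = exp(a * x), which
# automatically satisfies u(0) = 1. Training minimizes the residual of
# the equation du/dx + u = 0, whose exact solution is u = exp(-x).
import math

xs = [i / 20 for i in range(21)]  # collocation points on [0, 1]

def residual_loss(a):
    # For u = exp(a*x), the residual du/dx + u equals (a + 1) * exp(a*x);
    # the loss is its mean square over the collocation points.
    return sum(((a + 1) * math.exp(a * x)) ** 2 for x in xs) / len(xs)

a, lr, eps = 0.0, 0.1, 1e-6
for _ in range(500):
    # Plain gradient descent with a forward-difference gradient; a real
    # PINN would use automatic differentiation instead.
    grad = (residual_loss(a + eps) - residual_loss(a)) / eps
    a -= lr * grad

print(a)  # close to -1, recovering the exact solution u = exp(-x)
```

A genuine physics-informed neural network works the same way, but with a multi-layer network as the trial function and far higher precision – the singularity searches described here required accuracies orders of magnitude beyond normal deep-learning practice.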

After an exhaustive search at a precision that is orders of magnitude higher than a normal deep learning protocol, the researchers discovered new families of singularities in the 3D Euler equation. They also found singularities in the related incompressible porous media equation used to model fluid flows in soil or rock; and in the Boussinesq equation that models atmospheric flows.

The researchers also gleaned insights into the strength of the singularities. This could be important as stronger singularities might be less readily smoothed out by viscosity when moving from the Euler equation to the Navier-Stokes equation. The researchers are now seeking to model more open systems to study the problem in a more realistic space.

Hassanzadeh, who was not involved in the work, believes that it is significant – although the results are not unexpected. “If the Euler equation tells you that ‘Hey, there is a singularity,’ it just tells you that there is physics that is missing and that physics becomes very important around that singularity,” he explains. “In the case of Euler we know that you get the singularity because, at the very smallest scales, the effects of viscosity become important…Finding a singularity in the Euler equation is a big achievement, but it doesn’t answer the big question of whether Navier-Stokes is a representation of the real world, because for us Navier-Stokes represents everything.”

He says the extension to studying the full Navier–Stokes equation will be challenging but that “they are working with the best AI people in the world at DeepMind,” and concludes “I’m sure it’s something they’re thinking about”.

The work is available on the arXiv pre-print server.

The post Neural networks discover unstable singularities in fluid systems appeared first on Physics World.

Playing games by the quantum rulebook expends less energy

12 November 2025 at 10:00

Games played under the laws of quantum mechanics dissipate less energy than their classical equivalents. This is the finding of researchers at Singapore’s Nanyang Technological University (NTU), who worked with colleagues in the UK, Austria and the US to apply the mathematics of game theory to quantum information. The researchers also found that for more complex game strategies, the quantum-classical energy difference can increase without bound, raising the possibility of a “quantum advantage” in energy dissipation.

Game theory is the field of mathematics that aims to formally understand the payoff or gains that a person or other entity (usually called an agent) will get from following a certain strategy. Concepts from game theory are often applied to studies of quantum information, especially when trying to understand whether agents who can use the laws of quantum physics can achieve a better payoff in the game.

In the latest work, which is published in Physical Review Letters, Jayne Thompson, Mile Gu and colleagues approached the problem from a different direction. Rather than focusing on differences in payoffs, they asked how much energy must be dissipated to achieve identical payoffs for games played under the laws of classical versus quantum physics. In doing so, they were guided by Landauer’s principle, an important concept in thermodynamics and information theory that states that there is a minimum energy cost to erasing a piece of information.

This Landauer minimum is known to hold for both classical and quantum systems. However, in practice systems will spend more than the minimum energy erasing memory to make space for new information, and this energy will be dissipated as heat. What the NTU team showed is that this extra heat dissipation can be reduced in the quantum system compared to the classical one.
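Landauer’s principle puts a concrete number on this minimum cost: erasing one bit dissipates at least k_BT ln 2 of heat. A quick back-of-the-envelope calculation (room temperature of 300 K is an illustrative choice, not a figure from the paper):

```python
# Landauer limit: minimum heat dissipated when erasing one bit of
# information at temperature T. The Boltzmann constant is the exact
# SI value; T = 300 K is chosen for illustration.
import math

k_B = 1.380649e-23              # Boltzmann constant, J/K
T = 300.0                       # room temperature, K
E_min = k_B * T * math.log(2)   # Landauer minimum per erased bit, J

print(E_min)  # ~2.9e-21 J: tiny, but real devices dissipate far more
```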

Planning for future contingencies

To understand why, consider that when a classical agent creates a strategy, it must plan for all possible future contingencies. This means it stores possibilities that never occur, wasting resources. Thompson explains this with a simple analogy. Suppose you are packing to go on a day out. Because you are not sure what the weather is going to be, you must pack items to cover all possible weather outcomes. If it’s sunny, you’d like sunglasses. If it rains, you’ll need your umbrella. But if you only end up using one of these items, you’ll have wasted space in your bag.

“It turns out that the same principle applies to information,” explains Thompson. “Depending on future outcomes, some stored information may turn out to be unnecessary – yet an agent must still maintain it to stay ready for any contingency.”

For a classical system, this can be a very wasteful process. Quantum systems, however, can use superposition to store past information more efficiently. When systems in a quantum superposition are measured, they probabilistically reveal an outcome associated with only one of the states in the superposition. Hence, while superposition can be used to store multiple possible pasts, upon measurement all excess information is automatically erased, “almost as if they had never stored this information at all,” Thompson explains.
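A toy calculation makes this concrete. A qubit can hold both “pasts” from the packing analogy in superposition, but a measurement reveals only one of them, with probability given by the squared amplitude (the Born rule); the amplitudes below are illustrative values, not numbers from the paper:

```python
# Toy Born-rule illustration: a qubit storing two "pasts" (sunny/rainy)
# in equal superposition. Each measurement reveals only one branch;
# the information in the other branch is effectively erased.
import math, random

amp_sunny, amp_rainy = 1 / math.sqrt(2), 1 / math.sqrt(2)
assert abs(amp_sunny**2 + amp_rainy**2 - 1) < 1e-12  # normalized state

random.seed(0)
counts = {"sunny": 0, "rainy": 0}
for _ in range(10_000):
    # Born rule: outcome probability is the squared amplitude.
    outcome = "sunny" if random.random() < amp_sunny**2 else "rainy"
    counts[outcome] += 1

print(counts)  # roughly 50/50 over many trials, one outcome per measurement
```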

The upshot is that because information erasure has close ties to energy dissipation, this gives quantum systems an energetic advantage. “This is a fantastic result focusing on the physical aspect that many other approaches neglect,” says Vlatko Vedral, a physicist at the University of Oxford, UK who was not involved in the research.

Implications of the research

Gu and Thompson say their result could have implications for the large language models (LLMs) behind popular AI tools such as ChatGPT, as it suggests there might be theoretical advantages, from an energy consumption point of view, in using quantum computers to run them.

Another, more foundational question they hope to understand regarding LLMs is the inherent asymmetry in their behaviour. “It is likely a lot more difficult for an LLM to write a book from back cover to front cover, as opposed to in the more conventional temporal order,” Thompson notes. When considered from an information-theoretic point of view, the two tasks are equivalent, making this asymmetry somewhat surprising.

In Thompson and Gu’s view, taking waste into consideration could shed light on this asymmetry. “It is likely we have to waste more information to go in one direction over the other,” Thompson says, “and we have some tools here which could be used to analyse this”.

For Vedral, the result also has philosophical implications. If quantum agents are more optimal, he says, it is “surely telling us that the most coherent picture of the universe is the one where the agents are also quantum and not just the underlying processes that they observe”.

  • This article was amended on 19 November 2025 to correct a reference to the minimum energy cost of erasing information. It is the Landauer minimum, not the Landau minimum.


The forgotten pioneers of computational physics

11 November 2025 at 11:00

When you look back at the early days of computing, some familiar names pop up, including John von Neumann, Nicholas Metropolis and Richard Feynman. But they were not lonely pioneers – they were part of a much larger group, using mechanical and then electronic computers to do calculations that had never been possible before.

These people, many of whom were women, were the first scientific programmers and computational scientists. Skilled in the complicated operation of early computing devices, they often had degrees in maths or science, and were an integral part of research efforts. And yet, their fundamental contributions are mostly forgotten.

This was in part because of their gender – it was an age when sexism was rife, and it was standard for women to be fired from their job after getting married. However, there is another important factor that is often overlooked, even in today’s scientific community – people in technical roles are often underappreciated and underacknowledged, even though they are the ones who make research possible.

Human and mechanical computers

Originally, a “computer” was a human being who did calculations by hand or with the help of a mechanical calculator. It is thought that the world’s first computational lab was set up in 1937 at Columbia University. But it wasn’t until the Second World War that the demand for computation really exploded, driven by the need for artillery calculations, new technologies and code breaking.

Human computers The term “computer” originally referred to people who performed calculations by hand. Here, Kay McNulty, Alyse Snyder and Sis Stump operate the differential analyser in the basement of the Moore School of Electrical Engineering, University of Pennsylvania, circa 1942–1945. (Courtesy: US government)

In the US, the development of the atomic bomb during the Manhattan Project (established in 1943) required huge computational efforts, so it wasn’t long before the New Mexico site had a hand-computing group. Called the T-5 group of the Theoretical Division, it initially consisted of about 20 people. Most were women, including the spouses of other scientific staff. Among them was Mary Frankel, a mathematician married to physicist Stan Frankel; mathematician Augusta “Mici” Teller who was married to Edward Teller, the “father of the hydrogen bomb”; and Jean Bacher, the wife of physicist Robert Bacher.

As the war continued, the T-5 group expanded to include civilian recruits from the nearby towns and members of the Women’s Army Corps. Its staff worked around the clock, using printed mathematical tables and desk calculators in four-hour shifts – but that was not enough to keep up with the computational needs for bomb development. In the early spring of 1944, IBM punch-card machines were brought in to supplement the human power. They became so effective that the machines were soon being used for all large calculations, 24 hours a day, in three shifts.

The computational group continued to grow, and among the new recruits were Naomi Livesay and Eleonor Ewing. Livesay held an advanced degree in mathematics and had done a course in operating and programming IBM electric calculating machines, making her an ideal candidate for the T-5 division. She in turn recruited Ewing, a fellow mathematician who was a former colleague. The two young women supervised the running of the IBM machines around the clock.

The frantic pace of the T-5 group continued until the end of the war in September 1945. The development of the atomic bomb required an immense computational effort, which was made possible through hand and punch-card calculations.

Electronic computers

Shortly after the war ended, the first fully electronic, general-purpose computer – the Electronic Numerical Integrator and Computer (ENIAC) – became operational at the University of Pennsylvania, following two years of development. The project had been led by physicist John Mauchly and electrical engineer J Presper Eckert. The machine was operated and coded by six women – mathematicians Betty Jean Jennings (later Bartik); Kathleen, or Kay, McNulty (later Mauchly, then Antonelli); Frances Bilas (Spence); Marlyn Wescoff (Meltzer) and Ruth Lichterman (Teitelbaum); as well as Betty Snyder (Holberton) who had studied journalism.

World first The ENIAC was the first programmable, electronic, general-purpose digital computer. It was built at the US Army’s Ballistic Research Laboratory in 1945, then moved to the University of Pennsylvania in 1946. Its initial team of six coders and operators were all women, including Betty Jean Jennings (later Bartik – left of photo) and Frances Bilas (later Spence – right of photo). They are shown preparing the computer for Demonstration Day in February 1946. (Courtesy: US Army/ ARL Technical Library)

Polymath John von Neumann also got involved when looking for more computing power for projects at the new Los Alamos Laboratory, established in New Mexico in 1947. In fact, although originally designed to solve ballistic trajectory problems, the first problem to be run on the ENIAC was “the Los Alamos problem” – a thermonuclear feasibility calculation for Teller’s group studying the H-bomb.

As in the Manhattan Project, several husband-and-wife teams worked on the ENIAC, the most famous being von Neumann and his wife Klara Dán, and mathematicians Adele and Herman Goldstine. Dán von Neumann in particular worked closely with Nicholas Metropolis, who alongside mathematician Stanislaw Ulam had coined the term Monte Carlo to describe numerical methods based on random sampling. Indeed, between 1948 and 1949 Dán von Neumann and Metropolis ran the first series of Monte Carlo simulations on an electronic computer.

Work began on a new machine at Los Alamos in 1948 – the Mathematical Analyzer Numerical Integrator and Automatic Computer (MANIAC) – which ran its first large-scale hydrodynamic calculation in March 1952. Many of its users were physicists, and its operators and coders included mathematicians Mary Tsingou (later Tsingou-Menzel), Marjorie Jones (Devaney) and Elaine Felix (Alei); plus Verna Ellingson (later Gardiner) and Lois Cook (Leurgans).

Early algorithms

The Los Alamos scientists tried all sorts of problems on the MANIAC, including a chess-playing program – the first documented case of a machine defeating a human at the game. However, two of these projects stand out because they had profound implications on computational science.

In 1953 the Tellers, together with Metropolis and physicists Arianna and Marshall Rosenbluth, published the seminal article “Equation of state calculations by fast computing machines” (J. Chem. Phys. 21 1087). The work introduced the ideas behind the “Metropolis (later renamed Metropolis–Hastings) algorithm”, which is a Monte Carlo method that is based on the concept of “importance sampling”. (While Metropolis was involved in the development of Monte Carlo methods, it appears that he did not contribute directly to the article, but provided access to the MANIAC nightshift.) This is the progenitor of the Markov Chain Monte Carlo methods, which are widely used today throughout science and engineering.
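The algorithm’s core loop is short enough to sketch. This toy version samples a single particle in a harmonic well with k_BT = 1 (an illustrative system, not the hard-disk model of the 1953 paper) and shows the accept/reject rule that implements importance sampling:

```python
# Minimal Metropolis sampler: propose a small random move, always accept
# it if the energy drops, otherwise accept with probability exp(-dE/kT).
# System: one particle with energy E(x) = x^2 at kT = 1, so accepted
# positions should be distributed as exp(-x^2), a Gaussian with
# variance 0.5.
import math, random

random.seed(1)
x, samples = 0.0, []
for _ in range(200_000):
    x_new = x + random.uniform(-0.5, 0.5)    # trial move
    dE = x_new**2 - x**2                     # energy change of the move
    if dE <= 0 or random.random() < math.exp(-dE):
        x = x_new                            # accept; otherwise keep old x
    samples.append(x)

mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples)
print(mean, var)  # mean near 0, variance near 0.5
```

Because moves that raise the energy are only sometimes accepted, the walk spends most of its time in low-energy, high-probability configurations – the bias toward physically important states that defines importance sampling.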

Marshall later recalled how the research came about when he and Arianna had proposed using the MANIAC to study how solids melt (AIP Conf. Proc. 690 22).

A mind for chess Paul Stein (left) and Nicholas Metropolis play “Los Alamos” chess against the MANIAC. “Los Alamos” chess was a simplified version of the game, with the bishops removed to reduce the MANIAC’s processing time between moves. The computer still needed about 20 minutes between moves. The MANIAC became the first computer to beat a human opponent at chess in 1956. (Courtesy: US government / Los Alamos National Laboratory)

Edward Teller meanwhile had the idea of using statistical mechanics and taking ensemble averages instead of following detailed kinematics for each individual disk, and Mici helped with programming during the initial stages. However, the Rosenbluths did most of the work, with Arianna translating and programming the concepts into an algorithm.

The 1953 article is remarkable, not only because it led to the Metropolis algorithm, but also as one of the earliest examples of using a digital computer to simulate a physical system. The main innovation of this work was in developing “importance sampling”. Instead of sampling from random configurations, it samples with a bias toward physically important configurations which contribute more towards the integral.

Furthermore, the article also introduced another computational trick, known as “periodic boundary conditions” (PBCs): a set of conditions which are often used to approximate an infinitely large system by using a small part known as a “unit cell”. Both importance sampling and PBCs went on to become workhorse methods in computational physics.
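Periodic boundary conditions are simple enough to state in two short functions. This generic one-dimensional sketch (box length and coordinates are arbitrary illustrative values, not taken from the 1953 paper) shows both the wrapping of positions into the unit cell and the companion “minimum-image” rule for distances:

```python
# Periodic boundary conditions in one line each: particles that leave
# the unit cell re-enter on the opposite side, and separations are
# measured to the nearest periodic image ("minimum-image convention").
L = 10.0  # box length, arbitrary illustrative value

def wrap(x):
    # Map a coordinate back into the unit cell [0, L).
    return x % L

def min_image_dist(x1, x2):
    # Separation to the nearest periodic image, in the range (-L/2, L/2].
    d = (x2 - x1) % L
    return d - L if d > L / 2 else d

print(wrap(10.5))                # 0.5: particle re-enters the cell
print(min_image_dist(0.5, 9.5))  # -1.0: nearest image lies behind x1
```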

In the summer of 1953, physicists Enrico Fermi and John Pasta, together with Ulam and Tsingou, also made a significant breakthrough using the MANIAC. They ran a “numerical experiment” as part of a series meant to illustrate possible uses of electronic computers in studying various physical phenomena.

The team modelled a 1D chain of oscillators with a small nonlinearity to see if it would behave as hypothesized, reaching an equilibrium with the energy redistributed equally across the modes (doi.org/10.2172/4376203). However, their work showed that this was not guaranteed for small perturbations – a non-trivial and non-intuitive observation that would not have been apparent without the simulations. It is the first example of a physics discovery made not by theoretical or experimental means, but through a computational approach. It would later lead to the discovery of solitons and integrable models, the development of chaos theory, and a deeper understanding of ergodic limits.
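The spirit of that numerical experiment can be reproduced in a few dozen lines. The sketch below (parameters are illustrative, not those of the 1955 report) integrates an FPUT “alpha” chain started in its lowest vibrational mode and then measures how much energy remains in the lowest four of sixteen modes; equal sharing would predict 4/16 = 0.25:

```python
# A minimal re-creation, in spirit, of the Fermi-Pasta-Ulam-Tsingou
# experiment: a chain of oscillators with a small quadratic ("alpha")
# nonlinearity, started with all energy in the lowest mode.
import math

N, alpha, dt, steps = 16, 0.25, 0.05, 20_000
x = [math.sin(math.pi * (j + 1) / (N + 1)) for j in range(N)]  # mode-1 shape
v = [0.0] * N

def forces(x):
    f = []
    for j in range(N):
        left = x[j - 1] if j > 0 else 0.0        # fixed walls at both ends
        right = x[j + 1] if j < N - 1 else 0.0
        lin = right - 2 * x[j] + left            # harmonic coupling
        nonlin = alpha * ((right - x[j]) ** 2 - (x[j] - left) ** 2)
        f.append(lin + nonlin)
    return f

f = forces(x)
for _ in range(steps):                            # velocity-Verlet integration
    x = [x[j] + dt * v[j] + 0.5 * dt * dt * f[j] for j in range(N)]
    f_new = forces(x)
    v = [v[j] + 0.5 * dt * (f[j] + f_new[j]) for j in range(N)]
    f = f_new

def mode_energy(k):
    # Harmonic energy in normal mode k of the fixed-end chain.
    s = math.sqrt(2 / (N + 1))
    Q = s * sum(x[j] * math.sin(math.pi * k * (j + 1) / (N + 1)) for j in range(N))
    P = s * sum(v[j] * math.sin(math.pi * k * (j + 1) / (N + 1)) for j in range(N))
    w = 2 * math.sin(math.pi * k / (2 * (N + 1)))
    return 0.5 * (P * P + w * w * Q * Q)

E = [mode_energy(k) for k in range(1, N + 1)]
low = sum(E[:4]) / sum(E)
print(low)  # stays far above the 0.25 that equipartition would predict
```

Instead of spreading evenly, the energy sloshes back and forth among the first few modes – the non-intuitive behaviour that the original team discovered.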

Although the paper says the work was done by all four scientists, Tsingou’s role was forgotten, and the results became known as the Fermi–Pasta–Ulam problem. It was not until 2008, when French physicist Thierry Dauxois advocated for giving her credit in a Physics Today article, that Tsingou’s contribution was properly acknowledged. Today the finding is called the Fermi–Pasta–Ulam–Tsingou problem.

The year 1953 also saw IBM’s first commercial, fully electronic computer – an IBM 701 – arrive at Los Alamos. Soon the theoretical division had two of these machines, which, alongside the MANIAC, gave the scientists unprecedented computing power. Among those to take advantage of the new devices were Martha Evans (about whom very little is known) and theoretical physicist Francis Harlow, who began to tackle the largely unexplored subject of computational fluid dynamics.

The idea was to use a mesh of cells through which the fluid, represented as particles, would move. This computational method made it possible to solve complex hydrodynamics problems (involving large distortions and compressions of the fluid) in 2D and 3D. Indeed, the method proved so effective that it became a standard tool in plasma physics where it has been applied to every conceivable topic from astrophysical plasmas to fusion energy.
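The key move of the particle-in-cell method – carrying the fluid on particles while solving field equations on a mesh of cells – can be sketched in its simplest form. The fragment below shows only the deposition step, using linear “cloud-in-cell” weighting on a periodic 1D mesh; cell count and particle positions are arbitrary illustrative values, and this is a generic modern variant rather than Evans and Harlow’s original scheme:

```python
# Particle-in-cell deposition sketch: each particle's mass is shared
# between the two nearest mesh cells in proportion to its position
# inside its cell (linear "cloud-in-cell" weighting, periodic mesh).
ncells, L = 8, 8.0
dx = L / ncells
density = [0.0] * ncells

particles = [0.4, 3.5, 3.9, 7.9]       # positions, illustrative only
for p in particles:
    i = int(p / dx)                     # index of the cell containing p
    frac = p / dx - i                   # fractional position inside the cell
    density[i] += 1 - frac              # share unit mass between the two
    density[(i + 1) % ncells] += frac   # nearest cells (periodic wrap)

print(density)  # total mass on the mesh equals the number of particles
```

The forces or fluxes would then be computed on the mesh and interpolated back to move the particles, completing one timestep.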

The resulting internal Los Alamos report – The Particle-in-cell Method for Hydrodynamic Calculations, published in 1955 – showed Evans as first author and acknowledged eight people (including Evans) for the machine calculations. However, while Harlow is remembered as one of the pioneers of computational fluid dynamics, Evans was forgotten.

A clear-cut division of labour?

In an age where women had very limited access to the frontlines of research, the computational war effort brought many female researchers and technical staff in. As their contributions come more into the light, it becomes clearer that their role was not a simple clerical one.

Skilled role Operating the ENIAC required an analytical mind as well as technical skills. (Top) Irwin Goldstein setting the switches on one of the ENIAC’s function tables at the Moore School of Electrical Engineering in 1946. (Middle) Gloria Gordon (later Bolotsky – crouching) and Ester Gerston (standing) wiring the right side of the ENIAC with a new program, c. 1946. (Bottom) Glenn A Beck changing a tube on the ENIAC. Replacing a bad tube meant checking among the ENIAC’s 19,000 possibilities. (Courtesy: US Army / Harold Breaux; US Army / ARL Technical Library; US Army)

There is a view that the coders’ work was “the vital link between the physicist’s concepts (about which the coders more often than not didn’t have a clue) and their translation into a set of instructions that the computer was able to perform, in a language about which, more often than not, the physicists didn’t have a clue either”, as physicists Giovanni Battimelli and Giovanni Ciccotti wrote in 2018 (Eur. Phys. J. H 43 303). But the examples we have seen show that some of the coders had a solid grasp of the physics, and some of the physicists had a good understanding of the machine operation. Rather than a skilled–non-skilled/men–women separation, the division of labour was blurred. Indeed, it was more of an effective collaboration between physicists, mathematicians and engineers.

Even in the early days of the T-5 division before electronic computers existed, Livesay and Ewing, for example, attended maths lectures from von Neumann, and introduced him to punch-card operations. As has been documented in books including Their Day in the Sun by Ruth Howes and Caroline Herzenberg, they also took part in the weekly colloquia held by J Robert Oppenheimer and other project leaders. This shows they should not be dismissed as mere human calculators and machine operators who supposedly “didn’t have a clue” about physics.

Verna Ellingson (Gardiner) is another forgotten coder who worked at Los Alamos. While little information about her can be found, she appears as the last author on a 1955 paper (Science 122 465) written with Metropolis and physicist Joseph Hoffman – “Study of tumor cell populations by Monte Carlo methods”. The next year she was first author of “On certain sequences of integers defined by sieves” with mathematical physicist Roger Lazarus, Metropolis and Ulam (Mathematics Magazine 29 117). She also worked with physicist George Gamow on attempts to discover the code for DNA selection of amino acids, which just shows the breadth of projects she was involved in.

Evans not only worked with Harlow but took part in a 1959 conference on self-organizing systems, where she queried AI pioneer Frank Rosenblatt on his ideas about human and machine learning. Her attendance at such a meeting, in an age when women were not common attendees, implies we should not view her as “just a coder”.

With their many and wide-ranging contributions, it is more than likely that Evans, Gardiner, Tsingou and many others were full-fledged researchers, and were perhaps even the first computational scientists. “These women were doing work that modern computational physicists in the [Los Alamos] lab’s XCP [Weapons Computational Physics] Division do,” says Nicholas Lewis, a historian at Los Alamos. “They needed a deep understanding of both the physics being studied, and of how to map the problem to the particular architecture of the machine being used.”

An evolving identity

What’s in a name Marjory Jones (later Devaney), a mathematician, shown in 1952 punching a program onto paper tape to be loaded into the MANIAC. The name of this role evolved to programmer during the 1950s. (Courtesy: US government / Los Alamos National Laboratory)

In the 1950s there was no computational physics or computer science, so it’s unsurprising that the practitioners of these disciplines went by different names, and that their identity has evolved over the decades since.

1930s–1940s

Originally a “computer” was a person doing calculations by hand or with the help of a mechanical calculator.

Late 1940s – early 1950s

A “coder” was a person who translated mathematical concepts into a set of instructions in machine language. John von Neumann and Herman Goldstine distinguished between “coding” and “planning”: the former was the lower-level work of turning flow diagrams into machine language (and doing the physical configuration), while the latter was the mathematical analysis of the problem.

Meanwhile, an “operator” would physically handle the computer (replacing punch cards, doing the rewiring, etc). In the late-1940s coders were also operators.

As historians note in the book ENIAC in Action this was an age where “It was hard to devise the mathematical treatment without a good knowledge of the processes of mechanical computation…It was also hard to operate the ENIAC without understanding something about the mathematical task it was undertaking.”

For the ENIAC a “programmer” was not a person but “a unit combining different sequences in a coherent computation”. The term would later shift and eventually overlap with the meaning of coder as a person’s job.

1960s

Computer scientist Margaret Hamilton, who led the development of the on-board flight software for NASA’s Apollo program, coined the term “software engineering” to distinguish the practice of designing, developing, testing and maintaining software from the engineering tasks associated with the hardware.

1980s – early 2000s

Use of the term “programmer” for someone who coded computers peaked in the 1980s, but by the 2000s it had given way to other job titles such as various flavours of “developer” or “software architect”.

Early 2010s

A “research software engineer” is a person who combines professional software engineering expertise with an intimate understanding of scientific research.

Overlooked then, overlooked now

Credited or not, these pioneering women and their contributions have been mostly forgotten, and only in recent decades have their roles come to light again. But why were they obscured by history in the first place?

Secrecy and sexism seem to be the main factors at play. For example, Livesay was not allowed to pursue a PhD in mathematics because she was a woman, and in the cases of the many married couples, the team contributions were attributed exclusively to the husband. The existence of the Manhattan Project was publicly announced in 1945, but documents that contain certain nuclear-weapons-related information remain classified today. Because these are likely to remain secret, we will never know the full extent of these pioneers’ contributions.

But another often overlooked reason is the widespread underappreciation of the key role of computational scientists and research software engineers, a term that was only coined just over a decade ago. Even today, these non-traditional research roles end up being undervalued. A 2022 survey by the UK Software Sustainability Institute, for example, showed that only 59% of research software engineers were named as authors, with barely a quarter (24%) mentioned in the acknowledgements or main text, while a sixth (16%) were not mentioned at all.

The separation between those who understand the physics and those who write the code and understand and operate the hardware goes back to the early days of computing (see box above), but it wasn’t entirely accurate even then. People who implement complex scientific computations are not just coders or skilled operators of supercomputers, but truly multidisciplinary scientists who have a deep understanding of the scientific problems, mathematics, computational methods and hardware.

Such people – whatever their gender – play a key role in advancing science and yet remain the unsung heroes of the discoveries their work enables. Perhaps what this story of the forgotten pioneers of computational physics tells us is that some views rooted in the 1950s are still influencing us today. It’s high time we moved on.


‘Father of the Internet’ Vint Cerf expresses concern about the longevity of digital information

22 September 2025 at 10:00

A few weeks ago, I experienced a classic annoyance of modern life: one of my computer games stopped working. The cause? An “update” to the emulator that translates old games into programs that today’s machines can execute. In my case, this update broke the translation process, and the tenuous thread of hardware and software connecting my laptop to the game’s 30-year-old code was severed.

For individuals, failures like this are irritating. But for the wider digital ecosystem, they’re a real problem – so much so, in fact, that Vint Cerf, who’s known as one of the “fathers of the Internet”, made them the subject of his talk at last week’s Heidelberg Laureate Forum (HLF) in Heidelberg, Germany.

“My big worry is that all this digital stuff won’t be there when we would like it to be there, or when our descendants would like to have it,” Cerf said.

How it used to work

Historically, the best ways of preserving information involved writing it on durable materials such as clay tablets, high-quality paper, or a form of animal skin known as vellum. These media, Cerf observed, “have one thing in common: they don’t require electricity to be stored and preserved.”

Digital media, in contrast, are much less robust. “Many of them are magnetic, and the magnetic material wears away after a while,” Cerf explained. Consequently, some old tapes are now so fragile that attempting to read them can actually lift the magnetic material off the surface: “You read it once and that’s it. It’s now transparent tape,” he said.

Being able to read data is just the beginning, though. As my broken computer game shows, you also need programs and equipment that can persuade those data to do things. “That’s often the thing that goes first,” Cerf told me in a press conference after his talk. For example, when Cerf recently tried to retrieve data from an old three-and-a-half-inch floppy disk, he discovered that doing so would require three additional components: a drive that could read the disk, a program that could open the files stored on the disk and an old computer that could run the program. “I needed a whole lot of software help and several stages in order to make that digital content useful,” Cerf said.

Creating ‘digital vellum’

As for how to fix this problem and create a digital version of vellum, Cerf, who has been the “Chief Internet Evangelist” at Google since 2005, listed three ideas that he finds interesting. The first involves a New Jersey, US-based company called SPhotonix that does research and development work in the UK and Switzerland. It’s using lasers to write bits of data into chunks of quartz crystal, which is a very long-lasting medium. However, each crystal is roughly the size of a hockey puck, and Cerf thinks that “real work” still needs to be done to organize the information the material holds.

The second idea is partly inspired by the clay tablets that proved so successful at preserving cuneiform writing from ancient Mesopotamia. Cerabyte, a start-up with facilities in Austria, Germany and the US, has developed a ceramic material that its founders claim could “store all data virtually forever”.

The third idea, and the one that seems to appeal most to Cerf, is to write digital information into DNA. That might sound like an inherently fragile medium, but as Cerf pointed out, “It’s actually a very robust molecule – otherwise, life wouldn’t have persisted for several billion years.” Provided you dehydrate the DNA first, he added, it lasts for “quite a long time”.

The question of how to read such information is not an easy one, and Cerf doesn’t have an answer to it. He is, however, hopeful that someone will find one. At the HLF, where he is such a revered figure that even the journalists want to take photos with him, he issued a call to arms for the young researchers in the audience. “I want you to appreciate the scope of the work that is required to preserve digital things,” Cerf told them. Without that work, he added, “recreating a digital environment in 100 years is not going to be a trivial matter.”

