“Global collaborations for European economic resilience” is the theme of SEMICON Europa 2025. The event comes to Munich, Germany, on 18–21 November and will attract 25,000 semiconductor professionals, who can choose from presentations by more than 200 speakers.
The TechARENA portion of the event will cover a wide range of technology-related issues, including new materials, future computing paradigms and the development of hi-tech skills in the European workforce. There will also be an Executive Forum, featuring leaders from industry and government and covering topics such as silicon geopolitics and the use of artificial intelligence in semiconductor manufacturing.
SEMICON Europa will be held at the Messe München, where it will feature a huge exhibition with over 500 exhibitors from around the world. The exhibition spans three halls; here are some of the companies and product innovations to look out for on the show floor.
Accelerating the future of electro-photonic integration with SmarAct
As the boundaries between electronic and photonic technologies continue to blur, the semiconductor industry faces a growing challenge: how to test and align increasingly complex electro-photonic chip architectures efficiently, precisely, and at scale. At SEMICON Europa 2025, SmarAct will address this challenge head-on with its latest innovation – Fast Scan Align. This is a high-speed and high-precision alignment solution that redefines the limits of testing and packaging for integrated photonics.
Fast Scan Align: SmarAct’s high-speed, high-precision alignment solution redefines the limits of testing and packaging for integrated photonics. (Courtesy: SmarAct)
In the emerging era of heterogeneous integration, electronic and photonic components must be aligned and interconnected with sub-micrometre accuracy. Traditional positioning systems often struggle to deliver both speed and precision, especially when dealing with the delicate coupling between optical and electrical domains. SmarAct’s Fast Scan Align solution bridges this gap by combining modular motion platforms, real-time feedback control, and advanced metrology into one integrated system.
At its core, Fast Scan Align leverages SmarAct’s electromagnetic and piezo-driven positioning stages, which are capable of nanometre-resolution motion in multiple degrees of freedom. Fast Scan Align’s modular architecture allows users to configure systems tailored to their application – from wafer-level testing to fibre-to-chip alignment with active optical coupling. Integrated sensors and intelligent algorithms enable scanning and alignment routines that drastically reduce setup time while improving repeatability and process stability.
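To make the idea of an automated scan-and-align routine concrete, the sketch below shows a generic coarse-raster-then-hill-climb search that maximises coupled optical power. It is an illustration only, not SmarAct’s algorithm or API: the photodetector interface (read_power) and the implied stage moves are hypothetical placeholders, and a real system closes the loop on live readings with travel limits and settling times.

```python
# Illustrative coarse-raster-then-hill-climb alignment loop. The hardware
# interface read_power() is a hypothetical placeholder, not SmarAct's API.
import itertools
import numpy as np

def read_power(x, y):
    """Placeholder photodetector reading of coupled optical power (synthetic peak)."""
    return np.exp(-((x - 3.2)**2 + (y + 1.7)**2) / 2.0)

def coarse_raster(span=10.0, step=1.0):
    """Raster-scan a square region and return the best (x, y) found."""
    grid = np.arange(-span, span + step, step)
    return max(itertools.product(grid, grid), key=lambda p: read_power(*p))

def hill_climb(x, y, step=0.5, min_step=0.01):
    """Refine the coarse estimate with a shrinking-step coordinate search."""
    while step > min_step:
        candidates = [(x, y), (x + step, y), (x - step, y), (x, y + step), (x, y - step)]
        best = max(candidates, key=lambda p: read_power(*p))
        if best == (x, y):
            step /= 2            # no improvement: tighten the search
        else:
            x, y = best          # move towards higher coupled power
    return x, y

x0, y0 = coarse_raster()
print("aligned at", hill_climb(x0, y0))
```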
Fast Scan Align’s compact modules allow a variety of measurement techniques to be integrated, opening up unprecedented possibilities. This flexibility is becoming decisive as the level of integration of complex electro-photonic chips continues to rise.
Beyond wafer-level testing and packaging, extremely precise wafer positioning is more crucial than ever for the highly integrated chips of the future. SmarAct’s PICOSCALE interferometer addresses this challenge by delivering picometre-level displacement measurements directly at the point of interest.
When combined with SmarAct’s precision wafer stages, the PICOSCALE interferometer ensures highly accurate motion tracking and closed-loop control during dynamic alignment processes. This synergy between motion and metrology gives users unprecedented insight into the mechanical and optical behaviour of their devices – which is a critical advantage for high-yield testing of photonic and optoelectronic wafers.
Visitors to SEMICON Europa will also experience how all of SmarAct’s products – from motion and metrology components to modular systems and up to turn-key solutions – integrate seamlessly, offering intuitive operation, full automation capability, and compatibility with laboratory and production environments alike.
Optimized pressure monitoring: Efficient workflows with Thyracont’s VD800 digital compact vacuum meters
Thyracont Vacuum Instruments will be showcasing its precision vacuum metrology systems in exhibition hall C1. Made in Germany, the company’s broad portfolio combines diverse measurement technologies – including piezo, Pirani, capacitive, cold cathode, and hot cathode – to deliver reliable results across a pressure range from 2000 to 3 × 10⁻¹¹ mbar.
VD800: Thyracont’s new series combines high accuracy with a highly intuitive user interface, defining the next generation of compact vacuum meters. (Courtesy: Thyracont)
Front-and-centre at SEMICON Europa will be Thyracont’s new series of VD800 compact vacuum meters. These instruments provide precise, on-site pressure monitoring in industrial and research environments. Featuring a direct pressure display and real-time pressure graphs, the VD800 series is ideal for service and maintenance tasks, laboratory applications, and test setups.
The VD800 series combines high accuracy with a highly intuitive user interface, which presents real-time measurement values, pressure diagrams, and minimum and maximum pressures – all at a glance. The VD800’s 4+1 membrane keypad ensures quick access to all functions, while USB-C and optional Bluetooth LE connectivity deliver seamless data readout and export. The large internal data logger can store over 10 million measured values with real-time clock (RTC) timestamps, with each measurement series saved as a separate file.
Data sampling rates can be set from 20 ms to 60 s to achieve dynamic pressure tracking or long-term measurements. Leak rates can be measured directly by monitoring the rise in pressure in the vacuum system. Intelligent energy management gives the meters extended battery life and longer operation times. Battery charging is done conveniently via USB-C.
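As a numerical illustration of the pressure-rise method, the short sketch below applies the standard relation Q = V·Δp/Δt to a logged pressure series. The chamber volume and pressure values are invented for illustration and are not Thyracont data.

```python
# Pressure-rise leak-rate estimate, Q = V * dp/dt, using the kind of time-stamped
# pressure series a data logger exports. All numbers are invented for illustration.
import numpy as np

volume_l = 25.0                                        # chamber volume in litres (assumed known)
t_s = np.array([0, 60, 120, 180, 240])                 # time stamps in seconds
p_mbar = np.array([1.0e-3, 1.6e-3, 2.2e-3, 2.8e-3, 3.4e-3])  # logged pressures in mbar

dp_dt = np.polyfit(t_s, p_mbar, 1)[0]                  # linear-fit slope in mbar/s
leak_rate = volume_l * dp_dt                           # mbar·l/s
print(f"leak rate ≈ {leak_rate:.1e} mbar·l/s")
```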
The vacuum meters are available in several sensor configurations, making them adaptable to a wide range of uses. Model VD810 integrates a piezo ceramic sensor for gas-type-independent measurements in rough vacuum applications. This sensor is insensitive to contamination, making it suitable for harsh industrial environments. The VD810 measures absolute pressure from 2000 to 1 mbar and relative pressure from −1060 to +1200 mbar.
Model VD850 integrates a piezo/Pirani combination sensor, which delivers high resolution and accuracy in the rough and fine vacuum ranges. Optimized temperature compensation ensures stable measurements in the absolute pressure range from 1200 to 5 × 10⁻⁵ mbar and in the relative pressure range from −1060 to +340 mbar.
The model VD800 is a standalone meter designed for use with Thyracont’s USB-C vacuum transducers, which are available in two models. The VSRUSB USB-C transducer is a piezo/Pirani combination sensor that measures absolute pressure in the 2000 to 5 × 10⁻⁵ mbar range. The other is the VSCUSB USB-C transducer, which measures absolute pressures from 2000 down to 1 mbar and has a relative pressure range from −1060 to +1200 mbar. A USB-C cable connects the transducer to the VD800 for quick and easy data retrieval. The USB-C transducers are ideal for hard-to-reach areas of vacuum systems. The transducers can be activated while a process is running, enabling continuous monitoring and improved service diagnostics.
With its blend of precision, flexibility, and ease of use, the Thyracont VD800 series defines the next generation of compact vacuum meters. The devices’ intuitive interface, extensive data capabilities, and modern connectivity make them an indispensable tool for laboratories, service engineers, and industrial operators alike.
To experience the future of vacuum metrology in Munich, visit Thyracont at SEMICON Europa hall C1, booth 752. There you will discover how the VD800 series can optimize your pressure monitoring workflows.
Looking ahead to the future of machine learning: (clockwise from top left) Jay Lee, Jimeng Sun, Pierre Gentine and Kyle Cranmer.
IOP Publishing’s Machine Learning series is the world’s first open-access journal series dedicated to the application and development of machine learning (ML) and artificial intelligence (AI) for the sciences.
Part of the series is Machine Learning: Science and Technology, launched in 2019, which bridges applications and advances in machine learning across the sciences. Machine Learning: Earth is dedicated to the application of ML and AI across all areas of the Earth, environmental and climate sciences, while Machine Learning: Health covers the healthcare, medical, biological, clinical and health sciences, and Machine Learning: Engineering focuses on applying AI and non-traditional machine learning to the most complex engineering challenges.
Here, the editors-in-chief (EiC) of the four journals discuss the growing importance of machine learning and their plans for the future.
Kyle Cranmer is a particle physicist and data scientist at the University of Wisconsin-Madison and is EiC of Machine Learning: Science and Technology (MLST). Pierre Gentine is a geophysicist at Columbia University and is EiC of Machine Learning: Earth. Jimeng Sun is a biophysicist at the University of Illinois at Urbana-Champaign and is EiC of Machine Learning: Health. Mechanical engineer Jay Lee is from the University of Maryland and is EiC of Machine Learning: Engineering.
To what do you attribute the huge growth over the past decade in research into, and using, machine learning?
Kyle Cranmer (KC): It is due to a convergence of multiple factors. The initial success of deep learning was driven largely by benchmark datasets, advances in computing with graphics processing units, and some clever algorithmic tricks. Since then, we’ve seen a huge investment in powerful, easy-to-use tools that have dramatically lowered the barrier to entry and driven extraordinary progress.
Pierre Gentine (PG): Machine learning has been transforming many fields of physics, as it can accelerate physics simulations, better handle diverse sources of data (multimodality) and help us make better predictions.
Jimeng Sun (JS): Over the past decade, we have seen machine learning models consistently reach — and in some cases surpass — human-level performance on real-world tasks. This is not just in benchmark datasets, but in areas that directly impact operational efficiency and accuracy, such as medical imaging interpretation, clinical documentation, and speech recognition. Once ML proved it could perform reliably at human levels, many domains recognized its potential to transform labour-intensive processes.
Jay Lee (JL): Traditionally, ML growth is based on the development of three elements: algorithms, big data, and computing. The past decade’s growth in ML research is due to the perfect storm of abundant data, powerful computing, open tools, commercial incentives, and groundbreaking discoveries—all occurring in a highly interconnected global ecosystem.
What areas of machine learning excite you the most and why?
KC: The advances in generative AI and self-supervised learning are very exciting. By generative AI, I don’t mean Large Language Models — though those are exciting too — but probabilistic ML models that can be useful in a huge number of scientific applications. The advances in self-supervised learning also allow us to imagine potential uses of ML beyond well-understood supervised learning tasks.
PG: I am very interested in the use of ML for climate simulations and fluid dynamics simulations.
JS: The emergence of agentic systems in healthcare — AI systems that can reason, plan, and interact with humans to accomplish complex goals. A compelling example is in clinical trial workflow optimization. An agentic AI could help coordinate protocol development, automatically identify eligible patients, monitor recruitment progress, and even suggest adaptive changes to trial design based on interim data. This isn’t about replacing human judgment — it’s about creating intelligent collaborators that amplify expertise, improve efficiency, and ultimately accelerate the path from research to patient benefit.
JL: One area is generative and multimodal ML — integrating text, images, video and more — which is transforming human–AI interaction, robotics and autonomous systems. Equally exciting is applying ML to non-traditional domains like semiconductor fabs, smart grids and electric vehicles, where complex engineering systems demand new kinds of intelligence.
What vision do you have for your journal in the coming years?
KC: The need for a venue to propagate advances in AI/ML in the sciences is clear. The large AI conferences are under stress, and their review system is designed to be a filter not a mechanism to ensure quality, improve clarity and disseminate progress. The large AI conferences also aren’t very welcoming to user-inspired research, often casting that work as purely applied. Similarly, innovation in AI/ML often takes a back seat in physics journals, which slows the propagation of those ideas to other fields. My vision for MLST is to fill this gap and nurture the community that embraces AI/ML research inspired by the physical sciences.
PG: I hope we can demonstrate that machine learning is more than a nice tool but that it can play a fundamental role in physics and Earth sciences, especially when it comes to better simulating and understanding the world.
JS: I see Machine Learning: Health becoming the premier venue for rigorous ML–health research — a place where technical novelty and genuine clinical impact go hand in hand. We want to publish work that not only advances algorithms but also demonstrates clear value in improving health outcomes and healthcare delivery. Equally important, we aim to champion open and reproducible science. That means encouraging authors to share code, data, and benchmarks whenever possible, and setting high standards for transparency in methods and reporting. By doing so, we can accelerate the pace of discovery, foster trust in AI systems, and ensure that our field’s breakthroughs are accessible to — and verifiable by — the global community.
JL: Machine Learning: Engineering envisions becoming the global platform where ML meets engineering. By fostering collaboration, ensuring rigour and interpretability, and focusing on real-world impact, we aim to redefine how AI addresses humanity’s most complex engineering challenges.
Games played under the laws of quantum mechanics dissipate less energy than their classical equivalents. This is the finding of researchers at Singapore’s Nanyang Technological University (NTU), who worked with colleagues in the UK, Austria and the US to apply the mathematics of game theory to quantum information. The researchers also found that for more complex game strategies, the quantum-classical energy difference can increase without bound, raising the possibility of a “quantum advantage” in energy dissipation.
Game theory is the field of mathematics that aims to formally understand the payoff or gains that a person or other entity (usually called an agent) will get from following a certain strategy. Concepts from game theory are often applied to studies of quantum information, especially when trying to understand whether agents who can use the laws of quantum physics can achieve a better payoff in the game.
In the latest work, which is published in Physical Review Letters, Jayne Thompson, Mile Gu and colleagues approached the problem from a different direction. Rather than focusing on differences in payoffs, they asked how much energy must be dissipated to achieve identical payoffs for games played under the laws of classical versus quantum physics. In doing so, they were guided by Landauer’s principle, an important concept in thermodynamics and information theory that states that there is a minimum energy cost to erasing a piece of information.
This Landauer minimum is known to hold for both classical and quantum systems. However, in practice systems will spend more than the minimum energy erasing memory to make space for new information, and this energy will be dissipated as heat. What the NTU team showed is that this extra heat dissipation can be reduced in the quantum system compared to the classical one.
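For reference, Landauer’s principle sets the minimum heat that must be dissipated to erase one bit of information at temperature T:

```latex
% Landauer bound: minimum heat dissipated when erasing one bit at temperature T
E_{\mathrm{min}} = k_{\mathrm{B}} T \ln 2 \approx 2.9 \times 10^{-21}\,\mathrm{J}
\quad \text{at } T = 300\,\mathrm{K}
```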
Planning for future contingencies
To understand why, consider that when a classical agent creates a strategy, it must plan for all possible future contingencies. This means it stores possibilities that never occur, wasting resources. Thompson explains this with a simple analogy. Suppose you are packing to go on a day out. Because you are not sure what the weather is going to be, you must pack items to cover all possible weather outcomes. If it’s sunny, you’d like sunglasses. If it rains, you’ll need your umbrella. But if you only end up using one of these items, you’ll have wasted space in your bag.
“It turns out that the same principle applies to information,” explains Thompson. “Depending on future outcomes, some stored information may turn out to be unnecessary – yet an agent must still maintain it to stay ready for any contingency.”
For a classical system, this can be a very wasteful process. Quantum systems, however, can use superposition to store past information more efficiently. When systems in a quantum superposition are measured, they probabilistically reveal an outcome associated with only one of the states in the superposition. Hence, while superposition can be used to store both pasts, upon measurement all excess information is automatically erased “almost as if they had never stored this information at all,” Thompson explains.
The upshot is that because information erasure has close ties to energy dissipation, this gives quantum systems an energetic advantage. “This is a fantastic result focusing on the physical aspect that many other approaches neglect,” says Vlatko Vedral, a physicist at the University of Oxford, UK who was not involved in the research.
Implications of the research
Gu and Thompson say their result could have implications for the large language models (LLMs) behind popular AI tools such as ChatGPT, as it suggests there might be theoretical advantages, from an energy consumption point of view, in using quantum computers to run them.
Another, more foundational question they hope to understand regarding LLMs is the inherent asymmetry in their behaviour. “It is likely a lot more difficult for an LLM to write a book from back cover to front cover, as opposed to in the more conventional temporal order,” Thompson notes. When considered from an information-theoretic point of view, the two tasks are equivalent, making this asymmetry somewhat surprising.
In Thompson and Gu’s view, taking waste into consideration could shed light on this asymmetry. “It is likely we have to waste more information to go in one direction over the other,” Thompson says, “and we have some tools here which could be used to analyse this”.
For Vedral, the result also has philosophical implications. If quantum agents are more optimal, he says, it “surely is telling us that the most coherent picture of the universe is the one where the agents are also quantum and not just the underlying processes that they observe”.
This article was amended on 19 November 2025 to correct a reference to the minimum energy cost of erasing information. It is the Landauer minimum, not the Landau minimum.
Complex systems model real-world behaviour that is dynamic and often unpredictable. They are challenging to simulate because of nonlinearity, where small changes in conditions can lead to disproportionately large effects; many interacting variables, which make computational modelling cumbersome; and randomness, where outcomes are probabilistic. Machine learning is a powerful tool for understanding complex systems. It can be used to find hidden relationships in high-dimensional data and predict the future state of a system based on previous data.
This research develops a novel machine learning approach for complex systems that allows the user to extract a few collective descriptors of the system, referred to as inherent structural variables. The researchers used an autoencoder (a type of machine learning tool) to examine snapshots of how atoms are arranged in a system at any moment (called instantaneous atomic configurations). Each snapshot is then matched to a more stable version of that structure (an inherent structure), which represents the system’s underlying shape or pattern after thermal noise is removed. These inherent structural variables enable the analysis of structural transitions both in and out of equilibrium and the computation of high-resolution free-energy landscapes. These are detailed maps that show how a system’s energy changes as its structure or configuration changes, helping researchers understand stability, transitions, and dynamics in complex systems.
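As a rough illustration of the dimensionality reduction involved, the sketch below trains a generic autoencoder to compress per-snapshot structural descriptors into two latent variables. It is a minimal example under assumed inputs: the descriptors are random stand-ins, and the architecture and training protocol are not those used by the authors.

```python
# Generic autoencoder that compresses per-snapshot structural descriptors into a
# few latent "collective" variables. Schematic illustration only; not the paper's model.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_features, n_latent=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, n_latent),          # low-dimensional structural variables
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 64), nn.ReLU(),
            nn.Linear(64, n_features),        # reconstruct the input descriptors
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

x = torch.randn(1000, 30)                     # stand-in descriptors of inherent structures
model = AutoEncoder(n_features=30)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(200):
    optimiser.zero_grad()
    reconstruction, latent = model(x)
    loss = nn.functional.mse_loss(reconstruction, x)   # reconstruction error drives training
    loss.backward()
    optimiser.step()

print("latent structural variables:", model(x)[1].shape)   # torch.Size([1000, 2])
```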
The model is versatile, and the authors demonstrate how it can be applied to metal nanoclusters and protein structures. In the case of Au147 nanoclusters (well-organised structures made up of 147 gold atoms), the inherent structural variables reveal three main types of stable structures that the gold nanocluster can adopt: fcc (face-centred cubic), Dh (decahedral), and Ih (icosahedral). These structures represent different stable states that a nanocluster can switch between, and on the high-resolution free-energy landscape they appear as valleys. Moving from one valley to another isn’t easy: there are narrow paths or barriers between them, known as kinetic bottlenecks.
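The link between the learned variables and the free-energy landscape is the standard statistical-mechanics estimate: if p(s) is the sampled probability density of the structural variables s, then

```latex
% Free energy as a function of the learned structural variables s, estimated from
% their sampled probability density p(s); the fcc, Dh and Ih basins appear as minima
F(s) = -k_{\mathrm{B}} T \ln p(s) + \mathrm{const.}
```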
The researchers validated their machine learning model using Markov state models, which are mathematical tools that help analyse how a system moves between different states over time, and electron microscopy, which images atomic structures and can confirm that the predicted structures exist in the gold nanoclusters. The approach also captures non-equilibrium melting and freezing processes, offering insights into polymorph selection and metastable states. Scalability is demonstrated up to Au309 clusters.
The generality of the method is further demonstrated by applying it to the bradykinin peptide, a completely different type of system, identifying distinct structural motifs and transitions. Applying the method to a biological molecule provides further evidence that the machine learning approach is a flexible, powerful technique for studying many kinds of complex systems. This work contributes to machine learning strategies, as well as experimental and theoretical studies of complex systems, with potential applications across liquids, glasses, colloids, and biomolecules.
The Standard Model of particle physics is a very well-tested theory that describes the fundamental particles and their interactions. However, it does have several key limitations. For example, it doesn’t account for dark matter or why neutrinos have masses.
One of the main aims of experimental particle physics at the moment is therefore to search for signs of new physical phenomena beyond the Standard Model.
Finding something new like this would point us towards a better theoretical model of particle physics: one that can explain things that the Standard Model isn’t able to.
These searches often involve looking for rare or unexpected signals in high-energy particle collisions such as those at CERN’s Large Hadron Collider (LHC).
In a new paper published by the CMS collaboration, a new analysis method was used to search for new particles produced in proton–proton collisions at the LHC.
These particles would decay into two jets, but with unusual internal structure not typical of known particles like quarks or gluons.
The researchers used advanced machine learning techniques to identify jets with different substructures, applying various anomaly detection methods to maximise sensitivity to unknown signals.
Unlike traditional strategies, anomaly detection methods allow the AI models to identify anomalous patterns in the data without being provided specific simulated examples, giving them increased sensitivity to a wider range of potential new particles.
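As a toy illustration of the principle (not the CMS analysis, which uses dedicated anomaly-detection architectures trained on collision data), the sketch below flags outlying jets in a synthetic set of substructure-like features without ever showing the model a simulated signal sample.

```python
# Toy unsupervised anomaly scoring on jet-substructure-like features. Feature
# values are synthetic; this only illustrates flagging outliers without signal
# simulation, not the CMS method.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Background-like jets: columns might represent e.g. jet mass and two substructure variables
background = rng.normal(loc=[30.0, 0.70, 25.0], scale=[10.0, 0.10, 8.0], size=(50_000, 3))
# A handful of jets with unusual internal structure
anomalous = rng.normal(loc=[120.0, 0.25, 110.0], scale=[5.0, 0.05, 5.0], size=(50, 3))

model = IsolationForest(n_estimators=200, random_state=0).fit(background)
scores = model.score_samples(np.vstack([background, anomalous]))   # lower = more outlying
threshold = np.quantile(scores, 0.001)                             # keep the 0.1% most outlying jets
print("jets flagged as anomalous:", int((scores < threshold).sum()))
```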
This time, they didn’t find any significant deviations from expected background values. Although no new particles were found, the results enabled the team to put several new theoretical models to the test for the first time. They were also able to set upper bounds on the production rates of several hypothetical particles.
Most importantly, the study demonstrates that machine learning can significantly enhance the sensitivity of searches for new physics, offering a powerful tool for future discoveries at the LHC.
Classical clocks have to obey the second law of thermodynamics: the higher their precision, the more entropy they produce. For a while, it seemed like quantum clocks might beat this system, at least in theory. This is because although quantum fluctuations produce no entropy, if you can count those fluctuations as clock “ticks”, you can make a clock with nonzero precision. Now, however, a collaboration of researchers across Europe has pinned down where the entropy-precision trade-off balances out: it’s in the measurement process. As project leader Natalia Ares observes, “There’s no such thing as a free lunch.”
The clock the team used to demonstrate this principle consists of a pair of quantum dots coupled by a thin tunnelling barrier. In this double quantum dot system, a “tick” occurs whenever an electron tunnels from one side of the system to the other, through both dots. Applying a bias voltage gives ticks a preferred direction.
This might not seem like the most obvious kind of clock. Indeed, as an actual timekeeping device, collaboration member Florian Meier describes it as “quite bad”. However, Ares points out that although the tunnelling process is random (stochastic), the period between ticks does have a mean and a standard deviation. Hence, given enough ticks, the number of ticks recorded will tell you something about how much time has passed.
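A quick numerical check of that statement: if the tick intervals are independent random variables with mean μ and standard deviation σ, then the elapsed-time estimate from N ticks has a relative spread of (σ/μ)/√N. The toy simulation below, with exponentially distributed intervals as a stand-in for uncorrelated tunnelling events, reproduces this scaling.

```python
# Toy stochastic clock: tick intervals are random (exponential, as for uncorrelated
# tunnelling events), yet counting N ticks estimates elapsed time with a relative
# spread of (sigma/mu)/sqrt(N), which for an exponential distribution is 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(1)
mean_interval, n_ticks, n_trials = 1.0, 10_000, 500

# Each trial: total elapsed time after n_ticks ticks
totals = rng.exponential(mean_interval, size=(n_trials, n_ticks)).sum(axis=1)

print(f"measured relative spread: {totals.std() / totals.mean():.4f}")
print(f"expected 1/sqrt(N):       {1 / np.sqrt(n_ticks):.4f}")
```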
In any case, Meier adds, they were not setting out to build the most accurate clock. Instead, they wanted to build a playground to explore basic principles of energy dissipation and clock precision, and for that, it works really well. “The really cool thing I like about what they did was that with that particular setup, you can really pinpoint the entropy dissipation of the measurement somehow in this quantum dot,” says Meier, a PhD student at the Technical University in Vienna, Austria. “I think that’s really unique in the field.”
Calculating the entropy
To measure the entropy of each quantum tick, the researchers measured the voltage drop (and associated heat dissipation) for each electron tunnelling through the double quantum dot. Vivek Wadhia, a DPhil student in Ares’s lab at the University of Oxford, UK who performed many of the measurements, points out that this single unit of charge does not equate to very much entropy. However, measuring the entropy of the tunnelling electron was not the full story.
Timekeeping: Vivek Wadhia working on the clock used in the experiment. (Courtesy: Wadhia et al./APS 2025)
Because the ticks of the quantum clock were, in effect, continuously monitored by the environment, the coherence time for each quantum tunnelling event was very short. However, because the time on this clock could not be observed directly by humans – unlike, say, the hands of a mechanical clock – the researchers needed another way to measure and record each tick.
For this, they turned to the electronics they were using in the lab and compared the power in versus the power out on a macroscopic scale. “That’s the cost of our measurement, right?” says Wadhia, adding that this cost includes both the measuring and recording of each tick. He stresses that they were not trying to find the most thermodynamically efficient measurement technique: “The idea was to understand how the readout compares to the clockwork.”
This classical entropy associated with measuring and recording each tick turns out to be nine orders of magnitude larger than the quantum entropy of a tick – more than enough for the system to operate as a clock with some level of precision. “The interesting thing is that such simple systems sometimes reveal how you can maybe improve precision at a very low cost thermodynamically,” Meier says.
As a next step, Ares plans to explore different arrangements of quantum dots, using Meier’s previous theoretical work to improve the clock’s precision. “We know that, for example, clocks in nature are not that energy intensive,” Ares tells Physics World. “So clearly, for biology, it is possible to run a lot of processes with stochastic clocks.”
When you look back at the early days of computing, some familiar names pop up, including John von Neumann, Nicholas Metropolis and Richard Feynman. But they were not lonely pioneers – they were part of a much larger group, using mechanical and then electronic computers to do calculations that had never been possible before.
These people, many of whom were women, were the first scientific programmers and computational scientists. Skilled in the complicated operation of early computing devices, they often had degrees in maths or science, and were an integral part of research efforts. And yet, their fundamental contributions are mostly forgotten.
This was in part because of their gender – it was an age when sexism was rife, and it was standard for women to be fired from their job after getting married. However, there is another important factor that is often overlooked, even in today’s scientific community – people in technical roles are often underappreciated and underacknowledged, even though they are the ones who make research possible.
Human and mechanical computers
Originally, a “computer” was a human being who did calculations by hand or with the help of a mechanical calculator. It is thought that the world’s first computational lab was set up in 1937 at Columbia University. But it wasn’t until the Second World War that the demand for computation really exploded, driven by the need for artillery calculations, new technologies and code breaking.
Human computers The term “computer” originally referred to people who performed calculations by hand. Here, Kay McNulty, Alyse Snyder and Sis Stump operate the differential analyser in the basement of the Moore School of Electrical Engineering, University of Pennsylvania, circa 1942–1945. (Courtesy: US government)
In the US, the development of the atomic bomb during the Manhattan Project (established in 1943) required huge computational efforts, so it wasn’t long before the New Mexico site had a hand-computing group. Called the T-5 group of the Theoretical Division, it initially consisted of about 20 people. Most were women, including the spouses of other scientific staff. Among them was Mary Frankel, a mathematician married to physicist Stan Frankel; mathematician Augusta “Mici” Teller who was married to Edward Teller, the “father of the hydrogen bomb”; and Jean Bacher, the wife of physicist Robert Bacher.
As the war continued, the T-5 group expanded to include civilian recruits from the nearby towns and members of the Women’s Army Corps. Its staff worked around the clock, using printed mathematical tables and desk calculators in four-hour shifts – but that was not enough to keep up with the computational needs for bomb development. In the early spring of 1944, IBM punch-card machines were brought in to supplement the human power. They became so effective that the machines were soon being used for all large calculations, 24 hours a day, in three shifts.
The computational group continued to grow, and among the new recruits were Naomi Livesay and Eleonor Ewing. Livesay held an advanced degree in mathematics and had done a course in operating and programming IBM electric calculating machines, making her an ideal candidate for the T-5 division. She in turn recruited Ewing, a fellow mathematician who was a former colleague. The two young women supervised the running of the IBM machines around the clock.
The frantic pace of the T-5 group continued until the end of the war in September 1945. The development of the atomic bomb required an immense computational effort, which was made possible through hand and punch-card calculations.
Electronic computers
Shortly after the war ended, the first fully electronic, general-purpose computer – the Electronic Numerical Integrator and Computer (ENIAC) – became operational at the University of Pennsylvania, following two years of development. The project had been led by physicist John Mauchly and electrical engineer J Presper Eckert. The machine was operated and coded by six women – mathematicians Betty Jean Jennings (later Bartik); Kathleen, or Kay, McNulty (later Mauchly, then Antonelli); Frances Bilas (Spence); Marlyn Wescoff (Meltzer) and Ruth Lichterman (Teitelbaum); as well as Betty Snyder (Holberton) who had studied journalism.
World first The ENIAC was the first programmable, electronic, general-purpose digital computer. It was built at the US Army’s Ballistic Research Laboratory in 1945, then moved to the University of Pennsylvania in 1946. Its initial team of six coders and operators were all women, including Betty Jean Jennings (later Bartik – left of photo) and Frances Bilas (later Spence – right of photo). They are shown preparing the computer for Demonstration Day in February 1946. (Courtesy: US Army/ ARL Technical Library)
Polymath John von Neumann also got involved when looking for more computing power for projects at the new Los Alamos Laboratory, established in New Mexico in 1947. In fact, although originally designed to solve ballistic trajectory problems, the first problem to be run on the ENIAC was “the Los Alamos problem” – a thermonuclear feasibility calculation for Teller’s group studying the H-bomb.
As in the Manhattan Project, several husband-and-wife teams worked on the ENIAC, the most famous being von Neumann and his wife Klara Dán, and mathematicians Adele and Herman Goldstine. Dán von Neumann in particular worked closely with Nicholas Metropolis, who alongside mathematician Stanislaw Ulam had coined the term Monte Carlo to describe numerical methods based on random sampling. Indeed, between 1948 and 1949 Dán von Neumann and Metropolis ran the first series of Monte Carlo simulations on an electronic computer.
Work began on a new machine at Los Alamos in 1948 – the Mathematical Analyzer Numerical Integrator and Automatic Computer (MANIAC) – which ran its first large-scale hydrodynamic calculation in March 1952. Many of its users were physicists, and its operators and coders included mathematicians Mary Tsingou (later Tsingou-Menzel), Marjorie Jones (Devaney) and Elaine Felix (Alei); plus Verna Ellingson (later Gardiner) and Lois Cook (Leurgans).
Early algorithms
The Los Alamos scientists tried all sorts of problems on the MANIAC, including a chess-playing program – the first documented case of a machine defeating a human at the game. However, two of these projects stand out because they had profound implications on computational science.
In 1953 the Tellers, together with Metropolis and physicists Arianna and Marshall Rosenbluth, published the seminal article “Equation of state calculations by fast computing machines” (J. Chem. Phys. 21 1087). The work introduced the ideas behind the “Metropolis (later renamed Metropolis–Hastings) algorithm”, which is a Monte Carlo method that is based on the concept of “importance sampling”. (While Metropolis was involved in the development of Monte Carlo methods, it appears that he did not contribute directly to the article, but provided access to the MANIAC nightshift.) This is the progenitor of the Markov Chain Monte Carlo methods, which are widely used today throughout science and engineering.
Marshall later recalled how the research came about when he and Arianna had proposed using the MANIAC to study how solids melt (AIP Conf. Proc. 690 22).
A mind for chess Paul Stein (left) and Nicholas Metropolis play “Los Alamos” chess against the MANIAC. “Los Alamos” chess was a simplified version of the game, with the bishops removed to reduce the MANIAC’s processing time between moves. The computer still needed about 20 minutes between moves. The MANIAC became the first computer to beat a human opponent at chess in 1956. (Courtesy: US government / Los Alamos National Laboratory)
Edward Teller meanwhile had the idea of using statistical mechanics and taking ensemble averages instead of following detailed kinematics for each individual disk, and Mici helped with programming during the initial stages. However, the Rosenbluths did most of the work, with Arianna translating and programming the concepts into an algorithm.
The 1953 article is remarkable, not only because it led to the Metropolis algorithm, but also as one of the earliest examples of using a digital computer to simulate a physical system. The main innovation of this work was in developing “importance sampling”. Instead of sampling from random configurations, it samples with a bias toward physically important configurations which contribute more towards the integral.
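The acceptance rule at the heart of the method fits in a few lines. The sketch below applies it to a toy 1D double-well potential rather than the 1953 paper’s 2D hard disks, purely to show how importance sampling steers the random walk towards physically important configurations.

```python
# Minimal Metropolis sampler for a 1D double-well potential: propose a small
# random move and accept it with probability min(1, exp(-dE/kT)), so that
# configurations are visited in proportion to their Boltzmann weight.
# (The original 1953 calculation sampled 2D hard disks on the MANIAC.)
import math
import random

def energy(x):
    return (x**2 - 1.0)**2          # double well with minima at x = +1 and x = -1

kT = 0.3
x = 0.0
samples = []
for step in range(100_000):
    x_trial = x + random.uniform(-0.5, 0.5)            # trial move
    dE = energy(x_trial) - energy(x)
    if dE <= 0 or random.random() < math.exp(-dE / kT):
        x = x_trial                                      # accept; otherwise keep old x
    samples.append(x)

left_fraction = sum(1 for s in samples if s < 0) / len(samples)
print(f"fraction of samples in the left well: {left_fraction:.2f}")   # ~0.5 by symmetry
```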
Furthermore, the article also introduced another computational trick, known as “periodic boundary conditions” (PBCs): a set of conditions which are often used to approximate an infinitely large system by using a small part known as a “unit cell”. Both importance sampling and PBCs went on to become workhorse methods in computational physics.
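The PBC bookkeeping itself is equally compact: wrap coordinates back into the unit cell and take the minimum-image separation when computing distances, as in this illustrative snippet.

```python
# Periodic boundary conditions in two small routines: wrap a coordinate back into
# the unit cell, and apply the minimum-image convention to a separation vector.
import numpy as np

def wrap(positions, box_length):
    """Map particle coordinates back into the primary cell [0, L)."""
    return positions % box_length

def minimum_image(r_ij, box_length):
    """Shortest separation between two particles across periodic images."""
    return r_ij - box_length * np.round(r_ij / box_length)

L = 10.0
r1, r2 = np.array([0.5, 9.8, 5.0]), np.array([9.7, 0.3, 5.2])
print(minimum_image(r1 - r2, L))   # -> [0.8, -0.5, -0.2], not [-9.2, 9.5, -0.2]
```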
In the summer of 1953, physicist Enrico Fermi, Ulam, Tsingou and physicist John Pasta also made a significant breakthrough using the MANIAC. They ran a “numerical experiment” as part of a series meant to illustrate possible uses of electronic computers in studying various physical phenomena.
The team modelled a 1D chain of oscillators with a small nonlinearity to see if it would behave as hypothesized, reaching an equilibrium with the energy redistributed equally across the modes (doi.org/10.2172/4376203). However, their work showed that this was not guaranteed for small perturbations – a non-trivial and non-intuitive observation that would not have been apparent without the simulations. It is the first example of a physics discovery made not by theoretical or experimental means, but through a computational approach. It would later lead to the discovery of solitons and integrable models, the development of chaos theory, and a deeper understanding of ergodic limits.
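In its most frequently quoted (quadratic, or “α”) form, the chain they simulated obeys

```latex
% The alpha-FPUT chain: for alpha = 0 the normal modes never exchange energy, so
% the question was whether a small alpha drives the system to equipartition
\ddot{x}_n = (x_{n+1} - 2x_n + x_{n-1})
  + \alpha\left[(x_{n+1} - x_n)^2 - (x_n - x_{n-1})^2\right]
```

where α sets the strength of the nonlinearity.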
Although the paper says the work was done by all four scientists, Tsingou’s role was forgotten, and the results became known as the Fermi–Pasta–Ulam problem. It was not until 2008, when French physicist Thierry Dauxois advocated for giving her credit in a Physics Today article, that Tsingou’s contribution was properly acknowledged. Today the finding is called the Fermi–Pasta–Ulam–Tsingou problem.
The year 1953 also saw IBM’s first commercial, fully electronic computer – an IBM 701 – arrive at Los Alamos. Soon the theoretical division had two of these machines, which, alongside the MANIAC, gave the scientists unprecedented computing power. Among those to take advantage of the new devices were Martha Evans (about whom very little is known) and theoretical physicist Francis Harlow, who began to tackle the largely unexplored subject of computational fluid dynamics.
The idea was to use a mesh of cells through which the fluid, represented as particles, would move. This computational method made it possible to solve complex hydrodynamics problems (involving large distortions and compressions of the fluid) in 2D and 3D. Indeed, the method proved so effective that it became a standard tool in plasma physics where it has been applied to every conceivable topic from astrophysical plasmas to fusion energy.
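The defining steps of a particle-in-cell code are depositing particle quantities onto the mesh and gathering mesh-defined fields back to the particles. The 1D fragment below shows only that skeleton, with nearest-grid-point weighting and a made-up mesh field; it is a generic sketch, not Evans and Harlow’s original scheme.

```python
# Core particle-in-cell bookkeeping in 1D: deposit particle mass onto a mesh and
# gather a mesh-defined field back to the particle positions. A real PIC code adds
# a field solve and a time-stepper; this is only the deposit/gather skeleton.
import numpy as np

n_cells, box = 16, 1.0
dx = box / n_cells
rng = np.random.default_rng(2)
x = rng.uniform(0, box, size=1000)                 # particle positions

# Deposit: accumulate particle mass into the cell each particle occupies
cells = np.floor(x / dx).astype(int) % n_cells
density = np.bincount(cells, minlength=n_cells) / dx

# Gather: interpolate a mesh field (here a made-up sinusoid) back to the particles
mesh_force = np.sin(2 * np.pi * (np.arange(n_cells) + 0.5) * dx)
force_on_particles = mesh_force[cells]

print("cell densities:", density[:4], "...")
print("force on first particle:", force_on_particles[0])
```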
The resulting internal Los Alamos report – The Particle-in-cell Method for Hydrodynamic Calculations, published in 1955 – showed Evans as first author and acknowledged eight people (including Evans) for the machine calculations. However, while Harlow is remembered as one of the pioneers of computational fluid dynamics, Evans was forgotten.
A clear-cut division of labour?
In an age where women had very limited access to the frontlines of research, the computational war effort brought many female researchers and technical staff in. As their contributions come more into the light, it becomes clearer that their role was not a simple clerical one.
Skilled role Operating the ENIAC required an analytical mind as well as technical skills. (Top) Irwin Goldstein setting the switches on one of the ENIAC’s function tables at the Moore School of Electrical Engineering in 1946. (Middle) Gloria Gordon (later Bolotsky – crouching) and Ester Gerston (standing) wiring the right side of the ENIAC with a new program, c. 1946. (Bottom) Glenn A Beck changing a tube on the ENIAC. Replacing a bad tube meant checking among the ENIAC’s 19,000 possibilities. (Courtesy: US Army / Harold Breaux; US Army / ARL Technical Library; US Army)
There is a view that the coders’ work was “the vital link between the physicist’s concepts (about which the coders more often than not didn’t have a clue) and their translation into a set of instructions that the computer was able to perform, in a language about which, more often than not, the physicists didn’t have a clue either”, as physicists Giovanni Battimelli and Giovanni Ciccotti wrote in 2018 (Eur. Phys. J. H 43 303). But the examples we have seen show that some of the coders had a solid grasp of the physics, and some of the physicists had a good understanding of the machine operation. Rather than a skilled–non-skilled/men–women separation, the division of labour was blurred. Indeed, it was more of an effective collaboration between physicists, mathematicians and engineers.
Even in the early days of the T-5 division before electronic computers existed, Livesay and Ewing, for example, attended maths lectures from von Neumann, and introduced him to punch-card operations. As has been documented in books including Their Day in the Sun by Ruth Howes and Caroline Herzenberg, they also took part in the weekly colloquia held by J Robert Oppenheimer and other project leaders. This shows they should not be dismissed as mere human calculators and machine operators who supposedly “didn’t have a clue” about physics.
Verna Ellingson (Gardiner) is another forgotten coder who worked at Los Alamos. While little information about her can be found, she appears as the last author on a 1955 paper (Science 122 465) written with Metropolis and physicist Joseph Hoffman – “Study of tumor cell populations by Monte Carlo methods”. The next year she was first author of “On certain sequences of integers defined by sieves” with mathematical physicist Roger Lazarus, Metropolis and Ulam (Mathematics Magazine 29 117). She also worked with physicist George Gamow on attempts to discover the code for DNA selection of amino acids, which just shows the breadth of projects she was involved in.
Evans not only worked with Harlow but took part in a 1959 conference on self-organizing systems, where she queried AI pioneer Frank Rosenblatt on his ideas about human and machine learning. Her attendance at such a meeting, in an age when women were not common attendees, implies we should not view her as “just a coder”.
With their many and wide-ranging contributions, it is more than likely that Evans, Gardiner, Tsingou and many others were full-fledged researchers, and were perhaps even the first computational scientists. “These women were doing work that modern computational physicists in the [Los Alamos] lab’s XCP [Weapons Computational Physics] Division do,” says Nicholas Lewis, a historian at Los Alamos. “They needed a deep understanding of both the physics being studied, and of how to map the problem to the particular architecture of the machine being used.”
An evolving identity
What’s in a name Marjorie Jones (later Devaney), a mathematician, shown in 1952 punching a program onto paper tape to be loaded into the MANIAC. The name of this role evolved to “programmer” during the 1950s. (Courtesy: US government / Los Alamos National Laboratory)
In the 1950s there was no computational physics or computer science, so it’s unsurprising that the practitioners of these disciplines went by different names, and that their identity has evolved over the decades since.
1930s–1940s
Originally a “computer” was a person doing calculations by hand or with the help of a mechanical calculator.
Late 1940s – early 1950s
A “coder” was a person who translated mathematical concepts into a set of instructions in machine language. John von Neumann and Herman Goldstine distinguished between “coding” and “planning”, with the former being the lower-level work of turning flow diagrams into machine language (and doing the physical configuration) while the latter did the mathematical analysis of the problem.
Meanwhile, an “operator” would physically handle the computer (replacing punch cards, doing the rewiring, etc). In the late-1940s coders were also operators.
As historians note in the book ENIAC in Action, this was an age when “It was hard to devise the mathematical treatment without a good knowledge of the processes of mechanical computation…It was also hard to operate the ENIAC without understanding something about the mathematical task it was undertaking.”
For the ENIAC a “programmer” was not a person but “a unit combining different sequences in a coherent computation”. The term would later shift and eventually overlap with the meaning of coder as a person’s job.
1960s
Computer scientist Margaret Hamilton, who led the development of the on-board flight software for NASA’s Apollo program, coined the term “software engineering” to distinguish the practice of designing, developing, testing and maintaining software from the engineering tasks associated with the hardware.
1980s – early 2000s
Using the term “programmer” for someone who coded computers peaked in popularity in the 1980s, but by the 2000s it had given way to other job titles, such as various flavours of “developer” or “software architect”.
Early 2010s
A “research software engineer” is a person who combines professional software engineering expertise with an intimate understanding of scientific research.
Overlooked then, overlooked now
Credited or not, these pioneering women and their contributions have been mostly forgotten, and only in recent decades have their roles come to light again. But why were they obscured by history in the first place?
Secrecy and sexism seem to be the main factors at play. For example, Livesay was not allowed to pursue a PhD in mathematics because she was a woman, and in the cases of the many married couples, the team contributions were attributed exclusively to the husband. The existence of the Manhattan Project was publicly announced in 1945, but documents that contain certain nuclear-weapons-related information remain classified today. Because these are likely to remain secret, we will never know the full extent of these pioneers’ contributions.
But another often overlooked reason is the widespread underappreciation of the key role of computational scientists and research software engineers, a term that was only coined just over a decade ago. Even today, these non-traditional research roles end up being undervalued. A 2022 survey by the UK Software Sustainability Institute, for example, showed that only 59% of research software engineers were named as authors, with barely a quarter (24%) mentioned in the acknowledgements or main text, while a sixth (16%) were not mentioned at all.
The separation between those who understand the physics and those who write the code and understand and operate the hardware goes back to the early days of computing (see box above), but it wasn’t entirely accurate even then. People who implement complex scientific computations are not just coders or skilled operators of supercomputers, but truly multidisciplinary scientists with a deep understanding of the scientific problems, mathematics, computational methods and hardware.
Such people – whatever their gender – play a key role in advancing science and yet remain the unsung heroes of the discoveries their work enables. Perhaps what this story of the forgotten pioneers of computational physics tells us is that some views rooted in the 1950s are still influencing us today. It’s high time we moved on.
Gravity might be able to quantum-entangle particles even if the gravitational field itself is classical. That is the conclusion of a new study by Joseph Aziz and Richard Howl at Royal Holloway University of London. This challenges a popular view that such entanglement would necessarily imply that gravity must be quantized. This could be important in the ongoing attempt to develop a theory of quantum gravity that unites quantum mechanics with Einstein’s general theory of relativity.
“When you try to quantize the gravitational interaction in exactly the same way we tried to mathematically quantize the other forces, you end up with mathematically inconsistent results – you end up with infinities in your calculations that you can’t do anything about,” Howl tells Physics World.
“With the other interactions, we quantized them assuming they live within an independent background of classical space and time,” Howl explains. “But with quantum gravity, arguably you cannot do this [because] gravity describes space−time itself rather than something within space−time.”
Quantum entanglement occurs when two particles share linked quantum states even when separated. While it has become a powerful probe of the gravitational field, the central question is whether gravity can mediate entanglement only if it is itself quantum in nature.
General treatment
“It has generally been considered that the gravitational interaction can only entangle matter if the gravitational field is quantum,” Howl says. “We have argued that you could treat the gravitational interaction as more general than just the mediation of the gravitational field such that even if the field is classical, you could in principle entangle matter.”
Quantum field theory postulates that entanglement between masses arises through the exchange of virtual gravitons. These are hypothetical, transient quantum excitations of the gravitational field. Aziz and Howl propose that even if the field remains classical, virtual-matter processes can still generate entanglement indirectly. These processes, he says, “will persist even when the gravitational field is considered classical and could in principle allow for entanglement”.
The idea of probing the quantum nature of gravity through entanglement goes back to a suggestion by Richard Feynman in the 1950s. He envisioned placing a tiny mass in a superposition of two locations and checking whether its gravitational field was also superposed. Though elegant, the idea seemed untestable at the time.
“Recently, two proposals showed that one way you could test that the field is in a superposition (and thus quantum) is by putting two masses in a quantum superposition of two locations and seeing if they become entangled through the gravitational interaction,” says Howl. “This also seemed to be much more feasible than Feynman’s original idea.” Such experiments might use levitated diamonds, metallic spheres, or cold atoms – systems where both position and gravitational effects can be precisely controlled.
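The scale of the effect in these proposals is usually estimated from the gravitational phase that the two masses, held a distance d apart in one branch of the superposition for a time τ, accumulate relative to the other branch:

```latex
% Back-of-envelope gravitational phase between branches in which masses m_1 and m_2
% sit a distance d apart for a time tau; entanglement becomes detectable once this
% branch-dependent phase approaches order one
\phi \sim \frac{G\, m_1 m_2\, \tau}{\hbar\, d}
```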
Aziz and Howl’s work, however, considers whether such entanglement could arise even if gravity is not quantum. They find that certain classical-gravity processes can in principle entangle particles, though the predicted effects are extremely small.
“These classical-gravity entangling effects are likely to be very small in near-future experiments,” Howl says. “This though is actually a good thing: it means that if we see entanglement…we can be confident that this means that gravity is quantized.”
The paper has drawn a strong response from some leading figures in the field, including Chiara Marletto at the University of Oxford, who co-developed the original idea of using gravitationally induced entanglement as a test of quantum gravity.
“The phenomenon of gravitationally induced entanglement … is a game changer in the search for quantum gravity, as it provides a way to detect quantum effects in the gravitational field indirectly, with laboratory-scale equipment,” she says. Detecting it would, she adds, “constitute the first experimental confirmation that gravity is quantum, and the first experimental refutation of Einstein’s relativity as an adequate theory of gravity”.
However, Marletto disputes Aziz and Howl’s interpretation. “No classical theory of gravity can mediate entanglement via local means, contrary to what the study purports to show,” she says. “What the study actually shows is that a classical theory with direct, non-local interactions between the quantum probes can get them entangled.” In her view, that mechanism “is not new and has been known for a long time”.
Despite the controversy, Howl and Marletto agree that experiments capable of detecting gravitationally induced entanglement would be transformative. “We see our work as strengthening the case for these proposed experiments,” Howl says. Marletto concurs that “detecting gravitationally induced entanglement will be a major milestone … and I hope and expect it will happen within the next decade.”
Howl hopes the work will encourage further discussion about quantum gravity. “It may also lead to more work on what other ways you could argue that classical gravity can lead to entanglement,” he says.
In the intense first few months of his second US presidency, Donald Trump has been enacting his old campaign promise with a vengeance. He is, he claims, ridding the American federal bureaucracy of all its muck and finally bringing it back under control.
Scientific projects and institutions are particular targets of his, with one recent casualty being the High Energy Physics Advisory Panel (HEPAP). Outsiders might shrug their shoulders at a panel of scientists being axed. Panels come and go. Also, any development in Washington these days is accompanied by confusion, uncertainty, and the possibility of reversal.
But HEPAP’s dissolution is different. Set up in 1967, it’s been a valuable and long-standing advisory committee of the Office of Science at the US Department of Energy (DOE). HEPAP has a distinguished track record of developing, supporting and reviewing high-energy physics programmes, setting priorities and balancing different areas. Many scientists are horrified by its axing.
The terminator
Since taking office in January 2025, Trump has issued a flurry of executive orders – presidential decrees that do not need Congressional approval, legislative review or public debate. One order, which he signed in February, was entitled “Commencing the Reduction of the Federal Bureaucracy”.
It sought to reduce parts of the government “that the President has determined are unnecessary”, seeking to eliminate “waste and abuse, reduce inflation, and promote American freedom and innovation”. While supporters see those as laudable goals, opponents believe the order is driving a stake into the heart of US science.
Hugely valuable, long-standing scientific advisory committees have been axed at key federal agencies, including NASA, the National Science Foundation, the Environmental Protection Agency, the National Oceanic and Atmospheric Administration, the US Geological Survey, the National Institutes of Health, the Food and Drug Administration, and the Centers for Disease Control and Prevention.
What’s more, the committees were terminated without warning or debate, eliminating load-bearing pillars of the US science infrastructure. It was, as the Columbia University sociologist Gil Eyal put it in a recent talk, the “Trump 2.0 Blitzkrieg”.
Then, on 30 September, Trump’s enablers took aim at advisory committees at the DOE Office of Science. According to the DOE’s website, a new Office of Science Advisory Committee (SCAC) will take over functions of the six former discretionary (non-legislatively mandated) Office of Science advisory committees.
“Any current charged responsibilities of these former committees will be transferred to the SCAC,” the website states matter-of-factly. The committee will provide “independent, consensus advice regarding complex scientific and technical issues” to the entire Office of Science. Its members will be appointed by under secretary for science Dario Gil – a political appointee.
Apart from HEPAP, others axed without warning were the Nuclear Science Advisory Committee, the Basic Energy Sciences Advisory Committee, the Fusion Energy Sciences Advisory Committee, the Advanced Scientific Computing Advisory Committee, and the Biological and Environmental Research Advisory Committee.
Over the years, each committee served a different community and comprised prominent research scientists who were closely in touch with other researchers. Each committee could therefore assemble the awareness of – and technical knowledge about – promising emerging initiatives, and identify the less promising ones.
Many committee members only learned of the changes when they received letters or e-mails out of the blue informing them that their committee had been dissolved, that a new committee had replaced them, and that they were not on it. No explanation was given.
Physicists whom I have spoken to are appalled for two main reasons. One is that closing HEPAP and the other Office of Science committees will hamper both the technical support and community input that the DOE has relied on to promote the efficient, effective and robust growth of physics.
“Speaking just for high-energy physics, HEPAP gave feedback on the DOE and NSF funding strategies and priorities for the high-energy physics experiments,” says Kay Kinoshita from the University of Cincinnati, a former HEPAP member. “The panel system provided a conduit for information between the agencies and the community, so the community felt heard and the agencies were (mostly) aligned with the community consensus”.
Kinoshita continues: “There are complex questions that each panel has to deal with, even within the topical area. It’s hard to see how a broader panel is going to make better strategic decisions, ‘better’ meaning in terms of scientific advancement. In terms of community buy-in, I expect it will be worse.”
Other physicists cite a second reason for alarm. The elimination of the advisory committees spreads the expertise so thinly as to increase the likelihood of political pressure on decisions. “If you have one committee you are not going to get the right kind of fine detail,” says Michael Lubell, a physicist and science-policy expert at the City College of New York, who has sat in on meetings of most of the Office of Science advisory committees.
“You’ll get opinions from people outside that area and you won’t be able to get information that you need as a policy maker to decide how the resources are to be allocated,” he adds. “A condensed-matter physicist, for example, would probably have insufficient knowledge to advise DOE on particle physics. Instead, new committee members would be expected to vet programs based on ideological conformity to what the Administration wants.”
The critical point
At the end of the Second World War, the US began to construct an ambitious long-range plan to promote science, one that started with the establishment of the National Science Foundation in 1950 and has been developed and extended ever since. The plan aimed to incorporate both the ability of elected politicians to direct science towards social needs and the independence of scientists to explore what is possible.
US presidents have, of course, had pet scientific projects: the War on Cancer (Nixon), the Moon Shot (Kennedy), promoting renewable energy (Carter), to mention a few. But it is one thing for a president to set science to producing a socially desirable product and another to manipulate the scientific process itself.
“This is another sad day for American science,” says Lubell. “If I were a young person just embarking on a career, I would get the hell out of the country. I would not want to waste the most creative years of my life waiting for things to turn around, if they ever do. What a way to destroy a legacy!”
The end of HEPAP is not draining a swamp but creating one.
At-scale quantum By integrating Delft Circuits’ Cri/oFlex® cabling technology (above) into Bluefors’ dilution refrigerators, the vendors’ combined customer base will benefit from an industrially proven and fully scalable I/O solution for their quantum systems. Cri/oFlex® cabling combines fully integrated filtering with a compact footprint and low heat load. (Courtesy: Delft Circuits)
Better together. That’s the headline take on a newly inked technology partnership between Bluefors, a heavyweight Finnish supplier of cryogenic measurement systems, and Delft Circuits, a Dutch manufacturer of specialist I/O cabling solutions designed for the scale-up and industrial deployment of next-generation quantum computers.
The drivers behind the tie-up are clear: as quantum systems evolve – think vastly increased qubit counts plus ever-more exacting requirements on gate fidelity – developers in research and industry will reach a point where current coax cabling technology doesn’t cut it anymore. The answer? Collaboration, joined-up thinking and product innovation.
In short, by integrating Delft Circuits’ Cri/oFlex® cabling technology into Bluefors’ dilution refrigerators, the vendors’ combined customer base will benefit from a complete, industrially proven and fully scalable I/O solution for their quantum systems. The end-game: to overcome the quantum tech industry’s biggest bottleneck, forging a development pathway from quantum computing systems with hundreds of qubits today to tens of thousands of qubits by 2030.
Joined-up thinking
For context, Cri/oFlex® cryogenic RF cables comprise a stripline (a type of transmission line) based on planar microwave circuitry – essentially a conducting strip encapsulated in dielectric material and sandwiched between two conducting ground planes. The use of the polyimide Kapton® as the dielectric ensures Cri/oFlex® cables remain flexible in cryogenic environments (which are necessary to generate quantum states, manipulate them and read them out), with silver or superconducting NbTi providing the conductive strip and ground layer. The standard product comes as a multichannel flex (eight channels per flex) with a range of I/O channel configurations tailored to the customer’s application needs, including flux bias lines, microwave drive lines, signal lines or read-out lines.
“Together with Bluefors, we will accelerate the journey to quantum advantage,” says Robby Ferdinandus of Delft Circuits. (Courtesy: Delft Circuits)
“Reliability is a given with Cri/oFlex®,” says Robby Ferdinandus, global chief commercial officer for Delft Circuits and a driving force behind the partnership with Bluefors. “By integrating components such as attenuators and filters directly into the flex,” he adds, “we eliminate extra parts and reduce points of failure. Combined with fast thermalization at every temperature stage, our technology ensures stable performance across thousands of channels, unmatched by any other I/O solution.”
Technology aside, the new partnership is informed by a “one-stop shop” mindset, offering the high-density Cri/oFlex® solution pre-installed and fully tested in Bluefors cryogenic measurement systems. For the end-user, think turnkey efficiency: streamlined installation, commissioning, acceptance and, ultimately, enhanced system uptime.
Scalability is front-and-centre too, thanks to Delft Circuits’ pre-assembled and tested side-loading systems. The high-density I/O cabling solution delivers up to 50% more channels per side-loading port than Bluefors’ current High Density Wiring, providing a total of 1536 input or control lines to an XLDsl cryostat. In addition, more wiring lines can be added to multiple KF ports as a custom option.
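As a rough illustration of how such channel counts combine, the short Python sketch below works through an I/O budget. Only the eight-channels-per-flex figure and the 1536-line total come from the article; the loader and flex counts are hypothetical values chosen to make the arithmetic work, not Bluefors or Delft Circuits specifications.

```python
# Hypothetical I/O channel budget for a cryostat wired with multichannel flexes.
# Only CHANNELS_PER_FLEX (8) and the 1536-line total are taken from the article;
# the loader and flex counts below are illustrative assumptions, not vendor specs.

CHANNELS_PER_FLEX = 8        # standard Cri/oFlex multichannel flex
TARGET_TOTAL_LINES = 1536    # quoted total for an XLDsl cryostat


def total_channels(loaders: int, flexes_per_loader: int,
                   channels_per_flex: int = CHANNELS_PER_FLEX) -> int:
    """Channels per cryostat = loaders x flexes per loader x channels per flex."""
    return loaders * flexes_per_loader * channels_per_flex


if __name__ == "__main__":
    loaders, flexes = 24, 8  # hypothetical configuration that reaches the quoted total
    total = total_channels(loaders, flexes)
    print(f"{loaders} loaders x {flexes} flexes x {CHANNELS_PER_FLEX} channels = {total} lines")
    assert total == TARGET_TOTAL_LINES
```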
Doubling up for growth
“Our market position in cryogenics is strong, so we have the ‘muscle’ and specialist know-how to integrate innovative technologies like Cri/oFlex®,” says Reetta Kaila of Bluefors. (Courtesy: Bluefors)
Reciprocally, there’s significant commercial upside to this partnership. Bluefors is the quantum industry’s leading cryogenic systems OEM, so the tie-up gives Delft Circuits access to an established global customer base and dramatically expands its channels to market. “We have stepped into the big league here and, working together, we will ensure that Cri/oFlex® becomes a core enabling technology on the journey to quantum advantage,” notes Ferdinandus.
That view is amplified by Reetta Kaila, director for global technical sales and new products at Bluefors (and, alongside Ferdinandus, a main-mover behind the partnership). “Our market position in cryogenics is strong, so we have the ‘muscle’ and specialist know-how to integrate innovative technologies like Cri/oFlex® into our dilution refrigerators,” she explains.
A win-win, it seems, along several coordinates. “The Bluefors sales teams are excited to add Cri/oFlex® into the product portfolio,” Kaila adds. “It’s worth noting, though, that the collaboration extends across multiple functions – technical and commercial – and will therefore ensure close alignment of our respective innovation roadmaps.”
Scalable I/O will accelerate quantum innovation
Deconstructed, Delft Circuits’ value proposition is all about enabling, from an I/O perspective, the transition of quantum technologies out of the R&D lab into at-scale practical applications. More specifically: Cri/oFlex® technology allows quantum scientists and engineers to increase the I/O cabling density of their systems easily – and by a lot – while guaranteeing high gate fidelities (minimizing noise and heating) as well as market-leading uptime and reliability.
To put some hard-and-fast performance milestones against that claim, the company has published a granular product development roadmap that aligns Cri/oFlex® cabling specifications with the anticipated evolution of quantum computing systems – from 150+ qubits today out to 40,000 qubits and beyond in 2029 (see figure below, “Quantum alignment”).
The resulting milestones are based on a study of the development roadmaps of more than 10 full-stack quantum computing vendors – a consolidated view that will ensure the “guiding principles” of Delft Circuits’ innovation roadmap align with the aggregate quantity and quality of qubits targeted by the system developers over time.
Quantum alignment The new product development roadmap from Delft Circuits starts with the guiding principles, highlighting performance milestones to be achieved by the quantum computing industry over the next five years – specifically, the number of physical qubits per system and gate fidelities. By extension, cabling metrics in the Delft Circuits roadmap focus on “quantity” – the number of I/O channels per loader (i.e. the wiring trees that insert into a cryostat, with typical cryostats having 6–24 slots for loaders) and the number of channels per cryostat (summing across all loaders) – and on “quality” (the crosstalk in the cabling flex). To complete the picture, the roadmap outlines product introductions at a conceptual level to enable both the quantity and quality timelines. (Courtesy: Delft Circuits)
Ultrasound-powered stingraybot A bioinspired soft surgical robot with artificial muscles made from microbubble arrays swims forward under swept-frequency ultrasound excitation. Right panels: motion of the microbubble-array fins during swimming. Lower inset: schematic of the patterned microbubble arrays. Scale bar: 1 cm. (Courtesy: CC BY 4.0/Nature 10.1038/s41586-025-09650-3)
Artificial muscles that offer flexible functionality could prove invaluable for a range of applications, from soft robotics and wearables to biomedical instrumentation and minimally invasive surgery. Current designs, however, are limited by complex actuation mechanisms and challenges in miniaturization. Aiming to overcome these obstacles, a research team headed up at the Acoustic Robotics Systems Lab (ETH Zürich) in Switzerland is using microbubbles to create soft, programmable artificial muscles that can be wirelessly controlled via targeted ultrasound activation.
Gas-filled microbubbles can concentrate acoustic energy, providing a means to initiate movement with rapid response times and high spatial accuracy. In this study, reported in Nature, team leader Daniel Ahmed and colleagues built a synthetic muscle from a thin flexible membrane containing arrays of more than 10,000 microbubbles. When acoustically activated, the microbubbles generate thrust and cause the membrane to deform. And as different sized microbubbles resonate at different ultrasound frequencies, the arrays can be designed to provide programmable motion.
“Ultrasound is safe, non-invasive, can penetrate deep into the body and can generate large forces. However, without microbubbles, a much higher force is needed to deform the muscle, and selective activation is difficult,” Ahmed explains. “To overcome this limitation, we use microbubbles, which amplify force generation at specific sites and act as resonant systems. As a result, we can activate the artificial muscle at safe ultrasound power levels and generate complex motion.”
The team created the artificial muscles from a thin silicone membrane patterned with an array of cylindrical microcavities with the dimensions of the desired microbubbles. Submerging this membrane in a water-filled acoustic chamber trapped tens of thousands of gas bubbles within the cavities (one per cavity). The final device contains around 3000 microbubbles per mm² and weighs just 0.047 mg/mm².
To demonstrate acoustic activation, the researchers fabricated an artificial muscle containing uniform-sized microbubbles on one surface. They fixed one end of the muscle and exposed it to resonant frequency ultrasound, simultaneously exciting the entire microbubble array. The resulting oscillations generated acoustic streaming and radiation forces, causing the muscle to flex upward, with an amplitude dependent upon the ultrasound excitation voltage.
Next, the team designed an 80 µm-thick, 3 × 0.5 cm artificial muscle containing arrays of microbubbles of three different sizes. Stimulation at 96.5, 82.3 and 33.2 kHz induced deformations in regions containing bubbles with diameters of 12, 16 and 66 µm, respectively. Exposure to swept-frequency ultrasound covering the three resonant frequencies sequentially activated the different arrays, resulting in an undulatory motion.
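The inverse relationship between bubble size and drive frequency follows the same qualitative trend as the textbook Minnaert resonance for a gas bubble in water. The Python sketch below compares that free-bubble estimate with the frequencies reported above; it is only a scaling guide, since bubbles confined in membrane cavities and coupled to a flexible membrane resonate at frequencies that differ substantially from the free-bubble value, but larger bubbles still respond at lower frequencies.

```python
import math

# Free-bubble Minnaert resonance in water: f0 = sqrt(3*gamma*p0/rho) / (2*pi*R).
# This is only a scaling guide: bubbles confined in membrane cavities (as in this
# work) resonate at frequencies that differ substantially from the free-bubble
# value, but larger bubbles still resonate at lower frequencies.

GAMMA = 1.4       # polytropic exponent of air
P0 = 101_325.0    # ambient pressure (Pa)
RHO = 998.0       # density of water (kg/m^3)


def minnaert_frequency(diameter_m: float) -> float:
    """Resonance frequency (Hz) of a free air bubble of given diameter in water."""
    radius = diameter_m / 2
    return math.sqrt(3 * GAMMA * P0 / RHO) / (2 * math.pi * radius)


# Bubble diameters and drive frequencies reported in the article.
reported = {12e-6: 96.5e3, 16e-6: 82.3e3, 66e-6: 33.2e3}

for diameter, f_drive in reported.items():
    f_free = minnaert_frequency(diameter)
    print(f"d = {diameter * 1e6:4.0f} um: free-bubble estimate ~ {f_free / 1e3:5.0f} kHz, "
          f"reported drive frequency = {f_drive / 1e3:5.1f} kHz")
```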
Microbubble muscles (a) Artificial muscle with thousands of microbubbles on its lower surface bends upwards when excited by ultrasound. (b) Artificial muscle containing arrays of microbubbles with three different diameters, each corresponding to a distinct natural frequency, exhibits undulatory motion (c) under swept-frequency ultrasound excitation. (Courtesy: CC BY 4.0/Nature 10.1038/s41586-025-09650-3)
A multitude of functions
Ahmed and colleagues showcased a range of applications for the artificial muscle by integrating microbubble arrays into functional devices, such as a miniature soft gripper for trapping and manipulating fragile live animals. The gripper comprises six to ten microbubble array-based “tentacles” that, when subjected to ultrasound, gently gripped a zebrafish larva with sub-100 ms response time. When the ultrasound was switched off, the tentacles opened and the larva swam away with no adverse effects.
The artificial muscle can function as a conformable robotic skin that sticks to a stationary object and imparts motion to it, which the team demonstrated by attaching it to the surface of an excised pig heart. It can also be employed for targeted drug delivery – shown by the use of a microbubble-array robotic patch for ultrasound-enhanced delivery of dye into an agar block.
The researchers also built an ultrasound-powered “stingraybot”, a soft surgical robot with artificial muscles (arrays of differently sized microbubbles) on either side to mimic the pectoral fins of a stingray. Exposure to swept-frequency ultrasound induced an undulatory motion that wirelessly propelled the 4 cm-long robot forward at a speed of about 0.8 body lengths per second.
To demonstrate future practical biomedical applications, such as supporting minimally invasive surgery or site-specific drug release within the gastrointestinal tract, the researchers encapsulated a rolled-up stingraybot within a 27 × 12 mm edible capsule. Once released into the stomach, the robot could be propelled on demand under ultrasound actuation. They also pre-folded a linear artificial muscle into a wheel shape and showed that swept ultrasound frequencies could propel it along the complex mucosal surfaces of the stomach and intestine.
“Through the strategic use of microbubble configurations and voltage and frequency as ultrasound excitation parameters, we engineered a diverse range of preprogrammed movements and demonstrated their applicability across various robotic platforms,” the researchers write. “Looking ahead, these artificial muscles hold transformative potential across cutting-edge fields such as soft robotics, haptic medical devices and minimally invasive surgery.”
Ahmed says that the team is currently developing soft patches that can conform to biological surfaces for drug delivery inside the bladder. “We are also designing soft, flexible robots that can wrap around a tumour and release drugs directly at the target site,” he tells Physics World. “Basically, we’re creating mobile conformable drug-delivery patches.”