
Scientific collaborations increasingly likely to be led by Chinese scientists, finds study

6 November 2025 at 16:09

International research collaborations will be increasingly led by scientists in China over the coming decade. That is according to a new study by researchers at the University of Chicago, which finds that the power balance in international science has shifted markedly away from the US and towards China over the last 25 years (Proc. Natl. Acad. Sci. 122 e2414893122).

To explore China’s role in global science, the team used a machine-learning model to predict the lead researchers of almost six million scientific papers that involved international collaboration listed by online bibliographic catalogue OpenAlex. The model was trained on author data from 80 000 papers published in high-profile journals that routinely detail author contributions, including team leadership.

The study found that between 2010 and 2012 there were only 4429 scientists from China who were likely to have led China-US collaborations. By 2023, this number had risen to 12714, meaning that the proportion of team leaders affiliated with Chinese institutions had risen from 30% to 45%.
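As a rough sanity check (not the paper's machine-learning methodology), extending the reported leadership share linearly, from about 30% around 2011 to 45% in 2023, reaches the 50% parity mark in the late 2020s, which is broadly consistent with the projections described below. A minimal sketch in Python:

```python
# Back-of-envelope linear extrapolation of the leadership share quoted in the
# study (about 30% around 2011, 45% in 2023). This is NOT the paper's model,
# just a rough consistency check of the quoted parity projections.

def parity_year(y0, share0, y1, share1, target=0.50):
    """Extrapolate a linear trend in leadership share to the target fraction."""
    rate = (share1 - share0) / (y1 - y0)   # share gained per year
    return y1 + (target - share1) / rate

print(round(parity_year(2011, 0.30, 2023, 0.45)))  # ~2027, close to the 2028 figure
```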

Key areas

If this trend continues, China will hit “leadership parity” with the US in chemistry, materials science and computer science by 2028, with maths, physics and engineering being level by 2031. The analysis also suggests that China will achieve leadership parity with the US in eight “critical technology” areas by 2030, including AI, semiconductors, communications, energy and high-performance computing.

For China-UK partnerships, the model found that equality had already been reached in 2019, while EU and China leadership roles will be on par this year or next. The authors also found that China has been actively training scientists in nations involved in the "Belt and Road Initiative", which seeks to tie China more closely to the rest of the world through investment and infrastructure projects.

This, the researchers warn, limits the ability to isolate science done in China. Instead, they suggest that it could inspire a different course of action, with the US and other countries expanding their engagement with the developing world to train a global workforce and accelerate scientific advancements beneficial to their economies.


Unlocking the potential of 2D materials: graphene and much more

6 November 2025 at 15:49

This episode explores the scientific and technological significance of 2D materials such as graphene. My guest is Antonio Rossi, who is a researcher in 2D materials engineering at the Italian Institute of Technology in Genoa.

Rossi explains why 2D materials are fundamentally different from their 3D counterparts – and how these differences are driving scientific progress and the development of new and exciting technologies.

Graphene is the most famous 2D material and Rossi talks about today’s real-world applications of graphene in coatings. We also chat about the challenges facing scientists and engineers who are trying to exploit graphene’s unique electronic properties.

Rossi’s current research focuses on two other promising 2D materials – tungsten disulphide and hexagonal boron nitride. He explains why tungsten disulphide shows great technological promise because of its favourable electronic and optical properties; and why hexagonal boron nitride is emerging as an ideal substrate for creating 2D devices.

Artificial intelligence (AI) is becoming an important tool in developing new 2D materials. Rossi explains how his team is developing feedback loops that connect AI with the fabrication and characterization of new materials. Our conversation also touches on the use of 2D materials in quantum science and technology.

IOP Publishing’s new Progress In Series: Research Highlights website offers quick, accessible summaries of top papers from leading journals like Reports on Progress in Physics and Progress in Energy. Whether you’re short on time or just want the essentials, these highlights help you expand your knowledge of leading topics.


Ultrasound probe maps real-time blood flow across entire organs

6 November 2025 at 10:35

Microcirculation – the flow of blood through the smallest vessels – is responsible for distributing oxygen and nutrients to tissues and organs throughout the body. Mapping this flow at the whole-organ scale could enhance our understanding of the circulatory system and improve diagnosis of vascular disorders. With this aim, researchers at the Physics for Medicine Paris institute (Inserm, ESPCI-PSL, CNRS) have combined 3D ultrasound localization microscopy (ULM) with a multi-lens array method to image blood flow dynamics in entire organs with micrometric resolution, reporting their findings in Nature Communications.

“Beyond understanding how an organ functions across different spatial scales, imaging the vasculature of an entire organ reveals the spatial relationships between macro- and micro-vascular networks, providing a comprehensive assessment of its structural and functional organization,” explains senior author Clement Papadacci.

The 3D ULM technique works by localizing intravenously injected microbubbles. Offering a spatial resolution roughly ten times finer than conventional ultrasound, 3D ULM can map and quantify micro-scale vascular structures. But while the method has proved valuable for mapping whole organs in small animals, visualizing entire organs in large animals or humans is hindered by the limitations of existing technology.

To enable wide field-of-view coverage while maintaining high-resolution imaging, the team – led by PhD student Nabil Haidour under Papadacci’s supervision – developed a multi-lens array probe. The probe comprises an array of 252 large (4.5 mm²) ultrasound transducer elements. The use of large elements increases the probe’s sensitive area to a total footprint of 104 x 82 mm, while maintaining a relatively low element count.

Each transducer element is equipped with an individual acoustic diverging lens. “Large elements alone are too directive to create an image, as they cannot generate sufficient overlap or interference between beams,” Papadacci explains. “The acoustic lenses reduce this directivity, allowing the elements to focus and coherently combine signals in reception, thus enabling volumetric image formation.”

Whole-organ imaging

After validating their method via numerical simulations and phantom experiments, the team used a multi-lens array probe driven by a clinical ultrasound system to perform 3D dynamic ULM of an entire explanted porcine heart – considered an ideal cardiac model as its vascular anatomies and dimensions are comparable to those of humans.

The heart was perfused with microbubble solution, enabling the probe to visualize the whole coronary microcirculation network over a large volume of 120 x 100 x 82 mm, with a spatial resolution of around 125 µm. The technique enabled visualization of both large vessels and the finest microcirculation in real time. The team also used a skeletonization algorithm to measure vessel radii at each voxel, which ranged from approximately 75 to 600 µm.

As well as structural imaging, the probe can also assess flow dynamics across all vascular scales, with a high temporal resolution of 312 frames/s. By tracking the microbubbles, the researchers estimated absolute flow velocities ranging from 10 mm/s in small vessels to over 300 mm/s in the largest. They could also differentiate arteries and veins based on the flow direction in the coronary network.
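The velocity estimates follow from simple kinematics: once a microbubble has been localized in successive volumes, its speed is the frame-to-frame displacement multiplied by the frame rate. A minimal sketch in Python, using made-up positions rather than data from the study:

```python
import numpy as np

# Illustrative only: estimate microbubble speed from tracked 3D positions.
# Positions are in millimetres; the frame rate matches the 312 volumes/s quoted above.
frame_rate = 312.0                      # volumes per second
track = np.array([                      # hypothetical positions of one bubble (x, y, z) in mm
    [10.00, 20.00, 5.00],
    [10.03, 20.01, 5.00],
    [10.06, 20.02, 5.01],
])

displacements = np.diff(track, axis=0)                        # mm per frame
speeds = np.linalg.norm(displacements, axis=1) * frame_rate   # mm/s
print(speeds)   # ~10 mm/s here, i.e. the slow end of the range reported for small vessels
```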

In vivo demonstrations

Next, the researchers used the multi-lens array probe to image the entire kidney and liver of an anaesthetized pig at the veterinary school of Maisons-Alfort, with the probe positioned in front of the kidney or liver, respectively, and held using an articulated arm. They employed electrocardiography to synchronize the ultrasound acquisitions with periods of minimal respiratory motion and injected microbubble solution intravenously into the animal’s ear.

In vivo imaging of a porcine kidney. Left: 3D microbubble density map of the porcine kidney. Centre: 3D flow map of microbubble velocity distribution. Right: 3D flow map showing arterial (red) and venous (blue) flow. (Courtesy: CC BY 4.0/Nat. Commun. 10.1038/s41467-025-64911-z)

The probe mapped the vascular network of the kidney over a 60 x 80 x 40 mm volume with a spatial resolution of 147 µm. The maximum 3D absolute flow velocity was approximately 280 mm/s in the large vessels and the vessel radii ranged from 70 to 400 µm. The team also used directional flow measurements to identify the arterial and venous flow systems.

Liver imaging is more challenging due to respiratory, cardiac and stomach motions. Nevertheless, 3D dynamic ULM enabled high-depth visualization of a large volume of liver vasculature (65 x 100 x 82 mm) with a spatial resolution of 200 µm. Here, the researchers used dynamic velocity measurement to identify the liver’s three blood networks (arterial, venous and portal veins).

“The combination of whole-organ volumetric imaging with high-resolution vascular quantification effectively addresses key limitations of existing modalities, such as ultrasound Doppler imaging, CT angiography and 4D flow MRI,” they write.

Clinical applications of 3D dynamic ULM still need to be demonstrated, but Papadacci suggests that the technique has strong potential for evaluating kidney transplants, coronary microcirculation disorders, stroke, aneurysms and neoangiogenesis in cancer. “It could also become a powerful tool for monitoring treatment response and vascular remodelling over time,” he adds.

Papadacci and colleagues anticipate that translation to human applications will be possible in the near future and plan to begin a clinical trial early in 2026.


Hugo BD release – Lore Olympus volume 9

6 November 2025 at 00:39


The next instalment of the famous comic Lore Olympus has just been released: volume 9 has been available since yesterday! As a reminder, the series is the work of Rachel Smythe.


Volume 9 of Lore Olympus is available in bookshops

The Underworld has a queen!

Persephone and Hades are finally reunited when the banished goddess of spring returns to the Underworld to claim her place as queen. Now that Hades and Persephone have defeated and re-imprisoned the power-hungry Kronos, nothing can keep them apart, and the years spent away from each other have only deepened their longing for one another. But the other Olympians cannot help meddling, pushing the couple to make things official with a coronation – and a wedding.

Ignoring those who try to define their relationship, Hades and Persephone strive to live at their own pace and to focus on rebuilding the Underworld. They start by investigating how Kronos managed to escape and learn the horrible truth: he has captured a powerful young god whose abilities allow him to project his thoughts beyond Tartarus – thoughts he uses to torment Hera. Although Kronos's physical form is locked away, Olympus will never be free until they rescue the young god from his grip.


Volume 9 has been available in bookshops since yesterday at the usual price of €24.95.


Inge Lehmann: the ground-breaking seismologist who faced a rocky road to success

5 November 2025 at 15:00
Enigmatic Inge Lehmann around the time she quit her job at Denmark’s Geodetic Institute in 1953. (Courtesy: GEUS)

In the 1930s a little-known Danish seismologist calculated that the Earth has a solid inner core, within the liquid outer core identified just a decade earlier. The international scientific community welcomed Inge Lehmann as a member of the relatively new field of geophysics – yet in her home country, Lehmann was never really acknowledged as more than a very competent keeper of instruments.

It was only after retiring from her seismologist job aged 65 that Lehmann was able to devote herself full time to research. For the next 30 years, Lehmann worked and published prolifically, finally receiving awards and plaudits that were well deserved. However, this remarkable scientist, who died in 1993 aged 104, rarely appears in short histories of her field.

In a step to address this, we now have a biography of Lehmann: If I Am Right, and I Know I Am by Hanne Strager, a Danish biologist, science museum director and science writer. Strager pieces together Lehmann’s life in great detail, as well as providing potted histories of the scientific areas that Lehmann contributed to.

A brief glance at the chronology of Lehmann’s education and career would suggest that she was a late starter. She was 32 when she graduated with a bachelor’s degree in mathematics from the University of Copenhagen, and 40 when she received her master’s degree in geodesy and was appointed state geodesist for Denmark. Lehmann faced a litany of struggles in her younger years, from health problems and money issues to the restrictions placed on most women’s education in the first decades of the 20th century.

The limits did not come from her family. Lehmann and her sister were sent to good schools, she was encouraged to attend university, and was never pressed to get married, which would likely have meant the end of her education. When she asked her father’s permission to go to the University of Cambridge, his objection was the cost – though the money was found and Lehmann duly went to Newnham College in 1910. While there she passed all the preliminary exams to study for Cambridge’s legendarily tough mathematical tripos but then her health forced her to leave.

Lehmann was suffering from stomach pains; she had trouble sleeping; her hair was falling out. And this was not her first breakdown. She had previously studied for a year at the University of Copenhagen before then, too, dropping out and moving to the countryside to recover her health.

The cause of Lehmann’s recurrent breakdowns is unknown. They unfortunately fed into the prevailing view of the time that women were too fragile for the rigours of higher learning. Strager attempts to unpick these historical attitudes from Lehmann’s very real medical issues. She posits that Lehmann had severe anxiety or a physical limitation to how hard she could push herself. But this conclusion fails to address the hostile conditions Lehmann was working in.

At Cambridge Lehmann formed firm friendships that lasted the rest of her life. But women there did not have the same access to learning as men: they were barred from most libraries and laboratories, could not attend all the lectures, and were often mocked and belittled by professors and male students. They could sit exams but, even if they passed, would not be awarded a degree. This was a contributing factor when, after the First World War, Lehmann decided to complete her undergraduate studies in Copenhagen rather than Cambridge.

More than meets the eye

Lehmann is described as quiet, shy, reticent. But she could be eloquent in writing and once her career began she established connections with scientists all over the world by writing to them frequently. She was also not the wallflower she initially appeared to be. When she was hired as an assistant at Denmark’s Institute for the Measurement of Degrees, she quickly complained that she was being used as an office clerk, not a scientist, and that she would not have accepted the job had she known this was the role. She was instead given geometry tasks that she found intellectually stimulating, which led her to seismology.

Unfortunately, soon after this Lehmann’s career development stalled. While her title of “state geodesist” sounds impressive, she was the only seismologist in Denmark for decades, responsible for all the seismographs in Denmark and Greenland. Her days were filled with the practicalities of instrument maintenance and publishing reports of all the data collected.

Intrepid Inge Lehmann at the Ittoqqortoormiit (Scoresbysund) seismic station in Greenland c. 1928. A keen hiker, Lehmann was comfortable in cold and remote environments. (Courtesy: GEUS)

Despite repeated requests Lehmann didn’t receive an assistant, which meant she never got round to completing a PhD, though she did work towards one in her evenings and weekends. Time and again opportunities for career advancement went to men who had the title of doctor but far less real experience in geophysics. Even after she co-founded the Danish Geophysical Society in 1934, her native country overlooked her.

The breakthrough that should have changed this attitude from the men around her came in 1936, when she published “P′”. This innocuous-sounding paper was revolutionary, but was based firmly on the P wave and S wave measurements that Lehmann routinely monitored.

In If I Am Right, and I Know I Am, Strager clearly explains what P and S waves are. She also highlights why they were being studied by both state seismologist Lehmann and Cambridge statistician Harold Jeffreys, and how they led to both scientists’ biggest breakthroughs.

After any seismological disturbance, P and S waves propagate through the Earth. P waves move at different speeds according to the material they encounter, while S waves cannot pass through liquid or air. This knowledge allowed Lehmann to calculate whether any fluctuations in seismograph readings were earthquakes, and if so where the epicentre was located. And it led to Jeffreys’ insight that the Earth must have a liquid core.
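The epicentre calculation rests on the fact that P waves travel faster than S waves: the longer the delay between the two arrivals, the more distant the source. A minimal illustration of this standard S-minus-P method in Python, using typical crustal wave speeds rather than values from the book:

```python
# Classic S-minus-P method for epicentral distance, as used in routine seismology.
# The wave speeds below are typical crustal values chosen for illustration.
v_p = 6.0   # P-wave speed, km/s
v_s = 3.5   # S-wave speed, km/s

def epicentral_distance(sp_delay_s):
    """Distance (km) to an earthquake from the S-P arrival-time difference (s)."""
    return sp_delay_s / (1.0 / v_s - 1.0 / v_p)

print(epicentral_distance(30.0))   # a 30 s S-P delay puts the source roughly 250 km away
```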

Lehmann’s attention to detail meant she spotted a “discontinuity” in P waves that did not quite match a purely liquid core. She immediately wrote to Jeffreys that she believed there was another layer to the Earth, a solid inner core, but he was dismissive – which led to her writing the statement that forms the title of this book. Undeterred, she published her discovery in the journal of the International Union of Geodesy and Geophysics.

Home from home

In 1951 Lehmann visited the institution that would become her second home: the Lamont Geological Observatory in New York state. Its director Maurice Ewing invited her to work there on a sabbatical, arranging all the practicalities of travel and housing on her behalf.

Here, Lehmann finally had something she had lacked her entire career: friendly collaboration with colleagues who not only took her seriously but also revered her. Lehmann took retirement from her job in Denmark and began to spend months of every year at the Lamont Observatory until well into her 80s.

Valued colleague: a farewell party held for Inge Lehmann in 1954 at Lamont Geological Observatory after one of her research stays. (Courtesy: GEUS)

Though Strager tells us this “second phase” of Lehmann’s career was prolific, she provides little detail about the work Lehmann did. She initially focused on detecting nuclear tests during the Cold War. But her later work was more varied, and continued after she lost most of her vision. Lehmann published her final paper aged 99.

If I Am Right, and I Know I Am is bookended with accounts of Strager’s research into one particular letter sent to Lehmann, an anonymous (because the final page has been lost) declaration of love. It’s an insight into the lengths Strager went to – reading all the surviving correspondence to and from Lehmann; interviewing living relatives and colleagues; working with historians both professional and amateur; visiting archives in several countries.

But for me it hit the wrong tone. The preface and epilogue are mostly speculation about Lehmann’s love life. Lehmann destroyed a lot of her personal correspondence towards the end of her life, and chose what papers to donate to an archive. To me those are the actions of a woman who wants to control the narrative of her life – and does not want her romances to be written about. I would have preferred instead another chapter about her later work, of which we know she was proud.

But for the majority of its pages, this is a book of which Strager can be proud. I came away from it with great admiration for Lehmann and an appreciation for how lonely life was for many women scientists even in recent history.

  • 2025 Columbia University Press 308pp £25.00hb


Rapidly spinning black holes put new limit on ultralight bosons

5 November 2025 at 13:28

The LIGO–Virgo–KAGRA collaboration has detected strong evidence for second-generation black holes, which were formed from earlier mergers of smaller black holes. The two gravitational wave signals provide one of the strongest confirmations to date for how Einstein’s general theory of relativity describes rotating black holes. Studying such objects also provides a testbed for probing new physics beyond the Standard Model.

Over the past decade, the global network of interferometers operated by LIGO, Virgo, and KAGRA has detected close to 300 gravitational waves (GWs) – mostly from the mergers of binary black holes.

In October 2024 the network detected a clear signal that pointed back to a merger that occurred 700 million light-years away. The progenitor black holes were 20 and 6 solar masses and the larger object was spinning at 370 Hz, which makes it one of the fastest-spinning black holes ever observed.

Just one month later, the collaboration detected the coalescence of another highly imbalanced binary (17 and 8 solar masses), 2.4 billion light-years away. This signal was even more unusual – showing for the first time that the larger companion was spinning in the opposite direction of the binary orbit.

Massive and spinning

While conventional wisdom says black holes should not be spinning at such high rates, the observations were not entirely unexpected. “With both events having one black hole, which is both significantly more massive than the other and rapidly spinning, [the observations] provide tantalizing evidence that these black holes were formed from previous black hole mergers,” explains Stephen Fairhurst at Cardiff University, spokesperson of the LIGO Collaboration. If this is the case, the two GW signals – called GW241011 and GW241110 – would be the first observations of second-generation black holes. This is because when a binary merges, the resulting second-generation object tends to have a large spin.

The GW241011 signal was particularly clear, which allowed the team to make the third-ever observation of higher harmonic modes. These are overtones in the GW signal that become far clearer when the masses of the coalescing bodies are highly imbalanced.

The precision of the GW241011 measurement provides one of the most stringent verifications so far of general relativity. The observations also support Roy Kerr’s prediction that rapid rotation distorts the shape of a black hole.

Kerr and Einstein confirmed

“We now know that black holes are shaped like Einstein and Kerr predicted, and general relativity can add two more checkmarks in its list of many successes,” says team member Carl-Johan Haster at the University of Nevada, Las Vegas. “This discovery also means that we’re more sensitive than ever to any new physics that might lie beyond Einstein’s theory.”

This new physics could include hypothetical particles called ultralight bosons. These could form in clouds just outside the event horizons of spinning black holes, and would gradually drain a black hole’s rotational energy via a quantum effect called superradiance.

The idea is that the observed second-generation black holes had been spinning for billions of years before their mergers occurred. This means that if ultralight bosons were present, they cannot have removed much angular momentum from the black holes. This places the tightest constraint to date on the mass of ultralight bosons.

“Planned upgrades to the LIGO, Virgo and KAGRA detectors will enable further observations of similar systems,” Fairhurst says. “They will enable us to better understand both the fundamental physics governing these black hole binaries and the astrophysical mechanisms that lead to their formation.”

Haster adds, “Each new detection provides important insights about the universe, reminding us that each observed merger is not only an astrophysical discovery but also an invaluable laboratory for probing the fundamental laws of physics”.

The observations are described in The Astrophysical Journal Letters.


The world: citizens more altruistic than we think

5 November 2025 at 10:42
For the first time, scientists have measured citizens' views on policies for the global redistribution of wealth and the fight against climate change. These measures receive near-universal and strong support, though greater in Europe than in the United States.

Making quantum computers more reliable

5 November 2025 at 09:42

Quantum error correction codes protect quantum information from decoherence and quantum noise, and are therefore crucial to the development of quantum computing and the creation of more reliable and complex quantum algorithms. One example is the five-qubit error correction code, five being the minimum number of qubits required to correct an arbitrary single-qubit error. It encodes one logical qubit (a collection of physical qubits arranged in such a way that errors can be corrected) in five physical qubits (the basic units of quantum information, made using trapped ions, superconducting circuits or quantum dots). Yet imperfections in the hardware can still lead to quantum errors.
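For readers unfamiliar with the code, the stabilizer description makes the error-correction step concrete: the code is defined by four commuting Pauli "check" operators, and any single-qubit error flips a unique pattern of them, so measuring the checks pinpoints the error. A minimal sketch in Python, using the standard textbook generators of the five-qubit code rather than the specific experimental protocol of the paper:

```python
from itertools import product

# Stabilizer generators of the five-qubit [[5,1,3]] code (cyclic shifts of XZZXI).
STABILIZERS = ["XZZXI", "IXZZX", "XIXZZ", "ZXIXZ"]

def anticommute(p, q):
    """True if two n-qubit Pauli strings anticommute."""
    clashes = sum(1 for a, b in zip(p, q) if a != "I" and b != "I" and a != b)
    return clashes % 2 == 1

def syndrome(error):
    """Bit pattern of stabilizers flipped by a Pauli error string."""
    return tuple(int(anticommute(error, s)) for s in STABILIZERS)

# Every single-qubit X, Y or Z error produces a distinct, nonzero syndrome,
# which is what lets the code identify and correct it.
errors = ["I" * i + p + "I" * (4 - i) for i, p in product(range(5), "XYZ")]
syndromes = {e: syndrome(e) for e in errors}
assert len(set(syndromes.values())) == 15 and all(any(s) for s in syndromes.values())
print(syndromes["IIXII"])   # the syndrome flagged by an X error on the third qubit
```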

A method of testing quantum error correction codes is self-testing. Self-testing is a powerful tool for verifying quantum properties using only input-output statistics, treating quantum devices as black boxes. It has evolved from bipartite systems consisting of two quantum subsystems, to multipartite entanglement, where entanglement is among three or more subsystems, and now to genuinely entangled subspaces, where every state is fully entangled across all subsystems. Genuinely entangled subspaces offer stronger, guaranteed entanglement than general multipartite states, making them more reliable for quantum computing and error correction.
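Self-testing at its simplest rests on Bell-type statistics such as the CHSH inequality: only a maximally entangled state can reach the quantum bound of 2√2, so observing that value certifies the state from input-output data alone. A minimal sketch of that standard bipartite test in Python (the paper adapts such tests to logical subspaces; this is only the textbook version):

```python
import numpy as np

# The standard CHSH test that underpins bipartite self-testing: the maximally
# entangled state reaches 2*sqrt(2), above the classical bound of 2.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)

A0, A1 = Z, X                        # Alice's two measurement settings
B0 = (Z + X) / np.sqrt(2)            # Bob's settings, rotated by 45 degrees
B1 = (Z - X) / np.sqrt(2)

chsh = np.kron(A0, B0) + np.kron(A0, B1) + np.kron(A1, B0) - np.kron(A1, B1)
value = np.real(phi_plus.conj() @ chsh @ phi_plus)
print(value)   # ~2.828 = 2*sqrt(2), the Tsirelson bound
```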

In this research, self-testing techniques are used to certify genuinely entangled logical subspaces within the five-qubit code on photonic and superconducting platforms. This is achieved by preparing informationally complete logical states that span the entire logical space, meaning the set is rich enough to fully characterize the behaviour of the system. The researchers then deliberately introduce basic quantum errors by simulating Pauli errors on the physical qubits, mimicking real-world noise. Finally, they use mathematical tests known as Bell inequalities, adapted to the framework of quantum error correction, to check whether the system remains within the initial logical subspaces after the errors are introduced.

Extractability measures tell you how close the tested quantum system is to the ideal target state, with 1 being a perfect match. The certification is supported by extractability measures of at least 0.828 ± 0.006 and 0.621 ± 0.007 for the photonic and superconducting systems, respectively. The photonic platform achieved a high extractability score, meaning the logical subspace was very close to the ideal one. The superconducting platform had a lower score but still showed meaningful entanglement. These scores show that the self-testing method works in practice and confirm strong entanglement in the five-qubit code on both platforms.

This research contributes to the advancement of quantum technologies by providing robust methods for verifying and characterizing complex quantum structures, which is essential for the development of reliable and scalable quantum systems. It also demonstrates that device-independent certification can extend beyond quantum states and measurements to more general quantum structures.

Read the full article

Certification of genuinely entangled subspaces of the five qubit code via robust self-testing

Yu Guo et al 2025 Rep. Prog. Phys. 88 050501

Do you want to learn more about this topic?

Quantum error correction for beginners by Simon J Devitt, William J Munro and Kae Nemoto (2013)


Quantum ferromagnets without the usual tricks: a new look at magnetic excitations

5 November 2025 at 09:36

For almost a century, physicists have tried to understand why and how materials become magnetic. From refrigerator magnets to magnetic memories, the microscopic origins of magnetism remain a surprisingly subtle puzzle — especially in materials where electrons behave both like individual particles and like a collective sea.

In most transition-metal compounds, magnetism comes from the dance between localized and mobile electrons. Some electrons stay near their home atoms and form tiny magnetic moments (spins), while others roam freely through the crystal. The interaction between these two types of electrons produces “double-exchange” ferromagnetism — the mechanism that gives rise to the rich magnetic behaviour of materials such as manganites, famous for their colossal magnetoresistance (a dramatic change in electrical resistance under a magnetic field).

Traditionally, scientists modelled this behaviour by treating the localized spins as classical arrows — big and well-defined, like compass needles. This approximation works well enough for explaining basic ferromagnetism, but experiments over the last few decades have revealed strange features that defy the classical picture.

In particular, neutron scattering studies of manganites showed that the collective spin excitations, called magnons, do not behave as expected. Their energy spectrum “softens” (the waves slow down) and their sharp signals blur into fuzzy continua — a sign that the magnons are losing their coherence. Until now, these effects were usually blamed on vibrations of the atomic lattice (phonons) or on complex interactions between charge, spin, and orbital motion.
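In the semiclassical picture described above, the textbook Anderson-Hasegawa result makes the double-exchange mechanism concrete: for strong Hund coupling, the effective hopping between two sites whose classical core spins are tilted by an angle θ is t·cos(θ/2), so the itinerant electron gains the most kinetic energy when the spins align. A minimal sketch in Python (this is the classical baseline that the new study goes beyond, not its quantum calculation):

```python
import numpy as np

# Anderson-Hasegawa double exchange in the classical-spin limit: the effective
# hopping between two sites with core spins tilted by theta is t*cos(theta/2),
# so the kinetic energy of one itinerant electron is lowest for aligned spins.
t = 1.0                                    # bare hopping amplitude (arbitrary units)
theta = np.linspace(0.0, np.pi, 5)         # angle between the two classical core spins

t_eff = t * np.cos(theta / 2.0)            # effective hopping
e_kinetic = -np.abs(t_eff)                 # energy of the bonding (lowest) state

for th, e in zip(theta, e_kinetic):
    print(f"theta = {th:4.2f} rad  ->  E_kin = {e:+.3f} t")
# Aligned spins (theta = 0) give E = -t; antiparallel spins (theta = pi) block hopping.
```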

Left to right: Adriana Moreo and Elbio Dagotto from the University of Tennessee (USA), Takami Tohyama from Tokyo University of Science (Japan), and Marcin Mierzejewski and Jacek Herbrych from Wrocław University of Science and Technology (Courtesy: Herbrych/Wrocław University of Science and Technology)

A new theoretical study challenges that assumption. By going fully quantum mechanical — treating every localized spin not as a classical arrow but as a true quantum object that can fluctuate, entangle, and superpose — the researchers have reproduced these puzzling experimental observations without invoking phonons at all. Using two powerful model systems (a quantum version of the Kondo lattice and a two-orbital Hubbard model), the team simulated how electrons and spins interact when no semiclassical approximations are allowed.

The results reveal a subtle quantum landscape. Instead of a single type of electron excitation, the system hosts two. One behaves like a spinless fermion — a charge carrier stripped of its magnetic identity. The other forms a broad, “incoherent” band of excitations arising from local quantum triplets. These incoherent states sit close to the Fermi level and act as a noisy background — a Stoner-like continuum — that the magnons can scatter off. The result: magnons lose their coherence and energy in just the way experiments observe.

Perhaps most surprisingly, this mechanism doesn’t rely on the crystal lattice at all. It’s an intrinsic consequence of the quantum nature of the spins themselves. Larger localized spins, such as those in classical manganites, tend to suppress the effect — explaining why decoherence is weaker in some materials than others. Consequently, the implications reach beyond manganites. Similar quantum interplay may occur in iron-based superconductors, ruthenates, and heavy-fermion systems where magnetism and superconductivity coexist. Even in materials without permanent local moments, strong electronic correlations can generate the same kind of quantum magnetism.

In short, this work uncovers a purely electronic route to complex magnetic dynamics — showing that the quantum personality of the electron alone can mimic effects once thought to require lattice distortions. By uniting electronic structure and spin excitations under a single, fully quantum description, it moves us one step closer to understanding how magnetism truly works in the most intricate materials.

Read the full article

Magnon damping and mode softening in quantum double-exchange ferromagnets

A Moreo et al 2025 Rep. Prog. Phys. 88 068001

Do you want to learn more about this topic?

Nanoscale electrodynamics of strongly correlated quantum materials by Mengkun Liu, Aaron J Sternbach and D N Basov (2017)


The MacBook Air M2 on sale at €699, its lowest price ever

4 January 2026 at 07:25

Update 04/01 – €699 is the new floor price for the MacBook Air M2. It was already available at this price from Boulanger in the middle of the week; now it is Darty's turn to offer it at this price in partnership with Rakuten. To benefit from the offer, simply enter the code RAKUTEN50 when ordering. The transaction is handled by Rakuten, but delivery is carried out by Darty.

This configuration comes with 16 GB of RAM and 256 GB of storage. An offer not to be missed if you are looking for a low-cost Mac.

Update 26/12 – Boulanger is continuing its double discount on the Midnight MacBook Air M2, which brings it down to just €724, its lowest price. The machine is listed at €749, but once it is in the basket an additional €25 discount is applied.

MacBook Air M2 in Midnight. Image: MacGeneration.

Launched in 2022, the MacBook Air M2 is very pleasant to use: it is light, silent, fast and long-lasting. Two generations have succeeded it, but the formula has not changed, so it still holds up perfectly well today. The 16 GB of RAM is enough for everyday use. The 256 GB of storage may be too little for some, but an external SSD can work around that.


MacBook Air M2 review: a leap into the modern Air


Update 20/12 – Little by little, the MacBook Air M2 is closing in on the psychological €700 mark. In recent days, more and more short-lived offers between €720 and €750 have been popping up. Today the best one comes from the Rakuten / Darty duo: by entering the code DARTY10, you can get Apple's laptop for €739. It is a configuration with 16 GB of RAM and a 256 GB SSD. The transaction goes through Rakuten, but delivery is handled by Darty. Amazon, for its part, offers the same configuration for €749.

Update 16/12 – Amazon has hit back at Boulanger in turn and is offering the same 16 GB MacBook Air M2 at €725!

Update 15/12 – In 2026 Mac prices could rise again, but 2026 is still (a little) way off. In other words, we may not see a €724 MacBook Air again any time soon! At this price, Boulanger is offering the MacBook Air M2 with 16 GB of RAM and 256 GB of storage. It is, of course, a brand-new model! To get it at this price, remember to enter the code NOEL25.

Update 11/12 – The MacBook Air M2 is on offer today at €749 from Boulanger! It is the same model: 16 GB of RAM and a 256 GB SSD.

Update 09/12 – Since Black Friday, prices on some Mac configurations have tended to creep back up. There are still good deals to be had, though! After being offered for a few days at €799, the MacBook Air M2 with 16 GB of RAM and 256 GB of storage is back at €775. But the real surprise comes from Cdiscount, which has not simply matched it: the site has launched an even more aggressive counter-offensive. With the code MBA25, the same MacBook Air M2 drops to €750, quite simply one of the best prices ever seen for this model.

The MacBook Air M4, for its part, is on offer at €942.11. It had long been priced at €899.

Update 3/12 – Amazon has just cut the price of the 16 GB MacBook Air M2 again. It is now on offer at €748!

Update 26/11 – Every day the MacBook Air M2 sheds a few more euros. It is now available for €773 on Amazon! To get it at this price, you need to activate the coupon on offer.

Update 21/11 – The price of the MacBook Air M2 is falling again on Amazon. It is listed today at €798, but Amazon knocks off €15 at checkout, which brings the MacBook Air M2 down to €783!

Update 14 November, 14:10: The price of the MacBook Air M2 keeps tumbling: right now you can get it for €773 at Cdiscount. To do so, you will need to enter the code POMME25 at the payment step. This is the 256 GB version with 16 GB of RAM. The machine is sold and shipped by Cdiscount. Don't wait too long, as there is no telling how long the offer will stay online.

Original article: Although Apple recently lowered the price of the 13-inch MacBook Air M4, now at €1,099, there is still no truly low-cost Mac laptop in the line-up… at least not from Apple directly. Many resellers, however, still sell the MacBook Air M2 in its variant with 16 GB of RAM and 256 GB of storage. And Amazon even offers a (small) discount: it is at €798, its lowest price at Amazon1.

The MacBook Air M2 in the Midnight finish. Image: MacGeneration

The machine launched in 2022 at €1,500 (with 8 GB of RAM), and it is still a capable laptop with excellent battery life that runs silently, unlike the MacBook Pro M5, for example. The MacBook Air M4 obviously has a more modern and more powerful system-on-chip, but the M2 chip still holds its own. It is the black (Midnight) version that is offered at this price, and it has only one flaw: it is (very) prone to fingerprints. Otherwise, the MacBook Air M2 remains an excellent machine, especially at this price.


  1. To be honest, it has been at €799 for a few weeks now, but it remains a good deal that often goes unnoticed.  ↩︎

Fluid-based laser scanning technique could improve brain imaging

4 November 2025 at 14:00

Using a new type of low-power, compact, fluid-based prism to steer the beam in a laser scanning microscope could transform brain imaging and help researchers learn more about neurological conditions such as Alzheimer’s disease.

The “electrowetting prism” utilized was developed by a team led by Juliet Gopinath from the electrical, computer and energy engineering and physics departments at the University of Colorado at Boulder (CU Boulder) and Victor Bright from CU Boulder’s mechanical engineering department, as part of their ongoing collaboration on electrically controllable optical elements for improving microscopy techniques.

“We quickly became interested in biological imaging, and work with a neuroscience group at University of Colorado Denver Anschutz Medical Campus that uses mouse models to study neuroscience,” Gopinath tells Physics World. “Neuroscience is not well understood, as illustrated by the neurodegenerative diseases that don’t have good cures. So a great benefit of this technology is the potential to study, detect and treat neurodegenerative diseases such as Alzheimer’s, Parkinson’s and schizophrenia,” she explains.

The researchers fabricated their patented electrowetting prism using custom deposition and lithography methods. The device consists of two immiscible liquids housed in a 5 mm tall, 4 mm diameter glass tube, with a dielectric layer on the inner wall coating four independent electrodes. When an electric field is produced by applying a potential difference between a pair of electrodes on opposite sides of the tube, it changes the surface tension and therefore the curvature of the meniscus between the two liquids. Light passing through the device is refracted by a different amount depending on the angle of tilt of the meniscus (as well as on the optical properties of the liquids chosen), enabling beams to be steered by changing the voltage on the electrodes.
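The steering principle is ordinary refraction at the tilted liquid-liquid interface: in the small-angle limit the beam leaving the device is deviated by roughly (n2 - n1)·α, where α is the meniscus tilt and n1, n2 are the refractive indices of the two liquids. A minimal ray-optics sketch in Python, with illustrative indices rather than those of the liquids actually used:

```python
import numpy as np

# Small ray-optics model of an electrowetting prism: a beam crossing a
# liquid-liquid meniscus tilted by angle alpha, then a flat exit window into air.
# Refractive indices are illustrative, not the values used by the CU Boulder team.
n1, n2 = 1.33, 1.49          # e.g. a water-like and an oil-like liquid

def steering_angle(alpha_deg):
    """Deflection (degrees, in air) of an on-axis beam for meniscus tilt alpha."""
    alpha = np.radians(alpha_deg)
    theta2 = np.arcsin(n1 / n2 * np.sin(alpha))        # Snell's law at the meniscus
    inside = alpha - theta2                            # deviation inside liquid 2
    return np.degrees(np.arcsin(n2 * np.sin(inside)))  # refraction at the exit window

for tilt in (2, 5, 10):
    print(f"{tilt:2d} deg tilt -> {steering_angle(tilt):.2f} deg steering")
# Small-angle rule of thumb: deflection ~ (n2 - n1) * tilt ~ 0.16 * tilt.
```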

Beam steering for scanning in imaging and microscopy can be achieved via several means, including mechanically controlled mirrors, glass prisms or acousto-optic deflectors (in which a sound wave is used to diffract the light beam). But, unlike the new electrowetting prisms, these methods consume too much power and are not small or lightweight enough to be used for miniature microscopy of neural activity in the brains of living animals.

In tests detailed in Optics Express, the researchers integrated their electrowetting prism into an existing two-photon laser scanning microscope and successfully imaged individual 5 µm-diameter fluorescent polystyrene beads, as well as large clusters of those beads.

They also used computer simulation to study how the liquid–liquid interface moved, and found that when a sinusoidal voltage is used for actuation, at 25 and 75 Hz, standing wave resonance modes occur at the meniscus – a result closely matched by a subsequent experiment that showed resonances at 24 and 72 Hz. These resonance modes are important for enhancing device performance since they increase the angle through which the meniscus can tilt and thus enable optical beams to be steered through a greater range of angles, which helps minimize distortions when raster scanning in two dimensions.

Bright explains that this research built on previous work in which an electrowetting prism was used in a benchtop microscope to image a mouse brain. He cites seeing the individual neurons as a standout moment that, coupled with the current results, shows their prism is now “proven and ready to go”.

Gopinath and Bright caution that “more work is needed to allow human brain scans, such as limiting voltage requirements, allowing the device to operate at safe voltage levels, and miniaturization of the device to allow faster scan speeds and acquiring images at a much faster rate”. But they add that miniaturization would also make the device useful for endoscopy, robotics, chip-scale atomic clocks and space-based communication between satellites.

The team has already begun investigating two other potential applications: LiDAR (light detection and ranging) systems and optical coherence tomography (OCT). Next, the researchers “hope to integrate the device into a miniaturized microscope to allow imaging of the brain in freely moving animals in natural outside environments,” they say. “We also aim to improve the packaging of our devices so they can be integrated into many other imaging systems.”


Intrigued by quantum? Explore the 2025 Physics World Quantum Briefing 2.0

3 November 2025 at 15:51

To coincide with a week of quantum-related activities organized by the Institute of Physics (IOP) in the UK, Physics World has just published a free-to-read digital magazine to bring you up to date about all the latest developments in the quantum world.

The 62-page Physics World Quantum Briefing 2.0 celebrates the International Year of Quantum Science and Technology (IYQ) and also looks ahead to a quantum-enhanced future.

Marking 100 years since the advent of quantum mechanics, IYQ aims to raise awareness of the impact of quantum physics and its myriad future applications, with a global diary of quantum-themed public talks, scientific conferences, industry events and more.

The 2025 Physics World Quantum Briefing 2.0, which follows on from the first edition published in May, contains yet more quantum topics for you to explore and is once again divided into “history”, “mystery” and “industry”.

You can find out more about the contributions of Indian physicist Satyendra Nath Bose to quantum science; explore weird phenomena such as causal order and quantum superposition; and discover the latest applications of quantum computing.

A century after quantum mechanics was first formulated, many physicists are still undecided on some of the most basic foundational questions. There’s no agreement on which interpretation of quantum mechanics holds strong; whether the wavefunction is merely a mathematical tool or a true representation of reality; or what impact an observer has on a quantum state.

Some of the biggest unanswered questions in physics – such as finding the quantum/classical boundary or reconciling gravity and quantum mechanics – lie at the heart of these conundrums. So as we look to the future of quantum – from its fundamentals to its technological applications – let us hope that some answers to these puzzles will become apparent as we crack the quantum code to our universe.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ.

Find out more on our quantum channel.


Michel Lafon release – Just Twilight volumes 1 and 2

4 November 2025 at 00:30


This October, Michel Lafon is unveiling a brand-new webtoon: Just Twilight by Kang Ki and Woo Jihye. Better still, the publisher delights us by releasing the first two volumes simultaneously.


Just Twilight – volumes 1 and 2 released

The collector's edition of volume 1 comes with an exclusive metallic cover, four polaroid-format ex-libris prints, a sheet of stickers and a bookmark!


Joon-yeong, the top student at her high school, is desperately looking for a refuge where she can study, far from the family home she is fleeing. By chance, she discovers an abandoned house in the forest, perfect for working in peace. But the place is already occupied: Beom-jin, a classmate with a bad reputation, is hiding there too. A strange cohabitation takes shape between these two students who could not be more different, united by the desire to protect their secret haven. And in this place set apart from the world, something unexpected may well be born.


Volumes 1 and 2 were released simultaneously on 16 October. Volume 1 is available for €16.95, while volume 2 costs €14.95.


Open Bar for November 2025

3 November 2025 at 15:49
In the grand series "Since I retired, I no longer know what day it is", I have just been called to order (just as well; thank you, Daniel, and thank you, Ysengrain): I forgot the Open Bar for the start of the month. And it is not the first time! Well, I had a neurological examination to be sure: I have no cognitive degeneration and all my memories are fine. ... Continue reading

Quantum computing: hype or hope?

3 November 2025 at 15:00

Unless you’ve been living under a stone, you can’t have failed to notice that 2025 marks the first 100 years of quantum mechanics. A massive milestone, to say the least, about which much has been written in Physics World and elsewhere in what is the International Year of Quantum Science and Technology (IYQ). However, I’d like to focus on a specific piece of quantum technology, namely quantum computing.

I keep hearing about quantum computers, so people must be using them to do cool things, and surely they will soon be as commonplace as classical computers. But as a physicist-turned-engineer working in the aerospace sector, I struggle to get a clear picture of where things are really at. If I ask friends and colleagues when they expect to see quantum computers routinely used in everyday life, I get answers ranging from “in the next two years” to “maybe in my lifetime” or even “never”.

Before we go any further, it’s worth reminding ourselves that quantum computing relies on several key quantum properties, including superposition, which gives rise to the quantum bit, or qubit. The basic building block of a quantum computer – the qubit – exists as a combination of 0 and 1 states at the same time and is represented by a probabilistic wave function. Classical computers, in contrast, use binary digital bits that are either 0 or 1.

Also vital for quantum computers is the notion of entanglement, which occurs when two or more qubits become correlated, allowing them to share their quantum information. In a highly correlated system, a quantum computer can explore many paths simultaneously. This “massive scale” parallel processing is how quantum computers may solve certain problems exponentially faster than classical ones.

The other key phenomenon for quantum computers is quantum interference. The wave-like nature of qubits means that when different probability amplitudes are in phase, they combine constructively to increase the likelihood of the right solution. Conversely, destructive interference occurs when amplitudes are out of phase, making it less likely to get the wrong answer.

Quantum interference is important in quantum computing because it allows quantum algorithms to amplify the probability of correct answers and suppress incorrect ones, making calculations much faster. Along with superposition and entanglement, it means that quantum computers could process and store vast numbers of probabilities at once, outstripping even the best classical supercomputers.
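A minimal sketch of the three ingredients just described, written with bare state vectors rather than any particular quantum-computing library: a Hadamard gate creates a superposition, a second Hadamard makes the two paths interfere back to a definite outcome, and a Hadamard followed by a CNOT turns two qubits into an entangled Bell state.

```python
import numpy as np

# Superposition, interference and entanglement with bare state vectors.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)        # Hadamard gate
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]])       # controlled-NOT gate
zero = np.array([1, 0])                             # |0>

# Superposition: equal amplitudes for 0 and 1.
plus = H @ zero
print(np.abs(plus) ** 2)            # [0.5 0.5] -- each outcome equally likely

# Interference: the two paths recombine and the |1> amplitude cancels.
back = H @ plus
print(np.round(np.abs(back) ** 2))  # [1. 0.] -- constructive for |0>, destructive for |1>

# Entanglement: H then CNOT turns |00> into the Bell state (|00> + |11>)/sqrt(2).
two_qubits = np.kron(plus, zero)
bell = CNOT @ two_qubits
print(np.round(np.abs(bell) ** 2, 2))  # [0.5 0. 0. 0.5] -- outcomes perfectly correlated
```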

Towards real devices

To me, it all sounds exciting, but what have quantum computers ever done for us so far? It’s clear that quantum computers are not ready to be deployed in the real world. Significant technological challenges need to be overcome before they become fully realisable. In any case, no-one is expecting quantum computers to displace classical computers “like for like”: they’ll both be used for different things.

Yet it seems that the very essence of quantum computing is also its Achilles heel. Superposition, entanglement and interference – the quantum properties that will make it so powerful – are also incredibly difficult to create and maintain. Qubits are also extremely sensitive to their surroundings. They easily lose their quantum state due to interactions with the environment, whether via stray particles, electromagnetic fields, or thermal fluctuations. This loss of quantum behaviour, known as decoherence, makes quantum computers prone to errors.
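As a toy illustration of decoherence (a simple exponential-dephasing model with an assumed coherence time, not a description of any particular hardware), the off-diagonal element of a qubit's density matrix decays with a characteristic time T2, washing out the interference on which the computation relies:

```python
import numpy as np

# Toy pure-dephasing model: the superposition (|0> + |1>)/sqrt(2) losing coherence.
# The off-diagonal density-matrix element decays as exp(-t/T2); numbers are illustrative.
T2 = 100e-6                                   # assumed coherence time: 100 microseconds
times = np.array([0, 25e-6, 100e-6, 300e-6])  # seconds

for t in times:
    coherence = 0.5 * np.exp(-t / T2)         # |rho_01| for the equal superposition
    visibility = 2 * coherence                # contrast of any interference experiment
    print(f"t = {t*1e6:5.0f} us: interference visibility = {visibility:.2f}")
# After a few T2 the visibility is essentially zero and the qubit behaves classically.
```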

That’s why quantum computers need specialized – and often cryogenically controlled – environments to maintain the quantum states necessary for accurate computation. Building a quantum system with lots of interconnected qubits is therefore a major, expensive engineering challenge, with complex hardware and extreme operating conditions. Developing “fault-tolerant” quantum hardware and robust error-correction techniques will be essential if we want reliable quantum computation.

As for the development of software and algorithms for quantum systems, there’s a long way to go, with a lack of mature tools and frameworks. Quantum algorithms require fundamentally different programming paradigms to those used for classical computers. Put simply, that’s why building reliable, real-world deployable quantum computers remains a grand challenge.

What does the future hold?

Despite the huge amount of work that still lies in store, quantum computers have already demonstrated some amazing potential. The US firm D-Wave, for example, claimed earlier this year to have carried out simulations of quantum magnetic phase transitions that wouldn’t be possible with the most powerful classical devices. If true, this was the first time a quantum computer had achieved “quantum advantage” for a practical physics problem (whether the problem was worth solving is another question).

There is also a lot of research and development going on around the world into solving the qubit stability problem. At some stage, there will likely be a breakthrough design for robust and reliable quantum computer architecture. There is probably a lot of technical advancement happening right now behind closed doors.

The first real-world applications of quantum computers will be akin to the giant classical supercomputers of the past. If you were around in the 1980s, you’ll remember Cray supercomputers: huge, inaccessible beasts owned by large corporations, government agencies and academic institutions to enable vast amounts of calculations to be performed (provided you had the money).

And, if I believe what I read, quantum computers will not replace classical computers, at least not initially, but work alongside them, as each has its own relative strengths. Quantum computers will be suited for specific and highly demanding computational tasks, such as drug discovery, materials science, financial modelling, complex optimization problems and increasingly large artificial intelligence and machine-learning models.

These tasks all lie beyond the limits of classical computing resources. Classical computers will remain relevant for everyday tasks like web browsing, word processing and managing databases, and they will be essential for handling the data preparation, visualization and error correction required by quantum systems.

And there is one final point to mention, which is cyber security. Quantum computing poses a major threat to existing encryption methods, with potential to undermine widely used public-key cryptography. There are concerns that hackers nowadays are storing their stolen data in anticipation of future quantum decryption.

Having looked into the topic, I can now see why the timeline for quantum computing is so fuzzy and why I got so many different answers when I asked people when the technology would be mainstream. Quite simply, I still can’t predict how or when the tech stack will pan out. But as IYQ draws to a close, the future for quantum computers is bright.


The astonishing powers of the plant microbiota

3 November 2025 at 13:48
The plant microbiota? The plant holobiont? Little-known concepts, yet crucial for plant health and sustainable agriculture. Explanations from ecologist Philippe Vandenkoornhuyse, renowned worldwide for having helped reveal their role.

Modular cryogenics platform adapts to new era of practical quantum computing

3 November 2025 at 10:45
Modular and scalable: the ICE-Q cryogenics platform delivers the performance and reliability needed for professional computing environments while also providing a flexible and extendable design. The standard configuration includes a cooling module, a payload with a large sample space, and a side-loading wiring module for scalable connectivity (Courtesy: ICEoxford)

At the centre of most quantum labs is a large cylindrical cryostat that keeps the delicate quantum hardware at ultralow temperatures. These cryogenic chambers have expanded to accommodate larger and more complex quantum systems, but the scientists and engineers at UK-based cryogenics specialist ICEoxford have taken a radical new approach to the challenge of scalability. They have split the traditional cryostat into a series of cube-shaped modules that slot into a standard 19-inch rack mount, creating an adaptable platform that can easily be deployed alongside conventional computing infrastructure.

“We wanted to create a robust, modular and scalable solution that enables different quantum technologies to be integrated into the cryostat,” says Greg Graf, the company’s engineering manager. “This approach offers much more flexibility, because it allows different modules to be used for different applications, while the system also delivers the efficiency and reliability that are needed for operational use.”

The standard configuration of the ICE-Q platform has three separate modules: a cryogenics unit that provides the cooling power, a large payload for housing the quantum chip or experiment, and a patent-pending wiring module that attaches to the side of the payload to provide the connections to the outside world. Up to four of these side-loading wiring modules can be bolted onto the payload at the same time, providing thousands of external connections while still fitting into a standard rack. For applications where space is not such an issue, the payload can be further extended to accommodate larger quantum assemblies and potentially tens of thousands of radio-frequency or fibre-optic connections.

The cube-shaped form factor provides much improved access to these external connections, whether for designing and configuring the system or for ongoing maintenance work. The outer shell of each module consists of panels that are easily removed, offering a simple mechanism for bolting modules together or stacking them on top of each other to provide a fully scalable solution that grows with the qubit count.

The flexible design also offers a more practical solution for servicing or upgrading an installed system, since individual modules can be simply swapped over as and when needed. “For quantum computers running in an operational environment it is really important to minimize the downtime,” says Emma Yeatman, senior design engineer at ICEoxford. “With this design we can easily remove one of the modules for servicing, and replace it with another one to keep the system running for longer. For critical infrastructure devices, it is possible to have built-in redundancy that ensures uninterrupted operation in the event of a failure.”

Other features have been integrated into the platform to make it simple to operate, including a new software system for controlling and monitoring the ultracold environment. “Most of our cryostats have been designed for researchers who really want to get involved and adapt the system to meet their needs,” adds Yeatman. “This platform offers more options for people who want an out-of-the-box solution and who don’t want to get hands on with the cryogenics.”

Such a bold design choice was enabled in part by a collaborative research project with Canadian company Photonic Inc, funded jointly by the UK and Canada, that was focused on developing an efficient and reliable cryogenics platform for practical quantum computing. That R&D funding helped to reduce the risk of developing an entirely new technology platform that addresses many of the challenges that ICEoxford and its customers had experienced with traditional cryostats. “Quantum technologies typically need a lot of wiring, and access had become a real issue,” says Yeatman. “We knew there was an opportunity to do better.”

However, converting a large cylindrical cryostat into a slimline and modular form factor demanded some clever engineering solutions. Perhaps the most obvious was creating a frame that allows the modules to be bolted together while still remaining leak tight. Traditional cryostats are welded together to ensure a leak-proof seal, but for greater flexibility the ICEoxford team developed an assembly technique based on mechanical bonding.

The side-loading wiring module also presented a design challenge. To squeeze more wires into the available space, the team developed a high-density connector for the coaxial cables to plug into. An additional cold-head was also integrated into the module to pre-cool the cables, reducing the overall heat load generated by such large numbers of connections entering the ultracold environment.

Flexible for the future: the outer shell of the modules is covered with removable panels that make it easy to extend or reconfigure the system (Courtesy: ICEoxford)

Meanwhile, the speed of the cooldown and the efficiency of operation have been optimized by designing a new type of heat exchanger that is fabricated using a 3D printing process. “When warm gas is returned into the system, a certain amount of cooling power is needed just to compress and liquefy that gas,” explains Kelly. “We designed the heat exchangers to exploit the returning cold gas much more efficiently, which enables us to pre-cool the warm gas and use less energy for the liquefaction.”

The initial prototype has been designed to operate at 1 K, which is ideal for the photonics-based quantum systems being developed by ICEoxford’s research partner. But the modular nature of the platform allows it to be adapted to diverse applications, with a second project now underway with the Rutherford Appleton Laboratory to develop a module that will be used at the forefront of the global hunt for dark matter.

Already on the development roadmap are modules that can sustain temperatures as low as 10 mK – which is typically needed for superconducting quantum computing – and a 4 K option for trapped-ion systems. “We already have products for each of those applications, but our aim was to create a modular platform that can be extended and developed to address the changing needs of quantum developers,” says Kelly.

As these different options come onstream, the ICEoxford team believes that it will become easier and quicker to deliver high-performance cryogenic systems that are tailored to the needs of each customer. “It normally takes between six and twelve months to build a complex cryogenics system,” says Graf. “With this modular design we will be able to keep some of the components on the shelf, which would allow us to reduce the lead time by several months.”

More generally, the modular and scalable platform could be a game-changer for commercial organizations that want to exploit quantum computing in their day-to-day operations, as well as for researchers who are pushing the boundaries of cryogenics design with increasingly demanding specifications. “This system introduces new avenues for hardware development that were previously constrained by the existing cryogenics infrastructure,” says Kelly. “The ICE-Q platform directly addresses the need for colder base temperatures, larger sample spaces, higher cooling powers, and increased connectivity, and ensures our clients can continue their aggressive scaling efforts without being bottlenecked by their cooling environment.”

  • You can find out more about the ICE-Q platform by contacting the ICEoxford team at iceoxford.com, or via email at sales@iceoxford.com. They will also be presenting the platform at the UK’s National Quantum Technologies Showcase in London on 7 November, with a further launch at the American Physical Society meeting in March 2026.

The post Modular cryogenics platform adapts to new era of practical quantum computing appeared first on Physics World.

Portable source could produce high-energy muon beams

3 novembre 2025 à 10:00

Due to government shutdown restrictions currently in place in the US, the researchers who headed up this study have not been able to comment on their work.

Laser plasma acceleration (LPA) may be used to generate multi-gigaelectronvolt muon beams, according to physicists at the Lawrence Berkeley National Laboratory (LBNL) in the US. Their work might help in the development of ultracompact muon sources for applications such as muon tomography – which images the interior of large objects that are inaccessible to X-ray radiography.

Muons are charged subatomic particles that are produced in large quantities when cosmic rays collide with atoms 15–20 km high up in the atmosphere. Muons have the same properties as electrons but are around 200 times heavier. This means they can travel much further through solid structures than electrons. This property is exploited in muon tomography, which analyses how muons penetrate objects and then exploits this information to produce 3D images.

The technique is similar to X-ray tomography used in medical imaging, with the cosmic-ray radiation taking the place of artificially generated X-rays and muon trackers the place of X-ray detectors. Indeed, depending on their energy, muons can traverse metres of rock or other materials, making them ideal for imaging thick and large structures. As a result, the technique has been used to peer inside nuclear reactors, pyramids and volcanoes.

As many as 10,000 muons from cosmic rays reach each square metre of the Earth’s surface every minute. These naturally produced particles have unpredictable properties, however, and they arrive mostly from near-vertical directions. This limited directionality means that it can take months to accumulate enough data for tomography.
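To get a feel for why this is so slow, here is a minimal back-of-the-envelope sketch in Python. Only the flux figure comes from the text above; the detector area, number of image bins, counts per bin and transmission fraction are purely illustrative assumptions, and real surveys must also contend with detector acceptance and the geometry of the object, which cut the usable rate further.

# Rough exposure-time estimate for cosmic-ray muon tomography.
# Only the surface flux (~10,000 muons per m^2 per minute) comes from the article;
# every other number below is an illustrative assumption.

flux_per_m2_per_min = 10_000   # cosmic-ray muon rate at the surface (from the article)
detector_area_m2 = 1.0         # assumed detector size
n_bins = 100 * 100             # assumed angular bins in the final image
counts_per_bin = 100           # assumed counts needed per bin for usable statistics
transmission = 0.01            # assumed fraction of muons surviving the object and accepted

total_counts_needed = n_bins * counts_per_bin
rate_per_min = flux_per_m2_per_min * detector_area_m2 * transmission
minutes = total_counts_needed / rate_per_min
print(f"Roughly {minutes / (60 * 24):.0f} days of exposure with these toy numbers")

With thicker targets or less favourable geometry the transmission drops by further orders of magnitude, which is how exposure times stretch from days into months.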

Another option is to use the large numbers of low-energy muons that can be produced in proton accelerator facilities by smashing a proton beam onto a fixed carbon target. However, these accelerators are large and expensive facilities, limiting their use in muon tomography.

A new compact source

Physicists led by Davide Terzani have now developed a new compact muon source based on LPA-generated electron beams. Such a source, if optimized, could be deployed in the field and could even produce muon beams in specific directions.

In LPA, an ultra-intense, ultra-short, and tightly focused laser pulse propagates into an “under-dense” gas. The pulse’s extremely high electric field ionizes the gas atoms, freeing the electrons from the nuclei, so generating a plasma. The ponderomotive force, or radiation pressure, of the intense laser pulse displaces these electrons and creates an electrostatic wave that produces accelerating fields orders of magnitude higher than what is possible in the traditional radio-frequency cavities used in conventional accelerators.
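To put rough numbers on this, a standard textbook estimate (not a figure from this study) is the cold, non-relativistic wave-breaking field, which sets the scale of the accelerating gradient a plasma wave can sustain:

E_0 = \frac{m_e c\,\omega_p}{e} \approx 96\,\sqrt{n_e\,[\mathrm{cm^{-3}}]}\ \mathrm{V\,m^{-1}}

where \omega_p is the plasma frequency and n_e the electron density. For typical densities of 10^{17}–10^{18} cm^{-3} this gives gradients of tens of gigavolts per metre, compared with the tens of megavolts per metre sustained by conventional radio-frequency cavities, which is why a 30 cm plasma stage can reach energies that would otherwise demand hundreds of metres of accelerator.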

An LPA therefore acts as an ultra-compact electron accelerator, allowing muons to be produced in a small facility such as BELLA (the Berkeley Lab Laser Accelerator), where Terzani and his colleagues work. Indeed, in their experiment they succeeded in generating a 10 GeV electron beam in a 30 cm gas target for the first time.

The researchers collided this beam with a dense converter target such as tungsten. This slows the beam down so that it emits Bremsstrahlung, or braking radiation, which interacts with the material to produce secondary particles that include lepton–antilepton pairs, such as electron–positron and muon–antimuon pairs. The result, behind the converter target, is a short-lived burst of muons that propagates roughly along the same axis as the incoming electron beam. Thick concrete shielding then filters out most of the other secondary products while letting the majority of the muons pass through.

Crucially, Terzani and colleagues were able to separate the muon signal from the large radiation background – something that can be difficult to do because of the inherent inefficiency of the muon-production process. This allowed them to identify two different muon populations coming from the accelerator: a collimated, forward-directed population generated by pair production, and a low-energy, isotropic population generated by meson decay.

Many applications

Muons can be used in a range of fields, from imaging to fundamental particle physics. As mentioned, muons from cosmic rays are currently used to inspect large and thick objects not accessible to regular X-ray radiography – a recent example is the discovery of a hidden chamber in Khufu’s Pyramid. They can also be used to image the core of a burning blast furnace or nuclear-waste storage facilities.

While the new LPA-based technique cannot yet produce muon fluxes suitable for particle physics experiments – to replace a muon injector, for example – it could offer the accelerator community a convenient way to test and develop essential elements towards making a future muon collider.

The experiment in this study, which is detailed in Physical Review Accelerators and Beams, focused on detecting the passage of muons, unequivocally proving their signature. The researchers conclude that they now have a much better understanding of the source of these muons.

Unfortunately, the original programme that funded this research has ended, so future studies are limited at the moment. Undeterred, the researchers say they strongly believe in the potential of LPA-generated muons and are working on resuming some of their experiments. For example, they aim to measure the flux and the spectrum of the resulting muon beam using completely different detection techniques, based for example on ultra-fast particle trackers.

The LBNL team also wants to explore different applications, such as imaging deep ore deposits – something that will be quite challenging because it poses strict limitations on the minimum muon energy required to penetrate soil. Therefore, they are looking into how to increase the muon energy of their source.

The post Portable source could produce high-energy muon beams appeared first on Physics World.

Review – eufy SoloCam E42 kit

6 novembre 2025 à 00:22

What is eufy’s SoloCam E42 two-camera kit worth?

You know us at Vonguru: we love everything home automation, and with eufy there is always plenty to choose from on that front! Today the spotlight is on security with the new SoloCam E42 two-camera kit. On the menu: 4K UHD resolution, 360° coverage with no blind spots, AI detection and smart tracking!

Let’s see what this new kit, supplied with its HomeBase 3, is worth. It costs €479 (excluding promotions) directly from the brand’s website or on Amazon. On with the review!

 

Unboxing

The box sports the brand’s characteristic blue and white colours. The front carries the brand and model name, here eufy SoloCam E42 2-Cam Kit, along with a few marketing points presented as lists and pictograms, plus a visual of the two cameras alongside their HomeBase.

The left side details the absence of any subscription and the local storage of your data, something we really appreciate with eufy, while the right side shows an outdoor scene monitored by the camera, a few features listed in English and a reminder to download the eufySecurity app.


 

Technical specifications

Recommended use: Outdoor security
Brand: eufy Security
Model name: SoloCam E42
Connectivity technology: Wireless
Indoor/outdoor use: Outdoor
Connectivity protocol: WLAN
Mounting type: Wall mount
Video recording resolution: 4K
Colour: White
Number of items: 2

Features

  • Ultimate sharpness – with true 4K UHD resolution, this camera misses no detail. It can even read a number plate from up to 10 m away.
  • AI detection and smart tracking – the built-in AI instantly detects motion and automatically tracks people, vehicles or other important events within view. False alarms are minimised and your property stays protected.
  • 360° coverage with no blind spots – the wide viewing angle provides full coverage, minimises blind spots and lets you keep an eye on your doorstep, entrance or garden.
  • Motion-triggered siren – protect your home with a powerful, motion-activated strobe that scares off unwanted visitors and alerts you instantly to any unusual behaviour.
  • Peace of mind with SolarPlus 2.0 technology – two hours of direct sunlight are enough to keep the camera running all day, for continuous, maintenance-free use in all weather conditions.
  • No monthly fees – insert a 128 GB microSD card to store your footage privately and avoid subscription costs. Note: microSD card not included.


 

Contents

  • Two wireless security cameras with solar panels
  • Two adjustable mounting brackets
  • Screws and fixings for wall mounting
  • Cables for the initial charge
  • HomeBase 3
  • Region-specific mains plugs for the HomeBase
  • Connection cable for the HomeBase
  • Stickers


 

Installation

Let’s start with installation, both hardware and software, neither of which holds many secrets for us any more. One reminder, though: DO check your Wi-Fi range BEFORE drilling into your wall, because yes, you will have to drill 🙂

  • Preparation:
    • Charge the cameras via USB-C.
    • Choose a clear outdoor spot, 2–3 m high and well exposed to the sun if you are using the built-in solar panels.
  • Mounting:
    • Fit the cameras using the supplied brackets.
    • Angle them slightly downwards for better coverage and fewer false alerts.
  • Wi-Fi connection:
    • Install the eufy Security app and create an account if you do not already have one.
    • Add the cameras: press the SYNC button, scan the QR code in the app and connect them to 2.4 GHz Wi-Fi.
  • App settings:
    • Rename each camera.
    • Set up detection zones and notifications (people, vehicles, animals).
    • Choose the resolution and storage (microSD).
  • Tests:
    • Check the live feed, motion detection and night vision.

 

Testing and the app

When you think “hassle-free outdoor security”, you want a sharp image, reliable alerts, no subscription, and an installation that does not turn your wall into a building site, whether you own or rent. At first glance, eufy’s SoloCam E42 two-camera kit ticks many of those boxes: 4K resolution, solar/battery power, AI detection, and an included hub (HomeBase S380) to go further. I already own eufy indoor and outdoor cameras, and I was very keen to test these new models.

Incidentally, since I already had a HomeBase, it still sits in my server rack, snug alongside its companions.


Installation covered, let’s talk about everything else. On image quality, eufy does not disappoint. The SoloCam E42’s 4K sensor delivers a really excellent picture, both day and night. Details are crisp, colours balanced, and compression does not degrade the stream, even on a standard Wi-Fi network. Colour night vision is also on hand, thanks to a built-in LED spotlight. The camera switches automatically between infrared and colour vision depending on light levels, guaranteeing constant visibility.

Eufy includes smart motion detection that distinguishes people, vehicles and animals. No more pointless notifications every time a leaf moves or an insect passes in front of the lens. Accuracy is excellent, especially for a camera that can also operate without a base station. Alerts reach your smartphone quickly, accompanied by a short video recorded locally on a microSD card (up to 128 GB). Above all, no subscription is needed: everything is stored and managed locally.


Another nice surprise: two-way audio. From the app you can speak directly through the camera’s loudspeaker, handy for answering a delivery driver or deterring an intruder. The E42 also packs a siren and a light flash that can be triggered automatically or manually, an effective combination for scaring off anyone who gets a little too close.

The SoloCam E42’s big strength is battery life. In normal use the battery lasts several months, and the built-in solar panel keeps it continuously topped up. In practice the camera maintains a stable battery level even in overcast weather. For anyone who does not enjoy climbing a ladder every three months, that is a real luxury. For what it’s worth, my four-camera S330 kit has NEVER needed recharging; the panels do the job all the time.


After a few days of use you forget the system is there. Notifications are relevant, the video feed is fast and the app is perfectly smooth. Eufy has struck an excellent balance here between ergonomics, performance and peace of mind. It is a shame, however, that these new cameras do not offer dome-style pan-and-tilt filming, something we hope to see from eufy in the coming months with new models, as it is really the only feature missing from these high-end cameras.

I also like it when the cameras integrate the solar panels directly, as on other models from the brand we have already tested, but to power all this concentrated technology I presume more energy is needed, hence the larger panel here. It also lets you position the panel somewhere else if the chosen spot gets little sun. So it is up to you to decide what suits your needs and aesthetic preferences.

Conclusion 

Eufy’s SoloCam E42 two-camera kit ticks all the boxes: 4K image, solar-powered autonomy, installation in a few minutes and secure local storage. It is an ideal solution for those who want to protect their home without burdening themselves with a complex system. Given its 360° field of coverage, mounting it on a post, for example, lets you make the most of it.

In short, this camera kit works really well: autonomous and discreet, it does exactly what you expect of it, with no extra costs, a fairly easy installation and a well-polished app. The price is nevertheless steep: at €479 excluding promotions, directly from the brand’s website or on Amazon, it is not within everyone’s budget.

Review – eufy SoloCam E42 kit can be read on Vonguru.

Review – Withings ScanWatch 2 2025

3 novembre 2025 à 00:22

Review of the Withings ScanWatch 2 2025 smartwatch

We told you about it when the brand announced it: the ScanWatch 2 2025 is now available in shops. As a reminder, it is a hybrid smartwatch with 30 days of battery life. Through its app, this model also offers menstrual-cycle tracking, not to mention the various readings that let you check on heart health, daily step count and body temperature, or follow a sleep-quality score.

The ScanWatch 2 2025 is available now, priced at €349.95

So, how does this new ScanWatch fare day to day? The answer is in this review!

 

Unboxing

The ScanWatch 2 2025 comes in a white box with the watch pictured on it, identical to the box of the ScanWatch 2 released in 2023. On the back are a few additional details on compatibility (iPhone, iPad, smartphones running at least Android 10), along with various technical and recycling information. On the sides, the brand highlights its Withings app.


As soon as you open this little box, you find the watch mounted on a cardboard base, alongside:

  • a charging dock
  • its USB-C to USB-A cable
  • a quick-start guide
  • a promotional card for Withings products

Technical specifications

 

Testing

Getting started

Setting up the ScanWatch 2 2025 for the first time is relatively quick and easy. Put the watch on your wrist and first download the Withings app (the same one used for the brand’s other products). Remember, of course, to enable Bluetooth and location on your smartphone, then launch the app. Using the button at the top right of the app, you can set up the watch. In under five minutes, everything is ready to go!

The “hardest” part, if anything, is setting the correct time on the watch if it is not perfectly set by default. You then have to turn the hands from the app to adjust the time live, but that remains a detail.


One small negative point about the watch out of the box: the strap was slightly marked by the packaging. The mark fades after a few days of wear but is still faintly visible after 10 days on the wrist, which is a bit of a shame.


 

Everyday use

Day to day, the ScanWatch 2 2025 is fairly comfortable to use and to wear. The strap does tend to mark the skin a little, though, and if I loosen it even slightly it is no longer tight enough for me. You may therefore want to consider a strap in another material for everyday life. On the other hand, this snug fit is also what allows the watch to withstand immersion, so it depends on your daily use and whether you want to keep it on in the shower or at the pool.

In use, I noticed that this model tends to irritate my skin, unlike the original ScanWatch. If I wear it for several days in a row without taking it off, I end up with redness and irritation on my wrist (under the case and also along the silicone strap). This makes me think a leather strap may well be the way to go if you are affected by the same issue.

On this model, unlike Fitbit watches and trackers for example, the screen remains perfectly readable outdoors, whatever the sunlight.

Otherwise, the ScanWatch 2 2025 offers a good number of interesting features for keeping track of your health in general. As mentioned above, you can check your daily step count and heart rate, but you can also take an electrocardiogram, check for breathing disturbances during sleep, or measure blood oxygen levels. In the same vein, a temperature reading is available in the app: the watch establishes a “baseline” temperature and can then tell you whether you are below or above that average.

It is an excellent ally for looking after your health, especially in our modern era where remote working has become so widespread. That said, treat the data recorded by the ScanWatch 2 2025 with care and do not hesitate to see a doctor if in doubt: technology can fail, so take good care of yourself.


Through its app, the brand also offers menstrual-cycle tracking, a very interesting option for people who do not already have an app for this. All the more so since Withings currently has an offer giving a year of the Clue app. Personally, it is an app I have used for years and I find that rather nice. On the Withings side, you can also log your periods and symptoms directly in the app. On top of that, directly on the watch (using the crown), you can record a few details about your cycle, which is quick and handy if you don’t have time to open the app.

Aesthetically, Withings’ smartwatch is very elegant, a far cry from the sporty look of Fitbit watches. Our review unit comes in dark blue with a pretty rose-gold finish, not to mention the analogue dial with hands, which gives it a decidedly premium feel.

On top of that, should you feel like it, plenty of other straps are available on the brand’s website, letting you change colour as the mood takes you. They are not cheap, however, ranging from €19.95 for the “simplest” to €49.95 for others (notably the leather ones). That said, Fitbit does not sell its own straps for much less!

 

Battery life and charging

Having used Withings watches for several years now, I can assure you that the one-month battery life keeps its promises. It is really great not to spend your time recharging your watch.

 

The app

The Withings app is truly comprehensive and intuitive. By default it opens on a home screen listing most of the recorded data and giving access to notifications: step count, sleep time, average heart rate, weight and so on. Everything can be checked in no time!

Cycle tracking

In the cycle-tracking section you can follow your periods, log them, and also add any symptoms you may feel using the little “+”. The app also shows your phase (follicular, ovulation, luteal), the fertility window and the dates of your likely next period. In addition, based on your body temperature, the app is supposed to be able to detect ovulation.

Progress and sharing

In the “Progress” section, Withings offers programmes and tips to improve your health, while the “Share” section is more about health monitoring and sharing data with your doctors, among others.


Conclusion

In the end, we once again have a smartwatch packed with features, reminiscent of those found in Fitbit’s trackers and watches. It is pleasant to wear every day and brings a welcome dose of elegance to your wrist, as it is not at all sporty in style. Watch out, however, if you have sensitive skin, which may be irritated by its materials; in that case you may want to consider buying a strap in a natural material such as leather.

This model also benefits from a long, one-month battery life, which is hugely satisfying: you never feel like you are recharging it every thirty seconds.

We do regret, however, finding that the strap of our watch was marked straight out of the box. It fades little by little but remains visible for a fortnight or so.


Unfortunately, as we find every time, accessories for the ScanWatch 2 2025 are not cheap: expect to add between €20 and €50 for a spare strap. The watch itself is available for €349.95.

But given the features it packs and its elegance, I can only recommend it. There is no real black mark against this model, apart from a price that stings and, unfortunately, a monthly subscription to add if you want to take full advantage of everything it can offer!

Vonguru Silver Award

Thanks to Withings!

Review – Withings ScanWatch 2 2025 can be read on Vonguru.

Mana Books release – Persona Le Livre de Cuisine Officiel

2 novembre 2025 à 00:11

Mana Books release – Persona Le Livre de Cuisine Officiel

As it regularly does, Mana Books is releasing new titles themed around manga and video games. Today we are looking at a brand-new recipe book that has just come out: Persona Le Livre de Cuisine Officiel by Jarrett Melendez.

 

Discover a whole host of recipes in Persona Le Livre de Cuisine Officiel


Persona Le Livre de Cuisine Officiel will win you over both as a collector’s item and as a culinary tribute to the Persona saga

Dive into the memorable, comforting culinary moments of the Persona games, whether cooking with the members of SEES or hanging out in the familiar booths of Café Leblanc! Are your stats high enough to take on the rainy-day special mega beef bowl challenge? Are you ready to face the Cosmic Tower Burger? You will find out as you browse this anthology of iconic recipes from the Persona franchise. From Tatsumi Port Island to Inaba by way of Tokyo, discover the incredible dishes that bring our favourite characters together and give them the strength they need for the battles ahead!


This recipe book is available in bookshops now, priced at €29.90.

Mana Books release – Persona Le Livre de Cuisine Officiel can be read on Vonguru.

Quantum computing on the verge: correcting errors, developing algorithms and building up the user base

31 octobre 2025 à 15:20

When it comes to building a fully functional “fault-tolerant” quantum computer, companies and government labs all over the world are rushing to be the first over the finish line. But a truly useful universal quantum computer capable of running complex algorithms would have to entangle millions of coherent qubits, which are extremely fragile. Because of environmental factors such as temperature, interference from other electronic systems in hardware, and even errors in measurement, today’s devices would fail under an avalanche of errors long before reaching that point.

So the problem of error correction is a key issue for the future of the market. It arises because errors in qubits can’t be corrected simply by keeping multiple copies, as they are in classical computers: quantum rules forbid copying a qubit’s state while it is still entangled with others and thus unknown. To run quantum circuits with millions of gates, we therefore need new tricks to enable quantum error correction (QEC).

Protected states

The general principle of QEC is to spread the information over many qubits so that an error in any one of them doesn’t matter too much. “The essential idea of quantum error correction is that if we want to protect a quantum system from damage then we should encode it in a very highly entangled state,” says John Preskill, director of the Institute for Quantum Information and Matter at the California Institute of Technology in Pasadena.

There is no unique way of achieving that spreading, however. Different error-correcting codes can depend on the connectivity between qubits – whether, say, they are coupled only to their nearest neighbours or to all the others in the device – which tends to be determined by the physical platform being used. However error correction is done, it must be done fast. “The mechanisms for error correction need to be running at a speed that is commensurate with that of the gate operations,” says Michael Cuthbert, founding director of the UK’s National Quantum Computing Centre (NQCC). “There’s no point in doing a gate operation in a nanosecond if it then takes 100 microseconds to do the error correction for the next gate operation.”

At the moment, dealing with errors is largely about compensation rather than correction: patching up the problems of errors in retrospect, for example by using algorithms that can throw out some results that are likely to be unreliable (an approach called “post-selection”). It’s also a matter of making better qubits that are less error-prone in the first place.

1 From many to few

Turning unreliable physical qubits into a logical qubit
(Courtesy: Riverlane via www.riverlane.com)

Qubits are so fragile that their quantum state is very susceptible to the local environment, and can easily be lost through the process of decoherence. Current quantum computers therefore have very high error rates – roughly one error in every few hundred operations. For quantum computers to be truly useful, this will have to fall to around one error in a million operations, and larger, more complex algorithms will demand error rates closer to one in a billion or even one in a trillion. This requires real-time quantum error correction (QEC).

To protect the information stored in qubits, a multitude of unreliable physical qubits have to be combined in such a way that if one qubit fails and causes an error, the others can help protect the system. Essentially, by combining many physical qubits (shown above on the left), one can build a few “logical” qubits that are strongly resistant to noise.

According to Maria Maragkou, commercial vice-president of quantum error-correction company Riverlane, the goal of full QEC has ramifications for the design of the machines all the way from hardware to workflow planning. “The shift to support error correction has a profound effect on the way quantum processors themselves are built, the way we control and operate them, through a robust software stack on top of which the applications can be run,” she explains. The “stack” includes everything from programming languages to user interfaces and servers.

With genuinely fault-tolerant qubits, errors can be kept under control and prevented from proliferating during a computation. Such qubits might be made in principle by combining many physical qubits into a single “logical qubit” in which errors can be corrected (see figure 1). In practice, though, this creates a large overhead: huge numbers of physical qubits might be needed to make just a few fault-tolerant logical qubits. The question is then whether errors in all those physical qubits can be checked faster than they accumulate (see figure 2).
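To get a feel for that overhead, the short sketch below uses a textbook heuristic for the surface code, in which a distance-d logical qubit needs roughly 2d² − 1 physical qubits and the logical error rate falls exponentially with d once the physical error rate p is below a threshold p_th. The values of p and p_th, and the 0.1 prefactor, are illustrative assumptions rather than figures for any particular device.

# Illustrative surface-code scaling; all numbers are assumptions, not measurements.
# Heuristic logical error rate per cycle: eps_L ~ 0.1 * (p / p_th) ** ((d + 1) // 2)
# Physical qubits per logical qubit (rotated surface code): 2 * d**2 - 1

p, p_th = 1e-3, 1e-2  # assumed physical error rate and error-correction threshold

for d in (3, 7, 11, 15, 21):
    eps_logical = 0.1 * (p / p_th) ** ((d + 1) // 2)
    n_physical = 2 * d * d - 1
    print(f"distance {d:2d}: ~{n_physical:4d} physical qubits, logical error ~ {eps_logical:.0e}")

With these toy numbers, pushing the logical error rate from around one in a thousand down to the one-in-a-billion or one-in-a-trillion regime takes the cost per logical qubit from a few tens of physical qubits to several hundred, which is exactly the overhead the field is trying to drive down.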

That overhead has been steadily reduced over the past several years, and at the end of last year researchers at Google announced that their 105-qubit Willow quantum chip passed the break-even threshold at which the error rate gets smaller, rather than larger, as more physical qubits are used to make a logical qubit. This means that in principle such arrays could be scaled up without errors accumulating.

2 Error correction in action

Illustration of the error correction cycle
(Courtesy: Riverlane via www.riverlane.com)

The illustration gives an overview of quantum error correction (QEC) in action within a quantum processing unit. UK-based company Riverlane is building its Deltaflow QEC stack that will correct millions of data errors in real time, allowing a quantum computer to go beyond the reach of any classical supercomputer.

Fault-tolerant quantum computing is the ultimate goal, says Jay Gambetta, director of IBM research at the company’s centre in Yorktown Heights, New York. He believes that to perform truly transformative quantum calculations, the system must go beyond demonstrating a few logical qubits – instead, you need arrays of at least 100 of them that can perform more than 100 million quantum operations (10⁸ QuOps). “The number of operations is the most important thing,” he says.

It sounds like a tall order, but Gambetta is confident that IBM will achieve these figures by 2029. By building on what has been achieved so far with error correction and mitigation, he feels “more confident than I ever did before that we can achieve a fault-tolerant computer.” Jerry Chow, previous manager of the Experimental Quantum Computing group at IBM, shares that optimism. “We have a real blueprint for how we can build [such a machine] by 2029,” he says (see figure 3).

Others suspect the breakthrough threshold may be a little lower: Steve Brierley, chief executive of Riverlane, believes that the first error-corrected quantum computer, with around 10 000 physical qubits supporting 100 logical qubits and capable of a million QuOps (a megaQuOp), could come as soon as 2027. Following on, gigaQuOp machines (10⁹ QuOps) should be available by 2030–32, and teraQuOp machines (10¹² QuOps) by 2035–37.
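A quick way to see why the QuOp count and the logical error rate go hand in hand: if each logical operation fails independently with probability \varepsilon_L, the probability that an N-operation computation runs to completion without a logical error is roughly

P_{\mathrm{success}} \approx (1 - \varepsilon_L)^N \approx e^{-N\varepsilon_L}

so a megaQuOp machine (N = 10⁶) needs \varepsilon_L comfortably below 10⁻⁶, while a teraQuOp machine needs logical error rates of order 10⁻¹³, in line with the one-in-a-million to one-in-a-trillion targets quoted earlier.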

Platform independent

Error mitigation and error correction are just two of the challenges for developers of quantum software. Fundamentally, to develop a truly quantum algorithm involves taking full advantage of the key quantum-mechanical properties such as superposition and entanglement. Often, the best way to do that depends on the hardware used to run the algorithm. But ultimately the goal will be to make software that is not platform-dependent and so doesn’t require the user to think about the physics involved.

“At the moment, a lot of the platforms require you to come right down into the quantum physics, which is a necessity to maximize performance,” says Richard Murray of photonic quantum-computing company Orca. Try to generalize an algorithm by abstracting away from the physics and you’ll usually lower the efficiency with which it runs. “But no user wants to talk about quantum physics when they’re trying to do machine learning or something,” Murray adds. He believes that ultimately it will be possible for quantum software developers to hide those details from users – but Brierley thinks this will require fault-tolerant machines.

“In due time everything below the logical circuit will be a black box to the app developers”, adds Maragkou over at Riverlane. “They will not need to know what kind of error correction is used, what type of qubits are used, and so on.” She stresses that creating truly efficient and useful machines depends on developing the requisite skills. “We need to scale up the workforce to develop better qubits, better error-correction codes and decoders, write the software that can elevate those machines and solve meaningful problems in a way that they can be adopted.” Such skills won’t come only from quantum physicists, she adds: “I would dare say it’s mostly not!”

Yet even now, working on quantum software doesn’t demand a deep expertise in quantum theory. “You can be someone working in quantum computing and solving problems without having a traditional physics training and knowing about the energy levels of the hydrogen atom and so on,” says Ashley Montanaro, who co-founded the quantum software company Phasecraft.

On the other hand, insights can flow in the other direction too: working on quantum algorithms can lead to new physics. “Quantum computing and quantum information are really pushing the boundaries of what we think of as quantum mechanics today,” says Montanaro, adding that QEC “has produced amazing physics breakthroughs.”

Early adopters?

Once we have true error correction, Cuthbert at the UK’s NQCC expects to see “a flow of high-value commercial uses” for quantum computers. What might those be?

In the arena of quantum chemistry and materials science, genuine quantum advantage – calculating something that is impossible using classical methods alone – is more or less here already, says Chow. Crucially, however, quantum methods needn’t be used for the entire simulation but can be added to classical ones to give them a boost for particular parts of the problem.

IBM and RIKEN quantum systems
Joint effort In June 2025 IBM in the US and Japan’s national research laboratory RIKEN unveiled the IBM Quantum System Two, the first to be used outside the US. It pairs IBM’s 156-qubit Heron quantum computing system (left) with RIKEN’s supercomputer Fugaku (right), one of the most powerful classical systems on Earth. The computers are linked through a high-speed network at the fundamental instruction level to form a proving ground for quantum-centric supercomputing. (Courtesy: IBM and RIKEN)

For example, last year researchers at IBM teamed up with scientists at several RIKEN institutes in Japan to calculate the minimum-energy state of the iron–sulphur cluster (4Fe-4S) at the heart of the bacterial nitrogenase enzyme that fixes nitrogen. This cluster is too big and complex to be simulated accurately using the classical approximations of quantum chemistry. The researchers used a combination of quantum computing (with IBM’s 72-qubit Heron chip) and RIKEN’s Fugaku high-performance computer (HPC). This idea of “improving classical methods by injecting quantum as a subroutine” is likely to be a more general strategy, says Gambetta. “The future of computing is going to be heterogeneous accelerators [of discovery] that include quantum.”
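The “quantum as a subroutine” pattern that Gambetta describes can be sketched in a few lines of Python. Both helper functions below are hypothetical placeholders, standing in for whatever quantum primitive (say, an energy estimate from a short circuit run on quantum hardware) and classical optimizer a real workflow would plug in; they are not calls to any actual vendor API.

# Schematic hybrid quantum-classical loop. The two helpers are hypothetical
# placeholders, not functions from any real quantum-computing library.

def quantum_energy_estimate(parameters):
    """Stand-in for the quantum subroutine, e.g. estimating a molecule's energy on quantum hardware."""
    raise NotImplementedError("replace with a call to a real quantum backend")

def classical_update(parameters, energy):
    """Stand-in for a classical optimization step run on conventional or HPC resources."""
    raise NotImplementedError("replace with a real classical optimizer")

def hybrid_minimize(initial_parameters, n_iterations=100):
    params = initial_parameters
    for _ in range(n_iterations):
        energy = quantum_energy_estimate(params)   # the hard, quantum part of the problem
        params = classical_update(params, energy)  # everything else stays classical
    return params

The point is structural rather than algorithmic: the quantum processor is called only for the step where it genuinely helps, and the surrounding workflow stays classical.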

Likewise, Montanaro says that Phasecraft is developing “quantum-enhanced algorithms”, where a quantum computer is used, not to solve the whole problem, but just to help a classical computer in some way. “There are only certain problems where we know quantum computing is going to be useful,” he says. “I think we are going to see quantum computers working in tandem with classical computers in a hybrid approach. I don’t think we’ll ever see workloads that are entirely run using a quantum computer.” Among the first important problems that quantum machines will solve, according to Montanaro, are the simulation of new materials – to develop, for example, clean-energy technologies (see figure 4).

“For a physicist like me,” says Preskill, “what is really exciting about quantum computing is that we have good reason to believe that a quantum computer would be able to efficiently simulate any process that occurs in nature.”

4 Structural insights

Modelling materials using quantum computing
(Courtesy: Phasecraft)

A promising application of quantum computers is simulating novel materials. Researchers from the quantum algorithms firm Phasecraft, for example, have already shown how a quantum computer could help simulate complex materials such as the polycrystalline compound LK-99, which was purported by some researchers in 2023 to be a room-temperature superconductor.

Using a classical/quantum hybrid workflow, together with the firm’s proprietary material-simulation approach to encode and compile materials on quantum hardware, Phasecraft researchers were able to establish a classical model of the LK-99 structure that allowed them to extract an approximate representation of the electrons within the material. The illustration above shows the green and blue electronic structure around red and grey atoms in LK-99.

Montanaro believes another likely near-term goal for useful quantum computing is solving optimization problems – both here and in quantum simulation, “we think genuine value can be delivered already in this NISQ era with hundreds of qubits.” (NISQ, a term coined by Preskill, refers to noisy intermediate-scale quantum computing, with relatively small numbers of rather noisy, error-prone qubits.)

One further potential benefit of quantum computing is that it tends to require less energy than classical high-performance computing, whose power consumption is notoriously high. If the energy cost could be cut by even a few percent, it would be worth using quantum resources for that reason alone. “Quantum has real potential for an energy advantage,” says Chow. One study in 2020 showed that a particular quantum-mechanical calculation carried out on an HPC system used many orders of magnitude more energy than when it was simulated on a quantum circuit. Such comparisons are not easy, however, in the absence of an agreed and well-defined metric for energy consumption.

Building the market

Right now, the quantum computing market is in a curious superposition of states itself – it has ample proof of principle, but today’s devices are still some way from being able to perform a computation relevant to a practical problem that could not be done with classical computers. Yet to get to that point, the field needs plenty of investment.

The fact that quantum computers, especially if used with HPC, are already unique scientific tools should establish their value in the immediate term, says Gambetta. “I think this is going to accelerate, and will keep the funding going.” It is why IBM is focusing on utility-scale systems of around 100 qubits or so and more than a thousand gate operations, he says, rather than simply trying to build ever bigger devices.

Montanaro sees a role for governments to boost the growth of the industry “where it’s not the right fit for the private sector”. One role of government is simply as a customer. For example, Phasecraft is working with the UK national grid to develop a quantum algorithm for optimizing the energy network. “Longer-term support for academic research is absolutely critical,” Montanaro adds. “It would be a mistake to think that everything is done in terms of the underpinning science, and governments should continue to support blue-skies research.”

IBM roadmap of quantum development
The road ahead IBM’s current roadmap charts how the company plans on scaling up its devices to achieve a fault-tolerant device by 2029. Alongside hardware development, the firm will also focus on developing new algorithms and software for these devices. (Courtesy: IBM)

It’s not clear, though, whether there will be a big demand for quantum machines that every user will own and run. Before 2010, “there was an expectation that banks and government departments would all want their own machine – the market would look a bit like HPC,” Cuthbert says. But that demand depends in part on what commercial machines end up being like. “If it’s going to need a premises the size of a football field, with a power station next to it, that becomes the kind of infrastructure that you only want to build nationally.” Even for smaller machines, users are likely to try them first on the cloud before committing to installing one in-house.

According to Cuthbert, the real challenge in supply-chain development is that many of today’s technologies were developed for the science community – where, say, achieving millikelvin cooling or using high-power lasers is routine. “How do you go from a specialist scientific clientele to something that starts to look like a washing-machine factory, where you can make them to a certain level of performance,” while also being much cheaper and easier to use?

But Cuthbert is optimistic about bridging this gap to get to commercially useful machines, encouraged in part by looking back at the classical computing industry of the 1970s. “The architects of those systems could not imagine what we would use our computation resources for today. So I don’t think we should be too discouraged that you can grow an industry when we don’t know what it’ll do in five years’ time.”

Montanaro too sees analogies with those early days of classical computing. “If you think what the computer industry looked like in the 1940s, it’s very different from even 20 years later. But there are some parallels. There are companies that are filling each of the different niches we saw previously, there are some that are specializing in quantum hardware development, there are some that are just doing software.” Cuthbert thinks that the quantum industry is likely to follow a similar pathway, “but more quickly and leading to greater market consolidation more rapidly.”

However, while the classical computing industry was revolutionized by the advent of personal computing in the 1970s and 80s, it seems very unlikely that we will have any need for quantum laptops. Rather, we might increasingly see apps and services appear that use cloud-based quantum resources for particular operations, merging so seamlessly with classical computing that we don’t even notice.

That, perhaps, would be the ultimate sign of success: that quantum computing becomes invisible, no big deal but just a part of how our answers are delivered.

  • In the first instalment of this two-part article, Philip Ball explores the latest developments in the quantum-computing industry

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ.

Find out more on our quantum channel.

The post Quantum computing on the verge: correcting errors, developing algorithms and building up the user base appeared first on Physics World.
