
Smartphone sensors and antihydrogen could soon put relativity to the test

10 May 2025 at 14:36

Researchers on the AEgIS collaboration at CERN have designed an experiment that could soon boost our understanding of how antimatter falls under gravity. Created by a team led by Francesco Guatieri at the Technical University of Munich, the scheme uses modified smartphone camera sensors to improve the spatial resolution of measurements of antimatter annihilations. This approach could be used in rigorous tests of the weak equivalence principle (WEP).

The WEP is a key concept of Albert Einstein’s general theory of relativity, which underpins our understanding of gravity. It states that within a gravitational field, all objects should accelerate at the same rate, regardless of their mass or whether they are made of matter or antimatter. If matter and antimatter were found to accelerate at different rates in freefall, it would therefore point to serious problems with the WEP.

In 2023 the ALPHA-g experiment at CERN was the first to observe how antimatter responds to gravity. The collaboration found that it falls down, with the tantalizing possibility that antimatter’s gravitational response is weaker than matter’s. Today, several experiments are seeking to improve on this observation.

Falling beam

AEgIS’ approach is to create a horizontal beam of cold antihydrogen atoms and observe how the atoms fall under gravity. The drop will be measured by a moiré deflectometer, in which the beam passes through two successive and aligned grids of horizontal slits before striking a position-sensitive detector. As the beam falls under gravity between the grids, the effect is similar to a slight horizontal misalignment of the grids. This creates a moiré pattern – or superlattice – that results in the particles making a distinctive pattern on the detector. By detecting a difference between the measured moiré pattern and that predicted by the WEP, the AEgIS collaboration hopes to reveal a discrepancy with general relativity.
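
To get a feel for how small the measured effect is, here is a back-of-envelope estimate of the gravitational drop of an atom crossing the deflectometer. The flight length and beam speed below are illustrative assumptions, not AEgIS parameters.

```latex
% Gravitational drop of an atom traversing the deflectometer
% (illustrative numbers only, not the actual AEgIS parameters)
\[
  \delta y = \tfrac{1}{2} g t^{2}, \qquad t = \frac{L}{v}
\]
\[
  L = 1~\mathrm{m},\quad v = 500~\mathrm{m\,s^{-1}}
  \;\Rightarrow\; t = 2~\mathrm{ms},\quad
  \delta y = \tfrac{1}{2}\,(9.8~\mathrm{m\,s^{-2}})\,(2\times 10^{-3}~\mathrm{s})^{2}
  \approx 20~\mathrm{\mu m}
\]
```

A drop of this order is only tens of microns, which is why a detector with micron-scale position resolution is needed to resolve it.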

However, as Guatieri explains, a number of innovations are required for this to work. “For AEgIS to work, we need a detector with incredibly high spatial resolution. Previously, photographic plates were the only option, but they lacked real-time capabilities.”

AEgIS physicists are addressing this by developing a new vertexing detector. Instead of focussing on the antiparticles directly, their approach detects the secondary particles produced when the antimatter annihilates on contact with the detector. Tracing the trajectories of these particles back to their vertex gives the precise location of the annihilation.
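
A minimal sketch of the idea behind vertexing, assuming the secondary-particle tracks have already been reconstructed as straight lines: the annihilation point is estimated as the point that minimizes the summed squared distance to all tracks. This is an illustrative least-squares toy, not the AEgIS reconstruction code.

```python
import numpy as np

def fit_vertex(points, directions):
    """Least-squares vertex: the point minimizing the summed squared
    perpendicular distance to a set of straight-line tracks.

    points     -- (N, 3) array, one point on each track
    directions -- (N, 3) array, track direction vectors
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(points, directions):
        d = d / np.linalg.norm(d)          # unit direction of the track
        P = np.eye(3) - np.outer(d, d)     # projector perpendicular to the track
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Toy example: three tracks emanating from a common annihilation point
true_vertex = np.array([0.12, -0.05, 0.00])
dirs = np.array([[1.0, 0.2, 0.1], [-0.3, 1.0, 0.4], [0.5, -0.6, 1.0]])
pts = true_vertex + 3.0 * dirs             # a point further along each track
print(fit_vertex(pts, dirs))               # recovers ~[0.12, -0.05, 0.00]
```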

Vertexing detector

Borrowing from industry, the team has created its vertexing detector using an array of modified mobile-phone camera sensors. Guatieri had already used this approach to measure the real-time positions of low-energy positrons (anti-electrons) with unprecedented precision.

“Mobile camera sensors have pixels smaller than 1 micron,” Guatieri describes. “We had to strip away the first layers of the sensors, which are made to deal with the advanced integrated electronics of mobile phones. This required high-level electronic design and micro-engineering.”

With these modifications in place, the team measured the positions of antiproton annihilations to within just 0.62 micron – making their detector some 35 times more precise than previous designs.

Many benefits

“Our solution, demonstrated for antiprotons and directly applicable to antihydrogen, combines photographic-plate-level resolution, real-time diagnostics, self-calibration and a good particle collection surface, all in one device,” Guatieri says.

With some further improvements, the AEgIS team is confident that their vertexing detector will boost the resolution of measurements of the freefall of horizontal antihydrogen beams – allowing rigorous tests of the WEP.

AEgIS team member Ruggero Caravita of Italy’s University of Trento adds, “This game-changing technology could also find broader applications in experiments where high position resolution is crucial, or to develop high-resolution trackers”. He says, “Its extraordinary resolution enables us to distinguish between different annihilation fragments, paving the way for new research on low-energy antiparticle annihilation in materials”.

The research is described in Science Advances.


‘Chatty’ artificial intelligence could improve student enthusiasm for physics and maths, finds study

9 May 2025 at 13:38

Chatbots could boost students’ interest in maths and physics and make learning more enjoyable. So say researchers in Germany, who compared the emotional responses of students who used artificial intelligence (AI) generated texts to learn physics with those of students who only read traditional textbooks. The team, however, found no difference in test performance between the two groups.

The study was led by Julia Lademann, a physics-education researcher from the University of Cologne, who wanted to see if AI could boost students’ interest in physics. The team did this by creating a customized chatbot, using OpenAI’s ChatGPT model, with a tone and language considered accessible to second-year high-school students in Germany.

After testing the chatbot for factual accuracy and for its use of motivating language, the researchers prompted it to generate explanatory text on proportional relationships in physics and mathematics. They then split 214 students, who had an average age of 11.7, into two groups. One was given textbook material on the topic along with the chatbot text, while the control group only got the textbook.

The researchers first surveyed the students’ interest in mathematics and physics and then gave them 15 minutes to review the learning material. Their interest was assessed again afterwards along with the students’ emotional state and “cognitive load” – the mental effort required to do the work – through a series of questionnaires.

Higher confidence

The chatbot was found to significantly enhance students’ positive emotions – including pleasure and satisfaction, interest in the learning material and self-belief in their understanding of the subject — compared with those who only used textbook text. “The text of the chatbot is more human-like, more conversational than texts you will find in a textbook,” explains Lademann. “It is more chatty.”

Chatbot text was also found to reduce cognitive load. “The group that used the chatbot explanation experience[d] higher positive feelings about the subject [and] they also had a higher confidence in their learning comprehension,” adds Lademann.

Tests taken within 30 minutes of the “learning phase” of the experiment, however, found no difference in performance between students who received the AI-generated explanatory text and the control group, despite the former receiving more information. Lademann says this could be due to the short study time of 15 minutes.
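
For readers curious about how such a “no difference” result is typically checked, a minimal sketch of a two-group comparison is shown below. The scores are synthetic stand-ins, the equal group split is an assumption, and the t-test is a generic choice rather than the analysis actually used in the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic post-test scores for illustration only -- not study data
chatbot_group = rng.normal(loc=9.1, scale=2.5, size=107)
control_group = rng.normal(loc=9.0, scale=2.5, size=107)

# Two-sample t-test: is the mean difference larger than chance would allow?
result = stats.ttest_ind(chatbot_group, control_group)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
# A large p-value (> 0.05) is consistent with "no detectable difference"
```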

The researchers say that while their findings suggest that AI could provide a superior learning experience for students, further research is needed to assess its impact on learning performance and long-term outcomes. “It is also important that this improved interest manifests in improved learning performance,” Lademann adds.

Lademann would now like to see “longer term studies with a lot of participants and with children actually using the chatbot”. Such research would explore the potential key strength of chatbots: their ability to respond in real time to students’ queries and adapt the learning level to each individual student.


European centre celebrates 50 years at the forefront of weather forecasting

8 May 2025 at 15:01

What is the main role of the European Centre for Medium-Range Weather Forecasts (ECMWF)?

Making weather forecasts more accurate is at the heart of what we do at the ECMWF, working in close collaboration with our member states and their national meteorological services (see box below). That means enhanced forecasting for the weeks and months ahead as well as seasonal and annual predictions. We also have a remit to monitor the atmosphere and the environment – globally and regionally – within the context of a changing climate.

How does the ECMWF produce its weather forecasts?

Our task is to get the best representation, in a 3D sense, of the current state of the atmosphere in terms of key variables such as wind, temperature, humidity and cloud cover. We do this via a process of reanalysis and data assimilation: combining the previous short-range weather forecast, and its component data, with the latest atmospheric observations – from satellites, ground stations, radars, weather balloons and aircraft. Unsurprisingly, using all this observational data is a huge challenge, with the exploitation of satellite measurements a significant driver of improved forecasting over the past decade.
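
The combination step can be illustrated with a toy one-variable analysis update, in the spirit of the optimal-interpolation/Kalman formula that underlies data assimilation. The numbers and the scalar setting are illustrative; ECMWF’s operational system (4D-Var) is far more elaborate.

```python
# Toy scalar data assimilation: blend a background forecast with an observation,
# weighting each by the inverse of its error variance (Kalman/optimal interpolation).
def analysis(x_background, y_observation, var_background, var_observation):
    gain = var_background / (var_background + var_observation)   # Kalman gain
    x_analysis = x_background + gain * (y_observation - x_background)
    var_analysis = (1.0 - gain) * var_background
    return x_analysis, var_analysis

# Example: the previous forecast says 12.0 C (error variance 1.0),
# while a station reports 10.5 C (error variance 0.5)
xa, va = analysis(12.0, 10.5, 1.0, 0.5)
print(xa, va)   # 11.0 C, variance ~0.33 -- pulled towards the more trustworthy observation
```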

In what ways do satellite measurements help?

Consider the EarthCARE satellite that was launched in May 2024 by the European Space Agency (ESA) and is helping ECMWF to improve its modelling of clouds, aerosols and precipitation. EarthCARE has a unique combination of scientific instruments – a cloud-profiling radar, an atmospheric lidar, a multispectral imager and a broadband radiometer – to infer the properties of clouds and how they interact with solar radiation as well as thermal-infrared radiation emitted by different layers of the atmosphere.

How are you combining such data with modelling?

The ECMWF team is learning how to interpret and exploit the EarthCARE data to directly initialize our models. Put simply, mathematical models that better represent clouds will, in turn, yield more accurate forecasts. Indirectly, EarthCARE is also revealing a clearer picture of the fundamental physics governing cloud formation, distribution and behaviour. This is just one example of numerous developments taking advantage of new satellite data. We are looking forward, in particular, to fully exploiting next-generation satellite programmes from the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) – including the EPS-SG polar-orbiting system and the Meteosat Third Generation geostationary satellite for continuous monitoring over Europe, Africa and the Indian Ocean.

Big data, big opportunities: the ECMWF’s high-performance computing facility in Bologna, Italy, is the engine-room of the organization’s weather and climate modelling efforts. (Courtesy: ECMWF)

What other factors help improve forecast accuracy?

We talk of “a day, a decade” improvement in weather forecasting, such that a five-day forecast now is as good as a three-day forecast 20 years ago. A richer and broader mix of observational data underpins that improvement, with diverse data streams feeding into bigger supercomputers that can run higher-resolution models and better algorithms. Equally important is ECMWF’s team of multidisciplinary scientists, whose understanding of the atmosphere and climate helps to optimize our models and data assimilation methods. A case study in this regard is Destination Earth, an ambitious European Union initiative to create a series of “digital twins” – interactive computer simulations – of our planet by 2030. Working with ESA and EUMETSAT, the ECMWF is building the software and data environment for Destination Earth as well as developing the first two digital twins.

What are these two twins?

Our Digital Twin on Weather-Induced and Geophysical Extremes will assess and predict environmental extremes to support risk assessment and management. Meanwhile, in collaboration with others, the Digital Twin on Climate Change Adaptation complements and extends existing capabilities for the analysis and testing of “what if” scenarios – supporting sustainable development and climate adaptation and mitigation policy-making over multidecadal timescales.

Progress in machine learning and AI has been dramatic over the past couple of years

What kind of resolution will these models have?

Both digital twins integrate sea, atmosphere, land, hydrology and sea ice and their deep connections with a resolution currently impossible to reach. Right now, for example, the ECMWF’s operational forecasts cover the whole globe in a 9 km grid – effectively a localized forecast every 9 km. With Destination Earth, we’re experimenting with 4 km, 2 km, and even 1 km grids.
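
A rough sense of why halving the grid spacing is so expensive: the number of grid columns grows with the inverse square of the spacing (and the compute cost faster still, since the time step must shrink too). The quick estimate below uses Earth’s surface area and is only indicative.

```python
# Rough count of global grid columns at various horizontal resolutions
EARTH_SURFACE_KM2 = 5.1e8   # approximate surface area of Earth in km^2

for dx_km in (9, 4, 2, 1):
    columns = EARTH_SURFACE_KM2 / dx_km**2
    print(f"{dx_km} km grid: ~{columns:.1e} columns")

# 9 km -> ~6.3e+06 columns; 1 km -> ~5.1e+08 columns, i.e. roughly 80x more,
# before counting vertical levels or the shorter time step a finer grid requires
```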

In February, the ECMWF unveiled a 10-year strategy to accelerate the use of machine learning and AI. How will this be implemented?

The new strategy prioritizes growing exploitation of data-driven methods anchored on established physics-based modelling – rapidly scaling up our previous deployment of machine learning and AI. There are also a variety of hybrid approaches combining data-driven and physics-based modelling.

What will this help you achieve?

On the one hand, data assimilation and observations will help us to directly improve as well as initialize our physics-based forecasting models – for example, by optimizing uncertain parameters or learning correction terms. We are also investigating the potential of applying machine-learning techniques directly on observations – in effect, to make another step beyond the current state-of-the-art and produce forecasts without the need for reanalysis or data assimilation.

How is machine learning deployed at the moment?

Progress in machine learning and AI has been dramatic over the past couple of years – so much so that we launched our Artificial Intelligence Forecasting System (AIFS) back in February. Trained on many years of reanalysis and using traditional data assimilation, AIFS is already an important addition to our suite of forecasts, though still working off the coat-tails of our physics-based predictive models. Another notable innovation is our Probability of Fire machine-learning model, which incorporates multiple data sources beyond weather prediction to identify regional and localized hot-spots at risk of ignition. Those additional parameters – among them human presence, lightning activity as well as vegetation abundance and its dryness – help to pinpoint areas of targeted fire risk, improving the model’s predictive skill by up to 30%.
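
As an illustration of the kind of model that blends weather with non-weather predictors, here is a hedged sketch using a generic gradient-boosted classifier on synthetic features. It is not ECMWF’s Probability of Fire system, only a schematic of the approach described above; the feature names and synthetic labels are assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n = 5000

# Synthetic predictors standing in for the data sources mentioned above
X = np.column_stack([
    rng.uniform(0, 1, n),      # vegetation dryness index
    rng.uniform(0, 1, n),      # vegetation abundance
    rng.poisson(0.2, n),       # lightning strikes in the grid cell
    rng.uniform(0, 1, n),      # human-presence proxy
    rng.uniform(280, 320, n),  # near-surface temperature (K)
])

# Synthetic "ignition" labels: drier, hotter, lightning-struck cells burn more often
logit = 4 * X[:, 0] + 1.5 * X[:, 2] + 0.05 * (X[:, 4] - 300) - 3
y = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

model = GradientBoostingClassifier().fit(X, y)
print(model.predict_proba(X[:3])[:, 1])   # per-cell probability of fire
```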

What do you like most about working at the ECMWF?

Every day, the ECMWF addresses cutting-edge scientific problems – as challenging as anything you’ll encounter in an academic setting – by applying its expertise in atmospheric physics, mathematical modelling, environmental science, big data and other disciplines. What’s especially motivating, however, is that the ECMWF is a mission-driven endeavour with a straight line from our research outcomes to wider societal and economic benefits.

ECMWF at 50: new frontiers in weather and climate prediction

The European Centre for Medium-Range Weather Forecasts (ECMWF) is an independent intergovernmental organization supported by 35 states – 23 member states and 12 co-operating states. Established in 1975, the centre employs around 500 staff from more than 30 countries at its headquarters in Reading, UK, and sites in Bologna, Italy, and Bonn, Germany. As a research institute and 24/7 operational service, the ECMWF produces global numerical weather predictions four times per day and other data for its member/cooperating states and the broader meteorological community.

The ECMWF processes data from around 90 satellite instruments as part of its daily activities (yielding 60 million quality-controlled observations each day for use in its Integrated Forecasting System). The centre is a key player in Copernicus – the Earth observation component of the EU’s space programme – by contributing information on climate change for the Copernicus Climate Change Service; atmospheric composition to the Copernicus Atmosphere Monitoring Service; as well as flooding and fire danger for the Copernicus Emergency Management Service. This year, the ECMWF is celebrating its 50th anniversary and has a series of celebratory events scheduled in Bologna (15–19 September) and Reading (1–5 December).


Beyond the Big Bang: reopening the doors on how it all began

7 May 2025 at 12:00

“The universe began with a Big Bang.”

I’ve said this neat line more times than I can count at the start of a public lecture. It summarizes one of the most incomprehensible ideas in science: that the universe began in an extreme, hot, dense and compact state, before expanding and evolving into everything we now see around us. The certainty of the simple statement is reassuring, and it is an easy way of quickly setting the background to any story in astronomy.

But what if it isn’t just an oversimplified summary? What if it is misleading, perhaps even wholly inaccurate?

The Battle of the Big Bang: the New Tales of Our Cosmic Origin aims to dismantle the complacency many of us have fallen into when it comes to our knowledge of the earliest time. And it succeeds – if you push through the opening pages.

When a theory becomes so widely accepted that it is immune to question, we’ve moved from science supported by evidence to belief upheld by faith

Early on, authors Niayesh Afshordi and Phil Halper say “in some sense the theory of the Big Bang cannot be trusted”, which caused me to raise an eyebrow and wonder what I had let myself in for. After all, for many astronomers, myself included, the Big Bang is practically gospel. And therein lies the problem. When a theory becomes so widely accepted that it is immune to question, we’ve moved from science supported by evidence to belief upheld by faith.

It is easy to read the first few pages of The Battle of the Big Bang with deep scepticism but don’t worry, your eyebrows will eventually lower. That the universe has evolved from a “hot Big Bang” is not in doubt – observations such as the measurements of the cosmic microwave background leave no room for debate. But the idea that the universe “began” as a singularity – a region of space where the curvature of space–time becomes infinite – is another matter. The authors argue that no current theory can describe such a state, and there is no evidence to support it.

An astronomical crowbar

Given the confidence with which we teach it, many might have assumed the Big Bang theory beyond any serious questioning, thereby shutting the door on their own curiosity. Well, Afshordi and Halper have written the popular science equivalent of a crowbar, gently prising that door back open without judgement, keen only to share the adventure still to be had.

A cosmologist at the University of Waterloo, Canada, Afshordi is obsessed with finding observational ways of solving problems in fundamental physics, and is known for his creative alternative theories, such as a non-constant speed of light. Meanwhile Halper, a science popularizer, has carved out a niche by interviewing leading voices in early universe cosmology on YouTube, often facilitating fierce debates between competing thinkers. The result is a book that is both authoritative and accessible – and refreshingly free from ego.

Over 12 chapters, the book introduces more than two dozen alternatives to the Big Bang singularity, with names as tongue-twisting as the theories are mind-bending. For most readers, and even this astrophysicist, the distinctions between the theories quickly blur. But that’s part of the point. The focus isn’t on convincing you which model is correct, it’s about making clear that many alternatives exist that are all just as credible (give or take). Reading this book feels like walking through an art gallery with a knowledgeable and thoughtful friend explaining each work’s nuance. They offer their own opinions in hushed tones, but never suggest that their favourite should be yours too, or even that you should have a favourite.

If you do find yourself feeling dizzy reading about the details of holographic cosmology or eternal inflation, then it won’t be long before an insight into the nature of scientific debate or a crisp analogy brings you up for air. This is where the co-authorship begins to shine: Halper’s presence is felt in the moments when complicated theories are reduced to an idea anyone can relate to; while Afshordi brings deep expertise and an insider’s view of the cosmological community. These vivid and sometimes gossipy glimpses into the lives and rivalries of his colleagues paint a fascinating picture. It is a huge cast of characters – including Roger Penrose, Alan Guth and Hiranya Peiris – most of whom appear only for a page. But even though you won’t remember all the names, you are left with the feeling that Big Bang cosmology is a passionate, political and philosophical side of science very much still in motion.

Keep the door open

The real strength of this book is its humility and lack of defensiveness. As much as reading about the theory behind a multiverse is interesting, as a scientist, I’m always drawn to data. A theory that cannot be tested can feel unscientific, and the authors respect that instinct. Surprisingly, some of the most fantastical ideas, like pre-Big Bang cosmologies, are testable. But the tools required are almost science fiction themselves – such as a fleet of gravitational-wave detectors deployed in space. It’s no small task, and one of the most delightful moments in the book is a heartfelt thank you to taxpayers, for funding the kind of fundamental research that might one day get us to an answer.

In the concluding chapters, the authors pre-emptively respond to scepticism, giving real thought to discussing when thinking outside the box becomes going beyond science altogether. There are no final answers in this book, and it does not pretend to offer any. In fact, it actively asks the reader to recognize that certainty does not belong at the frontiers of science. Afshordi doesn’t mind if his own theories are proved wrong, the only terror for him is if people refuse to ask questions or pursue answers simply because the problem is seen as intractable.

Curiosity, unashamed and persistent, is far more scientific than shutting the door for fear of the uncertain

A book that leaves you feeling like you understand less about the universe than when you started it might sound like it has failed. But when that “understanding” was an illusion based on dogma, and a book manages to pry open a long-sealed door in your mind, that’s a success.

The Battle of the Big Bang offers both intellectual humility and a reviving invitation to remain eternally open-minded. It reminded me of how far I’d drifted from being one of the fearless schoolchildren who, after I declare with certainty that the universe began with a Big Bang, ask, “But what came before it?”. That curiosity, unashamed and persistent, is far more scientific than shutting the door for fear of the uncertain.

  • May 2025 University of Chicago Press 360pp $32.50/£26.00 hb


Exoplanet could be in a perpendicular orbit around two brown dwarfs

30 April 2025 at 17:52

The first strong evidence for an exoplanet with an orbit perpendicular to that of the binary system it orbits has been observed by astronomers in the UK and Portugal. Based on observations from the ESO’s Very Large Telescope (VLT), researchers led by Tom Baycroft, a PhD student at the University of Birmingham, suggest that such an exoplanet is required to explain the changing orientation in the orbit of a pair of brown dwarfs – objects that are intermediate in mass between the heaviest gas-giant planets and the lightest stars.

The Milky Way is known to host a diverse array of planetary systems, providing astronomers with extensive insights into how planets form and systems evolve. One thing that is evident is that most exoplanets (planets that orbit stars other than the Sun) and systems that have been observed so far bear little resemblance to Earth and the solar system.

Among the most interesting planets are the circumbinaries, which orbit two stars in a binary system. So far, 16 of these planets have been discovered. In each case, they have been found to orbit in the same plane as the orbits of their binary host stars. In other words, the planetary system is flat. This is much like the solar system, where each planet orbits the Sun within the same plane.

“But there has been evidence that planets might exist in a different configuration around a binary star,” Baycroft explains. “Inclined at 90° to the binary, these polar orbiting planets have been theorized to exist, and discs of dust and gas have been found in this configuration.”

Especially interesting

Baycroft’s team had set out to investigate a binary pair of brown dwarfs around 120 light-years away. Called 2M1510, the system comprises two brown dwarfs that are only about 45 million years old and have masses about 18 times that of Jupiter. The pair are especially interesting because they are eclipsing: periodically passing in front of each other from our line of sight. When observed by the VLT, this unique vantage allowed the astronomers to determine the masses and radii of the brown dwarfs and the nature of their orbit.

“This is a rare object, one of only two eclipsing binary brown dwarfs, which is useful for understanding how brown dwarfs form and evolve,” Baycroft explains. “In our study, we were not looking for a planet, only aiming to improve our understanding of the brown dwarfs.”

Yet as they analysed the VLT’s data, the team noticed something strange about the pair’s orbit. Doppler shifts in the light they emitted revealed that their elliptical orbit was slowly changing orientation – a phenomenon known as apsidal precession.

Not unheard of

This behaviour is not unheard of. In its orbit around the Sun, Mercury undergoes apsidal precession, which is explained by Albert Einstein’s general theory of relativity. But Baycroft says that the precession must have had an entirely different cause in the brown-dwarf pair.

“Unlike Mercury, this precession is going backwards, in the opposite direction to the orbit,” he explains. “Ruling out any other causes for this, we find that the best explanation is that there is a companion to the binary on a polar orbit, inclined at close to 90° relative to the binary.” As it exerts its gravitational pull on the binary pair, the inclination of this third, smaller body induces a gradual rotation in the orientation of the binary’s elliptical orbit.
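
The signature the team looked for can be written down compactly. For a Keplerian orbit, the radial velocity of one component is the standard textbook expression below, and apsidal precession appears as a slow drift of the argument of periastron – here in the retrograde sense. The symbols are the usual orbital elements, not values from the paper.

```latex
% Radial velocity of one binary component with a slowly precessing orbit
\[
  v_r(t) = K\left[\cos\!\big(\nu(t) + \omega(t)\big) + e\cos\omega(t)\right] + \gamma,
  \qquad \omega(t) = \omega_0 + \dot{\omega}\,t
\]
% K: semi-amplitude, e: eccentricity, nu(t): true anomaly, gamma: systemic velocity.
% A secular drift of omega is apsidal precession; a drift opposite to the orbital
% motion is the retrograde signature attributed to a polar companion.
```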

For now, the characteristics of this planet are difficult to pin down and the team believes its mass could lie anywhere between 10 and 100 Earth masses. All the same, the astronomers are confident that their results now confirm the possibility of polar exoplanets existing in circumbinary orbits – providing valuable guidance for future observations.

“This result exemplifies how the many different configurations of planetary systems continue to astound us,” Baycroft comments. “It also paves the way for more studies aiming to find out how common such polar orbits may be.”

The observations are described in Science Advances.


Mathematical genius: celebrating the life and work of Emmy Noether

30 April 2025 at 11:00
Mathematical genius Emmy Noether, around 1900. (Public domain. Photographer unknown)

In his debut book, Einstein’s Tutor: the Story of Emmy Noether and the Invention of Modern Physics, Lee Phillips champions the life and work of German mathematician Emmy Noether (1882–1935). Despite living a life filled with obstacles, injustices and discrimination as a Jewish mathematician, Noether revolutionized the field and discovered “the single most profound result in all of physics”. Phillips’ book weaves the story of her extraordinary life around the central subject of “Noether’s theorem”, which itself sits at the heart of a fascinating era in the development of modern theoretical physics.

Noether grew up at a time when women had few rights. Unable to officially register as a student, she was instead able to audit courses at the University of Erlangen in Bavaria, with the support of her father, who was a mathematics professor there. At the time, young Noether was one of only two female auditors among the university’s 986 students. Just two years previously, the university faculty had declared that mixed-sex education would “overthrow academic order”. Despite going against this formidable status quo, she was able to graduate in 1903.

Noether continued her pursuit of advanced mathematics, travelling to the “[world’s] centre of mathematics” – the University of Göttingen. Here, she was able to sit in the lectures of some of the brightest mathematical minds of the time – Karl Schwarzschild, Hermann Minkowski, Otto Blumenthal, Felix Klein and David Hilbert. While there, the law finally changed: women were, at last, allowed to enrol as students at university. In 1904 Noether returned to the University of Erlangen to complete her postgraduate dissertation under the supervision of Paul Gordan. At the time, she was the only woman to matriculate alongside 46 men.

Despite being more than qualified, Noether was unable to secure a university position after completing her PhD in 1907. Instead, she worked unpaid for almost a decade – teaching her father’s courses and supervising his PhD students. As of 1915, Noether was the only woman in the whole of Europe with a PhD in mathematics. She had worked hard to be recognized as an expert on symmetry and invariant theory, and eventually accepted an invitation from Klein and Hilbert to work alongside them in Göttingen. Here, the three of them would meet Albert Einstein to discuss his latest project – a general theory of relativity.

Infiltrating the boys’ club

In Einstein’s Tutor, Phillips paints an especially vivid picture of Noether’s life at Göttingen, among colleagues including Klein, Hilbert and Einstein, who loom large and bring a richness to the story. Indeed, much of the first three chapters are dedicated to these men, setting the scene for Noether’s arrival in Göttingen. Phillips makes it easy to imagine these exceptionally talented and somewhat eccentric individuals working at the forefront of mathematics and theoretical physics together. And it was here, when supporting Einstein with the development of general relativity (GR), that Noether discovered a profound result: for every symmetry in the universe, there is a corresponding conservation law.
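
In its simplest textbook form (stated here for classical mechanics, not in the full generality Noether proved), the theorem says that if the Lagrangian is unchanged by a continuous transformation of the coordinates, a corresponding quantity is conserved.

```latex
% Noether's theorem, simplest mechanics version: if L(q, \dot q, t) is invariant
% under q_i -> q_i + \epsilon K_i(q), then
\[
  Q \;=\; \sum_i \frac{\partial L}{\partial \dot{q}_i}\, K_i(q)
  \qquad\text{satisfies}\qquad \frac{\mathrm{d}Q}{\mathrm{d}t} = 0 .
\]
% Examples: invariance under spatial translations gives conservation of momentum;
% invariance under time translations gives conservation of energy.
```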

Throughout the book, Phillips makes the case that, without Noether, Einstein would never have been able to get to the heart of GR. Einstein himself “expressed wonderment at what happened to his equations in her hands, how he never imagined that things could be expressed with such elegance and generality”. Phillips argues that Einstein should not be credited as the sole architect of GR. Indeed, the contributions of Grossmann, Klein, Besso, Hilbert and, crucially, Noether remain largely unacknowledged – a wrong that Phillips is trying to right with this book.

Phillips makes the case that, without Noether, Einstein would never have been able to get to the heart of general relativity

A key theme running through Einstein’s Tutor is the importance of the support and allyship that Noether received from her male contemporaries. While at Göttingen, there was a battle to allow Noether to receive her habilitation (eligibility for tenure). Many argued in her favour but considered her an exception, and believed that in general, women were not suited as university professors. Hilbert, in contrast, saw her sex as irrelevant (famously declaring “this is not a bath house”) and pointed out that science requires the best people, of which she was one. Einstein also fought for her on the basis of equal rights for women.

Eventually, in 1919 Noether was allowed to habilitate (as an exception to the rule) and was promoted to professor in 1922. However, she was still not paid for her work. In fact, her promotion came with the specific condition that she remained unpaid, making it clear that Noether “would not be granted any form of authority over any male employee”. Hilbert however, managed to secure a contract with a small salary for her from the university administration.

Her allies rose to the cause again in 1933, when Noether was one of the first Jewish academics to be dismissed under the Nazi regime. After her expulsion, German mathematician Helmut Hasse convinced 14 other colleagues to write letters advocating for her importance, asking that she be allowed to continue as a teacher to a small group of advanced students – the government denied this request.

When the time came to leave Germany, many colleagues wrote testimonials in her support for immigration, with one writing “She is one of the 10 or 12 leading mathematicians of the present generation in the entire world.” Rather than being placed at a prestigious university or research institute (Hermann Weyl and Einstein were both placed at “the men’s university”, the Institute for Advanced Study in Princeton), it was recommended she join Bryn Mawr, a women’s college in Pennsylvania, US. Her position there would “compete with no-one… the most distinguished feminine mathematician connected with the most distinguished feminine university”. Phillips makes clear his distaste for the phrasing of this recommendation. However, all accounts show that she was happy at Bryn Mawr and stayed there until her unexpected death in 1935 at the age of 53.

Noether’s legacy

With a PhD in theoretical physics, Phillips has worked for many years in both academia and industry. His background shows itself clearly in some unusual writing choices. While his writing style is relaxed and conversational, it includes the occasional academic turn of phrase (e.g. “In this chapter I will explain…”), which feels out of place in a popular-science book. He also has a habit of piling repetitive and overly sincere praise onto Noether. I personally prefer stories that adopt the “show, don’t tell” approach – her abilities speak for themselves, so it should be easy to let the reader come to their own conclusions.

Phillips has made the ambitious choice to write a popular-science book about complex mathematical concepts such as symmetries and conservation laws that are challenging to explain, especially to general readers. He does his best to describe the mathematics and physics behind some of the key concepts around Noether’s theorem. However, in places, you do need to have some familiarity with university-level physics and maths to properly follow his explanations. The book also includes a 40-page appendix filled with additional physics content, which I found unnecessary.

Einstein’s Tutor does achieve its primary goal of familiarizing the reader with Emmy Noether and the tremendous significance of her work. The final chapter on her legacy breezes quickly through developments in particle physics, astrophysics, quantum computers, economics and XKCD Comics to highlight the range and impact this single theorem has had. Phillips’ goal was to take Noether into the mainstream, and this book is a small step in the right direction. As cosmologist and author Katie Mack summarizes perfectly: “Noether’s theorem is to theoretical physics what natural selection is to biology.”

  • 2024 Hachette UK 368pp £25.00 hb


Brain region used for speech decoding also supports BCI cursor control

30 April 2025 at 10:00

Sending an email, typing a text message, streaming a movie. Many of us do these activities every day. But what if you couldn’t move your muscles and navigate the digital world? This is where brain–computer interfaces (BCIs) come in.

BCIs that are implanted in the brain can bypass pathways damaged by illness and injury. They analyse neural signals and produce an output for the user, such as interacting with a computer.

A major focus for scientists developing BCIs has been to interpret brain activity associated with movements to control a computer cursor. The user drives the BCI by imagining arm and hand movements, which often originate in the dorsal motor cortex. Speech BCIs, which restore communication by decoding attempted speech from neural activity in sensorimotor cortical areas such as the ventral precentral gyrus, have also been developed.
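
A minimal sketch of what “interpreting brain activity to control a cursor” can look like in code: a linear decoder fitted to map binned neural firing rates onto cursor velocities. This is a generic ridge-regression toy on synthetic data, not the decoder used in the study (implanted BCIs typically use more sophisticated models such as Kalman filters or neural networks), and the bin and channel counts are assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(42)

n_bins, n_channels = 2000, 96          # time bins and electrode channels (illustrative)
true_tuning = rng.normal(size=(n_channels, 2))

# Synthetic session: cursor velocity (x, y) and firing rates linearly tuned to it
velocity = rng.normal(size=(n_bins, 2))
rates = velocity @ true_tuning.T + rng.normal(scale=0.5, size=(n_bins, n_channels))

# Fit the decoder on a calibration block, then predict velocity on held-out data
decoder = Ridge(alpha=1.0).fit(rates[:1500], velocity[:1500])
predicted = decoder.predict(rates[1500:])

corr = np.corrcoef(predicted[:, 0], velocity[1500:, 0])[0, 1]
print(f"decoded-vs-true x-velocity correlation: {corr:.2f}")
```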

Researchers at the University of California, Davis recently found that the same part of the brain that supported a speech BCI could also support computer cursor control for an individual with amyotrophic lateral sclerosis (ALS). ALS is a progressive neurodegenerative disease affecting the motor neurons in the brain and spinal cord.

“Once that capability [to control a computer mouse] became reliably achievable roughly a decade ago, it stood to reason that we should go after another big challenge, restoring speech, that would help people unable to speak. And from there – and this is where this new paper comes in – we recognized that patients would benefit from both of these capabilities [speech and computer cursor control],” says Sergey Stavisky, who co-directs the UC Davis Neuroprosthetics Lab with David Brandman.

Their clinical case study suggests that computer cursor control may not be as body-part-specific as scientists previously believed. If results are replicable, this could enable the creation of multi-modal BCIs that restore communication and movement to people with paralysis. The researchers share information about their cursor BCI and the case study in the Journal of Neural Engineering.

The study participant, a 45-year-old man with ALS, had previous success working with a speech BCI. The researchers recorded neural activity from the participant’s ventral precentral gyrus while he imagined controlling a computer cursor, and built a BCI to interpret that neural activity and predict where and when he wanted to move and click the cursor. The participant then used the new cursor BCI to send texts and emails, watch Netflix, and play The New York Times Spelling Bee game on his personal computer.

“This finding, that the tiny region of the brain we record from has a lot more than just speech information, has led to the participant also being able to control his own computer on a daily basis, and get back some independence for him and his family,” says first author Tyler Singer-Clark, a graduate student in biomedical engineering at UC Davis.

The researchers found that most of the information driving cursor control came from one of the participant’s four implanted microelectrode arrays, while click information was available on all four of the BCI arrays.

“The neural recording arrays are the same ones used in many prior studies,” explains Singer-Clark. “The result that our cursor BCI worked well given this choice makes it all the more convincing that this brain area (speech motor cortex) has untapped potential for controlling BCIs in multiple useful ways.”

The researchers are working to incorporate more computer actions into their cursor BCI, to make the control faster and more accurate, and to reduce calibration time. They also note that it’s important to replicate these results in more people to understand how generalizable the results of their case study may be.

The research was conducted as part of the BrainGate2 clinical trial.


Curiouser and curiouser: delving into quantum Cheshire cats

29 April 2025 at 12:00

Most of us have heard of Schrödinger’s eponymous cat, but it is not the only feline in the quantum physics bestiary. Quantum Cheshire cats may not be as well known, yet their behaviour is even more insulting to our classical-world common sense.

These quantum felines get their name from the Cheshire cat in Lewis Carroll’s Alice’s Adventures in Wonderland, which disappears leaving its grin behind. As Alice says: “I’ve often seen a cat without a grin, but a grin without a cat! It’s the most curious thing I ever saw in my life!”

Things are curiouser in the quantum world, where the property of a particle seems to be in a different place from the particle itself. A photon’s polarization, for example, may exist in a totally different location from the photon itself: that’s a quantum Cheshire cat.

While the prospect of disembodied properties might seem disturbing, it’s a way of interpreting the elegant predictions of quantum mechanics. That at least was the thinking when quantum Cheshire cats were first put forward by Yakir Aharonov, Sandu Popescu, Daniel Rohrlich and Paul Skrzypczyk in an article published in 2013 (New J. Phys. 15 113015).

Strength of a measurement

To get to grips with the concept, remember that making a measurement on a quantum system will “collapse” it into one of its eigenstates – think of opening the box and finding Schrödinger’s cat either dead or alive. However, by playing on the trade-off between the strength of a measurement and the uncertainty of the result, one can gain a tiny bit of information while disturbing the system as little as possible. If such a measurement is done many times, or on an ensemble of particles, it is possible to average out the results, to obtain a precise value.

First proposed in the 1980s, this method of teasing out information from the quantum system by a series of gentle pokes is known as weak measurement. While the idea of weak measurement in itself does not appear a radical departure from quantum formalism, “an entire new world appeared” as Popescu puts it. Indeed, Aharonov and his collaborators have spent the last four decades investigating all kinds of scenarios in which weak measurement can lead to unexpected consequences, with the quantum Cheshire cat being one they stumbled upon.

In their 2013 paper, Aharonov and colleagues imagined a simple optical interferometer set-up, in which the “cat” is a photon that can be in either the left or the right arm, while the “grin” is the photon’s circular polarization. The cat (the photon) is first prepared in a certain superposition state, known as pre-selection. After it enters the set-up, the cat can leave via several possible exits. The disembodiment between particle and property appears in the cases in which the particle emerges in a particular exit (post-selection).

Certain measurements, analysing the properties of the particle, are performed while the particle is in the interferometer (in between the pre- and post-selection). Being weak measurements, they have to be carried out many times to get the average. For certain pre- and post-selection, one finds the cat will be in the left arm while the grin is in the right. It’s a Cheshire cat disembodied from its grin.
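
The quantity that makes statements like “the cat is in the left arm but its grin is in the right” precise is the weak value, defined for a pre-selected state and a post-selected state. Roughly speaking, in the Cheshire-cat setup the weak value of “photon in the left arm” comes out as 1 while “polarization in the left arm” comes out as 0, with the polarization instead registering in the right arm. The general definition is the standard one below.

```latex
% Weak value of an observable A for pre-selection |psi> and post-selection |phi>
\[
  A_w \;=\; \frac{\langle \phi \,|\, A \,|\, \psi \rangle}{\langle \phi \,|\, \psi \rangle}
\]
% Unlike an eigenvalue, A_w can lie outside the spectrum of A and is only
% recovered as an average over many gentle (weak) measurements.
```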

The mathematical description of this curious state of affairs was clear, but the interpretation seemed preposterous and the original article spent over a year in peer review, with its eventual publication still sparking criticism. Soon after, experiments with polarized neutrons (Nature Comms 5 4492) and photons (Phys. Rev. A 94 012102) tested the original team’s set-up. However, these experiments and subsequent tests, despite confirming the theoretical predictions, did not settle the debate – after all, the issue was with the interpretation.

A quantum of probabilities

To come to terms with this perplexing notion, think of the type of pre- and post-selected set-up as a pachinko machine, in which a ball starts at the top in a single pre-selected slot and goes down through various obstacles to end up in a specific point (post-selection): the jackpot hole. If you count how many balls hit the jackpot hole, you can calculate the probability distribution. In the classical world, measuring the position and properties of the ball at different points, say with a camera, is possible.

This observation will not affect the trajectory of the ball, or the probability of the jackpot. In a quantum version of the pachinko machine, the pre- and post-selection will work in a similar way, except you could feed in balls in superposition states. A weak measurement will not disturb the system so multiple measurements can tease out the probability of certain outcomes. The measurement result will not yield an eigenvalue, which corresponds to a physical property of the system, but weak values, and the way one should interpret these is not clear-cut.

1 Split particle property

(Illustration courtesy: Mayank Shreshtha)

Quantum Cheshire cats are a curious phenomenon, whereby the property of a quantum particle can be completely separate from the particle itself. A photon’s polarization, for example, may exist at a location where there is no photon at all. In this illustration, our quantum Cheshire cats (the photons) are at a pachinko parlour. Depending on certain pre- and post-selection criteria, the cats end up in one location – in one arm of the detector or the other – and their grins in a different location, on the chairs.  

To make sense of all this, we need an intuitive mental image, even a limited one. This is why quantum Cheshire cats are a powerful metaphor, but they are also more than that, guiding researchers into new directions. Indeed, since the initial discovery, Aharonov, Popescu and colleagues have stumbled upon more surprises.

In 2021 they generalized the quantum Cheshire cat effect to a dynamical picture in which the “disembodied” property can propagate in space (Nature Comms 12 4770). For example, there could be a flow of angular momentum without anything carrying it (Phys. Rev. A 110 L030201). In another generalization, Aharonov imagined a massive particle with a mass that could be measured in one place with no momentum, while its momentum could be measured in another place without its mass (Quantum 8 1536). A gedankenexperiment to test this effect would involve a pair of nested Mach–Zehnder interferometers with moving mirrors and beam splitters.

Provocative interpretations

If you find these ideas bewildering, you’re in good company. “They’re brain teasers,” explains Jonte Hance, a researcher in quantum foundations at Newcastle University, UK. In fact, Hance thinks that quantum Cheshire cats are a great way to get people interested in the foundations of quantum mechanics.

Physicists were too busy applying quantum mechanics to various problems to be bothered with foundational questions

Sure, the early years of quantum physics saw famous debates between Niels Bohr and Albert Einstein, culminating in the criticism raised in the Einstein–Podolsky–Rosen (EPR) paradox (Phys. Rev. 47 777) in 1935. But after that, physicists were too busy applying quantum mechanics to various problems to be bothered with foundational questions.

This lack of interest in quantum fundamentals is perfectly illustrated by two anecdotes, the first involving Aharonov himself. When he was studying physics at Technion in Israel in the 1950s, he asked Nathan Rosen (the R of the EPR) about working on the foundations of quantum mechanics. The topic was deemed so unfashionable that Rosen advised him to focus on applications. Luckily, Aharonov ignored the advice and went on to work with American quantum theorist David Bohm.

The other story concerns Alain Aspect, who in 1975 visited CERN physicist John Bell to ask for advice on his plans to do an experimental test of Bell’s inequalities to settle the EPR paradox. Bell’s very first question was not about the details of the experiment – but whether Aspect had a permanent position (Nature Phys. 3 674). Luckily, Aspect did, so he carried out the test, which went on to earn him a share of the 2022 Nobel Prize for Physics.

As quantum computing and quantum information began to emerge, there was a brief renaissance in quantum foundations, culminating in the early 2010s. But over the past decade, with many aspects of quantum physics reaching commercial fruition, research interest has shifted firmly once again towards applications.

Despite popular science’s constant reminder of how “weird” quantum mechanics is, physicists often take the pragmatic “shut up and calculate” approach. Hance says that researchers “tend to forget how weird quantum mechanics is, and to me you need that intuition of it being weird”. Indeed, paradoxes like Schrödinger’s cat and EPR have attracted and inspired generations of physicists and have been instrumental in the development of quantum technologies.

The point of the quantum Cheshire cat, and related paradoxes, is to challenge our intuition and provoke us to think outside the box. That’s important even if applications may not be immediately in sight. “Most people agree that although we know the basic laws of quantum mechanics, we don’t really understand what quantum mechanics is all about,” says Popescu.

Aharonov and colleagues’ programme is to develop a correct intuition that can guide us further. “We strongly believe that one can find an intuitive way of thinking about quantum mechanics,” adds Popescu. That may, or may not, involve felines.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.


India must boost investment in quantum technologies to become world leader, says report

28 April 2025 at 17:00

India must intensify its efforts in quantum technologies as well as boost private investment if it is to become a leader in the burgeoning field. That is according to the first report from India’s National Quantum Mission (NQM), which also warns that the country must improve its quantum security and regulation to make its digital infrastructure quantum-safe.

Approved by the Indian government in 2023, the NQM is an eight-year $750m (60bn INR) initiative that aims to make the country a leader in quantum tech. Its new report focuses on developments in four aspects of NQM’s mission: quantum computing; communication; sensing and metrology; and materials and devices.

Entitled India’s International Technology Engagement Strategy for Quantum Science, Technology and Innovation, the report finds that India’s research interests include error-correction algorithms for quantum computers. It is also involved in building quantum hardware with superconducting circuits, trapped atoms/ions and engineered quantum dots.

The NQM-supported, Bengaluru-based startup QPiAI, for example, recently developed a 25-qubit superconducting quantum computer called “Indus”, although the qubits were fabricated abroad.

Ajay Sood, principal scientific advisor to the Indian government, told Physics World that while India is strong in “software-centric, theoretical and algorithmic aspects of quantum computing, work on completely indigenous development of quantum computing hardware is…at a nascent stage.”

Sood, who is a physicist by training, adds that while there are a few groups working on different platforms, these are at the less-than-10-qubit stage. “[It is] important for [India] to have indigenous capabilities for fabricating qubits and other ancillary hardware for quantum computers,” he says.

India is also developing secure protocols and satellite-based systems and implementing quantum systems for precision measurements. QNu Labs – another Bengaluru startup – is, for example, developing a quantum-safe communication-chip module to secure satellite and drone communications, with built-in quantum randomness and a security micro-stack.

Lagging behind

The report highlights the need for greater involvement of Indian industry in hardware-related activities. Unlike in other countries, industry funding in India is limited, with most of it coming from angel investors and little participation from institutional investors such as venture-capital firms, tech corporates and private-equity funds.

There are many areas of quantum tech that are simply not being pursued in India

Arindam Ghosh

The report also calls for more indigenous development of essential sensors and devices such as single-photon detectors, quantum repeaters, and associated electronics, with necessary testing facilities for quantum communication. “There is also room for becoming global manufacturers and suppliers for associated electronic or cryogenic components,” says Sood. “Our industry should take this opportunity.”

India must work on its quantum security and regulation as well, according to the report. It warns that the Indian financial sector, which is one of the major drivers for quantum tech applications, “risks lagging behind” in quantum security and regulation, with limited participation of Indian financial-service providers.

“Our cyber infrastructure, especially related to our financial systems, power grids, and transport systems, need to be urgently protected by employing the existing and evolving post quantum cryptography algorithms and quantum key distribution technologies,” says Sood.

India currently has about 50 educational programmes in quantum science and technology at various universities and institutions. Yet Arindam Ghosh, who runs the Quantum Technology Initiative at the Indian Institute of Science, Bangalore, says that the country faces a lack of people going into quantum-related careers.

“In spite of [a] very large number of quantum-educated graduates, the human resource involved in developing quantum technologies is abysmally small,” says Ghosh. “As a result, there are many areas of quantum tech that are simply not being pursued in India.”  Other problems, according to Ghosh, include “modest” government funding compared to other countries as well as “slow and highly bureaucratic” government machinery.

Sood, however, is optimistic, pointing out recent Indian initiatives such as setting up hardware fabrication and testing facilities, supporting start-ups as well as setting up a $1.2bn (100bn INR) fund to promote “deep-tech” startups. “[With such initiatives] there is every reason to believe that India would emerge even stronger in the field,” says Sood.


Quantum transducer enables optical control of a superconducting qubit

28 April 2025 at 10:10
Quantum transducer A niobium microwave LC resonator (silver) is capacitively coupled to two hybridized lithium niobate racetrack resonators in a paperclip geometry (black) to exchange energy between the microwave and optical domains using the electro-optic effect. (Courtesy: Lončar group/Harvard SEAS)

The future of quantum communication and quantum computing technologies may well revolve around superconducting qubits and quantum circuits, which have already been shown to improve processing capabilities over classical supercomputers – even when there is noise within the system. This scenario could be one step closer with the development of a novel quantum transducer by a team headed up at the Harvard John A Paulson School of Engineering and Applied Sciences (SEAS).

Realising this future will rely on systems having hundreds (or more) logical qubits (each built from multiple physical qubits). However, because superconducting qubits require ultralow operating temperatures, large-scale refrigeration is a major challenge – there is no technology available today that can provide the cooling power to realise such large-scale qubit systems.

Superconducting microwave qubits are a promising option for quantum processor nodes, but they currently require bulky microwave components. These components create a lot of heat that can easily disrupt the refrigeration systems cooling the qubits.

One way to combat this cooling conundrum is to use a modular approach, with small-scale quantum processors connected via quantum links, and each processor having its own dilution refrigerator. Superconducting qubits can be accessed using microwave photons between 3 and 8 GHz, thus the quantum links could be used to transmit microwave signals. The downside of this approach is that it would require cryogenically cooled links between each subsystem.

On the other hand, optical signals at telecoms frequency (around 200 THz) can be generated using much smaller form factor components, leading to lower thermal loads and noise, and can be transmitted via low-loss optical fibres. The transduction of information between optical and microwave frequencies is therefore key to controlling superconducting microwave qubits without the high thermal cost.

The large energy gap between microwave and optical photons makes it difficult to control microwave qubits with optical signals and requires a microwave–optical quantum transducer (MOQT). These MOQTs provide a coherent, bidirectional link between microwave and optical frequencies while preserving the quantum states of the qubit. A team led by SEAS researcher Marko Lončar has now created such a device, describing it in Nature Physics.
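
To get a feel for the scale of that gap, here is a minimal back-of-envelope sketch in Python. The 5 GHz and 200 THz values are representative figures from the text, not parameters of the Harvard device:

```python
# Back-of-envelope comparison of microwave and optical photon energies, using
# representative frequencies from the article (a 3-8 GHz qubit band and ~200 THz
# telecom light). The point is the roughly five-orders-of-magnitude gap that a
# microwave-optical quantum transducer has to bridge coherently.
h = 6.62607015e-34   # Planck constant, J s
k_B = 1.380649e-23   # Boltzmann constant, J/K

f_microwave = 5e9    # Hz, a typical superconducting-qubit frequency
f_optical = 200e12   # Hz, telecom-band optical carrier

E_mw, E_opt = h * f_microwave, h * f_optical

print(f"Microwave photon: {E_mw:.2e} J (equivalent to ~{E_mw / k_B * 1e3:.0f} mK)")
print(f"Optical photon:   {E_opt:.2e} J")
print(f"Energy ratio (optical/microwave): {E_opt / E_mw:.0f}")
```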

Electro-optic transducer controls superconducting qubits

Lončar and collaborators have developed a thin-film lithium niobate (TFLN) cavity electro-optic (CEO)-based MOQT (clad with silica to aid thermal dissipation and mitigate optical losses) that converts optical frequencies into microwave frequencies with low loss. The team used the CEO-MOQT to facilitate coherent optical driving of a superconducting qubit (controlling the state of the quantum system by manipulating its energy).

The on-chip transducer system contains three resonators: a microwave LC resonator capacitively coupled to two optical resonators via the electro-optic effect. The device creates hybridized optical modes in the transducer that enable a resonance-enhanced exchange of energy between the microwave and optical modes.

The transducer uses a process known as difference frequency generation to create a new frequency output from two input frequencies. The optical modes – an optical pump in a classical red-pumping regime and an optical idler – interact to generate a microwave signal at the qubit frequency, in the form of a shaped, symmetric single microwave photon.
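
The frequency bookkeeping behind this difference-frequency process can be sketched in a few lines. The 1550 nm pump wavelength and 5 GHz qubit frequency below are illustrative assumptions, not the device’s actual operating points:

```python
# Sketch of difference frequency generation (DFG): two optical tones whose
# frequencies differ by the qubit frequency produce a microwave output at that
# difference, as required by energy conservation (h*f_pump = h*f_idler + h*f_mw).
c = 299_792_458.0            # speed of light, m/s

f_qubit = 5.0e9              # Hz, assumed superconducting-qubit frequency
f_pump = c / 1550e-9         # Hz, assumed telecom-band optical pump
f_idler = f_pump - f_qubit   # idler detuned from the pump by the qubit frequency

f_mw = f_pump - f_idler      # microwave tone generated by the transducer
print(f"Pump:  {f_pump / 1e12:.4f} THz")
print(f"Idler: {f_idler / 1e12:.4f} THz")
print(f"Generated microwave signal: {f_mw / 1e9:.2f} GHz")
```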

This microwave signal is then transmitted from the transducer to a superconducting qubit (in the same refrigerator system) using a coaxial cable. The qubit is coupled to a readout resonator that enables its state to be read by measuring the transmission of a readout pulse.

The MOQT operated with a peak conversion efficiency of 1.18% (in both microwave-to-optical and optical-to-microwave regimes), low microwave noise generation and the ability to drive Rabi oscillations in a superconducting qubit. Because of the low noise, the researchers state that stronger optical-pump fields could be used without affecting qubit performance.

Having effectively demonstrated the ability to control superconducting circuits with optical light, the researchers suggest a number of future improvements that could increase the device performance by orders of magnitude. For example, microwave and optical coupling losses could be reduced by fabricating a single-ended microwave resonator directly onto the silicon wafer instead of on silica. A flux tuneable microwave cavity could increase the optical bandwidth of the transducer. Finally, the use of improved measurement methods could improve control of the qubits and allow for more intricate gate operations between qubit nodes.

The researchers suggest this type of device could be used for networking superconductor qubits when scaling up quantum systems. The combination of this work with other research on developing optical readouts for superconducting qubit chips “provides a path towards forming all-optical interfaces with superconducting qubits…to enable large scale quantum processors,” they conclude.

The post Quantum transducer enables optical control of a superconducting qubit appeared first on Physics World.

Could an extra time dimension reconcile quantum entanglement with local causality?

25 avril 2025 à 14:33

Nonlocal correlations that define quantum entanglement could be reconciled with Einstein’s theory of relativity if space–time had two temporal dimensions. That is the implication of new theoretical work that extends nonlocal hidden variable theories of quantum entanglement and proposes a potential experimental test.

Marco Pettini, a theoretical physicist at Aix Marseille University in France, says the idea arose from conversations with the mathematical physicist Roger Penrose – who shared the 2020 Nobel Prize for Physics for showing that the general theory of relativity predicted black holes. “He told me that, from his point of view, quantum entanglement is the greatest mystery that we have in physics,” says Pettini. The puzzle is encapsulated by Bell’s inequality, which was derived in the mid-1960s by the Northern Irish physicist John Bell.

Bell’s breakthrough was inspired by the 1935 Einstein–Podolsky–Rosen paradox, a thought experiment in which entangled particles in quantum superpositions (using the language of modern quantum mechanics) travel to spatially separated observers Alice and Bob. They make measurements of the same observable property of their particles. As they are superposition states, the outcome of neither measurement is certain before it is made. However, as soon as Alice measures the state, the superposition collapses and Bob’s measurement is now fixed.

Quantum scepticism

A sceptic of quantum indeterminacy could hypothetically suggest that the entangled particles carried hidden variables all along, so that when Alice made her measurement, she simply found out the state that Bob would measure rather than actually altering it. If the observers are separated by a distance so great that information about the hidden variable’s state would have to travel faster than light between them, then hidden variable theory violates relativity. Bell derived an inequality showing the maximum degree of correlation between the measurements possible if each particle carried such a “local” hidden variable, and showed it was indeed violated by quantum mechanics.
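
As a concrete illustration (not part of the new work), the violation is easy to reproduce numerically using the CHSH form of Bell’s inequality, a later refinement of Bell’s 1964 result, for which quantum mechanics predicts a maximum value of 2√2 against the local hidden-variable bound of 2:

```python
# Quantum correlations for a spin singlet measured along directions a and b are
# E(a, b) = -cos(a - b). The CHSH combination of four settings then reaches
# 2*sqrt(2) ~ 2.83, exceeding the bound of 2 obeyed by any local hidden-variable
# model. The angles below are the standard optimal choice, not from the paper.
import math

def E(a: float, b: float) -> float:
    """Singlet-state correlation for measurement angles a and b (radians)."""
    return -math.cos(a - b)

a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(f"CHSH value S = {S:.4f} (local hidden variables require S <= 2)")
```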

A more sophisticated alternative investigated by the theoretical physicists David Bohm and his student Jeffrey Bub, as well as by Bell himself, is a nonlocal hidden variable. This postulates that the particle – including the hidden variable – is indeed in a superposition and defined by an evolving wavefunction. When Alice makes her measurement, this superposition collapses. Bob’s value then correlates with Alice’s. For decades, researchers believed the wavefunction collapse could travel faster than light without allowing superluminal exchange of information – therefore without violating the special theory of relativity. However, in 2012 researchers showed that any finite-speed collapse propagation would enable superluminal information transmission.

“I met Roger Penrose several times, and while talking with him I asked ‘Well, why couldn’t we exploit an extra time dimension?’,” recalls Pettini. Particles could have five-dimensional wavefunctions (three spatial, two temporal), and the collapse could propagate through the extra time dimension – allowing it to appear instantaneous. Pettini says that the problem Penrose foresaw was that this would enable time travel, and the consequent possibility that one could travel back through the “extra time” to kill one’s ancestors or otherwise violate causality. However, Pettini says he “recently found in the literature a paper which has inspired some relatively standard modifications of the metric of an enlarged space–time in which massive particles are confined with respect to the extra time dimension…Since we are made of massive particles, we don’t see it.”

Toy model

Pettini believes it might be possible to test this idea experimentally. In a new paper, he proposes a hypothetical experiment (which he describes as a toy model), in which two sources emit pairs of entangled, polarized photons simultaneously. The photons from one source are collected by recipients Alice and Bob, while the photons from the other source are collected by Eve and Tom using identical detectors. Alice and Eve compare the polarizations of the photons they detect. Alice’s photon must, by fundamental quantum mechanics, be entangled with Bob’s photon, and Eve’s with Tom’s, but otherwise simple quantum mechanics gives no reason to expect any entanglement in the system.

Pettini proposes, however, that Alice and Eve should be placed much closer together, and closer to the photon sources, than to the other observers. In this case, he suggests, when the wavefunction of Alice’s particle collapses and the entanglement is communicated to Bob through the extra time dimension (or likewise when Eve’s is communicated to Tom), information would also be transmitted between the much closer, identical photons received by the other nearby observer. This could affect the interference between Alice’s and Eve’s photons and cause a violation of Bell’s inequality. “[Alice and Eve] would influence each other as if they were entangled,” says Pettini. “This would be the smoking gun.”

Bub, now a distinguished professor emeritus at the University of Maryland, College Park, is not holding his breath. “I’m intrigued by [Pettini] exploiting my old hidden variable paper with Bohm to develop his two-time model of entanglement, but to be frank I can’t see this going anywhere,” he says. “I don’t feel the pull to provide a causal explanation of entanglement, and I don’t any more think of the ‘collapse’ of the wave function as a dynamical process.” He says the central premise of Pettini’s model – that adding an extra time dimension could allow the transmission of entanglement between otherwise unrelated photons – is “a big leap”. “Personally, I wouldn’t put any money on it,” he says.

The research is described in Physical Review Research.

The post Could an extra time dimension reconcile quantum entanglement with local causality? appeared first on Physics World.

Light-activated pacemaker is smaller than a grain of rice

24 avril 2025 à 17:50

The world’s smallest pacemaker to date is smaller than a single grain of rice, optically controlled and dissolves after it’s no longer needed. According to researchers involved in the work, the pacemaker could work in human hearts of all sizes that need temporary pacing, including those of newborn babies with congenital heart defects.

“Our major motivation was children,” says Igor Efimov, a professor of medicine and biomedical engineering, in a press release from Northwestern University. Efimov co-led the research with Northwestern bioelectronics pioneer John Rogers.

“About 1% of children are born with congenital heart defects – regardless of whether they live in a low-resource or high-resource country,” Efimov explains. “Now, we can place this tiny pacemaker on a child’s heart and stimulate it with a soft, gentle, wearable device. And no additional surgery is necessary to remove it.”

The current clinical standard-of-care involves sewing pacemaker electrodes directly onto a patient’s heart muscle during surgery. Wires from the electrodes protrude from the patient’s chest and connect to an external pacing box. Placing the pacemakers – and removing them later – does not come without risk. Complications include infection, dislodgment, torn or damaged tissues, bleeding and blood clots.

To minimize these risks, the researchers sought to develop a dissolvable pacemaker, which they introduced in Nature Biotechnology in 2021. By varying the composition and thickness of materials in the devices, Rogers’ lab can control how long the pacemaker functions before dissolving. The dissolvable device also eliminates the need for bulky batteries and wires.

“The heart requires a tiny amount of electrical stimulation,” says Rogers in the Northwestern release. “By minimizing the size, we dramatically simplify the implantation procedures, we reduce trauma and risk to the patient, and, with the dissolvable nature of the device, we eliminate any need for secondary surgical extraction procedures.”

Light-controlled pacing
Light-controlled pacing When the wearable device (left) detects an irregular heartbeat, it emits light to activate the pacemaker. (Courtesy: John A Rogers/Northwestern University)

The latest iteration of the device – reported in Nature – advances the technology further. The pacemaker is paired with a small, soft, flexible, wireless device that is mounted onto the patient’s chest. The skin-interfaced device continuously captures electrocardiogram (ECG) data. When it detects an irregular heartbeat, it automatically shines a pulse of infrared light to activate the pacemaker and control the pacing.
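
The control logic of that closed loop can be caricatured in a few lines of code. This is purely a schematic sketch with assumed beat-interval thresholds, not the Northwestern team’s algorithm:

```python
# Schematic closed-loop pacing logic: the wearable monitors beat-to-beat (RR)
# intervals from the ECG and fires an infrared pulse - which the implanted,
# light-activated pacemaker converts into a pacing stimulus - whenever the
# interval drifts outside an assumed acceptable range. Thresholds are invented
# for illustration only.
def pacing_decisions(rr_intervals_s, min_s=0.5, max_s=1.2):
    """Return True for each interval that should trigger an infrared pacing pulse."""
    return [rr < min_s or rr > max_s for rr in rr_intervals_s]

print(pacing_decisions([0.80, 0.85, 1.60, 0.90]))  # the 1.6 s gap triggers a pulse
```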

“The new device is self-powered and optically controlled – totally different than our previous devices in those two essential aspects of engineering design,” says Rogers. “We moved away from wireless power transfer to enable operation, and we replaced RF wireless control strategies – both to eliminate the need for an antenna (the size-limiting component of the system) and to avoid the need for external RF power supply.”

Measurements demonstrated that the pacemaker – which is 1.8 mm wide, 3.5 mm long and 1 mm thick – delivers as much stimulation as a full-sized pacemaker. Initial studies in animals and in the human hearts of organ donors suggest that the device could work in human infants and adults. The devices are also versatile, the researchers say, and could be used across different regions of the heart or the body. They could also be integrated with other implantable devices for applications in nerve and bone healing, treating wounds and blocking pain.

The next steps for the research (supported by the Querrey Simpson Institute for Bioelectronics, the Leducq Foundation and the National Institutes of Health) include further engineering improvements to the device. “From the translational standpoint, we have put together a very early-stage startup company to work individually and/or in partnerships with larger companies to begin the process of designing the device for regulatory approval,” Rogers says.

The post Light-activated pacemaker is smaller than a grain of rice appeared first on Physics World.

Harvard University sues Trump administration as attacks on US science deepen

24 avril 2025 à 14:55

Harvard University is suing the Trump administration over its plan to block up to $9bn of government research grants to the institution. The suit, filed in a federal court on 21 April, claims that the administration’s “attempt to coerce and control” Harvard violates the academic freedom protected by the first amendment of the US constitution.

The action comes in the wake of the US administration claiming that Harvard and other universities have not protected Jewish students during pro-Gaza campus demonstrations. Columbia University has already agreed to change its teaching policies and clamp down on demonstrations in the hope of regaining some $400m of government grants.

Harvard president Alan Garber also sought negotiations with the administration on ways that the university might satisfy its demands. But a letter sent to Garber dated 11 April, signed by three Trump administration officials, asserted that the university had “failed to live up to both the intellectual and civil rights conditions that justify federal investments”.

The letter demanded that Harvard reform and restructure its governance, stop all diversity, equity and inclusion (DEI) programmes and reform how it hires staff and admits students. It also said Harvard must stop recruiting international students who are “hostile to American values” and provide an audit of “viewpoint diversity” in admissions and hiring.

Some administration sources suggested that the letter, which effectively insists on government oversight of Harvard’s affairs, was an internal draft sent to Harvard by mistake. Nevertheless, Garber decided to end negotiations, leading Harvard to instead sue the government over the blocked funds.

We stand for the values that have made American higher education a beacon for the world

Alan Garber

A letter on 14 April from Harvard’s lawyers states that the university is “committed to fighting antisemitism and other forms of bigotry in its community”. It adds that it is “open to dialogue” about what it has done, and is planning to do, to “improve the experience of every member” of its community but concludes that Harvard “is not prepared to agree to demands that go beyond the lawful authority of this or any other administration”.

Writing in an open letter to the community dated 22 April, Garber says that “we stand for the values that have made American higher education a beacon for the world”. The administration has hit back by threatening to withdraw Harvard’s non-profit status, tax its endowment and jeopardise its ability to enrol overseas students, who currently make up more than 27% of its intake.

Budget woes

The Trump administration is also planning swingeing cuts to government science agencies. If its budget request for 2026 is approved by Congress, funding for NASA’s Science Mission Directorate would be almost halved from $7.3bn to $3.9bn. The Nancy Grace Roman Space Telescope, a successor to the Hubble and James Webb space telescopes, would be axed. Two missions to Venus – the DAVINCI atmosphere probe and the VERITAS surface-mapping project – as well as the Mars Sample Return mission would lose their funding too.

“The impacts of these proposed funding cuts would not only be devastating to the astronomical sciences community, but they would also have far-reaching consequences for the nation,” says Dara Norman, president of the American Astronomical Society. “These cuts will derail not only cutting-edge scientific advances, but also the training of the nation’s future STEM workforce.”

The National Oceanic and Atmospheric Administration (NOAA) also stands to lose key programmes, with the budget for its Ocean and Atmospheric Research Office slashed from $485m to just over $170m. Surviving programmes from the office, including research on tornado warning and ocean acidification, would move to the National Weather Service and National Ocean Service.

“This administration’s hostility toward research and rejection of climate science will have the consequence of eviscerating the weather forecasting capabilities that this plan claims to preserve,” says Zoe Lofgren, a senior Democrat who sits on the House of Representatives’ Science, Space, and Technology Committee.

The National Science Foundation (NSF), meanwhile, is unlikely to receive $234m for major building projects this financial year, which could spell the end of the Horizon supercomputer being built at the University of Texas at Austin. The NSF has already halved the number of graduate students in its research fellowship programme, while Science magazine says it is calling back all grant proposals that had been approved but not signed off, apparently to check that awardees conform to Trump’s stance on DEI.

A survey of 292 department chairs at US institutions in early April, carried out by the American Institute of Physics, reveals that almost half of respondents are experiencing or anticipate cuts in federal funding in the coming months. Entitled Impacts of Restrictions on Federal Grant Funding in Physics and Astronomy Graduate Programs, the report also says that the number of first-year graduate students in physics and astronomy is expected to drop by 13% in the next enrolment.

Update: 25/04/2025: Sethuraman Panchanathan has resigned as NSF director five years into his six-year term. Panchanathan took up the position in 2020 during Trump’s first term as US President. “I believe that I have done all I can to advance the mission of the agency and feel that it is time to pass the baton to new leadership,” Panchanathan said in a statement yesterday. “This is a pivotal moment for our nation in terms of global competitiveness. We must not lose our competitive edge.”

The post Harvard University sues Trump administration as attacks on US science deepen appeared first on Physics World.

Superconducting device delivers ultrafast changes in magnetic field

23 avril 2025 à 18:12

Precise control over the generation of intense, ultrafast changes in magnetic fields called “magnetic steps” has been achieved by researchers in Hamburg, Germany. Using ultrashort laser pulses, Andrea Cavalleri and colleagues at the Max Planck Institute for the Structure and Dynamics of Matter disrupted the currents flowing through a superconducting disc. This alters the superconductor’s local magnetic environment on very short timescales – creating a magnetic step.

Magnetic steps rise to their peak intensity in just a few picoseconds, before decaying more slowly in several nanoseconds. They are useful to scientists because they rise and fall on timescales far shorter than the time it takes for materials to respond to external magnetic fields. As a result, magnetic steps could provide fundamental insights into the non-equilibrium properties of magnetic materials, and could also have practical applications in areas such as magnetic memory storage.

So far, however, progress in this field has been held back by technical difficulties in generating and controlling magnetic steps on ultrashort timescales. Previous strategies have employed technologies including microcoils, specialized antennas and circularly polarized light pulses. However, each of these schemes offers only a limited degree of control over the properties of the magnetic steps it generates.

Quenching supercurrents

Now, Cavalleri’s team has developed a new technique that involves the quenching of currents in a superconductor. Normally, these “supercurrents” will flow indefinitely without losing energy, and will act to expel any external magnetic fields from the superconductor’s interior. However, if these currents are temporarily disrupted on ultrashort timescales, a sudden change will be triggered in the magnetic field close to the superconductor – which could be used to create a magnetic step.

To realize this, Cavalleri and colleagues applied ultrashort laser pulses to a thin superconducting disc of yttrium barium copper oxide (YBCO), while also exposing the disc to an external magnetic field.

To detect whether magnetic steps had been generated, they placed a crystal of the semiconductor gallium phosphide in the superconductor’s vicinity. This material exhibits an extremely rapid Faraday response, whereby the polarization of light passing through it rotates in response to changes in the local magnetic field. Crucially, this rotation can occur on sub-picosecond timescales.

In their experiments, the researchers monitored changes to the polarization of an ultrashort “probe” laser pulse passing through the semiconductor shortly after they quenched supercurrents in their YBCO disc using a separate ultrashort “pump” laser pulse.

“By abruptly disrupting the material’s supercurrents using ultrashort laser pulses, we could generate ultrafast magnetic field steps with rise times of approximately one picosecond – or one trillionth of a second,” explains team member Gregor Jotzu.

Broadband step

The technique was used to generate an extremely broadband magnetic step, containing frequencies ranging from sub-gigahertz to terahertz. In principle, this should make it suitable for studying magnetization in a diverse variety of materials.
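
A quick numerical sketch shows why such a step is intrinsically broadband. Here the step is modelled with an assumed ~1 ps rise and ~1 ns decay (illustrative values consistent with the timescales quoted above, not the measured waveform):

```python
# Model a magnetic step as a fast exponential rise followed by a slow exponential
# decay, then inspect its Fourier spectrum: the fast edge puts spectral weight at
# very high frequencies while the slow decay fills in the low-frequency end.
import numpy as np

dt = 50e-15                                  # 50 fs sampling step
t = np.arange(0.0, 5e-9, dt)                 # 5 ns window
tau_rise, tau_decay = 1e-12, 1e-9            # assumed rise and decay times
step = (1 - np.exp(-t / tau_rise)) * np.exp(-t / tau_decay)   # arbitrary units

spectrum = np.abs(np.fft.rfft(step))
freqs = np.fft.rfftfreq(len(t), dt)

for f_probe in (1e9, 100e9, 1e12):           # 1 GHz, 100 GHz, 1 THz
    idx = int(np.argmin(np.abs(freqs - f_probe)))
    print(f"{f_probe/1e9:6.0f} GHz: amplitude {spectrum[idx]/spectrum.max():.1e} of peak")
```

The spectral weight is concentrated at lower frequencies but extends, with decreasing amplitude, all the way into the terahertz range – which is what makes such steps useful for probing magnetization dynamics across very different timescales.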

To demonstrate practical applications, the team used these magnetic steps to control the magnetization of a ferrimagnet. Such a magnet has opposing magnetic moments, but has a non-zero spontaneous magnetization in zero magnetic field.

When they placed a ferrimagnet on top of their superconductor and created a magnetic step, the step field caused the ferrimagnet’s magnetization to rotate.

For now, the magnetic steps generated through this approach do not have the speed or amplitude needed to switch materials like a ferrimagnet between stable states. Yet through further tweaks to the geometry of their setup, the researchers are confident that this ability may not be far out of reach.

“Our goal is to create a universal, ultrafast stimulus that can switch any magnetic sample between stable magnetic states,” Cavalleri says. “With suitable improvements, we envision applications ranging from phase transition control to complete switching of magnetic order parameters.”

The research is described in Nature Photonics.

The post Superconducting device delivers ultrafast changes in magnetic field appeared first on Physics World.

FLIR MIX – a breakthrough in infrared and visible imaging

23 avril 2025 à 16:25

flir mix champagne cork

Until now, researchers have had to choose between thermal and visible imaging: one reveals heat signatures while the other provides structural detail. Recording both and trying to align them manually – or, harder still, synchronizing them temporally – can be inconsistent and time-consuming. The result is data that is close but never quite complete. The new FLIR MIX is a game changer, capturing and synchronizing high-speed thermal and visible imagery at up to 1000 fps. Visible and high-performance infrared cameras with FLIR Research Studio software work together to deliver one data set with perfect spatial and temporal alignment – no missed details or second guessing, just a complete picture of fast-moving events.

Jerry Beeney

Jerry Beeney is a seasoned global business development leader with a proven track record of driving product growth and sales performance in the Teledyne FLIR Science and Automation verticals. With more than 20 years at Teledyne FLIR, he has played a pivotal role in launching new thermal imaging solutions, working closely with technical experts, product managers, and customers to align products with market demands and customer needs. Before assuming his current role, Beeney held a variety of technical and sales positions, including senior scientific segment engineer. In these roles, he managed strategic accounts and delivered training and product demonstrations for clients across diverse R&D and scientific research fields. Beeney’s dedication to achieving meaningful results and cultivating lasting client relationships remains a cornerstone of his professional approach.

The post FLIR MIX – a breakthrough in infrared and visible imaging appeared first on Physics World.

Dual-robot radiotherapy system designed to reduce the cost of cancer treatment

23 avril 2025 à 13:00

Researchers at the University of Victoria in Canada are developing a low-cost radiotherapy system for use in low- and middle-income countries and geographically remote rural regions. Initial performance characterization of the proof-of-concept device produced encouraging results, and the design team is now refining the system with the goal of clinical commercialization.

This could be good news for people living in low-resource settings, where access to cancer treatment is an urgent global health concern. The WHO’s International Agency for Research on Cancer estimates that there are at least 20 million new cases of cancer diagnosed annually and 9.7 million annual cancer-related deaths, based on 2022 data. By 2030, approximately 75% of cancer deaths are expected to occur in low- and middle-income countries, due to rising populations, healthcare and financial disparities, and a general lack of personnel and equipment resources compared with high-income countries.

The team’s orthovoltage radiotherapy system, known as KOALA (kilovoltage optimized alternative for adaptive therapy), is designed to create, optimize and deliver radiation treatments in a single session. The device, described in Biomedical Physics & Engineering Express, consists of a dual-robot system with a 225 kVp X-ray tube mounted onto one robotic arm and a flat-panel detector mounted on the other.

The same X-ray tube can be used to acquire cone-beam CT (CBCT) images, as well as to deliver treatment, with a peak tube voltage of 225 kVp and a maximum tube current of 2.65 mA for a 1.2 mm focal spot. Due to its maximum reach of 2.05 m and collision restrictions, the KOALA system has a limited range of motion, achieving 190° arcs for both CBCT acquisition and treatments.

Device testing

To characterize the KOALA system, lead author Olivia Masella and colleagues measured X-ray spectra for tube voltages of 120, 180 and 225 kVp. At 120 and 180 kVp, they observed good agreement with spectra from SpekPy (a Python software toolkit for modelling X-ray tube spectra). For the 225 kVp spectrum, they found a notable overestimation in the higher energies.
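
SpekPy itself is openly available, so a reference spectrum of the kind used in this comparison can be generated in a few lines. The anode angle and filtration below are assumptions for illustration, not the KOALA tube’s actual specification, and the calls follow SpekPy v2’s documented interface:

```python
# Generate a modelled 120 kVp tungsten-anode spectrum with SpekPy for comparison
# against a measured spectrum. Anode angle and aluminium filtration are assumed
# illustrative values, not the parameters of the KOALA system.
import spekpy as sp

s = sp.Spek(kvp=120, th=12)     # 120 kVp tube potential, 12 degree anode angle (assumed)
s.filter('Al', 2.5)             # assumed 2.5 mm Al filtration

energies_keV, fluence = s.get_spectrum(edges=True)   # energy bins and differential fluence
print(f"Mean photon energy of the modelled spectrum: {s.get_emean():.1f} keV")
```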

The researchers performed dosimetric tests by measuring percent depth dose (PDD) curves for a 120 kVp imaging beam and a 225 kVp therapy beam, using solid water phantom blocks with a Farmer ionization chamber at various depths. They used an open beam with 40° divergence and a source-to-surface distance of 30 cm. They also measured 2D dose profiles with radiochromic film at various depths in the phantom for a collimated 225 kVp therapy beam and a dose of approximately 175 mGy at the surface.

The PDD curves showed excellent agreement between experiment and simulations at both 120 and 225 kVp, with dose errors of less than 2%. The 2D profile results were less than optimal. The team aims to correct this by using an optimized source-to-collimator distance (100 mm) and a custom-built motorized collimator.

Workflow proof-of-concept for the KOALA system
Workflow proof-of-concept The team tested the workflow by acquiring a CBCT image of a dosimetry phantom containing radiochromic film, delivering a 190° arc to the phantom, and scanning and analysing the film. The CBCT image was then processed for Monte Carlo dose calculation and compared to the film dose. (Courtesy: CC BY 4.0/Biomed. Phys. Eng. Express 10.1088/2057-1976/adbcb2)

Geometrical evaluation conducted using a coplanar star-shot test showed that the system demonstrated excellent geometrical accuracy, generating a wobble circle with a diameter of just 0.3 mm.

Low costs and clinical practicality

Principal investigator Magdalena Bazalova-Carter describes the rationale behind the KOALA’s development. “I began the computer simulations of this project about 15 years ago, but the idea originated from Michael Weil, a radiation oncologist in Northern California,” she tells Physics World. “He and our industrial partner, Tai-Nang Huang, the president of Linden Technologies, are overseeing the progress of the project. Our university team is diversified, working in medical physics, computer science, and electrical and mechanical engineering. Orimtech, a medical device manufacturer and collaborator, developed the CBCT acquisition and reconstruction software and built the imaging prototype.”

Masella says that the team is keeping costs low in various ways. “Megavoltage X-rays are most commonly used in conventional radiotherapy, but KOALA’s design utilizes low-energy kilovoltage X-rays for treatment. By using a 225 kVp X-ray tube, the X-ray generation alone is significantly cheaper compared to a conventional linac, at a cost of USD $150,000 compared to $3 million,” she explains. “By operating in the kilovoltage instead of megavoltage range, only about 4 mm of lead shielding is required, instead of 6 to 7 feet of high-density concrete, bringing the shielding cost down from $2 million to $50,000. We also have incorporated components that are much lower cost than [those in] a conventional radiotherapy system.”

“Our novel iris collimator leaves are only 1-mm thick due to the lower treatment X-ray beam energy, and its 12 leaves are driven by a single motor,” adds Bazalova-Carter. “Although multileaf collimators with 120 leaves utilized with megavoltage X-ray radiotherapy are able to create complex fields, they are about 8-cm thick and are controlled by 120 separate motors. Given the high cost and mechanical vulnerability of multileaf collimators, our single motor design offers a more robust and reliable alternative.”

The team is currently developing a new motorized collimator, an improved treatment couch and a treatment planning system. They plan to improve CBCT imaging quality with hardware modifications, develop a CBCT-to-synthetic CT machine learning algorithm, refine the auto-contouring tool and integrate all of the software to smooth the workflow.

The researchers are planning to work with veterinarians to test the KOALA system with dogs diagnosed with cancer. They will also develop quality assurance protocols specific to the KOALA device using a dog-head phantom.

“We hope to demonstrate the capabilities of our system by treating beloved pets for whom available cancer treatment might be cost-prohibitive. And while our system could become clinically adopted in veterinary medicine, our hope is that it will be used to treat people in regions where conventional radiotherapy treatment is insufficient to meet demand,” they say.

The post Dual-robot radiotherapy system designed to reduce the cost of cancer treatment appeared first on Physics World.

Top-quark pairs at ATLAS could shed light on the early universe

22 avril 2025 à 18:02

Physicists working on the ATLAS experiment on the Large Hadron Collider (LHC) are the first to report the production of top quark–antiquark pairs in collisions involving heavy nuclei. By colliding lead ions, CERN’s LHC creates a fleeting state of matter called the quark–gluon plasma. This is an extremely hot and dense soup of subatomic particles that includes deconfined quarks and gluons. This plasma is believed to have filled the early universe microseconds after the Big Bang.

“Heavy-ion collisions at the LHC recreate the quark–gluon plasma in a laboratory setting,” says Anthony Badea, a postdoctoral researcher at the University of Chicago and one of the lead authors of a paper describing the research. As well as boosting our understanding of the early universe, studying the quark–gluon plasma at the LHC could also provide insights into quantum chromodynamics (QCD), which is the theory of how quarks and gluons interact.

Although the quark–gluon plasma at the LHC vanishes after about 10⁻²³ s, scientists can study it by analysing how other particles produced in collisions move through it. The top quark is the heaviest known elementary particle and its short lifetime and distinct decay pattern offer a unique way to explore the quark–gluon plasma. This is because the top quark decays before the quark–gluon plasma dissipates.

“The top quark decays into lighter particles that subsequently further decay,” explains Stefano Forte at the University of Milan, who was not involved in the research. “The time lag between these subsequent decays is modified if they happen within the quark–gluon plasma, and thus studying them has been suggested as a way to probe [quark–gluon plasma’s] structure. In order for this to be possible, the very first step is to know how many top quarks are produced in the first place, and determining this experimentally is what is done in this [ATLAS] study.”

First observations

The ATLAS team analysed data from lead–lead collisions and searched for events in which a top quark and its antimatter counterpart were produced. These particles can then decay in several different ways and the researchers focused on a less frequent but more easily identifiable mode known as the di-lepton channel. In this scenario, each top quark decays into a bottom quark and a W boson, which is a weak force-carrying particle that then transforms into a detectable lepton and an invisible neutrino.
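
The reason this channel is rare but clean comes down to simple branching-ratio arithmetic. The sketch below uses standard, textbook-level W-boson branching fractions (not numbers from the ATLAS paper) and assumes the di-lepton selection counts only electrons and muons:

```python
# Each W boson decays to an electron or a muon (plus a neutrino) roughly 11% of
# the time per flavour; requiring both Ws in a t-tbar event to do so leaves only
# a few per cent of events in the e/mu di-lepton channel.
br_w_to_e = 0.108
br_w_to_mu = 0.106
br_w_to_lepton = br_w_to_e + br_w_to_mu     # either light charged lepton

dilepton_fraction = br_w_to_lepton ** 2     # both Ws must decay leptonically
print(f"Fraction of t-tbar events in the e/mu di-lepton channel: {dilepton_fraction:.1%}")
```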

The results not only confirmed that top quarks are created in this complex environment but also showed that their production rate matches predictions based on our current understanding of the strong nuclear force.

“This is a very important study,” says Juan Rojo, a theoretical physicist at the Free University of Amsterdam who did not take part in the research. “We have studied the production of top quarks, the heaviest known elementary particle, in the relatively simple proton–proton collisions for decades. This work represents the first time that we observe the production of these very heavy particles in a much more complex environment, with two lead nuclei colliding among them.”

As well as confirming QCD’s prediction of heavy-quark production in heavy-nuclei collisions, Rojo explains that “we have a novel probe to resolve the structure of the quark–gluon plasma”. He also says that future studies will enable us “to understand novel phenomena in the strong interactions such as how much gluons in a heavy nucleus differ from gluons within the proton”.

Crucial first step

“This is a first step – a crucial one – but further studies will require larger samples of top quark events to explore more subtle effects,” adds Rojo.

The number of top quarks created in the ATLAS lead–lead collisions agrees with theoretical expectations. In the future, more detailed measurements could help refine our understanding of how quarks and gluons behave inside nuclei. Eventually, physicists hope to use top quarks not just to confirm existing models, but to reveal entirely new features of the quark–gluon plasma.

Rojo says we could “learn about the time structure of the quark–gluon plasma, measurements which are ‘finer’ would be better, but for this we need to wait until more data is collected, in particular during the upcoming high-luminosity run of the LHC”.

Badea agrees that ATLAS’s observation opens the door to deeper explorations. “As we collect more nuclei collision data and improve our understanding of top-quark processes in proton collisions, the future will open up exciting prospects”.

The research is described in Physical Review Letters.

The post Top-quark pairs at ATLAS could shed light on the early universe appeared first on Physics World.

Grete Hermann: the quantum physicist who challenged Werner Heisenberg and John von Neumann

22 avril 2025 à 17:00
Grete Hermann
Great mind Grete Hermann, pictured here in 1955, was one of the first scientists to consider the philosophical implications of quantum mechanics. (Photo: Lohrisch-Achilles. Courtesy: Bremen State Archives)

In the early days of quantum mechanics, physicists found its radical nature difficult to accept – even though the theory had successes. In particular Werner Heisenberg developed the first comprehensive formulation of quantum mechanics in 1925, while the following year Erwin Schrödinger was able to predict the spectrum of light emitted by hydrogen using his eponymous equation. Satisfying though these achievements were, there was trouble in store.

Long accustomed to Isaac Newton’s mechanical view of the universe, physicists had assumed that identical systems always evolve with time in exactly the same way, that is to say “deterministically”. But Heisenberg’s uncertainty principle and the probabilistic nature of Schrödinger’s wave function suggested worrying flaws in this notion. Those doubts were famously expressed by Albert Einstein, Boris Podolsky and Nathan Rosen in their “EPR” paper of 1935 (Phys. Rev. 47 777) and in debates between Einstein and Niels Bohr.

But the issues at stake went deeper than just a disagreement among physicists. They also touched on long-standing philosophical questions about whether we inhabit a deterministic universe, the related question of human free will, and the centrality of cause and effect. One person who rigorously addressed the questions raised by quantum theory was the German mathematician and philosopher Grete Hermann (1901–1984).

Hermann stands out in an era when it was rare for women to contribute to physics or philosophy, let alone to both. Writing in The Oxford Handbook of the History of Quantum Interpretations, published in 2022, the City University of New York philosopher of science Elise Crull has called Hermann’s work “one of the first, and finest, philosophical treatments of quantum mechanics”.

Grete Hermann upended the famous ‘proof’, developed by the Hungarian-American mathematician and physicist John von Neumann, that ‘hidden variables’ are impossible in quantum mechanics

What’s more, Hermann upended the famous “proof”, developed by the Hungarian-American mathematician and physicist John von Neumann, that “hidden variables” are impossible in quantum mechanics. But why have Hermann’s successes in studying the roots and meanings of quantum physics been so often overlooked? With 2025 being the International Year of Quantum Science and Technology, it’s time to find out.

Free thinker

Hermann was born on 2 March 1901 in the north German port city of Bremen. One of seven children, her mother was deeply religious, while her father was a merchant, a sailor and later an itinerant preacher. According to the 2016 book Grete Hermann: Between Physics and Philosophy by Crull and Guido Bacciagaluppi, she was raised according to her father’s maxim: “I train my children in freedom!” Essentially, he enabled Hermann to develop a wide range of interests and benefit from the best that the educational system could offer a woman at the time.

She was eventually admitted as one of a handful of girls at the Neue Gymnasium – a grammar school in Bremen – where she took a rigorous and broad programme of subjects. In 1921 Hermann earned a certificate to teach high-school pupils – an interest in education that reappeared in her later life – and began studying mathematics, physics and philosophy at the University of Göttingen.

In just four years, Hermann earned a PhD under the exceptional Göttingen mathematician Emmy Noether (1882–1935), famous for her groundbreaking theorem linking symmetry to physical conservation laws. Hermann’s final oral exam in 1925 featured not just mathematics, which was the subject of her PhD, but physics and philosophy too. She had specifically requested to be examined in the latter by the Göttingen philosopher Leonard Nelson, whose “logical sharpness” in lectures had impressed her.

abstract illustration of human heads overlapping
Mutual interconnections Grete Hermann was fascinated by the fundamental overlap between physics and philosophy. (Courtesy: iStock/agsandrew)

By this time, Hermann’s interest in philosophy was starting to dominate her commitment to mathematics. Although Noether had found a mathematics position for her at the University of Freiburg, Hermann instead decided to become Nelson’s assistant, editing his books on philosophy. “She studies mathematics for four years,” Noether declared, “and suddenly she discovers her philosophical heart!”

Hermann found Nelson to be demanding and sometimes overbearing but benefitted from the challenges he set. “I gradually learnt to eke out, step by step,” she later declared, “the courage for truth that is necessary if one is to utterly place one’s trust, also within one’s own thinking, in a method of thought recognized as cogent.” Hermann, it appeared, was searching for a path to the internal discovery of truth, rather like Einstein’s Gedankenexperimente.

After Nelson died in 1927 aged just 45, Hermann stayed in Göttingen, where she continued editing and expanding his philosophical work and related political ideas. Espousing a form of socialism based on ethical reasoning to produce a just society, Nelson had co-founded a political action group and set up the associated Philosophical-Political Academy (PPA) to teach his ideas. Hermann contributed to both and also wrote for the PPA’s anti-Nazi newspaper.

Hermann’s involvement in the organizations Nelson had founded later saw her move to other locations in Germany, including Berlin. But after Hitler came to power in 1933, the Nazis banned the PPA, and Hermann and her socialist associates drew up plans to leave Germany. Initially, she lived at a PPA “school-in-exile” in neighbouring Denmark. As the Nazis began to arrest socialists, Hermann feared that Germany might occupy Denmark (as it indeed later did) and so moved again, first to Paris and then London.

Amid all these disruptions, Hermann continued to bring her dual philosophical and mathematical perspectives to physics, and especially to quantum mechanics

Arriving in Britain in early 1938, Hermann became acquainted with Edward Henry, another socialist, whom she later married. It was, however, merely a marriage of convenience that gave Hermann British citizenship and – when the Second World War started in 1939 – stopped her from being interned as an enemy alien. (The couple divorced after the war.) Amid all these disruptions, Hermann continued to bring her dual philosophical and mathematical perspectives to physics, and especially to quantum mechanics.

Mixing philosophy and physics

A major stimulus for Hermann’s work came from discussions she had in 1934 with Heisenberg and Carl Friedrich von Weizsäcker, who was then his research assistant at the Institute for Theoretical Physics in Leipzig. The previous year Hermann had written an essay entitled “Determinism and quantum mechanics”, which analysed whether the indeterminate nature of quantum mechanics – central to the “Copenhagen interpretation” of quantum behaviour – challenged the concept of causality.

Much cherished by physicists, causality says that every event has a cause, and that a given cause always produces a single specific event. Causality was also a tenet of the 18th-century German philosopher Immanuel Kant, best known for his famous 1781 treatise Critique of Pure Reason. He believed that causality is fundamental for how humans organize their experiences and make sense of the world.

Hermann, like Nelson, was a “neo-Kantian” who believed that Kant’s ideas should be treated with scientific rigour. In her 1933 essay, Hermann examined how the Copenhagen interpretation undermines Kant’s principle of causality. Although the article was not published at the time, she sent copies to Heisenberg, von Weizsäcker, Bohr and also Paul Dirac, who was then at the University of Cambridge in the UK.

In fact, we only know of the essay’s existence because Crull and Bacciagaluppi discovered a copy in Dirac’s archives at Churchill College, Cambridge. They also found a 1933 letter to Hermann from Gustav Heckmann, a physicist who said that Heisenberg, von Weizsäcker and Bohr had all read her essay and took it “absolutely and completely seriously”. Heisenberg added that Hermann was a “fabulously clever woman”.

Heckmann then advised Hermann to discuss her ideas more fully with Heisenberg, who he felt would be more open than Bohr to new ideas from an unexpected source. In 1934 Hermann visited Heisenberg and von Weizsäcker in Leipzig, with Heisenberg later describing their interaction in his 1971 memoir Physics and Beyond: Encounters and Conversations.

In that book, Heisenberg relates how rigorously Hermann wanted to treat philosophical questions. “[She] believed she could prove that the causal law – in the form Kant had given it – was unshakable,” Heisenberg recalled. “Now the new quantum mechanics seemed to be challenging the Kantian conception, and she had accordingly decided to fight the matter out with us.”

Their interaction was no fight, but a spirited discussion, with some sharp questioning from Hermann. When Heisenberg suggested, for instance, that a particular radium atom emitting an electron is an example of an unpredictable random event that has no cause, Hermann countered by saying that just because no cause has been found, it didn’t mean no such cause exists.

Significantly, this was a reference to what we now call “hidden variables” – the idea that quantum mechanics is being steered by additional parameters that we possibly don’t know anything about. Heisenberg then argued that even with such causes, knowing them would lead to complications in other experiments because of the wave nature of electrons.

Abstract illustration of atomic physics
Forward thinker Grete Hermann was one of the first people to study the notion that quantum mechanics might be steered by mysterious additional parameters – now dubbed “hidden variables” – that we know nothing about. (Courtesy: iStock/pobytov)

Suppose, using a hidden variable, we could predict exactly which direction an electron would move. The electron wave wouldn’t then be able to split and interfere with itself, resulting in an extinction of the electron. But such electron interference effects are experimentally observed, which Heisenberg took as evidence that no additional hidden variables are needed to make quantum mechanics complete. Once again, Hermann pointed out a discrepancy in Heisenberg’s argument.

In the end, neither side fully convinced the other, but inroads were made, with Heisenberg concluding in his 1971 book that “we had all learned a good deal about the relationship between Kant’s philosophy and modern science”. Hermann herself paid tribute to Heisenberg in a 1935 paper “Natural-philosophical foundations of quantum mechanics”, which appeared in a relatively obscure philosophy journal called Abhandlungen der Fries’schen Schule (6 69). In it, she thanked Heisenberg “above all for his willingness to discuss the foundations of quantum mechanics, which was crucial in helping the present investigations”.

Quantum indeterminacy versus causality

In her 1933 paper, Hermann aimed to understand if the indeterminacy of quantum mechanics threatens causality. Her overall finding was that wherever indeterminacy is invoked in quantum mechanics, it is not logically essential to the theory. So without claiming that quantum theory actually supports causality, she left the possibility open that it might.

To illustrate her point, Hermann considered Heisenberg’s uncertainty principle, which says that there’s a limit to the accuracy with which complementary variables, such as position, q, and momentum, p, can be measured, namely ΔqΔp ≥ h, where h is Planck’s constant. Does this principle, she wondered, truly indicate quantum indeterminism?

Hermann asserted that this relation can mean only one of two possible things. One is that measuring one variable leaves the value of the other undetermined. Alternatively, the result of measuring the other variable can’t be precisely predicted. Hermann dismissed the first option because its very statement implies that exact values exist, and so it cannot be logically used to argue against determinism. The second choice could be valid, but that does not exclude the possibility of finding new properties – hidden variables – that give an exact prediction.

Hermann used her mathematical training to point out a flaw in von Neumann’s famous 1932 proof, which said that no hidden-variable theory can ever reproduce the features of quantum mechanics

In making her argument about hidden variables, Hermann used her mathematical training to point out a flaw in von Neumann’s famous 1932 proof, which said that no hidden-variable theory can ever reproduce the features of quantum mechanics. Quantum mechanics, according to von Neumann, is complete and no extra deterministic features need to be added.

For decades, his result was cited as “proof” that any deterministic addition to quantum mechanics must be wrong. Indeed, von Neumann had such a well-deserved reputation as a brilliant mathematician that few people had ever bothered to scrutinize his analysis. But in 1964 the Northern Irish theorist John Bell famously showed that a valid hidden-variable theory could indeed exist, though only if it’s “non-local” (Physics Physique Fizika 1 195).

Non-locality means that measurements made at widely separated locations can be correlated more strongly than any local mechanism allows, yet without enabling faster-than-light communication. Despite being a notion that Einstein never liked, non-locality has been widely confirmed experimentally. In fact, non-locality is a defining feature of quantum physics and one that’s eminently useful in quantum technology.

Then, in 1966 Bell examined von Neumann’s reasoning and found an error that decisively refuted the proof (Rev. Mod. Phys. 38 447). Bell, in other words, showed that quantum mechanics could permit hidden variables after all – a finding that opened the door to alternative interpretations of quantum mechanics. However, Hermann had reported the very same error in her 1933 paper, and again in her 1935 essay, with an especially lucid exposition that almost exactly foresees Bell’s objection.

She had got there first, more than three decades earlier (see box).

Grete Hermann: 30 years ahead of John Bell

artist impression of a quantum computer core
(Courtesy: iStock/Chayanan)

According to Grete Hermann, John von Neumann’s 1932 proof that quantum mechanics doesn’t need hidden variables “stands or falls” on his assumption concerning “expectation values”, which is the sum of all possible outcomes weighted by their respective probabilities. In the case of two quantities, say, r and s, von Neumann supposed that the expectation value of (r + s) is the same as the expectation value of r plus the expectation value of s. In other words, <(r + s)> = <r> + <s>.

This is clearly true in classical physics, Hermann writes, but the truth is more complicated in quantum mechanics. Suppose r and s are the conjugate variables in an uncertainty relationship, such as position q and momentum p, which satisfy ΔqΔp ≥ h. By definition, a precise measurement of q rules out a precise measurement of p, so the value of (q + p) cannot be obtained by measuring q and p separately and adding the results – which is what the relation <(q + p)> = <q> + <p> would demand of the individual, dispersion-free states that a hidden-variable theory requires.

Further analysis, which Hermann supplied and Bell presented more fully, shows exactly why this invalidates or at least strongly limits the applicability of von Neumann’s proof; but Hermann caught the essence of the error first. Bell did not recognize or cite Hermann’s work, most probably because it was hardly known to the physics community until years after his 1966 paper.
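
A short numerical illustration (mine, not Hermann’s or Bell’s) makes the flaw concrete: expectation values of non-commuting observables are additive on average, yet their individually measured values cannot be, because the eigenvalues of a sum are not sums of eigenvalues:

```python
# For the Pauli matrices sigma_x and sigma_z, each measurement returns +/-1, but a
# measurement of (sigma_x + sigma_z) returns +/-sqrt(2). So no assignment of
# definite values to a hypothetical dispersion-free (hidden-variable) state can
# respect von Neumann's additivity assumption, even though quantum averages do.
import numpy as np

sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])

print("eigenvalues of sigma_x:          ", np.linalg.eigvalsh(sigma_x))
print("eigenvalues of sigma_z:          ", np.linalg.eigvalsh(sigma_z))
print("eigenvalues of sigma_x + sigma_z:", np.linalg.eigvalsh(sigma_x + sigma_z))

# Averages, by contrast, are additive in any state psi (linearity of expectation):
psi = np.array([0.6, 0.8j])                  # an arbitrary normalized qubit state
lhs = np.vdot(psi, (sigma_x + sigma_z) @ psi).real
rhs = np.vdot(psi, sigma_x @ psi).real + np.vdot(psi, sigma_z @ psi).real
print(f"<x + z> = {lhs:.3f},  <x> + <z> = {rhs:.3f}")
```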

A new view of causality

After rebutting von Neumann’s proof in her 1935 essay, Hermann didn’t actually turn to hidden variables. Instead, Hermann went in a different and surprising direction, probably as a result of her discussions with Heisenberg. She accepted that quantum mechanics is a complete theory that makes only statistical predictions, but proposed an alternative view of causality within this interpretation.

We cannot foresee precise causal links in a quantum mechanics that is statistical, she wrote. But once a measurement has been made with a known result, we can work backwards to get a cause that led to that result. In fact, Hermann showed exactly how to do this with various examples. In this way, she maintains, quantum mechanics does not refute the general Kantian category of causality.

Not all philosophers have been satisfied by the idea of retroactive causality. But writing in The Oxford Handbook of the History of Quantum Interpretations, Crull says that Hermann “provides the contours of a neo-Kantian interpretation of quantum mechanics”. “With one foot squarely on Kant’s turf and the other squarely on Bohr’s and Heisenberg’s,” Crull concludes, “[Hermann’s] interpretation truly stands on unique ground.”

Grete Hermann’s 1935 paper shows a deep and subtle grasp of elements of the Copenhagen interpretation.

But Hermann’s 1935 paper did more than just upset von Neumann’s proof. In the article, she shows a deep and subtle grasp of elements of the Copenhagen interpretation such as its correspondence principle, which says that – in the limit of large quantum numbers – answers derived from quantum physics must approach those from classical physics.

The paper also shows that Hermann was fully aware – and indeed extended the meaning – of the implications of Heisenberg’s thought experiment that he used to illustrate the uncertainty principle. Heisenberg envisaged a photon colliding with an electron, but after that contact, she writes, the wave function of the physical system is a linear combination of terms, each being “the product of one wave function describing the electron and one describing the light quantum”.

As she went on to say, “The light quantum and the electron are thus not described each by itself, but only in their relation to each other. Each state of the one is associated with one of the other.” Remarkably, this amounts to an early perception of quantum entanglement, which Schrödinger described and named later in 1935. There is no evidence, however, that Schrödinger knew of Hermann’s insights.
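
Hermann’s observation can be made quantitative with a small numerical check (my illustration, using modern notation she did not have): a genuinely entangled two-particle state has Schmidt rank greater than one, so it cannot be factored into one wavefunction per particle, whereas a product state can:

```python
# Arrange the two-particle amplitudes as a matrix (rows: electron states, columns:
# light-quantum states). The number of non-zero singular values - the Schmidt rank -
# is 1 for a product state and 2 for a Bell-type entangled state, which is exactly
# the "only in their relation to each other" structure Hermann described.
import numpy as np

bell = np.array([[1.0, 0.0], [0.0, 1.0]]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
product = np.outer([1.0, 0.0], [0.6, 0.8])               # |0> (x) (0.6|0> + 0.8|1>)

for name, state in [("Bell-type state", bell), ("product state  ", product)]:
    schmidt = np.linalg.svd(state, compute_uv=False)
    rank = int(np.sum(schmidt > 1e-12))
    print(f"{name}: Schmidt coefficients {np.round(schmidt, 3)}, rank {rank}")
```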

Hermann’s legacy                             

On the centenary of the birth of a full theory of quantum mechanics, how should we remember Hermann? According to Crull, the early founders of quantum mechanics were “asking philosophical questions about the implications of their theory [but] none of these men were trained in both physics and philosophy”. Hermann, however, was an expert in the two. “[She] composed a brilliant philosophical analysis of quantum mechanics, as only one with her training and insight could have done,” Crull says.

Had Hermann’s 1935 paper been more widely known, it could have altered the early development of quantum mechanics

Sadly for Hermann, few physicists at the time were aware of her 1935 paper even though she had sent copies to some of them. Had it been more widely known, her paper could have altered the early development of quantum mechanics. Reading it today shows how Hermann’s style of incisive logical examination can bring new understanding.

Hermann leaves other legacies too. As the Second World War drew to a close, she started writing about the ethics of science, especially the way in which it was carried out under the Nazis. After the war, she returned to Germany, where she devoted herself to pedagogy and teacher training. She disseminated Nelson’s views as well as her own through the reconstituted PPA, and took on governmental positions where she worked to rebuild the German educational system, apparently to good effect according to contemporary testimony.

Hermann also became active in politics as an adviser to the Social Democratic Party. She continued to have an interest in quantum mechanics, but it is not clear how seriously she pursued it in later life, which saw her move back to Bremen to care for an ill comrade from her early socialist days.

Hermann’s achievements first came to light in 1974 when the physicist and historian Max Jammer revealed her 1935 critique of von Neumann’s proof in his book The Philosophy of Quantum Mechanics. Following Hermann’s death in Bremen on 15 April 1984, interest slowly grew, culminating in Crull and Bacciagaluppi’s 2016 landmark study Grete Hermann: Between Physics and Philosophy.

The life of this deep thinker, who also worked to educate others and to achieve worthy societal goals, remains an inspiration for any scientist or philosopher today.

This article forms part of Physics World’s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

The post Grete Hermann: the quantum physicist who challenged Werner Heisenberg and John von Neumann appeared first on Physics World.

Tennis-ball towers reach record-breaking heights with 12-storey, 34-ball structure

18 avril 2025 à 12:00
Oh, balls A record-breaking 34-ball, 12-storey tower with three balls per layer (photo a); a 21-ball six-storey tower with four balls per layer (photo b); an 11-ball, three-storey tower with five balls per layer (photo c); and why a tower with six balls per layer would be impossible as the “locker” ball just sits in the middle (photo d). (Courtesy: Andria Rogava)

A few years ago, I wrote in Physics World about various bizarre structures I’d built from tennis balls, the most peculiar of which I termed “tennis-ball towers”. They consisted of a series of three-ball layers topped by a single ball (“the locker”) that keeps the whole tower intact. Each tower had (3n + 1) balls, where n is the number of triangular layers. The tallest tower I made was a seven-storey, 19-ball structure (n = 6). Shortly afterwards, I made an even bigger, nine-storey, 25-ball structure (n = 8).

Now, in the latest exciting development, I have built a new, record-breaking tower with 34 balls (n = 11), in which all 30 balls from the second to the eleventh layer are kept in equilibrium by the locker on the top (see photo a). The three balls in the bottom layer aren’t influenced by the locker as they stay in place by virtue of being on the horizontal surface of a table.

I tried going even higher but failed to build a structure that would stay intact without supporting “scaffolds”. Now in case you think I’ve just glued the balls together, watch the video below to see how the incredible 34-ball structure collapses spontaneously, probably due to a slight vibration as I walked around the table.

Even more unexpectedly, I have been able to make tennis-ball towers consisting of layers of four balls (4n + 1) and five balls too (5n + 1). Their equilibria are more delicate and, in the case of four-ball structures, so far I have only managed to build (photo b) a 21-ball, six-storey tower (n = 5). You can also see the tower in the video below.

The (5n + 1) towers are even trickier to make and (photo c) I have only got up to a three-storey structure with 11 balls (n = 2): two layers of five balls topped by a single eleventh ball. In case you’re wondering, towers with six balls in each layer are physically impossible to build because they form a regular hexagon. You can’t just use another ball as a locker because it would simply sit between the other six (photo d).

The post Tennis-ball towers reach record-breaking heights with 12-storey, 34-ball structure appeared first on Physics World.

KATRIN sets tighter limit on neutrino mass

16 avril 2025 à 17:00

Researchers from the Karlsruhe Tritium Neutrino experiment (KATRIN) have announced the most precise upper limit yet on the neutrino’s mass. Thanks to new data and upgraded techniques, the new limit – 0.45 electron volts (eV) at 90% confidence – is half that of the previous tightest constraint, and marks a step toward answering one of particle physics’ longest-standing questions.

Neutrinos are ghostlike particles that barely interact with matter, slipping through the universe almost unnoticed. They come in three types, or flavours: electron, muon, and tau. For decades, physicists assumed all three were massless, but that changed in the late 1990s when experiments revealed that neutrinos can oscillate between flavours as they travel. This flavour-shifting behaviour is only possible if neutrinos have mass.

Although neutrino oscillation experiments confirmed that neutrinos have mass, and showed that the masses of the three flavours are different, they did not divulge the actual scale of these masses. Doing so requires an entirely different approach.

Looking for clues in electrons

In KATRIN’s case, that means focusing on a process called tritium beta decay, where a tritium nucleus (a proton and two neutrons) decays into a helium-3 nucleus (two protons and one neutron) by releasing an electron and an electron antineutrino. Due to energy conservation, the total energy from the decay is shared between the electron and the antineutrino. The neutrino’s mass determines the balance of the split.

“If the neutrino has even a tiny mass, it slightly lowers the energy that the electron can carry away,” explains Christoph Wiesinger, a physicist at the Technical University of Munich, Germany and a member of the KATRIN collaboration. “By measuring that [electron] spectrum with extreme precision, we can infer how heavy the neutrino is.”

Because the subtle effects of neutrino mass are most visible in decays where the neutrino carries away very little energy (most of it bound up in mass), KATRIN concentrates on measuring electrons that have taken the lion’s share. From these measurements, physicists can calculate neutrino mass without having to detect these notoriously weakly-interacting particles directly.
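
The effect KATRIN looks for can be sketched with the phase-space factor of the beta spectrum near its endpoint, dN/dE ∝ (E0 − E)·sqrt((E0 − E)² − m_ν²): a nonzero neutrino mass pulls the endpoint in and distorts the last few electronvolts of the spectrum. The toy calculation below ignores the Fermi function, final-state effects and detector response, and uses an approximate endpoint energy – it is the underlying shape, not KATRIN's actual analysis.

```python
import numpy as np

E0 = 18.574e3        # tritium beta-decay endpoint energy in eV (approximate)

def spectrum(E, m_nu):
    """Phase-space shape of the beta spectrum near the endpoint (energies in eV).
    A nonzero neutrino mass m_nu pulls the endpoint in by m_nu and distorts
    the shape just below it."""
    eps = E0 - E                                   # energy left for the neutrino
    return np.where(eps >= m_nu,
                    eps * np.sqrt(np.clip(eps**2 - m_nu**2, 0.0, None)),
                    0.0)

E = np.linspace(E0 - 5.0, E0, 6)                   # last 5 eV below the endpoint
print(spectrum(E, m_nu=0.0))                       # massless-neutrino reference
print(spectrum(E, m_nu=0.45))                      # shape at KATRIN's new upper limit
```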

Improvements over previous results

The new neutrino mass limit is based on data taken between 2019 and 2021, with 259 days of operations yielding over 36 million electron measurements. “That’s six times more than the previous result,” Wiesinger says.

Other improvements include better temperature control in the tritium source and a new calibration method using a monoenergetic krypton source. “We were able to reduce background noise rates by a factor of two, which really helped the precision,” he adds.

Photo of two researchers in lab coats and laser safety goggles bending over a box containing optics and other equipment. A beam of green laser light is passing through the box, suffusing the otherwise dark photo with a green glow
Keeping track: Laser system for the analysis of the tritium gas composition at KATRIN’s Windowless Gaseous Tritium Source. Improvements to temperature control in this source helped raise the precision of the neutrino mass limit. (Courtesy: Tritium Laboratory, KIT)

At 0.45 eV, the new limit means the neutrino is at least a million times lighter than the electron. “This is a fundamental number,” Wiesinger says. “It tells us that neutrinos are the lightest known massive particles in the universe, and maybe that their mass has origins beyond the Standard Model.”

Despite the new tighter limit, however, definitive answers about the neutrino’s mass are still some ways off. “Neutrino oscillation experiments tell us that the lower bound on the neutrino mass is about 0.05 eV,” says Patrick Huber, a theoretical physicist at Virginia Tech, US, who was not involved in the experiment. “That’s still about 10 times smaller than the new KATRIN limit… For now, this result fits comfortably within what we expect from a Standard Model that includes neutrino mass.”

Model independence

Though Huber emphasizes that there are “no surprises” in the latest measurement, KATRIN has a key advantage over its rivals. Unlike cosmological methods, which infer neutrino mass based on how it affects the structure and evolution of the universe, KATRIN’s direct measurement is model-independent, relying only on energy and momentum conservation. “That makes it very powerful,” Wiesinger argues. “If another experiment sees a measurement in the future, it will be interesting to check if the observation matches something as clean as ours.”

KATRIN’s own measurements are ongoing, with the collaboration aiming for 1000 days of operations by the end of 2025 and a final sensitivity approaching 0.3 eV. Beyond that, the plan is to repurpose the instrument to search for sterile neutrinos – hypothetical heavier particles that don’t interact via the weak force and could be candidates for dark matter.

“We’re testing things like atomic tritium sources and ultra-precise energy detectors,” Wiesinger says. “There are exciting ideas, but it’s not yet clear what the next-generation experiment after KATRIN will look like.”

The research appears in Science.

The post KATRIN sets tighter limit on neutrino mass appeared first on Physics World.

On the path towards a quantum economy

16 avril 2025 à 16:15
The high-street bank HSBC has worked with the NQCC, hardware provider Rigetti and the Quantum Software Lab to investigate the advantages that quantum computing could offer for detecting the signs of fraud in transactional data. (Courtesy: Shutterstock/Westend61 on Offset)

Rapid technical innovation in quantum computing is expected to yield an array of hardware platforms that can run increasingly sophisticated algorithms. In the real world, however, such technical advances will remain little more than a curiosity if they are not adopted by businesses and the public sector to drive positive change. As a result, one key priority for the UK’s National Quantum Computing Centre (NQCC) has been to help companies and other organizations to gain an early understanding of the value that quantum computing can offer for improving performance and enhancing outcomes.

To meet that objective the NQCC has supported several feasibility studies that enable commercial organizations in the UK to work alongside quantum specialists to investigate specific use cases where quantum computing could have a significant impact within their industry. One prime example is a project involving the high-street bank HSBC, which has been exploring the potential of quantum technologies for spotting the signs of fraud in financial transactions. Such fraudulent activity, which affects millions of people every year, now accounts for about 40% of all criminal offences in the UK and in 2023 generated total losses of more than £2.3 bn across all sectors of the economy.

Banks like HSBC currently exploit classical machine learning to detect fraudulent transactions, but these techniques require a large computational overhead to train the models and deliver accurate results. Quantum specialists at the bank have therefore been working with the NQCC, along with hardware provider Rigetti and the Quantum Software Lab at the University of Edinburgh, to investigate the capabilities of quantum machine learning (QML) for identifying the tell-tale indicators of fraud.

“HSBC’s involvement in this project has brought transactional fraud detection into the realm of cutting-edge technology, demonstrating our commitment to pushing the boundaries of quantum-inspired solutions for near-term benefit,” comments Philip Intallura, Group Head of Quantum Technologies at HSBC. “Our philosophy is to innovate today while preparing for the quantum advantage of tomorrow.”

Another study focused on a key problem in the aviation industry that has a direct impact on fuel consumption and the amount of carbon emissions produced during a flight. In this logistical challenge, the aim was to find the optimal way to load cargo containers onto a commercial aircraft. One motivation was to maximize the amount of cargo that can be carried; the other was to balance the weight of the cargo to reduce drag and improve fuel efficiency.

“Even a small shift in the centre of gravity can have a big effect,” explains Salvatore Sinno of technology solutions company Unisys, who worked on the project along with applications engineers at the NQCC and mathematicians at the University of Newcastle. “On a Boeing 747 a displacement of just 75 cm can increase the carbon emissions on a flight of 10,000 miles by four tonnes, and also increases the fuel costs for the airline company.”

A hybrid quantum–classical solution has been used to optimize the configuration of air freight, which can improve fuel efficiency and lower carbon emissions. (Courtesy: Shutterstock/supakitswn)

With such a large number of possible loading combinations, classical computers cannot produce an exact solution for the optimal arrangement of cargo containers. In their project the team improved the precision of the solution by combining quantum annealing with high-performance computing, a hybrid approach that Unisys believes can offer immediate value for complex optimization problems. “We have reached the limit of what we can achieve with classical computing, and with this work we have shown the benefit of incorporating an element of quantum processing into our solution,” explains Sinno.
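
To give a flavour of how such a loading problem can be posed for a hybrid quantum–classical solver, the toy sketch below balances a handful of made-up container weights between two zones by minimizing a quadratic cost – the kind of QUBO-style objective that quantum annealers accept – solved here by brute force. It is not the Unisys/NQCC formulation, which handles many more variables and constraints.

```python
from itertools import product

weights = [4.2, 3.1, 5.5, 2.0, 3.7, 1.6]   # toy container weights (tonnes), made up

def imbalance(assignment):
    """Cost of a front/rear split: squared difference of the two zone loads.
    Minimizing this is a small QUBO in the binary variables of `assignment`."""
    front = sum(w for w, s in zip(weights, assignment) if s == 1)
    rear  = sum(w for w, s in zip(weights, assignment) if s == 0)
    return (front - rear) ** 2

best = min(product((0, 1), repeat=len(weights)), key=imbalance)
print(best, imbalance(best))   # best split and its residual imbalance
```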

The HSBC project team also found that a hybrid quantum–classical solution could provide an immediate performance boost for detecting anomalous transactions. In this case, a quantum simulator running on a classical computer was used to run quantum algorithms for machine learning. “These simulators allow us to execute simple QML programmes, even though they can’t be run to the same level of complexity as we could achieve with a physical quantum processor,” explains Marco Paini, the project lead for Rigetti. “These simulations show the potential of these low-depth QML programmes for fraud detection in the near term.”
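
As a rough illustration of the sort of low-depth QML primitive a classical simulator can execute, the sketch below angle-encodes two features on two simulated qubits, applies one entangling gate and computes a fidelity-based “quantum kernel” between data points that a classical classifier could then use. It is a generic textbook-style construction, not the Rigetti/HSBC fraud-detection model.

```python
import numpy as np

def ry(theta):
    """Single-qubit Y-rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def feature_state(x):
    """Angle-encode two features into a 2-qubit state with one entangling gate."""
    state = np.kron(ry(x[0]) @ [1, 0], ry(x[1]) @ [1, 0])
    return CNOT @ state

def quantum_kernel(x, y):
    """Fidelity |<psi(x)|psi(y)>|^2 between encoded data points."""
    return abs(np.vdot(feature_state(x), feature_state(y))) ** 2

a, b = np.array([0.1, 1.2]), np.array([0.3, 0.9])
print(quantum_kernel(a, a))   # 1.0 for identical points
print(quantum_kernel(a, b))   # similarity usable by a classical kernel classifier
```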

The team also simulated more complex QML approaches using a similar but smaller-scale problem, demonstrating a further improvement in performance. This outcome suggests that running deeper QML algorithms on a physical quantum processor could deliver an advantage for detecting anomalies in larger datasets, even though the hardware does not yet provide the performance needed to achieve reliable results. “This initiative not only showcases the near-term applicability of advanced fraud models, but it also equips us with the expertise to leverage QML methods as quantum computing scales,” comments Intallura.

Indeed, the results obtained so far have enabled the project partners to develop a roadmap that will guide their ongoing development work as the hardware matures. One key insight, for example, is that even a fault-tolerant quantum computer would struggle to process the huge financial datasets produced by a bank like HSBC, since a finite amount of time is needed to run the quantum calculation for each data point. “From the simulations we found that the hybrid quantum–classical solution produces more false positives than classical methods,” says Paini. “One approach we can explore would be to use the simulations to flag suspicious transactions and then run the deeper algorithms on a quantum processor to analyse the filtered results.”

This particular project also highlighted the need for agreed protocols to navigate the strict rules on data security within the banking sector. For this project the HSBC team was able to run the QML simulations on its existing computing infrastructure, avoiding the need to share sensitive financial data with external partners. In the longer term, however, banks will need reassurance that their customer information can be protected when processed using a quantum computer. Anticipating this need, the NQCC has already started to work with regulators such as the Financial Conduct Authority, which is exploring some of the key considerations around privacy and data security, with that initial work feeding into international initiatives that are starting to consider the regulatory frameworks for using quantum computing within the financial sector.

For the cargo-loading project, meanwhile, Sinno says that an important learning point has been the need to formulate the problem in a way that can be tackled by the current generation of quantum computers. In practical terms that means defining constraints that reduce the complexity of the problem, but that still reflect the requirements of the real-world scenario. “Working with the applications engineers at the NQCC has helped us to understand what is possible with today’s quantum hardware, and how to make the quantum algorithms more viable for our particular problem,” he says. “Participating in these studies is a great way to learn and has allowed us to start using these emerging quantum technologies without taking a huge risk.”

Indeed, one key feature of these feasibility studies is the opportunity they offer for different project partners to learn from each other. Each project includes an end-user organization with a deep knowledge of the problem, quantum specialists who understand the capabilities and limitations of present-day solutions, and academic experts who offer an insight into emerging theoretical approaches as well as methodologies for benchmarking the results. The domain knowledge provided by the end users is particularly important, says Paini, to guide ongoing development work within the quantum sector. “If we only focused on the hardware for the next few years, we might come up with a better technical solution but it might not address the right problem,” he says. “We need to know where quantum computing will be useful, and to find that convergence we need to develop the applications alongside the algorithms and the hardware.”

Another major outcome from these projects has been the ability to make new connections and identify opportunities for future collaborations. As a national facility NQCC has played an important role in providing networking opportunities that bring diverse stakeholders together, creating a community of end users and technology providers, and supporting project partners with an expert and independent view of emerging quantum technologies. The NQCC has also helped the project teams to share their results more widely, generating positive feedback from the wider community that has already sparked new ideas and interactions.

“We have been able to network with start-up companies and larger enterprise firms, and with the NQCC we are already working with them to develop some proof-of-concept projects,” says Sinno. “Having access to that wider network will be really important as we continue to develop our expertise and capability in quantum computing.”

The post On the path towards a quantum economy appeared first on Physics World.

Microwaves slow down chemical reactions at low temperatures

16 avril 2025 à 14:17

Through new experiments, researchers in Switzerland have tested models of how microwaves affect low-temperature chemical reactions between ions and molecules. Through their innovative setup, Valentina Zhelyazkova and colleagues at ETH Zurich showed for the first time how the application of microwave pulses can slow down reaction rates via nonthermal mechanisms.

Physicists have been studying chemical reactions between ions and neutral molecules for some time. At close to room temperature, classical models can closely predict how the electric fields emanating from ions will induce dipoles in nearby neutral molecules, allowing researchers to calculate these reaction rates with impressive accuracy. Yet as temperatures drop close to absolute zero, a wide array of more complex effects come into play, which have gradually been incorporated into the latest theoretical models.
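
The classical ion–induced-dipole picture mentioned here is usually summarized by the Langevin capture rate, k_L = 2πq√(α/μ), which is independent of temperature. A back-of-the-envelope evaluation for He⁺ + CO is sketched below; the polarizability and masses are approximate literature values, and the true low-temperature rate departs from this simple estimate for the reasons discussed in the article.

```python
import math

# Langevin capture rate k_L = 2*pi*q*sqrt(alpha/mu) in Gaussian (cgs) units
q     = 4.803e-10          # elementary charge in esu
alpha = 1.95e-24           # CO polarizability volume in cm^3 (approximate)
amu   = 1.661e-24          # atomic mass unit in g
mu    = (4.0 * 28.0) / (4.0 + 28.0) * amu    # He-CO reduced mass

k_L = 2 * math.pi * q * math.sqrt(alpha / mu)
print(f"k_L = {k_L:.2e} cm^3/s")   # of order 1e-9 cm^3/s, typical of ion-molecule capture
```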

“At low temperatures, models of reactivity must include the effects of the permanent electric dipoles and quadrupole moments of the molecules, the effect of their vibrational and rotational motion,” Zhelyazkova explains. “At extremely low temperatures, even the quantum-mechanical wave nature of the reactants must be considered.”

Rigorous experiments

Although these low-temperature models have steadily improved in recent years, the ability to put them to the test through rigorous experiments has so far been hampered by external factors.

In particular, stray electric fields in the surrounding environment can heat the ions and molecules, so that any important quantum effects are quickly drowned out by noise. “Consequently, it is only in the past few years that experiments have provided information on the rates of ion–molecule reactions at very low temperatures,” Zhelyazkova explains.

In their study, Zhelyazkova’s team improved on these past experiments through an innovative approach to cooling the internal motions of the molecules being heated by stray electric fields. Their experiment involved a reaction between positively-charged helium ions and neutral molecules of carbon monoxide (CO). This creates neutral atoms of helium and oxygen, and a positively-charged carbon atom.

To initiate the reaction, the researchers created separate but parallel supersonic beams of helium and CO that were combined in a reaction cell. “In order to overcome the problem of heating the ions by stray electric fields, we study the reactions within the distant orbit of a highly excited electron, which makes the overall system electrically neutral without affecting the ion–molecule reaction taking place within the electron orbit,” explains ETH’s Frédéric Merkt.

Giant atoms

In such a “Rydberg atom”, the highly excited electron is some distance from the helium nucleus and its other electron. As a result, a Rydberg helium atom can be considered an ion with a “spectator” electron, which has little influence over how the reaction unfolds. To ensure the best possible accuracy, “we use a printed circuit board device with carefully designed surface electrodes to deflect one of the two beams,” explains ETH’s Fernanda Martins. “We then merged this beam with the other, and controlled the relative velocity of the two beams.”

Altogether, this approach enabled the researchers to cool the molecules internally to temperatures below 10 K – where their quantum effects can dominate over externally induced noise. With this setup, Zhelyazkova, Merkt, Martins, and their colleagues could finally put the latest theoretical models to the test.

According to the latest low-temperature models, the rate of the CO–helium ion reaction should be determined by the quantized rotational states of the CO molecule – whose energies lie within the microwave range. In this case, the team used microwave pulses to put the CO into different rotational states, allowing them to directly probe their influence on the overall reaction rate.
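
For a rough sense of the energy scales involved: treating CO as a rigid rotor with a rotational constant of roughly 57.6 GHz (an approximate literature value), the J → J + 1 transition frequencies are 2B(J + 1), so the lowest excitation sits near 115 GHz – in the microwave/millimetre-wave range addressed by the pulses described above.

```python
B_CO = 57.6e9      # rotational constant of CO in Hz (approximate)

def rot_term_Hz(J):
    """Rotational term value E_J / h = B*J*(J+1) for a rigid rotor."""
    return B_CO * J * (J + 1)

# Frequency of the J -> J+1 rotational transition: 2B(J+1)
for J in range(3):
    f = rot_term_Hz(J + 1) - rot_term_Hz(J)
    print(f"J={J} -> {J+1}: {f/1e9:.1f} GHz")
# the J=0 -> 1 excitation lands near 115 GHz
```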

Three important findings

Altogether, their experiment yielded three important findings. First, it confirmed that the reaction rate can vary depending on the rotational state of the CO molecule. Second, it showed that this reactivity can be modified by using a short microwave pulse to excite the CO molecule from its ground state to its first excited state – with the first excited state being less reactive than the ground state.

The third and most counterintuitive finding is that microwaves can slow down the reaction rate, via mechanisms unrelated to the heat they impart on the molecules absorbing them. “In most applications of microwaves in chemical synthesis, the microwaves are used as a way to thermally heat the molecules up, which always makes them more reactive,” Zhelyazkova says.

Building on the success of their experimental approach, the team now hopes to investigate these nonthermal mechanisms in more detail – with the aim to shed new light on how microwaves can influence chemical reactions via effects other than heating. In turn, their results could ultimately pave the way for advanced new techniques for fine-tuning the rate of reactions between ions and neutral molecules.

The research is described in Physical Review Letters.

The post Microwaves slow down chemical reactions at low temperatures appeared first on Physics World.

Schrödinger cat states like it hot

15 avril 2025 à 16:00

Superpositions of quantum states known as Schrödinger cat states can be created in “hot” environments with temperatures up to 1.8 K, say researchers in Austria and Spain. By reducing the restrictions involved in obtaining ultracold temperatures, the work could benefit fields such as quantum computing and quantum sensing.

In 1935, Erwin Schrödinger used a thought experiment now known as “Schrödinger’s cat” to emphasize what he saw as a problem with some interpretations of quantum theory. His gedankenexperiment involved placing a quantum system (a cat in a box with a radioactive sample and a flask of poison) in a state that is a superposition of two states (“alive cat” if the sample has not decayed and “dead cat” if it has). These superposition states are now known as Schrödinger cat states (or simply cat states) and are useful in many fields, including quantum computing, quantum networks and quantum sensing.

Creating a cat state, however, requires quantum particles to be in their ground state. This, in turn, means cooling them to extremely low temperatures. Even marginally higher temperatures were thought to destroy the fragile nature of these states, rendering them useless for applications. But the need for ultracold temperatures comes with its own challenges, as it restricts the range of possible applications and hinders the development of large-scale systems such as powerful quantum computers.

Cat on a hot tin…microwave cavity?

The new work, which was carried out by researchers at the University of Innsbruck and IQOQI in Austria together with colleagues at the ICFO in Spain, challenges the idea that ultralow temperatures are a must for generating cat states. Instead of starting from the ground state, they used thermally excited states to show that quantum superpositions can exist at temperatures of up to 1.8 K – an environment that might as well be an oven in the quantum world.

Team leader Gerhard Kirchmair, a physicist at the University of Innsbruck and the IQOQI, says the study evolved from one of those “happy accidents” that characterize work in a collaborative environment. During a coffee break with a colleague, he realized he was well-equipped to prove the hypothesis of another colleague, Oriol Romero-Isart, who had shown theoretically that cat states can be generated out of a thermal state.

The experiment involved creating cat states inside a microwave cavity that acts as a quantum harmonic oscillator. This cavity is coupled to a superconducting transmon qubit that behaves as a two-level system where the superposition is generated. While the overall setup is cooled to 30 mK, the cavity mode itself is heated by equilibrating it with amplified Johnson-Nyquist noise from a resistor, making it 60 times hotter than its environment.
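
To see why 1.8 K counts as “hot” for a microwave mode, compare the mean thermal photon number n̄ = 1/(exp(hf/kT) − 1) at the two temperatures. The article does not quote the cavity frequency, so the sketch below assumes a representative few-gigahertz value.

```python
import math

h, kB = 6.626e-34, 1.381e-23   # Planck and Boltzmann constants (SI)

def n_thermal(f_Hz, T_K):
    """Mean Bose-Einstein occupation of a mode of frequency f at temperature T."""
    return 1.0 / math.expm1(h * f_Hz / (kB * T_K))

f = 5e9                        # assumed cavity frequency (not given in the article)
print(n_thermal(f, 0.030))     # ~3e-4 photons: essentially the ground state
print(n_thermal(f, 1.8))       # ~7 photons: a far-from-pure thermal state
```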

To establish the existence of quantum correlations at this higher temperature, the team directly measured the Wigner functions of the states. Doing so revealed the characteristic interference patterns of Schrödinger cat states.

Benefits for quantum sensing and error correction

According to Kirchmair, being able to realize cat states without ground-state cooling could bring benefits for quantum sensing. The mechanical oscillator systems used to sense acceleration or force, for example, are normally cooled to the ground state to achieve the necessary high sensitivity, but such extreme cooling may not be necessary. He adds that quantum error correction schemes could also benefit, as they rely on being able to create cat states reliably; the team’s work shows that a residual thermal population places fewer limitations on this than previously thought.

“For next steps we will use the system for what it was originally designed, i.e. to mediate interactions between multiple qubits for novel quantum gates,” he tells Physics World.

Yiwen Chu, a quantum physicist from ETH Zürich in Switzerland who was not involved in this research, praises the “creativeness of the idea”. She describes the results as interesting and surprising because they seem to counter the common view that lack of purity in a quantum state degrades quantum features. She also agrees that the work could be important for quantum sensing, adding that many systems – including some more suited for sensing – are difficult to prepare in the ground state.

However, Chu notes that, for reasons stemming from the system’s parameters and the protocols the team used to generate the cat states, it should be possible to cool this particular system very efficiently to the ground state. This, she says, somewhat diminishes the argument that the method will be useful for systems where this isn’t the case. “However, these parameters and the protocols they showed might not be the only way to prepare such states, so on a fundamental level it is still very interesting,” she concludes.

The post Schrödinger cat states like it hot appeared first on Physics World.

Intercalation-based desalination and carbon capture for water and climate sustainability

14 avril 2025 à 10:28


With increased water scarcity and global warming looming, electrochemical technology offers low-energy mitigation pathways via desalination and carbon capture.  This webinar will demonstrate how the less than 5 molar solid-state concentration swings afforded by cation intercalation materials – used originally in rocking-chair batteries – can effect desalination using Faradaic deionization (FDI).  We show how the salt depletion/accumulation effect – that plagues Li-ion battery capacity under fast charging conditions – is exploited in a symmetric Na-ion battery to achieve seawater desalination, exceeding by an order of magnitude the limits of capacitive deionization with electric double layers.  While initial modeling that introduced such an architecture blazed the trail for the development of new and old intercalation materials in FDI, experimental demonstration of seawater-level desalination using Prussian blue analogs required cell engineering to overcome the performance-degrading processes that are unique to the cycling of intercalation electrodes in the presence of flow, leading to innovative embedded, micro-interdigitated flow fields with broader application toward fuel cells, flow batteries, and other flow-based electrochemical devices.  Similar symmetric FDI architectures using proton intercalation materials are also shown to facilitate direct-air capture of carbon dioxide with unprecedentedly low energy input by reversibly shifting pH within aqueous electrolyte.

Kyle Smith

Kyle C Smith joined the faculty of Mechanical Science and Engineering at the University of Illinois Urbana-Champaign (UIUC) in 2014 after completing his PhD in mechanical engineering (Purdue, 2012) and his post-doc in materials science and engineering (MIT, 2014).  His group uses understanding of flow, transport, and thermodynamics in electrochemical devices and materials to innovate toward separations, energy storage, and conversion.  For his research he was awarded the 2018 ISE-Elsevier Prize in Applied Electrochemistry of the International Society of Electrochemistry and the 2024 Dean’s Award for Early Innovation as an associate professor by UIUC’s Grainger College.  Among his 59 journal papers and 14 patents and patents pending, his work that introduced Na-ion battery-based desalination using porous electrode theory [Smith and Dmello, J. Electrochem. Soc., 163, p. A530 (2016)] was among the top ten most downloaded in the Journal of the Electrochemical Society for five months in 2016.  His group was also the first to experimentally demonstrate seawater-level salt removal using this approach [Do et al., Energy Environ. Sci., 16, p. 3025 (2023); Rahman et al., Electrochimica Acta, 514, p. 145632 (2025)], introducing flow fields embedded in electrodes to do so.

The post Intercalation-based desalination and carbon capture for water and climate sustainability appeared first on Physics World.

Photon collisions in dying stars could create neutrons for heavy elements

12 avril 2025 à 15:48

A model that could help explain how heavy elements are forged within collapsing stars has been unveiled by Matthew Mumpower at Los Alamos National Laboratory and colleagues in the US. The team suggests that energetic photons generated by newly forming black holes or neutron stars transmute protons within ejected stellar material into neutrons, thereby providing ideal conditions for heavy elements to form.

Astrophysicists believe that elements heavier than iron are created in violent processes such as the explosions of massive stars and the mergers of neutron stars. One way that this is thought to occur is the rapid neutron-capture process (r-process), whereby lighter nuclei created in stars capture neutrons in rapid succession. However, exactly where the r-process occurs is not well understood.

As Mumpower explains, the r-process must be occurring in environments where free neutrons are available in abundance. “But there’s a catch,” he says. “Free neutrons are unstable and decay in about 15 min. Only a few places in the universe have the right conditions to create and use these neutrons quickly enough. Identifying those places has been one of the toughest open questions in physics.”

Intense flashes of light

In their study, Mumpower’s team – which included researchers from the Los Alamos and Argonne national laboratories – looked at how lots of neutrons could be created within massive stars that are collapsing to become neutron stars or black holes. Their idea focuses on the intense flashes of light that are known to be emitted from the cores of these objects.

This radiation is emitted at wavelengths across the electromagnetic spectrum – including highly energetic gamma rays. Furthermore, the light is emitted along a pair of narrow jets, which blast outward above each pole of the star’s collapsing core. As they form, these jets plough through the envelope of stellar material surrounding the core, which had been previously ejected by the star. This is believed to create a “cocoon” of hot, dense material surrounding each jet.

In this environment, Mumpower’s team suggest that energetic photons in a jet collide with protons to create a neutron and a pion. Since these neutrons have no electrical charge, many of them could diffuse into the cocoon, providing ideal conditions for the r-process to occur.
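
The collision implied here is photopion production, γ + p → n + π⁺, which in the proton’s rest frame requires a photon energy above the kinematic threshold E_th = [(m_n + m_π)² − m_p²]/(2m_p). A quick evaluation with standard (approximate) particle masses:

```python
# Threshold photon energy for gamma + p -> n + pi+ in the proton rest frame (MeV)
m_p, m_n, m_pi = 938.272, 939.565, 139.570   # PDG masses in MeV/c^2 (approximate)

E_threshold = ((m_n + m_pi)**2 - m_p**2) / (2.0 * m_p)
print(f"{E_threshold:.0f} MeV")   # roughly 150 MeV: only very energetic photons qualify
```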

To test their hypothesis, the researchers carried out detailed computer simulations to predict the number of free neutrons entering the cocoon due to this process.

Gold and platinum

“We found that this light-based process can create a large number of neutrons,” Mumpower says. “There may be enough neutrons produced this way to build heavy elements, from gold and platinum all the way up to the heaviest elements in the periodic table – and maybe even beyond.”

If their model is correct, it suggests that the origin of some heavy elements involves processes associated with the high-energy particle physics studied at facilities like the Large Hadron Collider.

“This process connects high-energy physics, which usually focuses on particles like quarks, with low-energy astrophysics, which studies stars and galaxies,” Mumpower says. “These are two areas that rarely intersect in the context of forming heavy elements.”

Kilonova explosions

The team’s findings also shed new light on some other astrophysical phenomena. “Our study offers a new explanation for why certain cosmic events, like long gamma-ray bursts, are often followed by kilonova explosions – the glow from the radioactive decay of freshly made heavy elements,” Mumpower continues. “It also helps explain why the pattern of heavy elements in old stars across the galaxy looks surprisingly similar.”

The findings could also improve our understanding of the chemical makeup of deep-sea deposits on Earth. The presence of both iron and plutonium in this material suggests that both elements may have been created in the same type of event, before coalescing into the newly forming Earth.

For now, the team will aim to strengthen their model through further simulations – which could better reproduce the complex, dynamic processes taking place as massive stars collapse.

The research is described in The Astrophysical Journal.

The post Photon collisions in dying stars could create neutrons for heavy elements appeared first on Physics World.

Researchers claim Trump administration is conducting ‘a wholesale assault on science’

11 avril 2025 à 14:15

The US administration is carrying out “a wholesale assault on US science” that could hold back research in the country for several decades. That is the warning from more than 1900 members of the US National Academies of Sciences, Engineering, and Medicine, who have signed an open letter condemning the policies introduced by Donald Trump since he took up office on 20 January.

US universities are in the firing line of the Trump administration, which is seeking to revoke the visas of foreign students, threatening to withdraw grants and demanding control over academic syllabuses. “The voice of science must not be silenced,” the letter writers say. “We all benefit from science, and we all stand to lose if the nation’s research enterprise is destroyed.”

Particularly hard hit are the country’s eight Ivy League universities, which have been accused of downplaying antisemitism exhibited in campus demonstrations in support of Gaza. Columbia University in New York, for example, has been trying to regain $400m in federal funds that the Trump administration threatened to cancel.

Columbia initially reached an agreement with the government on issues such as banning facemasks on its campus and taking control of its department responsible for courses on the Middle East. But on 8 April, according to reports, the National Institutes of Health, under orders from the Department of Health and Human Services, blocked all of its grants to Columbia.

Harvard University, meanwhile, has announced plans to privately borrow $750m after the Trump administration announced that it would review $9bn in the university’s government funding. Brown University in Rhode Island faces a loss of $510m, while the government has suspended several dozen research grants for Princeton University.

The administration also continues to oppose the use of diversity, equity and inclusion (DEI) programmes in universities. The University of Pennsylvania, from which Donald Trump graduated, faces the suspension of $175m in grants for offences against the government’s DEI policy.

Brain drain

Researchers in medical and social sciences are bearing the brunt of government cuts, with physics departments seeing relatively little impact on staffing and recruitment so far. “Of course we are concerned,” Peter Littlewood, chair of the University of Chicago’s physics department, told Physics World. “Nonetheless, we have made a deliberate decision not to halt faculty recruiting and stand by all our PhD offers.”

David Hsieh, executive officer for physics at California Institute of Technology, told Physics World that his department has also not taken any action so far. “I am sure that each institution is preparing in ways that make the most sense for them,” he says. “But I am not aware of any collective response at the moment.”

Yet universities are already bracing themselves for a potential brain drain. “The faculty and postdoc market is international, and the current sentiment makes the US less attractive for reasons beyond just finance,” warns Littlewood at Chicago.

That sentiment is echoed by Maura Healey, governor of Massachusetts, who claims that Europe, the Middle East and China are already recruiting the state’s best and brightest. “[They’re saying] we’ll give you a lab; we’ll give you staff. We’re giving away assets to other countries instead of training them, growing them [and] supporting them here.”

Science agencies remain under pressure too. The Department of Government Efficiency, run by Elon Musk, has already  ended $420m in “unneeded” NASA contracts. The administration aims to cut the year’s National Science Foundation (NSF) construction budget, with data indicating that the agency has roughly halved its number of new grants since Trump became president.

Yet a threatened reduction in the percentage of ancillary (indirect) costs paid on scientific grants appears to be on hold, at least for now. “NSF awardees may continue to budget and charge indirect costs using either their federally negotiated indirect cost rate agreement or the “de minimis” rate of 15%, as authorized by the uniform guidance and other Federal regulations,” says an NSF spokesperson.

The post Researchers claim Trump administration is conducting ‘a wholesale assault on science’ appeared first on Physics World.

Quantum computer generates strings of certifiably random numbers

10 avril 2025 à 10:00

A quantum computer has been used for the first time to generate strings of certifiably random numbers. The protocol for doing this, which was developed by a team that included researchers at JPMorganChase and the quantum computing firm Quantinuum, could have applications in areas ranging from lotteries to cryptography – leading Quantinuum to claim it as quantum computing’s first commercial application, though other firms have made similar assertions. Separately, Quantinuum and its academic collaborators used the same trapped-ion quantum computer to explore problems in quantum magnetism and knot theory.

Genuinely random numbers are important in several fields, but classical computers cannot create them. The best they can do is to generate apparently random or “pseudorandom” numbers. Randomness is inherent in the laws of quantum mechanics, however, so quantum computers are naturally suited to random number generation. In fact, random circuit sampling – in which all qubits are initialized in a given state and allowed to evolve via quantum gates before having their states measured at the output – is often used to benchmark their power.

Of course, not everyone who wants to produce random numbers will have their own quantum computer. However, in 2023 Scott Aaronson of the University of Texas at Austin, US and his then-PhD student Shi-Han Hung suggested that a client could send a series of pseudorandomly chosen “challenge” circuits to a central server. There, a quantum computer could perform random circuit sampling before sending the readouts to the client.

If these readouts are truly the product of random circuit sampling measurements performed on a quantum computer, they will be truly random numbers. “Certifying the ‘quantumness’ of the output guarantees its randomness,” says Marco Pistoia, JPMorganChase’s head of global technology applied research.

Importantly, this certification is something a classical computer can do. The way this works is that the client samples a subset of the bit strings in the readouts and performs a test called cross-entropy benchmarking. This test measures the probability that the numbers could have come from a non-quantum source. If the client is satisfied with this measurement, they can trust that the samples were genuinely the result of random circuit sampling. Otherwise, they may conclude that the data could have been generated by “spoofing” – that is, using a classical algorithm to mimic a quantum computer. The degree of confidence in this test, and the number of bits they are willing to settle for to achieve this confidence, is up to the client.
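
A generic sketch of the statistic behind such certification is the linear cross-entropy benchmark used in random-circuit-sampling experiments: average the ideal probabilities of the returned bitstrings, scale by 2^n and subtract 1, giving a value near zero for uniform classical guessing and a clearly positive value for faithful quantum sampling. The exact test and thresholds in the JPMorganChase/Quantinuum protocol may differ; the toy below uses a made-up 3-qubit distribution in place of the circuit’s true output probabilities.

```python
import numpy as np

def linear_xeb(samples, ideal_probs, n_qubits):
    """Linear cross-entropy benchmark: 2^n * mean ideal probability of the
    returned bitstrings, minus 1. ~positive for honest quantum sampling,
    ~0 for uniform classical guessing."""
    p = np.array([ideal_probs[s] for s in samples])
    return 2**n_qubits * p.mean() - 1

rng = np.random.default_rng(0)
n = 3
ideal = rng.dirichlet(np.ones(2**n))               # stand-in for the circuit's ideal probabilities

quantum_like = rng.choice(2**n, size=5000, p=ideal)    # sampling from the ideal distribution
spoofed      = rng.integers(0, 2**n, size=5000)        # uniform guessing

print(linear_xeb(quantum_like, ideal, n))   # close to 2^n * sum(p^2) - 1 > 0
print(linear_xeb(spoofed, ideal, n))        # close to 0
```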

High-fidelity quantum computing

In the new work, Pistoia, Aaronson, Hung and colleagues sent challenge circuits to the 56-qubit Quantinuum H2-1 quantum computer over the Internet. The attraction of the Quantinuum H2-1, Pistoia explains, is its high fidelity: “Somebody could say ‘Well, when it comes to randomness, why would you care about accuracy – it’s random anyway’,” he says. “But we want to measure whether the number that we get from Quantinuum really came from a quantum computer, and a low-fidelity quantum computer makes it more difficult to ascertain that with confidence… That’s why we needed to wait all these years, because a low-fidelity quantum computer wouldn’t have given us the certification part.”

The team then certified the randomness of the bits they got back by performing cross-entropy benchmarking using four of the world’s most powerful supercomputers, including Frontier at the US Department of Energy’s Oak Ridge National Laboratory. The results showed that it would have been impossible for a dishonest adversary with similar classical computing power to spoof a quantum computer – provided the client set a short enough time limit.

One drawback is that at present, the computational cost of verifying that random numbers have not been spoofed is similar to the computational cost of spoofing them. “New work is needed to develop approaches for which the certification process can run on a regular computer,” Pistoia says. “I think this will remain an active area of research in the future.”

Studying other problems

Quantinuum has also released the results of two scientific studies performed using the Quantinuum H2-1. The first examines a well-known problem in knot theory involving the Jones polynomial. The second explores quantum magnetism, which was also the subject of quantum computing work by groups at Harvard University, Google Quantum AI and, most recently, D-Wave Systems. Michael Foss-Feig, a quantum computing theorist at Quantinuum who led the quantum magnetism study, explains that the groups focused on different problems, with Quantinuum and its American and European academic collaborators studying thermalization rather than quantum phase transitions.

A more important difference, Foss-Feig argues, is that whereas the other groups used a partly analogue approach to simulating their quantum magnetic system, with all quantum gates activated simultaneously, Quantinuum’s approach divided time into a series of discrete steps, with operations following in a sequence similar to that of a classical computer. This digitization meant the researchers could perform a discrete gate operation as required, between any of the ionic qubits in their lattice. “This digital architecture is an extremely convenient way to compile a very wide range of physical problems,” Foss-Feig says. “You might think, for example, of simulating not just spins, for example, but also fermions or bosons.”

While the researchers say it would be just possible to reproduce these simulations using classical computers, they plan to study larger models soon. A 96-qubit version of their device, called Helios, is slated for launch later in 2025.

“We’ve gone through a shift”

Quantum information scientist Barry Sanders of the University of Calgary, Canada is impressed by all three works. “The real game changer here is Quantinuum’s really nice 56-qubit quantum computer,” he says. “Instead of just being bigger in its number of qubits, it’s hit multiple important targets.”

In Sanders’ view, the computer’s fully digital architecture is important for scalability, although he notes that many in the field would dispute that. The most important development, he adds, is that the research frames the value of a quantum computer in terms of its accomplishments.

“We’ve gone through a shift: when you buy a normal computer, you want to know what that computer can do for you, not how good is the transistor,” he says. “In the old days, we used to say ‘I made a quantum computer and my components are better than your components – my two-qubit gate is better’… Now we say, ‘I made a quantum computer and I’m going to brag about the problem I solved’.”

The random number generation paper is published in Nature. The others are available on the arXiv pre-print server.

The post Quantum computer generates strings of certifiably random numbers appeared first on Physics World.

Isolated pockets of audible sound are created using metasurfaces

7 avril 2025 à 13:25

A ground-breaking method to create “audible enclaves” – localized zones where sound is perceptible while remaining completely unheard outside – has been unveiled by researchers at Pennsylvania State University and Lawrence Livermore National Laboratory. Their innovation could transform personal audio experiences in public spaces and improve secure communications.

“One of the biggest challenges in sound engineering is delivering audio to specific listeners without disturbing others,” explains Penn State’s Jiaxin Zhong. “Traditional speakers broadcast sound in all directions, and even directional sound technologies still generate audible sound along their entire path. We aimed to develop a method that allows sound to be generated only at a specific location, without any leakage along the way. This would enable applications such as private speech zones, immersive audio experiences, and spatially controlled sound environments.”

To achieve precise audio targeting, the researchers used a phenomenon known as difference-frequency wave generation. This process involves emitting two ultrasonic beams – sound waves with frequencies beyond the range of human hearing – that intersect at a chosen point. At their intersection, these beams interact to produce a lower-frequency sound wave within the audible range. In their experiments, the team used ultrasonic waves at frequencies of 40 kHz and 39.5 kHz. When these waves converged, they generated an audible sound at 500 Hz, which falls within the typical human hearing range of approximately 20 Hz–20 kHz.
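
The difference-frequency effect can be reproduced in a toy calculation: pass the sum of the two ultrasonic tones through a quadratic nonlinearity – the simplest stand-in for the nonlinear acoustic mixing – and the output spectrum picks up a component at 40 kHz − 39.5 kHz = 500 Hz. This is only a schematic illustration, not a model of the parametric acoustics in the actual experiment.

```python
import numpy as np

fs = 400_000                          # sample rate in Hz
t = np.arange(0, 0.2, 1 / fs)         # 0.2 s of signal
f1, f2 = 40_000.0, 39_500.0           # the two ultrasonic carriers

linear = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
nonlinear = linear**2                 # quadratic nonlinearity mixes the tones

spectrum = np.abs(np.fft.rfft(nonlinear))
freqs = np.fft.rfftfreq(len(t), 1 / fs)

# strongest audible-band component (20 Hz - 20 kHz) sits at f1 - f2 = 500 Hz
audible = (freqs > 20) & (freqs < 20_000)
print(freqs[audible][np.argmax(spectrum[audible])])
```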

To prevent obstacles like human bodies from blocking the sound beams, the researchers used self-bending beams that follow curved paths instead of travelling in straight lines. They did this by passing ultrasound waves through specially designed metasurfaces, which redirected the waves along controlled trajectories, allowing them to meet at a specific point where the sound is generated.

Manipulative metasurfaces

“Metasurfaces are engineered materials that manipulate wave behaviour in ways that natural materials cannot,” said Zhong. “In our study, we use metasurfaces to precisely control the phase of ultrasonic waves, shaping them into self-bending beams. This is similar to how an optical lens bends light.”

The researchers began with computer simulations to model how ultrasonic waves would travel around obstacles, such as a human head, to determine the optimal design for the sound sources and metasurfaces. These simulations confirmed the feasibility of creating an audible enclave at the intersection of the curved beams. Subsequently, the team constructed a physical setup in a room-sized environment to validate their findings experimentally. The results closely matched their simulations, demonstrating the practical viability of their approach.​

“Our method allows sound to be produced only in an intended area while remaining completely silent everywhere else,” says Zhong. “By using acoustic metasurfaces, we direct ultrasound along curved paths, making it possible to ‘place’ sound behind objects without a direct line of sight. A person standing inside the enclave can hear the sound, but someone just a few centimetres away will hear almost nothing.”

Initially, the team produced a steady 500 Hz sound within the enclave. By varying the frequencies of the two ultrasonic sources, they generated a broader range of audible sounds, covering frequencies from 125 Hz to 4 kHz. This expanded range includes much of the human auditory spectrum, increasing the potential applications of the technique.

The ability to generate sound in a confined space without any audible leakage opens up many possible applications. Museums and exhibitions could provide visitors with personalized audio experiences without the need for headphones, allowing individuals to hear different information depending on their location. In cars, drivers could receive navigation instructions without disturbing passengers, who could simultaneously listen to music or other content. Virtual and augmented reality applications could benefit from more immersive soundscapes that do not require bulky headsets.

The technology could also enhance secure communications, creating localized zones where sensitive conversations remain private even in shared spaces. In noisy environments, future adaptations of this method might allow for targeted noise cancellation, reducing unwanted sound in specific areas while preserving important auditory information elsewhere.

Future challenges

While their results are promising, the researchers acknowledge several challenges that must be addressed before the technology can be widely implemented. One concern is the intensity of the ultrasonic beams required to generate audible sound at a practical volume. Currently, achieving sufficient sound levels necessitates ultrasonic intensities that may have unknown effects on human health.​

Another challenge is ensuring high-quality sound reproduction. The relationship between the ultrasonic beam parameters and the resulting audible sound is complex, making it difficult to produce clear audio across a wide range of frequencies and volumes.

“We are currently working on improving sound quality and efficiency,” Zhong said. “We are exploring deep learning and advanced nonlinear signal processing methods to optimize sound clarity. Another area of development is power efficiency — ensuring that the ultrasound-to-audio conversion is both effective and safe for practical use. In the long run, we hope to collaborate with industry partners to bring this technology to consumer electronics, automotive audio, and immersive media applications.”

The research is reported in Proceedings of the National Academy of Sciences.

The post Isolated pockets of audible sound are created using metasurfaces appeared first on Physics World.

Solar cell greenhouse accelerates plant growth

2 avril 2025 à 10:30

Agrivoltaics is an interdisciplinary research area that lies at the intersection of photovoltaics (PVs) and agriculture. Traditional PV systems used in agricultural settings are made from silicon materials and are opaque. The opaque nature of these solar cells can block sunlight reaching plants and hinder their growth. As such, there’s a need for advanced semi-transparent solar cells that can provide sufficient power but still enable plants to grow instead of casting a shadow over them.

In a recent study headed up at the Institute for Microelectronics and Microsystems (IMM) in Italy, Alessandra Alberti and colleagues investigated the potential of semi-transparent perovskite solar cells as coatings on the roof of a greenhouse housing radicchio seedlings.

Solar cell shading an issue for plant growth

Opaque solar cells are known to induce shade avoidance syndrome in plants. This can cause morphological adaptations, including changes in chlorophyll content and an increased leaf area, as well as a change in the metabolite profile of the plant. Lower UV exposure can also reduce polyphenol content – antioxidant and anti-inflammatory molecules that humans get from plants.

Addressing these issues requires the development of semi-transparent PV panels with high enough efficiencies to be commercially feasible. Some common panels that can be made thin enough to be semi-transparent include organic and dye-sensitized solar cells (DSSCs). While these have been used to provide power while growing tomatoes and lettuces, they typically only have a power conversion efficiency (PCE) of a few percent – a more efficient energy harvester is still required.

A semi-transparent perovskite solar cell greenhouse

Perovskite PVs are seen as the future of the solar cell industry and show a lot of promise in terms of PCE, even if they are not yet up to the level of silicon. Crucially, perovskite PVs can also be made semi-transparent.

Laboratory-scale greenhouse
Experimental set-up The laboratory-scale greenhouse. (Courtesy: CNR-IMM)

In this latest study, the researchers designed a laboratory-scale greenhouse using a semi-transparent europium (Eu)-enriched CsPbI3 perovskite-coated rooftop and investigated how radicchio seeds grew in the greenhouse for 15 days. They chose this Eu-enriched perovskite composition because CsPbI3 has superior thermal stability compared with other perovskites, making it ideal for long exposures to the Sun’s rays. The addition of Eu into the CsPbI3 structure improved the perovskite stability by minimizing the number of intrinsic defects and increasing the surface-to-volume ratio of perovskite grains.

Alongside this stability, the perovskite has no volatile components that could effuse under high surface temperatures. It also typically possesses a high PCE – the record for this composition is 21.15%, well above what has been achieved with organic PVs and DSSCs and much closer to commercial feasibility. This perovskite therefore offers a good trade-off between the PCE that can be achieved and the amount of light transmitted, allowing the seedlings to grow.
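
For a rough sense of scale, the sketch below estimates what such a coating could deliver per square metre of roof; the 21.15% record PCE comes from the article, while the insolation and derating figures are assumptions for illustration only.

```python
# Back-of-envelope output of a semi-transparent roof coating per square metre.
# The 21.15% record PCE is quoted in the article; the insolation and derating
# figures below are assumptions for illustration only.
pce = 0.2115              # record PCE for this perovskite composition (fraction)
annual_insolation = 1700  # assumed kWh per m^2 per year for a sunny site
derating = 0.5            # assumed losses from angle, temperature, transparency

energy_per_m2 = pce * annual_insolation * derating
print(f"~{energy_per_m2:.0f} kWh per m^2 of roof per year")
```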

Low light conditions promote seedling growth

Even though the seedlings were exposed to lower light conditions than natural light, the team found that they grew more quickly, and with bigger leaves, than those under glass panels. This is attributed to the perovskite acting as a filter that transmits mainly red light, which is known to improve the photosynthetic efficiency and light-absorption capabilities of plants, as well as to increase the levels of sucrose and hexose within the plant.

The researchers also found that seedlings grown under these conditions had different gene expression patterns compared with those grown under glass. These expression patterns were associated with environmental stress responses, growth regulation, metabolism and light perception, suggesting that the seedlings naturally adapted to different light conditions – although further research is needed to see whether these adaptations will improve the crop yield.

Overall, the use of perovskite PVs strikes a good balance: it provides enough power to cover the annual energy needs for irrigation, lighting and air conditioning, while still allowing the seedlings to grow – and to grow noticeably faster. The team suggest that the perovskite solar cells could help with indoor food production operations in the agricultural sector as a potentially affordable solution, although more work now needs to be done on much larger scales to test the technology’s commercial feasibility.

The research is published in Nature Communications.

The post Solar cell greenhouse accelerates plant growth appeared first on Physics World.

DESI delivers a cosmological bombshell

1 avril 2025 à 17:53

The first results from the Dark Energy Spectroscopic Instrument (DESI) are a cosmological bombshell, suggesting that the strength of dark energy has not remained constant throughout history. Instead, it appears to be weakening at the moment, and in the past it seems to have existed in an extreme form known as “phantom” dark energy.

The new findings have the potential to change everything we thought we knew about dark energy, a hypothetical entity that is used to explain the accelerating expansion of the universe.

“The subject needed a bit of a shake-up, and we’re now right on the boundary of seeing a whole new paradigm,” says Ofer Lahav, a cosmologist from University College London and a member of the DESI team.

DESI is mounted on the Nicholas U Mayall four-metre telescope at Kitt Peak National Observatory in Arizona, and has the primary goal of shedding light on the “dark universe”.  The term dark universe reflects our ignorance of the nature of about 95% of the mass–energy of the cosmos.

Intrinsic energy density

Today’s favoured Standard Model of cosmology is the lambda–cold dark matter (CDM) model. Lambda refers to a cosmological constant, which was first introduced by Albert Einstein in 1917 to keep the universe in a steady state by counteracting the effect of gravity. We now know that the universe is expanding at an accelerating rate, so lambda is used to quantify this acceleration. It can be interpreted as an intrinsic energy density that is driving expansion. Now, DESI’s findings imply that this energy density is erratic and even more mysterious than previously thought.

DESI is creating a humungous 3D map of the universe. Its first full data release comprises 270 terabytes of data and was made public in March. The data include distance and spectral information about 18.7 million objects, including 12.1 million galaxies and 1.6 million quasars. The spectral details of about four million nearby stars are also included.

This is the largest 3D map of the universe ever made, bigger even than all the previous spectroscopic surveys combined. DESI scientists are already working with even more data that will be part of a second public release.

DESI can observe patterns in the cosmos called baryonic acoustic oscillations (BAOs). These were created after the Big Bang, when the universe was filled with a hot plasma of atomic nuclei and electrons. Density waves associated with quantum fluctuations in the Big Bang rippled through this plasma, until about 379,000 years after the Big Bang. Then, the temperature dropped sufficiently to allow the atomic nuclei to sweep up all the electrons. This froze the plasma density waves into regions of high mass density (where galaxies formed) and low density (intergalactic space). These density fluctuations are the BAOs; and they can be mapped by doing statistical analyses of the separation between pairs of galaxies and quasars.

The BAOs grow as the universe expands, and therefore they provide a “standard ruler” that allows cosmologists to study the expansion of the universe. DESI has observed galaxies and quasars going back 11 billion years in cosmic history.
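
As a minimal illustration of the standard-ruler idea – not DESI’s actual analysis pipeline – the transverse BAO scale relates a measured angular separation θ to the comoving distance of the galaxies via D_M ≈ r_d/θ, where r_d ≈ 147 Mpc is the sound-horizon scale; the angle used below is a hypothetical value.

```python
import numpy as np

# Standard-ruler sketch: infer a comoving distance from a BAO angular scale.
# r_d is the approximate sound horizon at the drag epoch; theta_deg is a
# hypothetical measured angle, not a DESI data point.
r_d = 147.0                     # Mpc
theta_deg = 4.5                 # hypothetical BAO angular scale at z ~ 0.5
theta = np.deg2rad(theta_deg)

D_M = r_d / theta               # comoving (transverse) distance, Mpc
print(f"D_M ~ {D_M:.0f} Mpc")
# Repeating this at many redshifts and comparing with model predictions is
# what lets a survey track the expansion history, and hence dark energy.
```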

DESI data
Density fluctuations DESI observations showing nearby bright galaxies (yellow), luminous red galaxies (orange), emission-line galaxies (blue), and quasars (green). The inset shows the large-scale structure of a small portion of the universe. (Courtesy: Claire Lamman/DESI collaboration)

“What DESI has measured is that the distance [between pairs of galaxies] is smaller than what is predicted,” says team member Willem Elbers of the UK’s University of Durham. “We’re finding that dark energy is weakening, so the acceleration of the expansion of the universe is decreasing.”

As co-chair of DESI’s Cosmological Parameter Estimation Working Group, it is Elbers’ job to test different models of cosmology against the data. The results point to a bizarre form of “phantom” dark energy that boosted the expansion acceleration in the past, but is not present today.

The puzzle is related to dark energy’s equation of state, which describes the ratio of its pressure to its energy density. In a universe with an accelerating expansion, the equation of state will have a value less than about –1/3. A value of –1 characterizes the lambda–CDM model.

However, some alternative cosmological models allow the equation of state to be lower than –1. This means that the universe would expand faster than the cosmological constant would have it do. This points to a “phantom” dark energy that grew in strength as the universe expanded, but then petered out.
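
In symbols (with c = 1), the conditions on the equation-of-state parameter w, together with the two-parameter CPL form commonly used to let w evolve with the cosmic scale factor a, look like the following; this is a generic sketch of the parametrization, not a statement of DESI’s full model space.

```latex
% Dark-energy equation of state and the common CPL parametrization
w \equiv \frac{p}{\rho}, \qquad
\begin{cases}
w < -1/3 & \text{accelerated expansion}\\[2pt]
w = -1   & \text{cosmological constant } (\Lambda\text{CDM})\\[2pt]
w < -1   & \text{``phantom'' dark energy}
\end{cases}
\qquad
w(a) = w_0 + w_a\,(1-a)
```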

“It seems that dark energy was ‘phantom’ in the past, but it’s no longer phantom today,” says Elbers. “And that’s interesting because the simplest theories about what dark energy could be do not allow for that kind of behaviour.”

Dark energy takes over

The universe began expanding because of the energy of the Big Bang. We already know that for the first few billion years of cosmic history this expansion was slowing, because the universe was smaller and the gravity of all the matter it contains was strong enough to put the brakes on the expansion. As the universe expanded and its matter density dropped, gravity’s influence waned and dark energy was able to take over. What DESI is telling us is that at the point that dark energy became more influential than matter, it was in its phantom guise.

“This is really weird,” says Lahav; and it gets weirder. The energy density of dark energy reached a peak at a redshift of 0.4, which equates to about 4.5 billion years ago. At that point, dark energy ceased its phantom behaviour and since then the strength of dark energy has been decreasing. The expansion of the universe is still accelerating, but not as rapidly. “Creating a universe that does that, which gets to a peak density and then declines, well, someone’s going to have to work out that model,” says Lahav.

Scalar quantum field

Unlike the unchanging dark-energy density described by the cosmological constant, an alternative concept called quintessence describes dark energy as a scalar quantum field that can have different values at different times and locations.

However, Elbers explains that a single field such as quintessence is incompatible with phantom dark energy. Instead, he says that “there might be multiple fields interacting, which on their own are not phantom but together produce this phantom equation of state,” adding that “the data seem to suggest that it is something more complicated.”

Before cosmology is overturned, however, more data are needed. On its own, the DESI data’s departure from the Standard Model of cosmology has a statistical significance of 1.7σ. This is well below 5σ, which is considered a discovery in cosmology. However, when combined with independent observations of the cosmic microwave background and type Ia supernovae, the significance jumps to 4.2σ.

“Big rip” avoided

Confirmation of a phantom era and a current weakening would mean that dark energy is far more complex than previously thought – deepening the mystery surrounding the expansion of the universe. Indeed, had dark energy continued on its phantom course, it would have caused a “big rip” in which cosmic expansion is so extreme that space itself is torn apart.

“Even if dark energy is weakening, the universe will probably keep expanding, but not at an accelerated rate,” says Elbers. “Or it could settle down in a quiescent state, or if it continues to weaken in the future we could get a collapse,” into a big crunch. With a form of dark energy that seems to do what it wants as its equation of state changes with time, it’s impossible to say what it will do in the future until cosmologists have more data.

Lahav, however, will wait until 5σ before changing his views on dark energy. “Some of my colleagues have already sold their shares in lambda,” he says. “But I’m not selling them just yet. I’m too cautious.”

The observations are reported in a series of papers on the arXiv server. Links to the papers can be found here.

The post DESI delivers a cosmological bombshell appeared first on Physics World.

Apple picked as logo for celebration of classical physics in 2027

1 avril 2025 à 08:00
Newton apple tree
Core physics This apple tree at Woolsthorpe Manor is believed to have been the inspiration for Isaac Newton. (Courtesy: Bs0u10e01/CC BY-SA 4.0)

Physicists in the UK have drawn up plans for an International Year of Classical Physics (IYC) in 2027 – exactly three centuries after the death of Isaac Newton. Following successful international years devoted to astronomy (2009), light (2015) and quantum science (2025), they want more recognition for a branch of physics that underpins much of everyday life.

A bright green Flower of Kent apple has now been picked as the official IYC logo in tribute to Newton, who is seen as the “father of classical physics”. Newton, who died in 1727, famously developed our understanding of gravity – one of the fundamental forces of nature – after watching an apple fall from a tree of that variety in his home town of Woolsthorpe, Lincolnshire, in 1666.

2027 International Year of Classical Physics logo

“Gravity is central to classical physics and contributes an estimated $270bn to the global economy,” says Crispin McIntosh-Smith, chief classical physicist at the University of Lincoln. “Whether it’s rockets escaping Earth’s pull or skiing down a mountain slope, gravity is loads more important than quantum physics.”

McIntosh-Smith, who also works in cosmology having developed the Cosmic Crisp theory of the universe during his PhD, will now be leading attempts to get endorsement for IYC from the United Nations. He is set to take a 10-strong delegation from Bramley, Surrey, to Paris later this month.

An official gala launch ceremony is being pencilled in for the Travelodge in Grantham, which is the closest hotel to Newton’s birthplace. A parallel scientific workshop will take place in the grounds of Woolsthorpe Manor, with a plenary lecture from TV physicist Brian Cox. Evening entertainment will feature a jazz band.

Numerous outreach events are planned for the year, including the world’s largest demonstration of a wooden block on a ramp balanced by a crate on a pulley. It will involve schoolchildren pouring Golden Delicious apples into the crate to illustrate Newton’s laws of motion. Physicists will also be attempting to break the record for the tallest tower of stacked Braeburn apples.

But there is envy from those behind the 2025 International Year of Quantum Science and Technology. “Of course, classical physics is important but we fear this year will peel attention away from the game-changing impact of quantum physics,” says Anne Oyd from the start-up firm Qrunch, who insists she will only play a cameo role in events. “I believe the impact of classical physics is over-hyped.”

The post Apple picked as logo for celebration of classical physics in 2027 appeared first on Physics World.

Electron and proton FLASH deliver similar skin-sparing in radiotherapy of mice

28 mars 2025 à 10:04

FLASH irradiation, an emerging cancer treatment that delivers radiation at ultrahigh dose rates, has been shown to significantly reduce acute skin toxicity in laboratory mice compared with conventional radiotherapy. Having demonstrated this effect using proton-based FLASH treatments, researchers from Aarhus University in Denmark have now repeated their investigations using electron-based FLASH (eFLASH).

Reporting their findings in Radiotherapy and Oncology, the researchers note a “remarkable similarity” between eFLASH and proton FLASH with respect to acute skin sparing.

Principal investigator Brita Singers Sørensen and colleagues quantified the dose–response modification of eFLASH irradiation for acute skin toxicity and late fibrotic toxicity in mice, using similar experimental designs to those previously employed for their proton FLASH study. This enabled the researchers to make direct quantitative comparisons of acute skin response between electrons and protons. They also compared the effectiveness of the two modalities to determine whether radiobiological differences were observed.

Over four months, the team examined 197 female mice across five irradiation experiments. After being weighed, earmarked and given an ID number, each mouse was randomized to receive either eFLASH irradiation (average dose rate of 233 Gy/s) or conventional electron radiotherapy (average dose rate of 0.162 Gy/s) at various doses.

For the treatment, two unanaesthetized mice (one from each group) were restrained in a jig with their right legs placed in a water bath and irradiated by a horizontal 16 MeV electron beam. The animals were placed on opposite sides of the field centre and irradiated simultaneously, with their legs at a 3.2 cm water-equivalent depth, corresponding to the dose maximum.

The researchers used a diamond detector to measure the absolute dose at the target position in the water bath and assumed that the mouse foot target received the same dose. The resulting foot doses were 19.2–57.6 Gy for eFLASH treatments and 19.4–43.7 Gy for conventional radiotherapy, chosen to cover the entire range of acute skin response.

FLASH confers skin protection

To evaluate the animals’ response to irradiation, the researchers assessed acute skin damage daily from seven to 28 days post-irradiation using an established assay. They weighed the mice weekly, and one of three observers blinded to previous grades and treatment regimens assessed skin toxicity. Photographs were taken whenever possible. Skin damage was also graded using an automated deep-learning model, generating a dose–response curve independent of observer assessments.

The researchers also assessed radiation-induced fibrosis in the leg joint, biweekly from weeks nine to 52 post-irradiation. They defined radiation-induced fibrosis as a permanent reduction of leg extensibility by 75% or more in the irradiated leg compared with the untreated left leg.

To assess the tissue-sparing effect of eFLASH, the researchers used dose–response curves to derive TD50 – the toxic dose eliciting a skin response in 50% of mice. They then determined a dose modification factor (DMF), defined as the ratio of eFLASH TD50 to conventional TD50. A DMF larger than one suggests that eFLASH reduces toxicity.

The eFLASH treatments had a DMF of 1.45–1.54 – in other words, a 45–54% higher dose was needed to cause comparable skin toxicity to that caused by conventional radiotherapy. “The DMF indicated a considerable acute skin sparing effect of eFLASH irradiation,” the team explain. Radiation-induced fibrosis was also reduced using eFLASH, with a DMF of 1.15.
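
As an illustration of how a DMF is extracted (a minimal sketch with made-up response fractions, not the Aarhus data), one can fit a sigmoid dose–response curve to each treatment arm, read off TD50 and take the ratio:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical fractions of mice showing the skin response at each foot dose (Gy);
# illustrative numbers only, not the study's data.
dose_conv  = np.array([20, 25, 30, 35, 40])
resp_conv  = np.array([0.05, 0.20, 0.55, 0.85, 0.97])
dose_flash = np.array([30, 38, 45, 52, 58])
resp_flash = np.array([0.04, 0.22, 0.50, 0.82, 0.96])

def logistic(d, td50, k):
    """Sigmoid dose-response; td50 is the dose giving a 50% response rate."""
    return 1.0 / (1.0 + np.exp(-k * (d - td50)))

popt_conv, _  = curve_fit(logistic, dose_conv, resp_conv, p0=[30, 0.3])
popt_flash, _ = curve_fit(logistic, dose_flash, resp_flash, p0=[45, 0.3])

dmf = popt_flash[0] / popt_conv[0]   # dose modification factor
print(f"TD50(conv) = {popt_conv[0]:.1f} Gy, "
      f"TD50(FLASH) = {popt_flash[0]:.1f} Gy, DMF = {dmf:.2f}")
# With these toy numbers the DMF comes out near 1.5, i.e. roughly 50% more
# dose is needed under FLASH to produce the same toxicity.
```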

Comparing conventional radiotherapy with electron FLASH
Reducing skin damage Dose-response curves for acute skin toxicity (left) and fibrotic toxicity (right) for conventional electron radiotherapy and electron FLASH treatments. (Courtesy: CC BY 4.0/adapted from Radiother. Oncol. 10.1016/j.radonc.2025.110796)

For DMF-based equivalent doses, the development of skin toxicity over time was similar for eFLASH and conventional treatments throughout the dose groups. This supports the hypothesis that eFLASH modifies the dose–response rather than acting through a different biological mechanism. The team also notes that the difference in DMF between the fibrotic response and acute skin damage suggests that FLASH sparing depends on tissue type and may differ between acute- and late-responding tissues.

Similar skin damage between electrons and protons

Sørensen and colleagues compared their findings to previous studies of normal-tissue damage from proton irradiation, both in the entrance plateau and using the spread-out Bragg peak (SOBP). DMF values for electrons (1.45–1.54) were similar to those of transmission protons (1.44–1.50) and slightly higher than for SOBP protons (1.35–1.40). “Despite dose rate and pulse structure differences, the response to electron irradiation showed substantial similarity to transmission and SOBP damage,” they write.

Although the average eFLASH dose rate (233 Gy/s) was higher than that of the proton studies (80 and 60 Gy/s), it did not appear to influence the biological response. This supports the hypothesis that beyond a certain dose rate threshold, the tissue-sparing effect of FLASH does not increase notably.

The researchers point out that previous studies also found biological similarities in the FLASH effect for electrons and protons, with this latest work adding data on similar comparable and quantifiable effects. They add, however, that “based on the data of this study alone, we cannot say that the biological response is identical, nor that the electron and proton irradiation elicit the same biological mechanisms for DNA damage and repair. This data only suggests a similar biological response in the skin.”

The post Electron and proton FLASH deliver similar skin-sparing in radiotherapy of mice appeared first on Physics World.

Teaching university physics doesn’t have to be rocket science

26 mars 2025 à 12:00

Last year the UK government placed a new cap of £9535 on annual tuition fees, a figure that will likely rise in the coming years as universities tackle a funding crisis. Indeed, shortfalls are already affecting institutions, with some saying they will run out of money in the next few years. The past couple of months alone have seen several universities announce plans to shed academic staff and even shut departments.

Whether you agree with tuition fees or not, the fact is that students will continue to pay a significant sum for a university education. Value for money is part of the university proposition and lecturers can play a role by conveying the excitement of their chosen field. But what are the key requirements to help do so? In the late 1990s we carried out a study aimed at improving the long-term performance of students who initially struggled with university-level physics.

With funding from the Higher Education Funding Council for Wales, the study involved structured interviews with 28 students and 17 staff. An internal report – The Rough Guide to Lecturing – was written which, while not published, informed the teaching strategy of Cardiff University’s physics department for the next quarter of a century.

From the findings we concluded that lecture courses can be significantly enhanced by simply focusing on three principles, which we dub the three “E”s. The first “E” is enthusiasm. If a lecturer appears bored with the subject – perhaps they have given the same course for many years – why should their students be interested? This might sound obvious, but a bit of reading, or examining the latest research, can do wonders to freshen up a lecture that has been given many times before.

For both old and new courses it is usually possible to highlight at least one current research paper in a semester’s lectures. Students are not going to understand all of the paper, but that is not the point – it is the sharing in contemporary progress that will elicit excitement. Commenting on a nifty experiment in the work, or the elegance of the theory, can help to inspire both teacher and student.

As well as freshening up the lecture course’s content, another tip is to mention the wider context of the subject being taught, perhaps by mentioning its history or possible exciting applications. Be inventive – we have evidence of a lecturer “live” translating parts of Louis de Broglie’s classic 1925 paper “La relation du quantum et la relativité” during a lecture. It may seem unlikely, but the students responded rather well to that.

Supporting students

The second “E” is engagement. The role of the lecturer as a guide is obvious, but it should also be emphasized that the learner’s desire is to share the lecturer’s passion for, and mastery of, a subject. Styles of lecturing and visual aids can vary greatly between people, but the important thing is to keep students thinking.

Don’t succumb to the apocryphal definition of a lecture as only a means of transferring the lecturer’s notes to the student’s pad without first passing through the minds of either person. In our study, when the students were asked “What do you expect from a lecture?”, they responded simply that they wanted to learn something new – but we might extend this to a desire to learn how to do something new.

Simple demonstrations can be effective for engagement. Large foam dice, for example, can illustrate the non-commutation of 3D rotations. Fidget-spinners in the hands of students can help explain the vector nature of angular momentum. Lecturers should also ask rhetorical questions that make students think, but do not expect or demand answers, particularly in large classes.

More importantly, if a student asks a question, never insult them – there is no such thing as a “stupid” question. After all, what may seem a trivial point could eliminate a major conceptual block for them. If you cannot answer a technical query, admit it and say you will find out for next time – but make sure you do. Indeed, seeing that the lecturer has to work at the subject too can be very encouraging for students.

The final “E” is enablement. Make sure that students have access to supporting material. This could be additional notes; a carefully curated reading list of papers and books; or sets of suitable interesting problems with hints for solutions, worked examples they can follow, and previous exam papers. Explain what amount of self-study will be needed if they are going to benefit from the course.

Have clear and accessible statements concerning the course content and learning outcomes – in particular, what students will be expected to be able to do as a result of their learning. In our study, the general feeling was that a limited amount of continuous assessment (10–20% of the total lecture course mark) encourages both participation and overall achievement, provided students are given good feedback to help them improve.

Next time you are planning to teach a new course, or looking through those decades-old notes, remember enthusiasm, engagement and enablement. It’s not rocket science, but it will certainly help the students learn it.

The post Teaching university physics doesn’t have to be rocket science appeared first on Physics World.

Quantum computers extend lead over classical machines in random circuit sampling

25 mars 2025 à 17:41

Researchers in China have unveiled a 105-qubit quantum processor that can solve in minutes a quantum computation problem that would take billions of years using the world’s most powerful classical supercomputers. The result sets a new benchmark for claims of so-called “quantum advantage”, though some previous claims have faded after classical algorithms improved.

The fundamental promise of quantum computation is that it will reduce the computational resources required to solve certain problems. More precisely, it promises to reduce the rate at which resource requirements grow as problems become more complex. Evidence that a quantum computer can solve a problem faster than a classical computer – quantum advantage – is therefore a key measure of success.

The first claim of quantum advantage came in 2019, when researchers at Google reported that their 53-qubit Sycamore processor had solved a problem known as random circuit sampling (RCS) in just 200 seconds. Xiaobu Zhu, a physicist at the University of Science and Technology of China (USTC) in Hefei who co-led the latest work, describes RCS as follows: “First, you initialize all the qubits, then you run them in single-qubit and two-qubit gates and finally you read them out,” he says. “Since this process includes every key element of quantum computing, such as initializing the gate operations and readout, unless you have really good fidelity at each step you cannot demonstrate quantum advantage.”
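
For readers unfamiliar with the workflow Zhu describes, here is a minimal numpy sketch of random circuit sampling on a toy three-qubit register: initialize, apply layers of random single-qubit gates interleaved with fixed two-qubit gates, then sample bitstrings from the output distribution. It is purely illustrative and has nothing to do with the actual Zuchongzhi control stack; real experiments use 50–105 qubits, which is precisely why brute-force classical simulation becomes intractable.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3                                    # toy register; real devices use 50-105 qubits
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                           # step 1: initialize all qubits to |0>

def haar_1q(rng):
    """Random single-qubit unitary via QR of a complex Gaussian matrix."""
    m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(m)
    d = np.diag(r)
    return q * (d / np.abs(d))           # fix phases so the distribution is Haar

def apply_1q(state, gate, target):
    psi = state.reshape((2,) * n)
    psi = np.tensordot(gate, psi, axes=([1], [target]))
    return np.moveaxis(psi, 0, target).reshape(-1)

def apply_2q(state, gate, t1, t2):
    psi = state.reshape((2,) * n)
    psi = np.tensordot(gate.reshape(2, 2, 2, 2), psi, axes=([2, 3], [t1, t2]))
    return np.moveaxis(psi, [0, 1], [t1, t2]).reshape(-1)

CZ = np.diag([1, 1, 1, -1]).astype(complex)

for layer in range(4):                   # step 2: layers of 1q and 2q gates
    for q in range(n):
        state = apply_1q(state, haar_1q(rng), q)
    for q in range(layer % 2, n - 1, 2):
        state = apply_2q(state, CZ, q, q + 1)

probs = np.abs(state) ** 2               # step 3: readout = sample bitstrings
probs /= probs.sum()
samples = rng.choice(2**n, size=8, p=probs)
print([format(int(s), f"0{n}b") for s in samples])
```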

At the time, the Google team claimed that the best supercomputers would take 10,000 years to solve this problem. However, subsequent improvements to classical algorithms reduced this to less than 15 seconds. This pattern has continued ever since, with experimentalists pushing quantum computing forward even as information theorists make quantum advantage harder to achieve by improving techniques used to simulate quantum algorithms on classical computers.

Recent claims of quantum advantage

In October 2024, Google researchers announced that their 67-qubit Sycamore processor had solved an RCS problem that would take an estimated 3600 years for the Frontier supercomputer at the US’s Oak Ridge National Laboratory to complete. In the latest work, published in Physical Review Letters, Jian-Wei Pan, Zhu and colleagues set the bar even higher. They show that their new Zuchongzhi 3.0 processor can complete in minutes an RCS calculation that they estimate would take Frontier billions of years using the best classical algorithms currently available.

To achieve this, they redesigned the readout circuit of their earlier Zuchongzhi processor to improve its efficiency, modified the structures of the qubits to increase their coherence times and increased the total number of superconducting qubits to 105. “We really upgraded every aspect and some parts of it were redesigned,” Zhu says.

Google’s latest processor, Willow, also uses 105 superconducting qubits, and in December 2024 researchers there announced that they had used it to demonstrate quantum error correction. This achievement, together with complementary advances in Rydberg atom qubits from Harvard University’s Mikhail Lukin and colleagues, was named Physics World’s Breakthrough of the Year in 2024. However, Zhu notes that Google has not yet produced any peer-reviewed research on using Willow for RCS, making it hard to compare the two systems directly.

The USTC team now plans to demonstrate quantum error correction on Zuchongzhi 3.0. This will involve using an error correction code such as the surface code to combine multiple physical qubits into a single “logical qubit” that is robust to errors.  “The requirements for error-correction readout are much more difficult than for RCS,” Zhu notes. “RCS only needs one readout, whereas error-correction needs readout many times with very short readout times…Nevertheless, RCS can be a benchmark to show we have the tools to run the surface code. I hope that, in my lab, within a few months we can demonstrate a good-quality error correction code.”

“How progress gets made”

Quantum information theorist Bill Fefferman of the University of Chicago in the US praises the USTC team’s work, describing it as “how progress gets made”. However, he offers two caveats. The first is that recent demonstrations of quantum advantage do not have efficient classical verification schemes – meaning, in effect, that classical computers cannot check the quantum computer’s work. While the USTC researchers simulated a smaller problem on both classical and quantum computers and checked that the answers matched, Fefferman doesn’t think this is sufficient. “With the current experiments, at the moment you can’t simulate it efficiently, the verification doesn’t work anymore,” he says.

The second caveat is that the rigorous hardness arguments proving that the classical computational power needed to solve an RCS problem grows exponentially with the problem’s complexity apply only to situations with no noise. This is far from the case in today’s quantum computers, and Fefferman says this loophole has been exploited in many past quantum advantage experiments.

Still, he is upbeat about the field’s prospects. “The fact that the original estimates the experimentalists gave did not match some future algorithm’s performance is not a failure: I see that as progress on all fronts,” he says. “The theorists are learning more and more about how these systems work and improving their simulation algorithms and, based on that, the experimentalists are making their systems better and better.”

The post Quantum computers extend lead over classical machines in random circuit sampling appeared first on Physics World.

Tiny island, big science: the North Ronaldsay Science Festival

25 mars 2025 à 14:25

Sometimes, you just have to follow your instincts and let serendipity take care of the rest.

North Ronaldsay, a remote island north of mainland Orkney, has a population of about 50 and a lot of sheep. In the early 19th century, it thrived on the kelp ash industry, producing sodium carbonate (soda ash), potassium salts and iodine for soap and glass making.

But when cheaper alternatives became available, the island turned to its unique breed of seaweed-eating sheep. In 1832 islanders built a 12-mile-long dry stone wall around the island to keep the sheep on the shore, preserving inland pasture for crops.

My connection with North Ronaldsay began last summer when my partner, Sue Bowler, and I volunteered for the island’s Sheep Festival, where teams of like-minded people rebuild sections of the crumbling wall. That experience made us all the more excited when we learned that North Ronaldsay also had a science festival.

This year’s event took place on 14–16 March and getting there was no small undertaking. From our base in Leeds, the journey involved a 500-mile drive to a ferry, a crossing to Orkney mainland, and finally, a flight in a light aircraft. With just 50 inhabitants, we had no idea how many people would turn up but instinct told us it was worth the trip.

Sue, who works for the Royal Astronomical Society (RAS), presented Back to the Moon, while together we ran hands-on maker activities, a geology walk and a trip to the lighthouse, where we explored light beams and Fresnel lenses.

The Yorkshire Branch of the Institute of Physics (IOP) provided laser-cut hoist kits to demonstrate levers and concepts like mechanical advantage, while the RAS shared Connecting the Dots – a modern LED circuit version of a Victorian after-dinner card set illustrating constellations.

Four photos of children and adults creating structures from cardboard
Hands-on science Participants get stuck into maker activities at the festival. (Courtesy: @Lazy.Photon on Instagram)

Despite the island’s small size, the festival drew attendees from neighbouring islands, with 56 people participating in person and another 41 joining online. Across multiple events, the total accumulated attendance reached 314.

One thing I’ve always believed in science communication is to listen to your audience and never make assumptions. Orkney has a rich history of radio and maritime communications, shaped in part by the strategic importance of Scapa Flow during the Second World War.

Two photos of adults pressing LEDs into a picture of Orion the hunter
Stars in their eyes Making a constellation board at the North Ronaldsay Science Festival. (Courtesy: @Lazy.Photon on Instagram)

The Orkney Wireless Museum is a testament to this legacy, and one of our festival guests had even reconstructed a working 1930s Baird television receiver for the museum.

Leaving North Ronaldsay was hard. The festival sparked fascinating conversations, and I hope we inspired a few young minds to explore physics and astronomy.

  • The author would like to thank Alexandra Wright (festival organizer), Lucinda Offer (education, outreach and events officer at the RAS) and Sue Bowler (editor of Astronomy & Geophysics)

The post Tiny island, big science: the North Ronaldsay Science Festival appeared first on Physics World.

Cell sorting device could detect circulating tumour cells

25 mars 2025 à 10:48
Acousto-microfluidic chip
Cell separation Illustration of the fabricated optimal acousto-microfluidic chip. (Courtesy: Afshin Kouhkord and Naserifar Naser)

Analysing circulating tumour cells (CTCs) in the blood could help scientists detect cancer in the body. But separating CTCs from blood is a difficult, laborious process and requires large sample volumes.

Researchers at the K N Toosi University of Technology (KNTU) in Tehran, Iran, believe that ultrasonic waves could separate CTCs from red blood cells accurately, in an energy-efficient way and in real time. They publish their study in the journal Physics of Fluids.

“In a broader sense, we asked: ‘How can we design a microfluidic, lab-on-a-chip device powered by SAWs [standing acoustic waves] that remains simple enough for medical experts to use easily, while still delivering precise and efficient cell separation?’,” says senior author Naser Naserifar, an assistant professor in mechanical engineering at KNTU. “We became interested in acoustofluidics because it offers strong, biocompatible forces that effectively handle cells with minimal damage.”

Acoustic waves can deliver enough force to move cells over small distances without damaging them. The researchers used dual pressure acoustic fields at critical positions in a microchannel to separate CTCs from other cells. The CTCs are gathered at an outlet for further analyses, cultures and laboratory procedures.
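
For context, the textbook expression for the acoustic radiation force on a small spherical particle (radius a much smaller than the wavelength) in a one-dimensional standing wave – a standard acoustofluidics result, not the specific model used in this study – shows why cells of different size and compressibility migrate to different positions:

```latex
% Primary acoustic radiation force on a small sphere (a << wavelength) in a
% 1D standing wave of acoustic energy density E_ac and wavenumber k
F_{\mathrm{rad}} = 4\pi\,\Phi(\tilde\kappa,\tilde\rho)\,k\,a^{3}\,E_{\mathrm{ac}}\,\sin(2kz),
\qquad
\Phi(\tilde\kappa,\tilde\rho) = \frac{1}{3}\bigl(1-\tilde\kappa\bigr) + \frac{\tilde\rho-1}{2\tilde\rho+1}
```

Here κ̃ and ρ̃ are the particle-to-fluid compressibility and density ratios. Because the force scales with the particle volume and with the contrast factor Φ, cells of different size and mechanical properties – such as CTCs and red blood cells – collect at different positions in the standing-wave field, which is the general basis of acoustofluidic separation.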

In the process of designing the chip, the researchers integrated computational modelling, experimental analysis and artificial intelligence (AI) algorithms to analyse acoustofluidic phenomena and generate datasets that predict CTC migration in the body.

“We introduced an acoustofluidic microchannel with two optimized acoustic zones, enabling fast, accurate separation of CTCs from RBCs [red blood cells],” explains Afshin Kouhkord, who performed the work while a master’s student in the Advance Research in Micro And Nano Systems Lab at KNTU. “Despite the added complexity under the hood, the resulting chip is designed for simple operation in a clinical environment.”

So far, the researchers have evaluated the device with numerical simulations and tested it using a physical prototype. Simulations modelled fluid flow, acoustic pressure fields and particle trajectories. The physical prototype was made of lithium niobate, with polystyrene microspheres used as surrogates for red blood cells and CTCs. Results from the prototype agreed with numerical simulations to within 3.5%.

“This innovative approach in laboratory-on-chip technology paves the way for personalized medicine, real-time molecular analysis and point-of-care diagnostics,” Kouhkord and Naserifar write.

The researchers are now refining their design, aiming for a portable device that could be operated with a small battery pack in resource-limited and remote environments.

The post Cell sorting device could detect circulating tumour cells appeared first on Physics World.

D-Wave Systems claims quantum advantage, but some physicists are not convinced

24 mars 2025 à 19:59

D-Wave Systems has used quantum annealing to do simulations of quantum magnetic phase transitions. The company claims that some of their calculations would be beyond the capabilities of the most powerful conventional (classical) computers – an achievement referred to as quantum advantage. This would mark the first time quantum computers had achieved such a feat for a practical physics problem.

However, the claim has been challenged by two independent groups of researchers in Switzerland and the US, who have published papers on the arXiv preprint server that report that similar calculations could be done using classical computers. D-Wave’s experts believe these classical results fall well short of the company’s own accomplishments, and some independent experts agree with D-Wave.

While most companies trying to build practical quantum computers are developing “universal” or “gate model” quantum systems, US-based D-Wave has principally focused on quantum annealing devices. While such systems are less programmable than gate model systems, the approach has allowed D-Wave to build machines with many more quantum bits (qubits) than any of its competitors. Whereas researchers at Google Quantum AI and researchers in China have, independently, recently unveiled 105-qubit universal quantum processors, some of D-Wave’s processors have more than 5000 qubits. Moreover, D-Wave’s systems are already in practical use, with hardware owned by the Japanese mobile phone company NTT Docomo being used to optimize cell tower operations. Systems are also being used for network optimization at motor companies, food producers and elsewhere.

Trevor Lanting, the chief development officer at D-Wave, explains the central principles behind quantum-annealing computation: “You have a network of qubits with programmable couplings and weights between those devices and then you program in a certain configuration – a certain bias on all of the connections in the annealing processor,” he says. The quantum annealing algorithm first places the qubits in a superposition of all possible states of the system. When the couplings are slowly switched off, the system settles into its most energetically favoured state – which is the desired solution.
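
One minimal way to picture the task the hardware is given (a toy sketch in plain Python, not D-Wave’s own programming interface): write down an Ising energy from the programmed biases and couplings, whose lowest-energy spin configuration is the answer the annealer is meant to settle into.

```python
import itertools

# Toy Ising instance: the "programmable couplings and weights" Lanting describes.
# These numbers are made up purely for illustration.
h = {0: 0.5, 1: -0.3, 2: 0.2}                  # per-qubit biases
J = {(0, 1): -1.0, (1, 2): 0.8, (0, 2): 0.4}   # pairwise couplings

def energy(spins):
    """Ising energy E(s) = sum_i h_i s_i + sum_(i,j) J_ij s_i s_j."""
    e = sum(h[i] * s for i, s in enumerate(spins))
    e += sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    return e

# The annealer is meant to settle into the lowest-energy configuration;
# for three spins we can simply check all 2^3 possibilities.
ground = min(itertools.product([-1, 1], repeat=3), key=energy)
print(ground, energy(ground))
```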

Quantum hiking

Lanting compares this to a hiker in the mountains searching for the lowest point on a landscape: “As a classical hiker, all you can really do is start going downhill until you get to a minimum,” he explains. “The problem is that, because you’re not doing a global search, you could get stuck in a local valley that isn’t at the minimum elevation.” By starting out in a quantum superposition of all possible states (or locations in the mountains), however, quantum annealing is able to find the global potential minimum.

In the new work, researchers at D-Wave and elsewhere set out to show that their machines could use quantum annealing to solve practical physics problems beyond the reach of classical computers. The researchers used two different 1200-qubit processors to model magnetic quantum phase transitions. This is a similar problem to one studied in gate-model systems by researchers at Google and Harvard University in independent work announced in February.

“When water freezes into ice, you can sometimes see patterns in the ice crystal, and this is a result of the dynamics of the phase transition,” explains Andrew King, who is senior distinguished scientist at D-Wave and the lead author of a paper describing the work. “The experiments that we’re demonstrating shed light on a quantum analogue of this phenomenon taking place in a magnetic material that has been programmed into our quantum processors and a phase transition driven by a magnetic field.” Understanding such phase transitions is important in the discovery and design of new magnetic materials.

Quantum versus classical

The researchers studied multiple configurations, comprising ever-more spins arranged in ever-more complex lattice structures. The company says that its system performed the most complex simulation in minutes. They also ascertained how long it would take to do the simulations using several leading classical computation techniques, including neural network methods, and how the time to achieve a solution grew with the complexity of the problem. Based on this, they extrapolated that the most complex lattices would require almost a million years on Frontier, which is one of the world’s most powerful supercomputers.

However, two independent groups – one at EPFL in Switzerland and one at the Flatiron Institute in the US – have posted papers on the arXiv preprint server claiming to have done some of the less complex calculations using classical computers. They argue that their results should scale simply to larger sizes; the implication being that classical computers could solve the more complicated problems addressed by D-Wave.

King has a simple response: “You don’t just need to do the easy simulations, you need to do the hard ones as well, and nobody has demonstrated that.” Lanting adds that “I see this as a healthy back and forth between quantum and classical methods, but I really think that, with these results, we’re pulling ahead of classical methods on the biggest scales we can calculate”.

Very interesting work

Frank Verstraete of the University of Cambridge is unsurprised by some scientists’ scepticism. “D-Wave have historically been the absolute champions at overselling what they did,” he says. “But now it seems they’re doing something nobody else can reproduce, and in that sense it’s very interesting.” He does note, however, that the specific problem chosen is not, in his view, an interesting one from a physics perspective, and has been chosen purely to be difficult for a classical computer.

Daniel Lidar of the University of Southern California, who has previously collaborated with D-Wave on similar problems but was not involved in the current work, says “I do think this is quite the breakthrough…The ability to anneal very fast on the timescales of the coherence times of the qubits has now become possible, and that’s really a game changer here.” He concludes that “the arms race is destined to continue between quantum and classical simulations, and because, in all likelihood, these are problems that are extremely hard classically, I think the quantum win is going to become more and more indisputable.”

The D-Wave research is described in Science. The Flatiron Institute preprint is by Joseph Tindall and colleagues, and the EPFL preprint is by Linda Mauron and Giuseppe Carleo.

The post D-Wave Systems claims quantum advantage, but some physicists are not convinced appeared first on Physics World.

Allegations of sexual misconduct have immediate impact on perpetrator’s citations, finds study

21 mars 2025 à 11:13

Scientists who have been publicly accused of sexual misconduct see a significant and immediate decrease in the rate at which their work is cited, according to a study by behavioural scientists in the US. However, researchers who are publicly accused of scientific misconduct are found not to suffer the same drop in citations (PLOS One 20 e0317736). Despite their flaws, citation rates are often seen as a marker of impact and quality.

The study was carried out by a team led by Giulia Maimone from the University of California, Los Angeles, who collected data from the Web of Science covering 31,941 scientific publications across 18 disciplines. They then analysed the citation rates for 5888 papers authored by 30 researchers accused of either sexual or scientific misconduct, the latter including data fabrication, falsification and plagiarism.

Maimone told Physics World that they used strict selection criteria to ensure that the two groups of academics were comparable and that the accusations against them were public. This meant her team only used scholars whose misconduct allegations have been reported in the media and had “detailed accounts of the allegations online”.

Maimone’s team concluded that papers by scientists accused of sexual misconduct experienced a significant drop in citations in the three years after the allegations became public, compared with a “control” group of academics of a similar professional standing. Those accused of scientific fraud, meanwhile, saw no statistically significant change in the citation rates of their papers.

Further work

To further explore attitudes towards sexual and scientific misconduct, the researchers surveyed 231 non-academics and 240 academics. The non-academics considered sexual misconduct more reprehensible than scientific misconduct and more deserving of punishment, while academics claimed that they would more likely continue to cite researchers accused of sexual misconduct as compared to scientific misconduct. “Exactly the opposite of what we observe in the real data,” adds Maimone.

According to the researchers, there are two possible explanations for this discrepancy. One is that academics, according to Maimone, “overestimate their ability to disentangle the scientists from the science”. Another is that scientists are aware that they would not cite sexual harassers, but they are unwilling to admit it because they feel they should take a harsher professional approach towards scientific misconduct.

Maimone says they would now like to explore the longer-term consequences of misconduct as well as the psychological mechanisms behind the citation drop for those accused of sexual misconduct. “Do [academics] simply want to distance themselves from these allegations or are they actively trying to punish these scholars?” she asks.

The post Allegations of sexual misconduct have immediate impact on perpetrator’s citations, finds study appeared first on Physics World.

CO2 laser enables long-range detection of radioactive material

19 mars 2025 à 10:00

Researchers have demonstrated that they can remotely detect radioactive material from 10 m away using short-pulse CO2 lasers – a distance over ten times farther than achieved via previous methods.

Conventional radiation detectors, such as Geiger counters, detect particles that are emitted by the radioactive material, typically limiting their operational range to the material’s direct vicinity. The new method, developed by a research team headed up at the University of Maryland, instead leverages the ionization in the surrounding air, enabling detection from much greater distances.

The study may one day lead to remote sensing technologies that could be used in nuclear disaster response and nuclear security.

Using atmospheric ionization

Radioactive materials emit particles – such as alpha, beta or gamma particles – that can ionize air molecules, creating free electrons and negative ions. These charged particles are typically present at very low concentrations, making them difficult to detect.

Senior author Howard Milchberg and colleagues – also from Brookhaven National Laboratory, Los Alamos National Laboratory and Lawrence Livermore National Laboratory – demonstrated that CO2 lasers could accelerate these charged particles, causing them to collide with neutral gas molecules, in turn creating further ionization. These additional free charges would then undergo the same laser-induced accelerations and collisions, leading to a cascade of charged particles.

This effect, known as “electron avalanche breakdown”, can create microplasmas that scatter laser light. By measuring the profile of the backscattered light, researchers can detect the presence of radioactive material.
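
The detection principle can be caricatured with a deliberately crude toy model (all numbers below are assumptions, not values from the study): seed electrons produced by the radioactive source grow exponentially under the laser field, so air with a higher seed density reaches breakdown sooner and hosts more of the microplasmas that scatter the probe light.

```python
import numpy as np

# Toy avalanche model: seed electrons from air ionized by the radioactive
# source grow exponentially under the laser field until breakdown.
# All numbers are assumptions chosen for illustration, not values from the study.
nu = 1e9            # assumed net avalanche (ionization) rate during the pulse, 1/s
n_breakdown = 1e18  # assumed electron density defining "breakdown", cm^-3

def time_to_breakdown(n_seed):
    """Time for n_seed * exp(nu * t) to reach n_breakdown."""
    return np.log(n_breakdown / n_seed) / nu

for label, n_seed in [("ordinary air", 1e2), ("air near a source", 1e6)]:
    t_ns = time_to_breakdown(n_seed) * 1e9
    print(f"{label}: seed {n_seed:.0e} cm^-3 -> breakdown after ~{t_ns:.0f} ns")
```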

The team tested their technique using a 3.6-mCi polonium-210 alpha particle source at a standoff distance of 10 m, significantly longer than previous experiments that used different types of lasers and electromagnetic radiation sources.

“The results are highly impressive,” comments EunMi Choi from the Ulsan National Institute of Science and Technology in South Korea. Choi’s team had used a gyrotron source to detect radioactive materials back in 2017.

“The researchers successfully demonstrated 10-m standoff detection of radioactive material, significantly surpassing the previous range of approximately 1 m,” she says.

Milchberg and collaborators had previously used a mid-infrared laser in a similar experiment in 2019. Changing to a long-wavelength (9.2 μm) CO2 laser brought significant advantages, he says.

“You can’t use any laser to do this cascading breakdown process,” Milchberg explains. The CO2 laser’s wavelength was able to enhance the avalanche process, while being low energy enough to not create its own ionization sources. “CO2 is sort of the limit for long wavelengths on powerful lasers and it turns out CO2 lasers are very, very efficient as well,” he says. “So this is like a sweet spot.”

Imaging microplasmas

The team also used a CMOS camera to capture visible-light emissions from the microplasmas. Milchberg says that this fluorescence around radioactive sources resembled balls of plasma, indicating the localized regions where electron avalanche breakdowns had occurred.

By counting these “plasma balls” and calibrating them against the backscattered laser signal, the researchers could link fluorescence intensity to the density of ionization in the air, and use that to determine the type of radiation source.

The CMOS imagers, however, had to be placed close to the measured radiation source, reducing their applicability to remote sensing. “Although fluorescence imaging is not practical for field deployment due to the need for close-range cameras, it provides a valuable calibration tool,” Milchberg says.

Scaling to longer distances

The researchers believe their method can be extended to standoff distances exceeding 100 m. The primary limitation is the laser’s focusing geometry, which would affect the regions in which it could trigger an avalanche breakdown. A longer focal length would require a larger laser aperture but could enable kilometre-scale detection.

Choi points out, however, that deploying a CO2 laser may be difficult in real-world applications. “A CO₂ laser is a bulky system, making it challenging to deploy in a portable manner in the field,” she says, adding that mounting the laser for long-range detection may be a solution.

Milchberg says that the next steps will be to continue developing a technique that can differentiate between different types of radioactive sources completely remotely. Choi agrees, noting that accurately quantifying both the amount and type of radioactive material continues to be a significant hurdle to realising remote sensing technologies in the field.

“There’s also the question of environmental conditions,” says Milchberg, explaining that it is critical to ensure that detection techniques are robust against the noise introduced by aerosols or air turbulence.

The research is described in Physical Review Applied.

The post CO2 laser enables long-range detection of radioactive material appeared first on Physics World.

‘Milestone’ as Square Kilometre Array Observatory releases its first low-frequency image of the cosmos

18 mars 2025 à 13:30

The Square Kilometre Array (SKA) Observatory has released the first images from its partially built low-frequency telescope in Australia, known as SKA-Low.

The new SKA-Low image was created using 1024 two-metre-high antennas. It shows an area of the sky that would be obscured by a person’s clenched fist held at arm’s length.

Observed at 150 MHz to 175 MHz, the image contains 85 of the brightest known galaxies in that region, each with a black hole at their centre.

“We are demonstrating that the system as a whole is working,” notes SKA Observatory director-general Phil Diamond. “As the telescopes grow, and more stations and dishes come online, we’ll see the images improve in leaps and bounds and start to realise the full power of the SKAO.”

SKA-Low will ultimately have 131 072 two-metre-high antennas that will be clumped together in arrays to act as a single instrument.

These arrays collect the relatively quiet signals from space and combine them to produce radio images of the sky with the aim of answering some of cosmology’s most enigmatic questions, including what dark matter is, how galaxies form, and if there is other life in the universe.
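
As a rough guide to why combining widely spaced stations matters, an interferometer’s angular resolution scales as the observing wavelength divided by the longest separation between antennas; the baseline in the sketch below is an assumption for illustration, not an SKAO specification.

```python
import math

# Rough interferometer resolution: observing wavelength over longest baseline.
# The frequency is from the released image; the baseline is an assumption,
# not an SKAO specification.
f = 150e6                       # Hz
wavelength = 3.0e8 / f          # ~2 m
baseline = 65e3                 # assumed maximum station separation, metres

theta_rad = wavelength / baseline
theta_arcsec = math.degrees(theta_rad) * 3600
print(f"~{theta_arcsec:.0f} arcsec at {f/1e6:.0f} MHz")
```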

When the full SKA-Low gazes at the same portion of sky as captured in the image released yesterday, it will be able to observe more than 600,000 galaxies.

“The bright galaxies we can see in this image are just the tip of the iceberg,” says George Heald, lead commissioning scientist for SKA-Low. “With the full telescope we will have the sensitivity to reveal the faintest and most distant galaxies, back to the early universe when the first stars and galaxies started to form.”

‘Milestone’ achieved

SKA-Low is one of two telescopes under construction by the observatory. The other, SKA-Mid, which observes the mid-frequency range, will include 197 three-storey dishes and is being built in South Africa.

The telescopes, with a combined price tag of £1bn, are projected to begin making science observations in 2028. They are being funded through a consortium of member states, including China, Germany and the UK.

University of Cambridge astrophysicist Eloy de Lera Acedo, who is principal investigator at his institution for the observatory’s science data processor, says the first image from SKA-Low is an “important milestone” for the project.

“It is worth remembering that these images now require a lot of work, and a lot more data to be captured with the telescope as it builds up, to reach the science quality level we all expect and hope for,” he adds.

Rob Fender, an astrophysicist at the University of Oxford, who is not directly involved in the SKA Observatory, says that the first image “hints at the enormous potential” for the array that will eventually “provide humanity’s deepest ever view of the universe at wavelengths longer than a metre”.

The post ‘Milestone’ as Square Kilometre Array Observatory releases its first low-frequency image of the cosmos appeared first on Physics World.
