
Nanocrystals measure tiny forces on tiny length scales


Two independent teams in the US have demonstrated the potential of using the optical properties of nanocrystals to create remote sensors that measure tiny forces on tiny length scales. One team is based at Stanford University and used nanocrystals to measure the micronewton-scale forces exerted by a worm as it chewed bacteria. The other team is based at several institutes and used the photon avalanche effect in nanocrystals to measure sub-nanonewton to micronewton forces. The latter technique could potentially be used to study forces involved in processes such as stem cell differentiation.

Remote sensing of forces at small scales is challenging, especially inside living organisms. Optical tweezers cannot make remote measurements inside the body, while fluorophores – molecules that absorb and re-emit light – can measure forces in organisms, but have limited range, problematic stability or, in the case of quantum dots, toxicity. Nanocrystals with optical properties that change when subjected to external forces offer a way forward.

At Stanford, materials scientist Jennifer Dionne led a team that used nanocrystals doped with ytterbium and erbium. When two ytterbium atoms absorb near-infrared photons, they can then transfer energy to a nearby erbium atom. In this excited state, the erbium can either decay directly to its lowest energy state by emitting red light, or become excited to an even higher-energy state that decays by emitting green light. Both emission pathways are examples of upconversion, in which absorbed near-infrared photons are converted into higher-energy visible light.

Colour change

The ratio of green to red emission depends on the separation between the ytterbium and erbium atoms, and on the separation between the erbium atoms themselves, explains Dionne’s PhD student Jason Casar, lead author of a paper describing the Stanford research. Forces on the nanocrystal can change these separations and therefore alter that ratio.

The researchers encased their nanocrystals in polystyrene vessels approximately the size of an E. coli bacterium. They then mixed the encased nanoparticles with E. coli bacteria that were then fed to tiny nematode worms. To extract the nutrients, the worm’s pharynx needs to break open the bacterial cell wall. “The biological question we set out to answer is how much force is the bacterium generating to achieve that breakage?” explains Stanford’s Miriam Goodman.

The researchers shone near-infrared light on the worms, allowing them to monitor the flow of the nanocrystals. By measuring the colour of the emitted light when the particles reached the pharynx, they determined the force it exerted with micronewton-scale precision.
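This ratiometric read-out is simple enough to sketch in code. The calibration numbers below are invented for illustration (the team’s real calibration comes from controlled loading experiments), but they show the key idea: force is inferred from the green-to-red colour ratio rather than from absolute brightness, so the measurement is insensitive to how many nanocrystals happen to be in view.

```python
import numpy as np

# Hypothetical calibration: green/red emission ratio versus applied force,
# as would be measured beforehand on nanocrystals under known loads.
calib_force_uN = np.array([0.0, 0.5, 1.0, 1.5, 2.0])       # force (µN)
calib_ratio    = np.array([1.00, 0.88, 0.74, 0.63, 0.55])  # green/red ratio

def force_from_spectrum(green_counts, red_counts):
    """Infer force by interpolating the colour ratio on the calibration
    curve (np.interp needs ascending x, hence the reversed arrays)."""
    ratio = green_counts / red_counts
    return np.interp(ratio, calib_ratio[::-1], calib_force_uN[::-1])

# Example: photon counts recorded as a nanocrystal transits the pharynx
print(force_from_spectrum(green_counts=7400, red_counts=10000))  # -> ~1.0 µN
```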

Meanwhile, a collaboration of scientists at Columbia University, Lawrence Berkeley National Laboratory and elsewhere has shown that a process called photon avalanche can be used to measure even smaller forces on nanocrystals. The team’s avalanching nanoparticles (ANPs) are sodium yttrium fluoride nanocrystals doped with thulium – and were discovered by the team in 2021.

The fun starts here

The sensing process uses a laser tuned off-resonance from any transition from the ground state of the ANP. “We’re bathing our particles in 1064 nm light,” explains James Schuck of Columbia University, whose group led the research. “If the intensity is low, that all just blows by. But if, for some reason, you do eventually get some absorption – maybe a non-resonant absorption in which you give up a few phonons…then the fun starts. Our laser is resonant with an excited state transition, so you can absorb another photon.”

This creates a doubly excited state that can decay radiatively directly to the ground state, producing an upconverted photon. Alternatively, its energy can be transferred to a nearby thulium atom, which is thereby brought into resonance with the laser and can, in turn, excite more thulium atoms. “That’s the avalanche,” says Schuck. “We find on average you get 30 or 40 of these events – it’s analogous to a chain reaction in nuclear fission.”
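The fission analogy can be made concrete with a toy branching-process model. This is not the collaboration’s physical model – the probabilities below are invented purely for illustration – but it captures how a chain of cross-relaxation events sitting just below criticality turns a single seed absorption into tens of emitted photons, and why the yield is so sensitive to any change that favours non-radiative losses.

```python
import random

def avalanche_photons(p_emit=0.35, p_loss=0.155, max_ions=100_000):
    """Toy avalanche: each excited ion either emits a photon, is lost
    non-radiatively, or cross-relaxes and promotes two neighbours into
    the laser-resonant state. Probabilities are illustrative only."""
    excited, photons = 1, 0
    while excited and excited < max_ions:
        nxt = 0
        for _ in range(excited):
            r = random.random()
            if r < p_emit:
                photons += 1      # radiative decay: one photon emitted
            elif r >= p_emit + p_loss:
                nxt += 2          # cross-relaxation doubles the excitations
        excited = nxt
    return photons

runs = 10_000
print(sum(avalanche_photons() for _ in range(runs)) / runs)  # ~35 photons
# raising the non-radiative loss rate (as compression does) dims the avalanche:
print(sum(avalanche_photons(p_loss=0.175) for _ in range(runs)) / runs)  # ~7
```

With the mean number of offspring per excitation just below one, every avalanche terminates, but its size responds dramatically to small parameter changes – the basis of the force sensitivity described next.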

Now, Schuck and colleagues have shown that the exact number of photons produced in each avalanche decreases when the nanoparticle experiences compressive force. One reason is that the phonon frequencies are raised as the lattice is compressed, making non-radiative decay energetically more favourable.

The thulium-doped nanoparticles decay by emitting either red or near-infrared photons. As the force increases, the red emission dims more quickly than the near-infrared, causing a change in the colour of the emitted light. These effects allowed the researchers to measure forces from the sub-nanonewton to the micronewton range – at which point the light output from the nanoparticles became too low to detect.

Not just for forces

Schuck and colleagues are now seeking practical applications of their discovery, and not just for measuring forces.

“We’re discovering that this avalanching process is sensitive to a lot of things,” says Schuck. “If we put these particles in a cell and we’re trying to measure a cellular force gradient, but the cell also happened to change its temperature, that would also affect the brightness of our particles, and we would like to be able to differentiate between those things. We think we know how to do that.”

If the technique could be made to work in a living cell, it could be used to measure tiny forces such as those involved in the extra-cellular matrix that dictate stem cell differentiation.

Andries Meijerink of Utrecht University in the Netherlands believes both teams have done important work that is impressive in different ways: Schuck and colleagues for unveiling a fundamentally new force-sensing technique, and Dionne’s team for demonstrating a remarkable practical application.

However, Meijerink is sceptical that photon avalanching will be useful for sensing in the short term. “It’s a very intricate process,” he says, adding, “There’s a really tricky balance between this first absorption step, which has to be slow and weak, and this resonant absorption”. Nevertheless, he says that researchers are discovering other systems that can avalanche. “I’m convinced that many more systems will be found,” he says.

Both studies are described in Nature, in separate papers by Dionne and colleagues and by Schuck and colleagues.


IOP president Keith Burnett outlines a ‘pivotal’ year ahead for UK physics


Last year was the year of elections and 2025 is going to be the year of decisions.

After many countries, including the UK, Ireland and the US, went to the polls in 2024, the start of 2025 will see governments at the beginning of new terms, forced to respond swiftly to mounting economic, social, security, environmental and technological challenges.

These issues would be difficult to address at any given time, but today they come amid a turbulent geopolitical context. Governments are often judged against short milestones – the first 100 days or a first budget – but urgency should not come at the cost of thinking long-term, because the decisions over the next few months will shape outcomes for years, perhaps decades, to come. This is no less true for science than it is for health and social care, education or international relations.

In the UK, the first half of the year will be dominated by the government’s spending review. Due in late spring, it could be one of the toughest political tests for UK science, as the implications of the tight spending plans announced in the October budget become clear. Decisions about departmental spending will have important implications for physics funding, from research to infrastructure, facilities and teaching.

One of the UK government’s commitments is to establish 10-year funding cycles for key R&D activities – a policy that could be a positive improvement. Physics discoveries often take time to realise in full, but their transformational nature is indisputable. From fibre-optic communications to magnetic resonance imaging, physics has been indispensable to many of the world’s most impactful and successful innovations.

Emerging technologies, enabled by physicists’ breakthroughs in fields such as materials science and quantum physics, promise to transform the way we live and work, and create new business opportunities and open up new markets. A clear, comprehensive and long-term vision for R&D would instil confidence among researchers and innovators, and long-term and sustainable R&D funding would enable people and disruptive ideas to flourish and drive tomorrow’s breakthroughs.

Alongside the spending review, we are also expecting the publication of the government’s industrial strategy. The focus of the green paper published last year indicates that the strategy will give science and technology a significant role in positioning the UK for economic growth.

If we don’t recognise the need to fund more physicists, we will miss so many of the opportunities that lie ahead

Physics-based industries are a foundation stone for the UK economy and are highly productive, as highlighted by research commissioned by the Institute of Physics, which publishes Physics World. Across the UK, the physics sector generates £229bn gross value added, or 11% of total UK gross domestic product. It creates a collective turnover of £643bn, or £1380bn when indirect and induced turnover is included.

Labour productivity in physics-based businesses is also strong at £84 300 per worker, per year. So, if physics is not at the heart of this effort, then the government’s mission of economic revival is in danger of failing to get off the launch pad.

A pivotal year

Another of the new government’s policy priorities is the strategic defence review, which is expected to be published later this year. It could have huge implications for physics given its core role in many of the technologies that contribute to the UK’s defence capabilities. The changing geopolitical landscape, and potential for strained relations between global powers, may well bring research security to the front of the national mind.

Intellectual property and scientific innovation are among the UK’s greatest strengths, and it is right to secure them. But physics discoveries in particular can be hampered by overzealous security measures: so much of the important work in our discipline comes from years of collaboration between researchers across the globe. Decisions about research security need to protect, not hamper, the future of UK physics research.

This year could also be pivotal for UK universities, as securing their financial stability and future will be one of the major challenges. Last year, the pressures faced by higher education institutions became apparent, with announcements of course closures, redundancies and restructures as a way of saving money. The rise in tuition fees has far from solved the problem, so we need to be prepared for more turbulence coming for the higher education sector.

These things matter enormously. We have heard that universities are facing a tough situation, and it’s getting harder for physics departments to survive. But if we don’t recognise the need to fund more physicists, we will miss so many of the opportunities that lie ahead.

As we celebrate the International Year of Quantum Science and Technology, which marks the centenary of Werner Heisenberg’s initial development of quantum mechanics, 2025 is a reminder of how the benefits of physics span decades.

We need to build on all the vital and exciting developments that are happening in physics departments. The country wants and needs a stronger scientific workforce – just think of all those individuals who studied physics and now work in industries that are defending the country – and that workforce will depend strongly on physics skills. So our priority is to make sure that physics departments can keep doing what they do so well: world-leading research and preparing the next generation of physicists.


Wrinkles in spacetime could remember the secrets of exploding stars


Permanent distortions in spacetime caused by the passage of gravitational waves could be detectable from Earth. Known as “gravitational memory”, such distortions are predicted to occur most prominently when the core of a supernova collapses. Observing them could therefore provide a window into the death of massive stars and the creation of black holes, but there’s a catch: the supernova might have to happen in our own galaxy.

Physicists have been detecting gravitational waves from colliding stellar-mass black holes and neutron stars for almost a decade now, and theory predicts that core-collapse supernovae should also produce them. The difference is that unlike collisions, supernovae tend to be lopsided – they don’t explode outwards equally in all directions. It is this asymmetry – in both the emission of neutrinos from the collapsing core and the motion of the blast wave itself – that produces the gravitational-wave memory effect.

“The memory is the result of the lowest frequency aspects of these motions,” explains Colter Richardson, a PhD student at the University of Tennessee in Knoxville, US, and co-lead author (with Haakon Andresen of Sweden’s Oskar Klein Centre) of a Physical Review Letters paper describing how gravitational-wave memory detection might work on Earth.

Filtering out seismic noise

Previously, many physicists assumed it wouldn’t be possible to detect the memory effect from Earth. This is because it manifests at frequencies below 10 Hz, where noise from seismic events tends to swamp detectors. Indeed, Harvard astrophysicist Kiranjyot Gill argues that detecting gravitational memory “would require exceptional sensitivity in the millihertz range to separate it from background noise and other astrophysical signals” – a sensitivity that she says Earth-based detectors simply don’t have.

Anthony Mezzacappa, Richardson’s supervisor at Tennessee, counters this by saying that while the memory signal itself cannot be detected, the ramp-up to it can. “The signal ramp-up corresponds to a frequency of 20–30 Hz, which is well above 10 Hz, below which the detector response needs to be better characterized for what we can detect on Earth, before dropping down to virtually 0 Hz where the final memory amplitude is achieved,” he tells Physics World.

The key, Mezzacappa explains, is a “matched filter” technique in which templates of what the ramp-up should look like are matched to the signal to pick it out from low-frequency background noise. Using this technique, the team’s simulations show that it should be possible for Earth-based gravitational-wave detectors such as LIGO to detect the ramp-up even though the actual deformation effect would be tiny – around 10⁻¹⁶ cm “scaled to the size of a LIGO detector arm”, Richardson says.
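Matched filtering is the workhorse of gravitational-wave searches: the data stream is cross-correlated against a template of the expected waveform, and a peak in the normalized correlation flags the signal and its arrival time. The toy sketch below – white noise and an invented tanh-shaped ramp, rather than supernova-simulation templates and real detector noise – shows how a ramp buried well below the noise floor still produces a clear peak.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1024                                      # sample rate (Hz)
tt = np.arange(0, 1, 1 / fs)                   # 1 s long template
ramp = 0.5 * (1 + np.tanh((tt - 0.5) / 0.02))  # smooth rise to a lasting offset
template = ramp - ramp.mean()                  # keep only the AC part (the ramp-up)

sigma = 5.0
data = rng.normal(0, sigma, 8 * fs)            # white-noise stand-in for strain data
t0 = int(3.3 * fs)                             # inject a weak ramp 3.3 s in
data[t0:t0 + template.size] += 3 * template

# slide the template along the data; the correlation peaks at the injection time
corr = np.correlate(data, template, mode="valid")
snr = corr / (sigma * np.sqrt(np.sum(template**2)))
peak = int(np.argmax(np.abs(snr)))
print(f"|SNR| = {abs(snr[peak]):.1f} at t = {peak / fs:.2f} s")  # ~10 at 3.30 s
```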

The snag is that for the ramp-up to be detectable, the simulations suggest the supernova would need to be close – probably within 10 kiloparsecs (32,615 light years) of Earth. That would place it within our own galaxy, and galactic supernovae are not exactly common. The last to be observed in real time was spotted by Johannes Kepler in 1604; though there have been others since, we’ve only identified their remnants after the fact.

Going to the Moon

Mezzacappa and colleagues are optimistic that multi-messenger astronomy techniques such as gravitational-wave and neutrino detectors will help astronomers identify future Milky Way supernovae as they happen, even if cosmic dust (for example) hides their light for optical observers.

Gill, however, prefers to look towards the future. In a paper under revision at Astrophysical Journal Letters, and currently available as a preprint, she cites two proposals for detectors on the Moon that could transform gravitational-wave physics and extend the range at which gravitational memory signals can be detected.

The first, called the Lunar Gravitational Wave Antenna, would use inertial sensors to detect the Moon shaking as gravitational waves ripple through it. The other, known as the Laser Interferometer Lunar Antenna, would be like a giant, triangular version of LIGO with arms spanning tens of kilometres open to space. Both are distinct from the European Space Agency’s Laser Interferometer Space Antenna, which is due for launch in the 2030s, but is optimized to detect gravitational waves from supermassive black holes rather than supernovae.

“Lunar-based detectors or future space-based observatories beyond LISA would overcome the terrestrial limitations,” Gill argues. Such detectors, she adds, could register a memory effect from supernovae tens or even hundreds of millions of light years away. This huge volume of space would encompass many galaxies, making the detection of gravitational waves from core-collapse supernovae almost routine.

The memory of something far away

In response, Richardson points out that his team’s filtering method could also work at longer ranges – up to approximately 10 million light years, encompassing our own Local Group of galaxies and several others – in certain circumstances. If a massive star is spinning very quickly, or it has an exceptionally strong magnetic field, its eventual supernova explosion will be highly collimated and almost jet-like, boosting the amplitude of the memory effect. “If the amplitude is significantly larger, then the detection distance is also significantly larger,” he says.

Whatever technologies are involved, both groups agree that detecting gravitational-wave memory is important. It might, for example, tell us whether a supernova has left behind a neutron star or a black hole, which would be valuable because the reasons one forms and not the other remain a source of debate among astrophysicists.

“By complementing other multi-messenger observations in the electromagnetic spectrum and neutrinos, gravitational-wave memory detection would provide unparalleled insights into the complex interplay of forces in core-collapse supernovae,” Gill says.

Richardson agrees that a detection would be huge and hopes that his work and that of others “motivates new investigations into the low-frequency region of gravitational-wave astronomy”.


‘Why do we have to learn this?’ A physics educator’s response to every teacher’s least favourite question


Several years ago I was sitting at the back of a classroom supporting a newly qualified science teacher. The lesson was going well, a pretty standard class on Hooke’s law, when a student leaned over to me and asked “Why are we doing this? What’s the point?”.

Having been a teacher myself, I had been asked this question many times before. I suspect that when I was teaching, I went for the knee-jerk “it’s useful if you want to be an engineer” response, or something similar. This isn’t a very satisfying answer, but I never really had the time to formulate a real justification for studying Hooke’s law, or physics in general for that matter.

Who is the physics curriculum designed for? Should it be designed for the small number of students who will pursue the subject, or subjects allied to it, at the post-16 and post-18 level? Or should we be reflecting on the needs of the overwhelming majority who will never use most of the curriculum content again? Only about 10% of students pursue physics or physics-rich subjects post-16 in England, and at degree level, only around 4000 students graduate with physics degrees in the UK each year.

One argument often levelled at me is that learning this is “useful”, to which I retort – in a similar vein to the student from the first paragraph – “In what way?” In the 40 years or so since first learning Hooke’s law, I can’t remember ever explicitly using it in my everyday life, despite being a physicist. Whenever I give a talk on this subject, someone often pipes up with a tenuous example, but I suspect they are in the minority. An audience member once said they consider the elastic behaviour of wire when hanging pictures, but I suspect that many thousands of pictures have been successfully hung with no recourse to F = –kx.

Hooke’s law is incredibly important in engineering but, again, most students will not become engineers or rely on a knowledge of the properties of springs, unless they get themselves a job in a mattress factory.

From a personal perspective, Hooke’s law fascinates me. I find it remarkable that we can see the macroscopic properties of materials being governed by microscopic interactions and that this can be expressed in a simple linear form. There is no utilitarianism in this, simply awe, wonder and aesthetics. I would always share this “joy of physics” with my students, and it was incredibly rewarding when this was reciprocated. But for many, if not most, my personal perspective was largely irrelevant, and they knew that the curriculum content would not directly support them in their future careers.

At this point, I should declare my position – I don’t think we should take Hooke’s law, or physics, off the curriculum, but my reason is not the one often given to students.

A series of lessons on Hooke’s law is likely to include: experimental design; setting up and using equipment; collecting numerical data using a range of devices; recording and presenting data, including graphs; interpreting data; modelling data and testing theories; devising evidence-based explanations; communicating ideas; evaluating procedures; critically appraising data; collaborating with others; and working safely.
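Almost every item on that list appears in even the simplest computational treatment of the experiment. As an illustration (with invented measurements), fitting extension–load data to F = kx and inspecting the residuals is precisely the “modelling data and testing theories” step:

```python
import numpy as np

# Invented measurements from a spring-stretching experiment
load_N      = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])              # applied force (N)
extension_m = np.array([0.000, 0.012, 0.025, 0.036, 0.050, 0.061])  # extension (m)

# Model: F = kx. Least-squares fit of a line through the origin:
k = np.sum(load_N * extension_m) / np.sum(extension_m**2)  # spring constant (N/m)
residuals = load_N - k * extension_m

print(f"spring constant k = {k:.1f} N/m")
print("residuals (N):", np.round(residuals, 3))
# Systematic residuals at high load would reveal the limit of proportionality
```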

Science education must be about preparing young people to be active and critical members of a democracy, equipped with the skills and confidence to engage with complex arguments that will shape their lives. For most students, this is the most valuable lesson they will take away from Hooke’s law. We should encourage students to find our subject fascinating and relevant, and in doing so make them receptive to the acquisition of scientific knowledge throughout their lives.

At a time when pressures on the education system are greater than ever, we must be able to articulate and justify our position within a crowded curriculum. I don’t believe that students should simply accept that they should learn something because it is on a specification. But they do deserve a coherent reason that relates to their lives and their careers. As science educators, we owe it to our students to have an authentic justification for what we are asking them to do. As physicists, even those who don’t have to field tricky questions from bored teenagers, I think it’s worthwhile for all of us to ask ourselves how we would answer the question “What is the point of this?”.


New Journal of Physics seeks to expand its horizons


The New Journal of Physics (NJP) has long been a flagship journal for IOP Publishing. The journal published its first volume in 1998 and was an early pioneer of open-access publishing. Co-owned by the Institute of Physics, which publishes Physics World, and the Deutsche Physikalische Gesellschaft (DPG), after some 25 years the journal is now seeking to establish itself further as a journal that represents the entire range of physics disciplines.

New Journal of Physics
A journal for all physics: the New Journal of Physics publishes research in a broad range of disciplines including quantum optics and quantum information, condensed-matter physics as well as high-energy physics. (Courtesy: IOP Publishing)

NJP publishes articles in pure, applied, theoretical and experimental research, as well as interdisciplinary topics. Research areas include optics, condensed-matter physics, quantum science and statistical physics, and the journal publishes a range of article types such as papers, topical reviews, fast-track communications, perspectives and special issues.

While NJP has been seen as a leading journal for quantum information, optics and condensed-matter physics, the journal is currently undergoing a significant transformation to broaden its scope to attract a wider array of physics disciplines. This shift aims to enhance the journal’s relevance, foster a broader audience and maintain NJP’s position as a leading publication in the global scientific community.

While quantum physics in general, and quantum optics and quantum information in particular, will remain crucial areas for the journal, researchers in other fields such as gravitational-wave research, condensed- and soft-matter physics, polymer physics, theoretical chemistry, statistical and mathematical physics are being encouraged to submit their articles to the journal. “It’s a reminder to the community that NJP is a journal for all kinds of physics and not just a select few,” says quantum physicist Andreas Buchleitner from the Albert-Ludwigs-Universität Freiburg who is NJP’s editor-in-chief.

Historically, NJP has had a strong focus on theoretical physics, particularly in quantum information. Another significant aspect of NJP’s new strategy is therefore the inclusion of more experimental research. Attracting high-quality experimental papers will balance the journal’s content, enhance its reputation as a comprehensive physics journal and allow it to compete with other leading physics journals. Part of this shift will also involve building a reliable and loyal group of authors who regularly publish their best work in NJP.

A broader scope

To aid this move, NJP has recently grown its editorial board to add expertise in subjects such as gravitational-wave physics. This diversity of capabilities is crucial to evaluate submissions from different areas of physics and maintain high standards of quality during the peer-review process. That point is particularly relevant for Buchleitner, who sees the expansion of the editorial board as helping to improve the journal’s handling of submissions to ensure that authors feel their work is being evaluated fairly and by knowledgeable and engaged individuals. “Increasing the editorial board was quite an important concept in terms of helping the journal expand,” adds Buchleitner. “What is important to me is that scientists who contact the journal feel that they are talking to people and not to artificial intelligence substitutes.”

While citation metrics such as impact factors are often debated in terms of their scientific value, they remain essential for a journal’s visibility and reputation. In the competitive landscape of scientific publishing, they can set a journal apart from its competitors. With that in mind, NJP, which has an impact factor of 2.8, is also focusing on improving its citation indices to compete with top-tier journals.

Yet that effort doesn’t just involve the impact factor but also metrics that reflect the efficient and constructive handling of submissions, which encourages researchers to publish with the journal again. Setting it apart from competitors, the journal’s time to first decision before peer review, for example, is only six days, while its median time to first decision after peer review is 50 days.

Society benefits

While NJP pioneered the open-access model of scientific publishing, that position is no longer unique given the huge increase in open-access journals over the past decade. Yet the publishing model continues to be an important aspect of the journal’s identity to ensure that the research it publishes is freely available to all. Another crucial factor to attract authors and set it apart from commercial entities is that NJP is published by learned societies – the IOP and DPG.

NJP has often been thought of as a “European journal”. Indeed, NJP’s role is significant in the context of the UK leaving the European Union, in that it serves as a bridge between the UK and mainland European research communities. “That’s one of the reasons why I like the journal,” says Buchleitner, who adds that with a wider scope NJP will not only publish the best research from around the world but also strengthen its identity as a leading European journal.


World’s darkest skies threatened by industrial megaproject in Chile, astronomers warn


The darkest, clearest skies anywhere in the world could suffer “irreparable damage” from a proposed industrial megaproject. That is the warning from the European Southern Observatory (ESO) in response to plans by AES Andes, a subsidiary of the US power company AES Corporation, to develop a green hydrogen project just a few kilometres from ESO’s flagship Paranal Observatory in Chile’s Atacama Desert.

The Atacama Desert is considered one of the most important astronomical research sites in the world due to its stable atmosphere and lack of light pollution. Sitting 2635 m above sea level, on Cerro Paranal, the Paranal Observatory is home to key astronomical instruments including the Very Large Telescope. The Extremely Large Telescope (ELT) – the largest visible and infrared light telescope in the world – is also being constructed at the observatory on Cerro Armazones with first light expected in 2028.

AES Chile submitted an Environmental Impact Assessment in Chile for an industrial-scale green hydrogen project at the end of December. The complex is expected to cover more than 3000 hectares – similar in size to 1200 football pitches. According to AES, the project is in the early stages of development, but could include green hydrogen and ammonia production plants, solar and wind farms as well as battery storage facilities.

ESO is calling for the development to be relocated to preserve “one of Earth’s last truly pristine dark skies” and “safeguard the future” of astronomy. “The proximity of the AES Andes industrial megaproject to Paranal poses a critical risk to the most pristine night skies on the planet,” says ESO director general Xavier Barcons. “Dust emissions during construction, increased atmospheric turbulence, and especially light pollution will irreparably impact the capabilities for astronomical observation.”

In a statement sent to Physics World, an AES spokesperson says they “understand there are concerns raised by ESO regarding the development of renewable energy projects in the area”. The spokesperson adds that the project would be in an area “designated for renewable energy development”. They also claim that the company is “dedicated to complying with all regulatory guidelines and rules” and “supporting local economic development while maintaining the highest environmental and safety standards”.

According to the statement, the proposal “incorporates the highest standards in lighting” to comply with Chilean regulatory requirements designed “to prevent light pollution, and protect the astronomical quality of the night skies”.

Yet Romano Corradi, director of the Gran Telescopio Canarias, which is located at the Roque de los Muchachos Observatory, La Palma, Spain, notes that it is “obvious” that the light pollution from such a large complex will negatively affect observations. “There are not many places left in the world with the dark and other climatic conditions necessary to do cutting-edge science in the field of observational astrophysics,” adds Corradi. “Light pollution is a global effect and it is therefore essential to protect sites as important as Paranal.”


Could bubble-like microrobots be the drug delivery vehicles of the future?


Biomedical microrobots could revolutionize future cancer treatments, reliably delivering targeted doses of toxic cancer-fighting drugs to destroy malignant tumours while sparing healthy bodily tissues. Development of such drug-delivering microrobots is at the forefront of biomedical engineering research. However, there are many challenges to overcome before this minimally invasive technology moves from research lab to clinical use.

Microrobots must be capable of rapid, steady and reliable propulsion through various biological materials, while generating enhanced image contrast to enable visualization through thick body tissue. They require an accurate guidance system to precisely target diseased tissue. They also need to support sizable payloads of drugs, maintain their structure long enough to release this cargo, and then efficiently biodegrade – all without causing any harm to the body.

Aiming to meet this tall order, researchers at the California Institute of Technology (Caltech) and the University of Southern California have designed a hydrogel-based, image-guided, bioresorbable acoustic microrobot (BAM) with these characteristics and capabilities. Reporting their findings in Science Robotics, they demonstrated that the BAMs could successfully deliver drugs that decreased the size of bladder tumours in mice.

Microrobot design

The team, led by Caltech’s Wei Gao, fabricated the hydrogel-based BAMs using high-resolution two-photon polymerization. The microrobots are hollow spheres with an outer diameter of 30 µm and an 18 µm-diameter internal cavity to trap a tiny air bubble inside.

The BAMs have a hydrophobic inner surface to prolong microbubble retention within biofluids and a hydrophilic outer layer that prevents microrobot clustering and promotes degradation. Magnetic nanoparticles and therapeutic agents integrated into the hydrogel matrix enable wireless magnetic steering and drug delivery, respectively.

The entrapped microbubbles are key as they provide propulsion for the BAMs. When stimulated by focused ultrasound (FUS), the bubbles oscillate at their resonant frequencies. This vibration creates microstreaming vortices around the BAM, generating a propulsive force in the opposite direction of the flow. The microbubbles inside the BAMs also act as ultrasound contrast agents, enabling real-time, deep-tissue visualization.
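A back-of-the-envelope estimate shows why ultrasound in the hundreds-of-kilohertz range is the right driving field. The classic Minnaert formula for a free gas bubble in water is only a rough guide here – the hydrogel shell and the confining cavity will shift the true resonance – but it lands in the same ballpark as the 480 kHz excitation the team used:

```python
import numpy as np

def minnaert_frequency(radius_m, gamma=1.4, p0=101_325.0, rho=1_000.0):
    """Resonant frequency (Hz) of a free gas bubble in water:
    f = sqrt(3*gamma*p0/rho) / (2*pi*R). Ignores the shell and
    confinement, so treat the result as an order-of-magnitude guide."""
    return np.sqrt(3 * gamma * p0 / rho) / (2 * np.pi * radius_m)

# the 18 µm internal cavity corresponds to a bubble radius of ~9 µm
print(f"{minnaert_frequency(9e-6) / 1e3:.0f} kHz")  # ~360 kHz
```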

The researchers designed the microrobots with two cylinder-like openings, which they found achieves faster propulsion speeds than single- or triple-opening spheres. They attribute this to propulsive forces that run parallel to the sphere’s boundary improving both speed and stability of movement when activated by FUS.

Flow patterns generated by a vibrating BAM
Numerical simulations Flow patterns generated by a BAM vibrating at its resonant frequency. The microrobot’s two openings are clearly visible. Scale bar, 15 µm. (Courtesy: Hong Han)

They also discovered that offsetting the centre of the microbubble cavity from the centre of the sphere generated propulsion speeds more than twice those achieved by BAMs with a symmetric design.

To perform simultaneous imaging of BAM location and acoustic propulsion within soft tissue, the team employed a dual-probe design. An ultrasound imaging probe enabled real-time imaging of the bubbles, while the acoustic field generated by a FUS probe (at an excitation frequency of 480 kHz and an applied acoustic pressure of 626 kPa peak-to-peak) provided effective propulsion.

In vitro and in vivo testing 

The team performed real-time imaging of the propulsion of BAMs in vitro, using an agarose chamber to simulate an artificial bladder. When exposed to an ultrasound field generated by the FUS probe, the BAMs demonstrated highly efficient motion, as observed in the ultrasound imaging scans. The propulsion direction of BAMs could be precisely controlled by an external magnetic field.

The researchers also conducted in vivo testing, using laboratory mice with bladder cancer and the anti-cancer drug 5-fluorouracil (5-FU). They treated groups of mice with either phosphate buffered saline, free drug, passive BAMs or active (acoustically actuated and magnetically guided) BAMs, at three-day intervals over four sessions. They then monitored the tumour progression for 21 days, using bioluminescence signals emitted by cancer cells.

The active BAM group exhibited a 93% decrease in bioluminescence by the 14th day, indicating large tumour shrinkage. Histological examination of excised bladders revealed that mice receiving this treatment had considerably reduced tumour sizes compared with the other groups.

“Embedding the anticancer drug 5-FU into the hydrogel matrix of BAMs substantially improved the therapeutic efficiency compared with 5-FU alone,” the authors write. “These BAMs used a controlled-release mechanism that prolonged the bioavailability of the loaded drug, leading to sustained therapeutic activity and better outcomes.”

Mice treated with active BAMs experienced no weight changes, and no adverse effects on the heart, liver, spleen, lung or kidney compared with the control group. The researchers also evaluated in vivo degradability by measuring BAM bioreabsorption rates following subcutaneous implantation into both flanks of a mouse. Within six weeks, they observed complete breakdown of the microrobots.

Gao tells Physics World that the team has subsequently expanded the scope of its work to optimize the design and performance of the microbubble robots for broader biomedical applications.

“We are also investigating the use of advanced surface engineering techniques to further enhance targeting efficiency and drug loading capacity,” he says. “Planned follow-up studies include preclinical trials to evaluate the therapeutic potential of these robots in other tumour models, as well as exploring their application in non-cancerous diseases requiring precise drug delivery and tissue penetration.”


Sustainability spotlight: PFAS unveiled


So-called “forever chemicals”, or per- and polyfluoroalkyl substances (PFAS), are widely used in consumer, commercial and industrial products, and have subsequently made their way into humans, animals, water, air and soil. Despite this ubiquity, there are still many unknowns regarding the potential human health and environmental risks that PFAS pose.

Join us for an in-depth exploration of PFAS with four leading experts who will shed light on the scientific advances and future challenges in this rapidly evolving research area.

Our panel will guide you through a discussion of PFAS classification and sources, the journey of PFAS through ecosystems, strategies for PFAS risk mitigation and remediation, and advances in the latest biotechnological innovations to address their effects.

Sponsored by Sustainability Science and Technology, a new journal from IOP Publishing that provides a platform for researchers, policymakers, and industry professionals to publish their research on current and emerging sustainability challenges and solutions.

Left to right: Jonas Baltrusaitis, Linda S. Lee, Clinton Williams, Sara Lupton, Jude Maul

Jonas Baltrusaitis, inaugural editor-in-chief of Sustainability Science and Technology, has co-authored more than 300 research publications on innovative materials. His work includes nutrient recovery from waste, their formulation and delivery, and renewable energy-assisted catalysis for energy carrier and commodity chemical synthesis and transformations.

Linda S Lee is a distinguished professor at Purdue University with joint appointments in the Colleges of Agriculture (COA) and Engineering, program head of the Ecological Sciences & Engineering Interdisciplinary Graduate Program and COA assistant dean of graduate education and research. She joined Purdue in 1993 with degrees in chemistry (BS), environmental engineering (MS) and soil chemistry/contaminant hydrology (PhD) from the University of Florida. Her research includes chemical fate, analytical tools, waste reuse, bioaccumulation, and contaminant remediation and management strategies with PFAS challenges driving much of her research for the last two decades. Her research is supported by a diverse funding portfolio. She has published more than 150 papers with most in top-tier environmental journals.

Clinton Williams is the research leader of Plant and Irrigation and Water Quality Research units at US Arid Land Agricultural Research Center. He has been actively engaged in environmental research focusing on water quality and quantity for more than 20 years. Clinton looks for ways to increase water supplies through the safe use of reclaimed waters. His current research is related to the environmental and human health impacts of biologically active contaminants (e.g. PFAS, pharmaceuticals, hormones and trace organics) found in reclaimed municipal wastewater and the associated impacts on soil, biota, and natural waters in contact with wastewater. His research is also looking for ways to characterize the environmental loading patterns of these compounds while finding low-cost treatment alternatives to reduce their environmental concentration using byproducts capable of removing the compounds from water supplies.

Sara Lupton has been a research chemist with the Food Animal Metabolism Research Unit at the Edward T Schafer Agricultural Research Center in Fargo, ND within the USDA-Agricultural Research Service since 2010. Sara’s background is in environmental analytical chemistry. She is the ARS lead scientist for the USDA’s Dioxin Survey and other research includes the fate of animal drugs and environmental contaminants in food animals and investigation of environmental contaminant sources (feed, water, housing, etc.) that contribute to chemical residue levels in food animals. Sara has conducted research on bioavailability, accumulation, distribution, excretion, and remediation of PFAS compounds in food animals for more than 10 years.

Jude Maul received a master’s degree in plant biochemistry from the University of Kentucky and a PhD in horticulture and biogeochemistry from Cornell University in 2008. Since then he has been with the USDA-ARS as a research ecologist in the Sustainable Agriculture System Laboratory. Jude’s research focuses on molecular ecology at the plant/soil/water interface in the context of plant health, nutrient acquisition and productivity. Taking a systems approach to agroecosystem research, Jude leads the USDA-ARS-LTAR Soils Working group, which is creating a national soils data repository; his research results contribute to national soil-health management recommendations.

About this journal

Sustainability Science and Technology is an interdisciplinary, open access journal dedicated to advances in science, technology, and engineering that can contribute to a more sustainable planet. It focuses on breakthroughs in all science and engineering disciplines that address one or more of the three sustainability pillars: environmental, social and/or economic.
Editor-in-chief: Jonas Baltrusaitis, Lehigh University, USA

String theory may be inevitable as a unified theory of physics, calculations suggest


Striking evidence that string theory could be the sole viable “theory of everything” has emerged in a new theoretical study of particle scattering that was done by a trio of physicists in the US. By unifying all fundamental forces of nature, including gravity, string theory could provide the long-sought quantum description of gravity that has eluded scientists for decades.

The research was done by Caltech’s Clifford Cheung and Aaron Hillman along with Grant Remmen at New York University. They have delved into the intricate mathematics of scattering amplitudes, which are quantities that encapsulate the probabilities of particles interacting when they collide.

Through a novel application of the bootstrap approach, the trio demonstrated that imposing general principles of quantum mechanics uniquely determines the scattering amplitudes of particles at the smallest scales. Remarkably, the results match the string scattering amplitudes derived in earlier works. This suggests that string theory may indeed be an inevitable description of the universe, even as direct experimental verification remains out of reach.

“A bootstrap is a mathematical construction in which insight into the physical properties of a system can be obtained without having to know its underlying fundamental dynamics,” explains Remmen. “Instead, the bootstrap uses properties like symmetries or other mathematical criteria to construct the physics from the bottom up, ‘effectively pulling itself up by its bootstraps’. In our study, we bootstrapped scattering amplitudes, which describe the quantum probabilities for the interactions of particles or strings.”

Why strings?

String theory posits that the elementary building blocks of the universe are not point-like particles but instead tiny, vibrating strings. The different vibrational modes of these strings give rise to the various particles observed in nature, such as electrons and quarks. This elegant framework resolves many of the mathematical inconsistencies that plague attempts to formulate a quantum description of gravity. Moreover, it unifies gravity with the other fundamental forces: electromagnetic, weak, and strong interactions.

However, a major hurdle remains. The characteristic size of these strings is estimated to be around 10⁻³⁵ m, which is roughly 15 orders of magnitude smaller than the resolution of today’s particle accelerators, including the Large Hadron Collider. This makes experimental verification of string theory extraordinarily challenging, if not impossible, for the foreseeable future.

Faced with the experimental inaccessibility of strings, physicists have turned to theoretical methods like the bootstrap to test whether string theory aligns with fundamental principles. By focusing on the mathematical consistency of scattering amplitudes, the researchers imposed constraints on those amplitudes based on basic quantum mechanical requirements such as locality and unitarity.

“Locality means that forces take time to propagate: particles and fields in one place don’t instantaneously affect another location, since that would violate the rules of cause-and-effect,” says Remmen. “Unitarity is conservation of probability in quantum mechanics: the probability for all possible outcomes must always add up to 100%, and all probabilities are positive. This basic requirement also constrains scattering amplitudes in important ways.”

In addition to these principles, the team introduced further general conditions, such as the existence of an infinite spectrum of fundamental particles and specific high-energy behaviour of the amplitudes. These criteria have long been considered essential for any theory that incorporates quantum gravity.

Unique solution

Their result is a unique solution to the bootstrap equations: the Veneziano amplitude, a formula originally derived to describe string scattering. This discovery strongly indicates that string theory meets the most essential criteria for a quantum theory of gravity. However, the definitive answer to whether string theory is truly the “theory of everything” must ultimately come from experimental evidence.
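For reference, the four-point Veneziano amplitude in its standard textbook form (normalization conventions vary between treatments) is built from Euler gamma functions:

```latex
A(s,t) = \frac{\Gamma\!\left(-\alpha(s)\right)\,\Gamma\!\left(-\alpha(t)\right)}
              {\Gamma\!\left(-\alpha(s)-\alpha(t)\right)},
\qquad
\alpha(x) = \alpha(0) + \alpha' x .
```

Here s and t are the Mandelstam invariants of the two-to-two scattering process and α′ is the Regge slope, set by the string tension. The gamma functions give the amplitude poles at α(s) = 0, 1, 2, …, encoding exactly the infinite tower of particle states that the bootstrap conditions demand.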

Cheung explains, “Our work asks: what is the precise math problem whose solution is the scattering amplitude of strings? And is it the unique solution?”. He adds, “This work can’t verify the validity of string theory, which like all questions about nature is a question for experiment to resolve. But it can help illuminate whether the hypothesis that the world is described by vibrating strings is actually logically equivalent to a smaller, perhaps more conservative set of bottom up assumptions that define this math problem.”

The trio’s study opens up several avenues for further exploration. One immediate goal for the researchers is to generalize their analysis to more complex scenarios. For instance, the current work focuses on the scattering of two particles into two others. Future studies will aim to extend the bootstrap approach to processes involving multiple incoming and outgoing particles.

Another direction involves incorporating closed strings, which are loops that are distinct from the open strings analysed in this study. Closed strings are particularly important in string theory because they naturally describe gravitons, the hypothetical particles responsible for mediating gravity. While closed string amplitudes are more mathematically intricate, demonstrating that they too arise uniquely from the bootstrap equations would further bolster the case for string theory.

The research is described in Physical Review Letters.


Photonics West shines a light on optical innovation


SPIE Photonics West, the world’s largest photonics technologies event, takes place in San Francisco, California, from 25 to 30 January. Showcasing cutting-edge research in lasers, biomedical optics, biophotonics, quantum technologies, optoelectronics and more, Photonics West features leaders in the field discussing the industry’s challenges and breakthroughs, and sharing their research and visions of the future.

As well as 100 technical conferences with over 5000 presentations, the event brings together several world-class exhibitions, kicking off on 25 January with the BiOS Expo, the world’s largest biomedical optics and biophotonics exhibition.

The main Photonics West Exhibition starts on 28 January. Hosting more than 1200 companies, the event highlights the latest developments in laser technologies, optoelectronics, photonic components, materials and devices, and system support. The newest and fastest growing expo, Quantum West, showcases photonics as an enabling technology for a quantum future. Finally, the co-located AR | VR | MR exhibition features the latest extended reality hardware and systems. Here are some of the innovative products on show at this year’s event.

HydraHarp 500: a new era in time-correlated single-photon counting

Photonics West sees PicoQuant introduce its newest generation of event timer and time-correlated single-photon counting (TCSPC) unit – the HydraHarp 500. Setting a new standard in speed, precision and flexibility, the TCSPC unit is freely scalable with up to 16 independent channels and a common sync channel, which can also serve as an additional detection channel if no sync is required.

HydraHarp 500
Redefining what’s possible PicoQuant presents HydraHarp 500, a next-generation TCSPC unit that maximizes precision, flexibility and efficiency. (Courtesy: PicoQuant)

At the core of the HydraHarp 500 is its outstanding timing precision and accuracy, enabling precise photon timing measurements at exceptionally high data rates, even in demanding applications.

In addition to the scalable channel configuration, the HydraHarp 500 offers flexible trigger options to support a wide range of detectors, from single-photon avalanche diodes to superconducting nanowire single-photon detectors. Seamless integration is ensured through versatile interfaces such as USB 3.0 or an external FPGA interface for data transfer, while White Rabbit synchronization allows precise cross-device coordination for distributed setups.

The HydraHarp 500 is engineered for high-throughput applications, making it ideal for rapid, large-volume data acquisition. It offers 16+1 fully independent channels for true simultaneous multi-channel data recording and efficient data transfer via USB or the dedicated FPGA interface. Additionally, the HydraHarp 500 boasts industry-leading, extremely low dead-time per channel and no dead-time across channels, ensuring comprehensive datasets for precise statistical analysis.

Step into the future of photonics and quantum research with the HydraHarp 500. Whether it’s achieving precise photon correlation measurements, ensuring reproducible results or integrating advanced setups, the HydraHarp 500 redefines what’s possible – offering precision, flexibility and efficiency combined with reliability and seamless integration to achieve breakthrough results.
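The kind of photon-correlation measurement such hardware enables can be illustrated with a vendor-neutral sketch (this is not PicoQuant’s API, just illustrative Python): given timestamp streams from two detection channels, a coincidence histogram of arrival-time differences is the raw ingredient of g²(τ) correlation measurements.

```python
import numpy as np

def coincidence_histogram(t_a, t_b, window=100e-9, bin_width=1e-9):
    """Histogram of arrival-time differences (t_b - t_a) within +/-window,
    from two sorted arrays of photon timestamps in seconds."""
    nbins = int(round(2 * window / bin_width))
    edges = np.linspace(-window, window, nbins + 1)
    counts = np.zeros(nbins, dtype=np.int64)
    lo = np.searchsorted(t_b, t_a - window)   # first candidate partner per photon
    hi = np.searchsorted(t_b, t_a + window)   # one past the last candidate
    for a, i, j in zip(t_a, lo, hi):
        counts += np.histogram(t_b[i:j] - a, bins=edges)[0]
    return edges, counts

# toy data: two channels seeing uncorrelated Poissonian photons (flat histogram;
# a true single-photon emitter would instead show an antibunching dip at zero delay)
rng = np.random.default_rng(0)
t_a = np.sort(rng.uniform(0, 1e-3, 5_000))
t_b = np.sort(rng.uniform(0, 1e-3, 5_000))
edges, counts = coincidence_histogram(t_a, t_b)
print(counts.sum(), "coincidence pairs within +/-100 ns")
```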

For more information, visit www.picoquant.com or contact us at info@picoquant.com.

  • Meet PicoQuant at BiOS booth #8511 and Photonics West booth #3511.

SmarAct: shaping the future of precision

SmarAct is set to make waves at the upcoming SPIE Photonics West, the world’s leading exhibition for photonics, biomedical optics and laser technologies, and the parallel BiOS trade fair. SmarAct will showcase a portfolio of cutting-edge solutions designed to redefine precision and performance across a wide range of applications.

At Photonics West, SmarAct will unveil its latest innovations and present its well-established and much-appreciated iris diaphragms and optomechanical systems. All of the highlighted technologies exemplify SmarAct’s commitment to enabling superior control in optical setups, a critical requirement for research and industrial environments.

Attendees can also experience the unparalleled capabilities of electromagnetic positioners and SmarPod systems. With their hexapod-like design, these systems offer nanometre-scale precision and flexibility, making them indispensable tools for complex alignment tasks in photonics and beyond.

SmarAct’s advanced positioning systems
Ensuring optimal performance SmarAct’s advanced positioning systems provide the precision and stability required for the alignment and microassembly of intricate optical components. (Courtesy: SmarAct)

One major highlight is SmarAct’s debut of a 3D pick-and-place system designed for handling optical fibres. This state-of-the-art solution integrates precision and flexibility, offering a glimpse into the future of fibre alignment and assembly. Complementing this is a sophisticated gantry system for microassembly of optical components. Designed to handle large travel ranges with remarkable accuracy, this system meets the growing demand for precision in the assembly of intricate optical technologies. It combines the best of SmarAct’s drive technologies, such as fast (up to 1 m/s) and durable electromagnetic positioners and scanner stages based on piezo-driven mechanical flexures with maximum scanning speed and minimum scanning error.

Simultaneously, at the BiOS trade fair SmarAct will spotlight its new electromagnetic microscopy stage, a breakthrough specifically tailored for life sciences applications. This advanced stage delivers exceptional stability and adaptability, enabling researchers to push the boundaries of imaging and experimental precision. This innovation underscores SmarAct’s dedication to addressing the unique challenges faced by the biomedical and life sciences sectors, as well as bioprinting and tissue engineering companies.

Throughout the event, SmarAct’s experts will demonstrate these solutions in action, offering visitors an interactive and hands-on understanding of how these technologies can meet their specific needs. Visit SmarAct’s booths to engage with experts and discover how SmarAct solutions can empower your projects.

Whether you’re advancing research in semiconductors, developing next-generation photonic devices or pioneering breakthroughs in life sciences, SmarAct’s solutions are tailored to help you achieve your goals with unmatched precision and reliability.

Precision positioning systems enable diverse applications 

For 25 years Mad City Labs has provided precision instrumentation for research and industry – including nanopositioning systems, micropositioners, microscope stages and platforms, single-molecule microscopes, atomic-force microscopes (AFMs) and customized solutions.

The company’s newest micropositioning system – the MMP-UHV50 – is a modular, linear micropositioner designed for ultrahigh-vacuum (UHV) environments. Constructed entirely from UHV-compatible materials and carefully designed to eliminate sources of virtual leaks, the MMP-UHV50 offers 50 mm travel range with 190 nm step size and a maximum vertical payload of 2 kg.

The MMP-UHV50 micropositioning system
UHV compatible The new MMP-UHV50 micropositioning system is designed for ultrahigh-vacuum environments. (Courtesy: Mad City Labs)

Uniquely, the MMP-UHV50 incorporates a zero-power feature when not in motion, to minimize heating and drift. Safety features include limit switches and overheat protection – critical features when operating in vacuum environments. The system includes the Micro-Drive-UHV digital electronic controller, supplied with LabVIEW-based software and compatible with user-written software via the supplied DLL file (for example, Python, Matlab or C++).
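For groups writing their own control code, access through a supplied DLL usually follows the standard load-and-wrap pattern. Below is a minimal Python sketch using ctypes; the DLL filename, function names and signatures are invented placeholders for illustration, not Mad City Labs' documented API – consult the manual shipped with the controller for the real interface.

# Illustrative only: loading a vendor controller DLL from Python via ctypes.
# "microdrive.dll", MD_Connect and MD_MoveStep are hypothetical names.
import ctypes

lib = ctypes.CDLL("./microdrive.dll")        # load the supplied library

lib.MD_Connect.restype = ctypes.c_int        # declare signatures up front
lib.MD_MoveStep.argtypes = [ctypes.c_int, ctypes.c_int]
lib.MD_MoveStep.restype = ctypes.c_int

handle = lib.MD_Connect()                    # open a controller session
if handle > 0:
    status = lib.MD_MoveStep(handle, 1000)   # request 1000 steps (hypothetical units)
    print("move returned status code:", status)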

Other products from Mad City Labs include piezo nanopositioners featuring the company’s proprietary PicoQ sensors, which provide ultralow noise and excellent stability to yield sub-nanometre resolution. These high-performance sensors enable motion control down to the single picometre level.

For scanning probe microscopy, Mad City Labs' nanopositioning systems provide true decoupled motion with virtually undetectable out-of-plane movement, while their precision and stability yield high positioning performance and control. The company offers both an optical-deflection AFM – the MadAFM, a multimodal sample-scanning AFM in a compact tabletop design that is simple to install – and resonant probe AFM models.

The resonant probe products include the company's AFM controllers, MadPLL and QS-PLL, which enable users to build their own flexibly configured AFMs using Mad City Labs' micro- and nanopositioners. All of the AFM instruments are ideal for materials characterization, but the resonant probe AFMs are uniquely suitable for quantum sensing and nano-magnetometry applications.

Mad City Labs also offers standalone micropositioning products, including optical microscope stages, compact positioners for photonics and the Mad-Deck XYZ stage platform, all of which employ proprietary intelligent control to optimize stability and precision. They are also compatible with the high-resolution nanopositioning systems, enabling motion control across micro-to-picometre length scales.

Finally, for high-end microscopy applications, the RM21 single-molecule microscope, featuring the unique MicroMirror TIRF system, offers multi-colour total internal-reflection fluorescence microscopy with an excellent signal-to-noise ratio and efficient data collection, along with an array of options to support multiple single-molecule techniques.

Our product portfolio, coupled with our expertise in custom design and manufacturing, ensures that we are able to provide solutions for nanoscale motion for diverse applications such as astronomy, photonics, metrology and quantum sensing.

  • Learn more at BiOS booth #8525 and Photonics West booth #3525.

 

The post Photonics West shines a light on optical innovation appeared first on Physics World.

Trump nominates AI experts for key science positions

Par : No Author

Incoming US President Donald Trump has selected Silicon Valley executive Michael Kratsios as director of the Office of Science and Technology Policy (OSTP). Kratsios will also serve as Trump's science advisor, a position that, unlike the OSTP directorship, does not require approval by the US Senate. Meanwhile, computer scientist Lynne Parker from the University of Tennessee, Knoxville, has been appointed to a new position – executive director of the President's Council of Advisors on Science and Technology. Parker, who is a former member of OSTP, will also act as counsellor to the OSTP director.

Kratsios, with a BA in politics from Princeton University, was previously chief of staff to Silicon Valley venture capitalist Peter Thiel before becoming the White House’s chief technology officer in 2017 at the start of Trump’s first stint as US president. In addition to his technology remit, Kratsios was effectively Trump’s science advisor until meteorologist Kelvin Droegemeier took that position in January 2019. Kratsios then became the Department of Defense’s acting undersecretary of research and engineering. After the 2020 presidential election, Kratsios left government to run the San Francisco-based company Scale AI.

Parker has an MS from the University of Tennessee and a PhD from the Massachusetts Institute of Technology, both in computer science. She was founding director of the University of Tennessee's AI Tennessee Initiative before spending four years as a member of OSTP, bridging the first Trump and Biden administrations. There, she served as deputy chief technology officer and was the inaugural director of OSTP's National Artificial Intelligence Initiative Office.

Unlike some other Trump nominations, the appointments have been positively received by the science community. "APLU is enthusiastic that President-elect Trump has selected two individuals who recognize the importance of science to national competitiveness, health, and economic growth," noted the Association of Public and Land-grant Universities (APLU) – a membership organisation of public research universities – in a statement. Analysts expect the nominations to reflect the returning president's interest in pursuing AI, which could indicate a move towards technology over scientific research in the coming four years.

  • Bill Nelson – NASA’s departing administrator – has handed over a decision about when to retrieve samples from Mars to potential successor Jared Isaacman. In the wake of huge cost increases and long delays in the schedule for bringing back samples collected by the rover Perseverance, NASA had said last year that it would develop a fresh plan for the “Mars Sample Return” mission. Nelson now says the agency had two lower-cost plans in mind – but that a choice will not be made until mid-2026. One plan would use a sky crane system resembling that which delivered Perseverance to the Martian surface, while the other would require a commercially produced “heavy lift lander” to pick up samples. Each option could cost up to $7.5 bn – much less than the rejected plan’s $11 bn.

The post Trump nominates AI experts for key science positions appeared first on Physics World.

Fermilab seeks new boss after Lia Merminga resigns as director

Par : No Author

Lia Merminga has resigned as director of Fermilab – the US’s premier particle-physics lab. She stepped down yesterday after a turbulent year that saw staff layoffs, a change in the lab’s management contractor and accusations of a toxic atmosphere. Merminga is being replaced by Young-Kee Kim from the University of Chicago, who will serve as interim director until a permanent successor is found. Kim was previously Fermilab’s deputy director between 2006 and 2013.

Tracy Marc, a spokesperson for Fermilab, says that the search for Merminga's successor has already begun, although without a specific schedule. "Input from Fermilab employees is highly valued and we expect to have Fermilab employee representatives as advisory members on the search committee, just as has been done in the past," Marc told Physics World. "The search committee will keep the Fermilab community informed about the progress of this search."

The departure of Merminga, who became Fermilab director in August 2022, was announced by Paul Alivisatos, president of the University of Chicago. The university jointly manages the lab with Universities Research Association (URA), a consortium of research universities, as well as the industrial firms Amentum Environment & Energy, Inc. and Longenecker & Associates.

“Her dedication and passion for high-energy physics and Fermilab’s mission have been deeply appreciated,” Alivisatos said in a statement. “This leadership change will bring fresh perspectives and expertise to the Fermilab leadership team.”

Turbulent times

The reasons for Merminga's resignation are unclear, but Fermilab has endured a difficult two years, with questions raised about its internal management and external oversight. Last August a group of anonymous, self-styled whistleblowers published a 113-page "white paper" on the arXiv preprint server, asserting that the lab was "doomed without a management overhaul".

The document highlighted issues such as management cover-ups of dangerous behaviour, including guns being brought onto Fermilab's campus and a male employee's attack on a female colleague. In addition, key experiments such as the Deep Underground Neutrino Experiment suffered notable delays. Cost overruns also led to a "limited operations period" with most staff on leave in late August.

In October, the US Department of Energy, which oversees Fermilab, announced a new organization – Fermi Forward Discovery Group – to manage the lab. Yet that decision came under scrutiny given that the group is dominated by the University of Chicago and URA, which had already been part of the lab's management since 2007. Then a month later, almost 2.5% of Fermilab's employees were laid off, adding to the picture of an institution in crisis.

The whistleblowers, who told Physics World that they still stand by their analysis of the lab's issues, say that the layoffs "undermined Fermilab's scientific mission" and claim that they sidelined "some of its most accomplished" researchers. "Meanwhile, executive managers, insulated by high salaries and direct oversight responsibilities, remained unaffected," they allege.

Born in Greece, Merminga, 65, earned a BSc in physics from the University of Athens before moving to the University of Michigan where she completed an MS and PhD in physics. Before taking on Fermilab’s directorship, she held leadership posts in governmental physics-related institutions in the US and Canada.

The post Fermilab seeks new boss after Lia Merminga resigns as director appeared first on Physics World.

Antimatter partner of hyperhelium-4 is spotted at CERN

Par : No Author

CERN's ALICE Collaboration has found the first evidence for antihyperhelium-4, an antimatter hypernucleus that is a heavier version of antihelium-4. It contains two antiprotons, an antineutron and an antilambda baryon. The latter contains three antiquarks (up, down and strange – making it an antihyperon) and is electrically neutral like a neutron. The antihyperhelium-4 was created by smashing lead nuclei together at the Large Hadron Collider (LHC) in Switzerland, and the observation has a statistical significance of 3.5σ. While this is below the 5σ level that is generally accepted as a discovery in particle physics, the observation is in line with the Standard Model of particle physics. The detection therefore helps constrain theories beyond the Standard Model that try to explain why the universe contains much more matter than antimatter.
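For context, the σ values quoted here map onto probabilities via the tail of the standard normal distribution. A minimal sketch of that conversion using SciPy is shown below; the numbers are standard statistics, not taken from the ALICE paper.

# Convert a significance in sigma into a one-sided p-value -- the
# probability that pure background fluctuates up to at least that level.
from scipy.stats import norm

for sigma in (3.5, 5.0):
    p = norm.sf(sigma)  # survival function, P(Z > sigma)
    print(f"{sigma} sigma -> one-sided p = {p:.1e}")

# 3.5 sigma -> p ~ 2.3e-04 (roughly a 1-in-4300 background fluctuation)
# 5.0 sigma -> p ~ 2.9e-07 (the conventional discovery threshold)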

Hypernuclei are rare, short-lived atomic nuclei made up of protons, neutrons, and at least one hyperon. Hypernuclei and their antimatter counterparts can be formed within a quark–gluon plasma (QGP), which is created when heavy ions such as lead collide at high energies. A QGP is an extreme state of matter that also existed in the first millionth of a second following the Big Bang.

Exotic antinuclei

Just a few hundred picoseconds after being formed in collisions, antihypernuclei will decay via the weak force – creating two or more distinctive decay products that can be detected. The first antihypernucleus to be observed was a form of antihyperhydrogen called antihypertriton, which contains an antiproton, an antineutron, and an antilambda hyperon. It was discovered in 2010 by the STAR Collaboration, which smashed together gold nuclei at Brookhaven National Laboratory's Relativistic Heavy Ion Collider (RHIC).

Then in 2024, the STAR Collaboration reported the first observations of the decay products of antihyperhydrogen-4, which contains one more antineutron than antihypertriton.

Now, ALICE physicists have delved deeper into the world of antihypernuclei by doing a fresh analysis of data taken at the LHC in 2018 – where lead ions were collided at 5 TeV.

Using a machine learning technique to analyse the decay products of the nuclei produced in these collisions, the ALICE team identified the same signature of antihyperhydrogen-4 detected by the STAR Collaboration. This is the first time an antimatter hypernucleus has been detected at the LHC.

Rapid decay

But that is not all. The team also found evidence for another, slightly lighter antihypernucleus, called antihyperhelium-4. This contains two antiprotons, an antineutron, and an antihyperon. It decays almost instantly into an antihelium-3 nucleus, an antiproton, and a charged pion. The latter is a meson comprising a quark–antiquark pair.

Physicists describe production of hypernuclei in a QGP using the statistical hadronization model (SHM). For both antihyperhydrogen-4 and antihyperhelium-4, the masses and production yields measured by the ALICE team closely matched the predictions of the SHM – assuming that the particles were produced in a certain mixture of their excited and ground states.

The team’s result further confirms that the SHM can accurately describe the production of hypernuclei and antihypernuclei from a QGP. The researchers also found that equal numbers of hypernuclei and antihypernuclei are produced in the collisions, within experimental uncertainty. While this provides no explanation as to why there is much more matter than antimatter in the observable universe, the research allows physicists to put further constraints on theories that reach beyond the Standard Model of particle physics to try to explain this asymmetry.

The research could also pave the way for further studies into how hyperons within hypernuclei interact with their neighbouring protons and neutrons. With a deeper knowledge of these interactions, astronomers could gain new insights into the mysterious interior properties of neutron stars.

The observation is described in a paper that has been submitted to Physical Review Letters.

The post Antimatter partner of hyperhelium-4 is spotted at CERN appeared first on Physics World.

How publishing in Electrochemical Society journals fosters a sense of community

Par : No Author

The Electrochemical Society (ECS) is an international non-profit scholarly organization that promotes research, education and technological innovation in electrochemistry, solid-state science and related fields.

Founded in 1902, the ECS brings together scientists and engineers to share knowledge and advance electrochemical technologies.

As part of that mission, the society publishes several journals including the flagship Journal of the Electrochemical Society (JES), which is over 120 years old and covers a wide range of topics in electrochemical science and engineering.

Someone who has seen their involvement with the ECS and ECS journals increase over their career is chemist Trisha Andrew from the University of Massachusetts Amherst. She directs the wearable electronics lab, a multi-disciplinary research team that produces garment-integrated technologies using reactive vapor deposition.

Trisha Andrew from the University of Massachusetts Amherst. (Courtesy: Trisha Andrew)

Her involvement with the ECS began when she was invited by the editor-in-chief of ECS Sensors Plus to act as a referee for the journal. Andrew found the depth and practical application of the papers she reviewed interesting and of high quality. This resulted in her submitting her own work to ECS journals and she later became an associate editor for both ECS Sensors Plus and JES.

Professional opportunities

Physical chemist Weiran Zheng from the Guangdong Technion–Israel Institute of Technology in China, meanwhile, says that due to the reputation of ECS journals, they have been his "go-to" place to publish since graduate school.

Weiran Zheng
Physical chemist Weiran Zheng from the Guangdong Technion–Israel Institute of Technology in China. (Courtesy: Weiran Zheng)

One of his papers, entitled "Python for electrochemistry: a free and all-in-one toolset" (ECS Adv. 2 040502), has been downloaded over 8000 times and is currently the most-read ECS Advances article. This led to an invitation to deliver an ECS webinar – Introducing Python for Electrochemistry Research. "I never expected such an impact when the paper was accepted, and none of this would be possible without the platform offered by ECS journals," adds Zheng.

Publishing in ECS journals has helped Zheng's career advance through new connections and deeper involvement with ECS activities. This has not only boosted his research but also expanded his professional network. Given these benefits, Zheng plans to continue publishing his latest findings in ECS journals.

Highly cited papers

Battery researcher Thierry Brousse from Nantes University in France came to electrochemistry later in his career, having first carried out a PhD in high-temperature superconducting thin films at the University of Caen Normandy.

Thierry Brousse
Battery researcher Thierry Brousse from Nantes University in France. (Courtesy: Thierry Brousse)

When he began working in the field, he collaborated with the chemist Donald Schleich from Polytech Nantes, who was an ECS member. It was then that he began to read JES, finding it a prestigious platform for his research on supercapacitors and microdevices for energy storage. "Most of the inspiring scientific papers I was reading at that time were from JES," notes Brousse. "Naturally, my first papers were then submitted to this journal."

Brousse says that publishing in ECS journals has provided him with new collaborations as well as invitations to speak at major conferences. He emphasizes the importance of innovative work and the positive impact of publishing in ECS journals where some of his most cited work has been published.

Brousse, who is an associate editor for JES, adds that he particularly values how publishing with ECS journals fosters a quick integration into specific research communities. This, he says, has been instrumental in advancing his career.

Long-standing relationships

Robert Savinell's relationship with the ECS and ECS journals began during his PhD research in electrochemistry, which he carried out at the University of Pittsburgh. Now at Case Western Reserve University in Cleveland, Ohio, his research focuses on developing a flow battery for low-cost, long-duration energy storage, primarily using iron and water. It is designed to improve the efficiency of the power grid and accelerate the integration of solar and wind power supplies.

Robert F Savinell
Robert Savinell at Case Western Reserve University in Cleveland, Ohio. (Courtesy: Robert Savinell)

Savinell also leads a Department of Energy-funded Energy Frontier Research Center on Breakthrough Electrolytes for Energy Storage. This center focuses on fundamental research into nano- to meso-scale structured electrolytes for energy storage.

ECS journals have been a cornerstone of his professional career, providing a platform for his research and fostering valuable professional connections. “Some of my research published in JES many years ago are still cited today,” says Savinell.

Savinell's contributions to the ECS community have been recognized through various roles: he has been elected a fellow of the ECS and has previously served as chair of the society's electrolytic and electrochemical engineering division. He was editor-in-chief of JES for the past decade and was most recently elected third vice president of the ECS.

Savinell says that the connections he has made through ECS have been significant, ranging from funding programme managers to personal friends. “My whole professional career has been focused around ECS,” he says, adding that he aims to continue to publish in ECS journals and hopes that his work will inspire solutions to some of society’s biggest problems.

Personal touch

For many researchers in the field, publishing in ECS journals has brought several benefits. These include the high level of engagement and the personal touch within the ECS community, as well as the promotional support that ECS provides for published work.

The broad portfolio of ECS journals also ensures that researchers' work reaches the right audience, and such visibility and engagement are significant factors when it comes to advancing the careers of scientists. "The difference between ECS journals is the amount of engagement, views and reception that you receive," says Andrew. "That's what I found to be the most unique."

The post How publishing in Electrochemical Society journals fosters a sense of community appeared first on Physics World.

Higher-order brain function revealed by new analysis of fMRI data

Par : No Author

An international team of researchers has developed new analytical techniques that consider interactions between three or more regions of the brain – providing a more in-depth understanding of human brain activity than conventional analysis. Led by Andrea Santoro at the Neuro-X Institute in Geneva and Enrico Amico at the UK’s University of Birmingham, the team hopes its results could help neurologists identify a vast array of new patterns in human brain data.

To study the structure and function of the brain, researchers often rely on network models. In these, nodes represent specific groups of neurons and edges represent the connections between those groups, inferred from statistical correlations in their activity.

Within these models, brain activity has often been represented as pairwise interactions between two specific regions. Yet as the latest advances in neurology have clearly shown, the real picture is far more complex.

“To better analyse how our brains work, we need to look at how several areas interact at the same time,” Santoro explains. “Just as multiple weather factors – like temperature, humidity, and atmospheric pressure – combine to create complex patterns, looking at how groups of brain regions work together can reveal a richer picture of brain function.”

Higher-order interactions

Yet with the mathematical techniques applied in previous studies, researchers could not confirm whether network models incorporating these higher-order interactions between three or more brain regions really are more accurate than simpler models that only account for pairwise interactions.

To shed new light on this question, Santoro’s team built upon their previous analysis of functional MRI (fMRI) data, which identify brain activity by measuring changes in blood flow.

Their approach combined two powerful tools. One is topological data analysis, which identifies patterns within complex datasets like fMRI, where each data point depends on a large number of interconnected variables. The other is time series analysis, which identifies patterns in brain activity that emerge over time. Together, these tools allowed the researchers to identify complex patterns of activity occurring across three or more brain regions simultaneously.
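To make the pairwise-versus-higher-order distinction concrete, here is a deliberately simple toy example in Python: it compares an ordinary pairwise correlation with the most naive three-region co-fluctuation statistic on synthetic signals. This is only an illustration of the concept – the study itself uses topological data analysis, not this statistic.

# Toy contrast between a pairwise correlation and a naive triplet
# co-fluctuation measure on synthetic "fMRI-like" time series.
import numpy as np

rng = np.random.default_rng(0)
n = 500                                      # number of time points
common = rng.standard_normal(n) ** 2 - 1     # shared, skewed drive

# Three "regions" sharing the common drive plus independent noise
x, y, z = (common + rng.standard_normal(n) for _ in range(3))

def zscore(a):
    return (a - a.mean()) / a.std()

x, y, z = map(zscore, (x, y, z))

pairwise = np.mean(x * y)     # classic two-region correlation
triplet = np.mean(x * y * z)  # joint co-fluctuation of all three at once

print(f"pairwise correlation (x, y): {pairwise:.2f}")
print(f"triplet co-fluctuation:      {triplet:.2f}")

In this toy the triplet statistic picks up fluctuations shared by all three regions at once – exactly the kind of joint structure that a list of pairwise correlations does not summarize.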

To test their approach, the team applied it to fMRI data taken from 100 healthy participants in the Human Connectome Project. “By applying these tools to brain scan data, we were able to detect when multiple regions of the brain were interacting at the same time, rather than only looking at pairs of brain regions,” Santoro explains. “This approach let us uncover patterns that might otherwise stay hidden, giving us a clearer view of how the brain’s complex network operates as a whole.”

Just as they hoped, this analysis of higher-order interactions provided far deeper insights into the participants’ brain activity compared with traditional pairwise methods. “Specifically, we were better able to figure out what type of task a person was performing, and even uniquely identify them based on the patterns of their brain activity,” Santoro continues.

Distinguishing between tasks

With its combination of topological and time series analysis, the team's method could distinguish between a wide variety of tasks performed by the participants, including the expression of emotion, use of language and social interaction.

By building further on their approach, Santoro and colleagues are hopeful it could eventually be used to uncover a vast space of as-yet unexplored patterns within human brain data.

By tailoring the approach to the brains of individual patients, this could ultimately enable researchers to draw direct links between brain activity and physical actions.

“Down the road, the same approach might help us detect subtle brain changes that occur in conditions like Alzheimer’s disease – possibly before symptoms become obvious – and could guide better therapies and earlier interventions,” Santoro predicts.

The research is described in Nature Communications.

The post Higher-order brain function revealed by new analysis of fMRI data appeared first on Physics World.

Start-stop operation and the degradation impact in electrolysis

Par : No Author


This webinar will detail recent efforts in proton exchange membrane-based low-temperature electrolysis degradation, focused on losses due to simulated start-stop operation and anode catalyst layer redox transitions. Ex situ testing indicated that repeated redox cycling accelerates catalyst dissolution, due to near-surface reduction and the higher dissolution kinetics of metals when cycling to high potentials. Similar results occurred in situ, where a large decrease in cell kinetics was found, along with iridium migrating from the anode catalyst layer into the membrane. Additional processes were observed, however, including changes in catalyst oxidation, the formation of thinner and denser catalyst layers, and platinum migration from the transport layer coating. Complicating factors, including the loss of water flow and temperature control, were evaluated, and a higher rate of interfacial tearing and delamination was found. Current efforts are focused on bridging these studies into a more relevant field test and include evaluating possible differences in catalyst reduction through an electrochemical process versus hydrogen exposure, either direct or through crossover. These studies seek to identify degradation mechanisms and voltage-loss acceleration, and to demonstrate the impact of operational stops on electrolyzer lifetime.

An interactive Q&A session follows the presentation.

Shaun Alia

Shaun Alia has worked in several areas related to electrochemical energy conversion and storage, including proton and anion exchange membrane-based electrolyzers and fuel cells, direct methanol fuel cells, capacitors, and batteries. His current research involves understanding electrochemical and degradation processes, component development, and materials integration and optimization. Within HydroGEN, part of the US Department of Energy's Energy Materials Network, Alia has been involved in low-temperature electrolysis through NREL capabilities in materials development and ex situ and in situ characterization. He is also active in in situ durability, diagnostics, and accelerated stress test development for H2@Scale and H2NEW.

 

 

The post Start-stop operation and the degradation impact in electrolysis appeared first on Physics World.

NMR technology shows promise in landmine clearance field trials

Par : No Author

Novel landmine detectors based on nuclear magnetic resonance (NMR) have passed their first field-trial tests. Built by the Sydney-based company mRead, the devices could speed up the removal of explosives in former war zones. The company tested its prototype detectors in Angola late last year, finding that they could reliably sense explosives buried up to 15 cm underground – the typical depth of a deployed landmine.

Landmines are a problem in many countries recovering from armed conflict. According to NATO, some 110 million landmines are located in 70 countries worldwide, including Cambodia and Bosnia, despite conflict in both nations having ended decades ago. Ukraine is currently the world's most mine-infested country, with vast swathes of its agricultural land potentially unusable for decades.

Such landmines also continue to kill innocent civilians. According to the Landmine and Cluster Munition Monitor, nearly 2000 people died from landmine incidents in 2023 – double the number compared to 2022 – and a further 3660 were injured. Over 80% of the casualties were civilians, with children accounting for 37% of deaths.

Humanitarian “deminers”, who are trying to remove these explosives, currently inspect suspected minefields with hand-held metal detectors. These devices use magnetic induction coils that respond to the metal components present in landmines. Unfortunately, they react to every random piece of metal and shrapnel in the soil, leading to high rates of false positives.

“It’s not unreasonable with a metal detector to see 100 false alarms for every mine that you clear,” says Matthew Abercrombie, research and development officer at the HALO Trust, a de-mining charity. “Each of these false alarms, you still have to investigate as if it were a mine.” But for every mine excavated, about 50 hours is wasted on excavating false positives, meaning that clearing a single minefield could take months or years.

“Landmines make time stand still,” adds HALO Trust research officer Ronan Shenhav. “They can lie silent and invisible in the ground for decades. Once disturbed they kill and maim civilians, as well as valuable livestock, preventing access to schools, roads, and prime agricultural land.”

Hope for the future

One alternative landmine-detecting technology is NMR, which is already widely used to look for underground mineral resources and to scan for drugs at airports. In NMR, atomic nuclei exposed to a strong constant magnetic field and a weak oscillating field emit a faint electromagnetic signal. Because the frequency of this signal depends on the molecule's structure, every chemical compound has a specific electromagnetic fingerprint.

The problem with using it to sniff out landmines is pervasive environmental radio noise, with the electromagnetic signal emitted by the excited molecules being 16 orders of magnitude weaker than that used to trigger the effect. Digital radio transmission, electricity generators and industrial infrastructure all produce noise of the same frequency as the one the detectors are listening for. Even thunderstorms trigger such a radio hum that can spread across vast distances.

mRead scanner
The handheld detectors developed by mRead emit radio pulses at frequencies between 0.5 and 5 MHz. (Courtesy: mRead)

“It’s easier to listen to the Big Bang at the edge of the Universe,” says Nick Cutmore, chief technology officer at mRead. “Because the signal is so small, every interference stops you. That stopped a lot of practical applications of this technique in the past.” Cutmore is part of a team that has been trying to cut the effects of noise since the early 2000s, eventually finding a way to filter out this persistent crackle through a proprietary sensor design.

mRead's handheld detectors emit radio pulses at frequencies between 0.5 and 5 MHz, much higher than the kilohertz-range frequencies used by conventional metal detectors. The signal elicits a magnetic resonance response in atoms of sodium, potassium and chlorine, which are commonly found in explosives. A sensor inside the detector "listens out" for the particular fingerprint signal, locating a forgotten mine more precisely than is possible with a conventional metal detector.
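At heart, the detection step is a template-matching problem: pulling a known but feeble fingerprint waveform out of a far louder background. The toy sketch below illustrates that general idea with a matched filter (cross-correlation against a template); the sampling rate, tone frequency and amplitudes are invented for illustration, and this is not mRead's proprietary processing.

# Toy matched-filter detection of a weak, known "fingerprint" tone in noise.
# All numbers are illustrative; the real detector's processing is proprietary.
import numpy as np

rng = np.random.default_rng(1)
fs = 20e6                                  # sampling rate (Hz), assumed
t = np.arange(int(20e-3 * fs)) / fs        # a 20 ms record
f0 = 1.5e6                                 # assumed fingerprint frequency (Hz)

template = np.sin(2 * np.pi * f0 * t)      # the known fingerprint waveform
record = 0.01 * template + rng.standard_normal(t.size)  # tone buried in noise

# Correlating against the template adds the tone coherently while the
# noise partially cancels, lifting the weak signal above the background.
norm = np.sqrt(np.dot(template, template))
with_signal = np.dot(record, template) / norm
noise_only = np.dot(rng.standard_normal(t.size), template) / norm

print(f"matched-filter score, signal present: {with_signal:.1f}")  # ~4 to 5
print(f"matched-filter score, noise only:     {noise_only:.1f}")   # ~0 +/- 1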


Given that the detected signal is so small, it has to be amplified – but amplification adds noise. The company says it has found a way to ensure that the electronics in the detector do not exacerbate the problem. "Our current handheld system only consumes 40 to 50 W when operating," says Cutmore. "Previous systems have sometimes operated at a few kilowatts, making them power-hungry and bulky."

Having tested the prototype detectors in a simulated minefield in Australia in August 2024, mRead engineers have now deployed them in minefields in Angola in cooperation with the HALO Trust. Because the detectors respond directly to the explosive substance, they almost completely eliminated false positives, allowing deminers to double-check locations flagged by metal detectors before time-consuming digging took place.

During the three-week trial, the researchers also detected mines with a low metal content, which are difficult to spot with metal detectors. "Instead of doing 1000 metal detections and finding one mine, we can isolate those detections very quickly before people start digging," says Cutmore.

Researchers at mRead plan to return to Angola later this year for further tests. They also want to fine-tune their prototypes and begin working on devices that could be produced commercially. "I am tremendously excited by the results of these trials," says James Cowan, chief executive officer of the HALO Trust. "With over two million landmines laid in Ukraine since 2022, landmine clearance needs to be faster, safer, and smarter."

The post NMR technology shows promise in landmine clearance field trials appeared first on Physics World.

Solid-state nuclear clocks brought closer by physical vapour deposition

Par : No Author
Solid-state clock Illustration of how thorium atoms are vaporized (bottom) and then deposited in a thin film on the substrate’s surface (middle). This film could form the basis for a nuclear clock (top). (Courtesy: Steven Burrows/Ye group)

Physicists in the US have taken an important step towards a practical nuclear clock by showing that the physical vapour deposition (PVD) of thorium-229 could reduce the amount of this expensive and radioactive isotope needed to make a timekeeper. The research could usher in an era of robust and extremely accurate solid-state clocks that could be used in a wide range of commercial and scientific applications.

Today, the world's most precise atomic clocks are the strontium optical lattice clocks created by Jun Ye's group at JILA in Boulder, Colorado. These are accurate to within a second over the age of the universe. However, because these clocks use an atomic transition between electron energy levels, they can easily be disrupted by external electromagnetic fields. This means that the clocks must be operated in isolation in a stable lab environment. While other types of atomic clock are much more robust – some are deployed on satellites – they are nowhere near as accurate as optical lattice clocks.

Some physicists believe that transitions between energy levels in atomic nuclei could offer a way to make robust, portable clocks that deliver very high accuracy. As well as being very small and governed by the strong force, nuclei are shielded from external electromagnetic fields by their own electrons. And unlike optical atomic clocks, which use a very small number of delicately-trapped atoms or ions, many more nuclei can be embedded in a crystal without significantly affecting the clock transition. Such a crystal could be integrated on-chip to create highly robust and highly accurate solid-state timekeepers.

Sensitive to new physics

Nuclear clocks would also be much more sensitive to new physics beyond the Standard Model – allowing physicists to explore hypothetical concepts such as dark matter. “The nuclear energy scale is millions of electron volts; the atomic energy scale is electron volts; so the effects of new physics are also much stronger,” explains Victor Flambaum of Australia’s University of New South Wales.

Normally, a nuclear clock would require a laser that produces coherent gamma rays – something that does not exist. By exquisite good fortune, however, there is a single transition between the ground and excited states of one nucleus in which the potential energy changes due to the strong nuclear force and the electromagnetic interaction almost exactly cancel, leaving an energy difference of just 8.4 eV. This corresponds to vacuum ultraviolet light, which can be created by a laser.
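A quick back-of-the-envelope check (standard constants, rounded) confirms that an 8.4 eV transition sits in the vacuum ultraviolet:

$\lambda = \frac{hc}{E} \approx \frac{1240\ \mathrm{eV\,nm}}{8.4\ \mathrm{eV}} \approx 148\ \mathrm{nm}$

This is well below the roughly 200 nm cut-off at which air begins to absorb – hence "vacuum" ultraviolet.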

That nucleus is thorium-229, but as Ye’s postgraduate student Chuankun Zhang explains, it is very expensive. “We bought about 700 µg for $85,000, and as I understand it the price has been going up”.

In September, Zhang and colleagues at JILA measured the frequency of the thorium-229 transition with unprecedented precision using their strontium-87 clock as a reference. They used thorium-doped calcium fluoride crystals. “Doping thorium into a different crystal creates a kind of defect in the crystal,” says Zhang. “The defects’ orientations are sort of random, which may introduce unwanted quenching or limit our ability to pick out specific atoms using, say, polarization of the light.”

Layers of thorium fluoride

In the new work, the researchers collaborated with colleagues in Eric Hudson's group at the University of California, Los Angeles and others to form layers of thorium fluoride between 30 nm and 100 nm thick on crystalline substrates such as magnesium fluoride. They used PVD, a well-established technique in which a material is evaporated from a hot crucible and then condensed onto a substrate. The resulting samples contained three orders of magnitude less thorium-229 than the crystals used in the September experiment, but had a comparable number of thorium atoms per unit area.

The JILA team sent the samples to Hudson’s lab for interrogation by a custom-built vacuum ultraviolet laser. Researchers led by Hudson’s student Richard Elwell observed clear signatures of the nuclear transition and found the lifetime of the excited state to be about four times shorter than observed in the crystal. While the discrepancy is not understood, the researchers say this might not be problematic in a clock.

More significant challenges lie in the surprisingly small fraction of thorium nuclei participating in the clock operation – with the measured signal about 1% of the expected value, according to Zhang. “There could be many reasons. One possibility is because the vapour deposition process isn’t controlled super well such that we have a lot of defect states that quench away the excited states.” Beyond this, he says, designing a mobile clock will entail miniaturizing the laser.

Flambaum, who was not involved in the research, says that it marks "a very significant technical advance" in the quest to build a solid-state nuclear clock – something that he believes could be useful for sensing everything from oil to variations in the fine structure constant. "As a standard of frequency a solid-state clock is not very good because it's affected by the environment," he says. "As soon as we know the frequency very accurately we will do it with [trapped] ions, but that has not been done yet."

The research is described in Nature.

The post Solid-state nuclear clocks brought closer by physical vapour deposition appeared first on Physics World.

Moonstruck: art and science collide in stunning collection of lunar maps and essays

Par : No Author

As I write this [and don’t tell the Physics World editors, please] I’m half-watching out of the corner of my eye the quirky French-made, video-game spin-off series Rabbids Invasion. The mad and moronic bunnies (or, in a nod to the original French, Les Lapins Crétins) are presently making another attempt to reach the Moon – a recurring yet never-explained motif in the cartoon – by stacking up a vast pile of junk; charming chaos ensues.

As explained in LUNAR: A History of the Moon in Myths, Maps + Matter – the exquisite new Thames & Hudson book that presents the stunning Apollo-era Lunar Atlas alongside a collection of charming essays – madness has long been associated with the Moon. One suspects there was a good kind of mania behind the drawing up of the Lunar Atlas, a series of geological maps plotting the rock formations on the Moon's surface that are as much art as they are a visualization of data. And having drooled over LUNAR, truly the crème de la crème of coffee-table books, one cannot fail to become a little mad for the Moon too.

Many faces of the Moon

As well as an exploration of the Moon’s connections (both etymologically and philosophically) to lunacy by science writer Kate Golembiewski, the varied and captivating essays of 20 authors collected in LUNAR cover the gamut from the Moon’s role in ancient times (did you know that the Greeks believed that the souls of the dead gather around the Moon?) through to natural philosophy, eclipses, the space race and the Artemis Programme. My favourite essays were the more off-beat ones: the Moon in silent cinema, for example, or its fascinating influence on “cartes de visite”, the short-lived 19th-century miniature images whose popularity was boosted by Queen Victoria and Prince Albert. (I, for one, am now quite resolved to have my portrait taken with a giant, stylised, crescent moon prop.)

The pulse of LUNAR, however, is the breathtaking reproduction of all 44 of the exquisitely hand-drawn 1:1,000,000 scale maps – or "quadrangles" – that make up the US Geological Survey (USGS)/NASA Lunar Atlas (see header image).

Drawn up between 1962 and 1974 by a team of 24 cartographers, illustrators, geographers and geologists, the astonishing Lunar Atlas captures the entirety of the Moon's near side: every crater and lava-filled mare ("sea"), every terra (highland) and volcanic dome. The work began as a way to guide the robotic and human exploration of the Moon's surface and was soon augmented with images and rock samples from the missions themselves.

One could be hard-pushed to sum it up better than the American science writer Dava Sobel, who pens the book's foreword: "I've been to the Moon, of course. Everyone has, at least vicariously, visited its stark landscapes, driven over its unmarked roads. Even so, I've never seen the Moon quite the way it appears here – a black-and-white world rendered in a riot of gorgeous colours."

Many moons ago

Having been trained in geology, the sections of the book covering the history of the Lunar Atlas piqued my particular interest. The Lunar Atlas was not the first attempt to map the surface of the Moon; one of the reproductions in the book shows an earlier effort from 1961 drawn up by USGS geologists Robert Hackman and Eugene Shoemaker.

Hackman and Shoemaker’s map shows the Moon’s Copernicus region, named after its central crater, which in turn honours the Renaissance-era Polish polymath Nicolaus Copernicus. It served as the first demonstration that the geological principles of stratigraphy (the study of rock layers) as developed on the Earth could also be applied to other bodies. The duo started with the law of superposition; this is the principle that when one finds multiple layers of rock, unless they have been substantially deformed, the older layer will be at the bottom and the youngest at the top.

“The chronology of the Moon’s geologic history is one of violent alteration,” explains science historian Matthew Shindell in LUNAR’s second essay. “What [Hackman and Shoemaker] saw around Copernicus were multiple overlapping layers, including the lava plains of the maria […], craters displaying varying degrees of degradations, and materials and features related to the explosive impacts that had created the craters.”

From these the pair developed a basic geological timeline, unpicking the recent history of the Moon one overlapping feature at a time. They identified five eras, with the Copernican – named after the crater and beginning 1.1 billion years ago – being the most recent.

Considering it was based on observations of just one small region of the Moon, their timescale was remarkably accurate, Shindell explains, although subsequent observations have redefined its stratigraphic units – for example by adding the Pre-Nectarian as the earliest era (predating the formation of Nectaris, the oldest basin), whose rocks can still be found broken up and mixed into the lunar highlands.

Accordingly, the different quadrangles of the atlas very much represent an evolving work, developing as lunar exploration progressed. Later maps tended to be more detailed, reflecting a more nuanced understanding of the Moon's geological history.

New moon

Parts of the Lunar Atlas have recently found new life in the development of the first-ever complete map of the lunar surface, the “Unified Geologic Map of the Moon”. The new digital map combines the Apollo-era data with that from more recent satellite missions, including the Japan Aerospace Exploration Agency (JAXA)’s SELENE orbiter.

As former USGS Director and NASA astronaut Jim Reilly said when the unified map was first published back in 2020: “People have always been fascinated by the Moon and when we might return. So, it’s wonderful to see USGS create a resource that can help NASA with their planning for future missions.”

I might not be planning a Moon mission (whether by rocket or teetering tower of clutter), but I am planning to give the stunning LUNAR pride of place on my coffee table next time I have guests over – that’s how much it’s left me, ahem, “over the Moon”.

  • 2024 Thames and Hudson 256pp £50.00
The post Moonstruck: art and science collide in stunning collection of lunar maps and essays appeared first on Physics World.

Entanglement entropy in protons affects high-energy collisions, calculations reveal

Par : No Author

An international team of physicists has used the principle of entanglement entropy to examine how particles are produced in high-energy electron–proton collisions. Led by Kong Tu at Brookhaven National Laboratory in the US, the researchers showed that quarks and gluons in protons are deeply entangled and approach a state of maximum entanglement when they take part in high-energy collisions.

While particle physicists have made significant progress in understanding the inner structures of protons, neutrons, and other hadrons, there is still much to learn. Quantum chromodynamics (QCD) says that the proton and other hadrons comprise quarks, which are tightly bound together via exchanges of gluons – mediators of the strong force. However, using QCD to calculate the properties of hadrons is notoriously difficult except under certain special circumstances.

Calculations can be simplified by describing the quarks and gluons as partons in a model that was developed in the late 1960s by James Bjorken, Richard Feynman, Vladimir Gribov and others. "Here, all the partons within a proton appear 'frozen' when the proton is moving very fast relative to an observer, such as in high-energy particle colliders," explains Tu.

Dynamic and deeply complex interactions

While the parton model is useful for interpreting the results of particle collisions, it cannot fully capture the dynamic and deeply complex interactions between quarks and gluons within protons and other hadrons. These interactions are quantum in nature and therefore involve entanglement. This is a purely quantum phenomenon whereby a group of particles can be more highly correlated than is possible in classical physics.

“To analyse this concept of entanglement, we utilize a tool from quantum information science named entanglement entropy, which quantifies the degree of entanglement within a system,” Tu explains.

In physics, entropy is used to quantify the degree of randomness and disorder in a system. However, it can also be used in information theory to measure the degree of uncertainty within a set of possible outcomes.

“In terms of information theory, entropy measures the minimum amount of information required to describe a system,” Tu says. “The higher the entropy, the more information is needed to describe the system, meaning there is more uncertainty in the system. This provides a dynamic picture of a complex proton structure at high energy.”
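In standard notation – a textbook definition rather than anything specific to this analysis – the entanglement entropy of a subsystem $A$ of a pure state $|\psi\rangle$ is the von Neumann entropy of its reduced density matrix:

$S_A = -\mathrm{Tr}\left(\rho_A \ln \rho_A\right), \qquad \rho_A = \mathrm{Tr}_B\, |\psi\rangle\langle\psi|$

In terms of the eigenvalues $p_i$ of $\rho_A$, this is the Shannon form $S_A = -\sum_i p_i \ln p_i$: it vanishes for an unentangled product state and is largest when all the $p_i$ are equal, which is the maximally entangled case.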

Deeply entangled

In this context, particles in a system with high entanglement entropy will be deeply entangled – whereas those in a system with low entanglement entropy will be mostly uncorrelated.

In recent studies, entanglement entropy has been used to describe how hadrons are produced through deep inelastic scattering interactions – such as when an electron or neutrino collides with a hadron at high energy. However, the evolution with energy of entanglement entropy within protons had gone largely unexplored. "Before we did this work, no one had looked at entanglement inside of a proton in experimental high-energy collision data," says Tu.

Now, Tu’s team investigated how entanglement entropy varies with the speed of the proton – and how this relationship relates to the hadrons created during inelastic collisions.

Matching experimental data

Their study revealed that the equations of QCD can accurately predict the evolution of entanglement entropy, with the results closely matching experimental collision data. Perhaps most strikingly, they discovered that as entanglement entropy increases at high energies, the system may approach a state of maximum entanglement under certain conditions. This high degree of entropy is evident in the large numbers of particles produced in electron–proton collisions.

The researchers are now confident that their approach could lead to further insights about QCD. "This method serves as a powerful tool for studying not only the structure of the proton, but also those of the nucleons within atomic nuclei," Tu explains. "It is particularly useful for investigating the underlying mechanisms by which nucleons are modified in the nuclear environment."

In the future, Tu and colleagues hope that their model could boost our understanding of processes such as the formation and fragmentation of hadrons within the high-energy jets created in particle collisions, and the resulting shift in parton distributions within atomic nuclei. Ultimately, this could lead to a fresh new perspective on the inner workings of QCD.

The research is described in Reports on Progress in Physics.

The post Entanglement entropy in protons affects high-energy collisions, calculations reveal appeared first on Physics World.

PLANCKS physics quiz – how do you measure up against the brightest physics students in the UK and Ireland?

Par : No Author

Each year, the International Association of Physics Students organizes a physics competition for bachelor’s and master’s students from across the world. Known as the Physics League Across Numerous Countries for Kick-ass Students (PLANCKS), it’s a three-day event where teams of three to four students compete to answer challenging physics questions.

In the UK and Ireland, teams compete in a preliminary competition to be sent to the final. Here are some fiendish questions from past PLANCKS UK and Ireland preliminaries and the 2024 final in Dublin, written by Anthony Quinlan and Sam Carr, for you to try this holiday season.

Question 1: 4D Sun

Imagine you have been transported to another universe with four spatial dimensions. What would the colour of the Sun be in this four-dimensional universe? You may assume that the surface temperature of the Sun is the same as in our universe and is approximately T = 6 × 10³ K. [10 marks]

Boltzmann constant, kB = 1.38 × 10⁻²³ J K⁻¹

Speed of light, c = 3 × 10⁸ m s⁻¹

Question 2: Heavy stuff

In a parallel universe, two point masses, each of 1 kg, start at rest a distance of 1 m apart. The only force on them is their mutual gravitational attraction, F = −Gm₁m₂/r². If it takes 26 hours and 42 minutes for the two masses to meet in the middle, calculate the value of the gravitational constant G in this universe. [10 marks]

Question 3: Just like clockwork

Consider a pendulum clock that is accurate on the Earth’s surface. Figure 1 shows a simplified view of this mechanism.

Simplified schematic of a pendulum clock mechanism
1 Tick tock Simplified schematic of a pendulum clock mechanism. When the pendulum swings one way (a), the escapement releases the gear attached to the hanging mass and allows it to fall. When the pendulum swings the other way (b) the escapement stops the gear attached to the mass moving so the mass stays in place. (Courtesy: Katherine Skipper/IOP Publishing)

A pendulum clock runs on the gravitational potential energy from a hanging mass (1). The other components of the clock mechanism regulate the speed at which the mass falls so that it releases its gravitational potential energy over the course of a day. This is achieved using a swinging pendulum of length l (2), whose period is given by

$T = 2\pi\sqrt{\frac{l}{g}}$

where g is the acceleration due to gravity.

Each time the pendulum swings, it rocks a mechanism called an “escapement” (3). When the escapement moves, the gear attached to the mass (4) is released. The mass falls freely until the pendulum swings back and the escapement catches the gear again. The motion of the falling mass transfers energy to the escapement, which gives a “kick” to the pendulum that keeps it moving throughout the day.
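As a quick sanity check of the period formula (not part of the marked question), a pendulum roughly a metre long swings with a period of about two seconds:

# Evaluate T = 2*pi*sqrt(l/g) for the classic "seconds pendulum".
import math

g = 9.81                            # m s^-2, surface value (assumed)
l = 0.994                           # m, length giving a two-second period
T = 2 * math.pi * math.sqrt(l / g)
print(f"T = {T:.3f} s")             # ~2.000 s: one second per half-swing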

Radius of the Earth, R = 6.3781 × 10⁶ m

Period of one Earth day, τ₀ = 8.64 × 10⁴ s

How slow will the clock be over the course of a day if it is lifted to the hundredth floor of a skyscraper? Assume the height of each storey is 3 m. [4 marks]

Question 4: Quantum stick

Imagine an infinitely thin stick of length 1 m and mass 1 kg that is balanced on its end. Classically this is an unstable equilibrium, although the stick will stay there forever if it is perfectly balanced. However, in quantum mechanics there is no such thing as perfectly balanced due to the uncertainty principle – you cannot have the stick perfectly upright and not moving at the same time. One could argue that the quantum mechanical effects of the uncertainty principle on the system are overpowered by others, such as air molecules and photons hitting it or the thermal excitation of the stick. Therefore, to investigate we would need ideal conditions such as a dark vacuum, and cooling to a few millikelvins, so the stick is in its ground state.

Moment of inertia for a rod,

$I = \frac{1}{3}ml^2$

where m is the mass and l is the length.

Uncertainty principle,

$\Delta x\, \Delta p \geq \frac{\hbar}{2}$

There are several possible approximations and simplifications you could make in solving this problem, including:

sinθ ≈ θ for small θ

$\cosh^{-1}x = \ln\left(x + \sqrt{x^2 - 1}\right)$

and

$\sinh^{-1}x = \ln\left(x + \sqrt{x^2 + 1}\right)$

Calculate the maximum time it would take such a stick to fall over and hit the ground if it is placed in a state compatible with the uncertainty principle. Assume that you are on the Earth’s surface. [10 marks]

Hint: Consider the two possible initial conditions that arise from the uncertainty principle.

  • Answers will be posted here on the Physics World website next month. There are no prizes.
  • If you’re a student who wants to sign up for the 2025 edition of PLANCKS UK and Ireland, entries are now open at plancks.uk

The post PLANCKS physics quiz – how do you measure up against the brightest physics students in the UK and Ireland? appeared first on Physics World.

How the operating window of LFP/Graphite cells affects their lifetime

Par : No Author

 

Lithium iron phosphate (LFP) battery cells are ubiquitous in electric vehicles and stationary energy storage because they are cheap and have a long lifetime. This webinar will present our studies comparing 240 mAh LFP/graphite pouch cells undergoing charge-discharge cycles over five state of charge (SOC) windows (0%–25%, 0%–60%, 0%–80%, 0%–100%, and 75%–100%). To accelerate the degradation, elevated temperatures of 40 °C and 55 °C were used; at more realistic operating temperatures, LFP cells are expected to perform better, with longer lifetimes. In this study, we found that cycling LFP cells across a lower average SOC results in less capacity fade than cycling across a higher average SOC, regardless of depth of discharge. The primary capacity-fade mechanism is lithium inventory loss, driven by two processes: the reactivity of lithiated graphite with the electrolyte, which increases incrementally with SOC; and lithium alkoxide species causing iron dissolution and deposition on the negative electrode at high SOC, which further accelerates lithium inventory loss. Our results show that even low-voltage LFP systems (3.65 V) have a trade-off between average SOC and lifetime. Operating LFP cells at a lower average SOC could extend their lifetime substantially in both EV and grid storage applications.

Eniko Zsoldos

Eniko Zsoldos is a 5th year PhD candidate in chemistry at Dalhousie University in the Jeff Dahn research group. Her current research focuses on understanding degradation mechanisms in a variety of lithium-ion cell chemistries (NMC, LFP, LMO) using techniques such as isothermal microcalorimetry and electrolyte analysis. Eniko received her undergraduate degree in nanotechnology engineering from the University of Waterloo. During her undergrad, she was a member of the Waterloo Formula Electric team, building an electric race car for FSAE student competitions. She has completed internships at Sila Nanotechnologies working on silicon-based anodes for batteries, and at Tesla working on dry electrode processing in Fremont, CA.

 

The Electrochemical Society

 

The post How the operating window of LFP/Graphite cells affects their lifetime appeared first on Physics World.

New day dawns for quantum computing in the UK

Par : No Author

A building may be little more than bricks and mortar, but behind the façade it can bring people together and catalyse change. That was the vision for the main facility of the UK’s National Quantum Computing Centre (NQCC), located on the Harwell Campus in Oxfordshire, which is designed to foster collaboration and accelerate innovation across all parts of the UK’s quantum ecosystem.

At the official opening of the building, held at the end of October 2024, the NQCC team showed how that original vision had been turned into reality. In the new experimental labs on the ground floor, NQCC scientists who were previously working as individual teams in borrowed facilities around the Harwell site are now working in an environment where they can swap notes with colleagues working on other hardware platforms.

“It is always useful to have other scientists around to share ideas and solve specific problems,” said Klara Theophilo, an atomic physicist who is setting up trapped-ion systems based on chips originally developed at the University of Oxford and the National Physical Laboratory (NPL). “Trapped-ion systems share some of the same challenges as hardware platforms based on neutral atoms, while the cryogenic engineering we need is also being used for systems based on superconducting qubits.”

Theophilo and her scientific colleagues are benefiting from state-of-the-art experimental facilities purpose-designed for building and testing quantum computers. “This lab has the best environmental control I have ever worked in,” she said. “To achieve high gate fidelities we need careful control of both the temperature and the humidity to ensure that our lasers can manipulate the qubits with high precision, and in our previous lab space there was a constant need to realign and recalibrate the lasers.”

Joining the NQCC technical teams will be scientists and engineers from commercial companies who are building their own systems for quantum computing. In the coming months, several firms are due to install prototype hardware platforms commissioned by the NQCC as part of its programme to establish seven experimental testbeds based on different qubit modalities.

Others will be hosted at the Innovation Hub, the NQCC’s other facility on the Harwell Campus, while quantum networking company NuQuantum is also preparing to establish a team within the main building for a three-year co-development project with the NQCC. The aim of this project, called Project IDRA, is to build a distributed quantum computing system that connects multiple hardware nodes by entangling the qubits in different quantum processors.

Into the labs Vivek Chidambaram of the NQCC introduces visitors to the superconducting technology being developed by his team for scalable quantum computers. (Courtesy: NQCC)

A facility like the NQCC can act like an anchor for businesses to build around, creating a cluster of companies that form a supply chain for each other

Mark Thomson, executive chair of the Science and Technology Facilities Council (STFC)

For the NQCC and its backers, the longer term hope is that bringing these hardware companies into the national lab will catalyse the formation of a quantum cluster in and around the Harwell Campus.

“We have a unique ability on this site to connect academia and national infrastructure with start-up businesses and large enterprise,” said Mark Thomson, currently the executive chair of the Science and Technology Facilities Council (STFC) and soon to be the new director general of CERN. “A facility like the NQCC can act like an anchor for businesses to build around, creating a cluster of companies that form a supply chain for each other. We have already seen that in the space sector, and I genuinely believe that we will now see the same clustering effect for quantum technologies.”

Indeed, many of the hardware providers who are installing their prototype systems within the NQCC are eager to find new ways to work with the national lab and its growing network of academic and commercial partners. “Establishing a presence in the NQCC is a great way for us to become more connected with the UK’s wider quantum ecosystem,” said Alice Voaden, project manager for Rigetti, one of the testbed providers. “It puts us in a better position to identify future opportunities for collaboration, which could help us to explore how emerging applications and software strategies can work with our technology.”

Beyond the technical work, the new facility brings together the NQCC’s growing team of technical and innovation specialists under the same roof for the first time. Previously distributed among temporary office spaces across the Harwell Campus, around 80 people working across a diverse range of activities now have the chance to make new connections and forge a collective identity that will help to establish the NQCC as a focal point for quantum computing in the UK and beyond.

Indeed, since the NQCC was established in 2020 it has put an increasing emphasis on building a community of hardware providers, software developers and end users who can work together to explore the value of quantum computing for the benefit of society and the economy.

“The early vision for the NQCC was to address the issue of scaling in quantum computing, and originally we were primarily focused on technology development,” commented NQCC director Michael Cuthbert. “But increasingly we’ve been turning our attention to scaling the user community for quantum computing, and today is an opportunity for us to highlight our activities across the breadth of our programme.”

Those efforts include providing easy access to quantum computing resources, offering learning opportunities to boost the ranks of scientists and engineers with an understanding of quantum computers, and working directly with organizations in the public and private sectors to develop use cases where quantum computing can make a meaningful impact.

In one example highlighted at the inauguration, applications engineers from the NQCC are working with software company Unisys and the University of Newcastle to explore how today’s quantum computers could be used to optimize the loading of cargo onto aircraft, which can cut fuel costs and reduce carbon emissions.
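For a flavour of how a loading problem of this kind can be put to today’s quantum hardware, here is a minimal sketch of our own (the actual Unisys/NQCC formulation is not described in this article) casting a knapsack-style container selection as a QUBO, the binary quadratic form accepted by most quantum optimizers and annealers:

```python
import numpy as np

# Toy cargo-selection instance (hypothetical numbers, for illustration only):
# choose containers to maximize value without exceeding a weight budget.
values = np.array([10.0, 7.0, 5.0, 3.0])   # value of each container
weights = np.array([4.0, 3.0, 2.0, 1.0])   # weight of each container
limit = 6.0                                 # weight budget
penalty = 20.0                              # strength of the weight penalty

# QUBO: minimize x^T Q x over binary x, encoding -value + penalty*(w.x - limit)^2.
# (A full knapsack encoding would add slack bits so that staying under the
# limit is not penalized; this simplified form steers w.x towards the limit.)
n = len(values)
Q = np.zeros((n, n))
for i in range(n):
    Q[i, i] = -values[i] + penalty * (weights[i] ** 2 - 2 * limit * weights[i])
    for j in range(i + 1, n):
        Q[i, j] = 2 * penalty * weights[i] * weights[j]

def energy(bits):
    x = np.array(bits, dtype=float)
    return x @ Q @ x

# Brute-force the QUBO on this toy instance; a quantum optimizer or annealer
# would take over at problem sizes where this loop becomes infeasible.
best = min(([(b >> k) & 1 for k in range(n)] for b in range(2 ** n)), key=energy)
print("selected containers:", best)
```

On a handful of containers the brute-force check above suffices; the point of the QUBO formulation is that the same Q matrix can be handed to gate-based or annealing hardware as the instance grows.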

“What happens here will create jobs and businesses, and it will benefit people across the UK and beyond,” said Science Minister Lord Patrick Vallance, who officially opened the building. “You have created something that will bring academics and people from industry together to harness the power of quantum computing to solve problems that really matter.”

Sure start Science Minister Lord Patrick Vallance officially opens the NQCC’s new facility on the Harwell Campus in Oxfordshire, UK. (Courtesy: NQCC)

Another element of the NQCC’s remit is to provide clear, trusted and impartial guidance to government, businesses and the public. It is already working with NPL and other government and industry bodies on standards development, with the NQCC spearheading the global debate around responsible and ethical quantum computing. “Gaining public trust is vital to drive user adoption,” said Cuthbert. “The NQCC is in a unique position to provide thought leadership on ethical considerations, which will ultimately benefit the whole community.”

While the inauguration of the UK’s newest national lab was focused on the prospects for quantum computing, there were also reminders that the NQCC is a direct result of the country’s established strength in quantum science and technology. Building on decades of basic research across many contributing disciplines, the National Quantum Technologies Programme (NQTP), which has seen more than £1bn of investment since 2014, has created a collaborative culture in which academics work in tandem with start-up companies to translate scientific insights into innovative technologies.

“We know that quantum computing will be a long-haul journey that requires some patience, but the NQCC is already showing what can be achieved through collaboration and co-location,” said Peter Knight, the architect of the NQTP and the instigator behind the NQCC. “Bringing companies and academics into the facility will enable dialogue, drive future collaboration, and accelerate progress towards our mission of delivering quantum computing at scale.”

The post New day dawns for quantum computing in the UK appeared first on Physics World.

Magnetically controlled microbots are small enough to diffract visible light

Par : No Author
Diffractive robotics platform (A) 10×10 mm chip containing arrays of diffractive robots. (B) Close-up view of robots with varying sizes and numbers of panels. Scale bars, 50 µm. (C) False-coloured SEM image of the prototypical diffractive robot. (D) SEM of the nanomagnet arrays. (E) ALD hinges. (Courtesy: C L Smart et al. Science 10.1126/science.adr2177)

Microscopic robots with features small enough to control light at the microscale offer a way to probe the microscopic world in more detail, with light scattering off such microbots to produce diffractive optical effects.

To date, this combination of diffractive optics and tuneable mechanics has primarily exploited microelectromechanical systems (MEMS) devices, but creating actuatable microbots with features on the scale of the wavelength of light has been challenging.

To address this challenge, researchers at Cornell University turned to magnetically controlled microbots. While such robots have been developed at millimetre scales, magnetic actuation at the micron scale only became possible recently, thanks to protocols that encode magnetic information into microscale robots and to the use of atomic layer deposition (ALD) to create nanoscale hinges, which together make flexible micromachines capable of advanced navigation.

The team has now created magnetically controlled microbots that operate at the visible-light diffraction limit, so-called diffractive robots.

“A walking robot that’s small enough to interact with and shape light effectively takes a microscope’s lens and puts it directly into the microworld,” says team leader Paul McEuen in a press statement. “It can perform up-close imaging in ways that a regular microscope never could.”

New magnetic microbots

Using nanometre-scale mechanical membranes, rigid panels, programmable nanomagnets and diffractive optical elements, McEuen and colleagues created untethered microbots that are small enough to diffract visible light. They used the ALD hinges to connect the microbot’s rigid panels with magnetically actuatable joints, enabling them to reconfigure and move in millitesla-scale magnetic fields.

The core elements of the diffractive microbots are the light-diffracting panels, with their integrated nanomagnet arrays, and the flexible hinges; the platform can also embed optical elements such as a diffraction grating. To deliver the required mechanical, diffractive and magnetic performance, these integrated elements span several orders of magnitude in scale: the grating panels were tens of microns in size, the grating lines within them were about 1 µm wide – on the scale of visible light wavelengths – the hinges were just 5 nm thick, and the magnetic domains were nanoscale.
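To get a feel for why wavelength-scale grating lines matter, here is a back-of-envelope sketch of our own (using the standard grating equation, not numbers from the paper) for the first-order diffraction angle of a 1 µm-period grating across the visible spectrum:

```python
import numpy as np

# First-order diffraction angle from the standard grating equation
# d*sin(theta) = m*lambda, with an assumed grating period d of 1 micron,
# comparable to the line spacing on the robots' diffractive panels.
d = 1.0e-6  # assumed grating period in metres
for wavelength_nm in (450, 550, 650):
    wavelength = wavelength_nm * 1e-9
    theta = np.degrees(np.arcsin(wavelength / d))  # first order, m = 1
    print(f"{wavelength_nm} nm -> first-order beam at {theta:.1f} degrees")
```

Because the diffraction angle depends on wavelength, tilting or reshaping such a grating redirects each colour of light differently, which is what lets a reorientating microbot steer diffracted beams.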

The hinges played a crucial role, the researchers note, by providing a high degree of flexibility to an otherwise rigid robot. This flexibility allowed the microbots to rotate and reorientate themselves to dynamically change how light is diffracted, focused and redirected.

When manipulated with a magnetic field, the microrobots were able to simultaneously change shape, locomote along a surface and control diffracted light. This locomotion capability was due to the array of nanomagnets integrated into the light-diffracting grating panels.

By selectively controlling the aspect ratio of the nanomagnet domains and programming them using the strength of the external magnetic field, the researchers could control the movement of the microbots – including crawling forward on a solid surface and “swimming” through fluids while simultaneously steering and diffracting light.
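As an illustration of that programming scheme, here is a minimal sketch of our own, with hypothetical coercive fields (the paper’s actual values are not given here): nanomagnets with different aspect ratios switch at different field strengths, so the amplitude of a programming pulse selects which domains flip:

```python
# Illustrative only: nanomagnets with different aspect ratios have different
# coercive fields, so the strength of a programming pulse determines which
# domains reverse. Field values below are hypothetical, not from the paper.
coercive_mT = {"slender (high aspect ratio)": 60.0,
               "stubby (low aspect ratio)": 25.0}

def program(pulse_mT):
    # A pulse reverses only the domains whose coercive field it exceeds.
    return {name: ("flips" if pulse_mT >= hc else "unchanged")
            for name, hc in coercive_mT.items()}

print(program(30.0))   # reprograms only the low-coercivity domains
print(program(80.0))   # reprograms every domain
```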

“These robots are 5 microns to 2 microns,” says co-author Itai Cohen. “They’re tiny. And we can get them to do whatever we want by controlling the magnetic fields driving their motions.”

The researchers note that the tuneability of the optical elements could be further improved by adding more magnetic material to the microbots and/or increasing the strength of the magnetic fields used to control them. And while this study centred on individual microbots, it should also be possible to deploy multiple microbots in magnetically actuated swarms to produce collective optical effects.

Potential applications

As a generalized robotics platform, the microbots could easily be modified and produced with differing sizes, geometries and optical elements according to the intended application. Some key optical elements that could be integrated include meta-atoms, subwavelength apertures and plasmonic resonant probes.

The researchers have already demonstrated that the microbots have capabilities including force sensing with piconewton sensitivity, subdiffractive imaging using a type of structured illumination microscopy, and light beam steering and focusing using tunable diffractive optical elements. Other potential applications include endoscopic imaging and tissue ablation, high-resolution fluorescence microscopy of cells, and the high-resolution sensing of magnetic fields and current in integrated circuits.

The research is described in Science.

The post Magnetically controlled microbots are small enough to diffract visible light appeared first on Physics World.
