In this week’s special CEO Series edition of Space Minds, we're at the World Space Business Week in Paris. In today's episode, SpaceNews editor Mike Gruss talks with Peter Cannito, CEO of Redwire.
The post Redwire’s global strategy from space to security appeared first on SpaceNews.
With great fanfare, the newly installed American President Donald Trump launched the idea of the United States creating an “Iron Dome” to protect America against aerial threats, similar to how […]
The post President Trump’s Golden Dome: golden dream or black nightmare? appeared first on SpaceNews.
The product, called Vivid Features, combines Maxar’s satellite imagery archive with Ecopia’s artificial intelligence software.
The post Maxar and Ecopia roll out AI-powered Earth mapping system appeared first on SpaceNews.
PARIS – Europe should be investing in disruptive capabilities like spaceplanes, said Maj. Gen. Philippe Koffi, French armament agency DGA strategic lead for air, land and naval combat. “A spaceplane is maneuverable, reusable and flexible, so it can deliver payload in orbit, recover critical assets, conduct reconnaissance and intervene against threats in orbit,” Koffi said […]
The post Does Europe need a spaceplane? appeared first on SpaceNews.
Finnish spacecraft developer ReOrbit has raised 45 million euros ($53.3 million) to increase spacecraft production to meet growing demand from government customers.
The post ReOrbit raises 45 million euros to increase spacecraft production appeared first on SpaceNews.
Today’s artificial intelligence (AI) systems are built on data generated by humans. They’re trained on huge repositories of writing, images and videos, most of which have been scraped from the Internet without the knowledge or consent of their creators. It’s a vast and sometimes ill-gotten treasure trove of information – but for machine-learning pioneer David Silver, it’s nowhere near enough.
“I think if you provide the knowledge that humans already have, it doesn’t really answer the deepest question for AI, which is how it can learn for itself to solve problems,” Silver told an audience at the 12th Heidelberg Laureate Forum (HLF) in Heidelberg, Germany, on Monday.
Silver’s proposed solution is to move from the “era of human data”, in which AI passively ingests information like a student cramming for an exam, into what he calls the “era of experience” in which it learns like a baby exploring its world. In his HLF talk on Monday, Silver played a sped-up video of a baby repeatedly picking up toys, manipulating them and putting them down while crawling and rolling around a room. To murmurs of appreciation from the audience, he declared, “I think that provides a different perspective of how a system might learn.”
Silver, a computer scientist at University College London, UK, has been instrumental in making this experiential learning happen in the virtual worlds of computer science and mathematics. As head of reinforcement learning at Google DeepMind, he helped develop AlphaZero, an AI system that taught itself to play the ancient stones-and-grid game of Go. It did this via a so-called “reward function” that pushed it to improve over many iterations, without ever being taught the game’s rules or strategy.
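As a rough illustration of the reward-driven loop Silver describes, the sketch below trains a tabular Q-learning agent on a toy one-dimensional corridor: the agent is told nothing about the task except a reward signal, yet it converges on the policy of always stepping towards the goal. The environment, hyperparameters and code are invented purely for illustration – AlphaZero itself couples self-play tree search with deep neural networks and is far more elaborate.

```python
import random

random.seed(0)  # reproducible toy run

# Toy 1-D corridor: states 0..4, with the goal at state 4. The agent is given
# nothing but a reward signal (+1 on reaching the goal) -- no rules, no strategy.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]          # step left or step right

def step(state, action):
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

# Tabular Q-learning: improve a value table purely from experienced rewards
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for episode in range(200):
    state, done = 0, False
    while not done:
        # explore occasionally, otherwise exploit what has been learned so far
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# Greedy action per state: converges to +1 (step towards the goal) for states 0-3
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)})
```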
More recently, Silver coordinated a follow-up project called AlphaProof that treats formal mathematics as a game. In this case, AlphaZero’s reward is based on getting correct proofs. While it isn’t yet outperforming the best human mathematicians, in 2024 it achieved silver-medal standard on problems at the International Mathematical Olympiad.
Could a similar experiential learning approach work in the physical sciences? At an HLF panel discussion on Tuesday afternoon, particle physicist Thea Klaeboe Åarrestad began by outlining one possible application. Whenever CERN’s Large Hadron Collider (LHC) is running, Åarrestad explained, she and her colleagues in the CMS experiment must control the magnets that keep protons on the right path as they zoom around the collider. Currently, this task is performed by a person, working in real time.
In principle, Åarrestad continued, a reinforcement-learning AI could take over that job after learning by experience what works and what doesn’t. There’s just one problem: if it got anything wrong, the protons would smash into a wall and melt the beam pipe. “You don’t really want to do that mistake twice,” Åarrestad deadpanned.
For Åarrestad’s fellow panellist Kyle Cranmer, a particle physicist who works on data science and machine learning at the University of Wisconsin-Madison, US, this nightmare scenario symbolizes the challenge of using reinforcement learning in the physical sciences. In situations where you’re able to do many experiments very quickly and essentially for free – as is the case with AlphaGo and its descendants – you can expect reinforcement learning to work well, Cranmer explained. But once you’re interacting with a real, physical system, even non-destructive experiments require finite amounts of time and money.
Another challenge, Cranmer continued, is that particle physics already has good theories that predict some quantities to multiple decimal places. “It’s not low-hanging fruit for getting an AI to come up with a replacement framework de novo,” Cranmer said. A better option, he suggested, might be to put AI to work on modelling atmospheric fluid dynamics, which are emergent phenomena without first-principles descriptions. “Those are super-exciting places to use ideas from machine learning,” he said.
Silver, who was also on Tuesday’s panel, agreed that reinforcement learning isn’t always the right solution. “We should do this in areas where mistakes are small and it can learn from those small mistakes to avoid making big mistakes,” he said. To general laughter, he added that he would not recommend “letting an AI loose on nuclear arsenals”, either.
Reinforcement learning aside, both Åarrestad and Cranmer are highly enthusiastic about AI. For Cranmer, one of the most exciting aspects of the technology is the way it gets scientists from different disciplines talking to each other. The HLF, which aims to connect early-career researchers with senior figures in mathematics and computer science, is itself a good example, with many talks in the weeklong schedule devoted to AI in one form or another.
For Åarrestad, though, AI’s most exciting possibility relates to physics itself. Because the LHC produces far more data than humans and present-day algorithms can handle, Åarrestad explained, much of it is currently discarded. The idea that, as a result, she and her colleagues could be throwing away major discoveries sometimes keeps her up at night. “Is there new physics below 1 TeV?” Åarrestad wondered.
Someday, maybe, an AI might be able to tell us.
The post The pros and cons of reinforcement learning in physical science appeared first on Physics World.
Radiotherapy is a precision cancer therapy that employs personalized treatment plans to target radiation to tumours with high accuracy. Such plans are usually created from high-resolution CT scans of the patient. But interest is growing in an alternative approach: MR simulation, in which MR images are used to generate the treatment plans – for delivery on conventional linac systems as well as the increasingly prevalent MR-guided radiotherapy systems.
One site that has transitioned to this approach is the Institut Jules Bordet in Belgium, which in 2021 acquired both an Elekta Unity MR-Linac and a Siemens MAGNETOM Aera MR-Simulator. “It was a long-term objective for our clinic to have an MR-only workflow,” says Akos Gulyban, a medical physicist at Institut Jules Bordet. “When we moved to a new campus, we decided to purchase the MR-Linac. Then we thought that if we are getting into the MR world for treatment adaptation, we also need to step up in terms of simulation.”
The move to MR simulation delivers many clinical benefits, with MR images providing the detailed anatomical information required to delineate targets and organs-at-risk with the highest precision. But it also creates new challenges for the physicists, particularly when it comes to quality assurance (QA) of MR-based systems. “The biggest concern is geometric distortion,” Gulyban explains. “If there is no distortion correction, then the usability of the machine or the sequence is very limited.”
While the magnetic field gradient is theoretically linear, and MRI is indeed extremely accurate at the imaging isocentre, moving away from the isocentre increases distortion. Images of regions 30 or 40 cm away from the isocentre – a reasonable distance for a classical linac – can differ from reality by 15 to 20 mm, says Gulyban. Thankfully, 3D correction algorithms can reduce this discrepancy down to just a couple of millimetres. But such corrections first require an accurate way to measure the distortion.
To address this task, the team at Institut Jules Bordet employ a geometric distortion phantom – the QUASAR MRID3D Geometric Distortion Analysis System from IBA Dosimetry. Gulyban explains that the MRID3D was chosen following discussions with experienced users, and that key selling points included the phantom’s automated software and its ability to efficiently store results for long-term traceability.
“My concern was how much time we spend cross-processing, generating reports or evaluating results,” he says. “This software is fully automated, making it much easier to perform the evaluation and less dependent on the operator.”
Gulyban adds that the team was looking for a vendor-independent solution. “I think it is a good approach to use the tools provided [by the vendor] but now we have a way to measure the same thing using a different approach. Since our new campus has a mixture of Siemens MRs and the MR-Linac, this phantom provides a vendor-independent bridge between the two worlds.”
For quality control of the MR-Simulator, the team perform distortion measurements every three months, as well as after system interventions such as shimming and following any problems arising during other routine QA procedures. “We should not consider tests as individual islands in the QA process,” says Gulyban. “For instance, the ACR image quality phantom, which is used for more frequent evaluation, also partly assesses distortion. If we see that failing, I would directly trigger measurements with the more appropriate geometric distortion phantom.”
To perform MR simulation, the images used for treatment planning must encompass both the target volume and the surrounding region, to ensure accurate delineation of the tumour and nearby organs-at-risk. This requires a large field-of-view (FOV) scan – plus geometric distortion QA that covers the same large FOV.
“You’re using this image to delineate the target and also to spare the organs-at-risk, so the image must reflect reality,” explains Kawtar Lakrad, medical physicist and clinical application specialist at IBA Dosimetry. “You don’t want that image to be twisted or the target volume to appear smaller or bigger than it actually is. You want to make sure that all geometric qualities of the image align with what’s real.”
Typically, geometric distortion phantoms are grid-like, with control points spaced every 0.5 or 1 cm. The entire volume is imaged in the MR scanner and the locations of the control points seen in the image are compared with their actual positions. “If we apply this to a large FOV phantom, which for MRI will be filled with either water or oil, it’s going to be a very large grid and it’s going to be heavy, 40 or 50 kg,” says Lakrad.
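For a feel of what such a comparison involves, here is a minimal sketch – not IBA’s analysis software, and with invented coordinates – that takes the known physical positions of a few control points and their apparent positions in the MR image, then reports the displacement at each point and the maximum distortion:

```python
import numpy as np

# Known physical control-point positions (mm) and the positions measured in the
# MR image of the phantom -- all values here are invented for illustration.
true_points = np.array([
    [0.0, 0.0, 0.0],
    [100.0, 0.0, 0.0],
    [0.0, 150.0, 0.0],
    [-200.0, 0.0, 50.0],
    [0.0, -250.0, -100.0],
])
measured_points = true_points + np.array([
    [0.1, 0.0, -0.1],
    [0.6, -0.3, 0.2],
    [0.4, 1.1, -0.5],
    [2.8, -1.9, 1.2],    # distortion grows away from the isocentre
    [-3.5, 4.1, -2.2],
])

# Displacement vector and scalar distortion at each control point
displacements = measured_points - true_points
distortion = np.linalg.norm(displacements, axis=1)

# Distance of each point from the imaging isocentre (assumed at the origin)
radius = np.linalg.norm(true_points, axis=1)

for r, d in sorted(zip(radius, distortion)):
    print(f"r = {r:6.1f} mm   distortion = {d:4.2f} mm")
print(f"max distortion: {distortion.max():.2f} mm")
```

In practice the MRID3D software automates this kind of evaluation over its full virtual grid of control points and stores the results for long-term traceability.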
To overcome this obstacle, IBA researchers used innovative harmonic analysis algorithms to design a lightweight geometric distortion phantom with submillimetre accuracy and a large (35 x 30 cm) FOV: the MRID3D. The phantom comprises two concentric hollow acrylic cylinders, the only liquid being a prefilled mineral oil layer between the two shells, reducing its weight to just 21 kg.
“The idea behind the phantom was very smart because it relies on a mathematical tool,” explains Lakrad. “There is a Fourier transform for the linear signal, which is used for standard grids. But there are also spherical harmonics – and this is what’s used in the MRID3D. The control points are all on the cylinder surface, plus one in the isocentre, creating a virtual grid that measures 3D geometric distortion.” She adds that the MRID3D can also differentiate distortion due to the main magnetic field from gradient non-linearity distortion.
Gulyban and his team at Institut Jules Bordet first used MR simulation for pelvic treatments, particularly prostate cancer, he tells Physics World. This was followed by abdominal tumours, such as pancreatic and liver cancers (where many patients were being treated on the MR-Linac) and more recently, cranial and head-and-neck irradiations.
Gulyban points out that the introduction of the MR-Simulator was eased by the team’s experience with the MR-Linac, which helped them “step into the MR world”. Here also, the MRID3D phantom is used to quantify geometric distortion, both for initial commissioning and continuous QA of the MR-Linac.
“It’s like a consistency check,” he explains. “We have certain manufacturer-defined conditions that we need to meet for the MR-Linac – for instance, that distortion within a 40 mm diameter should be less than 1 mm. To ensure that these are met in a consistent fashion, we repeat the measurements with the manufacturer’s phantom and with the MRID3D. This gives us extra peace of mind that the machine is performing under the correct conditions.”
For other cancer centres looking to integrate MR into their radiotherapy clinics, Gulyban has some key points of advice. These include starting with MR-guided radiotherapy and then adding MR simulation, identifying a suitable pathology to treat first and gain familiarity, and attending relevant courses or congresses for inspiration.
“The biggest change is actually a change in culture because you have an active MRI in the radiotherapy department,” he notes. “We are used to the radioprotection aspects of radiotherapy, wearing a dosimeter and observing radiation protection principles. MRI is even less forgiving – every possible thing that could go wrong you have to eliminate. Closing all the doors and emptying your pockets must become a reflex habit. You have to prepare mentally for that.”
“When you’re used to CT-based machines, moving to an MR workflow can be a little bit new,” adds Lakrad. “Most physicists are already familiar with the MR concept, but when it comes to the QA process, that’s the most challenging part. Some people would just repeat what’s done in radiology – but the use case is different. In radiotherapy, you have to delineate the target and surrounding volumes exactly. You’re going to be delivering dose, which means the tolerance between diagnostic and radiation therapy is different. That’s the biggest challenge.”
The post MRID3D phantom eases the introduction of MRI into the radiotherapy clinic appeared first on Physics World.
Artificial intelligence (AI) could help sniff out questionable open-access publications that are more interested in profit than scientific integrity. That is according to an analysis of 15,000 scientific journals by an international team of computer scientists. They find that dubious journals tend to publish an unusually high number of articles and feature authors who have many affiliations and frequently self-cite (Sci. Adv. 11 eadt2792).
Open access removes the requirement for traditional subscriptions. Articles are instead made immediately and freely available for anyone to read, with publication costs covered by the authors, who pay an article-processing charge.
But as the popularity of open-access journals has risen, there has been a growth in “predatory” journals that exploit the open-access model by making scientists pay publication fees without a proper peer-review process in place.
To build an AI-based method for distinguishing legitimate from questionable journals, Daniel Acuña, a computer scientist at the University of Colorado Boulder, and colleagues used the Directory of Open Access Journals (DOAJ) – an online, community-curated index of open-access journals.
The researchers trained their machine-learning model on 12,869 journals indexed on the DOAJ and 2536 journals that have been removed from the DOAJ due to questionable practices that violate the community’s listing criteria. The team then tested the tool on 15,191 journals listed by Unpaywall, an online directory of free research articles.
To identify questionable journals, the AI system analyses journals’ bibliometric information and the content and design of their websites, scrutinising details such as the affiliations of editorial board members and the average author h-index – a metric that quantifies a researcher’s productivity and impact.
The AI model flagged 1437 journals as questionable, with the researchers concluding that 1092 were genuinely questionable while 345 were false positives.
They also identified around 1780 problematic journals that the AI screening failed to flag. According to the study authors, their analysis shows that problematic publishing practices leave detectable patterns in citation behaviour such as the last authors having a low h-index together with a high rate of self-citation.
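As a rough sketch of how such bibliometric signals can feed a classifier – using synthetic data and a generic random forest, not the authors’ actual model or training set – one might do something like the following:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 2000

# Hypothetical journal-level features, loosely inspired by those in the study:
# articles per year, mean author h-index, self-citation rate, affiliations per author
X = np.column_stack([
    rng.lognormal(4.5, 0.8, n),     # articles published per year
    rng.normal(15, 6, n),           # mean author h-index
    rng.beta(2, 10, n),             # self-citation rate
    rng.normal(1.5, 0.5, n),        # mean affiliations per author
])

# Synthetic labels: questionable journals tend to publish more, have lower
# h-index authors and higher self-citation rates (purely illustrative)
score = 0.002 * X[:, 0] - 0.05 * X[:, 1] + 4.0 * X[:, 2] + rng.normal(0, 0.5, n)
y = (score > np.median(score)).astype(int)   # 1 = questionable

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```

In the real study, of course, the features come from journals’ bibliometric records and websites, and the labels from the DOAJ’s curation decisions.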
Acuña says the tool could help to pre-screen large numbers of journals, adding, however, that “human professionals should do the final analysis”. The researchers’ AI screening system isn’t publicly accessible, but they hope to make it available to universities and publishing companies soon.
The post Artificial intelligence could help detect ‘predatory’ journals appeared first on Physics World.
It’s almost impossible to avoid reading about advances in quantum computing these days. Even so, we’re still some way from having fully fault-tolerant, large-scale quantum computers. One practical difficulty is that even the best present-day quantum computers suffer from noise that can often cause them to return erroneous results.
Research in this field can be broadly divided into two areas: a) designing quantum algorithms with potential practical advantages over classical algorithms (the software) and b) physically building a quantum computer (the hardware).
One of the main approaches to algorithm design is to minimise the number of operations or runtime in an algorithm. One intuitively expects that reducing the number of operations would decrease the chance of errors – the key to constructing a reliable quantum computer.
However, this is not always the case. In a recent paper, García-Pintos and colleagues found that minimising the number of operations in a quantum algorithm can sometimes be counterproductive, leading to an increased sensitivity to noise. Essentially, running a faster algorithm in non-ideal conditions can result in more errors than if a slower algorithm had been used.
The authors proved that there’s a trade-off between an algorithm’s number of operations and its resilience to noise. This means that, for certain types of errors, slower algorithms might actually be better in some real-world conditions.
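The paper’s results are formal trade-off relations, but the basic tension can be seen in a deliberately crude toy model (ours, not the authors’): if each operation succeeds with probability exp(−s·p), where p is the hardware noise level and s is how sensitive that operation is to the noise, a short algorithm built from highly noise-sensitive operations can end up less reliable overall than a longer one built from resilient operations.

```python
import math

def success_probability(n_steps, sensitivity, noise):
    """Toy model: each step succeeds with probability exp(-sensitivity * noise)."""
    return math.exp(-n_steps * sensitivity * noise)

noise = 0.01  # hardware noise level (arbitrary units)

# "Fast" algorithm: few operations, but each is highly sensitive to noise
fast = success_probability(n_steps=10, sensitivity=5.0, noise=noise)

# "Slow" algorithm: three times as many operations, each far more resilient
slow = success_probability(n_steps=30, sensitivity=0.5, noise=noise)

print(f"fast algorithm success: {fast:.3f}")   # ~0.61
print(f"slow algorithm success: {slow:.3f}")   # ~0.86
```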
These results bring together research on quantum hardware and software. The mathematical framework developed will enable quantum algorithms to be designed with the limitations of current real quantum computers in mind.
Resilience–runtime tradeoff relations for quantum algorithms
García-Pintos et al. 2025 Rep. Prog. Phys. 88 037601
The post Are longer quantum algorithms actually good? appeared first on Physics World.
Despite the huge success of the Standard Model of particle physics, we know it’s not complete. Dark matter and neutrino masses are just two of the things that are conspicuously missing in our current theory.
However, it’s been notoriously difficult to perform an experiment that actually disagrees with the model’s predictions.
Many proposed extensions of the Standard Model, such as the fraternal twin Higgs or folded supersymmetry models, include so-called long-lived particles (LLPs).
Unlike most particles produced in high-energy collisions, which decay almost instantaneously, LLPs have relatively long lifetimes, meaning they travel a measurable distance before decaying.
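To put rough numbers on “a measurable distance”: the mean lab-frame decay length of a relativistic particle is L = γβcτ = (p/m)·cτ. The sketch below evaluates this for an illustrative LLP – the mass, momentum and lifetime are made up, not values from the CMS analysis.

```python
# Mean lab-frame decay length of a relativistic particle:
#   L = gamma * beta * c * tau = (p / m) * c * tau   (p, m in natural units)
C = 2.998e8          # speed of light, m/s

def decay_length(mass_gev, momentum_gev, lifetime_s):
    """Mean decay length in metres for a given mass (GeV), momentum (GeV) and proper lifetime (s)."""
    gamma_beta = momentum_gev / mass_gev     # p/(mc) in natural units
    return gamma_beta * C * lifetime_s

# Hypothetical long-lived particle: 50 GeV mass, 200 GeV momentum, 1 ns lifetime
print(f"{decay_length(50.0, 200.0, 1e-9):.2f} m")   # ~1.2 m -> decays inside the detector
```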
A new paper from the CMS collaboration at CERN searched for evidence of these particles by re-examining previous data from proton-proton collision events.
The new analysis applied techniques such as machine-learning methods to enhance the sensitivity to LLPs.
So, did they find any new particles? The short answer, unfortunately, is no.
However, the new study achieves up to a tenfold improvement over previous limits for LLP masses. It also places the first constraints on many proposed models that predict these particles.
Although this study found no new physics, we’re still confident that something must be out there. And by narrowing down the possible spaces where we might find new particles, we’re one step closer to finding them.
The search continues.
The CMS Collaboration, 2025 Rep. Prog. Phys. 88 037801
The post The hunt for long-lived particles produced in proton-proton collisions at the LHC appeared first on Physics World.
Call it millennial, generation Y or fin de siècle, high-energy physics during the last two decades of the 20th century had a special flavour. The principal pieces of the Standard Model of particle physics had come together remarkably tightly – so tightly, in fact, that physicists had to rethink what instruments to build, what experiments to plan, and what theories to develop to move forward. But it was also an era when the hub of particle physics moved from the US to Europe.
The momentous events of the 1980s and 1990s will be the focus of the 4th International Symposium on the History of Particle Physics, which is being held from 10–13 November at CERN. The meeting will take place more than four decades after the first symposium in the series was held at Fermilab near Chicago in 1980. Entitled The Birth of Particle Physics, that initial meeting covered the years 1930 to 1950.
Speakers back then included trailblazers such as Paul Dirac, Julian Schwinger and Victor Weisskopf. They reviewed discoveries such as the neutron and the positron and the development of relativistic quantum field theory. Those two decades before 1950 were a time when particle physicists “constructed the room”, so to speak, in which the discipline would be based.
The second symposium – Pions to Quarks – was also held at Fermilab and covered the 1950s. Accelerators could now create particles seen in cosmic-ray collisions, populating what Robert Oppenheimer called the “particle zoo”. Certain discoveries of this era, such as parity violation in the weak interaction, were so shocking that C N Yang likened it to having a blackout and not knowing if the room would look the same when the lights came back on. Speakers at that 1985 event included Luis Alvarez, Val Fitch, Abdus Salam, Robert Wilson and Yang himself.
The third symposium, The Rise of the Standard Model, was held in Stanford, California, in 1992 and covered the 1960s and 1970s. It was a time not of blackouts but of disruptions that dimmed the lights. Charge-parity violation and the existence of two types of neutrino were found in the 1960s, followed in the 1970s by deep inelastic electron scattering and quarks, neutral currents, a fourth quark and gluon jets.
These discoveries decimated alternative approaches to quantum field theory, which was duly established for good as the skeleton of high-energy physics. The era culminated with Sheldon Glashow, Abdus Salam and Steven Weinberg winning the 1979 Nobel Prize for Physics for their part in establishing the Standard Model. Speakers at that third symposium included Murray Gell-Mann, Leon Lederman and Weinberg himself.
The upcoming CERN event, on whose programme committee I serve, will start exactly where the previous symposium ended. “1980 is a natural historical break,” says conference co-organizer Michael Riordan, who won the 2025 Abraham Pais Prize for History of Physics. “It begins a period of the consolidation of the Standard Model. Colliders became the main instruments, and were built with specific standard-model targets in mind. And the centre of gravity of the discipline moved across the Atlantic to Europe.”
The conference will address physics that took place at CERN’s Super Proton Synchrotron (SPS), where the W and Z particles were discovered in 1983. It will also examine the SPS’s successor – the Large Electron-Positron (LEP) collider. Opened in 1989, it was used to make precise measurements of these and other implications of the Standard Model until being controversially shut down in 2000 to make way for the Large Hadron Collider (LHC).
Speakers at the meeting will also discuss Fermilab’s Tevatron, where the top quark – another Standard Model component – was found in 1995. Work at the Stanford Linear Accelerator Center, DESY in Germany, and Tsukuba, Japan, will be tackled too. There will be coverage as well of failed accelerator projects, which – perhaps perversely – can be just as interesting and revealing as successful facilities.
In particular, I will speak about ISABELLE, a planned and partially built proton–proton collider at Brookhaven National Laboratory, which was terminated in 1983 to make way for the far more ambitious Superconducting Super Collider (SSC). ISABELLE was then transformed into the Relativistic Heavy Ion Collider (RHIC), which was completed in 1999 and took nuclear physics into the high-energy regime.
Riordan will talk about the fate of the SSC, which was supposed to discover the Higgs boson or whatever else plays its mass-generating role. But in 1993 the US Congress terminated that project, a traumatic episode for US physics, about which Riordan co-authored the book Tunnel Visions. Its cancellation signalled the end of the glory years for US particle physics and the realization of the need for international collaborations in ever-costlier accelerator projects.
The CERN meeting will also explore more positive developments such as the growing convergence of particle physics and cosmology during the 1980s and 1990s. During that time, researchers stepped up their studies of dark matter, neutrino oscillations and supernovas. It was a period that saw the construction of underground detectors at Gran Sasso in Italy and Kamiokande in Japan.
Other themes to be explored include the development of the Web – which transformed the world – and the impact of globalization, the end of the Cold War, and the rise of high-energy physics in China, and physics in Russia, former Soviet Union republics, and former Eastern Bloc countries. While particle physics became more global, it also grew more dependent on, and vulnerable to, changing political ambitions, economic realities and international collaborations. The growing importance of diversity, communication and knowledge transfer will be looked at too.
The years between 1980 and 2000 were a distinct period in the history of particle physics, one that unfolded in the afterglow of the triumph of the Standard Model. The lights in high-energy physics did not go out or even dim, to use Yang’s metaphor. Instead, the Standard Model shed so much light on high-energy physics that the effort and excitement focused on consolidating the model.
Particle physics, during those years, was all about finding the deeply hidden outstanding pieces, developing the theory, and connecting with other areas of physics. The triumph was so complete that physicists began to wonder what bigger and more comprehensive structure the Standard Model’s “room” might be embedded in – what was “beyond the Standard Model”. A quarter of a century on, our attempts to make out that structure are still ongoing.
The post Relive the two decades when physicists basked in the afterglow of the Standard Model appeared first on Physics World.
Toulouse, France — MECANO ID, a leading provider of advanced mechanical and thermal solutions for the space industry, has reached a major milestone. On August 26, its EOS-8’’ satellite ejection […]
The post Successful flight on Falcon 9 for EOS-8’’, MECANO ID’s Satellite Ejection System appeared first on SpaceNews.
NASA is delaying the arrival of a Cygnus spacecraft at the International Space Station to investigate a thruster problem with the cargo vehicle.
The post Thruster issue delays Cygnus arrival at ISS appeared first on SpaceNews.
Amazon is preparing to double the size of its Project Kuiper constellation to over 200 satellites this year with three more launches, supporting broadband services in the U.S. and four other countries by the end of March.
The post Project Kuiper plots broadband services in five countries by end of March appeared first on SpaceNews.
The demonstration, targeted for 2026, will attempt to showcase a spacecraft’s ability to approach, image and maneuver around other objects in orbit without direct human control.
The post Impulse Space and Anduril to demonstrate autonomous spacecraft maneuvers in GEO appeared first on SpaceNews.