Researchers at Microsoft in the US claim to have made the first topological quantum bit (qubit) – a potentially transformative device that could make quantum computing robust against the errors that currently restrict what it can achieve. “If the claim stands, it would be a scientific milestone for the field of topological quantum computing and physics beyond”, says Scott Aaronson, a computer scientist at the University of Texas at Austin.
However, the claim is controversial because the evidence supporting it has not yet been presented in a peer-reviewed paper. It is made in a press release from Microsoft accompanying a paper in Nature (638 651) that has been written by more than 160 researchers from the company’s Azure Quantum team. The paper stops short of claiming a topological qubit but instead reports some of the key device characterization underpinning it.
Writing in a peer-review file accompanying the paper, the Nature editorial team says that it sought additional input from two of the article’s reviewers to “establish its technical correctness”, concluding that “the results in this manuscript do not represent evidence for the presence of Majorana zero modes [MZMs] in the reported devices”. An MZM is a quasiparticle (a particle-like collective electronic state) that can act as a topological qubit.
“That’s a big no-no”
“The peer-reviewed publication is quite clear [that it contains] no proof for topological qubits”, says Winfried Hensinger, a physicist at the University of Sussex who works on quantum computing using trapped ions. “But the press release speaks differently. In academia that’s a big no-no: you shouldn’t make claims that are not supported by a peer-reviewed publication” – or that have at least been presented in a preprint.
Chetan Nayak, leader of Microsoft Azure Quantum, which is based in Redmond, Washington, says that the evidence for a topological qubit was obtained in the period between submission of the paper in March 2024 and its publication. He will present those results at a talk at the Global Physics Summit of the American Physical Society in Anaheim in March.
But Hensinger is concerned that “the press release doesn’t make it clear what the paper does and doesn’t contain”. He worries that some might conclude that the strong claim of having made a topological qubit is now supported by a paper in Nature. “We don’t need to make these claims – that is just unhealthy and will really hurt the field”, he says, because it could lead to unrealistic expectations about what quantum computers can do.
As with the qubits used in current quantum computers, such as superconducting components or trapped ions, MZMs would be able to encode superpositions of the two readout states (representing a 1 or 0). By quantum-entangling such qubits, information could be manipulated in ways not possible for classical computers, greatly speeding up certain kinds of computation. In MZMs the two states are distinguished by “parity”: whether the quasiparticles contain even or odd numbers of electrons.
Built-in error protection
As MZMs are “topological” states, their settings cannot easily be flipped by random fluctuations to introduce errors into the calculation. Rather, the states are like a twist in a buckled belt that cannot be smoothed out unless the buckle is undone. Topological qubits would therefore suffer far less from the errors that afflict current quantum computers, and which limit the scale of the computations they can support. Because quantum error correction is one of the most challenging issues for scaling up quantum computers, “we want some built-in level of error protection”, explains Nayak.
It has long been thought that MZMs might be produced at the ends of nanoscale wires made of a superconducting material. Indeed, Microsoft researchers have been trying for several years to fabricate such structures and look for the characteristic signature of MZMs at their tips. But it can be hard to distinguish this signature from those of other electronic states that can form in these structures.
In 2018 researchers at labs in the US and the Netherlands (including the Delft University of Technology and Microsoft) claimed to have evidence of an MZM in such devices. However, they then had to retract the work after others raised problems with the data. “That history is making some experts cautious about the new claim”, says Aaronson.
Now, though, it seems that Nayak and colleagues have cracked the technical challenges. In the Nature paper, they report measurements in a nanowire heterostructure made of superconducting aluminium and semiconducting indium arsenide that are consistent with, but not definitive proof of, MZMs forming at the two ends. The crucial advance is an ability to accurately measure the parity of the electronic states. “The paper shows that we can do these measurements fast and accurately”, says Nayak.
The device is a remarkable achievement from the materials science and fabrication standpoint
Ivar Martin, Argonne National Laboratory
“The device is a remarkable achievement from the materials science and fabrication standpoint”, says Ivar Martin, a materials scientist at Argonne National Laboratory in the US. “They have been working hard on these problems, and seems like they are nearing getting the complexities under control.” In the press release, the Microsoft team claims now to have put eight MZM topological qubits on a chip called Majorana 1, which is designed to house a million of them (see figure).
Even if the Microsoft claim stands up, a lot will still need to be done to get from a single MZM to a quantum computer, says Hensinger. Topological quantum computing is “probably 20–30 years behind the other platforms”, he says. Martin agrees. “Even if everything checks out and what they have realized are MZMs, cleaning them up to take full advantage of topological protection will still require significant effort,” he says.
Regardless of the debate about the results and how they have been announced, researchers are supportive of the efforts at Microsoft to produce a topological quantum computer. “As a scientist who likes to see things tried, I’m grateful that at least one player stuck with the topological approach even when it ended up being a long, painful slog”, says Aaronson.
“Most governments won’t fund such work, because it’s way too risky and expensive”, adds Hensinger. “So it’s very nice to see that Microsoft is stepping in there.”
Solid-state batteries are considered next-generation energy storage technology as they promise higher energy density and safety than lithium-ion batteries with a liquid electrolyte. However, major obstacles to commercialization are the requirement for high stack pressures and insufficient power density. Both aspects are closely related to limitations of charge transport within the composite cathode.
This webinar presents an introduction to using electrochemical impedance spectroscopy to investigate composite cathode microstructures and identify kinetic bottlenecks. Effective conductivities can be obtained using transmission line models and used to evaluate the main factors limiting electronic and ionic charge transport.
In combination with high-resolution 3D imaging techniques and electrochemical cell cycling, the crucial role of the cathode microstructure can be revealed, relevant factors influencing cathode performance identified, and optimization strategies for improved cathode performance derived.
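The transmission-line analysis mentioned above can be sketched in a few lines. The snippet below evaluates the impedance of a simple blocking-electrode transmission line model and reads the effective ionic resistance off its low-frequency limit; the function name and parameter values are illustrative, not taken from the webinar:

```python
import numpy as np

def tlm_impedance(freq_hz, r_ion, c_int):
    """Impedance of a blocking-electrode transmission line model.

    r_ion -- total ionic resistance of the porous electrode (ohm)
    c_int -- total interfacial capacitance (F)
    """
    omega = 2 * np.pi * np.asarray(freq_hz)
    s = np.sqrt(1j * omega * r_ion * c_int)
    # Z = sqrt(R/(jwC)) * coth(s) = R * cosh(s) / (s * sinh(s))
    return r_ion * np.cosh(s) / (s * np.sinh(s))

# At low frequency the real part tends to R_ion / 3 -- the offset used
# to extract the effective ionic conductivity from a Nyquist plot.
z_low = tlm_impedance(1e-3, r_ion=30.0, c_int=0.1)
print(round(z_low.real, 2))  # ≈ 10.0 (= R_ion / 3)
```

In practice, such a model would be fitted to a measured impedance spectrum rather than evaluated with assumed parameters.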
Philip Minnmann
Philip Minnmann received his M.Sc. in Materials Science from RWTH Aachen University. He later joined Prof. Jürgen Janek’s group at JLU Giessen as part of the BMBF Cluster of Competence for Solid-State Batteries FestBatt. During his Ph.D., he worked on composite cathode characterization for sulfide-based solid-state batteries, as well as processing scalable, slurry-based solid-state batteries. Since 2023, he has been a project manager for high-throughput battery material research at HTE GmbH.
Johannes Schubert
Johannes Schubert holds an M.Sc. in Material Science from the Justus-Liebig University Giessen, Germany. He is currently a Ph.D. student in the research group of Prof. Jürgen Janek in Giessen, where he is part of the BMBF Competence Cluster for Solid-State Batteries FestBatt. His main research focuses on characterization and optimization of composite cathodes with sulfide-based solid electrolytes.
The fusion physicist Ian Chapman is to be the next head of UK Research and Innovation (UKRI) – the UK’s biggest public research funder. He will take up the position in June, replacing the geneticist Ottoline Leyser who has held the position since 2020.
UK science minister Patrick Vallance notes that Chapman’s “leadership experience, scientific expertise and academic achievements make him an exceptionally strong candidate to lead UKRI”.
UKRI chairman Andrew Mackenzie, meanwhile, states that Chapman “has the skills, experience, leadership and commitment to unlock this opportunity to improve the lives and livelihoods of everyone”.
Hard act to follow
After gaining an MSc in mathematics and physics from Durham University, Chapman completed a PhD at Imperial College London in fusion science, which he partly did at Culham Science Centre in Oxfordshire.
In 2014 he became head of tokamak science at Culham and then became fusion programme manager a year later. In 2016, aged just 34, he was named chief executive of the UK Atomic Energy Authority (UKAEA), which saw him lead the UK’s magnetic confinement fusion research programme at Culham.
In that role he oversaw an upgrade to the lab’s Mega Amp Spherical Tokamak as well as the final operation of the Joint European Torus (JET) – one of the world’s largest nuclear fusion devices – that closed in 2024.
Chapman also played a part in planning a prototype fusion power plant. Known as the Spherical Tokamak for Energy Production (STEP), it was first announced by the UK government in 2019, with operations expected to begin in the 2040s. STEP aims to prove the commercial viability of fusion by demonstrating net energy, fuel self-sufficiency and a viable route to plant maintenance.
Chapman, who currently sits on UKRI’s board, says that he is “excited” to take over as head of UKRI. “Research and innovation must be central to the prosperity of our society and our economy, so UKRI can shape the future of the country,” he notes. “I was tremendously fortunate to represent UKAEA, an organisation at the forefront of global research and innovation of fusion energy, and I look forward to building on those experiences to enable the wider UK research and innovation sector.”
The UKAEA has announced that Tim Bestwick, currently its deputy chief executive, will take over as interim head until a permanent replacement is found.
Steve Cowley, director of the Princeton Plasma Physics Laboratory in the US and a former chief executive of UKAEA, told Physics World that Chapman is an “astonishing science leader” and that the UKRI is in “excellent hands”. “[Chapman] has set a direction for UK fusion research that is bold and inspired,” adds Cowley. “It will be a hard act to follow but UK fusion development will go ahead with great energy.”
A team at the Trento Proton Therapy Centre in Italy has delivered the first clinical treatments using proton arc therapy (PAT), an emerging proton delivery technique. Following successful dosimetric comparisons with clinically delivered proton plans, the researchers confirmed the feasibility of PAT delivery and used PAT to treat nine cancer patients, reporting their findings in Medical Physics.
Currently, proton therapy is mostly delivered using pencil-beam scanning (PBS), which provides highly conformal dose distributions. But PBS delivery can be compromised by the small number of beam directions deliverable in an acceptable treatment time. PAT overcomes this limitation by moving to an arc trajectory.
“Proton arc treatments are different from any other pencil-beam proton delivery technique because of the large number of beam angles used and the possibility to optimize the number of energies used for each beam direction, which enables optimization of the delivery time,” explains first author Francesco Fracchiolla. “The ability to optimize both the number of energy layers and the spot weights makes these treatments superior to any previous delivery technique.”
Plan comparisons
The Trento researchers – working with colleagues from RaySearch Laboratories – compared the dosimetric parameters of PAT plans with those of state-of-the-art multiple-field optimized (MFO) PBS plans, for 10 patients with head-and-neck cancer. They focused on this site due to the high number of organs-at-risk (OARs) close to the target that may be spared using this new technique.
In future, PAT plans will be delivered with the beam on during gantry motion (dynamic mode). This requires dynamic arc plan delivery with all system settings automatically adjusted as a function of gantry angle – an approach with specific hardware and software requirements that have so far impeded clinical rollout.
Instead, Fracchiolla and colleagues employed an alternative version of static PAT, in which the static arc is converted into a series of PBS beams and delivered using conventional delivery workflows. Using the RayStation treatment planning system, they created MFO plans (using six noncoplanar beam directions) and PAT plans (with 30 beam directions), robustly optimized against setup and range uncertainties.
PAT plans dramatically improved dose conformality compared with MFO treatments. While target coverage was of equal quality for both treatment types, PAT decreased the mean doses to OARs for all patients. The biggest impact was in the brainstem, where PAT reduced maximum and mean doses by 19.6 and 9.5 Gy(RBE), respectively. Dose to other primary OARs did not differ significantly between plans, but PAT achieved an impressive reduction in mean dose to secondary OARs not directly adjacent to the target.
The team also evaluated how these dosimetric differences impact normal tissue complication probability (NTCP). PAT significantly reduced (by 8.5%) the risk of developing dry mouth and slightly lowered other NTCP endpoints (swallowing dysfunction, tube feeding and sticky saliva).
To verify the feasibility of clinical PAT, the researchers delivered MFO and PAT plans for one patient on a clinical gantry. Importantly, delivery times (from the start of the first beam to the end of the last) were similar for both techniques: 36 min for PAT with 30 beam directions and 31 min for MFO. Reducing the number of beam directions to 20 reduced the delivery time to 25 min, while maintaining near-identical dosimetric data.
First patient treatments
The successful findings of the plan comparison and feasibility test prompted the team to begin clinical treatments.
“The final trigger to go live was the fact that the discretized PAT plans maintained pretty much exactly the optimal dosimetric characteristics of the original dynamic (continuous rotation) arc plan from which they derived, so there was no need to wait for full arc to put the potential benefits to clinical use. Pretreatment verification showed excellent dosimetric accuracy and everything could be done in a fully CE-certified environment,” say Frank Lohr and Marco Cianchetti, director and deputy director, respectively, of the Trento Proton Therapy Center. “The only current drawback is that we are not at the treatment speed that we could be with full dynamic arc.”
To date, nine patients have received or are undergoing PAT treatment: five with head-and-neck tumours, three with brain tumours and one with a thoracic tumour. For the first two head-and-neck patients, the team created PAT plans with a half arc (180° to 0°) with 10 beam directions and a mean treatment time of 12 min. The next two were treated with a complete arc (360°) with 20 beam directions. Here, the mean treatment time was 24 min. Patient-specific quality assurance revealed an average gamma passing rate (3%, 3 mm) of 99.6% and only one patient required replanning.
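The quality-assurance figure quoted here is based on the gamma index, which combines a dose-difference criterion with a distance-to-agreement criterion. As a rough illustration only (a toy 1D profile, not clinical data or the centre’s actual QA software), a simplified global 3%/3 mm gamma passing rate can be computed like this:

```python
import numpy as np

def gamma_pass_rate_1d(dose_ref, dose_eval, x, dose_tol=0.03, dist_tol_mm=3.0):
    """Simplified 1D global gamma analysis.

    For each reference point, find the minimum combined dose-difference /
    distance-to-agreement metric over all evaluated points; a point passes
    if that minimum is <= 1.
    """
    d_crit = dose_tol * dose_ref.max()        # global dose criterion
    passed = 0
    for xi, di in zip(x, dose_ref):
        dd = (dose_eval - di) / d_crit        # dose-difference term
        dx = (x - xi) / dist_tol_mm           # distance term
        passed += np.sqrt(dd**2 + dx**2).min() <= 1.0
    return 100.0 * passed / len(dose_ref)

# Identical planned and measured profiles pass everywhere:
x = np.linspace(0, 100, 201)                  # positions (mm)
dose = np.exp(-((x - 50) / 20) ** 2)          # toy dose profile
print(gamma_pass_rate_1d(dose, dose, x))      # 100.0
```

Clinical gamma analysis works on 2D or 3D dose grids with interpolation, but the pass/fail logic is the same.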
All PAT treatments were performed using the centre’s IBA ProteusPlus proton therapy unit and the existing clinical workflow. “Our treatment planning system can convert an arc plan into a PBS plan with multiple beams,” Fracchiolla explains. “With this workaround, the entire clinical chain doesn’t change and the plan can be delivered on the existing system. This ability to convert the arc plans into PBS plans means that basically every proton centre can deliver these treatments with the current hardware settings.”
The researchers are now analysing acute toxicity data from the patients, to determine whether PAT reduces toxicity. They are also looking to further reduce the delivery times.
“Hopefully, together with IBA, we will streamline the current workflow between the OIS [oncology information system] and the treatment control system to reduce treatment times, thus being competitive in comparison with conventional approaches, even before full dynamic arc treatments become a clinical reality,” adds Lohr.
Inside view Private companies like Tokamak Energy in the UK are developing compact tokamaks that, they hope, could bring fusion power to the grid in the 2030s. (Courtesy: Tokamak Energy)
Fusion – the process that powers the Sun – offers a tantalizing opportunity to generate almost unlimited amounts of clean energy. In the Sun’s core, matter is more than 10 times denser than lead and temperatures reach 15 million K. In these conditions, ionized isotopes of hydrogen (deuterium and tritium) can overcome their electrostatic repulsion, fusing into helium nuclei and ejecting high-energy neutrons. The products of this reaction are slightly lighter than the two reacting nuclei, and the excess mass is converted to lots of energy.
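That mass-to-energy conversion is easy to check numerically. The sketch below uses standard atomic masses (in unified atomic mass units, with 1 u ≈ 931.494 MeV/c²) for the deuterium–tritium reaction described above:

```python
# Energy released by D + T -> He-4 + n, from the mass defect (E = Δm·c²).
m_D, m_T = 2.014102, 3.016049     # deuterium and tritium masses (u)
m_He4, m_n = 4.002602, 1.008665   # helium-4 and neutron masses (u)

delta_m = (m_D + m_T) - (m_He4 + m_n)     # mass lost in the reaction (u)
energy_mev = delta_m * 931.494            # convert u -> MeV

print(f"{energy_mev:.1f} MeV")            # ≈ 17.6 MeV per reaction
```

Most of this energy (about 14.1 MeV) is carried off by the neutron, which is why neutron damage to the surrounding materials is such a central concern.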
The engineering and materials challenges of creating what is essentially a ‘Sun in a freezer’ are formidable
The Sun’s core is kept hot and dense by the enormous gravitational force exerted by its huge mass. To achieve nuclear fusion on Earth, different tactics are needed. Instead of gravity, the most common approach uses strong superconducting magnets operating at ultracold temperatures to confine the intensely hot hydrogen plasma.
The engineering and materials challenges of creating what is essentially a “Sun in a freezer”, and harnessing its power to make electricity, are formidable. This is partly because, over time, high-energy neutrons from the fusion reaction will damage the surrounding materials. Superconductors are incredibly sensitive to this kind of damage, so substantial shielding is needed to maximize the lifetime of the reactor.
The traditional roadmap towards fusion power, led by large international projects, has set its sights on bigger and bigger reactors, at greater and greater expense. However, these are moving at a snail’s pace, with the first power to the grid not anticipated until the 2060s, leading to the common perception that “fusion power is 30 years away, and always will be.”
There is therefore considerable interest in alternative concepts for smaller, simpler reactors to speed up the fusion timeline. Such novel reactors will need a different toolkit of superconductors. Promising materials exist, but because fusion can still only be sustained in brief bursts, we have no way to directly test how these compounds will degrade over decades of use.
Is smaller better?
A leading concept for a nuclear fusion reactor is a machine called a tokamak, in which the plasma is confined to a doughnut-shaped region. In a tokamak, D-shaped electromagnets are arranged in a ring around a central column, producing a circulating (toroidal) magnetic field. This exerts a force (the Lorentz force) on the positively charged hydrogen nuclei, making them trace helical paths that follow the field lines and keep them away from the walls of the vessel.
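The helical confinement described above follows from the standard gyroradius formula r = mv⊥/(qB). A quick estimate (with illustrative numbers, not values for any specific machine) shows why even a modest field keeps fuel ions on orbits far smaller than the vessel:

```python
# Gyroradius of a deuteron in a uniform field B = B ẑ.
# The Lorentz force F = q v × B bends the perpendicular velocity into a
# circle of radius r = m * v_perp / (q * B); motion along B makes a helix.
q = 1.602e-19      # elementary charge (C)
m = 3.344e-27      # deuteron mass (kg)
B = 5.0            # field strength (T) -- illustrative tokamak-scale value
v_perp = 1.0e6     # perpendicular speed (m/s) -- illustrative

r_gyro = m * v_perp / (q * B)
print(f"gyroradius ≈ {r_gyro * 1e3:.1f} mm")   # ≈ 4.2 mm
```

Millimetre-scale orbits in a metres-wide vessel are what make magnetic confinement possible at all.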
In 2010, construction began in France on ITER, a tokamak that is designed to demonstrate the viability of nuclear fusion for energy generation. The aim is to produce burning plasma, where more than half of the energy heating the plasma comes from fusion in the plasma itself, and to generate, for short pulses, a tenfold return on the power input.
But despite being proposed 40 years ago, ITER’s projected first operation was recently pushed back by another 10 years to 2034. The project’s budget has also been revised multiple times and it is currently expected to cost tens of billions of euros. One reason ITER is such an ambitious and costly project is its sheer size. ITER’s plasma radius of 6.2 m is twice that of the JT-60SA in Japan, the world’s current largest tokamak. The power generated by a tokamak roughly scales with the radius of the doughnut cubed, which means that doubling the radius should yield an eight-fold increase in power.
Small but mighty Tokamak Energy’s ST40 compact tokamak uses copper electromagnets, which would be unsuitable for long-term operation due to overheating. REBCO compounds, which are high-temperature superconductors that can generate very high magnetic fields, are an attractive alternative. (Courtesy: Tokamak Energy)
However, instead of chasing larger and larger tokamaks, some organizations are going in the opposite direction. Private companies like Tokamak Energy in the UK and Commonwealth Fusion Systems in the US are developing compact tokamaks that, they hope, could bring fusion power to the grid in the 2030s. Their approach is to ramp up the magnetic field rather than the size of the tokamak. The fusion power of a tokamak has a stronger dependence on the magnetic field than the radius, scaling with the fourth power.
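Combining the two scalings quoted above gives P ∝ B⁴R³ as a rule of thumb. The toy calculation below (reference values loosely based on ITER’s 6.2 m radius; treating the scaling as exact, which it is not) shows how a compact machine can trade size for field strength:

```python
# Rule-of-thumb fusion power scaling: P ∝ B^4 * R^3.
def relative_power(B, R, B0=5.3, R0=6.2):
    """Fusion power relative to a reference machine with field B0 (T)
    and major radius R0 (m). Illustrative scaling, not a design code."""
    return (B / B0) ** 4 * (R / R0) ** 3

# Doubling the field lets the radius shrink to ~40% for the same power:
print(round(relative_power(B=10.6, R=2.46), 2))   # ≈ 1.0
```

The fourth-power dependence on B is why the high-field REBCO magnets discussed below are so attractive to compact-tokamak developers.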
The drawback of smaller tokamaks is that the materials will sustain more damage from neutrons during operation. Of all the materials in the tokamak, the superconducting magnets are most sensitive to this. If the reactor is made more compact, they are also closer to the plasma and there will be less space for shielding. So if compact tokamaks are to succeed commercially, we need to choose superconducting materials that will be functional even after many years of irradiation.
1 Superconductors
Operation window for Nb-Ti, Nb3Sn and REBCO superconductors. (Courtesy: Susie Speller/IOP Publishing)
Superconductors are materials that have zero electrical resistance when they are cooled below a certain critical temperature (Tc). Superconducting wires can therefore carry electricity much more efficiently than conventional resistive metals like copper.
What’s more, a superconducting wire can carry a much higher current than a copper wire of the same diameter because, with zero resistance, no heat is generated. In contrast, as you pass ever more current through a copper wire, it heats up and its resistance rises even further, until eventually it melts. This increased current density (current per unit cross-sectional area) enables high-field superconducting magnets to be more compact than resistive ones.
However, there is an upper limit to the strength of the magnetic field that a superconductor can usefully tolerate without losing the ability to carry lossless current. This is known as the “irreversibility field”, and for a given superconductor its value decreases as temperature is increased, as shown above.
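The fall of the irreversibility field with temperature is often described by an empirical power law of the form B_irr(T) = B_irr(0)·(1 − T/Tc)ⁿ. The parameters below are placeholders for illustration (not measured values for any particular conductor), but they reproduce the qualitative trend shown in the figure:

```python
import numpy as np

def irreversibility_field(T, B0=100.0, Tc=92.0, n=1.5):
    """Empirical power-law model B_irr(T) = B0 * (1 - T/Tc)^n for T < Tc.
    B0 (tesla), Tc (kelvin) and n are illustrative placeholder values."""
    T = np.asarray(T, dtype=float)
    reduced = np.clip(1.0 - T / Tc, 0.0, None)   # zero above Tc
    return B0 * reduced ** n

# Cooling from 77 K (liquid nitrogen) towards 20 K greatly widens the
# usable field window -- one reason fusion magnets run well below Tc:
print(np.round(irreversibility_field([77.0, 20.0]), 1))
```

This is why high-field fusion magnets are operated at around 20 K even though REBCO superconducts at 77 K.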
High-performance fusion materials
Superconductors are a class of materials that, when cooled below a characteristic temperature, conduct with no resistance (see box 1, above). Magnets made from superconducting wires can carry high currents without overheating, making them ideal for generating the very high fields required for fusion. Superconductivity is highly sensitive to the arrangement of the atoms; whilst some amorphous superconductors exist, most superconducting compounds only conduct high currents in a specific crystalline state. A few defects will always arise, and can sometimes even improve the material’s performance. But introducing significant disorder to a crystalline superconductor will eventually destroy its ability to superconduct.
The most common material for superconducting magnets is a niobium-titanium (Nb-Ti) alloy, which is used in MRI machines in hospitals and CERN’s Large Hadron Collider. Nb-Ti superconducting magnets are relatively cheap and easy to manufacture, but – like all superconducting materials – the alloy has an upper limit to the magnetic field in which it can superconduct, known as the irreversibility field. In Nb-Ti this value is too low for the material to be used for the high-field magnets in ITER. The ITER tokamak will instead use a niobium-tin (Nb3Sn) superconductor, which has a higher irreversibility field than Nb-Ti, even though it is much more expensive and challenging to work with.
2 REBCO unit cell
(Courtesy: redrawn from Wikimedia Commons/IOP Publishing)
The unit cell of a REBCO high-temperature superconductor. Here the copper atoms are shown in pink and the oxygen atoms in red; the barium atoms are in green and the rare-earth element, here yttrium, is in blue.
Compact tokamaks need stronger magnetic fields, and so require a superconducting material with an even higher irreversibility field. Over the last decade, another class of superconducting materials called “REBCO” has been proposed as an alternative. Short for rare earth barium copper oxide, these are a family of superconductors with the chemical formula REBa2Cu3O7, where RE is a rare-earth element such as yttrium, gadolinium or europium (see Box 2 “REBCO unit cell”).
REBCO compounds are high-temperature superconductors, which are defined as having transition temperatures above 77 K, meaning they can be cooled with liquid nitrogen rather than the more expensive liquid helium. REBCO compounds also have a much higher irreversibility field than niobium-tin, and so can sustain the high fields necessary for a small fusion reactor.
REBCO wires: Bendy but brittle
REBCO materials have attractive superconducting properties, but it is not easy to manufacture them into flexible wires for electromagnets. REBCO is a brittle ceramic so can’t be made into wires in the same way as ductile materials like copper or Nb-Ti, where the material is drawn through progressively smaller holes.
Instead, REBCO tapes are manufactured by coating metallic ribbons with a series of very thin ceramic layers, one of which is the superconducting REBCO compound. Ideally, the REBCO would be a single crystal, but in practice, it will be comprised of many small grains. The metal gives mechanical stability and flexibility whilst the underlying ceramic “buffer” layers protect the REBCO from chemical reactions with the metal and act as a template for aligning the REBCO grains. This is important because the boundaries between individual grains reduce the maximum current the wire can carry.
Another potential problem is that these compounds are chemically sensitive and are “poisoned” by nearly all the impurities that may be introduced during manufacture. These impurities can produce insulating compounds that block supercurrent flow or degrade the performance of the REBCO compound itself.
Despite these challenges, and thanks to impressive materials engineering from several companies and institutions worldwide, REBCO is now made in kilometre-long, flexible tapes capable of carrying thousands of amps of current. In 2024, more than 10,000 km of this material was manufactured for the burgeoning fusion industry. This is impressive given that only 1000 km was made in 2020. However, a single compact tokamak will require up to 20,000 km of this REBCO-coated conductor for the magnet systems, and because the superconductor is so expensive to manufacture it is estimated that this would account for a considerable fraction of the total cost of a power plant.
Pushing superconductors to the limit
Another problem with REBCO materials is that the temperature below which they superconduct falls steeply once they’ve been irradiated with neutrons. Their lifetime in service will depend on the reactor design and amount of shielding, but research from the Vienna University of Technology in 2018 suggested that REBCO materials can withstand about a thousand times less damage than structural materials like steel before they start to lose performance (Supercond. Sci. Technol. 31 044006).
These experiments are currently being used by the designers of small fusion machines to assess how much shielding will be required, but they don’t tell the whole story. The 2018 study used neutrons from a fission reactor, which have a different spectrum of energies compared to fusion neutrons. They also did not reproduce the environment inside a compact tokamak, where the superconducting tapes will be at cryogenic temperatures, carrying high currents and under considerable strain from Lorentz forces generated in the magnets.
Even if we could get a sample of REBCO inside a working tokamak, the maximum runtime of current machines is measured in minutes, meaning we cannot do enough damage to test how susceptible the superconductor will be in a real fusion environment. The current record for fusion energy from a tokamak is 69 megajoules, achieved in a 5-second burst at the Joint European Torus (JET) tokamak in the UK.
Given the difficulty of using neutrons from fusion reactors, our team is looking for answers using ions instead. Ion irradiation is much more readily available, quicker to perform, and doesn’t make the samples radioactive. It is also possible to access a wide range of energies and ion species to tune the damage mechanisms in the material. The trouble is that because ions are charged they won’t interact with materials in exactly the same way as neutrons, so it is not clear if these particles cause the same kinds of damage or by the same mechanisms.
To find out, we first tried to directly image the crystalline structure of REBCO after both neutron and ion irradiation using transmission electron microscopy (TEM). When we compared the samples, we saw small amorphous regions in the neutron-irradiated REBCO where the crystal structure was destroyed (J. Microsc. 286 3), which are not observed after light ion irradiation (see Box 3 below).
TEM images of REBCO before (a) and after (b) helium ion irradiation. The image on the right (c) shows only the positions of the copper, barium and rare-earth atoms – the oxygen atoms in the crystal lattice cannot be imaged using this technique. After ion irradiation, REBCO materials exhibit a lower superconducting transition temperature. However, the above images show no corresponding defects in the lattice, indicating that defects involving oxygen atoms knocked out of place are responsible for this effect.
We believe these regions to be collision cascades generated initially by a single violent neutron impact that knocks an atom out of its place in the lattice with enough energy that the atom ricochets through the material, knocking other atoms from their positions. However, these amorphous regions are small, and superconducting currents should be able to pass around them, so it was likely that another effect was reducing the superconducting transition temperature.
Searching for clues
The TEM images didn’t show any other defects, so on our hunt to understand the effect of neutron irradiation, we instead thought about what we couldn’t see in the images. The TEM technique we used cannot resolve the oxygen atoms in REBCO because they are too light to scatter the electrons by large angles. Oxygen is also the most mobile atom in a REBCO material, which led us to think that oxygen point defects – single oxygen atoms that have been moved out of place and which are distributed randomly throughout the material – might be responsible for the drop in transition temperature.
In REBCO, the oxygen atoms are all bonded to copper, so the bonding environment of the copper atoms can be used to identify oxygen defects. To test this theory we switched from electrons to photons, using a technique called X-ray absorption spectroscopy. Here the sample is illuminated with X-rays that preferentially excite the copper atoms; the precise energies where absorption is highest indicate specific bonding arrangements, and therefore point to specific defects. We have started to identify the defects that are likely to be present in the irradiated samples, finding spectral changes that are consistent with oxygen atoms moving into unoccupied sites (Communications Materials 3 52).
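In essence, the analysis compares measured absorption-edge features against reference signatures for known bonding environments. The toy below illustrates that matching step; the edge energies and defect labels are entirely hypothetical, not values from our spectra.

```python
# Hypothetical reference edge positions (eV) for different Cu bonding
# environments; a real XAS analysis compares against measured or
# calculated standards, not a three-entry lookup table.
REFERENCE_EDGES = {
    "pristine CuO2 plane": 8984.0,
    "oxygen vacancy": 8981.5,
    "oxygen interstitial": 8986.5,
}

def likely_defect(measured_edge_eV: float) -> str:
    """Assign a measured Cu absorption-edge energy to the nearest
    reference bonding environment (a deliberately crude toy)."""
    return min(REFERENCE_EDGES,
               key=lambda name: abs(REFERENCE_EDGES[name] - measured_edge_eV))

print(likely_defect(8986.1))  # -> "oxygen interstitial"
```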
We see very similar changes to the spectra when we irradiate with helium ions and neutrons, suggesting that similar defects are created in both cases (Supercond. Sci. Technol. 36 10LT01). This work has increased our confidence that light ions are a good proxy for neutron damage in REBCO superconductors, and that this damage is due to changes in the oxygen lattice.
The Surrey Ion Beam Centre allows users to carry out a wide variety of research using ion implantation, ion irradiation and ion beam analysis. (Courtesy: Surrey Ion Beam Centre)
Another advantage of ion irradiation is that, compared to neutrons, it is easier to access experimentally relevant cryogenic temperatures. Our experiments are performed at the Surrey Ion Beam Centre, where a cryocooler can be attached to the end of the ion accelerator, enabling us to recreate some of the conditions inside a fusion reactor.
We have shown that when REBCO is irradiated at cryogenic temperatures and then allowed to warm to room temperature, it recovers some of its superconducting properties (Supercond. Sci. Technol. 34 09LT01). We attribute this to annealing, where rearrangements of atoms occur in a material warmed below its melting point, smoothing out defects in the crystal lattice. We have shown that further recovery of a perfect superconducting lattice can be induced using careful heat treatments to avoid loss of oxygen from the samples (MRS Bulletin 48 710).
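Why does warming help? Defect migration is thermally activated, so a simple first-order Arrhenius model shows why defects that are frozen in at cryogenic temperature can anneal out at room temperature. The activation energy and attempt frequency below are illustrative placeholders, not fitted values from our data.

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def surviving_defects(n0: float, e_a_eV: float, temp_K: float,
                      time_s: float, prefactor_hz: float = 1e13) -> float:
    """First-order annealing: defects decay at an Arrhenius rate
    k = nu * exp(-Ea / kB*T). All parameter values are illustrative."""
    rate = prefactor_hz * math.exp(-e_a_eV / (K_B * temp_K))
    return n0 * math.exp(-rate * time_s)

# The same defect population held for one hour at 20 K vs 300 K
print(surviving_defects(1.0, 0.9, 20, 3600))   # ~1.0 (frozen in)
print(surviving_defects(1.0, 0.9, 300, 3600))  # ~0   (annealed out)
```

The steep temperature dependence is the key point: a modest warm-up can change the annealing rate by many orders of magnitude, which is consistent with the partial recovery we observe after cryogenic irradiation.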
Many more experiments are required to fully understand the effect of irradiation temperature on the degradation of REBCO. Our results indicate that room-temperature and cryogenic irradiation with helium ions lead to a similar rate of degradation, but similar work by a group at the Massachusetts Institute of Technology (MIT) in the US using proton irradiation has found that the superconductor degrades more rapidly at cryogenic temperatures (Rev. Sci. Instrum. 95 063907). The effect of other critical parameters like magnetic field and strain also still needs to be explored.
Towards net zero
The remarkable properties of REBCO high-temperature superconductors present new opportunities for designing fusion reactors that are substantially smaller (and cheaper) than traditional tokamaks, and which private companies ambitiously promise will enable the delivery of power to the grid on vastly accelerated timescales. REBCO tape with the required performance can already be manufactured commercially, but more research is needed to understand the neutron damage the magnets will be subjected to, so that they achieve the desired service lifetimes.
Scale-up of REBCO tape production is already happening at pace, and it is expected that this will drive down the cost of manufacture. This would open up extensive new applications, not only in fusion but also in power applications such as lossless transmission cables, for which the historically high costs of the superconducting material have proved prohibitive. Superconductors are also being introduced into wind turbine generators and magnet-based energy storage devices.
This symbiotic relationship between fusion and superconductor research could lead not only to the realization of clean fusion energy but also many other superconducting technologies that will contribute to the achievement of net zero.
Astronomers have constructed the first “weather map” of the exoplanet WASP-127b, and the forecast there is brutal. Winds roar around its equator at speeds as high as 33 000 km/hr, far exceeding anything found in our own solar system. Its poles are cooler than the rest of its surface, though “cool” is a relative term on a planet where temperatures routinely exceed 1000 °C. And its atmosphere contains water vapour, so rain – albeit not in the form we’re accustomed to on Earth – can’t be ruled out.
Astronomers have been studying WASP-127b since its discovery in 2016. A gas giant exoplanet located over 500 light-years from Earth, it is slightly larger than Jupiter but much less dense, and it orbits its host – a G-type star like our own Sun – in just 4.18 Earth days. To probe its atmosphere, astronomers record the light transmitted as the planet passes in front of its host star along our line of sight. During such passes, or transits, some starlight gets filtered through the planet’s upper atmosphere and is “imprinted” with the characteristic pattern of absorption lines of the atoms and molecules present there.
Observing the planet during a transit event
On the night of 24/25 March 2022, astronomers used the CRyogenic InfraRed Echelle Spectrograph (CRIRES+) on the European Southern Observatory’s Very Large Telescope to observe WASP-127b at wavelengths of 1972‒2452 nm during a transit event lasting 6.6 hours. The data they collected show that the planet is home to supersonic winds travelling at speeds nearly six times faster than its own rotation – something that has never been observed before. By comparison, the fastest wind speeds measured in our solar system were on Neptune, where they top out at “just” 1800 km/hr, or 0.5 km/s.
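The wind speeds come from the Doppler shift of absorption lines in the transit spectra. As a rough illustration, the sketch below converts a wavelength shift into a line-of-sight velocity; the line position and shift used here are invented for illustration, not the measured CRIRES+ values.

```python
C_KM_S = 299_792.458  # speed of light in km/s

def wind_speed_km_s(rest_nm: float, observed_nm: float) -> float:
    """Line-of-sight velocity from the Doppler shift of a spectral line
    (non-relativistic approximation, valid for v << c)."""
    return C_KM_S * (observed_nm - rest_nm) / rest_nm

# A hypothetical line near 2300 nm shifted by ~0.07 nm corresponds to
# ~9 km/s -- roughly the ~33 000 km/hr equatorial jet reported for WASP-127b.
print(f"{wind_speed_km_s(2300.0, 2300.07):.1f} km/s")  # -> 9.1 km/s
```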
Such strong winds – the fastest ever observed on a planet – would be hellish to experience. But for the astronomers, they were crucial for mapping WASP-127b’s weather.
“The light we measure still looks to us as if it all came from one point in space, because we cannot resolve the planet optically/spatially like we can do for planets in our own solar system,” explains Lisa Nortmann, an astronomer at the University of Göttingen, Germany and the lead author of an Astronomy and Astrophysics paper describing the measurements. However, Nortmann continues, “the unexpectedly fast velocities measured in this planet’s atmosphere have allowed us to investigate different regions on the planet, as it causes their signals to shift to different parts of the light spectrum. This meant we could reconstruct a rough weather map of the planet, even though we cannot resolve these different regions optically.”
The astronomers also used the transit data to study the composition of WASP-127b’s atmosphere. They detected both water vapour and carbon monoxide. In addition, they found that the temperature was lower at the planet’s poles than elsewhere.
Removing unwanted signals
According to Nortmann, one of the challenges in the study was removing signals from Earth’s atmosphere and WASP-127b’s host star so as to focus on the planet itself. She notes that the work will have implications for researchers working on theoretical models that aim to predict wind patterns on exoplanets.
“They will now have to try to see if their models can recreate the wind speeds we have observed,” she tells Physics World. “The results also really highlight that when we investigate this and other planets, we have to take the 3D structure of winds into account when interpreting our results.”
The astronomers say they are now planning further observations of WASP-127b to find out whether its weather patterns are stable or change over time. “We would also like to investigate molecules on the planet other than H2O and CO,” Nortmann says. “This could possibly allow us to probe the wind at different altitudes in the planet’s atmosphere and understand the conditions there even better.”