A combination of static proton arcs and shoot-through proton beams could increase plan conformity and homogeneity and reduce delivery times in upright proton therapy, according to new research from RaySearch Laboratories in Sweden.
Proton arc therapy (PAT) is an emerging rotational delivery technique with potential to improve plan quality – reducing dose to organs-at-risk while maintaining target dose. The first clinical PAT treatments employed static arcs, in which multiple energy layers are delivered from many (typically 10 to 30) discrete angles. Importantly, static arc PAT can be delivered on conventional proton therapy machines. It also offers simpler beam arrangements than intensity-modulated proton therapy (IMPT).
“In IMPT of head-and-neck cancers, the beam directions are normally set up in a complicated pattern in different planes, with range shifters needed to treat the shallow part of the tumour,” explains Erik Engwall, chief physicist at RaySearch Laboratories. “In PAT, the many beam directions are arranged in the same plane and no range shifters are typically needed. With all beams in the same plane, it is easier to move to upright treatments.”
Upright proton therapy involves rotating the patient (in an upright position) in front of a static horizontal treatment beam. The approach could reduce costs by using compact proton delivery systems. This compactness, however, places energy selection close to the patient, increasing scattering in the proton beam. To combat this, the team propose adding a layer of shoot-through protons to each direction of the proton arc.
The idea is that while most protons are delivered with Bragg peaks placed in the target, the sharp penumbra of the high-energy protons shooting through the target will combat beam broadening. The rotational delivery in the proton arc spreads the exit dose from these shoot-through beams over many angles, minimizing dose to surrounding tissues. And as the beamline is fixed, shoot-through protons exit in the same direction (behind the patient) for all angles, simplifying shielding to a single beam dump opposite the fixed beam.
Simulation studies
To test this approach, Engwall and colleagues simulated treatment plans for a virtual phantom containing three targets and an organ-at-risk, reporting their findings in Medical Physics. They used a development version of RayStation v2025 with a beam model of the Mevion S250-FIT system (which combines a compact cyclotron, an upright positioner and an in-room CT scanner).
For each target, the team created static arc plans with (Arc+ST) and without shoot-through beams and with/without collimation, as well as 3-beam IMPT plans with and without shoot-through beams (all with collimation). Arc plans used 20 uniformly spaced beam directions, and the shoot-through plans included an additional layer of the highest system energy (230 MeV) for each direction.
For all targets, Arc+ST plans showed superior conformity, homogeneity and target robustness to arc plans without shoot-through protons. Adding collimation slightly improved the arc plans without shoot-through protons but had little impact on Arc+ST plans.
The IMPT plans achieved similar homogeneity and robustness to the best arc plans, but with far lower conformity due to the shoot-through protons delivering a concentrated exit dose behind the target (while static arcs distribute this dose over many directions). Adding shoot-through protons improved IMPT plan quality, but to a lesser degree than for PAT plans.
Clinical case
The researchers repeated their analysis for a clinical head-and-neck cancer case, comparing static arcs with 5-beam IMPT. Again, Arc+ST plans performed better than any others for almost all metrics. “The Arc+ST plans have the best quality due to the sharpening of the penumbra of the shoot-through part, even better than when using a collimator,” says Engwall.
Plan comparisons (a) Static arc with an additional shoot-through layer, (b) partial static arcs with collimation and (c) 5-beam collimated plan. Panel (d) shows the shoot-through portion of the dose distribution in (a). Dose–volume histograms are displayed for the targets and representative organs-at-risk. (Courtesy: CC BY 4.0/Med. Phys. 10.1002/mp.18051)
Notably, the findings suggest that collimation is not needed when combining arcs with shoot-through beams, enabling rapid treatments. With fast energy switching and patient rotation at 1 rpm, Arc+ST achieved an estimated delivery time of less than 5.4 min – faster than all other plans for this case, including 5-beam IMPT.
“Treatment time is reduced when the leaves of the dynamic collimator do not need to move,” Engwall explains. “There is also no risk of mechanical failures of the collimator and the secondary neutron production will be lower when there are fewer objects in the beamline.”
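As a rough sanity check on how such delivery times arise, a back-of-the-envelope estimate can be sketched from the 20 beam directions and 1 rpm rotation quoted above. The layer count and per-layer timings below are illustrative assumptions, not the values used in the RaySearch study:

```python
# Back-of-the-envelope delivery-time estimate for a static-arc plan.
# Only n_directions (20) and rotation_rpm (1) come from the article;
# every other parameter value is an illustrative assumption.

def arc_delivery_time(n_directions=20,
                      layers_per_direction=6,     # assumed
                      spot_time_per_layer=1.0,    # s per layer, assumed
                      energy_switch_time=0.2,     # s, assumed fast switching
                      rotation_rpm=1.0):
    """Return a crude delivery-time estimate in minutes."""
    beam_on = n_directions * layers_per_direction * spot_time_per_layer
    switching = n_directions * (layers_per_direction - 1) * energy_switch_time
    # The patient rotates between the discrete stops; at 1 rpm a full
    # arc adds roughly one revolution of rotation time.
    rotation = 60.0 / rotation_rpm
    return (beam_on + switching + rotation) / 60.0

print(f"{arc_delivery_time():.1f} min")
```

With these assumed numbers the estimate lands in the same few-minute regime as the reported sub-5.4 min figure; the point of the sketch is only that beam-on time, energy switching and rotation all contribute on comparable scales.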
Another benefit of upright delivery is that the shoot-through protons can be used for range verification during treatments, using a detector integrated into the beam dump behind the patient. The team investigated this concept with three simulated error scenarios: 5% systematic shift in stopping power ratio; 5 mm setup shift; and 2 cm shoulder movement. The technique successfully detected all errors.
As the range detector is permanently installed in the treatment room and the shoot-through protons are part of the treatment plan, this method does not add time to the patient setup and can be used in every treatment fraction to detect both intra- and inter-fraction uncertainties.
Although this is a proof-of-concept study, the researchers conclude that it highlights the combined advantages of the new treatment technique, which could “leverage the potential of compact upright proton treatments and make proton treatments more affordable and accessible to a larger patient group”.
Engwall tells Physics World that the team is now collaborating with several clinical research partners to investigate the technique’s potential across larger patient data sets, for other treatment sites and multiple treatment machines.
Researchers in Germany have demonstrated the first cancer treatment using a radioactive carbon ion beam (11C), on a mouse with a bone tumour close to the spine. Performing particle therapy with radioactive ion beams enables simultaneous treatment and visualization of the beam within the body.
Particle therapy using beams of protons or heavy ions is a highly effective cancer treatment, with the favourable depth–dose deposition – the Bragg peak – providing extremely conformal tumour targeting. This conformality, however, makes particle therapy particularly sensitive to range uncertainties, which can impact the Bragg peak position.
One way to reduce such uncertainties is to use positron emission tomography (PET) to map the isotopes generated as the treatment beam interacts with tissues in the patient. For therapy with carbon (12C) ions, currently performed at 17 centres worldwide, this involves detecting the beta decay of 10C and 11C projectile fragments. Unfortunately, such fragments generate a small PET signal, while their lower mass shifts the measured activity peak away from the Bragg peak.
The researchers – working within the ERC-funded BARB (Biomedical Applications of Radioactive ion Beams) project – propose that treatment with positron-emitting ions such as 11C could overcome these obstacles. Radioactive ion beams have the same biological effectiveness as their corresponding stable ion beams, but generate an order of magnitude larger PET signal. They also reduce the shift between the activity and dose peaks, enabling precise localization of the ion beam in vivo.
“Range uncertainty remains the main problem of particle therapy, as we do not know exactly where the Bragg peak is,” explains Marco Durante, head of biophysics at the GSI Helmholtz Centre for Heavy Ion Research and principal investigator of the BARB project. “If we ‘aim-and-shoot’ using a radioactive beam and PET imaging, we can see where the beam is and can then correct it. By doing this, we can reduce the margins around the target that spoil the precision of particle therapy.”
In vivo experiments
To test this premise, Durante and colleagues performed in vivo experiments at the GSI/FAIR accelerator facility in Darmstadt. For online range verification, they used a portable small-animal in-beam PET scanner built by Katia Parodi and her team at LMU Munich. The scanner, initially designed for the ERC project SIRMIO (Small-animal proton irradiator for research in molecular image-guided radiation-oncology), contains 56 depth-of-interaction detectors – based on scintillator blocks of pixelated LYSO crystals – arranged spherically with an inner diameter of 72 mm.
LMU researchers Members of the LMU team involved in the BARB project (left to right: Peter Thirolf, Giulio Lovatti, Angelica Noto, Francesco Evangelista, Munetaka Nitta and Katia Parodi) with the small-animal PET scanner. (Courtesy: Katia Parodi/Francesco Evangelista, LMU)
“Not only does our spherical in-beam PET scanner offer unprecedented sensitivity and spatial resolution, but it also enables on-the-fly monitoring of the activity implantation for direct feedback during irradiation,” says Parodi, co-principal investigator of the BARB project.
The researchers used a radioactive 11C-ion beam – produced at the GSI fragment separator – to treat 32 mice with an osteosarcoma tumour implanted in the neck near the spinal cord. To encompass the full target volume, they employed a range modulator to produce a spread-out Bragg peak (SOBP) and a plastic compensator collar, which also served to position and immobilize the mice. The anaesthetized animals were placed vertically inside the PET scanner and treated with either 20 or 5 Gy at a dose rate of around 1 Gy/min.
For each irradiation, the team compared the measured activity with Monte Carlo-simulated activity based on pre-treatment microCT scans. The activity distributions were shifted by about 1 mm, attributed to anatomical changes between the scans (with mice positioned horizontally) and irradiation (vertical positioning). After accounting for this anatomical shift, the simulation accurately matched the measured activity. “Our findings reinforce the necessity of vertical CT planning and highlight the potential of online PET as a valuable tool for upright particle therapy,” the researchers write.
With the tumour so close to the spine, even small range uncertainties risk damage to the spinal cord, so the team used the online PET images generated during the irradiation to check that the SOBP did not cover the spine. While this was not seen in any of the animals, Durante notes that if it had, the beam could be moved to enable “truly adaptive” particle therapy. Assessing the mice for signs of radiation-induced myelopathy (which can lead to motor deficits and paralysis) revealed that no mice exhibited severe toxicity, further demonstrating that the spine was not exposed to high doses.
PET imaging in a mouse (a) Simulation showing the expected 11C-ion dose distribution in the pre-treatment microCT scan. (b) Corresponding simulated PET activity. (c) Online PET image of the activity during 11C irradiation, overlaid on the same microCT used for simulations. The target is outlined in black, the spine in red. (Courtesy: CC BY 4.0/Nat. Phys. 10.1038/s41567-025-02993-8)
Following treatment, tumour measurements revealed complete tumour control after 20 Gy irradiation and prolonged tumour growth delay after 5 Gy, suggesting complete target coverage in all animals.
The researchers also assessed the washout of the signal from the tumour, which includes a slow activity decrease due to the decay of 11C (which has a half-life of 20.34 min), plus a faster decrease as blood flow removes the radioactive isotopes from the tumour. The results showed that the biological washout was dose-dependent, with the fast component visible at 5 Gy but disappearing at 20 Gy.
“We propose that this finding is due to damage to the blood vessel feeding the tumour,” says Durante. “If this is true, high-dose radiotherapy may work in a completely different way from conventional radiotherapy: rather than killing all the cancer stem cells, we just starve the tumour by damaging the blood vessels.”
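The two-component washout described above can be written as a simple model: physical 11C decay (half-life 20.34 min, from the study) multiplied by a biological term in which a "fast" fraction of the activity is cleared by blood flow. A minimal sketch, in which the fast fraction and its half-life are illustrative assumptions rather than fitted values from the paper:

```python
import math

T_HALF_C11 = 20.34                      # min, physical half-life of 11C
LAMBDA_PHYS = math.log(2) / T_HALF_C11

def activity(t, fast_fraction=0.3, t_half_fast=2.0):
    """Two-component washout model (illustrative parameters):
    physical 11C decay times a biological term in which a 'fast'
    fraction is removed by blood flow and the remainder stays put."""
    lam_fast = math.log(2) / t_half_fast
    biological = fast_fraction * math.exp(-lam_fast * t) + (1 - fast_fraction)
    return math.exp(-LAMBDA_PHYS * t) * biological

# With no fast component (as observed at 20 Gy), the signal simply
# follows physical decay: after one half-life it is exactly halved.
print(round(activity(T_HALF_C11, fast_fraction=0.0), 3))
```

In this picture the 20 Gy result corresponds to `fast_fraction` going to zero, consistent with the suggestion that the high dose shuts down the vasculature that drives the fast component.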
Future plans
Next, the team intends to investigate the use of 10C or 15O treatment beams, which should provide stronger signals and increased temporal resolution. A new Super-FRS fragment separator at the FAIR accelerator facility will provide the high-intensity beams required for studies with 10C.
Looking further ahead, clinical translation will require a realistic and relatively cheap design, says Durante. “CERN has proposed a design [the MEDICIS-Promed project] based on ISOL [isotope separation online] that can be used as a source of radioactive beams in current accelerators,” he tells Physics World. “At GSI we are also working on a possible in-flight device for medical accelerators.”
Traumatic brain injury (TBI), caused by a sudden impact to the head, is a leading cause of death and disability. After such an injury, the most important indicator of injury severity is intracranial pressure – the pressure inside the skull. But currently, the only way to assess this is by inserting a pressure sensor into the patient’s brain. UK-based startup Crainio aims to change this by developing a non-invasive method to measure intracranial pressure using a simple optical probe attached to the patient’s forehead.
Can you explain why diagnosing TBI is such an important clinical challenge?
Every three minutes in the UK, someone is admitted to hospital with a head injury – it’s a very common problem. But when someone has a blow to the head, nobody knows how bad it is until they actually reach the hospital. TBI is something that, at the moment, cannot be assessed at the point of injury.
The period from the time of impact to the time that the patient receives an assessment by a neurosurgical expert is known as the golden hour. And nobody knows what’s happening to the brain during this time – you don’t know how best to manage the patient, whether they have a severe TBI with intracranial pressure rising in the head, or just a concussion or a medium TBI.
Once at the hospital, the neurosurgeons have to assess the patient’s intracranial pressure, to determine whether it is above the threshold that classifies the injury as severe. And to do that, they have to drill a hole in the head – literally – and place an electrical probe into the brain. This really is one of the most invasive non-therapeutic procedures, and you obviously can’t do this to every patient who comes in with a blow to the head. It has its risks: there is a risk of haemorrhage or of infection.
Therefore, there’s a need to develop technologies that can measure intracranial pressure more effectively, earlier and in a non-invasive manner. For many years, this was almost like a dream: “How can you access the brain and see if the pressure is rising in the brain, just by placing an optical sensor on the forehead?”
Crainio has now created such a non-invasive sensor; what led to this breakthrough?
The research goes back to 2016, at the Research Centre for Biomedical Engineering at City, University of London (now City St George’s, University of London), when the National Institute for Health Research (NIHR) gave us our first grant to investigate the feasibility of a non-invasive intracranial sensor based on light technologies. We developed a prototype, secured the intellectual property and conducted a feasibility study on TBI patients at the Royal London Hospital, the biggest trauma hospital in the UK.
It was back in 2021, before Crainio was established, that we first discovered that after we shone certain frequencies of light, like near-infrared, into the brain through the forehead, the optical signals coming back – known as the photoplethysmogram, or PPG – contained information about the physiology or the haemodynamics of the brain.
When the pressure in the brain rises, the brain swells up, but it cannot go anywhere because the skull is like concrete. Therefore, the arteries and vessels in the brain are compressed by that pressure. PPG measures changes in blood volume as it pulses through the arteries during the cardiac cycle. If you have a viscoelastic artery that is opening and closing, the volume of blood changes and this is captured by the PPG. Now, if you have an artery that is compromised, pushed down because of pressure in the brain, that viscoelastic property is impacted and that will impact the PPG.
Changes in the PPG signal arising from compression of the vessels in the brain can give us information about the intracranial pressure. And we developed algorithms to interrogate this optical signal and machine learning models to estimate intracranial pressure.
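As a rough illustration of the kind of morphological features such algorithms might extract from a PPG pulse, here is a minimal sketch on a synthetic waveform. Both the feature set and the waveform are hypothetical; Crainio's actual signal processing and models are not public:

```python
import math

def ppg_features(signal, fs=100.0):
    """A few simple morphological features from one PPG pulse.
    Illustrative only: Crainio's actual feature set is not public."""
    peak = max(range(len(signal)), key=signal.__getitem__)
    foot = min(range(peak + 1), key=signal.__getitem__)  # pulse onset
    baseline = min(signal)
    return {
        "amplitude": signal[peak] - baseline,            # pulse height
        "rise_time": (peak - foot) / fs,                 # foot-to-peak, s
        "pulse_area": sum(v - baseline for v in signal) / fs,
    }

# Synthetic single pulse: fast linear rise, slower exponential decay
fs = 100.0
pulse = []
for i in range(100):
    t = i / fs
    pulse.append(t / 0.2 if t < 0.2 else math.exp(-(t - 0.2) / 0.3))

f = ppg_features(pulse, fs)
print(f"amplitude={f['amplitude']:.2f}, rise_time={f['rise_time']:.2f} s")
```

The intuition from the interview is that a compressed, less viscoelastic artery would show up as changes in exactly these sorts of features (smaller amplitude, altered rise time), which a trained model can then map to an intracranial pressure estimate.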
How did the establishment of Crainio help to progress the sensor technology?
Following our research within the university, Crainio was set up in 2022. It brought together a team of experts in medical devices and optical sensors to lead the further development and commercialization of this device. And this small team worked tirelessly over the last few years to generate funding to progress the development of the optical sensor technology and bring it to a level that is ready for further clinical trials.
Panicos Kyriacou “At Crainio we want to create a technology that could be used widely, because there is a massive need, but also because it’s affordable.” (Courtesy: Crainio)
In 2023, Crainio was successful with an Innovate UK biomedical catalyst grant, which will enable the company to engage in a clinical feasibility study, optimize the probe technology and further develop the algorithms. The company was later awarded another NIHR grant to move into a validation study.
The interest in this project has been overwhelming. We’ve had very positive feedback from the neurocritical care community. But we also see a lot of interest from communities where injury to the brain is significant, such as rugby associations, for example.
Could the device be used in the field, at the site of an accident?
While Crainio’s primary focus is to deliver a technology for use in critical care, the system could also be used in ambulances, in helicopters, during patient transfers and beyond. The device is non-invasive, the sensor is just like a sticking plaster on the forehead and the backend is a small box containing all the electronics. In the past few years, working in a research environment, the technology was connected to a laptop computer. But we are now transferring everything into a graphical interface, with a monitor to be able to see the signals and the intracranial pressure values in a portable device.
Following preliminary tests on patients, Crainio is now starting a new clinical trial. What do you hope to achieve with the next measurements?
The first study, a feasibility study on the sensor technology, was done during the time when the project was within the university. The second round is led by Crainio using a more optimized probe. Learning from the technical challenges we had in the first study, we tried to mitigate them with a new probe design. We’ve also learned more about the challenges associated with the acquisition of signals, the type of patients, how long we should monitor.
We are now at the stage where Crainio has redeveloped the sensor and it looks amazing. The technology has received approval by MHRA, the UK regulator, for clinical studies and ethical approvals have been secured. This will be an opportunity to work with the new probe, which has more advanced electronics that enable more detailed acquisition of signals from TBI patients.
We are again partnering with the Royal London Hospital, as well as collaborators from the traumatic brain injury team at Cambridge and we’re expecting to enter clinical trials soon. These are patients admitted into neurocritical trauma units and they all have an invasive intracranial pressure bolt. This will allow us to compare the physiological signal coming from our intracranial pressure sensor with the gold standard.
The signals will be analysed by Crainio’s data science team, with machine learning algorithms used to look at changes in the PPG signal, extract morphological features and build models to develop the technology further. So we’re enriching the study with a more advanced technology, and this should lead to more accurate machine learning models for correctly capturing dynamic changes in intracranial pressure.
This time around, we will also record more information from the patients. We will look at CT scans to see whether scalp density and thickness have an impact. We will also collect data from commercial medical monitors within neurocritical care to see the relation between intracranial pressure and other physiological data acquired in the patients. We aim to expand our knowledge of what happens when a patient’s intracranial pressure rises – what happens to their blood pressures? What happens to other physiological measurements?
How far away is the system from being used as a standard clinical tool?
Crainio is very ambitious. We’re hoping that within the next couple of years we will progress adequately in order to achieve CE marking and meet all the standards that are necessary to launch a medical device.
The primary motivation of Crainio is to create solutions for healthcare, developing a technology that can help clinicians to diagnose TBI effectively, faster, accurately and earlier. This can only yield better outcomes and improve patients’ quality-of-life.
Of course, as a company we’re interested in being successful commercially. But the ambition here is, first of all, to keep the cost affordable. We live in a world where medical technologies need to be affordable, not only for Western nations, but for nations that cannot afford state-of-the-art technologies. So this is another of Crainio’s primary aims, to create a technology that could be used widely, because there is a massive need, but also because it’s affordable.
Positron emission tomography (PET) is a diagnostic imaging technique that uses an injected radioactive tracer to detect early signs of cancer, brain disorders or other diseases. At Jagiellonian University in Poland, a research team headed up by Paweł Moskal is developing a totally new type of PET scanner. The Jagiellonian PET (J-PET) can image the properties of positronium, a positron–electron bound state produced during PET scans, offering potential to increase the specificity of PET diagnoses.
The researchers have now recorded the first ever in vivo positronium image of the human brain. They also used the J-PET to show that annihilation photons generated during PET scans are not completely quantum entangled, opening up the possibility of using the degree of quantum entanglement as a diagnostic indicator. Moskal tells Physics World’s Tami Freeman about these latest breakthroughs and the team’s ongoing project to build the world’s first whole-body quantum PET scanner.
Can you describe how conventional PET images are generated?
PET is based on the annihilation of a positron with an electron to create two photons. The patient is administered a radiopharmaceutical labelled with a positron-emitting radionuclide (for example, fluorodeoxyglucose (FDG) labelled with 18F), which localizes in targeted tissues. The 18F emits positrons inside the body, which annihilate with electrons from the body, and the resulting annihilation photons are registered by the PET scanner.
By measuring the locations and times of the photons’ interactions in the scanner, we can reconstruct the density distribution of annihilation points in the body. With 18F-FDG, this image correlates with the density distribution of glucose, which in turn, indicates the rate of glucose metabolism. Thus the PET scanner delivers an image of the radiopharmaceutical’s metabolic rate in the body.
Such an image enables physicians to identify tissues with abnormal metabolism, such as cancers that metabolize glucose up to 10 times more intensively than healthy tissues. Therefore, PET scanners can provide information about alterations in cell function, even before cancer may be visible in anatomical images recorded using CT or MRI.
During annihilation, a short-lived atom called positronium can form. What’s the rationale for imaging this positronium?
It’s amazing that in tissue, positron–electron annihilation proceeds via the formation of positronium in about 40% of cases. Positronium, a bound state of matter and antimatter (an electron and a positron), is short lived because it can undergo self-annihilation into photons. In tissue, however, it can decay via additional processes that further shorten its lifetime. For example, its positron may annihilate by “picking off” an electron from a surrounding atom, or it may convert from the long-lived state (ortho-positronium) to the short-lived state (para-positronium) through interaction with oxygen molecules.
In tissue, therefore, positronium lifetime is an indicator of the intra- and inter-molecular structure and the concentration of oxygen molecules. Both molecular composition and the degree of oxygen concentration differ between healthy and cancerous tissues, with hypoxia (a deficit in tissue oxygenation) a major feature of solid tumours that’s related to the development of metastases and treatment resistance.
As such, imaging positronium lifetime can help in early disease recognition at the stage of molecular alterations. It can also improve diagnosis and the proper choice of anti-cancer therapy. In the case of brain diagnostics, positronium imaging may become an early diagnostic indicator for neurodegenerative disorders such as dementia, Alzheimer’s disease and Parkinson’s disease.
So how does the J-PET detect positronium?
To reconstruct the positronium lifetime we use a radionuclide (44Sc, 82Rb or 124I, for example) that, after emitting a positron, promptly (within a few picoseconds) emits an additional gamma photon. This “prompt gamma” can be used to measure the exact time that the positron was emitted into the tissue and formed positronium.
Multiphoton detection In about 1% of cases, after emitting a positron that annihilates with an electron into photons (blue arrows), 68Ga also emits a prompt gamma (solid arrow). (Courtesy: CC BY/Sci. Adv. 10.1126/sciadv.adp2840)
Current PET scanners are designed to register only two annihilation photons, which makes them incapable of determining positronium lifetime. The J-PET is the first multiphoton PET scanner designed for simultaneous registration of any number of photons.
The registration of annihilation photons enables us to reconstruct the time and location of the positronium decay, while registration of the prompt gamma provides the time of its formation. The positronium lifetime is then calculated as the time difference between annihilation and prompt gamma emission.
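The lifetime calculation described above can be sketched with simulated events: each event pairs a prompt-gamma detection time with an annihilation time, and the lifetime spectrum is the distribution of their differences. All the numbers below (mean lifetime, detector timing resolution) are illustrative assumptions, not J-PET specifications:

```python
import random

random.seed(1)

def simulate_events(n, mean_lifetime_ns=2.0, timing_sigma_ns=0.25):
    """Simulate n (prompt-gamma time, annihilation time) pairs.
    True lifetimes are exponentially distributed with the given mean;
    both detected times carry Gaussian timing noise. Illustrative only."""
    events = []
    for _ in range(n):
        t_prompt = random.uniform(0, 100)            # ns, arbitrary origin
        lifetime = random.expovariate(1 / mean_lifetime_ns)
        t_annihilation = t_prompt + lifetime
        events.append((t_prompt + random.gauss(0, timing_sigma_ns),
                       t_annihilation + random.gauss(0, timing_sigma_ns)))
    return events

# Lifetime spectrum = distribution of (annihilation - prompt) differences;
# its mean recovers the underlying positronium lifetime, since the
# symmetric timing noise averages out.
diffs = [t_ann - t_prompt for t_prompt, t_ann in simulate_events(20000)]
mean = sum(diffs) / len(diffs)
print(f"reconstructed mean lifetime ~ {mean:.2f} ns")
```

In the real scanner the same subtraction is done per image voxel, giving a lifetime spectrum (and hence a mean ortho-positronium lifetime) at every position in the image.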
Can you describe how your team recorded the first in vivo positronium image?
Last year we presented the world’s first in vivo images of positronium lifetime in a human, reported in Science Advances. For this, we designed and constructed a modular, lightweight and portable J-PET tomograph, consisting of 24 independent detection modules, each weighing only 2 kg. The device uses a multiphoton data acquisition system, invented by us, to simultaneously register prompt gamma and annihilation photons – the first PET scanner in the world to achieve this.
The research was performed at the Medical University of Warsaw, with studies conducted following routine procedures so as not to interfere with routine diagnostics and therapy. If a patient agreed to stay longer on the platform, we had about 10 minutes to install the J-PET tomograph around them and collect data.
In vivo imaging The first imaging of a patient, illustrating the advantages of the J-PET as a portable, lightweight device with an adaptable imaging volume. (Courtesy: Paweł Moskal)
The first patient was a 45-year-old man with glioblastoma (an aggressive brain tumour) undergoing alpha-particle radiotherapy. The primary aim of his therapy was to destroy the tumour using alpha particles emitted by the radionuclide 225Ac. The positronium imaging was made possible by the concurrent theranostic application of the radionuclide 68Ga to monitor the site of cancer lesions using a PET scanner.
The patient was administered a simultaneous intra-tumoural injection of the alpha-particle-emitting radiopharmaceutical (225Ac-DOTA-SP) for therapy and the positron emitting pharmaceutical (68Ga-DOTA-SP) for diagnosis. In about 1% of cases, after emitting a positron that annihilates with an electron, 68Ga also emits a prompt gamma ray.
We determined the annihilation location by measuring the time and position of interaction of the annihilation photons in the scanner. For each image voxel, we also determined a lifetime spectrum as the distribution of differences between the time of annihilation and the time of prompt gamma emission.
Our study found that positronium lifetimes in glioblastoma cells are shorter than in salivary glands and healthy brain tissues. We showed for the first time that the mean lifetime of ortho-positronium in a glioma (1.77±0.58 ns) is shorter than in healthy brain tissue (2.72±0.72 ns). This finding demonstrates that positronium imaging could be used for in vivo diagnosis to differentiate between healthy and cancerous tissues.
Lifetime distributions Positronium images of a patient with glioblastoma, showing the difference in mean ortho-positronium lifetime between glioma and healthy brain. (Courtesy: CC BY/Sci. Adv. 10.1126/sciadv.adp2840)
You recently demonstrated that J-PET can detect quantum entanglement of annihilation photons; how could this impact cancer diagnostics?
For this study, reported earlier this year in Science Advances, we used the laboratory prototype of the J-PET scanner (as employed previously for the first ex vivo positronium imaging). The crucial result was the first ever observation that photons from electron–positron annihilation in matter are not completely quantum entangled. Our study is pioneering in revealing a clear dependence of the degree of photon entanglement on the material in which the annihilation occurs.
These results are totally new compared with all previous investigations of photons from positron–electron annihilations. Up to this point, all experiments had focused on showing that this entanglement is maximal, and for that purpose, were performed in metals. None of the previous studies mentioned or even hypothesized a possible material dependence.
Lab prototype The J-PET scanner used to discover non-maximal entanglement, with (left to right) Deepak Kumar, Sushil Sharma and Pawel Moskal. (Courtesy: Damian Gil and Deepak Kumar)
If the degree of quantum entanglement of annihilation photons depends on the material, it may also differ according to tissue type or the degree of hypoxia. This is a hypothesis that we will test in future studies. I recently received an ERC Advanced Grant, entitled “Can tissue oxidation be sensed by positronium?”, to investigate whether the degree of oxidation in tissue can be sensed by the degree of quantum entanglement of photons originating from positron annihilation.
What causes annihilation photons to be entangled (or not)?
Quantum entanglement is a fascinating phenomenon that cannot be explained by our classical perception of the world. Entangled photons behave as if one instantly knows what is happening with the other, regardless of how far apart they are, so they propagate in space as a single object.
Annihilation photons are entangled if they originate from a pure quantum state. A state is “pure” if we know everything that can be known about it. For example, if the photons originate from the ground state of para-positronium (a pure state), then we expect them to be maximally entangled.
However, if electron–positron annihilation occurs in a mixed state (a statistical mixture of different pure states) where we have incomplete information, then the resulting photons will not be maximally entangled. In our case, this could be the annihilation of a positron from positronium with electrons from the patient’s body. Because these electrons can have different angular momenta with respect to the positron, the annihilation generally occurs from a mixed state.
You have also measured the polarization of the annihilation photons; how is this information used?
In current PET scanners, images are reconstructed based on the position and time of interaction of annihilation photons within the scanner. However, annihilation photons also carry information about their polarization.
Theoretically, annihilation photons are quantum entangled in polarization and exhibit non-local correlations. In the case of electron–positron annihilation into two photons, this means that the amplitude of the distribution of the relative angle between their polarization planes is larger when they are quantum entangled than when they propagate in space as independent objects.
State-of-the-art PET scanners, however, cannot access polarization information. Annihilation photons have energy in the mega-electronvolt range and their polarization cannot be determined using established optical methods, which are designed for optical photons in the electronvolt range. Because these energetic annihilation photons interact with single electrons, their polarization can only be sensed via Compton scattering.
The angular distribution of photons scattered by electrons is not isotropic with respect to the polarization direction. Instead, scattering is most likely to occur in a plane perpendicular to the polarization plane of the photon before scattering. Thus, by determining the scattering plane (the plane containing the primary and scattered photons), one can estimate the polarization direction as perpendicular to that plane. In practice, this means that the photon’s directions of flight both before and after Compton scattering in the material must be known.
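This geometric reconstruction can be sketched in a few lines of code. The following is a minimal illustration only, not the J-PET analysis software; the function names and example vectors are invented for the sketch:

```python
import numpy as np

def scattering_plane_normal(k_in, k_out):
    """Unit normal of the plane spanned by the photon's flight
    directions before (k_in) and after (k_out) Compton scattering."""
    n = np.cross(k_in, k_out)
    return n / np.linalg.norm(n)

def estimated_polarization(k_in, k_out):
    """Most likely polarization direction of the incoming photon:
    perpendicular to the scattering plane (the normal is already
    transverse to k_in, as a polarization direction must be)."""
    return scattering_plane_normal(k_in, k_out)

def relative_polarization_angle(k1_in, k1_out, k2_in, k2_out):
    """Relative angle (degrees) between the estimated polarization
    directions of two annihilation photons."""
    e1 = estimated_polarization(k1_in, k1_out)
    e2 = estimated_polarization(k2_in, k2_out)
    c = np.clip(abs(np.dot(e1, e2)), 0.0, 1.0)
    return np.degrees(np.arccos(c))
```

For back-to-back photons scattering in perpendicular planes, this returns a 90° relative angle, as expected from the geometry.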
In plastic scintillators, annihilation photons primarily interact via the Compton effect. As the J-PET is built from plastic scintillators, it’s ideally suited to provide information about the photons’ polarization, which can be determined by registering both the annihilation photon and the scattered photon and then reconstructing the scattering plane.
Using the J-PET scanner, we determined the distribution of the relative angle between the polarization planes of photons from positron–electron annihilation in a porous polymer. The amplitude of the observed distribution is smaller than predicted for maximally quantum-entangled two-photon states, but larger than expected for separable photons.
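To illustrate how such an amplitude can be extracted, the sketch below fits a simple modulation model, N(Δφ) = C(1 − A cos 2Δφ), to synthetic histogram counts by linear least squares. All numbers are made up for the example and the real analysis is more involved:

```python
import numpy as np

# Synthetic relative-angle histogram (illustrative values only):
# counts peak at 90 degrees, as expected for correlated photons.
dphi = np.radians(np.arange(5, 180, 10))       # bin centres (rad)
A_true, C_true = 0.4, 1000.0                   # invented amplitude, scale
counts = C_true * (1.0 - A_true * np.cos(2 * dphi))

# Linear least squares: counts = a + b*cos(2*dphi), amplitude A = -b/a
X = np.column_stack([np.ones_like(dphi), np.cos(2 * dphi)])
coef, *_ = np.linalg.lstsq(X, counts, rcond=None)
a, b = coef
A_fit = -b / a
```

In this framing, a fitted amplitude below the maximally entangled prediction but above the separable one is what signals partial entanglement.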
This result can be explained by assuming that photons from pick-off annihilation are not entangled, while photons from direct and para-positronium annihilations are maximally entangled. Our finding indicates that the degree of entanglement depends on the annihilation mechanism in matter, opening avenues for exploring polarization correlations in PET as a diagnostic indicator.
What further developments are planned for the J-PET scanner?
When creating the J-PET technology, we started with a two-strip prototype, then a 24-strip prototype in 2014, followed by a full-scale 192-strip prototype in 2016. In 2021 we completed the construction of a lightweight (60 kg) J-PET version that is both modular and portable, and which we used to demonstrate the first clinical images.
The next step is the construction of the total-body quantum J-PET scanner. We are now at the stage of collecting all the elements of this scanner and expect to complete construction in 2028. The scanner will be installed at the Center for Theranostics, established by myself and Ewa Stępień, medical head of the J-PET team, at Jagiellonian University.
Future developments Schematic cross-section of the full-body J-PET scanner under construction at Jagiellonian University. The diagram shows the patient and several examples of electron–positron annihilation. (Courtesy: Rev. Mod. Phys. 10.1103/RevModPhys.95.021002)
Total-body PET provides the ability to image the metabolism of all tissues in the body at the same time. Additionally, due to the high sensitivity of total-body PET scanners, it is possible to perform dynamic imaging – essentially, creating a movie of how the radiopharmaceutical distributes throughout the body over time.
The total-body J-PET will also be able to register the pharmacokinetics of drugs administered to a patient. However, its true distinction is that it will be the world’s first quantum PET scanner with the ability to image the degree of quantum entanglement of annihilation photons throughout the patient’s body. Additionally, it will be the world’s first total-body multiphoton PET, enabling simultaneous positronium imaging in the entire human body.
How do you see the J-PET’s clinical applications evolving in the future?
We have already performed the first clinical imaging using J-PET at the Medical University of Warsaw and the University Hospital in Kraków. The studies included diagnosis of patients with neuroendocrine tumours, prostate cancer and glioblastoma. The data collected at these hospitals were used to reconstruct standard PET images as well as positronium lifetime images.
Next, we plan to conduct positronium imaging of phantoms and humans with various radionuclides to explore its clinical applications as a biomarker for tissue pathology and hypoxia. We also intend to explore the J-PET’s multiphoton capabilities for simultaneous double-tracer imaging, as well as study the degree of quantum entanglement as a function of the annihilation mechanism.
Finally, we plan to explore the possibilities of applying quantum entanglement to diagnostics, and we look forward to performing total-body positronium and quantum entanglement imaging with the total-body J-PET in the Center for Theranostics.
Paweł Moskal is a panellist in the forthcoming Physics World Live event on 25 September 2025. The event, which also features Miles Padgett from the University of Glasgow and Matt Brookes from the University of Nottingham, will examine how medical physics can make the most of the burgeoning field of quantum science. Registration is free.
Human reproduction is an inefficient process, with less than one third of conceptions leading to live births. Failure of the embryo to implant in the uterus is one of the main causes of miscarriage. Recording this implantation process in vivo in real time is not yet possible, but a team headed up at the Institute for Bioengineering of Catalonia (IBEC) has designed a platform that enables visualization of human embryo implantation in the laboratory. The researchers hope that quantifying the dynamics of implantation could impact fertility rates and help improve assisted reproductive technologies.
At its very earliest stage, an embryo comprises a small ball of cells called a blastocyst. About six days after fertilization, this blastocyst starts to embed itself into the walls of the uterus. To study this implantation process in real time, the IBEC team created an ex vivo platform that simulates the outer layers of the uterus. Unlike previous studies that mostly focused on the biochemical and genetic aspects of implantation, the new platform enables study of the mechanical forces exerted by the embryo to penetrate the uterus.
The implantation platform incorporates a collagen gel to mimic the extracellular matrix encountered in vivo, as well as globulin-rich proteins that are required for embryo development. The researchers designed two configurations: a 2D platform, in which blastocysts settle on top of a flat gel; and a 3D version where the blastocysts are placed directly inside collagen drops.
To capture the dynamics of blastocyst implantation, the researchers recorded time-lapse movies using fluorescence imaging and traction force microscopy. They imaged the matrix fibres and their deformations using light scattering and visualized autofluorescence from the embryo under multiphoton illumination. To quantify matrix deformation, they used the fibres as markers for real-time tracking and derived maps showing the direction and amplitude of fibre displacements – revealing the regions where the embryo applied force and invaded the matrix.
Quantifying implantation dynamics
In the 2D platform, 72% of human blastocysts attached to and then integrated into the collagen matrix, reaching a depth of up to 200 µm in the gel. The embryos increased in size over time and maintained a spherical shape without spreading on the surface. Implantation in the 3D platform, in which the embryo is embedded directly inside the matrix, led to an 80% survival and invasion rate. In both platforms, the blastocysts showed motility in the matrix, illustrating the invasion capacity of human embryos.
Research team From left to right: Samuel Ojosnegros, Anna Seriola and Amélie Godeau at IBEC labs. (Courtesy: Institute for Bioengineering of Catalonia)
The researchers also monitored the traction forces that the embryos exerted on the collagen matrix, moving and reorganising it with a displacement that increased over time. They note that the displacement was not perfectly uniform and that the pulling varied over time and space, suggesting that this pulsatile behaviour may help the embryos to continuously sense the environment.
“We have observed that human embryos burrow into the uterus, exerting considerable force during the process,” explains study leader Samuel Ojosnegros in a press statement. “These forces are necessary because the embryos must be able to invade the uterine tissue, becoming completely integrated with it. It is a surprisingly invasive process. Although it is known that many women experience abdominal pain and slight bleeding during implantation, the process itself had never been observed before.”
For comparison, the researchers also examined the implantation of mouse blastocysts. In contrast to the complete integration seen for human blastocysts, mouse embryo outgrowth was limited to the matrix surface. In both platforms, initial attachment was followed by invasion and proliferation of trophoblast cells (the outer layer of the blastocyst). The embryo applied strong pulling forces to the fibrous matrix, remodelling the collagen and aligning the fibres around it during implantation. The displacement maps revealed a fluctuating pattern, as seen for the human embryos.
“By measuring the direct impact of the embryo on the matrix scaffold, we reveal the underlying mechanics of embryo implantation,” the researchers write. “We found that mouse and human embryos generated forces during implantation using a species-specific pattern.”
The team is now working to incorporate a theoretical framework to better understand the physical processes underlying implantation. “Our observations at earlier stages show that attachment is a limiting factor at the onset of human embryo implantation,” co-first author Amélie Godeau tells Physics World. “Our next step is to identify the key elements that enable a successful initial connection between the embryo and the matrix.”
A new method for generating high-energy proton beams could one day improve the precision of proton therapy for treating cancer. Developed by an international research collaboration headed up at the National University of Singapore, the technique involves accelerating H₂⁺ ions and then using a novel two-dimensional carbon membrane to split the high-energy ion beam into beams of protons.
One obstacle when accelerating large numbers of protons together is that they all carry the same positive charge and thus naturally repel each other. This so-called space–charge effect makes it difficult to keep the beam tight and focused.
“By accelerating H₂⁺ ions instead of single protons, the particles don’t repel each other as strongly,” says project leader Jiong Lu. “This enables delivery of proton beam currents up to an order of magnitude higher than those from existing cyclotrons.”
Lu explains that a high-current proton beam can deliver more protons in a shorter time, making proton treatments quicker and more precise, and enabling more effective tumour targeting. Such a proton beam could also be employed in FLASH therapy, an emerging treatment that delivers therapeutic radiation at ultrahigh dose rates to reduce normal tissue toxicity while preserving anti-tumour activity.
Industry-compatible fabrication
The key to this technique lies in the choice of an optimal membrane with which to split the H₂⁺ ions. For this task, Lu and colleagues developed a new material – ultraclean monolayer amorphous carbon (UC-MAC). MAC is similar in structure to graphene, but instead of an ordered honeycomb structure of hexagonal rings, it contains a disordered mix of five-, six-, seven- and eight-membered carbon rings. This disorder creates angstrom-scale pores in the films, which can be used to split the H₂⁺ ions into protons as they pass through.
Pentagons, hexagons, heptagons, octagons Illustration of disorder-to-disorder synthesis (left); scanning transmission electron microscopy image of UC-MAC (right). (Courtesy: National University of Singapore)
Scaling the manufacture of ultrathin MAC films, however, has previously proved challenging, with no industrial synthesis method available. To address this problem, the researchers proposed a new fabrication approach in which the emergence of long-range order in the material is suppressed, not by the conventional approach of low-temperature growth, but by a novel disorder-to-disorder (DTD) strategy.
DTD synthesis uses plasma-enhanced chemical vapor deposition (CVD) to create a MAC film on a copper substrate containing numerous nanoscale crystalline grains. This disordered substrate induces high levels of randomized nucleation in the carbon layer and disrupts long-range order. The approach enabled wafer-scale (8-inch) production of UC-MAC films within just 3 s – an order of magnitude faster than conventional CVD methods.
Disorder creates precision
To assess the ability of UC-MAC to split H₂⁺ ions into protons, the researchers generated a high-energy H₂⁺ nanobeam and focused it onto a freestanding two-dimensional UC-MAC crystal, splitting the ion beam to create high-precision proton beams. For comparison, they repeated the experiment (with beam current stabilities controlled to within 10%) using single-crystal graphene, non-clean MAC with metal impurities and commercial carbon thin films (8 nm thick).
Measuring double-proton events – in which two proton signals are detected from a single H₂⁺ ion splitting – as an indicator of proton scattering revealed that the UC-MAC membrane produced far fewer unwanted scattered protons than the other films. Ion splitting using UC-MAC resulted in about 47 double-proton events over a 20 s collection time, while the graphene film exhibited roughly twice this number and the non-clean MAC slightly more. The carbon thin film generated around 46 times more scattering events.
The researchers point out that the reduced double-proton events in UC-MAC “demonstrate its superior ability to minimize proton scattering compared with commercial materials”. They note that as well as UC-MAC creating a superior quality proton beam, the technique provides control over the splitting rate, with yields ranging from 88.8 to 296.0 proton events per second per detector.
“Using UC-MAC to split H₂⁺ produces a highly sharpened, high-energy proton beam with minimal scattering and high spatial precision,” says Lu. “This allows more precise targeting in proton therapy – particularly for tumours in delicate or critical organs.”
“Building on our achievement of producing proton beams with greatly reduced scattering, our team is now developing single molecule ion reaction platforms based on two-dimensional amorphous materials using high-energy ion nanobeam systems,” he tells Physics World. “Our goal is to make proton beams for cancer therapy even more precise, more affordable and easier to use in clinical settings.”
Multimodal locomotion Top panel: fabrication and magnetic assembly of permanent magnetic droplet-derived microrobots (PMDMs). Lower panel: magnetic fields direct PMDM chains through complex biological environments such as the intestine. (Courtesy: CC BY 4.0/Sci. Adv. 10.1126/sciadv.adw3172)
Microrobots provide a promising vehicle for precision delivery of therapeutics into the body. But there’s a fine balance needed between optimizing multifunctional cargo loading and maintaining efficient locomotion. A research collaboration headed up at the University of Oxford and the University of Michigan has now developed permanent magnetic droplet-derived microrobots (PMDMs) that meet both of these requirements.
The PMDMs are made from a biocompatible hydrogel incorporating permanent magnetic microparticles. The hydrogel – which can be tailored to each clinical scenario – can carry drugs or therapeutic cells, while the particles’ magnetic properties enable them to self-assemble into chains and perform a range of locomotion behaviours under external magnetic control.
“Our motivation was to design a microrobot system with adaptable motion capabilities for potential applications in drug delivery,” explains Molly Stevens from the University of Oxford, experimental lead on this study. “By using self-assembled magnetic particles, we were able to create reconfigurable, modular microrobots that could adapt their shape on demand – allowing them to manoeuvre through complex biological terrains to deliver therapeutic payloads.”
Building the microrobots
To create the PMDMs, Stevens and collaborators used cascade tubing microfluidics to rapidly generate ferromagnetic droplets (around 300 per minute) from the hydrogel and microparticles. Gravitational sedimentation of the 5 µm-diameter microparticles led to the formation of Janus droplets with distinct hydrogel and magnetic phases. The droplets were then polymerized and magnetized to form PMDMs of roughly 0.5 mm in diameter.
The next step involved self-assembly of the PMDMs into chains. The researchers demonstrated that exposure to a precessing magnetic field caused the microrobots to rapidly assemble into dimers and trimers before forming a chain of eight, with their dipole moments aligned. Exposure to various dynamic magnetic fields caused the chains to move via different modalities, including walking, crawling, swinging and lateral movement.
The microrobots were able to ascend and descend stairs, and navigate obstacles including a 3-mm high railing, a 3-mm diameter cylinder and a column array. The reconfigurable PMDM chains could also adapt to confined narrow spaces by disassembling into shorter fragments and overcome tall obstacles by merging into longer chains.
Towards biomedical applications
By tailoring the hydrogel composition, the researchers showed that the microrobots could deliver different types of cargo with controlled dosage. PMDMs made from rigid polyethylene glycol diacrylate (PEGDA) could deliver fluorescent microspheres, for example, while soft alginate/gelatin hydrogels can be used for cell delivery.
PMDM chains also successfully transported human mesenchymal stem cell (hMSC)-laden Matrigel without compromising cell viability, highlighting their potential to deliver cells to specific sites for in vivo disease treatment.
To evaluate intestinal targeting, the researchers delivered PMDMs to ex vivo porcine intestine. Once inside, the microrobots assembled into chains and exhibited effective locomotion on the intestine surface. Importantly, the viscous and unstructured tissue surface did not affect chain assembly or motion. After navigation to the target site, exposing the PMDMs to the enzyme collagenase instigated controlled cargo release. Even after full degradation of the hydrogel phase, the chains retained integrity and locomotion capabilities.
The team also demonstrated programmable release of different cargoes, using hybrid chains containing rigid PEGDA segments and degradable alginate/gelatin segments. Upon exposure to collagenase, the cargo from the degradable domains exhibited burst release, while the slower degradation of PEGDA delayed the release of cargo in the PEGDA segments.
Biological environment Delivery of preassembled PMDM chains into a printed human cartilage model. The procedure consists of injections and assembly, locomotion, drug release and retrieval of PMDMs. Scale bars: 5 mm. (Courtesy: CC BY 4.0/Sci. Adv. 10.1126/sciadv.adw3172)
In another potential clinical application, the researchers delivered microrobots to 3D-printed human cartilage with an injury site. This involved catheter-based injection of PMDMs followed by application of an oscillating magnetic field to assemble the PMDMs into a chain. The chains could be navigated by external magnetic fields to the targeted injury site, where the hydrogel degraded and released the drug cargo.
After drug delivery, the team guided the microrobots back to the initial injection site and retrieved them using a magnetic catheter. This feature offers a major advantage over traditional microrobots, which often struggle to retrieve magnetic particles after cargo release, potentially triggering immune responses, tissue damage or other side effects.
“For microrobots to be clinically viable, they must not only perform their intended functions effectively but also do so safely,” explains co-first author Yuanxiong Cao from the University of Oxford. “The ability to retrieve the PMDM chains after they completed the intended therapeutic delivery enhances the biosafety of the system.”
Cao adds that while the focus for the intestine model was to demonstrate navigation and localized delivery, the precise control achieved over the microrobots suggests that “extraction is also feasible in this and other biomedically relevant environments”.
Predicting PMDM performance
Alongside the experiments, the team developed a computational platform, built using molecular dynamics simulations, to provide further insight into the collective behaviour of the PMDMs.
“The computational model was instrumental in predicting how individual microrobot units would self-assemble and respond to dynamic magnetic fields,” says Philipp Schoenhoefer, co-first author from the University of Michigan. “This allowed us to understand and optimize the magnetic interactions between the particles and anticipate how the robots would behave under specific actuation protocols.”
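The basic physics driving chain formation can be illustrated with the point-dipole interaction energy: head-to-tail alignment lowers the energy, while side-by-side alignment of parallel moments raises it. A toy calculation – the moment and separation values below are made up, and this is not the team’s simulation code:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T·m/A)

def dipole_energy(m1, m2, r_vec):
    """Interaction energy (J) of two point dipoles with moments
    m1, m2 (A·m^2) separated by the vector r_vec (m)."""
    r = np.linalg.norm(r_vec)
    rhat = r_vec / r
    return (MU0 / (4 * np.pi * r**3)) * (
        np.dot(m1, m2) - 3.0 * np.dot(m1, rhat) * np.dot(m2, rhat)
    )

# Two identical moments along z (magnitudes are invented values)
m = np.array([0.0, 0.0, 1e-8])
E_chain = dipole_energy(m, m, np.array([0.0, 0.0, 1e-3]))  # head-to-tail
E_side = dipole_energy(m, m, np.array([1e-3, 0.0, 0.0]))   # side-by-side
# E_chain < 0 < E_side: stacking head-to-tail along the field is favoured,
# which is why the magnetized droplets assemble into chains.
```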
The researchers are now using these simulations to design more advanced microrobot structures with enhanced multifunctionality and mechanical resilience. “The next-generation designs aim to handle the more challenging in vivo conditions, such as high fluid shear and irregular tissue architectures,” Sharon Glotzer from the University of Michigan, simulation lead for the project, tells Physics World.
Cherenkov dosimetry is an emerging technique used to verify the dose delivered during radiotherapy, by capturing Cherenkov light generated when X-ray photons in the treatment beam interact with tissue in the patient. The initial intensity of this light is proportional to the deposited radiation dose – providing a means of non-contact in vivo dosimetry. The intensity emitted at the skin surface, however, depends strongly on the patient’s skin colour: the more melanin present, the more Cherenkov photons are absorbed.
To increase the accuracy of dose measurements, researchers are investigating ways to calibrate the Cherenkov emission according to skin pigmentation. A collaboration headed up at Dartmouth College and Moffitt Cancer Center has now studied Cherenkov dosimetry in patients with a wide spectrum of skin tones. Reporting their findings in Physics in Medicine & Biology, they show how such a calibration can mitigate the effect of skin pigmentation.
“Cherenkov dosimetry is an interesting prospect because it gives us a completely passive, fly-on-the-wall approach to radiation dose verification. It does not require taping of detectors or wires to the patient, and allows for a broader sampling of the treatment area,” explains corresponding author Jacqueline Andreozzi. “The hope is that this would allow for safer, verifiable radiation dose delivery consistent with the treatment plan generated for each patient, and provide a means of assessing the clinical impact when treatment does not go as planned.”
Cherenkov dosimetry The intensity of Cherenkov light detected during radiotherapy is influenced by the individual’s melanin concentration. (Courtesy: Phys. Med. Biol.10.1088/1361-6560/aded68)
A diverse patient population
Andreozzi, first author Savannah Decker and their colleagues examined 24 patients undergoing breast radiotherapy using 6 or 15 MV photon beams, or a combination of both energies.
During routine radiotherapy at Moffitt Cancer Center the researchers measured the Cherenkov emission from the tissue surface (roughly 5 mm deep) using a time-gated, intensified CMOS camera installed in the bunker ceiling. To minimize effects from skin reactions, they analysed the earliest fraction of each patient’s treatment.
First author Medical physicist Savannah Decker. (Courtesy: Jacob Sunnerberg)
Patients with darker skin exhibited up to five times lower Cherenkov emission than those with lighter skin for the same delivered dose – highlighting the significant impact of skin pigmentation on Cherenkov-based dose estimates.
To assess each patient’s skin tone, the team used standard colour photography to calculate the relative skin luminance as a metric for pigmentation. A colour camera module co-mounted with the Cherenkov imaging system simultaneously recorded an image of each patient during their radiation treatments. The room lighting was standardized across all patient sessions and the researchers only imaged skin regions directly facing the camera.
In addition to skin pigmentation, subsurface tissue properties can also affect the transmission of Cherenkov light. Different tissue types – such as dense fibroglandular or less dense adipose tissue – have differing optical densities. To compensate for this, the team used routine CT scans to establish an institution-specific CT calibration factor (independent of skin pigmentation) for the diverse patient dataset, using a process based on previous research by co-author Rachael Hachadorian.
Following CT calibration, the Cherenkov intensity per unit dose showed a linear relationship with relative skin luminance, for both 6 and 15 MV beams. Encouraged by this observed linearity, the researchers generated linear calibration factors based on each patient’s skin pigmentation, for application to the Cherenkov image data. They note that the calibration can be incorporated into existing clinical workflows without impacting patient care.
Improving the accuracy
To test the impact of their calibration factors, the researchers first plotted the mean uncalibrated Cherenkov intensity as a function of mean surface dose (based on the projected dose from the treatment planning software for the first 5 mm of tissue) for all patients. For 6 MV beams, this gave an R² value (a measure of how closely the data follow the linear fit) of 0.81. For 15 MV treatments, R² was 0.17, indicating lower Cherenkov-to-dose linearity.
Applying the CT calibration to the diverse patient data did not improve the linearity. Applying the pigmentation-based calibration, however, had a significant impact, improving the R² values to 0.91 and 0.64 for 6 and 15 MV beams, respectively. The highest Cherenkov-to-dose linearity was achieved after applying both calibration factors, which resulted in R² values of 0.96 and 0.91 for 6 and 15 MV beams, respectively.
Using only the CT calibration, the average dose errors (the mean difference between the estimated and reference dose) were 38% and 62% for 6 and 15 MV treatments, respectively. The pigmentation-based calibration reduced these errors to 21% and 6.6%.
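The general shape of such a luminance-based calibration can be sketched on synthetic data. Everything below – the patient numbers, the noise-free signal model and the fitting choices – is invented for illustration and does not reproduce the study’s actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic cohort (made-up values): relative skin luminance L,
# true surface dose D, and a raw Cherenkov signal proportional to
# dose but scaled by a pigmentation-dependent factor.
L = rng.uniform(0.2, 1.0, 24)       # relative skin luminance
D = rng.uniform(1.5, 2.5, 24)       # surface dose (Gy)
raw = (50.0 * L + 5.0) * D          # detected counts (arbitrary units)

# Calibration: fit Cherenkov-per-unit-dose linearly against luminance,
# then divide each patient's raw signal by their predicted factor.
slope, intercept = np.polyfit(L, raw / D, 1)
calibrated = raw / (slope * L + intercept)   # dose estimate (Gy)

def r_squared(y, y_fit):
    """Coefficient of determination between measured and fitted values."""
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

r2 = r_squared(D, calibrated)
mean_error = np.mean(np.abs(calibrated - D) / D)
```

On this noise-free toy data the calibration recovers the dose exactly; with real patients, residual scatter from tissue optics and geometry remains, which is why the CT-based factor is applied as well.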
“Integrating colour imaging to assess patients’ skin luminance can provide individualized calibration factors that significantly improve Cherenkov-to-dose estimations,” the researchers conclude. They emphasize that this calibration is institution-specific – different sites will need to derive a calibration algorithm corresponding to their specific cameras, room lighting and beam energies.
Bringing quantitative in vivo Cherenkov dosimetry into routine clinical use will require further research effort, says Andreozzi. “In Cherenkov dosimetry, the patient becomes their own dosimeter, read out by a specialized camera. In that respect, it comes with many challenges – we usually have standardized, calibrated detectors, and patients are in no way standardized or calibrated,” Andreozzi tells Physics World. “We have to characterize the superficial optical properties of each individual patient in order to translate what the cameras see into something close to radiation dose.”
Damage to the spinal cord can disrupt communication between the brain and body, with potentially devastating effects. Spinal cord injuries can cause permanent loss of sensory, motor and autonomic functions, or even paralysis, and there is currently no cure. To address this unmet need, researchers at Chalmers University of Technology in Sweden and the University of Auckland in New Zealand have developed an ultrathin bioelectric implant that improved movement in rats with spinal cord injuries.
The implant works by delivering a low-frequency pulsed electric field (EF) across the injury site – an approach that shows promise in promoting regeneration of axons (nerve fibres) and improving outcomes. Traditional EF treatments, however, rely on metal electrodes that are prone to corrosion. In this latest study, described in Nature Communications, the researchers fabricated stimulation electrodes from sputtered iridium oxide films (SIROF), which exhibit superior durability and stability to their metal counterparts.
The team further enhanced the EF treatment by placing the electrodes directly on the spinal cord to deliver stimulation directly to the injury site. Although this subdural positioning requires more invasive surgery than the epidural placement used previously, it should deliver stronger stimulation while using an order of magnitude less power than epidural electrodes.
“We chose subdural stimulation because it avoids the shunting effect of cerebrospinal fluid, which is highly conductive and can weaken the electric field when electrodes are placed epidurally,” explains co-lead researcher Lukas Matter from Chalmers University of Technology. “Subdural placement puts the electrodes directly on the spinal cord, allowing for stronger and more precise stimulation with lower current.”
Restoring motion and sensation
Matter and collaborators tested the implants in rats with spinal cord injuries, using 200 μm diameter SIROF electrodes placed on either side of the injury site. The animals received 1 h of EF treatment daily for the first 7–11 days, and then on weekdays only for up to 12 weeks.
To compare EF treatment with natural healing (unlike humans, rats can recover after spinal cord injury), the researchers assessed the hind-limb function of both treated and non-treated rats. They found that during the first week, the non-treated group recovered faster than the treated group. From week 4 onwards, however, treated rats showed significantly improved locomotion and coordination compared with non-treated rats, indicating greater recovery of hind-limb function.
The treated rats continued to improve until the end of the study (week 12), while non-treated rats showed no further improvement after week 5. At week 12, all of the treated animals exhibited consistent coordination between front and hind limbs, compared with only 20% of non-treated rats, which struggled to move smoothly.
The team also assessed the recovery of mechanical sensation by touching the animals’ paws with a metal filament. Treated rats withdrew their paws faster than non-treated rats, suggesting a recovery of touch sensitivity – though the researchers note that this may reflect hypersensitivity.
“This indicates that the treatment supported recovery of both movement and sensation,” says co-lead researcher Bruce Harland from the University of Auckland in a press statement. “Just as importantly, our analysis confirmed that the treatment did not cause inflammation or other damage to the spinal cord, demonstrating that it was not only effective but also safe.”
Durable design
To confirm the superior stability of SIROF electrodes, the researchers performed benchtop tests mimicking the in vivo treatment. The SIROF electrodes showed no signs of dysfunction or delamination, while platinum electrodes corroded and failed.
“Platinum electrodes are prone to degradation over time, especially at high charge densities, due to irreversible electrochemical reactions that cause corrosion and delamination, ultimately compromising their long-term stability,” says Matter. “SIROF enables reversible charge injection through surface-bound oxidation states, minimizing the generation of potentially toxic stimulation byproducts and enhancing their stimulation capabilities.”
In contrast with previous studies, the researchers did not see any change in axon density around the lesion site. Matter suggests some possible reasons for this finding: “The 12-week time point may have been too late to capture early signs of regeneration. The injury itself created a large cystic cavity, which may have blocked axon growth. Also, electric field treatment might improve recovery through protective or alternative mechanisms, not necessarily by promoting new axon growth.”
The researchers are now developing an enhanced version of the implant with larger electrodes based on the conductive polymer PEDOT, which enables higher charge densities without compromising biocompatibility. This will allow them to assess a broader range of field strengths and pulse durations in order to determine the optimal treatment conditions. They also plan to test the implant in larger animal models, and hope to elucidate the mechanisms underlying the locomotion improvement using ex vivo models.
As for the possibility of future clinical implementation, senior author Maria Asplund of Chalmers University envisions a temporary, possibly biodegradable, subdural implant that safely delivers low-frequency EF therapy. “This could be implanted early after spinal cord injury to support axon regrowth and reduce the follow-up damage that occurs after the injury itself,” she tells Physics World.
As the number of cancer cases continues to grow, radiation oncology departments are under increasing pressure to treat more and more patients. And as clinical facilities expand to manage this ongoing growth, and technology developments increase the complexity of radiotherapy delivery, there’s an urgent need to optimize the treatment workflow without ramping up time or staffing requirements.
To enable this level of optimization, radiation therapy departments will require an efficient quality management system that can handle both machine and patient quality assurance (QA), works seamlessly with treatment devices from multiple vendors, and provides the time savings required to ease staff workload.
Driven by growth
A case in point is the Moffitt Cancer Center in Florida, which in 2018 shifted all of its QA to SunCHECK, a quality management platform from Sun Nuclear that combines hardware and software to streamline treatment and delivery system QA into one centralized platform. Speaking at a recent Sun Nuclear webinar, clinical physicist Daniel Opp explained that the primary driver for this switch was growth.
Daniel Opp “Having one system means that we’re able to do tests in the same way across all our linacs.” (Courtesy: D Opp)
“In 2018, our physicians were shifting to perform a lot more SBRT [stereotactic body radiation therapy]. Our leadership had plans in motion to add online adaptive planning as well as expand with opening more radiation oncology centres,” he explained.
At that time, the centre was using multiple software platforms and many different imaging phantoms to run its QA, with physicists still relying on manual measurements and qualitative visual assessments. Now, the team performs all machine QA using SunCHECK Machine and almost all patient-specific QA (PSQA) using SunCHECK Patient.
“Our QA software and data were fractured and all over the place,” said Opp. “The move to SunCHECK made sense as it gave us the ability to integrate all measurements, software and databases into a one-stop shop, providing significant time savings and far cleaner record keeping.”
SunCHECK also simplifies QA procedures by consolidating tests. Opp explained that back in 2018, photon tests on the centre’s linacs required five setups, 12 measurements and manually entering values 22 times; SunCHECK reduced this to one setup, four measurements and no manual entries. “This alone gives you an overview of the significant time savings,” he said.
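As an illustrative tally of that consolidation (the figures are those quoted by Opp; the dictionary layout is mine, for comparison only):

```python
# Per-linac photon QA burden before and after consolidation,
# using the figures quoted in the webinar
before = {"setups": 5, "measurements": 12, "manual_entries": 22}
after = {"setups": 1, "measurements": 4, "manual_entries": 0}

for step in before:
    print(f"{step}: {before[step]} -> {after[step]}")
```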
Another benefit is the ability to automate tests and ensure standardization. “If you tell our large group of physicists to do a picket fence test, we’ll all do it a little differently,” Opp explained. “Having one system on which we’re all running the same tests means that we’re able to do the test in the same way across all our linacs.”
Opp noted that SunCHECK displays all required information on an easy-to-read screen, with the patient QA worklist on one side and the machine QA worklist on the other. “You see a snapshot of the clinic and can figure out if there’s anything you need to take care of. It’s very efficient in letting you know when something needs your attention,” he said.
Patricia Sansourekidou initiated the switch to SunCHECK after joining the University of New Mexico (UNM) Comprehensive Cancer Center in 2020 as its new director of medical physics. At that time the cancer centre was treating about 1000 patients per year. But high patient numbers led to a long waiting list – roughly three months between referral and the start of treatment – and a clear need for the facility to expand.
Patricia Sansourekidou “We saw huge time savings for both monthly and daily QA.” (Courtesy: P Sansourekidou)
Assessing the centre’s QA procedures in 2020 revealed that the team was using a wide variety of QA software, making routine checks time-consuming. Monthly linac QA, for example, required roughly 32 files and took about 14 hours to perform. In addition, Sansourekidou noted, physicists were spending hours every month adjusting the machines. “One day it was the energy that was off and then the output was off; I soon realised that, in the absence of appropriate software, we were making adjustments back and forth,” she said. “More importantly, we had no way to track these trends.”
Sansourekidou concluded that the centre needed an improved QA solution based on one unified platform. “So we went on a physics hunt,” she said. “We met with every vendor out there and Sun Nuclear won the request for proposal. So we implemented SunCHECK Machine and SunCHECK Patient.”
Switching to SunCHECK reduced monthly QA to just 4–5 hours per linac. “We’re saving about nine hours per linac per month; that’s 324 hours per year when we could be doing something else for our patients,” said Sansourekidou. Importantly, the new software enables the team to visualize trends and assess whether a genuine problem is present.
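The quoted annual figure can be sanity-checked with a quick calculation; note that the number of linacs (three) is inferred from the arithmetic rather than stated explicitly:

```python
# Monthly linac QA went from ~14 h to 4-5 h, a saving of about 9 h per linac
hours_saved_per_linac_per_month = 9
months_per_year = 12
n_linacs = 3  # inferred: 324 h/yr / (9 h x 12 months) = 3 linacs

annual_saving = hours_saved_per_linac_per_month * n_linacs * months_per_year
print(annual_saving)  # 324 hours per year, matching the quoted figure
```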
For daily QA, which previously required numerous spreadsheets and systems, SunCHECK’s daily QA template provides time savings of about 60%. “At six in the morning, that’s important,” Sansourekidou pointed out. Annual QA saw roughly 33% time savings, while for the 70% of patients requiring PSQA, time savings were about 25%.
Another “unexpected side effect” of deploying SunCHECK, said Sansourekidou, is that the IT department was happy to maintain one platform. “Every time we have a new physicist, it’s much easier for our IT department to set them up. That has been a huge benefit for us,” she said. “Additionally, our service engineers are happy because we are not spending hours of their time adjusting the machine back and forth.”
“Overall, I thought there were great improvements that really helped us justify the initial investment – not just monetary, but also time investment from our physics team,” she said.
Efficiency savings QA times before and after implementing SunCHECK at the UNM Comprehensive Cancer Center. (Courtesy: Patricia Sansourekidou)
Phantom-free QA
For Opp, one of the biggest advances enabled by SunCHECK was the move to phantom-free PSQA, which saves considerable time and eliminates errors inherent to phantom-based QA. In the last year, the Moffitt team also switched to using DoseCHECK – SunCHECK’s secondary 3D dose calculation algorithm – as the foundation of its quality checks. Alongside this, a RayStation script checks plan deliverability to ensure that no problems arise once the patient is on the table.
“We don’t do our pre-treatment QA anymore. We rely on those two to get confidence into the final work and then we run our logs off the first patient fraction,” Opp explained. “We have a large physics group and there was natural apprehension, but everybody got on board and agreed that this was a shift we needed to make. We leveraged DoseCHECK to create a better QA system for ourselves.”
Since 2018, both patient workload and staff numbers at the Moffitt Cancer Center have doubled. By the end of 2025, it will also have almost doubled its number of treatment units. The centre has over 100 SunCHECK users – including therapists, dosimetrists and physicists – and Opp emphasized that the system is robust enough to handle all these users doing different tasks at different times without any issues.
As patient numbers increase, the time savings conferred by SunCHECK help reduce staff workload and improve quality-of-life for users. The centre currently performs about 100 PSQA procedures per week, which would have taken about 37 hours using previous QA processes – a workload that Opp notes would have been unmanageable. SunCHECK reduced the weekly average to around seven hours.
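A rough per-plan breakdown of those weekly PSQA numbers (my own arithmetic, derived from the quoted totals rather than stated in the webinar):

```python
# Weekly PSQA workload: ~100 procedures, 37 h before SunCHECK vs ~7 h after
procedures_per_week = 100
old_hours, new_hours = 37, 7

old_minutes_per_plan = old_hours * 60 / procedures_per_week  # ~22 min per plan
new_minutes_per_plan = new_hours * 60 / procedures_per_week  # ~4 min per plan
saving = (old_hours - new_hours) / old_hours  # ~81% reduction in weekly PSQA time

print(f"{old_minutes_per_plan:.1f} min -> {new_minutes_per_plan:.1f} min per plan "
      f"({saving:.0%} less time overall)")
```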
Similarly, linac QA previously required two or three late nights per month (or one full day on the weekend). “After the switch to SunCHECK, everybody’s pretty much able to get it done in one late night per month,” said Opp. He added that the Moffitt Cancer Center’s continuing growth has required the onboarding of many new physicists – and that it’s significantly easier to train these new staff with all of the QA software in one centralized platform.
Enabling accreditation
Accreditation is essential for radiation oncology departments to demonstrate their ability to deliver safe, high-quality care. The UNM Comprehensive Cancer Center’s previous American College of Radiology (ACR) accreditation had expired before Sansourekidou’s arrival, and she was keen to rectify this; in March 2024 the centre achieved ASTRO’s APEx accreditation.
“SunCHECK helped with that,” she said. “It wasn’t the only reason, there were other things that we had to improve, but we did come across as having a strong physics programme.”
Achieving accreditation also helps justify the purchase of a totally new QA platform, Sansourekidou explained. “The most important thing to explain to your administration is that if we don’t do things the way that our regulatory bodies advise, then not only will we lose our accreditation, but we will fall behind,” she said.
Sansourekidou emphasized that the efficiency gains conferred by SunCHECK were invaluable for the physics team, particularly for out-of-hours working. “We saw huge time savings for both monthly and daily QA,” she said. “It is a large investment, but improving efficiency through investment in software will really help the department in the long term.”