
World’s first patient treatments delivered with proton arc therapy

25 February 2025 at 14:00

A team at the Trento Proton Therapy Centre in Italy has delivered the first clinical treatments using proton arc therapy (PAT), an emerging proton delivery technique. Following successful dosimetric comparisons with clinically delivered proton plans, the researchers confirmed the feasibility of PAT delivery and used PAT to treat nine cancer patients, reporting their findings in Medical Physics.

Currently, proton therapy is mostly delivered using pencil-beam scanning (PBS), which provides highly conformal dose distributions. But PBS delivery can be compromised by the small number of beam directions deliverable in an acceptable treatment time. PAT overcomes this limitation by moving to an arc trajectory.

“Proton arc treatments are different from any other pencil-beam proton delivery technique because of the large number of beam angles used and the possibility to optimize the number of energies used for each beam direction, which enables optimization of the delivery time,” explains first author Francesco Fracchiolla. “The ability to optimize both the number of energy layers and the spot weights makes these treatments superior to any previous delivery technique.”

Plan comparisons

The Trento researchers – working with colleagues from RaySearch Laboratories – compared the dosimetric parameters of PAT plans with those of state-of-the-art multiple-field optimized (MFO) PBS plans, for 10 patients with head-and-neck cancer. They focused on this site due to the high number of organs-at-risk (OARs) close to the target that may be spared using this new technique.

In future, PAT plans will be delivered with the beam on during gantry motion (dynamic mode). This requires dynamic arc plan delivery with all system settings automatically adjusted as a function of gantry angle – an approach with specific hardware and software requirements that have so far impeded clinical rollout.

Instead, Fracchiolla and colleagues employed an alternative version of static PAT, in which the static arc is converted into a series of PBS beams and delivered using conventional delivery workflows. Using the RayStation treatment planning system, they created MFO plans (using six noncoplanar beam directions) and PAT plans (with 30 beam directions), robustly optimized against setup and range uncertainties.

PAT plans dramatically improved dose conformality compared with MFO treatments. While target coverage was of equal quality for both treatment types, PAT decreased the mean doses to OARs for all patients. The biggest impact was in the brainstem, where PAT reduced maximum and mean doses by 19.6 and 9.5 Gy(RBE), respectively. Dose to other primary OARs did not differ significantly between plans, but PAT achieved an impressive reduction in mean dose to secondary OARs not directly adjacent to the target.

The team also evaluated how these dosimetric differences impact normal tissue complication probability (NTCP). PAT significantly reduced (by 8.5%) the risk of developing dry mouth and slightly lowered other NTCP endpoints (swallowing dysfunction, tube feeding and sticky saliva).
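
The article does not say which NTCP model the team used; the Lyman-Kutcher-Burman (LKB) model is a common choice, and a minimal sketch shows how a lower equivalent uniform dose (EUD) to an organ at risk translates into a lower complication probability. The parameter values below are illustrative published xerostomia fits, and the EUD inputs are hypothetical, not numbers from this study:

```python
from math import erf, sqrt

def lkb_ntcp(eud, td50, m):
    """Lyman-Kutcher-Burman NTCP: complication probability for a given
    equivalent uniform dose (EUD) to an organ at risk.
    td50 = dose giving 50% complication probability; m = slope parameter."""
    t = (eud - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))  # standard normal CDF

# Illustrative parotid-gland parameters (td50 ~ 39.9 Gy, m ~ 0.40 are
# published xerostomia fits, not values from this study):
ntcp_mfo = lkb_ntcp(eud=26.0, td50=39.9, m=0.40)  # hypothetical MFO mean dose
ntcp_pat = lkb_ntcp(eud=20.0, td50=39.9, m=0.40)  # hypothetical PAT mean dose
print(f"dry-mouth NTCP: MFO {ntcp_mfo:.1%}, PAT {ntcp_pat:.1%}")
```

With these illustrative inputs the absolute risk reduction comes out near the 8.5% quoted above, but the real figure depends on the model and parameters the team actually used.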

To verify the feasibility of clinical PAT, the researchers delivered MFO and PAT plans for one patient on a clinical gantry. Importantly, delivery times (from the start of the first beam to the end of the last) were similar for both techniques: 36 min for PAT with 30 beam directions and 31 min for MFO. Reducing the number of beam directions to 20 reduced the delivery time to 25 min, while maintaining near-identical dosimetric data.

First patient treatments

The successful findings of the plan comparison and feasibility test prompted the team to begin clinical treatments.

“The final trigger to go live was the fact that the discretized PAT plans maintained pretty much exactly the optimal dosimetric characteristics of the original dynamic (continuous rotation) arc plan from which they derived, so there was no need to wait for full arc to put the potential benefits to clinical use. Pretreatment verification showed excellent dosimetric accuracy and everything could be done in a fully CE-certified environment,” say Frank Lohr and Marco Cianchetti, director and deputy director, respectively, of the Trento Proton Therapy Center. “The only current drawback is that we are not at the treatment speed that we could be with full dynamic arc.”

To date, nine patients have received or are undergoing PAT treatment: five with head-and-neck tumours, three with brain tumours and one with a thoracic tumour. For the first two head-and-neck patients, the team created PAT plans with a half arc (180° to 0°) with 10 beam directions and a mean treatment time of 12 min. The next two were treated with a complete arc (360°) with 20 beam directions; here, the mean treatment time was 24 min. Patient-specific quality assurance revealed an average gamma passing rate (3%, 3 mm) of 99.6%, and only one patient required replanning.
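
The quoted gamma passing rate comes from comparing measured and planned dose distributions point by point. As a rough illustration of the idea (this is a minimal 1D global-gamma sketch, not the clinical QA software the team used), each reference point passes if some measured point lies within the combined 3% dose / 3 mm distance tolerance:

```python
import numpy as np

def gamma_pass_rate(ref, meas, positions, dose_tol=0.03, dist_tol=3.0):
    """Global 1D gamma analysis (3%, 3 mm by default): for each reference
    point, find the minimum gamma over all measured points; the point
    passes if that minimum is <= 1."""
    ref, meas, positions = map(np.asarray, (ref, meas, positions))
    dose_norm = dose_tol * ref.max()          # global dose criterion
    passed = 0
    for x0, d in zip(positions, ref):
        dd = (meas - d) / dose_norm           # normalized dose differences
        dx = (positions - x0) / dist_tol      # normalized spatial offsets
        passed += np.sqrt(dd**2 + dx**2).min() <= 1.0
    return passed / ref.size

# A profile compared against itself passes everywhere:
x = np.linspace(0, 50, 101)                   # positions in mm
profile = np.exp(-((x - 25) / 8.0) ** 2)      # toy Gaussian dose profile
print(gamma_pass_rate(profile, profile, x))   # 1.0
```

Clinical gamma analysis works on 2D or 3D dose grids and interpolates between points, but the pass/fail criterion is the same.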

All PAT treatments were performed using the centre’s IBA ProteusPlus proton therapy unit and the existing clinical workflow. “Our treatment planning system can convert an arc plan into a PBS plan with multiple beams,” Fracchiolla explains. “With this workaround, the entire clinical chain doesn’t change and the plan can be delivered on the existing system. This ability to convert the arc plans into PBS plans means that basically every proton centre can deliver these treatments with the current hardware settings.”

The researchers are now analysing acute toxicity data from the patients, to determine whether PAT reduces toxicity. They are also looking to further reduce the delivery times.

“Hopefully, together with IBA, we will streamline the current workflow between the OIS [oncology information system] and the treatment control system to reduce treatment times, thus being competitive in comparison with conventional approaches, even before full dynamic arc treatments become a clinical reality,” adds Lohr.

The post World’s first patient treatments delivered with proton arc therapy appeared first on Physics World.

Low-temperature plasma halves cancer recurrence in mice

18 February 2025 at 10:00

Treatment with low-temperature plasma is emerging as a novel cancer therapy. Previous studies have shown that plasma can deactivate cancer cells in vitro, suppress tumour growth in vivo and potentially induce anti-tumour immunity. Researchers at the University of Tokyo are investigating another promising application – the use of plasma to inhibit tumour recurrence after surgery.

Lead author Ryo Ono and colleagues demonstrated that treating cancer resection sites with streamer discharge – a type of low-temperature atmospheric plasma – significantly reduced the recurrence rate of melanoma tumours in mice.

“We believe that plasma is more effective when used as an adjuvant therapy rather than as a standalone treatment, which led us to focus on post-surgical treatment in this study,” says Ono.

In vivo experiments

To create the streamer discharge, the team applied a high-voltage pulse (25 kV, 20 ns, 100 pulse/s) to a 3 mm-diameter rod electrode with a hemispherical tip. The rod was placed in a quartz tube with a 4 mm inner diameter, and the working gas – humid oxygen mixed with ambient air – was flowed through the tube. As electrons in the plasma collide with molecules in the gas, the mixture generates cytotoxic reactive oxygen and nitrogen species.

The researchers performed three experiments on mice with melanoma, a skin cancer with a local recurrence rate of up to 10%. In the first experiment, they injected 11 mice with mouse melanoma cells, resecting the resulting tumours eight days later. They then treated five of the mice with streamer discharge for 10 min, with the mouse placed on a grounded plate and the electrode tip 10 mm above the resection site.

Experimental setup Streamer discharge generation and treatment. (Courtesy: J. Phys. D: Appl. Phys. 10.1088/1361-6463/ada98c)

Tumour recurrence occurred in five of the six control mice (no plasma treatment) and two of the five plasma-treated mice, corresponding to recurrence rates of 83% and 40%, respectively. In a second experiment with the same parameters, recurrence rates were 44% in nine control mice and 25% in eight plasma-treated mice.

In a third experiment, the researchers delayed the surgery until 12 days after cell injection, increasing the size of the tumour before resection. This led to a 100% recurrence rate in the control group of five mice. Only one recurrence was seen in five plasma-treated mice, although one mouse that died of unknown causes was counted as a recurrence, resulting in a recurrence rate of 40%.

All of the experiments showed that plasma treatment reduced the recurrence rate by roughly 50%. The researchers note that the plasma treatment did not affect the animals’ overall health.
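
The pooled numbers behind that "roughly 50%" can be checked directly from the counts reported in the three experiments:

```python
# Recurrence counts from the three mouse experiments described above
# (control vs plasma-treated).
experiments = [
    # (control recurrences, control n, treated recurrences, treated n)
    (5, 6, 2, 5),   # experiment 1: 83% vs 40%
    (4, 9, 2, 8),   # experiment 2: 44% vs 25%
    (5, 5, 2, 5),   # experiment 3: 100% vs 40% (one death counted as recurrence)
]
rc = sum(e[0] for e in experiments) / sum(e[1] for e in experiments)
rt = sum(e[2] for e in experiments) / sum(e[3] for e in experiments)
print(f"pooled recurrence: control {rc:.0%}, treated {rt:.0%}, "
      f"relative risk {rt / rc:.2f}")
```

The pooled relative risk is about 0.48, consistent with the halving reported by the authors, though the per-experiment sample sizes are small.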

Cytotoxic mechanisms

To further confirm the cytotoxicity of streamer discharge, Ono and colleagues treated cultured melanoma cells for between 0 and 250 s, at an electrode–surface distance of 10 mm. The cells were then incubated for 3, 6 or 24 h. Following plasma treatments of up to 100 s, most cells were still viable 24 h later. But between 100 and 150 s of treatment, the cell survival rate decreased rapidly.

The experiment also revealed a rapid transition from apoptosis (natural programmed cell death) to late apoptosis/necrosis (cell death due to external toxins) between 3 and 24 h post-treatment. Indeed, 24 h after a 150 s plasma treatment, 95% of the dead cells were in the late stages of apoptosis/necrosis. This finding suggests that the observed cytotoxicity may arise from direct induction of apoptosis and necrosis, combined with inhibition of cell growth at extended time points.

In a previous experiment, the researchers used streamer discharge to treat tumours in mice before resection. This treatment delayed tumour regrowth by at least six days, but all mice still experienced local recurrence. In contrast, in the current study, plasma treatment reduced the recurrence rate.

The difference may be due to different mechanisms by which plasma inhibits tumour recurrence: cytotoxic reactive species killing residual cancer cells at the resection site; or reactive species triggering immunogenic cell death. The team note that either or both of these mechanisms may be occurring in the current study.

“Initially, we considered streamer discharge as the main contributor to the therapeutic effect, as it is the primary source of highly reactive short-lived species,” explains Ono. “However, recent experiments suggest that the discharge within the quartz tube also generates a significant amount of long-lived reactive species (with lifetimes typically exceeding 0.1 s), which may contribute to the therapeutic effect.”

One advantage of the streamer discharge device is that it uses only room air and oxygen, without requiring the noble gases employed in other cold atmospheric plasmas. “Additionally, since different plasma types generate different reactive species, we hypothesized that streamer discharge could produce a unique therapeutic effect,” says Ono. “Conducting in vivo experiments with different plasma sources will be an important direction for future research.”

Looking ahead to use in the clinic, Ono believes that the low cost of the device and its operation should make it feasible to use plasma treatment immediately after tumour resection to reduce recurrence risk. “Currently, we have only obtained preliminary results in mice,” he tells Physics World. “Clinical application remains a long-term goal.”

The study is reported in Journal of Physics D: Applied Physics.


Nanoparticles demonstrate new and unexpected mechanism of coronavirus disinfection

11 February 2025 at 17:15

The COVID-19 pandemic provided a driving force for researchers to seek out new disinfection methods that could tackle future viral outbreaks. One promising approach relies on the use of nanoparticles, with several metal and metal oxide nanoparticles showing anti-viral activity against SARS-CoV-2, the virus that causes COVID-19. With this in mind, researchers from Sweden and Estonia investigated the effect of such nanoparticles on two different virus types.

Aiming to elucidate the nanoparticles’ mode of action, they discovered a previously unknown antiviral mechanism, reporting their findings in Nanoscale.

The researchers – from the Swedish University of Agricultural Sciences (SLU) and the University of Tartu – examined triethanolamine terminated titania (TATT) nanoparticles, spherical 3.5-nm diameter titanium dioxide (titania) particles that are expected to interact strongly with viral surface proteins.

They tested the antiviral activity of the TATT nanoparticles against two types of virus: swine transmissible gastroenteritis virus (TGEV) – an enveloped coronavirus that’s surrounded by a phospholipid membrane and transmembrane proteins; and the non-enveloped encephalomyocarditis virus (EMCV), which does not have a phospholipid membrane. SARS-CoV-2 has a similar structure to TGEV: an enveloped virus with an outer lipid membrane and three proteins forming the surface.

“We collaborated with the University of Tartu in studies of antiviral materials,” explains lead author Vadim Kessler from SLU. “They had found strong activity from cerium dioxide nanoparticles, which acted as oxidants for membrane destruction. In our own studies, we saw that TATT formed appreciably stable complexes with viral proteins, so we could expect potentially much higher activity at lower concentration.”

In this latest investigation, the team aimed to determine whether one of these potential mechanisms – blocking of surface proteins, or membrane disruption via oxidation by nanoparticle-generated reactive oxygen species – is the likely cause of TATT’s antiviral activity. The first of these effects usually occurs at low (nanomolar to micromolar) nanoparticle concentrations, the latter at higher (millimolar) concentrations.

Mode of action

To assess the nanoparticle’s antiviral activity, the researchers exposed viral suspensions to colloidal TATT solutions for 1 h, at room temperature and in the dark (without UV illumination). For comparison, they repeated the process with silicotungstate polyoxometalate (POM) nanoparticles, which are not able to bind strongly to cell membranes.

The nanoparticle-exposed viruses were then used to infect cells and the resulting cell viability served as a measure of the virus infectivity. The team note that the nanoparticles alone showed no cytotoxicity against the host cells.

Measuring viral infectivity after nanoparticle exposure revealed that POM nanoparticles did not exhibit antiviral effects on either virus, even at relatively high concentrations of 1.25 mM. TATT nanoparticles, on the other hand, showed significant antiviral activity against the enveloped TGEV virus at concentrations starting from 0.125 mM, but did not affect the non-enveloped EMCV virus.

Based on previous evidence that TATT nanoparticles interact strongly with proteins in darkness, the researchers expected to see antiviral activity at a nanomolar level. But the finding that TATT activity only occurred at millimolar concentrations, and only affected the enveloped virus, suggests that the antiviral effect is not due to blocking of surface proteins. And as titania is not oxidative in darkness, the team propose that the antiviral effect is actually due to direct complexation of nanoparticles with membrane phospholipids – a mode of antiviral action not previously considered.

“Typical nanoparticle concentrations required for effects on membrane proteins correspond to the protein content on the virus surface. With a 1:1 complex, we would need maximum nanomolar concentrations,” Kessler explains. “We saw an effect at about 1 mM/l, which is far higher. This was the indication for us that the effect was on the whole of membrane.”
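
Kessler's argument can be made concrete with a back-of-envelope estimate. The virion titer and surface-protein copy number below are assumed illustrative values, not figures from the paper; the point is only the orders of magnitude:

```python
# Order-of-magnitude check (illustrative numbers, not from the paper):
# even a dense viral suspension carries far fewer surface proteins than
# a millimolar nanoparticle solution has particles.
N_A = 6.022e23               # Avogadro's number, 1/mol
titer = 1e9                  # assumed virions per mL
proteins_per_virion = 100    # assumed surface-protein copies per virion

proteins_per_litre = titer * 1e3 * proteins_per_virion
protein_conc = proteins_per_litre / N_A            # mol/L
print(f"surface-protein concentration ~ {protein_conc:.1e} M")  # ~1.7e-10 M
print(f"excess of TATT at 1 mM: ~{1e-3 / protein_conc:.0e}-fold")
```

Under these assumptions, 1:1 protein blocking would saturate at sub-nanomolar nanoparticle concentrations, so an effect appearing only at about 1 mM points to the membrane as a whole, as Kessler argues.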

Verifying the membrane effect

To corroborate their hypothesis, the researchers examined the leakage of dye-labelled RNA from the TGEV coronavirus after 1 h exposure to nanoparticles. The fluorescence signal from the dye showed that TATT-treated TGEV released significantly more RNA than non-exposed virus, attributed to the nanoparticles disrupting the virus’s phospholipid membrane.

Finally, the team studied the interactions between TATT nanoparticles and two model phospholipid compounds. Both molecules formed strong complexes with TATT nanoparticles, while their interaction with POM nanoparticles was weak. This additional verification led the researchers to conclude that the antiviral effect of TATT in dark conditions is due to direct membrane disruption via complexation of titania nanoparticles with phospholipids.

“To the best of our knowledge, [this] proves a new pathway for metal oxide nanoparticles antiviral action,” they write.

Importantly, the nanoparticles are non-toxic, and work at room temperature without requiring UV illumination – enabling simple and low-cost disinfection methods. “While it was known that disinfection with titania could work in UV light, we showed that no special technical measures are necessary,” says Kessler.

Kessler suggests that the nanoparticles could be used to coat surfaces to destroy enveloped viruses, or in cost-effective filters to decontaminate air or water. “[It should be] possible to easily create antiviral surfaces that don’t require any UV activation just by spraying them with a solution of TATT, or possibly other oxide nanoparticles with an affinity to phosphate, including iron and aluminium oxides in particular,” he tells Physics World.


Imaging reveals how microplastics may harm the brain

29 January 2025 at 13:00

Pollution from microplastics – small plastic particles less than 5 mm in size – poses an ongoing threat to human health. Independent studies have found microplastics in human tissues and within the bloodstream. And as blood circulates throughout the body and through vital organs, these microplastics can reach critical regions and lead to tissue dysfunction and disease. Microplastics can also cause functional irregularities in the brain, but exactly how they exert neurotoxic effects remains unclear.

A research collaboration headed up at the Chinese Research Academy of Environmental Sciences and Peking University has shed light on this conundrum. In a series of cerebral imaging studies reported in Science Advances, the researchers tracked the progression of fluorescent microplastics through the brains of mice. They found that microplastics entering the bloodstream become engulfed by immune cells, which then obstruct blood vessels in the brain and cause neurobehavioral abnormalities.

“Understanding the presence and the state of microplastics in the blood is crucial. Therefore, it is essential to develop methods for detecting microplastics within the bloodstream,” explains principal investigator Haipeng Huang from Peking University. “We focused on the brain due to its critical importance: if microplastics induce lesions in this region, it could have a profound impact on the entire body. Our experimental technology enables us to observe the blood vessels within the brain and detect microplastics present in these vessels.”

In vivo imaging

Huang and colleagues developed a microplastics imaging system by integrating a two-photon microscopy system with fluorescent plastic particles and demonstrated that it could image brain blood vessels in awake mice. They then fed five mice with water containing 5-µm diameter fluorescent microplastics. After a couple of hours, fluorescence images revealed microplastics within the animals’ cerebral vessels.

Lightning bolt The “MP-flash” observed as two plastic particles rapidly fly through the cerebral blood vessels. (Courtesy: Haipeng Huang)

As they move through rapidly flowing blood, the microplastics generate a fluorescence signal resembling a lightning bolt, which the researchers call a “microplastic flash” (MP-flash). This MP-flash was observed in four of the mice, with the entire MP-flash trajectory captured in a single imaging frame of less than 208 ms.

Three hours after administering the microplastics, the researchers observed fluorescent cells in the bloodstream. The signals from these cells were of comparable intensity to the MP-flash signal, suggesting that the cells had engulfed microplastics in the blood to create microplastic-labelled cells (MPL-cells). The team note that the microplastics did not directly attach to the vessel wall or cross into brain tissue.

To test this idea further, the researchers injected microplastics directly into the bloodstream of the mice. Within minutes, they saw the MP-flash signal in the brain’s blood vessels, and roughly 6 min later MPL-cells appeared. No fluorescent cells were seen in non-treated mice. Flow cytometry of mouse blood after microplastics injection revealed that the MPL-cells, which were around 21 µm in diameter, were immune cells, mostly neutrophils and macrophages.

Tracking these MPL-cells revealed that they sometimes became trapped within a blood vessel. Some cells exited the imaging field following a period of obstruction while others remained in cerebral vessels for extended durations, in some instances for nearly 2.5 h of imaging. The team also found that one week after injection, the MPL-cells had still not cleared, although the density of blockages was much reduced.

“[While] most MPL-cells flow rapidly with the bloodstream, a small fraction become trapped within the blood vessels,” Huang tells Physics World. “We provide an example where an MPL-cell is trapped at a microvascular turn and, after some time, is fortunate enough to escape. Many obstructed cells are less fortunate, as the blockage may persist for several weeks. Obstructed cells can also trigger a crash-like chain reaction, resulting in several MPL-cells colliding in a single location and posing significant risks.”

The MPL-cell blockages also impeded blood flow in the mouse brain. Using laser speckle contrast imaging to monitor blood flow, the researchers saw reduced perfusion in the cerebral cortical vessels, notably at 30 min after microplastics injection and particularly affecting smaller vessels.

Reduced blood flow These laser speckle contrast images show blood flow in the mouse brain at various times after microplastics injection. The images indicate that blockages of microplastic-labelled cells inhibit perfusion in the cerebral cortical vessels. (Courtesy: Huang et al. Sci. Adv. 11 eadr8243 (2025))

Changing behaviour

Lastly, Huang and colleagues investigated whether the reduced blood supply to the brain caused by cell blockages caused behavioural changes in the mice. In an open-field experiment (used to assess rodents’ exploratory behaviour) mice injected with microplastics travelled shorter distances at lower speeds than mice in the control group.

The Y-maze test for assessing memory also showed that microplastics-treated mice travelled smaller total distances than control animals, with a significant reduction in spatial memory. Tests to evaluate motor coordination and endurance revealed that microplastics additionally inhibited motor abilities. By day 28 after injection, these behavioural impairments had resolved, corresponding with the observed recovery from MPL-cell obstruction in the cerebral vasculature at 28 days.

The researchers conclude that their study demonstrates that microplastics harm the brain indirectly – via cell obstruction and disruption of blood circulation – rather than by directly penetrating tissue. They emphasize, however, that this mechanism may not necessarily apply to humans, who have roughly 1200 times the circulating blood volume of mice and significantly different vascular diameters.

“In the future, we plan to collaborate with clinicians,” says Huang. “We will enhance our imaging techniques for the detection of microplastics in human blood vessels, and investigate whether the ‘MPL-cell-car-crash’ happens in humans. We anticipate that this research will lead to exciting new discoveries.”

Huang emphasizes how the use of fluorescent microplastic imaging technology has fundamentally transformed research in this field over the past five years. “In the future, advancements in real-time imaging of depth and the enhanced tracking ability of microplastic particles in vivo may further drive innovation in this area of study,” he says.


Flexible tactile sensor reads braille in real time

27 January 2025 at 10:00

Braille is a tactile writing system that helps people who are blind or partially sighted acquire information by touching patterns of tiny raised dots. Braille uses combinations of six dots (two columns of three) to represent letters, numbers and punctuation. But learning to read braille can be challenging, particularly for those who lose their sight later in life, prompting researchers to create automated braille recognition technologies.

One approach involves simply imaging the dots and using algorithms to extract the required information. This visual method, however, struggles with the small size of braille characters and can be impacted by differing light levels. Another option is tactile sensing; but existing tactile sensors aren’t particularly sensitive, with small pressure variations leading to incorrect readings.

To tackle these limitations, researchers from Beijing Normal University and Shenyang Aerospace University in China have employed an optical fibre ring resonator (FRR) to create a tactile braille recognition system that accurately reads braille in real time.

“Current braille readers often struggle with accuracy and speed, especially when it comes to dynamic reading, where you move your finger across braille dots in real time,” says team leader Zhuo Wang. “I wanted to create something that could read braille more reliably, handle slight variations in pressure and do it quickly. Plus, I saw an opportunity to apply cutting-edge technology – like flexible optical fibres and machine learning – to solve this challenge in a novel way.”

Flexible fibre sensor

At the core of the braille sensor is the optical FRR – a resonant cavity made from a loop of fibre containing circulating laser light. Wang and colleagues created the sensing region by embedding an optical fibre in flexible polymer and connecting it into the FRR ring. Three small polymer protrusions on top of the sensor act as probes to transfer the applied pressure to the optical fibre. Spaced 2.5 mm apart to align with the dot spacing, each protrusion responds to the pressure from one of the three braille dots (or absence of a dot) in a vertical column.

Sensor fabrication The optical FRR is made by connecting ports of a 2×2 fibre coupler to form a loop. The sensing region is then connected into the loop. (Courtesy: Optics Express 10.1364/OE.546873)

As the sensor is scanned over the braille surface, the pressure exerted by the raised dots slightly changes the length and refractive index of the fibre, causing tiny shifts in the frequency of the light travelling through the FRR. The device employs a technique called Pound-Drever-Hall (PDH) demodulation to “lock” onto these shifts, amplify them and convert them into readable data.

“The PDH demodulation curve has an extremely steep linear slope, which means that even a very tiny frequency shift translates into a significant, measurable voltage change,” Wang explains. “As a result, the system can detect even the smallest variations in pressure with remarkable precision. The steep slope significantly enhances the system’s sensitivity and resolution, allowing it to pick up subtle differences in braille dots that might be too small for other sensors to detect.”

The eight possible configurations of three dots generate eight distinct pressure signals, with each braille character defined by two pressure outputs (one per column). Each protrusion has a slightly different hardness level, enabling the sensor to differentiate pressures from each dot. Rather than measuring each dot individually, the sensor reads the overall pressure signal and instantly determines the combination of dots and the character they correspond to.
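
The column-wise readout described above amounts to a small lookup: each column of three dots folds into one of eight codes, and a pair of codes identifies a character. A toy decoder, assuming standard braille dot numbering (dots 1–3 in the left column, 4–6 in the right); this illustrates the combinatorics only, not the team's neural-network pipeline:

```python
# Hypothetical decoder for a handful of letters; rows are numbered 1..3
# from the top of each column.
BRAILLE = {  # (left-column dot rows, right-column dot rows) -> letter
    ((1,), ()):      "a",   # dot 1
    ((1, 2), ()):    "b",   # dots 1,2
    ((1,), (1,)):    "c",   # dots 1,4
    ((1,), (1, 2)):  "d",   # dots 1,4,5
    ((1,), (2,)):    "e",   # dots 1,5
}

def column_code(dots):
    """Fold a column's dot rows into a 3-bit code (the 8 signal classes)."""
    return sum(1 << (row - 1) for row in dots)

# Precompute code-pair -> letter, since the sensor reads column by column:
LOOKUP = {(column_code(l), column_code(r)): ch for (l, r), ch in BRAILLE.items()}

def decode(column_stream):
    """Pair up successive column codes and decode them into text."""
    pairs = zip(column_stream[0::2], column_stream[1::2])
    return "".join(LOOKUP.get(pair, "?") for pair in pairs)

print(decode([3, 0, 1, 0, 1, 3]))  # -> "bad"
```

In the real device the eight classes arrive as noisy pressure signals rather than clean codes, which is why the team trains neural networks to perform this classification step.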

The researchers note that, in practice, the contact force may vary slightly during the scanning process, resulting in the same dot patterns exhibiting slightly different pressure signals. To combat this, they used neural networks trained on large amounts of experimental data to correctly classify braille patterns, even with small pressure variations.

“This design makes the sensor incredibly efficient,” Wang explains. “It doesn’t just feel the braille, it understands it in real time. As the sensor slides over a braille board, it quickly decodes the patterns and translates them into readable information. This allows the system to identify letters, numbers, punctuation, and even words or poems with remarkable accuracy.”

Stable and accurate

Measurements on the braille sensor revealed that it responds to pressures of up to 3 N, as typically exerted by a finger when touching braille, with an average response time of below 0.1 s, suitable for fast dynamic braille reading. The sensor also exhibited excellent stability under temperature or power fluctuations.

To assess its ability to read braille dots, the team used the sensor to read eight different arrangements of three dots. Using a multilayer perceptron (MLP) neural network, the system effectively distinguished the eight different tactile pressures with a classification accuracy of 98.57%.

Next, the researchers trained a long short-term memory (LSTM) neural network to classify signals generated by five English words. Here, the system demonstrated a classification accuracy of 100%, implying that slight errors in classifying signals in each column will not affect the overall understanding of the braille.

Finally, they used the MLP-LSTM model to read short sentences, either sliding the sensor manually or scanning it electronically to maintain a consistent contact force. In both cases, the sensor accurately recognised the phrases.

The team concludes that the sensor can advance intelligent braille recognition, with further potential in smart medical care and intelligent robotics. The next phase of development will focus on making the sensor more durable, improving the machine learning models and making it scalable.

“Right now, the sensor works well in controlled environments; the next step is to test its use by different people with varying reading styles, or under complex application conditions,” Wang tells Physics World. “We’re also working on making the sensor more affordable so it can be integrated into devices like mobile braille readers or wearables.”

The sensor is described in Optics Express.


Microbeams plus radiosensitizers could optimize brain cancer treatment

21 January 2025 at 10:40

Brain tumours are notoriously difficult to treat, resisting conventional treatments such as radiation therapy, where the deliverable dose is limited by normal tissue tolerance. To better protect healthy tissues, researchers are turning to microbeam radiation therapy (MRT), which uses spatially fractionated beams to spare normal tissue while effectively killing cancer cells.

MRT is delivered using arrays of ultrahigh-dose rate synchrotron X-ray beams tens of microns wide (high-dose peaks) and spaced hundreds of microns apart (low-dose valleys). A research team from the Centre for Medical Radiation Physics at the University of Wollongong in Australia has now demonstrated that combining MRT with targeted radiosensitizers – such as nanoparticles or anti-cancer drugs – can further boost treatment efficacy, reporting their findings in Cancers.

“MRT is famous for its healthy tissue-sparing capabilities with good tumour control, whilst radiosensitizers are known for their ability to deliver targeted dose enhancement to cancer,” explains first author Michael Valceski. “Combining these modalities just made sense, with their synergy providing the potential for the best of both worlds.”

Enhancement effects

Valceski and colleagues combined MRT with thulium oxide nanoparticles, the chemotherapy drug methotrexate and the radiosensitizer iododeoxyuridine (IUdR). They examined the response of monolayers of rodent brain cancer cells to various therapy combinations. They also compared conventional broadbeam orthovoltage X-ray irradiation with synchrotron broadbeam X-rays and synchrotron MRT.

Synchrotron irradiations were performed on the Imaging and Medical Beamline at the ANSTO Australian Synchrotron, using ultrahigh dose rates of 74.1 Gy/s for broadbeam irradiation and 50.3 Gy/s for MRT. The peak-to-valley dose ratio (PVDR, used to characterize an MRT field) of this set-up was measured as 8.9.

Using a clonogenic assay to measure cell survival, the team observed that synchrotron-based irradiation enhanced cell killing compared with conventional irradiation at the same 5 Gy dose (for MRT this is the valley dose; the peaks receive an 8.9 times higher dose), demonstrating the increased cell-killing effect of these ultrahigh-dose rate X-rays.
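The relationship between the prescribed valley dose and the corresponding peak dose follows directly from the measured PVDR. A minimal sketch of that arithmetic, using the values reported above:

```python
# Peak-to-valley dose ratio (PVDR) arithmetic for the set-up described
# in the article: peak dose = valley dose x PVDR.
pvdr = 8.9          # measured peak-to-valley dose ratio
valley_dose = 5.0   # Gy, prescribed valley dose

peak_dose = valley_dose * pvdr
print(f"Peak dose: {peak_dose:.1f} Gy")  # 44.5 Gy
```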

Adding radiosensitizers further increased the impact of synchrotron broadbeam irradiation, with DNA-localized IUdR killing more cells than cytoplasm-localized nanoparticles. Methotrexate, meanwhile, halved cell survival compared with conventional irradiation.

The team observed that at 5 Gy, MRT showed equivalent cell killing to synchrotron broadbeam irradiation. Valceski explains that this demonstrates MRT’s tissue-sparing potential, by showing how MRT can maintain treatment efficacy while simultaneously protecting healthy cells.

MRT also showed enhanced cell killing when combined with radiosensitizers, with the greatest effect seen for IUdR and IUdR plus methotrexate. This local dose enhancement, attributed to the DNA localization of IUdR, could further improve the tissue-sparing capabilities of MRT by enabling a lower per-fraction dose to reduce patient exposure whilst maintaining tumour control.

Imaging valleys and peaks

To link the biological effects with the physical collimation of MRT, the researchers performed confocal microscopy (at the Fluorescence Analysis Facility in Molecular Horizons, University of Wollongong) to investigate DNA damage following treatment at 0.5 and 5 Gy. Twenty minutes after irradiation, they imaged fixed cells to visualize double-strand DNA breaks (DSBs), as shown by γH2AX foci (representing a nuclear DSB site).

Spatially fractionated beams
Spatially fractionated beams Imaging DNA damage following MRT confirms that the cells’ biological responses match the beam collimation. The images show double-strand DNA breaks (green) overlaid on a nuclear counterstain (blue). (Courtesy: CC BY/Cancers 10.3390/cancers16244231)

The images verified that the cells’ biological responses corresponded with the MRT beam patterns, with the 400 µm microbeam spacing clearly seen in all treated cells, both with and without radiosensitizers.

In the 0.5 Gy images, the microbeam tracks were consistent in width, while the 5 Gy MRT tracks were wider as DNA damage spread from peaks into the valleys. This radiation roll-off was also seen with IUdR and IUdR plus methotrexate, with numerous bright foci visible in the valleys, demonstrating dose enhancement and improved cancer-killing with these radiosensitizers.

The researchers also analysed the MRT beam profiles using the γH2AX foci intensity across the images. Cells treated with radiosensitizers had broadened peaks, with the largest effect seen with the nanoparticles. As nanoparticles can be designed to target tumours, this broadening (roughly 30%) can be used to increase the radiation dose to cancer cells in nearby valleys.

“Peak broadening adds a novel benefit to radiosensitizer-enhanced MRT. The widening of the peaks in the presence of nanoparticles could potentially ‘engulf’ the entire cancer, and only the cancer, whilst normal tissues without nanoparticles retain the protection of MRT tissue sparing,” Valceski explains. “This opens up the potential for MRT radiosurgery, something our research team has previously investigated.”

Finally, the researchers used γH2AX foci data for each peak and valley to determine a biological PVDR. The biological PVDR values matched the physical PVDR of 8.9, confirming for the first time a direct relationship between physical dose delivered and DSBs induced in the cancer cells. They note that adding radiosensitizers generally lowered the biological PVDRs from the physical value, likely due to additional DSBs induced in the valleys.
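A biological PVDR of this kind is a ratio of DNA-damage signal in the peaks to that in the valleys. The sketch below is purely illustrative (the foci counts are invented, not the study's data), but shows the form of the calculation:

```python
import numpy as np

# Hypothetical sketch: a biological PVDR estimated as the ratio of mean
# gamma-H2AX foci signal in peak regions to that in valley regions.
# Counts below are illustrative, not measured values from the paper.
peak_foci = np.array([88, 90, 89, 89])    # signal per imaged peak region
valley_foci = np.array([9, 11, 10, 10])   # signal per imaged valley region

biological_pvdr = peak_foci.mean() / valley_foci.mean()
print(f"biological PVDR = {biological_pvdr:.1f}")  # 8.9
```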

The next step will be to perform preclinical studies of MRT. “Trials to assess the efficacy of this multimodal therapy in treating aggressive cancers in vivo are key, especially given the theragnostic potential of nanoparticles for image-guided treatment and precision planning, as well as cancer-specific dose enhancement,” senior author Moeava Tehei tells Physics World. “Considering the radiosurgical potential of stereotactic, radiosensitizer-enhanced MRT fractions, we can foresee a revolutionary multimodal technique with curative potential in the near future.”

The post Microbeams plus radiosensitizers could optimize brain cancer treatment appeared first on Physics World.

Novel MRI technique can quantify lung function

20 January 2025 at 10:30

Assessing lung function is crucial for diagnosing and monitoring respiratory diseases. The most common way to do this is using spirometry, which measures the amount and speed of air that a person can inhale and exhale. Spirometry, however, is insensitive to early disease and cannot detect heterogeneity in lung function. Techniques such as chest radiography or CT provide more detailed spatial information, but are not ideal for long-term monitoring as they expose patients to ionizing radiation.

Now, a team headed up at Newcastle University in the UK has demonstrated a new lung MR imaging technique that provides quantitative and spatially localized assessment of pulmonary ventilation. The researchers also show that the MRI scans – recorded after the patient inhales a safe gas mixture – can track improvements in lung function following medication.

Although conventional MRI of the lungs is challenging, lung function can be assessed by imaging the distribution of an inhaled gas, most commonly hyperpolarized 3He or 129Xe. These gases can be expensive, however, and the magnetic preparation step requires extra equipment and manpower. Instead, project leader Pete Thelwall and colleagues are investigating 19F-MRI of inhaled perfluoropropane – an inert gas that does not require hyperpolarization to be visible in an MRI scan.

“Conventional MRI detects magnetic signals from hydrogen nuclei in water to generate images of water distribution,” Thelwall explains. “Perfluoropropane is interesting to us as we can also get an MRI signal from fluorine nuclei and visualize the distribution of inhaled perfluoropropane. We assess lung ventilation by seeing how well this MRI-visible gas moves into different parts of the lungs when it is inhaled.”

Testing the new technique

The researchers analysed 19F-MRI data from 38 healthy participants, 35 with asthma and 21 with chronic obstructive pulmonary disease (COPD), reporting their findings in Radiology. For the 19F-MRI scans, participants were asked to inhale a 79%/21% mixture of perfluoropropane and oxygen and then hold their breath. All subjects also underwent spirometry and an anatomical 1H-MRI scan, and those with respiratory disease withheld their regular bronchodilator medication before the MRI exams.

After co-registering each subject’s anatomical (1H) and ventilation (19F) images, the researchers used the perfluoropropane distribution in the images to differentiate ventilated and non-ventilated lung regions. They then calculated the ratio of non-ventilated lung to total lung volume, a measure of ventilation dysfunction known as the ventilation defect percentage (VDP).
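As defined above, the VDP is a simple voxel ratio once the lung and ventilation masks exist. A minimal numpy sketch under assumed binary masks (not the authors' image-processing pipeline):

```python
import numpy as np

# Hypothetical sketch of the ventilation defect percentage (VDP):
# the fraction of lung voxels showing no perfluoropropane (19F) signal.
lung_mask = np.ones((4, 4, 4), dtype=bool)   # 64 lung voxels from the 1H-MRI
ventilated = lung_mask.copy()
ventilated[0, 0, :] = False                  # 4 voxels with no 19F signal

non_ventilated = lung_mask & ~ventilated
vdp = 100.0 * non_ventilated.sum() / lung_mask.sum()
print(f"VDP = {vdp:.2f}%")  # 6.25%
```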

Healthy subjects had a mean VDP of 1.8%, reflecting an even distribution of inhaled gas throughout their lungs and well-preserved lung function. In comparison, the patient groups showed elevated mean VDP values – 8.3% and 27.2% for those with asthma and COPD, respectively – reflecting substantial ventilation heterogeneity.

In participants with respiratory disease, the team also performed 19F-MRI after treatment with salbutamol, a common inhaler. They found that the MR images revealed changes in regional ventilation in response to this bronchodilator therapy.

Post-treatment images of patients with asthma showed an increase in lung regions containing perfluoropropane, reflecting the reversible nature of this disease. Participants with COPD generally showed less obvious changes following treatment, as expected for this less reversible disease. Bronchodilator therapy reduced the mean VDP by 33% in participants with asthma (from 8.3% to 5.6%) and by 14% in those with COPD (from 27.2% to 23.3%).

The calculated VDP values were negatively associated with standard spirometry metrics. However, the team note that some participants with asthma exhibited normal spirometry but an elevated mean VDP (6.7%) compared with healthy subjects. This finding suggests that the VDP acquired by 19F-MRI of inhaled perfluoropropane is more sensitive to subclinical disease than conventional spirometry.

Supporting lung transplants

In a separate study reported in JHLT Open, Thelwall and colleagues used dynamic 19F-MRI of inhaled perfluoropropane to visualize the function of transplanted lungs. Approximately half of lung transplant recipients experience organ rejection, known as chronic lung allograft dysfunction (CLAD), within five years of transplantation.

Lung function MRI
Early detection Lung function MRI showing areas of dysfunction in transplant recipients. (Courtesy: Newcastle University, UK)

Transplant recipients are monitored frequently using pulmonary function tests and chest X-rays. But by the time CLAD is diagnosed, irreversible lung damage may already have occurred. The team propose that 19F-MRI may find subtle early changes in lung function that could help detect rejection earlier.

The researchers studied 10 lung transplant recipients, six of whom were experiencing chronic rejection. They used a wash-in and washout technique, acquiring breath-hold 19F-MR images while the patient inhaled a perfluoropropane/oxygen mixture (wash-in acquisitions), followed by scans during breathing of room air (washout acquisitions).

The MR images revealed quantifiable differences in regional ventilation in participants with and without CLAD. In those with chronic rejection, scans showed poorer air movement to the edges of the lungs, likely due to damage to the small airways, a typical feature of CLAD. By detecting such changes in lung function, before signs of damage are seen in other tests, it’s possible that this imaging method might help inform patient treatment decisions to better protect the transplanted lungs from further damage.

The studies fall squarely within the field of clinical research, requiring non-standard MRI hardware to detect fluorine nuclei. But Thelwall sees a pathway towards introducing 19F-MRI in hospitals, noting that scanner manufacturers have brought systems to market that can detect nuclei other than 1H in routine diagnostic scans. Removing the requirement for hyperpolarization, combined with the lower relative cost of perfluoropropane inhalation (approximately £50 per study participant), could also help scale this method for use in the clinic.

The team is currently working on a study looking at how MRI assessment of lung function could help reduce the side effects associated with radiotherapy for lung cancer. The idea is to design a radiotherapy plan that minimizes dose to lung regions with good function, whilst maintaining effective cancer treatment.

“We are also looking at how better lung function measurements might help the development of new treatments for lung disease, by being able to see the effects of new treatments earlier and more accurately than current lung function measurements used in clinical trials,” Thelwall tells Physics World.

The post Novel MRI technique can quantify lung function appeared first on Physics World.

Magnetic particle imaging designed for the human brain

14 January 2025 at 17:00

Magnetic particle imaging (MPI) is an emerging medical imaging modality with the potential for high sensitivity and spatial resolution. Since its introduction back in 2005, researchers have built numerous preclinical MPI systems for small-animal studies. But human-scale MPI remains an unmet challenge. Now, a team headed up at the Athinoula A Martinos Center for Biomedical Imaging has built a proof-of-concept human brain-scale MPI system and demonstrated its potential for functional neuroimaging.

MPI works by visualizing injected superparamagnetic iron oxide nanoparticles (SPIONs). SPIONs exhibit a nonlinear response to an applied magnetic field: at low fields they respond roughly linearly, but at larger field strengths, particle response saturates. MPI exploits this behaviour by creating a magnetic field gradient across the imaging space with a field-free line (FFL) in the centre. Signals are only generated by the unsaturated SPIONs inside the FFL, which can be scanned through the imaging space to map SPION distribution.
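The nonlinear SPION response described above is commonly modelled with the Langevin function, L(ξ) = coth(ξ) − 1/ξ: roughly linear at small fields, saturating towards 1 at large fields. A minimal numerical sketch of that saturation behaviour (a standard textbook model, not the authors' specific particle physics):

```python
import numpy as np

def langevin(xi):
    """Langevin function coth(x) - 1/x, with the small-argument limit x/3."""
    xi = np.asarray(xi, dtype=float)
    return np.where(np.abs(xi) < 1e-6, xi / 3.0,
                    1.0 / np.tanh(xi) - 1.0 / xi)

# Near zero field the response is roughly linear (slope 1/3);
# at large fields the magnetization saturates towards 1.
print(float(langevin(0.01)))  # ~ 0.0033 (linear regime)
print(float(langevin(50.0)))  # ~ 0.98  (saturated regime)
```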

First author Eli Mattingly and colleagues propose that MPI could be of particular interest for imaging the dynamics of blood volume in the brain, as it can measure the local distribution of nanoparticles in blood without an interfering background signal.

“In the brain, the tracer stays in the blood so we get an image of blood volume distribution,” Mattingly explains. “This is an important physiological parameter to map since blood is so vital for supporting metabolism. In fact, when a brain area is used by a mental task, the local blood volume swells about 20% in response, allowing us to map functional brain activity by dynamically imaging cerebral blood volume.”

Rescaling the scanner

The researchers began by defining the parameters required to build a human brain-scale MPI system. Such a device should be able to image the head with 6 mm spatial resolution (as used in many MRI-based functional neuroimaging studies) and 5 s temporal resolution for at least 30 min. To achieve this, they rescaled their existing rodent-sized imager.

Human brain-scale MPI scanner
Proof-of-concept system The back of the MPI scanner showing the opening for the patient head. (Courtesy: Lawrence Wald)

The resulting scanner uses two opposed permanent magnets to generate the FFL and high-power electromagnet shift coils, comprising inner and outer coils on each side of the head, to sweep the FFL across the head. The magnets create a gradient of 1.13 T/m, sufficient to achieve 5–6 mm resolution with high-performance SPIONs. To create 2D images, a mechanical gantry rotates the magnets and shift coils at 6 RPM, enabling imaging every 5 s.

The MPI system also incorporates a water-cooled 26.3 kHz drive coil, which produces the oscillating magnetic field (of up to 7 mT peak) needed to drive the SPIONs in and out of saturation. A gradiometer-based receive coil fits over the head to record the SPION response.

Mattingly notes that this rescaling was far from straightforward as many parameters scale with the volume of the imaging bore. “With a bore about five times larger, the volume is about 125 times larger,” he says. “This means the power electronics require one to two orders of magnitude more power than rat-sized MPI systems, and the receive coils are simultaneously less sensitive as they become larger.”

Performance assessment

The researchers tested the scanner performance using a series of phantoms. They first evaluated spatial resolution by imaging 2.5 mm-diameter capillary tubes filled with Synomag SPIONs and spaced by between 5 and 9 mm. They reconstructed images using an inverse Radon reconstruction algorithm and a forward-model iterative reconstruction.

The system demonstrated a spatial resolution of about 7 mm with inverse Radon reconstruction, increasing to 5 mm with iterative reconstruction. The team notes that this resolution should be sufficient to observe changes in cerebral blood volume associated with brain function and following brain injuries.

To determine the practical detection limit, the researchers imaged Synomag samples with concentrations from 6 mg Fe/ml to 15.6 µg Fe/ml, observing a limit of about 1 µg Fe. Based on this result, they predict that MPI should show grey matter with a signal-to-noise ratio (SNR) of roughly five and large blood vessels with an SNR of about 100 in a 5 s image. They also expect to detect changes during brain activation with a contrast-to-noise ratio of above one.

Next, they quantified the scanner’s imaging field-of-view using a G-shaped phantom filled with Synomag at roughly the concentration of blood. The field-of-view was 181 mm in diameter – sufficient to encompass most human brains. Finally, the team monitored the drive current stability over 35 min of continuous imaging. At a drive field of 4.6 mT peak, the current deviated less than 2%. As this drift was smooth and slow, it should be straightforward to separate it from the larger signal changes expected from brain activation.

The researchers conclude that their scanner – the first human head-sized, mechanically rotating, FFL-based MPI – delivers a suitable spatial resolution, temporal resolution and sensitivity for functional human neuroimaging. And they continue to improve the device. “Currently, the group is developing hardware to enable studies such as application-specific receive coils to prepare for in vivo experiments,” says Mattingly.

At present, the scanner’s sensitivity is limited by background noise from the amplifiers. Mitigating such noise could increase sensitivity 20-fold, the team predicts, potentially providing an order of magnitude improvement over other human neuroimaging methods and enabling visualization of haemodynamic changes following brain activity.

The MPI system is described in Physics in Medicine & Biology.

The post Magnetic particle imaging designed for the human brain appeared first on Physics World.

Defying gravity: insights into hula hoop levitation

3 January 2025 at 11:41

Popularized in the late 1950s as a child’s toy, the hula hoop is undergoing renewed interest as a fitness activity and performance art. But have you ever wondered how a hula hoop stays aloft against the pull of gravity?

Wonder no more. A team of researchers at New York University have investigated the forces involved as a hoop rotates around a gyrating body, aiming to explain the physics and mathematics of hula hooping.

To determine the conditions required for successful hula hoop levitation, Leif Ristroph and colleagues conducted robotic experiments with hoops twirling around various shapes – including cones, cylinders and hourglass shapes. The 3D-printed shapes had rubberized surfaces to achieve high friction with a thin, rigid plastic hoop, and were driven to gyrate by a motor. The researchers launched the hoops onto the gyrating bodies by hand and recorded the resulting motion using high-speed videography and motion tracking algorithms.

They found that successful hula hooping depends on meeting two conditions. Firstly, the hoop orbit must be synchronized with the body gyration. This requires the hoop to be launched at sufficient speed and in the same direction as the gyration, after which the outward pull of centrifugal action and damping due to rolling friction result in stable twirling.

Body shape impacts hula hooping ability
Shape matters Successful hula hooping requires a body type with the right slope and curvature. (Courtesy: NYU’s Applied Math Lab)

This process, however, does not necessarily keep the hoop elevated at a stable height – any perturbations could cause it to climb or fall away. The team found that maintaining hoop levitation requires the gyrating body to have a particular “body type”, including an appropriately angled or sloped surface – the “hips” – plus an hourglass-shaped profile with a sufficiently curved “waist”.

Indeed, in the robotic experiments, an hourglass-shaped body enabled steady-state hula hooping, while the cylinders and cones failed to successfully hula hoop.

The researchers also derived dynamical models that relate the motion and shape of the hoop and body to the contact forces generated. They note that their findings can be generalized to a wide range of different shapes and types of motion, and could be used in “robotic applications for transforming motions, extracting energy from vibrations, and controlling and manipulating objects without gripping”.

“We were surprised that an activity as popular, fun and healthy as hula hooping wasn’t understood even at a basic physics level,” says Ristroph in a press statement. “As we made progress on the research, we realized that the maths and physics involved are very subtle, and the knowledge gained could be useful in inspiring engineering innovations, harvesting energy from vibrations, and improving in robotic positioners and movers used in industrial processing and manufacturing.”

The researchers present their findings in the Proceedings of the National Academy of Sciences.

The post Defying gravity: insights into hula hoop levitation appeared first on Physics World.

Medical physics and biotechnology: highlights of 2024

27 December 2024 at 11:00

From tumour-killing quantum dots to proton therapy firsts, this year has seen the traditional plethora of exciting advances in physics-based therapeutic and diagnostic imaging techniques, plus all manner of innovative bio-devices and biotechnologies for improving healthcare. Indeed, the Physics World Top 10 Breakthroughs for 2024 included a computational model designed to improve radiotherapy outcomes for patients with lung cancer by modelling the interaction of radiation with lung cells, as well as a method to make the skin of live mice temporarily transparent to enable optical imaging studies. Here are just a few more of the research highlights that caught our eye.

Marvellous MRI machines

This year we reported on some important developments in the field of magnetic resonance imaging (MRI) technology, not least of which was the introduction of a 0.05 T whole-body MRI scanner that can produce diagnostic quality images. The ultralow-field scanner, invented at the University of Hong Kong’s BISP Lab, operates from a standard wall power outlet and does not require shielding cages. The simplified design makes it easier to operate and significantly lower in cost than current clinical MRI systems. As such, the BISP Lab researchers hope that their scanner could help close the global gap in MRI availability.

Moving from ultralow- to ultrahigh-field instrumentation, a team headed up by David Feinberg at UC Berkeley created an ultrahigh-resolution 7 T MRI scanner for imaging the human brain. The system can generate functional brain images with 10 times better spatial resolution than current 7 T scanners, revealing features as small as 0.35 mm, as well as offering higher spatial resolution in diffusion, physiological and structural MR imaging. The researchers plan to use their new NexGen 7 T scanner to study underlying changes in brain circuitry in degenerative diseases, schizophrenia and disorders such as autism.

Meanwhile, researchers at Massachusetts Institute of Technology and Harvard University developed a portable magnetic resonance-based sensor for imaging at the bedside. The low-field single-sided MR sensor is designed for point-of-care evaluation of skeletal muscle tissue, removing the need to transport patients to a centralized MRI facility. The portable sensor, which weighs just 11 kg, uses a permanent magnet array and surface RF coil to provide low operational power and minimal shielding requirements.

Proton therapy progress

Alongside advances in diagnostic imaging, 2024 also saw a couple of firsts in the field of proton therapy. At the start of the year, OncoRay – the National Center for Radiation Research in Oncology in Dresden – launched the world’s first whole-body MRI-guided proton therapy system. The prototype device combines a horizontal proton beamline with a whole-body MRI scanner that rotates around the patient, a geometry that enables treatments both with patients lying down or in an upright position. Ultimately, the system could enable real-time MRI monitoring of patients during cancer treatments and significantly improve the targeting accuracy of proton therapy.

OncoRay’s research prototype
OncoRay’s research prototype The proton therapy beamline (left) and the opened MRI-guided proton therapy system, showing the in-beam MRI (centre) and patient couch (right). (Courtesy: UKD/Kirsten Lassig)

Also aiming to enhance proton therapy outcomes, a team at the PSI Center for Proton Therapy performed the first clinical implementation of an online daily adaptive proton therapy (DAPT) workflow. Online plan adaptation, where the patient remains on the couch throughout the replanning process, could help address uncertainties arising from anatomical changes during treatments. In five adults with tumours in rigid body regions treated using DAPT, the daily adapted plans provided target coverage to within 1.1% of the planned dose and, in over 90% of treatments, improved dose metrics to the targets and/or organs-at-risk. Importantly, the adaptive approach took just a few minutes longer than a non-adaptive treatment, remaining within the 30-min time slot allocated for a proton therapy session.

Bots and dots

Last but certainly not least, this year saw several research teams demonstrate the use of tiny devices for cancer treatment. In a study conducted at the Institute for Bioengineering of Catalonia, for instance, researchers used self-propelling nanoparticles containing radioactive iodine to shrink bladder tumours.

Graphene quantum dots
Cell death by dots Schematic illustration showing the role of graphene quantum dots as nanozymes for tumour catalytic therapy. (Courtesy: FHIPS)

Upon injection into the body, these “nanobots” search for and accumulate inside cancerous tissue, delivering radionuclide therapy directly to the target. Mice receiving a single dose of the nanobots experienced a 90% reduction in the size of bladder tumours compared with untreated animals.

At the Chinese Academy of Sciences’ Hefei Institutes of Physical Science, a team pioneered the use of metal-free graphene quantum dots for chemodynamic therapy. Studies in cancer cells and tumour-bearing mice showed that the quantum dots caused cell death and inhibition of tumour growth, respectively, with no off-target toxicity in the animals.

Finally, scientists at Huazhong University of Science and Technology developed novel magnetic coiling “microfibrebots” and used them to stem arterial bleeding in a rabbit – paving the way for a range of controllable and less invasive treatments for aneurysms and brain tumours.

The post Medical physics and biotechnology: highlights of 2024 appeared first on Physics World.

Optimization algorithm improves safety of transcranial focused ultrasound treatments

20 December 2024 at 10:45

Transcranial focused ultrasound is being developed as a potential treatment for various brain diseases and disorders. One big challenge, however, is focusing the ultrasound through the skull, which can blur, attenuate and shift the beam. To minimize these effects, researchers at Zeta Surgical have developed an algorithm that automatically determines the optimal location to place a single-element focused transducer.

For therapeutic applications – including, for example, thermal ablation, drug delivery, disruption of the blood–brain barrier and neuromodulation – the ultrasound beam must be focused onto a small spot in the brain. The resulting high acoustic pressure at this spot generates a high temperature or mechanical force to treat the targeted tissues, ideally while avoiding overheating of nearby healthy tissues.

Unfortunately, when the ultrasound beam passes through the skull, which is a complex layered structure, it is both attenuated and distorted. This decreases the acoustic pressure at the focus, defocusing the beam and shifting the focus position.

Ultrasound arrays with multiple elements can compensate for such aberrations by controlling the individual array elements. But cost constraints mean that most applications still use single-element focused transducers, for which such compensation is difficult. This can result in ineffective or even unsafe treatments. What’s needed is a method that finds the optimal position to place a single-element focused ultrasound transducer such that defocusing and focus shift are minimized.

Raahil Sha and colleagues have come up with a way to do just this, using an optimization algorithm that simulates the ultrasound field through the skull. Using the k-Wave MATLAB toolbox, the algorithm simulates ultrasound fields generated within the skull cavity with the transducer placed at different locations. It then analyses the calculated fields to quantify the defocusing and focus shift.

The algorithm starts by loading a patient CT scan, which provides information on the density, speed of sound, absorption, geometry and porosity of the skull. It then defines the centre point of the target as the origin and the centre of a single-element 0.5 MHz transducer as the initial transducer location, and determines the initial values of the normalized peak-negative pressure (PNP) and focal volume.

The algorithm then performs a series of rotations of the transducer centre, simulating the PNP and focal volume at each new location. The PNP value is used to quantify the focus shift, with a higher PNP at the focal point representing a smaller shift.

Any change in the focal position is particularly concerning as it can lead to off-target tissue disruption. As such, the algorithm first identifies transducer positions that keep the focus shift below a specified threshold. Within these confines, it then finds the location with the smallest focal volume. This is then output as the optimal location for placing the transducer. In this study, this optimal location had a normalized PNP of 0.966 (higher than the pre-set threshold of 0.95) and a focal volume 6.8% smaller than that without the skull in place.
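The two-stage selection described above — filter positions by a PNP threshold, then minimize focal volume — can be sketched in a few lines. This is a hypothetical illustration of the selection logic only; in the paper the per-position PNP and focal-volume values come from k-Wave field simulations, whereas here they are invented tuples:

```python
# Hypothetical sketch of the two-stage transducer-placement selection:
# 1) keep candidate positions whose normalized PNP meets the threshold
#    (i.e. the focus shift stays acceptably small);
# 2) among those, pick the position with the smallest focal volume.
def pick_transducer_position(candidates, pnp_threshold=0.95):
    """candidates: list of (position_id, normalized_pnp, focal_volume_mm3)."""
    admissible = [c for c in candidates if c[1] >= pnp_threshold]
    if not admissible:
        raise ValueError("no position keeps the focus shift below threshold")
    return min(admissible, key=lambda c: c[2])

# Illustrative candidate positions (not simulated values).
candidates = [
    ("A", 0.920, 30.0),   # PNP below threshold: too much focus shift
    ("B", 0.966, 35.1),   # admissible, smallest focal volume
    ("C", 0.970, 38.0),   # admissible, larger focal volume
]
best = pick_transducer_position(candidates)
print(best)  # ('B', 0.966, 35.1)
```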

Next, the team used a Zeta neuro-navigation system and a robotic arm to automatically guide a transducer to the optimal location on a head phantom and track the placement accuracy in real time. In 45 independent registration attempts, the surgical robot could position the transducer at the optimal location with a mean position error of 0.0925 mm and a mean trajectory angle error of 0.0650°. These low values indicate the potential for accurate transducer placement during treatment.

The researchers conclude that the algorithm can find the optimal transducer location to avoid large focus shift and defocusing. “With the Zeta navigation system, our algorithm can help to make transcranial focused ultrasound treatment safer and more successful,” they write.

The study is reported in Bioengineering.

The post Optimization algorithm improves safety of transcranial focused ultrasound treatments appeared first on Physics World.

Virtual patient populations enable more inclusive medical device development

16 décembre 2024 à 10:30

Medical devices are thoroughly tested before being introduced into the clinic. But traditional testing approaches do not fully account for the diversity of patient populations. This can result in the launch to market of devices that may underperform in some patient subgroups or even cause harm, with often devastating consequences.

Aiming to solve this challenge, University of Leeds spin-out adsilico is working to enable more inclusive, efficient and patient-centric device development. Launched in 2021, the company is using computational methods pioneered in academia to revolutionize the way that medical devices are developed, tested and brought to market.

Sheena Macpherson, adsilico’s CEO, talks to Tami Freeman about the potential of advanced modelling and simulation techniques to help protect all patients, and how in silico trials could revolutionize medical device development.

What procedures are required to introduce a new medical device?

Medical devices currently go through a series of testing phases before reaching the market, including bench testing, animal studies and human clinical trials. These trials aim to establish the device’s safety and efficacy in the intended patient population. However, the patient populations included in clinical trials often do not adequately represent the full diversity of patients who will ultimately use the device once it is approved.

Why does this testing often exclude large segments of the population?

Traditional clinical trials tend to underrepresent women, ethnic minorities, elderly patients and those with rare conditions. This exclusion occurs for various reasons, including restrictive eligibility criteria, lack of diversity at trial sites, socioeconomic barriers to participation, and implicit biases in trial design and recruitment.

Computational medicine pioneer Sheena Macpherson is CEO of adsilico. (Courtesy: adsilico)

As a result, the data generated from these trials may not capture important variations in device performance across different subgroups.

This lack of diversity in testing can lead to devices that perform sub-optimally or even dangerously in certain demographic groups, with potentially life-threatening device flaws going undetected until the post-market phase when a much broader patient population is exposed.

Can you describe a real-life case of insufficient testing causing harm?

A poignant example is the recent vaginal mesh scandal. Mesh implants were widely marketed to hospitals as a simple fix for pelvic organ prolapse and urinary incontinence, conditions commonly linked to childbirth. However, the devices were often sold without adequate testing.

As a result, debilitating complications went undetected until the meshes were already in widespread use. Many women experienced severe chronic pain, mesh eroding into the vagina, inability to walk or have sex, and other life-altering side effects. Removal of the mesh often required complex surgery. A 2020 UK government inquiry found that this tragedy was further compounded by an arrogant culture in medicine that dismissed women’s concerns as “women’s problems” or a natural part of aging.

This case underscores how a lack of comprehensive and inclusive testing before market release can devastate patients’ lives. It also highlights the importance of taking patients’ experiences seriously, especially those from demographics that have been historically marginalized in medicine.

How can adsilico help to address these shortfalls?

adsilico is pioneering the use of advanced computational techniques to create virtual patient populations for testing medical devices. By leveraging massive datasets and sophisticated modelling, adsilico can generate fully synthetic “virtual patients” that capture the full spectrum of anatomical diversity in humans. These populations can then be used to conduct in silico trials, where devices are tested computationally on the virtual patients before ever being used in a real human. This allows identification of potential device flaws or limitations in specific subgroups much earlier in the development process.

How do you produce these virtual populations?

Virtual patients are created using state-of-the-art generative AI techniques. First, we generate digital twins – precise computational replicas of real patients’ anatomy and physiology – from a diverse set of fully anonymized patient medical images. We then apply generative AI to computationally combine elements from different digital twins, producing a large population of new, fully synthetic virtual patients. While these AI-generated virtual patients do not replicate any individual real patient, they collectively represent the full diversity of the real patient population in a statistically accurate way.
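adsilico's actual generative models are not public, but one standard way to "computationally combine elements" of many digital twins is a statistical shape model: flatten each anatomy into a feature vector, learn the population's modes of variation with PCA, and sample new coefficient combinations to produce synthetic anatomies that match the cohort statistics without replicating any individual. The sketch below is only an illustrative assumption along those lines, using random data in place of real shape vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for digital twins: 200 patients, each reduced to a
# 30-dimensional shape vector (e.g. flattened landmark coordinates)
real_shapes = rng.normal(size=(200, 30))

# Learn the population's modes of variation via PCA (through SVD)
mean = real_shapes.mean(axis=0)
centered = real_shapes - mean
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
n_modes = 10
mode_std = s[:n_modes] / np.sqrt(len(real_shapes) - 1)

# Sample 1000 fully synthetic virtual patients from the learned
# distribution: new coefficient draws, so no real patient is replicated
coeffs = rng.normal(size=(1000, n_modes)) * mode_std
synthetic = mean + coeffs @ Vt[:n_modes]
print(synthetic.shape)  # → (1000, 30)
```

Because each synthetic patient is a fresh draw in coefficient space, the cohort-level statistics (mean shape, principal variances) are preserved while individual identities are not.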

And how are they used in device testing?

Medical devices can be virtually implanted and simulated in these diverse synthetic anatomies to study performance across a wide range of patient variations. This enables comprehensive virtual trials that would be infeasible with traditional physical or digital twin approaches. Our solution ensures medical devices are tested on representative samples before ever reaching real patients. It’s a transformative approach to making clinical trials more inclusive, insightful and efficient.

In the cardiac space, for example, we might start with MRI scans of the heart from a broad cohort. We then computationally combine elements from different patient scans to generate a large population of new virtual heart anatomies that, while not replicating any individual real patient, collectively represent the full diversity of the real patient population. Medical devices such as stents or prosthetic heart valves can then be virtually implanted in these synthetic patients, and various simulations run to study performance and safety across a wide range of anatomical variations.

How do in silico trials help patients?

The in silico approach using virtual patients helps protect all patients by allowing more comprehensive device testing before human use. It enables the identification of potential flaws or limitations that might disproportionately affect specific subgroups, which can be missed in traditional trials with limited diversity.

This methodology also provides a way to study device performance in groups that are often underrepresented in human trials, such as ethnic minorities or those with rare conditions. By computationally generating virtual patients with these characteristics, we can proactively ensure that devices will be safe and effective for these populations. This helps prevent the kinds of adverse outcomes that can occur when devices are used in populations on which they were not adequately tested.

Could in silico trials replace human trials?

In silico trials using virtual patients are intended to supplement, rather than fully replace, human clinical trials. They provide a powerful tool for both detecting potential issues early and also enhancing the evidence available preclinically, allowing refinement of designs and testing protocols before moving to human trials. This can make the human trials more targeted, efficient and inclusive.

In silico trials can also be used to study device performance in patient types that are challenging to sufficiently represent in human trials, such as those with rare conditions. Ultimately, the combination of computational and human trials provides a more comprehensive assessment of device safety and efficacy across real-world patient populations.

Will this reduce the need for studies on animals?

In silico trials have the potential to significantly reduce the use of animals in medical device testing. Currently, animal studies remain an important step for assessing certain biological responses that are difficult to comprehensively model computationally, such as immune reactions and tissue healing. However, as computational methods become increasingly sophisticated, they are able to simulate an ever-broader range of physiological processes.

By providing a more comprehensive preclinical assessment of device safety and performance, in silico trials can already help refine designs and reduce the number of animals needed in subsequent live studies.

Ultimately, could this completely eliminate animal testing?

Looking ahead, we envision a future where advanced in silico models, validated against human clinical data, can fully replicate the key insights we currently derive from animal experiments. As these technologies mature, we may indeed see a time when animal testing is no longer a necessary precursor to human trials. Getting to that point will require close collaboration between industry, academia, regulators and the public to ensure that in silico methods are developed and validated to the highest scientific and ethical standards.

At adsilico, we are committed to advancing computational approaches in order to minimize the use of animals in the device development pipeline, with the ultimate goal of replacing animal experiments altogether. We believe this is not only a scientific imperative, but an ethical obligation as we work to build a more humane and patient-centric testing paradigm.

What are the other benefits of in silico testing?

Beyond improving device safety and inclusivity, the in silico approach can significantly accelerate the development timeline. By frontloading more comprehensive testing into the preclinical phase, device manufacturers can identify and resolve issues earlier, reducing the risk of costly failures or redesigns later in the process. The ability to generate and test on large virtual populations also enables much more rapid iteration and optimization of designs.

Additionally, by reducing the need for animal testing and making human trials more targeted and efficient, in silico methods can help bring vital new devices to patients faster and at lower cost. Industry analysts project that by 2025, in silico methods could enable 30% more new devices to reach the market each year compared with the current paradigm.

Are in silico trials being employed yet?

The use of in silico methods in medicine is rapidly expanding, but still nascent in many areas. Computational approaches are increasingly used in drug discovery and development, and regulatory agencies like the US Food and Drug Administration are actively working to qualify in silico methods for use in device evaluation.

Several companies and academic groups are pioneering the use of virtual patients for in silico device trials, and initial results are promising. However, widespread adoption is still in the early stages. With growing recognition of the limitations of traditional approaches and the power of computational methods, we expect to see significant growth in the coming years. Industry projections suggest that by 2025, 50% of new devices and 25% of new drugs will incorporate in silico methods in their development.

What’s next for adsilico?

Our near-term focus is on expanding our virtual patient capabilities to encompass an even broader range of patient diversity, and to validate our methods across multiple clinical application areas in partnership with device manufacturers.

Ultimately, our mission is to ensure that every patient, regardless of their demographic or anatomical characteristics, can benefit from medical devices that are thoroughly tested and optimized for someone like them. We won’t stop until in silico methods are a standard, integral part of developing safe and effective devices for all.

The post Virtual patient populations enable more inclusive medical device development appeared first on Physics World.

The heart of the matter: how advances in medical physics impact cardiology

10 décembre 2024 à 10:35

Medical physics techniques play a key role in all areas of cardiac medicine – from the use of advanced imaging methods and computational modelling to visualize and understand heart disease, to the development and introduction of novel pacing technologies.  At a recent meeting organised by the Institute of Physics’ Medical Physics Group, experts in the field discussed some of the latest developments in cardiac imaging and therapeutics, with a focus on transitioning technologies from the benchtop to the clinic.

Monitoring metabolism

The first speaker, Damian Tyler from the University of Oxford described how hyperpolarized MRI can provide “a new window on the reactions of life”. He discussed how MRI – most commonly employed to look at the heart’s structure and function – can also be used to characterize cardiac metabolism, with metabolic MR studies helping us understand cardiovascular disease, assess drug mechanisms and guide therapeutic interventions.

In particular, Tyler is studying pyruvate, a compound that plays a central role in the body’s metabolism of glucose. He explained that 13C MR spectroscopy is ideal for studying pyruvate metabolism, but its inherent low signal-to-noise ratio makes it unsuitable for rapid in vivo imaging. To overcome this limitation, Tyler uses hyperpolarized MR, which increases the sensitivity to 13C-enriched tracers by more than 10,000 times and enables real-time visualization of normal and abnormal metabolism.

As an example, Tyler described a study using hyperpolarized 13C MR spectroscopy to examine cardiac metabolism in diabetes, which is associated with an increased risk of heart disease. Tyler and his team examined the downstream metabolites of 13C-pyruvate (such as 13C-bicarbonate and 13C-lactate) in subjects with and without type 2 diabetes. They found reduced bicarbonate levels in diabetes and increased lactate, noting that the bicarbonate to lactate ratio could provide a diagnostic marker.

Among other potential clinical applications, hyperpolarized MR could be used to detect inflammation following a heart attack, elucidate the mechanism of drugs and accelerate new drug discovery, and provide an indication of whether a patient is likely to develop cardiotoxicity from chemotherapy. It can also be employed to guide therapeutic interventions by imaging ischaemia in tissue and assess cardiac perfusion after heart attack.

“Hyperpolarized MRI offers a safe and non-invasive way to assess cardiac metabolism,” Tyler concluded. “There are a raft of potential clinical applications for this emerging technology.”

Changing the pace

Alongside the introduction of new and improved diagnostic approaches, researchers are also developing and refining treatments for cardiac disorders. One goal is to create an effective treatment for heart failure, an incurable progressive condition in which the heart can’t pump enough blood to meet the body’s needs. Current therapies can manage symptoms, but cannot treat the underlying disease or prevent progression. Ashok Chauhan from Ceryx Medical told delegates how the company’s bio-inspired pacemaker aims to address this shortfall.

In healthy hearts, Chauhan explained, the heart rate changes in response to breathing, in a mechanism called respiratory sinus arrythmia (RSA). This natural synchronization is frequently lost in patients with heart failure. Ceryx has developed a pacing technology that aims to treat heart failure by resynchronizing the heart and lungs and restoring RSA.

Heart–lung synchronization Ashok Chauhan explained how Ceryx Medical’s bio-inspired pacemaker aims to improve cardiac function in patients with heart failure.

The device works by monitoring the cardiorespiratory system and using RSA inputs to generate stimulation signals in real time. Early trials in large animals demonstrated that RSA pacing increased cardiac output and ejection fraction compared with monotonic (constant) pacing. Last month, Ceryx began the first in-human trials of its pacing technology, using an external pacemaker to assess the safety of the device.

Eliminating sex bias

Later in the day, Hannah Smith from the University of Oxford presented a fascinating talk entitled “Women’s hearts are superior and it’s killing them”.

Smith told a disturbing tale of an elderly man with chest pain, who calls an ambulance and undergoes electrocardiography (ECG) that shows he is having a heart attack. He is rushed to hospital to unblock his artery and restore cardiac function. His elderly wife also feels unwell, but her ECG only shows slight abnormality. She is sent for blood tests that eventually reveal she was also having a severe heart attack – but the delay in diagnosis led to permanent cardiac damage.

The fact is that women having heart attacks are more likely to be misdiagnosed and receive less aggressive treatment than men, Smith explained. This is due to variations in the size of the heart and differences in the distances and angles between the heart and the torso surface, which affect the ECG readings used to diagnose heart attack.

To understand the problem in more depth, Smith developed a computational tool that automatically reconstructs torso ventricular anatomy from standard clinical MR images. Her goal was to identify anatomical differences between males and females, and examine their impact on ECG measurements.

Using clinical data from the UK Biobank (around 1000 healthy men and women, and 84 women and 341 men post-heart attack), Smith modelled anatomies and correlated these with the respective ECG data. She found that the QRS complex (the signal for the heart to start contracting) was about 6 ms longer in healthy males than healthy females, attributed to the smaller heart volume in females. This is significant as it implies that the mean QRS duration would have to increase by a larger percentage in women than in men to be classified as prolonged.

She also studied the ST segment in the ECG trace, elevation of which is a key feature used to diagnose heart attack. The ST amplitude was lower in healthy females than healthy males, due to their smaller ventricles and more superior position of the heart. The calculations revealed that overweight women would need a 63% larger increase in ST amplitude to be classified as elevated than normal weight men.
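The underlying arithmetic is simple: ECG criteria use a fixed absolute ST-elevation threshold, so the relative increase required scales inversely with the baseline amplitude. The numbers below are made up for illustration (they are not the values from Smith's study), but they show how a lower baseline translates into a disproportionately larger relative change.

```python
# Fixed absolute diagnostic threshold (hypothetical value)
threshold_mv = 0.10
# Hypothetical healthy baseline ST amplitudes
baseline_male = 0.06
baseline_female = 0.04   # lower baseline, as reported for females

# Relative increase needed to cross the same absolute threshold
rel_male = (threshold_mv - baseline_male) / baseline_male        # ~0.67
rel_female = (threshold_mv - baseline_female) / baseline_female  # 1.5
extra = rel_female / rel_male - 1
print(f"{extra:.0%} larger relative increase needed")  # → 125%
```

With these toy numbers, the woman's ST amplitude must rise by 125% more, in relative terms, than the man's to be flagged as elevated; the study's 63% figure reflects the same mechanism with real anatomical data.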

Smith concluded that heart attacks are harder to see on a woman's ECG than on a man's, with differences in ventricular size, position and orientation impacting the ECG before, during and after heart attacks. Importantly, if these relationships can be elucidated and corrected for in diagnostic tools, these sex biases can be reduced, paving the way towards personalised ECG interpretation.

Prize presentations

The meeting also included a presentation from the winner of the 2023 Medical Physics Group PhD prize: Joshua Astley from the University of Sheffield, for his thesis “The role of deep learning in structural and functional lung imaging”.

Prize presentation Joshua Astley from the University of Sheffield is the winner of the 2023 Medical Physics Group PhD prize.

Shifting the focus from the heart to the lungs, Astley discussed how hyperpolarized gas MRI, using inhaled contrast agents such as 3He and 129Xe, can visualize regional lung ventilation. To improve the accuracy and speed of such lung MRI studies, he designed a deep learning system that rapidly performs MRI segmentation and automates the calculation of ventilation defect percentage via lung cavity estimates. He noted that the tool is already being used to improve workflow in clinical hyperpolarized gas MRI scans.
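Ventilation defect percentage (VDP) has a standard definition: the fraction of the lung cavity showing no ventilation signal, expressed as a percent. The sketch below illustrates that calculation on boolean masks of the kind such segmentation networks produce; it is the conventional formulation, not code from Astley's thesis.

```python
import numpy as np

def ventilation_defect_percentage(ventilation_mask, lung_cavity_mask):
    """VDP: percentage of lung-cavity voxels with no ventilation signal.
    Both inputs are boolean 3D masks, e.g. from automated segmentation."""
    cavity_voxels = lung_cavity_mask.sum()
    defect_voxels = np.logical_and(lung_cavity_mask, ~ventilation_mask).sum()
    return 100.0 * defect_voxels / cavity_voxels

# Toy example: a 512-voxel cavity with 128 unventilated voxels
cavity = np.zeros((10, 10, 10), dtype=bool)
cavity[1:9, 1:9, 1:9] = True        # 8 x 8 x 8 = 512 cavity voxels
vent = cavity.copy()
vent[1:3, 1:9, 1:9] = False         # 2 x 8 x 8 = 128 defect voxels
print(ventilation_defect_percentage(vent, cavity))  # → 25.0
```

Automating the two segmentations (ventilated lung and lung cavity) is what turns this simple ratio into a fast, reproducible clinical biomarker.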

Astley also described the use of CT ventilation imaging as a potentially lower-cost approach to visualize lung ventilation. Combining the benefits of computational modelling with deep learning, Astley and colleagues have developed a hybrid framework that generates synthetic ventilation scans from non-contrast CT images.

Quoting some "lessons learnt from my thesis", Astley concluded that artificial intelligence (AI)-based workflows enable faster computation of clinical biomarkers and better integration of functional lung MRI, and that non-contrast functional lung surrogates can reduce the cost and expand the use of functional lung imaging. He also emphasized that quantifying the uncertainty in AI approaches can improve clinicians' trust in such algorithms, and that making code open and available is key to increasing its impact.

The day rounded off with awards for the meeting’s best talk in the submitted abstracts section and the best poster presentation. The former was won by Sam Barnes from Lancaster University for his presentation on the use of electroencephalography (EEG) for diagnosis of autism spectrum disorder. The poster prize was awarded to Suchit Kumar from University College London, for his work on a graphene-based electrophysiology probe for concurrent EEG and functional MRI.

The post The heart of the matter: how advances in medical physics impact cardiology appeared first on Physics World.
