Microbeams plus radiosensitizers could optimize brain cancer treatment

By: Tami Freeman

Brain tumours are notoriously difficult to treat, resisting conventional treatments such as radiation therapy, where the deliverable dose is limited by normal tissue tolerance. To better protect healthy tissues, researchers are turning to microbeam radiation therapy (MRT), which uses spatially fractionated beams to spare normal tissue while effectively killing cancer cells.

MRT is delivered using arrays of ultrahigh-dose-rate synchrotron X-ray beams tens of microns wide (high-dose peaks) and spaced hundreds of microns apart (low-dose valleys). A research team from the Centre for Medical Radiation Physics at the University of Wollongong in Australia has now demonstrated that combining MRT with targeted radiosensitizers – such as nanoparticles or anti-cancer drugs – can further boost treatment efficacy, reporting their findings in Cancers.

“MRT is famous for its healthy tissue-sparing capabilities with good tumour control, whilst radiosensitizers are known for their ability to deliver targeted dose enhancement to cancer,” explains first author Michael Valceski. “Combining these modalities just made sense, with their synergy providing the potential for the best of both worlds.”

Enhancement effects

Valceski and colleagues combined MRT with thulium oxide nanoparticles, the chemotherapy drug methotrexate and the radiosensitizer iododeoxyuridine (IUdR). They examined the response of monolayers of rodent brain cancer cells to various therapy combinations. They also compared conventional broadbeam orthovoltage X-ray irradiation with synchrotron broadbeam X-rays and synchrotron MRT.

Synchrotron irradiations were performed on the Imaging and Medical Beamline at the ANSTO Australian Synchrotron, using ultrahigh dose rates of 74.1 Gy/s for broadbeam irradiation and 50.3 Gy/s for MRT. The peak-to-valley dose ratio (PVDR, used to characterize an MRT field) of this set-up was measured as 8.9.

Using a clonogenic assay to measure cell survival, the team observed that synchrotron-based irradiation enhanced cell killing compared with conventional irradiation at the same 5 Gy dose (for MRT this is the valley dose; the peaks experience an 8.9 times higher dose), demonstrating the increased cell-killing effect of these ultrahigh-dose-rate X-rays.
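As a quick sanity check on these numbers, the peak dose follows directly from the valley dose and the measured PVDR:

```latex
D_{\mathrm{peak}} = \mathrm{PVDR} \times D_{\mathrm{valley}} = 8.9 \times 5\ \mathrm{Gy} \approx 44.5\ \mathrm{Gy}
```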

Adding radiosensitizers further increased the impact of synchrotron broadbeam irradiation, with DNA-localized IUdR killing more cells than cytoplasm-localized nanoparticles. Methotrexate, meanwhile, halved cell survival compared with conventional irradiation.

The team observed that at 5 Gy, MRT showed equivalent cell killing to synchrotron broadbeam irradiation. Valceski explains that this demonstrates MRT’s tissue-sparing potential, by showing how MRT can maintain treatment efficacy while simultaneously protecting healthy cells.

MRT also showed enhanced cell killing when combined with radiosensitizers, with the greatest effect seen for IUdR and IUdR plus methotrexate. This local dose enhancement, attributed to the DNA localization of IUdR, could further improve the tissue-sparing capabilities of MRT by enabling a lower per-fraction dose to reduce patient exposure whilst maintaining tumour control.

Imaging valleys and peaks

To link the biological effects with the physical collimation of MRT, the researchers performed confocal microscopy (at the Fluorescence Analysis Facility in Molecular Horizons, University of Wollongong) to investigate DNA damage following treatment at 0.5 and 5 Gy. Twenty minutes after irradiation, they imaged fixed cells to visualize double-strand DNA breaks (DSBs), identified as γH2AX foci (each focus marking a nuclear DSB site).

Spatially fractionated beams Imaging DNA damage following MRT confirms that the cells’ biological responses match the beam collimation. The images show double-strand DNA breaks (green) overlaid on a nuclear counterstain (blue). (Courtesy: CC BY/Cancers 10.3390/cancers16244231)

The images verified that the cells’ biological responses corresponded with the MRT beam patterns, with the 400 µm microbeam spacing clearly seen in all treated cells, both with and without radiosensitizers.

In the 0.5 Gy images, the microbeam tracks were consistent in width, while the 5 Gy MRT tracks were wider as DNA damage spread from peaks into the valleys. This radiation roll-off was also seen with IUdR and IUdR plus methotrexate, with numerous bright foci visible in the valleys, demonstrating dose enhancement and improved cancer-killing with these radiosensitizers.

The researchers also analysed the MRT beam profiles using the γH2AX foci intensity across the images. Cells treated with radiosensitizers had broadened peaks, with the largest effect seen with the nanoparticles. As nanoparticles can be designed to target tumours, this broadening (roughly 30%) can be used to increase the radiation dose to cancer cells in nearby valleys.

“Peak broadening adds a novel benefit to radiosensitizer-enhanced MRT. The widening of the peaks in the presence of nanoparticles could potentially ‘engulf’ the entire cancer, and only the cancer, whilst normal tissues without nanoparticles retain the protection of MRT tissue sparing,” Valceski explains. “This opens up the potential for MRT radiosurgery, something our research team has previously investigated.”

Finally, the researchers used γH2AX foci data for each peak and valley to determine a biological PVDR. The biological PVDR values matched the physical PVDR of 8.9, confirming for the first time a direct relationship between the physical dose delivered and the DSBs induced in the cancer cells. They note that adding radiosensitizers generally lowered the biological PVDRs from the physical value, likely due to additional DSBs induced in the valleys.
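A biological PVDR reduces to a ratio of mean DNA damage in the peak regions to that in the valleys. The sketch below shows the arithmetic with made-up focus counts (the real values come from the paper's γH2AX image analysis), chosen so the ratio lands near the physical PVDR:

```python
import numpy as np

# Hypothetical per-nucleus gammaH2AX focus counts; the study derived
# these from foci intensity across the imaged beam tracks.
peak_foci = np.array([42, 39, 45, 41, 44])   # nuclei inside microbeam peaks
valley_foci = np.array([5, 4, 6, 5, 4])      # nuclei in the low-dose valleys

# Biological PVDR: mean damage in peaks divided by mean damage in valleys
biological_pvdr = peak_foci.mean() / valley_foci.mean()
print(f"Biological PVDR ~ {biological_pvdr:.1f}")  # ~8.8, close to the physical 8.9
```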

The next step will be to perform preclinical studies of MRT. “Trials to assess the efficacy of this multimodal therapy in treating aggressive cancers in vivo are key, especially given the theragnostic potential of nanoparticles for image-guided treatment and precision planning, as well as cancer-specific dose enhancement,” senior author Moeava Tehei tells Physics World. “Considering the radiosurgical potential of stereotactic, radiosensitizer-enhanced MRT fractions, we can foresee a revolutionary multimodal technique with curative potential in the near future.”


Novel MRI technique can quantify lung function

By: Tami Freeman

Assessing lung function is crucial for diagnosing and monitoring respiratory diseases. The most common way to do this is using spirometry, which measures the amount and speed of air that a person can inhale and exhale. Spirometry, however, is insensitive to early disease and cannot detect heterogeneity in lung function. Techniques such as chest radiography or CT provide more detailed spatial information, but are not ideal for long-term monitoring as they expose patients to ionizing radiation.

Now, a team headed up at Newcastle University in the UK has demonstrated a new lung MR imaging technique that provides quantitative and spatially localized assessment of pulmonary ventilation. The researchers also show that the MRI scans – recorded after the patient inhales a safe gas mixture – can track improvements in lung function following medication.

Although conventional MRI of the lungs is challenging, lung function can be assessed by imaging the distribution of an inhaled gas, most commonly hyperpolarized 3He or 129Xe. These gases can be expensive, however, and the magnetic preparation step requires extra equipment and manpower. Instead, project leader Pete Thelwall and colleagues are investigating 19F-MRI of inhaled perfluoropropane – an inert gas that does not require hyperpolarization to be visible in an MRI scan.

“Conventional MRI detects magnetic signals from hydrogen nuclei in water to generate images of water distribution,” Thelwall explains. “Perfluoropropane is interesting to us as we can also get an MRI signal from fluorine nuclei and visualize the distribution of inhaled perfluoropropane. We assess lung ventilation by seeing how well this MRI-visible gas moves into different parts of the lungs when it is inhaled.”

Testing the new technique

The researchers analysed 19F-MRI data from 38 healthy participants, 35 with asthma and 21 with chronic obstructive pulmonary disease (COPD), reporting their findings in Radiology. For the 19F-MRI scans, participants were asked to inhale a 79%/21% mixture of perfluoropropane and oxygen and then hold their breath. All subjects also underwent spirometry and an anatomical 1H-MRI scan, and those with respiratory disease withheld their regular bronchodilator medication before the MRI exams.

After co-registering each subject’s anatomical (1H) and ventilation (19F) images, the researchers used the perfluoropropane distribution in the images to differentiate ventilated and non-ventilated lung regions. They then calculated the ratio of non-ventilated lung to total lung volume, a measure of ventilation dysfunction known as the ventilation defect percentage (VDP).
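In code, the VDP calculation is a voxel-counting ratio over the co-registered masks. Below is a minimal sketch assuming a simple signal threshold defines "ventilated" (an illustrative criterion; the study's exact segmentation method may differ):

```python
import numpy as np

def ventilation_defect_percentage(lung_mask, f19_signal, threshold):
    """VDP = non-ventilated lung volume / total lung volume x 100.

    lung_mask  : boolean lung segmentation from the anatomical 1H image
    f19_signal : co-registered 19F perfluoropropane signal
    threshold  : assumed signal level below which a voxel counts
                 as non-ventilated
    """
    ventilated = (f19_signal > threshold) & lung_mask
    non_ventilated = lung_mask & ~ventilated
    return 100.0 * non_ventilated.sum() / lung_mask.sum()

# Toy example: a fully masked 30x30x30 "lung" with random signal
rng = np.random.default_rng(0)
lung = np.ones((30, 30, 30), dtype=bool)
signal = rng.uniform(0.0, 1.0, lung.shape)
print(f"VDP = {ventilation_defect_percentage(lung, signal, 0.1):.1f}%")
```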

Healthy subjects had a mean VDP of 1.8%, reflecting an even distribution of inhaled gas throughout their lungs and well-preserved lung function. In comparison, the patient groups showed elevated mean VDP values – 8.3% and 27.2% for those with asthma and COPD, respectively – reflecting substantial ventilation heterogeneity.

In participants with respiratory disease, the team also performed 19F-MRI after treatment with salbutamol, a common inhaler. They found that the MR images revealed changes in regional ventilation in response to this bronchodilator therapy.

Post-treatment images of patients with asthma showed an increase in lung regions containing perfluoropropane, reflecting the reversible nature of this disease. Participants with COPD generally showed less obvious changes following treatment, as expected for this less reversible disease. Bronchodilator therapy reduced the mean VDP by 33% in participants with asthma (from 8.3% to 5.6%) and by 14% in those with COPD (from 27.2% to 23.3%).

The calculated VDP values were negatively associated with standard spirometry metrics. However, the team note that some participants with asthma exhibited normal spirometry but an elevated mean VDP (6.7%) compared with healthy subjects. This finding suggests that the VDP acquired by 19F-MRI of inhaled perfluoropropane is more sensitive to subclinical disease than conventional spirometry.

Supporting lung transplants

In a separate study reported in JHLT Open, Thelwall and colleagues used dynamic 19F-MRI of inhaled perfluoropropane to visualize the function of transplanted lungs. Approximately half of lung transplant recipients experience organ rejection, known as chronic lung allograft dysfunction (CLAD), within five years of transplantation.

Early detection Lung function MRI showing areas of dysfunction in transplant recipients. (Courtesy: Newcastle University, UK)

Transplant recipients are monitored frequently using pulmonary function tests and chest X-rays. But by the time CLAD is diagnosed, irreversible lung damage may already have occurred. The team propose that 19F-MRI may find subtle early changes in lung function that could help detect rejection earlier.

The researchers studied 10 lung transplant recipients, six of whom were experiencing chronic rejection. They used a wash-in and washout technique, acquiring breath-hold 19F-MR images while the patient inhaled a perfluoropropane/oxygen mixture (wash-in acquisitions), followed by scans during breathing of room air (washout acquisitions).
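One plausible way to quantify washout kinetics (an illustrative analysis, not necessarily the exact method used in the study) is a per-region exponential fit to the 19F signal during room-air breathing, with slower decay indicating gas trapping:

```python
import numpy as np
from scipy.optimize import curve_fit

def washout(t, s0, tau):
    """Exponential washout model: signal decays with time constant tau."""
    return s0 * np.exp(-t / tau)

# Hypothetical mean 19F signal in one lung region after switching to air
t = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0])   # seconds
signal = np.array([1.0, 0.72, 0.52, 0.37, 0.27, 0.20, 0.14])

(s0, tau), _ = curve_fit(washout, t, signal, p0=(1.0, 20.0))
print(f"washout time constant ~ {tau:.0f} s")  # longer tau -> slower washout
```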

The MR images revealed quantifiable differences in regional ventilation in participants with and without CLAD. In those with chronic rejection, scans showed poorer air movement to the edges of the lungs, likely due to damage to the small airways, a typical feature of CLAD. By detecting such changes in lung function before signs of damage are seen in other tests, this imaging method could help inform patient treatment decisions to better protect the transplanted lungs from further damage.

The studies fall squarely within the field of clinical research, requiring non-standard MRI hardware to detect fluorine nuclei. But Thelwall sees a pathway towards introducing 19F-MRI in hospitals, noting that scanner manufacturers have brought systems to market that can detect nuclei other than 1H in routine diagnostic scans. Removing the requirement for hyperpolarization, combined with the lower relative cost of perfluoropropane inhalation (approximately £50 per study participant), could also help scale this method for use in the clinic.

The team is currently working on a study looking at how MRI assessment of lung function could help reduce the side effects associated with radiotherapy for lung cancer. The idea is to design a radiotherapy plan that minimizes dose to lung regions with good function, whilst maintaining effective cancer treatment.

“We are also looking at how better lung function measurements might help the development of new treatments for lung disease, by being able to see the effects of new treatments earlier and more accurately than current lung function measurements used in clinical trials,” Thelwall tells Physics World.


Magnetic particle imaging designed for the human brain

By: Tami Freeman

Magnetic particle imaging (MPI) is an emerging medical imaging modality with the potential for high sensitivity and spatial resolution. Since its introduction back in 2005, researchers have built numerous preclinical MPI systems for small-animal studies. But human-scale MPI remains an unmet challenge. Now, a team headed up at the Athinoula A Martinos Center for Biomedical Imaging has built a proof-of-concept human brain-scale MPI system and demonstrated its potential for functional neuroimaging.

MPI works by visualizing injected superparamagnetic iron oxide nanoparticles (SPIONs). SPIONs exhibit a nonlinear response to an applied magnetic field: at low fields they respond roughly linearly, but at larger field strengths, particle response saturates. MPI exploits this behaviour by creating a magnetic field gradient across the imaging space with a field-free line (FFL) in the centre. Signals are only generated by the unsaturated SPIONs inside the FFL, which can be scanned through the imaging space to map SPION distribution.
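The nonlinear SPION response is commonly modelled with the Langevin function, which is roughly linear at low field and saturates at high field; the minimal sketch below (a textbook model, not the scanner's calibration) shows the behaviour the FFL exploits:

```python
import numpy as np

def langevin(xi):
    """Langevin function L(x) = coth(x) - 1/x, using the small-argument
    limit L(x) ~ x/3 near zero to avoid division by zero."""
    xi = np.asarray(xi, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        full = 1.0 / np.tanh(xi) - 1.0 / xi
    return np.where(np.abs(xi) < 1e-6, xi / 3.0, full)

h = np.linspace(-10.0, 10.0, 9)   # dimensionless field, m*H/(k_B*T)
for field, mag in zip(h, langevin(h)):
    print(f"H = {field:6.2f}  ->  M/Ms = {mag:+.3f}")
# |M/Ms| -> 1 at large |H| (saturated particles give no MPI signal);
# only particles near H = 0, i.e. in the FFL, respond strongly.
```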

First author Eli Mattingly and colleagues propose that MPI could be of particular interest for imaging the dynamics of blood volume in the brain, as it can measure the local distribution of nanoparticles in blood without an interfering background signal.

“In the brain, the tracer stays in the blood so we get an image of blood volume distribution,” Mattingly explains. “This is an important physiological parameter to map since blood is so vital for supporting metabolism. In fact, when a brain area is used by a mental task, the local blood volume swells about 20% in response, allowing us to map functional brain activity by dynamically imaging cerebral blood volume.”

Rescaling the scanner

The researchers began by defining the parameters required to build a human brain-scale MPI system. Such a device should be able to image the head with 6 mm spatial resolution (as used in many MRI-based functional neuroimaging studies) and 5 s temporal resolution for at least 30 min. To achieve this, they rescaled their existing rodent-sized imager.

Human brain-scale MPI scanner
Proof-of-concept system The back of the MPI scanner showing the opening for the patient head. (Courtesy: Lawrence Wald)

The resulting scanner uses two opposed permanent magnets to generate the FFL, together with high-power electromagnet shift coils (inner and outer coils on each side of the head) that sweep the FFL across the head. The magnets create a gradient of 1.13 T/m, sufficient to achieve 5–6 mm resolution with high-performance SPIONs. To create 2D images, a mechanical gantry rotates the magnets and shift coils at 6 RPM, enabling imaging every 5 s.
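A common rule of thumb (an approximation, not the team's exact analysis) links MPI resolution to gradient strength: the image blurs over the distance across which the gradient traverses the field width ΔB of the particles' unsaturated region, so

```latex
\Delta x \;\approx\; \frac{\Delta B}{G}, \qquad
\Delta B \;\approx\; \Delta x \, G \;\approx\; 6\ \mathrm{mm} \times 1.13\ \mathrm{T/m} \;\approx\; 6.8\ \mathrm{mT}
```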

The MPI system also incorporates a water-cooled 26.3 kHz drive coil, which produces the oscillating magnetic field (of up to 7 mT peak amplitude) needed to drive the SPIONs in and out of saturation. A gradiometer-based receive coil fits over the head to record the SPION response.

Mattingly notes that this rescaling was far from straightforward as many parameters scale with the volume of the imaging bore. “With a bore about five times larger, the volume is about 125 times larger,” he says. “This means the power electronics require one to two orders of magnitude more power than rat-sized MPI systems, and the receive coils are simultaneously less sensitive as they become larger.”

Performance assessment

The researchers tested the scanner performance using a series of phantoms. They first evaluated spatial resolution by imaging 2.5 mm-diameter capillary tubes filled with Synomag SPIONs and spaced 5 to 9 mm apart. They reconstructed images using an inverse Radon reconstruction algorithm and a forward-model iterative reconstruction.
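Inverse Radon reconstruction here is the same operation used in classical CT: projections acquired at many rotation angles are filtered and back-projected. A generic illustration using scikit-image (a stand-in phantom, not the team's data or code; the filter_name argument assumes a recent scikit-image release):

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()                      # stand-in tracer map
angles = np.linspace(0.0, 180.0, 60, endpoint=False)

sinogram = radon(image, theta=angles)              # simulated projections
recon = iradon(sinogram, theta=angles, filter_name="ramp")

# Compare reconstruction against the ground-truth phantom
err = np.abs(recon - image).mean()
print(f"mean reconstruction error: {err:.3f}")
```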

The system demonstrated a spatial resolution of about 7 mm with inverse Radon reconstruction, improving to 5 mm with iterative reconstruction. The team notes that this resolution should be sufficient to observe changes in cerebral blood volume associated with brain function and following brain injuries.

To determine the practical detection limit, the researchers imaged Synomag samples with concentrations from 6 mg Fe/ml down to 15.6 µg Fe/ml, observing a limit of about 1 µg Fe. Based on this result, they predict that MPI should show grey matter with a signal-to-noise ratio (SNR) of roughly five and large blood vessels with an SNR of about 100 in a 5 s image. They also expect to detect changes during brain activation with a contrast-to-noise ratio of above one.

Next, they quantified the scanner’s imaging field of view using a G-shaped phantom filled with Synomag at roughly the concentration of blood. The field of view was 181 mm in diameter – sufficient to encompass most human brains. Finally, the team monitored the drive current stability over 35 min of continuous imaging. At a drive field of 4.6 mT peak amplitude, the current deviated by less than 2%. As this drift was smooth and slow, it should be straightforward to separate it from the larger signal changes expected from brain activation.

The researchers conclude that their scanner – the first human head-sized, mechanically rotating, FFL-based MPI – delivers a suitable spatial resolution, temporal resolution and sensitivity for functional human neuroimaging. And they continue to improve the device. “Currently, the group is developing hardware to enable studies such as application-specific receive coils to prepare for in vivo experiments,” says Mattingly.

At present, the scanner’s sensitivity is limited by background noise from the amplifiers. Mitigating such noise could increase sensitivity 20-fold, the team predicts, potentially providing an order of magnitude improvement over other human neuroimaging methods and enabling visualization of haemodynamic changes following brain activity.

The MPI system is described in Physics in Medicine & Biology.


Defying gravity: insights into hula hoop levitation

By: Tami Freeman

Popularized in the late 1950s as a child’s toy, the hula hoop is undergoing renewed interest as a fitness activity and performance art. But have you ever wondered how a hula hoop stays aloft against the pull of gravity?

Wonder no more. A team of researchers at New York University have investigated the forces involved as a hoop rotates around a gyrating body, aiming to explain the physics and mathematics of hula hooping.

To determine the conditions required for successful hula hoop levitation, Leif Ristroph and colleagues conducted robotic experiments with hoops twirling around various shapes – including cones, cylinders and hourglass shapes. The 3D-printed shapes had rubberized surfaces to achieve high friction with a thin, rigid plastic hoop, and were driven to gyrate by a motor. The researchers launched the hoops onto the gyrating bodies by hand and recorded the resulting motion using high-speed videography and motion tracking algorithms.

They found that successful hula hooping depends on meeting two conditions. Firstly, the hoop orbit must be synchronized with the body gyration. This requires the hoop to be launched at sufficient speed and in the same direction as the gyration, after which the outward pull of centrifugal action and the damping due to rolling friction result in stable twirling.

Shape matters Successful hula hooping requires a body type with the right slope and curvature. (Courtesy: NYU’s Applied Math Lab)

This process, however, does not necessarily keep the hoop elevated at a stable height – any perturbations could cause it to climb or fall away. The team found that maintaining hoop levitation requires the gyrating body to have a particular “body type”, including an appropriately angled or sloped surface – the “hips” – plus an hourglass-shaped profile with a sufficiently curved “waist”.
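A back-of-the-envelope force balance (a simplification, not the authors' full dynamical model) shows why a sloped surface matters. If the contact surface is tilted by an angle φ from vertical, the contact normal force N must both supply the centripetal force for a hoop of mass m orbiting at radius r and angular speed Ω, and support the hoop's weight:

```latex
N\cos\varphi \approx m\,r\,\Omega^{2}, \qquad N\sin\varphi \gtrsim m g
\quad\Longrightarrow\quad \tan\varphi \;\gtrsim\; \frac{g}{r\,\Omega^{2}}
```

A vertical cylinder (φ = 0) provides no upward force component at all, while the curved "waist" is what stabilizes the hoop's height against perturbations.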

Indeed, in the robotic experiments, an hourglass-shaped body enabled steady-state hula hooping, while the cylinders and cones failed to successfully hula hoop.

The researchers also derived dynamical models that relate the motion and shape of the hoop and body to the contact forces generated. They note that their findings can be generalized to a wide range of different shapes and types of motion, and could be used in “robotic applications for transforming motions, extracting energy from vibrations, and controlling and manipulating objects without gripping”.

“We were surprised that an activity as popular, fun and healthy as hula hooping wasn’t understood even at a basic physics level,” says Ristroph in a press statement. “As we made progress on the research, we realized that the maths and physics involved are very subtle, and the knowledge gained could be useful in inspiring engineering innovations, harvesting energy from vibrations, and improving in robotic positioners and movers used in industrial processing and manufacturing.”

The researchers present their findings in the Proceedings of the National Academy of Sciences.


Medical physics and biotechnology: highlights of 2024

By: Tami Freeman

From tumour-killing quantum dots to proton therapy firsts, this year has seen the traditional plethora of exciting advances in physics-based therapeutic and diagnostic imaging techniques, plus all manner of innovative bio-devices and biotechnologies for improving healthcare. Indeed, the Physics World Top 10 Breakthroughs for 2024 included a computational model designed to improve radiotherapy outcomes for patients with lung cancer by modelling the interaction of radiation with lung cells, as well as a method to make the skin of live mice temporarily transparent to enable optical imaging studies. Here are just a few more of the research highlights that caught our eye.

Marvellous MRI machines

This year we reported on some important developments in the field of magnetic resonance imaging (MRI) technology, not least of which was the introduction of a 0.05 T whole-body MRI scanner that can produce diagnostic quality images. The ultralow-field scanner, invented at the University of Hong Kong’s BISP Lab, operates from a standard wall power outlet and does not require shielding cages. The simplified design makes it easier to operate and significantly lower in cost than current clinical MRI systems. As such, the BISP Lab researchers hope that their scanner could help close the global gap in MRI availability.

Moving from ultralow- to ultrahigh-field instrumentation, a team headed up by David Feinberg at UC Berkeley created an ultrahigh-resolution 7 T MRI scanner for imaging the human brain. The system can generate functional brain images with 10 times better spatial resolution than current 7 T scanners, revealing features as small as 0.35 mm, as well as offering higher spatial resolution in diffusion, physiological and structural MR imaging. The researchers plan to use their new NexGen 7 T scanner to study underlying changes in brain circuitry in degenerative diseases, schizophrenia and disorders such as autism.

Meanwhile, researchers at Massachusetts Institute of Technology and Harvard University developed a portable magnetic resonance-based sensor for imaging at the bedside. The low-field single-sided MR sensor is designed for point-of-care evaluation of skeletal muscle tissue, removing the need to transport patients to a centralized MRI facility. The portable sensor, which weighs just 11 kg, uses a permanent magnet array and surface RF coil to provide low operational power and minimal shielding requirements.

Proton therapy progress

Alongside advances in diagnostic imaging, 2024 also saw a couple of firsts in the field of proton therapy. At the start of the year, OncoRay – the National Center for Radiation Research in Oncology in Dresden – launched the world’s first whole-body MRI-guided proton therapy system. The prototype device combines a horizontal proton beamline with a whole-body MRI scanner that rotates around the patient, a geometry that enables treatments both with patients lying down or in an upright position. Ultimately, the system could enable real-time MRI monitoring of patients during cancer treatments and significantly improve the targeting accuracy of proton therapy.

OncoRay’s research prototype The proton therapy beamline (left) and the opened MRI-guided proton therapy system, showing the in-beam MRI (centre) and patient couch (right). (Courtesy: UKD/Kirsten Lassig)

Also aiming to enhance proton therapy outcomes, a team at the PSI Center for Proton Therapy performed the first clinical implementation of an online daily adaptive proton therapy (DAPT) workflow. Online plan adaptation, where the patient remains on the couch throughout the replanning process, could help address uncertainties arising from anatomical changes during treatments. In five adults with tumours in rigid body regions treated using DAPT, the daily adapted plans provided target coverage to within 1.1% of the planned dose and, in over 90% of treatments, improved dose metrics to the targets and/or organs-at-risk. Importantly, the adaptive approach took just a few minutes longer than a non-adaptive treatment, remaining within the 30-min time slot allocated for a proton therapy session.

Bots and dots

Last but certainly not least, this year saw several research teams demonstrate the use of tiny devices for cancer treatment. In a study conducted at the Institute for Bioengineering of Catalonia, for instance, researchers used self-propelling nanoparticles containing radioactive iodine to shrink bladder tumours.

Cell death by dots Schematic illustration showing the role of graphene quantum dots as nanozymes for tumour catalytic therapy. (Courtesy: FHIPS)

Upon injection into the body, these “nanobots” search for and accumulate inside cancerous tissue, delivering radionuclide therapy directly to the target. Mice receiving a single dose of the nanobots experienced a 90% reduction in the size of bladder tumours compared with untreated animals.

At the Chinese Academy of Sciences’ Hefei Institutes of Physical Science, a team pioneered the use of metal-free graphene quantum dots for chemodynamic therapy. Studies in cancer cells and tumour-bearing mice showed that the quantum dots caused cell death and inhibition of tumour growth, respectively, with no off-target toxicity in the animals.

Finally, scientists at Huazhong University of Science and Technology developed novel magnetic coiling “microfibrebots” and used them to stem arterial bleeding in a rabbit – paving the way for a range of controllable and less invasive treatments for aneurysms and brain tumours.


Optimization algorithm improves safety of transcranial focused ultrasound treatments

By: Tami Freeman

Transcranial focused ultrasound is being developed as a potential treatment for various brain diseases and disorders. One big challenge, however, is focusing the ultrasound through the skull, which can blur, attenuate and shift the beam. To minimize these effects, researchers at Zeta Surgical have developed an algorithm that automatically determines the optimal location to place a single-element focused transducer.

For therapeutic applications – including, for example, thermal ablation, drug delivery, disruption of the blood–brain barrier and neuromodulation – the ultrasound beam must be focused onto a small spot in the brain. The resulting high acoustic pressure at this spot generates a high temperature or mechanical force to treat the targeted tissues, ideally while avoiding overheating of nearby healthy tissues.

Unfortunately, when the ultrasound beam passes through the skull, which is a complex layered structure, it is both attenuated and distorted. This decreases the acoustic pressure at the focus, defocusing the beam and shifting the focus position.

Ultrasound arrays with multiple elements can compensate for such aberrations by controlling the individual array elements. But cost constraints mean that most applications still use single-element focused transducers, for which such compensation is difficult. This can result in ineffective or even unsafe treatments. What’s needed is a method that finds the optimal position to place a single-element focused ultrasound transducer such that defocusing and focus shift are minimized.

Raahil Sha and colleagues have come up with a way to do just this, using an optimization algorithm that simulates the ultrasound field through the skull. Using the k-Wave MATLAB toolbox, the algorithm simulates ultrasound fields generated within the skull cavity with the transducer placed at different locations. It then analyses the calculated fields to quantify the defocusing and focus shift.

The algorithm starts by loading a patient CT scan, which provides information on the density, speed of sound, absorption, geometry and porosity of the skull. It then defines the centre point of the target as the origin and the centre of a single-element 0.5 MHz transducer as the initial transducer location, and determines the initial values of the normalized peak-negative pressure (PNP) and focal volume.

The algorithm then performs a series of rotations of the transducer centre, simulating the PNP and focal volume at each new location. The PNP value is used to quantify the focus shift, with a higher PNP at the focal point representing a smaller shift.

Any change in the focal position is particularly concerning as it can lead to off-target tissue disruption. As such, the algorithm first identifies transducer positions that keep the focus shift below a specified threshold. Within these confines, it then finds the location with the smallest focal volume. This is then output as the optimal location for placing the transducer. In this study, this optimal location had a normalized PNP of 0.966 (higher than the pre-set threshold of 0.95) and a focal volume 6.8% smaller than that without the skull in place.
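In pseudocode form, the search is a constrained grid search: keep only the transducer poses whose normalized PNP stays above the threshold, then minimize focal volume among the survivors. The sketch below substitutes a toy surrogate for the k-Wave acoustic simulation (all helper names, the analytic form and the optimum location are fictitious):

```python
import numpy as np

PNP_THRESHOLD = 0.95  # minimum acceptable normalized peak-negative pressure

def simulate_field(rotation_deg):
    """Toy surrogate for the acoustic simulation. The real workflow runs
    a k-Wave simulation through the CT-derived skull model (density,
    sound speed, absorption) for each candidate transducer pose; the
    fictitious optimum here sits at 20 degrees."""
    misalignment = abs(rotation_deg - 20.0)
    pnp = 1.0 - 0.002 * misalignment           # normalized PNP at the target
    focal_volume = 30.0 + 0.5 * misalignment   # mm^3
    return pnp, focal_volume

def optimal_transducer_rotation(candidate_rotations):
    best_rotation, best_volume = None, np.inf
    for rotation in candidate_rotations:
        pnp, focal_volume = simulate_field(rotation)
        if pnp < PNP_THRESHOLD:           # focus shift too large: reject
            continue
        if focal_volume < best_volume:    # least defocusing among survivors
            best_rotation, best_volume = rotation, focal_volume
    return best_rotation, best_volume

rotation, volume = optimal_transducer_rotation(np.arange(0.0, 45.0, 5.0))
print(f"optimal rotation: {rotation} deg, focal volume: {volume} mm^3")
```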

Next, the team used a Zeta neuro-navigation system and a robotic arm to automatically guide a transducer to the optimal location on a head phantom and track the placement accuracy in real time. In 45 independent registration attempts, the surgical robot could position the transducer at the optimal location with a mean position error of 0.0925 mm and a mean trajectory angle error of 0.0650°. These low values indicate the potential for accurate transducer placement during treatment.

The researchers conclude that the algorithm can find the optimal transducer location to avoid large focus shift and defocusing. “With the Zeta navigation system, our algorithm can help to make transcranial focused ultrasound treatment safer and more successful,” they write.

The study is reported in Bioengineering.


Virtual patient populations enable more inclusive medical device development

By: Tami Freeman

Medical devices are thoroughly tested before being introduced into the clinic. But traditional testing approaches do not fully account for the diversity of patient populations. This can result in the launch to market of devices that may underperform in some patient subgroups or even cause harm, with often devastating consequences.

Aiming to solve this challenge, University of Leeds spin-out adsilico is working to enable more inclusive, efficient and patient-centric device development. Launched in 2021, the company is using computational methods pioneered in academia to revolutionize the way that medical devices are developed, tested and brought to market.

Sheena Macpherson, adsilico’s CEO, talks to Tami Freeman about the potential of advanced modelling and simulation techniques to help protect all patients, and how in silico trials could revolutionize medical device development.

What procedures are required to introduce a new medical device?

Medical devices currently go through a series of testing phases before reaching the market, including bench testing, animal studies and human clinical trials. These trials aim to establish the device’s safety and efficacy in the intended patient population. However, the patient populations included in clinical trials often do not adequately represent the full diversity of patients who will ultimately use the device once it is approved.

Why does this testing often exclude large segments of the population?

Traditional clinical trials tend to underrepresent women, ethnic minorities, elderly patients and those with rare conditions. This exclusion occurs for various reasons, including restrictive eligibility criteria, lack of diversity at trial sites, socioeconomic barriers to participation, and implicit biases in trial design and recruitment.

Computational medicine pioneer Sheena Macpherson is CEO of adsilico. (Courtesy: adsilico)

As a result, the data generated from these trials may not capture important variations in device performance across different subgroups.

This lack of diversity in testing can lead to devices that perform sub-optimally or even dangerously in certain demographic groups, with potentially life-threatening device flaws going undetected until the post-market phase when a much broader patient population is exposed.

Can you describe a real-life case of insufficient testing causing harm?

A poignant example is the recent vaginal mesh scandal. Mesh implants were widely marketed to hospitals as a simple fix for pelvic organ prolapse and urinary incontinence, conditions commonly linked to childbirth. However, the devices were often sold without adequate testing.

As a result, debilitating complications went undetected until the meshes were already in widespread use. Many women experienced severe chronic pain, mesh eroding into the vagina, inability to walk or have sex, and other life-altering side effects. Removal of the mesh often required complex surgery. A 2020 UK government inquiry found that this tragedy was further compounded by an arrogant culture in medicine that dismissed women’s concerns as “women’s problems” or a natural part of aging.

This case underscores how a lack of comprehensive and inclusive testing before market release can devastate patients’ lives. It also highlights the importance of taking patients’ experiences seriously, especially those from demographics that have been historically marginalized in medicine.

How can adsilico help to address these shortfalls?

adsilico is pioneering the use of advanced computational techniques to create virtual patient populations for testing medical devices. By leveraging massive datasets and sophisticated modelling, adsilico can generate fully synthetic “virtual patients” that capture the full spectrum of anatomical diversity in humans. These populations can then be used to conduct in silico trials, where devices are tested computationally on the virtual patients before ever being used in a real human. This allows identification of potential device flaws or limitations in specific subgroups much earlier in the development process.

How do you produce these virtual populations?

Virtual patients are created using state-of-the-art generative AI techniques. First, we generate digital twins – precise computational replicas of real patients’ anatomy and physiology – from a diverse set of fully anonymized patient medical images. We then apply generative AI to computationally combine elements from different digital twins, producing a large population of new, fully synthetic virtual patients. While these AI-generated virtual patients do not replicate any individual real patient, they collectively represent the full diversity of the real patient population in a statistically accurate way.

And how are they used in device testing?

Medical devices can be virtually implanted and simulated in these diverse synthetic anatomies to study performance across a wide range of patient variations. This enables comprehensive virtual trials that would be infeasible with traditional physical or digital twin approaches. Our solution ensures medical devices are tested on representative samples before ever reaching real patients. It’s a transformative approach to making clinical trials more inclusive, insightful and efficient.

In the cardiac space, for example, we might start with MRI scans of the heart from a broad cohort. We then computationally combine elements from different patient scans to generate a large population of new virtual heart anatomies that, while not replicating any individual real patient, collectively represent the full diversity of the real patient population. Medical devices such as stents or prosthetic heart valves can then be virtually implanted in these synthetic patients, and various simulations run to study performance and safety across a wide range of anatomical variations.

How do in silico trials help patients?

The in silico approach using virtual patients helps protect all patients by allowing more comprehensive device testing before human use. It enables the identification of potential flaws or limitations that might disproportionately affect specific subgroups, which can be missed in traditional trials with limited diversity.

This methodology also provides a way to study device performance in groups that are often underrepresented in human trials, such as ethnic minorities or those with rare conditions. By computationally generating virtual patients with these characteristics, we can proactively ensure that devices will be safe and effective for these populations. This helps prevent the kinds of adverse outcomes that can occur when devices are used in populations on which they were not adequately tested.

Could in silico trials replace human trials?

In silico trials using virtual patients are intended to supplement, rather than fully replace, human clinical trials. They provide a powerful tool for both detecting potential issues early and also enhancing the evidence available preclinically, allowing refinement of designs and testing protocols before moving to human trials. This can make the human trials more targeted, efficient and inclusive.

In silico trials can also be used to study device performance in patient types that are challenging to sufficiently represent in human trials, such as those with rare conditions. Ultimately, the combination of computational and human trials provides a more comprehensive assessment of device safety and efficacy across real-world patient populations.

Will this reduce the need for studies on animals?

In silico trials have the potential to significantly reduce the use of animals in medical device testing. Currently, animal studies remain an important step for assessing certain biological responses that are difficult to comprehensively model computationally, such as immune reactions and tissue healing. However, as computational methods become increasingly sophisticated, they are able to simulate an ever-broader range of physiological processes.

By providing a more comprehensive preclinical assessment of device safety and performance, in silico trials can already help refine designs and reduce the number of animals needed in subsequent live studies.

Ultimately, could this completely eliminate animal testing?

Looking ahead, we envision a future where advanced in silico models, validated against human clinical data, can fully replicate the key insights we currently derive from animal experiments. As these technologies mature, we may indeed see a time when animal testing is no longer a necessary precursor to human trials. Getting to that point will require close collaboration between industry, academia, regulators and the public to ensure that in silico methods are developed and validated to the highest scientific and ethical standards.

At adsilico, we are committed to advancing computational approaches in order to minimize the use of animals in the device development pipeline, with the ultimate goal of replacing animal experiments altogether. We believe this is not only a scientific imperative, but an ethical obligation as we work to build a more humane and patient-centric testing paradigm.

What are the other benefits of in silico testing?

Beyond improving device safety and inclusivity, the in silico approach can significantly accelerate the development timeline. By frontloading more comprehensive testing into the preclinical phase, device manufacturers can identify and resolve issues earlier, reducing the risk of costly failures or redesigns later in the process. The ability to generate and test on large virtual populations also enables much more rapid iteration and optimization of designs.

Additionally, by reducing the need for animal testing and making human trials more targeted and efficient, in silico methods can help bring vital new devices to patients faster and at lower cost. Industry analysts project that by 2025, in silico methods could enable 30% more new devices to reach the market each year compared with the current paradigm.

Are in silico trials being employed yet?

The use of in silico methods in medicine is rapidly expanding, but still nascent in many areas. Computational approaches are increasingly used in drug discovery and development, and regulatory agencies like the US Food and Drug Administration are actively working to qualify in silico methods for use in device evaluation.

Several companies and academic groups are pioneering the use of virtual patients for in silico device trials, and initial results are promising. However, widespread adoption is still in the early stages. With growing recognition of the limitations of traditional approaches and the power of computational methods, we expect to see significant growth in the coming years. Industry projections suggest that by 2025, 50% of new devices and 25% of new drugs will incorporate in silico methods in their development.

What’s next for adsilico?

Our near-term focus is on expanding our virtual patient capabilities to encompass an even broader range of patient diversity, and to validate our methods across multiple clinical application areas in partnership with device manufacturers.

Ultimately, our mission is to ensure that every patient, regardless of their demographic or anatomical characteristics, can benefit from medical devices that are thoroughly tested and optimized for someone like them. We won’t stop until in silico methods are a standard, integral part of developing safe and effective devices for all.


The heart of the matter: how advances in medical physics impact cardiology

By: Tami Freeman

Medical physics techniques play a key role in all areas of cardiac medicine – from the use of advanced imaging methods and computational modelling to visualize and understand heart disease, to the development and introduction of novel pacing technologies.  At a recent meeting organised by the Institute of Physics’ Medical Physics Group, experts in the field discussed some of the latest developments in cardiac imaging and therapeutics, with a focus on transitioning technologies from the benchtop to the clinic.

Monitoring metabolism

The first speaker, Damian Tyler from the University of Oxford, described how hyperpolarized MRI can provide “a new window on the reactions of life”. He discussed how MRI – most commonly employed to look at the heart’s structure and function – can also be used to characterize cardiac metabolism, with metabolic MR studies helping us understand cardiovascular disease, assess drug mechanisms and guide therapeutic interventions.

In particular, Tyler is studying pyruvate, a compound that plays a central role in the body’s metabolism of glucose. He explained that 13C MR spectroscopy is ideal for studying pyruvate metabolism, but its inherent low signal-to-noise ratio makes it unsuitable for rapid in vivo imaging. To overcome this limitation, Tyler uses hyperpolarized MR, which increases the sensitivity to 13C-enriched tracers by more than 10,000 times and enables real-time visualization of normal and abnormal metabolism.

As an example, Tyler described a study using hyperpolarized 13C MR spectroscopy to examine cardiac metabolism in diabetes, which is associated with an increased risk of heart disease. Tyler and his team examined the downstream metabolites of 13C-pyruvate (such as 13C-bicarbonate and 13C-lactate) in subjects with and without type 2 diabetes. They found reduced bicarbonate levels in diabetes and increased lactate, noting that the bicarbonate to lactate ratio could provide a diagnostic marker.

Among other potential clinical applications, hyperpolarized MR could be used to detect inflammation following a heart attack, elucidate the mechanism of drugs and accelerate new drug discovery, and provide an indication of whether a patient is likely to develop cardiotoxicity from chemotherapy. It can also be employed to guide therapeutic interventions by imaging ischaemia in tissue and assess cardiac perfusion after heart attack.

“Hyperpolarized MRI offers a safe and non-invasive way to assess cardiac metabolism,” Tyler concluded. “There are a raft of potential clinical applications for this emerging technology.”

Changing the pace

Alongside the introduction of new and improved diagnostic approaches, researchers are also developing and refining treatments for cardiac disorders. One goal is to create an effective treatment for heart failure, an incurable progressive condition in which the heart can’t pump enough blood to meet the body’s needs. Current therapies can manage symptoms, but cannot treat the underlying disease or prevent progression. Ashok Chauhan from Ceryx Medical told delegates how the company’s bio-inspired pacemaker aims to address this shortfall.

In healthy hearts, Chauhan explained, the heart rate changes in response to breathing, in a mechanism called respiratory sinus arrhythmia (RSA). This natural synchronization is frequently lost in patients with heart failure. Ceryx has developed a pacing technology that aims to treat heart failure by resynchronizing the heart and lungs and restoring RSA.

Heart–lung synchronization Ashok Chauhan explained how Ceryx Medical’s bio-inspired pacemaker aims to improve cardiac function in patients with heart failure.

The device works by monitoring the cardiorespiratory system and using RSA inputs to generate stimulation signals in real time. Early trials in large animals demonstrated that RSA pacing increased cardiac output and ejection fraction compared with monotonic (constant) pacing. Last month, Ceryx began the first in-human trials of its pacing technology, using an external pacemaker to assess the safety of the device.

Eliminating sex bias

Later in the day, Hannah Smith from the University of Oxford presented a fascinating talk entitled “Women’s hearts are superior and it’s killing them”.

Smith told a disturbing tale of an elderly man with chest pain, who calls an ambulance and undergoes electrocardiography (ECG) that shows he is having a heart attack. He is rushed to hospital to unblock his artery and restore cardiac function. His elderly wife also feels unwell, but her ECG only shows slight abnormality. She is sent for blood tests that eventually reveal she was also having a severe heart attack – but the delay in diagnosis led to permanent cardiac damage.

The fact is that women having heart attacks are more likely to be misdiagnosed and receive less aggressive treatment than men, Smith explained. This is due to variations in the size of the heart and differences in the distances and angles between the heart and the torso surface, which affect the ECG readings used to diagnose heart attack.

To understand the problem in more depth, Smith developed a computational tool that automatically reconstructs torso ventricular anatomy from standard clinical MR images. Her goal was to identify anatomical differences between males and females, and examine their impact on ECG measurements.

Using clinical data from the UK Biobank (around 1000 healthy men and women, and 84 women and 341 men post-heart attack), Smith modelled anatomies and correlated these with the respective ECG data. She found that the QRS complex (the signal for the heart to start contracting) was about 6 ms longer in healthy males than healthy females, attributed to the smaller heart volume in females. This is significant as it implies that the mean QRS duration would have to increase by a larger percentage for women than men to be diagnosed as elevated.

She also studied the ST segment in the ECG trace, elevation of which is a key feature used to diagnose heart attack. The ST amplitude was lower in healthy females than healthy males, due to their smaller ventricles and more superior position of the heart. The calculations revealed that overweight women would need a 63% larger increase in ST amplitude to be classified as elevated than normal weight men.

Smith concluded that heart attacks are harder to see on a woman’s ECG than on a man’s, with differences in ventricular size, position and orientation impacting the ECG before, during and after heart attacks. Importantly, if these relationships can be elucidated and corrected for in diagnostic tools, these sex biases can be reduced, paving the way towards personalised ECG interpretation.

Prize presentations

The meeting also included a presentation from the winner of the 2023 Medical Physics Group PhD prize: Joshua Astley from the University of Sheffield, for his thesis “The role of deep learning in structural and functional lung imaging”.

Prize presentation Joshua Astley from the University of Sheffield is the winner of the 2023 Medical Physics Group PhD prize.

Shifting the focus from the heart to the lungs, Astley discussed how hyperpolarized gas MRI, using inhaled contrast agents such as 3He and 129Xe, can visualize regional lung ventilation. To improve the accuracy and speed of such lung MRI studies, he designed a deep learning system that rapidly performs MRI segmentation and automates the calculation of ventilation defect percentage via lung cavity estimates. He noted that the tool is already being used to improve workflow in clinical hyperpolarized gas MRI scans.

Astley also described the use of CT ventilation imaging as a potentially lower-cost approach to visualize lung ventilation. Combining the benefits of computational modelling with deep learning, Astley and colleagues have developed a hybrid framework that generates synthetic ventilation scans from non-contrast CT images.

Quoting some “lessons learnt from my thesis”, Astley concluded that artificial intelligence (AI)-based workflows enable faster computation of clinical biomarkers and better integration of functional lung MRI, and that non-contrast functional lung surrogates can reduce the cost and expand the use of functional lung imaging. He also emphasized that quantifying the uncertainty in AI approaches can improve clinicians’ trust in using such algorithms, and that making code open and available is key to increasing its impact.

The day rounded off with awards for the meeting’s best talk in the submitted abstracts section and the best poster presentation. The former was won by Sam Barnes from Lancaster University for his presentation on the use of electroencephalography (EEG) for diagnosis of autism spectrum disorder. The poster prize was awarded to Suchit Kumar from University College London, for his work on a graphene-based electrophysiology probe for concurrent EEG and functional MRI.


Mathematical model sheds light on how exercise suppresses tumour growth

By: Tami Freeman

Physical exercise plays an important role in controlling disease, including cancer, due to its effect on the human body’s immune system. A research team from the USA and India has now developed a mathematical model to quantitatively investigate the complex relationship between exercise, immune function and cancer.

Exercise is thought to suppress tumour growth by activating the body’s natural killer (NK) cells. In particular, skeletal muscle contractions drive the release of interleukin-6 (IL-6), which causes NK cells to shift from an inactive to an active state. The activated NK cells can then infiltrate and kill tumour cells. To investigate this process in more depth, the team developed a mathematical model describing the transition of an NK cell from its inactive to active state, at a rate driven by exercise-induced IL-6 levels.

“We developed this model to study how the interplay of exercise intensity and exercise duration can lead to tumour suppression and how the parameters associated with these exercise features can be tuned to get optimal suppression,” explains senior author Niraj Kumar from the University of Massachusetts Boston.

Impact of exercise intensity and duration

The model, reported in Physical Biology, is constructed from three ordinary differential equations that describe the temporal evolution of the number of inactive NK cells, active NK cells and tumour cells, as functions of the growth rates, death rates, switching rates (for NK cells) and the rate of tumour cell kill by activated NK cells.
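The general shape of such a model can be sketched as follows; the functional forms and parameter values below are illustrative assumptions, not the fitted equations from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rates(t, y, alpha0, tau):
    """Illustrative NK-cell/tumour dynamics: three populations and an
    exercise-driven activation term, following the structure described
    in the text. Every coefficient here is an assumption."""
    n_inactive, n_active, tumour = y
    activation = alpha0 * np.exp(-t / tau)             # exercise-induced IL-6
    dn_inactive = 0.1 - 0.05 * n_inactive - activation * n_inactive
    dn_active = activation * n_inactive - 0.2 * n_active
    dtumour = 0.3 * tumour - 0.5 * n_active * tumour   # growth minus NK kill
    return [dn_inactive, dn_active, dtumour]

sol = solve_ivp(rates, (0.0, 20.0), y0=[1.0, 0.0, 0.1],
                args=(2.0, 5.0), dense_output=True)    # alpha0 = 2, tau = 5 days
for day in (0, 5, 10, 15, 20):
    print(f"day {day:2d}: tumour = {sol.sol(day)[2]:.4f}")
# The tumour population dips while activation is high, then regrows as
# the IL-6 pulse decays - the non-monotonic behaviour described below.
```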

Kumar and collaborators – Jay Taylor at Northeastern University and T Bagarti at Tata Steel’s Graphene Center – first investigated how exercise intensity impacts tumour suppression. They used their model to determine the evolution over time of tumour cells for different values of α0 – a parameter that correlates with the maximum level of IL-6 and increases with increased exercise intensity.

Modelling suppression Temporal evolution of tumour cells for different values of α0 (left) and exercise time scale τ (right). (Courtesy: J Taylor et al Phys. Biol. 10.1088/1478-3975/ad899d)

Simulating tumour growth over 20 days showed that the tumour population increased non-monotonically, exhibiting a minimum population (maximum tumour suppression) at a certain critical time before increasing and then reaching a steady-state value in the long term. At all time points, the largest tumour population was seen for the no-exercise case, confirming the premise that exercise helps suppress tumour growth.

The model revealed that as the intensity of the exercise increased, the level of tumour suppression increased alongside, due to the larger number of active NK cells. In addition, greater exercise intensity sustained tumour suppression for a longer time. The researchers also observed that if the initial tumour population was closer to the steady state, the effect of exercise on tumour suppression was reduced.

Next, the team examined the effect of exercise duration by calculating tumour evolution over time for varying exercise time scales. Again, the tumour population evolved non-monotonically, with a minimum at a certain critical time and the largest population in the no-exercise case. The maximum level of tumour suppression increased with increasing exercise duration.

Finally, the researchers analysed how multiple bouts of exercise impact tumour suppression, modelling a series of alternating exercise and rest periods. The model revealed that maximum tumour suppression exhibits a threshold response to exercise frequency: below a critical frequency, which varies with exercise intensity, the maximum tumour suppression doesn't change; above it, increasing the frequency yields a corresponding increase in maximum tumour suppression.
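
To make this concrete, here is a minimal numerical sketch using the hypothetical equations above, with placeholder parameter values (not the paper's fitted values) and exercise applied as alternating bouts and rest:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder parameters, for illustration only
g, d_i, d_a = 1.0, 0.1, 0.2     # NK supply and death rates
k = 0.5                          # tumour kill rate by active NK cells
rho, K = 0.3, 1e3                # tumour growth rate and carrying capacity
alpha0 = 2.0                     # IL-6-driven switching amplitude

def switching_rate(t, period=2.0, duty=0.5):
    # Alternating exercise/rest: IL-6 (and hence the switching rate)
    # is high during a bout and zero during rest
    return alpha0 if (t % period) < duty * period else 0.0

def rhs(t, y):
    Ni, Na, T = y
    s = switching_rate(t)
    return [g - d_i * Ni - s * Ni,
            s * Ni - d_a * Na,
            rho * T * (1 - T / K) - k * Na * T]

sol = solve_ivp(rhs, (0, 20), [10.0, 0.0, 50.0], max_step=0.01)
print(f"minimum tumour population: {sol.y[2].min():.1f}")
```

Varying `period` and `alpha0` here mimics the frequency and intensity scans described above.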

Clinical potential

Overall, the model demonstrated that increasing the intensity or duration of exercise leads to greater and sustained tumour suppression. It also showed that manipulating exercise frequency and intensity within multiple exercise bouts had a pronounced effect on tumour evolution.

These results highlight the model’s potential to guide the integration of exercise into a patient’s cancer treatment programme. While still at the early development stage, the model offers valuable insight into how exercise can influence immune responses. And as Taylor points out, as more experimental data become available, the model has potential for further extension.

“In the future, the model could be adapted for clinical use by testing its predictions in human trials,” he explains. “For now, it provides a foundation for designing exercise regimens that could optimize immune function and tumour suppression in cancer patients, based on the exercise intensity and duration.”

Next, the researchers plan to extend the model to incorporate both exercise and chemotherapy dosing. They will also explore how heterogeneity in the tumour population can influence tumour suppression.

The post Mathematical model sheds light on how exercise suppresses tumour growth appeared first on Physics World.

Cascaded crystals move towards ultralow-dose X-ray imaging

By Tami Freeman
Working principle Illustration of the single-crystal (a) and cascade-connected two-crystal (b) devices under X-ray irradiation. (c) Time-resolved photocurrent responses of the two devices. (Courtesy: CC BY 4.0/ACS Cent. Sci. 10.1021/acscentsci.4c01296)

X-ray imaging plays an indispensable role in diagnosing and staging disease. Nevertheless, exposure to high doses of X-rays is potentially harmful, and much effort is focused on reducing radiation exposure while maintaining diagnostic function. With this aim, researchers at the King Abdullah University of Science and Technology (KAUST) have shown how interconnecting single-crystal devices can create an X-ray detector with an ultralow detection threshold.

The team created devices using lab-grown single crystals of methylammonium lead bromide (MAPbBr3), a perovskite material that exhibits considerable stability, minimal ion migration and a high X-ray absorption cross-section – making it ideal for X-ray detection. To improve performance further, they used cascade engineering to connect two or more crystals together in series, reporting their findings in ACS Central Science.

X-rays incident upon a semiconductor crystal detector generate a photocurrent via the creation of electron–hole pairs. When exposed to the same X-ray dose, cascade-connected crystals should exhibit the same photocurrent as a single-crystal device (as they generate equal net concentrations of electron–hole pairs). The cascade configuration, however, has a higher resistivity and should thus have a much lower dark current, improving the signal-to-noise ratio and enhancing the detection performance of the cascade device.

To test this premise, senior author Omar Mohammed and colleagues grew single crystals of MAPbBr3. They first selected four identical crystals to evaluate (SC1, SC2, SC3 and SC4), each 3 × 3 mm in area and approximately 2 mm thick. Measuring various optical and electrical properties revealed high consistency across the four samples.

“The synthesis process allows for reproducible production of MAPbBr3 single crystals, underscoring their strong potential for commercial applications,” says Mohammed.

Optimizing detector performance

Mohammed and colleagues fabricated X-ray detectors containing a single MAPbBr3 perovskite crystal (SC1) and detectors with two, three and four crystals connected in series (SC1−2, SC1−3 and SC1−4). To compare the dark currents of the devices, they measured the current from each under a constant 2 V bias voltage in the absence of X-rays. The cascade-connected SC1–2 exhibited a dark current of 7.04 nA, roughly half that generated by SC1 (13.4 nA). SC1–3 and SC1–4 reduced the dark current further, to 4 and 3 nA, respectively.
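
Since the photocurrent is expected to be essentially unchanged while the dark current falls, a back-of-the-envelope comparison of the reported 2 V dark currents – treating the signal-to-noise ratio as inversely proportional to dark current, which is a simplification – looks like this:

```python
# Dark currents reported under a 2 V bias (nA)
dark_current = {"SC1": 13.4, "SC1-2": 7.04, "SC1-3": 4.0, "SC1-4": 3.0}

# Assuming equal photocurrents, the relative SNR gain over SC1 scales
# with the ratio of dark currents (a simplifying assumption)
for device, i_dark in dark_current.items():
    print(f"{device}: ~{dark_current['SC1'] / i_dark:.1f}x the SNR of SC1")
```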

The researchers also measured the dark current for the four devices as the bias voltage changed from 0 to −10 V. They found that SC1 reached the highest dark current of 547 nA, while SC1–2, SC1–3 and SC1–4 showed progressively decreasing dark currents of 134, 90 and 50 nA, respectively. “These findings highlight the effectiveness of cascade engineering in reducing dark current levels,” Mohammed notes.

Next, the team assessed the current stability of the devices under continuous X-ray irradiation for 450 s. SC1–2 exhibited a stable current response, with a skewness value of just 0.09, while SC1, SC1–3 and SC1–4 had larger skewness values of 0.75, 0.45 and 0.76, respectively.

The researchers point out that while connecting more single crystals in series reduced the dark current, increasing the number of connections also lowered the stability of the device. The two-crystal SC1–2 represents the optimal balance.

Low-dose imaging

One key component required for low-dose X-ray imaging is a low detection threshold. The conventional single-crystal SC1 showed a detection limit of 590 nGy/s under a 2 V bias. SC1–2 decreased this limit to 100 nGy/s – the lowest of all four devices and surpassing the existing record achieved by MAPbBr3 perovskite devices under near-identical conditions.

Spatial resolution is another important consideration. To assess this, the researchers estimated the modulation transfer function (the level of original contrast maintained by the detector) for each of the four devices. They found that SC1–2 exhibited the best spatial resolution of 8.5 line pairs/mm, compared with 5.6, 5.4 and 4 line pairs/mm for SC1, SC1–3 and SC1–4, respectively.
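
As a quick conversion, a resolution of f line pairs/mm means each line pair spans 1/f mm, so the smallest resolvable feature is roughly half that:

```python
# Reported spatial resolutions (line pairs per mm)
resolution = {"SC1": 5.6, "SC1-2": 8.5, "SC1-3": 5.4, "SC1-4": 4.0}

for device, lp_per_mm in resolution.items():
    feature_um = 1000 / (2 * lp_per_mm)   # half a line-pair width, in microns
    print(f"{device}: features down to ~{feature_um:.0f} um")
```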

Optimal imaging Actual and X-ray images of a key and a raspberry with a needle obtained by the SC1 to SC1–4 devices. (Courtesy: CC BY 4.0/ACS Cent. Sci. 10.1021/acscentsci.4c01296)

Finally, the researchers performed low-dose X-ray imaging experiments using the four devices, first imaging a key at a dose rate of 3.1 μGy/s. SC1 exhibited an unclear image due to the unstable current affecting its resolution. Devices SC1–2 to SC1–4 produced clearer images of the key, with SC1–2 showing the best image contrast.

They also imaged a USB port at a dose rate of 2.3 μGy/s, a metal needle piercing a raspberry at 1.9 μGy/s and an earring at 750 nGy/s. In all cases, SC1–2 exhibited the highest quality image.

The researchers conclude that the cascade-engineered configuration represents a significant shift in low-dose X-ray detection, with potential to advance applications that require minimal radiation exposure combined with excellent image quality. They also note that the approach works with different materials, demonstrating X-ray detection using cascaded cadmium telluride (CdTe) single crystals.

Mohammed says that the team is now investigating the application of the cascade structure in other perovskite single crystals, such as FAPbI3 and MAPbI3, with the goal of reducing their detection limits. “Moreover, efforts are underway to enhance the packaging of MAPbBr3 cascade single crystals to facilitate their use in dosimeter detection for real-world applications,” he tells Physics World.

The post Cascaded crystals move towards ultralow-dose X-ray imaging appeared first on Physics World.

Nanoflake-based breath sensor delivers ultrasensitive lung cancer screening

By Tami Freeman
Gas sensing cell Schematic depicting the internal structure and the gas sensor’s working status. (Courtesy: Reprinted with permission from ACS Sensors 10.1021/acssensors.4c01298 ©2024 American Chemical Society)

Analysis of human breath can provide a non-invasive method for cancer screening or disease diagnosis. The level of isoprene in exhaled breath, for example, provides a biomarker that can indicate the presence of lung cancer. Now a research collaboration from China and Spain has used nanoflakes of indium oxide (In2O3)-based materials to create a gas sensor with the highest performance of any isoprene sensor reported to date.

For effective cancer screening or diagnosis, a gas sensor must be sensitive enough to detect the small amounts of isoprene present in breath (in the parts-per-billion (ppb) range) and able to differentiate isoprene from other exhaled compounds. The metal oxide semiconductor In2O3 is a promising candidate for isoprene sensing, but existing devices are limited by high operating temperatures and detection limits that are not low enough for breath analysis.

Detecting lung cancer SEM micrograph of the Pt@InNiOx nanoflakes. (Courtesy: Adapted from ACS Sensors 2024, DOI: 10.1021/acssensors.4c01298)

To optimize the sensing performance, the research team – led by Pingwei Liu and Qingyue Wang from Zhejiang University – developed a series of sensors made from nanoflakes of pure In2O3, nickel-doped In2O3 (InNiOx) or platinum-loaded InNiOx (Pt@InNiOx). The sensors comprise an insulating substrate with interdigitated gold/titanium electrodes, coated with a layer of roughly 10 nm-thick nanoflakes. When the sensor is exposed to isoprene, adsorption of isoprene onto the nanoflakes causes an increase in the detected electrical signal.

“The nanoflakes’ two-dimensional structure provides a relatively high surface area and pore volume compared with the bulk structure, thus promoting isoprene adsorption and enhancing electron interaction and electrical signals,” Wang explains. “This improves the sensitivity of the gas sensor.”

The researchers – also from the Second Affiliated Hospital, Zhejiang University School of Medicine and the Instituto de Catálisis y Petroleoquímica, CSIC – assessed the isoprene sensing performance of the various sensor chips. All three exhibited a linear response to isoprene concentrations ranging from 500 ppb down to the limit of detection (LOD) at an operating temperature of 200 °C. Pt@InNiOx showed a response at least four times higher than InNiOx and In2O3, as well as an exceptionally low LOD of 2 ppb, greatly outperforming any previously reported sensors.
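
The paper reports the LOD directly; as an illustration of how such a figure is commonly derived, here is a sketch using hypothetical calibration data and the widely used three-sigma convention:

```python
import numpy as np

# Hypothetical calibration data (not the published values): sensor
# response versus isoprene concentration over the linear range
conc_ppb = np.array([2.0, 10, 50, 100, 250, 500])
response = np.array([0.08, 0.4, 2.0, 4.1, 10.2, 20.5])   # arbitrary units

slope, intercept = np.polyfit(conc_ppb, response, 1)
sigma_blank = 0.02    # assumed standard deviation of the blank signal

lod_ppb = 3 * sigma_blank / slope    # 3-sigma definition of the LOD
print(f"estimated LOD: {lod_ppb:.1f} ppb")
```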

The Pt@InNiOx sensor also showed high selectivity, exhibiting 3–7 times higher response to isoprene than to other volatile organic compounds commonly found in breath. Pt@InNiOx also exhibited good repeatability over nine cycles of 500 ppb isoprene sensing.

The team next examined how humidity affects the sensors – an important factor as exhaled breath usually has a relative humidity above 65%. The InNiOx and Pt@InNiOx sensors maintained a stable current baseline in the presence of water vapour. In contrast, the In2O3 sensor showed more than a 100% baseline increase. Similarly, the isoprene sensing performance of InNiOx and Pt@InNiOx was unaffected by water vapour, while the In2O3 response decreased to less than 0.5% as relative humidity reached 80%.

The team also used simultaneous spectroscopic and electrical measurements to investigate the isoprene sensing mechanism. They found that nanoclusters of platinum in the nanoflakes play a pivotal role by catalysing the oxidation of isoprene C=C bonds, which releases electrons and triggers the isoprene-sensing process.

Clinical testing

As the performance tests indicated that Pt@InNiOx may provide an optimal sensing material for detecting ultralow levels of isoprene, the researchers integrated Pt@InNiOx nanoflakes into a portable breath sensing device. They collected exhaled breath from eight healthy individuals and five lung cancer patients, and then transferred the exhaled gases from the gas collection bags into the digital device, which displays the isoprene concentration on its screen.

The sensing device revealed that exhaled isoprene concentrations in lung cancer patients were consistently below 40 ppb, compared with more than 60 ppb in healthy individuals. As such, the device successfully distinguished individuals with lung cancer from healthy people.
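
With such a clear separation between the two groups, the screening step itself reduces to a simple threshold test; a hypothetical decision rule (the cut-off here is illustrative, chosen between the reported ranges) might be:

```python
def screen_breath(isoprene_ppb, cutoff_ppb=50.0):
    """Flag a sample for clinical follow-up if exhaled isoprene falls
    below the cut-off (hypothetical rule based on the reported <40 ppb
    patient and >60 ppb healthy ranges)."""
    return "flag for follow-up" if isoprene_ppb < cutoff_ppb else "normal"

print(screen_breath(35.0))   # flag for follow-up
print(screen_breath(72.0))   # normal
```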

“These findings underscore the effectiveness of the Pt@InNiOx sensor in real-world scenarios, validating its potential for rapid and cost-effective lung cancer diagnosis,” the researchers write. “Integrating this ultrasensitive sensing material into a portable device holds significant implications for at-home surveillance for lung cancer patients, enabling dynamic monitoring of their health status.”

Looking to future commercialization of this technology, the researchers note that this will require further research on the sensing materials and the relationship between breath isoprene levels and lung cancer. “By addressing these areas and finishing the rigorous clinical trials, breath isoprene gas sensing technology could become a transformative tool in the noninvasive detection of lung cancer, ultimately saving lives and improving healthcare,” they conclude.

“Currently, we’re cooperating with a local hospital for large-scale clinical testing and evaluating the potentials to be applied for other cancers such as prostate cancer,” Wang tells Physics World.

The researchers report their findings in ACS Sensors.

The post Nanoflake-based breath sensor delivers ultrasensitive lung cancer screening appeared first on Physics World.

From melanoma to malaria: photoacoustic device detects disease without taking a single drop of blood

By Tami Freeman

Malaria remains a serious health concern, with annual deaths rising every year since 2019 and almost half of the world’s population at risk of infection. Existing diagnostic tests are less than optimal and all rely on obtaining an invasive blood sample. Now, a research collaboration from the USA and Cameroon has demonstrated a device that can non-invasively detect this potentially deadly infection without requiring a single drop of blood.

Currently, malaria is diagnosed using optical microscopy or antigen-based rapid diagnostic tests, but both methods have low sensitivity. Polymerase chain reaction (PCR) tests are more sensitive, but still require blood sampling. The new platform – Cytophone – uses photoacoustic flow cytometry (PAFC) to rapidly identify malaria-infected red blood cells via a small probe placed on the back of the hand.

PAFC works by delivering low-energy laser pulses through the skin into a blood vessel and recording the thermoacoustic signals generated by absorbers in circulating blood. Cytophone, invented by Vladimir Zharov from the University of Arkansas for Medical Sciences, was originally developed as a universal diagnostic platform and first tested clinically for detection of cancerous melanoma cells.

“We selected melanoma because of the possibility of performing label-free detection of circulating cells using melanin as an endogenous biomarker,” explains Zharov. “This avoids the need for in vivo labelling by injecting contrast agents into blood.” For malaria diagnosis, Cytophone detects haemozoin, an iron crystal that accumulates in red blood cells infected with malaria parasites. These haemozoin biocrystals have unique magnetic and optical properties, making them a potential diagnostic target.

Photoacoustic detection Schematic of the focused ultrasound transducer array assessing a blood network. (Courtesy: Nat. Commun. 10.1038/s41467-024-53243-z)

“The similarity between melanin and haemozoin biomarkers, especially the high photoacoustic contrast above the blood background, motivated us to bring a label-free malaria test with no blood drawing to malaria-endemic areas,” Zharov tells Physics World. “To build a clinical prototype for the Cameroon study we used a similar platform and just selected a smaller laser to make the device more portable.”

The Cytophone prototype uses a 1064 nm laser with a linear beam shape and a high pulse rate to interrogate fast moving blood cells within blood vessels. Haemozoin nanocrystals in infected red blood cells absorb this light (more strongly than haemoglobin in normal red blood cells), heat up and expand, generating acoustic waves. These signals are detected by an array of 16 tiny ultrasound transducers in acoustic contact with the skin. The transducers have focal volumes oriented in a line across the vessel, which increases sensitivity and resolution, and simplifies probe navigation.

In vivo testing

Zharov and collaborators – also from the Yale School of Public Health and the University of Yaoundé I – tested the Cytophone in 30 Cameroonian adults diagnosed with uncomplicated malaria. They used data from 10 patients to optimize device performance and assess safety. They then performed a longitudinal study in the other 20 patients, who attended four or five visits over up to 37 days following antimalarial therapy, contributing 94 visits in total.

Photoacoustic waveforms and traces from infected blood cells have a particular shape and duration, and a different time delay to that of background skin signals. The team used these features to optimize signal processing algorithms with appropriate averaging, filtration and gating to identify true signals arising from infected red blood cells. As the study subjects all had dark skin with high melanin content, this time-resolved detection also helped to avoid interference from skin melanin.
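
A minimal sketch of this kind of time-gated detection, with made-up sampling parameters and a synthetic trace standing in for real transducer data:

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(0)
fs = 50e6                              # assumed sampling rate (Hz)
trace = rng.normal(0, 1, 5000)         # synthetic stand-in waveform
t = np.arange(trace.size) / fs

# Keep only the arrival-time window expected for vessel-depth signals;
# earlier arrivals (shorter acoustic path) are attributed to the skin
gate = (t > 1.0e-6) & (t < 3.0e-6)

# Detect peaks above a noise-derived threshold within the gate, echoing
# the averaging/filtration/gating steps described above
peaks, _ = find_peaks(np.abs(trace) * gate, height=4 * trace.std())
print(f"candidate infected-cell events in this trace: {peaks.size}")
```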

On visit 1 (the day of diagnosis), 19/20 patients had detectable photoacoustic signals. Following treatment, these signals consistently decreased with each visit. Cytophone-positive samples exhibited median photoacoustic peak rates of 1.73, 1.63, 1.18 and 0.74 peaks/min on visits 1–4, respectively. One participant had a positive signal on visit 5 (day 30). The results confirm that Cytophone is sensitive enough to detect low levels of parasites in infected blood.

The researchers note that Cytophone detected the most common and deadliest species of malaria parasite, as well as one infection by a less common species and two mixed infections. “That was a really exciting proof-of-concept with the first generation of this platform,” says co-lead author Sunil Parikh in a press statement. “I think one key part of the next phase is going to involve demonstrating whether or not the device can detect and distinguish between species.”

Team work The researchers from the USA and Cameroon are using photoacoustic flow cytometry to rapidly detect malaria infection. (Courtesy: Sunil Parikh)

Performance comparison

Compared with invasive microscopy-based detection, Cytophone demonstrated 95% sensitivity at the first visit and 90% sensitivity during the follow-up period, with 69% specificity and an area under the ROC curve of 0.84, suggesting excellent diagnostic performance. Cytophone also approached the diagnostic performance of standard PCR tests, with scope for further improvement.
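
The visit-1 sensitivity follows directly from the reported counts (19 of 20 microscopy-confirmed patients detected):

```python
true_positives, false_negatives = 19, 1    # visit-1 counts from the study
sensitivity = true_positives / (true_positives + false_negatives)
print(f"visit-1 sensitivity: {sensitivity:.0%}")   # 95%
```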

Staff required just 4–6 h of training to operate Cytophone, plus a few days’ experience to achieve optimal probe placement. And with minimal consumables required and the increasing affordability of lasers, the researchers estimate that the cost per malaria diagnosis will be low. The study also confirmed the safety of the Cytophone device. “Cytophone has the potential to be a breakthrough device allowing for non-invasive, rapid, label-free and safe in vivo diagnosis of malaria,” they conclude.

The researchers are now performing further malaria-related clinical studies focusing on asymptomatic individuals and children (for whom the needle-free aspect is particularly important). Simultaneously, they are continuing melanoma trials to detect early-stage disease and investigating the use of Cytophone to detect circulating blood clots in stroke patients.

“We are integrating multiple innovations to further enhance Cytophone’s sensitivity and specificity,” says Zharov. “We are also developing a cost-effective wearable Cytophone for continuous monitoring of disease progression and early warning of the risk of deadly disease.”

The study is described in Nature Communications.

The post From melanoma to malaria: photoacoustic device detects disease without taking a single drop of blood appeared first on Physics World.

First human retinal image brings sight-saving portable OCT a step closer

By Tami Freeman
First in human Image of a human retina taken with the Akepa photonic chip, showing the retinal layers and key clinical features used for disease diagnosis and monitoring. (Courtesy: Siloton)

UK health technology start-up Siloton is developing a portable optical coherence tomography (OCT) system that uses photonic integrated circuits to miniaturize a tabletop’s worth of expensive and fragile optical components onto a single coin-sized chip. In a first demonstration by a commercial organization, Siloton has now used its photonic chip technology to capture a sub-surface image of a human retina.

OCT is a non-invasive imaging technique employed as the clinical gold standard for diagnosing retinal disease. Current systems, however, are bulky and expensive and only available at hospital clinics or opticians. Siloton aims to apply its photonic chip – the optical equivalent of an electronic chip – to create a rugged, portable OCT system that patients could use to monitor disease progression in their own homes.

Compact device Siloton’s photonic chip Akepa replaces approximately 70% of the optics found in traditional OCT systems. (Courtesy: Siloton)

The image obtained using Siloton’s first-generation OCT chip, called Akepa, reveals the fine layered structure of the retina in a healthy human eye. It clearly shows layers such as the outer photoreceptor segment and the retinal pigment epithelium, which are key clinical features for diagnosing and monitoring eye diseases.

“The system imaged the part of the retina that’s responsible for all of your central vision, most of your colour vision and the fine detail that you see,” explains Alasdair Price, Siloton’s CEO. “This is the part of the eye that you really care about looking at to detect disease biomarkers for conditions like age-related macular degeneration [AMD] or various diabetic eye conditions.”

Faster and clearer

Since Siloton first demonstrated that Akepa could acquire OCT images of a retinal phantom, the company has made some major software enhancements. For example, while the system previously took 5 min to image the phantom – an impractical length of time for human imaging – the imaging time is now less than a second. The team is also exploring ways to improve image quality using artificial intelligence techniques.

Price explains that the latest image was recorded using the photonic chip in a benchtop set-up, noting that the company is about halfway through the process of miniaturizing all of the optics and electronics into a handheld binocular device.

“The electronics is all off-the-shelf, so we’re not going to focus too heavily on miniaturizing that until right at the end,” he says. “The innovative part is in miniaturizing the optics. We are very close to having it in that binocular headset now, the aim being that by early next year we will have that fully miniaturized.”

As such, the company plans to start deploying some research-only systems commercially next year. These will be handheld binocular-style devices that users hold up to their faces, complete with a base station for charging and communications. In focus groups with over 100 patients, Siloton confirmed that they prefer this binocular design over the traditional chin rest employed in full-size OCT systems.

“We were worried about that because we thought we may not be able to get the level of stability required,” says Price. “But we did further tests on the stability of the binocular system compared with the chin rest and actually found that the binoculars showed greater stability. Right now we’re still using a chin rest, so we’re hopeful that the binocular system will further improve our ability to record high-quality images.”

Siloton founding team Left to right: Alasdair Price, Euan Allen and Ben Hunt. (Courtesy: Siloton)

Expanding applications

The principal aim of Siloton’s portable OCT system is to make the diagnosis and monitoring of eye diseases – such as diabetic macular oedema, retinal vein occlusion and AMD, the leading cause of sight loss in the developed world – more affordable and accessible.

Neovascular or “wet” AMD, for example, can be treated with regular eye injections, but this requires regular OCT scans at hospital appointments, which may not be available frequently enough for effective monitoring. With an OCT system in their own homes, patients can scan themselves every few days, enabling timely treatments as soon as disease progression is detected – as well as saving hospitals substantial amounts of money.

Ongoing improvements in the “quality versus cost” of the Akepa chip have also enabled Siloton to expand its target applications beyond ophthalmology. The ability to image structures such as the optic nerve, for example, enables the use of OCT to screen for optic neuritis, a common early symptom in patients with multiple sclerosis.

The company is also working with the European Space Agency (ESA) on a project investigating spaceflight-associated neuro-ocular syndrome (SANS), a condition suffered by about 70% of astronauts and which requires regular monitoring.

“At the moment, there is an OCT system on the International Space Station. But for longer-distance space missions, things like Gateway, there won’t be room for such a large system,” Price tells Physics World. “So we’re working with ESA to look at getting our chip technology onto future space missions.”

The post First human retinal image brings sight-saving portable OCT a step closer appeared first on Physics World.

Daily adaptive proton therapy employed in the clinic for the first time

By Tami Freeman

Adaptive radiotherapy – in which a patient’s treatment is regularly replanned throughout their course of therapy – can compensate for uncertainties and anatomical changes and improve the accuracy of radiation delivery. Now, a team at the Paul Scherrer Institute’s Center for Proton Therapy has performed the first clinical implementation of an online daily adaptive proton therapy (DAPT) workflow.

Proton therapy benefits from a well-defined Bragg peak range that enables highly targeted dose delivery to a tumour while minimizing dose to nearby healthy tissues. This precision, however, also makes proton delivery extremely sensitive to anatomical changes along the beam path – arising from variations in mucus, air, muscle or fat in the body – or changes in the tumour’s position and shape.

“For cancer patients who are irradiated with protons, even small changes can have significant effects on the optimal radiation dose,” says first author Francesca Albertini in a press statement.

Online plan adaptation, where the patient remains on the couch during the replanning process, could help address the uncertainties arising from anatomical changes. But while this technique is being introduced into photon-based radiotherapy, daily online adaptation has not yet been applied to proton treatments, where it could prove even more valuable.

To address this shortfall, Albertini and colleagues developed a three-phase DAPT workflow, describing the procedure in Physics in Medicine & Biology. In the pre-treatment phase, two independent plans are created from the patient’s planning CT: a “template plan” that acts as a reference for the online optimized plan, and a “fallback plan” that can be selected on any day as a back-up if necessary.

Next, the online phase involves acquiring a daily CT before each irradiation, while the patient is on the treatment couch. For this, the researchers use an in-room CT-on-rails with a low-dose protocol. They then perform a fully automated re-optimization of the treatment plan based on the daily CT image. If the adapted plan meets the required clinical goals and passes an automated quality assurance (QA) procedure, it is used to treat the patient. If not, the fallback plan is delivered instead.

Finally, in the offline phase, the delivered dose in each fraction is recalculated retrospectively from the log files using a Monte Carlo algorithm. This step enables the team to accurately assess the dose delivered to the patient each day.
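
Condensed to its decision logic, the online phase behaves like the sketch below (all function and variable names are hypothetical stand-ins, not the PSI clinical software):

```python
def reoptimize(template_plan, daily_ct):
    # Stand-in for the automated replanning on the daily CT
    return {"name": "adapted", "meets_goals": True, "qa_passed": True}

def choose_plan(adapted_plan, fallback_plan):
    # Deliver the adapted plan only if it meets the clinical goals and
    # passes automated QA; otherwise fall back to the pre-approved plan
    if adapted_plan["meets_goals"] and adapted_plan["qa_passed"]:
        return adapted_plan
    return fallback_plan

adapted = reoptimize({"name": "template"}, daily_ct={"scan": "day-N CT"})
delivered = choose_plan(adapted, fallback_plan={"name": "fallback"})
print(f"delivering the {delivered['name']} plan")
# Offline phase: the delivered dose is recalculated from machine log
# files with a Monte Carlo algorithm (not shown here)
```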

First clinical implementation

The researchers employed their DAPT protocol in five adults with tumours in rigid body regions, such as the brain or skull base. As this study was designed to demonstrate proof-of-principle and ensure clinical safety, they specified some additional constraints: only the last few consecutive fractions of each patient’s treatment course were delivered using DAPT; the plans used standard field arrangements and safety margins; and the template and fallback plans were kept the same.

“It’s important to note that these criteria are not optimized to fully exploit the potential clinical benefits of our approach,” the researchers write. “As our implementation progresses and matures, we anticipate refining these criteria to maximize the clinical advantages offered by DAPT.”

Across the five patients, the team performed DAPT for 26 treatment fractions. In 22 of these, the online adapted plans were chosen for delivery. In three fractions, the fallback plan was chosen due to a marginal dose increase to a critical structure; in one, it was used due to a miscommunication. The team emphasize that all of the adapted plans passed the online QA steps and all agreed well with the log file-based dose calculations.

The daily adapted plans provided target coverage to within 1.1% of the planned dose and, in 92% of fractions, exhibited improved dose metrics to the targets and/or organs-at-risk (OARs). The researchers observed that a non-DAPT delivery (using the fallback plan) could have significantly increased the maximum dose to both the target and OARs. For one patient, this would have increased the dose to their brainstem by up to 10%. In contrast, the DAPT approach ensured that the OAR doses remained within the 5% threshold for all fractions.

Albertini emphasizes, however, that the main aim of this feasibility study was not to demonstrate superior plan quality with DAPT, but rather to establish that it could be implemented safely and efficiently. “The observed decrease in maximum dose to some OARs was a bonus and reinforces the potential benefits of adaptive strategies,” she tells Physics World.

Importantly, the DAPT process took just a few minutes longer than a non-adaptive session, averaging just above 23 min per fraction (including plan adaptation and assessment of clinical goals). Keeping the adaptive treatment within the typical 30-min time slot allocated for a proton therapy fraction is essential to maintain the patient workflow.

To reduce the time requirement, the team automated key workflow components, including the independent dose calculations. “Once registration between the daily and reference images is completed, all subsequent steps are automatically processed in the background, while the users are evaluating the daily structure and plan,” Albertini explains. “Once the plan is approved, all the QA has already been performed and the plan is ready to be delivered.”

Following on from this first-in-patient demonstration, the researchers now plan to use DAPT to deliver full treatments (all fractions), as well as to enable margin reduction and potentially employ more conformal beam angles. “We are currently focused on transitioning our workflow to a commercial treatment planning system and enhancing it to incorporate deformable anatomy considerations,” says Albertini.

The post Daily adaptive proton therapy employed in the clinic for the first time appeared first on Physics World.
