Harnessing the power of light for healthcare

Light has always played a central role in healthcare, enabling a wide range of tools and techniques for diagnosing and treating disease. Nick Stone from the University of Exeter is a pioneer in this field, working with technologies ranging from laser-based cancer therapies to innovative spectroscopy-based diagnostics. Stone was recently awarded the Institute of Physics’ Rosalind Franklin Medal and Prize for developing novel Raman spectroscopic tools for rapid in vivo cancer diagnosis and monitoring. Physics World’s Tami Freeman spoke with Stone about his latest research.

What is Raman spectroscopy and how does it work?

Think about how we see the sky. It is blue due to elastic (specifically Rayleigh) scattering – when an incident photon scatters off a particle without losing any energy. But in about one in a million events, photons interacting with molecules in the atmosphere will be inelastically scattered. This changes the energy of the photon as some of it is taken by the molecule to make it vibrate.

If you shine laser light on a molecule and cause it to vibrate, the photon that is scattered from that molecule will be shifted in energy by a specific amount relating to the molecule’s vibrational mode. Measuring the wavelength of this inelastically scattered light reveals which molecule it was scattered from. This is Raman spectroscopy.
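The shift Stone describes is usually quoted in wavenumbers, computed from the laser and scattered wavelengths. A minimal sketch of the conversion – the 785 nm excitation and the scattered wavelength are illustrative values chosen to land near the ~1000 cm⁻¹ band commonly seen in tissue spectra, not figures from the interview:

```python
# Raman shift in wavenumbers from laser and scattered wavelengths.
# Illustrative values: 785 nm excitation and a Stokes photon at 851.5 nm.

def raman_shift_cm1(laser_nm: float, scattered_nm: float) -> float:
    """Stokes shift = 1/lambda_laser - 1/lambda_scattered, in cm^-1."""
    return (1.0 / laser_nm - 1.0 / scattered_nm) * 1e7  # nm^-1 -> cm^-1

shift = raman_shift_cm1(785.0, 851.5)
print(f"{shift:.0f} cm^-1")  # -> 995 cm^-1
```

Each molecular vibration produces its own characteristic shift, which is why a full spectrum acts as a fingerprint.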

Because most of the time we’re working at room or body temperatures, most of what we observe is Stokes Raman scattering, in which the laser photons lose energy to the molecules. But if a molecule is already vibrating in an excited state (at higher temperature), it can give up energy and shift the laser photon to a higher energy. This anti-Stokes spectrum is much weaker, but can be very useful – as I’ll come back to later.

How are you using Raman spectroscopy for cancer diagnosis?

A cell in the body is basically a nucleus (one set of molecules) surrounded by the cytoplasm (another set of molecules). These molecules change subtly depending on the phenotype [set of observable characteristics] of the particular cell. If you have a genetic mutation, which is what drives cancer, the cell tends to change its relative expression of proteins, nucleic acids, glycogen and so on.

We can probe these molecules with light, and therefore determine their molecular composition. Cancer diagnostics involves identifying minute changes between the different compositions. Most of our work has been in tissues, but it can also be done in biofluids such as tears, blood plasma or sweat. You build up a molecular fingerprint of the tissue or cell of interest, and then you can compare those fingerprints to identify the disease.

We tend to perform measurements under a microscope and, because Raman scattering is a relatively weak effect, this requires good optical systems. We’re trying to use a single wavelength of light to probe molecules of interest and look for wavelengths that are shifted from that of the laser illumination. Technology improvements have provided holographic filters that remove the incident laser wavelength readily, and less complex systems that enable rapid measurements.

Raman spectroscopy can classify tissue samples removed in cancer surgery, for example. But can you use it to detect cancer without having to remove tissue from the patient?

Absolutely, we’ve developed probes that fit inside an endoscope for diagnosing oesophageal cancer.

Earlier in my career I worked on photodynamic therapy. We would look inside the oesophagus with an endoscope to find disease, then give the patient a phototoxic drug that would target the diseased cells. Shining light on the drug causes it to generate singlet oxygen that kills the cancer cells. But I realized that the light we were using could also be used for diagnosis.

Currently, to find this invisible disease, you have to take many, many biopsies. But our in vivo probes allow us to measure the molecular composition of the oesophageal lining using Raman spectroscopy, and determine where to take biopsies from. Oesophageal cancer has a really bad outcome once it’s diagnosed symptomatically, but if you can find the disease early you can deliver effective treatments. That’s what we’re trying to do.

Tiny but mighty (left) A Raman probe protruding from the instrument channel of an endoscope. (right) Oliver Old, consultant surgeon, passing the probe down an endoscope for a study led by the University of Exeter, with the University of Bristol and Gloucestershire Hospitals NHS Foundation Trust as partners. (Courtesy: RaPIDE Team)

The very weak Raman signal, however, causes problems. With a microscope, we can use advanced filters to remove the incident laser wavelength. But sending light down an optical fibre generates unwanted signal, and we also need to remove elastically scattered light from the oesophagus. So we had to put a filter on the end of this tiny 2 mm fibre probe. In addition, we don’t want to collect photons that have travelled a long way through the body, so we needed a confocal system. We built a really complex probe, working in collaboration with John Day at the University of Bristol – it took a long time to optimize the optics and the engineering.

Are there options for diagnosing cancer in places that can’t be accessed via an endoscope?

Yes, we have also developed a smart needle probe that’s currently in trials. We are using this to detect lymphomas – the primary cancer in lymph nodes – in the head and neck, under the armpit and in the groin.

If somebody comes forward with lumps in these areas, they usually have a swollen lymph node, which shows that something is wrong. Most often it’s following an infection and the node hasn’t gone back down in size.

This situation usually requires surgical removal of the node to decide whether cancer is present or not. Instead, we can just insert our needle probe and send light in. By examining the scattered light and measuring its fingerprint we can identify if it’s lymphoma. Indeed, we can actually see what type of cancer it is and where it has come from. 

Novel needle Nick Stone demonstrates a prototype Raman needle probe. (Courtesy: Matthew Jones Photography)

Currently, the prototype probe is quite bulky because we are trying to make it low in cost. It has to have a disposable tip, so we can use a new needle each time, and the filters and optics are all in the handpiece.

Are you working on any other projects at the moment?

As people don’t particularly want a needle stuck in them, we are now trying to understand where the photons travel if you just illuminate the body. Red and near-infrared light travel a long way through the body, so we can use near-infrared light to probe photons that have travelled many, many centimetres.

We are doing a study looking at calcifications in a very early breast cancer called ductal carcinoma in situ (DCIS) – it’s a Cancer Research UK Grand Challenge called DCIS PRECISION, and we are just moving on to the in vivo phase.

Calcifications aren’t necessarily a sign of breast cancer – they are mostly benign – but in patients with DCIS, the composition of the calcifications can show how their condition will progress. Mammographic screening is incredibly good at picking up breast cancer, but it’s also incredibly good at detecting calcifications that are not necessarily breast cancer yet. The problem is how to treat these patients, so our aim is to determine whether the calcifications are completely fine or if they require biopsy.

We are using Raman spectroscopy to understand the composition of these calcifications, which are different in patients who are likely to progress onto invasive disease. We can do this in biopsies under a microscope and are now trying to see whether it works using transillumination, where we send near-infrared light through the breast. We could use this to significantly reduce the number of biopsies, or monitor individuals with DCIS over many years.

Light can also be harnessed to treat disease, for example using photodynamic therapy as you mentioned earlier. Another approach is nanoparticle-based photothermal therapy. How does this work?

This is an area I’m really excited about. Nanoscale gold can enhance Raman signals by many orders of magnitude – it’s called surface-enhanced Raman spectroscopy. We can also “label” these nanoparticles by adding functional molecules to their surfaces. We’ve used unlabelled gold nanoparticles to enhance signals from the body and labelled gold to find things.

During that process, we also realized that we can use gold to provide heat. If you shine light on gold at its resonant frequency, it will heat up and can cause cell death. You could easily blow holes in people with a big enough laser and lots of nanoparticles – but what we want to do is more subtle. We’re decorating the tiny gold nanoparticles with a label that will tell us their temperature.

By measuring the ratio between Stokes and anti-Stokes scattering signals (which are enhanced by the gold nanoparticles), we can measure the temperature of the gold when it is in the tumour. Then, using light, we can keep the temperature at a suitable level for treatment to optimize the outcome for the patient.
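The thermometry Stone outlines follows from the Boltzmann population of the vibrationally excited state. A minimal sketch of the inversion, assuming the simple Boltzmann factor and ignoring the fourth-power frequency prefactor; the mode and intensity ratio are illustrative numbers, not measurements from the group:

```python
import math

H = 6.62607015e-34    # Planck constant, J s
KB = 1.380649e-23     # Boltzmann constant, J/K
C_CM = 2.99792458e10  # speed of light, cm/s

def temperature_from_ratio(ratio_as_over_s: float, shift_cm1: float) -> float:
    """Estimate temperature (K) from the anti-Stokes/Stokes intensity ratio
    of a Raman mode. Inverts I_aS/I_S ~ exp(-h*c*nu/(kB*T)); the
    (omega_aS/omega_S)^4 prefactor is ignored for simplicity."""
    phonon_energy = H * C_CM * shift_cm1  # vibrational quantum, J
    return phonon_energy / (KB * -math.log(ratio_as_over_s))

# Illustrative numbers: a 1000 cm^-1 mode with a measured ratio of 0.0096
t = temperature_from_ratio(0.0096, 1000.0)
print(f"{t:.0f} K")  # close to body temperature (~310 K)
```

Because the ratio depends exponentially on temperature, even modest heating of the nanoparticles produces a measurable change in the anti-Stokes signal.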

Ideally, we want to use 100 nm gold particles, but that is not something you can simply excrete through the kidneys. So we’ve spent the last five years trying to create nanoconstructs made from 5 nm gold particles that replicate the properties of 100 nm gold, but can be excreted. We haven’t demonstrated this excretion yet, but that’s the process we’re looking at.

This research is part of a project to combine diagnosis and heat treatment into one nanoparticle system – if the Raman spectra indicate cancer, you could then apply light to the nanoparticle to heat and destroy the tumour cells. Can you tell us more about this?

We’ve just completed a five-year programme called Raman Nanotheranostics. The aim is to label our nanoparticles with appropriate antibodies that will help the nanoparticles target different cancer types. This could provide signals that tell us what is or is not present and help decide how to treat the patient.

We have demonstrated the ability to perform treatments in preclinical models, control the temperature and direct the nanoparticles. We haven’t yet achieved a multiplexed approach with all the labels and antibodies that we want. But this is a key step forward and something we’re going to pursue further.

We are also trying to put labels on the gold that will enable us to measure and monitor treatment outcomes. We can use molecules that change in response to pH, or the reactive oxygen species that are present, or other factors. If you want personalized medicine, you need ways to see how the patient reacts to the treatment, how their immune system responds. There’s a whole range of things that will enable us to go beyond just diagnosis and therapy, to actually monitor the treatment and potentially apply a boost if the gold is still there.

Looking to the future, what do you see as the most promising applications of light within healthcare?

Light has always been used for diagnosis: “you look yellow, you’ve got something wrong with your liver”; “you’ve got blue-tinged lips, you must have oxygen depletion”. But it’s getting more and more advanced. I think what’s most encouraging is our ability to measure molecular changes that potentially reveal future outcomes of patients, and individualization of the patient pathway.

But the real breakthrough is what’s on our wrists. We are all walking around with devices that shine light into us – to measure heartbeat, blood oxygenation and so on. There are already Raman spectrometers of that sort of size. They’re not good enough for biological measurements yet, but it doesn’t take much of a technology step forward.

I could one day have a chip implanted in my wrist that could do all the things the gold nanoconstructs might do, and my watch could read it out. And this is just Raman – there are a whole host of approaches, such as photoacoustic imaging or optical coherence tomography. Combining different techniques together could provide greater understanding in a much less invasive way than many traditional medical methods. Light will always play a really important role in healthcare.

The post Harnessing the power of light for healthcare appeared first on Physics World.


Ultrafast PET imaging could shed light on cardiac and neurological disease

Dynamic PET imaging is an important preclinical research tool used to visualize real-time functional information in a living animal. Currently, however, the temporal resolution of small-animal PET scanners is on the order of seconds, which is too slow to image blood flow in the heart or track the brain’s neuronal activity. To remedy this, the Imaging Physics Group at the National Institutes for Quantum Science and Technology (QST) in Japan has developed an ultrasensitive small-animal PET scanner that enables sub-second dynamic imaging of a rat.

The limited temporal resolution of conventional preclinical PET scanners stems from their low sensitivity (around 10%), caused by relatively thin detection crystals (10 mm) and a short axial field-of-view (FOV). Thus the QST team built a system based on four-layer, depth-encoding detectors with a total thickness of 30 mm. The scanner has a 325.6 mm-long axial FOV, providing total-body coverage without any bed movement, while a small inner diameter of 155 mm further increases detection efficiency.

“The main application of the total-body small-animal PET (TBS-PET) scanner will be assessment of new radiopharmaceuticals, especially for cardiovascular and neurodegenerative diseases, by providing total-body rodent PET images with sub-second temporal resolution,” first author Han Gyu Kang tells Physics World. “In addition, the scanner will be used for in-beam PET imaging, and single-cell tracking, where ultrahigh sensitivity is required.”

Performance evaluation

The TBS-PET scanner contains six detector rings, each incorporating 10 depth-of-interaction (DOI) detectors. Each DOI detector comprises a four-layer zirconium-doped gadolinium oxyorthosilicate (GSOZ) crystal array (16×16 crystals per layer) and an array of multi-anode photomultiplier tubes. The team selected GSOZ crystals because they have no intrinsic radiation signal, thus enabling low activity PET imaging.

The researchers performed a series of tests to characterize the scanner performance. Measurements of a 68Ge line source at the centre of the FOV showed that the TBS-PET had an energy resolution of 18.4% and a coincidence timing resolution of 7.9 ns.

Imaging a NEMA 22Na point source revealed a peak sensitivity of 45.0% in the 250–750 keV energy window – more than four times that of commercial or laboratory small-animal PET scanners. The system exhibited a uniform spatial resolution of around 2.6 mm across the FOV, thanks to the four-layer DOI information, which effectively reduced the parallax error.

In vivo imaging

Kang and colleagues next obtained in vivo total-body PET images of healthy rats using a single bed position. Static imaging using Na18F and 18F-FDG tracers clearly visualized bone structures and glucose metabolism, respectively, of the entire rat body.

Moving to dynamic imaging, the researchers injected an 18F-FDG bolus into the tail vein of an anesthetized rat over 15 s, followed by a saline injection 15 s later. They acquired early-phase dynamic PET data every second until 27 s after injection. To enable sub-second PET imaging, they used custom-written software to subdivide the list-mode data (1 s time frame) into time frames of 0.5 s, 0.25 s and 0.1 s.
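That rebinning step amounts to histogramming event timestamps into frames of the chosen width. A hypothetical sketch on synthetic timestamps – not the team’s actual software:

```python
import numpy as np

# Synthetic list-mode timestamps (seconds after injection); the real study
# used actual coincidence events and custom-written software.
rng = np.random.default_rng(0)
timestamps = np.sort(rng.uniform(0.0, 27.0, size=100_000))

def rebin(timestamps: np.ndarray, frame_s: float, t_max: float) -> np.ndarray:
    """Counts per dynamic frame of width frame_s over [0, t_max]."""
    edges = np.arange(0.0, t_max + frame_s / 2, frame_s)
    counts, _ = np.histogram(timestamps, bins=edges)
    return counts

for frame_s in (1.0, 0.5, 0.25, 0.1):
    counts = rebin(timestamps, frame_s, 27.0)
    print(f"{frame_s} s frames: {len(counts)} bins, {counts.sum()} events")
```

The trade-off is that finer frames contain fewer counts each, which is why the scanner’s very high sensitivity is what makes 0.1 s frames usable at all.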

Dynamic PET images with a 0.5 s time frame clearly visualized the blood stream from the tail to the heart through the iliac vein and inferior vena cava for the first 2 s, after which the tracer reached the right atrium and right ventricle. At 4.0 s after injection, blood flowed from the left ventricle into the brain via the carotid arteries. The cortex and kidneys were identified 5.5 s after injection. After roughly 17.5 s, the saline peak could be identified in the time-activity curves (TACs).

At 0.25 s temporal resolution, the early-phase images visualized the first pass blood circulation of the rat heart, showing the 18F-FDG bolus flowing from the inferior vena cava to the right ventricle from 2.25 s. The tracer next circulated to the lungs via the pulmonary artery from 2.5 s, and then flowed to the left ventricle from 3.75 s.

The TACs clearly visualized the time dispersion between the right and left ventricles (1.25 s). This value can change for animals with cardiac disease, and the team plans to explore the benefit of fast temporal resolution PET for diagnosing cardiovascular and neurodegenerative diseases.

The researchers conclude that the TBS-PET scanner enables dynamic imaging with a nearly real-time frame rate, visualizing cardiac function and pulmonary circulation of a rat with 0.25 s temporal resolution, a feat that is not possible with conventional small-animal PET scanners.

“One drawback of the TBS-PET scanner is the relatively low spatial resolution of around 2.6 mm, which is limited by the relatively large crystal pitch of 2.85 mm,” says Kang. “To solve this issue, we are now developing a new small-animal PET scanner employing three-layer depth-encoding detectors with 0.8 mm crystal pitch, towards our final goal of sub-millimetre and sub-second temporal resolution PET imaging in rodent models.”

The TBS-PET scanner is described in Physics in Medicine & Biology.



Generative AI speeds medical image analysis without impacting accuracy

Artificial intelligence (AI) holds great potential for a range of data-intensive healthcare tasks: detecting cancer in diagnostic images, segmenting images for adaptive radiotherapy and perhaps one day even fully automating the radiation therapy workflow.

Now, for the first time, a team at Northwestern Medicine in Illinois has integrated a generative AI tool into a live clinical workflow to draft radiology reports on X-ray images. In routine use, the AI model increased documentation efficiency by an average of 15.5%, while maintaining diagnostic accuracy.

Medical images such as X-ray scans play a central role in diagnosing and staging disease. To interpret an X-ray, a patient’s imaging data are typically input into the hospital’s PACS (picture archiving and communication system) and sent to radiology reporting software. The radiologist then reviews and interprets the imaging and clinical data and creates a report to help guide treatment decisions.

To speed up this process, Mozziyar Etemadi and colleagues proposed that generative AI could create a draft report that radiologists could then check and edit, saving them from having to start from scratch. To enable this, the researchers built a generative AI model specifically for radiology at Northwestern, based on historical data from the 12-hospital Northwestern Medicine network.

They then integrated this AI model into the existing radiology clinical workflow, enabling it to receive data from the PACS and generate a draft AI report. Within seconds of image acquisition, this report is available within the reporting software, enabling radiologists to create a final report from the AI-generated draft.

“Radiology is a great fit [for generative AI] because the practice of radiology is inherently generative – radiologists are looking very carefully at images and then generating text to summarize what is in the image,” Etemadi tells Physics World. “This is similar, if not identical, to what generative models like ChatGPT do today. Our [AI model] is unique in that it is far more accurate than ChatGPT for this task, was developed years earlier and is thousands of times less costly.”

Clinical application

The researchers tested their AI model on radiographs obtained at Northwestern hospitals over a five-month period, reporting their findings in JAMA Network Open. They first examined the AI model’s impact on documentation efficiency for 23 960 radiographs. Unlike previous AI investigations that only used chest X-rays, this work covered all anatomies, with 18.3% of radiographs from non-chest sites (including the abdomen, pelvis, spine, and upper and lower extremities).

Use of the AI model increased report completion efficiency by 15.5% on average – reducing mean documentation time from 189.2 s to 159.8 s – with some radiologists achieving gains as high as 40%. The researchers note that this corresponds to a time saving of more than 63 h over the five months, representing a reduction from roughly 79 to 67 radiologist shifts.
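A quick arithmetic check confirms that the quoted drop in mean documentation time matches the reported efficiency gain:

```python
# The reported 15.5% efficiency gain follows from the quoted mean times.
before_s, after_s = 189.2, 159.8
gain = (before_s - after_s) / before_s
print(f"{gain:.1%}")  # -> 15.5%
```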

To assess the quality of the AI-based documentation, they investigated the rate at which addenda (used to rectify reporting errors) were made to the final reports. Addenda were required in 17 model-assisted reports and 16 non-model reports, suggesting that use of AI did not impact the quality of radiograph interpretation.

To further verify this, the team also conducted a peer review analysis – in which a second radiologist rates a report according to how well they agree with its findings and text quality – in 400 chest and 400 non-chest studies, split evenly between AI-assisted and non-assisted reports. The peer review revealed no differences in clinical accuracy or text quality between AI-assisted and non-assisted interpretations, reinforcing the radiologist’s ability to create high-quality documentation using the AI.

Rapid warning system

Finally, the researchers applied the model to flag unexpected life-threatening pathologies, such as pneumothorax (collapsed lung), using an automated prioritization system that monitors the AI-generated reports. The system exhibited a sensitivity of 72.7% and specificity of 99.9% for detecting unexpected pneumothorax. Importantly, these priority flags were generated between 21 and 45 s after study completion, compared with a median of 24.5 min for radiologist notifications.
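For reference, the sensitivity and specificity quoted here are the standard ratios over true and missed detections. A small sketch with illustrative counts – the 8-of-11 split is invented to reproduce the 72.7% figure, not taken from the study:

```python
def sensitivity(true_pos: int, false_neg: int) -> float:
    """Fraction of genuine pneumothorax cases the flag catches."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Fraction of studies without pneumothorax correctly left unflagged."""
    return true_neg / (true_neg + false_pos)

# Invented counts chosen to reproduce the quoted 72.7% sensitivity:
print(f"{sensitivity(8, 3):.1%}")  # -> 72.7%
```

The near-perfect specificity matters for a prioritization system: false alarms would quickly erode radiologists' trust in the flags.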

Etemadi notes that previous AI systems were designed to detect specific findings and output a “yes” or “no” for each disease type. The team’s new model, on the other hand, creates a full text draft containing detailed comments.

“This precise language can then be searched to make more precise and actionable alerts,” he explains. “For example, we don’t need to know if a patient has a pneumothorax if we already know they have one and it is getting better. This cannot be done with existing systems that just provide a simple yes/no response.”

The team is now working to increase the accuracy of the AI tool, to enable more subtle and rare findings, as well as expand beyond X-ray images. “We currently have CT working and are looking to expand to MRI, ultrasound, mammography, PET and more, as well as modalities beyond radiology like ophthalmology and dermatology,” says Etemadi.

The researchers conclude that their generative AI tool could help alleviate radiologist shortages, with radiologist and AI collaborating to improve clinical care delivery. They emphasize, though, that the technology won’t replace humans. “You still need a radiologist as the gold standard,” says co-author Samir Abboud in a press statement. “Our role becomes ensuring every interpretation is right for the patient.”



Wireless e-tattoos help manage mental workload

Managing one’s mental workload is a tricky balancing act that can affect cognitive performance and decision-making abilities. Too little engagement with an ongoing task can lead to boredom and mistakes; too much could cause a person to become overwhelmed.

For those performing safety-critical tasks, such as air traffic controllers or truck drivers, monitoring how hard their brain is working is even more important – lapses in focus could have serious consequences. But how can a person’s mental workload be assessed? A team at the University of Texas at Austin proposes the use of temporary face tattoos that can track when a person’s brain is working too hard.

“Technology is developing faster than human evolution. Our brain capacity cannot keep up and can easily get overloaded,” says lead author Nanshu Lu in a press statement. “There is an optimal mental workload for optimal performance, which differs from person to person.”

The traditional approach for monitoring mental workload is electroencephalography (EEG), which analyses the brain’s electrical activity. But EEG devices are wired, bulky and uncomfortable, making them impractical for real-world situations. Measurements of eye movements using electrooculography (EOG) are another option for assessing mental workload.

Lu and colleagues have developed an ultrathin wireless e-tattoo that records high-fidelity EEG and EOG signals from the forehead. The e-tattoo combines a disposable sticker-like electrode layer and a reusable battery-powered flexible printed circuit (FPC) for data acquisition and wireless transmission.

The serpentine-shaped electrodes and interconnects are made from low-cost, conductive graphite-deposited polyurethane, coated with an adhesive polymer composite to reduce contact impedance and improve skin attachment. The e-tattoo stretches and conforms to the skin, providing reliable signal acquisition, even during dynamic activities such as walking and running.

To assess the e-tattoo’s ability to record basic neural activity, the team used it to measure alpha brainwaves as a volunteer opened and closed their eyes. The e-tattoo captured neural spectra equivalent to those recorded by a commercial gel electrode-based EEG system, with comparable signal fidelity.

The researchers next tested the e-tattoo on six participants while they performed a visuospatial memory task that gradually increased in difficulty. They analysed the signals collected by the e-tattoo during the tasks, extracting EEG band powers for delta, theta, alpha, beta and gamma brainwaves, plus various EOG features.

As the task got more difficult, the participants showed higher activity in the theta and delta bands, a feature associated with increased cognitive demand. Meanwhile, activity in the alpha and beta bands decreased, indicating mental fatigue.
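The band-power features described above can be sketched with a standard Welch power-spectral-density estimate. This is a hypothetical illustration on synthetic data; the sampling rate and band edges are conventional choices, not necessarily those used in the paper:

```python
import numpy as np
from scipy.signal import welch

FS = 250  # sampling rate in Hz (assumed, not from the paper)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}  # conventional edges, Hz

def band_powers(signal: np.ndarray, fs: int = FS) -> dict:
    """Integrate a Welch PSD estimate over each EEG band."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    df = freqs[1] - freqs[0]
    return {name: float(psd[(freqs >= lo) & (freqs < hi)].sum() * df)
            for name, (lo, hi) in BANDS.items()}

rng = np.random.default_rng(1)
eeg = rng.standard_normal(FS * 10)  # 10 s of synthetic "EEG"
features = band_powers(eeg)
print(sorted(features))
```

Feature vectors like this, computed over sliding windows, are what a workload classifier would consume in real time.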

The researchers built a machine learning model to predict the level of mental workload experienced during the tasks, training it on forehead EEG and EOG features recorded by the e-tattoo. The model could reliably estimate mental workload in each of the six subjects, demonstrating the feasibility of real-time cognitive state decoding.

“Our key innovation lies in the successful decoding of mental workload using a wireless, low-power, low-noise and ultrathin EEG/EOG e-tattoo device,” the researchers write. “It addresses the unique challenges of monitoring forehead EEG and EOG, where wearability, non-obstructiveness and signal stability are critical to assessing mental workload in the real world.”

They suggest that future applications could include real-time cognitive load monitoring in pilots, operators and healthcare professionals. “We’ve long monitored workers’ physical health, tracking injuries and muscle strain,” says co-author Luis Sentis. “Now we have the ability to monitor mental strain, which hasn’t been tracked. This could fundamentally change how organizations ensure the overall well-being of their workforce.”

The e-tattoo is described in Device.



Secondary dose checks ensure safe and accurate delivery of adaptive radiotherapy

Adaptive radiotherapy, an advanced cancer treatment in which each fraction is tailored to the patient’s daily anatomy, offers the potential to maximize target conformality and minimize dose to surrounding healthy tissue. Based on daily scans – such as MR images recorded by an MR-Linac, for example – treatment plans are adjusted each day to account for anatomical changes in the tumour and surrounding healthy tissue.

Creating a new plan for every treatment fraction, however, increases the potential for errors, making fast and effective quality assurance (QA) procedures more important than ever. To meet this need, the physics team at Hospital Almater in Mexicali, Mexico, is using Elekta ONE | QA, powered by ThinkQA Secondary Dose Check (ThinkQA SDC) software, to ensure that each adaptive plan is safe and accurate before it is delivered to the patient.

Radiotherapy requires a series of QA checks prior to treatment delivery, starting with patient-specific QA, where the dose calculated by the treatment planning system is delivered to a phantom. This procedure ensures that the delivered dose distribution matches the prescribed plan. Alongside this, secondary dose checks can be performed, in which an independent algorithm verifies that the calculated dose distribution corresponds with that delivered to the actual patient anatomy.

“The secondary dose check is an independent dose calculation that uses a different algorithm to the one in the treatment planning system,” explains Alexis Cabrera Santiago, a medical physicist at Hospital Almater. “ThinkQA SDC software calculates the dose based on the patient anatomy, which is actually more realistic than using a rigid phantom, so we can compare both results and catch any differences before treatment.”

Pre-treatment verification ThinkQA SDC’s unique dose calculation method has been specifically designed for Elekta Unity. (Courtesy: Elekta)

For adaptive radiotherapy in particular, this second check is invaluable. Performing phantom-based QA following each daily imaging session is often impractical; in many cases, ThinkQA SDC can be used instead.

“Secondary dose calculation is necessary in adaptive treatments, for example using the MR-Linac, because you are changing the treatment plan for each session,” says José Alejandro Rojas‑López, who commissioned and validated ThinkQA SDC at Hospital Almater. “You are not able to shift the patient to realise patient-specific QA, so this secondary dose check is needed to analyse each treatment session.”

ThinkQA SDC’s ability to achieve patient-specific QA without shifting the patient is extremely valuable, allowing time savings while upholding the highest level of QA safety. “The AAPM TG 219 report recognises secondary dose verification as a validated alternative to patient-specific QA, especially when there is no time for traditional phantom checks in adaptive fractions,” adds Cabrera Santiago.

The optimal choice

At Hospital Almater, all external-beam radiation treatments are performed using an Elekta Unity MR-Linac (with brachytherapy employed for gynaecological cancers). This enables the hospital to offer adaptive radiotherapy for all cases, including head-and-neck, breast, prostate, rectal and lung cancers.

To ensure efficient workflow and high-quality treatments, the team turned to the ThinkQA SDC software. ThinkQA SDC received FDA 510(k) clearance in early 2024 for use with both the Unity MR-Linac and conventional Elekta linacs.

Rojas‑López (who now works at Hospital Angeles Puebla) says that the team chose ThinkQA SDC because of its user-friendly interface, ease of integration into the clinical workflow and common integrated QA platform for both CT and MR-Linac systems. The software also offers the ability to perform 3D evaluation of the entire planning target volume (PTV) and the organs at risk, making the gamma evaluation more robust.

Alexis Cabrera Santiago and José Alejandro Rojas‑López
Physics team Alexis Cabrera Santiago and José Alejandro Rojas‑López. (Courtesy: José Alejandro Rojas‑López/Hospital Almater)

Commissioning of ThinkQA SDC was fast and straightforward, Rojas‑López notes, requiring minimal data input into the software. For absolute dose calibration, the only data needed are the cryostat dose attenuation response, the output dose geometry and the CT calibration.

“This makes a difference compared with other commercial solutions where you have to introduce more information, such as MLC [multileaf collimator] leakage and MLC dosimetric leaf gap, for example,” he explains. “If you have to introduce more data for commissioning, this delays the clinical introduction of the software.”

Cabrera Santiago is now using ThinkQA SDC to provide secondary dose calculations for all radiotherapy treatments at Hospital Almater. The team has established a protocol with a 3%/2 mm gamma criterion, a tolerance limit of 95% and an action limit of 90%. He emphasizes that the software has proved robust and flexible, and provides confidence in the delivered treatment.
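The gamma criterion quoted above combines a dose-difference tolerance (3%) with a distance-to-agreement tolerance (2 mm): a point passes if some nearby evaluated dose agrees within those limits. As a rough illustration of the metric itself — not DOSIsoft's implementation — here is a minimal 1D global gamma analysis in Python; the Gaussian dose profiles are invented for the example:

```python
import numpy as np

def gamma_passing_rate(ref_dose, eval_dose, positions, dd=0.03, dta=2.0):
    """Simplified 1D global gamma analysis (3%/2 mm by default).

    ref_dose, eval_dose: dose values on the same spatial grid (Gy)
    positions: grid coordinates (mm)
    Returns the fraction of reference points with gamma <= 1.
    """
    norm = ref_dose.max()  # global normalization dose
    gammas = []
    for x_r, d_r in zip(positions, ref_dose):
        # Gamma is the minimum combined dose/distance metric over all eval points
        dist2 = ((positions - x_r) / dta) ** 2
        dose2 = ((eval_dose - d_r) / (dd * norm)) ** 2
        gammas.append(np.sqrt(np.min(dist2 + dose2)))
    return np.mean(np.asarray(gammas) <= 1.0)

# Example: a slightly perturbed Gaussian dose profile
x = np.linspace(-50, 50, 501)                     # 0.2 mm grid
ref = np.exp(-x**2 / (2 * 15**2))                 # reference plan dose
ev = 1.01 * np.exp(-(x - 0.5)**2 / (2 * 15**2))   # 1% scaling, 0.5 mm shift
rate = gamma_passing_rate(ref, ev, x)
print(f"passing rate: {rate:.1%}")                # well above a 95% tolerance
```

A clinical tool such as ThinkQA SDC performs the same comparison in 3D over the PTV and organs-at-risk, against the passing-rate tolerances described in the protocol above.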

“ThinkQA SDC lets us work with more confidence, reduces risk and saves time without losing control over the patient’s safety,” he says. “It checks that the plan is correct, catches issues before treatment and helps us find any problems like set-up errors, contouring mistakes and planning issues.”

The software integrates smoothly into the Elekta ONE adaptive workflow, providing reliable results without slowing down the clinical workflow. “In our institution, we set up ThinkQA SDC so that it automatically receives the new plan, runs the check, compares it with the original plan and creates a report – all in around two minutes,” says Cabrera Santiago. “This saves us a lot of time and removes the need to do everything manually.”

A case in point

As an example of ThinkQA SDC’s power to ease the treatment workflow, Rojas‑López describes a paediatric brain tumour case at Hospital Almater. The young patient needed sedation during their treatment, requiring the physics team to optimize the treatment time for the entire adaptive radiotherapy workflow. “ThinkQA SDC served to analyse, in a fast mode, the treatment plan QA for each session. The measurements were reliable, enabling us to deliver all of the treatment sessions without any delay,” he explains.

Indeed, the ability to use secondary dose checks for each treatment fraction provides time advantages for the entire clinical workflow over phantom-based pre-treatment QA. “Time in the bunker is very expensive,” Rojas‑López points out. “If you reduce the time required for QA, you can use the bunker for patient treatments instead and treat more patients during the clinical time. Secondary dose check can optimize the workflow in the entire department.”

Importantly, in a recent study comparing patient-specific QA measurements using Sun Nuclear’s ArcCheck with ThinkQA SDC calculations, Rojas‑López and colleagues confirmed that the two techniques provided comparable results, with very similar gamma passing rates. As such, they are working to reduce phantom measurements and, in most cases, replace them with a secondary dose check using ThinkQA SDC.

The team at Hospital Almater concurs, says Rojas‑López, that ThinkQA SDC provides a reliable tool for evaluating radiation treatments, including the first fraction and all of the adaptive sessions. “You can use it for all anatomical sites, with reliable and confident results,” he notes. “And you can reduce the need for measurements using another patient-specific QA tool.”

“I think that any centre doing adaptive radiotherapy should seriously consider using a tool like ThinkQA SDC,” adds Cabrera Santiago.

*ThinkQA is manufactured by DOSIsoft S.A. and distributed by Elekta.

The post Secondary dose checks ensure safe and accurate delivery of adaptive radiotherapy appeared first on Physics World.


Miniaturized pixel detector characterizes radiation quality in clinical proton fields

Experimental setups for phantom measurements
Experimental setup Top: schematic and photo of the setup for measurements behind a homogeneous phantom. Bottom: IMPT treatment plan for the head phantom (left); the detector sensor position (middle, sensor thickness not to scale); and the setup for measurements behind the phantom (right). (Courtesy: Phys. Med. Biol. 10.1088/1361-6560/adcaf9)

Proton therapy is a highly effective and conformal cancer treatment. Proton beams deposit most of their energy at a specific depth – the Bragg peak – and then stop, enabling proton treatments to destroy tumour cells while sparing surrounding normal tissue. To further optimize the clinical treatment planning process, there’s recently been increased interest in considering the radiation quality, quantified by the proton linear energy transfer (LET).

LET – defined as the mean energy deposited by a charged particle over a given distance – increases towards the end of the proton range. Incorporating LET as an optimization parameter could better exploit the radiobiological properties of protons, by reducing LET in healthy tissue, while maintaining or increasing it within the target volume. This approach, however, requires a method for experimental verification of proton LET distributions and patient-specific quality assurance in terms of proton LET.
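The quantity usually reported in such work is the dose-averaged LET (LETd), which weights each LET value by the dose deposited at that LET: LETd = Σᵢ dᵢLᵢ / Σᵢ dᵢ. A minimal numerical illustration — the spectrum values below are invented, not taken from the study:

```python
import numpy as np

# Dose-averaged LET: weight each LET bin by the dose it deposits.
# Illustrative (made-up) spectrum: LET bins in keV/µm and the
# fraction of the total dose deposited in each bin.
let_bins = np.array([0.5, 1.0, 2.0, 5.0, 10.0])   # keV/µm
dose = np.array([0.10, 0.30, 0.35, 0.20, 0.05])   # dose fractions

let_d = np.sum(dose * let_bins) / np.sum(dose)
print(f"dose-averaged LET = {let_d:.2f} keV/µm")   # → 2.55 keV/µm
```

A detector such as Timepix3 measures this spectrum event by event, so LETd can be computed directly from the recorded energy depositions.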

To meet this need, researchers at the Institute of Nuclear Physics, Polish Academy of Sciences have used the miniaturized semiconductor pixel detector Timepix3 to perform LET characterization of intensity-modulated proton therapy (IMPT) plans in homogeneous and heterogeneous phantoms. They report their findings in Physics in Medicine & Biology.

Experimental validation

First author Paulina Stasica-Dudek and colleagues performed a series of experiments in a gantry treatment room at the Cyclotron Centre Bronowice (CCB), a proton therapy facility equipped with a proton cyclotron accelerator and pencil-beam scanning system that provides IMPT for up to 50 cancer patients per day.

The MiniPIX Timepix3 is a radiation imaging pixel detector based on the Timepix3 chip developed at CERN within the Medipix collaboration (provided commercially by Advacam). It provides quasi-continuous single-particle tracking, delivering particle-type recognition and spectral information across a wide range of radiation environments.

For this study, the team used a Timepix3 detector with a 300 µm-thick silicon sensor operated as a miniaturized online radiation camera. To overcome the problem of detector saturation in the relatively high clinical beam currents, the team developed a pencil-beam scanning method with the beam current reduced to the picoampere (pA) level.

The researchers used Timepix3 to measure the deposited energy and LET spectra for spread-out Bragg peak (SOBP) and IMPT plans delivered to a homogeneous water-equivalent slab phantom, with each plan energy layer irradiated and measured separately. They also performed measurements on an IMPT plan delivered to a heterogeneous head phantom. For each scenario, they used a Monte Carlo (MC) code to simulate the corresponding spectra of deposited energy and LET for comparison.

The team first performed a series of experiments using a homogeneous phantom irradiated with various fields, mimicking patient-specific quality assurance procedures. The measured and simulated dose-averaged LET (LETd) and LET spectra agreed to within a few percent, demonstrating proper calibration of the measurement methodology.

The researchers also performed an end-to-end test in a heterogeneous CIRS head phantom, delivering a single field of an IMPT plan to a central 4 cm-diameter target volume in 13 energy layers (96.57–140.31 MeV) and 315 spots.

Energy deposition and LET spectra for an IMPT plan delivered to a head phantom
End-to-end testing Energy deposition (left) and LET in water (right) spectra for an IMPT plan measured in the CIRS head phantom obtained based on measurements (blue) and MC simulations (orange). The vertical lines indicate LETd values. (Courtesy: Phys. Med. Biol. 10.1088/1361-6560/adcaf9)

For head phantom measurements, the peak positions for deposited energy and LET spectra obtained based on experiment and simulation agreed within the error bars, with LETd values of about 1.47 and 1.46 keV/µm, respectively. The mean LETd values derived from MC simulation and measurement differed on average by 5.1% for individual energy layers.

Clinical translation

The researchers report that implementing the proposed LET measurement scheme using Timepix3 in a clinical setting requires irradiating IMPT plans with a reduced beam current (at the pA level). While they successfully conducted LET measurements at low beam currents in the accelerator’s research mode, pencil-beam scanning at pA-level currents is not currently available in the commercial clinical or quality assurance modes. Therefore, they note that translating the proposed approach into clinical practice would require vendors to upgrade the beam delivery system to enable beam monitoring at low beam currents.

“The presented results demonstrate the feasibility of the Timepix3 detector to validate LET computations in IMPT fields and perform patient-specific quality assurance in terms of LET. This will support the implementation of LET in treatment planning, which will ultimately increase the effectiveness of the treatment,” Stasica-Dudek and colleagues write. “Given the compact design and commercial availability of the Timepix3 detector, it holds promise for broad application across proton therapy centres.”

The post Miniaturized pixel detector characterizes radiation quality in clinical proton fields appeared first on Physics World.


Retinal stimulation reveals colour never before seen by the human eye

A new retinal stimulation technique called Oz enabled volunteers to see colours that lie beyond the natural range of human vision. Developed by researchers at UC Berkeley, Oz works by stimulating individual cone cells in the retina with targeted microdoses of laser light, while compensating for the eye’s motion.

Colour vision is enabled by cone cells in the retina. Most humans have three types of cone cells, known as L, M and S (long, medium and short), which respond to different wavelengths of visible light. During natural human vision, the spectral distribution of light reaching these cone cells determines the colours that we see.

Spectral sensitivity curves
Spectral sensitivity curves The response function of M cone cells overlaps completely with those of L and S cones. (Courtesy: Ben Rudiak-Gould)

Some colours, however, simply cannot be seen. The spectral sensitivity curves of the three cone types overlap – in particular, there is no wavelength of light that stimulates only the M cone cells without stimulating nearby L (and sometimes also S) cones as well.

The Oz approach, however, is fundamentally different. Rather than relying on the spectral distribution of the light, Oz controls colour perception by shaping the spatial distribution of light on the retina.

Describing the technique in Science Advances, Ren Ng and colleagues showed that targeting individual cone cells with a 543 nm laser enabled subjects to see a range of colours in both images and videos. Intriguingly, stimulating only the M cone cells sent a colour signal to the brain that never occurs in natural vision.

The Oz laser system uses a technique called adaptive optics scanning light ophthalmoscopy (AOSLO) to simultaneously image and stimulate the retina with a raster scan of laser light. The device images the retina with infrared light to track eye motion in real time and targets pulses of visible laser light at individual cone cells, at a rate of 10⁵ per second.

In a proof-of-principle experiment, the researchers tested a prototype Oz system on five volunteers. In a preparatory step, they used adaptive optics-based optical coherence tomography (AO-OCT) to classify the LMS spectral type of 1000 to 2000 cone cells in a region of each subject’s retina.

When exclusively targeting M cone cells in these retinal regions, subjects reported seeing a new blue–green colour of unprecedented saturation – which the researchers named “olo”. They could also clearly perceive Oz hues in image and video form, reliably detecting the orientation of a red line and the motion direction of a rotating red dot on olo backgrounds. In colour matching experiments, subjects could only match olo with the closest monochromatic light by desaturating it with white light – demonstrating that olo lies beyond the range of natural vision.

The team also performed control experiments in which the Oz microdoses were intentionally “jittered” by a few microns. With the target locations no longer delivered accurately, the subjects instead perceived the natural colour of the stimulating laser. In the image and video recognition experiments, jittering the microdose target locations reduced the task accuracy to guessing rate.

Ng and colleagues conclude that “Oz represents a new class of experimental platform for vision science and neuroscience [that] will enable diverse new experiments”. They also suggest that the technique could one day help to elicit full colour vision in people with colour blindness.

The post Retinal stimulation reveals colour never before seen by the human eye appeared first on Physics World.


Very high-energy electrons could prove optimal for FLASH radiotherapy

Electron therapy has long played an important role in cancer treatments. Electrons with energies of up to 20 MeV can treat superficial tumours while minimizing delivered dose to underlying tissues; they are also ideal for performing total skin therapy and intraoperative radiotherapy. The limited penetration depth of such low-energy electrons, however, limits the range of tumour sites that they can treat. And as photon-based radiotherapy technology continues to progress, electron therapy has somewhat fallen out of fashion.

That could all be about to change with the introduction of radiation treatments based on very high-energy electrons (VHEEs). Once realised in the clinic, VHEEs – with energies from 50 up to 400 MeV – will deliver highly penetrating, easily steerable, conformal treatment beams with the potential to enable emerging techniques such as FLASH radiotherapy. French medical technology company THERYQ is working to make this opportunity a reality.

Therapeutic electron beams are produced using radio frequency (RF) energy to accelerate electrons within a vacuum cavity. An accelerator just over 1 m in length can boost electrons to energies of about 25 MeV – corresponding to a tissue penetration depth of a few centimetres. It’s possible to create higher energy beams by simply daisy-chaining additional vacuum chambers, but such systems soon become too large and impractical for clinical use.

THERYQ is focusing on a totally different approach to generating VHEE beams. “In an ideal case, these accelerators allow you to reach energy transfers of around 100 MeV/m,” explains THERYQ’s Sébastien Curtoni. “The challenge is to create a system that’s as compact as possible, closer to the footprint and cost of current radiotherapy machines.”

Working in collaboration with CERN, THERYQ is aiming to modify CERN’s Compact Linear Collider technology for clinical applications. “We are adapting the CERN technology, which was initially produced for particle physics experiments, to radiotherapy,” says Curtoni. “There are definitely things in this design that are very useful for us and other things that are difficult. At the moment, this is still in the design and conception phase; we are not there yet.”

VHEE advantages

The higher energy of VHEE beams provides sufficient penetration to treat deep tumours, with the dose peak region extending up to 20–30 cm in depth for parallel (non-divergent) beams using energy levels of 100–150 MeV (for field sizes of 10 × 10 cm or above). And in contrast to low-energy electrons, which have significant lateral spread, VHEE beams have extremely narrow penumbra with sharp beam edges that help to create highly conformal dose distributions.

“Electrons are extremely light particles and propagate through matter in very straight lines at very high energies,” Curtoni explains. “If you control the initial direction of the beam, you know that the patient will receive a very steep and well defined dose distribution and that, even for depths above 20 cm, the beam will remain sharp and not spread laterally.”

Electrons are also relatively insensitive to tissue inhomogeneities, such as those encountered as the treatment beam passes through different layers of muscle, bone, fat or air. “VHEEs have greater robustness against density variations and anatomical changes,” adds THERYQ’s Costanza Panaino. “This is a big advantage for treatments in locations where there is movement, such as the lung and pelvic areas.”

It’s also possible to manipulate VHEEs via electromagnetic scanning. Electrons have a charge-to-mass ratio roughly 1800 times higher than that of protons, meaning that they can be steered with a much weaker magnetic field than required for protons. “As a result, the technology that you are building has a smaller footprint and the possibility of costing less,” Panaino explains. “This is extremely important because the cost of building a proton therapy facility is prohibitive for some countries.”
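The steering argument can be made quantitative with the standard bending-radius relation r = p/(qB): the stiffer the beam (higher momentum), the stronger the field needed to bend it. A back-of-envelope comparison at matched kinetic energy — the 1 T field and 150 MeV energies are illustrative choices, not THERYQ specifications:

```python
import math

def momentum_MeV_c(kinetic_MeV, rest_mass_MeV):
    """Relativistic momentum from kinetic energy: pc = sqrt(E_total^2 - (mc^2)^2)."""
    total = kinetic_MeV + rest_mass_MeV
    return math.sqrt(total**2 - rest_mass_MeV**2)

def bend_radius_m(p_MeV_c, B_tesla):
    """Radius of curvature r = p/(qB); with p in GeV/c, r[m] = p / (0.2998 B[T])."""
    return (p_MeV_c / 1000.0) / (0.2998 * B_tesla)

B = 1.0  # tesla, illustrative dipole field
p_e = momentum_MeV_c(150.0, 0.511)    # 150 MeV electron
p_p = momentum_MeV_c(150.0, 938.27)   # 150 MeV proton

print(f"electron: p = {p_e:.0f} MeV/c, r = {bend_radius_m(p_e, B):.2f} m")
print(f"proton:   p = {p_p:.0f} MeV/c, r = {bend_radius_m(p_p, B):.2f} m")
# At this kinetic energy the proton is ~3.7x magnetically stiffer: it needs
# roughly 3.7x the bending radius, or 3.7x the field, of the electron.
```

This is why a VHEE beamline can use lighter, cheaper scanning magnets than a proton gantry of comparable reach.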

Enabling FLASH

In addition to expanding the range of clinical indications that can be treated with electrons, VHEE beams can also provide a tool to enable the emerging – and potentially game changing – technique known as FLASH radiotherapy. By delivering therapeutic radiation at ultrahigh dose rates (higher than 100 Gy/s), FLASH vastly reduces normal tissue toxicity while maintaining anti-tumour activity, potentially minimizing harmful side-effects.

The recent interest in the FLASH effect began back in 2014 with the report of a differential response between normal and tumour tissue in mice exposed to high dose-rate, low-energy electrons. Since then, most preclinical FLASH studies have used electron beams, as did the first patient treatment in 2019 – a skin cancer treatment at Lausanne University Hospital (CHUV) in Switzerland, performed with the Oriatron eRT6 prototype from PMB-Alcen, the French company from which THERYQ originated.

FLASH radiotherapy is currently being used in clinical trials with proton beams, as well as with low-energy electrons, where it remains intrinsically limited to superficial treatments. Treating deep-seated tumours with FLASH requires more highly penetrating beams. And while the most obvious option would be to use photons, it’s extremely difficult to produce an X-ray beam with a high enough dose rate to induce the FLASH effect without excessive heat generation destroying the conversion target.

“It’s easier to produce a high dose-rate electron beam for FLASH than trying to [perform FLASH] with X-rays, as you use the electron beam directly to treat the patient,” Curtoni explains. “The possibility to treat deep-seated tumours with high-energy electron beams compensates for the fact that you can’t use X-rays.”

Panaino points out that in addition to high dose rates, FLASH radiotherapy also relies on various interdependent parameters. “Ideally, to induce the FLASH effect, the beam should be pulsed at a frequency of about 100 Hz, the dose-per-pulse should be 1 Gy or above, and the dose rate within the pulse should be higher than 10⁶ Gy/s,” she explains.
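These quoted parameters are mutually consistent, as a quick sanity check shows: 1 Gy per pulse at 100 Hz gives a mean dose rate of 100 Gy/s — the FLASH threshold mentioned earlier — while the intra-pulse dose rate implies microsecond-scale pulses:

```python
# Consistency check on the quoted FLASH beam parameters (values from the text).
pulse_rate_hz = 100          # pulse repetition frequency
dose_per_pulse_gy = 1.0      # Gy delivered in each pulse
intrapulse_rate_gy_s = 1e6   # instantaneous dose rate within a pulse

mean_dose_rate = dose_per_pulse_gy * pulse_rate_hz        # Gy/s averaged over time
pulse_width_s = dose_per_pulse_gy / intrapulse_rate_gy_s  # duration of one pulse
duty_cycle = pulse_width_s * pulse_rate_hz                # fraction of time beam is on

print(f"mean dose rate : {mean_dose_rate:.0f} Gy/s")      # → 100 Gy/s
print(f"pulse width    : {pulse_width_s * 1e6:.0f} µs")   # → 1 µs
print(f"duty cycle     : {duty_cycle:.1e}")               # → 1.0e-04
```

In other words, the beam is on for only about one ten-thousandth of the treatment time, with each microsecond pulse carrying a full gray of dose.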

VHEE infographic

Into the clinic

THERYQ is using its VHEE expertise to develop a clinical FLASH radiotherapy system called FLASHDEEP, which will use electrons at energies of 100 to 200 MeV to treat tumours at depths of up to 20 cm. The first FLASHDEEP systems will be installed at CHUV (which is part of a consortium with CERN and THERYQ) and at the Gustave Roussy cancer centre in France.

“We are trying to introduce FLASH into the clinic, so we have a prototype FLASHKNiFE machine that allows us to perform low-energy, 6 and 9 MeV, electron therapy,” says Charlotte Robert, head of the medical physics department research group at Gustave Roussy. “The first clinical trials using low-energy electrons are all on skin tumours, aiming to show that we can safely decrease the number of treatment sessions.”

While these initial studies are limited to skin lesions, clinical implementation of the FLASHDEEP system will extend the benefits of FLASH to many more tumour sites. Robert predicts that VHEE-based FLASH will prove most valuable for treating radioresistant cancers that cannot currently be cured. The rationale is that FLASH’s ability to spare normal tissue will allow delivery of higher target doses without increasing toxicity.

“You will not use this technology for diseases that can already be cured, at least initially,” she explains. “The first clinical trial, I’m quite sure, will be either glioblastoma or pancreatic cancers that are not effectively controlled today. If we can show that VHEE FLASH can spare normal tissue more than conventional radiotherapy can, we hope this will have a positive impact on lesion response.”

“There are a lot of technological challenges around this technology and we are trying to tackle them all,” Curtoni concludes. “The ultimate goal is to produce a VHEE accelerator with a very compact beamline that makes this technology and FLASH a reality for a clinical environment.”

The post Very high-energy electrons could prove optimal for FLASH radiotherapy appeared first on Physics World.


Tiny sensor creates a stable, wearable brain–computer interface

Brain–computer interfaces (BCIs) enable the flow of information between the brain and an external device such as a computer, smartphone or robotic limb. Applications range from use in augmented and virtual reality (AR and VR), to restoring function to people with neurological disorders or injuries.

Electroencephalography (EEG)-based BCIs use sensors on the scalp to noninvasively record electrical signals from the brain and decode them to determine the user’s intent. Currently, however, such BCIs require bulky, rigid sensors that prevent use during movement and don’t work well with hair on the scalp, which affects the skin–electrode impedance. A team headed up at Georgia Tech’s WISH Center has overcome these limitations by creating a brain sensor that’s small enough to fit between strands of hair and is stable even while the user is moving.

“This BCI system can find wide applications. For example, we can realize a text spelling interface for people who can’t speak,” says W Hong Yeo, Harris Saunders Jr Professor at Georgia Tech and director of the WISH Center, who co-led the project with Tae June Kang from Inha University in Korea. “For people who have movement issues, this BCI system can offer connectivity with human augmentation devices, a wearable exoskeleton, for example. Then, using their brain signals, we can detect the user’s intentions to control the wearable system.”

A tiny device

The microscale brain sensor comprises a cross-shaped structure of five microneedle electrodes, with sharp tips (less than 30°) that penetrate the skin easily with nearly pain-free insertion. The researchers used UV replica moulding to create the array, followed by femtosecond laser cutting to shape it to the required dimensions – just 850 x 1000 µm – to fit into the space between hair follicles. They then coated the microsensor with a highly conductive polymer (PEDOT:Tos) to enhance its electrical conductivity.

Microscale brain sensor between hair strands
Between the hairs The size and lightweight design of the sensor significantly reduces motion artefacts. (Courtesy: W Hong Yeo)

The microneedles capture electrical signals from the brain and transmit them along ultrathin serpentine wires that connect to a miniaturized electronics system on the back of the neck. The serpentine interconnector stretches as the skin moves, isolating the microsensor from external vibrations and preventing motion artefacts. The miniaturized circuits then wirelessly transmit the recorded signals to an external system (AR glasses, for example) for processing and classification.

Yeo and colleagues tested the performance of the BCI using three microsensors inserted into the scalp of the occipital lobe (the brain’s visual processing centre). The BCI exhibited excellent stability, offering high-quality measurement of neural signals – steady-state visual evoked potentials (SSVEPs) – for up to 12 h, while maintaining low contact impedance density (0.03 kΩ/cm²).

The team also compared the quality of EEG signals measured using the microsensor-based BCI with those obtained from conventional gold-cup electrodes. Participants wearing both sensor types closed and opened their eyes while standing, walking or running.

With the participant standing still, both electrode types recorded stable EEG signals, with an increased amplitude upon closing the eyes, due to the rise in alpha wave power. During motion, however, the EEG time series recorded with the conventional electrodes showed noticeable fluctuations. The microsensor measurements, on the other hand, exhibited minimal fluctuations while walking and significantly fewer fluctuations than the gold-cup electrodes while running.

Overall, the alpha wave power recorded by the microsensors during eye-closing was higher than that of the conventional electrode, which could not accurately capture EEG signals while the user was running. The microsensors only exhibited minor motion artefacts, with little to no impact on the EEG signals in the alpha band, allowing reliable data extraction even during excessive motion.

Real-world scenario

Next, the team showed how the BCI could be used within everyday activities – such as making calls or controlling external devices – that require a series of decisions. The BCI enables a user to make these decisions using their thoughts, without needing physical input such as a keyboard, mouse or touchscreen. And the new microsensors free the user from environmental and movement constraints.

The researchers demonstrated this approach in six subjects wearing AR glasses and a microsensor-based EEG monitoring system. They performed experiments with the subjects standing, walking or running on a treadmill, with two distinct visual stimuli from the AR system used to induce SSVEP responses. Using a train-free SSVEP classification algorithm, the BCI determined which stimulus the subject was looking at with a classification accuracy of 99.2%, 97.5% and 92.5%, while standing, walking and running, respectively.
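The paper’s train-free classifier is not spelled out here, but the underlying idea — deciding which flicker frequency dominates the measured EEG — can be sketched with a simple spectral-power comparison. The sampling rate and the 10/12 Hz stimulus frequencies below are invented for the illustration, as is the synthetic EEG signal:

```python
import numpy as np

def classify_ssvep(eeg, fs, stim_freqs):
    """Train-free SSVEP classification sketch: pick the stimulus frequency
    with the most spectral power in the recorded signal (fundamental only).
    A simplified stand-in for the algorithm used in the study."""
    spectrum = np.abs(np.fft.rfft(eeg))
    freqs = np.fft.rfftfreq(len(eeg), 1 / fs)
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in stim_freqs]
    return stim_freqs[int(np.argmax(powers))]

# Synthetic 2 s EEG epoch: the user attends a 12 Hz flicker; 10 Hz is the
# alternative stimulus. Noise swamps the SSVEP in the time domain, but the
# 12 Hz component stands out clearly in the spectrum.
rng = np.random.default_rng(0)
fs, duration = 250, 2.0                        # Hz, seconds
t = np.arange(0, duration, 1 / fs)
eeg = 0.5 * np.sin(2 * np.pi * 12 * t) + rng.normal(0, 1.0, t.size)

print(classify_ssvep(eeg, fs, [10.0, 12.0]))   # → 12.0
```

Because no per-user training data are needed, a classifier of this kind can work immediately for a new wearer — one reason train-free SSVEP decoding suits a wearable BCI.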

The team also developed an AR-based video call system controlled by EEG, which allows users to manage video calls (rejecting, answering and ending) with their thoughts, demonstrating its use during scenarios such as ascending and descending stairs and navigating hallways.

“By combining BCI and AR, this system advances communication technology, offering a preview of the future of digital interactions,” the researchers write. “Additionally, this system could greatly benefit individuals with mobility or dexterity challenges, allowing them to utilize video calling features without physical manipulation.”

The microsensor-based BCI is described in Proceedings of the National Academy of Sciences.

The post Tiny sensor creates a stable, wearable brain–computer interface appeared first on Physics World.
