Defying gravity: insights into hula hoop levitation

By Tami Freeman

Popularized in the late 1950s as a child’s toy, the hula hoop is undergoing renewed interest as a fitness activity and performance art. But have you ever wondered how a hula hoop stays aloft against the pull of gravity?

Wonder no more. A team of researchers at New York University have investigated the forces involved as a hoop rotates around a gyrating body, aiming to explain the physics and mathematics of hula hooping.

To determine the conditions required for successful hula hoop levitation, Leif Ristroph and colleagues conducted robotic experiments with hoops twirling around various shapes – including cones, cylinders and hourglass shapes. The 3D-printed shapes had rubberized surfaces to achieve high friction with a thin, rigid plastic hoop, and were driven to gyrate by a motor. The researchers launched the hoops onto the gyrating bodies by hand and recorded the resulting motion using high-speed videography and motion tracking algorithms.

They found that successful hula hooping depends on meeting two conditions. First, the hoop’s orbit must be synchronized with the body’s gyration. This requires the hoop to be launched at sufficient speed and in the same direction as the gyration, after which the outward pull of centrifugal action and damping due to rolling friction result in stable twirling.
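
As a rough back-of-envelope argument (a hedged sketch, not the authors’ full dynamical model), levitation requires the vertical friction force at the hoop–body contact to support the hoop’s weight. If gyration at angular frequency Ω presses the hoop against the body with a normal force of order N ∼ mrΩ², where m is the hoop mass and r the orbit radius of its centre, then the friction bound μN ≳ mg implies a minimum gyration rate Ω ≳ √(g/μr). This is consistent with why the high-friction rubberized surfaces and a sufficiently fast launch both matter.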

Shape matters: successful hula hooping requires a body type with the right slope and curvature. (Courtesy: NYU’s Applied Math Lab)

This process, however, does not necessarily keep the hoop elevated at a stable height – any perturbations could cause it to climb or fall away. The team found that maintaining hoop levitation requires the gyrating body to have a particular “body type”, including an appropriately angled or sloped surface – the “hips” – plus an hourglass-shaped profile with a sufficiently curved “waist”.

Indeed, in the robotic experiments, an hourglass-shaped body enabled steady-state hula hooping, while the cylinders and cones failed to successfully hula hoop.

The researchers also derived dynamical models that relate the motion and shape of the hoop and body to the contact forces generated. They note that their findings can be generalized to a wide range of different shapes and types of motion, and could be used in “robotic applications for transforming motions, extracting energy from vibrations, and controlling and manipulating objects without gripping”.

“We were surprised that an activity as popular, fun and healthy as hula hooping wasn’t understood even at a basic physics level,” says Ristroph in a press statement. “As we made progress on the research, we realized that the maths and physics involved are very subtle, and the knowledge gained could be useful in inspiring engineering innovations, harvesting energy from vibrations, and improving robotic positioners and movers used in industrial processing and manufacturing.”

The researchers present their findings in the Proceedings of the National Academy of Sciences.

Virtual patient populations enable more inclusive medical device development

By Tami Freeman

Medical devices are thoroughly tested before being introduced into the clinic. But traditional testing approaches do not fully account for the diversity of patient populations. This can result in the launch to market of devices that may underperform in some patient subgroups or even cause harm, with often devastating consequences.

Aiming to solve this challenge, University of Leeds spin-out adsilico is working to enable more inclusive, efficient and patient-centric device development. Launched in 2021, the company is using computational methods pioneered in academia to revolutionize the way that medical devices are developed, tested and brought to market.

Sheena Macpherson, adsilico’s CEO, talks to Tami Freeman about the potential of advanced modelling and simulation techniques to help protect all patients, and how in silico trials could revolutionize medical device development.

What procedures are required to introduce a new medical device?

Medical devices currently go through a series of testing phases before reaching the market, including bench testing, animal studies and human clinical trials. These trials aim to establish the device’s safety and efficacy in the intended patient population. However, the patient populations included in clinical trials often do not adequately represent the full diversity of patients who will ultimately use the device once it is approved.

Why does this testing often exclude large segments of the population?

Traditional clinical trials tend to underrepresent women, ethnic minorities, elderly patients and those with rare conditions. This exclusion occurs for various reasons, including restrictive eligibility criteria, lack of diversity at trial sites, socioeconomic barriers to participation, and implicit biases in trial design and recruitment.

Computational medicine pioneer: Sheena Macpherson is CEO of adsilico. (Courtesy: adsilico)

As a result, the data generated from these trials may not capture important variations in device performance across different subgroups.

This lack of diversity in testing can lead to devices that perform sub-optimally or even dangerously in certain demographic groups, with potentially life-threatening device flaws going undetected until the post-market phase when a much broader patient population is exposed.

Can you describe a real-life case of insufficient testing causing harm?

A poignant example is the recent vaginal mesh scandal. Mesh implants were widely marketed to hospitals as a simple fix for pelvic organ prolapse and urinary incontinence, conditions commonly linked to childbirth. However, the devices were often sold without adequate testing.

As a result, debilitating complications went undetected until the meshes were already in widespread use. Many women experienced severe chronic pain, mesh eroding into the vagina, inability to walk or have sex, and other life-altering side effects. Removal of the mesh often required complex surgery. A 2020 UK government inquiry found that this tragedy was further compounded by an arrogant culture in medicine that dismissed women’s concerns as “women’s problems” or a natural part of aging.

This case underscores how a lack of comprehensive and inclusive testing before market release can devastate patients’ lives. It also highlights the importance of taking patients’ experiences seriously, especially those from demographics that have been historically marginalized in medicine.

How can adsilico help to address these shortfalls?

adsilico is pioneering the use of advanced computational techniques to create virtual patient populations for testing medical devices. By leveraging massive datasets and sophisticated modelling, adsilico can generate fully synthetic “virtual patients” that capture the full spectrum of anatomical diversity in humans. These populations can then be used to conduct in silico trials, where devices are tested computationally on the virtual patients before ever being used in a real human. This allows identification of potential device flaws or limitations in specific subgroups much earlier in the development process.

How do you produce these virtual populations?

Virtual patients are created using state-of-the-art generative AI techniques. First, we generate digital twins – precise computational replicas of real patients’ anatomy and physiology – from a diverse set of fully anonymized patient medical images. We then apply generative AI to computationally combine elements from different digital twins, producing a large population of new, fully synthetic virtual patients. While these AI-generated virtual patients do not replicate any individual real patient, they collectively represent the full diversity of the real patient population in a statistically accurate way.
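
adsilico’s generative pipeline is proprietary, but the general idea – learning a population’s modes of anatomical variation and sampling new, statistically consistent anatomies from them – can be illustrated with a toy statistical shape model. In this hedged sketch, principal component analysis (PCA) stands in for the generative AI and random arrays stand in for real landmark data:

```python
# Toy sketch of synthesizing "virtual patients" via a PCA-based statistical
# shape model. This is NOT adsilico's proprietary pipeline -- just a minimal
# illustration of the general idea: new samples are statistically consistent
# with the training population without replicating any individual.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in training data: 200 "patients", each described by 3D coordinates
# of 50 anatomical landmarks, flattened to a 150-dimensional vector.
n_patients, n_features = 200, 150
real_shapes = rng.normal(size=(n_patients, n_features))

# Fit the shape model: mean shape plus principal modes of variation.
mean_shape = real_shapes.mean(axis=0)
centred = real_shapes - mean_shape
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
n_modes = 10                          # keep the dominant modes of variation
modes = Vt[:n_modes]                  # (n_modes, n_features)
stddevs = s[:n_modes] / np.sqrt(n_patients - 1)

def synthesize(n_virtual):
    """Sample mode coefficients from the fitted Gaussian and reconstruct.
    Each synthetic patient blends variation learned from many real patients
    rather than copying any one of them."""
    coeffs = rng.normal(size=(n_virtual, n_modes)) * stddevs
    return mean_shape + coeffs @ modes

virtual_cohort = synthesize(10000)    # a large in silico trial population
print(virtual_cohort.shape)           # (10000, 150)
```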

And how are they used in device testing?

Medical devices can be virtually implanted and simulated in these diverse synthetic anatomies to study performance across a wide range of patient variations. This enables comprehensive virtual trials that would be infeasible with traditional physical or digital twin approaches. Our solution ensures medical devices are tested on representative samples before ever reaching real patients. It’s a transformative approach to making clinical trials more inclusive, insightful and efficient.

In the cardiac space, for example, we might start with MRI scans of the heart from a broad cohort. We then computationally combine elements from different patient scans to generate a large population of new virtual heart anatomies that, while not replicating any individual real patient, collectively represent the full diversity of the real patient population. Medical devices such as stents or prosthetic heart valves can then be virtually implanted in these synthetic patients, and various simulations run to study performance and safety across a wide range of anatomical variations.

How do in silico trials help patients?

The in silico approach using virtual patients helps protect all patients by allowing more comprehensive device testing before human use. It enables the identification of potential flaws or limitations that might disproportionately affect specific subgroups, which can be missed in traditional trials with limited diversity.

This methodology also provides a way to study device performance in groups that are often underrepresented in human trials, such as ethnic minorities or those with rare conditions. By computationally generating virtual patients with these characteristics, we can proactively ensure that devices will be safe and effective for these populations. This helps prevent the kinds of adverse outcomes that can occur when devices are used in populations on which they were not adequately tested.

Could in silico trials replace human trials?

In silico trials using virtual patients are intended to supplement, rather than fully replace, human clinical trials. They provide a powerful tool for detecting potential issues early and for enhancing the evidence available preclinically, allowing refinement of designs and testing protocols before moving to human trials. This can make the human trials more targeted, efficient and inclusive.

In silico trials can also be used to study device performance in patient types that are challenging to sufficiently represent in human trials, such as those with rare conditions. Ultimately, the combination of computational and human trials provides a more comprehensive assessment of device safety and efficacy across real-world patient populations.

Will this reduce the need for studies on animals?

In silico trials have the potential to significantly reduce the use of animals in medical device testing. Currently, animal studies remain an important step for assessing certain biological responses that are difficult to comprehensively model computationally, such as immune reactions and tissue healing. However, as computational methods become increasingly sophisticated, they are able to simulate an ever-broader range of physiological processes.

By providing a more comprehensive preclinical assessment of device safety and performance, in silico trials can already help refine designs and reduce the number of animals needed in subsequent live studies.

Ultimately, could this completely eliminate animal testing?

Looking ahead, we envision a future where advanced in silico models, validated against human clinical data, can fully replicate the key insights we currently derive from animal experiments. As these technologies mature, we may indeed see a time when animal testing is no longer a necessary precursor to human trials. Getting to that point will require close collaboration between industry, academia, regulators and the public to ensure that in silico methods are developed and validated to the highest scientific and ethical standards.

At adsilico, we are committed to advancing computational approaches in order to minimize the use of animals in the device development pipeline, with the ultimate goal of replacing animal experiments altogether. We believe this is not only a scientific imperative, but an ethical obligation as we work to build a more humane and patient-centric testing paradigm.

What are the other benefits of in silico testing?

Beyond improving device safety and inclusivity, the in silico approach can significantly accelerate the development timeline. By frontloading more comprehensive testing into the preclinical phase, device manufacturers can identify and resolve issues earlier, reducing the risk of costly failures or redesigns later in the process. The ability to generate and test on large virtual populations also enables much more rapid iteration and optimization of designs.

Additionally, by reducing the need for animal testing and making human trials more targeted and efficient, in silico methods can help bring vital new devices to patients faster and at lower cost. Industry analysts project that by 2025, in silico methods could enable 30% more new devices to reach the market each year compared with the current paradigm.

Are in silico trials being employed yet?

The use of in silico methods in medicine is rapidly expanding, but still nascent in many areas. Computational approaches are increasingly used in drug discovery and development, and regulatory agencies like the US Food and Drug Administration are actively working to qualify in silico methods for use in device evaluation.

Several companies and academic groups are pioneering the use of virtual patients for in silico device trials, and initial results are promising. However, widespread adoption is still in the early stages. With growing recognition of the limitations of traditional approaches and the power of computational methods, we expect to see significant growth in the coming years. Industry projections suggest that by 2025, 50% of new devices and 25% of new drugs will incorporate in silico methods in their development.

What’s next for adsilico?

Our near-term focus is on expanding our virtual patient capabilities to encompass an even broader range of patient diversity, and to validate our methods across multiple clinical application areas in partnership with device manufacturers.

Ultimately, our mission is to ensure that every patient, regardless of their demographic or anatomical characteristics, can benefit from medical devices that are thoroughly tested and optimized for someone like them. We won’t stop until in silico methods are a standard, integral part of developing safe and effective devices for all.

Mathematical model sheds light on how exercise suppresses tumour growth

By Tami Freeman

Physical exercise plays an important role in controlling disease, including cancer, due to its effect on the human body’s immune system. A research team from the USA and India has now developed a mathematical model to quantitatively investigate the complex relationship between exercise, immune function and cancer.

Exercise is thought to suppress tumour growth by activating the body’s natural killer (NK) cells. In particular, skeletal muscle contractions drive the release of interleukin-6 (IL-6), which causes NK cells to shift from an inactive to an active state. The activated NK cells can then infiltrate and kill tumour cells. To investigate this process in more depth, the team developed a mathematical model describing the transition of an NK cell from its inactive to active state, at a rate driven by exercise-induced IL-6 levels.

“We developed this model to study how the interplay of exercise intensity and exercise duration can lead to tumour suppression and how the parameters associated with these exercise features can be tuned to get optimal suppression,” explains senior author Niraj Kumar from the University of Massachusetts Boston.

Impact of exercise intensity and duration

The model, reported in Physical Biology, is constructed from three ordinary differential equations that describe the temporal evolution of the number of inactive NK cells, active NK cells and tumour cells, as functions of the growth rates, death rates, switching rates (for NK cells) and the rate of tumour cell kill by activated NK cells.
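
The paper’s exact equations and parameter values are not reproduced in the article, but a toy model with this three-compartment structure is easy to write down. In the hedged sketch below, the exercise-induced IL-6 signal is assumed to decay exponentially, with an amplitude α₀ set by exercise intensity and a time scale τ set by its duration; tumour growth is taken as logistic, with a kill term proportional to the number of active NK cells. All functional forms and numbers are illustrative assumptions, not those of Taylor et al:

```python
# Hedged toy version of a three-ODE exercise/NK-cell/tumour model of the
# kind described in the article. Functional forms and parameter values are
# illustrative assumptions, not those of the published paper.
import numpy as np
from scipy.integrate import solve_ivp

alpha0, tau = 2.0, 1.0           # exercise intensity (peak IL-6) and duration scale
g_I, d_I, d_A = 1.0, 0.1, 0.5    # inactive-NK production, death; active-NK death
r, K, k = 0.4, 1e3, 0.15         # tumour growth rate, capacity, NK kill rate

def il6(t):
    """Hypothetical exercise-induced IL-6 signal: peaks at launch and
    decays on the exercise time scale tau."""
    return alpha0 * np.exp(-t / tau)

def rhs(t, y):
    I, A, T = y                   # inactive NK, active NK, tumour cells
    switch = il6(t) * I           # IL-6-driven activation of NK cells
    dI = g_I - d_I * I - switch
    dA = switch - d_A * A
    dT = r * T * (1 - T / K) - k * A * T   # logistic growth minus NK kill
    return [dI, dA, dT]

sol = solve_ivp(rhs, (0, 20), y0=[10.0, 0.0, 50.0], dense_output=True)
t = np.linspace(0, 20, 400)
I, A, T = sol.sol(t)
print(f"minimum tumour population: {T.min():.1f} at day {t[T.argmin()]:.1f}")
```

Sweeping α₀ or τ in this sketch, or replacing il6() with a pulse train to mimic repeated bouts, reproduces qualitatively the behaviours described below.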

Kumar and collaborators – Jay Taylor at Northeastern University and T Bagarti at Tata Steel’s Graphene Center – first investigated how exercise intensity impacts tumour suppression. They used their model to determine the evolution over time of tumour cells for different values of α₀ – a parameter that correlates with the maximum level of IL-6 and increases with increased exercise intensity.

Modelling suppression: temporal evolution of tumour cells for different values of α₀ (left) and exercise time scale τ (right). (Courtesy: J Taylor et al Phys. Biol. 10.1088/1478-3975/ad899d)

Simulating tumour growth over 20 days showed that the tumour population increased non-monotonically, exhibiting a minimum population (maximum tumour suppression) at a certain critical time before increasing and then reaching a steady-state value in the long term. At all time points, the largest tumour population was seen for the no-exercise case, confirming the premise that exercise helps suppress tumour growth.

The model revealed that as the intensity of the exercise increased, the level of tumour suppression increased alongside, due to the larger number of active NK cells. In addition, greater exercise intensity sustained tumour suppression for a longer time. The researchers also observed that if the initial tumour population was closer to the steady state, the effect of exercise on tumour suppression was reduced.

Next, the team examined the effect of exercise duration by calculating tumour evolution over time for varying exercise time scales. Again, the tumour population grew non-monotonically, with a minimum at a certain critical time, and was again largest in the no-exercise case. The maximum level of tumour suppression increased with increasing exercise duration.

Finally, the researchers analysed how multiple bouts of exercise impact tumour suppression, modelling a series of alternating exercise and rest periods. The model revealed that the effect of exercise on maximum tumour suppression exhibits a threshold response with exercise frequency. Up to a critical frequency, which varies with exercise intensity, the maximum tumour suppression doesn’t change. However, if the exercise frequency exceeds the critical frequency, it leads to a corresponding increase in maximum tumour suppression.

Clinical potential

Overall, the model demonstrated that increasing the intensity or duration of exercise leads to greater and sustained tumour suppression. It also showed that manipulating exercise frequency and intensity within multiple exercise bouts had a pronounced effect on tumour evolution.

These results highlight the model’s potential to guide the integration of exercise into a patient’s cancer treatment programme. While still at the early development stage, the model offers valuable insight into how exercise can influence immune responses. And as Taylor points out, as more experimental data become available, the model has potential for further extension.

“In the future, the model could be adapted for clinical use by testing its predictions in human trials,” he explains. “For now, it provides a foundation for designing exercise regimens that could optimize immune function and tumour suppression in cancer patients, based on the exercise intensity and duration.”

Next, the researchers plan to extend the model to incorporate both exercise and chemotherapy dosing. They will also explore how heterogeneity in the tumour population can influence tumour suppression.

Optimization algorithm gives laser fusion a boost

A new algorithmic technique could enhance the output of fusion reactors by smoothing out the laser pulses used to compress hydrogen to fusion densities. Developed by physicists at the University of Bordeaux, France, a simulated version of the new technique has already been applied to conditions at the US National Ignition Facility (NIF) and could also prove useful at other laser fusion experiments.

A major challenge in fusion energy is keeping the fuel – a mixture of the hydrogen isotopes deuterium and tritium – hot and dense enough for fusion reactions to occur. The two main approaches to doing this confine the fuel with strong magnetic fields or intense laser light and are known respectively as magnetic confinement fusion and inertial confinement fusion (ICF). In either case, when the pressure and temperature become high enough, the hydrogen nuclei fuse into helium. Since the energy released in this fusion reaction is, in principle, greater than the energy needed to get it going, fusion has long been viewed as a promising future energy source.

In 2022, scientists at NIF became the first to demonstrate “energy gain” from fusion, meaning that the fusion reactions produced more energy than was delivered to the fuel target via the facility’s system of super-intense lasers. The method they used was somewhat indirect. Instead of compressing the fuel itself, NIF’s lasers heated a gold container known as a hohlraum with the fuel capsule inside. The appeal of this so-called indirect-drive ICF is that it is less sensitive to inhomogeneities in the laser’s illumination. These inhomogeneities arise from interactions between the laser beams and the highly compressed plasma produced during fusion, and they are hard to get rid of.

In principle, though, direct-drive ICF is a stronger candidate for a fusion reactor, explains Duncan Barlow, a postdoctoral researcher at Bordeaux who led the latest research effort. This is because it couples more energy into the target, meaning it can deliver more fusion energy per unit of laser energy.

Reducing computing cost and saving time

To work out which laser configurations are the most homogeneous, researchers typically use iterative radiation-hydrodynamic simulations. These are time-consuming and computationally expensive (requiring around 1 million CPU hours per evaluation). “This expense means that only a few evaluations were run, and each step was best performed by an expert who could use her or his experience and the data obtained to pick the next configurations of beams to test the illumination uniformity,” Barlow says.

The new approach, he explains, relies on approximating some of the laser beam-plasma interactions by considering isotropic plasma profiles. This means that each iteration uses less than 1000 CPU hours, so thousands can be run for the cost of a single simulation using the old method. Barlow and his colleagues also created an automated method to quantify improvements and select the most promising step forward for the process.
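
The Bordeaux scheme itself is detailed in the team’s paper; purely as a schematic of this kind of automated loop – a cheap surrogate cost for illumination non-uniformity inside a simple accept-if-better search – the hedged sketch below perturbs beam pointings and keeps only improving configurations. The beam model and cost function are illustrative assumptions, not the NIF optics:

```python
# Schematic of an automated illumination-uniformity optimizer: a cheap
# surrogate cost stands in for a full radiation-hydrodynamic simulation.
# The beam model and cost here are illustrative, not the NIF setup.
import numpy as np

rng = np.random.default_rng(1)
n_beams, n_points = 48, 500

# Sample points on the spherical target (Fibonacci sphere).
i = np.arange(n_points)
phi = np.arccos(1 - 2 * (i + 0.5) / n_points)
theta = np.pi * (1 + 5**0.5) * i
pts = np.stack([np.sin(phi) * np.cos(theta),
                np.sin(phi) * np.sin(theta),
                np.cos(phi)], axis=1)

def intensity(beam_dirs):
    """Surrogate illumination: each beam deposits a smooth cos^2-like
    spot around its pointing direction; total is the sum over beams."""
    overlap = np.clip(pts @ beam_dirs.T, 0, None)   # (points, beams)
    return (overlap**2).sum(axis=1)

def nonuniformity(beam_dirs):
    I = intensity(beam_dirs)
    return I.std() / I.mean()       # RMS variation: the quantity to minimize

# Start from random beam pointings and hill-climb.
beams = rng.normal(size=(n_beams, 3))
beams /= np.linalg.norm(beams, axis=1, keepdims=True)
cost = nonuniformity(beams)
for step in range(2000):
    trial = beams + 0.05 * rng.normal(size=beams.shape)
    trial /= np.linalg.norm(trial, axis=1, keepdims=True)
    c = nonuniformity(trial)
    if c < cost:                    # keep only improving configurations
        beams, cost = trial, c
print(f"final non-uniformity: {cost:.4f}")
```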

The researchers demonstrated their technique using simulations of a spherical target at NIF. These simulations showed that the optimized configuration should produce convergent shocks in the fuel target, resulting in pressures three times higher (and densities almost two times higher) than in the original experiment. Although their simulations focused on NIF, they say it could also apply to other pellet geometries and other facilities.

Developing tools

The study builds on work by Barlow’s supervisor, Arnaud Colaïtis, who developed a tool for simulating laser-plasma interactions that incorporates a phenomenon known as cross-beam energy transfer (CBET) that contributes to inhomogeneities. Even with this and other such tools, however, Barlow explains that fusion scientists have long struggled to define optimal illumination configurations when the system deviates from a simple mathematical description. “My supervisor recognized the need for a new solution, but it took us a year of further development to identify such a methodology,” he says. “Initially, we were hoping to apply neural networks – similar to image recognition – to speed up the technique, but we realized that this required prohibitively large training data.”

As well as working on this project, Barlow is also involved in a French project called Taranis that aims to use ICF to produce energy – an approach known as inertial fusion energy (IFE). “I am applying the methodology from my ICF work in a new way to ensure the robust, uniform drive of targets with the aim of creating a new IFE facility and eventually a power plant,” he tells Physics World.

A broader physics application, he adds, would be to incorporate more laser-plasma instabilities beyond CBET that are non-linear and normally too expensive to model accurately with radiation-hydrodynamic simulations. Some examples include stimulated Brillouin scattering, stimulated Raman scattering and two-plasmon decay. “The method presented in our work, which is detailed in Physical Review Letters, is a great accelerated scheme for better evaluating these laser-plasma instabilities, their impact for illumination configurations and post-shot analysis,” he says.

Electromagnetic waves solve partial differential equations

Waveguide-based structures can solve partial differential equations by mimicking elements in standard electronic circuits. This novel approach, developed by researchers at Newcastle University in the UK, could boost efforts to use analogue computers to investigate complex mathematical problems.

Many physical phenomena – including heat transfer, fluid flow and electromagnetic wave propagation, to name just three – can be described using partial differential equations (PDEs). Apart from a few simple cases, these equations are hard to solve analytically, and sometimes even impossible. Mathematicians have developed numerical techniques such as finite-difference or finite-element methods to solve more complex PDEs. However, these numerical techniques require a lot of conventional computing power, even after using methods such as mesh refinement and parallelization to reduce calculation time.

Alternatives to numerical computing

To address this, researchers have been investigating alternatives to numerical computing. One possibility is electromagnetic (EM)-based analogue computing, where calculations are performed by controlling the propagation of EM signals through a materials-based processor. These processors are typically made up of optical elements such as Bragg gratings, diffractive networks and interferometers as well as optical metamaterials, and the systems that use them are termed “metatronic” by analogy with more familiar electronic circuit elements.

The advantage of such systems is that because they use EM waves, computing can take place literally at light speeds within the processors. Systems of this type have previously been used to solve ordinary differential equations, and to perform operations such as integration, differentiation and matrix multiplication.

Some mathematical operations can also be computed with electronic systems – for example, with grid-like arrays of “lumped” circuit elements (that is, components such as resistors, inductors and capacitors that produce a predictable output from a given input). Importantly, these grids can emulate the mesh elements that feature in the finite-element method of solving various types of PDEs numerically.
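
The correspondence can be made explicit. Applying Kirchhoff’s current law at an interior node of a uniform resistor grid, with node voltages V and equal resistances R, gives

$$\frac{1}{R}\left(V_{i+1,j} + V_{i-1,j} + V_{i,j+1} + V_{i,j-1} - 4V_{i,j}\right) = 0,$$

which is exactly the five-point finite-difference stencil for Laplace’s equation ∇²V = 0 on a square mesh: the grid of lumped elements physically enforces the same linear relations that a numerical solver would assemble and invert.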

Recently, researchers demonstrated that this emulation principle also applies to photonic computing systems. They did this using the splitting and superposition of EM signals within an engineered network of dielectric waveguide junctions known as photonic Kirchhoff nodes. At these nodes, a combination of photonics structures, such as ring resonators and X-junctions, can similarly imitate lumped circuit elements.

Interconnected metatronic elements

In the latest work, Victor Pacheco-Peña of Newcastle’s School of Mathematics, Statistics and Physics and colleagues showed that such waveguide-based structures can be used to calculate solutions to PDEs that take the form of the Helmholtz equation ∇²f(x,y) + k²f(x,y) = 0. This equation is used to model many physical processes, including the propagation, scattering and diffraction of light and sound as well as the interactions of light and sound with resonators.

Unlike in previous setups, however, Pacheco-Peña’s team exploited a grid-like network of parallel plate waveguides filled with dielectric materials. This structure behaves like a network of interconnected T-circuits, or metatronic elements, with the waveguide junctions acting as sampling points for the PDE solution, Pacheco-Peña explains. “By carefully manipulating the impedances of the metatronic circuits connecting these points, we can fully control the parameters of the PDE to be solved,” he says.

The researchers used this structure to solve various boundary value problems by inputting signals to the network edges. Such problems frequently crop up in situations where information from the edges of a structure is used to infer details of physical processes in other regions in it. For example, by measuring the electric potential at the edge of a semiconductor, one can calculate the distribution of electric potential near its centre.
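
As a point of comparison for what the analogue network computes, a minimal digital counterpart – an illustrative finite-difference solution of the same Helmholtz boundary-value problem, with the interior field inferred from values imposed on the edges – might look as follows. The grid size, wavenumber and boundary data are arbitrary choices for this sketch:

```python
# Minimal finite-difference Helmholtz solver on a square grid: an
# illustrative digital counterpart to the analogue waveguide network.
# Interior values are inferred from Dirichlet data imposed on the edges.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

n, L, k = 60, 1.0, 5.0               # grid size, domain length, wavenumber
h = L / (n - 1)
f = np.zeros((n, n))
f[0, :] = 1.0                        # boundary data on one edge ("input signal")

# Assemble (grad^2 + k^2) f = 0 at interior nodes; boundary nodes are fixed.
A = lil_matrix((n * n, n * n))
b = np.zeros(n * n)
idx = lambda i, j: i * n + j
for i in range(n):
    for j in range(n):
        m = idx(i, j)
        if i in (0, n - 1) or j in (0, n - 1):
            A[m, m] = 1.0            # Dirichlet boundary condition
            b[m] = f[i, j]
        else:
            A[m, m] = -4.0 / h**2 + k**2
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                A[m, idx(i + di, j + dj)] = 1.0 / h**2

solution = spsolve(A.tocsr(), b).reshape(n, n)
print(f"field at the centre of the domain: {solution[n//2, n//2]:.4f}")
```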

Pacheco-Peña says the new technique can be applied to “open” boundary problems, such as calculating how light focuses and scatters, as well as “closed” ones, like sound waves reflecting within a room. However, he acknowledges that the method is not yet perfect because some undesired reflections at the boundary of the waveguide network distort the calculated PDE solution. “We have identified the origin of these reflections and proposed a method to reduce them,” he says.

In this work, which is detailed in Advanced Photonics Nexus, the researchers numerically simulated the PDE solving scheme at microwave frequencies. In the next stages of their work, they aim to extend their technique to higher frequency ranges. “Previous works have demonstrated metatronic elements working in these frequency ranges, so we believe this should be possible,” Pacheco-Peña tells Physics World. “This might also allow the waveguide-based structure to be integrated with silicon photonics or plasmonic devices.”
