
MRID3D phantom eases the introduction of MRI into the radiotherapy clinic

Radiotherapy is a precision cancer therapy that employs personalized treatment plans to target radiation to tumours with high accuracy. Such plans are usually created from high-resolution CT scans of the patient. But interest is growing in an alternative approach: MR simulation, in which MR images are used to generate the treatment plans – for delivery on conventional linac systems as well as the increasingly prevalent MR-guided radiotherapy systems.

One site that has transitioned to this approach is the Institut Jules Bordet in Belgium, which in 2021 acquired both an Elekta Unity MR-Linac and a Siemens MAGNETOM Aera MR-Simulator. “It was a long-term objective for our clinic to have an MR-only workflow,” says Akos Gulyban, a medical physicist at Institut Jules Bordet. “When we moved to a new campus, we decided to purchase the MR-Linac. Then we thought that if we are getting into the MR world for treatment adaptation, we also need to step up in terms of simulation.”

The move to MR simulation delivers many clinical benefits, with MR images providing the detailed anatomical information required to delineate targets and organs-at-risk with the highest precision. But it also creates new challenges for the physicists, particularly when it comes to quality assurance (QA) of MR-based systems. “The biggest concern is geometric distortion,” Gulyban explains. “If there is no distortion correction, then the usability of the machine or the sequence is very limited.”

Addressing distortion

While the magnetic field gradient is theoretically linear, and MRI is indeed extremely accurate at the imaging isocentre, distortion increases with distance from the isocentre. Images of regions 30 or 40 cm away from the isocentre – a reasonable distance for a classical linac – can differ from reality by 15 to 20 mm, says Gulyban. Thankfully, 3D correction algorithms can reduce this discrepancy to just a couple of millimetres. But such corrections first require an accurate way to measure the distortion.

Akos Gulyban: “The biggest concern is geometric distortion.” (Courtesy: Bordet – Service Communication)

To address this task, the team at Institut Jules Bordet employ a geometric distortion phantom – the QUASAR MRID3D Geometric Distortion Analysis System from IBA Dosimetry. Gulyban explains that the MRID3D was chosen following discussions with experienced users, and that key selling points included the phantom’s automated software and its ability to efficiently store results for long-term traceability.

“My concern was how much time we spend cross-processing, generating reports or evaluating results,” he says. “This software is fully automated, making it much easier to perform the evaluation and less dependent on the operator.”

Gulyban adds that the team was looking for a vendor-independent solution. “I think it is a good approach to use the tools provided [by the vendor] but now we have a way to measure the same thing using a different approach. Since our new campus has a mixture of Siemens MRs and the MR-Linac, this phantom provides a vendor-independent bridge between the two worlds.”

For quality control of the MR-Simulator, the team perform distortion measurements every three months, as well as after system interventions such as shimming, and whenever problems arise during other routine QA procedures. “We should not consider tests as individual islands in the QA process,” says Gulyban. “For instance, the ACR image quality phantom, which is used for more frequent evaluation, also partly assesses distortion. If we see that failing, I would directly trigger measurements with the more appropriate geometric distortion phantom.”

A lightweight option

To perform MR simulation, the images used for treatment planning must encompass both the target volume and the surrounding region, to ensure accurate delineation of the tumour and nearby organs-at-risk. This requires a large field-of-view (FOV) scan – plus geometric distortion QA that covers the same large FOV.

Kawtar Lakrad: “The idea behind the phantom was very smart.” (Courtesy: Kawtar Lakrad)

“You’re using this image to delineate the target and also to spare the organs-at-risk, so the image must reflect reality,” explains Kawtar Lakrad, medical physicist and clinical application specialist at IBA Dosimetry. “You don’t want that image to be twisted or the target volume to appear smaller or bigger than it actually is. You want to make sure that all geometric qualities of the image align with what’s real.”

Typically, geometric distortion phantoms are grid-like, with control points spaced every 0.5 or 1 cm. The entire volume is imaged in the MR scanner and the locations of the control points seen in the image are compared with their actual positions. “If we apply this to a large FOV phantom, which for MRI will be filled with either water or oil, it’s going to be a very large grid and it’s going to be heavy, 40 or 50 kg,” says Lakrad.
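The principle behind such a measurement can be sketched in a few lines of code. The snippet below is purely illustrative – it is not IBA’s software, and all positions are simulated – but it shows the core step: comparing the nominal control-point positions of a hypothetical grid phantom with the positions detected in an MR image and reporting the resulting distortion.

```python
import numpy as np

# Illustrative sketch only (not IBA's software): the principle behind a
# grid-type distortion phantom. Nominal control-point positions are compared
# with the positions found in the MR image; the difference is the distortion.

# Hypothetical control points on a 1 cm grid (in mm, phantom coordinates).
xs = np.arange(-100, 101, 10)
nominal = np.array([[x, y, z] for x in xs for y in xs for z in xs[:3]], float)

# Positions of the same points as "detected" in the MR image, simulated here
# with a small distortion that grows with distance from the isocentre.
r = np.linalg.norm(nominal, axis=1, keepdims=True)
measured = nominal * (1 + 1e-5 * r) + np.random.normal(0, 0.05, nominal.shape)

displacement = measured - nominal                 # distortion vector per point
magnitude = np.linalg.norm(displacement, axis=1)  # scalar distortion in mm

print(f"mean distortion: {magnitude.mean():.2f} mm")
print(f"max distortion:  {magnitude.max():.2f} mm")
# In practice the result is reported as a function of distance from the
# isocentre, e.g. the maximum distortion within spheres of increasing diameter.
```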

To overcome this obstacle, IBA researchers used innovative harmonic analysis algorithms to design a lightweight geometric distortion phantom with submillimetre accuracy and a large (35 x 30 cm) FOV: the MRID3D. The phantom comprises two concentric hollow acrylic cylinders, the only liquid being a prefilled mineral oil layer between the two shells, reducing its weight to just 21 kg.

Lightweight and accurate: The MRID3D geometric distortion phantom in use on the treatment couch. (Courtesy: IBA Dosimetry)

“The idea behind the phantom was very smart because it relies on a mathematical tool,” explains Lakrad. “There is a Fourier transform for the linear signal, which is used for standard grids. But there are also spherical harmonics – and this is what’s used in the MRID3D. The control points are all on the cylinder surface, plus one in the isocentre, creating a virtual grid that measures 3D geometric distortion.” She adds that the MRID3D can also differentiate distortion due to the main magnetic field from gradient non-linearity distortion.
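The MRID3D’s own analysis software is proprietary, but the underlying mathematical idea – that a smooth distortion field obeying Laplace’s equation in the source-free interior is fully determined by values measured on a surrounding surface – can be illustrated with a toy calculation. In the sketch below (all values invented, and only a handful of low-order harmonic polynomials rather than a full spherical-harmonic expansion), boundary samples alone are used to reconstruct the field at an interior point, which is the essence of a “virtual grid”.

```python
import numpy as np

# Toy illustration (not IBA's algorithm): a harmonic field inside a volume is
# determined by its values on the boundary. Fit a low-order expansion in
# harmonic polynomials to surface samples, then evaluate it in the interior.

def harmonic_basis(p):
    """Low-order harmonic polynomial basis evaluated at points p of shape (N, 3)."""
    x, y, z = p[:, 0], p[:, 1], p[:, 2]
    return np.column_stack([
        np.ones_like(x), x, y, z,                                  # degrees 0, 1
        x * y, x * z, y * z, x**2 - y**2, 2 * z**2 - x**2 - y**2,  # degree 2
    ])

rng = np.random.default_rng(0)

# Hypothetical "measured" distortion component (in mm) on the surface of a
# 175 mm radius sphere, generated from a harmonic field plus noise.
true_coeffs = np.array([0.1, 2e-3, -1e-3, 5e-4, 1e-5, 0.0, -2e-5, 1e-5, 3e-5])
theta = np.arccos(rng.uniform(-1, 1, 500))       # polar angles of sample points
phi = rng.uniform(0, 2 * np.pi, 500)             # azimuthal angles
surface = 175 * np.column_stack([np.sin(theta) * np.cos(phi),
                                 np.sin(theta) * np.sin(phi),
                                 np.cos(theta)])
measured = harmonic_basis(surface) @ true_coeffs + rng.normal(0, 0.02, 500)

# Fit the expansion coefficients from the surface data alone ...
coeffs, *_ = np.linalg.lstsq(harmonic_basis(surface), measured, rcond=None)

# ... then evaluate the distortion anywhere inside the volume.
interior_point = np.array([[50.0, -30.0, 20.0]])
print("reconstructed distortion at interior point (mm):",
      harmonic_basis(interior_point) @ coeffs)
```

The commercial phantom applies the same boundary-value idea, with its control points distributed over the cylinder surfaces described above and a far richer harmonic expansion.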

Moving into the MR world

Gulyban and his team at Institut Jules Bordet first used MR simulation for pelvic treatments, particularly prostate cancer, he tells Physics World. This was followed by abdominal tumours, such as pancreatic and liver cancers (where many patients were being treated on the MR-Linac) and more recently, cranial and head-and-neck irradiations.

Gulyban points out that the introduction of the MR-Simulator was eased by the team’s experience with the MR-Linac, which helped them “step into the MR world”. Here also, the MRID3D phantom is used to quantify geometric distortion, both for initial commissioning and continuous QA of the MR-Linac.

Screen shot: B0 distortion mapping with MRID3D. (Courtesy: IBA Dosimetry)

“It’s like a consistency check,” he explains. “We have certain manufacturer-defined conditions that we need to meet for the MR-Linac – for instance, that distortion within a 40 mm diameter should be less than 1 mm. To ensure that these are met in a consistent fashion, we repeat the measurements with the manufacturer’s phantom and with the MRID3D. This gives us extra peace of mind that the machine is performing under the correct conditions.”

For other cancer centres looking to integrate MR into their radiotherapy clinics, Gulyban has some key points of advice. These include starting with MR-guided radiotherapy and then adding MR simulation, identifying a suitable pathology to treat first and gain familiarity, and attending relevant courses or congresses for inspiration.

“The biggest change is actually a change in culture because you have an active MRI in the radiotherapy department,” he notes. “We are used to the radioprotection aspects of radiotherapy, wearing a dosimeter and observing radiation protection principles. MRI is even less forgiving – every possible thing that could go wrong you have to eliminate. Closing all the doors and emptying your pockets must become a reflex habit. You have to prepare mentally for that.”

“When you’re used to CT-based machines, moving to an MR workflow can be a little bit new,” adds Lakrad. “Most physicists are already familiar with the MR concept, but when it comes to the QA process, that’s the most challenging part. Some people would just repeat what’s done in radiology – but the use case is different. In radiotherapy, you have to delineate the target and surrounding volumes exactly. You’re going to be delivering dose, which means the tolerance between diagnostic and radiation therapy is different. That’s the biggest challenge.”

The post MRID3D phantom eases the introduction of MRI into the radiotherapy clinic appeared first on Physics World.

  •  

Artificial intelligence could help detect ‘predatory’ journals

Artificial intelligence (AI) could help sniff out questionable open-access publications that are more interested in profit than scientific integrity. That is according to an analysis of 15,000 scientific journals by an international team of computer scientists. They find that dubious journals tend to publish an unusually high number of articles and feature authors who have many affiliations and frequently self-cite (Sci. Adv. 11 eadt2792).

Open access removes the requirement for traditional subscriptions. Articles are instead made immediately and freely available for anyone to read, with publication costs covered by the authors, who pay an article-processing charge.

But as the popularity of open-access journals has risen, there has been a growth in “predatory” journals that exploit the open-access model by making scientists pay publication fees without a proper peer-review process in place.

To build an AI-based method for distinguishing legitimate from questionable journals, Daniel Acuña, a computer scientist at the University of Colorado Boulder, and colleagues used the Directory of Open Access Journals (DOAJ) – an online, community-curated index of open-access journals.

The researchers trained their machine-learning model on 12,869 journals indexed on the DOAJ and 2536 journals that have been removed from the DOAJ due to questionable practices that violate the community’s listing criteria. The team then tested the tool on 15,191 journals listed by Unpaywall, an online directory of free research articles.

To identify questionable journals, the AI system analyses journals’ bibliometric information and the content and design of their websites, scrutinising details such as the affiliations of editorial board members and the average author h-index – a metric that quantifies a researcher’s productivity and impact.
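As a rough illustration of how bibliometric signals of this kind could feed a classifier – and emphatically not a reconstruction of the study’s actual model, which also analyses website content and design – the sketch below trains a random forest on a handful of invented journal-level features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical sketch only: invented features and data, standing in for the
# bibliometric signals mentioned in the article (publication volume, author
# h-index, self-citation rate, author affiliations).

rng = np.random.default_rng(1)
n = 1000  # invented number of journals per class

# Columns: [articles per year, mean author h-index,
#           self-citation rate, affiliations per author]
legit = np.column_stack([rng.normal(120, 40, n), rng.normal(18, 6, n),
                         rng.normal(0.05, 0.02, n), rng.normal(1.2, 0.2, n)])
dubious = np.column_stack([rng.normal(600, 200, n), rng.normal(6, 3, n),
                           rng.normal(0.25, 0.08, n), rng.normal(2.5, 0.6, n)])

X = np.vstack([legit, dubious])
y = np.array([0] * n + [1] * n)  # 1 = questionable journal

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```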

The AI model flagged 1437 journals as questionable, with the researchers concluding that 1092 were genuinely questionable while 345 were false positives.

They also identified around 1780 problematic journals that the AI screening failed to flag. According to the study authors, their analysis shows that problematic publishing practices leave detectable patterns in citation behaviour such as the last authors having a low h-index together with a high rate of self-citation.

Acuña adds that the tool could help to pre-screen large numbers of journals, though he cautions that “human professionals should do the final analysis”. The researchers’ AI screening system isn’t publicly accessible, but they hope to make it available to universities and publishing companies soon.

The post Artificial intelligence could help detect ‘predatory’ journals appeared first on Physics World.

  •  

Are longer quantum algorithms actually good?

It’s almost impossible to avoid reading about advances in quantum computing these days. Despite this, we’re still some way off having fully fault-tolerant, large-scale quantum computers. One practical difficulty is that even the best present-day quantum computers suffer from noise that can often cause them to return erroneous results.

Research in this field can be broadly divided into two areas: a) designing quantum algorithms with potential practical advantages over classical algorithms (the software) and b) physically building a quantum computer (the hardware).

One of the main approaches to algorithm design is to minimise an algorithm’s number of operations, or runtime. One intuitively expects that reducing the number of operations would decrease the chance of errors – the key to constructing a reliable quantum computer.
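A toy calculation makes this intuition concrete. Under the simple (and assumed) model that each operation independently goes wrong with probability p, the chance of an error-free run falls exponentially with the number of operations – which is exactly the naive picture the new work complicates.

```python
import numpy as np

# Toy illustration of the intuition only, not of the paper's tradeoff result:
# if each gate fails independently with probability p, the probability of an
# error-free run decays exponentially with circuit depth.
p = 0.001                                # assumed per-operation error probability
depths = np.array([100, 1_000, 10_000])  # number of operations in the algorithm

for d, s in zip(depths, (1 - p) ** depths):
    print(f"{d:>6} operations -> probability of an error-free run ~ {s:.3f}")
# Under this naive model, shorter is always better; the result described next
# shows that for some noise processes the picture is more subtle.
```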

However, this is not always the case. In a recent paper, García-Pintos and colleagues found that minimising the number of operations in a quantum algorithm can sometimes be counterproductive, leading to an increased sensitivity to noise. Essentially, running a faster algorithm in non-ideal conditions can result in more errors than if a slower algorithm had been used.

The authors proved that there’s a trade-off between an algorithm’s number of operations and its resilience to noise. This means that, for certain types of errors, slower algorithms might actually be better in some real-world conditions.

These results bring together research on quantum hardware and software. The mathematical framework developed will enable quantum algorithms to be designed with the limitations of current real quantum computers in mind.

Read the full article

Resilience–runtime tradeoff relations for quantum algorithms – IOPscience

García-Pintos et al. 2025 Rep. Prog. Phys. 88 037601

The post Are longer quantum algorithms actually good? appeared first on Physics World.

  •  

The hunt for long-lived particles produced in proton-proton collisions at the LHC

Despite the huge success of the Standard Model of particle physics, we know it’s not complete. Dark matter and neutrino masses are just two of the things that are conspicuously missing in our current theory.

However, it’s been notoriously difficult to perform an experiment that actually disagrees with the model’s predictions.

Many proposed extensions of the Standard Model, such as the fraternal twin Higgs or folded supersymmetry models, include so-called long-lived particles (LLPs).

Unlike most particles produced in high-energy collisions, which decay almost instantaneously, LLPs have relatively long lifetimes, meaning they travel a measurable distance before decaying.
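Quantitatively, a particle with proper lifetime $\tau$ and relativistic boost $\beta\gamma$ travels a mean distance $\langle L \rangle = \beta\gamma c\tau$ before decaying. A hypothetical particle with $\tau = 1\,\mathrm{ns}$ boosted to $\beta\gamma = 10$, for example, would typically fly about 3 m – far enough to decay visibly displaced from the collision point, or even outside the detector altogether.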

A new paper from the CMS collaboration at CERN searched for evidence of these particles by re-examining previous data from proton-proton collision events.

The new analysis used techniques such as machine learning to enhance the sensitivity to LLPs.

So, did they find any new particles? The short answer, unfortunately, is no.

However, the new study achieves up to a tenfold improvement over previous limits for LLP masses. It also places the first constraints on many proposed models that predict these particles.

Although this study found no new physics, we’re still confident that something must be out there. And by narrowing down the possible spaces where we might find new particles, we’re one step closer to finding them.

The search continues.

Read the full article

Search for light long-lived particles decaying to displaced jets in proton–proton collisions – IOPscience

The CMS Collaboration, 2025 Rep. Prog. Phys. 88 037801

The post The hunt for long-lived particles produced in proton-proton collisions at the LHC appeared first on Physics World.

  •  

Relive the two decades when physicists basked in the afterglow of the Standard Model

Tunnel vision: The successful consolidation of particle physics in the 1980s and 1990s, typified by work at the Large Electron–Positron collider, is the theme of a symposium being held at CERN from 10–13 November 2025. (Courtesy: CERN)

Call it millennial, generation Y or fin de siècle, high-energy physics during the last two decades of the 20th century had a special flavour. The principal pieces of the Standard Model of particle physics had come together remarkably tightly – so tightly, in fact, that physicists had to rethink what instruments to build, what experiments to plan, and what theories to develop to move forward. But it was also an era when the hub of particle physics moved from the US to Europe.

The momentous events of the 1980s and 1990s will be the focus of the 4th International Symposium on the History of Particle Physics, which is being held from 10–13 November at CERN. The meeting will take place more than four decades after the first symposium in the series was held at Fermilab near Chicago in 1980. Entitled The Birth of Particle Physics, that initial meeting covered the years 1930 to 1950.

Speakers back then included trailblazers such as Paul Dirac, Julian Schwinger and Victor Weisskopf. They reviewed discoveries such as the neutron and the positron and the development of relativistic quantum field theory. Those two decades before 1950 were a time when particle physicists “constructed the room”, so to speak, in which the discipline would be based.

The second symposium – Pions to Quarks – was also held at Fermilab and covered the 1950s. Accelerators could now create particles seen in cosmic-ray collisions, populating what Robert Oppenheimer called the “particle zoo”. Certain discoveries of this era, such as parity violation in the weak interaction, were so shocking that C N Yang likened it to having a blackout and not knowing if the room would look the same when the lights came back on. Speakers at that 1985 event included Luis Alvarez, Val Fitch, Abdus Salam, Robert Wilson and Yang himself.

The third symposium, The Rise of the Standard Model, was held in Stanford, California, in 1992 and covered the 1960s and 1970s. It was a time not of blackouts but of disruptions that dimmed the lights. Charge-parity violation and the existence of two types of neutrino were found in the 1960s, followed in the 1970s by deep inelastic electron scattering and quarks, neutral currents, a fourth quark and gluon jets.

These discoveries decimated alternative approaches to quantum field theory, which was duly established for good as the skeleton of high-energy physics. The era culminated with Sheldon Glashow, Abdus Salam and Steven Weinberg winning the 1979 Nobel Prize for Physics for their part in establishing the Standard Model. Speakers at that third symposium included Murray Gell-Mann, Leon Lederman and Weinberg himself.

Changing times

The upcoming CERN event, on whose programme committee I serve, will start exactly where the previous symposium ended. “1980 is a natural historical break,” says conference co-organizer Michael Riordan, who won the 2025 Abraham Pais Prize for History of Physics. “It begins a period of the consolidation of the Standard Model. Colliders became the main instruments, and were built with specific standard-model targets in mind. And the centre of gravity of the discipline moved across the Atlantic to Europe.”

The conference will address physics that took place at CERN’s Super Proton Synchrotron (SPS), where the W and Z particles were discovered in 1983. It will also examine the SPS’s successor – the Large Electron–Positron (LEP) collider. Opened in 1989, LEP was used to make precise measurements of these particles and of other Standard Model predictions until it was controversially shut down in 2000 to make way for the Large Hadron Collider (LHC).

Speakers at the meeting will also discuss Fermilab’s Tevatron, where the top quark – another Standard Model component – was found in 1995. Work at the Stanford Linear Accelerator Center, DESY in Germany, and Tsukuba, Japan, will be tackled too. There will also be coverage of failed accelerator projects, which – perhaps perversely – can be just as interesting and revealing as successful facilities.

In particular, I will speak about ISABELLE, a planned and partially built proton–proton collider at Brookhaven National Laboratory, which was terminated in 1983 to make way for the far more ambitious Superconducting Super Collider (SSC). ISABELLE was then transformed into the Relativistic Heavy Ion Collider (RHIC), which was completed in 1999 and took nuclear physics into the high-energy regime.

Riordan will talk about the fate of the SSC, which was supposed to discover the Higgs boson or whatever else plays its mass-generating role. But in 1993 the US Congress terminated that project, a traumatic episode for US physics, about which Riordan co-authored the book Tunnel Visions. Its cancellation signalled the end of the glory years for US particle physics and the realization of the need for international collaborations in ever-costlier accelerator projects.

The CERN meeting will also explore more positive developments such as the growing convergence of particle physics and cosmology during the 1980s and 1990s. During that time, researchers stepped up their studies of dark matter, neutrino oscillations and supernovas. It was a period that saw the construction of underground detectors at Gran Sasso in Italy and Kamiokande in Japan.

Other themes to be explored include the development of the Web – which transformed the world – as well as the impact of globalization, the end of the Cold War, the rise of high-energy physics in China, and the state of physics in Russia, the former Soviet republics and the former Eastern Bloc countries. While particle physics became more global, it also grew more dependent on, and vulnerable to, changing political ambitions, economic realities and international collaborations. The growing importance of diversity, communication and knowledge transfer will be looked at too.

The critical point

The years between 1980 and 2000 were a distinct period in the history of particle physics, one that unfolded in the afterglow of the triumph of the Standard Model. The lights in high-energy physics did not go out or even dim, to use Yang’s metaphor. Instead, the Standard Model shed so much light on high-energy physics that the effort and excitement focused on consolidating the model.

Particle physics, during those years, was all about finding the deeply hidden outstanding pieces, developing the theory, and connecting with other areas of physics. The triumph was so complete that physicists began to wonder what bigger and more comprehensive structure the Standard Model’s “room” might be embedded in – what was “beyond the Standard Model”. A quarter of a century on, our attempts to make out that structure are still ongoing.

The post Relive the two decades when physicists basked in the afterglow of the Standard Model appeared first on Physics World.

  •  

Axiom and Spacebilt to establish ISS data center node

PARIS, France – Axiom Space and Spacebilt announced plans Sept. 16 to deliver a data center node with optical communications links to the International Space Station in 2027. The Axiom Orbital Data Center Node on ISS, called AxODC Node ISS, being developed in collaboration with Spacebilt, will feature an optical communications terminal from Skyloom plus […]

The post Axiom and Spacebilt to establish ISS data center node appeared first on SpaceNews.

  •  

Are we heading for a future of superintelligent AI mathematicians?

When researchers at Microsoft released a list of the 40 jobs most likely to be affected by generative artificial intelligence (gen AI), few outsiders would have expected to see “mathematician” among them. Yet according to speakers at this year’s Heidelberg Laureate Forum (HLF), which connects early-career researchers with distinguished figures in mathematics and computer science, computers are already taking over many tasks formerly performed by human mathematicians – and the humans have mixed feelings about it.

One of those expressing disquiet is Yang-Hui He, a mathematical physicist at the London Institute for Mathematical Sciences. In general, He is extremely keen on AI. He’s written a textbook about the use of AI in mathematics, and he told the audience at an HLF panel discussion that he’s been peddling machine-learning techniques to his mathematical physics colleagues since 2017.

More recently, though, He has developed concerns about gen AI specifically. “It is doing mathematics so well without any understanding of mathematics,” he said, a note of wonder creeping into his voice. Then, more plaintively, he added, “Where is our place?”

AI advantages

Some of the things that make today’s gen AI so good at mathematics are the same as the ones that made Google’s DeepMind so good at the game of Go. As the theoretical computer scientist Sanjeev Arora pointed out in his HLF talk, “The reason it’s better than humans is that it’s basically tireless.” Put another way, if the 20th-century mathematician Alfréd Rényi once described his colleagues as “machines for turning coffee into theorems”, one advantage of 21st-century AI is that it does away with the coffee.

Arora, however, sees even greater benefits. In his view, AI’s ability to use feedback to improve its own performance – a technique known as reinforcement learning – is particularly well-suited to mathematics.

In the standard version of reinforcement learning, Arora explains, the AI model is given a large bank of questions, asked to generate many solutions and told to use the most correct ones (as labelled by humans) to refine itself. But because mathematics is so formalized, with answers that are verifiably true or false, Arora thinks it will soon be possible to replace human correctness checkers with AI “proof assistants”. Indeed, he and his colleagues at Princeton University in the US are developing tools that build on one such assistant, Lean.
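To see what “verifiably true or false” means in practice, here is a minimal example written for the Lean proof assistant (a generic illustration, not taken from any specific research project): the software mechanically checks every step, so the result is accepted only if the proof is actually correct.

```lean
-- A minimal Lean 4 theorem: the proof assistant verifies each step
-- mechanically, so no human referee is needed to confirm the result.
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b
```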

Humans in the loop?

But why stop there? Why not use AI to generate mathematical questions as well as producing and checking their solutions? Indeed, why not get it to write a paper, peer review it and publish it for its fellow AI mathematicians – which are, presumably, busy combing the literature for information to help them define new questions?

Arora clearly thinks that’s where things are heading, and many of his colleagues seem to agree, at least in part. His fellow HLF panellist Javier Gómez-Serrano, a mathematician at Brown University in the US, noted that AI is already generating results in a day or two that would previously have taken a human mathematician months. “Progress has been quite quick,” he said.

The panel’s final member, Maia Fraser of the University of Ottawa, Canada, likewise paid tribute to the “incredible things that are possible with AI now”.  But Fraser, who works on mathematical problems related to neuroscience, also sounded a note of caution. “My concern is the speed of the changes,” she told the HLF audience.

The risk, Fraser continued, is that some of these changes may end up happening by default, without first considering whether humans want or need them. While we can’t un-invent AI, “we do have agency” over what we want, she said.

So, do we want a world in which AI mathematicians take humans “out of the loop” entirely? For He, the benefits may outweigh the disadvantages. “I really want to see a proof of the Riemann hypothesis,” he said,  to ripples of laughter. If that means that human mathematicians “become priests to oracles”, He added, so be it.

The post Are we heading for a future of superintelligent AI mathematicians? appeared first on Physics World.

  •  

Space–time crystal emerges in a liquid crystal

The first-ever “space–time crystal” has been created in the US by Hanqing Zhao and Ivan Smalyukh at the University of Colorado Boulder. The system is patterned in both space and time and comprises a rigid lattice of topological solitons that are sustained by steady oscillations in the orientations of liquid crystal molecules.

In an ordinary crystal, atomic or molecular structures repeat at periodic intervals in space. In 2012, however, Frank Wilczek suggested that systems might also exist with quantum states that repeat at perfectly periodic intervals in time – even as they remain in their lowest-energy state.

First observed experimentally in 2017, these time crystals are puzzling to physicists because they spontaneously break time–translation symmetry, which states that the laws of physics are the same no matter when you observe them. In contrast, a time crystal continuously oscillates over time, without consuming energy.

A space–time crystal is even more bizarre. In addition to breaking time–translation symmetry, such a system would also break spatial symmetry, just like the repeating molecular patterns of an ordinary crystal. Until now, however, a space–time crystal had not been observed directly.

Rod-like molecules

In their study, Zhao and Smalyukh created a space–time crystal in the nematic phase of a liquid crystal. In this phase the crystal’s rod-like molecules align parallel to each other and also flow like a liquid. Building on computer simulations, they confined the liquid crystal between two glass plates coated with a light-sensitive dye.

“We exploited strong light–matter interactions between dye-coated, light-reconfigurable surfaces, and the optical properties of the liquid crystal,” Smalyukh explains.

When the researchers illuminate the top plate with linearly polarized light at constant intensity, the dye molecules rotate to align perpendicular to the direction of polarization. This reorients nearby liquid crystal molecules, and the effect propagates deeper into the bulk. However, the influence weakens with depth, so that molecules farther from the top plate are progressively less aligned.

As light travels through this gradually twisting structure, its linear polarization is transformed, becoming elliptically polarized by the time it reaches the bottom plate. The dye molecules there become aligned with this new polarization, altering the liquid crystal alignment near the bottom plate. These changes propagate back upward, influencing molecules near the top plate again.
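The optical effect at the heart of this feedback – linear polarization becoming elliptical as light crosses a twisted birefringent medium – can be sketched with standard Jones calculus. The snippet below is a minimal illustration with invented parameters, not the authors’ model: it propagates an x-polarized beam through a stack of thin layers whose optic axis twists with depth and reports the degree of circularity of the light that emerges.

```python
import numpy as np

# Minimal Jones-calculus sketch (not the authors' model): light that enters
# linearly polarized and crosses a birefringent medium whose optic axis twists
# with depth emerges elliptically polarized. All parameters are illustrative.

def waveplate(theta, delta):
    """Jones matrix of a thin birefringent layer with fast axis at angle
    theta and retardance delta (both in radians)."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    retarder = np.diag([np.exp(-1j * delta / 2), np.exp(1j * delta / 2)])
    return rot @ retarder @ rot.T

n_layers = 200
total_twist = np.pi / 2        # assumed 90-degree twist of the optic axis
total_retardance = np.pi       # assumed overall retardance of the cell

E = np.array([1.0, 0.0])       # linearly polarized input along x
for i in range(n_layers):
    E = waveplate(total_twist * i / n_layers, total_retardance / n_layers) @ E

# Stokes parameter S3/S0: 0 for linear light, +/-1 for circular polarization,
# anything in between indicates elliptical polarization at the bottom plate.
s3 = 2 * np.imag(np.conj(E[0]) * E[1])
print("S3/S0 at exit:", s3 / np.sum(np.abs(E) ** 2))
```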

Feedback loop

This is a feedback loop, with the top and bottom plates continuously influencing each other via the polarized light passing through the liquid crystal.

“These light-powered dynamics in confined liquid crystals leads to the emergence of particle-like topological solitons and the space–time crystallinity,” Smalyukh says.

In this environment, particle-like topological solitons emerge as stable, localized twists in the liquid crystal’s orientation that do not decay over time. Like particles, the solitons move and interact with each other while remaining intact.

Once the feedback loop is established, these solitons emerge in a repeating lattice-like pattern. This arrangement not only persists as the feedback loop continues, but is sustained by it – a clear sign that the system exhibits crystalline order in time and space simultaneously.

Accessible system

Having confirmed their conclusions with simulations, Zhao and Smalyukh are confident this is the first experimental demonstration of a space–time crystal. The discovery that such an exotic state can exist in a classical, room-temperature system may have important implications.

“This is the first time that such a phenomenon is observed emerging in a liquid crystalline soft matter system,” says Smalyukh. “Our study calls for a re-examining of various time-periodic phenomena to check if they meet the criteria of time-crystalline behaviour.”

Building on these results, the duo hope to broaden the scope of time crystal research beyond a purely theoretical and experimental curiosity. “This may help expand technological utility of liquid crystals, as well as expand the currently mostly fundamental focus of studies of time crystals to more applied aspects,” Smalyukh adds.

The research is described in Nature Materials.

The post Space–time crystal emerges in a liquid crystal appeared first on Physics World.

  •