
High-speed 3D microscope improves live imaging of fast biological processes

A new high-speed multifocus microscope could facilitate discoveries in developmental biology and neuroscience thanks to its ability to image rapid biological processes over the entire volume of tiny living organisms in real time.

The pictures from many 3D microscopes are obtained sequentially by scanning through different depths, making them too slow for accurate live imaging of fast-moving natural functions in individual cells and microscopic animals. Even current multifocus microscopes that capture 3D images simultaneously have either relatively poor image resolution or can only image to shallow depths.

In contrast, the new 25-camera “M25” microscope – developed during his doctorate by Eduardo Hirata-Miyasaki and his supervisor Sara Abrahamsson, both then at the University of California Santa Cruz, together with collaborators at the Marine Biological Laboratory in Massachusetts and the New Jersey Institute of Technology – enables high-resolution 3D imaging over a large field-of-view, with each camera capturing 180 × 180 × 50 µm volumes at a rate of 100 per second.

“Because the M25 microscope is geared towards advancing biomedical imaging we wanted to push the boundaries for speed, high resolution and looking at large volumes with a high signal-to-noise ratio,” says Hirata-Miyasaki, who is now based at the Chan Zuckerberg Biohub in San Francisco.

The M25, detailed in Optica, builds on Abrahamsson’s previous work on diffractive multifocus microscopy, explains Hirata-Miyasaki. In order to capture multiple focal planes simultaneously, the researchers devised a multifocus grating (MFG) for the M25. This diffraction grating splits the image beam coming from the microscope into a 5 × 5 grid of evenly illuminated 2D focal planes, each of which is recorded on one of the 25 synchronized machine vision cameras, such that every camera in the array captures a 3D volume focused at a different depth. To avoid blurred images, a custom-designed blazed grating in front of each camera lens corrects for the chromatic dispersion (which spreads out light of different wavelengths) introduced by the MFG.
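
As a rough illustration of the geometry described above (a sketch only, assuming evenly spaced planes; the real instrument’s focal-plane spacing is set by its grating design), one can map each camera in the 5 × 5 array to a nominal focal depth across the 50 µm axial range:

```python
# Hedged sketch (not the authors' code): assign each of the 25 cameras in a
# 5 x 5 multifocus grid a nominal focal-plane depth, assuming the 50 um
# imaging depth reported in the article is divided evenly across the array.
N_CAMERAS = 25
DEPTH_RANGE_UM = 50.0  # axial extent of each captured volume

def camera_focal_depths(n_cameras=N_CAMERAS, depth_range=DEPTH_RANGE_UM):
    """Return the focal-plane depth (um) assigned to each camera index."""
    step = depth_range / (n_cameras - 1)
    return [i * step for i in range(n_cameras)]

depths = camera_focal_depths()  # 25 planes spanning 0 to 50 um
```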

The team used computer simulations to reveal the optimal designs for the diffractive optics, before creating them at the University of California Santa Barbara nanofabrication facility by etching nanometre-scale patterns into glass. To encourage widespread use of the M25, the researchers have published the fabrication recipes for their diffraction gratings and made the bespoke software for acquiring the microscope images open source. In addition, the M25 mounts to the side port of a standard microscope, and uses off-the-shelf cameras and camera lenses.

The M25 can image a range of biological systems, since it can be used for fluorescence microscopy – in which fluorescent dyes or proteins are used to tag structures or processes within cells – and can also work in transmission mode, in which light is shone through transparent samples. The latter allows small organisms like C. elegans larvae, which are commonly used for biological research, to be studied without disrupting them.

The researchers performed various imaging tests using the prototype M25, including observations of the natural swimming motion of entire C. elegans larvae. This ability to study cellular-level behaviour in microscopic organisms over their whole volume may pave the way for more detailed investigations into how the nervous system of C. elegans controls its movement, and how genetic mutations, diseases or medicinal drugs affect that behaviour, Hirata-Miyasaki tells Physics World. He adds that such studies could further our understanding of human neurodegenerative and neuromuscular diseases.

“We live in a 3D world that is also very dynamic. So with this microscope I really hope that we can keep pushing the boundaries of acquiring live volumetric information from small biological organisms, so that we can capture interactions between them and also [see] what is happening inside cells to help us understand the biology,” he continues.

As part of his work at the Chan Zuckerberg Biohub, Hirata-Miyasaki is now developing deep-learning models for analysing multichannel dynamic live datasets of cells and organisms, like those acquired by the M25, “so that we can extract as much information as possible and learn from their dynamics”.

Meanwhile, Abrahamsson, who is currently working in industry, hopes that other microscopy development labs will make their own M25 systems. She is also considering commercializing the instrument to help ensure its widespread use.

The post High-speed 3D microscope improves live imaging of fast biological processes appeared first on Physics World.


New hollow-core fibres break a 40-year limit on light transmission

Optical fibres form the backbone of the Internet, carrying light signals across the globe. But some light is always lost as it travels, becoming attenuated by about 0.14 decibels per kilometre even in the best fibres. That means signals must be amplified every few dozen kilometres – a performance that hasn’t improved in nearly four decades.

Physicists at the University of Southampton, UK, have now developed an alternative that could call time on that decades-long lull. Writing in Nature Photonics, they report hollow-core fibres that exhibit 35% less attenuation while transmitting signals 45% faster than standard glass fibres.

“A bit like a soap bubble”

The core of conventional fibres is made of pure glass and is surrounded by a cladding of slightly different glass. Because the core has a higher refractive index than the cladding, light entering the fibre reflects internally, bouncing back and forth in a process known as total internal reflection. This effect traps the light and guides it along the fibre’s length.
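
A hedged numerical aside (with typical, assumed index values, not figures from the paper): Snell’s law gives the critical angle beyond which total internal reflection traps light in the core.

```python
import math

# Illustrative calculation: the critical angle for total internal reflection
# at a core/cladding boundary. Both indices are assumed, typical values for
# doped-silica fibre, not numbers taken from the paper.
n_core = 1.465      # assumed refractive index of the fibre core
n_cladding = 1.460  # assumed index of the surrounding cladding

theta_c = math.degrees(math.asin(n_cladding / n_core))
# Rays hitting the boundary at angles steeper than theta_c (measured from
# the surface normal) are totally internally reflected and stay guided.
```

Because the index contrast is tiny, the critical angle is very close to 90°, so only near-grazing rays are guided, which is exactly why the core must have the higher index.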

The Southampton team led by Francesco Poletti swapped the standard glass core for air. Because air is more transparent than glass, channelling light through it cuts down on scattering and speeds up signals. The problem is that air’s refractive index is lower, so the new fibre can’t use total internal reflection. Instead, Poletti and colleagues guided the light using a mechanism called anti-resonance, which requires the walls of the hollow core to be made from ultra-thin glass membranes.

“It’s a bit like a soap bubble,” Poletti says, explaining that such bubbles appear iridescent because their thin films reflect some wavelengths and let others through. “We designed our fibre the same way, with glass membranes that reflect light at certain frequencies back into the core.” That anti-resonant reflection, he adds, keeps the light trapped and moving through the fibre’s hollow centre.
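
To make the soap-bubble analogy concrete, here is a back-of-envelope sketch of the thin-membrane resonance condition; the membrane thickness and glass index below are illustrative assumptions, not values from the paper.

```python
import math

# Sketch of the anti-resonance condition for a thin glass membrane: the film
# is transparent (resonant, hence leaky) at wavelengths
#   lam_m = (2 * t / m) * sqrt(n**2 - 1),  m = 1, 2, ...
# and reflective (anti-resonant, hence guiding) in between. The thickness
# t and index n used here are invented, illustrative numbers.
def resonant_wavelengths_nm(t_nm, n_glass, orders=range(1, 5)):
    """Wavelengths (nm) at which a membrane of thickness t_nm leaks light."""
    return [2 * t_nm * math.sqrt(n_glass**2 - 1) / m for m in orders]

lams = resonant_wavelengths_nm(t_nm=550.0, n_glass=1.45)
# Operating the fibre between these resonances keeps light in the hollow core.
```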

Greener telecommunications

To make the new air-core fibre, the researchers stacked thin glass capillaries in a precise pattern, forming a hollow channel in the middle. Heating and drawing the stack into a hair-thin filament preserved this pattern on a microscopic scale. The finished fibre has a nested design: an air core surrounded by ultra-thin layers that provide anti-resonant guidance and cut down on leakage.

To test their design, the team measured transmission through a full spool of fibre, then cut the fibre shorter and compared the results. They also fired in light pulses and tracked the echoes. Their results show that the hollow fibres reduce attenuation to just 0.091 decibels per kilometre. This lower loss implies that fewer amplifiers would be needed in long cables, lowering costs and energy use. “There’s big potential for greener telecommunications when using our fibres,” says Poletti.
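
The amplifier-spacing argument can be illustrated with a quick calculation using the two attenuation figures quoted above (the 20 dB loss budget is an assumed, illustrative value):

```python
# Back-of-envelope comparison: distance a signal can travel before using up
# a given loss budget, for conventional fibre (0.14 dB/km) versus the new
# hollow-core fibre (0.091 dB/km). The budget itself is illustrative.
LOSS_BUDGET_DB = 20.0  # assumed span budget before re-amplification

def reach_km(attenuation_db_per_km, budget_db=LOSS_BUDGET_DB):
    """Span length (km) before the loss budget is exhausted."""
    return budget_db / attenuation_db_per_km

reach_glass = reach_km(0.14)    # roughly 143 km
reach_hollow = reach_km(0.091)  # roughly 220 km
improvement = reach_hollow / reach_glass - 1  # ~54% longer spans
```

Longer spans mean fewer amplifiers per cable, which is where the cost and energy savings come from.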

Poletti adds that reduced attenuation (and thus lower energy use) is only one of the new fibre’s advantages. At the 0.14 dB/km attenuation benchmark, the new hollow fibre supports a bandwidth of 54 THz compared to 10 THz for a normal fibre. At the reduced 0.1 dB/km attenuation, the bandwidth is still 18 THz, which is close to twice that of a normal cable. This means that a single strand can carry far more channels at once.

Perhaps the most impressive advantage is that because the speed of light is faster in air than in glass, data could travel the same distance up to 45% faster. “It’s almost the same speed light takes when we look at a distant star,” Poletti says. The resulting drop in latency, he adds, could be crucial for real-time services like online gaming or remote surgery, and could also speed up computing tasks such as training large language models.
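
The latency claim follows directly from the refractive indices involved; here is a minimal sketch, assuming n ≈ 1.46 for glass and n ≈ 1.0003 for air, over an illustrative 5000 km route:

```python
# Sketch of the latency argument: light travels at c/n in a medium, so an
# air core (n ~ 1.0003) beats a solid glass core (n ~ 1.46). Both indices
# and the 5000 km route length are assumed, illustrative values.
C_KM_PER_S = 299_792.458  # speed of light in vacuum

def one_way_latency_ms(distance_km, n_medium):
    """One-way propagation delay (ms) through a medium of index n_medium."""
    return distance_km * n_medium / C_KM_PER_S * 1000

d = 5000.0
t_glass = one_way_latency_ms(d, 1.46)    # ~24.4 ms
t_air = one_way_latency_ms(d, 1.0003)    # ~16.7 ms
speedup = t_glass / t_air - 1            # ~46% faster in air
```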

Field testing

As well as the team’s laboratory tests, Microsoft has begun testing the fibres in real systems, installing segments in its network and sending live traffic through them. These trials show that the hollow-core design works with existing telecom equipment, opening the door to gradual rollout. In the longer run, adapting amplifiers and other gear currently tuned for solid glass fibres could unlock even better performance.

Poletti believes the team’s new fibres could one day replace existing undersea cables. “I’ve been working on this technology for more than 20 years,” he says, adding that over that time, scepticism has given way to momentum, especially now with Microsoft as an industry partner. But scaling up remains a real hurdle. Making short, flawless samples is one thing; mass-producing thousands of kilometres at low cost is another. The Southampton team is now refining the design and pushing toward large-scale manufacturing. They’re hopeful that improvements could slash losses by another order of magnitude and that the anti-resonant design can be tuned to different frequency bands, including those suited to new, more efficient amplifiers.

Other experts agree the advance marks a turning point. “The work builds on decades of effort to understand and perfect hollow-core fibres,” says John Ballato, whose group at Clemson University in the US develops fibres with specialty cores for high-energy laser and biomedical applications. While Ballato notes that such fibres have been used commercially in shorter-distance communications “for some years now”, he believes this work will open them up to long-haul networks.



Crainio’s Panicos Kyriacou explains how their light-based instrument can help diagnose brain injury

Traumatic brain injury (TBI), caused by a sudden impact to the head, is a leading cause of death and disability. After such an injury, the most important indicator of its severity is intracranial pressure – the pressure inside the skull. But currently, the only way to assess this is by inserting a pressure sensor into the patient’s brain. UK-based startup Crainio aims to change this by developing a non-invasive method to measure intracranial pressure using a simple optical probe attached to the patient’s forehead.

Can you explain why diagnosing TBI is such an important clinical challenge?

Every three minutes in the UK, someone is admitted to hospital with a head injury – it’s a very common problem. But when someone has a blow to the head, nobody knows how bad it is until they actually reach the hospital. TBI is something that, at the moment, cannot be assessed at the point of injury.

The period from the time of impact to the time the patient is assessed by a neurosurgical expert is known as the golden hour. And nobody knows what’s happening to the brain during this time – you don’t know how best to manage the patient, whether they have a severe TBI with intracranial pressure rising in the head, or just a concussion or a moderate TBI.

Once at the hospital, the neurosurgeons have to assess the patient’s intracranial pressure, to determine whether it is above the threshold that classifies the injury as severe. And to do that, they have to drill a hole in the head – literally – and place an electrical probe into the brain. This really is one of the most invasive non-therapeutic procedures, and you obviously can’t do this to every patient who comes in with a blow to the head. It has its risks: there is a risk of haemorrhage or of infection.

Therefore, there’s a need to develop technologies that can measure intracranial pressure more effectively, earlier and in a non-invasive manner. For many years, this was almost like a dream: “How can you access the brain and see if the pressure is rising in the brain, just by placing an optical sensor on the forehead?”

Crainio has now created such a non-invasive sensor; what led to this breakthrough?

The research goes back to 2016, at the Research Centre for Biomedical Engineering at City, University of London (now City St George’s, University of London), when the National Institute for Health Research (NIHR) gave us our first grant to investigate the feasibility of a non-invasive intracranial sensor based on light technologies. We developed a prototype, secured the intellectual property and conducted a feasibility study on TBI patients at the Royal London Hospital, the biggest trauma hospital in the UK.

It was back in 2021, before Crainio was established, that we first discovered that when we shone certain wavelengths of light, such as near-infrared, into the brain through the forehead, the optical signals coming back – known as the photoplethysmogram, or PPG – contained information about the physiology and haemodynamics of the brain.

When the pressure in the brain rises, the brain swells up, but it cannot go anywhere because the skull is like concrete. Therefore, the arteries and vessels in the brain are compressed by that pressure. PPG measures changes in blood volume as it pulses through the arteries during the cardiac cycle. If you have a viscoelastic artery that is opening and closing, the volume of blood changes and this is captured by the PPG. Now, if you have an artery that is compromised, pushed down because of pressure in the brain, that viscoelastic property is impacted and that will impact the PPG.

Changes in the PPG signal arising from compression of the vessels in the brain can therefore give us information about the intracranial pressure. And we developed algorithms to interrogate this optical signal, as well as machine learning models to estimate intracranial pressure.
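
As a highly simplified illustration of this idea (not Crainio’s actual algorithm), one can extract a single morphological feature, the pulsatile amplitude, from a PPG trace; in practice, many such features feed the machine-learning models:

```python
import math

# Toy sketch, not Crainio's algorithm: extract one morphological feature,
# the pulsatile (AC) amplitude, from a PPG trace. The premise described
# above is that compressed brain arteries damp this pulsatile swing, so
# features like this carry information about intracranial pressure.
def ppg_ac_amplitude(samples):
    """Peak-to-trough amplitude of the pulsatile component."""
    return max(samples) - min(samples)

# Synthetic one-second PPG at 100 Hz: a baseline plus a cardiac pulse.
fs, heart_rate_hz = 100, 1.2
ppg = [1.0 + 0.05 * math.sin(2 * math.pi * heart_rate_hz * t / fs)
       for t in range(fs)]
ac = ppg_ac_amplitude(ppg)  # close to 0.1 for this synthetic pulse
```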

How did the establishment of Crainio help to progress the sensor technology?

Following our research within the university, Crainio was set up in 2022. It brought together a team of experts in medical devices and optical sensors to lead the further development and commercialization of this device. And this small team worked tirelessly over the last few years to generate funding to progress the development of the optical sensor technology and bring it to a level that is ready for further clinical trials.

Panicos Kyriacou: “At Crainio we want to create a technology that could be used widely, because there is a massive need, but also because it’s affordable.” (Courtesy: Crainio)

In 2023, Crainio was successful with an Innovate UK biomedical catalyst grant, which will enable the company to engage in a clinical feasibility study, optimize the probe technology and further develop the algorithms. The company was later awarded another NIHR grant to move into a validation study.

The interest in this project has been overwhelming. We’ve had very positive feedback from the neurocritical care community. But we also see a lot of interest from communities where injury to the brain is significant, such as rugby associations.

Could the device be used in the field, at the site of an accident?

While Crainio’s primary focus is to deliver a technology for use in critical care, the system could also be used in ambulances, in helicopters, during patient transfers and beyond. The device is non-invasive: the sensor is just like a sticking plaster on the forehead, and the backend is a small box containing all the electronics. In the past few years, working in a research environment, the technology was connected to a laptop computer. But we are now transferring everything into a graphical interface, with a monitor to see the signals and the intracranial pressure values on a portable device.

Following preliminary tests on patients, Crainio is now starting a new clinical trial. What do you hope to achieve with the next measurements?

The first study, a feasibility study on the sensor technology, was done during the time when the project was within the university. The second round is led by Crainio using a more optimized probe. Learning from the technical challenges we had in the first study, we tried to mitigate them with a new probe design. We’ve also learned more about the challenges associated with the acquisition of signals, the type of patients, and how long we should monitor them.

We are now at the stage where Crainio has redeveloped the sensor and it looks amazing. The technology has received approval from the MHRA, the UK regulator, for clinical studies, and ethical approvals have been secured. This will be an opportunity to work with the new probe, which has more advanced electronics that enable more detailed acquisition of signals from TBI patients.

We are again partnering with the Royal London Hospital, as well as collaborators from the traumatic brain injury team at Cambridge and we’re expecting to enter clinical trials soon. These are patients admitted into neurocritical trauma units and they all have an invasive intracranial pressure bolt. This will allow us to compare the physiological signal coming from our intracranial pressure sensor with the gold standard.

The signals will be analysed by Crainio’s data science team, with machine learning algorithms used to look at changes in the PPG signal, extract morphological features and build models to develop the technology further. So we’re enriching the study with a more advanced technology, and this should lead to more accurate machine learning models for correctly capturing dynamic changes in intracranial pressure.


This time around, we will also record more information from the patients. We will look at CT scans to see whether scalp density and thickness have an impact. We will also collect data from commercial medical monitors within neurocritical care to see the relation between intracranial pressure and other physiological data acquired in the patients. We aim to expand our knowledge of what happens when a patient’s intracranial pressure rises – what happens to their blood pressures? What happens to other physiological measurements?

How far away is the system from being used as a standard clinical tool?

Crainio is very ambitious. We’re hoping that within the next couple of years we will progress adequately to achieve CE marking and meet all the standards necessary to launch a medical device.

The primary motivation of Crainio is to create solutions for healthcare, developing a technology that can help clinicians diagnose TBI effectively, faster, more accurately and earlier. This can only yield better outcomes and improve patients’ quality-of-life.

Of course, as a company we’re interested in being successful commercially. But the ambition here is, first of all, to keep the cost affordable. We live in a world where medical technologies need to be affordable, not only for Western nations, but for nations that cannot afford state-of-the-art technologies. So this is another of Crainio’s primary aims, to create a technology that could be used widely, because there is a massive need, but also because it’s affordable.



Optical imaging tool could help diagnose and treat sudden hearing loss

Optical coherence tomography (OCT), a low-cost imaging technology used to diagnose and plan treatment for eye diseases, also shows potential as a diagnostic tool for assessing rapid hearing loss.

Researchers at the Keck School of Medicine of USC have developed an OCT device that can acquire diagnostic quality images of the inner ear during surgery. These images enable accurate measurement of fluids in the inner ear compartments. The team’s proof-of-concept study, described in Science Translational Medicine, revealed that the fluid levels correlated with the severity of a patient’s hearing loss.

An imbalance between the two inner ear fluids, endolymph and perilymph, is associated with sudden, unexplainable hearing loss and acute vertigo, symptoms of ear conditions such as Ménière’s disease, cochlear hydrops and vestibular schwannomas. This altered fluid balance – known as endolymphatic hydrops (ELH) – occurs when the volume of endolymph increases in one compartment and the volume of perilymph decreases in the other.

Because the fluid chambers of the inner ear are so small, there has previously been no effective way to assess endolymph-to-perilymph fluid balance in a living patient. Now, the Keck OCT device enables imaging of inner ear structures in real time during mastoidectomy – a procedure performed during many ear and skull base surgeries, and which provides optical access to the lateral and posterior semicircular canals (SCCs) of the inner ear.

OCT offers a quicker, more accurate and less expensive way to see inner ear fluids, hair cells and other structures compared with the “gold standard” MRI scans. The researchers hope that ultimately, the device will evolve into an outpatient assessment tool for personalized treatments for hearing loss and vertigo. If it can be used outside a surgical suite, OCT technology could also support the development and testing of new treatments, such as gene therapies to regenerate lost hair cells in the inner ear.

Intraoperative OCT

The intraoperative OCT system, developed by senior author John Oghalai and colleagues, comprises an OCT adaptor containing the entire interferometer, which attaches to the surgical microscope, plus a medical cart containing electronic devices including the laser, detector and computer.

The OCT system uses a swept-source laser with a central wavelength of 1307 nm and a bandwidth of 89.84 nm. The scanning beam has a spot size of 28.8 µm and a depth-of-focus of 3.32 mm. The system’s axial resolution of 14.0 µm and lateral resolution of 28.8 µm provide an in-plane resolution of 403 µm².
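
For readers who want to check these figures, the short sketch below applies the textbook Gaussian-spectrum estimate of OCT axial resolution to the quoted source parameters; the measured 14.0 µm figure is larger because a real source spectrum is not perfectly Gaussian:

```python
import math

# Sketch applying standard OCT resolution relations to the reported numbers.
# The Gaussian-spectrum formula gives a textbook lower bound on axial
# resolution; the paper's measured 14.0 um reflects the source's real
# (non-Gaussian) spectral shape.
lam0_um = 1.307     # central wavelength, um
dlam_um = 0.08984   # spectral bandwidth, um

# Axial resolution for an ideal Gaussian spectrum: (2 ln 2 / pi) * lam0^2 / dlam
axial_gaussian_um = (2 * math.log(2) / math.pi) * lam0_um**2 / dlam_um  # ~8.4 um

# The quoted in-plane resolution is simply axial x lateral:
in_plane_um2 = 14.0 * 28.8  # = 403.2 um^2, matching the reported ~403 um^2
```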

The laser output is directed into a 90:10 optical fibre fused coupler, with the 10% portion illuminating the interferometer’s reference arm. The other 90% illuminates the sample arm, passes through a fibre-optic circulator, and is combined with a red aiming beam that’s used to visually position the scanning beam on the region-of-interest.

After the OCT and aiming beams are guided onto the sample for scanning, and the interferometric signal needed for OCT imaging is generated, two output ports of the 50:50 fibre optic coupler direct the light signal into a balanced photodetector for conversion into an electronic signal. A low-pass dichroic mirror allows back-reflected visible light to pass through into an eyepiece and a camera. The surgeon can then use the eyepiece and real-time video to ensure correct positioning for the OCT imaging.

Feasibility study

The team performed a feasibility study on 19 patients undergoing surgery at USC to treat Ménière’s disease (an inner-ear disorder), vestibular schwannoma (a benign tumour) or middle-ear infection with normal hearing (the control group). All surgical procedures required a mastoidectomy.

Immediately after performing the mastoidectomy, the surgeon positioned the OCT microscope with the red aiming beam targeted at the SCCs of the inner ear. After acquiring a 3D volume image of the fluid compartments in the inner ear, which took about 2 min, the OCT microscope was removed from the surgical suite and the surgical procedure continued.

The OCT system could clearly distinguish the two fluid chambers within the SCCs. The researchers determined that higher endolymph levels correlated with patients having greater hearing loss. In addition to accurately measuring fluid levels, the system revealed that patients with vestibular schwannoma had higher endolymph-to-perilymph ratios than patients with Ménière’s disease, and that compared with the controls, both groups had increased endolymph and reduced perilymph, indicating ELH.

The success of this feasibility study may help improve current microsurgery techniques, by guiding complex temporal bone surgery that requires drilling close to the inner ear. OCT technology could help reduce surgical damage to delicate ear structures and better distinguish brain tumours from healthy tissue. The OCT system could also be used to monitor the endolymph-to-perilymph ratio in patients with Ménière’s disease undergoing endolymphatic shunting, to verify that the procedure adequately decompresses the endolymphatic space. Efforts to make a smaller, less expensive system for these types of surgical use are underway.

The researchers are currently working to improve the software and image processing techniques in order to obtain images from patients without having to remove the mastoid bone, which would enable use of the OCT system for outpatient diagnosis.

The team also plans to adapt a handheld version of an OCT device currently used to image the tympanic membrane and middle ear to enable imaging of the human cochlea in the clinic. Imaging down the ear canal non-invasively offers many potential benefits when diagnosing and treating patients who do not require surgery. For example, patients determined to have ELH could be diagnosed and treated rapidly, a process that currently takes 30 days or more.

Oghalai and colleagues are optimistic about improvements being made in OCT technology, particularly in penetration depth and tissue contrast. “This will enhance the utility of this imaging modality for the ear, complementing its potential to be completely non-invasive and expanding its indication to a wider range of diseases,” they write.



Switchable metasurfaces deliver stronger light control

A team of researchers in Sweden has demonstrated how smart optical metasurfaces can respond far more strongly to incoming light when switched to their conducting states. By fine-tuning the spacing between arrays of nanoantennae on a polymer metasurface, Magnus Jonsson and colleagues at Linköping University were able to generate nonlocal electromagnetic coupling between the antennae – vastly strengthening the metasurface’s optical responses.

Metasurfaces are rapidly emerging as a key component of smart optical devices, which can dynamically manipulate the wavefronts and spectral signals of incoming light. “They work in a way that nanostructures are placed in patterns on a flat surface and become receivers for light,” Jonsson explains. “Each receiver, or antenna, captures the light in a certain way and together these nanostructures allow the light to be controlled as you desire.”

One promising route towards such intelligent metasurfaces is to fabricate their antennae from conducting polymers, such as PEDOT. In such materials, the intrinsic permittivity – which determines how the material responds to electric fields, such as those from incoming light – can be manually switched by altering the oxidation state through a redox reaction. This, in turn, modifies the polymer’s carrier density and mobility, altering the number and behaviour of mobile charge carriers that contribute to its optical properties.

A key measure of how well these materials resonate with light is the “quality factor”, which describes how sharp and long-lived a resonance is. A higher quality factor signifies a stronger, more precise interaction with light, while a lower value indicates weaker and broader responses.
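
A minimal numerical illustration of this definition (with invented linewidths, purely to show the trend):

```python
# Minimal illustration of the quality factor of an optical resonance:
# Q = (resonance wavelength) / (linewidth). A narrower linewidth at the
# same resonance wavelength means a sharper, longer-lived resonance.
# The numbers below are invented for illustration, not from the paper.
def quality_factor(lambda_res_nm, linewidth_nm):
    """Dimensionless quality factor of a resonance."""
    return lambda_res_nm / linewidth_nm

q_broad = quality_factor(3000.0, 600.0)  # broad, lossy resonance: Q = 5
q_sharp = quality_factor(3000.0, 60.0)   # ten times narrower: Q = 50
```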

When PEDOT is in its metallic oxidation state, incident light will drive the resonance of surface plasmons: collective oscillations of mobile charges that are confined near the surface of the material. At specific wavelengths, these plasmons can strongly enhance electromagnetic fields – altering properties including the phase, amplitude and spectral composition of the light reflected and transmitted by the metasurface.

Alternatively, when PEDOT is switched to its insulating state, the resulting lack of available charge carriers will significantly suppress surface plasmon formation, leading to diminished optical response.

In principle, this effect offers a useful way to modulate the nanoantennae of smart metasurfaces via redox reactions. So far, however, the surface plasmons generated through this approach have only resonated weakly in response to incident light, and have quickly lost their energy after excitation – even when the polymer is switched to its metallic state. This has made the approach impractical for use in smart, switchable metasurfaces that require strong and coherent plasmonic behaviour.

Jonsson’s team addressed this problem by considering the spacing of PEDOT nanoantennae within periodic arrays. When separated at precisely the right distance, the array generated nonlocal coupling through coherent diffractive interactions – involving the constructive interference of light scattered by each antenna.

As a result, this arrangement supported collective lattice resonances (CLRs) – in which entire arrays of nanoantennae respond collectively and coherently to incident light. This drastically boosted the strength and sharpness of the material’s plasmonic response, raising its quality factor to as much as ten times that of previous conducting polymer nanoantennae. Such high-quality resonances indicate more coherent, longer-lived plasmonic modes.
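
The spacing condition behind such diffractive coupling can be sketched with the standard Rayleigh-anomaly relation (the wavelength and surrounding index below are illustrative assumptions, not the paper’s values):

```python
# Hedged sketch of the diffractive-coupling condition behind collective
# lattice resonances: first-order in-plane diffracted waves graze the
# array (the Rayleigh anomaly) when the lattice period equals the
# wavelength divided by the refractive index of the surrounding medium.
# Both numbers here are illustrative, not taken from the paper.
def rayleigh_period_nm(wavelength_nm, n_surround):
    """Lattice period (nm) at which first-order diffraction grazes the array."""
    return wavelength_nm / n_surround

period = rayleigh_period_nm(wavelength_nm=4000.0, n_surround=1.5)  # mid-IR example
```

Tuning the period near this value is what lets the scattered fields from all the antennae interfere constructively, which is why the team’s push toward visible-light CLRs requires closer antenna spacings.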

As before, the researchers could manually switch the nanoantenna array between metallic and insulating states via redox reactions, which reversibly weakened its plasmonic responses as required. This dynamic tuning offers a pathway towards electrically or chemically programmable optical behaviour.

Based on this performance, Jonsson’s team is now confident that this approach could have promising implications for the future of smart optical metasurfaces. “We show that metasurfaces made of conducting polymers seem to be able to provide sufficiently high performance to be relevant for practical applications,” says co-author Dongqing Lin.

For now, the researchers have demonstrated their approach across mid-infrared wavelengths. But with some further tweaks to their fabrication process, allowing for closer spacings between the nanoantennae and smaller antenna sizes, they aim to generate CLRs in the visible spectrum. If achieved, this could open up new opportunities for smart optical metasurfaces in cutting-edge optical applications as wide-ranging as holography, invisibility cloaking and biomedical imaging.

The study is described in Nature Communications.



New definition of second ticks closer after international optical-clock comparison

Atomic clocks are crucial to many modern technologies including satellite navigation and telecoms networks, and are also used in fundamental research. The most commonly used clock is based on caesium-133. It uses microwave radiation to excite an electron between two specific hyperfine energy levels in the atom’s ground state. This radiation has a very precise frequency, which is currently used to define the second as the SI unit of time.
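
The definition can be made concrete in two lines: the caesium-133 hyperfine transition frequency is fixed exactly by definition, and counting that many cycles of the radiation marks one second.

```python
# The SI second is defined by fixing the caesium-133 ground-state hyperfine
# transition frequency at exactly 9 192 631 770 Hz; one second elapses when
# exactly that many cycles of the microwave radiation have been counted.
CS133_HZ = 9_192_631_770  # exact by definition

period_s = 1 / CS133_HZ            # duration of one cycle, ~1.09e-10 s
one_second_cycles = CS133_HZ * 1   # cycles counted per second
```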

Atomic clocks are currently being supplanted by optical clocks, which use light rather than microwaves to excite atoms. Because optical clocks operate at higher frequencies, they are much more accurate than microwave-based timekeepers.

Despite the potential of optical atomic clocks, the international community has yet to use one to define the second. Before this can happen, metrologists must be able to compare the timekeeping of different types of optical clocks across long distances to verify that they are performing as expected. Now, as part of an EU-funded project, researchers have made a highly coordinated comparison of optical clocks across six countries in two continents: the UK, France, Germany, Italy, Finland and Japan.

Time flies

The study consisted of 38 comparisons (frequency ratios) performed simultaneously with ten different optical clocks. These were an indium ion clock at LUH in Germany; ytterbium ion clocks of two different types at PTB in Germany; a ytterbium ion clock at NPL in the UK; ytterbium atom clocks at INRIM in Italy and NMIJ in Japan; a strontium ion clock at VTT in Finland; and strontium atom clocks at LTE in France and at NPL and PTB.

To compare the clocks, the researchers linked the frequency outputs from the different systems using two methods: radio signals from satellites and laser light travelling through optical fibres. The satellite method used GPS satellite navigation signals, which were available to all the clocks in the study. The team also used customized fibre links, which allowed measurements with 100 times greater precision than the satellite technique. However, fibres could only be used for international connections between clocks in France, Germany and Italy. Short fibre links were used to connect clocks within institutes located in the UK and Germany.

A major challenge was to coordinate the simultaneous operation of all the clocks and links. Another challenge arose at the analysis stage because the results did not always confirm the expected values and there were some inconsistencies in the measurements. However, the benefit of comparing so many clocks at once and using more than one link technique is that it was often possible to identify the source of problems.
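One way redundant measurements expose a faulty link is through closure relations: the ratios measured around a closed loop of clocks (A against B, B against C, C against A) should multiply to exactly one, so any residual beyond the combined uncertainty points to a problem somewhere in the loop. The sketch below, with purely hypothetical numbers, works with fractional offsets y = r − 1 rather than the ratios themselves, since double-precision arithmetic cannot resolve parts in 10¹⁸ near 1.0.

```python
import math

# Hypothetical frequency-ratio measurements between three clocks A, B, C,
# expressed as fractional offsets y = r - 1 with fractional uncertainties.
y_ab = ( 2.1e-17, 3e-18)   # clock A vs clock B
y_bc = (-1.3e-17, 4e-18)   # clock B vs clock C
y_ca = (-0.4e-17, 3e-18)   # clock C vs clock A

# To first order, the closed loop A -> B -> C -> A should sum to zero.
closure = y_ab[0] + y_bc[0] + y_ca[0]
# Independent uncertainties add in quadrature around the loop.
sigma = math.sqrt(sum(u**2 for _, u in (y_ab, y_bc, y_ca)))

print(f"loop closure: {closure:.1e} (1-sigma allowance ~{sigma:.1e})")
if abs(closure) > 3 * sigma:
    print("inconsistency detected: at least one link or clock is off")
```

With more than three clocks, many overlapping loops can be formed, which is why a 38-ratio campaign makes it easier to localize which clock or link is misbehaving.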

Wait a second

The measurements provided a significant addition to the body of data for international clock comparisons. The uncertainties and consistency of such data will influence the choice of which optical transition (or transitions) to use in the new definition of the second. Before the redefinition, however, even lower comparison uncertainties will be required, and several other criteria must also be met, such as demonstrating that optical clocks can make regular contributions to the international atomic time scale.

Rachel Godun at NPL, who coordinated the clock comparison campaign, says that repeated measurements will be needed to build confidence that the optical clocks and links can be operated reliably and always achieve the expected performance. She also says that the community must push the measurement uncertainty below 5 parts in 10¹⁸ – the target ahead of the redefinition of the second. “More comparisons via optical fibre links are therefore needed because these have lower uncertainties than comparisons via satellite techniques”, she tells Physics World.
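To get a feel for that target, a quick calculation (illustrative only) shows what a fractional frequency uncertainty of 5 parts in 10¹⁸ means in everyday terms: two such clocks would drift apart by one second only after roughly 1/(5 × 10⁻¹⁸) seconds of running.

```python
# How long until two clocks differing by 5 parts in 1e18 disagree by 1 s?
TARGET = 5e-18
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~3.156e7 s

drift_time_s = 1.0 / TARGET             # seconds until a 1 s discrepancy
drift_time_yr = drift_time_s / SECONDS_PER_YEAR
print(f"one second of drift every {drift_time_yr:.1e} years")
```

That works out to several billion years – comfortably longer than the age of the universe is not far off, which is why such comparisons demand purpose-built fibre links rather than ordinary satellite time transfer.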

Pierre Dubé of Canada’s National Research Council says that the unprecedented number of clocks involved in the measurement campaign yielded an extensive data set of frequency ratios that was used to verify the consistency of the results and detect anomalies. Dubé, who was not involved in the study, adds that it significantly improves our knowledge of several optical frequency ratios and our confidence in the measurement methods, both of which are especially significant for the redefinition of the SI second using optical clocks.

“The optical clock community is strongly motivated to obtain the best possible set of measurements before the SI second is redefined using an optical transition (or a set of optical transitions, depending on the redefinition option chosen)”, Dubé concludes.

The research is described in Optica.

The post New definition of second ticks closer after international optical-clock comparison appeared first on Physics World.
