Atomic clocks are crucial to many modern technologies including satellite navigation and telecoms networks, and are also used in fundamental research. The most commonly used clock is based on caesium-133. It uses microwave radiation to excite an electron between two specific hyperfine energy levels in the atom’s ground state. This radiation has a very precise frequency, which is currently used to define the second as the SI unit of time.
Caesium and other microwave clocks are now being supplanted by optical clocks, which use light rather than microwaves to excite atoms. Because optical clocks operate at much higher frequencies, they are far more accurate than microwave-based timekeepers.
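A rough way to see the advantage (with illustrative numbers, not figures from the study): for the same absolute frequency uncertainty δν, the fractional error scales inversely with the carrier frequency,

\[
\left.\frac{\delta\nu}{\nu_0}\right|_{\mathrm{Cs}} \approx \frac{10^{-3}\,\mathrm{Hz}}{9.19\times 10^{9}\,\mathrm{Hz}} \approx 1\times 10^{-13},
\qquad
\left.\frac{\delta\nu}{\nu_0}\right|_{\mathrm{Sr}} \approx \frac{10^{-3}\,\mathrm{Hz}}{4.29\times 10^{14}\,\mathrm{Hz}} \approx 2\times 10^{-18},
\]

so moving from caesium's 9.19 GHz microwave transition to an optical transition such as strontium's at 429 THz divides the same absolute error by a factor of roughly 50,000.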
Despite the potential of optical atomic clocks, the international community has yet to use one to define the second. Before this can happen, metrologists must be able to compare the timekeeping of different types of optical clocks across long distances to verify that they are performing as expected. Now, as part of an EU-funded project, researchers have made a highly coordinated comparison of optical clocks across six countries on two continents: the UK, France, Germany, Italy, Finland and Japan.
Time flies
The study consisted of 38 comparisons (frequency ratios) performed simultaneously with ten different optical clocks. These were an indium ion clock at LUH in Germany; ytterbium ion clocks of two different types at PTB in Germany; a ytterbium ion clock at NPL in the UK; ytterbium atom clocks at INRIM in Italy and NMIJ in Japan; a strontium ion clock at VTT in Finland; and strontium atom clocks at LTE in France and at NPL and PTB.
To compare the clocks, the researchers linked the frequency outputs from the different systems using two methods: radio signals from satellites and laser light travelling through optical fibres. The satellite method used GPS satellite navigation signals, which were available to all the clocks in the study. The team also used customized fibre links, which allowed measurements with 100 times greater precision than the satellite technique. However, fibres could only be used for international connections between clocks in France, Germany and Italy. Short fibre links were used to connect clocks within institutes located in the UK and Germany.
A major challenge was to coordinate the simultaneous operation of all the clocks and links. Another challenge arose at the analysis stage because the results did not always confirm the expected values and there were some inconsistencies in the measurements. However, the benefit of comparing so many clocks at once and using more than one link technique is that it was often possible to identify the source of problems.
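One way such cross-checks work is through "closure" tests: multiplying the measured frequency ratios around a loop of three clocks should give exactly one, so a significant departure flags a faulty clock or link. The sketch below is purely illustrative (the offsets and uncertainties are invented, not taken from the campaign):

```python
import math

def closure_significance(frac_offsets, frac_uncertainties):
    """Closure test for frequency ratios measured around a loop of clocks.

    Each measured ratio is written as 1 + x, where x is its small fractional
    offset from the expected value.  Around a closed loop the ratios should
    multiply to 1, so to first order the offsets should sum to zero.
    Returns the closure expressed in units of the combined uncertainty.
    """
    closure = sum(frac_offsets)                     # ~ (product of ratios) - 1
    combined = math.sqrt(sum(u ** 2 for u in frac_uncertainties))
    return closure / combined

# Hypothetical fractional offsets for the ratios A/B, B/C and C/A,
# each measured with a fractional uncertainty of 5 parts in 10^17
offsets = [+2.0e-17, -5.0e-17, +3.1e-17]
print(f"loop closes to within {closure_significance(offsets, [5e-17] * 3):.2f} sigma")
```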
Wait a second
The measurements provided a significant addition to the body of data for international clock comparisons. The uncertainties and consistency of such data will influence the choice of which optical transition(s) to use in the new definition of the second. However, before the redefinition, even lower uncertainties will be required in the comparisons. Several other criteria must also be met, such as demonstrating that optical clocks can make regular contributions to the international atomic time scale.
Rachel Godun at NPL, who coordinated the clock comparison campaign, says that repeated measurements will be needed to build confidence that the optical clocks and links can be operated reliably and always achieve the expected performance. She also says that the community must push towards lower measurement uncertainties to reach less than 5 parts in 10¹⁸ – which is the target ahead of the redefinition of the second. “More comparisons via optical fibre links are therefore needed because these have lower uncertainties than comparisons via satellite techniques”, she tells Physics World.
Pierre Dubé of Canada’s National Research Council says that the unprecedented number of clocks involved in the measurement campaign yielded an extensive data set of frequency ratios that were used to verify the consistency of the results and detect anomalies. Dubé, who was not involved in the study, adds that it significantly improves our knowledge of several optical frequency ratios and our confidence in the measurement methods, which are especially significant for the redefinition of the SI second using optical clocks.
“The optical clock community is strongly motivated to obtain the best possible set of measurements before the SI second is redefined using an optical transition (or a set of optical transitions, depending on the redefinition option chosen)”, Dubé concludes.
Light has always played a central role in healthcare, enabling a wide range of tools and techniques for diagnosing and treating disease. Nick Stone from the University of Exeter is a pioneer in this field, working with technologies ranging from laser-based cancer therapies to innovative spectroscopy-based diagnostics. Stone was recently awarded the Institute of Physics’ Rosalind Franklin Medal and Prize for developing novel Raman spectroscopic tools for rapid in vivo cancer diagnosis and monitoring. Physics World’s Tami Freeman spoke with Stone about his latest research.
What is Raman spectroscopy and how does it work?
Think about how we see the sky. It is blue due to elastic (specifically Rayleigh) scattering – when an incident photon scatters off a particle without losing any energy. But in about one in a million events, photons interacting with molecules in the atmosphere will be inelastically scattered. This changes the energy of the photon as some of it is taken by the molecule to make it vibrate.
If you shine laser light on a molecule and cause it to vibrate, the photon that is scattered from that molecule will be shifted in energy by a specific amount relating to the molecule’s vibrational mode. Measuring the wavelength of this inelastically scattered light reveals which molecule it was scattered from. This is Raman spectroscopy.
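In wavenumber terms the Raman shift is simply the difference between the reciprocal wavelengths of the incident and scattered light,

\[
\tilde{\nu}_{\mathrm{shift}} = \frac{1}{\lambda_{\mathrm{laser}}} - \frac{1}{\lambda_{\mathrm{scattered}}}.
\]

As an illustrative example (the numbers are typical rather than specific to the work described here), an 830 nm laser photon Stokes-shifted by a 1450 cm⁻¹ vibration emerges at \( \lambda_{\mathrm{scattered}} = (1/830\ \mathrm{nm} - 1450\ \mathrm{cm}^{-1})^{-1} \approx 944\ \mathrm{nm} \).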
Because most of the time we’re working at room or body temperatures, most of what we observe is Stokes Raman scattering, in which the laser photons lose energy to the molecules. But if a molecule is already vibrating in an excited state (at higher temperature), it can give up energy and shift the laser photon to a higher energy. This anti-Stokes spectrum is much weaker, but can be very useful – as I’ll come back to later.
How are you using Raman spectroscopy for cancer diagnosis?
A cell in the body is basically a nucleus (one set of molecules) surrounded by the cytoplasm (another set of molecules). These molecules change subtly depending on the phenotype [set of observable characteristics] of the particular cell. If you have a genetic mutation, which is what drives cancer, the cell tends to change its relative expression of proteins, nucleic acids, glycogen and so on.
We can probe these molecules with light, and therefore determine their molecular composition. Cancer diagnostics involves identifying minute changes between the different compositions. Most of our work has been in tissues, but it can also be done in biofluids such as tears, blood plasma or sweat. You build up a molecular fingerprint of the tissue or cell of interest, and then you can compare those fingerprints to identify the disease.
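As a loose illustration of the fingerprint-comparison step (a minimal sketch with made-up data, not the group's actual analysis pipeline), Raman spectra are often reduced to a handful of principal components and then classified, for example with linear discriminant analysis:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Made-up data: 200 spectra, each with 1000 wavenumber bins,
# labelled 0 (benign) or 1 (cancer)
rng = np.random.default_rng(0)
spectra = rng.normal(size=(200, 1000))
labels = rng.integers(0, 2, size=200)

# Compress each fingerprint to 10 principal components, then classify
model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
scores = cross_val_score(model, spectra, labels, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")   # ~0.5 for random data
```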
We tend to perform measurements under a microscope and, because Raman scattering is a relatively weak effect, this requires good optical systems. We’re trying to use a single wavelength of light to probe molecules of interest and look for wavelengths that are shifted from that of the laser illumination. Technology improvements have provided holographic filters that remove the incident laser wavelength readily, and less complex systems that enable rapid measurements.
Raman spectroscopy can classify tissue samples removed in cancer surgery, for example. But can you use it to detect cancer without having to remove tissue from the patient?
Absolutely, we’ve developed probes that fit inside an endoscope for diagnosing oesophageal cancer.
Earlier in my career I worked on photodynamic therapy. We would look inside the oesophagus with an endoscope to find disease, then give the patient a phototoxic drug that would target the diseased cells. Shining light on the drug causes it to generate singlet oxygen that kills the cancer cells. But I realized that the light we were using could also be used for diagnosis.
Currently, to find this invisible disease, you have to take many, many biopsies. But our in vivo probes allow us to measure the molecular composition of the oesophageal lining using Raman spectroscopy and determine where to take biopsies from. Oesophageal cancer has a really bad outcome once it’s diagnosed symptomatically, but if you can find the disease early you can deliver effective treatments. That’s what we’re trying to do.
Tiny but mighty (left) A Raman probe protruding from the instrument channel of an endoscope. (right) Oliver Old, consultant surgeon, passing the probe down an endoscope for a study led by the University of Exeter, with the University of Bristol and Gloucestershire Hospitals NHS Foundation Trust as partners. (Courtesy: RaPIDE Team)
The very weak Raman signal, however, causes problems. With a microscope, we can use advanced filters to remove the incident laser wavelength. But sending light down an optical fibre generates unwanted signal, and we also need to remove elastically scattered light from the oesophagus. So we had to put a filter on the end of this tiny 2 mm fibre probe. In addition, we don’t want to collect photons that have travelled a long way through the body, so we needed a confocal system. We built a really complex probe, working in collaboration with John Day at the University of Bristol – it took a long time to optimize the optics and the engineering.
Are there options for diagnosing cancer in places that can’t be accessed via an endoscope?
Yes, we have also developed a smart needle probe that’s currently in trials. We are using this to detect lymphomas – the primary cancer in lymph nodes – in the head and neck, under the armpit and in the groin.
If somebody comes forward with lumps in these areas, they usually have a swollen lymph node, which shows that something is wrong. Most often it’s following an infection and the node hasn’t gone back down in size.
This situation usually requires surgical removal of the node to decide whether cancer is present or not. Instead, we can just insert our needle probe and send light in. By examining the scattered light and measuring its fingerprint we can identify if it’s lymphoma. Indeed, we can actually see what type of cancer it is and where it has come from.
Novel needle Nick Stone demonstrates a prototype Raman needle probe. (Courtesy: Matthew Jones Photography)
Currently, the prototype probe is quite bulky because we are trying to make it low in cost. It has to have a disposable tip, so we can use a new needle each time, and the filters and optics are all in the handpiece.
Are you working on any other projects at the moment?
As people don’t particularly want a needle stuck in them, we are now trying to understand where the photons travel if you just illuminate the body. Red and near-infrared light travel a long way through the body, so we can use near-infrared light to probe photons that have travelled many, many centimetres.
We are doing a study looking at calcifications in a very early breast cancer called ductal carcinoma in situ (DCIS) – it’s a Cancer Research UK Grand Challenge called DCIS PRECISION, and we are just moving on to the in vivo phase.
Calcifications aren’t necessarily a sign of breast cancer – they are mostly benign; but in patients with DCIS, the composition of the calcifications can show how their condition will progress. Mammographic screening is incredibly good at picking up breast cancer, but it’s also incredibly good at detecting calcifications that are not necessarily breast cancer yet. The problem is how to treat these patients, so our aim is to determine whether the calcifications are completely fine or if they require biopsy.
We are using Raman spectroscopy to understand the composition of these calcifications, which are different in patients who are likely to progress onto invasive disease. We can do this in biopsies under a microscope and are now trying to see whether it works using transillumination, where we send near-infrared light through the breast. We could use this to significantly reduce the number of biopsies, or monitor individuals with DCIS over many years.
Light can also be harnessed to treat disease, for example using photodynamic therapy as you mentioned earlier. Another approach is nanoparticle-based photothermal therapy – how does this work?
This is an area I’m really excited about. Nanoscale gold can enhance Raman signals by many orders of magnitude – it’s called surface-enhanced Raman spectroscopy. We can also “label” these nanoparticles by adding functional molecules to their surfaces. We’ve used unlabelled gold nanoparticles to enhance signals from the body and labelled gold to find things.
During that process, we also realized that we can use gold to provide heat. If you shine light on gold at its resonant frequency, the gold heats up, and this can cause cell death. You could easily blow holes in people with a big enough laser and lots of nanoparticles – but what we want to do is more subtle. We’re decorating the tiny gold nanoparticles with a label that will tell us their temperature.
By measuring the ratio between Stokes and anti-Stokes scattering signals (which are enhanced by the gold nanoparticles), we can measure the temperature of the gold when it is in the tumour. Then, using light, we can keep the temperature at a suitable level for treatment to optimize the outcome for the patient.
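The underlying relation is a Boltzmann factor: the anti-Stokes/Stokes ratio falls off exponentially with the vibrational energy divided by the thermal energy. A minimal sketch (ignoring the wavelength-dependent prefactor, and using invented numbers):

```python
import math

H = 6.626e-34    # Planck constant (J s)
C = 2.998e10     # speed of light in cm/s, so h*c*nu gives joules for nu in cm^-1
KB = 1.381e-23   # Boltzmann constant (J/K)

def temperature_from_ratio(anti_stokes_to_stokes, raman_shift_cm):
    """Estimate temperature (K) from an anti-Stokes/Stokes intensity ratio.

    Uses the simplified Boltzmann relation I_aS/I_S ~ exp(-h c nu / kB T),
    neglecting the wavelength-dependent prefactor.
    """
    vibrational_energy = H * C * raman_shift_cm        # one vibrational quantum (J)
    return -vibrational_energy / (KB * math.log(anti_stokes_to_stokes))

# Example: a 1000 cm^-1 mode with a measured anti-Stokes/Stokes ratio of 0.02
print(f"{temperature_from_ratio(0.02, 1000.0):.0f} K")   # ~368 K
```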
Ideally, we want to use 100 nm gold particles, but that is not something you can simply excrete through the kidneys. So we’ve spent the last five years trying to create nanoconstructs made from 5 nm gold particles that replicate the properties of 100 nm gold, but can be excreted. We haven’t demonstrated this excretion yet, but that’s the process we’re looking at.
This research is part of a project to combine diagnosis and heat treatment into one nanoparticle system – if the Raman spectra indicate cancer, you could then apply light to the nanoparticle to heat and destroy the tumour cells. Can you tell us more about this?
We’ve just completed a five-year programme called Raman Nanotheranostics. The aim is to label our nanoparticles with appropriate antibodies that will help the nanoparticles target different cancer types. This could provide signals that tell us what is or is not present and help decide how to treat the patient.
We have demonstrated the ability to perform treatments in preclinical models, control the temperature and direct the nanoparticles. We haven’t yet achieved a multiplexed approach with all the labels and antibodies that we want. But this is a key step forward and something we’re going to pursue further.
We are also trying to put labels on the gold that will enable us to measure and monitor treatment outcomes. We can use molecules that change in response to pH, or the reactive oxygen species that are present, or other factors. If you want personalized medicine, you need ways to see how the patient reacts to the treatment, how their immune system responds. There’s a whole range of things that will enable us to go beyond just diagnosis and therapy, to actually monitor the treatment and potentially apply a boost if the gold is still there.
Looking to the future, what do you see as the most promising applications of light within healthcare?
Light has always been used for diagnosis: “you look yellow, you’ve got something wrong with your liver”; “you’ve got blue-tinged lips, you must have oxygen depletion”. But it’s getting more and more advanced. I think what’s most encouraging is our ability to measure molecular changes that potentially reveal future outcomes of patients, and individualization of the patient pathway.
But the real breakthrough is what’s on our wrists. We are all walking around with devices that shine light in us – to measure heartbeat, blood oxygenation and so on. There are already Raman spectrometers of about that size. They’re not good enough for biological measurements yet, but it wouldn’t take much of a technological step forward to get there.
I could one day have a chip implanted in my wrist that could do all the things the gold nanoconstructs might do, and my watch could read it out. And this is just Raman – there are a whole host of approaches, such as photoacoustic imaging or optical coherence tomography. Combining different techniques together could provide greater understanding in a much less invasive way than many traditional medical methods. Light will always play a really important role in healthcare.
This article is based on a session at the Institute of Physics’ Celebration of Physics in April 2025.
Laser World of Photonics, the leading trade show for the laser and photonics industry, takes place in Munich from 24 to 27 June. Attracting visitors and exhibitors from around the world, the event features 11 exhibition areas covering the entire spectrum of photonic technologies – including illumination and energy, biophotonics, data transmission, integrated photonics, laser systems, optoelectronics, sensors and much more.
Running parallel and co-located with Laser World of Photonics is World of Quantum, the world’s largest trade fair for quantum technologies. Showcasing all aspects of quantum technologies – from quantum sensors and quantum computers to quantum communications and cryptography – the event provides a platform to present innovative quantum-based products and discuss potential applications.
Finally, the World of Photonics Congress (running from 22 to 27 June) features seven specialist conferences, over 3000 lectures and around 6700 experts from scientific and industrial research.
The event is expected to attract around 40,000 visitors from 70 countries, with the trade shows incorporating 1300 exhibitors from 40 countries. Here are some of the companies and product innovations to look out for on the show floor.
HOLOEYE Photonics AG, a leading provider of spatial light modulator (SLM) devices, announces the release of the GAEA-C spatial light modulator, a compact version of the company’s high-resolution SLM series. The GAEA-C will be officially launched at Laser World of Photonics, showcasing its advanced capabilities and cost-effective design.
Compact and cost-effective The GAEA-C spatial light modulator is ideal for a variety of applications requiring precise light modulation. (Courtesy: HOLOEYE)
The GAEA-C is a phase-only SLM with a 4K resolution of 4094 x 2400 pixels, with an exceptionally small pixel pitch of 3.74 µm. This compact model is equipped with a newly developed driver solution that not only reduces costs but also enhances phase stability, making it ideal for a variety of applications requiring precise light modulation.
The GAEA-C SLM features a reflective liquid crystal on silicon (LCOS) display (phase only). Other parameters include a fill factor of 90%, an input frame rate of 30 Hz and a maximum spatial resolution of 133.5 lp/mm.
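For context, the quoted spatial resolution is essentially the Nyquist limit set by the pixel pitch (a back-of-the-envelope check rather than a manufacturer's derivation):

\[
f_{\max} = \frac{1}{2p} = \frac{1}{2 \times 3.74\ \mu\mathrm{m}} \approx 134\ \mathrm{lp/mm},
\]

in line with the 133.5 lp/mm figure above.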
The GAEA-C is available in three versions, each optimized for a different wavelength range: a VIS version (420–650 nm), a NIR version (650–1100 nm) and a version tailored for the telecommunications waveband around 1550 nm. This versatility ensures that the GAEA-C can meet the diverse needs of industries ranging from telecoms to scientific research.
HOLOEYE continues to lead the market with its innovative SLM solutions, providing unparalleled resolution and performance. The introduction of the GAEA-C underscores HOLOEYE’s commitment to delivering cutting-edge technology that meets the evolving demands of its customers.
For more information about the GAEA-C and other SLM products, visit HOLOEYE at booth #225 in Hall A2.
Avantes launches NIR Enhanced spectrometers
At this year’s Laser World of Photonics, Avantes unveils its newest generation of spectrometers: the NEXOS NIR Enhanced and VARIUS NIR Enhanced. Both instruments mark a significant leap in near-infrared (NIR) spectroscopy, offering up to 2x improved sensitivity and unprecedented data quality for integration into both research and industry applications.
Solving spectroscopy challenges Visit Avantes at booth 218, Hall A3, for hands-on demonstrations of its newest generation of spectrometers. (Courtesy: Avantes)
Compact, robust and highly modular, the NEXOS NIR Enhanced spectrometer redefines performance in a small form factor. It features enhanced NIR quantum efficiency in the 700–1100 nm range, with up to 2x increased sensitivity, fast data transfer and improved signal-to-noise ratio. The USB-powered spectrometer is designed with a minimal footprint of just 105 x 80 x 20 mm and built using AvaMation production for top-tier reproducibility and scalability. It also offers seamless integration with third-party software platforms.
The NEXOS NIR Enhanced is ideal for food sorting, Raman applications and VCSEL/laser system integration, providing research-grade performance in a compact housing. See the NEXOS NIR Enhanced product page for further information.
Designed for flexibility and demanding industrial environments, the VARIUS NIR Enhanced spectrometer introduces a patented optical bench for supreme accuracy, with replaceable slits for versatile configurations. The spectrometer offers a dual interface – USB 3.0 and Gigabit Ethernet – plus superior stray light suppression, high dynamic range and enhanced NIR sensitivity in the 700–1100 nm region.
With its rugged form factor (183 x 130 x 45.2 mm) and semi-automated production process, the VARIUS NIR is optimized for real-time applications, ensuring fast data throughput and exceptional reliability across industries. For further information, see the VARIUS NIR Enhanced product page.
Avantes invites visitors to experience both systems live at Laser World of Photonics 2025. Meet the team for hands-on demonstrations, product insights and expert consultations. Avantes offers free feasibility studies and tailored advice to help you identify the optimal solution for your spectroscopy challenges.
For more information, visit www.avantes.com or meet Avantes at booth #218 in Hall A3.
HydraHarp 500: a new era in time-correlated single-photon counting
Laser World of Photonics sees PicoQuant introduce its newest generation of event timer and time-correlated single-photon counting (TCSPC) unit – the HydraHarp 500. Setting a new standard in speed, precision and flexibility, the TCSPC unit is freely scalable with up to 16 independent channels and a common sync channel, which can also serve as an additional detection channel if no sync is required.
Redefining what’s possible PicoQuant presents HydraHarp 500, a next-generation TCSPC unit that maximizes precision, flexibility and efficiency. (Courtesy: PicoQuant)
At the core of the HydraHarp 500 is its outstanding timing precision and accuracy, enabling precise photon timing measurements at exceptionally high data rates, even in demanding applications.
In addition to the scalable channel configuration, the HydraHarp 500 offers flexible trigger options to support a wide range of detectors, from single-photon avalanche diodes to superconducting nanowire single-photon detectors. Seamless integration is ensured through versatile interfaces such as USB 3.0 or an external FPGA interface for data transfer, while White Rabbit synchronization allows precise cross-device coordination for distributed setups.
The HydraHarp 500 is engineered for high-throughput applications, making it ideal for rapid, large-volume data acquisition. It offers 16+1 fully independent channels for true simultaneous multi-channel data recording and efficient data transfer via USB or the dedicated FPGA interface. Additionally, the HydraHarp 500 boasts industry-leading, extremely low dead-time per channel and no dead-time across channels, ensuring comprehensive datasets for precise statistical analysis.
The HydraHarp 500 is fully compatible with UniHarp, a sleek, powerful and intuitive graphical user interface. UniHarp revolutionizes the interaction with PicoQuant’s TCSPC and time tagging electronics, offering seamless access to advanced measurement modes like time trace, histogram, unfold, raw and correlation (including FCS and g²).
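For readers unfamiliar with what such electronics compute: at heart, TCSPC and time tagging reduce to histogramming the arrival-time differences between detector channels. The numpy sketch below is purely illustrative (a brute-force software version with invented time tags, not PicoQuant's interface):

```python
import numpy as np

def coincidence_histogram(ch1, ch2, bin_width, max_delay):
    """Histogram of arrival-time differences between two photon channels.

    A brute-force O(N*M) software version of the start-stop correlation that
    TCSPC hardware performs in real time; fine for small illustrative data sets.
    """
    delays = ch2[None, :] - ch1[:, None]            # all pairwise time differences
    delays = delays[np.abs(delays) <= max_delay]
    bins = np.arange(-max_delay, max_delay + bin_width, bin_width)
    return np.histogram(delays, bins=bins)

# Invented time tags (in picoseconds) from two detectors over ~1 ms
rng = np.random.default_rng(1)
ch1 = np.sort(rng.uniform(0, 1e9, 2000))
ch2 = np.sort(rng.uniform(0, 1e9, 2000))
counts, edges = coincidence_histogram(ch1, ch2, bin_width=100.0, max_delay=10_000.0)
print(counts.sum(), "coincidences within ±10 ns")
```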
Step into the future of photonics and quantum research with the HydraHarp 500. Whether it’s achieving precise photon correlation measurements, ensuring reproducible results or integrating advanced setups, the HydraHarp 500 redefines what’s possible – offering precision, flexibility and efficiency combined with reliability and seamless integration to achieve breakthrough results.
With a strong focus on turnkey, application-specific solutions, SmarAct offers nanometre-precise motion systems, measurement equipment and scalable micro-assembly platforms for photonics, quantum technologies, semiconductor manufacturing and materials research – whether in research laboratories or high-throughput production environments.
State-of-the-art solutions The SmarAct Group returns to Laser World of Photonics in 2025 with a comprehensive showcase of integrated, high-precision technologies. (Courtesy: SmarAct)
At Laser World of Photonics, SmarAct presents a new modular multi-axis positioning system for quantum computing applications and photonic integrated circuit (PIC) testing. The compact system is made entirely from titanium and features a central XY stage with integrated rotation, flanked by two XYZ modules – one equipped with a tip-tilt goniometer.
For cryogenic applications, the system can be equipped with cold plates and copper braids to provide a highly stable temperature environment, even at millikelvin levels. Thanks to its modularity, the platform can be reconfigured for tasks such as low-temperature scanning or NV centre characterization. When combined with SmarAct’s interferometric sensors, the system delivers unmatched accuracy and long-term stability under extreme conditions.
Also debuting is the SGF series of flexure-based goniometers – compact, zero-backlash rotation stages developed in collaboration with the University of Twente. Constructed entirely from non-ferromagnetic materials, the goniometers are ideal for quantum optics, electron and ion beam systems. Their precision has been validated in a research paper presented at EUSPEN 2023.
Targeting the evolving semiconductor and photonics markets, SmarAct’s optical assembly platforms enable nanometre-accurate alignment and integration of optical components. At their core is a modular high-performance toolkit for application-specific configurations, with the new SmarAct robot control software serving as the digital backbone. Key components include SMARPOD parallel kinematic platforms, long-travel SMARSHIFT electromagnetic linear stages and ultraprecise microgrippers – all seamlessly integrated to perform complex optical alignment tasks with maximum efficiency.
Highlights at Laser World of Photonics include a gantry-based assembly system developed for the active alignment of beam splitters and ferrules, and a compact, fully automated fibre array assembly system designed for multicore and polarization-maintaining fibres. Also on display are modular probing systems for fast, accurate and reliable alignment of fibres and optical elements – providing the positioning precision required for chip- and wafer-level testing of PICs prior to packaging. Finally, the microassembly platform P50 from SmarAct Automation offers a turnkey solution for automating critical micro-assembly tasks such as handling, alignment and joining of tiny components.
Whether you’re working on photonic chip packaging, quantum instrumentation, miniaturized medical systems or advanced semiconductor metrology, SmarAct invites researchers, engineers and decision-makers to experience next-generation positioning, automation and metrology solutions live in Munich.
A new solid-state laser can make a vast number of precise optical measurements each second, while sweeping across a broad range of optical wavelengths. Created by a team led by Qiang Lin at the University of Rochester in the US, the device can be fully integrated onto a single chip.
Optical metrology is a highly versatile technique that uses light to gather information about the physical properties of target objects. It involves illuminating a sample and measuring the results with great precision – using techniques such as interferometry and spectroscopy. In the 1960s, the introduction of lasers and the coherent light they emit boosted the technique to an unprecedented level of precision. This paved the way for advances ranging from optical clocks to the detection of gravitational waves.
Yet despite the indispensable role they have played so far, lasers have also created a difficult challenge. To ensure the best possible precision, experimentalists must achieve very tight control over the wavelength, phase, polarization and other properties of the laser light. This is very difficult to do within the tiny solid-state laser diodes that are so useful in metrology.
Currently, the light from laser diodes is improved externally using optical modules. This added infrastructure is inherently bulky and it remains difficult to integrate the entire setup onto chip-scale components – which limits the development of small, fast lasers for metrology.
Two innovations
Lin and colleagues addressed this challenge by designing a new laser with two key components. One is a laser cavity that comprises a thin film of lithium niobate. Thanks to the Pockels effect, this material’s refractive index can vary depending on the strength of an applied electric field. This provides control over the wavelength of the light amplified by the cavity.
The other component is a distributed Bragg reflector (DBR), which is a structure containing periodic grooves that create alternating regions of refractive index. With the right spacing of these grooves, a DBR strongly reflects light within a single narrow band while transmitting other wavelengths. In previous studies, lasers were created by etching a DBR directly onto a lithium niobate film – but due to the material’s optical properties, this resulted in a broad linewidth.
“Instead, we developed an ‘extended DBR’ structure, where the Bragg grating is defined in a silica cladding,” explains team member Mingxiao Li at the University of California Santa Barbara. “This allowed for flexible control over the grating strength, via the thickness and etch depth of the cladding. It also leverages silica’s superior etchability to achieve low scattering strength, which is essential for narrow linewidth operation.”
Using a system of integrated electrodes, Lin’s team can adjust the strength of the electric field applied to the lithium niobate film. This allows them to rapidly tune the wavelengths amplified by the cavity via the Pockels effect. In addition, they used a specially designed waveguide to control the phase of light passing into the cavity. This design enabled them to tune their laser over a broad range of wavelengths, without needing external correction modules to achieve narrow linewidths.
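In schematic terms (textbook relations with illustrative values, not numbers from the paper), the DBR fixes the operating wavelength through the Bragg condition, while the Pockels effect shifts it by changing the refractive index:

\[
\lambda_B = 2\, n_{\mathrm{eff}}\, \Lambda, \qquad
\Delta n \approx -\tfrac{1}{2}\, n^3 r_{33} E, \qquad
\frac{\Delta \lambda}{\lambda} \approx \frac{\Delta n_{\mathrm{eff}}}{n_{\mathrm{eff}}},
\]

where Λ is the grating period, E is the applied field and r₃₃ ≈ 30 pm/V is lithium niobate’s largest electro-optic coefficient. For an index of about 2.2, a field of order 1 V/µm changes the index by roughly 1.6 × 10⁻⁴, enough to shift the emission wavelength by a fraction of a nanometre.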
Narrowband performance
Altogether, the laser demonstrated outstanding performance on a single chip – producing a clean, single wavelength with very little noise. Most importantly, the light had a linewidth of just 167 Hz – the narrowest achieved to date for a single-chip lithium niobate laser. This exceptional performance enabled the laser to rapidly sweep across a bandwidth of over 10 GHz – equivalent to scanning quintillions of points per second.
“These capabilities translated directly into successful applications,” Li describes. “The laser served as the core light source in a high-speed LIDAR system, measuring the velocity of a target 0.4 m away with better than 2 cm distance resolution. The system supports a velocity measurement as high as Earth’s orbital velocity – around 7.91 km/s – at 1 m.” Furthermore, Lin’s team were able to lock their laser’s frequency with a reference gas cell, integrated directly onto the same chip.
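To put the velocity figure in context: assuming a telecom-band operating wavelength near 1550 nm (my assumption; the wavelength is not quoted above), the corresponding Doppler shift is

\[
\Delta f = \frac{2v}{\lambda} \approx \frac{2 \times 7.91\ \mathrm{km\,s^{-1}}}{1.55\ \mu\mathrm{m}} \approx 10\ \mathrm{GHz},
\]

comparable to the full sweep bandwidth of the laser.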
By eliminating the need for bulky control modules, the team’s design could now pave the way for the full miniaturization of optical metrology – with immediate benefits for technologies including optical clocks, quantum computers, self-driving vehicles, and many others.
“Beyond these, the laser’s core advantages – exceptional coherence, multifunctional control, and scalable fabrication – position it as a versatile platform for transformative advances in high-speed communications, ultra-precise frequency generation, and microwave photonics,” Lin says.
In a conversation with Physics World’s Tami Freeman, Ferenc Krausz talks about his research into using ultrashort-pulsed laser technology to develop a diagnostic tool for early disease detection. He also discusses his collaboration with Semmelweis University to establish the John von Neumann Institute for Data Science, and describes the Science4People initiative, a charity that he and his colleagues founded to provide education for children who have been displaced by the war in Ukraine.
On 13–14 May, The Economist is hosting Commercialising Quantum Global 2025 in London. The event is supported by the Institute of Physics – which brings you Physics World. Participants will join global leaders from business, science and policy for two days of real-world insights into quantum’s future. In London you will explore breakthroughs in quantum computing, communications and sensing, and discover how these technologies are shaping industries, economies and global regulation. Register now and use code QUANTUM20 to receive 20% off. This offer ends on 4 May.
Until now, researchers have had to choose between thermal and visible imaging: one reveals heat signatures while the other provides structural detail. Recording both and trying to align them manually – or harder still, synchronizing them temporally – can be inconsistent and time-consuming. The result is data that is close but never quite complete. The new FLIR MIX is a game changer, capturing and synchronizing high-speed thermal and visible imagery at up to 1000 fps. Visible and high-performance infrared cameras with FLIR Research Studio software work together to deliver one data set with perfect spatial and temporal alignment – no missed details or second guessing, just a complete picture of fast-moving events.
Jerry Beeney
Jerry Beeney is a seasoned global business development leader with a proven track record of driving product growth and sales performance in the Teledyne FLIR Science and Automation verticals. With more than 20 years at Teledyne FLIR, he has played a pivotal role in launching new thermal imaging solutions, working closely with technical experts, product managers, and customers to align products with market demands and customer needs. Before assuming his current role, Beeney held a variety of technical and sales positions, including senior scientific segment engineer. In these roles, he managed strategic accounts and delivered training and product demonstrations for clients across diverse R&D and scientific research fields. Beeney’s dedication to achieving meaningful results and cultivating lasting client relationships remains a cornerstone of his professional approach.
A new retinal stimulation technique called Oz enabled volunteers to see colours that lie beyond the natural range of human vision. Developed by researchers at UC Berkeley, Oz works by stimulating individual cone cells in the retina with targeted microdoses of laser light, while compensating for the eye’s motion.
Colour vision is enabled by cone cells in the retina. Most humans have three types of cone cells, known as L, M and S (long, medium and short), which respond to different wavelengths of visible light. During natural human vision, the spectral distribution of light reaching these cone cells determines the colours that we see.
Spectral sensitivity curves The response function of M cone cells overlaps completely with those of L and S cones. (Courtesy: Ben Rudiak-Gould)
Some colours, however, simply cannot be seen. The spectral sensitivity curves of the three cone types overlap – in particular, there is no wavelength of light that stimulates only the M cone cells without stimulating nearby L (and sometimes also S) cones as well.
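In schematic terms (notation mine, not taken from the paper), each cone class integrates the incident spectrum against its own sensitivity curve,

\[
(L, M, S) = \int P(\lambda)\,\big(\bar{l}(\lambda),\ \bar{m}(\lambda),\ \bar{s}(\lambda)\big)\,\mathrm{d}\lambda,
\]

and because every wavelength at which \( \bar{m}(\lambda) > 0 \) also has \( \bar{l}(\lambda) > 0 \) or \( \bar{s}(\lambda) > 0 \), no physical spectrum \( P(\lambda) \) can produce a response of the form \( (0, M, 0) \).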
The Oz approach, however, is fundamentally different. Rather than being based on spectral distribution, colour perception is controlled by shaping the spatial distribution of light on the retina.
Describing the technique in Science Advances, Ren Ng and colleagues showed that targeting individual cone cells with a 543 nm laser enabled subjects to see a range of colours in both images and videos. Intriguingly, stimulating only the M cone cells sent a colour signal to the brain that never occurs in natural vision.
The Oz laser system uses a technique called adaptive optics scanning light ophthalmoscopy (AOSLO) to simultaneously image and stimulate the retina with a raster scan of laser light. The device images the retina with infrared light to track eye motion in real time and targets pulses of visible laser light at individual cone cells, at a rate of 10⁵ per second.
In a proof-of-principle experiment, the researchers tested a prototype Oz system on five volunteers. In a preparatory step, they used adaptive optics-based optical coherence tomography (AO-OCT) to classify the LMS spectral type of 1000 to 2000 cone cells in a region of each subject’s retina.
When exclusively targeting M cone cells in these retinal regions, subjects reported seeing a new blue–green colour of unprecedented saturation – which the researchers named “olo”. They could also clearly perceive Oz hues in image and video form, reliably detecting the orientation of a red line and the motion direction of a rotating red dot on olo backgrounds. In colour matching experiments, subjects could only match olo with the closest monochromatic light by desaturating it with white light – demonstrating that olo lies beyond the range of natural vision.
The team also performed control experiments in which the Oz microdoses were intentionally “jittered” by a few microns. With the target locations no longer delivered accurately, the subjects instead perceived the natural colour of the stimulating laser. In the image and video recognition experiments, jittering the microdose target locations reduced the task accuracy to guessing rate.
Ng and colleagues conclude that “Oz represents a new class of experimental platform for vision science and neuroscience [that] will enable diverse new experiments”. They also suggest that the technique could one day help to elicit full colour vision in people with colour blindness.