A new experiment has offered the clearest view yet of how gluons behave inside atomic nuclei. Conducted at the Thomas Jefferson National Accelerator Facility in the US, the study focused on a rare process called photoproduction. This involves high-energy photons interacting with protons confined in nuclei to produce J/psi mesons. The research sheds light on how gluons are distributed in nuclear matter and is a crucial step toward understanding the nature of protons within nuclei.
While gluons are responsible for generating most of the visible mass in the universe, their role inside nuclei remains poorly understood. These massless particles mediate the strong nuclear force, which binds quarks as well as protons and neutrons in nuclei. Gluons carry no electric charge and cannot be directly detected.
The theory that describes gluons is called quantum chromodynamics (QCD) and it is notoriously complex and difficult to test – especially in the dense, strongly interacting environment of a nucleus. That makes precision experiments essential for revealing how matter is held together at the deepest level.
Probing gluons with light
The Jefferson Lab experiment focused on photoproduction, a process in which a high-energy photon strikes a particle and creates something new – in this case, a J/psi meson.
The J/psi comprises a charm quark and its antiquark and is especially useful for studying gluons. Charm quarks are much heavier than those found in ordinary matter and are not present in protons or neutrons. Therefore, they must be created entirely during the interaction, making the J/psi a particularly clean and sensitive probe of gluon behaviour inside nuclei.
Earlier studies had observed this process using free protons. This new experiment extends the approach to protons confined in nuclei to see how that environment affects gluon behaviour. The modification of quarks inside nuclei has been known since the 1980s and is called the EMC effect. However, much less is known about how gluons behave under the same conditions.
“Protons and neutrons do behave differently when they are bound inside nuclei than they do on their own,” says Jackson Pybus, now a postdoctoral fellow at Los Alamos National Laboratory and one of the experiment’s collaborators. “The nuclear physics community is still trying to work out the mechanisms behind the EMC effect. Until now, the distribution of high-momentum gluons in nuclei has remained an unexplored area.”
Pybus and colleagues used Jefferson Lab’s Experimental Hall D, which delivers an intense beam of high-energy photons. This setup had previously been used to study simpler systems, but this was the first time it was applied to heavier nuclei.
“This study looked for events where a photon strikes a proton inside the nucleus to knock it out while producing a J/psi,” Pybus explains. “By measuring the knocked-out proton, the produced J/psi, and the energy of the photon, we can reconstruct the reaction and learn how the gluons were behaving inside the nucleus.” This was done using the GlueX spectrometer.
Unexpected signals
Significantly, the experiment was accessing the “threshold” region – where the photon has just enough energy to produce a J/psi meson. Near-threshold interactions are particularly valuable because they are highly sensitive to the gluon structure of the target. Creating a heavy charm–anticharm pair requires a large energy transfer, so interactions in this region reveal how gluons behave when little momentum is available. This is a regime where theoretical uncertainties in QCD are especially large.
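As a rough kinematic estimate (not taken from the paper), the threshold photon energy for producing a J/psi on a free proton at rest follows from energy–momentum conservation:

```latex
E_\gamma^{\mathrm{th}} = \frac{(m_p + m_{J/\psi})^2 - m_p^2}{2m_p}
= m_{J/\psi} + \frac{m_{J/\psi}^2}{2m_p}
\approx 3.10\ \mathrm{GeV} + \frac{(3.10\ \mathrm{GeV})^2}{2\times 0.938\ \mathrm{GeV}}
\approx 8.2\ \mathrm{GeV}
```

Photons carrying less than roughly 8.2 GeV cannot produce a J/psi from a stationary free proton at all.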
Even more striking were the observations below this threshold. In so-called “sub-threshold” photoproduction, the incoming photon does not carry enough energy to produce the J/psi on its own, so it must draw additional energy from the internal motion of protons or from the nuclear medium itself. This is a well-understood mechanism in principle, but the rate at which it occurred in the experiment came as a surprise.
“Our study was the first to measure J/psi photoproduction from nuclei in the threshold region,” Pybus said. “The data indicate that the J/psi is produced more commonly than expected from protons that are moving with large momentum inside the nucleus, suggesting that these fast-moving protons could experience significant distortion to their internal gluons.”
The sub-threshold results were even harder to explain. “The number of subthreshold J/psi exceeded expectations,” Pybus added. “That raises questions about how the photon is able to pick up so much energy from the nucleus.”
Towards a deeper theory
The results suggest that gluons may be modified inside nuclei in ways that are not described by existing models – opening a new frontier in nuclear physics.
“This study has given us the first look at this sort of rare phenomenon that can teach us about the gluon inside the nucleus – just enough data to point to unexpected behaviours,” said Pybus. “Now that we know this measurement is possible, and that there are signs of interesting and unexplored phenomena, we’d like to perform a dedicated measurement focused on pinning down the sort of exotic effects we’re just now glimpsing.”
Follow-up experiments, including those planned at the future Electron-Ion Collider, are expected to build on these results. For now, this first glimpse at gluons in nuclei reveals that even decades after QCD’s development, the inner workings of nuclear matter remain only partially illuminated.
Modern-day communications rely on both fibre-optic cables and wireless radiofrequency (RF) microwave links. Reaching higher data-transmission capacities will require technologies that can efficiently process and convert both optical and microwave signals in a small, energy-efficient package that is compatible with existing communication networks.
Microwave photonics (MWP) is one of the frontrunning technologies, as it can perform signal processing tasks within the optical domain. Current MWP approaches, however, are typically power intensive and often require many off-chip devices to achieve the desired device capabilities and functionalities – so are not very scalable. Researchers from Belgium and France have now managed to overcome some of these limitations, reporting their findings in Nature Communications.
“We wanted to demonstrate that photonic chips can be as versatile as electronic chips, and one of the fields where the two overlap is that of microwave photonics,” one of the paper’s lead authors, Wim Bogaerts from Ghent University, tells Physics World.
A photonic engine
The researchers have created a photonic engine that processes microwave and optical signals and can convert the signals between the two domains. It is a silicon chip that can generate and detect optical and analogue electrical signals. The chip uses a combination of tuneable lasers (created by using an optical amplifier with on-chip filter circuits), electro-optic modulators and photodetectors, low-loss waveguides and passive components, and a programmable optical filter – which enables the chip to filter signals in both domains.
“We managed to integrate all key functionalities for manipulation of microwave signals and optical signals together on a single silicon chip and use that chip as a programmable engine in different experimental demonstrations,” says Bogaerts.
This setup allowed the researchers to operate the chip as a black-box microwave photonics processor, in which the user can process high-frequency RF signals without being exposed to the internal optical operations.
Optical signals from an external optical fibre are coupled to the chip using a grating coupler and high-speed RF signals are fed into the chip using electro-optic modulators. The RF signal is imprinted into an optical carrier wavelength – which is generated by the on-chip laser – and the signal is then processed on the chip using an optical filter bank. The signal then gets converted back into an RF signal using photodetectors.
All of the signals travelling into and out of the chip can be confined to the RF domain, so the chip doesn’t require any external optical components, unlike many other MWP devices. Moreover, the signals are locally programmed and tuned using thermo-optic phase shifters, enabling users to select any combination of microwave and optical inputs and outputs across the chip.
Extensive applications
The researchers used the photonic engine to create multiple systems that showcase its different optical and RF signal processing capabilities and demonstrate a potential pathway towards smaller MWP systems for high-speed wireless communication networks and microwave sensing applications.
As well as being used for simple light-tuning applications, the chip can also perform optical-to-electrical signal conversion, electrical-to-optical signal conversion, microwave frequency doubling, and microwave/optical filtering and equalization. These functions allow it to be used as a transmitter, receiver, optical/microwave filter, frequency converter or a tuneable opto-electronic oscillator.
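As a toy illustration of one of these functions – microwave frequency doubling – consider carrier-suppressed modulation, where two optical sidebands beating on a square-law photodetector yield an electrical tone at twice the modulation frequency. This is a minimal numerical sketch of the general principle, not the chip's actual implementation; all frequencies are arbitrary stand-ins chosen to keep the simulation small.

```python
import numpy as np

# Two optical sidebands at f0 +/- f_rf (carrier suppressed) beat on a
# square-law photodetector, producing an electrical tone at 2*f_rf.
fs = 1e6          # sample rate (Hz) for the simulation
t = np.arange(0, 1e-2, 1/fs)
f0 = 200e3        # stand-in "optical" carrier frequency
f_rf = 10e3       # microwave modulation frequency

# Carrier-suppressed field: only the two sidebands survive
field = np.cos(2*np.pi*(f0 + f_rf)*t) + np.cos(2*np.pi*(f0 - f_rf)*t)

# A photodetector responds to intensity |E|^2
intensity = field**2
spectrum = np.abs(np.fft.rfft(intensity))
freqs = np.fft.rfftfreq(len(t), 1/fs)

# The strongest non-DC component in the low-frequency (RF) band sits at 2*f_rf
rf_band = freqs < 100e3
peak = freqs[rf_band][np.argmax(spectrum[rf_band][1:]) + 1]
print(peak)  # ≈ 20000.0 Hz, i.e. twice the input microwave frequency
```

Squaring the two-sideband field produces terms at DC, at optical-scale frequencies, and at the difference frequency 2·f_rf; the low-pass nature of the RF chain keeps only the doubled tone.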
When asked about the future of the chip, Bogaerts states that “we plan to combine this functionality with more general-purpose photonic circuits to enable even more functions and applications to help product developers roll out new photonic products as easily as new electronics products”.
Some other potential applications for the chip that have been touted – but not physically tested in this study – include RF instantaneous frequency measuring, radio-over-fibre links, RF phase tuning, optical and RF switching, optical sensing and signal temporal computing. With so many possibilities, this small-scale and low-power chip could become increasingly important as technologies such as communications advance further.
The development of advanced quantum materials and devices often involves making measurements at very low temperatures. This is crucial when developing single-photon emitters and detectors for quantum technologies. And even if a device or material will not be used at cryogenic temperatures, researchers will sometimes make measurements at low temperatures in order to reduce thermal noise.
This R&D will often involve optical techniques such as spectroscopy and imaging, which use lasers and free-space optics. These optical systems must remain in alignment to ensure the quality and repeatability of the measurements. Furthermore, the vibration of optical components must be kept to an absolute minimum because motion will degrade the performance of instrumentation.
Minimizing vibration is usually achieved by doing experiments on optical tables, which are very large, heavy and rigid in order to dampen motion. Therefore, when a cryogenic cooler (cryocooler) is deployed on an optical table it is crucial that it does not introduce unwanted vibrations.
Closed-cycle cryocoolers offer an efficient way to cool samples to temperatures as low as ~2 K to 4 K (−272 °C to −269 °C). Much like a domestic refrigerator or air conditioner, these cryocoolers involve the cyclic compression and expansion of a gas – which is helium in cryogenic systems.
In 2010 Montana Instruments founder Luke Mauritsen, a mechanical engineer and entrepreneur, recognized that the future development of quantum materials and devices would rely on optical cryostats that allow researchers to make optical measurements at very low temperatures and at very low levels of vibration. To make that possible, Mauritsen founded Montana Instruments, which launched its first low-vibration cryostats that same year. Based in Bozeman, Montana, the company was acquired by Sweden’s Atlas Copco in 2022 and it continues to develop cryogenic technologies for cutting-edge quantum science and other demanding applications.
Until recently, all of Montana’s low-vibration optical cryostats used Gifford–McMahon (GM) cryocoolers. While these systems provide low temperatures and low vibrations, they are limited in terms of the cooling power that they can deliver. This is because operating GM cryocoolers at higher powers results in greater vibrations.
To create a low-vibration cryostat with more cooling power, Montana has developed the Cryostation 200 PT, the first Montana system to use a pulse-tube cryocooler. Pulse tubes can deliver higher cooling powers than GM cryocoolers at much lower vibration levels. As a result, the Cryostation 200 PT delivers much higher cooling power while maintaining very low vibrations, on par with Montana’s other cryostats.
Montana’s R&D manager Josh Doherty explains, “One major reason that a pulse tube has lower vibrations is that its valve motor can be ‘remote’, located a short distance from the coldhead of the cryostat. This allows us to position the valve motor, which generates vibrations, on a cart next to the optical table so its energy can be shunted to the ground, away from the experimental space on the optical table.”
However, isolating the coldhead from the valve motor is not enough to achieve the new cryostat’s very low levels of vibration. During operation, helium gas moves back and forth in the pulse tube and this causes tiny vibrations that are very difficult to mitigate. Using its extensive experience, Montana has minimized the vibrations at the sample/device mount and has also reduced the vibrational energy transferred from the pulse tube to the optical table. Doherty explains that this was done using the company’s patented technologies that minimize the transfer of vibrational energy, while at the same time maximizing thermal conductance between the pulse tube’s first stage and second stage flanges and the sample/device mounting surface(s). This includes the use of flexible, high-thermal-conductivity links and flexible vacuum bellows connections between the coldhead and the sample/device.
200 mm breadboard The Cryostation 200 PT offers a large working area that can be accessed via multiple feedthrough options that support free-space optics, RF and DC electrical connections, optical fibres and a vacuum connection. (Courtesy: Montana Instruments)
Doherty adds, “we intentionally design the supporting structure to de-tune it from the pulse tube vibration source”. This was done by first measuring the pulse-tube vibrations in the lab to determine the vibrational frequencies at which energy is transferred to the optical table. Doherty and colleagues then used the ANSYS engineering/multiphysics software to simulate designs of the pulse tube support and the sample mount supporting structures.
“We optimized the supporting structure design, through material choices, assembly methods and geometry to mismatch the simulated natural frequencies of the support structure from the dominant vibrations of the source,” he explains.
As a result, the Cryostation 200 PT delivers more than 250 mW of cooling power at 4.2 K, with a peak-to-peak vibrational amplitude of less than 30 nm. This is more than three times the cooling power delivered by Montana’s Cryostation s200, which offers a similarly sized sample/device area and vibrational performance.
The control unit has a touchscreen user interface, which displays the cryostat temperature, temperature stability and vacuum pressure.
The cryostat has multiple feedthrough options that support free-space optics, RF and DC electrical connections, optical fibres and a vacuum connection. The Cryostation 200 PT supports Montana’s Cryo-Optic microscope objective and nanopositioner, which can be integrated within the cryostat. Also available is a low working distance window, which supports the use of an external microscope.
According to Montana Instruments senior product manager Patrick Gale, the higher cooling power of the Cryostation 200 PT means that it can support larger experimental payloads – meaning that a much wider range of experiments can be done within the cryostat. For example, more electrical connections can be made with the outside world than had been possible before.
“Every wire that you bring into the cryostat increases that heat load a little bit,” explains Gale, adding, “By using a 1 W pulse tube, we can cool the system down faster than any of our other systems”. While Montana’s other systems have typical cooling times of about 10 h, this has been reduced to about 6 h in the Cryostation 200 PT. “This is particularly important for commercial users who are testing multiple samples in a week,” says Gale. “Saving that four hours per measurement allows a user to do two tests per day, versus just one per day.”
According to Gale, applications of the Cryostation 200 PT include developing ion traps for use in quantum computing, quantum sensing and atomic clocks. Other applications related to quantum technologies include the development of photonic devices; spin-based devices including those based on nitrogen-vacancy centres in diamond; quantum dots; and superconducting circuits.
It was a crisp, chilly morning in Bombay (Mumbai) on 4 December 2024 as delegates made their way to the Indian Institute of Technology, Bombay (IITB) for the opening day of the Women in Optics and Photonics in India-Asia (WOPI) conference. The cool, air-conditioned auditorium was soon packed with almost 300 people – mostly female postgraduate students and researchers – with notepads in hand and lanyards around their necks, eager to learn from the talks ahead.
The three-day event was organized by the IITB and sponsored by institutions, companies and publishers, including IOP Publishing, which publishes Physics World. Aimed at highlighting female voices that often go unheard, the meeting brought together scientists and experts from all over the world.
Yet not everything went according to plan. Although the room was full of curious students, each time the floor opened for questions, nervous glances were exchanged. A few moments of silence lingered as though everyone was waiting for a “better” question to come along. The silence rang of missed opportunities.
This kind of reticence is not new. From classrooms and labs to meetings, we see hesitance by students, and especially female students, everywhere. Growing up, I was told to look up the basics first and only ask so-called “high-level” questions. Good advice in theory but not if you’re already unsure of your place in the room. Add to that a dismissive answer such as “You should have known this by now” and the urge to speak up is nipped right in the bud.
Perhaps the hesitation is understandable. We live in a world where every answer is an Internet search away. Why ask a “silly” question when everyone else probably already knows better – or at least looks like they do? Yet asking questions is a fundamental part of learning. Science doesn’t move forward because people stay quiet until they have the perfect question, it moves because someone dares to ask – even when they aren’t sure.
Another contributing factor to such silence is imposter syndrome – the feeling that you don’t deserve to be there and aren’t good at your work. Multiple studies have shown that women consistently score higher on measures of imposter syndrome than men. Women in technical fields such as engineering, science and maths also report lower self-efficacy than men, regardless of actual performance or ability – so not only do we question whether we belong but we also underestimate ourselves.
All of this means women are less likely to ask questions or speak up when uncertain. But for a scientist, uncertainty is the norm. We spend most of our time sitting with the unknown and with it the need to ask questions and chip away at it. Yet the very process of scientific inquiry can feel to women like a trap.
A new way
So what can we do differently? Events like WOPI create time and space not just for presenting research and innovation, but for mentorship, for insight into the real-world machinery of science. It’s not just about “What are you working on?” but also “Where are you heading, and who’s with you?”
WOPI 2024 modelled a new approach to inclusivity by running panel sessions that included the families of successful leaders, showcasing the kind of support that is necessary to “make it”. Invited speakers fearlessly shared their stories of pivoted careers and failures, while acknowledging the challenges that they encountered along the way. It reminded us that you don’t just grow by knowing but by asking, exploring and doing.
Rallying cry Organizing chair Shobha Shukla speaks at the Women in Optics and Photonics in India meeting in 2024. Shukla is a professor in the Department of Metallurgical Engineering and Materials Science at IIT Bombay. (Courtesy: Prof. Shobha Shukla, WOPI 2024/IIT Bombay)
But we need to do more. Encouraging young female scientists today means accounting for the weight of cultural expectations, social atmosphere and gender-based tendencies. Events and conferences need to go beyond formal settings to highlight not just the science but the scaffolding that holds it all together – networking, informal mentorship, vulnerability and visibility. This must be integrated into the very fabric of all scientific events, not just those for women.
The real pulse of the WOPI conference was in the moments after the talks – students lining up to talk to speakers. Nervous, curious and determined to ask their questions anyway. That moment stayed with me. It was steadfast curiosity that had survived immense self-doubt and fear of speaking in front of an audience.
Hopefully at the next WOPI meeting to be held in India in 2026, students will be much more confident in asking questions. So, to every student: don’t wait for the perfect moment. Ask the question that’s been sitting at the edge of your mind, making you wonder. After all, that is where discovery starts.
Occupational transition network Graphical visualization of the weighted and directed labour market network in France derived from the transition probability matrix, computed from data spanning 2012 to 2020. Each node symbolizes an occupation, with links illustrating transitions between them. Node sizes correspond to the occupation’s workforce size, line widths are proportional to the transition probability. (Courtesy: M Knicker, K Naumann-Woleske and M Benzaquen, École Polytechnique Paris)
A new statistical physics analysis of the French labour market has revealed that the vast majority of occupations act as so-called condenser occupations. These structural bottlenecks attract workers from many other types of jobs, but offer very limited options for further mobility. This finding could help explain why changing jobs in response to shocks like technological change or economic crises is often so slow, say scientists at the École Polytechnique in Paris, who performed the study.
“By pinpointing where mobility gets ‘stuck’, we provide a new lens to understand – and potentially improve – the adaptability of labour markets,” explains Max Knicker of the EconophysiX lab, who led this new research effort.
Knicker and colleagues borrowed a concept from statistical physics known as the “fitness and complexity” framework, which is used to study the structure of economies and ecosystems. In their work, the researchers treated occupations as nodes in a network and analysed real transition data – that is, how workers actually moved between different jobs in France from 2012 to 2020. The data came from official sources and were provided by the National Institute of Statistics and Economic Studies through the Secure Data Access Center (CASD).
“In total, we had access to information on about 30 million workers and employers in France, whom we tracked over a 10-year period,” explains Knicker. “We also worked with high-resolution administrative data from INSEE (the French National Institute of Statistics), and specifically the BTS-Postes (Base Tous Salariés-Postes).”
Two key metrics
The researchers assigned a score for each occupation and developed two key metrics. These were: accessibility, which measures how many different jobs “feed into” a given occupation; and transferability, which measures how many different jobs someone can move to from that occupation.
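In a simplified form, both metrics can be read off a transition probability matrix by counting the non-negligible links into and out of each occupation. The sketch below is illustrative only: the occupations, probabilities and link threshold are all hypothetical, and the study's actual definitions (built on the fitness-and-complexity framework) are more elaborate.

```python
import numpy as np

# Toy transition probability matrix P: P[i, j] is the probability that a
# worker in occupation i moves to occupation j. Values are invented.
occupations = ["caregiver", "retail", "machinist", "engineer"]
P = np.array([
    [0.00, 0.05, 0.00, 0.00],   # caregiver: few exit routes
    [0.10, 0.00, 0.05, 0.05],   # retail: many exit routes
    [0.00, 0.02, 0.00, 0.03],   # machinist: specialized
    [0.05, 0.05, 0.10, 0.00],   # engineer: training transfers widely
])

threshold = 0.01  # count a link only if the transition is non-negligible

# Transferability: how many distinct occupations can be reached from i
transferability = (P > threshold).sum(axis=1)

# Accessibility: how many distinct occupations feed into j
accessibility = (P > threshold).sum(axis=0)

for name, acc, tr in zip(occupations, accessibility, transferability):
    print(f"{name:10s} accessibility={acc} transferability={tr}")
```

In this toy network the caregiver role is entered from two occupations but leads to only one – the "easy in, hard out" signature of a condenser occupation.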
By studying the network of job flows with these metrics, they observed hidden patterns and constraints in occupational mobility and identified four main clusters, or categories, of jobs. The first are defined as “diffuser” occupations and have high transferability but low accessibility. “These require specific training to enter, but that training allows for transitions to many other areas,” explains Knicker. “This means they are more difficult to get into, but offer a wide range of exit opportunities.”
The second group are called “channel” occupations. These are both hard to enter and offer few onward transitions, he says. “They often involve highly specialized skills, such as specific types of machine operation.”
The third class are “hubs” and are both widely accessible and highly transferable – so much so that they act as central nodes in the transition network. “This class includes jobs like retail sellers, which require a broad, yet not highly specialized skill set,” says Knicker.
The fourth and last category is the most common type and dubbed “condenser” occupations. “Workers from many different backgrounds can easily enter these, but they can’t easily get out afterwards,” explains Knicker. “Examples of such jobs include caregiving roles.”
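Given the two metrics, the four categories amount to a quadrant classification – high or low accessibility crossed with high or low transferability. The sketch below uses the network median as the cut-off, which is an illustrative choice rather than the study's actual method.

```python
import numpy as np

def classify(accessibility, transferability):
    """Label each occupation by comparing its two scores to the median."""
    acc = np.asarray(accessibility, dtype=float)
    tr = np.asarray(transferability, dtype=float)
    acc_hi = acc >= np.median(acc)
    tr_hi = tr >= np.median(tr)
    labels = []
    for a, t in zip(acc_hi, tr_hi):
        if a and t:
            labels.append("hub")         # easy in, easy out
        elif a and not t:
            labels.append("condenser")   # easy in, hard out
        elif not a and t:
            labels.append("diffuser")    # hard in, easy out
        else:
            labels.append("channel")     # hard in, hard out
    return labels

# Four invented occupations spanning the four quadrants
print(classify([5, 1, 5, 1], [5, 5, 1, 1]))
```

With the invented scores above, the four occupations land one in each quadrant: hub, diffuser, condenser and channel respectively.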
A valuable tool for policymakers
The researchers explain that they undertook their study to answer a broader question: why do some economies adapt quickly to shocks while others struggle? “Despite increasing attention to issues like automation or the green transition, we still lacked tools to diagnose where worker mobility breaks down,” says Knicker. “A key challenge was dealing with the sheer complexity and size of the labour flow data – we analysed over 250 million person–year observations. Another was interpreting the results in a meaningful, policy-relevant way, since the transition network is shaped by many intertwined factors like skill compatibility, employer preferences and worker choices.”
The new framework could become a valuable tool for policymakers seeking to make labour markets more responsive, he tells Physics World. “For example, by identifying specific occupations that function as bottlenecks, we can better target reskilling efforts or job transition programmes. It also suggests that simply increasing training isn’t enough – what matters is where people are coming from and where they can go next.”
The researchers also showed that the structure of job transitions itself can limit mobility. Over time, this could inform the design of more strategic labour interventions, especially in the face of structural shocks like AI-driven job displacement, states Knicker.
Looking forward, the École Polytechnique team plans to extend its approach by studying how the career paths of individual workers evolve over time. This, says Knicker, will be done using panel data, not just year-to-year snapshots as in the present analysis. He and his colleagues are also interested in linking their metrics to wage dynamics – for example, does low transferability make workers more vulnerable to exploitation or wage stagnation? “Finally, we hope to explore whether similar bottleneck structures exist in other countries, which could reveal whether these patterns are universal or country-specific.”