Motion blur brings a counterintuitive advantage for high-resolution imaging

Blur benefit: Images on the left were taken by a camera that was moving during exposure. Images on the right used the researchers’ algorithm to increase their resolution with information captured by the camera’s motion. (Courtesy: Pedro Felzenszwalb/Brown University)

Images captured by moving cameras are usually blurred, but researchers at Brown University in the US have found a way to sharpen them up using a new deconvolution algorithm. The technique could allow ordinary cameras to produce gigapixel-quality photos, with applications in biological imaging and archival/preservation work.

“We were interested in the limits of computational photography,” says team co-leader Rashid Zia, “and we recognized that there should be a way to decode the higher-resolution information that motion encodes onto a camera image.”

Conventional techniques for reconstructing high-resolution images from low-resolution ones relate the two via a mathematical model of the imaging process. The effectiveness of these techniques is limited, however, because they deliver only relatively small gains in resolution. Blur caused by camera motion further limits the maximum resolution that can be achieved.
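
In the standard multi-frame formulation (a generic sketch, not necessarily the precise model the Brown team uses), each low-resolution frame $\mathbf{y}_k$ is written as a blurred, warped and downsampled copy of the unknown high-resolution image $\mathbf{x}$:

$$\mathbf{y}_k = D\,B_k\,\mathbf{x} + \mathbf{n}_k,$$

where $B_k$ encodes the optical blur and motion of frame $k$, $D$ is the detector's downsampling and $\mathbf{n}_k$ is noise. Reconstruction amounts to inverting this system for $\mathbf{x}$, and the achievable resolution gain is set by how much independent information the frames carry.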

Exploiting the “tracks” left by small points of light

Together with Pedro Felzenszwalb of Brown’s computer science department, Zia and colleagues overcame these problems, successfully reconstructing a high-resolution image from one or several low-resolution images produced by a moving camera. The algorithm they developed to do this takes the “tracks” left by light sources as the camera moves and uses them to pinpoint precisely where the fine details must have been located. It then reconstructs these details on a finer, sub-pixel grid.
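
As a rough illustration of why motion helps (a deliberately simplified 1D simulation with made-up parameters, not the authors' code), the sketch below blurs and downsamples a fine-grid "scene" at several known sub-pixel camera offsets and then recovers the fine grid by least squares:

```python
import numpy as np

# Toy 1D illustration of motion-encoded super-resolution (our own sketch,
# not the Brown group's algorithm): a sensor whose pixels are 4x coarser
# than the scene grid slides during each exposure, so every recorded pixel
# integrates the scene over a motion-blurred footprint. Exposures starting
# at different sub-pixel offsets give a linear system that can be solved
# for the scene on the finer grid. Periodic boundaries keep the model simple.

rng = np.random.default_rng(1)

fine_n = 64                    # fine (sub-pixel) grid
factor = 4                     # one sensor pixel spans 4 fine cells
coarse_n = fine_n // factor
blur = 2                       # sensor slides over 2 fine cells per exposure

scene = rng.random(fine_n)     # unknown high-resolution scene

def forward_matrix(shift):
    """Each row is one pixel's motion-blurred footprint for an exposure
    that starts `shift` fine cells into the scene (with periodic wrap)."""
    A = np.zeros((coarse_n, fine_n))
    for p in range(coarse_n):
        for b in range(blur):
            idx = (p * factor + shift + b + np.arange(factor)) % fine_n
            A[p, idx] += 1.0 / (blur * factor)
    return A

shifts = [0, 1, 2, 3]          # known sub-pixel starting offsets
A = np.vstack([forward_matrix(s) for s in shifts])
y = A @ scene + 1e-3 * rng.standard_normal(A.shape[0])   # blurred, low-res data

# Least-squares reconstruction on the fine grid (a few frequencies nulled
# by the box-shaped pixel remain unrecoverable and are left at zero).
recon, *_ = np.linalg.lstsq(A, y, rcond=None)

# Naive baseline: upsample a single blurred exposure by pixel replication.
naive = np.repeat(y[:coarse_n], factor)

err = lambda v: np.linalg.norm(v - scene) / np.linalg.norm(scene)
print(f"motion-based reconstruction error: {err(recon):.3f}")
print(f"single-frame upsampling error:     {err(naive):.3f}")
```

In this toy setting the motion-based reconstruction recovers the scene far more accurately than simply upsampling a single blurred frame.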

“There was some prior theoretical work that suggested this shouldn’t be possible,” says Felzenszwalb. “But we show that there were a few assumptions in those earlier theories that turned out not to be true. And so this is a proof of concept that we really can recover more information by using motion.”

Application scenarios

When the researchers tested the algorithm, they found that it could indeed exploit camera motion to produce images with much higher resolution than could be obtained without it. In one experiment, they used a standard camera to capture a series of images at a grid of high-resolution (sub-pixel) locations. In another, they took one or more images while the sensor was moving. They also simulated recording single images or sequences of pictures while vibrating the sensor and while moving it along a linear path, scenarios that they note could be applicable to aerial or satellite imaging. In each case, they used their algorithm to construct a single high-resolution image from the shots captured by the camera.

“Our results are especially interesting for applications where one wants high resolution over a relatively large field of view,” Zia says. “This is important at many scales from microscopy to satellite imaging. Other areas that could benefit are super-resolution archival photography of artworks or artifacts and photography from moving aircraft.”

The researchers say they are now looking into the mathematical limits of this approach as well as practical demonstrations. “In particular, we hope to soon share results from consumer camera and mobile phone experiments as well as lab-specific setups using scientific-grade CCDs and thermal focal plane arrays,” Zia tells Physics World.

“While there are existing systems that cameras use to take motion blur out of photos, no one has tried to use that to actually increase resolution,” says Felzenszwalb. “We’ve shown that’s something you could definitely do.”

The researchers presented their study at the International Conference on Computational Photography and their work is also available on the arXiv pre-print server.

Bayes’ rule goes quantum

How would Bayes’ rule – a technique for calculating probabilities – work in the quantum world? Physicists at the National University of Singapore, Nagoya University in Japan and the Hong Kong University of Science and Technology (Guangzhou) have now put forward a possible answer. Their work could help improve quantum machine learning and quantum error correction in quantum computing.

Bayes’ rule is named after Thomas Bayes, who first defined it for conditional probabilities in “An Essay Towards Solving a Problem in the Doctrine of Chances”, published in 1763. It describes the probability of an event based on prior knowledge of conditions that might be related to the event, and it is routinely used to update beliefs in the light of new evidence (data). In classical statistics, the rule can be derived from the principle of minimum change: the updated belief must be consistent with the new data while deviating as little as possible from the previous belief.
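
In its standard form, for a hypothesis $H$ and data $D$, the rule reads

$$P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D)},$$

where $P(H)$ is the prior belief and $P(H \mid D)$ is the updated (posterior) belief.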

In mathematical terms, the principle of minimum change minimizes the distance between the joint probability distributions describing the initial and the updated belief. Simply put, this is the idea that, for any new piece of information, beliefs are updated in the smallest possible way that is compatible with the new facts. For example, when a person tests positive for Covid-19, they may already have suspected they were ill, but the new information strengthens that belief. Bayes’ rule is therefore a way to calculate the probability of having contracted Covid-19 based not only on the test result and the test’s error rates, but also on the patient’s initial suspicions.
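
As a concrete illustration (with made-up numbers chosen purely for the example, not taken from the study), the short calculation below applies Bayes’ rule to a positive test result:

```python
# Hypothetical numbers, chosen only to illustrate Bayes' rule for the
# Covid-19 test example in the text; they are not from the study.
prior = 0.10           # patient's initial suspicion of being infected
sensitivity = 0.85     # P(positive | infected) = 1 - false-negative rate
false_positive = 0.02  # P(positive | not infected)

# Total probability of a positive result (law of total probability).
p_positive = sensitivity * prior + false_positive * (1 - prior)

# Bayes' rule: belief updated by the positive test.
posterior = sensitivity * prior / p_positive
print(f"P(infected | positive test) = {posterior:.2f}")  # roughly 0.83
```

The updated probability depends on both the test’s error rates and the prior suspicion, exactly as described above.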

Quantum analogue

Quantum versions of Bayes’ rule have been around for decades, but the approach through the minimum change principle had not been tried before. In the new work, a team led by Ge Bai, Francesco Buscemi and Valerio Scarani set out to do just that.

“We found which quantum Bayes’ rule is singled out when one maximizes the fidelity (which is equivalent to minimizing the change) between two processes,” explains Bai. “In many cases, the solution is the ‘Petz recovery map’, proposed by Dénes Petz in the 1980s and already considered one of the best candidates for the quantum Bayes’ rule. It is based on the rules of information processing, crucial not only for human reasoning, but also for machine learning models that update their parameters with new data.”
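
For reference, the Petz recovery map for a quantum channel $\mathcal{E}$ and a reference (prior) state $\gamma$ is usually written as

$$\mathcal{P}_{\gamma,\mathcal{E}}(\rho) = \gamma^{1/2}\,\mathcal{E}^{\dagger}\!\left(\mathcal{E}(\gamma)^{-1/2}\,\rho\,\mathcal{E}(\gamma)^{-1/2}\right)\gamma^{1/2},$$

where $\mathcal{E}^{\dagger}$ is the adjoint of the channel. When all the operators commute, this expression reduces to the classical Bayes’ rule, which is why the Petz map has long been regarded as its natural quantum analogue. (This is the standard textbook form; the paper’s own conventions may differ.)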

Quantum theory is counterintuitive and the mathematics is hard, says Bai. “Our work provides a mathematically sound way to update knowledge about a quantum system, rigorously derived from simple principles of reasoning,” he tells Physics World. “It demonstrates that the mathematical description of a quantum system – the density matrix – is not just a predictive tool, but is genuinely useful for representing our understanding of an underlying system. It effectively extends the concept of gaining knowledge, which mathematically corresponds to a change in probabilities, into the quantum realm.”

A conservative stance

The “simple principles of reasoning” encompass the minimum change principle, adds Buscemi. “The idea is that while new data should lead us to update our opinion or belief about something, the change should be as small as possible, given the data received.

“It’s a conservative stance of sorts: I’m willing to change my mind, but only by the amount necessary to accept the hard facts presented to me, no more.”

“This is the simple (yet powerful) principle that Ge mentioned,” he says, “and it guides scientific inference by preventing unwanted biases from entering the reasoning process.”

An axiomatic approach to the Petz recovery map

While several quantum versions of Bayes’ rule have been put forward before now, these were mostly justified by having properties analogous to those of the classical rule, adds Scarani. “Recently, Francesco and one co-author proposed an axiomatic approach to the most frequently used quantum Bayes’ rule, the one using the Petz recovery map. Our work is the first to derive a quantum Bayes’ rule from an optimization principle, which works very generally for classical information, but which has been used here for the first time in quantum information.”

The result is very intriguing, he says: “We recover the Petz map in many cases, but not all. If we take our new approach to be the correct way to define a quantum Bayes’ rule, then previous constructions based on analogies were correct very often, but not quite always; and one or more of the axioms are not to be enforced after all. Our work is therefore a major advance, but it is not the end of the road – and this is nice.”

Indeed, the researchers say they are now busy further refining their quantum Bayes’ rule. They are also looking into applications for it. “Beyond machine learning, this rule could be powerful for inference—not just for predicting the future but also retrodicting the past,” says Bai. “This is directly applicable to problems in quantum communication, where one must recover encoded messages, and in quantum tomography, where the goal is to infer a system’s internal state from observations.

“We will be using our results to develop new, hopefully more efficient, and mathematically well-founded methods for these tasks,” he concludes.

The present study is detailed in Physical Review Letters.

Reformulation of general relativity brings it closer to Newtonian physics

The first-ever detection of gravitational waves was made by LIGO in 2015, and since then researchers have been trying to understand the physics of the black-hole and neutron-star mergers that create the waves. That physics, however, is described by Albert Einstein’s general theory of relativity and is mathematically very complicated.

Now Jiaxi Wu, Siddharth Boyeneni and Elias Most at the California Institute of Technology (Caltech) have addressed this challenge by developing a new formulation of general relativity that is inspired by the equations describing electromagnetic interactions. They show that, when recast in this way, general relativity behaves much like the gravitational inverse square law described by Isaac Newton more than 300 years ago. “This is a very non-trivial insight,” says Most.

One of the fascinations of black holes is the extreme physics they invoke. These astronomical objects pack so much mass into so little space that not even light can escape their gravitational pull. Black holes (and neutron stars) can exist in binary systems in which the two objects orbit each other. Such pairs eventually merge into single black holes in events that emit detectable gravitational waves, and the study of these waves provides an important testbed for gravitational physics. However, the general-relativistic mathematics describing these mergers is very complicated.

Inverse square law

According to Newtonian physics, the gravitational attraction between two masses is proportional to the inverse of the square of the distance between them – the inverse square law. However, as Most points out, “except in special cases, general relativity was not thought to act in the same way”.

Over the past decade, gravitational-wave researchers have used various techniques, including post-Newtonian theory and effective one-body approaches, to better understand the physics of black-hole mergers. One important challenge is how to model parameters such as orbital eccentricity and precession in black-hole systems, and how best to understand “ringdown” – the process whereby a black hole formed by a merger emits gravitational waves as it relaxes into a stable state.

The trio’s recasting of the equations of general relativity was inspired by Maxwell’s equations, which describe how electric and magnetic fields leapfrog each other through space. These equations imply that the force between static electric charges diminishes with the same inverse square law as Newton’s gravitational attraction.
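
For two point masses and two point charges the parallel is explicit (standard formulas, quoted here for orientation):

$$F_{\text{grav}} = \frac{G\,m_1 m_2}{r^2}, \qquad F_{\text{Coulomb}} = \frac{1}{4\pi\varepsilon_0}\,\frac{q_1 q_2}{r^2},$$

with both forces falling off as the inverse square of the separation $r$.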

Early reformulations

The original reformulations of “gravitoelectromagnetism” date back to the 1990s. Most explains that among those who did this early work was his Caltech colleague and LIGO Nobel laureate Kip Thorne, who exploited a special mathematical structure of the curvature of space–time.

“This structure mathematically looks like the equations governing light and the attraction of electric charges, but the physics is quite different,” Most tells Physics World. The gravito-electric field thus derived describes how an object might squish under the forces of gravity. “Mathematically this means that the previous gravito-electric field falls off with inverse distance cubed, which is unlike the inverse distance square law of Newtonian gravity or electrostatic attraction,” adds Most.
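
A standard Newtonian estimate (included here for orientation, not taken from the paper) makes this contrast explicit: the tidal, or gravito-electric, stretching across an object of size $\ell$ at distance $r$ from a mass $M$ is of order

$$\Delta g \sim \frac{2GM\ell}{r^{3}},$$

falling off one power of $r$ faster than the $GM/r^{2}$ Newtonian acceleration itself.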

Most’s own work follows on from previous studies of the potential radio emission produced when the magnetic fields of colliding neutron stars and black holes interact. From there, it seemed reasonable to “think about whether some of these insights naturally carry over to Einstein’s theory of gravity”. The trio began with different formulations of general relativity and electromagnetism, with the aim of deriving gravitational analogues of the electric and magnetic fields that behave more like those of classical electromagnetism. They then demonstrated how their formulation might describe the behaviour of a non-rotating (Schwarzschild) black hole, as well as a black-hole binary.

Not so different

“Our work says that actually general relativity is not so different from Newtonian gravity (or better, electric forces) when expressed in the right way,” explains Most. The predicted behaviour is the same in both formulations, but the trio’s reformulation reveals that general relativity and Newtonian physics are more similar than they are generally considered to be. The main new question, then, is what it means to “observe” gravity, and what it means to measure distances relative to how you “observe”.

Alexander Phillipov is a black-hole expert at the University of Maryland in the US and was not directly involved with Most’s research. He describes the research as “very nice”, adding that while the analogy between gravity and electromagnetism has been extensively explored in the past, there is novelty in the interpretation of results from fully nonlinear general relativistic simulations in terms of effective electromagnetic fields. “It promises to provide valuable intuition for a broad class of problems involving compact object mergers.”

The research is described in Physical Review Letters.
