Digital images and videos are ubiquitous and have become indisputably the main carrier of information over the last few decades. At the Computational Imaging Lab we are interested in a wide range of signal and image processing problems both in scientific imaging and computational photography. The underlying common theme of our research is how to make efficient inference about the world through imaging by developing and employing novel techniques from probabilistic modeling and machine learning.
Digital image restoration, a key area of signal and image processing, aims at computationally enhancing the quality of images by undoing the adverse effects of degradation such as noise and blur; it plays an ever more important role in both scientific imaging and everyday photography. Using probabilistic generative models of the imaging process, our research aims to recover the most likely original image given a low-quality image or image sequence. In the following we give a number of illustrative examples to highlight some of our work.
Spatially varying blurs: our work on blind deconvolution of astronomical image sequences [ ] can recover sharp images through atmospheric turbulence, but is limited to relatively small patches of the sky, since the image defect is modeled as a space-invariant blur. Images that cover larger areas require a convolution model that allows for space-variant blur. In [ ] we proposed such a model based on an overlap-add scheme, called Efficient Filter Flow (EFF). Deconvolution based on EFF successfully recovers a sharp image from an image sequence distorted by air turbulence.
Fig. Example frames from an image sequence degraded by air turbulence, and the reconstructed image on the right.
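The overlap-add idea behind EFF can be sketched in a few lines: the image is cut into overlapping patches weighted by smooth windows, each patch is convolved with its own local kernel, and the windowed results are added back together. The sketch below is an illustrative simplification (Bartlett windows, a plain dict of per-patch kernels, approximate normalization near the borders), not the implementation from our paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def eff_blur(image, kernels, patch_size):
    """Space-variant blur via overlap-add (EFF-style sketch).

    image: 2-D array; kernels: dict mapping (row, col) patch index to a
    small 2-D kernel; patch_size: side length of each square patch.
    Patches overlap by half and are weighted with a Bartlett window, so
    the windows approximately sum to one in the image interior.
    """
    h, w = image.shape
    step = patch_size // 2
    win1d = np.bartlett(patch_size)
    window = np.outer(win1d, win1d)
    out = np.zeros_like(image, dtype=float)
    norm = np.zeros_like(image, dtype=float)
    for i, y in enumerate(range(0, h - patch_size + 1, step)):
        for j, x in enumerate(range(0, w - patch_size + 1, step)):
            # window the patch, blur it with its local kernel, add back
            patch = image[y:y + patch_size, x:x + patch_size] * window
            blurred = fftconvolve(patch, kernels[(i, j)], mode='same')
            out[y:y + patch_size, x:x + patch_size] += blurred
            norm[y:y + patch_size, x:x + patch_size] += window
    # renormalize by the window sums; only approximate at the borders
    return out / np.maximum(norm, 1e-8)
```

With identity (delta) kernels everywhere, the operator reduces to (approximately) the identity in the interior, which is a convenient sanity check; space-variant blur is obtained simply by supplying a different kernel per patch.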
Removing camera shake: photographs taken with long exposure times are affected by camera shake, which creates smoothly varying blur. Real hand-held camera movement involves both translation and rotation, which can be modeled with our EFF framework for space-variant blur [ ]. Using sparsity-inducing image priors and an alternating update algorithm, we were also able to recover a sharp image from a single distorted image. The algorithm can be made more robust by restricting the EFF to blurs consistent with physical camera shake [ ], which leads to higher image quality and computational advantages. To foster and simplify comparisons between algorithms for removing camera shake, we created a public benchmark dataset and an initial comparison of current methods [ ].
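The flavor of such alternating updates can be conveyed with a toy 1-D blind deconvolution: alternate a closed-form Wiener-style update of the sharp signal with one of the kernel, projecting the kernel onto nonnegative, normalized estimates with small support. This is a heavily simplified stand-in, with a quadratic regularizer in place of the sparsity-inducing priors and circular boundary conditions; all parameter choices are illustrative.

```python
import numpy as np

def blind_deconv(y, ksize=5, n_iter=50, lam=1e-2):
    """Toy 1-D alternating blind deconvolution (circular model)."""
    n = len(y)
    Y = np.fft.fft(y)
    k = np.zeros(n)
    k[0] = 1.0  # initialize with a delta kernel (no blur)
    for _ in range(n_iter):
        K = np.fft.fft(k)
        # x-step: quadratically regularized least squares, closed form
        X = np.conj(K) * Y / (np.abs(K) ** 2 + lam)
        # k-step: same update with the roles of x and k swapped
        K = np.conj(X) * Y / (np.abs(X) ** 2 + lam)
        k = np.real(np.fft.ifft(K))
        # project: small circular support, nonnegative, sums to one
        mask = np.zeros(n)
        mask[:ksize] = 1
        mask[-ksize:] = 1
        k = np.maximum(k * mask, 0)
        k /= k.sum() + 1e-12
    x = np.real(np.fft.ifft(X))
    return x, k
```

The real algorithm operates on 2-D images with the space-variant EFF model and stronger priors; the constraint step above is the analogue of restricting the blur to a physically plausible family.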
Correcting lens aberrations: even good lenses exhibit optical aberrations away from the image center when used with wide apertures. Similar to camera shake, this creates a certain type of blur that can be modeled with our EFF framework; however, optical aberrations affect each color channel differently (chromatic aberration). We measured these effects with a robotic setup and corrected them using non-blind deconvolution based on the EFF framework [ ]. We also implemented a blind method that rectifies optical aberrations in single images [ ]. The key was to constrain the EFF framework to rotationally symmetric blurs that vary smoothly from the image center to the edges. We elaborated on this idea by employing powerful non-parametric kernel regression techniques that enable faithful estimation and correction of optical aberrations [ ], which might lead to new approaches in lens design.
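To make the constraint concrete, one can build a grid of rotationally symmetric PSFs whose width grows smoothly with distance from the image center. The Gaussian shape and the width parameters below are crude illustrative assumptions, not the constrained basis used in our papers:

```python
import numpy as np

def radial_psf_grid(h, w, grid=5, ksize=9, a=0.5, b=2.0):
    """Grid of rotationally symmetric PSFs, wider toward the edges.

    Returns a dict mapping a (row, col) patch index to a normalized
    ksize x ksize kernel; a and b (hypothetical) control how the blur
    width grows with normalized distance from the image center.
    """
    cy, cx = (h - 1) / 2, (w - 1) / 2
    rmax = np.hypot(cy, cx)
    ax = np.arange(ksize) - ksize // 2
    yy, xx = np.meshgrid(ax, ax, indexing='ij')
    psfs = {}
    for i in range(grid):
        for j in range(grid):
            # patch-center position in the image plane
            py, px = (i + 0.5) * h / grid, (j + 0.5) * w / grid
            r = np.hypot(py - cy, px - cx) / rmax  # 0 center, 1 corner
            sigma = a + b * r  # blur width increases toward the edges
            k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
            psfs[(i, j)] = k / k.sum()
    return psfs
```

Such a family has far fewer degrees of freedom than unconstrained per-patch kernels, which is what makes the blind estimation problem tractable; plugging the grid into a space-variant blur operator like the EFF sketch gives the forward model.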
Dark frame denoising: in astronomical imaging, dim celestial objects require very long exposure times, which cause high sensor noise. It is common to subtract a dark frame from the image - an image taken with the lens covered, containing only sensor noise. The difficulty is that the sensor noise is stochastic, and image information is not taken into account. We studied the distribution of sensor noise generated by a specific camera sensor and proposed a parameterized model [ ]. Combined with a simple image model for astronomical images, this yields superior denoising.
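A minimal sketch of the idea, assuming a per-pixel Gaussian noise model (mean and variance estimated from a stack of dark frames) and a Gaussian scene prior as a stand-in for the actual image model: the MAP estimate blends plain dark-frame subtraction with shrinkage toward the prior, weighted by the local noise level.

```python
import numpy as np

def denoise_with_dark_model(obs, dark_mean, dark_var,
                            prior_mean, prior_var):
    """Per-pixel MAP sketch: obs = scene + fixed-pattern noise
    (dark_mean) + zero-mean sensor noise with variance dark_var.
    prior_mean/prior_var describe a toy Gaussian scene prior."""
    corrected = obs - dark_mean  # classic dark-frame subtraction
    # shrink toward the prior where the sensor noise dominates
    w = prior_var / (prior_var + dark_var)
    return w * corrected + (1 - w) * prior_mean
```

Where the sensor noise variance is small the estimate reduces to ordinary dark-frame subtraction; where it is large, the image model takes over, which is exactly the information the standard subtraction ignores.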
Multi-scale denoising: most denoising methods process small image patches, limiting themselves to recovering high frequencies. This is valid in low-noise situations, but not for large amounts of noise, which also affect low frequencies. Most existing denoising methods can be improved for this high-noise setting by applying them in a multi-scale fashion [ ].
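A two-scale wrapper illustrates the principle: run an arbitrary base denoiser both at full resolution and on a downsampled copy (where the patch denoiser sees a better signal-to-noise ratio for low frequencies), then combine the low frequencies of the coarse result with the high frequencies of the fine result. The factor-2 pyramid and Gaussian frequency split below are illustrative choices, not the scheme from our paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def multiscale_denoise(noisy, denoise, sigma_split=2.0):
    """Two-scale wrapper around a generic `denoise(image)` function."""
    fine = denoise(noisy)
    # denoise a 2x-downsampled copy, then upsample back
    coarse = zoom(denoise(zoom(noisy, 0.5, order=1)), 2.0, order=1)
    coarse = coarse[:noisy.shape[0], :noisy.shape[1]]  # crop to size
    # low frequencies from the coarse pass, high from the fine pass
    low = gaussian_filter(coarse, sigma_split)
    high = fine - gaussian_filter(fine, sigma_split)
    return low + high
```

Because the wrapper only calls `denoise` as a black box, it can be put around most existing patch-based methods without modifying them, which is the point made in the text.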
Image restoration as a learning problem: denoising can also be seen as a non-trivial mapping from noisy to clean images. In [ ] we showed how a multi-layer perceptron (MLP) can be trained to learn such a mapping, leading to a new state-of-the-art denoising method [ ]. Such a discriminative approach also turned out to be effective for other image processing tasks such as inpainting [ ], non-blind deconvolution [ ], and blind image deconvolution [ ].
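The discriminative idea fits in a few lines: regress the clean center pixel of a patch from its noisy surroundings with a neural network trained on (noisy, clean) pairs. The sketch below uses a tiny one-hidden-layer MLP trained by plain gradient descent on squared error; the actual method uses far larger networks, training sets, and patch sizes, so everything here is a toy stand-in.

```python
import numpy as np

def train_denoising_mlp(noisy_patches, clean_centers, hidden=32,
                        lr=0.02, epochs=500, seed=0):
    """Train a one-hidden-layer MLP mapping noisy patches to clean
    center values; returns a predict(patches) -> values function."""
    rng = np.random.default_rng(seed)
    d = noisy_patches.shape[1]
    W1 = rng.normal(0, 1 / np.sqrt(d), (d, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0, 1 / np.sqrt(hidden), (hidden, 1))
    b2 = np.zeros(1)
    y = clean_centers.reshape(-1, 1)
    n = len(y)
    for _ in range(epochs):
        h = np.tanh(noisy_patches @ W1 + b1)  # hidden activations
        pred = h @ W2 + b2
        err = pred - y
        # backpropagate the squared-error loss
        gW2 = h.T @ err / n
        gb2 = err.mean(0)
        gh = err @ W2.T * (1 - h ** 2)
        gW1 = noisy_patches.T @ gh / n
        gb1 = gh.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return lambda P: (np.tanh(P @ W1 + b1) @ W2 + b2).ravel()
```

On synthetic data where each patch is a clean value corrupted by independent noise, the trained network learns an averaging-like mapping and clearly beats the trivial predictor, which is the essence of treating restoration as supervised learning.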