This upcoming summer term I will be teaching a course on Computational Photography together with Prof. Hendrik Lensch.
Our paper "Learning to deblur" has been accepted for publication in PAMI and can be downloaded from the journal webpage here. An earlier version of our work can be found on arXiv here.
I am now leading the Computational Imaging Group in the Department of Empirical Inference at the Max Planck Institute for Intelligent Systems in Tübingen.
I am interested in a wide range of signal and image processing problems in scientific imaging as well as computational photography. My particular interest lies in physical modeling and the development of novel, efficient inference schemes for inverse problems.
Please do visit our group page for more information on our research.
IEEE International Conference on Computer Vision (ICCV 2015), Workshop on Inverse Rendering, 2015. Note: this work was presented as a poster and is not included in the workshop proceedings. (poster)
In Computer Vision - ECCV 2012, LNCS Vol. 7574, pages: 187-200, (Editors: A Fitzgibbon, S Lazebnik, P Perona, Y Sato, and C Schmid), Springer, Berlin, Germany, 12th European Conference on Computer Vision, ECCV, 2012 (inproceedings)
Camera lenses are a critical component of optical imaging systems, and lens imperfections compromise image quality. While sophisticated lens design and quality control traditionally aim to limit optical aberrations, recent works [1,2,3] promote correcting optical flaws by computational means. These approaches rely on elaborate measurement procedures to characterize an optical system and perform image correction by non-blind deconvolution.
In this paper, we present a method that uses physically plausible assumptions to estimate non-stationary lens aberrations blindly, and can thus correct images without knowledge of the specific camera and lens. The blur estimation features a novel preconditioning step that enables fast deconvolution. We obtain results that are competitive with state-of-the-art non-blind approaches.
Our goal is to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.