For more information about our work, see also our group's homepage.
NEWS:
 We have several open PhD positions.
 Our paper on Gauss-Markov Runge-Kutta methods has been selected for an oral presentation at NIPS.
 An emerging community of people interested in assigning uncertainty to the result of deterministic computations met for the roundtable on probabilistic numerics at the MPI on 21/22 August. See also Mike Osborne's blog posts. This is an exciting time for our young circle.
 We have launched a community webpage: http://www.probabilisticnumerics.org
 Videos of my Gaussian Process tutorial at the RNLS summer school at ETH Zürich are online. This is the most recent and most compact version of my GP tutorial. If you want to learn about GPs in 90 minutes, I recommend watching this one instead of the older ones listed below.
 I recently organised the Machine Learning Summer School 2013 (videos and slides).
 Videos of my talks at the MLSS are up (slide animations only work in Adobe Reader).

 I also recently spoke at the Gaussian Process Winter School 2013. Here are the video and the slides of this talk (slide animations only work in Adobe Reader).
 I'm co-teaching the lecture course Intelligent Systems I at the University of Tübingen.
 I recently co-organized the 2012 NIPS workshop on Probabilistic Numerics.
See also my CV for more information on myself.
My work concerns
Probabilistic Uncertainty over the Result of Computations
Intelligence is the ability to act under uncertainty. It exists across a broad range not just of physical, but also of computational scales: from simplistic ideas like gradient descent, which may be a microbe's strategy for getting closer to a source of nutrients, to an adult human's reasoning about career goals. Much of modern research in machine learning and artificial intelligence aims for the top of this hierarchy: algorithms capable of building highly structured models and making complicated decisions, at high computational cost. I believe that there is still plenty of room for improvement at the bottom, too.
Algorithms for the bottom end of the intelligence hierarchy are those constructed by numerical mathematics. They are methods that estimate the result of intractable computations: Optimizers estimate the location of (local or global) extrema. Quadrature methods estimate values of integrals. Differential equation solvers estimate curves fulfilling constraints on their derivatives. In this sense, all these methods can be seen as inference algorithms, estimating latent quantities from the result of tractable computations.
These algorithms are the building blocks for the more complex top-level intelligence. So they have to be modular, to be reusable. They have to be robust, because their failure may cause big problems upstream. And of course they have to be cheap. In my work, I try to address these requirements. Our analysis starts with the observation that many existing, basic numerical methods can be interpreted precisely as maximum a-posteriori estimators under Gaussian priors and likelihoods. This includes real classics, like the conjugate gradient algorithm [Hennig, 2014], the BFGS rule [Hennig & Kiefel, 2013], Gaussian quadrature (including Romberg's rule and others) and the family of Runge-Kutta methods [Schober, Duvenaud & Hennig, 2014]. Taking this interpretation as a starting point, we extend the established methods to address contemporary challenges of large-scale learning: modelling noise in observations, and building structured priors for improved performance on specific tasks. A particularly promising area of generalization is the sharing of information among related problems, and the propagation of information from one computational step to the next. Over time, we plan to extend this notion into a general-purpose framework for uncertain computation.
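As a minimal illustration of this inference view, here is a toy Bayesian quadrature sketch (my own example, not code from the cited papers): placing a squared-exponential Gaussian process prior on an integrand turns the posterior mean of its integral into a weighted sum of function evaluations, i.e. a quadrature rule whose weights are determined by the kernel.

    import numpy as np
    from scipy.special import erf

    # Estimate F = \int_0^1 f(x) dx under a squared-exponential GP prior on f.
    ell = 0.2                                     # kernel length scale (arbitrary choice)

    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

    def kernel_integral(x):
        # z_i = \int_0^1 k(x_i, t) dt, available in closed form for this kernel
        return ell * np.sqrt(np.pi / 2) * (
            erf((1 - x) / (np.sqrt(2) * ell)) + erf(x / (np.sqrt(2) * ell)))

    f = lambda x: np.sin(3 * x) + x               # example integrand
    X = np.linspace(0, 1, 7)                      # evaluation nodes
    K = k(X, X) + 1e-10 * np.eye(len(X))          # jitter for numerical stability
    w = np.linalg.solve(K, kernel_integral(X))    # quadrature weights K^{-1} z
    print(w @ f(X))                               # posterior mean of the integral
    print((1 - np.cos(3)) / 3 + 0.5)              # true value, for comparison

The classical rules mentioned above arise in just this way, from particular choices of prior and evaluation nodes.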
Here is an older selection of some of my work. See "publications" for pdfs and detailed citations, my CV (link above) for more recent information, and http://probabilisticnumerics.org for a frequently updated overview of our emerging community:
Gauss-Markov Runge-Kutta methods for stochastic models of ODE solutions
Runge-Kutta methods, the bedrock of numerical ODE solution methods, can be interpreted as the posterior mean of a specific kind of linear state-space model driven by a Wiener process [Schober, Duvenaud & Hennig. Probabilistic ODE Solvers with Runge-Kutta Means. NIPS 2014]. This insight can be used to construct methods assigning probability measures over the solution of an ODE. This measure can be visualised as uncertainty over the solution [Schober et al., Probabilistic Shortest Path Tractography in DTI using Gaussian Process ODE solvers, MICCAI 2014], and used to control the computational cost of ODE solvers [Hennig & Hauberg, Probabilistic Solutions to Differential Equations and their Application to Riemannian Statistics, AISTATS 2014].
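To give a flavour of the construction, here is a deliberately simplified sketch (my own toy code, with made-up parameters, not the algorithm of the papers above): a Kalman filter with a once-integrated Wiener-process prior on the solution of y' = f(t, y). The posterior mean tracks the solution, while the posterior variance supplies the uncertainty estimate.

    import numpy as np

    def ode_filter(f, y0, t0, t1, n=200, q=1.0, R=1e-10):
        # State [y, y'] under a once-integrated Wiener-process prior;
        # at each step we "observe" the derivative through y' = f(t, y).
        h = (t1 - t0) / n
        A = np.array([[1.0, h], [0.0, 1.0]])                      # state transition
        Q = q * np.array([[h**3 / 3, h**2 / 2], [h**2 / 2, h]])   # process noise
        H = np.array([[0.0, 1.0]])
        m, P = np.array([y0, f(t0, y0)]), np.zeros((2, 2))
        for i in range(n):
            t = t0 + (i + 1) * h
            m, P = A @ m, A @ P @ A.T + Q                         # predict
            r = f(t, m[0]) - m[1]                                 # residual at predicted y
            S = (H @ P @ H.T + R).item()
            K = (P @ H.T).ravel() / S                             # Kalman gain
            m, P = m + K * r, P - np.outer(K, H @ P)              # update
        return m[0], np.sqrt(P[0, 0])                             # mean and std of y(t1)

    # Example: y' = -y, y(0) = 1; the mean should approximate exp(-2).
    mean, std = ode_filter(lambda t, y: -y, 1.0, 0.0, 2.0)
    print(mean, np.exp(-2.0), std)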
propagation of uncertainty for radiation treatment planning
Together with Mark Bangert at the German Cancer Research Centre, we are developing a toolchain for the analytic propagation of setup uncertainties through the optimization chain used in radiation therapy treatment planning, aiming to find more robust treatment plans that reduce the incidence of complications and side-effects in radiation tumor therapy. See Bangert, Hennig & Oelfke, "Analytical probabilistic modeling for radiation therapy treatment planning", Physics in Medicine and Biology 58(16), 5401–5419.
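The core trick is that, under Gaussian error models, expected doses can be computed in closed form instead of by Monte Carlo sampling over setup scenarios. A one-dimensional toy illustration (hypothetical numbers, far simpler than the pencil-beam models in the paper): for a Gaussian dose profile and a Gaussian setup error, the expected dose is the convolution of two Gaussians, so its moments are analytic.

    import numpy as np

    # Expected dose under a Gaussian setup error, computed analytically:
    # the convolution of a Gaussian profile with a Gaussian error is Gaussian.
    sigma_dose, sigma_setup = 5.0, 2.0                     # widths in mm (made up)
    x = np.linspace(-30, 30, 601)
    nominal = np.exp(-0.5 * x**2 / sigma_dose**2)          # planned dose profile
    sigma_exp = np.hypot(sigma_dose, sigma_setup)          # width after convolution
    expected = (sigma_dose / sigma_exp) * np.exp(-0.5 * x**2 / sigma_exp**2)
    print(nominal.max(), expected.max())                   # blurring lowers the peak dose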
quadratic optimization under noise
Stochastic gradient descent is still the dominant algorithm for the training of many online learning systems, like neural networks. Is that just because more elaborate ideas, like quasi-Newton methods, cannot deal with noise? See what can be done about that: Hennig, "Fast Probabilistic Optimization from Noisy Gradients", ICML 2013.
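For concreteness, this is the setting in the following toy sketch (my own illustration): only a noisy gradient oracle is available, and plain stochastic gradient descent with a decaying step size still works, if slowly, on an ill-conditioned problem where curvature information would help.

    import numpy as np

    # Stochastic gradient descent on a quadratic with a noisy gradient oracle.
    rng = np.random.default_rng(1)
    B = np.diag([10.0, 1.0])                             # ill-conditioned Hessian
    g = lambda x: B @ x + 0.1 * rng.standard_normal(2)   # gradient plus noise
    x = np.array([1.0, 1.0])
    for t in range(1000):
        x = x - 0.05 / (1 + 0.01 * t) * g(x)             # decaying step size
    print(x)                                             # near the minimum at the origin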
nonparametric quasi-Newton methods
Did you know that BFGS is a least-squares regressor? See what happens when you make it nonparametric: Hennig & Kiefel, "Quasi-Newton Methods: A New Direction", ICML 2012.
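A toy sketch of the underlying observation (my own illustration, not the paper's algorithm): secant pairs y = grad f(x1) - grad f(x0) ≈ B s are linear observations of the Hessian B, so estimating B is a regression problem, and treating it as one is naturally robust to gradient noise, where a single secant equation is not.

    import numpy as np

    # Hessian estimation as least-squares regression from noisy secant pairs.
    rng = np.random.default_rng(0)
    B = np.array([[3.0, 1.0], [1.0, 2.0]])                  # true Hessian
    grad = lambda x: B @ x + 0.1 * rng.standard_normal(2)   # noisy gradient oracle

    S = rng.standard_normal((2, 50))                        # 50 random steps s_i
    Y = np.column_stack([grad(S[:, i]) - grad(np.zeros(2)) for i in range(50)])
    B_hat = Y @ S.T @ np.linalg.inv(S @ S.T)                # argmin_B ||Y - B S||_F
    print(B_hat)                                            # close to B despite the noise

(The sketch ignores the symmetry and positive-definiteness constraints that proper quasi-Newton rules respect.)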
information-efficient experimental design
When optimizing experimental parameters in search of a global optimum, algorithms shouldn't try evaluating close to the optimum. They should evaluate where they expect to learn most about the optimum. Hennig & Schuler, "Entropy Search for Information-Efficient Global Optimization", JMLR 13 (2012).
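The central object is a belief p_min over the location of the optimum. A simplified sketch of one way to approximate it (the paper constructs p_min far more efficiently via Expectation Propagation; this toy version just draws Gaussian process posterior samples on a grid and histograms their minima):

    import numpy as np

    # Approximate p_min(x), the belief over the minimizer's location, by sampling.
    def rbf(a, b, ell=0.3):
        return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

    rng = np.random.default_rng(2)
    Xobs = np.array([0.1, 0.5, 0.9])                   # points evaluated so far
    yobs = np.array([0.3, -0.2, 0.5])                  # observed values
    grid = np.linspace(0, 1, 200)

    Kxx = rbf(Xobs, Xobs) + 1e-8 * np.eye(len(Xobs))
    Kgx = rbf(grid, Xobs)
    mu = Kgx @ np.linalg.solve(Kxx, yobs)              # GP posterior mean on the grid
    cov = rbf(grid, grid) - Kgx @ np.linalg.solve(Kxx, Kgx.T) + 1e-6 * np.eye(len(grid))

    samples = rng.multivariate_normal(mu, cov, size=1000)
    p_min = np.bincount(samples.argmin(axis=1), minlength=len(grid)) / 1000.0
    print(grid[p_min.argmax()])                        # most probable minimizer location

Entropy Search then chooses the next evaluation to maximally reduce the entropy of this distribution, rather than to simply move toward low function values.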
optimal Bayesian reinforcement learning
Probability theory offers a uniquely coherent view on the infamous exploration/exploitation tradeoff: from the Bayesian view, reinforcement learning is about modelling the effect of possible future observations on the optimality of decisions taken in the present. In general, this decision process is intractable. But under Gaussian process assumptions (which, depending on how you look at it, is either a quite general, or a quite limited set of assumptions), the right answer moves within reach of numerical analysis. Hennig, "Optimal Reinforcement Learning for Gaussian Systems", NIPS 2011.
kernel topic models: fast inference in dependent Dirichlet models
Topic modelling is a very popular area of machine learning at the moment. Documents come with metadata, and topics change over time and from document to document, depending on the author, the subject, and many other features. The probabilistic extension of topic models that allows modelling such effects requires an algorithmic link between discrete distributions and continuous domains, often realised as a set of "dependent Dirichlets". We pointed out how to do this in a numerically extremely efficient way. Hennig, Stern, Herbrich and Graepel, "Kernel Topic Models", AISTATS 2012.
Bayesian tree search
Tree search, finding the optimal leaf of a tree, is exponentially hard in the depth of the tree, because trees are exponentially big in their depth. But what happens during that exponentially long search? If you have a probabilistic belief over the value and location of the optimal leaf, and receive one more observation of an individual leaf's value, shouldn't updating that belief cost only linear time? It does. Hennig, Stern and Graepel, "Coherent Inference on Optimal Play in Game Trees", AISTATS 2010.

E. Sgouritsa, D. Janzing, P. Hennig, B. Schölkopf
(2015). Inference of Cause and Effect with Unsupervised Inverse Regression In: Proceedings of the 18th International Conference on Artificial Intelligence and Statistics, JMLR.org, AISTATS 2015

T. Gunter, M. Osborne, R. Garnett, P. Hennig, S. Roberts
(2014). Sampling for Inference in Probabilistic Models with Fast Bayesian Quadrature In: Advances in Neural Information Processing Systems 27, (Ed) Z. Ghahramani, M. Welling, C. Cortes, N.D. Lawrence and K.Q. Weinberger, Curran Associates, Inc., 2789–2797, 28th Annual Conference on Neural Information Processing Systems (NIPS 2014)

F. Meier, P. Hennig, S. Schaal
(2014). Incremental Local Gaussian Regression In: Advances in Neural Information Processing Systems 27, (Ed) Z. Ghahramani, M. Welling, C. Cortes, N.D. Lawrence and K.Q. Weinberger, Curran Associates, Inc., 972–980, 28th Annual Conference on Neural Information Processing Systems (NIPS 2014)

F. Meier, P. Hennig, S. Schaal
(2014). Efficient Bayesian Local Model Learning for Control In: IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2014

M. Schober, D. Duvenaud, P. Hennig
(2014). Probabilistic ODE Solvers with Runge-Kutta Means In: Advances in Neural Information Processing Systems 27, (Ed) Z. Ghahramani, M. Welling, C. Cortes, N.D. Lawrence and K.Q. Weinberger, Curran Associates, Inc., 739–747, 28th Annual Conference on Neural Information Processing Systems (NIPS 2014)

M. Kiefel, C. H. Schuler, P. Hennig
(2014). Probabilistic Progress Bars In: Pattern Recognition, 36th German Conference, GCPR, LNCS Vol. 8753, (Ed) Jiang, X., Hornegger, J., and Koch, R., Springer, 331–341, GCPR 2014

M. Schober, N. Kasenburg, A. Feragen, P. Hennig, S. Hauberg
(2014). Probabilistic Shortest Path Tractography in DTI Using Gaussian Process ODE Solvers In: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2014, Lecture Notes in Computer Science Vol. 8675, (Ed) P. Golland, N. Hata, C. Barillot, J. Hornegger and R. Howe, Springer, Heidelberg, 265–272, ISBN: 9783319104423, MICCAI 2014

P. Hennig, S. Hauberg
(2014). Probabilistic Solutions to Differential Equations and their Application to Riemannian Statistics In: Proceedings of the 17th International Conference on Artificial Intelligence and Statistics, JMLR W&CP volume 33, (Ed) S. Kaski and J. Corander, JMLR.org, 347–355, AISTATS 2014

M. Bangert, P. Hennig, U. Oelfke
(2013). Analytical probabilistic modeling for radiation therapy treatment planning, Physics in Medicine and Biology, 58(16), 5401–5419

P. Hennig, M. Kiefel
(2013). Quasi-Newton Methods: A New Direction, Journal of Machine Learning Research, 14, 807–829

E. Klenske, M. Zeilinger, B. Schölkopf, P. Hennig
(2013). Nonparametric dynamics estimation for time periodic systems In: Proceedings of the 51st Annual Allerton Conference on Communication, Control, and Computing, 486–493

M. Bangert, P. Hennig, U. Oelfke
(2013). Analytical probabilistic proton dose calculation and range uncertainties In: International Conference on the Use of Computers in Radiation Therapy

D. LopezPaz, P. Hennig, B. Schölkopf
(2013). The Randomized Dependence Coefficient In: Advances in Neural Information Processing Systems 26, (Ed) C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger, 1–9, 27th Annual Conference on Neural Information Processing Systems (NIPS 2013)

P. Hennig, CJ. Schuler
(2012). Entropy Search for Information-Efficient Global Optimization, Journal of Machine Learning Research, 13, 1809–1837

P. Hennig, D. Stern, R. Herbrich, T. Graepel
(2012). Kernel Topic Models In: Fifteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2012)

JP. Cunningham, P. Hennig, S. LacosteJulien
(2012). Approximate Gaussian Integration using Expectation Propagation, 1–11, submitted

P. Hennig
(2011). Optimal Reinforcement Learning for Gaussian Systems In: Advances in Neural Information Processing Systems 24, (Ed) J. Shawe-Taylor, R.S. Zemel, P. Bartlett, F. Pereira and K.Q. Weinberger, 325–333, Twenty-Fifth Annual Conference on Neural Information Processing Systems (NIPS 2011)

M. Bangert, P. Hennig, U. Oelfke
(2010). Using an Infinite von Mises-Fisher Mixture Model to Cluster Treatment Beam Directions in External Radiation Therapy, (Ed) Draghici, S., T.M. Khoshgoftaar, V. Palade, W. Pedrycz, M.A. Wani, X. Zhu, IEEE, Piscataway, NJ, USA, 746–751, ISBN: 9781424492114, Ninth International Conference on Machine Learning and Applications (ICMLA 2010)

P. Hennig, D. Stern, T. Graepel
(2010). Coherent Inference on Optimal Play in Game Trees In: JMLR Workshop and Conference Proceedings Volume 9: AISTATS 2010, (Ed) Teh, Y.W., M. Titterington, JMLR, Cambridge, MA, USA, 326–333, Thirteenth International Conference on Artificial Intelligence and Statistics

P. Hennig, D. Stern, T. Graepel
(2009). Bayesian Quadratic Reinforcement Learning In: NIPS 2009 Workshop on Probabilistic Approaches for Robotics and Control

P. Hennig, W. Denk
(2007). Point-spread functions for backscattered imaging in the scanning electron microscope, Journal of Applied Physics, 102(12), 1–8