Department Talks

Archived Talks

IS Colloquium

- 25 January 2016 • 11:15 – 12:15

- Aldo Faisal

- MPH Lecture Hall

Our research questions centre on a basic characteristic of human brains: variability in their behaviour and its underlying meaning for cognitive mechanisms. Such variability is emerging as a key ingredient in understanding biological principles (Faisal, Selen & Wolpert, 2008, Nature Rev Neurosci), yet adequate quantitative and computational methods for its description and analysis are still lacking. Crucially, we find that biological and behavioural variability contains important information that our brain and our technology can make use of (instead of just averaging it away). Using advanced body sensor networks, we measured eye movements and full-body and hand kinematics of humans living in a studio flat, and will present results on motor control and visual attention suggesting that the control of behaviour "in-the-wild" differs in predictable ways from what we measure "in-the-lab". The results have implications for robotics, prosthetics and neuroscience.

Organizers: Matthias Hohmann

- Tim Sullivan

Beginning with a seminal paper of Diaconis (1988), the aim of so-called "probabilistic numerics" is to compute probabilistic solutions to deterministic problems arising in numerical analysis by casting them as statistical inference problems. For example, numerical integration of a deterministic function can be seen as the integration of an unknown/random function, with evaluations of the integrand at the integration nodes providing partial information about the integrand. Advantages offered by this viewpoint include: access to the Bayesian representation of prior and posterior uncertainties; better propagation of uncertainty through hierarchical systems than simple worst-case error bounds; and appropriate accounting for numerical truncation and round-off error in inverse problems, so that the replicability of deterministic simulations is not confused with their accuracy, which would otherwise yield an inappropriately concentrated Bayesian posterior. This talk will describe recent work on probabilistic numerical solvers for ordinary and partial differential equations, including their theoretical construction, convergence rates, and applications to forward and inverse problems. Joint work with Andrew Stuart (Warwick).
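The integration example above can be made concrete in a minimal sketch (my illustration, not code from the talk): place a Gaussian-process prior with an RBF kernel on the integrand, condition on the node evaluations, and read off a Gaussian posterior over the integral. The length scale `ell`, the jitter, and the node placement below are arbitrary assumed choices.

```python
import numpy as np
from scipy.stats import norm
from scipy.linalg import solve

def bq_integral(f, a, b, nodes, ell=0.8, jitter=1e-8):
    """Bayesian quadrature of f on [a, b]: GP prior (RBF kernel), Gaussian posterior over the integral."""
    y = f(nodes)
    # Gram matrix of the RBF kernel at the integration nodes
    K = np.exp(-0.5 * (nodes[:, None] - nodes[None, :])**2 / ell**2)
    K += jitter * np.eye(len(nodes))
    # kernel mean embedding: z_i = integral of k(x, x_i) over [a, b], closed form for the RBF kernel
    z = ell * np.sqrt(2 * np.pi) * (norm.cdf((b - nodes) / ell) - norm.cdf((a - nodes) / ell))
    mean = z @ solve(K, y, assume_a='pos')        # posterior mean of the integral
    # posterior variance: prior double integral of k minus the explained part (Riemann approximation)
    xs = np.linspace(a, b, 400)
    h = xs[1] - xs[0]
    prior_var = h * h * np.exp(-0.5 * (xs[:, None] - xs[None, :])**2 / ell**2).sum()
    var = max(prior_var - z @ solve(K, z, assume_a='pos'), 0.0)
    return mean, var

mean, var = bq_integral(np.sin, 0.0, np.pi, np.linspace(0, np.pi, 8))
print(mean, var)   # mean should be close to the true integral of sin on [0, pi], i.e. 2
```

The posterior mean reproduces a classical quadrature rule with weights `solve(K, z)`, but the viewpoint additionally yields `var`, the numerical uncertainty that the talk argues should be propagated rather than ignored.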

Organizers: Philipp Hennig

IS Colloquium

- 23 November 2015 • 11:15 – 12:15

- Gernot Müller-Putz

- MPH Lecture Hall

More than half of the persons with spinal cord injuries (SCI) suffer from impairments of both hands, which results in a tremendous decrease in quality of life and represents a major barrier to inclusion in society. Functional restoration is possible with neuroprostheses (NPs) based on functional electrical stimulation (FES). A brain-computer interface (BCI) provides a means of control for such neuroprostheses, since these users have limited abilities to operate traditional assistive devices. This talk presents our early research on BCI-based NP control based on motor imagery, discusses hybrid BCI solutions, and shows our work on movement trajectory decoding. An outlook on future BCI applications will conclude the talk.

Organizers: Moritz Grosse-Wentrup

- Jonas Richiardi

- Max Planck House, Lecture Hall

During rest, brain activity is intrinsically synchronized between different brain regions, forming networks of coherent activity. These functional networks (FNs), consisting of multiple regions widely distributed across lobes and hemispheres, appear to be a fundamental theme of neural organization in mammalian brains. Despite hundreds of studies detailing this phenomenon, the genetic and molecular mechanisms supporting these functional networks remain undefined. Previous work has mostly focused on polymorphisms in candidate genes, or used a twin study approach to demonstrate heritability of aspects of resting-state connectivity. The recent availability of high spatial resolution post-mortem brain gene expression datasets, together with several large-scale imaging genetics datasets, which contain joint in-vivo functional brain imaging data and genotype data for several hundred subjects, opens intriguing data analysis avenues. Using novel cross-modal graph-based statistics, we show that functional brain networks defined with resting-state fMRI can be recapitulated using measures of correlated gene expression, and that the relationship is not driven by gross tissue types. The set of genes we identify is significantly enriched for certain types of ion channels and synapse-related genes. We validate results by showing that polymorphisms in this set significantly correlate with alterations of in-vivo resting-state functional connectivity in a group of 259 adolescents. We further validate results on another species by showing that our list of genes is significantly associated with neuronal connectivity in the mouse brain. These results provide convergent, multimodal evidence that resting-state functional networks emerge from the orchestrated activity of dozens of genes linked to ion channel activity and synaptic function. 
Functional brain networks are also known to be perturbed in a variety of neurological and neuropsychological disorders, including Alzheimer's and schizophrenia. Given this link between disease and networks, and the fact that many brain disorders have genetic contributions, it seems that functional brain networks may be an interesting endophenotype for clinical use. We discuss the translational potential of the imaging genomics techniques we developed.
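The cross-modal comparison described above can be sketched on synthetic data (my toy illustration; the actual graph-based statistics are the speaker's). The idea: correlate the off-diagonal entries of a region-by-region functional-connectivity matrix with those of a gene co-expression matrix, and assess significance by permuting region labels in one modality.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a shared latent structure makes the two modalities genuinely related.
n_regions = 20
latent = rng.standard_normal((n_regions, 3))
fc   = np.corrcoef(latent @ rng.standard_normal((3, 50)) + 0.5 * rng.standard_normal((n_regions, 50)))
gene = np.corrcoef(latent @ rng.standard_normal((3, 80)) + 0.5 * rng.standard_normal((n_regions, 80)))

# Mantel-style statistic: correlation between the upper triangles of the two matrices
iu = np.triu_indices(n_regions, k=1)
observed = np.corrcoef(fc[iu], gene[iu])[0, 1]

# permutation null: relabel regions in the gene-expression matrix
null = []
for _ in range(500):
    p = rng.permutation(n_regions)
    null.append(np.corrcoef(fc[iu], gene[p][:, p][iu])[0, 1])
pval = (1 + np.sum(np.array(null) >= observed)) / 501

print(observed, pval)
```

Permuting region labels preserves the marginal distribution of each matrix while destroying the cross-modal correspondence, which is why it serves as the null here.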

Organizers: Moritz Grosse-Wentrup, Michel Besserve

IS Colloquium

- 28 September 2015 • 12:00 – 13:00

- Sach Mukherjee

- Max Planck House Lecture Hall

Human diseases show considerable heterogeneity at the molecular level. Such heterogeneity is central to personalized medicine efforts that seek to exploit molecular data to better understand disease biology and inform clinical decision making. An emerging notion is that diseases and disease subgroups may differ not only at the level of mean molecular abundance, but also with respect to patterns of molecular interplay. I will discuss our ongoing efforts to develop methods to investigate such heterogeneity, with an emphasis on some high-dimensional aspects.
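The distinction between mean-level and interplay-level differences can be illustrated with a toy sketch (my synthetic example, not the speaker's data): two subgroups with identical mean molecular abundance whose correlation structure nevertheless differs sharply.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# Two disease subgroups: identical means, different molecular interplay.
cov_a = np.array([[1.0, 0.8], [0.8, 1.0]])   # the two molecules interact in subgroup A
cov_b = np.eye(2)                            # no interaction in subgroup B
A = rng.multivariate_normal([0.0, 0.0], cov_a, size=n)
B = rng.multivariate_normal([0.0, 0.0], cov_b, size=n)

# A mean-level comparison sees essentially nothing...
mean_gap = np.abs(A.mean(axis=0) - B.mean(axis=0)).max()

# ...while the correlation structure separates the subgroups clearly.
r_a = np.corrcoef(A.T)[0, 1]
r_b = np.corrcoef(B.T)[0, 1]

print(mean_gap, r_a, r_b)
```

In high dimensions the same idea leads to estimating subgroup-specific dependence structures (e.g. sparse precision matrices), which is where the high-dimensional issues mentioned in the abstract arise.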

Organizers: Michel Besserve, Jonas Peters

IS Colloquium

- 27 July 2015 • 11:15 – 12:15

- Kevin T. Kelly

- Max Planck House Lecture Hall

In machine learning, the standard explanation of Ockham's razor is that it minimizes predictive risk. But prediction is interpreted passively---one may not rely on predictions to change the probability distribution used for training. That limitation may be overcome by studying alternatively manipulated systems in randomized experimental trials, but experiments on multivariate systems or on human subjects are often infeasible or unethical. Happily, the past three decades have witnessed the development of a range of statistical techniques for discovering causal relations from non-experimental data. One characteristic of such methods is a strong Ockham bias toward simpler causal theories---i.e., theories with fewer causal connections among the variables of interest. Our question is what Ockham's razor has to do with finding true (rather than merely plausible) causal theories from non-experimental data.

The traditional story of minimizing predictive risk does not apply, because uniform consistency is often infeasible in non-experimental causal discovery: without strong and implausible assumptions, the probability of erroneous causal orientation may be arbitrarily high at any sample size. The standard justification for causal discovery methods is point-wise consistency, or convergence in probability to the true causes. But Ockham's razor is not necessary for point-wise convergence: a Bayesian with a strong prior bias toward a complex model would also be point-wise consistent. Either way, the crucial Ockham bias remains disconnected from learning performance.

A method reverses its opinion in probability when it probably says A at some sample size and probably says B, incompatible with A, at a higher sample size. A method cycles in probability when it probably says A, then probably says B incompatible with A, and then probably says A again. Uniform consistency allows for no reversals or cycles in probability. Point-wise consistency allows for arbitrarily many.
Lying plausibly between those two extremes is straightest possible convergence to the truth, which allows for only as many cycles and reversals in probability as are necessary to solve the learning problem at hand. We show that Ockham's razor is necessary for cycle-minimal convergence and that patience, or waiting for nature to choose among simplest theories, is necessary for reversal-minimal convergence. The idea yields very tight constraints on inductive statistical methods, both classical and Bayesian, with causal discovery methods as an important special case. It also provides a valid interpretation of significance and power when tests are used to fish inductively for models. The talk is self-contained for a general scientific audience. Novel concepts are illustrated amply with figures and simulations.
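A minimal illustration of an Ockham bias in statistical model selection (my example, not the talk's framework): BIC's complexity penalty makes a selector probably choose the simpler of two nested regression models when the simple one is true, whereas raw fit alone always favours the complex one.

```python
import numpy as np

rng = np.random.default_rng(2)

# True model: y = 1 + noise (the "simple" theory; the slope is zero).
def bic_prefers_simple(n):
    x = rng.standard_normal(n)
    y = 1.0 + rng.standard_normal(n)
    # simple model: intercept only
    rss0 = np.sum((y - y.mean())**2)
    # complex model: intercept + slope (raw fit is always at least as good here)
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss1 = np.sum((y - X @ beta)**2)
    # BIC = n*log(RSS/n) + (number of parameters)*log(n)
    bic0 = n * np.log(rss0 / n) + 1 * np.log(n)
    bic1 = n * np.log(rss1 / n) + 2 * np.log(n)
    return bic0 < bic1   # the Ockham penalty must outweigh the fit improvement

picks = [bic_prefers_simple(200) for _ in range(200)]
print(np.mean(picks))   # fraction of trials in which the simple (true) model wins
```

Without the `log(n)` penalty the complex model would win every trial, since adding a parameter can only reduce the residual sum of squares; the penalty is what converts "fits better" into "probably true".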

Organizers: Michel Besserve, Kun Zhang

IS Colloquium

- 11 August 2014 • 11:15 – 12:30

- Joaquin Quiñonero Candela

- Max Planck Haus Lecture Hall

Facebook serves close to a billion people every day, who are only able to consume a small subset of the information available to them. In this talk I will give some examples of how machine learning is used to personalize people’s Facebook experience. I will also present some data science experiments with fairly counter-intuitive results.

IS Colloquium

- 15 July 2014 • 11:15 – 12:15

- Manfred Opper

- Max Planck Haus Lecture Hall

Stochastic differential equations (SDEs) arise naturally as descriptions of continuous time dynamical systems. My talk addresses the problem of inferring the dynamical state and parameters of such systems from observations taken at discrete times. I will discuss the application of approximate inference methods such as the variational method and expectation propagation and show how higher dimensional systems can be treated by a mean field approximation. In the second part of my talk I will discuss the nonparametric estimation of the drift (i.e. the deterministic part of the ‘force’ which governs the dynamics) as a function of the state using Gaussian process approaches.
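A rough sketch of the second part's idea (my simplified illustration, not the speaker's method): for discretely observed data, Euler–Maruyama increments divided by the time step are noisy evaluations of the drift, so the drift can be estimated nonparametrically by GP regression on those increments. The length scale and the process parameters below are assumed, not tuned.

```python
import numpy as np

rng = np.random.default_rng(3)

# True system: Ornstein–Uhlenbeck, dX = -theta*X dt + sigma dW, so drift f(x) = -theta*x.
theta, sigma, dt, n = 1.0, 0.3, 0.05, 4000
x = np.empty(n)
x[0] = 0.3
for t in range(n - 1):
    x[t + 1] = x[t] - theta * x[t] * dt + sigma * np.sqrt(dt) * rng.standard_normal()

# Euler–Maruyama view: diff(x)/dt is a noisy observation of the drift at state x[t].
X, y = x[:-1], np.diff(x) / dt
idx = rng.choice(n - 1, size=300, replace=False)   # subsample for a cheap GP fit
Xs, ys = X[idx], y[idx]

ell = 0.5                   # RBF length scale (assumed)
noise = sigma**2 / dt       # variance of the increment noise
K = np.exp(-0.5 * (Xs[:, None] - Xs[None, :])**2 / ell**2)
alpha = np.linalg.solve(K + noise * np.eye(len(Xs)), ys)

def drift_hat(xq):
    """GP posterior mean of the drift at query states xq."""
    k = np.exp(-0.5 * (np.asarray(xq)[:, None] - Xs[None, :])**2 / ell**2)
    return k @ alpha

est = drift_hat([-0.3, 0.0, 0.3])
print(est)   # true drift at these states is [0.3, 0.0, -0.3]; the estimate should be mean-reverting
```

The per-observation noise variance `sigma**2 / dt` is large for small time steps, which is exactly why a regularized nonparametric estimator is needed rather than pointwise averaging.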

Organizers: Philipp Hennig, Michel Besserve

- Holger Rauhut

- Max Planck Haus Lecture Hall

The recent theory of compressive sensing predicts that (approximately) sparse vectors can be recovered from vastly incomplete linear measurements using efficient algorithms. This principle has a large number of potential applications in signal and image processing, machine learning and more. The optimal measurement matrices known so far in this context are based on randomness. Recovery algorithms include convex optimization approaches (l1-minimization) as well as greedy methods. Gaussian and Bernoulli random matrices are provably optimal in the sense that they require the smallest possible number of samples. Such matrices, however, are of limited practical interest because they lack any structure. In fact, applications demand certain structure, leaving only limited freedom to inject randomness. We present recovery results for various structured random matrices, including random partial Fourier matrices and partial random circulant matrices. We will also review recent extensions of compressive sensing for recovering matrices of low rank from incomplete information via efficient algorithms such as nuclear norm minimization. This principle has recently found applications in phaseless estimation, i.e., in situations where only the magnitude of the measurements is available. Another extension considers the recovery of low-rank tensors (multi-dimensional arrays) from incomplete linear information. Several obstacles arise when passing from matrices to tensors, such as the lack of a singular value decomposition that shares all the nice properties of the matrix singular value decomposition. Although only partial theoretical results are available, we discuss algorithmic approaches for this problem.
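The basic recovery problem can be demonstrated in a short sketch (my illustration; the problem sizes are arbitrary): basis pursuit, i.e. l1-minimization subject to the measurement constraints, cast as a linear program via the standard split of x into positive and negative parts.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)

n, m, k = 60, 25, 3                 # ambient dimension, number of measurements, sparsity
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian measurement matrix
b = A @ x_true                                  # vastly incomplete linear measurements (m << n)

# Basis pursuit: min ||x||_1 s.t. Ax = b.  Write x = u - v with u, v >= 0,
# so the objective sum(u) + sum(v) equals the l1 norm at the optimum.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n), method="highs")
x_hat = res.x[:n] - res.x[n:]

print(np.max(np.abs(x_hat - x_true)))   # for Gaussian A and m well above the threshold, recovery is exact
```

With m = 25 measurements of a 3-sparse vector in dimension 60, a Gaussian matrix is comfortably above the known recovery thresholds; the structured matrices discussed in the talk aim for the same guarantee with far less randomness.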

Organizers: Michel Besserve