Department Talks

Metrics Matter: Examples from Binary and Multilabel Classification

IS Colloquium
  • 21 August 2017 • 11:15 - 12:15
  • Sanmi Koyejo
  • Empirical Inference meeting room (MPI-IS building, 4th floor)

Performance metrics are a key component of machine learning systems, and are ideally constructed to reflect real-world tradeoffs. In contrast, much of the literature simply focuses on algorithms for maximizing accuracy. With the increasing integration of machine learning into real systems, it is clear that accuracy is an insufficient measure of performance for many problems of interest. Unfortunately, unlike accuracy, many real-world performance metrics are non-decomposable, i.e., they cannot be computed as a sum of losses over individual instances. Thus, known algorithms and their associated analyses do not extend trivially, and direct approaches require expensive combinatorial optimization. I will outline recent results characterizing population-optimal classifiers for large families of binary and multilabel classification metrics, including nonlinear metrics such as the F-measure and the Jaccard measure. Perhaps surprisingly, the prediction that maximizes utility for a range of such metrics takes a simple form. This results in simple and scalable procedures for optimizing complex metrics in practice. I will also outline how the same analysis yields optimal procedures for selecting point estimates from complex posterior distributions over structured objects such as graphs. Joint work with Nagarajan Natarajan, Bowei Yan, Kai Zhong, Pradeep Ravikumar, and Inderjit Dhillon.
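
For many of the metrics in these families, the "simple form" is a thresholding of the conditional probability P(y = 1 | x) at a metric-dependent constant. A minimal sketch of such a plug-in procedure for the F-measure follows; the logistic-regression estimator and the grid search over thresholds are illustrative choices, not the speaker's exact algorithm.

    # Illustrative plug-in classifier for a non-decomposable metric (F1).
    # Two-step recipe: estimate P(y=1|x), then tune a single threshold on
    # held-out data. Estimator and grid search are illustrative choices.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split

    def plugin_f1_classifier(X, y):
        X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3)
        eta = LogisticRegression().fit(X_tr, y_tr)        # estimate P(y=1|x)
        p_val = eta.predict_proba(X_val)[:, 1]
        thresholds = np.linspace(0.01, 0.99, 99)
        scores = [f1_score(y_val, p_val >= t) for t in thresholds]
        t_star = thresholds[int(np.argmax(scores))]       # metric-dependent threshold
        return eta, t_star   # predict via eta.predict_proba(X)[:, 1] >= t_star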

Organizers: Mijung Park


Dominik Bach - TBA

IS Colloquium
  • 02 October 2017 • 11:15 - 12:15
  • Dominik Bach

Soft bioelectronics: Materials and Technology

Talk
  • 11 July 2017 • 14:00 - 15:20
  • Prof. Stéphanie Lacour
  • Lecture hall on the ground floor, N0.002 (broadcast from Stuttgart)

Bioelectronics applies principles of electrical engineering and materials science to biology, medicine, and ultimately health. Soft bioelectronics focuses on designing and manufacturing electronic devices with mechanical properties close to those of the host biological tissue, so that devices achieve long-term reliability with minimal perturbation in vivo, and truly wearable systems become possible. We illustrate the potential of this soft technology with examples ranging from prosthetic tactile skins to soft multimodal neural implants.

Organizers: Diana Rebmann


  • Chris Bauch
  • AGBS seminar room (N4)

Vaccine refusal can lead to outbreaks of previously eradicated diseases and is an increasing problem worldwide. Vaccinating decisions exemplify a complex, coupled system in which vaccinating behaviour and disease dynamics influence one another. Complex systems often exhibit characteristic dynamics near a tipping point to a new dynamical regime. For instance, critical slowing down -- the tendency for a system to start 'wobbling' -- can increase close to a tipping point. We used a linear support vector machine to classify the sentiment of geo-located United States and California tweets concerning measles vaccination from 2011 to 2016. We also extracted data on internet searches on measles from Google Trends. We found evidence for critical slowing down in both datasets in the years before and after the 2014-15 Disneyland, California measles outbreak, suggesting that the population approached a tipping point corresponding to widespread vaccine refusal, but then receded from the tipping point in the face of the outbreak. A differential equation model of coupled behaviour-disease dynamics is shown to exhibit the same patterns. We conclude that studying critical phenomena in online social media data can help us develop analytical tools, based on dynamical systems theory, to identify populations at heightened risk of widespread vaccine refusal.
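
As background for the critical-slowing-down analysis: the standard early-warning indicators are rising lag-1 autocorrelation and variance, computed in a rolling window over the time series. A minimal sketch of these indicators follows; the window length and mean-removal detrending are illustrative choices, not the study's exact settings.

    # Rolling early-warning indicators of critical slowing down for a
    # daily sentiment or search-volume time series x. Window length and
    # detrending are illustrative assumptions.
    import numpy as np

    def rolling_indicators(x, window=90):
        lag1, var = [], []
        for i in range(len(x) - window):
            w = x[i:i + window] - np.mean(x[i:i + window])  # detrend window
            var.append(np.var(w))
            lag1.append(np.corrcoef(w[:-1], w[1:])[0, 1])   # lag-1 autocorrelation
        return np.array(lag1), np.array(var)  # upward trends warn of a tipping point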

Organizers: Diana Rebmann


Causal Macro Variables

IS Colloquium
  • 03 July 2017 • 11:15 - 12:15
  • Frederick Eberhardt
  • Max Planck House Lecture Hall

Standard methods of causal discovery take as input a statistical data set of measurements of well-defined causal variables. The goal is then to determine the causal relations among these variables. But how are these causal variables identified or constructed in the first place? Often we have sensor-level data but assume that the relevant causal interactions occur at a higher scale of aggregation. Sometimes we only have aggregate measurements of causal interactions that occur at a finer scale. I will motivate the general problem of causal discovery and present recent work on a framework and method for the construction and identification of causal macro-variables that ensures that the resulting causal variables have well-defined intervention distributions. Time permitting, I will show an application of this approach to large-scale climate data, for which we were able to identify the macro-phenomenon of El Niño using an unsupervised method on micro-level measurements of the sea surface temperature and wind speeds over the equatorial Pacific.
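
To make the notion of a macro-variable concrete, one simple caricature of the unsupervised step is to cluster micro-variables (e.g., sea surface temperature grid cells) with similar dynamics and aggregate each cluster into a single scalar. The sketch below is only such a caricature; the actual framework and its guarantees about well-defined intervention distributions are the subject of the talk.

    # Caricature of constructing candidate macro-variables: cluster grid
    # cells whose time series behave similarly, then average each cluster.
    # X is a (time, cells) array, e.g., sea surface temperatures.
    import numpy as np
    from sklearn.cluster import KMeans

    def macro_variables(X, n_macro=2):
        labels = KMeans(n_clusters=n_macro, n_init=10).fit_predict(X.T)
        return np.stack([X[:, labels == k].mean(axis=1)     # one aggregate time
                         for k in range(n_macro)], axis=1)  # series per cluster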

Organizers: Sebastian Weichwald


  • Felix Leibfried and Jordi Grau-Moya
  • N 4.022 (Seminar Room EI-Dept.)

Autonomous systems rely on learning from experience to automatically refine their strategy and adapt to their environment, and thereby have huge advantages over traditional hand-engineered systems. At PROWLER.io we use reinforcement learning (RL) for sequential decision making under uncertainty to develop intelligent agents capable of acting in dynamic and unknown environments. In this talk we first give a general overview of the goals and the research conducted at PROWLER.io. Then we will talk about two specific research topics. The first is Information-Theoretic Model Uncertainty, which deals with the problem of making decisions that are robust to uncertainty in the model of the environment. The second is Deep Model-Based Reinforcement Learning, which deals with the problem of learning the transition and reward functions of a Markov decision process in order to use them for data-efficient learning.
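
As a toy illustration of the model-learning step in model-based RL, one can fit a transition model and a reward model to logged transitions and then plan against them. The linear least-squares models below stand in for the deep networks discussed in the talk; all design choices here are illustrative assumptions.

    # Toy model learning for model-based RL: fit linear transition and
    # reward models to logged transitions (S, A, S_next, R) by least
    # squares. Linear models stand in for the talk's deep networks.
    import numpy as np

    def fit_models(S, A, S_next, R):
        X = np.hstack([S, A, np.ones((len(S), 1))])          # features + bias
        W_dyn, *_ = np.linalg.lstsq(X, S_next, rcond=None)   # transition model
        w_rew, *_ = np.linalg.lstsq(X, R, rcond=None)        # reward model
        predict = lambda s, a: np.hstack([s, a, 1.0]) @ W_dyn
        reward = lambda s, a: float(np.hstack([s, a, 1.0]) @ w_rew)
        return predict, reward   # usable for simulated rollouts / planning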

Organizers: Michel Besserve


  • Sebastian Nowozin
  • Max Planck House Lecture Hall

Probabilistic deep learning methods have recently made great progress in generative and discriminative modeling. I will give a brief overview of recent developments and then present two contributions. The first is a generalization of generative adversarial networks (GANs) that considerably extends their use. GANs can be shown to approximately minimize the Jensen-Shannon divergence between two distributions, the true sampling distribution and the model distribution. We extend GANs to the class of f-divergences, which includes popular divergences such as the Kullback-Leibler divergence. This enables applications to variational inference and likelihood-free maximum likelihood, and allows GAN models to become basic building blocks in larger models. The second contribution considers representation learning using variational autoencoder models. To make learned representations of data useful, we need to ground them in semantic concepts. We propose a generative model that can decompose an observation into multiple separate latent factors, each of which represents a separate concept. Such a disentangled representation is useful for recognition and for precise control in generative modeling. We learn our representations using weak supervision in the form of groups of observations, where all samples within a group share the same value of a given latent factor. To make such learning feasible, we generalize recent methods for amortized probabilistic inference to the dependent case. Joint work with Ryota Tomioka (MSR Cambridge), Botond Cseke (MSR Cambridge), and Diane Bouchacourt (Oxford).
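
The f-divergence extension rests on a variational (Fenchel-conjugate) lower bound: writing f* for the convex conjugate of f and T for a critic (discriminator) network, the bound and the resulting minimax objective can be stated as follows (a standard formulation; the notation is ours):

    % Variational lower bound behind f-GANs; f^* is the Fenchel conjugate
    % of the divergence generator f, T is a critic (discriminator).
    D_f(P \,\|\, Q) \;\ge\; \sup_{T}\; \mathbb{E}_{x \sim P}[T(x)]
        - \mathbb{E}_{x \sim Q}[f^{*}(T(x))]
    % which yields the GAN-style minimax objective
    \min_{\theta} \max_{\omega}\; \mathbb{E}_{x \sim P}[T_{\omega}(x)]
        - \mathbb{E}_{x \sim Q_{\theta}}[f^{*}(T_{\omega}(x))]

Choosing f as the Jensen-Shannon generator recovers the original GAN objective, while the Kullback-Leibler choice connects to the likelihood-free maximum likelihood mentioned above.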

Organizers: Lars Mescheder


Statistical testing of epiphenomena for multi-index data

IS Colloquium
  • 06 March 2017 • 11:15 - 12:15
  • John Cunningham
  • MPH Lecture Hall

As large tensor-variate data increasingly become the norm in applied machine learning and statistics, complex analysis methods similarly increase in prevalence. Such a trend offers the opportunity to understand more intricate features of the data that, ostensibly, could not be studied with simpler datasets or simpler methodologies. While promising, these advances are also perilous: novel analysis techniques do not always consider the possibility that their results are in fact an expected consequence of some simpler, already-known feature of the data (for example, treating the tensor like a matrix or a univariate quantity) or of a simpler statistic (for example, the mean and covariance of one of the tensor modes). I will present two works that address this growing problem. The first uses Kronecker algebra to derive a tensor-variate maximum entropy distribution that shares modal moments with the real data. This distribution of surrogate data forms the basis of a statistical hypothesis test, and I use this method to answer a question about epiphenomenal tensor structure in populations of neural recordings in the motor and prefrontal cortex. In the second part, I will discuss how to extend this maximum entropy formulation to arbitrary constraints using deep neural network architectures in the flavor of implicit generative modeling, and I will use this method in a texture synthesis application.
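
To fix ideas, here is a stripped-down surrogate-data test for the two-mode (matrix) case: draw surrogates from a maximum-entropy distribution matching the data's mean and one modal covariance, and compare a statistic of interest against its null distribution. Matching only the column-mode covariance, as below, is a simplification of the tensor-variate Kronecker construction in the talk.

    # Stripped-down surrogate-data test for matrix data X of shape (n, p):
    # surrogates match the mean and column-mode covariance only (a
    # simplification of the tensor-variate Kronecker construction).
    # `stat` maps an (n, p) array to a scalar statistic of interest.
    import numpy as np

    def surrogate_test(X, stat, n_surr=1000):
        mu = X.mean(axis=0)
        C = np.cov(X, rowvar=False)                      # column covariance
        L = np.linalg.cholesky(C + 1e-9 * np.eye(X.shape[1]))
        null = [stat(mu + np.random.randn(*X.shape) @ L.T)
                for _ in range(n_surr)]                  # null distribution
        return np.mean(np.array(null) >= stat(X))        # one-sided p-value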

Organizers: Philipp Hennig


Brain-machine interfaces: New treatment options for psychiatric disorders

IS Colloquium
  • 06 February 2017 • 11:15 - 12:15
  • Surjo R. Soekadar

Organizers: Moritz Grosse-Wentrup


  • Fabien Lotte
  • Max Planck House Lecture Hall

Brain-Computer Interfaces (BCIs) are systems that can translate the brain activity patterns of a user into messages or commands for an interactive application. Such brain activity is typically measured using electroencephalography (EEG), before being processed and classified by the system. EEG-based BCIs have proven promising for a wide range of applications, ranging from communication and control for motor-impaired users, to gaming targeted at the general public, real-time mental state monitoring, and stroke rehabilitation, to name a few. Despite this promising potential, BCIs are still scarcely used outside laboratories for practical applications. The main reason preventing EEG-based BCIs from being widely used is arguably their poor usability, which is notably due to their low robustness and reliability, as well as their long training times. In this talk I present some of our research aimed at addressing these points in order to make EEG-based BCIs usable, i.e., to increase their efficacy and efficiency. In particular, I will present a set of contributions towards this goal: 1) at the user training level, to ensure that users can learn to control a BCI efficiently and effectively, and 2) at the usage level, to explore novel applications of BCIs for which the current reliability can already be useful, e.g., for neuroergonomics or real-time brain activity and mental state visualization.
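
As a concrete picture of the "processed and classified" stage, consider a minimal EEG classification sketch: log band-power features in the mu band, fed to a linear discriminant. Real BCI pipelines add spatial filtering (e.g., CSP) and careful validation; every parameter below is an illustrative assumption.

    # Minimal EEG classification stage: log band-power in the mu band
    # (8-12 Hz) per channel, classified with LDA. All parameters are
    # illustrative; real pipelines add spatial filtering and validation.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def bandpower_features(trials, fs, band=(8.0, 12.0)):
        # trials: (n_trials, n_channels, n_samples) array of EEG epochs
        freqs = np.fft.rfftfreq(trials.shape[-1], d=1.0 / fs)
        psd = np.abs(np.fft.rfft(trials, axis=-1)) ** 2
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return np.log(psd[..., mask].mean(axis=-1))      # (n_trials, n_channels)

    def train_bci(trials, labels, fs=250):
        return LinearDiscriminantAnalysis().fit(bandpower_features(trials, fs), labels)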


  • Hannes Nickisch, Philips Research, Hamburg
  • MRZ seminar room

Coronary artery disease (CAD) is the single leading cause of death worldwide, and Cardiac Computed Tomography Angiography (CCTA) is a non-invasive test to rule out CAD based on the anatomical characterization of coronary lesions. Recent studies suggest that the hemodynamic significance of coronary lesions can be assessed by Fractional Flow Reserve (FFR), which is usually measured invasively in the CathLab but can also be simulated from a patient-specific biophysical model based on CCTA data. We learn a parametric lumped model (LM) enabling fast computational fluid dynamics simulations of blood flow in elongated vessel networks, to alleviate the computational burden of 3D finite element (FE) simulations. We adapt the coefficients balancing the local nonlinear hydraulic effects from a training set of precomputed FE simulations. Our LM yields accurate pressure predictions, suggesting that costly FE simulations can be replaced by our fast LM, paving the way for a personalised, interactive biophysical model with real-time feedback in clinical practice.
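
One way to picture the lumped model: each vessel segment maps flow to pressure drop through a small parametric law whose coefficients are fitted to precomputed FE results. The quadratic form below (dp ~ a*q + b*q*|q|, a viscous plus a turbulent-like loss term) and the least-squares fit are illustrative assumptions, not the actual formulation used at Philips.

    # Illustrative lumped segment model: pressure drop dp ~ a*q + b*q*|q|
    # with coefficients fitted by least squares to precomputed 3D FE
    # simulations (q: flow, dp: pressure drop). Functional form is an
    # assumption, not the actual formulation.
    import numpy as np

    def fit_segment(q, dp):
        X = np.column_stack([q, q * np.abs(q)])
        (a, b), *_ = np.linalg.lstsq(X, dp, rcond=None)
        return lambda qn: a * qn + b * qn * np.abs(qn)   # fast LM prediction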


  • Catrin Misselhorn
  • Max Planck Haus Lecture Hall

The development of increasingly intelligent and autonomous technologies will inevitably lead to these systems having to face morally problematic situations. This is particularly true of artificial systems used in geriatric care environments. It will therefore be necessary in the long run to develop machines with the capacity for a certain amount of autonomous moral decision-making. The goal of this talk is to provide the theoretical foundations for artificial morality, i.e., for implementing moral capacities in artificial systems in general, as well as a roadmap for developing an assistive system in geriatric care that is capable of moral learning.

Organizers: Ludovic Righetti, Philipp Hennig