Archived Talks

- Robert Peharz

- AGBS seminar room

Probabilistic modeling is the method of choice when it comes to reasoning under uncertainty. However, one of the main practical downsides of probabilistic models is that inference, i.e., the process of using the model to answer statistical queries, is notoriously hard in general. This has led to a common folklore that probabilistic models which allow exact inference are necessarily simplistic and underfit any practical task. In this talk, I will present sum-product networks (SPNs), a recently proposed architecture representing a rich and expressive class of probability distributions, which also allows exact and efficient computation of many inference tasks. I will discuss representational properties, inference routines, and learning approaches in SPNs. Furthermore, I will provide some examples of practical applications using SPNs.
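The efficient-inference claim can be made concrete with a toy example. The sketch below (a hypothetical two-variable SPN, not one from the talk) evaluates a mixture of product distributions bottom-up; marginalizing a variable amounts to its leaf returning 1, so joint and marginal queries each cost a single pass over the network:

```python
import math

def leaf(var, p):
    # Bernoulli leaf over variable `var`; a variable absent from the
    # evidence is marginalized out, which for a normalized leaf gives 1.
    def value(evidence):
        x = evidence.get(var)
        if x is None:
            return 1.0
        return p if x == 1 else 1.0 - p
    return value

def product(*children):
    # Product node: children over disjoint variable scopes.
    return lambda ev: math.prod(c(ev) for c in children)

def weighted_sum(weights, *children):
    # Sum node: mixture with nonnegative weights summing to 1.
    return lambda ev: sum(w * c(ev) for w, c in zip(weights, children))

# Hypothetical SPN: a mixture of two fully factorized distributions.
spn = weighted_sum([0.3, 0.7],
                   product(leaf("X1", 0.8), leaf("X2", 0.4)),
                   product(leaf("X1", 0.1), leaf("X2", 0.9)))

joint = spn({"X1": 1, "X2": 0})   # P(X1=1, X2=0)
marginal = spn({"X1": 1})         # P(X1=1): X2 marginalized in the same pass
```

Both queries reuse the same bottom-up evaluation, which is the structural reason inference stays linear in the network size.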

- Simon Lacoste-Julien

- IS Lecture Hall

Machine learning has become a popular application domain for modern optimization techniques, pushing its algorithmic frontier. The need for large-scale optimization algorithms that can handle millions of dimensions or data points, typical of the big-data era, has brought a resurgence of interest in first-order algorithms, making us revisit the venerable stochastic gradient method [Robbins-Monro 1951] as well as the Frank-Wolfe algorithm [Frank-Wolfe 1956]. In this talk, I will review recent improvements to these algorithms which can exploit the structure of modern machine learning approaches. I will explain why the Frank-Wolfe algorithm has become so popular lately, and present a surprising tweak on the stochastic gradient method which yields a fast linear convergence rate. Motivating applications will include weakly supervised video analysis and structured prediction problems.
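As a minimal illustration of why Frank-Wolfe is attractive for structured constraints, the sketch below (my own toy example, not from the talk) runs the classic algorithm with the 2/(t+2) step size on a quadratic over the probability simplex; the linear minimization oracle only ever returns a vertex, so iterates stay feasible without any projection step:

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, steps=2000):
    # Classic Frank-Wolfe with the 2/(t+2) step-size schedule. The linear
    # minimization oracle over the probability simplex returns the vertex
    # e_i with the smallest gradient entry, so every iterate is a convex
    # combination of vertices and remains feasible by construction.
    x = x0.copy()
    for t in range(steps):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0          # LMO: best simplex vertex
        gamma = 2.0 / (t + 2.0)
        x = (1.0 - gamma) * x + gamma * s
    return x

# Toy problem: minimize 0.5 * ||x - b||^2 over the simplex; since b is
# itself a probability vector, the minimizer is b.
b = np.array([0.2, 0.5, 0.3])
x = frank_wolfe_simplex(lambda x: x - b, np.array([1.0, 0.0, 0.0]))
```

The projection-free property is what makes the method appealing when the feasible set (e.g. a polytope arising in structured prediction) is expensive to project onto but cheap to linearly optimize over.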

Organizers: Philipp Hennig

IS Colloquium

- 02 October 2017 • 11:15—12:15

- Dominik Bach

Under acute threat, biological agents need to choose adaptive actions to survive. In my talk, I will provide a decision-theoretic view on this problem and ask what potential computational algorithms could underlie this choice, and how they might be implemented in neural circuits. Rational design principles and non-human animal data tentatively suggest a specific architecture that heavily relies on tailored algorithms for specific threat scenarios. Virtual-reality computer games provide an opportunity to translate non-human animal tasks to humans and investigate these algorithms across species. I will discuss the specific challenges for empirical inference on the underlying neural circuits given such an architecture.

Organizers: Michel Besserve

IS Colloquium

- 21 August 2017 • 11:15—12:15

- Sanmi Koyejo

- Empirical Inference meeting room (MPI-IS building, 4th floor)

Performance metrics are a key component of machine learning systems and are ideally constructed to reflect real-world tradeoffs. In contrast, much of the literature simply focuses on algorithms for maximizing accuracy. With the increasing integration of machine learning into real systems, it is clear that accuracy is an insufficient measure of performance for many problems of interest. Unfortunately, unlike accuracy, many real-world performance metrics are non-decomposable, i.e., they cannot be computed as a sum of losses over individual instances. Thus, known algorithms and their associated analyses do not extend trivially, and direct approaches require expensive combinatorial optimization. I will outline recent results characterizing population-optimal classifiers for large families of binary and multilabel classification metrics, including such nonlinear metrics as the F-measure and the Jaccard measure. Perhaps surprisingly, the prediction which maximizes the utility for a range of such metrics takes a simple form. This results in simple and scalable procedures for optimizing complex metrics in practice. I will also outline how the same analysis gives optimal procedures for selecting point estimates from complex posterior distributions for structured objects such as graphs. Joint work with Nagarajan Natarajan, Bowei Yan, Kai Zhong, Pradeep Ravikumar and Inderjit Dhillon.
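The "simple form" can be illustrated empirically: for F-measure, the optimal classifier thresholds the estimated posterior P(y=1|x), so optimizing this non-decomposable metric reduces to a one-dimensional threshold search. The sketch below (an illustrative empirical version with hypothetical toy data, not the talk's algorithm) performs that search on held-out scores:

```python
import numpy as np

def best_threshold_f1(probs, labels):
    # One-dimensional search for the F1-maximizing threshold on the
    # (estimated) posterior P(y=1|x); `probs` are held-out scores and
    # `labels` the binary ground truth.
    best_f1, best_t = 0.0, 0.5
    for t in np.unique(probs):
        pred = (probs >= t).astype(int)
        tp = int(np.sum(pred & labels))
        fp = int(np.sum(pred & (1 - labels)))
        fn = int(np.sum((1 - pred) & labels))
        f1 = 2.0 * tp / (2 * tp + fp + fn) if tp else 0.0
        if f1 > best_f1:
            best_f1, best_t = f1, float(t)
    return best_f1, best_t

# Hypothetical held-out scores and labels.
probs = np.array([0.9, 0.8, 0.4, 0.2])
labels = np.array([1, 1, 0, 0])
f1, thr = best_threshold_f1(probs, labels)
```

The point of the characterization is that no combinatorial search over label vectors is needed: a single scalar threshold suffices.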

Organizers: Mijung Park

- Manfred K. Warmuth

- Seminar room N4.022, department Schölkopf (4th floor)

Organizers: Bernhard Schölkopf

Talk

- 24 July 2017 • 11:00—12:00

- Ioannis Papantonis

- S2 Seminar Room

We present a way to set the step size of Stochastic Gradient Descent as the solution of a distance-minimization problem. The resulting rule has an intuitive interpretation and resembles the update rules of well-known optimization algorithms. We also discuss asymptotic results on its relation to the optimal learning rate of Gradient Descent. In addition, we present two different estimators, with applications to variational inference problems, and give approximate results about their variance. Finally, we combine all of the above into an optimization algorithm that can be applied to both mini-batch optimization and variational problems.
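The abstract does not spell out the proposed rule, so the sketch below only shows the generic mini-batch SGD loop into which any step-size schedule, including one obtained from a distance-minimization problem, would plug; the 1/(t+1) schedule and the toy least-squares objective used here are my own illustrative choices, not the talk's:

```python
import numpy as np

def sgd(grad_batch, x0, step_size, data, batch=2, epochs=50, seed=0):
    # Generic mini-batch SGD; `step_size(t)` is a placeholder for any
    # schedule, e.g. one derived from a distance-minimization problem.
    rng = np.random.default_rng(seed)
    data = np.array(data, dtype=float)   # local copy, shuffled in place
    x = float(x0)
    t = 0
    for _ in range(epochs):
        rng.shuffle(data)
        for i in range(0, len(data), batch):
            x -= step_size(t) * grad_batch(x, data[i:i + batch])
            t += 1
    return x

# Toy objective: the mean over the data of 0.5 * (x - d)^2, minimized at
# the sample mean; 1/(t+1) is the classical Robbins-Monro schedule.
data = [1.0, 2.0, 3.0, 4.0]
x_hat = sgd(lambda x, d: np.mean(x - d), 0.0, lambda t: 1.0 / (t + 1), data)
```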

Organizers: Philipp Hennig

- Prof. Stéphanie Lacour

- Lecture hall on the ground floor, N0.002 (broadcasted from Stuttgart)

Bioelectronics applies principles of electrical engineering and materials science to biology, medicine, and ultimately health. Soft bioelectronics focuses on designing and manufacturing electronic devices whose mechanical properties are close to those of the host biological tissue, so that long-term reliability and minimal perturbation are achieved in vivo and/or truly wearable systems become possible. We illustrate the potential of this soft technology with examples ranging from prosthetic tactile skins to soft multimodal neural implants.

Talk

- 10 July 2017 • 11:00—12:15

- Chris Bauch

- AGBS seminar room (N4)

Vaccine refusal can lead to outbreaks of previously eradicated diseases and is an increasing problem worldwide. Vaccination decisions exemplify a complex, coupled system in which vaccinating behaviour and disease dynamics influence one another. Complex systems often exhibit characteristic dynamics near a tipping point to a new dynamical regime. For instance, critical slowing down, the tendency for a system to start 'wobbling', can increase close to a tipping point. We used a linear support vector machine to classify the sentiment of geolocated United States and California tweets concerning measles vaccination from 2011 to 2016. We also extracted data on internet searches about measles from Google Trends. We found evidence of critical slowing down in both datasets in the years before and after the 2014-15 Disneyland measles outbreak in California, suggesting that the population approached a tipping point corresponding to widespread vaccine refusal but then receded from it in the face of the outbreak. A differential equation model of coupled behaviour-disease dynamics is shown to exhibit the same patterns. We conclude that studying critical phenomena in online social media data can help us develop analytical tools, based on dynamical systems theory, to identify populations at heightened risk of widespread vaccine refusal.
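Critical slowing down is commonly measured with rolling early-warning indicators such as lag-1 autocorrelation. The sketch below (a generic illustration on synthetic AR(1) data, not the talk's tweet or search data) shows the indicator rising as a system's memory parameter drifts toward a tipping point:

```python
import numpy as np

def lag1_autocorr(x):
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

def rolling_indicator(series, window=100):
    # Early-warning indicator: lag-1 autocorrelation in a sliding window.
    # A sustained rise is the signature of critical slowing down.
    return [lag1_autocorr(series[i:i + window])
            for i in range(len(series) - window + 1)]

# Synthetic AR(1) process whose memory parameter phi drifts toward 1,
# mimicking a system approaching a tipping point.
rng = np.random.default_rng(0)
n = 600
phi = np.linspace(0.1, 0.95, n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + rng.standard_normal()
ind = rolling_indicator(x)
```

Applied to a real time series, a sustained upward trend in such an indicator is what would flag an approaching transition, e.g. toward widespread vaccine refusal.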

- Frederick Eberhardt

- Max Planck House Lecture Hall

Standard methods of causal discovery take as input a statistical data set of measurements of well-defined causal variables. The goal is then to determine the causal relations among these variables. But how are these causal variables identified or constructed in the first place? Often we have sensor-level data but assume that the relevant causal interactions occur at a higher scale of aggregation. Sometimes we only have aggregate measurements of causal interactions at a finer scale. I will motivate the general problem of causal discovery and present recent work on a framework and method for the construction and identification of causal macro-variables that ensures the resulting causal variables have well-defined intervention distributions. Time permitting, I will show an application of this approach to large-scale climate data, for which we were able to identify the macro-phenomenon of El Niño using an unsupervised method on micro-level measurements of sea surface temperature and wind speed over the equatorial Pacific.

Organizers: Sebastian Weichwald

- Felix Leibfried and Jordi Grau-Moya

- N 4.022 (Seminar Room EI-Dept.)

Autonomous systems rely on learning from experience to automatically refine their strategies and adapt to their environment, and thereby have huge advantages over traditional hand-engineered systems. At PROWLER.io we use reinforcement learning (RL) for sequential decision-making under uncertainty to develop intelligent agents capable of acting in dynamic and unknown environments. In this talk we first give a general overview of the goals and the research conducted at PROWLER.io. Then we talk about two specific research topics. The first is Information-Theoretic Model Uncertainty, which deals with the problem of making robust decisions that take into account unspecified models of the environment. The second is Deep Model-Based Reinforcement Learning, which deals with the problem of learning the transition and reward functions of a Markov Decision Process in order to use them for data-efficient learning.
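The model-based idea, learning the transition and reward functions and then planning in the learned model, can be sketched in its simplest tabular form (a generic illustration of the principle, not PROWLER.io's deep method):

```python
import numpy as np

def learn_model(transitions, n_states, n_actions):
    # Estimate an MDP's dynamics P(s'|s,a) and rewards R(s,a) from
    # (s, a, r, s') experience by empirical counts; unvisited pairs
    # fall back to a uniform next-state distribution.
    counts = np.zeros((n_states, n_actions, n_states))
    rewards = np.zeros((n_states, n_actions))
    for s, a, r, s2 in transitions:
        counts[s, a, s2] += 1
        rewards[s, a] += r
    visits = counts.sum(axis=2, keepdims=True)
    P = np.where(visits > 0, counts / np.maximum(visits, 1), 1.0 / n_states)
    R = rewards / np.maximum(visits[..., 0], 1)
    return P, R

def value_iteration(P, R, gamma=0.9, iters=200):
    # Plan in the learned model: V(s) <- max_a [R(s,a) + gamma * E V(s')].
    V = np.zeros(P.shape[0])
    for _ in range(iters):
        V = np.max(R + gamma * (P @ V), axis=1)
    return V

# Two-state toy MDP: state 1 is absorbing with reward 1; from state 0,
# action 1 moves to state 1 and action 0 stays put (both reward 0).
transitions = [(0, 0, 0.0, 0), (0, 1, 0.0, 1), (1, 0, 1.0, 1), (1, 1, 1.0, 1)]
P, R = learn_model(transitions, 2, 2)
V = value_iteration(P, R)   # V[1] -> 1 / (1 - 0.9) = 10, V[0] -> 9
```

Once a model is learned, the agent can plan without further environment interaction, which is the source of the data efficiency mentioned in the abstract.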

Organizers: Michel Besserve