

2020


Kernel Conditional Moment Test via Maximum Moment Restriction

Muandet, K., Jitkrittum, W., Kübler, J. M.

Proceedings of the 36th International Conference on Uncertainty in Artificial Intelligence (UAI), August 2020 (conference) Accepted

[BibTex]



Bayesian Online Prediction of Change Points

Agudelo-España, D., Gomez-Gonzalez, S., Bauer, S., Schölkopf, B., Peters, J.

Proceedings of the 36th International Conference on Uncertainty in Artificial Intelligence (UAI), August 2020 (conference) Accepted

[BibTex]



Algorithmic Recourse: from Counterfactual Explanations to Interventions

Karimi, A., Schölkopf, B., Valera, I.

37th International Conference on Machine Learning (ICML), July 2020 (conference) Submitted

[BibTex]



Model-Agnostic Counterfactual Explanations for Consequential Decisions

Karimi, A., Barthe, G., Balle, B., Valera, I.

Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), June 2020 (conference) Accepted

arXiv [BibTex]



A Continuous-time Perspective for Modeling Acceleration in Riemannian Optimization

Alimisis, F., Orvieto, A., Becigneul, G., Lucchi, A.

Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), June 2020 (conference) Accepted

[BibTex]



Kernel Conditional Density Operators

Schuster, I., Mollenhauer, M., Klus, S., Muandet, K.

Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), Proceedings of Machine Learning Research, June 2020 (conference) Accepted

[BibTex]



A Kernel Mean Embedding Approach to Reducing Conservativeness in Stochastic Programming and Control

Zhu, J., Diehl, M., Schölkopf, B.

2nd Annual Conference on Learning for Dynamics and Control (L4DC), June 2020 (conference) Accepted

arXiv [BibTex]



Disentangling Factors of Variations Using Few Labels

Locatello, F., Tschannen, M., Bauer, S., Rätsch, G., Schölkopf, B., Bachem, O.

8th International Conference on Learning Representations (ICLR), April 2020 (conference)

arXiv link (url) [BibTex]



Mixed-curvature Variational Autoencoders

Skopek, O., Ganea, O., Becigneul, G.

8th International Conference on Learning Representations (ICLR), April 2020 (conference)

link (url) [BibTex]



Non-linear interlinkages and key objectives amongst the Paris Agreement and the Sustainable Development Goals

Laumann, F., von Kügelgen, J., Barahona, M.

ICLR 2020 Workshop "Tackling Climate Change with Machine Learning", April 2020 (conference)

arXiv PDF [BibTex]



From Variational to Deterministic Autoencoders

Ghosh*, P., Sajjadi*, M. S. M., Vergari, A., Black, M. J., Schölkopf, B.

8th International Conference on Learning Representations (ICLR), April 2020, *equal contribution (conference) Accepted

Abstract
Variational Autoencoders (VAEs) provide a theoretically-backed framework for deep generative models. However, they often produce “blurry” images, which is linked to their training objective. Sampling in the most popular implementation, the Gaussian VAE, can be interpreted as simply injecting noise to the input of a deterministic decoder. In practice, this simply enforces a smooth latent space structure. We challenge the adoption of the full VAE framework on this specific point in favor of a simpler, deterministic one. Specifically, we investigate how substituting stochasticity with other explicit and implicit regularization schemes can lead to a meaningful latent space without having to force it to conform to an arbitrarily chosen prior. To retrieve a generative mechanism for sampling new data points, we propose to employ an efficient ex-post density estimation step that can be readily adopted both for the proposed deterministic autoencoders as well as to improve sample quality of existing VAEs. We show in a rigorous empirical study that regularized deterministic autoencoding achieves state-of-the-art sample quality on the common MNIST, CIFAR-10 and CelebA datasets.
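
As a rough illustration of the ex-post density estimation step described above, the sketch below fits a density model to latent codes and samples from it. It assumes an already-trained deterministic autoencoder (its encoder/decoder are only referenced in comments); the 16-dimensional latent space and the 10-component GMM are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of ex-post density estimation on the latent space of a trained
# (deterministic) autoencoder. The latent codes here are random stand-ins;
# in practice they would come from encoding the training data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-in for latent codes produced by a trained encoder on training data.
latent_codes = rng.normal(size=(1000, 16))  # hypothetical 16-dim latent space

# Fit a density estimator on the latent codes after training.
gmm = GaussianMixture(n_components=10, covariance_type="full", random_state=0)
gmm.fit(latent_codes)

# To generate new data: sample latents from the fitted density, then decode.
z_new, _ = gmm.sample(n_samples=64)
# x_new = decode(z_new)  # decode with the trained deterministic decoder
```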

arXiv [BibTex]



Towards causal generative scene models via competition of experts

von Kügelgen*, J., Ustyuzhaninov*, I., Gehler, P., Bethge, M., Schölkopf, B.

ICLR 2020 Workshop "Causal Learning for Decision Making", April 2020, *equal contribution (conference)

arXiv PDF [BibTex]



On Mutual Information Maximization for Representation Learning

Tschannen, M., Djolonga, J., Rubenstein, P. K., Gelly, S., Lucic, M.

8th International Conference on Learning Representations (ICLR), April 2020 (conference)

arXiv link (url) [BibTex]



Adaptation and Robust Learning of Probabilistic Movement Primitives

Gomez-Gonzalez, S., Neumann, G., Schölkopf, B., Peters, J.

IEEE Transactions on Robotics, 36(2):366-379, IEEE, March 2020 (article)

arXiv DOI Project Page [BibTex]



Real Time Trajectory Prediction Using Deep Conditional Generative Models

Gomez-Gonzalez, S., Prokudin, S., Schölkopf, B., Peters, J.

IEEE Robotics and Automation Letters, 5(2):970-976, IEEE, January 2020 (article)

arXiv DOI [BibTex]


More Powerful Selective Kernel Tests for Feature Selection

Lim, J. N., Yamada, M., Jitkrittum, W., Terada, Y., Matsui, S., Shimodaira, H.

Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), 2020 (conference) To be published

arXiv [BibTex]



Computationally Tractable Riemannian Manifolds for Graph Embeddings

Cruceru, C., Becigneul, G., Ganea, O.

37th International Conference on Machine Learning (ICML), 2020 (conference) Submitted

[BibTex]



A Real-Robot Dataset for Assessing Transferability of Learned Dynamics Models

Agudelo-España, D., Zadaianchuk, A., Wenk, P., Garg, A., Akpo, J., Grimminger, F., Viereck, J., Naveau, M., Righetti, L., Martius, G., Krause, A., Schölkopf, B., Bauer, S., Wüthrich, M.

IEEE International Conference on Robotics and Automation (ICRA), 2020 (conference) Accepted

Project Page PDF [BibTex]



An Adaptive Optimizer for Measurement-Frugal Variational Algorithms

Kübler, J. M., Arrasmith, A., Cincio, L., Coles, P. J.

Quantum, 4, pages: 263, 2020 (article)

link (url) DOI [BibTex]



Worst-Case Risk Quantification under Distributional Ambiguity using Kernel Mean Embedding in Moment Problem

Zhu, J., Jitkrittum, W., Diehl, M., Schölkopf, B.

In 59th IEEE Conference on Decision and Control (CDC), 2020 (inproceedings) Accepted

[BibTex]



Advances in Latent Variable and Causal Models

Rubenstein, P.

University of Cambridge, UK, 2020, (Cambridge-Tuebingen-Fellowship) (phdthesis)

[BibTex]



Practical Accelerated Optimization on Riemannian Manifolds

Alimisis, F., Orvieto, A., Becigneul, G., Lucchi, A.

37th International Conference on Machine Learning (ICML), 2020 (conference) Submitted

[BibTex]



Fair Decisions Despite Imperfect Predictions

Kilbertus, N., Gomez Rodriguez, M., Schölkopf, B., Muandet, K., Valera, I.

Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), 2020 (conference) Accepted

[BibTex]



Counterfactual Mean Embedding

Muandet, K., Kanagawa, M., Saengkyongam, S., Marukatat, S.

Journal of Machine Learning Research, 2020 (article) Accepted

[BibTex]



Constant Curvature Graph Convolutional Networks

Bachmann*, G., Becigneul*, G., Ganea, O.

37th International Conference on Machine Learning (ICML), 2020, *equal contribution (conference) Submitted

[BibTex]



Causal Discovery from Heterogeneous/Nonstationary Data

Huang, B., Zhang, K., Zhang, J., Ramsey, J., Sanchez-Romero, R., Glymour, C., Schölkopf, B.

Journal of Machine Learning Research, 21(89):1-53, 2020 (article)

link (url) [BibTex]



Divide-and-Conquer Monte Carlo Tree Search for goal-directed planning

Parascandolo*, G., Buesing*, L., Merel, J., Hasenclever, L., Aslanides, J., Hamrick, J. B., Heess, N., Neitz, A., Weber, T.

2020, *equal contribution (conference) Submitted

arXiv [BibTex]



A machine learning route between band mapping and band structure

Xian*, R. P., Stimper*, V., Zacharias, M., Dong, S., Dendzik, M., Beaulieu, S., Schölkopf, B., Wolf, M., Rettig, L., Carbogno, C., Bauer, S., Ernstorfer, R.

2020, *equal contribution (misc)

arXiv [BibTex]


2008


BCPy2000

Hill, N., Schreiner, T., Puzicha, C., Farquhar, J.

Workshop "Machine Learning Open-Source Software" at NIPS, December 2008 (talk)

Web [BibTex]



Stereo Matching for Calibrated Cameras without Correspondence

Helmke, U., Hüper, K., Vences, L.

In CDC 2008, pages: 2408-2413, IEEE Service Center, Piscataway, NJ, USA, 47th IEEE Conference on Decision and Control, December 2008 (inproceedings)

Abstract
We study the stereo matching problem for reconstruction of the location of 3D-points on an unknown surface patch from two calibrated identical cameras, without using any a priori information about the pointwise correspondences. We assume that camera parameters and the pose between the cameras are known. Our approach follows earlier work for coplanar cameras, where a gradient flow algorithm was proposed to match associated Gramians. Here we extend this method by allowing arbitrary poses for the cameras. We introduce an intrinsic Riemannian Newton algorithm that achieves local quadratic convergence rates; a closed-form solution is presented as well. The efficiency of both algorithms is demonstrated by numerical experiments.

PDF Web DOI [BibTex]



Joint Kernel Support Estimation for Structured Prediction

Lampert, C., Blaschko, M.

In Proceedings of the NIPS 2008 Workshop on "Structured Input - Structured Output" (NIPS SISO 2008), pages: 1-4, NIPS Workshop on "Structured Input - Structured Output" (NIPS SISO), December 2008 (inproceedings)

Abstract
We present a new technique for structured prediction that works in a hybrid generative/discriminative way, using a one-class support vector machine to model the joint probability of (input, output)-pairs in a joint reproducing kernel Hilbert space. Compared to discriminative techniques, like conditional random fields or structured output SVMs, the proposed method has the advantage that its training time depends only on the number of training examples, not on the size of the label space. Due to its generative aspect, it is also very tolerant against ambiguous, incomplete or incorrect labels. Experiments on realistic data show that our method works efficiently and robustly in situations for which discriminative techniques have computational or statistical problems.
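
A toy sketch of the hybrid idea, under strong simplifying assumptions: the joint feature map here is a plain concatenation of the input with a one-hot label (the paper works in a joint RKHS), the data are random stand-ins, and prediction is an argmax of the one-class SVM score over candidate labels.

```python
# Toy joint support estimation with a one-class SVM: model the support of
# (input, output) pairs, then predict by scoring each candidate label.
# The concatenation feature map is an illustrative assumption.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
n, d, n_labels = 200, 5, 3

X = rng.normal(size=(n, d))                 # stand-in inputs
y = rng.integers(0, n_labels, size=n)       # stand-in labels

def joint_features(x, label):
    onehot = np.zeros(n_labels)
    onehot[label] = 1.0
    return np.concatenate([x, onehot])

Z = np.array([joint_features(x, lab) for x, lab in zip(X, y)])
ocsvm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(Z)

# Structured prediction as an argmax over candidate outputs.
x_test = rng.normal(size=d)
scores = [ocsvm.decision_function(joint_features(x_test, lab)[None])[0]
          for lab in range(n_labels)]
y_pred = int(np.argmax(scores))
```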

PDF Web [BibTex]



A Predictive Model for Imitation Learning in Partially Observable Environments

Boularias, A.

In ICMLA 2008, pages: 83-90, (Editors: Wani, M. A., X.-W. Chen, D. Casasent, L. A. Kurgan, T. Hu, K. Hafeez), IEEE, Piscataway, NJ, USA, Seventh International Conference on Machine Learning and Applications, December 2008 (inproceedings)

Abstract
Learning by imitation has been shown to be a powerful paradigm for automated learning in autonomous robots. This paper presents a general framework of learning by imitation for stochastic and partially observable systems. The model is a Predictive Policy Representation (PPR) whose goal is to represent the teacher's policies without any reference to states. The model is fully described in terms of actions and observations only. We show how this model can efficiently learn the personal behavior and preferences of an assistive robot user.

PDF Web DOI [BibTex]



Frequent Subgraph Retrieval in Geometric Graph Databases

Nowozin, S., Tsuda, K.

In ICDM 2008, pages: 953-958, (Editors: Giannotti, F. , D. Gunopulos, F. Turini, C. Zaniolo, N. Ramakrishnan, X. Wu), IEEE Computer Society, Los Alamitos, CA, USA, 8th IEEE International Conference on Data Mining, December 2008 (inproceedings)

Abstract
Discovery of knowledge from geometric graph databases is of particular importance in chemistry and biology, because chemical compounds and proteins are represented as graphs with 3D geometric coordinates. In such applications, scientists are not interested in the statistics of the whole database. Instead they need information about a novel drug candidate or protein at hand, represented as a query graph. We propose a polynomial-delay algorithm for geometric frequent subgraph retrieval. It enumerates all subgraphs of a single given query graph which are frequent geometric $\epsilon$-subgraphs under the entire class of rigid geometric transformations in a database. By using geometric $\epsilon$-subgraphs, we achieve tolerance against variations in geometry. We compare the proposed algorithm to gSpan on chemical compound data, and we show that for a given minimum support the total number of frequent patterns is substantially limited by requiring geometric matching. Although the computation time per pattern is larger than for non-geometric graph mining, the total time is within a reasonable level even for small minimum support.

PDF Web DOI [BibTex]



Block Iterative Algorithms for Non-negative Matrix Approximation

Sra, S.

In ICDM 2008, pages: 1037-1042, (Editors: Giannotti, F. , D. Gunopulos, F. Turini, C. Zaniolo, N. Ramakrishnan, X. Wu), IEEE Service Center, Piscataway, NJ, USA, Eighth IEEE International Conference on Data Mining, December 2008 (inproceedings)

Abstract
In this paper we present new algorithms for non-negative matrix approximation (NMA), commonly known as the NMF problem. Our methods improve upon the well-known methods of Lee and Seung [2000] for both the Frobenius norm as well as the Kullback-Leibler divergence versions of the problem. For the latter problem, our results are especially interesting because it seems to have witnessed much less algorithmic progress than the Frobenius norm NMA problem. Our algorithms are based on a particular block-iterative acceleration technique for EM, which preserves the multiplicative nature of the updates and also ensures monotonicity. Furthermore, our algorithms also naturally apply to the Bregman-divergence NMA algorithms of Dhillon and Sra. Experimentally, we show that our algorithms outperform the traditional Lee/Seung approach most of the time.
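
For reference, here is a minimal numpy sketch of the Lee/Seung multiplicative updates for the Frobenius-norm objective, i.e., the baseline the paper's block-iterative technique accelerates; the block-iterative algorithms themselves are not reproduced. Matrix sizes and rank are arbitrary stand-ins.

```python
# Baseline Lee/Seung multiplicative updates for Frobenius-norm NMF.
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((50, 40))            # non-negative data matrix (stand-in)
r = 5                               # approximation rank (assumption)
W = rng.random((50, r))
H = rng.random((r, 40))

eps = 1e-12                         # guards against division by zero
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + eps)   # multiplicative update for H
    W *= (V @ H.T) / (W @ H @ H.T + eps)   # multiplicative update for W

print(np.linalg.norm(V - W @ H, "fro"))    # approximation error
```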

Web DOI [BibTex]



Metropolis Algorithms for Representative Subgraph Sampling

Hübler, C., Kriegel, H., Borgwardt, K., Ghahramani, Z.

In ICDM 2008, pages: 283-292, (Editors: Giannotti, F.), IEEE, Piscataway, NJ, USA, Eighth IEEE International Conference on Data Mining (ICDM '08), December 2008 (inproceedings)

Abstract
While data mining in chemoinformatics studied graph data with dozens of nodes, systems biology and the Internet are now generating graph data with thousands and millions of nodes. Hence data mining faces the algorithmic challenge of coping with this significant increase in graph size: Classic algorithms for data analysis are often too expensive and too slow on large graphs. While one strategy to overcome this problem is to design novel efficient algorithms, the other is to 'reduce' the size of the large graph by sampling. This is the scope of this paper: We will present novel Metropolis algorithms for sampling a 'representative' small subgraph from the original large graph, with 'representative' describing the requirement that the sample shall preserve crucial graph properties of the original graph. In our experiments, we improve over the pioneering work of Leskovec and Faloutsos (KDD 2006), by producing representative subgraph samples that are both smaller and of higher quality than those produced by other methods from the literature.
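
A toy Metropolis chain in this spirit, with illustrative assumptions throughout: single-node-swap proposals, the degree histogram as the graph property to preserve, and an arbitrary inverse temperature. The paper's actual chains and quality criteria differ.

```python
# Toy Metropolis sampler for a "representative" subgraph: propose node swaps
# and prefer samples whose degree histogram is close to the full graph's.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
G = nx.barabasi_albert_graph(500, 3, seed=0)   # stand-in "large" graph
k = 50                                         # desired sample size
bins = np.arange(0, 31)                        # degree histogram bins (assumption)

def degree_hist(graph):
    degs = [d for _, d in graph.degree()]
    hist, _ = np.histogram(degs, bins=bins, density=True)
    return hist

target = degree_hist(G)

def energy(nodes):                  # L1 distance between degree histograms
    return np.abs(degree_hist(G.subgraph(nodes)) - target).sum()

nodes = set(rng.choice(G.number_of_nodes(), size=k, replace=False).tolist())
E = energy(nodes)
beta = 20.0                         # inverse temperature (tuning assumption)

for _ in range(2000):
    out_node = rng.choice(sorted(nodes))
    in_node = int(rng.integers(G.number_of_nodes()))
    if in_node in nodes:
        continue
    candidate = (nodes - {out_node}) | {in_node}
    E_new = energy(candidate)
    if rng.random() < np.exp(-beta * (E_new - E)):   # Metropolis acceptance
        nodes, E = candidate, E_new
```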

Web DOI [BibTex]



A Bayesian Approach to Switching Linear Gaussian State-Space Models for Unsupervised Time-Series Segmentation

Chiappa, S.

In ICMLA 2008, pages: 3-9, (Editors: Wani, M. A., X.-W. Chen, D. Casasent, L. Kurgan, T. Hu, K. Hafeez), IEEE Computer Society, Los Alamitos, CA, USA, 7th International Conference on Machine Learning and Applications, December 2008 (inproceedings)

Abstract
Time-series segmentation in the fully unsupervised scenario in which the number of segment-types is a priori unknown is a fundamental problem in many applications. We propose a Bayesian approach to a segmentation model based on the switching linear Gaussian state-space model that enforces a sparse parametrization, such as to use only a small number of a priori available different dynamics to explain the data. This enables us to estimate the number of segment-types within the model, in contrast to previous non-Bayesian approaches where training and comparing several separate models was required. As the resulting model is computationally intractable, we introduce a variational approximation where a reformulation of the problem enables the use of efficient inference algorithms.

PDF Web DOI [BibTex]



Infinite Kernel Learning

Gehler, P., Nowozin, S.

In Proceedings of the NIPS 2008 Workshop on "Kernel Learning: Automatic Selection of Optimal Kernels", pages: 1-4, NIPS Workshop on "Kernel Learning: Automatic Selection of Optimal Kernels" (LK ASOK'08), December 2008 (inproceedings)

Abstract
In this paper we build upon the Multiple Kernel Learning (MKL) framework and in particular on [1], which generalized it to infinitely many kernels. We rewrite the problem in the standard MKL formulation, which leads to a Semi-Infinite Program. We devise a new algorithm to solve it (Infinite Kernel Learning, IKL). The IKL algorithm is applicable to both the finite and infinite case and we find it to be faster and more stable than SimpleMKL [2]. Furthermore we present the first large-scale comparison of SVMs to MKL on a variety of benchmark datasets, also comparing IKL. The results show two things: a) for many datasets there is no benefit in using MKL/IKL instead of the SVM classifier, thus the flexibility of using more than one kernel seems to be of no use; b) on some datasets IKL yields massive increases in accuracy over SVM/MKL due to the possibility of using a largely increased kernel set. For those cases parameter selection through Cross-Validation or MKL is not applicable.
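
For context, here is a minimal finite-kernel baseline: a fixed uniform combination of a few precomputed RBF kernels fed to an SVM. The kernel-weight learning of MKL, let alone the semi-infinite program solved by IKL, is deliberately omitted; data and kernel parameters are stand-ins.

```python
# Finite multiple-kernel baseline: uniform combination of precomputed RBF
# kernels plus a standard SVM. MKL/IKL weight learning is not shown.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = X[:200], X[200:], y[:200], y[200:]

gammas = [0.01, 0.1, 1.0]                      # candidate kernel parameters
weights = np.ones(len(gammas)) / len(gammas)   # uniform weights (assumption)

K_tr = sum(w * rbf_kernel(X_tr, X_tr, gamma=g) for w, g in zip(weights, gammas))
K_te = sum(w * rbf_kernel(X_te, X_tr, gamma=g) for w, g in zip(weights, gammas))

clf = SVC(kernel="precomputed").fit(K_tr, y_tr)
print(clf.score(K_te, y_te))                   # accuracy of the combined kernel
```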

PDF Web [BibTex]



Logistic Regression for Graph Classification

Shervashidze, N., Tsuda, K.

NIPS Workshop on "Structured Input - Structured Output" (NIPS SISO), December 2008 (talk)

Abstract
In this paper we deal with graph classification. We propose a new algorithm for performing sparse logistic regression for graphs, which is comparable in accuracy with other methods of graph classification and additionally produces probabilistic output. Sparsity is required for interpretability, which is often necessary in domains such as bioinformatics or chemoinformatics.
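
Assuming a binary matrix of subgraph-occurrence indicators has already been mined (the mining front end is not shown), the sparse regression step itself can be sketched as an L1-penalized logistic fit; the feature matrix below is random stand-in data.

```python
# Sketch of the sparse (L1) logistic regression step over subgraph features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_graphs, n_patterns = 120, 500

X = rng.integers(0, 2, size=(n_graphs, n_patterns))   # subgraph indicators
y = rng.integers(0, 2, size=n_graphs)                 # graph class labels

clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(X, y)

selected = np.flatnonzero(clf.coef_[0])  # patterns with non-zero weight
proba = clf.predict_proba(X[:5])         # probabilistic output
```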

Web [BibTex]



New Projected Quasi-Newton Methods with Applications

Sra, S.

Microsoft Research Tech-talk, December 2008 (talk)

Abstract
Box-constrained convex optimization problems are central to several applications in a variety of fields such as statistics, psychometrics, signal processing, medical imaging, and machine learning. Two fundamental examples are the non-negative least squares (NNLS) problem and the non-negative Kullback-Leibler (NNKL) divergence minimization problem. The non-negativity constraints are usually based on an underlying physical restriction: e.g., when dealing with applications in astronomy, tomography, statistical estimation, or image restoration, the underlying parameters represent physical quantities such as concentration, weight, intensity, or frequency counts, and are therefore only interpretable with non-negative values. Several modern optimization methods can be inefficient for simple problems such as NNLS and NNKL as they are really designed to handle far more general and complex problems. In this work we develop two simple quasi-Newton methods for solving box-constrained (differentiable) convex optimization problems that utilize the well-known BFGS and limited-memory BFGS updates. We position our method between projected gradient (Rosen, 1960) and projected Newton (Bertsekas, 1982) methods, and prove its convergence under a simple Armijo step-size rule. We illustrate our method by showing applications to image deblurring, Positron Emission Tomography (PET) image reconstruction, and Non-negative Matrix Approximation (NMA). On medium-sized data we observe performance competitive to established procedures, while for larger data the results are even better.
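
A sketch of the simple end of the spectrum mentioned above: projected gradient for NNLS with an Armijo backtracking rule. The BFGS/L-BFGS machinery of the talk's projected quasi-Newton methods is not reproduced, and the problem data are random stand-ins.

```python
# Projected gradient for NNLS (min 0.5*||Ax - b||^2 s.t. x >= 0) with an
# Armijo backtracking rule on the projected step.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 30))
b = rng.normal(size=100)

def f(x):
    r = A @ x - b
    return 0.5 * r @ r

x = np.zeros(30)
for _ in range(200):
    g = A.T @ (A @ x - b)                    # gradient of the objective
    t = 1.0
    while True:                              # Armijo backtracking
        x_new = np.maximum(x - t * g, 0.0)   # project onto the orthant
        if f(x_new) <= f(x) + 1e-4 * (g @ (x_new - x)):
            break
        t *= 0.5
    if np.allclose(x_new, x):                # projected-gradient fixed point
        break
    x = x_new
```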

PDF [BibTex]



Prediction-Directed Compression of POMDPs

Boularias, A., Izadi, M., Chaib-Draa, B.

In ICMLA 2008, pages: 99-105, (Editors: Wani, M. A., X.-W. Chen, D. Casasent, L. A. Kurgan, T. Hu, K. Hafeez), IEEE, Piscataway, NJ, USA, Seventh International Conference on Machine Learning and Applications, December 2008 (inproceedings)

Abstract
High dimensionality of belief space in partially observable Markov decision processes (POMDPs) is one of the major causes that severely restricts the applicability of this model. Previous studies have demonstrated that the dimensionality of a POMDP can eventually be reduced by transforming it into an equivalent predictive state representation (PSR). In this paper, we address the problem of finding an approximate and compact PSR model corresponding to a given POMDP model. We formulate this problem in an optimization framework. Our algorithm tries to minimize the potential error that missing some core tests may cause. We also present an empirical evaluation on benchmark problems, illustrating the performance of this approach.

PDF Web DOI [BibTex]



Iterative Subgraph Mining for Principal Component Analysis

Saigo, H., Tsuda, K.

In ICDM 2008, pages: 1007-1012, (Editors: Giannotti, F. , D. Gunopulos, F. Turini, C. Zaniolo, N. Ramakrishnan, X. Wu), IEEE Computer Society, Los Alamitos, CA, USA, IEEE International Conference on Data Mining, December 2008 (inproceedings)

Abstract
Graph mining methods enumerate frequent subgraphs efficiently, but they are not necessarily good features for machine learning due to high correlation among features. Thus it makes sense to perform principal component analysis to reduce the dimensionality and create decorrelated features. We present a novel iterative mining algorithm that captures informative patterns corresponding to major entries of top principal components. It repeatedly calls weighted substructure mining where example weights are updated in each iteration. The Lanczos algorithm, a standard algorithm of eigendecomposition, is employed to update the weights. In experiments, our patterns are shown to approximate the principal components obtained by frequent mining.
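
The Lanczos step alone can be sketched as follows, using scipy's Lanczos-based eigensolver on the covariance of a stand-in pattern-occurrence matrix; the surrounding weighted-substructure-mining loop is omitted.

```python
# Lanczos eigendecomposition step: top principal component of a (centered)
# pattern-occurrence matrix. The iterative mining loop around it is omitted.
import numpy as np
from scipy.sparse.linalg import eigsh

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 1000)).astype(float)  # pattern indicators
Xc = X - X.mean(axis=0)                                 # center the features

C = (Xc.T @ Xc) / (len(Xc) - 1)         # sample covariance matrix
vals, vecs = eigsh(C, k=1, which="LA")  # Lanczos: largest eigenpair

top_pc = vecs[:, 0]                 # loading vector of the first PC
weights = Xc @ top_pc               # per-example scores used as weights
```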

PDF Web DOI [BibTex]



Modelling contrast discrimination data suggest both the pedestal effect and stochastic resonance to be caused by the same mechanism

Goris, R., Wagemans, J., Wichmann, F.

Journal of Vision, 8(15):1-21, November 2008 (article)

Abstract
Computational models of spatial vision typically make use of a (rectified) linear filter, a nonlinearity and dominant late noise to account for human contrast discrimination data. Linear–nonlinear cascade models predict an improvement in observers' contrast detection performance when low, subthreshold levels of external noise are added (i.e., stochastic resonance). Here, we address the issue whether a single contrast gain-control model of early spatial vision can account for both the pedestal effect, i.e., the improved detectability of a grating in the presence of a low-contrast masking grating, and stochastic resonance. We measured contrast discrimination performance without noise and in both weak and moderate levels of noise. Making use of a full quantitative description of our data with few parameters combined with comprehensive model selection assessments, we show the pedestal effect to be more reduced in the presence of weak noise than in moderate noise. This reduction rules out independent, additive sources of performance improvement and, together with a simulation study, supports the parsimonious explanation that a single mechanism underlies the pedestal effect and stochastic resonance in contrast perception.
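
A toy simulation of the stochastic-resonance phenomenon discussed above: for a hard-threshold detector with a subthreshold signal, hit rate minus false-alarm rate peaks at a small non-zero external noise level. The threshold device is an illustrative stand-in, not a reimplementation of the paper's contrast gain-control model.

```python
# Toy stochastic-resonance demo: weak external noise helps a hard-threshold
# detector see a subthreshold signal; strong noise hurts again.
import numpy as np

rng = np.random.default_rng(0)
signal, threshold, n_trials = 0.8, 1.0, 100_000   # signal alone never crosses

for sigma in [0.0, 0.1, 0.5, 1.0, 3.0]:           # external noise levels
    noise = rng.normal(0.0, sigma, size=(2, n_trials))
    hit_rate = np.mean(signal + noise[0] > threshold)   # signal-present trials
    fa_rate = np.mean(noise[1] > threshold)             # signal-absent trials
    print(f"sigma={sigma:3.1f}  hit-fa={hit_rate - fa_rate:+.3f}")
```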

Web DOI [BibTex]