

2019


Neural Signatures of Motor Skill in the Resting Brain

Ozdenizci, O., Meyer, T., Wichmann, F., Peters, J., Schölkopf, B., Cetin, M., Grosse-Wentrup, M.

Proceedings of the IEEE International Conference on Systems, Man and Cybernetics (SMC 2019), October 2019 (conference) Accepted

[BibTex]

Beta Power May Mediate the Effect of Gamma-TACS on Motor Performance

Mastakouri, A., Schölkopf, B., Grosse-Wentrup, M.

Engineering in Medicine and Biology Conference (EMBC), July 2019 (conference) Accepted

arXiv PDF [BibTex]

Coordinating Users of Shared Facilities via Data-driven Predictive Assistants and Game Theory

Geiger, P., Besserve, M., Winkelmann, J., Proissl, C., Schölkopf, B.

Proceedings of the 35th Conference on Uncertainty in Artificial Intelligence (UAI), pages: 49, (Editors: Amir Globerson and Ricardo Silva), AUAI Press, July 2019 (conference)

link (url) [BibTex]

The Sensitivity of Counterfactual Fairness to Unmeasured Confounding

Kilbertus, N., Ball, P. J., Kusner, M. J., Weller, A., Silva, R.

Proceedings of the 35th Conference on Uncertainty in Artificial Intelligence (UAI), pages: 213, (Editors: Amir Globerson and Ricardo Silva), AUAI Press, July 2019 (conference)

link (url) [BibTex]

The Incomplete Rosetta Stone problem: Identifiability results for Multi-view Nonlinear ICA

Gresele*, L., Rubenstein*, P. K., Mehrjou, A., Locatello, F., Schölkopf, B.

Proceedings of the 35th Conference on Uncertainty in Artificial Intelligence (UAI), pages: 53, (Editors: Amir Globerson and Ricardo Silva), AUAI Press, July 2019, *equal contribution (conference)

link (url) [BibTex]

Random Sum-Product Networks: A Simple and Effective Approach to Probabilistic Deep Learning

Peharz, R., Vergari, A., Stelzner, K., Molina, A., Shao, X., Trapp, M., Kersting, K., Ghahramani, Z.

Proceedings of the 35th Conference on Uncertainty in Artificial Intelligence (UAI), pages: 124, (Editors: Amir Globerson and Ricardo Silva), AUAI Press, July 2019 (conference)

link (url) [BibTex]

Kernel Mean Matching for Content Addressability of GANs

Jitkrittum*, W., Sangkloy*, P., Gondal, M. W., Raj, A., Hays, J., Schölkopf, B.

Proceedings of the 36th International Conference on Machine Learning (ICML), 97, pages: 3140-3151, Proceedings of Machine Learning Research, (Editors: Chaudhuri, Kamalika and Salakhutdinov, Ruslan), PMLR, June 2019, *equal contribution (conference)

PDF link (url) [BibTex]

Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations

Locatello, F., Bauer, S., Lucic, M., Raetsch, G., Gelly, S., Schölkopf, B., Bachem, O.

Proceedings of the 36th International Conference on Machine Learning (ICML), 97, pages: 4114-4124, Proceedings of Machine Learning Research, (Editors: Chaudhuri, Kamalika and Salakhutdinov, Ruslan), PMLR, June 2019 (conference)

PDF link (url) [BibTex]

Local Temporal Bilinear Pooling for Fine-grained Action Parsing

Zhang, Y., Tang, S., Muandet, K., Jarvers, C., Neumann, H.

In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019 (inproceedings)

Abstract
Fine-grained temporal action parsing is important in many applications, such as daily activity understanding, human motion analysis, surgical robotics, and other tasks requiring subtle and precise operations over long time periods. In this paper we propose a novel bilinear pooling operation, which is used in intermediate layers of a temporal convolutional encoder-decoder net. In contrast to other work, our proposed bilinear pooling is learnable and hence can capture more complex local statistics than the conventional counterpart. In addition, we introduce exact lower-dimensional representations of our bilinear forms, so that the dimensionality is reduced with neither information loss nor extra computation. We perform intensive experiments to quantitatively analyze our model and show superior performance compared with other state-of-the-art work on various datasets.

Code video demo pdf link (url) [BibTex]
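
As a point of reference for the abstract above, here is a minimal sketch of conventional (non-learnable) local temporal bilinear pooling over a sliding window, i.e. the baseline the paper improves on; the window size and feature dimension are illustrative values, not the paper's settings.

```python
import numpy as np

def local_bilinear_pooling(features, window=5):
    """Conventional local temporal bilinear pooling (illustrative only).

    features: array of shape (T, d) with one feature vector per frame.
    Returns an array of shape (T, d*(d+1)//2): for every frame, the outer
    products x x^T are averaged over a centered temporal window and the
    upper triangle is kept (the pooled matrix is symmetric).
    """
    T, d = features.shape
    iu = np.triu_indices(d)
    pooled = np.zeros((T, len(iu[0])))
    for t in range(T):
        lo, hi = max(0, t - window // 2), min(T, t + window // 2 + 1)
        outer = np.einsum('ti,tj->ij', features[lo:hi], features[lo:hi]) / (hi - lo)
        pooled[t] = outer[iu]
    return pooled

# toy usage: 100 frames of 16-dimensional frame features
x = np.random.randn(100, 16)
print(local_bilinear_pooling(x).shape)  # (100, 136)
```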

Generate Semantically Similar Images with Kernel Mean Matching

Jitkrittum*, W., Sangkloy*, P., Gondal, M. W., Raj, A., Hays, J., Schölkopf, B.

6th Workshop Women in Computer Vision (WiCV) (oral presentation), June 2019, *equal contribution (conference) Accepted

[BibTex]

Robustly Disentangled Causal Mechanisms: Validating Deep Representations for Interventional Robustness

Suter, R., Miladinovic, D., Schölkopf, B., Bauer, S.

Proceedings of the 36th International Conference on Machine Learning (ICML), 97, pages: 6056-6065, Proceedings of Machine Learning Research, (Editors: Chaudhuri, Kamalika and Salakhutdinov, Ruslan), PMLR, June 2019 (conference)

PDF link (url) [BibTex]

First-Order Adversarial Vulnerability of Neural Networks and Input Dimension

Simon-Gabriel, C., Ollivier, Y., Bottou, L., Schölkopf, B., Lopez-Paz, D.

Proceedings of the 36th International Conference on Machine Learning (ICML), 97, pages: 5809-5817, Proceedings of Machine Learning Research, (Editors: Chaudhuri, Kamalika and Salakhutdinov, Ruslan), PMLR, June 2019 (conference)

PDF link (url) [BibTex]

Overcoming Mean-Field Approximations in Recurrent Gaussian Process Models

Ialongo, A. D., Van Der Wilk, M., Hensman, J., Rasmussen, C. E.

In Proceedings of the 36th International Conference on Machine Learning (ICML), 97, pages: 2931-2940, Proceedings of Machine Learning Research, (Editors: Chaudhuri, Kamalika and Salakhutdinov, Ruslan), PMLR, June 2019 (inproceedings)

PDF link (url) [BibTex]

Meta learning variational inference for prediction

Gordon, J., Bronskill, J., Bauer, M., Nowozin, S., Turner, R.

7th International Conference on Learning Representations (ICLR), May 2019 (conference) Accepted

arXiv link (url) [BibTex]

Deep Lagrangian Networks: Using Physics as Model Prior for Deep Learning

Lutter, M., Ritter, C., Peters, J.

7th International Conference on Learning Representations (ICLR), May 2019 (conference) Accepted

link (url) [BibTex]

DeepOBS: A Deep Learning Optimizer Benchmark Suite

Schneider, F., Balles, L., Hennig, P.

7th International Conference on Learning Representations (ICLR), May 2019 (conference) Accepted

link (url) [BibTex]

Disentangled State Space Models: Unsupervised Learning of Dynamics across Heterogeneous Environments

Miladinović*, D., Gondal*, M. W., Schölkopf, B., Buhmann, J. M., Bauer, S.

Deep Generative Models for Highly Structured Data Workshop at ICLR, May 2019, *equal contribution (conference) Accepted

link (url) [BibTex]

SOM-VAE: Interpretable Discrete Representation Learning on Time Series

Fortuin, V., Hüser, M., Locatello, F., Strathmann, H., Rätsch, G.

7th International Conference on Learning Representations (ICLR), May 2019 (conference) Accepted

link (url) [BibTex]

Resampled Priors for Variational Autoencoders

Bauer, M., Mnih, A.

22nd International Conference on Artificial Intelligence and Statistics, April 2019 (conference) Accepted

arXiv [BibTex]

Semi-Generative Modelling: Covariate-Shift Adaptation with Cause and Effect Features

von Kügelgen, J., Mey, A., Loog, M.

Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), 89, pages: 1361-1369, (Editors: Kamalika Chaudhuri and Masashi Sugiyama), PMLR, April 2019 (conference)

PDF link (url) [BibTex]

Sobolev Descent

Mroueh, Y., Sercu, T., Raj, A.

Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), 89, pages: 2976-2985, (Editors: Kamalika Chaudhuri and Masashi Sugiyama), PMLR, April 2019 (conference)

PDF link (url) [BibTex]

Fast and Robust Shortest Paths on Manifolds Learned from Data

Arvanitidis, G., Hauberg, S., Hennig, P., Schober, M.

Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), 89, pages: 1506-1515, (Editors: Kamalika Chaudhuri and Masashi Sugiyama), PMLR, April 2019 (conference)

PDF link (url) [BibTex]

Fast Gaussian Process Based Gradient Matching for Parameter Identification in Systems of Nonlinear ODEs

Wenk, P., Gotovos, A., Bauer, S., Gorbach, N., Krause, A., Buhmann, J. M.

Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), 89, pages: 1351-1360, (Editors: Kamalika Chaudhuri and Masashi Sugiyama), PMLR, April 2019 (conference)

PDF PDF link (url) [BibTex]

Active Probabilistic Inference on Matrices for Pre-Conditioning in Stochastic Optimization

de Roos, F., Hennig, P.

Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), 89, pages: 1448-1457, (Editors: Kamalika Chaudhuri and Masashi Sugiyama), PMLR, April 2019 (conference)

Abstract
Pre-conditioning is a well-known concept that can significantly improve the convergence of optimization algorithms. For noise-free problems, where good pre-conditioners are not known a priori, iterative linear algebra methods offer one way to efficiently construct them. For the stochastic optimization problems that dominate contemporary machine learning, however, this approach is not readily available. We propose an iterative algorithm inspired by classic iterative linear solvers that uses a probabilistic model to actively infer a pre-conditioner in situations where Hessian-projections can only be constructed with strong Gaussian noise. The algorithm is empirically demonstrated to efficiently construct effective pre-conditioners for stochastic gradient descent and its variants. Experiments on problems of comparably low dimensionality show improved convergence. In very high-dimensional problems, such as those encountered in deep learning, the pre-conditioner effectively becomes an automatic learning-rate adaptation scheme, which we also empirically show to work well.

PDF link (url) [BibTex]
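
The abstract notes that in very high dimensions the pre-conditioner behaves like an automatic learning-rate adaptation scheme. The sketch below only illustrates plain diagonal pre-conditioning of stochastic gradients (an Adagrad-style running estimate) on a toy, badly scaled least-squares problem; it is not the paper's probabilistic inference procedure, and all problem sizes are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(1000, 20)) * np.logspace(0, -2, 20)  # badly scaled columns
w_true = rng.normal(size=20)
y = A @ w_true + 0.1 * rng.normal(size=1000)

w = np.zeros(20)
h = np.zeros(20)                      # running estimate of squared gradients
lr, eps = 0.5, 1e-8
for step in range(2000):
    idx = rng.integers(0, 1000, size=32)          # mini-batch
    g = A[idx].T @ (A[idx] @ w - y[idx]) / 32     # stochastic gradient
    h += g ** 2
    w -= lr * g / (np.sqrt(h) + eps)              # diagonally pre-conditioned step
print(np.linalg.norm(w - w_true))                 # residual error on the toy problem
```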

AReS and MaRS - Adversarial and MMD-Minimizing Regression for SDEs

Abbati*, G., Wenk*, P., Osborne, M. A., Krause, A., Schölkopf, B., Bauer, S.

Proceedings of the 36th International Conference on Machine Learning (ICML), 97, pages: 1-10, Proceedings of Machine Learning Research, (Editors: Chaudhuri, Kamalika and Salakhutdinov, Ruslan), PMLR, 2019, *equal contribution (conference)

PDF link (url) [BibTex]

Kernel Stein Tests for Multiple Model Comparison

Lim, J. N., Yamada, M., Schölkopf, B., Jitkrittum, W.

Advances in Neural Information Processing Systems 32, 33rd Annual Conference on Neural Information Processing Systems, 2019 (conference) To be published

[BibTex]

MYND: A Platform for Large-scale Neuroscientific Studies

Hohmann, M. R., Hackl, M., Wirth, B., Zaman, T., Enficiaud, R., Grosse-Wentrup, M., Schölkopf, B.

Proceedings of the 2019 Conference on Human Factors in Computing Systems (CHI), 2019 (conference) Accepted

[BibTex]

A Kernel Stein Test for Comparing Latent Variable Models

Kanagawa, H., Jitkrittum, W., Mackey, L., Fukumizu, K., Gretton, A.

2019 (conference) Submitted

arXiv [BibTex]

From Variational to Deterministic Autoencoders

Ghosh*, P., Sajjadi*, M. S. M., Vergari, A., Black, M. J., Schölkopf, B.

2019, *equal contribution (conference) Submitted

Abstract
Variational Autoencoders (VAEs) provide a theoretically-backed framework for deep generative models. However, they often produce “blurry” images, which is linked to their training objective. Sampling in the most popular implementation, the Gaussian VAE, can be interpreted as simply injecting noise to the input of a deterministic decoder. In practice, this simply enforces a smooth latent space structure. We challenge the adoption of the full VAE framework on this specific point in favor of a simpler, deterministic one. Specifically, we investigate how substituting stochasticity with other explicit and implicit regularization schemes can lead to a meaningful latent space without having to force it to conform to an arbitrarily chosen prior. To retrieve a generative mechanism for sampling new data points, we propose to employ an efficient ex-post density estimation step that can be readily adopted both for the proposed deterministic autoencoders as well as to improve sample quality of existing VAEs. We show in a rigorous empirical study that regularized deterministic autoencoding achieves state-of-the-art sample quality on the common MNIST, CIFAR-10 and CelebA datasets.

arXiv [BibTex]
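
A minimal sketch of the ex-post density estimation step described in the abstract: fit a mixture model to latent codes produced by an already trained deterministic encoder and sample from it. The latent codes are stand-in random numbers here, and the `decode` call is a hypothetical placeholder for a trained decoder.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# z_train: latent codes from a trained deterministic encoder
# (replaced by random numbers for this sketch)
z_train = np.random.randn(5000, 32)

gmm = GaussianMixture(n_components=10, covariance_type='full')
gmm.fit(z_train)                       # ex-post density estimate over the latent space

z_new, _ = gmm.sample(16)              # draw new latent codes ...
# x_new = decode(z_new)                # ... and decode them (hypothetical decoder)
```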


Fisher Efficient Inference of Intractable Models

Liu, S., Kanamori, T., Jitkrittum, W., Chen, Y.

Advances in Neural Information Processing Systems 32, 33rd Annual Conference on Neural Information Processing Systems, 2019 (conference) To be published

arXiv [BibTex]


2007


Towards compliant humanoids: an experimental assessment of suitable task space position/orientation controllers

Nakanishi, J., Mistry, M., Peters, J., Schaal, S.

In IROS 2007, 2007, pages: 2520-2527, (Editors: Grant, E. , T. C. Henderson), IEEE Service Center, Piscataway, NJ, USA, IEEE/RSJ International Conference on Intelligent Robots and Systems, November 2007 (inproceedings)

Abstract
Compliant control will be a prerequisite for humanoid robotics if these robots are supposed to work safely and robustly in human and/or dynamic environments. One view of compliant control is that a robot should control a minimal number of degrees-of-freedom (DOFs) directly, i.e., those DOFs relevant for the task, and keep the remaining DOFs maximally compliant, usually in the null space of the task. This view naturally leads to task space control. However, surprisingly few implementations of task space control can be found in actual humanoid robots. This paper makes a first step towards assessing the usefulness of task space controllers for humanoids by investigating which choices of controllers are available and what inherent control characteristics they have; this treatment concerns position and orientation control, where the latter is based on a quaternion formulation. Empirical evaluations on an anthropomorphic Sarcos master arm illustrate the robustness of the different controllers as well as the ease of implementing and tuning them. Our extensive empirical results demonstrate that simpler task space controllers, e.g., classical resolved motion rate control or resolved acceleration control, can be quite advantageous in the face of inevitable modeling errors in model-based control, and that well-chosen formulations are easy to implement and quite robust, such that they are useful for humanoids.

PDF Web DOI [BibTex]
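
The abstract singles out classical resolved motion rate control as one of the simpler task space controllers. Below is a minimal sketch of that controller for a planar two-link arm; link lengths, gain, and target are illustrative values only.

```python
import numpy as np

l1, l2 = 1.0, 1.0                       # link lengths (illustrative)

def fk(q):
    """End-effector position of a planar 2-link arm."""
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

# resolved motion rate control: qdot = J^+ (xdot_des + K (x_des - x))
q = np.array([0.3, 0.5])
x_des, K, dt = np.array([1.2, 0.8]), 5.0, 0.01
for _ in range(500):
    qdot = np.linalg.pinv(jacobian(q)) @ (K * (x_des - fk(q)))
    q = q + dt * qdot
print(fk(q))   # should end up close to x_des
```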

Sistema avanzato per la classificazione delle aree agricole in immagini ad elevata risoluzione geometrica: applicazione al territorio del Trentino

Arnoldi, E., Bruzzone, L., Carlin, L., Pedron, L., Persello, C.

In pages: 1-6, 11. Conferenza Nazionale ASITA, November 2007 (inproceedings)

PDF Web [BibTex]

Performance Stabilization and Improvement in Graph-based Semi-supervised Learning with Ensemble Method and Graph Sharpening

Choi, I., Shin, H.

In Korean Data Mining Society Conference, pages: 257-262, Korean Data Mining Society, Seoul, Korea, Korean Data Mining Society Conference, November 2007 (inproceedings)

PDF [BibTex]

Discriminative Subsequence Mining for Action Classification

Nowozin, S., BakIr, G., Tsuda, K.

In ICCV 2007, pages: 1919-1923, IEEE Computer Society, Los Alamitos, CA, USA, 11th IEEE International Conference on Computer Vision, October 2007 (inproceedings)

Abstract
Recent approaches to action classification in videos have used sparse spatio-temporal words encoding local appearance around interesting movements. Most of these approaches use a histogram representation, discarding the temporal order among features. But this ordering information can contain important information about the action itself, e.g. consider the sport disciplines of hurdle race and long jump, where the global temporal order of motions (running, jumping) is important to discriminate between the two. In this work we propose to use a sequential representation which retains this temporal order. Further, we introduce Discriminative Subsequence Mining to find optimal discriminative subsequence patterns. In combination with the LPBoost classifier, this amounts to simultaneously learning a classification function and performing feature selection in the space of all possible feature sequences. The resulting classifier linearly combines a small number of interpretable decision functions, each checking for the presence of a single discriminative pattern. The classifier is benchmarked on the KTH action classification data set and outperforms the best known results in the literature.

PDF Web DOI [BibTex]
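
The paper mines general subsequences with a branch-and-bound search inside LPBoost; the toy sketch below only scores contiguous n-grams exhaustively by the gap in their occurrence frequencies between two classes, which illustrates the idea of a discriminative sequential pattern without reproducing the actual algorithm. All names and data are made up.

```python
from itertools import chain

def ngrams(seq, n):
    """All contiguous n-grams of a sequence, as a set of tuples."""
    return {tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)}

def best_discriminative_ngram(class_a, class_b, max_n=3):
    """Exhaustively score contiguous n-grams by the gap in occurrence frequency."""
    best, best_score = None, -1.0
    patterns = set(chain.from_iterable(
        ngrams(s, n) for s in class_a + class_b for n in range(1, max_n + 1)))
    for p in patterns:
        fa = sum(p in ngrams(s, len(p)) for s in class_a) / len(class_a)
        fb = sum(p in ngrams(s, len(p)) for s in class_b) / len(class_b)
        if abs(fa - fb) > best_score:
            best, best_score = p, abs(fa - fb)
    return best, best_score

# toy "action word" sequences: hurdle race vs. long jump
hurdles = [['run', 'jump', 'run', 'jump', 'run'], ['run', 'jump', 'run', 'jump']]
longjump = [['run', 'run', 'run', 'jump'], ['run', 'run', 'jump']]
print(best_discriminative_ngram(hurdles, longjump))
```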

Unsupervised Classification for non-invasive Brain-Computer-Interfaces

Eren, S., Grosse-Wentrup, M., Buss, M.

In Automed 2007, pages: 65-66, VDI Verlag, Düsseldorf, Germany, Automed Workshop, October 2007 (inproceedings)

Abstract
Non-invasive Brain-Computer-Interfaces (BCIs) are devices that infer the intention of human subjects from signals generated by the central nervous system and recorded outside the skull, e.g., by electroencephalography (EEG). They can be used to enable basic communication for patients who are not able to communicate by normal means, e.g., due to neuro-degenerative diseases such as amyotrophic lateral sclerosis (ALS) (see [Vaughan2003] for a review). One challenge in research on BCIs is minimizing the training time prior to usage of the BCI. Since EEG patterns vary across subjects, it is usually necessary to record a number of trials in which the intention of the user is known to train a classifier. This classifier is subsequently used to infer the intention of the BCI-user. In this paper, we present the application of an unsupervised classification method to a binary noninvasive BCI based on motor imagery. The result is a BCI that does not require any training, since the mapping from EEG pattern changes to the intention of the user is learned online by the BCI without any feedback. We present experimental results from six healthy subjects, three of which display classification errors below 15%. We conclude that unsupervised BCIs are a viable option, but not yet as reliable as supervised BCIs. The rest of this paper is organized as follows. In the Methods section, we first introduce the experimental paradigm. This is followed by a description of the methods used for spatial filtering, feature extraction, and unsupervised classification. We then present the experimental results, and conclude the paper with a brief discussion.

PDF Web [BibTex]

A Hilbert Space Embedding for Distributions

Smola, A., Gretton, A., Song, L., Schölkopf, B.

In Algorithmic Learning Theory, Lecture Notes in Computer Science 4754 , pages: 13-31, (Editors: M Hutter and RA Servedio and E Takimoto), Springer, Berlin, Germany, 18th International Conference on Algorithmic Learning Theory (ALT), October 2007 (inproceedings)

Abstract
We describe a technique for comparing distributions without the need for density estimation as an intermediate step. Our approach relies on mapping the distributions into a reproducing kernel Hilbert space. Applications of this technique can be found in two-sample tests, which are used for determining whether two sets of observations arise from the same distribution, covariate shift correction, local learning, measures of independence, and density estimation.

PDF PDF DOI [BibTex]

Cluster Identification in Nearest-Neighbor Graphs

Maier, M., Hein, M., von Luxburg, U.

In ALT 2007, pages: 196-210, (Editors: Hutter, M. , R. A. Servedio, E. Takimoto), Springer, Berlin, Germany, 18th International Conference on Algorithmic Learning Theory, October 2007 (inproceedings)

Abstract
Assume we are given a sample of points from some underlying distribution which contains several distinct clusters. Our goal is to construct a neighborhood graph on the sample points such that clusters are "identified": that is, the subgraph induced by points from the same cluster is connected, while subgraphs corresponding to different clusters are not connected to each other. We derive bounds on the probability that cluster identification is successful, and use them to predict "optimal" values of k for the mutual and symmetric k-nearest-neighbor graphs. We point out different properties of the mutual and symmetric nearest-neighbor graphs related to the cluster identification problem.

PDF PDF DOI [BibTex]
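
A minimal sketch of the construction the abstract analyzes: build the mutual k-nearest-neighbor graph of a sample and read off its connected components. The value of k and the toy data are illustrative, not the paper's predicted optimal choices.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def mutual_knn_components(X, k=5):
    """Connected components of the mutual k-NN graph of a point sample X (n, d)."""
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    knn = np.argsort(D, axis=1)[:, :k]               # indices of the k nearest neighbors
    A = np.zeros((n, n), dtype=bool)
    for i in range(n):
        A[i, knn[i]] = True
    mutual = A & A.T                                  # keep edge i-j only if both directions exist
    return connected_components(csr_matrix(mutual), directed=False)

# two well-separated Gaussian clusters
X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 10.0])
n_clusters, labels = mutual_knn_components(X, k=5)
print(n_clusters)   # 2, if the clusters are identified
```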

Inducing Metric Violations in Human Similarity Judgements

Laub, J., Macke, J., Müller, K., Wichmann, F.

In Advances in Neural Information Processing Systems 19, pages: 777-784, (Editors: Schölkopf, B. , J. Platt, T. Hofmann), MIT Press, Cambridge, MA, USA, Twentieth Annual Conference on Neural Information Processing Systems (NIPS), September 2007 (inproceedings)

Abstract
Attempting to model human categorization and similarity judgements is both a very interesting and an exceedingly difficult challenge. Some of the difficulty arises because of conflicting evidence whether human categorization and similarity judgements should or should not be modelled as operating on a mental representation that is essentially metric. Intuitively, this has a strong appeal as it would allow (dis)similarity to be represented geometrically as distance in some internal space. Here we show how a single stimulus, carefully constructed in a psychophysical experiment, introduces l2 violations in what used to be an internal similarity space that could be adequately modelled as Euclidean. We term this one influential data point a conflictual judgement. We present an algorithm of how to analyse such data and how to identify the crucial point. Thus there may not be a strict dichotomy between either a metric or a non-metric internal space but rather degrees to which potentially large subsets of stimuli are represented metrically with a small subset causing a global violation of metricity.

PDF Web [BibTex]
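
For readers who want to check the metricity of their own dissimilarity data, the brute-force sketch below counts triangle-inequality violations in a symmetric dissimilarity matrix; it is only an illustrative check, not the analysis algorithm proposed in the paper.

```python
import numpy as np

def triangle_violations(D, tol=1e-9):
    """Count ordered triples (i, j, k) with D[i, k] > D[i, j] + D[j, k] + tol."""
    n = len(D)
    count = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if len({i, j, k}) == 3 and D[i, k] > D[i, j] + D[j, k] + tol:
                    count += 1
    return count

# a symmetric dissimilarity matrix with one "conflictual" entry
D = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
D[0, 2] = D[2, 0] = 2.5          # now d(0,2) > d(0,1) + d(1,2)
print(triangle_violations(D))    # > 0: the judgements are no longer metric
```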

Cross-Validation Optimization for Large Scale Hierarchical Classification Kernel Methods

Seeger, M.

In Advances in Neural Information Processing Systems 19, pages: 1233-1240, (Editors: Schölkopf, B. , J. Platt, T. Hofmann), MIT Press, Cambridge, MA, USA, Twentieth Annual Conference on Neural Information Processing Systems (NIPS), September 2007 (inproceedings)

Abstract
We propose a highly efficient framework for kernel multi-class models with a large and structured set of classes. Kernel parameters are learned automatically by maximizing the cross-validation log likelihood, and predictive probabilities are estimated. We demonstrate our approach on large scale text classification tasks with hierarchical class structure, achieving state-of-the-art results in an order of magnitude less time than previous work.

PDF Web [BibTex]

A Local Learning Approach for Clustering

Wu, M., Schölkopf, B.

In Advances in Neural Information Processing Systems 19, pages: 1529-1536, (Editors: B Schölkopf and J Platt and T Hofmann), MIT Press, Cambridge, MA, USA, 20th Annual Conference on Neural Information Processing Systems (NIPS), September 2007 (inproceedings)

Abstract
We present a local learning approach for clustering. The basic idea is that a good clustering result should have the property that the cluster label of each data point can be well predicted based on its neighboring data and their cluster labels, using current supervised learning methods. An optimization problem is formulated such that its solution has the above property. Relaxation and eigen-decomposition are applied to solve this optimization problem. We also briefly investigate the parameter selection issue and provide a simple parameter selection method for the proposed algorithm. Experimental results are provided to validate the effectiveness of the proposed approach.

PDF Web [BibTex]

Adaptive Spatial Filters with predefined Region of Interest for EEG based Brain-Computer-Interfaces

Grosse-Wentrup, M., Gramann, K., Buss, M.

In Advances in Neural Information Processing Systems 19, pages: 537-544, (Editors: Schölkopf, B. , J. Platt, T. Hofmann), MIT Press, Cambridge, MA, USA, Twentieth Annual Conference on Neural Information Processing Systems (NIPS), September 2007 (inproceedings)

Abstract
The performance of EEG-based Brain-Computer-Interfaces (BCIs) critically depends on the extraction of features from the EEG carrying information relevant for the classification of different mental states. For BCIs employing imaginary movements of different limbs, the method of Common Spatial Patterns (CSP) has been shown to achieve excellent classification results. The CSP-algorithm however suffers from a lack of robustness, requiring training data without artifacts for good performance. To overcome this lack of robustness, we propose an adaptive spatial filter that replaces the training data in the CSP approach by a-priori information. More specifically, we design an adaptive spatial filter that maximizes the ratio of the variance of the electric field originating in a predefined region of interest (ROI) and the overall variance of the measured EEG. Since it is known that the component of the EEG used for discriminating imaginary movements originates in the motor cortex, we design two adaptive spatial filters with the ROIs centered in the hand areas of the left and right motor cortex. We then use these to classify EEG data recorded during imaginary movements of the right and left hand of three subjects, and show that the adaptive spatial filters outperform the CSP-algorithm, enabling classification rates of up to 94.7 % without artifact rejection.

PDF Web [BibTex]
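
The abstract compares against the Common Spatial Patterns baseline. A minimal sketch of CSP via a generalized eigendecomposition of the two class covariance matrices follows; the EEG data here are random placeholders with toy shapes, and this is the baseline method, not the paper's adaptive ROI filter.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=6):
    """Common Spatial Patterns baseline: trials_* have shape (n_trials, channels, samples)."""
    cov = lambda trials: np.mean([x @ x.T / np.trace(x @ x.T) for x in trials], axis=0)
    Ca, Cb = cov(trials_a), cov(trials_b)
    # generalized eigenproblem Ca w = lambda (Ca + Cb) w
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    # filters with extreme eigenvalues maximize variance for one class relative to the other
    pick = np.r_[order[:n_filters // 2], order[-(n_filters // 2):]]
    return vecs[:, pick]

# toy EEG: 30 trials per class, 16 channels, 250 samples
a = np.random.randn(30, 16, 250)
b = np.random.randn(30, 16, 250) * np.linspace(0.5, 2.0, 16)[:, None]
W = csp_filters(a, b)
print(W.shape)       # (16, 6): six spatial filters
```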

Branch and Bound for Semi-Supervised Support Vector Machines

Chapelle, O., Sindhwani, V., Keerthi, S.

In Advances in Neural Information Processing Systems 19, pages: 217-224, (Editors: Schölkopf, B. , J. Platt, T. Hofmann), MIT Press, Cambridge, MA, USA, Twentieth Annual Conference on Neural Information Processing Systems (NIPS), September 2007 (inproceedings)

Abstract
Semi-supervised SVMs (S3VMs) attempt to learn low-density separators by maximizing the margin over labeled and unlabeled examples. The associated optimization problem is non-convex. To examine the full potential of S3VMs modulo local minima problems in current implementations, we apply branch and bound techniques for obtaining exact, globally optimal solutions. Empirical evidence suggests that the globally optimal solution can return excellent generalization performance in situations where other implementations fail completely. While our current implementation is only applicable to small datasets, we discuss variants that can potentially lead to practically useful algorithms.

PDF Web [BibTex]

A Kernel Method for the Two-Sample-Problem

Gretton, A., Borgwardt, K., Rasch, M., Schölkopf, B., Smola, A.

In Advances in Neural Information Processing Systems 19, pages: 513-520, (Editors: B Schölkopf and J Platt and T Hofmann), MIT Press, Cambridge, MA, USA, 20th Annual Conference on Neural Information Processing Systems (NIPS), September 2007 (inproceedings)

Abstract
We propose two statistical tests to determine if two samples are from different distributions. Our test statistic is in both cases the distance between the means of the two samples mapped into a reproducing kernel Hilbert space (RKHS). The first test is based on a large deviation bound for the test statistic, while the second is based on the asymptotic distribution of this statistic. The test statistic can be computed in $O(m^2)$ time. We apply our approach to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where our test performs strongly. We also demonstrate excellent performance when comparing distributions over graphs, for which no alternative tests currently exist.

PDF Web [BibTex]
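
A small sketch of the quadratic-time statistic described in the abstract: the unbiased estimate of the squared MMD with a Gaussian kernel. The bandwidth and sample sizes are illustrative; in practice the bandwidth is often set by the median heuristic.

```python
import numpy as np
from scipy.spatial.distance import cdist

def mmd2_unbiased(X, Y, sigma=1.0):
    """Unbiased estimate of squared MMD between samples X (m, d) and Y (n, d)."""
    k = lambda A, B: np.exp(-cdist(A, B, 'sqeuclidean') / (2 * sigma ** 2))
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    m, n = len(X), len(Y)
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))   # drop diagonal terms
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2 * Kxy.mean()

X = np.random.randn(200, 5)
Y = np.random.randn(200, 5) + 0.5      # shifted distribution
print(mmd2_unbiased(X, Y), mmd2_unbiased(X, np.random.randn(200, 5)))
```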

An Efficient Method for Gradient-Based Adaptation of Hyperparameters in SVM Models

Keerthi, S., Sindhwani, V., Chapelle, O.

In Advances in Neural Information Processing Systems 19, pages: 673-680, (Editors: Schölkopf, B. , J. Platt, T. Hofmann), MIT Press, Cambridge, MA, USA, Twentieth Annual Conference on Neural Information Processing Systems (NIPS), September 2007 (inproceedings)

Abstract
We consider the task of tuning hyperparameters in SVM models based on minimizing a smooth performance validation function, e.g., smoothed k-fold cross-validation error, using non-linear optimization techniques. The key computation in this approach is that of the gradient of the validation function with respect to hyperparameters. We show that for large-scale problems involving a wide choice of kernel-based models and validation functions, this computation can be very efficiently done; often within just a fraction of the training time. Empirical results show that a near-optimal set of hyperparameters can be identified by our approach with very few training rounds and gradient computations.

PDF Web [BibTex]

Learning Dense 3D Correspondence

Steinke, F., Schölkopf, B., Blanz, V.

In Advances in Neural Information Processing Systems 19, pages: 1313-1320, (Editors: B Schölkopf and J Platt and T Hofmann), MIT Press, Cambridge, MA, USA, 20th Annual Conference on Neural Information Processing Systems (NIPS), September 2007 (inproceedings)

Abstract
Establishing correspondence between distinct objects is an important and nontrivial task: correctness of the correspondence hinges on properties which are difficult to capture in an a priori criterion. While previous work has used a priori criteria which in some cases led to very good results, the present paper explores whether it is possible to learn a combination of features that, for a given training set of aligned human heads, characterizes the notion of correct correspondence. By optimizing this criterion, we are then able to compute correspondence and morphs for novel heads.

PDF Web [BibTex]

Optimal Dominant Motion Estimation using Adaptive Search of Transformation Space

Ulges, A., Lampert, CH., Keysers, D., Breuel, TM.

In DAGM 2007, pages: 204-215, (Editors: Hamprecht, F. A., C. Schnörr, B. Jähne), Springer, Berlin, Germany, 29th Annual Symposium of the German Association for Pattern Recognition, September 2007 (inproceedings)

Abstract
The extraction of a parametric global motion from a motion field is a task with several applications in video processing. We present two probabilistic formulations of the problem and carry out optimization using the RAST algorithm, a geometric matching method novel to motion estimation in video. RAST uses an exhaustive and adaptive search of transformation space and thus gives -- in contrast to local sampling optimization techniques used in the past -- a globally optimal solution. Among other applications, our framework can thus be used as a source of ground truth for benchmarking motion estimation algorithms. Our main contributions are: first, the novel combination of a state-of-the-art MAP criterion for dominant motion estimation with a search procedure that guarantees global optimality. Second, experimental results that illustrate the superior performance of our approach on synthetic flow fields as well as real-world video streams. Third, a significant speedup of the search achieved by extending the model with an additional smoothness prior.

PDF Web DOI [BibTex]

Solving Deep Memory POMDPs with Recurrent Policy Gradients

Wierstra, D., Förster, A., Peters, J., Schmidhuber, J.

In ICANN'07, pages: 697-706, Springer, Berlin, Germany, International Conference on Artificial Neural Networks, September 2007 (inproceedings)

Abstract
This paper presents Recurrent Policy Gradients, a model-free reinforcement learning (RL) method creating limited-memory stochastic policies for partially observable Markov decision problems (POMDPs) that require long-term memories of past observations. The approach involves approximating a policy gradient for a Recurrent Neural Network (RNN) by backpropagating return-weighted characteristic eligibilities through time. Using a “Long Short-Term Memory” architecture, we are able to outperform other RL methods on two important benchmark tasks. Furthermore, we show promising results on a complex car driving simulation task.

PDF PDF DOI [BibTex]

Output Grouping using Dirichlet Mixtures of Linear Gaussian State-Space Models

Chiappa, S., Barber, D.

In ISPA 2007, pages: 446-451, IEEE Computer Society, Los Alamitos, CA, USA, 5th International Symposium on Image and Signal Processing and Analysis, September 2007 (inproceedings)

Abstract
We consider a model to cluster the components of a vector time-series. The task is to assign each component of the vector time-series to a single cluster, basing this assignment on the simultaneous dynamical similarity of the component to other components in the cluster. This is in contrast to the more familiar task of clustering a set of time-series based on global measures of their similarity. The model is based on a Dirichlet Mixture of Linear Gaussian State-Space models (LGSSMs), in which each LGSSM is treated with a prior to encourage the simplest explanation. The resulting model is approximated using a ‘collapsed’ variational Bayes implementation.

PDF Web DOI [BibTex]

Manifold Denoising

Hein, M., Maier, M.

In Advances in Neural Information Processing Systems 19, pages: 561-568, (Editors: Schölkopf, B. , J. Platt, T. Hofmann), MIT Press, Cambridge, MA, USA, Twentieth Annual Conference on Neural Information Processing Systems (NIPS), September 2007 (inproceedings)

Abstract
We consider the problem of denoising a noisily sampled submanifold $M$ in $R^d$, where the submanifold $M$ is a priori unknown and we are only given a noisy point sample. The presented denoising algorithm is based on a graph-based diffusion process of the point sample. We analyze this diffusion process using recent results about the convergence of graph Laplacians. In the experiments we show that our method is capable of dealing with non-trivial high-dimensional noise. Moreover using the denoising algorithm as pre-processing method we can improve the results of a semi-supervised learning algorithm.

PDF Web [BibTex]
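
A minimal sketch of a graph-based diffusion step of the kind the abstract describes: build a Gaussian-weighted k-nearest-neighbor graph on the noisy sample and move each point towards a weighted average of its neighbors. Parameters and the update rule are simplified for illustration and do not reproduce the paper's exact algorithm.

```python
import numpy as np

def diffusion_denoise(X, k=10, sigma=0.5, steps=20, step_size=0.5):
    """Graph-diffusion denoising of a noisy point sample X of shape (n, d)."""
    X = X.copy()
    for _ in range(steps):
        D2 = ((X[:, None] - X[None, :]) ** 2).sum(-1)
        np.fill_diagonal(D2, np.inf)
        knn = np.argsort(D2, axis=1)[:, :k]
        W = np.zeros_like(D2)
        rows = np.arange(len(X))[:, None]
        W[rows, knn] = np.exp(-D2[rows, knn] / (2 * sigma ** 2))
        W = np.maximum(W, W.T)                       # symmetrize the k-NN graph
        P = W / W.sum(axis=1, keepdims=True)         # random-walk normalization
        X += step_size * (P @ X - X)                 # move towards neighborhood averages
    return X

# noisy circle in 2D
t = np.linspace(0, 2 * np.pi, 300, endpoint=False)
X_noisy = np.c_[np.cos(t), np.sin(t)] + 0.1 * np.random.randn(300, 2)
X_clean = diffusion_denoise(X_noisy)
```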
