

2016


Recovery of non-linear cause-effect relationships from linearly mixed neuroimaging data

Weichwald, S., Gretton, A., Schölkopf, B., Grosse-Wentrup, M.

Proceedings of the 6th International Workshop on Pattern Recognition in NeuroImaging (PRNI 2016), June 2016 (conference)

PDF Arxiv Code DOI Project Page [BibTex]


Domain Adaptation with Conditional Transferable Components

Gong, M., Zhang, K., Liu, T., Tao, D., Glymour, C., Schölkopf, B.

Proceedings of the 33rd International Conference on Machine Learning (ICML), 48, pages: 2839-2848, JMLR Workshop and Conference Proceedings, (Editors: Balcan, M.-F. and Weinberger, K. Q.), June 2016 (conference)

link (url) [BibTex]


Learning Causal Interaction Network of Multivariate Hawkes Processes

Etesami, S., Kiyavash, N., Zhang, K., Singhal, K.

Proceedings of the 32nd Conference on Uncertainty in Artificial Intelligence (UAI), June 2016, poster presentation (conference)

[BibTex]


Efficient Large-scale Approximate Nearest Neighbor Search on the GPU

Wieschollek, P., Wang, O., Sorkine-Hornung, A., Lensch, H. P. A.

29th IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages: 2027 - 2035, IEEE, June 2016 (conference)

DOI [BibTex]


On the Identifiability and Estimation of Functional Causal Models in the Presence of Outcome-Dependent Selection

Zhang, K., Zhang, J., Huang, B., Schölkopf, B., Glymour, C.

Proceedings of the 32nd Conference on Uncertainty in Artificial Intelligence (UAI), pages: 825-834, (Editors: Ihler, A. and Janzing, D.), AUAI Press, June 2016 (conference)

link (url) [BibTex]


Active Uncertainty Calibration in Bayesian ODE Solvers

Kersting, H., Hennig, P.

Proceedings of the 32nd Conference on Uncertainty in Artificial Intelligence (UAI), pages: 309-318, (Editors: Ihler, A. and Janzing, D.), AUAI Press, June 2016 (conference)

Abstract
There is resurging interest, in statistics and machine learning, in solvers for ordinary differential equations (ODEs) that return probability measures instead of point estimates. Recently, Conrad et al.~introduced a sampling-based class of methods that are `well-calibrated' in a specific sense. But the computational cost of these methods is significantly above that of classic methods. On the other hand, Schober et al.~pointed out a precise connection between classic Runge-Kutta ODE solvers and Gaussian filters, which gives only a rough probabilistic calibration, but at negligible cost overhead. By formulating the solution of ODEs as approximate inference in linear Gaussian SDEs, we investigate a range of probabilistic ODE solvers, that bridge the trade-off between computational cost and probabilistic calibration, and identify the inaccurate gradient measurement as the crucial source of uncertainty. We propose the novel filtering-based method Bayesian Quadrature filtering (BQF) which uses Bayesian quadrature to actively learn the imprecision in the gradient measurement by collecting multiple gradient evaluations.
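To make the filtering view concrete, here is a minimal Python/NumPy sketch of the basic Gaussian-filter ODE solver that this line of work builds on: an integrated-Wiener-process prior on the solution, with each evaluation of the ODE right-hand side treated as a noisy observation of the derivative. This is not code from the paper, and it does not implement the proposed BQF method (which replaces the single gradient evaluation by Bayesian quadrature over several evaluations); all parameter values are illustrative.

import numpy as np

def ode_filter(f, y0, t0, t1, h, q=1.0, R=1e-10):
    # State (y, y'); prior: once-integrated Wiener process with diffusion q.
    A = np.array([[1.0, h], [0.0, 1.0]])
    Q = q * np.array([[h**3 / 3, h**2 / 2], [h**2 / 2, h]])
    H = np.array([[0.0, 1.0]])
    m = np.array([y0, f(t0, y0)])                 # initial slope from one evaluation
    P = np.zeros((2, 2))
    ts, means, variances = [t0], [m[0]], [P[0, 0]]
    t = t0
    while t < t1 - 1e-12:
        m, P, t = A @ m, A @ P @ A.T + Q, t + h   # predict one step ahead
        z = f(t, m[0])                            # noisy "measurement" of y' at the predicted mean
        S = (H @ P @ H.T + R).item()
        K = (P @ H.T) / S
        m = m + K.ravel() * (z - m[1])            # Kalman update of (y, y')
        P = P - K @ H @ P
        ts.append(t); means.append(m[0]); variances.append(P[0, 0])
    return np.array(ts), np.array(means), np.array(variances)

# y' = -y with y(0) = 1; exact solution exp(-t)
ts, ys, vs = ode_filter(lambda t, y: -y, y0=1.0, t0=0.0, t1=2.0, h=0.05)
print(ys[-1], np.exp(-2.0), vs[-1])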

link (url) Project Page Project Page [BibTex]


The Arrow of Time in Multivariate Time Series

Bauer, S., Schölkopf, B., Peters, J.

Proceedings of the 33rd International Conference on Machine Learning (ICML), 48, pages: 2043-2051, JMLR Workshop and Conference Proceedings, (Editors: Balcan, M. F. and Weinberger, K. Q.), JMLR, June 2016 (conference)

link (url) [BibTex]


A Kernel Test for Three-Variable Interactions with Random Processes

Rubenstein, P. K., Chwialkowski, K. P., Gretton, A.

Proceedings of the Thirty-Second Conference on Uncertainty in Artificial Intelligence (UAI), (Editors: Ihler, Alexander T. and Janzing, Dominik), June 2016 (conference)

PDF Supplement Arxiv [BibTex]


Continuous Deep Q-Learning with Model-based Acceleration

Gu, S., Lillicrap, T., Sutskever, I., Levine, S.

Proceedings of the 33rd International Conference on Machine Learning (ICML), 48, pages: 2829-2838, JMLR Workshop and Conference Proceedings, (Editors: Maria-Florina Balcan and Kilian Q. Weinberger), JMLR.org, June 2016 (conference)

link (url) Project Page [BibTex]


Bounded Rational Decision-Making in Feedforward Neural Networks

Leibfried, F., Braun, D.

Proceedings of the 32nd Conference on Uncertainty in Artificial Intelligence (UAI), pages: 407-416, June 2016 (conference)

Abstract
Bounded rational decision-makers transform sensory input into motor output under limited computational resources. Mathematically, such decision-makers can be modeled as information-theoretic channels with limited transmission rate. Here, we apply this formalism for the first time to multilayer feedforward neural networks. We derive synaptic weight update rules for two scenarios, where either each neuron is considered as a bounded rational decision-maker or the network as a whole. In the update rules, bounded rationality translates into information-theoretically motivated types of regularization in weight space. In experiments on the MNIST benchmark classification task for handwritten digits, we show that such information-theoretic regularization successfully prevents overfitting across different architectures and attains results that are competitive with other recent techniques like dropout, dropconnect and Bayes by backprop, for both ordinary and convolutional neural networks.

[BibTex]


Batch Bayesian Optimization via Local Penalization

González, J., Dai, Z., Hennig, P., Lawrence, N.

Proceedings of the 19th International Conference on Artificial Intelligence and Statistics (AISTATS), 51, pages: 648-657, JMLR Workshop and Conference Proceedings, (Editors: Gretton, A. and Robert, C. C.), May 2016 (conference)

link (url) Project Page [BibTex]


MuProp: Unbiased Backpropagation for Stochastic Neural Networks

Gu, S., Levine, S., Sutskever, I., Mnih, A.

4th International Conference on Learning Representations (ICLR), May 2016 (conference)

Arxiv [BibTex]


An Improved Cognitive Brain-Computer Interface for Patients with Amyotrophic Lateral Sclerosis

Hohmann, M. R., Fomina, T., Jayaram, V., Förster, C., Just, J., M., S., Schölkopf, B., Schöls, L., Grosse-Wentrup, M.

Proceedings of the Sixth International BCI Meeting, pages: 44, (Editors: Müller-Putz, G. R. and Huggins, J. E. and Steyrl, D.), BCI, May 2016 (conference)

DOI [BibTex]


Movement Primitives with Multiple Phase Parameters

Ewerton, M., Maeda, G., Neumann, G., Kisner, V., Kollegger, G., Wiemeyer, J., Peters, J.

IEEE International Conference on Robotics and Automation (ICRA), pages: 201-206, IEEE, May 2016 (conference)

DOI Project Page [BibTex]


TerseSVM: A Scalable Approach for Learning Compact Models in Large-scale Classification

Babbar, R., Muandet, K., Schölkopf, B.

Proceedings of the 2016 SIAM International Conference on Data Mining (SDM), pages: 234-242, (Editors: Sanjay Chawla Venkatasubramanian and Wagner Meira), May 2016 (conference)

DOI Project Page [BibTex]


A Lightweight Robotic Arm with Pneumatic Muscles for Robot Learning

Büchler, D., Ott, H., Peters, J.

Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pages: 4086-4092, IEEE, May 2016 (conference)

ICRA16final DOI Project Page [BibTex]


Probabilistic Approximate Least-Squares

Bartels, S., Hennig, P.

Proceedings of the 19th International Conference on Artificial Intelligence and Statistics (AISTATS), 51, pages: 676-684, JMLR Workshop and Conference Proceedings, (Editors: Gretton, A. and Robert, C. C.), May 2016 (conference)

Abstract
Least-squares and kernel-ridge / Gaussian process regression are among the foundational algorithms of statistics and machine learning. Famously, the worst-case cost of exact nonparametric regression grows cubically with the data-set size; but a growing number of approximations have been developed that estimate good solutions at lower cost. These algorithms typically return point estimators, without measures of uncertainty. Leveraging recent results casting elementary linear algebra operations as probabilistic inference, we propose a new approximate method for nonparametric least-squares that affords a probabilistic uncertainty estimate over the error between the approximate and exact least-squares solution (this is not the same as the posterior variance of the associated Gaussian process regressor). This allows estimating the error of the least-squares solution on a subset of the data relative to the full-data solution. The uncertainty can be used to control the computational effort invested in the approximation. Our algorithm has linear cost in the data-set size, and a simple formal form, so that it can be implemented with a few lines of code in programming languages with linear algebra functionality.
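As a point of reference for the setting described above, the following Python/NumPy sketch contrasts full kernel ridge / GP regression with a cheap subset-of-data approximation. The data, kernel and hyperparameters are made up, and the sketch deliberately omits the paper's actual contribution, namely the probabilistic estimate of the error between the approximate and the full-data solution.

import numpy as np

def rbf(A, B, ls=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
noise = 0.01

# exact solution on all data (cubic cost) vs. a subset-of-data approximation
alpha_full = np.linalg.solve(rbf(X, X) + noise * np.eye(500), y)
Xs, ys = X[:100], y[:100]
alpha_sub = np.linalg.solve(rbf(Xs, Xs) + noise * np.eye(100), ys)

Xq = np.linspace(-3, 3, 5)[:, None]
print(rbf(Xq, X) @ alpha_full)   # full-data prediction
print(rbf(Xq, Xs) @ alpha_sub)   # approximation whose error the paper quantifies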

link (url) Project Page Project Page [BibTex]


Learning soft task priorities for control of redundant robots

Modugno, V., Neumann, G., Rueckert, E., Oriolo, G., Peters, J., Ivaldi, S.

IEEE International Conference on Robotics and Automation (ICRA), pages: 221-226, IEEE, May 2016 (conference)

DOI [BibTex]


On the Reliability of Information and Trustworthiness of Web Sources in Wikipedia

Tabibian, B., Farajtabar, M., Valera, I., Song, L., Schölkopf, B., Gomez Rodriguez, M.

Wikipedia workshop at the 10th International AAAI Conference on Web and Social Media (ICWSM), May 2016 (conference)

[BibTex]


Peer Grading in a Course on Algorithms and Data Structures: Machine Learning Algorithms do not Improve over Simple Baselines

Sajjadi, M. S. M., Alamgir, M., von Luxburg, U.

Proceedings of the 3rd ACM conference on Learning @ Scale, pages: 369-378, (Editors: Haywood, J. and Aleven, V. and Kay, J. and Roll, I.), ACM, L@S, April 2016, (An earlier version of this paper had been presented at the ICML 2015 workshop for Machine Learning for Education.) (conference)

Arxiv Peer-Grading dataset request [BibTex]


Fabular: Regression Formulas As Probabilistic Programming

Borgström, J., Gordon, A. D., Ouyang, L., Russo, C., Ścibior, A., Szymczak, M.

Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL), pages: 271-283, POPL ’16, ACM, January 2016 (conference)

DOI Project Page [BibTex]


Modeling Variability of Musculoskeletal Systems with Heteroscedastic Gaussian Processes

Büchler, D., Calandra, R., Peters, J.

Workshop on Neurorobotics, Neural Information Processing Systems (NIPS), 2016 (conference)

NIPS16Neurorobotics [BibTex]


Screening Rules for Convex Problems

Raj, A., Olbrich, J., Gärtner, B., Schölkopf, B., Jaggi, M.

Submitted, 2016 (unpublished)

[BibTex]


Causal and statistical learning

Schölkopf, B., Janzing, D., Lopez-Paz, D.

Oberwolfach Reports, 13(3):1896-1899, (Editors: A. Christmann and K. Jetter and S. Smale and D.-X. Zhou), 2016 (conference)

DOI [BibTex]

2008


Stereo Matching for Calibrated Cameras without Correspondence

Helmke, U., Hüper, K., Vences, L.

In CDC 2008, pages: 2408-2413, IEEE Service Center, Piscataway, NJ, USA, 47th IEEE Conference on Decision and Control, December 2008 (inproceedings)

Abstract
We study the stereo matching problem for reconstruction of the location of 3D-points on an unknown surface patch from two calibrated identical cameras without using any a priori information about the pointwise correspondences. We assume that camera parameters and the pose between the cameras are known. Our approach follows earlier work for coplanar cameras where a gradient flow algorithm was proposed to match associated Gramians. Here we extend this method by allowing arbitrary poses for the cameras. We introduce an intrinsic Riemannian Newton algorithm that achieves local quadratic convergence rates. A closed form solution is presented, too. The efficiency of both algorithms is demonstrated by numerical experiments.

PDF Web DOI [BibTex]


Joint Kernel Support Estimation for Structured Prediction

Lampert, C., Blaschko, M.

In Proceedings of the NIPS 2008 Workshop on "Structured Input - Structured Output" (NIPS SISO 2008), pages: 1-4, NIPS Workshop on "Structured Input - Structured Output" (NIPS SISO), December 2008 (inproceedings)

Abstract
We present a new technique for structured prediction that works in a hybrid generative/discriminative way, using a one-class support vector machine to model the joint probability of (input, output)-pairs in a joint reproducing kernel Hilbert space. Compared to discriminative techniques, like conditional random fields or structured output SVMs, the proposed method has the advantage that its training time depends only on the number of training examples, not on the size of the label space. Due to its generative aspect, it is also very tolerant against ambiguous, incomplete or incorrect labels. Experiments on realistic data show that our method works efficiently and robustly in situations for which discriminative techniques have computational or statistical problems.
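A toy illustration of the hybrid idea, as a sketch only: fit a one-class SVM to (input, label) pairs and predict by scoring each candidate label. The concatenated feature map, the scaling factor, and the synthetic data are invented for the example; the paper works with general joint reproducing kernels rather than this simple concatenation.

import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

def joint_features(X, y, n_classes=2):
    # concatenate the input with a (scaled) one-hot encoding of the label
    return np.hstack([X, 3.0 * np.eye(n_classes)[y]])

# model the support of the joint (input, label) distribution
ocsvm = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.1).fit(joint_features(X, y))

def predict(x):
    # score every candidate (input, label) pair under the estimated joint support
    scores = [ocsvm.decision_function(joint_features(x[None, :], np.array([c])))[0]
              for c in (0, 1)]
    return int(np.argmax(scores))

print(predict(np.array([-2.0, -2.0])), predict(np.array([2.0, 2.0])))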

PDF Web [BibTex]


Frequent Subgraph Retrieval in Geometric Graph Databases

Nowozin, S., Tsuda, K.

In ICDM 2008, pages: 953-958, (Editors: Giannotti, F. , D. Gunopulos, F. Turini, C. Zaniolo, N. Ramakrishnan, X. Wu), IEEE Computer Society, Los Alamitos, CA, USA, 8th IEEE International Conference on Data Mining, December 2008 (inproceedings)

Abstract
Discovery of knowledge from geometric graph databases is of particular importance in chemistry and biology, because chemical compounds and proteins are represented as graphs with 3D geometric coordinates. In such applications, scientists are not interested in the statistics of the whole database. Instead they need information about a novel drug candidate or protein at hand, represented as a query graph. We propose a polynomial-delay algorithm for geometric frequent subgraph retrieval. It enumerates all subgraphs of a single given query graph which are frequent geometric epsilon-subgraphs under the entire class of rigid geometric transformations in a database. By using geometric epsilon-subgraphs, we achieve tolerance against variations in geometry. We compare the proposed algorithm to gSpan on chemical compound data, and we show that for a given minimum support the total number of frequent patterns is substantially limited by requiring geometric matching. Although the computation time per pattern is larger than for non-geometric graph mining, the total time is within a reasonable level even for small minimum support.

PDF Web DOI [BibTex]


Block Iterative Algorithms for Non-negative Matrix Approximation

Sra, S.

In ICDM 2008, pages: 1037-1042, (Editors: Giannotti, F. , D. Gunopulos, F. Turini, C. Zaniolo, N. Ramakrishnan, X. Wu), IEEE Service Center, Piscataway, NJ, USA, Eighth IEEE International Conference on Data Mining, December 2008 (inproceedings)

Abstract
In this paper we present new algorithms for non-negative matrix approximation (NMA), commonly known as the NMF problem. Our methods improve upon the well-known methods of Lee & Seung for both the Frobenius norm as well as the Kullback-Leibler divergence versions of the problem. For the latter problem, our results are especially interesting because it seems to have witnessed much lesser algorithmic progress as compared to the Frobenius norm NMA problem. Our algorithms are based on a particular block-iterative acceleration technique for EM, which preserves the multiplicative nature of the updates and also ensures monotonicity. Furthermore, our algorithms also naturally apply to the Bregman-divergence NMA algorithms. Experimentally, we show that our algorithms outperform the traditional Lee/Seung approach most of the time.
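For context, a minimal NumPy sketch of the standard Lee & Seung multiplicative updates for the Frobenius-norm NMA problem, i.e. the baseline that the block-iterative algorithms in the paper accelerate. The rank, iteration count and random data are illustrative; this is not the paper's method.

import numpy as np

def nmf_multiplicative(V, rank, iters=200, eps=1e-9):
    # Baseline multiplicative updates for V ~ W H with W, H >= 0 (Frobenius norm).
    rng = np.random.default_rng(0)
    n, m = V.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

V = np.abs(np.random.default_rng(1).random((50, 40)))
W, H = nmf_multiplicative(V, rank=5)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))   # relative reconstruction error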

Web DOI [BibTex]


A Bayesian Approach to Switching Linear Gaussian State-Space Models for Unsupervised Time-Series Segmentation

Chiappa, S.

In ICMLA 2008, pages: 3-9, (Editors: Wani, M. A., X.-W. Chen, D. Casasent, L. Kurgan, T. Hu, K. Hafeez), IEEE Computer Society, Los Alamitos, CA, USA, 7th International Conference on Machine Learning and Applications, December 2008 (inproceedings)

Abstract
Time-series segmentation in the fully unsupervised scenario in which the number of segment-types is a priori unknown is a fundamental problem in many applications. We propose a Bayesian approach to a segmentation model based on the switching linear Gaussian state-space model that enforces a sparse parametrization, such as to use only a small number of a priori available different dynamics to explain the data. This enables us to estimate the number of segment-types within the model, in contrast to previous non-Bayesian approaches where training and comparing several separate models was required. As the resulting model is computationally intractable, we introduce a variational approximation where a reformulation of the problem enables the use of efficient inference algorithms.

PDF Web DOI [BibTex]


Iterative Subgraph Mining for Principal Component Analysis

Saigo, H., Tsuda, K.

In ICDM 2008, pages: 1007-1012, (Editors: Giannotti, F. , D. Gunopulos, F. Turini, C. Zaniolo, N. Ramakrishnan, X. Wu), IEEE Computer Society, Los Alamitos, CA, USA, IEEE International Conference on Data Mining, December 2008 (inproceedings)

Abstract
Graph mining methods enumerate frequent subgraphs efficiently, but they are not necessarily good features for machine learning due to high correlation among features. Thus it makes sense to perform principal component analysis to reduce the dimensionality and create decorrelated features. We present a novel iterative mining algorithm that captures informative patterns corresponding to major entries of top principal components. It repeatedly calls weighted substructure mining where example weights are updated in each iteration. The Lanczos algorithm, a standard algorithm of eigendecomposition, is employed to update the weights. In experiments, our patterns are shown to approximate the principal components obtained by frequent mining.

PDF Web DOI [BibTex]


Frequent Subgraph Retrieval in Geometric Graph Databases

Nowozin, S., Tsuda, K.

(180), Max-Planck Institute for Biological Cybernetics, Tübingen, Germany, November 2008 (techreport)

Abstract
Discovery of knowledge from geometric graph databases is of particular importance in chemistry and biology, because chemical compounds and proteins are represented as graphs with 3D geometric coordinates. In such applications, scientists are not interested in the statistics of the whole database. Instead they need information about a novel drug candidate or protein at hand, represented as a query graph. We propose a polynomial-delay algorithm for geometric frequent subgraph retrieval. It enumerates all subgraphs of a single given query graph which are frequent geometric epsilon-subgraphs under the entire class of rigid geometric transformations in a database. By using geometric epsilon-subgraphs, we achieve tolerance against variations in geometry. We compare the proposed algorithm to gSpan on chemical compound data, and we show that for a given minimum support the total number of frequent patterns is substantially limited by requiring geometric matching. Although the computation time per pattern is larger than for non-geometric graph mining, the total time is within a reasonable level even for small minimum support.

PDF [BibTex]


Probabilistic Inference for Fast Learning in Control

Rasmussen, CE., Deisenroth, MP.

In EWRL 2008, pages: 229-242, (Editors: Girgin, S. , M. Loth, R. Munos, P. Preux, D. Ryabko), Springer, Berlin, Germany, 8th European Workshop on Reinforcement Learning, November 2008 (inproceedings)

Abstract
We provide a novel framework for very fast model-based reinforcement learning in continuous state and action spaces. The framework requires probabilistic models that explicitly characterize their levels of confidence. Within this framework, we use flexible, non-parametric models to describe the world based on previously collected experience. We demonstrate learning on the cart-pole problem in a setting where we provide very limited prior knowledge about the task. Learning progresses rapidly, and a good policy is found after only a handful of iterations.
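The key ingredient, a probabilistic forward model learned from a small amount of experience, can be sketched as follows in Python/NumPy. The toy dynamics, kernel, and noise levels are invented for illustration, and the policy-improvement loop of the paper is omitted.

import numpy as np

def rbf(A, B, ls=0.7):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

rng = np.random.default_rng(0)
def true_dynamics(s, a):                 # hypothetical one-dimensional system
    return 0.9 * s + 0.1 * np.tanh(a)

S = rng.uniform(-1, 1, (80, 1))          # visited states
A_ = rng.uniform(-2, 2, (80, 1))         # applied actions
Snext = true_dynamics(S, A_) + 0.01 * rng.standard_normal((80, 1))

Z = np.hstack([S, A_])                   # GP inputs: state-action pairs
K = rbf(Z, Z) + 1e-4 * np.eye(len(Z))
alpha = np.linalg.solve(K, Snext)

def gp_predict(s, a):
    # predictive mean and variance of the next state under the learned model
    z = np.array([[s, a]])
    k = rbf(z, Z)
    mean = (k @ alpha).item()
    var = 1.0 - (k @ np.linalg.solve(K, k.T)).item()
    return mean, var

print(gp_predict(0.5, 1.0), true_dynamics(np.array([0.5]), np.array([1.0])))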

PDF Web DOI [BibTex]


Simultaneous Implicit Surface Reconstruction and Meshing

Giesen, J., Maier, M., Schölkopf, B.

(179), Max-Planck Institute for Biological Cybernetics, Tübingen, Germany, November 2008 (techreport)

Abstract
We investigate an implicit method to compute a piecewise linear representation of a surface from a set of sample points. As implicit surface functions we use the weighted sum of piecewise linear kernel functions. For such a function we can partition R^d in such a way that these functions are linear on the subsets of the partition. For each subset in the partition we can then compute the zero level set of the function exactly as the intersection of a hyperplane with the subset.

PDF [BibTex]


Policy Learning: A Unified Perspective with Applications in Robotics

Peters, J., Kober, J., Nguyen-Tuong, D.

In EWRL 2008, pages: 220-228, (Editors: Girgin, S. , M. Loth, R. Munos, P. Preux, D. Ryabko), Springer, Berlin, Germany, 8th European Workshop on Reinforcement Learning, November 2008 (inproceedings)

Abstract
Policy Learning approaches are among the best suited methods for high-dimensional, continuous control systems such as anthropomorphic robot arms and humanoid robots. In this paper, we show two contributions: firstly, we show a unified perspective which allows us to derive several policy learning algorithms from a common point of view, i.e., policy gradient algorithms, natural-gradient algorithms and EM-like policy learning. Secondly, we present several applications to both robot motor primitive learning as well as to robot control in task space. Results both from simulation and several different real robots are shown.

PDF Web DOI [BibTex]


Taxonomy Inference Using Kernel Dependence Measures

Blaschko, M., Gretton, A.

(181), Max-Planck Institute for Biological Cybernetics, Tübingen, Germany, November 2008 (techreport)

Abstract
We introduce a family of unsupervised algorithms, numerical taxonomy clustering, to simultaneously cluster data, and to learn a taxonomy that encodes the relationship between the clusters. The algorithms work by maximizing the dependence between the taxonomy and the original data. The resulting taxonomy is a more informative visualization of complex data than simple clustering; in addition, taking into account the relations between different clusters is shown to substantially improve the quality of the clustering, when compared with state-of-the-art algorithms in the literature (both spectral clustering and a previous dependence maximization approach). We demonstrate our algorithm on image and text data.
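The dependence measure being maximized can be made concrete in a few lines of NumPy; below is the standard biased empirical HSIC estimator with Gaussian kernels. This is a generic sketch with a made-up length scale and data, not the numerical taxonomy clustering algorithm itself.

import numpy as np

def hsic(X, Y, ls=1.0):
    # Biased empirical HSIC with RBF kernels: trace(Kx H Ky H) / (n-1)^2.
    n = len(X)
    def gram(A):
        d2 = ((A[:, None, :] - A[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / ls**2)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(gram(X) @ H @ gram(Y) @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 1))
print(hsic(X, X**2))                          # dependent pair: larger value
print(hsic(X, rng.standard_normal((200, 1)))) # independent pair: near zero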

PDF [BibTex]


Learning to Localize Objects with Structured Output Regression

Blaschko, MB., Lampert, CH.

In ECCV 2008, pages: 2-15, (Editors: Forsyth, D. A., P. H.S. Torr, A. Zisserman), Springer, Berlin, Germany, 10th European Conference on Computer Vision, October 2008, Best Student Paper Award (inproceedings)

Abstract
Sliding window classifiers are among the most successful and widely applied techniques for object localization. However, training is typically done in a way that is not specific to the localization task. First a binary classifier is trained using a sample of positive and negative examples, and this classifier is subsequently applied to multiple regions within test images. We propose instead to treat object localization in a principled way by posing it as a problem of predicting structured data: we model the problem not as binary classification, but as the prediction of the bounding box of objects located in images. The use of a joint-kernel framework allows us to formulate the training procedure as a generalization of an SVM, which can be solved efficiently. We further improve computational efficiency by using a branch-and-bound strategy for localization during both training and testing. Experimental evaluation on the PASCAL VOC and TU Darmstadt datasets show that the structured training procedure improves performance over binary training as well as the best previously published scores.

PDF Web DOI [BibTex]


Automatic Image Colorization Via Multimodal Predictions

Charpiat, G., Hofmann, M., Schölkopf, B.

In Computer Vision - ECCV 2008, Lecture Notes in Computer Science, Vol. 5304, pages: 126-139, (Editors: DA Forsyth and PHS Torr and A Zisserman), Springer, Berlin, Germany, 10th European Conference on Computer Vision, October 2008 (inproceedings)

Abstract
We aim to automatically color greyscale images, without any manual intervention. The color proposition could then be interactively corrected by user-provided color landmarks if necessary. Automatic colorization is nontrivial since there is usually no one-to-one correspondence between color and local texture. The contribution of our framework is that we deal directly with multimodality and estimate, for each pixel of the image to be colored, the probability distribution of all possible colors, instead of choosing the most probable color at the local level. We also predict the expected variation of color at each pixel, thus defining a nonuniform spatial coherency criterion. We then use graph cuts to maximize the probability of the whole colored image at the global level. We work in the L-a-b color space in order to approximate the human perception of distances between colors, and we use machine learning tools to extract as much information as possible from a dataset of colored examples. The resulting algorithm is fast, designed to be more robust to texture noise, and is above all able to deal with ambiguity, contrary to previous approaches.

PDF Web DOI [BibTex]


Nonparametric Independence Tests: Space Partitioning and Kernel Approaches

Gretton, A., Györfi, L.

In ALT08, pages: 183-198, (Editors: Freund, Y. , L. Györfi, G. Turán, T. Zeugmann), Springer, Berlin, Germany, 19th International Conference on Algorithmic Learning Theory (ALT08), October 2008 (inproceedings)

Abstract
Three simple and explicit procedures for testing the independence of two multi-dimensional random variables are described. Two of the associated test statistics (L1, log-likelihood) are defined when the empirical distribution of the variables is restricted to finite partitions. A third test statistic is defined as a kernel-based independence measure. All tests reject the null hypothesis of independence if the test statistics become large. The large deviation and limit distribution properties of all three test statistics are given. Following from these results, distribution-free strong consistent tests of independence are derived, as are asymptotically alpha-level tests. The performance of the tests is evaluated experimentally on benchmark data.
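A sketch of the partition-based variant in plain NumPy: compute the L1 distance between the empirical joint distribution on a finite partition and the product of its marginals, and calibrate it here by permutation. The bin count, sample sizes and the permutation-based null are illustrative choices, not the procedures or thresholds analyzed in the paper.

import numpy as np

def l1_independence_stat(x, y, bins=8):
    # sum over partition cells of |joint empirical mass - product of marginal masses|
    n = len(x)
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    joint /= n
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    return np.abs(joint - px * py).sum()

def permutation_test(x, y, n_perm=500, seed=0):
    rng = np.random.default_rng(seed)
    t_obs = l1_independence_stat(x, y)
    null = [l1_independence_stat(x, rng.permutation(y)) for _ in range(n_perm)]
    return t_obs, float(np.mean([t >= t_obs for t in null]))   # statistic, p-value

rng = np.random.default_rng(1)
x = rng.standard_normal(400)
print(permutation_test(x, x + 0.5 * rng.standard_normal(400)))   # dependent pair
print(permutation_test(x, rng.standard_normal(400)))             # independent pair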

PDF Web DOI [BibTex]


Automatic 3D Face Reconstruction from Single Images or Video

Breuer, P., Kim, K., Kienzle, W., Schölkopf, B., Blanz, V.

In FG 2008, pages: 1-8, IEEE Computer Society, Los Alamitos, CA, USA, 8th IEEE International Conference on Automatic Face and Gesture Recognition, September 2008 (inproceedings)

Abstract
This paper presents a fully automated algorithm for reconstructing a textured 3D model of a face from a single photograph or a raw video stream. The algorithm is based on a combination of Support Vector Machines (SVMs) and a Morphable Model of 3D faces. After SVM face detection, individual facial features are detected using a novel regression- and classification-based approach, and probabilistically plausible configurations of features are selected to produce a list of candidates for several facial feature positions. In the next step, the configurations of feature points are evaluated using a novel criterion that is based on a Morphable Model and a combination of linear projections. To make the algorithm robust with respect to head orientation, this process is iterated while the estimate of pose is refined. Finally, the feature points initialize a model-fitting procedure of the Morphable Model. The result is a high-resolution 3D surface model.

PDF DOI [BibTex]


Kernel Measures of Conditional Dependence

Fukumizu, K., Gretton, A., Sun, X., Schölkopf, B.

In Advances in neural information processing systems 20, pages: 489-496, (Editors: JC Platt and D Koller and Y Singer and S Roweis), Curran, Red Hook, NY, USA, 21st Annual Conference on Neural Information Processing Systems (NIPS), September 2008 (inproceedings)

Abstract
We propose a new measure of conditional dependence of random variables, based on normalized cross-covariance operators on reproducing kernel Hilbert spaces. Unlike previous kernel dependence measures, the proposed criterion does not depend on the choice of kernel in the limit of infinite data, for a wide class of kernels. At the same time, it has a straightforward empirical estimate with good convergence behaviour. We discuss the theoretical properties of the measure, and demonstrate its application in experiments.

PDF Web [BibTex]


An Analysis of Inference with the Universum

Sinz, F., Chapelle, O., Agarwal, A., Schölkopf, B.

In Advances in neural information processing systems 20, pages: 1369-1376, (Editors: JC Platt and D Koller and Y Singer and S Roweis), Curran, Red Hook, NY, USA, 21st Annual Conference on Neural Information Processing Systems (NIPS), September 2008 (inproceedings)

Abstract
We study a pattern classification algorithm which has recently been proposed by Vapnik and coworkers. It builds on a new inductive principle which assumes that in addition to positive and negative data, a third class of data is available, termed the Universum. We assay the behavior of the algorithm by establishing links with Fisher discriminant analysis and oriented PCA, as well as with an SVM in a projected subspace (or, equivalently, with a data-dependent reduced kernel). We also provide experimental results.

PDF Web [BibTex]


Learning with Transformation Invariant Kernels

Walder, C., Chapelle, O.

In Advances in neural information processing systems 20, pages: 1561-1568, (Editors: Platt, J. C., D. Koller, Y. Singer, S. Roweis), Curran, Red Hook, NY, USA, Twenty-First Annual Conference on Neural Information Processing Systems (NIPS), September 2008 (inproceedings)

Abstract
This paper considers kernels invariant to translation, rotation and dilation. We show that no non-trivial positive definite (p.d.) kernels exist which are radial and dilation invariant, only conditionally positive definite (c.p.d.) ones. Accordingly, we discuss the c.p.d. case and provide some novel analysis, including an elementary derivation of a c.p.d. representer theorem. On the practical side, we give a support vector machine (s.v.m.) algorithm for arbitrary c.p.d. kernels. For the thin-plate kernel this leads to a classifier with only one parameter (the amount of regularisation), which we demonstrate to be as effective as an s.v.m. with the Gaussian kernel, even though the Gaussian involves a second parameter (the length scale).

PDF Web [BibTex]


Episodic Reinforcement Learning by Logistic Reward-Weighted Regression

Wierstra, D., Schaul, T., Peters, J., Schmidhuber, J.

In ICANN 2008, pages: 407-416, (Editors: Kurkova-Pohlova, V. , R. Neruda, J. Koutnik), Springer, Berlin, Germany, 18th International Conference on Artificial Neural Networks, September 2008 (inproceedings)

Abstract
It has been a long-standing goal in the adaptive control community to reduce the generically difficult, general reinforcement learning (RL) problem to simpler problems solvable by supervised learning. While this approach is today’s standard for value function-based methods, fewer approaches are known that apply similar reductions to policy search methods. Recently, it has been shown that immediate RL problems can be solved by reward-weighted regression, and that the resulting algorithm is an expectation maximization (EM) algorithm with strong guarantees. In this paper, we extend this algorithm to the episodic case and show that it can be used in the context of LSTM recurrent neural networks (RNNs). The resulting RNN training algorithm is equivalent to a weighted self-modeling supervised learning technique. We focus on partially observable Markov decision problems (POMDPs) where it is essential that the policy is nonstationary in order to be optimal. We show that this new reward-weighted logistic regression used in conjunction with an RNN architecture can solve standard benchmark POMDPs with ease.
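The core reward-weighted regression step can be illustrated in a few lines: actions that obtained higher reward receive higher weight in a weighted least-squares fit of the policy parameters. This is a linear-Gaussian toy sketch with synthetic states, actions and rewards; the paper's logistic variant and LSTM policy are not reproduced.

import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((200, 3))          # observed states
A_ = rng.standard_normal((200, 1))         # explored actions
R = np.exp(-np.sum((A_ - S[:, :1]) ** 2, axis=1))   # reward is high when a ~ first state coordinate

# M-step of reward-weighted regression: weighted least squares for a = w^T s
W = np.diag(R)
w = np.linalg.solve(S.T @ W @ S, S.T @ W @ A_)
print(w.ravel())                           # should put most weight on the first state dimension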

PDF Web DOI [BibTex]


Local Gaussian Processes Regression for Real-time Model-based Robot Control

Nguyen-Tuong, D., Peters, J.

In IROS 2008, pages: 380-385, IEEE Service Center, Piscataway, NJ, USA, 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, September 2008 (inproceedings)

Abstract
High performance and compliant robot control require accurate dynamics models which cannot be obtained analytically for sufficiently complex robot systems. In such cases, machine learning offers a promising alternative for approximating the robot dynamics using measured data. This approach offers a natural framework to incorporate unknown nonlinearities as well as to continually adapt online for changes in the robot dynamics. However, the most accurate regression methods, e.g. Gaussian processes regression (GPR) and support vector regression (SVR), suffer from exceptionally high computational complexity which prevents their usage for large numbers of samples or online learning to date. Inspired by locally linear regression techniques, we propose an approximation to the standard GPR using local Gaussian processes models. Due to reduced computational cost, local Gaussian processes (LGP) can be applied for larger sample-sizes and online learning. Comparisons with other nonparametric regressions, e.g. standard GPR, nu-SVR and locally weighted projection regression (LWPR), show that LGP has higher accuracy than LWPR close to the performance of standard GPR and nu-SVR while being sufficiently fast for online learning.
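A toy version of the local-models idea in Python/NumPy: assign points to the nearest of a few centers and fit an independent GP per local model, then predict with the nearest model. The centers, kernel and noise level are arbitrary; the paper additionally allocates local models online from a distance threshold and blends predictions of nearby models.

import numpy as np

def rbf(A, B, ls=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

class LocalGP:
    def __init__(self, n_models=5, noise=1e-2):
        self.n_models, self.noise = n_models, noise

    def fit(self, X, y):
        rng = np.random.default_rng(0)
        self.centers = X[rng.choice(len(X), self.n_models, replace=False)]
        idx = np.argmin(((X[:, None, :] - self.centers[None]) ** 2).sum(-1), axis=1)
        self.models = []
        for k in range(self.n_models):
            Xk, yk = X[idx == k], y[idx == k]     # data assigned to local model k
            alpha = np.linalg.solve(rbf(Xk, Xk) + self.noise * np.eye(len(Xk)), yk)
            self.models.append((Xk, alpha))
        return self

    def predict(self, Xq):
        k = np.argmin(((Xq[:, None, :] - self.centers[None]) ** 2).sum(-1), axis=1)
        return np.array([rbf(x[None], self.models[m][0]) @ self.models[m][1]
                         for x, m in zip(Xq, k)]).ravel()

X = np.random.default_rng(2).uniform(-3, 3, (300, 1))
y = np.sin(2 * X[:, 0]) + 0.05 * np.random.default_rng(3).standard_normal(300)
print(LocalGP().fit(X, y).predict(np.array([[0.0], [1.0]])))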

PDF Web DOI [BibTex]


Colored Maximum Variance Unfolding

Song, L., Smola, A., Borgwardt, K., Gretton, A.

In Advances in neural information processing systems 20, pages: 1385-1392, (Editors: Platt, J. C., D. Koller, Y. Singer, S. Roweis), Curran, Red Hook, NY, USA, Twenty-First Annual Conference on Neural Information Processing Systems (NIPS), September 2008 (inproceedings)

Abstract
Maximum variance unfolding (MVU) is an effective heuristic for dimensionality reduction. It produces a low-dimensional representation of the data by maximizing the variance of their embeddings while preserving the local distances of the original data. We show that MVU also optimizes a statistical dependence measure which aims to retain the identity of individual observations under the distancepreserving constraints. This general view allows us to design "colored" variants of MVU, which produce low-dimensional representations for a given task, e.g. subject to class labels or other side information.

PDF Web [BibTex]


Assessing Nonlinear Granger Causality from Multivariate Time Series

Sun, X.

In Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2008, pages: 440-455, (Editors: Daelemans, W. , B. Goethals, K. Morik), Springer, Berlin, Germany, European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD), September 2008 (inproceedings)

Abstract
A straightforward nonlinear extension of Granger’s concept of causality in the kernel framework is suggested. The kernel-based approach to assessing nonlinear Granger causality in multivariate time series enables us to determine, in a model-free way, whether the causal relation between two time series is present or not and whether it is direct or mediated by other processes. The trace norm of the so-called covariance operator in feature space is used to measure the prediction error. Relying on this measure, we test the improvement of predictability between time series by subsampling-based multiple testing. The distributional properties of the resulting p-values reveal the direction of Granger causality. Experiments with simulated and real-world data show that our method provides encouraging results.

PDF PDF DOI [BibTex]


Large Scale Variational Inference and Experimental Design for Sparse Generalized Linear Models

Seeger, M., Nickisch, H.

(175), Max-Planck Institute for Biological Cybernetics, Tübingen, Germany, September 2008 (techreport)

PDF [BibTex]


Near-Maximum Entropy Models for Binary Neural Representations of Natural Images

Bethge, M., Berens, P.

In Advances in neural information processing systems 20, pages: 97-104, (Editors: Platt, J. C., D. Koller, Y. Singer, S. Roweis), Curran, Red Hook, NY, USA, Twenty-First Annual Conference on Neural Information Processing Systems (NIPS), September 2008 (inproceedings)

Abstract
Maximum entropy analysis of binary variables provides an elegant way for studying the role of pairwise correlations in neural populations. Unfortunately, these approaches suffer from their poor scalability to high dimensions. In sensory coding, however, high-dimensional data is ubiquitous. Here, we introduce a new approach using a near-maximum entropy model, that makes this type of analysis feasible for very high-dimensional data - the model parameters can be derived in closed form and sampling is easy. We demonstrate its usefulness by studying a simple neural representation model of natural images. For the first time, we are able to directly compare predictions from a pairwise maximum entropy model not only in small groups of neurons, but also in larger populations of more than a thousand units. Our results indicate that in such larger networks interactions exist that are not predicted by pairwise correlations, despite the fact that pairwise correlations explain the lower-dimensional marginal statistics extremely well up to the limit of dimensionality where estimation of the full joint distribution is feasible.

PDF Web [BibTex]