

2010


A PAC-Bayesian Analysis of Co-clustering, Graph Clustering, and Pairwise Clustering

Seldin, Y.

In ICML 2010 Workshop on Social Analytics: Learning from human interactions, pages: 1-5, ICML Workshop on Social Analytics: Learning from human interactions, June 2010 (inproceedings)

Abstract
We review briefly the PAC-Bayesian analysis of co-clustering (Seldin and Tishby, 2008, 2009, 2010), which provided generalization guarantees and regularization terms absent in the preceding formulations of this problem and achieved state-of-the-art prediction results in the MovieLens collaborative filtering task. Inspired by this analysis we formulate weighted graph clustering as a prediction problem: given a subset of edge weights we analyze the ability of graph clustering to predict the remaining edge weights. This formulation enables practical and theoretical comparison of different approaches to graph clustering as well as comparison of graph clustering with other possible ways to model the graph. Following the lines of (Seldin and Tishby, 2010) we derive PAC-Bayesian generalization bounds for graph clustering. The bounds show that graph clustering should optimize a trade-off between empirical data fit and the mutual information that clusters preserve on the graph nodes. A similar trade-off derived from information-theoretic considerations was already shown to produce state-of-the-art results in practice (Slonim et al., 2005; Yom-Tov and Slonim, 2009). This paper supports the empirical evidence by providing a better theoretical foundation, suggesting formal generalization guarantees, and offering a more accurate way to deal with finite sample issues.
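
The trade-off described above can be written schematically (this is a simplified rendering of the bound's structure, not the paper's exact statement): the clustering q is chosen to minimize

\[ \hat{L}(q) + \beta\, I(C; V), \]

where \(\hat{L}(q)\) is the empirical prediction loss on the observed edge weights, \(I(C;V)\) is the mutual information that the cluster assignments \(C\) preserve on the graph nodes \(V\), and \(\beta\) is a trade-off coefficient whose size the bound ties to the number of observed edges.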

PDF Web [BibTex]

Solving Large-Scale Nonnegative Least Squares

Sra, S.

16th Conference of the International Linear Algebra Society (ILAS), June 2010 (talk)

Abstract
We study the fundamental problem of nonnegative least squares. This problem was apparently introduced by Lawson and Hanson [1] under the name NNLS. As is evident from its name, NNLS seeks least-squares solutions that are also nonnegative. Owing to its wide applicability, numerous algorithms have been derived for NNLS, beginning from the active-set approach of Lawson and Hanson [1] leading up to the sophisticated interior-point method of Bellavia et al. [2]. We present a new algorithm for NNLS that combines projected subgradients with the non-monotonic gradient descent idea of Barzilai and Borwein [3]. Our resulting algorithm is called BBSG, and we guarantee its convergence by exploiting properties of NNLS in conjunction with projected subgradients. BBSG is surprisingly simple and scales well to large problems. We substantiate our claims by empirically evaluating BBSG and comparing it with established convex solvers and specialized NNLS algorithms. The numerical results suggest that BBSG is a practical method for solving large-scale NNLS problems.
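
A minimal Python sketch of the projected Barzilai-Borwein idea behind BBSG. The function name and the bare-bones loop are my own illustrative assumptions; the actual BBSG algorithm adds the safeguards that guarantee convergence, which this sketch omits.

import numpy as np

def nnls_bb(A, b, iters=500):
    # Sketch of projected Barzilai-Borwein for min ||Ax - b||^2 s.t. x >= 0.
    x = np.zeros(A.shape[1])
    g = A.T @ (A @ x - b)                        # gradient of 0.5*||Ax - b||^2
    alpha = 1.0 / (np.linalg.norm(g) + 1e-12)    # initial step length
    for _ in range(iters):
        x_new = np.maximum(x - alpha * g, 0.0)   # gradient step, then projection
        g_new = A.T @ (A @ x_new - b)
        s, y = x_new - x, g_new - g
        sy = s @ y
        alpha = (s @ s) / sy if sy > 1e-12 else 1.0   # BB step length
        x, g = x_new, g_new
    return x

A = np.random.rand(200, 50)
x_hat = nnls_bb(A, A @ np.abs(np.random.randn(50)))

The BB step length is what makes the method non-monotonic: the objective is allowed to increase on individual iterations, which in practice speeds up convergence on large problems.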

PDF PDF [BibTex]

Matrix Approximation Problems

Sra, S.

EU Regional School: Rheinisch-Westfälische Technische Hochschule Aachen, May 2010 (talk)

PDF AVI [BibTex]

BCI2000 and Python

Hill, NJ.

Invited lecture at the 7th International BCI2000 Workshop, Pacific Grove, CA, USA, May 2010 (talk)

Abstract
A tutorial, with exercises, on how to integrate your own Python code with the BCI2000 realtime software package.

PDF [BibTex]

Extending BCI2000 Functionality with Your Own C++ Code

Hill, NJ.

Invited lecture at the 7th International BCI2000 Workshop, Pacific Grove, CA, USA, May 2010 (talk)

Abstract
A tutorial, with exercises, on how to use the BCI2000 C++ framework to write your own real-time signal-processing modules.

[BibTex]

Apprenticeship learning via soft local homomorphisms

Boularias, A., Chaib-Draa, B.

In Proceedings of the 2010 IEEE International Conference on Robotics and Automation (ICRA 2010), pages: 2971-2976, IEEE, Piscataway, NJ, USA, 2010 IEEE International Conference on Robotics and Automation (ICRA), May 2010 (inproceedings)

Abstract
We consider the problem of apprenticeship learning when the expert's demonstration covers only a small part of a large state space. Inverse Reinforcement Learning (IRL) provides an efficient solution to this problem based on the assumption that the expert is optimally acting in a Markov Decision Process (MDP). However, past work on IRL requires an accurate estimate of the frequency of encountering each feature of the states when the robot follows the expert's policy. Given that the complete policy of the expert is unknown, the feature frequencies can only be empirically estimated from the demonstrated trajectories. In this paper, we propose to use a transfer method, known as soft homomorphism, in order to generalize the expert's policy to unvisited regions of the state space. The generalized policy can be used either as the robot's final policy, or to calculate the feature frequencies within an IRL algorithm. Empirical results show that our approach is able to learn good policies from a small number of demonstrations.

PDF Web DOI [BibTex]

Using Model Knowledge for Learning Inverse Dynamics

Nguyen-Tuong, D., Peters, J.

In Proceedings of the 2010 IEEE International Conference on Robotics and Automation (ICRA 2010), pages: 2677-2682, IEEE, Piscataway, NJ, USA, 2010 IEEE International Conference on Robotics and Automation (ICRA), May 2010 (inproceedings)

Abstract
In recent years, learning models from data has become an increasingly interesting tool for robotics, as it allows straightforward and accurate model approximation. However, in most robot learning approaches, the model is learned from scratch disregarding all prior knowledge about the system. For many complex robot systems, available prior knowledge from advanced physics-based modeling techniques can entail valuable information for model learning that may result in faster learning speed, higher accuracy and better generalization. In this paper, we investigate how parametric physical models (e.g., obtained from rigid body dynamics) can be used to improve the learning performance, and, especially, how semiparametric regression methods can be applied in this context. We present two possible semiparametric regression approaches, where the knowledge of the physical model can either become part of the mean function or of the kernel in a nonparametric Gaussian process regression. We compare the learning performance of these methods first on sampled data and, subsequently, apply the obtained inverse dynamics models in tracking control on a real Barrett WAM. The results show that the semiparametric models learned with rigid body dynamics as prior outperform the standard rigid body dynamics models on real data while generalizing better for unknown parts of the state space.
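
A minimal sketch of the mean-function variant described above. The callable rbd_model is a hypothetical stand-in for the rigid-body-dynamics prediction, and the RBF kernel and hyperparameters are illustrative choices, not the paper's settings.

import numpy as np

def semiparametric_gp(X, y, Xs, rbd_model, ell=1.0, sf=1.0, noise=0.1):
    # GP regression with a physics-based model as the prior mean: the GP
    # only has to learn the residual between data and the parametric model.
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sf ** 2 * np.exp(-0.5 * d2 / ell ** 2)
    K = k(X, X) + noise ** 2 * np.eye(len(X))
    alpha = np.linalg.solve(K, y - rbd_model(X))   # fit the residual only
    return rbd_model(Xs) + k(Xs, X) @ alpha

# toy usage with a hypothetical linear "physics" model
w = np.array([0.5, -0.2])
X = np.random.rand(50, 2)
y = X @ w + 0.3 * np.sin(5 * X[:, 0])              # physics + unmodeled effects
pred = semiparametric_gp(X, y, np.random.rand(5, 2), lambda Z: Z @ w)

Far from the data the posterior falls back to the physical model rather than to zero, which is one intuition for the better extrapolation reported above.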

PDF Web DOI [BibTex]

Coherent Inference on Optimal Play in Game Trees

Hennig, P., Stern, D., Graepel, T.

In JMLR Workshop and Conference Proceedings Volume 9: AISTATS 2010, pages: 326-333, (Editors: Teh, Y.W., M. Titterington), JMLR, Cambridge, MA, USA, Thirteenth International Conference on Artificial Intelligence and Statistics, May 2010 (inproceedings)

Abstract
Round-based games are an instance of discrete planning problems. Some of the best contemporary game tree search algorithms use random roll-outs as data. Relying on a good policy, they learn on-policy values by propagating information upwards in the tree, but not between sibling nodes. Here, we present a generative model and a corresponding approximate message passing scheme for inference on the optimal, off-policy value of nodes in smooth AND/OR trees, given random roll-outs. The crucial insight is that the distribution of values in game trees is not completely arbitrary. We define a generative model of the on-policy values using a latent score for each state, representing the value under the random roll-out policy. Inference on the values under the optimal policy separates into an inductive, pre-data step and a deductive, post-data part. Both can be solved approximately with Expectation Propagation, allowing off-policy value inference for any node in the (exponentially big) tree in linear time.

PDF Web [BibTex]

Incremental Sparsification for Real-time Online Model Learning

Nguyen-Tuong, D., Peters, J.

In JMLR Workshop and Conference Proceedings Volume 9: AISTATS 2010, pages: 557-564, (Editors: Teh, Y.W., M. Titterington), JMLR, Cambridge, MA, USA, Thirteenth International Conference on Artificial Intelligence and Statistics, May 2010 (inproceedings)

Abstract
Online model learning in real-time is required by many applications such as in robot tracking control. It poses a difficult problem, as fast and incremental online regression with large data sets is the essential component which cannot be achieved by straightforward usage of off-the-shelf machine learning methods (such as Gaussian process regression or support vector regression). In this paper, we propose a framework for online, incremental sparsification with a fixed budget designed for large scale real-time model learning. The proposed approach combines a sparsification method based on an independence measure with a large scale database. In combination with an incremental learning approach such as sequential support vector regression, we obtain a regression method which is applicable in real-time online learning. It exhibits competitive learning accuracy when compared with standard regression techniques. Implementation on a real robot emphasizes the applicability of the proposed approach in real-time online model learning for real world systems.
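
A sketch of the kind of feature-space independence test such sparsification rests on. This is a hedged reconstruction of the criterion, not necessarily the paper's exact measure: a new point enters the dictionary only if its feature vector cannot be well approximated by a linear combination of the current dictionary's feature vectors.

import numpy as np

def novel_enough(x_new, dictionary, kern, thresh=1e-3):
    # Residual of projecting phi(x_new) onto span{phi(d) : d in dictionary}.
    if not dictionary:
        return True
    K = np.array([[kern(a, b) for b in dictionary] for a in dictionary])
    kv = np.array([kern(a, x_new) for a in dictionary])
    coeff = np.linalg.solve(K + 1e-10 * np.eye(len(K)), kv)
    return kern(x_new, x_new) - kv @ coeff > thresh

rbf = lambda a, b: np.exp(-0.5 * np.sum((a - b) ** 2))
dictionary = []
for x in np.random.randn(200, 3):
    if novel_enough(x, dictionary, rbf):
        dictionary.append(x)   # a fixed-budget variant would also evict here

Keeping the dictionary small is what makes the subsequent incremental regression (e.g., sequential support vector regression) fast enough for real-time use.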

PDF Web [BibTex]

Multitask Learning for Brain-Computer Interfaces

Alamgir, M., Grosse-Wentrup, M., Altun, Y.

In JMLR Workshop and Conference Proceedings Volume 9: AISTATS 2010, pages: 17-24, (Editors: Teh, Y.W., M. Titterington), JMLR, Cambridge, MA, USA, Thirteenth International Conference on Artificial Intelligence and Statistics, May 2010 (inproceedings)

Abstract
Brain-computer interfaces (BCIs) are limited in their applicability in everyday settings by the current necessity to record subject-specific calibration data prior to actual use of the BCI for communication. In this paper, we utilize the framework of multitask learning to construct a BCI that can be used without any subject-specific calibration process. We discuss how this out-of-the-box BCI can be further improved in a computationally efficient manner as subject-specific data becomes available. The feasibility of the approach is demonstrated on two sets of experimental EEG data recorded during a standard two-class motor imagery paradigm from a total of 19 healthy subjects. Specifically, we show that satisfactory classification results can be achieved with zero training data, and combining prior recordings with subject-specific calibration data substantially outperforms using subject-specific data only. Our results further show that transfer between recordings under slightly different experimental setups is feasible.

PDF Web [BibTex]

Identifying Cause and Effect on Discrete Data using Additive Noise Models

Peters, J., Janzing, D., Schölkopf, B.

In JMLR Workshop and Conference Proceedings Volume 9: AISTATS 2010, pages: 597-604, (Editors: YW Teh and M Titterington), JMLR, Cambridge, MA, USA, 13th International Conference on Artificial Intelligence and Statistics, May 2010 (inproceedings)

Abstract
Inferring the causal structure of a set of random variables from a finite sample of the joint distribution is an important problem in science. Recently, methods using additive noise models have been suggested to approach the case of continuous variables. In many situations, however, the variables of interest are discrete or even have only finitely many states. In this work we extend the notion of additive noise models to these cases. Whenever the joint distribution $P(X,Y)$ admits such a model in one direction, e.g. $Y = f(X) + N$, $N \perp X$, it does not admit the reversed model $X = g(Y) + \tilde{N}$, $\tilde{N} \perp Y$, as long as the model is chosen in a generic way. Based on these deliberations we propose an efficient new algorithm that is able to distinguish between cause and effect for a finite sample of discrete variables. We show that this algorithm works both on synthetic and real data sets.
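
A compact sketch of the decision rule this implies. The regression fit and the independence test are chosen for brevity (conditional mode plus a chi-squared test) and do not match the authors' algorithm.

import numpy as np
from scipy.stats import chi2_contingency

def residual_independence_p(x, y):
    # Fit y ~ f(x) by the conditional mode (y assumed coded as nonnegative
    # integers), then chi-squared test between x and the residual noise.
    f = {v: np.bincount(y[x == v]).argmax() for v in np.unique(x)}
    resid = y - np.array([f[v] for v in x])
    xs, rs = np.unique(x), np.unique(resid)
    table = np.array([[np.sum((x == v) & (resid == r)) for r in rs] for v in xs])
    return chi2_contingency(table + 1)[1]    # +1 keeps expected counts nonzero

x = np.random.randint(0, 4, 1000)
y = (x ** 2 + np.random.randint(0, 2, 1000)) % 7
forward, backward = residual_independence_p(x, y), residual_independence_p(y, x)
# the direction whose residuals look more independent of the input is preferred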

PDF Web [BibTex]

Semi-supervised Learning via Generalized Maximum Entropy

Erkan, A., Altun, Y.

In JMLR Workshop and Conference Proceedings Volume 9: AISTATS 2010, pages: 209-216, (Editors: Teh, Y.W., M. Titterington), JMLR, Cambridge, MA, USA, Thirteenth International Conference on Artificial Intelligence and Statistics, May 2010 (inproceedings)

Abstract
Various supervised inference methods can be analyzed as convex duals of the generalized maximum entropy (MaxEnt) framework. Generalized MaxEnt aims to find a distribution that maximizes an entropy function while respecting prior information represented as potential functions in miscellaneous forms of constraints and/or penalties. We extend this framework to semi-supervised learning by incorporating unlabeled data via modifications to these potential functions reflecting structural assumptions on the data geometry. The proposed approach leads to a family of discriminative semi-supervised algorithms, that are convex, scalable, inherently multi-class, easy to implement, and that can be kernelized naturally. Experimental evaluation of special cases shows the competitiveness of our methodology.

PDF Web [BibTex]

A New Algorithm for Improving the Resolution of Cryo-EM Density Maps

Hirsch, M., Schölkopf, B., Habeck, M.

In Research in Computational Molecular Biology, Lecture Notes in Bioinformatics, Vol. 6044, pages: 174-188, (Editors: B Berger), Springer, Berlin, Germany, 14th International Conference on Research in Computational Molecular Biology (RECOMB), May 2010 (inproceedings)

Abstract
Cryo-electron microscopy (cryo-EM) plays an increasingly prominent role in structure elucidation of macromolecular assemblies. Advances in experimental instrumentation and computational power have spawned numerous cryo-EM studies of large biomolecular complexes resulting in the reconstruction of three-dimensional density maps at intermediate and low resolution. In this resolution range, identification and interpretation of structural elements and modeling of biomolecular structure with atomic detail becomes problematic. In this paper, we present a novel algorithm that enhances the resolution of intermediate- and low-resolution density maps. Our underlying assumption is to model the low-resolution density map as a blurred and possibly noise-corrupted version of an unknown high-resolution map that we seek to recover by deconvolution. By exploiting the nonnegativity of both the high-resolution map and blur kernel we derive multiplicative updates reminiscent of those used in nonnegative matrix factorization. Our framework allows for easy incorporation of additional prior knowledge such as smoothness and sparseness, on both the sharpened density map and the blur kernel. A probabilistic formulation enables us to derive updates for the hyperparameters, therefore our approach has no parameter that needs adjustment. We apply the algorithm to simulated three-dimensional electron microscopic data. We show that our method provides better resolved density maps when compared with B-factor sharpening, especially in the presence of noise. Moreover, our method can use additional information provided by homologous structures, which helps to improve the resolution even further.
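
A one-dimensional sketch of the multiplicative update at the core of this approach, with the blur kernel held fixed for brevity; the paper additionally updates the kernel, adds smoothness/sparseness priors, and derives hyperparameter updates, none of which appear here.

import numpy as np

def nonneg_deconv(b, h, iters=200):
    # Lee-Seung-style update x <- x * (A^T b) / (A^T A x) for b ~ h * x, x >= 0,
    # written with convolutions instead of an explicit matrix A.
    n = len(b) - len(h) + 1
    x = np.full(n, b.mean() + 1e-9)
    for _ in range(iters):
        Ax = np.convolve(x, h, mode="full")          # forward blur
        num = np.correlate(b, h, mode="valid")       # A^T b
        den = np.correlate(Ax, h, mode="valid")      # A^T A x
        x *= num / (den + 1e-12)                     # ratio keeps x nonnegative
    return x

x_true = np.zeros(100); x_true[[20, 50, 51, 80]] = [1.0, 2.0, 1.5, 0.5]
h = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2); h /= h.sum()
x_hat = nonneg_deconv(np.convolve(x_true, h, mode="full"), h)

Because the update is a ratio of nonnegative quantities, nonnegativity of the sharpened map is maintained automatically, with no projection step.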

Web DOI [BibTex]

Movement Templates for Learning of Hitting and Batting

Kober, J., Mülling, K., Krömer, O., Lampert, C., Schölkopf, B., Peters, J.

In Proceedings of the 2010 IEEE International Conference on Robotics and Automation (ICRA 2010), pages: 853-858, IEEE, Piscataway, NJ, USA, 2010 IEEE International Conference on Robotics and Automation (ICRA), May 2010 (inproceedings)

PDF Web DOI [BibTex]

Solving large-scale nonnegative least squares using an adaptive non-monotonic method

Sra, S., Kim, D., Dhillon, I.

24th European Conference on Operational Research (EURO 2010), 24, pages: 223, April 2010 (poster)

Abstract
We present an efficient algorithm for large-scale non-negative least-squares (NNLS). We solve NNLS by extending the unconstrained quadratic optimization method of Barzilai and Borwein (BB) to handle nonnegativity constraints. Our approach is simple yet efficient. It differs from other constrained BB variants as: (i) it uses a specific subset of variables for computing BB steps; and (ii) it scales these steps adaptively to ensure convergence. We compare our method with both established convex solvers and specialized NNLS methods, and observe highly competitive empirical performance.

PDF [BibTex]

Sparse regression via a trust-region proximal method

Kim, D., Sra, S., Dhillon, I.

24th European Conference on Operational Research (EURO 2010), 24, pages: 278, April 2010 (poster)

Abstract
We present a method for sparse regression problems. Our method is based on the nonsmooth trust-region framework that minimizes a sum of smooth convex functions and a nonsmooth convex regularizer. By employing a separable quadratic approximation to the smooth part, the method enables the use of proximity operators, which in turn allow tackling the nonsmooth part efficiently. We illustrate our method by implementing it for three important sparse regression problems. In experiments with synthetic and real-world large-scale data, our method is seen to be competitive, robust, and scalable.
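
A minimal proximal-gradient sketch showing how a proximity operator handles the nonsmooth l1 part. The paper embeds such prox steps in a nonsmooth trust-region scheme; this plain ISTA loop only illustrates the separable-quadratic-plus-prox mechanics.

import numpy as np

def soft_threshold(v, t):
    # proximity operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_regression_ista(A, b, lam=0.1, iters=300):
    # min 0.5*||Ax - b||^2 + lam*||x||_1 via gradient step + prox step.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

A = np.random.randn(100, 300)
x0 = np.zeros(300); x0[:5] = 3.0
x_hat = sparse_regression_ista(A, A @ x0 + 0.01 * np.random.randn(100))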

PDF [BibTex]

Machine-Learning Methods for Decoding Intentional Brain States

Hill, NJ.

Symposium "Non-Invasive Brain Computer Interfaces: Current Developments and Applications" (BIOMAG), March 2010 (talk)

Abstract
Brain-computer interfaces (BCI) work by making the user perform a specific mental task, such as imagining moving body parts or performing some other covert mental activity, or attending to a particular stimulus out of an array of options, in order to encode their intention into a measurable brain signal. Signal-processing and machine-learning techniques are then used to decode the measured signal to identify the encoded mental state and hence extract the user's initial intention. The high-noise, high-dimensional nature of brain signals makes robust decoding techniques a necessity. Generally, the approach has been to use relatively simple feature extraction techniques, such as template matching and band-power estimation, coupled to simple linear classifiers. This has led to a prevailing view among applied BCI researchers that (sophisticated) machine learning is irrelevant since “it doesn't matter what classifier you use once your features are extracted.” Using examples from our own MEG and EEG experiments, I'll demonstrate how machine-learning principles can be applied in order to improve BCI performance, if they are formulated in a domain-specific way. The result is a type of data-driven analysis that is more than “just” classification, and can be used to find better feature extractors.

PDF Web [BibTex]

PAC-Bayesian Analysis in Unsupervised Learning

Seldin, Y.

Foundations and New Trends of PAC Bayesian Learning Workshop, March 2010 (talk)

PDF Web [BibTex]

PAC-Bayesian Bounds for Discrete Density Estimation and Co-clustering Analysis

Seldin, Y., Tishby, N.

Workshop "Foundations and New Trends of PAC Bayesian Learning", 2010, March 2010 (poster)

Abstract
We applied the PAC-Bayesian framework to derive generalization bounds for co-clustering. The analysis yielded regularization terms that were absent in the preceding formulations of this task. The bounds suggested that co-clustering should optimize a trade-off between its empirical performance and the mutual information that the cluster variables preserve on row and column indices. Proper regularization enabled us to achieve state-of-the-art results in prediction of the missing ratings in the MovieLens collaborative filtering dataset. In addition, a PAC-Bayesian bound for discrete density estimation was derived. We have shown that the PAC-Bayesian bound for classification is a special case of the PAC-Bayesian bound for discrete density estimation. We further introduced combinatorial priors to PAC-Bayesian analysis. The combinatorial priors are more appropriate for discrete domains, as opposed to Gaussian priors, the latter of which are suitable for continuous domains. It was shown that combinatorial priors lead to regularization terms in the form of mutual information.

PDF Web [BibTex]

Experiments with Motor Primitives to learn Table Tennis

Peters, J., Mülling, K., Kober, J.

In Experimental Robotics, pages: 1-13, (Editors: Khatib, O., V. Kumar, G. Sukhatme), Springer, Berlin, Germany, 12th International Symposium on Experimental Robotics (ISER), March 2010 (inproceedings)

Web [BibTex]

Causality: Objectives and Assessment

Guyon, I., Janzing, D., Schölkopf, B.

In JMLR Workshop and Conference Proceedings: Volume 6, pages: 1-42, (Editors: I Guyon and D Janzing and B Schölkopf), MIT Press, Cambridge, MA, USA, Causality: Objectives and Assessment (NIPS Workshop), February 2010 (inproceedings)

Abstract
The NIPS 2008 workshop on causality provided a forum for researchers from different horizons to share their view on causal modeling and address the difficult question of assessing causal models. There has been a vivid debate on properly separating the notion of causality from particular models such as graphical models, which have been dominating the field in the past few years. Part of the workshop was dedicated to discussing the results of a challenge, which offered a wide variety of applications of causal modeling. We have regrouped in these proceedings the best papers presented. Most lectures were videotaped or recorded. All information regarding the challenge and the lectures can be found at http://www.clopinet.com/isabelle/Projects/NIPS2008/. This introduction provides a synthesis of the findings and a gentle introduction to causality topics, which are the object of active research.

Web [BibTex]

Learning Motor Primitives for Robotics

Kober, J., Peters, J.

EVENT Lab: Reinforcement Learning in Robotics and Virtual Reality, January 2010 (talk)

Abstract
The acquisition and self-improvement of novel motor skills is among the most important problems in robotics. Motor primitives offer one of the most promising frameworks for the application of machine learning techniques in this context. Employing the Dynamic Systems Motor primitives originally introduced by Ijspeert et al. (2003), appropriate learning algorithms for a concerted approach of both imitation and reinforcement learning are presented. Using these algorithms new motor skills, i.e., Ball-in-a-Cup, Ball-Paddling and Dart-Throwing, are learned.

[BibTex]

Leveraging Sequence Classification by Taxonomy-based Multitask Learning

Widmer, C., Leiva, J., Altun, Y., Rätsch, G.

In Research in Computational Molecular Biology, LNCS, Vol. 6044, pages: 522-534, (Editors: B Berger), Springer, Berlin, Germany, 14th Annual International Conference, RECOMB, 2010 (inproceedings)

DOI [BibTex]

Probabilistic latent variable models for distinguishing between cause and effect

Mooij, J., Stegle, O., Janzing, D., Zhang, K., Schölkopf, B.

In Advances in Neural Information Processing Systems 23, pages: 1687-1695, (Editors: J Lafferty and CKI Williams and J Shawe-Taylor and RS Zemel and A Culotta), Curran, Red Hook, NY, USA, 24th Annual Conference on Neural Information Processing Systems (NIPS), 2010 (inproceedings)

Abstract
We propose a novel method for inferring whether X causes Y or vice versa from joint observations of X and Y. The basic idea is to model the observed data using probabilistic latent variable models, which incorporate the effects of unobserved noise. To this end, we consider the hypothetical effect variable to be a function of the hypothetical cause variable and an independent noise term (not necessarily additive). An important novel aspect of our work is that we do not restrict the model class, but instead put general non-parametric priors on this function and on the distribution of the cause. The causal direction can then be inferred by using standard Bayesian model selection. We evaluate our approach on synthetic data and real-world data and report encouraging results.

PDF Web [BibTex]

JigPheno: Semantic Feature Extraction in biological images

Karaletsos, T., Stegle, O., Winn, J., Borgwardt, K.

In NIPS Workshop on Machine Learning in Computational Biology, 2010 (inproceedings)

[BibTex]

Nonparametric Tree Graphical Models

Song, L., Gretton, A., Guestrin, C.

In Proceedings of the 13th International Conference on Artificial Intelligence and Statistics, Volume 9, pages: 765-772, (Editors: YW Teh and M Titterington), JMLR, AISTATS, 2010 (inproceedings)

PDF [BibTex]

Novel machine learning methods for MHC Class I binding prediction

Widmer, C., Toussaint, N., Altun, Y., Kohlbacher, O., Rätsch, G.

In Pattern Recognition in Bioinformatics, pages: 98-109, (Editors: TMH Dijkstra and E Tsivtsivadze and E Marchiori and T Heskes), Springer, Berlin, Germany, 5th IAPR International Conference, PRIB, 2010 (inproceedings)

DOI [BibTex]

Bootstrapping Apprenticeship Learning

Boularias, A., Chaib-Draa, B.

In Advances in Neural Information Processing Systems 23, pages: 289-297, (Editors: Lafferty, J., C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, A. Culotta), Curran, Red Hook, NY, USA, Twenty-Fourth Annual Conference on Neural Information Processing Systems (NIPS), 2010 (inproceedings)

Abstract
We consider the problem of apprenticeship learning where the examples, demonstrated by an expert, cover only a small part of a large state space. Inverse Reinforcement Learning (IRL) provides an efficient tool for generalizing the demonstration, based on the assumption that the expert is maximizing a utility function that is a linear combination of state-action features. Most IRL algorithms use a simple Monte Carlo estimation to approximate the expected feature counts under the expert's policy. In this paper, we show that the quality of the learned policies is highly sensitive to the error in estimating the feature counts. To reduce this error, we introduce a novel approach for bootstrapping the demonstration by assuming that: (i) the expert is (near-)optimal, and (ii) the dynamics of the system is known. Empirical results on gridworlds and car racing problems show that our approach is able to learn good policies from a small number of demonstrations.

PDF Web [BibTex]

Distinguishing Causes from Effects using Nonlinear Acyclic Causal Models

Zhang, K., Hyvärinen, A.

In JMLR Workshop and Conference Proceedings, Volume 6, pages: 157-164, (Editors: I Guyon and D Janzing and B Schölkopf), MIT Press, Cambridge, MA, USA, Causality: Objectives and Assessment (NIPS Workshop), 2010 (inproceedings)

PDF Web [BibTex]

Clustering Based Approach to Learning Regular Expressions over Large Alphabet for Noisy Unstructured Text

Babbar, R., Singh, N.

In Proceedings of the Fourth Workshop on Analytics for Noisy Unstructured Text Data, pages: 43-50, (Editors: R Basili and DP Lopresti and C Ringlstetter and S Roy and KU Schulz and LV Subramaniam), ACM, AND (in conjunction with CIKM), 2010 (inproceedings)

Web [BibTex]

Learning the Reward Model of Dialogue POMDPs

Boularias, A., Chinaei, H., Chaib-Draa, B.

NIPS Workshop on Machine Learning for Assistive Technology (MLAT-2010), 2010 (poster)

[BibTex]

Characteristic Kernels on Structured Domains Excel in Robotics and Human Action Recognition

Danafar, S., Gretton, A., Schmidhuber, J.

In Machine Learning and Knowledge Discovery in Databases, LNCS Vol. 6321, pages: 264-279, (Editors: JL Balcázar and F Bonchi and A Gionis and M Sebag), Springer, Berlin, Germany, ECML PKDD, 2010 (inproceedings)

Abstract
Embedding probability distributions into a sufficiently rich (characteristic) reproducing kernel Hilbert space enables us to take higher order statistics into account. Characterization also retains effective statistical relation between inputs and outputs in regression and classification. Recent works established conditions for characteristic kernels on groups and semigroups. Here we study characteristic kernels on periodic domains, rotation matrices, and histograms. Such structured domains are relevant for homogeneity testing, forward kinematics, forward dynamics, inverse dynamics, etc. Our kernel-based methods with tailored characteristic kernels outperform previous methods on robotics problems and also on a widely used benchmark for recognition of human actions in videos.

DOI [BibTex]

Movement extraction by detecting dynamics switches and repetitions

Chiappa, S., Peters, J.

In Advances in Neural Information Processing Systems 23, pages: 388-396, (Editors: Lafferty, J., C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, A. Culotta), Curran, Red Hook, NY, USA, Twenty-Fourth Annual Conference on Neural Information Processing Systems (NIPS), 2010 (inproceedings)

Abstract
Many time-series such as human movement data consist of a sequence of basic actions, e.g., forehands and backhands in tennis. Automatically extracting and characterizing such actions is an important problem for a variety of different applications. In this paper, we present a probabilistic segmentation approach in which an observed time-series is modeled as a concatenation of segments corresponding to different basic actions. Each segment is generated through a noisy transformation of one of a few hidden trajectories representing different types of movement, with possible time re-scaling. We analyze three different approximation methods for dealing with model intractability, and demonstrate how the proposed approach can successfully segment table tennis movements recorded using a robot arm as haptic input device.

PDF Web [BibTex]

Space-Variant Single-Image Blind Deconvolution for Removing Camera Shake

Harmeling, S., Hirsch, M., Schölkopf, B.

In Advances in Neural Information Processing Systems 23, pages: 829-837, (Editors: J Lafferty and CKI Williams and J Shawe-Taylor and RS Zemel and A Culotta), Curran, Red Hook, NY, USA, 24th Annual Conference on Neural Information Processing Systems (NIPS), 2010 (inproceedings)

Abstract
Modelling camera shake as a space-invariant convolution simplifies the problem of removing camera shake, but often insufficiently models actual motion blur, such as that due to camera rotation and movements outside the sensor plane, or when objects in the scene have different distances to the camera. In an effort to address these limitations, (i) we introduce a taxonomy of camera shakes, (ii) we build on a recently introduced framework for space-variant filtering by Hirsch et al. and a fast algorithm for single image blind deconvolution for space-invariant filters by Cho and Lee to construct a method for blind deconvolution in the case of space-variant blur, and (iii) we present an experimental setup for evaluation that allows us to take images with real camera shake while at the same time recording the space-variant point spread function corresponding to that blur. Finally, we demonstrate that our method is able to deblur images degraded by spatially-varying blur originating from real camera shake, even without using additional motion sensor information.

PDF Web [BibTex]

Getting lost in space: Large sample analysis of the resistance distance

von Luxburg, U., Radl, A., Hein, M.

In Advances in Neural Information Processing Systems 23, pages: 2622-2630, (Editors: Lafferty, J. , C. K.I. Williams, J. Shawe-Taylor, R. S. Zemel, A. Culotta), Curran, Red Hook, NY, USA, Twenty-Fourth Annual Conference on Neural Information Processing Systems (NIPS), 2010 (inproceedings)

Abstract
The commute distance between two vertices in a graph is the expected time it takes a random walk to travel from the first to the second vertex and back. We study the behavior of the commute distance as the size of the underlying graph increases. We prove that the commute distance converges to an expression that does not take into account the structure of the graph at all and that is completely meaningless as a distance function on the graph. Consequently, the use of the raw commute distance for machine learning purposes is strongly discouraged for large graphs and in high dimensions. As an alternative we introduce the amplified commute distance that corrects for the undesired large sample effects.
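
The commute distance has a closed form via the pseudoinverse of the graph Laplacian, which makes the effect described above easy to reproduce numerically. A short sketch using that standard identity (the paper's amplified correction is omitted):

import numpy as np

def commute_distances(W):
    # C_uv = vol(G) * (L+_uu + L+_vv - 2 L+_uv), with L+ the Moore-Penrose
    # pseudoinverse of the unnormalized Laplacian L = D - W.
    d = W.sum(axis=1)
    L = np.diag(d) - W
    Lp = np.linalg.pinv(L)
    diag = np.diag(Lp)
    return d.sum() * (diag[:, None] + diag[None, :] - 2 * Lp)

W = np.random.rand(30, 30); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
C = commute_distances(W)
# as graphs grow, C_uv approaches vol(G) * (1/d_u + 1/d_v), which depends
# only on the two degrees and ignores the graph structure entirely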

PDF Web [BibTex]

Distinguishing between cause and effect

Mooij, J., Janzing, D.

In JMLR Workshop and Conference Proceedings: Volume 6, pages: 147-156, (Editors: Guyon, I., D. Janzing, B. Schölkopf), MIT Press, Cambridge, MA, USA, Causality: Objectives and Assessment (NIPS Workshop), 2010 (inproceedings)

Abstract
We describe eight data sets that together formed the CauseEffectPairs task in the Causality Challenge #2: Pot-Luck competition. Each set consists of a sample of a pair of statistically dependent random variables. One variable is known to cause the other one, but this information was hidden from the participants; the task was to identify which of the two variables was the cause and which one the effect, based upon the observed sample. The data sets were chosen such that we expect common agreement on the ground truth. Even though part of the statistical dependences may also be due to hidden common causes, common sense tells us that there is a significant cause-effect relation between the two variables in each pair. We also present baseline results using three different causal inference methods.

PDF Web [BibTex]

First Experiences in the Assessment of Hemato-Oncological Disease Manifestations in the Extremities with a PET/MRI Hybrid System

Sauter, A., Boss, A., Kolb, A., Mantlik, F., Bethge, W., Kanz, L., Pfannenberg, C., Stegger, L., Pichler, B., Claussen, C., Horger, M.

Thieme Verlag, Stuttgart, Germany, 91. Deutscher Röntgenkongress, 2010 (poster)

Web DOI [BibTex]

Kernel Methods for Detecting the Direction of Time Series

Peters, J., Janzing, D., Gretton, A., Schölkopf, B.

In Advances in Data Analysis, Data Handling and Business Intelligence, pages: 57-66, (Editors: A Fink and B Lausen and W Seidel and A Ultsch), Springer, Berlin, Germany, 32nd Annual Conference of the Gesellschaft für Klassifikation e.V. (GfKl), 2010 (inproceedings)

Abstract
We propose two kernel based methods for detecting the time direction in empirical time series. First we apply a Support Vector Machine on the finite-dimensional distributions of the time series (classification method) by embedding these distributions into a Reproducing Kernel Hilbert Space. For the ARMA method we fit the observed data with an autoregressive moving average process and test whether the regression residuals are statistically independent of the past values. Whenever the dependence in one direction is significantly weaker than in the other we infer the former to be the true one. Both approaches were able to detect the direction of the true generating model for simulated data sets. We also applied our tests to a large number of real world time series. The ARMA method made a decision for a significant fraction of them, in which it was mostly correct, while the classification method did not perform as well, but still exceeded chance level.
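
A crude numeric sketch of the ARMA-method logic: fit an autoregressive model by least squares in each direction, then ask how dependent the residuals are on the past. A simple nonlinear cross-moment stands in here for the proper kernel independence test the authors use.

import numpy as np

def residual_dependence(x, p=2):
    # Fit AR(p) by least squares; residuals are uncorrelated with the
    # regressors by construction, so probe a nonlinear moment instead.
    X = np.column_stack([x[p - i - 1:len(x) - i - 1] for i in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return max(abs(np.corrcoef(resid ** 2, X[:, i])[0, 1]) for i in range(p))

x = np.zeros(3000)
for t in range(1, 3000):                 # AR(1) with skewed, non-Gaussian noise
    x[t] = 0.8 * x[t - 1] + np.random.exponential() - 1.0
forward = residual_dependence(x)
backward = residual_dependence(x[::-1])
# the direction with the significantly weaker residual dependence is inferred
# to be the true one; for Gaussian noise both directions look identical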

PDF Web DOI [BibTex]

Switched Latent Force Models for Movement Segmentation

Alvarez, M., Peters, J., Schölkopf, B., Lawrence, N.

In Advances in neural information processing systems 23, pages: 55-63, (Editors: J Lafferty and CKI Williams and J Shawe-Taylor and RS Zemel and A Culotta), Curran, Red Hook, NY, USA, 24th Annual Conference on Neural Information Processing Systems (NIPS), 2010 (inproceedings)

Abstract
Latent force models encode the interaction between multiple related dynamical systems in the form of a kernel or covariance function. Each variable to be modeled is represented as the output of a differential equation and each differential equation is driven by a weighted sum of latent functions with uncertainty given by a Gaussian process prior. In this paper we consider employing the latent force model framework for the problem of determining robot motor primitives. To deal with discontinuities in the dynamical systems or the latent driving force we introduce an extension of the basic latent force model, that switches between different latent functions and potentially different dynamical systems. This creates a versatile representation for robot movements that can capture discrete changes and non-linearities in the dynamics. We give illustrative examples on both synthetic data and for striking movements recorded using a Barrett WAM robot as haptic input device. Our inspiration is robot motor primitives, but we expect our model to have wide application for dynamical systems including models for human motion capture data and systems biology.

PDF Web [BibTex]

Naïve Security in a Wi-Fi World

Swanson, C., Urner, R., Lank, E.

In Trust Management IV - 4th IFIP WG 11.11 International Conference Proceedings, pages: 32-47, (Editors: Nishigaki, M., Josang, A., Murayama, Y., Marsh, S.), IFIPTM, 2010 (inproceedings)

link (url) DOI [BibTex]


2005


Spectral clustering and transductive inference for graph data

Zhou, D.

NIPS Workshop on Kernel Methods and Structured Domains, December 2005 (talk)

PDF Web [BibTex]

Kernel ICA for Large Scale Problems

Jegelka, S., Gretton, A., Achlioptas, D.

In NIPS Workshop on Large Scale Kernel Machines, December 2005 (inproceedings)

Web [BibTex]

Infinite dimensional exponential families by reproducing kernel Hilbert spaces

Fukumizu, K.

In IGAIA 2005, pages: 324-333, 2nd International Symposium on Information Geometry and its Applications, December 2005 (inproceedings)

Abstract
The purpose of this paper is to propose a method of constructing exponential families of Hilbert manifold, on which estimation theory can be built. Although there have been works on infinite dimensional exponential families of Banach manifolds (Pistone and Sempi, 1995; Gibilisco and Pistone, 1998; Pistone and Rogantin, 1999), they are not appropriate to discuss statistical estimation with finite number of samples; the likelihood function with finite samples is not continuous on the manifold. In this paper we use a reproducing kernel Hilbert space as a functional space for constructing an exponential manifold. A reproducing kernel Hilbert space is defined as a Hilbert space of functions such that evaluation of a function at an arbitrary point is a continuous functional on the Hilbert space. Since we can discuss the value of a function with this space, it is very natural to use a manifold associated with a reproducing kernel Hilbert space as a basis of estimation theory. We focus on the maximum likelihood estimation (MLE) with the exponential manifold of a reproducing kernel Hilbert space. As in many non-parametric estimation methods, straightforward extension of MLE to an infinite dimensional exponential manifold suffers from the problem of ill-posedness caused by the fact that the estimator should be chosen from the infinite dimensional space with only finite number of constraints given by the data. To solve this problem, a pseudo-maximum likelihood method is proposed by restricting the infinite dimensional manifold to a series of finite dimensional submanifolds, which enlarge as the number of samples increases. Some asymptotic results in the limit of infinite samples are shown, including the consistency of the pseudo-MLE.
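
In symbols, one standard way to write the construction sketched above (the notation here is mine, not necessarily the paper's):

\[ p_f(x) = \exp\big(f(x) - A(f)\big)\, p_0(x), \qquad f \in \mathcal{H}_k, \qquad A(f) = \log \int \exp\big(f(x)\big)\, p_0(dx), \]

where \(\mathcal{H}_k\) is the reproducing kernel Hilbert space of a kernel \(k\). The reproducing property \(f(x) = \langle f, k(x,\cdot)\rangle_{\mathcal{H}_k}\) makes pointwise evaluation, and hence the finite-sample log-likelihood \(\sum_{i=1}^n f(x_i) - nA(f)\), continuous in \(f\), which is exactly the property the Banach-manifold constructions lack.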

PDF Web [BibTex]

Some thoughts about Gaussian Processes

Chapelle, O.

NIPS Workshop on Open Problems in Gaussian Processes for Machine Learning, December 2005 (talk)

PDF Web [BibTex]

Shortest-path kernels on graphs

Borgwardt, KM., Kriegel, H-P.

In pages: 74-81, IEEE Computer Society, Los Alamitos, CA, USA, Fifth International Conference on Data Mining (ICDM), November 2005 (inproceedings)

Abstract
Data mining algorithms are facing the challenge to deal with an increasing number of complex objects. For graph data, a whole toolbox of data mining algorithms becomes available by defining a kernel function on instances of graphs. Graph kernels based on walks, subtrees and cycles in graphs have been proposed so far. As a general problem, these kernels are either computationally expensive or limited in their expressiveness. We try to overcome this problem by defining expressive graph kernels which are based on paths. As the computation of all paths and longest paths in a graph is NP-hard, we propose graph kernels based on shortest paths. These kernels are computable in polynomial time, retain expressivity and are still positive definite. In experiments on classification of graph models of proteins, our shortest-path kernels show significantly higher classification accuracy than walk-based kernels.
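
A small sketch of the simplest member of this kernel family: unlabeled graphs compared with a delta kernel on shortest-path lengths. Vertex-label kernels and weighting schemes, which the paper also covers, are left out.

import numpy as np
from itertools import product

def floyd_warshall(A):
    # All-pairs shortest paths in O(n^3) -- polynomial, unlike all-paths
    # or longest-paths enumeration, which is NP-hard.
    D = np.where(A > 0, A.astype(float), np.inf)
    np.fill_diagonal(D, 0.0)
    for k in range(len(D)):
        D = np.minimum(D, D[:, [k]] + D[[k], :])
    return D

def shortest_path_kernel(A1, A2):
    # Sum a delta kernel over all vertex pairs of the two graphs,
    # comparing their shortest-path lengths.
    D1, D2 = floyd_warshall(A1), floyd_warshall(A2)
    v1 = D1[np.triu_indices_from(D1, k=1)]
    v2 = D2[np.triu_indices_from(D2, k=1)]
    return sum(1.0 for a, b in product(v1, v2)
               if np.isfinite(a) and np.isfinite(b) and a == b)

A1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
A2 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
print(shortest_path_kernel(A1, A2))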

Web DOI [BibTex]

Kernel methods for dependence testing in LFP-MUA

Gretton, A., Belitski, A., Murayama, Y., Schölkopf, B., Logothetis, N.

35(689.17), 35th Annual Meeting of the Society for Neuroscience (Neuroscience), November 2005 (poster)

Abstract
A fundamental problem in neuroscience is determining whether or not particular neural signals are dependent. The correlation is the most straightforward basis for such tests, but considerable work also focuses on the mutual information (MI), which is capable of revealing dependence of higher orders that the correlation cannot detect. That said, there are other measures of dependence that share with the MI an ability to detect dependence of any order, but which can be easier to compute in practice. We focus in particular on tests based on the functional covariance, which derive from work originally accomplished in 1959 by Rényi. Conceptually, our dependence tests work by computing the covariance between (infinite dimensional) vectors of nonlinear mappings of the observations being tested, and then determining whether this covariance is zero - we call this measure the constrained covariance (COCO). When these vectors are members of universal reproducing kernel Hilbert spaces, we can prove this covariance to be zero only when the variables being tested are independent. The greatest advantage of these tests, compared with the mutual information, is their simplicity – when comparing two signals, we need only take the largest eigenvalue (or the trace) of a product of two matrices of nonlinearities, where these matrices are generally much smaller than the number of observations (and are very simple to construct). We compare the mutual information, the COCO, and the correlation in the context of finding changes in dependence between the LFP and MUA signals in the primary visual cortex of the anaesthetized macaque, during the presentation of dynamic natural stimuli. We demonstrate that the MI and COCO reveal dependence which is not detected by the correlation alone (which we prove by artificially removing all correlation between the signals, and then testing their dependence with COCO and the MI); and that COCO and the MI give results consistent with each other on our data.

Web [BibTex]

Training Support Vector Machines with Multiple Equality Constraints

Kienzle, W., Schölkopf, B.

In Proceedings of the 16th European Conference on Machine Learning, Lecture Notes in Computer Science, Vol. 3720, pages: 182-193, (Editors: JG Carbonell and J Siekmann), Springer, Berlin, Germany, ECML, November 2005 (inproceedings)

Abstract
In this paper we present a primal-dual decomposition algorithm for support vector machine training. As with existing methods that use very small working sets (such as Sequential Minimal Optimization (SMO), Successive Over-Relaxation (SOR) or the Kernel Adatron (KA)), our method scales well, is straightforward to implement, and does not require an external QP solver. Unlike SMO, SOR and KA, the method is applicable to a large number of SVM formulations regardless of the number of equality constraints involved. The effectiveness of our algorithm is demonstrated on a more difficult SVM variant in this respect, namely semi-parametric support vector regression.

PDF DOI [BibTex]

Measuring Statistical Dependence with Hilbert-Schmidt Norms

Gretton, A., Bousquet, O., Smola, A., Schölkopf, B.

In Algorithmic Learning Theory, Lecture Notes in Computer Science, Vol. 3734, pages: 63-78, (Editors: S Jain and H-U Simon and E Tomita), Springer, Berlin, Germany, 16th International Conference ALT, October 2005 (inproceedings)

Abstract
We propose an independence criterion based on the eigenspectrum of covariance operators in reproducing kernel Hilbert spaces (RKHSs), consisting of an empirical estimate of the Hilbert-Schmidt norm of the cross-covariance operator (we term this a Hilbert-Schmidt Independence Criterion, or HSIC). This approach has several advantages, compared with previous kernel-based independence criteria. First, the empirical estimate is simpler than any other kernel dependence test, and requires no user-defined regularisation. Second, there is a clearly defined population quantity which the empirical estimate approaches in the large sample limit, with exponential convergence guaranteed between the two: this ensures that independence tests based on HSIC do not suffer from slow learning rates. Finally, we show in the context of independent component analysis (ICA) that the performance of HSIC is competitive with that of previously published kernel-based criteria, and of other recently published ICA methods.
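
The biased empirical estimate has a well-known one-line form, trace(KHLH)/(n-1)^2 with centred Gram matrices. A short sketch, where the RBF kernel and its bandwidth are illustrative choices:

import numpy as np

def hsic(X, Y, sigma=1.0):
    # Biased empirical HSIC with RBF Gram matrices K, L and centring matrix H.
    n = len(X)
    def gram(Z):
        d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    H = np.eye(n) - np.ones((n, n)) / n
    K, L = gram(X), gram(Y)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

X = np.random.randn(200, 1)
Y = X ** 2 + 0.1 * np.random.randn(200, 1)     # dependent but uncorrelated
print(hsic(X, Y), hsic(X, np.random.randn(200, 1)))

On the example above, X and Y are uncorrelated yet dependent, and HSIC comes out clearly larger for the dependent pair than for the independent one.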

PDF DOI [BibTex]

An Analysis of the Anti-Learning Phenomenon for the Class Symmetric Polyhedron

Kowalczyk, A., Chapelle, O.

In Algorithmic Learning Theory: 16th International Conference, pages: 78-92, Algorithmic Learning Theory, October 2005 (inproceedings)

Abstract
This paper deals with an unusual phenomenon where most machine learning algorithms yield good performance on the training set but systematically worse than random performance on the test set. This has been observed so far for some natural data sets and demonstrated for some synthetic data sets when the classification rule is learned from a small set of training samples drawn from some high dimensional space. The initial analysis presented in this paper shows that anti-learning is a property of data sets and is quite distinct from overfitting of training data. Moreover, the analysis leads to a specification of some machine learning procedures which can overcome anti-learning and generate machines able to classify training and test data consistently.

PDF [BibTex]
