

2008


A Predictive Model for Imitation Learning in Partially Observable Environments

Boularias, A.

In ICMLA 2008, pages: 83-90, (Editors: Wani, M. A., X.-W. Chen, D. Casasent, L. A. Kurgan, T. Hu, K. Hafeez), IEEE, Piscataway, NJ, USA, Seventh International Conference on Machine Learning and Applications, December 2008 (inproceedings)

Abstract
Learning by imitation has been shown to be a powerful paradigm for automated learning in autonomous robots. This paper presents a general framework of learning by imitation for stochastic and partially observable systems. The model is a Predictive Policy Representation (PPR) whose goal is to represent the teacher's policies without any reference to states. The model is fully described in terms of actions and observations only. We show how this model can efficiently learn the personal behavior and preferences of an assistive robot user.

PDF Web DOI [BibTex]

Stereo Matching for Calibrated Cameras without Correspondence

Helmke, U., Hüper, K., Vences, L.

In CDC 2008, pages: 2408-2413, IEEE Service Center, Piscataway, NJ, USA, 47th IEEE Conference on Decision and Control, December 2008 (inproceedings)

Abstract
We study the stereo matching problem for reconstruction of the location of 3D-points on an unknown surface patch from two calibrated identical cameras without using any a priori information about the pointwise correspondences. We assume that camera parameters and the pose between the cameras are known. Our approach follows earlier work for coplanar cameras where a gradient flow algorithm was proposed to match associated Gramians. Here we extend this method by allowing arbitrary poses for the cameras. We introduce an intrinsic Riemannian Newton algorithm that achieves local quadratic convergence rates. A closed form solution is presented, too. The efficiency of both algorithms is demonstrated by numerical experiments.

PDF Web DOI [BibTex]

Joint Kernel Support Estimation for Structured Prediction

Lampert, C., Blaschko, M.

In Proceedings of the NIPS 2008 Workshop on "Structured Input - Structured Output" (NIPS SISO 2008), pages: 1-4, NIPS Workshop on "Structured Input - Structured Output" (NIPS SISO), December 2008 (inproceedings)

Abstract
We present a new technique for structured prediction that works in a hybrid generative/discriminative way, using a one-class support vector machine to model the joint probability of (input, output)-pairs in a joint reproducing kernel Hilbert space. Compared to discriminative techniques, like conditional random fields or structured output SVMs, the proposed method has the advantage that its training time depends only on the number of training examples, not on the size of the label space. Due to its generative aspect, it is also very tolerant against ambiguous, incomplete or incorrect labels. Experiments on realistic data show that our method works efficiently and robustly in situations for which discriminative techniques have computational or statistical problems.

PDF Web [BibTex]

Metropolis Algorithms for Representative Subgraph Sampling

Hübler, C., Kriegel, H., Borgwardt, K., Ghahramani, Z.

In pages: 283-292, (Editors: Giannotti, F.), IEEE, Piscataway, NJ, USA, Eighth IEEE International Conference on Data Mining (ICDM '08), December 2008 (inproceedings)

Abstract
While data mining in chemoinformatics has studied graph data with dozens of nodes, systems biology and the Internet are now generating graph data with thousands and millions of nodes. Hence data mining faces the algorithmic challenge of coping with this significant increase in graph size: classic algorithms for data analysis are often too expensive and too slow on large graphs. While one strategy to overcome this problem is to design novel efficient algorithms, the other is to 'reduce' the size of the large graph by sampling. This is the scope of this paper: we present novel Metropolis algorithms for sampling a 'representative' small subgraph from the original large graph, with 'representative' describing the requirement that the sample shall preserve crucial graph properties of the original graph. In our experiments, we improve over the pioneering work of Leskovec and Faloutsos (KDD 2006) by producing representative subgraph samples that are both smaller and of higher quality than those produced by other methods from the literature.
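
A minimal sketch of the general idea, under simple assumptions: a Metropolis chain over k-node vertex sets that favors samples whose degree distribution is close to that of the full graph. The distance measure, the acceptance exponent p, and the helper names are illustrative and not taken from the paper.

```python
import random
import networkx as nx

def degree_distance(G, H):
    """L1 distance between the normalized degree histograms of G and H."""
    hg, hh = nx.degree_histogram(G), nx.degree_histogram(H)
    n = max(len(hg), len(hh))
    hg += [0] * (n - len(hg)); hh += [0] * (n - len(hh))
    sg, sh = sum(hg) or 1, sum(hh) or 1
    return sum(abs(a / sg - b / sh) for a, b in zip(hg, hh))

def metropolis_subgraph(G, k, iters=5000, p=50.0, seed=0):
    rng = random.Random(seed)
    nodes = list(G.nodes())
    current = set(rng.sample(nodes, k))
    d_cur = degree_distance(G, G.subgraph(current))
    for _ in range(iters):
        out_node = rng.choice(list(current))
        in_node = rng.choice(nodes)
        if in_node in current:
            continue
        proposal = (current - {out_node}) | {in_node}
        d_prop = degree_distance(G, G.subgraph(proposal))
        # Metropolis step: always accept an improvement, otherwise accept a
        # worse sample with probability (d_cur / d_prop) ** p.
        if d_prop <= d_cur or rng.random() < (d_cur / d_prop) ** p:
            current, d_cur = proposal, d_prop
    return G.subgraph(current).copy()
```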

Web DOI [BibTex]

Frequent Subgraph Retrieval in Geometric Graph Databases

Nowozin, S., Tsuda, K.

In ICDM 2008, pages: 953-958, (Editors: Giannotti, F. , D. Gunopulos, F. Turini, C. Zaniolo, N. Ramakrishnan, X. Wu), IEEE Computer Society, Los Alamitos, CA, USA, 8th IEEE International Conference on Data Mining, December 2008 (inproceedings)

Abstract
Discovery of knowledge from geometric graph databases is of particular importance in chemistry and biology, because chemical compounds and proteins are represented as graphs with 3D geometric coordinates. In such applications, scientists are not interested in the statistics of the whole database. Instead they need information about a novel drug candidate or protein at hand, represented as a query graph. We propose a polynomial-delay algorithm for geometric frequent subgraph retrieval. It enumerates all subgraphs of a single given query graph which are frequent geometric epsilon-subgraphs under the entire class of rigid geometric transformations in a database. By using geometric epsilon-subgraphs, we achieve tolerance against variations in geometry. We compare the proposed algorithm to gSpan on chemical compound data, and we show that for a given minimum support the total number of frequent patterns is substantially limited by requiring geometric matching. Although the computation time per pattern is larger than for non-geometric graph mining, the total time is within a reasonable level even for small minimum support.

PDF Web DOI [BibTex]

Block Iterative Algorithms for Non-negative Matrix Approximation

Sra, S.

In ICDM 2008, pages: 1037-1042, (Editors: Giannotti, F. , D. Gunopulos, F. Turini, C. Zaniolo, N. Ramakrishnan, X. Wu), IEEE Service Center, Piscataway, NJ, USA, Eighth IEEE International Conference on Data Mining, December 2008 (inproceedings)

Abstract
In this paper we present new algorithms for non-negative matrix approximation (NMA), commonly known as the NMF problem. Our methods improve upon the well-known methods of Lee & Seung for both the Frobenius norm as well as the Kullback-Leibler divergence versions of the problem. For the latter problem, our results are especially interesting because it seems to have witnessed much less algorithmic progress as compared to the Frobenius norm NMA problem. Our algorithms are based on a particular block-iterative acceleration technique for EM, which preserves the multiplicative nature of the updates and also ensures monotonicity. Furthermore, our algorithms also naturally apply to the Bregman-divergence NMA algorithms of Dhillon and Sra. Experimentally, we show that our algorithms outperform the traditional Lee/Seung approach most of the time.
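
For reference, here is a minimal NumPy sketch of the baseline Lee & Seung multiplicative updates for the Frobenius-norm problem, i.e. the starting point that the block-iterative acceleration builds on; this is not the paper's algorithm, and the small eps only guards against division by zero.

```python
import numpy as np

def nmf_multiplicative(A, rank, iters=200, eps=1e-9, seed=0):
    """Baseline Lee/Seung updates for A ~= W @ H with A, W, H >= 0."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(iters):
        # H <- H * (W^T A) / (W^T W H);  W <- W * (A H^T) / (W H H^T)
        H *= (W.T @ A) / (W.T @ W @ H + eps)
        W *= (A @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Usage: W, H = nmf_multiplicative(np.abs(np.random.randn(60, 40)), rank=5)
```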

Web DOI [BibTex]

A Bayesian Approach to Switching Linear Gaussian State-Space Models for Unsupervised Time-Series Segmentation

Chiappa, S.

In ICMLA 2008, pages: 3-9, (Editors: Wani, M. A., X.-W. Chen, D. Casasent, L. Kurgan, T. Hu, K. Hafeez), IEEE Computer Society, Los Alamitos, CA, USA, 7th International Conference on Machine Learning and Applications, December 2008 (inproceedings)

Abstract
Time-series segmentation in the fully unsupervised scenario, in which the number of segment-types is a priori unknown, is a fundamental problem in many applications. We propose a Bayesian approach to a segmentation model based on the switching linear Gaussian state-space model that enforces a sparse parametrization, such that only a small number of the a priori available dynamics are used to explain the data. This enables us to estimate the number of segment-types within the model, in contrast to previous non-Bayesian approaches where training and comparing several separate models was required. As the resulting model is computationally intractable, we introduce a variational approximation where a reformulation of the problem enables the use of efficient inference algorithms.
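
For orientation, one common parametrization of a switching linear Gaussian state-space model is sketched below; the notation is assumed here, not quoted from the paper.

```latex
% s_t: discrete switch (segment type), h_t: continuous hidden state, v_t: observation
\begin{align}
  h_t &= A(s_t)\, h_{t-1} + \eta_t,      & \eta_t        &\sim \mathcal{N}\big(0, \Sigma_H(s_t)\big), \\
  v_t &= B(s_t)\, h_t + \varepsilon_t,   & \varepsilon_t &\sim \mathcal{N}\big(0, \Sigma_V(s_t)\big),
\end{align}
% with s_t in {1,...,K} following a Markov chain; a sparsity-inducing prior over
% which of the K dynamics are actually used lets the effective number of
% segment-types be inferred within the model.
```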

PDF Web DOI [BibTex]

Infinite Kernel Learning

Gehler, P., Nowozin, S.

In Proceedings of the NIPS 2008 Workshop on "Kernel Learning: Automatic Selection of Optimal Kernels", pages: 1-4, NIPS Workshop on "Kernel Learning: Automatic Selection of Optimal Kernels" (LK ASOK´08), December 2008 (inproceedings)

Abstract
In this paper we build upon the Multiple Kernel Learning (MKL) framework and in particular on [1] which generalized it to infinitely many kernels. We rewrite the problem in the standard MKL formulation which leads to a Semi-Infinite Program. We devise a new algorithm to solve it (Infinite Kernel Learning, IKL). The IKL algorithm is applicable to both the finite and infinite case and we find it to be faster and more stable than SimpleMKL [2]. Furthermore we present the first large scale comparison of SVMs to MKL on a variety of benchmark datasets, also comparing IKL. The results show two things: a) for many datasets there is no benefit in using MKL/IKL instead of the SVM classifier, thus the flexibility of using more than one kernel seems to be of no use, b) on some datasets IKL yields massive increases in accuracy over SVM/MKL due to the possibility of using a largely increased kernel set. For those cases parameter selection through Cross-Validation or MKL is not applicable.
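
As background, the finite multiple kernel learning problem that IKL generalizes can be written in its usual saddle-point form (notation assumed, not quoted from the paper); IKL replaces the finite sum over m by a continuously parametrized kernel family, which turns the problem into a semi-infinite program.

```latex
% Finite MKL with kernels k_1,...,k_M, simplex weights \beta, SVM dual variables \alpha:
\min_{\beta \ge 0,\ \sum_m \beta_m = 1}\
\max_{\substack{0 \le \alpha_i \le C \\ \sum_i \alpha_i y_i = 0}}
\ \sum_i \alpha_i
\;-\; \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j \sum_{m=1}^{M} \beta_m k_m(x_i, x_j)
```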

PDF Web [BibTex]

Prediction-Directed Compression of POMDPs

Boularias, A., Izadi, M., Chaib-Draa, B.

In ICMLA 2008, pages: 99-105, (Editors: Wani, M. A., X.-W. Chen, D. Casasent, L. A. Kurgan, T. Hu, K. Hafeez), IEEE, Piscataway, NJ, USA, Seventh International Conference on Machine Learning and Applications, December 2008 (inproceedings)

Abstract
High dimensionality of belief space in partially observable Markov decision processes (POMDPs) is one of the major causes that severely restricts the applicability of this model. Previous studies have demonstrated that the dimensionality of a POMDP can eventually be reduced by transforming it into an equivalent predictive state representation (PSR). In this paper, we address the problem of finding an approximate and compact PSR model corresponding to a given POMDP model. We formulate this problem in an optimization framework. Our algorithm tries to minimize the potential error that missing some core tests may cause. We also present an empirical evaluation on benchmark problems, illustrating the performance of this approach.

PDF Web DOI [BibTex]

Iterative Subgraph Mining for Principal Component Analysis

Saigo, H., Tsuda, K.

In ICDM 2008, pages: 1007-1012, (Editors: Giannotti, F. , D. Gunopulos, F. Turini, C. Zaniolo, N. Ramakrishnan, X. Wu), IEEE Computer Society, Los Alamitos, CA, USA, IEEE International Conference on Data Mining, December 2008 (inproceedings)

Abstract
Graph mining methods enumerate frequent subgraphs efficiently, but they are not necessarily good features for machine learning due to high correlation among features. Thus it makes sense to perform principal component analysis to reduce the dimensionality and create decorrelated features. We present a novel iterative mining algorithm that captures informative patterns corresponding to major entries of top principal components. It repeatedly calls weighted substructure mining where example weights are updated in each iteration. The Lanczos algorithm, a standard algorithm of eigendecomposition, is employed to update the weights. In experiments, our patterns are shown to approximate the principal components obtained by frequent mining.

PDF Web DOI [BibTex]

Frequent Subgraph Retrieval in Geometric Graph Databases

Nowozin, S., Tsuda, K.

(180), Max-Planck Institute for Biological Cybernetics, Tübingen, Germany, November 2008 (techreport)

Abstract
Discovery of knowledge from geometric graph databases is of particular importance in chemistry and biology, because chemical compounds and proteins are represented as graphs with 3D geometric coordinates. In such applications, scientists are not interested in the statistics of the whole database. Instead they need information about a novel drug candidate or protein at hand, represented as a query graph. We propose a polynomial-delay algorithm for geometric frequent subgraph retrieval. It enumerates all subgraphs of a single given query graph which are frequent geometric epsilon-subgraphs under the entire class of rigid geometric transformations in a database. By using geometric epsilon-subgraphs, we achieve tolerance against variations in geometry. We compare the proposed algorithm to gSpan on chemical compound data, and we show that for a given minimum support the total number of frequent patterns is substantially limited by requiring geometric matching. Although the computation time per pattern is larger than for non-geometric graph mining, the total time is within a reasonable level even for small minimum support.

PDF [BibTex]

Probabilistic Inference for Fast Learning in Control

Rasmussen, CE., Deisenroth, MP.

In EWRL 2008, pages: 229-242, (Editors: Girgin, S. , M. Loth, R. Munos, P. Preux, D. Ryabko), Springer, Berlin, Germany, 8th European Workshop on Reinforcement Learning, November 2008 (inproceedings)

Abstract
We provide a novel framework for very fast model-based reinforcement learning in continuous state and action spaces. The framework requires probabilistic models that explicitly characterize their levels of confidence. Within this framework, we use flexible, non-parametric models to describe the world based on previously collected experience. We demonstrate learning on the cart-pole problem in a setting where we provide very limited prior knowledge about the task. Learning progresses rapidly, and a good policy is found after only a handful of iterations.

PDF Web DOI [BibTex]

Simultaneous Implicit Surface Reconstruction and Meshing

Giesen, J., Maier, M., Schölkopf, B.

(179), Max-Planck Institute for Biological Cybernetics, Tübingen, Germany, November 2008 (techreport)

Abstract
We investigate an implicit method to compute a piecewise linear representation of a surface from a set of sample points. As implicit surface functions we use the weighted sum of piecewise linear kernel functions. For such a function we can partition R^d in such a way that these functions are linear on the subsets of the partition. For each subset in the partition we can then compute the zero level set of the function exactly as the intersection of a hyperplane with the subset.

PDF [BibTex]

Policy Learning: A Unified Perspective with Applications in Robotics

Peters, J., Kober, J., Nguyen-Tuong, D.

In EWRL 2008, pages: 220-228, (Editors: Girgin, S. , M. Loth, R. Munos, P. Preux, D. Ryabko), Springer, Berlin, Germany, 8th European Workshop on Reinforcement Learning, November 2008 (inproceedings)

Abstract
Policy learning approaches are among the best suited methods for high-dimensional, continuous control systems such as anthropomorphic robot arms and humanoid robots. In this paper, we make two contributions: firstly, we present a unified perspective which allows us to derive several policy learning algorithms from a common point of view, i.e., policy gradient algorithms, natural-gradient algorithms and EM-like policy learning. Secondly, we present several applications to both robot motor primitive learning as well as to robot control in task space. Results both from simulation and several different real robots are shown.

PDF Web DOI [BibTex]

Taxonomy Inference Using Kernel Dependence Measures

Blaschko, M., Gretton, A.

(181), Max-Planck Institute for Biological Cybernetics, Tübingen, Germany, November 2008 (techreport)

Abstract
We introduce a family of unsupervised algorithms, numerical taxonomy clustering, to simultaneously cluster data, and to learn a taxonomy that encodes the relationship between the clusters. The algorithms work by maximizing the dependence between the taxonomy and the original data. The resulting taxonomy is a more informative visualization of complex data than simple clustering; in addition, taking into account the relations between different clusters is shown to substantially improve the quality of the clustering, when compared with state-of-the-art algorithms in the literature (both spectral clustering and a previous dependence maximization approach). We demonstrate our algorithm on image and text data.

PDF [BibTex]

Two-Channel Control for Scaled Teleoperation

Son, HI., Lee, DY.

In International Conference on Control, Automation and Systems, pages: 1284-1289, IEEE, Piscataway, NJ, USA, International Conference on Control, Automation and Systems (ICCAS), October 2008 (inproceedings)

Abstract
There is a trade-off between stability and performance in haptic control systems. In this paper, a stability and performance analysis is presented for a scaled teleoperation system in an effort to increase the performance of the system while maintaining stability. Stability is quantitatively defined as a metric using Llewellyn's absolute stability criterion. Position tracking and kinesthetic perception are used as the performance indices. The analysis is carried out using various scaling factors and impedances of the human and the environment. A two-channel position-position (PP) controller and a two-channel force-position (FP) controller are applied for the analysis and simulation.
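
For reference, Llewellyn's absolute stability conditions for a two-port network with hybrid matrix H(jw) = [h_ij] are sketched below in their standard textbook form (not quoted from the paper); the margin by which they hold is what can be turned into a quantitative stability metric.

```latex
\operatorname{Re}\, h_{11}(j\omega) \ge 0, \qquad
\operatorname{Re}\, h_{22}(j\omega) \ge 0, \qquad
2\,\operatorname{Re}\, h_{11}(j\omega)\,\operatorname{Re}\, h_{22}(j\omega)
\;\ge\; \big|h_{12}(j\omega)\, h_{21}(j\omega)\big|
+ \operatorname{Re}\big(h_{12}(j\omega)\, h_{21}(j\omega)\big)
\quad \text{for all } \omega .
```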

Web DOI [BibTex]

Learning to Localize Objects with Structured Output Regression

Blaschko, MB., Lampert, CH.

In ECCV 2008, pages: 2-15, (Editors: Forsyth, D. A., P. H.S. Torr, A. Zisserman), Springer, Berlin, Germany, 10th European Conference on Computer Vision, October 2008, Best Student Paper Award (inproceedings)

Abstract
Sliding window classifiers are among the most successful and widely applied techniques for object localization. However, training is typically done in a way that is not specific to the localization task. First a binary classifier is trained using a sample of positive and negative examples, and this classifier is subsequently applied to multiple regions within test images. We propose instead to treat object localization in a principled way by posing it as a problem of predicting structured data: we model the problem not as binary classification, but as the prediction of the bounding box of objects located in images. The use of a joint-kernel framework allows us to formulate the training procedure as a generalization of an SVM, which can be solved efficiently. We further improve computational efficiency by using a branch-and-bound strategy for localization during both training and testing. Experimental evaluation on the PASCAL VOC and TU Darmstadt datasets shows that the structured training procedure improves performance over binary training as well as the best previously published scores.
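
The training problem described above is an instance of a margin-rescaled structured SVM; a generic form is sketched below with assumed notation (x_i images, y_i ground-truth boxes, Delta a box-overlap loss, phi a joint image/box feature map), prediction being the box maximizing the score, found by branch and bound.

```latex
\min_{w,\ \xi \ge 0}\ \frac{1}{2}\|w\|^2 + C \sum_i \xi_i
\quad \text{s.t.} \quad
\langle w, \phi(x_i, y_i) \rangle - \langle w, \phi(x_i, y) \rangle
\;\ge\; \Delta(y_i, y) - \xi_i
\qquad \forall i,\ \forall y \ne y_i .
```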

PDF Web DOI [BibTex]

Automatic Image Colorization Via Multimodal Predictions

Charpiat, G., Hofmann, M., Schölkopf, B.

In Computer Vision - ECCV 2008, Lecture Notes in Computer Science, Vol. 5304, pages: 126-139, (Editors: DA Forsyth and PHS Torr and A Zisserman), Springer, Berlin, Germany, 10th European Conference on Computer Vision, October 2008 (inproceedings)

Abstract
We aim to automatically color greyscale images, without any manual intervention. The color proposition can then be interactively corrected by user-provided color landmarks if necessary. Automatic colorization is nontrivial since there is usually no one-to-one correspondence between color and local texture. The contribution of our framework is that we deal directly with multimodality and estimate, for each pixel of the image to be colored, the probability distribution of all possible colors, instead of choosing the most probable color at the local level. We also predict the expected variation of color at each pixel, thus defining a nonuniform spatial coherency criterion. We then use graph cuts to maximize the probability of the whole colored image at the global level. We work in the L-a-b color space in order to approximate the human perception of distances between colors, and we use machine learning tools to extract as much information as possible from a dataset of colored examples. The resulting algorithm is fast, designed to be more robust to texture noise, and is above all able to deal with ambiguity, in contrast to previous approaches.

PDF Web DOI [BibTex]

Infinite Kernel Learning

Gehler, P., Nowozin, S.

(178), Max-Planck Institute for Biological Cybernetics, Tübingen, Germany, October 2008 (techreport)

Abstract
In this paper we consider the problem of automatically learning the kernel from general kernel classes. Specifically we build upon the Multiple Kernel Learning (MKL) framework and in particular on the work of (Argyriou, Hauser, Micchelli, & Pontil, 2006). We formulate a Semi-Infinite Program (SIP) to solve the problem and devise a new algorithm to solve it (Infinite Kernel Learning, IKL). The IKL algorithm is applicable to both the finite and infinite case and we find it to be faster and more stable than SimpleMKL (Rakotomamonjy, Bach, Canu, & Grandvalet, 2007) for cases with many kernels. In the second part we present the first large scale comparison of SVMs to MKL on a variety of benchmark datasets, also comparing IKL. The results show two things: a) for many datasets there is no benefit in linearly combining kernels with MKL/IKL instead of the SVM classifier, thus the flexibility of using more than one kernel seems to be of no use; b) on some datasets IKL yields impressive increases in accuracy over SVM/MKL due to the possibility of using a largely increased kernel set. In those cases, IKL remains practical, whereas both cross-validation and standard MKL are infeasible.

PDF Web [BibTex]

Nonparametric Independence Tests: Space Partitioning and Kernel Approaches

Gretton, A., Györfi, L.

In ALT08, pages: 183-198, (Editors: Freund, Y. , L. Györfi, G. Turán, T. Zeugmann), Springer, Berlin, Germany, 19th International Conference on Algorithmic Learning Theory (ALT08), October 2008 (inproceedings)

Abstract
Three simple and explicit procedures for testing the independence of two multi-dimensional random variables are described. Two of the associated test statistics (L1, log-likelihood) are defined when the empirical distribution of the variables is restricted to finite partitions. A third test statistic is defined as a kernel-based independence measure. All tests reject the null hypothesis of independence if the test statistics become large. The large deviation and limit distribution properties of all three test statistics are given. Following from these results, distribution-free strong consistent tests of independence are derived, as are asymptotically alpha-level tests. The performance of the tests is evaluated experimentally on benchmark data.

PDF Web DOI [BibTex]

Automatic 3D Face Reconstruction from Single Images or Video

Breuer, P., Kim, K., Kienzle, W., Schölkopf, B., Blanz, V.

In FG 2008, pages: 1-8, IEEE Computer Society, Los Alamitos, CA, USA, 8th IEEE International Conference on Automatic Face and Gesture Recognition, September 2008 (inproceedings)

Abstract
This paper presents a fully automated algorithm for reconstructing a textured 3D model of a face from a single photograph or a raw video stream. The algorithm is based on a combination of Support Vector Machines (SVMs) and a Morphable Model of 3D faces. After SVM face detection, individual facial features are detected using a novel regression- and classification-based approach, and probabilistically plausible configurations of features are selected to produce a list of candidates for several facial feature positions. In the next step, the configurations of feature points are evaluated using a novel criterion that is based on a Morphable Model and a combination of linear projections. To make the algorithm robust with respect to head orientation, this process is iterated while the estimate of pose is refined. Finally, the feature points initialize a model-fitting procedure of the Morphable Model. The result is a high-resolution 3D surface model.

PDF DOI [BibTex]

Kernel Measures of Conditional Dependence

Fukumizu, K., Gretton, A., Sun, X., Schölkopf, B.

In Advances in neural information processing systems 20, pages: 489-496, (Editors: JC Platt and D Koller and Y Singer and S Roweis), Curran, Red Hook, NY, USA, 21st Annual Conference on Neural Information Processing Systems (NIPS), September 2008 (inproceedings)

Abstract
We propose a new measure of conditional dependence of random variables, based on normalized cross-covariance operators on reproducing kernel Hilbert spaces. Unlike previous kernel dependence measures, the proposed criterion does not depend on the choice of kernel in the limit of infinite data, for a wide class of kernels. At the same time, it has a straightforward empirical estimate with good convergence behaviour. We discuss the theoretical properties of the measure, and demonstrate its application in experiments.

PDF Web [BibTex]

An Analysis of Inference with the Universum

Sinz, F., Chapelle, O., Agarwal, A., Schölkopf, B.

In Advances in neural information processing systems 20, pages: 1369-1376, (Editors: JC Platt and D Koller and Y Singer and S Roweis), Curran, Red Hook, NY, USA, 21st Annual Conference on Neural Information Processing Systems (NIPS), September 2008 (inproceedings)

Abstract
We study a pattern classification algorithm which has recently been proposed by Vapnik and coworkers. It builds on a new inductive principle which assumes that in addition to positive and negative data, a third class of data is available, termed the Universum. We assay the behavior of the algorithm by establishing links with Fisher discriminant analysis and oriented PCA, as well as with an SVM in a projected subspace (or, equivalently, with a data-dependent reduced kernel). We also provide experimental results.

PDF Web [BibTex]

Learning with Transformation Invariant Kernels

Walder, C., Chapelle, O.

In Advances in neural information processing systems 20, pages: 1561-1568, (Editors: Platt, J. C., D. Koller, Y. Singer, S. Roweis), Curran, Red Hook, NY, USA, Twenty-First Annual Conference on Neural Information Processing Systems (NIPS), September 2008 (inproceedings)

Abstract
This paper considers kernels invariant to translation, rotation and dilation. We show that no non-trivial positive definite (p.d.) kernels exist which are radial and dilation invariant, only conditionally positive definite (c.p.d.) ones. Accordingly, we discuss the c.p.d. case and provide some novel analysis, including an elementary derivation of a c.p.d. representer theorem. On the practical side, we give a support vector machine (s.v.m.) algorithm for arbitrary c.p.d. kernels. For the thin-plate kernel this leads to a classifier with only one parameter (the amount of regularisation), which we demonstrate to be as effective as an s.v.m. with the Gaussian kernel, even though the Gaussian involves a second parameter (the length scale).

PDF Web [BibTex]

Episodic Reinforcement Learning by Logistic Reward-Weighted Regression

Wierstra, D., Schaul, T., Peters, J., Schmidhuber, J.

In ICANN 2008, pages: 407-416, (Editors: Kurkova-Pohlova, V. , R. Neruda, J. Koutnik), Springer, Berlin, Germany, 18th International Conference on Artificial Neural Networks, September 2008 (inproceedings)

Abstract
It has been a long-standing goal in the adaptive control community to reduce the generically difficult, general reinforcement learning (RL) problem to simpler problems solvable by supervised learning. While this approach is today’s standard for value function-based methods, fewer approaches are known that apply similar reductions to policy search methods. Recently, it has been shown that immediate RL problems can be solved by reward-weighted regression, and that the resulting algorithm is an expectation maximization (EM) algorithm with strong guarantees. In this paper, we extend this algorithm to the episodic case and show that it can be used in the context of LSTM recurrent neural networks (RNNs). The resulting RNN training algorithm is equivalent to a weighted self-modeling supervised learning technique. We focus on partially observable Markov decision problems (POMDPs) where it is essential that the policy is nonstationary in order to be optimal. We show that this new reward-weighted logistic regression used in conjunction with an RNN architecture can solve standard benchmark POMDPs with ease.

PDF Web DOI [BibTex]

Exact Dynamic Programming for Decentralized POMDPs with Lossless Policy Compression

Boularias, A., Chaib-Draa, B.

In ICAPS 2008, pages: 20-27, (Editors: Rintanen, J. , B. Nebel, J. C. Beck, E. A. Hansen), AAAI Press, Menlo Park, CA, USA, Eighteenth International Conference on Automated Planning and Scheduling, September 2008 (inproceedings)

Abstract
High dimensionality of the belief space in DEC-POMDPs is one of the major causes that makes the optimal joint policy computation intractable. The belief state for a given agent is a probability distribution over the system states and the policies of other agents. Belief compression is an efficient POMDP approach that speeds up planning algorithms by projecting the belief state space to a low-dimensional one. In this paper, we introduce a new method for solving DEC-POMDP problems, based on the compression of the policy belief space. The reduced policy space contains sequences of actions and observations that are linearly independent. We tested our approach on two benchmark problems, and the preliminary results confirm that the Dynamic Programming algorithm scales up better when the policy belief is compressed.

PDF Web [BibTex]

Local Gaussian Processes Regression for Real-time Model-based Robot Control

Nguyen-Tuong, D., Peters, J.

In IROS 2008, pages: 380-385, IEEE Service Center, Piscataway, NJ, USA, 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, September 2008 (inproceedings)

Abstract
High performance and compliant robot control require accurate dynamics models which cannot be obtained analytically for sufficiently complex robot systems. In such cases, machine learning offers a promising alternative for approximating the robot dynamics using measured data. This approach offers a natural framework to incorporate unknown nonlinearities as well as to continually adapt online to changes in the robot dynamics. However, the most accurate regression methods, e.g. Gaussian processes regression (GPR) and support vector regression (SVR), suffer from exceptionally high computational complexity, which prevents their usage for large numbers of samples or online learning to date. Inspired by locally linear regression techniques, we propose an approximation to standard GPR using local Gaussian processes models. Due to the reduced computational cost, local Gaussian processes (LGP) can be applied for larger sample sizes and online learning. Comparisons with other nonparametric regression methods, e.g. standard GPR, nu-SVR and locally weighted projection regression (LWPR), show that LGP has higher accuracy than LWPR, close to the performance of standard GPR and nu-SVR, while being sufficiently fast for online learning.
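
A minimal sketch of the local-model idea, under simplifying assumptions (fixed hyperparameters, local models formed by nearest-center partitioning, predictions blended by proximity to the centers); all names are illustrative and this is not the authors' exact LGP algorithm.

```python
import numpy as np

def rbf(X, Z, ls=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

class LocalGP:
    def __init__(self, centers, ls=1.0, noise=1e-2):
        self.centers, self.ls, self.noise = centers, ls, noise
        self.models = []

    def fit(self, X, y):
        # Assign each training point to its nearest center and fit one GP per center.
        idx = np.argmin(((X[:, None, :] - self.centers[None]) ** 2).sum(-1), axis=1)
        self.models = []
        for m, c in enumerate(self.centers):
            Xm, ym = X[idx == m], y[idx == m]
            if len(Xm) == 0:
                continue
            K = rbf(Xm, Xm, self.ls) + self.noise * np.eye(len(Xm))
            self.models.append((c, Xm, np.linalg.solve(K, ym)))

    def predict(self, x):
        # Blend the local GP means, weighted by the proximity of x to each center.
        num, den = 0.0, 1e-12
        for c, Xm, alpha in self.models:
            w = np.exp(-0.5 * ((c - x) ** 2).sum() / self.ls ** 2)
            num += w * (rbf(x[None], Xm, self.ls) @ alpha).item()
            den += w
        return num / den
```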

PDF Web DOI [BibTex]

Colored Maximum Variance Unfolding

Song, L., Smola, A., Borgwardt, K., Gretton, A.

In Advances in neural information processing systems 20, pages: 1385-1392, (Editors: Platt, J. C., D. Koller, Y. Singer, S. Roweis), Curran, Red Hook, NY, USA, Twenty-First Annual Conference on Neural Information Processing Systems (NIPS), September 2008 (inproceedings)

Abstract
Maximum variance unfolding (MVU) is an effective heuristic for dimensionality reduction. It produces a low-dimensional representation of the data by maximizing the variance of their embeddings while preserving the local distances of the original data. We show that MVU also optimizes a statistical dependence measure which aims to retain the identity of individual observations under the distancepreserving constraints. This general view allows us to design "colored" variants of MVU, which produce low-dimensional representations for a given task, e.g. subject to class labels or other side information.

PDF Web [BibTex]

Assessing Nonlinear Granger Causality from Multivariate Time Series

Sun, X.

In Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2008, pages: 440-455, (Editors: Daelemans, W. , B. Goethals, K. Morik), Springer, Berlin, Germany, European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD), September 2008 (inproceedings)

Abstract
A straightforward nonlinear extension of Granger’s concept of causality in the kernel framework is suggested. The kernel-based approach to assessing nonlinear Granger causality in multivariate time series enables us to determine, in a model-free way, whether the causal relation between two time series is present or not and whether it is direct or mediated by other processes. The trace norm of the so-called covariance operator in feature space is used to measure the prediction error. Relying on this measure, we test the improvement of predictability between time series by subsampling-based multiple testing. The distributional properties of the resulting p-values reveal the direction of Granger causality. Experiments with simulated and real-world data show that our method provides encouraging results.

PDF PDF DOI [BibTex]

Large Scale Variational Inference and Experimental Design for Sparse Generalized Linear Models

Seeger, M., Nickisch, H.

(175), Max-Planck Institute for Biological Cybernetics, Tübingen, Germany, September 2008 (techreport)

PDF [BibTex]

Planning in Decentralized POMDPs with Predictive Policy Representations

Boularias, A., Chaib-Draa, B.

In MASPLAN 2008, pages: 1-7, ICAPS Multiagent Planning Workshop, September 2008 (inproceedings)

Abstract
We discuss the problem of policy representation in stochastic and partially observable systems, and address the case where the policy is a hidden parameter of the planning problem. We propose an adaptation of Predictive State Representations (PSRs) to this problem by introducing tests (sequences of actions and observations) on policies. The new model, called Predictive Policy Representations (PPRs), is potentially more compact than other representations, such as decision trees or Finite-State Controllers (FSCs). In this paper, we show how PPRs can be used to improve the performance of a point-based algorithm for DEC-POMDPs.

PDF [BibTex]

Near-Maximum Entropy Models for Binary Neural Representations of Natural Images

Bethge, M., Berens, P.

In Advances in neural information processing systems 20, pages: 97-104, (Editors: Platt, J. C., D. Koller, Y. Singer, S. Roweis), Curran, Red Hook, NY, USA, Twenty-First Annual Conference on Neural Information Processing Systems (NIPS), September 2008 (inproceedings)

Abstract
Maximum entropy analysis of binary variables provides an elegant way for studying the role of pairwise correlations in neural populations. Unfortunately, these approaches suffer from their poor scalability to high dimensions. In sensory coding, however, high-dimensional data is ubiquitous. Here, we introduce a new approach using a near-maximum entropy model that makes this type of analysis feasible for very high-dimensional data - the model parameters can be derived in closed form and sampling is easy. We demonstrate its usefulness by studying a simple neural representation model of natural images. For the first time, we are able to directly compare predictions from a pairwise maximum entropy model not only in small groups of neurons, but also in larger populations of more than a thousand units. Our results indicate that in such larger networks interactions exist that are not predicted by pairwise correlations, despite the fact that pairwise correlations explain the lower-dimensional marginal statistics extremely well up to the limit of dimensionality where estimation of the full joint distribution is feasible.

PDF Web [BibTex]

Learning Perceptual Coupling for Motor Primitives

Kober, J., Mohler, B., Peters, J.

In IROS 2008, pages: 834-839, IEEE Service Center, Piscataway, NJ, USA, 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, September 2008 (inproceedings)

Abstract
Dynamic system-based motor primitives [1] have enabled robots to learn complex tasks ranging from tennis swings to locomotion. However, to date there have been only few extensions which have incorporated perceptual coupling to variables of external focus, and, furthermore, these modifications have relied upon handcrafted solutions. Humans learn how to couple their movement primitives with external variables. Clearly, such a solution is needed in robotics. In this paper, we propose an augmented version of the dynamic systems motor primitives which incorporates perceptual coupling to an external variable. The resulting perceptually driven motor primitives include the previous primitives as a special case and can inherit some of their interesting properties. We show that these motor primitives can perform complex tasks such as the Ball-in-a-Cup or Kendama task even with large variances in the initial conditions, where a skilled human player would be challenged. To do so, we initialize the motor primitives in the traditional way by imitation learning without perceptual coupling. Subsequently, we improve the motor primitives using a novel reinforcement learning method which is particularly well-suited for motor primitives.
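
As background, one standard formulation of the dynamic-system motor primitives being augmented is sketched below (Ijspeert-style notation, assumed here): y is the controlled variable, g the goal, x a phase variable, and f a learnable forcing term; perceptual coupling adds terms to f that depend on an external variable, with the coupling learned rather than handcrafted.

```latex
\begin{align}
  \tau \dot{z} &= \alpha_z \big( \beta_z (g - y) - z \big) + f(x), \qquad
  \tau \dot{y} = z, \qquad
  \tau \dot{x} = -\alpha_x x, \\
  f(x) &= \frac{\sum_i \psi_i(x)\, w_i}{\sum_i \psi_i(x)}\; x \,(g - y_0).
\end{align}
```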

PDF Web DOI [BibTex]

Receptive Fields without Spike-Triggering

Macke, J., Zeck, G., Bethge, M.

In Advances in neural information processing systems 20, pages: 969-976, (Editors: Platt, J. C., D. Koller, Y. Singer, S. Roweis), Curran, Red Hook, NY, USA, Twenty-First Annual Conference on Neural Information Processing Systems (NIPS), September 2008 (inproceedings)

Abstract
Stimulus selectivity of sensory neurons is often characterized by estimating their receptive field properties such as orientation selectivity. Receptive fields are usually derived from the mean (or covariance) of the spike-triggered stimulus ensemble. This approach treats each spike as an independent message but does not take into account that information might be conveyed through patterns of neural activity that are distributed across space or time. Can we find a concise description for the processing of a whole population of neurons analogous to the receptive field for single neurons? Here, we present a generalization of the linear receptive field which is not bound to be triggered on individual spikes but can be meaningfully linked to distributed response patterns. More precisely, we seek to identify those stimulus features and the corresponding patterns of neural activity that are most reliably coupled. We use an extension of reverse-correlation methods based on canonical correlation analysis. The resulting population receptive fields span the subspace of stimuli that is most informative about the population response. We evaluate our approach using both neuronal models and multi-electrode recordings from rabbit retinal ganglion cells. We show how the model can be extended to capture nonlinear stimulus-response relationships using kernel canonical correlation analysis, which makes it possible to test different coding mechanisms. Our technique can also be used to calculate receptive fields from multi-dimensional neural measurements such as those obtained from dynamic imaging methods.

PDF Web [BibTex]

An Automated Combination of Kernels for Predicting Protein Subcellular Localization

Ong, C., Zien, A.

In WABI 2008, pages: 186-197, (Editors: Crandall, K. A., J. Lagergren), Springer, Berlin, Germany, 8th Workshop on Algorithms in Bioinformatics, September 2008 (inproceedings)

Abstract
Protein subcellular localization is a crucial ingredient to many important inferences about cellular processes, including prediction of protein function and protein interactions. While many predictive computational tools have been proposed, they tend to have complicated architectures and require many design decisions from the developer. Here we utilize the multiclass support vector machine (m-SVM) method to directly solve protein subcellular localization without resorting to the common approach of splitting the problem into several binary classification problems. We further propose a general class of protein sequence kernels which considers all motifs, including motifs with gaps. Instead of heuristically selecting one or a few kernels from this family, we utilize a recent extension of SVMs that optimizes over multiple kernels simultaneously. This way, we automatically search over families of possible amino acid motifs. We compare our automated approach to three other predictors on four different datasets, and show that we perform better than the current state of the art. Further, our method provides some insights as to which sequence motifs are most useful for determining subcellular localization, which are in agreement with biological reasoning.

PDF PDF Web DOI [BibTex]

Discriminative K-means for Clustering

Ye, J., Zhao, Z., Wu, M.

In Advances in neural information processing systems 20, pages: 1649-1656, (Editors: Platt, J. C., D. Koller, Y. Singer, S. Roweis), Curran, Red Hook, NY, USA, Twenty-First Annual Conference on Neural Information Processing Systems (NIPS), September 2008 (inproceedings)

Abstract
We present a theoretical study on the discriminative clustering framework, recently proposed for simultaneous subspace selection via linear discriminant analysis (LDA) and clustering. Empirical results have shown its favorable performance in comparison with several other popular clustering algorithms. However, the inherent relationship between subspace selection and clustering in this framework is not well understood, due to the iterative nature of the algorithm. We show in this paper that this iterative subspace selection and clustering is equivalent to kernel K-means with a specific kernel Gram matrix. This provides significant and new insights into the nature of this subspace selection procedure. Based on this equivalence relationship, we propose the Discriminative K-means (DisKmeans) algorithm for simultaneous LDA subspace selection and clustering, as well as an automatic parameter estimation procedure. We also present the nonlinear extension of DisKmeans using kernels. We show that the learning of the kernel matrix over a convex set of pre-specified kernel matrices can be incorporated into the clustering formulation. The connection between DisKmeans and several other clustering algorithms is also analyzed. The presented theories and algorithms are evaluated through experiments on a collection of benchmark data sets.
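
The kernel K-means objective referred to by the equivalence result is the standard one sketched below (notation assumed): phi is the feature map of the specific Gram matrix identified in the paper, and C_1, ..., C_K is the partition being optimized.

```latex
\min_{C_1, \dots, C_K}\ \sum_{k=1}^{K} \sum_{i \in C_k}
\Big\| \phi(x_i) - \frac{1}{|C_k|} \sum_{j \in C_k} \phi(x_j) \Big\|^2 .
```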

PDF Web [BibTex]

Block-Iterative Algorithms for Non-Negative Matrix Approximation

Sra, S.

(176), Max-Planck Institute for Biological Cybernetics, Tübingen, Germany, September 2008 (techreport)

Abstract
In this report we present new algorithms for non-negative matrix approximation (NMA), commonly known as the NMF problem. Our methods improve upon the well-known methods of Lee & Seung [19] for both the Frobenius norm as well as the Kullback-Leibler divergence versions of the problem. For the latter problem, our results are especially interesting because it seems to have witnessed much less algorithmic progress as compared to the Frobenius norm NMA problem. Our algorithms are based on a particular block-iterative acceleration technique for EM, which preserves the multiplicative nature of the updates and also ensures monotonicity. Furthermore, our algorithms also naturally apply to the Bregman-divergence NMA algorithms of Dhillon and Sra [8]. Experimentally, we show that our algorithms outperform the traditional Lee/Seung approach most of the time.

PDF [BibTex]

Bayesian Inference for Spiking Neuron Models with a Sparsity Prior

Gerwinn, S., Macke, J., Seeger, M., Bethge, M.

In Advances in neural information processing systems 20, pages: 529-536, (Editors: Platt, J. C., D. Koller, Y. Singer, S. Roweis), Curran, Red Hook, NY, USA, Twenty-First Annual Conference on Neural Information Processing Systems (NIPS), September 2008 (inproceedings)

Abstract
Generalized linear models are the most commonly used tools to describe the stimulus selectivity of sensory neurons. Here we present a Bayesian treatment of such models. Using the expectation propagation algorithm, we are able to approximate the full posterior distribution over all weights. In addition, we use a Laplacian prior to favor sparse solutions. Therefore, stimulus features that do not critically influence neural activity will be assigned zero weights and thus be effectively excluded by the model. This feature selection mechanism facilitates both the interpretation of the neuron model as well as its predictive abilities. The posterior distribution can be used to obtain confidence intervals which makes it possible to assess the statistical significance of the solution. In neural data analysis, the available amount of experimental measurements is often limited whereas the parameter space is large. In such a situation, both regularization by a sparsity prior and uncertainty estimates for the model parameters are essential. We apply our method to multi-electrode recordings of retinal ganglion cells and use our uncertainty estimate to test the statistical significance of functional couplings between neurons. Furthermore we used the sparsity of the Laplace prior to select those filters from a spike-triggered covariance analysis that are most informative about the neural response.

PDF Web [BibTex]

Policy Gradients with Parameter-based Exploration for Control

Sehnke, F., Osendorfer, C., Rückstiess, T., Graves, A., Peters, J., Schmidhuber, J.

In ICANN 2008, pages: 387-396, (Editors: Kurkova-Pohlova, V. , R. Neruda, J. Koutnik), Springer, Berlin, Germany, 18th International Conference on Artificial Neural Networks, September 2008 (inproceedings)

Abstract
We present a model-free reinforcement learning method for partially observable Markov decision problems. Our method estimates a likelihood gradient by sampling directly in parameter space, which leads to lower variance gradient estimates than those obtained by policy gradient methods such as REINFORCE. For several complex control tasks, including robust standing with a humanoid robot, we show that our method outperforms well-known algorithms from the fields of policy gradients, finite difference methods and population based heuristics. We also provide a detailed analysis of the differences between our method and the other algorithms.
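
A minimal sketch of parameter-based exploration under simple assumptions: an independent Gaussian per policy parameter, likelihood-ratio gradients of that search distribution, and a running-mean baseline. It illustrates the idea only and is not the paper's exact update rules; evaluate_return is a hypothetical episode-return function supplied by the user.

```python
import numpy as np

def pgpe(evaluate_return, dim, iters=200, pop=20, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim)
    baseline = 0.0
    for _ in range(iters):
        thetas = mu + sigma * rng.standard_normal((pop, dim))   # sample in parameter space
        returns = np.array([evaluate_return(t) for t in thetas])
        adv = returns - baseline
        baseline = 0.9 * baseline + 0.1 * returns.mean()
        # Likelihood-ratio gradients of the Gaussian search distribution.
        grad_mu = ((thetas - mu) / sigma ** 2 * adv[:, None]).mean(axis=0)
        grad_sigma = (((thetas - mu) ** 2 - sigma ** 2) / sigma ** 3 * adv[:, None]).mean(axis=0)
        mu += lr * grad_mu
        sigma = np.maximum(1e-3, sigma + lr * grad_sigma)
    return mu

# Usage (toy objective): best = pgpe(lambda th: -np.sum(th ** 2), dim=5)
```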

PDF Web DOI [BibTex]

Distribution-free Learning of Bayesian Network Structure

Sun, X.

In ECML PKDD 2008, pages: 423-439, (Editors: Daelemans, W. , B. Goethals, K. Morik), Springer, Berlin, Germany, European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, September 2008 (inproceedings)

Abstract
We present an independence-based method for learning Bayesian network (BN) structure without making any assumptions on the probability distribution of the domain. This is mainly useful for continuous domains. Even mixed continuous-categorical domains and structures containing vectorial variables can be handled. We address the problem by developing a non-parametric conditional independence test based on the so-called kernel dependence measure, which can be readily used by any existing independence-based BN structure learning algorithm. We demonstrate the structure learning of graphical models in continuous and mixed domains from real-world data without distributional assumptions. We also experimentally show that our test is a good alternative, in particular in case of small sample sizes, compared to existing tests, which can only be used in purely categorical or continuous domains.

PDF PDF DOI [BibTex]

Consistent Minimization of Clustering Objective Functions

von Luxburg, U., Bubeck, S., Jegelka, S., Kaufmann, M.

In Advances in neural information processing systems 20, pages: 961-968, (Editors: Platt, J. C., D. Koller, Y. Singer, S. Roweis), Curran, Red Hook, NY, USA, Twenty-First Annual Conference on Neural Information Processing Systems (NIPS), September 2008 (inproceedings)

Abstract
Clustering is often formulated as a discrete optimization problem. The objective is to find, among all partitions of the data set, the best one according to some quality measure. However, in the statistical setting where we assume that the finite data set has been sampled from some underlying space, the goal is not to find the best partition of the given sample, but to approximate the true partition of the underlying space. We argue that the discrete optimization approach usually does not achieve this goal. As an alternative, we suggest the paradigm of "nearest neighbor clustering". Instead of selecting the best out of all partitions of the sample, it only considers partitions in some restricted function class. Using tools from statistical learning theory we prove that nearest neighbor clustering is statistically consistent. Moreover, its worst case complexity is polynomial by construction, and it can be implemented with small average case complexity using branch and bound.

PDF Web [BibTex]

Enhancement of Kinesthetic Perception for Microsurgical Teleoperation

Son, HI., Lee, DY.

In KSME Conference on Bioengineering, pages: 259-260, KSME Bioengineering Conference, September 2008 (inproceedings)

PDF [BibTex]

A Kernel Statistical Test of Independence

Gretton, A., Fukumizu, K., Teo, C., Song, L., Schölkopf, B., Smola, A.

In Advances in neural information processing systems 20, pages: 585-592, (Editors: JC Platt and D Koller and Y Singer and S Roweis), Curran, Red Hook, NY, USA, 21st Annual Conference on Neural Information Processing Systems (NIPS), September 2008 (inproceedings)

Abstract
Whereas kernel measures of independence have been widely applied in machine learning (notably in kernel ICA), there is as yet no method to determine whether they have detected statistically significant dependence. We provide a novel test of the independence hypothesis for one particular kernel independence measure, the Hilbert-Schmidt independence criterion (HSIC). The resulting test costs O(m^2), where m is the sample size. We demonstrate that this test outperforms established contingency table-based tests. Finally, we show the HSIC test also applies to text (and to structured data more generally), for which no other independence test presently exists.
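
A minimal sketch of the HSIC statistic with a permutation null as a stand-in for the asymptotic test derived in the paper; the Gaussian kernels and median-heuristic bandwidths are assumptions of this sketch, not the paper's prescription.

```python
import numpy as np

def _gram(X, bandwidth=None):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    if bandwidth is None:
        bandwidth = np.sqrt(0.5 * np.median(d2[d2 > 0]))   # median heuristic
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def hsic_test(X, Y, n_perm=500, seed=0):
    rng = np.random.default_rng(seed)
    m = len(X)
    K, L = _gram(X), _gram(Y)
    H = np.eye(m) - np.ones((m, m)) / m
    Kc = H @ K @ H
    stat = np.trace(Kc @ L) / m ** 2                        # biased HSIC estimate
    null = np.array([np.trace(Kc @ L[np.ix_(p, p)]) / m ** 2
                     for p in (rng.permutation(m) for _ in range(n_perm))])
    p_value = (1 + (null >= stat).sum()) / (1 + n_perm)
    return stat, p_value

# Usage: stat, p = hsic_test(np.random.randn(100, 2), np.random.randn(100, 2))
```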

PDF Web [BibTex]

Fitness Expectation Maximization

Wierstra, D., Schaul, T., Peters, J., Schmidhuber, J.

In PPSN 2008, pages: 337-346, (Editors: Rudolph, G. , T. Jansen, S. Lucas, C. Poloni, N. Beume), Springer, Berlin, Germany, 10th International Conference on Parallel Problem Solving From Nature, September 2008 (inproceedings)

Abstract
We present Fitness Expectation Maximization (FEM), a novel method for performing ‘black box’ function optimization. FEM searches the fitness landscape of an objective function using an instantiation of the well-known Expectation Maximization algorithm, producing search points to match the sample distribution weighted according to higher expected fitness. FEM updates both candidate solution parameters and the search policy, which is represented as a multinormal distribution. Inheriting EM’s stability and strong guarantees, the method is both elegant and competitive with some of the best heuristic search methods in the field, and performs well on a number of unimodal and multimodal benchmark tasks. To illustrate the potential practical applications of the approach, we also show experiments on finding the parameters for a controller of the challenging non-Markovian double pole balancing task.

PDF Web DOI [BibTex]

Approximation Algorithms for Bregman Clustering Co-clustering and Tensor Clustering

Sra, S., Jegelka, S., Banerjee, A.

(177), Max-Planck Institute for Biological Cybernetics, Tübingen, Germany, September 2008 (techreport)

Abstract
The Euclidean K-means problem is fundamental to clustering and over the years it has been intensely investigated. More recently, generalizations such as Bregman k-means [8], co-clustering [10], and tensor (multi-way) clustering [40] have also gained prominence. A well-known computational difficulty encountered by these clustering problems is the NP-Hardness of the associated optimization task, and commonly used methods guarantee at most local optimality. Consequently, approximation algorithms of varying degrees of sophistication have been developed, though largely for the basic Euclidean K-means (or ℓ1-norm K-median) problem. In this paper we present approximation algorithms for several Bregman clustering problems by building upon the recent paper of Arthur and Vassilvitskii [5]. Our algorithms obtain objective values within a factor O(log K) for Bregman k-means, Bregman co-clustering, Bregman tensor clustering, and weighted kernel k-means. To our knowledge, except for some special cases, approximation algorithms have not been considered for these general clustering problems. There are several important implications of our work: (i) under the same assumptions as Ackermann et al. [1] it yields a much faster algorithm (non-exponential in K, unlike [1]) for information-theoretic clustering, (ii) it answers several open problems posed by [4], including generalizations to Bregman co-clustering and tensor clustering, (iii) it provides practical and easy to implement methods, in contrast to several other common approximation approaches.
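
A minimal sketch of the k-means++-style D^2 seeding of Arthur and Vassilvitskii with a pluggable Bregman divergence, which is the ingredient the report builds on; the divergence implementations and names are illustrative assumptions, and squared Euclidean distance recovers ordinary k-means++ seeding.

```python
import numpy as np

def squared_euclidean(x, C):
    return ((C - x) ** 2).sum(axis=1)

def kl_divergence(x, C, eps=1e-12):
    # Bregman divergence of negative entropy: KL(x || c), for rows summing to one.
    return ((x + eps) * (np.log(x + eps) - np.log(C + eps))).sum(axis=1)

def bregman_plusplus_seeding(X, k, divergence=squared_euclidean, seed=0):
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        C = np.array(centers)
        # Each point's divergence to its closest chosen center.
        d = np.array([divergence(x, C).min() for x in X])
        centers.append(X[rng.choice(len(X), p=d / d.sum())])
    return np.array(centers)

# Usage: seeds = bregman_plusplus_seeding(np.random.rand(200, 3), k=5)
```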

PDF [BibTex]

Partial Least Squares Regression for Graph Mining

Saigo, H., Krämer, N., Tsuda, K.

In KDD2008, pages: 578-586, (Editors: Li, Y. , B. Liu, S. Sarawagi), ACM Press, New York, NY, USA, 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, August 2008 (inproceedings)

Abstract
Attributed graphs are increasingly common in many application domains such as chemistry, biology and text processing. A central issue in graph mining is how to collect informative subgraph patterns for a given learning task. We propose an iterative mining method based on partial least squares regression (PLS). To apply PLS to graph data, a sparse version of PLS is developed first and then combined with a weighted pattern mining algorithm. The mining algorithm is iteratively called with different weight vectors, creating one latent component per mining call. Our method, graph PLS, is efficient and easy to implement, because the weight vector is updated with elementary matrix calculations. In experiments, our graph PLS algorithm showed competitive prediction accuracies on many chemical datasets and its efficiency was significantly superior to graph boosting (gboost) and the naive method based on frequent graph mining.

PDF Web DOI [BibTex]

Enhancement of Kinesthetic Perception for Microsurgical Teleoperation using Impedance-Shaping

Son, HI., Lee, DY.

In International Conference of the IEEE Engineering in Medicine and Biology Society, pages: 1939-1942, IEEE, Piscataway, NJ, USA, 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS), August 2008 (inproceedings)

Abstract
A new control scheme is developed in this paper for a bilateral teleoperation system for microsurgical applications. The main objective of the proposed control scheme is to enhance the kinesthetic perception of the operator. First, kinesthetic perception, based on psychophysics, is classified into three metrics: detection, sensitivity of detection, and discrimination. Additionally, a new performance index is introduced as a combination of these three metrics to quantify the kinesthetic performance. Second, a modified macro-micro bilateral control system using an impedance-shaping method is proposed. The proposed controller can increase kinesthetic perception by shaping and magnifying the impedance transmitted to the operator. Finally, the performance of the proposed controller is verified in a comparison with the two-channel position-position (PP) controller, the two-channel force-position (FP) controller, and the four-channel transparency-optimized controller.

Web DOI [BibTex]

Semi-Supervised Laplacian Regularization of Kernel Canonical Correlation Analysis

Blaschko, M., Lampert, C., Gretton, A.

In ECML PKDD 2008, pages: 133-145, (Editors: Daelemans, W. , B. Goethals, K. Morik), Springer, Berlin, Germany, 19th European Conference on Machine Learning, August 2008 (inproceedings)

Abstract
Kernel canonical correlation analysis (KCCA) is a dimensionality reduction technique for paired data. By finding directions that maximize correlation, KCCA learns representations that are more closely tied to the underlying semantics of the data rather than noise. However, meaningful directions are not only those that have high correlation to another modality, but also those that capture the manifold structure of the data. We propose a method that is simultaneously able to find highly correlated directions that are also located on high variance directions along the data manifold. This is achieved by the use of semi-supervised Laplacian regularization of KCCA. We show experimentally that Laplacian regularized training improves class separation over KCCA with only Tikhonov regularization, while causing no degradation in the correlation between modalities. We propose a model selection criterion based on the Hilbert-Schmidt norm of the semi-supervised Laplacian regularized cross-covariance operator, which we compute in closed form.

PDF PDF DOI [BibTex]

RKHS Representation of Measures Applied to Homogeneity, Independence, and Fourier Optics

Schölkopf, B., Sriperumbudur, B., Gretton, A., Fukumizu, K.

In OWR 2008, pages: 42-44, (Editors: K Jetter and S Smale and D-X Zhou), Mathematisches Forschungsinstitut, Oberwolfach-Walke, Germany, 30. Oberwolfach Report, August 2008 (inproceedings)

PDF PDF [BibTex]

Combining Appearance and Motion for Human Action Classification in Videos

Dhillon, P., Nowozin, S., Lampert, C.

(174), Max-Planck-Institute for Biological Cybernetics, Tübingen, Germany, August 2008 (techreport)

Abstract
We study the question of activity classification in videos and present a novel approach for recognizing human action categories in videos by combining information from the appearance and motion of human body parts. Our approach uses a tracking step which involves Particle Filtering and a local non-parametric clustering step. The motion information is provided by the trajectory of the cluster modes of a local set of particles. The statistical information about the particles of that cluster over a number of frames provides the appearance information. Later we use a “Bag of Words” model to build one histogram per video sequence from the set of these robust appearance and motion descriptors. These histograms provide us characteristic information which helps us to discriminate among various human actions and thus classify them correctly. We tested our approach on the standard KTH and Weizmann human action datasets and the results were comparable to the state of the art. Additionally our approach is able to distinguish activities that involve the motion of the complete body from those in which only certain body parts move. In other words, our method discriminates well between activities with “gross motion” like running, jogging etc. and “local motion” like waving, boxing etc.

PDF [BibTex]
